├── img
│   ├── p04
│   │   ├── vm_hd.jpeg
│   │   ├── vm_id.jpeg
│   │   ├── vm_os.jpeg
│   │   ├── vm_boot.jpeg
│   │   ├── vm_cpu.jpeg
│   │   ├── vm_mem.jpeg
│   │   ├── vm_rng.jpeg
│   │   ├── vm_ci_dns.jpeg
│   │   ├── vm_confirm.jpeg
│   │   ├── vm_notes.jpeg
│   │   ├── vm_system.jpeg
│   │   ├── vm_tablet.jpeg
│   │   ├── vm_cloudinit.jpeg
│   │   ├── vm_delete_cd.jpeg
│   │   ├── vm_enable_hd.jpeg
│   │   ├── vm_hd_resize.jpeg
│   │   ├── vm_unused_hd.jpeg
│   │   ├── vm_ci_details.jpeg
│   │   ├── vm_ci_dns_ula.jpeg
│   │   ├── vm_hardware_all.jpeg
│   │   ├── vm_hd_scale_up.jpeg
│   │   ├── vm_network_port.jpeg
│   │   ├── vm_rng_details.jpeg
│   │   ├── vm_network_queue.jpeg
│   │   ├── vm_ci_network_slaac.jpeg
│   │   ├── vm_ci_network_static.jpeg
│   │   └── download_generic_image_qcow2.jpeg
│   ├── p01
│   │   ├── pve_eth.jpeg
│   │   ├── pve_ip.jpeg
│   │   ├── pve_email.jpeg
│   │   ├── pve_etcher.jpeg
│   │   ├── pve_eula.jpeg
│   │   ├── pve_hd_fs.jpeg
│   │   ├── pve_option.jpeg
│   │   ├── pve_rufus.jpeg
│   │   ├── pve_ventoy.jpeg
│   │   ├── pve_mobaxterm.png
│   │   ├── pve_sys_info.jpeg
│   │   ├── pve_termius.jpeg
│   │   ├── pve_timezone.jpeg
│   │   ├── pve_first_boot.jpeg
│   │   ├── pve_hd_choose.jpeg
│   │   ├── pve_ventoy_boot.jpeg
│   │   ├── pve_win_terminal.png
│   │   ├── pve_download_iso.jpeg
│   │   ├── pve_install_confirm.jpeg
│   │   ├── pve_install_finish.jpeg
│   │   └── pve_ventoy_boot_mode.jpeg
│   ├── p00
│   │   ├── bios_cpu.jpeg
│   │   ├── bios_pch.jpeg
│   │   ├── bios_power.jpeg
│   │   ├── bios_save.jpeg
│   │   ├── bios_cpu_vmx.jpeg
│   │   ├── bios_boot_order.jpeg
│   │   ├── bios_c_states.jpeg
│   │   ├── bios_fast_boot.jpeg
│   │   ├── bios_smart_fan.jpeg
│   │   ├── bios_turbo_max.jpeg
│   │   ├── bios_hardware_ac.jpeg
│   │   ├── bios_save_yeahhhh.jpeg
│   │   ├── bios_power_control.jpeg
│   │   ├── bios_turbo_options.jpeg
│   │   ├── bios_hardware_monitor.jpeg
│   │   └── bios_smart_fan_config.jpeg
│   ├── p05
│   │   ├── os_login.jpeg
│   │   └── vm_to_template.jpeg
│   ├── p06
│   │   ├── vm_clone.jpeg
│   │   ├── vm_clone_vmid.jpeg
│   │   ├── vm_clone_ci_slaac.jpeg
│   │   ├── vm_clone_autostart.jpeg
│   │   ├── vm_clone_ci_static.jpeg
│   │   └── vm_clone_autostart_order.jpeg
│   ├── p08
│   │   ├── vm_job_email.jpeg
│   │   ├── vm_job_keep.jpeg
│   │   ├── vm_job_notes.jpeg
│   │   ├── vm_job_time.jpeg
│   │   ├── vm_job_advanced.jpeg
│   │   ├── vm_job_normal.jpeg
│   │   ├── vm_job_time_test.jpeg
│   │   └── vm_new_backup_job.jpeg
│   └── p02
│       ├── pve_br_create.jpeg
│       ├── pve_br_phyport.jpeg
│       ├── pve_net_default.jpeg
│       ├── pve_net_preview.jpeg
│       ├── pve_add_ipv6_dns.jpeg
│       ├── pve_br_nophyport.jpeg
│       ├── pve_modify_vmbr0.jpeg
│       ├── pve_br_last_phyport.jpeg
│       ├── pve_net_schematization.jpeg
│       └── pve_br_last_phyport_ipv6.jpeg
├── src
│   ├── pve
│   │   ├── pve_20auto_upgrades.conf
│   │   ├── pve_cpupower.conf
│   │   ├── pve_cpupower_service.conf
│   │   ├── pve_apt_daily_upgrade.conf
│   │   ├── pve_cpufrequtils.conf
│   │   └── pve_50unattended_upgrades.conf
│   └── debian
│       ├── debian_dns_20auto_upgrades.conf
│       ├── debian_ts_20auto_upgrades.conf
│       ├── debian_sources.conf
│       ├── debian_ts_nic_optim_service.conf
│       ├── debian_dns_dnsmasq_cron.conf
│       ├── debian_dns_smartdns_cron.conf
│       ├── debian_ts_nic_optim.conf
│       ├── debian_dns_dnsmasq.conf
│       ├── debian_dns_99_sysctl.conf
│       ├── debian_ts_dnsmasq.conf
│       ├── debian_dns_smartdns.conf
│       ├── debian_ts_99_sysctl.conf
│       ├── debian_dns_smartdns_plugin.conf
│       ├── debian_ts_nftables.conf
│       ├── debian_dns_50unattended_upgrades.conf
│       └── debian_ts_50unattended_upgrades.conf
├── README.md
├── 00.硬件BIOS配置.md
├── 08.PVE自动备份虚拟机.md
├── 01.PVE系统安装.md
├── 04.PVE创建模板虚拟机.md
├── 02.PVE初始化配置.md
├── 05.PVE制作虚拟机模板.md
├── LICENSE
├── 06.PVE制作DNS服务器.md
├── 03.PVE系统调整.md
└── 07.PVE制作TS服务器.md

/img/p04/vm_hd.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_hd.jpeg
-------------------------------------------------------------------------------- /img/p04/vm_id.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_id.jpeg -------------------------------------------------------------------------------- /img/p04/vm_os.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_os.jpeg -------------------------------------------------------------------------------- /img/p01/pve_eth.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_eth.jpeg -------------------------------------------------------------------------------- /img/p01/pve_ip.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_ip.jpeg -------------------------------------------------------------------------------- /img/p04/vm_boot.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_boot.jpeg -------------------------------------------------------------------------------- /img/p04/vm_cpu.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_cpu.jpeg -------------------------------------------------------------------------------- /img/p04/vm_mem.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_mem.jpeg -------------------------------------------------------------------------------- /img/p04/vm_rng.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_rng.jpeg -------------------------------------------------------------------------------- /img/p00/bios_cpu.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_cpu.jpeg -------------------------------------------------------------------------------- /img/p00/bios_pch.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_pch.jpeg -------------------------------------------------------------------------------- /img/p00/bios_power.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_power.jpeg -------------------------------------------------------------------------------- /img/p00/bios_save.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_save.jpeg -------------------------------------------------------------------------------- /img/p01/pve_email.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_email.jpeg -------------------------------------------------------------------------------- /img/p01/pve_etcher.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_etcher.jpeg -------------------------------------------------------------------------------- /img/p01/pve_eula.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_eula.jpeg -------------------------------------------------------------------------------- /img/p01/pve_hd_fs.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_hd_fs.jpeg -------------------------------------------------------------------------------- /img/p01/pve_option.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_option.jpeg -------------------------------------------------------------------------------- /img/p01/pve_rufus.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_rufus.jpeg -------------------------------------------------------------------------------- /img/p01/pve_ventoy.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_ventoy.jpeg -------------------------------------------------------------------------------- /img/p04/vm_ci_dns.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_ci_dns.jpeg -------------------------------------------------------------------------------- /img/p04/vm_confirm.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_confirm.jpeg -------------------------------------------------------------------------------- /img/p04/vm_notes.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_notes.jpeg -------------------------------------------------------------------------------- /img/p04/vm_system.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_system.jpeg -------------------------------------------------------------------------------- /img/p04/vm_tablet.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_tablet.jpeg -------------------------------------------------------------------------------- /img/p05/os_login.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p05/os_login.jpeg -------------------------------------------------------------------------------- 
/img/p06/vm_clone.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p06/vm_clone.jpeg -------------------------------------------------------------------------------- /img/p00/bios_cpu_vmx.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_cpu_vmx.jpeg -------------------------------------------------------------------------------- /img/p01/pve_mobaxterm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_mobaxterm.png -------------------------------------------------------------------------------- /img/p01/pve_sys_info.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_sys_info.jpeg -------------------------------------------------------------------------------- /img/p01/pve_termius.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_termius.jpeg -------------------------------------------------------------------------------- /img/p01/pve_timezone.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_timezone.jpeg -------------------------------------------------------------------------------- /img/p04/vm_cloudinit.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_cloudinit.jpeg -------------------------------------------------------------------------------- /img/p04/vm_delete_cd.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_delete_cd.jpeg -------------------------------------------------------------------------------- /img/p04/vm_enable_hd.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_enable_hd.jpeg -------------------------------------------------------------------------------- /img/p04/vm_hd_resize.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_hd_resize.jpeg -------------------------------------------------------------------------------- /img/p04/vm_unused_hd.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_unused_hd.jpeg -------------------------------------------------------------------------------- /img/p08/vm_job_email.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_job_email.jpeg -------------------------------------------------------------------------------- /img/p08/vm_job_keep.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_job_keep.jpeg -------------------------------------------------------------------------------- /img/p08/vm_job_notes.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_job_notes.jpeg -------------------------------------------------------------------------------- /img/p08/vm_job_time.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_job_time.jpeg -------------------------------------------------------------------------------- /img/p00/bios_boot_order.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_boot_order.jpeg -------------------------------------------------------------------------------- /img/p00/bios_c_states.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_c_states.jpeg -------------------------------------------------------------------------------- /img/p00/bios_fast_boot.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_fast_boot.jpeg -------------------------------------------------------------------------------- /img/p00/bios_smart_fan.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_smart_fan.jpeg -------------------------------------------------------------------------------- /img/p00/bios_turbo_max.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_turbo_max.jpeg -------------------------------------------------------------------------------- /img/p01/pve_first_boot.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_first_boot.jpeg -------------------------------------------------------------------------------- /img/p01/pve_hd_choose.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_hd_choose.jpeg -------------------------------------------------------------------------------- /img/p01/pve_ventoy_boot.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_ventoy_boot.jpeg -------------------------------------------------------------------------------- /img/p01/pve_win_terminal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_win_terminal.png -------------------------------------------------------------------------------- /img/p02/pve_br_create.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_br_create.jpeg -------------------------------------------------------------------------------- /img/p02/pve_br_phyport.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_br_phyport.jpeg -------------------------------------------------------------------------------- /img/p02/pve_net_default.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_net_default.jpeg -------------------------------------------------------------------------------- /img/p02/pve_net_preview.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_net_preview.jpeg -------------------------------------------------------------------------------- /img/p04/vm_ci_details.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_ci_details.jpeg -------------------------------------------------------------------------------- /img/p04/vm_ci_dns_ula.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_ci_dns_ula.jpeg -------------------------------------------------------------------------------- /img/p04/vm_hardware_all.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_hardware_all.jpeg -------------------------------------------------------------------------------- /img/p04/vm_hd_scale_up.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_hd_scale_up.jpeg -------------------------------------------------------------------------------- /img/p04/vm_network_port.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_network_port.jpeg -------------------------------------------------------------------------------- /img/p04/vm_rng_details.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_rng_details.jpeg -------------------------------------------------------------------------------- /img/p05/vm_to_template.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p05/vm_to_template.jpeg -------------------------------------------------------------------------------- /img/p06/vm_clone_vmid.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p06/vm_clone_vmid.jpeg -------------------------------------------------------------------------------- /img/p08/vm_job_advanced.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_job_advanced.jpeg -------------------------------------------------------------------------------- /img/p08/vm_job_normal.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_job_normal.jpeg -------------------------------------------------------------------------------- /img/p00/bios_hardware_ac.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_hardware_ac.jpeg -------------------------------------------------------------------------------- /img/p00/bios_save_yeahhhh.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_save_yeahhhh.jpeg -------------------------------------------------------------------------------- /img/p01/pve_download_iso.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_download_iso.jpeg -------------------------------------------------------------------------------- /img/p02/pve_add_ipv6_dns.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_add_ipv6_dns.jpeg -------------------------------------------------------------------------------- /img/p02/pve_br_nophyport.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_br_nophyport.jpeg -------------------------------------------------------------------------------- /img/p02/pve_modify_vmbr0.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_modify_vmbr0.jpeg -------------------------------------------------------------------------------- /img/p04/vm_network_queue.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_network_queue.jpeg -------------------------------------------------------------------------------- /img/p06/vm_clone_ci_slaac.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p06/vm_clone_ci_slaac.jpeg -------------------------------------------------------------------------------- /img/p08/vm_job_time_test.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_job_time_test.jpeg -------------------------------------------------------------------------------- /img/p08/vm_new_backup_job.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p08/vm_new_backup_job.jpeg -------------------------------------------------------------------------------- /img/p00/bios_power_control.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_power_control.jpeg -------------------------------------------------------------------------------- /img/p00/bios_turbo_options.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_turbo_options.jpeg -------------------------------------------------------------------------------- /img/p01/pve_install_confirm.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_install_confirm.jpeg -------------------------------------------------------------------------------- /img/p01/pve_install_finish.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_install_finish.jpeg -------------------------------------------------------------------------------- /img/p01/pve_ventoy_boot_mode.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p01/pve_ventoy_boot_mode.jpeg -------------------------------------------------------------------------------- /img/p02/pve_br_last_phyport.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_br_last_phyport.jpeg -------------------------------------------------------------------------------- /img/p04/vm_ci_network_slaac.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_ci_network_slaac.jpeg -------------------------------------------------------------------------------- /img/p04/vm_ci_network_static.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/vm_ci_network_static.jpeg -------------------------------------------------------------------------------- /img/p06/vm_clone_autostart.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p06/vm_clone_autostart.jpeg -------------------------------------------------------------------------------- /img/p06/vm_clone_ci_static.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p06/vm_clone_ci_static.jpeg -------------------------------------------------------------------------------- /img/p00/bios_hardware_monitor.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_hardware_monitor.jpeg -------------------------------------------------------------------------------- /img/p00/bios_smart_fan_config.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p00/bios_smart_fan_config.jpeg -------------------------------------------------------------------------------- /img/p02/pve_net_schematization.jpeg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_net_schematization.jpeg -------------------------------------------------------------------------------- /img/p02/pve_br_last_phyport_ipv6.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p02/pve_br_last_phyport_ipv6.jpeg -------------------------------------------------------------------------------- /img/p06/vm_clone_autostart_order.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p06/vm_clone_autostart_order.jpeg -------------------------------------------------------------------------------- /img/p04/download_generic_image_qcow2.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CallMeR/pve_configuration_notes/HEAD/img/p04/download_generic_image_qcow2.jpeg -------------------------------------------------------------------------------- /src/pve/pve_20auto_upgrades.conf: -------------------------------------------------------------------------------- 1 | APT::Periodic::Update-Package-Lists "1"; 2 | APT::Periodic::Unattended-Upgrade "5"; 3 | APT::Periodic::AutocleanInterval "1"; 4 | APT::Periodic::CleanInterval "1"; 5 | 6 | -------------------------------------------------------------------------------- /src/debian/debian_dns_20auto_upgrades.conf: -------------------------------------------------------------------------------- 1 | APT::Periodic::Update-Package-Lists "1"; 2 | APT::Periodic::Unattended-Upgrade "3"; 3 | APT::Periodic::AutocleanInterval "1"; 4 | APT::Periodic::CleanInterval "1"; 5 | 6 | -------------------------------------------------------------------------------- /src/debian/debian_ts_20auto_upgrades.conf: -------------------------------------------------------------------------------- 1 | APT::Periodic::Update-Package-Lists "1"; 2 | APT::Periodic::Unattended-Upgrade "7"; 3 | APT::Periodic::AutocleanInterval "1"; 4 | APT::Periodic::CleanInterval "1"; 5 | 6 | -------------------------------------------------------------------------------- /src/pve/pve_cpupower.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize system CPU governors. 
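# Assumed to be installed as /etc/default/cpupower: the cpupower systemd unit
# in pve_cpupower_service.conf loads this file via EnvironmentFile and runs
# `cpupower frequency-set -g powersave` at boot; stopping that unit applies
# the performance options instead.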
3 | 4 | CPUPOWER_START_OPTS="frequency-set -g powersave" 5 | CPUPOWER_STOP_OPTS="frequency-set -g performance" 6 | 7 | -------------------------------------------------------------------------------- /src/debian/debian_sources.conf: -------------------------------------------------------------------------------- 1 | Types: deb 2 | URIs: https://mirrors.ustc.edu.cn/debian 3 | Suites: trixie trixie-updates 4 | Components: main contrib non-free non-free-firmware 5 | Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg 6 | 7 | Types: deb 8 | URIs: https://mirrors.ustc.edu.cn/debian-security 9 | Suites: trixie-security 10 | Components: main contrib non-free non-free-firmware 11 | Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg 12 | 13 | -------------------------------------------------------------------------------- /src/pve/pve_cpupower_service.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize for cpupower systemd service. 3 | 4 | [Unit] 5 | Description=Apply cpupower configuration 6 | ConditionVirtualization=!container 7 | After=syslog.target 8 | 9 | [Service] 10 | Type=oneshot 11 | EnvironmentFile=/etc/default/cpupower 12 | ExecStart=/usr/bin/cpupower $CPUPOWER_START_OPTS 13 | ExecStop=/usr/bin/cpupower $CPUPOWER_STOP_OPTS 14 | RemainAfterExit=yes 15 | 16 | [Install] 17 | WantedBy=multi-user.target 18 | 19 | -------------------------------------------------------------------------------- /src/debian/debian_ts_nic_optim_service.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize for TS UDP performance. 3 | 4 | [Unit] 5 | Description=Tailscale Network Optimization for Subnet/Exit Nodes 6 | ConditionVirtualization=!container 7 | After=network-online.target 8 | Wants=network-online.target 9 | Before=tailscaled.service 10 | 11 | [Service] 12 | Type=oneshot 13 | ExecStart=/usr/local/bin/tailscale-nic-optim.sh 14 | RemainAfterExit=yes 15 | StandardOutput=journal 16 | 17 | [Install] 18 | WantedBy=multi-user.target 19 | 20 | -------------------------------------------------------------------------------- /src/pve/pve_apt_daily_upgrade.conf: -------------------------------------------------------------------------------- 1 | ### Editing /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf 2 | ### Anything between here and the comment below will become the new contents of the file 3 | 4 | [Timer] 5 | OnCalendar= 6 | OnCalendar=01:30 7 | RandomizedDelaySec=0 8 | 9 | ### Lines below this comment will be discarded 10 | 11 | ### /lib/systemd/system/apt-daily-upgrade.timer 12 | # [Unit] 13 | # Description=Daily apt upgrade and clean activities 14 | # After=apt-daily.timer 15 | # 16 | # [Timer] 17 | # OnCalendar=*-*-* 6:00 18 | # RandomizedDelaySec=60m 19 | # Persistent=true 20 | # 21 | # [Install] 22 | # WantedBy=timers.target 23 | 24 | -------------------------------------------------------------------------------- /src/debian/debian_dns_dnsmasq_cron.conf: -------------------------------------------------------------------------------- 1 | ## 下载加速规则安装脚本 2 | $ sudo curl -LR -o /usr/local/bin/dnsmasq-plugin.sh https://gitee.com/felixonmars/dnsmasq-china-list/raw/master/install.sh 3 | 4 | ## 设置脚本可执行权限 5 | $ sudo chmod +x /usr/local/bin/dnsmasq-plugin.sh 6 | 7 | ## 设置脚本文件防篡改 8 | $ sudo chattr +i /usr/local/bin/dnsmasq-plugin.sh 9 | 10 | ## 执行脚本 11 | $ sudo 
/usr/local/bin/dnsmasq-plugin.sh 12 | 13 | ## 设置 crontab 14 | 15 | 25 9 * * * /usr/bin/curl --retry-connrefused --retry 5 --retry-delay 5 --retry-max-time 60 -fsSLR -o /etc/dnsmasq.d/anti-ad.dnsmasq.conf https://anti-ad.net/anti-ad-for-dnsmasq.conf 16 | 17 | 35 9 * * * /usr/local/bin/dnsmasq-plugin.sh 18 | 19 | -------------------------------------------------------------------------------- /src/debian/debian_dns_smartdns_cron.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize SmartDNS crontab for local DNS server. 3 | 4 | 20 9 * * * /usr/bin/curl --retry-connrefused --retry 5 --retry-delay 5 --retry-max-time 60 -fsSLR -o /etc/smartdns.d/anti-ad.smartdns.conf https://anti-ad.net/anti-ad-for-smartdns.conf 5 | 6 | 30 9 * * * /usr/bin/systemctl restart smartdns.service 7 | 8 | 9 | ## Or when the smartdns plugin is installed 10 | 11 | 20 9 * * * /usr/bin/curl --retry-connrefused --retry 5 --retry-delay 5 --retry-max-time 60 -fsSLR -o /etc/smartdns.d/anti-ad.smartdns.conf https://anti-ad.net/anti-ad-for-smartdns.conf 12 | 13 | 30 9 * * * /usr/local/bin/smartdns-plugin.sh 14 | 15 | -------------------------------------------------------------------------------- /src/debian/debian_ts_nic_optim.conf: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ETHTOOL_PATH=$(command -v ethtool) 4 | 5 | if [ -z "$ETHTOOL_PATH" ]; then 6 | echo "Error: ethtool command not found. Please install ethtool." 7 | exit 1 8 | fi 9 | 10 | NETDEV=$(ip route show default | awk '{for (i=1; i<=NF; i++) {if ($i == "dev") {print $(i+1); exit;}}}') 11 | 12 | if [ -z "$NETDEV" ]; then 13 | echo "Error: Could not determine default network device using 'ip route show default'." 14 | exit 1 15 | fi 16 | 17 | echo "Configuring network device $NETDEV using $ETHTOOL_PATH for Tailscale optimizations..." 18 | 19 | "$ETHTOOL_PATH" -K "$NETDEV" rx-udp-gro-forwarding on rx-gro-list off 20 | 21 | if [ "$?" -eq 0 ]; then 22 | echo "Configuration successful for $NETDEV." 23 | exit 0 24 | else 25 | echo "Error: ethtool configuration failed. Check permissions or device status." 26 | exit 1 27 | fi 28 | 29 | -------------------------------------------------------------------------------- /src/debian/debian_dns_dnsmasq.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize dnsmasq parameters for local DNS server. 
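# Listens on eth0 only; fox.internal is delegated to the router at 172.16.1.1,
# the special-use zones in the "DNS Filter" block below are answered locally
# and never forwarded, and all remaining queries go to the local SmartDNS
# instance on 127.0.0.1#6053.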
3 | 4 | # Main Config 5 | 6 | conf-dir=/etc/dnsmasq.d/,*.conf 7 | conf-file=/etc/dnsmasq.conf 8 | 9 | log-facility=/var/log/dnsmasq.log 10 | log-async=20 11 | 12 | cache-size=2048 13 | max-cache-ttl=7200 14 | fast-dns-retry=1800 15 | 16 | interface=eth0 17 | rebind-domain-ok=/fox.internal/ 18 | 19 | bind-dynamic 20 | bogus-priv 21 | domain-needed 22 | no-hosts 23 | no-negcache 24 | no-resolv 25 | rebind-localhost-ok 26 | stop-dns-rebind 27 | 28 | # DNS Filter 29 | 30 | server=/alt/ 31 | server=/bind/ 32 | server=/example/ 33 | server=/home.arpa/ 34 | server=/internal/ 35 | server=/invalid/ 36 | server=/lan/ 37 | server=/local/ 38 | server=/localhost/ 39 | server=/onion/ 40 | server=/test/ 41 | 42 | # DNS Server 43 | 44 | server=/fox.internal/172.16.1.1 45 | server=127.0.0.1#6053 46 | 47 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Proxmox VE Tinkering Notes 2 | 3 | ## Introduction 4 | Notes on installing and tinkering with the PVE virtualization platform. 5 | 6 | - PVE ISO version: 9.1-1 (updated: 2025-11-19) 7 | 8 | - PVE network: 9 | - IPv4 network 10 | - IP address: `172.16.1.254` 11 | - Subnet mask: `255.255.255.0` (i.e. `/24`) 12 | - Gateway: `172.16.1.1` 13 | - DNS: `172.16.1.1` 14 | - IPv6 network 15 | - Prefer `SLAAC` autoconfiguration 16 | - The IPv6 ULA prefix `fdac::/64` is used in the demonstrations 17 | 18 | ### Chapters in this series 19 | 20 | 0. [Hardware BIOS configuration](./00.硬件BIOS配置.md) 21 | 1. [PVE system installation](./01.PVE系统安装.md) 22 | 2. [PVE initial configuration](./02.PVE初始化配置.md) 23 | 3. [PVE system tuning](./03.PVE系统调整.md) 24 | 4. [Creating the template VM](./04.PVE创建模板虚拟机.md) 25 | 5. [Turning the VM into a template](./05.PVE制作虚拟机模板.md) 26 | 6. [Building the DNS server](./06.PVE制作DNS服务器.md) 27 | 7. [Building the TS server](./07.PVE制作TS服务器.md) 28 | 8. [Automatic VM backups](./08.PVE自动备份虚拟机.md) 29 | 30 | ### Notes on this series 31 | 32 | 1. Some parameters in this series need to be adjusted by hand to match your actual environment. 33 | 2. As PVE is updated, the screenshots may differ from what the current pages show. 34 | 3. If you reference this material, please credit the source. 35 | -------------------------------------------------------------------------------- /src/debian/debian_dns_99_sysctl.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize sysctl parameters for local DNS server. 3 | 4 | kernel.panic = 20 5 | kernel.panic_on_oops = 1 6 | 7 | net.core.default_qdisc = fq_codel 8 | 9 | # Other adjustable system parameters 10 | 11 | net.core.netdev_budget = 1200 12 | net.core.netdev_budget_usecs = 10000 13 | 14 | net.core.somaxconn = 8192 15 | net.core.rmem_max = 33554432 16 | net.core.wmem_max = 33554432 17 | 18 | net.ipv4.igmp_max_memberships = 256 19 | 20 | net.ipv4.tcp_challenge_ack_limit = 1000 21 | net.ipv4.tcp_fastopen = 3 22 | net.ipv4.tcp_fin_timeout = 30 23 | net.ipv4.tcp_keepalive_time = 120 24 | net.ipv4.tcp_max_syn_backlog = 2048 25 | net.ipv4.tcp_notsent_lowat = 131072 26 | net.ipv4.tcp_rmem = 4096 4194304 33554432 27 | net.ipv4.tcp_wmem = 4096 4194304 33554432 28 | 29 | net.ipv6.conf.all.use_tempaddr = 0 30 | net.ipv6.conf.default.use_tempaddr = 0 31 | 32 | -------------------------------------------------------------------------------- /src/debian/debian_ts_dnsmasq.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize dnsmasq parameters for local TS server.
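# Same layout as the DNS-server variant above, but this one also binds
# tailscale0, hands ts.net lookups to Tailscale's MagicDNS resolver at
# 100.100.100.100, and uses the main router at 172.16.1.1 as the default
# upstream.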
3 | 4 | # Main Config 5 | 6 | conf-dir=/etc/dnsmasq.d/,*.conf 7 | conf-file=/etc/dnsmasq.conf 8 | 9 | log-facility=/var/log/dnsmasq.log 10 | log-async=20 11 | 12 | cache-size=2048 13 | max-cache-ttl=7200 14 | fast-dns-retry=1800 15 | 16 | interface=eth0,tailscale0 17 | rebind-domain-ok=/fox.internal/ 18 | 19 | bind-dynamic 20 | bogus-priv 21 | domain-needed 22 | no-hosts 23 | no-negcache 24 | no-resolv 25 | rebind-localhost-ok 26 | stop-dns-rebind 27 | 28 | # DNS Filter 29 | 30 | server=/alt/ 31 | server=/bind/ 32 | server=/example/ 33 | server=/home.arpa/ 34 | server=/internal/ 35 | server=/invalid/ 36 | server=/lan/ 37 | server=/local/ 38 | server=/localhost/ 39 | server=/onion/ 40 | server=/test/ 41 | 42 | # DNS Server 43 | 44 | server=/ts.net/100.100.100.100 45 | server=/fox.internal/172.16.1.1 46 | server=172.16.1.1 47 | 48 | -------------------------------------------------------------------------------- /src/debian/debian_dns_smartdns.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize SmartDNS parameters for local DNS server. 3 | # 4 | # For use common DNS server as upstream DNS server, 5 | # please modify 'server' parameter according to 6 | # your network environment. 7 | # 8 | # eg: 9 | # server 223.5.5.5 10 | # server 180.184.1.1 11 | # server 119.29.29.29 12 | # server 114.114.114.114 13 | # server 2402:4e00:: 14 | # server 2400:3200::1 15 | 16 | conf-file /etc/smartdns.d/*.conf 17 | 18 | log-level notice 19 | log-console yes 20 | log-size 16M 21 | log-num 8 22 | 23 | audit-enable yes 24 | audit-SOA yes 25 | audit-size 16M 26 | audit-num 8 27 | 28 | plugin smartdns_ui.so 29 | smartdns-ui.ip https://[::]:5380 30 | smartdns-ui.token-expire 1800 31 | smartdns-ui.max-query-log-age 604800 32 | 33 | bind [::]:6053@lo 34 | bind-tcp [::]:6053@lo 35 | 36 | cache-size 32768 37 | max-query-limit 1024 38 | max-reply-ip-num 16 39 | 40 | prefetch-domain yes 41 | 42 | serve-expired yes 43 | serve-expired-ttl 129600 44 | serve-expired-reply-ttl 30 45 | serve-expired-prefetch-time 28800 46 | 47 | rr-ttl-min 60 48 | rr-ttl-max 28800 49 | rr-ttl-reply-max 14400 50 | 51 | server-tcp 180.184.1.1 -bootstrap-dns 52 | server-tcp 101.226.4.6 -bootstrap-dns 53 | server-tcp 2400:3200::1 -bootstrap-dns 54 | server-tcp 2400:3200:baba::1 -bootstrap-dns 55 | 56 | server-tls dot.360.cn -fallback 57 | server-quic dns.alidns.com -fallback 58 | server-https https://doh.360.cn/dns-query -fallback 59 | 60 | server-tls dot.pub 61 | server-tls dns.alidns.com 62 | server-https https://doh.pub/dns-query 63 | server-https https://dns.alidns.com/dns-query 64 | 65 | -------------------------------------------------------------------------------- /00.硬件BIOS配置.md: -------------------------------------------------------------------------------- 1 | ## 1.虚拟化选项 2 | 3 | 进入 `Advanced` 菜单的子菜单 `CPU Configuration` : 4 | 5 | ![cpu设置](img/p00/bios_cpu.jpeg) 6 | 7 | 检查 `Intel (VMX) Virtualization Technology` 选项为 `Enabled` 状态: 8 | 9 | ![cpu设置-vmx](img/p00/bios_cpu_vmx.jpeg) 10 | 11 | ## 2. 
CPU 功耗 12 | 13 | 进入 `Advanced` 菜单的子菜单 `Power & Performance` : 14 | 15 | ![cpu功耗](img/p00/bios_power.jpeg) 16 | 17 | 进入 `CPU - Power Management Control` : 18 | 19 | ![cpu功耗控制](img/p00/bios_power_control.jpeg) 20 | 21 | 检查 `C states` , **默认** 选项为 `Enabled` 状态,如果遇到网卡 **无法跑满** 的情况,可以尝试将该选项关闭: 22 | 23 | ![c-states](img/p00/bios_c_states.jpeg) 24 | 25 | 再进入 `HDC Control` 菜单的子菜单 `View/Configure Turbo Options` : 26 | 27 | ![turbo选项](img/p00/bios_turbo_options.jpeg) 28 | 29 | 检查以下内容,适当调整以改变功耗墙: 30 | - `Power Limit 1 Override` 选项:`Enabled` 31 | - `Power Limit 1` 选项:`50000` 32 | - `Power Limit 1 Time Window` 选项:为最大 `128` 33 | 34 | ![turbo选项确认](img/p00/bios_turbo_max.jpeg) 35 | 36 | ## 3.温控风扇 37 | 38 | 进入 `Advanced` 菜单的子菜单 `Hardware Monitor` : 39 | 40 | ![硬件监控](img/p00/bios_hardware_monitor.jpeg) 41 | 42 | 进入 `Smart Fan Function` : 43 | 44 | ![风扇状态](img/p00/bios_smart_fan.jpeg) 45 | 46 | 可对温控风扇参数进行调整: 47 | 48 | ![风扇设置](img/p00/bios_smart_fan_config.jpeg) 49 | 50 | ## 4.来电自启 51 | 52 | 进入 `Chipset` 菜单的子菜单 `PCH-IO Configuration` : 53 | 54 | ![硬件监控](img/p00/bios_pch.jpeg) 55 | 56 | 确认 `State After G3` 选项为 `Power On` 状态: 57 | 58 | ![来电自启](img/p00/bios_hardware_ac.jpeg) 59 | 60 | ## 5.快速启动 61 | 62 | PVE 系统安装完成后,进入 `Boot` 菜单,将 `Fast Boot` 选项设置为 `Enabled` 状态: 63 | 64 | ![快速启动](img/p00/bios_fast_boot.jpeg) 65 | 66 | 调整系统启动顺序,将 `Proxmox` 设置为第一启动项,并关闭其他启动项内容: 67 | 68 | ![启动顺序](img/p00/bios_boot_order.jpeg) 69 | 70 | 最后保存 BIOS 设置 `Save Changes and Exit` : 71 | 72 | ![保存BIOS](img/p00/bios_save.jpeg) 73 | 74 | 使用键盘左右方向键选择 `yes` 并回车键执行保存: 75 | 76 | ![确认保存](img/p00/bios_save_yeahhhh.jpeg) -------------------------------------------------------------------------------- /08.PVE自动备份虚拟机.md: -------------------------------------------------------------------------------- 1 | ## 1.添加备份作业 2 | 3 | 在所有虚拟机创建并配置完成后,可以使用 PVE 自带的备份功能周期性的将虚拟机备份。 4 | 5 | 点击 PVE 的 `数据中心` ,在右侧菜单中选择 `备份` 功能,并点击顶部 `添加` 。 6 | 7 | ![添加备份作业](img/p08/vm_new_backup_job.jpeg) 8 | 9 | ### 1.1.常规 10 | 11 | 在弹出的 `创建:备份作业` 对话框中,勾选底部 `高级` 选项,`备份作业` 的各项参数如下。 12 | 13 | |参数|值|说明| 14 | |--|--|--| 15 | |节点|`node01`|选择当前 PVE 服务器节点| 16 | |存储|`local`|选择存放备份文件的路径| 17 | |计划|`*-01,16 03:30`|`备份作业` 执行的时间计划| 18 | |选择模式|`包括选中的虚拟机`|执行备份的虚拟机对象| 19 | |压缩|`ZSTD`|选择备份文件的压缩算法| 20 | |模式|`停止`|选择备份虚拟机的方式,推荐使用 `停止` | 21 | |启用|**勾选**|表示该 `备份作业` 为启用状态| 22 | |作业评论|`Backup Your Server :)`|`备份作业` 的备注信息,使用英文输入| 23 | 24 | **额外说明:** 25 | 26 | 1. 计划中的 `*-01,16 03:30` 表示每月 `1` 、`16` 号的 `03:30` 执行备份任务。 27 | 28 | 2. `备份作业` 在正确配置收件人邮箱之前,并不能发出邮件。 29 | 30 | 3. `停止` 模式表示备份时会将虚拟机置于关机状态,备份完成后再恢复至之前的状态。 31 | 32 | 4. 
虚拟机列表中,勾选 `备份作业` 需要备份的虚拟机(可多选)。 33 | 34 | ![备份作业常规选项](img/p08/vm_job_normal.jpeg) 35 | 36 | ### 1.2.通知 37 | 38 | |参数|值|说明| 39 | |--|--|--| 40 | |通知模式|`使用全局通知设置`|执行备份时的通知模式,保持默认即可| 41 | 42 | ![备份作业通知选项](img/p08/vm_job_email.jpeg) 43 | 44 | ### 1.3.保留 45 | 46 | 该选项将控制备份文件的保留个数,选择保留最近 `3` 份备份文件。 47 | 48 | ![备份作业保留选项](img/p08/vm_job_keep.jpeg) 49 | 50 | ### 1.4.备注模板 51 | 52 | 该选项将按照设置的内容,自动重命名备份文件。 53 | 54 | 在 `备份日志` 右侧文本框中输入 `{{vmid}}-{{guestname}}-Autobackup` 。 55 | 56 | ![备份作业备注选项](img/p08/vm_job_notes.jpeg) 57 | 58 | ### 1.5.高级 59 | 60 | 该选项提供 `备份作业` 执行时的高级可调参数,仅需勾选 `重复错过` 选项即可。 61 | 62 | 点击 `创建` ,此 `备份作业` 即完成设置与创建。 63 | 64 | ![备份作业高级选项](img/p08/vm_job_advanced.jpeg) 65 | 66 | ## 2.调度模拟器 67 | 68 | 在创建完成 `备份作业` 后,可以使用 `调度模拟器` 来模拟 `备份作业` 的执行时间。 69 | 70 | 鼠标 **单击** 选中一个 `备份作业` ,点击右上角的 `调度模拟器` 。 71 | 72 | ![备份作业调度模拟器](img/p08/vm_job_time_test.jpeg) 73 | 74 | `计划` 处将显示 `备份作业` 的执行时间参数,点击 `模拟` ,在右侧将显示模拟的时间结果。 75 | 76 | 确认 `备份作业` 的执行时间周期是否符合预期。 77 | 78 | ![备份作业时间模拟](img/p08/vm_job_time.jpeg) 79 | 80 | 至此,虚拟机的自动备份已配置完成。 81 | 82 | -------------------------------------------------------------------------------- /src/debian/debian_ts_99_sysctl.conf: -------------------------------------------------------------------------------- 1 | # This configuration file is customized by fox, 2 | # Optimize sysctl parameters for local TS server. 3 | 4 | kernel.panic = 20 5 | kernel.panic_on_oops = 1 6 | 7 | net.core.default_qdisc = fq_codel 8 | net.ipv4.tcp_congestion_control = bbr 9 | 10 | net.ipv4.ip_forward = 1 11 | 12 | net.ipv6.conf.all.forwarding = 1 13 | net.ipv6.conf.default.forwarding = 1 14 | 15 | # Other adjustable system parameters 16 | 17 | net.core.netdev_budget = 1200 18 | net.core.netdev_budget_usecs = 10000 19 | 20 | net.core.rps_sock_flow_entries = 32768 21 | net.core.somaxconn = 8192 22 | net.core.rmem_max = 67108864 23 | net.core.wmem_max = 67108864 24 | 25 | net.ipv4.conf.all.accept_redirects = 0 26 | net.ipv4.conf.default.accept_redirects = 0 27 | 28 | net.ipv4.conf.all.accept_source_route = 0 29 | net.ipv4.conf.default.accept_source_route = 0 30 | 31 | net.ipv4.conf.all.arp_ignore = 1 32 | net.ipv4.conf.default.arp_ignore = 1 33 | 34 | net.ipv4.conf.all.rp_filter = 2 35 | net.ipv4.conf.default.rp_filter = 2 36 | 37 | net.ipv4.conf.all.send_redirects = 0 38 | net.ipv4.conf.default.send_redirects = 0 39 | 40 | net.ipv4.igmp_max_memberships = 256 41 | 42 | net.ipv4.route.error_burst = 500 43 | net.ipv4.route.error_cost = 100 44 | 45 | net.ipv4.route.redirect_load = 2 46 | net.ipv4.route.redirect_silence = 2048 47 | 48 | net.ipv4.tcp_adv_win_scale = -2 49 | net.ipv4.tcp_challenge_ack_limit = 1000 50 | net.ipv4.tcp_fastopen = 3 51 | net.ipv4.tcp_fin_timeout = 30 52 | net.ipv4.tcp_keepalive_time = 120 53 | net.ipv4.tcp_max_syn_backlog = 2048 54 | net.ipv4.tcp_notsent_lowat = 131072 55 | net.ipv4.tcp_rmem = 8192 4194304 67108864 56 | net.ipv4.tcp_wmem = 4096 4194304 67108864 57 | 58 | net.ipv6.conf.all.accept_redirects = 0 59 | net.ipv6.conf.default.accept_redirects = 0 60 | 61 | net.ipv6.conf.all.accept_source_route = 0 62 | net.ipv6.conf.default.accept_source_route = 0 63 | 64 | net.ipv6.conf.all.use_tempaddr = 0 65 | net.ipv6.conf.default.use_tempaddr = 0 66 | 67 | net.netfilter.nf_conntrack_acct = 1 68 | net.netfilter.nf_conntrack_buckets = 65536 69 | net.netfilter.nf_conntrack_checksum = 0 70 | net.netfilter.nf_conntrack_max = 262144 71 | net.netfilter.nf_conntrack_tcp_timeout_established = 7440 72 | 73 | -------------------------------------------------------------------------------- 
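Both 99_sysctl profiles above are presumably meant for `/etc/sysctl.d/` on their respective guests. A minimal sketch of applying and spot-checking the TS variant (the destination filename `99-custom.conf` is illustrative, not taken from the notes):

```bash
# Copy the profile into place (destination name is an assumption).
sudo cp debian_ts_99_sysctl.conf /etc/sysctl.d/99-custom.conf

# The net.netfilter.* keys only exist once the conntrack module is loaded.
sudo modprobe nf_conntrack

# Confirm BBR is available before relying on tcp_congestion_control = bbr.
sysctl net.ipv4.tcp_available_congestion_control

# Re-read every file under /etc/sysctl.d/ and apply it.
sudo sysctl --system

# Spot-check a few of the values set above.
sysctl net.core.default_qdisc net.ipv4.ip_forward net.netfilter.nf_conntrack_max
```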
/src/debian/debian_dns_smartdns_plugin.conf: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | set -e 3 | 4 | WORKDIR="$(mktemp -d)" 5 | CONFDIR="/etc/smartdns.d" 6 | SERVERS="223.5.5.5 180.184.1.1 119.29.29.29 114.114.114.114 2402:4e00:: 2400:3200::1" 7 | GROUP="flash" 8 | # Others: 223.6.6.6 119.28.28.28 9 | # Not using best possible CDN pop: 1.2.4.8 210.2.4.8 10 | # Broken?: 180.76.76.76 11 | 12 | CONF_WITH_SERVERS="accelerated-domains.china google.china apple.china" 13 | CONF_WITH_GROUP="dns-group.china" 14 | CONF_SIMPLE="bogus-nxdomain.china" 15 | 16 | echo "Checking whether the configuration folder exists..." 17 | [ -d "$CONFDIR" ] || mkdir -p "$CONFDIR" 18 | 19 | echo "Downloading latest configurations..." 20 | git clone --depth=1 https://gitee.com/felixonmars/dnsmasq-china-list.git "$WORKDIR" || { echo "Git clone failed"; rm -rf "$WORKDIR"; exit 1; } 21 | #git clone --depth=1 https://pagure.io/dnsmasq-china-list.git "$WORKDIR" 22 | #git clone --depth=1 https://github.com/felixonmars/dnsmasq-china-list.git "$WORKDIR" 23 | #git clone --depth=1 https://bitbucket.org/felixonmars/dnsmasq-china-list.git "$WORKDIR" 24 | #git clone --depth=1 https://gitlab.com/felixonmars/dnsmasq-china-list.git "$WORKDIR" 25 | #git clone --depth=1 https://e.coding.net/felixonmars/dnsmasq-china-list.git "$WORKDIR" 26 | #git clone --depth=1 https://atomgit.com/felixonmars/dnsmasq-china-list.git "$WORKDIR" 27 | #git clone --depth=1 https://codehub.devcloud.huaweicloud.com/dnsmasq-china-list00001/dnsmasq-china-list.git "$WORKDIR" 28 | #git clone --depth=1 http://repo.or.cz/dnsmasq-china-list.git "$WORKDIR" 29 | 30 | echo "Removing old configurations..." 31 | for _conf in $CONF_WITH_SERVERS $CONF_WITH_GROUP $CONF_SIMPLE; do 32 | rm -f "$CONFDIR/$_conf"*.conf 33 | done 34 | 35 | echo "Installing new configurations..." 36 | for _conf in $CONF_WITH_SERVERS $CONF_WITH_GROUP $CONF_SIMPLE; do 37 | 38 | if echo " $CONF_WITH_SERVERS " | grep -q " $_conf "; then 39 | sed -n 's/^server=\/\([^\/]*\)\/114.114.114.114$/\1/p' "$WORKDIR/$_conf.conf" | grep -v '^#' > "$WORKDIR/$_conf.step1.raw" 40 | sed -e "s/.*/nameserver \/&\/$GROUP/" "$WORKDIR/$_conf.step1.raw" > "$WORKDIR/$_conf.step2.raw" 41 | cp "$WORKDIR/$_conf.step2.raw" "$CONFDIR/$_conf.smartdns.conf" 42 | fi 43 | 44 | if echo " $CONF_WITH_GROUP " | grep -q " $_conf "; then 45 | for _server in $SERVERS; do 46 | echo "server $_server -group $GROUP -exclude-default-group" >> "$WORKDIR/$_conf.raw" 47 | done 48 | cp "$WORKDIR/$_conf.raw" "$CONFDIR/$_conf.smartdns.conf" 49 | fi 50 | 51 | if echo " $CONF_SIMPLE " | grep -q " $_conf "; then 52 | sed -e "s/=/ /" "$WORKDIR/$_conf.conf" > "$WORKDIR/$_conf.raw" 53 | cp "$WORKDIR/$_conf.raw" "$CONFDIR/$_conf.smartdns.conf" 54 | fi 55 | 56 | done 57 | 58 | echo "Restarting smartdns service..." 59 | if command -v systemctl >/dev/null 2>&1; then 60 | systemctl restart smartdns 61 | elif command -v service >/dev/null 2>&1; then 62 | service smartdns restart 63 | elif command -v rc-service >/dev/null 2>&1; then 64 | rc-service smartdns restart 65 | elif [ -x "/etc/init.d/smartdns" ]; then 66 | /etc/init.d/smartdns restart 67 | else 68 | echo "Now please restart smartdns manually." 69 | fi 70 | 71 | echo "Cleaning up..." 
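# With `set -e` in effect, this cleanup is only reached on a fully successful
# run; a failure earlier in the script may leave the temporary checkout behind.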
72 | rm -rf "$WORKDIR" 73 | 74 | -------------------------------------------------------------------------------- /src/pve/pve_cpufrequtils.conf: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | ### BEGIN INIT INFO 3 | # Provides: cpufrequtils 4 | # Required-Start: $remote_fs loadcpufreq 5 | # Required-Stop: 6 | # Default-Start: 2 3 4 5 7 | # Default-Stop: 8 | # Short-Description: set CPUFreq kernel parameters 9 | # Description: utilities to deal with CPUFreq Linux 10 | # kernel support 11 | ### END INIT INFO 12 | # 13 | 14 | DESC="CPUFreq Utilities" 15 | 16 | PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin 17 | CPUFREQ_SET=/usr/bin/cpufreq-set 18 | CPUFREQ_INFO=/usr/bin/cpufreq-info 19 | CPUFREQ_OPTIONS="" 20 | 21 | # use lsb-base 22 | . /lib/lsb/init-functions 23 | 24 | # Which governor to use. Must be one of the governors listed in: 25 | # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors 26 | # 27 | # and which limits to set. Both MIN_SPEED and MAX_SPEED must be values 28 | # listed in: 29 | # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies 30 | # a value of 0 for any of the two variables will disabling the use of 31 | # that limit variable. 32 | # 33 | # WARNING: the correct kernel module must already be loaded or compiled in. 34 | # 35 | # Set ENABLE to "true" to let the script run at boot time. 36 | # 37 | # eg: ENABLE="true" 38 | # GOVERNOR="ondemand" 39 | # MAX_SPEED=1000 40 | # MIN_SPEED=500 41 | 42 | ENABLE="true" 43 | GOVERNOR="powersave" 44 | MAX_SPEED="0" 45 | MIN_SPEED="0" 46 | 47 | check_governor_avail() { 48 | info="/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors" 49 | if [ -f $info ] && grep -q "\<$GOVERNOR\>" $info ; then 50 | return 0; 51 | fi 52 | return 1; 53 | } 54 | 55 | [ -x $CPUFREQ_SET ] || exit 0 56 | 57 | if [ -f /etc/default/cpufrequtils ] ; then 58 | . /etc/default/cpufrequtils 59 | fi 60 | 61 | # if not enabled then exit gracefully 62 | [ "$ENABLE" = "true" ] || exit 0 63 | 64 | if [ -n "$MAX_SPEED" ] && [ $MAX_SPEED != "0" ] ; then 65 | CPUFREQ_OPTIONS="$CPUFREQ_OPTIONS --max $MAX_SPEED" 66 | fi 67 | 68 | if [ -n "$MIN_SPEED" ] && [ $MIN_SPEED != "0" ] ; then 69 | CPUFREQ_OPTIONS="$CPUFREQ_OPTIONS --min $MIN_SPEED" 70 | fi 71 | 72 | if [ -n "$GOVERNOR" ] ; then 73 | CPUFREQ_OPTIONS="$CPUFREQ_OPTIONS --governor $GOVERNOR" 74 | fi 75 | 76 | CPUS=$(cat /proc/stat|sed -ne 's/^cpu\([[:digit:]]\+\).*/\1/p') 77 | RETVAL=0 78 | case "$1" in 79 | start|force-reload|restart|reload) 80 | log_action_begin_msg "$DESC: Setting $GOVERNOR CPUFreq governor" 81 | if check_governor_avail ; then 82 | for cpu in $CPUS ; do 83 | log_action_cont_msg "CPU${cpu}" 84 | $CPUFREQ_SET --cpu $cpu $CPUFREQ_OPTIONS 2>&1 > /dev/null || \ 85 | RETVAL=$? 86 | done 87 | log_action_end_msg $RETVAL "" 88 | else 89 | log_action_cont_msg "disabled, governor not available" 90 | log_action_end_msg $RETVAL 91 | fi 92 | ;; 93 | stop) 94 | ;; 95 | *) 96 | echo "Usage: $0 {start|stop|restart|reload|force-reload}" 97 | exit 1 98 | esac 99 | 100 | exit 0 101 | 102 | -------------------------------------------------------------------------------- /01.PVE系统安装.md: -------------------------------------------------------------------------------- 1 | ## 0.前期准备 2 | 3 | PVE 系统正式安装之前,需要准备 PVE 的安装镜像和一些必要的配套工具。 4 | 5 | ### 0.1. 
PVE 镜像下载 6 | 7 | PVE 下载地址:[Proxmox Virtual Environment](https://www.proxmox.com/en/downloads/proxmox-virtual-environment/iso) 8 | 9 | 页面中有多个 PVE 相关文件,本文以目前最新的 `Proxmox VE 9.x ISO Installer` 作为演示。 10 | 11 | 下载 ISO 时请注意 `SHA256SUM` ,后续将使用该校验信息对下载下来的 ISO 进行校验,以确保 ISO 文件的完整性。 12 | 13 | ![PVE下载页面](img/p01/pve_download_iso.jpeg) 14 | 15 | ### 0.2.启动盘制作工具 16 | 17 | 考虑到制作 PVE 启动盘时,所使用的操作系统可能有 Windows 、macOS 、Linux ,因此这里推荐几个常用的写盘工具。 18 | 19 | #### Ventoy 20 | 21 | 官方网站地址:https://www.ventoy.net/cn/index.html 22 | 23 | 其优点在于可在单个 U 盘内存储多个可引导 ISO 文件,省去多次格式化 U 盘并写入 ISO 的操作,使用便捷。 24 | 25 | 缺点是仅支持 Windows 与 Linux 系统,且部分机型可能无法在 UEFI 模式下启用 Secure Boot(安全启动)。 26 | 27 | ![Ventoy写盘工具](img/p01/pve_ventoy.jpeg) 28 | 29 | #### Etcher 30 | 31 | 官方网站地址:https://etcher.balena.io/ 32 | 33 | 跨平台的写盘工具,开源,写盘速度很快,缺点是该软件体积较大,下载缓慢。 34 | 35 | 支持 Windows 、macOS 、Linux 。 36 | 37 | ![Etcher写盘工具](img/p01/pve_etcher.jpeg) 38 | 39 | #### Rufus 40 | 41 | 官方网站地址:https://rufus.ie/zh/ 42 | 43 | 轻便小巧的写盘工具,仅支持 Windows 。 44 | 45 | ![Rufus写盘工具](img/p01/pve_rufus.jpeg) 46 | 47 | ### 0.3.终端工具 48 | 49 | 考虑到配置 PVE 服务器、路由器及 DNS 服务器时需在 CLI 中执行命令,以下推荐几款常用终端工具。 50 | 51 | #### Windows Terminal 52 | 53 | 官方网站地址:https://aka.ms/terminal 54 | 55 | Microsoft 官方终端工具,可以在 Github 平台或 Microsoft 应用商店中进行下载。 56 | 57 | ![Windows Terminal](img/p01/pve_win_terminal.png) 58 | 59 | #### Termius 60 | 61 | 官方地址:https://termius.com/ 62 | 63 | 企业级终端工具,支持 Windows、macOS、Linux 系统以及移动端系统。 64 | 65 | ![Termius](img/p01/pve_termius.jpeg) 66 | 67 | #### MobaXterm 68 | 69 | 官方地址:https://mobaxterm.mobatek.net 70 | 71 | 功能强大的终端工具,仅支持 Windows 系统。 72 | 73 | ![MobaXterm](img/p01/pve_mobaxterm.png) 74 | 75 | ## 1. PVE 系统安装 76 | 77 | 由于机型不同,BIOS 的设置也不同,所以本文不演示具体如何将机器设置成从 U 盘启动。 78 | 79 | 在设置 BIOS 时需要注意以下几点: 80 | 81 | - 确保启用了硬件虚拟化支持 82 | 83 | - 确保启用了设备来电自启动 84 | 85 | - 检查 CPU 的功耗设置(可选) 86 | 87 | - PVE 8.1 及后续版本支持安全启动,详情参考 [PVE - Secure Boot Setup](https://pve.proxmox.com/wiki/Secure_Boot_Setup) 88 | 89 | - 若通过 Ventoy 引导安装 PVE 系统,建议参考 [UEFI 模式安全启动操作说明](https://www.ventoy.net/cn/doc_secure.html) 或 **临时关闭** 安全启动 90 | 91 | - 对设备安全性有要求时,建议使用 Etcher 工具制作启动 U 盘并直接安装 PVE 系统 92 | 93 | ### 1.1.设备引导 94 | 95 | 使用 Ventoy 进行设备引导后出现如下画面,需选择 PVE 的安装 ISO 进行系统引导。 96 | 97 | ![Ventoy引导设备](img/p01/pve_ventoy_boot.jpeg) 98 | 99 | 使用 `Boot in normal mode` 选项进行启动,等待引导跑码完成后,即可进入 PVE 的安装界面。 100 | 101 | ![Ventoy引导模式](img/p01/pve_ventoy_boot_mode.jpeg) 102 | 103 | ### 1.2. PVE 安装选项 104 | 105 | 新版 PVE 安装程序支持纯键盘操作,常用快捷键如下。 106 | 107 | |键盘按键|作用|说明| 108 | |--|--|--| 109 | |Esc|返回|返回上一步骤| 110 | |Enter|确认|确认输入,激活选项或进入下一步骤| 111 | |方向键|导航|用于选项选择,或切换输入焦点| 112 | |Tab|导航|与方向键功能类似| 113 | |ALT + N|下一步|进入下一步骤| 114 | 115 | 使用 `方向键` 选择第一项 `Install Proxmox VE (Graphical)` 图形化安装界面,按键盘 `Enter` 进入下一步骤。 116 | 117 | ![PVE安装选项](img/p01/pve_option.jpeg) 118 | 119 | 设备将继续跑码,直到出现最终用户许可协议 EULA ,按键盘组合键 `ALT + N ` 进入下一步骤。 120 | 121 | ![PVE用户协议](img/p01/pve_eula.jpeg) 122 | 123 | ### 1.3. PVE 硬盘选项 124 | 125 | 此时会出现 `Target Harddisk` 选项,会显示出设备中存在的硬盘列表,可通过下拉框选择安装 PVE 的目标硬盘。 126 | 127 | ![PVE选择安装硬盘](img/p01/pve_hd_choose.jpeg) 128 | 129 | 点击硬盘列表右侧的 `Options` ,对 PVE 的硬盘安装参数进行一些调整。 130 | 131 | 推荐将 `Filesystem` 也就是硬盘的文件系统,设置成 `xfs` 。 132 | 133 | ![PVE文件系统](img/p01/pve_hd_fs.jpeg) 134 | 135 | ### 1.4. PVE 时区选项 136 | 137 | 此时 PVE 处于未联网的 “离线” 状态,因此不会从互联网中读取时区信息,需要手动设置时区。 138 | 139 | 在 `Contry` 处手动输入 `China` ,下方的 `Time zone` 将自动变更为 `Asia/Shanghai` 。 140 | 141 | ![PVE时区](img/p01/pve_timezone.jpeg) 142 | 143 | ### 1.5. 
PVE 账户与邮箱 144 | 145 | PVE 为最关键的虚拟化层,建议使用强密码,包含大小写字母、数字以及特殊符号。 146 | 147 | `Email` 必须为一个 “合法” 的邮箱地址,不然系统会判定邮箱地址不合法并拒绝继续安装。 148 | 149 | ![PVE邮箱设置](img/p01/pve_email.jpeg) 150 | 151 | ### 1.6. PVE 网络设置 152 | 153 | 默认情况下,PVE 会使用编号较小的第一个网口作为管理口。 154 | 155 | 而某些设备,其物理网口顺序与该页面显示的网口顺序 **不一致** ,因此保持默认设置即可。 156 | 157 | ![PVE管理网口设置](img/p01/pve_eth.jpeg) 158 | 159 | FQDN 为 PVE 的域,PVE 将使用 FQDN 中的二级域名作为其主机名。 160 | 161 | 演示中 FQDN 为 `node01.fox.internal` ,因此 PVE 的主机名为 `node01` 。 162 | 163 | 根据规划,PVE 的 IPv4 管理地址为 `172.16.1.254` 。 164 | 165 | 未来主路由的 IPv4 地址为 `172.16.1.1` ,因此均使用该地址作为 PVE 的网关地址和 DNS 地址。 166 | 167 | PVE 安装完成后,可通过其提供的 Web 管理界面,进一步调整管理口的相关设置。 168 | 169 | |参数|值|说明| 170 | |--|--|--| 171 | |Hostname (FQDN)|`node01.fox.internal`|设置 PVE `域` 和 `主机名` | 172 | |IP Address (CIDR)|`172.16.1.254/24`|设置 PVE IPv4 地址| 173 | |Gateway|`172.16.1.1`|设置 PVE IPv4 网关| 174 | |DNS Server|`172.16.1.1`|设置 PVE IPv4 DNS | 175 | 176 | ![PVE管理IP地址](img/p01/pve_ip.jpeg) 177 | 178 | ### 1.7. PVE 参数确认 179 | 180 | 该页面会显示当前 PVE 的安装配置总览,确认无误后即可开始安装。 181 | 182 | ![PVE安装确认](img/p01/pve_install_confirm.jpeg) 183 | 184 | 安装完成后,PVE 会告知用户登录的 `IP 地址` 和 `端口` 。 185 | 186 | ![PVE安装完成](img/p01/pve_install_finish.jpeg) 187 | 188 | ## 2. PVE 安装后检查 189 | 190 | PVE 安装完成后会自动重启,等待系统重启完成会显示如下界面,并使用 `root` 账户进行登录,密码为刚才设置的管理密码。 191 | 192 | **注意:Linux 操作系统,在输入密码时是不显示任何字符信息的,输入完成后输入回车即可。** 193 | 194 | ![PVE首次重启](img/p01/pve_first_boot.jpeg) 195 | 196 | 登录系统后,使用一些命令来查看一些基本信息。 197 | 198 | ```bash 199 | ## 查看系统硬盘与挂载 200 | $ df -hT 201 | 202 | $ df -hiT 203 | 204 | $ cat /etc/fstab 205 | 206 | ## 查看系统代号 207 | $ cat /etc/os-release 208 | ``` 209 | 210 | 此处显示出 PVE 底层使用的是 Debian 的系统,代号为 `trixie` ,该代号后续会使用到。 211 | 212 | ![PVE系统信息](img/p01/pve_sys_info.jpeg) 213 | 214 | 至此 PVE 的安装步骤已经完成。 215 | 216 | -------------------------------------------------------------------------------- /src/debian/debian_ts_nftables.conf: -------------------------------------------------------------------------------- 1 | #!/usr/sbin/nft -f 2 | 3 | # This configuration file is customized by fox, 4 | # Optimize nftables rules for local TS server. 
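# Assumptions: eth0 is the LAN-facing interface and tailscale0 is the
# Tailscale tunnel interface; rename them to match your environment.
# The table is declared, flushed, then redefined below, so re-running this
# file replaces the previous ruleset instead of appending duplicate rules.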
5 | 6 | table inet router 7 | flush table inet router 8 | 9 | table inet router { 10 | 11 | # 12 | # Flowtable 13 | # 14 | 15 | flowtable ft { 16 | hook ingress priority filter; 17 | devices = { eth0 }; 18 | counter; 19 | } 20 | 21 | 22 | # 23 | # Filter rules 24 | # 25 | 26 | chain input { 27 | type filter hook input priority filter; policy drop; 28 | iif "lo" accept comment "defconf: accept traffic from loopback" 29 | ct state vmap { established : accept, related : accept } comment "defconf: handle inbound flows" 30 | tcp flags & (fin | syn | rst | ack) == syn jump syn_flood comment "defconf: rate limit new TCP connections" 31 | iifname "eth0" jump input_lan comment "defconf: handle LAN IPv4 / IPv6 input traffic" 32 | iifname "tailscale0" jump input_tailscale comment "tsconf: handle TS IPv4 / IPv6 input traffic" 33 | } 34 | 35 | chain forward { 36 | type filter hook forward priority filter; policy drop; 37 | ct state established,related flow add @ft; 38 | ct state vmap { established : accept, related : accept } comment "defconf: handle forwarded flows" 39 | iifname "eth0" jump forward_lan comment "defconf: handle LAN IPv4 / IPv6 forward traffic" 40 | iifname "tailscale0" jump forward_tailscale comment "tsconf: handle TS IPv4 / IPv6 forward traffic" 41 | } 42 | 43 | chain output { 44 | type filter hook output priority filter; policy accept; 45 | oif "lo" accept comment "defconf: accept traffic towards loopback" 46 | ct state vmap { established : accept, related : accept } comment "defconf: handle outbound flows" 47 | oifname "eth0" jump output_lan comment "defconf: handle LAN IPv4 / IPv6 output traffic" 48 | oifname "tailscale0" jump output_tailscale comment "tsconf: handle TS IPv4 / IPv6 output traffic" 49 | } 50 | 51 | chain syn_flood { 52 | limit rate 50/second burst 100 packets return comment "defconf: accept new TCP connections below rate-limit" 53 | counter drop comment "defconf: drop excess new TCP connections" 54 | } 55 | 56 | chain input_lan { 57 | ct status dnat accept comment "lanconf: accept port redirect" 58 | jump accept_from_lan 59 | } 60 | 61 | chain forward_lan { 62 | jump accept_to_tailscale comment "tsconf: accept LAN to TS forwarding" 63 | ct status dnat accept comment "lanconf: accept port forwards" 64 | jump accept_to_lan 65 | } 66 | 67 | chain output_lan { 68 | jump accept_to_lan 69 | } 70 | 71 | chain accept_from_lan { 72 | iifname "eth0" accept comment "defconf: accept LAN IPv4 / IPv6 traffic" 73 | } 74 | 75 | chain accept_to_lan { 76 | meta nfproto ipv4 oifname "eth0" ct state invalid counter drop comment "defconf: prevent LAN NATv4 leakage" 77 | oifname "eth0" accept comment "defconf: accept LAN IPv4 / IPv6 traffic" 78 | } 79 | 80 | chain input_tailscale { 81 | jump accept_from_tailscale 82 | } 83 | 84 | chain forward_tailscale { 85 | jump accept_to_lan comment "tsconf: accept TS to LAN forwarding" 86 | jump accept_to_tailscale 87 | } 88 | 89 | chain output_tailscale { 90 | jump accept_to_tailscale 91 | } 92 | 93 | chain accept_from_tailscale { 94 | meta nfproto ipv4 iifname "tailscale0" counter accept comment "tsconf: accept TS IPv4 traffic" 95 | meta nfproto ipv6 iifname "tailscale0" counter accept comment "tsconf: accept TS IPv6 traffic" 96 | } 97 | 98 | chain accept_to_tailscale { 99 | meta nfproto ipv4 oifname "tailscale0" counter accept comment "tsconf: accept TS IPv4 traffic" 100 | meta nfproto ipv6 oifname "tailscale0" counter accept comment "tsconf: accept TS IPv6 traffic" 101 | } 102 | 103 | 104 | # 105 | # NAT rules 106 | # 107 | 108 | chain dstnat { 
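# Redirect DNS queries (TCP/UDP port 53) coming in from eth0 or tailscale0
# to this host's local resolver via dstnat_lan below.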
109 | type nat hook prerouting priority dstnat; policy accept; 110 | iifname { "eth0", "tailscale0" } meta l4proto { tcp, udp } th dport domain jump dstnat_lan comment "defconf: handle LAN IPv4 / IPv6 dstnat traffic" 111 | } 112 | 113 | chain srcnat { 114 | type nat hook postrouting priority srcnat; policy accept; 115 | oifname "eth0" jump srcnat_lan comment "defconf: handle LAN IPv4 / IPv6 srcnat traffic" 116 | } 117 | 118 | chain dstnat_lan { 119 | meta nfproto ipv4 meta l4proto { tcp, udp } th dport domain counter redirect to domain comment "lanconf: LAN IPv4 DNS redirect" 120 | meta nfproto ipv6 meta l4proto { tcp, udp } th dport domain counter redirect to domain comment "lanconf: LAN IPv6 DNS redirect" 121 | } 122 | 123 | chain srcnat_lan { 124 | meta nfproto ipv4 counter masquerade comment "defconf: masquerade LAN IPv4 traffic" 125 | } 126 | 127 | 128 | # 129 | # Mangle rules 130 | # 131 | 132 | chain mangle_postrouting { 133 | type filter hook postrouting priority mangle; policy accept; 134 | oifname "eth0" tcp flags syn / fin,syn,rst tcp option maxseg size set rt mtu comment "defconf: zone LAN IPv4 / IPv6 egress MTU fixing" 135 | } 136 | 137 | chain mangle_forward { 138 | type filter hook forward priority mangle; policy accept; 139 | iifname "eth0" tcp flags syn / fin,syn,rst tcp option maxseg size set rt mtu comment "defconf: zone LAN IPv4 / IPv6 ingress MTU fixing" 140 | } 141 | 142 | } 143 | 144 | -------------------------------------------------------------------------------- /04.PVE创建模板虚拟机.md: -------------------------------------------------------------------------------- 1 | ## 0.前期准备 2 | 3 | 将虚拟机制作成模板,可在后续新建虚拟机时快速从模板中创建,减少系统安装配置时间。 4 | 5 | 该虚拟机模板主要用作内网 DNS 服务器,由 `Adguard Home` 或 `SmartDNS` 提供 DNS 解析服务。 6 | 7 | 本文将使用 Debian 的云镜像 `debian-13-generic-amd64.qcow2` 作为模板虚拟机的镜像。 8 | 9 | 访问 [Debian Official Cloud Images](https://cloud.debian.org/images/cloud/) 官方网站,下载最新版 `Trixie` 云镜像以及对应的校验文件。 10 | 11 | ![下载镜像](img/p04/download_generic_image_qcow2.jpeg) 12 | 13 | ## 1.创建虚拟机 14 | 15 | 登录 PVE 管理后台,点击页面右上角 `创建虚拟机` ,进入虚拟机创建流程。 16 | 17 | ### 1.1.常规 18 | 19 | 勾选底部 `高级` 选项,显示完整的配置参数,节点即 “本机” ,`VM ID` 和 `名称` 可自定义。 20 | 21 | ![虚拟机名称](img/p04/vm_id.jpeg) 22 | 23 | ### 1.2.操作系统 24 | 25 | 无需使用任何安装介质,客户机操作系统类别选择 `Linux` , 版本选择 `6.x - 2.6 Kernel` 即可。 26 | 27 | ![虚拟机操作系统](img/p04/vm_os.jpeg) 28 | 29 | ### 1.3.系统 30 | 31 | SCSI 控制器保持默认 `VirtIO SCSI single` ,机型可选 `q35` ,并勾选 `Qemu代理` 选项。 32 | 33 | 固件可选 `OVMF (UEFI)` ,并勾选 `添加 TPM` 选项,存储路径需根据实际情况进行调整,演示为 `local-lvm` 。 34 | 35 | `预注册密钥` 可启用,启用后将预装特定于发行版和 Microsoft 标准的安全启动密钥。 36 | 37 | ![虚拟机系统](img/p04/vm_system.jpeg) 38 | 39 | ### 1.4.磁盘 40 | 41 | 因使用 Debian 云镜像制作虚拟机模板,此处需删除所有 `磁盘` 。 42 | 43 | ![虚拟机磁盘](img/p04/vm_hd.jpeg) 44 | 45 | ### 1.5.CPU 46 | 47 | CPU `类别` 选择 `host` ,`插槽` 与 `核心` 数根据物理 CPU 核心数进行酌情设置。 48 | 49 | 若 PVE 服务器内有多颗物理 CPU ,则推荐勾选 `启用NUMA` 选项。 50 | 51 | ![虚拟机CPU](img/p04/vm_cpu.jpeg) 52 | 53 | ### 1.6.内存 54 | 55 | 内存一般 `2G` 足够使用,取消勾选 `Ballooning设备` 。 56 | 57 | 对于 `Kernel Samepage Merging (KSM)` 内核同页合并功能,建议保留勾选状态。 58 | 59 | ![虚拟机内存](img/p04/vm_mem.jpeg) 60 | 61 | ### 1.7.网络 62 | 63 | 通常情况下,模板虚拟机的 `桥接` 参数设为 PVE 管理口所在虚拟网桥即可。 64 | 65 | 因在 [02.PVE初始化配置](./02.PVE初始化配置.md) 中曾创建未桥接物理网口的内部网桥,故可根据实际需求选择是否使用纯内部网桥作为该参数值。 66 | 67 | ![虚拟机网口](img/p04/vm_network_port.jpeg) 68 | 69 | 多数情况下,无需使用 PVE 内建防火墙,因此 **取消勾选** 网络设备的 `防火墙` 选项。 70 | 71 | 推荐在 `Multiqueue` 处根据前面设置的 CPU 核心数进行网卡多队列设置,设置比例为 1:1 。 72 | 73 | 即有 n 个 CPU 核心,此处多队列也设置为 n 。 74 | 75 | ![虚拟机网卡多队列](img/p04/vm_network_queue.jpeg) 76 | 77 | ### 1.8.确认 78 | 79 | 接下来查看设置总览,确认无误后即可点击 `完成` 。 80 | 81 | 
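补充:若偏好命令行,上述向导中的常规、系统、CPU、内存与网络设置也可在 PVE 终端通过 `qm create` 一次性完成。以下仅为等效示意,其中 VM ID `1001`、虚拟机名称 `DNST01`、存储 `local-lvm`、网桥 `vmbr0`、核心数与多队列数 `2` 均为假设值,请按实际环境调整。

```bash
## 命令行创建虚拟机示意(参数需按实际环境替换)
$ qm create 1001 --name DNST01 --ostype l26 \
    --machine q35 --bios ovmf \
    --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
    --tpmstate0 local-lvm:1,version=v2.0 \
    --scsihw virtio-scsi-single --agent enabled=1 \
    --cpu host --sockets 1 --cores 2 --numa 0 \
    --memory 2048 --balloon 0 \
    --net0 virtio,bridge=vmbr0,queues=2,firewall=0
```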
![虚拟机确认](img/p04/vm_confirm.jpeg) 82 | 83 | ## 2.调整硬件参数 84 | 85 | ### 2.1.删除光驱 86 | 87 | 查看虚拟机详情页,在虚拟机 `硬件` 配置页面,移除其 `CD/DVD驱动器` 。 88 | 89 | ![虚拟机删除光驱](img/p04/vm_delete_cd.jpeg) 90 | 91 | ### 2.2.导入镜像文件 92 | 93 | 使用终端工具登录 PVE 服务器,并进入 `/tmp` 目录,执行以下命令创建一个目录。 94 | 95 | ```bash 96 | ## 创建存放 Debian 云镜像的临时目录 97 | $ mkdir -p /tmp/Debian 98 | 99 | ## 进入目录 100 | $ cd /tmp/Debian 101 | ``` 102 | 103 | 将 Debian 云镜像传输到该目录,并检查 `hash` 。 104 | 105 | ```bash 106 | ## 下载云镜像校验文件 107 | $ curl -LR -O https://cloud.debian.org/images/cloud/trixie/latest/SHA512SUMS 108 | 109 | ## 下载云镜像 110 | $ curl -LR -O https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2 111 | 112 | ## 检查文件是否存在 113 | $ ls -lah 114 | 115 | ## 显示校验文件内容 116 | $ cat SHA512SUMS 117 | 118 | ## 计算文件 hash 119 | $ sha512sum debian-13-generic-amd64.qcow2 120 | ``` 121 | 122 | 确认无误后,将镜像文件导入刚才创建的虚拟机,命令中的 `VM ID` 需要根据实际情况替换,演示为 `1001` 。 123 | 124 | ```bash 125 | ## 将 qcow2 镜像导入虚拟机中 126 | $ qm importdisk 1001 debian-13-generic-amd64.qcow2 local-lvm 127 | 128 | #### 镜像导入示例输出 129 | unused0: successfully imported disk 'local-lvm:vm-1001-disk-2' 130 | ``` 131 | 132 | 磁盘导入成功后,虚拟机硬件列表中将显示一块未使用的磁盘设备,可鼠标 **双击** 该设备进行配置调整。 133 | 134 | ![虚拟机新磁盘](img/p04/vm_unused_hd.jpeg) 135 | 136 | 当宿主机使用 SSD 作为物理存储设备,并且虚拟磁盘采用 Thin Provisioning (精简置备)模式时,可考虑开启以下选项: 137 | 138 | - `Discard` ( `丢弃` ) 选项,有助于存储空间回收。 139 | 140 | - `SSD Emulation` ( `SSD 仿真` ) 选项,让虚拟机将虚拟磁盘视为 SSD 存储设备。 141 | 142 | 在弹出的对话框中,确认 `IO thread` 选项为 **勾选** 状态,并点击 `添加` 。 143 | 144 | ![虚拟机使用该磁盘](img/p04/vm_enable_hd.jpeg) 145 | 146 | 导入的镜像只有 `3G` 磁盘空间,为了后续方便使用,需要对磁盘进行扩容。 147 | 148 | 鼠标 **单击** 选中该磁盘,选择页面顶部 `磁盘操作` 菜单的子菜单 `调整大小` 。 149 | 150 | ![虚拟机磁盘扩容](img/p04/vm_hd_resize.jpeg) 151 | 152 | 在弹出的对话框中,给该磁盘增加 `21G` 磁盘空间。 153 | 154 | ![虚拟机磁盘增加18G](img/p04/vm_hd_scale_up.jpeg) 155 | 156 | ### 2.3.添加 CloudInit 157 | 158 | 为确保 Cloud-Init 能正常初始化系统,需为模板虚拟机添加 `CloudInit 设备` 。 159 | 160 | 点击顶部 `添加` 菜单,选择 `CloudInit 设备` 。 161 | 162 | ![虚拟机添加ci](img/p04/vm_cloudinit.jpeg) 163 | 164 | `总线/设备` 选择 `SCSI` ,编号为 `1` ,存储路径需根据实际情况进行调整,演示为 `local-lvm` 。 165 | 166 | ![虚拟机ci参数](img/p04/vm_ci_details.jpeg) 167 | 168 | ### 2.4.添加 VirtIO RNG 169 | 170 | 为确保系统拥有充足的可用熵,需为模板虚拟机添加 `VirtIO RNG` 熵源。 171 | 172 | 点击顶部 `添加` 菜单,选择 `VirtIO RNG` 。 173 | 174 | ![虚拟机添加RNG](img/p04/vm_rng.jpeg) 175 | 176 | `熵源` 选择 `/dev/urandom` ,其余参数保持默认即可。 177 | 178 | ![虚拟机RNG参数](img/p04/vm_rng_details.jpeg) 179 | 180 | 虚拟机硬件设备修改完成后,如下图所示。 181 | 182 | ![虚拟机全部硬件](img/p04/vm_hardware_all.jpeg) 183 | 184 | ## 3.调整配置参数 185 | 186 | 进入左侧虚拟机 `选项` 页面,可以看到当前虚拟机的配置参数。 187 | 188 | 通常情况下,模板虚拟机的配置参数需要修改以下内容: 189 | 190 | 1. 开机自启动(模板无需修改,克隆的虚拟机需要修改) 191 | 192 | 2. 启动/关机顺序(模板无需修改,克隆的虚拟机需要修改) 193 | 194 | 3. 引导顺序(仅模板需要修改) 195 | 196 | 4. 
使用平板指针(仅模板需要修改) 197 | 198 | ### 3.1.修改引导顺序 199 | 200 | 鼠标 **双击** `引导顺序` 选项,进入编辑界面。 201 | 202 | 在 `scsi0` 设备处勾选 “已启用” 复选框,并通过排序功能将其拖拽至首位,点击 `OK` 即可。 203 | 204 | ![虚拟机引导顺序](img/p04/vm_boot.jpeg) 205 | 206 | ### 3.2.设置平板指针 207 | 208 | 关闭 `使用平板指针` 选项,可以一定程度上降低虚拟机的 CPU 使用率。 209 | 210 | ![虚拟机平板指针](img/p04/vm_tablet.jpeg) 211 | 212 | ## 4.设置 Cloud-Init 213 | 214 | 进入左侧虚拟机 `Cloud-Init` 页面,可以看到当前虚拟机的初始化参数。 215 | 216 | ### 4.1.自动配置 IPv6 217 | 218 | 根据之前的网络规划,内网的 IP 地址段为 `172.16.1.0/24` ,因此模板虚拟机的参数如下。 219 | 220 | |参数|值|说明| 221 | |--|--|--| 222 | |用户|`fox`|新系统的管理员账户| 223 | |密码|`********`|使用强密码| 224 | |DNS 域|`fox.internal`|内网域名(可选)| 225 | |DNS 服务器|`172.16.1.1`|本机 DNS 服务器| 226 | |SSH 公钥|`无`|使用秘钥登录服务器,暂不使用| 227 | |升级程序包|`是`|启动时更新软件包,保持默认即可| 228 | |IP 配置 (net0)|`ip=172.16.1.250/24,gw=172.16.1.1,ip6=auto`|模板的 IP 设置| 229 | 230 | **额外说明:** 231 | 232 | 1. 修改 `Cloud-Init` 参数时,需要在虚拟机关机情况下修改才会生效。 233 | 234 | 2. `DNS服务器` 参数支持输入多个 IPv4 / IPv6 地址,使用空格隔开。 235 | 236 | 3. 当前 `DNS服务器` 参数为主路由 LAN 口 IPv4 地址,确保虚拟机能正常联网。 237 | 238 | ![CI网络配置](img/p04/vm_ci_dns.jpeg) 239 | 240 | `Cloud-Init` 的 `IP 配置` ,IPv4 使用 `静态` 地址,IPv6 使用 `SLAAC` 自动配置,如下图所示。 241 | 242 | ![CI网络配置](img/p04/vm_ci_network_slaac.jpeg) 243 | 244 | ### 4.2.手动配置 IPv6 245 | 246 | 当主路由配置了 IPv6 ULA 网段,且希望指定内网 DNS 服务器的 IPv6 ULA 地址时,需要调整 `Cloud-Init` 参数。 247 | 248 | 本文 IPv6 ULA 演示地址段为 `fdac::/64` ,`DNS 服务器` 和 `IP 配置` 参数调整如下。 249 | 250 | |参数|值|说明| 251 | |--|--|--| 252 | |DNS域|`fox.internal`|内网域名(可选)| 253 | |DNS服务器|`172.16.1.1 fdac::1`|本机 DNS 服务器| 254 | |IP配置(net0)|`ip=172.16.1.250/24,gw=172.16.1.1,ip6=fdac::fa/64`|模板的 IP 设置| 255 | 256 | `DNS 服务器` 参数中需要加入主路由 LAN 口 IPv6 ULA 地址。 257 | 258 | ![CI网络配置](img/p04/vm_ci_dns_ula.jpeg) 259 | 260 | `IP 配置` 参数,IPv4 使用 `静态` 地址,IPv6 同样使用 `静态` 地址,如下图所示。 261 | 262 | IPv6 使用 `静态` 地址后,并不影响虚拟机通过主路由获取 IPv6 GUA 公网地址。 263 | 264 | ![CI网络配置](img/p04/vm_ci_network_static.jpeg) 265 | 266 | ## 5.设置备注信息 267 | 268 | 进入左侧虚拟机 `概要` 页面,修改虚拟机的备注信息。 269 | 270 | ```bash 271 | ### 服务器信息 272 | 273 | - 系统: Debian 13 274 | 275 | - 用途: 内网 DNS 服务器 ( 模板 ) 276 | 277 | - 自启: 否 278 | 279 | - 用户: fox 280 | 281 | - IPv4: 172.16.1.250/24 282 | 283 | - IPv6: SLAAC 284 | 285 | ``` 286 | 287 | ![虚拟机备注](img/p04/vm_notes.jpeg) 288 | 289 | 至此,模板虚拟机创建完成,可将该虚拟机开机。 290 | 291 | -------------------------------------------------------------------------------- /02.PVE初始化配置.md: -------------------------------------------------------------------------------- 1 | ## 1.更换软件源 2 | 3 | 在上一篇文章 [01.PVE系统安装](./01.PVE系统安装.md) 中,从刚装好的 PVE 系统中获取了系统的一些参数。 4 | 5 | ![PVE系统信息](img/p01/pve_sys_info.jpeg) 6 | 7 | 此处显示出 PVE 底层使用的是 Debian 的系统,代号为 `trixie` 。 8 | 9 | 为了后续能对 PVE 系统进行升级,需要更换其镜像仓库,也就是大家熟知的软件源。 10 | 11 | 当前处于 “离线” 安装状态,更换软件源后,因无网络连接,无法执行系统更新操作。 12 | 13 | ### 1.1.系统软件源 14 | 15 | 使用终端工具登录到 PVE 服务器,查看现有软件源配置文件。 16 | 17 | ```bash 18 | ## 查看系统默认软件源 19 | $ cat /etc/apt/sources.list.d/debian.sources 20 | 21 | #### 系统默认软件源示例输出 22 | Types: deb 23 | URIs: http://deb.debian.org/debian/ 24 | Suites: trixie trixie-updates 25 | Components: main contrib non-free-firmware 26 | Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg 27 | 28 | Types: deb 29 | URIs: http://security.debian.org/debian-security/ 30 | Suites: trixie-security 31 | Components: main contrib non-free-firmware 32 | Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg 33 | ``` 34 | 35 | 这里将使用 [中国科技大(USTC)](http://mirrors.ustc.edu.cn/help/proxmox.html) 的镜像仓库进行替换,使用如下命令。 36 | 37 | ```bash 38 | ## 配置 PVE 系统默认软件源脚本 39 | 40 | $ cat > /etc/apt/sources.list.d/debian.sources < /etc/apt/sources.list.d/pve-no-subscription.sources < 
/etc/apt/sources.list.d/pve-ceph-no-subscription.sources < /etc/apt/sources.list.d/debian.sources < to keep the current choice[*], or type selection number: 2 164 | update-alternatives: using /usr/bin/nvim to provide /usr/bin/editor (editor) in manual mode 165 | ``` 166 | 167 | ### 1.6.调整内核参数 168 | 169 | 由于该虚拟机将作为内网 DNS 服务器的克隆模板,需调整内核参数以优化性能。 170 | 171 | 编辑 **内核参数** 配置文件,执行以下命令。 172 | 173 | ```bash 174 | ## 编辑 内核参数 配置文件 175 | $ sudo editor /etc/sysctl.d/99-sysctl.conf 176 | ``` 177 | 178 | 在配置文件中输入以下内容,注意配置中间的空格。 179 | 180 | ```bash 181 | # This configuration file is customized by fox, 182 | # Optimize sysctl parameters for local DNS server. 183 | 184 | kernel.panic = 20 185 | kernel.panic_on_oops = 1 186 | 187 | net.core.default_qdisc = fq_codel 188 | 189 | # Other adjustable system parameters 190 | 191 | net.core.netdev_budget = 1200 192 | net.core.netdev_budget_usecs = 10000 193 | 194 | net.core.somaxconn = 8192 195 | net.core.rmem_max = 33554432 196 | net.core.wmem_max = 33554432 197 | 198 | net.ipv4.igmp_max_memberships = 256 199 | 200 | net.ipv4.tcp_challenge_ack_limit = 1000 201 | net.ipv4.tcp_fastopen = 3 202 | net.ipv4.tcp_fin_timeout = 30 203 | net.ipv4.tcp_keepalive_time = 120 204 | net.ipv4.tcp_max_syn_backlog = 2048 205 | net.ipv4.tcp_notsent_lowat = 131072 206 | net.ipv4.tcp_rmem = 4096 4194304 33554432 207 | net.ipv4.tcp_wmem = 4096 4194304 33554432 208 | 209 | net.ipv6.conf.all.use_tempaddr = 0 210 | net.ipv6.conf.default.use_tempaddr = 0 211 | 212 | ``` 213 | 214 | 保存该配置文件后,重启系统或者执行以下命令让配置生效。 215 | 216 | ```bash 217 | ## 让内核参数生效 218 | $ sudo sysctl --system 219 | ``` 220 | 221 | ### 1.7.调整系统时间 222 | 223 | 默认情况下 Debian 云镜像的系统时间需要调整,执行以下命令将系统时区设置为中国时区。 224 | 225 | ```bash 226 | ## 设置系统时区 227 | $ sudo timedatectl set-timezone Asia/Shanghai 228 | 229 | ## 检查系统时间 230 | $ date -R 231 | 232 | #### 系统时间示例输出 233 | Wed, 27 Aug 2025 17:02:17 +0800 234 | ``` 235 | 236 | Debian 云镜像默认使用 `systemd-timesyncd.service` 同步时间,且需要调整为使用国内 NTP 服务器。 237 | 238 | 调整 NTP 服务器参数,执行以下命令。 239 | 240 | ```bash 241 | ## 创建 NTP 配置目录 242 | $ sudo mkdir -p /etc/systemd/timesyncd.conf.d 243 | 244 | ## 创建 NTP 配置文件 245 | $ sudo editor /etc/systemd/timesyncd.conf.d/10-server-ntp.conf 246 | ``` 247 | 248 | 在配置文件中输入以下内容,并保存。 249 | 250 | ```bash 251 | # This configuration file is customized by fox, 252 | # Optimize system NTP server. 253 | 254 | [Time] 255 | NTP=ntp.aliyun.com ntp.tencent.com cn.pool.ntp.org 256 | 257 | ``` 258 | 259 | 保存该配置文件后,需重启 `systemd-timesyncd.service` 服务,并再次检查系统 NTP 服务器地址。 260 | 261 | ```bash 262 | ## 重启 systemd-timesyncd.service 263 | $ sudo systemctl restart systemd-timesyncd.service 264 | 265 | ## 检查系统 NTP 服务器 266 | $ sudo systemctl status systemd-timesyncd.service 267 | ``` 268 | 269 | 如果输出以下类似内容,则表示系统 NTP 服务设置正确。 270 | 271 | ```bash 272 | #### NTP 服务示例输出 273 | ● systemd-timesyncd.service - Network Time Synchronization 274 | Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; enabled; preset: enabled) 275 | Active: active (running) since Wed 2025-08-27 17:03:13 CST; 6s ago 276 | Invocation: 7dcb974725ac43638a4ae3df51644f84 277 | Docs: man:systemd-timesyncd.service(8) 278 | Main PID: 4540 (systemd-timesyn) 279 | Status: "Contacted time server 203.107.6.88:123 (ntp.aliyun.com)." 280 | Tasks: 2 (limit: 2317) 281 | Memory: 1.5M (peak: 2.2M) 282 | CPU: 261ms 283 | CGroup: /system.slice/systemd-timesyncd.service 284 | └─4540 /usr/lib/systemd/systemd-timesyncd 285 | 286 | Aug 27 17:03:12 DNST01 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
287 | Aug 27 17:03:13 DNST01 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 288 | Aug 27 17:03:13 DNST01 systemd-timesyncd[4540]: Contacted time server 203.107.6.88:123 (ntp.aliyun.com). 289 | Aug 27 17:03:13 DNST01 systemd-timesyncd[4540]: Initial clock synchronization to Wed 2025-08-27 17:03:13.200663 CST. 290 | ``` 291 | 292 | ### 1.8.配置自动更新 293 | 294 | 配置 Debian 云镜像的系统自动更新,与配置 PVE 系统自动更新方法基本一致,参阅 [03.PVE系统调整](./03.PVE系统调整.md) 。 295 | 296 | 配置系统更新之前,先检查当前系统定时器状态。 297 | 298 | ```bash 299 | ## 检查系统定时器 300 | $ sudo systemctl status apt-daily-upgrade.timer 301 | ``` 302 | 303 | 配置系统自动更新策略,执行以下命令,使用键盘 `左右方向键` 进行选择,`回车键` 进行确认。 304 | 305 | ```bash 306 | ## 配置自动更新策略 307 | $ sudo dpkg-reconfigure -plow unattended-upgrades 308 | 309 | ## 选择 “是” 310 | 311 | ``` 312 | 313 | 进一步调整 `20auto-upgrades` 配置文件。 314 | 315 | ```bash 316 | ## 编辑 20auto-upgrades 配置文件 317 | $ sudo editor /etc/apt/apt.conf.d/20auto-upgrades 318 | ``` 319 | 320 | 清空当前全部配置项后,输入以下内容,并保存。 321 | 322 | 配置文件中,用来控制更新周期的参数为 `APT::Periodic::Unattended-Upgrade` ,`3` 表示更新周期为 `3` 天。 323 | 324 | ```bash 325 | ## 系统更新周期配置项 326 | 327 | APT::Periodic::Update-Package-Lists "1"; 328 | APT::Periodic::Unattended-Upgrade "3"; 329 | APT::Periodic::AutocleanInterval "1"; 330 | APT::Periodic::CleanInterval "1"; 331 | 332 | ``` 333 | 334 | 进一步调整 `50unattended-upgrades` 配置文件。 335 | 336 | ```bash 337 | ## 编辑 50unattended-upgrades 配置文件 338 | $ sudo editor /etc/apt/apt.conf.d/50unattended-upgrades 339 | ``` 340 | 341 | 因为该配置文件很长,完整的配置文件可查看 [debian_dns_50unattended_upgrades.conf](./src/debian/debian_dns_50unattended_upgrades.conf) 以便对比。 342 | 343 | ```bash 344 | ## 删除以下行前面的注释符 // ,代表启用 345 | 346 | "origin=Debian,codename=${distro_codename}-updates"; 347 | 348 | ## 在配置文件末尾增加以下内容,代表启用,并调整参数 349 | 350 | Unattended-Upgrade::AutoFixInterruptedDpkg "true"; 351 | 352 | Unattended-Upgrade::Remove-Unused-Kernel-Packages "true"; 353 | 354 | Unattended-Upgrade::Remove-New-Unused-Dependencies "true"; 355 | 356 | Unattended-Upgrade::Remove-Unused-Dependencies "true"; 357 | 358 | Unattended-Upgrade::Automatic-Reboot "true"; 359 | 360 | Unattended-Upgrade::Automatic-Reboot-Time "03:00"; 361 | 362 | ``` 363 | 364 | 系统自动更新配置文件修改完成后,需调整自动更新定时器,执行以下命令。 365 | 366 | ```bash 367 | ## 配置系统定时器 368 | $ sudo systemctl edit apt-daily-upgrade.timer 369 | ``` 370 | 371 | 根据配置文件中的提示,在中间空白处填入以下内容。 372 | 373 | ```bash 374 | ## 定时器配置项 375 | 376 | [Timer] 377 | OnCalendar= 378 | OnCalendar=02:00 379 | RandomizedDelaySec=0 380 | 381 | ``` 382 | 383 | 设置完成后,重启自动更新定时器并检查其状态,执行以下命令。 384 | 385 | 在输出结果中,看到系统自动更新的触发时间为 `02:00` 则表示设置正确。 386 | 387 | ```bash 388 | ## 重启触发器 389 | $ sudo systemctl restart apt-daily-upgrade.timer 390 | 391 | ## 再次检查触发器状态 392 | $ sudo systemctl status apt-daily-upgrade.timer 393 | ``` 394 | 395 | ### 1.9.配置定时任务 396 | 397 | 本步骤为可选操作,主要设置系统定时重启。 398 | 399 | ```bash 400 | ## 查看系统定时任务 401 | $ sudo crontab -l 402 | 403 | ## 编辑系统定时任务,编辑器选择 nano 404 | $ sudo crontab -e 405 | ``` 406 | 407 | 在配置文件末尾,增加以下内容。 408 | 409 | ```bash 410 | ## 定时任务配置项 411 | 412 | 30 4 8,24 * * /usr/sbin/reboot 413 | 414 | ``` 415 | 416 | ### 1.10.清理系统 417 | 418 | Debian 模板虚拟机已经配置完成,在将其转换为模板前需要对系统进行清理。 419 | 420 | ```bash 421 | ## 清理系统软件包 422 | $ sudo bash -c 'apt clean && apt autoclean && apt autoremove --purge' 423 | 424 | ## 清理系统缓存 425 | $ sudo bash -c 'find /var/cache/apt/ /var/cache/smartdns/ /var/lib/apt/lists/ /tmp/ -type f -print -delete' 426 | 427 | ## 重置机器标识符 428 | $ sudo truncate -s 0 /etc/machine-id /var/lib/dbus/machine-id 429 | 430 | ## 清理系统日志 431 | $ sudo bash 
-c 'find /var/log/ -type f -print -delete' 432 | 433 | ## 清理命令历史记录文件 434 | $ rm -rvf ~/.bash_history ~/.zsh_history ~/.zcompdump* && history -c 435 | 436 | ## 关闭系统 437 | $ sudo shutdown now 438 | ``` 439 | 440 | ## 2.虚拟机转为模板 441 | 442 | 在模板虚拟机关机后,进入 PVE 的 WEB 管理界面。 443 | 444 | 在左侧虚拟机列表中,鼠标 **右键单击** 模板虚拟机,在弹出的菜单中选择 `转换成模板` 。 445 | 446 | 需要说明的是,虚拟机 `转换成模板` 的操作具有不可逆性。 447 | 448 | 若系统配置存在问题,仅能在该模板克隆的新虚拟机中修改,或删除现有模板并重新制作。 449 | 450 | ![虚拟机转模板](img/p05/vm_to_template.jpeg) 451 | 452 | 至此 Debian 虚拟机模板制作完成。 453 | 454 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 
202 | -------------------------------------------------------------------------------- /06.PVE制作DNS服务器.md: -------------------------------------------------------------------------------- 1 | ## 1.克隆虚拟机 2 | 3 | 在上一篇文章 [05.PVE制作虚拟机模板](./05.PVE制作虚拟机模板.md) 中,已经制作好了虚拟机模板。 4 | 5 | 接下来将基于该模板克隆新虚拟机,并通过安装 DNS 服务器软件,完成内网 DNS 服务器的搭建。 6 | 7 | 鼠标 **右键单击** 虚拟机模板,在弹出的菜单中选择 `克隆` 。 8 | 9 | ![克隆虚拟机](img/p06/vm_clone.jpeg) 10 | 11 | 在弹出的虚拟机克隆对话框中,根据实际情况及下方表格内容,修改虚拟机参数。 12 | 13 | |参数|值|说明| 14 | |--|--|--| 15 | |目标节点|`node01`|当前 PVE 服务器节点| 16 | |VM ID|`201`|可自定义,不能与现存虚拟机 `VM ID` 相同| 17 | |名称|`DNS01`|可自定义,`Cloud-Init` 将使用该名称作为虚拟机 `hostname` | 18 | |模式|`完整克隆`|选择虚拟机的克隆模式| 19 | |目标存储|`local-lvm`|克隆出的虚拟机文件存储位置| 20 | 21 | 修改参数后,点击 `克隆` ,即可使用该模板克隆出新的虚拟机。 22 | 23 | ![克隆虚拟机参数](img/p06/vm_clone_vmid.jpeg) 24 | 25 | 26 | ## 2.调整 Cloud-Init 27 | 28 | 克隆出来的虚拟机的 `Cloud-Init` 参数默认与模板完全一致。 29 | 30 | 根据 **内部网络地址** 规划,内网 DNS 服务器 IPv4 地址规划如下: 31 | 32 | - `172.16.1.2/24` 33 | 34 | - `172.16.1.3/24` 35 | 36 | 因此需要调整新虚拟机的 `Cloud-Init` 参数。 37 | 38 | - `IP配置` 中的 IPv4 地址参数为 `172.16.1.2/24` ,网关保持 `172.16.1.1` 不变。 39 | 40 | - `IP配置` 中的 IPv6 地址参数为 `auto` ,网关保持为空。 41 | 42 | 需要注意的是,如果修改了 `用户` 参数,相当于新建了一个系统管理员,之前设置的 `oh-my-zsh` 需要在新管理员下重新设置。 43 | 44 | ![调整新虚拟机Cloud-Init](img/p06/vm_clone_ci_slaac.jpeg) 45 | 46 | 当主路由配置了 IPv6 ULA 网段,内网 DNS 服务器 IPv6 ULA 地址规划如下: 47 | 48 | - `fdac::2/64` 49 | 50 | - `fdac::3/64` 51 | 52 | 此时需进一步调整新虚拟机的 `Cloud-Init` 参数,让该虚拟机使用指定的 IPv6 ULA 地址,参数如下。 53 | 54 | ![调整新虚拟机Cloud-Init](img/p06/vm_clone_ci_static.jpeg) 55 | 56 | ## 3.调整配置参数 57 | 58 | 在 [04.PVE创建模板虚拟机](./04.PVE创建模板虚拟机.md) 中提到过,新虚拟机需要修改配置参数才能自动启动。 59 | 60 | 进入左侧虚拟机 `选项` 页面,将虚拟机 `开机自启动` 参数设置为 `是` 。 61 | 62 | ![克隆虚拟机自动启动](img/p06/vm_clone_autostart.jpeg) 63 | 64 | 鼠标 **双击** `启动/关机顺序` 选项,可调整虚拟机的自动开机参数。 65 | 66 | `启动/关机顺序` 为 `2` ,表示该虚拟机第 `2` 个启动,倒数第 `2` 个关机。 67 | 68 | `启动延时` 为 `15` ,表示该虚拟机启动后,延迟 `15` 秒再启动下一个虚拟机。 69 | 70 | ![克隆虚拟机自动启动顺序](img/p06/vm_clone_autostart_order.jpeg) 71 | 72 | ## 4.调整系统端口 73 | 74 | 设置完成后,将该虚拟机开机,使用终端工具登录,并执行以下命令检查端口占用。 75 | 76 | ```bash 77 | ## 检查 53 端口占用 78 | $ sudo lsof -n -i :53 79 | 80 | #### 端口占用示例输出 81 | COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME 82 | systemd-r 357 systemd-resolve 18u IPv4 3293 0t0 UDP 127.0.0.53:domain 83 | systemd-r 357 systemd-resolve 19u IPv4 3294 0t0 TCP 127.0.0.53:domain (LISTEN) 84 | systemd-r 357 systemd-resolve 20u IPv4 3295 0t0 UDP 127.0.0.54:domain 85 | systemd-r 357 systemd-resolve 21u IPv4 3296 0t0 TCP 127.0.0.54:domain (LISTEN) 86 | ``` 87 | 88 | 当前系统 `53` 端口被 `systemd-resolved.service` 占用,会导致设置 DNS 服务时监听端口失败。 89 | 90 | 为了正常使用 `53` 端口,需要对 `systemd-resolved.service` 进行配置,执行以下命令。 91 | 92 | ```bash 93 | ## 创建 systemd-resolved 配置目录 94 | $ sudo mkdir -p /etc/systemd/resolved.conf.d 95 | 96 | ## 创建 systemd-resolved 配置文件 97 | $ sudo editor /etc/systemd/resolved.conf.d/10-server-dns.conf 98 | ``` 99 | 100 | 在配置文件中输入以下内容,并保存。 101 | 102 | ```bash 103 | # This configuration file is customized by fox, 104 | # Optimize system resolve parameters for local DNS server. 105 | 106 | [Resolve] 107 | DNS=127.0.0.1 108 | DNS=::1 109 | DNSStubListener=no 110 | 111 | ``` 112 | 113 | 保存该配置文件后,还需调整系统 `resolv.conf` 配置文件,执行以下命令。 114 | 115 | ```bash 116 | ## 创建 resolv.conf 软链接 117 | $ sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf 118 | ``` 119 | 120 | 配置完成后,需重启 `systemd-resolved.service` 服务,并再次检查系统 `53` 端口占用。 121 | 122 | ```bash 123 | ## 重启 systemd-resolved.service 124 | $ sudo systemctl restart systemd-resolved.service 125 | ``` 126 | 127 | ## 5. 
Adguard Home 128 | 129 | `Adguard Home` 将采用 `snap` 形式安装,执行以下命令。 130 | 131 | ```bash 132 | ## 安装 Snap 133 | $ sudo apt install snapd 134 | 135 | ## 安装 Adguard Home 136 | $ sudo snap install adguard-home 137 | ``` 138 | 139 | ### 5.1.自动更新 140 | 141 | 查看 `Snap` 当前的更新策略,执行以下命令。 142 | 143 | ```bash 144 | ## 显示当前 Snap 自动更新设置 145 | $ sudo snap refresh --time 146 | ``` 147 | 148 | 将 `Snap` 自动更新时间设置为每天 `2:30-3:30` 和 `14:30-15:30` 两个时间段。 149 | 150 | ```bash 151 | ## 修改 Snap 自动更新时间 152 | $ sudo snap set system refresh.timer=2:30-3:30,14:30-15:30 153 | 154 | ## 其他 Snap 自动更新时间设置语法参考 155 | $ sudo snap set system refresh.timer=mon,2:30,,fri,2:30 156 | ``` 157 | 158 | ### 5.2.配置 Adguard Home 159 | 160 | 关于 `Adguard Home` 配置相关内容,请参阅 [Adguard Home 折腾手记](https://gitee.com/callmer/agh_toss_notes) 。 161 | 162 | ### 5.3.定时任务 163 | 164 | 本步骤为可选操作,主要用于设置 `Adguard Home` 定时重启。 165 | 166 | ```bash 167 | ## 查看系统定时任务 168 | $ sudo crontab -l 169 | 170 | ## 编辑系统定时任务,编辑器选择 nano 171 | $ sudo crontab -e 172 | ``` 173 | 174 | 在配置文件末尾,增加以下内容。 175 | 176 | ```bash 177 | ## 定时任务配置项 178 | 179 | 30 4 * * * /usr/bin/snap restart adguard-home 180 | 181 | ``` 182 | 183 | ## 6. SmartDNS 184 | 185 | 若需使用 `SmartDNS` 代替 `Adguard Home` ,可使用 Debian 官方源进行安装,但其版本通常较为 “过时” 。 186 | 187 | 因此,更推荐使用其 Github 仓库中的最新稳定版进行安装,官方仓库请参阅 [pymumu/smartdns](https://github.com/pymumu/smartdns/releases) 。 188 | 189 | 多数情况下,`SmartDNS` 足以提供良好的 DNS 解析服务,但为了进一步优化 DNS 解析流程,推荐与 `Dnsmasq` 嵌套使用。 190 | 191 | ```bash 192 | ## 安装 Dnsmasq 193 | $ sudo apt install dnsmasq 194 | ``` 195 | 196 | 检查 `dnsmasq.service` 服务状态,确保该服务开机自启。 197 | 198 | ```bash 199 | ## 检查 dnsmasq.service 200 | $ sudo systemctl status dnsmasq.service 201 | 202 | ## 设置 dnsmasq.service 开机自启 203 | $ sudo systemctl enable dnsmasq.service 204 | 205 | ## 停止 dnsmasq.service 206 | $ sudo systemctl stop dnsmasq.service 207 | ``` 208 | 209 | 下载 `SmartDNS` 当前最新版本时,请根据系统架构选择合适的版本,执行以下命令。 210 | 211 | ```bash 212 | ## 创建存放 SmartDNS 安装包的临时目录 213 | $ mkdir -p /tmp/SmartDNS 214 | 215 | ## 进入目录 216 | $ cd /tmp/SmartDNS 217 | 218 | ## 下载 SmartDNS 安装包 219 | $ curl -LR -O https://github.com/pymumu/smartdns/releases/download/Release47.1/smartdns.1.2025.11.09-1443.x86_64-linux-all.tar.gz 220 | 221 | ## 解压缩 SmartDNS 安装包 222 | $ tar zxf smartdns.*.x86_64-linux-all.tar.gz 223 | 224 | ## 进入安装包目录 225 | $ cd smartdns 226 | 227 | ## 设置脚本可执行权限 228 | $ chmod +x ./install 229 | 230 | ## 安装 SmartDNS 231 | $ sudo ./install -i 232 | ``` 233 | 234 | 后续 `SmartDNS` 推出新版本需原位升级时:先下载最新版本,再将以下命令替代原安装命令执行,升级后现有配置不会丢失。 235 | 236 | ```bash 237 | ## 升级 SmartDNS 238 | $ sudo ./install -U 239 | ``` 240 | 241 | 修改 `SmartDNS` 配置之前,需检查 `smartdns.service` 服务状态,确保该服务开机自启。 242 | 243 | ```bash 244 | ## 检查 smartdns.service 245 | $ sudo systemctl status smartdns.service 246 | 247 | ## 设置 smartdns.service 开机自启 248 | $ sudo systemctl enable smartdns.service 249 | ``` 250 | 251 | ### 6.1. 
SmartDNS 附加配置 252 | 253 | 本步骤为可选操作,通过安装 `SmartDNS` 附加配置文件,以达到屏蔽广告或加速中国境内域名解析速度的目的。 254 | 255 | 若需使用 `SmartDNS` 屏蔽广告,则需下载广告规则配置文件。 256 | 257 | ```bash 258 | ## 创建 SmartDNS 配置目录 259 | $ sudo mkdir -p /etc/smartdns.d 260 | 261 | ## 下载广告规则配置文件 262 | $ sudo curl -LR -o /etc/smartdns.d/anti-ad.smartdns.conf https://anti-ad.net/anti-ad-for-smartdns.conf 263 | ``` 264 | 265 | `SmartDNS` 的加速规则通过 `bash` 脚本安装,脚本生成的配置文件位于 `/etc/smartdns.d` 目录。 266 | 267 | 关于脚本的详细介绍,请参阅 [SmartDNS China List 安装脚本](https://gitee.com/callmer/smartdns_china_list_installer) 。 268 | 269 | ```bash 270 | ## 下载加速规则安装脚本 271 | $ sudo curl -LR -o /usr/local/bin/smartdns-plugin.sh https://gitee.com/callmer/smartdns_china_list_installer/raw/main/smartdns_plugin.sh 272 | 273 | ## 设置脚本可执行权限 274 | $ sudo chmod +x /usr/local/bin/smartdns-plugin.sh 275 | 276 | ## 设置脚本文件防篡改 277 | $ sudo chattr +i /usr/local/bin/smartdns-plugin.sh 278 | 279 | ## 执行脚本 280 | $ sudo /usr/local/bin/smartdns-plugin.sh 281 | ``` 282 | 283 | ### 6.2.定时任务 284 | 285 | 本步骤为可选操作,主要用于设置 `SmartDNS` 定时更新附加配置文件和定时重启。 286 | 287 | ```bash 288 | ## 编辑系统定时任务,编辑器选择 nano 289 | $ sudo crontab -e 290 | ``` 291 | 292 | 在配置文件末尾,增加以下内容。 293 | 294 | ```bash 295 | ## 定时任务配置项 296 | 297 | 20 9 * * * /usr/bin/curl --retry-connrefused --retry 5 --retry-delay 5 --retry-max-time 60 -fsSLR -o /etc/smartdns.d/anti-ad.smartdns.conf https://anti-ad.net/anti-ad-for-smartdns.conf 298 | 299 | 30 9 * * * /usr/bin/systemctl restart smartdns.service 300 | 301 | ``` 302 | 303 | 若使用了 `SmartDNS` 加速规则的安装脚本,由于脚本自带服务重启功能,因此定时任务可修改如下。 304 | 305 | ```bash 306 | ## 定时任务配置项 307 | 308 | 20 9 * * * /usr/bin/curl --retry-connrefused --retry 5 --retry-delay 5 --retry-max-time 60 -fsSLR -o /etc/smartdns.d/anti-ad.smartdns.conf https://anti-ad.net/anti-ad-for-smartdns.conf 309 | 310 | 30 9 * * * /usr/local/bin/smartdns-plugin.sh 311 | ``` 312 | 313 | ### 6.3. SmartDNS 主配置 314 | 315 | `SmartDNS` 配置较为复杂,可按需制定各类 DNS 请求规则,建议先查阅官方提供的 [配置指导](https://pymumu.github.io/smartdns/config/basic-config/) 和 [配置选项](https://pymumu.github.io/smartdns/configuration/) 。 316 | 317 | 修改 `SmartDNS` 主配置文件之前,建议关闭 `SmartDNS` 并清理 DNS 缓存文件。 318 | 319 | ```bash 320 | ## 关闭 smartdns.service 321 | $ sudo systemctl stop smartdns.service 322 | 323 | ## 清理进程标识文件 324 | $ sudo rm -rvf /var/run/smartdns.pid /run/smartdns.pid 325 | ``` 326 | 327 | `SmartDNS` 的主配置文件一般位于 `/etc/smartdns` 目录下,修改配置文件之前,执行以下命令将其备份。 328 | 329 | ```bash 330 | ## 备份 SmartDNS 主配置文件 331 | $ sudo mv /etc/smartdns/smartdns.conf /etc/smartdns/smartdns.conf.bak 332 | ``` 333 | 334 | 创建 `SmartDNS` 主配置文件,执行以下命令。 335 | 336 | ```bash 337 | ## 创建 SmartDNS 主配置文件 338 | $ sudo editor /etc/smartdns/smartdns.conf 339 | ``` 340 | 341 | 在编辑器对话框中输入以下内容,并保存。 342 | 343 | **额外说明:** 344 | 345 | - `SmartDNS` 端口监听参数为 `6053@lo` ,WebUI 端口监听参数为 `5380` ,请根据实际情况进行调整 346 | 347 | - WebUI 默认用户名密码为 `admin/password` ,若需公网访问建议将默认密码修改为强密码 348 | 349 | ```bash 350 | # This configuration file is customized by fox, 351 | # Optimize SmartDNS parameters for local DNS server. 352 | # 353 | # For use common DNS server as upstream DNS server, 354 | # please modify 'server' parameter according to 355 | # your network environment. 
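# A plain 'server' entry (as in the examples below) queries the upstream
# over UDP port 53; the active upstreams at the end of this file use
# server-tcp / server-tls / server-quic / server-https instead.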
356 | # 357 | # eg: 358 | # server 223.5.5.5 359 | # server 180.184.1.1 360 | # server 119.29.29.29 361 | # server 114.114.114.114 362 | # server 2402:4e00:: 363 | # server 2400:3200::1 364 | 365 | conf-file /etc/smartdns.d/*.conf 366 | 367 | log-level notice 368 | log-console yes 369 | log-size 16M 370 | log-num 8 371 | 372 | audit-enable yes 373 | audit-SOA yes 374 | audit-size 16M 375 | audit-num 8 376 | 377 | plugin smartdns_ui.so 378 | smartdns-ui.ip https://[::]:5380 379 | smartdns-ui.token-expire 1800 380 | smartdns-ui.max-query-log-age 604800 381 | 382 | bind [::]:6053@lo 383 | bind-tcp [::]:6053@lo 384 | 385 | cache-size 32768 386 | max-query-limit 1024 387 | max-reply-ip-num 16 388 | 389 | prefetch-domain yes 390 | 391 | serve-expired yes 392 | serve-expired-ttl 129600 393 | serve-expired-reply-ttl 30 394 | serve-expired-prefetch-time 28800 395 | 396 | rr-ttl-min 60 397 | rr-ttl-max 28800 398 | rr-ttl-reply-max 14400 399 | 400 | server-tcp 180.184.1.1 -bootstrap-dns 401 | server-tcp 101.226.4.6 -bootstrap-dns 402 | server-tcp 2400:3200::1 -bootstrap-dns 403 | server-tcp 2400:3200:baba::1 -bootstrap-dns 404 | 405 | server-tls dot.360.cn -fallback 406 | server-quic dns.alidns.com -fallback 407 | server-https https://doh.360.cn/dns-query -fallback 408 | 409 | server-tls dot.pub 410 | server-tls dns.alidns.com 411 | server-https https://doh.pub/dns-query 412 | server-https https://dns.alidns.com/dns-query 413 | 414 | ``` 415 | 416 | ### 6.4.配置 Dnsmasq 417 | 418 | `Dnsmasq` 的主配置文件一般位于 `/etc` 目录下,修改配置文件之前,执行以下命令。 419 | 420 | ```bash 421 | ## 创建 Dnsmasq 配置目录 422 | $ sudo mkdir -p /etc/dnsmasq.d 423 | ``` 424 | 425 | 创建 `Dnsmasq` 主配置文件,执行以下命令。 426 | 427 | ```bash 428 | ## 创建 Dnsmasq 主配置文件 429 | $ sudo editor /etc/dnsmasq.d/10-server-dnsmasq.conf 430 | ``` 431 | 432 | 在编辑器对话框中输入以下内容,并保存。 433 | 434 | **额外说明:** 435 | 436 | - 请根据系统内存使用情况,调整缓存参数 `cache-size` 437 | 438 | - 配置文件中监听的网卡名为 `eth0` ,请根据实际情况进行调整 439 | 440 | - 配置文件中内网域名为 `fox.internal` ,请根据实际情况进行调整 441 | 442 | - `Dnsmasq` 有且仅有 `SmartDNS` 作为上游 DNS 服务器 443 | 444 | ```bash 445 | # This configuration file is customized by fox, 446 | # Optimize dnsmasq parameters for local DNS server. 
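# Assumptions: the LAN interface is eth0, the internal domain is
# fox.internal, and SmartDNS (127.0.0.1#6053) is the only upstream;
# adjust these values to match your environment.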
447 | 448 | # Main Config 449 | 450 | conf-dir=/etc/dnsmasq.d/,*.conf 451 | conf-file=/etc/dnsmasq.conf 452 | 453 | log-facility=/var/log/dnsmasq.log 454 | log-async=20 455 | 456 | cache-size=2048 457 | max-cache-ttl=7200 458 | fast-dns-retry=1800 459 | 460 | interface=eth0 461 | rebind-domain-ok=/fox.internal/ 462 | 463 | bind-dynamic 464 | bogus-priv 465 | domain-needed 466 | no-hosts 467 | no-negcache 468 | no-resolv 469 | rebind-localhost-ok 470 | stop-dns-rebind 471 | 472 | # DNS Filter 473 | 474 | server=/alt/ 475 | server=/bind/ 476 | server=/example/ 477 | server=/home.arpa/ 478 | server=/internal/ 479 | server=/invalid/ 480 | server=/lan/ 481 | server=/local/ 482 | server=/localhost/ 483 | server=/onion/ 484 | server=/test/ 485 | 486 | # DNS Server 487 | 488 | server=/fox.internal/172.16.1.1 489 | server=127.0.0.1#6053 490 | 491 | ``` 492 | 493 | 至此,新虚拟机已配置完成,重启后即可作为内网 DNS 服务器使用。 494 | 495 | -------------------------------------------------------------------------------- /03.PVE系统调整.md: -------------------------------------------------------------------------------- 1 | ## 0.必要条件 2 | 3 | 在上一篇文章 [02.PVE初始化配置](./02.PVE初始化配置.md) 中,已初始化了 PVE 系统,接下来需要对 PVE 系统进一步调整。 4 | 5 | 在 PVE 系统调整之前,请确认必要的软件包已经安装完成,本文后续命令均在 SSH 终端下执行。 6 | 7 | Debian 系统默认编辑器为 `nano` ,推荐将其更换为 `neovim` ,关于 [Neovim](https://neovim.io/) 的基础使用方法,请参阅:[Neovim 基础教程](https://cn.bing.com/search?q=Neovim+%E5%9F%BA%E7%A1%80%E6%95%99%E7%A8%8B) 。 8 | 9 | ```bash 10 | ## 修改系统默认编辑器(选择 nvim 所在条目,即可将 neovim 设为默认编辑器) 11 | $ update-alternatives --config editor 12 | 13 | #### 系统默认编辑器示例输出 14 | There are 3 choices for the alternative editor (providing /usr/bin/editor). 15 | 16 | Selection Path Priority Status 17 | ------------------------------------------------------------ 18 | * 0 /bin/nano 40 auto mode 19 | 1 /bin/nano 40 manual mode 20 | 2 /usr/bin/nvim 30 manual mode 21 | 3 /usr/bin/vim.tiny 15 manual mode 22 | 23 | Press to keep the current choice[*], or type selection number: 2 24 | update-alternatives: using /usr/bin/nvim to provide /usr/bin/editor (editor) in manual mode 25 | ``` 26 | 27 | ## 1.系统时区 28 | 29 | 如果在安装 PVE 系统时选错了时区,导致系统时间和北京时间不一致,可以执行以下命令修正。 30 | 31 | 若输出结果和北京时间一致,则代表修改正确。 32 | 33 | ```bash 34 | ## 修改系统时区 35 | $ timedatectl set-timezone Asia/Shanghai 36 | 37 | ## 检查系统时间 38 | $ date -R 39 | 40 | #### 系统时间示例输出 41 | Wed, 27 Aug 2025 15:12:21 +0800 42 | ``` 43 | 44 | Debian 系统常用 `systemd-timesyncd.service` 来同步时间,而 PVE 系统使用 `chrony.service` 来同步时间。 45 | 46 | 为了使用国内的 NTP 服务器,需要对 `chrony.service` 进行配置,执行以下命令。 47 | 48 | ```bash 49 | ## 编辑 chrony 配置文件 50 | $ editor /etc/chrony/chrony.conf 51 | ``` 52 | 53 | 在编辑器对话框中,将 `pool 2.debian.pool.ntp.org iburst` “注释” 掉,并添加国内的 NTP 服务器。 54 | 55 | ```bash 56 | ## chrony 配置项 57 | 58 | # Use Debian vendor zone. 59 | # pool 2.debian.pool.ntp.org iburst ## 在这行前面增加注释符 # 来注释 60 | 61 | # Use Custom vendor zone. 
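# 'iburst' sends a burst of requests at startup to speed up the first sync.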
62 | pool ntp.aliyun.com iburst 63 | pool ntp.tencent.com iburst 64 | pool cn.pool.ntp.org iburst 65 | 66 | ``` 67 | 68 | 保存该配置文件后,需重启 `chrony.service` ,并再次检查系统 NTP 服务器地址。 69 | 70 | ```bash 71 | ## 重启 chrony.service 72 | $ systemctl restart chrony.service 73 | 74 | ## 检查系统 NTP 服务器 75 | $ chronyc sources -V 76 | 77 | #### 系统 NTP 服务器示例输出 78 | MS Name/IP address Stratum Poll Reach LastRx Last sample 79 | =============================================================================== 80 | ^* 203.107.6.88 2 6 17 0 -875us[-2496us] +/- 22ms 81 | ^- 106.55.184.199 2 6 17 3 -1331us[-1331us] +/- 50ms 82 | ^- time.neu.edu.cn 2 6 17 8 +935us[ +935us] +/- 23ms 83 | ^- 119.28.206.193 2 6 65 5 +33us[ +33us] +/- 65ms 84 | ^- electrode.felixc.at 2 6 17 11 -1320us[-1320us] +/- 126ms 85 | ^- dns1.synet.edu.cn 1 6 17 13 -44us[ -44us] +/- 22ms 86 | ``` 87 | 88 | ## 2. CPU 调度器 89 | 90 | 安装 `linux-cpupower` 后,需检查 CPU 当前调度器。 91 | 92 | ```bash 93 | ## 检查 CPU 当前调度器 94 | $ cpupower -c all frequency-info 95 | 96 | #### 设备 CPU - J4125 示例输出 97 | analyzing CPU 0: 98 | driver: intel_cpufreq 99 | CPUs which run at the same hardware frequency: 0 100 | CPUs which need to have their frequency coordinated by software: 0 101 | maximum transition latency: 20.0 us 102 | hardware limits: 800 MHz - 2.70 GHz 103 | available cpufreq governors: conservative ondemand userspace powersave performance schedutil 104 | current policy: frequency should be within 800 MHz and 2.70 GHz. 105 | The governor "performance" may decide which speed to use 106 | within this range. 107 | current CPU frequency: Unable to call hardware 108 | current CPU frequency: 2.60 GHz (asserted by call to kernel) 109 | boost state support: 110 | Supported: yes 111 | Active: yes 112 | 113 | #### 设备 CPU - N100 示例输出 114 | analyzing CPU 0: 115 | driver: intel_pstate 116 | CPUs which run at the same hardware frequency: 0 117 | CPUs which need to have their frequency coordinated by software: 0 118 | maximum transition latency: Cannot determine or is not supported. 119 | hardware limits: 800 MHz - 3.30 GHz 120 | available cpufreq governors: performance powersave 121 | current policy: frequency should be within 800 MHz and 3.30 GHz. 122 | The governor "performance" may decide which speed to use 123 | within this range. 124 | current CPU frequency: Unable to call hardware 125 | current CPU frequency: 2.00 GHz (asserted by call to kernel) 126 | boost state support: 127 | Supported: yes 128 | Active: yes 129 | ``` 130 | 131 | 这里面主要关注两个点: 132 | 133 | - `driver` :CPU 调频驱动方案,例如 `intel_cpufreq` 或 `intel_pstate` 134 | 135 | - `governor` :调频策略中使用的调度器,例如 `performance` 136 | 137 | CPU 驱动一般不建议手动调整,而 `governor` 后面的参数表示 CPU 当前调度器设置。 138 | 139 | 接下来,需要了解 CPU 支持的调度器有哪些,执行以下命令。 140 | 141 | ```bash 142 | ## 检查 CPU 调度器支持情况 143 | $ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors 144 | 145 | #### 设备 CPU - J4125 示例输出 146 | conservative ondemand userspace powersave performance schedutil 147 | 148 | #### 设备 CPU - N100 示例输出 149 | performance powersave 150 | ``` 151 | 152 | 根据 CPU 所使用的驱动不同,可选调度器也不同,至于每种调度器有什么优劣,欢迎大家深度挖掘。 153 | 154 | - CPU 驱动为 `intel_cpufreq` 时,推荐使用 `schedutil` 调度器。 155 | 156 | - CPU 驱动为 `intel_pstate` 时,推荐使用 `powersave` 调度器。 157 | 158 | 本文使用 `powersave` 调度器创建 `cpupower` 的配置文件。 159 | 160 | ```bash 161 | ## 创建 cpupower 默认配置文件 162 | $ editor /etc/default/cpupower 163 | ``` 164 | 165 | 在配置文件中输入以下内容,并保存。 166 | 167 | ```bash 168 | # This configuration file is customized by fox, 169 | # Optimize system CPU governors. 
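# CPUPOWER_START_OPTS is applied when cpupower.service starts at boot,
# CPUPOWER_STOP_OPTS when the service is stopped.
# With the intel_cpufreq driver, 'schedutil' can replace 'powersave' here.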
170 | 171 | CPUPOWER_START_OPTS="frequency-set -g powersave" 172 | CPUPOWER_STOP_OPTS="frequency-set -g performance" 173 | 174 | ``` 175 | 176 | 进一步创建 `cpupower` 服务配置文件,以满足系统自动化设置需求。 177 | 178 | ```bash 179 | ## 创建 cpupower.service 配置文件 180 | $ editor /etc/systemd/system/cpupower.service 181 | ``` 182 | 183 | 在服务配置文件中输入以下内容,并保存。 184 | 185 | ```bash 186 | # This configuration file is customized by fox, 187 | # Optimize for cpupower systemd service. 188 | 189 | [Unit] 190 | Description=Apply cpupower configuration 191 | ConditionVirtualization=!container 192 | After=syslog.target 193 | 194 | [Service] 195 | Type=oneshot 196 | EnvironmentFile=/etc/default/cpupower 197 | ExecStart=/usr/bin/cpupower $CPUPOWER_START_OPTS 198 | ExecStop=/usr/bin/cpupower $CPUPOWER_STOP_OPTS 199 | RemainAfterExit=yes 200 | 201 | [Install] 202 | WantedBy=multi-user.target 203 | 204 | ``` 205 | 206 | 由于修改了服务项,需要执行以下命令进行重载。 207 | 208 | ```bash 209 | ## 服务重载 210 | $ systemctl daemon-reload 211 | ``` 212 | 213 | 执行以下命令让 `cpupower` 服务开机自启动。 214 | 215 | ```bash 216 | ## 设置 cpupower 服务开机自启 217 | $ systemctl enable cpupower.service 218 | ``` 219 | 220 | 修改完成后,需重启 PVE 服务器,并再次查看 CPU 调度器,检验配置文件是否生效。 221 | 222 | 这里提供两个额外命令,方便实时查看 CPU 当前频率和温度状况。 223 | 224 | ```bash 225 | ## 查看 CPU 当前频率 226 | $ watch cat /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq 227 | 228 | ## 查看内部温度 229 | $ watch -d sensors 230 | ``` 231 | 232 | ## 3. PVE 定时重启 233 | 234 | 有时需要让 PVE 服务器周期性的定时重启,则可执行以下命令。 235 | 236 | 参数表示每月 `1` 、 `16` 号的 `02:30` 执行系统重启命令。 237 | 238 | ```bash 239 | ## 查看系统定时任务 240 | $ crontab -l 241 | 242 | ## 编辑系统定时任务,编辑器选择 nano 243 | $ crontab -e 244 | ``` 245 | 246 | 在配置文件末尾,增加以下内容。 247 | 248 | ```bash 249 | ## 定时任务配置项 250 | 251 | 30 2 1,16 * * /usr/sbin/reboot 252 | 253 | ``` 254 | 255 | ## 4. PVE 自动更新 256 | 257 | ### 4.1.系统定时器 258 | 259 | 配置系统自动更新之前,需检查系统当前定时器状态。 260 | 261 | 后续将手动调整该定时器的时间,使其每 `5` 天的 `01:30` 进行触发。 262 | 263 | ```bash 264 | ## 检查系统定时器 265 | $ systemctl status apt-daily-upgrade.timer 266 | 267 | #### 系统定时器示例输出 268 | ● apt-daily-upgrade.timer - Daily apt upgrade and clean activities 269 | Loaded: loaded (/usr/lib/systemd/system/apt-daily-upgrade.timer; enabled; preset: enabled) 270 | Active: active (waiting) since Wed 2025-08-27 13:41:10 CST; 1h 37min ago 271 | Invocation: 26ebb62a4f6545998522fff8c4104b3e 272 | Trigger: Thu 2025-08-28 06:14:37 CST; 14h left 273 | Triggers: ● apt-daily-upgrade.service 274 | 275 | Aug 27 13:41:10 node01 systemd[1]: Started apt-daily-upgrade.timer - Daily apt upgrade and clean activities. 
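#### 注:Trigger 行为定时器下一次的触发时间,按 4.3 节重设后将固定为每日 01:30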
276 | ``` 277 | 278 | ### 4.2.配置更新策略 279 | 280 | 执行以下命令,启用系统自动更新。 281 | 282 | 执行命令后,使用 “左右” 方向键进行选择,“回车” 键进行确认。 283 | 284 | ```bash 285 | ## 配置自动更新策略 286 | $ dpkg-reconfigure -plow unattended-upgrades 287 | 288 | ## 选择 “是” 289 | 290 | ``` 291 | 292 | 进一步调整 `20auto-upgrades` 配置文件。 293 | 294 | ```bash 295 | ## 进入 apt 的配置目录 296 | $ cd /etc/apt/apt.conf.d 297 | 298 | ## 编辑 20auto-upgrades 配置文件 299 | $ editor /etc/apt/apt.conf.d/20auto-upgrades 300 | ``` 301 | 302 | 清空当前全部配置项后,输入以下内容,并保存。 303 | 304 | 配置文件中,用来控制更新周期的参数为 `APT::Periodic::Unattended-Upgrade` ,`5` 表示更新周期为 `5` 天。 305 | 306 | ```bash 307 | ## 系统更新周期配置项 308 | 309 | APT::Periodic::Update-Package-Lists "1"; 310 | APT::Periodic::Unattended-Upgrade "5"; 311 | APT::Periodic::AutocleanInterval "1"; 312 | APT::Periodic::CleanInterval "1"; 313 | 314 | ``` 315 | 316 | 进一步调整 `50unattended-upgrades` 配置文件。 317 | 318 | ```bash 319 | ## 编辑 50unattended-upgrades 配置文件 320 | $ editor /etc/apt/apt.conf.d/50unattended-upgrades 321 | ``` 322 | 323 | 配置文件中,被修改的参数解释如下: 324 | 325 | - 启用了 Debian `trixie-updates` 相关更新。 326 | 327 | - 增加并启用 PVE 自有仓库的更新。 328 | 329 | - 增加并启用 PVE Ceph 仓库的更新,请按需启用。 330 | 331 | - 自动修复被打断的 Dpkg 安装。 332 | 333 | - 自动移除无用的的内核包。 334 | 335 | - 自动移除因更新而出现的无用依赖包。 336 | 337 | - 自动移除以前的无用依赖包。 338 | 339 | - 自动重启:开启。 340 | 341 | - 自动重启时间:`02:30` 。 342 | 343 | 因为该配置文件很长,完整的配置文件可查看 [pve_50unattended_upgrades.conf](./src/pve/pve_50unattended_upgrades.conf) 以便对比。 344 | 345 | ```bash 346 | ## 删除以下行前面的注释符 // ,代表启用 347 | 348 | "origin=Debian,codename=${distro_codename}-updates"; 349 | 350 | ## 添加 PVE 系统更新项目 351 | 352 | "origin=Proxmox,codename=${distro_codename},label=Proxmox Debian Repository"; 353 | 354 | ## 按需添加 PVE Ceph 更新项目 355 | 356 | "origin=Proxmox,codename=${distro_codename},label=Proxmox Ceph 19 Squid Debian Repository"; 357 | 358 | ## 在配置文件末尾增加以下内容,代表启用,并调整参数 359 | 360 | Unattended-Upgrade::AutoFixInterruptedDpkg "true"; 361 | 362 | Unattended-Upgrade::Remove-Unused-Kernel-Packages "true"; 363 | 364 | Unattended-Upgrade::Remove-New-Unused-Dependencies "true"; 365 | 366 | Unattended-Upgrade::Remove-Unused-Dependencies "true"; 367 | 368 | Unattended-Upgrade::Automatic-Reboot "true"; 369 | 370 | Unattended-Upgrade::Automatic-Reboot-Time "02:30"; 371 | 372 | ``` 373 | 374 | ### 4.3.重设触发器 375 | 376 | 系统自动更新配置文件修改完成后,需要重设自动更新定时器,执行以下命令。 377 | 378 | 完整的配置文件可查看 [pve_apt_daily_upgrade.conf](./src/pve/pve_apt_daily_upgrade.conf) 以便对比。 379 | 380 | ```bash 381 | ## 配置系统定时器 382 | $ systemctl edit apt-daily-upgrade.timer 383 | ``` 384 | 385 | 根据配置文件中的提示,在中间空白处填入以下内容。 386 | 387 | ```bash 388 | ## 定时器配置项 389 | 390 | [Timer] 391 | OnCalendar= 392 | OnCalendar=01:30 393 | RandomizedDelaySec=0 394 | 395 | ``` 396 | 397 | 设置完成后,重启自动更新触发器。 398 | 399 | ```bash 400 | ## 重启触发器 401 | $ systemctl restart apt-daily-upgrade.timer 402 | 403 | ## 再次检查触发器状态 404 | $ systemctl status apt-daily-upgrade.timer 405 | 406 | #### 系统自动更新触发器示例输出 407 | ● apt-daily-upgrade.timer - Daily apt upgrade and clean activities 408 | Loaded: loaded (/usr/lib/systemd/system/apt-daily-upgrade.timer; enabled; preset: enabled) 409 | Drop-In: /etc/systemd/system/apt-daily-upgrade.timer.d 410 | └─override.conf 411 | Active: active (waiting) since Wed 2025-08-27 15:23:29 CST; 8s ago 412 | Invocation: e38354f06d284894b084a52e1f19af73 413 | Trigger: Thu 2025-08-28 01:30:00 CST; 10h left 414 | Triggers: ● apt-daily-upgrade.service 415 | 416 | Aug 27 15:23:29 node01 systemd[1]: Stopping apt-daily-upgrade.timer - Daily apt upgrade and clean activities... 
417 | Aug 27 15:23:29 node01 systemd[1]: Started apt-daily-upgrade.timer - Daily apt upgrade and clean activities. 418 | ``` 419 | 420 | ## 5.硬件直通 421 | 422 | ### 5.1.修改 Grub 423 | 424 | 参考官方文档 [qm_pci_passthrough](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough) 和 [Pci passthrough](https://pve.proxmox.com/wiki/Pci_passthrough) 开启 PVE 硬件直通功能。 425 | 426 | 使用终端工具登录到 PVE 服务器,编辑系统 `Grub` 的配置文件 `/etc/default/grub` 。 427 | 428 | ```bash 429 | ## 编辑 Grub 配置文件 430 | $ editor /etc/default/grub 431 | ``` 432 | 433 | 在编辑器对话框中修改 `GRUB_CMDLINE_LINUX_DEFAULT` 参数,注意命令中间的空格。 434 | 435 | ```bash 436 | ## Intel 处理器添加参数 437 | 438 | GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt" 439 | 440 | ``` 441 | 442 | 根据官方文档的说明,AMD 处理器下硬件直通功能将会自动打开,否则需要手动修改 `Grub` 配置文件。 443 | 444 | ```bash 445 | ## AMD 处理器添加参数 446 | 447 | GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt" 448 | 449 | ``` 450 | 451 | 修改并保存后,需要更新系统 `Grub` 。 452 | 453 | ```bash 454 | ## 更新系统 Grub 455 | $ update-grub 456 | ``` 457 | 458 | ### 5.2.添加内核模块 459 | 460 | 编辑 **内核模块** 配置文件,增加必要的系统模块,执行以下命令。 461 | 462 | ```bash 463 | ## 编辑系统配置文件 464 | $ editor /etc/modules-load.d/10-server-modules.conf 465 | ``` 466 | 467 | 在配置文件中输入以下内容,并保存。 468 | 469 | ```bash 470 | # This configuration file is customized by fox, 471 | # Optimize vfio related modules at system boot. 472 | 473 | vfio 474 | vfio_iommu_type1 475 | vfio_pci 476 | 477 | ``` 478 | 479 | 执行以下命令更新 `initramfs` ,更新完成后,建议重启 PVE 服务器。 480 | 481 | ```bash 482 | ## 更新 initramfs 483 | $ update-initramfs -u -k all 484 | ``` 485 | 486 | ### 5.3.检查硬件直通 487 | 488 | PVE 服务器重启完成后,再次使用终端工具登录,并执行以下命令检查硬件直通状态。 489 | 490 | 主要查看 `IOMMU` 、 `Directed I/O` 或 `Interrupt Remapping` 的启用状态。 491 | 492 | ```bash 493 | ## 检查系统硬件直通状态 494 | $ dmesg | grep -e DMAR -e IOMMU -e AMD-Vi 495 | 496 | #### 设备 CPU - J4125 示例输出 497 | [ 0.008356] ACPI: DMAR 0x000000007953E000 0000A8 (v01 INTEL GLK-SOC 00000003 BRXT 0100000D) 498 | [ 0.008406] ACPI: Reserving DMAR table memory at [mem 0x7953e000-0x7953e0a7] 499 | [ 0.055618] AMD-Vi: Unknown option - 'on' 500 | [ 0.220916] DMAR: Host address width 39 501 | [ 0.220919] DMAR: DRHD base: 0x000000fed64000 flags: 0x0 502 | [ 0.220939] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e 503 | [ 0.220944] DMAR: DRHD base: 0x000000fed65000 flags: 0x1 504 | [ 0.220954] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da 505 | [ 0.220960] DMAR: RMRR base: 0x000000794a9000 end: 0x000000794c8fff 506 | [ 0.220963] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff 507 | [ 0.220967] DMAR-IR: IOAPIC id 1 under DRHD base 0xfed65000 IOMMU 1 508 | [ 0.220970] DMAR-IR: HPET id 0 under DRHD base 0xfed65000 509 | [ 0.220973] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
510 | [ 0.223069] DMAR-IR: Enabled IRQ remapping in x2apic mode 511 | [ 0.493005] DMAR: No ATSR found 512 | [ 0.493008] DMAR: No SATC found 513 | [ 0.493010] DMAR: dmar0: Using Queued invalidation 514 | [ 0.493018] DMAR: dmar1: Using Queued invalidation 515 | [ 0.494092] DMAR: Intel(R) Virtualization Technology for Directed I/O 516 | ``` 517 | 518 | 检查系统 `IOMMU` 分组,执行以下命令。 519 | 520 | ```bash 521 | ## 检查 IOMMU group 522 | $ find /sys/kernel/iommu_groups/ -type l 523 | 524 | #### 设备 CPU - J4125 示例输出 525 | /sys/kernel/iommu_groups/17/devices/0000:04:00.0 526 | /sys/kernel/iommu_groups/7/devices/0000:00:13.3 527 | /sys/kernel/iommu_groups/15/devices/0000:02:00.0 528 | /sys/kernel/iommu_groups/5/devices/0000:00:13.0 529 | /sys/kernel/iommu_groups/13/devices/0000:00:1f.0 530 | /sys/kernel/iommu_groups/13/devices/0000:00:1f.1 531 | /sys/kernel/iommu_groups/3/devices/0000:00:0f.0 532 | /sys/kernel/iommu_groups/11/devices/0000:00:17.0 533 | /sys/kernel/iommu_groups/11/devices/0000:00:17.3 534 | /sys/kernel/iommu_groups/11/devices/0000:00:17.1 535 | /sys/kernel/iommu_groups/11/devices/0000:00:17.2 536 | /sys/kernel/iommu_groups/1/devices/0000:00:00.0 537 | /sys/kernel/iommu_groups/18/devices/0000:05:00.0 538 | /sys/kernel/iommu_groups/8/devices/0000:00:14.0 539 | /sys/kernel/iommu_groups/16/devices/0000:03:00.0 540 | /sys/kernel/iommu_groups/6/devices/0000:00:13.2 541 | /sys/kernel/iommu_groups/14/devices/0000:01:00.0 542 | /sys/kernel/iommu_groups/4/devices/0000:00:12.0 543 | /sys/kernel/iommu_groups/12/devices/0000:00:1c.0 544 | /sys/kernel/iommu_groups/2/devices/0000:00:0e.0 545 | /sys/kernel/iommu_groups/10/devices/0000:00:15.0 546 | /sys/kernel/iommu_groups/0/devices/0000:00:02.0 547 | /sys/kernel/iommu_groups/9/devices/0000:00:14.1 548 | ``` 549 | 550 | ## 6. BTRFS 调整 551 | 552 | **额外说明:** 553 | 554 | 1. 本节专为 `BTRFS` 单盘 `RAID0`(条带模式)安装的 PVE 系统设计,使用其他安装模式时,请跳过此节。 555 | 556 | 2. `BTRFS` 文件系统当前仍为技术预览状态,请谨慎操作。 557 | 558 | 3. 
有关在 PVE 中使用 `BTRFS` 的详情,请参阅 [Proxmox VE - BTRFS](https://pve.proxmox.com/wiki/BTRFS) 。 559 | 560 | 安装 PVE 时,若使用了 `BTRFS` 单盘 `RAID0` 的安装模式,系统默认未启用 swap 和 zstd 压缩,需要手动开启。 561 | 562 | 通常情况下,内存与 swap 的 **推荐** 比例为 `1:1` 。本机具有 `16GB` 内存,因此设置 `16GB` swap 空间。 563 | 564 | 执行以下命令,在 `BTRFS` 文件系统中创建子卷,并配置激活 swapfile 。 565 | 566 | ```bash 567 | ## 创建用于存放交换文件的子卷 568 | $ btrfs subvolume create /swap 569 | 570 | ## 在子卷中创建 16GB 的交换文件 571 | $ btrfs filesystem mkswapfile --size 16g --uuid clear /swap/swapfile 572 | 573 | ## 激活交换文件 574 | $ swapon /swap/swapfile 575 | ``` 576 | 577 | 此时还需进一步修改系统的 `fstab` 配置文件,以启用 `BTRFS` 的 zstd 压缩功能并确保 swap 在系统启动时自动激活。 578 | 579 | ```bash 580 | ## 编辑 fstab 配置文件 581 | $ editor /etc/fstab 582 | ``` 583 | 584 | `fstab` 为系统关键配置文件,直接影响系统启动,修改此文件时,请注意以下几点: 585 | 586 | - 仅修改根目录 `/` 对应的挂载选项,添加 `compress=zstd` 参数。 587 | 588 | - 在文件末尾新增一行,添加 swap 的自动挂载。 589 | 590 | - 请 **不要** 修改其余配置参数,尤其是设备的唯一标识符( `UUID` ),切勿修改。 591 | 592 | 修改完成后,示例如下。 593 | 594 | ```bash 595 | #### 系统 fstab 示例配置 596 | 597 | # 598 | 599 | UUID= / btrfs defaults,compress=zstd 0 1 600 | 601 | UUID= /boot/efi vfat defaults 0 1 602 | proc /proc proc defaults 0 0 603 | 604 | /swap/swapfile none swap defaults 0 0 605 | ``` 606 | 607 | ## 7.系统清理 608 | 609 | PVE 系统配置完成后,可执行以下命令,对系统进行清理。 610 | 611 | ```bash 612 | ## 清理系统软件包 613 | $ apt clean && apt autoclean && apt autoremove --purge 614 | 615 | ## 清理系统缓存 616 | $ bash -c 'find /var/cache/apt/ /var/lib/apt/lists/ /tmp/ -type f -print -delete' 617 | 618 | ## 清理系统日志 619 | $ bash -c 'find /var/log/ -type f -print -delete' 620 | 621 | ## 清理命令历史记录文件 622 | $ rm -rvf ~/.bash_history && history -c 623 | ``` 624 | 625 | 至此 PVE 的系统调整已经完成。 626 | -------------------------------------------------------------------------------- /07.PVE制作TS服务器.md: -------------------------------------------------------------------------------- 1 | ## 0.前期准备 2 | 3 | 某些业务场景下需要构建安全可靠的网络隧道,来打通异地内网环境或从外部访问内网的私有资源。 4 | 5 | 经过实际测试,当 TS 服务器具有 IPv6 GUA 地址时,能稳定建立隧道。 6 | 7 | 本文将使用 Debian 云镜像以及 `Tailscale` 来制作内网组网服务器。 8 | 9 | 对于虚拟机创建部分,请参考 [04.PVE创建模板虚拟机](./04.PVE创建模板虚拟机.md) ,其他 `Cloud-Init` 相关参数如下。 10 | 11 | |参数|值|说明| 12 | |--|--|--| 13 | |虚拟机名称|`SVR01`| TS 服务器 `主机名` | 14 | |DNS 域|`fox.internal`| TS 服务器 `Cloud-Init` | 15 | |DNS 服务器|`172.16.1.1`| TS 服务器 `Cloud-Init` | 16 | |IPv4|`172.16.1.4/24`| TS 服务器 `Cloud-Init` | 17 | |IPv4 网关|`172.16.1.1`| TS 服务器 `Cloud-Init` | 18 | |IPv6|`SLAAC`| TS 服务器 `Cloud-Init` | 19 | 20 | ## 1.配置系统 21 | 22 | 由于 TS 服务器具备路由功能,所以在配置方法和系统参数方面与内网 DNS 服务器有一些区别。 23 | 24 | ### 1.1.配置 SSH 25 | 26 | 与配置 Debian 模板虚拟机时一样,首先需要调整系统的 SSH 登录权限参数。 27 | 28 | 在虚拟机的命令行界面,使用 `vim` 编辑器编辑 `sshd` 服务的配置文件,执行以下命令。 29 | 30 | ```bash 31 | ## 编辑 SSH 配置文件 32 | $ sudo vim /etc/ssh/sshd_config.d/10-server-sshd.conf 33 | ``` 34 | 35 | 在配置文件中输入以下内容,并保存。 36 | 37 | ```bash 38 | ## SSH 配置项 39 | 40 | PasswordAuthentication yes 41 | PermitEmptyPasswords no 42 | UseDNS no 43 | 44 | ``` 45 | 46 | 修改完成后,需要重启 SSH 服务。 47 | 48 | ```bash 49 | ## 重启 ssh.service 50 | $ sudo systemctl restart ssh.service 51 | ``` 52 | 53 | ### 1.2.配置软件源 54 | 55 | 使用如下命令对 `debian.sources` 配置文件进行修改。 56 | 57 | ```bash 58 | ## 配置 Debian 系统默认软件源脚本 59 | 60 | $ sudo bash -c 'cat > /etc/apt/sources.list.d/debian.sources < /dev/null 95 | 96 | ## 添加 TS 软件源 97 | $ curl -fsSL https://pkgs.tailscale.com/stable/debian/trixie.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list 98 | ``` 99 | 100 | ### 1.3.安装软件 101 | 102 | 软件源设置完成后,需要更新系统,执行以下命令。 103 | 104 | ```bash 105 | ## 清理不必要的包 106 | $ sudo bash -c 'apt clean && apt autoclean && apt autoremove 
--purge' 107 | 108 | ## 更新软件源 109 | $ sudo apt update 110 | 111 | ## 更新系统 112 | $ sudo apt full-upgrade 113 | ``` 114 | 115 | 接下来安装系统必要软件,安装 `iperf3` 后,系统将询问是否将其作为系统服务开机自启,选择 `no` 即可。 116 | 117 | ```bash 118 | ## 安装系统软件 119 | $ sudo apt install qemu-guest-agent btop curl tmux logrotate cron neovim zsh git 120 | 121 | ## 安装系统自动更新工具 122 | $ sudo apt install unattended-upgrades powermgmt-base 123 | 124 | ## 安装网络工具 125 | $ sudo apt install ethtool dnsmasq conntrack nftables sshguard lsof knot-dnsutils 126 | 127 | ## 安装 TS 128 | $ sudo apt install tailscale 129 | 130 | ## 安装网络检测工具(可选) 131 | $ sudo apt install iftop iperf3 iperf 132 | 133 | ## 刷新 systemd 日志分类索引 134 | $ sudo journalctl --update-catalog 135 | 136 | ## 写入磁盘 137 | $ sudo sync 138 | ``` 139 | 140 | ### 1.4.配置 ZSH 141 | 142 | `Zsh` 是比 `Bash` 好用的 `Shell` 程序,使用 `oh-my-zsh` 进行配置。 143 | 144 | ```bash 145 | ## 使用清华大学镜像站安装 oh-my-zsh 146 | $ cd && git clone --depth=1 https://mirrors.tuna.tsinghua.edu.cn/git/ohmyzsh.git 147 | 148 | $ cd ohmyzsh/tools && REMOTE=https://mirrors.tuna.tsinghua.edu.cn/git/ohmyzsh.git sh install.sh 149 | 150 | ## 询问是否切换默认 shell,输入 Y 151 | 152 | #### 示例输出 153 | Time to change your default shell to zsh: 154 | Do you want to change your default shell to zsh? [Y/n] y 155 | 156 | ## oh-my-zsh 安装后清理 157 | $ cd && rm -rvf ohmyzsh .bash_history .zsh_history .shell.pre-oh-my-zsh 158 | ``` 159 | 160 | ### 1.5.配置默认编辑器 161 | 162 | Debian 系统默认编辑器为 `nano` ,推荐将默认编辑器更换为 `neovim` ,执行以下命令。 163 | 164 | ```bash 165 | ## 修改系统默认编辑器(选择 nvim 所在条目,即可将 neovim 设为默认编辑器) 166 | $ sudo update-alternatives --config editor 167 | 168 | #### 系统默认编辑器示例输出 169 | There are 4 choices for the alternative editor (providing /usr/bin/editor). 170 | 171 | Selection Path Priority Status 172 | ------------------------------------------------------------ 173 | * 0 /bin/nano 40 auto mode 174 | 1 /bin/nano 40 manual mode 175 | 2 /usr/bin/nvim 30 manual mode 176 | 3 /usr/bin/vim.basic 30 manual mode 177 | 4 /usr/bin/vim.tiny 15 manual mode 178 | 179 | Press to keep the current choice[*], or type selection number: 2 180 | update-alternatives: using /usr/bin/nvim to provide /usr/bin/editor (editor) in manual mode 181 | ``` 182 | 183 | ### 1.6.调整内核模块 184 | 185 | 编辑 **内核模块** 配置文件,执行以下命令。 186 | 187 | ```bash 188 | ## 创建 内核模块 配置文件 189 | $ sudo editor /etc/modules-load.d/10-server-modules.conf 190 | ``` 191 | 192 | 在配置文件中输入以下内容,并保存。 193 | 194 | ```bash 195 | # This configuration file is customized by fox, 196 | # Optimize netfilter related modules at system boot. 197 | 198 | nf_conntrack 199 | 200 | ``` 201 | 202 | ### 1.7.调整内核参数 203 | 204 | 编辑 **内核参数** 配置文件,执行以下命令。 205 | 206 | ```bash 207 | ## 编辑 内核参数 配置文件 208 | $ sudo editor /etc/sysctl.d/99-sysctl.conf 209 | ``` 210 | 211 | 在配置文件中输入以下内容,注意配置中间的空格。 212 | 213 | ```bash 214 | # This configuration file is customized by fox, 215 | # Optimize sysctl parameters for local TS server. 
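# kernel.panic / kernel.panic_on_oops below make the system reboot automatically
# 20 seconds after a kernel panic or oops instead of hanging.
# net.ipv4.ip_forward and net.ipv6.conf.*.forwarding are required so this
# TS node can forward traffic as a subnet router / exit node.
# fq_codel and bbr set the default queueing discipline and TCP congestion control.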
216 | 217 | kernel.panic = 20 218 | kernel.panic_on_oops = 1 219 | 220 | net.core.default_qdisc = fq_codel 221 | net.ipv4.tcp_congestion_control = bbr 222 | 223 | net.ipv4.ip_forward = 1 224 | 225 | net.ipv6.conf.all.forwarding = 1 226 | net.ipv6.conf.default.forwarding = 1 227 | 228 | # Other adjustable system parameters 229 | 230 | net.core.netdev_budget = 1200 231 | net.core.netdev_budget_usecs = 10000 232 | 233 | net.core.rps_sock_flow_entries = 32768 234 | net.core.somaxconn = 8192 235 | net.core.rmem_max = 67108864 236 | net.core.wmem_max = 67108864 237 | 238 | net.ipv4.conf.all.accept_redirects = 0 239 | net.ipv4.conf.default.accept_redirects = 0 240 | 241 | net.ipv4.conf.all.accept_source_route = 0 242 | net.ipv4.conf.default.accept_source_route = 0 243 | 244 | net.ipv4.conf.all.arp_ignore = 1 245 | net.ipv4.conf.default.arp_ignore = 1 246 | 247 | net.ipv4.conf.all.rp_filter = 2 248 | net.ipv4.conf.default.rp_filter = 2 249 | 250 | net.ipv4.conf.all.send_redirects = 0 251 | net.ipv4.conf.default.send_redirects = 0 252 | 253 | net.ipv4.igmp_max_memberships = 256 254 | 255 | net.ipv4.route.error_burst = 500 256 | net.ipv4.route.error_cost = 100 257 | 258 | net.ipv4.route.redirect_load = 2 259 | net.ipv4.route.redirect_silence = 2048 260 | 261 | net.ipv4.tcp_adv_win_scale = -2 262 | net.ipv4.tcp_challenge_ack_limit = 1000 263 | net.ipv4.tcp_fastopen = 3 264 | net.ipv4.tcp_fin_timeout = 30 265 | net.ipv4.tcp_keepalive_time = 120 266 | net.ipv4.tcp_max_syn_backlog = 2048 267 | net.ipv4.tcp_notsent_lowat = 131072 268 | net.ipv4.tcp_rmem = 8192 4194304 67108864 269 | net.ipv4.tcp_wmem = 4096 4194304 67108864 270 | 271 | net.ipv6.conf.all.accept_redirects = 0 272 | net.ipv6.conf.default.accept_redirects = 0 273 | 274 | net.ipv6.conf.all.accept_source_route = 0 275 | net.ipv6.conf.default.accept_source_route = 0 276 | 277 | net.ipv6.conf.all.use_tempaddr = 0 278 | net.ipv6.conf.default.use_tempaddr = 0 279 | 280 | net.netfilter.nf_conntrack_acct = 1 281 | net.netfilter.nf_conntrack_buckets = 65536 282 | net.netfilter.nf_conntrack_checksum = 0 283 | net.netfilter.nf_conntrack_max = 262144 284 | net.netfilter.nf_conntrack_tcp_timeout_established = 7440 285 | 286 | ``` 287 | 288 | 保存该配置文件后,重启系统或者执行以下命令让配置生效。 289 | 290 | ```bash 291 | ## 让内核参数生效 292 | $ sudo sysctl --system 293 | ``` 294 | 295 | ### 1.8.调整系统时间 296 | 297 | 默认情况下 Debian 云镜像的系统时间需要调整,执行以下命令将系统时区设置为中国时区。 298 | 299 | ```bash 300 | ## 设置系统时区 301 | $ sudo timedatectl set-timezone Asia/Shanghai 302 | 303 | ## 检查系统时间 304 | $ date -R 305 | ``` 306 | 307 | Debian 云镜像默认使用 `systemd-timesyncd.service` 同步时间,且需要调整为使用国内 NTP 服务器。 308 | 309 | 调整 NTP 服务器参数,执行以下命令。 310 | 311 | ```bash 312 | ## 创建 NTP 配置目录 313 | $ sudo mkdir -p /etc/systemd/timesyncd.conf.d 314 | 315 | ## 创建 NTP 配置文件 316 | $ sudo editor /etc/systemd/timesyncd.conf.d/10-server-ntp.conf 317 | ``` 318 | 319 | 在配置文件中输入以下内容,并保存。 320 | 321 | ```bash 322 | # This configuration file is customized by fox, 323 | # Optimize system NTP server. 
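# The NTP= servers below are tried in turn; the first one that responds is used.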
324 | 325 | [Time] 326 | NTP=ntp.aliyun.com ntp.tencent.com cn.pool.ntp.org 327 | 328 | ``` 329 | 330 | 保存该配置文件后,需重启 `systemd-timesyncd.service` 服务,并再次检查系统 NTP 服务器地址。 331 | 332 | ```bash 333 | ## 重启 systemd-timesyncd.service 334 | $ sudo systemctl restart systemd-timesyncd.service 335 | 336 | ## 检查系统 NTP 服务器 337 | $ sudo systemctl status systemd-timesyncd.service 338 | ``` 339 | 340 | ### 1.9.配置自动更新 341 | 342 | 配置系统自动更新策略,执行以下命令,使用键盘 `左右方向键` 进行选择,`回车键` 进行确认。 343 | 344 | ```bash 345 | ## 配置自动更新策略 346 | $ sudo dpkg-reconfigure -plow unattended-upgrades 347 | 348 | ## 选择 “是” 349 | 350 | ``` 351 | 352 | 进一步调整 `20auto-upgrades` 配置文件。 353 | 354 | ```bash 355 | ## 编辑 20auto-upgrades 配置文件 356 | $ sudo editor /etc/apt/apt.conf.d/20auto-upgrades 357 | ``` 358 | 359 | 清空当前全部配置项后,输入以下内容,并保存。 360 | 361 | 配置文件中,用来控制更新周期的参数为 `APT::Periodic::Unattended-Upgrade` ,`7` 表示更新周期为 `7` 天。 362 | 363 | ```bash 364 | ## 系统更新周期配置项 365 | 366 | APT::Periodic::Update-Package-Lists "1"; 367 | APT::Periodic::Unattended-Upgrade "7"; 368 | APT::Periodic::AutocleanInterval "1"; 369 | APT::Periodic::CleanInterval "1"; 370 | 371 | ``` 372 | 373 | 进一步调整 `50unattended-upgrades` 配置文件。 374 | 375 | ```bash 376 | ## 编辑 50unattended-upgrades 配置文件 377 | $ sudo editor /etc/apt/apt.conf.d/50unattended-upgrades 378 | ``` 379 | 380 | 因为该配置文件很长,完整的配置文件可查看 [debian_ts_50unattended_upgrades.conf](./src/debian/debian_ts_50unattended_upgrades.conf) 以便对比。 381 | 382 | ```bash 383 | ## 删除以下行前面的注释符 // ,代表启用 384 | 385 | "origin=Debian,codename=${distro_codename}-updates"; 386 | 387 | ## 添加 TS 更新项目 388 | 389 | "origin=Tailscale,codename=${distro_codename},label=Tailscale"; 390 | 391 | ## 在配置文件末尾增加以下内容,代表启用,并调整参数 392 | 393 | Unattended-Upgrade::AutoFixInterruptedDpkg "true"; 394 | 395 | Unattended-Upgrade::Remove-Unused-Kernel-Packages "true"; 396 | 397 | Unattended-Upgrade::Remove-New-Unused-Dependencies "true"; 398 | 399 | Unattended-Upgrade::Remove-Unused-Dependencies "true"; 400 | 401 | Unattended-Upgrade::Automatic-Reboot "true"; 402 | 403 | Unattended-Upgrade::Automatic-Reboot-Time "13:00"; 404 | 405 | ``` 406 | 407 | 系统自动更新配置文件修改完成后,需调整自动更新定时器,执行以下命令。 408 | 409 | ```bash 410 | ## 配置系统定时器 411 | $ sudo systemctl edit apt-daily-upgrade.timer 412 | ``` 413 | 414 | 根据配置文件中的提示,在中间空白处填入以下内容。 415 | 416 | ```bash 417 | ## 定时器配置项 418 | 419 | [Timer] 420 | OnCalendar= 421 | OnCalendar=12:00 422 | RandomizedDelaySec=0 423 | 424 | ``` 425 | 426 | 设置完成后,重启自动更新定时器并检查其状态,执行以下命令。 427 | 428 | 在输出结果中,看到系统自动更新的触发时间为 `12:00` 则表示设置正确。 429 | 430 | ```bash 431 | ## 重启触发器 432 | $ sudo systemctl restart apt-daily-upgrade.timer 433 | 434 | ## 再次检查触发器状态 435 | $ sudo systemctl status apt-daily-upgrade.timer 436 | ``` 437 | 438 | ### 1.10.配置防火墙 439 | 440 | 修改防火墙配置之前,需检查 `nftables.service` 服务状态,确保该服务开机自启。 441 | 442 | ```bash 443 | ## 检查 nftables.service 444 | $ sudo systemctl status nftables.service 445 | 446 | ## 设置 nftables.service 开机自启 447 | $ sudo systemctl enable nftables.service 448 | ``` 449 | 450 | 修改 `nftables` 配置文件,执行以下命令。 451 | 452 | ```bash 453 | ## 备份 nftables 配置文件 454 | $ sudo mv /etc/nftables.conf /etc/nftables.conf.bak 455 | 456 | ## 创建新的 nftables 配置文件 457 | $ sudo editor /etc/nftables.conf 458 | ``` 459 | 460 | 由于防火墙配置文件很长,因此请查阅文件 [debian_ts_nftables.conf](./src/debian/debian_ts_nftables.conf) 进行复制。 461 | 462 | 配置完成后,需重启 `nftables.service` 服务。 463 | 464 | ```bash 465 | ## 重启 nftables.service 466 | $ sudo systemctl restart nftables.service 467 | ``` 468 | 469 | ### 1.11.调整系统端口 470 | 471 | 为了正常使用 `53` 端口,需要对 `systemd-resolved.service` 进行配置,执行以下命令。 472 | 473 | 
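下方配置中,`DNSStubListener=no` 用于关闭 `systemd-resolved` 默认占用 `127.0.0.53:53` 的 DNS stub 监听器,从而将 `53` 端口释放给后文配置的 `Dnsmasq` 使用;`DNS=127.0.0.1` 与 `DNS=::1` 则将 `systemd-resolved` 自身的上游解析指向本机 `Dnsmasq` 。
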
```bash 474 | ## 创建 systemd-resolved 配置目录 475 | $ sudo mkdir -p /etc/systemd/resolved.conf.d 476 | 477 | ## 创建 systemd-resolved 配置文件 478 | $ sudo editor /etc/systemd/resolved.conf.d/10-server-dns.conf 479 | ``` 480 | 481 | 在配置文件中输入以下内容,并保存。 482 | 483 | ```bash 484 | # This configuration file is customized by fox, 485 | # Optimize system resolve parameters for local TS server. 486 | 487 | [Resolve] 488 | DNS=127.0.0.1 489 | DNS=::1 490 | DNSStubListener=no 491 | 492 | ``` 493 | 494 | 保存该配置文件后,还需调整系统 `resolv.conf` 配置文件,执行以下命令。 495 | 496 | ```bash 497 | ## 创建 resolv.conf 软链接 498 | $ sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf 499 | ``` 500 | 501 | 配置完成后,需重启 `systemd-resolved.service` 服务。 502 | 503 | ```bash 504 | ## 重启 systemd-resolved.service 505 | $ sudo systemctl restart systemd-resolved.service 506 | ``` 507 | 508 | ### 1.12.配置 Dnsmasq 509 | 510 | 检查 `dnsmasq.service` 服务状态,确保该服务开机自启。 511 | 512 | ```bash 513 | ## 检查 dnsmasq.service 514 | $ sudo systemctl status dnsmasq.service 515 | 516 | ## 设置 dnsmasq.service 开机自启 517 | $ sudo systemctl enable dnsmasq.service 518 | ``` 519 | 520 | `Dnsmasq` 的主配置文件一般位于 `/etc` 目录下,修改配置文件之前,执行以下命令。 521 | 522 | ```bash 523 | ## 创建 Dnsmasq 配置目录 524 | $ sudo mkdir -p /etc/dnsmasq.d 525 | ``` 526 | 527 | 创建 `Dnsmasq` 主配置文件,执行以下命令。 528 | 529 | ```bash 530 | ## 创建 Dnsmasq 主配置文件 531 | $ sudo editor /etc/dnsmasq.d/10-server-dnsmasq.conf 532 | ``` 533 | 534 | 在编辑器对话框中输入以下内容,并保存。 535 | 536 | **额外说明:** 537 | 538 | - 请根据系统内存使用情况,调整缓存参数 `cache-size` 539 | 540 | - 配置文件中监听的网卡名为 `eth0` 和 `tailscale0` ,请根据实际情况进行调整 541 | 542 | - 配置文件中内网域名为 `fox.internal` ,请根据实际情况进行调整 543 | 544 | - `Dnsmasq` 上游 DNS 服务器分为三类,请根据实际情况进行调整 545 | - `server=/ts.net/100.100.100.100` :TS 服务 `MagicDNS` 专用 DNS 服务器 546 | - `server=/fox.internal/172.16.1.1` :内网域名解析 DNS 服务器,通常为主路由地址 547 | - `server` 参数中的其他 DNS 服务器供 TS 服务器自身及其下游设备使用 548 | 549 | ```bash 550 | # This configuration file is customized by fox, 551 | # Optimize dnsmasq parameters for local TS server. 552 | 553 | # Main Config 554 | 555 | conf-dir=/etc/dnsmasq.d/,*.conf 556 | conf-file=/etc/dnsmasq.conf 557 | 558 | log-facility=/var/log/dnsmasq.log 559 | log-async=20 560 | 561 | cache-size=2048 562 | max-cache-ttl=7200 563 | fast-dns-retry=1800 564 | 565 | interface=eth0,tailscale0 566 | rebind-domain-ok=/fox.internal/ 567 | 568 | bind-dynamic 569 | bogus-priv 570 | domain-needed 571 | no-hosts 572 | no-negcache 573 | no-resolv 574 | rebind-localhost-ok 575 | stop-dns-rebind 576 | 577 | # DNS Filter 578 | 579 | server=/alt/ 580 | server=/bind/ 581 | server=/example/ 582 | server=/home.arpa/ 583 | server=/internal/ 584 | server=/invalid/ 585 | server=/lan/ 586 | server=/local/ 587 | server=/localhost/ 588 | server=/onion/ 589 | server=/test/ 590 | 591 | # DNS Server 592 | 593 | server=/ts.net/100.100.100.100 594 | server=/fox.internal/172.16.1.1 595 | server=172.16.1.1 596 | 597 | ``` 598 | 599 | 配置完成后,需重启 `dnsmasq.service` 服务。 600 | 601 | ```bash 602 | ## 重启 dnsmasq.service 603 | $ sudo systemctl restart dnsmasq.service 604 | ``` 605 | 606 | ## 2. 
Tailscale 607 | 608 | 根据不同的启动参数,TS 服务将具有不同的业务能力。 609 | 610 | ### 2.1.网卡调优 611 | 612 | 根据 TS 官方文档 [Performance best practices](https://tailscale.com/kb/1320/performance-best-practices) 的技术指引,优化网卡的 `rx-udp-gro-forwarding` 与 `rx-gro-list` 参数可提升 UDP 转发性能。 613 | 614 | 本文将使用 `ethtool` 工具并结合 `systemd` 服务提供参数自动配置方案。 615 | 616 | ```bash 617 | ## 创建 tailscale-nic-optim 优化脚本 618 | $ sudo editor /usr/local/bin/tailscale-nic-optim.sh 619 | ``` 620 | 621 | 在脚本文件中输入以下内容,并保存。 622 | 623 | ```bash 624 | #!/bin/sh 625 | 626 | ETHTOOL_PATH=$(command -v ethtool) 627 | 628 | if [ -z "$ETHTOOL_PATH" ]; then 629 | echo "Error: ethtool command not found. Please install ethtool." 630 | exit 1 631 | fi 632 | 633 | NETDEV=$(ip route show default | awk '{for (i=1; i<=NF; i++) {if ($i == "dev") {print $(i+1); exit;}}}') 634 | 635 | if [ -z "$NETDEV" ]; then 636 | echo "Error: Could not determine default network device using 'ip route show default'." 637 | exit 1 638 | fi 639 | 640 | echo "Configuring network device $NETDEV using $ETHTOOL_PATH for Tailscale optimizations..." 641 | 642 | "$ETHTOOL_PATH" -K "$NETDEV" rx-udp-gro-forwarding on rx-gro-list off 643 | 644 | if [ "$?" -eq 0 ]; then 645 | echo "Configuration successful for $NETDEV." 646 | exit 0 647 | else 648 | echo "Error: ethtool configuration failed. Check permissions or device status." 649 | exit 1 650 | fi 651 | 652 | ``` 653 | 654 | 设置脚本可执行权限,并防止脚本被意外修改。 655 | 656 | ```bash 657 | ## 设置脚本可执行权限 658 | $ sudo chmod +x /usr/local/bin/tailscale-nic-optim.sh 659 | 660 | ## 设置脚本文件防篡改 661 | $ sudo chattr +i /usr/local/bin/tailscale-nic-optim.sh 662 | ``` 663 | 664 | 进一步创建 `tailscale-nic-optim` 服务配置文件,以满足系统自动化设置需求。 665 | 666 | ```bash 667 | ## 创建 tailscale-nic-optim.service 配置文件 668 | $ sudo editor /etc/systemd/system/tailscale-nic-optim.service 669 | ``` 670 | 671 | 在服务配置文件中输入以下内容,并保存。 672 | 673 | ```bash 674 | # This configuration file is customized by fox, 675 | # Optimize for TS UDP performance. 
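# Type=oneshot with RemainAfterExit=yes runs the script once per boot and keeps
# the unit marked active; Before=tailscaled.service ensures the NIC parameters
# are applied before Tailscale starts handling traffic.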
676 | 677 | [Unit] 678 | Description=Tailscale Network Optimization for Subnet/Exit Nodes 679 | ConditionVirtualization=!container 680 | After=network-online.target 681 | Wants=network-online.target 682 | Before=tailscaled.service 683 | 684 | [Service] 685 | Type=oneshot 686 | ExecStart=/usr/local/bin/tailscale-nic-optim.sh 687 | RemainAfterExit=yes 688 | StandardOutput=journal 689 | 690 | [Install] 691 | WantedBy=multi-user.target 692 | 693 | ``` 694 | 695 | 由于修改了服务项,需要执行以下命令进行重载。 696 | 697 | ```bash 698 | ## 服务重载 699 | $ sudo systemctl daemon-reload 700 | ``` 701 | 702 | 执行以下命令让 `tailscale-nic-optim` 服务开机自启动。 703 | 704 | ```bash 705 | ## 设置 tailscale-nic-optim 服务开机自启 706 | $ sudo systemctl enable --now tailscale-nic-optim.service 707 | ``` 708 | 709 | ### 2.2.启动模式 710 | 711 | 若仅需 TS 组网功能,执行以下命令。 712 | 713 | ```bash 714 | ## TS 普通组网模式 715 | $ sudo tailscale up 716 | ``` 717 | 718 | 若需 TS 提供 `Exit Node` 功能,执行以下命令。 719 | 720 | ```bash 721 | ## TS Exit Node 模式 722 | $ sudo tailscale up --advertise-exit-node --reset 723 | 724 | ## TS Exit Node 模式,但不使用 MagicDNS 725 | $ sudo tailscale up --advertise-exit-node --accept-dns=false --reset 726 | ``` 727 | 728 | 若需 TS 提供内网路由功能并能访问内网私有服务,执行以下命令。 729 | 730 | **额外说明:** 731 | 732 | - 请根据内网网段,调整 TS 内网路由参数 `advertise-routes` 733 | 734 | ```bash 735 | ## TS 内网路由模式 736 | $ sudo tailscale up --advertise-exit-node --accept-routes --advertise-routes=172.16.1.0/24 --reset 737 | ``` 738 | 739 | 执行命令后,TS 将自动显示登录链接,只需根据链接进行登录操作即可。 740 | 741 | ### 2.3.自动更新 742 | 743 | 目前 TS 将跟随系统自动更新,若需额外开启 TS 的自动更新功能,执行以下命令。 744 | 745 | ```bash 746 | ## TS 开启自动更新 747 | $ sudo tailscale set --auto-update 748 | 749 | ## TS 关闭自动更新 750 | $ sudo tailscale set --auto-update=false 751 | ``` 752 | 753 | ### 2.4.定时任务 754 | 755 | 本步骤为可选操作,主要用于设置 TS 定时重启。 756 | 757 | ```bash 758 | ## 编辑系统定时任务,编辑器选择 nano 759 | $ sudo crontab -e 760 | ``` 761 | 762 | 在配置文件末尾,增加以下内容。 763 | 764 | ```bash 765 | ## 定时任务配置项 766 | 767 | 30 10 * * * /usr/bin/systemctl restart tailscaled.service 768 | 769 | ``` 770 | 771 | 至此,TS 服务器已配置完成。 772 | 773 | --------------------------------------------------------------------------------