├── .gitignore ├── Makefile ├── building.rst ├── compiling.rst ├── conclude.rst ├── conf.py ├── debug.rst ├── editing.rst ├── files.rst ├── index.rst ├── introduction.rst ├── make.bat ├── origin ├── Building.md ├── Compiling.md ├── Debugging.md ├── Editing.md ├── Files.md ├── Introduction.md ├── Revisions.md ├── ltrace-vim.png ├── vim-diff.png └── vim-quickfix.png ├── readme.md └── revisions.rst /.gitignore: -------------------------------------------------------------------------------- 1 | .* 2 | !.gitignore 3 | *~ 4 | 5 | # KDE 6 | .directory 7 | 8 | .*.sw[a-z] 9 | *.un~ 10 | Session.vim 11 | 12 | _build/* 13 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # Internal variables. 11 | PAPEROPT_a4 = -D latex_paper_size=a4 12 | PAPEROPT_letter = -D latex_paper_size=letter 13 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 14 | # the i18n builder cannot share the environment and doctrees with the others 15 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 16 | 17 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 18 | 19 | help: 20 | @echo "Please use \`make ' where is one of" 21 | @echo " html to make standalone HTML files" 22 | @echo " dirhtml to make HTML files named index.html in directories" 23 | @echo " singlehtml to make a single large HTML file" 24 | @echo " pickle to make pickle files" 25 | @echo " json to make JSON files" 26 | @echo " htmlhelp to make HTML files and a HTML help project" 27 | @echo " qthelp to make HTML files and a qthelp project" 28 | @echo " devhelp to make HTML files and a Devhelp project" 29 | @echo " epub to make an epub" 30 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 31 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 32 | @echo " text to make text files" 33 | @echo " man to make manual pages" 34 | @echo " texinfo to make Texinfo files" 35 | @echo " info to make Texinfo files and run them through makeinfo" 36 | @echo " gettext to make PO message catalogs" 37 | @echo " changes to make an overview of all changed/added/deprecated items" 38 | @echo " linkcheck to check all external links for integrity" 39 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 40 | 41 | clean: 42 | -rm -rf $(BUILDDIR)/* 43 | 44 | html: 45 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 46 | @echo 47 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 48 | 49 | dirhtml: 50 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 51 | @echo 52 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 53 | 54 | singlehtml: 55 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 56 | @echo 57 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 58 | 59 | pickle: 60 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 61 | @echo 62 | @echo "Build finished; now you can process the pickle files." 63 | 64 | json: 65 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 66 | @echo 67 | @echo "Build finished; now you can process the JSON files." 
68 | 69 | htmlhelp: 70 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 71 | @echo 72 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 73 | ".hhp project file in $(BUILDDIR)/htmlhelp." 74 | 75 | qthelp: 76 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 77 | @echo 78 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 79 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 80 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Uni.qhcp" 81 | @echo "To view the help file:" 82 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Uni.qhc" 83 | 84 | devhelp: 85 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 86 | @echo 87 | @echo "Build finished." 88 | @echo "To view the help file:" 89 | @echo "# mkdir -p $$HOME/.local/share/devhelp/Uni" 90 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Uni" 91 | @echo "# devhelp" 92 | 93 | epub: 94 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 95 | @echo 96 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 97 | 98 | latex: 99 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 100 | @echo 101 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 102 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 103 | "(use \`make latexpdf' here to do that automatically)." 104 | 105 | latexpdf: 106 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 107 | @echo "Running LaTeX files through pdflatex..." 108 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 109 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 110 | 111 | text: 112 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 113 | @echo 114 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 115 | 116 | man: 117 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 118 | @echo 119 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 120 | 121 | texinfo: 122 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 123 | @echo 124 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 125 | @echo "Run \`make' in that directory to run these through makeinfo" \ 126 | "(use \`make info' here to do that automatically)." 127 | 128 | info: 129 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 130 | @echo "Running Texinfo files through makeinfo..." 131 | make -C $(BUILDDIR)/texinfo info 132 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 133 | 134 | gettext: 135 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 136 | @echo 137 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 138 | 139 | changes: 140 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 141 | @echo 142 | @echo "The overview file is in $(BUILDDIR)/changes." 143 | 144 | linkcheck: 145 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 146 | @echo 147 | @echo "Link check complete; look for any errors in the above output " \ 148 | "or in $(BUILDDIR)/linkcheck/output.txt." 149 | 150 | doctest: 151 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 152 | @echo "Testing of doctests in the sources finished, look at the " \ 153 | "results in $(BUILDDIR)/doctest/output.txt." 
154 | -------------------------------------------------------------------------------- /building.rst: -------------------------------------------------------------------------------- 1 | 构建 2 | ==== 3 | 4 | 编译项目有时是个复杂而重复的过程。一个好的集成开发环境可以提供一种简单、高效甚至自动化的软件编译方法。Unix 及其衍生系统用 ``Makefile`` 5 | 来完成这个工序。这是一种标准格式的文档,是用来把源代码和目标文件编译成可执行文件的“菜谱”。它能够考虑源文件的修改决定只编译必要的文件,以此避免重复编译的浪费。 6 | 7 | 关于 ``make`` 还有个非常有趣的特点值得注意,就是虽然它通常用于自动化编译而且它为此提供了不少捷径,但是其实凡是把一堆文件生成为另一堆文件的情况都可以利用它。一种可能的用法是,网站部署时将原图片优化成网页友好的图片;另一种可以是从代码生成静态 HTML 页面,而非运行时生成页面(译者注:利用 github 建博客就是这个原理)。正是基于这样一种更宽泛的“编译”概念,一些现代的此类工具(如 `Ruby's rake `_ )才得到广泛地应用于自动化一些普通流程,生产和安装各种代码和文件。 8 | 9 | 剖析 ``Makefile`` 10 | ----------------- 11 | 12 | ``Makefile`` 的一般格式包含一系列变量、一系列目标,以及用来生产目标的源和/或对象。目标也不一定是链接了的二进制文件;它们也可以包含操作产生文件的动作,比如 ``install`` 用来把生成的编译文件部署到系统里,或者用 ``clean`` 从源代码树里清除已编译的文件。 13 | 14 | 正是这种目标产物的灵活性使得 ``make`` 可以自动化任何与生成产品软件相关的任务;不仅仅是编译器执行的典型的语法分析、预处理、编译和连接步骤,还包括运行测试( ``make test`` ),或把文档的源文件编译成一种或多种适当的格式,或将代码自动部署到产品系统里,比如通过 ``git push`` 或类似的内容跟踪系统来上传到网站。 15 | 16 | 一个简单的软件项目的 ``Makefile`` 看起来大概像这样: :: 17 | 18 | all: example 19 | 20 | example: main.o example.o library.o 21 | gcc main.o example.o library.o -o example 22 | 23 | main.o: main.c 24 | gcc -c main.c -o main.o 25 | 26 | example.o: example.c 27 | gcc -c example.c -o example.o 28 | 29 | library.o: library.c 30 | gcc -c library.c -o library.o 31 | 32 | clean: 33 | rm *.o example 34 | 35 | install: example 36 | cp example /usr/bin 37 | 38 | 以上的 ``Makefile`` 可能并不是最优的,但是它提供了一种只用输入 ``make`` 就可以编译并安装已链接的二进制文件的方法。每个 *目标文件( target )* 都包含一系列之后的命令所需要的 *依赖项( dependencies )*\。这意味着定义的顺序是任意的,调用 ``make`` 的时候相应的命令会按合适的顺序被运行。 39 | 40 | 例子中很多是冗余或重复的,比方说如果一个目标文件是直接从一个同名 C 文件编译而来,我们并不需要包含这样的目标文件, ``make`` 会帮我们解决这些。类似的,把经常调用的命令用变量来代替是明智的,这样的话我们在替换编译器或变更标签的时候也不用一个一个地修改了。一个更加简明扼要的版本如下所示: :: 41 | 42 | CC = gcc 43 | OBJECTS = main.o example.o library.o 44 | BINARY = example 45 | 46 | all: example 47 | 48 | example: $(OBJECTS) 49 | $(CC) $(OBJECTS) -o $(BINARY) 50 | 51 | clean: 52 | rm -f $(BINARY) $(OBJECTS) 53 | 54 | install: example 55 | cp $(BINARY) /usr/bin 56 | 57 | 58 | ``make`` 更为广泛的使用 59 | ------------------------ 60 | 61 | 然而,为了自动化,将思维扩散到只对代码的编译和连接之外是很具启发性的。举个简单的网站项目的例子,该项目涉及到要部署 PHP 代码到产品服务器。这项任务通常不会被联系到 ``make`` 的使用上,但是其机制是一样的:代码编译完成可以执行时,我们还有一些目的需要达到。 62 | 63 | PHP 文件当然是不需要编译的了,但是网站资源文件经常需要。对网站开发人员来说很熟悉的例子就是为了部署,把图片从源矢量图导出成已缩放和优化过的光栅图片。你管理着你的源文件,到了部署的时候你要生成网页友好的版本。 64 | 65 | 我们假设这个项目里,网站用到一组4个图标,这些图标都是64*64像素的。我们有 SVG 矢量格式的源文件,在版本控制系统中静静地等待着。现在我们需要为网站生成小一些、可以部署的位图。我们这就可以定义一个目标叫 ``icons``\,设置好依赖项,然后输入要运行的命令。 ``Makefile`` 的句法中才是 Unix 命令行工具真正开始闪闪发光的地方: :: 66 | 67 | icons: create.png read.png update.png delete.png 68 | 69 | create.png: create.svg 70 | convert create.svg create.raw.png && \ 71 | pngcrush create.raw.png create.png 72 | 73 | read.png: read.svg 74 | convert read.svg read.raw.png && \ 75 | pngcrush read.raw.png read.png 76 | 77 | update.png: update.svg 78 | convert update.svg update.raw.png && \ 79 | pngcrush update.raw.png update.png 80 | 81 | delete.png: delete.svg 82 | convert delete.svg delete.raw.png && \ 83 | pngcrush delete.raw.png delete.png 84 | 85 | 做完以上的这些,输入命令 ``make icons`` 就会在 Bash 循环里把每个源图标文件遍历一遍,将它们用 ImageMagick 的 ``convert`` 从 SVG 转化成 PNG,再用 ``pngcrush`` 优化,于是便生成了可以上传的图片文件。 86 | 87 | 生成多种格式的帮助文件也可以用类似的方法,比如从 Markdown 源文件生成 HTML 文件: :: 88 | 89 | docs: README.html credits.html 90 | 91 | README.html: README.md 92 | markdown README.md > README.html 93 | 94 | credits.html: credits.md 95 | markdown credits.md > credits.html 96 | 97 | 最后也可以用 ``git push web`` 部署网站,但只能是在图标文件已栅格化和文档已转化 _之后_ : 
:: 98 | 99 | deploy: icons docs 100 | git push web 101 | 102 | 为了更加简短而高效地把一种后缀的文件转化成另一种,你可以用 ``.SUFFIXES`` 指令,通过定义一些特殊符号来做到这些。转化图片的代码可能会变成下面这样,在这个例子里, ``$<`` 指源文件, ``$*`` 指没有后缀的文件名, ``$@`` 指目标文件: :: 103 | 104 | icons: create.png read.png update.png delete.png 105 | 106 | .SUFFIXES: .svg .png 107 | 108 | .svg.png: 109 | convert $< $*.raw.png && \ 110 | pngcrush $*.raw.png $@ 111 | 112 | 113 | 创建 ``Makefile`` 的工具 114 | ------------------------ 115 | 116 | GNU Autotools toolchain 里有多样工具用来为大型软件项目从更高层构造 ``configure`` 117 | 脚本和 ``make`` 文件,具体来说就是 `autoconf <http://en.wikipedia.org/wiki/Autoconf>`_ 和 `automake <http://en.wikipedia.org/wiki/Automake>`_\。 118 | 使用这些工具可以为很大的源代码库生成 ``configure`` 脚本和 ``make`` 文件,免除了手动编写大量 makefile 的负担,并且其中的自动化步骤可 119 | 以保证源代码在不同的操作系统上保持兼容、可以编译。 120 | 121 | 这个过程涵盖的内容之复杂足够再写一系列文章加以阐述,这已经超出了本篇的范畴。 122 | 123 | *在此特别感谢用户 samwyse 在评论中关于* ``.SUFFIXES`` *的建议* 124 | -------------------------------------------------------------------------------- /compiling.rst: -------------------------------------------------------------------------------- 1 | 编译 2 | ==== 3 | 4 | Unix 平台下有很多编译和解释代码的工具,它们用法各异。然而,概念上很多步骤是一样的。这里我将讨论用 GNU 编译器集里的 ``gcc`` 编译 C,并简要介绍用 ``perl`` 作为解释器。 5 | 6 | GCC 7 | --- 8 | 9 | `GCC <http://gcc.gnu.org/>`_ 是个非常成熟的 GPL 许可的编译器集。也许大家知道得最多的还是用它来编译 C 和 C++ 程序。它的自由软件许可,以及在类 Unix 系统(例如 Linux 和 BSD)上的广泛预装,使得它长久地流行。当然,现在还有一些更加现代化的使用 `LLVM <http://llvm.org/>`_ 架构的替代编译器,比如 `Clang <http://clang.llvm.org/>`_\。 10 | 11 | 最好别把 GNU 编译器集的前端二进制程序当作是一组各自为政的完整的编译器,而是把它当作将一组离散的工具串联起来的驱动器,用以执行分析、编译和链接等步骤。这就意味着你既可以把 GCC 当作相对简单的命令行工具来把 C 源文件直接编译成可执行二进制文件,也可以用它来检查和调试编译过程中的每个小步骤。 12 | 13 | 在这里我不会去讨论 ``make`` 文件,虽说它对于任何多于一个文件的 C 项目都必不可少。我们会在下一章有关创建和自动化工具的文章中讨论。 14 | 15 | 目标码的编译和汇编 16 | ------------------ 17 | 18 | 你可以如此将 C 源码编译到目标码: :: 19 | 20 | $ gcc -c example.c -o example.o 21 | 22 | 假设这个 C 程序代码没有问题,这将会在当前目录下生成一个未链接的二进制目标文件 ``example.o``\,或者它会告诉你编译为何失败。你可以用 ``objdump`` 来检查它的汇编码: :: 23 | 24 | $ objdump -D example.o 25 | 26 | 此外,你还可以加上 ``-S`` 参数来使得 ``gcc`` 输出适当的汇编码: :: 27 | 28 | $ gcc -c -S example.c -o example.s 29 | 30 | 把汇编码和程序源代码一起打印出来,有时很有用,或者至少很有趣: :: 31 | 32 | $ gcc -c -g -Wa,-a,-ad example.c > example.lst 33 | 34 | 预处理器 35 | -------- 36 | 37 | C 预处理器 ``cpp`` 是用来将头文件和宏定义加入到代码里的。一般来说它是 ``gcc`` 的一部分,但你也可以直接调用它来查看它生成的 C 代码: :: 38 | 39 | $ cpp example.c 40 | 41 | 这将会打印出将要被编译的完整版代码,即已包含头文件并展开了相关的宏。 42 | 43 | 目标码的链接 44 | ------------ 45 | 46 | 一个或多个目标码可以像这样被链接成适当的二进制文件: :: 47 | 48 | $ gcc example.o -o example 49 | 50 | 在这个例子里,GCC 只是代为调用了 GNU 链接器 ``ld``\。以上的命令生成了一个可执行二进制文件 ``example``\。 51 | 52 | 编译、汇编和链接 53 | ---------------- 54 | 55 | 以上所有的步骤其实可以一步做完: :: 56 | 57 | $ gcc example.c -o example 58 | 59 | 这看起来简单了一些。但是,单独编译目标码其实在实际的效率上有其过人之处,因为重新编译的时候,没有改动的代码就不用再编译了。至于这一点,我会在下一篇文章中讨论。 60 | 61 | 包含和链接 62 | ---------- 63 | 64 | C 文件和头文件可以被显式地包含在编译命令里,即用参数 ``-I``\: :: 65 | 66 | $ gcc -I/usr/include/somelib.h example.c -o example 67 | 68 | 类似的,如果代码需要被动态地链接到一个已编译好的系统库,这些库通常在像 ``/lib`` 或 ``/usr/lib`` 的公共位置。假设是 ``ncurses``\,我们可以在命令里包含一个 ``-l`` 参数: :: 69 | 70 | $ gcc -lncurses example.c -o example 71 | 72 | 如果在你的编译中有很多必要的包含和链接,将其放进环境变量是很明智的: :: 73 | 74 | $ export CFLAGS=-I/usr/include/somelib.h 75 | $ export CLIBS=-lncurses 76 | $ gcc $CFLAGS $CLIBS example.c -o example 77 | 78 | 这种常见的步骤也是 ``Makefile`` 可以帮你省略的东西之一。 79 | 80 | 编译计划 81 | -------- 82 | 83 | 为了查看 ``gcc`` 都用哪些命令干了些什么,你可以在编译命令里加上 ``-v`` 开关。这样它就会把它的编译计划从标准错误流里打印出来: :: 84 | 85 | $ gcc -v -c example.c -o example.o 86 | 87 | 如果你不想让编译器真正去生成目标文件或已链接的二进制文件,有时用 ``-###`` 更好(译者按:这个方法貌似在zsh下不能使用): :: 88 | 89 | $ gcc -### -c example.c -o example.o 90 | 91 | 这有助于让你看到哪些步骤 ``gcc`` 帮你简化了。但同时,在一些特殊的情况下,你也可以用它来发现哪些是你不希望编译器帮你做的步骤却在编译时被执行了。 92 | 93 | 更冗长的错误查看 94 | ---------------- 95 |
96 | 在用 ``gcc`` 编译的时候加上 ``-Wall`` 和/或 ``-pedantic``\,来让它输出那些不一定构成错误的警告: :: 97 | 98 | $ gcc -Wall -pedantic -c example.c -o example.o 99 | 100 | 将它放进你的 ``Makefile`` 或 Vim 的 `makeprg `_ 设置是个好主意,如前一篇文章所讨论的那样,它们在快速修正(quickfix)窗口里的输出效果很好。这种高强度的警告往往会使你写出可读性更强、兼容性更好、更少错误的代码。 101 | 102 | 编译时间剖析 103 | ------------ 104 | 105 | 你可以将 ``-time`` 标记放进 ``gcc``\,让它输出编译每一步所用的时间: :: 106 | 107 | $ gcc -time -c example.c -o example.o 108 | 109 | 优化 110 | ---- 111 | 112 | 传入一般的优化选项给 ``gcc``\,就可以让它为你构建更加高效的目标文件和链接好的二进制文件,当然优化需要花更长的编译时间。我发现 ``-O2`` 对生产版本来说通常是个不错的折中选择: 113 | 114 | - ``gcc -O1`` 115 | - ``gcc -O2`` 116 | - ``gcc -O3`` 117 | 118 | 就像其他 Bash 命令一样,它们都可以直接从 Vim 里调用: :: 119 | 120 | :!gcc % -o example 121 | 122 | 解释器 123 | ------ 124 | 125 | 类 Unix 系统里对解释型语言代码的处理方式就很不一样了。在下面的例子里,我将使用 Perl,但是大多数原则也适用于解释 Python 或 Ruby 之类的代码。 126 | 127 | 内联 128 | ---- 129 | 130 | 用以下任意一种方式,你都能够运行字符串形式的 Perl 代码。此例为在屏幕上打印一行“Hello, world.”并换行。第一种可能是最简约最标准的方法;而第二种则使用了 `heredoc `_ 字符串;第三种则是使用了经典的 Unix shell pipe。 :: 131 | 132 | $ perl -e 'print "Hello world.\n";' 133 | $ perl <<<'print "Hello world.\n";' 134 | $ echo 'print "Hello world.\n";' | perl 135 | 136 | 当然,更典型的是将代码保存在文件中,该文件也可以直接被运行: :: 137 | 138 | $ perl hello.pl 139 | 140 | 不管用以上何种方式运行,你都可以在不运行代码的前提下用 ``-c`` 来检查代码的语法: :: 141 | 142 | $ perl -c hello.pl 143 | 144 | 但是如果要把这种脚本当作逻辑上的二进制文件来用,即直接运行它而不需要知道或关心它是用什么写的,你就得在代码最前面加上所谓的“shebang”特殊行,它会指定用什么解释器来运行下面的脚本。 :: 145 | 146 | #!/usr/bin/env perl 147 | print "Hello, world.\n"; 148 | 149 | 然后需要用 ``chmod`` 把脚本设置成可执行模式。在此,将其后缀抹去也是个很好的实践,因为现在脚本已经被当作逻辑上的二进制文件了: :: 150 | 151 | $ mv hello{.pl,} 152 | $ chmod +x hello 153 | 154 | 这样就可以像运行编译好的二进制文件一样来直接运行它了: :: 155 | 156 | $ ./hello 157 | 158 | 这种用法非常方便,以至于很多现代 Linux 系统里的常用工具就是用 Perl 甚至 Python 写的,比如 ``adduser`` 就是 ``useradd`` 的友好前端工具。 159 | 160 | 下一篇文章,我将介绍如何用 ``make`` 定义和自动化项目构建,使其可与 IDE 相匹敌,同时也会提到秉承同样思想的 Ruby 工具 ``rake``\。 161 | -------------------------------------------------------------------------------- /conclude.rst: -------------------------------------------------------------------------------- 1 | 寫在後面的話 2 | ============ 3 | -------------------------------------------------------------------------------- /conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # Unix 即集成开发环境 documentation build configuration file, created by 4 | # sphinx-quickstart on Tue Feb 28 02:37:06 2012. 5 | # 6 | # This file is execfile()d with the current directory set to its containing dir. 7 | # 8 | # Note that not all possible configuration values are present in this 9 | # autogenerated file. 10 | # 11 | # All configuration values have a default; values that are commented out 12 | # serve to show the default. 13 | 14 | import sys, os 15 | 16 | # If extensions (or modules to document with autodoc) are in another directory, 17 | # add these directories to sys.path here. If the directory is relative to the 18 | # documentation root, use os.path.abspath to make it absolute, like shown here. 19 | #sys.path.insert(0, os.path.abspath('.')) 20 | 21 | # -- General configuration ----------------------------------------------------- 22 | 23 | # If your documentation needs a minimal Sphinx version, state it here. 24 | #needs_sphinx = '1.0' 25 | 26 | # Add any Sphinx extension module names here, as strings. They can be extensions 27 | # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 28 | extensions = [] 29 | 30 | # Add any paths that contain templates here, relative to this directory.
31 | templates_path = ['_templates'] 32 | 33 | # The suffix of source filenames. 34 | source_suffix = '.rst' 35 | 36 | # The encoding of source files. 37 | #source_encoding = 'utf-8-sig' 38 | 39 | # The master toctree document. 40 | master_doc = 'index' 41 | 42 | # General information about the project. 43 | project = u'Unix 即集成开发环境' 44 | copyright = u'Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License' 45 | 46 | # The version info for the project you're documenting, acts as replacement for 47 | # |version| and |release|, also used in various other places throughout the 48 | # built documents. 49 | # 50 | # The short X.Y version. 51 | version = '1.0' 52 | # The full version, including alpha/beta/rc tags. 53 | release = '1.0' 54 | 55 | # The language for content autogenerated by Sphinx. Refer to documentation 56 | # for a list of supported languages. 57 | language = 'zh_CN' 58 | 59 | # There are two options for replacing |today|: either, you set today to some 60 | # non-false value, then it is used: 61 | #today = '' 62 | # Else, today_fmt is used as the format for a strftime call. 63 | #today_fmt = '%B %d, %Y' 64 | 65 | # List of patterns, relative to source directory, that match files and 66 | # directories to ignore when looking for source files. 67 | exclude_patterns = ['_build'] 68 | 69 | # The reST default role (used for this markup: `text`) to use for all documents. 70 | #default_role = None 71 | 72 | # If true, '()' will be appended to :func: etc. cross-reference text. 73 | #add_function_parentheses = True 74 | 75 | # If true, the current module name will be prepended to all description 76 | # unit titles (such as .. function::). 77 | #add_module_names = True 78 | 79 | # If true, sectionauthor and moduleauthor directives will be shown in the 80 | # output. They are ignored by default. 81 | #show_authors = False 82 | 83 | # The name of the Pygments (syntax highlighting) style to use. 84 | pygments_style = 'sphinx' 85 | highlight_language = 'bash' 86 | 87 | # A list of ignored prefixes for module index sorting. 88 | #modindex_common_prefix = [] 89 | 90 | 91 | # -- Options for HTML output --------------------------------------------------- 92 | 93 | # The theme to use for HTML and HTML Help pages. See the documentation for 94 | # a list of builtin themes. 95 | html_theme = 'nature' 96 | 97 | # Theme options are theme-specific and customize the look and feel of a theme 98 | # further. For a list of options available for each theme, see the 99 | # documentation. 100 | html_theme_options = { 101 | "nosidebar": "true", 102 | } 103 | 104 | # Add any paths that contain custom themes here, relative to this directory. 105 | #html_theme_path = [] 106 | 107 | # The name for this set of Sphinx documents. If None, it defaults to 108 | # " v documentation". 109 | #html_title = None 110 | 111 | # A shorter title for the navigation bar. Default is the same as html_title. 112 | #html_short_title = None 113 | 114 | # The name of an image file (relative to this directory) to place at the top 115 | # of the sidebar. 116 | #html_logo = None 117 | 118 | # The name of an image file (within the static path) to use as favicon of the 119 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 120 | # pixels large. 121 | #html_favicon = None 122 | 123 | # Add any paths that contain custom static files (such as style sheets) here, 124 | # relative to this directory. 
They are copied after the builtin static files, 125 | # so a file named "default.css" will overwrite the builtin "default.css". 126 | html_static_path = ['_static'] 127 | 128 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 129 | # using the given strftime format. 130 | #html_last_updated_fmt = '%b %d, %Y' 131 | 132 | # If true, SmartyPants will be used to convert quotes and dashes to 133 | # typographically correct entities. 134 | #html_use_smartypants = True 135 | 136 | # Custom sidebar templates, maps document names to template names. 137 | #html_sidebars = {} 138 | 139 | # Additional templates that should be rendered to pages, maps page names to 140 | # template names. 141 | #html_additional_pages = {} 142 | 143 | # If false, no module index is generated. 144 | #html_domain_indices = True 145 | 146 | # If false, no index is generated. 147 | #html_use_index = True 148 | 149 | # If true, the index is split into individual pages for each letter. 150 | #html_split_index = False 151 | 152 | # If true, links to the reST sources are added to the pages. 153 | #html_show_sourcelink = True 154 | 155 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 156 | #html_show_sphinx = True 157 | 158 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 159 | #html_show_copyright = True 160 | 161 | # If true, an OpenSearch description file will be output, and all pages will 162 | # contain a tag referring to it. The value of this option must be the 163 | # base URL from which the finished HTML is served. 164 | #html_use_opensearch = '' 165 | 166 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 167 | #html_file_suffix = None 168 | 169 | # Output file base name for HTML help builder. 170 | htmlhelp_basename = 'Unidoc' 171 | 172 | 173 | # -- Options for LaTeX output -------------------------------------------------- 174 | 175 | latex_elements = { 176 | # The paper size ('letterpaper' or 'a4paper'). 177 | #'papersize': 'letterpaper', 178 | 179 | # The font size ('10pt', '11pt' or '12pt'). 180 | #'pointsize': '10pt', 181 | 182 | # Additional stuff for the LaTeX preamble. 183 | #'preamble': '', 184 | } 185 | 186 | # Grouping the document tree into LaTeX files. List of tuples 187 | # (source start file, target name, title, author, documentclass [howto/manual]). 188 | latex_documents = [ 189 | ('index', 'Uni.tex', u'Unix 即集成开发环境', 190 | u'Conan', 'manual'), 191 | ] 192 | 193 | # The name of an image file (relative to this directory) to place at the top of 194 | # the title page. 195 | #latex_logo = None 196 | 197 | # For "manual" documents, if this is true, then toplevel headings are parts, 198 | # not chapters. 199 | #latex_use_parts = False 200 | 201 | # If true, show page references after internal links. 202 | #latex_show_pagerefs = False 203 | 204 | # If true, show URL addresses after external links. 205 | #latex_show_urls = False 206 | 207 | # Documents to append as an appendix to all manuals. 208 | #latex_appendices = [] 209 | 210 | # If false, no module index is generated. 211 | #latex_domain_indices = True 212 | 213 | 214 | # -- Options for manual page output -------------------------------------------- 215 | 216 | # One entry per manual page. List of tuples 217 | # (source start file, name, description, authors, manual section). 218 | man_pages = [ 219 | ('index', 'uni', u'Unix 即集成开发环境', 220 | [u'Conan'], 1) 221 | ] 222 | 223 | # If true, show URL addresses after external links. 
224 | #man_show_urls = False 225 | 226 | 227 | # -- Options for Texinfo output ------------------------------------------------ 228 | 229 | # Grouping the document tree into Texinfo files. List of tuples 230 | # (source start file, target name, title, author, 231 | # dir menu entry, description, category) 232 | texinfo_documents = [ 233 | ('index', 'Uni', u'Unix 即集成开发环境', 234 | u'Conan', 'Uni', 'A Chinese translation of Tom Ryder\'s series: Unix as IDE', 235 | 'Miscellaneous'), 236 | ] 237 | 238 | # Documents to append as an appendix to all manuals. 239 | #texinfo_appendices = [] 240 | 241 | # If false, no module index is generated. 242 | #texinfo_domain_indices = True 243 | 244 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 245 | #texinfo_show_urls = 'footnote' 246 | 247 | 248 | # -- Options for Epub output --------------------------------------------------- 249 | 250 | # Bibliographic Dublin Core info. 251 | epub_title = u'Unix 即集成开发环境' 252 | epub_author = u'Conan' 253 | epub_publisher = u'Conan' 254 | epub_copyright = u'2012, Conan' 255 | 256 | # The language of the text. It defaults to the language option 257 | # or en if the language is not set. 258 | #epub_language = '' 259 | 260 | # The scheme of the identifier. Typical schemes are ISBN or URL. 261 | #epub_scheme = '' 262 | 263 | # The unique identifier of the text. This can be a ISBN number 264 | # or the project homepage. 265 | #epub_identifier = '' 266 | 267 | # A unique identification for the text. 268 | #epub_uid = '' 269 | 270 | # A tuple containing the cover image and cover page html template filenames. 271 | #epub_cover = () 272 | 273 | # HTML files that should be inserted before the pages created by sphinx. 274 | # The format is a list of tuples containing the path and title. 275 | #epub_pre_files = [] 276 | 277 | # HTML files shat should be inserted after the pages created by sphinx. 278 | # The format is a list of tuples containing the path and title. 279 | #epub_post_files = [] 280 | 281 | # A list of files that should not be packed into the epub file. 282 | #epub_exclude_files = [] 283 | 284 | # The depth of the table of contents in toc.ncx. 285 | #epub_tocdepth = 3 286 | 287 | # Allow duplicate toc entries. 288 | #epub_tocdup = True 289 | -------------------------------------------------------------------------------- /debug.rst: -------------------------------------------------------------------------------- 1 | 调试 2 | ==== 3 | 4 | 程序在运行时有意料之外的行为时,Linux 提供了广泛而多样的命令行工具来诊断问题。用集成开发环境工具设断点来在程序运行时检查程序状态的朋友应该会对 ``gdb``\(GNU debugger),以及其相关的较没名气的 Perl 调试程序比较熟悉。其他还有一些工具则更专注于观察程序和系统的交互以及系统资源使用的细节。 5 | 6 | 用 ``gdb`` 做调试 7 | ----------------- 8 | 9 | 你可以用类似 Eclipse 和 Visual Studio 的调试方式去使用 ``gdb``\。如果你在调试一个你刚刚编译好的程序,编译时加个调试标签是有道理的。你只需要在用 ``gcc`` 编译的时候加上个 ``-g`` 的选项。如果代码写得有问题,你也可以加上 ``-Wall``\,这样所有的错误信息都会显示: :: 10 | 11 | $ gcc -g -Wall example.c -o example 12 | 13 | ``gdb`` 的经典用法就是在命令行下运行 C 或 C++ 编译的程序,从而在其运行直至崩溃时观察程序的状态。 :: 14 | 15 | $ gdb example 16 | ... 17 | Reading symbols from /home/tom/example...done. 18 | (gdb) 19 | 20 | 在 ``(gdb)`` 交互命令下,你可以输入 ``run`` 来运行程序,它会反馈给你更多有关导致错误的细节信息,比如下例的内存访问越界错误、出错的源码文件以及出错的代码行号。如果你像上面提到的那样在编译时加入调试符并观察其运行,排错任务会变得非常简单。 :: 21 | 22 | (gdb) run 23 | Starting program: /home/tom/gdb/example 24 | 25 | Program received signal SIGSEGV, Segmentation fault. 
26 | 0x000000000040072e in main () at example.c:43 27 | 43 printf("%d\n", *segfault); 28 | 29 | 在错误终止程序之后,你可以在 ``(gdb)`` 命令行下输入 ``backtrace`` 查看刚刚是哪一个功能模块运行了,传进该功能模块的参数也可能跟程序的崩溃有关。 :: 30 | 31 | (gdb) backtrace 32 | #0 0x000000000040072e in main () at example.c:43 33 | 34 | 你也可以用 ``break`` 来为 ``(gdb)`` 设置断点,这样程序运行到相应行号或某模块调用的时候就会暂停: :: 35 | 36 | (gdb) break 42 37 | Breakpoint 1 at 0x400722: file example.c, line 42. 38 | (gdb) break malloc 39 | Breakpoint 1 at 0x4004c0 40 | (gdb) run 41 | Starting program: /home/tom/gdb/example 42 | 43 | Breakpoint 1, 0x00007ffff7df2310 in malloc () from /lib64/ld-linux-x86-64.so.2 44 | 45 | 其后,用 ``step`` 来单步调试之后的代码会非常有帮助。你可以像使用其他 ``(gdb)`` 命令一样,按回车键重复单步调试: :: 46 | 47 | (gdb) step 48 | Single stepping until exit from function _start, 49 | which has no line number information. 50 | 0x00007ffff7a74db0 in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6 51 | 52 | 你甚至可以将 ``gdb`` 附到一个正在运行的进程上,只需要找到该进程的 ID 并将此 ID 传入 ``gdb``\: :: 53 | 54 | $ pgrep example 55 | 1524 56 | $ gdb -p 1524 57 | 58 | 这样做对 `重定向某些耗时长的任务的输出流 `_ 很有帮助。 59 | 60 | 用 ``valgrind`` 调试 61 | -------------------- 62 | 63 | 较新的 `valgrind `_ 可以用类似的方法来用作调试工具。它有好多种检测和调试的方式,但是有一种是最为有用的,即 Memcheck 工具,这个工具可以用来侦测常见的类似缓冲区溢出的内存错误: :: 64 | 65 | $ valgrind --leak-check=yes ./example 66 | ==29557== Memcheck, a memory error detector 67 | ==29557== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. 68 | ==29557== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info 69 | ==29557== Command: ./example 70 | ==29557== 71 | ==29557== Invalid read of size 1 72 | ==29557== at 0x40072E: main (example.c:43) 73 | ==29557== Address 0x0 is not stack'd, malloc'd or (recently) free'd 74 | ==29557== 75 | ... 76 | 77 | ``gdb`` 和 ``valgrind`` 可以 `组合使用 `_ 从而更加全面的观察程序运行。 Zed Shaw 写的 `《笨办法学 C 语言》 `_ 中就有个对 ``valgrind`` 非常好的介绍,有关如何用一些入门用法来调试某故意弄错的程序。 78 | 79 | 利用 ``ltrace`` 追踪系统和库的调用 80 | ---------------------------------- 81 | 82 | ``strace`` 和 ``ltrace`` 是为查看某程序的系统和库调用情况而设计的,追踪结果可以被显示在屏幕上也可以被写入到文件。 83 | 84 | 将你想监视的程序作为参数传进 ``ltrace`` 就可以开始监视了。它会将程序从头到尾调用的所有的系统和库都列出来。 :: 85 | 86 | $ ltrace ./example 87 | __libc_start_main(0x4006ad, 1, 0x7fff9d7e5838, 0x400770, 0x400760 88 | srand(4, 0x7fff9d7e5838, 0x7fff9d7e5848, 0, 0x7ff3aebde320) = 0 89 | malloc(24) = 0x01070010 90 | rand(0, 0x1070020, 0, 0x1070000, 0x7ff3aebdee60) = 0x754e7ddd 91 | malloc(24) = 0x01070030 92 | rand(0x7ff3aebdee60, 24, 0, 0x1070020, 0x7ff3aebdeec8) = 0x11265233 93 | malloc(24) = 0x01070050 94 | rand(0x7ff3aebdee60, 24, 0, 0x1070040, 0x7ff3aebdeec8) = 0x18799942 95 | malloc(24) = 0x01070070 96 | rand(0x7ff3aebdee60, 24, 0, 0x1070060, 0x7ff3aebdeec8) = 0x214a541e 97 | malloc(24) = 0x01070090 98 | rand(0x7ff3aebdee60, 24, 0, 0x1070080, 0x7ff3aebdeec8) = 0x1b6d90f3 99 | malloc(24) = 0x010700b0 100 | rand(0x7ff3aebdee60, 24, 0, 0x10700a0, 0x7ff3aebdeec8) = 0x2e19c419 101 | malloc(24) = 0x010700d0 102 | rand(0x7ff3aebdee60, 24, 0, 0x10700c0, 0x7ff3aebdeec8) = 0x35bc1a99 103 | malloc(24) = 0x010700f0 104 | rand(0x7ff3aebdee60, 24, 0, 0x10700e0, 0x7ff3aebdeec8) = 0x53b8d61b 105 | malloc(24) = 0x01070110 106 | rand(0x7ff3aebdee60, 24, 0, 0x1070100, 0x7ff3aebdeec8) = 0x18e0f924 107 | malloc(24) = 0x01070130 108 | rand(0x7ff3aebdee60, 24, 0, 0x1070120, 0x7ff3aebdeec8) = 0x27a51979 109 | --- SIGSEGV (Segmentation fault) --- 110 | +++ killed by SIGSEGV +++ 111 | 112 | 你同样也可以将其附到某已运行的进程上: :: 113 | 114 | $ pgrep example 115 | 5138 116 | $ ltrace -p 5138 117 | 118 | 一般情况,监视结果会超过一屏,所以用 ``-o`` 来设定一个输出文件会很有用,这样结果就全被记录到该文件里了: :: 119 | 120 | $ ltrace -o 
example.ltrace ./example 121 | 122 | 然后再用类似于 Vim 的工具打开 trace 文件, ``ltrace`` 文件会被语法高亮: 123 | 124 | .. figure:: origin/ltrace-vim.png 125 | :scale: 70% 126 | :alt: ltrace-vim 127 | 128 | 用 Vim 打开 ltrace 文件 129 | 130 | 我发觉在调试链接错误,或者 ``chroot`` 环境下缺少某些资源的时候, ``ltrace`` 特别有用,因为输出信息显示了程序在动态链接时搜索库文件、打开 ``/etc`` 下的配置文件、以及使用像 ``/dev/random`` 或 ``/dev/zero`` 这样的设备的过程。 131 | 132 | 利用 ``lsof`` 监视打开的文件 133 | ----------------------------- 134 | 135 | 如果你想查看一个正在运行的进程打开了哪些设备、文件或流,你可以使用 ``lsof``\: :: 136 | 137 | $ pgrep example 138 | 5051 139 | $ lsof -p 5051 140 | 141 | 举个例子,我家里服务器上的 ``apache2`` 进程的输出开始几行是这样的: :: 142 | 143 | # lsof -p 30779 144 | COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME 145 | apache2 30779 root cwd DIR 8,1 4096 2 / 146 | apache2 30779 root rtd DIR 8,1 4096 2 / 147 | apache2 30779 root txt REG 8,1 485384 990111 /usr/lib/apache2/mpm-prefork/apache2 148 | apache2 30779 root DEL REG 8,1 1087891 /lib/x86_64-linux-gnu/libgcc_s.so.1 149 | apache2 30779 root mem REG 8,1 35216 1079715 /usr/lib/php5/20090626/pdo_mysql.so 150 | ... 151 | 152 | 有趣的是,还有另一种办法可以做到这一点,就是检查动态目录 ``/proc`` 里该进程的相应记录: :: 153 | 154 | # ls -l /proc/30779/fd 155 | 156 | 这在排查令人困惑的文件锁问题,或鉴别某进程是否还保有已经不需要的文件时非常有用。 157 | 158 | 用 ``pmap`` 查看内存分配 159 | ------------------------ 160 | 161 | 最后一个调试小技巧,你可以用 ``pmap`` 查看某进程的内存分配情况: :: 162 | 163 | # pmap 30779 164 | 30779: /usr/sbin/apache2 -k start 165 | 00007fdb3883e000 84K r-x-- /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted) 166 | 00007fdb38853000 2048K ----- /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted) 167 | 00007fdb38a53000 4K rw--- /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted) 168 | 00007fdb38a54000 4K ----- [ anon ] 169 | 00007fdb38a55000 8192K rw--- [ anon ] 170 | 00007fdb392e5000 28K r-x-- /usr/lib/php5/20090626/pdo_mysql.so 171 | 00007fdb392ec000 2048K ----- /usr/lib/php5/20090626/pdo_mysql.so 172 | 00007fdb394ec000 4K r---- /usr/lib/php5/20090626/pdo_mysql.so 173 | 00007fdb394ed000 4K rw--- /usr/lib/php5/20090626/pdo_mysql.so 174 | ...
175 | total 152520K 176 | 177 | 178 | 以上的结果可以显示出在运行的进程使用了哪些库,包括那些在共享内存里的库。最后给出的总计可能会有点令人误解,因为进程加载的很多库是共享库,而此进程也不一定是唯一在用这些库的进程。当某进程使用共享库的时候, `确定此进程的“真实”内存使用 `_ 要比想象中的情况更加复杂。 179 | -------------------------------------------------------------------------------- /editing.rst: -------------------------------------------------------------------------------- 1 | 编辑器 2 | ====== 3 | 4 | 文本编辑器对所有程序员来说都是核心工具,这也是为什么它会引发虽不当真却也狂热的争议。Unix,很显而易见的,是与两大最经久不衰的受宠编辑器最有渊源的操作系统。这两个编辑器是 Emacs 和 Vi,它们的现代版本便是 GNU Emacs 和 Vim,两个编辑哲学迥异的编辑器在效能上却旗鼓相当。 5 | 6 | 作为 Vim 的异教徒,我来聊聊 Vim 对于编程不可或缺的功能,具体来说就是从 Vim 里调用 Linux shell 工具来完善编辑器的内建功能。这里谈到的一些原理对 Emacs 也适用,但是对那些欠强大的编辑器来说并无参考价值,比如 Nano。 7 | 8 | 这篇帖子只能是纵览全局,毕竟 Vim 的编程工具组非常多,但即使是很泛泛地谈,它也会是一篇相当长的帖子。我会把重点放在要点和一些我认为很有帮助的东西上,并且有可能的话提供一些文章的链接以便对此话题有更加全面的了解。同时也别忘了 Vim 的 ``:help``\,很多新人都没有想到这份文档是如此高质量又好用。 9 | 10 | 文件类型侦测 11 | ------------ 12 | 13 | Vim 有内建的设置用于调整其运作方式,具体来说比如它基于被加载的文件类型进行句法高亮,这么做一直非常有效。另外,文件类型的侦测还可以让你根据某种语言常规的书写风格为此语言设置特定的缩进风格。这应该是你最早需要加进 ``.vimrc`` 里的内容之一: :: 14 | 15 | if has("autocmd") 16 | filetype on 17 | filetype indent on 18 | filetype plugin on 19 | endif 20 | 21 | 句法高亮 22 | -------- 23 | 24 | 即便你只是在 16 色的命令行下工作,如果你还没有这么做,那赶紧在你的 ``.vimrc`` 里加入这句话吧: :: 25 | 26 | syntax on 27 | 28 | 默认的 16 色命令行配色算不上好看,这也是条件所限,但是它们已经够用了。绝大多数语言的句法定义文件都可以轻易获取,而且效果都不错。这儿有 `一大堆配色方案 `_\,而且调整甚至自己写都不困难。当然用 `256色的命令行 `_ 或 gVim 会提供更多选项。好的句法高亮文件还会在有明显句法错误的时候用醒目的红色背景标示。 29 | 30 | 行号 31 | ---- 32 | 33 | 你在传统的 IDE 里可能已经习惯了行号,打开它: :: 34 | 35 | set number 36 | 37 | 如果你在用 Vim 7.3 及以上的版本,你可能也会想试试把绝对行号换成相对行号: :: 38 | 39 | set relativenumber 40 | 41 | 标签文件 42 | -------- 43 | 44 | Vim 对 ``ctags`` 实用工具的输出 `支持得很好 `_\。它能让你快速地在整个项目中搜索某个特定的标识符;或者不管在不在同一个文件,直接从某变量被使用的地方跳转到该变量被声明的位置。对含有多个文件的 C 语言项目,这可以节省大把本来会被浪费掉的时间,而且很有可能 Vim 的这个功能是目前主流 IDE 同类功能中做得最好的。 45 | 46 | 在你的项目根目录下(很多流行的编程语言都可以)运行 ``:!ctags -R`` 来生成 ``tags`` 文件,此文件里是整个项目里所有的声明和标识符的位置。一旦 ``tags`` 文件生成,你就可以像下面这样来搜索某些标签了: :: 47 | 48 | :tag someClass 49 | 50 | 用 ``:tn`` 和 ``:tp``\,你就可以遍历搜索结果了。自带的标签功能已经可以满足你大部分的需求了,但是如果想要标签列表窗口这样的功能,你可以试试安装很受欢迎的 `Taglist 插件 `_\。Tim Pope 的 `Unimpaired 插件 `_ 也有一些有用的相关映射。 51 | 52 | 调用外部程序 53 | ------------ 54 | 55 | 有两种主要方法可以在 Vim 里调用外部程序: 56 | 57 | * ``:!``\——在从 Vim 内部跑某命令时很有用,尤其是在你想把运行结果输出到 Vim buffer 的情况下。 58 | * ``:shell``\——以 Vim 子进程的方式弹开一个命令行。适合交互式命令。 59 | 60 | 第三种方法我不想在这里深入讨论,就是用像 `Conque `_ 这样的插件在 Vim buffer 里模拟命令行。我自己试了下发现几乎不能用,我敢断言这是个糟糕的设计。以下摘自 ``:help design-not``\: 61 | 62 | Vim 不是 Shell 也不是操作系统。你不能在 Vim 里跑 Shell 也不能用它来控制调试器。相反的:把 Vim 当作 Shell 或 IDE 的一部分。 63 | 64 | Lint 程序和句法检查器 65 | ````````````````````` 66 | 67 | 调用外部程序(如 ``perl -c``\, ``gcc``\)来检测句法,是在 Vim 里使用 ``:!`` 的好例子。如果你在编写 Perl 文件,就可以这样跑: :: 68 | 69 | :!perl -c % 70 | 71 | /home/tom/project/test.pl syntax OK 72 | 73 | Press Enter or type command to continue 74 | 75 | 上面的百分号 ``%`` 是当前缓冲区里所加载文件的简写。如果运行的命令有回显,那回显就会显示在命令下方。如果你需要经常调用句法检查器,你也可以在 ``.vimrc`` 里把它设置成命令,甚至再设置一个组合键。在这个例子里,我们可以定义一个 ``:PerlLint`` 命令,并且可以在正常模式下用 ``\l`` 触发(译者按:1. 作者指的正常模式是相对于 Vim 里的输入模式和选择模式; 2. 作者 Vim 的 ``<leader>`` 76 | 键是用的 ``\``\,这个设置因人而异): :: 77 | 78 | command PerlLint !perl -c % 79 | nnoremap <leader>l :PerlLint<CR> 80 | 81 | 对不少语言而言其实都有一个更好的办法来实现以上的事情,而且这个方法能让我们用上 Vim 自带的 quickfix 窗口。首先我们要对特定的文件类型设置一个合适的 ``makeprg``\,在这个例子里,引入一个能把输出写成 Vim quickfix 列表可用格式的模块,并定义两条错误格式(errorformat): :: 82 | 83 | :set makeprg=perl\ -c\ -MVi::QuickFix\ % 84 | :set errorformat+=%m\ at\ %f\ line\ %l\. 85 | :set errorformat+=%m\ at\ %f\ line\ %l 86 | 87 | 你有可能先得安装 ``Vi::QuickFix`` 模块,可以从 CPAN 安装,也可以用 Debian 包 ``libvi-quickfix-perl``\。安装完成后,保存文档,然后输入 ``:make`` 来检查句法。如果找到错误了,你可以用 ``:copen`` 打开 quickfix 窗口检查那些错误,并用 ``:cn`` 和 ``:cp`` 上下移动。 88 | 89 | ..
figure:: origin/vim-quickfix.png 90 | :alt: vim-quickfix 91 | 92 | 用 Vim quickfix 检测一个 Perl 文件 93 | 94 | 这种方法同样也适用于 `gcc `_ 的输出,以及几乎其他任何一种句法检测的输出,输出包括文件名、行号、错误信息。它甚至可以支持 `像 PHP 一样专注于网页的语言 `_\,还有像 `JSLint for JavaScript `_ 95 | 这样的工具。另外还有一个非常棒的插件叫 `Syntastic `_ 也有类似的功效。 96 | 97 | 从其他命令读取输出 98 | `````````````````` 99 | 你可以用 ``:r!`` 把呼叫命令的回显直接贴到当前工作的文档里。例如,把当前目录的文档列表放进当前编辑文件就可以输入: :: 100 | 101 | :r!ls 102 | 103 | 这种读取方式当然不光可以用在命令回显;你可以用 ``:r`` 轻松读进其他文件的内容,比如你的公钥或是你自定义的样版文件: :: 104 | 105 | :r ~/.ssh/id_rsa.pub 106 | :r ~/dev/perl/boilerplate/copyright.pl 107 | 108 | 用其他命令过滤输出 109 | `````````````````` 110 | 加以延伸,其实你可以把 Vim buffer 里的文字放进外部命令过滤,或者是用选择模式选择一个文本区块,然后用命令的输出覆盖。因为 Vim 的块状选择模式很适合用在柱形数据,所以它很适合配合 ``column``\、 ``cut``\、 ``sort``\、 ``awk`` 等类似的工具使用。 111 | 112 | 例如,你可以将整个文件按第二列逆序排列: :: 113 | 114 | :%!sort -k2 -r 115 | 116 | 你可以在所选则文字中找到符合 ``/vim/`` 样式并只显示其中的第三列: :: 117 | 118 | :'<,'>!awk '/vim/ {print $3}' 119 | 120 | 你也可以把前10行的关键词用漂亮的行列格式排好: :: 121 | 122 | :1,10!column -t 123 | 124 | 真的,所有类型的文字过滤器或命令都可以像上面的例子一样用在 Vim 里,一个简单的互操作性就可以让编辑器的能力无限延伸。这很有效地将 Vim buffer 变成了字符流,而字符流正是这些经典工具之间用以交流的语言。 125 | 126 | 自带的其他选择 127 | `````````````` 128 | 值得注意的是,像排序和查找之类常见的操作,Vim 有其自带的方法 ``:sort`` 和 ``:grep``\,这些或许在你在 Windows 下用 Vim 时遇到困难时很有帮助,但是这些自带方法并不具备适应 shell 命令的能力。 129 | 130 | 对比文件 131 | -------- 132 | 133 | Vim 有个 *diffing* 模式,即 ``vimdiff``\,它不但允许你查看不同版本文件间的区别,还提供三向合并用以解决版本冲突,你可以用 ``:diffput`` 和 ``:diffget`` 这样的命令来选择合适的代码段。你可以在命令行下直接运行 ``vimdiff``\,需要至少两个文件才能做对比: :: 134 | 135 | $ vimdiff file-v1.c file-v2.c 136 | 137 | .. figure:: origin/vim-diff.png 138 | :scale: 70% 139 | :alt: vim-diff 140 | 141 | 用 Vimdiff 对比 .vimrc 文件 142 | 143 | 版本控制 144 | -------- 145 | 146 | 你可以在 Vim 下直接调用版本控制的命令,这可能也是你大多数时候最需要的。``%`` 永远是当前激活显示窗口的内容,记住这点非常有用: :: 147 | 148 | :!svn status 149 | :!svn add % 150 | :!git commit -a 151 | 152 | 最近集成 Git 功能到 Vim 的冠军插件很明显就是 Tim Pope 的 `Fugitive `_ 了。我强烈建议每个用 Git 和 Vim 的开发者使用。此系列的第七部分会更多更详细地介绍关于 Unix 的版本控制和历史。 153 | 154 | 差异 155 | ---- 156 | 157 | 不少用惯了图形化界面 IDE 的程序员把 Vim 当作玩具或文物的一部分原因是它经常只是被看作是在服务器下修改文件用的工具,而非其强大编辑能力的英雄本色。它自带的一些功能对 Unix 环境很友好以至于很多有经验的用户都会被它的强大功能震惊。 158 | -------------------------------------------------------------------------------- /files.rst: -------------------------------------------------------------------------------- 1 | 文件 2 | ==== 3 | 4 | 自带的文件管理系统是 IDE 亮点功能之一,从最基本的文件移动、重命名和删除功能,到更加具体的编译和句法检查。批量文件的操作也非常有用,比如找出某种后缀或大小的所有文件,或者搜索出有一定命名模式的文件。在本系列的第一篇文章里,我将考察一些非常有用的工具来对批量文件进行操作,这些工具应该也是大多数 Linux 用户所熟知的。 5 | 6 | 列举文件 7 | -------- 8 | 9 | ``ls`` 恐怕是某管理员最初学的用以显示某路径下的文件列表的命令之一,而大多数管理员也应该是知道 ``-a`` 和 ``-l`` 选项,它们分别是用来显示包括点文件(即隐藏文件)在内的所有文件和更详细的文件相关的信息栏。 10 | 11 | 还有一些 ``ls`` 的选项不那么常用,但是对编程很有帮助: 12 | 13 | * ``-t`` —— 按最后编辑日期排序,最新的最先。这在某个大目录里找出最近修改的文件列表时很有用,比如将结果导入( ``pipe`` ) ``head`` 或者 ``sed 10q``\。或许加上 ``-l`` 会效果更好。当然如果你想获取最旧的文件列表,只要加 ``-r`` 反转列表即可。 14 | * ``-X`` —— 按文件类型分类。这在多语言或多后缀的项目中特别方便,比如头文件和源文件分开,或区分开源文件和生成文件或目录。 15 | * ``-v`` —— 按照文件名里的版本号排序。 16 | * ``-S`` —— 按文件大小排序。 17 | * ``-R`` —— 递归地列举文件。这个选项和 ``-l`` 组合使用并将结果导出到 ``less`` 效果很好。 18 | 19 | 因为生成的列表就是文本,所以你其实可以把结果导出给类似 ``vim`` 的进程,然后再加上一些对每个文件作用的解释,这就成了一个目录文件,又或者把列表加进 README 文件: :: 20 | 21 | $ ls -XR | vim - 22 | 23 | 这种工作其实我们可以很容易地让 ``make`` 自动生成,我会在之后的文章里谈到。 24 | 25 | 查找文件 26 | -------- 27 | 28 | 非常有趣的是你其实可以只用一个 ``find`` 命令,不用任何其他参数就列出所有文件,甚至还包括相对路径。当然如果把结果导入 ``sort`` 肯定效果更好: :: 29 | 30 | $ find | sort 31 | . 
32 | ./Makefile 33 | ./README 34 | ./build 35 | ./client.c 36 | ./client.h 37 | ./common.h 38 | ./project.c 39 | ./server.c 40 | ./server.h 41 | ./tests 42 | ./tests/suite1.pl 43 | ./tests/suite2.pl 44 | ./tests/suite3.pl 45 | ./tests/suite4.pl 46 | 47 | 如果你想要 ``ls -l`` 样式的列表,只要在 ``find`` 后面加上 ``-ls``\: :: 48 | 49 | $ find -ls | sort -k 11 50 | 1155096 4 drwxr-xr-x 4 tom tom 4096 Feb 10 09:37 . 51 | 1155152 4 drwxr-xr-x 2 tom tom 4096 Feb 10 09:17 ./build 52 | 1155155 4 -rw-r--r-- 1 tom tom 2290 Jan 11 07:21 ./client.c 53 | 1155157 4 -rw-r--r-- 1 tom tom 1871 Jan 11 16:41 ./client.h 54 | 1155159 32 -rw-r--r-- 1 tom tom 30390 Jan 10 15:29 ./common.h 55 | 1155153 24 -rw-r--r-- 1 tom tom 21170 Jan 11 05:43 ./Makefile 56 | 1155154 16 -rw-r--r-- 1 tom tom 13966 Jan 14 07:39 ./project.c 57 | 1155080 28 -rw-r--r-- 1 tom tom 25840 Jan 15 22:28 ./README 58 | 1155156 32 -rw-r--r-- 1 tom tom 31124 Jan 11 02:34 ./server.c 59 | 1155158 4 -rw-r--r-- 1 tom tom 3599 Jan 16 05:27 ./server.h 60 | 1155160 4 drwxr-xr-x 2 tom tom 4096 Feb 10 09:29 ./tests 61 | 1155161 4 -rw-r--r-- 1 tom tom 288 Jan 13 03:04 ./tests/suite1.pl 62 | 1155162 4 -rw-r--r-- 1 tom tom 1792 Jan 13 10:06 ./tests/suite2.pl 63 | 1155163 4 -rw-r--r-- 1 tom tom 112 Jan 9 23:42 ./tests/suite3.pl 64 | 1155164 4 -rw-r--r-- 1 tom tom 144 Jan 15 02:10 ./tests/suite4.pl 65 | 66 | 要注意,在这种情况下,我得设定 ``sort`` 对第11列排序,即对文件名排序;这里所用的标签是 ``-k``\。 67 | 68 | ``find`` 有它自己的一套复杂的过滤语法。下面列举一些最常用的过滤器,你可以用它们来获取特定的文件列表: 69 | 70 | * ``find -name '*.c'`` —— 查找文件名符合某 shell 通配样式的文件。用 ``-iname`` 开启大小写不敏感搜索。 71 | * ``find -path '*test*'`` —— 查找路径符合某 shell 通配样式的文件。用 ``-ipath`` 开启大小写不敏感搜索。 72 | * ``find -mtime -5`` —— 查找近五天内编辑过的文件。你也可以用 ``+5`` 来查找五天之前编辑过的文件。 73 | * ``find -newer server.c`` —— 查找比 ``server.c`` 更新的文件。 74 | * ``find -type d`` —— 查找所有文件夹。如果想找出所有普通文件,那就用 ``-type f``\;找符号链接就用 ``-type l``\。 75 | 76 | 要注意,上面提到的这些过滤器都是可以组合使用的,例如找出近两天内编辑过的 C 源码: :: 77 | 78 | $ find -name '*.c' -mtime -2 79 | 80 | 默认情况下, ``find`` 对搜索结果所采取的动作只是简单地通过标准输出输出一个列表,然而其实还有其他一些有用的后续动作: 81 | 82 | * ``-ls`` —— 如前文,提供了一种类 ``ls -l`` 式的列表。 83 | * ``-delete`` —— 删除符合查找条件的文件。 84 | * ``-exec`` —— 对搜索结果里的每个文件都运行某个命令, ``{}`` 会被替换成适当的文件名,并且命令用 ``\;`` 终结。例如: :: 85 | 86 | $ find -name '*.pl' -exec perl -c {} \; 87 | 88 | 你也可以使用 ``+`` 作为终止符来对所有结果只运行一次命令。我还发现一个我经常使用的小技巧,就是用 ``find`` 生成一个文件列表,然后在 Vim 的垂直分窗中编辑: :: 89 | 90 | $ find -name '*.c' -exec vim {} + 91 | 92 | *早先版本的 Unix as IDE 建议* ``xargs`` *与* ``find`` *配合使用。在大多数情况下并不需要这么做,而且使用* ``-exec`` *或是* ``while read -r`` *循环的方式来处理文件名中带空格的文件更加灵活。* 93 | 94 | 搜索文件 95 | -------- 96 | 97 | 更多时候,我们对基于文件内容的搜索比基于文件属性的搜索更感兴趣。这毫无疑问要用 ``grep``\,更加具体来说,应该是 ``grep -R``\。它会递归地找出当前目录下内容中包含‘someVar’的文件: :: 98 | 99 | $ grep -FR 'someVar' . 100 | 101 | 别忘了大小写不敏感的标签 ``-i``\,因为 ``grep`` 默认工作方式是大小写敏感的: :: 102 | 103 | $ grep -iR 'somevar' . 104 | 105 | 而且,你也可以用 ``grep -l`` 只打印出符合条件的文件名,而非文件内容选段。 :: 106 | 107 | $ grep -lR 'somevar' . 108 | 109 | 如果你写的脚本或批处理任务需要上面的输出内容,可以使用 ``while`` 和 ``read`` 来处理文件名中的空格和其他特殊字符: :: 110 | 111 | grep -lR someVar | while IFS= read -r file; do 112 | head "$file" 113 | done 114 | 115 | 如果你在你的项目里使用了版本控制软件,它通常会在 ``.svn``\, ``.git``\, ``.hg`` 目录下包含一些元数据。你也可以很容易地用 ``grep -v`` 把这些目录移出搜索范围,当然得用 ``grep -F`` 指定一个恰当且确定的字符串,即要移除的目录名: :: 116 | 117 | $ grep -R 'someVar' .
| grep -vF '.svn' 118 | 119 | 部分版本的 ``grep`` 包含了 ``--exclude`` 和 ``--exclude-dir`` 选项,这看起来更加易读。 120 | 121 | 当然,还有另外一种很流行的 `代替 grep `_ 的工具叫做 ``ack``\,默认情况下它就帮你把上面那些个麻烦的东西免除了。它同样也支持大多数黑客最爱的 Perl 兼容的正则表达式(PCRE)。而且它还有很多实用功能来帮助你完成有关源代码的工作。当然使用古朴的 ``grep`` 没什么不好的,无论怎样它是 Unix 系统自带的,但是如果你可以安装 ``ack``\,我还是非常推荐的。现在你已经可以很容易地用个叫 ``ack-grep`` Debian 包或者一个 Perl 脚本来安装。 122 | 123 | 我提到用一些较新的 Perl 脚本来代替经典工具 ``grep`` 可能会让一些 Unix 纯粹主义者很不爽。但是我不认为 Unix 哲学或以 Unix 作 IDE ,就是非要在有一些可用来解决新问题的工具时反而使用一些“古典”工具,毕竟这些新工具跟那些“古典”工具在思想上是一致的。 124 | 125 | 文件元数据 126 | ---------- 127 | 128 | ``file`` 工具可以对所给的文件一行简短的介绍,它用文件后缀、头部信息和一些其他的线索来判断文件。你在检查一堆你不熟悉的文件时使用 ``find`` 非常方便: :: 129 | 130 | $ find -exec file {} \; 131 | .: directory 132 | ./hanoi: Perl script, ASCII text executable 133 | ./.hanoi.swp: Vim swap file, version 7.3 134 | ./factorial: Perl script, ASCII text executable 135 | ./bits.c: C source, ASCII text 136 | ./bits: ELF 32-bit LSB executable, Intel 80386, version ... 137 | 138 | 匹配文件 139 | -------- 140 | 141 | 作为本篇文章的最后一个技巧,我会建议你学习一些有关模式匹配和 Bash 下的括号表达式。你可以在我之前的一篇叫做 `Bash shell expansion `_ 的文章里看到。 142 | 143 | 以上便把经典 Unix 命令行变成了一个可在编程项目中使用的非常强大的文件管理器。 144 | -------------------------------------------------------------------------------- /index.rst: -------------------------------------------------------------------------------- 1 | .. Unix 即集成开发环境 documentation master file, created by 2 | sphinx-quickstart on Tue Feb 28 02:37:06 2012. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | 7 | 8 | ========================= 9 | Unix 即集成开发环境 10 | ========================= 11 | 12 | 动机 13 | ---- 14 | 15 | 前阵子,我在 `Hacker News `_ 上看到Tom Ryder的一个系列文章, `谈关于Unix和IDE(集成开发环境)的 `_\。其实类似的文章在HN上经常看到,只是由于我是开源和Vim狂热者才会每次都点开看看——每次都能学到新的东西。 16 | 17 | 那一阵子也正好好几次跟同学和朋友说到有关的话题。他们都是 IDE 控,这不是什么不好的事情,我自己在一些时候也会选择 IDE。但是他们普遍对使用 Unix 和类似 Vim 的编辑工具没有正确的认识,以至于我的观点得不到应有程度的共鸣或共识。而我恰好看到了这个系列文章,觉得这是一个非常不错的入门系列。而且我在很多观点上都认同作者,所以本打算写一篇博文谈谈来着,现在索性来翻译这个系列文章好了。(之所以要翻译而不是直接转载是因为我发现对于 Unix 以及相关开发工具的使用的知识缺乏在中国存在度很高,语言可以是原因之一,所以这个系列不是为我的朋友和同学而翻译,更大程度上是为了普及知识。) 18 | 19 | 如果您对此话题很感兴趣,不妨也读一读 `The unix programming environment `_\。如果你现在还在大学里,也不妨上一些有关的课程,一定会对你的职业生涯有很大的帮助。 20 | 21 | 项目主页: ``_ 22 | 23 | **最后声明** ,我并不想鼓吹使用某种开发工具或使用某种工作流程,和此系列作者一样,只普及知识。如果这个系列的文章颠覆了你的世界观,你因此变成了 Unix 粉,我们概不负责。 24 | 25 | 目录 26 | ---- 27 | 28 | .. toctree:: 29 | :maxdepth: 3 30 | 31 | 前言 32 | 文件 33 | 编辑器 34 | 编译 35 | 创建 36 | 调试 37 | 版本控制 38 | 39 | 项目说明 40 | -------- 41 | 42 | * 使用的是 `sphinx `_ 文档生成器。 43 | * 项目主页 `Unix as IDE (Chinese) `_\。 44 | 45 | 贡献者 46 | ------ 47 | 48 | 本翻译项目的贡献者: `ConanChou `_\, 49 | `ccl13 `_\, 50 | `A2ZH `_\, 51 | `Peter `_\, 52 | `Derrick `_\。 53 | 54 | .. Indices and tables 55 | ================== 56 | 57 | * :ref:`genindex` 58 | * :ref:`modindex` 59 | * :ref:`search` 60 | -------------------------------------------------------------------------------- /introduction.rst: -------------------------------------------------------------------------------- 1 | 前言 2 | ==== 3 | 4 | 不管是刚刚入门的新手还是经验丰富的大牛程序员,他们都很喜欢集成开发环境(IDE)的概念。它作为兼备文件结构组织、编写、维护、测试和排错工具于一体的应用程序,对程序员们非常有价值。而且,它为各种语言量身定做,提供了类似自动补全、句法检测和高亮等功能。 5 | 6 | 这样的工具在包括 Linux 和 BSD 在内的主流桌面系统下都可以使用,并且绝大多数都是免费的,在这种情况下你还有什么理由用 Windows 记事本或者 ``nano`` 或 ``cat`` 来写程序呢。 7 | 8 | 然而,在 Unix 以及它的现代分支的忠饭血液里流淌着一种文化基因,他们认为“Unix 就是集成开发环境”,因为那些在命令行里的工具可以轻易地实现上面提到的各种桌面版的牛叉IDE。从这一点来说,分歧很大。你可能会对由小小 Bash shell 所能达成的复杂开发环境惊讶,甚至你会觉得 Unix IDE 跟 Eclipse 或 Microsoft Visual Studio之类的IDE根本不是同一意义的。 9 | 10 | 11 | Unix 怎么就是个IDE了? 
12 | ---------------------- 13 | 14 | 使用 IDE 最主要的原因是它集成了你所有要用的工具,而且你可以在不用太费事去配置各个应用的前提下就能很协调地使用它们,并且用户界面还是基本一致的。在图形界面下大家都想要这种能够集成在一起的工具,那是因为这类窗口应用除了用复制粘贴,没有别的方法使他们更好地协同工作,它们缺失一种 *共用接口(common interface)* 。 15 | 16 | 有关这个问题有趣的是,对于 shell 用户来说这些设计巧妙、经久不衰的 Unix 工具已经有共用接口了,要么是以文本流的形式,要么是以持久化文件对象的形式,这用一句 Unix 世界的格言说就是“一切皆文件”。Unix 里几乎所有东西都是围绕这两个基本概念来组建的,加之,这些有着40年历史的高性能工具的用户和开发者都极具一流的互用性,这些都为 Unix 能成为一个足够强大、成熟而全面的 IDE 打下了坚实基础。 17 | 18 | 正确的观点 19 | ---------- 20 | 21 | 对 Unix 作 IDE 的观点并不是古老的 Unix 卫道者遗留下来的。其实你可以用另一种方式去看待,就像 Emacs 和 Vi 这两款古老的文本编辑器的现代化身(GNU Emacs 和 Vim)有着非常活跃的社区为它们开发各种插件用以各种编辑任务。这两种编辑器都有着各种各样的插件足以满足你所有的编程需求,而且就像其他 Vim 热衷者一样,我可以一口气说上至少六七个我觉得是必备的插件。 22 | 23 | 然而,我经常读到一些文章讲一些开发者如何努力去把这些文本编辑器转变成IDE。比如 `never needing to leave Vim `_\,或者 `never needing to leave Emacs `_\。但是我认为硬把 Vim 或 Emacs 转变成它们本来不是的东西,是在解决问题的思维方式上出了问题。Vim 的作者 Bram Moolenaar,在某种程度上貌似是同意我的想法的,可见 `:help design-not `_\。其实只需要按``Control+Z``就可以回到 Shell,而 Shell 的成熟和高度可组合性的工具组会给你任何编辑器都没法给你的力量。 24 | 25 | 有关此系列文章 26 | -------------- 27 | 28 | 在此系列文章中,我会遍历 IDE 的六大功能,并用一些例子来展现如何轻松地组合使用这些 Linux 里的已有工具。这个系列不会涵盖所有知识,我所演示的工具也不会是唯一的解决方案。 29 | 30 | * 文件和项目管理—— ``ls``, ``find``, ``grep``/``ack``, ``bash`` 31 | * 文本编辑器和编辑工具—— ``vim``, ``awk``, ``sort``, ``column`` 32 | * 编译器及解释器—— ``gcc``, ``perl`` 33 | * 创建工具—— ``make`` 34 | * 排错器—— ``gdb``, ``valgrind``, ``ltrace``, ``lsof``, ``pmap`` 35 | * 版本控制—— ``diff``, ``patch``, ``svn``, ``git`` 36 | 37 | 别误解我 38 | -------- 39 | 40 | 我不认为 IDE 不好,相反,我觉得它们是很美好的存在,所以我才试图说服你 Unix 可以用作 IDE,或至少可以当作是。我也不是想说 Unix 总是所有编程任务的最佳工具。相比一些像 Java 或 C# 之类的“行业”语言或大量需要编写GUI的项目, Unix 公认地适合做 C, C++, Python, Perl,或者 Shell 之类的开发。尤其是,我更加不是要试图说服你要你放弃来之不易的 Eclipse 或 Microsoft Visual Studio 的知识转而投奔有时有些难以理解的命令行世界。我只是想展示一下栅栏另一边的世界,仅此而已。 41 | 42 | -------------------------------------------------------------------------------- /make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^` where ^ is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 31 | echo. text to make text files 32 | echo. man to make manual pages 33 | echo. texinfo to make Texinfo files 34 | echo. gettext to make PO message catalogs 35 | echo. changes to make an overview over all changed/added/deprecated items 36 | echo. linkcheck to check all external links for integrity 37 | echo. 
doctest to run all doctests embedded in the documentation if enabled 38 | goto end 39 | ) 40 | 41 | if "%1" == "clean" ( 42 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 43 | del /q /s %BUILDDIR%\* 44 | goto end 45 | ) 46 | 47 | if "%1" == "html" ( 48 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 49 | if errorlevel 1 exit /b 1 50 | echo. 51 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 52 | goto end 53 | ) 54 | 55 | if "%1" == "dirhtml" ( 56 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 57 | if errorlevel 1 exit /b 1 58 | echo. 59 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 60 | goto end 61 | ) 62 | 63 | if "%1" == "singlehtml" ( 64 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 65 | if errorlevel 1 exit /b 1 66 | echo. 67 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 68 | goto end 69 | ) 70 | 71 | if "%1" == "pickle" ( 72 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 73 | if errorlevel 1 exit /b 1 74 | echo. 75 | echo.Build finished; now you can process the pickle files. 76 | goto end 77 | ) 78 | 79 | if "%1" == "json" ( 80 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 81 | if errorlevel 1 exit /b 1 82 | echo. 83 | echo.Build finished; now you can process the JSON files. 84 | goto end 85 | ) 86 | 87 | if "%1" == "htmlhelp" ( 88 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 89 | if errorlevel 1 exit /b 1 90 | echo. 91 | echo.Build finished; now you can run HTML Help Workshop with the ^ 92 | .hhp project file in %BUILDDIR%/htmlhelp. 93 | goto end 94 | ) 95 | 96 | if "%1" == "qthelp" ( 97 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 98 | if errorlevel 1 exit /b 1 99 | echo. 100 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 101 | .qhcp project file in %BUILDDIR%/qthelp, like this: 102 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Uni.qhcp 103 | echo.To view the help file: 104 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Uni.ghc 105 | goto end 106 | ) 107 | 108 | if "%1" == "devhelp" ( 109 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 110 | if errorlevel 1 exit /b 1 111 | echo. 112 | echo.Build finished. 113 | goto end 114 | ) 115 | 116 | if "%1" == "epub" ( 117 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 118 | if errorlevel 1 exit /b 1 119 | echo. 120 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 121 | goto end 122 | ) 123 | 124 | if "%1" == "latex" ( 125 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 126 | if errorlevel 1 exit /b 1 127 | echo. 128 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 129 | goto end 130 | ) 131 | 132 | if "%1" == "text" ( 133 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 134 | if errorlevel 1 exit /b 1 135 | echo. 136 | echo.Build finished. The text files are in %BUILDDIR%/text. 137 | goto end 138 | ) 139 | 140 | if "%1" == "man" ( 141 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 142 | if errorlevel 1 exit /b 1 143 | echo. 144 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 145 | goto end 146 | ) 147 | 148 | if "%1" == "texinfo" ( 149 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 150 | if errorlevel 1 exit /b 1 151 | echo. 152 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 
153 | goto end 154 | ) 155 | 156 | if "%1" == "gettext" ( 157 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 158 | if errorlevel 1 exit /b 1 159 | echo. 160 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 161 | goto end 162 | ) 163 | 164 | if "%1" == "changes" ( 165 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 166 | if errorlevel 1 exit /b 1 167 | echo. 168 | echo.The overview file is in %BUILDDIR%/changes. 169 | goto end 170 | ) 171 | 172 | if "%1" == "linkcheck" ( 173 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 174 | if errorlevel 1 exit /b 1 175 | echo. 176 | echo.Link check complete; look for any errors in the above output ^ 177 | or in %BUILDDIR%/linkcheck/output.txt. 178 | goto end 179 | ) 180 | 181 | if "%1" == "doctest" ( 182 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 183 | if errorlevel 1 exit /b 1 184 | echo. 185 | echo.Testing of doctests in the sources finished, look at the ^ 186 | results in %BUILDDIR%/doctest/output.txt. 187 | goto end 188 | ) 189 | 190 | :end 191 | -------------------------------------------------------------------------------- /origin/Building.md: -------------------------------------------------------------------------------- 1 | # Unix as IDE: Building 2 | 3 | Posted on [February 13, 2012](http://blog.sanctum.geek.nz/unix-as-ide- 4 | building/) by [Tom Ryder](http://blog.sanctum.geek.nz/author/tom/) 5 | 6 | Because compiling projects can be such a complicated and repetitive process, a 7 | good IDE provides a means to abstract, simplify, and even automate software 8 | builds. Unix and its descendents accomplish this process with a `Makefile`, a 9 | prescribed recipe in a standard format for generating executable files from 10 | source and object files, taking account of changes to only rebuild what's 11 | necessary to prevent costly recompilation. 12 | 13 | One interesting thing to note about `make` is that while it's generally used 14 | for compiled software build automation and has many shortcuts to that effect, 15 | it can actually effectively be used for any situation in which it's required 16 | to generate one set of files from another. One possible use is to generate 17 | web-friendly optimised graphics from source files for deployment for a 18 | website; another use is for generating static HTML pages from code, rather 19 | than generating pages on the fly. It's on the basis of this more flexible 20 | understanding of software “building” that modern takes on the tool like 21 | [Ruby’s `rake`](http://rake.rubyforge.org/) have become popular, automating 22 | the general tasks for producing and installing code and files of all kinds. 23 | 24 | ## Anatomy of a `Makefile` 25 | 26 | The general pattern of a `Makefile` is a list of variables and a list of 27 | _targets_, and the sources and/or objects used to provide them. Targets may 28 | not necessarily be linked binaries; they could also constitute actions to 29 | perform using the generated files, such as `install` to instate built files 30 | into the system, and `clean` to remove built files from the source tree. 
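One practical detail worth noting: targets such as `install` and `clean` never actually produce files named `install` or `clean`, so they are conventionally declared as `.PHONY` targets, which stops `make` from skipping them if a file of that name happens to appear in the tree. A minimal sketch, reusing the `$(BINARY)` and `$(OBJECTS)` variables from the more concise example further down:

    .PHONY: all clean install

    clean:
    	rm -f $(BINARY) $(OBJECTS)

Recipe lines must still begin with a literal tab character rather than spaces.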
31 | 32 | It's this flexibility of targets that enables `make` to automate any sort of 33 | task relevant to assembling a production build of software; not just the 34 | typical parsing, preprocessing, compiling proper and linking steps performed 35 | by the compiler, but also running tests (`make test`), compiling documentation 36 | source files into one or more appropriate formats, or automating deployment of 37 | code into production systems, for example, uploading to a website via a `git 38 | push` or similar content-tracking method. 39 | 40 | An example `Makefile` for a simple software project might look something like 41 | the below: 42 | 43 | 44 | all: example 45 | 46 | example: main.o example.o library.o 47 | gcc main.o example.o library.o -o example 48 | 49 | main.o: main.c 50 | gcc -c main.c -o main.o 51 | 52 | example.o: example.c 53 | gcc -c example.c -o example.o 54 | 55 | library.o: library.c 56 | gcc -c library.c -o library.o 57 | 58 | clean: 59 | rm *.o example 60 | 61 | install: example 62 | cp example /usr/bin 63 | 64 | The above isn't the most optimal `Makefile` possible for this project, but it 65 | provides a means to build and install a linked binary simply by typing `make`. 66 | Each _target_ definition contains a list of the _dependencies_ required for 67 | the command that follows; this means that the definitions can appear in any 68 | order, and the call to `make` will call the relevant commands in the 69 | appropriate order. 70 | 71 | Much of the above is needlessly verbose or repetitive; for example, if an 72 | object file is built directly from a single C file of the same name, then we 73 | don't need to include the target at all, and `make` will sort things out for 74 | us. Similarly, it would make sense to put some of the more repeated calls into 75 | variables so that we would not have to change them individually if our choice 76 | of compiler or flags changed. A more concise version might look like the 77 | following: 78 | 79 | 80 | CC = gcc 81 | OBJECTS = main.o example.o library.o 82 | BINARY = example 83 | 84 | all: example 85 | 86 | example: $(OBJECTS) 87 | $(CC) $(OBJECTS) -o $(BINARY) 88 | 89 | clean: 90 | rm -f $(BINARY) $(OBJECTS) 91 | 92 | install: example 93 | cp $(BINARY) /usr/bin 94 | 95 | ## More general uses of `make` 96 | 97 | In the interests of automation, however, it's instructive to think of this a 98 | bit more generally than just code compilation and linking. An example could be 99 | for a simple web project involving deploying PHP to a live webserver. This is 100 | not normally a task people associate with the use of `make`, but the 101 | principles are the same; with the source in place and ready to go, we have 102 | certain targets to meet for the build. 103 | 104 | PHP files don't require compilation, of course, but web assets often do. An 105 | example that will be familiar to web developers is the generation of scaled 106 | and optimised raster images from vector source files, for deployment to the 107 | web. You keep and version your original source file, and when it comes time to 108 | deploy, you generate a web-friendly version of it. 109 | 110 | Let's assume for this particular project that there's a set of four icons used 111 | throughout the site, sized to 64 by 64 pixels. We have the source files to 112 | hand in SVG vector format, safely tucked away in version control, and now need 113 | to _generate_ the smaller bitmaps for the site, ready for deployment. 
We could 114 | therefore define a target `icons`, set the dependencies, and type out the 115 | commands to perform. This is where command line tools in Unix really begin to 116 | shine in use with `Makefile` syntax: 117 | 118 | 119 | icons: create.png read.png update.png delete.png 120 | 121 | create.png: create.svg 122 | convert create.svg create.raw.png && \ 123 | pngcrush create.raw.png create.png 124 | 125 | read.png: read.svg 126 | convert read.svg read.raw.png && \ 127 | pngcrush read.raw.png read.png 128 | 129 | update.png: update.svg 130 | convert update.svg update.raw.png && \ 131 | pngcrush update.raw.png update.png 132 | 133 | delete.png: delete.svg 134 | convert delete.svg delete.raw.png && \ 135 | pngcrush delete.raw.png delete.png 136 | 137 | With the above done, typing `make icons` will go through each of the source 138 | icons files in a Bash loop, convert them from SVG to PNG using ImageMagick's 139 | `convert`, and optimise them with `pngcrush`, to produce images ready for 140 | upload. 141 | 142 | A similar approach can be used for generating help files in various forms, for 143 | example, generating HTML files from Markdown source: 144 | 145 | 146 | docs: README.html credits.html 147 | 148 | README.html: README.md 149 | markdown README.md > README.html 150 | 151 | credits.html: credits.md 152 | markdown credits.md > credits.html 153 | 154 | And perhaps finally deploying a website with `git push web`, but only _after_ 155 | the icons are rasterized and the documents converted: 156 | 157 | 158 | deploy: icons docs 159 | git push web 160 | 161 | For a more compact and abstract formula for turning a file of one suffix into 162 | another, you can use the `.SUFFIXES` pragma to define these using special 163 | symbols. The code for converting icons could look like this; in this case, 164 | `$<` refers to the source file, `$*` to the filename with no extension, and 165 | `$@` to the target. 166 | 167 | 168 | 169 | icons: create.png read.png update.png delete.png 170 | 171 | .SUFFIXES: .svg .png 172 | 173 | .svg.png: 174 | convert $< $*.raw.png && \ 175 | pngcrush $*.raw.png $@ 176 | 177 | 178 | ## Tools for building a `Makefile` 179 | 180 | A variety of tools exist in the GNU Autotools toolchain for the construction 181 | of `configure` scripts and `make` files for larger software projects at a 182 | higher level, in particular 183 | `[autoconf](http://en.wikipedia.org/wiki/Autoconf)` and 184 | `[automake](http://en.wikipedia.org/wiki/Automake)`. The use of these 185 | tools allows generating `configure` scripts and `make` files covering very 186 | large source bases, reducing the necessity of building otherwise extensive 187 | makefiles manually, and automating steps taken to ensure the source remains 188 | compatible and compilable on a variety of operating systems. 189 | 190 | Covering this complex process would be a series of posts in its own right, and 191 | is out of scope of this survey. 
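Using the result of those tools is much simpler than constructing it, though; from the user's side, an Autotools-based source tree generally builds with the familiar three-step sequence. A sketch, assuming a typical project that ships a generated `configure` script:

    $ ./configure           # inspect the system and generate a Makefile
    $ make                  # build using the generated Makefile
    $ sudo make install     # install the built files
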
192 | 193 | _Thanks to user samwyse for the `.SUFFIXES` suggestion in the comments._ 194 | 195 | 196 | [Unix as IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/) 197 | 198 | * [Unix as IDE: Introduction](http://blog.sanctum.geek.nz/unix-as-ide-introduction/) 199 | * [Unix as IDE: Files](http://blog.sanctum.geek.nz/unix-as-ide-files/) 200 | * [Unix as IDE: Editing](http://blog.sanctum.geek.nz/unix-as-ide-editing/) 201 | * [Unix as IDE: Compiling](http://blog.sanctum.geek.nz/unix-as-ide-compiling/) 202 | * Unix as IDE: Building 203 | * [Unix as IDE: Debugging](http://blog.sanctum.geek.nz/unix-as-ide-debugging/) 204 | * [Unix as IDE: Revisions](http://blog.sanctum.geek.nz/unix-as-ide-revisions/) 205 | 206 | This entry is part 5 of 7 in the series [Unix as 207 | IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/). 208 | 209 | [<< Unix as IDE: Compiling](http://blog.sanctum.geek.nz/unix-as-ide- 210 | compiling/)[Unix as IDE: Debugging >>](http://blog.sanctum.geek.nz/unix-as- 211 | ide-debugging/) 212 | -------------------------------------------------------------------------------- /origin/Compiling.md: -------------------------------------------------------------------------------- 1 | # Unix as IDE: Compiling 2 | 3 | Posted on [February 12, 2012](http://blog.sanctum.geek.nz/unix-as-ide- 4 | compiling/) by [Tom Ryder](http://blog.sanctum.geek.nz/author/tom/) 5 | 6 | There are a lot of tools available for compiling and interpreting code on the 7 | Unix platform, and they tend to be used in different ways. However, 8 | conceptually many of the steps are the same. Here I'll discuss compiling C 9 | code with `gcc` from the GNU Compiler Collection, and briefly the use of 10 | `perl` as an example of an interpreter. 11 | 12 | ## GCC 13 | 14 | [GCC](http://gcc.gnu.org/) is a very mature GPL-licensed collection of 15 | compilers, perhaps best-known for working with C and C++ programs. Its free 16 | software license and near ubiquity on free Unix-like systems like Linux and 17 | BSD has made it enduringly popular for these purposes, though more modern 18 | alternatives are available in compilers using the [LLVM](http://llvm.org/) 19 | infrastructure, such as [Clang](http://clang.llvm.org/). 20 | 21 | The frontend binaries for GNU Compiler Collection are best thought of less as 22 | a set of complete compilers in their own right, and more as _drivers_ for a 23 | set of discrete programming tools, performing parsing, compiling, and linking, 24 | among other steps. This means that while you can use GCC with a relatively 25 | simple command line to compile straight from C sources to a working binary, 26 | you can also inspect in more detail the steps it takes along the way and tweak 27 | it accordingly. 28 | 29 | I won't be discussing the use of `make` files here, though you'll almost 30 | certainly be wanting them for any C project of more than one file; that will 31 | be discussed in the next article on build automation tools. 32 | 33 | ### Compiling and assembling object code 34 | 35 | You can compile object code from a C source file like so: 36 | 37 | 38 | $ gcc -c example.c -o example.o 39 | 40 | Assuming it's a valid C program, this will generate an unlinked binary object 41 | file called `example.o` in the current directory, or tell you the reasons it 42 | can't. 
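As an optional sanity check, the `file` tool will confirm what kind of file was actually produced; a sketch:

    $ file example.o        # should report a relocatable ELF object on Linux
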
You can inspect its assembler contents with the `objdump` tool: 43 | 44 | 45 | $ objdump -D example.o 46 | 47 | Alternatively, you can get `gcc` to output the appropriate assembly code for 48 | the object directly with the `-S` parameter: 49 | 50 | 51 | $ gcc -c -S example.c -o example.s 52 | 53 | This kind of assembly output can be particularly instructive, or at least 54 | interesting, when printed inline with the source code itself, which you can do 55 | with: 56 | 57 | 58 | $ gcc -c -g -Wa,-a,-ad example.c > example.lst 59 | 60 | ### Linking objects 61 | 62 | One or more objects can be linked into appropriate binaries like so: 63 | 64 | 65 | $ gcc example.o -o example 66 | 67 | In this example, GCC is not doing much more than abstracting a call to `ld`, 68 | the GNU linker. The command produces an executable binary called `example`. 69 | 70 | ### Compiling, assembling, and linking 71 | 72 | All of the above can be done in one step with: 73 | 74 | 75 | $ gcc example.c -o example 76 | 77 | This is a little simpler, but compiling objects independently turns out to 78 | have some practical performance benefits in not recompiling code 79 | unnecessarily, which I'll discuss in the next article. 80 | 81 | ### Including and linking 82 | 83 | C files and headers can be explicitly included in a compilation call with the 84 | `-I` parameter: 85 | 86 | 87 | $ gcc -I/usr/include/somelib.h example.c -o example 88 | 89 | Similarly, if the code needs to be dynamically linked against a compiled 90 | system library available in common locations like `/lib` or `/usr/lib`, such 91 | as `ncurses`, that can be included with the `-l` parameter: 92 | 93 | 94 | $ gcc -lncurses example.c -o example 95 | 96 | If you have a lot of necessary inclusions and links in your compilation 97 | process, it makes sense to put this into environment variables: 98 | 99 | 100 | $ export CFLAGS=-I/usr/include/somelib.h 101 | $ export CLIBS=-lncurses 102 | $ gcc $CFLAGS $CLIBS example.c -o example 103 | 104 | This very common step is another thing that a `Makefile` is designed to 105 | abstract away for you. 106 | 107 | ### Compilation plan 108 | 109 | To inspect in more detail what `gcc` is doing with any call, you can add the 110 | `-v` switch to prompt it to print its compilation plan on the standard error 111 | stream: 112 | 113 | 114 | $ gcc -v -c example.c -o example.o 115 | 116 | If you don't want it to actually generate object files or linked binaries, 117 | it's sometimes tidier to use `-###` instead: 118 | 119 | 120 | $ gcc -### -c example.c -o example.o 121 | 122 | This is mostly instructive to see what steps the `gcc` binary is abstracting 123 | away for you, but in specific cases it can be useful to identify steps the 124 | compiler is taking that you may not necessarily want it to. 125 | 126 | ### More verbose error checking 127 | 128 | You can add the `-Wall` and/or `-pedantic` options to the `gcc` call to prompt 129 | it to warn you about things that may not necessarily be errors, but could be: 130 | 131 | 132 | $ gcc -Wall -pedantic -c example.c -o example.o 133 | 134 | This is good for including in your `Makefile` or in your 135 | `[makeprg](http://vim.wikia.com/wiki/Errorformat_and_makeprg)` definition in 136 | Vim, as it works well with the quickfix window discussed in the previous 137 | article and will enable you to write more readable, compatible, and less 138 | error-prone code as it warns you more extensively about errors. 
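GCC accepts a good deal more in the way of warning and conformance flags if you want to go further than `-Wall -pedantic`; the exact set is a matter of taste and of the standard being targeted, but a sketch of a stricter invocation might look like:

    $ gcc -std=c99 -Wall -Wextra -pedantic -c example.c -o example.o

Adding `-Werror` to treat warnings as hard errors can also be worthwhile in a `Makefile`, since it stops the build instead of letting warnings scroll past unread.
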
139 | 140 | ### Profiling compilation time 141 | 142 | You can pass the flag `-time` to `gcc` to generate output showing how long 143 | each step is taking: 144 | 145 | 146 | $ gcc -time -c example.c -o example.o 147 | 148 | ### Optimisation 149 | 150 | You can pass generic optimisation options to `gcc` to make it attempt to build 151 | more efficient object files and linked binaries, at the expense of compilation 152 | time. I find `-O2` is usually a happy medium for code going into production: 153 | 154 | * `gcc -O1` 155 | * `gcc -O2` 156 | * `gcc -O3` 157 | 158 | Like any other Bash command, all of this can be [called from within 159 | Vim](http://blog.sanctum.geek.nz/unix-as-ide-editing/) by: 160 | 161 | 162 | :!gcc % -o example 163 | 164 | ## Interpreters 165 | 166 | The approach to interpreted code on Unix-like systems is very different. In 167 | these examples I'll use Perl, but most of these principles will be applicable 168 | to interpreted Python or Ruby code, for example. 169 | 170 | ### Inline 171 | 172 | You can run a string of Perl code directly into the interpreter in any one of 173 | the following ways, in this case printing the single line "Hello, world." to 174 | the screen, with a linebreak following. The first one is perhaps the tidiest 175 | and most standard way to work with Perl; the second uses a 176 | '[heredoc](http://tldp.org/LDP/abs/html/here-docs.html)' string, and the third 177 | a classic Unix shell pipe. 178 | 179 | 180 | $ perl -e 'print "Hello world.\n";' 181 | $ perl <<<'print "Hello world.\n";' 182 | $ echo 'print "Hello world.\n";' | perl 183 | 184 | Of course, it's more typical to keep the code in a file, which can be run 185 | directly: 186 | 187 | 188 | $ perl hello.pl 189 | 190 | In either case, you can check the syntax of the code without actually running 191 | it with the `-c` switch: 192 | 193 | 194 | $ perl -c hello.pl 195 | 196 | But to use the script as a _logical binary_, so you can invoke it directly 197 | without knowing or caring what the script is, you can add a special first line 198 | to the file called the "shebang" that does some magic to specify the 199 | interpreter through which the file should be run. 200 | 201 | 202 | #!/usr/bin/perl 203 | print "Hello, world.\n"; 204 | 205 | The script then needs to be made executable with a `chmod` call. It's also 206 | good practice to rename it to remove the extension, since it is now taking the 207 | shape of a logic binary: 208 | 209 | 210 | $ mv hello{.pl,} 211 | $ chmod +x hello 212 | 213 | And can thereafter be invoked directly, as if it were a compiled binary: 214 | 215 | 216 | $ ./hello 217 | 218 | This works so well that many of the common utilities on modern Linux systems, 219 | such as the `adduser` frontend to `useradd`, are actually Perl or even Python 220 | scripts. 221 | 222 | In the next post, I'll describe the use of `make` for defining and automating 223 | building projects in a manner comparable to IDEs, with a nod to newer takes on 224 | the same idea with Ruby's `rake`. 
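As a final aside on the shebang line above: a common variation, not used in these examples, is to go through `env`, which looks the interpreter up on the `PATH` rather than hard-coding its location, and so tends to travel better to systems where `perl` doesn't live at `/usr/bin/perl`:

    #!/usr/bin/env perl
    print "Hello, world.\n";

The `chmod +x` and `./hello` steps are exactly the same as before.
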
225 | 226 | 227 | [Unix as IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/) 228 | 229 | * [Unix as IDE: Introduction](http://blog.sanctum.geek.nz/unix-as-ide-introduction/) 230 | * [Unix as IDE: Files](http://blog.sanctum.geek.nz/unix-as-ide-files/) 231 | * [Unix as IDE: Editing](http://blog.sanctum.geek.nz/unix-as-ide-editing/) 232 | * Unix as IDE: Compiling 233 | * [Unix as IDE: Building](http://blog.sanctum.geek.nz/unix-as-ide-building/) 234 | * [Unix as IDE: Debugging](http://blog.sanctum.geek.nz/unix-as-ide-debugging/) 235 | * [Unix as IDE: Revisions](http://blog.sanctum.geek.nz/unix-as-ide-revisions/) 236 | 237 | This entry is part 4 of 7 in the series [Unix as 238 | IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/). 239 | 240 | [<< Unix as IDE: Editing](http://blog.sanctum.geek.nz/unix-as-ide- 241 | editing/)[Unix as IDE: Building >>](http://blog.sanctum.geek.nz/unix-as-ide- 242 | building/) 243 | -------------------------------------------------------------------------------- /origin/Debugging.md: -------------------------------------------------------------------------------- 1 | # Unix as IDE: Debugging 2 | 3 | Posted on [February 14, 2012](http://blog.sanctum.geek.nz/unix-as-ide- 4 | debugging/) by [Tom Ryder](http://blog.sanctum.geek.nz/author/tom/) 5 | 6 | When unexpected behaviour is noticed in a program, Linux provides a wide 7 | variety of command-line tools for diagnosing problems. The use of `gdb`, the 8 | GNU debugger, and related tools like the lesser-known Perl debugger, will be 9 | familiar to those using IDEs to set breakpoints in their code and to examine 10 | program state as it runs. Other tools of interest are available however to 11 | observe in more detail how a program is interacting with a system and using 12 | its resources. 13 | 14 | ## Debugging with `gdb` 15 | 16 | You can use `gdb` in a very similar fashion to the built-in debuggers in 17 | modern IDEs like Eclipse and Visual Studio. If you are debugging a program 18 | that you've just compiled, it makes sense to compile it with its _debugging 19 | symbols_ added to the binary, which you can do with a `gcc` call containing 20 | the `-g` option. If you're having problems with some code, it helps to also 21 | use `-Wall` to show any errors you may have otherwise missed: 22 | 23 | 24 | $ gcc -g -Wall example.c -o example 25 | 26 | The classic way to use `gdb` is as the shell for a running program compiled in 27 | C or C++, to allow you to inspect the program's state as it proceeds towards 28 | its crash. 29 | 30 | 31 | $ gdb example 32 | ... 33 | Reading symbols from /home/tom/example...done. 34 | (gdb) 35 | 36 | At the `(gdb)` prompt, you can type `run` to start the program, and it may 37 | provide you with more detailed information about the causes of errors such as 38 | segmentation faults, including the source file and line number at which the 39 | problem occurred. If you're able to compile the code with debugging symbols as 40 | above and inspect its running state like this, it makes figuring out the cause 41 | of a particular bug a lot easier. 42 | 43 | 44 | (gdb) run 45 | Starting program: /home/tom/gdb/example 46 | 47 | Program received signal SIGSEGV, Segmentation fault. 
48 | 0x000000000040072e in main () at example.c:43 49 | 43 printf("%d\n", *segfault); 50 | 51 | After an error terminates the program within the `(gdb)` shell, you can type 52 | `backtrace` to see what the calling function was, which can include the 53 | specific parameters passed that may have something to do with what caused the 54 | crash. 55 | 56 | 57 | (gdb) backtrace 58 | #0 0x000000000040072e in main () at example.c:43 59 | 60 | You can set breakpoints for `gdb` using the `break` to halt the program's run 61 | if it reaches a matching line number or function call: 62 | 63 | 64 | (gdb) break 42 65 | Breakpoint 1 at 0x400722: file example.c, line 42. 66 | (gdb) break malloc 67 | Breakpoint 1 at 0x4004c0 68 | (gdb) run 69 | Starting program: /home/tom/gdb/example 70 | 71 | Breakpoint 1, 0x00007ffff7df2310 in malloc () from /lib64/ld-linux-x86-64.so.2 72 | 73 | Thereafter it's helpful to _step_ through successive lines of code using 74 | `step`. You can repeat this, like any `gdb` command, by pressing Enter 75 | repeatedly to step through lines one at a time: 76 | 77 | 78 | (gdb) step 79 | Single stepping until exit from function _start, 80 | which has no line number information. 81 | 0x00007ffff7a74db0 in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6 82 | 83 | You can even attach `gdb` to a process that is already running, by finding the 84 | process ID and passing it to `gdb`: 85 | 86 | 87 | $ pgrep example 88 | 1524 89 | $ gdb -p 1524 90 | 91 | This can be useful for [redirecting streams of 92 | output](http://stackoverflow.com/questions/593724/redirect-stderr-stdout-of-a 93 | -process-after-its-been-started-using-command-lin) for a task that is taking 94 | an unexpectedly long time to run. 95 | 96 | ## Debugging with `valgrind` 97 | 98 | The much newer [valgrind](http://valgrind.org/) can be used as a debugging 99 | tool in a similar way. There are many different checks and debugging methods 100 | this program can run, but one of the most useful is its Memcheck tool, which 101 | can be used to detect common memory errors like buffer overflow: 102 | 103 | 104 | $ valgrind --leak-check=yes ./example 105 | ==29557== Memcheck, a memory error detector 106 | ==29557== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. 107 | ==29557== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info 108 | ==29557== Command: ./example 109 | ==29557== 110 | ==29557== Invalid read of size 1 111 | ==29557== at 0x40072E: main (example.c:43) 112 | ==29557== Address 0x0 is not stack'd, malloc'd or (recently) free'd 113 | ==29557== 114 | ... 115 | 116 | The `gdb` and `valgrind` tools [can be used 117 | together](http://valgrind.org/docs/manual/manual-core-adv.html#manual-core- 118 | adv.gdbserver) for a very thorough survey of a program's run. Zed Shaw's 119 | [Learn C the Hard Way](http://c.learncodethehardway.org/book/) includes a 120 | really good introduction for elementary use of `valgrind` with a deliberately 121 | broken program. 122 | 123 | ## Tracing system and library calls with `ltrace` 124 | 125 | The `strace` and `ltrace` tools are designed to allow watching system calls 126 | and library calls respectively for running programs, and logging them to the 127 | screen or, more usefully, to files. On Linux, `ltrace` is preferred as it 128 | enables you to log both system and library calls. 129 | 130 | You can run `ltrace` and have it run the program you want to monitor in this 131 | way for you by simply providing it as the sole parameter. 
It will then give 132 | you a listing of all the system and library calls it makes until it exits. 133 | 134 | 135 | $ ltrace ./example 136 | __libc_start_main(0x4006ad, 1, 0x7fff9d7e5838, 0x400770, 0x400760 137 | srand(4, 0x7fff9d7e5838, 0x7fff9d7e5848, 0, 0x7ff3aebde320) = 0 138 | malloc(24) = 0x01070010 139 | rand(0, 0x1070020, 0, 0x1070000, 0x7ff3aebdee60) = 0x754e7ddd 140 | malloc(24) = 0x01070030 141 | rand(0x7ff3aebdee60, 24, 0, 0x1070020, 0x7ff3aebdeec8) = 0x11265233 142 | malloc(24) = 0x01070050 143 | rand(0x7ff3aebdee60, 24, 0, 0x1070040, 0x7ff3aebdeec8) = 0x18799942 144 | malloc(24) = 0x01070070 145 | rand(0x7ff3aebdee60, 24, 0, 0x1070060, 0x7ff3aebdeec8) = 0x214a541e 146 | malloc(24) = 0x01070090 147 | rand(0x7ff3aebdee60, 24, 0, 0x1070080, 0x7ff3aebdeec8) = 0x1b6d90f3 148 | malloc(24) = 0x010700b0 149 | rand(0x7ff3aebdee60, 24, 0, 0x10700a0, 0x7ff3aebdeec8) = 0x2e19c419 150 | malloc(24) = 0x010700d0 151 | rand(0x7ff3aebdee60, 24, 0, 0x10700c0, 0x7ff3aebdeec8) = 0x35bc1a99 152 | malloc(24) = 0x010700f0 153 | rand(0x7ff3aebdee60, 24, 0, 0x10700e0, 0x7ff3aebdeec8) = 0x53b8d61b 154 | malloc(24) = 0x01070110 155 | rand(0x7ff3aebdee60, 24, 0, 0x1070100, 0x7ff3aebdeec8) = 0x18e0f924 156 | malloc(24) = 0x01070130 157 | rand(0x7ff3aebdee60, 24, 0, 0x1070120, 0x7ff3aebdeec8) = 0x27a51979 158 | --- SIGSEGV (Segmentation fault) --- 159 | +++ killed by SIGSEGV +++ 160 | 161 | You can also attach it to a process that's already running: 162 | 163 | 164 | $ pgrep example 165 | 5138 166 | $ ltrace -p 5138 167 | 168 | Generally, there's quite a bit more than a couple of screenfuls of text 169 | generated by this, so it's helpful to use the `-o` option to specify an output 170 | file to which to log the calls: 171 | 172 | 173 | $ ltrace -o example.ltrace ./example 174 | 175 | You can then view this trace in a text editor like Vim, which includes syntax 176 | highlighting for `ltrace` output: 177 | 178 | [![Vim session with ltrace output](http://blog.sanctum.geek.nz/wp- 179 | content/uploads/2012/02/ltrace-vim.png)](http://blog.sanctum.geek.nz/wp- 180 | content/uploads/2012/02/ltrace-vim.png) 181 | 182 | Vim session with ltrace output 183 | 184 | I've found `ltrace` very useful for debugging problems where I suspect 185 | improper linking may be at fault, or the absence of some needed resource in a 186 | `chroot` environment, since among its output it shows you its search for 187 | libraries at dynamic linking time and opening configuration files in `/etc`, 188 | and the use of devices like `/dev/random` or `/dev/zero`. 189 | 190 | ## Tracking open files with `lsof` 191 | 192 | If you want to view what devices, files, or streams a running process has 193 | open, you can do that with `lsof`: 194 | 195 | 196 | $ pgrep example 197 | 5051 198 | $ lsof -p 5051 199 | 200 | For example, the first few lines of the `apache2` process running on my home 201 | server are: 202 | 203 | 204 | # lsof -p 30779 205 | COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME 206 | apache2 30779 root cwd DIR 8,1 4096 2 / 207 | apache2 30779 root rtd DIR 8,1 4096 2 / 208 | apache2 30779 root txt REG 8,1 485384 990111 /usr/lib/apache2/mpm-prefork/apache2 209 | apache2 30779 root DEL REG 8,1 1087891 /lib/x86_64-linux-gnu/libgcc_s.so.1 210 | apache2 30779 root mem REG 8,1 35216 1079715 /usr/lib/php5/20090626/pdo_mysql.so 211 | ... 
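The selection can also run the other way around: given a file or a network port, `lsof` will show which processes currently have it open, which can be handy when something mysterious is holding a file or squatting on a socket. A couple of hedged examples, using an illustrative log path and port:

    # lsof /var/log/syslog      # which processes have this file open?
    # lsof -i :80               # which processes have port 80 open?
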
212 | 213 | Interestingly, another way to list the open files for a process is to check 214 | the corresponding entry for the process in the dynamic `/proc` directory: 215 | 216 | 217 | # ls -l /proc/30779/fd 218 | 219 | This can be very useful in confusing situations with file locks, or 220 | identifying whether a process is holding open files that it needn't. 221 | 222 | ## Viewing memory allocation with `pmap` 223 | 224 | As a final debugging tip, you can view the memory allocations for a particular 225 | process with `pmap`: 226 | 227 | 228 | # pmap 30779 229 | 30779: /usr/sbin/apache2 -k start 230 | 00007fdb3883e000 84K r-x-- /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted) 231 | 00007fdb38853000 2048K ----- /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted) 232 | 00007fdb38a53000 4K rw--- /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted) 233 | 00007fdb38a54000 4K ----- [ anon ] 234 | 00007fdb38a55000 8192K rw--- [ anon ] 235 | 00007fdb392e5000 28K r-x-- /usr/lib/php5/20090626/pdo_mysql.so 236 | 00007fdb392ec000 2048K ----- /usr/lib/php5/20090626/pdo_mysql.so 237 | 00007fdb394ec000 4K r---- /usr/lib/php5/20090626/pdo_mysql.so 238 | 00007fdb394ed000 4K rw--- /usr/lib/php5/20090626/pdo_mysql.so 239 | ... 240 | total 152520K 241 | 242 | This will show you what libraries a running process is using, including those 243 | in shared memory. The total given at the bottom is a little misleading as for 244 | loaded shared libraries, the running process is not necessarily the only one 245 | using the memory; [determining “actual” memory usage for a given 246 | process](http://stackoverflow.com/questions/118307/a-way-to-determine-a 247 | -processs-real-memory-usage-i-e-private-dirty-rss) is a little more in-depth 248 | than it might seem with shared libraries added to the picture. 249 | 250 | 251 | [Unix as IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/) 252 | 253 | * [Unix as IDE: Introduction](http://blog.sanctum.geek.nz/unix-as-ide-introduction/) 254 | * [Unix as IDE: Files](http://blog.sanctum.geek.nz/unix-as-ide-files/) 255 | * [Unix as IDE: Editing](http://blog.sanctum.geek.nz/unix-as-ide-editing/) 256 | * [Unix as IDE: Compiling](http://blog.sanctum.geek.nz/unix-as-ide-compiling/) 257 | * [Unix as IDE: Building](http://blog.sanctum.geek.nz/unix-as-ide-building/) 258 | * Unix as IDE: Debugging 259 | * [Unix as IDE: Revisions](http://blog.sanctum.geek.nz/unix-as-ide-revisions/) 260 | 261 | This entry is part 6 of 7 in the series [Unix as 262 | IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/). 263 | 264 | [<< Unix as IDE: Building](http://blog.sanctum.geek.nz/unix-as-ide- 265 | building/)[Unix as IDE: Revisions >>](http://blog.sanctum.geek.nz/unix-as-ide- 266 | revisions/) 267 | -------------------------------------------------------------------------------- /origin/Editing.md: -------------------------------------------------------------------------------- 1 | # Unix as IDE: Editing 2 | 3 | Posted on [February 11, 2012](http://blog.sanctum.geek.nz/unix-as-ide- 4 | editing/) by [Tom Ryder](http://blog.sanctum.geek.nz/author/tom/) 5 | 6 | The text editor is the core tool for any programmer, which is why choice of 7 | editor evokes such tongue-in-cheek zealotry in debate among programmers. Unix, 8 | of course, is the operating system most strongly linked with two enduring 9 | favourites, Emacs and Vi, and their modern versions in GNU Emacs and Vim, two 10 | editors with very different editing philosophies but comparable power. 
11 | 12 | Being a Vim heretic myself, here I'll discuss the indispensable features of 13 | Vim for programming, and in particular the use of Linux shell tools called 14 | from _within_ Vim to complement the editor's built-in functionality. Some of 15 | the principles discussed here will be applicable to those using Emacs as well, 16 | but probably not for underpowered editors like Nano. 17 | 18 | This will be a very general survey, as Vim's toolset for programmers is 19 | _enormous_, and it'll still end up being quite long. I'll focus on the 20 | essentials and the things I feel are most helpful, and try to provide links to 21 | articles with a more comprehensive treatment of the topic. Don't forget that 22 | Vim's `:help` has surprised many people new to the editor with its high 23 | quality and usefulness. 24 | 25 | ## Filetype detection 26 | 27 | Vim has built-in settings to adjust its behaviour, in particular its syntax 28 | highlighting, based on the filetype being loaded, which it happily detects and 29 | generally does a good job at doing so. In particular, this allows you to set 30 | an indenting style conformant with the way a particular language is usually 31 | written. This should be one of the first things in your `.vimrc` file. 32 | 33 | 34 | if has("autocmd") 35 | filetype on 36 | filetype indent on 37 | filetype plugin on 38 | endif 39 | 40 | ## Syntax highlighting 41 | 42 | Even if you're just working with a 16-color terminal, just include the 43 | following in your `.vimrc` if you're not already: 44 | 45 | 46 | syntax on 47 | 48 | The colorschemes with a default 16-color terminal are not pretty largely by 49 | necessity, but they do the job, and for most languages syntax definition files 50 | are available that work very well. There's a [tremendous array of 51 | colorschemes](http://code.google.com/p/vimcolorschemetest/) available, and 52 | it's not hard to tweak them to suit or even to write your own. Using a 53 | [256-color terminal](http://vim.wikia.com/wiki/256_colors_in_vim) or gVim will 54 | give you more options. Good syntax highlighting files will show you definite 55 | syntax errors with a glaring red background. 56 | 57 | ## Line numbering 58 | 59 | To turn line numbers on if you use them a lot in your traditional IDE: 60 | 61 | 62 | set number 63 | 64 | You might like to try this as well, if you have at least Vim 7.3 and are keen 65 | to try numbering lines relative to the current line rather than absolutely: 66 | 67 | 68 | set relativenumber 69 | 70 | ## Tags files 71 | 72 | Vim [works very well](http://amix.dk/blog/post/19329) with the output from the 73 | `ctags` utility. This allows you to search quickly for all uses of a 74 | particular identifier throughout the project, or to navigate straight to the 75 | declaration of a variable from one of its uses, regardless of whether it's in 76 | the same file. For large C projects in multiple files this can save huge 77 | amounts of otherwise wasted time, and is probably Vim's best answer to similar 78 | features in mainstream IDEs. 79 | 80 | You can run `:!ctags -R` on the root directory of projects in many popular 81 | languages to generate a `tags` file filled with definitions and locations for 82 | identifiers throughout your project. 
Once a `tags` file for your project is 83 | available, you can search for uses of an appropriate tag throughout the 84 | project like so: 85 | 86 | 87 | :tag someClass 88 | 89 | The commands `:tn` and `:tp` will allow you to iterate through successive uses 90 | of the tag elsewhere in the project. The built-in tags functionality for this 91 | already covers most of the bases you'll probably need, but for features such 92 | as a tag list window, you could try installing the very popular [Taglist 93 | plugin](http://vim-taglist.sourceforge.net/). Tim Pope's [Unimpaired 94 | plugin](https://github.com/tpope/vim-unimpaired) also contains a couple of 95 | useful relevant mappings. 96 | 97 | ## Calling external programs 98 | 99 | There are two major methods of calling external programs during a Vim session: 100 | 101 | * **`:!`** — Useful for issuing commands from within a Vim context, particularly in cases where you intend to record output in a buffer. 102 | * **`:shell`** — Drop to a shell as a subprocess of Vim. Good for interactive commands. 103 | 104 | A third, which I won't discuss in depth here, is using plugins such as 105 | [Conque](http://code.google.com/p/conque/) to emulate a shell within a Vim 106 | buffer. Having tried this myself and found it nearly unusable, I've concluded 107 | it's simply bad design. From `:help design-not`: 108 | 109 | > Vim is not a shell or an Operating System. You will not be able to run a 110 | shell inside Vim or use it to control a debugger. This should work the other 111 | way around: Use Vim as a component from a shell or in an IDE. 112 | 113 | ### Lint programs and syntax checkers 114 | 115 | Checking syntax or compiling with an external program call (e.g. `perl -c`, 116 | `gcc`) is one of the calls that's good to make from within the editor using 117 | `:!` commands. If you were editing a Perl file, you could run this like so: 118 | 119 | 120 | :!perl -c % 121 | 122 | /home/tom/project/test.pl syntax OK 123 | 124 | Press Enter or type command to continue 125 | 126 | The `%` symbol is shorthand for the file loaded in the current buffer. Running 127 | this prints the output of the command, if any, below the command line. If you 128 | wanted to call this check often, you could perhaps map it as a command, or 129 | even a key combination in your `.vimrc` file. In this case, we define a 130 | command `:PerlLint` which can be called from normal mode with `\l`: 131 | 132 | 133 | command PerlLint !perl -c % 134 | nnoremap l :PerlLint 135 | 136 | For a lot of languages there's an even better way to do this, though, which 137 | allows us to capitalise on Vim's built-in quickfix window. We can do this by 138 | setting an appropriate `makeprg` for the filetype, in this case including a 139 | module that provides us with output that Vim can use for its quicklist, and a 140 | definition for its two formats: 141 | 142 | 143 | :set makeprg=perl\ -c\ -MVi::QuickFix\ % 144 | :set errorformat+=%m\ at\ %f\ line\ %l\. 145 | :set errorformat+=%m\ at\ %f\ line\ %l 146 | 147 | You may need to install this module first via CPAN, or the Debian package 148 | `libvi-quickfix-perl`. This done, you can type `:make` after saving the file 149 | to check its syntax, and if errors are found, you can open the quicklist 150 | window with `:copen` to inspect the errors, and `:cn` and `:cp` to jump to 151 | them within the buffer. 
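If `Vi::QuickFix` isn't already installed, grabbing it from the shell only takes a moment; a sketch of the two routes mentioned above:

    $ cpan Vi::QuickFix                         # via CPAN
    $ sudo apt-get install libvi-quickfix-perl  # via the Debian package
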
152 | 153 | [![Vim quickfix working on a Perl file](http://blog.sanctum.geek.nz/wp- 154 | content/uploads/2012/02/vim-quickfix.png)](http://blog.sanctum.geek.nz/wp- 155 | content/uploads/2012/02/vim-quickfix.png) 156 | 157 | Vim quickfix working on a Perl file 158 | 159 | This also works for output from `[gcc](http://tldp.org/HOWTO/C-editing-with- 160 | VIM-HOWTO/quickfix.html)`, and pretty much any other compiler syntax checker 161 | that you might want to use that includes filenames, line numbers, and error 162 | strings in its error output. It's even possible to do this with [web-focused 163 | languages like PHP](http://stackoverflow.com/questions/7193547/debugging-php- 164 | with-vim-using-quickfix), and for tools like [JSLint for 165 | JavaScript](https://github.com/hallettj/jslint.vim). There's also an excellent 166 | plugin named [Syntastic](http://www.vim.org/scripts/script.php?script_id=2736) 167 | that does something similar. 168 | 169 | ### Reading output from other commands 170 | 171 | You can use `:r!` to call commands and paste their output directly into the 172 | buffer with which you're working. For example, to pull a quick directory 173 | listing for the current folder into the buffer, you could type: 174 | 175 | 176 | :r!ls 177 | 178 | This doesn't just work for commands, of course; you can simply read in other 179 | files this way with just `:r`, like public keys or your own custom 180 | boilerplate: 181 | 182 | 183 | :r ~/.ssh/id_rsa.pub 184 | :r ~/dev/perl/boilerplate/copyright.pl 185 | 186 | ### Filtering output through other commands 187 | 188 | You can extend this to actually filter text in the buffer through external 189 | commands, perhaps selected by a range or visual mode, and replace it with the 190 | command's output. While Vim's visual block mode is great for working with 191 | columnar data, it's very often helpful to bust out tools like `column`, `cut`, 192 | `sort`, or `awk`. 193 | 194 | For example, you could sort the entire file in reverse by the second column by 195 | typing: 196 | 197 | 198 | :%!sort -k2 -r 199 | 200 | You could print only the third column of some selected text where the line 201 | matches the pattern `/vim/` with: 202 | 203 | 204 | :'<,'>!awk '/vim/ {print $3}' 205 | 206 | You could arrange keywords from lines 1 to 10 in nicely formatted columns 207 | like: 208 | 209 | 210 | :1,10!column -t 211 | 212 | Really _any kind_ of text filter or command can be manipulated like this in 213 | Vim, a simple interoperability feature that expands what the editor can do by 214 | an order of magnitude. It effectively makes the Vim buffer into a text stream, 215 | which is a language that all of these classic tools speak. 216 | 217 | ### Built-in alternatives 218 | 219 | It's worth noting that for really common operations like sorting and 220 | searching, Vim has built-in methods in `:sort` and `:grep`, which can be 221 | helpful if you're stuck using Vim on Windows, but don't have nearly the 222 | adaptability of shell calls. 223 | 224 | ## Diffing 225 | 226 | Vim has a _diffing_ mode, `vimdiff`, which allows you to not only view the 227 | differences between different versions of a file, but also to resolve 228 | conflicts via a three-way merge and to replace differences to and fro with 229 | commands like `:diffput` and `:diffget` for ranges of text. 
You can call 230 | `vimdiff` from the command line directly with at least two files to compare 231 | like so: 232 | 233 | 234 | $ vimdiff file-v1.c file-v2.c 235 | 236 | [![Vim diffing a .vimrc file](http://blog.sanctum.geek.nz/wp- 237 | content/uploads/2012/02/vim-diff.png)](http://blog.sanctum.geek.nz/wp- 238 | content/uploads/2012/02/vim-diff.png) 239 | 240 | Vim diffing a .vimrc file 241 | 242 | ## Version control 243 | 244 | You can call version control methods directly from within Vim, which is 245 | probably all you need most of the time. It's useful to remember here that `%` 246 | is always a shortcut for the buffer's current file: 247 | 248 | 249 | :!svn status 250 | :!svn add % 251 | :!git commit -a 252 | 253 | Recently a clear winner for Git functionality with Vim has come up with Tim 254 | Pope's [Fugitive](https://github.com/tpope/vim-fugitive), which I highly 255 | recommend to anyone doing Git development with Vim. There'll be a more 256 | comprehensive treatment of version control's basis and history in Unix in Part 257 | 7 of this series. 258 | 259 | ## The difference 260 | 261 | Part of the reason Vim is thought of as a toy or relic by a lot of programmers 262 | used to GUI-based IDEs is its being seen as just a tool for editing files on 263 | servers, rather than a very capable editing component for the shell in its own 264 | right. Its own built-in features being so composable with external tools on 265 | Unix-friendly systems makes it into a text editing powerhouse that sometimes 266 | surprises even experienced users. 267 | 268 | 269 | [Unix as IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/) 270 | 271 | * [Unix as IDE: Introduction](http://blog.sanctum.geek.nz/unix-as-ide-introduction/) 272 | * [Unix as IDE: Files](http://blog.sanctum.geek.nz/unix-as-ide-files/) 273 | * Unix as IDE: Editing 274 | * [Unix as IDE: Compiling](http://blog.sanctum.geek.nz/unix-as-ide-compiling/) 275 | * [Unix as IDE: Building](http://blog.sanctum.geek.nz/unix-as-ide-building/) 276 | * [Unix as IDE: Debugging](http://blog.sanctum.geek.nz/unix-as-ide-debugging/) 277 | * [Unix as IDE: Revisions](http://blog.sanctum.geek.nz/unix-as-ide-revisions/) 278 | 279 | This entry is part 3 of 7 in the series [Unix as 280 | IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/). 281 | 282 | [<< Unix as IDE: Files](http://blog.sanctum.geek.nz/unix-as-ide-files/)[Unix 283 | as IDE: Compiling >>](http://blog.sanctum.geek.nz/unix-as-ide-compiling/) 284 | -------------------------------------------------------------------------------- /origin/Files.md: -------------------------------------------------------------------------------- 1 | # Unix as IDE: Files 2 | 3 | Posted on [February 10, 2012](http://blog.sanctum.geek.nz/unix-as-ide-files/) 4 | by [Tom Ryder](http://blog.sanctum.geek.nz/author/tom/) 5 | 6 | One prominent feature of an IDE is a built-in system for managing files, both 7 | the elementary functions like moving, renaming, and deleting, and ones more 8 | specific to development, like compiling and checking syntax. It may also be 9 | useful to have operations on sets of files, such as finding files of a certain 10 | extension or size, or searching files for specific patterns. In this first 11 | article, I'll explore some useful ways to use tools that will be familiar to 12 | most Linux users for the purposes of working with sets of files in a project. 
13 | 14 | ## Listing files 15 | 16 | Using `ls` is probably one of the first commands an administrator will learn 17 | for getting a simple list of the contents of the directory. Most 18 | administrators will also know about the `-a` and `-l` switches, to show all 19 | files including dot files and to show more detailed data about files in 20 | columns, respectively. 21 | 22 | There are a few other switches to `ls` which are a bit less frequently used, 23 | and turn out to be very useful for programming: 24 | 25 | * `-t` — List files in order of last modification date, newest first. This is useful for very large directories when you want to get a quick list of the most recent files changed, maybe piped through `head` or `sed 10q`. Probably most useful combined with `-l`. If you want the _oldest_ files, you can add `-r` to reverse the list. 26 | * `-X` — Group files by extension; handy for polyglot code, to group header files and source files separately, or to separate source files from directories or build files. 27 | * `-v` — Naturally sort version numbers in filenames. 28 | * `-S` — Sort by filesize. 29 | * `-R` — List files recursively. This one is good combined with `-l` and piped through a pager like `less`. 30 | 31 | Since the listing is text like anything else, you could, for example, pipe the 32 | output of this command into a `vim` process, so you could add explanations of 33 | what each file is for and save it as an `inventory` file or add it to a 34 | README: 35 | 36 | 37 | $ ls -XR | vim - 38 | 39 | This kind of stuff can even be automated by `make` with a little work, which 40 | I'll cover in another article later in the series. 41 | 42 | ## Finding files 43 | 44 | Funnily enough, you can get a complete list of files including relative paths 45 | by simply typing `find` with no arguments, though it's usually a good idea to 46 | pipe it through `sort`: 47 | 48 | 49 | $ find | sort 50 | . 51 | ./Makefile 52 | ./README 53 | ./build 54 | ./client.c 55 | ./client.h 56 | ./common.h 57 | ./project.c 58 | ./server.c 59 | ./server.h 60 | ./tests 61 | ./tests/suite1.pl 62 | ./tests/suite2.pl 63 | ./tests/suite3.pl 64 | ./tests/suite4.pl 65 | 66 | If you want an `ls -l` style listing, you can add `-ls` as the action to 67 | `find` results: 68 | 69 | 70 | $ find -ls | sort -k 11 71 | 1155096 4 drwxr-xr-x 4 tom tom 4096 Feb 10 09:37 . 72 | 1155152 4 drwxr-xr-x 2 tom tom 4096 Feb 10 09:17 ./build 73 | 1155155 4 -rw-r--r-- 1 tom tom 2290 Jan 11 07:21 ./client.c 74 | 1155157 4 -rw-r--r-- 1 tom tom 1871 Jan 11 16:41 ./client.h 75 | 1155159 32 -rw-r--r-- 1 tom tom 30390 Jan 10 15:29 ./common.h 76 | 1155153 24 -rw-r--r-- 1 tom tom 21170 Jan 11 05:43 ./Makefile 77 | 1155154 16 -rw-r--r-- 1 tom tom 13966 Jan 14 07:39 ./project.c 78 | 1155080 28 -rw-r--r-- 1 tom tom 25840 Jan 15 22:28 ./README 79 | 1155156 32 -rw-r--r-- 1 tom tom 31124 Jan 11 02:34 ./server.c 80 | 1155158 4 -rw-r--r-- 1 tom tom 3599 Jan 16 05:27 ./server.h 81 | 1155160 4 drwxr-xr-x 2 tom tom 4096 Feb 10 09:29 ./tests 82 | 1155161 4 -rw-r--r-- 1 tom tom 288 Jan 13 03:04 ./tests/suite1.pl 83 | 1155162 4 -rw-r--r-- 1 tom tom 1792 Jan 13 10:06 ./tests/suite2.pl 84 | 1155163 4 -rw-r--r-- 1 tom tom 112 Jan 9 23:42 ./tests/suite3.pl 85 | 1155164 4 -rw-r--r-- 1 tom tom 144 Jan 15 02:10 ./tests/suite4.pl 86 | 87 | Note that in this case I have to specify to `sort` that it should sort by the 88 | 11th column of output, the filenames; this is done with the `-k` option. 
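The same trick works for any other column; for example, since the size in bytes appears in the seventh column of this output, you could list the five largest files in the tree by sorting numerically on that field instead:

    $ find -ls | sort -n -k 7 | tail -n 5
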
89 | 90 | `find` has a complex filtering syntax all of its own; the following examples 91 | show some of the most useful filters you can apply to retrieve lists of 92 | certain files: 93 | 94 | * `find -name '*.c'` — Find files with names matching a shell-style pattern. Use `-iname` for a case-insensitive search. 95 | * `find -path '*test*'` — Find files with paths matching a shell-style pattern. Use `-ipath` for a case-insensitive search. 96 | * `find -mtime -5` — Find files edited within the last five days. You can use `+5` instead to find files edited _before_ five days ago. 97 | * `find -newer server.c` — Find files more recently modified than `server.c`. 98 | * `find -type d` — Find directories. For files, use `-type f`; for symbolic links, use `-type l`. 99 | 100 | Note, in particular, that all of these can be combined, for example to find C 101 | source files edited in the last two days: 102 | 103 | 104 | $ find -name '*.c' -mtime -2 105 | 106 | By default, the action `find` takes for search results is simply to list them 107 | on standard output, but there are several other useful actions: 108 | 109 | * `-ls` — Provide an `ls -l` style listing, as above 110 | * `-delete` — Delete matching files 111 | * `-exec` — Run an arbitrary command line on each file, replacing `{}` with the appropriate filename, and terminated by `\;`; for example: 112 | 113 | $ find -name '*.pl' -exec perl -c {} \; 114 | 115 | It might be a bit more straightforward to use `xargs` in most cases, though, 116 | to turn the printed results into arguments for a command: 117 | 118 | 119 | $ find -name '*.pl' | xargs perl -c 120 | 121 | * `-print0` — If you're dealing with filenames with spaces and intend to pipe results to `xargs` as above, use this to make the record separator a null character rather than a space to handle this, along with the `-0` option for `xargs`: 122 | 123 | $ find -name '*.jpg' -print0 | xargs -0 jpegoptim 124 | 125 | One trick I find myself using often is using `find` to generate lists of files 126 | that I then edit in vertically split Vim windows: 127 | 128 | 129 | $ vim -O $(find . -name '*.c') 130 | 131 | ## Searching files 132 | 133 | More often than _attributes_ of a set of files, however, you want to find 134 | files based on their _contents_, and it's no surprise that `grep`, in 135 | particular `grep -R`, is useful here. This searches the current directory tree 136 | recursively for anything matching 'someVar': 137 | 138 | 139 | $ grep -FR 'someVar' . 140 | 141 | Don't forget the case insensitivity flag either, since by default `grep` works 142 | with fixed case: 143 | 144 | 145 | $ grep -iR 'somevar' . 146 | 147 | Also, you can print a list of files that match without printing the matches 148 | themselves with `grep -l`, which again is very useful for building a list of 149 | files to edit in your chosen text editor: 150 | 151 | 152 | $ vim -O $(grep -lR 'somevar' .) 153 | 154 | If you're using version control for your project, this often includes metadata 155 | in the `.svn`, `.git`, or `.hg` directories. This is dealt with easily enough 156 | by _excluding_ (`grep -v`) anything matching an appropriate fixed (`grep -F`) 157 | string: 158 | 159 | 160 | $ grep -R 'someVar' . | grep -vF '.svn' 161 | 162 | With all this said, there's a very popular [alternative to 163 | grep](http://betterthangrep.com/) called `ack`, which excludes this sort of 164 | stuff for you by default. 
It also allows you to use Perl-compatible regular 165 | expressions (PCRE), which are a favourite for many hackers. It has a lot of 166 | utilities that are generally useful for working with source code, so while 167 | there's nothing wrong with good old `grep` since you know it will always be 168 | there, if you can install `ack` I highly recommend it. There's a Debian 169 | package called `ack-grep`, and being a Perl script it's otherwise very simple 170 | to install. 171 | 172 | Unix purists might be displeased with my even mentioning a relatively new Perl 173 | script alternative to classic `grep`, but I don't believe that the Unix 174 | philosophy or using Unix as an IDE is dependent on sticking to the same 175 | classic tools when alternatives with the same spirit that solve new problems 176 | are available. 177 | 178 | ## File metadata 179 | 180 | The `file` tool gives you a one-line summary of what kind of file you're 181 | looking at, based on its extension, headers and other cues. This is very handy 182 | used with `find` and `xargs` when examining a set of unfamiliar files: 183 | 184 | 185 | $ find | xargs file 186 | .: directory 187 | ./hanoi: Perl script, ASCII text executable 188 | ./.hanoi.swp: Vim swap file, version 7.3 189 | ./factorial: Perl script, ASCII text executable 190 | ./bits.c: C source, ASCII text 191 | ./bits: ELF 32-bit LSB executable, Intel 80386, version ... 192 | 193 | ## Matching files 194 | 195 | As a final tip for this section, I'd suggest learning a bit about pattern 196 | matching and brace expansion in Bash, which you can do in my earlier post 197 | entitled [Bash shell expansion](http://blog.sanctum.geek.nz/bash-shell- 198 | expansion/). 199 | 200 | All of the above make the classic UNIX shell into a pretty powerful means of 201 | managing files in programming projects. 202 | 203 | 204 | [Unix as IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/) 205 | 206 | * [Unix as IDE: Introduction](http://blog.sanctum.geek.nz/unix-as-ide-introduction/) 207 | * Unix as IDE: Files 208 | * [Unix as IDE: Editing](http://blog.sanctum.geek.nz/unix-as-ide-editing/) 209 | * [Unix as IDE: Compiling](http://blog.sanctum.geek.nz/unix-as-ide-compiling/) 210 | * [Unix as IDE: Building](http://blog.sanctum.geek.nz/unix-as-ide-building/) 211 | * [Unix as IDE: Debugging](http://blog.sanctum.geek.nz/unix-as-ide-debugging/) 212 | * [Unix as IDE: Revisions](http://blog.sanctum.geek.nz/unix-as-ide-revisions/) 213 | 214 | This entry is part 2 of 7 in the series [Unix as 215 | IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/). 216 | 217 | [<< Unix as IDE: Introduction](http://blog.sanctum.geek.nz/unix-as-ide- 218 | introduction/)[Unix as IDE: Editing >>](http://blog.sanctum.geek.nz/unix-as- 219 | ide-editing/) 220 | -------------------------------------------------------------------------------- /origin/Introduction.md: -------------------------------------------------------------------------------- 1 | # Unix as IDE: Introduction 2 | 3 | Posted on [February 9, 2012](http://blog.sanctum.geek.nz/unix-as-ide- 4 | introduction/) by [Tom Ryder](http://blog.sanctum.geek.nz/author/tom/) 5 | 6 | Newbies and experienced professional programmers alike appreciate the concept 7 | of the IDE, or [integrated development 8 | environment](http://en.wikipedia.org/wiki/Integrated_development_environment). 
9 | Having the primary tools necessary for organising, writing, maintaining, 10 | testing, and debugging code in an integrated application with common 11 | interfaces for all the different tools is certainly a very valuable asset. 12 | Additionally, an environment expressly designed for programming in various 13 | languages affords advantages such as autocompletion, and syntax checking and 14 | highlighting. 15 | 16 | With such tools available to developers on all major desktop operating systems 17 | including Linux and BSD, and with many of the best free of charge, there's not 18 | really a good reason to write your code in Windows Notepad, or with `nano` or 19 | `cat`. 20 | 21 | However, there's a minor meme among devotees of Unix and its modern-day 22 | derivatives that "Unix is an IDE", meaning that the tools available to 23 | developers on the terminal cover the major features in cutting-edge desktop 24 | IDEs with some ease. Opinion is quite divided on this, but whether or not you 25 | feel it's fair to call Unix an IDE in the same sense as Eclipse or Microsoft 26 | Visual Studio, it may surprise you just how comprehensive a development 27 | environment the humble Bash shell can be. 28 | 29 | ## How is UNIX an IDE? 30 | 31 | The primary rationale for using an IDE is that it gathers all your tools in 32 | the same place, and you can use them in concert with roughly the same user 33 | interface paradigm, and without having to exert too much effort to make 34 | separate applications cooperate. The reason this becomes especially desirable 35 | with GUI applications is because it's very difficult to make windowed 36 | applications speak a common language or work well with each other; aside from 37 | cutting and pasting text, they don't share a _common interface_. 38 | 39 | The interesting thing about this problem for shell users is that well-designed 40 | and enduring Unix tools already share a common user interface in _streams of 41 | text_ and _files as persistent objects_, otherwise expressed in the axiom 42 | "everything's a file". Pretty much everything in Unix is built around these 43 | two concepts, and it's this common user interface, coupled with a forty-year 44 | history of high-powered tools whose users and developers have especially 45 | prized interoperability, that goes a long way to making Unix as powerful as a 46 | full-blown IDE. 47 | 48 | ## The right idea 49 | 50 | This attitude isn't the preserve of battle-hardened Unix greybeards; you can 51 | see it in another form in the way the modern incarnations of the two grand old 52 | text editors Emacs and Vi (GNU Emacs and Vim) have such active communities 53 | developing plugins to make them support pretty much any kind of editing task. 54 | There are plugins to do pretty much anything you could really want to do in 55 | programming in both editors, and like any Vim junkie I could spout off at 56 | least six or seven that I feel are "essential". 57 | 58 | However, it often becomes apparent to me when reading about these efforts that 59 | the developers concerned are trying to make these text editors into IDEs in 60 | their own right. There are posts about [never needing to leave 61 | Vim](http://symbolsystem.com/2010/12/15/this-is-your-brain-on-vim/), or [never 62 | needing to leave Emacs](http://news.ycombinator.com/item?id=819447). But I 63 | think that trying to shoehorn Vim or Emacs into becoming something that it's 64 | not isn't quite thinking about the problem in the right way. 
Bram Moolenaar, 65 | the author of Vim, appears to agree to some extent, as you can see by reading 66 | `[:help design-not](http://vimdoc.sourceforge.net/htmldoc/develop.html#design- 67 | not)`. The shell is only ever a Ctrl+Z away, and its mature, highly composable 68 | toolset will afford you more power than either editor ever could. 69 | 70 | ## About this series 71 | 72 | In this series of posts, I will be going through six major features of an IDE, 73 | and giving examples showing how common tools available in Linux allow you to 74 | use them together with ease. This will by no means be a comprehensive survey, 75 | nor are the tools I will demonstrate the only options. 76 | 77 | * **File and project management** — `ls`, `find`, `grep`/`ack`, `bash` 78 | * **Text editor and editing tools** — `vim`, `awk`, `sort`, `column` 79 | * **Compiler and/or interpreter** — `gcc`, `perl` 80 | * **Build tools** — `make` 81 | * **Debugger** — `gdb`, `valgrind`, `ltrace`, `lsof`, `pmap` 82 | * **Version control** — `diff`, `patch`, `svn`, `git` 83 | 84 | ## What I'm not trying to say 85 | 86 | I don't think IDEs are bad; I think they're brilliant, which is why I'm trying 87 | to convince you that Unix can be used as one, or at least thought of as one. 88 | I'm also not going to say that Unix is always the best tool for any 89 | programming task; it is arguably much better suited for C, C++, Python, Perl, 90 | or Shell development than it is for more "industry" languages like Java or C#, 91 | especially if writing GUI-heavy applications. In particular, I'm not going to 92 | try to convince you to scrap your hard-won Eclipse or Microsoft Visual Studio 93 | knowledge for the sometimes esoteric world of the command line. All I want to 94 | do is show you what we're doing on the other side of the fence. 95 | 96 | 97 | [Unix as IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/) 98 | 99 | * Unix as IDE: Introduction 100 | * [Unix as IDE: Files](http://blog.sanctum.geek.nz/unix-as-ide-files/) 101 | * [Unix as IDE: Editing](http://blog.sanctum.geek.nz/unix-as-ide-editing/) 102 | * [Unix as IDE: Compiling](http://blog.sanctum.geek.nz/unix-as-ide-compiling/) 103 | * [Unix as IDE: Building](http://blog.sanctum.geek.nz/unix-as-ide-building/) 104 | * [Unix as IDE: Debugging](http://blog.sanctum.geek.nz/unix-as-ide-debugging/) 105 | * [Unix as IDE: Revisions](http://blog.sanctum.geek.nz/unix-as-ide-revisions/) 106 | 107 | This entry is part 1 of 7 in the series [Unix as 108 | IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/). 109 | 110 | [Unix as IDE: Files >>](http://blog.sanctum.geek.nz/unix-as-ide-files/) 111 | 112 | -------------------------------------------------------------------------------- /origin/Revisions.md: -------------------------------------------------------------------------------- 1 | # Unix as IDE: Revisions 2 | 3 | Posted on [February 15, 2012](http://blog.sanctum.geek.nz/unix-as-ide- 4 | revisions/) by [Tom Ryder](http://blog.sanctum.geek.nz/author/tom/) 5 | 6 | Version control is now seen as an indispensable part of professional software 7 | development, and GUI IDEs like Eclipse and Visual Studio have embraced it and 8 | included support for industry standard version control systems in their 9 | products. Modern version control systems trace their lineage back to Unix 10 | concepts from programs such as `diff` and `patch` however, and there are 11 | plenty of people who will insist that the best way to use a version control 12 | system is still at a shell prompt. 
13 |
14 | In this last article in the [Unix as an IDE
15 | series](http://blog.sanctum.geek.nz/series/unix-as-ide/), I'll follow the
16 | evolution of common open-source version control systems from the basic
17 | concepts of `diff` and `patch`, among the very first version control tools.
18 |
19 | ## `diff`, `patch`, and RCS
20 |
21 | A central concept for version control systems has been that of the _unified
22 | diff_, a file expressing in human- and computer-readable terms a set of changes
23 | made to a file or files. The `diff` command was first released by Douglas
24 | McIlroy in 1974 for the 5th Edition of Unix, so it's one of the oldest
25 | commands still in regular use on modern systems.
26 |
27 | A _unified diff_, the most common and interoperable format, can be generated
28 | by comparing two versions of a file with the following syntax:
29 |
30 |
31 | $ diff -u example.{1,2}.c
32 | --- example.1.c 2012-02-15 20:15:37.000000000 +1300
33 | +++ example.2.c 2012-02-15 20:15:57.000000000 +1300
34 | @@ -1,8 +1,9 @@
35 | #include <stdio.h>
36 | +#include <stdlib.h>
37 |
38 | int main (int argc, char* argv[])
39 | {
40 | printf("Hello, world!\n");
41 | - return 0;
42 | + return EXIT_SUCCESS;
43 | }
44 |
45 | In this example, the second file has a header file added, and the call to
46 | `return` changed to use the standard `EXIT_SUCCESS` rather than a literal `0`
47 | as the return value for `main()`. Note that the output for `diff` also
48 | includes metadata such as the filename that was changed and the last
49 | modification time of each of the files.
50 |
51 | A primitive form of version control for larger code bases was thus for
52 | developers to trade `diff` output, called _patches_ in this context, so that
53 | they could be applied to one another's code bases with the `patch` tool. We
54 | could save the output from `diff` above as a patch like so:
55 |
56 |
57 | $ diff -u example.{1,2}.c > example.patch
58 |
59 | We could then send this patch to a developer who still had the old version of
60 | the file, and they could automatically apply it with:
61 |
62 |
63 | $ patch example.1.c < example.patch
64 |
65 | A patch can include `diff` output from more than one file, including within
66 | subdirectories, so this provides a very workable way to apply changes to a
67 | source tree.
68 |
69 | The operations involved in using `diff` output to track changes were
70 | sufficiently regular that, to keep an in-place history of a file, the [Source
71 | Code Control System](http://en.wikipedia.org/wiki/Source_Code_Control_System)
72 | and the [Revision Control
73 | System](http://en.wikipedia.org/wiki/Revision_Control_System) that has pretty
74 | much replaced it were developed. RCS enabled "locking" files so that they
75 | could not be edited by anyone else while "checked out" of the system, paving
76 | the way for other concepts in more developed version control systems.
77 |
78 | RCS retains the advantage of being very simple to use. To place an existing
79 | file under version control, one need only type `ci <filename>` and provide an
80 | appropriate description for the file:
81 |
82 |
83 | $ ci example.c
84 | example.c,v <-- example.c
85 | enter description, terminated with single '.' or end of file:
86 | NOTE: This is NOT the log message!
87 | >> example file
88 | >> .
89 | initial revision: 1.1
90 | done
91 |
92 | This creates a file in the same directory, `example.c,v`, that will track the
93 | changes.
To make changes to the file, you _check it out_, make the changes,
94 | then _check it back in_:
95 |
96 |
97 | $ co -l example.c
98 | example.c,v --> example.c
99 | revision 1.1 (locked)
100 | done
101 | $ vim example.c
102 | $ ci -u example.c
103 | example.c,v <-- example.c
104 | new revision: 1.2; previous revision: 1.1
105 | enter log message, terminated with single '.' or end of file:
106 | >> added a line
107 | >> .
108 | done
109 |
110 | You can then view the history of a project with `rlog`:
111 |
112 |
113 | $ rlog example.c
114 |
115 | RCS file: example.c,v
116 | Working file: example.c
117 | head: 1.2
118 | branch:
119 | locks: strict
120 | access list:
121 | symbolic names:
122 | keyword substitution: kv
123 | total revisions: 2; selected revisions: 2
124 | description:
125 | example file
126 | ----------------------------
127 | revision 1.2
128 | date: 2012/02/15 07:39:16; author: tom; state: Exp; lines: +1 -0
129 | added a line
130 | ----------------------------
131 | revision 1.1
132 | date: 2012/02/15 07:36:23; author: tom; state: Exp;
133 | Initial revision
134 | =============================================================================
135 |
136 | You can also get a patch in unified `diff` format between two revisions with `rcsdiff
137 | -u`:
138 |
139 |
140 | $ rcsdiff -u -r1.1 -r1.2 ./example.c
141 | ===================================================================
142 | RCS file: ./example.c,v
143 | retrieving revision 1.1
144 | retrieving revision 1.2
145 | diff -u -r1.1 -r1.2
146 | --- ./example.c 2012/02/15 07:36:23 1.1
147 | +++ ./example.c 2012/02/15 07:39:16 1.2
148 | @@ -4,6 +4,7 @@
149 | int main (int argc, char* argv[])
150 | {
151 | printf("Hello, world!\n");
152 | + printf("Extra line!\n");
153 | return EXIT_SUCCESS;
154 | }
155 |
156 | It would be misleading to imply that simple patches are now in disuse as a
157 | method of version control; they are still very commonly used in the forms
158 | above, and also figure prominently in both centralised and decentralised
159 | version control systems.
160 |
161 | ## CVS and Subversion
162 |
163 | To handle the problem of resolving changes made to a code base by multiple
164 | developers, _centralized version control systems_ were developed, with the [Concurrent
165 | Versions System
166 | (CVS)](http://en.wikipedia.org/wiki/Concurrent_Versions_System) developed
167 | first and the slightly more advanced
168 | [Subversion](http://en.wikipedia.org/wiki/Apache_Subversion) later on. The
169 | central feature of these systems is a _central server_ that contains
170 | the repository, from which authoritative versions of the codebase at any
171 | particular time or revision can be retrieved. The local copies that developers
172 | retrieve and edit are termed _working copies_ of the code.
173 |
174 | For these systems, the basic unit of work remained the _changeset_, and
175 | the most common way to represent changesets to the user was the archetypal
176 | `diff` format used by earlier tools. Both systems work by keeping records of
177 | these changesets, rather than of the actual files themselves from state to state.
178 |
179 | Other concepts introduced by this generation of systems included _branching_
180 | projects so that separate instances of the same project could be worked on
181 | concurrently, and then merged into the mainline, or _trunk_, with appropriate
182 | testing and review. Similarly, the concept of _tagging_ was introduced to flag
183 | certain revisions as representing the state of a codebase at the time of a
184 | release of the software.
The concept of the `merge` was also introduced:
185 | manually reconciling conflicting changes made to the same file.
186 |
187 | ## Git and Mercurial
188 |
189 | The next generation of version control systems consists of _distributed_ or
190 | _decentralized_ systems, in which working copies of the code themselves
191 | contain a complete history of the project, and are hence not reliant on a
192 | central server to contribute to the project. In the open-source, Unix-friendly
193 | environment, the standout systems are Git and Mercurial, with their client
194 | programs `git` and `hg`.
195 |
196 | In both of these systems, changesets are communicated between repositories
197 | with the operations `push`, `pull`, and `merge`; changes from one repository
198 | are accepted by another. This decentralized model allows for a very complex
199 | but tightly controlled ecosystem of development; Git was originally developed
200 | by Linus Torvalds to provide an open-source DVCS capable of managing
201 | development for the Linux kernel.
202 |
203 | Both Git and Mercurial differ from CVS and Subversion in that the basic unit
204 | for their operations is not the changeset, but complete files (blobs) saved using
205 | compression. This makes finding the log history of a single file or the
206 | differences between two revisions of a file slightly more expensive, but the
207 | output of `git log --patch` still retains the familiar unified `diff` output
208 | for each revision, some forty years after `diff` first came into use:
209 |
210 |
211 | commit c1e5559ddb09f8d02b989596b0f4100ad1aab422
212 | Author: Tom Ryder
213 | Date: Thu Feb 2 01:14:21 2012
214 |
215 | Changed my mind about this one.
216 |
217 | diff --git a/vim/vimrc b/vim/vimrc
218 | index cfbe8e0..65a3143 100644
219 | --- a/vim/vimrc
220 | +++ b/vim/vimrc
221 | @@ -47,10 +47,6 @@ set shiftwidth=4
222 | set softtabstop=4
223 | set tabstop=4
224 |
225 | -" Heresy
226 | -inoremap
227 | -inoremap
228 | -
229 | " History
230 | set history=1000
231 |
232 | The two systems have considerable overlap in functionality and even in command
233 | set, and the question of which to use provokes [considerable
234 | debate](http://stackoverflow.com/questions/35837/what-is-the-difference-
235 | between-mercurial-and-git). The best introductions I've seen to each are [Pro
236 | Git](http://progit.org/) by Scott Chacon, and [Hg Init](http://hginit.com/) by
237 | Joel Spolsky.
238 |
239 | ## Conclusion
240 |
241 | This is the last post in the [Unix as IDE
242 | series](http://blog.sanctum.geek.nz/series/unix-as-ide/); I've tried to offer
243 | a rapid survey of the basic tools available just within a shell on Linux for
244 | all of the basic functionality afforded by professional IDEs. At points I've
245 | had to be not quite as thorough as I'd like in explaining certain features,
246 | but to those unfamiliar with development on Linux machines this will hopefully
247 | have given some idea of how comprehensive a development environment the
248 | humble shell can be, and all with free, highly mature, and standard software
249 | tools.
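As a final concrete sketch of the push/pull workflow described in the Git and Mercurial section above, the following session sets up two local clones standing in for two developers sharing work through a bare repository. The directory and file names are invented for illustration, and a real project would normally push to and pull from a remote host rather than a sibling directory:

    $ git init --bare shared.git            # a bare repository acting as the shared meeting point
    $ git clone shared.git alice            # Alice's working copy, with its own full history
    $ git clone shared.git bob              # Bob's working copy
    $ cd alice
    $ echo 'First note' > notes.txt
    $ git add notes.txt
    $ git commit -m 'Add notes file'
    $ git push origin master                # publish the changeset (the branch may be 'main' on newer Git)
    $ cd ../bob
    $ git pull origin master                # Bob accepts Alice's changes into his repository

The equivalent Mercurial commands are `hg push` and `hg pull`, with `hg merge` used to reconcile divergent heads when two people have changed the same history.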
250 |
251 |
252 | [Unix as IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/)
253 |
254 | * [Unix as IDE: Introduction](http://blog.sanctum.geek.nz/unix-as-ide-introduction/)
255 | * [Unix as IDE: Files](http://blog.sanctum.geek.nz/unix-as-ide-files/)
256 | * [Unix as IDE: Editing](http://blog.sanctum.geek.nz/unix-as-ide-editing/)
257 | * [Unix as IDE: Compiling](http://blog.sanctum.geek.nz/unix-as-ide-compiling/)
258 | * [Unix as IDE: Building](http://blog.sanctum.geek.nz/unix-as-ide-building/)
259 | * [Unix as IDE: Debugging](http://blog.sanctum.geek.nz/unix-as-ide-debugging/)
260 | * Unix as IDE: Revisions
261 |
262 | This entry is part 7 of 7 in the series [Unix as
263 | IDE](http://blog.sanctum.geek.nz/series/unix-as-ide/).
264 |
265 | [<< Unix as IDE: Debugging](http://blog.sanctum.geek.nz/unix-as-ide-
266 | debugging/)
267 |
--------------------------------------------------------------------------------
/origin/ltrace-vim.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ConanChou/Unix-as-IDE--Chinese-/d8efd28aa2645d6a15765a10a4ac1f4355594911/origin/ltrace-vim.png
--------------------------------------------------------------------------------
/origin/vim-diff.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ConanChou/Unix-as-IDE--Chinese-/d8efd28aa2645d6a15765a10a4ac1f4355594911/origin/vim-diff.png
--------------------------------------------------------------------------------
/origin/vim-quickfix.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ConanChou/Unix-as-IDE--Chinese-/d8efd28aa2645d6a15765a10a4ac1f4355594911/origin/vim-quickfix.png
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | # Motivation
2 |
3 | A while ago I came across a series of articles by Tom Ryder on [Hacker News](http://news.ycombinator.com/), [about Unix and IDEs (integrated development environments)](http://blog.sanctum.geek.nz/series/unix-as-ide/ "Unix as IDE"). Articles like this turn up on HN quite often; it's only because I'm an open-source and Vim enthusiast that I open every one, and every time I learn something new.
4 |
5 | Around that time the topic also came up several times with classmates and friends. They are all IDE devotees, which is not a bad thing in itself; I reach for an IDE myself on occasion. But they generally have no accurate picture of Unix or of editors like Vim, so my point of view never got the resonance or agreement it deserved. Then I happened to read this series, found it an excellent introduction, and agreed with the author on most points, so rather than write a blog post of my own I decided to translate the series. (The reason for translating rather than simply reposting is that a lack of knowledge about Unix and its development tools is widespread in China, and language may be one of the causes; so this series is translated not just for my friends and classmates, but more to spread the knowledge.)
6 |
7 | If this topic interests you, it is also worth reading [The unix programming environment](http://markburgess.org/unix/unix_toc.html). If you are still at university, taking some related courses will be a great help to your career as well.
8 |
9 | **One final note**: I am not trying to push any particular development tool or workflow; like the author of this series, I only want to spread knowledge. If these articles upend your world view and you turn into a Unix fan as a result, we accept no responsibility.
10 |
11 | # About the project
12 | * Documentation is generated with [sphinx](http://sphinx.pocoo.org/).
13 | * Project home page: [Unix as IDE (Chinese)](https://github.com/ConanChou/Unix-as-IDE--Chinese-).
14 |
15 | # Contributors
16 | The contributors to this translation project are: [ConanChou](https://github.com/ConanChou), [ccl13](https://github.com/ccl13), [A2ZH](https://github.com/theJian), [Peter](https://github.com/petergithub), [Derrick](https://github.com/zillou).
17 |
18 | Future maintainers and contributors are also asked to add their username and link to this file and to `index.rst` when submitting a PR (or someone could write a script to automate that :))
19 |
--------------------------------------------------------------------------------
/revisions.rst:
--------------------------------------------------------------------------------
1 | 版本控制
2 | ========
3 |
4 | 现在,版本控制已经被视为专业软件开发中不可或缺的部分。图形化的集成开发环境例如 Eclipse 和 Visual Studio 已经包含版本控制,并且引入了对工业标准版本控制系统的支持。现代版本控制系统的血统可以追溯到来自于 ``diff`` 和 ``patch`` 等 Unix
程序的概念。而且依旧有很多人坚持认为最好的版本控制系统是命令行形式的。
5 |
6 | 在这个系列的最后一篇文章中,我会跟随常见开源版本控制系统发展的脚步,在最早的版本控制中从 ``diff`` 和 ``patch`` 的基本概念开始。
7 |
8 | ``diff``\、 ``patch`` 和 RCS
9 | ----------------------------
10 |
11 | 版本控制的一个核心概念就是 *unified diff*\,即统一差别格式,一种使用人机皆可理解的表达方式来表现文件中的变化。 ``diff`` 命令最初是由 Douglas McIlroy 在1974年随着第五版Unix发行的,所以它可以称得上是现代系统中仍在使用的最老的命令之一。
12 |
13 | 统一差别格式( *unified diff* )是最常见的可协作格式,可以由比较一个文件的两个不同版本来产生,它遵循以下语法: ::
14 |
15 | $ diff -u example.{1,2}.c
16 | --- example.1.c 2012-02-15 20:15:37.000000000 +1300
17 | +++ example.2.c 2012-02-15 20:15:57.000000000 +1300
18 | @@ -1,8 +1,9 @@
19 | #include <stdio.h>
20 | +#include <stdlib.h>
21 |
22 | int main (int argc, char* argv[])
23 | {
24 | printf("Hello, world!\n");
25 | - return 0;
26 | + return EXIT_SUCCESS;
27 | }
28 |
29 | 在这个例子中,第二个文件相比第一个文件添加了一个头文件,并且其 ``main()`` 函数返回使用了标准的 ``EXIT_SUCCESS`` 而不是数字 ``0``\。并且注意开头, ``diff`` 还给出了相应元数据,例如比较的文件名还有文件的最后修改时间。
30 |
31 | 在代码量较大的情况下,一种原始的版本控制方法即是交换 ``diff`` 输出。这些输出被称作 *patches*\,即补丁,补丁可以用 ``patch`` 来补到原基础代码上。我们可以这样将上面例子的 ``diff`` 输出保存成补丁: ::
32 |
33 | $ diff -u example.{1,2}.c > example.patch
34 |
35 | 然后我们就可以把补丁发送给一个尚在使用旧版源文件的开发者,然后他就可以像这样自动化打补丁: ::
36 |
37 | $ patch example.1.c < example.patch
38 |
39 | 补丁文件可以包含本目录及子目录内多组 ``diff`` 文件比较输出的结果,这样补丁就可以很容易地应用到源代码树上了。
40 |
41 | 使用 ``diff`` 输出来跟踪改动所涉及的操作足够有规律,因此为了保存文件的修改历史,
42 | 人们开发出了 `Source Code Control System <http://en.wikipedia.org/wiki/Source_Code_Control_System>`_ 和 `Revision Control
43 | System <http://en.wikipedia.org/wiki/Revision_Control_System>`_ (译注:源代码控制系统,暂无中文维基页面,出于可信度考虑不引用其他百科) 来实现这一目的,后者(RCS)基本上已经取代了前者(SCCS)。RCS提供了“锁定(lock)”文件的功能,防止一个文件在被“签出(check out)”时被其他人修改。这个概念给其他更成熟的版本控制系统铺平了道路。
44 |
45 | RCS保留了简单易用的优势。将一个文件纳入版本控制,只需要键入 ``ci <filename>`` 并且提供一个合适的文件描述: ::
46 |
47 | $ ci example.c
48 | example.c,v <-- example.c
49 | enter description, terminated with single '.' or end of file:
50 | NOTE: This is NOT the log message!
51 | >> example file
52 | >> .
53 | initial revision: 1.1
54 | done
55 |
56 | 这样就在该文件目录内创建了一个新文件 ``example.c,v``\,用来跟踪文件的修改。修改文件之前,你需要 *签出(check out)* ,然后修改,最后再把它 *签入(check in)* 回去: ::
57 |
58 | $ co -l example.c
59 | example.c,v --> example.c
60 | revision 1.1 (locked)
61 | done
62 | $ vim example.c
63 | $ ci -u example.c
64 | example.c,v <-- example.c
65 | new revision: 1.2; previous revision: 1.1
66 | enter log message, terminated with single '.' or end of file:
67 | >> added a line
68 | >> .
69 | done
70 |
71 | 你可以使用 ``rlog`` 来查看一个项目的修改历史: ::
72 |
73 | $ rlog example.c
74 |
75 | RCS file: example.c,v
76 | Working file: example.c
77 | head: 1.2
78 | branch:
79 | locks: strict
80 | access list:
81 | symbolic names:
82 | keyword substitution: kv
83 | total revisions: 2; selected revisions: 2
84 | description:
85 | example file
86 | ----------------------------
87 | revision 1.2
88 | date: 2012/02/15 07:39:16; author: tom; state: Exp; lines: +1 -0
89 | added a line
90 | ----------------------------
91 | revision 1.1
92 | date: 2012/02/15 07:36:23; author: tom; state: Exp;
93 | Initial revision
94 |
95 | 使用 ``rcsdiff -u`` 命令获得两个修订版本之间统一差别( ``diff`` )格式的补丁文件: ::
96 |
97 | $ rcsdiff -u -r1.1 -r1.2 ./example.c
98 | ===================================================================
99 | RCS file: ./example.c,v
100 | retrieving revision 1.1
101 | retrieving revision 1.2
102 | diff -u -r1.1 -r1.2
103 | --- ./example.c 2012/02/15 07:36:23 1.1
104 | +++ ./example.c 2012/02/15 07:39:16 1.2
105 | @@ -4,6 +4,7 @@
106 | int main (int argc, char* argv[])
107 | {
108 | printf("Hello, world!\n");
109 | + printf("Extra line!\n");
110 | return EXIT_SUCCESS;
111 | }
112 |
113 | 这样的用法可能让你觉得简单的补丁文件已经不再是一种版本控制的方法了。实际上它们依然很常用于像上面那样的场合,并且对于集中式和分散式的版本控制系统来说依然是很重要的。
114 |
115 | CVS 和 Subversion
116 | -----------------
117 |
118 | 为了解决多个开发者基于同一个代码做修改的问题,人们就开发了 *中心化版本控制系统* ,最早的是 `协作版本系统 (CVS) <http://en.wikipedia.org/wiki/Concurrent_Versions_System>`_ ,之后出现了稍微高级一些的 `Subversion <http://en.wikipedia.org/wiki/Apache_Subversion>`_\。这些系统的核心特性就是任何时刻或者任何修订版代码的官方版本都能从作为代码仓库的 *中心服务器* 上得到。如此获得的一个代码库被称为 *工作副本*\。
119 |
120 | 对于这些系统来说,基本的操作单位叫做 *变更集(changeset)*\。早期此类系统中最常见的向用户表现 *变更集* 的方式就是通过提供一个原型 ``diff`` 格式输出。这两种版本控制系统的工作方式都是记录变更集,而不是记录不同版本的原始文件本身。
121 |
122 | 这一代版本控制系统还引入了一些其他概念。例如 *分支(branch)* 一个项目,使得一个项目的多个不同版本可以同时存在同时修改,最终通过一系列测试和审查合并到主线或者叫 *主干(trunk)* 中。类似的一个概念是 *标签*\,可以将代码库的一个特定的版本标记为对应软件的一个发布版本。 ``merge``\(合并)的概念也引入了,允许手动解决因为对同一个文件同一部分的修改所造成的冲突。
123 |
124 |
125 | Git 和 Mercurial
126 | ----------------
127 |
128 | 后一代版本控制系统则是 *分布式* ,或者叫 *无中心式* 的系统。在这些系统中工作副本包括了代码和项目的完整历史,所以不需要中心服务器就可以向这个项目提交修改。在开源、Unix 友好的环境中,突出的此类系统是 Git 和 Mercurial,它们的客户端程序是 ``git`` 和 ``hg``\。
129 |
130 | 这两种系统中,交换变更集的操作是 ``push``\(推送), ``pull``\(拉取)和 ``merge``\(合并),一个仓库的修改可以被另一个接受。这样的无中心系统允许一种很复杂但是受到严格控制的开发生态。Git 就是 Linus Torvalds 为管理 Linux 内核开发工作而开发的分布式版本控制系统。
131 |
132 | Git 和 Mercurial 不同于 CVS 和 Subversion,它们的基本操作单位不是修改集,而是压缩保存的完整的文件(blob)。这样搜索某单个文件的历史或者查阅某文件的两个版本间的修改成本会略高,但是对于每个修订版 ``git log --patch`` 命令仍然能输出统一 ``diff`` 格式,即便是在 ``diff`` 命令出现四十年之后的今天: ::
133 |
134 | commit c1e5559ddb09f8d02b989596b0f4100ad1aab422
135 | Author: Tom Ryder
136 | Date: Thu Feb 2 01:14:21 2012
137 |
138 | Changed my mind about this one.
139 |
140 | diff --git a/vim/vimrc b/vim/vimrc
141 | index cfbe8e0..65a3143 100644
142 | --- a/vim/vimrc
143 | +++ b/vim/vimrc
144 | @@ -47,10 +47,6 @@ set shiftwidth=4
145 | set softtabstop=4
146 | set tabstop=4
147 |
148 | -" Heresy
149 | -inoremap
150 | -inoremap
151 | -
152 | " History
153 | set history=1000
154 |
155 | 这两种系统在功能甚至命令集上都有很多交叠,应该使用哪一种已经导致了 `大量争论 <http://stackoverflow.com/questions/35837/what-is-the-difference-between-mercurial-and-git>`_\。我见过的有关这两种系统的最好的介绍是 Scott Chacon 的 `Pro Git <http://progit.org/>`_ 和 Joel Spolsky 的 `Hg Init <http://hginit.com/>`_\。
156 |
157 | 结语
158 | ----
159 |
160 | 这是本系列文章的最后一篇。我试着给出了一个简捷的概览,介绍 Linux shell 里就已提供的一些基本工具和它们所提供的基本功能,而这些基本功能也恰好是一些专业 IDE 能够提供的。有时候我必须略过一些内容即便我想详细说明。不过我希望这些文章依然能让一个不熟悉在 Linux 系统上开发的人了解到这不起眼的 shell 也能成为非常全面的开发环境,并且完全使用的是免费、高度成熟且标准化的软件工具。
161 |
--------------------------------------------------------------------------------