/part1.txt:
--------------------------------------------------------------------------------
Part I: Working With Files

1. Empty a file (truncate to 0 size)

$ > file

This one-liner uses the output redirection operator >. Redirection of output causes the file to be opened for writing. If the file does not exist it is created; if it does exist it is truncated to zero size. As we're not redirecting anything to the file it remains empty.

If you wish to replace the contents of a file with some string or create a file with specific content, you can do this:

$ echo "some string" > file

This puts the string "some string" in the file.

2. Append a string to a file

$ echo "foo bar baz" >> file

This one-liner uses a different output redirection operator >>, which appends to the file. If the file does not exist it is created. The string appended to the file is followed by a newline. If you don't want a newline appended after the string, add the -n argument to echo:

$ echo -n "foo bar baz" >> file

3. Read the first line from a file and put it in a variable

$ read -r line < file

This one-liner uses the built-in bash command read and the input redirection operator <. The read command reads one line from the standard input and puts it in the line variable. The -r parameter makes sure the input is read raw, meaning the backslashes won't get escaped (they'll be left as is). The redirection command < file opens file for reading and makes it the standard input to the read command.

The read command removes the leading and trailing characters that are present in the special IFS variable. IFS stands for Internal Field Separator; it's used for word splitting after expansion and to split lines into words with the read built-in command.
By default IFS contains space, tab, and newline, which means that the leading and trailing tabs and spaces will get removed. If you wish to preserve them, you can set IFS to nothing for the time being:

$ IFS= read -r line < file

This will change the value of IFS just for this command and will make sure the first line gets read into the line variable really raw, with all the leading and trailing whitespace.

Another way to read the first line from a file into a variable is to do this:

$ line=$(head -1 file)

This one-liner uses the command substitution operator $(...). It runs the command in ..., and returns its output. In this case the command is head -1 file, which outputs the first line of the file. The output is then assigned to the line variable. Using $(...) is exactly the same as `...`, so you could have also written:

$ line=`head -1 file`

However $(...) is the preferred way in bash as it's cleaner and easier to nest.

4. Read a file line-by-line

$ while read -r line; do
    # do something with $line
done < file

This is the one and only right way to read lines from a file one-by-one. This method puts the read command in a while loop. When the read command encounters end-of-file, it returns a non-zero return code (the code for failure) and the while loop stops.

Remember that read trims leading and trailing whitespace, so if you wish to preserve it, clear the IFS variable:

$ while IFS= read -r line; do
    # do something with $line
done < file

If you don't like to put < file at the end, you can also pipe the contents of the file to the while loop:

$ cat file | while IFS= read -r line; do
    # do something with $line
done

Note, however, that in this version the while loop runs in a subshell, so any variables you set inside the loop won't be visible after it ends.

5. Read a random line from a file and put it in a variable

$ read -r random_line < <(shuf file)

There is no clean way to read a random line from a file with just bash, so we'll need to use some external programs for help. If you're on a modern Linux machine, then it comes with the shuf utility that's in GNU coreutils.

This one-liner uses the process substitution <(...) operator. It creates an anonymous named pipe and connects the stdout of the process to the write part of the named pipe. Then bash executes the process, and it replaces the whole process substitution expression with the filename of the anonymous named pipe.

When bash sees <(shuf file) it opens a special file /dev/fd/n, where n is a free file descriptor, then runs shuf file with its stdout connected to /dev/fd/n and replaces <(shuf file) with /dev/fd/n, so the command effectively becomes:

$ read -r random_line < /dev/fd/n

which reads the first line from the shuffled file.

Here is another way to do it with the help of GNU sort. GNU sort takes the -R option that randomizes the input.

$ read -r random_line < <(sort -R file)

Another way to get a random line in a variable is this:

$ random_line=$(sort -R file | head -1)

Here the file gets randomly sorted by sort -R and then head -1 takes the first line.

6. Read the first three columns/fields from a file into variables

$ while read -r field1 field2 field3 throwaway; do
    # do something with $field1, $field2, and $field3
done < file

If you specify more than one variable name to the read command, it splits the line into fields (the splitting is done based on what's in the IFS variable, which contains a space, a tab, and a newline by default), puts the first field in the first variable, the second field in the second variable, etc., and puts all the remaining fields in the last variable. That's why we have the throwaway variable after the three field variables: if we didn't have it, and the file had more than three columns, the third variable would also get the leftovers.

Sometimes it's shorter to just write _ for the throwaway variable:

$ while read -r field1 field2 field3 _; do
    # do something with $field1, $field2, and $field3
done < file

Or if you have a file with exactly three fields, then you don't need it at all:

$ while read -r field1 field2 field3; do
    # do something with $field1, $field2, and $field3
done < file

Here is an example. Let's say you wish to find out the number of lines, number of words, and number of bytes in a file. If you run wc on a file you get these 3 numbers plus the filename as the fourth field:

$ cat file-with-5-lines
x 1
x 2
x 3
x 4
x 5

$ wc file-with-5-lines
5 10 20 file-with-5-lines

So this file has 5 lines, 10 words, and 20 chars. We can use the read command to get this info into variables:

$ read lines words chars _ < <(wc file-with-5-lines)

$ echo $lines
5
$ echo $words
10
$ echo $chars
20

Similarly you can use here-strings to split strings into variables.
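To make the wc example above concrete, here is a self-contained sketch you can paste into a bash shell. The temp file created by mktemp stands in for file-with-5-lines, and the filename field printed by wc is discarded into the _ variable:

```shell
# Recreate the file-with-5-lines example with a throwaway temp file.
file=$(mktemp)
printf 'x %d\n' 1 2 3 4 5 > "$file"   # five lines: "x 1" .. "x 5"

# wc prints "lines words bytes filename"; read splits that output
# into variables and the filename lands in _.
read -r lines words chars _ < <(wc "$file")

echo "$lines $words $chars"           # prints: 5 10 20
rm "$file"
```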
Let's say you have a string "20 packets in 10 seconds" in a $info variable and you want to extract 20 and 10. Not too long ago I'd have written this:

$ packets=$(echo $info | awk '{ print $1 }')
$ time=$(echo $info | awk '{ print $4 }')

However given the power of read and our bash knowledge, we can now do this:

$ read packets _ _ time _ <<< "$info"

Here the <<< is a here-string, which lets you pass strings directly to the standard input of commands.

7. Find the size of a file, and put it in a variable

$ size=$(wc -c < file)

This one-liner uses the command substitution operator $(...) that I explained in one-liner #3. It runs the command in ..., and returns its output. In this case the command is wc -c < file, which prints the number of chars (bytes) in the file. The output is then assigned to the size variable.

8. Extract the filename from the path

Let's say you have a /path/to/file.ext, and you wish to extract just the filename file.ext. How do you do it? A good solution is to use the parameter expansion mechanism:

$ filename=${path##*/}

This one-liner uses the ${var##pattern} parameter expansion. This expansion tries to match the pattern at the beginning of the $var variable. If it matches, then the result of the expansion is the value of $var with the longest matching pattern deleted.

In this case the pattern is */ which matches at the beginning of /path/to/file.ext and as it's a greedy match, the pattern matches all the way till the last slash (it matches /path/to/). The result of this expansion is then just the filename file.ext as the matched pattern gets deleted.

9. Extract the directory name from the path

This is similar to the previous one-liner. Let's say you have a /path/to/file.ext, and you wish to extract just the path to the file /path/to.
You can use the parameter expansion again:

$ dirname=${path%/*}

This time it's the ${var%pattern} parameter expansion that tries to match the pattern at the end of the $var variable. If the pattern matches, then the result of the expansion is the value of $var with the shortest matching pattern deleted.

In this case the pattern is /*, which matches at the end of /path/to/file.ext (it matches /file.ext). The result then is just the dirname /path/to as the matched pattern gets deleted.

10. Make a copy of a file quickly

Let's say you wish to copy the file at /path/to/file to /path/to/file_copy. Normally you'd write:

$ cp /path/to/file /path/to/file_copy

However you can do it much quicker by using the brace expansion {...}:

$ cp /path/to/file{,_copy}

Brace expansion is a mechanism by which arbitrary strings can be generated. In this particular case /path/to/file{,_copy} generates the string /path/to/file /path/to/file_copy, and the whole command becomes cp /path/to/file /path/to/file_copy.

Similarly you can move a file quickly:

$ mv /path/to/file{,_old}

This expands to mv /path/to/file /path/to/file_old.

--------------------------------------------------------------------------------
/part2.txt:
--------------------------------------------------------------------------------
Part II: Working With Strings

1. Generate the alphabet from a-z

$ echo {a..z}

This one-liner uses brace expansion. Brace expansion is a mechanism for generating arbitrary strings. This one-liner uses a sequence expression of the form {x..y}, where x and y are single characters. The sequence expression expands to each character lexicographically between x and y, inclusive.

If you run it, you get all the letters from a-z:

$ echo {a..z}
a b c d e f g h i j k l m n o p q r s t u v w x y z

2. Generate the alphabet from a-z without spaces between characters

$ printf "%c" {a..z}

This is an awesome bash trick that 99.99% of bash users don't know about. If you supply a list of items to printf, it actually applies the format in a loop until the list is empty! printf as a loop! There is nothing more awesome than that!

In this one-liner the printf format is "%c", which means "a character", and the arguments are all the letters from a-z separated by spaces. So what printf does is iterate over the list, outputting character after character, until it runs out of letters.

Here is the output if you run it:

abcdefghijklmnopqrstuvwxyz

This output is without a terminating newline because the format string was "%c" and it doesn't include \n. To have it newline terminated, just add $'\n' to the list of chars to print:

$ printf "%c" {a..z} $'\n'

$'\n' is the bash idiomatic way to represent a newline character. printf then just prints chars a to z, and the newline character.

Another way to add a trailing newline character is to echo the output of printf:

$ echo $(printf "%c" {a..z})

This one-liner uses command substitution, which runs printf "%c" {a..z} and replaces the command with its output. Then echo prints this output and adds a newline itself.

Want to output all letters in a column instead? Add a newline after each character!

$ printf "%c\n" {a..z}

Output:

a
b
...
z

Want to put the output from printf in a variable quickly? Use the -v argument:

$ printf -v alphabet "%c" {a..z}

This puts abcdefghijklmnopqrstuvwxyz in the $alphabet variable.

Similarly you can generate a list of numbers. Let's say from 1 to 100:

$ echo {1..100}

Output:

1 2 3 ... 100

Alternatively, if you forget this method, you can use the external seq utility to generate a sequence of numbers:

$ seq 1 100

3. Pad numbers 0 to 9 with a leading zero

$ printf "%02d " {0..9}

Here we use the looping abilities of printf again. This time the format is "%02d ", which means "zero pad the integer up to two positions", and the items to loop through are the numbers 0-9, generated by the brace expansion (as explained in the previous one-liner).

Output:

00 01 02 03 04 05 06 07 08 09

If you use bash 4, you can do the same with the new feature of brace expansion:

$ echo {00..09}

Older bashes don't have this feature.

4. Produce 30 English words

$ echo {w,t,}h{e{n{,ce{,forth}},re{,in,fore,with{,al}}},ither,at}

This is an abuse of brace expansion. Just look at what this produces:

when whence whenceforth where wherein wherefore wherewith wherewithal whither what then thence thenceforth there therein therefore therewith therewithal thither that hen hence henceforth here herein herefore herewith herewithal hither hat

Crazy awesome!

Here is how it works - you can produce permutations of words/symbols with brace expansion. For example, if you do this,

$ echo {a,b,c}{1,2,3}

it will produce the result a1 a2 a3 b1 b2 b3 c1 c2 c3. It takes the first a, and combines it with {1,2,3}, producing a1 a2 a3. Then it takes b and combines it with {1,2,3}, and then it does the same for c.

So this one-liner is just a smart combination of braces that when expanded produce all these English words!

5. Produce 10 copies of the same string

$ echo foo{,,,,,,,,,}

This one-liner uses the brace expansion again. What happens here is foo gets combined with 10 empty strings, so the output is 10 copies of foo:

foo foo foo foo foo foo foo foo foo foo

6. Join two strings

$ echo "$x$y"

This one-liner simply concatenates two variables together. If the variable x contains foo and y contains bar then the result is foobar.

Notice that "$x$y" was quoted. If we didn't quote it, echo would interpret $x$y as regular arguments and would first try to parse them to see if they contain command line switches. So if $x contains something beginning with -, it would be a command line argument rather than an argument to echo:

x=-n
y=" foo"
echo $x$y

Output:

foo

Versus the correct way:

x=-n
y=" foo"
echo "$x$y"

Output:

-n foo

If you need to put the two joined strings in a variable, you can omit the quotes:

var=$x$y

7. Split a string on a given character

Let's say you have a string foo-bar-baz in the variable $str and you wish to split it on the dash and iterate over it. You can simply combine IFS with read to do it:

$ IFS=- read -r x y z <<< "$str"

Here we use the read command, which reads data from stdin and puts the data in the x y z variables. We set IFS to - as this variable is used for field splitting. If multiple variable names are specified to read, IFS is used to split the line of input so that each variable gets a single field of the input.

In this one-liner $x gets foo, $y gets bar, $z gets baz.

Also notice the use of the <<< operator. This is the here-string operator that allows strings to be passed to the stdin of commands easily. In this case the string $str is passed as stdin to read.

You can also put the split fields in an array:

$ IFS=- read -ra parts <<< "foo-bar-baz"

The -a argument to read makes it put the split words in the given array. In this case the array is parts. You can access array elements through ${parts[0]}, ${parts[1]}, and ${parts[2]}.
Or just access all of them through ${parts[@]}.

8. Process a string character by character

$ while IFS= read -rn1 c; do
    # do something with $c
done <<< "$str"

Here we use the -n1 argument to the read command to make it read the input one character at a time. Similarly we can use -n2 to read two chars at a time, etc.

9. Replace "foo" with "bar" in a string

$ echo ${str/foo/bar}

This one-liner uses parameter expansion of the form ${var/find/replace}. It finds the string find in var and replaces it with replace. Really simple!

To replace all occurrences of "foo" with "bar", use the ${var//find/replace} form:

$ echo ${str//foo/bar}

10. Check if a string matches a pattern

$ if [[ $file = *.zip ]]; then
    # do something
fi

Here the one-liner does something if $file matches *.zip. This is simple glob pattern matching, and you can use the symbols * ? [...] to do the matching. The * matches any string, ? matches a single char, and [...] matches any character in ... or a character class.

Here is another example that matches if answer is Y or y:

$ if [[ $answer = [Yy]* ]]; then
    # do something
fi

11. Check if a string matches a regular expression

$ if [[ $str =~ [0-9]+\.[0-9]+ ]]; then
    # do something
fi

This one-liner tests if the string $str matches the regex [0-9]+\.[0-9]+, which means match a number followed by a dot followed by a number. The format for regular expressions is described in man 3 regex.

12. Find the length of the string

$ echo ${#str}

Here we use the parameter expansion ${#str}, which returns the length of the string in the variable str. Really simple.

13. Extract a substring from a string

$ str="hello world"
$ echo ${str:6}

This one-liner extracts world from hello world.
It uses the substring expansion. In general substring expansion looks like ${var:offset:length}, and it extracts length characters from var starting at index offset. In our one-liner we omit the length, which makes it extract all characters starting at offset 6.

Here is another example:

$ echo ${str:7:2}

Output:

or

14. Uppercase a string

$ declare -u var
$ var="foo bar"

The declare command in bash declares variables and/or gives them attributes. In this case we give the variable var the attribute -u, which upper-cases its content whenever it gets assigned something. Now if you echo it, the contents will be upper-cased:

$ echo $var
FOO BAR

Note that the -u argument was introduced in bash 4. Similarly you can use another feature of bash 4, which is the ${var^^} parameter expansion that upper-cases a string in var:

$ str="zoo raw"
$ echo ${str^^}

Output:

ZOO RAW

15. Lowercase a string

$ declare -l var
$ var="FOO BAR"

Similar to the previous one-liner, the -l argument to declare sets the lower-case attribute on var, which makes it always be lower-case:

$ echo $var
foo bar

The -l argument is also available only in bash 4 and later.

Another way to lowercase a string is to use the ${var,,} parameter expansion:

$ str="ZOO RAW"
$ echo ${str,,}

Output:

zoo raw

--------------------------------------------------------------------------------
/part4.txt:
--------------------------------------------------------------------------------
Part IV: Working with history

1. Erase all shell history

$ rm ~/.bash_history

Bash keeps the shell history in a hidden file called .bash_history. This file is located in your home directory. To get rid of the history, just delete it.
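As a hedged aside: rm removes the file outright, but you can get the same clean slate by truncating it with the > redirection from one-liner 1 of Part I, and in a real interactive session you would also run the history -c builtin to clear the current shell's in-memory history list. A small self-contained sketch, where a temp file and a made-up entry stand in for ~/.bash_history:

```shell
# Stand-in for ~/.bash_history so the demo doesn't touch real history.
histfile=$(mktemp)
echo 'some embarrassing command' > "$histfile"   # hypothetical entry

> "$histfile"        # truncate: the file stays in place, the contents go

# Arithmetic expansion trims any padding wc prints around the count.
size=$(( $(wc -c < "$histfile") ))
echo "$size"         # prints: 0

# In an interactive shell, also clear the in-memory list:
#   history -c
rm -f "$histfile"
```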
Note that if you logout after erasing the shell history, this last rm ~/.bash_history command will be logged. If you want to hide that you erased the shell history, see the next one-liner.

2. Stop logging history for this session

$ unset HISTFILE

The HISTFILE special bash variable points to the file where the shell history should be saved. If you unset it, bash won't save the history.

Alternatively you can point it to /dev/null:

$ HISTFILE=/dev/null

3. Don't log the current command to history

Just start the command with an extra space:

$  command

If the command starts with an extra space, it's not logged to history.

Note that this only works if the HISTIGNORE variable is properly configured (or if HISTCONTROL contains ignorespace). The HISTIGNORE variable contains :-separated patterns of command prefixes that shouldn't be logged.

For example, to ignore spaces, set it to this:

HISTIGNORE="[ ]*"

My HISTIGNORE looks like this:

HISTIGNORE="&:[ ]*"

The ampersand has a special meaning - don't log repeated commands.

4. Change the file where bash logs command history

$ HISTFILE=~/docs/shell_history.txt

Here we simply change the HISTFILE special bash variable and point it to ~/docs/shell_history.txt. From now on bash will save the command history in that file.

5. Add timestamps to history log

$ HISTTIMEFORMAT="%Y-%m-%d %H:%M:%S"

If you set the HISTTIMEFORMAT special bash variable to a valid date format (see man 3 strftime) then bash will log the timestamps to the history log. It will also display them when you call the history command (see the next one-liner).

6. Show the history

$ history

The history command displays the history list with line numbers. If HISTTIMEFORMAT is set, it also displays the timestamps.

7. Show the last 50 commands from the history

$ history 50

If you specify a numeric argument, such as 50, to history, it prints the last 50 commands from the history.

8. Show the top 10 most used commands from the bash history

$ history |
    sed 's/^ \+//;s/  / /' |
    cut -d' ' -f2- |
    awk '{ count[$0]++ } END { for (i in count) print count[i], i }' |
    sort -rn |
    head -10

This one-liner combines bash with sed, cut, awk, sort and head. The perfect combination. Let's walk through this to understand what happens. Let's say the output of history is:

$ history
    1  rm .bash_history
    2  dmesg
    3  su -
    4  man cryptsetup
    5  dmesg

First we use the sed command to remove the leading spaces and convert the double space after the history command number to a single space:

$ history | sed 's/^ \+//;s/  / /'
1 rm .bash_history
2 dmesg
3 su -
4 man cryptsetup
5 dmesg

Next we use cut to remove the first column (the history numbers):

$ history |
    sed 's/^ \+//;s/  / /' |
    cut -d' ' -f2-

rm .bash_history
dmesg
su -
man cryptsetup
dmesg

Next we use awk to record how many times each command has been seen:

$ history |
    sed 's/^ \+//;s/  / /' |
    cut -d' ' -f2- |
    awk '{ count[$0]++ } END { for (i in count) print count[i], i }'

1 rm .bash_history
2 dmesg
1 su -
1 man cryptsetup

Then we sort the output numerically and reverse it:

$ history |
    sed 's/^ \+//;s/  / /' |
    cut -d' ' -f2- |
    awk '{ count[$0]++ } END { for (i in count) print count[i], i }' |
    sort -rn

2 dmesg
1 rm .bash_history
1 su -
1 man cryptsetup

Finally we take the first 10 lines, which correspond to the 10 most frequently used commands:

$ history |
    sed 's/^ \+//;s/  / /' |
    cut -d' ' -f2- |
    awk '{ count[$0]++ } END { for (i in count) print count[i], i }' |
    sort -rn |
    head -10

Here is what my 10 most frequently used commands look like:

2172 ls
1610 gs
252 cd ..
215 gp
213 ls -las
197 cd projects
155 gpu
151 cd
119 gl
119 cd tests/

Here gs is an alias for git status, gp is git push, gpu is git pull, and gl is git log.

9. Execute the previous command quickly

$ !!

That's right. Type two bangs. The first bang starts history substitution, and the second one refers to the last command. Here is an example:

$ echo foo
foo
$ !!
foo

Here the echo foo command was repeated.

It's especially useful if you wanted to execute a command through sudo but forgot. Then all you've to do is run:

$ rm /var/log/something
rm: cannot remove `/var/log/something': Permission denied
$
$ sudo !!    # executes `sudo rm /var/log/something`

10. Execute the most recent command starting with the given string

$ !foo

The bang starts history substitution, and foo refers to the most recent command starting with foo.

For example,

$ echo foo
foo
$ ls /
/bin /boot /home /dev /proc /root /tmp
$ awk -F: '{print $2}' /etc/passwd
...
$ !ls
/bin /boot /home /dev /proc /root /tmp

Here we executed the commands echo, ls, awk, and then used !ls to refer to the ls / command.

11. Open the previous command you executed in a text editor

$ fc

Fc opens the previous command in a text editor. It's useful if you've a longer, more complex command and want to edit it.

For example, let's say you've written a one-liner that has an error, such as:

$ for wav in wav/*; do mp3=$(sed 's/\.wav/\.mp3/' <<< "$wav"); ffmpeg -i "$wav" "$m3p"; done

And you can't see what's going on because you've to scroll around. Then you can simply type fc to load it in your text editor, and quickly find that you mistyped mp3 at the end.
--------------------------------------------------------------------------------