
To continue the trend set by James Clark... Here is my choice for useful command-line tool of the day:

fold

fold is a simple little command-line util that lets you fold lines of text at a particular width. E.g. if you have some long lines of text, say out of HTML or some other document format that only maintains paragraph breaks, and you want to show them on a console screen, you can do this:

fold -s -w 70 long-lines.txt | less

This command will fold the text from long-lines.txt at 70 characters (or the nearest preceding space). If you leave off -s, the lines will get cut in the middle of words. Most consoles are 80 columns wide, and fold defaults to a width (-w) of 80 columns.

This can also be useful in scripts where you need to display a paragraph of text containing interpolated variables whose width you don't know beforehand. E.g.:

echo -n "Enter some text: "
read var
para="You said, '$var'. That's really interesting. But \
do you know the answer to the ultimate question of life, \
the universe, and everything?"
echo "$para" | fold -s -w 78 -

Regards

--
Oliver Jones » Roving Code Warrior
oliver(a)deeperdesign.com » +64 (21) 41 2238 » www.deeperdesign.com
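To make the -s difference concrete, here is a small sketch you can paste into a shell (the sample sentence and the width of 12 are made up for illustration; exact behaviour may differ slightly between fold implementations):

# Without -s, fold chops at exactly 12 columns, mid-word:
printf 'The quick brown fox jumps over the lazy dog\n' | fold -w 12
#   The quick br
#   own fox jump
#   s over the l
#   azy dog

# With -s, it breaks after the last blank that fits (trailing blanks omitted here):
printf 'The quick brown fox jumps over the lazy dog\n' | fold -s -w 12
#   The quick
#   brown fox
#   jumps over
#   the lazy dog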

Oliver Jones wrote:
To continue the trend set by James Clark... Here is my choice for useful command-line tool of the day.
fold
Ooh, my favourite is "join(1)"[1]. With a bit of join(1), cut(1), and grep(1) you can do basic RDBMS queries on text files. The thing to remember about join is that it requires both input files to be sorted on the key.

An example join command:

w | awk '{print $1}' | sort | join -j 1 -t: -o 2.5 - <(getent passwd | sort)

To break this down:

w |
  List the currently logged-in users and pipe the output to:

awk '{print $1}' |
  awk prints the first column of each line (getting rid of the other random junk w emits) and pipes its output to:

sort |
  Both input files to join must be sorted, so we sort the input. Yay. This gets piped to the join.

The join also reads from a named pipe (that's what the <() is about) consisting of:

getent passwd | sort
  List everyone in the passwd database and sort it (everyone is in LDAP here, so the usual sort /etc/passwd doesn't work).

The join command then joins on field 1 of both files (-j 1), treating fields as separated by :'s (-t :), outputting column 5 from file 2 (-o 2.5), and reading from standard input (our post-processed w pipeline) and from the passwd-parsing pipeline.

So the net result of this command is that it lists the gecos information (real name etc.) of all the users currently logged in. A bit long-winded, but useful on multiuser machines where you don't directly know who that random person running ./memhog is.

----
[1]: Which nobody uses.
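To make the "basic RDBMS queries on text files" idea concrete, here is a minimal sketch using two made-up colon-separated files (names.txt and shells.txt, and their contents, are hypothetical):

# names.txt contains lines like:   alice:Alice Example
#                                  bob:Bob Demo
# shells.txt contains lines like:  alice:/bin/bash
#                                  bob:/bin/zsh

# join needs both inputs sorted on the key (field 1):
sort -t: -k1,1 names.txt  > names.sorted
sort -t: -k1,1 shells.txt > shells.sorted

# Join on field 1, printing the real name from file 1 and the shell from file 2:
join -t: -j 1 -o 1.2,2.2 names.sorted shells.sorted
# -> Alice Example:/bin/bash
#    Bob Demo:/bin/zsh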

And if you put that little one-liner in a script or alias, I'm sure it can be very helpful. Incidentally, I stumbled across join while looking for neat utils. I figured fold was easier for newbies to digest. But yes, join is neat.
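For anyone who wants to do exactly that, here is a rough sketch of Perry's pipeline wrapped as a bash function (the name whothere is made up; w -h just suppresses w's header line, and the <(...) process substitution needs bash or a similarly capable shell):

whothere () {
    # List logged-in users, join against the passwd database on the login
    # name, and print the gecos (real name) field. -u drops duplicate logins.
    w -h | awk '{print $1}' | sort -u |
        join -j 1 -t: -o 2.5 - <(getent passwd | sort)
}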
Ooh, my favourite is "join(1)"[1] with a bit of join(1), cut(1), grep(1) you can do basic rdbms queries on text files. The thing to remember about join is it requires both input files to be sorted on the key.
an example join command:
w | awk '{print $1}' | sort | join -j 1 -t: -o 2.5 - <(getent passwd | sort)
--
Oliver Jones » Roving Code Warrior
oliver(a)deeperdesign.com » +64 (21) 41 2238 » www.deeperdesign.com