
Sat, Mar 17, 2012

Adding complexity to reduce complexity

• Post categories: Omni, FOSS, Technology, My Life, Programming, Helpful

When I started working at $current_job, I was pleased to find that vim was the standard editor. It was frequently invoked with an odd syntax, though. If one wanted the file ./MyApp/Model/Foo.pm then the command to use was vai Foo.pm

This puzzled me for a while. Then there were people who would talk about things like "l/Foo" and "r/Foo". What WERE they on about?

I finally worked it out. vai was actually a bash function that would find all files matching the partial name it was given and open them in vim. The vai Foo.pm command was equivalent to:

vi `find . -path '*Foo.pm'`

And when you're working with files spread across multiple subdirectories, this functionality is a godsend. So it was very heavily used.
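For illustration, the original helper could have been a bash function along these lines (a sketch under my own naming — vai_find is a hypothetical split-out of the matching step, not the original code):

```shell
# Hypothetical reconstruction of the original one-liner.
# vai_find lists every file under . whose path ends with the
# given fragment; vai hands the matches to vim.
vai_find() {
    find . -path "*$1"
}
vai() {
    vi $(vai_find "$1")
}
```

Called as vai Foo.pm, this opens every Foo.pm in the tree, wherever it lives.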

One downside, though, was that there WERE files we didn't want included. Such as (in the days before we moved to git) anything in the .svn directory.

So the command got updated a bit, to:

vi `find . -path '*Foo.pm' | grep -v '\.svn'`

And that was ok. But more and more exclusions were getting applied, and we were getting slowed down by the sheer number of files it was finding that we simply weren't interested in.

So I decided it was time to make it more robust. A simple bit of perl scripting would make it faster and tidier.

My first attempt was simple: the script would still invoke find via backticks, but it would keep an array of directories to exclude and append ! -path $exclude for each one, so find itself would drop the unwanted directories instead of returning matches that then had to be filtered out with grep - faster and nicer to maintain.
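Sketched as a shell command, the generated find invocation looked something like this (exclusion names match the post; find_matches is my illustrative wrapper). Strictly speaking, ! -path only filters the output - find still descends into the excluded directories, and pruning them entirely would need -prune - but it does away with the extra grep processes:

```shell
# Build the match list with exclusions handled by find itself,
# rather than piping the output through grep -v afterwards.
find_matches() {
    find . ! -path '*/.svn/*' ! -path '*/blib/*' ! -path '*/.git/*' -path "*$1"
}
```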

A win all around, then. Except... I have a habit of calling system commands instead of just doing what I want in pure perl. And I keep trying to find places to apply techniques I read about in all the books I have lying around at home, and Higher-Order Perl (HOP) has a whole lot of useful things to say about writing code to walk directories.

So I re-wrote the script I had to avoid using find: Instead, I would write my first ever recursive function. Go me! :)

The re-written script was substantially longer, and I was wondering if replacing a one-line command-line function with an 80-line perl script was maybe not what you might call progress.

But I comment my code heavily, so I checked how long the script was in actual lines of code. And

cat vai | grep -v '#' | grep -v -P '^$' | wc -l

told me the answer was 42.

Clearly, I was onto a winner here :o)

And the exclusion functionality was nicer to maintain and more efficient - the script should in theory be faster to use. And it would support using all kinds of perl regexes in filenames, which was potentially handy... So I mentioned it in the staff meeting and asked if anyone had any functionality they wanted added whilst I was at it.

And my boss (gbjk) made the simple-seeming suggestion of tab-completion.

This was, make no mistake, a brilliant idea. Instead of having a separate command, as I'd been planning, to list the files that the vai command would open, it would just work with normal bash-style tab completion. Brilliant!

Except.. my fondness for the power and flexibility of bash is matched only by my hatred of having to write bash scripts. Shell scripts of any type I find painful to read or write.

But it was such a good idea..

So I hit Google and started tracking down how to make dynamic tab-completion work. And then figuring out how to make the generic usage work for my specific case, which was rather harder.
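For anyone else hunting: the generic shape is to register a function with complete -F, have it read the word being completed from COMP_WORDS, and put the candidates into COMPREPLY. A minimal standalone sketch (the command name greet and its word list are made up for the example):

```shell
# Completion function: bash calls this when you hit Tab after 'greet'.
_greet() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # compgen filters the word list down to entries matching what's typed
    COMPREPLY=( $(compgen -W "hello hi hey" -- "$cur") )
}
# Wire the function up to the command
complete -F _greet greet
```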

Eventually, I figured it out. What was once a simple three-line bash function has now become an 80-line perl script with a corresponding completion script in /etc/bash_completion.d, and the end result is indistinguishable to the end user.

You might say this isn't progress.

But on the other hand.. it's more efficient. It's more powerful. It's easier to maintain. And it's given me some practice in applying techniques I'd only read about before.

All in all, I consider it a win.

And if you ever find yourself with a group of files, such as:


and you wish you could edit the Foo controller without changing directory or giving the full path; or open all the Foo.pm files easily; or open all the controller files in one go.. you too might like to have the scripts that will allow you to do so using the commands

vai r/Foo.pm
vai Foo.pm
vai Controller.*

So here they are:


The tab completion script - systems vary, but this should be saved to something like /etc/bash_completion.d/vai
_vai() {
    local cur names

    cur=${COMP_WORDS[COMP_CWORD]}
    names=$(for x in `vai ${cur} 1`; do echo ${x}; done)
    COMPREPLY=( ${names} ${cur} )
    return 0
}
complete -F _vai vai

And then the actual vai script, to be placed in any directory in your $PATH - maybe /usr/bin ?


#!/usr/bin/perl

use strict;
use warnings;

use v5.10;

# Command to open all files within your current directory and subdirectories
# in vim that match a given name. e.g.
# vai file.txt
# will open ./file.txt, ./directory/file.txt, ./sub/directory/file.txt
# excluding any directories that are specified in the @exclude array

# Get the filename from passed-in argument
my $path = $ARGV[0];
# Also check if we want to open the files or just get a preview list
my $preview = $ARGV[1];
die "Must supply a filename!\n" unless $path;

# Specify the directories to skip
# Maybe use config file(s) for this instead?
my @exclude = (qw/.svn blib .git/);

# Get the list of files via recursive function
# Probably should try and avoid using a global files array, but I can't really be bothered
my @files;
dir_walk('.');

# Just show the files if we're using preview
if ($preview) {
    say join "\n", @files;
    exit;
}

# Stop if we're trying to open more than a sensible number of files
my $results = scalar @files;
unless ($results < 20) {
    die "Too many files ($results) - exiting.\n";
}

my $files = join ' ', @files;

# Open them with vim
system("vim $files");

# That's it. Just define the recursive subroutine we want now

# The following software is based on code that is Copyright 2005 by Elsevier Inc.
# You may use it under the terms of the license at http://perl.plover.com/hop/LICENSE.txt.

sub dir_walk {
    my ($top) = @_;
    my $DIR;

    # Is this a file we want?
    if ($top =~ m#$path$# && -f $top) {
        push @files, $top;
    }

    if (-d $top) {
        # Check if this is a directory we want to skip
        my $excluded;
        foreach my $skip (@exclude) {
            $excluded = 1 if $top =~ m#$skip$#;
        }

        # Skip excluded directories and symlinks
        unless ($excluded || -l $top) {
            my $file;
            unless (opendir $DIR, $top) {
                warn "Couldn't open directory $top: $!; skipping.\n";
                return;
            }
            while ($file = readdir $DIR) {
                next if $file eq '.' || $file eq '..';
                dir_walk("$top/$file");
            }
        }
    }
}
I'm considering adding one further enhancement - to make it work with config files instead of a hard-coded array. But I'm not sure there's any call for it as yet. So we'll see, I guess.


Comment from: LJ [Visitor]
vim ./**/*Foo.pm will do the equivalent in zsh, with tab completion. You can get the same thing in bash (v4 and above) by enabling the globstar: shopt -s globstar. No tab completion in bash though.

Still seems a bit bonkers though, surely you want to edit one file then maybe open another with :Se/:tabe/:split/whatever?
17/03/12 @ 22:00
Comment from: oneandoneis2 [Member] · http://geekblog.oneandoneis2.org/
"vi ./**/*Foo.pm" is a lot more typing than "vai Foo.pm", and I don't know of any way to make it exclude unwanted directories.

It's not about opening multiple files - tho that's something I use it for frequently - so much as opening one or more files without having to care (or know) about its full path.
17/03/12 @ 22:15
Comment from: Gabor Szabo [Visitor] Email · http://szabgab.com/

Could you, please, try to tell the blog engine to keep the code indented? Maybe adding <pre> tags around it?

I cannot use them in the comment so I could not try them. :(
18/03/12 @ 21:15
Comment from: Jakub Narębski [Visitor]
Why not use File::Find or equivalent?
18/03/12 @ 23:49
Comment from: oneandoneis2 [Member] · http://geekblog.oneandoneis2.org/
@Gabor - I'll try & get around to it - I currently have to choose between <code> tags which lose indentation, and <pre> tags that lose the ends of long lines. I know it's just a CSS tweak, but I never quite seem to find the time..

@Jakub - When I started I was planning on making it very simple.. and then it seemed like a good opportunity to apply stuff I'd read about but not really used yet.. it never really occurred to me to look for something on CPAN, TBH
19/03/12 @ 12:19
Comment from: Arun Prasaad [Visitor]
File::Find is a core module already included with Perl.

But why not use ack?


vim `ack -g Foo.pm`
19/03/12 @ 15:50
Comment from: Christopher Cashell [Visitor]
I've got to agree with @Jakub on this one. File::Find seems like a win. Or, if you were using the Unix/Linux 'find' command before, find2perl (included with the standard Perl distribution) will take a find compatible expression and turn it into Perl code utilizing File::Find.

Makes a great starter for any perl code that needs to walk a filesystem and find a set of files.
19/03/12 @ 16:08
Comment from: oneandoneis2 [Member] · http://geekblog.oneandoneis2.org/
ack would have meant calling system commands, which is what I was trying to get away from in the first place.

File::Find would have worked.. I don't really see what great advantage it would have given, tho, TBH. Maybe I'm missing something.
19/03/12 @ 16:15
Comment from: TDK [Visitor]
:help find - works with tab completion and ctrl-d.
24/03/12 @ 01:32
