What good is an iterator?

This post is inspired by the following Stack Overflow question:

I want to output all the combinations of two letters, but I want the combinations between the same letters to be shown first. Basically, I want to get something like this:

If I have

@list = ("A", "B", "C")

I want my output to be:

AA  -
BB  -
CC  -
AB
AC
BA
BC
CA
CB

I have since deleted my answer in frustration, but I think my original answer and subsequent revisions do highlight important trade-offs whose exploration might be useful.

Let's first note that the OP did not put much thought into the question. In addition, I was a little surprised that such a simple question had elicited some dismissive responses.

So, my first response involved two things: 1) a black-box way of generating all the 2-tuples involved; 2) sorting the diagonal elements of that set before the other elements.

If you don't want to spend time checking the correctness of your interpretation of an algorithm, you can use another implementation you trust. So, I took Algorithm::Combinatorics::variations_with_repetition to generate the tuples. For the sorting, I used a simple comparison function:

use Algorithm::Combinatorics qw( variations_with_repetition );

my $all_same_pat = qr/\A(.)\1+\z/s;

my $set = ['A', 'B', 'C', 'D', 'E'];
my @seqs = sort {
    my $fa = ($a =~ $all_same_pat);
    my $fb = ($b =~ $all_same_pat);

    # both or neither all-same: plain lexicographic order
    if (($fa and $fb) or not($fa or $fb)) {
        return $a cmp $b;
    }
    # exactly one is all-same: sort it first
    return [1, -1]->[$fa];
}
map join('', @$_), variations_with_repetition($set, 2);

I do not recommend that you use this. I would not use this. But, if someone who just wants someone else to solve their immediate task lifts this, well, that's not my business ;-)

On the other hand, a person who is willing to think and explore might in this way be introduced to the great Algorithm::Combinatorics, or might be intrigued by the logical xor in there, or even try to find out what [1, -1]->[$fa] does.
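For the curious, here is a minimal sketch of what that arrow expression does. A successful match in boolean context yields 1 and a failed match yields an empty string; the sketch normalizes that to 0/1 for clarity, and `[1, -1]->[$fa]` then indexes into an anonymous two-element array:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';

my $all_same_pat = qr/\A(.)\1+\z/s;

for my $pair (['AA', 'AB'], ['AB', 'AA']) {
    my ($a, $b) = @$pair;
    # normalize the match result to 0 or 1
    my $fa = ($a =~ $all_same_pat) ? 1 : 0;

    # index 0 -> 1  ($a is not uniform, so it sorts after $b)
    # index 1 -> -1 ($a is uniform, so it sorts before $b)
    say "$a vs $b: ", [1, -1]->[$fa];
}
# prints:
# AA vs AB: -1
# AB vs AA: 1
```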

Now, tucked below this was the following:

Of course, if you just want to output pairs, you could also do it with much less memory overhead:

#!/usr/bin/env perl

use strict;
use warnings;
use feature 'say';

diagonal_first(['A', 'B']);
diagonal_first(['A', 'B', 'C']);
diagonal_first(['A', 'B', 'C', 'D']);

sub diagonal_first {
    my $set = shift;
    my $k = $#$set;

    say "$_$_" for @$set;
    for my $i (0 .. $k) {
        for my $j (0 .. $k) {
            next if $i == $j;
            say "$set->[$i]$set->[$j]";
        }
    }
    return;
}

The OP accepted my answer. At about the same time, I thought providing an iterator based solution might be cute as well. So, I updated my answer with:

#!/usr/bin/env perl

use strict;
use warnings;
use feature 'say';

my $set = ['a' .. 'z'];
my $diagonal = make_diagonal_iterator($set);
my $off_diagonal = make_off_diagonal_iterator($set);

while (defined(my $x = $diagonal->())) {
    say "@$x";
}

while (defined(my $x = $off_diagonal->())) {
    say "@$x";
}

sub make_diagonal_iterator {
    my $set = shift;
    my $k = @$set;
    my $i = 0;
    return sub {
        return if $i >= $k;
        my $x = $set->[$i];
        $i += 1;
        [$x, $x];
    };
}

sub make_off_diagonal_iterator {
    my $set = shift;
    my $k = @$set;
    my ($i, $j) = (0, 0);
    return sub {
        if ($j == $i) {
            $j += 1;
        }
        if ($j >= $k) {
            $j = 0;
            $i += 1;
        }
        return if $i >= $k;
        my ($x, $y) = @$set[$i, $j];
        $j += 1;
        [$x, $y];
    };
}

Let's agree at the outset that if you just want to print all the tuples, or store them in an array, the iterator based solution is going to be much slower. But, that does not make an iterator based solution useless.

For one thing, if you do want to do some real processing for each tuple rather than just print it, you may not want to have to wait until all the tuples have been generated before you start processing. You may not want to dedicate memory to keeping all the tuples in an array. You may want to be able to wrap your tuple generator in an object and do other things besides. Yes, of course, if you take the OP's original question at face value, it would be hard to beat something like diagonal_first. But, I am more interested in exploring how you can generalize these approaches.

So, let's see if we can improve on the iterator based solution. Please bear in mind that I had originally planned to incorporate these improvements into my answer on Stack Overflow, but got sidetracked by another poster inserting various other bits into my post.

First, let's just admit that having two generators, and then two while loops is just ugly. Second, and this is related to the first, iterator exhaustion is not handled elegantly.

Before proceeding any further, I should note that I believe I first saw these techniques in Higher Order Perl which is an incredibly valuable book. So, while all errors are mine and mine only, anything and everything you like in this post is to MJD's credit.

One way to solve both problems is to think of an iterator generator as being given a list of sequence generators. When each sequence is exhausted, the iterator moves on to the following sequence. When all the sequences are exhausted, the iterator switches to the trivial sequence generator:

sub { [] }

Implicit in the formulation is the assumption that each element of a sequence is returned in an array ref so we can distinguish between undefined values that are part of a sequence as opposed to no element being returned due to sequence exhaustion.
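A minimal sketch of why the wrapping matters: a sequence that legitimately contains undef would otherwise be indistinguishable from an exhausted one. (The toy @data sequence below is illustrative, not from the original post.)

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';

# A sequence whose second element is a legitimate undef.
my @data = ('a', undef, 'b');

my $i = 0;
my $it = sub {
    return [] if $i >= @data;    # exhaustion: the empty tuple
    return [ $data[$i++] ];      # an element, possibly undef, wrapped
};

while ( my $x = $it->() ) {
    last unless @$x;             # only the empty tuple ends iteration
    say defined $x->[0] ? $x->[0] : '(undef)';
}
# prints:
# a
# (undef)
# b
```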

Without loss of generality, the iterators below will actually generate tuples of array indexes. The only information they require is the cardinality of the underlying set. Of course, if the underlying set is infinite, then we never run out of diagonal elements, so the task is not that interesting.

We first define the empty sequence generator: sub empty_sequence { [] }.

With that, we can define an iterator over elements of a sequence of sequences:

sub make_iterator {
    my $seqs = shift;
    @$seqs or return \&empty_sequence;

    my $gen = shift @$seqs;
    sub {
        my $x = $gen->();
        @$x and return $x;

        $gen = shift( @$seqs ) || \&empty_sequence;
        $x = $gen->();
        $x;
    };
}

We call this with a reference to an array of sequence generators. We assume each generator returns the empty tuple when it is exhausted. Once all the sequences are exhausted, we switch to the empty sequence generator. If the iterating code handles the empty tuple correctly, the first empty tuple signals that the iterator itself is exhausted and iteration stops. Otherwise, the iterator just keeps generating empty tuples.
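make_iterator will splice together any sequences that follow this convention, not just the diagonal ones. As a quick illustration (the counter generators below are toys of my own, not from the original answer):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';

sub empty_sequence { [] }

sub make_iterator {
    my $seqs = shift;
    @$seqs or return \&empty_sequence;

    my $gen = shift @$seqs;
    sub {
        my $x = $gen->();
        @$x and return $x;

        $gen = shift( @$seqs ) || \&empty_sequence;
        $x = $gen->();
        $x;
    };
}

# A toy sequence generator: counts within a range,
# returning the empty tuple once exhausted.
sub counter {
    my ($from, $to) = @_;
    my $i = $from;
    sub { $i <= $to ? [ $i++ ] : [] };
}

my $it = make_iterator([ counter(1, 3), counter(10, 11) ]);
while ( my $x = $it->() ) {
    last unless @$x;
    say "@$x";
}
# prints 1, 2, 3, 10, 11 -- then stops
```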

Obviously, for small enough sets, this is slower than just pushing tuples onto an array. But the memory requirements of the array approach grow pretty fast: a set of cardinality k yields k² tuples. And, even for small n, one might prefer the flexibility of being able to change the way tuples are generated.

Iterator based implementation

To use the iterator approach, we just need to define generators for the diagonal and off-diagonal sequences:

#!/usr/bin/env perl
# diagfirstit.pl
use strict;
use warnings;

run(@ARGV);

sub run {
    my $k = shift;
    my $it = make_iterator(
        [ diagonal($k), off_diagonal($k) ],
    );

    while (my $tuple = $it->()) {
        @$tuple or last;
    }
    return;
}

sub diagonal {
    my $k = shift;
    my $i = 0;
    sub {
        return [] unless $i < $k;
        my $x = [$i, $i];
        $i += 1;
        $x;
    };
}

sub off_diagonal {
    my $k = shift;
    my ($i, $j) = (0, 1);
    sub {
        if ($j == $i) {
            $j += 1;
        }
        if ($j >= $k) {
            $j = 0;
            $i += 1;
        }
        return [] unless $i < $k;
        my $x = [$i, $j];
        $j += 1;
        $x;
    };
}

sub empty_sequence { [] };

sub make_iterator {
    my $seqs = shift;
    @$seqs
        or return \&empty_sequence;

    my $gen = shift @$seqs;
    sub {
        my $x = $gen->();
        @$x and return $x;

        $gen = shift( @$seqs ) || \&empty_sequence;
        $x = $gen->();
        $x;
    };
}

Loop based approach:

#!/usr/bin/env perl
# diagfirstloop.pl
use strict;
use warnings;

run(@ARGV);

sub run {
    my $k = shift;
    my @x = diagonal_first($k);
}


sub diagonal_first {
    my $k = shift;
    my @rv;

    push @rv, [$_, $_] for 0 .. ($k - 1);

    for my $i (0 .. ($k - 1)) {
        for my $j (0 .. ($k - 1)) {
            next if $i == $j;
            push @rv, [$i, $j];
        }
    }
    return @rv;
}

Comparison of iterator and loop based approaches:

Not only is the loop based approach simpler, it is also faster for small n:

$ time ./diagfirstloop.pl 1000
./diagfirstloop.pl 1000  1.09s user 0.22s system 99% cpu 1.310 total
$ time ./diagfirstit.pl 1000
./diagfirstit.pl 1000  1.70s user 0.01s system 99% cpu 1.719 total

The thing is, with $k = 1000, the memory footprint of diagfirstloop.pl is about 200 MB on my MacBook, whereas diagfirstit.pl stays under a megabyte.

With $k = 5000, the memory usage of diagfirstloop.pl quickly shoots up to about 5GB.

Sure, diagfirstit.pl takes a long time when $k = 10000, but its memory footprint never changes:

$ time ./diagfirstit.pl 10000
./diagfirstit.pl 10000  161.50s user 0.09s system 99% cpu 2:41.95 total

On the other hand, when I tested diagfirstloop.pl with parameter value of 10,000, memory usage quickly shot up to over 10 GB and something else happened:

$ time ./diagfirstloop.pl 10000
zsh: segmentation fault  ./diagfirstloop.pl 10000
./diagfirstloop.pl 10000  91.03s user 89.13s system 81% cpu 3:40.45 total

So, while in trivial cases pushing everything onto an array and returning that might be feasible, in the general case an iterator based approach at least gives your program a chance to run to completion. I did not try to calculate how large a swap file would have been created on my MacBook if I had allowed it.

If the purpose of generating these tuples is to do something more meaningful with them than just printing, wouldn't you want this incidental task to take as little memory as possible, so the real task has more room to play with? What if each tuple is an input to a simulation, and it matters whether each core can work in parallel on a dataset that fits in memory?
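The consuming loop for such a workload is the same shape as the printing loop. As a sketch (process_tuple is a hypothetical stand-in for whatever the real per-tuple work is; here it just accumulates a sum), the loop touches one tuple at a time and never holds the full set:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';

# Hypothetical per-tuple work: here, just sum the indexes.
my $sum = 0;
sub process_tuple {
    my $tuple = shift;
    $sum += $tuple->[0] + $tuple->[1];
}

# diagonal() as defined earlier in this post
sub diagonal {
    my $k = shift;
    my $i = 0;
    sub {
        return [] unless $i < $k;
        my $x = [$i, $i];
        $i += 1;
        $x;
    };
}

my $it = diagonal(1_000_000);
while ( my $tuple = $it->() ) {
    last unless @$tuple;
    process_tuple($tuple);   # constant memory, regardless of $k
}
say $sum;
```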

That's why it's important to process input in well-defined chunks, and that's why sometimes there is nothing wrong with choosing the seemingly slower iterator based approach.