Another look at stock market behavior around "change" presidential elections in the U.S.

Nothing discussed herein should be taken as investment advice or as a recommendation regarding any particular investment vehicle or course of action. All statements herein are statements of subjective opinion and for information and entertainment purposes only. Seek a duly licensed professional for investment advice.

In January, prompted by stories about selling the inauguration, I looked at stock market performance around change elections in the U.S. Following the MarketWatch article, I used the S&P 500 as a broad stock market indicator, and looked at its performance over the period 100 trading days before and after each election. Of course, that interval is rather arbitrary, but, then, so was the interval of 54 days before to 66 days after the election used in the original article: I tend to consider nice round arbitrary numbers less arbitrary than oddly specific arbitrary numbers ;-)

Due to a number of recent distractions, I completely missed the fact that it has now been 105 trading days since the 2016 presidential election. In my previous posts, I noted that, with the data available up to that point, the stock market reaction to Trump’s election looked most similar to that following Clinton’s election in 1992. But the future had not been written at that point, so we did not know how it would compare to the roughly 7.4% gain over the corresponding period following Clinton’s election. Below, I am going to go over some Perl and SQL to fetch and transform the S&P 500 data you can obtain from Yahoo! so you can do your own analysis, but, if you are curious, at the end of the 100th trading day following Trump’s election, the S&P 500 was up 10.31% relative to its level on election day, compared to the 7.35% gain over the same period following Clinton’s election.

In fact, among change elections (those in 1952, 1960, 1968, 1976, 1980, 1992, 2000, 2008, and 2016), the performance of the S&P 500 over the 100 trading days after the election was second only to its performance over a comparable period following Kennedy’s election: On April 4, 1961, the S&P 500 closed at about 66 points, compared to about 55 points around election day (markets were closed on November 8, 1960).

Once again, let me stress that the future is yet to be written. I don’t put any more stock in this kind of “analysis” than I put in Turkish coffee reading. Looking forward, there are plenty of reasons to be cautious: There is ample uncertainty due to the looming threats of war, terrorism, and political volatility. After all, the S&P 500 did not perform well over March, and today looks like it’s going to be another down day. Where we go from here will depend on what happens in the future, not on what happened in 1992 or 1952.

With that aside, let’s look at some code.

This time, I decided to leverage SQLite to produce the output tables I wanted instead of munging Perl arrays. I could have written the whole analysis using a mix of SQLite directives and SQL, but I dove into Perl first.

The first order of business was getting rid of the step of manually copying the URL for the relevant date ranges of S&P 500 data on Yahoo! Finance. So, I wrote the following simple function so I did not have to remember which single letter in the URL corresponded to which parameter:

# Assumes $start and $end contain YMD dates with no separators
# Yahoo! Finance expects zero-based month number

sub sp500_url ( $start, $end ) {
    const state $URL_TMPL => 'https://chart.finance.yahoo.com/table.csv?s=^GSPC&c=%d&a=%d&b=%d&f=%d&d=%d&e=%d&g=d&ignore=.csv';

    my @start = unpack 'A4 A2 A2', $start;
    my @end   = unpack 'A4 A2 A2', $end;

    --$_->[1] for \@start, \@end;    # make the month zero-based

    sprintf $URL_TMPL, @start, @end;
}

A few notes: I am using my favorite constant handling module Const::Fast, along with the state variables feature introduced in Perl 5.10 and the subroutine signatures feature introduced in Perl 5.20. In a sense, these uses are all gratuitous in that it is easy to write Perl without depending on features beyond what exists in 5.8, but why not take advantage of niceties when there are no pressing backward compatibility constraints?
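For example (the dates below are arbitrary, picked just for illustration), the call would produce the URL shown in the comment; note that the month parameters come out zero-based:

say sp500_url( '19500103', '20170410' );
# https://chart.finance.yahoo.com/table.csv?s=^GSPC&c=1950&a=0&b=3&f=2017&d=3&e=10&g=d&ignore=.csv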

Similarly, I wanted to make sure the filenames I used adhered to a simple convention, so, I wrote:

sub sp500_filename ( $start, $end, $ext ) {
    sprintf 'sp500-%s-%s.%s', $start, $end, $ext;
}

This way, I only have to change one thing if I want to name files differently.

To download the data, I used HTTP::Tiny:

sub download ( $url, $file ) {
    my $response = HTTP::Tiny->new( verify_SSL => 1 )->mirror( $url => $file );

    unless ( $response->{success} ) {
        croak "Failed to download from '$url' and save in '$file'";
    }
}

The main routine is rather straightforward. You can run the script with a start and end date for the daily S&P 500 data table you want to download from Yahoo! Finance. If those dates are not specified, the script defaults to downloading the entire series. I manually looked up the date of each presidential election on Wikipedia. Somehow, that was quicker than writing a script to do it for me.
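For example, assuming you saved the script as sp500.pl (the name is up to you), you could run it like this:

$ perl sp500.pl 19500103 20170410
$ perl sp500.pl                     # default: 1950-01-03 through today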

One complication has to do with the fact that markets were closed on election day in 1952, 1960, 1968, 1976, and 1980, and open on election day in 1992, 2000, 2008, and 2016, but I decided to ignore that since there is not a huge difference in the levels of the index on the days before and after the election in those years:

sqlite> select * from sp500 where dt between '1952-11-03' and '1952-11-05';
1952-11-03|24.6
1952-11-05|24.67

sqlite> select * from sp500 where dt between '1960-11-07' and '1960-11-09';
1960-11-07|55.110001
1960-11-09|55.349998

sqlite> select * from sp500 where dt between '1968-11-04' and '1968-11-06';
1968-11-04|103.099998
1968-11-06|103.269997

sqlite> select * from sp500 where dt between '1976-11-01' and '1976-11-03';
1976-11-01|103.099998
1976-11-03|101.919998

sqlite> select * from sp500 where dt between '1980-11-03' and '1980-11-05';
1980-11-03|129.039993
1980-11-05|131.330002

As you can see below, the routine create_election_tables takes ‘days before’ and ‘days after’ arguments. I decided that ‘days before’ should include election day if markets were open. I did this because I did not want to think too much. Looking at it now, omitting election day from all analyses might have made more sense. But, then, you have to deal with whether to look at performance relative to the day before election day. If I measured relative to the day before the election, the S&P 500 would be up 10.73% over the 100 trading days following the 2016 election. Or, if I looked at performance relative to the day following the election, the S&P 500 would be up 9.1% over the same period.

It makes more sense a priori to look at performance relative to the point when the winner of the election had not yet been revealed, but then one has to think about whether to ignore everything that happened between November 7, 2000 and December 12, 2000, and that would have been incompatible with my desire to avoid thinking.
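If you want to check that 10.73% figure yourself once the per-election tables described below exist (the trump table is created later in the script), a query along these lines should report something very close to it; '2016-11-07' is the last trading day before the 2016 election:

select round(100 * (a.p / b.p - 1), 2)
from sp500 a, sp500 b
where b.dt = '2016-11-07'
  and a.dt = (select max(dt) from trump);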

#!/usr/bin/env perl

=for PURPOSE

Download S&P 500 data from Yahoo!, and produce an output file whose
rows are trading days relative to each president's election, and whose
columns are the values of S&P 500 relative to its value on election
day.

=cut

use v5.24; # why not?!
use warnings;

use feature 'signatures';
no warnings 'experimental::signatures';

use autouse Carp => 'croak';
use autouse 'YAML::XS' => 'Dump';

use Const::Fast;
use DBI;
use HTTP::Tiny;

run(  );

sub run {
    const my @ELECTIONS => (
        [ eisenhower => '1952-11-04' ],
        [ kennedy    => '1960-11-08' ],
        [ nixon      => '1968-11-05' ],
        [ carter     => '1976-11-02' ],
        [ reagan     => '1980-11-04' ],
        [ clinton    => '1992-11-03' ],
        [ bush       => '2000-11-07' ],
        [ obama      => '2008-11-04' ],
        [ trump      => '2016-11-08' ],
    );

    const my %CONFIG => (
        colswanted  => [0, -1],
        sp500_start => '19500103',
    );

    # Optional start and end dates (YYYYMMDD) come from the command line
    my $start = $ARGV[0] ? $ARGV[0] : $CONFIG{sp500_start};
    my $end   = $ARGV[1];
    unless ( $end ) {
        require DateTime;
        $end = DateTime->now(time_zone => 'America/New_York')->ymd('');
    }

    my $db   = sp500_filename( $start, $end, 'db');
    my $file = sp500_filename( $start, $end, 'csv' );
    my $url  = sp500_url( $start, $end );

    -e $file or download( $url, $file );

    my $dbh = import( $file, $db, $CONFIG{colswanted} );

    create_election_tables(
        $dbh,
        \@ELECTIONS,
        101,
        100
    );

    export_analysis_table(
        $dbh,
        \@ELECTIONS,
        101,
        'sp500-elections.tsv',
    );

    const my @ELECTIONS_PRE16 => grep $_->[0] ne 'trump', @ELECTIONS;

    create_election_tables(
        $dbh,
        \@ELECTIONS_PRE16,
        51,
        200
    );

    export_analysis_table(
        $dbh,
        \@ELECTIONS_PRE16,
        51,
        'sp500-electionspre16.tsv',
    );

    $dbh->disconnect;
}

The import routine is below. It reads the downloaded file and creates a simple table whose primary key is the date column and whose only other column is the adjusted daily close value of the S&P 500. The $colswanted argument tells us the indexes of the columns we want. Of course, one could make this routine much more generic, but my goal is to create a small script that records each step taken to create the data tables I want, reproducibly and with minimum fuss. I use DBI here; just opening a pipe to sqlite3 would have been just as easy, and would probably have performed faster, but I wrote the first thing that popped into my head.

sub import ( $file, $db, $colswanted ) {
    my $data = read_data( $file, $colswanted );

    my $dbh = DBI->connect("dbi:SQLite:$db", undef, undef,
        { AutoCommit => 0, RaiseError => 1 }
    );

    $dbh->do( q{DROP TABLE IF EXISTS sp500} );

    $dbh->do( q{CREATE TABLE sp500 (
        dt  CHAR[10] PRIMARY KEY,
        p   REAL NOT NULL
    )} );

    $dbh->commit;

    my $sth = $dbh->prepare(q{INSERT INTO sp500(dt, p) VALUES (?, ?)});

    # Bind each column of values to its (1-based) placeholder, then insert
    # all the rows in one batch
    for my $i ( 0 .. $data->$#* ) {
        $sth->bind_param_array($i + 1, $data->[$i]);
    }

    $sth->execute_array({}, $data->@*);

    $dbh->commit;

    return $dbh;
}
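One thing the listing above glosses over is read_data, which is not shown in this post. Here is a minimal sketch of what it might look like, assuming the Yahoo! Finance CSV layout of the time (a Date,Open,High,Low,Close,Volume,Adj Close header line followed by one row per trading day); it returns one arrayref of values per wanted column, which is the shape the bind_param_array/execute_array calls above expect:

sub read_data ( $file, $colswanted ) {
    open my $fh, '<', $file
        or croak "Failed to open '$file' for reading: $!";

    my $header = <$fh>;    # discard the column header line

    my @columns;
    push @columns, [] for $colswanted->@*;

    while ( my $line = <$fh> ) {
        chomp $line;
        my @fields = split /,/, $line;
        # pick out the wanted fields, e.g. 0 (Date) and -1 (Adj Close)
        push $columns[$_]->@*, $fields[ $colswanted->[$_] ]
            for 0 .. $colswanted->$#*;
    }

    close $fh
        or croak "Failed to close '$file': $!";

    return \@columns;
}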

My next step was to write create_election_tables, whose job is to select the $lo trading days up to (and possibly including) election day and the $hi trading days following it. The SQL below looks like an abomination to me, but the multiple levels of SELECT statements were the only way I was able to get the correct output:

sub create_election_tables ( $dbh, $elections, $lo, $hi ) {
    for my $election ( $elections->@* ) {
        my ($president, $date) = $election->@*;

        $dbh->do("DROP TABLE IF EXISTS $president");
        $dbh->commit;

        $dbh->do(sprintf(q{
            CREATE TABLE %s AS
                SELECT * FROM
                (
                    SELECT * FROM
                    (
                        SELECT dt, p
                        FROM sp500
                        WHERE dt <= ?
                        ORDER BY dt DESC
                        LIMIT ?
                    )
                )
                UNION ALL
                SELECT * FROM
                (
                    SELECT dt, p FROM
                    (
                        SELECT dt, p
                        FROM sp500
                        WHERE dt > ?
                        ORDER BY dt
                        LIMIT ?
                    )
                )
                ORDER BY dt
        }, $president), {}, $date, $lo, $date, $hi);
        $dbh->commit;
    }

    return;
}
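As a quick sanity check, with the 101 days before and 100 days after used in the first create_election_tables call above, each per-election table should end up with 201 rows whenever the downloaded series covers the full window, which it does for every election in the list:

sqlite> select count(*) from trump;
201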

After create_election_tables returns, we have a table for each election listed in our @ELECTIONS array. The next step is to join these tables so the trading days around each election line up. Luckily, SQLite creates a rowid column with each of those tables such that

Tables created using CREATE TABLE AS are initially populated with the rows of data returned by the SELECT statement. Rows are assigned contiguously ascending rowid values, starting with 1, in the order that they are returned by the SELECT statement (see SQLite documentation for CREATE TABLE).

The following creates the SQL needed to join the tables created above so that a t column gives the number of trading days relative to the base, and there is a column named after each president whose rows give S&P 500 closing values relative to the base trading day. It fetches the resulting rows, and writes them to an output file so I can import it into another application for further manipulation.

sub export_analysis_table ( $dbh, $elections, $lo, $output_file ) {
    open my $fh, '>', $output_file
        or croak "Failed to open '$output_file' for writing";

    my $header = join("\t", t => map $_->[0], $elections->@*);

    say $fh $header;

    my $vars = join ', ', map {
        my $president = $_->[0];
        sprintf(
            'round( %s.p / (select p from %s where rowid = %d), 4 ) as %s',
            ($president) x 2, $lo, $president
        )
    } $elections->@*;

    my $tables = join q{ JOIN }, map $_->[0], $elections->@*;

    my $cond = join q{ AND }, map sprintf(
        '( %s.rowid = %s.rowid )', $elections->[ $_ - 1 ][0], $elections->[ $_ ][0]
    ), 1 .. $elections->$#*;

    my $sth = $dbh->prepare(qq{
        SELECT clinton.rowid - $lo AS t, $vars
        FROM $tables
        WHERE $cond
    });

    $sth->execute;

    while ( my $row = $sth->fetch ) {
        say $fh join("\t", $row->@*);
    }

    close $fh
        or croak "Failed to close '$output_file': $!";

    return;
}
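To make the string-building above a bit more concrete, if @ELECTIONS held only the clinton and trump rows and $lo were 101, the generated query would be roughly the following (whitespace added for readability):

SELECT clinton.rowid - 101 AS t,
       round( clinton.p / (select p from clinton where rowid = 101), 4 ) as clinton,
       round( trump.p / (select p from trump where rowid = 101), 4 ) as trump
FROM clinton JOIN trump
WHERE ( clinton.rowid = trump.rowid )

That is, each president’s column is the closing value on trading day t divided by the closing value on the base day (rowid 101, the last trading day up to and including the election), with t counting trading days relative to that base.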

Note that I am using another favorite syntax improvement that was introduced experimentally with Perl 5.20 and became stable in 5.24: postfix dereferencing. Once again, writing @$row is not horrible, but postfix dereferencing is really useful when you are doing some deep dereferencing, and I prefer to stick with it unless there is an overriding backward compatibility requirement, or unless I simply forget ;-)
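For anyone who has not run into it yet, here is the flavor of the difference on a made-up data structure:

use v5.24;

my $row  = [ 1, 2, 3 ];
my $data = { results => [ [ 'a', 'b' ] ] };   # made-up structure

my @shallow1 = @$row;                     # circumfix: fine for simple cases
my @shallow2 = $row->@*;                  # postfix: same thing

my @deep1 = @{ $data->{results}[0] };     # circumfix: read inside out
my @deep2 = $data->{results}[0]->@*;      # postfix: read left to right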

Here are a couple of pictures. First, a look at the S&P 500 around all the elections under consideration. Note that S&P 500 performance 100 trading days from the election is bounded from below by its performance during the first days of the Bush and Obama presidencies, and from above by its performance under Kennedy. While the S&P 500 is down since March 1, it is still about 10% above its level on election day. By inauguration day, it was up by about 6.2% relative to election day. As I write this, today is not over, and the S&P 500 has given up about 2.2% since March 1, but it is still up 3.2% since inauguration day. So, if you had bought into the “sell the inauguration” calls, you would have given up something between 5.5% and 3.2%, corresponding to selling on March 1 and today, respectively.

[ S&P 500 around change elections ]

And, here is the comparison between S&P 500 around the 1992 election and the 2016 election:

[ S&P 500 around the 1992 and 2016 elections ]

PS: You can discuss this post on r/perl