Egnyte Support for Email It In

Email It In has just added support for the Egnyte hybrid cloud storage service. Unlike the other services Email It In supports (Google Drive, OneDrive, and Dropbox), Egnyte lets its customers keep storage on-premises as well as in the cloud.

To add Egnyte support we worked closely with both Egnyte and its users to ensure we offered the best service possible. Egnyte users can sign up for a free trial at https://emailitin.com/

Haraka and shellshock

It has recently been shown that Qmail is vulnerable to shellshock if you use a pipe filter in a .qmail file (as I do on one of my own machines).

I want Haraka users to know that if you have Haraka in front of Qmail, you are NOT vulnerable to this.
The reason is that Haraka validates MAIL FROM commands according to RFC 5321 rules, whereas Qmail does not: it simply passes any string through, unvalidated, into the environment. I believe the same is true of Qpsmtpd, though I have not tested it there.
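To illustrate the idea, here is a simplified sketch. This is not Haraka's actual implementation, and the regex is far stricter than the full RFC 5321 grammar (it doesn't even allow the null sender), but it shows why an argument that fails address validation never gets anywhere near the environment:

```javascript
// A toy MAIL FROM validator: the argument must be a bracketed
// local@domain address built from a conservative character whitelist.
// A bash function definition like "() { :;}; ..." contains spaces and
// parentheses and has no @, so it is rejected at the protocol layer.
function isValidMailFrom(arg) {
    return /^<[A-Za-z0-9.!#$%&'*+\/=?^_`{|}~-]+@[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?(?:\.[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?)+>$/.test(arg);
}

console.log(isValidMailFrom('<user@example.com>'));   // true
console.log(isValidMailFrom('<() { :;}; echo x>'));   // false
```
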
All that said, upgrade your servers anyway. This is a nasty bug with multiple attack vectors.

Irritate your developers

Nothing motivates a developer more than being irritated by something.

It’s why we’re hackers in the first place. I remember my first real programming experience: using the “Freeze Frame” cartridge on the Commodore 64 to change the number of lives I had in a game. The irritation of not being able to finish the game drew me into hacking. Most programmers have similar stories of how something irritated them enough to start hacking on it.

We can use this to our advantage in our day-to-day work. One excellent example I remember hearing when I was in the Perl community (I think it was from @chromatic, but I don’t recall exactly): at the end of each day, write a test that fails based on what you’re working on. You’ll come in the next morning and want, or need, to fix that test.

My most recent examples of irritating myself have come from trying out new external systems that implement webhooks. When I implemented Stripe payments for EmailItIn I had no idea which webhooks I might need to pay attention to. So rather than pore over the documentation for hours until I figured it out, I just set it up to irritate me: every call from Stripe’s webhooks emails me the entire JSON structure they send. This becomes irritating fairly quickly, and so like any good hacker I start to notice which events I’m interested in and which I’m not, and I can set up code to filter out the irrelevant webhooks and write code that deals with things like “subscription_cancelled”.
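
That flow can be sketched roughly as below. All the names here (notifyDevelopers, the handler table, the 'ping' event) are illustrative assumptions, not EmailItIn's actual code or Stripe's exact event types:

```javascript
var sent = [];
function notifyDevelopers(subject, body) {
    sent.push(subject);  // stand-in for "email me the entire JSON structure"
}

// events we've learned we don't care about
var ignored = { 'ping': true };

// events we've learned to handle properly
var handlers = {
    'subscription_cancelled': function (event) {
        // ... disable the customer's account here ...
        return 'cancelled';
    }
};

function handleWebhook(event) {
    if (ignored[event.type]) return 'ignored';
    if (handlers[event.type]) return handlers[event.type](event);
    // Unknown event: irritation mode. Mail the whole payload and
    // wait for the pattern to emerge.
    notifyDevelopers('Unhandled webhook: ' + event.type,
                     JSON.stringify(event, null, 2));
    return 'emailed';
}
```

Over time events migrate out of "irritation mode" into either the ignore list or a real handler.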

Another way you can use this to your advantage is to email yourself every single error (or in Node, every call to console.error()) on your site. This might scare a lot of people, but if you’re seeing errors often enough to irritate you, that is something that needs fixing, fast. And if you don’t do this, those errors often get lost in your logs (you do keep logs, right?). At Ideal Candidate we email all developers when any of the following occurs:

  • A console.error() call – we shim this to include a stack trace in the email
  • A server-side exception that brings the server down
  • A client-side exception – the client does a POST back to our server, which triggers the email
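
The console.error() shim can be sketched along these lines. This is an assumed implementation, not Ideal Candidate's actual code; here the "email" is recorded in an array so you can see the shape of it:

```javascript
var sentEmails = [];
function emailTeam(subject, body) {
    // stand-in for your real mail transport
    sentEmails.push({ subject: subject, body: body });
}

var originalError = console.error;
console.error = function () {
    var args = Array.prototype.slice.call(arguments);
    // new Error().stack captures where console.error was called from
    var stack = new Error().stack;
    emailTeam('console.error: ' + args.join(' '), stack);
    // still log to stderr as normal
    originalError.apply(console, args);
};

console.error('something broke');
```
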

Does this volume of email scare you? If it does, you probably have far too many errors occurring in your application. Give it a try – you might find a lot of issues you didn’t even know you had.

SOLVED: Drobo doesn’t remount at boot on OS X Mavericks

For a long time I loved my Drobo. Then I upgraded to Mavericks and it all went to shit. On every reboot I would have to carefully unmount the now read-only Drobo and remount it, a process that took around 30 minutes. Only then would it be properly mounted.

I recently discovered that HFS journaling wasn’t enabled on the drive. Turning it on in Disk Utility (there’s a big button in the toolbar) fixed the reboot woes.

I’m putting this blog post here in the hope it will help others with the same problem.

Haraka v2.5.0

After a massive development effort a new stable release of Haraka is out.

v2.5.0 contains a huge number of changes, of which these are the highlights:

  • A new feature called ResultStore (or Results) which allows plugins to share the information they discover with each other. This is primarily used by the karma plugin, which uses the results of other plugins to penalise senders
  • A -o/--order option to bin/haraka to show the order in which all currently configured plugins will run
  • IPv6 support on outbound via `echo 1 > config/outbound.ipv6_enabled`
  • Attachment streams now have access to the MIME header for that attachment via stream.header
  • A new “deferred” hook for outbound mail, called when mail is temporarily denied
  • Outbound “bounce” hooks now receive an Error object containing the MX and recipient information. Existing code that treats the error as a string should still work as before
  • Outbound gets more parameters on the “delivered” hook to allow detailed analysis of sent mail
  • Outbound UUIDs now distinguish between different domains using .1, .2 etc, just as inbound gives a new index for each transaction
  • Outbound won’t try to send to domains publishing a NULL MX (draft-delany-nullmx-02)
  • The outbound template can use `extended_reason` to show a more detailed error
  • Log lines are coloured when sent to the terminal
  • Log lines can have timestamps prepended via `echo 1 > config/log_timestamps`
  • Net_utils gains a large number of new support functions
  • Listening on port 465 automatically enables SSL support on that socket
  • New plugins: connect.asn, access, connect.fcrdns, and relay
  • A new utility: `haraka_grep` – a grep-like tool for Haraka log lines which displays all log lines for a given connection when it finds matching lines in the logs
  • Haraka is now continuously tested by Travis-CI
  • Dependencies are now more strictly managed with “~Version” in package.json

This release also features many updates to existing plugins, and many bug fixes particularly in the Outbound sending engine.

Installation or upgrading is as simple as “npm install -g Haraka” and restarting your server, with the caveat that we urge anyone upgrading to test thoroughly before going into production.

As usual the development of Haraka could not be done without the help of many in the community. Please see the git logs for a full list of all changes and contributors.

Postgresql: Converting TEXT columns to JSON

This is an “I fucked up so you don’t have to” post, and is here so someone googling as I did will have more luck.

I recently upgraded a database from Postgresql 9.1 to 9.3. In the process I wanted to convert my TEXT columns containing JSON to Postgresql’s JSON data type.

Here’s what I did (DO NOT DO THIS):

ALTER TABLE table1 ALTER COLUMN col1 TYPE JSON USING to_json(col1);

Here’s what I should have done:

ALTER TABLE table1 ALTER COLUMN col1 TYPE JSON USING col1::JSON;

What happens with the former is that you get a single string (which is itself valid JSON) containing your JSON: effectively a double-encoding bug.
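
The same double-encoding mistake is easy to sketch in JavaScript. This is only an analogy for what the two SQL expressions do, not what Postgres does internally, but the shape of the bug is identical:

```javascript
// a TEXT column that already holds serialized JSON
var col1 = '{"name":"matt"}';

var wrong = JSON.stringify(col1);  // like to_json(col1): encodes it again
var right = JSON.parse(col1);      // like col1::JSON: parses the text

// wrong is still a string (your JSON wrapped in another layer of quoting);
// right is a real object, with right.name === 'matt'
console.log(typeof wrong, typeof right);
```
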

Node.js multiple query transactions

When using relational databases it’s a common pattern to perform several data manipulation commands (insert/update/delete) within a transaction. A typical use case is inserting into two tables related by a foreign key, where you need the auto-incremented “id” value from the first insert to use in the second. Often people will use stored procedures for this, but what if you want to do it in Node? Naive node.js code can generate deeply nested callbacks here, and, shockingly, that is even the approach recommended by the node-pg documentation. Here’s how it generally looks:

db.begin_transaction(function (err, client) { // assume we have implemented a "begin_transaction" function
    if (err) return cb(err);
    client.query("INSERT INTO Table1 (col1, col2) VALUES ($1, $2) RETURNING id", table1_values, function (err, results) {
        if (err) {
            return client.rollback_transaction(function () {
                cb(err);
            });
        }
        var table1_id = results.rows[0].id; 
        var values = [ table1_id, table2_values[0], table2_values[1] ];
        client.query("INSERT INTO Table2 (table1_id, col1, col2) VALUES ($1, $2, $3) RETURNING id", values, function (err, results) {
            if (err) {
                return client.rollback_transaction(function () {
                    cb(err);
                });
            }
            client.commit_transaction(function (err) {
                if (err) {
                    return cb(err);
                }
                cb(null, table1_id, results.rows[0].id);
            });
        });
    });
});

As you can imagine this gets much worse when you get to 3 queries or more.

But let’s break it down: what are we looking at here? It’s effectively a “waterfall”, very much like async.waterfall. So let’s implement the equivalent of async.waterfall in our database library:

var pg = require('pg');
var async = require('async');

// assumptions: your connection string, plus plain SQL transaction commands
var connstring = process.env.DATABASE_URL;
var begin_transaction = 'BEGIN';
var commit_transaction = 'COMMIT';
var rollback_transaction = 'ROLLBACK';

exports.waterfall = function waterfall (tasks, cb) {
    pg.connect(connstring, function (err, client, done) {
        if (err) {
            return cb(err);
        }

        client.query(begin_transaction, function (err) {
            if (err) {
                done();
                return cb(err);
            }
            
            var wrapIterator = function (iterator) {
                return function (err) {
                    if (err) {
                        client.query(rollback_transaction, function () {
                            done();
                            cb(err);
                        });
                    }
                    else {
                        var args = Array.prototype.slice.call(arguments, 1);
                        var next = iterator.next();
                        if (next) {
                            args.unshift(client);
                            args.push(wrapIterator(next));
                        }
                        else {
                            args.unshift(client);
                            args.push(function (err, results) {
                                var args = Array.prototype.slice.call(arguments, 0);
                                if (err) {
                                    client.query(rollback_transaction, function () {
                                        done();
                                        cb(err);
                                    });
                                }
                                else {
                                    client.query(commit_transaction, function () {
                                        done();
                                        cb.apply(null, args);
                                    });
                                }
                            });
                        }
                        async.setImmediate(function () {
                            iterator.apply(null, args);
                        });
                    }
                };
            };
            wrapIterator(async.iterator(tasks))();
        });
    });
};

While it looks complicated, most of the code was cut-and-paste from the internal implementation of async.waterfall.

So now we have that, what does it look like in use?

db.waterfall([
    function (client, cb) {
        client.query("INSERT INTO Table1 (col1, col2) VALUES ($1, $2) RETURNING id", table1_values, cb);
    },
    function (client, results, cb) {
        var table1_id = results.rows[0].id;
        var values = [ table1_id, table2_values[0], table2_values[1] ];
        client.query("INSERT INTO Table2 (table1_id, col1, col2) VALUES ($1, $2, $3) RETURNING id", values, cb);
    }
], cb);

Much simpler and easier to debug.

Let me know if you find this useful.
