
Benjamin Oakes


Hi, I'm Ben Oakes and this is my geek blog. Currently, I'm a Ruby/JavaScript Developer at Liaison. Previously, I was a Developer at Continuity and Hedgeye, a Research Assistant in the Early Social Cognition Lab at Yale University, and a student at the University of Iowa. I also organize TechCorridor.io, ICRuby, and OpenHack Iowa City, and previously organized NewHaven.rb. I have an amazing wife named Danielle Oakes.


Google announces the first practical technique for generating a SHA-1 collision

by Ben

This is big news.

We hope that our practical attack against SHA-1 will finally convince the industry that it is urgent to move to safer alternatives such as SHA-256.

Source: Announcing the first SHA1 collision – Google Online Security Blog

The technology community still uses SHA-1 for many things.  One of the most concerning implications of this team's technique is that it opens the door to attacks against Git, which uses SHA-1 to identify every commit.  Imagine a tag (ultimately just a SHA-1 sum) that referred to two different changesets: a benign one on your machine and a malicious one on GitHub.  You deploy that tag, and the malicious code runs instead of the code you expected.
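To make that concrete, here's a minimal Python sketch (my illustration, not from the Google post) of how Git names objects: an ID is just the SHA-1 of a short type/length header plus the content, so any two contents with the same SHA-1 are indistinguishable to Git.

    import hashlib

    def git_object_sha1(obj_type, body):
        # Git names an object sha1(b"<type> <size>\0" + body)
        header = ("%s %d\0" % (obj_type, len(body))).encode()
        return hashlib.sha1(header + body).hexdigest()

    # Matches `echo hello | git hash-object --stdin`
    print(git_object_sha1("blob", b"hello\n"))
    # => ce013625030ba8dba906f756967f9e9ca394464a

A collision attack produces two different bodies with the same SHA-1; Git would store either one under the same name, which is exactly the tag-swap scenario above.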

As far as I know, such an attack on Git hasn't been demonstrated yet, but in theory, I think you could substitute one commit for another as I described.  I bet someone will demonstrate it someday.  (Think of padding files with bogus comments until you get the checksum you want.)  It would be difficult (though not impossible) to switch Git to SHA-256, and I don't know of any efforts to do that, though Git 2.11 does at least acknowledge that abbreviated SHA-1 checksums collide in practice.
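You can get a feel for why abbreviated checksums collide with a quick birthday search.  This Python sketch (an illustration of truncation, not of Google's full attack) finds two inputs whose SHA-1 digests share their first 8 hex digits; it needs only on the order of 2^16 tries:

    import hashlib
    from itertools import count

    def short_sha1(data, hex_digits=8):
        # First 8 hex digits = 32 of SHA-1's 160 bits
        return hashlib.sha1(data).hexdigest()[:hex_digits]

    seen = {}
    for i in count():
        data = ("bogus comment %d" % i).encode()
        digest = short_sha1(data)
        if digest in seen:
            print("collision on %s: %r vs %r" % (digest, seen[digest], data))
            break
        seen[digest] = data

Every extra hex digit quadruples the work, which is why full 160-bit collisions took Google such enormous computation while short abbreviations collide in everyday repositories.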

Will such an attack happen today or tomorrow?  Probably not; it takes a huge amount of resources right now.  However, computation is cheaper than ever, and I bet attackers will start to abuse services like Travis CI for computations like this, much as I've heard is already being done with Bitcoin mining in pull requests on open source projects.

The best mitigation I’m currently aware of is cryptographically signing your commits, and this may be a catalyst for that to become standard practice.
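For reference, enabling signing is only a little Git configuration.  The key ID below is a placeholder; the flags themselves are standard Git:

    # One-time setup (replace DEADBEEF with your own GPG key ID)
    git config --global user.signingkey DEADBEEF
    git config --global commit.gpgsign true   # sign every commit by default

    # Or sign a single commit explicitly, then verify it
    git commit -S -m "A signed commit"
    git log --show-signature -1

A signature at least attests to who made a commit, so a maliciously substituted changeset would show up as unsigned or signed by the wrong key.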

Oracle to ‘sinner’ customers: Reverse engineering is a sin and we know best

by Ben

Recently, I have seen a large-ish uptick in customers reverse engineering our code to attempt to find security vulnerabilities in it. < Insert big sigh here. > This is why I’ve been writing a lot of letters to customers that start with “hi, howzit, aloha” but end with “please comply with your license agreement and stop reverse engineering our code, already.”

Source: Oracle to ‘sinner’ customers: Reverse engineering is a sin and we know best | ZDNet

This article is hard to believe.  Imagine if the people who discover these vulnerabilities sold them on the black market instead of reporting them to Oracle.  I would hope that Oracle would prefer receiving an email to widespread zero-day attacks.

Though I’m not a lawyer, this makes me wonder what legally constitutes reverse engineering, and whether license clauses that forbid it are even enforceable in a situation like this.  Unfortunately, the Wikipedia article on reverse engineering doesn’t say anything about reporting security vulnerabilities, which seems like something that should always be allowed.

From the Wikipedia article:

In the United States, even if an artifact or process is protected by trade secrets, reverse-engineering the artifact or process is often lawful as long as it has been legitimately obtained.

Reverse engineering of computer software in the US often falls under both contract law as a breach of contract as well as any other relevant laws. This is because most EULAs (end-user license agreements) specifically prohibit it, and U.S. courts have ruled that if such terms are present, they override the copyright law which expressly permits it (see Bowers v. Baystate Technologies).

Sec. 103(f) of the DMCA (17 U.S.C. § 1201 (f)) says that a person who is in legal possession of a program, is permitted to reverse-engineer and circumvent its protection if this is necessary in order to achieve “interoperability” – a term broadly covering other devices and programs being able to interact with it, make use of it, and to use and transfer data to and from it, in useful ways. A limited exemption exists that allows the knowledge thus gained to be shared and used for interoperability purposes.

Do security vulnerabilities fall under “interoperability”?  Are there “whistleblower” laws that encourage security vulnerabilities to be reported and dealt with responsibly?  If not, should there be?