In our previous post about Petya, we speculated that the short-cuts, design flaws, and non-functional mechanisms observed in the malware could have arisen due to it being developed under a tight deadline. I’d now like to elaborate a little on what we meant by that.
As a recap, this is what the latest version of Petya looks like (to us).
Since the previous post, we’ve determined that the user-mode encryption-decryption functionality is actually working as intended. That’s the mechanism used to encrypt and decrypt files on the system.
However, the MBR encryption-decryption functionality doesn’t work. This is because the “personal installation key” displayed in the MBR ransom screen is just a randomly generated string. This is one of the main reasons why some people are calling Petya a “wiper” (or “WTFware”).
Naturally, this non-functional mechanism has led to much discussion on the Internet. And also here at F-Secure.
We discussed the “illusory” MBR functionality in some detail. Here are a few points that were made.
Here we establish that the author of this malware obviously knows the code isn't going to work. The author also knows that members of the research community have meticulously studied prior versions of Petya, especially the MBR code, which is of particular interest since it's unique to Petya. So the author would be aware that it wouldn't take reverse engineers all that long to figure out that the encryption-decryption mechanism is bogus.
However, consider the possibility that this malware was developed on a tight deadline, and released ahead of a reasonable schedule. I’m sure there isn’t a single software developer in the world who hasn’t been through that “process”.
But let me put on my "developer hat". If I were developing a piece of malware that embeds Petya, I'd most likely start by dropping it into my project and filling all the variables with random placeholder data, just to make sure the overall embedding concept works, the Petya MBR code kicks in after reboot, and so on.
Once everything seemed to work, I'd move on to the next step: implementing the actual elliptic-curve cryptography that would ultimately replace the placeholder random data.
Let's note at this point that all of the elliptic-curve-related code is completely absent from this new variant. It's a complicated piece of code that would probably take over a week to develop and test properly.
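For contrast, here's a minimal sketch of the kind of key escrow that working Petya versions implemented: the disk key is encrypted so that only the holder of the attacker's private key can recover it. Everything below is an illustrative assumption on my part; plain Diffie-Hellman over a toy prime stands in for the missing elliptic-curve code, and is of course not secure.

```python
import secrets
import hashlib

# Toy parameters: Diffie-Hellman over the Mersenne prime 2**127 - 1
# stands in for the elliptic-curve crypto real Petya used. This is
# purely to illustrate the data flow, NOT a secure construction.
P = 2**127 - 1
G = 5

def keygen() -> tuple[int, int]:
    """Generate a (private, public) key pair."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def escrow_disk_key(disk_key: bytes, attacker_pub: int) -> tuple[int, bytes]:
    """Produce the victim-visible blob: an ephemeral public value plus
    the disk key XORed with a hash of the shared secret."""
    eph_priv, eph_pub = keygen()
    shared = pow(attacker_pub, eph_priv, P)
    pad = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return eph_pub, bytes(a ^ b for a, b in zip(disk_key, pad))

def recover_disk_key(eph_pub: int, blob: bytes, attacker_priv: int) -> bytes:
    """Only the attacker's private key reverses the escrow."""
    shared = pow(eph_pub, attacker_priv, P)
    pad = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return bytes(a ^ b for a, b in zip(blob, pad))
```

Swap the escrow output for random bytes and the link to the disk key is gone, which is exactly the situation in the new variant.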
So, what if my friend, who's supposed to "ship" this thing, gets the wrong version, or doesn't bother to wait for it to be ready, or whatever? He'll just package everything up, run it to see that everything looks like it works, and ship it. And off we go.
Putting together a proper build process for software isn’t easy. Tracking changes in different modules, making sure your final package contains the right things, and testing it thoroughly enough to catch discrepancies, or wrong versions is also time-consuming. Many “real” software companies have problems with build processes and versioning. The theory that the final build of Petya contained an old version of one of the components is not at all far-fetched. Neither is the theory that they shipped a “minimum viable product”.
Plenty of other evidence points towards this piece of software being developed in a hurry, and not thoroughly tested. For instance, a machine can re-infect itself and encrypt files twice. There's also this bug:
And I’m sure we’ll find more bugs. (Whoever wrote this is getting a lot of free QA!)
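The double-encryption oversight is the kind of thing a trivial idempotency guard prevents. A minimal sketch of such a guard follows; the marker-file name and location are hypothetical, the point being that the shipped malware apparently lacked any such check.

```python
import tempfile
from pathlib import Path

def encrypt_once(target_dir: Path, marker_name: str = ".done.marker") -> bool:
    """Run the (stubbed) encryption step only if no completion marker
    exists. Returns True if encryption ran, False if it was skipped.
    The marker name here is hypothetical, for illustration only."""
    marker = target_dir / marker_name
    if marker.exists():
        return False          # already processed: never encrypt twice
    # ... encryption of files under target_dir would happen here ...
    marker.touch()            # record completion
    return True

# Usage: the second call is a no-op thanks to the guard.
d = Path(tempfile.mkdtemp())
first = encrypt_once(d)       # runs the encryption step
second = encrypt_once(d)      # skipped: marker already present
```

A few lines of state tracking like this would have caught the re-infection case in even cursory testing.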
At the end of the day, if someone wanted to build a “wiper”, why build an almost functional ransomware, save for a few bugs and a possibly misconfigured final package?
My colleague Sean Sullivan wrote a follow-up post to this:
If this attack was aimed purely at Ukraine, then, given the collateral damage we've seen and the emerging information about the aggressive lateral-propagation mechanisms this malware employs, I'd add that network-propagation logic to the list of bugs and design flaws present in this malware.