A few months ago, while discussing code signing with a user (about the PTHPasteboard project), I had the idea to “revirgin” a code-signed binary by removing the Mach-O LC_CODE_SIGNATURE command. As usual with my many ideas, I never explored that one, until today, when I received an email asking about it and decided to give it a try. My test code is a simple Hello World, compiled for i386 only. After the binary is compiled, I sign it with my test certificate and mark the process to be killed if code signing verification fails. Let me show you the differences:

Without code sign:

hello:
Mach header
magic cputype cpusubtype  caps    filetype ncmds sizeofcmds      flags
0xfeedface       7          3  0x00          2    12        960 0x00000085

With code sign:

hello.codesign:
Mach header
magic cputype cpusubtype  caps    filetype ncmds sizeofcmds      flags
0xfeedface       7          3  0x00          2    13        976 0x00000085

The extra command is LC_CODE_SIGNATURE. Here it is:

Load command 12
cmd LC_CODE_SIGNATURE
cmdsize 16
dataoff  12592
datasize 5232
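
For reference, LC_CODE_SIGNATURE is just a linkedit_data_command (declared in mach-o/loader.h) whose dataoff/datasize point at the signature blob stored in __LINKEDIT, which is also why cmdsize is 16:

struct linkedit_data_command {
    uint32_t cmd;       /* LC_CODE_SIGNATURE */
    uint32_t cmdsize;   /* sizeof(struct linkedit_data_command) = 16 */
    uint32_t dataoff;   /* file offset of the signature data in __LINKEDIT */
    uint32_t datasize;  /* file size of the signature data */
};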

I went to check the code of my offset.pl (I tried to comment it, since I know I forget these things) and the simplest idea occurred to me: if it’s an extra command, why not reduce the number of commands and hope that the loader will ignore the extra one? It’s simple to try and worth a shot!
Load the binary into 0xED, modify one byte (World to Wolld), and try to run the modified signed binary. The process is killed! Code signing is working. Now let’s change the number of load commands back to 12. Since it’s an i386-only binary I don’t have to mess with fat headers, so ncmds sits 16 bytes from the beginning of the file. Change 0xD to 0xC, save, and launch the program. Voila, it runs! This simple technique works!
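
For reference, ncmds is the fifth 32-bit field of the Mach header (mach-o/loader.h), which is how you land on offset 16 in a thin binary:

struct mach_header {
    uint32_t      magic;        /* file offset  0: 0xfeedface */
    cpu_type_t    cputype;      /* file offset  4 */
    cpu_subtype_t cpusubtype;   /* file offset  8 */
    uint32_t      filetype;     /* file offset 12 */
    uint32_t      ncmds;        /* file offset 16: number of load commands */
    uint32_t      sizeofcmds;   /* file offset 20: total size of load commands */
    uint32_t      flags;        /* file offset 24 */
};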

The next step is to turn offset.pl into a new util called removecodesign.pl. It’s pretty damn easy: calculate the offset where ncmds is (with support for fat binaries, of course!), read it, subtract one, write it back to the binary, and read it again to make sure everything went ok. The tool is ready to be tested, so let’s launch it against a copy of the original signed binary and what happens? It is killed! What the hell??!?!?!?!?
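
For reference, the whole patch boils down to something like this minimal C sketch (thin little-endian 32-bit binaries only, not the actual removecodesign.pl; the fat case would also need to walk the fat_header/fat_arch entries and patch each architecture):

#include <stdio.h>
#include <mach-o/loader.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "r+");
    if (fp == NULL) { perror("fopen"); return 1; }

    struct mach_header mh;
    if (fread(&mh, sizeof(mh), 1, fp) != 1 || mh.magic != MH_MAGIC) {
        fprintf(stderr, "not a thin 32-bit Mach-O\n");
        return 1;
    }
    /* drop the last load command, assumed here to be LC_CODE_SIGNATURE */
    mh.ncmds -= 1;
    /* like the one-byte hex edit, this leaves sizeofcmds and the command data untouched */

    rewind(fp);
    if (fwrite(&mh, sizeof(mh), 1, fp) != 1) { perror("fwrite"); return 1; }
    fclose(fp);
    printf("ncmds is now %u\n", mh.ncmds);
    return 0;
}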
Load the modified binary into a hex editor to make sure my modification is correct (you know, I suck at coding :X) and yes, it is… This is weird…
Do some more tests and it still doesn’t work! Maybe my initial test had some problem. Redo the initial test and it works. Generate the SHA1 checksum for the modified binary. With 0xED, modify ncmds back to the original value and test if it works. It does… Modify ncmds again to 12. Test if it works, and it does! Generate the SHA1 checksum for this modified binary and it’s the same! The non-working binary and the working binary have the same checksum! A user on the IRC channel suggests it could be mtime or other flags. Verify everything with stat: no differences. Maybe Perl is messing things up, so I write a quick patcher in C. Same result! Hummmm, this is weird. There is some pattern here. To check whether I’m right, I load Hex Fiend (another hex editor), edit another copy, and test if it works. NO, it doesn’t. Load VMware, download a trial of WinHex, and modify another copy, but it still doesn’t work.

Result: the trick only works if 0xED is used! At this moment I cannot understand why this is happening. All the checksums match, so there’s no byte difference between the working and non-working copies. Right now I have no ideas on how to solve this (I’m pretty sure I will cook up something while distracted with something else), so is there anyone out there who knows about this or has any other ideas? I must be missing some small detail (been a loooong week!).

I’m already trying to understand where to patch the kernel to remove the code sign check (you never know when this might be useful 😄).

Have fun,
fG!