Re: executable obfuscator?
- Subject: Re: executable obfuscator?
- From: "Andy O'Meara" <email@hidden>
- Date: Sun, 10 Dec 2006 15:52:07 -0500
Greg, great points--you summed it up perfectly, imho.
Steve, assuming you're willing to expend effort at a level that yields a pretty
high bang for the buck, below is some simple stuff that our company
does. Our software targets both Mac OS and Windows, so the stuff
that we do has to be maintenance free as well as cross-platform. The
big picture is to block entry-level crackers, and as Greg pointed out,
you'll never be able to stop determined blackbelt crackers. To us,
this is a winning strategy since entry-level crackers are typically
teenager-types that give up if it's not an easy crack. The
blackbelts are professional software guys by day and are considerably
fewer in number (so you have to have a big, big product to get their
attention). By definition, the blackbelt's time is worth money, so
as long as you make it a PITA for them and your title isn't a must-
have, they'll move on to something else. Here we go...
(a) Lightly encrypt "telltale" strings (and decrypt them on demand)
so that a cracker looking at the app's string table can't get any
useful leads. A string like "Sorry, the serial number you entered
is invalid", and where it's loaded, is obviously the kind of thing crackers
look for. If, instead, the cracker opens your string table and sees
30 strings that read like gibberish, he'll be instantly turned off.
Naturally, the more strings you make like this (including red
herrings), the more time it will take for a cracker to locate the
critical parts of code. Make a macro that decrypts the string and
checks against a checksum:
#define _LoadStr( _s, _str, _chk ) \
    if ( _s->AssignFromEncoded( _str ) != _chk ) delete this;
...
_LoadStr( myStr, "ffxekjhfjknxkeffsdfsdfnjwehfxx", 0x3452FB98 )
...
As you can see, any string tampering will result in an instability
that doesn't show up till later, which is very ugly for a cracker to
deal with. Obviously, the downside is that you need to have a
simple util on hand to encode your strings, but this isn't typically
a big deal since most strings aren't edited often by developers.
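For what it's worth, here's a rough sketch of what the decode side of that
macro could look like. The class name, the XOR key, and the FNV-style
checksum below are placeholders made up purely for illustration (not an
actual shipping scheme)--any lightweight reversible encoding plus a checksum
of the decoded text will do:

// Hypothetical sketch only: MyString, kXorKey, and the FNV-style checksum
// are illustrative stand-ins, not a real product's scheme.
#include <cstdint>
#include <string>

static const uint8_t kXorKey = 0x5A;    // toy key, assumed

class MyString {
public:
    // Decode an XOR-obscured string and return a checksum of the decoded
    // text, so _LoadStr can verify the string table wasn't tampered with.
    uint32_t AssignFromEncoded( const char* encoded )
    {
        mValue.clear();
        uint32_t chk = 0x811C9DC5u;                    // FNV-1a offset basis
        for ( const char* p = encoded; *p; ++p )
        {
            char c = (char)( (uint8_t)*p ^ kXorKey );  // un-XOR one character
            mValue += c;
            chk = ( chk ^ (uint8_t)c ) * 16777619u;    // fold into running hash
        }
        return chk;
    }
    const std::string& Value() const { return mValue; }

private:
    std::string mValue;
};

A tiny command-line util that does the reverse (XOR-encodes and prints the
checksum) is all you need to generate the string table entries.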
(b) Don't use blocking dialogs/alerts in the proximity of critical
decision making. Cracking 101 is to break the process when the app
is displaying some dialog. Doing this will allow a cracker to (a)
get an exact position in your code to start checking out, and (b)
browse the code for clues to the critical branch to edit. Instead,
execute your critical logic far away from where (and when) the user
gets informed. That way, the cracker has a lot of work to do to find
the source logic. To make things tougher on the cracker, make the
logic a chain of separated steps, so that even watching a couple
memory locations (and when something is altered) doesn't reveal the
location of the originating logic.
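To make the "chain of separated steps" idea concrete, here's a minimal
sketch; every name and constant in it is made up for illustration
(ValidateSerial stands in for whatever your real check is):

#include <cstdint>
#include <cstdio>
#include <cstring>

// Stand-in for a real serial check (assumed, illustration only).
static uint32_t ValidateSerial( const char* serial )
{
    return ( serial && std::strlen( serial ) == 16 ) ? 0xA5A5A5A5u : 0u;
}

static uint32_t gLicenseDigest = 0;      // step 1 writes this at launch
static bool     gRunDegraded   = false;  // step 2 derives this much later

void CheckSerialAtLaunch( const char* serial )    // step 1: no UI anywhere near this
{
    gLicenseDigest = ValidateSerial( serial );
}

void ReviewDigestDuringIdle()                     // step 2: runs minutes later, elsewhere
{
    gRunDegraded = ( ( gLicenseDigest ^ 0xA5A5A5A5u ) != 0 );
}

void ShowStatusIfNeeded()                         // step 3: UI layer, far from both
{
    if ( gRunDegraded )
        std::printf( "Running in demo mode\n" );  // stand-in for a non-blocking banner
}

Because no single function both checks the serial and reports the result,
breaking on the dialog (or watching one memory location) doesn't lead
straight back to the originating logic.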
(c) Use macros for your critical logic (and perform it in multiple
places), so that there's not a single key branch for a cracker to
target. This makes the cracker have to work a lot harder (not to
mention that it's much less satisfying--most crackers get off on a
single, elegant edit). The second a crack turns into something nasty
and inelegant, most amateur crackers will want to move on to something else. Also,
a crack that requires non-trivial edits to a binary will require the
cracker to make a patch app, which is obviously a chunk of work most
crackers aren't willing to do. The nice thing about a macro is that
you're guaranteed it'll be inlined while you still only have one copy
of the code to maintain.
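Something along these lines is what I mean (the names and the constant are
again just made-up placeholders)--the same test compiles inline into every
call site, so there's no single branch to NOP out:

#include <cstdint>

// Assumed global, written at launch as in the earlier sketch.
static uint32_t gLicenseDigest = 0;

// The check lives in exactly one place in source, but gets stamped
// inline into every function that uses it.
#define _CheckLic( _fallback )                        \
    do {                                              \
        if ( ( gLicenseDigest ^ 0xA5A5A5A5u ) != 0 )  \
        {                                             \
            _fallback;                                \
        }                                             \
    } while ( 0 )

void ExportMovie()
{
    _CheckLic( return );     // one inlined copy of the check...
    /* ... real export work ... */
}

void ApplyEffect()
{
    _CheckLic( return );     // ...and another, in a completely different code path
    /* ... real effect work ... */
}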
(d) When you have detected an integrity violation (one that can only be
caused by a cracker messing with stuff), write a flag to the file
system (or registry) that puts your app in a safe mode where it's not
executing your sensitive logic (falling instead into a passive/disabled
state). This will cause the cracker to have to burn time discovering
and disabling this, so that he can get back on track. The way these
guys make progress is by repeatedly quitting and restarting your app,
so if the environment changes on them, you're making it a lot more
difficult for them.
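Here's a minimal cross-platform sketch of that flag (the file name and
location are made-up placeholders; on Windows you'd more likely stash it in
the registry, as mentioned above):

#include <cstdio>
#include <cstdlib>
#include <string>

// Pick an innocuous-looking path; nothing here is a real product's location.
static std::string FlagPath()
{
#ifdef _WIN32
    const char* base = std::getenv( "APPDATA" );
#else
    const char* base = std::getenv( "HOME" );
#endif
    return std::string( base ? base : "." ) + "/.cachex_state";
}

void MarkTampered()                 // call when an integrity check trips
{
    if ( std::FILE* f = std::fopen( FlagPath().c_str(), "wb" ) )
    {
        std::fputc( 0x7F, f );      // the content doesn't matter; existence does
        std::fclose( f );
    }
}

bool ShouldRunInSafeMode()          // consult this later, far away from MarkTampered()
{
    if ( std::FILE* f = std::fopen( FlagPath().c_str(), "rb" ) )
    {
        std::fclose( f );
        return true;                // quietly skip the sensitive logic from now on
    }
    return false;
}

Since the flag persists across a quit/relaunch, the cracker's usual
restart-and-retry loop stops behaving the way he expects until he finds and
removes the flag too.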
In general, think back on some of the most difficult bugs you've ever
had to find--the ones that caused inconsistent instability (race
conditions, memory smashing, double deletion, uninitialized data).
All you need to do is replicate that, and you'll drive crackers nuts
and they'll move on to something else.
Good luck,
Andy
On Dec 9, 2006, at 12:41 PM, Greg Guerin wrote:
Steve Hershey wrote:
According to my web investigation, this isn't enough.
Skilled reverse engineers with advanced disassemblers
can recover a remarkable amount of information from
stripped and optimized executables. I've never done
reverse engineering of an executable, so I'm relying
purely on the opinions of others. I imagine that this
type of work could be quite interesting and
challenging.
Your goal is almost certainly futile. It's even more futile if you haven't
quantified who you're defending against and what you're willing to expend
for defenses.
It is technically impossible to protect executable code that runs on client
hosts from an adversary who can throw an unspecified amount of time, money,
and skill at reverse-engineering it. Obfuscators in the Java world have a
long history and considerable sophistication, yet even they are not
perfect. For an example of some really extraordinary measures, read some
of what the SandMark program offers:
<http://sandmark.cs.arizona.edu/>
I researched code protection for Java classes a couple years ago, and
SandMark had the most sophisticated algorithms for obfuscation and
watermarking I was able to find. It's a research project, though, and not
easily licensed (you have to negotiate with UofA, so the first step is to
hire an IP lawyer), so my client decided on a simpler approach. My client
also had no illusions that they were creating an invulnerable defense.
They were fully aware from the beginning that it was a stumbling block, at
best, and had a limited lifetime in the field.
So, accepting that invulnerability against reverse-engineering is
impossible, under any practical definition, you first have to decide what
level of adversary you're willing to invest resources to defend against.
In other words, decide who your enemy is, so you know what you have to
defend against.
Either that, or you have to change your architecture so the defended code
never executes or even exists outside a sheltered environment, such as
running on a remote server via RPC, or sitting in a tamper-resistant chip
on a card.
The other approach to this is to decide what resources you're willing to
spend on defense, then find a suitable solution that costs no more than
that. That is, budget first, then shop, as distinct from deciding a
defensive level first, then budgeting. You may have to iterate a few times.
It really comes down to only a few things:
1. If you don't know who and what you're defending against, you can't
make an informed decision.
2. If you haven't decided on a budget or a defensive level, you can't
make an informed decision.
3. If you can't obtain (buy or acquire) a tool that does what you need,
you have to change your requirements or create your own tool.
"Change your requirements" means choosing a simpler worst-case
adversary or
changing your budget (usually upwards). It may seem like there are
easier
answers, but there really aren't.
If you want some examples of the lengths that adversaries will go to, look
at the breaking of the Xbox's original defenses, and also look at other
consoles where people ground the lids off chips to get inside them.
Remember, these people were attacking GAME machines whose sole purpose was
ENTERTAINMENT. If your code is really worth attacking, you'll have to
erect a pretty high barrier if you hope to defend against thousands of
people who'd crack it just for fun.
The problem is that your adversary has your protected content (the code),
and can attack it at leisure using whatever resources he cares to apply.
Being code, it must also fundamentally remain directly executable, at least
until we have CPUs that can decrypt encrypted code entirely on-chip
on-the-fly, which also have strong built-in key-handling. It's the same
basic problem as recently described in this article by Bruce Schneier,
"Separating Data Ownership and Device Ownership":
<http://www.schneier.com/blog/archives/2006/11/separating_data.html>
The "two safes" analogy directly applies to defending client-side code.
-- GG