Re: executable obfuscator?
- Subject: Re: executable obfuscator?
- From: Greg Guerin <email@hidden>
- Date: Sat, 9 Dec 2006 10:41:05 -0700
Steve Hershey wrote:
>According to my web investigation, this isn't enough.
>Skilled reverse engineers with advanced disassemblers
>can recover a remarkable amount of information from
>stripped and optimized executables. I've never done
>reverse engineering of an executable, so I'm relying
>purely on the opinions of others. I imagine that this
>type of work could be quite interesting and
>challenging.
Your goal is almost certainly futile. It's even more futile if you haven't
quantified who you're defending against and what you're willing to expend
on defenses.
It is technically impossible to protect executable code that runs on client
hosts from an adversary who can throw an unspecified amount of time, money,
and skill at reverse-engineering it. Obfuscators in the Java world have a
long history and considerable sophistication, yet even they are not
perfect. For an example of some really extraordinary measures, read some
of what the SandMark program offers:
<http://sandmark.cs.arizona.edu/>
I researched code protection for Java classes a couple years ago, and
SandMark had the most sophisticated algorithms for obfuscation and
watermarking I was able to find. It's a research project, though, and not
easily licensed (you have to negotiate with UofA, so the first step is to
hire an IP lawyer), so my client decided on a simpler approach. My client
also had no illusions that they were creating an invulnerable defense.
They were fully aware from the beginning that it was a stumbling block, at
best, and had a limited lifetime in the field.
So, accepting that invulnerability against reverse-engineering is
impossible, under any practical definition, you first have to decide what
level of adversary you're willing to invest resources to defend against.
In other words, decide who your enemy is, so you know what you have to
defend against.
Either that, or you have to change your architecture so the defended code
never executes or even exists outside a sheltered environment, such as
running on a remote server via RPC, or sitting in a tamper-resistant chip
on a card.
The other approach to this is to decide what resources you're willing to
spend on defense, then find a suitable solution that costs no more than
that. That is, budget first, then shop, as distinct from deciding a
defensive level first, then budgeting. You may have to iterate a few times.
It really comes down to only a few things:
1. If you don't know who and what you're defending against, you can't
make an informed decision.
2. If you haven't decided on a budget or a defensive level, you can't
make an informed decision.
3. If you can't obtain a tool that does what you need, you have to
change your requirements or create your own tool.
"Change your requirements" means choosing a simpler worst-case adversary or
changing your budget (usually upwards). It may seem like there are easier
answers, but there really aren't.
If you want some examples of the lengths that adversaries will go to, look
at the breaking of the Xbox's original defenses, and also look at other
consoles where people ground the lids off chips to get inside them.
Remember, these people were attacking GAME machines whose sole purpose was
ENTERTAINMENT. If your code is really worth attacking, you'll have to
erect a pretty high barrier if you hope to defend against thousands of
people who'd crack it just for fun.
The problem is that your adversary has your protected content (the code),
and can attack it at leisure using whatever resources he cares to apply.
Being code, it must also fundamentally remain directly executable, at least
until we have CPUs that can decrypt encrypted code entirely on-chip
on-the-fly, which also have strong built-in key-handling. It's the same
basic problem as recently described in this article by Bruce Schneier,
"Separating Data Ownership and Device Ownership":
<http://www.schneier.com/blog/archives/2006/11/separating_data.html>
The "two safes" analogy directly applies to defending client-side code.
-- GG
Xcode-users mailing list (email@hidden)