Secure C# Assemblies from Unauthorized Callers
Is there any way to secure your assembly down to the class/property and class/method level to prevent them from being used or called by another assembly that isn't signed by our company?
I would like to do this without any requirement on strong naming (like using StrongNameIdentityPermission) and stick with how an assembly is signed. I really do not want to resort to using the InternalsVisibleTo attribute, as that is not maintainable in an ever-changing software ecosystem.
For example:
Scenario One
Foo.dll is signed by my company and Bar.dll is not signed at all.
Foo has class A; Bar has class B.
Class A has a public method GetSomething(). Class B tries to call Foo.A.GetSomething() and is rejected.
Rejected can mean an exception or being ignored in some way.
Scenario Two
Foo.dll is signed by my company and Moo.dll is also signed by my company.
Foo has class A; Moo has class C.
Class A has a public method GetSomething(). Class C tries to call Foo.A.GetSomething() and is not rejected.
If you want to limit the callers to only code that has been Authenticode-signed by a specific certificate, you can still use CAS (just not StrongNameIdentityPermission).
Use PublisherIdentityPermission just as you would use any other CAS permission. Or, if you want to do it declaratively, use the corresponding attribute.
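For example, a minimal declarative sketch might look like the following (the class and method follow the question's scenario, and the certificate value is a placeholder for your company's Authenticode certificate). Note that identity permissions behave differently under the .NET 4 security model, where demands against fully trusted callers always succeed unless legacy CAS policy is enabled:

using System.Security.Permissions;

public class A
{
    // Demand that every caller up the stack was Authenticode-signed with this publisher
    // certificate. X509Certificate takes the hex-encoded certificate; CertFile (a path
    // to a .cer file) is an alternative way to supply it.
    [PublisherIdentityPermission(SecurityAction.Demand, X509Certificate = "3082...placeholder...")]
    public string GetSomething()
    {
        return "something";
    }
}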
Obviously you have to perform a check on every call from within the called method - any external system trying to enforce the restrictions is easily bypassed using reflection.
From within the method you can use
new StackTrace().GetFrame(1).GetMethod().Module.Assembly
to get the calling assembly. Now you can use
callingAssembly.GetName().GetPublicKey()
to obtain the public key of the calling assembly and compare it with the public key of the called assembly. If they match - assuming all your assemblies are signed with the same key pair - the caller is accepted as a legitimate caller.
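Putting those pieces together, the in-method check might look like the following sketch (the class name and GetSomething() follow the question's scenario; the SecurityException and the NoInlining guard against the stack frame being optimized away are my additions):

using System.Diagnostics;
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Security;

public class A
{
    [MethodImpl(MethodImplOptions.NoInlining)] // keep a real stack frame so GetFrame(1) is the caller
    public string GetSomething()
    {
        // Frame 1 is the immediate caller of GetSomething().
        Assembly callingAssembly = new StackTrace().GetFrame(1).GetMethod().Module.Assembly;

        byte[] callerKey = callingAssembly.GetName().GetPublicKey();
        byte[] ownKey = typeof(A).Assembly.GetName().GetPublicKey();

        // An unsigned assembly yields an empty key, so it fails the comparison as well.
        if (callerKey == null || ownKey == null || !callerKey.SequenceEqual(ownKey))
            throw new SecurityException("Caller is not signed with the company key.");

        return "something";
    }
}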
But there is one loophole - a third-party assembly can be delay-signed with your company's public key and excluded from digital signature verification. As a consequence, the loader will load the third-party assembly with a strong name and your company's public key even though it is not actually signed. To close this loophole you have to check the signature. There is no managed API, so you have to P/Invoke
Boolean StrongNameSignatureVerificationEx(
String wszFilePath,
Boolean fForceVerification,
ref Boolean pfWasVerified)
with fForceVerification set to true and check whether the result is true.
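A possible P/Invoke declaration and call might look like this (the declaration follows the commonly published signature for mscoree.dll; VerifyCallerSignature is an illustrative helper, and the API is deprecated in newer runtimes in favor of the ICLRStrongName interface):

using System.Runtime.InteropServices;
using System.Security;

internal static class StrongNameNative
{
    [DllImport("mscoree.dll", CharSet = CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.Bool)]
    internal static extern bool StrongNameSignatureVerificationEx(
        [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath,
        [MarshalAs(UnmanagedType.Bool)] bool fForceVerification,
        [MarshalAs(UnmanagedType.Bool)] ref bool pfWasVerified);

    internal static void VerifyCallerSignature(System.Reflection.Assembly caller)
    {
        bool wasVerified = false;

        // fForceVerification = true ignores any skip-verification (delay-signing) registry entries.
        if (!StrongNameSignatureVerificationEx(caller.Location, true, ref wasVerified))
            throw new SecurityException("Strong-name signature verification failed for the caller.");
    }
}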
All together this may be quite a lot of overhead per call. The temptation is probably to cache the result, but for a caller with reflection permission it is probably not very hard to manipulate such a cache. On the other hand, you will never be 100% sure. Whoever controls the system is free to do (almost) anything he wants - attach a debugger, modify memory contents, manipulate libraries or the whole runtime. Finally, you also have to effectively protect your assembly from decompilation and modification.
I think it's too much fuss for nothing! If you really want security, put your code behind a server and use a client-server architecture. Or web services. Or something in between like WCF, or remoting. Then use authentication to authenticate a client.
Heck, you can make everything private, expose a public API, and route the calls locally.
Securing a DLL from unauthorized callers in a desktop-only environment only makes matters more complicated and harder to work with. Not to mention that it would look pretty ugly on the inside.
I see some conventions emerging, and they may work for you, but they won't give you the "total security" you require. If you have an assembly that is supposed to be hidden from customers, don't put it in the GAC, and use namespaces postfixed with something like "INTERNAL".
First of all, as you realize, it's not enough to use InternalsVisibleTo - you would also need to sign and strongly name each assembly to ensure someone can't just spoof the name.
Now that that's out of the way, you would have to develop a challenge-response implementation on your own - this isn't something you can do unless you're willing to use the InternalsVisibleTo approach that you explicitly said you don't want to use.
In a C-R model, you would need to pass some kind of token with every method call (or perhaps just to instantiate an object). The token would be a class that only your code can create an instance of - I would make this an internal class of the assembly you want to consume and make it accessible with InternalsVisibleTo - this way only a single class needs to be managed:
// SharedAssembly.dll
// marks ConsumingAssembly.dll as having access to internals...
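// The IVT grant itself would go in SharedAssembly's AssemblyInfo.cs; the public key shown
// here is a placeholder for ConsumingAssembly's real public key:
// [assembly: System.Runtime.CompilerServices.InternalsVisibleTo(
//     "ConsumingAssembly, PublicKey=0024000004800000...")]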
internal sealed class AccessToken { }
public class SecuredClass
{
    public static bool WorkMethod( AccessToken token, string otherParameter )
    {
        if( token == null )
            throw new ArgumentException(); // you may want a custom exception.

        // do your business logic...
        return true;
    }
}
// ConsumingAssembly.dll (has access via InternalsVisibleTo)
public class MainClass
{
    public static void Main()
    {
        var token = new AccessToken(); // can create this because of IVT access
        SecuredClass.WorkMethod( token, "" ); // tada...
    }
}
You may want to put the AccessToken class in a third assembly that both the service provider and the consumer know about, so that you don't have to maintain a separate set of access-token classes for different assemblies.
Building a C-R mechanism for every method is cumbersome and tedious. It also isn't 100% foolproof - someone with enough time and patience could probably find a way around it.
The best option (which may or may not be possible in your case) would be to keep your private code on your own servers and only expose it as a web service (or something similar). This allows you to actively manage accessibility to your IP and lets you update who has access in a centralized (rather than distributed) manner. Technologies already exist to restrict access to web services using certificates, message signatures, and encryption. This would be the most reliable (and proven) way to control access to your IP.
I have seen DLLs written by companies (most notably Pegasus Imaging) that use a challenge/response system to unlock the assembly. The purchaser of the DLL is provided with a "License Code", tied to the name of the purchaser, which the consumer of the DLL then uses to unlock a specific subset of the DLL's features.
So when the assembly is used for the first time by the application, the Unlock() method is called in the assembly. The user name and unlock code are passed in and run through an algorithm that verifies identity, presumably using a public-key encryption algorithm of some sort.
There are some bits encoded in the unlock code that specify features; these bits then set some feature flags in the assembly. All calling functions must check these flags to determine whether the appropriate feature is enabled. The Unlock() method is called only once and is good for the lifetime of the loaded assembly.
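As a rough sketch of how such a scheme could be built: the answer guesses that public-key crypto is involved, but the sketch below uses a simpler HMAC shared-secret check to keep it short, which also illustrates why the verification key ends up inside the assembly. Everything here - the Licensing class, the Features flags, the unlock-code format, and the secret - is hypothetical, not how Pegasus or any particular vendor actually does it:

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

[Flags]
public enum Features { None = 0, Basic = 1, Pro = 2, Export = 4 }

public static class Licensing
{
    // The verification secret has to ship inside the assembly, which is exactly the
    // weakness mentioned below. (Value is a placeholder.)
    private static readonly byte[] Secret = Encoding.UTF8.GetBytes("replace-with-obfuscated-secret");

    public static Features Unlocked { get; private set; }

    // Assumed unlock-code format: featureBits + "-" + Base64(HMACSHA256(userName + featureBits)).
    public static bool Unlock(string userName, string unlockCode)
    {
        string[] parts = unlockCode.Split('-');
        if (parts.Length != 2 || !int.TryParse(parts[0], out int featureBits))
            return false;

        byte[] expected;
        using (var hmac = new HMACSHA256(Secret))
            expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(userName + parts[0]));

        byte[] provided;
        try { provided = Convert.FromBase64String(parts[1]); }
        catch (FormatException) { return false; }

        if (!provided.SequenceEqual(expected))
            return false;

        Unlocked = (Features)featureBits; // the encoded bits gate individual features
        return true;
    }

    // Every feature-gated entry point checks the flags before doing any work.
    public static void Export()
    {
        if (!Unlocked.HasFlag(Features.Export))
            throw new InvalidOperationException("The Export feature is not licensed.");
        // ... actual export logic ...
    }
}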
Of course, since you have to provide the "private key" in the assembly, this procedure is not hack-proof (what is?), but it is reasonably secure, and will keep the honest people honest.
I don't think there's a way to do this if you don't control the execution environment under which the code is run. Code running with full trust on a user's machine would be able to get around any restrictions you added. For example, full trust code could invoke private or internal methods with the reflection APIs, so even using the InternalsVisibleToAttribute wouldn't work.
If you control the execution environment, you can create an AppDomain where your code is fully trusted, and third party code is partially trusted and can't call your code unless you put an AllowPartiallyTrustedCallersAttribute (APTCA) on the assembly. You can restrict which methods can be called in an APTCA assembly with the SecurityCritical and SecuritySafeCritical attributes.
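A minimal sketch of what that looks like on the secured assembly's side (the type and member names are illustrative; the assembly itself would also be strong-named and added to the full-trust list of the sandboxed AppDomain):

using System.Security;

// Without this assembly-level attribute, partially trusted code cannot call into
// a fully trusted assembly at all.
[assembly: AllowPartiallyTrustedCallers]

public static class SecuredApi
{
    // Partially trusted code in the sandbox cannot call this member.
    [SecurityCritical]
    public static string GetSecret()
    {
        return "full-trust callers only";
    }

    // Callable from partial trust; it is responsible for validating its inputs
    // before doing anything security-sensitive internally.
    [SecuritySafeCritical]
    public static int GetPublicInfo()
    {
        return 42;
    }
}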
How to: Run Partially Trusted Code in a Sandbox