
Does an algorithm exist to convert any number in the Aleph-Null set into the smallest possible computable number? [duplicate]

This question already has answers here and was closed 12 years ago.

Possible Duplicate:

Programming Logic: Finding the smallest equation to a large number.

I'm looking for an algorithm that will take an arbitrary number from the Aleph-Null set (all positive integers), which is likely to be absolutely enormous, and attempt to simplify it into a computable number, provided the computable number takes up less space than the integer value it is trying to represent (specifically not floating point). Involving tetration/hyperoperators would be optimal.

Does anyone know if anything like this exists? I've looked around quite a bit this morning, but have been unable to find anything. C# code would be optimal, but really, it could be in any language.

Edit: The linked question "Programming Logic: Finding the smallest equation to a large number" (http://mrob.com/pub/ries/index.html) looks promising, but I wonder how well it will deal with large numbers, and whether it is capable of implementing hyperoperators. I'll try it out.


"...(all positive integers) and attempt to simplify it into a computable number (if the computable number takes up less space than the integer value it is trying to represent) (specifically not floating point). Involving tetration/hyperoperators would be optimal."

Yes, and then again, no.

First, you can't actually take inputs from "all positive integers" on a physical computer. At best, you can have an integer whose representational length is bounded by the size of your hard drive.

So your input is now physically constrained to the set I = [0, MAX], where MAX is a physical constant. Congratulations, that makes this problem solvable.

You can consider this from an information-theoretic point of view: every member of I is possible, and every member must be representable. The compressibility comes in when you consider representations: if each representation is to be unique, your goal is to reduce each i in I to a representation whose length is close to the entropy of i itself.

Or, restated, compression comes in by removing redundancy. If your representation has redundancy, it can be compressed.
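For example, a number that happens to be a perfect power carries exactly that kind of redundancy: base^exponent can be far shorter than the digit string. Here is a minimal sketch of the idea, in Python since the question allows any language; iroot and shortest_form are names I've made up for illustration, and "shorter" means simply fewer characters than the decimal expansion:

    def iroot(n, k):
        """Largest integer r with r**k <= n, by binary search (n >= 1)."""
        # r < 2**(bit_length(n)/k), so this upper bound always suffices.
        lo, hi = 1, 1 << (n.bit_length() // k + 1)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** k <= n:
                lo = mid
            else:
                hi = mid - 1
        return lo

    def shortest_form(n):
        """Return 'a^k' when a perfect-power form of n is shorter than
        its plain decimal digits; otherwise return the digits."""
        best = str(n)
        # Exponents beyond bit_length(n) would force the base below 2,
        # so the search space is tiny compared with n itself.
        for k in range(2, n.bit_length() + 1):
            a = iroot(n, k)
            if a ** k == n:
                candidate = f"{a}^{k}"
                if len(candidate) < len(best):
                    best = candidate
        return best

For instance, shortest_form(7**350) returns "7^350", five characters in place of a 296-digit decimal string; for a number with no perfect-power structure it just hands back the digits.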

Possibly, given domain knowledge, you can write the formula that generates the number in a highly compressed fashion. But that relies on a certain regularity in how you get the number; it is then no longer arbitrary. A sketch of one such case follows.
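As a concrete case of such regularity, suppose you know the number arises as an exact power tower of some base b. Then you can peel off one exponentiation per level and emit the tetration expression the question asks about. This is a hedged sketch under that assumption; tetration_form is again a made-up name:

    def tetration_form(n, b=2):
        """Return 'b↑↑k' if n is an exact power tower of b, else None.
        Uses b↑↑k == b ** (b↑↑(k-1)): peel one exponentiation per level."""
        k = 0
        while n > 1:
            m = 0
            while n % b == 0:   # n must be an exact power of b...
                n //= b
                m += 1
            if n != 1:
                return None     # ...otherwise it is not a tower of b
            n, k = m, k + 1     # descend: the exponent is the next level
        return f"{b}↑↑{k}" if k else None

For instance, tetration_form(2**65536) returns "2↑↑5", collapsing a 19,729-digit integer to five characters; but flip a single bit of the input and no such form exists, which is the arbitrariness problem in miniature.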
