
Chromosome representation for real numbers?

I am trying to solve a problem using genetic algorithms.

The problem is to find the set of integral and real values that optimizes a function.

I need to represent the problem using a binary string (simply because I understand the concept of crossover/mutation etc much better when applied to binary string chromosomes).

A candidate solution S would be the set {I1, I2, ..., IN, R1, R2, ..., RM}

Where the I variables are integers and the R variables are floating point numbers.

I want to be able to transform the candidate solution S into a binary string, but I don't know how to encode the floating point numbers.

Any ideas on how to encode the set S into a chromosome?

Although the solution is supposed to be language agnostic, my preferred choice of language (in decreasing order of preference for this particular task) is:

Python, C++, C

BTW, I am coding the problem using Pyevolve


No, I think that a binary representation is wrong for your problem. Your underlying data is not binary, so why use binary? Do mutation and crossover on the real and integer numbers directly, not on their binary representation.

Simplest crossover: first parent ABCDE, where A, B, ... are floating-point numbers; second parent MNOPQ. Choose a crossover point randomly, say after D: first offspring ABCDQ, second MNOPE.
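That single-point crossover on real-valued chromosomes is a few lines in Python (the function name and list encoding are my own sketch, not Pyevolve API):

```python
import random

def crossover(parent1, parent2, point=None):
    """Single-point crossover on real-valued chromosomes (lists of floats).

    Genes before the cut come from one parent, genes after it from the other.
    """
    if point is None:
        point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

p1 = [1.0, 2.0, 3.0, 4.0, 5.0]    # "ABCDE"
p2 = [6.0, 7.0, 8.0, 9.0, 10.0]   # "MNOPQ"
c1, c2 = crossover(p1, p2, point=4)  # cut after the fourth gene ("D")
# c1 == [1.0, 2.0, 3.0, 4.0, 10.0]  ("ABCDQ")
# c2 == [6.0, 7.0, 8.0, 9.0, 5.0]   ("MNOPE")
```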


I would suggest rethinking whether you actually need the bit-string representation; the conversion from float to bits and back is a lot of overhead. If you just stay with floating-point values, i.e. a single candidate solution is an array of N floats, you can pass it directly to your evaluation function (objective function). Then use crossover methods like simulated binary crossover (SBX, http://www.slideshare.net/paskorn/self-adaptive-simulated-binary-crossover-presentation), which mimics the effect you would get by converting your floats to a binary representation and performing crossover on that. The results of SBX are pretty good, and the analogy to what would happen if you dealt with bit strings becomes pretty clear after a while. It looks like a lot in that slideshow, but it all comes down to just a few lines to implement the SBX crossover.


You can pack binary data into buffers with the facilities provided in the struct module. See here: http://docs.python.org/library/struct.html
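For example, a mixed chromosome of integers and floats can be packed into one buffer and viewed as a bit string (the format string here is just one possible layout):

```python
import struct

# Pack two integers and two floats into one byte buffer:
# "i" = 32-bit int, "d" = 64-bit IEEE 754 double, "<" = little-endian layout.
fmt = "<iidd"
packed = struct.pack(fmt, 3, -7, 1.5, 2.25)

# View the buffer as one long bit string, ready for bitwise mutation/crossover.
bits = "".join(f"{byte:08b}" for byte in packed)

# Unpack the (possibly mutated) buffer back into the original types.
restored = struct.unpack(fmt, packed)
# restored == (3, -7, 1.5, 2.25)
```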

That said, I personally love Python, BUT: if you want to repeatedly and efficiently take a set of integers and floating-point numbers, treat it as one big bit string, and apply bitwise mutation to it, I don't think Python is the best choice of language. This is much more straightforward (and faster) in lower-level languages -- I'd go for C.

Good luck with the algorithm!


Use IEEE754 representation for floating-point numbers and two's complement representation for integers. Or, use integers and floating-point numbers, safe in the knowledge that behind the scenes your computer is already using these binary representations.
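In Python you can expose those underlying bits via the struct module, which also gives you bit-flip mutation on floats for free (a sketch; the helper names are mine):

```python
import struct

def float_to_bits(x):
    # Reinterpret the 64 bits of an IEEE 754 double as an unsigned integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def bits_to_float(b):
    # The inverse reinterpretation: 64-bit unsigned integer back to a double.
    return struct.unpack("<d", struct.pack("<Q", b))[0]

def flip_bit(x, i):
    # Mutate a float by flipping bit i of its IEEE 754 representation.
    return bits_to_float(float_to_bits(x) ^ (1 << i))

# flip_bit(1.5, 63) flips the sign bit, giving -1.5.
```

Note that flipping a high exponent bit can change the value by many orders of magnitude (or produce inf/NaN), which is one practical argument for operating on the real values directly.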


If I understand the problem correctly, this is completely language agnostic. You should be able to represent the floating point number in the standardized IEEE representation. Here's a tutorial.

Once you have that representation, you don't need to care what is what: you just apply whatever crossover (single point, two point, whatever) to your bits.
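At the bit-string level, the crossover is oblivious to the types underneath; a minimal sketch on '0'/'1' strings:

```python
def bit_crossover(a, b, point):
    """Single-point crossover on two equal-length bit strings."""
    return a[:point] + b[point:], b[:point] + a[point:]

c1, c2 = bit_crossover("11110000", "00001111", 4)
# c1 == "11111111", c2 == "00000000"
```

The same swap works whether the bits encode integers, IEEE 754 floats, or a mix of both.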

