Is converting a maximization algorithm into a minimization one a matter of changing max to min?
This might come across as a silly question, but I am curious: given a maximization algorithm and asked to produce the dual (minimization version), is it just a matter of converting all max's into min's and making other basic adjustments?
If yes, are there any problems where this would not be the case? If not, is there a good intuitive reason why this does not work?
Yes, maximization and minimization problems are basically the same. The value of max(f(x)) equals -min(-f(x)), and the same x that maximizes f(x) also minimizes -f(x).
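As a quick, made-up check of that identity on a finite grid of candidate points (the function and the grid are only for illustration):

```python
# Check that max f(x) == -min(-f(x)) on a finite set of candidates.

def f(x):
    return -(x - 3) ** 2 + 7   # peaks at x = 3 with value 7

candidates = [x / 10 for x in range(0, 61)]   # 0.0, 0.1, ..., 6.0

max_value = max(f(x) for x in candidates)
min_of_negated = min(-f(x) for x in candidates)

print(max_value)        # 7.0
print(-min_of_negated)  # 7.0 -- the same number, recovered from the minimization
```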
When searching game trees, this relation is used, for example, to convert a minimax search into a negamax search. The advantage is that instead of writing two functions, one that maximizes your score and another that minimizes the opponent's score, you write a single maximizing function and flip the sign of the evaluation function's result when it is the other player's move.
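A minimal negamax sketch (not taken from any particular engine; evaluate(node) and children(node) are hypothetical game-specific callbacks):

```python
def negamax(node, depth, color, evaluate, children):
    """Best score for the player to move; color is +1 or -1."""
    kids = children(node)
    if depth == 0 or not kids:
        return color * evaluate(node)   # score from the current player's point of view
    best = float("-inf")
    for child in kids:
        # The opponent's best score, negated, is our score for this child.
        best = max(best, -negamax(child, depth - 1, -color, evaluate, children))
    return best
```

The same routine handles both players: rather than alternating between a maximizing and a minimizing function, the sign of the evaluation is flipped at every level.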
It depends on how your maximization algorithm works. Numerical algorithms that need gradients will probably involve more than swapping max for min, and other complexities can come up.
However, there is indeed a very easy fix: maximizing a function f(a,b,c) is equivalent to minimizing -f(a,b,c), so just negate the result of the function.
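A small sketch of that sign flip with an off-the-shelf minimizer; it assumes SciPy's scipy.optimize.minimize is available, and the objective is just an illustrative paraboloid:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Objective we want to MAXIMIZE: a downward-opening paraboloid,
    # with its peak value 5 at the point (1, -2).
    return -(x[0] - 1.0) ** 2 - (x[1] + 2.0) ** 2 + 5.0

# Hand the minimizer the negated objective.
result = minimize(lambda x: -f(x), x0=np.zeros(2))

print(result.x)     # approximately [1, -2], the maximizer of f
print(-result.fun)  # approximately 5, the maximum of f (negate the minimum back)
```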
It works when the problem is obviously symmetrical, like finding a maximum versus a minimum on a 2D surface. However, since you mention the Knapsack problem: that is a whole other class of problem, and its optimum cannot be found by applying some max() function to a vector in a greedy manner.
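As a made-up illustration, here is a tiny 0/1 knapsack instance where a greedy take-the-best-value/weight-ratio rule misses the optimum that a brute-force search finds:

```python
from itertools import combinations

items = [(6, 60), (5, 45), (5, 45)]   # (weight, value) pairs
capacity = 10

# Greedy by value/weight ratio: grabs the weight-6 item first, then nothing else fits.
greedy_value, remaining = 0, capacity
for w, v in sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True):
    if w <= remaining:
        greedy_value += v
        remaining -= w

# Brute force over all subsets finds the true optimum.
best = max(
    sum(v for _, v in subset)
    for r in range(len(items) + 1)
    for subset in combinations(items, r)
    if sum(w for w, _ in subset) <= capacity
)

print(greedy_value)  # 60
print(best)          # 90 -- the two weight-5 items together beat the greedy choice
```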