by suref on 9/3/22, 1:02 PM with 6 comments
by montecarl on 9/5/22, 4:33 AM
It's mentioned in the article, but what I find neat about multi-objective optimization is that (for a certain type of well-behaved problem) the "solution" is not a single point (0-dimensional) like in normal optimization, but is (N-1)-dimensional, where N is the number of objective functions. So if you have 2 objective functions the best solutions all lie on some 1d curve, and if you have 3 they fall on some 2d surface, and so on. This is called the Pareto front and Wikipedia has some nice visualizations[1]. It is then left as an additional exercise to pick out the best solution to your problem from the Pareto front.
A common example from engineering is optimizing for strength and weight. You may want an airplane wing to be very strong and very light; the Pareto front represents the best achievable trade-offs between strength and weight, and you can then use other information to pick a particular solution.
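A quick toy sketch of that wing example (randomly generated designs, nothing real): score each candidate on weight and strength, then keep only the designs that no other design beats on both objectives. Those survivors are the Pareto front.

    import random

    random.seed(0)
    # Each hypothetical design is (weight, strength); lighter and stronger is better.
    designs = [(random.uniform(100, 500), random.uniform(10, 100)) for _ in range(200)]

    def dominates(a, b):
        # a dominates b if it is no worse in both objectives (lower weight,
        # higher strength) and strictly better in at least one.
        return a[0] <= b[0] and a[1] >= b[1] and (a[0] < b[0] or a[1] > b[1])

    # The Pareto front: designs that no other design dominates.
    front = [d for d in designs if not any(dominates(other, d) for other in designs)]

    # With 2 objectives the front traces out a 1d curve; sort by weight to see it.
    for weight, strength in sorted(front):
        print(f"weight={weight:6.1f}  strength={strength:5.1f}")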
by rented_mule on 9/5/22, 10:43 PM
I've repeatedly seen this result in oscillations where X is optimized over the course of a year. When those efforts run dry, the effort switches to optimizing Y for the next year, without realizing that most of the gains in X were sacrificed for Y (repeat). It is a lot simpler to only think of one dimension at a time, but progress can be so much slower. If they've done a really good job, they are dancing in circles near the Pareto front and, to the extent their environment is static, their efforts are going to be neutral at best.
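Here's a toy sketch of that oscillation (made-up objectives, not anything from the actual system): two objectives that pull one knob in opposite directions, optimized one per "year" versus jointly via a simple weighted sum.

    def f1(x):  # hypothetical objective 1, best at x = 0
        return -(x - 0.0) ** 2

    def f2(x):  # hypothetical objective 2, best at x = 1
        return -(x - 1.0) ** 2

    # "Optimize one objective per year": each year jumps to that objective's
    # argmax, throwing away the previous year's gains on the other objective.
    for year in range(1, 7):
        x = 0.0 if year % 2 else 1.0
        print(f"year {year}: x={x:.2f}  f1={f1(x):+.2f}  f2={f2(x):+.2f}")

    # Treating it as one multi-objective problem (here, a simple weighted sum)
    # instead settles on a single point on the Pareto front and stays there.
    w1, w2 = 0.3, 0.7
    x_joint = w2 / (w1 + w2)  # closed-form argmax of w1*f1(x) + w2*f2(x)
    print(f"joint: x={x_joint:.2f}  f1={f1(x_joint):+.2f}  f2={f2(x_joint):+.2f}")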
I've also seen this turned around when it was later understood to be a multi-objective optimization problem. That involved hiring people with a background in mathematical optimization (operations research, game theory, control theory, statistics, etc.) to build a system that would constantly adjust system parameters to stay near optimality. They built ML models, control systems, and auction systems that all worked together. The result was incredibly different. What had been years of experimentation, often with later experiments undoing earlier ones, became a system that adapted in near real time to changing conditions. The pandemic likely would have hammered this company because it put many of their customers out of business. Instead, the system changed its own behavior to get roughly the best results it could and kept adapting as the pandemic went through all its stages. Their results are now ahead of where they were at the start of the pandemic, even though something like half of their customers are out of business.
A downside of automating this is that it is very difficult to experiment on the optimization system itself. Measuring improvements requires advanced experiment design and analysis, typically necessitating people with a PhD in specific areas of statistics (stats as used in vaccine trials, market economics, etc.). It is also difficult to understand what the system is doing and why. And without real constraints placed on it by those running it, a lot of damage can be done to variables not represented in the system, as it cannibalizes them to optimize the variables it does know about - I suspect this is the source of a lot of the high profile damage Facebook has done to the world.