The source files for all examples can be found in /examples.

Relaxed Two-Way Partitioning Problem

We consider the (nonconvex) two-way partitioning problem from Boyd and Vandenberghe (2004), p. 219 [1]:

\[\begin{array}{ll} \text{minimize} & x^\top W x \\ \text{subject to} & x_i^2 = 1, \quad i=1,\dots, n, \end{array}\]

with $x \in \mathbf{R}^n$ and $W \in \mathbf{S}^n$. The problem can be interpreted as finding a partition of the points $i = 1,\dots, n$ into two sets, where the cost of placing points $i$ and $j$ in the same set is $W_{ij}$, and $-W_{ij}$ if they are placed in different sets. Brute-forcing the solution $x^*$ requires checking all $2^n$ sign combinations, which quickly becomes intractable, e.g. for $n \geq 30$. However, a lower bound for this problem can be computed by solving its convex dual:

\[\begin{array}{ll} \text{maximize} & -\sum_i \nu_i \\ \text{subject to} & W + \text{diag}(\nu) \in \mathbf{S}_+^n. \end{array}\]
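
To see where this dual comes from, note (following [1]) that the Lagrangian of the partitioning problem is

\[L(x, \nu) = x^\top W x + \sum_i \nu_i \left(x_i^2 - 1\right) = x^\top \left(W + \text{diag}(\nu)\right) x - \sum_i \nu_i,\]

which is bounded below in $x$ (with infimum $-\sum_i \nu_i$) exactly when $W + \text{diag}(\nu)$ is positive semidefinite. Maximising this bound over $\nu$ gives the dual problem above.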

Solving this problem with optimal objective $d^*$ yields a lower bound at least as good as the bound based on the minimum eigenvalue, $d^* \geq n \lambda_{\text{min}}(W)$. Let's set up the problem for $n = 20$ and with a specially structured (banded) $W$:

using LinearAlgebra, Random, COSMO, JuMP

rng = Random.MersenneTwister(212313);
n = 20;

W = diagm(0 => randn(rng, n));     # random main diagonal
W += diagm(1 => randn(rng, n-1));  # random first superdiagonal
W = Symmetric(W)                   # symmetrise to obtain a banded (tridiagonal) W
20×20 LinearAlgebra.Symmetric{Float64, Matrix{Float64}}:
 1.6562   1.03129     0.0        …   0.0        0.0        0.0
 1.03129  1.38424     0.0970901      0.0        0.0        0.0
 0.0      0.0970901   0.648929       0.0        0.0        0.0
 0.0      0.0        -0.210802       0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0        …   0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0        …   0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0        …   0.0        0.0        0.0
 0.0      0.0         0.0            0.500723   0.0        0.0
 0.0      0.0         0.0           -0.951779   0.518206   0.0
 0.0      0.0         0.0            0.518206  -0.136439  -0.175429
 0.0      0.0         0.0            0.0       -0.175429   0.781579

As you can see, the matrix $W$ imposes a sparsity structure on the LMI constraint. Accordingly, COSMO is able to use chordal decomposition to break the LMI constraint into multiple smaller PSD constraints, which can make a significant difference in the performance of the algorithm for large $n$.
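
Chordal decomposition is enabled in COSMO by default. If you want to compare against solving the full undecomposed LMI, the solver's decompose setting can be switched off; a minimal sketch, assuming the setting is passed as a JuMP optimizer attribute (the model name is illustrative):

# illustrative comparison model with chordal decomposition turned off
model_full = JuMP.Model(JuMP.optimizer_with_attributes(COSMO.Optimizer, "decompose" => false));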

model = JuMP.Model(COSMO.Optimizer);
@variable(model, ν[1:n]);
@objective(model, Max, -ones(n)' * ν)
@constraint(model, Symmetric(W + diagm(ν)) in JuMP.PSDCone());
JuMP.optimize!(model)
------------------------------------------------------------------
          COSMO v0.8.8 - A Quadratic Objective Conic Solver
                         Michael Garstka
                University of Oxford, 2017 - 2022
------------------------------------------------------------------

Problem:  x ∈ R^{38},
          constraints: A ∈ R^{57x38} (56 nnz),
          matrix size to factor: 95x95,
          Floating-point precision: Float64
Sets:     PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          ... and 14 more
Decomp:   Num of original PSD cones: 1
          Num decomposable PSD cones: 1
          Num PSD cones after decomposition: 19
          Merge Strategy: CliqueGraphMerge
Settings: ϵ_abs = 1.0e-05, ϵ_rel = 1.0e-05,
          ϵ_prim_inf = 1.0e-04, ϵ_dual_inf = 1.0e-04,
          ρ = 0.1, σ = 1e-06, α = 1.6,
          max_iter = 5000,
          scaling iter = 10 (on),
          check termination every 25 iter,
          check infeasibility every 40 iter,
          KKT system solver: QDLDL
Acc:      Anderson Type2{QRDecomp},
          Memory size = 15, RestartedMemory,
          Safeguarded: true, tol: 2.0
Setup Time: 0.15ms

Iter:	Objective:	Primal Res:	Dual Res:	Rho:
1	-6.1146e+02	1.7072e+01	5.9998e-01	1.0000e-01
25	 2.3958e+01	2.9812e-01	1.1258e-09	1.0000e-01
50	 2.5146e+01	2.0059e-01	1.6359e-08	1.0000e-01
75	 2.5430e+01	1.3731e-01	4.0566e-11	1.0000e-01
100	 2.5624e+01	1.2992e-10	3.3307e-16	1.0000e-01

------------------------------------------------------------------
>>> Results
Status: Solved
Iterations: 121 (incl. 21 safeguarding iter)
Optimal objective: 25.62
Runtime: 0.108s (107.73ms)

Looking at the solver output, you can see that the single PSD constraint was decomposed into 19 smaller PSD constraints. Let's look at the lower bound:

JuMP.objective_value(model)
-25.623709242674522
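
For comparison, the simple eigenvalue-based bound mentioned above can be computed directly; a quick check, not part of the original example:

# eigenvalue-based lower bound: p* ≥ n * λ_min(W)
n * eigmin(Matrix(W))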

As $n$ is small, we can verify our result by computing the optimal solution by brute force, i.e. by trying all $2^n$ possible sign combinations:

function brute_force_optimisation(W, n)
   opt_obj = Inf
   opt_x = Inf * ones(n)

   # enumerate all 2^n sign patterns x ∈ {-1, 1}^n
   for xt in Iterators.product([[1.; -1.] for k = 1:n]...)
      x = collect(xt)
      obj_val = x' * W * x
      if obj_val < opt_obj
         opt_obj = obj_val
         opt_x = x
      end
   end
   return opt_obj, opt_x
end
opt_obj, opt_x = brute_force_optimisation(W, n)
(-25.623709242765823, [-1.0, 1.0, -1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0])
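
The dual bound and the brute-force optimum agree to solver precision, i.e. the relaxation is tight for this particular instance. A quick check, using the variables defined above:

# duality gap is zero up to solver tolerance for this instance
abs(JuMP.objective_value(model) - opt_obj) < 1e-6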

References

[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.


This page was generated using Literate.jl.