The source files for all examples can be found in /examples.

Relaxed Two-Way Partitioning Problem

We consider the (nonconvex) two-way partitioning problem from Boyd and Vandenberghe (2004), p.219 [1]:

\[\begin{array}{ll} \text{minimize} & x^\top W x \\ \text{subject to} & x_i^2 = 1, \quad i=1,\dots, n, \end{array}\]

with $x \in \mathbf{R}^n$ and $W \in \mathbf{S}^n$. The problem can be interpreted as finding a partition of the points $i = 1,\dots, n$ into two sets, where the cost of points $i$ and $j$ being in the same set is $W_{ij}$ and $-W_{ij}$ otherwise; this follows because $x_i x_j = 1$ if $i$ and $j$ are assigned to the same set and $x_i x_j = -1$ otherwise. Brute-forcing the solution $x^*$ takes $2^n$ evaluations and quickly becomes intractable, e.g. for $n \geq 30$. However, a lower bound for this problem can be computed by solving its convex (Lagrange) dual:

\[\begin{array}{ll} \text{maximize} & -\sum_i \nu_i \\ \text{subject to} & W + \text{diag}(\nu) \in \mathbf{S}_+^n. \end{array}\]
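To see where this dual comes from (the standard derivation, cf. [1]), form the Lagrangian of the partitioning problem:

\[L(x, \nu) = x^\top W x + \sum_i \nu_i (x_i^2 - 1) = x^\top \left(W + \text{diag}(\nu)\right) x - \sum_i \nu_i.\]

If $W + \text{diag}(\nu) \in \mathbf{S}_+^n$, the infimum over $x$ is $-\sum_i \nu_i$ (attained at $x = 0$); otherwise the Lagrangian is unbounded below. Maximizing this lower bound over $\nu$ yields exactly the dual problem above.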

Solving this problem with optimal objective $d^*$ yields a lower bound that is at least as good as the simple minimum-eigenvalue bound, i.e. $d^* \geq n \, \lambda_{\text{min}}(W)$ (the choice $\nu = -\lambda_{\text{min}}(W)\,\mathbf{1}$ is always dual feasible). Let's set up the problem for $n = 20$ and with a specially structured (banded) $W$:

using LinearAlgebra, Random, COSMO, JuMP

rng = Random.MersenneTwister(212313);
n = 20;

# banded W: nonzeros only on the main diagonal and the first off-diagonal
W = diagm(0 => randn(rng, n));
W += diagm(1 => randn(rng, n-1));
W = Symmetric(W)
20×20 LinearAlgebra.Symmetric{Float64, Matrix{Float64}}:
 1.6562   1.03129     0.0        …   0.0        0.0        0.0
 1.03129  1.38424     0.0970901      0.0        0.0        0.0
 0.0      0.0970901   0.648929       0.0        0.0        0.0
 0.0      0.0        -0.210802       0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0        …   0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0        …   0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0            0.0        0.0        0.0
 0.0      0.0         0.0        …   0.0        0.0        0.0
 0.0      0.0         0.0            0.500723   0.0        0.0
 0.0      0.0         0.0           -0.951779   0.518206   0.0
 0.0      0.0         0.0            0.518206  -0.136439  -0.175429
 0.0      0.0         0.0            0.0       -0.175429   0.781579

As you can see, the banded matrix $W$ imposes a sparsity structure on the LMI constraint. Accordingly, COSMO will be able to use chordal decomposition to break the single large LMI constraint into multiple smaller constraints, which can make a significant difference in solver performance for large $n$.
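If you want to measure the effect of the decomposition yourself, it can be switched off via COSMO's `decompose` setting (enabled by default); a minimal sketch of passing this setting through JuMP, assuming the setting is exposed through the solver's MOI wrapper:

model_nodecomp = JuMP.Model(JuMP.optimizer_with_attributes(COSMO.Optimizer, "decompose" => false));
# With decomposition disabled, the solver would keep a single 20x20 PSD cone
# instead of the 19 small cones reported in the output below.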

model = JuMP.Model(COSMO.Optimizer);
@variable(model, ν[1:n]);                                       # dual variable ν ∈ R^n
@objective(model, Max, -ones(n)' * ν)                           # maximize -Σᵢ νᵢ
@constraint(model, Symmetric(W + diagm(ν)) in JuMP.PSDCone());  # W + diag(ν) ⪰ 0
JuMP.optimize!(model)
------------------------------------------------------------------
          COSMO v0.8.9 - A Quadratic Objective Conic Solver
                         Michael Garstka
                University of Oxford, 2017 - 2022
------------------------------------------------------------------

Problem:  x ∈ R^{38},
          constraints: A ∈ R^{57x38} (56 nnz),
          matrix size to factor: 95x95,
          Floating-point precision: Float64
Sets:     PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          PsdConeTriangle of dim: 3 (2x2)
          ... and 14 more
Decomp:   Num of original PSD cones: 1
          Num decomposable PSD cones: 1
          Num PSD cones after decomposition: 19
          Merge Strategy: CliqueGraphMerge
Settings: ϵ_abs = 1.0e-05, ϵ_rel = 1.0e-05,
          ϵ_prim_inf = 1.0e-04, ϵ_dual_inf = 1.0e-04,
          ρ = 0.1, σ = 1e-06, α = 1.6,
          max_iter = 5000,
          scaling iter = 10 (on),
          check termination every 25 iter,
          check infeasibility every 40 iter,
          KKT system solver: QDLDL
Acc:      Anderson Type2{QRDecomp},
          Memory size = 15, RestartedMemory,
          Safeguarded: true, tol: 2.0
Setup Time: 0.09ms

Iter:	Objective:	Primal Res:	Dual Res:	Rho:
1	-6.1146e+02	1.7072e+01	5.9998e-01	1.0000e-01
25	 2.3958e+01	2.9812e-01	1.1267e-09	1.0000e-01
50	 2.5146e+01	2.0059e-01	3.3875e-08	1.0000e-01
75	 2.5430e+01	1.3731e-01	1.6859e-10	1.0000e-01
100	 4.6635e+01	4.1935e-04	1.0000e+00	2.0235e+03
125	 2.5430e+01	1.3731e-01	4.9535e-06	7.7002e-02
150	 2.5430e+01	1.3731e-01	4.1990e-12	7.7002e-02
175	 2.5749e+01	8.8781e-04	1.0000e+00	1.5929e+03
200	 2.5697e+01	4.2068e-09	1.0000e+00	5.7766e-02
225	 2.5430e+01	1.3731e-01	1.6800e-11	5.7766e-02
250	 2.5773e+01	1.1828e-03	1.0000e+00	1.1956e+03
275	 2.5933e+01	9.3413e-09	1.0000e+00	1.1956e+03
300	 2.5430e+01	1.3731e-01	5.7127e-11	1.8017e-02
325	 2.5430e+01	1.3731e-01	3.3307e-16	1.8017e-02
350	 2.7406e+01	1.3745e-05	1.0000e+00	3.7327e+02
375	 2.5429e+01	1.3731e-01	5.0149e-07	6.0167e-02
400	 2.5430e+01	1.3731e-01	2.0717e+04	1.2465e+03
425	 3.6045e+01	6.8600e-06	1.0000e+00	1.2465e+03
450	 2.5430e+01	1.3731e-01	2.7724e-07	3.0784e-02
475	 2.5430e+01	1.3731e-01	4.7296e-14	3.0784e-02
500	 3.6956e+01	1.7243e-04	1.0000e+00	6.3774e+02
525	 2.5431e+01	1.3731e-01	1.0869e-06	2.1186e-02
550	 2.5430e+01	1.3731e-01	3.4306e-14	2.1186e-02
575	 2.5697e+01	1.1600e-03	1.0000e+00	4.3890e+02
600	 2.5624e+01	1.8021e-10	4.7280e-10	4.3890e+02

------------------------------------------------------------------
>>> Results
Status: Solved
Iterations: 677 (incl. 77 safeguarding iter)
Optimal objective: 25.62
Runtime: 0.114s (113.56ms)

Looking at the solver output, you can see that the original PSD constraint was decomposed into 19 smaller PSD constraints. Let's look at the lower bound:

JuMP.objective_value(model)
-25.623709241260592
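As a quick sanity check (an addition to the original example), we can compare the dual bound with the simple eigenvalue bound $n \, \lambda_{\text{min}}(W)$ from above, which should be weaker, i.e. further from $p^*$:

# minimum-eigenvalue lower bound: p* ≥ n * λ_min(W);
# expect a value below the dual bound of ≈ -25.62
n * eigmin(W)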

As $n$ is small, we can verify our lower bound by computing the optimal solution directly, trying out all $2^n$ possible combinations:

function brute_force_optimisation(W, n)
   opt_obj = Inf
   opt_x = Inf * ones(n)

   # enumerate all 2^n sign vectors x ∈ {-1, 1}^n
   for xt in Iterators.product([[1.; -1.] for k = 1:n]...)
      x = collect(xt)
      obj_val = x' * W * x
      if obj_val < opt_obj
         opt_obj = obj_val
         opt_x = x
      end
   end
   return opt_obj, opt_x
end
opt_obj, opt_x = brute_force_optimisation(W, n)
(-25.623709242765823, [-1.0, 1.0, -1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0])
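The dual bound and the brute-force optimum agree to well within the solver tolerance, so the relaxation is tight for this particular instance. A quick check (our addition):

# compare the dual lower bound with the brute-force optimum
isapprox(JuMP.objective_value(model), opt_obj; atol = 1e-6)  # expected: true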

References

[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.


This page was generated using Literate.jl.