Journal of Harbin Institute of Technology (New Series)  2019, Vol. 26 Issue (2): 41-47  DOI: 10.11916/j.issn.1005-9113.17078

Citation 

Yadan Wang, Hongwei Liu, Zexian Liu. A Modified Full-NT-Step Infeasible Interior-Point Algorithm for SDP Based on a Specific Kernel Function[J]. Journal of Harbin Institute of Technology (New Series), 2019, 26(2): 41-47. DOI: 10.11916/j.issn.1005-9113.17078.

Fund

Sponsored by the National Natural Science Foundation of China (Grant No.11461021), the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No.2017JM1014) and the Scientific Research Project of Hezhou University (Grant Nos.2014YBZK06 and 2016HZXYSX03)

Corresponding author

Yadan Wang, E-mail: wangyadan111@163.com

Article history

Received: 2017-06-19
A Modified Full-NT-Step Infeasible Interior-Point Algorithm for SDP Based on a Specific Kernel Function
Yadan Wang1, Hongwei Liu1, Zexian Liu1,2     
1. Department of Mathematics, Xidian University, Xi'an 710126, China;
2. School of Mathematics and Computer Science, Hezhou University, Hezhou 542899, Guangxi, China
Abstract: This paper proposes a new full Nesterov-Todd (NT) step infeasible interior-point algorithm for semidefinite programming. The algorithm uses a specific kernel function, adopted from Liu and Sun, to derive the feasibility step. Remarkably, with this step each iteration of the algorithm needs only one full NT step and obtains an iterate close to the central path. Moreover, it is proved that the iteration bound of the algorithm matches the best known bound for semidefinite optimization problems.
Keywords: semidefinite programming     infeasible interior-point methods     full Nesterov-Todd steps     kernel functions     polynomial complexity    
1 Introduction

Semidefinite programming (SDP) is a relatively young field of mathematical programming. During the past decades, SDP has become a hot topic in mathematical programming because of its wide use in areas such as combinatorial optimization[1-2], control theory[3] and electrical engineering[4-5]. In particular, many interior-point methods (IPMs) for SDP[6-8] have been proposed as extensions of IPMs for linear programming (LP), owing to their practical efficiency and polynomial complexity.

Infeasible IPMs (IIPMs) for LP were first presented by Lustig[9]. Zhang[7] first proved that the iteration complexity of IIPMs for SDP is polynomial. Roos[10] proposed a full-Newton step IIPM for LP, which Mansouri et al.[11-14] later extended to SDP. Full-Newton step IIPMs have attracted much attention in recent years, since they take full Newton steps and need no line searches. Many researchers have studied full-Newton step IIPMs for LP or SDP and obtained strong results, e.g., Refs.[15-17]. In these approaches the search direction is obtained from Newton's method. Using self-regular proximities, Peng et al.[17] described a new class of IPMs whose search direction can be viewed as the negative gradient of a self-regular barrier function, instead of the classical Newton direction. Such a barrier function is defined by a simple univariate function, called the kernel function of the barrier function. The algorithms mentioned above perform one feasibility step and several centering steps in every iteration. More recently, by reformulating the central path, Zhang et al.[18] presented a feasible IPM for solving LP problems. The key of their analysis is to update the parameter and obtain the new iterate at the same time; moreover, the quadratic convergence of the algorithm need not be proved. Subsequently, Mansouri et al.[19] used this idea to obtain a new search direction, and proved that no centering step is required in any iteration to obtain a new iterate close to the central path.

Inspired by their work, we present a new infeasible IPM for SDP. In the algorithm, we apply the kernel function $\psi(t)=\frac{1}{2}(t-1)^{2}$ introduced by Liu and Sun[20] to generate the search directions, and extend the analysis used in Ref.[19] to the SDP case. Compared with existing algorithms, the greatest advantage is that no centering step is needed in any iteration. In contrast to the analysis in Ref.[20], the algorithm needs only one full Newton step in each iteration and still obtains iterates close to the central path. Moreover, the complexity result shows that the new algorithm matches the best known iteration bound for SDP.

The article is organized as follows. In Section 2 we state the SDP problem and its perturbed problems, and use the kernel function proposed by Liu and Sun[20] to derive the feasibility step of our algorithm. Section 3 gives some lemmas needed for the analysis of the infeasible IPM proposed in this paper. The modified infeasible algorithm is presented in Section 4, and its convergence analysis is established in Section 5. Finally, we give several concluding remarks.

Some notation is used throughout the paper. The space of p×q real matrices is written as $\mathbb{R}^{p \times q}$, and $\mathbb{R}^{m}$ denotes the m-dimensional real Euclidean vector space. $\|\cdot\|_{\infty}$ and $\|\cdot\|$ denote the infinity norm and the Frobenius norm of a matrix, respectively. The cones of symmetric, symmetric positive semidefinite and symmetric positive definite n×n matrices are denoted by $S^{n}$, $S_{+}^{n}$ and $S_{++}^{n}$, respectively. I denotes the n×n identity matrix. The primal and dual feasible sets are denoted by $\mathcal{P}$ and $\mathcal{D}$, respectively. We use the classical Löwner partial order ≽ for symmetric matrices: M≻N (M≽N) means that M−N is positive definite (positive semidefinite). For any $Q \in S_{++}^{n}$, λmin(Q) denotes the minimal eigenvalue of Q.

2 Preliminaries

2.1 SDP Problem

The SDP problem is defined as:

$ \begin{array}{l}{\min \operatorname{Tr}(C X)} \\ {\text { s.t. } \operatorname{Tr}\left(A_{j} X\right)=b_{j}, j=1, 2, \ldots, l} \\ {X \succcurlyeq 0}\end{array}~~~~~(P) $

and its dual as

$ {\max {\mathit{\boldsymbol{b}}^{\text{T}}}\mathit{\boldsymbol{y}}} \\ {{\text{ s}}{\text{.t}}{\text{. }}\sum\limits_{j = 1}^l {{y_j}} {A_j} + S = C} \\ {S \succcurlyeq 0} ~~~~~~(D) $

where each $A_{j} \in S^{n}$, $C \in S^{n}$ and $b, y \in \mathbb{R}^{l}$. As usual, we suppose that the matrices $A_{j}$ are linearly independent.

We suppose that the interior-point condition (IPC) of (P) and (D) holds, i.e., there exist a feasible X≻0 for (P) and a feasible (y, S) with S≻0 for (D). Under this assumption, (P) and (D) are solvable and their optimality conditions are as follows:

$ \begin{array}{*{20}{l}} {\operatorname{Tr}\left( {{A_j}X} \right) = {b_j},\ j = 1,2, \ldots ,l,\ X \succcurlyeq 0}\\ {\sum\limits_{j = 1}^l {{y_j}{A_j}} + S = C,\ S \succcurlyeq 0}\\ {XS = 0} \end{array} $ (1)

As usual, relaxing the complementarity equation XS=0 in system (1) yields:

$ \begin{array}{*{20}{l}} {{\mathop{\rm Tr}\nolimits} \left( {{A_j}X} \right) = {b_j}, j = 1, 2, \ldots , l, X \succ 0}\\ {\sum\limits_{j = 1}^l {{y_j}} {A_j} + S = C, S \succ 0}\\ {XS = \mu I} \end{array} $ (2)

Assuming that (P) and (D) satisfy the IPC, system (2) has a unique solution for any μ>0, denoted by (X(μ), y(μ), S(μ)). Moreover, the limit of (X(μ), y(μ), S(μ)) exists as μ tends to zero, and this limit is a solution of system (1).

2.2 Perturbed Problems and Its Central Path

Supposing that an optimal solution exists, we choose the initial iterate (X0, y0, S0) as:

$ \left(X^{0}, y^{0}, S^{0}\right)=\xi(I, 0, I) $ (3)

where $X^{*}+S^{*} \preccurlyeq \xi I$ for some optimal solution (X*, y*, S*) of (P) and (D). Following Refs.[12, 16], we denote the initial values of the residuals by rb0 and Rc0:

$ \left(r_{b}^{0}\right)_{j} :=b_{j}-\operatorname{Tr}\left(A_{j} X^{0}\right), j=1, 2, \ldots, l $ (4)
$ R_{c}^{0} :=C-\sum\limits_{j=1}^{l} y_{j}^{0} A_{j}-S^{0} $ (5)
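For concreteness, the following small numpy sketch (illustrative data and variable names of our own choosing, not the authors' code) builds the initial iterate of Eq.(3) and evaluates the residuals (4) and (5):

import numpy as np

n, l, xi = 4, 2, 2.0                                    # illustrative sizes and constant xi
rng = np.random.default_rng(0)
A = [(lambda B: (B + B.T) / 2)(rng.standard_normal((n, n))) for _ in range(l)]  # symmetric A_j
C = np.eye(n)                                           # any symmetric C
b = rng.standard_normal(l)

X0, y0, S0 = xi * np.eye(n), np.zeros(l), xi * np.eye(n)    # Eq.(3)
mu0 = xi ** 2                                               # since X0 S0 = mu0 I

r_b0 = np.array([b[j] - np.trace(A[j] @ X0) for j in range(l)])   # Eq.(4)
R_c0 = C - sum(y0[j] * A[j] for j in range(l)) - S0               # Eq.(5)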

For 0 < ν≤1, we discuss the perturbed problem

$ \begin{array}{l}{\min \operatorname{Tr}\left(\left(C-\nu R_{c}^{0}\right) X\right)} \\ {\mathrm{s.t.} \operatorname{Tr}\left(A_{j} X\right)=b_{j}-\nu\left(r_{b}^{0}\right)_{j}} \\ {X \succcurlyeq 0}\end{array}~~~~~~\left(P_{\nu}\right) $

and its dual problem (Dν) as

$ \begin{array}{*{20}{l}} {\max \sum\limits_{j = 1}^l {\left( {{b_j} - \nu {{\left( {r_b^0} \right)}_j}} \right)} {y_j}}\\ {{\rm{ s}}{\rm{.t}}{\rm{. }}\sum\limits_{j = 1}^l {{y_j}} {A_j} + S = C - \nu R_c^0}\\ {S \succcurlyeq 0} \end{array}~~~~~\left(D_{\nu}\right) $

For ν=1, (X, y, S)=(X0, y0, S0) is a strictly feasible solution of (Pν) and (Dν); namely, (Pν) and (Dν) satisfy the IPC for ν=1.

Lemma 2.1  Let (P) and (D) be feasible. Then the perturbed problems (Pν) and (Dν) are strictly feasible with 0 < ν≤1.

Hence, if the feasible sets $ \mathcal{P}$ and $\mathcal{D} $ are nonempty, then Lemma 2.1 implies that (Pν) and (Dν) are strictly feasible for each 0 < ν≤1. Therefore, the central paths of (Pν) and (Dν) exist and are given by:

$ \left\{\begin{array}{l}{b_{j}-\operatorname{Tr}\left(A_{j} X\right)=\nu\left(r_{b}^{0}\right)_{j}, j=1, 2, \ldots, l} \\ {C-\sum\limits_{j=1}^{l} y_{j} A_{j}-S=\nu R_{c}^{0}} \\ {X S=\mu I}\end{array}\right. $ (6)

This system has a unique solution for every μ>0. If the relation $\mu=\nu \mu^{0}$ holds with 0 < ν≤1, we denote this solution by (X(ν), y(ν), S(ν)); these are the μ-centers of (Pν) and (Dν). Since X0S0=μ0I with $\mu^{0}=\xi^{2}$, (X0, y0, S0) is the μ0-center of (P1) and (D1), namely, (X(1), y(1), S(1))=(X0, y0, S0).

2.3 Infeasible IPMs with a Kernel Function

We first describe the search directions of the feasibility step used by Mansouri and Roos[12]. The search directions (ΔX, Δy, ΔS) of their algorithm are determined by the following system:

$ \begin{array}{*{20}{l}} {\operatorname{Tr}\left( {{A_j}\Delta X} \right) = \theta \nu {{\left( {r_b^0} \right)}_j},\ j = 1,2, \ldots ,l}\\ {\sum\limits_{j = 1}^l {\Delta {y_j}{A_j}} + \Delta S = \theta \nu R_c^0}\\ {\Delta X + P\Delta S{P^{\rm{T}}} = \mu {S^{ - 1}} - X} \end{array} $

where

$ P=S^{-1 / 2}\left(S^{1 / 2} X S^{1 / 2}\right)^{1 / 2} S^{-1 / 2}=X^{1 / 2}\left(X^{1 / 2} S X^{1 / 2}\right)^{-1 / 2} X^{1 / 2} $

which is symmetric and nonsingular.
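As a sanity check, the following numpy sketch (symmetric square roots via eigendecomposition; the data are illustrative, not from the paper) verifies numerically that the two expressions for P coincide and that P maps S to X in the sense PSP = X:

import numpy as np

def sqrtm_sym(M):
    # symmetric positive definite square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n)); X = B @ B.T + n * np.eye(n)   # X positive definite
B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)   # S positive definite

S_h = sqrtm_sym(S); S_hi = np.linalg.inv(S_h)
X_h = sqrtm_sym(X)

P1 = S_hi @ sqrtm_sym(S_h @ X @ S_h) @ S_hi                    # first formula for P
P2 = X_h @ np.linalg.inv(sqrtm_sym(X_h @ S @ X_h)) @ X_h       # second formula for P

print(np.allclose(P1, P2), np.allclose(P1 @ S @ P1, X))        # True True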

Define

$ \left\{ {\begin{array}{*{20}{l}} {{D_X} = \frac{1}{{\sqrt \mu }}{D^{ - 1}}\mathit{\Delta }X{D^{ - 1}}}\\ {{D_S} = \frac{1}{{\sqrt \mu }}D\mathit{\Delta }SD}\\ {H = \frac{1}{{\sqrt \mu }}{D^{ - 1}}X{D^{ - 1}} = \frac{1}{{\sqrt \mu }}DSD} \end{array}} \right. $ (7)

where $D=P^{1/2}$. In terms of DX and DS, the system above can be rewritten as follows:

$ \left\{ {\begin{array}{*{20}{l}} {\operatorname{Tr}\left( {{{\left( {D{A_j}D} \right)}^{\rm{T}}}{D_X}} \right) = \frac{1}{{\sqrt \mu }}\theta \nu {{\left( {r_b^0} \right)}_j},\ j = 1,2, \ldots ,l}\\ {\sum\limits_{j = 1}^l {\frac{{\Delta {y_j}}}{{\sqrt \mu }}} D{A_j}D + {D_S} = \frac{1}{{\sqrt \mu }}\theta \nu DR_c^0D}\\ {{D_X} + {D_S} = {H^{ - 1}} - H} \end{array}} \right. $ (8)

Note that the expression $H^{-1}-H$ equals the negative gradient of the classical logarithmic barrier function:

$ \Psi(H) :=\sum\limits_{j=1}^{n} \psi\left(\lambda_{j}(H)\right), \quad \lambda_{j}(H)^{2}=\frac{1}{\mu} \lambda_{j}\left(D^{-1} X S D\right) $

whose kernel function is:

$ \psi(t)=\frac{t^{2}-1}{2}-\log t $

The key contribution of this article is that we modify the search directions of the feasibility step by replacing the third equation of Eq.(8) with $D_{X}+D_{S}=-\nabla \Psi(H)$, where the kernel function of $\Psi(H)$ is taken to be $\psi(t)=\frac{1}{2}(t-1)^{2}$, introduced by Liu and Sun[20]; for this choice $-\nabla \Psi(H)=I-H$. The search directions of the algorithm then become:

$ \left\{ {\begin{array}{*{20}{l}} {\operatorname{Tr}\left( {{{\left( {D{A_j}D} \right)}^{\rm{T}}}{D_X}} \right) = \frac{1}{{\sqrt \mu }}\theta \nu {{\left( {r_b^0} \right)}_j},\ j = 1,2, \ldots ,l}\\ {\sum\limits_{j = 1}^l {\frac{{\Delta {y_j}}}{{\sqrt \mu }}} D{A_j}D + {D_S} = \frac{1}{{\sqrt \mu }}\theta \nu DR_c^0D}\\ {{D_X} + {D_S} = I - H} \end{array}} \right. $ (9)
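Since the matrices D A_j D and I − H are symmetric, system (9) can be reduced to an l×l linear system for Δy: substitute D_S from the second equation into the third to eliminate D_X, then take inner products with the D A_j D. The following numpy sketch (our own illustrative implementation, not the authors' code) carries out this reduction:

import numpy as np

def feasibility_step(A, r_b0, R_c0, X, S, mu, theta, nu):
    # Solve system (9) for the feasibility step (dX, dy, dS); a sketch.
    n, l = X.shape[0], len(A)
    def sqrtm_sym(M):
        w, V = np.linalg.eigh(M)
        return V @ np.diag(np.sqrt(w)) @ V.T
    S_h = sqrtm_sym(S); S_hi = np.linalg.inv(S_h)
    P = S_hi @ sqrtm_sym(S_h @ X @ S_h) @ S_hi            # NT scaling matrix
    D = sqrtm_sym(P); D_inv = np.linalg.inv(D)
    H = D_inv @ X @ D_inv / np.sqrt(mu)                   # = D S D / sqrt(mu), Eq.(7)
    B = [D @ A[j] @ D for j in range(l)]                  # scaled constraint matrices
    Rbar = theta * nu * D @ R_c0 @ D / np.sqrt(mu)
    c = theta * nu * r_b0 / np.sqrt(mu)
    # Eliminate D_X = I - H - D_S and D_S = Rbar - sum_k z_k B_k, with z = dy / sqrt(mu):
    M = np.array([[np.trace(B[j] @ B[k]) for k in range(l)] for j in range(l)])
    rhs = c - np.array([np.trace(B[j] @ (np.eye(n) - H - Rbar)) for j in range(l)])
    z = np.linalg.solve(M, rhs)
    D_S = Rbar - sum(z[k] * B[k] for k in range(l))
    D_X = np.eye(n) - H - D_S                             # third equation of (9)
    dX = np.sqrt(mu) * D @ D_X @ D                        # undo the scaling, Eq.(7)
    dS = np.sqrt(mu) * D_inv @ D_S @ D_inv
    return dX, np.sqrt(mu) * z, dS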

In the feasible case (zero residuals), DX and DS are orthogonal:

$ \operatorname{Tr}\left(D_{X} D_{S}\right)=0 $

Then, applying the equation DS+DX=I-H, we get:

$ \left\|D_{X}\right\|^{2}+\left\|D_{S}\right\|^{2}=\left\|D_{X}+D_{S}\right\|^{2}=\|I-H\|^{2} $

which implies that DX=DS=0 if and only if I−H=0, i.e., XS=μI, so that X and S coincide with the μ-centers. Therefore, we use the centrality function:

$ \delta(X, S ; \mu) :=\delta(H) :=\frac{1}{2}\|I-H\| $ (10)
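In this notation, δ(X, S; μ) is straightforward to evaluate; a brief sketch (illustrative code of our own, not the authors'):

import numpy as np

def centrality(X, S, mu):
    # delta(X, S; mu) = 0.5 * ||I - H||_F as in Eq.(10); a sketch.
    def sqrtm_sym(M):
        w, V = np.linalg.eigh(M)
        return V @ np.diag(np.sqrt(w)) @ V.T
    S_h = sqrtm_sym(S); S_hi = np.linalg.inv(S_h)
    P = S_hi @ sqrtm_sym(S_h @ X @ S_h) @ S_hi
    D_inv = np.linalg.inv(sqrtm_sym(P))
    H = D_inv @ X @ D_inv / np.sqrt(mu)
    return 0.5 * np.linalg.norm(np.eye(X.shape[0]) - H, 'fro')

print(centrality(2.0 * np.eye(3), 2.0 * np.eye(3), 4.0))   # 0.0 at the mu-center X = S = sqrt(mu) I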

Since the Aj are linearly independent, system (9) has a unique solution (ΔX, Δy, ΔS) for any X≻0 and S≻0. The iterates after a full Newton step are:

$ {X^ + } = X + \mathit{\Delta }X, \;\;\;{y^ + } = y + \mathit{\Delta }y, \;\;\;{S^ + } = S + \mathit{\Delta }S $

After each iteration, the new iterates are feasible for (Pν+) and (Dν+).

3 Some Technical Results

Multiplying both sides of DX+DS=I−H on the left by H yields:

$ H D_{X}+H D_{S}=H-H^{2} $ (11)

Using Eqs.(7), (8) and (11), we obtain:

$ X^{+} S^{+}=\mu D\left(H+D_{X}\right)\left(H+D_{S}\right) D^{-1} $

For convenience, we define:

$ D_{X S} :=\frac{1}{2}\left(D_{X} D_{S}+D_{S} D_{X}\right) $ (12)

and

$ M :=\left(D_{X} H-H D_{X}\right)+\frac{1}{2}\left(D_{X} D_{S}-D_{S} D_{X}\right) $ (13)

Note that DXS is symmetric and M is skew-symmetric.

Using Eq.(11), we have:

$ \left(H+D_{X}\right)\left(H+D_{S}\right)=H^{2}+H D_{S}+D_{X} H+D_{X} D_{S}=H-H D_{X}+D_{X} H+D_{X} D_{S} $

Applying Eqs.(12), (13) and subtracting and adding $\frac{1}{2} D_{S} D_{X} $ to the last expression yields:

$ \begin{array}{l}{\left(H+D_{X}\right)\left(H+D_{S}\right)=H+\frac{1}{2}\left(D_{X} D_{S}+D_{S} D_{X}\right)+} \\ ~~~~~~~~~~~~~~~~{\left(D_{X} H-H D_{X}\right)+\frac{1}{2}\left(D_{X} D_{S}-D_{S} D_{X}\right)=} \\~~~~~~~~~~~ ~~~~~{H+M+D_{X S}}\end{array} $

Then

$ \begin{aligned} X^{+} S^{+} \sim & \mu\left(H+D_{X}\right)\left(H+D_{S}\right)=\\ & \mu\left(H+M+D_{X S}\right) \end{aligned} $ (14)

Lemma 3.1  If H+DXS≻0, then the iterates after the full Newton step are strictly feasible.

Proof  The proof is analogous to that of Lemma 5.4 in Ref.[12].

Lemma 3.2  One has

$ 1-2 \delta(H) \leqslant \lambda_{j}(H) \leqslant 1+2 \delta(H), 1 \leqslant j \leqslant n $

Proof  Since |1−λj(H)|≤‖I−H‖=2δ(H) for each j, the lemma follows.

Corollary 3.1  If ‖DXS‖∞ < λmin(H), then X+≻0 and S+≻0.

Proof  Since ‖DXS‖∞ < λmin(H), we have H+DXS≻0. Hence, by Lemma 3.1, X+≻0 and S+≻0.

Lemma 3.3  If $ \delta(H)<\frac{\sqrt{3}-1}{2}$, then the new iterations are strictly feasible.

Proof  Since

$ \left\|D_{X S}\right\|_{\infty} \leqslant\left\|D_{X}\right\|\left\|D_{S}\right\| \leqslant \frac{1}{2}\left(\left\|D_{X}\right\|^{2}+\left\|D_{S}\right\|^{2}\right) $

then from Eq.(10) and Lemma 3.2, one has:

$ \left\|D_{X S}\right\|_{\infty} \leqslant 2 \delta(H)^{2} $

and 1-2δ(H)≤λmin(H).

It can be shown that the inequality

$ \left\|D_{X S}\right\|_{\infty}<\lambda_{\min }(H) $

holds for

$ 2 \delta(H)^{2}<1-2 \delta(H) $

which is equivalent to $ \delta(H)<(\sqrt{3}-1) / 2$. Applying Corollary 3.1, we conclude that X+≻0 and S+≻0.
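Indeed, the threshold value is the positive root of the quadratic obtained by forcing equality above:

$ 2 \delta(H)^{2}+2 \delta(H)-1=0 \Rightarrow \delta(H)=\frac{-2+\sqrt{4+8}}{4}=\frac{\sqrt{3}-1}{2} \approx 0.366 $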

Lemma 3.4  If $ \delta(H)<(\sqrt{3}-1) / 2$, then $\operatorname{Tr}\left(X^{+} S^{+}\right)<2 n \mu$.

Proof  Applying Eq.(14), Lemma 3.2 and the fact that M is skew-symmetric (so that Tr(M)=0), one has:

$ \begin{array}{l} \operatorname{Tr}\left(X^{+} S^{+}\right)=\mu \operatorname{Tr}\left(H+D_{X S}+M\right)=\mu \operatorname{Tr}\left(H+D_{X S}\right)= \\ \quad \mu \sum\limits_{j=1}^{n} \lambda_{j}\left(H+D_{X S}\right) \leqslant \mu\left(n \lambda_{\max }(H)+\sum\limits_{j=1}^{n} \lambda_{j}\left(D_{X S}\right)\right) \leqslant \\ \quad n \mu\left(1+2 \delta(H)+\left\|D_{X S}\right\|_{\infty}\right) \leqslant n \mu\left(1+2 \delta(H)+2 \delta(H)^{2}\right) \end{array} $

Since $\delta(H)<(\sqrt{3}-1) / 2 $ implies $1+2 \delta(H)+2 \delta(H)^{2}<2$, one has:

$ \operatorname{Tr}\left(X^{+} S^{+}\right)<2 n \mu $

which completes the proof.

4 Full-Newton Step Infeasible IPM

4.1 Description of Algorithm

As introduced in Section 2, we use δ(X, S; μ) defined in Eq.(10) to measure the distance between the iterates and the μ-center. Since X0S0=μ0I, we initially have δ(X0, S0; μ0)=0. At the beginning of every iteration, before the μ-update, we assume that δ(X, S; μ) is smaller than a threshold value τ>0; this obviously holds at the start of the first iteration.

Now we describe one iteration of the algorithm. Assume that for some μ∈(0, μ0] the iterates (X, y, S) satisfy system (6) for ν=μ/μ0, with Tr(XS) < 2nμ and δ(X, S; μ)≤τ. After reducing the parameter μ to μ+=(1−θ)μ with θ∈(0, 1), we obtain iterates (X+, y+, S+) that satisfy system (6) with ν replaced by ν+=μ+/μ0, with Tr(X+S+) < 2nμ+ and δ(X+, S+; μ+)≤τ. In what follows, the analysis of our algorithm is presented.

4.2 Infeasible Interior-point Algorithm

Since the parameter μ is reduced by the factor 1−θ in each iteration, the duality gap and the residual norms decrease at the same rate as μ. The algorithm runs until the duality gap and the norms of the residuals are smaller than the accuracy parameter ε. A formal description of the algorithm is given in Table 1.

Table 1 Full NT-step infeasible IPM
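The body of Table 1 is a standard pseudocode box; the following Python outline (our own sketch, reconstructed from the description above and reusing the hypothetical feasibility_step routine sketched in Section 2.3, not the authors' code) summarizes the iteration:

import numpy as np

def full_nt_ipm(A, b, C, xi, eps):
    # Outline of the full NT-step infeasible IPM; a sketch.
    n, l = C.shape[0], len(A)
    theta = 1.0 / (18 * n)                                    # barrier update parameter, Eq.(28)
    X, y, S = xi * np.eye(n), np.zeros(l), xi * np.eye(n)     # initial iterate, Eq.(3)
    mu = xi ** 2                                              # mu0 = xi^2
    r_b0 = np.array([b[j] - np.trace(A[j] @ X) for j in range(l)])   # Eq.(4)
    R_c0 = C - sum(y[j] * A[j] for j in range(l)) - S                # Eq.(5)
    # The current residuals are nu * r_b0 and nu * R_c0 with nu = mu / mu0.
    while max(np.trace(X @ S),
              (mu / xi ** 2) * np.linalg.norm(r_b0),
              (mu / xi ** 2) * np.linalg.norm(R_c0, 'fro')) >= eps:
        nu = mu / xi ** 2
        dX, dy, dS = feasibility_step(A, r_b0, R_c0, X, S, mu, theta, nu)
        X, y, S = X + dX, y + dy, S + dS          # one full NT step, no centering step
        mu = (1 - theta) * mu                     # mu-update
    return X, y, S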

5 Algorithm Analysis

From the analysis in Section 4, the algorithm obtains iterates (X+, y+, S+) that are feasible for (Pν+) and (Dν+) after the full Newton step. It is crucial for the analysis to prove that δ(X+, S+; μ+)≤τ after each full Newton step.

From the equation DX+DS=I-H, we have

$ X^{+} S^{+} \sim \mu\left(H+M+D_{X S}\right) $

In what follows, we give a lemma ensuring that the iterates (X+, y+, S+) are strictly feasible for (Pν+) and (Dν+).

Lemma 5.1  When H+DXS≻0, the iterates (X+, y+, S+) are strictly feasible.

Proof  The proof is the same as that of Lemma 3.1.

Denote $\left(H^{+}\right)^{2}=\left(\mu^{+}\right)^{-1} D^{-1} X^{+} S^{+} D$. It follows from Eq.(14) that:

$ \left(H^{+}\right)^{2}=\frac{\mu\left(H+M+D_{X S}\right)}{\mu^{+}}=\frac{H+M+D_{X S}}{1-\theta} $ (15)

Then

$ \begin{array}{l}{\lambda_{\min }\left(\left(H^{+}\right)^{2}\right) \geqslant \frac{\lambda_{\min }(H)-\left\|D_{XS }\right\|_{\infty}}{1-\theta} \geqslant} \\ {\frac{1-2 \delta(H)-\frac{1}{2}\left(\left\|D_{X}\right\|^{2}+\left\|D_{S}\right\|^{2}\right)}{1-\theta}}\end{array} $

Supposing that X+≻0 and S+≻0, one can get:

$ \lambda_{\min }\left(H^{+}\right) \geqslant \sqrt{\frac{1-2 \delta(H)-\frac{1}{2}\left(\left\|D_{X}\right\|^{2}+\left\|D_{S}\right\|^{2}\right)}{1-\theta}} $ (16)

which is used in the next theorem to study the influence of the centrality function after a μ-update and the full-Newton step.

In this section, we denote:

$ \omega(H)=\frac{1}{2}\left(\left\|D_{X}\right\|^{2}+\left\|D_{S}\right\|^{2}\right) $

Theorem 5.1  Let δ(H+)=δ(X+, S+; μ+) and

$ H^{+}=\frac{1}{\sqrt{\mu^{+}}} D^{-1} X^{+} D^{-1}=\frac{1}{\sqrt{\mu^{+}}} D S^{+} D $

then one has:

$ \delta\left(H^{+}\right) \leqslant \frac{2 \delta(H)+\theta \sqrt{n}+\omega(H)}{2(1-\theta)+2 \sqrt{1-\theta} \sqrt{1-2 \delta(H)-\omega(H)}} $ (17)

Proof  Since

$ \delta\left(H^{+}\right)=\frac{1}{2}\left\|I-H^{+}\right\| $ (18)

we use Corollary 3.1 and Eq.(16) to obtain:

$ \begin{array}{l} 4 \delta\left(X^{+}, S^{+} ; \mu^{+}\right)^{2}=\left\|I-H^{+}\right\|^{2}=\sum\limits_{j=1}^{n}\left(1-\lambda_{j}\left(H^{+}\right)\right)^{2}= \\ \quad \sum\limits_{j=1}^{n} \frac{\left(1-\lambda_{j}^{2}\left(H^{+}\right)\right)^{2}}{\left(1+\lambda_{j}\left(H^{+}\right)\right)^{2}} \leqslant \frac{\sum\limits_{j=1}^{n}\left(1-\lambda_{j}^{2}\left(H^{+}\right)\right)^{2}}{\left(1+\lambda_{\min }\left(H^{+}\right)\right)^{2}}=\frac{\left\|I-\frac{H+M+D_{X S}}{1-\theta}\right\|^{2}}{\left(1+\lambda_{\min }\left(H^{+}\right)\right)^{2}} \leqslant \\ \quad \frac{\frac{1}{(1-\theta)^{2}}\left(\|I-H\|+\|\theta I\|+\left\|D_{X S}\right\|\right)^{2}}{\left(1+\lambda_{\min }\left(H^{+}\right)\right)^{2}} \leqslant \frac{\frac{1}{(1-\theta)^{2}}\left(2 \delta(H)+\theta \sqrt{n}+\omega(H)\right)^{2}}{\left(1+\sqrt{\frac{1-2 \delta(H)-\omega(H)}{1-\theta}}\right)^{2}} \end{array} $

Taking square roots on both sides of the above inequality yields:

$ \delta\left(H^{+}\right) \leqslant \frac{2 \delta(H)+\theta \sqrt{n}+\omega(H)}{2(1-\theta)+2 \sqrt{1-\theta} \sqrt{1-2 \delta(H)-\omega(H)}} $

which completes the proof.

Next, we need to guarantee that δ(H+)≤τ. Based on the result of Theorem 5.1, it suffices to have:

$ \frac{2 \delta(H)+\theta \sqrt{n}+\omega(H)}{2(1-\theta)+2 \sqrt{1-\theta} \sqrt{1-2 \delta(H)-\omega(H)}} \leqslant \tau $ (19)

A few elementary calculations show that Eq.(19) holds if:

$ \begin{array}{c}{2 \omega(H) \leqslant 4 \tau(1-\theta)(1-\tau)-4 \delta(H)-2 \sqrt{n} \theta+} \\ {4 \tau \sqrt{(1-\theta)(\tau(1-\theta)(\tau-2)+1+\sqrt{n} \theta)}}\end{array} $ (20)
5.1 Upper Bound for 2ω(H)

In the following, we consider the subspace $\mathcal{K}$:

$ \mathcal{K} :=\left\{\eta \in S^{n} : \operatorname{Tr}\left(\left(D A_{j} D\right)^{\mathrm{T}} \eta\right)=0, j=1, \ldots, l\right\} $

From the equation

$ \operatorname{Tr}\left(\left(D A_{j} D\right)^{\mathrm{T}} D_{X}\right)=\frac{1}{\sqrt{\mu}} \theta \nu\left(r_{b}^{0}\right)_{j}, j=1, 2, \ldots, l $

we find that the affine space:

$ \left\{\eta \in S^{n} : \operatorname{Tr}\left(\left(D A_{j} D\right)^{\mathrm{T}} \eta\right)=\frac{\theta \nu\left(r_{b}^{0}\right)_{j}}{\sqrt{\mu}}, j=1, \ldots, l\right\} $

equals $D_{X}+\mathcal{K}$. We find

$ D_{S} \in \frac{1}{\sqrt{\mu}} \theta \nu D R_{c}^{0} D+\mathcal{K}^{\perp} $

from the second equation of system (9). Since $\mathcal{K} \cap \mathcal{K}^{\perp}=\{0\} $, the affine spaces $D_{X}+\mathcal{K} $ and $D_{S}+\mathcal{K}^{\perp} $ intersect in a unique matrix, which we denote by G.

Lemma 5.2  Let G denote the unique matrix in the intersection of the affine spaces $D_{X}+\mathcal{K} $ and $D_{S}+\mathcal{K}^{\perp} $. Then:

$ 2 \omega(H) \leqslant\|G\|^{2}+(\|G\|+2 \delta(H))^{2} $ (21)

Proof  The proof is omitted, since it is analogous to that of Lemma 5.11 in Ref.[12].

5.2 An Upper Bound for ‖G‖

From Lemma 5.2 and system (9), it is readily seen that G is the unique solution of the following system in the unknowns Δy and G:

$ \left\{ {\begin{array}{*{20}{l}} {{\mathop{\rm Tr}\nolimits} \left( {{{\left( {D{A_j}D} \right)}^{\rm{T}}}G} \right) = \frac{1}{{\sqrt \mu }}\theta \nu {{\left( {r_b^0} \right)}_j}, j = 1, 2, \ldots , l}\\ {\sum\limits_{j = 1}^l {\frac{{\mathit{\Delta }{y_j}}}{{\sqrt \mu }}} D{A_j}D + G = \frac{1}{{\sqrt \mu }}\theta \nu DR_c^0D} \end{array}} \right. $ (22)

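G can be computed by the same Gram-matrix reduction used for the feasibility step; a brief numpy sketch of system (22) (illustrative names of our own, with B[j] = D A_j D and Rbar = θν D R_c^0 D / √μ precomputed as before):

import numpy as np

def intersection_G(B, Rbar, r_b0, mu, theta, nu):
    # Unique solution G of system (22); a sketch, not the authors' code.
    l = len(B)
    c = theta * nu * r_b0 / np.sqrt(mu)
    M = np.array([[np.trace(B[j] @ B[k]) for k in range(l)] for j in range(l)])
    rhs = np.array([np.trace(B[j] @ Rbar) for j in range(l)]) - c
    z = np.linalg.solve(M, rhs)                    # z = dy / sqrt(mu)
    return Rbar - sum(z[k] * B[k] for k in range(l))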

The following lemma will give an upper bound for ‖G‖.

Lemma 5.3  Let (X0, y0, S0) be defined as in Eq.(3). Then:

$ \|G\| \leqslant \frac{\theta}{\xi \lambda_{\min }(H)} \operatorname{Tr}(X+S) $ (23)
5.3 An Upper Bound for Tr(X+S)

Suppose that (Pν) and (Dν) have a feasible point (X, y, S). To obtain an upper bound for Tr(X+S), we recall Lemma 5.15 of Ref.[12], stated here without proof.

Lemma 5.4  Let X and (y, S) be feasible for (Pν) and (Dν), respectively, and let (X0, y0, S0) and (X*, y*, S*) be as defined in Eq.(3). Then:

$ \nu \xi \operatorname{Tr}(X+S) \leqslant \operatorname{Tr}(S X)+\nu n \xi^{2} $ (24)

Lemma 5.5  Let (Pν) and (Dν) have a feasible point (X, y, S) with Tr(XS) < 2nμ, and let (X0, y0, S0) be as defined in Eq.(3). Then:

$ \operatorname{Tr}(X+S) \leqslant 3 n \xi $ (25)

Proof  Since S≽0, X≽0, S*≽0 and X*≽0, dividing Eq.(24) in Lemma 5.4 by νξ and using $\mu=\nu \mu^{0}=\nu \xi^{2}$ gives:

$ \operatorname{Tr}(X+S) \leqslant \frac{\operatorname{Tr}(X S)}{\mu} \xi+n \xi $

Using Tr(XS) < 2nμ, we obtain:

$ \operatorname{Tr}(X+S) \leqslant 3 n \xi $

Therefore, the lemma follows.

Using Lemma 3.2 (so that λmin(H)≥1−2δ(H)) and substituting Eq.(25) into Eq.(23), we get:

$ \|G\| \leqslant \frac{3 n \theta}{1-2 \delta(H)} $ (26)

By substituting Eq.(26) in Eq.(21), we get:

$ 2 \omega(H) \leqslant\left(\frac{3 n \theta}{1-2 \delta(H)}\right)^{2}+\left(\frac{3 n \theta}{1-2 \delta(H)}+2 \delta(H)\right)^{2} $

To establish Eq.(20), we study the following inequality:

$ \begin{array}{*{20}{c}} {{{\left( {\frac{{3n\theta }}{{1 - 2\delta (H)}}} \right)}^2} + {{\left( {\frac{{3n\theta }}{{1 - 2\delta (H)}} + 2\delta (H)} \right)}^2}} \le \\ {4\tau (1 - \theta )(1 - \tau ) - 4\delta (H) - 2\sqrt n \theta + }\\ {\quad 4\tau \sqrt {(1 - \theta )(\tau (1 - \theta )(\tau - 2) + 1 + \sqrt n \theta )} } \end{array} $ (27)

Clearly, the left-hand side of Eq.(27) is non-decreasing in δ(H). Thus, one can verify that Eq.(27) holds for all δ(H)≤τ if:

$ \tau=\frac{1}{16}, \quad \theta=\frac{1}{18 n}, n \geqslant 2 $ (28)

This implies that the algorithm yields (X+, S+)≻0 and $\delta\left(X^{+}, S^{+} ; \mu^{+}\right) \leqslant \frac{1}{16} $. Therefore, the algorithm is well defined.
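The parameter choice (28) can also be checked numerically; the following sketch (our own verification, not part of the paper) evaluates both sides of Eq.(27) at the worst case δ(H)=τ:

import numpy as np

tau = 1.0 / 16
for n in (2, 10, 100, 1000):
    theta, delta = 1.0 / (18 * n), tau                 # worst case delta(H) = tau
    g = 3 * n * theta / (1 - 2 * delta)                # bound (26) on ||G||
    lhs = g ** 2 + (g + 2 * delta) ** 2                # left-hand side of Eq.(27)
    rhs = (4 * tau * (1 - theta) * (1 - tau) - 4 * delta - 2 * np.sqrt(n) * theta
           + 4 * tau * np.sqrt((1 - theta)
                               * (tau * (1 - theta) * (tau - 2) + 1 + np.sqrt(n) * theta)))
    print(n, lhs <= rhs)                               # True for every n >= 2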

5.4 Complexity Analysis

From the analysis in the preceding sections, we know that if δ(X, S; μ)≤1/16 holds at the start of an iteration, then with θ=1/(18n) (n≥2) the new iterates satisfy δ(X+, S+; μ+)≤1/16. Hence each main iteration contains at most one inner iteration, in which only one full Newton step direction has to be computed. Moreover, the norms of the residual vectors and the duality gap decrease by the factor 1−θ in each iteration. Therefore, using θ=1/(18n) (n≥2) and $\operatorname{Tr}\left(X^{0} S^{0}\right)=n \xi^{2}$, the total number of iterations is bounded by:

$ 18 n \log \frac{\max \left\{n \xi^{2}, \left\|r_{b}^{0}\right\|, \left\|R_{c}^{0}\right\|\right\}}{\varepsilon} $

This bound matches the currently best known iteration bound for IIPMs for SDP. Thus, the following theorem certifies the polynomial iteration complexity of the algorithm.

Theorem 5.2  Suppose that (X*, y*, S*) is an optimal solution of (P) and (D). Then after at most

$ 18 n \log \frac{\max \left\{n \xi^{2}, \left\|r_{b}^{0}\right\|, \left\|R_{c}^{0}\right\|\right\}}{\varepsilon} $

inner iterations, an ε-optimal solution of (P) and (D) is generated.
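For illustration (our own arithmetic, not part of the paper): with n=100, ξ=1 and $\varepsilon=10^{-6}$, and assuming the initial residual norms do not exceed $n \xi^{2}=100$, the bound evaluates to

$ 18 \times 100 \times \log \frac{100}{10^{-6}}=1800 \log 10^{8} \approx 3.3 \times 10^{4} $

iterations.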

6 Conclusions

We presented a new full Newton step infeasible interior-point algorithm for SDP. Our method uses a specific kernel function, proposed by Liu and Sun, to derive the feasibility step. The algorithm requires only one feasibility step, without any centering step, in every iteration. Furthermore, the obtained complexity bound matches the best known iteration bound for SDP.

References
[1]
Alizadeh F. Combinatorial optimization with interior point methods and semi-definite matrices. Minneapolis: University of Minnesota, 1991.
[2]
Peña J, Vera J, Zuluaga L F. Computing the stability number of a graph via linear and semidefinite programming. SIAM J. Optim., 2007, 18(1): 87-105. DOI:10.1137/05064401X
[3]
Boyd S, Vandenberghe L. Convex Optimization. Cambridge: Cambridge University Press, 2004.
[4]
Zhao Z, Braams B J, Fukuda M, et al. The reduced density matrix method for electronic structure calculations and role of three-index representability. J. Chem. Phys., 2004, 120(5): 2095-2104. DOI:10.1063/1.1636721
[5]
Lasserre J B, Prieto-Rumeau T. SDP vs. LP relaxations for the moment approach in some performance evaluation problems. Stoch. Models, 2004, 20(4): 439-456. DOI:10.1081/STM-200033112
[6]
Luo Z Q, Sturm J F, Zhang S Z. Superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming. SIAM J. Optim., 1998, 8(1): 59-81. DOI:10.1137/S1052623496299187
[7]
Zhang Y. On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming. SIAM J. Optim., 1998, 8(2): 365-386. DOI:10.1137/S1052623495296115
[8]
Liu H W, Liu C H, Yang X M. New complexity analysis of a Mehrotra-type predictor-corrector algorithm for semidefinite programming. Optim. Methods Softw., 2013, 28(6): 1179-1194. DOI:10.1080/10556788.2012.679270
[9]
Lustig I J. Feasibility issues in a primal-dual interior-point method for linear programming. Math. Program., 1990/1991, 49: 145-162. DOI:10.1007/BF01588785
[10]
Roos C. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM J. Optim., 2006, 16(4): 1110-1136. DOI:10.1137/050623917
[11]
Mansouri H. Full-Newton Step Interior-Point Methods for Conic Optimization. Delft: TU Delft, Faculty of Mathematics and Computer Science, 2008.
[12]
Mansouri H, Roos C. A new full-Newton step infeasible interior-point algorithm for semidefinite optimization. Numer. Algorithms, 2009, 52(2): 225-255. DOI:10.1007/s11075-009-9270-03
[13]
Mansouri H, Zangiabadi M. New complexity analysis of full Nesterov-Todd steps for semidefinite optimization. Bull. Iranian Math. Soc., 2011, 37(1): 269-286.
[14]
Mansouri H, Siyavash T, Zangiabadi M. A path-following infeasible interior-point algorithm for semidefinite programming. Iranian J. Oper. Res., 2012, 3(1): 11-30.
[15]
Gu G, Mansouri H, Zangiabadi M, et al. Improved full-Newton step O(nL) infeasible interior-point method for linear optimization. J. Optim. Theory Appl., 2010, 145(2): 271-288. DOI:10.1007/s10957-009-9634-0
[16]
Wang G Q, Bai Y Q, Gao X Y, et al. Improved complexity analysis of full Nesterov-Todd step interior-point method for semidefinite optimization. J. Optim. Theory Appl., 2015, 165(2): 242-262. DOI:10.1007/s10957-014-0619-2
[17]
Peng J, Roos C, Terlaky T. Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program., 2002, 93(1): 129-171. DOI:10.1007/s101070200296
[18]
Zhang L, Xu Y. A full-Newton step interior-point algorithm based on modified Newton direction. Oper. Res. Lett., 2011, 39(5): 318-322. DOI:10.1016/j.orl.2011.06.006
[19]
Mansouri H, Zangiabadi M, Arzani M. A modified infeasible-interior-point algorithm for linear optimization problems. J. Optim. Theory Appl., 2015, 166(2): 605-618. DOI:10.1007/s10957-015-0719-7
[20]
Sun W, Liu Z. A full-NT-step infeasible interior-point algorithm for SDP based on kernel functions. Appl. Math. Comput., 2011, 217(10): 4990-4999. DOI:10.1016/j.amc.2010.11.049