Let H be a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖·‖, and let C be a nonempty, closed, convex subset of H. The variational inequality problem (VIP) is to seek s* ∈ C such that
$\left\langle F\left(\boldsymbol{s}^*\right), \boldsymbol{\rho}-\boldsymbol{s}^*\right\rangle \geqslant 0, \forall \boldsymbol{\rho} \in \boldsymbol{C} $ | (1) |
where F: H → H is a given mapping. The following conditions are used throughout this paper.
(C1) The solution set Ω of VIP (1) is nonempty.
(C2) The mapping F is pseudomonotone, i.e.,
$\begin{aligned} & \langle F(\boldsymbol{c}), \boldsymbol{\rho}-\boldsymbol{c}\rangle \geqslant 0 \Rightarrow \\ & \langle F(\boldsymbol{\rho}), \boldsymbol{\rho}-\boldsymbol{c}\rangle \geqslant 0, \forall \boldsymbol{c}, \boldsymbol{\rho} \in \boldsymbol{H} \\ & \end{aligned} $ |
(C2') The mapping F is strongly pseudomonotone with modulus β > 0, i.e.,
$ \begin{gathered} \langle F(\boldsymbol{\rho}), \boldsymbol{c}-\boldsymbol{\rho}\rangle \geqslant 0 \Rightarrow\langle F(\boldsymbol{c}), \boldsymbol{c}-\boldsymbol{\rho}\rangle \geqslant \\ \beta\|\boldsymbol{c}-\boldsymbol{\rho}\|^2, \forall \boldsymbol{c}, \boldsymbol{\rho} \in \boldsymbol{H} \end{gathered} $ |
(C3) The mapping F is Lipschitz continuous, i.e.,
$ \|F(\boldsymbol{c})-F(\boldsymbol{\rho})\| \leqslant L\|\boldsymbol{c}-\boldsymbol{\rho}\|, \forall \boldsymbol{c}, \boldsymbol{\rho} \in \boldsymbol{H} $ |
where L>0 is a Lipschitz constant.
(C4) The feasible set C is nonempty, closed, and convex.
The symbols ⇀ and → denote weak and strong convergence, respectively.
It is well known that s* solves VIP (1) if and only if s* is a fixed point of the projection mapping, i.e.,
$ s^*=P_C\left(s^*-\tau F\left(s^*\right)\right) $ | (2) |
where τ > 0 is an arbitrary constant.
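Characterization (2) doubles as a practical stopping criterion: the natural residual s − P_C(s − τF(s)) vanishes exactly at solutions. A minimal sketch, assuming the illustrative choices C = R^n_+ and F(c) = c − b (so that s* = P_C(b); these are not data from this paper):

```python
import numpy as np

def project_nonneg(c):
    # P_C for the assumed feasible set C = R^n_+
    return np.maximum(c, 0.0)

# F(c) = c - b is strongly monotone; on C = R^n_+ the VIP solution is
# s* = P_C(b), by the characterization of the metric projection
b = np.array([1.5, -2.0, 0.5])
F = lambda c: c - b
s_star = project_nonneg(b)

tau = 0.7
residual = np.linalg.norm(s_star - project_nonneg(s_star - tau * F(s_star)))
# residual is 0 at the solution and strictly positive elsewhere
```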
VIP (1), developed by Fichera[1-2] and Stampacchia[3], is of great importance in applied mathematics and serves as a useful tool for studying complementarity problems, transportation, network equilibrium problems, and more[4-6]. Because of this role, scholars have concentrated on computing its approximate solutions, and numerous projection-type methods have been proposed for VIP (1) and its associated optimization problems (see Refs.[7-20]).
To be specific, the earliest approach to VIP (1) is the projected gradient method, whose numerical advantage is that only one projection onto the feasible set C is required per iteration. A well-known refinement is the subgradient extragradient scheme, which replaces the second projection onto C by a projection onto a half-space:
$ \left\{\begin{array}{l} \boldsymbol{y}_n=P_C\left(\boldsymbol{c}_n-\boldsymbol{\tau}_n F\left(\boldsymbol{c}_n\right)\right) \\ \boldsymbol{T}_n:=\left\{\boldsymbol{\xi} \in \boldsymbol{H} \mid\left\langle\boldsymbol{y}_n-\boldsymbol{\xi}, \boldsymbol{c}_n-\boldsymbol{\tau}_n F\left(\boldsymbol{c}_n\right)-\boldsymbol{y}_n\right\rangle \geqslant 0\right\} \\ \boldsymbol{c}_{n+1}=P_{\boldsymbol{T}_n}\left(\boldsymbol{c}_n-\varLambda_n \boldsymbol{\tau}_n F\left(\boldsymbol{y}_n\right)\right) \end{array}\right. $ | (3) |
where
$ \begin{aligned} \boldsymbol{\varTheta}_n & =\boldsymbol{c}_n-\boldsymbol{y}_n-\tau_n\left(F\left(\boldsymbol{c}_n\right)-F\left(\boldsymbol{y}_n\right)\right) \\ \varLambda_n & =\frac{\left\langle\boldsymbol{c}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \end{aligned} $ | (4) |
and the stepsize τ is chosen by an Armijo-like line search so that
$ \tau\left\|F\left(c_n\right)-F\left(y_n\right)\right\| \leqslant \bar{\omega}\left\|c_n-y_n\right\| $ | (5) |
It was proved that algorithm (3) converges weakly when F is Lipschitz continuous and monotone. Note that algorithm (3) runs an Armijo-like line search to find a proper stepsize at each iteration, which leads to additional computational cost.
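For concreteness, scheme (3) with line search (5) and the half-space projection (6) can be sketched as below; the test problem (an affine monotone F on C = R^n_+ with an interior solution) and all parameter values are illustrative assumptions, not data from this paper.

```python
import numpy as np

def project_nonneg(x):
    # P_C for the assumed feasible set C = R^n_+
    return np.maximum(x, 0.0)

def project_halfspace(zeta, v, c):
    # Projection onto T = {xi : <v, xi - c> <= 0}, cf. Eq.(6)
    vv = v @ v
    if vv == 0.0:          # degenerate case: T is the whole space
        return zeta
    return zeta - max(0.0, v @ (zeta - c) / vv) * v

def subgrad_extragradient(F, x, tau0=1.0, omega=0.5, shrink=0.5, iters=1000):
    """Sketch of scheme (3): Armijo-like search for (5), then relaxed step."""
    for _ in range(iters):
        tau = tau0
        y = project_nonneg(x - tau * F(x))
        # backtrack until tau*||F(x)-F(y)|| <= omega*||x-y||, i.e. Eq.(5)
        while tau * np.linalg.norm(F(x) - F(y)) > omega * np.linalg.norm(x - y):
            tau *= shrink
            y = project_nonneg(x - tau * F(x))
        theta = x - y - tau * (F(x) - F(y))        # Theta_n of Eq.(4)
        tt = theta @ theta
        if tt == 0.0:                              # x == y: x solves the VIP
            break
        lam = ((x - y) @ theta) / tt               # Lambda_n of Eq.(4)
        v = x - tau * F(x) - y                     # normal vector of T_n
        x = project_halfspace(x - lam * tau * F(y), v, y)
    return x

# Illustrative monotone affine problem; its VIP solution is M^{-1}(1, 1)^T,
# which lies in the interior of C = R^2_+
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -1.0])
x_star = subgrad_extragradient(lambda x: M @ x + q, np.array([1.0, 1.0]))
```

The second projection is onto the half-space T_n rather than onto C, which is the computational point of the scheme when P_C itself is expensive.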
The inertial extrapolation technique was adopted by Nesterov[34] to accelerate the convergence rate of the heavy ball method[35]. Motivated by the inertial idea, Alvarez and Attouch[36] offered an inertial proximal point algorithm for maximal monotone operator inclusions. Recently, inertial techniques have been employed by many researchers to speed up the extragradient method for VIP (1)[25-31, 37-39]. However, the weak convergence of the algorithm in Ref.[24] with inertial techniques has yet to be considered. So, a natural question emerges as follows.
Is it possible to obtain a new modification of the subgradient extragradient method[24] such that a weak convergence theorem and numerical improvement can be gained under much weaker conditions than monotonicity and sequential weak continuity of the mapping F?
In response to the above question, the concrete contributions of this work are as follows:
Add an inertial effect to the modified subgradient extragradient method[24] to accelerate convergence, inspired by some excellent works[20, 26, 34, 36, 39];
Introduce a new stepsize different from those in Refs.[14, 17, 40] and overcome the drawback of computing additional projections onto the feasible set C;
Present an inertial subgradient extragradient method whose weak convergence requires neither monotonicity nor sequential weak continuity of the cost mapping F, in contrast with the work by Thong et al.[18-19, 29] and by Cai et al.[20];
Ultimately, some numerical computations are presented to demonstrate the effectiveness of this newly proposed algorithm.
This article is organized as follows. Several fundamental lemmas and concepts applied in the subsequent sections are introduced in Section 1. The weak convergence theorem of the new algorithm is established in Section 2, and the R-linear convergence rate is obtained in Section 3. Numerical implementations and corresponding results are presented in Section 4, and a brief summary is given in Section 5.
1 Preliminaries
Suppose that C is a nonempty, closed, convex subset of H. For every c ∈ H, the metric projection of c onto C is defined by
$ P_C(\boldsymbol{c}):=\operatorname{argmin}\{\|\boldsymbol{c}-\boldsymbol{\rho}\| \mid \boldsymbol{\rho} \in \boldsymbol{C}\} $ |
The projection onto a half-space admits an explicit formula: for T = {ξ ∈ H | ⟨v, ξ − c⟩ ≤ 0}, the projection of ζ ∈ H onto T is
$ P_T(\boldsymbol{\zeta})=\boldsymbol{\zeta}-\max \left\{0, \frac{\langle\boldsymbol{v}, \boldsymbol{\zeta}-\boldsymbol{c}\rangle}{\|\boldsymbol{v}\|^2}\right\} \boldsymbol{v} $ | (6) |
where v ≠ 0.
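Formula (6) makes projections onto half-spaces essentially free, which is the computational appeal of the half-space used later in the algorithm. A minimal sketch (the vectors are illustrative):

```python
import numpy as np

def project_halfspace(zeta, v, c):
    # Eq.(6): P_T(zeta) for T = {xi : <v, xi - c> <= 0} with v != 0
    slack = v @ (zeta - c)
    return zeta - max(0.0, slack / (v @ v)) * v

# T = {xi : xi_1 <= 0}; the nearest point of T to (2, 1) is (0, 1)
p = project_halfspace(np.array([2.0, 1.0]), np.array([1.0, 0.0]), np.zeros(2))
```

Points already inside T are left unchanged, since the max term is then zero.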
Lemma 1[11, 42] For each
(ⅰ)
(ⅱ)
Lemma 2[43] Let
(ⅰ)
(ⅱ)
Lemma 3[44] Presume that
$ \boldsymbol{s}^* \in \Omega \Leftrightarrow\left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{s}^*\right\rangle \geqslant 0, \forall \boldsymbol{t} \in \boldsymbol{C} $ |
Lemma 4[45] Assume that
Lemma 5[46-47] Presume that the sequence
$ \xi_{n+1} \leqslant q_n \xi_n+\varpi_n, \quad \forall n \in N $ |
where
Lemma 6[48] Presume that
(ⅰ)
(ⅱ)
(iii)
Then
Lemma 7[42, 48] Presume that
(ⅰ) for any
(ⅱ) any sequential weak cluster point of
Then
In this section, inertial effects and new stepsizes are incorporated into the subgradient extragradient algorithm to solve VIP (1), and its weak convergence is established under the weaker conditions that F is pseudomonotone and uniformly continuous, compared with the work in Ref.[24]. Next, some useful conditions are stated as follows.
(C5) The operator F satisfies the following property: whenever {c_n} converges weakly to c, it holds that ‖F(c)‖ ≤ liminf_{n→∞} ‖F(c_n)‖.
Lemma 8 If the sequence
Algorithm 1 Step 0 Take Step 1 With
and update
If Step 2 Figure out where
Remark 1 It is found that the criteria in Lemma 8 are easy to compute. Concretely, given the two iterative points c_n and c_{n-1}, the inertial parameter can be chosen as
$ \bar{\psi}_n= \begin{cases}\min \left\{\frac{\vartheta_n}{\left\|c_n-c_{n-1}\right\|^2}, \vartheta\right\}, & \text { if } c_n \neq c_{n-1} \\ \vartheta, & \text { otherwise }\end{cases} $ | (11) |
where
Remark 2 With relation (7), it can be easily found that when the proposed method creates some finite iterations,
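Choice (11) can be sketched as follows; taking ϑ_n = 1/n² for the summable sequence and 0.4 for the cap ϑ are assumptions made only for illustration. The point of the rule is that ψ̄_n‖c_n − c_{n-1}‖² ≤ ϑ_n, so the series appearing in the convergence analysis is finite.

```python
import numpy as np

def inertial_weight(c_n, c_prev, n, cap=0.4):
    # Eq.(11), with the assumed summable sequence vartheta_n = 1/n^2
    vartheta_n = 1.0 / (n * n)
    gap = float(np.linalg.norm(c_n - c_prev)) ** 2
    if gap == 0.0:
        return cap                      # c_n == c_{n-1}: take the cap vartheta
    return min(vartheta_n / gap, cap)
```

By construction the product of the returned weight with ‖c_n − c_{n-1}‖² never exceeds 1/n², whatever the iterates do.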
Lemma 9 If presumptions (C1)-(C3) are satisfied, then the stepsize sequence {τ_n} defined by Eq.(8) is well defined and converges to some τ > 0.
Proof In the case of F(d_n) ≠ F(y_n), the Lipschitz continuity of F yields
$ \frac{\delta\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}{\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\|} \geqslant \frac{\delta\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}{L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}=\frac{\delta}{L} $ |
This further discloses that
$ \begin{aligned} \tau_{n+1}= & \min \left\{\frac{\delta\left\|\boldsymbol{d}_n-y_n\right\|}{\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\|}, q_n \boldsymbol{\tau}_n\right\} \geqslant \\ & \min \left\{\frac{\delta}{L}, \tau_n\right\} \end{aligned} $ |
where
$\tau_{n+1} \leqslant q_n \tau_n+\varpi_n $ |
From Lemma 5, when
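The non-monotone update analyzed in this proof, τ_{n+1} = min{δ‖d_n − y_n‖ / ‖F(d_n) − F(y_n)‖, q_nτ_n + ϖ_n}, can be sketched as below; the default values of q and ϖ are placeholders for sequences satisfying the conditions of Lemma 5.

```python
import numpy as np

def update_stepsize(tau, d, y, Fd, Fy, delta=0.5, q=1.0, varpi=0.0):
    # One step of the rule analyzed in Lemma 9: no knowledge of the
    # Lipschitz constant L is needed, yet the stepsize stays bounded
    # below by min{delta / L, tau_1} up to the perturbations varpi_n
    denom = float(np.linalg.norm(Fd - Fy))
    cap = delta * float(np.linalg.norm(d - y)) / denom if denom > 0 else np.inf
    return min(cap, q * tau + varpi)
```

When F(d_n) = F(y_n) the first branch is inactive, matching the convention that the stepsize is then simply q_nτ_n + ϖ_n.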
Lemma 10 Suppose that presumptions (C1)-(C4) hold. If the sequence
$ \begin{array}{r} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2- \\ \quad \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2, \exists N \geqslant 0, \forall n \geqslant N \end{array} $ |
Proof From
$ \begin{gathered} 2\left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant 2\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, \boldsymbol{d}_n-\tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)-\boldsymbol{s}\right\rangle= \\ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\left\|\boldsymbol{d}_n-\tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)-\boldsymbol{s}\right\|^2- \\ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{d}_n+\tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\|^2=\left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+ \\ \left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2+\tau_n{ }^2 \varLambda_n^2\left\|F\left(\boldsymbol{y}_n\right)\right\|^2-2\left\langle\boldsymbol{d}_n-\right. \\ \left.\boldsymbol{s}, \tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\rangle-\left\|\boldsymbol{c}_{n+1}-\boldsymbol{d}_n\right\|^2- \end{gathered} $ |
$ \begin{aligned} & \tau_n^2 \varLambda_n^2\left\|F\left(\boldsymbol{y}_n\right)\right\|^2-2\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{d}_n, \tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\rangle= \\ & \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n+1}-\boldsymbol{d}_n\right\|^2- \\ & 2\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, \boldsymbol{\tau}_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\rangle \end{aligned} $ |
After rearrangement,
$ \begin{gathered} \left\|c_{n+1}-s\right\|^2 \leqslant\left\|d_n-s\right\|^2-\left\|c_{n+1}-d_n\right\|^2- \\ 2 \tau_n \varLambda_n\left\langle c_{n+1}-s, F\left(y_n\right)\right\rangle \end{gathered} $ | (12) |
Using the pseudomonotonicity of F and the fact that s ∈ Ω, it follows that ⟨F(y_n), y_n − s⟩ ≥ 0.
Equivalently,
$ \begin{aligned} & -2 \tau_n \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, F\left(\boldsymbol{y}_n\right)\right\rangle \leqslant \\ & -2 \tau_n \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_n\right)\right\rangle \end{aligned} $ | (13) |
Meanwhile, both the definition of T_n and the fact that c_{n+1} ∈ T_n imply
$ \left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, \boldsymbol{d}_n-\tau_n F\left(\boldsymbol{d}_n\right)-\boldsymbol{y}_n\right\rangle \leqslant 0 $ |
Rearranging gives
$ \left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle \leqslant \tau_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_n\right)\right\rangle $ | (14) |
With relations (10), (13) and (14), it can be derived that
$ \begin{aligned} -2 \tau_n \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, F\left(\boldsymbol{y}_n\right)\right\rangle \leqslant-2 \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\right. \\ \left.\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle=-2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle+2 \varLambda_n\left\langle\boldsymbol{d}_n-\right. \\ \left.\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle=-2 \varLambda_n \frac{\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle}{\left\|\boldsymbol{\varTheta}_n\right\|^2}\left\|\boldsymbol{\varTheta}_n\right\|^2+ \\ 2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle=-2 \varLambda_n^2\left\|\boldsymbol{\varTheta}_n\right\|^2+ \\ 2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle \end{aligned} $ | (15) |
For the term 2Λ_n⟨d_n − c_{n+1}, Θ_n⟩, there is
$ \begin{aligned} & 2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle=\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}\right\|^2+ \\ & \quad \varLambda_n^2\left\|\boldsymbol{\varTheta}_n\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2 \end{aligned} $ | (16) |
which comes from the usual relation 2⟨a, b⟩ = ‖a‖² + ‖b‖² − ‖a − b‖². Substituting Eqs.(15) and (16) into Eq.(12) yields
$ \begin{aligned} & \left\|c_{n+1}-s\right\|^2 \leqslant\left\|d_n-s\right\|^2- \\ & \quad\left\|d_n-c_{n+1}-\varLambda_n \varTheta_n\right\|^2-\varLambda_n^2\left\|\varTheta_n\right\|^2 \end{aligned} $ | (17) |
For the term ⟨d_n − y_n, Θ_n⟩, the definition of Θ_n and the stepsize rule in Eq.(8) give
$ \begin{aligned} & \left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle=\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2- \\ & \tau_n\left\langle F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right), \boldsymbol{d}_n-\boldsymbol{y}_n\right\rangle \geqslant \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2-\tau_n\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\| \cdot \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \geqslant\left(1-\frac{\delta \tau_n}{\tau_{n+1}}\right)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \end{aligned} $ |
and
$ \begin{aligned} \left\|\boldsymbol{\varTheta}_n\right\|= & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n-\tau_n\left(F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right)\right\| \geqslant \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|-\tau_n\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\| \geqslant \\ & \left(1-\frac{\delta \tau_n}{\tau_{n+1}}\right)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \end{aligned} $ |
Since lim_{n→∞} τ_n exists and is positive, there is
$ \lim \limits_{n \rightarrow \infty}\left(1-\frac{\delta \tau_n}{\tau_{n+1}}\right)=1-\delta>1-\chi $ |
where χ ∈ (δ, 1) is a fixed constant. Hence
$ \exists N^{\prime} \geqslant 0, \forall n \geqslant N^{\prime}, 1-\frac{\delta \tau_n}{\tau_{n+1}}>1-\chi $ |
So,
$ \left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle \geqslant(1-\chi)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 $ | (18) |
and
$\left\|\boldsymbol{\varTheta}_n\right\| \geqslant(1-\chi)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| $ | (19) |
On the other hand,
$ \begin{aligned} \left\|\boldsymbol{\varTheta}_n\right\|= & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n-\tau_n\left(F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right)\right\| \leqslant \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|+\tau_n\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\| \leqslant \\ & \left(1+\frac{\delta \tau_n}{\tau_{n+1}}\right)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \end{aligned} $ |
Since lim_{n→∞} τ_n exists and is positive, there is
$ \lim \limits_{n \rightarrow \infty}\left(1+\frac{\delta \tau_n}{\tau_{n+1}}\right)=1+\delta <1+\chi $ |
where χ ∈ (δ, 1) is the same constant as above. Hence
$ \exists N^{\prime \prime} \geqslant 0, \forall n \geqslant N^{\prime \prime}, 1+\frac{\delta \tau_n}{\tau_{n+1}} <1+\chi $ |
So,
$\left\|\boldsymbol{\varTheta}_n\right\| \leqslant(1+\chi)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| $ | (20) |
With Eqs.(18) and (20), it can be derived that
$ \begin{gathered} \varLambda_n^2\left\|\boldsymbol{\varTheta}_n\right\|^2=\frac{\left(\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle\right)^2}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \geqslant \\ (1-\chi)^2 \frac{\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^4}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \geqslant \\ \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \end{gathered} $ | (21) |
It follows from Eq.(19) that
$ \varLambda_n=\frac{\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \leqslant \frac{\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}{\left\|\boldsymbol{\varTheta}_n\right\|} \leqslant \frac{1}{1-\chi} $ | (22) |
Also, by Eqs.(17) and (21), it can be deduced that
$ \begin{gathered} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2- \\ \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \end{gathered} $ | (23) |
Theorem 1 Assume that presumptions (C1)-(C5) are satisfied and
$ \exists N>0, \forall n \geqslant N, \sum\limits_{n=N}^{\infty} \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 <\infty $ |
So sequence
Proof Let
$ \begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2=\left\|\psi_n\left(\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right)+\boldsymbol{c}_n-\boldsymbol{s}\right\|^2= \\ \left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+2 \psi_n\left\langle\boldsymbol{c}_n-\boldsymbol{c}_{n-1}, \boldsymbol{c}_n-\boldsymbol{s}\right\rangle+ \\ \psi_n^2\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \end{gathered} $ | (24) |
Applying Lemma 1 (ⅰ), there is
$\begin{gathered} \left\langle\boldsymbol{c}_n-\boldsymbol{c}_{n-1}, \boldsymbol{c}_n-\boldsymbol{s}\right\rangle=0.5\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2+ \\ 0.5\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-0.5\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2 \end{gathered} $ |
This, along with Eq.(24), shows that
$\begin{array}{r} \left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2=\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+\psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2+\right. \\ \left.\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+\psi_n^2\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \leqslant \\ \left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+\psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+2 \\ \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \end{array} $ | (25) |
Combining Eqs.(23) and (25) shows that
$\begin{aligned} \| \boldsymbol{c}_{n+1}- & \boldsymbol{s}\left\|^2 \leqslant\right\| \boldsymbol{d}_n-\boldsymbol{s}\left\|^2-\right\| \boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n \|^2- \\ & \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \leqslant\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+ \\ & \psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+ \\ & 2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2- \\ & \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \leqslant\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+ \\ & \psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+ \\ & 2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \end{aligned} $ | (26) |
Use Lemma 6 with
$ b_n=\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2 $ |
and
$ w_n=2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 $ |
With
$ \sum\limits_{n=N}^{\infty}\left[\left\|c_n-\boldsymbol{s}\right\|^2-\left\|c_{n-1}-\boldsymbol{s}\right\|^2\right]_{+} <\infty $ |
where
$ \lim \limits_{n \rightarrow \infty}\left[\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right]_{+}=0 $ |
Applying Eq.(26), it follows that
$ \begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \varTheta_n\right\|^2+\frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \leqslant \\ \left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+ \\ \psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+ \\ 2 \psi_n\left\|\boldsymbol{c}_n-c_{n-1}\right\|^2 \leqslant\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2- \\ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\psi_n\left[\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\| \boldsymbol{c}_{n-1}-\right. \\ \left.\boldsymbol{s} \|^2\right]_{+}+2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \end{gathered} $ |
which means that
$\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|=0 $ | (27) |
From Eqs.(7) and (11), it can be derived that
$ \begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{c}_n\right\|^2=\psi_n^2\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \leqslant \\ \vartheta \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \rightarrow 0, n \rightarrow \infty \end{gathered} $ |
which corresponds to
$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{d}_n-\boldsymbol{c}_n\right\|=0 $ | (28) |
This and Eq.(27) verify that
$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_n-\boldsymbol{y}_n\right\|=0 $ | (29) |
With Eqs.(20), (22) and (27), it can be deduced that
$\begin{gathered} \left\|d_n-c_{n+1}\right\| \leqslant\left\|d_n-c_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|+ \\ \left\|\varLambda_n \boldsymbol{\varTheta}_n\right\| \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|+ \\ \frac{1+\chi}{1-\chi}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \rightarrow 0, n \rightarrow \infty . \end{gathered} $ | (30) |
Invoking Eqs.(28) and (30) can derive
$ \begin{array}{r} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{c}_n\right\| \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}\right\|+ \\ \left\|\boldsymbol{d}_n-\boldsymbol{c}_n\right\| \rightarrow 0, n \rightarrow \infty \end{array} $ | (31) |
Note that
$ \left\langle\boldsymbol{d}_{n_i}-\tau_{n_i} F\left(\boldsymbol{d}_{n_i}\right)-\boldsymbol{y}_{n_i}, \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle \leqslant 0, \forall \boldsymbol{t} \in \boldsymbol{C}, $ |
After rearrangement, this becomes
$ \frac{1}{\tau_{n_i}}\left\langle\boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}, \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle \leqslant\left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle, \forall \boldsymbol{t} \in \boldsymbol{C} $ |
Further there is:
$ \begin{gathered} \frac{1}{\tau_{n_i}}\left\langle\boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}, \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle+\left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{y}_{n_i}-\boldsymbol{d}_{n_i}\right\rangle \leqslant \\ \left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{d}_{n_i}\right\rangle, \forall \boldsymbol{t} \in \boldsymbol{C} \end{gathered} $ | (32) |
Since the sequence
$ \liminf\limits_{i \rightarrow \infty}\left\langle F\left(\boldsymbol{d}_{n_i}\right), t-\boldsymbol{d}_{n_i}\right\rangle \geqslant 0, \forall \boldsymbol{t} \in \boldsymbol{C} $ | (33) |
Also,
$ \begin{gathered} \left\langle F\left(\boldsymbol{y}_{n_i}\right), \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle=\left\langle F\left(\boldsymbol{y}_{n_i}\right)-F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{d}_{n_i}\right\rangle+ \\ \left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{d}_{n_i}\right\rangle+\left\langle F\left(\boldsymbol{y}_{n_i}\right), \boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}\right\rangle \end{gathered} $ | (34) |
From
$ \lim \limits_{i \rightarrow \infty}\left\|F\left(\boldsymbol{d}_{n_i}\right)-F\left(\boldsymbol{y}_{n_i}\right)\right\|=0 $ |
Together with Eqs.(33) and (34), this implies that
$\liminf \limits_{i \rightarrow \infty}\left\langle F\left(\boldsymbol{y}_{n_i}\right), \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle \geqslant 0 $ |
Next, choose a positive sequence {θ_i} decreasing to zero. Then, for each i, there exists N_i such that
$ \left\langle F\left(\boldsymbol{y}_{n_k}\right), \boldsymbol{t}-\boldsymbol{y}_{n_k}\right\rangle+\theta_i \geqslant 0, k \geqslant N_i $ | (35) |
As the sequence
$\begin{gathered} \boldsymbol{v}_{N_i}=\frac{F\left(\boldsymbol{y}_{N_i}\right)}{\left\|F\left(\boldsymbol{y}_{N_i}\right)\right\|^2} \end{gathered} $ |
$\left\langle F\left(\boldsymbol{y}_{N_i}\right), \boldsymbol{t}+\theta_i \boldsymbol{v}_{N_i}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 $ |
Since F is pseudomonotone, it means that
$\left\langle F\left(\boldsymbol{t}+\theta_i \boldsymbol{v}_{N_i}\right), \boldsymbol{t}+\theta_i \boldsymbol{v}_{N_i}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 $ |
This shows that
$ \begin{gathered} \left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle \geqslant\left\langle F(\boldsymbol{t})-F\left(\boldsymbol{t}+\theta_i v_{N_i}\right), \boldsymbol{t}+\right. \\ \left.\boldsymbol{\theta}_i v_{N_i}-\boldsymbol{y}_{N_i}\right\rangle-\boldsymbol{\theta}_i\left\langle\boldsymbol{F}(\boldsymbol{t}), \boldsymbol{v}_{N_i}\right\rangle \end{gathered} $ | (36) |
Below,
$ 0 <\left\|F\left(s^*\right)\right\| \leqslant \liminf \limits_{i \rightarrow \infty}\left\|\boldsymbol{F}\left(\boldsymbol{y}_{n_i}\right)\right\| $ |
Furthermore, since
$ \begin{gathered} 0 \leqslant \limsup _{i \rightarrow \infty}\left\|\theta_i \boldsymbol{v}_{N_i}\right\|=\limsup _{i \rightarrow \infty} \frac{\theta_i}{\left\|F\left(\boldsymbol{y}_{N_i}\right)\right\|} \leqslant \\ \frac{\limsup _{i \rightarrow \infty} \theta_i}{\liminf \limits_{i \rightarrow \infty}\left\|F\left(\boldsymbol{y}_{N_i}\right)\right\|}=0 \end{gathered} $ |
which yields that θ_i v_{N_i} → 0 as i → ∞.
Finally, let
$\liminf \limits_{i \rightarrow \infty}\left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 $ |
Thus, for any
$ \begin{gathered} \left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{s}^*\right\rangle=\lim \limits_{i \rightarrow \infty}\left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle= \\ \liminf \limits_{i \rightarrow \infty}\left\langle\boldsymbol{F}(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 \end{gathered} $ |
According to Lemma 3, it can be derived that s* ∈ Ω.
Remark 3
(1) It is remarkable that the weak convergence of Algorithm 1 is established under presumptions (C2) and (C5), which are much weaker than the hypotheses of monotonicity and sequential weak continuity of F employed in existing works[20, 24, 29, 49];
(2) The results obtained in this paper improve Theorem 3.1 in Ref.[18], Theorem 3.1 in Ref.[24], Theorem 3.1 in Ref.[29], Theorem 3.1 in Ref.[37], and Theorem 4.1 in Ref.[38], because convergence is obtained without monotonicity and sequential weak continuity.
3 Linear Convergence Rate
In this section, the linear convergence rate of the sequence {c_n} generated by Algorithm 1 is discussed.
Theorem 2 Suppose that presumptions (C1), (C2') and (C3) are satisfied. Let
$r=\left[1-\frac{\gamma}{1+\frac{L}{\beta}+\frac{1}{\beta \tau_1}}\right]^{\frac{1}{2}} $ |
where
$ \psi_n= \begin{cases}0, & \text { if } n \text { is even } \\ \min \left\{\bar{\psi}_n, \frac{1-r}{1+r}\right\}, & \text { if } n \text { is odd }\end{cases} $
Then {c_n} converges at least R-linearly to the unique solution of VIP (1).
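The rate constant r above is directly computable from the problem data; a small sketch (the parameter values are illustrative and must satisfy γ < 1 + L/β + 1/(βτ_1) so that r ∈ (0, 1)):

```python
import math

def rate_constant(gamma, L, beta, tau1):
    # r = [1 - gamma / (1 + L/beta + 1/(beta*tau1))]^(1/2)
    denom = 1.0 + L / beta + 1.0 / (beta * tau1)
    return math.sqrt(1.0 - gamma / denom)

r = rate_constant(gamma=0.25, L=2.0, beta=1.0, tau1=0.5)   # denom = 5
```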
Proof The operator F is strongly pseudomonotone, which means that VIP (1) has a unique solution, denoted by s. Then
$ \begin{aligned} \left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{s}-\boldsymbol{y}_n\right\rangle \leqslant \tau_n\left\langle F\left(\boldsymbol{d}_n\right), \boldsymbol{s}-\boldsymbol{y}_n\right\rangle= \\ \tau_n\left\langle F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right), \boldsymbol{s}-\boldsymbol{y}_n\right\rangle+ \\ \tau_n\left\langle F\left(\boldsymbol{y}_n\right), \boldsymbol{s}-\boldsymbol{y}_n\right\rangle \leqslant \tau_n \| F\left(\boldsymbol{d}_n\right)- \\ F\left(\boldsymbol{y}_n\right)\|\cdot\| \boldsymbol{s}-\boldsymbol{y}_n\left\|-\beta \tau_n\right\| \boldsymbol{s}-\boldsymbol{y}_n \|^2 \leqslant \\ \tau_n L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \cdot\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|- \\ \beta \tau_n\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|^2 \end{aligned} $ | (37) |
Rearranging, it can be derived that
$ \begin{gathered} \beta \tau_n\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|^2 \leqslant \tau_n L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|- \\ \left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{s}-\boldsymbol{y}_n\right\rangle \leqslant \tau_n L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \cdot \\ \left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|+\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\| \end{gathered} $ |
This means that
$\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\| \leqslant \frac{1+\tau_n L}{\beta \tau_n}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| $ |
Hence
$ \begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{s}\right\| \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|+\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\| \leqslant \\ {\left[1+\frac{1+\tau_n L}{\beta \tau_n}\right]\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|} \end{gathered} $ |
This shows that
$ \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \geqslant\left[1+\frac{1+\tau_n L}{\beta \tau_n}\right]^{-1}\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\| $ | (38) |
Combining Lemma 10 and Eq.(38), there is
$ \begin{gathered} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\gamma\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \leqslant \\ {\left[1-\frac{\gamma}{1+\frac{L}{\beta}+\frac{1}{\beta \tau_n}}\right]\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2} \end{gathered} $ |
From Lemma 9 and the definition of r, it follows that
$ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant r^2\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2 $ | (39) |
From the definitions of d_n and ψ_n, there is
$ \begin{array}{r} \left\|\boldsymbol{d}_{2 n+1}-\boldsymbol{s}\right\|^2=\left\|\psi_{2 n+1}\left(\boldsymbol{c}_{2 n+1}-\boldsymbol{c}_{2 n}\right)+\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2= \\ \left\|\left(1+\psi_{2 n+1}\right)\left(\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right)+\left(-\psi_{2 n+1}\right)\left(\boldsymbol{c}_{2 n}-\boldsymbol{s}\right)\right\|^2= \\ \left(1+\psi_{2 n+1}\right) \quad\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2-\psi_{2 n+1} \\ \left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2+\psi_{2 n+1}\left(1+\psi_{2 n+1}\right) \\ \left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{c}_{2 n}\right\|^2 \end{array} $ | (40) |
Combining Eq.(39) with ψ_{2n} = 0 (so that d_{2n} = c_{2n}), it can be derived that
$ \left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2 \leqslant r^2\left\|\boldsymbol{d}_{2 n}-\boldsymbol{s}\right\|^2=r^2\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2 $ | (41) |
Let
$ \begin{gathered} \left\|\boldsymbol{c}_{2 n+2}-\boldsymbol{s}\right\|^2 \leqslant r^2\left\|\boldsymbol{d}_{2 n+1}-\boldsymbol{s}\right\|^2=r^2[(1+ \\ \left.\psi_{2 n+1}\right)\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2-\psi_{2 n+1}\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2+ \\ \left.\psi_{2 n+1}\left(1+\psi_{2 n+1}\right) \cdot\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{c}_{2 n}\right\|^2\right] \leqslant \\ r^2\left[\left(1+\psi_{2 n+1}\right) r^2\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2-\psi_{2 n+1} \cdot\right. \\ \left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2+\psi_{2 n+1}\left(1+\psi_{2 n+1}\right)\left(\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|+\right. \\ \left.\left.\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|\right)^2\right] \leqslant r^2\left[\left(1+\psi_{2 n+1}\right) r^2-\right. \\ \left.\psi_{2 n+1}+\psi_{2 n+1}\left(1+\psi_{2 n+1}\right)(1+r)^2\right] \cdot \\ \left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2 \leqslant r^2\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2 \end{gathered} $ | (42) |
where the last inequality is due to the choice ψ_{2n+1} ≤ (1 − r)/(1 + r). Consequently,
$ \begin{gathered} \left\|\boldsymbol{c}_{2 n+2}-\boldsymbol{s}\right\| \leqslant r\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\| \leqslant \cdots \\ \leqslant r^{n-N+1}\left\|\boldsymbol{c}_{2 N}-\boldsymbol{s}\right\| \end{gathered} $ | (43) |
With Eqs.(41) and (43), it can be implied that
$ \begin{aligned} & \left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\| \leqslant r\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\| \leqslant \cdots \\ & \quad \leqslant r^{n-N+1}\left\|\boldsymbol{c}_{2 N}-\boldsymbol{s}\right\| \end{aligned} $ | (44) |
Consequently, from Eqs.(43) and (44), it can be concluded that
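Eqs.(43) and (44) bound the odd- and even-indexed iterates by the same geometric factor, which is what yields the R-linear rate: in terms of the overall iteration index k, the distance ‖c_k − s‖ is dominated by M(√r)^k for a suitable constant M. A small numerical check of this merging step (a sketch; r, N, and the unit base distance are illustrative choices):

```python
import math

r, N = 0.8, 3        # illustrative: contraction factor r in (0, 1), burn-in index N
M = r ** (-N)        # envelope constant for a unit starting distance ||c_{2N} - s|| = 1

# Eqs.(43) and (44) give the same bound r^{n-N+1} on the iterates
# c_{2n+1} and c_{2n+2}; both are dominated by M * (sqrt(r))^k with
# k the overall index, i.e. the sequence converges R-linearly.
for n in range(N, 200):
    sub_bound = r ** (n - N + 1)
    for k in (2 * n + 1, 2 * n + 2):
        assert sub_bound <= M * math.sqrt(r) ** k + 1e-15
```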
In this section, several numerical experiments on the pseudomonotone VIP (1) are presented. The proposed Algorithm 1 (Alg. 1) is compared with some recent self-adaptive algorithms, namely Yao's Algorithm 1 (YISAlg. 1)[25], Reich's Algorithm 3 (SDPLAlg. 3)[26], and Thong's Algorithm 3.1 (TVRAlg. 3.1)[29]. All tests are performed in MATLAB R2020b on an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz with 4.00 GB of RAM.
Example 1 The first example is a classical one taken from Refs.[26, 50-51]. The practicable set fulfills
$ a_{i, j}= \begin{cases}-1, & \text { if } j=m+1-i>i \\ 1, & \text { if } j=m+1-i<i \\ 0, & \text { otherwise }\end{cases} $ |
For even m, the zero vector is the solution of the example. This is a classical problem on which the usual gradient method fails to converge. For all tests,
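The matrix above places ±1 only on the anti-diagonal, and the two cases are mirror images of each other, so A is skew-symmetric; assuming, as in the cited references, that the operator is the linear map F(c) = Ac (an assumption here, since the operator is elided above), F is then monotone because ⟨Ac, c⟩ = 0. A sketch of the construction:

```python
import numpy as np

def example1_matrix(m):
    """Matrix of Example 1: a_{ij} = -1 if j = m+1-i > i,
    1 if j = m+1-i < i, and 0 otherwise (1-based indices)."""
    A = np.zeros((m, m))
    for i in range(1, m + 1):
        j = m + 1 - i
        if j > i:
            A[i - 1, j - 1] = -1.0
        elif j < i:
            A[i - 1, j - 1] = 1.0
    return A

A = example1_matrix(6)
assert np.allclose(A.T, -A)      # skew-symmetric, so F(c) = A c is monotone
c = np.arange(1.0, 7.0)
assert abs(c @ (A @ c)) < 1e-12  # <A c, c> = 0 for every c
```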
Remark 4 From the results in Table 1, it is easy to observe that Alg. 1 enjoys a faster convergence speed than YISAlg. 1 and SDPLAlg. 3 in terms of both iteration number and CPU time. Hence, the proposed method is feasible and efficient.
Example 2 The second problem (also used in Refs.[25, 29]) is the HpHard problem. Adopt
The corresponding experimental results (execution time in seconds and number of iterations) for different dimensions m are recorded in Table 2.
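The concrete data of the HpHard instance are elided above. In the cited references this problem is commonly generated as F(c) = Mc + q with M = AAᵀ + B + D, where A is a random matrix, B is random skew-symmetric, and D is a positive diagonal matrix, which makes the symmetric part of M positive definite and hence F monotone and Lipschitz. A sketch under that assumption (all names and parameter ranges are illustrative, not the paper's exact data):

```python
import numpy as np

def hphard_operator(m, seed=0):
    """One common HpHard construction (assumed here, not the paper's
    exact data): F(c) = M c + q with M = A A^T + B + D."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-5.0, 5.0, (m, m))
    S = rng.uniform(-5.0, 5.0, (m, m))
    B = S - S.T                              # skew-symmetric part
    D = np.diag(rng.uniform(0.1, 0.3, m))    # positive diagonal part
    M = A @ A.T + B + D
    q = np.zeros(m)
    return (lambda c: M @ c + q), M

F, M = hphard_operator(10)
sym = (M + M.T) / 2                          # B cancels: sym = A A^T + D
assert np.all(np.linalg.eigvalsh(sym) > 0)   # positive definite => F monotone
```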
Remark 5 As shown in Table 2, Alg. 1 outperforms YISAlg. 1 and TVRAlg. 3.1. Specifically, the proposed algorithm requires less CPU time and fewer iterations than the compared ones.
5 Conclusions
In this article, a modified subgradient extragradient algorithm with inertial effects and non-monotone stepsizes is proposed and analyzed for solving variational inequality problems with pseudomonotonicity. Furthermore, its weak convergence under weaker assumptions is proved and an R-linear convergence rate is obtained. Finally, numerical experiments verify the correctness of the theoretical results.
[1] Fichera G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, VIII. Ser., Rend. Cl. Sci. Fis. Mat. Nat., 1963, 34(8): 138-142.
[2] Fichera G. Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno (in Italian). Atti Accad. Naz. Lincei Mem. Cl. Sci. Fis. Mat. Natur. Sez. Ia, 1963/64, 7(8): 91-140.
[3] Stampacchia G. Formes bilineaires coercitives sur les ensembles convexes. Comptes Rendus de l'Academie des Sciences, 1964, 258(18): 4413-4416.
[4] Elliott C M. Variational and quasivariational inequalities: applications to free boundary problems. SIAM Review, 1987, 29(2): 314-315. DOI:10.1137/1029059
[5] Kinderlehrer D, Stampacchia G. An Introduction to Variational Inequalities and Their Applications. Philadelphia: Society for Industrial and Applied Mathematics, 2000. DOI:10.1137/1.9780898719451
[6] Konnov I V. Combined Relaxation Methods for Variational Inequalities. Berlin: Springer, 2001. DOI:10.1007/978-3-642-56886-2
[7] Censor Y, Gibali A, Reich S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optimization Methods and Software, 2011, 26(4-5): 827-845. DOI:10.1080/10556788.2010.551536
[8] Censor Y, Gibali A, Reich S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization, 2012, 61(9): 1119-1132. DOI:10.1080/02331934.2010.539689
[9] Thong D V, Shehu Y, Iyiola O S. Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numerical Algorithms, 2020, 84(2): 795-823. DOI:10.1007/s11075-019-00780-0
[10] Facchinei F, Pang J S. Finite-Dimensional Variational Inequalities and Complementarity Problems. New York: Springer, 2003. DOI:10.1007/b97544
[11] Shehu Y, Iyiola O S. Iterative algorithms for solving fixed point problems and variational inequalities with uniformly continuous monotone operators. Numerical Algorithms, 2018, 79(2): 529-553. DOI:10.1007/s11075-017-0449-z
[12] Konnov I V. Combined Relaxation Methods for Variational Inequalities. Berlin: Springer, 2001. DOI:10.1007/978-3-642-56886-2
[13] Kanzow C, Shehu Y. Strong convergence of a double projection-type method for monotone variational inequalities in Hilbert spaces. Journal of Fixed Point Theory and Applications, 2018, 20(1): 1-24. DOI:10.1007/s11784-018-0531-8
[14] Liu H, Yang J. Weak convergence of iterative methods for solving quasimonotone variational inequalities. Computational Optimization and Applications, 2020, 77(2): 491-508. DOI:10.1007/s10589-020-00217-8
[15] Malitsky Y V, Semenov V V. A hybrid method without extrapolation step for solving variational inequality problems. Journal of Global Optimization, 2015, 61(1): 193-202. DOI:10.1007/s10898-014-0150-x
[16] Solodov M V, Svaiter B F. A new projection method for variational inequality problems. SIAM Journal on Control and Optimization, 1999, 37(3): 765-776. DOI:10.1137/S0363012997317475
[17] Yang J, Liu H W. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numerical Algorithms, 2019, 80(3): 741-752. DOI:10.1007/s11075-018-0504-4
[18] Thong D V, Yang J, Cho Y J, et al. Explicit extragradient-like method with adaptive stepsizes for pseudomonotone variational inequalities. Optimization Letters, 2021, 15(6): 2181-2199. DOI:10.1007/s11590-020-01678-w
[19] Thong D V, Li X H, Dong Q L, et al. An inertial Popov's method for solving pseudomonotone variational inequalities. Optimization Letters, 2021, 15(2): 757-777. DOI:10.1007/s11590-020-01599-8
[20] Cai G, Dong Q L, Peng Y. Strong convergence theorems for solving variational inequality problems with pseudo-monotone and non-Lipschitz operators. Journal of Optimization Theory and Applications, 2021, 188(2): 447-472. DOI:10.1007/s10957-020-01792-w
[21] Korpelevich G M. The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody, 1976, 12: 747-756.
[22] Antipin A S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekonomika i Matematicheskie Metody, 1976, 12(6): 1164-1173.
[23] Censor Y, Gibali A, Reich S. The subgradient extragradient method for solving variational inequalities in Hilbert space. Journal of Optimization Theory and Applications, 2011, 148(2): 318-335. DOI:10.1007/s10957-010-9757-3
[24] Dong Q L, Jiang D, Gibali A. A modified subgradient extragradient method for solving the variational inequality problem. Numerical Algorithms, 2018, 79(3): 927-940. DOI:10.1007/s11075-017-0467-x
[25] Yao Y, Iyiola O S, Shehu Y. Subgradient extragradient method with double inertial steps for variational inequalities. Journal of Scientific Computing, 2022, 90(2): 1-29. DOI:10.1007/s10915-021-01751-1
[26] Reich S, Thong D V, Cholamjiak P, et al. Inertial projection-type methods for solving pseudomonotone variational inequality problems in Hilbert space. Numerical Algorithms, 2021, 88(2): 813-835. DOI:10.1007/s11075-020-01058-6
[27] Thong D V, Vinh N T, Cho Y J, et al. Accelerated subgradient extragradient methods for variational inequality problems. Journal of Scientific Computing, 2019, 80(3): 1438-1462. DOI:10.1007/s10915-019-00984-5
[28] Ogwo G N, Izuchukwu C, Shehu Y, et al. Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems. Journal of Scientific Computing, 2022, 90(1): 1-35. DOI:10.1007/s10915-021-01670-1
[29] Thong D V, Van Hieu D, Rassias T M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optimization Letters, 2020, 14(1): 115-144. DOI:10.1007/s11590-019-01511-z
[30] Thong D V, Cholamjiak P, Rassias M T, et al. Strong convergence of inertial subgradient extragradient algorithm for solving pseudomonotone equilibrium problems. Optimization Letters, 2022, 16: 545-573. DOI:10.1007/s11590-021-01734-z
[31] Shehu Y, Iyiola O S, Thong D V, et al. An inertial subgradient extragradient algorithm extended to pseudomonotone equilibrium problems. Mathematical Methods of Operations Research, 2021, 93(2): 213-242. DOI:10.1007/s00186-020-00730-w
[32] He B S. A class of projection and contraction methods for monotone variational inequalities. Applied Mathematics and Optimization, 1997, 35(1): 69-76. DOI:10.1007/BF02683320
[33] Sun D F. A class of iterative methods for solving nonlinear projection equations. Journal of Optimization Theory and Applications, 1996, 91(1): 123-140. DOI:10.1007/BF02192286
[34] Nesterov Y E. A method for solving the convex programming problem with convergence rate O(1/k^2). Dokl. Akad. Nauk SSSR, 1983, 269(3): 543-547.
[35] Polyak B T. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964, 4(5): 1-17. DOI:10.1016/0041-5553(64)90137-5
[36] Alvarez F, Attouch H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Analysis, 2001, 9(1): 3-11. DOI:10.1023/A:1011253113155
[37] Shehu Y, Iyiola O S, Reich S. A modified inertial subgradient extragradient method for solving variational inequalities. Optimization and Engineering, 2021, 23: 421-429. DOI:10.1007/s11081-020-09593-w
[38] Chang X K, Liu S Y, Deng Z, et al. An inertial subgradient extragradient algorithm with adaptive stepsizes for variational inequality problems. Optimization Methods and Software, 2021, 1-20. DOI:10.1080/10556788.2021.1910946
[39] Izuchukwu C, Shehu Y, Yao J C. New inertial forward-backward type for variational inequalities with quasi-monotonicity. Journal of Global Optimization, 2022, 1-24. DOI:10.1007/s10898-022-01152-0
[40] Chang X, Bai J C. A projected extrapolated gradient method with larger step size for monotone variational inequalities. Journal of Optimization Theory and Applications, 2021, 190(2): 602-627. DOI:10.1007/s10957-021-01902-2
[41] He S N, Yang C P, Duan P C. Realization of the hybrid method for Mann iterations. Applied Mathematics and Computation, 2010, 217(8): 4239-4247. DOI:10.1016/j.amc.2010.10.039
[42] Bauschke H H, Combettes P L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. New York: Springer, 2011. DOI:10.1007/978-3-319-48311-5
[43] Agarwal R P, O'Regan D, Sahu D R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications. New York: Springer, 2009. DOI:10.1007/978-0-387-75818-3
[44] Cottle R W, Yao J C. Pseudo-monotone complementarity problems in Hilbert space. Journal of Optimization Theory and Applications, 1992, 75(2): 281-295. DOI:10.1007/BF00941468
[45] Mashreghi J, Nasri M. Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory. Nonlinear Analysis: Theory, Methods & Applications, 2010, 72(3-4): 2086-2099. DOI:10.1016/j.na.2009.10.009
[46] Osilike M O, Aniagbosor S C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Mathematical and Computer Modelling, 2000, 32(10): 1181-1191. DOI:10.1016/S0895-7177(00)00199-0
[47] Ma X J, Liu H W. An inertial Halpern-type CQ algorithm for solving split feasibility problems in Hilbert spaces. Journal of Applied Mathematics and Computing, 2021, 68: 1699-1717. DOI:10.1007/s12190-021-01585-y
[48] Sahu D R, Cho Y J, Dong Q L, et al. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numerical Algorithms, 2021, 87(3): 1075-1095. DOI:10.1007/s11075-020-00999-2
[49] Tan B, Qin X L, Yao J C. Two modified inertial projection algorithms for bilevel pseudomonotone variational inequalities with applications to optimal control problems. Numerical Algorithms, 2021, 88(4): 1757-1786. DOI:10.1007/s11075-021-01093-x
[50] Maingé P E, Gobinddass M L. Convergence of one-step projected gradient methods for variational inequalities. Journal of Optimization Theory and Applications, 2016, 171(1): 146-168. DOI:10.1007/s10957-016-0972-4
[51] Malitsky Y V. Projected reflected gradient methods for monotone variational inequalities. SIAM Journal on Optimization, 2015, 25(1): 502-520. DOI:10.1137/14097238X