Journal of Harbin Institute of Technology (New Series)  2022, Vol. 29 Issue (4): 41-48  DOI: 10.11916/j.issn.1005-9113.2020026

Citation 

Jiajia Cheng, Hongwei Liu. Modified Subgradient Extragradient Method for Pseudomonotone Variational Inequalities[J]. Journal of Harbin Institute of Technology (New Series), 2022, 29(4): 41-48.   DOI: 10.11916/j.issn.1005-9113.2020026

Corresponding author

Jiajia Cheng, master student. E-mail: chengjiajia_shanxi@163.com

Article history

Received: 2020-06-10
Modified Subgradient Extragradient Method for Pseudomonotone Variational Inequalities
Jiajia Cheng, Hongwei Liu     
School of Mathematics and Statistics, Xidian University, Xi'an 710126, China
Abstract: Many approaches have been put forward to solve the variational inequality problem, among which the subgradient extragradient method is one of the most effective. This paper proposes a modified subgradient extragradient method for the classical variational inequality in a real Hilbert space. By exploiting partial information about the operator, the proposed method adopts a non-monotonic step length strategy which requires no line search and is independent of the value of the Lipschitz constant, and the method is extended to solve pseudomonotone variational inequality problems. Meanwhile, the method requires merely one mapping value and one projection onto the feasible set at every iteration. In addition, without knowledge of the Lipschitz constant of the underlying mapping, weak convergence is proved and an R-linear convergence rate is established for the algorithm. Several numerical results further illustrate that the method is superior to other algorithms.
Keywords: variational inequality    subgradient extragradient method    non-monotonic stepsize strategy    pseudomonotone mapping    
0 Introduction

For a real Hilbert space H with inner product 〈·, ·〉 and norm ‖·‖, strong and weak convergence of a sequence {xn} to a point x are distinguished and expressed as xn→x and xn⇀x, respectively. Given a nonempty, closed, convex set C⊆H, the variational inequality problem (VI (P)) is considered: a point x′∈C is sought such that

$ \left\langle\boldsymbol{x}-\boldsymbol{x}^{\prime}, F\left(\boldsymbol{x}^{\prime}\right)\right\rangle \geqslant 0, \forall \boldsymbol{x} \in \boldsymbol{C} $ (1)

It is well known that x′ is a solution of inequality (1) if and only if x′=PC(x′−λF(x′)) for any λ>0.
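This fixed-point characterization also provides a convenient numerical residual: x is a solution exactly when ‖x−PC(x−λF(x))‖=0. A minimal Python sketch of this check (the function name is illustrative, and the default projection assumes C is the nonnegative orthant):

```python
import numpy as np

def natural_residual(x, F, lam=1.0, proj=lambda z: np.maximum(z, 0.0)):
    """Residual ||x - P_C(x - lam * F(x))||; it vanishes iff x solves VI(P).

    The default proj is the projection onto the nonnegative orthant,
    an illustrative assumption; pass the projection for your own C.
    """
    return np.linalg.norm(x - proj(x - lam * F(x)))
```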

Variational inequality is a particularly practical model. Quite a few problems can be expressed as VI (P), for instance, nonlinear equations, complementarity problems, obstacle problems, and many others[1-2]. In recent decades, many projection algorithms for solving VI (P) have been proposed and developed[3-11].

The most classical algorithm is the projected gradient method[12-13]:

$ \boldsymbol{x}_{n+1}=P_C\left(\boldsymbol{x}_n-\lambda F\left(\boldsymbol{x}_n\right)\right) $

Convergence of this algorithm is guaranteed only when the mapping F is (inverse) strongly monotone and L-Lipschitz continuous and the step length satisfies λ∈(0, (2η)/L2), where L>0 and η>0 denote the Lipschitz and strong monotonicity constants, respectively. To solve the monotone L-Lipschitz continuous variational inequality, Korpelevich[14] proposed an extragradient algorithm. This method has attracted the attention of many researchers and has inspired a variety of improved methods[15-19]. Later, Censor et al.[5] introduced an algorithm in which the second projection is moved from C onto a specific half-space, which is called the subgradient extragradient algorithm. Its form is as follows:

$ \boldsymbol{z}_{n+1}=P_{T_n}\left(\boldsymbol{z}_n-\lambda F \boldsymbol{x}_n\right), \boldsymbol{x}_{n+1}=P_C\left(\boldsymbol{z}_{n+1}-\lambda F \boldsymbol{z}_{n+1}\right) $

where

$ \lambda \in\left(0, \frac{1}{L}\right), \boldsymbol{T}_n=\left\{\boldsymbol{x} \in \boldsymbol{H}:\left\langle\boldsymbol{x}_n-\boldsymbol{x}, \boldsymbol{x}_n+\lambda F \boldsymbol{z}_n-\boldsymbol{z}_n\right\rangle \leqslant 0\right\} $
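A practical advantage of replacing C by the half-space Tn is that projections onto half-spaces have a closed form, whereas PC may be expensive for a general C. A brief Python sketch of this standard formula (the names are illustrative, not from the source):

```python
import numpy as np

def project_halfspace(u, w, c):
    """Project u onto the half-space {x : <w, x> <= c}.

    A feasible u is returned unchanged; otherwise u is shifted
    against w by exactly the amount of constraint violation.
    """
    viol = w.dot(u) - c
    if viol <= 0.0 or not w.any():
        return u
    return u - (viol / w.dot(w)) * w
```

For the set Tn above, taking w=−(xn+λFzn−zn) and c=〈w, xn〉 turns the defining inequality into this form, so the projection onto Tn is computed in one line.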

In the subgradient extragradient method, two values of F need to be calculated per iteration. Popov[20] and, more recently, Malitsky and Semenov[3] studied an algorithm that needs merely one F value per iteration, known as the modified subgradient extragradient algorithm, but it requires knowledge of the Lipschitz constant L, or an estimate of it, in advance. In 2018, Yang et al.[21] proposed a modified subgradient extragradient method; when rather strict conditions are satisfied, the algorithm is proved to be at least R-linearly convergent. On this basis, this paper weakens these conditions and solves some more general VI (P) under a non-monotonic stepsize strategy.

The rest of this article is organized as follows. Section 1 reviews several definitions and lemmas for later use. Section 2 states a subgradient extragradient method with a non-monotonic stepsize strategy. Section 3 consists of two parts: the weak convergence and the R-linear convergence rate under the hypotheses of pseudomonotonicity and strong pseudomonotonicity, respectively. In the last section, two classical examples are given to illustrate the advantages of the proposed algorithm.

1 Preliminaries

Several notations and lemmas are provided for later use.

Definition 1.1[22-23]  For any α, β∈H, a mapping F on H is said to be

(a) strongly monotone

$ \exists \gamma>0 \text {, s.t. } \gamma\|\boldsymbol{\alpha}-\boldsymbol{\beta}\|^2 \leqslant\langle\boldsymbol{\alpha}-\boldsymbol{\beta}, F \boldsymbol{\alpha}-F \boldsymbol{\beta}\rangle $

(b) monotone

$ \langle\boldsymbol{\alpha}-\boldsymbol{\beta}, F \boldsymbol{\beta}\rangle \leqslant\langle\boldsymbol{\alpha}-\boldsymbol{\beta}, F \boldsymbol{\alpha}\rangle $

(c) strongly pseudomonotone

$ \begin{array}{r} \exists \gamma>0 \text {, s.t. } 0 \leqslant\langle\boldsymbol{\beta}-\boldsymbol{\alpha}, F \boldsymbol{\alpha}\rangle \Rightarrow \\ \gamma\|\boldsymbol{\beta}-\boldsymbol{\alpha}\|^2 \leqslant\langle\boldsymbol{\beta}-\boldsymbol{\alpha}, F \boldsymbol{\beta}\rangle \end{array} $

(d) pseudomonotone

$ 0 \leqslant\langle\boldsymbol{\alpha}-\boldsymbol{\beta}, F \boldsymbol{\beta}\rangle \Rightarrow 0 \leqslant\langle\boldsymbol{\alpha}-\boldsymbol{\beta}, F \boldsymbol{\alpha}\rangle $

(e) L-Lipschitz continuous

$ \exists L>0 \text {, s.t. }\|F \boldsymbol{\alpha}-F \boldsymbol{\beta}\| \leqslant L\|\boldsymbol{\alpha}-\boldsymbol{\beta}\| $

(f) sequentially weakly continuous: for any sequence {αn} with αn⇀α, it holds that Fαn⇀Fα.

Remark 1[22]  The implications (a)⇒(b) and (c)⇒(d) are obvious. Furthermore, either (a) or (c) ensures that VI (P) has at most one solution.
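A one-line example (illustrative, not taken from the source) showing that (d) is strictly weaker than (b): the map

$ F(\boldsymbol{x})=\mathrm{e}^{-\|\boldsymbol{x}\|^2} \boldsymbol{x} $

is pseudomonotone, since 〈β−α, F(α)〉≥0 forces 〈β−α, α〉≥0 and hence 〈β−α, β〉=〈β−α, α〉+‖β−α‖2≥0; yet already on R it is not monotone, because $\left(x \mathrm{e}^{-x^2}\right)^{\prime}=\mathrm{e}^{-x^2}\left(1-2 x^2\right)<0$ for $|x|>1 / \sqrt{2}$.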

Lemma 1.1[24]  For a nonempty closed convex set C⊆H, any α∈H and β∈C satisfy

(a) 〈PCαα, βPCα 〉≥0

(b) ‖PCαα2+‖PC αβ2≤‖ αβ2

Lemma 1.2[21]  (Peter-Paul inequality) For any real numbers a and b and any $\bar{\varepsilon}>0$, the following inequality holds:

$ 2 a b \leqslant \frac{a^2}{\bar{\varepsilon}}+\bar{\varepsilon} b^2 $
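For completeness, this follows from expanding a perfect square:

$ 0 \leqslant\left(\frac{a}{\sqrt{\bar{\varepsilon}}}-\sqrt{\bar{\varepsilon}} b\right)^2=\frac{a^2}{\bar{\varepsilon}}-2 a b+\bar{\varepsilon} b^2 $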

Lemma 1.3[25]  Let {an} and {bn} be two nonnegative real sequences for which there exists N>0 such that an+1≤an−bn, ∀n≥N. Then {an} is convergent and $\lim\limits_{n \rightarrow \infty} b_n=0$.

Lemma 1.4[26]  For VI (P), let C⊆H be a nonempty closed convex set and let F: C→H be pseudomonotone and continuous. Then α′ is a solution of VI (P) if and only if

$ \left\langle\boldsymbol{\alpha}-\boldsymbol{\alpha}^{\prime}, F \boldsymbol{\alpha}\right\rangle \geqslant 0, \quad \forall \boldsymbol{\alpha} \in \boldsymbol{C} $

Lemma 1.5 (Opial)  Consider a sequence {αn}⊆H. If αn⇀α, then

$ \liminf\limits_{n \rightarrow \infty}\left\|\boldsymbol{\alpha}_n-\boldsymbol{\alpha}\right\|<\liminf\limits_{n \rightarrow \infty}\left\|\boldsymbol{\alpha}_n-\overline{\boldsymbol{\alpha}}\right\|, \forall \overline{\boldsymbol{\alpha}} \neq \boldsymbol{\alpha} $
2 Algorithm

The proposed subgradient extragradient algorithm incorporates a non-monotonic stepsize strategy; both are stated in this section.

Algorithm A

Step 1   Take

$ \boldsymbol{x}_0=\boldsymbol{y}_0 \in \boldsymbol{H}, \lambda_0 \geqslant \lambda_1>0, \boldsymbol{\alpha} \in(0, 2-\sqrt{2}) $

Choose a non-negative real sequence {pn} which satisfies $\sum\limits_{n=1}^{\infty} p_n<+\infty$. Calculate

$ \boldsymbol{x}_1=P_C\left(\boldsymbol{x}_0-\lambda_0 F\left(\boldsymbol{y}_0\right)\right), \boldsymbol{y}_1=P_C\left(\boldsymbol{x}_1-\lambda_1 F\left(\boldsymbol{y}_0\right)\right) $

Step 2  Given the current iterates yn-1, xn, and yn, construct

$ \boldsymbol{T}_n=\left\{\boldsymbol{x} \in \boldsymbol{H}:\left\langle\boldsymbol{y}_n-\boldsymbol{x}, \boldsymbol{x}_n-\lambda_n F\left(\boldsymbol{y}_{n-1}\right)-\boldsymbol{y}_n\right\rangle \geqslant 0\right\} $

and compute

$ \boldsymbol{x}_{n+1}=P_{\boldsymbol{T}_n}\left(\boldsymbol{x}_n-\lambda_n F\left(\boldsymbol{y}_n\right)\right) $

Step 3  Compute the stepsize

$ \lambda_{n+1}=\left\{\begin{array}{ll} \min \left\{\frac{\boldsymbol{\alpha}\left\|\boldsymbol{x}_{n+1}-\boldsymbol{y}_n\right\|^2+\frac{\boldsymbol{\alpha}}{2}\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2}{2\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\rangle}, \lambda_n+p_n\right\}, & \left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\rangle>0 \\ \lambda_n+p_n, & \text { otherwise } \end{array}\right. $

Step 4  Compute

$ \boldsymbol{y}_{n+1}=P_C\left(\boldsymbol{x}_{n+1}-\lambda_{n+1} F\left(\boldsymbol{y}_n\right)\right) $

If xn+1=xn and yn=yn-1 (or yn+1=xn+1=yn), stop: yn is a solution. Otherwise, set n:=n+1 and go to Step 2.
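To make the iteration concrete, the following Python sketch implements Algorithm A under stated assumptions: H=Rm, the caller supplies F and the projection onto C, and the defaults (including pn=1/n2) are illustrative choices rather than prescriptions of the analysis. The projection onto the half-space Tn is computed in closed form.

```python
import numpy as np

def algorithm_a(F, proj_C, x0, lam0=0.3, lam1=0.3, alpha=0.5,
                p=lambda n: 1.0 / n**2, tol=1e-6, max_iter=10000):
    """Sketch of Algorithm A; alpha must lie in (0, 2 - sqrt(2))."""
    y_prev = np.asarray(x0, dtype=float)       # y_0 (= x_0)
    x = proj_C(y_prev - lam0 * F(y_prev))      # x_1 = P_C(x_0 - lam_0 F(y_0))
    y = proj_C(x - lam1 * F(y_prev))           # y_1 = P_C(x_1 - lam_1 F(y_0))
    lam = lam1
    for n in range(1, max_iter + 1):
        Fy_prev, Fy = F(y_prev), F(y)
        # T_n = {z : <y_n - z, v> >= 0} with v = x_n - lam_n F(y_{n-1}) - y_n;
        # project u = x_n - lam_n F(y_n) onto this half-space in closed form.
        v = x - lam * Fy_prev - y
        u = x - lam * Fy
        s = v.dot(u - y)                       # positive s means u is infeasible
        x_new = u if s <= 0.0 else u - (s / v.dot(v)) * v
        # Step 3: non-monotonic stepsize update.
        d = (x_new - y).dot(Fy_prev - Fy)
        num = alpha * (x_new - y).dot(x_new - y) \
            + 0.5 * alpha * (y_prev - y).dot(y_prev - y)
        lam_new = min(num / (2.0 * d), lam + p(n)) if d > 0.0 else lam + p(n)
        # Step 4.
        y_new = proj_C(x_new - lam_new * F(y))
        if np.linalg.norm(x_new - x) <= tol:   # stopping rule used in Section 4
            return y_new
        x, y_prev, y, lam = x_new, y, y_new, lam_new
    return y
```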

Lemma 2.1  The step-length sequence {λn} produced by Algorithm A satisfies $\lim\limits_{n \rightarrow \infty} \lambda_n=\lambda$ with $\min \left\{\frac{\boldsymbol{\alpha}}{\sqrt{2} L}, \lambda_1\right\} \leqslant \lambda \leqslant \lambda_1+P, \text { where } P=\sum\limits_{n=1}^{\infty} p_n$.

Proof  Since F is an L-Lipschitz continuous mapping, if 〈xn+1−yn, F(yn-1)−F(yn)〉>0, then

$ \begin{aligned} &\frac{\boldsymbol{\alpha}\left\|\boldsymbol{x}_{n+1}-\boldsymbol{y}_n\right\|^2+\frac{\boldsymbol{\alpha}}{2}\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2}{2\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\rangle} \geqslant\\ &\frac{\sqrt{2} \boldsymbol{\alpha}\left\|\boldsymbol{x}_{n+1}-\boldsymbol{y}_n\right\|\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|}{2\left\|\boldsymbol{x}_{n+1}-\boldsymbol{y}_n\right\|\left\|F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\|} \geqslant\\ &\frac{\boldsymbol{\alpha}\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|}{\sqrt{2} L\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|}=\frac{\boldsymbol{\alpha}}{\sqrt{2} L} \end{aligned} $

By the definition of λn+1 and mathematical induction, the sequence {λn} is bounded above by λ1+P and below by $\min \left\{\frac{\boldsymbol{\alpha}}{\sqrt{2} L}, \lambda_1\right\}$. From the expression for λn, set tn+1=λn+1−λn; the following inequalities are obtained:

$ t_{n+1}^{+} \leqslant p_n, \sum\limits_{n=1}^{\infty} p_n<+\infty $

Here $t_{n+1}^{+}=\max \left\{0, t_{n+1}\right\}, t_{n+1}^{-}=\max \left\{0, -t_{n+1}\right\}$. By the above inequality, the positive-term series $\sum\limits_{n=1}^{\infty} t_{n+1}^{+}$ is convergent. The next task is to show that $\sum\limits_{n=1}^{\infty} t_{n+1}^{-}$ also converges. Suppose, on the contrary, that $\sum\limits_{n=1}^{\infty} t_{n+1}^{-}=+\infty$. Note that

$ t_{n+1}=t_{n+1}^{+}-t_{n+1}^{-} $

which implies that

$ \lambda_{m+1}-\lambda_1=\sum\limits_{n=1}^m t_{n+1}=\sum\limits_{n=1}^m t_{n+1}^{+}-\sum\limits_{n=1}^m t_{n+1}^{-} $

Letting m→+∞ in the above equation would give λm+1→−∞, which conflicts with the boundedness of {λn} established above; hence $\sum\limits_{n=1}^{\infty} t_{n+1}^{-}$ converges. Because $\sum\limits_{n=1}^{\infty} t_{n+1}^{+}$ and $\sum\limits_{n=1}^{\infty} t_{n+1}^{-}$ are both convergent, letting m→+∞ in the above formula shows that $\lim\limits_{m \rightarrow \infty} \lambda_m=\lambda$ exists. Further, since {λn} has the lower bound $\min \left\{\frac{\boldsymbol{\alpha}}{\sqrt{2} L}, \lambda_1\right\}$ and the upper bound λ1+P, there must be $\min \left\{\frac{\boldsymbol{\alpha}}{\sqrt{2} L}, \lambda_1\right\} \leqslant \lambda \leqslant \lambda_1+P$.

3 Convergence Analysis

The relevant convergence results are given in this section. First, the weak convergence of the generated sequences to a solution is analyzed; then the R-linear convergence rate is obtained. The following hypotheses are made about VI (P):

(A1) The solution set S of inequality (1) is nonempty;

(A2) F is pseudomonotone over H and weakly sequentially continuous over C;

(A3) F is L-Lipschitz continuous over H.

Convergence of the algorithm is discussed below. The following lemma is essential for the main theorem.

Lemma 3.1  The two sequences {xn} and {yn} produced by Algorithm A are bounded.

Proof  For v∈S, since yn∈C, there naturally holds

$ \left\langle\boldsymbol{y}_n-\boldsymbol{v}, F(\boldsymbol{v})\right\rangle \geqslant 0, \quad \forall n \geqslant 1 $

Then 〈yn−v, F(yn)〉≥0 by the pseudomonotonicity of F, which in turn is equivalent to

$ \left\langle\boldsymbol{y}_n-\boldsymbol{x}_{n+1}, F\left(\boldsymbol{y}_n\right)\right\rangle \geqslant\left\langle\boldsymbol{v}-\boldsymbol{x}_{n+1}, F\left(\boldsymbol{y}_n\right)\right\rangle $ (2)

The proof is analogous to that of Lemma 3.3 in Ref. [23]; it can be obtained that

$ \begin{aligned} \left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2 \leqslant &\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{x}_n-\boldsymbol{y}_{n-1}\right\|^2- \\ &\left(1-\frac{1}{\sqrt{2}}\right)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2-\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2+ \\ &2 \lambda_n\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\rangle \end{aligned} $ (3)

In accordance with the definition of λn, inequality (3) yields that

$ \begin{aligned} &\left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2 \leqslant \\ &\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2- \\ &\left(1-\frac{1}{\sqrt{2}}\right)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2-(2-\sqrt{2})\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2+ \\ &\frac{\lambda_n}{\lambda_{n+1}}\left(\boldsymbol{\alpha}\left\|\boldsymbol{x}_{n+1}-\boldsymbol{y}_n\right\|^2+\frac{\boldsymbol{\alpha}}{2}\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2\right)= \\ &\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2- \\ &\left(1-\frac{1}{\sqrt{2}}-\frac{\boldsymbol{\alpha}}{2} \frac{\lambda_n}{\lambda_{n+1}}\right)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2- \\ &\left(2-\sqrt{2}-\boldsymbol{\alpha} \frac{\lambda_n}{\lambda_{n+1}}\right)\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2 \end{aligned} $ (4)

Taking the limit n→∞, since $\lim\limits_{n \rightarrow \infty} \lambda_n=\lambda>0$ by Lemma 2.1 (so that λn/λn+1→1) and $\boldsymbol{\alpha} \in(0, 2-\sqrt{2})$,

$ \begin{aligned} &1-\frac{1}{\sqrt{2}}-\frac{\boldsymbol{\alpha}}{2} \cdot \frac{\lambda_n}{\lambda_{n+1}} \rightarrow 1-\frac{1}{\sqrt{2}}-\frac{\boldsymbol{\alpha}}{2}>0 \\ &2-\sqrt{2}-\boldsymbol{\alpha} \frac{\lambda_n}{\lambda_{n+1}} \rightarrow 2-\sqrt{2}-\boldsymbol{\alpha}>0 \end{aligned} $

Thus there exist N1>0 and G1>0 such that for any n>N1, there is

$ 1-\frac{1}{\sqrt{2}}-\frac{\boldsymbol{\alpha}}{2} \cdot \frac{\lambda_n}{\lambda_{n+1}}>G_1, 2-\sqrt{2}-\boldsymbol{\alpha} \frac{\lambda_n}{\lambda_{n+1}}>G_1 $

Inequality (4) can thus be rewritten as

$ \begin{gathered} \left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2 \leqslant \\ \left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2- \\ G_1\left(\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2+\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2\right) \end{gathered} $ (5)

For n>N1, set

$ \begin{gathered} a_n=\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2 \\ b_n=G_1\left(\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2+\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2\right) \end{gathered} $

Therefore, inequality (5) can be rewritten as an+1≤an−bn, ∀n≥N1. According to Lemma 1.3, {an} is convergent and $\lim\limits_{n \rightarrow \infty} b_n=0$. Since ‖xn−v‖2≤an, the sequence {xn} is bounded and

$ \lim\limits_{n \rightarrow \infty}\left\|\boldsymbol{y}_n-\boldsymbol{y}_{n-1}\right\|=\lim\limits_{n \rightarrow \infty}\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|=0 $

By means of the triangle inequality, $\lim\limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_n-\boldsymbol{y}_n\right\|=0$ as well, and {yn} is bounded, too.

Theorem 3.1  Suppose that the sequences {xn} and {yn} are produced by Algorithm A. Then they converge weakly to the same point x′∈S.

Proof  Lemma 3.1 yields that there is a subsequence {xnk}⊆{xn} that converges weakly to a point x′. Obviously {ynk} also converges weakly to x′∈C.

Since ynk+1=PC(xnk+1λnk+1F(ynk)), applying Lemma 1.1(a), there is

$ \left\langle z-\boldsymbol{y}_{n_k+1}, \boldsymbol{y}_{n_k+1}-\boldsymbol{x}_{n_k+1}+\lambda_{n_k+1} F\left(\boldsymbol{y}_{n_k}\right)\right\rangle \geqslant 0, \forall z \in \boldsymbol{C} $

or equivalently

$ \begin{aligned} 0 \leqslant &\left\langle\boldsymbol{z}-\boldsymbol{y}_{n_k+1}, \boldsymbol{y}_{n_k+1}-\boldsymbol{x}_{n_k+1}\right\rangle+\lambda_{n_k+1}\left\langle\boldsymbol{z}-\boldsymbol{y}_{n_k+1}, F\left(\boldsymbol{y}_{n_k}\right)\right\rangle= \\ &\left\langle\boldsymbol{z}-\boldsymbol{y}_{n_k+1}, \boldsymbol{y}_{n_k+1}-\boldsymbol{x}_{n_k+1}\right\rangle+\lambda_{n_k+1}\left\langle\boldsymbol{z}-\boldsymbol{y}_{n_k}, F\left(\boldsymbol{y}_{n_k}\right)\right\rangle+ \\ &\lambda_{n_k+1}\left\langle\boldsymbol{y}_{n_k}-\boldsymbol{y}_{n_k+1}, F\left(\boldsymbol{y}_{n_k}\right)\right\rangle \end{aligned} $

Taking k→∞ in the above expression, since λnk+1>0, the following is derived:

$ \liminf\limits_{k \rightarrow \infty}\left\langle\boldsymbol{z}-\boldsymbol{y}_{n_k}, F\left(\boldsymbol{y}_{n_k}\right)\right\rangle \geqslant 0 $ (6)

Take a positive sequence {εk} which decreases to zero. In light of inequality (6), a strictly increasing sequence of positive integers {Nk} can be found such that the inequality below holds:

$ \varepsilon_k+\left\langle \boldsymbol{z}-\boldsymbol{y}_{n_j}, F\left(\boldsymbol{y}_{n_j}\right)\right\rangle \geqslant 0, \quad \forall j \geqslant N_k $ (7)

In addition, for each k, let

$ \boldsymbol{w}_{N_k}=\frac{F\left(\boldsymbol{y}_{N_k}\right)}{\left\|F\left(\boldsymbol{y}_{N_k}\right)\right\|^2} $

Then 〈F(yNk), wNk〉=1, and it follows from inequality (7) that for each k,

$ \left\langle \boldsymbol{z}+\varepsilon_k \boldsymbol{w}_{N_k}-\boldsymbol{y}_{N_k}, F\left(\boldsymbol{y}_{N_k}\right)\right\rangle \geqslant 0 $

Furthermore, F is pseudomonotone over H, so there is

$ \left\langle \boldsymbol{z}+\varepsilon_k \boldsymbol{w}_{N_k}-\boldsymbol{y}_{N_k}, F\left(\boldsymbol{z}+\varepsilon_k \boldsymbol{w}_{N_k}\right)\right\rangle \geqslant 0 $

which corresponds to

$ \begin{gathered} \left\langle\boldsymbol{z}+\varepsilon_k \boldsymbol{w}_{N_k}-\boldsymbol{y}_{N_k}, F(\boldsymbol{z})-F\left(\boldsymbol{z}+\varepsilon_k \boldsymbol{w}_{N_k}\right)\right\rangle- \\ \varepsilon_k\left\langle\boldsymbol{w}_{N_k}, F(\boldsymbol{z})\right\rangle \leqslant\left\langle\boldsymbol{z}-\boldsymbol{y}_{N_k}, F(\boldsymbol{z})\right\rangle \end{gathered} $ (8)

It remains to verify that $\lim\limits_{k \rightarrow \infty} \varepsilon_k \boldsymbol{w}_{N_k}=0$. By the weak sequential continuity of F, {F(ynk)} converges weakly to F(x′). Assume that F(x′)≠0 (otherwise x′ is already the solution); since the norm mapping is sequentially weakly lower semicontinuous, the following result is obtained:

$ 0<\left\|F\left(\boldsymbol{x}^{\prime}\right)\right\| \leqslant \liminf\limits_{k \rightarrow \infty}\left\|F\left(\boldsymbol{y}_{n_k}\right)\right\| $

Due to {yNk}⊂{ ynk} and $\lim\limits_{k \rightarrow \infty} \varepsilon_k=0$, the following expression is deduced:

$ \begin{gathered} 0 \leqslant \limsup\limits_{k \rightarrow \infty}\left\|\varepsilon_k \boldsymbol{w}_{N_k}\right\|=\limsup\limits_{k \rightarrow \infty}\left(\frac{\varepsilon_k}{\left\|F\left(\boldsymbol{y}_{N_k}\right)\right\|}\right) \leqslant \\ \frac{\limsup\limits_{k \rightarrow \infty} \varepsilon_k}{\liminf\limits_{k \rightarrow \infty}\left\|F\left(\boldsymbol{y}_{N_k}\right)\right\|}=0 \end{gathered} $

which implies that $\lim\limits_{k \rightarrow \infty} \varepsilon_k \boldsymbol{w}_{N_k}=0$. When k→∞, the left-hand side of inequality (8) therefore tends to zero, so $\liminf\limits_{k \rightarrow \infty}\left\langle \boldsymbol{z}-\boldsymbol{y}_{N_k}, F(\boldsymbol{z})\right\rangle \geqslant 0$. Hence,

$ \begin{gathered} \left\langle\boldsymbol{z}-\boldsymbol{x}^{\prime}, F(\boldsymbol{z})\right\rangle=\lim\limits_{k \rightarrow \infty}\left\langle\boldsymbol{z}-\boldsymbol{y}_{N_k}, F(\boldsymbol{z})\right\rangle= \\ \liminf\limits_{k \rightarrow \infty}\left\langle\boldsymbol{z}-\boldsymbol{y}_{N_k}, F(\boldsymbol{z})\right\rangle \geqslant 0, \forall \boldsymbol{z} \in \boldsymbol{C} \end{gathered} $

Invoking Lemma 1.4 yields xS.

Finally, the conclusion that {xn} converges weakly to x′ is obtained. Suppose, to the contrary, that {xn} has at least two weak accumulation points x′∈S and x″∈S with x″≠x′. Let {xni} be a subsequence converging weakly to x″; from Lemma 1.5 the following can be known:

$ \begin{aligned} \lim\limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_n-\boldsymbol{x}^{\prime \prime}\right\| &=\lim\limits_{i \rightarrow \infty}\left\|\boldsymbol{x}_{n_i}-\boldsymbol{x}^{\prime \prime}\right\|=\liminf\limits_{i \rightarrow \infty}\left\|\boldsymbol{x}_{n_i}-\boldsymbol{x}^{\prime \prime}\right\|< \\ \liminf\limits_{i \rightarrow \infty}\left\|\boldsymbol{x}_{n_i}-\boldsymbol{x}^{\prime}\right\| &=\lim\limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_n-\boldsymbol{x}^{\prime}\right\|=\lim\limits_{k \rightarrow \infty}\left\|\boldsymbol{x}_{n_k}-\boldsymbol{x}^{\prime}\right\|= \\ \liminf\limits_{k \rightarrow \infty}\left\|\boldsymbol{x}_{n_k}-\boldsymbol{x}^{\prime}\right\| &<\liminf\limits_{k \rightarrow \infty}\left\|\boldsymbol{x}_{n_k}-\boldsymbol{x}^{\prime \prime}\right\|=\lim\limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_n-\boldsymbol{x}^{\prime \prime}\right\| \end{aligned} $

This contradiction shows that the weak accumulation point is unique, so xn⇀x′ is obtained. Since $\lim\limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_n-\boldsymbol{y}_n\right\|=0$, there is yn⇀x′ as well. The proof is complete.

Subsequently, (A2) is replaced with (A2*) to prove that Algorithm A has an R-linear convergence rate.

(A2*) F is strongly pseudomonotone over H.

Theorem 3.2  If the suppositions (A1), (A2*), and (A3) hold, then the sequence {xn} produced by Algorithm A converges strongly, at least R-linearly, to the unique solution u of inequality (1).

Proof  The inequality 〈yn−v, F(v)〉≥0, ∀v∈S, is true. According to the hypothesis that F is strongly pseudomonotone, the following formula holds:

$ \gamma\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2 \leqslant\left\langle\boldsymbol{y}_n-\boldsymbol{v}, F\left(\boldsymbol{y}_n\right)\right\rangle $

It can also be written as

$ \begin{aligned} \gamma &\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2 \leqslant\left\langle\boldsymbol{y}_n-\boldsymbol{x}_{n+1}, F\left(\boldsymbol{y}_n\right)\right\rangle-\\ &\left\langle\boldsymbol{v}-\boldsymbol{x}_{n+1}, F\left(\boldsymbol{y}_n\right)\right\rangle \end{aligned} $ (9)

Therefore, invoking the definition of Tn in Algorithm A yields that

$ \begin{aligned} &\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, \boldsymbol{x}_n-\lambda_n F\left(\boldsymbol{y}_n\right)-\boldsymbol{y}_n\right\rangle \leqslant \\ &\lambda_n\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\rangle \end{aligned} $ (10)

Combining Lemma 1.1 (b) and inequality (9) with inequality (10), the following can be derived:

$ \begin{aligned} \left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2 \leqslant &\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2-\left\|\boldsymbol{x}_n-\boldsymbol{x}_{n+1}\right\|^2+2 \lambda_n\left\langle\boldsymbol{v}-\boldsymbol{x}_{n+1}, F\left(\boldsymbol{y}_n\right)\right\rangle \leqslant \\ &\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2-\left\|\boldsymbol{x}_{n+1}-\boldsymbol{y}_n\right\|^2-\left\|\boldsymbol{y}_n-\boldsymbol{x}_n\right\|^2+ \\ &2\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, \boldsymbol{x}_n-\boldsymbol{y}_n-\lambda_n F\left(\boldsymbol{y}_n\right)\right\rangle-2 \gamma \lambda_n\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2 \leqslant \\ &\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2-\left\|\boldsymbol{x}_n-\boldsymbol{y}_{n-1}\right\|^2-\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2+ \\ &2\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|\left\|\boldsymbol{x}_n-\boldsymbol{y}_{n-1}\right\|-\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2+ \\ &2 \lambda_n\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\rangle-2 \gamma \lambda_n\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2 \end{aligned} $ (11)

From Lemma 1.2, substituting $\bar{\varepsilon}=\sqrt{2}$ into inequality (11) gives us inequality (12), namely,

$ \begin{aligned} \left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2 \leqslant &\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{x}_n-\boldsymbol{y}_{n-1}\right\|^2- \\ &\left(1-\frac{1}{\sqrt{2}}\right)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2-\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2+ \\ &2 \lambda_n\left\langle\boldsymbol{x}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_{n-1}\right)-F\left(\boldsymbol{y}_n\right)\right\rangle- \\ &2 \gamma \lambda_n\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2 \end{aligned} $ (12)

From the definition of λn, there exist N2>0 and λ>0 such that λn>λ for all n>N2. The above formula can then be arranged as

$ \begin{array}{r} \left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2 \leqslant \\ \left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2- \\ \left(1-\frac{1}{\sqrt{2}}\right)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2-(2-\sqrt{2}) \cdot \\ \left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2+\frac{\lambda_n}{\lambda_{n+1}}\left(\boldsymbol{\alpha}\left\|\boldsymbol{x}_{n+1}-\boldsymbol{y}_n\right\|^2+\right. \\ \left.\frac{\boldsymbol{\alpha}}{2}\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2\right)-2 \gamma \lambda_n\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2 \leqslant \\ \left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2- \\ 2 \gamma \lambda\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2-\left(1-\frac{1}{\sqrt{2}}-\frac{\boldsymbol{\alpha}}{2} \cdot \frac{\lambda_n}{\lambda_{n+1}}\right) \cdot \\ \left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2-\left(2-\sqrt{2}-\boldsymbol{\alpha} \cdot \frac{\lambda_n}{\lambda_{n+1}}\right) \cdot \\ \left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2 \end{array} $ (13)

For brevity, denote

$ \begin{aligned} \varepsilon &=\frac{1}{4} \cdot\left(4-2 \sqrt{2}+\frac{2-\sqrt{2}-\boldsymbol{\alpha}}{\gamma \lambda}+\right.\\ & {\left.\left[\left(4-2 \sqrt{2}+\frac{2-\sqrt{2}-\boldsymbol{\alpha}}{\gamma \lambda}\right)^2+8(2 \sqrt{2}-2)\right]^{1 / 2}\right) } \end{aligned} $

which implies that

$ \varepsilon>\frac{4-2 \sqrt{2}+\sqrt{(4-2 \sqrt{2})^2+16(\sqrt{2}-1)}}{4}=1 $

Invoking Lemma 1.2 again, one can deduce

$ \begin{aligned} &-\left\|\boldsymbol{y}_n-\boldsymbol{v}\right\|^2=-\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2-\left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2- \\ &2\left\langle\boldsymbol{y}_n-\boldsymbol{x}_{n+1}, \boldsymbol{x}_{n+1}-\boldsymbol{v}\right\rangle \leqslant(\varepsilon-1) \cdot \\ &\quad\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2+\left(\frac{1}{\varepsilon}-1\right) \cdot\left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2 \end{aligned} $ (14)

Using inequality (14), the following formula is obtained by rearranging inequality (13) and shifting terms:

$ \begin{aligned} &\left(1-2 \gamma \lambda\left(\frac{1}{\varepsilon}-1\right)\right)\left(\left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2+(\sqrt{2}-1) \cdot\right. \\ &\left.\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2\right) \leqslant\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1) \cdot \\ &\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2-\left(1-\frac{1}{\sqrt{2}}-\frac{\boldsymbol{\alpha}}{2} \cdot \frac{\lambda_n}{\lambda_{n+1}}\right) \cdot \\ &\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2-\left(2-\sqrt{2}-\boldsymbol{\alpha} \cdot \frac{\lambda_n}{\lambda_{n+1}}+(4-\right. \\ &\left.\left.2 \sqrt{2}-2 \varepsilon+\frac{2 \sqrt{2}-2}{\varepsilon}\right) \gamma \lambda\right) \cdot\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2 \end{aligned} $ (15)

Similar to the treatment of inequality (4), there exist N3>N2>0 and G2>0 such that for any n>N3, there is

$ \begin{aligned} &1-\frac{1}{\sqrt{2}}-\frac{\boldsymbol{\alpha}}{2} \cdot \frac{\lambda_n}{\lambda_{n+1}}>G_2 \\ &2-\sqrt{2}-\boldsymbol{\alpha} \cdot \frac{\lambda_n}{\lambda_{n+1}}+ \\ &\left(4-2 \sqrt{2}-2 \varepsilon+\frac{2 \sqrt{2}-2}{\varepsilon}\right) \gamma \lambda>G_2 \end{aligned} $

Set $\rho=-2 \gamma \lambda\left(\frac{1}{\varepsilon}-1\right)=2 \gamma \lambda\left(1-\frac{1}{\varepsilon}\right)$. Since ε>1, obviously ρ>0. For all n>N3, set

$ \begin{gathered} a_n=\left\|\boldsymbol{x}_n-\boldsymbol{v}\right\|^2+(\sqrt{2}-1)\left\|\boldsymbol{y}_{n-1}-\boldsymbol{x}_n\right\|^2 \\ b_n=G_2\left(\left\|\boldsymbol{y}_{n-1}-\boldsymbol{y}_n\right\|^2+\left\|\boldsymbol{y}_n-\boldsymbol{x}_{n+1}\right\|^2\right) \end{gathered} $

Then inequality (15) yields

$ (1+\rho) a_{n+1} \leqslant a_n-b_n, \forall n>N_3 $

By induction, it may be deduced that for all n>N3 there is

$ \begin{gathered} \left\|\boldsymbol{x}_{n+1}-\boldsymbol{v}\right\|^2 \leqslant a_{n+1} \leqslant \frac{1}{1+\rho} a_n \leqslant \cdots \leqslant \\ \left(\frac{1}{1+\rho}\right)^{n+1-N_3} a_{N_3} \end{gathered} $

which completes the proof.

Remark 2  Compared with Theorem 2.1 in Ref. [23], the above theorem has three major advantages:

1) Fewer parameters are used;

2) A non-monotonic stepsize strategy is adopted instead of the monotonic decreasing stepsize;

3) The previous assumptions are weakened, namely, monotonicity is replaced with pseudomonotonicity and weak sequential continuity of F, and strong monotonicity is replaced with strong pseudomonotonicity.

4 Numerical Results

The effectiveness of the proposed algorithm is demonstrated by comparing Algorithm A with Algorithm 1 of Ref. [21] in numerical experiments.

The number of iterations (ite.) and the computation time (t) in seconds were recorded in all tests. The condition for terminating the algorithms was ‖xn+1−xn‖≤ε, with tolerance ε=10-6 in all experiments. The following parameters were used in the algorithms mentioned below:

$ \boldsymbol{x}_0=\boldsymbol{y}_0, \lambda_0=\lambda_1=0.3, \mu=0.2, \theta=\frac{1}{1000} $

Problem 1  In the first group of numerical experiments (known as the HpHard problem, also considered in Refs. [25] and [27]), let G(x)=d+Ax with d∈Rn and A=BBT+C+D, where B, C, D∈Rn×n. Every entry of B and of the skew-symmetric matrix C is uniformly generated from (-5, 5); every entry of d and of the diagonal matrix D is uniformly generated from (-500, 0) and (0, 0.3), respectively. The feasible set is Rn+, on which PC(x)=x+=max(0, x). For all tests, x0=y0=(1, 1, ..., 1) is taken, $\boldsymbol{\alpha}=2-\sqrt{2}-\frac{1}{1000}$ is used for Algorithm A, and three random instances are generated from different choices of A and d. The results in Table 1 indicate the effectiveness of the proposed algorithm.
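One possible Python realization of this test-problem generator is sketched below (drawing the skew-symmetric part from a mirrored strict upper triangle is one reasonable reading of the description; the instance can then be fed to the algorithm_a sketch above):

```python
import numpy as np

def make_hphard(n, seed=0):
    """Generate an HpHard instance G(x) = d + A x with A = B B^T + C + D."""
    rng = np.random.default_rng(seed)
    B = rng.uniform(-5.0, 5.0, size=(n, n))
    U = np.triu(rng.uniform(-5.0, 5.0, size=(n, n)), k=1)
    C = U - U.T                                  # skew-symmetric, entries in (-5, 5)
    D = np.diag(rng.uniform(0.0, 0.3, size=n))   # positive diagonal
    A = B @ B.T + C + D
    d = rng.uniform(-500.0, 0.0, size=n)
    G = lambda x: d + A @ x
    proj = lambda x: np.maximum(x, 0.0)          # P_C onto the nonnegative orthant
    return G, proj

# Illustrative run with the parameters reported for Algorithm A:
# G, proj = make_hphard(500)
# sol = algorithm_a(G, proj, np.ones(500), alpha=2 - np.sqrt(2) - 1e-3)
```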

Table 1 Numerical results of Problem 1

As Table 1 shows, pn controls the non-monotonicity of the stepsize. When pn=0, the stepsize decreases monotonically, and once it has become too small the iterates approach the solution only slowly. A non-monotonic stepsize can increase as well as decrease, so a suitable step is found quickly and the stopping criterion is reached sooner. It can also be seen that the non-monotonic stepsize requires fewer iterations and less time than the monotonic one.

Problem 2  The second problem has been considered in Refs. [28] and [29], where

$ \begin{gathered} \boldsymbol{H}(\boldsymbol{x})=\left(h_1(\boldsymbol{x}), h_2(\boldsymbol{x}), \ldots, h_m(\boldsymbol{x})\right) \\ h_i(\boldsymbol{x})=x_i^2+x_{i-1}^2+x_{i+1} x_i+x_i x_{i-1}+x_{i+1}+4 x_i- \\ 2 x_{i-1}-1 \\ x_0=x_{m+1}=0, i=1, 2, \ldots, m \end{gathered} $

The feasible set is C=Rm+ and PC(x)=x+=max(0, x). The initial point is taken as x0=y0=(0, 0, ..., 0). Table 2 collects the results.
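A direct Python transcription of H (handling the boundary convention x0=xm+1=0 by zero padding; it can again be run with the algorithm_a sketch above):

```python
import numpy as np

def H_map(x):
    """Componentwise h_i(x) of Problem 2, with x_0 = x_{m+1} = 0."""
    z = np.concatenate(([0.0], np.asarray(x, dtype=float), [0.0]))
    left, mid, right = z[:-2], z[1:-1], z[2:]    # x_{i-1}, x_i, x_{i+1}
    return (mid**2 + left**2 + right * mid + mid * left
            + right + 4.0 * mid - 2.0 * left - 1.0)

# proj = lambda x: np.maximum(x, 0.0)            # P_C onto R^m_+
# sol = algorithm_a(H_map, proj, np.zeros(m))    # with the dimension m set beforehand
```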

Table 2 Numerical results of Problem 2

As shown in Table 2, Algorithm A outperforms Algorithm 1 in terms of both iterations and computation time. Moreover, the larger the value of α, the better the new algorithm performs.

5 Conclusions

The paper presents a convergence analysis for Lipschitz continuous pseudomonotone VI (P). When the value of the Lipschitz constant is unknown, weak convergence of the generated sequence is proved by using a non-monotonically updated stepsize strategy. Furthermore, the R-linear convergence rate of the sequence is obtained, and the performance of the scheme is demonstrated on some numerical examples to show the effectiveness of the proposed algorithm.

References
[1] Facchinei F, Pang J S. Finite-Dimensional Variational Inequalities and Complementarity Problems. New York: Springer, 2003. DOI: 10.1007/b97543
[2] Khanh P D, Vuong P T. Modified projection method for strongly pseudomonotone variational inequalities. Journal of Global Optimization, 2014, 58(2): 341-350. DOI: 10.1007/s10898-013-0042-5
[3] Malitsky Y V, Semenov V V. An extragradient algorithm for monotone variational inequalities. Cybernetics and Systems Analysis, 2014, 50(2): 271-277. DOI: 10.1007/s10559-014-9614-8
[4] Rehman H-U, Kumam P, Cho Y J, et al. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. Journal of Inequalities and Applications, 2019, 2019: 282. DOI: 10.1186/s13660-019-2233-1
[5] Censor Y, Gibali A, Reich S. The subgradient extragradient method for solving variational inequalities in Hilbert space. Journal of Optimization Theory and Applications, 2011, 148(2): 318-335. DOI: 10.1007/s10957-010-9757-3
[6] Kraikaew R, Saejung S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. Journal of Optimization Theory and Applications, 2014, 163(2): 399-412. DOI: 10.1007/s10957-013-0494-2
[7] Maingé P E, Gobinddass M L. Convergence of one step projected gradient methods for variational inequalities. Journal of Optimization Theory and Applications, 2016, 171(1): 146-168. DOI: 10.1007/s10957-016-0972-4
[8] Rehman H-U, Kumam P, Cho Y J, et al. Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems. https://www.tandfonline.com/doi/full/10.1080/10556788.2020.1734805, 2020-12-09.
[9] Abubakar J, Kumam P, Rehman H-U, et al. Inertial iterative schemes with variable step sizes for variational inequality problem involving pseudomonotone operator. Mathematics, 2020, 8(4): 609-633. DOI: 10.3390/math8040609
[10] Yang J. Self-adaptive inertial subgradient extragradient algorithm for solving pseudomonotone variational inequalities. https://www.tandfonline.com/doi/full/10.1080/00036811.2019.1634257, 2020-12-09.
[11] Hieu D V, Strodiot J J, Muu L D. An explicit extragradient algorithm for solving variational inequalities. Journal of Optimization Theory and Applications, 2020, 185: 476-503. DOI: 10.1007/s10957-020-01661-6
[12] Lorenz D A, Pock T. An inertial forward-backward algorithm for monotone inclusions. Journal of Mathematical Imaging and Vision, 2015, 51(2): 311-325. DOI: 10.1007/s10851-014-0523-2
[13] Tseng P. A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization, 2000, 38(2): 431-446. DOI: 10.1137/s0363012998338806
[14] Korpelevič G M. The extragradient method for finding saddle points and other problems. Èkonomika i Matematicheskie Metody, 1976, 12(4): 747-756.
[15] Rehman H-U, Kumam P, Argyros I K, et al. A self-adaptive extra-gradient methods for a family of pseudomonotone equilibrium programming with application in different classes of variational inequality problems. Symmetry, 2020, 12(4): 523. DOI: 10.3390/sym12040523
[16] Rehman H-U, Kumam P, Kumam W, et al. The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry, 2020, 12(3): 463. DOI: 10.3390/sym12030463
[17] Rehman H-U, Kumam P, Argyros I K, et al. Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry, 2020, 12(4): 503. DOI: 10.3390/sym12040503
[18] Lyashko S I, Semenov V V, Voitova T A. Low-cost modification of Korpelevich's method for monotone equilibrium problems. Cybernetics and Systems Analysis, 2011, 47(4): 631-639. DOI: 10.1007/s10559-011-9343-1
[19] Malitsky Y V, Semenov V V. A hybrid method without extrapolation step for solving variational inequality problems. Journal of Global Optimization, 2015, 61(1): 193-202. DOI: 10.1007/s10898-014-0150-x
[20] Popov L D. A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical Notes of the Academy of Sciences of the USSR, 1980, 28: 845-848. DOI: 10.1007/BF01141092
[21] Yang J, Liu H W, Li G W. Convergence of a subgradient extragradient algorithm for solving monotone variational inequalities. Numerical Algorithms, 2020, 84: 389-405. DOI: 10.1007/s11075-019-00759-x
[22] Karamardian S, Schaible S. Seven kinds of monotone maps. Journal of Optimization Theory and Applications, 1990, 66(1): 37-46. DOI: 10.1007/BF00940531
[23] Thong D V, Shehu Y, Iyiola O S. Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numerical Algorithms, 2020, 84: 795-823. DOI: 10.1007/s11075-019-00780-0
[24] Goebel K, Reich S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. New York: Marcel Dekker, 1984.
[25] Malitsky Y. Proximal extrapolated gradient methods for variational inequalities. Optimization Methods and Software, 2018, 33(1): 140-164. DOI: 10.1080/10556788.2017.1300899
[26] Cottle R W, Yao J C. Pseudo-monotone complementarity problems in Hilbert space. Journal of Optimization Theory and Applications, 1992, 75(2): 281-295. DOI: 10.1007/bf00941468
[27] Solodov M V, Tseng P. Modified projection-type methods for monotone variational inequalities. SIAM Journal on Control and Optimization, 1996, 34(5): 1814-1830. DOI: 10.1137/S0363012994268655
[28] Malitsky Y. Projected reflected gradient methods for monotone variational inequalities. SIAM Journal on Optimization, 2015, 25(1): 502-520. DOI: 10.1137/14097238X
[29] Yang J, Liu H W. A self-adaptive method for pseudomonotone equilibrium problems and variational inequalities. Computational Optimization and Applications, 2020, 75(1): 423-440. DOI: 10.1007/s10589-019-00156-z