Journal of Harbin Institute of Technology (New Series)  2023, Vol. 30 Issue (5): 65-75  DOI: 10.11916/j.issn.1005-9113.2022049

Corresponding author

Yuwan Ding, Master's Degree. E-mail: dingyuwan1@163.com

Article history

Received: 2022-05-24
Inertial Subgradient Extragradient Algorithm for Solving Variational Inequality Problems with Pseudomonotonicity
Yuwan Ding, Hongwei Liu, Xiaojun Ma     
School of Mathematics and Statistics, Xidian University, Xi'an 710126, China
Abstract: In order to solve variational inequality problems with pseudomonotone and Lipschitz continuous mappings in Hilbert spaces, an inertial subgradient extragradient algorithm with non-monotone stepsizes is proposed. Moreover, weak convergence and R-linear convergence of the algorithm are established under appropriate assumptions. Finally, the efficiency of the proposed algorithm is demonstrated through numerical experiments.
Keywords: variational inequality    extragradient method    pseudomonotonicity    Lipschitz continuity    weak and linear convergence    
0 Introduction

Let $\boldsymbol{H}$ be a real Hilbert space with inner product $\langle\cdot, \cdot\rangle$ and norm $\|\cdot\|$, and let $\boldsymbol{C} \subset \boldsymbol{H}$ be nonempty, closed and convex. In this work, the discussion is mainly focused on the variational inequality problem (VIP) represented in the form:

Seek $\boldsymbol{s}^* \in \boldsymbol{C}$ such that

$\left\langle F\left(\boldsymbol{s}^*\right), \boldsymbol{\rho}-\boldsymbol{s}^*\right\rangle \geqslant 0, \forall \boldsymbol{\rho} \in \boldsymbol{C} $ (1)

where $F: \boldsymbol{H} \rightarrow \boldsymbol{H}$ is a mapping and $\Omega$ denotes the solution set of VIP (1). Here, the following presumptions hold:

(C1) The solution set $\Omega$ is nonempty, i.e., $\Omega \neq \varnothing$.

(C2) The mapping F is pseudomonotone, i.e.,

$\begin{aligned} & \langle F(\boldsymbol{c}), \boldsymbol{\rho}-\boldsymbol{c}\rangle \geqslant 0 \Rightarrow \\ & \langle F(\boldsymbol{\rho}), \boldsymbol{\rho}-\boldsymbol{c}\rangle \geqslant 0, \forall \boldsymbol{c}, \boldsymbol{\rho} \in \boldsymbol{H} \\ & \end{aligned} $

(C2') The mapping F is $\beta$-strongly pseudomonotone, i.e., there exists $\beta>0$ satisfying

$ \begin{gathered} \langle F(\boldsymbol{\rho}), \boldsymbol{c}-\boldsymbol{\rho}\rangle \geqslant 0 \Rightarrow\langle F(\boldsymbol{c}), \boldsymbol{c}-\boldsymbol{\rho}\rangle \geqslant \\ \beta\|\boldsymbol{c}-\boldsymbol{\rho}\|^2, \forall \boldsymbol{c}, \boldsymbol{\rho} \in \boldsymbol{H} \end{gathered} $

(C3) The mapping F is Lipschitz continuous, i.e.,

$ \|F(\boldsymbol{c})-F(\boldsymbol{\rho})\| \leqslant L\|\boldsymbol{c}-\boldsymbol{\rho}\|, \forall \boldsymbol{c}, \boldsymbol{\rho} \in \boldsymbol{H} $

where L>0 is a Lipschitz constant.

(C4) The feasible set $\boldsymbol{C}$ is nonempty, closed and convex.

The symbols $\rightharpoonup$ and $\rightarrow$ express weak convergence and strong convergence, respectively. $P_C: \boldsymbol{H} \rightarrow \boldsymbol{C}$ denotes the metric projection (see the definition in Section 1). It is well known that VIP (1) is equivalent to the following fixed point problem:

Seek $\boldsymbol{s}^* \in \boldsymbol{C}$ such that

$ s^*=P_C\left(s^*-\tau F\left(s^*\right)\right) $ (2)

where $\tau>0$.
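As a quick numerical illustration of equivalence (2), the sketch below checks that the VIP solution is a fixed point of $x \mapsto P_C(x-\tau F(x))$. The box feasible set $\boldsymbol{C}=[0, 1]^3$ and the affine mapping $F(x)=x-a$ are illustrative assumptions (not from this paper); for this F, the solution of VIP (1) is $P_C(a)$:

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

# Illustrative data (an assumption for this sketch): F(x) = x - a.
# For this F, the unique VIP solution over C is s* = P_C(a).
a = np.array([2.0, -1.0, 0.5])
F = lambda x: x - a
s_star = proj_box(a)

tau = 0.5
fp_residual = np.linalg.norm(s_star - proj_box(s_star - tau * F(s_star)))
print(fp_residual)  # 0.0: s* satisfies fixed point equation (2)
```

The residual is zero for any $\tau>0$, in line with equivalence (2).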

VIP (1), developed by Fichera[1-2] and Stampacchia[3], is of great importance in the field of applied mathematics, serving as a useful tool for the study of complementarity problems, transportation, network equilibrium problems and many more[4-6]. Because of this role, scholars have concentrated their attention on exploring and computing its approximate solutions, and numerous projection-type methods have been suggested to deal with VIP (1) and its associated optimization problems (refer to Refs.[7-20]).

To be specific, the original approach to solving VIP (1) is the projected gradient method, whose numerical advantage is that only one projection onto $\boldsymbol{C}$ is performed per iteration. For the convergence proof, however, F must be strongly (or inverse strongly) monotone. To weaken this strong condition, Korpelevich[21] and Antipin[22] presented the extragradient method, which needs to calculate the value of F twice and two projections onto $\boldsymbol{C}$ per iteration. However, the complex form of $\boldsymbol{C}$ in concrete applications leads to a slow convergence rate. To improve its numerical efficiency, Censor et al.[23] proposed the subgradient extragradient method by improving Korpelevich's extragradient method to solve VIP (1) in Hilbert space. The second projection of this method is onto a specific half-space and can be readily computed. They also established its weak convergence under the monotone and Lipschitz continuous assumptions on F. The subgradient extragradient method, due to its simplicity and feasibility, has been extensively researched and extended by many scholars (refer to Refs.[24-31] and the references therein). Inspired by the work of Censor et al.[23], Dong et al.[24] embedded the projection and contraction method[32-33] into the subgradient extragradient method and proposed a modified subgradient extragradient method to solve VIP (1) via the following formula:

$ \left\{\begin{array}{l} \boldsymbol{y}_n=P_C\left(\boldsymbol{c}_n-\boldsymbol{\tau}_n F\left(\boldsymbol{c}_n\right)\right) \\ \boldsymbol{T}_n:=\left\{\boldsymbol{\xi} \in \boldsymbol{H} \mid\left\langle\boldsymbol{y}_n-\boldsymbol{\xi}, \boldsymbol{c}_n-\boldsymbol{\tau}_n F\left(\boldsymbol{c}_n\right)-\boldsymbol{y}_n\right\rangle \geqslant 0\right\} \\ \boldsymbol{c}_{n+1}=P_{\boldsymbol{T}_n}\left(\boldsymbol{c}_n-\varLambda_n \boldsymbol{\tau}_n F\left(\boldsymbol{y}_n\right)\right) \end{array}\right. $ (3)

where

$ \begin{aligned} \boldsymbol{\varTheta}_n & =\boldsymbol{c}_n-\boldsymbol{y}_n-\tau_n\left(F\left(\boldsymbol{c}_n\right)-F\left(\boldsymbol{y}_n\right)\right) \\ \varLambda_n & =\frac{\left\langle\boldsymbol{c}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \end{aligned} $ (4)

and the stepsize $\tau_n$ is selected to be the largest $\tau \in \left\{\sigma, \sigma l, \sigma l^2, \ldots\right\}$ fulfilling

$ \tau\left\|F\left(c_n\right)-F\left(y_n\right)\right\| \leqslant \bar{\omega}\left\|c_n-y_n\right\| $ (5)

They proved that algorithm (3) is weakly convergent when F is Lipschitz continuous and monotone. Note that algorithm (3) runs an Armijo-like line search rule to find a proper stepsize per iteration, which leads to additional computation costs.
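For concreteness, here is a minimal NumPy sketch of method (3)-(5). The box feasible set $\boldsymbol{C}=[0, 1]^n$ and the monotone mapping $F(x)=x-a$ are illustrative assumptions, not taken from Ref.[24]; note how the Armijo-like rule (5) may evaluate F several times per iteration, which is exactly the extra cost discussed above:

```python
import numpy as np

def proj_C(x):
    """Assumed feasible set for this sketch: the box C = [0, 1]^n."""
    return np.clip(x, 0.0, 1.0)

def proj_halfspace(xi, v, y):
    """Projection onto the half-space T = {z : <v, z - y> <= 0}."""
    nv2 = v @ v
    if nv2 == 0.0:
        return xi                      # v = 0 means T is the whole space
    return xi - max(0.0, (v @ (xi - y)) / nv2) * v

def mod_subgrad_extragrad(F, c0, sigma=1.0, l=0.5, omega=0.9, iters=300):
    """Sketch of the modified subgradient extragradient method (3)-(5)."""
    c = c0.astype(float).copy()
    for _ in range(iters):
        tau = sigma                    # Armijo-like rule (5): largest tau
        while True:                    # in {sigma, sigma*l, sigma*l^2, ...}
            y = proj_C(c - tau * F(c))
            if tau * np.linalg.norm(F(c) - F(y)) <= omega * np.linalg.norm(c - y):
                break
            tau *= l
        if np.allclose(c, y):
            return y                   # c (numerically) solves the VIP
        theta = c - y - tau * (F(c) - F(y))          # Theta_n in (4)
        lam = ((c - y) @ theta) / (theta @ theta)    # Lambda_n in (4)
        v = c - tau * F(c) - y                       # normal vector of T_n
        c = proj_halfspace(c - lam * tau * F(y), v, y)   # update in (3)
    return c

# Illustrative monotone mapping (an assumption): F(x) = x - a,
# whose VIP solution over C is P_C(a) = [1, 0, 0.5].
a = np.array([2.0, -1.0, 0.5])
sol = mod_subgrad_extragrad(lambda x: x - a, np.zeros(3))
print(sol)  # close to [1. 0. 0.5]
```

Since this F is 1-Lipschitz, the inner `while` loop always backtracks from `sigma = 1.0` to `tau = 0.5`, i.e., one wasted evaluation of F and one extra projection per iteration.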

The inertial extrapolation technique was adopted by Nesterov[34] to accelerate the convergence rate of the heavy ball method[35]. Motivated by the inertial idea, Alvarez and Attouch[36] offered an inertial proximal point algorithm for finding zeros of a maximal monotone operator. Recently, inertial techniques have been employed by many researchers to accelerate the extragradient method for VIP (1)[25-31, 37-39]. However, the weak convergence of the algorithm in Ref.[24] combined with inertial techniques has yet to be considered. So, a natural question emerges as follows.

Is it possible to obtain a new modification of the subgradient extragradient method[24] such that a weak convergence theorem and numerical improvement can be gained under much weaker conditions than monotonicity and sequential weak continuity of the mapping F?

In response to the above question, concrete contributions by this work are the following:

Add an inertial effect to the modified subgradient extragradient method[24] to accelerate convergence, inspired by some excellent works[20, 26, 34, 36, 39];

Introduce a new stepsize different from those in Refs.[14, 17, 40], which overcomes the drawback of additional projections onto $\boldsymbol{C}$ per iteration and thus lowers the computational costs of the algorithm;

Present an inertial subgradient extragradient method whose weak convergence requires neither monotonicity nor sequential weak continuity of the cost mapping F, in contrast with the work by Thong et al.[18-19, 29] and by Cai et al.[20];

Ultimately, some numerical computations are presented to demonstrate the effectiveness of this newly proposed algorithm.

This article is organized as follows. Several fundamental lemmas and concepts applied in the subsequent sections are introduced in Section 1. The weak convergence theorem of the newly proposed algorithm is established in Section 2 and the R-linear convergence rate is obtained in Section 3. Numerical implementations and corresponding results are presented in Section 4, and a brief summary is displayed in Section 5.

1 Preliminaries

Suppose that $\boldsymbol{H}$ is a real Hilbert space and $\boldsymbol{C} \subset \boldsymbol{H}$ is nonempty, closed and convex. $P_C : \boldsymbol{H} \rightarrow \boldsymbol{C}$ is called the metric projection, which for every point $\boldsymbol{c} \in \boldsymbol{H}$ fulfills

$ P_C(\boldsymbol{c}):=\operatorname{argmin}\{\|\boldsymbol{c}-\boldsymbol{\rho}\| \mid \boldsymbol{\rho} \in \boldsymbol{C}\} $

The projection of $\boldsymbol{\zeta}$ onto a half-space $\boldsymbol{T}=\{\boldsymbol{u} \in \boldsymbol{H}:\langle\boldsymbol{v}, \boldsymbol{u}-\boldsymbol{c}\rangle \leqslant 0\}$ is computed by

$ P_T(\boldsymbol{\zeta})=\boldsymbol{\zeta}-\max \left\{0, \frac{\langle\boldsymbol{v}, \boldsymbol{\zeta}-\boldsymbol{c}\rangle}{\|\boldsymbol{v}\|^2}\right\} \boldsymbol{v} $ (6)

where $\boldsymbol{c} \in \boldsymbol{H}, \boldsymbol{v} \in \boldsymbol{H}$ and $\boldsymbol{v} \neq 0$[41].
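Formula (6) translates directly into code. Below is a minimal NumPy rendering; the test vectors are illustrative assumptions:

```python
import numpy as np

def proj_halfspace(zeta, v, c):
    """Eq.(6): metric projection of zeta onto the half-space
    T = {u in H : <v, u - c> <= 0}, with v != 0."""
    shift = max(0.0, (v @ (zeta - c)) / (v @ v))
    return zeta - shift * v

# Example half-space T = {u : u_1 <= 0}: v = e_1, c = 0.
v, c = np.array([1.0, 0.0]), np.zeros(2)
print(proj_halfspace(np.array([2.0, 3.0]), v, c))   # [0. 3.]  (projected)
print(proj_halfspace(np.array([-1.0, 5.0]), v, c))  # [-1. 5.] (already in T)
```

The `max{0, ...}` factor makes the projection act as the identity on points already in T, so no case analysis is needed.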

Lemma 1[11, 42] For each $\boldsymbol{\xi}, \boldsymbol{\rho}, \boldsymbol{\eta} \in \boldsymbol{H}$,

(ⅰ) $\langle\boldsymbol{\xi}-\boldsymbol{\rho}, \boldsymbol{\xi}-\boldsymbol{\eta}\rangle=0.5\|\boldsymbol{\xi}-\boldsymbol{\rho}\|^2+$ $0.5\|\boldsymbol{\xi}-\boldsymbol{\eta}\|^2-0.5\|\boldsymbol{\rho}-\boldsymbol{\eta}\|^2$

(ⅱ) $ \|\varphi \boldsymbol{\xi}+(1-\varphi) \boldsymbol{\rho}\|^2=\varphi\|\boldsymbol{\xi}\|^2+(1-\varphi)\|\boldsymbol{\rho}\|^2-\varphi(1-\varphi)\|\boldsymbol{\xi}-\boldsymbol{\rho}\|^2, \forall \varphi \in R $

Lemma 2[43] Let $\boldsymbol{\xi} \in \boldsymbol{H}$, then

(ⅰ) $\left\langle\boldsymbol{\xi}-P_C(\boldsymbol{\xi}), \boldsymbol{\rho}-P_C(\boldsymbol{\xi})\right\rangle \leqslant 0, \forall \boldsymbol{\rho} \in \boldsymbol{C}$

(ⅱ) $\left\|P_C(\boldsymbol{\xi})-P_C(\boldsymbol{\rho})\right\|^2 \leqslant \left\langle P_C(\boldsymbol{\xi})-P_C(\boldsymbol{\rho}), \boldsymbol{\xi}-\boldsymbol{\rho}\right\rangle, \forall \boldsymbol{\rho} \in \boldsymbol{H} $

Lemma 3[44] Presume that $\boldsymbol{C} \subset \boldsymbol{H}$ is nonempty, closed and convex. Let $F: \boldsymbol{C} \rightarrow \boldsymbol{H}$ be pseudomonotone and continuous. Then the following equivalence holds:

$ \boldsymbol{s}^* \in \Omega \Leftrightarrow\left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{s}^*\right\rangle \geqslant 0, \forall \boldsymbol{t} \in \boldsymbol{C} $

Lemma 4[45] Assume that $\boldsymbol{H}_1$ and $\boldsymbol{H}_2$ are two real Hilbert spaces. If the mapping $F: \boldsymbol{H}_1 \rightarrow \boldsymbol{H}_2$ fulfills uniform continuity on $\boldsymbol{D}$ and $\boldsymbol{D} \subset \boldsymbol{H}_1$ is bounded, then $F(\boldsymbol{D})$ is bounded.

Lemma 5[46-47] Presume that $\left\{\xi_n\right\}$ is a sequence of nonnegative numbers satisfying

$ \xi_{n+1} \leqslant q_n \xi_n+\varpi_n, \quad \forall n \in N $

where $\left\{\varpi_n\right\}$ and $\left\{q_n\right\}$ are nonnegative sequences fulfilling $\sum\limits_{n=1}^{\infty} \varpi_n <\infty$, $\left\{q_n\right\} \subset[1, +\infty)$ and $\sum\limits_{n=1}^{\infty}\left(q_n-1\right) <\infty$. Then $\lim \limits_{n \rightarrow \infty} \xi_n$ exists.

Lemma 6[48] Presume that $\left\{b_n\right\} \subset[0, \infty)$ and $\left\{w_n\right\} \subset[0, \infty)$ are the sequences fulfilling:

(ⅰ) $b_{n+1} \leqslant b_n+\Delta_n\left(b_n-b_{n-1}\right)+w_n, \quad \forall n \geqslant 1$;

(ⅱ) $\sum\limits_{n=1}^{\infty} w_n <\infty$

(ⅲ) $\left\{\Delta_n\right\} \subset[0, \vartheta]$, where $\vartheta \in[0, 1)$.

Then $\left\{b_n\right\}$ is a convergent sequence and $\sum\limits_{n=1}^{\infty}\left[b_{n+1}-b_n\right]_{+} <\infty$, where $[t]_{+}=\max \{t, 0\}$ (for all $t \in R$).

Lemma 7[42, 48] Presume that $\boldsymbol{C} \subset \boldsymbol{H}$ is a nonempty set and $\left\{\boldsymbol{c}_n\right\}$ is a sequence in $\boldsymbol{H}$ fulfilling:

(ⅰ) for any $\boldsymbol{s} \in \boldsymbol{C}, \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|$ exists;

(ⅱ) any sequential weak cluster point of $\left\{\boldsymbol{c}_n\right\}$ belongs to $\boldsymbol{C}$.

Then $\left\{\boldsymbol{c}_n\right\}$ converges weakly to a point in $\boldsymbol{C}$.

2 Weak Convergence

In this section, inertial effects and new stepsizes are added to the subgradient extragradient algorithm to solve VIP (1), and its weak convergence is established under conditions weaker than those in Ref.[24], namely that F is pseudomonotone and uniformly continuous. Next, some useful conditions are stated as follows.

(C5) The operator $F: \boldsymbol{H} \rightarrow \boldsymbol{H}$ fulfills the following property[25]:

Whenever $\left\{\boldsymbol{c}_n\right\} \subset \boldsymbol{C}$ and $\boldsymbol{c}_n \rightharpoonup \boldsymbol{c}$, there is $\|F(\boldsymbol{c})\| \leqslant \liminf \limits_{n \rightarrow \infty}\left\|F\left(\boldsymbol{c}_n\right)\right\|$.

Lemma 8 Suppose that the sequence $\left\{\boldsymbol{c}_n\right\}$ is generated by Algorithm 1 and that $\exists N>0$ such that $\sum\limits_{n=N}^{\infty} \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 <\infty$. Then the sequence $\left\{\boldsymbol{c}_n\right\}$ is bounded.

Algorithm 1
Step 0 Take $\tau_1>0$, $\delta \in(0, \chi) \subset(0, 1)$ and $\left\{\psi_n\right\} \subset[0, \vartheta) \subset[0, 1)$. Adopt the sequence $\left\{q_n\right\}$ satisfying Lemma 5. Let $\boldsymbol{c}_0, \boldsymbol{c}_1 \in \boldsymbol{H}$ be initial points.
Step 1 With $\boldsymbol{c}_{n-1}, \boldsymbol{c}_n(n \geqslant 1)$, compute
     $ \begin{aligned} & \boldsymbol{d}_n=\psi_n\left(\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right)+\boldsymbol{c}_n \\ & \boldsymbol{y}_n=P_C\left(\boldsymbol{d}_n-\tau_n F\left(\boldsymbol{d}_n\right)\right)~~~~(7) \end{aligned} $
and update
     $ \tau_{n+1}=\left\{\begin{array}{cl} \min \left\{\frac{\delta\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}{\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\|}, q_n \tau_n\right\}, & \text { if } F\left(\boldsymbol{d}_n\right) \neq F\left(\boldsymbol{y}_n\right) \\ q_n \tau_n+\varpi_n, & \text { otherwise } \end{array}\right.~~~~(8) $
If $\boldsymbol{d}_n=\boldsymbol{y}_n$, stop; $\boldsymbol{d}_n$ is a solution of VIP (1). Otherwise, go to Step 2.
Step 2 Figure out
    $ \boldsymbol{c}_{n+1}=P_{T_n}\left(\boldsymbol{d}_n-\varLambda_n \tau_n F\left(\boldsymbol{y}_n\right)\right)~~~~(9) $
where
     $ \begin{gathered} \boldsymbol{T}_n:=\left\{\boldsymbol{\xi} \in \boldsymbol{H} \mid\left\langle\boldsymbol{y}_n-\boldsymbol{\xi}, \boldsymbol{d}_n-\boldsymbol{\tau}_n F\left(\boldsymbol{d}_n\right)-\boldsymbol{y}_n\right\rangle \geqslant 0\right\} \\ \varLambda_n=\frac{\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \\ \boldsymbol{\varTheta}_n=\boldsymbol{d}_n-\boldsymbol{y}_n-\boldsymbol{\tau}_n\left(F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right)~~~~(10) \end{gathered} $
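For readers who wish to experiment, the following Python sketch instantiates Algorithm 1 with concrete choices that the paper leaves open and that are therefore assumptions here: a box feasible set $\boldsymbol{C}=[0, 1]^n$, the mapping $F(x)=x-a$, $q_n=1+1/n^2$, $\varpi_n=\vartheta_n=1/n^2$, and the inertial parameter $\psi_n$ chosen by rule (11) below:

```python
import numpy as np

def proj_C(x):
    """Assumed feasible set for this sketch: the box C = [0, 1]^n."""
    return np.clip(x, 0.0, 1.0)

def proj_halfspace(xi, v, y):
    """Eq.(6): projection onto T_n = {z : <v, z - y> <= 0}."""
    nv2 = v @ v
    return xi if nv2 == 0.0 else xi - max(0.0, (v @ (xi - y)) / nv2) * v

def algorithm1(F, c0, c1, tau1=0.4, delta=0.4, theta=0.3, n_iters=500):
    """Sketch of Algorithm 1 with q_n = 1 + 1/n^2 and varpi_n = theta_n
    = 1/n^2 (assumed concrete choices satisfying Lemma 5 and Remark 1)."""
    c_prev, c, tau = c0.astype(float), c1.astype(float), tau1
    for n in range(1, n_iters + 1):
        q_n = 1.0 + 1.0 / n ** 2
        varpi_n = theta_n = 1.0 / n ** 2
        diff2 = (c - c_prev) @ (c - c_prev)
        psi = theta if diff2 == 0.0 else min(theta_n / diff2, theta)  # rule (11)
        d = c + psi * (c - c_prev)                 # inertial step in (7)
        y = proj_C(d - tau * F(d))                 # projection step in (7)
        if np.allclose(d, y):
            return y                               # d solves VIP (1)
        Theta = d - y - tau * (F(d) - F(y))        # (10)
        lam = ((d - y) @ Theta) / (Theta @ Theta)
        v = d - tau * F(d) - y                     # normal vector of T_n
        c_next = proj_halfspace(d - lam * tau * F(y), v, y)   # (9)
        dF = np.linalg.norm(F(d) - F(y))           # stepsize update (8)
        tau_next = q_n * tau + varpi_n if dF == 0.0 else \
            min(delta * np.linalg.norm(d - y) / dF, q_n * tau)
        c_prev, c, tau = c, c_next, tau_next
    return c

# Illustrative mapping (an assumption): F(x) = x - a, VIP solution P_C(a).
a = np.array([2.0, -1.0, 0.5])
sol = algorithm1(lambda x: x - a, np.zeros(3), np.zeros(3))
print(sol)  # approaches [1. 0. 0.5]
```

Unlike method (3)-(5), no inner line search loop appears: the stepsize for the next iteration is obtained in closed form from quantities already computed, which is the cost saving claimed in the introduction.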

Remark 1    Note that the criterion in Lemma 8 is easy to realize. Concretely, when the two iterative points $\boldsymbol{c}_n$ and $\boldsymbol{c}_{n-1}$ are provided, $\boldsymbol{c}_{n+1}$ is further calculated with Eq.(9) via the choice of an inertial parameter $\psi_n$ fulfilling $0 \leqslant \psi_n \leqslant \bar{\psi}_n$, where

$ \bar{\psi}_n= \begin{cases}\min \left\{\frac{\vartheta_n}{\left\|c_n-c_{n-1}\right\|^2}, \vartheta\right\}, & \text { if } c_n \neq c_{n-1} \\ \vartheta, & \text { otherwise }\end{cases} $ (11)

where $\left\{\vartheta_n\right\} \subset[0, \infty)$ is such that $\sum\limits_{n=1}^{\infty} \vartheta_n <\infty$.

Remark 2    With relation (7), it can easily be seen that if the proposed method terminates after finitely many iterations (i.e., $\boldsymbol{d}_n=\boldsymbol{y}_n$ for some n), then $\boldsymbol{d}_n \in \Omega$. Therefore, unless otherwise stated, it is assumed that Algorithm 1 iterates infinitely and generates an infinite sequence.

Lemma 9    If presumptions (C1)-(C3) are satisfied, the stepsize rule (8) is well defined and $\lim \limits_{n \rightarrow \infty} \tau_n=\tau \geqslant \min \left\{\frac{\delta}{L}, \tau_1\right\}$, where $\tau_1>0$ is the initial stepsize and L>0 is the Lipschitz constant.

Proof    In the situation of $F\left(\boldsymbol{d}_n\right) \neq F\left(\boldsymbol{y}_n\right)$, as F is L-Lipschitz continuous, it can be derived that

$ \frac{\delta\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}{\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\|} \geqslant \frac{\delta\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}{L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}=\frac{\delta}{L} $

This further discloses that

$ \begin{aligned} \tau_{n+1}= & \min \left\{\frac{\delta\left\|\boldsymbol{d}_n-y_n\right\|}{\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\|}, q_n \boldsymbol{\tau}_n\right\} \geqslant \\ & \min \left\{\frac{\delta}{L}, \tau_n\right\} \end{aligned} $

where $q_n \geqslant 1$. By induction, the sequence $\left\{\tau_n\right\}$ has the lower bound $\min \left\{\frac{\delta}{L}, \tau_1\right\}$. With relation Eq.(8), the following inequality can be deduced:

$\tau_{n+1} \leqslant q_n \tau_n+\varpi_n $

From Lemma 5, it follows that $\lim \limits_{n \rightarrow \infty} \tau_n$ exists; denote $\lim \limits_{n \rightarrow \infty} \tau_n=\tau$. Since $\left\{\tau_n\right\}$ has the lower bound $\min \left\{\frac{\delta}{L}, \tau_1\right\}$, finally $\tau>0$.
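A quick numerical check of Lemma 9 can be run with the sketch below. The random stand-in points $\boldsymbol{d}_n, \boldsymbol{y}_n$ and the 1-Lipschitz mapping $F(x)=x-a$ are assumptions for illustration (they are not iterates of Algorithm 1); the stepsize produced by rule (8) settles near $\delta / L$ and never drops below $\min \{\delta / L, \tau_1\}$:

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])
F = lambda x: x - a          # 1-Lipschitz, so L = 1
delta, L, tau = 0.4, 1.0, 1.0
taus = [tau]
rng = np.random.default_rng(0)
for n in range(1, 200):
    q_n, varpi_n = 1 + 1 / n ** 2, 1 / n ** 2
    d, y = rng.standard_normal(3), rng.standard_normal(3)  # stand-in points
    dF = np.linalg.norm(F(d) - F(y))
    # stepsize rule (8)
    tau = q_n * tau + varpi_n if dF == 0 else \
        min(delta * np.linalg.norm(d - y) / dF, q_n * tau)
    taus.append(tau)
print(taus[-1])  # approximately 0.4 = delta / L
```

For this linear F the ratio in (8) equals $\delta / L$ at every step, so the sequence drops from $\tau_1=1$ to 0.4 immediately and stays there, consistent with the lower bound of Lemma 9.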

Lemma 10    Suppose that presumptions (C1)-(C4) hold and that the sequence $\left\{\boldsymbol{d}_n\right\}$ is produced by Algorithm 1. Then, for any $\boldsymbol{s} \in \Omega$, there is

$ \begin{array}{r} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2- \\ \quad \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2, \exists N \geqslant 0, \forall n \geqslant N \end{array} $

Proof    From $\boldsymbol{s} \in \Omega \subset \boldsymbol{C} \subset \boldsymbol{T}_n$, Lemma 1 (ⅰ) and Lemma 2 (ⅱ), the following can be derived:

$ \begin{gathered} 2\left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant 2\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, \boldsymbol{d}_n-\tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)-\boldsymbol{s}\right\rangle= \\ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\left\|\boldsymbol{d}_n-\tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n+1}-\boldsymbol{d}_n+\tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\|^2= \\ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2+\tau_n^2 \varLambda_n^2\left\|F\left(\boldsymbol{y}_n\right)\right\|^2-2\left\langle\boldsymbol{d}_n-\boldsymbol{s}, \tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\rangle- \\ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{d}_n\right\|^2-\tau_n^2 \varLambda_n^2\left\|F\left(\boldsymbol{y}_n\right)\right\|^2-2\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{d}_n, \tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\rangle= \\ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n+1}-\boldsymbol{d}_n\right\|^2-2\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, \tau_n \varLambda_n F\left(\boldsymbol{y}_n\right)\right\rangle \end{gathered} $

After rearrangement,

$ \begin{gathered} \left\|c_{n+1}-s\right\|^2 \leqslant\left\|d_n-s\right\|^2-\left\|c_{n+1}-d_n\right\|^2- \\ 2 \tau_n \varLambda_n\left\langle c_{n+1}-s, F\left(y_n\right)\right\rangle \end{gathered} $ (12)

Using the pseudomonotonicity of F, $\boldsymbol{y}_n \in \boldsymbol{C}$ and $\boldsymbol{s} \in \Omega$, by Lemma 3 it can be seen that $\left\langle F\left(\boldsymbol{y}_n\right), \boldsymbol{y}_n-\boldsymbol{s}\right\rangle \geqslant 0$, which further shows that $\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, F\left(\boldsymbol{y}_n\right)\right\rangle \geqslant\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_n\right)\right\rangle$.

Equivalently,

$ \begin{aligned} & -2 \tau_n \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, F\left(\boldsymbol{y}_n\right)\right\rangle \leqslant \\ & -2 \tau_n \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_n\right)\right\rangle \end{aligned} $ (13)

Meanwhile, both the definition of $\boldsymbol{T}_n$ and $\boldsymbol{c}_{n+1} \in \boldsymbol{T}_n$ indicate that

$ \left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, \boldsymbol{d}_n-\tau_n F\left(\boldsymbol{d}_n\right)-\boldsymbol{y}_n\right\rangle \leqslant 0 $

After rearrangement, there is

$ \left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle \leqslant \tau_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{y}_n, F\left(\boldsymbol{y}_n\right)\right\rangle $ (14)

With relations (10), (13) and (14), it can be derived that

$ \begin{aligned} -2 \tau_n \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\boldsymbol{s}, F\left(\boldsymbol{y}_n\right)\right\rangle \leqslant-2 \varLambda_n\left\langle\boldsymbol{c}_{n+1}-\right. \\ \left.\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle=-2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle+2 \varLambda_n\left\langle\boldsymbol{d}_n-\right. \\ \left.\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle=-2 \varLambda_n \frac{\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle}{\left\|\boldsymbol{\varTheta}_n\right\|^2}\left\|\boldsymbol{\varTheta}_n\right\|^2+ \\ 2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle=-2 \varLambda_n^2\left\|\boldsymbol{\varTheta}_n\right\|^2+ \\ 2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle \end{aligned} $ (15)

For the term $2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle$ in Eq.(15), it can be estimated that

$ \begin{aligned} & 2 \varLambda_n\left\langle\boldsymbol{d}_n-\boldsymbol{c}_{n+1}, \boldsymbol{\varTheta}_n\right\rangle=\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}\right\|^2+ \\ & \quad \varLambda_n^2\left\|\boldsymbol{\varTheta}_n\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2 \end{aligned} $ (16)

which comes from the elementary identity $2\langle\boldsymbol{c}, \boldsymbol{d}\rangle=\|\boldsymbol{c}\|^2+\|\boldsymbol{d}\|^2-\|\boldsymbol{c}-\boldsymbol{d}\|^2$. Combining Eqs.(12), (15) and (16), it shows that

$ \begin{aligned} & \left\|c_{n+1}-s\right\|^2 \leqslant\left\|d_n-s\right\|^2- \\ & \quad\left\|d_n-c_{n+1}-\varLambda_n \varTheta_n\right\|^2-\varLambda_n^2\left\|\varTheta_n\right\|^2 \end{aligned} $ (17)

For the term $\varLambda_n^2\left\|\boldsymbol{\varTheta}_n\right\|^2$ in Eq.(17), from the definitions of $\boldsymbol{\varTheta}_n$ and $\tau_{n+1}$, it can be implied that

$ \begin{aligned} & \left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle=\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2- \\ & \tau_n\left\langle F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right), \boldsymbol{d}_n-\boldsymbol{y}_n\right\rangle \geqslant \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2-\tau_n\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\| \cdot \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \geqslant\left(1-\frac{\delta \tau_n}{\tau_{n+1}}\right)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \end{aligned} $

and

$ \begin{aligned} \left\|\boldsymbol{\varTheta}_n\right\|= & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n-\tau_n\left(F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right)\right\| \geqslant \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|-\tau_n\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\| \geqslant \\ & \left(1-\frac{\delta \tau_n}{\tau_{n+1}}\right)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \end{aligned} $

Since

$ \lim \limits_{n \rightarrow \infty}\left(1-\frac{\delta \tau_n}{\tau_{n+1}}\right)=1-\delta>1-\chi $

where $\delta \in(0, \chi) \subset(0, 1)$. Therefore,

$ \exists N^{\prime} \geqslant 0, \forall n \geqslant N^{\prime}, 1-\frac{\delta \tau_n}{\tau_{n+1}}>1-\chi $

So, $\forall n \geqslant N^{\prime}$,

$ \left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle \geqslant(1-\chi)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 $ (18)

and

$\left\|\boldsymbol{\varTheta}_n\right\| \geqslant(1-\chi)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| $ (19)

On the other hand,

$ \begin{aligned} \left\|\boldsymbol{\varTheta}_n\right\|= & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n-\tau_n\left(F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right)\right\| \leqslant \\ & \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|+\tau_n\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\| \leqslant \\ & \left(1+\frac{\delta \tau_n}{\tau_{n+1}}\right)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \end{aligned} $

Since

$ \lim \limits_{n \rightarrow \infty}\left(1+\frac{\delta \tau_n}{\tau_{n+1}}\right)=1+\delta <1+\chi $

where $\delta \in(0, \chi) \subset(0, 1)$. Therefore,

$ \exists N^{\prime \prime} \geqslant 0, \forall n \geqslant N^{\prime \prime}, 1+\frac{\delta \tau_n}{\tau_{n+1}} <1+\chi $

So, $\forall n \geqslant N^{\prime \prime}$,

$\left\|\boldsymbol{\varTheta}_n\right\| \leqslant(1+\chi)\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| $ (20)

With Eqs.(18) and (20), it can be derived that $\forall n \geqslant N$, where $N=\max \left\{N^{\prime}, N^{\prime \prime}\right\}$,

$ \begin{gathered} \varLambda_n^2\left\|\boldsymbol{\varTheta}_n\right\|^2=\frac{\left(\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle\right)^2}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \geqslant \\ (1-\chi)^2 \frac{\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^4}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \geqslant \\ \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \end{gathered} $ (21)

It verifies from Eq.(19) that $\forall n \geqslant N$,

$ \varLambda_n=\frac{\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{\varTheta}_n\right\rangle}{\left\|\boldsymbol{\varTheta}_n\right\|^2} \leqslant \frac{\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|}{\left\|\boldsymbol{\varTheta}_n\right\|} \leqslant \frac{1}{1-\chi} $ (22)

Also, by Eqs.(17) and (21), it deduces that $\forall n \geqslant N$,

$ \begin{gathered} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2- \\ \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \end{gathered} $ (23)

Theorem 1    Assume that presumptions (C1)-(C5) are satisfied and

$ \exists N>0, \forall n \geqslant N, \sum\limits_{n=N}^{\infty} \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 <\infty $

Then the sequence $\left\{\boldsymbol{c}_n\right\}$ created by Algorithm 1 converges weakly to a point of $\Omega$.

Proof    Let $\boldsymbol{s} \in \Omega$. Employing the definition of $\boldsymbol{d}_n$, it implies that

$ \begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2=\left\|\psi_n\left(\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right)+\boldsymbol{c}_n-\boldsymbol{s}\right\|^2= \\ \left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+2 \psi_n\left\langle\boldsymbol{c}_n-\boldsymbol{c}_{n-1}, \boldsymbol{c}_n-\boldsymbol{s}\right\rangle+ \\ \psi_n^2\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \end{gathered} $ (24)

Applying Lemma 1 (ⅰ), there is

$\begin{gathered} \left\langle\boldsymbol{c}_n-\boldsymbol{c}_{n-1}, \boldsymbol{c}_n-\boldsymbol{s}\right\rangle=0.5\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2+ \\ 0.5\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-0.5\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2 \end{gathered} $

This, along with Eq.(24), shows that

$\begin{array}{r} \left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2=\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+\psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2+\right. \\ \left.\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+\psi_n^2\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \leqslant \\ \left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+\psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+2 \\ \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \end{array} $ (25)

The combination of Eqs.(23) and (25) signifies that $\forall n \geqslant N$,

$\begin{aligned} \| \boldsymbol{c}_{n+1}- & \boldsymbol{s}\left\|^2 \leqslant\right\| \boldsymbol{d}_n-\boldsymbol{s}\left\|^2-\right\| \boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n \|^2- \\ & \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \leqslant\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+ \\ & \psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+ \\ & 2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2-\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2- \\ & \frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 \leqslant\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2+ \\ & \psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+ \\ & 2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \end{aligned} $ (26)

Apply Lemma 6 with $\Delta_n=\psi_n$,

$ b_n=\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2 $

and

$ w_n=2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 $

Since $\sum\limits_{n=N}^{\infty} \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 <\infty$, Lemma 6 yields that $\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|$ exists and

$ \sum\limits_{n=N}^{\infty}\left[\left\|c_n-\boldsymbol{s}\right\|^2-\left\|c_{n-1}-\boldsymbol{s}\right\|^2\right]_{+} <\infty $

where $[t]_{+}=\max \{t, 0\}$. Therefore,

$ \lim \limits_{n \rightarrow \infty}\left[\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right]_{+}=0 $

From Eq.(26), it follows that $\forall n \geqslant N$,

$\begin{aligned}\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|^2+\frac{(1-\chi)^2}{(1+\chi)^2}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|^2 & \leqslant\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\psi_n\left(\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right)+2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \\ & \leqslant\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2+\psi_n\left[\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|^2-\left\|\boldsymbol{c}_{n-1}-\boldsymbol{s}\right\|^2\right]_{+}+2 \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2\end{aligned}$

which means that $\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|=0$ and

$\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|=0 $ (27)

From Eqs.(7) and (11), it can be derived that $\forall n \geqslant N$,

$ \begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{c}_n\right\|^2=\psi_n^2\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \leqslant \\ \vartheta \psi_n\left\|\boldsymbol{c}_n-\boldsymbol{c}_{n-1}\right\|^2 \rightarrow 0, n \rightarrow \infty \end{gathered} $

which corresponds to

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{d}_n-\boldsymbol{c}_n\right\|=0 $ (28)

This and Eq.(27) verify that

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_n-\boldsymbol{y}_n\right\|=0 $ (29)

With Eqs.(20), (22) and (27), it can be deduced that $\forall n \geqslant N$,

$\begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}\right\| \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|+ \\ \left\|\varLambda_n \boldsymbol{\varTheta}_n\right\| \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}-\varLambda_n \boldsymbol{\varTheta}_n\right\|+ \\ \frac{1+\chi}{1-\chi}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \rightarrow 0, n \rightarrow \infty \end{gathered} $ (30)

Invoking Eqs.(28) and (30) yields

$ \begin{array}{r} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{c}_n\right\| \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{c}_{n+1}\right\|+ \\ \left\|\boldsymbol{d}_n-\boldsymbol{c}_n\right\| \rightarrow 0, n \rightarrow \infty \end{array} $ (31)

Since $\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|$ exists, the sequence $\left\{\boldsymbol{c}_n\right\}$ is bounded, and so is $\left\{\boldsymbol{d}_n\right\}$. Let $\omega_\omega\left(\boldsymbol{c}_n\right)$ denote the set of weak cluster points of the sequence $\left\{\boldsymbol{c}_n\right\}$. Next, $\omega_\omega\left(\boldsymbol{c}_n\right) \subset \Omega$ is shown. Let $\boldsymbol{s}^* \in \omega_\omega\left(\boldsymbol{c}_n\right)$ be arbitrary. As $\left\{\boldsymbol{c}_n\right\}$ is bounded, there is a subsequence $\left\{\boldsymbol{c}_{n_i}\right\}$ of $\left\{\boldsymbol{c}_n\right\}$ with $\boldsymbol{c}_{n_i} \rightharpoonup \boldsymbol{s}^*$. Using Eqs.(28) and (29), it follows that $\boldsymbol{y}_{n_i} \rightharpoonup \boldsymbol{s}^*$ and $\boldsymbol{d}_{n_i} \rightharpoonup \boldsymbol{s}^*$. From the definition of $\boldsymbol{y}_{n_i}$ and Lemma 2 (ⅰ), the following inequality can be attained:

$ \left\langle\boldsymbol{d}_{n_i}-\tau_{n_i} F\left(\boldsymbol{d}_{n_i}\right)-\boldsymbol{y}_{n_i}, \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle \leqslant 0, \forall \boldsymbol{t} \in \boldsymbol{C}, $

Rearranged, this becomes

$ \frac{1}{\tau_{n_i}}\left\langle\boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}, \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle \leqslant\left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle, \forall \boldsymbol{t} \in \boldsymbol{C} $

Further,

$ \begin{gathered} \frac{1}{\tau_{n_i}}\left\langle\boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}, \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle+\left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{y}_{n_i}-\boldsymbol{d}_{n_i}\right\rangle \leqslant \\ \left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{d}_{n_i}\right\rangle, \forall \boldsymbol{t} \in \boldsymbol{C} \end{gathered} $ (32)

Since the sequence $\left\{\boldsymbol{d}_{n_i}\right\}$ is bounded and $F$ is Lipschitz continuous, the sequence $\left\{F\left(\boldsymbol{d}_{n_i}\right)\right\}$ is bounded. Meanwhile, $\lim \limits_{i \rightarrow \infty} \tau_{n_i}=\tau \geqslant \min \left\{\tau_1, \frac{\delta}{L}\right\}$. Taking the limit in Eq.(32) as $i \rightarrow \infty$ gives

$ \liminf\limits_{i \rightarrow \infty}\left\langle F\left(\boldsymbol{d}_{n_i}\right), t-\boldsymbol{d}_{n_i}\right\rangle \geqslant 0, \forall \boldsymbol{t} \in \boldsymbol{C} $ (33)

Also,

$ \begin{gathered} \left\langle F\left(\boldsymbol{y}_{n_i}\right), \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle=\left\langle F\left(\boldsymbol{y}_{n_i}\right)-F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{d}_{n_i}\right\rangle+ \\ \left\langle F\left(\boldsymbol{d}_{n_i}\right), \boldsymbol{t}-\boldsymbol{d}_{n_i}\right\rangle+\left\langle F\left(\boldsymbol{y}_{n_i}\right), \boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}\right\rangle \end{gathered} $ (34)

Since $\left\|\boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}\right\| \rightarrow 0$, the sequence $\left\{\boldsymbol{y}_{n_i}\right\}$ is bounded. As $F$ is Lipschitz continuous on $\boldsymbol{H}$, $\left\{F\left(\boldsymbol{y}_{n_i}\right)\right\}$ is bounded. Thus, it can be observed that

$ \lim \limits_{i \rightarrow \infty}\left\|F\left(\boldsymbol{d}_{n_i}\right)-F\left(\boldsymbol{y}_{n_i}\right)\right\|=0 $

Along with Eqs.(33) and (34), it means that

$\liminf \limits_{i \rightarrow \infty}\left\langle F\left(\boldsymbol{y}_{n_i}\right), \boldsymbol{t}-\boldsymbol{y}_{n_i}\right\rangle \geqslant 0 $

It remains to show that $\boldsymbol{s}^* \in \Omega$. First, choose a decreasing sequence of positive numbers $\left\{\theta_i\right\}$ with $\lim \limits_{i \rightarrow \infty} \theta_i=0$. For each $i \geqslant 0$, let $N_i$ denote the smallest positive integer satisfying

$ \left\langle F\left(\boldsymbol{y}_{n_k}\right), \boldsymbol{t}-\boldsymbol{y}_{n_k}\right\rangle+\theta_i \geqslant 0, k \geqslant N_i $ (35)

As the sequence $\left\{\theta_i\right\}$ is decreasing, the sequence $\left\{N_i\right\}$ is increasing. Moreover, since $\left\{\boldsymbol{y}_{N_i}\right\} \subset \boldsymbol{C}$, it may be assumed that $F\left(\boldsymbol{y}_{N_i}\right) \neq 0$ for each $i \geqslant 0$ (otherwise, $\boldsymbol{y}_{N_i}$ is a solution). Setting

$\begin{gathered} \boldsymbol{v}_{N_i}=\frac{F\left(\boldsymbol{y}_{N_i}\right)}{\left\|F\left(\boldsymbol{y}_{N_i}\right)\right\|^2} \end{gathered} $

it follows that $\left\langle F\left(\boldsymbol{y}_{N_i}\right), \boldsymbol{v}_{N_i}\right\rangle=1$ for each $i \geqslant 0$. Then, Eq.(35) implies that, for each $i \geqslant 0$,

$\left\langle F\left(\boldsymbol{y}_{N_i}\right), \boldsymbol{t}+\theta_i \boldsymbol{v}_{N_i}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 $

Since F is pseudomonotone, this implies

$\left\langle F\left(\boldsymbol{t}+\theta_i \boldsymbol{v}_{N_i}\right), \boldsymbol{t}+\theta_i \boldsymbol{v}_{N_i}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 $

This shows that

$ \begin{gathered} \left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle \geqslant\left\langle F(\boldsymbol{t})-F\left(\boldsymbol{t}+\theta_i \boldsymbol{v}_{N_i}\right), \boldsymbol{t}+\right. \\ \left.\theta_i \boldsymbol{v}_{N_i}-\boldsymbol{y}_{N_i}\right\rangle-\theta_i\left\langle F(\boldsymbol{t}), \boldsymbol{v}_{N_i}\right\rangle \end{gathered} $ (36)

Next, $\lim \limits_{i \rightarrow \infty} \theta_i \boldsymbol{v}_{N_i}=0$ is shown. In fact, since $\boldsymbol{d}_{n_i} \rightharpoonup \boldsymbol{s}^*$ and $\lim \limits_{i \rightarrow \infty}\left\|\boldsymbol{d}_{n_i}-\boldsymbol{y}_{n_i}\right\|=0$, there is $\boldsymbol{y}_{N_i} \rightharpoonup \boldsymbol{s}^*$ as $i \rightarrow \infty$. Since $\left\{\boldsymbol{y}_n\right\} \subset \boldsymbol{C}$, it can be concluded that $\boldsymbol{s}^* \in \boldsymbol{C}$. Suppose $F\left(\boldsymbol{s}^*\right) \neq 0$ (otherwise, $\boldsymbol{s}^*$ is a solution). As $F$ fulfills condition (C5) on $\boldsymbol{C}$, it can be deduced that

$ 0 <\left\|F\left(s^*\right)\right\| \leqslant \liminf \limits_{i \rightarrow \infty}\left\|\boldsymbol{F}\left(\boldsymbol{y}_{n_i}\right)\right\| $

Furthermore, since $\left\{\boldsymbol{y}_{N_i}\right\} \subset\left\{\boldsymbol{y}_{n_i}\right\}$ and $\lim \limits_{i \rightarrow \infty} \theta_i=0$, it can be observed that

$ \begin{gathered} 0 \leqslant \limsup\limits_{i \rightarrow \infty}\left\|\theta_i \boldsymbol{v}_{N_i}\right\|=\limsup\limits_{i \rightarrow \infty} \frac{\theta_i}{\left\|F\left(\boldsymbol{y}_{N_i}\right)\right\|} \leqslant \\ \frac{\limsup\limits_{i \rightarrow \infty} \theta_i}{\liminf \limits_{i \rightarrow \infty}\left\|F\left(\boldsymbol{y}_{n_i}\right)\right\|}=0 \end{gathered} $

which yields that $\lim \limits_{i \rightarrow \infty} \theta_i \boldsymbol{v}_{N_i}=0$, as has been shown.

Finally, let $i \rightarrow \infty$. The right-hand side of inequality (36) tends to zero, since $F$ is uniformly continuous, the sequences $\left\{\boldsymbol{y}_{N_i}\right\}$ and $\left\{\boldsymbol{v}_{N_i}\right\}$ are bounded, and $\lim \limits_{i \rightarrow \infty} \theta_i \boldsymbol{v}_{N_i}=0$. Hence, the following inequality can be obtained:

$\liminf \limits_{i \rightarrow \infty}\left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 $

Thus, for any $\boldsymbol{t} \in \boldsymbol{C}$, it can be deduced that

$ \begin{gathered} \left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{s}^*\right\rangle=\lim \limits_{i \rightarrow \infty}\left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle= \\ \liminf \limits_{i \rightarrow \infty}\left\langle F(\boldsymbol{t}), \boldsymbol{t}-\boldsymbol{y}_{N_i}\right\rangle \geqslant 0 \end{gathered} $

According to Lemma 3, it can be derived that $\boldsymbol{s}^* \in \boldsymbol{\Omega}$. Since $\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_n-\boldsymbol{s}\right\|$ exists and $\omega_\omega\left(\boldsymbol{c}_n\right) \subset \boldsymbol{\Omega}$, applying Lemma 7, it can be inferred that $\left\{\boldsymbol{c}_n\right\}$ converges weakly to a point of $\boldsymbol{\Omega}$.

Remark 3

(1) It is remarkable that the weak convergence of Algorithm 1 is obtained under presumptions (C2) and (C5), which are much weaker than the hypotheses of monotonicity and sequential weak continuity of F employed in existing works[20, 24, 29, 49];

(2) The results obtained in this paper improve Theorem 3.1 in Ref.[18], Theorem 3.1 in Ref.[24], Theorem 3.1 in Ref.[29], Theorem 3.1 in Ref.[37] and Theorem 4.1 in Ref.[38], because the convergence is obtained without monotonicity and sequential weak continuity.

3 Linear Convergence Rate

In this section, the linear convergence rate of the sequence $\{\boldsymbol{c}_n\}$ generated by Algorithm 1 is discussed.

Theorem 2    Suppose that presumptions (C1), (C2') and (C3) are satisfied. Let

$r=\left[1-\frac{\gamma}{1+\frac{L}{\beta}+\frac{1}{\beta \tau_1}}\right]^{\frac{1}{2}} $

where $\gamma=\frac{(1-\chi)^2}{(1+\chi)^2}$. If $0 <r <1$ and

$ \psi_n= \begin{cases}0, & n \text { even } \\ \min \left\{\bar{\psi}_n, \frac{1-r}{1+r}\right\}, & n \text { odd }\end{cases} $

then $\{\boldsymbol{c}_n\}$ converges at least R-linearly to the unique element of $\Omega$.

Proof    Since the operator F is strongly pseudomonotone, VIP (1) has a unique solution, denoted by $\boldsymbol{s}$. From the computation rule of $\boldsymbol{y}_n$ and Lemma 2 (ⅰ), it follows that

$\begin{aligned}\left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{s}-\boldsymbol{y}_n\right\rangle & \leqslant \tau_n\left\langle F\left(\boldsymbol{d}_n\right), \boldsymbol{s}-\boldsymbol{y}_n\right\rangle=\tau_n\left\langle F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right), \boldsymbol{s}-\boldsymbol{y}_n\right\rangle+\tau_n\left\langle F\left(\boldsymbol{y}_n\right), \boldsymbol{s}-\boldsymbol{y}_n\right\rangle \\ & \leqslant \tau_n\left\|F\left(\boldsymbol{d}_n\right)-F\left(\boldsymbol{y}_n\right)\right\| \cdot\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|-\beta \tau_n\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|^2 \\ & \leqslant \tau_n L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \cdot\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|-\beta \tau_n\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|^2\end{aligned}$ (37)

Rearranging, it can be derived that

$ \begin{gathered} \beta \tau_n\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|^2 \leqslant \tau_n L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|- \\ \left\langle\boldsymbol{d}_n-\boldsymbol{y}_n, \boldsymbol{s}-\boldsymbol{y}_n\right\rangle \leqslant \tau_n L\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \cdot \\ \left\|\boldsymbol{s}-\boldsymbol{y}_n\right\|+\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\| \end{gathered} $

This means that

$\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\| \leqslant \frac{1+\tau_n L}{\beta \tau_n}\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| $

Hence

$ \begin{gathered} \left\|\boldsymbol{d}_n-\boldsymbol{s}\right\| \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|+\left\|\boldsymbol{s}-\boldsymbol{y}_n\right\| \leqslant \\ {\left[1+\frac{1+\tau_n L}{\beta \tau_n}\right]\left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\|} \end{gathered} $

This shows that

$ \left\|\boldsymbol{d}_n-\boldsymbol{y}_n\right\| \geqslant\left[1+\frac{1+\tau_n L}{\beta \tau_n}\right]^{-1}\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\| $ (38)

Combining Lemma 10 and Eq.(38), there is $\forall n \geqslant N$,

$ \begin{gathered} \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2-\gamma\left\|\boldsymbol{d}_n-\boldsymbol{c}_n\right\|^2 \leqslant \\ {\left[1-\frac{\gamma}{1+\frac{L}{\beta}+\frac{1}{\beta \tau_n}}\right]\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2} \end{gathered} $

From Lemma 9 and the regulation of r, it signifies that $1-\frac{\gamma}{1+\frac{L}{\beta}+\frac{1}{\beta \tau_n}} \leqslant r^2$. Thus $\forall n \geqslant N$,

$ \left\|\boldsymbol{c}_{n+1}-\boldsymbol{s}\right\|^2 \leqslant r^2\left\|\boldsymbol{d}_n-\boldsymbol{s}\right\|^2 $ (39)

From the definition of $\boldsymbol{d}_n$ and Lemma 1 (ⅱ), it can be concluded that

$\begin{aligned}\left\|\boldsymbol{d}_{2 n+1}-\boldsymbol{s}\right\|^2 & =\left\|\psi_{2 n+1}\left(\boldsymbol{c}_{2 n+1}-\boldsymbol{c}_{2 n}\right)+\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2=\left\|\left(1+\psi_{2 n+1}\right)\left(\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right)-\psi_{2 n+1}\left(\boldsymbol{c}_{2 n}-\boldsymbol{s}\right)\right\|^2 \\ & =\left(1+\psi_{2 n+1}\right)\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2-\psi_{2 n+1}\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2+\psi_{2 n+1}\left(1+\psi_{2 n+1}\right)\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{c}_{2 n}\right\|^2\end{aligned}$ (40)

Combining Eq.(39) and the definition of $\boldsymbol{d}_n$, for all $n \geqslant N / 2$,

$ \left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2 \leqslant r^2\left\|\boldsymbol{d}_{2 n}-\boldsymbol{s}\right\|^2=r^2\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2 $ (41)

Setting $n:=2 n+1$ in Eq.(39) and using Eqs.(40) and (41), for all $n \geqslant N / 2$,

$\begin{aligned}\left\|\boldsymbol{c}_{2 n+2}-\boldsymbol{s}\right\|^2 & \leqslant r^2\left\|\boldsymbol{d}_{2 n+1}-\boldsymbol{s}\right\|^2=r^2\left[\left(1+\psi_{2 n+1}\right)\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|^2-\psi_{2 n+1}\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2+\psi_{2 n+1}\left(1+\psi_{2 n+1}\right)\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{c}_{2 n}\right\|^2\right] \\ & \leqslant r^2\left[\left(1+\psi_{2 n+1}\right) r^2\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2-\psi_{2 n+1}\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2+\psi_{2 n+1}\left(1+\psi_{2 n+1}\right)\left(\left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\|+\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|\right)^2\right] \\ & \leqslant r^2\left[\left(1+\psi_{2 n+1}\right) r^2-\psi_{2 n+1}+\psi_{2 n+1}\left(1+\psi_{2 n+1}\right)(1+r)^2\right]\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2 \leqslant r^2\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\|^2\end{aligned}$ (42)

where the last inequality is due to $\psi_n \leqslant(1-r) /(1+r), \forall n \geqslant 1$. By Eq.(42), for all $n \geqslant N$,

$ \begin{gathered} \left\|\boldsymbol{c}_{2 n+2}-\boldsymbol{s}\right\| \leqslant r\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\| \leqslant \cdots \\ \leqslant r^{n-N+1}\left\|\boldsymbol{c}_{2 N}-\boldsymbol{s}\right\| \end{gathered} $ (43)

With Eqs.(41) and (43), it follows that $\forall n \geqslant N$,

$ \begin{aligned} & \left\|\boldsymbol{c}_{2 n+1}-\boldsymbol{s}\right\| \leqslant r\left\|\boldsymbol{c}_{2 n}-\boldsymbol{s}\right\| \leqslant \cdots \\ & \quad \leqslant r^{n-N+1}\left\|\boldsymbol{c}_{2 N}-\boldsymbol{s}\right\| \end{aligned} $ (44)

Consequently, from Eqs.(43) and (44), it can be concluded that $\left\{\boldsymbol{c}_n\right\}$ converges R-linearly to $\boldsymbol{s}$.
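For reference, the rate $r$ and the alternated inertial rule for $\psi_n$ in Theorem 2 can be written as small helpers. This is an illustrative sketch: the function names `rate_r` and `psi` are not from the paper, and `psi_bar_n` stands for the generic upper bound $\bar{\psi}_n$ of the algorithm.

```python
def rate_r(chi, L, beta, tau1):
    """R-linear rate r of Theorem 2 (a sketch; chi, L, beta, tau1 as in the text)."""
    gamma = (1.0 - chi) ** 2 / (1.0 + chi) ** 2
    return (1.0 - gamma / (1.0 + L / beta + 1.0 / (beta * tau1))) ** 0.5

def psi(n, psi_bar_n, r):
    """Alternated inertial parameter of Theorem 2:
    zero on even iterations, capped by (1 - r) / (1 + r) on odd ones."""
    if n % 2 == 0:
        return 0.0
    return min(psi_bar_n, (1.0 - r) / (1.0 + r))
```

Note that the even-indexed steps carry no inertia, which is what makes the two-step contraction argument in Eqs.(40)-(42) work.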

4 Numerical Experiments

In this section, several numerical experiments for the pseudomonotone VIP (1) are presented. The proposed Algorithm 1 (Alg.1) is compared with some recent self-adaptive algorithms, namely Yao's Algorithm 1 (YISAlg.1)[25], Reich's Algorithm 3 (SDPLAlg.3)[26] and Thong's Algorithm 3.1 (TVRAlg.3.1)[29]. The tests are performed in MATLAB R2020b on an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz with 4.00 GB RAM.

Example 1    The first classical example appears in Refs.[26, 50-51]. The feasible set is $\boldsymbol{C}=\mathbf{R}^m$ and $F(\boldsymbol{x})=\boldsymbol{A} \boldsymbol{x}$, where $\boldsymbol{A}$ is an $m \times m$ square matrix whose entries are given by

$ a_{i, j}= \begin{cases}-1, & \text { if } j=m+1-i>i \\ 1, & \text { if } j=m+1-i <i \\ 0, & \text { otherwise }\end{cases} $

For even m, the zero vector is the solution of this example. It is a classical problem for which the usual gradient method fails to converge. For all tests, $\boldsymbol{t}_0$ is used as the initial point, with coordinates randomly selected in $[-1, 1]$, and $\boldsymbol{t}_1=(1, 1, \ldots, 1)$. For YISAlg.1, $\tau_1=0.7, \Delta=0.9, \varepsilon=4, \alpha_n= \frac{1}{1+\varepsilon}\left(1-\frac{1}{(n+1)^{1.1}}\right)$ are adopted. For SDPLAlg.3, $\tau_1=0.7, \Delta_n=1 / 6, \delta=\left(1-3 \Delta_n\right) /\left(1-\Delta_n+2 \Delta_n^2\right)-0.1$ and $q_n=\frac{1}{(n+1)^{1.1}}+1$ are chosen. For Alg.1, $\tau_1=0.7, \delta=0.9, \vartheta=0.3, \vartheta_n=1 / n^2, q_n=\frac{1}{(n+1)^{1.1}}+1$ and $\bar{\omega}_n=0$ are taken. All algorithms are terminated when $\left\|\boldsymbol{y}_n-\boldsymbol{d}_n\right\| <\varepsilon$ with $\varepsilon=10^{-3}$. The experimental results are shown in Table 1.
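To illustrate the setup, the matrix $\boldsymbol{A}$ and a plain (non-inertial, fixed-stepsize) extragradient loop for this example can be sketched in Python as follows. This is not Alg.1 itself: the function names and the constant stepsize `tau = 0.7` (instead of the non-monotone adaptive stepsize) are illustrative choices.

```python
import numpy as np

def make_A(m):
    """Anti-diagonal test matrix of Example 1 (skew-symmetric when m is even)."""
    A = np.zeros((m, m))
    for i in range(1, m + 1):
        j = m + 1 - i
        if j > i:
            A[i - 1, j - 1] = -1.0
        elif j < i:
            A[i - 1, j - 1] = 1.0
    return A

def extragradient(F, x0, tau=0.7, tol=1e-3, max_iter=10000):
    """Korpelevich extragradient step; C = R^m, so the projection is the identity."""
    x = x0.copy()
    for k in range(max_iter):
        y = x - tau * F(x)               # y_n = P_C(x_n - tau * F(x_n))
        if np.linalg.norm(y - x) < tol:  # stopping rule ||y_n - x_n|| < eps
            return x, k
        x = x - tau * F(y)               # x_{n+1} = P_C(x_n - tau * F(y_n))
    return x, max_iter

m = 10
A = make_A(m)
x0 = np.random.default_rng(1).uniform(-1, 1, m)
sol, iters = extragradient(lambda v: A @ v, x0)
```

Since $\boldsymbol{A}$ is skew-symmetric here, the plain gradient iteration $\boldsymbol{x}_{k+1}=\boldsymbol{x}_k-\tau \boldsymbol{A} \boldsymbol{x}_k$ diverges, while the extragradient iterates shrink toward the zero solution.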

Table 1 Results in Example 1

Remark 4    According to the results in Table 1, it is easy to observe that Alg.1 enjoys a faster convergence speed than YISAlg.1 and SDPLAlg.3 in terms of both iteration number and CPU time, which confirms the effectiveness of the proposed method.

Example 2    The second problem (also used in Refs.[25, 29]) is the HpHard problem. Take $F(\boldsymbol{x})= \boldsymbol{N} \boldsymbol{x}+\boldsymbol{b}$ with $\boldsymbol{b} \in \mathbf{R}^m$ and $\boldsymbol{N}=\boldsymbol{Q} \boldsymbol{Q}^{\mathrm{T}}+\boldsymbol{D}+\boldsymbol{C}$, where the entries of the matrix $\boldsymbol{Q} \in \mathbf{R}^{m \times m}$ and of the skew-symmetric matrix $\boldsymbol{D} \in \mathbf{R}^{m \times m}$ are uniformly generated from $(-5, 5)$, the diagonal entries of the diagonal matrix $\boldsymbol{C} \in \mathbf{R}^{m \times m}$ are uniformly generated from $(0, 0.3)$ (so $\boldsymbol{N}$ is positive definite), and the entries of the vector $\boldsymbol{b}$ are uniformly generated from $(-500, 0)$. In this example, the feasible set is taken as $\mathbf{R}_{+}^m$ and the initial points are $\boldsymbol{t}_0=\boldsymbol{t}_1=(1, 1, \ldots, 1)$. For YISAlg.1, $ \tau_1=0.1, \Delta=0.9, \varepsilon=2.5$ and $\alpha_n=\frac{1}{1+\varepsilon}\left(1-\frac{1}{(n+1)^{1.1}}\right)$ are taken. For TVRAlg.3.1, $\tau_1=0.1$, $\delta=0.9, \vartheta=0.3$, and $\vartheta_n=1 / n^2$ are chosen. For Alg.1, $\tau_1=0.1, \delta=0.9, \vartheta=0.3, \vartheta_n=1 / n^2, q_n=10 / (n+1)^{1.1}+1$ and $\bar{\omega}_n=0$ are adopted. All algorithms are terminated when $\left\|\boldsymbol{y}_n-\boldsymbol{d}_n\right\| <\varepsilon$ with $\varepsilon=10^{-3}$.
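The HpHard data described above can be generated as in the following sketch; the function names `make_hphard` and `proj` are illustrative, and the fixed random seed is only for reproducibility. The projection onto the feasible set $\mathbf{R}_{+}^m$ is the componentwise maximum with zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hphard(m):
    """HpHard data: N = Q Q^T + D + C with D skew-symmetric, C positive diagonal."""
    Q = rng.uniform(-5, 5, (m, m))
    U = np.triu(rng.uniform(-5, 5, (m, m)), 1)
    D = U - U.T                          # skew-symmetric, entries in (-5, 5)
    C = np.diag(rng.uniform(0, 0.3, m))
    N = Q @ Q.T + D + C                  # positive definite (not symmetric)
    b = rng.uniform(-500, 0, m)
    return N, b

def F(x, N, b):
    """Affine operator F(x) = N x + b of Example 2."""
    return N @ x + b

def proj(x):
    """Projection onto the feasible set R^m_+ (componentwise max with 0)."""
    return np.maximum(x, 0.0)
```

The skew-symmetric part $\boldsymbol{D}$ contributes nothing to $\boldsymbol{x}^{\mathrm{T}} \boldsymbol{N} \boldsymbol{x}$, so positive definiteness comes from $\boldsymbol{Q} \boldsymbol{Q}^{\mathrm{T}}$ and $\boldsymbol{C}$.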

The corresponding experimental results (execution time in seconds and number of iterations) are exhibited by employing different dimensions m. Table 2 records the experimental results.

Table 2 Results in Example 2

Remark 5    As shown in Table 2, Alg.1 performs better than YISAlg.1 and TVRAlg.3.1. Specifically, the proposed algorithm requires less CPU time and fewer iterations than the compared ones.

5 Conclusions

In this article, a modified subgradient extragradient algorithm with inertial effects and non-monotone stepsizes is proposed and analyzed for solving variational inequality problems with pseudomonotonicity. Its weak convergence is proved under weaker presumptions, and an R-linear convergence rate is obtained. Finally, numerical experiments verify the theoretical results.

References
[1]
Fichera G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, VIII. Ser., Rend. Cl. Sci. Fis. Mat. Nat., 1963, 34(8): 138-142.
[2]
Fichera G. Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. (Italian) Atti Accad. Naz. Lincei Mem. Cl. Sci. Fis. Mat. Natur. Sez. Ia, 1963/64, 7(8): 91-140.
[3]
Stampacchia G. Formes bilineaires coercitives sur les ensembles convexes. Comptes Rendus de l'Academie des Sciences, 1964, 258(18): 4413-4416.
[4]
Elliott C M. Variational and quasivariational inequalities applications to free boundary problems. SIAM Review, 1987, 29(2): 314-315. DOI:10.1137/1029059
[5]
Kinderlehrer D, Stampacchia G. An Introduction to Variational Inequalities and Their Applications. Philadelphia: Society for Industrial and Applied Mathematics, 2000. DOI:10.1137/1.9780898719451
[6]
Konnov I V. Combined Relaxation Methods for Variational Inequalities. Berlin: Springer, 2001. DOI:10.1007/978-3-642-56886-2
[7]
Censor Y, Gibali A, Reich S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optimization Methods and Software, 2011, 26(4-5): 827-845. DOI:10.1088/10556788.2010.551536
[8]
Censor Y, Gibali A, Reich S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization, 2012, 61(9): 1119-1132. DOI:10.1080/02331934.2010.539689
[9]
Thong D V, Shehu Y, Iyiola O S. Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numerical Algorithms, 2020, 84(2): 795-823. DOI:10.1007/s11075-019-00780-0
[10]
Facchinei F, Pang J S. Finite-Dimensional Variational Inequalities and Complementarity Problems. New York: Springer, 2003. DOI:10.1007/b97544
[11]
Shehu Y, Iyiola O S. Iterative algorithms for solving fixed point problems and variational inequalities with uniformly continuous monotone operators. Numerical Algorithms, 2018, 79(2): 529-553. DOI:10.1007/s11075-017-0449-z
[12]
Konnov I V. Combined Relaxation Methods for Variational Inequalities. Berlin: Springer, 2001. DOI:10.1007/978-3-642-56886-2
[13]
Kanzow C, Shehu Y. Strong convergence of a double projection-type method for monotone variational inequalities in Hilbert spaces. Journal of Fixed Point Theory and Applications, 2018, 20(1): 1-24. DOI:10.1007/s11784-018-0531-8
[14]
Liu H, Yang J. Weak convergence of iterative methods for solving quasimonotone variational inequalities. Computational Optimization and Applications, 2020, 77(2): 491-508. DOI:10.1007/s10589-020-00217-8
[15]
Malitsky Y V, Semenov V V. A hybrid method without extrapolation step for solving variational inequality problems. Journal of Global Optimization, 2015, 61(1): 193-202. DOI:10.1007/s10898-014-0150-x
[16]
Solodov M V, Svaiter B F. A new projection method for variational inequality problems. SIAM Journal on Control and Optimization, 1999, 37(3): 765-776. DOI:10.1137/S036312997317475
[17]
Yang J, Liu H W. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numerical Algorithms, 2019, 80(3): 741-752. DOI:10.1007/s11075-018-0504-4
[18]
Thong D V, Yang J, Cho Y J, et al. Explicit extragradient-like method with adaptive stepsizes for pseudomonotone variational inequalities. Optimization Letters, 2021, 15(6): 2181-2199. DOI:10.1007/s11590-020-01678-w
[19]
Thong D V, Li X H, Dong Q L, et al. An inertial Popov's method for solving pseudomonotone variational inequalities. Optimization Letters, 2021, 15(2): 757-777. DOI:10.1007/s11590-020-01599-8
[20]
Cai G, Dong Q L, Peng Y. Strong convergence theorems for solving variational inequality problems with pseudo-monotone and non-Lipschitz operators. Journal of Optimization Theory and Applications, 2021, 188(2): 447-472. DOI:10.1007/s10957-020-01792-w
[21]
Korpelevich G M. The extragradient method for finding saddle points and other problems. Ekonomika I Matematicheskie Metody, 1976, 12: 747-756.
[22]
Antipin A S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekonomika I Matematicheskie Metody, 1976, 12(6): 1164-1173.
[23]
Censor Y, Gibali A, Reich S. The subgradient extragradient method for solving variational inequalities in Hilbert space. Journal of Optimization Theory and Applications, 2011, 148(2): 318-335. DOI:10.1007/s10957-010-9757-3
[24]
Dong Q L, Jiang D, Gibali A. A modified subgradient extragradient method for solving the variational inequality problem. Numerical Algorithms, 2018, 79(3): 927-940. DOI:10.1007/s11075-017-0467-x
[25]
Yao Y, Iyiola O S, Shehu Y. Subgradient extragradient method with double inertial steps for variational inequalities. Journal of Scientific Computing, 2022, 90(2): 1-29. DOI:10.1007/s10915-021-01751-1
[26]
Reich S, Thong D V, Cholamjiak P, et al. Inertial projection-type methods for solving pseudomonotone variational inequality problems in Hilbert space. Numerical Algorithms, 2021, 88(2): 813-835. DOI:10.1007/s11075-020-01058-6
[27]
Thong D V, Vinh N T, Cho Y J, et al. Accelerated subgradient extragradient methods for variational inequality problems. Journal of Scientific Computing, 2019, 80(3): 1438-1462. DOI:10.1007/s10915-019-00984-5
[28]
Ogwo G N, Izuchukwu C, Shehu Y, et al. Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems. Journal of Scientific Computing, 2022, 90(1): 1-35. DOI:10.1007/s10915-021-01670-1
[29]
Thong D V, Van Hieu D, Rassias T M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optimization Letters, 2020, 14(1): 115-144. DOI:10.1007/s11590-019-01511-z
[30]
Thong D V, Cholamjiak P, Rassias M T, et al. Strong convergence of inertial subgradient extragradient algorithm for solving pseudomonotone equilibrium problems. Optimization Letters, 2022, 16: 545-573. DOI:10.1007/s11590-021-01734-z
[31]
Shehu Y, Iyiola O S, Thong D V, et al. An inertial subgradient extragradient algorithm extended to pseudomonotone equilibrium problems. Mathematical Methods of Operations Research, 2021, 93(2): 213-242. DOI:10.1007/s00186-020-00730-w
[32]
He B S. A class of projection and contraction methods for monotone variational inequalities. Applied Mathematics and Optimization, 1997, 35(1): 69-76. DOI:10.1007/BF02683320
[33]
Sun D F. A class of iterative methods for solving nonlinear projection equations. Journal of Optimization Theory and Applications, 1996, 91(1): 123-140. DOI:10.1007/BF02192286
[34]
Nesterov Y E. A method for solving the convex programming problem with convergence rate O(1/k2). Dokl. Akad. Nauk SSSR, 1983, 269(3): 543-547.
[35]
Polyak B T. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964, 4(5): 1-17. DOI:10.1016/0041-5553(64)90137-5
[36]
Alvarez F, Attouch H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Analysis, 2001, 9(1): 3-11. DOI:10.1023/A:1011253113155
[37]
Shehu Y, Iyiola O S, Reich S. A modified inertial subgradient extragradient method for solving variational inequalities. Optimization and Engineering, 2021, 23: 421-429. DOI:10.1007/s11081-020-09593-w
[38]
Chang X K, Liu S Y, Deng Z, et al. An inertial subgradient extragradient algorithm with adaptive stepsizes for variational inequality problems. Optimization Methods and Software, 2021, 1-20. DOI:10.1080/10556788.2021.1910946
[39]
Izuchukwu C, Shehu Y, Yao J C. New inertial forward-backward type for variational inequalities with quasi-monotonicity. Journal of Global Optimization, 2022, 1-24. DOI:10.1007/s10898-022-01152-0
[40]
Chang X, Bai J C. A projected extrapolated gradient method with larger step size for monotone variational inequalities. Journal of Optimization Theory and Applications, 2021, 190(2): 602-627. DOI:10.1007/s10957-021-01902-2
[41]
He S N, Yang C P, Duan P C. Realization of the hybrid method for Mann iterations. Applied Mathematics and Computation, 2010, 217(8): 4239-4247. DOI:10.1016/j.amc.2010.10.039
[42]
Bauschke H H, Combettes P L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. New York: Springer, 2011. DOI:10.1007/978-3-319-48311-5
[43]
Agarwal R P, Regan D O, Sahu D R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications. New York: Springer, 2009. DOI:10.1007/978-0-387-75818-3
[44]
Cottle R W, Yao J C. Pseudo-monotone complementarity problems in Hilbert space. Journal of Optimization Theory and Applications, 1992, 75(2): 281-295. DOI:10.1007/BF00941468
[45]
Mashreghi J, Nasri M. Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory. Nonlinear Analysis: Theory, Methods & Applications, 2010, 72(3-4): 2086-2099. DOI:10.1016/j.na.2009.10.009
[46]
Osilike M O, Aniagbosor S C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Mathematical and Computer Modelling, 2000, 32(10): 1181-1191. DOI:10.1016/S0895-7177(00)00199-0
[47]
Ma X J, Liu H W. An inertial Halpern-type CQ algorithm for solving split feasibility problems in Hilbert spaces. Journal of Applied Mathematics and Computing, 2021, 68: 1699-1717. DOI:10.1007/s12190-021-01585-y
[48]
Sahu D R, Cho Y J, Dong Q L, et al. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numerical Algorithms, 2021, 87(3): 1075-1095. DOI:10.1007/s11075-020-00999-2
[49]
Tan B, Qin X L, Yao J C. Two modified inertial projection algorithms for bilevel pseudomonotone variational inequalities with applications to optimal control problems. Numerical Algorithms, 2021, 88(4): 1757-1786. DOI:10.1007/s11075-021-01093-x
[50]
Maingé P E, Gobinddass M L. Convergence of one-step projected gradient methods for variational inequalities. Journal of Optimization Theory and Applications, 2016, 171(1): 146-168. DOI:10.1007/s10957-016-0972-4
[51]
Malitsky Y V. Projected reflected gradient methods for monotone variational inequalities. SIAM Journal on Optimization, 2015, 25(1): 502-520. DOI:10.1137/14097238X