Journal of Harbin Institute of Technology (New Series)  2022, Vol. 29 Issue (1): 15-23  DOI: 10.11916/j.issn.1005-9113.2020029

Citation 

Lulu Yin, Hongwei Liu. Subgradient extragradient methods for equilibrium problems and fixed point problems in Hilbert space[J]. Journal of Harbin Institute of Technology (New Series), 2022, 29(1): 15-23.   DOI: 10.11916/j.issn.1005-9113.2020029

Corresponding author

Lulu Yin, E-mail: yinluluxidian@163.com

Article history

Received: 2020-06-16
Subgradient extragradient methods for equilibrium problems and fixed point problems in Hilbert space
Lulu Yin, Hongwei Liu     
School of Mathematics and Statistics, Xidian University, Xi'an 710126, China
Abstract: Inspired by inertial methods and extragradient algorithms, two algorithms are proposed in this study for the fixed point problem of a quasi-nonexpansive mapping and the pseudomonotone equilibrium problem. To accelerate convergence and reduce computational cost, the algorithms use a new step size and a cutting hyperplane. The first algorithm is proved to be weakly convergent, while the second uses a modified Halpern iteration to obtain strong convergence. Finally, numerical experiments on several specific problems and comparisons with other algorithms verify the advantages of the proposed algorithms.
Keywords: subgradient extragradient methods    inertial methods    pseudomonotone equilibrium problems    fixed point problems    Lipschitz-type condition    
0 Introduction

Let H be a real Hilbert space equipped with inner product 〈·, ·〉 and induced norm ‖·‖. Consider a nonempty, closed, and convex set XH and a bifunction f: H×HR satisfying f(z, z)=0 for all zX. The equilibrium problem (EP) associated with f consists in finding $\breve{\boldsymbol{x}} \in \boldsymbol{X} $ such that

$ f(\breve{\boldsymbol{x}}, \boldsymbol{z}) \geqslant 0, \quad \forall \boldsymbol{z} \in \boldsymbol{X} $ (1)

which is also known as the Ky Fan inequality[1]. Its solution set is denoted by EP(X, f). Interestingly, fixed point problems, Nash equilibrium problems, variational inequalities, optimization problems, and many other models can be transformed into EP[2-4]. In recent years, numerous methods for seeking an approximate solution of problem (1) have been discussed[5-12].
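As an illustration of this reformulation (a hypothetical instance, not taken from the cited references), the variational inequality for an operator F can be cast as an EP through the bifunction f(x, z)=〈F(x), z-x〉, which vanishes on the diagonal as required:

```python
import numpy as np

# Hypothetical affine operator F(x) = Ax + b on R^2; the bifunction
# f(x, z) = <F(x), z - x> turns the variational inequality
# <F(x*), z - x*> >= 0 for all z in X into the EP f(x*, z) >= 0.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])

def F(x):
    return A @ x + b

def f(x, z):
    return F(x) @ (z - x)

# f vanishes on the diagonal, as the EP formulation requires: f(z, z) = 0.
z = np.array([0.3, -0.7])
print(f(z, z))  # 0.0
```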

A common approach is the extragradient method[7-8], which solves two strongly convex programming problems in every iteration. Evaluating these subproblems may be extremely expensive when the bifunction and/or the feasible set has an involved structure. In recent years, a vast amount of literature has been devoted to the study and improvement of this algorithm, of which Refs. [9-13] are representative. To combat this drawback, Van Hieu[10] put forward a modified method for fixed point problems and equilibrium problems by combining the Halpern iteration with the subgradient extragradient method, and established a strong convergence result; notably, the second strongly convex programming problem is solved over a half-space. Furthermore, to speed up convergence, Rehman et al.[14] recently designed a new algorithm by applying the inertial technique[15-17], and obtained weak convergence results under appropriate assumptions.

Motivated and inspired by the above-mentioned advantages of Refs. [10] and [14], this paper introduces two new algorithms for the fixed point problem of a quasi-nonexpansive mapping and the pseudomonotone equilibrium problem. Weak convergence is established for one algorithm and strong convergence for the other. In both algorithms, the new step size avoids any prior knowledge of the Lipschitz constants of the bifunction. The experimental results illustrate the numerical behavior of the algorithms.

The paper is structured as follows. Some preliminary information is reviewed in Section 1. Section 2 presents the convergence analysis. A similar application to variational inequalities is elaborated in Section 3. Eventually, two experiments in Section 4 show that the proposed algorithms are highly efficient.

1 Preliminaries

In this section, several notations and lemmas are introduced for later use. Stipulate s+=max{0, s}, s-=max{0, -s} for all sR. Presume that a, b, cH and τR are known. Then

$ \begin{array}{l} \|\tau \boldsymbol{b}+(1-\tau) \boldsymbol{c}\|^{2}=\tau\|\boldsymbol{b}\|^{2}+(1-\tau)\|\boldsymbol{c}\|^{2}- \\ \ \ \ \ \ \ \ \ \ \ \ \ \tau(1-\tau)\|\boldsymbol{c}-\boldsymbol{b}\|^{2} \end{array} $ (2)
$ \begin{array}{l} 2\langle\boldsymbol{b}-\boldsymbol{c}, \boldsymbol{b}-\boldsymbol{a}\rangle=\|\boldsymbol{b}-\boldsymbol{c}\|^{2}+\|\boldsymbol{a}-\boldsymbol{b}\|^{2}- \\ \ \ \ \ \ \ \ \ \|\boldsymbol{c}-\boldsymbol{a}\|^{2} \end{array} $ (3)

Evidently, the projection PX possesses the following feature:

$ \boldsymbol{c}=\boldsymbol{P}_{X} \boldsymbol{b} \Leftrightarrow\langle\boldsymbol{b}-\boldsymbol{c}, \boldsymbol{a}-\boldsymbol{c}\rangle \leqslant 0, \forall \boldsymbol{a} \in \boldsymbol{X} $
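This characterization can be checked numerically on a concrete choice of X; the sketch below is an illustrative instance only, with the closed unit ball standing in for X, where the projection has the closed form b/max{1, ‖b‖}:

```python
import numpy as np

# Numerical check of c = P_X(b)  <=>  <b - c, a - c> <= 0 for all a in X,
# with X the closed unit ball in R^2.
rng = np.random.default_rng(0)

def project_ball(b):
    return b / max(1.0, np.linalg.norm(b))

b = np.array([3.0, 4.0])
c = project_ball(b)            # c = (0.6, 0.8)
for _ in range(1000):
    a = project_ball(rng.normal(size=2))   # an arbitrary point of X
    assert (b - c) @ (a - c) <= 1e-12      # the variational inequality holds
```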

Definition 1.1   A bifunction f: H×H→R is

●  Pseudomonotone in X:

$ f(\boldsymbol{b}, \boldsymbol{c}) \geqslant 0 \Rightarrow f(\boldsymbol{c}, \boldsymbol{b}) \leqslant 0, \forall \boldsymbol{b}, \boldsymbol{c} \in \boldsymbol{X} $

●  Lipschitz-type condition in X:

$ \begin{aligned} &\exists l_{1}, l_{2}>0 \\ &\text { s.t. }f(\boldsymbol{c}, \boldsymbol{a})-f(\boldsymbol{c}, \boldsymbol{b})-f(\boldsymbol{b}, \boldsymbol{a}) \leqslant \\ &\qquad\qquad l_{1}\|\boldsymbol{c}-\boldsymbol{b}\|^{2}+l_{2}\|\boldsymbol{b}-\boldsymbol{a}\|^{2} , \\ &\qquad\qquad\quad \forall \boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c} \in \boldsymbol{X} \end{aligned} $

Further, given cX and a function h: X→(-∞, +∞], its subdifferential at c is described by

$ \begin{aligned} \partial h(\boldsymbol{c})=&\{\xi \in \boldsymbol{H}: h(\boldsymbol{b})-h(\boldsymbol{c}) \geqslant\\ &\langle\xi, \boldsymbol{b}-\boldsymbol{c}\rangle, \forall \boldsymbol{b} \in \boldsymbol{X}\} \end{aligned} $

and its normal cone of X is

$ \boldsymbol{N}_{X}(\boldsymbol{c})=\{\eta \in \boldsymbol{H}:\langle\eta, \boldsymbol{b}-\boldsymbol{c}\rangle \leqslant 0, \boldsymbol{b} \in \boldsymbol{X}\} $

Lemma 1.1[18]   Suppose h: X→(-∞, +∞] is subdifferentiable, lower semicontinuous, and convex. Suppose that there is a point of X at which h is continuous, or that h is finite at some interior point of X. Then c*=arg min{h(c): cX} is equivalent to 0∈∂h(c*)+NX(c*).

Definition 1.2   Let T: HH with Fix(T)≠Ø. Then

1) T is called quasi-nonexpansive if:

$ \forall \boldsymbol{a} \in \boldsymbol{H}, \boldsymbol{b} \in \operatorname{Fix}(T), \|T \boldsymbol{a}-\boldsymbol{b}\| \leqslant\|\boldsymbol{a}-\boldsymbol{b}\| $

2) T-I is said to be demiclosed at zero if:

$ \forall\left\{\boldsymbol{c}_{n}\right\} \subset \boldsymbol{H}, \text { s.t. } \boldsymbol{c}_{n} \rightharpoonup \boldsymbol{z}, $
$ T \boldsymbol{c}_{n}-\boldsymbol{c}_{n} \rightarrow 0 \Rightarrow \boldsymbol{z} \in \operatorname{Fix}(T) $

The proximal operator, the basic tool of the proposed algorithms, is recalled next. Assume that a function h: H→(-∞, +∞] is lower semicontinuous, proper, and convex. Given λ > 0 and cH, the proximal operator of h is described as

$ \operatorname{prox}_{\lambda h}(\boldsymbol{c})=\arg \min \left\{\lambda h(\boldsymbol{b})+\frac{1}{2}\|\boldsymbol{c}-\boldsymbol{b}\|^{2}: \boldsymbol{b} \in \boldsymbol{X}\right\} $

Remark 1.1   For ease of notation, for a lower semicontinuous and proper convex function h: X→(-∞, +∞], the symbol proxλhX(x) represents the proximal operator of h with parameter λ on X.
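For intuition, consider the linear case h(y)=〈g, y〉, which is the form taken by f(w, ·) when the EP comes from a variational inequality with f(x, y)=〈F(x), y-x〉; the proximal step then reduces to a projection, proxλhX(c)=PX(c-λg). The sketch below (the box X, the vectors g and c, and the grid check are all illustrative assumptions) confirms the closed form by brute force:

```python
import numpy as np

# prox_{lam*h}^X(c) with h(y) = <g, y> over the box X = [lo, hi]^2
# has the closed form P_X(c - lam * g); verify against a grid search.
lo, hi = -1.0, 1.0
g = np.array([0.8, -0.5])
c = np.array([0.4, 0.9])
lam = 0.7

def proj_box(x):
    return np.clip(x, lo, hi)

prox = proj_box(c - lam * g)   # closed form: clip((-0.16, 1.25)) = (-0.16, 1.0)

# brute-force arg min of lam*<g, y> + 0.5*||c - y||^2 over a grid in X
t = np.linspace(lo, hi, 401)
Y1, Y2 = np.meshgrid(t, t)
obj = lam * (g[0] * Y1 + g[1] * Y2) + 0.5 * ((c[0] - Y1) ** 2 + (c[1] - Y2) ** 2)
i = np.unravel_index(np.argmin(obj), obj.shape)
brute = np.array([Y1[i], Y2[i]])
print(prox, brute)
```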

A crucial property of the proximal operator is now recalled.

Lemma 1.2[19]   Let cH, bX, and λ > 0. Then

$ \begin{aligned} &\lambda\left\{h(\boldsymbol{b})-h\left(\operatorname{prox}_{\lambda h}^{X}(\boldsymbol{c})\right)\right\} \geqslant \\ &\quad\left\langle\boldsymbol{c}-\operatorname{prox}_{\lambda h}^{X}(\boldsymbol{c}), \boldsymbol{b}-\operatorname{prox}_{\lambda h}^{X}(\boldsymbol{c})\right\rangle \end{aligned} $ (4)

Lemma 1.3 (Peter-Paul inequality)   Given ε > 0 and a1, a2R, the following property holds:

$ 2 a_{1} a_{2} \leqslant \frac{a_{1}^{2}}{\varepsilon}+\varepsilon a_{2}^{2} $ (5)
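Inequality (5) follows from expanding $(a_1/\sqrt{\varepsilon}-\sqrt{\varepsilon}\, a_2)^2 \geqslant 0$; a quick randomized check (purely illustrative):

```python
import random

# Sample random a1, a2, eps and confirm 2*a1*a2 <= a1^2/eps + eps*a2^2.
random.seed(1)
for _ in range(1000):
    a1, a2 = random.uniform(-5, 5), random.uniform(-5, 5)
    eps = random.uniform(1e-3, 10.0)
    assert 2 * a1 * a2 <= a1 ** 2 / eps + eps * a2 ** 2 + 1e-9
```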

Lemma 1.4 (Opial)   Given a sequence {cn}⊂H, suppose $\boldsymbol{c}_{n} \rightharpoonup \boldsymbol{c} $. Then

$ \liminf \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_{n}-\boldsymbol{c}\right\|<\liminf \limits_{n \rightarrow \infty}\left\|\boldsymbol{c}_{n}-\boldsymbol{b}\right\|, \forall \boldsymbol{b} \neq \boldsymbol{c} $ (6)

Lemma 1.5[16]   Sequences $\left\{\vartheta_{n}\right\}, \quad\left\{\alpha_{n}\right\} $, and {ιn} in [0, +∞) satisfy

$ \iota_{n+1} \leqslant \iota_{n}+\alpha_{n}\left(\iota_{n}-\iota_{n-1}\right)+\vartheta_{n}, \sum\limits_{n=1}^{+\infty} \vartheta_{n}<+\infty $

Suppose that there exists a real number α such that 0≤αnα < 1 for all nN. Then the following can be obtained:

1) $\sum\limits_{n=1}^{+\infty}\left(\iota_{n}-\iota_{n-1}\right)^{+}<+\infty $;

2) There is ι*∈[0, +∞) such that $\lim \limits_{n \rightarrow+\infty} \iota_{n}=\iota^{*} $.

Lemma 1.6[20]   Given sequences {an}, {bn}, and {αn}, where an≥0, αn∈(0, 1), and $\sum\limits_{n=0}^{\infty} \alpha_{n}=\infty $, suppose an+1αnbn+(1-αn)an for all nN. If, for each subsequence {ank} of {an} satisfying

$ \liminf \limits_{k \rightarrow \infty}\left(a_{n_{k}+1}-a_{n_{k}}\right) \geqslant 0 $

it holds that $\limsup \limits_{k \rightarrow \infty} b_{n_{k}} \leqslant 0 $, then $\lim \limits_{n \rightarrow+\infty} a_{n}=0 $.

2 Algorithms and Convergence Analysis

Two methods are proposed and studied in this section. To prove their convergence, the following assumptions are made.

Condition (A):

(A1) f is pseudomonotone over X;

(A2) f(z, ·) is convex and subdifferentiable over X for any zH;

(A3) $\mathop {\lim \sup }\limits_{n \to \infty } f\left( {{\mathit{\boldsymbol{z}}_n}, \mathit{\boldsymbol{e}}} \right) \leqslant f(\mathit{\boldsymbol{z}}, \mathit{\boldsymbol{e}}) $ for every eX and every sequence {zn}⊂X with $\boldsymbol{z}_{n} \rightharpoonup \boldsymbol{z} $;

(A4) f has Lipschitz-type condition over H with l1, l2 > 0.

Remark 2.1   When Condition (A) is satisfied, the solution set EP(X, f) of problem (1) is convex and closed[7, 10]. In addition, Fix(T)⊂H is also convex and closed when T is quasi-nonexpansive[21]. For convenience, the notation Λ=EP(X, f)∩Fix(T) is used.

2.1 Weak Convergence

First, inspired by the work of Ref.[14], the weak convergence of the first algorithm is derived. In addition, the algorithm's step size is specially selected so that the algorithm need not know the Lipschitz constants in advance. The first algorithm has the following form:

Algorithm 2.1

Step 0   Select x0, x1X, μ∈(0, 1), λ0 > 0. Choose a non-negative real sequence {pn} satisfying $\sum\limits_{n=0}^{\infty} p_{n}<+\infty $.

Step 1   Assume that xn-1 and xn are known. Compute

$ \boldsymbol{w}_{n}=\boldsymbol{x}_{n}+\alpha_{n}\left(\boldsymbol{x}_{n}-\boldsymbol{x}_{n-1}\right) $
$ \boldsymbol{y}_{n}=\arg \min \left\{\lambda_{n} f\left(\boldsymbol{w}_{n}, \boldsymbol{y}\right)+\frac{1}{2}\left\|\boldsymbol{w}_{n}-\boldsymbol{y}\right\|^{2}: \boldsymbol{y} \in \boldsymbol{X}\right\}=\operatorname{prox}_{\lambda_{n} f\left(\boldsymbol{w}_{n}, \cdot\right)}^{X}\left(\boldsymbol{w}_{n}\right) $

Step 2   Select vn∈∂2f(wn, yn) such that

$ \boldsymbol{w}_{n}-\lambda_{n} \boldsymbol{v}_{n}-\boldsymbol{y}_{n} \in \boldsymbol{N}_{X}\left(\boldsymbol{y}_{n}\right) $

Compute

$ \boldsymbol{z}_{n}=\arg \min \left\{\lambda_{n} f\left(\boldsymbol{y}_{n}, \boldsymbol{z}\right)+\frac{1}{2}\left\|\boldsymbol{w}_{n}-\boldsymbol{z}\right\|^{2}: \boldsymbol{z} \in \boldsymbol{T}_{n}\right\}=\operatorname{prox}_{\lambda_{n} f\left(\boldsymbol{y}_{n}, \cdot\right)}^{T_{n}}\left(\boldsymbol{w}_{n}\right) $

where

$ \boldsymbol{T}_{n}=\left\{\boldsymbol{x} \in \boldsymbol{H}:\left\langle\boldsymbol{w}_{n}-\lambda_{n} \boldsymbol{v}_{n}-\boldsymbol{y}_{n}, \boldsymbol{y}_{n}-\boldsymbol{x}\right\rangle \geqslant 0\right\} $

Step 3   Define xn+1=(1-βn)wn+βnTzn and

$ \begin{aligned} &\lambda_{n+1}= \\ &\left\{\begin{array}{l} \min \left\{\frac{\mu\left(\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)}{2\left(f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)\right)}, \lambda_{n}+p_{n}\right\}, \\ \qquad\quad \text { if } f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)>0 \\ \lambda_{n}+p_{n}, \text { otherwise } \end{array}\right. \end{aligned} $

If wn = yn = xn+1, then stop: wnΛ.

Take n: =n+1 and revert to Step 1.
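To make the steps concrete, the following sketch runs Algorithm 2.1 on a toy instance (entirely an assumption for illustration, not one of the paper's experiments): f(x, y)=〈F(x), y-x〉 with F(x)=x-p, X a box containing p, and T the projection onto a ball containing p, so that Λ={p}. For this bifunction, linear in its second argument, both proximal steps reduce to projections and vn=F(wn); the cutting half-space Tn degenerates to the whole space when its normal vector vanishes.

```python
import numpy as np

p = np.array([0.5, -1.0])            # the unique point of Lambda in this toy instance

def F(x):                            # f(x, y) = <F(x), y - x>
    return x - p

def proj_X(x):                       # projection onto the box X = [-2, 2]^2
    return np.clip(x, -2.0, 2.0)

def T(x):                            # quasi-nonexpansive mapping: projection onto a ball
    n = np.linalg.norm(x)
    return x if n <= 5.0 else 5.0 * x / n

def proj_halfspace(x, u, d):         # projection onto {z : <u, z> <= d}
    s = u @ x - d
    return x if s <= 0 else x - (s / (u @ u)) * u

mu, lam = 0.5, 1.0                   # mu in (0, 1), initial step size lambda_0 > 0
alpha, beta = 0.2, 0.5               # alpha < 1/3, beta in (0, 1/2]
x_prev, x = np.array([2.0, 2.0]), np.array([-2.0, 1.5])

for n in range(200):
    w = x + alpha * (x - x_prev)                 # Step 1: inertial extrapolation
    y = proj_X(w - lam * F(w))                   # Step 1: prox of linear f(w, .) = projection
    u = w - lam * F(w) - y                       # Step 2: normal vector of T_n (v_n = F(w_n))
    t = w - lam * F(y)
    z = t if u @ u < 1e-16 else proj_halfspace(t, u, u @ y)  # prox over half-space T_n
    x_prev, x = x, (1 - beta) * w + beta * T(z)  # Step 3
    denom = 2 * (F(w) - F(y)) @ (z - y)          # = 2(f(w,z) - f(w,y) - f(y,z)) here
    if denom > 0:                                # Step 3: adaptive step size (p_n = 0)
        lam = min(mu * ((w - y) @ (w - y) + (z - y) @ (z - y)) / denom, lam)

print(x)  # converges to p = [0.5, -1.0]
```

Note that no Lipschitz constant of f is supplied: the step size is updated from quantities computed along the iteration, as in Step 3 of the algorithm.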

Remark 2.2   The existence of vn and the inclusion XTn follow readily from Algorithm 2.1. Please refer to Ref.[22] for a detailed proof.

Remark 2.3   The existence of the parameter μ in Algorithm 2.1 is necessary for the subsequent proof that the proposed algorithm is convergent.

Lemma 2.1   The sequence {λn} generated by Algorithm 2.1 converges to a limit λ, and $ \min \left\{\frac{\mu}{2 \max \left\{l_{1}, l_{2}\right\}}, \lambda_{0}\right\} \leqslant \lambda \leqslant \lambda_{0}+P$ holds, where $ P=\sum\limits_{n=0}^{\infty} p_{n}$.

Proof   When

$ f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)>0 $

the Lipschitz-type condition of f yields:

$ \begin{aligned} &\frac{\mu\left(\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)}{2\left(f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)\right)} \geqslant \\ &\ \ \ \ \ \ \ \ \frac{\mu\left(\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)}{2\left(l_{1}\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+l_{2}\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)} \geqslant \\ &\ \ \ \ \ \ \ \ \frac{\mu}{2 \max \left\{l_{1}, l_{2}\right\}} \end{aligned} $

Using the above inequality and induction, the following expression is derived:

$ \min \left\{\frac{\mu}{2 \max \left\{l_{1}, l_{2}\right\}}, \lambda_{0}\right\} \leqslant \lambda_{n} \leqslant \lambda_{0}+P $

Taking sn+1=λn+1-λn, definition of {λn} leads to

$ \sum\limits_{n=0}^{\infty} s_{n+1}^{+} \leqslant \sum\limits_{n=0}^{\infty} p_{n}<+\infty $ (7)

Therefore, $\sum\limits_{n=0}^{\infty} s_{n+1}^{+} $ converges. The next task is to show that $\sum\limits_{n=0}^{\infty} s_{n+1}^{-} $ also converges. Suppose, to the contrary, that $\sum\limits_{n=0}^{\infty} s_{n+1}^{-}=+\infty $. Since $s_{n+1}=s_{n+1}^{+}-s_{n+1}^{-} $, the following equation is obtained:

$ \lambda_{k+1}-\lambda_{0}=\sum\limits_{n=0}^{k} s_{n+1}^{+}-\sum\limits_{n=0}^{k} s_{n+1}^{-} $ (8)

Letting k→+∞ in Eq.(8) gives λk+1→-∞, which is impossible. Hence $\sum\limits_{n=0}^{\infty} s_{n+1}^{-} $ converges. Because $\sum\limits_{n=0}^{\infty} s_{n+1}^{+} $ and $\sum\limits_{n=0}^{\infty} s_{n+1}^{-} $ are convergent, letting k→+∞ in Eq.(8) shows that $\lim \limits_{n \rightarrow \infty} \lambda_{n}=\lambda $ exists. By the boundedness of {λn}, $\min \left\{\frac{\mu}{2 \max \left\{l_{1}, l_{2}\right\}}, \lambda_{0}\right\} $λλ0+P is obtained.

Remark 2.4   The sequence {λn} updated by Algorithm 2.1 is not necessarily monotonically decreasing, which reduces the dependence on the initial step size λ0. Furthermore, $\lim \limits_{n\rightarrow \infty} \frac{\lambda_{n}}{\lambda_{n+1}}=1 $ follows from standard limit arithmetic for convergent sequences.

Now, a lemma is introduced to pave the way for proof of convergence result.

Lemma 2.2   Suppose the sequence {zn} is generated by Algorithm 2.1. Then, for any $\breve{\boldsymbol{x}}\in \boldsymbol{\varLambda} $,

$ \begin{aligned} &\left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2} \leqslant\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}- \\ &\quad\left(1-\frac{\lambda_{n}}{\lambda_{n+1}} \mu\right)\left(\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{y}_{n}-\boldsymbol{z}_{n}\right\|^{2}\right) \end{aligned} $

Proof   From Lemma 1.2 and$\boldsymbol{z}_{n}=\operatorname{prox}_{\lambda_{n}f\left(\boldsymbol{y}_{n}, \cdot\right)}^{\boldsymbol{T}_{n}}\left(\boldsymbol{w}_{n}\right) $, the following expression is deduced:

$ \begin{aligned} \lambda_{n}(&\left.f\left(\boldsymbol{y}_{n}, \boldsymbol{z}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)\right) \geqslant \\ &\left\langle\boldsymbol{w}_{n}-\boldsymbol{z}_{n}, \boldsymbol{z}-\boldsymbol{z}_{n}\right\rangle, \forall z \in \boldsymbol{T}_{n} \end{aligned} $ (9)

Note that Λ⊂EP(X, f)⊂XTn. Given $\breve{\boldsymbol{x}} $Λ, taking z=$\breve{\boldsymbol{x}} $ in inequality (9) leads to

$ \lambda_{n}\left(f\left(\boldsymbol{y}_{n}, \breve{\boldsymbol{x}}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)\right) \geqslant\left\langle\boldsymbol{w}_{n}-\boldsymbol{z}_{n}, \breve{\boldsymbol{x}}-\boldsymbol{z}_{n}\right\rangle $ (10)

For any n≥0, ynX and $\breve{\boldsymbol{x}} $∈EP(X, f) give f($\breve{\boldsymbol{x}} $, yn)≥0. Because f is pseudomonotone, f(yn, $\breve{\boldsymbol{x}} $)≤0 can be deduced. Hence, from inequality (10) and λn > 0, it is found that

$ -\lambda_{n} f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right) \geqslant\left\langle\boldsymbol{w}_{n}-\boldsymbol{z}_{n}, \breve{\boldsymbol{x}}-\boldsymbol{z}_{n}\right\rangle $ (11)

Using the definition of the subdifferential, znTnH, and vn∈∂2f(wn, yn), the following expression is derived:

$ f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right) \geqslant\left\langle\boldsymbol{v}_{n}, \boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\rangle $ (12)

Owing to the given form of Tn, it is found that

$ \left\langle\boldsymbol{w}_{n}-\lambda_{n} \boldsymbol{v}_{n}-\boldsymbol{y}_{n}, \boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\rangle \leqslant 0 $

Hence, there is

$ \lambda_{n}\left(f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)\right) \geqslant\left\langle\boldsymbol{y}_{n}-\boldsymbol{w}_{n}, \boldsymbol{y}_{n}-\boldsymbol{z}_{n}\right\rangle $ (13)

Applying inequalities (11) and (13) yields

$ \begin{array}{c} 2 \lambda_{n}\left(f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)\right) \geqslant 2\left\langle\boldsymbol{y}_{n}-\right. \\ \left.\boldsymbol{w}_{n}, \boldsymbol{y}_{n}-\boldsymbol{z}_{n}\right\rangle+2\left\langle\boldsymbol{w}_{n}-\boldsymbol{z}_{n}, \breve{\boldsymbol{x}}-\boldsymbol{z}_{n}\right\rangle=\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}+ \\ \left\|\boldsymbol{y}_{n}-\boldsymbol{w}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2}-\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2} \end{array} $ (14)

By the definition of λn+1, the following is achieved:

$ \begin{array}{c} \left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2} \leqslant\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}-\left(\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\right. \\ \left.\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)+\frac{\lambda_{n}}{\lambda_{n+1}} \mu\left(\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\right. \\ \left.\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)=\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}-\left(1-\right. \\ \left.\frac{\lambda_{n}}{\lambda_{n+1}} \mu\right)\left(\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right) \end{array} $ (15)

Lemma 2.3   Let the sequences {xn}, {wn}, {yn}, and {zn} be generated by Algorithm 2.1. Suppose

$ \lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{x}_{n_{k}}-\boldsymbol{w}_{n_{k}}\right\|=0, \lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{w}_{n_{k}}-\boldsymbol{y}_{n_{k}}\right\|=0 $
$ \lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{w}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|=0, \lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|=0 $
$ \lim \limits_{k \rightarrow \infty}\left\|T \boldsymbol{z}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|=0 $

If the subsequence {xnk} converges weakly to x"H, then x"Λ.

Proof   Apparently, $ \boldsymbol{w}_{n_{k}} \rightharpoonup \boldsymbol{x}^{\prime \prime}, \boldsymbol{y}_{n_{k}} \rightharpoonup \boldsymbol{x}^{\prime \prime}, \boldsymbol{z}_{n_{k}} \rightharpoonup \boldsymbol{x}^{\prime \prime}$, and x"X. From inequality (9), the following expression is deduced:

$ \begin{aligned} &\lambda_{n_{k}}\left(f\left(\boldsymbol{y}_{n_{k}}, \boldsymbol{z}\right)-f\left(\boldsymbol{y}_{n_{k}}, \boldsymbol{z}_{n_{k}}\right)\right) \geqslant \\ &\ \ \left\langle\boldsymbol{w}_{n_{k}}-\boldsymbol{z}_{n_{k}}, \boldsymbol{z}-\boldsymbol{z}_{n_{k}}\right\rangle, \quad \forall \boldsymbol{z} \in \boldsymbol{T}_{n} \end{aligned} $ (16)

The Lipschitz-type condition of f on X yields

$ \begin{gathered} \lambda_{n_{k}} f\left(\boldsymbol{y}_{n_{k}}, \boldsymbol{z}_{n_{k}}\right) \geqslant \lambda_{n_{k}}\left(f\left(\boldsymbol{w}_{n_{k}}, \boldsymbol{z}_{n_{k}}\right)-f\left(\boldsymbol{w}_{n_{k}}, \boldsymbol{y}_{n_{k}}\right)\right)- \\ \lambda_{n_{k}} l_{1}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{w}_{n_{k}}\right\|^{2}-\lambda_{n_{k}} l_{2}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|^{2} \end{gathered} $ (17)

According to inequalities (13) and (17), the following expression can be obtained:

$ \begin{gathered} \lambda_{n_{k}} f\left(\boldsymbol{y}_{n_{k}}, \boldsymbol{z}_{n_{k}}\right) \geqslant\left\langle\boldsymbol{w}_{n_{k}}-\boldsymbol{y}_{n_{k}}, \boldsymbol{z}_{n_{k}}-\boldsymbol{y}_{n_{k}}\right\rangle- \\ \lambda_{n_{k}} l_{1}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{w}_{n_{k}}\right\|^{2}-\lambda_{n_{k}} l_{2}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|^{2} \end{gathered} $ (18)

Combining inequality (16), inequality (18), and XTn, it can be deduced for all zX that

$ \begin{aligned} f\left(\boldsymbol{y}_{n_{k}}, \boldsymbol{z}\right) \geqslant & \frac{1}{\lambda_{n_{k}}}\left\langle\boldsymbol{w}_{n_{k}}-\boldsymbol{z}_{n_{k}}, \boldsymbol{z}-\boldsymbol{z}_{n_{k}}\right\rangle+\\ & \frac{1}{\lambda_{n_{k}}}\left\langle\boldsymbol{w}_{n_{k}}-\boldsymbol{y}_{n_{k}}, \boldsymbol{z}_{n_{k}}-\boldsymbol{y}_{n_{k}}\right\rangle-\\ & l_{1}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{w}_{n_{k}}\right\|^{2}-l_{2}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|^{2} \end{aligned} $

Taking the limit in the last inequality and using

$ \begin{gathered} \lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{w}_{n_{k}}-\boldsymbol{y}_{n_{k}}\right\|=\lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{w}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|= \\ \lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{y}_{n_{k}}-\boldsymbol{z}_{n_{k}}\right\|=0 \end{gathered} $
$ \lim \limits_{n \rightarrow \infty} \lambda_{n}=\lambda>0 $

f(x", z)≥0, ∀zX is deduced from assumption (A3). In other words, x"∈EP(X, f). Moreover, since $\boldsymbol{z}_{n_{k}} \rightharpoonup \boldsymbol{x}^{\prime \prime} $ and T-I is demiclosed at zero, x"∈Fix(T) is obtained. Therefore, x"Λ.

Theorem 2.1   Let the sequences {αn} and {βn} satisfy 0≤αnαn+1α < $\frac{1}{3} $ and 0 < ββn$\frac{1}{2} $. Suppose Condition (A) holds and Λ≠Ø. Then every weak limit point of the sequence {xn} formed by Algorithm 2.1 belongs to Λ.

Proof   Since $\lim \limits_{n \rightarrow \infty}\left(1-\frac{\lambda_{n}}{\lambda_{n+1}} \mu\right)=1-\mu>0 $, there exists NN such that $ 1-\frac{\lambda_{n}}{\lambda_{n+1}} \mu>0, \forall n \geqslant N$. Using Lemma 2.2, it is derived that

$ \left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\| \leqslant\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|, \forall n \geqslant N $ (19)

By invoking the quasi-nonexpansiveness of T, βn≤1/2, and xn+1=(1-βn)wn+βnTzn, the following expression is obtained:

$ \begin{gathered} \left\|\boldsymbol{x}_{n+1}-\breve{\boldsymbol{x}} \right\|^{2}=\left(1-\beta_{n}\right)\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}} \right\|^{2}+\beta_{n}\left\|T \boldsymbol{z}_{n}-\breve{\boldsymbol{x}} \right\|^{2}- \\ \beta_{n}\left(1-\beta_{n}\right)\left\|T \boldsymbol{z}_{n}-\boldsymbol{w}_{n}\right\|^{2} \leqslant\left(1-\beta_{n}\right)\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}} \right\|^{2}+ \\ \beta_{n}\left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}} \right\|^{2}-\frac{1-\beta_{n}}{\beta_{n}}\left\|\boldsymbol{x}_{n+1}-\boldsymbol{w}_{n}\right\|^{2} \leqslant \\ \left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}} \right\|^{2}-\left\|\boldsymbol{x}_{n+1}-\boldsymbol{w}_{n}\right\|^{2} \end{gathered} $ (20)

Furthermore, by utilizing the definition of wn and inequality (2), it is derived

$ \begin{aligned} &\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}=\left(1+\alpha_{n}\right)\left\|\boldsymbol{x}_{n}-\breve{\boldsymbol{x}}\right\|^{2}- \\ &\ \ \alpha_{n}\left\|\boldsymbol{x}_{n-1}-\breve{\boldsymbol{x}}\right\|^{2}+\alpha_{n}\left(1+\alpha_{n}\right)\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2} \end{aligned} $ (21)

and

$ \begin{gathered} \left\|\boldsymbol{x}_{n+1}-\boldsymbol{w}_{n}\right\|^{2}=\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2}+\alpha_{n}^{2}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2}- \\ 2 \alpha_{n}\left\langle\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}, \boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\rangle \geqslant\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2}+ \\ \alpha_{n}^{2}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2}-\alpha_{n}\left(\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2}+\right. \\ \left.\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2}\right)=\left(1-\alpha_{n}\right)\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2}- \\ \left(\alpha_{n}-\alpha_{n}^{2}\right)\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2} \end{gathered} $ (22)

Applying inequalities (20), (22), and Eq. (21), the following expression can be deduced:

$ \begin{gathered} \left\|\boldsymbol{x}_{n+1}-\breve{\boldsymbol{x}}\right\|^{2} \leqslant\left(1+\alpha_{n}\right)\left\|\boldsymbol{x}_{n}-\breve{\boldsymbol{x}}\right\|^{2}- \\ \alpha_{n}\left\|\boldsymbol{x}_{n-1}-\breve{\boldsymbol{x}}\right\|^{2}+2 \alpha_{n}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2}- \\ \left(1-\alpha_{n}\right)\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2} \end{gathered} $ (23)

The non-decreasing property of {αn} leads to

$ \begin{gathered} \left\|\boldsymbol{x}_{n+1}-\breve{\boldsymbol{x}}\right\|^{2}-\alpha_{n+1}\left\|\boldsymbol{x}_{n}-\breve{\boldsymbol{x}}\right\|^{2}+ \\ 2 \alpha_{n+1}\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2} \leqslant\left\|\boldsymbol{x}_{n}-\breve{\boldsymbol{x}}\right\|^{2}- \\ \alpha_{n}\left\|\boldsymbol{x}_{n-1}-\breve{\boldsymbol{x}}\right\|^{2}+\left(2 \alpha_{n+1}-1+\right. \\ \left.\alpha_{n}\right)\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2}+2 \alpha_{n}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2} \end{gathered} $ (24)

Let

$ \begin{aligned} &\iota_{n}=\left\|\boldsymbol{x}_{n}-\breve{\boldsymbol{x}}\right\|^{2}-\alpha_{n}\left\|\boldsymbol{x}_{n-1}-\breve{\boldsymbol{x}}\right\|^{2}+ \\ &\ \ \ \ 2 \alpha_{n}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2} \end{aligned} $

then inequality (24) can be written as

$ \iota_{n+1}-\iota_{n} \leqslant\left(2 \alpha_{n+1}-1+\alpha_{n}\right)\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2} $ (25)

Since $ 0 \leqslant \alpha_{n} \leqslant \alpha<\frac{1}{3}$, it holds that

$ 2 \alpha_{n+1}-1+\alpha_{n} \leqslant 3 \alpha-1<0 $

Setting ϑ=1-3α>0, inequality (25) leads to

$ 0 \geqslant-\vartheta\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2} \geqslant \iota_{n+1}-\iota_{n} $ (26)

In other words, progression {ιn} does not increase. Moreover, utilizing the form of {ιn}, the following expression is deduced:

$ \iota_{j} \geqslant\left\|\boldsymbol{x}_{j}-\breve{\boldsymbol{x}}\right\|^{2}-\alpha_{j}\left\|\boldsymbol{x}_{j-1}-\breve{\boldsymbol{x}}\right\|^{2} $ (27)

Also, considering the form of ιn+1, the following formula is obtained:

$ \alpha_{j+1}\left\|\boldsymbol{x}_{j}-\breve{\boldsymbol{x}}\right\|^{2} \geqslant-\iota_{j+1} $ (28)

Hence, inequalities (27) and (28) indicate that

$ \begin{aligned} -\iota_{j+1} & \leqslant \alpha\left(\iota_{j}+\alpha\left\|\boldsymbol{x}_{j-1}-\breve{\boldsymbol{x}}\right\|^{2}\right) \leqslant \cdots \leqslant \\ & \alpha\left(\alpha^{j-N}\left\|\boldsymbol{x}_{N}-\breve{\boldsymbol{x}}\right\|^{2}+\iota_{N}\left(1+\alpha+\cdots+\alpha^{j-N-1}\right)\right) \leqslant \\ & \alpha^{j-N+1}\left\|\boldsymbol{x}_{N}-\breve{\boldsymbol{x}}\right\|^{2}+\frac{\alpha \iota_{N}}{1-\alpha} \end{aligned} $ (29)

By making use of inequalities (26) and (29), there is

$ \begin{gathered} -\vartheta \sum\limits_{n=N}^{j}\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2} \leqslant \iota_{N}-\iota_{j+1} \leqslant \iota_{N}+ \\ \alpha^{j-N+1}\left\|\boldsymbol{x}_{N}-\breve{\boldsymbol{x}}\right\|^{2}+\frac{\alpha \iota_{N}}{1-\alpha} \end{gathered} $ (30)

Letting j→∞ in inequality (30), the following expression is derived:

$ \sum\limits_{n=N}^{\infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|^{2}<+\infty $ (31)

which indicates

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|=0 $ (32)

By using αnα, there is

$ \begin{array}{c} \left\|\boldsymbol{w}_{n}-\boldsymbol{x}_{n+1}\right\| \leqslant\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|+\alpha_{n}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\| \leqslant \\ \left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|+\alpha\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\| \end{array} $ (33)

Therefore, by Eq.(32) and inequality (33), the following expression is obtained:

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{w}_{n}-\boldsymbol{x}_{n+1}\right\|=0 $ (34)

Combining Eqs.(32) and (34) yields

$ \begin{gathered} \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{w}_{n}\right\| \leqslant \lim \limits_{n \rightarrow \infty}\left(\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n+1}\right\|+\right. \\ \left.\left\|\boldsymbol{w}_{n}-\boldsymbol{x}_{n+1}\right\|\right)=0 \end{gathered} $

So

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{w}_{n}\right\|=0 $ (35)

From expression (22), for nN, the following expression is obtained:

$ \begin{aligned} &\left\|\boldsymbol{x}_{n+1}-\breve{\boldsymbol{x}}\right\|^{2} \leqslant\left(1+\alpha_{n}\right)\left\|\boldsymbol{x}_{n}-\breve{\boldsymbol{x}}\right\|^{2}- \\ &\ \ \ \ \ \ \ \ \alpha_{n}\left\|\boldsymbol{x}_{n-1}-\breve{\boldsymbol{x}}\right\|^{2}+2 \alpha\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|^{2} \end{aligned} $ (36)

By inequalities (31) and (36), and invoking Lemma 1.5, there is

$ \left\|\boldsymbol{x}_{n}-\boldsymbol{x}^{\prime}\right\|^{2} \rightarrow \sigma $ (37)

Eq.(35) leads to

$ \left\|\boldsymbol{w}_{n}-\boldsymbol{x}^{\prime}\right\|^{2} \rightarrow {\sigma} $ (38)

Because of the relationship provided in inequality (20), the following is obtained:

$ \begin{gathered} \left\|\boldsymbol{x}_{n+1}-\breve{\boldsymbol{x}}\right\|^{2} \leqslant\left(1-\beta_{n}\right)\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}+ \\ \beta_{n}\left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2} \end{gathered} $

which means

$ \begin{gathered} \left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2} \geqslant\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}+\\ \frac{\left\|\boldsymbol{x}_{n+1}-\breve{\boldsymbol{x}}\right\|^{2}-\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}}{\beta_{n}} \end{gathered} $ (39)

Applying expressions (37), (38), inequality (39), and $0<\beta \leqslant \beta_{n} \leqslant \frac{1}{2} $, there is

$ \liminf \limits_{n \rightarrow \infty}\left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2} \geqslant \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}=\sigma $

Exploiting Lemma 2.2, the following expression is derived:

$ \left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\| \leqslant\left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\| \rightarrow \sqrt{\sigma} $

Therefore,

$ \left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2} \rightarrow \sigma $ (40)

Lemma 2.2 is invoked to obtain

$ \begin{gathered} \left(1-\frac{\lambda_{n}}{\lambda_{n+1}} \mu\right)\left(\left\|\boldsymbol{y}_{n}-\boldsymbol{w}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right) \leqslant \\ \left\|\boldsymbol{w}_{n}-\breve{\boldsymbol{x}}\right\|^{2}-\left\|\boldsymbol{z}_{n}-\breve{\boldsymbol{x}}\right\|^{2}, \forall n \geqslant N \end{gathered} $

The last expression implies that

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{y}_{n}-\boldsymbol{w}_{n}\right\|=\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{y}_{n}-\boldsymbol{z}_{n}\right\|=0 $

So it is found that

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{w}_{n}-\boldsymbol{z}_{n}\right\|=0 $

By the form of xn+1, Eq.(34), and $0<\beta \leqslant \beta_{n} \leqslant \frac{1}{2} $, $\lim \limits_{n \rightarrow \infty}\left\|T \boldsymbol{z}_{n}-\boldsymbol{w}_{n}\right\|=0 $ is obtained. Since $ \lim \limits_{n \rightarrow \infty}\left\|T \boldsymbol{z}_{n}-\boldsymbol{z}_{n}\right\| \leqslant \lim \limits_{n \rightarrow \infty}\left(\left\|T \boldsymbol{z}_{n}-\boldsymbol{w}_{n}\right\|+\left\|\boldsymbol{w}_{n}-\boldsymbol{z}_{n}\right\|\right)=0$, there is

$ \lim \limits_{n \rightarrow \infty}\left\|T \boldsymbol{z}_{n}-\boldsymbol{z}_{n}\right\|=0 $

Overall, the conclusion is drawn that the sequences {xn}, {wn}, {yn}, and {zn} are bounded and $ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\breve{\boldsymbol{x}}\right\|^{2}$ exists. The boundedness of {xn} means that some subsequence {xnk}⊂{xn} weakly converges to a point x"H. Utilizing Lemma 2.3, it leads to x"Λ.

The next task is to verify that $\boldsymbol{x}_{n} \rightharpoonup \boldsymbol{x}^{\prime \prime} $. Suppose, to the contrary, that there exist weak cluster points x", $\hat{\boldsymbol{x}} $Λ with x"$\hat{\boldsymbol{x}} $, and let {xni} be a subsequence with xni weakly converging to $\hat{\boldsymbol{x}} $ as i→∞. It is worth noting that the following expression holds:

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{x}^{\prime}\right\|=\sqrt{\sigma} \in R, \quad \forall \boldsymbol{x}^{\prime} \in \boldsymbol{\varLambda} $ (41)

Applying Lemma 1.4, it is deduced that

$ \begin{aligned} &\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\hat{\boldsymbol{x}}\right\|=\lim \limits_{i \rightarrow \infty}\left\|\boldsymbol{x}_{n_{i}}-\hat{\boldsymbol{x}}\right\|= \\ &\ \ \ \ \liminf \limits_{i \rightarrow \infty}\left\|\boldsymbol{x}_{n_{i}}-\hat{\boldsymbol{x}}\right\|<\liminf \limits_{i \rightarrow \infty} \left\|\boldsymbol{x}_{n_i}-\boldsymbol{x}^{\prime \prime}\right\|= \\ &\ \ \ \ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{x}^{\prime \prime}\right\|=\lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{x}_{n_{k}}-\boldsymbol{x}^{\prime \prime}\right\|= \\ &\ \ \ \ \liminf \limits_{k \rightarrow \infty}\left\|\boldsymbol{x}_{n_{k}}-\boldsymbol{x}^{\prime \prime}\right\|<\liminf \limits_{k \rightarrow \infty} \left\|\boldsymbol{x}_{n_k}-\hat{\boldsymbol{x}}\right\|= \\ &\ \ \ \ \lim \limits_{k \rightarrow \infty}\left\|\boldsymbol{x}_{n_{k}}-\hat{\boldsymbol{x}}\right\|=\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\hat{\boldsymbol{x}}\right\| \end{aligned} $ (42)

There is a contradiction in Eq.(42). As a result, $ \boldsymbol{x_{n}} \rightharpoonup \boldsymbol{x}^{\prime \prime}$. Given

$ \lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{y}_{n}\right\|=\lim \limits_{n \rightarrow \infty}\left\|\boldsymbol{x}_{n}-\boldsymbol{z}_{n}\right\|=0 $

there is $\boldsymbol{y}_{n} \rightharpoonup \boldsymbol{x}^{\prime \prime} $ and $\boldsymbol{z}_{n} \rightharpoonup \boldsymbol{x}^{\prime \prime} $.

2.2 Strong Convergence

In this framework, inspired by Refs.[10], [12] and [23], Algorithm 2.1 was improved by using a modified version of Halpern iteration. To study the algorithm's strong convergence, some assumptions are added.

Condition (B):

(B1) βn∈(0, 1), $ \lim \limits_{n \rightarrow \infty} \beta_{n}=0, \text { and } \sum\limits_{n=0}^{\infty} \beta_{n}=\infty$;

(B2) γn∈[a, b]⊂(0, 1);

(B3) εn=o(βn), i.e., $ \lim \limits_{n \rightarrow \infty}\left(\varepsilon_{n} / \beta_{n}\right)=0$.

Next, the algorithm is described in detail.

Algorithm 2.2

Step 0   Let x0, x1X, μ∈(0, 1), λ1 > 0. Select a non-negative real sequence {pn} satisfying $\sum\limits_{n=0}^{\infty} p_{n}<+\infty $.

Step 1   With xn-1 and xn known, choose αn with 0≤αnα′n, where

$ \alpha^{\prime}{ }_{n}=\left\{\begin{array}{l} \min \left\{\frac{\varepsilon_{n}}{\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|}, \alpha\right\}, \text { if } \boldsymbol{x}_{n-1}-\boldsymbol{x}_{n} \neq 0 \\ \alpha, \quad \text { else } \end{array}\right. $

Step 2   Take wn=xn-αn(xn-1-xn) and calculate

$ \begin{aligned} \boldsymbol{y}_{n}=& \arg \min \left\{\lambda_{n} f\left(\boldsymbol{w}_{n}, \boldsymbol{y}\right)+\frac{1}{2}\left\|\boldsymbol{w}_{n}-\boldsymbol{y}\right\|^{2}, \boldsymbol{y} \in \boldsymbol{X}\right\}=\\ & \operatorname{prox}_{\lambda_{n} f\left(\boldsymbol{w}_{n}, \cdot\right)}^{X}\left(\boldsymbol{w}_{n}\right) \end{aligned} $

Step 3   Pick $\boldsymbol{v}_{n} \in \partial_{2} f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)$ such that

$ \boldsymbol{w}_{n}-\lambda_{n} \boldsymbol{v}_{n}-\boldsymbol{y}_{n} \in \boldsymbol{N}_{X}\left(\boldsymbol{y}_{n}\right) $

compute

$ \begin{aligned} z_{n}=& \arg \min \left\{\lambda_{n} f\left(\boldsymbol{y}_{n}, \boldsymbol{z}\right)+\frac{1}{2}\left\|\boldsymbol{w}_{n}-\boldsymbol{z}\right\|^{2}, \boldsymbol{z} \in \boldsymbol{T}_{n}\right\}=\\ & \operatorname{prox}_{\lambda_{n} f\left(\boldsymbol{y}_{n}, \cdot\right)}^{T_{n}}\left(\boldsymbol{w}_{n}\right) \end{aligned} $

where

$ \boldsymbol{T}_{n}=\left\{\boldsymbol{x} \in \boldsymbol{H} \mid\left\langle\boldsymbol{w}_{n}-\lambda_{n} \boldsymbol{v}_{n}-\boldsymbol{y}_{n}, \boldsymbol{y}_{n}-\boldsymbol{x}\right\rangle \geqslant 0\right\} $

Step 4   Formulate

$ \boldsymbol{x}_{n+1}=\gamma_{n} T\left(\beta_{n} \boldsymbol{x}_{0}+\left(1-\beta_{n}\right) \boldsymbol{z}_{n}\right)+\left(1-\gamma_{n}\right) \boldsymbol{x}_{n} $

and

$ \begin{aligned} &\lambda_{n+1}= \\ &\left\{\begin{array}{l} \min \left\{\frac{\mu\left(\left\|\boldsymbol{w}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{y}_{n}-\boldsymbol{z}_{n}\right\|^{2}\right)}{2\left(f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)\right)}, \lambda_{n}+p_{n}\right\}, \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text { if } f\left(\boldsymbol{w}_{n}, \boldsymbol{z}_{n}\right)-f\left(\boldsymbol{w}_{n}, \boldsymbol{y}_{n}\right)-f\left(\boldsymbol{y}_{n}, \boldsymbol{z}_{n}\right)>0 \\ \lambda_{n}+p_{n}, \quad \text { otherwise } \end{array}\right. \end{aligned} $

Take n: =n+1 and transfer to Step 1.
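The adaptive rule for λn+1 in Step 4 can be written as a small routine. The following is a minimal sketch (function name ours), assuming the bifunction f is supplied as a Python callable and the iterates wn, yn, zn are NumPy vectors:

```python
import numpy as np

def next_step_size(lam, p, mu, f, w, y, z):
    """One update of lambda_{n+1} from Step 4 of Algorithm 2.2 (a sketch).

    lam is lambda_n, p is the summable perturbation p_n, mu is in (0, 1),
    and w, y, z stand for the iterates w_n, y_n, z_n.
    """
    denom = f(w, z) - f(w, y) - f(y, z)
    if denom > 0:
        num = mu * (np.linalg.norm(w - y) ** 2 + np.linalg.norm(y - z) ** 2)
        return min(num / (2.0 * denom), lam + p)
    # Lipschitz-type quantity is non-positive: enlarge the step by p_n
    return lam + p
```

Since {pn} is summable, the step size remains bounded and does not require knowledge of the Lipschitz-type constants of f.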

Remark 2.2   Apparently, it holds that

$ \lim \limits_{n \rightarrow \infty} \frac{\alpha_{n}}{\beta_{n}}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|=0 $

Actually, the following expression can be easily obtained:

$ \alpha_{n}\left\|\boldsymbol{x}_{n}-\boldsymbol{x}_{n-1}\right\| \leqslant \varepsilon_{n} $

From the above formula and hypothesis (B3), it is directly deduced that

$ \frac{\alpha_{n}}{\beta_{n}}\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\| \leqslant \frac{\varepsilon_{n}}{\beta_{n}} \rightarrow 0 $

Theorem 2.2   Let conditions (A) and (B) hold and Λ≠Ø. Let the sequence {xn} be generated by Algorithm 2.2. Then {xn} converges strongly to $\breve{\boldsymbol{x}}=P_{\boldsymbol{\varLambda}}\left(\boldsymbol{x}_{0}\right) $.

Proof   Please refer to Refs.[12] and [22] for the detailed proof of the theorem.

3 Research on Variational Inequalities

This section elaborates on applications of the above outcomes to the fixed point problem and variational inequalities. Define f(a, b)=〈F(a), b-a〉 for all b, aX, where F: XX is a nonlinear operator. This special case of EP reduces to the variational inequality problem (VIP), which consists in seeking $\breve{\boldsymbol{x}}$X such that

$ \langle F(\breve{\boldsymbol{x}}), \boldsymbol{z}-\breve{\boldsymbol{x}}\rangle \geqslant 0, \forall \boldsymbol{z} \in \boldsymbol{X} $ (43)

where VI(X, F) represents the solution set of problem (43). Here F is called

● pseudo-monotone over X:

$ \langle \boldsymbol{F}(\boldsymbol{d}), \boldsymbol{e}-\boldsymbol{d}\rangle \geqslant 0\Rightarrow \langle \boldsymbol{F}(\boldsymbol{e}), \boldsymbol{e}-\boldsymbol{d}\rangle \geqslant 0, \quad \forall \boldsymbol{d}, \boldsymbol{e}\in \boldsymbol{X} $

L-Lipschitz continuous over X:

$ \begin{align} & \exists L>0, \text{ s}\text{.t}\text{. }\left\| F(\boldsymbol{d})-F(\boldsymbol{e}) \right\|\leqslant L\left\| \boldsymbol{d}-\boldsymbol{e} \right\|, \\ & \forall \boldsymbol{d}, \boldsymbol{e}\in \boldsymbol{X} \\ \end{align} $

Make the following presumptions about VIP:

(B1) F is pseudo-monotone over X;

(B2) F is sequentially weakly continuous over X: for any sequence {zn}⊂X weakly converging to z, {F(zn)} weakly converges to F(z);

(B3) F is L-Lipschitz continuous over X.

Obviously, f(y, z)=〈F(y), z-y〉 makes conditions (A1)-(A3) hold, and condition (A4) holds with Lipschitz-type constants $l_{1}=l_{2}=\frac{L}{2} $ for f. According to the given forms of yn and f, the following is derived:

$ \begin{aligned} \boldsymbol{y}_{n}&=\arg \min \left\{\lambda_{n} f\left(\boldsymbol{w}_{n}, \boldsymbol{y}\right)+\frac{1}{2}\left\|\boldsymbol{w}_{n}-\boldsymbol{y}\right\|^{2}, \boldsymbol{y} \in \boldsymbol{X}\right\}= \\ &\arg \min \left\{\lambda_{n}\left\langle F\left(\boldsymbol{w}_{n}\right), \boldsymbol{y}-\boldsymbol{w}_{n}\right\rangle+\frac{1}{2}\left\|\boldsymbol{w}_{n}-\boldsymbol{y}\right\|^{2}, \boldsymbol{y} \in \boldsymbol{X}\right\}= \\ &\arg \min \left\{\frac{1}{2}\left\|\boldsymbol{y}-\left(\boldsymbol{w}_{n}-\lambda_{n} F\left(\boldsymbol{w}_{n}\right)\right)\right\|^{2}-\frac{\lambda_{n}^{2}}{2}\left\|F\left(\boldsymbol{w}_{n}\right)\right\|^{2}, \boldsymbol{y} \in \boldsymbol{X}\right\}= \\ &P_{X}\left(\boldsymbol{w}_{n}-\lambda_{n} F\left(\boldsymbol{w}_{n}\right)\right) \end{aligned} $

Similarly, zn in the proposed algorithms reduces to

$ \boldsymbol{z}_{n}=\boldsymbol{P}_{T_{n}}\left(\boldsymbol{w}_{n}-\lambda_{n} F\left(\boldsymbol{y}_{n}\right)\right) $

First, the following conclusions are obtained by adapting the proof of Algorithm 2 in Ref.[23].

Theorem 3.1   Let the sequences {αn} and {βn} satisfy 0≤αnαn+1α < $\frac{1}{3} $ and 0 < ββn$\frac{1}{2} $. Suppose that Condition (A) holds and Fix(T)∩VI(X, F)≠Ø. Take μ∈(0, 1) and λ1 > 0. Choose a non-negative real sequence {pn} such that $\sum\limits_{n=0}^{\infty} p_{n}<+\infty $. Select arbitrary points x0= x1X and define a sequence {xn}⊂H by:

$ \boldsymbol{d}_{n}=\boldsymbol{x}_{n}-\alpha_{n}\left(\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right) $
$ \boldsymbol{y}_{n}=P_{X}\left(\boldsymbol{d}_{n}-\lambda_{n} F\left(\boldsymbol{d}_{n}\right)\right) $
$ \boldsymbol{z}_{n}=P_{\boldsymbol{T}_{n}}\left(\boldsymbol{d}_{n}-\lambda_{n} F\left(\boldsymbol{y}_{n}\right)\right) $
$ \boldsymbol{T}_{n}=\left\{\boldsymbol{x} \in \boldsymbol{H} \mid\left\langle\lambda_{n} F\left(\boldsymbol{d}_{n}\right)+\boldsymbol{y}_{n}-\boldsymbol{d}_{n}, \boldsymbol{y}_{n}-\boldsymbol{x}\right\rangle \leqslant 0\right\} $
$ \boldsymbol{x}_{n+1}=\left(1-\beta_{n}\right) \boldsymbol{d}_{n}+\beta_{n} T \boldsymbol{z}_{n} $
$ \lambda_{n+1}=\left\{\begin{array}{l} \min \left\{\frac{\mu\left(\left\|\boldsymbol{d}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)}{2\left\langle F\left(\boldsymbol{d}_{n}\right)-F\left(\boldsymbol{y}_{n}\right), \boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\rangle}, \lambda_{n}+p_{n}\right\}, \\ \ \ \ \ \ \ \ \ \ \ \text { if }\ \ \left\langle F\left(\boldsymbol{d}_{n}\right)-F\left(\boldsymbol{y}_{n}\right), \boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\rangle>0 \\ \lambda_{n}+p_{n}, \text { otherwise } \end{array}\right. $

Then the sequence {xn} weakly converges to a point of Fix(T)∩VI(X, F).
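In the variational inequality case the proximal steps collapse to projections, so the scheme of Theorem 3.1 is directly implementable. The following is a minimal sketch, not the authors' implementation: the helper names (`F`, `proj_X`, `T`) are ours, the defaults are the parameter values used in Section 4, and the half-space projection follows the closed formula of Remark 3.1:

```python
import numpy as np

def inertial_sub_extragradient(F, proj_X, T, x0, x1, lam=0.12, mu=0.6,
                               alpha=0.32, beta=0.5, tol=1e-3, max_iter=1000):
    """Sketch of the iteration in Theorem 3.1.

    F: the operator, proj_X: projection onto X, T: quasi-nonexpansive map,
    x0, x1: starting points; p_n = 1/(n+1)^10 as in Section 4.
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        d = x - alpha * (x_prev - x)                 # inertial point d_n
        y = proj_X(d - lam * F(d))                   # y_n = P_X(d_n - lam_n F(d_n))
        e = d - lam * F(d) - y                       # normal vector of half-space T_n
        u = d - lam * F(y)
        ee = np.dot(e, e)                            # z_n = P_{T_n}(d_n - lam_n F(y_n))
        z = u - max(np.dot(e, u - y) / ee, 0.0) * e if ee > 0 else u
        x_new = (1 - beta) * d + beta * T(z)         # Mann-type step
        denom = 2.0 * np.dot(F(d) - F(y), z - y)     # adaptive step-size update
        num = mu * (np.linalg.norm(d - y) ** 2 + np.linalg.norm(z - y) ** 2)
        p = 1.0 / (n + 1) ** 10
        lam = min(num / denom, lam + p) if denom > 0 else lam + p
        if np.linalg.norm(x_new - x) <= tol:         # stopping rule of Section 4
            return x_new
        x_prev, x = x, x_new
    return x
```

For instance, with F(x)=x, X=H, and Tx=x/2, the unique point of Fix(T)∩VI(X, F) is the origin, and the iterates approach it.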

Then, a strong convergence scheme is presented. Its proof is similar to that in Ref.[24] and is thus omitted.

Theorem 3.2   Let conditions (A) and (B) hold and Fix(T)∩VI(X, F)≠Ø. Let μ∈(0, 1) and λ1 > 0. Choose a non-negative real sequence {pn} such that $\sum\limits_{n=0}^{\infty} p_{n}<+\infty $. Take arbitrary points x0=x1X. A sequence {xn}⊂H is formed as follows:

Choose αn with 0≤αn$\bar{\alpha}_{n}$, where

$ \bar{\alpha}_{n}=\left\{\begin{array}{l} \min \left\{\frac{\varepsilon_{n}}{\left\|\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right\|}, \alpha\right\}, \text { if } \boldsymbol{x}_{n-1} \neq \boldsymbol{x}_{n} \\ \alpha, \text { otherwise } \end{array}\right. $
$ \boldsymbol{d}_{n}=\boldsymbol{x}_{n}-\alpha_{n}\left(\boldsymbol{x}_{n-1}-\boldsymbol{x}_{n}\right) $
$ \boldsymbol{y}_{n}=P_{X}\left(\boldsymbol{d}_{n}-\lambda_{n} F\left(\boldsymbol{d}_{n}\right)\right) $
$ \boldsymbol{z}_{n}=P_{T_{n}}\left(\boldsymbol{d}_{n}-\lambda_{n} F\left(\boldsymbol{y}_{n}\right)\right) $
$ \boldsymbol{T}_{n}=\left\{\boldsymbol{x} \in H \mid\left\langle\lambda_{n} F\left(\boldsymbol{d}_{n}\right)+\boldsymbol{y}_{n}-\boldsymbol{d}_{n}, \boldsymbol{y}_{n}-\boldsymbol{x}\right\rangle \leqslant 0\right\} $
$ \boldsymbol{x}_{n+1}=\left(1-\gamma_{n}\right) \boldsymbol{z}_{n}+\gamma_{n} T\left(\left(1-\beta_{n}\right) \boldsymbol{z}_{n}+\beta_{n} \boldsymbol{x}_{0}\right) $
$ \begin{aligned} &\lambda_{n+1}= \\ &\left\{\begin{array}{l} \min \left\{\frac{\mu\left(\left\|\boldsymbol{d}_{n}-\boldsymbol{y}_{n}\right\|^{2}+\left\|\boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\|^{2}\right)}{2\left\langle F\left(\boldsymbol{d}_{n}\right)-F\left(\boldsymbol{y}_{n}\right), \boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\rangle}, \lambda_{n}+p_{n}\right\}, \\ \ \ \ \ \ \ \ \ \ \ \ \ \text { if }\ \ \left\langle F\left(\boldsymbol{d}_{n}\right)-F\left(\boldsymbol{y}_{n}\right), \boldsymbol{z}_{n}-\boldsymbol{y}_{n}\right\rangle>0 \\ \lambda_{n}+p_{n}, \quad \text { otherwise } \end{array}\right. \end{aligned} $

Then the sequence {xn} strongly converges to a point x, where x=PFix(T)∩VI(X, F)(x0).

Remark 3.1   For convenience, the algorithms given by Theorem 3.1 and Theorem 3.2 are denoted as Algorithm 3.1 and Algorithm 3.2, respectively. They are direct applications of Algorithm 2.1 and Algorithm 2.2 to variational inequalities. Moreover, it is worth noting that the formula for computing the projection onto the half-space Tn[25] is stated as

$ {P_{{\mathit{\boldsymbol{T}}_n}}}(\mathit{\boldsymbol{d}}) = \mathit{\boldsymbol{d}} - \max \left\{ {\frac{{\langle \mathit{\boldsymbol{e}}, \mathit{\boldsymbol{d}} - \mathit{\boldsymbol{x}}\rangle }}{{{{\left\| \mathit{\boldsymbol{e}} \right\|}^2}}}, 0} \right\}\mathit{\boldsymbol{e}} $

where e=wn-λnF(wn)-yn and x=yn.
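This closed-form projection can be coded directly; a minimal NumPy sketch (function name ours), with e and y playing the roles of e = wn-λnF(wn)-yn and x = yn:

```python
import numpy as np

def proj_T(d, e, y):
    """Projection of d onto the half-space T_n = {x : <e, y - x> >= 0},
    following the formula of Remark 3.1."""
    t = max(np.dot(e, d - y) / np.dot(e, e), 0.0)
    # t = 0 when d already lies in the half-space, so d is returned unchanged
    return d - t * e
```

The max with zero is what makes the map the identity on Tn, so no case distinction is needed.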

4 Numerical Experiments

The proposed algorithms are compared with other algorithms in numerical experiments, and their effectiveness is illustrated in this section. For Algorithm 2.1 and Algorithm 3.1, take μ=0.6, λ1=0.12, αn=0.32, βn=0.5, and pn=1/(n+1)10. For all the tests, the number of iterations (Iter.) and the computation time (Time), measured in seconds, are recorded. In particular, ‖xn-xn+1‖≤ε with ε=10-3 is adopted as the termination criterion.

Problem Ⅰ   The first experiment focuses on the equilibrium problem. Let f: H×HR be given by f(y, z)=〈z-y, Ky+Mz+d〉, where each entry of dRm is randomly generated from [-5, 5], and M and K-M are symmetric positive semi-definite m×m matrices. The feasible set is

$ \boldsymbol{X}=\left\{\boldsymbol{x} \in {\bf{R}}^{m}:-2 \leqslant x_{i} \leqslant 2, i=1, \cdots, m\right\} $

For Algorithm 2.1, take Tx=x/2. For Algorithm 3.1 in Ref.[14], λ=1/2l1, $\alpha_{n}=0.99(\sqrt{5}-2) $, and βn=0.5 are used. In Algorithm 1 of Ref.[26], λ=1/2l1 is chosen. x0=x1=(1, …, 1) is taken for all algorithms. As can be seen from Table 1, three sets of random data are produced according to distinct choices of K, d, and M.

Table 1 Numerical results of problem Ⅰ with starting dot (1, 1, ..., 1)

Problem Ⅱ   Consider the HpHard problem, where the feasible set is X=R+n. Let G(x)=Ex+d with d=0 and E=BBT+J+K, where B, J, KRn×n. Each entry of matrix B and of the skew-symmetric matrix J is randomly selected from [-2, 2], and each diagonal entry of the diagonal matrix K is uniformly drawn from (0, 2). For all tests, x0=x1=(1, …, 1) is taken. For Algorithm 2.1, Tx=-x/2 is used. In Algorithm 4.3 of Ref.[27], γ=1.99, λ=0.9/‖M‖, and αn=1/(13t+2) are chosen. For Algorithm 3.1 in Ref.[28], μ=0.5, λ1=0.7, and αn=0.15 are taken. The results reported in Table 2 reflect the effectiveness of the proposed algorithms.
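The HpHard data can be generated as follows; this is a sketch under our own assumption that the skew-symmetric J is built as A-AT (so its entries fall in the stated range), with the function name ours:

```python
import numpy as np

def make_hphard(n, seed=0):
    """Generate E = B B^T + J + K for Problem II (a sketch)."""
    rng = np.random.default_rng(seed)
    B = rng.uniform(-2.0, 2.0, size=(n, n))
    A = rng.uniform(-1.0, 1.0, size=(n, n))
    J = A - A.T                                   # skew-symmetric, entries in [-2, 2]
    K = np.diag(rng.uniform(0.0, 2.0, size=n))    # positive diagonal
    return B @ B.T + J + K
```

Since BBT is positive semi-definite, K is positive definite, and the skew-symmetric J contributes nothing to the quadratic form, the operator G(x)=Ex is monotone, as the problem requires.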

Table 2 Numerical results of problem Ⅱ with starting dot (1, 1, ..., 1)

In conclusion, compared with Algorithm 4.3 of Ref.[27] and Algorithm 3.1 of Ref.[28], the proposed algorithms have better performance.

5 Conclusions

With regard to the fixed point problem and the equilibrium problem, two new algorithms were proposed. Under appropriate conditions, the convergence of the algorithms was established. In particular, applications to variational inequalities were also studied. The performance of the proposed algorithms was demonstrated by the numerical results, which show that the algorithms are effective.

References
[1] Fan K. A Minimax Inequality and Applications. Inequalities Ⅲ. New York: Academic Press, 1972: 103-113.
[2] Muu L D, Oettli W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal, 1992, 18(12): 1159-1166. DOI:10.1016/0362-546X(92)90159-C
[3] Li W, Xiao Y B, Huang N J, et al. A class of differential inverse quasi-variational inequalities in finite dimensional spaces. Journal of Nonlinear Sciences and Applications, 2017, 10: 4532-4543. DOI:10.22436/jnsa.010.08.45
[4] Wang Y M, Xiao Y B, Wang X, et al. Equivalence of well-posedness between systems of hemivariational inequalities and inclusion problems. Journal of Nonlinear Sciences and Applications, 2016, 9(3): 1178-1192. DOI:10.22436/jnsa.009.03.44
[5] Konnov I V. Application of the proximal point method to nonmonotone equilibrium problems. Journal of Optimization Theory and Applications, 2003, 119: 317-333. DOI:10.1023/B:JOTA.0000005448.12716.24
[6] Hieu D V. Convergence analysis of a new algorithm for strongly pseudomontone equilibrium problems. Numerical Algorithms, 2018, 77: 983-1001. DOI:10.1007/s11075-017-0350-9
[7] Flam S D, Antipin A S. Equilibrium programming using proximal-like algorithms. Mathematical Programming, 1996, 78: 29-41. DOI:10.1007/BF02614504
[8] Quoc Tran D, Le Dung M, Nguyen V H. Extragradient algorithms extended to equilibrium problems. Optimization, 2008, 57(6): 749-776. DOI:10.1080/02331930601122876
[9] Hieu D V, Cho Y J, Xiao Y B. Modified extragradient algorithms for solving equilibrium problems. Optimization, 2018, 67(11): 2003-2029. DOI:10.1080/02331934.2018.1505886
[10] Hieu D V. Halpern subgradient extragradient method extended to equilibrium problems. RACSAM, 2017, 111: 823-840. DOI:10.1007/s13398-016-0328-9
[11] Dadashi V, Iyiola O S, Shehu Y. The subgradient extragradient method for pseudomonotone equilibrium problems. Optimization, 2020, 69(4): 901-923. DOI:10.1080/02331934.2019.1625899
[12] Vinh N T, Muu L D. Inertial extragradient algorithms for solving equilibrium problems. Acta Mathematica Vietnamica, 2019, 44: 639-663. DOI:10.1007/s40306-019-00338-1
[13] Hieu D V, Strodiot J J, Muu L D. Strongly convergent algorithms by using new adaptive regularization parameter for equilibrium problems. Journal of Computational and Applied Mathematics, 2020, 376: 112844. DOI:10.1016/j.cam.2020.112844
[14] Rehman H U, Kumam P, Abubakar A B, et al. The extragradient algorithm with inertial effects extended to equilibrium problems. Computational and Applied Mathematics, 2020, 39: 100. DOI:10.1007/s40314-020-1093-0
[15] Maingé P E, Moudafi A. Convergence of new inertial proximal methods for DC programming. SIAM Journal on Optimization, 2008, 19(1): 397-413. DOI:10.1137/060655183
[16] Alvarez F, Attouch H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal, 2001, 9: 3-11. DOI:10.1023/A:1011253113155
[17] Polyak B T. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics & Mathematical Physics, 1964, 4(5): 1-17. DOI:10.1016/0041-5553(64)90137-5
[18] Tiel J V. Convex Analysis: An Introductory Text. New York: John Wiley & Sons Ltd., 1984: 97.
[19] Bauschke H H, Combettes P L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. New York: Springer, 2011: 399-413. DOI:10.1007/978-1-4419-9467-7
[20] Saejung S, Yotkaew P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal, 2012, 75: 742-750. DOI:10.1016/j.na.2011.09.005
[21] Yamada I, Ogura N. Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numerical Functional Analysis and Optimization, 2004, 25: 619-655. DOI:10.1081/NFA-200045815
[22] Yang J, Liu H W. The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optimization Letters, 2020, 14: 1803-1816. DOI:10.1007/s11590-019-01474-1
[23] Tian M, Tong M Y. Self-adaptive subgradient extragradient method with inertial modification for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems. Journal of Inequalities and Applications, 2019, 2019: 7. DOI:10.1186/s13660-019-1958-1
[24] Rapeepan K, Satit S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert space. Journal of Optimization Theory and Applications, 2014, 163: 399-412. DOI:10.1007/s10957-013-0494-2
[25] Cegielski A. Iterative Methods for Fixed Point Problems in Hilbert Spaces. Berlin: Springer, 2013: 133. DOI:10.1007/978-3-642-30901-4
[26] Tran D Q, Dung M L, Nguyen V H. Extragradient algorithms extended to equilibrium problems. Optimization, 2008, 57(6): 749-776. DOI:10.1080/02331930601122876
[27] Shehu Y, Dong Q L, Jiang D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization, 2019, 68(1): 385-409. DOI:10.1080/02331934.2018.1522636
[28] Yang J. Self-adaptive inertial subgradient extragradient algorithm for solving pseudomonotone variational inequalities. Applicable Analysis, 2019. DOI:10.1080/00036811.2019.1634257