Huidong Wang, Shifan He, Chengdong Li and Xiaohong Pan, "Pythagorean Uncertain Linguistic Variable Hamy Mean Operator and Its Application to Multi-attribute Group Decision Making," IEEE/CAA J. Autom. Sinica, vol. 6, no. 2, pp. 527-539, Mar. 2019. doi: 10.1109/JAS.2019.1911408

Pythagorean Uncertain Linguistic Variable Hamy Mean Operator and Its Application to Multi-attribute Group Decision Making

doi: 10.1109/JAS.2019.1911408
Funds:

the National Natural Science Foundation of China 61402260

the National Natural Science Foundation of China 61473176

Taishan Scholar Project of Shandong Province TSQN201812092

  • Pythagorean fuzzy set (PFS) can provide more flexibility than intuitionistic fuzzy set (IFS) for handling uncertain information, and PFS has been increasingly used in multi-attribute decision making problems. This paper proposes a new multi-attribute group decision making method based on the Pythagorean uncertain linguistic variable Hamy mean (PULVHM) operator and the VIKOR method. Firstly, we define operation rules and a new aggregation operator of Pythagorean uncertain linguistic variables (PULVs) and explore some properties of the operator. Secondly, taking the decision makers' hesitation degree into account, a new score function is defined, and we further develop a new group decision making approach integrated with the VIKOR method. Finally, an investment example is presented to demonstrate the validity of the proposed method. Sensitivity analysis and comprehensive comparisons with two other methods are performed to show the stability and advantage of our method.

     

  • The multi-attribute group decision making (MAGDM) problem concerns the selection of the best alternative(s) under multiple attribute values given by several experts, and has been widely applied in the fields of supplier selection [1], [2], transportation engineering [3], [4], risk assessment [5]-[7], etc. [8], [9]. In multi-attribute decision making (MADM) problems, linguistic variables are very convenient for describing evaluation intentions. For example, decision makers can use "Very low", "Low", "Fair", "High", and "Very high" to evaluate the investment amount. With the increasing complexity and uncertainty of the decision making environment, uncertain linguistic variables are used to describe evaluation information, which can more faithfully express the decision makers' intention. For example, a decision maker may give an evaluation of the investment amount between "Fair" and "High", meaning that it is superior to "Fair" but inferior to "High". Consequently, research on uncertain linguistic variables in MAGDM is of significant value.

    Intuitionistic fuzzy set (IFS) was first proposed by Atanassov [10] to express uncertainty. It involves a membership degree and a non-membership degree, and has attracted much attention from researchers [11]-[14]. However, IFS has the limitation that the sum of the membership degree and non-membership degree must not exceed 1, which cannot fully express the ideas of the decision makers. Therefore, Yager extended the IFS and proposed the Pythagorean fuzzy set (PFS) [15], which overcomes this constraint of IFS and provides more flexibility than IFS for handling uncertain information. For example, suppose the degrees of membership and non-membership given by a decision maker are 0.8 and 0.4, respectively. Obviously, this case cannot be handled by IFS, but it can be expressed by a PFS. Therefore, PFS has recently received increasing attention [16]-[22]. Zhang and Xu [16] defined the operation rules of PFS, and extended the TOPSIS method with PFSs. By combining PFSs with hesitant fuzzy sets (HFSs), Liang and Xu [17] proposed a new concept of hesitant Pythagorean fuzzy sets (HPFSs) and extended the TOPSIS method with HPFSs. Zhang [18] extended PFSs to interval-valued Pythagorean fuzzy sets (IVPFSs) and introduced the basic operations of interval-valued Pythagorean fuzzy numbers (IVPFNs). In addition, they developed a closeness index-based Pythagorean fuzzy QUALIFLEX method to deal with hierarchical multi-criteria decision making (MCDM) problems. Wang et al. [19] proposed a new bi-directional projection model to handle uncertain linguistic variables. Compared with the projection model, the bi-directional projection model can effectively obtain the ranking order even when alternatives lie on the perpendicular bisector of the ideal alternatives. Ren et al. [20] introduced an extended TODIM method to select the governor of the Asian Infrastructure Investment Bank with Pythagorean fuzzy information.
A new Pythagorean fuzzy stochastic MCDM method was proposed by Peng and Dai [21] based on prospect theory and regret theory. Xue et al. [22] defined the concept of entropy of PFSs and extended the LINMAP method. Combining Pythagorean fuzzy sets with linguistic variables, Peng and Yang [23] proposed Pythagorean fuzzy linguistic sets (PFLSs) and defined the operation rules of PFLSs. Since uncertain linguistic variables are more convenient for expressing uncertain information than linguistic variables, Liu et al. [24] proposed the definition of Pythagorean fuzzy uncertain linguistic sets (PFULSs) based on PFLSs and uncertain linguistic variables, and extended the VIKOR method with PFULSs.

    The VIKOR method was initially proposed by Opricovic [25]; it can obtain a set of compromise solutions when criteria conflict with each other. The compromise solution provides a balance between the maximum group utility value and the minimum individual regret value. Recently, the VIKOR method has been widely used to deal with different types of MADM problems. Wu et al. [26] proposed an extended VIKOR method under the uncertain linguistic environment to solve the supplier selection problem of the nuclear power industry in China. To select the best green supplier development program, Awasthi and Kannan [27] presented a fuzzy NGT-VIKOR method. Chen [28] developed a new VIKOR-based method, and applied it to solve service quality and Internet stock performance evaluation problems. The traditional VIKOR method can effectively express decision makers' behavior, but it cannot describe the interaction relationships among attributes.

    For MAGDM problems, an information aggregation operator is a common method used to aggregate multi-dimensional individual information into a comprehensive value [29]-[34]. However, some information aggregation operators, such as the OWA operator [34], do not take the interaction relationship into account. In fact, attributes generally interact with each other.

    As discussed previously, it is natural for decision makers to express evaluation information with linguistic terms in realistic MAGDM problems. In the face of the complexity and uncertainty of real decision making problems, the Pythagorean uncertain linguistic variable has proved to be a powerful and effective tool for enhancing the expression of uncertain information.

    To overcome the aforementioned drawbacks of the VIKOR method and the OWA operator, we propose a new approach for MAGDM problems with Pythagorean uncertain linguistic variables. The Hamy mean (HM) operator [35], a useful tool for coping with the interaction relationships between the aggregated arguments, is introduced for the aggregation of experts' evaluations. We extend the HM operator to the field of Pythagorean uncertain linguistic variables, and propose a new powerful information aggregation operator, named the Pythagorean uncertain linguistic variable Hamy mean (PULVHM) operator. Furthermore, inspired by the score function of Pythagorean fuzzy numbers (PFNs), we develop a new score function of Pythagorean uncertain linguistic variables (PULVs), named the PULVSF, which is used to transform PULVs into crisp numbers. Finally, on the basis of the proposed PULVHM operator and PULVSF, we integrate the VIKOR method in the final decision making stage to obtain the ranking order of all the optional alternatives.

    The rest of this paper is organized as follows. Section Ⅱ presents some basic definitions of linguistic variables, Pythagorean fuzzy sets, and the Hamy mean operator. In Section Ⅲ, we propose the PULVHM operator. A new score function of PULVs is provided in Section Ⅳ and the MAGDM procedures are listed in Section Ⅴ. In Section Ⅵ, the effectiveness of the proposed method is demonstrated by a practical MAGDM problem; comparative analysis and sensitivity analysis are conducted subsequently. Finally, Section Ⅶ concludes the paper.

    In this section, some basic concepts of the ULV, PFS, and HM operator are given, and some of their properties are introduced.

    Definition 1 [36], [37]: Let $S=\{s_i|i=0, 1, \ldots, 2z\}$ be a discrete linguistic term set, where $s_i$ denotes an evaluation value of a linguistic variable. We call $s=[\underline{s}_\alpha, \bar{s}_\beta]$ an uncertain linguistic variable (ULV), where $\underline{s}_\alpha, \bar{s}_\beta\in S$ and $0\leq\alpha\leq\beta$; $\underline{s}_\alpha$ and $\bar{s}_\beta$ denote the lower bound and the upper bound, respectively.

    To preserve all the given information, Xu [38] extended the discrete term set to a continuous linguistic term set $\tilde{S}=\{s_i|s_1 < s_i\leq s_{2z}, i\in[1, 2z]\}$.

    Let $s_1=[\underline{s}_{\alpha_1}, \bar{s}_{\beta_1}]$ and $s_2=[\underline{s}_{\alpha_2}, \bar{s}_{\beta_2}]$ be two arbitrary ULVs; then we have [38]:

    1) $s_1\oplus s_2=[\underline{s}_{\alpha_1}, \bar{s}_{\beta_1}]\oplus[\underline{s}_{\alpha_2}, \bar{s}_{\beta_2}]=[\underline{s}_{\alpha_1+\alpha_2}, \bar{s}_{\beta_1+\beta_2}]$

    2) $s_1\otimes s_2=[\underline{s}_{\alpha_1}, \bar{s}_{\beta_1}]\otimes[\underline{s}_{\alpha_2}, \bar{s}_{\beta_2}]=[\underline{s}_{\alpha_1\times\alpha_2}, \bar{s}_{\beta_1\times\beta_2}]$

    3) $\lambda s=\lambda[\underline{s}_\alpha, \bar{s}_{\beta}]=[\underline{s}_{\lambda\cdot\alpha}, \overline{s}_{\lambda\cdot\beta}], \lambda\geq0$

    4) $(s)^{\lambda}=[\underline{s}_\alpha, \bar{s}_{\beta}]^{\lambda}=[\underline{s}_{(\alpha)^{\lambda}}, \bar{s}_{(\beta)^{\lambda}}], \lambda\geq0$.
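The four rules above act purely on the linguistic subscripts, so they are easy to check in code. The following is a minimal sketch; modeling a ULV as a subscript pair `(alpha, beta)` is our own encoding, not notation from the paper:

```python
# A ULV [s_alpha, s_beta] is modeled as the subscript pair (alpha, beta),
# with alpha <= beta (this tuple encoding is an illustrative assumption).
def ulv_add(s1, s2):
    """Rule 1): subscripts add."""
    return (s1[0] + s2[0], s1[1] + s2[1])

def ulv_mul(s1, s2):
    """Rule 2): subscripts multiply."""
    return (s1[0] * s2[0], s1[1] * s2[1])

def ulv_scale(lam, s):
    """Rule 3): scalar multiple, lambda >= 0."""
    return (lam * s[0], lam * s[1])

def ulv_pow(s, lam):
    """Rule 4): power, lambda >= 0."""
    return (s[0] ** lam, s[1] ** lam)
```

Note that all four rules preserve the ordering of the bounds, so the result is again a valid ULV subscript pair.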

    Definition 2 [16]: Let $X$ be a fixed set. A PFS in $X$ takes the form of:

    $$P=\{\langle x, P(u_P(x), v_P(x))\rangle\,|\,x\in X\}$$

    where the functions $u_p(x): X\rightarrow[0, 1]$ and $v_p(x): X\rightarrow[0, 1]$ denote the membership function and non-membership function of $x\in X$, respectively, with $u_p^2(x)+v_p^2(x)\leq 1$, and $\pi_{p} \left(x \right)=\sqrt{1-u_{p}^{2} \left(x \right)-v_{p}^{2} \left(x \right)}$ denotes the hesitancy degree of $x\in X$.

    For the sake of simplicity, Zhang and Xu [16] called $P(u_P(x), v_P(x))$ as PFN denoted by $\beta=P(u_{\beta}, v_{\beta})$, where $u_p(x), v_p(x)\in[0, 1]$, $\pi_{p} \left(x \right)=\sqrt{1-u_{p}^{2} \left(x \right)-v_{p}^{2} \left(x \right)}$ and $u_p^2(x)+v_p^2(x)\leq 1$.

    Let $\beta_1=P(u_{\beta_1}, v_{\beta_1})$ and $\beta_2=P(u_{\beta_2}, v_{\beta_2})$ be two PFNs, then the basic operation rules are summarized as follows [16]:

    1) $\beta_{1} \oplus \beta_{2} =P\left({\sqrt{u_{\beta_{1} }^{2} +u_{\beta_{2} }^{2} -u_{\beta_{1} }^{2} u_{\beta_{2} }^{2} }, \nu_{\beta _{1} } \nu_{\beta_{2} } } \right)$

    2) $\beta_{1} \otimes \beta_{2} =P\left({u_{\beta_{1} } u_{\beta_{2} }, \sqrt{\nu_{\beta_{1} }^{2} +\nu_{\beta_{2} }^{2} -\nu_{\beta_{1} }^{2} \nu_{\beta_{2} }^{2} }} \right)$

    3) $k{\kern 1pt}\beta =P\left({\sqrt{1-\left({1-u_{\beta }^{2} } \right)^{k}}, \left({\nu_{\beta } } \right)^{k}} \right), {\kern 1pt}k\geq 0$

    4) $\beta^{k} =P\left({\left({u_{\beta } } \right)^{k}, \sqrt{1-\left({1-\nu_{\beta }^{2} } \right)^{k}}} \right), {\kern 1pt}k\geq 0$.
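The four PFN rules above can be sketched directly in code. A PFN is modeled below as a pair `(u, v)` with $u^2+v^2\leq 1$; the pair encoding and function names are our own illustrative assumptions:

```python
import math

# A PFN P(u, v) is modeled as the pair (u, v), with u**2 + v**2 <= 1.
def pfn_add(b1, b2):
    """Rule 1): the "oplus" operation."""
    (u1, v1), (u2, v2) = b1, b2
    return (math.sqrt(u1**2 + u2**2 - u1**2 * u2**2), v1 * v2)

def pfn_mul(b1, b2):
    """Rule 2): the "otimes" operation."""
    (u1, v1), (u2, v2) = b1, b2
    return (u1 * u2, math.sqrt(v1**2 + v2**2 - v1**2 * v2**2))

def pfn_scale(k, b):
    """Rule 3): scalar multiple, k >= 0."""
    u, v = b
    return (math.sqrt(1 - (1 - u**2)**k), v**k)

def pfn_pow(b, k):
    """Rule 4): power, k >= 0."""
    u, v = b
    return (u**k, math.sqrt(1 - (1 - v**2)**k))
```

One can verify numerically that each rule is closed, i.e., the result again satisfies $u^2+v^2\leq 1$.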

    According to the definition of PFNs, the membership space of PFNs is larger than that of IFNs, as shown in Fig. 1.

    Figure  1.  The space relationship of PFN and IFN membership functions

    Definition 3 [24]: Let $X$ be a fixed set. Then $\tilde{P}=\{\langle x_i|([s_{\alpha_i}, s_{\beta_i}], P(u_p(x_i), v_p(x_i)))\rangle|x_i\in X\}$ denotes the PULVs on $X$, where the functions $u_p(x):X\rightarrow[0, 1]$ and $v_p(x):X\rightarrow[0, 1]$ denote the membership function and non-membership function of $x\in X$, respectively, with $u_p^2(x)+v_p^2(x)\leq 1$.

    Let $\tilde{\alpha_1}=\langle[s_{\alpha_1}, s_{\beta_1}], P(u_p(x_1), v_p(x_1))\rangle$ and $\tilde{\alpha_2}=\langle[s_{\alpha_2}, s_{\beta_2}], P(u_p(x_2), v_p(x_2))\rangle$ be two PULVs, we can obtain the operation rules based on that of ULVs and PFNs as follows:

    1) $\tilde{\alpha}_1\oplus\tilde{\alpha}_2=\left\langle[s_{\alpha_1+\alpha_2}, s_{\beta_1+\beta_2}], P\left(\sqrt{u_p^2(x_1)+u_p^2(x_2)-u_p^2(x_1)u_p^2(x_2)},\ v_p(x_1)v_p(x_2)\right)\right\rangle$

    2) $\tilde{\alpha}_1\otimes\tilde{\alpha}_2=\left\langle[s_{\alpha_1\times\alpha_2}, s_{\beta_1\times\beta_2}], P\left(u_p(x_1)u_p(x_2),\ \sqrt{v_p^2(x_1)+v_p^2(x_2)-v_p^2(x_1)v_p^2(x_2)}\right)\right\rangle$

    3) $\gamma\tilde{\alpha}=\left\langle[s_{\gamma\alpha}, s_{\gamma\beta}], P\left(\sqrt{1-\left(1-u_p^2(x)\right)^{\gamma}},\ \left(v_p(x)\right)^{\gamma}\right)\right\rangle,\ \gamma\geq0$

    4) $\tilde{\alpha}^{\gamma}=\left\langle[s_{\alpha^{\gamma}}, s_{\beta^{\gamma}}], P\left(\left(u_p(x)\right)^{\gamma},\ \sqrt{1-\left(1-v_p^2(x)\right)^{\gamma}}\right)\right\rangle,\ \gamma\geq0$.

    Theorem 1: For any two PULVs $\tilde{\alpha}_1=\langle[s_{\alpha_1}, s_{\beta_1}], $ $P(u_p(x_1), v_p(x_1))\rangle$ and $\tilde{\alpha}_2=\langle[s_{\alpha_2}, s_{\beta_2}], P(u_p(x_2), v_p(x_2))\rangle$, the operations satisfy the following properties:

    1) $\tilde{\alpha}_1\oplus\tilde{\alpha}_2=\tilde{\alpha}_2\oplus\tilde{\alpha}_1$

    2) $\tilde{\alpha}_1\otimes\tilde{\alpha}_2=\tilde{\alpha}_2\otimes\tilde{\alpha}_1$

    3) $\gamma(\tilde{\alpha}_1\oplus\tilde{\alpha}_2)=\gamma\tilde{\alpha}_1\oplus\gamma\tilde{\alpha}_2, \gamma\geq0$

    4) $(\tilde{\alpha})^{\gamma_1+\gamma_2}=\tilde{\alpha}^{\gamma_1}\otimes\tilde{\alpha}^{\gamma_2}, \gamma_1, \gamma_2\geq0$

    5) $\gamma_1\tilde{\alpha}\oplus\gamma_2\tilde{\alpha}=(\gamma_1+\gamma_2)\tilde{\alpha}, \gamma_1, \gamma_2\geq0$

    6) $\tilde{\alpha}_1^{\gamma}\otimes\tilde{\alpha}_2^{\gamma}=(\tilde{\alpha}_1\otimes\tilde{\alpha}_2)^{\gamma}, \gamma\geq0$

    Proof:

    For brevity, write $u_i=u_p(x_i)$ and $v_i=v_p(x_i)$.

    1)
$$\tilde{\alpha}_1\oplus\tilde{\alpha}_2=\left\langle[s_{\alpha_1+\alpha_2}, s_{\beta_1+\beta_2}], P\left(\sqrt{u_1^2+u_2^2-u_1^2u_2^2},\ v_1v_2\right)\right\rangle=\left\langle[s_{\alpha_2+\alpha_1}, s_{\beta_2+\beta_1}], P\left(\sqrt{u_2^2+u_1^2-u_2^2u_1^2},\ v_2v_1\right)\right\rangle=\tilde{\alpha}_2\oplus\tilde{\alpha}_1$$

    2)
$$\tilde{\alpha}_1\otimes\tilde{\alpha}_2=\left\langle[s_{\alpha_1\times\alpha_2}, s_{\beta_1\times\beta_2}], P\left(u_1u_2,\ \sqrt{v_1^2+v_2^2-v_1^2v_2^2}\right)\right\rangle=\left\langle[s_{\alpha_2\times\alpha_1}, s_{\beta_2\times\beta_1}], P\left(u_2u_1,\ \sqrt{v_2^2+v_1^2-v_2^2v_1^2}\right)\right\rangle=\tilde{\alpha}_2\otimes\tilde{\alpha}_1$$

    3)
$$\gamma(\tilde{\alpha}_1\oplus\tilde{\alpha}_2)=\gamma\left\langle[s_{\alpha_1+\alpha_2}, s_{\beta_1+\beta_2}], P\left(\sqrt{u_1^2+u_2^2-u_1^2u_2^2},\ v_1v_2\right)\right\rangle=\left\langle[s_{\gamma\alpha_1+\gamma\alpha_2}, s_{\gamma\beta_1+\gamma\beta_2}], P\left(\sqrt{1-\left(1-u_1^2-u_2^2+u_1^2u_2^2\right)^{\gamma}},\ (v_1v_2)^{\gamma}\right)\right\rangle$$
$$=\left\langle[s_{\gamma\alpha_1+\gamma\alpha_2}, s_{\gamma\beta_1+\gamma\beta_2}], P\left(\sqrt{1-\left(\left(1-u_1^2\right)\left(1-u_2^2\right)\right)^{\gamma}},\ (v_1v_2)^{\gamma}\right)\right\rangle=\gamma\tilde{\alpha}_1\oplus\gamma\tilde{\alpha}_2$$

    4)
$$(\tilde{\alpha})^{\gamma_1+\gamma_2}=\left\langle[s_{\alpha^{\gamma_1+\gamma_2}}, s_{\beta^{\gamma_1+\gamma_2}}], P\left(u^{\gamma_1+\gamma_2},\ \sqrt{1-\left(1-v^2\right)^{\gamma_1+\gamma_2}}\right)\right\rangle=\left\langle[s_{\alpha^{\gamma_1}\times\alpha^{\gamma_2}}, s_{\beta^{\gamma_1}\times\beta^{\gamma_2}}], P\left(u^{\gamma_1}u^{\gamma_2},\ \sqrt{1-\left(1-v^2\right)^{\gamma_1}\left(1-v^2\right)^{\gamma_2}}\right)\right\rangle=\tilde{\alpha}^{\gamma_1}\otimes\tilde{\alpha}^{\gamma_2}$$

    5)
$$\gamma_1\tilde{\alpha}\oplus\gamma_2\tilde{\alpha}=\left\langle[s_{\gamma_1\alpha}, s_{\gamma_1\beta}], P\left(\sqrt{1-\left(1-u^2\right)^{\gamma_1}},\ v^{\gamma_1}\right)\right\rangle\oplus\left\langle[s_{\gamma_2\alpha}, s_{\gamma_2\beta}], P\left(\sqrt{1-\left(1-u^2\right)^{\gamma_2}},\ v^{\gamma_2}\right)\right\rangle$$
$$=\left\langle[s_{(\gamma_1+\gamma_2)\alpha}, s_{(\gamma_1+\gamma_2)\beta}], P\left(\sqrt{1-\left(1-u^2\right)^{\gamma_1+\gamma_2}},\ v^{\gamma_1+\gamma_2}\right)\right\rangle=(\gamma_1+\gamma_2)\tilde{\alpha}$$

    6)
$$\tilde{\alpha}_1^{\gamma}\otimes\tilde{\alpha}_2^{\gamma}=\left\langle[s_{\alpha_1^{\gamma}}, s_{\beta_1^{\gamma}}], P\left(u_1^{\gamma},\ \sqrt{1-\left(1-v_1^2\right)^{\gamma}}\right)\right\rangle\otimes\left\langle[s_{\alpha_2^{\gamma}}, s_{\beta_2^{\gamma}}], P\left(u_2^{\gamma},\ \sqrt{1-\left(1-v_2^2\right)^{\gamma}}\right)\right\rangle$$
$$=\left\langle[s_{(\alpha_1\times\alpha_2)^{\gamma}}, s_{(\beta_1\times\beta_2)^{\gamma}}], P\left((u_1u_2)^{\gamma},\ \sqrt{1-\left(\left(1-v_1^2\right)\left(1-v_2^2\right)\right)^{\gamma}}\right)\right\rangle=(\tilde{\alpha}_1\otimes\tilde{\alpha}_2)^{\gamma}.$$

    Definition 4 [35]: Let $h_j(j=1, 2, \ldots, n)$ be a collection of nonnegative real numbers. $HM^{(p)}$ is called the Hamy mean (HM) operator, which is defined as follows:

    $$HM^{(p)}(\delta_1, \delta_2, \ldots, \delta_n)=\frac{\sum\limits_{1\leq k_1<\cdots<k_p\leq n}\left(\prod\limits_{j=1}^{p}\delta_{k_j}\right)^{\frac{1}{p}}}{\binom{n}{p}} \qquad (1)$$

    where $p(p=1, 2, \ldots, n)$ is the parameter of the HM operator, $(k_1, k_2, \ldots, k_p)$ is a $p$-tuple combination of $(1, 2, \ldots, n)$, and $\binom{n}{p}=C_n^p=\dfrac{n!}{p!(n-p)!}$ is the binomial coefficient.
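The HM operator of (1) is straightforward to compute directly from its definition. The following is a minimal sketch for crisp nonnegative reals (the function name is ours); note that $p=1$ recovers the arithmetic mean and $p=n$ the geometric mean:

```python
import math
from itertools import combinations

def hamy_mean(values, p):
    """HM^{(p)} per eq. (1): the average, over all p-element subsets,
    of the p-th root of the subset product."""
    n = len(values)
    total = sum(math.prod(c) ** (1.0 / p) for c in combinations(values, p))
    return total / math.comb(n, p)
```

For instance, `hamy_mean([1, 2, 4], 1)` is the arithmetic mean 7/3, and `hamy_mean([1, 2, 4], 3)` is the geometric mean 2, consistent with the interaction-capturing interpretation of the parameter $p$.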

    Obviously, the HM operator meets the following properties:

    1) $HM^{(P)}(0, 0, \ldots, 0)=0$, if $\delta_k=0(k=1, 2, \ldots, n)$

    2) $HM^{(P)}(\delta, \delta, \ldots, \delta)=\delta$, if $\delta_k=\delta(k=1, 2, \ldots, n)$

    3) $HM^{(P)}(\delta_1, \delta_2, \ldots, \delta_k)\leq HM^{(P)}(\beta_1, \beta_2, \ldots, \beta_k)$, if $\delta_k\leq\beta_k(k=1, 2, \ldots, n)$

    4) ${\text{min}}_k(\delta_k)\leq HM^{(P)}(\delta_1, \delta_2, \ldots, \delta_k)\leq {\text{max}}_k(\delta_k)$.

    Definition 5: The collection of PULVs is denoted by $\widetilde{P}=\left\{ {\left\langle {x_{i} |\left({\left[ {s_{\alpha _{i} }, s_{\beta_{i} } } \right], P\left({u_{\widetilde{P}} \left({x_{i} } \right), v_{\widetilde{P}} \left({x_{i} } \right)} \right)} \right)} \right\rangle |x_{i} \in X} \right\}$, $i=1, 2, \ldots, n$. Then the PULVHM operator is defined as follows:

    $$PULVHM^{(\varphi)}(\widetilde{q}_1, \widetilde{q}_2, \ldots, \widetilde{q}_n)=\frac{\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\widetilde{q}_{i_j}\right)^{\frac{1}{\varphi}}}{\binom{n}{\varphi}} \qquad (2)$$

    where $\varphi(\varphi=1, 2, \ldots, n)$ is the parameter of the PULVHM operator, $(i_1, i_2, \ldots, i_{\varphi})$ is a $\varphi$-tuple combination of $(1, 2, \ldots, n)$ satisfying $1\leq i_1 < \cdots < i_{\varphi}\leq n$, and $\binom{n}{\varphi}=C_n^{\varphi}=\frac{n!}{\varphi!(n-\varphi)!}$ is the binomial coefficient.

    Theorem 2: The collection of PULVs is denoted as $\widetilde{P}=\left\{ {\left\langle {x_{i} |\left({\left[ {s_{\alpha_{i} }, s_{\beta_{i} } } \right], P\left({u_{\widetilde{P}} \left({x_{i} } \right), v_{\widetilde{P}} \left({x_{i} } \right)} \right)} \right)} \right\rangle |x_{i} \in X} \right\}$, $i=1, 2, \ldots, n$. Based on the operation rules of PULVs, the result of PULVHM is still a PULV; consequently, the PULVHM operator can also be expressed as:

    $$PULVHM^{(\varphi)}(\widetilde{q}_1, \widetilde{q}_2, \ldots, \widetilde{q}_n)=\Bigg\langle\left[s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\alpha_{i_j}\right)^{\frac{1}{\varphi}}},\ s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\beta_{i_j}\right)^{\frac{1}{\varphi}}}\right], P\left[\sqrt{1-\left(\prod\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod\limits_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}},\ \left(\prod\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\prod\limits_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right)^{\frac{1}{\binom{n}{\varphi}}}\right]\Bigg\rangle$$

    Proof:

    According to Theorem 1, we can get

    $$\prod_{j=1}^{\varphi}\widetilde{q}_{i_j}=\left\langle\left[s_{\prod_{j=1}^{\varphi}\alpha_{i_j}},\ s_{\prod_{j=1}^{\varphi}\beta_{i_j}}\right], P\left[\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j}),\ \sqrt{1-\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)}\right]\right\rangle$$
    $$\left(\prod_{j=1}^{\varphi}\widetilde{q}_{i_j}\right)^{\frac{1}{\varphi}}=\left\langle\left[s_{\left(\prod_{j=1}^{\varphi}\alpha_{i_j}\right)^{\frac{1}{\varphi}}},\ s_{\left(\prod_{j=1}^{\varphi}\beta_{i_j}\right)^{\frac{1}{\varphi}}}\right], P\left[\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{1}{\varphi}},\ \sqrt{1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right]\right\rangle$$
    $$\sum_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod_{j=1}^{\varphi}\widetilde{q}_{i_j}\right)^{\frac{1}{\varphi}}=\left\langle\left[s_{\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\alpha_{i_j}\right)^{\frac{1}{\varphi}}},\ s_{\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\beta_{i_j}\right)^{\frac{1}{\varphi}}}\right], P\left[\sqrt{1-\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)},\ \prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right]\right\rangle$$

    Then,

    $$\frac{1}{\binom{n}{\varphi}}\left(\sum_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod_{j=1}^{\varphi}\widetilde{q}_{i_j}\right)^{\frac{1}{\varphi}}\right)=\left\langle\left[s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\alpha_{i_j}\right)^{\frac{1}{\varphi}}},\ s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\beta_{i_j}\right)^{\frac{1}{\varphi}}}\right], P\left[\sqrt{1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}},\ \left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right)^{\frac{1}{\binom{n}{\varphi}}}\right]\right\rangle$$

    Therefore,

    $$PULVHM^{(\varphi)}(\widetilde{q}_1, \widetilde{q}_2, \ldots, \widetilde{q}_n)=\left\langle\left[s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\alpha_{i_j}\right)^{\frac{1}{\varphi}}},\ s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod\limits_{j=1}^{\varphi}\beta_{i_j}\right)^{\frac{1}{\varphi}}}\right], P\left[\sqrt{1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}},\ \left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right)^{\frac{1}{\binom{n}{\varphi}}}\right]\right\rangle$$

    Next, we will prove that the aggregation result of multiple PULVs by PULVHM operator is still a PULV. We already know that $0\leq u_{\tilde{p}}(x_{i_j})\leq 1$, $0\leq v_{\tilde{p}}(x_{i_j})\leq 1$ and $(u_{\tilde{p}}(x_{i_j}))^2+(v_{\tilde{p}}(x_{i_j}))^2\leq 1$, then,

    Since $u_{\tilde{p}}^2(x_{i_j})\leq 1-v_{\tilde{p}}^2(x_{i_j})$, we have $\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\leq\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}$, and therefore
    $$\left(\sqrt{1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}}\right)^2+\left(\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right)^{\frac{1}{\binom{n}{\varphi}}}\right)^2$$
    $$=1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}+\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}$$
    $$\leq 1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}+\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}=1.$$

    Example 1: There are three PULVs:

    $\tilde{p}_1=\langle[s_4, s_5], P(0.8, 0.3)\rangle$, $\tilde{p}_2=\langle[s_3, s_4], P(0.7, 0.5)\rangle$ and $\tilde{p}_3=\langle[s_6, s_7], P(0.9, 0.2)\rangle$, $\varphi=2$ is the parameter of PULVHM operator. Then, we can get:

    $$(\tilde{p}_1\otimes\tilde{p}_3)^{\frac{1}{2}}=\langle[s_{4.90}, s_{5.92}], P(0.85, 0.26)\rangle$$
    $$(\tilde{p}_1\otimes\tilde{p}_2)^{\frac{1}{2}}\oplus(\tilde{p}_1\otimes\tilde{p}_3)^{\frac{1}{2}}\oplus(\tilde{p}_2\otimes\tilde{p}_3)^{\frac{1}{2}}=\langle[s_{12.61}, s_{15.68}], P(0.98, 0.04)\rangle$$

    Therefore, $PULVHM^{(2)}(\tilde{p}_1, \tilde{p}_2, \tilde{p}_3)=\frac{1}{3}\left((\tilde{p}_1\otimes\tilde{p}_2)^{\frac{1}{2}}\oplus(\tilde{p}_1\otimes\tilde{p}_3)^{\frac{1}{2}}\oplus(\tilde{p}_2\otimes\tilde{p}_3)^{\frac{1}{2}}\right)=\langle[s_{4.20}, s_{5.23}], P(0.80, 0.35)\rangle$.
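The aggregation in Example 1 can be reproduced numerically. Below is a sketch of the PULVHM operator built from the PULV operation rules; modeling a PULV as the nested tuple `((alpha, beta), (u, v))` is our own encoding, not notation from the paper:

```python
import math
from functools import reduce
from itertools import combinations

def pulv_mul(p, q):
    # Rule 2): linguistic subscripts multiply; Pythagorean part follows "otimes".
    (a1, b1), (u1, v1) = p
    (a2, b2), (u2, v2) = q
    return ((a1 * a2, b1 * b2),
            (u1 * u2, math.sqrt(v1**2 + v2**2 - v1**2 * v2**2)))

def pulv_add(p, q):
    # Rule 1): subscripts add; Pythagorean part follows "oplus".
    (a1, b1), (u1, v1) = p
    (a2, b2), (u2, v2) = q
    return ((a1 + a2, b1 + b2),
            (math.sqrt(u1**2 + u2**2 - u1**2 * u2**2), v1 * v2))

def pulv_pow(p, g):
    # Rule 4): power.
    (a, b), (u, v) = p
    return ((a**g, b**g), (u**g, math.sqrt(1 - (1 - v**2)**g)))

def pulv_scale(g, p):
    # Rule 3): scalar multiple.
    (a, b), (u, v) = p
    return ((g * a, g * b), (math.sqrt(1 - (1 - u**2)**g), v**g))

def pulvhm(pulvs, phi):
    # Eq. (2): average of the phi-th roots of all phi-wise products.
    terms = [pulv_pow(reduce(pulv_mul, c), 1.0 / phi)
             for c in combinations(pulvs, phi)]
    total = reduce(pulv_add, terms)
    return pulv_scale(1.0 / math.comb(len(pulvs), phi), total)

p1 = ((4, 5), (0.8, 0.3))
p2 = ((3, 4), (0.7, 0.5))
p3 = ((6, 7), (0.9, 0.2))
result = pulvhm([p1, p2, p3], 2)
```

Running this reproduces the intermediate values of Example 1 (e.g., $[s_{4.90}, s_{5.92}]$ with $P(0.85, 0.26)$ for $(\tilde{p}_1\otimes\tilde{p}_3)^{1/2}$) up to rounding.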

    Theorem 3: The PULVHM operator meets the following properties:

    1) $PULVHM^{\left(\phi \right)}\left({0, 0, \ldots, 0} \right)=0$

    2) $PULVHM^{\left(\phi \right)}\left({\widetilde{q}, \widetilde{q}, \ldots, \widetilde{q}} \right)=\widetilde{q}$

    3) $PULVHM^{\left(\phi \right)}\left({\widetilde{q_{1} }, \widetilde{q_{2} }, \ldots, \widetilde{q_{k} }} \right)\leq PULVHM^{\left(\phi \right)}(\widetilde{p_{1} }, \widetilde{p_{2} }, $ $\ldots, \widetilde{p_{k} })$, if $\widetilde{q_{k} }\leq \widetilde{p_{k} }\left({k=1, 2, \ldots, n} \right)$

    4) ${\min} \left({\widetilde{q_{k} }} \right)\leq PULVHM^{\left(\phi \right)}\left({\widetilde{q_{1} }, \widetilde{q_{2} }, \ldots, \widetilde{q_{n} }} \right)\leq {\max }\left({\widetilde{q_{k} }} \right), k=1, 2, \ldots, n$

    Proof:

    1) The result of property 1 follows directly from the basic operation rules.

    2) For any PULV $\tilde{q}=\langle[s_{\alpha}, s_{\beta}], P[u_{\tilde{p}}(x), v_{\tilde{p}}(x)]\rangle$,

    $$PULVHM^{(\varphi)}(\widetilde{q}, \widetilde{q}, \ldots, \widetilde{q})=\left\langle\left[s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\alpha^{\varphi}\right)^{\frac{1}{\varphi}}},\ s_{\frac{1}{\binom{n}{\varphi}}\sum\limits_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\beta^{\varphi}\right)^{\frac{1}{\varphi}}}\right], P\left[\sqrt{1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(u_{\tilde{p}}^{\varphi}(x)\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}},\ \left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\left(1-v_{\tilde{p}}^2(x)\right)^{\varphi}\right)^{\frac{1}{\varphi}}}\right)^{\frac{1}{\binom{n}{\varphi}}}\right]\right\rangle$$
    $$=\left\langle\left[s_{\frac{1}{\binom{n}{\varphi}}\binom{n}{\varphi}\alpha},\ s_{\frac{1}{\binom{n}{\varphi}}\binom{n}{\varphi}\beta}\right], P\left[\sqrt{1-\left(\left(1-u_{\tilde{p}}^2(x)\right)^{\binom{n}{\varphi}}\right)^{\frac{1}{\binom{n}{\varphi}}}},\ \left(v_{\tilde{p}}^{\binom{n}{\varphi}}(x)\right)^{\frac{1}{\binom{n}{\varphi}}}\right]\right\rangle=\langle[s_{\alpha}, s_{\beta}], P[u_{\tilde{p}}(x), v_{\tilde{p}}(x)]\rangle=\widetilde{q}.$$

    3) Let $\tilde{q}_i=\langle[s_{\alpha_i}, s_{\beta_i}], P[u_{\tilde{q}}(x_i), v_{\tilde{q}}(x_i)]\rangle$ and $\tilde{p}_i=\langle[s_{\gamma_i}, s_{\lambda_i}], P[u_{\tilde{p}}(x_i), v_{\tilde{p}}(x_i)]\rangle$ be two groups of PULVs, with $\tilde{q}_i\leq \tilde{p}_i(i=1, 2, \ldots, n)$.

    For any $i$, this means $\alpha_i\leq\gamma_i$, $\beta_i\leq\lambda_i$, $u_{\tilde{q}}(x_i)\leq u_{\tilde{p}}(x_i)$ and $v_{\tilde{q}}(x_i)\geq v_{\tilde{p}}(x_i)$.

    Then, we have $\prod_{j=1}^{\varphi}\alpha_{i_j}\leq\prod_{j=1}^{\varphi}\gamma_{i_j}$, $\prod_{j=1}^{\varphi}\beta_{i_j}\leq\prod_{j=1}^{\varphi}\lambda_{i_j}$

    $$\left(\prod_{j=1}^{\varphi}\alpha_{i_j}\right)^{\frac{1}{\varphi}}\leq\left(\prod_{j=1}^{\varphi}\gamma_{i_j}\right)^{\frac{1}{\varphi}}\ \Rightarrow\ \sum_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod_{j=1}^{\varphi}\alpha_{i_j}\right)^{\frac{1}{\varphi}}\leq\sum_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod_{j=1}^{\varphi}\gamma_{i_j}\right)^{\frac{1}{\varphi}}$$

    Similarly, we can get

    $$\frac{1}{\binom{n}{\varphi}}\sum_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod_{j=1}^{\varphi}\beta_{i_j}\right)^{\frac{1}{\varphi}}\leq\frac{1}{\binom{n}{\varphi}}\sum_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(\prod_{j=1}^{\varphi}\lambda_{i_j}\right)^{\frac{1}{\varphi}}.$$

    From the calculation above, we can know that

    $$\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{q}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\geq\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)$$
    $$\Rightarrow\ \left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{q}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}\geq\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}$$
    $$\Rightarrow\ \sqrt{1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{q}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}}\leq\sqrt{1-\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\left(1-\left(\prod_{j=1}^{\varphi}u_{\tilde{p}}(x_{i_j})\right)^{\frac{2}{\varphi}}\right)\right)^{\frac{1}{\binom{n}{\varphi}}}}$$

    Similarly, we can obtain

    $$\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{q}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right)^{\frac{1}{\binom{n}{\varphi}}}\geq\left(\prod_{1\leq i_1<\cdots<i_{\varphi}\leq n}\sqrt{1-\left(\prod_{j=1}^{\varphi}\left(1-v_{\tilde{p}}^2(x_{i_j})\right)\right)^{\frac{1}{\varphi}}}\right)^{\frac{1}{\binom{n}{\varphi}}}$$

    Accordingly, $PULVHM^{\left(\phi \right)}\left({\widetilde{q_{1} }, \widetilde{q_{2} }, \ldots, \widetilde{q_{k} }} \right)\leq$ $ PULVHM^{\left(\phi \right)}\left({\widetilde{p_{1} }, \widetilde{p_{2} }, \ldots, \widetilde{p_{k} }} \right)$.

    4) For arbitrary PULV, we have:

    $$\min(\widetilde{q}_k)=\widetilde{q}^{-}=\left\langle\left[s_{\min(\alpha_i)}, s_{\min(\beta_i)}\right], P\left[\min\left(u_{\tilde{p}}(x_i)\right), \max\left(v_{\tilde{p}}(x_i)\right)\right]\right\rangle$$

    and

    $$\max(\widetilde{q}_k)=\widetilde{q}^{+}=\left\langle\left[s_{\max(\alpha_i)}, s_{\max(\beta_i)}\right], P\left[\max\left(u_{\tilde{p}}(x_i)\right), \min\left(v_{\tilde{p}}(x_i)\right)\right]\right\rangle$$

    It is easy to see that $s_{\min \left({\alpha_{i} } \right)} \leq s_{\alpha_{i} } \leq s_{\max \left({\alpha_{i} } \right)} $, $\min(u_{\tilde{p}}(x_i))\leq u_{\tilde{p}}(x_i)\leq \max(u_{\tilde{p}}(x_i))$, and $\min(v_{\tilde{p}}(x_i))\leq v_{\tilde{p}}(x_i)\leq \max(v_{\tilde{p}}(x_i))$. Based on properties 2) and 3) of Theorem 3, we can obtain $PULVHM^{\left(\phi \right)}\left({\widetilde{q_{1} }, \widetilde{q_{2} }, \ldots, \widetilde{q_{n} }} \right)\geq PULVHM^{\left(\phi \right)}\left({\widetilde{q}^{-}, \widetilde{q}^{-}, \ldots, \widetilde{q}^{-}} \right)=\widetilde{q}^{-}$ and $PULVHM^{\left(\phi \right)}\left({\widetilde{q_{1} }, \widetilde{q_{2} }, \ldots, \widetilde{q_{n} }} \right)\leq PULVHM^{\left(\phi \right)}\Big(\widetilde{q}^{+}, \widetilde{q}^{+}, $ $\ldots, \widetilde{q}^{+} \Big)=\widetilde{q}^{+}$. Therefore, ${\min }\left({\widetilde{q_{k} }} \right)\leq PULVHM^{\left(\phi \right)}\Big(\widetilde{q_{1} }, \widetilde{q_{2} }, $ $\ldots, \widetilde{q_{n} } \Big)\leq {\max }\left({\widetilde{q_{k} }} \right), k=1, 2, \ldots, n$.

    Zhang proposed the score function [16] and accuracy function [39] to compare the values of PFNs; the definitions are as follows.

    Definition 6 [16]: For arbitrary PFN $\beta=P(u_{\beta}, v_{\beta})$, the score function of $\beta$ is:

    $$score(\beta)=u_{\beta}^2-\nu_{\beta}^2. \qquad (3)$$

    Definition 7 [39]: For arbitrary PFN $\beta=P(u_{\beta}, v_{\beta})$, the accuracy function of $\beta$ is:

    $$H(\beta)=u_{\beta}^2+\nu_{\beta}^2. \qquad (4)$$

    For two PFNs $\beta_{1} =P\left({u_{\beta_{1} }, \nu_{\beta_{1} } } \right)$ and $\beta_{2} =P\left({u_{\beta_{2} }, \nu_{\beta_{2} } } \right)$, the comparison laws of the two PFNs are defined as follows:

    1) If $score(\beta_1)>score(\beta_2)$, then $\beta_1$ is bigger than $\beta_2$, expressed by $\beta_1>\beta_2$.

    2) If $score(\beta_1) < score(\beta_2)$, then $\beta_1$ is smaller than $\beta_2$, expressed by $\beta_1 < \beta_2$.

    3) If $score(\beta_1)=score(\beta_2)$, then $\left\{\begin{array}{ll} H(\beta_1)<H(\beta_2) & \Rightarrow\ \beta_1<\beta_2\\ H(\beta_1)=H(\beta_2) & \Rightarrow\ \beta_1\sim\beta_2\\ H(\beta_1)>H(\beta_2) & \Rightarrow\ \beta_1>\beta_2.\end{array}\right.$

    Based on the score function of Pythagorean fuzzy number [16], a new Pythagorean fuzzy number score function (PFNSF) is proposed, which considers the decision makers' hesitation degree.

    Definition 8: For a PFN $\beta=P(u_{\beta}, v_{\beta})$, the new score function of $\beta$ is:

    $$score(\beta)=2u_{\beta}^2+\frac{\pi_{\beta}^2}{1+\nu_{\beta}^2}-1. \qquad (5)$$

    Theorem 4: The PFNSF $score(\beta)$ satisfies the following properties.

    1) Monotonicity. $score(\beta)$ is a monotonically increasing function of the membership degree $u_{\beta}$ and a monotonically decreasing function of the non-membership degree $\nu_{\beta}$.

    2) Boundedness. The maximum of $score(\beta)$ is 1, while the minimum of $score(\beta)$ is $-1$. That is to say, $score(\beta)\in[-1, 1]$.

    3) Comparison rules. For two PFNs $\beta_{1} =P\left({u_{\beta_{1} }, \nu_{\beta_{1} } } \right)$ and $\beta_{2} =P\left({u_{\beta_{2} }, \nu_{\beta_{2} } } \right)$: a) if $score(\beta_1)>score(\beta_2)$, then $\beta_1>\beta_2$; b) if $score(\beta_1) < score(\beta_2)$, then $\beta_1 < \beta_2$; c) if $score(\beta_1)=score(\beta_2)$, then $\beta_1\sim\beta_2$.

    Proof:

    1) From $\pi_{\beta } {\rm =}\sqrt{1-u_{\beta }^{2} -v_{\beta }^{2} }$, we can obtain $score\left(\beta \right)=2u_{\beta }^{2} +\frac{1-u_{\beta }^{2} -v_{\beta }^{2} }{1+v_{\beta }^{2} }-1$ and

    $$\frac{\partial score(\beta)}{\partial u_{\beta}}=4u_{\beta}-\frac{2u_{\beta}}{1+v_{\beta}^2}=2u_{\beta}\left(2-\frac{1}{1+v_{\beta}^2}\right).$$

    Obviously, $\frac{\partial score\left(\beta \right)}{\partial u_{\beta } }>0$. Then, we know that $score(\beta)$ is a monotonically increasing function of the membership degree $u_{\beta}$.

    $$\frac{\partial score(\beta)}{\partial \nu_{\beta}}=-\frac{2\nu_{\beta}\left(2-u_{\beta}^2\right)}{\left(1+\nu_{\beta}^2\right)^2}.$$

    Obviously, $\frac{\partial score\left(\beta \right)}{\partial v_{\beta } } < 0$. Then, we know that $score(\beta)$ is a monotonically decreasing function of the non-membership degree $\nu_{\beta}$.

    2) According to the monotonicity, $score(\beta)$ is a monotonically increasing function of the membership degree $u_{\beta}$ and a monotonically decreasing function of the non-membership degree $\nu_{\beta}$. Therefore, the maximum of $score(\beta)$ is 1 if and only if $u_{\beta}=1, \nu_{\beta}=0$, and the minimum of $score(\beta)$ is $-1$ if and only if $u_{\beta}=0, \nu_{\beta}=1$.

    Example 2: Consider $\beta_1=P(0.8, 0.4)$ and $\beta_2=P(0.7, 0.1)$. According to Definitions 6 and 7, we can get $score(\beta_1)=0.48$ and $score(\beta_2)=0.48$, i.e., $score(\beta_1)=score(\beta_2)$. We need to further compute the accuracy functions of the two PFNs: $H(\beta_1)=0.8$ and $H(\beta_2)=0.5$; therefore, $\beta_1>\beta_2$.

    According to Definition 8, $score(\beta_1)=0.45$ and $score(\beta_2)=0.48$, and we can directly obtain the comparison result $\beta_1 < \beta_2$ from $score(\beta_1) < score(\beta_2)$.
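The computations in Example 2 can be checked directly. The following is a sketch of Definitions 6-8; the function names are our own:

```python
def score_zx(u, v):
    """Definition 6 (Zhang and Xu): score(beta) = u^2 - v^2."""
    return u**2 - v**2

def accuracy(u, v):
    """Definition 7: H(beta) = u^2 + v^2."""
    return u**2 + v**2

def score_new(u, v):
    """Definition 8, eq. (5): 2u^2 + pi^2/(1 + v^2) - 1."""
    pi_sq = 1 - u**2 - v**2      # squared hesitation degree
    return 2 * u**2 + pi_sq / (1 + v**2) - 1
```

For Example 2, `score_zx` gives 0.48 for both PFNs, so the accuracy function (0.8 vs 0.5) is needed to break the tie, while `score_new` separates them directly: approximately 0.4524 vs 0.4750, i.e., the 0.45 and 0.48 reported above.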

    Remark 1: The proposed score function has the following advantages:

    1) Compared with the score function proposed in [16], we can obtain the final comparison result in just one step, without an additional accuracy function.

    2) The hesitation degree of decision makers can be taken into account.

    Next, we further extend the PFNSF to deal with Pythagorean uncertain linguistic variables and propose a new score function of Pythagorean uncertain linguistic variables, abbreviated as PULVSF for the sake of convenience.

    Definition 9:

    For any PULV $\tilde{\alpha}=\langle[s_{\alpha_i}, s_{\beta_i}], P(u_p(x_i), v_p(x_i))\rangle$, the score function of $\tilde{\alpha}$ is:

    $$score(\tilde{\alpha})=I\left(s_{(\alpha+\beta)/2}\right)\times\left(2u_{\beta}^2+\frac{\pi_{\beta}^2}{1+\nu_{\beta}^2}-1\right) \qquad (6)$$

    where $I(s_{(\alpha+\beta)/2})=(\alpha+\beta)/2$.

    Theorem 5: The PULVSF $score(\tilde{\alpha})$ satisfies the following properties:

    1) Monotonicity. $score(\tilde{\alpha})$ is a monotonically increasing function of the membership degree $u_{\beta}$ and a monotonically decreasing function of the non-membership degree $\nu_{\beta}$.

    2) Comparison rules. For two PULVs $\tilde{\alpha_1}=\langle[s_{\alpha_1}, s_{\beta_1}], P(u_p(x_1), v_p(x_1))\rangle$ and $\tilde{\alpha_2}=\langle[s_{\alpha_2}, s_{\beta_2}], P(u_p(x_2), v_p(x_2))\rangle$, the comparison rules of the PULVs are as follows:

    a) If $score(\tilde{\alpha_1})>score(\tilde{\alpha_2})$, then $\tilde{\alpha_1}>\tilde{\alpha_2}$.

    b) If $score(\tilde{\alpha_1}) < score(\tilde{\alpha_2})$, then $\tilde{\alpha_1} < \tilde{\alpha_2}$.

    c) If $score(\tilde{\alpha_1})=score(\tilde{\alpha_2})$, then $\tilde{\alpha_1}\sim\tilde{\alpha_2}$.

    Based on the PULVHM operator derived in Section Ⅲ and the PULVSF presented in Section Ⅳ, we propose a new MAGDM approach. The framework of the proposed method is depicted in Fig. 2. The proposed method consists of three stages. Firstly, construct the group decision making problem with $m$ alternatives (marked as $A$), $n$ attributes (marked as $C$) and $k$ experts (marked as $E$). Secondly, aggregate all the experts' evaluation information by the PULVHM operator, and further transform the PULVs into crisp numbers based on the PULVSF. Finally, obtain the ranking order of the alternatives by the VIKOR method and select the best alternative(s).

    Figure  2.  Diagram of the proposed method

    Step 1: Decision information input. Define the set of attributes $C=\{c_1, c_2, \ldots, c_n\}$ and the weighting vector of the attributes $w=\{w_1, w_2, \ldots, w_n\}$, which satisfies $0\leq w_i\leq 1(i=1, 2, \ldots, n)$ and $\sum_{i=1}^nw_i=1$, together with a series of alternatives $A=\{a_1, a_2, \ldots, a_m\}$ and the decision makers group $E=\{e_1, e_2, \ldots, e_k\}$. In addition, the attribute values of each alternative are given by PULVs.

    Step 2: Each decision maker provides his/her individual evaluation value with respect to each attribute, then we can get the decision matrix of each expert.

    Step 3: Normalize the decision matrix. Consider a PULV $\tilde{p}_{ij}=\langle[s_{\alpha_{ij}}, s_{\beta_{ij}}], P(u_{\tilde{p}}(x_{ij}), v_{\tilde{p}}(x_{ij}))\rangle$.

    For beneficial attributes, $\tilde{p}_{ij}$ is kept unchanged, i.e., $\tilde{p}_{ij}=\langle[s_{\alpha_{ij}}, s_{\beta_{ij}}], P(u_{\tilde{p}}(x_{ij})$, $v_{\tilde{p}}(x_{ij}))\rangle$. For cost attributes,

    $$\tilde{p}_{ij}=(\tilde{p}_{ij})^{-1}=\left\langle\left[s_{(\beta_{ij})^{-1}}, s_{(\alpha_{ij})^{-1}}\right], P\left(v_{\tilde{p}}(x_{ij}), u_{\tilde{p}}(x_{ij})\right)\right\rangle$$

    where $(\alpha_{ij})^{-1}=l+1-\alpha_{ij}$, $(\beta_{ij})^{-1}=l+1-\beta_{ij}$ and $l$ is the number of language terms.
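The normalization step can be sketched in a few lines; the tuple encoding of a PULV and the function name are our own assumptions:

```python
# Step 3 sketch: a PULV is modeled as ((alpha, beta), (u, v)); l is the
# number of linguistic terms. Cost attributes reverse the linguistic bounds
# (alpha -> l + 1 - alpha) and swap membership with non-membership.
def normalize(pulv, is_cost, l):
    (a, b), (u, v) = pulv
    if not is_cost:
        return pulv                      # beneficial attribute: unchanged
    return ((l + 1 - b, l + 1 - a), (v, u))
```

Note that the reversed upper bound comes from $\beta_{ij}$ and the lower bound from $\alpha_{ij}$, which keeps the lower bound of the normalized ULV no greater than its upper bound.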

    Step 4: Based on the PULVHM operator, we can get the group decision matrix (GDM, as shown in Table Ⅰ), which aggregates each decision maker's evaluation values.

    Table  Ⅰ.  GROUP DECISION MATRIX
    $c_1$ $c_2$ $\ldots$ $c_n$
    $a_1$ $\tilde{p}_{11}$ $\tilde{p}_{12}$ $\ldots$ $\tilde{p}_{1n}$
    $a_2$ $\tilde{p}_{21}$ $\tilde{p}_{22}$ $\ldots$ $\tilde{p}_{2n}$
    $\ldots$ $\ldots$ $\ldots$ $\ldots$ $\ldots$
    $a_m$ $\tilde{p}_{m1}$ $\tilde{p}_{m2}$ $\ldots$ $\tilde{p}_{mn}$

    Step 5: Transform the GDM into the score function matrix (as shown in Table Ⅱ) based on Definition 9.

    Table  Ⅱ.  SCORE FUNCTION MATRIX
    $c_1$ $c_2$ $\ldots$ $c_n$
    $a_1$ $G_{11}$ $G_{12}$ $\ldots$ $G_{1n}$
    $a_2$ $G_{21}$ $G_{22}$ $\ldots$ $G_{2n}$
    $\ldots$ $\ldots$ $\ldots$ $\ldots$ $\ldots$
    $a_m$ $G_{m1}$ $G_{m2}$ $\ldots$ $G_{mn}$

    Step 6: Determine the positive ideal solution and negative ideal solution, respectively.

    Positive ideal solution: $G^+=\{{\text{max}}\, G_{i1}, {\text{max}}\, G_{i2}, \ldots, {\text{max}}\, G_{in}\}$;

    Negative ideal solution: $G^-=\{{\text{min}}\, G_{i1}, {\text{min}}\, G_{i2}, \ldots, {\text{min}}\, G_{in}\}$.

    Step 7: Obtain group utility $S_i$, individual regret $R_i$ and compromise value $Q_i$ via the following equations.

    $$S_i=\sum\limits_{j=1}^{n}w_j\frac{d(G_j^+, G_{ij})}{d(G_j^+, G_j^-)} \tag{7}$$
    $$R_i=\max\limits_{j}\left(w_j\frac{d(G_j^+, G_{ij})}{d(G_j^+, G_j^-)}\right) \tag{8}$$
    $$Q_i=\theta\frac{S_i-S^-}{S^+-S^-}+(1-\theta)\frac{R_i-R^-}{R^+-R^-} \tag{9}$$

    where $S^+=\max_i S_i$, $S^-=\min_i S_i$, $R^+=\max_i R_i$, $R^-=\min_i R_i$. Besides, $\theta$ is the weight of the strategy of maximum group utility and $1-\theta$ is the weight of individual regret. When $\theta>0.5$, decision makers tend to make decisions by the strategy of maximum group utility; when $\theta=0.5$, by consensus, group utility and individual regret are of equal importance; and when $\theta < 0.5$, decisions follow the strategy of individual regret.
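Step 7 (together with the ideal solutions of Step 6) can be sketched as follows. This is a minimal sketch assuming $d(x,y)=|x-y|$ on scalar score values; `vikor` is a hypothetical helper name. Feeding it the rounded score matrix of Table Ⅹ reproduces the ranking of the numerical example, with small rounding differences from the reported $S_i$, $R_i$, $Q_i$ values.

```python
# Hedged sketch of VIKOR Steps 6-7 on a score matrix.
def vikor(scores, weights, theta=0.5):
    """scores[i][j]: score of alternative i on attribute j. Returns (S, R, Q)."""
    n = len(scores[0])
    g_pos = [max(row[j] for row in scores) for j in range(n)]  # G+ (Step 6)
    g_neg = [min(row[j] for row in scores) for j in range(n)]  # G-
    S, R = [], []
    for row in scores:
        # weighted normalized distance to the positive ideal, per attribute
        terms = [weights[j] * (g_pos[j] - row[j]) / (g_pos[j] - g_neg[j])
                 for j in range(n)]
        S.append(sum(terms))   # group utility, (7)
        R.append(max(terms))   # individual regret, (8)
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    Q = [theta * (s - s_min) / (s_max - s_min)
         + (1 - theta) * (r - r_min) / (r_max - r_min)  # compromise value, (9)
         for s, r in zip(S, R)]
    return S, R, Q

# Rounded score matrix of Table X with w = {0.4, 0.3, 0.2, 0.1}
scores = [[2.2, -1.1, 1.1, 2.6],
          [2.1, -2.9, -1.8, 1.9],
          [2.9, -1.1, -1.2, 2.5],
          [0.8, -3.2, -1.5, 1.3]]
S, R, Q = vikor(scores, [0.4, 0.3, 0.2, 0.1])
print(sorted(range(4), key=lambda i: Q[i]))  # → [0, 2, 1, 3], i.e. Q1 < Q3 < Q2 < Q4
```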

    Step 8: Rank the alternatives in ascending order by the values of $S_i$, $R_i$ and $Q_i$. The results are three ranking lists.

    Step 9: Based on the values of $Q_i$, we obtain the ordering $a^{(1)}, a^{(2)}, \ldots, a^{(m)}$, and alternative $a^{(1)}$ is the best solution if it meets the following conditions.

    Condition 1: $Q(a^{(2)})-Q(a^{(1)})\geq\frac{1}{m-1}$;

    Condition 2: The alternative $a^{(1)}$ must also rank best by the value of $S_i$ or $R_i$.

    If the two conditions cannot be satisfied simultaneously, a compromise solution can still be obtained in the following situations:

    If Condition 2 is not satisfied, then alternatives $a^{(1)}$ and $a^{(2)}$ are both the compromise solutions;

    If Condition 1 is not satisfied, the alternatives $a^{(1)}, a^{(2)}, \ldots, a^{(d)}$ are all compromise solutions, where $a^{(d)}$ satisfies $Q(a^{(d)})-Q(a^{(1)}) < \frac{1}{m-1}$.
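The acceptance test of Steps 8-9 can be sketched as below. This minimal sketch only encodes Condition 1 and the Condition-1 fallback; Condition 2 (checking $a^{(1)}$ against the $S_i$ or $R_i$ ranking) is left as a comment, and `compromise_set` is a hypothetical helper name.

```python
# Hedged sketch of the Step 9 acceptance test on the compromise values Q.
def compromise_set(Q):
    """Return indices of compromise alternatives from the ascending-Q ordering."""
    m = len(Q)
    order = sorted(range(m), key=lambda i: Q[i])   # a^(1), a^(2), ..., a^(m)
    dq = 1.0 / (m - 1)
    if Q[order[1]] - Q[order[0]] >= dq:
        # Condition 1 holds; Condition 2 (best rank by S or R) would still
        # need to be verified before declaring order[0] the unique solution.
        return [order[0]]
    # Condition 1 fails: all a^(d) with Q(a^(d)) - Q(a^(1)) < 1/(m-1)
    return [i for i in order if Q[i] - Q[order[0]] < dq]

# Q values from Step 5 of the numerical example (theta = 0.5)
print(compromise_set([0, 0.539, 0.051, 1]))  # → [0, 2], i.e. a1 and a3
```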

    In this section, a numerical example is used to illustrate the effectiveness of the proposed method.

    Risk investment project evaluation is based on the information obtained by the venture capital company: scientific means and methods are used to analyze and evaluate the quality of a project carefully, and whether to invest in the project is decided in light of factors such as its potential investment return and risk. An investment company plans to invest in one of the following industries: $a_1$, the high-technology industry; $a_2$, the tourism industry; $a_3$, the financial industry; and $a_4$, the manufacturing industry. After comprehensive consideration, the main factors taken into account are: $c_1$, investment rewards; $c_2$, investment risks; $c_3$, investment capital; and $c_4$, market prospects. Three experts in this area, denoted by the group $E=\{e_1, e_2, e_3\}$, provide consulting services for the company; their evaluations are shown in Tables Ⅲ-Ⅴ. The attribute set is $C=\{c_1, c_2, c_3, c_4\}$, and $w=\{0.4, 0.3, 0.2, 0.1\}$ is the weight vector of the corresponding attributes.

    Table  Ⅲ.  DECISION MATRIX OF EXPERT 1
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ $\langle[s_5, s_6], P(0.7, 0.3)\rangle$ $\langle[s_6, s_7], P(0.8, 0.3)\rangle$ $\langle[s_6, s_7], P(0.9, 0.2)\rangle$ $\langle[s_4, s_6], P(0.6, 0.5)\rangle$
    $a_2$ $\langle[s_4, s_5], P(0.8, 0.3)\rangle$ $\langle[s_3, s_4], P(0.7, 0.5)\rangle$ $\langle[s_3, s_5], P(0.5, 0.5)\rangle$ $\langle[s_2, s_3], P(0.7, 0.4)\rangle$
    $a_3$ $\langle[s_6, s_7], P(0.6, 0.5)\rangle$ $\langle[s_6, s_7], P(0.7, 0.2)\rangle$ $\langle[s_4, s_6], P(0.7, 0.4)\rangle$ $\langle[s_5, s_6], P(0.8, 0.3)\rangle$
    $a_4$ $\langle[s_3, s_4], P(0.7, 0.4)\rangle$ $\langle[s_1, s_2], P(0.8, 0.2)\rangle$ $\langle[s_2, s_4], P(0.6, 0.3)\rangle$ $\langle[s_2, s_3], P(0.7, 0.4)\rangle$
    Table  Ⅳ.  DECISION MATRIX OF EXPERT 2
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ $\langle[s_6, s_7], P(0.8, 0.2)\rangle$ $\langle[s_4, s_5], P(0.7, 0.4)\rangle$ $\langle[s_5, s_7], P(0.6, 0.5)\rangle$ $\langle[s_6, s_7], P(0.9, 0.2)\rangle$
    $a_2$ $\langle[s_5, s_6], P(0.7, 0.4)\rangle$ $\langle[s_4, s_5], P(0.6, 0.3)\rangle$ $\langle[s_5, s_6], P(0.8, 0.2)\rangle$ $\langle[s_5, s_7], P(0.6, 0.4)\rangle$
    $a_3$ $\langle[s_5, s_7], P(0.8, 0.3)\rangle$ $\langle[s_5, s_6], P(0.8, 0.4)\rangle$ $\langle[s_5, s_6], P(0.7, 0.4)\rangle$ $\langle[s_6, s_7], P(0.8, 0.3)\rangle$
    $a_4$ $\langle[s_4, s_6], P(0.6, 0.5)\rangle$ $\langle[s_3, s_4], P(0.7, 0.4)\rangle$ $\langle[s_4, s_5], P(0.7, 0.4)\rangle$ $\langle[s_4, s_5], P(0.8, 0.3)\rangle$
    Table  Ⅴ.  DECISION MATRIX OF EXPERT 3
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ $\langle[s_4, s_6], P(0.7, 0.4)\rangle$ $\langle[s_5, s_6], P(0.8, 0.4)\rangle$ $\langle[s_5, s_7], P(0.6, 0.4)\rangle$ $\langle[s_5, s_7], P(0.8, 0.3)\rangle$
    $a_2$ $\langle[s_4, s_5], P(0.8, 0.4)\rangle$ $\langle[s_5, s_6], P(0.7, 0.5)\rangle$ $\langle[s_4, s_6], P(0.8, 0.4)\rangle$ $\langle[s_4, s_5], P(0.7, 0.3)\rangle$
    $a_3$ $\langle[s_5, s_6], P(0.7, 0.3)\rangle$ $\langle[s_6, s_7], P(0.8, 0.3)\rangle$ $\langle[s_4, s_6], P(0.6, 0.5)\rangle$ $\langle[s_4, s_6], P(0.7, 0.4)\rangle$
    $a_4$ $\langle[s_4, s_5], P(0.7, 0.4)\rangle$ $\langle[s_3, s_4], P(0.7, 0.3)\rangle$ $\langle[s_4, s_5], P(0.6, 0.3)\rangle$ $\langle[s_3, s_4], P(0.8, 0.4)\rangle$

    Step 1: Standardize the decision matrices. $c_1$ and $c_4$ are beneficial attributes, whereas $c_2$ and $c_3$ are cost attributes. The standardized expert decision matrices are shown in Tables Ⅵ-Ⅷ.

    Table  Ⅵ.  STANDARD DECISION MATRIX OF EXPERT 1
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ $\langle[s_5, s_6], P(0.7, 0.3)\rangle$ $\langle[s_1, s_2], P(0.3, 0.8)\rangle$ $\langle[s_1, s_2], P(0.2, 0.9)\rangle$ $\langle[s_4, s_6], P(0.6, 0.5)\rangle$
    $a_2$ $\langle[s_4, s_5], P(0.8, 0.3)\rangle$ $\langle[s_4, s_5], P(0.5, 0.7)\rangle$ $\langle[s_4, s_6], P(0.5, 0.5)\rangle$ $\langle[s_2, s_3], P(0.7, 0.4)\rangle$
    $a_3$ $\langle[s_6, s_7], P(0.6, 0.5)\rangle$ $\langle[s_1, s_2], P(0.2, 0.7)\rangle$ $\langle[s_3, s_5], P(0.4, 0.7)\rangle$ $\langle[s_5, s_6], P(0.8, 0.3)\rangle$
    $a_4$ $\langle[s_3, s_4], P(0.7, 0.4)\rangle$ $\langle[s_6, s_7], P(0.2, 0.8)\rangle$ $\langle[s_4, s_6], P(0.3, 0.6)\rangle$ $\langle[s_2, s_3], P(0.7, 0.4)\rangle$
    Table  Ⅶ.  STANDARD DECISION MATRIX OF EXPERT 2
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ $\langle[s_6, s_7], P(0.8, 0.2)\rangle$ $\langle[s_3, s_4], P(0.4, 0.7)\rangle$ $\langle[s_1, s_3], P(0.5, 0.6)\rangle$ $\langle[s_6, s_7], P(0.9, 0.2)\rangle$
    $a_2$ $\langle[s_5, s_6], P(0.7, 0.4)\rangle$ $\langle[s_3, s_4], P(0.3, 0.6)\rangle$ $\langle[s_2, s_3], P(0.2, 0.8)\rangle$ $\langle[s_5, s_7], P(0.6, 0.4)\rangle$
    $a_3$ $\langle[s_5, s_7], P(0.8, 0.3)\rangle$ $\langle[s_2, s_3], P(0.4, 0.8)\rangle$ $\langle[s_2, s_3], P(0.4, 0.7)\rangle$ $\langle[s_6, s_7], P(0.8, 0.3)\rangle$
    $a_4$ $\langle[s_4, s_6], P(0.6, 0.5)\rangle$ $\langle[s_4, s_5], P(0.4, 0.7)\rangle$ $\langle[s_3, s_4], P(0.4, 0.7)\rangle$ $\langle[s_4, s_5], P(0.8, 0.3)\rangle$
    Table  Ⅷ.  STANDARD DECISION MATRIX OF EXPERT 3
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ $\langle[s_4, s_6], P(0.7, 0.4)\rangle$ $\langle[s_2, s_3], P(0.4, 0.8)\rangle$ $\langle[s_1, s_3], P(0.4, 0.6)\rangle$ $\langle[s_5, s_7], P(0.8, 0.3)\rangle$
    $a_2$ $\langle[s_4, s_5], P(0.8, 0.4)\rangle$ $\langle[s_2, s_3], P(0.5, 0.7)\rangle$ $\langle[s_2, s_4], P(0.4, 0.8)\rangle$ $\langle[s_4, s_5], P(0.7, 0.3)\rangle$
    $a_3$ $\langle[s_5, s_6], P(0.7, 0.3)\rangle$ $\langle[s_1, s_2], P(0.3, 0.8)\rangle$ $\langle[s_2, s_4], P(0.5, 0.6)\rangle$ $\langle[s_4, s_6], P(0.7, 0.4)\rangle$
    $a_4$ $\langle[s_4, s_5], P(0.7, 0.4)\rangle$ $\langle[s_4, s_5], P(0.3, 0.7)\rangle$ $\langle[s_2, s_3], P(0.3, 0.6)\rangle$ $\langle[s_3, s_4], P(0.8, 0.4)\rangle$

    Step 2: By the PULVHM operator, we obtain the group decision matrix shown in Table Ⅸ.

    Table  Ⅸ.  GROUP DECISION MATRIX
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ $\langle[s_{4.13}, s_{6.32}], P(0.73, 0.30)\rangle$ $\langle[s_{1.87}, s_{2.10}], P(0.37, 0.77)\rangle$ $\langle[s_1, s_{2.63}], P(0.83, 0.23)\rangle$ $\langle[s_{4.94}, s_{6.65}], P(0.77, 0.34)\rangle$
    $a_2$ $\langle[s_{4.31}, s_{5.32}], P(0.76, 0.36)\rangle$ $\langle[s_{2.91}, s_{3.93}], P(0.21, 0.91)\rangle$ $\langle[s_1, s_{2.63}], P(0.83, 0.23)\rangle$ $\langle[s_{3.48}, s_{4.79}], P(0.79, 0.36)\rangle$
    $a_3$ $\langle[s_{5.31}, s_{6.65}], P(0.69, 0.08)\rangle$ $\langle[s_{1.27}, s_{2.29}], P(0.29, 0.77)\rangle$ $\langle[s_{2.29}, s_{3.93}], P(0.43, 0.66)\rangle$ $\langle[s_{4.94}, s_{6.32}], P(0.76, 0.33)\rangle$
    $a_4$ $\langle[s_{3.64}, s_{4.94}], P(0.66, 0.43)\rangle$ $\langle[s_{4.59}, s_{5.61}], P(0.29, 0.77)\rangle$ $\langle[s_{2.91}, s_{4.20}], P(0.33, 0.63)\rangle$ $\langle[s_{2.91}, s_{3.93}], P(0.76, 0.40)\rangle$

    Step 3: Transform the GDM into a score function matrix as shown in Table Ⅹ by the proposed score function.

    Table  Ⅹ.  SCORE FUNCTION MATRIX
    $c_1$ $c_2$ $c_3$ $c_4$
    $a_1$ 2.2 -1.1 1.1 2.6
    $a_2$ 2.1 -2.9 -1.8 1.9
    $a_3$ 2.9 -1.1 -1.2 2.5
    $a_4$ 0.8 -3.2 -1.5 1.3

    Step 4: Determine the positive ideal solution and negative ideal solution, respectively.

    Positive ideal solution: $G^+=\{2.86, -1.11, 1.14, 2.57\}$;

    Negative ideal solution: $G^-=\{0.84, -3.19, -1.75, 1.34\}$

    Step 5: Calculate the group utility $S_i$, individual regret $R_i$ and compromise value $Q_i$, where $\theta=0.5$.

    $$S_1=0.141, \quad S_2=0.667, \quad S_3=0.165, \quad S_4=0.986$$
    $$R_1=0.141, \quad R_2=0.256, \quad R_3=0.160, \quad R_4=0.4$$
    $$Q_1=0, \quad Q_2=0.539, \quad Q_3=0.051, \quad Q_4=1.$$

    Step 6: Rank the alternatives in ascending order by the values of $S_i$, $R_i$ and $Q_i$. We have $Q_1 \prec Q_3 \prec Q_2 \prec Q_4$, $S_1 \prec S_3 \prec S_2 \prec S_4$ and $R_1 \prec R_3 \prec R_2 \prec R_4$; besides, $Q^{(2)}-Q^{(1)}=0.051 < \frac{1}{m-1}=0.33$.

    Therefore, based on the conditions mentioned in Step 9 of Section V, the final ranking order of all alternatives is $a_1\approx a_3 \succ a_2 \succ a_4$; that is to say, $a_1$ and $a_3$ are the best alternatives.

    To analyze the influence of different values of the parameter $\theta$ on the final ranking order, we apply the proposed method to the MAGDM problem of venture capital. The values of $\theta$ vary from 0 to 1 in increments of 0.1, and the sensitivity analysis results are given in Table Ⅺ and Fig. 3.

    Table  Ⅺ.  THE RESULTS WITH DIFFERENT θ VALUES
    $Q_1$ $Q_2$ $Q_3$ $Q_4$ ranking order
    $\theta =$ 0 $Q_1 =$ 0 $Q_2 =$ 0.44 $Q_3 =$ 0.07 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.1 $Q_1 =$ 0 $Q_2 =$ 0.46 $Q_3 =$ 0.07 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta=$ 0.2 $Q_1 =$ 0 $Q_2 =$ 0.48 $Q_3 =$ 0.06 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.3 $Q_1 =$ 0 $Q_2 =$ 0.5 $Q_3 =$ 0.06 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.4 $Q_1 =$ 0 $Q_2 =$ 0.52 $Q_3 =$ 0.06 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.5 $Q_1 =$ 0 $Q_2 =$ 0.54 $Q_3 =$ 0.05 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.6 $Q_1 =$ 0 $Q_2 =$ 0.56 $Q_3 =$ 0.05 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.7 $Q_1 =$ 0 $Q_2 =$ 0.58 $Q_3 =$ 0.04 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.8 $Q_1 =$ 0 $Q_2 =$ 0.6 $Q_3 =$ 0.04 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 0.9 $Q_1 =$ 0 $Q_2 =$ 0.62 $Q_3 =$ 0.03 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    $\theta =$ 1 $Q_1 =$ 0 $Q_2 =$ 0.63 $Q_3 =$ 0.03 $Q_4 =$ 1 $a_1\approx a_3 \succ a_2 \succ a_4$
    Figure  3.  The results with different $\theta$ values

    Based on the sensitivity analysis, we can see that the ranking order of the alternatives is stable for different values of $\theta$. Therefore, the proposed method can effectively reduce noise interference and obtain stable optimal alternative(s).
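The sensitivity sweep can be reproduced from the $S_i$ and $R_i$ values reported in Step 5 (a minimal sketch; small rounding differences from the $Q_i$ entries of Table Ⅺ are expected):

```python
# Hedged sketch of the sensitivity analysis: recompute Q by (9) for
# theta in {0, 0.1, ..., 1} from the reported S_i and R_i values.
S = [0.141, 0.667, 0.165, 0.986]
R = [0.141, 0.256, 0.160, 0.400]
s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)

for k in range(11):
    theta = k / 10
    Q = [theta * (s - s_min) / (s_max - s_min)
         + (1 - theta) * (r - r_min) / (r_max - r_min) for s, r in zip(S, R)]
    order = sorted(range(4), key=lambda i: Q[i])
    # the ranking a1, a3, a2, a4 is stable over the whole sweep
    assert order == [0, 2, 1, 3]
```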

    To verify the validity of our method, another two approaches, namely the VIKOR with PULWAA method [24] and the score function method, are applied to this numerical example. The results of the comparison analysis are shown in Table Ⅻ.

    Table  Ⅻ.  COMPARISON ANALYSIS WITH OTHER METHODS
    Methods Values Order of alternatives
    VIKOR with PULWAA [24] $Q_1=0$, $Q_2=1$, $Q_3=0.19$, $Q_4=0.84$ $a_1\approx a_3 \succ a_4 \succ a_2$
    Score function method $Score(A_1)=1.01$, $Score(A_2)=-0.22$, $Score(A_3)=0.82$, $Score(A_4)=-0.79$ $a_1 \succ a_3 \succ a_2 \succ a_4$
    Our method $Q_1=0$, $Q_2=0.53$, $Q_3=0.05$, $Q_4=1$ $a_1\approx a_3 \succ a_2 \succ a_4$

    It can be observed that the result of our method differs slightly from the results of Liu's method and the score function method. Take the second and the fourth alternatives as an example: the ranking order obtained by Liu's method is $a_4 \succ a_2$, whereas the result of our method is $a_2 \succ a_4$. The reason is that our method takes the interaction relationships among the aggregated arguments into account during the computing process. On the other hand, as for the ranking order of $a_1$ and $a_3$, a distinct ranking is achieved with the score function method, whereas the same result, $a_1\approx a_3$, is obtained by our method and Liu's method, i.e., both are compromise solutions. This kind of difference arises from a feature of the VIKOR method, which can obtain a set of compromise solutions when the criteria conflict with each other.

    It is natural and convenient for decision makers to give evaluations in the form of linguistic terms, which are widely used in realistic decision making applications. This study proposed a new PULVHM operator by combining the merits of the Pythagorean uncertain linguistic variable and the Hamy mean operator. The definition and some useful properties of the PULVHM operator were put forward and proved mathematically. Its effectiveness for the integration of multiple PULVs was proved, and an illustrative example was given in detail. The group decision matrix was obtained by using the PULVHM operator to integrate the evaluation information given by the decision makers. At the same time, a new score function of Pythagorean uncertain linguistic variables, named PULVSF, was developed, and its properties, comparison rules, and an illustrative example were discussed. Based on the proposed PULVHM operator and PULVSF, a new MAGDM approach integrated with the VIKOR method was presented. A numerical study on an investment project selection problem was performed, and the applicability and superiority of the proposed method were verified by sensitivity analysis and comparison analysis with another two methods.

  • [1]
    H. B. Liu, L. Jiang, L. Martínez, "A dynamic multi-criteria decision making model with bipolar linguistic term sets, " Expert Systems With Applications, vol. 95, pp. 104-112, Sep. 2018. https://www.sciencedirect.com/science/article/pii/S0957417417307650
    [2]
    M. K. Ghorabaee, M. Amiri, E. K. Zavadskas, et al, "A new multi-criteria model based on interval type-2 fuzzy sets and EDAS method for supplier evaluation and order allocation with environmental considerations, " Computers & Industrial Engineering, vol.112, pp. 156-174, Aug. 2017. https://www.sciencedirect.com/science/article/abs/pii/S0360835217303753
    [3]
    W. Dong, M. M. Liu, L. S. Wang, et al, "Fault diagnosis for railway turnout control circuit based on group decision making, " Acta Automatica Sinica, vol. 44, no. 6, Jun. 2018. http://www.aas.net.cn/EN/abstract/abstract19290.shtml
    [4]
    E. Celik, E. Akyuz, "An interval type-2 fuzzy AHP and TOPSIS methods for decision-making problems in maritime transportation engineering: The case of ship loader, " Ocean Engineering, vol. 155, no. 1, pp. 371-381, May 2018. https://www.sciencedirect.com/science/article/pii/S0029801818300398
    [5]
    Y. Lin, Y. M. Wang, "Group decision making with consistency of intuitionistic fuzzy preference relations under uncertainty, " IEEE/CAA Journal of Automatica Sinica, vol. 5, no.3, pp. 1-9, May 2018. doi: 10.1109/JAS.2016.7510037
    [6]
    H. D. Wang, X. H. Pan, and S. F. He, "A new interval type-2 fuzzy VIKOR method for multi-attribute decision making, " International Journal of Fuzzy Systems, doi: 10.1007/s40815-018-0527-y, 2018.
    [7]
    L. Wang, Y. M. Wang, L. Martínez, "A group decision method based on prospect theory for emergency situations, " Information Sciences, vol. 418-419, pp. 119-135, Dec. 2017. https://www.sciencedirect.com/science/article/pii/S0020025517308575
    [8]
    T. Wu, X. W. Liu, F. Liu, "An interval type-2 fuzzy TOPSIS model for large scale group decision making problems with social network information, " Information Sciences, vol. 432, pp. 392-410, Dec. 2018. https://www.sciencedirect.com/science/article/pii/S002002551632179X
    [9]
    F. Y. Meng, J. Tang, F. Hamido, "Linguistic intuitionistic fuzzy preference relations and their application to multi-criteria decision making, " Information Fusion, vol. 46, pp. 77-90, Mar. 2019. https://www.sciencedirect.com/science/article/pii/S1566253517306802
    [10]
    K. T. Atanassov, "Intuitionistic fuzzy sets, " Fuzzy Sets and Systems. vol. 20, no. 1, pp. 87-96, Jun. 1986.
    [11]
    F. Shen, X. S. Ma, Z. Y. Li, D. L. Cai, "An extended intuitionistic fuzzy TOPSIS method based on a new distance measure with an application to credit risk evaluation, " Information Science, vol. 428, pp. 105-119, Nov. 2018. https://www.sciencedirect.com/science/article/pii/S0020025516316784
    [12]
    J. D. Qiu, L. Li, "A new approach for multiple attribute group decision making with interval-valued intuitionistic fuzzy information, " Applied Soft Computing, vol. 61, pp. 111-121, Dec. 2017. https://www.sciencedirect.com/science/article/pii/S1568494617304106
    [13]
    Q. D. Qin, F. Q. Liang, Y. M. Chen, G. F. Yu, "A TODIM-based multi-criteria group decision making with triangular intuitionistic fuzzy numbers, " Applied Soft Computing, vol. 55, pp. 93-107, Jun. 2017. https://www.sciencedirect.com/science/article/pii/S156849461730056X
    [14]
    Z. X. Wang, J. Chen, J. B. Lan, "Multi-attribute decision making approach based on intuitionistic uncertain linguistic new aggregation operator, " System Engineering-Theory and Practice, vol. 36, no. 7, pp. 1871-1878, 2016.
    [15]
    R. R. Yager, "Pythagorean membership grades in multicriteria decision making, " IEEE Transactions on Fuzzy Systems, vol. 22, no. 4, pp. 958-965, Aug. 2014. https://ieeexplore.ieee.org/document/6583233/
    [16]
    X. L. Zhang, Z. S. Xu, "Extension of TOPSIS to multiple criteria decision making with pythagorean fuzzy sets, " International Journal of Intelligent Systems, vol. 29, no. 12, pp. 1061-1078, 2014. doi: 10.1002/int.2014.29.issue-12
    [17]
    D. Liang, Z. S. Xu, "The new extension of TOPSIS method for multiple criteria decision making with hesitant Pythagorean fuzzy sets, " Applied Soft Computing, vol. 60, pp. 167-179, Nov. 2017. https://www.sciencedirect.com/science/article/pii/S1568494617303770
    [18]
    X. L. Zhang, "Multicriteria Pythagorean fuzzy decision analysis: A hierarchical QUALIFLEX approach with the closeness index-based ranking methods, " Information Sciences, vol. 330, pp. 104-124, Feb. 2016. https://www.sciencedirect.com/science/article/pii/S0020025515007306
    [19]
    H. D. Wang, S. F. He, X. H. Pan, "A new bi-directional projection model based on Pythagorean uncertain linguistic variable, " Information, vol. 9, p. 104, Apr. 2018.
    [20]
    P. J. Ren, Z. S. Xu, X. J. Gou, "Pythagorean fuzzy TODIM approach to multi-criteria decision making, " Applied Soft Computing, vol. 42, pp. 246-259, May 2016.
    [21]
    X. D. Peng, J. G. Dai, "Approaches to Pythagorean Fuzzy Stochastic Multi-criteria Decision Making Based on Prospect Theory and Regret Theory with New Distance Measure and Score Function, " International Journal of Intelligent Systems, vol. 32, no. 11, pp. 1187-1214, Mar. 2017.
    [22]
    W. T. Xue, Z. S. Xu, X. L. Zhang, X. L. Tian, "Pythagorean Fuzzy LINMAP Method Based on the Entropy Theory for Railway Project Investment Decision Making, " International Journal of Intelligent Systems, vol. 33, no. 1, pp. 93-125, Oct. 2018. doi: 10.1002/int.21941
    [23]
    J. D. Peng, Y. Yang, "Multi-attribute Group Decision Making Method Based on Pythagorean Fuzzy Linguistic Set, " Computer Engineering and Application, vol. 52, no. 23, pp. 50-54, Dec. 2016.
    [24]
    Z. M. Liu, P. D. Liu, W. L. Liu, "An extended VIKOR method based on Pythagorean uncertain linguistic variable, " Control and Decision, vol. 32, no. 12, pp. 2145-2152, 2017.
    [25]
    S. Opricovic, "Multi-criteria optimization of civil engineering systems, " Belgrad: Faculty of Civil Engineering. vol. 2, pp. 36-38, Jan. 1998.
    [26]
    Y. Wu, K. Chen, B. X. Zeng, H. Xu, Y. S. Yang, "Supplier selection in nuclear power industry with extended VIKOR method under linguistic information, " Applied Soft Computing, vol. 48, pp. 444-457, Nov. 2016. https://www.sciencedirect.com/science/article/pii/S1568494616303490
    [27]
    A. Awasthi, G. Kannan, "Green supplier development program selection using NGT and VIKOR under fuzzy environment, " Computers and Industrial Engineering, vol. 91, pp. 100-108, Nov. 2016.
    [28]
    T. Y. Chen, "Remoteness index-based Pythagorean fuzzy VIKOR methods with a generalized distance measure for multiple criteria decision analysis, " Information Fusion, vol. 41, pp. 129-150, May. 2018. https://www.sciencedirect.com/science/article/pii/S1566253517300763
    [29]
    R. R. Yager, J. Kacprzyk, "The ordered weighted averaging operators: theory and applications, " Physica, 2012. http://dl.acm.org/citation.cfm?id=267148
    [30]
    Z. S. Xu, Q. L. Da, "An overview of operators for aggregating information, " International Journal of Intelligent Systems, vol. 18, no. 9, pp. 953-969, Sep. 2003.
    [31]
    Z. S. Xu, "An approach based on the uncertain LOWG and induced uncertain LOWG operators to group decision making with uncertain multiplicative linguistic preference relations, " Decision Support System, vol. 41, no. 2, pp. 488-499, 2006. doi: 10.1016/j.dss.2004.08.011
    [32]
    B. Peng, C. Ye, S. Zeng, "Uncertain pure linguistic hybrid harmonic averaging operator and generalized interval aggregation operator based approach to group decision making, " Knowledge-Based Systems, vol. 36, pp. 175-181, Jun. 2012. https://www.sciencedirect.com/science/article/pii/S0950705112001797
    [33]
    Z. S. Xu. Uncertain multiple attribute decision making: methods and applications, New York: Springer, 2015.
    [34]
    Z. S. Xu, "Induced uncertain linguistic OWA operators applied to group decision making, " Information Fusion, vol. 7, no. 2, pp. 231-238, Jun. 2006. https://www.sciencedirect.com/science/article/pii/S1566253504000491
    [35]
    J. D. Qin, "Interval type-2 fuzzy Hamy mean operators and their application in multiple criteria decision making, " Applied Soft Computing, vol. 2, no. 7, pp. 1-21, Apr. 2017.
    [36]
    D. Q. Li, W. Y. Zeng, J. H. Li, "Note on uncertain linguistic Bonferroni mean operators and their application to multiple attribute decision making, " Applied Mathematical Modelling, vol. 39, no. 2, pp. 894-900, Jan. 2015. https://www.sciencedirect.com/science/article/pii/S0307904X14003436
    [37]
    Z. S. Xu, "A method based on linguistic aggregation operators for group decision making with linguistic preference relations, " Information Science, vol. 166, no. 1, pp. 19-30, Oct. 2004.
    [38]
    Z. S. Xu, "Uncertain linguistic aggregation operators based approach to multiple attribute group decision making under uncertain linguistic environment, " Information Science, vol. 168, no. 1, pp. 171-184, Oct. 2004. https://www.sciencedirect.com/science/article/pii/S0020025504000179
    [39]
    X. L. Zhang, "A novel approach based on similarity measure for Pythagorean fuzzy multiple criteria group decision making, " International Journal of Intelligent System, vol. 31, no. 6, pp. 593-611, Nov. 2016. doi: 10.1002/int.21796