
Inequalities with applications to some k-analog random variables

Abstract

In this paper, we introduce some properties of the gamma and beta probability k-distributions. We present some inequalities involving these distributions via classical results such as Chebyshev’s inequality for synchronous (asynchronous) mappings and the Hölder and Grüss integral inequalities. We also discuss inequalities involving the variance, the coefficient of variation and the mean deviation of the said distributions in terms of the parameter \(k> 0\). For \(k=1\), we recover the classical results.

1 Introduction

A process which generates raw data is called an experiment, and an experiment which gives different results under similar conditions, even when repeated a large number of times, is termed a random experiment. A variable whose values are determined by the outcomes of a random experiment is called a random variable, or simply a variate. Random variables are usually denoted by the capital letters X, Y and Z, while the values they assume are denoted by the corresponding small letters x, y and z. Random variables are classified into two classes, namely discrete and continuous random variables.

A random variable that can assume only a finite or countably infinite number of values is known as a discrete random variable, while a variable which can assume each and every value within some interval is called a continuous random variable. The distribution function of a random variable X is denoted by \(F(x)\). A random variable X may also be defined as continuous if its distribution function \(F(x)\) is continuous and differentiable everywhere except at isolated points in the given range. Let the derivative of \(F(x)\) be denoted by \(f(x)\), i.e., \(f(x)=\frac{d}{dx} F(x)\). Since \(F(x)\) is a non-decreasing function of x, we have

$$ f(x)\geq0 \quad \text{and}\quad F(x)= \int_{-\infty}^{x} f(t)\,dt \quad \text{for all } x. $$

Here, the function \(f(x)\) is called the probability density function (p.d.f.), or simply the density function, of the random variable X. A probability density function has the properties (proved in [1–3])

$$ f(x)\geq0 \quad \text{for all } x \quad \text{and}\quad \int _{-\infty}^{\infty}f(x)\,dx = 1. $$

A moment designates the power to which deviations are raised before they are averaged. In statistics, we have three kinds of moments as follows:

(i) The rth moment of the distribution about any value \(x=A\) is the mean of the rth powers of the deviations of the variable from A.

(ii) The rth moment of the distribution about \(x=0\) is the mean of the rth powers of the deviations of the variable from 0.

(iii) The rth moment of the distribution about the mean, i.e., about \(x=\overline {x}\) for a sample and \(x= \mu\) for a population, is the mean of the rth powers of the deviations of the variable from the mean.

The moments about the mean are also called central moments or mean moments and are used to describe a set of data.

Note

The moments about any number \(x=A\) and about \(x=0\) are denoted by \(\mu'_{r}\), while those about the mean are denoted by \(\mu_{r}\); moreover, \(\mu_{0}=\mu'_{0} =1\).

A link between the moments about an arbitrary origin and those about the actual mean of the data is given by the following relations:

$$ \mu_{r}= \binom{r}{0}\mu'_{r} - \binom{r}{1}\mu'_{r-1}\mu'_{1} + \binom {r}{2}\mu'_{r-2}\mu^{\prime2}_{1}- \binom{r}{3}\mu'_{r-3}\mu^{\prime3}_{1}+ \cdots $$
(1)

and conversely, we have

$$ \mu'_{r}= \binom{r}{0}\mu_{r} + \binom{r}{1}\mu_{r-1}\mu'_{1} + \binom {r}{2}\mu_{r-2}\mu^{\prime2}_{1}+ \binom{r}{3}\mu_{r-3}\mu^{\prime3}_{1} +\cdots . $$
(2)
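As a quick numerical illustration of relation (1) (ours, not part of the original text), the following Python sketch converts raw moments into a central moment and compares the result with a direct computation on a small data set; all names are illustrative.

```python
import numpy as np
from math import comb

def central_from_raw(raw):
    """Central moment mu_r from raw moments mu'_0, ..., mu'_r via relation (1)."""
    r = len(raw) - 1
    m1 = raw[1]  # mu'_1, the mean
    return sum((-1) ** j * comb(r, j) * raw[r - j] * m1 ** j for j in range(r + 1))

x = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
raw = [np.mean(x ** i) for i in range(4)]    # mu'_0, ..., mu'_3 about 0
mu3 = np.mean((x - x.mean()) ** 3)           # third central moment, computed directly
print(central_from_raw(raw), mu3)            # the two values agree
```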

Remarks

From the above discussion, we see that the first moment about the mean is always zero, while the second moment about the mean is equal to the variance.

If a random variable X assumes all the values from a to b, then for a continuous distribution, the rth moment about the arbitrary number A and mean μ, respectively, are given by

$$ \mu'_{r}= \int_{a}^{b} (x-A)^{r} f(x)\,dx ;\qquad \mu_{r}= \int_{a}^{b} (x-\mu)^{r} f(x)\,dx. $$

In a random experiment with n outcomes, suppose a variable X assumes the values \(x_{1},\ldots, x_{n}\) with corresponding probabilities \(p_{1},\ldots , p_{n}\); then this collection is called a probability distribution, and \(\sum p_{i}=1\) (in the case of discrete distributions). Also, if \(f(x)\) is a continuous probability density function defined on an interval \([a,b]\), then \(\int_{a}^{b} f(x)\,dx=1\). The expected value of a variate is defined as the first moment of the probability distribution about \(x = 0\), i.e.,

$$ \mu_{1}'= E(X)= \int_{a}^{b} x f(x)\,dx $$
(3)

and the rth moment about mean of the probability distribution is defined as \(E(X-\mu)^{r}\), where μ is the mean of the distribution.

Note

For a discrete probability distribution, all the above results and notations are the same, with the integral sign replaced by the summation sign (∑).

2 \(\Gamma_{k}\) Function and gamma k-distribution

In 2007, Diaz and Pariguan [4] introduced the generalized k-gamma function as

$$ \Gamma_{k}(x) = \lim_{n\to\infty} \frac{n! k^{n}(nk)^{\frac {x}{k}-1}}{(x)_{n,k}},\quad k > 0 , x\in\mathbb{C}\setminus k\mathbb{Z}^{-} $$

and also gave the properties of the said function. \(\Gamma_{k}\) is a one-parameter deformation of the classical gamma function such that \(\Gamma_{k} \rightarrow\Gamma\) as \(k \rightarrow1\). \(\Gamma_{k}\) arises from the repeated appearance of expressions of the following form:

$$ \alpha(\alpha+ k) (\alpha+ 2k) (\alpha+ 3k) \cdots \bigl(\alpha+ (n-1)k \bigr). $$
(4)

The function of the variable α given by expression (4), denoted by \((\alpha)_{n,k}\), is called the Pochhammer k-symbol. Thus, we have

$$ (\alpha)_{n,k} = \begin{cases} \alpha(\alpha+ k)(\alpha+ 2k)(\alpha+ 3k) \cdots (\alpha+ (n-1)k) ,& n \in\mathbb{N}, k>0, \\ 1,& n=0 ,\alpha\neq0. \end{cases} $$

We obtain the usual Pochhammer symbol \((\alpha)_{n}\) by taking \(k=1\). Also, several researchers [5–9] worked on the generalized k-gamma function and discussed the following properties:

$$\begin{aligned}& \Gamma_{k}(x) = \int_{0}^{\infty}t^{x-1}e^{-\frac{t^{k}}{k}}\,dt ,\quad \operatorname{Re}(x) > 0, \end{aligned}$$
(5)
$$\begin{aligned}& \Gamma_{k}(x) = k^{\frac{x}{k}-1} \Gamma\biggl(\frac{x}{k} \biggr), \end{aligned}$$
(6)
$$\begin{aligned}& (x)_{n,k}= \frac{\Gamma_{k}(x+nk)}{\Gamma_{k}(x)}, \end{aligned}$$
(7)
$$\begin{aligned}& \Gamma_{k}(x+k) = x\Gamma_{k}(x), \end{aligned}$$
(8)
$$\begin{aligned}& \Gamma_{k}(k) = 1. \end{aligned}$$
(9)
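Relations (5), (6), (8) and (9) are easy to verify numerically. The following Python sketch is ours (it assumes scipy is available, and the function names are illustrative): it compares the integral form (5) with the closed form (6) and checks the recurrence (8).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_k(x, k):
    """Closed form (6): Gamma_k(x) = k^(x/k - 1) * Gamma(x/k)."""
    return k ** (x / k - 1.0) * gamma(x / k)

def gamma_k_integral(x, k):
    """Integral form (5): Gamma_k(x) = int_0^inf t^(x-1) exp(-t^k/k) dt."""
    return quad(lambda t: t ** (x - 1.0) * np.exp(-t ** k / k), 0.0, np.inf)[0]

x, k = 2.7, 1.5
print(gamma_k(x, k), gamma_k_integral(x, k))   # the two forms agree
print(gamma_k(x + k, k), x * gamma_k(x, k))    # recurrence (8)
print(gamma_k(k, k))                           # relation (9): equals 1
```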

Definition 2.1

A continuous random variable X is said to have a gamma distribution with parameter \(m>0\) if its probability density function is defined by

$$ f(x)= \begin{cases} \frac{1}{\Gamma(m)}x^{m-1}e^{-x},& 0\leq x < \infty, \\ 0, &\text{elsewhere}, \end{cases} $$

and its distribution function \(F(x)\) is defined by

$$ F(x)= \begin{cases} \int_{0}^{x}\frac{1}{\Gamma(m)}z^{m-1}e^{-z}\, dz,& x \geq0, \\ 0, &x< 0, \end{cases} $$

which is also called an incomplete gamma function.

Definition 2.2

A continuous random variable X is said to have a gamma k-distribution with parameters \(m>0\) and \(k>0\) if its probability k-density function (p.k.d.f.) is defined [10] by

$$ f_{k}(x) = \begin{cases} \frac{1}{\Gamma_{k}(m)}x^{m-1}e^{\frac{-x^{k}}{k}},& 0\leq x < \infty, k>0, \\ 0, &\text{elsewhere} , \end{cases} $$

and its k-distribution function \(F_{k}(x)\) is defined by

$$ F_{k}(x) = \begin{cases} \int_{0}^{x}\frac{1}{\Gamma_{k}(m)}z^{m-1}e^{\frac{-z^{k}}{k}}\, dz,& x\geq0, \\ 0,& \text{elsewhere}. \end{cases} $$

Remarks

We can call the above function the incomplete k-gamma function because, for \(k=1\), it reduces to the incomplete gamma function tabulated in [11, 12].
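As a hedged sanity check (our sketch, assuming scipy), the code below implements the p.k.d.f. of Definition 2.2 and verifies by quadrature that it integrates to 1, i.e., that the gamma k-distribution is a proper probability distribution, as Proposition 2.3 below records.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_k(x, k):
    return k ** (x / k - 1.0) * gamma(x / k)   # relation (6)

def pkdf(x, m, k):
    """p.k.d.f. of the gamma k-distribution (Definition 2.2)."""
    return x ** (m - 1.0) * np.exp(-x ** k / k) / gamma_k(m, k)

m, k = 3.2, 2.0
total = quad(lambda x: pkdf(x, m, k), 0.0, np.inf)[0]
print(total)   # approximately 1.0
```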

Proposition 2.3

The gamma k-distribution satisfies the following properties for the parameters \(m>0\) and \(k>0\).

(i) The gamma k-distribution is a proper probability distribution.

(ii) The mean of the gamma k-distribution is equal to the parameter m.

(iii) The variance of the gamma k-distribution is equal to mk.

(iv) The harmonic mean of a \(\Gamma_{k}(m)\) variate in terms of k is \(m-k\) (for \(m>k\)).

Proof

Parts (i), (ii) and (iii) are proved in [10].

(iv) Let X be a \(\Gamma_{k}(m)\) variate, then we have the expected value of \(\frac{1}{X}\), for \(0 < x < \infty\), as

$$\begin{aligned} E_{k}\biggl(\frac{1}{X}\biggr) =&\frac{1}{\Gamma_{k}(m)}\int _{0}^{\infty} \frac {1}{x}x^{m-1}e^{\frac{-x^{k}}{k}} \,dx = \frac{1}{\Gamma_{k}(m)}\int_{0}^{\infty} x^{m-2}e^{\frac{-x^{k}}{k}}\,dx \\ =& \frac{\Gamma_{k}(m-k)}{\Gamma_{k}(m)}= \frac{\Gamma _{k}(m-k)}{(m-k)\Gamma_{k}(m-k)}=\frac{1}{m-k}. \end{aligned}$$

Now, harmonic mean in terms of \(k>0\) is given by

$$ \mathit{HM}_{k} =\frac{1}{E_{k} (\frac{1}{X} )}= m-k. $$

 □

Proposition 2.4

For \(k > 0\), the rth moment about the origin of the gamma k-distribution is

$$ \mu'_{r,k}=m(m+k) (m+2k)\cdots\bigl(m+(r-1)k \bigr)=(m)_{r,k}, $$

where \((m)_{r,k}\) is the Pochhammer k-symbol.

Proof

Using the definition of expected values along with the gamma k-distribution defined above, the rth moment about \(x= 0\) is given by

$$\begin{aligned} \mu'_{r,k} =&E_{k}\bigl(X^{r}\bigr)= \int_{0}^{\infty} x^{r} \frac{1}{\Gamma _{k}(m)}x^{m-1}e^{\frac{-x^{k}}{k}}\,dx \\ =&\frac{1}{\Gamma_{k}(m)} \int_{0}^{\infty} x^{m+r-1} e^{\frac{-x^{k}}{k}}\,dx, \end{aligned}$$
(10)

which implies that

$$\begin{aligned} E_{k}\bigl(X^{r}\bigr) =&\frac{\Gamma_{k}(m+rk)}{\Gamma_{k}(m)}= \frac {(m+(r-1)k)\cdots(m+2k)(m+k)m\, \Gamma_{k}(m)}{\Gamma_{k}(m)} \\ =& m(m+k) (m+2k)\cdots\bigl(m+(r-1)k\bigr). \end{aligned}$$

To prove the second part of Proposition 2.4, just use relation (7). □

Remarks

When \(r=1\), we obtain \(\mu'_{1,k}=m\), the mean; when \(r=2\), \(\mu'_{2,k}=m(m+k)\), and hence \(\mu_{2,k}=\mu'_{2,k}-(\mu'_{1,k})^{2}=mk\), the variance of the gamma k-distribution given in Proposition 2.3.
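The moment formula of Proposition 2.4 is straightforward to evaluate through the Pochhammer k-symbol; the illustrative Python lines below (our names, not the paper’s) recover the mean m and the variance mk quoted above.

```python
def pochhammer_k(m, r, k):
    """Pochhammer k-symbol (m)_{r,k} = m (m + k) (m + 2k) ... (m + (r-1)k)."""
    out = 1.0
    for j in range(r):
        out *= m + j * k
    return out

m, k = 3.2, 2.0
mu1 = pochhammer_k(m, 1, k)    # first raw moment: m, the mean
mu2 = pochhammer_k(m, 2, k)    # second raw moment: m (m + k)
print(mu1, mu2 - mu1 ** 2)     # mean and variance; the variance equals m k
```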

3 Applications to the gamma k-distribution via Chebyshev’s integral inequality

In this section, we prove some inequalities which involve gamma k-distribution by using some natural inequalities [13]. The following result is well known in the literature as Chebyshev’s integral inequality for synchronous (asynchronous) functions. Here, we use this result to prove some k-analog inequalities [14] and some new inequalities.

Lemma 3.1

Let \(f, g, h : I\subseteq\mathbb{R}\rightarrow \mathbb{R} \) be such that \(h(x) \geq0\) for all \(x \in I\) and h, \(hfg\), hf and hg are integrable on I. If f, g are synchronous (asynchronous) on I, i.e.,

$$ \bigl(f(x)-f(y)\bigr) \bigl(g(x)-g(y)\bigr) \geq (\leq)\, 0\quad \textit{for all } x, y \in I, $$

then we have the inequality (see [15, 16])

$$ \int_{I} h(x) \,dx \int_{I} h(x) f(x) g(x) \,dx \geq (\leq)\, \int_{I} h(x)f(x) \,dx \int _{I} h(x) g(x) \,dx. $$
(11)

This lemma can be proved by using Korkine’s identity [17]

$$\begin{aligned}& \int_{I} h(x) \,dx \int_{I} h(x) f(x) g(x) \,dx - \int_{I} h(x)f(x) \,dx \int_{I} h(x) g(x) \,dx \\& \quad = \frac{1}{2}\int_{I} \int _{I} h(x) h(y) \bigl(f(x)-f(y)\bigr) \bigl(g(x)-g(y)\bigr) \,dx \,dy. \end{aligned}$$
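A minimal numerical illustration of Lemma 3.1 (with our own choice of functions): \(f(x)=x\) and \(g(x)=x^{3}\) are both increasing, hence synchronous, on \([0,1]\), so with the weight \(h\equiv1\) inequality (11) should hold with “≥”. The sketch assumes scipy.

```python
from scipy.integrate import quad

f = lambda x: x        # increasing on [0, 1]
g = lambda x: x ** 3   # increasing on [0, 1]; f and g are synchronous
h = lambda x: 1.0      # non-negative weight

I = lambda func: quad(func, 0.0, 1.0)[0]
lhs = I(h) * I(lambda x: h(x) * f(x) * g(x))
rhs = I(lambda x: h(x) * f(x)) * I(lambda x: h(x) * g(x))
print(lhs, rhs, lhs >= rhs)   # 0.2, 0.125, True: inequality (11)
```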

Definition 3.2

Two positive real numbers a and b are said to be similarly (oppositely) unitary if

$$ (a-1) (b-1) \geq (\leq)\, 0. $$

Theorem 3.3

Let \(a, b > 0\) be similarly (oppositely) unitary, let \(k>0\), and let the random variable X be such that \(X \sim \Gamma_{k}(a+b+k-1)\). Further, define the random variables U and V such that \(U \sim\Gamma_{k}(a+k)\) and \(V \sim\Gamma_{k}(b+k)\). Then we have the inequality

$$ \frac{E_{k}(X)^{r}}{E_{k}(U)^{r} E_{k}(V)^{r}} \geq (\leq)\, \frac{\Gamma_{k}(a+k)\Gamma _{k}(b+k)}{\Gamma_{k}(r+k+1)\Gamma_{k}(a+b+k-1)},\quad k>0, r=1,2,\ldots . $$

Proof

For \(k > 0\), consider the mappings \(f, g, h : [0,\infty) \rightarrow[0, \infty)\) defined by

$$ f(t) = t^{a-1} ,\qquad g(t) = t^{b-1}\quad \text{and}\quad h(t) = t^{r+k} e^{-\frac{t^{k}}{k}}. $$

If the condition \((a-1)(b-1) \geq (\leq)\, 0\) holds and \(k > 0\), then clearly the mappings f and g are synchronous (asynchronous) on \([0,\infty)\). Thus, by Chebyshev’s integral inequality along with the functions f, g, and h defined above, we have

$$\begin{aligned}& \int_{0}^{\infty} t^{r+k} e^{-\frac{t^{k}}{k}} \,dt \int_{0}^{\infty} t^{a+b+r+k-1-1} e^{-\frac{t^{k}}{k}}\,dt \\& \quad \geq (\leq)\,\int_{0}^{\infty} t^{a+r+k-1}e^{-\frac{t^{k}}{k}}\,dt \int_{0}^{\infty} t^{b+r+k-1}e^{-\frac{t^{k}}{k}}\,dt. \end{aligned}$$
(12)

From the moments given in Proposition 2.4, using relation (10), we observe

$$\begin{aligned}& E_{k}(X)^{r}\Gamma_{k}(a+b+k-1)= \int _{0}^{\infty} x^{a+b+k-1+r-1} e^{\frac {-x^{k}}{k}}\,dx, \end{aligned}$$
(13)
$$\begin{aligned}& E_{k}(U)^{r}\Gamma_{k}(a+k)= \int _{0}^{\infty} x^{a+k+r-1} e^{\frac {-x^{k}}{k}}\,dx \end{aligned}$$
(14)

and

$$ E_{k}(V)^{r}\Gamma_{k}(b+k)= \int _{0}^{\infty} x^{b+k+r-1} e^{\frac {-x^{k}}{k}}\,dx. $$
(15)

Using relations (13) to (15), for the random variables X, U and V, inequality (12) provides the desired theorem. □
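Inequality (12), the integral core of Theorem 3.3, can be checked directly by quadrature. The sketch below is ours; the parameters are arbitrary subject to \((a-1)(b-1)\geq0\), which puts us in the synchronous case.

```python
import numpy as np
from scipy.integrate import quad

def J(p, k):
    """int_0^inf t^(p-1) exp(-t^k/k) dt, the integral form (5) of Gamma_k(p)."""
    return quad(lambda t: t ** (p - 1.0) * np.exp(-t ** k / k), 0.0, np.inf)[0]

a, b, r, k = 2.0, 3.0, 1, 1.5                        # (a-1)(b-1) >= 0
lhs = J(r + k + 1.0, k) * J(a + b + r + k - 1.0, k)  # left-hand side of (12)
rhs = J(a + r + k, k) * J(b + r + k, k)              # right-hand side of (12)
print(lhs >= rhs)                                    # True, as (12) asserts
```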

Corollary 3.4

From Theorem  3.3, if \(b=a >0\), the condition \((a-1)(b-1) \geq (\leq)\, 0\) reduces to \((a-1)^{2} \geq0\), and we have the inequality

$$ E_{k}(X)^{r} \Gamma_{k}(r+k+1) \Gamma_{k}(2a+k-1) \geq E^{2}_{k}(U)^{r} \Gamma _{k}^{2}(a+k),\quad k>0, r=1,2,\ldots . $$

Theorem 3.5

Let the random variables X and Y be such that \(X \sim\Gamma_{k}(p-q)\) and \(Y \sim\Gamma_{k}(m+q)\) for the real numbers p, q and m with \(p,m >0\) and \(p > q > -m\). Further, let the random variables U and V be such that \(U \sim\Gamma_{k}(p)\) and \(V \sim\Gamma_{k}(m)\). If \(q(p-m-q) \geq(\leq)\, 0\) and k is any positive real number, then we have the inequality

$$ \frac{E_{k}(U)^{r} E_{k}(V)^{r}}{E_{k}(X)^{r}E_{k}(Y)^{r} } \geq(\leq)\, \frac{\Gamma _{k}(p-q)\Gamma_{k}(m+q)}{\Gamma_{k}(p)\Gamma_{k}(m)},\quad r=1,2,\ldots . $$

Proof

For a positive real number k, choose the mappings \(f, g, h : [0,\infty) \rightarrow[0, \infty)\) defined by

$$ f(x) = x^{p-q-m} ,\qquad g(x) = x^{q} \quad \text{and} \quad h(x) = x^{r+m-1} e^{-\frac{x^{k}}{k}}. $$

Now, using the definition of expected values as in equations (13) to (15) along with the mappings defined above, Chebyshev’s integral inequality gives the required proof. □

Corollary 3.6

From Theorem  3.5, if \(m=p >0\), then we have the inequality

$$ E^{2}_{k}(U)^{r} \Gamma^{2}_{k}(p) \leq \Gamma_{k}(p-q)\Gamma_{k}(p+q+r)E_{k}(X)^{r} ,\quad k>0, r=1,2,\ldots . $$

Theorem 3.7

Let the random variables U and Y be such that \(U \sim\Gamma_{k}(p)\) and \(Y \sim\Gamma_{k}(m+q)\) for the real numbers p, q and m with \(p,m >0\) and \(p > q > -m\). Further, if \(q(p-m-q) \geq(\leq)\, 0\), then we have another estimation for the moment ratios of the k-gamma random variables as

$$ \Gamma_{k}(p) \Gamma_{k}(m) E_{k}(U)^{r} \geq(\leq)\, \Gamma_{k}(p-q)\Gamma_{k}(m+q) E_{k}(Y)^{r},\quad k>0, r=1,2,\ldots . $$

Proof

Consider the mappings defined by

$$ f(x) = x^{p-q-m} , \qquad g(x) = x^{r+q} \quad \text{and} \quad h(x) = x^{m-1} e^{-\frac{x^{k}}{k}}. $$

Using the values of \(E_{k}(U)^{r}\) and \(E_{k}(Y)^{r}\), adjusted accordingly as in equations (13) to (15), along with the above choice of mappings, Chebyshev’s integral inequality gives the desired result. □

Now, we discuss some estimations for the expected values of reciprocals which can be used for the harmonic mean of k-gamma random variables.

Theorem 3.8

Let the random variables X and Y be such that \(X \sim\Gamma_{k}(p)\) and \(Y \sim\Gamma_{k}(m+q)\). Then, for \(q(p-m-q) \geq(\leq)\,0\), we have the inequality for gamma k-distribution

$$ \frac{E_{k}(1/X)^{r}}{E_{k}(1/Y)^{r}} \geq(\leq)\, \frac{\Gamma_{k}(p-q)\Gamma _{k}(m+q)}{\Gamma_{k}(p)\Gamma_{k}(m)},\quad k>0, r=1,2,\ldots. $$

Proof

For \(k > 0\), choose the mappings defined by

$$ f(x) = x^{p-q-m} , \qquad g(x) = x^{-r+q}\quad \text{and}\quad h(x) = x^{m-1} e^{-\frac{x^{k}}{k}}. $$

Using these mappings in inequality (11), we get

$$\begin{aligned}& \int_{0}^{\infty}x^{m-1} e^{-\frac{x^{k}}{k}} \,dx\int_{0}^{\infty}x^{p-r-1} e^{-\frac{x^{k}}{k}} \,dx \\& \quad \geq(\leq)\, \int_{0}^{\infty}x^{p-q-1} e^{-\frac{x^{k}}{k}} \,dx \int_{0}^{\infty}x^{m+q-r-1} e^{-\frac{x^{k}}{k}} \,dx. \end{aligned}$$
(16)

From the moments given in Proposition 2.4, using relation (10), we observe

$$ E_{k}(1/X)^{r}\Gamma_{k}(p)= \int _{0}^{\infty} x^{p-r-1} e^{\frac{-x^{k}}{k}}\,dx $$

and

$$ E_{k}(1/Y)^{r}\Gamma_{k}(m+q)= \int _{0}^{\infty} x^{m+q-r-1} e^{\frac{-x^{k}}{k}}\,dx. $$

Using these results in inequality (16), we have the required proof. □
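The integral inequality (16) can also be tested numerically; the sketch below is ours, with parameters for which the mappings f and g above are synchronous (and for which \(q(p-m-q)\geq0\)).

```python
import numpy as np
from scipy.integrate import quad

def J(p, k):
    # int_0^inf x^(p-1) exp(-x^k/k) dx, the integral form (5) of Gamma_k(p)
    return quad(lambda x: x ** (p - 1.0) * np.exp(-x ** k / k), 0.0, np.inf)[0]

p, q, m, r, k = 6.0, 2.0, 1.0, 1, 1.5   # q (p - m - q) >= 0: synchronous case
lhs = J(m, k) * J(p - r, k)              # left-hand side of (16)
rhs = J(p - q, k) * J(m + q - r, k)      # right-hand side of (16)
print(lhs >= rhs)                        # True
```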

In the following theorem, we give an inequality for the estimation of variance of the k-gamma random variable.

Theorem 3.9

Let the random variables X and Y be such that \(X \sim\Gamma_{k}(p)\) and \(Y \sim\Gamma_{k}(m+q)\). Denote the variances of these random variables by \(V_{k}(X)= E_{k}(X^{2})-[E_{k}(X)]^{2}\) and \(V_{k}(Y)= E_{k}(Y^{2})-[E_{k}(Y)]^{2}\), respectively. Then, for \(q(p-m-q) \geq(\leq)\,0\), we have the inequality for gamma k-distribution

$$\begin{aligned}& \Gamma_{k}(p)\Gamma_{k}(m) V_{k}(X) - \Gamma_{k}(p-q)\Gamma_{k}(m+q)V_{k}(Y) \\& \quad \geq(\leq)\, (m+q)\Gamma_{k}(p-q)\Gamma_{k}(m+q+k)-p \Gamma_{k}(m)\Gamma _{k}(p+k),\quad k>0. \end{aligned}$$

Proof

From Theorem 3.7, taking \(r=2\) and rewriting \(E_{k}(\cdot)\) in terms of \(V_{k}(\cdot)\), we obtain

$$ \Gamma_{k}(p) \Gamma_{k}(m) \bigl[V_{k}(X)+ \bigl(E_{k}(X)\bigr)^{2}\bigr] \geq(\leq)\, \Gamma _{k}(p-q)\Gamma_{k}(m+q) \bigl[V_{k}(Y)+ \bigl(E_{k}(Y)\bigr)^{2}\bigr]. $$
(17)

As given in Proposition 2.3, the expected value of a k-gamma variate with parameter m is m, so inequality (17) gives

$$ \Gamma_{k}(p) \Gamma_{k}(m) \bigl[V_{k}(X)+p^{2} \bigr] \geq(\leq)\, \Gamma_{k}(p-q)\Gamma _{k}(m+q) \bigl[V_{k}(Y)+(m+q)^{2}\bigr]. $$

Now, using the property of k-gamma function given in relation (8) and rearranging the terms, we get the required proof. □

Corollary 3.10

Denote the coefficient of variation of the k-gamma random variables X and Y by \(\mathit{CV}_{k}(X)\) and \(\mathit{CV}_{k}(Y)\), respectively, where \(\mathit{CV}_{k}(\cdot)= \frac{\sqrt{V_{k}(\cdot)}}{E_{k}(\cdot )}\). Then, for \(q(p-m-q) \geq(\leq)\, 0\), we have the inequality for gamma k-distribution

$$ \frac{1+\mathit{CV}^{2}_{k}(X)}{1+\mathit{CV}^{2}_{k}(Y)} \geq(\leq)\, \frac{(m+q)\Gamma _{k}(m+q+k)\Gamma_{k}(p-q)}{p\Gamma_{k}(m)\Gamma_{k}(p+k)},\quad k>0. $$

Proof

Rewriting relation (17) as

$$ \frac{[\frac{V_{k}(X)}{(E_{k}(X))^{2}}+1]}{[\frac{V_{k}(Y)}{(E_{k}(Y))^{2}}+1]} \geq(\leq)\, \frac{\Gamma_{k}(p-q)\Gamma_{k}(m+q)}{\Gamma_{k}(p) \Gamma _{k}(m)}\frac{(E_{k}(Y))^{2}}{(E_{k}(X))^{2}}, $$

using the values of \(E_{k}(X)\) and \(E_{k}(Y)\) from Proposition 2.3 on the right-hand side of the above inequality, we get

$$ \frac{[\frac{V_{k}(X)}{(E_{k}(X))^{2}}+1]}{[\frac{V_{k}(Y)}{(E_{k}(Y))^{2}}+1]} \geq(\leq)\, \frac{\Gamma_{k}(p-q)\Gamma_{k}(m+q)}{\Gamma_{k}(p) \Gamma _{k}(m)}\frac{(m+q)^{2}}{p^{2}}, $$

and by \(\Gamma_{k}(x+k)=x\Gamma_{k}(x)\), we reach the required proof. □

4 Some results via Hölder’s integral inequality

In this section, we prove some results involving the k-gamma random variable via Hölder’s integral inequality. The mapping \(\Gamma_{k}\) is logarithmically convex, as proved in [18], and we have the following theorem.

Theorem 4.1

Define the random variables X and Y such that \(X \sim\Gamma_{k}(ax+by)\), \(Y \sim\Gamma_{k}(x)\) and \(a,b,x,y \geq0\) with \(a+b=1\). Then we have the inequality for the gamma k-distribution

$$ \frac{E_{k}(X)^{ar} }{[E_{k}(Y)^{r}]^{a}} \leq \frac{[\Gamma_{k}(x)]^{a}[\Gamma _{k}(y)]^{b}}{\Gamma_{k}(ax+by)},\quad k>0, r=1,2,\ldots . $$
(18)

Proof

For \(k > 0\), consider the mappings defined by

$$ f(t) = t^{a(x-1)+ar} , \qquad g(t) = t^{b(y-1)} \quad \text{and} \quad h(t) = e^{\frac{-t^{k}}{k}} $$

for \(t \in[0,\infty)\). Substituting these mappings in Hölder’s integral inequality

$$ \int_{I} f(t)g(t)h(t)\, dt \leq \biggl(\int_{I} \bigl\{ f(t)\bigr\} ^{\frac{1}{a}}h(t) \,dt \biggr)^{a} \biggl(\int _{I} \bigl\{ g(t)\bigr\} ^{\frac{1}{b}}h(t) \,dt \biggr)^{b}, $$
(19)

we have

$$\begin{aligned}& \int_{0}^{\infty}\bigl(t^{ax+by+ar-a-b} e^{\frac{-t^{k}}{k}} \bigr)\,dt \\& \quad \leq \biggl(\int_{0}^{\infty}\bigl[t^{r+x-1}e^{\frac{-t^{k}}{k}}\bigr] \,dt \biggr)^{a} \biggl(\int_{0}^{\infty}\bigl[t^{y-1}e^{\frac{-t^{k}}{k}}\bigr] \,dt \biggr)^{b}. \end{aligned}$$
(20)

From relation (10), we have

$$ E_{k}(X)^{ar}\Gamma_{k}(ax+by)= \int _{0}^{\infty}t^{ax+by+ar-1}e^{\frac {-t^{k}}{k}} \,dt $$

and

$$ E_{k}(Y)^{r}\Gamma_{k}(x)= \int _{0}^{\infty}t^{x+r-1}e^{\frac{-t^{k}}{k}} \,dt. $$

Using these results in inequality (20), we get

$$ E_{k}(X)^{ar} \Gamma_{k}(ax+by) \leq \bigl[ \Gamma_{k}(x)E_{k}(Y)^{r}\bigr]^{a} \bigl[ \Gamma _{k}(y)\bigr]^{b},\quad k>0, r=1,2,\ldots , $$

which is equivalent to the required result. □
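The Hölder step (20), and hence the bound (18), can likewise be examined numerically; below is our sketch with arbitrary admissible parameters satisfying \(a+b=1\).

```python
import numpy as np
from scipy.integrate import quad

def J(p, k):
    # int_0^inf t^(p-1) exp(-t^k/k) dt
    return quad(lambda t: t ** (p - 1.0) * np.exp(-t ** k / k), 0.0, np.inf)[0]

a, b = 0.4, 0.6                          # a + b = 1
x, y, r, k = 2.5, 3.5, 1, 2.0
lhs = J(a * x + b * y + a * r, k)        # left-hand side of (20)
rhs = J(x + r, k) ** a * J(y, k) ** b    # right-hand side of (20)
print(lhs <= rhs)                        # True, by Hölder's inequality (19)
```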

Corollary 4.2

Setting \(b=a=\frac{1}{2}\) in Theorem  4.1 (since \(a+b=1\)), we have

$$ \bigl(E_{k}(X)^{ar} \Gamma_{k}\bigl(a(x+y)\bigr) \bigr)^{\frac{1}{a}} \leq E_{k}(Y)^{r}\Gamma _{k}(x) \Gamma_{k}(y),\quad k>0, r=1,2,\ldots . $$

Theorem 4.3

Let the random variables X and Y be such that \(X \sim\Gamma_{k}(ax+by)\), \(Y \sim\Gamma_{k}(x)\) and \(a,b,x,y \geq0\) with \(a+b=1\). Then we have the inequality for the reciprocals of a k-gamma variate

$$ \Gamma_{k}(ax+by)\bigl[E_{k}(1/X)^{ar}\bigr] \leq \bigl[\Gamma_{k}(x)E_{k}(1/Y)^{r} \bigr]^{a}\bigl[\Gamma _{k}(y)\bigr]^{b},\quad k>0, r=1,2,\ldots . $$

Proof

For \(k > 0\), consider the mappings defined by

$$ f(t) = t^{a(x-1)-ar} , \qquad g(t) = t^{b(y-1)} \quad \text{and}\quad h(t) = e^{\frac{-t^{k}}{k}} $$

for \(t \in[0,\infty)\). Substituting these mappings in Hölder’s integral inequality, we have

$$ \int_{0}^{\infty}\bigl(t^{ax+by-ar-a-b} e^{\frac{-t^{k}}{k}} \bigr)\,dt \leq \biggl(\int _{0}^{\infty}t^{x-r-1}e^{\frac{-t^{k}}{k}} \,dt \biggr)^{a} \biggl(\int_{0}^{\infty}t^{y-1}e^{\frac{-t^{k}}{k}}\,dt \biggr)^{b}. $$
(21)

From relation (10), we deduce

$$ E_{k}(1/X)^{ar}\Gamma_{k}(ax+by)= \int _{0}^{\infty}t^{ax+by-ar-1}e^{\frac {-t^{k}}{k}} \,dt $$

and

$$ E_{k}(1/Y)^{r}\Gamma_{k}(x)= \int _{0}^{\infty}t^{x-r-1}e^{\frac{-t^{k}}{k}} \,dt, $$

and hence inequality (21) gives the required proof. □

Theorem 4.4

Let the random variables X and Y be such that \(X \sim\Gamma_{k}(ax+by)\), \(Y \sim\Gamma_{k}(x)\) and \(a,b,x,y \geq0\) with \(a+b=1\). Denote the variances of these variables in terms of k by \(V_{k}(X)= E_{k}(X^{2})-(E_{k}(X))^{2}\) and \(V_{k}(Y)= E_{k}(Y^{2})-(E_{k}(Y))^{2}\), respectively. Then we have the inequality for the variances of the gamma k-distribution

$$ \Gamma_{k}(ax+by)\bigl[V_{k}\bigl(X^{a}\bigr)+ \bigl(E_{k}\bigl(X^{a}\bigr)\bigr)^{2}\bigr] \leq \bigl(\Gamma_{k}(x)\bigr)^{a} \bigl(\Gamma _{k}(y) \bigr)^{b}\bigl[V_{k}(Y)+ x^{2}\bigr]^{a} ,\quad k>0. $$

Proof

From Theorem 4.1, taking \(r=2\) and writing \(E_{k}(\cdot )\) in terms of \(V_{k}(\cdot)\), we have

$$ \frac{V_{k}(X^{a})+(E_{k}(X^{a}))^{2}}{[E_{k}(Y)^{2}]^{a}} \leq \frac{[\Gamma _{k}(x)]^{a}[\Gamma_{k}(y)]^{b}}{\Gamma_{k}(ax+by)},\quad k>0. $$
(22)

Using Proposition 2.3, we see that \(E_{k}(Y)=x\), and after rearranging the terms, inequality (22) gives the required proof. □

5 Some inequalities for the mean deviation

In 1935, Grüss established an integral inequality which provides an estimation for the integral of a product in terms of the product of integrals [13, 19]. Here, we use this inequality to prove some inequalities involving the mean deviation of a k-beta random variable. The authors [4] defined the k-beta function as

$$ \beta_{k} (x, y) = \frac{\Gamma_{k}(x)\Gamma_{k}(y)}{\Gamma_{k}(x + y)},\quad \operatorname{Re}(x) > 0, \operatorname{Re}(y) > 0 $$
(23)

and the integral form of \(\beta_{k} (x, y)\) is

$$ \beta_{k} (x, y)=\frac{1}{k} \int_{0}^{1} t^{\frac{x}{k}-1} (1-t)^{\frac{y}{k}-1} \,dt. $$
(24)

Note

When \(k\rightarrow1\), \(\beta_{k} (x, y) \rightarrow \beta(x, y)\).
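The equivalence of the two forms (23) and (24) is easy to confirm numerically; a short sketch (ours, assuming scipy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_k(x, k):
    return k ** (x / k - 1.0) * gamma(x / k)     # relation (6)

def beta_k(x, y, k):
    """k-beta function via relation (23)."""
    return gamma_k(x, k) * gamma_k(y, k) / gamma_k(x + y, k)

def beta_k_integral(x, y, k):
    """Integral form (24)."""
    return quad(lambda t: t ** (x / k - 1.0) * (1.0 - t) ** (y / k - 1.0), 0.0, 1.0)[0] / k

x, y, k = 2.3, 1.7, 2.0
print(beta_k(x, y, k), beta_k_integral(x, y, k))  # the two values agree
```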

Definition 5.1

A continuous random variable X is said to have a beta k-distribution with two parameters m and n if its probability k-density function (p.k.d.f.) is defined by (see [10, 20])

$$ f_{k}(x) = \begin{cases} \frac{1}{k\beta_{k}(m,n)}x^{\frac{m}{k}-1}(1-x)^{\frac{n}{k}-1},& 0\leq x\leq1 ; m, n, k >0, \\ 0,& \text{elsewhere}. \end{cases} $$

In the above distribution, the k-beta variable is referred to as \(\beta_{k}(m,n)\), and its k-distribution function \(F_{k}(x)\) is given by

$$ F_{k}(x) = \begin{cases} 0,& x< 0, \\ \int_{0}^{x}\frac{1}{k\beta_{k}(m,n)}z^{\frac{m}{k}-1}(1-z)^{\frac {n}{k}-1}\, dz, &0\leq x\leq1 ; m, n, k >0, \\ 1,& x>1. \end{cases} $$

Remarks

We can call the above function an incomplete k-beta function because, for \(k=1\), it reduces to the incomplete beta function tabulated in [21].

Also, we see that the mean deviation for a beta random variable \(X \sim \beta(p,q)\) is given by [14]

$$ \mathit{MD}(X)= \frac{2p^{p} q^{q}}{\beta(p,q)(p+q)^{p+q}} $$

and, for a k-beta random variable \(X \sim\beta_{k}(p,q)\), mean deviation in terms of k is given by

$$ \mathit{MD}_{k}(X) = \frac{2(\frac{p}{k})^{\frac{p}{k}}(\frac{q}{k})^{\frac {q}{k}}}{\beta_{k}(p,q)(\frac{p+q}{k})^{\frac{p+q}{k}}}. $$
(25)

For more details about the theory of k-special functions, such as the k-gamma function, the k-polygamma function, the k-beta function, k-hypergeometric functions, solutions of k-hypergeometric differential equations, contiguous function relations, and inequalities and integral representations with applications involving the k-gamma and k-beta functions and the k-gamma and k-beta probability distributions, see [22–27].

Lemma 5.2

Let f and g be two functions defined and integrable on \([a, b]\). If m, M, s and S are given real constants such that \(m \leq f(x)\leq M\) and \(s\leq g(x)\leq S\) for all \(x \in[a, b]\), then

$$\begin{aligned}& \biggl\vert \frac{1}{b-a}\int_{a}^{b} f(x) g(x) \,dx- \frac{1}{b-a} \int_{a}^{b} f(x)\,dx \frac{1}{b-a} \int_{a}^{b} g(x)\,dx \biggr\vert \\& \quad \leq \frac{1}{4} (M-m) (S-s) \end{aligned}$$

and the constant \(\frac{1}{4}\) is best possible.
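Before applying the lemma, here is a minimal numerical illustration (our own f and g on \([0,1]\), with the obvious bounds \(m=s=0\) and \(M=S=1\)):

```python
from scipy.integrate import quad

f = lambda x: x ** 2      # 0 <= f <= 1 on [0, 1]
g = lambda x: 1.0 - x     # 0 <= g <= 1 on [0, 1]

I = lambda func: quad(func, 0.0, 1.0)[0]
diff = abs(I(lambda x: f(x) * g(x)) - I(f) * I(g))
print(diff, diff <= 0.25)   # the Gruss bound (M - m)(S - s)/4 = 1/4 holds
```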

Now, an application of the Grüss integral inequality results in the following estimation of the mean deviation of a k-beta random variable.

Theorem 5.3

Let \(p, q > k >0\) be real numbers and \(x\in [0,1]\). Then, for the mean deviation of a random variable \(X \sim\beta _{k}(p,q)\), the following inequality holds:

$$ \frac{2(\frac{p}{k})^{\frac{p}{k}}(\frac{q}{k})^{\frac{q}{k}}}{(\frac {p+q}{k})^{\frac{p+q}{k}}}\cdot\frac{1}{[\frac{k}{pq}+\frac{1}{4k}]} \leq \mathit{MD}_{k}(X) \leq \frac{2(\frac{p}{k})^{\frac{p}{k}}(\frac{q}{k})^{\frac {q}{k}}}{(\frac{p+q}{k})^{\frac{p+q}{k}}}\cdot\frac{1}{[\frac{k}{pq}-\frac {1}{4k}]} \quad \textit{for } pq< 4k^{2}. $$

Proof

Consider the functions defined by

$$ f(x)= x^{\frac{p}{k}-1} ,\qquad g(x)= (1-x)^{\frac{q}{k}-1},\quad x\in[0, 1], p, q > k > 0. $$

For minima and maxima of \(f(x)\) and \(g(x)\), we have

$$ \inf_{x\in[0,1]}f(x) = \inf_{x\in[0,1]}g(x) = 0 ; \qquad \sup_{x\in [0,1]}f(x) = \sup_{x\in[0,1]}g(x) = 1. $$

Also,

$$ \int_{0}^{1} f(x)\,dx= \frac{k}{p} ; \qquad \int_{0}^{1} g(x)\,dx= \frac{k}{q}. $$

By using the Grüss inequality, we get

$$ \biggl\vert \int_{0}^{1} x^{\frac{p}{k}-1} (1-x)^{\frac{q}{k}-1}\,dx- \int_{0}^{1} x^{\frac{p}{k}-1}\,dx \int_{0}^{1} (1-x)^{\frac{q}{k}-1}\,dx \biggr\vert \leq \frac {1}{4}(1-0) (1-0). $$

Using the definition of k-beta function given in relation (24), we have

$$ \biggl\vert k \beta_{k}(p,q)- \frac{k}{p} \frac{k}{q} \biggr\vert \leq \frac{1}{4} $$

or equivalently,

$$ \frac{k}{pq}-\frac{1}{4k} \leq \beta_{k}(p,q) \leq \frac{k}{pq}+\frac {1}{4k}. $$
(26)

Since relation (25) involves the reciprocal of \(\beta_{k}(p,q)\), relations (25) and (26) together give the required result. □
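The two-sided estimate of Theorem 5.3 is easy to evaluate; the sketch below (our names and parameter choices, with \(p,q>k\) and \(pq<4k^{2}\)) computes the mean deviation (25) exactly and compares it with the Grüss-based bounds.

```python
from math import gamma

def gamma_k(x, k):
    return k ** (x / k - 1.0) * gamma(x / k)     # relation (6)

def beta_k(p, q, k):
    return gamma_k(p, k) * gamma_k(q, k) / gamma_k(p + q, k)

def md_k(p, q, k):
    """Mean deviation (25) of X ~ beta_k(p, q)."""
    c = (p / k) ** (p / k) * (q / k) ** (q / k) / ((p + q) / k) ** ((p + q) / k)
    return 2.0 * c / beta_k(p, q, k)

p, q, k = 1.5, 2.0, 1.2                            # p, q > k and p q < 4 k^2
c = (p / k) ** (p / k) * (q / k) ** (q / k) / ((p + q) / k) ** ((p + q) / k)
lower = 2.0 * c / (k / (p * q) + 1.0 / (4.0 * k))  # lower bound of Theorem 5.3
upper = 2.0 * c / (k / (p * q) - 1.0 / (4.0 * k))  # upper bound of Theorem 5.3
print(lower <= md_k(p, q, k) <= upper)             # True
```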

Theorem 5.4

Let p, q and k be positive real numbers and \(x\in[0,1]\). Then, for the mean deviation of a random variable \(X \sim\beta_{k}(p,q)\), the following inequality holds:

$$ \mathit{MD}_{k}(X) \leq(\geq)\, \frac{2k(\frac{p}{k})^{\frac{p+k}{k}}(\frac {q}{k})^{\frac{q+k}{k}}}{(\frac{p+q}{k})^{\frac{p+q}{k}}}, $$

according as

$$ (p-k) (q-k) \leq(\geq)\, 0. $$

Proof

Consider the functions defined by

$$ f(x)= x^{\frac{p}{k}-1} ,\qquad g(x)= (1-x)^{\frac{q}{k}-1}\quad \text{and}\quad h(x)= 1, \quad x\in[0, 1], p, q, k > 0. $$

As \((p-k)(q-k) \leq(\geq)\, 0\), the mappings f and g are similarly (oppositely) monotonic and h is non-negative on \([0,1]\). Using Chebyshev’s integral inequality, we have

$$ \int_{0}^{1} dx \int_{0}^{1} x^{\frac{p}{k}-1} (1-x)^{\frac{q}{k}-1}\, dx \geq (\leq ) \int_{0}^{1} x^{\frac{p}{k}-1}\,dx \int_{0}^{1} (1-x)^{\frac{q}{k}-1}\,dx, $$

which implies that

$$ \beta_{k}(p,q) \geq (\leq)\, \frac{k}{pq}. $$
(27)

From relations (25) and (27), we have the required result. □

Theorem 5.5

Let p, q and k be positive real numbers. Then, for the mean deviation of a random variable \(X \sim\beta _{k}(p,q)\), the following inequality is satisfied:

$$ \mathit{MD}_{k}(X) \leq (\geq)\, \frac{2(\frac{p}{k})^{\frac{p}{k}}(\frac {q}{k})^{\frac{q}{k}}}{(\frac{p+q}{k})^{\frac{p+q}{k}}}\cdot\frac{\Gamma _{k}(p+q)}{\Gamma_{k}(q+k)\Gamma_{k}(p-k)} $$

according as

$$ (p-q-k)\geq(\leq)\, 0. $$

Proof

Consider the functions defined by

$$ f(x)= x^{p-q-k}, \qquad g(x)= x^{k}\quad \text{and}\quad h(x)=x^{q-1} e^{-\frac{x^{k}}{k}} ,\quad x\in[0, \infty), p, q > k > 0. $$

Using Chebyshev’s integral inequality, we have

$$\begin{aligned}& \int_{0}^{\infty} x^{q-1} e^{-\frac{x^{k}}{k}} \,dx \int_{0}^{\infty} x^{p-1} e^{-\frac{x^{k}}{k}}\,dx \\& \quad \geq(\leq)\,\int_{0}^{\infty} x^{p-k-1}e^{-\frac{x^{k}}{k}}\,dx \int_{0}^{\infty} x^{q+k-1}e^{-\frac{x^{k}}{k}}\,dx. \end{aligned}$$
(28)

Using the integral form of a k-gamma function given in relation (5), inequality (28) gives

$$ \Gamma_{k}(q)\Gamma_{k}(p) \geq(\leq)\, \Gamma_{k}(p-k)\Gamma_{k}(q+k). $$

Dividing both sides by \(\Gamma_{k}(p+q)\) and using relation (23), we have

$$ \beta_{k}(p,q) \geq(\leq)\, \frac{\Gamma_{k}(p-k)\Gamma_{k}(q+k)}{\Gamma _{k}(p+q)}. $$
(29)

From relations (25) and (29), we reach the desired proof. □

References

1. Larsen, RJ, Marx, ML: An Introduction to Mathematical Statistics and Its Applications, 5th edn. Prentice Hall, New York (2011)

2. Walck, C: A Hand-Book on Statistical Distributions for Experimentalists (2007)

3. Hastings, NAJ, Peacock, JB: Statistical Distributions. Butterworth, Stoneham (1975)

4. Diaz, R, Pariguan, E: On hypergeometric functions and k-Pochhammer symbol. Divulg. Mat. 15(2), 179-192 (2007)

5. Kokologiannaki, CG: Properties and inequalities of generalized k-gamma, beta and zeta functions. Int. J. Contemp. Math. Sci. 5(14), 653-660 (2010)

6. Kokologiannaki, CG, Krasniqi, V: Some properties of the k-gamma function. Matematiche LXVIII, 13-22 (2013)

7. Krasniqi, V: A limit for the k-gamma and k-beta function. Int. Math. Forum 5(33), 1613-1617 (2010)

8. Mansoor, M: Determining the k-generalized gamma function \(\Gamma_{k}(x)\) by functional equations. Int. J. Contemp. Math. Sci. 4(21), 1037-1042 (2009)

9. Mubeen, S, Habibullah, GM: An integral representation of some k-hypergeometric functions. Int. Math. Forum 7(4), 203-207 (2012)

10. Rehman, G, Mubeen, S, Rehman, A, Naz, M: On k-gamma, k-beta distributions and moment generating functions. J. Probab. Stat. 2014, Article ID 982013 (2014)

11. Spanier, J, Oldham, KB: The gamma function \(\Gamma(x)\). In: An Atlas of Functions, pp. 411-421. Hemisphere, Washington (1987)

12. Spanier, J, Oldham, KB: The incomplete gamma function and related functions. In: An Atlas of Functions, pp. 435-443. Hemisphere, Washington (1987)

13. Mitrinovic, DS, Pecaric, JE, Fink, AM: Classical and New Inequalities in Analysis. Kluwer Academic, Dordrecht (1993)

14. Dragomir, SS, Pranesh, K, Singh, P: Mathematical inequalities with applications to the beta and gamma mappings - II. Survey paper (1999)

15. Kumar, P, Singh, SP, Dragomir, SS: Some inequalities involving beta and gamma functions. Nonlinear Anal. Forum 6(1), 143-150 (2001)

16. Dragomir, SS, Agarwal, RP, Barnett, NS: Inequalities for beta and gamma functions via some classical and new integral inequalities. J. Inequal. Appl. 5, 103-165 (2000)

17. Dragomir, SS, Wang, S: Applications of Ostrowski’s inequality for the estimation of error bounds for some special means and for some numerical quadrature rules. Appl. Math. Lett. 11(1), 105-109 (1998)

18. Rehman, A, Mubeen, S, Sadiq, N, Shaheen, F: Some inequalities involving k-gamma and k-beta functions with applications. J. Inequal. Appl. 2014, 224 (2014)

19. Dragomir, SS: Some integral inequalities of Grüss type. Indian J. Pure Appl. Math. 31(4), 397-415 (2000)

20. Rehman, A, Mubeen, S: Some inequalities involving k-gamma and k-beta functions with applications - II. J. Inequal. Appl. 2014, 445 (2014)

21. Pearson, K: Tables of the Incomplete Beta Function. Cambridge University Press, Cambridge (1934); 2nd edn. (1968)

22. Krasniqi, V: Inequalities and monotonicity for the ratio of k-gamma functions. Sci. Magna 6(1), 40-45 (2010)

23. Zhang, J, Shi, HN: Two double inequalities for k-gamma and k-Riemann zeta functions. J. Inequal. Appl. 2014, 191 (2014)

24. Mubeen, S, Naz, M, Rahman, G: A note on k-hypergeometric differential equations. J. Inequal. Spec. Funct. 4(3), 38-43 (2013)

25. Mubeen, S, Rahman, G, Rehman, A, Naz, M: Contiguous function relations for k-hypergeometric functions. ISRN Math. Anal. 2014, Article ID 410801 (2014)

26. Mubeen, S, Naz, M, Rehman, A, Rahman, G: Solutions of k-hypergeometric differential equations. J. Appl. Math. 2014, Article ID 128787 (2014)

27. Mubeen, S, Rehman, A, Shaheen, F: Properties of k-gamma, k-beta and k-psi functions. Bothalia 44, 371-379 (2014)


Acknowledgements

The authors are grateful to the anonymous referees for their helpful comments and suggestions to improve the article.

Author information


Corresponding author

Correspondence to Shahid Mubeen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The main idea of this paper was proposed by SM and SI. Both authors contributed equally to the writing of this paper. The authors AR and MI read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Rehman, A., Mubeen, S., Iqbal, S. et al. Inequalities with applications to some k-analog random variables. J Inequal Appl 2015, 177 (2015). https://doi.org/10.1186/s13660-015-0694-4

