Bayesian Estimation in Piecewise Constant Model with Gamma Noise by Using Reversible Jump MCMC

A piecewise constant model is often applied to model data in many fields, and various kinds of noise can be combined with it. This paper proposes a piecewise constant model with gamma multiplicative noise and a method to estimate the parameters of the model. The estimation is done in a Bayesian framework. A prior distribution for the model parameters is chosen and multiplied by the likelihood function of the data to build a posterior distribution for the parameters. Because the number of constant models is itself a parameter, the form of the posterior distribution is too complex for the Bayes estimator to be calculated analytically. A reversible jump Markov chain Monte Carlo (MCMC) method is therefore used to find the Bayes estimator of the model parameters. The result of this paper is the development of the piecewise constant model with gamma multiplicative noise and of the method to estimate its parameters. An advantage of this method is that it estimates all parameters of the piecewise constant model simultaneously.


Introduction
The piecewise constant model is used to model data in many fields, for example [1][2][3]. It is used for smoothing images of flowers [1] and for population size modeling [2], [3]. The piecewise constant model can contain either an additive noise or a multiplicative noise. Additive noise is considered by various authors, for example [4][5][6]: it is added to a spatial regression model [4], used in a partially linear functional model applied to tecator data [5], and used in a log regression model [6]. On the other hand, multiplicative noise is also used by several authors, for example [7][8][9][10]. Multiplicative noise is used as a measurement error in line transect sampling [7]. An asymptotic Cramer-Rao bound is derived for frequency estimation in multiplicative noise [8]. Adsorption of ligands on DNA is considered for an arbitrary filling in the presence of multiplicative noise [9]. Multiplicative noise is also used in segmentation [10].
Noise in a mathematical model is assumed to follow a certain distribution, for example [11][12][13][14]. Gaussian additive noise is used in the piecewise constant model [11]. Exponential additive noise is used in autoregressive models [12][13][14]. However, in some applications the data are better modeled by the piecewise constant model with a gamma multiplicative noise. The gamma distribution is more general than the exponential distribution: the exponential distribution is a particular case of the gamma distribution. If the piecewise constant model with gamma multiplicative noise is used to model the data, the model parameters are unknown. The model parameters include the number of constant models, the locations of the constant model changes, the constant model heights, and the noise variance. This study proposes an estimation method for the piecewise constant model with gamma multiplicative noise in which the number of constant models is unknown.

Method
A Bayesian framework is adopted to estimate the parameters [15]. A prior distribution for the number of constant models, the locations of changes in the constant model, the heights of the constant model, and the noise variance is selected. This prior distribution is then combined with the likelihood function of the data to obtain a posterior distribution. Based on this posterior distribution, Bayes estimators for the number of constant models, the locations of changes in the constant model, the heights of the constant model, and the noise variance are derived.
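This construction can be written schematically. With θ denoting the full parameter vector (the number of constant models, the change locations, the heights, and the noise parameter; the grouping into one symbol is a notational choice made here), Bayes' theorem gives the posterior up to a normalizing constant:

```latex
% Posterior \propto likelihood \times prior (schematic)
\pi(\theta \mid y) \;\propto\; f(y \mid \theta)\,\pi(\theta),
\qquad \theta = (k, \tau, h, \beta).
```

The normalizing constant involves a sum over all admissible values of the number of constant models and an integral over the remaining parameters, which is why the posterior cannot be evaluated in closed form.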
A reversible jump Markov chain Monte Carlo (MCMC) method [16] is proposed to determine the Bayes estimator. The basic idea of the reversible jump MCMC method is the construction of a Markov chain that is recurrent and irreducible, so that the limiting distribution of the Markov chain equals the posterior distribution. The resulting Markov chain is then used to calculate the estimators for the parameters.

Results and Discussion
Suppose that n represents the number of data and y_1, …, y_n represent a data set. These data follow the piecewise constant model if they satisfy the following mathematical equation:

y_t = h_j z_t,  for τ_j < t ≤ τ_{j+1},  j = 1, …, k + 1,   (1)

with τ_1 = 0 and τ_{k+2} = n. The value of k denotes the number of constant models (there are k change locations and k + 1 constant pieces). The values τ = (τ_2, …, τ_{k+1}) state the locations of the changes in the constant model. The values h = (h_1, …, h_{k+1}) express the heights of the constant model. Here, z_t is assumed to have the Gamma distribution with parameters α > 0 and β > 0.
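As an illustration, the model can be simulated as follows. This is a minimal sketch, not the paper's code: the function name, the segment boundaries, and the heights used below are chosen arbitrarily, and the noise is drawn as Gamma with shape α and rate β (scale 1/β).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_piecewise_gamma(n, tau, h, alpha, beta, rng):
    """Draw y_t = h_j * z_t, where z_t ~ Gamma(alpha, rate=beta) and
    segment j covers the indices between consecutive boundaries.

    tau lists the interior change locations; the boundaries 0 and n
    are added internally (the paper's tau_1 = 0 and tau_{k+2} = n)."""
    bounds = [0] + list(tau) + [n]
    y = np.empty(n)
    for j in range(len(bounds) - 1):
        lo, hi = bounds[j], bounds[j + 1]
        # NumPy parameterizes the Gamma by shape and scale = 1/rate
        z = rng.gamma(shape=alpha, scale=1.0 / beta, size=hi - lo)
        y[lo:hi] = h[j] * z
    return y

# Illustrative values only (not the Table 1 values): three constant
# pieces, mean-one noise since alpha = beta = 5
y = simulate_piecewise_gamma(300, [100, 200], [1.0, 3.0, 2.0], 5.0, 5.0, rng)
```

When α = β the noise has mean one, so the empirical mean within each segment is close to the corresponding height h_j.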

Likelihood Function
The random variable z_t is Gamma distributed, so the probability density function for z_t can be written as

f(z_t) = (β^α / Γ(α)) z_t^{α−1} exp(−β z_t),  z_t > 0.

Here E(z_t) = 1, so that α = β. Suppose that y = (y_1, …, y_n). By using a variable transformation, the likelihood function for the data y is

f(y | k, τ, h, β) = ∏_{j=1}^{k+1} (β^β / Γ(β))^{n_j} h_j^{−β n_j} ( ∏_{t=τ_j+1}^{τ_{j+1}} y_t^{β−1} ) exp( −(β / h_j) ∑_{t=τ_j+1}^{τ_{j+1}} y_t ),

where n_j = τ_{j+1} − τ_j for j = 1, …, k + 1.
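The likelihood can be evaluated numerically on the log scale. The sketch below assumes the mean-one noise constraint (α = β), under which each y_t in segment j is Gamma with shape β and rate β/h_j; the function name and the representation of τ as a Python list of interior change locations are choices made here, not the paper's.

```python
import math

def log_likelihood(y, tau, h, beta):
    """Log-likelihood of the piecewise constant model with mean-one
    Gamma multiplicative noise: y_t ~ Gamma(shape=beta, rate=beta/h_j)
    for t in segment j. tau holds the interior change locations."""
    bounds = [0] + list(tau) + [len(y)]
    ll = 0.0
    for j in range(len(bounds) - 1):
        rate = beta / h[j]
        for t in range(bounds[j], bounds[j + 1]):
            # log Gamma(shape=beta, rate) density at y_t
            ll += (beta * math.log(rate) - math.lgamma(beta)
                   + (beta - 1.0) * math.log(y[t]) - rate * y[t])
    return ll
```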

Prior Distribution
To obtain a posterior distribution, a prior distribution must be determined. As in [10], the prior distribution for k is chosen to be a Binomial distribution with a parameter 0 < p < 1:

π(k) = C(k_max, k) p^k (1 − p)^{k_max − k},  for k = 0, 1, …, k_max,

where k_max states the maximum value of k. A hyperprior distribution for p is chosen as the uniform distribution on (0, 1). The prior distribution for the change locations τ_2, …, τ_{k+1} is chosen according to order statistics.
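The binomial prior on k can be evaluated as in the following sketch; the function and argument names (log_prior_k, k_max) are chosen here, and the log prior mass is computed via log-gamma functions for numerical stability.

```python
import math

def log_prior_k(k, k_max, p):
    """Log of the Binomial(k_max, p) prior mass on the number of change
    locations k: C(k_max, k) * p^k * (1-p)^(k_max - k)."""
    if not 0 <= k <= k_max:
        return float("-inf")
    # log C(k_max, k) written with log-gamma functions
    return (math.lgamma(k_max + 1) - math.lgamma(k + 1)
            - math.lgamma(k_max - k + 1)
            + k * math.log(p) + (k_max - k) * math.log(1.0 - p))
```

Because p itself carries a uniform hyperprior, p would be updated (or integrated out) alongside the other parameters in the sampler.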
The simulation of the conditional posterior distribution is done by using the reversible jump MCMC algorithm. This algorithm uses three transformations, namely: a change in a location of the constant model, a birth of a constant model, and a death of a constant model.

Change in the Location
The change in the location of the constant model proceeds as follows. Choose a location at random among the current change locations τ_2, …, τ_{k+1}. If τ_j is selected, the location τ_j is deleted and replaced by a new location τ_j*. Draw u at random according to the uniform distribution on (τ_{j−1}, τ_{j+1}), and set τ_j* = u.
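The proposal part of this move can be sketched as follows; the acceptance step, which compares the posterior at the proposed and current locations, is omitted, and the function name and list representation are choices made here. The boundary conventions 0 and n bound the first and last locations.

```python
import random

def propose_location_change(tau, n, rng=random):
    """Pick one change location uniformly at random and redraw it
    uniformly between its two neighbours (with 0 and n acting as the
    outer boundaries). Returns the proposed, sorted location list."""
    if not tau:
        return list(tau)  # no location to move
    j = rng.randrange(len(tau))
    left = tau[j - 1] if j > 0 else 0
    right = tau[j + 1] if j + 1 < len(tau) else n
    proposal = list(tau)
    proposal[j] = rng.uniform(left, right)
    return sorted(proposal)
```

In the full reversible jump sampler this proposal would be accepted or rejected with the usual Metropolis-Hastings ratio; the birth and death moves additionally change the dimension k, which is where the reversible jump construction of [16] is needed.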

Simulation
The performance of the algorithm is tested using a simulation study. Synthetic data are generated using the piecewise constant model in equation (1). The values of the piecewise constant model parameters are presented in Table 1, while the noise is assumed to have a Gamma distribution with parameter values α = 5 and β = 5. The maximum value of k is set to k_max = 10. The synthetic data are presented in Figure 1 and are used as input to the reversible jump MCMC algorithm. The output is the number of constant models, the locations of the model changes, and the heights of the constant model.
The algorithm is run for 100,000 iterations with a burn-in period of 20,000. A histogram of the number of models is presented in Figure 2. Figure 2 shows that the histogram attains its maximum at k = 5, so the maximum posterior probability estimator for the number of models is k̂ = 5. For k̂ = 5, the estimator for the locations of the model changes is presented in Table 2. The synthetic data together with the estimated change locations are shown in Figure 3, which shows that the synthetic data contain 6 different models. Finally, for k̂ = 5, the estimator for the heights of the constant model is presented in Table 2. Comparing Table 1 with Table 2, the estimated parameter values approach the true parameter values: the distance between τ and τ̂ is ‖τ − τ̂‖ = 1.73, while the distance between h and ĥ is ‖h − ĥ‖ = 0.36. The synthetic data and the estimator for the heights are presented in Figure 4.
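The point estimate for k used above, the most frequent value of k in the chain after burn-in, can be computed as in the following sketch; the function name and the toy chain are illustrative, not the paper's actual output.

```python
from collections import Counter

def map_number_of_models(k_samples, burn_in):
    """Maximum a posteriori estimate of k: the most frequent value in
    the chain after discarding the burn-in iterations."""
    counts = Counter(k_samples[burn_in:])
    return counts.most_common(1)[0][0]

# Toy chain: after a short burn-in the chain concentrates at k = 5
chain = [2, 3, 4] + [5] * 7 + [4, 5, 5, 6, 5, 5]
k_hat = map_number_of_models(chain, burn_in=3)
```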

Conclusions
This paper develops a piecewise constant model with gamma multiplicative noise and a procedure to estimate its parameters. The parameters of the piecewise constant model include the number of constant models, the locations of changes in the constant model, the heights of the constant model, and the noise variance. The Bayes estimator cannot be formulated explicitly because the number of constant models is itself a parameter. The reversible jump MCMC method is proposed to estimate these parameters. According to the simulation study, the reversible jump MCMC method estimates the piecewise constant model parameters well.
The reversible jump MCMC algorithm has several advantages. It can estimate the parameters of the piecewise constant model with gamma multiplicative noise simultaneously, and it can also be used to determine the hyperparameters that appear in the prior distribution.