Comptes Rendus Mathématique

Abstract. A kernel conditional quantile estimator of a real-valued non-stationary spatial process is proposed for the purpose of prediction at an unobserved location of the underlying process. The originality lies in the ability to take some local spatial dependence into account. Large sample properties, namely almost complete and L^q consistencies of the estimator, are established. Résumé. In this note, we present a kernel estimator of the conditional quantile of a non-stationary spatial process, for the purpose of predicting the process at an unobserved site. The originality comes from the fact that the estimator allows a possible local dependence of the data to be taken into account. An asymptotic study based on the almost complete and mean of order q convergences of the estimator is given.


Introduction
Spatial statistical modeling is undergoing significant development because of its recurrent use in many fields such as epidemiology, econometrics, environmental and earth sciences, forestry, agronomy and image analysis, among others. A main goal of spatial analysis is the prediction of some feature at unsampled locations, taking spatial dependence into account. The literature on parametric spatial prediction is wide compared with that on the nonparametric setting.
Some results in this direction are those of Biau and Cadre [1] on kernel prediction of a strictly stationary real-valued random field indexed in (N*)^N. This work has been extended in several directions. Dabo-Niang and Yao [5] considered kernel regression estimation and prediction of continuously indexed and strictly stationary random fields.
Dabo-Niang et al. [3] proposed a new kernel spatial predictor of non-stationary processes based on a conditional mean regression model. This predictor depends on two kernels in order to control both the distance between observations and that between spatial locations. The regression model used by these authors may not be relevant in some situations, for instance in the presence of extreme or outlying values. In fact, the mean regression model is much more sensitive to extreme or outlying values than the quantile one. The conditional quantile spatial regression model may then be an alternative to the mean regression one, and is our interest in this work.
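As a small numerical illustration of this robustness (a hypothetical example of ours, not taken from the note): a single gross outlier drags the sample mean, the empirical target of mean regression, far from the bulk of the data, while the median, i.e. the α = 0.5 quantile, is essentially unchanged.

```python
import numpy as np

# Hypothetical sample: nine typical responses plus one gross outlier.
y = np.array([1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 50.0])

# The mean is dragged towards the outlier, while the median
# (the alpha = 0.5 quantile) stays with the bulk of the data.
print(y.mean())       # 5.9
print(np.median(y))   # 1.0
```

The same contrast carries over to the conditional setting: conditional quantiles inherit the robustness of the sample quantile, which motivates the quantile regression model adopted below.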
In the spatial real-valued setting, we refer to Koenker and Mizera [11] for parametric quantile estimation, while Hallin et al. [8] proposed a non-parametric local linear conditional quantile estimator for strictly stationary spatial data. These authors established the consistency of their estimator via a Bahadur representation, together with an asymptotic normality result. Dabo-Niang and Thiam [4] studied a kernel conditional quantile estimator for strictly stationary spatial processes and established its asymptotic normality. Abdi et al. [12] established consistency in L^{2r} (r ∈ N*) of a conditional quantile estimator for a strictly stationary spatial random field. Some of these results have been extended to the case of functional data.
All the non-parametric conditional quantile estimators considered in the previous papers dealt with strictly stationary spatial processes. For a recent review on non-parametric estimation in a spatial context, refer to El Machkouri et al. [6], among others. The current work goes beyond and extends the spatial regression model of Dabo-Niang et al. [3] on non-stationary processes to the quantile regression context. Namely, this note proposes a nonparametric conditional quantile estimator of locally stationary processes for a prediction purpose. Compared with Abdi et al. [12] and Dabo-Niang and Thiam [4], the proposed estimator depends on a multiplicative kernel that takes the spatial positions of the observations into account. This allows the predictor to account for the number of observed locations closest to the unobserved location at which the process is to be predicted. In addition, the considered model does not assume strict stationarity of the spatial process. This idea was used for density estimation and prediction in Dabo-Niang et al. [2] and Dabo-Niang et al. [3], respectively.
The note is organized as follows. Section 2 introduces the proposed quantile predictor and gives some large sample properties, namely almost complete and L^q-norm (q ∈ N*) consistencies with rates when the considered sample is α-mixing. Some comments and the main lines of the proofs are given in the last section.

New kernel quantile estimator
We consider a spatial process defined over some probability space (Ω, F, P). The main purpose of this work is to estimate the conditional quantile of (Y_i, i ∈ Z^N) given (X_i, i ∈ Z^N) at an unobserved location i_0 ∈ Z^N, in order to predict Y_{i_0}. We assume that the process is observable over the rectangular domain I_n = {i = (i_1, ..., i_N), 1 ≤ i_k ≤ n_k, k = 1, ..., N} (a basic assumption in the non-parametric literature) and observed on a spatial set O_n ⊂ I_n, n = (n_1, ..., n_N), of finite cardinality tending to ∞ as n → ∞ (defined later), with i_0 ∉ O_n.
We do not suppose strict stationarity; rather, we consider the following hypothesis. Let (X_{i_0}, Y_{i_0}) have the same distribution as (X, Y) and let the variables (X_i, Y_i), i ∈ O_n, be locally identically distributed (see for instance Klemelä [10], who considered density estimation for locally identically distributed time-series data). Assume that there is a sufficient number of (X_i, Y_i) with distributions close to that of (X, Y). One may imagine that when i is close to i_0, the distribution of (X_i, Y_i) is close to that of (X_{i_0}, Y_{i_0}), so that if there are enough sites i close to i_0, prediction of Y_{i_0} given X_{i_0} = x_{i_0} (denoted by x in the following, with abuse of notation) becomes possible. A spatial predictor of Y_{i_0} may then account for which locations of O_n have an influence on the site i_0 of interest. Let (X, Y) and the (X_i, Y_i) have unknown densities with respect to Lebesgue measure, and let f_{XY} and f_X be the densities of (X, Y) and X respectively.
Let n̂ = n_1 × ... × n_N be the size of I_n. We assume for simplicity that n_1 = n_2 = ... = n_N = n; however, the following results can be extended to a more general framework. We write n → ∞ (for the multi-index n = (n_1, ..., n_N)) when the common value n tends to ∞.
For the location i_0, let k_n = k_{n,i_0} = Σ_{i ∈ O_n} 1[‖i − i_0‖ ≤ d_n] denote the number of neighbors i for which the distance between i and i_0 is less than or equal to some d_n > 0 such that d_n → ∞ as n → ∞. This assumes that the number of neighboring locations eventually increases as the sample size increases.
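The neighbor count k_n is a simple indicator sum; the following sketch computes it for a two-dimensional domain (N = 2). The grid, the site i_0 and the radius d_n are illustrative values of ours, not taken from the note.

```python
import numpy as np

# Hypothetical observed design: a 10 x 10 integer grid with the site i0 left out,
# since the process is not observed at i0.
i0 = np.array([5, 5])
O_n = np.array([(i, j) for i in range(1, 11) for j in range(1, 11)
                if (i, j) != (5, 5)])
d_n = 2.0  # neighborhood radius (would grow with the sample size)

# k_n = number of observed sites within Euclidean distance d_n of i0
k_n = int(np.sum(np.linalg.norm(O_n - i0, axis=1) <= d_n))
print(k_n)  # 12: four sites at distance 1, four at sqrt(2), four at distance 2
```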
The method used is model-free: it relies on nonparametric kernel techniques to estimate conditional quantiles, is useful for prediction, and may be an alternative to classical regression; see Huang and Nguyen [9] for more details.
Let F^x denote the conditional distribution function of Y given X = x, F^x(y) = P(Y ≤ y | X = x), y ∈ R. For 0 < α < 1, t_α(x) = inf{y ∈ R : F^x(y) ≥ α} is called the α-th conditional quantile of Y given X = x. This quantile is the solution of F^x(t_α(x)) = α. To ensure the existence and uniqueness of t_α(x), we assume that F^x is strictly increasing. The conditional distribution F^x is estimated by the following proposed estimator:
F̂^x(y) = [ Σ_{i ∈ O_n} K_1(b_n^{-1}(x − X_i)) K_{2,ρ_n}(‖i_0 − i‖) K_3(h_n^{-1}(y − Y_i)) ] / [ Σ_{i ∈ O_n} K_1(b_n^{-1}(x − X_i)) K_{2,ρ_n}(‖i_0 − i‖) ],
where K_1, defined on R^d, and K_2, defined on R, are kernel functions, while K_3, defined on R, is a distribution function. Moreover, b_n, ρ_n and h_n are sequences of positive numbers tending to 0 such that n̂ ρ_n^N b_n^d h_n → ∞ as n → ∞, where n̂ = n_1 ⋯ n_N. In what follows, let K_{2,ρ_n}(‖i_0 − i‖) = K_2(ρ_n^{-1} ‖(i_0 − i)/n‖), where ‖(i_0 − i)/n‖ denotes the distance between the normalized locations i_0/n = (i_{01}/n, ..., i_{0N}/n) and i/n = (i_1/n, ..., i_N/n). As in Dabo-Niang et al. [13], ρ_n^{-1} ‖(i_0 − i)/n‖ ≤ 1 stands for ‖i_0 − i‖ ≤ n ρ_n. The conditional quantile kernel estimate t̂_α(x) is linked to the conditional distribution estimate in the following way: t̂_α(x) = inf{y ∈ R : F̂^x(y) ≥ α}. The proposed predictor of Y_{i_0} is then Ŷ_{i_0} = t̂_α(x). Its large sample properties are studied in the following with the help of some assumptions.
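To fix ideas, here is a minimal numerical sketch of the estimator F̂^x and of the quantile inversion. The choices made here are ours, for illustration only: uniform compactly supported kernels for K_1 and K_2, a piecewise-linear distribution function for K_3 (whose derivative is the uniform kernel), a grid search for the inversion, and all numerical values.

```python
import numpy as np

def K1(u):
    # compactly supported kernel on R^d (uniform; satisfies (H5)-type bounds)
    return np.all(np.abs(u) <= 1, axis=-1).astype(float)

def K2(t):
    # compactly supported spatial kernel on R
    return (np.abs(t) <= 1).astype(float)

def K3(t):
    # distribution function whose derivative is the uniform kernel;
    # strictly increasing where it takes values in (0, 1), as in (H6)
    return np.clip((t + 1.0) / 2.0, 0.0, 1.0)

def cond_cdf(y, x, X, Y, sites, i0, n, b_n, rho_n, h_n):
    """Kernel estimate of F^x(y) at the unobserved site i0 (sketch)."""
    # weights combine closeness in covariate space and in normalized locations
    w = K1((x - X) / b_n) * K2(np.linalg.norm((i0 - sites) / n, axis=1) / rho_n)
    denom = w.sum()
    if denom == 0.0:
        return np.nan
    return float(np.sum(w * K3((y - Y) / h_n)) / denom)

def cond_quantile(alpha, x, X, Y, sites, i0, n, b_n, rho_n, h_n, grid):
    """t_alpha(x) = inf{y : F_hat^x(y) >= alpha}, located on a grid of y values."""
    F = np.array([cond_cdf(y, x, X, Y, sites, i0, n, b_n, rho_n, h_n) for y in grid])
    return float(grid[np.argmax(F >= alpha)])  # first grid point where the cdf crosses alpha
```

As a sanity check, when Y_i is a monotone function of X_i and all sites lie close to i_0, the estimated conditional median at a point x approximately recovers the value of that function at x.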

Assumptions
Let C or C′ denote any nonnegative constant whose value is unimportant and may vary from line to line. Let ε be an arbitrarily small nonnegative number and u_n = Π_{i=1}^N (log n_i)(log log n_i)^{1+ε}. It is easily seen that Σ 1/(n̂ u_n) < ∞ (see Dabo-Niang and Thiam [4]), where n̂ = n_1 ⋯ n_N and the summation is over the indices n such that n_i ≥ 2 for all 1 ≤ i ≤ N. Let us assume the following conditions. The random field (Z_i = (X_i, Y_i), i ∈ Z^N) is α-mixing: there is a function ϕ such that ϕ(t) ↓ 0 as t → ∞ and
α(σ(Z_i, i ∈ E), σ(Z_i, i ∈ E′)) ≤ ψ(Card(E), Card(E′)) ϕ(dist(E, E′)),
where Card(E) (resp. Card(E′)) is the cardinality of the spatial set E (resp. E′), dist(E, E′) the Euclidean distance between E and E′, and ψ : N² → R^+ is a symmetric positive function nondecreasing in each variable, with ϕ(0) = 1. We recall that (Z_i, i ∈ Z^N) is said to be strongly mixing if ψ ≡ 1. In addition, we assume that (H_1) the densities f_{XY} and f_X are continuous on R^{d+1} and R^d respectively, and f_X > 0.
(H_3) For all (x_1, x_2) ∈ N_x × N_x and all (y_1, y_2) ∈ R²,
|F^{x_1}(y_1) − F^{x_2}(y_2)| ≤ C(‖x_1 − x_2‖^{b_1} + |y_1 − y_2|^{b_2}),
where C is a positive constant and b_1, b_2 > 0. (H_5) There exist constants C_{1i} and C_{2i} with 0 < C_{1i} < C_{2i} < ∞, i = 1, 2, such that
C_{1i} 1[s′s ≤ 1] ≤ K_i(s) ≤ C_{2i} 1[s′s ≤ 1],
where s′ is the transpose of s.
(H_6) K_3 is of class C^1 and symmetric, and its derivative K_3^{(1)} is a bounded kernel with compact support. Moreover, we assume that the restriction of K_3 to the set {t ∈ R : K_3(t) ∈ (0, 1)} is a strictly increasing function.

Remark 1.
(H_5) is imposed for the sake of simplicity and is satisfied by several kernels with compact support. (H_6) is classical in non-parametric quantile inference and is satisfied by kernels with compact support. A Gaussian kernel K_3^{(1)} is also possible; it suffices to replace the compact support assumption by ∫_R |t|^{b_2} K_3^{(1)}(t) dt < ∞. (H_6) ensures the existence and uniqueness of the quantile estimate t̂_α(x). This hypothesis is discussed in Abdi et al. [12]. (H_7)–(H_8) concern the bandwidths and are based on the mixing condition, in order to achieve the following consistency results.

Main results
The following main result gives the almost complete convergence (a.c.) of the conditional quantile estimate t̂_α(x), considered as a predictor of Y_{i_0}.

Theorem 2. Under hypotheses
A weak consistency is given in the following result.

Conclusion
This note presents a kernel quantile estimate as a predictor of a real-valued, locally stationary spatial process at a given location. Large sample properties of the predictor are given, namely almost complete and mean of order q consistencies under mixing conditions. These results are the baseline of work in progress concerning the asymptotic normality and finite sample properties of the proposed predictor.

Main lines of the proofs
Sketch of the proof of Theorem 3. By the same decomposition as in the proof of Abdi et al. [12, Theorem 2], and applying Lemmas 5, 6 and 7, we get the result.