We present a Markov model of land-use dynamics along a forest corridor of Madagascar. A first approach based on maximum likelihood leads to a model with an absorbing state. We study the quasi-stationary distribution of the model and the law of the hitting time of the absorbing state. According to experts, a transition absent from the data must be added to the model; since this is not possible with the maximum likelihood method, we turn to a Bayesian approach. We use a Markov chain Monte Carlo method to infer the transition matrix, which in this case admits an invariant distribution. Finally, we analyze the two identified dynamics.
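The contrast between the two estimators can be illustrated in a few lines: a maximum-likelihood estimate assigns probability zero to any transition absent from the data, whereas a Bayesian posterior mean under a Dirichlet prior keeps it possible. A minimal sketch (the land-use states and observed sequence are invented for illustration, not taken from the Madagascar data):

```python
from collections import defaultdict

states = ["forest", "savanna", "crop"]
# Hypothetical yearly land-use observations for one plot.
sequence = ["forest", "forest", "savanna", "crop", "crop",
            "savanna", "savanna", "crop", "crop", "crop"]

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(sequence, sequence[1:]):
    counts[a][b] += 1

def mle_row(s):
    """Maximum-likelihood transition probabilities out of state s."""
    total = sum(counts[s].values())
    return {t: counts[s][t] / total for t in states}

def bayes_row(s, alpha=1.0):
    """Posterior mean under a Dirichlet(alpha, ..., alpha) prior:
    pseudo-counts keep unobserved transitions possible."""
    total = sum(counts[s].values()) + alpha * len(states)
    return {t: (counts[s][t] + alpha) / total for t in states}

print(mle_row("crop"))    # "crop" -> "forest" never observed: MLE gives 0
print(bayes_row("crop"))  # Dirichlet smoothing gives it positive mass
```

The full Bayesian treatment in the paper goes further (MCMC over the whole matrix), but the Dirichlet pseudo-counts already show why the expert-required transition can only be accommodated in the Bayesian setting.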

We consider a mathematical SIL model for the spread of a directly transmitted infectious disease in an age-structured population, taking into account the demographic process and the vertical transmission of the disease. First we establish the mathematical well-posedness of the time evolution problem by using the semigroup approach. Next we prove that the basic reproduction ratio R0 is given as the spectral radius of a positive operator, and that an endemic state exists if and only if R0 is greater than unity, while the disease-free equilibrium is locally asymptotically stable if R0 < 1. We also show that the endemic steady states bifurcate forwardly from the disease-free steady state as R0 crosses unity. Finally we examine the conditions for the local stability of the endemic steady states.
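In a finite-dimensional discretization, the threshold role of R0 can be reproduced by computing the spectral radius of a positive matrix standing in for the next-generation operator. A minimal sketch (the 2 × 2 matrix below is an invented illustration, not the operator of the paper):

```python
def spectral_radius(K, iters=100):
    """Power iteration on a positive matrix (pure Python)."""
    n = len(K)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)          # dominant-eigenvalue estimate
        v = [x / lam for x in w]              # renormalize the iterate
    return lam

# Hypothetical discretized next-generation matrix for two age groups.
K = [[1.2, 0.3],
     [0.4, 0.5]]

R0 = spectral_radius(K)
print(R0, "endemic state exists" if R0 > 1 else "disease-free state stable")
```

Since K is positive, Perron-Frobenius guarantees a real dominant eigenvalue, which is exactly what the power iteration recovers.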

In this work, we focus on the mathematical analysis of a chemostat model with enzymatic degradation of a substrate (organic matter) that can partly be in solid form [7]. The study of this 3-step model reduces to that of a lower-order sub-model, since some variables can be decoupled from the others. We study the existence and stability of the equilibrium points of the sub-model, considering monotonic growth rates and distinct dilution rates. In the classical chemostat model with monotonic kinetics, it is well known that a single equilibrium point attracts all solutions and that bistability never occurs [8]. In the present study, although only monotonic growth rates are considered, it is shown that the sub-model may exhibit bistability. The study of the 3-step model shows the existence of at most four positive equilibria, one of which is locally asymptotically stable; depending on the initial condition, the two species can coexist.
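For reference, the classical single-species chemostat with a monotonic (Monod) growth rate converges to its unique positive equilibrium whenever the break-even concentration lies below the input concentration. A minimal Euler simulation of that baseline behavior (parameter values are illustrative, with unit yield):

```python
def monod(s, m=2.0, K=1.0):
    """Monod growth rate: monotonic in the substrate concentration s."""
    return m * s / (K + s)

D, s_in = 0.5, 3.0          # dilution rate and input substrate concentration
s, x = 3.0, 0.1             # initial substrate and biomass
dt = 0.01
for _ in range(20000):      # explicit Euler up to t = 200
    mu = monod(s)
    s, x = s + dt * (D * (s_in - s) - mu * x), x + dt * ((mu - D) * x)

# Break-even concentration s* solves monod(s*) = D: s* = K*D/(m - D) = 1/3,
# and the mass balance gives x* = s_in - s* = 8/3 (unit yield).
print(s, x)
```

The contribution of the paper is precisely that the enzymatic-degradation sub-model departs from this picture: bistability can occur even with monotonic kinetics.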

The computation of determinants arises in many scientific applications, for example in the localization of the eigenvalues of a given matrix A in a domain of the complex plane. When a procedure based on the residue theorem is used, the integration process leads to the evaluation of the principal argument of the complex logarithm of the function g(z) = det((z + h)I - A)/det(zI - A), and a large number of determinants must be computed to ensure that the same branch of the complex logarithm is followed during the integration. In this paper, we present some efficient methods for computing the determinant of a large sparse and block-structured matrix. Tests conducted with randomly generated matrices show the efficiency and robustness of our methods.
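The branch-tracking idea can be illustrated with the argument principle: summing the principal arguments of successive determinant ratios along a closed contour counts the eigenvalues enclosed, provided the steps are small enough that each increment stays below π in modulus. A small pure-Python sketch on a 2 × 2 example (the matrix and contour are invented for illustration):

```python
import cmath
import math

# Example matrix with eigenvalues 1 and 5 (upper triangular).
A = [[1.0, 3.0],
     [0.0, 5.0]]

def det_zI_minus_A(z):
    # 2x2 determinant of zI - A, written out explicitly.
    return (z - A[0][0]) * (z - A[1][1]) - (-A[0][1]) * (-A[1][0])

# Circle of radius 3 around the origin encloses eigenvalue 1 only.
N = 200
pts = [3.0 * cmath.exp(2j * math.pi * k / N) for k in range(N + 1)]

# Each term is the principal argument of a determinant ratio; many small
# steps keep the evaluation on the same branch of the complex logarithm.
winding = sum(cmath.phase(det_zI_minus_A(pts[k + 1]) / det_zI_minus_A(pts[k]))
              for k in range(N)) / (2 * math.pi)

print(round(winding))  # number of eigenvalues inside the contour
```

With too few sample points, a phase increment can exceed π, the wrong branch is taken, and the count is wrong: this is why the procedure in the paper requires so many determinant evaluations.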

We present in this paper the formulation of a non-dissipative, arbitrarily high-order time-domain scheme for the elastodynamic equations. Our approach combines an arbitrarily high-order discontinuous Galerkin interpolation with centred fluxes in space and an arbitrarily high-order leapfrog scheme in time. Two-dimensional numerical results are presented for the schemes of order two to order four. In these simulations, we discuss the numerical stability and convergence of the schemes on the homogeneous eigenmode problem. We also show the ability of the proposed schemes to handle more complex propagation problems by simulating the Garvin test with an explosive source. The results show the high accuracy of the method, on both regular and irregular triangular meshes.
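The order-two time discretization reduces, in one dimension, to the classical leapfrog scheme for the scalar wave equation. The sketch below (a finite-difference stand-in, not the DG scheme of the paper) runs that scheme on a standing-wave eigenmode, the same kind of test problem used for the convergence study:

```python
import math

c, N = 1.0, 50
dx = 1.0 / N
lam = 0.5                       # CFL number c*dt/dx, below the stability limit
dt = lam * dx / c
steps = round(0.5 / dt)         # integrate to T = 0.5

x = [j * dx for j in range(N + 1)]
u_prev = [math.sin(math.pi * xj) for xj in x]       # u(x, 0), Dirichlet BCs
# Start-up step via Taylor expansion (u_t(x, 0) = 0 for this eigenmode).
u = [u_prev[j] + 0.5 * lam ** 2 *
     (u_prev[j + 1] - 2 * u_prev[j] + u_prev[j - 1])
     if 0 < j < N else 0.0 for j in range(N + 1)]

for _ in range(steps - 1):      # non-dissipative leapfrog updates
    u_next = [2 * u[j] - u_prev[j] + lam ** 2 *
              (u[j + 1] - 2 * u[j] + u[j - 1])
              if 0 < j < N else 0.0 for j in range(N + 1)]
    u_prev, u = u, u_next

t = steps * dt
exact = [math.sin(math.pi * xj) * math.cos(math.pi * c * t) for xj in x]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)
```

The error stays small because the scheme is second-order accurate and non-dissipative; only a slight numerical dispersion (phase error) remains.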

In this paper, we present a self-stabilizing asynchronous distributed clustering algorithm that builds non-overlapping k-hop clusters. Our approach does not require any initialization: it relies only on information from neighboring nodes, exchanged through periodic messages. Starting from an arbitrary configuration, the network converges to a stable state after a finite number of steps. First, we prove that stabilization is reached after at most n + 2 transitions and requires (Δu + 1) * log(2n + k + 3) bits per node, where Δu is the node's degree, n the number of network nodes, and k the maximum number of hops. Second, we evaluate the proposed algorithm using the OMNeT++ simulator.

In this paper, we present a learning model for XML document classification based on Bayesian networks. We then propose a model, called the coupled model, which simplifies the tree representation of the XML documents; we show that this approach improves the response time while preserving classification performance. We then study an extension of this generative model to a discriminative model through the Fisher kernel formalism. Finally, we apply a weighting to the structural components of the Fisher vector. We conclude by presenting the results obtained on the XML collection using the CBS and SVM methods.

Grid-based peer-to-peer architectures have been used either for storage and data sharing or for computing. So far, the solutions proposed for grid services are based on hierarchical topologies, which present a high degree of centralization. The main issues with this centralization are the unified management of resources and the difficulty of reacting rapidly to failures and faults that can affect grid users. In this paper, we propose an original specification, called P2P4GS, that enables self-managed services on a peer-to-peer grid. We design a self-adaptive solution for service deployment and invocation that takes the peer-to-peer services paradigm into account. Furthermore, deployment and invocation are completely delegated to the platform and carried out in a manner transparent to the end user. The specification is generic: it is tied neither to a particular peer-to-peer architecture nor to a service management protocol defined in advance. We also study the algorithmic complexity of the deployment and service localization primitives of P2P4GS by mapping them onto classical P2P topologies, namely the ring and the tree. The performances obtained on these topologies are satisfactory.

A property (of an object) is opaque to an observer when he or she cannot deduce the property from his or her set of observations. If each observer is attached to a given set of properties (the so-called secrets), then the system is said to be opaque if each secret is opaque to the corresponding observer. Opacity has been studied in the context of discrete event dynamic systems, where techniques from control theory were designed to enforce it. To the best of our knowledge, this paper is the first attempt to formalize the opacity of artifacts in data-centric workflow systems. We motivate this problem and give some assumptions that guarantee the decidability of opacity. Some techniques for enforcing opacity are indicated.
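In the finite case, checking opacity reduces to an inclusion test on observations: a secret is opaque iff every observation produced by a secret run is also produced by some non-secret run. A minimal sketch over explicitly enumerated runs (the runs and the observation map are invented for illustration; the decidability question in the paper concerns infinite-state artifact systems):

```python
def observe(run, observable):
    """Projection of a run onto the observer's observable events."""
    return tuple(e for e in run if e in observable)

def is_opaque(runs, secret_runs, observable):
    # Opaque iff no observation reveals that a secret run occurred:
    # every secret observation must also arise from a non-secret run.
    non_secret_obs = {observe(r, observable)
                      for r in runs if r not in secret_runs}
    return all(observe(r, observable) in non_secret_obs
               for r in secret_runs)

runs = [("a", "s", "b"), ("a", "b"), ("a", "s", "c")]
secret = [("a", "s", "b"), ("a", "s", "c")]
observable = {"a", "b", "c"}   # the secret event "s" is unobservable

print(is_opaque(runs, secret, observable))
```

Here the run ("a", "s", "c") is betrayed: its observation ("a", "c") has no non-secret counterpart, so the system is not opaque.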

Feature selection for classification is a very active research field in data mining and optimization. Its combinatorial nature requires the development of specific techniques (such as filters, wrappers, genetic algorithms, and so on) or hybrid approaches combining several optimization methods. In this context, support vector machine recursive feature elimination (SVM-RFE) stands out as one of the most effective methods. However, SVM-RFE is a greedy method that is not guaranteed to find the best feature combination for classification. To overcome this limitation, we propose an alternative approach that combines the SVM-RFE algorithm with local search operators drawn from operations research and artificial intelligence.
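The recursive elimination loop itself is simple: train a linear classifier, drop the feature with the smallest absolute weight, and repeat. A minimal sketch using a perceptron as a stand-in for the linear SVM (the toy data, like the classifier choice, is invented for illustration):

```python
def perceptron(X, y, features, epochs=20):
    """Train a simple linear classifier restricted to `features`."""
    w = {f: 0.0 for f in features}
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(w[f] * xi[f] for f in features)
            if yi * score <= 0:                 # mistake-driven update
                for f in features:
                    w[f] += yi * xi[f]
    return w

# Toy data: feature 0 determines the class; features 1 and 2 are noise.
X = [[ 2.0,  0.5,  0.1], [ 1.0, -0.5, -0.2], [ 1.5,  0.3,  0.2],
     [-2.0,  0.4, -0.1], [-1.0, -0.6,  0.15], [-1.5,  0.2, -0.05]]
y = [1, 1, 1, -1, -1, -1]

features = [0, 1, 2]
while len(features) > 1:                        # recursive feature elimination
    w = perceptron(X, y, features)
    worst = min(features, key=lambda f: abs(w[f]))   # smallest |weight|
    features.remove(worst)

print(features)  # the informative feature survives
```

The greediness is visible in the loop: each removal is final, which is exactly the limitation the local search operators of the paper are meant to address.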

One of the central challenges in computer security is distinguishing normal from potentially harmful behavior. For decades, developers have protected their systems using classical methods. However, the growing size and complexity of the computer systems and networks to be protected call for automated and adaptive defensive tools. Promising solutions are emerging from biologically inspired computing, and in particular from the immunological approach. In this paper, we propose two artificial immune systems for intrusion detection, evaluated on the KDD Cup'99 database. The first is based on danger theory, using the dendritic cell algorithm, and the second on negative selection. The results obtained are promising.
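The negative-selection principle can be sketched in a few lines: candidate detectors are generated at random and censored against a set of "self" samples, so the surviving detectors can only match non-self (anomalous) inputs. A toy illustration on bit strings (the data and matching rule are invented, not taken from KDD Cup'99):

```python
import random

L, R = 8, 2                       # string length and matching radius

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def matches(detector, sample):
    return hamming(detector, sample) <= R

# "Self" samples: normal behavior patterns (toy bit strings).
self_set = ["00000000", "00000001", "00010000", "00000100"]

random.seed(0)
detectors = []
for _ in range(200):              # generate-and-censor loop
    cand = "".join(random.choice("01") for _ in range(L))
    if not any(matches(cand, s) for s in self_set):
        detectors.append(cand)    # survives negative selection

def is_anomalous(sample):
    return any(matches(d, sample) for d in detectors)

print(is_anomalous("11111111"))   # far from self: flagged as non-self
print(is_anomalous("00000001"))   # a self sample: never flagged
```

By construction, self samples can never trigger an alarm; detection of non-self depends on how well the random detectors cover the complement of the self region.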