Function approximation arises in many branches of applied mathematics and computer science, in particular in numerical analysis, in finite element theory and, more recently, in the data sciences. Among the most common approximations are polynomial, Chebyshev and Fourier series approximations. In this work we establish some approximations of a continuous function by a series of activation functions. First, we deal with the one- and two-dimensional cases. Then, we generalize the approximation to the multidimensional case. Examples of applications of these approximations include interpolation, numerical integration, finite elements and neural networks. Finally, we present some numerical results for the examples above.
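To illustrate the flavor of such approximations, a continuous function on [0, 1] can be represented by a telescoping sum of steep sigmoids. The construction below (grid size n, steepness k, the telescoping form itself) is an illustrative sketch, not the scheme established in the paper:

```python
import math

def sigmoid(t):
    # Numerically stable logistic activation
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def approx(f, x, n=200, k=500.0):
    # Approximate f on [0, 1] by a telescoping sum of steep sigmoids:
    # f(x) ~ f(0) + sum_i [f(x_i) - f(x_{i-1})] * sigmoid(k * (x - x_i + h/2))
    h = 1.0 / n
    s = f(0.0)
    for i in range(1, n + 1):
        xi = i * h
        s += (f(xi) - f(xi - h)) * sigmoid(k * (x - xi + h / 2))
    return s
```

As k grows, each sigmoid approaches a step function and the sum approaches a piecewise-constant interpolant of f, so the error is of order 1/n for smooth f.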
This article, which falls within the general framework of mathematics applied to economics, presents a decision-making model under total ignorance. Such an environment is characterized by the absence of a probability distribution over the states of nature that would allow good forecasts or anticipations. Based primarily on the Choquet integral, this model aggregates the different states of nature in order to make a better decision. The Choquet integral is a natural choice given the complexity of the environment and its relevance for aggregating interactive or conflicting criteria. The present model combines the Schmeidler model with the Brice Mayag algorithm for determining a 2-additive Choquet capacity. It fits into the framework of subjective models and provides an appropriate response to the Ellsberg paradox.
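For reference, the discrete Choquet integral sorts the scores increasingly and weights each increment by the capacity of the criteria still "above" it. A minimal sketch of that standard definition (the toy criteria and capacity values below are illustrative assumptions, not the paper's data):

```python
def choquet(values, capacity):
    # Discrete Choquet integral: sort scores increasingly and accumulate
    # each increment weighted by the capacity of the remaining criteria.
    items = sorted(values.items(), key=lambda kv: kv[1])
    remaining = set(values)
    total, prev = 0.0, 0.0
    for criterion, score in items:
        total += (score - prev) * capacity[frozenset(remaining)]
        prev = score
        remaining.remove(criterion)
    return total
```

With two criteria scored 0.3 and 0.7 and a non-additive capacity, the result differs from any weighted mean, which is exactly what lets the integral capture interaction between criteria.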
We study a mathematical model for the demographic transition. It is a homogeneous differential system of degree one. There are two age groups and two fertility levels. Low fertility spreads by mimicry to adults with high fertility. When the mimicry coefficient increases, the system crosses two thresholds between which the population increases or decreases exponentially with a stable mixture of the two fertility rates. This partial demographic transition is reminiscent of the situation in some countries of sub-Saharan Africa.
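Exponential growth or decline with a stable mixture corresponds to the dominant eigenvalue and Perron eigenvector of a linear system x' = Ax. A minimal sketch for a 2x2 matrix (the matrix A below is a hypothetical illustration, not the paper's coefficients):

```python
import math

def dominant(A):
    # Largest real eigenvalue and normalized eigenvector of a 2x2 matrix
    # (assumes real eigenvalues, which holds for the matrix below).
    a, b = A[0]
    c, d = A[1]
    tr, det = a + d, a * d - b * c
    lam = (tr + math.sqrt(tr * tr - 4 * det)) / 2
    v = [b, lam - a] if b else [lam - d, c]
    s = abs(v[0]) + abs(v[1])
    return lam, [abs(v[0]) / s, abs(v[1]) / s]

# Illustrative two-group matrix: growth rate and stable mixture of the groups
growth, mix = dominant([[0.5, 0.2], [0.1, -0.3]])
```

The sign of `growth` decides exponential increase versus decrease, and `mix` gives the asymptotic proportions of the two groups.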
The aim of this work is to use a hybrid approach to extract competences from CVs. The extraction approach has two phases: a segmentation phase, which splits a CV into sections and extracts the terms representing the competences; and a prediction phase, which uses the previously extracted features to predict a set of competences that can be deduced from the CV even though the expert did not mention them explicitly. The main contributions of the work are twofold: the use of hierarchical clustering to segment a résumé into sections before extracting the competences; and the use of a multi-label learning model based on SVMs to predict, among a set of skills, those that a reader would deduce from the CV. Experiments carried out on a set of CVs collected from an internet source show an improvement of more than 10% in the identification of blocks compared to a state-of-the-art model. The multi-label competence prediction model finds the list of competences with a precision and a recall of about 90.5% and 92.3%, respectively.
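Multi-label prediction with one binary classifier per skill (one-vs-rest) can be sketched as follows; a simple perceptron stands in here for the SVMs used in the paper, and the feature vectors and skill labels are illustrative assumptions:

```python
def train_ovr(X, Y, labels, epochs=20):
    # One binary classifier per label (one-vs-rest); a perceptron
    # stands in for the SVM of the paper.
    models = {}
    for label in labels:
        w = [0.0] * (len(X[0]) + 1)  # bias + one weight per feature
        for _ in range(epochs):
            for x, ys in zip(X, Y):
                target = 1 if label in ys else -1
                score = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
                if target * score <= 0:  # misclassified: perceptron update
                    w[0] += target
                    for j, xi in enumerate(x):
                        w[j + 1] += target * xi
        models[label] = w
    return models

def predict(models, x):
    # A label is predicted whenever its classifier scores positive
    return {label for label, w in models.items()
            if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0}
```

Each CV can thus receive any subset of the skill set, which is what distinguishes multi-label learning from ordinary multi-class classification.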
This paper presents the application of Multi-Level Agent-Based Model technology through a Natural Model based Design in Context (NMDC) to describe and model a class of environmental problems. NMDC allows a trained domain expert to design a conceptual model for a concrete environmental problem. This model describes the underlying application domain in terms of environmental concepts and neither requires specific technical skills nor involves implementation details. We show how the associated TiC (Tool-in-Context), developed through NMDC, can help the domain expert describe the environmental problem in a semi-natural (specific) language. This description is the basis for the TiC to generate a simulation tool. On this basis, we transform the specific language into NetLogo agent-based code, thereby facilitating an early prototype application to be used by the domain expert. Finally, we apply this approach to explain and analyze the process of deforestation around the Laf Forest Reserve and discuss the prototype resulting from our approach.
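The translation step from a semi-natural sentence to agent-based code can be sketched as rule-driven rewriting; the two rules and the NetLogo-style snippets below are illustrative assumptions, not the paper's actual grammar:

```python
import re

# Toy rewriting rules: semi-natural pattern -> NetLogo-like code template
RULES = [
    (r"create (\d+) (\w+)", r"create-\2 \1"),
    (r"(\w+) move randomly", r"ask \1 [ rt random 360 fd 1 ]"),
]

def translate(sentence):
    # Return the code produced by the first rule matching the sentence
    for pattern, template in RULES:
        match = re.fullmatch(pattern, sentence.strip())
        if match:
            return match.expand(template)
    raise ValueError("no rule matches: " + sentence)
```

For instance, "create 50 farmers" would be rewritten as `create-farmers 50`; a real TiC grammar would of course cover far richer domain vocabulary.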
This paper is concerned with a topological asymptotic expansion for a parabolic operator. We consider the three-dimensional non-stationary Stokes system as a model problem and derive a sensitivity analysis with respect to the creation of a small Dirichlet geometric perturbation. The established asymptotic expansion is valid for a large class of shape functions. The proposed analysis is based on a preliminary estimate describing the velocity field perturbation caused by the presence of a small obstacle in the fluid flow domain. The obtained theoretical results are used to build a fast and accurate detection algorithm. Some numerical examples issued from a lake oxygenation problem show the efficiency of the proposed approach.
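Detection algorithms built on a topological asymptotic expansion typically evaluate the leading-order sensitivity on a grid of candidate locations and place the obstacle where it is most negative, i.e. where insertion most decreases the shape functional. A generic sketch of that loop (the quadratic sensitivity below is a stand-in, not the paper's expression):

```python
def detect(sensitivity, grid):
    # Return the grid point where the topological sensitivity is most
    # negative: the best candidate location for the small obstacle.
    return min(grid, key=sensitivity)

# Illustrative sensitivity with a known minimizer at (0.3, 0.7)
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
best = detect(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2 - 1.0, grid)
```

The speed of such algorithms comes from the expansion itself: one state and one adjoint solve suffice to evaluate the sensitivity everywhere, instead of one forward solve per candidate location.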
We study the probability of extinction of a population modelled by a linear birth-and-death process with several types in a periodic environment when the period is large compared to other time scales. This probability depends on the season and may present a sharp jump in relation to a "canard" in a slow-fast dynamical system. The point of discontinuity is determined precisely in an example with two types of individuals related to a vector-borne disease transmission model.
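As background, for a one-type linear birth-and-death process with constant birth rate and death rate, the extinction probability starting from a single individual is the classical min(1, death/birth); the periodic, multi-type setting studied here makes this quantity season-dependent. A minimal sketch of the constant-rate formula:

```python
def extinction_prob(birth, death):
    # One-type linear birth-and-death process with constant rates,
    # starting from a single individual: P(extinction) = min(1, death/birth).
    if birth <= death:
        return 1.0  # subcritical or critical: extinction is certain
    return death / birth
```

In the periodic case the rates become functions of the season, and the abstract's point is that the resulting extinction probability can jump sharply, a behavior tied to a "canard" of the associated slow-fast system.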
The paper shows how to take advantage of a possibly existing linear relationship in an optimization problem to address the issues of robust design and backward uncertainty propagation while keeping the computational effort as low as possible.
Graph algorithms have inherent characteristics, including data-driven computations and poor locality. These characteristics expose graph algorithms to several challenges, because most well-studied (parallel) abstractions and implementations are not suitable for them. In our previous work [21, 22, 24], we showed how to use some complex-network properties, including community structure and heterogeneity of node degrees, to improve performance through proper memory management (cn-order) and appropriate thread scheduling (comm-deg-scheduling). In recent work [23], Besta et al. proposed log(graph), a graph representation that outperforms existing graph compression algorithms. In this paper, we show that our graph numbering heuristic and our scheduling heuristics can be improved when combined with the log(graph) data structure. Experiments were made on multi-core machines. For example, on one node of a multi-core machine (Troll from Grid'5000), with PageRank executed on the LiveJournal dataset, combining our previously proposed heuristics with graph compression increases the reduction obtained with cn-order in cache references from 29.94% (without compression) to 39.56% (with compression), in cache misses from 37.87% to 51.90%, and hence in execution time from 18.93% to 28.66%.
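Locality-improving graph numbering assigns nearby ids to vertices that are accessed together, so traversals touch fewer cache lines. The BFS relabeling below is a generic illustration of that idea, not the cn-order heuristic itself:

```python
from collections import deque

def bfs_renumber(adj):
    # Relabel vertices in BFS order so that neighbors tend to receive
    # nearby ids, improving cache locality of subsequent traversals.
    order, seen = [], set()
    for source in adj:
        if source in seen:
            continue
        seen.add(source)
        queue = deque([source])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return {old: new for new, old in enumerate(order)}
```

A community-aware ordering goes further by keeping each community's vertices in a contiguous id range, which is what makes it combine naturally with compressed representations such as log(graph).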