<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/">
  <channel>
    <title>Revue Africaine de Recherche en Informatique et Mathématiques Appliquées - Latest Publications</title>
    <description>Latest articles</description>
    <image>
      <url>https://arima.episciences.org/img/episciences_sign_50x50.png</url>
      <title>episciences.org</title>
      <link>https://arima.episciences.org</link>
    </image>
    <pubDate>Mon, 09 Mar 2026 22:10:50 +0000</pubDate>
    <generator>episciences.org</generator>
    <link>https://arima.episciences.org</link>
    <author>Revue Africaine de Recherche en Informatique et Mathématiques Appliquées</author>
    <dc:creator>Revue Africaine de Recherche en Informatique et Mathématiques Appliquées</dc:creator>
    <atom:link rel="self" type="application/rss+xml" href="https://arima.episciences.org/rss/papers"/>
    <atom:link rel="hub" href="http://pubsubhubbub.appspot.com/"/>
    <item>
      <title>NoSQL databases: A survey</title>
      <description><![CDATA[NoSQL data stores have introduced a new way of designing database systems to meet the recent needs of applications and services operating in areas such as the World Wide Web, Big Data, and Data Analytics. They offer a means to store and access high volumes of partially structured data by enhancing the flexibility of the data model and integrating distributed architecture at their core, thus providing better properties of high data availability and low data latency. This paper reviews the various design approaches of NoSQL data stores, providing up-to-date information on their data models, request processing, scalability, storage management, data distribution modes, and use cases. It also addresses multi-model and cloud-oriented NoSQL stores and offers a comprehensive description of a wide range of NoSQL stores with the use of a rich taxonomy.]]></description>
      <pubDate>Tue, 16 Dec 2025 08:30:55 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13970</link>
      <guid>https://doi.org/10.46298/arima.13970</guid>
      <author>Kpekpassi, Digonaou</author>
      <author>Faye, David</author>
      <dc:creator>Kpekpassi, Digonaou</dc:creator>
      <dc:creator>Faye, David</dc:creator>
      <content:encoded><![CDATA[NoSQL data stores have introduced a new way of designing database systems to meet the recent needs of applications and services operating in areas such as the World Wide Web, Big Data, and Data Analytics. They offer a means to store and access high volumes of partially structured data by enhancing the flexibility of the data model and integrating distributed architecture at their core, thus providing better properties of high data availability and low data latency. This paper reviews the various design approaches of NoSQL data stores, providing up-to-date information on their data models, request processing, scalability, storage management, data distribution modes, and use cases. It also addresses multi-model and cloud-oriented NoSQL stores and offers a comprehensive description of a wide range of NoSQL stores with the use of a rich taxonomy.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Multi-target synthesis of logic controllers using a MDE approach</title>
      <description><![CDATA[GRAFCET is a powerful graphical modeling language for the specification of controllers in discrete event systems. It considers hierarchical structures as well as structural and semantic constraints. In this paper, we propose to use a GRAFCET specification model in a Model Driven Engineering (MDE) approach for multi-target synthesis of embedded logic control systems based on microcontrollers. In this approach, a GRAFCET metamodel is associated with a microcontroller metamodel which characterizes the microcontroller platform features to be considered when generating code. The GRAFCET metamodel includes the modeling of expressions to facilitate model verification and an easy interpretation of Grafcet events and time constraints. Transformation rules for the generation of C-programmable microcontroller code are then presented. As an application, we present a platform based on Eclipse EMF, the Object Constraint Language (OCL) and the Acceleo code generation engine.]]></description>
      <pubDate>Wed, 09 Apr 2025 03:49:56 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.14306</link>
      <guid>https://doi.org/10.46298/arima.14306</guid>
      <author>Nzebop Ndenoka, Gérard</author>
      <author>Tchuente, Maurice</author>
      <author>Simeu, Emmanuel</author>
      <author>Monthe, Valery</author>
      <dc:creator>Nzebop Ndenoka, Gérard</dc:creator>
      <dc:creator>Tchuente, Maurice</dc:creator>
      <dc:creator>Simeu, Emmanuel</dc:creator>
      <dc:creator>Monthe, Valery</dc:creator>
      <content:encoded><![CDATA[GRAFCET is a powerful graphical modeling language for the specification of controllers in discrete event systems. It considers hierarchical structures as well as structural and semantic constraints. In this paper, we propose to use a GRAFCET specification model in a Model Driven Engineering (MDE) approach for multi-target synthesis of embedded logic control systems based on microcontrollers. In this approach, a GRAFCET metamodel is associated with a microcontroller metamodel which characterizes the microcontroller platform features to be considered when generating code. The GRAFCET metamodel includes the modeling of expressions to facilitate model verification and an easy interpretation of Grafcet events and time constraints. Transformation rules for the generation of C-programmable microcontroller code are then presented. As an application, we present a platform based on Eclipse EMF, the Object Constraint Language (OCL) and the Acceleo code generation engine.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Time and content aware implicit social influence estimation to enhance trust-based recommender systems</title>
      <description><![CDATA[Nowadays, e-commerce, streaming, and social network platforms play an important role in our daily lives. However, the ever-increasing addition of items on these platforms (items on Amazon, videos on Netflix and YouTube, posts on Facebook and Instagram) makes it difficult for users to select items that interest them. The integration of recommender systems into these platforms aims to offer each user a small list of items that match their preferences. To improve the performance of these recommender systems, some works in the literature incorporate explicit or implicit trust between platform users through trust-based recommender systems. Many of these works are based on explicit trust, where each user designates those whom they trust on the platform; but this information is rare in most real-world platforms. Thus, other works propose to estimate the implicit trust that each user can grant to another. However, works that estimate implicit trust do not take into account the temporal dynamics of users' past following actions, and even less the fact that a user can influence another on one category of items and not on another. In this paper, we propose time and content aware strategies to estimate the social influence of one user on another. The resulting time and content aware implicit trust is integrated into trust-based recommender systems built on K-Nearest Neighbors (KNN) and graph-based techniques. Experiments on rating prediction with KNN and Top-N recommendation with the graph model show that time and content aware implicit trust improves the performance of the KNN according to the RMSE metric by 7% and 10%, and the performance of the graph model according to the NDCG@10 metric by 59% and 8%, respectively, on the Ciao and Epinions datasets.]]></description>
      <pubDate>Sun, 23 Mar 2025 19:29:29 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13328</link>
      <guid>https://doi.org/10.46298/arima.13328</guid>
      <author>Nzekon Nzeko'o, Armel Jacques</author>
      <author>Adamou, Hamza</author>
      <author>Messi Nguele, Thomas</author>
      <author>Betndam Tchamba, Bleriot Pagnaul</author>
      <dc:creator>Nzekon Nzeko'o, Armel Jacques</dc:creator>
      <dc:creator>Adamou, Hamza</dc:creator>
      <dc:creator>Messi Nguele, Thomas</dc:creator>
      <dc:creator>Betndam Tchamba, Bleriot Pagnaul</dc:creator>
      <content:encoded><![CDATA[Nowadays, e-commerce, streaming, and social network platforms play an important role in our daily lives. However, the ever-increasing addition of items on these platforms (items on Amazon, videos on Netflix and YouTube, posts on Facebook and Instagram) makes it difficult for users to select items that interest them. The integration of recommender systems into these platforms aims to offer each user a small list of items that match their preferences. To improve the performance of these recommender systems, some works in the literature incorporate explicit or implicit trust between platform users through trust-based recommender systems. Many of these works are based on explicit trust, where each user designates those whom they trust on the platform; but this information is rare in most real-world platforms. Thus, other works propose to estimate the implicit trust that each user can grant to another. However, works that estimate implicit trust do not take into account the temporal dynamics of users' past following actions, and even less the fact that a user can influence another on one category of items and not on another. In this paper, we propose time and content aware strategies to estimate the social influence of one user on another. The resulting time and content aware implicit trust is integrated into trust-based recommender systems built on K-Nearest Neighbors (KNN) and graph-based techniques. Experiments on rating prediction with KNN and Top-N recommendation with the graph model show that time and content aware implicit trust improves the performance of the KNN according to the RMSE metric by 7% and 10%, and the performance of the graph model according to the NDCG@10 metric by 59% and 8%, respectively, on the Ciao and Epinions datasets.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Clustering-based Graph Numbering using Execution Traces for Cache Misses Reduction in Graph Analysis Applications</title>
      <description><![CDATA[Social graph analysis is generally based on a local exploration of the underlying graph: the analysis of a node of the graph is often done after having analyzed nodes located in its vicinity. However, over time, networks are bound to grow with the addition of new members, which inevitably leads to the enlargement of the corresponding graphs. This poses a problem, because as the size of the graph increases, so does the execution time of graph analysis applications, due to the very large number of nodes that need to be processed. Some recent works address this problem by exploiting properties of social networks, such as the community structure, to renumber the nodes of the graph in order to reduce cache misses; reducing cache misses in an application reduces its execution time. In this paper, we argue that combining an existing graph ordering with a new numbering that exploits the analysis of execution traces can further reduce cache misses and hence execution time. The idea is to build a graph numbering from execution traces of graph analysis applications and then combine it with an existing graph numbering (such as cn-order). To build this new ordering, we define a new distance and then use it to analyse execution traces with the well-known clustering algorithms K-means (for kmeans-order) and hierarchical clustering (for cl-hier-order). Experiments on a user machine (dual-core) and four cores of a Grid'5000 node (Neowise) show that this combination slightly improves on existing graph orderings (cn-order, numbaco, rabbit and gorder) in almost all cases (both cores of the dual-core machine, all four cores of Neowise), with the PageRank graph application and the astro-ph dataset. For example, on Neowise with one thread and the astro-ph dataset, the best performance is given by the combination kmeans-order_cn-order, which reduces cache misses by 42.59% (compared to 40.79% for the second-best, numbaco) and therefore execution time by 7.27% (compared to 6.89% for numbaco).]]></description>
      <pubDate>Sat, 08 Mar 2025 12:59:54 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13538</link>
      <guid>https://doi.org/10.46298/arima.13538</guid>
      <author>Mogo Wafo, Régis Audran</author>
      <author>Messi Nguelé, Thomas</author>
      <author>Nzekon Nzeko'o, Armel Jacques</author>
      <author>Youh Xaviera, Djam</author>
      <dc:creator>Mogo Wafo, Régis Audran</dc:creator>
      <dc:creator>Messi Nguelé, Thomas</dc:creator>
      <dc:creator>Nzekon Nzeko'o, Armel Jacques</dc:creator>
      <dc:creator>Youh Xaviera, Djam</dc:creator>
      <content:encoded><![CDATA[Social graph analysis is generally based on a local exploration of the underlying graph: the analysis of a node of the graph is often done after having analyzed nodes located in its vicinity. However, over time, networks are bound to grow with the addition of new members, which inevitably leads to the enlargement of the corresponding graphs. This poses a problem, because as the size of the graph increases, so does the execution time of graph analysis applications, due to the very large number of nodes that need to be processed. Some recent works address this problem by exploiting properties of social networks, such as the community structure, to renumber the nodes of the graph in order to reduce cache misses; reducing cache misses in an application reduces its execution time. In this paper, we argue that combining an existing graph ordering with a new numbering that exploits the analysis of execution traces can further reduce cache misses and hence execution time. The idea is to build a graph numbering from execution traces of graph analysis applications and then combine it with an existing graph numbering (such as cn-order). To build this new ordering, we define a new distance and then use it to analyse execution traces with the well-known clustering algorithms K-means (for kmeans-order) and hierarchical clustering (for cl-hier-order). Experiments on a user machine (dual-core) and four cores of a Grid'5000 node (Neowise) show that this combination slightly improves on existing graph orderings (cn-order, numbaco, rabbit and gorder) in almost all cases (both cores of the dual-core machine, all four cores of Neowise), with the PageRank graph application and the astro-ph dataset. For example, on Neowise with one thread and the astro-ph dataset, the best performance is given by the combination kmeans-order_cn-order, which reduces cache misses by 42.59% (compared to 40.79% for the second-best, numbaco) and therefore execution time by 7.27% (compared to 6.89% for numbaco).]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Parallelization of Recurrent Neural Network training algorithm with implicit aggregation on multi-core architectures</title>
      <description><![CDATA[Recent work has shown that deep learning algorithms are efficient for various tasks, whether in Natural Language Processing (NLP) or in Computer Vision (CV). One particularity of these algorithms is that their efficiency grows with the amount of data used. However, sequential execution of these algorithms on large amounts of data can take a very long time. In this paper, we consider the problem of training a Recurrent Neural Network (RNN) for the task of detecting hateful (aggressive) messages. We first compared the sequential execution of three variants of RNN and showed that Long Short-Term Memory (LSTM) provides better metric performance, but requires a longer execution time in comparison with the Gated Recurrent Unit (GRU) and the standard RNN. To obtain both good metric performance and reduced execution time, we proceeded to a parallel implementation of the training algorithms. We proposed a parallel algorithm based on an implicit aggregation strategy, in contrast to the existing approach, which is based on a strategy with an explicit aggregation function. We have shown that the convergence of this proposed parallel algorithm is close to that of the sequential algorithm. Experimental results on a 32-core machine at 1.5 GHz with 62 GB of RAM show that better results are obtained with the parallelization strategy that we propose. For example, with an LSTM on a dataset of more than 100k comments, we obtained an f-measure of 0.922 and a speedup of 7 with our approach, compared to an f-measure of 0.874 and a speedup of 5 with explicit aggregation between workers.]]></description>
      <pubDate>Tue, 11 Feb 2025 15:36:12 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13400</link>
      <guid>https://doi.org/10.46298/arima.13400</guid>
      <author>Messi Nguelé, Thomas</author>
      <author>Nzekon Nzeko'o, Armel Jacques</author>
      <author>Onana, Damase Donald</author>
      <dc:creator>Messi Nguelé, Thomas</dc:creator>
      <dc:creator>Nzekon Nzeko'o, Armel Jacques</dc:creator>
      <dc:creator>Onana, Damase Donald</dc:creator>
      <content:encoded><![CDATA[Recent work has shown that deep learning algorithms are efficient for various tasks, whether in Natural Language Processing (NLP) or in Computer Vision (CV). One particularity of these algorithms is that their efficiency grows with the amount of data used. However, sequential execution of these algorithms on large amounts of data can take a very long time. In this paper, we consider the problem of training a Recurrent Neural Network (RNN) for the task of detecting hateful (aggressive) messages. We first compared the sequential execution of three variants of RNN and showed that Long Short-Term Memory (LSTM) provides better metric performance, but requires a longer execution time in comparison with the Gated Recurrent Unit (GRU) and the standard RNN. To obtain both good metric performance and reduced execution time, we proceeded to a parallel implementation of the training algorithms. We proposed a parallel algorithm based on an implicit aggregation strategy, in contrast to the existing approach, which is based on a strategy with an explicit aggregation function. We have shown that the convergence of this proposed parallel algorithm is close to that of the sequential algorithm. Experimental results on a 32-core machine at 1.5 GHz with 62 GB of RAM show that better results are obtained with the parallelization strategy that we propose. For example, with an LSTM on a dataset of more than 100k comments, we obtained an f-measure of 0.922 and a speedup of 7 with our approach, compared to an f-measure of 0.874 and a speedup of 5 with explicit aggregation between workers.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Self-supervised and multilingual learning applied to the Wolof, Swahili and Fongbe</title>
      <description><![CDATA[Under-resourced languages encounter substantial obstacles in speech recognition owing to the scarcity of resources and limited data availability, which impedes their development and widespread adoption. This paper presents a representation learning model that leverages existing frameworks based on self-supervised learning techniques, specifically Contrastive Predictive Coding (CPC), wav2vec, and a bidirectional variant of CPC, by integrating them with multilingual learning approaches. We apply this model to three African languages: Wolof, Swahili, and Fongbe. Our evaluation of the resulting representations in a downstream task, automatic speech recognition, utilizing an architecture analogous to DeepSpeech, reveals the model's capacity to discern language-specific linguistic features. The results demonstrate promising performance, achieving Word Error Rates (WER) of 61% for Fongbe, 72% for Wolof, and 88% for Swahili. These findings underscore the potential of our approach in advancing speech recognition capabilities for under-resourced languages, particularly within the African linguistic landscape.]]></description>
      <pubDate>Tue, 11 Feb 2025 10:20:50 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13416</link>
      <guid>https://doi.org/10.46298/arima.13416</guid>
      <author>Djionang Pindoh, Prestilien</author>
      <author>Melatagia Yonta, Paulin</author>
      <dc:creator>Djionang Pindoh, Prestilien</dc:creator>
      <dc:creator>Melatagia Yonta, Paulin</dc:creator>
      <content:encoded><![CDATA[Under-resourced languages encounter substantial obstacles in speech recognition owing to the scarcity of resources and limited data availability, which impedes their development and widespread adoption. This paper presents a representation learning model that leverages existing frameworks based on self-supervised learning techniques, specifically Contrastive Predictive Coding (CPC), wav2vec, and a bidirectional variant of CPC, by integrating them with multilingual learning approaches. We apply this model to three African languages: Wolof, Swahili, and Fongbe. Our evaluation of the resulting representations in a downstream task, automatic speech recognition, utilizing an architecture analogous to DeepSpeech, reveals the model's capacity to discern language-specific linguistic features. The results demonstrate promising performance, achieving Word Error Rates (WER) of 61% for Fongbe, 72% for Wolof, and 88% for Swahili. These findings underscore the potential of our approach in advancing speech recognition capabilities for under-resourced languages, particularly within the African linguistic landscape.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Mediev'Enl - A domain ontology for cultural heritage items: the case of the medieval illuminations of the Duke of Burgundy</title>
      <description><![CDATA[In the Middle Ages, some illuminations were intended for the elites of society and served as a means of communication for them to extend their social influence and to represent their social environments. They constitute an information system based on symbolic components linked together by semantic and influential relationships, whose structure is close to models representing social relationships and networks. Today, understanding these illuminations and extracting their implicit messages, expressed through the combination of metaphorical graphic elements, is a difficult task reserved for experts. To help the latter and to address the semantic heterogeneity of illuminations, this article explores the synergy between knowledge representation techniques and the analysis of medieval documents to build a knowledge model describing these medieval paintings. It proposes a formal ontology composed of items describing the explicit and visible knowledge of medieval illuminations and others expressing their implicit messages. The illuminations considered are among those ordered by or linked to the Duke of Burgundy, Philippe le Bon.]]></description>
      <pubDate>Wed, 29 Jan 2025 11:19:58 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.14035</link>
      <guid>https://doi.org/10.46298/arima.14035</guid>
      <author>Diarra, Djibril</author>
      <author>Clouzot, Martine</author>
      <author>Nicolle, Christophe</author>
      <dc:creator>Diarra, Djibril</dc:creator>
      <dc:creator>Clouzot, Martine</dc:creator>
      <dc:creator>Nicolle, Christophe</dc:creator>
      <content:encoded><![CDATA[In the Middle Ages, some illuminations were intended for the elites of society and served as a means of communication for them to extend their social influence and to represent their social environments. They constitute an information system based on symbolic components linked together by semantic and influential relationships, whose structure is close to models representing social relationships and networks. Today, understanding these illuminations and extracting their implicit messages, expressed through the combination of metaphorical graphic elements, is a difficult task reserved for experts. To help the latter and to address the semantic heterogeneity of illuminations, this article explores the synergy between knowledge representation techniques and the analysis of medieval documents to build a knowledge model describing these medieval paintings. It proposes a formal ontology composed of items describing the explicit and visible knowledge of medieval illuminations and others expressing their implicit messages. The illuminations considered are among those ordered by or linked to the Duke of Burgundy, Philippe le Bon.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A New Hybrid Algorithm Based on Ant Colony Optimization and Recurrent Neural Networks with Attention Mechanism for Solving the Traveling Salesman Problem</title>
      <description><![CDATA[In this paper, we propose a hybrid approach for solving the symmetric traveling salesman problem. The proposed approach combines the ant colony optimization algorithm (ACO) with neural networks based on the attention mechanism. The idea is to use the predictive capacity of neural networks to guide the behaviour of ants in choosing the next cities to visit, and to use the resulting predictions to update the pheromone matrix, thereby improving the quality of the solutions obtained. Concretely, attention is focused on the most promising cities by taking into account both distance and pheromone information: the attention mechanism makes it possible to assign weights to each city according to its degree of relevance. These weights are then used to predict the next cities to visit from each city. Experimental results on TSP instances from the TSPLIB library demonstrate that this hybrid approach outperforms the classic ACO.]]></description>
      <pubDate>Tue, 28 Jan 2025 08:48:02 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13340</link>
      <guid>https://doi.org/10.46298/arima.13340</guid>
      <author>Soh, Mathurin</author>
      <author>Nguetoum Likeufack, Anderson</author>
      <dc:creator>Soh, Mathurin</dc:creator>
      <dc:creator>Nguetoum Likeufack, Anderson</dc:creator>
      <content:encoded><![CDATA[In this paper, we propose a hybrid approach for solving the symmetric traveling salesman problem. The proposed approach combines the ant colony optimization algorithm (ACO) with neural networks based on the attention mechanism. The idea is to use the predictive capacity of neural networks to guide the behaviour of ants in choosing the next cities to visit, and to use the resulting predictions to update the pheromone matrix, thereby improving the quality of the solutions obtained. Concretely, attention is focused on the most promising cities by taking into account both distance and pheromone information: the attention mechanism makes it possible to assign weights to each city according to its degree of relevance. These weights are then used to predict the next cities to visit from each city. Experimental results on TSP instances from the TSPLIB library demonstrate that this hybrid approach outperforms the classic ACO.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analysis of COVID-19 Coughs: From the Mildest to the Most Severe Form, a Realistic Classification Using Deep Learning</title>
      <description><![CDATA[Cough is the most common symptom of lung disease. COVID-19, a respiratory illness, has caused over 700 million positive cases and 7 million deaths worldwide. An effective, affordable, and widely available diagnostic tool is crucial in combating lung disease and the COVID-19 pandemic. Deep learning and machine learning algorithms could be used to analyze the cough sounds of infected patients and make predictions. Our research lab and the COUGHVID research lab provide the cough data. This diagnostic approach can distinguish the cough sounds of COVID-19 patients from those of people suffering from other ailments as well as healthy people, using deep learning and feature extraction from Mel spectrograms. The model used is a variant of ConvNet. This ConvNet model can easily capture features in MFCC vectors and enables convolution parallelism, which increases processing speed. The ConvNet attains translational invariance in features through the sharing of weights between layers. During data acquisition for model training, it is important to use quiet environments to reduce errors in audio quality. The convolutional neural network architecture achieves an F1-score of 89%, an accuracy of 90.33% and a sensitivity of 87.3%. This system has the potential to significantly impact society by reducing virus transmission, expediting patient treatment, and freeing up hospital resources. Early detection of COVID-19 can prevent disease progression and enhance screening effectiveness.]]></description>
      <pubDate>Fri, 17 Jan 2025 08:46:09 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13343</link>
      <guid>https://doi.org/10.46298/arima.13343</guid>
      <author>Moffo, Fabien, Mouomene</author>
      <author>Noumsi Woguia, Auguste, Vigny</author>
      <author>Mvogo Ngono, Joseph</author>
      <author>Bowong, Samuel</author>
      <dc:creator>Moffo, Fabien, Mouomene</dc:creator>
      <dc:creator>Noumsi Woguia, Auguste, Vigny</dc:creator>
      <dc:creator>Mvogo Ngono, Joseph</dc:creator>
      <dc:creator>Bowong, Samuel</dc:creator>
      <content:encoded><![CDATA[Cough is the most common symptom of lung disease. COVID-19, a respiratory illness, has caused over 700 million positive cases and 7 million deaths worldwide. An effective, affordable, and widely available diagnostic tool is crucial in combating lung disease and the COVID-19 pandemic. Deep learning and machine learning algorithms could be used to analyze the cough sounds of infected patients and make predictions. Our research lab and the COUGHVID research lab provide the cough data. This diagnostic approach can distinguish the cough sounds of COVID-19 patients from those of people suffering from other ailments as well as healthy people, using deep learning and feature extraction from Mel spectrograms. The model used is a variant of ConvNet. This ConvNet model can easily capture features in MFCC vectors and enables convolution parallelism, which increases processing speed. The ConvNet attains translational invariance in features through the sharing of weights between layers. During data acquisition for model training, it is important to use quiet environments to reduce errors in audio quality. The convolutional neural network architecture achieves an F1-score of 89%, an accuracy of 90.33% and a sensitivity of 87.3%. This system has the potential to significantly impact society by reducing virus transmission, expediting patient treatment, and freeing up hospital resources. Early detection of COVID-19 can prevent disease progression and enhance screening effectiveness.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Two high capacity text steganography schemes based on color coding</title>
      <description><![CDATA[Text steganography is a mechanism for hiding a secret text message inside another text used as a covering message. In this paper, we propose a text steganographic scheme based on color coding, comprising two different methods: the first based on permutations, and the second based on numeration systems. Given a secret message and a cover text, the proposed schemes embed the secret message in the cover text by coloring it. The stego-text is then sent to the receiver by mail. Experimental results show that our models achieve a better hiding process in terms of hiding capacity compared to the scheme of Aruna Malik et al., on which our idea is based.]]></description>
      <pubDate>Mon, 28 Oct 2024 07:45:19 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13273</link>
      <guid>https://doi.org/10.46298/arima.13273</guid>
      <author>Karnel Sadie, Juvet</author>
      <author>Moyou Metcheka, Leonel</author>
      <author>Ndoundam, René</author>
      <dc:creator>Karnel Sadie, Juvet</dc:creator>
      <dc:creator>Moyou Metcheka, Leonel</dc:creator>
      <dc:creator>Ndoundam, René</dc:creator>
      <content:encoded><![CDATA[Text steganography is a mechanism for hiding a secret text message inside another text used as a cover message. In this paper, we propose a text steganographic scheme based on color coding. It includes two different methods: the first based on permutations, and the second based on numeration systems. Given a secret message and a cover text, the proposed schemes embed the secret message in the cover text by coloring it. The stego-text is then sent to the receiver by mail. Experimental results show that our models achieve a higher hiding capacity than the scheme of Aruna Malik et al. on which our idea is based.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Application of the multilingual acoustic representation model XLSR for the transcription of Ewondo</title>
      <description><![CDATA[Recently popularized self-supervised models appear as a solution to the problem of low data availability via parsimonious transfer learning. We investigate the effectiveness of these multilingual acoustic models, in this case wav2vec 2.0 XLSR-53 and wav2vec 2.0 XLSR-128, for the transcription task of the Ewondo language (spoken in Cameroon). The experiments were conducted on 11 minutes of speech constructed from 103 read sentences. Despite the strong generalization capacity of multilingual acoustic models, preliminary results show that the distance between the languages embedded in XLSR (English, French, Spanish, German, Mandarin, etc.) and Ewondo strongly impacts the performance of the transcription model. The highest performances obtained are around 69% WER and 28.1% CER. An analysis of these preliminary results is carried out and interpreted in order to propose effective ways of improvement.]]></description>
      <pubDate>Mon, 28 Oct 2024 07:34:59 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.13621</link>
      <guid>https://doi.org/10.46298/arima.13621</guid>
      <author>Yannick Yomie, Nzeuhang</author>
      <author>Paulin Melatagia, Yonta</author>
      <author>Benjamin, Lecouteux</author>
      <dc:creator>Yannick Yomie, Nzeuhang</dc:creator>
      <dc:creator>Paulin Melatagia, Yonta</dc:creator>
      <dc:creator>Benjamin, Lecouteux</dc:creator>
      <content:encoded><![CDATA[Recently popularized self-supervised models appear as a solution to the problem of low data availability via parsimonious transfer learning. We investigate the effectiveness of these multilingual acoustic models, in this case wav2vec 2.0 XLSR-53 and wav2vec 2.0 XLSR-128, for the transcription task of the Ewondo language (spoken in Cameroon). The experiments were conducted on 11 minutes of speech constructed from 103 read sentences. Despite the strong generalization capacity of multilingual acoustic models, preliminary results show that the distance between the languages embedded in XLSR (English, French, Spanish, German, Mandarin, etc.) and Ewondo strongly impacts the performance of the transcription model. The highest performances obtained are around 69% WER and 28.1% CER. An analysis of these preliminary results is carried out and interpreted in order to propose effective ways of improvement.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An experimental evaluation of choices of SSA forecasting parameters</title>
      <description><![CDATA[Six time series related to atmospheric phenomena are used as inputs for forecasting experiments with singular spectrum analysis (SSA). Existing methods for SSA parameter selection are compared through their forecasting accuracy relative to an optimal a posteriori selection and to a naive forecasting method. The comparison shows that the widespread practice of selecting longer windows often leads to poorer predictions. It also confirms that the choices of the window length and of the grouping are essential. With the mean error of rainfall forecasting below 1.5%, SSA appears as a viable alternative for horizons beyond two weeks.]]></description>
      <pubDate>Tue, 26 Mar 2024 18:38:20 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.9641</link>
      <guid>https://doi.org/10.46298/arima.9641</guid>
      <author>Knapik, Teodor</author>
      <author>Ratiarison, Adolphe</author>
      <author>Razafindralambo, Hasina</author>
      <dc:creator>Knapik, Teodor</dc:creator>
      <dc:creator>Ratiarison, Adolphe</dc:creator>
      <dc:creator>Razafindralambo, Hasina</dc:creator>
      <content:encoded><![CDATA[Six time series related to atmospheric phenomena are used as inputs for forecasting experiments with singular spectrum analysis (SSA). Existing methods for SSA parameter selection are compared through their forecasting accuracy relative to an optimal a posteriori selection and to a naive forecasting method. The comparison shows that the widespread practice of selecting longer windows often leads to poorer predictions. It also confirms that the choices of the window length and of the grouping are essential. With the mean error of rainfall forecasting below 1.5%, SSA appears as a viable alternative for horizons beyond two weeks.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Genetic Algorithms for Solving the Pigment Sequencing Problem</title>
      <description><![CDATA[Lot sizing is important in production planning. It consists of determining a production plan that meets the orders and other constraints while minimizing the production cost. Here, we consider a Discrete Lot Sizing and Scheduling Problem (DLSP), specifically the Pigment Sequencing Problem (PSP). We have developed a solution that uses Genetic Algorithms to tackle the PSP. Our approach introduces adaptive techniques for each step of the genetic algorithm, including initialization, selection, crossover, and mutation. We conducted a series of experiments to assess the performance of our approach across multiple trials using publicly available instances of the PSP. Our experimental results demonstrate that Genetic Algorithms are a practical and effective approach for solving the DLSP.]]></description>
      <pubDate>Mon, 25 Mar 2024 18:48:05 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.11382</link>
      <guid>https://doi.org/10.46298/arima.11382</guid>
      <author>Houndji, Vinasetan, Ratheil</author>
      <author>Gna, Tafsir</author>
      <dc:creator>Houndji, Vinasetan, Ratheil</dc:creator>
      <dc:creator>Gna, Tafsir</dc:creator>
      <content:encoded><![CDATA[Lot sizing is important in production planning. It consists of determining a production plan that meets the orders and other constraints while minimizing the production cost. Here, we consider a Discrete Lot Sizing and Scheduling Problem (DLSP), specifically the Pigment Sequencing Problem (PSP). We have developed a solution that uses Genetic Algorithms to tackle the PSP. Our approach introduces adaptive techniques for each step of the genetic algorithm, including initialization, selection, crossover, and mutation. We conducted a series of experiments to assess the performance of our approach across multiple trials using publicly available instances of the PSP. Our experimental results demonstrate that Genetic Algorithms are a practical and effective approach for solving the DLSP.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Extraction of association rules based on the classical measure of intensity of involvement: application to mathematics didactics</title>
      <description><![CDATA[This article proposes a method for extracting knowledge in the form of association rules using the classical measure of implication intensity. We then applied our method to data from mathematics didactics studies. The aim of the didactic study was to identify the relationships between students' difficulties and skills when demonstrating a mathematical proposition formulated in French. The results show that our methodology is effective in extracting interesting rules. In addition, our didactic analysis showed the dependency between understanding a mathematical statement in French, competence in translating it formally, and proving it.]]></description>
      <pubDate>Fri, 22 Mar 2024 09:01:38 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.12231</link>
      <guid>https://doi.org/10.46298/arima.12231</guid>
      <author>Andrianarivony, Fidy Heritiana</author>
      <author>Cortella, Anne</author>
      <author>Salone, Jean-Jacques</author>
      <author>Durand-Guerrier, Viviane</author>
      <author>Raherinirina, Angelo</author>
      <dc:creator>Andrianarivony, Fidy Heritiana</dc:creator>
      <dc:creator>Cortella, Anne</dc:creator>
      <dc:creator>Salone, Jean-Jacques</dc:creator>
      <dc:creator>Durand-Guerrier, Viviane</dc:creator>
      <dc:creator>Raherinirina, Angelo</dc:creator>
      <content:encoded><![CDATA[This article proposes a method for extracting knowledge in the form of association rules using the classical measure of implication intensity. We then applied our method to data from mathematics didactics studies. The aim of the didactic study was to identify the relationships between students' difficulties and skills when demonstrating a mathematical proposition formulated in French. The results show that our methodology is effective in extracting interesting rules. In addition, our didactic analysis showed the dependency between understanding a mathematical statement in French, competence in translating it formally, and proving it.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimal impulsive control of coffee berry borers in a berry age-structured epidemiological model</title>
      <description><![CDATA[The coffee berry borer (CBB) Hypothenemus hampei (Coleoptera: Scolytidae) is the most important insect pest affecting coffee production worldwide and generating huge economic losses. As most of its life cycle occurs inside the coffee berry, its control is extremely difficult. To tackle this issue, we solve an optimal control problem based on a berry age-structured dynamical model that describes the infestation dynamics of coffee berries by CBB during a cropping season. This problem consists in applying a bio-insecticide at discrete times in order to maximise the economic profit of healthy coffee berries, while minimising the CBB population for the next cropping season. We derive analytically the first-order necessary optimality conditions of the control problem. Numerical simulations are provided to illustrate the effectiveness of the optimal control strategy.]]></description>
      <pubDate>Wed, 13 Mar 2024 16:48:42 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.11338</link>
      <guid>https://doi.org/10.46298/arima.11338</guid>
      <author>Fotso Fotso, Yves</author>
      <author>Touzeau, Suzanne</author>
      <author>Tsanou, Berge</author>
      <author>Grognard, Frédéric</author>
      <author>Bowong, Samuel</author>
      <dc:creator>Fotso Fotso, Yves</dc:creator>
      <dc:creator>Touzeau, Suzanne</dc:creator>
      <dc:creator>Tsanou, Berge</dc:creator>
      <dc:creator>Grognard, Frédéric</dc:creator>
      <dc:creator>Bowong, Samuel</dc:creator>
      <content:encoded><![CDATA[The coffee berry borer (CBB) Hypothenemus hampei (Coleoptera: Scolytidae) is the most important insect pest affecting coffee production worldwide and generating huge economic losses. As most of its life cycle occurs inside the coffee berry, its control is extremely difficult. To tackle this issue, we solve an optimal control problem based on a berry age-structured dynamical model that describes the infestation dynamics of coffee berries by CBB during a cropping season. This problem consists in applying a bio-insecticide at discrete times in order to maximise the economic profit of healthy coffee berries, while minimising the CBB population for the next cropping season. We derive analytically the first-order necessary optimality conditions of the control problem. Numerical simulations are provided to illustrate the effectiveness of the optimal control strategy.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Comparative study of machine learning algorithms for face recognition</title>
      <description><![CDATA[Background: The fundamental need for authentication and identification of humans using their physiological, behavioral or biological characteristics continues to be applied extensively to secure localities, property, financial transactions, etc. Biometric systems based on facial characteristics continue to attract the attention of researchers and major public and private services. In the literature, many methods have been deployed by different authors. The best performance must be identified in order to recommend the most effective method. So, the main objective of this article is to make a comparative study of the different existing techniques. Methods: A biometric system is generally composed of four stages: acquisition of facial images, preprocessing, feature extraction and, finally, classification. In this work, the focus is on machine learning algorithms for classification. These algorithms are: Support Vector Machines (SVM), Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), Random Forests (RF), Logistic Regression (LR), Naive Bayes classifiers (NB) and deep learning techniques such as Convolutional Neural Networks (CNN). The comparison criterion is the average performance, calculated using three performance measures: recognition rate, confusion matrix, and the Area Under the Receiver Operating Characteristic (ROC) curve. Results: Based on this criterion, the performance comparison of the selected machine learning algorithms shows that CNN is the best, with an average performance of 100.00% on the ORL face database. However, on the YALE database, classical algorithms such as artificial neural networks obtained the best performances, the highest being a rate of 100%. Discussion: Deep learning techniques are very efficient in image classification, as proven by the results on the ORL database. However, their inefficiency on the YALE face database is due to the small size of this database, which is inappropriate for some deep learning algorithms. This weakness can be corrected by image augmentation techniques. These results are close to those of existing state-of-the-art methods. The authors achieved performances of 94.82%, 95.79%, 96.15%, 96.44%, 97.27%, 98.52% and 98.95% for the NB, KNN, RF, LR, ANN, SVM and CNN classifiers, respectively. Finally, after in-depth discussion, it is concluded that among all these approaches useful in face recognition, CNN is the best classification algorithm.]]></description>
      <pubDate>Thu, 07 Mar 2024 10:35:50 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.9291</link>
      <guid>https://doi.org/10.46298/arima.9291</guid>
      <author>Alagah Komlavi, Atsu</author>
      <author>Chaibou, Kadri</author>
      <author>Naroua, Harouna</author>
      <dc:creator>Alagah Komlavi, Atsu</dc:creator>
      <dc:creator>Chaibou, Kadri</dc:creator>
      <dc:creator>Naroua, Harouna</dc:creator>
      <content:encoded><![CDATA[Background: The fundamental need for authentication and identification of humans using their physiological, behavioral or biological characteristics continues to be applied extensively to secure localities, property, financial transactions, etc. Biometric systems based on facial characteristics continue to attract the attention of researchers and major public and private services. In the literature, many methods have been deployed by different authors. The best performance must be identified in order to recommend the most effective method. So, the main objective of this article is to make a comparative study of the different existing techniques. Methods: A biometric system is generally composed of four stages: acquisition of facial images, preprocessing, feature extraction and, finally, classification. In this work, the focus is on machine learning algorithms for classification. These algorithms are: Support Vector Machines (SVM), Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), Random Forests (RF), Logistic Regression (LR), Naive Bayes classifiers (NB) and deep learning techniques such as Convolutional Neural Networks (CNN). The comparison criterion is the average performance, calculated using three performance measures: recognition rate, confusion matrix, and the Area Under the Receiver Operating Characteristic (ROC) curve. Results: Based on this criterion, the performance comparison of the selected machine learning algorithms shows that CNN is the best, with an average performance of 100.00% on the ORL face database. However, on the YALE database, classical algorithms such as artificial neural networks obtained the best performances, the highest being a rate of 100%. Discussion: Deep learning techniques are very efficient in image classification, as proven by the results on the ORL database. However, their inefficiency on the YALE face database is due to the small size of this database, which is inappropriate for some deep learning algorithms. This weakness can be corrected by image augmentation techniques. These results are close to those of existing state-of-the-art methods. The authors achieved performances of 94.82%, 95.79%, 96.15%, 96.44%, 97.27%, 98.52% and 98.95% for the NB, KNN, RF, LR, ANN, SVM and CNN classifiers, respectively. Finally, after in-depth discussion, it is concluded that among all these approaches useful in face recognition, CNN is the best classification algorithm.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Quantitative stability estimate for the inverse coefficients problem in linear elasticity</title>
      <description><![CDATA[In this article we consider the inverse problem of reconstructing piecewise Lamé coefficients from boundary measurements. We reformulate the inverse problem into a minimization problem using a Kohn-Vogelius type functional. We study the stability of the parameters when the jump of the discontinuity is perturbed. Using tools of shape calculus, we give a quantitative stability result for the local optimal solution.]]></description>
      <pubDate>Wed, 10 Jan 2024 10:15:13 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.9346</link>
      <guid>https://doi.org/10.46298/arima.9346</guid>
      <author>Meftahi, H</author>
      <author>Rezgui, T</author>
      <dc:creator>Meftahi, H</dc:creator>
      <dc:creator>Rezgui, T</dc:creator>
      <content:encoded><![CDATA[In this article we consider the inverse problem of reconstructing piecewise Lamé coefficients from boundary measurements. We reformulate the inverse problem into a minimization problem using a Kohn-Vogelius type functional. We study the stability of the parameters when the jump of the discontinuity is perturbed. Using tools of shape calculus, we give a quantitative stability result for the local optimal solution.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The one step fixed-lag particle smoother as a strategy to improve the prediction step of particle filtering</title>
      <description><![CDATA[Sequential Monte Carlo methods have been a major breakthrough in the field of numerical signal processing for stochastic dynamical state-space systems with partial and noisy observations. However, these methods still present certain weaknesses. One of the most fundamental is the degeneracy of the filter due to the impoverishment of the particles: the prediction step allows the particles to explore the state space, but can lead to their impoverishment if this exploration is poorly conducted or when it conflicts with the following observation used in the evaluation of the likelihood of each particle. In this article, in order to improve this last step within the framework of the classic bootstrap particle filter, we propose a simple approximation of the one-step fixed-lag smoother. At each time iteration, we propose to perform additional simulations during the prediction step in order to improve the likelihood of the selected particles.]]></description>
      <pubDate>Thu, 14 Dec 2023 09:03:48 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.10784</link>
      <guid>https://doi.org/10.46298/arima.10784</guid>
      <author>Nyobe, Samuel</author>
      <author>Campillo, Fabien</author>
      <author>Moto, Serge</author>
      <author>Rossi, Vivien</author>
      <dc:creator>Nyobe, Samuel</dc:creator>
      <dc:creator>Campillo, Fabien</dc:creator>
      <dc:creator>Moto, Serge</dc:creator>
      <dc:creator>Rossi, Vivien</dc:creator>
      <content:encoded><![CDATA[Sequential Monte Carlo methods have been a major breakthrough in the field of numerical signal processing for stochastic dynamical state-space systems with partial and noisy observations. However, these methods still present certain weaknesses. One of the most fundamental is the degeneracy of the filter due to the impoverishment of the particles: the prediction step allows the particles to explore the state space, but can lead to their impoverishment if this exploration is poorly conducted or when it conflicts with the following observation used in the evaluation of the likelihood of each particle. In this article, in order to improve this last step within the framework of the classic bootstrap particle filter, we propose a simple approximation of the one-step fixed-lag smoother. At each time iteration, we propose to perform additional simulations during the prediction step in order to improve the likelihood of the selected particles.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Epidemic threshold : A new spectral and structural approach of prediction</title>
      <description><![CDATA[Epidemiological modelling and epidemic threshold analysis in networks are widely used for the control and prediction of infectious disease spread. The prediction of the epidemic threshold in networks is a challenge in epidemiology, as the contact network structure fundamentally influences the dynamics of the spread. In this paper, we design and experiment with a new general structural and spectral prediction approach for the epidemic threshold. It captures more of the full network structure, using the number of nodes, the spectral radius, and the energy of the graph. With data analytics and data visualization techniques, we run simulations on 31 networks of different types and topologies. The simulations show that the epidemic threshold values of the new structural prediction approach are qualitatively and quantitatively similar to those of the earlier, widely used MF, HMF and QMF benchmark approaches. The results show that the new approach matches the earlier ones, further captures the full network structure, and is also accurate. The new approach offers a new general structural and spectral framework for analysing spreading processes in a network. The results are of both fundamental and practical interest in improving the control and prediction of spreading processes in networks, and can be particularly significant in advising an effective epidemiological control policy.]]></description>
      <pubDate>Tue, 05 Dec 2023 18:46:34 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.11186</link>
      <guid>https://doi.org/10.46298/arima.11186</guid>
      <author>Kanyou, Claude</author>
      <author>Kouokam, Etienne</author>
      <author>Emvudu, Yves</author>
      <dc:creator>Kanyou, Claude</dc:creator>
      <dc:creator>Kouokam, Etienne</dc:creator>
      <dc:creator>Emvudu, Yves</dc:creator>
      <content:encoded><![CDATA[Epidemiological modelling and epidemic threshold analysis in networks are widely used for the control and prediction of infectious disease spread. The prediction of the epidemic threshold in networks is a challenge in epidemiology, as the contact network structure fundamentally influences the dynamics of the spread. In this paper, we design and experiment with a new general structural and spectral prediction approach for the epidemic threshold. It captures more of the full network structure, using the number of nodes, the spectral radius, and the energy of the graph. With data analytics and data visualization techniques, we run simulations on 31 networks of different types and topologies. The simulations show that the epidemic threshold values of the new structural prediction approach are qualitatively and quantitatively similar to those of the earlier, widely used MF, HMF and QMF benchmark approaches. The results show that the new approach matches the earlier ones, further captures the full network structure, and is also accurate. The new approach offers a new general structural and spectral framework for analysing spreading processes in a network. The results are of both fundamental and practical interest in improving the control and prediction of spreading processes in networks, and can be particularly significant in advising an effective epidemiological control policy.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Coarse-grained multicomputer parallel algorithm using the four-splitting technique for the minimum cost parenthesizing problem</title>
      <description><![CDATA[Dynamic programming is a technique widely used to solve several combinatorial optimization problems. A well-known example is the minimum cost parenthesizing problem (MPP), which is usually used to represent a class of non-serial polyadic dynamic-programming problems. These problems are characterized by a strong dependency between subproblems. This paper outlines a coarse-grained multicomputer parallel solution using the four-splitting technique to solve the MPP. It is a partitioning technique consisting of subdividing the dependency graph into subgraphs (or blocks) of variable size and splitting large-size blocks into four subblocks to avoid the communication overhead caused by a similar partitioning technique in the literature. Our solution consists in evaluating a block by computing and communicating each subblock of this block to reduce the latency time of processors, which accounts for most of the global communication time. It requires O(n^3/p) execution time with O(k * \sqrt{p}) communication rounds, where n is the input data size, p is the number of processors, and k is the number of times the size of blocks is subdivided.]]></description>
      <pubDate>Mon, 06 Nov 2023 14:17:11 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.11217</link>
      <guid>https://doi.org/10.46298/arima.11217</guid>
      <author>Lacmou Zeutouo, Jerry</author>
      <author>Kengne Tchendji, Vianney</author>
      <author>Myoupo, Jean-Frédéric</author>
      <dc:creator>Lacmou Zeutouo, Jerry</dc:creator>
      <dc:creator>Kengne Tchendji, Vianney</dc:creator>
      <dc:creator>Myoupo, Jean-Frédéric</dc:creator>
      <content:encoded><![CDATA[Dynamic programming is a technique widely used to solve several combinatorial optimization problems. A well-known example is the minimum cost parenthesizing problem (MPP), which is usually used to represent a class of non-serial polyadic dynamic-programming problems. These problems are characterized by a strong dependency between subproblems. This paper outlines a coarse-grained multicomputer parallel solution using the four-splitting technique to solve the MPP. It is a partitioning technique consisting of subdividing the dependency graph into subgraphs (or blocks) of variable size and splitting large-size blocks into four subblocks to avoid the communication overhead caused by a similar partitioning technique in the literature. Our solution consists in evaluating a block by computing and communicating each subblock of this block to reduce the latency time of processors, which accounts for most of the global communication time. It requires O(n^3/p) execution time with O(k * \sqrt{p}) communication rounds, where n is the input data size, p is the number of processors, and k is the number of times the size of blocks is subdivided.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Hybrid Algorithm Based on Multi-colony Ant Optimization and Lin-Kernighan for solving the Traveling Salesman Problem</title>
      <description><![CDATA[In this article, a hybrid heuristic algorithm is proposed to solve the Traveling Salesman Problem (TSP). This algorithm combines two main metaheuristics: multi-colony ant colony optimization (MACO) and Lin-Kernighan-Helsgaun (LKH). The proposed hybrid approach (MACO-LKH) is a so-called insertion and relay hybridization. It brings two major innovations. The first consists in replacing the static visibility function used in the MACO heuristic by the dynamic visibility function used in LKH. This avoids long paths and favors the choice of the shortest paths more quickly, hence the term insertion hybridization. The second consists in replacing the pheromone update strategy of MACO by the dynamic λ-opt mechanisms of LKH in order to optimize the solutions generated and save execution time, hence the relay hybridization. The significance of the hybridization is examined and validated on benchmark instances, including small, medium, and large problem instances taken from the TSPlib library. The results are compared to four other state-of-the-art metaheuristic approaches, which are significantly outperformed by the proposed algorithm in terms of the quality of the solutions obtained and execution time.]]></description>
      <pubDate>Fri, 27 Oct 2023 22:51:08 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.8660</link>
      <guid>https://doi.org/10.46298/arima.8660</guid>
      <author>Soh, Mathurin</author>
      <author>Nguimeya Tsofack, Baudoin</author>
      <author>Tayou Djamegni, Clémentin</author>
      <dc:creator>Soh, Mathurin</dc:creator>
      <dc:creator>Nguimeya Tsofack, Baudoin</dc:creator>
      <dc:creator>Tayou Djamegni, Clémentin</dc:creator>
      <content:encoded><![CDATA[In this article, a hybrid heuristic algorithm is proposed to solve the Traveling Salesman Problem (TSP). This algorithm combines two main metaheuristics: multi-colony ant colony optimization (MACO) and Lin-Kernighan-Helsgaun (LKH). The proposed hybrid approach (MACO-LKH) is a so-called insertion and relay hybridization. It brings two major innovations. The first consists in replacing the static visibility function used in the MACO heuristic by the dynamic visibility function used in LKH. This avoids long paths and favors the choice of the shortest paths more quickly, hence the term insertion hybridization. The second consists in replacing the pheromone update strategy of MACO by the dynamic λ-opt mechanisms of LKH in order to optimize the solutions generated and save execution time, hence the relay hybridization. The significance of the hybridization is examined and validated on benchmark instances, including small, medium, and large problem instances taken from the TSPlib library. The results are compared to four other state-of-the-art metaheuristic approaches, which are significantly outperformed by the proposed algorithm in terms of the quality of the solutions obtained and execution time.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Non-Recursive LSAWfP Models are Structured Workflows</title>
      <description><![CDATA[Workflow languages are a key component of the Business Process Management (BPM) discipline: they are used to model business processes in order to facilitate their automatic management by BPM systems. There are numerous workflow languages addressing various issues (expressiveness, formal analysis, etc.). In the last decade, workflow languages based on context-free grammars (and therefore equipped with formal semantics), offering new perspectives on process modelling, have emerged: LSAWfP (a Language for the Specification of Administrative Workflow Processes) is one of them. LSAWfP has many advantages over other existing languages, but it is its expressiveness, little addressed in previous works, that is studied in this paper. Specifically, this paper demonstrates that any non-recursive LSAWfP model is a structured workflow. Since the majority of commercial BPM systems only implement structured workflows, this result establishes that, although LSAWfP remains largely theoretical, it is a language with commercial potential.]]></description>
      <pubDate>Wed, 05 Jul 2023 07:31:23 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.11183</link>
      <guid>https://doi.org/10.46298/arima.11183</guid>
      <author>Zekeng Ndadji, Milliam Maxime</author>
      <author>Nguedia Momo, Daniela Marionne</author>
      <author>Tonle Noumbo, Franck Bruno</author>
      <author>Tchoupé Tchendji, Maurice</author>
      <dc:creator>Zekeng Ndadji, Milliam Maxime</dc:creator>
      <dc:creator>Nguedia Momo, Daniela Marionne</dc:creator>
      <dc:creator>Tonle Noumbo, Franck Bruno</dc:creator>
      <dc:creator>Tchoupé Tchendji, Maurice</dc:creator>
      <content:encoded><![CDATA[Workflow languages are a key component of the Business Process Management (BPM) discipline: they are used to model business processes in order to facilitate their automatic management by BPM systems. There are numerous workflow languages addressing various issues (expressiveness, formal analysis, etc.). In the last decade, workflow languages based on context-free grammars (and therefore equipped with formal semantics), offering new perspectives on process modelling, have emerged: LSAWfP (a Language for the Specification of Administrative Workflow Processes) is one of them. LSAWfP has many advantages over other existing languages, but it is its expressiveness, little addressed in previous works, that is studied in this paper. Specifically, this paper demonstrates that any non-recursive LSAWfP model is a structured workflow. Since the majority of commercial BPM systems only implement structured workflows, this result establishes that, although LSAWfP remains largely theoretical, it is a language with commercial potential.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Combining Scrum and Model Driven Architecture for the development of an epidemiological surveillance software</title>
      <description><![CDATA[Epidemiological surveillance systems evolve over time, depending on the context and the data already collected. Consequently, the software used must evolve in order to meet new requirements. However, introducing new requirements to update the software takes time, is expensive, and may lead to software regression. Failures of software developed for epidemiological surveillance are often the result of an unsystematic transfer of business requirements to the implementation. This problem can be avoided if the system is built on a well-defined framework/architecture permitting the rapid development/update of the surveillance software. Empirical research shows, on the one hand, that model-driven techniques such as Model Driven Architecture (MDA) are more effective than code-centric approaches for the development and maintenance of software. On the other hand, agile processes such as Scrum are more effective than structured processes when requirements are subject to frequent change. Researchers have demonstrated that developers of medical software such as epidemiological surveillance software experience difficulties when following structured processes and code-centric approaches. The main goal of this empirical study was to apply the combination of Scrum and Model Driven Architecture to the development of an epidemiological surveillance system for tuberculosis. During this research, we found the approach easy to use and very useful when the MDA tool can generate the complete source code. It had positive effects on programmer productivity and satisfaction, cost-effectiveness, timelines, and customer satisfaction. In addition, we learned that, to involve non-IT experts in development and updates, the modeling user interface must be as simple as possible.]]></description>
      <pubDate>Tue, 04 Jul 2023 21:12:51 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.9873</link>
      <guid>https://doi.org/10.46298/arima.9873</guid>
      <author>Azanzi, Jiomekong</author>
      <author>Tapamo, Hippolyte</author>
      <author>Camara, Gaoussou</author>
      <dc:creator>Azanzi, Jiomekong</dc:creator>
      <dc:creator>Tapamo, Hippolyte</dc:creator>
      <dc:creator>Camara, Gaoussou</dc:creator>
      <content:encoded><![CDATA[Epidemiological surveillance systems evolve over time, depending on the context and the data already collected. Consequently, the software used must evolve in order to meet new requirements. However, introducing new requirements to update the software takes time, is expensive, and may lead to software regression. Failures of software developed for epidemiological surveillance are often the result of an unsystematic transfer of business requirements to the implementation. This problem can be avoided if the system is built on a well-defined framework/architecture permitting the rapid development/update of the surveillance software. Empirical research shows, on the one hand, that model-driven techniques such as Model Driven Architecture (MDA) are more effective than code-centric approaches for the development and maintenance of software. On the other hand, agile processes such as Scrum are more effective than structured processes when requirements are subject to frequent change. Researchers have demonstrated that developers of medical software such as epidemiological surveillance software experience difficulties when following structured processes and code-centric approaches. The main goal of this empirical study was to apply the combination of Scrum and Model Driven Architecture to the development of an epidemiological surveillance system for tuberculosis. During this research, we found the approach easy to use and very useful when the MDA tool can generate the complete source code. It had positive effects on programmer productivity and satisfaction, cost-effectiveness, timelines, and customer satisfaction. In addition, we learned that, to involve non-IT experts in development and updates, the modeling user interface must be as simple as possible.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Building a publish/subscribe information dissemination platform for hybrid mobile ad-hoc social networks over android devices</title>
      <description><![CDATA[Mobile ad-hoc social networks (MASNs) have been the subject of several research studies over the past two decades. They allow stations located in a small geographical area to be connected without the need for a network infrastructure, offering them the possibility to communicate anytime, anywhere. To communicate, stations regularly broadcast their interests in the form of keywords; stations with a high degree of similarity among their keywords can communicate with each other. However, the coverage of MASNs is limited to a small geographical area, due to the limited communication range of mobile ad-hoc network (MANET) stations. In this paper, we present an architecture and implementation of hybrid mobile ad-hoc social networks (MASNs coupled to infrastructure networks) of Android mobile devices for information dissemination. Stations can use the infrastructure network to communicate and fall back on the mobile ad-hoc network when the infrastructure is not available. Rather than communicating synchronously, as is the case in similar works in the literature, in our approach the stations communicate using a publish/subscribe protocol, which is perfectly suited to this type of network thanks to the decoupling in time and space it provides.]]></description>
      <pubDate>Sun, 04 Jun 2023 20:03:14 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.9136</link>
      <guid>https://doi.org/10.46298/arima.9136</guid>
      <author>Tchembe, Martin Xavier</author>
      <author>Tchoupe Tchendji, Maurice</author>
      <dc:creator>Tchembe, Martin Xavier</dc:creator>
      <dc:creator>Tchoupe Tchendji, Maurice</dc:creator>
      <content:encoded><![CDATA[Mobile ad-hoc social networks (MASNs) have been the subject of several research studies over the past two decades. They allow stations located in a small geographical area to be connected without the need for a network infrastructure, offering them the possibility to communicate anytime, anywhere. To communicate, stations regularly broadcast their interests in the form of keywords; stations with a high degree of similarity among their keywords can communicate with each other. However, the coverage of MASNs is limited to a small geographical area, due to the limited communication range of mobile ad-hoc network (MANET) stations. In this paper, we present an architecture and implementation of hybrid mobile ad-hoc social networks (MASNs coupled to infrastructure networks) of Android mobile devices for information dissemination. Stations can use the infrastructure network to communicate and fall back on the mobile ad-hoc network when the infrastructure is not available. Rather than communicating synchronously, as is the case in similar works in the literature, in our approach the stations communicate using a publish/subscribe protocol, which is perfectly suited to this type of network thanks to the decoupling in time and space it provides.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Towards a semantic platform for adaptive and collaborative e-learning</title>
      <description><![CDATA[In the world of e-learning, learning systems have sought to adapt the content offered to users according to their profile. However, collaboration between learners based on adaptation to the learner profile has not been sufficiently explored as an important aspect of the e-learning process. Such adaptation allows users with similar or very similar profiles to be grouped together to learn in harmony while maintaining motivation and commitment to learning, which should increase the success rate of learners. It also allows learning paths with good success rates to be reused for future recommendations to users with the same profile. In this paper, we focus on this aspect and propose a learning system that manages learning paths adapted to users' profiles and allows synchronous collaborative learning. After an overview of existing work in the field of adaptive e-learning, we propose an architecture for piloting this type of collaborative adaptive learning, based on ontologies and orchestrated by a multi-agent system. The latter is responsible for piloting learning paths, recommending paths in collaborative or non-collaborative mode through communication between the different agents involved, and managing events captured by the system.]]></description>
      <pubDate>Tue, 10 Jan 2023 18:36:20 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.8396</link>
      <guid>https://doi.org/10.46298/arima.8396</guid>
      <author>Sabeima, Massra</author>
      <author>Lamolle, Myriam</author>
      <author>Anghour, Azziz</author>
      <author>Nanne, Mohamedade, Farouk</author>
      <dc:creator>Sabeima, Massra</dc:creator>
      <dc:creator>Lamolle, Myriam</dc:creator>
      <dc:creator>Anghour, Azziz</dc:creator>
      <dc:creator>Nanne, Mohamedade, Farouk</dc:creator>
      <content:encoded><![CDATA[In the world of e-learning, learning systems have sought to adapt the content offered to users according to their profile. However, collaboration between learners based on adaptation to the learner profile has not been sufficiently explored as an important aspect of the e-learning process. Such adaptation allows users with similar or very similar profiles to be grouped together to learn in harmony while maintaining motivation and commitment to learning, which should increase the success rate of learners. It also allows learning paths with good success rates to be reused for future recommendations to users with the same profile. In this paper, we focus on this aspect and propose a learning system that manages learning paths adapted to users' profiles and allows synchronous collaborative learning. After an overview of existing work in the field of adaptive e-learning, we propose an architecture for piloting this type of collaborative adaptive learning, based on ontologies and orchestrated by a multi-agent system. The latter is responsible for piloting learning paths, recommending paths in collaborative or non-collaborative mode through communication between the different agents involved, and managing events captured by the system.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Improvement of the class visit in technical education: integration of a mediation tool</title>
      <description><![CDATA[The improvement of teacher practice is supported by the class visit. This paper presents the integration of a document-sharing system as a mediation tool in the class visit. This sharing system enhances the interaction between the teacher and the pedagogical supervisor when they work on the main pedagogical documents of the class visit. We conducted an experiment with Google Drive involving fifteen teachers and three pedagogical supervisors in technical education. The results of this experiment show an improvement in the educational quality of the class visit and better communication between teacher and pedagogical supervisor. It appears from this study that the teacher becomes less stressed and requests much more help from the pedagogical supervisor than usual.]]></description>
      <pubDate>Fri, 30 Sep 2022 10:15:10 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2653</link>
      <guid>https://doi.org/10.46298/arima.2653</guid>
      <author>Ouédraogo, Frédéric, T</author>
      <author>Sawadogo, Daouda</author>
      <author>Traoré, Solange, T</author>
      <author>Tindano, Olivier, T</author>
      <dc:creator>Ouédraogo, Frédéric, T</dc:creator>
      <dc:creator>Sawadogo, Daouda</dc:creator>
      <dc:creator>Traoré, Solange, T</dc:creator>
      <dc:creator>Tindano, Olivier, T</dc:creator>
      <content:encoded><![CDATA[The improvement of teacher practice is supported by the class visit. This paper presents the integration of a document-sharing system as a mediation tool in the class visit. This sharing system enhances the interaction between the teacher and the pedagogical supervisor when they work on the main pedagogical documents of the class visit. We conducted an experiment with Google Drive involving fifteen teachers and three pedagogical supervisors in technical education. The results of this experiment show an improvement in the educational quality of the class visit and better communication between teacher and pedagogical supervisor. It appears from this study that the teacher becomes less stressed and requests much more help from the pedagogical supervisor than usual.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Implantation by an implicit approach of an elastoplastic behaviour law in the finite element cast3m code</title>
      <description><![CDATA[This paper is dedicated to the implementation of a mechanical behaviour law in the finite element software Cast3M using an open-source code generator named MFront. To do so, an elastoplastic behaviour model has been chosen from existing laws in the literature. Following an implicit discretization, a material library corresponding to the isotropic and kinematic strain-hardening model is generated using MFront. The UMAT interface is used to integrate the library into Cast3M. The approach has been validated by comparing the numerical results obtained with the generated material library against those of the equivalent pre-existing library in Cast3M. Simulations of a tensile bar and a perforated plate show almost identical results.]]></description>
      <pubDate>Tue, 30 Aug 2022 07:06:34 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.7632</link>
      <guid>https://doi.org/10.46298/arima.7632</guid>
      <author>Fokam, Christian, Bopda</author>
      <author>Kanko, Franklin, Donald</author>
      <author>Djomi, Rolland</author>
      <author>Kenmeugne, Bienvenu</author>
      <author>Abraham, Kanmogne</author>
      <author>Ntamack, Guy, Edgar</author>
      <dc:creator>Fokam, Christian, Bopda</dc:creator>
      <dc:creator>Kanko, Franklin, Donald</dc:creator>
      <dc:creator>Djomi, Rolland</dc:creator>
      <dc:creator>Kenmeugne, Bienvenu</dc:creator>
      <dc:creator>Abraham, Kanmogne</dc:creator>
      <dc:creator>Ntamack, Guy, Edgar</dc:creator>
      <content:encoded><![CDATA[This paper is dedicated to the implementation of a mechanical behaviour law in the finite element software Cast3M using an open-source code generator named MFront. To do so, an elastoplastic behaviour model has been chosen from existing laws in the literature. Following an implicit discretization, a material library corresponding to the isotropic and kinematic strain-hardening model is generated using MFront. The UMAT interface is used to integrate the library into Cast3M. The approach has been validated by comparing the numerical results obtained with the generated material library against those of the equivalent pre-existing library in Cast3M. Simulations of a tensile bar and a perforated plate show almost identical results.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Accurate comparison of tree sets using HMM-based descriptor vectors</title>
      <description><![CDATA[Trees are among the most studied data structures, and several techniques have consequently been developed for comparing two trees belonging to the same category. Until the end of 2020, there was a serious lack of suitable metrics for comparing two weighted trees or two trees from different categories, and the problem of comparing two tree sets had also not been specifically addressed. These limitations were overcome in a paper published in 2021, where a customizable metric based on hidden Markov models was proposed for comparing two tree sets, each containing a mixture of trees belonging to various categories. Unfortunately, that metric does not allow the use of non-metric-dependent classifiers, which take descriptor vectors as inputs. This paper addresses this drawback by deriving a descriptor vector for each tree set using meta-information related to its corresponding models. The comparison between two tree sets is then realized by comparing their associated descriptor vectors. Classification experiments carried out on the databases FirstLast-L (FL), FirstLast-LW (FLW) and Stanford Sentiment Treebank (SSTB) showed best accuracies of 99.75%, 99.75% and 87.22%, respectively. These performances are respectively 40.75% and 20.52% better than the tree edit distance for FLW and SSTB. Additional clustering experiments exhibited 54.25%, 98.75% and 75.53% of correctly clustered instances for FL, FLW and SSTB; no clustering was performed in existing work.]]></description>
      <pubDate>Tue, 23 Aug 2022 07:01:06 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.9107</link>
      <guid>https://doi.org/10.46298/arima.9107</guid>
      <author>Iloga, Sylvain</author>
      <dc:creator>Iloga, Sylvain</dc:creator>
      <content:encoded><![CDATA[Trees are among the most studied data structures, and several techniques have consequently been developed for comparing two trees belonging to the same category. Until the end of 2020, there was a serious lack of suitable metrics for comparing two weighted trees or two trees from different categories, and the problem of comparing two tree sets had also not been specifically addressed. These limitations were overcome in a paper published in 2021, where a customizable metric based on hidden Markov models was proposed for comparing two tree sets, each containing a mixture of trees belonging to various categories. Unfortunately, that metric does not allow the use of non-metric-dependent classifiers, which take descriptor vectors as inputs. This paper addresses this drawback by deriving a descriptor vector for each tree set using meta-information related to its corresponding models. The comparison between two tree sets is then realized by comparing their associated descriptor vectors. Classification experiments carried out on the databases FirstLast-L (FL), FirstLast-LW (FLW) and Stanford Sentiment Treebank (SSTB) showed best accuracies of 99.75%, 99.75% and 87.22%, respectively. These performances are respectively 40.75% and 20.52% better than the tree edit distance for FLW and SSTB. Additional clustering experiments exhibited 54.25%, 98.75% and 75.53% of correctly clustered instances for FL, FLW and SSTB; no clustering was performed in existing work.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Assessing Maintenance of Arboviruses in Nature via Aedes Mosquitoes by Positive semigroup</title>
      <description><![CDATA[For more than a century, Aedes species have been considered a reservoir in the transmission of the dengue, yellow fever, Rift Valley fever and West Nile viruses. In this article, we study an infinite-dimensional system of ordinary differential equations that models arbovirus vertical transmission in the Aedes mosquito. Relying on positive semigroup theory, we show that the model is well-posed and compute a threshold parameter known as the basic reproduction ratio R0. This parameter describes "the average rate of secondary new cases of infected adult females from emergences in a breeding habitat that are produced by an infected adult female via transovarial transmission during its lifetime." In addition, we prove that the solution of the model goes to zero asymptotically if R0 < 1; otherwise it has the property of balanced exponential growth. Finally, a climate-environment effects index on model parameters and a diagram depicting the conditions of arbovirus persistence via Aedes in nature are derived.]]></description>
      <pubDate>Thu, 14 Jul 2022 15:08:14 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.8714</link>
      <guid>https://doi.org/10.46298/arima.8714</guid>
      <author>Ndiaye, Papa Ibrahima</author>
      <author>Ndongo, Mamadou, Sadio</author>
      <dc:creator>Ndiaye, Papa Ibrahima</dc:creator>
      <dc:creator>Ndongo, Mamadou, Sadio</dc:creator>
      <content:encoded><![CDATA[For more than a century, Aedes species have been considered a reservoir in the transmission of the dengue, yellow fever, Rift Valley fever and West Nile viruses. In this article, we study an infinite-dimensional system of ordinary differential equations that models arbovirus vertical transmission in the Aedes mosquito. Relying on positive semigroup theory, we show that the model is well-posed and compute a threshold parameter known as the basic reproduction ratio R0. This parameter describes "the average rate of secondary new cases of infected adult females from emergences in a breeding habitat that are produced by an infected adult female via transovarial transmission during its lifetime." In addition, we prove that the solution of the model goes to zero asymptotically if R0 < 1; otherwise it has the property of balanced exponential growth. Finally, a climate-environment effects index on model parameters and a diagram depicting the conditions of arbovirus persistence via Aedes in nature are derived.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Recommender system taking into account the availability forecast of product categories</title>
      <description><![CDATA[Recommending suitable products to users is crucial in e-commerce and streaming platforms. In some situations, a customer's preference for a product depends on the product features and the current temporal context, so it is wise to take these aspects into account in order to improve the quality of recommendations. In this paper, we propose recommender systems based on the availability prediction of product categories according to the temporal context. The ranking of the Top-N recommendations proposed by the initial recommender system is updated so as to favor products whose categories are predicted to be available. Furthermore, we propose an algorithm for choosing the appropriate temporal context to consider for the availability prediction of categories. Experiments are carried out on four datasets, comparing the results of three basic recommender systems with and without the integration of availability forecasts, according to the Hit-ratio, MAP and F1-score evaluation metrics. We note that in 75% of cases, integrating the availability prediction of categories is necessary to obtain the best performance, and the resulting gain can exceed 12% regardless of the dataset. All this confirms the relevance of our contribution.]]></description>
      <pubDate>Thu, 02 Jun 2022 08:48:06 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.9156</link>
      <guid>https://doi.org/10.46298/arima.9156</guid>
      <author>Nzekon Nzeko’o, Armel Jacques</author>
      <author>Adamou, Hamza</author>
      <author>Tchuente, Maurice</author>
      <dc:creator>Nzekon Nzeko’o, Armel Jacques</dc:creator>
      <dc:creator>Adamou, Hamza</dc:creator>
      <dc:creator>Tchuente, Maurice</dc:creator>
      <content:encoded><![CDATA[Recommending suitable products to users is crucial in e-commerce and streaming platforms. In some situations, a customer's preference for a product depends on the product features and the current temporal context, so it is wise to take these aspects into account in order to improve the quality of recommendations. In this paper, we propose recommender systems based on the availability prediction of product categories according to the temporal context. The ranking of the Top-N recommendations proposed by the initial recommender system is updated so as to favor products whose categories are predicted to be available. Furthermore, we propose an algorithm for choosing the appropriate temporal context to consider for the availability prediction of categories. Experiments are carried out on four datasets, comparing the results of three basic recommender systems with and without the integration of availability forecasts, according to the Hit-ratio, MAP and F1-score evaluation metrics. We note that in 75% of cases, integrating the availability prediction of categories is necessary to obtain the best performance, and the resulting gain can exceed 12% regardless of the dataset. All this confirms the relevance of our contribution.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Semantic Measure for Outlier Detection in Knowledge Graph</title>
      <description><![CDATA[Nowadays, there is a growing interest in data mining and information retrieval applications over Knowledge Graphs (KGs). However, KGs suffer from several data quality problems, such as accuracy, completeness, and different kinds of errors. In DBpedia, there are several issues related to data quality. Among them, we focus on the following: several entities are in classes they do not belong to. For instance, the query to get all the entities of the class Person also returns group entities, whereas these should be in the class Group. We call such entities “outliers.” The discovery of such outliers is crucial for class learning and understanding. This paper proposes a new outlier detection method that finds these entities. We define a semantic measure that favors the real entities of the class (inliers) with positive values while penalizing outliers with negative values, and we improve it with the discovery of frequent and rare itemsets. Our measure outperforms the FPOF (Frequent Pattern Outlier Factor) measure. Experiments show the efficiency of our approach.]]></description>
      <pubDate>Mon, 11 Apr 2022 16:41:50 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.8679</link>
      <guid>https://doi.org/10.46298/arima.8679</guid>
      <author>Diop, Bara</author>
      <author>Diop, Cheikh Talibouya</author>
      <author>Diop, Lamine</author>
      <dc:creator>Diop, Bara</dc:creator>
      <dc:creator>Diop, Cheikh Talibouya</dc:creator>
      <dc:creator>Diop, Lamine</dc:creator>
      <content:encoded><![CDATA[Nowadays, there is a growing interest in data mining and information retrieval applications over Knowledge Graphs (KGs). However, KGs suffer from several data quality problems, such as accuracy, completeness, and different kinds of errors. In DBpedia, there are several issues related to data quality. Among them, we focus on the following: several entities are in classes they do not belong to. For instance, the query to get all the entities of the class Person also returns group entities, whereas these should be in the class Group. We call such entities “outliers.” The discovery of such outliers is crucial for class learning and understanding. This paper proposes a new outlier detection method that finds these entities. We define a semantic measure that favors the real entities of the class (inliers) with positive values while penalizing outliers with negative values, and we improve it with the discovery of frequent and rare itemsets. Our measure outperforms the FPOF (Frequent Pattern Outlier Factor) measure. Experiments show the efficiency of our approach.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Applying Data Structure Succinctness to Graph Numbering For Efficient Graph Analysis</title>
      <description><![CDATA[Graph algorithms have inherent characteristics, including data-driven computations and poor locality. These characteristics expose graph algorithms to several challenges, because most well-studied (parallel) abstractions and implementations are not suitable for them. In previous work [17, 18, 20], the authors showed how to use complex-network properties, including community structure and heterogeneity of node degree, to improve performance through proper memory management (cn-order, for cache-miss reduction) and appropriate thread scheduling (comm-deg scheduling, to ensure load balancing). In recent work [19], Besta et al. proposed log(graph), a graph representation that outperforms existing graph compression algorithms. In this paper, we show that graph numbering heuristics and scheduling heuristics can be improved when they are combined with the log(graph) data structure. Experiments were made on multi-core machines. For example, on one node of a multi-core machine (Troll, from Grid’5000), with PageRank executing on the LiveJournal dataset, we showed that combining the existing heuristic with graph compression improves cn-order's reduction in cache references from 29.94% (without compression) to 39.56% (with compression), in cache misses from 37.87% to 51.90%, and hence in execution time from 18.93% to 28.66%.]]></description>
      <pubDate>Thu, 17 Feb 2022 12:50:11 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.8349</link>
      <guid>https://doi.org/10.46298/arima.8349</guid>
      <author>Messi Nguélé, Thomas</author>
      <author>Méhaut, Jean-François</author>
      <dc:creator>Messi Nguélé, Thomas</dc:creator>
      <dc:creator>Méhaut, Jean-François</dc:creator>
      <content:encoded><![CDATA[Graph algorithms have inherent characteristics, including data-driven computations and poor locality. These characteristics expose graph algorithms to several challenges, because most well-studied (parallel) abstractions and implementations are not suitable for them. In previous work [17, 18, 20], the authors showed how to use complex-network properties, including community structure and heterogeneity of node degree, to improve performance through proper memory management (cn-order, for cache-miss reduction) and appropriate thread scheduling (comm-deg scheduling, to ensure load balancing). In recent work [19], Besta et al. proposed log(graph), a graph representation that outperforms existing graph compression algorithms. In this paper, we show that graph numbering heuristics and scheduling heuristics can be improved when they are combined with the log(graph) data structure. Experiments were carried out on multi-core machines. For example, on one node of a multi-core machine (Troll, from Grid’5000), with PageRank executing on the LiveJournal dataset, combining the existing heuristics with graph compression increases the reduction obtained with cn-order in cache references from 29.94% (without compression) to 39.56% (with compression), in cache misses from 37.87% to 51.90%, and hence in execution time from 18.93% to 28.66%.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Choquet utility depending on the state of nature</title>
      <description><![CDATA[This article, which is part of the general framework of mathematics applied to economics, presents a decision-making model under total ignorance. Such an environment is characterized by the absence of a probability distribution over the states of nature that would allow good forecasts or anticipations. Based primarily on the Choquet integral, this model makes it possible to aggregate the different states of nature in order to make a better decision. The Choquet integral is the natural choice given the complexity of the environment and its relevance for aggregating interactive or conflicting criteria. The present model is a combination of the Schmeidler model and the Brice Mayag algorithm for the determination of the 2-additive Choquet capacity. It fits into the framework of subjective models and provides an appropriate response to the Ellsberg paradox.]]></description>
      <pubDate>Fri, 17 Dec 2021 12:58:42 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.5898</link>
      <guid>https://doi.org/10.46298/arima.5898</guid>
      <author>Abdou, Issouf</author>
      <author>Andriamanantena, Philibert</author>
      <author>Ravelomanana, Mamy Raoul</author>
      <author>Rakotozafy, Rivo</author>
      <dc:creator>Abdou, Issouf</dc:creator>
      <dc:creator>Andriamanantena, Philibert</dc:creator>
      <dc:creator>Ravelomanana, Mamy Raoul</dc:creator>
      <dc:creator>Rakotozafy, Rivo</dc:creator>
      <content:encoded><![CDATA[This article, which is part of the general framework of mathematics applied to economics, presents a decision-making model under total ignorance. Such an environment is characterized by the absence of a probability distribution over the states of nature that would allow good forecasts or anticipations. Based primarily on the Choquet integral, this model makes it possible to aggregate the different states of nature in order to make a better decision. The Choquet integral is the natural choice given the complexity of the environment and its relevance for aggregating interactive or conflicting criteria. The present model is a combination of the Schmeidler model and the Brice Mayag algorithm for the determination of the 2-additive Choquet capacity. It fits into the framework of subjective models and provides an appropriate response to the Ellsberg paradox.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Language and semantics of expressions for Grafcet model synthesis in a MDE environment</title>
      <description><![CDATA[The GRAphe Fonctionnel de Commande Étapes Transitions (GRAFCET) is a powerful graphical modeling language for the specification of controllers in discrete event systems. It uses expressions to express the conditions of transitions and conditional actions, as well as the logical and arithmetic expressions assigned to stored actions. Several research works have focused on the transformation of Grafcet specifications (including expressions) into control code for embedded systems. To make it easier to edit valid Grafcet models and generate code, it is necessary to propose a formalization of the Grafcet expression language that permits validating its constructs and provides an appropriate semantics. To this end, we propose a context-free grammar that generates the whole set of Grafcet expressions, obtained by extending the usual grammars of logical and arithmetic expressions. We also propose a metamodel and an associated semantics of Grafcet expressions to facilitate the implementation of the Grafcet language. A parser for Grafcet expressions, G7Expr, is then obtained using the ANTLR parser generator, while the metamodel is implemented in the Eclipse EMF Model Driven Engineering (MDE) environment. The combination of the two tools makes it possible to analyze and automatically build Grafcet expressions when editing and synthesizing Grafcet models.]]></description>
      <pubDate>Mon, 22 Nov 2021 08:13:45 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6452</link>
      <guid>https://doi.org/10.46298/arima.6452</guid>
      <author>Nzebop Ndenoka, Gérard</author>
      <author>Tchuenté, Maurice</author>
      <author>Simeu, Emmanuel</author>
      <dc:creator>Nzebop Ndenoka, Gérard</dc:creator>
      <dc:creator>Tchuenté, Maurice</dc:creator>
      <dc:creator>Simeu, Emmanuel</dc:creator>
      <content:encoded><![CDATA[The GRAphe Fonctionnel de Commande Étapes Transitions (GRAFCET) is a powerful graphical modeling language for the specification of controllers in discrete event systems. It uses expressions to express the conditions of transitions and conditional actions, as well as the logical and arithmetic expressions assigned to stored actions. Several research works have focused on the transformation of Grafcet specifications (including expressions) into control code for embedded systems. To make it easier to edit valid Grafcet models and generate code, it is necessary to propose a formalization of the Grafcet expression language that permits validating its constructs and provides an appropriate semantics. To this end, we propose a context-free grammar that generates the whole set of Grafcet expressions, obtained by extending the usual grammars of logical and arithmetic expressions. We also propose a metamodel and an associated semantics of Grafcet expressions to facilitate the implementation of the Grafcet language. A parser for Grafcet expressions, G7Expr, is then obtained using the ANTLR parser generator, while the metamodel is implemented in the Eclipse EMF Model Driven Engineering (MDE) environment. The combination of the two tools makes it possible to analyze and automatically build Grafcet expressions when editing and synthesizing Grafcet models.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Enhancing Reasoning with the Extension Rule in CDCL SAT Solvers</title>
      <description><![CDATA[The extension rule first introduced by G. Tseitin is a simple but powerful rule that, when added to resolution, leads to an exponentially stronger proof system known as extended resolution (ER). Despite the outstanding theoretical results obtained with ER, its exploitation in practice to improve SAT solvers' efficiency still poses some challenging issues. There have been several attempts in the literature aiming at integrating the extension rule within CDCL SAT solvers but the results are in general not as promising as in theory. An important remark that can be made on these attempts is that most of them focus on reducing the sizes of the proofs using the extended variables introduced in the solver. We adopt in this work a different view. We see extended variables as a means to enhance reasoning in solvers and therefore to give them the ability of reasoning on various semantic aspects of variables. Experiments carried out on the 2018 and 2020 SAT competitions' benchmarks show the use of the extension rule in CDCL SAT solvers to be practically beneficial for both satisfiable and unsatisfiable instances.]]></description>
      <pubDate>Tue, 09 Nov 2021 08:14:42 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6434</link>
      <guid>https://doi.org/10.46298/arima.6434</guid>
      <author>Konan Tchinda, Rodrigue</author>
      <author>Tayou Djamegni, Clémentin</author>
      <dc:creator>Konan Tchinda, Rodrigue</dc:creator>
      <dc:creator>Tayou Djamegni, Clémentin</dc:creator>
      <content:encoded><![CDATA[The extension rule first introduced by G. Tseitin is a simple but powerful rule that, when added to resolution, leads to an exponentially stronger proof system known as extended resolution (ER). Despite the outstanding theoretical results obtained with ER, its exploitation in practice to improve SAT solvers' efficiency still poses some challenging issues. There have been several attempts in the literature aiming at integrating the extension rule within CDCL SAT solvers but the results are in general not as promising as in theory. An important remark that can be made on these attempts is that most of them focus on reducing the sizes of the proofs using the extended variables introduced in the solver. We adopt in this work a different view. We see extended variables as a means to enhance reasoning in solvers and therefore to give them the ability of reasoning on various semantic aspects of variables. Experiments carried out on the 2018 and 2020 SAT competitions' benchmarks show the use of the extension rule in CDCL SAT solvers to be practically beneficial for both satisfiable and unsatisfiable instances.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Service Promotion in a Federation of Security Domains</title>
      <description><![CDATA[Service Oriented Architecture (SOA) provides standardised solutions to share services between various security domains. But access control to services is defined within each domain, and the federation of security domains therefore brings some flexibility to users of the services. To facilitate the authentication of users, one solution is federated access control relying on identity federation, which allows a user to authenticate once in one domain and to access the services of the others according to her authorisation attributes. Since the access control requirements of services are specified using domain-specific authorisation attributes, the secure sharing of services in the federation becomes a real challenge. On the one hand, domains cannot abandon their access control models in favour of a global one; on the other hand, redefining the access control requirements of services compromises the existing service consumers. This article extends our paper at CARI2020; we propose the promotion of services, a method that consists in publishing the services of domains at the federation level by redefining their access control requirements with the federation’s authorisation attributes. Our promotion method relies on mappings between the federation’s authorisation attributes and those of the domains in order to preserve existing service consumers and to support domain autonomy. We formally describe interaction with and access to promoted services using operational semantics. The promotion method has been implemented with web service technologies.]]></description>
      <pubDate>Thu, 28 Oct 2021 07:39:11 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6757</link>
      <guid>https://doi.org/10.46298/arima.6757</guid>
      <author>Bah, Abdramane</author>
      <author>Andre, Pascal</author>
      <author>Attiogbé, Christian</author>
      <author>Konaté, Jacqueline</author>
      <dc:creator>Bah, Abdramane</dc:creator>
      <dc:creator>Andre, Pascal</dc:creator>
      <dc:creator>Attiogbé, Christian</dc:creator>
      <dc:creator>Konaté, Jacqueline</dc:creator>
      <content:encoded><![CDATA[Service Oriented Architecture (SOA) provides standardised solutions to share services between various security domains. But access control to services is defined within each domain, and the federation of security domains therefore brings some flexibility to users of the services. To facilitate the authentication of users, one solution is federated access control relying on identity federation, which allows a user to authenticate once in one domain and to access the services of the others according to her authorisation attributes. Since the access control requirements of services are specified using domain-specific authorisation attributes, the secure sharing of services in the federation becomes a real challenge. On the one hand, domains cannot abandon their access control models in favour of a global one; on the other hand, redefining the access control requirements of services compromises the existing service consumers. This article extends our paper at CARI2020; we propose the promotion of services, a method that consists in publishing the services of domains at the federation level by redefining their access control requirements with the federation’s authorisation attributes. Our promotion method relies on mappings between the federation’s authorisation attributes and those of the domains in order to preserve existing service consumers and to support domain autonomy. We formally describe interaction with and access to promoted services using operational semantics. The promotion method has been implemented with web service technologies.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Low-complexity moment-based robust design and uncertainty back-propagation</title>
      <description><![CDATA[This paper shows how to take advantage of a possibly existing linear relationship in an optimization problem to address robust design and backward uncertainty propagation while keeping the computational effort as low as possible.]]></description>
      <pubDate>Mon, 13 Sep 2021 15:09:49 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.7160</link>
      <guid>https://doi.org/10.46298/arima.7160</guid>
      <author>Bouabdallah, Radia</author>
      <author>Mohammadi, Bijan</author>
      <author>Rapadamnaba, Robert</author>
      <dc:creator>Bouabdallah, Radia</dc:creator>
      <dc:creator>Mohammadi, Bijan</dc:creator>
      <dc:creator>Rapadamnaba, Robert</dc:creator>
      <content:encoded><![CDATA[This paper shows how to take advantage of a possibly existing linear relationship in an optimization problem to address robust design and backward uncertainty propagation while keeping the computational effort as low as possible.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An efficient end to end verifiable voting system</title>
      <description><![CDATA[Electronic voting systems have become a powerful technology for the improvement of democracy by reducing the cost of elections, increasing voter turnout and even allowing voters to directly check the entire electoral process. End-to-end (E2E) verifiability has been widely identified as a critical property for the adoption of such voting systems for electoral procedures. Moreover, one of the pillars of any vote, apart from ballot secrecy and the integrity of the result, lies in the transparency of the process: the possibility for voters to understand the underlying system without resorting to technical expertise. The end-to-end verifiable electronic voting systems proposed in the literature do not always guarantee this, because they require additional configuration hypotheses, for example the existence of a trusted third party as a random source or the existence of a random beacon. Hence, building a reliable end-to-end verifiable voting system offering confidentiality and integrity remains an open research problem. In this work, we present a new end-to-end verifiable electronic voting system requiring only the existence of a consistent, fault-tolerant bulletin board, which stores all election-related information and allows any party, as well as the voters, to read and verify the entire election process. The properties of our system are guaranteed by the existence of the bulletin board and the involvement of the voters and the political parties in the process. This involvement compromises neither the confidentiality nor the integrity of the elections and does not require cryptographic operations on the part of the voters.]]></description>
      <pubDate>Mon, 13 Sep 2021 10:13:20 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6442</link>
      <guid>https://doi.org/10.46298/arima.6442</guid>
      <author>Tamo Mamtio, Léonie</author>
      <author>Tindo, Gilbert</author>
      <dc:creator>Tamo Mamtio, Léonie</dc:creator>
      <dc:creator>Tindo, Gilbert</dc:creator>
      <content:encoded><![CDATA[Electronic voting systems have become a powerful technology for the improvement of democracy by reducing the cost of elections, increasing voter turnout and even allowing voters to directly check the entire electoral process. End-to-end (E2E) verifiability has been widely identified as a critical property for the adoption of such voting systems for electoral procedures. Moreover, one of the pillars of any vote, apart from ballot secrecy and the integrity of the result, lies in the transparency of the process: the possibility for voters to understand the underlying system without resorting to technical expertise. The end-to-end verifiable electronic voting systems proposed in the literature do not always guarantee this, because they require additional configuration hypotheses, for example the existence of a trusted third party as a random source or the existence of a random beacon. Hence, building a reliable end-to-end verifiable voting system offering confidentiality and integrity remains an open research problem. In this work, we present a new end-to-end verifiable electronic voting system requiring only the existence of a consistent, fault-tolerant bulletin board, which stores all election-related information and allows any party, as well as the voters, to read and verify the entire election process. The properties of our system are guaranteed by the existence of the bulletin board and the involvement of the voters and the political parties in the process. This involvement compromises neither the confidentiality nor the integrity of the elections and does not require cryptographic operations on the part of the voters.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Extraction of lexico-grammatic features and coupling of CRF (Conditional Random Field) units to the deep neural network for aspect extraction</title>
      <description><![CDATA[The Internet contains a wealth of information in the form of unstructured texts, such as customer comments on products, events and more. By extracting and analyzing in detail the opinions expressed in customer comments, it is possible to obtain valuable opportunities and information for customers and companies. The model proposed by Jebbara and Cimiano for the extraction of aspects, winner of the SemEval2016 competition, suffers from the absence of lexico-grammatic input features and from poor performance in the detection of compound aspects. We propose a model based on a recurrent neural network for the task of extracting the aspects of an entity for sentiment analysis. The proposed model is an improvement of the Jebbara and Cimiano model. The modification consists in adding a CRF to take into account the dependencies between labels, and in extending the feature space with grammatical-level and lexical-level features. Experiments on the two SemEval2016 datasets show that our approach improves the F-score by about 3.5%.]]></description>
      <pubDate>Wed, 28 Jul 2021 11:29:47 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6438</link>
      <guid>https://doi.org/10.46298/arima.6438</guid>
      <author>Bengono Obiang, Saint Germes Bienvenu</author>
      <author>Tsopze, Norbert</author>
      <dc:creator>Bengono Obiang, Saint Germes Bienvenu</dc:creator>
      <dc:creator>Tsopze, Norbert</dc:creator>
      <content:encoded><![CDATA[The Internet contains a wealth of information in the form of unstructured texts, such as customer comments on products, events and more. By extracting and analyzing in detail the opinions expressed in customer comments, it is possible to obtain valuable opportunities and information for customers and companies. The model proposed by Jebbara and Cimiano for the extraction of aspects, winner of the SemEval2016 competition, suffers from the absence of lexico-grammatic input features and from poor performance in the detection of compound aspects. We propose a model based on a recurrent neural network for the task of extracting the aspects of an entity for sentiment analysis. The proposed model is an improvement of the Jebbara and Cimiano model. The modification consists in adding a CRF to take into account the dependencies between labels, and in extending the feature space with grammatical-level and lexical-level features. Experiments on the two SemEval2016 datasets show that our approach improves the F-score by about 3.5%.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Multi Ant Colony Optimization Approach For The Traveling Salesman Problem</title>
      <description><![CDATA[In this paper, we propose a new approach to solving the Traveling Salesman Problem (TSP), for which no exact algorithm is known that finds a solution in polynomial time. The proposed approach is based on ant colony optimization. It puts several colonies in competition to improve solutions (in execution time and solution quality) to large TSP instances, and allows the space of possible solutions to be explored efficiently. The results of our experiments show that the approach leads to better results than other heuristics from the literature, especially in terms of the quality of the solutions obtained and execution time.]]></description>
      <pubDate>Mon, 12 Jul 2021 07:30:39 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6752</link>
      <guid>https://doi.org/10.46298/arima.6752</guid>
      <author>Soh, Mathurin</author>
      <author>Nguimeya Tsofack, Baudoin</author>
      <author>Tayou Djamegni, Clémentin</author>
      <dc:creator>Soh, Mathurin</dc:creator>
      <dc:creator>Nguimeya Tsofack, Baudoin</dc:creator>
      <dc:creator>Tayou Djamegni, Clémentin</dc:creator>
      <content:encoded><![CDATA[In this paper, we propose a new approach to solving the Traveling Salesman Problem (TSP), for which no exact algorithm is known that finds a solution in polynomial time. The proposed approach is based on ant colony optimization. It puts several colonies in competition to improve solutions (in execution time and solution quality) to large TSP instances, and allows the space of possible solutions to be explored efficiently. The results of our experiments show that the approach leads to better results than other heuristics from the literature, especially in terms of the quality of the solutions obtained and execution time.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Algorithms to get out of Boring Area Trap in Reinforcement Learning</title>
      <description><![CDATA[Reinforcement learning algorithms have succeeded over the years in achieving impressive results in a variety of fields. However, these algorithms suffer from certain weaknesses, highlighted by Refael Vivanti et al., that may explain the regression of even well-trained agents in certain environments: the difference in reward variance between areas of the environment. This difference in variance leads to two problems: the Boring Area Trap and the Manipulative Consultant. We note that the Adaptive Symmetric Reward Noising (ASRN) algorithm proposed by Refael Vivanti et al. has limitations in environments with the following characteristics: long game times and multiple boring areas. To overcome these problems, we propose three algorithms derived from the ASRN algorithm, called Rebooted Adaptive Symmetric Reward Noising (RASRN): Continuous ε decay RASRN, Full RASRN and Stepwise α decay RASRN. Through two series of experiments carried out on the k-armed bandit problem, we show that our algorithms better correct the Boring Area Trap problem.]]></description>
      <pubDate>Fri, 02 Jul 2021 12:31:26 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6748</link>
      <guid>https://doi.org/10.46298/arima.6748</guid>
      <author>Noulawe Tchamanbe, Landry Steve</author>
      <author>Melatagia Yonta, Paulin</author>
      <dc:creator>Noulawe Tchamanbe, Landry Steve</dc:creator>
      <dc:creator>Melatagia Yonta, Paulin</dc:creator>
      <content:encoded><![CDATA[Reinforcement learning algorithms have succeeded over the years in achieving impressive results in a variety of fields. However, these algorithms suffer from certain weaknesses, highlighted by Refael Vivanti et al., that may explain the regression of even well-trained agents in certain environments: the difference in reward variance between areas of the environment. This difference in variance leads to two problems: the Boring Area Trap and the Manipulative Consultant. We note that the Adaptive Symmetric Reward Noising (ASRN) algorithm proposed by Refael Vivanti et al. has limitations in environments with the following characteristics: long game times and multiple boring areas. To overcome these problems, we propose three algorithms derived from the ASRN algorithm, called Rebooted Adaptive Symmetric Reward Noising (RASRN): Continuous ε decay RASRN, Full RASRN and Stepwise α decay RASRN. Through two series of experiments carried out on the k-armed bandit problem, we show that our algorithms better correct the Boring Area Trap problem.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Nash-game approach to joint data completion and location of small inclusions in Stokes flow</title>
      <description><![CDATA[We consider the coupled inverse problem of data completion and determination of the best locations of an unknown number of small objects immersed in a stationary viscous fluid. We introduce a novel method to solve this problem based on a game-theory approach. A new algorithm is provided to recover the missing data and, simultaneously, the number of these objects and their approximate locations. The detection problem is formulated as a topological one. We present two test cases that illustrate the efficiency of our strategy for dealing with this ill-posed problem.]]></description>
      <pubDate>Tue, 29 Jun 2021 07:42:52 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6761</link>
      <guid>https://doi.org/10.46298/arima.6761</guid>
      <author>Ouni, Marwa</author>
      <author>Habbal, Abderrahmane</author>
      <author>Kallel, Moez</author>
      <dc:creator>Ouni, Marwa</dc:creator>
      <dc:creator>Habbal, Abderrahmane</dc:creator>
      <dc:creator>Kallel, Moez</dc:creator>
      <content:encoded><![CDATA[We consider the coupled inverse problem of data completion and determination of the best locations of an unknown number of small objects immersed in a stationary viscous fluid. We introduce a novel method to solve this problem based on a game-theory approach. A new algorithm is provided to recover the missing data and, simultaneously, the number of these objects and their approximate locations. The detection problem is formulated as a topological one. We present two test cases that illustrate the efficiency of our strategy for dealing with this ill-posed problem.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analysis of a mosquito life cycle model</title>
      <description><![CDATA[The gonotrophic cycle of mosquitoes conditions the frequency of mosquito-human contacts. Knowledge of this important phenomenon in the mosquito life cycle is a fundamental element in the epidemiological analysis of communicable diseases such as mosquito-borne diseases. In this work, we analyze a deterministic model of the complete life cycle of mosquitoes which takes into account the principal phases of the female mosquitoes' gonotrophic cycle, as well as the Sterile Insect Technique combined with the use of insecticide as control measures to fight the proliferation of mosquitoes. We compute the corresponding mosquito reproductive number N^{∗} and prove the global asymptotic stability of the trivial equilibrium. We prove that the model admits two non-trivial equilibria whenever N^{∗} is greater than another threshold, N_c, which depends on the total number of sterile mosquitoes. Numerical simulations, using mosquito parameters of the Aedes species, are carried out to illustrate our analytical results and show that the strategy consisting in combining the Sterile Insect Technique with adulticides, when properly implemented, effectively combats the proliferation of mosquitoes.]]></description>
      <pubDate>Wed, 16 Jun 2021 07:50:27 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6697</link>
      <guid>https://doi.org/10.46298/arima.6697</guid>
      <author>Kouchéré, Albert</author>
      <author>Abboubakar, Hamadjam</author>
      <author>Damakoa, Irepran</author>
      <dc:creator>Kouchéré, Albert</dc:creator>
      <dc:creator>Abboubakar, Hamadjam</dc:creator>
      <dc:creator>Damakoa, Irepran</dc:creator>
      <content:encoded><![CDATA[The gonotrophic cycle of mosquitoes conditions the frequency of mosquito-human contacts. Knowledge of this important phenomenon in the mosquito life cycle is a fundamental element in the epidemiological analysis of communicable diseases such as mosquito-borne diseases. In this work, we analyze a deterministic model of the complete life cycle of mosquitoes which takes into account the principal phases of the female mosquitoes' gonotrophic cycle, as well as the Sterile Insect Technique combined with the use of insecticide as control measures to fight the proliferation of mosquitoes. We compute the corresponding mosquito reproductive number N^{∗} and prove the global asymptotic stability of the trivial equilibrium. We prove that the model admits two non-trivial equilibria whenever N^{∗} is greater than another threshold, N_c, which depends on the total number of sterile mosquitoes. Numerical simulations, using mosquito parameters of the Aedes species, are carried out to illustrate our analytical results and show that the strategy consisting in combining the Sterile Insect Technique with adulticides, when properly implemented, effectively combats the proliferation of mosquitoes.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Approximation with activation functions and applications</title>
      <description><![CDATA[Function approximation arises in many branches of applied mathematics and computer science, in particular in numerical analysis, in finite element theory and, more recently, in data science. Among the most common approximations are polynomial, Chebyshev and Fourier series approximations. In this work we establish approximations of a continuous function by a series of activation functions. First, we deal with the one- and two-dimensional cases. Then, we generalize the approximation to the multidimensional case. Examples of applications of these approximations include interpolation, numerical integration, finite elements and neural networks. Finally, we present some numerical results for the examples above.]]></description>
      <pubDate>Fri, 30 Apr 2021 13:19:30 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6464</link>
      <guid>https://doi.org/10.46298/arima.6464</guid>
      <author>Bessi, Radhia</author>
      <dc:creator>Bessi, Radhia</dc:creator>
      <content:encoded><![CDATA[Function approximation arises in many branches of applied mathematics and computer science, in particular in numerical analysis, in finite element theory and, more recently, in data science. Among the most common approximations are polynomial, Chebyshev and Fourier series approximations. In this work we establish approximations of a continuous function by a series of activation functions. First, we deal with the one- and two-dimensional cases. Then, we generalize the approximation to the multidimensional case. Examples of applications of these approximations include interpolation, numerical integration, finite elements and neural networks. Finally, we present some numerical results for the examples above.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Big Steps Towards Query Eco-Processing - Thinking Smart</title>
      <description><![CDATA[Computers and electronic machines in businesses consume a significant amount of electricity, releasing carbon dioxide (CO2), which contributes to greenhouse gas emissions. Energy efficiency is a pressing concern in IT systems, ranging from mobile devices to large servers in data centers, in order to be more environmentally responsible. In response to the growing awareness of excessive energy consumption, many initiatives on energy efficiency for big data processing have been launched, covering electronic components, software and applications. Query optimizers are among the most power-consuming components of a DBMS. They can be modified to take into account the energy cost of query plans by using energy-based cost models, with the aim of reducing the power consumption of computer systems. In this paper, we study, describe and evaluate the design of three energy cost models whose energy-sensitive parameter values are determined using nonlinear regression and random forest techniques. To this end, we study in depth the operating principles of the selected DBMSs and present an analysis comparing the execution time and energy consumption of typical queries from the TPC benchmark. We perform extensive experiments on a physical testbed based on the PostgreSQL, MonetDB and Hyrise systems, using workloads generated with our chosen benchmark to validate our proposal.]]></description>
      <pubDate>Tue, 30 Mar 2021 11:58:38 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6767</link>
      <guid>https://doi.org/10.46298/arima.6767</guid>
      <author>Dembele, Simon Pierre</author>
      <author>Bellatreche, Ladjel</author>
      <author>Ordonez, Carlos</author>
      <author>Gmati, Nabil</author>
      <author>Roche, Mathieu</author>
      <author>Nguyen-Huu, Tri</author>
      <author>Debreu, Laurent</author>
      <dc:creator>Dembele, Simon Pierre</dc:creator>
      <dc:creator>Bellatreche, Ladjel</dc:creator>
      <dc:creator>Ordonez, Carlos</dc:creator>
      <dc:creator>Gmati, Nabil</dc:creator>
      <dc:creator>Roche, Mathieu</dc:creator>
      <dc:creator>Nguyen-Huu, Tri</dc:creator>
      <dc:creator>Debreu, Laurent</dc:creator>
      <content:encoded><![CDATA[Computers and electronic machines in businesses consume a significant amount of electricity, releasing carbon dioxide (CO2), which contributes to greenhouse gas emissions. Energy efficiency is a pressing concern in IT systems, ranging from mobile devices to large servers in data centers, in order to be more environmentally responsible. In response to the growing awareness of excessive energy consumption, many initiatives on energy efficiency for big data processing have been launched, covering electronic components, software and applications. Query optimizers are among the most power-consuming components of a DBMS. They can be modified to take into account the energy cost of query plans by using energy-based cost models, with the aim of reducing the power consumption of computer systems. In this paper, we study, describe and evaluate the design of three energy cost models whose energy-sensitive parameter values are determined using nonlinear regression and random forest techniques. To this end, we study in depth the operating principles of the selected DBMSs and present an analysis comparing the execution time and energy consumption of typical queries from the TPC benchmark. We perform extensive experiments on a physical testbed based on the PostgreSQL, MonetDB and Hyrise systems, using workloads generated with our chosen benchmark to validate our proposal.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Parallel Hybridization for SAT: An Efficient Combination of Search Space Splitting and Portfolio</title>
      <description><![CDATA[Search space splitting and portfolio are the two main approaches used in parallel SAT solving. Each has its strengths but also its weaknesses. Decomposition in search space splitting can help improve speedup on satisfiable instances, while competition in a portfolio increases robustness. Many parallel hybrid approaches have been proposed in the literature, but most of them still suffer from load-balancing issues that cause non-negligible overhead. In this paper, we describe a new parallel hybridization scheme based on both search space splitting and portfolio that does not require load-balancing mechanisms (such as dynamic work stealing).]]></description>
      <pubDate>Sat, 27 Feb 2021 15:27:08 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6750</link>
      <guid>https://doi.org/10.46298/arima.6750</guid>
      <author>Konan Tchinda, Rodrigue</author>
      <author>Tayou Djamegni, Clémentin</author>
      <dc:creator>Konan Tchinda, Rodrigue</dc:creator>
      <dc:creator>Tayou Djamegni, Clémentin</dc:creator>
      <content:encoded><![CDATA[Search space splitting and portfolio are the two main approaches used in parallel SAT solving. Each has its strengths but also its weaknesses. Decomposition in search space splitting can help improve speedup on satisfiable instances, while competition in a portfolio increases robustness. Many parallel hybrid approaches have been proposed in the literature, but most of them still suffer from load-balancing issues that cause non-negligible overhead. In this paper, we describe a new parallel hybridization scheme based on both search space splitting and portfolio that does not require load-balancing mechanisms (such as dynamic work stealing).]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A mathematical model for a partial demographic transition</title>
      <description><![CDATA[We study a mathematical model for the demographic transition. It is a homogeneous differential system of degree one. There are two age groups and two fertility levels. Low fertility extends by mimicry to adults with high fertility. When the mimicry coefficient increases, the system crosses two thresholds between which the population increases or decreases exponentially with a stable mixture of the two fertility rates. This partial demographic transition is reminiscent of the situation in some countries of sub-Saharan Africa.]]></description>
      <pubDate>Wed, 24 Feb 2021 09:51:09 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6713</link>
      <guid>https://doi.org/10.46298/arima.6713</guid>
      <author>Bacaër, Nicolas</author>
      <author>Inaba, Hisashi</author>
      <author>Moussaoui, Ali</author>
      <dc:creator>Bacaër, Nicolas</dc:creator>
      <dc:creator>Inaba, Hisashi</dc:creator>
      <dc:creator>Moussaoui, Ali</dc:creator>
      <content:encoded><![CDATA[We study a mathematical model for the demographic transition. It is a homogeneous differential system of degree one. There are two age groups and two fertility levels. Low fertility extends by mimicry to adults with high fertility. When the mimicry coefficient increases, the system crosses two thresholds between which the population increases or decreases exponentially with a stable mixture of the two fertility rates. This partial demographic transition is reminiscent of the situation in some countries of sub-Saharan Africa.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On the probability of extinction of a population in a slow periodic environment</title>
      <description><![CDATA[We study the probability of extinction of a population modelled by a linear birth-and-death process with several types in a periodic environment when the period is large compared to other time scales. This probability depends on the season and may present a sharp jump in relation to a "canard" in a slow-fast dynamical system. The point of discontinuity is determined precisely in an example with two types of individuals related to a vector-borne disease transmission model.]]></description>
      <pubDate>Wed, 04 Nov 2020 08:36:48 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6265</link>
      <guid>https://doi.org/10.46298/arima.6265</guid>
      <author>Bacaër, Nicolas</author>
      <author>Lobry, Claude</author>
      <author>Sari, Tewfik</author>
      <dc:creator>Bacaër, Nicolas</dc:creator>
      <dc:creator>Lobry, Claude</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <content:encoded><![CDATA[We study the probability of extinction of a population modelled by a linear birth-and-death process with several types in a periodic environment when the period is large compared to other time scales. This probability depends on the season and may present a sharp jump in relation to a "canard" in a slow-fast dynamical system. The point of discontinuity is determined precisely in an example with two types of individuals related to a vector-borne disease transmission model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Topological asymptotic formula for the 3D non-stationary Stokes problem and application</title>
      <description><![CDATA[This paper is concerned with a topological asymptotic expansion for a parabolic operator. We consider the three-dimensional non-stationary Stokes system as a model problem and derive a sensitivity analysis with respect to the creation of a small Dirichlet geometric perturbation. The established asymptotic expansion is valid for a large class of shape functions. The proposed analysis is based on a preliminary estimate describing the velocity field perturbation caused by the presence of a small obstacle in the fluid flow domain. The obtained theoretical results are used to build a fast and accurate detection algorithm. Some numerical examples arising from a lake oxygenation problem show the efficiency of the proposed approach.]]></description>
      <pubDate>Thu, 22 Oct 2020 08:44:58 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4760</link>
      <guid>https://doi.org/10.46298/arima.4760</guid>
      <author>Hassine, Maatoug</author>
      <author>Malek, Rakia</author>
      <dc:creator>Hassine, Maatoug</dc:creator>
      <dc:creator>Malek, Rakia</dc:creator>
      <content:encoded><![CDATA[This paper is concerned with a topological asymptotic expansion for a parabolic operator. We consider the three-dimensional non-stationary Stokes system as a model problem and derive a sensitivity analysis with respect to the creation of a small Dirichlet geometric perturbation. The established asymptotic expansion is valid for a large class of shape functions. The proposed analysis is based on a preliminary estimate describing the velocity field perturbation caused by the presence of a small obstacle in the fluid flow domain. The obtained theoretical results are used to build a fast and accurate detection algorithm. Some numerical examples arising from a lake oxygenation problem show the efficiency of the proposed approach.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Calculus of Interfaces for Distributed Collaborative Systems: The Guarded Attribute Grammar Approach</title>
      <description><![CDATA[We address the problem of component reuse in the context of service-oriented programming, and more specifically for the design of user-centric distributed collaborative systems modelled by Guarded Attribute Grammars. Following the contract-based specification of components, we develop an approach to an interface theory for the components of a collaborative system in three stages: we define a composition of interfaces that specifies how a component behaves with respect to its environment, we introduce an implementation order on interfaces, and finally a residual operation on interfaces characterizing the systems that, when composed with a given component, can complement it in order to realize a global specification.]]></description>
      <pubDate>Mon, 05 Oct 2020 16:00:01 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.5540</link>
      <guid>https://doi.org/10.46298/arima.5540</guid>
      <author>Badouel, Eric</author>
      <author>Djeumen Djatcha, Rodrigue Aimé</author>
      <dc:creator>Badouel, Eric</dc:creator>
      <dc:creator>Djeumen Djatcha, Rodrigue Aimé</dc:creator>
      <content:encoded><![CDATA[We address the problem of component reuse in the context of service-oriented programming, and more specifically for the design of user-centric distributed collaborative systems modelled by Guarded Attribute Grammars. Following the contract-based specification of components, we develop an approach to an interface theory for the components of a collaborative system in three stages: we define a composition of interfaces that specifies how a component behaves with respect to its environment, we introduce an implementation order on interfaces, and finally a residual operation on interfaces characterizing the systems that, when composed with a given component, can complement it in order to realize a global specification.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Named Entity Recognition in Low-resource Languages using Cross-lingual distributional word representation</title>
      <description><![CDATA[Named Entity Recognition (NER) is a fundamental task in many NLP applications that seek to identify and classify expressions such as people, location, and organization names. Many NER systems have been developed, but the annotated data needed for good performance are not available for low-resource languages, such as Cameroonian languages. In this paper we exploit the low frequency of named entities in text to define a new cross-lingual distributional representation suitable for named entity recognition. We build the first named entity recognizer for Ewondo (a Bantu low-resource language of Cameroon) by projecting named entity tags from English using our word representation. In terms of recall, precision and F-score, the obtained results show the effectiveness of the proposed distributional representation of words.]]></description>
      <pubDate>Tue, 29 Sep 2020 12:28:19 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.6439</link>
      <guid>https://doi.org/10.46298/arima.6439</guid>
      <author>Mbouopda, Michael Franklin, F</author>
      <author>Melatagia Yonta, Paulin</author>
      <dc:creator>Mbouopda, Michael Franklin, F</dc:creator>
      <dc:creator>Melatagia Yonta, Paulin</dc:creator>
      <content:encoded><![CDATA[Named Entity Recognition (NER) is a fundamental task in many NLP applications that seek to identify and classify expressions such as people, location, and organization names. Many NER systems have been developed, but the annotated data needed for good performance are not available for low-resource languages, such as Cameroonian languages. In this paper we exploit the low frequency of named entities in text to define a new cross-lingual distributional representation suitable for named entity recognition. We build the first named entity recognizer for Ewondo (a Bantu low-resource language of Cameroon) by projecting named entity tags from English using our word representation. In terms of recall, precision and F-score, the obtained results show the effectiveness of the proposed distributional representation of words.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Operating diagram of a flocculation model in the chemostat</title>
      <description><![CDATA[The objective of this study is to analyze a model of the chemostat involving the attachment and detachment dynamics of planktonic and aggregated biomass in the presence of a single resource. Considering the mortality of species, we give a complete analysis of the existence and local stability of all steady states for general monotonic growth rates. The model exhibits a rich set of behaviors, with a multiplicity of coexistence steady states, bi-stability, and the occurrence of stable limit cycles. Moreover, we determine the operating diagram, which depicts the asymptotic behavior of the system with respect to the control parameters. It shows the emergence of a bi-stability region through a saddle-node bifurcation and the occurrence of a coexistence region through a transcritical bifurcation. Finally, we illustrate the importance of mortality in destabilizing the microbial ecosystem by promoting the washout of species.]]></description>
      <pubDate>Fri, 07 Aug 2020 11:08:29 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.5593</link>
      <guid>https://doi.org/10.46298/arima.5593</guid>
      <author>Fekih-Salem, Radhouane</author>
      <author>Sari, Tewfik</author>
      <dc:creator>Fekih-Salem, Radhouane</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <content:encoded><![CDATA[The objective of this study is to analyze a model of the chemostat involving the attachment and detachment dynamics of planktonic and aggregated biomass in the presence of a single resource. Considering the mortality of species, we give a complete analysis of the existence and local stability of all steady states for general monotonic growth rates. The model exhibits a rich set of behaviors, with a multiplicity of coexistence steady states, bi-stability, and the occurrence of stable limit cycles. Moreover, we determine the operating diagram, which depicts the asymptotic behavior of the system with respect to the control parameters. It shows the emergence of a bi-stability region through a saddle-node bifurcation and the occurrence of a coexistence region through a transcritical bifurcation. Finally, we illustrate the importance of mortality in destabilizing the microbial ecosystem by promoting the washout of species.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Dynamic resource allocations in virtual networks through a knapsack problem's dynamic programming solution</title>
      <description><![CDATA[The high-value Internet services that have been significantly enhanced by the integration of network virtualization and Software Defined Networking (SDN) technology are increasingly attracting the attention of end-users and major computer network companies (Google, Amazon, Yahoo, Cisco, ...). In order to cope with this high demand, network resource providers (bandwidth, storage space, throughput, etc.) must implement the right models to understand and satisfy users' needs while maximizing the profit earned or the number of satisfied requests in the virtual networks. This need is all the more pressing as users' requests can be linked, thereby imposing on the infrastructure provider (InP) constraints concerning the mutual satisfaction of requests, which further complicates the problem. From this perspective, we show that the problem of allocating resources to users based on their requests is a knapsack problem and can therefore be solved efficiently by using the best dynamic programming solutions for the knapsack problem. Our contribution treats dynamic resource allocation as multiple knapsack problem instances over variable-value requests.]]></description>
      <pubDate>Thu, 09 Jan 2020 12:52:08 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.5321</link>
      <guid>https://doi.org/10.46298/arima.5321</guid>
      <author>Kengne Tchendji, Vianney</author>
      <author>Yankam, Yannick Florian</author>
      <dc:creator>Kengne Tchendji, Vianney</dc:creator>
      <dc:creator>Yankam, Yannick Florian</dc:creator>
      <content:encoded><![CDATA[The high-value Internet services that have been significantly enhanced by the integration of network virtualization and Software Defined Networking (SDN) technology are increasingly attracting the attention of end-users and major computer network companies (Google, Amazon, Yahoo, Cisco, ...). In order to cope with this high demand, network resource providers (bandwidth, storage space, throughput, etc.) must implement the right models to understand and satisfy users' needs while maximizing the profit earned or the number of satisfied requests in the virtual networks. This need is all the more pressing as users' requests can be linked, thereby imposing on the infrastructure provider (InP) constraints concerning the mutual satisfaction of requests, which further complicates the problem. From this perspective, we show that the problem of allocating resources to users based on their requests is a knapsack problem and can therefore be solved efficiently by using the best dynamic programming solutions for the knapsack problem. Our contribution treats dynamic resource allocation as multiple knapsack problem instances over variable-value requests.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>ε-TPN: definition of a Time Petri Net formalism simulating the behaviour of the timed grafcets</title>
      <description><![CDATA[To allow formal verification of timed GRAFCET models, many authors have proposed translating them into formal and well-established languages such as timed automata or Time Petri nets (TPN). The work presented in [Sogbohossou, Vianou, Formal modeling of grafcets with Time Petri nets, IEEE Transactions on Control Systems Technology, 23(5)(2015)] concerns the TPN formalism: the TPN resulting from the translation, called here ε-TPN, integrates infinitesimal delays (ε) to simulate the synchronous semantics of the grafcet. The first goal of this paper is to specify a formal operational semantics for an ε-TPN that amends the previous one: in particular, priority is introduced between two defined categories of ε-TPN transitions, in order to strictly respect the synchronous hypothesis. The second goal is to show how to build the finite state space abstraction resulting from the new definitions.]]></description>
      <pubDate>Tue, 03 Dec 2019 13:48:43 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.5492</link>
      <guid>https://doi.org/10.46298/arima.5492</guid>
      <author>Sogbohossou, Médésu</author>
      <author>Vianou, Antoine</author>
      <dc:creator>Sogbohossou, Médésu</dc:creator>
      <dc:creator>Vianou, Antoine</dc:creator>
      <content:encoded><![CDATA[To allow formal verification of timed GRAFCET models, many authors have proposed translating them into formal and well-established languages such as timed automata or Time Petri nets (TPN). The work presented in [Sogbohossou, Vianou, Formal modeling of grafcets with Time Petri nets, IEEE Transactions on Control Systems Technology, 23(5)(2015)] concerns the TPN formalism: the TPN resulting from the translation, called here ε-TPN, integrates infinitesimal delays (ε) to simulate the synchronous semantics of the grafcet. The first goal of this paper is to specify a formal operational semantics for an ε-TPN that amends the previous one: in particular, priority is introduced between two defined categories of ε-TPN transitions, in order to strictly respect the synchronous hypothesis. The second goal is to show how to build the finite state space abstraction resulting from the new definitions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Approche hiérarchique d’extraction des compétences dans des CVs en format PDF</title>
      <description><![CDATA[The aim of this work is to use a hybrid approach to extract competences from CVs. The extraction approach consists of two phases: a segmentation phase, in which the CV is split into sections and the terms representing competences are extracted; and a prediction phase, in which, from the previously extracted features, a set of competences is inferred that the expert would not have needed to mention explicitly in the résumé. The main contributions of this work are twofold: the use of hierarchical segmentation of a résumé into sections before extracting competences; and the use of a multi-label learning model based on SVMs to predict, among a set of skills, those that can be deduced while reading a CV. Experiments carried out on a set of CVs collected from an Internet source show an improvement of more than 10% in block identification compared with a state-of-the-art model. The multi-label competence prediction model finds the list of competences with a precision and recall of about 90.5% and 92.3%, respectively.]]></description>
      <pubDate>Thu, 03 Oct 2019 07:26:21 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4964</link>
      <guid>https://doi.org/10.46298/arima.4964</guid>
      <author>Flambeau Jiechieu Kameni, Florentin</author>
      <author>Tsopze, Norbert</author>
      <dc:creator>Flambeau Jiechieu Kameni, Florentin</dc:creator>
      <dc:creator>Tsopze, Norbert</dc:creator>
      <content:encoded><![CDATA[The aim of this work is to use a hybrid approach to extract competences from CVs. The extraction approach consists of two phases: a segmentation phase, in which the CV is split into sections and the terms representing competences are extracted; and a prediction phase, in which, from the previously extracted features, a set of competences is inferred that the expert would not have needed to mention explicitly in the résumé. The main contributions of this work are twofold: the use of hierarchical segmentation of a résumé into sections before extracting competences; and the use of a multi-label learning model based on SVMs to predict, among a set of skills, those that can be deduced while reading a CV. Experiments carried out on a set of CVs collected from an Internet source show an improvement of more than 10% in block identification compared with a state-of-the-art model. The multi-label competence prediction model finds the list of competences with a precision and recall of about 90.5% and 92.3%, respectively.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>From conceptual model to implementation model: Piloting a multi-level case study in Cameroon</title>
      <description><![CDATA[This paper presents the application of Multi-Level Agent-Based Model technology through Natural Model based Design in Context (NMDC) to describe and model a class of environmental problems. NMDC allows a domain expert to design a conceptual model for a concrete environmental problem. This model describes the underlying application domain in terms of environmental concepts and neither requires specific technical skills nor involves implementation details. We show how the associated TiC (Tool-in-Context) developed through NMDC can help the domain expert describe the environmental problem in a semi-natural (specific) language. This description is the basis for the TiC to generate a simulation tool. On this basis, we transform the specific language into NetLogo agent-based code, thereby enabling an early prototype application to be used by the domain expert. Finally, we apply this approach to explain and analyze the process of deforestation around the Laf Forest Reserve and discuss the prototype resulting from our approach.]]></description>
      <pubDate>Tue, 01 Oct 2019 13:21:21 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3822</link>
      <guid>https://doi.org/10.46298/arima.3822</guid>
      <author>Kameni, Eric</author>
      <author>Van Der Weide, Theo</author>
      <author>De Groot, W. T.</author>
      <dc:creator>Kameni, Eric</dc:creator>
      <dc:creator>Van Der Weide, Theo</dc:creator>
      <dc:creator>De Groot, W. T.</dc:creator>
      <content:encoded><![CDATA[This paper presents the application of Multi-Level Agent-Based Model technology through Natural Model based Design in Context (NMDC) to describe and model a class of environmental problems. NMDC allows a domain expert to design a conceptual model for a concrete environmental problem. This model describes the underlying application domain in terms of environmental concepts and neither requires specific technical skills nor involves implementation details. We show how the associated TiC (Tool-in-Context) developed through NMDC can help the domain expert describe the environmental problem in a semi-natural (specific) language. This description is the basis for the TiC to generate a simulation tool. On this basis, we transform the specific language into NetLogo agent-based code, thereby enabling an early prototype application to be used by the domain expert. Finally, we apply this approach to explain and analyze the process of deforestation around the Laf Forest Reserve and discuss the prototype resulting from our approach.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Modèles mathématiques de digestion anaérobie: effet de l’hydrolyse sur la production du biogaz</title>
      <description><![CDATA[In this work, we investigate the effects of hydrolysis on the behavior of the anaerobic digestion process and on the production of biogas (namely, methane and hydrogen). Two models of the hydrolysis process are considered. We first consider that the microbial enzymatic activity is constant, then we take into consideration an explicit hydrolytic microbial compartment for the substrate biodegradation. The considered models include the inhibition of acetoclastic and hydrogenotrophic methanogens. To examine the effects of these inhibitions in the presence of a hydrolysis step, we first study an inhibition-free model. We determine the steady states and give necessary and sufficient conditions for their stability. The existence and stability of the steady states are illustrated by operating diagrams. We prove that modelling the hydrolysis phase by a constant enzymatic activity affects the production of methane and hydrogen. Furthermore, introducing the hydrolytic microbial compartment gives rise to new steady states and affects the stability regions. We prove that biogas production occurs at only one of the steady states, according to the operating parameters and state variables, and we determine the maximal rate of biogas produced in each case.]]></description>
      <pubDate>Tue, 03 Sep 2019 14:14:17 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4440</link>
      <guid>https://doi.org/10.46298/arima.4440</guid>
      <author>Daoud, Yessmine</author>
      <author>Abdellatif, Nahla</author>
      <author>Harmand, Jérome</author>
      <dc:creator>Daoud, Yessmine</dc:creator>
      <dc:creator>Abdellatif, Nahla</dc:creator>
      <dc:creator>Harmand, Jérome</dc:creator>
      <content:encoded><![CDATA[We investigate, in this work, the effects of hydrolysis on the behavior of the anaerobic digestion process and the production of biogas (namely, methane and hydrogen). Two models of the hydrolysis process are considered. We first assume that the microbial enzymatic activity is constant; we then introduce an explicit hydrolytic microbial compartment for substrate biodegradation. The considered models include the inhibition of acetoclastic and hydrogenotrophic methanogens. To examine the effects of these inhibitions in the presence of a hydrolysis step, we first study an inhibition-free model. We determine the steady states and give necessary and sufficient conditions for their stability. The existence and stability of the steady states are illustrated by operating diagrams. We prove that modelling the hydrolysis phase by a constant enzymatic activity affects the production of methane and hydrogen. Furthermore, introducing the hydrolytic microbial compartment gives rise to new steady states and affects the stability regions. We prove that biogas production occurs at only one of the steady states, depending on the operating parameters and state variables, and we determine the maximal rate of biogas produced in each case.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Stochastic modeling for biotechnologies: Anaerobic model AM2b</title>
      <description><![CDATA[The AM2b model is classically represented by a system of differential equations. However, this model is only valid for large populations, and our objective is to establish several stochastic models at different scales. At the microscopic scale, we propose a pure-jump stochastic model that can be simulated exactly. Since this kind of simulation is not realistic in most situations, we propose approximate simulation methods of Poisson type and of diffusion type. The diffusion-type simulation method can be viewed as a discretization of a stochastic differential equation. Finally, we informally present a law of large numbers/functional central limit theorem result that establishes the convergence of these stochastic models to the initial deterministic model.]]></description>
      <pubDate>Mon, 10 Jun 2019 07:14:42 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3159</link>
      <guid>https://doi.org/10.46298/arima.3159</guid>
      <author>Campillo, Fabien</author>
      <author>Chebbi, Mohsen</author>
      <author>Toumi, Salwa</author>
      <dc:creator>Campillo, Fabien</dc:creator>
      <dc:creator>Chebbi, Mohsen</dc:creator>
      <dc:creator>Toumi, Salwa</dc:creator>
      <content:encoded><![CDATA[The AM2b model is classically represented by a system of differential equations. However, this model is only valid for large populations, and our objective is to establish several stochastic models at different scales. At the microscopic scale, we propose a pure-jump stochastic model that can be simulated exactly. Since this kind of simulation is not realistic in most situations, we propose approximate simulation methods of Poisson type and of diffusion type. The diffusion-type simulation method can be viewed as a discretization of a stochastic differential equation. Finally, we informally present a law of large numbers/functional central limit theorem result that establishes the convergence of these stochastic models to the initial deterministic model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Modeling and simulation of Power Electronic based Electrotechnical Systems for Renewable Energy, Transportation and Industrial Applications: A Brief Review</title>
      <description><![CDATA[In this paper, a comprehensive review of recent research on modern power-converter-based electrotechnical systems (ETSs) has been carried out. In particular, power electronics (PEs) based ETSs have been investigated. The literature review consists of a standard classification of PEs-based ETSs, along with a survey of the strengths and weaknesses of these devices and their impact on renewable energy sources.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:24:28 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4400</link>
      <guid>https://doi.org/10.46298/arima.4400</guid>
      <author>Mohseni-Bonab, Seyed Masoud</author>
      <author>Kamwa, Innocent</author>
      <dc:creator>Mohseni-Bonab, Seyed Masoud</dc:creator>
      <dc:creator>Kamwa, Innocent</dc:creator>
      <content:encoded><![CDATA[In this paper, a comprehensive review of recent research on modern power-converter-based electrotechnical systems (ETSs) has been carried out. In particular, power electronics (PEs) based ETSs have been investigated. The literature review consists of a standard classification of PEs-based ETSs, along with a survey of the strengths and weaknesses of these devices and their impact on renewable energy sources.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimization of absorption systems: case of the refrigerators and heat pumps</title>
      <description><![CDATA[The thermo-ecological performance optimization of absorption systems is investigated by taking the ecological coefficient of performance (ECOP) as an objective function. The ECOP has been expressed in terms of the temperatures of the working fluid in the main components of the system. The maximum of the ECOP and the corresponding optimal temperatures of the working fluid, as well as other optimal performance design parameters such as the coefficient of performance, the specific cooling load of absorption refrigerators, the specific heating load of absorption heat pumps, the specific entropy generation rate and the distribution of the heat exchanger areas, have been derived analytically. The obtained results may provide a general theoretical tool for the ecological design of absorption refrigerators and heat pumps.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:23:49 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4360</link>
      <guid>https://doi.org/10.46298/arima.4360</guid>
      <author>Tchinda, René</author>
      <author>Ngouateu Wouagfack, Paiguy Armand</author>
      <dc:creator>Tchinda, René</dc:creator>
      <dc:creator>Ngouateu Wouagfack, Paiguy Armand</dc:creator>
      <content:encoded><![CDATA[The thermo-ecological performance optimization of absorption systems is investigated by taking the ecological coefficient of performance (ECOP) as an objective function. The ECOP has been expressed in terms of the temperatures of the working fluid in the main components of the system. The maximum of the ECOP and the corresponding optimal temperatures of the working fluid, as well as other optimal performance design parameters such as the coefficient of performance, the specific cooling load of absorption refrigerators, the specific heating load of absorption heat pumps, the specific entropy generation rate and the distribution of the heat exchanger areas, have been derived analytically. The obtained results may provide a general theoretical tool for the ecological design of absorption refrigerators and heat pumps.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Chronic myeloid leukemia model with periodic pulsed treatment</title>
      <description><![CDATA[In this work we develop a mathematical model of chronic myeloid leukemia including treatment with instantaneous effects. Our analysis focuses on the values of the growth rate γ that give either stability or instability of the disease-free equilibrium. If the growth rate γ of sensitive leukemic stem cells is less than some threshold γ*, we obtain the stability of the disease-free equilibrium, which means that the disease is eradicated for any treatment period τ0. Otherwise, for γ greater than γ*, the treatment period must be less than some specific value τ0*. In the critical case when the treatment period is equal to τ0*, we observe persistence of the tumor, which means that the disease remains viable.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:23:23 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4990</link>
      <guid>https://doi.org/10.46298/arima.4990</guid>
      <author>Mohamed, Helal</author>
      <author>Lakmeche, Abdelkader</author>
      <author>Souna, Fethi</author>
      <dc:creator>Mohamed, Helal</dc:creator>
      <dc:creator>Lakmeche, Abdelkader</dc:creator>
      <dc:creator>Souna, Fethi</dc:creator>
      <content:encoded><![CDATA[In this work we develop a mathematical model of chronic myeloid leukemia including treatment with instantaneous effects. Our analysis focuses on the values of the growth rate γ that give either stability or instability of the disease-free equilibrium. If the growth rate γ of sensitive leukemic stem cells is less than some threshold γ*, we obtain the stability of the disease-free equilibrium, which means that the disease is eradicated for any treatment period τ0. Otherwise, for γ greater than γ*, the treatment period must be less than some specific value τ0*. In the critical case when the treatment period is equal to τ0*, we observe persistence of the tumor, which means that the disease remains viable.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Nonlinear Control for Isolated DC MicroGrids</title>
      <description><![CDATA[Access to electricity for isolated communities is crucial for improving life in those societies. The use of renewable energy sources (renewables) and energy storage systems is the key to clean energy supply in remote areas without a main grid. The standalone operation of renewables represents a challenge for operation and reliability; therefore the DC MicroGrid concept is seen as a powerful solution allowing renewables integration and reliable operation of the system in a simple way. This paper proposes a distributed nonlinear control strategy for an isolated MicroGrid composed of renewables and storage systems with different timescales supplying a DC load. The simulation results show the behavior of the proposed MicroGrid, and a comparison with classical linear control is made to highlight the control performance.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:22:57 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.5356</link>
      <guid>https://doi.org/10.46298/arima.5356</guid>
      <author>Perez, Filipe</author>
      <author>Damm, Gilney</author>
      <author>Lamnabhi-Lagarrigue, Francoise</author>
      <author>Ribeiro, Paulo</author>
      <dc:creator>Perez, Filipe</dc:creator>
      <dc:creator>Damm, Gilney</dc:creator>
      <dc:creator>Lamnabhi-Lagarrigue, Francoise</dc:creator>
      <dc:creator>Ribeiro, Paulo</dc:creator>
      <content:encoded><![CDATA[Access to electricity for isolated communities is crucial for improving life in those societies. The use of renewable energy sources (renewables) and energy storage systems is the key to clean energy supply in remote areas without a main grid. The standalone operation of renewables represents a challenge for operation and reliability; therefore the DC MicroGrid concept is seen as a powerful solution allowing renewables integration and reliable operation of the system in a simple way. This paper proposes a distributed nonlinear control strategy for an isolated MicroGrid composed of renewables and storage systems with different timescales supplying a DC load. The simulation results show the behavior of the proposed MicroGrid, and a comparison with classical linear control is made to highlight the control performance.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Global stability of a fractional order HIV infection model with cure of infected cells in eclipse stage</title>
      <description><![CDATA[Modeling by fractional order differential equations offers advantages in describing the dynamics of phenomena with memory, which exist in many biological systems. In this paper, we propose a fractional order model for human immunodeficiency virus (HIV) infection that includes a class of infected cells not yet producing virus, i.e., cells in the eclipse stage. We first prove the positivity and boundedness of solutions in order to ensure the well-posedness of the proposed model. By constructing appropriate Lyapunov functionals, the global stability of the disease-free equilibrium and the chronic infection equilibrium is established. Numerical simulations are presented in order to validate our theoretical results.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:22:28 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4359</link>
      <guid>https://doi.org/10.46298/arima.4359</guid>
      <author>Bachraoui, Moussa</author>
      <author>Hattaf, Khalid</author>
      <author>Yousfi, Noura</author>
      <dc:creator>Bachraoui, Moussa</dc:creator>
      <dc:creator>Hattaf, Khalid</dc:creator>
      <dc:creator>Yousfi, Noura</dc:creator>
      <content:encoded><![CDATA[Modeling by fractional order differential equations offers advantages in describing the dynamics of phenomena with memory, which exist in many biological systems. In this paper, we propose a fractional order model for human immunodeficiency virus (HIV) infection that includes a class of infected cells not yet producing virus, i.e., cells in the eclipse stage. We first prove the positivity and boundedness of solutions in order to ensure the well-posedness of the proposed model. By constructing appropriate Lyapunov functionals, the global stability of the disease-free equilibrium and the chronic infection equilibrium is established. Numerical simulations are presented in order to validate our theoretical results.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Challenges of mastering the energy sector and sustainable solutions for development in Africa</title>
      <description><![CDATA[The African continent is currently experiencing a period of sustained economic and population growth that requires massive investment in the energy sector to effectively meet energy needs in the context of sustainable development. At the same time, the COP21 agreements now call on all states to use clean energy. Yet, although great energy potential is available, the continent currently accounts for only 3% of the world's energy production. Moreover, the demand for energy in both quality and quantity requires mastery of applied mathematical tools to efficiently solve problems arising in the energy system. In this article, the major problems affecting the energy sector in Africa are identified, some solutions to the challenges are recalled and some new ones are proposed, with emphasis on applied mathematics tools as well as energy policy. As a case study, a new control strategy for Static Synchronous Series Compensator (SSSC) devices, which are modern power quality Flexible Alternating Current Transmission Systems (FACTS), is proposed for power flow control.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:21:49 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4269</link>
      <guid>https://doi.org/10.46298/arima.4269</guid>
      <author>Nguimfack-Ndongmo, J., D. D.</author>
      <author>Muluh Fombu, A.</author>
      <author>Sonfack, L., L.</author>
      <author>Kuaté-Fochie, R.</author>
      <author>Kenné, G.</author>
      <author>Lamnabhi-Lagarrigue, F.</author>
      <dc:creator>Nguimfack-Ndongmo, J., D. D.</dc:creator>
      <dc:creator>Muluh Fombu, A.</dc:creator>
      <dc:creator>Sonfack, L., L.</dc:creator>
      <dc:creator>Kuaté-Fochie, R.</dc:creator>
      <dc:creator>Kenné, G.</dc:creator>
      <dc:creator>Lamnabhi-Lagarrigue, F.</dc:creator>
      <content:encoded><![CDATA[The African continent is currently experiencing a period of sustained economic and population growth that requires massive investment in the energy sector to effectively meet energy needs in the context of sustainable development. At the same time, the COP21 agreements now call on all states to use clean energy. Yet, although great energy potential is available, the continent currently accounts for only 3% of the world's energy production. Moreover, the demand for energy in both quality and quantity requires mastery of applied mathematical tools to efficiently solve problems arising in the energy system. In this article, the major problems affecting the energy sector in Africa are identified, some solutions to the challenges are recalled and some new ones are proposed, with emphasis on applied mathematics tools as well as energy policy. As a case study, a new control strategy for Static Synchronous Series Compensator (SSSC) devices, which are modern power quality Flexible Alternating Current Transmission Systems (FACTS), is proposed for power flow control.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Dynamics of an HBV infection model with cell-to-cell transmission and CTL immune response</title>
      <description><![CDATA[In this work, we propose a mathematical model to describe the dynamics of hepatitis B virus (HBV) infection by taking into account the cure of infected cells, the export of precursor cytotoxic T lymphocyte (CTL) cells from the thymus, and both modes of transmission: virus-to-cell infection and cell-to-cell transmission. The local stability of the disease-free equilibrium and the chronic infection equilibrium is obtained via characteristic equations. Furthermore, the global stability of both equilibria is established by using two techniques: the direct Lyapunov method for the disease-free equilibrium and the geometrical approach for the chronic infection equilibrium.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:21:21 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4329</link>
      <guid>https://doi.org/10.46298/arima.4329</guid>
      <author>Besbassi, Hajar</author>
      <author>Elrhoubari, Zineb</author>
      <author>Hattaf, Khalid</author>
      <author>Noura, Yousfi</author>
      <dc:creator>Besbassi, Hajar</dc:creator>
      <dc:creator>Elrhoubari, Zineb</dc:creator>
      <dc:creator>Hattaf, Khalid</dc:creator>
      <dc:creator>Noura, Yousfi</dc:creator>
      <content:encoded><![CDATA[In this work, we propose a mathematical model to describe the dynamics of hepatitis B virus (HBV) infection by taking into account the cure of infected cells, the export of precursor cytotoxic T lymphocyte (CTL) cells from the thymus, and both modes of transmission: virus-to-cell infection and cell-to-cell transmission. The local stability of the disease-free equilibrium and the chronic infection equilibrium is obtained via characteristic equations. Furthermore, the global stability of both equilibria is established by using two techniques: the direct Lyapunov method for the disease-free equilibrium and the geometrical approach for the chronic infection equilibrium.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Application des modèles mathématiques pour l’optimisation de l’énergie dans un système PV</title>
      <description><![CDATA[This paper proposes an approach to optimize the energy of a stand-alone photovoltaic (PV) system in isolated regions. The intended objective is household energy comfort. The aim is to present the impact of the energy flow of the housing on the system reliability. The operation of the stand-alone PV system is represented by a simulation program. This program describes the principle of energy equilibrium among the various sub-systems, using mathematical models of the different parts of the renewable energy system. The recommended models were implemented in Matlab-Simulink with real input data. Reliability is achieved by reducing the loss of power supply probability criterion, while improving the battery life cycle over the operating years of the PV system.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:20:09 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4378</link>
      <guid>https://doi.org/10.46298/arima.4378</guid>
      <author>Semaoui, Smail</author>
      <author>Hadj Arab, Amar</author>
      <author>Bacha, Seddik</author>
      <dc:creator>Semaoui, Smail</dc:creator>
      <dc:creator>Hadj Arab, Amar</dc:creator>
      <dc:creator>Bacha, Seddik</dc:creator>
      <content:encoded><![CDATA[This paper proposes an approach to optimize the energy of a stand-alone photovoltaic (PV) system in isolated regions. The intended objective is household energy comfort. The aim is to present the impact of the energy flow of the housing on the system reliability. The operation of the stand-alone PV system is represented by a simulation program. This program describes the principle of energy equilibrium among the various sub-systems, using mathematical models of the different parts of the renewable energy system. The recommended models were implemented in Matlab-Simulink with real input data. Reliability is achieved by reducing the loss of power supply probability criterion, while improving the battery life cycle over the operating years of the PV system.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A mathematical model on the effect of growth hormone on glucose homeostasis</title>
      <description><![CDATA[Extending an existing model devoted to the interaction between β-Cell Mass, Insulin, Glucose, Receptor Dynamics and Free Fatty Acids in glucose regulatory system simulation, this paper proposes a mathematical model introducing the effect of growth hormone on glucose homeostasis alongside the other variables. Stability analysis is carried out and a pragmatic interpretation of the equilibrium points is emphasized. Finally, simulations illustrate how β-Cell Mass, Insulin, Glucose, Receptor Dynamics, Free Fatty Acids and Growth Hormone may vary with different values of some parameters in the model.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:19:35 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4945</link>
      <guid>https://doi.org/10.46298/arima.4945</guid>
      <author>Ali, Hannah Al</author>
      <author>Wiam, Boutayeb</author>
      <author>Abdesslam, Boutayeb</author>
      <author>Nora, Merabet</author>
      <dc:creator>Ali, Hannah Al</dc:creator>
      <dc:creator>Wiam, Boutayeb</dc:creator>
      <dc:creator>Abdesslam, Boutayeb</dc:creator>
      <dc:creator>Nora, Merabet</dc:creator>
      <content:encoded><![CDATA[Extending an existing model devoted to the interaction between β-Cell Mass, Insulin, Glucose, Receptor Dynamics and Free Fatty Acids in glucose regulatory system simulation, this paper proposes a mathematical model introducing the effect of growth hormone on glucose homeostasis alongside the other variables. Stability analysis is carried out and a pragmatic interpretation of the equilibrium points is emphasized. Finally, simulations illustrate how β-Cell Mass, Insulin, Glucose, Receptor Dynamics, Free Fatty Acids and Growth Hormone may vary with different values of some parameters in the model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimal control of a parabolic solar collector</title>
      <description><![CDATA[The aim of this paper is to study an optimal control problem for a parabolic solar collector. We consider a bilinear distributed model, where the control models the velocity of the heat-transfer fluid. We prove the existence of an optimal control, and we derive a necessary optimality condition. Then we give an algorithm for the computation of the optimal control. The obtained results are illustrated by simulations of the collector model, using data from the Ain Beni Mathar solar plant in Morocco.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:19:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4371</link>
      <guid>https://doi.org/10.46298/arima.4371</guid>
      <author>El Boukhari, Nihale</author>
      <author>Zerrik, El Hassan</author>
      <dc:creator>El Boukhari, Nihale</dc:creator>
      <dc:creator>Zerrik, El Hassan</dc:creator>
      <content:encoded><![CDATA[The aim of this paper is to study an optimal control problem for a parabolic solar collector. We consider a bilinear distributed model, where the control models the velocity of the heat-transfer fluid. We prove the existence of an optimal control, and we derive a necessary optimality condition. Then we give an algorithm for the computation of the optimal control. The obtained results are illustrated by simulations of the collector model, using data from the Ain Beni Mathar solar plant in Morocco.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Photovoltaic Hybrid Systems for remote villages</title>
      <description><![CDATA[Electricity access in remote areas of Sub-Saharan Africa is limited due to the high cost of grid extension to areas characterised by low population and energy densities. Photovoltaic hybrid systems can be sized using an energy balance equation involving one unknown. For a hypothetical village with an average daily energy demand of 153.6 kWh/d, the monthly energy output of photovoltaic modules at Garoua, Cameroon, enabled the evaluation of feasible photovoltaic hybrid system (PVHS) options. An option with a renewable energy fraction of 0.557 and lower initial investment is suggested for the electrification of remote villages in Sub-Saharan African countries with high solar radiation levels. This option comprises a 23.56 kWp PV array, a 15 kWp PV inverter, a 25 kW bi-directional inverter, a battery bank of capacity 324.48 kWh and a 25 kW diesel generator with an operating time of 1309 h/yr or 3.59 h/d. The size of the PV array determined is smaller than the PV array sizes in the range 30-45 kWp evaluated using the HOMER software for medium villages in Senegal.]]></description>
      <pubDate>Sat, 08 Jun 2019 19:16:19 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4350</link>
      <guid>https://doi.org/10.46298/arima.4350</guid>
      <author>Ngundam, J.M.</author>
      <author>Kenne, G.</author>
      <author>Nfah, Eustace Mbaka</author>
      <dc:creator>Ngundam, J.M.</dc:creator>
      <dc:creator>Kenne, G.</dc:creator>
      <dc:creator>Nfah, Eustace Mbaka</dc:creator>
      <content:encoded><![CDATA[Electricity access in remote areas of Sub-Saharan Africa is limited due to the high cost of grid extension to areas characterised by low population and energy densities. Photovoltaic hybrid systems can be sized using an energy balance equation involving one unknown. For a hypothetical village with an average daily energy demand of 153.6 kWh/d, the monthly energy output of photovoltaic modules at Garoua, Cameroon, enabled the evaluation of feasible photovoltaic hybrid system (PVHS) options. An option with a renewable energy fraction of 0.557 and lower initial investment is suggested for the electrification of remote villages in Sub-Saharan African countries with high solar radiation levels. This option comprises a 23.56 kWp PV array, a 15 kWp PV inverter, a 25 kW bi-directional inverter, a battery bank of capacity 324.48 kWh and a 25 kW diesel generator with an operating time of 1309 h/yr or 3.59 h/d. The size of the PV array determined is smaller than the PV array sizes in the range 30-45 kWp evaluated using the HOMER software for medium villages in Senegal.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Editorial Special Issue, Volume 30, MADEV Health and Energy</title>
      <description><![CDATA[International audience]]></description>
      <pubDate>Sat, 08 Jun 2019 19:10:08 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.5556</link>
      <guid>https://doi.org/10.46298/arima.5556</guid>
      <author>Lamnabhi-Lagarrigue, Françoise</author>
      <author>Noura, Yousfi</author>
      <author>Gmati, Nabil</author>
      <dc:creator>Lamnabhi-Lagarrigue, Françoise</dc:creator>
      <dc:creator>Noura, Yousfi</dc:creator>
      <dc:creator>Gmati, Nabil</dc:creator>
      <content:encoded><![CDATA[International audience]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Social Information Retrieval and Recommendation: state-of-the-art and future research</title>
      <description><![CDATA[The explosion of Web 2.0 and social networks has created an enormous and rewarding source of information that has motivated researchers in different fields to exploit it. Our work revolves around the issue of accessing and identifying social information and its use in building a user profile enriched with a social dimension, operating within a personalization and recommendation process. We study several social information retrieval (IR) approaches, distinguished by the type of social information they incorporate. We also study various social recommendation approaches, classified by the type of recommendation. We then present a study of techniques for modeling the social dimension of the user profile, followed by a critical discussion. Finally, we propose our own social recommendation approach, integrating an advanced social user profile model.]]></description>
      <pubDate>Fri, 19 Apr 2019 19:45:21 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3101</link>
      <guid>https://doi.org/10.46298/arima.3101</guid>
      <author>Gorrab, Abir</author>
      <author>Kboubi, Ferihane</author>
      <author>Ghézala, Henda, Ben</author>
      <dc:creator>Gorrab, Abir</dc:creator>
      <dc:creator>Kboubi, Ferihane</dc:creator>
      <dc:creator>Ghézala, Henda, Ben</dc:creator>
      <content:encoded><![CDATA[The explosion of Web 2.0 and social networks has created an enormous and rewarding source of information that has motivated researchers in different fields to exploit it. Our work revolves around the issue of accessing and identifying social information and its use in building a user profile enriched with a social dimension, operating within a personalization and recommendation process. We study several social information retrieval (IR) approaches, distinguished by the type of social information they incorporate. We also study various social recommendation approaches, classified by the type of recommendation. We then present a study of techniques for modeling the social dimension of the user profile, followed by a critical discussion. Finally, we propose our own social recommendation approach, integrating an advanced social user profile model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>FE approximation for hybrid Naghdi equations for shells with G1-midsurface</title>
      <description><![CDATA[The purpose of this work is to consider a hybrid formulation of Naghdi's shell model with G1-midsurface, based on the model already introduced by H. Le Dret in [1], and to prove its well-posedness. Here, the displacement and the rotation of the normal to the midsurface are given in Cartesian and local covariant or contravariant bases, respectively. This new version enables us, in particular, to approximate the solution by conforming finite elements with fewer degrees of freedom. Numerical tests are given to illustrate the efficiency of our approach.]]></description>
      <pubDate>Thu, 11 Oct 2018 07:09:46 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4668</link>
      <guid>https://doi.org/10.46298/arima.4668</guid>
      <author>Refka, Barbouche</author>
      <dc:creator>Refka, Barbouche</dc:creator>
      <content:encoded><![CDATA[The purpose of the present work is to consider a hybrid formulation of Naghdi's shell model with G1-midsurface, already introduced by H. Le Dret in [1], and to prove its well-posedness. Here, the displacement and the rotation of the normal to the midsurface are given in Cartesian and in local covariant or contravariant bases, respectively. This new version enables us, in particular, to approximate the solution by conforming finite elements with fewer degrees of freedom. Numerical tests are given to illustrate the efficiency of our approach.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Fitting rainfall data with thin plate smoothing splines: Application to Cameroonian rainfall</title>
      <description><![CDATA[In this paper we present a numerical method for drawing a rainfall map. We use thin plate smoothing splines to fit a smooth surface through meteorological data such as rainfall observations. The determination of the smoothing parameters gives rise to the minimization of a multidimensional nonlinear function whose evaluation is very computationally intensive. We propose a cost-effective heuristic approach for the calculation of the smoothing parameters. The proposed method is tested on a set of rainfall observations gathered by the department of national meteorology in Cameroon.]]></description>
      <pubDate>Tue, 21 Aug 2018 10:29:27 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3996</link>
      <guid>https://doi.org/10.46298/arima.3996</guid>
      <author>Nyamsi, Madeleine</author>
      <dc:creator>Nyamsi, Madeleine</dc:creator>
      <content:encoded><![CDATA[In this paper we present a numerical method for drawing a rainfall map. We use thin plate smoothing splines to fit a smooth surface through meteorological data such as rainfall observations. The determination of the smoothing parameters gives rise to the minimization of a multidimensional nonlinear function whose evaluation is very computationally intensive. We propose a cost-effective heuristic approach for the calculation of the smoothing parameters. The proposed method is tested on a set of rainfall observations gathered by the department of national meteorology in Cameroon.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Non-parametric kernel-based bit error probability estimation in digital communication systems: An estimator for soft coded QAM BER computation</title>
      <description><![CDATA[Standard Monte Carlo estimation of rare-event probabilities suffers from excessive computational time. To speed up estimation, kernel-based estimators have proved more efficient for binary systems while being better suited to situations where the probability density function of the samples is unknown. We propose a kernel-based Bit Error Probability (BEP) estimator for coded M-ary Quadrature Amplitude Modulation (QAM) systems. We define soft real bits upon which an Epanechnikov kernel-based estimator is designed. Compared to the standard Monte Carlo simulation technique, simulation results show accurate, reliable and efficient BEP estimates for 4-QAM and 16-QAM symbol transmissions over the additive white Gaussian noise channel and over a frequency-selective Rayleigh fading channel.]]></description>
      <pubDate>Fri, 03 Aug 2018 07:03:09 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.4348</link>
      <guid>https://doi.org/10.46298/arima.4348</guid>
      <author>Poda, Pasteur</author>
      <author>Saoudi, Samir</author>
      <author>Chonavel, Thierry</author>
      <author>Guilloud, Frédéric</author>
      <author>Tapsoba, Théodore, Marie-Yves</author>
      <dc:creator>Poda, Pasteur</dc:creator>
      <dc:creator>Saoudi, Samir</dc:creator>
      <dc:creator>Chonavel, Thierry</dc:creator>
      <dc:creator>Guilloud, Frédéric</dc:creator>
      <dc:creator>Tapsoba, Théodore, Marie-Yves</dc:creator>
      <content:encoded><![CDATA[Standard Monte Carlo estimation of rare-event probabilities suffers from excessive computational time. To speed up estimation, kernel-based estimators have proved more efficient for binary systems while being better suited to situations where the probability density function of the samples is unknown. We propose a kernel-based Bit Error Probability (BEP) estimator for coded M-ary Quadrature Amplitude Modulation (QAM) systems. We define soft real bits upon which an Epanechnikov kernel-based estimator is designed. Compared to the standard Monte Carlo simulation technique, simulation results show accurate, reliable and efficient BEP estimates for 4-QAM and 16-QAM symbol transmissions over the additive white Gaussian noise channel and over a frequency-selective Rayleigh fading channel.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Efficient high order schemes for stiff ODEs in cardiac electrophysiology</title>
      <description><![CDATA[In this work we analyze the use of high-order exponential solvers for stiff ODEs in the context of cardiac electrophysiology modeling. The exponential Adams-Bashforth and the Rush-Larsen schemes are considered up to order 4. These methods are explicit multistep schemes. Their accuracy and cost are numerically analyzed in this paper and benchmarked against several classical explicit and implicit schemes at various orders. The analysis is carried out on quantities of particular interest in cardiac electrophysiology: the activation time ($t_a$), the recovery time ($t_r$) and the action potential duration ($APD$). The Beeler-Reuter ionic model, specifically designed for cardiac ventricular cells, is used for this study. It is shown that, in spite of the stiffness of the considered model, exponential solvers allow computation at large time steps, as large as for implicit methods. Moreover, in terms of cost for a given accuracy, a significant gain is achieved with exponential solvers. We conclude that accurate computations at large time steps are possible with explicit high-order methods, which is quite an important feature when considering stiff nonlinear ODEs.]]></description>
      <pubDate>Wed, 25 Apr 2018 09:23:53 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2668</link>
      <guid>https://doi.org/10.46298/arima.2668</guid>
      <author>Douanla Lontsi, Charlie</author>
      <author>Coudière, Yves</author>
      <author>Pierre, Charles</author>
      <dc:creator>Douanla Lontsi, Charlie</dc:creator>
      <dc:creator>Coudière, Yves</dc:creator>
      <dc:creator>Pierre, Charles</dc:creator>
      <content:encoded><![CDATA[In this work we analyze the use of high-order exponential solvers for stiff ODEs in the context of cardiac electrophysiology modeling. The exponential Adams-Bashforth and the Rush-Larsen schemes are considered up to order 4. These methods are explicit multistep schemes. Their accuracy and cost are numerically analyzed in this paper and benchmarked against several classical explicit and implicit schemes at various orders. The analysis is carried out on quantities of particular interest in cardiac electrophysiology: the activation time ($t_a$), the recovery time ($t_r$) and the action potential duration ($APD$). The Beeler-Reuter ionic model, specifically designed for cardiac ventricular cells, is used for this study. It is shown that, in spite of the stiffness of the considered model, exponential solvers allow computation at large time steps, as large as for implicit methods. Moreover, in terms of cost for a given accuracy, a significant gain is achieved with exponential solvers. We conclude that accurate computations at large time steps are possible with explicit high-order methods, which is quite an important feature when considering stiff nonlinear ODEs.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Management of Low-density Sensor-Actuator Network in a Virtual Architecture</title>
      <description><![CDATA[Wireless sensor networks (WSN) face many implementation problems such as connectivity, security, energy saving, fault tolerance, interference, collision, routing, etc. In this paper, we consider a low-density WSN where the distribution of the sensors is poor, together with the virtual architecture introduced by Wadaa et al., which provides a powerful and fast partitioning of the network into a set of clusters. To effectively route the information collected by each sensor node to the base station (sink node, located at the center of the network), we first propose a technique based on multiple communication frequencies in order to avoid interference during communications. Secondly, we propose an empty-cluster detection algorithm that identifies the area actually covered by the sensors after deployment, making it possible to react accordingly. Finally, we propose a strategy allowing mobile sensors (actuators) to move in order to preserve the WSN's connectivity, improve the routing of collected data, save the sensors' energy, improve the coverage of the area of interest, etc.]]></description>
      <pubDate>Mon, 12 Mar 2018 08:32:31 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3110</link>
      <guid>https://doi.org/10.46298/arima.3110</guid>
      <author>Kengne Tchendji, Vianney</author>
      <author>Paho Nana, Blaise</author>
      <dc:creator>Kengne Tchendji, Vianney</dc:creator>
      <dc:creator>Paho Nana, Blaise</dc:creator>
      <content:encoded><![CDATA[Wireless sensor networks (WSN) face many implementation problems such as connectivity, security, energy saving, fault tolerance, interference, collision, routing, etc. In this paper, we consider a low-density WSN where the distribution of the sensors is poor, together with the virtual architecture introduced by Wadaa et al., which provides a powerful and fast partitioning of the network into a set of clusters. To effectively route the information collected by each sensor node to the base station (sink node, located at the center of the network), we first propose a technique based on multiple communication frequencies in order to avoid interference during communications. Secondly, we propose an empty-cluster detection algorithm that identifies the area actually covered by the sensors after deployment, making it possible to react accordingly. Finally, we propose a strategy allowing mobile sensors (actuators) to move in order to preserve the WSN's connectivity, improve the routing of collected data, save the sensors' energy, improve the coverage of the area of interest, etc.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Arabic topic identification based on empirical studies of topic models</title>
      <description><![CDATA[This paper focuses on topic identification for the Arabic language based on topic models. We study Latent Dirichlet Allocation (LDA) as an unsupervised method for Arabic topic identification. A deep study of LDA is carried out at two levels: the stemming process and the choice of LDA hyper-parameters. At the first level, we study the effect of different Arabic stemmers on LDA. At the second level, we focus on the LDA hyper-parameters α and β and their impact on topic identification. This study shows that LDA is an efficient method for Arabic topic identification, especially with the right choice of hyper-parameters. Another important result is the high impact of the stemming algorithm on topic identification.]]></description>
      <pubDate>Thu, 03 Aug 2017 06:26:47 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3102</link>
      <guid>https://doi.org/10.46298/arima.3102</guid>
      <author>Naili, Marwa</author>
      <author>Chaibi, Anja, Habacha</author>
      <author>Ghézala, Henda, Ben</author>
      <dc:creator>Naili, Marwa</dc:creator>
      <dc:creator>Chaibi, Anja, Habacha</dc:creator>
      <dc:creator>Ghézala, Henda, Ben</dc:creator>
      <content:encoded><![CDATA[This paper focuses on topic identification for the Arabic language based on topic models. We study Latent Dirichlet Allocation (LDA) as an unsupervised method for Arabic topic identification. A deep study of LDA is carried out at two levels: the stemming process and the choice of LDA hyper-parameters. At the first level, we study the effect of different Arabic stemmers on LDA. At the second level, we focus on the LDA hyper-parameters α and β and their impact on topic identification. This study shows that LDA is an efficient method for Arabic topic identification, especially with the right choice of hyper-parameters. Another important result is the high impact of the stemming algorithm on topic identification.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Unfolding through processes to compute the complete prefix of Petri nets</title>
      <description><![CDATA[The partial-order technique of unfolding implicitly represents the state space of a Petri net (PN), in particular by preserving the concurrency relations between events. This makes it possible to contain the state-space explosion problem in the case of strong concurrency. A complete prefix of the unfolding is used to cover the whole state space of a bounded PN: its computation according to the classical approach is based on the concept of adequate order, which directly handles only safe PNs. In this paper, a new approach, independent of the concept of adequate order and faithful to the partial-order semantics, consists in creating the events of the unfolding in the context of a single process at a time. Test results are conclusive for both safe and non-safe PNs. Some solutions are presented to improve the compactness of the obtained prefix.]]></description>
      <pubDate>Mon, 24 Jul 2017 09:14:12 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3177</link>
      <guid>https://doi.org/10.46298/arima.3177</guid>
      <author>Sogbohossou, Médésu</author>
      <author>Vianou, Antoine</author>
      <dc:creator>Sogbohossou, Médésu</dc:creator>
      <dc:creator>Vianou, Antoine</dc:creator>
      <content:encoded><![CDATA[The partial-order technique of unfolding implicitly represents the state space of a Petri net (PN), in particular by preserving the concurrency relations between events. This makes it possible to contain the state-space explosion problem in the case of strong concurrency. A complete prefix of the unfolding is used to cover the whole state space of a bounded PN: its computation according to the classical approach is based on the concept of adequate order, which directly handles only safe PNs. In this paper, a new approach, independent of the concept of adequate order and faithful to the partial-order semantics, consists in creating the events of the unfolding in the context of a single process at a time. Test results are conclusive for both safe and non-safe PNs. Some solutions are presented to improve the compactness of the obtained prefix.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>XPath bipolar queries and evaluation</title>
      <description><![CDATA[The concept of bipolar queries (also called preference queries) emerged in the relational databases community, allowing users to get much more relevant responses to their requests, expressed via so-called preference queries. Such requests usually have two parts: the first expresses strict constraints and the second, preferences or wishes. Any response to a query with preferences must necessarily satisfy the first part and preferably the latter. However, if there is at least one answer satisfying the second part, those satisfying only the first part are excluded from the final result: they are dominated. In this paper, we explore an approach for importing this concept into an XML database via the XPath language. To do this, we propose the PrefSXPath language, an extension of XPath to express XPath queries with structural preferences, and then we present a query evaluation algorithm for PrefSXPath using automata.]]></description>
      <pubDate>Tue, 11 Jul 2017 05:51:14 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.3108</link>
      <guid>https://doi.org/10.46298/arima.3108</guid>
      <author>Tchoupé Tchendji, Maurice</author>
      <author>Nguefack, Brice</author>
      <dc:creator>Tchoupé Tchendji, Maurice</dc:creator>
      <dc:creator>Nguefack, Brice</dc:creator>
      <content:encoded><![CDATA[The concept of bipolar queries (also called preference queries) emerged in the relational databases community, allowing users to get much more relevant responses to their requests, expressed via so-called preference queries. Such requests usually have two parts: the first expresses strict constraints and the second, preferences or wishes. Any response to a query with preferences must necessarily satisfy the first part and preferably the latter. However, if there is at least one answer satisfying the second part, those satisfying only the first part are excluded from the final result: they are dominated. In this paper, we explore an approach for importing this concept into an XML database via the XPath language. To do this, we propose the PrefSXPath language, an extension of XPath to express XPath queries with structural preferences, and then we present a query evaluation algorithm for PrefSXPath using automata.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Efficient controller synthesis of multi-energy systems for autonomous domestic water supply</title>
      <description><![CDATA[The continuous development of ICT facilitates the emergence and rapid proliferation of a wide variety of low-cost processors for executing programs in complex embedded applications. In this paper, we explore the possibility of benefiting from this wealth of computing capacity at a reasonable cost to solve concrete problems encountered in the implementation of sustainable development processes, particularly in water and energy supply. We focus on autonomous water supply in multi-storey buildings using several tanks supplied by several sources of water and pumping energy, based on multilevel hierarchical priority access to water. The first problem is to propose pumping devices and a switching process between power sources, associated with an architectural structure guaranteeing a significant reduction of pumping energy. The second problem is the realization of the system controller. For this, we propose a generic architecture justified by gains in potential energy. We also propose a tool for the automatic generation of control programs for different microprocessor targets from the functional design specification of the system given in Grafcet form. Finally, a case study is described to illustrate these contributions.]]></description>
      <pubDate>Wed, 05 Jul 2017 05:07:16 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1458</link>
      <guid>https://doi.org/10.46298/arima.1458</guid>
      <author>Nzebop Ndenoka, Gérard</author>
      <author>Simeu, Emmanuel</author>
      <author>Alhakim, Rshdee</author>
      <dc:creator>Nzebop Ndenoka, Gérard</dc:creator>
      <dc:creator>Simeu, Emmanuel</dc:creator>
      <dc:creator>Alhakim, Rshdee</dc:creator>
      <content:encoded><![CDATA[The continuous development of ICT facilitates the emergence and rapid proliferation of a wide variety of low-cost processors for executing programs in complex embedded applications. In this paper, we explore the possibility of benefiting from this wealth of computing capacity at a reasonable cost to solve concrete problems encountered in the implementation of sustainable development processes, particularly in water and energy supply. We focus on autonomous water supply in multi-storey buildings using several tanks supplied by several sources of water and pumping energy, based on multilevel hierarchical priority access to water. The first problem is to propose pumping devices and a switching process between power sources, associated with an architectural structure guaranteeing a significant reduction of pumping energy. The second problem is the realization of the system controller. For this, we propose a generic architecture justified by gains in potential energy. We also propose a tool for the automatic generation of control programs for different microprocessor targets from the functional design specification of the system given in Grafcet form. Finally, a case study is described to illustrate these contributions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Two-source randomness extractors in finite fields and in elliptic curves</title>
      <description><![CDATA[We propose two-source randomness extractors over finite fields and on elliptic curves that can extract from two sources of information without any assumptions other than the initial algorithmic ones, with a competitive level of security. These functions have several applications. We describe here a version of a Diffie-Hellman key exchange protocol with key extraction.]]></description>
      <pubDate>Mon, 26 Jun 2017 06:30:29 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1446</link>
      <guid>https://doi.org/10.46298/arima.1446</guid>
      <author>Tchapgnouo, Hortense Boudjou</author>
      <author>Ciss, Abdoul A.</author>
      <author>Sow, Djiby</author>
      <author>Kolyang, D.T.</author>
      <dc:creator>Tchapgnouo, Hortense Boudjou</dc:creator>
      <dc:creator>Ciss, Abdoul A.</dc:creator>
      <dc:creator>Sow, Djiby</dc:creator>
      <dc:creator>Kolyang, D.T.</dc:creator>
      <content:encoded><![CDATA[We propose two-source randomness extractors over finite fields and on elliptic curves that can extract from two sources of information without any assumptions other than the initial algorithmic ones, with a competitive level of security. These functions have several applications. We describe here a version of a Diffie-Hellman key exchange protocol with key extraction.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Social network ordering based on communities to reduce cache misses</title>
      <description><![CDATA[One of a social graph's properties is its community structure, that is, subsets where nodes belonging to the same subset have a higher link density between themselves and a low link density with nodes belonging to external subsets. Furthermore, most social network mining algorithms comprise a local exploration of the underlying graph, which consists in referencing nodes in the neighborhood of a particular node. The idea of this paper is to use the community structure when storing the large graphs that arise in social network mining. The goal is to reduce cache misses and, consequently, execution time. After formalizing the problem of social network ordering as an optimal linear arrangement problem, which is known to be NP-complete, we propose NumBaCo, a heuristic based on the community structure. For the Katz score and PageRank, we present simulations that compare the classic data structures Bloc and Yale to their corresponding versions using NumBaCo. Results on a 32-core NUMA machine using the amazon, dblp and web-google datasets show that NumBaCo reduces cache misses by 62% to 80% and execution time by 15% to 50%.]]></description>
      <pubDate>Wed, 10 May 2017 14:10:03 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1448</link>
      <guid>https://doi.org/10.46298/arima.1448</guid>
      <author>Messi Nguélé, Thomas, Messi</author>
      <author>Tchuente, Maurice</author>
      <author>Méhaut, Jean-François</author>
      <dc:creator>Messi Nguélé, Thomas, Messi</dc:creator>
      <dc:creator>Tchuente, Maurice</dc:creator>
      <dc:creator>Méhaut, Jean-François</dc:creator>
      <content:encoded><![CDATA[One of a social graph's properties is its community structure, that is, subsets where nodes belonging to the same subset have a higher link density between themselves and a low link density with nodes belonging to external subsets. Furthermore, most social network mining algorithms comprise a local exploration of the underlying graph, which consists in referencing nodes in the neighborhood of a particular node. The idea of this paper is to use the community structure when storing the large graphs that arise in social network mining. The goal is to reduce cache misses and, consequently, execution time. After formalizing the problem of social network ordering as an optimal linear arrangement problem, which is known to be NP-complete, we propose NumBaCo, a heuristic based on the community structure. For the Katz score and PageRank, we present simulations that compare the classic data structures Bloc and Yale to their corresponding versions using NumBaCo. Results on a 32-core NUMA machine using the amazon, dblp and web-google datasets show that NumBaCo reduces cache misses by 62% to 80% and execution time by 15% to 50%.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Growth model for collaboration networks</title>
      <description><![CDATA[We propose a model of growing networks based on clique formation. A clique illustrates, for example, co-authorship in co-publication networks, co-occurrence of words, or collaboration between actors of the same movie. Our model is iterative and, at each step, a clique of λη existing vertices and (1 − λ)η new vertices is created and added to the network; η is the mean number of vertices per clique and λ is the proportion of old vertices per clique. The old vertices are selected according to preferential attachment. We show that the degree distribution of the generated networks follows a power law with parameter 1 + 1/λ, and thus they are ultra small-world networks with high clustering coefficient and low density. Moreover, the networks generated by the proposed model match some real co-publication networks such as CARI, EGC and HepTh.]]></description>
      <pubDate>Thu, 16 Feb 2017 08:02:43 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1447</link>
      <guid>https://doi.org/10.46298/arima.1447</guid>
      <author>Meleu, Ghislain Romaric</author>
      <author>Melatagia Yonta, Paulin</author>
      <dc:creator>Meleu, Ghislain Romaric</dc:creator>
      <dc:creator>Melatagia Yonta, Paulin</dc:creator>
      <content:encoded><![CDATA[We propose a model of growing networks based on clique formation. A clique illustrates, for example, co-authorship in co-publication networks, co-occurrence of words, or collaboration between actors of the same movie. Our model is iterative and, at each step, a clique of λη existing vertices and (1 − λ)η new vertices is created and added to the network; η is the mean number of vertices per clique and λ is the proportion of old vertices per clique. The old vertices are selected according to preferential attachment. We show that the degree distribution of the generated networks follows a power law with parameter 1 + 1/λ, and thus they are ultra small-world networks with high clustering coefficient and low density. Moreover, the networks generated by the proposed model match some real co-publication networks such as CARI, EGC and HepTh.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Chemotherapy models</title>
      <description><![CDATA[A chemotherapeutic treatment model for a cell population with a resistant tumor is considered. We consider the case of two drugs, one with a pulsed effect and the other with a continuous effect. We investigate the stability of the trivial periodic solutions and the onset of nontrivial periodic solutions by means of Lyapunov-Schmidt bifurcation.]]></description>
      <pubDate>Tue, 27 Dec 2016 15:09:50 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1493</link>
      <guid>https://doi.org/10.46298/arima.1493</guid>
      <author>Charif, Fayssal</author>
      <author>Helal, Mohamed</author>
      <author>Lakmeche, Abdelkader</author>
      <dc:creator>Charif, Fayssal</dc:creator>
      <dc:creator>Helal, Mohamed</dc:creator>
      <dc:creator>Lakmeche, Abdelkader</dc:creator>
      <content:encoded><![CDATA[A chemotherapeutic treatment model for a cell population with a resistant tumor is considered. We consider the case of two drugs, one with a pulsed effect and the other with a continuous effect. We investigate the stability of the trivial periodic solutions and the onset of nontrivial periodic solutions by means of Lyapunov-Schmidt bifurcation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>UU polynomial matrix decomposition applied to 802.11ac beamforming</title>
      <description><![CDATA[This paper presents a MIMO-OFDM beamforming approach in an IEEE 802.11ac context. This beamforming technique has the same performance as the conventional technique while allowing the precoding and postcoding to be performed at once, regardless of the number of OFDM subcarriers.]]></description>
      <pubDate>Tue, 13 Dec 2016 13:48:58 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1486</link>
      <guid>https://doi.org/10.46298/arima.1486</guid>
      <author>Mbaye, Moustapha</author>
      <author>Diallo, Moussa</author>
      <author>Gueye, Bamba</author>
      <dc:creator>Mbaye, Moustapha</dc:creator>
      <dc:creator>Diallo, Moussa</dc:creator>
      <dc:creator>Gueye, Bamba</dc:creator>
      <content:encoded><![CDATA[This paper presents a MIMO-OFDM beamforming approach in an IEEE 802.11ac context. This beamforming technique has the same performance as the conventional technique while allowing the precoding and postcoding to be performed at once, regardless of the number of OFDM subcarriers.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Self-adaptive structuring for P2P-based large-scale Grid environment</title>
      <description><![CDATA[In this paper, we propose an extension and experimental evaluation of our self-adaptive structuring solution in a large-scale P2P Grid environment. The proposed specification enables service deployment, location and invocation while respecting the P2P network paradigm. Moreover, the specification is generic, i.e. not tied to a particular P2P architecture. The increasing number of resources and users in large-scale distributed systems has led to a scalability problem. To ensure scalability, we propose to organize the P2P grid nodes into virtual communities. Within each cluster, a particular node called ISP (Information System Proxy) acts as a service directory. On the other hand, resource discovery is one of the essential challenges in large-scale Grid environments. In this sense, we propose to build a spanning tree over the set of ISPs in order to allow efficient service lookup in the system. An experimental validation through simulation shows that our approach ensures high scalability in terms of cluster distribution and communication cost.]]></description>
      <pubDate>Tue, 13 Dec 2016 13:42:04 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2574</link>
      <guid>https://doi.org/10.46298/arima.2574</guid>
      <author>Gueye, Bassirou</author>
      <author>Flauzac, Olivier</author>
      <author>Rabat, Cyril</author>
      <author>Niang, Ibrahima</author>
      <dc:creator>Gueye, Bassirou</dc:creator>
      <dc:creator>Flauzac, Olivier</dc:creator>
      <dc:creator>Rabat, Cyril</dc:creator>
      <dc:creator>Niang, Ibrahima</dc:creator>
      <content:encoded><![CDATA[In this paper, we propose an extension and experimental evaluation of our self-adaptive structuring solution in a large-scale P2P Grid environment. The proposed specification enables service deployment, location, and invocation while respecting the P2P network paradigm. Moreover, the specification is generic, i.e., not tied to a particular P2P architecture. The increasing number of resources and users in large-scale distributed systems has led to a scalability problem. To ensure scalability, we propose to organize the P2P Grid nodes into virtual communities. A particular node called the ISP (Information System Proxy) acts as a service directory within each cluster. Furthermore, resource discovery is one of the essential challenges in large-scale Grid environments. To this end, we propose to build a spanning tree over the set of ISPs in order to allow efficient service lookup in the system. An experimental validation, through simulation, shows that our approach ensures high scalability in terms of cluster distribution and communication cost.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Complexity of Opacity algorithm in data-centric workflow system</title>
      <description><![CDATA[A property (of an object) is opaque to an observer when he or she cannot deduce the property from his or her set of observations. If each observer is attached to a given set of properties (the so-called secrets), then the system is said to be opaque if each secret is opaque to the corresponding observer. We study in this paper the complexity of the opacity problem in data-centric workflow systems. We show that this problem is EXPTIME-complete. Using a reduction argument, we show that the opacity problem can be reduced in polynomial time to a well-known problem, the intersection non-emptiness problem for tree automata.]]></description>
      <pubDate>Tue, 13 Dec 2016 13:41:15 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1479</link>
      <guid>https://doi.org/10.46298/arima.1479</guid>
      <author>Diouf, Mohamadou Lamine</author>
      <author>Pinchinat, Sophie</author>
      <dc:creator>Diouf, Mohamadou Lamine</dc:creator>
      <dc:creator>Pinchinat, Sophie</dc:creator>
      <content:encoded><![CDATA[A property (of an object) is opaque to an observer when he or she cannot deduce the property from his or her set of observations. If each observer is attached to a given set of properties (the so-called secrets), then the system is said to be opaque if each secret is opaque to the corresponding observer. We study in this paper the complexity of the opacity problem in data-centric workflow systems. We show that this problem is EXPTIME-complete. Using a reduction argument, we show that the opacity problem can be reduced in polynomial time to a well-known problem, the intersection non-emptiness problem for tree automata.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Regular Bohr-Sommerfeld quantization rules for a h-pseudo-differential operator. The method of positive commutators</title>
      <description><![CDATA[We revisit in this Note the well-known Bohr-Sommerfeld quantization rule (BS) for a 1-D pseudo-differential self-adjoint Hamiltonian within the algebraic and microlocal framework of Helffer and Sjöstrand; BS holds precisely when the Gram matrix consisting of scalar products of some WKB solutions with respect to the "flux norm" is not invertible.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:55:52 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2593</link>
      <guid>https://doi.org/10.46298/arima.2593</guid>
      <author>Ifa, Abdelwaheb</author>
      <author>Rouleux, Michel, L.</author>
      <dc:creator>Ifa, Abdelwaheb</dc:creator>
      <dc:creator>Rouleux, Michel, L.</dc:creator>
      <content:encoded><![CDATA[We revisit in this Note the well-known Bohr-Sommerfeld quantization rule (BS) for a 1-D pseudo-differential self-adjoint Hamiltonian within the algebraic and microlocal framework of Helffer and Sjöstrand; BS holds precisely when the Gram matrix consisting of scalar products of some WKB solutions with respect to the "flux norm" is not invertible.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Reconstruction of missing boundary conditions from partially overspecified data : the Stokes system</title>
      <description><![CDATA[In this paper we are interested in the ill-posed Cauchy-Stokes problem. We consider a data completion problem in which we aim at recovering missing data on one part of a domain boundary from the knowledge of partially overspecified data on the other part. The inverse problem is formulated as an optimization problem using an energy-like misfit functional. We give the first-order optimality condition in terms of an interfacial operator. The numerical results displayed highlight its accuracy.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:55:23 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1494</link>
      <guid>https://doi.org/10.46298/arima.1494</guid>
      <author>Ben Abda, Amel</author>
      <author>Khayat, Faten</author>
      <dc:creator>Ben Abda, Amel</dc:creator>
      <dc:creator>Khayat, Faten</dc:creator>
      <content:encoded><![CDATA[In this paper we are interested in the ill-posed Cauchy-Stokes problem. We consider a data completion problem in which we aim at recovering missing data on one part of a domain boundary from the knowledge of partially overspecified data on the other part. The inverse problem is formulated as an optimization problem using an energy-like misfit functional. We give the first-order optimality condition in terms of an interfacial operator. The numerical results displayed highlight its accuracy.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>UNIQUENESS FOR AN INVERSE PROBLEM FOR A DISSIPATIVE WAVE EQUATION WITH TIME DEPENDENT COEFFICIENT</title>
      <description><![CDATA[This paper deals with a hyperbolic inverse problem of determining a time-dependent coefficient a appearing in a dissipative wave equation from boundary observations. We prove, in dimension n greater than two, that a can be uniquely determined in a precise subset of the domain from the knowledge of the Dirichlet-to-Neumann map.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:54:51 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1497</link>
      <guid>https://doi.org/10.46298/arima.1497</guid>
      <author>Bellassoued, Mourad</author>
      <author>Ben Aicha, Ibtissem</author>
      <dc:creator>Bellassoued, Mourad</dc:creator>
      <dc:creator>Ben Aicha, Ibtissem</dc:creator>
      <content:encoded><![CDATA[This paper deals with a hyperbolic inverse problem of determining a time-dependent coefficient a appearing in a dissipative wave equation from boundary observations. We prove, in dimension n greater than two, that a can be uniquely determined in a precise subset of the domain from the knowledge of the Dirichlet-to-Neumann map.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Coupling Parareal with Non-Overlapping Domain Decomposition Method</title>
      <description><![CDATA[In this paper, we present a new parallel algorithm for time-dependent problems based on coupling parareal with a non-overlapping domain decomposition method, in order to increase parallelism in time and in space. To this end, we focus on iterative methods of parallelization in space, such as the Neumann-Neumann method, to solve the interface problem. In the new algorithm, the coarse temporal propagator is defined on the global domain, and the Neumann-Neumann method, run for a few iterations, is chosen as the fine propagator. We present a rigorous convergence analysis of the new coupled algorithm on a bounded time interval. Numerical experiments illustrate the performance of this new algorithm and confirm our analysis.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:54:23 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1474</link>
      <guid>https://doi.org/10.46298/arima.1474</guid>
      <author>Guetat, Rim</author>
      <dc:creator>Guetat, Rim</dc:creator>
      <content:encoded><![CDATA[In this paper, we present a new parallel algorithm for time-dependent problems based on coupling parareal with a non-overlapping domain decomposition method, in order to increase parallelism in time and in space. To this end, we focus on iterative methods of parallelization in space, such as the Neumann-Neumann method, to solve the interface problem. In the new algorithm, the coarse temporal propagator is defined on the global domain, and the Neumann-Neumann method, run for a few iterations, is chosen as the fine propagator. We present a rigorous convergence analysis of the new coupled algorithm on a bounded time interval. Numerical experiments illustrate the performance of this new algorithm and confirm our analysis.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An alternative algorithm for regularization of noisy volatility calibration in Finance</title>
      <description><![CDATA[This contribution is an extension of the work initiated in [1], presenting a strategy for the calibration of the local volatility. Owing to Morozov's discrepancy principle [6], the Tikhonov regularization problem introduced in [7] is understood as an inequality-constrained minimization problem. An Uzawa procedure is proposed to replace the latter by a sequence of unconstrained problems, each handled with the modified Tikhonov regularization procedure of [1]. Numerical tests confirm the consistency of the approach and the significant speed-up of the process of local volatility determination.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:53:56 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1492</link>
      <guid>https://doi.org/10.46298/arima.1492</guid>
      <author>Ibtissam, Medarhri</author>
      <author>Rajae, Aboulaich</author>
      <author>Debit, Naima</author>
      <dc:creator>Ibtissam, Medarhri</dc:creator>
      <dc:creator>Rajae, Aboulaich</dc:creator>
      <dc:creator>Debit, Naima</dc:creator>
      <content:encoded><![CDATA[This contribution is an extension of the work initiated in [1], presenting a strategy for the calibration of the local volatility. Owing to Morozov's discrepancy principle [6], the Tikhonov regularization problem introduced in [7] is understood as an inequality-constrained minimization problem. An Uzawa procedure is proposed to replace the latter by a sequence of unconstrained problems, each handled with the modified Tikhonov regularization procedure of [1]. Numerical tests confirm the consistency of the approach and the significant speed-up of the process of local volatility determination.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Inverse Problem: Stability for the aligned magnetic field by the Dirichlet-to-Neumann map for the wave equation in a periodic quantum waveguide</title>
      <description><![CDATA[We consider the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic wave equation in a periodic quantum cylindrical waveguide from boundary observations. The observation is given by the Dirichlet-to-Neumann map associated with the wave equation. We prove, by means of geometrical optics solutions of the magnetic wave equation, that the knowledge of the Dirichlet-to-Neumann map uniquely determines the aligned magnetic field induced by a time-independent and 1-periodic magnetic potential. We establish a Hölder-type stability estimate for the inverse problem.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:53:22 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1509</link>
      <guid>https://doi.org/10.46298/arima.1509</guid>
      <author>Youssef, Mejri</author>
      <dc:creator>Youssef, Mejri</dc:creator>
      <content:encoded><![CDATA[We consider the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic wave equation in a periodic quantum cylindrical waveguide from boundary observations. The observation is given by the Dirichlet-to-Neumann map associated with the wave equation. We prove, by means of geometrical optics solutions of the magnetic wave equation, that the knowledge of the Dirichlet-to-Neumann map uniquely determines the aligned magnetic field induced by a time-independent and 1-periodic magnetic potential. We establish a Hölder-type stability estimate for the inverse problem.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Inverse heat source problem for a coupled hyperbolic-parabolic system</title>
      <description><![CDATA[In this paper we consider a coupled system of mixed hyperbolic-parabolic type which describes the Biot consolidation model in poro-elasticity. Using a local Carleman estimate for a coupled hyperbolic-parabolic system, we prove the uniqueness and a Hölder stability in determining the heat source from a single measurement of the solution over ω × (0, T), where T > 0 is a sufficiently large time and ω ⊂ Ω is a suitable subdomain such that ∂ω ⊃ ∂Ω.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:52:52 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1487</link>
      <guid>https://doi.org/10.46298/arima.1487</guid>
      <author>Bellassoued, Mourad</author>
      <author>Riahi, Bochra</author>
      <dc:creator>Bellassoued, Mourad</dc:creator>
      <dc:creator>Riahi, Bochra</dc:creator>
      <content:encoded><![CDATA[In this paper we consider a coupled system of mixed hyperbolic-parabolic type which describes the Biot consolidation model in poro-elasticity. Using a local Carleman estimate for a coupled hyperbolic-parabolic system, we prove the uniqueness and a Hölder stability in determining the heat source from a single measurement of the solution over ω × (0, T), where T > 0 is a sufficiently large time and ω ⊂ Ω is a suitable subdomain such that ∂ω ⊃ ∂Ω.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Solvability of Mixed Problems With Integral Condition for Singular Parabolic Equations</title>
      <description><![CDATA[In this paper we prove the existence and uniqueness of a strong generalized solution of mixed problems with an integral condition for singular parabolic equations, relying on a theorem proved in [1] in which an a priori estimate of the solution for such problems was derived.]]></description>
      <pubDate>Tue, 13 Dec 2016 10:52:23 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1519</link>
      <guid>https://doi.org/10.46298/arima.1519</guid>
      <author>Almomani, Raid</author>
      <dc:creator>Almomani, Raid</dc:creator>
      <content:encoded><![CDATA[In this paper we prove the existence and uniqueness of a strong generalized solution of mixed problems with an integral condition for singular parabolic equations, relying on a theorem proved in [1] in which an a priori estimate of the solution for such problems was derived.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A self-stabilizing algorithm for a hierarchical middleware self-adaptive deployment : specification, proof, simulations</title>
      <description><![CDATA[An effective solution to deal with the dynamic nature of distributed systems is to implement a self-adaptive mechanism to sustain the distributed architecture. Self-adaptive systems can autonomously modify their behavior at run-time in response to changes in their environment. This paper describes the self-adaptive algorithm that we developed for an existing middleware. Once the middleware is deployed, it can detect a set of events which indicate an unstable deployment state. When an event is detected, a set of instructions is executed to handle it. We propose a sketch of a proof of the self-stabilizing property of the algorithm. We have designed a simulator to gain deeper insight into our proposed self-adaptive algorithm. The results of our simulated experiments validate the safe convergence of the algorithm.]]></description>
      <pubDate>Mon, 12 Dec 2016 13:12:39 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1473</link>
      <guid>https://doi.org/10.46298/arima.1473</guid>
      <author>Faye, Maurice-Djibril</author>
      <author>Caron, Eddy</author>
      <author>Thiare, Ousmane</author>
      <dc:creator>Faye, Maurice-Djibril</dc:creator>
      <dc:creator>Caron, Eddy</dc:creator>
      <dc:creator>Thiare, Ousmane</dc:creator>
      <content:encoded><![CDATA[An effective solution to deal with the dynamic nature of distributed systems is to implement a self-adaptive mechanism to sustain the distributed architecture. Self-adaptive systems can autonomously modify their behavior at run-time in response to changes in their environment. This paper describes the self-adaptive algorithm that we developed for an existing middleware. Once the middleware is deployed, it can detect a set of events which indicate an unstable deployment state. When an event is detected, a set of instructions is executed to handle it. We propose a sketch of a proof of the self-stabilizing property of the algorithm. We have designed a simulator to gain deeper insight into our proposed self-adaptive algorithm. The results of our simulated experiments validate the safe convergence of the algorithm.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Discovering frequent patterns guided by an ontology</title>
      <description><![CDATA[Frequent pattern mining generates a huge number of patterns and therefore requires effective post-processing to target the most useful ones. This paper proposes an approach to discovering useful frequent patterns that integrates knowledge described by the expert and represented in an ontology associated with the data. The approach uses the ontology to benefit from more structured information in order to remove some frequent patterns from the analysis. The experiments carried out with our approach give satisfactory results.]]></description>
      <pubDate>Wed, 07 Dec 2016 08:01:19 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2558</link>
      <guid>https://doi.org/10.46298/arima.2558</guid>
      <author>Traore, Yaya</author>
      <author>Diop, Cheikh Talibouya</author>
      <author>Malo, Sadouanouan</author>
      <author>Lo, Moussa</author>
      <author>Ouaro, Stanislas</author>
      <dc:creator>Traore, Yaya</dc:creator>
      <dc:creator>Diop, Cheikh Talibouya</dc:creator>
      <dc:creator>Malo, Sadouanouan</dc:creator>
      <dc:creator>Lo, Moussa</dc:creator>
      <dc:creator>Ouaro, Stanislas</dc:creator>
      <content:encoded><![CDATA[Frequent pattern mining generates a huge number of patterns and therefore requires effective post-processing to target the most useful ones. This paper proposes an approach to discovering useful frequent patterns that integrates knowledge described by the expert and represented in an ontology associated with the data. The approach uses the ontology to benefit from more structured information in order to remove some frequent patterns from the analysis. The experiments carried out with our approach give satisfactory results.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Population dynamics modelling: Impact of climate change on tick populations</title>
      <description><![CDATA[Epidemiology has developed considerably in recent years, allowing the resolution of a large number of problems and providing good predictions of disease evolution. However, the transmission of several vector-borne diseases is closely connected to environmental protagonists, especially in the parasite-host interaction. Moreover, understanding disease transmission requires studying the ecology of all protagonists. These two levels of complexity (epidemiology and ecology) cannot be separated and have to be studied as a whole in a systematic way. Our goal is to understand the impact of climate change on the evolution of a disease when the vector has an ecological niche that depends on its physiological state of development. We are particularly interested in tick-borne diseases, which are a serious health problem affecting humans as well as domestic animals in many parts of the world. These infections are transmitted through the bite of an infected tick, and it appears that most of them are widely present in some wildlife species.]]></description>
      <pubDate>Mon, 05 Dec 2016 08:08:18 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2553</link>
      <guid>https://doi.org/10.46298/arima.2553</guid>
      <author>Khouaja, Leila</author>
      <author>Ben Miled, Slimane</author>
      <author>Hbid, Hassan</author>
      <dc:creator>Khouaja, Leila</dc:creator>
      <dc:creator>Ben Miled, Slimane</dc:creator>
      <dc:creator>Hbid, Hassan</dc:creator>
      <content:encoded><![CDATA[Epidemiology has developed considerably in recent years, allowing the resolution of a large number of problems and providing good predictions of disease evolution. However, the transmission of several vector-borne diseases is closely connected to environmental protagonists, especially in the parasite-host interaction. Moreover, understanding disease transmission requires studying the ecology of all protagonists. These two levels of complexity (epidemiology and ecology) cannot be separated and have to be studied as a whole in a systematic way. Our goal is to understand the impact of climate change on the evolution of a disease when the vector has an ecological niche that depends on its physiological state of development. We are particularly interested in tick-borne diseases, which are a serious health problem affecting humans as well as domestic animals in many parts of the world. These infections are transmitted through the bite of an infected tick, and it appears that most of them are widely present in some wildlife species.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A model for assessing sustainability: A web-based decision support system for predicting and evaluating sustainability</title>
      <description><![CDATA[This work presents a web-based tool for predicting and evaluating sustainability, with a case study in the framework of water delivery service projects (WDSPs). A decision support model is built, based on Multi-Criteria Decision Analysis (MCDA), and afterwards implemented on the Java EE platform for predicting and evaluating the sustainability of WDSPs on-line. An assignment model based on an additive value function is used to sort a WDSP into one of several ordered categories corresponding to various levels of sustainability. The model makes it possible to aggregate socioeconomic, technical, technological, and environmental aspects in terms of their impact on sustainability. Knowing the sustainability level of a WDSP can serve as a basis for undertaking an intervention.]]></description>
      <pubDate>Tue, 04 Oct 2016 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2003</link>
      <guid>https://doi.org/10.46298/arima.2003</guid>
      <author>Metchebon Takougang, Stéphane Aimé</author>
      <author>Wethe, Joseph</author>
      <dc:creator>Metchebon Takougang, Stéphane Aimé</dc:creator>
      <dc:creator>Wethe, Joseph</dc:creator>
      <content:encoded><![CDATA[This work presents a web-based tool for predicting and evaluating sustainability, with a case study in the framework of water delivery service projects (WDSPs). A decision support model is built, based on Multi-Criteria Decision Analysis (MCDA), and afterwards implemented on the Java EE platform for predicting and evaluating the sustainability of WDSPs on-line. An assignment model based on an additive value function is used to sort a WDSP into one of several ordered categories corresponding to various levels of sustainability. The model makes it possible to aggregate socioeconomic, technical, technological, and environmental aspects in terms of their impact on sustainability. Knowing the sustainability level of a WDSP can serve as a basis for undertaking an intervention.]]></description>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Localisation robuste et dénombrement de valeurs propres</title>
      <description><![CDATA[This article deals with the localization of the eigenvalues of a large sparse and not necessarily symmetric matrix in a domain of the complex plane. It combines two studies carried out earlier. The first work deals with the effect of applying small perturbations to a matrix, referred to as the ε-spectrum or pseudospectrum. The second study describes a procedure for counting the number of eigenvalues of a matrix in a region of the complex plane surrounded by a closed curve. The two methods are combined in order to share the LU factorization of the resolvent, which intervenes in both methods, so as to reduce the cost. The resulting codes are parallelized.]]></description>
      <pubDate>Sun, 29 Nov 2015 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1983</link>
      <guid>https://doi.org/10.46298/arima.1983</guid>
      <author>Nguenang, L.B.</author>
      <author>Kamgnia, Emmanuel</author>
      <author>Philippe, Bernard</author>
      <dc:creator>Nguenang, L.B.</dc:creator>
      <dc:creator>Kamgnia, Emmanuel</dc:creator>
      <dc:creator>Philippe, Bernard</dc:creator>
      <content:encoded><![CDATA[This article deals with the localization of the eigenvalues of a large sparse and not necessarily symmetric matrix in a domain of the complex plane. It combines two studies carried out earlier. The first work deals with the effect of applying small perturbations to a matrix, referred to as the ε-spectrum or pseudospectrum. The second study describes a procedure for counting the number of eigenvalues of a matrix in a region of the complex plane surrounded by a closed curve. The two methods are combined in order to share the LU factorization of the resolvent, which intervenes in both methods, so as to reduce the cost. The resulting codes are parallelized.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Monotone dynamical systems and some models of Wolbachia in Aedes Aegypti populations</title>
      <description><![CDATA[We present a model of infection by Wolbachia of an Aedes aegypti population. This model is designed to take into account both the biology of this infection and any available experimental data obtained in the field. The objective is to use this model to predict the sustainable introduction of this bacterium. We provide a complete mathematical analysis of the proposed model and give the basic reproduction ratio R0 for Wolbachia. We observe a bistability phenomenon. Two equilibria are asymptotically stable: an equilibrium where the whole population is uninfected and an equilibrium where the whole population is infected. A third, unstable equilibrium exists. We provide a lower bound for the basin of attraction of the desired infected equilibrium. We are in a backward bifurcation situation. The bistable situation occurs with natural biological values of the parameters.]]></description>
      <pubDate>Sat, 28 Nov 2015 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1992</link>
      <guid>https://doi.org/10.46298/arima.1992</guid>
      <author>Sallet, Gauthier</author>
      <author>Silva Moacyr, A.H.B.</author>
      <dc:creator>Sallet, Gauthier</dc:creator>
      <dc:creator>Silva Moacyr, A.H.B.</dc:creator>
      <content:encoded><![CDATA[We present a model of infection by Wolbachia of an Aedes aegypti population. This model is designed to take into account both the biology of this infection and any available experimental data obtained in the field. The objective is to use this model to predict the sustainable introduction of this bacterium. We provide a complete mathematical analysis of the proposed model and give the basic reproduction ratio R0 for Wolbachia. We observe a bistability phenomenon. Two equilibria are asymptotically stable: an equilibrium where the whole population is uninfected and an equilibrium where the whole population is infected. A third, unstable equilibrium exists. We provide a lower bound for the basin of attraction of the desired infected equilibrium. We are in a backward bifurcation situation. The bistable situation occurs with natural biological values of the parameters.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Avant propos</title>
      <description><![CDATA[Foreword to the colloquium in honor of Eric Benoît: From singularly perturbed dynamics to population dynamics.]]></description>
      <pubDate>Wed, 18 Nov 2015 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1996</link>
      <guid>https://doi.org/10.46298/arima.1996</guid>
      <author>Sari, Nadir</author>
      <author>Wallet, Guy</author>
      <dc:creator>Sari, Nadir</dc:creator>
      <dc:creator>Wallet, Guy</dc:creator>
      <content:encoded><![CDATA[Foreword to the colloquium in honor of Eric Benoît: From singularly perturbed dynamics to population dynamics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Nonparametric estimation for probability mass function with Disake: an R package for discrete associated kernel estimators</title>
      <description><![CDATA[Kernel smoothing is one of the most widely used nonparametric data smoothing techniques. We introduce a new R package, Disake, for computing discrete associated kernel estimators for probability mass functions. When working with a kernel estimator, two choices must be made: the kernel function and the smoothing parameter. The Disake package focuses on discrete associated kernels and on cross-validation and local Bayesian techniques to select the appropriate bandwidth. Applications to simulated and real data show that the binomial kernel is appropriate for small or moderate count data, while the empirical estimator or the discrete triangular kernel is indicated for large samples.]]></description>
      <pubDate>Sun, 15 Nov 2015 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1984</link>
      <guid>https://doi.org/10.46298/arima.1984</guid>
      <author>Wansouwé, W.E.</author>
      <author>Kokonendji, C.C.</author>
      <author>Kolyang, D.T.</author>
      <dc:creator>Wansouwé, W.E.</dc:creator>
      <dc:creator>Kokonendji, C.C.</dc:creator>
      <dc:creator>Kolyang, D.T.</dc:creator>
      <content:encoded><![CDATA[Kernel smoothing is one of the most widely used nonparametric data smoothing techniques. We introduce a new R package, Disake, for computing discrete associated kernel estimators for probability mass functions. When working with a kernel estimator, two choices must be made: the kernel function and the smoothing parameter. The Disake package focuses on discrete associated kernels and on cross-validation and local Bayesian techniques to select the appropriate bandwidth. Applications to simulated and real data show that the binomial kernel is appropriate for small or moderate count data, while the empirical estimator or the discrete triangular kernel is indicated for large samples.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Migrations in the Rosenzweig-MacArthur model and the "atto-fox" problem</title>
      <description><![CDATA[The Rosenzweig-MacArthur model is a system of two ODEs used in population dynamics to model the predator-prey relationship. For certain values of the parameters, the differential system exhibits a unique stable limit cycle. When the dynamics of the prey are faster than the dynamics of the predator, during oscillations along the limit cycle the prey density takes values so small that it cannot represent any actual population. This phenomenon is known as the "atto-fox" problem. In this paper we assume that the populations live in two patches and are able to migrate from one patch to the other. We give conditions under which migration can prevent the prey density from becoming too small.]]></description>
      <pubDate>Sat, 07 Nov 2015 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1990</link>
      <guid>https://doi.org/10.46298/arima.1990</guid>
      <author>Lobry, Claude</author>
      <author>Sari, Tewfik</author>
      <dc:creator>Lobry, Claude</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <content:encoded><![CDATA[The Rosenzweig-MacArthur model is a system of two ODEs used in population dynamics to model the predator-prey relationship. For certain values of the parameters, the differential system exhibits a unique stable limit cycle. When the dynamics of the prey are faster than the dynamics of the predator, during oscillations along the limit cycle the prey density takes values so small that it cannot represent any actual population. This phenomenon is known as the "atto-fox" problem. In this paper we assume that the populations live in two patches and are able to migrate from one patch to the other. We give conditions under which migration can prevent the prey density from becoming too small.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimisation de requêtes dynamiques pour l'analyse de la biodiversité</title>
      <description><![CDATA[The amount of data produced in many domains is constantly growing, making its processing increasingly hard to manage. Among these domains, we focus on biodiversity, for which the GBIF (Global Biodiversity Information Facility) aims to federate and share the biodiversity data produced by many providers worldwide. Today, faced with a growing number of users characterized by versatile behavior and highly irregular data-access patterns, current solutions were not designed to adapt dynamically to this kind of situation. Moreover, with a growing number of data providers and of users querying its database, GBIF faces an efficiency problem that is hard to solve. In this article, we aim to solve the performance problems of GBIF. To this end, we propose an optimization approach for biodiversity data analysis queries that adapts dynamically to the context of large-scale distributed environments in order to guarantee data availability. The implementation of our solution and the experimental results are satisfactory in terms of guaranteed performance and scalability.]]></description>
      <pubDate>Tue, 03 Nov 2015 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1998</link>
      <guid>https://doi.org/10.46298/arima.1998</guid>
      <author>Bame, Ndiouma</author>
      <author>Naacke, Hubert</author>
      <author>Sarr, Idrissa</author>
      <author>Ndiaye, Samba</author>
      <dc:creator>Bame, Ndiouma</dc:creator>
      <dc:creator>Naacke, Hubert</dc:creator>
      <dc:creator>Sarr, Idrissa</dc:creator>
      <dc:creator>Ndiaye, Samba</dc:creator>
      <content:encoded><![CDATA[The amount of data produced in many domains is constantly growing, making its processing increasingly hard to manage. Among these domains, we focus on biodiversity, for which the GBIF (Global Biodiversity Information Facility) aims to federate and share the biodiversity data produced by many providers worldwide. Today, faced with a growing number of users characterized by versatile behavior and highly irregular data-access patterns, current solutions were not designed to adapt dynamically to this kind of situation. Moreover, with a growing number of data providers and of users querying its database, GBIF faces an efficiency problem that is hard to solve. In this article, we aim to solve the performance problems of GBIF. To this end, we propose an optimization approach for biodiversity data analysis queries that adapts dynamically to the context of large-scale distributed environments in order to guarantee data availability. The implementation of our solution and the experimental results are satisfactory in terms of guaranteed performance and scalability.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Routage et agrégation de données dans les réseaux de capteurs sans fil structurés en clusters auto-stabilisants</title>
      <description><![CDATA[In this article, we carry out a comprehensive study proposing three routing strategies, integrating different levels of aggregation, to deliver the data collected in Wireless Sensor Networks (WSNs) structured into self-stabilizing clusters. These three scenarios are: (i) Routing Without Aggregation (RSA), (ii) Routing with Partial Aggregation (RAP), and (iii) Routing with Total Aggregation (RAT). They rely on a self-stabilizing clustering scheme into which a system of cooperating agents is integrated. We validate these three scenarios by simulation under OMNeT++, evaluating and comparing their performance in terms of end-to-end delay, energy consumption, and network lifetime. The simulation results show that RSA minimizes communication delays, RAP reduces energy consumption, and RAT extends the lifetime of the cluster heads.]]></description>
      <pubDate>Sun, 01 Nov 2015 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2001</link>
      <guid>https://doi.org/10.46298/arima.2001</guid>
      <author>Ba, Mandicou</author>
      <author>Flauzac, Olivier</author>
      <author>Niang, Ibrahima</author>
      <author>Nolot, Florent</author>
      <dc:creator>Ba, Mandicou</dc:creator>
      <dc:creator>Flauzac, Olivier</dc:creator>
      <dc:creator>Niang, Ibrahima</dc:creator>
      <dc:creator>Nolot, Florent</dc:creator>
      <content:encoded><![CDATA[In this article, we carry out a comprehensive study proposing three routing strategies, integrating different levels of aggregation, to deliver the data collected in Wireless Sensor Networks (WSNs) structured into self-stabilizing clusters. These three scenarios are: (i) Routing Without Aggregation (RSA), (ii) Routing with Partial Aggregation (RAP), and (iii) Routing with Total Aggregation (RAT). They rely on a self-stabilizing clustering scheme into which a system of cooperating agents is integrated. We validate these three scenarios by simulation under OMNeT++, evaluating and comparing their performance in terms of end-to-end delay, energy consumption, and network lifetime. The simulation results show that RSA minimizes communication delays, RAP reduces energy consumption, and RAT extends the lifetime of the cluster heads.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Surstable solutions with a singularity at a turning point</title>
      <description><![CDATA[In this article, we establish in a very simple way, almost without complex analysis, the first Matkowsky-style condition necessary for a solution to be a canard with singularities. The aim is to study analytic singularly perturbed differential equations exhibiting a turning point. Numerical illustrations are presented.]]></description>
      <pubDate>Thu, 22 Oct 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1987</link>
      <guid>https://doi.org/10.46298/arima.1987</guid>
      <author>Benoît, Eric</author>
      <dc:creator>Benoît, Eric</dc:creator>
      <content:encoded><![CDATA[In this article, we establish in a very simple way, almost without complex analysis, the first Matkowsky-style condition necessary for a solution to be a canard with singularities. The aim is to study analytic singularly perturbed differential equations exhibiting a turning point. Numerical illustrations are presented.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Fitting coefficients of differential systems with Monte Carlo methods</title>
      <description><![CDATA[We consider the problem of estimating the coefficients in a system of differential equations when a trajectory of the system is known at a set of times. To do this, we use a simple Monte Carlo sampling method, known as the rejection sampling algorithm. Unlike deterministic methods, it does not provide a point estimate of the coefficients directly, but rather a collection of values that "fits" the known data well. An examination of the properties of the method allows us not only to better understand how to choose the different parameters when implementing the method, but also to introduce a more efficient method by using a new two-step approach which we call sequential rejection sampling. Several examples are presented to illustrate the performance of both the original and the new methods.]]></description>
      <pubDate>Thu, 15 Oct 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1988</link>
      <guid>https://doi.org/10.46298/arima.1988</guid>
      <author>Chan Shio, Christian</author>
      <author>Diener, Francine</author>
      <dc:creator>Chan Shio, Christian</dc:creator>
      <dc:creator>Diener, Francine</dc:creator>
      <content:encoded><![CDATA[We consider the problem of estimating the coefficients in a system of differential equations when a trajectory of the system is known at a set of times. To do this, we use a simple Monte Carlo sampling method, known as the rejection sampling algorithm. Unlike deterministic methods, it does not provide a point estimate of the coefficients directly, but rather a collection of values that "fits" the known data well. An examination of the properties of the method allows us not only to better understand how to choose the different parameters when implementing the method, but also to introduce a more efficient method by using a new two-step approach which we call sequential rejection sampling. Several examples are presented to illustrate the performance of both the original and the new methods.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Canards, canard cascades and black swans</title>
      <description><![CDATA[The paper is devoted to the investigation of slow integral manifolds of variable stability. The existence of non-periodic canards, canard cascades, and black swans is established. The theoretical developments are illustrated by several examples.]]></description>
      <pubDate>Wed, 23 Sep 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1993</link>
      <guid>https://doi.org/10.46298/arima.1993</guid>
      <author>Sobolev, Vladimir</author>
      <author>Shchepakina, Elena</author>
      <dc:creator>Sobolev, Vladimir</dc:creator>
      <dc:creator>Shchepakina, Elena</dc:creator>
      <content:encoded><![CDATA[The paper is devoted to the investigation of slow integral manifolds of variable stability. The existence of non-periodic canards, canard cascades, and black swans is established. The theoretical developments are illustrated by several examples.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analysis of a model describing stage-structured population dynamics using hawk-dove tactics</title>
      <description><![CDATA[The purpose of this paper is to investigate the effects of conflicting tactics of resource acquisition on stage-structured population dynamics. We consider a population subdivided into two distinct stages (immature and mature). We assume that immature individual survival is density dependent. We also assume that mature individuals acquire the resources required to survive and reproduce by using two contrasting behavioral tactics (hawk versus dove). Mature individual survival is thus assumed to depend on the average cost of fights, while individual fecundity depends on the average gain in the competition for access to the resource. Our model includes two parts: a fast part describing the encounters and fights, which involves a game-dynamics model based upon the replicator equations, and a slow part describing the long-term effects of conflicting tactics on the population dynamics. The existence of two time scales lets us investigate the complete system through a reduced one, which describes the dynamics of the total immature and mature densities at the slow time scale. Our analysis shows that an increase in resource value may decrease total population density, because it promotes individual (i.e. selfish) behavior. Our results may therefore find practical implications in animal conservation or biological control, for instance.]]></description>
      <pubDate>Sat, 19 Sep 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1991</link>
      <guid>https://doi.org/10.46298/arima.1991</guid>
      <author>Moussaoui, Ali</author>
      <author>Doanh, Nguyen Ngoc</author>
      <author>Auger, Pierre</author>
      <dc:creator>Moussaoui, Ali</dc:creator>
      <dc:creator>Doanh, Nguyen Ngoc</dc:creator>
      <dc:creator>Auger, Pierre</dc:creator>
      <content:encoded><![CDATA[The purpose of this paper is to investigate the effects of conflicting tactics of resource acquisition on stage-structured population dynamics. We consider a population subdivided into two distinct stages (immature and mature). We assume that immature individual survival is density dependent. We also assume that mature individuals acquire the resources required to survive and reproduce by using two contrasting behavioral tactics (hawk versus dove). Mature individual survival is thus assumed to depend on the average cost of fights, while individual fecundity depends on the average gain in the competition for access to the resource. Our model includes two parts: a fast part describing the encounters and fights, which involves a game-dynamics model based upon the replicator equations, and a slow part describing the long-term effects of conflicting tactics on the population dynamics. The existence of two time scales lets us investigate the complete system through a reduced one, which describes the dynamics of the total immature and mature densities at the slow time scale. Our analysis shows that an increase in resource value may decrease total population density, because it promotes individual (i.e. selfish) behavior. Our results may therefore find practical implications in animal conservation or biological control, for instance.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Choice sequence and nonstandard extension of type theory</title>
      <description><![CDATA[Starting from his work Mathematics of Infinity (1989), Martin-Löf developed the idea of a deep conceptual link between the notions of choice sequence and of nonstandard mathematical object. More precisely, he defined a nonstandard extension of type theory by adding a series of nonstandard axioms conceived as a kind of choice sequence. Finally, in a 1999 lecture, he presented the outline of a more general nonstandard type theory endowed with strong computational content. The present work is an attempt to give a complete development of a theory of this kind. However, in order to keep tight control over the resulting theory, and notably to avoid some problems related to definitional equality, the range of nonstandard axioms is less general than the one proposed in his 1999 lecture. The present study is carried as far as the introduction of a notion of external proposition, which plays the same role as the external properties so useful in usual nonstandard analysis. Since this text begins with an introduction to Martin-Löf type theory, it may be of interest to mathematicians unfamiliar with this subject.]]></description>
      <pubDate>Wed, 16 Sep 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1995</link>
      <guid>https://doi.org/10.46298/arima.1995</guid>
      <author>Wallet, Guy</author>
      <dc:creator>Wallet, Guy</dc:creator>
      <content:encoded><![CDATA[Starting from his work Mathematics of Infinity (1989), Martin-Löf developed the idea of a deep conceptual link between the notions of choice sequence and of nonstandard mathematical object. More precisely, he defined a nonstandard extension of type theory by adding a series of nonstandard axioms conceived as a kind of choice sequence. Finally, in a 1999 lecture, he presented the outline of a more general nonstandard type theory endowed with strong computational content. The present work is an attempt to give a complete development of a theory of this kind. However, in order to keep tight control over the resulting theory, and notably to avoid some problems related to definitional equality, the range of nonstandard axioms is less general than the one proposed in his 1999 lecture. The present study is carried as far as the introduction of a notion of external proposition, which plays the same role as the external properties so useful in usual nonstandard analysis. Since this text begins with an introduction to Martin-Löf type theory, it may be of interest to mathematicians unfamiliar with this subject.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Foreword</title>
      <description><![CDATA[Foreword to the special issue of ARIMA Journal dedicated to CARI'14]]></description>
      <pubDate>Sun, 06 Sep 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2002</link>
      <guid>https://doi.org/10.46298/arima.2002</guid>
      <author>Badouel, Eric</author>
      <author>Lo, Moussa</author>
      <author>Sellami, Mokhtar</author>
      <dc:creator>Badouel, Eric</dc:creator>
      <dc:creator>Lo, Moussa</dc:creator>
      <dc:creator>Sellami, Mokhtar</dc:creator>
      <content:encoded><![CDATA[Foreword to the special issue of ARIMA Journal dedicated to CARI'14]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Classification non Supervisée de Données Multidimensionnelles par les Processus Ponctuels Marqués</title>
      <description><![CDATA[This article describes a new unsupervised algorithm for classifying multidimensional data. It consists in detecting the prototypes of the classes present in a sample and then applying the KNN algorithm to classify all observations. The detection of class prototypes is based on marked point processes: on the one hand, an adaptation of the Metropolis-Hastings-Green method, which generates moves manipulating the objects of the process (birth, death, ...); on the other hand, a Gibbs model, which introduces the potential function expressing the interactions of the process in terms of energy. Several experiments were carried out on multidimensional point data in which the classes are not linearly separable, as well as on real data from DNA microarrays. A comparison with existing classification methods demonstrates the effectiveness of this new algorithm.]]></description>
      <pubDate>Wed, 02 Sep 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2000</link>
      <guid>https://doi.org/10.46298/arima.2000</guid>
      <author>Henni, Khadidja</author>
      <author>Alata, Olivier</author>
      <author>Zaoui, Lynda</author>
      <author>Elidrissi, Abdellatif</author>
      <author>Moussa, Ahmed</author>
      <dc:creator>Henni, Khadidja</dc:creator>
      <dc:creator>Alata, Olivier</dc:creator>
      <dc:creator>Zaoui, Lynda</dc:creator>
      <dc:creator>Elidrissi, Abdellatif</dc:creator>
      <dc:creator>Moussa, Ahmed</dc:creator>
      <content:encoded><![CDATA[This article describes a new unsupervised algorithm for classifying multidimensional data. It consists in detecting the prototypes of the classes present in a sample and then applying the KNN algorithm to classify all observations. The detection of class prototypes is based on marked point processes: on the one hand, an adaptation of the Metropolis-Hastings-Green method, which generates moves manipulating the objects of the process (birth, death, ...); on the other hand, a Gibbs model, which introduces the potential function expressing the interactions of the process in terms of energy. Several experiments were carried out on multidimensional point data in which the classes are not linearly separable, as well as on real data from DNA microarrays. A comparison with existing classification methods demonstrates the effectiveness of this new algorithm.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Temporal and Hierarchical HMM for Activity Recognition Applied in Visual Medical Monitoring using a Multi-Camera System</title>
      <description><![CDATA[In this paper we address improved medical monitoring through automatic recognition of human activity in Intensive Care Units (ICUs). A multi-camera vision system is proposed to collect video sequences for automatic analysis and interpretation of the scene. The latter is performed using a Hidden Markov Model (HMM) with explicit state durations, combined with management of the hierarchical structure of the scenario. Significant experiments are carried out with the proposed monitoring system in a hospital's cardiology section in order to demonstrate the need for computer-aided patient supervision to help clinicians in the decision-making process. The temporal and hierarchical HMM handles state durations explicitly and thus provides a suitable solution for the automatic recognition of temporal events. Finally, the Temporal HMM (THMM) based approach improves scenario recognition performance compared to standard HMM models.]]></description>
      <pubDate>Sat, 29 Aug 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1999</link>
      <guid>https://doi.org/10.46298/arima.1999</guid>
      <author>Ahouandjinou, Arnaud</author>
      <author>Ezin, Eugène C.</author>
      <author>Motamed, Cina</author>
      <dc:creator>Ahouandjinou, Arnaud</dc:creator>
      <dc:creator>Ezin, Eugène C.</dc:creator>
      <dc:creator>Motamed, Cina</dc:creator>
      <content:encoded><![CDATA[In this paper we address improved medical monitoring through automatic recognition of human activity in Intensive Care Units (ICUs). A multi-camera vision system is proposed to collect video sequences for automatic analysis and interpretation of the scene. The latter is performed using a Hidden Markov Model (HMM) with explicit state durations, combined with management of the hierarchical structure of the scenario. Significant experiments are carried out with the proposed monitoring system in a hospital's cardiology section in order to demonstrate the need for computer-aided patient supervision to help clinicians in the decision-making process. The temporal and hierarchical HMM handles state durations explicitly and thus provides a suitable solution for the automatic recognition of temporal events. Finally, the Temporal HMM (THMM) based approach improves scenario recognition performance compared to standard HMM models.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A discrete free boundary problem</title>
      <description><![CDATA[We study the free boundary problem in a nonstandard setting of infinitesimal discretisations of the heat equation. In particular, we derive regularity results for the solutions and the free boundary in terms of S-continuity and S-differentiability.]]></description>
      <pubDate>Fri, 28 Aug 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1994</link>
      <guid>https://doi.org/10.46298/arima.1994</guid>
      <author>Berg, Imme, van Den</author>
      <dc:creator>Berg, Imme, van Den</dc:creator>
      <content:encoded><![CDATA[We study the free boundary problem in a nonstandard setting of infinitesimal discretisations of the heat equation. In particular, we derive regularity results for the solutions and the free boundary in terms of S-continuity and S-differentiability.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Canard-induced loss of stability across a homoclinic bifurcation</title>
      <description><![CDATA[This article deals with slow-fast systems and is, in some sense, a first approach to a general problem, namely to investigate the possibility of bifurcations that display a dramatic change in the phase portrait under a very small change of a parameter (on the order of 10^-7 in the example presented here). We provide evidence of the existence of such a very rapid loss of stability in a specific example of a singular perturbation setting. This example is strongly inspired by the explosion of canard cycles first discovered and studied by E. Benoît, J.-L. Callot, F. Diener, and M. Diener. After a presentation of the integrable case to be perturbed, we present numerical evidence for this rapid loss of stability using numerical continuation. We then discuss the possibility of accurately estimating the parameter value at which this bifurcation occurs.]]></description>
      <pubDate>Sun, 23 Aug 2015 12:41:35 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1989</link>
      <guid>https://doi.org/10.46298/arima.1989</guid>
      <author>Desroches, Mathieu</author>
      <author>Françoise, Jean-Pierre</author>
      <author>Mégret, Lucile</author>
      <dc:creator>Desroches, Mathieu</dc:creator>
      <dc:creator>Françoise, Jean-Pierre</dc:creator>
      <dc:creator>Mégret, Lucile</dc:creator>
      <content:encoded><![CDATA[This article deals with slow-fast systems and is, in some sense, a first approach to a general problem, namely to investigate the possibility of bifurcations that display a dramatic change in the phase portrait under a very small change of a parameter (on the order of 10^-7 in the example presented here). We provide evidence of the existence of such a very rapid loss of stability in a specific example of a singular perturbation setting. This example is strongly inspired by the explosion of canard cycles first discovered and studied by E. Benoît, J.-L. Callot, F. Diener, and M. Diener. After a presentation of the integrable case to be perturbed, we present numerical evidence for this rapid loss of stability using numerical continuation. We then discuss the possibility of accurately estimating the parameter value at which this bifurcation occurs.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Préface/Foreword</title>
      <description><![CDATA[Foreword to CRI 2013]]></description>
      <pubDate>Mon, 10 Aug 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1985</link>
      <guid>https://doi.org/10.46298/arima.1985</guid>
      <author>Badouel, Eric</author>
      <author>Tchuenté, Maurice</author>
      <dc:creator>Badouel, Eric</dc:creator>
      <dc:creator>Tchuenté, Maurice</dc:creator>
      <content:encoded><![CDATA[Foreword to CRI 2013]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Mathematical analysis of the effect of a pulse vaccination to an HBV mutation</title>
      <description><![CDATA[It has been proven that vaccination can play an important role in the eradication of hepatitis B infection. When a mutant strain of the virus appears, it changes all treatment strategies. The current problem is to find the critical vaccine threshold that can stimulate the immune system to eradicate the virus, or to find the conditions under which a mutant strain of the virus can persist in the presence of a CTL vaccine. In this paper, the dynamical behavior of a new hepatitis B virus model with two strains of virus and CTL immune responses is studied. We compute the basic reproductive ratio of the model and show that the dynamics depend on this threshold. We then extend the model to incorporate pulse vaccination and find conditions for eradication of the disease. Our result indicates that if the vaccine is sufficiently strong, both strains are driven to extinction, assuming perfect adherence.]]></description>
      <pubDate>Tue, 04 Aug 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1997</link>
      <guid>https://doi.org/10.46298/arima.1997</guid>
      <author>Tchinda Mouofo, Plaire</author>
      <author>Tewa, Jean Jules</author>
      <author>Mewoli, Boulchard</author>
      <author>Samuel, Bowong</author>
      <dc:creator>Tchinda Mouofo, Plaire</dc:creator>
      <dc:creator>Tewa, Jean Jules</dc:creator>
      <dc:creator>Mewoli, Boulchard</dc:creator>
      <dc:creator>Samuel, Bowong</dc:creator>
      <content:encoded><![CDATA[It has been proven that vaccines can play an important role in the eradication of hepatitis B infection. When a mutant strain of the virus appears, it changes all treatment strategies. The current problem is to find the critical vaccine threshold that can stimulate the immune system to eradicate the virus, or to find conditions under which a mutant strain of the virus can persist in the presence of a CTL vaccine. In this paper, the dynamical behavior of a new hepatitis B virus model with two strains of virus and CTL immune responses is studied. We compute the basic reproductive ratio of the model and show that the dynamics depend on this threshold. We then extend the model to incorporate pulse vaccination and find conditions for eradication of the disease. Our results indicate that if the vaccine is sufficiently strong, both strains are driven to extinction, assuming perfect adherence.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Composite Asymptotic Expansions and Difference Equations</title>
      <description><![CDATA[Difference equations in the complex domain of the form y(x+ϵ)−y(x)=ϵf(y(x))/y(x) are considered. The step size ϵ>0 is a small parameter, and the equation has a singularity at y=0. Solutions near the singularity are described using composite asymptotic expansions. More precisely, it is shown that the derivative v′ of the inverse function v of a solution (the so-called Fatou coordinate) admits a Gevrey asymptotic expansion in powers of the square root of ϵ, denoted by η, involving functions of y and of Y=y/η. This also yields Gevrey asymptotic expansions of the so-called Écalle-Voronin invariants of the equation, which are functions of ϵ. An application coming from the theory of complex iteration is presented.]]></description>
      <pubDate>Fri, 31 Jul 2015 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1986</link>
      <guid>https://doi.org/10.46298/arima.1986</guid>
      <author>Fruchard, Augustin</author>
      <author>Schäfke, Reinhard</author>
      <dc:creator>Fruchard, Augustin</dc:creator>
      <dc:creator>Schäfke, Reinhard</dc:creator>
      <content:encoded><![CDATA[Difference equations in the complex domain of the form y(x+ϵ)−y(x)=ϵf(y(x))/y(x) are considered. The step size ϵ>0 is a small parameter, and the equation has a singularity at y=0. Solutions near the singularity are described using composite asymptotic expansions. More precisely, it is shown that the derivative v′ of the inverse function v of a solution (the so-called Fatou coordinate) admits a Gevrey asymptotic expansion in powers of the square root of ϵ, denoted by η, involving functions of y and of Y=y/η. This also yields Gevrey asymptotic expansions of the so-called Écalle-Voronin invariants of the equation, which are functions of ϵ. An application coming from the theory of complex iteration is presented.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Une approche d'implémentation des dictionnaires de métadonnées pour la fédération de données géographiques multisource</title>
      <description><![CDATA[Spatial metadata are used to describe existing data sources in order to facilitate their access and sharing between different actors. The problem of exploiting these metadata arises when they must be catalogued within the framework of a platform for spatial data federation. We describe a service-oriented approach for structuring this component, with an implementation based on LDAP. To achieve this, we start from a canonical language that unifies the major known geographic metadata standards, and then define new classes of LDAP objects that map the syntactic units of the canonical language.]]></description>
      <pubDate>Fri, 28 Nov 2014 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1975</link>
      <guid>https://doi.org/10.46298/arima.1975</guid>
      <author>Tongo, Landry</author>
      <author>Kouamou, Georges-Edouard</author>
      <author>Tchudjo, Gilbert Armand</author>
      <dc:creator>Tongo, Landry</dc:creator>
      <dc:creator>Kouamou, Georges-Edouard</dc:creator>
      <dc:creator>Tchudjo, Gilbert Armand</dc:creator>
      <content:encoded><![CDATA[Spatial metadata are used to describe existing data sources in order to facilitate their access and sharing between different actors. The problem of exploiting these metadata arises when they must be catalogued within the framework of a platform for spatial data federation. We describe a service-oriented approach for structuring this component, with an implementation based on LDAP. To achieve this, we start from a canonical language that unifies the major known geographic metadata standards, and then define new classes of LDAP objects that map the syntactic units of the canonical language.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analysis of an Age-structured SIL model with demographics process and vertical transmission</title>
      <description><![CDATA[International audience]]></description>
      <pubDate>Tue, 25 Nov 2014 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1966</link>
      <guid>https://doi.org/10.46298/arima.1966</guid>
      <author>Demasse, Ramses Djidjou</author>
      <author>Tewa, Jean Jules</author>
      <author>Bowong, Samuel</author>
      <dc:creator>Demasse, Ramses Djidjou</dc:creator>
      <dc:creator>Tewa, Jean Jules</dc:creator>
      <dc:creator>Bowong, Samuel</dc:creator>
      <content:encoded><![CDATA[International audience]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An arbitrary high order discontinuous Galerkin scheme for the elastodynamic equations</title>
      <description><![CDATA[We present in this paper the formulation of a non-dissipative arbitrary high-order time-domain scheme for the elastodynamic equations. Our approach combines an arbitrary high-order discontinuous Galerkin interpolation with centred fluxes in space and an arbitrary high-order leapfrog scheme in time. Numerical two-dimensional results are presented for the schemes from order two to order four. In these simulations, we discuss the numerical stability and convergence of the schemes on the homogeneous eigenmode problem. We also show the ability of the schemes to handle more complex propagation problems by simulating the Garvin test with an explosive source. The results show the high accuracy of the method on both regular and irregular triangular meshes.]]></description>
      <pubDate>Wed, 19 Nov 2014 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1969</link>
      <guid>https://doi.org/10.46298/arima.1969</guid>
      <author>Mpong, Serge Moto</author>
      <dc:creator>Mpong, Serge Moto</dc:creator>
      <content:encoded><![CDATA[We present in this paper the formulation of a non-dissipative arbitrary high-order time-domain scheme for the elastodynamic equations. Our approach combines an arbitrary high-order discontinuous Galerkin interpolation with centred fluxes in space and an arbitrary high-order leapfrog scheme in time. Numerical two-dimensional results are presented for the schemes from order two to order four. In these simulations, we discuss the numerical stability and convergence of the schemes on the homogeneous eigenmode problem. We also show the ability of the schemes to handle more complex propagation problems by simulating the Garvin test with an explosive source. The results show the high accuracy of the method on both regular and irregular triangular meshes.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>P2P4GS : une spécification de grille pair-à-pair de services auto-gérés</title>
      <description><![CDATA[Grid-based peer-to-peer architectures have been used either for storage and data sharing or for computing. So far, the proposed solutions for grid services are based on hierarchical topologies, which present a high degree of centralization. The main issues with this centralization are the unified management of resources and the difficulty of reacting rapidly to failures and faults that can affect grid users. In this paper, we propose an original specification, called P2P4GS, that enables self-managed services on a peer-to-peer grid. We design a self-adaptive solution for service deployment and invocation that takes into account the peer-to-peer service paradigm. Furthermore, deployment and invocation are completely delegated to the platform and are performed in a manner transparent to the end user. We propose a generic specification that is not tied to a particular peer-to-peer architecture or to a service management protocol defined in advance. We also study the algorithmic complexity of the deployment and service localization primitives of P2P4GS by mapping them onto the classical P2P topologies, i.e., the ring and the tree. The obtained performances are satisfactory for these different topologies.]]></description>
      <pubDate>Wed, 12 Nov 2014 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1972</link>
      <guid>https://doi.org/10.46298/arima.1972</guid>
      <author>Gueye, Bassirou</author>
      <author>Flauzac, Olivier</author>
      <author>Niang, Ibrahima</author>
      <author>Rabat, Cyril</author>
      <dc:creator>Gueye, Bassirou</dc:creator>
      <dc:creator>Flauzac, Olivier</dc:creator>
      <dc:creator>Niang, Ibrahima</dc:creator>
      <dc:creator>Rabat, Cyril</dc:creator>
      <content:encoded><![CDATA[Grid-based peer-to-peer architectures have been used either for storage and data sharing or for computing. So far, the proposed solutions for grid services are based on hierarchical topologies, which present a high degree of centralization. The main issues with this centralization are the unified management of resources and the difficulty of reacting rapidly to failures and faults that can affect grid users. In this paper, we propose an original specification, called P2P4GS, that enables self-managed services on a peer-to-peer grid. We design a self-adaptive solution for service deployment and invocation that takes into account the peer-to-peer service paradigm. Furthermore, deployment and invocation are completely delegated to the platform and are performed in a manner transparent to the end user. We propose a generic specification that is not tied to a particular peer-to-peer architecture or to a service management protocol defined in advance. We also study the algorithmic complexity of the deployment and service localization primitives of P2P4GS by mapping them onto the classical P2P topologies, i.e., the ring and the tree. The obtained performances are satisfactory for these different topologies.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Réseaux bayésiens jumelés et noyau de Fisher pondéré pour la classification de documents XML</title>
      <description><![CDATA[In this paper, we present a learning model for XML document classification based on Bayesian networks. We then propose a model, called the coupled model, which simplifies the tree representation of the XML document; this approach improves the response time while preserving the classification performance. We then study an extension of this generative model to a discriminative model through the formalism of the Fisher kernel. Finally, we apply a weighting to the structural components of the Fisher vector. We conclude by presenting the results obtained on the XML collection using the CBS and SVM methods.]]></description>
      <pubDate>Sat, 08 Nov 2014 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1971</link>
      <guid>https://doi.org/10.46298/arima.1971</guid>
      <author>Yassine, Ait Ali Yahia</author>
      <author>Karima, Amrouche</author>
      <dc:creator>Yassine, Ait Ali Yahia</dc:creator>
      <dc:creator>Karima, Amrouche</dc:creator>
      <content:encoded><![CDATA[In this paper, we present a learning model for XML document classification based on Bayesian networks. We then propose a model, called the coupled model, which simplifies the tree representation of the XML document; this approach improves the response time while preserving the classification performance. We then study an extension of this generative model to a discriminative model through the formalism of the Fisher kernel. Finally, we apply a weighting to the structural components of the Fisher vector. We conclude by presenting the results obtained on the XML collection using the CBS and SVM methods.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Scheduling an aperiodic flow within a real-time system using Fairness properties</title>
      <description><![CDATA[We consider hard real-time systems composed of periodic tasks and of an aperiodic flow. Each task, either periodic or aperiodic, has a firm deadline. An aperiodic task is accepted into the system only if it can be completed before its deadline without causing temporal failures for the periodic tasks or for the previously accepted aperiodic tasks. We propose an acceptance test, linear in the number of pending accepted aperiodic tasks. This protocol can be used provided the idle slots left by the periodic tasks are fairly distributed. We then propose a model-driven approach, based on Petri nets, to produce schedules with a fair distribution of the idle slots for systems of non-independent periodic tasks.]]></description>
      <pubDate>Thu, 06 Nov 2014 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1980</link>
      <guid>https://doi.org/10.46298/arima.1980</guid>
      <author>Choquet-Geniet, Annie</author>
      <author>Malo, Sadouanouan</author>
      <dc:creator>Choquet-Geniet, Annie</dc:creator>
      <dc:creator>Malo, Sadouanouan</dc:creator>
      <content:encoded><![CDATA[We consider hard real-time systems composed of periodic tasks and of an aperiodic flow. Each task, either periodic or aperiodic, has a firm deadline. An aperiodic task is accepted into the system only if it can be completed before its deadline without causing temporal failures for the periodic tasks or for the previously accepted aperiodic tasks. We propose an acceptance test, linear in the number of pending accepted aperiodic tasks. This protocol can be used provided the idle slots left by the periodic tasks are fairly distributed. We then propose a model-driven approach, based on Petri nets, to produce schedules with a fair distribution of the idle slots for systems of non-independent periodic tasks.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analyse mathématique d'un modèle de digestion anaérobie à trois étapes</title>
      <description><![CDATA[In this work, we focus on the mathematical analysis of a chemostat model with enzymatic degradation of a substrate (organic matter) that can be partly in solid form [7]. The study of this 3-step model reduces to that of a lower-order sub-model, since some variables can be decoupled from the others. We study the existence and stability of the equilibrium points of the sub-model, considering monotonic growth rates and distinct dilution rates. In the classical chemostat model with monotonic kinetics, it is well known that only one equilibrium point attracts all solutions and that bistability never occurs [8]. In the present study, although only monotonic growth rates are considered, it is shown that the sub-model may exhibit bistability. The study of the 3-step model shows the existence of at most four positive equilibria, one of which is locally asymptotically stable; depending on the initial condition, the two species can coexist.]]></description>
      <pubDate>Fri, 31 Oct 2014 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1967</link>
      <guid>https://doi.org/10.46298/arima.1967</guid>
      <author>Fekih-Salem, Radhouane</author>
      <author>Abdellatif, Nahla</author>
      <author>Sari, Tewfik</author>
      <author>Jérôme, Harmand</author>
      <dc:creator>Fekih-Salem, Radhouane</dc:creator>
      <dc:creator>Abdellatif, Nahla</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <dc:creator>Jérôme, Harmand</dc:creator>
      <content:encoded><![CDATA[In this work, we focus on the mathematical analysis of a chemostat model with enzymatic degradation of a substrate (organic matter) that can be partly in solid form [7]. The study of this 3-step model reduces to that of a lower-order sub-model, since some variables can be decoupled from the others. We study the existence and stability of the equilibrium points of the sub-model, considering monotonic growth rates and distinct dilution rates. In the classical chemostat model with monotonic kinetics, it is well known that only one equilibrium point attracts all solutions and that bistability never occurs [8]. In the present study, although only monotonic growth rates are considered, it is shown that the sub-model may exhibit bistability. The study of the 3-step model shows the existence of at most four positive equilibria, one of which is locally asymptotically stable; depending on the initial condition, the two species can coexist.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Channel Estimation methods with low complexity for 3GPP/LTE</title>
      <description><![CDATA[Pilot-based OFDM channel estimation methods with processing in the transform domain are attractive owing to their capacity to greatly reduce the noise component. However, in current OFDM systems, null subcarriers are placed at the edges of the spectrum in order to ensure isolation from interfering signals in neighboring frequency bands; if not taken into account, the presence of these null carriers may lead to serious degradation of the estimated channel responses due to the “border effect” phenomenon. In this paper, an improved algorithm based on truncated SVD is proposed in order to correctly handle null carriers at the spectrum border. A method for optimizing the truncation threshold, whatever the system parameters, is also proposed. To make the truncated SVD channel estimation method applicable to any SISO or MIMO OFDM system, whatever the system parameters, a complexity reduction algorithm based on the distribution of the power in the transfer matrix (based on the DFT or DCT) is proposed.]]></description>
      <pubDate>Sat, 11 Oct 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1981</link>
      <guid>https://doi.org/10.46298/arima.1981</guid>
      <author>Diallo, Moussa</author>
      <author>Hélard, Maryline</author>
      <dc:creator>Diallo, Moussa</dc:creator>
      <dc:creator>Hélard, Maryline</dc:creator>
      <content:encoded><![CDATA[Pilot-based OFDM channel estimation methods with processing in the transform domain are attractive owing to their capacity to greatly reduce the noise component. However, in current OFDM systems, null subcarriers are placed at the edges of the spectrum in order to ensure isolation from interfering signals in neighboring frequency bands; if not taken into account, the presence of these null carriers may lead to serious degradation of the estimated channel responses due to the “border effect” phenomenon. In this paper, an improved algorithm based on truncated SVD is proposed in order to correctly handle null carriers at the spectrum border. A method for optimizing the truncation threshold, whatever the system parameters, is also proposed. To make the truncated SVD channel estimation method applicable to any SISO or MIMO OFDM system, whatever the system parameters, a complexity reduction algorithm based on the distribution of the power in the transfer matrix (based on the DFT or DCT) is proposed.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Immunological Approach for Intrusion Detection</title>
      <description><![CDATA[One of the central challenges in computer security is distinguishing between normal and potentially harmful behavior. For decades, developers have protected their systems using classical methods. However, the growth and complexity of the computer systems and networks to be protected require the development of automated and adaptive defensive tools. Promising solutions are emerging from biologically inspired computing, and in particular the immunological approach. In this paper, we propose two artificial immune systems for intrusion detection using the KDD Cup'99 database. The first is based on danger theory, using the dendritic cell algorithm, and the second is based on negative selection. The obtained results are promising.]]></description>
      <pubDate>Fri, 03 Oct 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1974</link>
      <guid>https://doi.org/10.46298/arima.1974</guid>
      <author>Zekri, Meriem</author>
      <author>Souici-Meslati, Labiba</author>
      <dc:creator>Zekri, Meriem</dc:creator>
      <dc:creator>Souici-Meslati, Labiba</dc:creator>
      <content:encoded><![CDATA[One of the central challenges in computer security is distinguishing between normal and potentially harmful behavior. For decades, developers have protected their systems using classical methods. However, the growth and complexity of the computer systems and networks to be protected require the development of automated and adaptive defensive tools. Promising solutions are emerging from biologically inspired computing, and in particular the immunological approach. In this paper, we propose two artificial immune systems for intrusion detection using the KDD Cup'99 database. The first is based on danger theory, using the dendritic cell algorithm, and the second is based on negative selection. The obtained results are promising.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Algorithmes de traitement de requêtes de biodiversité dans un environnement distribué</title>
      <description><![CDATA[The GBIF portal contains a description of most of the global biodiversity data. It faces two problems, namely data availability and the poor expressiveness of queries, mainly due to a growing number of users who keep expressing new needs. To tackle these problems, we envision a scalable and relatively low-cost solution. With this in mind, we propose a non-invasive and decentralized architecture for processing GBIF queries over a cloud infrastructure. We define a dynamic strategy for data distribution and query-processing algorithms that fit the GBIF requirements. We demonstrate the feasibility and efficiency of our solution with a prototype implementation that can process extra query types, up to now unsupported by the GBIF portal.]]></description>
      <pubDate>Mon, 15 Sep 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1976</link>
      <guid>https://doi.org/10.46298/arima.1976</guid>
      <author>Bame, Ndiouma</author>
      <author>Naacke, Hubert</author>
      <author>Sarr, Idrissa</author>
      <author>Ndiaye, Samba</author>
      <dc:creator>Bame, Ndiouma</dc:creator>
      <dc:creator>Naacke, Hubert</dc:creator>
      <dc:creator>Sarr, Idrissa</dc:creator>
      <dc:creator>Ndiaye, Samba</dc:creator>
      <content:encoded><![CDATA[The GBIF portal contains a description of most of the global biodiversity data. It faces two problems, namely data availability and the poor expressiveness of queries, mainly due to a growing number of users who keep expressing new needs. To tackle these problems, we envision a scalable and relatively low-cost solution. With this in mind, we propose a non-invasive and decentralized architecture for processing GBIF queries over a cloud infrastructure. We define a dynamic strategy for data distribution and query-processing algorithms that fit the GBIF requirements. We demonstrate the feasibility and efficiency of our solution with a prototype implementation that can process extra query types, up to now unsupported by the GBIF portal.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Linear vs non-linear learning methods: A comparative study for forest above-ground biomass estimation from texture analysis of satellite images</title>
      <description><![CDATA[Aboveground biomass estimation is an important question within the scope of Reducing Emissions from Deforestation and Forest Degradation (the REDD framework of the UNFCCC). It is particularly challenging for tropical countries because of the scarcity of accurate ground forest inventory data and the complexity of the forests. Satellite-borne remote sensing can help solve this problem given the increasing availability of optical very high spatial resolution images, which provide information on forest structure via texture analysis of the canopy grain. For example, the FOTO (FOurier Texture Ordination) method has proved relevant for forest biomass prediction in several tropical regions. It uses PCA and linear regression; in this paper, we suggest applying classification methods such as k-NN (k-nearest neighbors), SVM (support vector machines) and Random Forests to texture descriptors extracted from the images via Fourier spectra. Experiments have been carried out on simulated images produced by the DART (Discrete Anisotropic Radiative Transfer) software, with reference information (3D stand mockups) from forests of the DRC (Democratic Republic of the Congo), the CAR (Central African Republic) and Congo. On this basis, we show that some classification techniques may yield a gain in prediction accuracy of 18 to 20%.]]></description>
      <pubDate>Sun, 14 Sep 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1982</link>
      <guid>https://doi.org/10.46298/arima.1982</guid>
      <author>Tapamo, Hippolyte</author>
      <author>Mfopou, Adamou</author>
      <author>Ngonmang, Blaise</author>
      <author>Couteron, Pierre</author>
      <author>Monga, Olivier</author>
      <dc:creator>Tapamo, Hippolyte</dc:creator>
      <dc:creator>Mfopou, Adamou</dc:creator>
      <dc:creator>Ngonmang, Blaise</dc:creator>
      <dc:creator>Couteron, Pierre</dc:creator>
      <dc:creator>Monga, Olivier</dc:creator>
      <content:encoded><![CDATA[Aboveground biomass estimation is an important question within the scope of Reducing Emissions from Deforestation and Forest Degradation (the REDD framework of the UNFCCC). It is particularly challenging for tropical countries because of the scarcity of accurate ground forest inventory data and the complexity of the forests. Satellite-borne remote sensing can help solve this problem given the increasing availability of optical very high spatial resolution images, which provide information on forest structure via texture analysis of the canopy grain. For example, the FOTO (FOurier Texture Ordination) method has proved relevant for forest biomass prediction in several tropical regions. It uses PCA and linear regression; in this paper, we suggest applying classification methods such as k-NN (k-nearest neighbors), SVM (support vector machines) and Random Forests to texture descriptors extracted from the images via Fourier spectra. Experiments have been carried out on simulated images produced by the DART (Discrete Anisotropic Radiative Transfer) software, with reference information (3D stand mockups) from forests of the DRC (Democratic Republic of the Congo), the CAR (Central African Republic) and Congo. On this basis, we show that some classification techniques may yield a gain in prediction accuracy of 18 to 20%.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Markov analysis of land use dynamics - A Case Study in Madagascar</title>
      <description><![CDATA[We present a Markov model of land-use dynamics along a forest corridor of Madagascar. A first approach based on maximum likelihood leads to a model with an absorbing state. We study the quasi-stationary distribution of this model and the law of the hitting time of the absorbing state. According to experts, a transition not present in the data must be added to the model; this is not possible with the maximum likelihood method, so we make use of the Bayesian approach. We use a Markov chain Monte Carlo method to infer the transition matrix, which in this case admits an invariant distribution. Finally, we analyze the two identified dynamics.]]></description>
      <pubDate>Tue, 26 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1964</link>
      <guid>https://doi.org/10.46298/arima.1964</guid>
      <author>Campillo, Fabien</author>
      <author>Hervé, Dominique</author>
      <author>Raherinirina, Angelo</author>
      <author>Rakotozafy, Rivo</author>
      <dc:creator>Campillo, Fabien</dc:creator>
      <dc:creator>Hervé, Dominique</dc:creator>
      <dc:creator>Raherinirina, Angelo</dc:creator>
      <dc:creator>Rakotozafy, Rivo</dc:creator>
      <content:encoded><![CDATA[We present a Markov model of land-use dynamics along a forest corridor of Madagascar. A first approach based on maximum likelihood leads to a model with an absorbing state. We study the quasi-stationary distribution of this model and the law of the hitting time of the absorbing state. According to experts, a transition not present in the data must be added to the model; this is not possible with the maximum likelihood method, so we make use of the Bayesian approach. We use a Markov chain Monte Carlo method to infer the transition matrix, which in this case admits an invariant distribution. Finally, we analyze the two identified dynamics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Commande optimale en temps minimal d'un procédé biologique d'épuration de l'eau</title>
      <description><![CDATA[In this work, we consider an optimal control problem for a biological sequencing batch reactor for the treatment of pollutants. The model includes two biological reactions, one aerobic and the other anoxic. We are first interested in a time-optimal control problem and then in control that is optimal in both time and energy. The existence of optimal trajectories is proven and the corresponding optimal controls are derived in each case.]]></description>
      <pubDate>Mon, 25 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1978</link>
      <guid>https://doi.org/10.46298/arima.1978</guid>
      <author>Bouafs, Walid</author>
      <author>Abdellatif, Nahla</author>
      <author>Jean, Frédéric</author>
      <author>Jérôme, Harmand</author>
      <dc:creator>Bouafs, Walid</dc:creator>
      <dc:creator>Abdellatif, Nahla</dc:creator>
      <dc:creator>Jean, Frédéric</dc:creator>
      <dc:creator>Jérôme, Harmand</dc:creator>
      <content:encoded><![CDATA[In this work, we consider an optimal control problem for a biological sequencing batch reactor used for the treatment of pollutants. The model includes two biological reactions, one aerobic and the other anoxic. We first study a minimal-time optimal control problem and then a problem combining time and energy. The existence of optimal trajectories is proven and the corresponding optimal controls are derived in each case.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Component reuse methodology for multi-clock Data-Flow parallel embedded Systems</title>
      <description><![CDATA[The growing complexity of new chips and time-to-market constraints require fundamental changes in the way systems are designed. Systems on Chip (SoC) based on reused components have become an absolute necessity for embedded systems companies that want to remain competitive. However, the design of a SoC is extremely complex because it encompasses a range of difficult problems in hardware and software design. This paper focuses on the design of parallel and multi-frequency applications using flexible components. Flexible parallel components are assembled using a scheduling method which combines the synchronous data-flow principle of balance equations with the polyhedral scheduling technique. Our approach allows a flexible component to be modelled, and a full system to be assembled and synthesized with automatically generated wrappers. The work presented here is an extension of previous work. We illustrate our method on a simplified WCDMA system, and discuss the relationship of this approach to multi-clock architectures, latency-insensitive design, multidimensional data-flow systems, and stream programming.]]></description>
      <pubDate>Sun, 24 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1979</link>
      <guid>https://doi.org/10.46298/arima.1979</guid>
      <author>Chana, Anne Marie</author>
      <author>Quinton, Patrice</author>
      <author>Derrien, Steven</author>
      <dc:creator>Chana, Anne Marie</dc:creator>
      <dc:creator>Quinton, Patrice</dc:creator>
      <dc:creator>Derrien, Steven</dc:creator>
      <content:encoded><![CDATA[The growing complexity of new chips and time-to-market constraints require fundamental changes in the way systems are designed. Systems on Chip (SoC) based on reused components have become an absolute necessity for embedded systems companies that want to remain competitive. However, the design of a SoC is extremely complex because it encompasses a range of difficult problems in hardware and software design. This paper focuses on the design of parallel and multi-frequency applications using flexible components. Flexible parallel components are assembled using a scheduling method which combines the synchronous data-flow principle of balance equations with the polyhedral scheduling technique. Our approach allows a flexible component to be modelled, and a full system to be assembled and synthesized with automatically generated wrappers. The work presented here is an extension of previous work. We illustrate our method on a simplified WCDMA system, and discuss the relationship of this approach to multi-clock architectures, latency-insensitive design, multidimensional data-flow systems, and stream programming.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Détection des préoccupations transversales par l'analyse formelle de concepts des diagrammes de séquence</title>
      <description><![CDATA[The existence of tangled or scattered crosscutting concerns complicates the understanding and evolution of object-oriented source code. The industrial adoption of the aspect-oriented paradigm has motivated research on new approaches supporting migration to aspect orientation. This migration requires the identification of crosscutting concerns, in order to encapsulate them into aspects. In this paper, we propose a new approach for the identification of crosscutting concerns at the conceptual level, represented by UML class and sequence diagrams. We use formal concept analysis to group scattered functionalities in sequence diagrams, and we analyze the order of method calls to detect tangled ones. Then, we filter the obtained candidate aspects in order to avoid errors.]]></description>
      <pubDate>Sat, 23 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1977</link>
      <guid>https://doi.org/10.46298/arima.1977</guid>
      <author>Dahi, Fairouz</author>
      <author>Bounour, Nora</author>
      <dc:creator>Dahi, Fairouz</dc:creator>
      <dc:creator>Bounour, Nora</dc:creator>
      <content:encoded><![CDATA[The existence of tangled or scattered crosscutting concerns complicates the understanding and evolution of object-oriented source code. The industrial adoption of the aspect-oriented paradigm has motivated research on new approaches supporting migration to aspect orientation. This migration requires the identification of crosscutting concerns, in order to encapsulate them into aspects. In this paper, we propose a new approach for the identification of crosscutting concerns at the conceptual level, represented by UML class and sequence diagrams. We use formal concept analysis to group scattered functionalities in sequence diagrams, and we analyze the order of method calls to detect tangled ones. Then, we filter the obtained candidate aspects in order to avoid errors.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Approche de sélection d’attributs pour la classification basée sur l’algorithme RFE-SVM</title>
      <description><![CDATA[Feature selection for classification is a very active research field in data mining and optimization. Its combinatorial nature requires the development of specific techniques (such as filters, wrappers, genetic algorithms, and so on) or hybrid approaches combining several optimization methods. In this context, support vector machine recursive feature elimination (RFE-SVM) stands out as one of the most effective methods. However, the RFE-SVM algorithm is a greedy method that offers no guarantee of finding the best possible feature combination for classification. To overcome this limitation, we propose an alternative approach that combines the RFE-SVM algorithm with local search operators drawn from operations research and artificial intelligence.]]></description>
      <pubDate>Sun, 17 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1965</link>
      <guid>https://doi.org/10.46298/arima.1965</guid>
      <author>Slimani, Yahya</author>
      <author>Essegir, Mohamed Amir</author>
      <author>Samb, Mouhamadou Lamine</author>
      <author>Camara, Fodé</author>
      <author>Ndiaye, Samba</author>
      <dc:creator>Slimani, Yahya</dc:creator>
      <dc:creator>Essegir, Mohamed Amir</dc:creator>
      <dc:creator>Samb, Mouhamadou Lamine</dc:creator>
      <dc:creator>Camara, Fodé</dc:creator>
      <dc:creator>Ndiaye, Samba</dc:creator>
      <content:encoded><![CDATA[Feature selection for classification is a very active research field in data mining and optimization. Its combinatorial nature requires the development of specific techniques (such as filters, wrappers, genetic algorithms, and so on) or hybrid approaches combining several optimization methods. In this context, support vector machine recursive feature elimination (RFE-SVM) stands out as one of the most effective methods. However, the RFE-SVM algorithm is a greedy method that offers no guarantee of finding the best possible feature combination for classification. To overcome this limitation, we propose an alternative approach that combines the RFE-SVM algorithm with local search operators drawn from operations research and artificial intelligence.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Opacité des artefacts d'un système workflow</title>
      <description><![CDATA[A property (of an object) is opaque to an observer when he or she cannot deduce the property from the set of observations. If each observer is attached to a given set of properties (the so-called secrets), then the system is said to be opaque if each secret is opaque to the corresponding observer. Opacity has been studied in the context of discrete event dynamic systems, where techniques from control theory were designed to enforce it. To the best of our knowledge, this paper is the first attempt to formalize the opacity of artifacts in data-centric workflow systems. We motivate this problem and give assumptions that guarantee the decidability of opacity. Some techniques for enforcing opacity are also indicated.]]></description>
      <pubDate>Fri, 08 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1973</link>
      <guid>https://doi.org/10.46298/arima.1973</guid>
      <author>Badouel, Eric</author>
      <author>Diouf, Mohamadou Lamine</author>
      <dc:creator>Badouel, Eric</dc:creator>
      <dc:creator>Diouf, Mohamadou Lamine</dc:creator>
      <content:encoded><![CDATA[A property (of an object) is opaque to an observer when he or she cannot deduce the property from the set of observations. If each observer is attached to a given set of properties (the so-called secrets), then the system is said to be opaque if each secret is opaque to the corresponding observer. Opacity has been studied in the context of discrete event dynamic systems, where techniques from control theory were designed to enforce it. To the best of our knowledge, this paper is the first attempt to formalize the opacity of artifacts in data-centric workflow systems. We motivate this problem and give assumptions that guarantee the decidability of opacity. Some techniques for enforcing opacity are also indicated.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Some efficient methods for computing the determinant of large sparse matrices</title>
      <description><![CDATA[The computation of determinants arises in many scientific applications, for example in the localization of the eigenvalues of a given matrix A in a domain of the complex plane. When a procedure based on the residue theorem is used, the integration process leads to the evaluation of the principal argument of the complex logarithm of the function g(z) = det((z + h)I - A)/det(zI - A), and a large number of determinants must be computed to ensure that the same branch of the complex logarithm is followed during the integration. In this paper, we present some efficient methods for computing the determinant of a large sparse and block-structured matrix. Tests conducted on randomly generated matrices show the efficiency and robustness of our methods.]]></description>
      <pubDate>Sun, 03 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1968</link>
      <guid>https://doi.org/10.46298/arima.1968</guid>
      <author>Kamgnia, Emmanuel</author>
      <author>Nguenang, Louis Bernard</author>
      <dc:creator>Kamgnia, Emmanuel</dc:creator>
      <dc:creator>Nguenang, Louis Bernard</dc:creator>
      <content:encoded><![CDATA[The computation of determinants arises in many scientific applications, for example in the localization of the eigenvalues of a given matrix A in a domain of the complex plane. When a procedure based on the residue theorem is used, the integration process leads to the evaluation of the principal argument of the complex logarithm of the function g(z) = det((z + h)I - A)/det(zI - A), and a large number of determinants must be computed to ensure that the same branch of the complex logarithm is followed during the integration. In this paper, we present some efficient methods for computing the determinant of a large sparse and block-structured matrix. Tests conducted on randomly generated matrices show the efficiency and robustness of our methods.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Vers une structuration auto-stabilisante des réseaux Ad Hoc</title>
      <description><![CDATA[In this paper, we present a self-stabilizing asynchronous distributed clustering algorithm that builds non-overlapping k-hop clusters. Our approach does not require any initialization: it relies only on information from neighboring nodes, exchanged through periodic messages. Starting from an arbitrary configuration, the network converges to a stable state after a finite number of steps. We first prove that stabilization is reached after at most n+2 transitions and requires (Δu+1)*log(2n+k+3) bits per node, where Δu represents the node's degree, n is the number of network nodes, and k is the maximum number of hops. We then evaluate the proposed algorithm using the OMNeT++ simulator.]]></description>
      <pubDate>Sun, 03 Aug 2014 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1970</link>
      <guid>https://doi.org/10.46298/arima.1970</guid>
      <author>Ba, Mandicou</author>
      <author>Flauzac, Olivier</author>
      <author>Haggar, Bachar Salim</author>
      <author>Makhloufi, Rafik</author>
      <author>Nolot, Florent</author>
      <author>Niang, Ibrahima</author>
      <dc:creator>Ba, Mandicou</dc:creator>
      <dc:creator>Flauzac, Olivier</dc:creator>
      <dc:creator>Haggar, Bachar Salim</dc:creator>
      <dc:creator>Makhloufi, Rafik</dc:creator>
      <dc:creator>Nolot, Florent</dc:creator>
      <dc:creator>Niang, Ibrahima</dc:creator>
      <content:encoded><![CDATA[In this paper, we present a self-stabilizing asynchronous distributed clustering algorithm that builds non-overlapping k-hop clusters. Our approach does not require any initialization: it relies only on information from neighboring nodes, exchanged through periodic messages. Starting from an arbitrary configuration, the network converges to a stable state after a finite number of steps. We first prove that stabilization is reached after at most n+2 transitions and requires (Δu+1)*log(2n+k+3) bits per node, where Δu represents the node's degree, n is the number of network nodes, and k is the maximum number of hops. We then evaluate the proposed algorithm using the OMNeT++ simulator.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A mixture of local and quadratic approximation variable selection algorithm in nonconcave penalized regression</title>
      <description><![CDATA[We consider the problem of variable selection via penalized likelihood using nonconvex penalty functions. To maximize the non-differentiable and nonconcave objective function, an algorithm based on local linear approximation, which adopts a naturally sparse representation, was recently proposed. However, although it has promising theoretical properties, it inherits some drawbacks of the Lasso in the high-dimensional setting. To overcome these drawbacks, we propose an algorithm (MLLQA) for maximizing the penalized likelihood for a large class of nonconvex penalty functions. The convergence property of MLLQA and the oracle property of the one-step MLLQA estimator are established. Some simulations and an application to a real data set are also presented.]]></description>
      <pubDate>Wed, 28 Aug 2013 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1962</link>
      <guid>https://doi.org/10.46298/arima.1962</guid>
      <author>N'Guessan, Assi</author>
      <author>Sidi Zakari, Ibrahim</author>
      <author>Mkhadri, Assi</author>
      <dc:creator>N'Guessan, Assi</dc:creator>
      <dc:creator>Sidi Zakari, Ibrahim</dc:creator>
      <dc:creator>Mkhadri, Assi</dc:creator>
      <content:encoded><![CDATA[We consider the problem of variable selection via penalized likelihood using nonconvex penalty functions. To maximize the non-differentiable and nonconcave objective function, an algorithm based on local linear approximation, which adopts a naturally sparse representation, was recently proposed. However, although it has promising theoretical properties, it inherits some drawbacks of the Lasso in the high-dimensional setting. To overcome these drawbacks, we propose an algorithm (MLLQA) for maximizing the penalized likelihood for a large class of nonconvex penalty functions. The convergence property of MLLQA and the oracle property of the one-step MLLQA estimator are established. Some simulations and an application to a real data set are also presented.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A posteriori error estimates for non conforming approximation of quasi Stokes problem</title>
      <description><![CDATA[We derive and analyze an a posteriori error estimator for the nonconforming finite element approximation of the quasi-Stokes problem. The estimator is based on the solution of local problems on stars at low computational cost. It is equivalent to the energy error norm up to data oscillation; neither a saturation assumption nor a comparison with a residual estimator is required.]]></description>
      <pubDate>Tue, 06 Aug 2013 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1963</link>
      <guid>https://doi.org/10.46298/arima.1963</guid>
      <author>Achchab, B.</author>
      <author>Agouzal, A.</author>
      <author>Bouihat, K.</author>
      <dc:creator>Achchab, B.</dc:creator>
      <dc:creator>Agouzal, A.</dc:creator>
      <dc:creator>Bouihat, K.</dc:creator>
      <content:encoded><![CDATA[We derive and analyze an a posteriori error estimator for the nonconforming finite element approximation of the quasi-Stokes problem. The estimator is based on the solution of local problems on stars at low computational cost. It is equivalent to the energy error norm up to data oscillation; neither a saturation assumption nor a comparison with a residual estimator is required.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Asymptotic Analysis of a switching system of Typha proliferation</title>
      <description><![CDATA[In this paper, we propose a mathematical model of Typha growth and analyse its stability. The model describes the dynamics of the plant population, and its theoretical study identifies the key factors of Typha proliferation. We present an analysis of the equilibrium solutions and study their local stability. This constitutes a first step towards a more detailed study of the nonlinear dynamics of this model.]]></description>
      <pubDate>Wed, 28 Nov 2012 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1960</link>
      <guid>https://doi.org/10.46298/arima.1960</guid>
      <author>Diagne, Mamadou Lamine</author>
      <author>Ndiaye, Papa Ibrahima</author>
      <author>Niane, Mary Teuw</author>
      <author>Sari, Tewfik</author>
      <dc:creator>Diagne, Mamadou Lamine</dc:creator>
      <dc:creator>Ndiaye, Papa Ibrahima</dc:creator>
      <dc:creator>Niane, Mary Teuw</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <content:encoded><![CDATA[In this paper, we propose a mathematical model of Typha growth and analyse its stability. The model describes the dynamics of the plant population, and its theoretical study identifies the key factors of Typha proliferation. We present an analysis of the equilibrium solutions and study their local stability. This constitutes a first step towards a more detailed study of the nonlinear dynamics of this model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Criteria for longitudinal data model selection based on Kullback’s symmetric divergence</title>
      <description><![CDATA[Recently, Azari et al. (2006) showed that the Akaike information criterion (AIC) and its corrected versions cannot be directly applied to model selection for longitudinal data with correlated errors. They proposed two model selection criteria, AICc and RICc, by applying likelihood and residual likelihood approaches. These two criteria are estimators of the Kullback-Leibler divergence, which is asymmetric. In this work, we apply the likelihood and residual likelihood approaches to propose two new criteria, suitable for small-sample longitudinal data, based on Kullback's symmetric divergence. Their performance relative to other criteria is examined in a large simulation study.]]></description>
      <pubDate>Tue, 20 Nov 2012 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1959</link>
      <guid>https://doi.org/10.46298/arima.1959</guid>
      <author>Hafidi, Bezza</author>
      <author>Azzaoui, Nourddine</author>
      <dc:creator>Hafidi, Bezza</dc:creator>
      <dc:creator>Azzaoui, Nourddine</dc:creator>
      <content:encoded><![CDATA[Recently, Azari et al. (2006) showed that the Akaike information criterion (AIC) and its corrected versions cannot be directly applied to model selection for longitudinal data with correlated errors. They proposed two model selection criteria, AICc and RICc, by applying likelihood and residual likelihood approaches. These two criteria are estimators of the Kullback-Leibler divergence, which is asymmetric. In this work, we apply the likelihood and residual likelihood approaches to propose two new criteria, suitable for small-sample longitudinal data, based on Kullback's symmetric divergence. Their performance relative to other criteria is examined in a large simulation study.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Minimization of an energy error functional to solve a Cauchy problem arising in plasma physics: the reconstruction of the magnetic flux in the vacuum surrounding the plasma in a Tokamak</title>
      <description><![CDATA[A numerical method for the computation of the magnetic flux in the vacuum surrounding the plasma in a Tokamak is investigated. It is based on the formulation of a Cauchy problem which is solved through the minimization of an energy error functional. Several numerical experiments are conducted which show the efficiency of the method.]]></description>
      <pubDate>Thu, 11 Oct 2012 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1957</link>
      <guid>https://doi.org/10.46298/arima.1957</guid>
      <author>Faugeras, Blaise</author>
      <author>Ben Abda, Amel</author>
      <author>Blum, Jacques</author>
      <author>Boulbe, Cedric</author>
      <dc:creator>Faugeras, Blaise</dc:creator>
      <dc:creator>Ben Abda, Amel</dc:creator>
      <dc:creator>Blum, Jacques</dc:creator>
      <dc:creator>Boulbe, Cedric</dc:creator>
      <content:encoded><![CDATA[A numerical method for the computation of the magnetic flux in the vacuum surrounding the plasma in a Tokamak is investigated. It is based on the formulation of a Cauchy problem which is solved through the minimization of an energy error functional. Several numerical experiments are conducted which show the efficiency of the method.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An error estimate in image processing</title>
      <description><![CDATA[A new interpolation error estimate for a finite element method for image processing is proved in this paper. The suggested scheme is based on the Raviart-Thomas element, extended to a nonlinear formulation. The numerical trials confirm the accuracy of the restoration algorithm.]]></description>
      <pubDate>Fri, 28 Sep 2012 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1958</link>
      <guid>https://doi.org/10.46298/arima.1958</guid>
      <author>Destuynder, Philippe</author>
      <author>Jaoua, Mohamed</author>
      <author>Sellami, Hela</author>
      <dc:creator>Destuynder, Philippe</dc:creator>
      <dc:creator>Jaoua, Mohamed</dc:creator>
      <dc:creator>Sellami, Hela</dc:creator>
      <content:encoded><![CDATA[A new interpolation error estimate for a finite element method for image processing is proved in this paper. The suggested scheme is based on the Raviart-Thomas element, extended to a nonlinear formulation. The numerical trials confirm the accuracy of the restoration algorithm.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A survey of RDF storage approaches</title>
      <description><![CDATA[The Semantic Web extends the principles of the Web by allowing computers to understand and easily explore the Web. In recent years, RDF has become a widespread data format for the Semantic Web. There is a real need to efficiently store and retrieve RDF data as the number and scale of real-world Semantic Web applications increase. As datasets grow larger and more datasets are linked together, scalability becomes more important. Efficient data storage and query processing that can scale to large amounts of possibly schema-less data has become an important research topic. This paper gives an overview of the features of techniques for storing RDF data.]]></description>
      <pubDate>Tue, 04 Sep 2012 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1956</link>
      <guid>https://doi.org/10.46298/arima.1956</guid>
      <author>Faye, David C.</author>
      <author>Curé, Olivier</author>
      <author>Blin, Guillaume</author>
      <dc:creator>Faye, David C.</dc:creator>
      <dc:creator>Curé, Olivier</dc:creator>
      <dc:creator>Blin, Guillaume</dc:creator>
      <content:encoded><![CDATA[The Semantic Web extends the principles of the Web by allowing computers to understand and easily explore the Web. In recent years, RDF has become a widespread data format for the Semantic Web. There is a real need to efficiently store and retrieve RDF data as the number and scale of real-world Semantic Web applications increase. As datasets grow larger and more datasets are linked together, scalability becomes more important. Efficient data storage and query processing that can scale to large amounts of possibly schema-less data has become an important research topic. This paper gives an overview of the features of techniques for storing RDF data.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Two numerical methods for Abel's integral equation with comparison</title>
      <description><![CDATA[The analogy between Abel's integral equation and the fractional-order integral of a given function, j^α f(t), is discussed. Two different numerical methods are presented and an approximate formula for j^α f(t) is obtained. The first approach considers the case when the function f(t) is smooth, and a quadrature formula is obtained; a modified formula is deduced in case the function has one or more simple poles. In the second approach, a procedure is presented to weaken the singularities. Both approaches can be used to solve Abel's integral equation numerically. Some numerical examples are given to illustrate our results.]]></description>
      <pubDate>Sat, 25 Aug 2012 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1955</link>
      <guid>https://doi.org/10.46298/arima.1955</guid>
      <author>Badr, Abdallah Ali</author>
      <dc:creator>Badr, Abdallah Ali</dc:creator>
      <content:encoded><![CDATA[The analogy between Abel's integral equation and the fractional-order integral of a given function, j^α f(t), is discussed. Two different numerical methods are presented and an approximate formula for j^α f(t) is obtained. The first approach considers the case when the function f(t) is smooth, and a quadrature formula is obtained; a modified formula is deduced in case the function has one or more simple poles. In the second approach, a procedure is presented to weaken the singularities. Both approaches can be used to solve Abel's integral equation numerically. Some numerical examples are given to illustrate our results.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Two-Aircraft Acoustic Optimal Control Problem: SQP algorithms</title>
      <description><![CDATA[This contribution develops an acoustic optimization model of flight paths minimizing the noise of two aircraft as perceived on the ground. The aim is to minimize the noise while satisfying all flight constraints and avoiding conflicts. The flight dynamics, together with a cost function, yield a non-linear optimal control problem governed by ordinary non-linear differential equations. To solve this problem, the theory of necessary conditions for optimal control problems with instantaneous constraints is used. The optimal solution is characterized as a local one by combining a Newtonian approach with the Karush-Kuhn-Tucker optimality conditions and trust-region sequential quadratic programming. The SQP methods are provided by the commercial KNITRO solver under the AMPL programming language. Among several possible solutions, it is shown that there is an optimal trajectory (for each aircraft) leading to a reduction of noise levels on the ground.]]></description>
      <pubDate>Tue, 29 Nov 2011 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1946</link>
      <guid>https://doi.org/10.46298/arima.1946</guid>
      <author>Nahayo, F.</author>
      <author>Khardi, S.</author>
      <author>Ndimubandi, J.</author>
      <author>Haddou, Mounir</author>
      <author>Hamadiche, M.</author>
      <dc:creator>Nahayo, F.</dc:creator>
      <dc:creator>Khardi, S.</dc:creator>
      <dc:creator>Ndimubandi, J.</dc:creator>
      <dc:creator>Haddou, Mounir</dc:creator>
      <dc:creator>Hamadiche, M.</dc:creator>
      <content:encoded><![CDATA[This contribution develops an acoustic optimization model of flight paths minimizing the noise of two aircraft as perceived on the ground. The aim is to minimize the noise while satisfying all flight constraints and avoiding conflicts. The flight dynamics, together with a cost function, yield a non-linear optimal control problem governed by ordinary non-linear differential equations. To solve this problem, the theory of necessary conditions for optimal control problems with instantaneous constraints is used. The optimal solution is characterized as a local one by combining a Newtonian approach with the Karush-Kuhn-Tucker optimality conditions and trust-region sequential quadratic programming. The SQP methods are provided by the commercial KNITRO solver under the AMPL programming language. Among several possible solutions, it is shown that there is an optimal trajectory (for each aircraft) leading to a reduction of noise levels on the ground.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Sur un modèle de compétition et de coexistence dans le chémostat</title>
      <description><![CDATA[In this paper, we consider a mathematical model of two microbial species competing for a single resource in a chemostat. We take into account the interspecific interactions between the two populations of micro-organisms as well as the intraspecific interactions between individuals of the same population. The growth functions are monotonic and the dilution rates are distinct. We determine the equilibrium points and analyze their local stability.]]></description>
      <pubDate>Tue, 22 Nov 2011 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1953</link>
      <guid>https://doi.org/10.46298/arima.1953</guid>
      <author>Fekih-Salem, Radhouane</author>
      <author>Sari, Tewfik</author>
      <author>Abdellatif, Nahla</author>
      <dc:creator>Fekih-Salem, Radhouane</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <dc:creator>Abdellatif, Nahla</dc:creator>
      <content:encoded><![CDATA[In this paper, we consider the mathematical model of two microbial species competing for a single resource in a chemostat. We take into account the interspecific interactions between the two populations of micro-organisms and the intraspecific interactions between individuals themselves. The growth functions are monotonic and the dilution ratios are distinct. We determine the equilibrium points and their local stability.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Evaluation objective des méthodes de segmentation des maillages polygonaux 3D basée sur la classification de régions</title>
      <description><![CDATA[In this paper, we propose an objective evaluation approach for polygonal 3D mesh segmentation algorithms. Our approach is based on region classification. For that, we first classify the manually segmented meshes into convex, concave, and planar regions. Secondly, we present three quality measures that quantify the similarity of each type of region of the ground truth relative to the segmentation obtained by an automatic algorithm. We apply this approach to eight well-selected existing algorithms on heterogeneous images. This provides a better understanding of the strengths and weaknesses of each technique as a function of each mesh-region type, with the aim of making a better choice of segmentation algorithm for different applications.]]></description>
      <pubDate>Tue, 08 Nov 2011 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1943</link>
      <guid>https://doi.org/10.46298/arima.1943</guid>
      <author>Zguira, Amira</author>
      <author>Doggaz, Narjes</author>
      <author>Zagoubra, Ezzeddine</author>
      <dc:creator>Zguira, Amira</dc:creator>
      <dc:creator>Doggaz, Narjes</dc:creator>
      <dc:creator>Zagoubra, Ezzeddine</dc:creator>
      <content:encoded><![CDATA[In this paper, we propose an objective evaluation approach for polygonal 3D mesh segmentation algorithms. Our approach is based on region classification. For that, we first classify the manually segmented meshes into convex, concave, and planar regions. Secondly, we present three quality measures that quantify the similarity of each type of region of the ground truth relative to the segmentation obtained by an automatic algorithm. We apply this approach to eight well-selected existing algorithms on heterogeneous images. This provides a better understanding of the strengths and weaknesses of each technique as a function of each mesh-region type, with the aim of making a better choice of segmentation algorithm for different applications.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Schéma DHT hiérarchique pour la tolérance aux pannes dans les réseaux P2P-SIP</title>
      <description><![CDATA[This paper focuses on the fault tolerance of super-nodes in P2P-SIP systems. These systems are characterized by high volatility of super-nodes. Most proposed fault-tolerant solutions address only physical faults. They do not take into consideration timing faults, which are very important for multimedia applications such as telephony. This paper proposes a timing and physical fault tolerance mechanism based on a two-level P2P overlay for P2P-SIP systems. The simulation results show that our proposal substantially reduces node location latency and increases the probability of finding the called nodes.]]></description>
      <pubDate>Thu, 27 Oct 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1948</link>
      <guid>https://doi.org/10.46298/arima.1948</guid>
      <author>Diané, Ibrahima</author>
      <author>Niang, Ibrahima</author>
      <dc:creator>Diané, Ibrahima</dc:creator>
      <dc:creator>Niang, Ibrahima</dc:creator>
      <content:encoded><![CDATA[This paper focuses on the fault tolerance of super-nodes in P2P-SIP systems. These systems are characterized by high volatility of super-nodes. Most proposed fault-tolerant solutions address only physical faults. They do not take into consideration timing faults, which are very important for multimedia applications such as telephony. This paper proposes a timing and physical fault tolerance mechanism based on a two-level P2P overlay for P2P-SIP systems. The simulation results show that our proposal substantially reduces node location latency and increases the probability of finding the called nodes.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Collision-resistant hash function based on composition of functions</title>
      <description><![CDATA[A cryptographic hash function is a deterministic procedure that compresses an arbitrary block of numerical data and returns a fixed-size bit string. There exist many hash functions: MD5, HAVAL, SHA, ... It has been reported that these hash functions are no longer secure. Our work is focused on the construction of a new hash function based on the composition of functions. The construction uses the NP-completeness of three-dimensional contingency tables and the relaxation of the constraint that a hash function should also be a compression function.]]></description>
      <pubDate>Wed, 26 Oct 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1949</link>
      <guid>https://doi.org/10.46298/arima.1949</guid>
      <author>Ndoundam, René</author>
      <author>Karnel Sadie, Juvet</author>
      <dc:creator>Ndoundam, René</dc:creator>
      <dc:creator>Karnel Sadie, Juvet</dc:creator>
      <content:encoded><![CDATA[A cryptographic hash function is a deterministic procedure that compresses an arbitrary block of numerical data and returns a fixed-size bit string. There exist many hash functions: MD5, HAVAL, SHA, ... It has been reported that these hash functions are no longer secure. Our work is focused on the construction of a new hash function based on the composition of functions. The construction uses the NP-completeness of three-dimensional contingency tables and the relaxation of the constraint that a hash function should also be a compression function.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Dépliage des réseaux de Petri temporels à modèle sous-jacent non sauf</title>
      <description><![CDATA[For the formal verification of concurrent or communicating dynamic systems modeled with Petri nets, the unfolding method is used to cope with the well-known problem of state explosion. An extension of the method to non-safe time Petri nets is presented. The obtained unfolding is simply a prefix of that of the ordinary Petri net underlying the time Petri net. For a certain class of time Petri nets, a finite prefix capturing the state space and the timed language follows from the calculation of a finite set of finite processes with valid timings. The quantitative temporal constraints associated with these processes can serve to validate more effectively the temporal specifications of a hard real-time system.]]></description>
      <pubDate>Sat, 15 Oct 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1950</link>
      <guid>https://doi.org/10.46298/arima.1950</guid>
      <author>Sogbohossou, M.</author>
      <author>Delfieu, D.</author>
      <dc:creator>Sogbohossou, M.</dc:creator>
      <dc:creator>Delfieu, D.</dc:creator>
      <content:encoded><![CDATA[For the formal verification of concurrent or communicating dynamic systems modeled with Petri nets, the unfolding method is used to cope with the well-known problem of state explosion. An extension of the method to non-safe time Petri nets is presented. The obtained unfolding is simply a prefix of that of the ordinary Petri net underlying the time Petri net. For a certain class of time Petri nets, a finite prefix capturing the state space and the timed language follows from the calculation of a finite set of finite processes with valid timings. The quantitative temporal constraints associated with these processes can serve to validate more effectively the temporal specifications of a hard real-time system.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Segmentation d'Images Texturées Couleur à l'aide de modèles paramétriques pour approcher la distribution des erreurs de prédiction linéaires</title>
      <description><![CDATA[We propose novel a priori parametric models to approximate the distribution of the two-dimensional multichannel linear prediction error, in order to improve the performance of color texture segmentation algorithms. Two-dimensional linear prediction models are used to characterize the spatial structures in color images. The multivariate linear prediction error of these texture models is approximated with a Wishart distribution and multivariate Gaussian mixture models. A novel color texture segmentation framework based on these models and a spatial regularization model of initial class label fields is presented. For the proposed method, and with different color spaces, experimental results show better performance in terms of percentage segmentation error compared with the use of a multivariate Gaussian law.]]></description>
      <pubDate>Fri, 07 Oct 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1942</link>
      <guid>https://doi.org/10.46298/arima.1942</guid>
      <author>Qazi, Imtnan-Ul-Haque</author>
      <author>Alata, Olivier</author>
      <author>Burie, Jean-Christophe</author>
      <author>Moussa, Ahmed</author>
      <author>Fernandez-Maloigne, Christine</author>
      <dc:creator>Qazi, Imtnan-Ul-Haque</dc:creator>
      <dc:creator>Alata, Olivier</dc:creator>
      <dc:creator>Burie, Jean-Christophe</dc:creator>
      <dc:creator>Moussa, Ahmed</dc:creator>
      <dc:creator>Fernandez-Maloigne, Christine</dc:creator>
      <content:encoded><![CDATA[We propose novel a priori parametric models to approximate the distribution of the two-dimensional multichannel linear prediction error, in order to improve the performance of color texture segmentation algorithms. Two-dimensional linear prediction models are used to characterize the spatial structures in color images. The multivariate linear prediction error of these texture models is approximated with a Wishart distribution and multivariate Gaussian mixture models. A novel color texture segmentation framework based on these models and a spatial regularization model of initial class label fields is presented. For the proposed method, and with different color spaces, experimental results show better performance in terms of percentage segmentation error compared with the use of a multivariate Gaussian law.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Un protocole de fertilisation croisée d’un langage fonctionnel et d’un langage objet: application à la mise en oeuvre d’un prototype d’éditeur coopératif asynchrone</title>
      <description><![CDATA[Cross-fertilization is a technique for pooling the expertise and resources of at least two sectors in order to make the best of each. In this paper, we present a programming protocol based on the cross-fertilization of two programming languages (Haskell and Java) under two different programming paradigms: the functional paradigm and the object paradigm. This pooling of the strengths of each type of language makes it possible to develop more secure applications in a shorter time, with concise functional code that is easily understandable and thus easily maintainable by a third party. We present the meta-architecture of applications developed following this approach and an instantiation of it for the implementation of a prototype of an asynchronous collaborative editor.]]></description>
      <pubDate>Tue, 04 Oct 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1952</link>
      <guid>https://doi.org/10.46298/arima.1952</guid>
      <author>Tchoupé Tchendji, Maurice</author>
      <dc:creator>Tchoupé Tchendji, Maurice</dc:creator>
      <content:encoded><![CDATA[Cross-fertilization is a technique for pooling the expertise and resources of at least two sectors in order to make the best of each. In this paper, we present a programming protocol based on the cross-fertilization of two programming languages (Haskell and Java) under two different programming paradigms: the functional paradigm and the object paradigm. This pooling of the strengths of each type of language makes it possible to develop more secure applications in a shorter time, with concise functional code that is easily understandable and thus easily maintainable by a third party. We present the meta-architecture of applications developed following this approach and an instantiation of it for the implementation of a prototype of an asynchronous collaborative editor.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Parallel GMRES with a multiplicative Schwarz preconditioner</title>
      <description><![CDATA[This paper presents a robust hybrid solver for linear systems that combines a Krylov subspace method as accelerator with a Schwarz-based preconditioner. This preconditioner uses an explicit formulation associated with one iteration of the multiplicative Schwarz method. The Newton-basis GMRES, which aims at expressing good data parallelism between subdomains, is used as the accelerator. In the first part of this paper, we present the pipeline parallelism that is obtained when the multiplicative Schwarz preconditioner is used to build the Krylov basis for the GMRES method. This is referred to as the first level of parallelism. In the second part, we introduce a second level of parallelism inside the subdomains. For Schwarz-based preconditioners, the number of subdomains is kept small to provide a robust solver. Therefore, the linear systems associated with the subdomains are solved efficiently with this approach. Numerical experiments are performed on several problems to demonstrate the benefits of using these two levels of parallelism in the solver, mainly in terms of numerical robustness and global efficiency.]]></description>
      <pubDate>Mon, 12 Sep 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1945</link>
      <guid>https://doi.org/10.46298/arima.1945</guid>
      <author>Nuentsa Wakam, Désiré</author>
      <author>Atenekeng-Kahou, Guy-Antoine</author>
      <dc:creator>Nuentsa Wakam, Désiré</dc:creator>
      <dc:creator>Atenekeng-Kahou, Guy-Antoine</dc:creator>
      <content:encoded><![CDATA[This paper presents a robust hybrid solver for linear systems that combines a Krylov subspace method as accelerator with a Schwarz-based preconditioner. This preconditioner uses an explicit formulation associated with one iteration of the multiplicative Schwarz method. The Newton-basis GMRES, which aims at expressing good data parallelism between subdomains, is used as the accelerator. In the first part of this paper, we present the pipeline parallelism that is obtained when the multiplicative Schwarz preconditioner is used to build the Krylov basis for the GMRES method. This is referred to as the first level of parallelism. In the second part, we introduce a second level of parallelism inside the subdomains. For Schwarz-based preconditioners, the number of subdomains is kept small to provide a robust solver. Therefore, the linear systems associated with the subdomains are solved efficiently with this approach. Numerical experiments are performed on several problems to demonstrate the benefits of using these two levels of parallelism in the solver, mainly in terms of numerical robustness and global efficiency.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A transmission model of Bilharzia : A mathematical analysis of an heterogeneous model</title>
      <description><![CDATA[We consider a heterogeneous model of transmission of bilharzia. We compute the basic reproduction ratio R0. We prove that if R0 < 1, then the disease-free equilibrium is globally asymptotically stable. If R0 > 1, then there exists a unique endemic equilibrium, which is globally asymptotically stable. We then consider possible applications to real data.]]></description>
      <pubDate>Tue, 23 Aug 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1954</link>
      <guid>https://doi.org/10.46298/arima.1954</guid>
      <author>Gilles, Riveau</author>
      <author>Sallet, Gauthier</author>
      <author>Lena, Tendeng</author>
      <dc:creator>Gilles, Riveau</dc:creator>
      <dc:creator>Sallet, Gauthier</dc:creator>
      <dc:creator>Lena, Tendeng</dc:creator>
      <content:encoded><![CDATA[We consider a heterogeneous model of transmission of bilharzia. We compute the basic reproduction ratio R0. We prove that if R0 < 1, then the disease-free equilibrium is globally asymptotically stable. If R0 > 1, then there exists a unique endemic equilibrium, which is globally asymptotically stable. We then consider possible applications to real data.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Multigrid methods and data assimilation ― Convergence study and first experiments on non-linear equations</title>
      <description><![CDATA[In order to limit the computational cost of the variational data assimilation process, we investigate the use of multigrid methods to solve the associated optimal control system. On a linear advection equation, we study the impact of the regularization term and of discretization errors on the efficiency of the coarse-grid correction step introduced by the multigrid method. We show that even if, for a perfect numerical model, the optimal control problem leads to the solution of an elliptic system, discretization errors introduce implicit diffusion that can alter the success of the multigrid methods. We then test the multigrid configuration and the influence of the algorithmic parameters on a non-linear Burgers equation to show that the algorithm is robust and converges much faster than the monogrid one.]]></description>
      <pubDate>Sat, 20 Aug 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1944</link>
      <guid>https://doi.org/10.46298/arima.1944</guid>
      <author>Neveu, Emilie</author>
      <author>Debreu, Laurent</author>
      <author>Le Dimet, François-Xavier</author>
      <dc:creator>Neveu, Emilie</dc:creator>
      <dc:creator>Debreu, Laurent</dc:creator>
      <dc:creator>Le Dimet, François-Xavier</dc:creator>
      <content:encoded><![CDATA[In order to limit the computational cost of the variational data assimilation process, we investigate the use of multigrid methods to solve the associated optimal control system. On a linear advection equation, we study the impact of the regularization term and of discretization errors on the efficiency of the coarse-grid correction step introduced by the multigrid method. We show that even if, for a perfect numerical model, the optimal control problem leads to the solution of an elliptic system, discretization errors introduce implicit diffusion that can alter the success of the multigrid methods. We then test the multigrid configuration and the influence of the algorithmic parameters on a non-linear Burgers equation to show that the algorithm is robust and converges much faster than the monogrid one.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Instrumentation des activités des tuteurs à l’aide d’un système multi-agents d’analyse automatique des interactions</title>
      <description><![CDATA[The research presented in this article is dedicated to tutor instrumentation in distance collaborative learning situations. We are particularly interested in the reuse of interaction analysis indicators. In this paper, we present our system SYSAT, a multi-agent system for monitoring the activities of learners. The aim of SYSAT is to reuse indicators (social, cognitive, emotional, ...) reported in the literature in an open and adaptive system. We tested our system on interaction data from two experiments conducted with two master's students of Ibn Tofail University. The article presents the results and discusses the prospects for research.]]></description>
      <pubDate>Thu, 18 Aug 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1947</link>
      <guid>https://doi.org/10.46298/arima.1947</guid>
      <author>Oumaira, Ilham</author>
      <author>Messoussi, Rochdi</author>
      <author>Touahni, Raja</author>
      <dc:creator>Oumaira, Ilham</dc:creator>
      <dc:creator>Messoussi, Rochdi</dc:creator>
      <dc:creator>Touahni, Raja</dc:creator>
      <content:encoded><![CDATA[The research presented in this article is dedicated to tutor instrumentation in distance collaborative learning situations. We are particularly interested in the reuse of interaction analysis indicators. In this paper, we present our system SYSAT, a multi-agent system for monitoring the activities of learners. The aim of SYSAT is to reuse indicators (social, cognitive, emotional, ...) reported in the literature in an open and adaptive system. We tested our system on interaction data from two experiments conducted with two master's students of Ibn Tofail University. The article presents the results and discusses the prospects for research.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Cohérence de vues dans la spécification des Architectures Logicielles</title>
      <description><![CDATA[This article addresses the specification of software architectures. It presents a symbiosis between the conceptual approach based on UML profiles and the operational vision advocated by ArchJava. Currently, each description language sits at one end of the process, creating a decoupling between the specification of software architectures and their implementation, and a risk of inconsistency. We describe an approach based on a UML profile for the structural description of software architectures, together with transformation rules to generate the source code. Current experiments are conclusive, and we intend to pursue this work on configurations and the dynamic aspect.]]></description>
      <pubDate>Mon, 01 Aug 2011 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1951</link>
      <guid>https://doi.org/10.46298/arima.1951</guid>
      <author>Kouamou, Georges-Edouard</author>
      <dc:creator>Kouamou, Georges-Edouard</dc:creator>
      <content:encoded><![CDATA[This article addresses the specification of software architectures. It presents a symbiosis between the conceptual approach based on UML profiles and the operational vision advocated by ArchJava. Currently, each description language sits at one end of the process, creating a decoupling between the specification of software architectures and their implementation, and a risk of inconsistency. We describe an approach based on a UML profile for the structural description of software architectures, together with transformation rules to generate the source code. Current experiments are conclusive, and we intend to pursue this work on configurations and the dynamic aspect.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimisation de forme fluide-structure par un jeu de Nash</title>
      <description><![CDATA[This paper aims at the development of innovative methods of optimum design for multidisciplinary optimization problems in the aeronautical context. The subject is the treatment of a concurrent optimization problem in which the aerodynamicist interacts with the structural designer in a parallel way, in a symmetric Nash game. Algorithms for the calculation of the equilibrium point have been proposed and successfully tested for this coupled aero-structural shape optimization in a situation where the aerodynamic criterion is preponderant.]]></description>
      <pubDate>Sun, 28 Nov 2010 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1933</link>
      <guid>https://doi.org/10.46298/arima.1933</guid>
      <author>Abou El Majd, B.</author>
      <author>Desideri, J.-A.</author>
      <author>Habbal, A.</author>
      <dc:creator>Abou El Majd, B.</dc:creator>
      <dc:creator>Desideri, J.-A.</dc:creator>
      <dc:creator>Habbal, A.</dc:creator>
      <content:encoded><![CDATA[This paper aims at the development of innovative methods of optimum design for multidisciplinary optimization problems in the aeronautical context. The subject is the treatment of a concurrent optimization problem in which the aerodynamicist interacts with the structural designer in a parallel way, in a symmetric Nash game. Algorithms for the calculation of the equilibrium point have been proposed and successfully tested for this coupled aero-structural shape optimization in a situation where the aerodynamic criterion is preponderant.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Procédures d’échantillonnage efficaces : Estimation de la fiabilité des systèmes séries/parallèles</title>
      <description><![CDATA[This work consists in determining, in a practical and straightforward manner, efficient sequential sampling schemes for estimating the product of Bernoulli parameters. The sampling schemes given in the literature are complex and costly. The results are useful for estimating the reliability of series/parallel systems, where the allocation of the number of units to be tested from each component can be effective in minimizing the variance of the estimator.]]></description>
      <pubDate>Sat, 06 Nov 2010 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1941</link>
      <guid>https://doi.org/10.46298/arima.1941</guid>
      <author>Benkamra, Zohra</author>
      <author>Terbeche, Mekki</author>
      <author>Tlemcani, Mounir</author>
      <dc:creator>Benkamra, Zohra</dc:creator>
      <dc:creator>Terbeche, Mekki</dc:creator>
      <dc:creator>Tlemcani, Mounir</dc:creator>
      <content:encoded><![CDATA[This work consists in determining, in a practical and straightforward manner, efficient sequential sampling schemes for estimating the product of Bernoulli parameters. The sampling schemes given in the literature are complex and costly. The results are useful for estimating the reliability of series/parallel systems, where the allocation of the number of units to be tested from each component can be effective in minimizing the variance of the estimator.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Modélisation asymptotique d’une coque peu-profonde de Marguerre-von Kármán généralisée dans le cas dynamique</title>
      <description><![CDATA[In a recent work, Gratie generalized the classical Marguerre-von Kármán equations studied by Ciarlet and Paumier in [2], where only a portion of the lateral face is subjected to boundary conditions of von Kármán's type, the remaining portion being free. She showed that the leading term of the asymptotic expansion is characterized by a two-dimensional boundary value problem. In this paper, we formally extend this study to the dynamic case.]]></description>
      <pubDate>Wed, 27 Oct 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1937</link>
      <guid>https://doi.org/10.46298/arima.1937</guid>
      <author>Chacha, D.A.</author>
      <author>Ghezal, A.</author>
      <author>Bensayah, A.</author>
      <dc:creator>Chacha, D.A.</dc:creator>
      <dc:creator>Ghezal, A.</dc:creator>
      <dc:creator>Bensayah, A.</dc:creator>
      <content:encoded><![CDATA[In a recent work, Gratie generalized the classical Marguerre-von Kármán equations studied by Ciarlet and Paumier in [2], where only a portion of the lateral face is subjected to boundary conditions of von Kármán's type, the remaining portion being free. She showed that the leading term of the asymptotic expansion is characterized by a two-dimensional boundary value problem. In this paper, we formally extend this study to the dynamic case.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimisation multicritère : Une approche par partage des variables</title>
      <description><![CDATA[We are interested here in a multi-criteria optimization problem approached using game theory. This problem is treated using a new algorithm for the splitting of territory in the case of concurrent optimization, which presents a new formulation of Nash games between two players using two allocation tables. Each player minimizes his cost function using the variables allocated by his own table. The two tables are given by an iterative algorithm. An image processing problem is addressed using the proposed algorithms.]]></description>
      <pubDate>Tue, 26 Oct 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1938</link>
      <guid>https://doi.org/10.46298/arima.1938</guid>
      <author>Aboulaich, Rajae</author>
      <author>Habbal, Abderrahmane</author>
      <author>Moussaid, Noureddine</author>
      <dc:creator>Aboulaich, Rajae</dc:creator>
      <dc:creator>Habbal, Abderrahmane</dc:creator>
      <dc:creator>Moussaid, Noureddine</dc:creator>
      <content:encoded><![CDATA[We are interested here in a multi-criteria optimization problem approached using game theory. This problem is treated using a new algorithm for the splitting of territory in the case of concurrent optimization, which presents a new formulation of Nash games between two players using two allocation tables. Each player minimizes his cost function using the variables allocated by his own table. The two tables are given by an iterative algorithm. An image processing problem is addressed using the proposed algorithms.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Nouveau critère de séparation aveugle de sources cyclostationnaires au second ordre</title>
      <description><![CDATA[The work presented here aims to propose a model for the composition of semantic web services. This model is based on a semantic representation of the set of concepts manipulated by the web services of an application domain, namely the operations and the static concepts used to describe the properties of web services. Different levels of abstraction are given to the operation concept to allow progressive access to concrete services. Thus, two composition plans with different granularities (abstract and concrete) are generated. This allows plans already built to be reused to meet similar needs, even with modified preferences.]]></description>
      <pubDate>Mon, 04 Oct 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1929</link>
      <guid>https://doi.org/10.46298/arima.1929</guid>
      <author>Ould Mohamed, Mohamed Salem</author>
      <author>Keziou, Amor</author>
      <author>Fenniri, Hassan</author>
      <author>Delaunay, Georges</author>
      <dc:creator>Ould Mohamed, Mohamed Salem</dc:creator>
      <dc:creator>Keziou, Amor</dc:creator>
      <dc:creator>Fenniri, Hassan</dc:creator>
      <dc:creator>Delaunay, Georges</dc:creator>
      <content:encoded><![CDATA[The work presented here aims to provide a composition model for semantic web services. This model is based on a semantic representation of the domain concepts handled by the web services of an application domain, namely the operations and the static concepts used to describe the properties of web services. Different levels of abstraction are given to the operation concept to allow gradual access to concrete services. Thus, two composition plans of different granularities (abstract and concrete) are generated. This makes it possible to reuse plans already constructed to meet similar needs, even with modified preferences.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A hybrid Ant Colony Algorithm for the exam timetabling problem</title>
      <description><![CDATA[Due to increased student numbers and regulatory changes that allow educational institutions greater flexibility, operations researchers and computer scientists have renewed their interest in developing effective methods to solve the examination timetabling problem. Thus, in the intervening decades, important progress has been made on the examination timetabling problem with the emergence of adapted meta-heuristics. This paper presents a hybridization of the Ant Colony Algorithm and a Complete Local Search with Memory heuristic, in order to maximize the free time between consecutive exams for each student while respecting the conflict constraint: a student cannot sit more than one exam in the same timeslot.]]></description>
      <pubDate>Sun, 03 Oct 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1930</link>
      <guid>https://doi.org/10.46298/arima.1930</guid>
      <author>Abounacer, R.</author>
      <author>Boukachour, J.</author>
      <author>Dkhissi, B.</author>
      <author>El Hilali Alaoui, A.</author>
      <dc:creator>Abounacer, R.</dc:creator>
      <dc:creator>Boukachour, J.</dc:creator>
      <dc:creator>Dkhissi, B.</dc:creator>
      <dc:creator>El Hilali Alaoui, A.</dc:creator>
      <content:encoded><![CDATA[Due to increased student numbers and regulatory changes that allow educational institutions greater flexibility, operations researchers and computer scientists have renewed their interest in developing effective methods to solve the examination timetabling problem. Thus, in the intervening decades, important progress has been made on the examination timetabling problem with the emergence of adapted meta-heuristics. This paper presents a hybridization of the Ant Colony Algorithm and a Complete Local Search with Memory heuristic, in order to maximize the free time between consecutive exams for each student while respecting the conflict constraint: a student cannot sit more than one exam in the same timeslot.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Application of the topological gradient method to tomography</title>
      <description><![CDATA[A new method for parallel beam tomography is proposed. This method is based on the topological gradient approach. The use of the topological asymptotic analysis for detecting the main edges of the data allows us to filter the noise while inverting the Radon transform. Experimental results obtained on noisy data illustrate the efficiency of this promising approach in the case of Magnetic Resonance Imaging. We also study the sensitivity of the algorithm with respect to several regularization and weight parameters.]]></description>
      <pubDate>Wed, 29 Sep 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1939</link>
      <guid>https://doi.org/10.46298/arima.1939</guid>
      <author>Auroux, D.</author>
      <author>Jaafar-Belaid, L.</author>
      <author>Rjaibi, B.</author>
      <dc:creator>Auroux, D.</dc:creator>
      <dc:creator>Jaafar-Belaid, L.</dc:creator>
      <dc:creator>Rjaibi, B.</dc:creator>
      <content:encoded><![CDATA[A new method for parallel beam tomography is proposed. This method is based on the topological gradient approach. The use of the topological asymptotic analysis for detecting the main edges of the data allows us to filter the noise while inverting the Radon transform. Experimental results obtained on noisy data illustrate the efficiency of this promising approach in the case of Magnetic Resonance Imaging. We also study the sensitivity of the algorithm with respect to several regularization and weight parameters.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Inverse impedance boundary problem via the conformal mapping method: the case of small impedances</title>
      <description><![CDATA[Haddar and Kress [9] extended the use of the conformal mapping approach [2, 8] to reconstruct the internal boundary curve Γi of a doubly connected domain from the Cauchy data, on the external boundary, of a harmonic function satisfying a homogeneous impedance boundary condition on Γi. However, the analysis of this scheme indicates non-convergence of the proposed algorithm for small values of the impedance. In this paper, we modify the algorithm proposed in [9] in order to obtain a convergent and stable inversion process for small impedances. We illustrate the performance of the method through numerical examples, including cases of variable impedances.]]></description>
      <pubDate>Wed, 25 Aug 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1936</link>
      <guid>https://doi.org/10.46298/arima.1936</guid>
      <author>Ben Hassen, F.</author>
      <author>Boukari, Y.</author>
      <author>Haddar, H.</author>
      <dc:creator>Ben Hassen, F.</dc:creator>
      <dc:creator>Boukari, Y.</dc:creator>
      <dc:creator>Haddar, H.</dc:creator>
      <content:encoded><![CDATA[Haddar and Kress [9] extended the use of the conformal mapping approach [2, 8] to reconstruct the internal boundary curve Γi of a doubly connected domain from the Cauchy data, on the external boundary, of a harmonic function satisfying a homogeneous impedance boundary condition on Γi. However, the analysis of this scheme indicates non-convergence of the proposed algorithm for small values of the impedance. In this paper, we modify the algorithm proposed in [9] in order to obtain a convergent and stable inversion process for small impedances. We illustrate the performance of the method through numerical examples, including cases of variable impedances.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Distributed Load Balancing Model for Grid Computing</title>
      <description><![CDATA[Most existing load balancing strategies were designed for distributed systems assumed to have homogeneous resources interconnected by homogeneous, fast networks. For Grid computing, these assumptions are not realistic because of its heterogeneity, scalability and dynamicity. In these environments, load balancing is thus a new challenge that many research projects are currently addressing. In this perspective, the contribution of this paper is twofold. First, we propose a distributed load balancing model that can map any Grid topology onto a forest structure. We then develop, on this model, a two-level load balancing strategy whose principal objectives are to reduce the average response time of tasks and their transfer cost. The proposed strategy is naturally distributed, with local decision making, which makes it possible to avoid using the wide-area communication network.]]></description>
      <pubDate>Sat, 21 Aug 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1931</link>
      <guid>https://doi.org/10.46298/arima.1931</guid>
      <author>Yagoubi, Belabbas</author>
      <author>Meddeber, Meriem</author>
      <dc:creator>Yagoubi, Belabbas</dc:creator>
      <dc:creator>Meddeber, Meriem</dc:creator>
      <content:encoded><![CDATA[Most existing load balancing strategies were designed for distributed systems assumed to have homogeneous resources interconnected by homogeneous, fast networks. For Grid computing, these assumptions are not realistic because of its heterogeneity, scalability and dynamicity. In these environments, load balancing is thus a new challenge that many research projects are currently addressing. In this perspective, the contribution of this paper is twofold. First, we propose a distributed load balancing model that can map any Grid topology onto a forest structure. We then develop, on this model, a two-level load balancing strategy whose principal objectives are to reduce the average response time of tasks and their transfer cost. The proposed strategy is naturally distributed, with local decision making, which makes it possible to avoid using the wide-area communication network.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Outils d’aide à la décision pour la planification des réseaux de distribution de l’énergie électrique</title>
      <description><![CDATA[The continued growth in demand for electricity is an increasing challenge for society. It requires great efforts to optimize the decisions to be taken, especially for managing the distribution of electricity, which poses many problems, primarily due to the expansion of the network, increased power consumption and real-time management. As strengthening electrical networks is both difficult and expensive, it is necessary to choose an optimal management policy to ensure customer satisfaction, reduce costs and increase profit margins. In this work, we propose several different optimization methods to solve this problem partially or globally, allowing appropriate choices to be made.]]></description>
      <pubDate>Mon, 09 Aug 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1940</link>
      <guid>https://doi.org/10.46298/arima.1940</guid>
      <author>El Yassini, Khalid</author>
      <author>Zine, Rabie</author>
      <author>Raïssouli, Mustapha</author>
      <dc:creator>El Yassini, Khalid</dc:creator>
      <dc:creator>Zine, Rabie</dc:creator>
      <dc:creator>Raïssouli, Mustapha</dc:creator>
      <content:encoded><![CDATA[The continued growth in demand for electricity is an increasing challenge for society. It requires great efforts to optimize the decisions to be taken, especially for managing the distribution of electricity, which poses many problems, primarily due to the expansion of the network, increased power consumption and real-time management. As strengthening electrical networks is both difficult and expensive, it is necessary to choose an optimal management policy to ensure customer satisfaction, reduce costs and increase profit margins. In this work, we propose several different optimization methods to solve this problem partially or globally, allowing appropriate choices to be made.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Préface au numéro spécial de la revue ARIMA dédié à TAM-TAM'09</title>
      <description><![CDATA[Foreword to the special issue of Arima Journal dedicated to TAM-TAM'09]]></description>
      <pubDate>Sat, 07 Aug 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1932</link>
      <guid>https://doi.org/10.46298/arima.1932</guid>
      <author>Jaoua, Mohamed</author>
      <author>Philippe, Bernard</author>
      <author>Mghazali, Zoubida</author>
      <dc:creator>Jaoua, Mohamed</dc:creator>
      <dc:creator>Philippe, Bernard</dc:creator>
      <dc:creator>Mghazali, Zoubida</dc:creator>
      <content:encoded><![CDATA[Foreword to the special issue of Arima Journal dedicated to TAM-TAM'09]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Time error estimators for the Chorin-Temam scheme</title>
      <description><![CDATA[The time-dependent Stokes equations are discretized by the original projection method of Chorin [5] and Temam [15]. Following an idea of [1], we derive time error estimators for the velocity and the pressure. In particular, the velocity estimator is implemented for time-step adaptation.]]></description>
      <pubDate>Fri, 06 Aug 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1935</link>
      <guid>https://doi.org/10.46298/arima.1935</guid>
      <author>Kharrat, Nizar</author>
      <author>Mghazali, Zoubida</author>
      <dc:creator>Kharrat, Nizar</dc:creator>
      <dc:creator>Mghazali, Zoubida</dc:creator>
      <content:encoded><![CDATA[The time-dependent Stokes equations are discretized by the original projection method of Chorin [5] and Temam [15]. Following an idea of [1], we derive time error estimators for the velocity and the pressure. In particular, the velocity estimator is implemented for time-step adaptation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Preconditioned Richardson Regularization for the Data Completion Problem and the Kozlov-Maz’ya-Fomin Method</title>
      <description><![CDATA[The aim of this contribution is to use a preconditioned Richardson iterative method as a regularization of the data completion problem. The problem is known to be exponentially ill-posed, which makes its numerical treatment a hard task. The approach we present relies on the Steklov-Poincaré variational framework introduced in [Inverse Problems, vol. 21, 2005]. The resulting algorithm turns out to be equivalent to the Kozlov-Maz’ya-Fomin method in [Comp. Math. Phys., vol. 31, 1991]. We conduct a comprehensive analysis of suitable stopping rules, which provides optimal estimates under the General Source Condition on the exact solution. Some numerical examples are finally discussed to highlight the performance of the method.]]></description>
      <pubDate>Sat, 31 Jul 2010 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1934</link>
      <guid>https://doi.org/10.46298/arima.1934</guid>
      <author>Thang Du, Duc</author>
      <author>Jelassi, Faten</author>
      <dc:creator>Thang Du, Duc</dc:creator>
      <dc:creator>Jelassi, Faten</dc:creator>
      <content:encoded><![CDATA[The aim of this contribution is to use a preconditioned Richardson iterative method as a regularization of the data completion problem. The problem is known to be exponentially ill-posed, which makes its numerical treatment a hard task. The approach we present relies on the Steklov-Poincaré variational framework introduced in [Inverse Problems, vol. 21, 2005]. The resulting algorithm turns out to be equivalent to the Kozlov-Maz’ya-Fomin method in [Comp. Math. Phys., vol. 31, 1991]. We conduct a comprehensive analysis of suitable stopping rules, which provides optimal estimates under the General Source Condition on the exact solution. Some numerical examples are finally discussed to highlight the performance of the method.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Two Short Presentations related to Cancer Modeling</title>
      <description><![CDATA[This paper contains two short presentations related to the mathematical modeling of Cancer. The first part intends to introduce a tumour-immune system interaction, which describes the early dynamics of cancerous cells, competing with the immune system, potentially leading to either the elimination of tumoral cells or to the viability of a solid tumor. The second part of the paper addresses the case where a solid tumor has grown enough to initiate angiogenesis, a process which equips the tumor with its own blood network. Nash game theory is used to model the interaction between activators and inhibitors of the angiogenesis process.]]></description>
      <pubDate>Wed, 28 Oct 2009 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1919</link>
      <guid>https://doi.org/10.46298/arima.1919</guid>
      <author>Habbal, A.</author>
      <author>Jabin, P.-E.</author>
      <dc:creator>Habbal, A.</dc:creator>
      <dc:creator>Jabin, P.-E.</dc:creator>
      <content:encoded><![CDATA[This paper contains two short presentations related to the mathematical modeling of Cancer. The first part intends to introduce a tumour-immune system interaction, which describes the early dynamics of cancerous cells, competing with the immune system, potentially leading to either the elimination of tumoral cells or to the viability of a solid tumor. The second part of the paper addresses the case where a solid tumor has grown enough to initiate angiogenesis, a process which equips the tumor with its own blood network. Nash game theory is used to model the interaction between activators and inhibitors of the angiogenesis process.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Un modèle intégré pour explorer les trajectoires d'utilisation de l'espace</title>
      <description><![CDATA[Dynamic spatial models are important tools for the study of complex systems like environmental systems. This paper presents an integrated model designed to explore land use trajectories in a small region around Maroua, located in the far north of Cameroon. The model simulates competition between land use types, taking into account a set of biophysical, socio-demographic and geo-economic driving factors. The model includes three modules: the dynamic simulation module combines the results of the spatial analysis and prediction modules. Simulation results for each scenario can help identify where changes occur. The model developed constitutes an efficient knowledge support system for exploratory research and land use planning.]]></description>
      <pubDate>Tue, 29 Sep 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1961</link>
      <guid>https://doi.org/10.46298/arima.1961</guid>
      <author>Fotsing, Eric</author>
      <author>Verburg, Peter H.</author>
      <author>de Groot, Wouter T.</author>
      <author>Cheylan, Jean-Paul</author>
      <author>Tchuenté, Maurice</author>
      <dc:creator>Fotsing, Eric</dc:creator>
      <dc:creator>Verburg, Peter H.</dc:creator>
      <dc:creator>de Groot, Wouter T.</dc:creator>
      <dc:creator>Cheylan, Jean-Paul</dc:creator>
      <dc:creator>Tchuenté, Maurice</dc:creator>
      <content:encoded><![CDATA[Dynamic spatial models are important tools for the study of complex systems like environmental systems. This paper presents an integrated model designed to explore land use trajectories in a small region around Maroua, located in the far north of Cameroon. The model simulates competition between land use types, taking into account a set of biophysical, socio-demographic and geo-economic driving factors. The model includes three modules: the dynamic simulation module combines the results of the spatial analysis and prediction modules. Simulation results for each scenario can help identify where changes occur. The model developed constitutes an efficient knowledge support system for exploratory research and land use planning.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Modélisation des interactions de coopération dans la conduite d'un projet de simulation</title>
      <description><![CDATA[This study was carried out within the framework of a research project entitled «CSCW and Simulation: Toward a group-oriented platform of analysis and design of production systems». The aim of this project is to analyze cooperation practices during the conduct of a project for modeling and simulating a production system, and then to specify and develop simulation groupware with the aim of adding the group dimension to simulation tools. In this paper, we define and present a model of cooperation interactions in the conduct of a cooperative simulation project using the Denver Model. We illustrate our approach by presenting our experiment with the BSCW system to validate our model.]]></description>
      <pubDate>Thu, 24 Sep 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1920</link>
      <guid>https://doi.org/10.46298/arima.1920</guid>
      <author>Korichi, Ahmed</author>
      <author>Belattar, Brahim</author>
      <dc:creator>Korichi, Ahmed</dc:creator>
      <dc:creator>Belattar, Brahim</dc:creator>
      <content:encoded><![CDATA[This study was carried out within the framework of a research project entitled «CSCW and Simulation: Toward a group-oriented platform of analysis and design of production systems». The aim of this project is to analyze cooperation practices during the conduct of a project for modeling and simulating a production system, and then to specify and develop simulation groupware with the aim of adding the group dimension to simulation tools. In this paper, we define and present a model of cooperation interactions in the conduct of a cooperative simulation project using the Denver Model. We illustrate our approach by presenting our experiment with the BSCW system to validate our model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Typing rules OCL specification of QoS-capable ODP Computational Interfaces</title>
      <description><![CDATA[In this work we model the interaction signature concepts, as well as their related type-checking rules, in a consistent and compact manner. First, we analyze those concepts in order to draw unambiguous definitions from them. Following this analysis, we formalize those concepts by mapping them into UML language constructs. Secondly, we specify the constraints imposed on the interaction signatures of computational interfaces by the typing and subtyping rules of the computational language. We show how those rules can be redefined in order to formalize them soundly. After rewriting those rules in a compact way, we make use of OCL 2.0, which provides the means to exploit those new definitions. We then introduce the concept of a functional computational interface and a set of related concepts which unify the notions of signal and operation interfaces. Based on these additional concepts, we introduce two new important concepts, namely QoS-definable interactions and QoS-capable interfaces. We then provide a UML metamodel of interfaces and interaction signatures, the final metamodel being a first step towards a QoS-capable computational metamodel. Finally, as an application of our modeling choices, we define type-checking rules for ODP QoS-capable computational interfaces and specify them using OCL 2.0.]]></description>
      <pubDate>Wed, 23 Sep 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1922</link>
      <guid>https://doi.org/10.46298/arima.1922</guid>
      <author>Reda, Oussama</author>
      <author>El Ouahidi, Bouabid</author>
      <author>Bourget, Daniel</author>
      <dc:creator>Reda, Oussama</dc:creator>
      <dc:creator>El Ouahidi, Bouabid</dc:creator>
      <dc:creator>Bourget, Daniel</dc:creator>
      <content:encoded><![CDATA[In this work we model the interaction signature concepts, as well as their related type-checking rules, in a consistent and compact manner. First, we analyze those concepts in order to draw unambiguous definitions from them. Following this analysis, we formalize those concepts by mapping them into UML language constructs. Secondly, we specify the constraints imposed on the interaction signatures of computational interfaces by the typing and subtyping rules of the computational language. We show how those rules can be redefined in order to formalize them soundly. After rewriting those rules in a compact way, we make use of OCL 2.0, which provides the means to exploit those new definitions. We then introduce the concept of a functional computational interface and a set of related concepts which unify the notions of signal and operation interfaces. Based on these additional concepts, we introduce two new important concepts, namely QoS-definable interactions and QoS-capable interfaces. We then provide a UML metamodel of interfaces and interaction signatures, the final metamodel being a first step towards a QoS-capable computational metamodel. Finally, as an application of our modeling choices, we define type-checking rules for ODP QoS-capable computational interfaces and specify them using OCL 2.0.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Un modèle de composition des services web sémantiques</title>
      <description><![CDATA[The work presented here aims to provide a composition model for semantic web services. This model is based on a semantic representation of the domain concepts handled by web services, namely the operations and the static concepts used to describe the static properties of web services. Different levels of abstraction are given to the operation concept to allow gradual access to concrete services. Thus, two composition plans of different granularities (abstract and concrete) are generated. This makes it possible to reuse plans already constructed to meet similar needs, even with modified preferences.]]></description>
      <pubDate>Wed, 23 Sep 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1928</link>
      <guid>https://doi.org/10.46298/arima.1928</guid>
      <author>Temglit, N.</author>
      <author>Aliane, H.</author>
      <author>Ahmed Nacer, M.</author>
      <dc:creator>Temglit, N.</dc:creator>
      <dc:creator>Aliane, H.</dc:creator>
      <dc:creator>Ahmed Nacer, M.</dc:creator>
      <content:encoded><![CDATA[The work presented here aims to provide a composition model for semantic web services. This model is based on a semantic representation of the domain concepts handled by web services, namely the operations and the static concepts used to describe the static properties of web services. Different levels of abstraction are given to the operation concept to allow gradual access to concrete services. Thus, two composition plans of different granularities (abstract and concrete) are generated. This makes it possible to reuse plans already constructed to meet similar needs, even with modified preferences.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Vers une métrique de description objective d'une sensation subjective</title>
      <description><![CDATA[In the last decade, several image and video quality metrics have been proposed which incorporate perceptual quality measures based on the characteristics of the human visual system (HVS). All these metrics compute pixel-to-pixel differences in the image, so only a local fidelity of the colour is defined. However, the human visual system is rather sensitive to global quality. In this paper, we propose a new objective full-reference quality metric for colour images, called Global Delta E and denoted G̃DE. This metric is based on the properties of the human visual system in order to obtain the best correspondence with human judgments. Experiments and assessments demonstrate the performance of our metric and its correlation with the HVS.]]></description>
      <pubDate>Sun, 20 Sep 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1921</link>
      <guid>https://doi.org/10.46298/arima.1921</guid>
      <author>Ouni, Sonia</author>
      <author>Zagrouba, Ezzeddine</author>
      <author>Chambah, Majeb</author>
      <author>Herbin, Michel</author>
      <dc:creator>Ouni, Sonia</dc:creator>
      <dc:creator>Zagrouba, Ezzeddine</dc:creator>
      <dc:creator>Chambah, Majeb</dc:creator>
      <dc:creator>Herbin, Michel</dc:creator>
      <content:encoded><![CDATA[In the last decade, several image and video quality metrics have been proposed which incorporate perceptual quality measures based on the characteristics of the human visual system (HVS). All these metrics compute pixel-to-pixel differences in the image, so only a local fidelity of the colour is defined. However, the human visual system is rather sensitive to global quality. In this paper, we propose a new objective full-reference quality metric for colour images, called Global Delta E and denoted G̃DE. This metric is based on the properties of the human visual system in order to obtain the best correspondence with human judgments. Experiments and assessments demonstrate the performance of our metric and its correlation with the HVS.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Les Composants Logiciels et La Séparation Avancée des Préoccupations : Vers une nouvelle Approche de Combinaison</title>
      <description><![CDATA[The component-based approach and advanced separation of concerns are two important paradigms for software systems development. Although the two paradigms are complementary and seeking their synergy is a promising issue, only a few research works are currently dedicated to their combination. This paper presents a comparative state of the art of the main research works that aim at the synergy of the two paradigms and proposes a new approach to their combination, which derives from the fact that all the potential contributions that can be drawn are tightly related to the proper handling of aspects.]]></description>
      <pubDate>Tue, 15 Sep 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1924</link>
      <guid>https://doi.org/10.46298/arima.1924</guid>
      <author>Hariati, Mehdi</author>
      <author>Meslati, Djamel</author>
      <dc:creator>Hariati, Mehdi</dc:creator>
      <dc:creator>Meslati, Djamel</dc:creator>
      <content:encoded><![CDATA[The component-based approach and advanced separation of concerns are two important paradigms for software systems development. Although the two paradigms are complementary and seeking their synergy is a promising issue, only a few research works are currently dedicated to their combination. This paper presents a comparative state of the art of the main research works that aim at the synergy of the two paradigms and proposes a new approach to their combination, which derives from the fact that all the potential contributions that can be drawn are tightly related to the proper handling of aspects.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Proposition pour l’intégration des réseaux petits mondes en recherche d’information</title>
      <description><![CDATA[We propose in this paper an approach for document clustering. It consists of representing the corpus as a document graph, where links are defined by certain criteria and quantified by similarity measures. We combine this with a classification approach to build small-world networks of homogeneous documents. The homogeneity of the clusters is measured according to the small-world properties. The clusters, as well as their properties, allow search results to be reranked. Experiments were carried out on a corpus provided by TREC, and the results obtained show the contribution of small-world networks to information retrieval.]]></description>
      <pubDate>Tue, 08 Sep 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1925</link>
      <guid>https://doi.org/10.46298/arima.1925</guid>
      <author>Khazri, Mohamed</author>
      <author>Tmar, Mohamed</author>
      <author>Abid, Mohamed</author>
      <author>Boughanem, Mohand</author>
      <dc:creator>Khazri, Mohamed</dc:creator>
      <dc:creator>Tmar, Mohamed</dc:creator>
      <dc:creator>Abid, Mohamed</dc:creator>
      <dc:creator>Boughanem, Mohand</dc:creator>
      <content:encoded><![CDATA[We propose in this paper an approach for document clustering. It consists of representing the corpus as a document graph, where the links are defined by some criteria. These links are quantified by similarity measures. We aim to integrate this context into the classification approach in order to constitute small-world networks of homogeneous documents. The homogeneity of the clusters is measured according to the properties of small worlds. The clusters, as well as their properties, allow reranking of search results. Some experiments were carried out on a corpus provided by TREC, and the obtained results show the contribution of small-world networks to information retrieval.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Ordonnancement de tâches hiérarchiques interdépendantes sous des exigences temporelles et objectif d’efficacité</title>
      <description><![CDATA[The introduction of high-performance applications, such as multimedia applications, into embedded systems has led manufacturers to offer embedded platforms providing the significant computing power needed to meet the increasing requirements of future evolutions of these applications. One of the adopted solutions is the use of multiprocessor platforms. In this paper we propose an exploratory methodology for scheduling software modelled as hierarchical, collaborative, and interdependent tasks with local deadlines. The scheduling (1) meets the local temporal requirements of the tasks, and (2) allows an effective exploitation of multiprocessor platforms.]]></description>
      <pubDate>Tue, 11 Aug 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1926</link>
      <guid>https://doi.org/10.46298/arima.1926</guid>
      <author>Assayad, Ismail</author>
      <dc:creator>Assayad, Ismail</dc:creator>
      <content:encoded><![CDATA[The introduction of high-performance applications, such as multimedia applications, into embedded systems has led manufacturers to offer embedded platforms providing the significant computing power needed to meet the increasing requirements of future evolutions of these applications. One of the adopted solutions is the use of multiprocessor platforms. In this paper we propose an exploratory methodology for scheduling software modelled as hierarchical, collaborative, and interdependent tasks with local deadlines. The scheduling (1) meets the local temporal requirements of the tasks, and (2) allows an effective exploitation of multiprocessor platforms.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Nouvelle approche d’accélération du codage fractal d’images</title>
      <description><![CDATA[Fractal image compression has the advantages of fast decoding and resolution independence, but it suffers from a slow encoding phase. In the present study, we propose to reduce the computational complexity by using two domain pools instead of one and encoding an image in two steps (the AP2D approach). AP2D can be applied to classification methods or domain-pool reduction methods, leading to a further reduction of the encoding time. Indeed, experimental results showed that AP2D speeds up the encoding: the time reduction obtained exceeded 65% when AP2D was applied to Fisher classification and 72% when it was applied to exhaustive search. The image quality was not altered by this approach, while the compression ratio was slightly enhanced.]]></description>
      <pubDate>Fri, 07 Aug 2009 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1927</link>
      <guid>https://doi.org/10.46298/arima.1927</guid>
      <author>Douda, Sofia</author>
      <author>El Imrani, Abdelhakim</author>
      <author>Limouri, Mohammed</author>
      <dc:creator>Douda, Sofia</dc:creator>
      <dc:creator>El Imrani, Abdelhakim</dc:creator>
      <dc:creator>Limouri, Mohammed</dc:creator>
      <content:encoded><![CDATA[Fractal image compression has the advantages of fast decoding and resolution independence, but it suffers from a slow encoding phase. In the present study, we propose to reduce the computational complexity by using two domain pools instead of one and encoding an image in two steps (the AP2D approach). AP2D can be applied to classification methods or domain-pool reduction methods, leading to a further reduction of the encoding time. Indeed, experimental results showed that AP2D speeds up the encoding: the time reduction obtained exceeded 65% when AP2D was applied to Fisher classification and 72% when it was applied to exhaustive search. The image quality was not altered by this approach, while the compression ratio was slightly enhanced.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Méthodes MCMC en interaction pour l'évaluation de ressources naturelles</title>
      <description><![CDATA[Markov chain Monte Carlo (MCMC) methods, together with hidden Markov models, are extensively used for Bayesian inference in many scientific fields such as environmental science and ecology. Through simulated examples we show that the speed of convergence of these methods can be very low. In order to improve the convergence properties, we propose a method for making parallel chains interact. We apply this method to a biomass evolution model for fisheries.]]></description>
      <pubDate>Sat, 29 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1887</link>
      <guid>https://doi.org/10.46298/arima.1887</guid>
      <author>Campillo, Fabien</author>
      <author>Cantet, Philippe</author>
      <author>Rakotozafy, Rivo</author>
      <author>Rossi, Vivien</author>
      <dc:creator>Campillo, Fabien</dc:creator>
      <dc:creator>Cantet, Philippe</dc:creator>
      <dc:creator>Rakotozafy, Rivo</dc:creator>
      <dc:creator>Rossi, Vivien</dc:creator>
      <content:encoded><![CDATA[Markov chain Monte Carlo (MCMC) methods, together with hidden Markov models, are extensively used for Bayesian inference in many scientific fields such as environmental science and ecology. Through simulated examples we show that the speed of convergence of these methods can be very low. In order to improve the convergence properties, we propose a method for making parallel chains interact. We apply this method to a biomass evolution model for fisheries.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Continuous branching processes : the discrete hidden in the continuous : Dedicated to Claude Lobry</title>
      <description><![CDATA[Feller diffusion is a continuous branching process. The branching property tells us that for fixed t > 0, when indexed by the initial condition, it is a subordinator (i.e. a positive-valued Lévy process), which in fact is a compound Poisson process. The number of points of this Poisson process can be interpreted as the number of individuals whose progeny survives during a number of generations of the order of t × N, where N denotes the size of the population, in the limit N → ∞. This fact follows from recent results of Bertoin, Fontbona, and Martinez [1]. We compare them with older results of O'Connell [7] and [8]. We believe that this comparison is useful for a better understanding of these results. There is no new result in this presentation.]]></description>
      <pubDate>Mon, 24 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1899</link>
      <guid>https://doi.org/10.46298/arima.1899</guid>
      <author>Pardoux, Etienne</author>
      <dc:creator>Pardoux, Etienne</dc:creator>
      <content:encoded><![CDATA[Feller diffusion is a continuous branching process. The branching property tells us that for fixed t > 0, when indexed by the initial condition, it is a subordinator (i.e. a positive-valued Lévy process), which in fact is a compound Poisson process. The number of points of this Poisson process can be interpreted as the number of individuals whose progeny survives during a number of generations of the order of t × N, where N denotes the size of the population, in the limit N → ∞. This fact follows from recent results of Bertoin, Fontbona, and Martinez [1]. We compare them with older results of O'Connell [7] and [8]. We believe that this comparison is useful for a better understanding of these results. There is no new result in this presentation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Sur le retard à la bifurcation</title>
      <description><![CDATA[We give a non-exhaustive overview of the problem of bifurcation delay, from its appearance in France at the end of the 1980s to the most recent contributions. We present bifurcation delay for differential equations as well as for discrete dynamical systems.]]></description>
      <pubDate>Fri, 21 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1909</link>
      <guid>https://doi.org/10.46298/arima.1909</guid>
      <author>Fruchard, Augustin</author>
      <author>Schäfke, Reinhard</author>
      <dc:creator>Fruchard, Augustin</dc:creator>
      <dc:creator>Schäfke, Reinhard</dc:creator>
      <content:encoded><![CDATA[We give a non-exhaustive overview of the problem of bifurcation delay, from its appearance in France at the end of the 1980s to the most recent contributions. We present bifurcation delay for differential equations as well as for discrete dynamical systems.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On a 1D-Shallow Water Model: Existence of solution and numerical simulations</title>
      <description><![CDATA[The study of a 1D shallow-water model, obtained in a height-flow formulation, is presented. It takes viscosity into account and can be used for flood prediction in rivers. For a linearized system, the existence and uniqueness of a global solution are proved. Finally, various numerical results are presented regarding the linear and nonlinear cases.]]></description>
      <pubDate>Wed, 19 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1901</link>
      <guid>https://doi.org/10.46298/arima.1901</guid>
      <author>Besson, Olivier</author>
      <author>Kane, Soulèye</author>
      <author>Sy, Mamadou</author>
      <dc:creator>Besson, Olivier</dc:creator>
      <dc:creator>Kane, Soulèye</dc:creator>
      <dc:creator>Sy, Mamadou</dc:creator>
      <content:encoded><![CDATA[The study of a 1D shallow-water model, obtained in a height-flow formulation, is presented. It takes viscosity into account and can be used for flood prediction in rivers. For a linearized system, the existence and uniqueness of a global solution are proved. Finally, various numerical results are presented regarding the linear and nonlinear cases.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Computational probability modeling and Bayesian inference</title>
      <description><![CDATA[Computational probabilistic modeling and Bayesian inference have met with great success over the past fifteen years through the development of Monte Carlo methods and the ever-increasing performance of computers. Through methods such as Markov chain Monte Carlo and sequential Monte Carlo, Bayesian inference combines effectively with Markovian modelling. This approach has been very successful in ecology and agronomy. We analyze the development of this approach as applied to a few examples of natural resources management.]]></description>
      <pubDate>Sat, 08 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1917</link>
      <guid>https://doi.org/10.46298/arima.1917</guid>
      <author>Campillo, Fabien</author>
      <author>Rakotozafy, Rivo</author>
      <author>Rossi, Vivien</author>
      <dc:creator>Campillo, Fabien</dc:creator>
      <dc:creator>Rakotozafy, Rivo</dc:creator>
      <dc:creator>Rossi, Vivien</dc:creator>
      <content:encoded><![CDATA[Computational probabilistic modeling and Bayesian inference have met with great success over the past fifteen years through the development of Monte Carlo methods and the ever-increasing performance of computers. Through methods such as Markov chain Monte Carlo and sequential Monte Carlo, Bayesian inference combines effectively with Markovian modelling. This approach has been very successful in ecology and agronomy. We analyze the development of this approach as applied to a few examples of natural resources management.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Une approche pour la modèlisation et le contrôle des instabilités de combustion</title>
      <description><![CDATA[A set of two coupled generalized Van der Pol equations is proposed as a control model for combustion instabilities. The system is analyzed using the Krylov-Bogoliubov method. The control aspects related to quenching of the oscillations are examined. The analysis results are compared with simulation results.]]></description>
      <pubDate>Sat, 08 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1914</link>
      <guid>https://doi.org/10.46298/arima.1914</guid>
      <author>Landau, Ioan Doré</author>
      <author>Bouziani, Fethi</author>
      <author>Bitmead, Robert R.</author>
      <dc:creator>Landau, Ioan Doré</dc:creator>
      <dc:creator>Bouziani, Fethi</dc:creator>
      <dc:creator>Bitmead, Robert R.</dc:creator>
      <content:encoded><![CDATA[A set of two coupled generalized Van der Pol equations is proposed as a control model for combustion instabilities. The system is analyzed using the Krylov-Bogoliubov method. The control aspects related to quenching of the oscillations are examined. The analysis results are compared with simulation results.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Projections et cohérence de vues dans les grammaires algébriques</title>
      <description><![CDATA[A complex structured document is intentionally represented as a tree decorated with attributes. The set of legal structures is given by an abstract context-free grammar. We leave the attributes aside; they relate to semantic issues that can be treated independently of the purely structural aspects addressed in this article. This intentional representation may be manipulated asynchronously by a set of independent tools, each of which operates on a distinct partial view of the whole structure. In order to synchronize these various partial views, we are faced with the problem of their coherence: can we decide whether there exists some global structure corresponding to a given set of partial views and, in the affirmative, can we produce such a global structure? We solve this problem in the case where a view is given by a subset of grammatical symbols, those associated with the so-called visible syntactical categories. The proposed algorithm, which strongly relies on the mechanism of lazy evaluation, produces an answer to this problem even when the partial views correspond to an infinite set of related global structures.]]></description>
      <pubDate>Wed, 05 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1885</link>
      <guid>https://doi.org/10.46298/arima.1885</guid>
      <author>Badouel, Eric</author>
      <author>Tchoupé Tchendji, Maurice</author>
      <dc:creator>Badouel, Eric</dc:creator>
      <dc:creator>Tchoupé Tchendji, Maurice</dc:creator>
      <content:encoded><![CDATA[A complex structured document is intentionally represented as a tree decorated with attributes. The set of legal structures is given by an abstract context-free grammar. We leave the attributes aside; they relate to semantic issues that can be treated independently of the purely structural aspects addressed in this article. This intentional representation may be manipulated asynchronously by a set of independent tools, each of which operates on a distinct partial view of the whole structure. In order to synchronize these various partial views, we are faced with the problem of their coherence: can we decide whether there exists some global structure corresponding to a given set of partial views and, in the affirmative, can we produce such a global structure? We solve this problem in the case where a view is given by a subset of grammatical symbols, those associated with the so-called visible syntactical categories. The proposed algorithm, which strongly relies on the mechanism of lazy evaluation, produces an answer to this problem even when the partial views correspond to an infinite set of related global structures.]]></description>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The peaking phenomenon and singular perturbations</title>
      <description><![CDATA[We study the asymptotic behaviour, as the parameter ε tends to 0, of a class of singularly perturbed triangular systems ẋ = f(x, y), ẏ = G(y, ε). We assume that all solutions of the second equation tend to zero arbitrarily fast as ε tends to 0. We assume that the origin of the equation ẋ = f(x, 0) is globally asymptotically stable. Some states of the second equation may peak to very large values before they rapidly decay to zero. Such peaking states can destabilize the first equation. The paper introduces the concept of instantaneous stability, to measure the fast decay to zero of the solutions of the second equation, and the concept of uniform infinitesimal boundedness, to measure the effects of peaking on the first equation. We show that all the solutions of the triangular system tend to zero as ε → 0 and t → +∞. Our results are formulated in both classical mathematics and nonstandard analysis.]]></description>
      <pubDate>Tue, 04 Nov 2008 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1910</link>
      <guid>https://doi.org/10.46298/arima.1910</guid>
      <author>Lobry, Claude</author>
      <author>Sari, Tewfik</author>
      <dc:creator>Lobry, Claude</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <content:encoded><![CDATA[We study the asymptotic behaviour, as the parameter ε tends to 0, of a class of singularly perturbed triangular systems ẋ = f(x, y), ẏ = G(y, ε). We assume that all solutions of the second equation tend to zero arbitrarily fast as ε tends to 0. We assume that the origin of the equation ẋ = f(x, 0) is globally asymptotically stable. Some states of the second equation may peak to very large values before they rapidly decay to zero. Such peaking states can destabilize the first equation. The paper introduces the concept of instantaneous stability, to measure the fast decay to zero of the solutions of the second equation, and the concept of uniform infinitesimal boundedness, to measure the effects of peaking on the first equation. We show that all the solutions of the triangular system tend to zero as ε → 0 and t → +∞. Our results are formulated in both classical mathematics and nonstandard analysis.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Questioning the signal to noise ratio in digital communications</title>
      <description><![CDATA[The signal to noise ratio, which plays such an important rôle in information theory, is shown to become pointless for digital communications where the demodulation is achieved via new fast estimation techniques. Operational calculus, differential algebra, noncommutative algebra and nonstandard analysis are the main mathematical tools.]]></description>
      <pubDate>Tue, 21 Oct 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1908</link>
      <guid>https://doi.org/10.46298/arima.1908</guid>
      <author>Fliess, Michel</author>
      <dc:creator>Fliess, Michel</dc:creator>
      <content:encoded><![CDATA[The signal to noise ratio, which plays such an important rôle in information theory, is shown to become pointless for digital communications where the demodulation is achieved via new fast estimation techniques. Operational calculus, differential algebra, noncommutative algebra and nonstandard analysis are the main mathematical tools.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Canard solutions and bifurcations in smooth models of plane structure variable systems</title>
      <description><![CDATA[Systems that operate in different modes with quick transition are usually studied through discontinuous systems. We give a model of a smoothing of the transition between two vector fields along a separation line, allowing perturbations of the vector fields and of the separation line. In this model there appears a canard phenomenon in certain macroscopically indeterminate situations. This phenomenon gives a new point of view on some situations usually studied through discontinuous bifurcations. We also study the dynamics near the transition line through an associated slow-fast system and compare the slow dynamics with the classical theory, namely, sliding mode dynamics in variable structure systems and equivalent control.]]></description>
      <pubDate>Wed, 15 Oct 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1915</link>
      <guid>https://doi.org/10.46298/arima.1915</guid>
      <author>Albuquerque, Luis Gonzaga</author>
      <dc:creator>Albuquerque, Luis Gonzaga</dc:creator>
      <content:encoded><![CDATA[Systems that operate in different modes with quick transition are usually studied through discontinuous systems. We give a model of a smoothing of the transition between two vector fields along a separation line, allowing perturbations of the vector fields and of the separation line. In this model there appears a canard phenomenon in certain macroscopically indeterminate situations. This phenomenon gives a new point of view on some situations usually studied through discontinuous bifurcations. We also study the dynamics near the transition line through an associated slow-fast system and compare the slow dynamics with the classical theory, namely, sliding mode dynamics in variable structure systems and equivalent control.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Uncoupling Isaacs' equations in nonzero-sum two-player differential games: The example of conflict over parental care</title>
      <description><![CDATA[We use a recently uncovered decoupling of the Isaacs PDEs of some mixed closed-loop Nash equilibria to give a rather complete analysis of the classical problem of conflict over parental care in behavioural ecology, in a more general setup than had been considered heretofore.]]></description>
      <pubDate>Wed, 08 Oct 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1896</link>
      <guid>https://doi.org/10.46298/arima.1896</guid>
      <author>Hamelin, Frédéric</author>
      <author>Bernhard, Pierre</author>
      <dc:creator>Hamelin, Frédéric</dc:creator>
      <dc:creator>Bernhard, Pierre</dc:creator>
      <content:encoded><![CDATA[We use a recently uncovered decoupling of the Isaacs PDEs of some mixed closed-loop Nash equilibria to give a rather complete analysis of the classical problem of conflict over parental care in behavioural ecology, in a more general setup than had been considered heretofore.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Théorie générale d’équation de type hyperbolique-parabolique non linéaire</title>
      <description><![CDATA[We develop a general theory for degenerate hyperbolic-parabolic type problems using semigroup theory in Banach spaces. We establish existence and uniqueness results and continuous dependence on the data for mild solutions. Similar results are developed for weak solutions of entropy type, and the existence of solutions is studied.]]></description>
      <pubDate>Sat, 04 Oct 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1906</link>
      <guid>https://doi.org/10.46298/arima.1906</guid>
      <author>Touré, Hamidou</author>
      <dc:creator>Touré, Hamidou</dc:creator>
      <content:encoded><![CDATA[We develop a general theory for degenerate hyperbolic-parabolic type problems using semigroup theory in Banach spaces. We establish existence and uniqueness results and continuous dependence on the data for mild solutions. Similar results are developed for weak solutions of entropy type, and the existence of solutions is studied.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>La méthode des élucidations successives</title>
      <description><![CDATA[In the process of elaborating a model, one emphasizes the necessity of confronting the model with the reality it is supposed to represent. There is another aspect of the modelling process, in my opinion also essential, which is usually not spoken about. It consists of logico-linguistic work in which formal models are used to produce predictions that are not confronted with reality but serve to falsify assertions that nevertheless seemed to be derived from the non-formalized model. More precisely, a first informal model is described in natural language and, considered in natural language, seems to say something, but in a more or less clear way. We then translate the informal model into a formal model (a mathematical or computer model) in which what was argumentation becomes demonstration. The formal model thus serves to resolve ambiguities of the natural language. But conversely, an overly formalized text quickly loses any sense for a human brain, which makes the return to a less formal language necessary. It is these successive "translations" between more or less formal languages that I try to analyze on two examples, the first in population dynamics, the second in mathematics.]]></description>
      <pubDate>Mon, 29 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1897</link>
      <guid>https://doi.org/10.46298/arima.1897</guid>
      <author>Lobry, Claude</author>
      <dc:creator>Lobry, Claude</dc:creator>
      <content:encoded><![CDATA[In the process of elaborating a model, one emphasizes the necessity of confronting the model with the reality it is supposed to represent. There is another aspect of the modelling process, in my opinion also essential, which is usually not spoken about. It consists of logico-linguistic work in which formal models are used to produce predictions that are not confronted with reality but serve to falsify assertions that nevertheless seemed to be derived from the non-formalized model. More precisely, a first informal model is described in natural language and, considered in natural language, seems to say something, but in a more or less clear way. We then translate the informal model into a formal model (a mathematical or computer model) in which what was argumentation becomes demonstration. The formal model thus serves to resolve ambiguities of the natural language. But conversely, an overly formalized text quickly loses any sense for a human brain, which makes the return to a less formal language necessary. It is these successive "translations" between more or less formal languages that I try to analyze on two examples, the first in population dynamics, the second in mathematics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Ondes locales dans les milieux hétérogènes. Aspects numériques</title>
      <description><![CDATA[When a medium has a lower wave speed than the media surrounding it, a mechanism of localization of vibrational energy can appear. This comes from the fact that a larger part of the energy is reflected on the side of the softer medium. The transition, however, leads to very strong interface stresses, which are liable to create local damage. Our aim is to propose a strategy for evaluating the energy of these overstresses, in the form of a dynamic release rate of the total energy of the system, that does not require a very precise computation in the neighbourhood of the interface.]]></description>
      <pubDate>Mon, 29 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1888</link>
      <guid>https://doi.org/10.46298/arima.1888</guid>
      <author>Destuynder, Philippe</author>
      <author>Wilk, Olivier</author>
      <dc:creator>Destuynder, Philippe</dc:creator>
      <dc:creator>Wilk, Olivier</dc:creator>
      <content:encoded><![CDATA[When a medium has a lower wave speed than the media surrounding it, a mechanism of localization of vibrational energy can appear. This comes from the fact that a larger part of the energy is reflected on the side of the softer medium. The transition, however, leads to very strong interface stresses, which are liable to create local damage. Our aim is to propose a strategy for evaluating the energy of these overstresses, in the form of a dynamic release rate of the total energy of the system, that does not require a very precise computation in the neighbourhood of the interface.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Generic analysis of the response of calcifying microalgae to an elevation of pCO2 : qualitative vs quantitative analysis</title>
      <description><![CDATA[Calcifying microalgae can play a key role in atmospheric CO2 trapping through large-scale precipitation of calcium carbonate in the oceans. However, recent experiments revealed that the associated fluxes may be slowed down by an increase in the atmospheric CO2 concentration. In this paper we design models to account for the decrease in calcification and photosynthesis rates observed after an increase of pCO2 in Emiliania huxleyi chemostat cultures. Since the involved mechanisms are still not completely understood, we consider various models, each based on a different hypothesis. These models are kept at a very general level by maintaining the growth and calcification functions in a generic form, i.e. independent of the exact shape of these functions and of the parameter values. The analysis is thus performed using these generic functions, the only hypothesis being an increase of these rates with respect to the regulating carbon species. As a result, each model responds differently to a pCO2 elevation. Surprisingly, the only models whose behaviour is in agreement with the experimental results correspond to carbonate as the regulating species for photosynthesis. Finally, we show that the models whose qualitative behaviour is wrong could be considered acceptable on the basis of a quantitative prediction-error criterion.]]></description>
      <pubDate>Thu, 25 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1892</link>
      <guid>https://doi.org/10.46298/arima.1892</guid>
      <author>Bernard, Olivier</author>
      <author>Sciandra, Antoine</author>
      <dc:creator>Bernard, Olivier</dc:creator>
      <dc:creator>Sciandra, Antoine</dc:creator>
      <content:encoded><![CDATA[Calcifying microalgae can play a key role in atmospheric CO2 trapping through large-scale precipitation of calcium carbonate in the oceans. However, recent experiments revealed that the associated fluxes may be slowed down by an increase in the atmospheric CO2 concentration. In this paper we design models to account for the decrease in calcification and photosynthesis rates observed after an increase of pCO2 in Emiliania huxleyi chemostat cultures. Since the involved mechanisms are still not completely understood, we consider various models, each based on a different hypothesis. These models are kept at a very general level by maintaining the growth and calcification functions in a generic form, i.e. independent of the exact shape of these functions and of the parameter values. The analysis is thus performed using these generic functions, the only hypothesis being an increase of these rates with respect to the regulating carbon species. As a result, each model responds differently to a pCO2 elevation. Surprisingly, the only models whose behaviour is in agreement with the experimental results correspond to carbonate as the regulating species for photosynthesis. Finally, we show that the models whose qualitative behaviour is wrong could be considered acceptable on the basis of a quantitative prediction-error criterion.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Observer design for a fish population model</title>
      <description><![CDATA[Our aim is to apply some tools of control theory to fish population systems. In this paper we construct a nonlinear observer for a continuous stage-structured model of an exploited fish population, using the fishing effort as a control term, the age classes as the states and the quantity of captured fish as a measured output. Under some biologically satisfied assumptions we formulate the observer corresponding to this system and show its exponential convergence. Using the Lie derivative transformation, we show that the model can be transformed into a canonical observable form; we then give the explicit gain of the estimator.]]></description>
      <pubDate>Tue, 23 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1886</link>
      <guid>https://doi.org/10.46298/arima.1886</guid>
      <author>El Mazoudi, El Houssine</author>
      <author>Mrabti, Mostafa</author>
      <author>Elalami, Noureddine</author>
      <dc:creator>El Mazoudi, El Houssine</dc:creator>
      <dc:creator>Mrabti, Mostafa</dc:creator>
      <dc:creator>Elalami, Noureddine</dc:creator>
      <content:encoded><![CDATA[Our aim is to apply some tools of control theory to fish population systems. In this paper we construct a nonlinear observer for a continuous stage-structured model of an exploited fish population, using the fishing effort as a control term, the age classes as the states and the quantity of captured fish as a measured output. Under some biologically satisfied assumptions we formulate the observer corresponding to this system and show its exponential convergence. Using the Lie derivative transformation, we show that the model can be transformed into a canonical observable form; we then give the explicit gain of the estimator.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Quantum systems and control 1</title>
      <description><![CDATA[This paper describes several methods used by physicists for the manipulation of quantum states. For each method, we explain the model, the various time-scales and the approximations performed, and we propose an interpretation in terms of control theory. These various interpretations underlie open questions on controllability, feedback and estimation. For 2-level systems we consider: the Rabi oscillations in connection with averaging; the Bloch-Siegert corrections associated with the second-order terms; controllability versus parametric robustness of open-loop control, and an interesting controllability problem in infinite dimension with continuous spectra. For 3-level systems we consider: Raman pulses and the second-order terms. For spin/spring systems we consider: composite systems made of 2-level sub-systems coupled to quantized harmonic oscillators; multi-frequency averaging in infinite dimension; controllability of a 1D partial differential equation of Schrödinger type, affine in the control; motion planning for quantum gates. For open quantum systems subject to decoherence with continuous measurements we consider: quantum trajectories and jump processes for a 2-level system; the Lindblad-Kossakowski equations and their controllability.]]></description>
      <pubDate>Sun, 21 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1904</link>
      <guid>https://doi.org/10.46298/arima.1904</guid>
      <author>Rouchon, Pierre</author>
      <dc:creator>Rouchon, Pierre</dc:creator>
      <content:encoded><![CDATA[This paper describes several methods used by physicists for the manipulation of quantum states. For each method, we explain the model, the various time-scales and the approximations performed, and we propose an interpretation in terms of control theory. These various interpretations underlie open questions on controllability, feedback and estimation. For 2-level systems we consider: the Rabi oscillations in connection with averaging; the Bloch-Siegert corrections associated with the second-order terms; controllability versus parametric robustness of open-loop control, and an interesting controllability problem in infinite dimension with continuous spectra. For 3-level systems we consider: Raman pulses and the second-order terms. For spin/spring systems we consider: composite systems made of 2-level sub-systems coupled to quantized harmonic oscillators; multi-frequency averaging in infinite dimension; controllability of a 1D partial differential equation of Schrödinger type, affine in the control; motion planning for quantum gates. For open quantum systems subject to decoherence with continuous measurements we consider: quantum trajectories and jump processes for a 2-level system; the Lindblad-Kossakowski equations and their controllability.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Stabilité Lp exponentielle d’un système d’échangeurs thermiques avec diffusion et sans diffusion</title>
      <description><![CDATA[In this paper we study the exponential stability of a heat exchanger system with and without diffusion in the context of Banach spaces. The heat exchanger system is governed by hyperbolic partial differential equations (PDEs) or by parabolic PDEs, respectively, depending on whether the diffusion in the heat exchange is ignored or not. The exponential stability of the model with diffusion in the Banach space (C[0,1])^4 is deduced by establishing the exponential Lp stability of the considered system and using sectorial operator theory. The exponential decay rate is also computed for the model with diffusion. Using perturbation theory, we establish the exponential stability of the model without diffusion in the Banach space (C[0,1])^4 with the uniform topology. However, the exponential decay rate without diffusion is not computed exactly, since the associated semigroup is not analytic. The purpose of our paper is thus to investigate the exponential stability of a heat exchanger system with and without diffusion in the real Banach space X∞ = (C[0,1])^4 equipped with the uniform norm. The exponential stability of these two models in the Hilbert space X2 = (L^2(0,1))^4 has been proved in [31] by using Lyapunov's direct method. The first step consists in studying the stability problem in the real Banach space Xp = (L^p(0,1))^4 equipped with the usual Lp norm, p > 1. By passing to the limit (p → ∞) we can extend some exponential stability results from Xp = (L^p(0,1))^4 to the space X∞ = (C[0,1])^4. In particular, the dissipativity of the system in all the Xp spaces implies its dissipativity in X∞ (see Lemma 3). Section 1 recalls the heat exchanger models. The process with diffusion is governed by a system of parabolic PDEs, and the process without diffusion is described by degenerate hyperbolic PDEs of first order. 
Section 2 deals with the exponential stability of the parabolic system in the Lebesgue spaces L^p(0,1), 1 < p < ∞. Certain results can be extended to the X∞ space. Unfortunately this study does not allow us to deduce the expected stability of the system in X∞. In Section 3, sectorial operator theory is used to obtain exponential stability results for the model with diffusion in Xp. Specifically, the theory enables us to determine the exponential decay rate in (C[0,1])^4 by computing the spectral bound. In Section 4, using a perturbation technique, we show the exponential stability of the model without diffusion in all Xp spaces, 1 < p < ∞. We then take the limit, as p goes to ∞, to deduce the exponential stability of the system in the Banach space X∞. We call the heat exchanger model with diffusion taken into account the diffusion model, and the heat exchanger without diffusion the convection model, respectively. We use the analyticity of the semigroup associated with the diffusion model to determine its exponential decay rate. However, the semigroup associated with the convection model is not analytic; in the latter case we have not yet found an efficient method to compute the exponential decay rate exactly. The main tools we use in our investigation are the notion of dissipativity in Banach spaces, specifically in the Lp spaces, and sectorial operator theory. As the reader will see, our work presents some extensions of Lyapunov's direct method to the context of Banach spaces. We denote the system operator associated with the diffusion model by Ad,p, and that of the convection model by Ac,p, respectively. The index p indicates the Lp(0,1) space in which the system evolves and in which the operator Ad,p or Ac,p is considered. Thus Ad,p (resp. Ac,p) denotes the diffusive (resp. convective) operator in the Xp space.]]></description>
      <pubDate>Tue, 16 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1905</link>
      <guid>https://doi.org/10.46298/arima.1905</guid>
      <author>Li, Chen-Zhong</author>
      <author>Tchousso, Abdoua</author>
      <author>Li, Xiao-Dong</author>
      <author>Sallet, Gauthier</author>
      <dc:creator>Li, Chen-Zhong</dc:creator>
      <dc:creator>Tchousso, Abdoua</dc:creator>
      <dc:creator>Li, Xiao-Dong</dc:creator>
      <dc:creator>Sallet, Gauthier</dc:creator>
      <content:encoded><![CDATA[In this paper we study the exponential stability of a heat exchanger system with and without diffusion in the context of Banach spaces. The heat exchanger system is governed by hyperbolic partial differential equations (PDEs) or by parabolic PDEs, respectively, depending on whether the diffusion in the heat exchange is ignored or not. The exponential stability of the model with diffusion in the Banach space (C[0,1])^4 is deduced by establishing the exponential Lp stability of the considered system and using sectorial operator theory. The exponential decay rate is also computed for the model with diffusion. Using perturbation theory, we establish the exponential stability of the model without diffusion in the Banach space (C[0,1])^4 with the uniform topology. However, the exponential decay rate without diffusion is not computed exactly, since the associated semigroup is not analytic. The purpose of our paper is thus to investigate the exponential stability of a heat exchanger system with and without diffusion in the real Banach space X∞ = (C[0,1])^4 equipped with the uniform norm. The exponential stability of these two models in the Hilbert space X2 = (L^2(0,1))^4 has been proved in [31] by using Lyapunov's direct method. The first step consists in studying the stability problem in the real Banach space Xp = (L^p(0,1))^4 equipped with the usual Lp norm, p > 1. By passing to the limit (p → ∞) we can extend some exponential stability results from Xp = (L^p(0,1))^4 to the space X∞ = (C[0,1])^4. In particular, the dissipativity of the system in all the Xp spaces implies its dissipativity in X∞ (see Lemma 3). Section 1 recalls the heat exchanger models. The process with diffusion is governed by a system of parabolic PDEs, and the process without diffusion is described by degenerate hyperbolic PDEs of first order. 
Section 2 deals with the exponential stability of the parabolic system in the Lebesgue spaces L^p(0,1), 1 < p < ∞. Certain results can be extended to the X∞ space. Unfortunately this study does not allow us to deduce the expected stability of the system in X∞. In Section 3, sectorial operator theory is used to obtain exponential stability results for the model with diffusion in Xp. Specifically, the theory enables us to determine the exponential decay rate in (C[0,1])^4 by computing the spectral bound. In Section 4, using a perturbation technique, we show the exponential stability of the model without diffusion in all Xp spaces, 1 < p < ∞. We then take the limit, as p goes to ∞, to deduce the exponential stability of the system in the Banach space X∞. We call the heat exchanger model with diffusion taken into account the diffusion model, and the heat exchanger without diffusion the convection model, respectively. We use the analyticity of the semigroup associated with the diffusion model to determine its exponential decay rate. However, the semigroup associated with the convection model is not analytic; in the latter case we have not yet found an efficient method to compute the exponential decay rate exactly. The main tools we use in our investigation are the notion of dissipativity in Banach spaces, specifically in the Lp spaces, and sectorial operator theory. As the reader will see, our work presents some extensions of Lyapunov's direct method to the context of Banach spaces. We denote the system operator associated with the diffusion model by Ad,p, and that of the convection model by Ac,p, respectively. The index p indicates the Lp(0,1) space in which the system evolves and in which the operator Ad,p or Ac,p is considered. Thus Ad,p (resp. Ac,p) denotes the diffusive (resp. convective) operator in the Xp space.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Ockham’s razor: Deriving cyclic evolutions from viability and inertia constraints</title>
      <description><![CDATA[This article deals with a theme in which Claude Lobry has been interested for a long time: what is the nature of mathematics motivated by the biological sciences? It starts by presenting the subjective opinions of its author, illustrated by the simplest application one can think of: demonstrating that it is possible to produce cyclic evolutions on the simple basis of viability and inertia constraints, without using periodic differential equations. It is not impossible that this approach is not foreign to an explanation of biological clocks (or economic cycles) in other fields.]]></description>
      <pubDate>Sun, 14 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1913</link>
      <guid>https://doi.org/10.46298/arima.1913</guid>
      <author>Aubin, Jean-Pierre</author>
      <dc:creator>Aubin, Jean-Pierre</dc:creator>
      <content:encoded><![CDATA[This article deals with a theme in which Claude Lobry has been interested for a long time: what is the nature of mathematics motivated by the biological sciences? It starts by presenting the subjective opinions of its author, illustrated by the simplest application one can think of: demonstrating that it is possible to produce cyclic evolutions on the simple basis of viability and inertia constraints, without using periodic differential equations. It is not impossible that this approach is not foreign to an explanation of biological clocks (or economic cycles) in other fields.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Practical coexistence in the chemostat with arbitrarily close growth functions</title>
      <description><![CDATA[We show that the coexistence of different species competing for a common resource may last for a substantially long time when their growth functions are arbitrarily close. The transient behavior is analyzed in terms of slow-fast dynamics. We prove that non-dominant species can first increase before decreasing, depending on their initial proportions.]]></description>
      <pubDate>Sat, 13 Sep 2008 12:40:25 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1900</link>
      <guid>https://doi.org/10.46298/arima.1900</guid>
      <author>Rapaport, Alain</author>
      <author>Dochain, Denis</author>
      <author>Harmand, Jérôme</author>
      <dc:creator>Rapaport, Alain</dc:creator>
      <dc:creator>Dochain, Denis</dc:creator>
      <dc:creator>Harmand, Jérôme</dc:creator>
      <content:encoded><![CDATA[We show that the coexistence of different species competing for a common resource may last for a substantially long time when their growth functions are arbitrarily close. The transient behavior is analyzed in terms of slow-fast dynamics. We prove that non-dominant species can first increase before decreasing, depending on their initial proportions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Nearly recombining processes and the calculation of expectations</title>
      <description><![CDATA[In the context of Nonstandard Analysis, we study stochastic difference equations with infinitesimal time steps. In particular, we give a necessary and sufficient condition for a solution to be nearly equivalent to a recombining stochastic process. The characterization is based upon a partial differential equation involving the trend and the conditional variance of the original process. An analogy with Itô's Lemma is pointed out. As an application, we obtain a method for the approximation of expectations, in terms of two ordinary differential equations, also involving the trend and the conditional variance of the original process, and of Gaussian integrals.]]></description>
      <pubDate>Thu, 04 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1907</link>
      <guid>https://doi.org/10.46298/arima.1907</guid>
      <author>van den Berg, Imme</author>
      <author>Amaro, Elsa</author>
      <dc:creator>van den Berg, Imme</dc:creator>
      <dc:creator>Amaro, Elsa</dc:creator>
      <content:encoded><![CDATA[In the context of Nonstandard Analysis, we study stochastic difference equations with infinitesimal time steps. In particular, we give a necessary and sufficient condition for a solution to be nearly equivalent to a recombining stochastic process. The characterization is based upon a partial differential equation involving the trend and the conditional variance of the original process. An analogy with Itô's Lemma is pointed out. As an application, we obtain a method for the approximation of expectations, in terms of two ordinary differential equations, also involving the trend and the conditional variance of the original process, and of Gaussian integrals.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Two simple growth models in the chemostat</title>
      <description><![CDATA[In a chemostat, transient oscillations are often experimentally observed during cell growth. The aim of this paper is to propose simple autonomous models which are able (or not) to generate these oscillations, and to investigate them analytically. Our point of view is based on a simplification of the cell cycle in which there are two states (mature and immature) with the transfer between the two dependent on the available resources. We built two similar models, one with cell biomass and the other with cell number density. We prove that the first one oscillates, but not the second. This paper is dedicated to Claude Lobry, who helped us to build a first version of these models.]]></description>
      <pubDate>Mon, 01 Sep 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1895</link>
      <guid>https://doi.org/10.46298/arima.1895</guid>
      <author>Gouzé, Jean-Luc</author>
      <author>Lemesle, Valérie</author>
      <dc:creator>Gouzé, Jean-Luc</dc:creator>
      <dc:creator>Lemesle, Valérie</dc:creator>
      <content:encoded><![CDATA[In a chemostat, transient oscillations are often experimentally observed during cell growth. The aim of this paper is to propose simple autonomous models which are able (or not) to generate these oscillations, and to investigate them analytically. Our point of view is based on a simplification of the cell cycle in which there are two states (mature and immature) with the transfer between the two dependent on the available resources. We built two similar models, one with cell biomass and the other with cell number density. We prove that the first one oscillates, but not the second. This paper is dedicated to Claude Lobry, who helped us to build a first version of these models.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Complexity in a prey-predator model</title>
      <description><![CDATA[In this paper we consider a predator-prey model given by a reaction-diffusion system. It incorporates a Holling type-II and a modified Leslie-Gower functional response. We focus on qualitative analysis, bifurcation mechanisms and pattern formation.]]></description>
      <pubDate>Wed, 27 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1894</link>
      <guid>https://doi.org/10.46298/arima.1894</guid>
      <author>Camara, Baba I.</author>
      <author>Alaoui, Moulay A. Aziz</author>
      <dc:creator>Camara, Baba I.</dc:creator>
      <dc:creator>Alaoui, Moulay A. Aziz</dc:creator>
      <content:encoded><![CDATA[In this paper we consider a predator-prey model given by a reaction-diffusion system. It incorporates a Holling type-II and a modified Leslie-Gower functional response. We focus on qualitative analysis, bifurcation mechanisms and pattern formation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Interoperability test generation: formal definitions and algorithm</title>
      <description><![CDATA[In the context of network protocols, interoperability testing is used to verify that two (or more) implementations communicate correctly while providing the services described in their respective specifications. This study is aimed at providing a method for interoperability test generation based on formal definitions. Contrary to previous works, this study takes into account quiescence of implementations that may occur during interoperability testing. This is done through the notion of interoperability criteria that give formal definitions of the different existing pragmatic interoperability notions. It is first proved that quiescence management improves non-interoperability detection. Two of these interoperability criteria are proved equivalent leading to a new method for interoperability test generation. This method avoids the well-known state explosion problem that may occur when using existing classical approaches.]]></description>
      <pubDate>Tue, 26 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1884</link>
      <guid>https://doi.org/10.46298/arima.1884</guid>
      <author>Desmoulin, Alexandra</author>
      <author>Viho, César</author>
      <dc:creator>Desmoulin, Alexandra</dc:creator>
      <dc:creator>Viho, César</dc:creator>
      <content:encoded><![CDATA[In the context of network protocols, interoperability testing is used to verify that two (or more) implementations communicate correctly while providing the services described in their respective specifications. This study is aimed at providing a method for interoperability test generation based on formal definitions. Contrary to previous works, this study takes into account quiescence of implementations that may occur during interoperability testing. This is done through the notion of interoperability criteria that give formal definitions of the different existing pragmatic interoperability notions. It is first proved that quiescence management improves non-interoperability detection. Two of these interoperability criteria are proved equivalent leading to a new method for interoperability test generation. This method avoids the well-known state explosion problem that may occur when using existing classical approaches.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On some robust algorithms for the Robin inverse problem</title>
      <description><![CDATA[The problem we are dealing with is to recover a Robin coefficient (or impedance) from measurements performed on some part of the boundary of a domain, in the framework of nondestructive testing by means of Electric Impedance Tomography. The impedance can provide information on the location of a corroded area, as well as on the extent of the damage which has possibly occurred on an inaccessible part of the boundary. Two different identification algorithms are presented and studied: the first one is based on a Kohn and Vogelius cost function, actually an energetic least-squares one, which turns the inverse problem into an optimization one; as for the second, it makes use of the best approximation in Hardy classes in order to extend the Cauchy data to the unreachable part of the boundary, and then computes the Robin coefficient from these extended data. Special focus is put on the robustness with respect to noise, from both a mathematical and a numerical point of view. Some numerical experiments are finally presented and compared.]]></description>
      <pubDate>Wed, 20 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1903</link>
      <guid>https://doi.org/10.46298/arima.1903</guid>
      <author>Jaoua, Mohamed</author>
      <author>Chaabane, Slim</author>
      <author>Elhechmi, Chokri</author>
      <author>Leblond, Juliette</author>
      <author>Mahjoub, Moncef</author>
      <author>Partington, Jonathan R.</author>
      <dc:creator>Jaoua, Mohamed</dc:creator>
      <dc:creator>Chaabane, Slim</dc:creator>
      <dc:creator>Elhechmi, Chokri</dc:creator>
      <dc:creator>Leblond, Juliette</dc:creator>
      <dc:creator>Mahjoub, Moncef</dc:creator>
      <dc:creator>Partington, Jonathan R.</dc:creator>
      <content:encoded><![CDATA[The problem we are dealing with is to recover a Robin coefficient (or impedance) from measurements performed on some part of the boundary of a domain, in the framework of nondestructive testing by means of Electric Impedance Tomography. The impedance can provide information on the location of a corroded area, as well as on the extent of the damage which has possibly occurred on an inaccessible part of the boundary. Two different identification algorithms are presented and studied: the first one is based on a Kohn and Vogelius cost function, actually an energetic least-squares one, which turns the inverse problem into an optimization one; as for the second, it makes use of the best approximation in Hardy classes in order to extend the Cauchy data to the unreachable part of the boundary, and then computes the Robin coefficient from these extended data. Special focus is put on the robustness with respect to noise, from both a mathematical and a numerical point of view. Some numerical experiments are finally presented and compared.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Simulation du remplissage des moules par la méthode des éléments Finis / volume contrôle dans les procédés RTM</title>
      <description><![CDATA[In the course of this study, the simulation of the resin flow in the RTM process is developed using the control volume finite element method (CVFEM) coupled with the equation for the free surface location. This location is obtained by means of the so-called Volume of Fluid (VOF) methods. Thus, the position of the flow front, the time lapse and the rate of the non-saturated zone are calculated at every step. Our results are compared with the experimental and analytical models in the literature. On the whole, our study is concerned with the simulation of the thermally insulated filling of moulds in the RTM process, adopting the CVFEM and VOF methods and taking into account the presence of obstacles, coupled with the thickness variation effect and the reinforcement coats.]]></description>
      <pubDate>Mon, 18 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1918</link>
      <guid>https://doi.org/10.46298/arima.1918</guid>
      <author>Samir, J.</author>
      <author>Hattabi, M.</author>
      <author>Echaabi, J.</author>
      <author>Saouab, A.</author>
      <author>Park, C.H.</author>
      <dc:creator>Samir, J.</dc:creator>
      <dc:creator>Hattabi, M.</dc:creator>
      <dc:creator>Echaabi, J.</dc:creator>
      <dc:creator>Saouab, A.</dc:creator>
      <dc:creator>Park, C.H.</dc:creator>
      <content:encoded><![CDATA[In the course of this study, the simulation of the resin flow in the RTM process is developed using the control volume finite element method (CVFEM) coupled with the equation for the free surface location. This location is obtained by means of the so-called Volume of Fluid (VOF) methods. Thus, the position of the flow front, the time lapse and the rate of the non-saturated zone are calculated at every step. Our results are compared with the experimental and analytical models in the literature. On the whole, our study is concerned with the simulation of the thermally insulated filling of moulds in the RTM process, adopting the CVFEM and VOF methods and taking into account the presence of obstacles, coupled with the thickness variation effect and the reinforcement coats.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Singular perturbations on the infinite time interval</title>
      <description><![CDATA[We consider slow and fast systems that belong to a small neighborhood of an unperturbed problem. We study the general case where the slow equation has a compact positively invariant subset which is asymptotically stable, while the fast equation has asymptotically stable equilibria (Tykhonov's theory) or asymptotically stable periodic orbits (Pontryagin–Rodygin's theory). The description of the solutions is thereby given on an infinite time interval. We investigate the stability problems derived from these results by introducing the notion of practical asymptotic stability. We show that some particular subsets of the phase space of the singularly perturbed systems behave like asymptotically stable sets. Our results are formulated in classical mathematics. They are proved within Internal Set Theory, which is an axiomatic approach to Nonstandard Analysis.]]></description>
      <pubDate>Sat, 16 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1912</link>
      <guid>https://doi.org/10.46298/arima.1912</guid>
      <author>Yadi, Karim</author>
      <dc:creator>Yadi, Karim</dc:creator>
      <content:encoded><![CDATA[We consider slow and fast systems that belong to a small neighborhood of an unperturbed problem. We study the general case where the slow equation has a compact positively invariant subset which is asymptotically stable, while the fast equation has asymptotically stable equilibria (Tykhonov's theory) or asymptotically stable periodic orbits (Pontryagin–Rodygin's theory). The description of the solutions is thereby given on an infinite time interval. We investigate the stability problems derived from these results by introducing the notion of practical asymptotic stability. We show that some particular subsets of the phase space of the singularly perturbed systems behave like asymptotically stable sets. Our results are formulated in classical mathematics. They are proved within Internal Set Theory, which is an axiomatic approach to Nonstandard Analysis.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Claude Lobry, un mathématicien militant</title>
      <description><![CDATA[We show in this communication that Claude Lobry has always contributed to mathematics while simultaneously promoting actions to develop a certain idea of how to act in mathematics.]]></description>
      <pubDate>Sat, 16 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1890</link>
      <guid>https://doi.org/10.46298/arima.1890</guid>
      <author>Sallet, Gauthier</author>
      <dc:creator>Sallet, Gauthier</dc:creator>
      <content:encoded><![CDATA[We show in this communication that Claude Lobry has always contributed to mathematics while simultaneously promoting actions to develop a certain idea of how to act in mathematics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Quantitative analysis of metabolic networks and design of minimal bioreaction models</title>
      <description><![CDATA[This tutorial paper is concerned with the design of macroscopic bioreaction models on the basis of a quantitative analysis of the underlying cell metabolism. The paper starts with a review of two fundamental algebraic techniques for the quantitative analysis of metabolic networks: (i) the decomposition of complex metabolic networks into elementary pathways (or elementary modes), and (ii) metabolic flux analysis, which aims at computing the entire intracellular flux distribution from a limited number of flux measurements. It is then discussed how these two fundamental techniques can be exploited to design minimal bioreaction models by using a systematic model reduction approach that automatically produces a family of equivalent minimal models which are fully compatible with the underlying metabolism and consistent with the available experimental data. The theory is illustrated with an experimental case study on CHO cells.]]></description>
      <pubDate>Thu, 14 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1891</link>
      <guid>https://doi.org/10.46298/arima.1891</guid>
      <author>Bastin, Georges</author>
      <dc:creator>Bastin, Georges</dc:creator>
      <content:encoded><![CDATA[This tutorial paper is concerned with the design of macroscopic bioreaction models on the basis of a quantitative analysis of the underlying cell metabolism. The paper starts with a review of two fundamental algebraic techniques for the quantitative analysis of metabolic networks: (i) the decomposition of complex metabolic networks into elementary pathways (or elementary modes), and (ii) metabolic flux analysis, which aims at computing the entire intracellular flux distribution from a limited number of flux measurements. It is then discussed how these two fundamental techniques can be exploited to design minimal bioreaction models by using a systematic model reduction approach that automatically produces a family of equivalent minimal models which are fully compatible with the underlying metabolism and consistent with the available experimental data. The theory is illustrated with an experimental case study on CHO cells.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On a Radially Symmetrical Green’s Function</title>
      <description><![CDATA[It is quite usual to transform second-order elliptic PDE problems into fixed-point integral problems via the Green’s function. In general, however, the integrals involved in such a formulation are not easy to handle. For the Laplacian operator on balls of Rn, we give here a radially symmetrical Green’s function which, under some nonlinearity assumptions, makes the Green’s integral representation formula easier to use; we give three examples of application.]]></description>
      <pubDate>Sun, 10 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1902</link>
      <guid>https://doi.org/10.46298/arima.1902</guid>
      <author>Isselkou, Ould Ahmed Izid Bih</author>
      <dc:creator>Isselkou, Ould Ahmed Izid Bih</dc:creator>
      <content:encoded><![CDATA[It is quite usual to transform second-order elliptic PDE problems into fixed-point integral problems via the Green’s function. In general, however, the integrals involved in such a formulation are not easy to handle. For the Laplacian operator on balls of Rn, we give here a radially symmetrical Green’s function which, under some nonlinearity assumptions, makes the Green’s integral representation formula easier to use; we give three examples of application.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Interval numerical observer: Application to a discrete time nonlinear fish model</title>
      <description><![CDATA[The aim of this work is to reconstruct the state of a discrete-time nonlinear system representing a dynamical model of a harvested fish population. To this end, we use a numerical method for building an interval observer for the considered discrete-time fish population model. We adapt to this model an algorithm called "Interval Moving Horizon State Estimation" (IMHSE), which gives an estimated interval of the system states. This algorithm is developed in [8] and works well for a general class of discrete-time systems.]]></description>
      <pubDate>Sun, 10 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1923</link>
      <guid>https://doi.org/10.46298/arima.1923</guid>
      <author>Guiro, Aboudramane</author>
      <author>Iggidr, Abderrahman</author>
      <author>Ngom, Diène</author>
      <dc:creator>Guiro, Aboudramane</dc:creator>
      <dc:creator>Iggidr, Abderrahman</dc:creator>
      <dc:creator>Ngom, Diène</dc:creator>
      <content:encoded><![CDATA[The aim of this work is to reconstruct the state of a discrete-time nonlinear system representing a dynamical model of a harvested fish population. To this end, we use a numerical method for building an interval observer for the considered discrete-time fish population model. We adapt to this model an algorithm called "Interval Moving Horizon State Estimation" (IMHSE), which gives an estimated interval of the system states. This algorithm is developed in [8] and works well for a general class of discrete-time systems.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Applications of variable aggregation methods to the analysis of spatial population dynamics models</title>
      <description><![CDATA[Models in population dynamics can involve a large number of parameters and variables, which can make them difficult to analyse. Aggregation of variables allows the complexity of such models to be reduced by building simplified models governing fewer variables, exploiting the existence of different time scales associated with the processes governing the whole system. These reduced models allow the global dynamics of the system to be analysed and described. We present these methods for discrete-time models and illustrate their use in the study of spatial host-parasitoid models.]]></description>
      <pubDate>Sat, 09 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1898</link>
      <guid>https://doi.org/10.46298/arima.1898</guid>
      <author>Nguyen-Huu, Tri</author>
      <author>Auger, Pierre</author>
      <dc:creator>Nguyen-Huu, Tri</dc:creator>
      <dc:creator>Auger, Pierre</dc:creator>
      <content:encoded><![CDATA[Models in population dynamics can involve a large number of parameters and variables, which can make them difficult to analyse. Aggregation of variables allows the complexity of such models to be reduced by building simplified models governing fewer variables, exploiting the existence of different time scales associated with the processes governing the whole system. These reduced models allow the global dynamics of the system to be analysed and described. We present these methods for discrete-time models and illustrate their use in the study of spatial host-parasitoid models.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Survey of recent results of multi-compartments intra-host models of malaria and HIV</title>
      <description><![CDATA[We present recent results obtained for within-host models of malaria and HIV. We briefly recall the Anderson-May-Gupta model. We also recall the van den Driessche method of computation for the basic reproduction ratio R0; here, a very simple formula is given for a new class of models. The global analysis of these models can be found in [1, 2, 3, 5]. The results we recall here are for a model of one strain of parasites and many age classes, a general model of n strains of parasites and k age classes, an S E1 E2 ··· En I S model with one linear chain of compartments and, finally, a general S Ei1 Ei2 ··· Ein I S model with k linear chains of compartments. When R0 <= 1, the authors prove that there is a trivial equilibrium, called the disease-free equilibrium (DFE), which is globally asymptotically stable (GAS) on the non-negative orthant, and when R0 > 1, they prove the existence of a unique endemic equilibrium in the non-negative orthant and give an explicit formula for it. They also provide a weak condition for the global stability of the endemic equilibrium.]]></description>
      <pubDate>Fri, 08 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1893</link>
      <guid>https://doi.org/10.46298/arima.1893</guid>
      <author>Bowong, Samuel</author>
      <author>Dimi, Jean-Luc</author>
      <author>Kamgang, Jean-Claude</author>
      <author>Mbang, Joseph</author>
      <author>Tewa, Jean Jules</author>
      <dc:creator>Bowong, Samuel</dc:creator>
      <dc:creator>Dimi, Jean-Luc</dc:creator>
      <dc:creator>Kamgang, Jean-Claude</dc:creator>
      <dc:creator>Mbang, Joseph</dc:creator>
      <dc:creator>Tewa, Jean Jules</dc:creator>
      <content:encoded><![CDATA[We present recent results obtained for within-host models of malaria and HIV. We briefly recall the Anderson-May-Gupta model. We also recall the van den Driessche method of computation for the basic reproduction ratio R0; here, a very simple formula is given for a new class of models. The global analysis of these models can be found in [1, 2, 3, 5]. The results we recall here are for a model of one strain of parasites and many age classes, a general model of n strains of parasites and k age classes, an S E1 E2 ··· En I S model with one linear chain of compartments and, finally, a general S Ei1 Ei2 ··· Ein I S model with k linear chains of compartments. When R0 <= 1, the authors prove that there is a trivial equilibrium, called the disease-free equilibrium (DFE), which is globally asymptotically stable (GAS) on the non-negative orthant, and when R0 > 1, they prove the existence of a unique endemic equilibrium in the non-negative orthant and give an explicit formula for it. They also provide a weak condition for the global stability of the endemic equilibrium.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Integer calculus on the Harthong-Reeb Line</title>
      <description><![CDATA[In this work, we give a presentation of the so-called Harthong-Reeb line. Based only on integer numbers, this numerical system has the striking property of being roughly equivalent to the continuous real line. Its definition requires the use of a natural number w which is infinitely large in the sense of nonstandard analysis. Following the idea of G. Reeb, we show how to implement the Euler scheme in this framework. We then obtain an exact representation in the Harthong-Reeb line of many real functions, such as the exponential. Since this representation is given by means of an explicit algorithm, it is natural to wonder about the global constructivity of this numerical system. In the conclusion, we discuss this last point and outline some new directions for obtaining analogous systems that would be more constructive.]]></description>
      <pubDate>Thu, 07 Aug 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1911</link>
      <guid>https://doi.org/10.46298/arima.1911</guid>
      <author>Wallet, Guy</author>
      <dc:creator>Wallet, Guy</dc:creator>
      <content:encoded><![CDATA[In this work, we give a presentation of the so-called Harthong-Reeb line. Based only on integer numbers, this numerical system has the striking property of being roughly equivalent to the continuous real line. Its definition requires the use of a natural number w which is infinitely large in the sense of nonstandard analysis. Following the idea of G. Reeb, we show how to implement the Euler scheme in this framework. We then obtain an exact representation in the Harthong-Reeb line of many real functions, such as the exponential. Since this representation is given by means of an explicit algorithm, it is natural to wonder about the global constructivity of this numerical system. In the conclusion, we discuss this last point and outline some new directions for obtaining analogous systems that would be more constructive.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Foreword</title>
      <description><![CDATA[The friends of Claude Lobry organised a conference in his honour at Université Gaston Berger in Saint-Louis from 10 to 14 September 2007. Claude Lobry’s scientific contributions have been not only multifaceted and multidisciplinary, but he has also often been a pioneer in many activities. This conference was held in Africa, at the request of African mathematicians, on account of Claude Lobry’s particular commitment to the development of mathematics in Africa, from his appointment as Director of CIMPA in 1995 to the present day. His book « Les mathématiques : une nécessité pour le développement » is a vibrant plea for the development of mathematics in Africa.]]></description>
      <pubDate>Thu, 31 Jul 2008 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1889</link>
      <guid>https://doi.org/10.46298/arima.1889</guid>
      <author>Niane, Mary Teuw</author>
      <author>Sallet, Gauthier</author>
      <author>Sari, Tewfik</author>
      <author>Touré, Hamidou</author>
      <dc:creator>Niane, Mary Teuw</dc:creator>
      <dc:creator>Sallet, Gauthier</dc:creator>
      <dc:creator>Sari, Tewfik</dc:creator>
      <dc:creator>Touré, Hamidou</dc:creator>
      <content:encoded><![CDATA[The friends of Claude Lobry organised a conference in his honour at Université Gaston Berger in Saint-Louis from 10 to 14 September 2007. Claude Lobry’s scientific contributions have been not only multifaceted and multidisciplinary, but he has also often been a pioneer in many activities. This conference was held in Africa, at the request of African mathematicians, on account of Claude Lobry’s particular commitment to the development of mathematics in Africa, from his appointment as Director of CIMPA in 1995 to the present day. His book « Les mathématiques : une nécessité pour le développement » is a vibrant plea for the development of mathematics in Africa.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Approche structurelle des systèmes, de la géométrie à la théorie des graphes</title>
      <description><![CDATA[In this work, which was presented at the conference in honor of Claude Lobry, we focus on a structural approach to systems, which has been the mainstream of our research. The modeling ability of this approach and the power of the associated graph tools are highlighted. As an illustration, we consider the disturbance decoupling problem by measurement feedback and solve it using geometric and graph techniques.]]></description>
      <pubDate>Sat, 19 Jul 2008 12:40:38 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1916</link>
      <guid>https://doi.org/10.46298/arima.1916</guid>
      <author>Dion, Jean-Michel</author>
      <author>Commault, Christian</author>
      <dc:creator>Dion, Jean-Michel</dc:creator>
      <dc:creator>Commault, Christian</dc:creator>
      <content:encoded><![CDATA[In this work, which was presented at the conference in honor of Claude Lobry, we focus on a structural approach to systems, which has been the mainstream of our research. The modeling ability of this approach and the power of the associated graph tools are highlighted. As an illustration, we consider the disturbance decoupling problem by measurement feedback and solve it using geometric and graph techniques.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A hardware/software partitioning tool based on an improved Kernighan/Lin algorithm</title>
      <description><![CDATA[Partitioning of system functionality for implementation among multiple system components, such as between hardware and software components in codesign, is becoming an increasingly important topic. Various heuristics are used in automatic partitioning. In this paper, we present our tool, called AutoDec, implemented in Visual C++ 6.0. We verified that a hierarchical clustering algorithm, based on closeness metrics, can be used to merge pieces of functionality before applying the Kernighan/Lin algorithm, resulting in reduced execution time and often improved quality. In addition, we show that our approach, when used in partitioning, fills the gap between fast algorithms and highly optimizing ones.]]></description>
      <pubDate>Sun, 25 Nov 2007 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1882</link>
      <guid>https://doi.org/10.46298/arima.1882</guid>
      <author>Boudour, R.</author>
      <author>Laskri, M.T.</author>
      <dc:creator>Boudour, R.</dc:creator>
      <dc:creator>Laskri, M.T.</dc:creator>
      <content:encoded><![CDATA[Partitioning of system functionality for implementation among multiple system components, such as between hardware and software components in codesign, is becoming an increasingly important topic. Various heuristics are used in automatic partitioning. In this paper, we present our tool, called AutoDec, implemented in Visual C++ 6.0. We verified that a hierarchical clustering algorithm, based on closeness metrics, can be used to merge pieces of functionality before applying the Kernighan/Lin algorithm, resulting in reduced execution time and often improved quality. In addition, we show that our approach, when used in partitioning, fills the gap between fast algorithms and highly optimizing ones.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>New Evolutionary Classifier Based on Genetic Algorithms and Neural Networks: Application to the Bankruptcy Forecasting Problem</title>
      <description><![CDATA[Artificial neural networks (ANNs) have been widely applied in data mining as a supervised classification technique. The accuracy of this model comes mainly from its high tolerance to noisy data and its ability to classify patterns on which it has not been trained. Moreover, the performance of ANN-based models depends both on the ANN parameters and on the quality of the input variables, whereas an exhaustive search over either appropriate parameters or predictive inputs is computationally very expensive. In this paper, we propose a new hybrid model based on genetic algorithms and artificial neural networks. Our evolutionary classifier is capable of selecting the best set of predictive variables and then searching for the best neural network classifier, improving classification and generalization accuracy. The proposed model was applied to the bankruptcy forecasting problem; experiments have shown very promising results for bankruptcy prediction in terms of predictive accuracy and adaptability.]]></description>
      <pubDate>Sun, 18 Nov 2007 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1879</link>
      <guid>https://doi.org/10.46298/arima.1879</guid>
      <author>Esseghir, M.A.</author>
      <dc:creator>Esseghir, M.A.</dc:creator>
      <content:encoded><![CDATA[Artificial neural networks (ANNs) have been widely applied in data mining as a supervised classification technique. The accuracy of this model comes mainly from its high tolerance to noisy data and its ability to classify patterns on which it has not been trained. Moreover, the performance of ANN-based models depends both on the ANN parameters and on the quality of the input variables, whereas an exhaustive search over either appropriate parameters or predictive inputs is computationally very expensive. In this paper, we propose a new hybrid model based on genetic algorithms and artificial neural networks. Our evolutionary classifier is capable of selecting the best set of predictive variables and then searching for the best neural network classifier, improving classification and generalization accuracy. The proposed model was applied to the bankruptcy forecasting problem; experiments have shown very promising results for bankruptcy prediction in terms of predictive accuracy and adaptability.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Word Game Support Tool Case Study</title>
      <description><![CDATA[This article reports on the approach taken, experience gathered, and results found in building a tool to support the derivation of solutions to a particular kind of word game. This required deriving techniques for simple yet acceptably quick access to a dictionary of natural language words (in the present case, Afrikaans). The main challenge was to access a large corpus of natural language words via a partial match retrieval technique. Other challenges included discovering how to represent such a dictionary in a "semi-compressed" format, thus arriving at a balance that favours search speed but nevertheless achieves savings in storage requirements. In addition, a query language had to be developed that would effectively exploit this access method. The system is designed to support a more intelligent query capability in the future. Acceptable response times were achieved even though an interpretive scripting language, ObjectREXX, was used.]]></description>
      <pubDate>Fri, 09 Nov 2007 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1881</link>
      <guid>https://doi.org/10.46298/arima.1881</guid>
      <author>Botha, T.</author>
      <author>Kourie, D.G.</author>
      <author>Watson, B.W.</author>
      <dc:creator>Botha, T.</dc:creator>
      <dc:creator>Kourie, D.G.</dc:creator>
      <dc:creator>Watson, B.W.</dc:creator>
      <content:encoded><![CDATA[This article reports on the approach taken, experience gathered, and results found in building a tool to support the derivation of solutions to a particular kind of word game. This required deriving techniques for simple yet acceptably quick access to a dictionary of natural language words (in the present case, Afrikaans). The main challenge was to access a large corpus of natural language words via a partial match retrieval technique. Other challenges included discovering how to represent such a dictionary in a "semi-compressed" format, thus arriving at a balance that favours search speed but nevertheless achieves savings in storage requirements. In addition, a query language had to be developed that would effectively exploit this access method. The system is designed to support a more intelligent query capability in the future. Acceptable response times were achieved even though an interpretive scripting language, ObjectREXX, was used.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Progress of organisational data mining in South Africa</title>
      <description><![CDATA[This paper describes three largely qualitative studies, spread over a five-year period, into the current practice of data mining in several large South African organisations. The objective was to gain, through in-depth interviews, an understanding of the major issues faced by participants in the data mining process. The focus is more on organisational, resource and business issues than on technological or algorithmic aspects. The studies reveal strong progress over this period, and a model for the data mining organisation is proposed.]]></description>
      <pubDate>Mon, 22 Oct 2007 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1875</link>
      <guid>https://doi.org/10.46298/arima.1875</guid>
      <author>Hart, Mike</author>
      <dc:creator>Hart, Mike</dc:creator>
      <content:encoded><![CDATA[This paper describes three largely qualitative studies, spread over a five-year period, into the current practice of data mining in several large South African organisations. The objective was to gain, through in-depth interviews, an understanding of the major issues faced by participants in the data mining process. The focus is more on organisational, resource and business issues than on technological or algorithmic aspects. The studies reveal strong progress over this period, and a model for the data mining organisation is proposed.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Texture-based Method for Document Segmentation and Classification</title>
      <description><![CDATA[In this paper we present a hybrid approach to segmenting and classifying the contents of document images. A document image is segmented into three types of regions: graphics, text and space. The image of a document is subdivided into blocks, and for each block five GLCM (Grey Level Co-occurrence Matrix) features are extracted. Based on these features, blocks are then clustered into three groups using the K-Means algorithm; connected blocks that belong to the same group are merged. The classification of groups is done using pre-learned heuristic rules. Experiments were conducted on scanned newspapers and images from the MediaTeam Document Database.]]></description>
      <pubDate>Sun, 14 Oct 2007 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1878</link>
      <guid>https://doi.org/10.46298/arima.1878</guid>
      <author>Lin, Ming-Wei</author>
      <author>Tapamo, Jules-Raymond</author>
      <author>Ndovie, Baird</author>
      <dc:creator>Lin, Ming-Wei</dc:creator>
      <dc:creator>Tapamo, Jules-Raymond</dc:creator>
      <dc:creator>Ndovie, Baird</dc:creator>
      <content:encoded><![CDATA[In this paper we present a hybrid approach to segmenting and classifying the contents of document images. A document image is segmented into three types of regions: graphics, text and space. The image of a document is subdivided into blocks, and for each block five GLCM (Grey Level Co-occurrence Matrix) features are extracted. Based on these features, blocks are then clustered into three groups using the K-Means algorithm; connected blocks that belong to the same group are merged. The classification of groups is done using pre-learned heuristic rules. Experiments were conducted on scanned newspapers and images from the MediaTeam Document Database.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>One-Class Classifiers: A Review and Analysis of Suitability in the Context of Mobile-Masquerader Detection</title>
      <description><![CDATA[One-class classifiers, which employ only data from one class for training, are justified when data from other classes are difficult to obtain. In particular, their use is justified in mobile-masquerader detection, where user characteristics are classified as belonging to the legitimate user class or to the impostor class, and where collecting data originating from impostors is problematic. This paper systematically reviews various one-class classification methods and analyses their suitability in the context of mobile-masquerader detection. For each classification method, its sensitivity to errors in the training set, computational requirements, and other characteristics are considered. Suitable classifiers are then identified for each category of features used in masquerader detection.]]></description>
      <pubDate>Sat, 29 Sep 2007 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1877</link>
      <guid>https://doi.org/10.46298/arima.1877</guid>
      <author>Mazhelis, Oleksiy</author>
      <dc:creator>Mazhelis, Oleksiy</dc:creator>
      <content:encoded><![CDATA[One-class classifiers, which employ only data from one class for training, are justified when data from other classes are difficult to obtain. In particular, their use is justified in mobile-masquerader detection, where user characteristics are classified as belonging to the legitimate user class or to the impostor class, and where collecting data originating from impostors is problematic. This paper systematically reviews various one-class classification methods and analyses their suitability in the context of mobile-masquerader detection. For each classification method, its sensitivity to errors in the training set, computational requirements, and other characteristics are considered. Suitable classifiers are then identified for each category of features used in masquerader detection.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Comparative study of sample selection methods for classification</title>
      <description><![CDATA[Sampling of large datasets for data mining is important for at least two reasons. The processing of large amounts of data results in increased computational complexity. The cost of this additional complexity may not be justifiable. On the other hand, the use of small samples results in fast and efficient computation for data mining algorithms. Statistical methods for obtaining sufficient samples from datasets for classification problems are discussed in this paper. Results are presented for an empirical study based on the use of sequential random sampling and sample evaluation using univariate hypothesis testing and an information theoretic measure. Comparisons are made between theoretical and empirical estimates.]]></description>
      <pubDate>Sat, 01 Sep 2007 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1880</link>
      <guid>https://doi.org/10.46298/arima.1880</guid>
      <author>Lutu, Patricia E.N.</author>
      <author>Engelbrecht, Andries P.</author>
      <dc:creator>Lutu, Patricia E.N.</dc:creator>
      <dc:creator>Engelbrecht, Andries P.</dc:creator>
      <content:encoded><![CDATA[Sampling of large datasets for data mining is important for at least two reasons. The processing of large amounts of data results in increased computational complexity. The cost of this additional complexity may not be justifiable. On the other hand, the use of small samples results in fast and efficient computation for data mining algorithms. Statistical methods for obtaining sufficient samples from datasets for classification problems are discussed in this paper. Results are presented for an empirical study based on the use of sequential random sampling and sample evaluation using univariate hypothesis testing and an information theoretic measure. Comparisons are made between theoretical and empirical estimates.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A load balancing model for computing grids</title>
      <description><![CDATA[In order to achieve better performance in distributed systems, the load balancing problem has been extensively studied in recent years. Most existing works focus on traditional systems where resources are generally homogeneous, such as clusters. For grid infrastructures this assumption does not hold, because the resources of a grid are highly heterogeneous. Hence, load balancing for grid computing is a new challenge for scientists. In this paper, we propose a tree-based representation model for grid computing, over which we develop a hierarchical load balancing strategy. The main characteristics of this strategy can be summarized as follows: (i) it uses task-level load balancing; (ii) it favours local task transfers to reduce communication costs; (iii) it is a distributed strategy with local decision making.]]></description>
      <pubDate>Mon, 27 Aug 2007 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1883</link>
      <guid>https://doi.org/10.46298/arima.1883</guid>
      <author>Yagoubi, Bellabas</author>
      <dc:creator>Yagoubi, Bellabas</dc:creator>
      <content:encoded><![CDATA[In order to achieve better performance in distributed systems, the load balancing problem has been extensively studied in recent years. Most existing work focuses on traditional systems where resources are generally homogeneous, such as clusters. For grid infrastructures, this assumption does not fully hold, because the resources of a grid are highly heterogeneous. Hence, the load balancing problem for grid computing is a new challenge for scientists. In this paper, we propose a tree-based representation model for grid computing, over which we develop a hierarchical load balancing strategy. The main characteristics of this strategy can be summarized as follows: (i) it uses task-level load balancing; (ii) it privileges local task transfers to reduce communication costs; (iii) it is a distributed strategy with local decision making.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Motivic Pattern Extraction in Music, and Application to the Study of Tunisian Modal Music</title>
      <description><![CDATA[A new methodology for the automated extraction of repeated patterns in time-series data is presented, aimed in particular at the analysis of musical sequences. The basic principle consists of a search for closed patterns in a multi-dimensional parametric space. It is shown that this basic mechanism needs to be articulated with a periodic pattern discovery system, implying therefore a strict chronological scanning of the time-series data. Thanks to this modelling, global pattern filtering may be avoided, and rich and highly pertinent results can be obtained. The modelling has been integrated in a collaborative project between ethnomusicology, cognitive science and computer science, aimed at the study of Tunisian modal music.]]></description>
      <pubDate>Sun, 26 Aug 2007 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1876</link>
      <guid>https://doi.org/10.46298/arima.1876</guid>
      <author>Lartillo, Olivier</author>
      <author>Ayari, Mondher</author>
      <dc:creator>Lartillo, Olivier</dc:creator>
      <dc:creator>Ayari, Mondher</dc:creator>
      <content:encoded><![CDATA[A new methodology for the automated extraction of repeated patterns in time-series data is presented, aimed in particular at the analysis of musical sequences. The basic principle consists of a search for closed patterns in a multi-dimensional parametric space. It is shown that this basic mechanism needs to be articulated with a periodic pattern discovery system, implying therefore a strict chronological scanning of the time-series data. Thanks to this modelling, global pattern filtering may be avoided, and rich and highly pertinent results can be obtained. The modelling has been integrated in a collaborative project between ethnomusicology, cognitive science and computer science, aimed at the study of Tunisian modal music.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Solutions de similitude d'un jeu différentiel stochastique</title>
      <description><![CDATA[A two-dimensional controlled stochastic process defined by a set of stochastic differential equations is considered. Contrary to the most frequent formulation, the control variables appear only in the infinitesimal variances of the process, rather than in the infinitesimal means. The differential game ends the first time the two controlled processes are equal or their difference is equal to a given constant. Explicit solutions to particular problems are obtained by making use of the method of similarity solutions to solve the appropriate partial differential equation.]]></description>
      <pubDate>Tue, 28 Nov 2006 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1864</link>
      <guid>https://doi.org/10.46298/arima.1864</guid>
      <author>Lefebvre, Mario</author>
      <dc:creator>Lefebvre, Mario</dc:creator>
      <content:encoded><![CDATA[A two-dimensional controlled stochastic process defined by a set of stochastic differential equations is considered. Contrary to the most frequent formulation, the control variables appear only in the infinitesimal variances of the process, rather than in the infinitesimal means. The differential game ends the first time the two controlled processes are equal or their difference is equal to a given constant. Explicit solutions to particular problems are obtained by making use of the method of similarity solutions to solve the appropriate partial differential equation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Suite d’ensembles partiellement ordonnés</title>
      <description><![CDATA[This work studies an order D(P) on the maximal antichains of a given order P. D(P) is an order included in the order which defines the lattice of maximal antichains AM(P), introduced by R.P. Dilworth in 1960. In [3], T.Y. Kong and P. Ribenboim proved that there exists an integer i such that Di(P) is a chain, where Di(P)=D(D(…D(P))), i times. We find the smallest such i, denoted cdev(P), for some particular classes of orders, and we approximate this parameter in the general case.]]></description>
      <pubDate>Sat, 25 Nov 2006 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1846</link>
      <guid>https://doi.org/10.46298/arima.1846</guid>
      <author>Sadi, Bachir</author>
      <dc:creator>Sadi, Bachir</dc:creator>
      <content:encoded><![CDATA[This work studies an order D(P) on the maximal antichains of a given order P. D(P) is an order included in the order which defines the lattice of maximal antichains AM(P), introduced by R.P. Dilworth in 1960. In [3], T.Y. Kong and P. Ribenboim proved that there exists an integer i such that Di(P) is a chain, where Di(P)=D(D(…D(P))), i times. We find the smallest such i, denoted cdev(P), for some particular classes of orders, and we approximate this parameter in the general case.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Hermite spline interpolants ― New methods for constructing and compressing Hermite interpolants</title>
      <description><![CDATA[In this paper, we present a quite simple recursive method for the construction of the classical tensor product Hermite spline interpolant of a function defined on a rectangular domain. We show that this function can be written in a recursive form, as a sum of particular splines that have interesting properties. As an application of this method, we give an algorithm which allows Hermite data to be compressed. In order to illustrate our results, some numerical examples are presented.]]></description>
      <pubDate>Fri, 17 Nov 2006 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1871</link>
      <guid>https://doi.org/10.46298/arima.1871</guid>
      <author>Mraoui, Hamid</author>
      <author>Sbibih, Driss</author>
      <dc:creator>Mraoui, Hamid</dc:creator>
      <dc:creator>Sbibih, Driss</dc:creator>
      <content:encoded><![CDATA[In this paper, we present a quite simple recursive method for the construction of the classical tensor product Hermite spline interpolant of a function defined on a rectangular domain. We show that this function can be written in a recursive form, as a sum of particular splines that have interesting properties. As an application of this method, we give an algorithm which allows Hermite data to be compressed. In order to illustrate our results, some numerical examples are presented.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Problème de contrôle optimal frontière pour l'équation de la chaleur : Approche variationnelle et pénalisation</title>
      <description><![CDATA[Our goal is to give a detailed analysis of an optimal control problem where the control variable is a boundary condition of Dirichlet type in L². We focus on establishing an appropriate variational approach to the optimal control problem. We use the penalization method for the boundary control problem, and we study the convergence between the penalized and the non-penalized boundary control problems. A numerical result is reported to validate the convergence.]]></description>
      <pubDate>Sat, 11 Nov 2006 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1868</link>
      <guid>https://doi.org/10.46298/arima.1868</guid>
      <author>Metoui, H.</author>
      <dc:creator>Metoui, H.</dc:creator>
      <content:encoded><![CDATA[Our goal is to give a detailed analysis of an optimal control problem where the control variable is a boundary condition of Dirichlet type in L². We focus on establishing an appropriate variational approach to the optimal control problem. We use the penalization method for the boundary control problem, and we study the convergence between the penalized and the non-penalized boundary control problems. A numerical result is reported to validate the convergence.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A stochastic modelling of phytoplankton aggregation</title>
      <description><![CDATA[The aim of this work is to provide a stochastic mathematical model of aggregation in phytoplankton, from the point of view of modelling a system of a large but finite number of phytoplankton cells that are subject to random dispersal, mutual interactions allowing the cell motions some dependence and branching (cell division or death). We present the passage from the ''microscopic'' description to the ''macroscopic'' one, when the initial number of cells tends to infinity (large phytoplankton populations). The limit of the system is an extension of the Dawson-Watanabe superprocess: it is a superprocess with spatial interactions which can be described by a nonlinear stochastic partial differential equation.]]></description>
      <pubDate>Sat, 04 Nov 2006 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1856</link>
      <guid>https://doi.org/10.46298/arima.1856</guid>
      <author>El Saadi, Nadjia</author>
      <author>Arino, Ovide</author>
      <dc:creator>El Saadi, Nadjia</dc:creator>
      <dc:creator>Arino, Ovide</dc:creator>
      <content:encoded><![CDATA[The aim of this work is to provide a stochastic mathematical model of aggregation in phytoplankton, from the point of view of modelling a system of a large but finite number of phytoplankton cells that are subject to random dispersal, mutual interactions allowing the cell motions some dependence and branching (cell division or death). We present the passage from the ''microscopic'' description to the ''macroscopic'' one, when the initial number of cells tends to infinity (large phytoplankton populations). The limit of the system is an extension of the Dawson-Watanabe superprocess: it is a superprocess with spatial interactions which can be described by a nonlinear stochastic partial differential equation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Décomposition de domaine pour un milieu poreux fracturé : un modèle en 3D avec fractures</title>
      <description><![CDATA[In this article, we are interested in modelling the flow of a single-phase fluid in a fractured porous medium using domain decomposition methods. The problem to be solved is a non-standard interface problem which takes into account the flow in the fractures. In the proposed approach, the fracture is considered as an active interface: the transmission conditions and the exchanges between the rock and the fracture involve the properties of the flow in the fracture.]]></description>
      <pubDate>Fri, 03 Nov 2006 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1851</link>
      <guid>https://doi.org/10.46298/arima.1851</guid>
      <author>Amir, Laila</author>
      <author>Kern, Michel</author>
      <author>Roberts, Jean E.</author>
      <author>Martin, Vincent</author>
      <dc:creator>Amir, Laila</dc:creator>
      <dc:creator>Kern, Michel</dc:creator>
      <dc:creator>Roberts, Jean E.</dc:creator>
      <dc:creator>Martin, Vincent</dc:creator>
      <content:encoded><![CDATA[In this article, we are interested in modelling the flow of a single-phase fluid in a fractured porous medium using domain decomposition methods. The problem to be solved is a non-standard interface problem which takes into account the flow in the fractures. In the proposed approach, the fracture is considered as an active interface: the transmission conditions and the exchanges between the rock and the fracture involve the properties of the flow in the fracture.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An introduction to the topological asymptotic expansion with examples</title>
      <description><![CDATA[Finding an optimal domain is equivalent to looking for its characteristic function. At first sight this problem seems to be nondifferentiable, but it is possible to derive the variation of a cost function when the characteristic function is switched from 0 to 1, or from 1 to 0, on a small area. The classical adjoint approach and two generalized adjoint approaches are considered in this paper. Their domains of validity are given and illustrated by several examples. Using this gradient-type information, it is possible to build fast algorithms; generally, only one iteration is needed to find the optimal shape.]]></description>
      <pubDate>Thu, 19 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1866</link>
      <guid>https://doi.org/10.46298/arima.1866</guid>
      <author>Fehrenbach, Jérôme</author>
      <author>Masmoudi, Mohamed</author>
      <dc:creator>Fehrenbach, Jérôme</dc:creator>
      <dc:creator>Masmoudi, Mohamed</dc:creator>
      <content:encoded><![CDATA[Finding an optimal domain is equivalent to looking for its characteristic function. At first sight this problem seems to be nondifferentiable, but it is possible to derive the variation of a cost function when the characteristic function is switched from 0 to 1, or from 1 to 0, on a small area. The classical adjoint approach and two generalized adjoint approaches are considered in this paper. Their domains of validity are given and illustrated by several examples. Using this gradient-type information, it is possible to build fast algorithms; generally, only one iteration is needed to find the optimal shape.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>SenPeer : un système pair-à-pair de médiation de données</title>
      <description><![CDATA[In this article we present SenPeer, a new peer-to-peer data management system allowing data sharing among experts working on the development of the Senegal river, in a decentralized and flexible fashion. SenPeer has a super-peer network topology based on an organization of peers in semantic domains, in which peers can contribute XML documents, relational databases or object databases. Each peer exports its data in a common formalism with a graph structure, semantically enriched with a set of keywords in order to guide mapping discovery. Mapping discovery relies on a set of fuzzy similarity measures. Moreover, the mappings allow the establishment of a semantic topology, independent of the underlying network topology, which is the basis for intelligent query routing.]]></description>
      <pubDate>Thu, 19 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1847</link>
      <guid>https://doi.org/10.46298/arima.1847</guid>
      <author>Faye, David C.</author>
      <author>Nachouki, Gilles</author>
      <author>Valduriez, Patrick</author>
      <dc:creator>Faye, David C.</dc:creator>
      <dc:creator>Nachouki, Gilles</dc:creator>
      <dc:creator>Valduriez, Patrick</dc:creator>
      <content:encoded><![CDATA[In this article we present SenPeer, a new peer-to-peer data management system allowing data sharing among experts working on the development of the Senegal river, in a decentralized and flexible fashion. SenPeer has a super-peer network topology based on an organization of peers in semantic domains, in which peers can contribute XML documents, relational databases or object databases. Each peer exports its data in a common formalism with a graph structure, semantically enriched with a set of keywords in order to guide mapping discovery. Mapping discovery relies on a set of fuzzy similarity measures. Moreover, the mappings allow the establishment of a semantic topology, independent of the underlying network topology, which is the basis for intelligent query routing.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A method for optimal control problems</title>
      <description><![CDATA[We deal with a numerical method for HJB equations coming from optimal control problems with state constraints. More precisely, we present here an antidissipative scheme applied on an adaptive grid. The adaptive grid is generated using a linear quadtree structure. This adaptation technique facilitates storing the data and dealing with large numerical systems.]]></description>
      <pubDate>Wed, 18 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1867</link>
      <guid>https://doi.org/10.46298/arima.1867</guid>
      <author>Bokanowski, Olivier</author>
      <author>Megdich, Nadia</author>
      <author>Zidani, Hasnaa</author>
      <dc:creator>Bokanowski, Olivier</dc:creator>
      <dc:creator>Megdich, Nadia</dc:creator>
      <dc:creator>Zidani, Hasnaa</dc:creator>
      <content:encoded><![CDATA[We deal with a numerical method for HJB equations coming from optimal control problems with state constraints. More precisely, we present here an antidissipative scheme applied on an adaptive grid. The adaptive grid is generated using a linear quadtree structure. This adaptation technique facilitates storing the data and dealing with large numerical systems.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Exécution d'un graphe cubique de tâches sur un réseau bi-dimensionnel et asymptotiquement optimal</title>
      <description><![CDATA[This work proposes a scheduling strategy, based on re-indexing transformations, for task graphs associated with a linear timing function. This scheduling strategy is used to execute a cubical task graph, for which all the tasks have the same execution time and inter-task communication delays are neglected, on a two-dimensional array of processors which is asymptotically space-optimal with respect to the timing function.]]></description>
      <pubDate>Wed, 11 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1848</link>
      <guid>https://doi.org/10.46298/arima.1848</guid>
      <author>Tayou Djamegni, Clémentin</author>
      <dc:creator>Tayou Djamegni, Clémentin</dc:creator>
      <content:encoded><![CDATA[This work proposes a scheduling strategy, based on re-indexing transformations, for task graphs associated with a linear timing function. This scheduling strategy is used to execute a cubical task graph, for which all the tasks have the same execution time and inter-task communication delays are neglected, on a two-dimensional array of processors which is asymptotically space-optimal with respect to the timing function.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Flood simulation via shallow water numerical model based on characteristic method</title>
      <description><![CDATA[This work deals with the numerical simulation of flood wave propagation. This phenomenon can be described by the non-conservative form of the shallow water or St-Venant equations, in the water velocity-depth formulation (u,H). The numerical approximation of the model is based on the method of characteristics for the time discretization. The resulting steady system is of quasi-Stokes type, and it is solved by a preconditioned Uzawa conjugate gradient algorithm, combined with P1/P1 finite elements for the spatial approximation. Some numerical results describing subcritical flow on various fluid domains are given.]]></description>
      <pubDate>Fri, 06 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1874</link>
      <guid>https://doi.org/10.46298/arima.1874</guid>
      <author>El Dabaghi, F.</author>
      <author>El Kacimi, A.</author>
      <author>Nakhlé, B.</author>
      <dc:creator>El Dabaghi, F.</dc:creator>
      <dc:creator>El Kacimi, A.</dc:creator>
      <dc:creator>Nakhlé, B.</dc:creator>
      <content:encoded><![CDATA[This work deals with the numerical simulation of flood wave propagation. This phenomenon can be described by the non-conservative form of the shallow water or St-Venant equations, in the water velocity-depth formulation (u,H). The numerical approximation of the model is based on the method of characteristics for the time discretization. The resulting steady system is of quasi-Stokes type, and it is solved by a preconditioned Uzawa conjugate gradient algorithm, combined with P1/P1 finite elements for the spatial approximation. Some numerical results describing subcritical flow on various fluid domains are given.]]></description>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On the representation of L-M algebra by intuitionistic fuzzy subsets</title>
      <description><![CDATA[In this paper we introduce the notions of intuitionistic weak alpha-cut and intuitionistic strong alpha-cut of intuitionistic fuzzy subsets of a universe X. These notions lead us to show that the set IF(X) of all intuitionistic fuzzy subsets on a universe X can be equipped with a structure of involutive theta-valued Lukasiewicz-Moisil algebra. Conversely, we show that every involutive theta-valued Lukasiewicz-Moisil algebra can be embedded into an algebra of intuitionistic fuzzy subsets.]]></description>
      <pubDate>Fri, 06 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1845</link>
      <guid>https://doi.org/10.46298/arima.1845</guid>
      <author>Lemnaouar, Zedam</author>
      <author>Abdelaziz, Amroune</author>
      <dc:creator>Lemnaouar, Zedam</dc:creator>
      <dc:creator>Abdelaziz, Amroune</dc:creator>
      <content:encoded><![CDATA[In this paper we introduce the notions of intuitionistic weak alpha-cut and intuitionistic strong alpha-cut of intuitionistic fuzzy subsets of a universe X. These notions lead us to show that the set IF(X) of all intuitionistic fuzzy subsets on a universe X can be equipped with a structure of involutive theta-valued Lukasiewicz-Moisil algebra. Conversely, we show that every involutive theta-valued Lukasiewicz-Moisil algebra can be embedded into an algebra of intuitionistic fuzzy subsets.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An Algorithm for the Navier-Stokes Problem</title>
      <description><![CDATA[This study is a continuation of the work done in [7], [8] and [9], which is based on the work first derived by Glowinski et al. in [3] and [4], and also on Bernardi et al. [1] and [2]. Here, we propose an algorithm to solve a nonlinear problem arising from fluid mechanics. In [7], we studied the Stokes problem by adapting Glowinski's technique. This technique is useful as it decouples the pressure from the velocity during the resolution of the Stokes problem. In this paper, we extend our study to show that this technique can be used to solve a nonlinear problem such as the Navier-Stokes equations. Numerical experiments confirm the interest of this discretisation.]]></description>
      <pubDate>Thu, 05 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1869</link>
      <guid>https://doi.org/10.46298/arima.1869</guid>
      <author>Nouri, F.Z.</author>
      <author>Amoura, K.</author>
      <dc:creator>Nouri, F.Z.</dc:creator>
      <dc:creator>Amoura, K.</dc:creator>
      <content:encoded><![CDATA[This study is a continuation of the work done in [7], [8] and [9], which is based on the work first derived by Glowinski et al. in [3] and [4], and also on Bernardi et al. [1] and [2]. Here, we propose an algorithm to solve a nonlinear problem arising from fluid mechanics. In [7], we studied the Stokes problem by adapting Glowinski's technique. This technique is useful as it decouples the pressure from the velocity during the resolution of the Stokes problem. In this paper, we extend our study to show that this technique can be used to solve a nonlinear problem such as the Navier-Stokes equations. Numerical experiments confirm the interest of this discretisation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Méthode d'agrégation des variables appliquée à la dynamique des populations</title>
      <description><![CDATA[We present the method of aggregation of variables in the case of ordinary differential equations. We apply the method to a prey-predator model in a multi-patch environment. In this model, prey can go to a refuge and therefore escape predation. The predator must return regularly to its burrow to feed its progeny. We study the effect of density-dependent migration on the global stability of the prey-predator system. We consider constant migration rates, but also density-dependent migration rates. We prove that the positive equilibrium is globally asymptotically stable in the first case, and that its stability changes in the second case. Considering density-dependent migration rates leads to the existence of a stable limit cycle via a Hopf bifurcation.]]></description>
      <pubDate>Tue, 03 Oct 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1852</link>
      <guid>https://doi.org/10.46298/arima.1852</guid>
      <author>Auger, Pierre</author>
      <author>El Abdllaoui, Abderrahim</author>
      <author>Mchich, Rachid</author>
      <dc:creator>Auger, Pierre</dc:creator>
      <dc:creator>El Abdllaoui, Abderrahim</dc:creator>
      <dc:creator>Mchich, Rachid</dc:creator>
      <content:encoded><![CDATA[We present the method of aggregation of variables in the case of ordinary differential equations. We apply the method to a prey-predator model in a multi-patch environment. In this model, prey can go to a refuge and therefore escape predation. The predator must return regularly to its burrow to feed its progeny. We study the effect of density-dependent migration on the global stability of the prey-predator system. We consider constant migration rates, but also density-dependent migration rates. We prove that the positive equilibrium is globally asymptotically stable in the first case, and that its stability changes in the second case. Considering density-dependent migration rates leads to the existence of a stable limit cycle via a Hopf bifurcation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Un modèle Darcy-Forchheimer pour un écoulement dans un milieu poreux fracturé</title>
      <description><![CDATA[We propose a numerical model for the flow of a single-phase, incompressible fluid in a porous medium with fractures. In this model, the flow obeys Forchheimer's law in the fracture and Darcy's law in the rock matrix.]]></description>
      <pubDate>Fri, 29 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1859</link>
      <guid>https://doi.org/10.46298/arima.1859</guid>
      <author>Frih, Najla</author>
      <author>Roberts, Jean E.</author>
      <author>Saada, Ali</author>
      <dc:creator>Frih, Najla</dc:creator>
      <dc:creator>Roberts, Jean E.</dc:creator>
      <dc:creator>Saada, Ali</dc:creator>
      <content:encoded><![CDATA[We propose a numerical model for the flow of a single-phase, incompressible fluid in a porous medium with fractures. In this model, the flow obeys Forchheimer's law in the fracture and Darcy's law in the rock matrix.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Système d'Information Intégré Adaptatif sous Web pour la gestion et la modélisation des ressources hydriques</title>
      <description><![CDATA[This work presents the Adaptative Integrated Information System under Web (AIISW) for the management of water resources. The AIISW is a natural extension of the IISW-WADI towards an adaptive and customised architecture that follows the user profile or the context of use, including a search and indexing engine, simulators, a GIS, meshing editors and graphic viewers. The AIISW makes it possible, on the one hand, to manage simulator data automatically and, on the other hand, to exploit and calibrate, possibly in real time, the results of these simulations by corroborating them with extracted or identified parameters.]]></description>
      <pubDate>Wed, 27 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1854</link>
      <guid>https://doi.org/10.46298/arima.1854</guid>
      <author>El Dabaghi, F.</author>
      <author>Bechchi, M.</author>
      <author>Henine, Hocine</author>
      <dc:creator>El Dabaghi, F.</dc:creator>
      <dc:creator>Bechchi, M.</dc:creator>
      <dc:creator>Henine, Hocine</dc:creator>
      <content:encoded><![CDATA[This work presents the Adaptative Integrated Information System under Web (AIISW) for the management of water resources. The AIISW is a natural extension of the IISW-WADI towards an adaptive and customised architecture that follows the user profile or the context of use, including a search and indexing engine, simulators, a GIS, meshing editors and graphic viewers. The AIISW makes it possible, on the one hand, to manage simulator data automatically and, on the other hand, to exploit and calibrate, possibly in real time, the results of these simulations by corroborating them with extracted or identified parameters.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Résolution d'un Problème de Cauchy en EEG</title>
      <description><![CDATA[This paper addresses the resolution of the Cauchy problem that appears in the localization of epileptic sources in Electro-Encephalo-Graphy (EEG). We treat in particular the problem of estimating the Cauchy data over the layer of the brain, knowing only the data on the scalp measured by EEG. As a method of resolution, we choose an alternating iterative algorithm first proposed by Kozlov, Mazjya and Fomin. In this paper, we study this method numerically in three dimensions. We also give some numerical examples.]]></description>
      <pubDate>Wed, 27 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1858</link>
      <guid>https://doi.org/10.46298/arima.1858</guid>
      <author>El-Badia, Abdellatif</author>
      <author>Farah, Maha</author>
      <author>Ha-Duong, Tuong</author>
      <author>Pavan, Vincent</author>
      <dc:creator>El-Badia, Abdellatif</dc:creator>
      <dc:creator>Farah, Maha</dc:creator>
      <dc:creator>Ha-Duong, Tuong</dc:creator>
      <dc:creator>Pavan, Vincent</dc:creator>
      <content:encoded><![CDATA[This paper addresses the resolution of the Cauchy problem that appears in the localization of epileptic sources in Electro-Encephalo-Graphy (EEG). We treat in particular the problem of estimating the Cauchy data over the layer of the brain, knowing only the data on the scalp measured by EEG. As a method of resolution, we choose an alternating iterative algorithm first proposed by Kozlov, Mazjya and Fomin. In this paper, we study this method numerically in three dimensions. We also give some numerical examples.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Schéma SRNHS Analyse et Application d'un schéma aux volumes finis dédié aux systèmes non homogènes</title>
      <description><![CDATA[This article is devoted to the analysis and improvement of a finite volume scheme recently proposed for a class of non-homogeneous systems. We consider those for which the corresponding Riemann problem admits a self-similar solution. Important examples of such problems are shallow water problems with irregular topography and two-phase flows. The stability analysis of the considered scheme, in the homogeneous scalar case, leads to a new formulation which has a natural extension to non-homogeneous systems. Comparative numerical experiments for the shallow water equations with a source term, and for a two-phase problem (the Ransom faucet), are presented to validate the scheme.]]></description>
      <pubDate>Tue, 26 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1870</link>
      <guid>https://doi.org/10.46298/arima.1870</guid>
      <author>Sahmim, Slah</author>
      <author>Benkhaldoun, Fayssal</author>
      <dc:creator>Sahmim, Slah</dc:creator>
      <dc:creator>Benkhaldoun, Fayssal</dc:creator>
      <content:encoded><![CDATA[This article is devoted to the analysis and improvement of a finite volume scheme recently proposed for a class of non-homogeneous systems. We consider those for which the corresponding Riemann problem admits a self-similar solution. Important examples of such problems are shallow water problems with irregular topography and two-phase flows. The stability analysis of the considered scheme, in the homogeneous scalar case, leads to a new formulation which has a natural extension to non-homogeneous systems. Comparative numerical experiments for the shallow water equations with a source term, and for a two-phase problem (the Ransom faucet), are presented to validate the scheme.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Mise en oeuvre de tests unitaires dans un contexte de programmation eXtrème répartie</title>
      <description><![CDATA[eXtreme Programming (XP) is a methodology based on principles and practices for developing software quickly. However, this approach requires the programmers to be co-located. Many research projects investigate how to extend XP to a distributed environment; the challenge is to carry out the XP approach without conflicting with the constraints of distribution. Our work follows this line of research. More precisely, we propose an assistance-based extension for supporting distributed unit testing, one of the key practices of the XP methodology.]]></description>
      <pubDate>Sat, 23 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1843</link>
      <guid>https://doi.org/10.46298/arima.1843</guid>
      <author>Lokpo, Ibrahim</author>
      <author>Babri, Michel</author>
      <author>Padiou, Gérard</author>
      <dc:creator>Lokpo, Ibrahim</dc:creator>
      <dc:creator>Babri, Michel</dc:creator>
      <dc:creator>Padiou, Gérard</dc:creator>
      <content:encoded><![CDATA[eXtreme Programming (XP) is a methodology based on principles and practices for developing software quickly. However, this approach requires the programmers to be co-located. Many research projects investigate how to extend XP to a distributed environment; the challenge is to carry out the XP approach without conflicting with the constraints of distribution. Our work follows this line of research. More precisely, we propose an assistance-based extension for supporting distributed unit testing, one of the key practices of the XP methodology.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A kinetic model for a two phases flow simulation</title>
      <description><![CDATA[This work deals with the modelling and simulation of the effect of air bubble injection in a water reservoir. The water phase is modelled by a Navier-Stokes equation into which we integrate the air bubble effect through a source term. This term depends on a probability density function described by a kinetic model. For the numerical aspects, we use a particle method for the kinetic equation and a mixed finite element method for the Navier-Stokes equations. Finally, we present some numerical results to illustrate the method.]]></description>
      <pubDate>Thu, 21 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1853</link>
      <guid>https://doi.org/10.46298/arima.1853</guid>
      <author>Abdelwahed, Mohamed</author>
      <author>Badé, Rabé</author>
      <author>Chaker, Hedia</author>
      <dc:creator>Abdelwahed, Mohamed</dc:creator>
      <dc:creator>Badé, Rabé</dc:creator>
      <dc:creator>Chaker, Hedia</dc:creator>
      <content:encoded><![CDATA[This work deals with the modelling and simulation of the effect of air bubble injection in a water reservoir. The water phase is modelled by a Navier-Stokes equation into which we integrate the air bubble effect through a source term. This term depends on a probability density function described by a kinetic model. For the numerical aspects, we use a particle method for the kinetic equation and a mixed finite element method for the Navier-Stokes equations. Finally, we present some numerical results to illustrate the method.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Régularisation de l'équation de Galbrun pour l'aéroacoustique en régime transitoire</title>
      <description><![CDATA[In this paper we are interested in the mathematical and numerical analysis of the time-dependent Galbrun equation in a rigid duct. This equation models acoustic propagation in the presence of a flow. We prove the well-posedness of the problem for a subsonic uniform flow. In addition, we propose a regularized variational formulation of the problem suitable for approximation by Lagrange finite elements.]]></description>
      <pubDate>Wed, 20 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1855</link>
      <guid>https://doi.org/10.46298/arima.1855</guid>
      <author>Bonnet-Bendhia, Anne Sophie</author>
      <author>Berriri, Kamel</author>
      <author>Joly, Patrick</author>
      <dc:creator>Bonnet-Bendhia, Anne Sophie</dc:creator>
      <dc:creator>Berriri, Kamel</dc:creator>
      <dc:creator>Joly, Patrick</dc:creator>
      <content:encoded><![CDATA[In this paper we are interested in the mathematical and numerical analysis of the time-dependent Galbrun equation in a rigid duct. This equation models acoustic propagation in the presence of a flow. We prove the well-posedness of the problem for a subsonic uniform flow. In addition, we propose a regularized variational formulation of the problem suitable for approximation by Lagrange finite elements.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Shape optimization for the Stokes equations using topological sensitivity analysis</title>
      <description><![CDATA[In this paper, we consider a shape optimization problem related to the Stokes equations. The proposed approach is based on a topological sensitivity analysis: it consists of an asymptotic expansion of a cost function with respect to the insertion of a small obstacle in the domain. The theoretical part of this work covers both the two- and three-dimensional cases. In the numerical part, we use this approach to optimize the shape of the tubes that connect the inlet to the outlets of a cavity, in order to maximize the outflow rate.]]></description>
      <pubDate>Sun, 17 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1865</link>
      <guid>https://doi.org/10.46298/arima.1865</guid>
      <author>Maatoug, Hassine</author>
      <dc:creator>Maatoug, Hassine</dc:creator>
      <content:encoded><![CDATA[In this paper, we consider a shape optimization problem related to the Stokes equations. The proposed approach is based on a topological sensitivity analysis: it consists of an asymptotic expansion of a cost function with respect to the insertion of a small obstacle in the domain. The theoretical part of this work covers both the two- and three-dimensional cases. In the numerical part, we use this approach to optimize the shape of the tubes that connect the inlet to the outlets of a cavity, in order to maximize the outflow rate.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Régularisation d'un problème d'obstacle bilatéral</title>
      <description><![CDATA[The main objective of this work is the resolution of a bilateral obstacle problem by means of the regularization method.]]></description>
      <pubDate>Fri, 08 Sep 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1873</link>
      <guid>https://doi.org/10.46298/arima.1873</guid>
      <author>Achachab, B.</author>
      <author>Zahi, J.</author>
      <author>Addou, A.</author>
      <dc:creator>Achachab, B.</dc:creator>
      <dc:creator>Zahi, J.</dc:creator>
      <dc:creator>Addou, A.</dc:creator>
      <content:encoded><![CDATA[The main objective of this work is the resolution of a bilateral obstacle problem by means of the regularization method.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A dynamical model of plant growth with full retroaction between organogenesis and photosynthesis</title>
      <description><![CDATA[This paper presents new mathematical developments in plant growth modelling and simulation. The GreenLab model is a functional-structural plant growth model that combines organogenesis (architecture) and photosynthesis (biomass production and allocation). The new improvements concern the feedback of photosynthesis on organogenesis. We present the influence of the available biomass on the number of metamers in a growth unit and on branching. The general theory is introduced and applied to simple trees, and some interesting behaviours are highlighted.]]></description>
      <pubDate>Mon, 21 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1844</link>
      <guid>https://doi.org/10.46298/arima.1844</guid>
      <author>Rostand-Mathieu, Amélie</author>
      <author>Cournède, Paul-Henry</author>
      <author>de Reffye, Philippe</author>
      <dc:creator>Rostand-Mathieu, Amélie</dc:creator>
      <dc:creator>Cournède, Paul-Henry</dc:creator>
      <dc:creator>de Reffye, Philippe</dc:creator>
      <content:encoded><![CDATA[This paper presents new mathematical developments in plant growth modelling and simulation. The GreenLab model is a functional-structural plant growth model that combines organogenesis (architecture) and photosynthesis (biomass production and allocation). The new improvements concern the feedback of photosynthesis on organogenesis. We present the influence of the available biomass on the number of metamers in a growth unit and on branching. The general theory is introduced and applied to simple trees, and some interesting behaviours are highlighted.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Discrétisation en temps par sous-domaine pour un problème d'advection en milieu poreux</title>
      <description><![CDATA[The aim of this paper is to present a method of time discretisation which allows the use of different time steps in different subdomains. The advection equation is discretised by an upwind scheme, and the time grids are linked so that the scheme remains conservative.]]></description>
      <pubDate>Sun, 20 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1872</link>
      <guid>https://doi.org/10.46298/arima.1872</guid>
      <author>Sboui, Amel</author>
      <author>Jérôme, Jaffré</author>
      <dc:creator>Sboui, Amel</dc:creator>
      <dc:creator>Jérôme, Jaffré</dc:creator>
      <content:encoded><![CDATA[The aim of this paper is to present a method of time discretisation which allows the use of different time steps in different subdomains. The advection equation is discretised by an upwind scheme, and the time grids are linked so that the scheme remains conservative.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Modélisation d'une population de mérous, effets du braconnage et de la migration</title>
      <description><![CDATA[The aim of our work is to model the dynamics of a grouper population in a fishing zone, taking into account natural growth, predation and migration simultaneously, and to study the impact of poaching on this population.]]></description>
      <pubDate>Sun, 20 Aug 2006 12:39:56 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1863</link>
      <guid>https://doi.org/10.46298/arima.1863</guid>
      <author>Ben Miled, Slimane</author>
      <author>Kebir, Amira</author>
      <dc:creator>Ben Miled, Slimane</dc:creator>
      <dc:creator>Kebir, Amira</dc:creator>
      <content:encoded><![CDATA[The aim of our work is to model the dynamics of a grouper population in a fishing zone, taking into account natural growth, predation and migration simultaneously, and to study the impact of poaching on this population.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Numerical study of some iterative solvers for acoustics in unbounded domains</title>
      <description><![CDATA[The aim of this paper is to study some iterative methods, based on the domain decomposition approach, for solving the acoustic harmonic wave propagation problem in an unbounded domain. We describe how our methodology applies to semi-infinite closed guides and to acoustic scattering problems. In both cases, we use well-known transparent boundary conditions, imposed on a fictitious boundary by means of a Fourier expansion. For numerical purposes, we propose an original algorithm based on a fixed-point technique applied to the problem set in the truncated domain. We interpret this method as a domain decomposition solver, which allows us to state convergence results. The improvement brought by this method is a consequence of the sparsity of the finite matrix system, which is decomposed only once.]]></description>
      <pubDate>Sat, 19 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1849</link>
      <guid>https://doi.org/10.46298/arima.1849</guid>
      <author>Gmati, Nabil</author>
      <author>Zrelli, Naouel</author>
      <dc:creator>Gmati, Nabil</dc:creator>
      <dc:creator>Zrelli, Naouel</dc:creator>
      <content:encoded><![CDATA[The aim of this paper is to study some iterative methods, based on the domain decomposition approach, for solving the acoustic harmonic wave propagation problem in an unbounded domain. We describe how our methodology applies to semi-infinite closed guides and to acoustic scattering problems. In both cases, we use well-known transparent boundary conditions, imposed on a fictitious boundary by means of a Fourier expansion. For numerical purposes, we propose an original algorithm based on a fixed-point technique applied to the problem set in the truncated domain. We interpret this method as a domain decomposition solver, which allows us to state convergence results. The improvement brought by this method is a consequence of the sparsity of the finite matrix system, which is decomposed only once.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>R-adaptation par l'estimateur d'erreur hiérarchique</title>
      <description><![CDATA[The aim of this work is to devise a method for determining the optimal position of the nodes in a finite element discretization of a boundary value problem. The node displacement procedure (also called R-adaptation) is a crucial step in a global mesh adaptation procedure. In the present approach, we determine the nodal positions by minimizing the approximation error, which is evaluated using a hierarchical estimator. A numerical test is presented.]]></description>
      <pubDate>Fri, 18 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1850</link>
      <guid>https://doi.org/10.46298/arima.1850</guid>
      <author>Alla, Abdellah</author>
      <author>Mghazli, Zoubida</author>
      <author>Fortin, Michel</author>
      <author>Hecht, Frédéric</author>
      <dc:creator>Alla, Abdellah</dc:creator>
      <dc:creator>Mghazli, Zoubida</dc:creator>
      <dc:creator>Fortin, Michel</dc:creator>
      <dc:creator>Hecht, Frédéric</dc:creator>
      <content:encoded><![CDATA[The aim of this work is to devise a method for determining the optimal position of the nodes in a finite element discretization of a boundary value problem. The node displacement procedure (also called R-adaptation) is a crucial step in a global mesh adaptation procedure. In the present approach, we determine the nodal positions by minimizing the approximation error, which is evaluated using a hierarchical estimator. A numerical test is presented.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Segmentation d'une image couleur par les critères d'information et la théorie des ensembles flous</title>
      <description><![CDATA[In this paper we present an unsupervised color image segmentation algorithm based on information criteria and fuzzy set theory. We propose this method to estimate the number of clusters in a color image and the associated optimal radius by minimizing the value of the proposed criteria. The experimental results demonstrate that this approach compresses the image into a small number of clusters without losing its informational content, reduces the number of parameters used in the segmentation process, and decreases the computational time. The color image segmentation system has been tested on some standard color images: "House", "Lena", "Monarch" and "Peppers".]]></description>
      <pubDate>Tue, 08 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1861</link>
      <guid>https://doi.org/10.46298/arima.1861</guid>
      <author>Hamzaouil, H.</author>
      <author>Elmatouat, A.</author>
      <author>Martin, P.</author>
      <dc:creator>Hamzaouil, H.</dc:creator>
      <dc:creator>Elmatouat, A.</dc:creator>
      <dc:creator>Martin, P.</dc:creator>
      <content:encoded><![CDATA[In this paper we present an unsupervised color image segmentation algorithm based on information criteria and fuzzy set theory. We propose this method to estimate the number of clusters in a color image and the associated optimal radius by minimizing the value of the proposed criteria. The experimental results demonstrate that this approach compresses the image into a small number of clusters without losing its informational content, reduces the number of parameters used in the segmentation process, and decreases the computational time. The color image segmentation system has been tested on some standard color images: "House", "Lena", "Monarch" and "Peppers".]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Calcul des courants de Foucault harmoniques dans des domaines non bornés par un algorithme de point fixe de Cauchy</title>
      <description><![CDATA[We investigate a computing procedure for the unbounded eddy current model formulated as a coupled finite element/integral representation problem. The exact, non-local artificial condition, enforced on the boundary of the truncated domain, is derived from the single/double layer potentials, and the critical point is how to handle it numerically. An iterative technique, based on the Cauchy fixed-point method, allows us to approximate the solution accurately. Its advantage is that, at each step, only a bounded eddy current problem with a local condition has to be solved, which most current computing codes designed for boundary value problems on bounded domains can carry out.]]></description>
      <pubDate>Sun, 06 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1862</link>
      <guid>https://doi.org/10.46298/arima.1862</guid>
      <author>Jelassi, Faten</author>
      <dc:creator>Jelassi, Faten</dc:creator>
      <content:encoded><![CDATA[We investigate a computing procedure for the unbounded eddy current model formulated as a coupled finite element/integral representation problem. The exact, non-local artificial condition, enforced on the boundary of the truncated domain, is derived from the single/double layer potentials, and the critical point is how to handle it numerically. An iterative technique, based on the Cauchy fixed-point method, allows us to approximate the solution accurately. Its advantage is that, at each step, only a bounded eddy current problem with a local condition has to be solved, which most current computing codes designed for boundary value problems on bounded domains can carry out.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Spectral Procedure with Diagonalization of Operators for 2D Navier-Stokes and Heat Equations in Cylindrical Geometry</title>
      <description><![CDATA[We present in this paper a spectral method for solving a problem governed by the Navier-Stokes and heat equations. The Fourier-Chebyshev technique in the azimuthal direction leads to a system of Helmholtz equations. The collocation-Chebyshev method in the radial direction is used for the simulation of these equations. The Crank-Nicolson scheme is employed to solve the Helmholtz systems obtained for wide ranges of parameters, and its efficiency is considerably improved by diagonalization of the resulting operators. The results are in very good agreement with the experimental data available in the literature.]]></description>
      <pubDate>Sun, 06 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1860</link>
      <guid>https://doi.org/10.46298/arima.1860</guid>
      <author>El Guarmah, Emahdi</author>
      <author>Cheddadi, Abdelkhalek</author>
      <author>Azaier, Mejdi</author>
      <dc:creator>El Guarmah, Emahdi</dc:creator>
      <dc:creator>Cheddadi, Abdelkhalek</dc:creator>
      <dc:creator>Azaier, Mejdi</dc:creator>
      <content:encoded><![CDATA[We present in this paper a spectral method for solving a problem governed by the Navier-Stokes and heat equations. The Fourier-Chebyshev technique in the azimuthal direction leads to a system of Helmholtz equations. The collocation-Chebyshev method in the radial direction is used for the simulation of these equations. The Crank-Nicolson scheme is employed to solve the Helmholtz systems obtained for wide ranges of parameters, and its efficiency is considerably improved by diagonalization of the resulting operators. The results are in very good agreement with the experimental data available in the literature.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Ondes dans les milieux poroélastiques - Analyse du modèle de Biot</title>
      <description><![CDATA[We are interested in the modeling of wave propagation in poroelastic media. We consider the biphasic Biot model. This paper is devoted to the mathematical analysis of this model: an existence and uniqueness result, an energy decay result, and the calculation of an analytical solution.]]></description>
      <pubDate>Wed, 02 Aug 2006 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1857</link>
      <guid>https://doi.org/10.46298/arima.1857</guid>
      <author>Ezziani, Abdelaâziz</author>
      <dc:creator>Ezziani, Abdelaâziz</dc:creator>
      <content:encoded><![CDATA[We are interested in the modeling of wave propagation in poroelastic media. We consider the biphasic Biot model. This paper is devoted to the mathematical analysis of this model: an existence and uniqueness result, an energy decay result, and the calculation of an analytical solution.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Extraction of Association Rules for the Prediction of Missing Values</title>
      <description><![CDATA[Missing values in databases have motivated much research in the field of KDD, especially concerning prediction. However, to the best of our knowledge, few approaches based on association rules have been proposed so far. In this paper, we show how to adapt the levelwise algorithm for mining association rules in order to mine frequent rules with a confidence equal to 1 from a relational table. In our approach, the consequent of an extracted rule is either an interval or a set of values, according to whether the domain of the predicted attribute is continuous or discrete.]]></description>
      <pubDate>Sat, 26 Nov 2005 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1834</link>
      <guid>https://doi.org/10.46298/arima.1834</guid>
      <author>Jami, Sylvie</author>
      <author>Jen, Tao-Yan</author>
      <author>Laurent, Dominique</author>
      <author>Loizou, Georges</author>
      <author>Sy, Oumar</author>
      <dc:creator>Jami, Sylvie</dc:creator>
      <dc:creator>Jen, Tao-Yan</dc:creator>
      <dc:creator>Laurent, Dominique</dc:creator>
      <dc:creator>Loizou, Georges</dc:creator>
      <dc:creator>Sy, Oumar</dc:creator>
      <content:encoded><![CDATA[Missing values in databases have motivated much research in the field of KDD, especially concerning prediction. However, to the best of our knowledge, few approaches based on association rules have been proposed so far. In this paper, we show how to adapt the levelwise algorithm for mining association rules in order to mine frequent rules with a confidence equal to 1 from a relational table. In our approach, the consequent of an extracted rule is either an interval or a set of values, according to whether the domain of the predicted attribute is continuous or discrete.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Dynamic Generation of Adaptative Hypermedia Document in an e-learning Environment</title>
      <description><![CDATA[In hypermedia systems, reinforcing the learner's interest requires the production, editing and diffusion of various types of teaching documents (courses, exercises, etc.). The aim of our work is the elaboration of a model of documents and teaching activities. This model describes the parameters and functionalities to integrate in pedagogical contexts that support different activities. Based on this model, we designed and implemented a dynamic adaptive hypermedia environment called MEDYNA, which helps us to draft documents for e-learning. The system takes into account the parameters and elements of the proposed model and allows the dynamic generation of content adapted to the learner. We used XML technology for the implementation of our system.]]></description>
      <pubDate>Fri, 25 Nov 2005 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1839</link>
      <guid>https://doi.org/10.46298/arima.1839</guid>
      <author>Behaz, Amel</author>
      <author>Djoudi, Mahieddine</author>
      <dc:creator>Behaz, Amel</dc:creator>
      <dc:creator>Djoudi, Mahieddine</dc:creator>
      <content:encoded><![CDATA[In hypermedia systems, reinforcing the learner's interest requires the production, editing and diffusion of various types of teaching documents (courses, exercises, etc.). The aim of our work is the elaboration of a model of documents and teaching activities. This model describes the parameters and functionalities to integrate in pedagogical contexts that support different activities. Based on this model, we designed and implemented a dynamic adaptive hypermedia environment called MEDYNA, which helps us to draft documents for e-learning. The system takes into account the parameters and elements of the proposed model and allows the dynamic generation of content adapted to the learner. We used XML technology for the implementation of our system.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analysis of Texture Applied to Renal Echography</title>
      <description><![CDATA[Renal echography remains the least expensive means for the exploration of the kidney. The system that we propose is a contribution to the automatic diagnosis of the kidney from ultrasound images. Texture analysis is a technique that has proved reliable in the characterization of human organs on ultrasound images. Our contribution aims at the characterization of echographic texture images of the kidney. This characterization is, at a first level, structural, to evaluate the presence (shape and position) of the various components of the kidney (clusters, medullary and cortical zones). The statistical analysis of texture constitutes our second approach, carrying out virtual punctures on the kidney in order to evaluate its state by quantifying the texture of the various characteristic areas of the kidney.]]></description>
      <pubDate>Sat, 22 Oct 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1841</link>
      <guid>https://doi.org/10.46298/arima.1841</guid>
      <author>Tahiri Alaoui, M.</author>
      <author>Farssi, S.M.</author>
      <author>Touzani, K.</author>
      <author>Bunel, P.</author>
      <dc:creator>Tahiri Alaoui, M.</dc:creator>
      <dc:creator>Farssi, S.M.</dc:creator>
      <dc:creator>Touzani, K.</dc:creator>
      <dc:creator>Bunel, P.</dc:creator>
      <content:encoded><![CDATA[Renal echography remains the least expensive means for the exploration of the kidney. The system that we propose is a contribution to the automatic diagnosis of the kidney from ultrasound images. Texture analysis is a technique that has proved reliable in the characterization of human organs on ultrasound images. Our contribution aims at the characterization of echographic texture images of the kidney. This characterization is, at a first level, structural, to evaluate the presence (shape and position) of the various components of the kidney (clusters, medullary and cortical zones). The statistical analysis of texture constitutes our second approach, carrying out virtual punctures on the kidney in order to evaluate its state by quantifying the texture of the various characteristic areas of the kidney.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Formal Approach to the Description and Manipulation of Structured Mathematical Objects</title>
      <description><![CDATA[We present in this paper a formal approach to the description, display and manipulation of structured mathematical objects, based on the formalism of attribute grammars. We are particularly interested in the problem of the two-dimensional and bidirectional display of certain mathematical expressions and formulas. Indeed, in addition to the two-dimensional character of certain mathematical symbols, such as the square root or the matrix, we also note the problem of displaying right-to-left Arabic text in a context designed for left-to-right display of Indo-European text, or a bidirectional display mixing the two modes. After a study of some solutions suggested in the literature, we show how the attribute grammar method adapts easily to these types of problems.]]></description>
      <pubDate>Fri, 14 Oct 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1838</link>
      <guid>https://doi.org/10.46298/arima.1838</guid>
      <author>Fotsing Talla, Bernard</author>
      <author>Kouamou, Georges-Edouard</author>
      <dc:creator>Fotsing Talla, Bernard</dc:creator>
      <dc:creator>Kouamou, Georges-Edouard</dc:creator>
      <content:encoded><![CDATA[We present in this paper a formal approach to the description, display, and manipulation of structured mathematical objects, based on the formalism of attribute grammars. We are particularly interested in the problem of the two-dimensional and bidirectional display of certain mathematical expressions and formulas. Indeed, in addition to the two-dimensional character of certain mathematical symbols, such as the square root or the matrix, there is also the problem of displaying right-to-left Arabic text in a context designed for the left-to-right display of Indo-European text, or of a bidirectional display mixing the two modes. After a study of some solutions proposed in the literature, we show how the attribute-grammar method adapts easily to these types of problems.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Coalition Formation for Improving the Utility of the Response: Application to Information Request</title>
      <description><![CDATA[This paper is concerned with the negotiation problem between agents with limited resources operating under time constraints in a dynamic environment. The agents in the society share the same goal: responding to client requests with the shortest possible delays. Each agent has a local technique for progressively improving the quality of the response to a request. The agents must begin a negotiation cycle to form the coalition that maximizes the utility of the response to a new request. The goal is to minimize the number of messages exchanged between agents during coalition formation, and thereby minimize the negotiation time.]]></description>
      <pubDate>Wed, 12 Oct 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1840</link>
      <guid>https://doi.org/10.46298/arima.1840</guid>
      <author>Belleili, Habiba</author>
      <author>Bouzid, Maroua</author>
      <author>Sellami, Mokhtar</author>
      <dc:creator>Belleili, Habiba</dc:creator>
      <dc:creator>Bouzid, Maroua</dc:creator>
      <dc:creator>Sellami, Mokhtar</dc:creator>
      <content:encoded><![CDATA[This paper is concerned with the negotiation problem between agents with limited resources operating under time constraints in a dynamic environment. The agents in the society share the same goal: responding to client requests with the shortest possible delays. Each agent has a local technique for progressively improving the quality of the response to a request. The agents must begin a negotiation cycle to form the coalition that maximizes the utility of the response to a new request. The goal is to minimize the number of messages exchanged between agents during coalition formation, and thereby minimize the negotiation time.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Time-lag Derivative Convergence for Fixed Point Iterations</title>
      <description><![CDATA[In an earlier study it was proven, and experimentally confirmed on a 2D Euler code, that fixed point iterations can be differentiated to yield first and second order derivatives of implicit functions that are defined by state equations. It was also asserted that the resulting approximations for reduced gradients and Hessians converge with the same R-factor as the underlying fixed point iteration. A closer look now reveals that these derivative values nevertheless lag behind the function values, in that the ratios of the corresponding errors grow towards infinity, proportionally to the iteration counter or its square. This rather subtle effect is caused mathematically by the occurrence of nontrivial Jordan blocks associated with degenerate eigenvalues. We elaborate the theory and report its confirmation through numerical experiments.]]></description>
      <pubDate>Wed, 28 Sep 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1837</link>
      <guid>https://doi.org/10.46298/arima.1837</guid>
      <author>Griewank, Andreas</author>
      <author>Kressner, Daniel</author>
      <dc:creator>Griewank, Andreas</dc:creator>
      <dc:creator>Kressner, Daniel</dc:creator>
      <content:encoded><![CDATA[In an earlier study it was proven, and experimentally confirmed on a 2D Euler code, that fixed point iterations can be differentiated to yield first and second order derivatives of implicit functions that are defined by state equations. It was also asserted that the resulting approximations for reduced gradients and Hessians converge with the same R-factor as the underlying fixed point iteration. A closer look now reveals that these derivative values nevertheless lag behind the function values, in that the ratios of the corresponding errors grow towards infinity, proportionally to the iteration counter or its square. This rather subtle effect is caused mathematically by the occurrence of nontrivial Jordan blocks associated with degenerate eigenvalues. We elaborate the theory and report its confirmation through numerical experiments.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Water Supply Optimization Problem for Plant Growth Based on GreenLab Model</title>
      <description><![CDATA[GreenLab is a structural-functional model of plant growth based on multidisciplinary knowledge. Its mathematical formalism allows dynamic simulation of plant growth and model analysis. A simplified soil water balance equation is introduced to illustrate the interactions and feedbacks between plant functioning and water resources. A water supply optimization problem is then described and solved: the sunflower fruit weight is optimized with respect to different water supply strategies in a theoretical case. An intuitive search method and genetic algorithms are used to solve this mixed-integer nonlinear problem. The optimization results are analyzed and reveal possible agronomic applications.]]></description>
      <pubDate>Sun, 11 Sep 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1836</link>
      <guid>https://doi.org/10.46298/arima.1836</guid>
      <author>Wu, Lin</author>
      <author>de Reffye, Philippe</author>
      <author>Hu, Bao-Gang</author>
      <author>Le Dimet, François-Xavier</author>
      <author>Cournède, Paul-Henry</author>
      <dc:creator>Wu, Lin</dc:creator>
      <dc:creator>de Reffye, Philippe</dc:creator>
      <dc:creator>Hu, Bao-Gang</dc:creator>
      <dc:creator>Le Dimet, François-Xavier</dc:creator>
      <dc:creator>Cournède, Paul-Henry</dc:creator>
      <content:encoded><![CDATA[GreenLab is a structural-functional model of plant growth based on multidisciplinary knowledge. Its mathematical formalism allows dynamic simulation of plant growth and model analysis. A simplified soil water balance equation is introduced to illustrate the interactions and feedbacks between plant functioning and water resources. A water supply optimization problem is then described and solved: the sunflower fruit weight is optimized with respect to different water supply strategies in a theoretical case. An intuitive search method and genetic algorithms are used to solve this mixed-integer nonlinear problem. The optimization results are analyzed and reveal possible agronomic applications.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A New Data Fusion Method for Hybrid MMC/RNA Learning: Application to Automatic Speech Recognition</title>
      <description><![CDATA[It is well known that traditional Hidden Markov Model (HMM) systems improve considerably when more training data or more parameters are used. However, using more data with hybrid Hidden Markov Model / Artificial Neural Network (HMM/ANN) models results in increased training times without improvements in performance. In this work we developed a new method based on automatically separating the data into several sets and training a Multi-Layer Perceptron (MLP) neural network on each set. During the recognition phase, the models are combined using several criteria, based on data fusion techniques, to produce the recognized word. We show in this paper that this method significantly improves recognition accuracy. The method was applied in an Arabic speech recognition system, which relies, on the one hand, on fuzzy clustering (an application of the fuzzy c-means algorithm) and, on the other hand, on a segmentation based on genetic algorithms.]]></description>
      <pubDate>Thu, 01 Sep 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1842</link>
      <guid>https://doi.org/10.46298/arima.1842</guid>
      <author>Lazli, Lilia</author>
      <author>Laskri, Mohamed Tayeb</author>
      <dc:creator>Lazli, Lilia</dc:creator>
      <dc:creator>Laskri, Mohamed Tayeb</dc:creator>
      <content:encoded><![CDATA[It is well known that traditional Hidden Markov Model (HMM) systems improve considerably when more training data or more parameters are used. However, using more data with hybrid Hidden Markov Model / Artificial Neural Network (HMM/ANN) models results in increased training times without improvements in performance. In this work we developed a new method based on automatically separating the data into several sets and training a Multi-Layer Perceptron (MLP) neural network on each set. During the recognition phase, the models are combined using several criteria, based on data fusion techniques, to produce the recognized word. We show in this paper that this method significantly improves recognition accuracy. The method was applied in an Arabic speech recognition system, which relies, on the one hand, on fuzzy clustering (an application of the fuzzy c-means algorithm) and, on the other hand, on a segmentation based on genetic algorithms.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Propriétés d'un circuit graphe minimum</title>
      <description><![CDATA[A circuit graph is a planar graph whose edges are oriented such that every finite face is a circuit. Such a graph is said to be minimum if the number of edges oriented in both directions is minimum. In this article we study the properties of such graphs. We prove that each finite face can be characterized by its orientation direction. We also present some results on the placement of edges oriented in both directions in a minimum circuit graph.]]></description>
      <pubDate>Fri, 26 Aug 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2554</link>
      <guid>https://doi.org/10.46298/arima.2554</guid>
      <author>Nzali, Jean-Pierre</author>
      <dc:creator>Nzali, Jean-Pierre</dc:creator>
      <content:encoded><![CDATA[A circuit graph is a planar graph whose edges are oriented such that every finite face is a circuit. Such a graph is said to be minimum if the number of edges oriented in both directions is minimum. In this article we study the properties of such graphs. We prove that each finite face can be characterized by its orientation direction. We also present some results on the placement of edges oriented in both directions in a minimum circuit graph.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An Algorithm for the Construction of a Formal Concept Lattice and for the Extraction of Minimal Generators</title>
      <description><![CDATA[The extremely large number of association rules that can be drawn from even reasonably sized datasets has spurred the development of more refined techniques for reducing the size of the reported rule sets. In this context, the battery of results provided by Formal Concept Analysis (FCA) has made it possible to define "irreducible" nuclei of the association rule set, better known as generic bases. However, a thorough overview of the literature shows that the dedicated algorithms all neglect an essential component: the order relation, or the extraction of the minimal generators. In this paper, we introduce the GenAll algorithm for building a formal concept lattice in which each formal concept is "decorated" with its minimal generators. The GenAll algorithm aims to extract generic bases of association rules. The main novelty of this algorithm is the use of a refinement process for computing immediate-successor lists, in order to simultaneously determine the set of formal concepts, their underlying partial order, and the set of minimal generators associated with each formal concept. Experiments showed that the GenAll algorithm is especially efficient for dense extraction contexts compared to the algorithm of Nourine et al.; its response times largely outperform theirs.]]></description>
      <pubDate>Tue, 09 Aug 2005 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1835</link>
      <guid>https://doi.org/10.46298/arima.1835</guid>
      <author>Ben Tekaya, Sondess</author>
      <author>Ben Yahia, Sadok</author>
      <author>Slimani, Yahia</author>
      <dc:creator>Ben Tekaya, Sondess</dc:creator>
      <dc:creator>Ben Yahia, Sadok</dc:creator>
      <dc:creator>Slimani, Yahia</dc:creator>
      <content:encoded><![CDATA[The extremely large number of association rules that can be drawn from even reasonably sized datasets has spurred the development of more refined techniques for reducing the size of the reported rule sets. In this context, the battery of results provided by Formal Concept Analysis (FCA) has made it possible to define "irreducible" nuclei of the association rule set, better known as generic bases. However, a thorough overview of the literature shows that the dedicated algorithms all neglect an essential component: the order relation, or the extraction of the minimal generators. In this paper, we introduce the GenAll algorithm for building a formal concept lattice in which each formal concept is "decorated" with its minimal generators. The GenAll algorithm aims to extract generic bases of association rules. The main novelty of this algorithm is the use of a refinement process for computing immediate-successor lists, in order to simultaneously determine the set of formal concepts, their underlying partial order, and the set of minimal generators associated with each formal concept. Experiments showed that the GenAll algorithm is especially efficient for dense extraction contexts compared to the algorithm of Nourine et al.; its response times largely outperform theirs.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Correction des erreurs orthographiques des systèmes de reconnaissance de l'écriture et de la parole arabe</title>
      <description><![CDATA[In this paper, we present two methods for correcting Arabic words generated by text and/or speech recognizers. These techniques operate as post-processors and are designed to be adaptable. They correct word rejection and substitution errors. The first method is closely tied to the dictionary and is called 'lexicon driven', while the other is very general, exploiting contextual information, and is called 'context driven'. Properties of the Arabic language are very useful in morpho-lexical analysis and were therefore heavily exploited in the development of the second method. Substitution errors are rewritten as rules for use by a rule-based system. Extensions to the other levels of language analysis are considered as perspectives.]]></description>
      <pubDate>Thu, 21 Oct 2004 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.2555</link>
      <guid>https://doi.org/10.46298/arima.2555</guid>
      <author>Sari, Toufik</author>
      <author>Sellami, Mokhtar</author>
      <dc:creator>Sari, Toufik</dc:creator>
      <dc:creator>Sellami, Mokhtar</dc:creator>
      <content:encoded><![CDATA[In this paper, we present two methods for correcting Arabic words generated by text and/or speech recognizers. These techniques operate as post-processors and are designed to be adaptable. They correct word rejection and substitution errors. The first method is closely tied to the dictionary and is called 'lexicon driven', while the other is very general, exploiting contextual information, and is called 'context driven'. Properties of the Arabic language are very useful in morpho-lexical analysis and were therefore heavily exploited in the development of the second method. Substitution errors are rewritten as rules for use by a rule-based system. Extensions to the other levels of language analysis are considered as perspectives.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An algorithm for computing the reversal degree of planar topological graphs</title>
      <description><![CDATA[One characteristic of planar topological graphs is the reversal degree. In this paper, we propose an improved algorithm for computing the reversal degree of a planar topological graph. This algorithm explores the various possible cases following a descending method. Practical tests, carried out on graphs with more than fifty internal vertices of odd degree, completed within reasonable computing time.]]></description>
      <pubDate>Tue, 26 Nov 2002 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1831</link>
      <guid>https://doi.org/10.46298/arima.1831</guid>
      <author>Nzali, Jean-Pierre</author>
      <author>Porgy, Koumpo Tanékou</author>
      <author>Tapamo, Hippolyte</author>
      <dc:creator>Nzali, Jean-Pierre</dc:creator>
      <dc:creator>Porgy, Koumpo Tanékou</dc:creator>
      <dc:creator>Tapamo, Hippolyte</dc:creator>
      <content:encoded><![CDATA[One characteristic of planar topological graphs is the reversal degree. In this paper, we propose an improved algorithm for computing the reversal degree of a planar topological graph. This algorithm explores the various possible cases following a descending method. Practical tests, carried out on graphs with more than fifty internal vertices of odd degree, completed within reasonable computing time.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Who_Is: Identification system of human faces</title>
      <description><![CDATA[Although human face recognition is a hard topic due to the many parameters involved (e.g. variability of position, lighting, hairstyle, presence of glasses, beard, moustache, wrinkles...), it is of increasing interest in numerous application fields (personal identification, video surveillance, man-machine interfaces...). In this work, we present WHO_IS, a system for person identification based on face recognition. A geometric model of the face is defined from a set of characteristic points extracted from the face image. Identification consists in computing the K nearest neighbors of the test individual using the City-Block distance. The system was tested on a sample of 100 people with a success rate of 86%.]]></description>
      <pubDate>Sun, 03 Nov 2002 23:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1830</link>
      <guid>https://doi.org/10.46298/arima.1830</guid>
      <author>Laskri, Mohamed Tayeb</author>
      <author>Chefrour, Djallel</author>
      <dc:creator>Laskri, Mohamed Tayeb</dc:creator>
      <dc:creator>Chefrour, Djallel</dc:creator>
      <content:encoded><![CDATA[Although human face recognition is a hard topic due to the many parameters involved (e.g. variability of position, lighting, hairstyle, presence of glasses, beard, moustache, wrinkles...), it is of increasing interest in numerous application fields (personal identification, video surveillance, man-machine interfaces...). In this work, we present WHO_IS, a system for person identification based on face recognition. A geometric model of the face is defined from a set of characteristic points extracted from the face image. Identification consists in computing the K nearest neighbors of the test individual using the City-Block distance. The system was tested on a sample of 100 people with a success rate of 86%.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Parameters identification: an application to the Richards equation</title>
      <description><![CDATA[Inverse modeling has become a standard technique for estimating hydrogeologic parameters. These parameters are usually inferred by minimizing the sum of the squared differences between the observed system state and the one calculated by a mathematical model. Since some hydrodynamic parameters in the Richards model cannot be measured, they have to be tuned with respect to the observations and the output of the model. Optimal parameters are found by minimizing a cost function, using an unconstrained minimization algorithm of the limited-memory quasi-Newton type. The inverse model allows computation of optimal scale parameters and of the model sensitivity.]]></description>
      <pubDate>Sun, 13 Oct 2002 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1833</link>
      <guid>https://doi.org/10.46298/arima.1833</guid>
      <author>Ngnepieba, Pierre</author>
      <author>Le Dimet, François Xavier</author>
      <author>Boukong, Alexis</author>
      <author>Nguetseng, Gabriel</author>
      <dc:creator>Ngnepieba, Pierre</dc:creator>
      <dc:creator>Le Dimet, François Xavier</dc:creator>
      <dc:creator>Boukong, Alexis</dc:creator>
      <dc:creator>Nguetseng, Gabriel</dc:creator>
      <content:encoded><![CDATA[Inverse modeling has become a standard technique for estimating hydrogeologic parameters. These parameters are usually inferred by minimizing the sum of the squared differences between the observed system state and the one calculated by a mathematical model. Since some hydrodynamic parameters in the Richards model cannot be measured, they have to be tuned with respect to the observations and the output of the model. Optimal parameters are found by minimizing a cost function, using an unconstrained minimization algorithm of the limited-memory quasi-Newton type. The inverse model allows computation of optimal scale parameters and of the model sensitivity.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Contribution to image restoration using a neural network model</title>
      <description><![CDATA[Reducing blur and noise is an important task in image processing, as these two types of degradation are undesirable components in some high-level treatments. In this paper, we propose an optimization method based on a neural network model for regularized image restoration, using a modified Hopfield neural network. We propose two algorithms based on this modified Hopfield network with two updating modes: an algorithm with sequential updates and an algorithm with n simultaneous updates. The quality of the results obtained attests to the efficiency of the proposed method when applied to several images degraded by blur and noise.]]></description>
      <pubDate>Sat, 21 Sep 2002 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1829</link>
      <guid>https://doi.org/10.46298/arima.1829</guid>
      <author>Achour, Karim</author>
      <author>Zenati, Nadia</author>
      <author>Djekoune, Oualid</author>
      <dc:creator>Achour, Karim</dc:creator>
      <dc:creator>Zenati, Nadia</dc:creator>
      <dc:creator>Djekoune, Oualid</dc:creator>
      <content:encoded><![CDATA[Reducing blur and noise is an important task in image processing, as these two types of degradation are undesirable components in some high-level treatments. In this paper, we propose an optimization method based on a neural network model for regularized image restoration, using a modified Hopfield neural network. We propose two algorithms based on this modified Hopfield network with two updating modes: an algorithm with sequential updates and an algorithm with n simultaneous updates. The quality of the results obtained attests to the efficiency of the proposed method when applied to several images degraded by blur and noise.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Interactive layout and handling of mathematical formulas in structured documents</title>
      <description><![CDATA[Tools dedicated to mathematics need to display formulas and to interact with them. In this paper, we present a summary of existing tools, then we describe FIGUE, an incremental two-dimensional layout engine developed at INRIA as a specialized toolbox for building customized editors and graphical user interfaces. Finally, we give an example of an interface using FIGUE to develop mathematical proofs on a computer.]]></description>
      <pubDate>Wed, 04 Sep 2002 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1832</link>
      <guid>https://doi.org/10.46298/arima.1832</guid>
      <author>Naciri, Hanane</author>
      <author>Rideau, Laurence</author>
      <dc:creator>Naciri, Hanane</dc:creator>
      <dc:creator>Rideau, Laurence</dc:creator>
      <content:encoded><![CDATA[Tools dedicated to mathematics need to display formulas and to interact with them. In this paper, we present a summary of existing tools, then we describe FIGUE, an incremental two-dimensional layout engine developed at INRIA as a specialized toolbox for building customized editors and graphical user interfaces. Finally, we give an example of an interface using FIGUE to develop mathematical proofs on a computer.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Classification by split and merge technique for the detection of vascular retinopathies</title>
      <description><![CDATA[In this article, we propose a new shape-analysis approach for the detection of vascular retinopathies. The method classifies the vessels and quantifies the change in their tortuosity in fluorescence retinal angiography images. The classification is based on a split-and-merge technique. Two classes are defined: artery/vein and arteriole/venule. In a sequence of images, the classified vessels are matched. The quantification consists in comparing the tortuosity of the corresponding classified vessels. Tortuosity is defined by the eccentricity value computed from the second-order 2D invariant moments. The method is applied to fluorescence retinal angiography images of diabetic and sickle-cell patients.]]></description>
      <pubDate>Tue, 01 Jan 2002 07:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/arima.1502</link>
      <guid>https://doi.org/10.46298/arima.1502</guid>
      <author>Assogba, Kokou</author>
      <author>Bouaoune, Yasmina</author>
      <author>Bunel, Philippe</author>
      <dc:creator>Assogba, Kokou</dc:creator>
      <dc:creator>Bouaoune, Yasmina</dc:creator>
      <dc:creator>Bunel, Philippe</dc:creator>
      <content:encoded><![CDATA[In this article, we propose a new shape-analysis approach for the detection of vascular retinopathies. The method classifies the vessels and quantifies the change in their tortuosity in fluorescence retinal angiography images. The classification is based on a split-and-merge technique. Two classes are defined: artery/vein and arteriole/venule. In a sequence of images, the classified vessels are matched. The quantification consists in comparing the tortuosity of the corresponding classified vessels. Tortuosity is defined by the eccentricity value computed from the second-order 2D invariant moments. The method is applied to fluorescence retinal angiography images of diabetic and sickle-cell patients.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
  </channel>
</rss>
