Browsing Department of Electronics, Computing & Maths by Submit Date
Now showing items 21-40 of 711

Severity Estimation of Plant Leaf Diseases Using Segmentation Method
Plants have played a significant role in the history of humankind, mostly as a source of nourishment for humans and animals. However, plants are typically vulnerable to different sorts of diseases, such as leaf blight, grey spot and rust, which cause great losses to farmers and ranchers. Therefore, an appropriate method to estimate the severity of diseases in plant leaves is needed to overcome the problem. This paper presents fusions of the Fuzzy C-Means segmentation method with four different colour spaces, namely RGB, HSV, L*a*b* and YCbCr, to estimate plant leaf disease severity. The performance of the proposed algorithms is recorded and compared with previous methods, namely K-Means and Otsu's thresholding. The best severity estimation algorithm and colour space is the combination of Fuzzy C-Means and the YCbCr colour space. The average performance of Fuzzy C-Means is 91.08%, while the average performance of YCbCr is 83.74%; their combination produces 96.81% accuracy. This algorithm is more effective than the others in terms of both segmentation performance and time complexity, averaging 34.75 s with a 0.2697 s standard deviation.
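The pipeline described above can be sketched in a few lines: cluster each pixel's chroma in YCbCr space with Fuzzy C-Means, then report severity as the fraction of pixels in the diseased cluster. This is only a minimal illustration, not the paper's algorithm: the BT.601 chroma coefficients are standard, but the "higher-Cr cluster is diseased" heuristic and all names below are my own assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-Means on X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers

def rgb_to_cb_cr(rgb):
    """BT.601 chroma channels of an (n, 3) RGB array with values in [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr = 0.500 * r - 0.419 * g - 0.081 * b
    return np.stack([cb, cr], axis=1)

def severity(rgb_pixels):
    """Percent of leaf pixels assigned to the diseased cluster."""
    U, centers = fuzzy_c_means(rgb_to_cb_cr(rgb_pixels))
    labels = U.argmax(axis=1)
    diseased = centers[:, 1].argmax()   # hypothetical: brown/red lesions have higher Cr
    return 100.0 * np.mean(labels == diseased)
```

On a synthetic "leaf" of 80% green and 20% brown pixels this returns a severity near 20%.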

Controlling Wolbachia transmission and invasion dynamics among Aedes aegypti populations via an impulsive control strategy
This work is devoted to analyzing an impulsive control synthesis to maintain the self-sustainability of Wolbachia among Aedes aegypti mosquitoes. The paper provides a fractional-order Wolbachia invasive model. Through fixed point theory, it derives existence and uniqueness results for the proposed model. A global Mittag-Leffler stability analysis is also performed via Linear Matrix Inequality theory and Lyapunov theory. As a result of this controller synthesis, the sustainability of Wolbachia is preserved and non-Wolbachia mosquitoes are eradicated. Finally, a numerical simulation is established for published data to analyze the nature of the proposed Wolbachia invasive model.

Application of the Caputo–Fabrizio operator to suppress Aedes aegypti mosquitoes via Wolbachia: an LMI approach
The aim of this paper is to establish stability results based on the Linear Matrix Inequality (LMI) approach for the addressed mathematical model using the Caputo–Fabrizio operator (CF operator). Firstly, we extend some existing results for the Caputo fractional derivative in the literature to the new fractional-order operator without a singular kernel introduced by Caputo and Fabrizio. Secondly, we create a mathematical model to increase Cytoplasmic Incompatibility (CI) in Aedes aegypti mosquitoes by releasing Wolbachia-infected mosquitoes. By this means, we can suppress the population density of A. aegypti mosquitoes and control the most common mosquito-borne diseases, such as dengue, Zika fever, chikungunya and yellow fever. Our main aim is to examine the behaviour of the Caputo–Fabrizio operator on the logistic growth equation of a population system, and then to prove the existence and uniqueness of the solution of the considered model using the CF operator. We also check alpha-exponential stability results for the system via the linear matrix inequality technique. Finally, a numerical example is provided to check the behaviour of the CF operator on the population system, incorporating real-world data available in the literature.
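For readers unfamiliar with the operator, the Caputo–Fabrizio derivative replaces the singular power-law kernel of the classical Caputo derivative with a non-singular exponential one. Its standard form (Caputo and Fabrizio, 2015) is

```latex
\[
{}^{CF}\!D^{\alpha} f(t)
  \;=\; \frac{M(\alpha)}{1-\alpha}
        \int_{a}^{t} f'(s)\,
        \exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right) ds,
  \qquad 0 < \alpha < 1,
\]
```

where $M(\alpha)$ is a normalization function with $M(0) = M(1) = 1$. (Whether the paper takes the lower limit $a = 0$ is not stated here; that is the common choice.)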

Pseudoprimality related to the generalized Lucas sequences
Some arithmetic properties and new pseudoprimality results concerning generalized Lucas sequences are presented. The findings are connected to the classical Fibonacci, Lucas, Pell, and Pell–Lucas pseudoprimality. In the process, new integer sequences are found and some conjectures are formulated.
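To make the classical notion concrete, consider the Fibonacci special case (P = 1, Q = -1 of the generalized Lucas recurrence U_n = P U_{n-1} - Q U_{n-2}, with D = P^2 - 4Q = 5): a Fibonacci pseudoprime is a composite n with n | F_{n - (5|n)}, where (5|n) is the Jacobi symbol. The sketch below tests this classical condition only, not the paper's new criteria:

```python
def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:                   # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                         # reciprocity swap
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def fib_mod(k, n):
    """F_k mod n by fast doubling."""
    def fd(k):
        if k == 0:
            return 0, 1                     # (F_0, F_1)
        a, b = fd(k >> 1)
        c = (a * ((2 * b - a) % n)) % n     # F_{2m} = F_m (2 F_{m+1} - F_m)
        d = (a * a + b * b) % n             # F_{2m+1} = F_m^2 + F_{m+1}^2
        return (d, (c + d) % n) if k & 1 else (c, d)
    return fd(k)[0]

def is_composite(n):
    return n > 3 and any(n % p == 0 for p in range(2, int(n ** 0.5) + 1))

def is_fibonacci_pseudoprime(n):
    """Odd composite n coprime to 10 satisfying n | F_{n - (5|n)}."""
    if n % 2 == 0 or n % 5 == 0 or not is_composite(n):
        return False
    return fib_mod(n - jacobi(5, n), n) == 0
```

The smallest example is 323 = 17 * 19; primes and ordinary composites such as 9 fail the test.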

Targeted ensemble machine classification approach for supporting IoT enabled skin disease detection
The fast development of the Internet of Things (IoT) is changing our lives in many areas, especially in the health domain. For example, remote disease diagnosis can be achieved more efficiently with advanced IoT technologies, which include not only hardware but also smart IoT data processing and learning algorithms, e.g. image-based disease classification. This research works in the specific area of skin condition classification and aims to provide an implementable solution for IoT-led remote skin disease diagnosis applications. The research output is threefold. The first part is a dynamic AI model configuration supporting an IoT-Fog-Cloud remote diagnosis architecture, with hardware examples. The second part is an evaluation survey of the performance of machine learning models for skin disease detection. The evaluation covers a variety of data processing methods and their aggregations, and takes account of both training-testing and cross-testing validations on all seven conditions together and on each individual condition. The HAM10000 dataset is chosen for the evaluation after suitability comparisons with other relevant datasets. The evaluation discusses earlier work on ANN, SVM and KNN models, but mainly focuses on six widely applied deep learning models: VGG16, Inception, Xception, MobileNet, ResNet50 and DenseNet161. The results show that, for each of the seven major skin conditions, one of the top four models performs better on that specific condition than the others. Based on this discovery, the last part proposes a novel classification approach, the Targeted Ensemble Machine Classify Model (TEMCM), which dynamically combines a suitable model in a two-phase detection process. The final evaluation shows that the proposed model can achieve better performance.
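The two-phase dispatch idea described in the abstract (a generic model proposes a condition, then the specialist registered for that condition re-predicts) can be sketched as follows. The deep models are replaced here by trivial nearest-centroid stand-ins; the class names and structure are illustrative assumptions, not TEMCM's actual implementation:

```python
import numpy as np

class NearestCentroid:
    """Stand-in for any base model exposing fit/predict on feature vectors."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

class TwoPhaseEnsemble:
    """Phase 1: a generic model proposes a condition.
    Phase 2: the specialist registered for that condition re-predicts."""
    def __init__(self, generic, specialists):
        self.generic = generic
        self.specialists = specialists      # dict: condition -> specialist model
    def predict(self, X):
        first = self.generic.predict(X)
        out = first.copy()
        for cls, model in self.specialists.items():
            mask = first == cls
            if mask.any():
                out[mask] = model.predict(X[mask])
        return out
```

In practice each specialist would be whichever deep model scored best on that condition in the evaluation survey, so the ensemble routes every sample to its strongest classifier.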

Dielectron production in proton-proton and proton-lead collisions at √sNN = 5.02 TeV
The first measurements of dielectron production at midrapidity (|η_e| < 0.8) in proton–proton and proton–lead collisions at √sNN = 5.02 TeV at the LHC are presented. The dielectron cross section is measured with the ALICE detector as a function of the invariant mass m_ee and the pair transverse momentum p_T,ee in the ranges m_ee < 3.5 GeV/c^2 and p_T,ee < 8 GeV/c, in both collision systems. In proton–proton collisions, the charm and beauty cross sections are determined at midrapidity from a fit to the data with two different event generators. This complements the existing dielectron measurements performed at √s = 7 and 13 TeV. The slope of the √s dependence of the three measurements is described by FONLL calculations. The dielectron cross section measured in proton–lead collisions is in agreement, within the current precision, with the expected dielectron production without any nuclear matter effects for e+e− pairs from open heavy-flavor hadron decays. For the first time at LHC energies, the dielectron production in proton–lead and proton–proton collisions is directly compared at the same √sNN via the dielectron nuclear modification factor RpPb. The measurements are compared to model calculations including cold nuclear matter effects, or additional sources of dielectrons from thermal radiation.

Production of ω mesons in pp collisions at √s = 7 TeV
The invariant differential cross section of inclusive ω(782) meson production at midrapidity (|y| < 0.5) in pp collisions at √s = 7 TeV was measured with the ALICE detector at the LHC over a transverse momentum range of 2 < pT < 17 GeV/c. The ω meson was reconstructed via its ω → π+π−π0 decay channel. The measured ω production cross section is compared to various calculations: PYTHIA 8.2 Monash 2013 describes the data, while PYTHIA 8.2 Tune 4C overestimates the data by about 50%. A recent NLO calculation, which includes a model describing the fragmentation of the whole vector-meson nonet, describes the data within uncertainties below 6 GeV/c, while it overestimates the data by up to 50% for higher pT. The ω/π0 ratio is in agreement with previous measurements at lower collision energies and the PYTHIA calculations. In addition, the measurement is compatible with transverse mass scaling within the measured pT range and the ratio is constant with C^(ω/π0) = 0.67±0.03 (stat) ±0.04 (sys) above a transverse momentum of 2.5 GeV/c.

Centrality dependence of J/ψ and ψ(2S) production and nuclear modification in p–Pb collisions at √sNN = 8.16 TeV
The inclusive production of the J/ψ and ψ(2S) charmonium states is studied as a function of centrality in p–Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 8.16 TeV at the LHC. The measurement is performed in the dimuon decay channel with the ALICE apparatus in the centre-of-mass rapidity intervals −4.46 < ycms < −2.96 (Pb-going direction) and 2.03 < ycms < 3.53 (p-going direction), down to zero transverse momentum (pT). The J/ψ and ψ(2S) production cross sections are evaluated as a function of the collision centrality, estimated through the energy deposited in the zero-degree calorimeter located in the Pb-going direction. The pT-differential J/ψ production cross section is measured at backward and forward rapidity for several centrality classes, together with the corresponding average ⟨pT⟩ and ⟨pT^2⟩ values. The nuclear effects affecting the production of both charmonium states are studied using the nuclear modification factor. In the p-going direction, a suppression of the production of both charmonium states is observed, which seems to increase from peripheral to central collisions. In the Pb-going direction, however, the centrality dependence is different for the two states: the nuclear modification factor of the J/ψ increases from below unity in peripheral collisions to above unity in central collisions, while for the ψ(2S) it stays below or consistent with unity for all centralities with no significant centrality dependence. The results are compared with measurements in p–Pb collisions at √sNN = 5.02 TeV and no significant dependence on the energy of the collision is observed. Finally, the results are compared with theoretical models implementing various nuclear matter effects.

Pion–kaon femtoscopy and the lifetime of the hadronic phase in Pb–Pb collisions at √sNN = 2.76 TeV
In this paper, the first femtoscopic analysis of pion–kaon correlations at the LHC is reported. The analysis was performed on the Pb–Pb collision data at √sNN = 2.76 TeV recorded with the ALICE detector. The non-identical particle correlations probe the spatio-temporal separation between sources of different particle species as well as the average source size of the emitting system. The sizes of the pion and kaon sources increase with centrality, and pions are emitted closer to the centre of the system and/or later than kaons. This is naturally expected in a system with strong radial flow and is qualitatively reproduced by hydrodynamic models. ALICE data on the pion–kaon emission asymmetry are consistent with a calculation from (3+1)-dimensional viscous hydrodynamics coupled to THERMINATOR 2, a statistical hadronisation, resonance propagation, and decay code, with an additional time delay between 1 and 2 fm/c for kaons. The delay can be interpreted as evidence for a significant hadronic rescattering phase in heavy-ion collisions at the LHC.

Transverse-momentum and event-shape dependence of D-meson flow harmonics in Pb–Pb collisions at √sNN = 5.02 TeV
The elliptic and triangular flow coefficients v2 and v3 of prompt D0, D+, and D*+ mesons were measured at midrapidity (|y| < 0.8) in Pb–Pb collisions at the centre-of-mass energy per nucleon pair of √sNN = 5.02 TeV with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decays in the transverse momentum interval 1 < p_T < 36 GeV/c in central (0–10%) and semi-central (30–50%) collisions. Compared to pions, protons, and J/ψ mesons, the average D-meson v_n harmonics are compatible within uncertainties with a mass hierarchy for p_T ≤ 3 GeV/c, and are similar to those of charged pions for higher p_T. The coupling of the charm quark to the light quarks in the underlying medium is further investigated with the application of the event-shape engineering (ESE) technique to the D-meson v2 and p_T-differential yields. The D-meson v2 is correlated with the average bulk elliptic flow in both central and semi-central collisions. Within the current precision, the ratios of per-event D-meson yields in the ESE-selected and unbiased samples are found to be compatible with unity. All the measurements are found to be reasonably well described by theoretical calculations including the effects of charm-quark transport and the recombination of charm quarks with light quarks in a hydrodynamically expanding medium.

Search for a common baryon source in high-multiplicity pp collisions at the LHC
We report on the measurement of the size of the particle-emitting source from two-baryon correlations with ALICE in high-multiplicity pp collisions at √s = 13 TeV. The source radius is studied with low relative momentum p–p, p̄–p̄, p–Λ, and p̄–Λ̄ pairs as a function of the pair transverse mass m_T, considering for the first time in a quantitative way the effect of strong resonance decays. After correcting for this effect, the radii extracted for pairs of different particle species agree. This indicates that protons, antiprotons, Λs, and Λ̄s originate from the same source. Within the measured m_T range (1.1–2.2) GeV/c^2 the invariant radius of this common source varies between 1.3 and 0.85 fm. These results provide a precise reference for studies of the strong hadron–hadron interactions and for the investigation of collective properties in small colliding systems.

Blessing of dimensionality at the edge and geometry of few-shot learning
In this paper we present theory and algorithms enabling classes of Artificial Intelligence (AI) systems to continuously and incrementally improve with a priori quantifiable guarantees – or, more specifically, to remove classification errors – over time. This is distinct from state-of-the-art machine learning, AI, and software approaches. The theory enables building few-shot AI correction algorithms and provides conditions justifying their successful application. Another feature of this approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples. At the time of classification, the computational complexity is bounded by a few inner product calculations. Moreover, the implementation is shown to be very scalable. This makes it viable for deployment in applications where computational power and memory are limited, such as embedded environments. It enables the possibility of fast online optimisation using improved training samples. The approach is based on concentration of measure effects and stochastic separation theorems, and is illustrated with an example on the identification of faulty processes in Computer Numerical Control (CNC) milling and with a case study on adaptive removal of false positives in an industrial video surveillance and analytics system.

Bringing the Blessing of Dimensionality to the Edge
In this work we present a novel approach and algorithms for equipping Artificial Intelligence systems with capabilities to become better over time. A distinctive feature of the approach is that, in the supervised setting, its computational complexity is sublinear in the number of training samples. This makes it particularly attractive in applications in which computational power and memory are limited. The approach is based on concentration of measure effects and stochastic separation theorems. The algorithms are illustrated with examples.
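The effect underlying both of the papers above – that in high dimension a single erroneous sample can, with overwhelming probability, be cut off from the rest of the data by one linear functional, so a "correction" costs one inner product – can be demonstrated directly. This is a toy illustration of the concentration/separation phenomenon under an i.i.d. Gaussian assumption, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 400, 1000
X = rng.standard_normal((n, d))      # i.i.d. "correctly handled" samples
x_err = rng.standard_normal(d)       # the one sample the base AI got wrong

# One-shot linear corrector: fire when <w, x> exceeds ||x_err||^2 / 2.
w = x_err
threshold = 0.5 * x_err @ x_err

fires_on_error = x_err @ w > threshold       # always True: ||x||^2 > ||x||^2 / 2
false_fires = np.mean(X @ w > threshold)     # fraction of good data also flagged
```

For a good sample x, the projection x @ w is roughly N(0, ||x_err||^2) while the threshold is ||x_err||^2 / 2 ≈ d/2, i.e. about √d/2 ≈ 10 standard deviations away, so essentially no good samples are flagged: the corrector fires only on the error, after "training" on a single example.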

Multiplicity dependence of J/ψ production at midrapidity in pp collisions at √s = 13 TeV
Measurements of the inclusive J/ψ yield as a function of charged-particle pseudorapidity density dNch/dη in pp collisions at √s = 13 TeV with ALICE at the LHC are reported. The J/ψ meson yield is measured at midrapidity (|y| < 0.9) in the dielectron channel, for events selected based on the charged-particle multiplicity at midrapidity (|η| < 1) and at forward rapidity (−3.7 < η < −1.7 and 2.8 < η < 5.1); both observables are normalized to their corresponding averages in minimum bias events. The increase of the normalized J/ψ yield with normalized dNch/dη is significantly stronger than linear and dependent on the transverse momentum. The data are compared to theoretical predictions, which describe the observed trends well, albeit not always quantitatively.

Control strategies of a gas turbine generator: a comparative study
Gas turbine generators are commonly used in the oil and gas industries due to their robustness and association with other operating systems in combined cycles. The electrical generators may become unstable under severe load fluctuations. For these reasons, maintaining stability is paramount to ensure continuous functionality. This paper deals with the modeling and simulation of a single-shaft gas turbine generator using the model developed by Rowen, incorporating different types of controllers, namely a Ziegler-Nichols PID controller, a Fuzzy Logic Controller (FLC), an FLC-PID, and finally a hybrid PID/FLC/FLC-PID controller. The study was undertaken in the Matlab/Simulink environment with data related to an in-service power plant owned by Sonatrach, Algiers, Algeria. The results show that the FLC-PID and hybrid tuned controllers provide the best time-domain performance.
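For reference, the Ziegler-Nichols closed-loop rules mentioned above set the PID gains from the ultimate gain Ku and oscillation period Tu as Kp = 0.6 Ku, Ti = Tu/2, Td = Tu/8. The sketch below applies them to a discrete PID driving a first-order lag, a crude stand-in for the Rowen turbine model; the plant constants and the (Ku, Tu) pair are made-up illustrative numbers (a pure first-order plant has no true ultimate gain), not values from the paper:

```python
def simulate(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000, K=1.0, tau=0.5):
    """Discrete PID regulating a first-order plant y' = (K*u - y)/tau."""
    y, integral, prev_err = 0.0, 0.0, setpoint   # prev_err avoids derivative kick
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (K * u - y) / tau              # forward-Euler plant update
    return y

# Ziegler-Nichols classic PID from hypothetical ultimate gain/period.
Ku, Tu = 8.0, 0.4
kp = 0.6 * Ku
ki = kp / (Tu / 2)    # Ki = Kp / Ti
kd = kp * (Tu / 8)    # Kd = Kp * Td
final = simulate(kp, ki, kd)
```

The integral term removes the steady-state offset, so after 20 s of simulated time the output settles at the setpoint; the fuzzy variants studied in the paper replace or reshape these fixed gains with rule-based ones.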

Unveiling the strong interaction among hadrons at the LHC
One of the key challenges for nuclear physics today is to understand from first principles the effective interaction between hadrons with different quark content. First successes have been achieved using techniques that solve the dynamics of quarks and gluons on discrete spacetime lattices. Experimentally, the dynamics of the strong interaction have been studied by scattering hadrons off each other. Such scattering experiments are difficult or impossible for unstable hadrons and so high-quality measurements exist only for hadrons containing up and down quarks. Here we demonstrate that measuring correlations in the momentum space between hadron pairs produced in ultrarelativistic proton–proton collisions at the CERN Large Hadron Collider (LHC) provides a precise method with which to obtain the missing information on the interaction dynamics between any pair of unstable hadrons. Specifically, we discuss the case of the interaction of baryons containing strange quarks (hyperons). We demonstrate how, using precision measurements of proton–omega baryon correlations, the effect of the strong interaction for this hadron–hadron pair can be studied with precision similar to, and compared with, predictions from lattice calculations. The large number of hyperons identified in proton–proton collisions at the LHC, together with accurate modelling of the small (approximately one femtometre) interparticle distance and exact predictions for the correlation functions, enables a detailed determination of the short-range part of the nucleon–hyperon interaction.

Energy-aware scheduling of streaming applications on edge devices in IoT based healthcare
The reliance on Network-on-Chip (NoC) based Multiprocessor Systems-on-Chip (MPSoCs) is proliferating in modern embedded systems to satisfy the higher performance requirements of multimedia streaming applications. Task-level coarse-grained software pipelining, also called retiming, when combined with Dynamic Voltage and Frequency Scaling (DVFS), has been shown to be an effective approach to significantly reducing the energy consumption of multiprocessor systems at the expense of additional delay. In this paper we develop a novel energy-aware scheduler for tasks with conditional constraints on Voltage Frequency Island (VFI) based heterogeneous NoC-MPSoCs, deploying retiming integrated with DVFS for real-time streaming applications. We propose a novel task-level retiming approach called RCTG and integrate it with a nonlinear-programming-based scheduling and voltage scaling approach referred to as ALIEBAD. The RCTG approach aims to minimize the latency caused by retiming without compromising energy efficiency. Compared to RDAG, the state-of-the-art approach designed for traditional Directed Acyclic Graph (DAG) based task graphs, RCTG significantly reduces the retiming latency because it only retimes tasks that free up wasted slack. To validate our claims we performed experiments using 12 real benchmarks; the results demonstrate that ALIEBAD outperforms the CATMESSearch and CATMESQuick task schedulers in terms of energy efficiency.

Prescribed $k$-symmetric curvature hypersurfaces in de Sitter space
We prove existence of compact spacelike hypersurfaces with prescribed $k$-curvature in de Sitter space, where the prescription function depends on both space and the tilt function.

Large-scale Data Integration Using Graph Probabilistic Dependencies (GPDs)
The diversity and proliferation of knowledge bases have made data integration one of the key challenges in the data science domain. The imperfect representation of entities, particularly in graphs, adds further challenges to data integration. Graph dependencies (GDs) have been investigated in existing studies for the integration and maintenance of data quality on graphs. However, most graphs contain plenty of duplicates with high diversity, so the existence of dependencies over these graphs becomes highly uncertain. In this paper, we propose graph probabilistic dependencies (GPDs), a novel class of dependencies for graphs, to address the issue of uncertainty over large-scale graphs. GPDs can provide a probabilistic explanation for dealing with uncertainty while discovering dependencies over graphs. Furthermore, a case study is provided to verify the correctness of the data integration process based on GPDs. Preliminary results demonstrate the effectiveness of GPDs in terms of reducing redundancies and inconsistencies over the benchmark datasets.

Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks
Artificial Intelligence (AI) models can learn from data and make decisions without any human intervention. However, the deployment of such models is challenging and risky because we do not know how the internal decision-making happens in these models. In particular, high-risk decisions such as medical diagnosis or automated navigation demand explainability and verification of the decision-making process in AI algorithms. This paper aims to explain AI models by discretizing the black-box process model of deep neural networks using partial differential equations (PDEs). The PDE-based deterministic models would minimize the time and computational cost of the decision-making process and reduce uncertainty, making the predictions more trustworthy.