• Severity Estimation of Plant Leaf Diseases Using Segmentation Method

      Entuni, Chyntia Jaby; Afendi Zulcaffle, Tengku Mohd; Kipli, Kuryati; Kurugollu, Fatih; Universiti Malaysia Sarawak, Malaysia; University of Derby (2020-11-09)
      Plants have played a significant role in the history of humankind, primarily as a source of nourishment for humans and animals. However, plants are typically vulnerable to various diseases such as leaf blight, grey spot and rust, which cause great losses to farmers and ranchers. An appropriate method to estimate the severity of plant leaf diseases is therefore needed. This paper presents fusions of the Fuzzy C-Means segmentation method with four different colour spaces, namely RGB, HSV, L*a*b and YCbCr, to estimate plant leaf disease severity. The performance of the proposed algorithms is recorded and compared with previous methods, namely K-Means and Otsu's thresholding. The best severity estimation algorithm is the combination of Fuzzy C-Means and the YCbCr colour space. The average performance of Fuzzy C-Means is 91.08%, while the average performance of YCbCr is 83.74%; their combination produces 96.81% accuracy. This algorithm is more effective than the other algorithms not only in segmentation performance but also in time complexity, which is 34.75 s on average with a 0.2697 s standard deviation.
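      The clustering step described above can be sketched as follows. This is a minimal NumPy implementation of Fuzzy C-Means over pixel features, with a hypothetical `severity` measure (fraction of leaf pixels assigned to the diseased cluster); the conversion of an image to YCbCr, and the identification of which cluster is "diseased", are assumptions left to the surrounding pipeline, not the paper's exact procedure.

```python
import numpy as np

def fuzzy_c_means(pixels, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal Fuzzy C-Means over an (N, d) array of pixel features
    (e.g. YCbCr triples). Returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(pixels)))
    u /= u.sum(axis=0)                                  # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m                                     # fuzzified memberships
        centers = um @ pixels / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(pixels[None] - centers[:, None], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                     # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

def severity(u, diseased_cluster):
    """Fraction of pixels whose highest membership is the diseased cluster."""
    return float(np.mean(np.argmax(u, axis=0) == diseased_cluster))
```

      In a full pipeline the severity percentage would be computed only over leaf (non-background) pixels; the sketch assumes that masking has already happened.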
    • Controlling Wolbachia transmission and invasion dynamics among aedes aegypti population via impulsive control strategy

      Dianavinnarasi, Joseph; Raja, Ramachandran; Alzabut, Jehad; Niezabitowski, Michał; Bagdasar, Ovidiu; Alagappa University, Karaikudi, India; Prince Sultan University, Riyadh, Saudi Arabia; Silesian University of Technology, Akademicka 16, Gliwice, Poland; University of Derby (MDPI AG, 2021-03-08)
      This work is devoted to analyzing an impulsive control synthesis to maintain the self-sustainability of Wolbachia among Aedes aegypti mosquitoes. The present paper provides a fractional-order Wolbachia invasive model. Through fixed point theory, this work derives the existence and uniqueness results for the proposed model. We also perform a global Mittag-Leffler stability analysis via Linear Matrix Inequality theory and Lyapunov theory. As a result of this controller synthesis, the sustainability of Wolbachia is preserved and non-Wolbachia mosquitoes are eradicated. Finally, a numerical simulation is established for the published data to analyze the nature of the proposed Wolbachia invasive model.
    • Application of caputo–fabrizio operator to suppress the aedes aegypti mosquitoes via wolbachia: an LMI approach

      Dianavinnarasi, J.; Raja, R.; Alzabut, J.; Cao, J.; Niezabitowski, M.; Bagdasar, O.; Alagappa University, Karaikudi, India; Prince Sultan University, Riyadh 12435, Saudi Arabia; Southeast University, Nanjing, China; Yonsei University, Seoul, South Korea; et al. (Elsevier BV, 2021-02-11)
      The aim of this paper is to establish stability results based on the Linear Matrix Inequality (LMI) approach for the addressed mathematical model using the Caputo–Fabrizio operator (CF operator). Firstly, we extend some existing results on the Caputo fractional derivative in the literature to the new fractional-order operator without singular kernel introduced by Caputo and Fabrizio. Secondly, we create a mathematical model to increase Cytoplasmic Incompatibility (CI) in Aedes aegypti mosquitoes by releasing Wolbachia-infected mosquitoes. By this, we can suppress the population density of A. aegypti mosquitoes and control the most common mosquito-borne diseases such as Dengue, Zika fever, Chikungunya and Yellow fever. Our main aim is to examine the behaviour of the Caputo–Fabrizio operator over the logistic growth equation of a population system, and then to prove the existence and uniqueness of the solution for the considered mathematical model using the CF operator. We also check the alpha-exponential stability results for the system via the linear matrix inequality technique. Finally, a numerical example is provided to check the behaviour of the CF operator on the population system by incorporating real-world data available in the literature.
    • Pseudoprimality related to the generalized Lucas sequences

      Andrica, Dorin; Bagdasar, Ovidiu; Babeş-Bolyai University, Cluj-Napoca, Romania; University of Derby (Elsevier BV, 2021-03-13)
      Some arithmetic properties and new pseudoprimality results concerning generalized Lucas sequences are presented. The findings are connected to the classical Fibonacci, Lucas, Pell, and Pell–Lucas pseudoprimality. During the process new integer sequences are found and some conjectures are formulated.
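      For readers unfamiliar with the objects involved, the generalized Lucas sequence satisfies U_0 = 0, U_1 = 1, U_{n+1} = a·U_n − b·U_{n−1}, and a classical pseudoprimality notion declares a composite n a Fibonacci pseudoprime when n divides U_{n−(5/n)}, with (5/n) the Jacobi symbol. The sketch below is illustrative background for these definitions, not the paper's new results.

```python
def lucas_U(a, b, n, mod=None):
    """U_0 = 0, U_1 = 1, U_{k+1} = a*U_k - b*U_{k-1}; returns U_n (optionally mod `mod`)."""
    u_prev, u_cur = 0, 1
    for _ in range(n):
        u_prev, u_cur = u_cur, a * u_cur - b * u_prev
        if mod is not None:
            u_prev, u_cur = u_prev % mod, u_cur % mod
    return u_prev

# Fibonacci numbers are the case a = 1, b = -1; Pell numbers are a = 2, b = -1.
# 323 = 17 * 19 is the smallest Fibonacci pseudoprime: (5/323) = -1,
# and 323 divides F_{323+1} = U_{324}(1, -1).
```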
    • Targeted ensemble machine classification approach for supporting IOT enabled skin disease detection

      Yu, Hong Qing; Reiff-Marganiec, Stephan; University of Derby (IEEE, 2021-03-26)
      The fast development of the Internet of Things (IoT) is changing our lives in many areas, especially in the health domain. For example, remote disease diagnosis can be achieved more efficiently with advanced IoT technologies, which include not only hardware but also smart IoT data processing and learning algorithms, e.g. image-based disease classification. In this paper, we work in the specific area of skin condition classification. This research aims to provide an implementable solution for IoT-led remote skin disease diagnosis applications. The research output falls into three parts. The first is a dynamic AI model configuration supporting an IoT-Fog-Cloud remote diagnosis architecture, with hardware examples. The second is an evaluation survey of the performance of machine learning models for skin disease detection. The evaluation covers a variety of data processing methods and their aggregations, and takes account of both training-testing and cross-testing validation on all seven conditions and on individual conditions. The HAM10000 dataset is chosen for the evaluation on the basis of suitability comparisons with other relevant datasets. In the evaluation, we discuss earlier work on ANN, SVM and KNN models, but the process mainly focuses on six widely applied Deep Learning models: VGG16, Inception, Xception, MobileNet, ResNet50 and DenseNet161. The results show that each of the top four models for the seven major skin conditions performs better on a specific condition than the others. Based on this finding, the last part proposes a novel classification approach, the Targeted Ensemble Machine Classify Model (TEMCM), which dynamically combines a suitable model in a two-phase detection process. The final evaluation shows that the proposed model achieves better performance.
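      The two-phase idea can be illustrated abstractly: a general multi-class model proposes a condition in phase one, then a per-condition specialist confirms or rejects the proposal in phase two. The `general_model`/`specialists` interfaces and the stub feature names below are hypothetical placeholders for illustration, not the paper's API or the HAM10000 feature set.

```python
def two_phase_predict(x, general_model, specialists, threshold=0.5):
    """Two-phase targeted ensemble (illustrative sketch).
    Phase 1: a general multi-class model proposes a condition label.
    Phase 2: the specialist model for that label returns a score in [0, 1];
    low scores are reported as 'uncertain' rather than accepted."""
    proposed = general_model(x)
    confidence = specialists[proposed](x)
    return proposed if confidence >= threshold else "uncertain"

# Toy usage with stub models (hypothetical features, not real dermatology rules):
general = lambda x: "mel" if x["asymmetry"] > 0.7 else "nv"
specialists = {"mel": lambda x: x["border_irregularity"],
               "nv":  lambda x: 1.0 - x["border_irregularity"]}
```

      Routing each proposal to the model that performed best on that condition is the design choice the evaluation motivates: no single deep model dominated across all seven conditions.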
    • Dielectron production in proton-proton and proton-lead collisions at √sNN = 5.02TeV

      Acharya, S.; Adamová, D.; Adler, A.; Adolfsson, J.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; et al. (American Physical Society (APS), 2020-11-25)
      The first measurements of dielectron production at midrapidity (|η_e| < 0.8) in proton–proton and proton–lead collisions at √sNN = 5.02 TeV at the LHC are presented. The dielectron cross section is measured with the ALICE detector as a function of the invariant mass m_ee and the pair transverse momentum p_T,ee in the ranges m_ee < 3.5 GeV/c^2 and p_T,ee < 8 GeV/c, in both collision systems. In proton–proton collisions, the charm and beauty cross sections are determined at midrapidity from a fit to the data with two different event generators. This complements the existing dielectron measurements performed at √s = 7 and 13 TeV. The slope of the √s dependence of the three measurements is described by FONLL calculations. The dielectron cross section measured in proton–lead collisions is in agreement, within the current precision, with the expected dielectron production without any nuclear matter effects for e+e− pairs from open heavy-flavor hadron decays. For the first time at LHC energies, the dielectron production in proton–lead and proton–proton collisions are directly compared at the same √sNN via the dielectron nuclear modification factor RpPb. The measurements are compared to model calculations including cold nuclear matter effects, or additional sources of dielectrons from thermal radiation.
    • Production of ω mesons in pp collisions at √s =7 TeV

      Acharya, S.; Adamová, D.; Adler, A.; Adolfsson, J.; Aggarwal, M. M.; Agha, S.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; et al. (Springer Science and Business Media LLC, 2020-12-07)
      The invariant differential cross section of inclusive ω(782) meson production at midrapidity (|y| < 0.5) in pp collisions at √s = 7 TeV was measured with the ALICE detector at the LHC over a transverse momentum range of 2 < pT < 17 GeV/c. The ω meson was reconstructed via its ω → π+π−π0 decay channel. The measured ω production cross section is compared to various calculations: PYTHIA 8.2 Monash 2013 describes the data, while PYTHIA 8.2 Tune 4C overestimates the data by about 50%. A recent NLO calculation, which includes a model describing the fragmentation of the whole vector-meson nonet, describes the data within uncertainties below 6 GeV/c, while it overestimates the data by up to 50% for higher pT. The ω/π0 ratio is in agreement with previous measurements at lower collision energies and the PYTHIA calculations. In addition, the measurement is compatible with transverse mass scaling within the measured pT range and the ratio is constant with C^(ω/π0) = 0.67±0.03 (stat) ±0.04 (sys) above a transverse momentum of 2.5 GeV/c.
    • Centrality dependence of J/ψ and ψ(2S) production and nuclear modification in p-Pb collisions at √sNN = 8.16 TeV

      Acharya, S.; Adamová, D.; Adler, A.; Adolfsson, J.; Aggarwal, M. M.; Agha, S.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; et al. (Springer Science and Business Media LLC, 2021-02-01)
      The inclusive production of the J/ψ and ψ(2S) charmonium states is studied as a function of centrality in p-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 8.16 TeV at the LHC. The measurement is performed in the dimuon decay channel with the ALICE apparatus in the centre-of-mass rapidity intervals −4.46 < ycms < −2.96 (Pb-going direction) and 2.03 < ycms < 3.53 (p-going direction), down to zero transverse momentum (pT). The J/ψ and ψ(2S) production cross sections are evaluated as a function of the collision centrality, estimated through the energy deposited in the zero degree calorimeter located in the Pb-going direction. The pT-differential J/ψ production cross section is measured at backward and forward rapidity for several centrality classes, together with the corresponding average ⟨pT⟩ and ⟨pT^2⟩ values. The nuclear effects affecting the production of both charmonium states are studied using the nuclear modification factor. In the p-going direction, a suppression of the production of both charmonium states is observed, which seems to increase from peripheral to central collisions. In the Pb-going direction, however, the centrality dependence is different for the two states: the nuclear modification factor of the J/ψ increases from below unity in peripheral collisions to above unity in central collisions, while for the ψ(2S) it stays below or consistent with unity for all centralities with no significant centrality dependence. The results are compared with measurements in p-Pb collisions at √sNN = 5.02 TeV and no significant dependence on the energy of the collision is observed. Finally, the results are compared with theoretical models implementing various nuclear matter effects.
    • Pion–kaon femtoscopy and the lifetime of the hadronic phase in Pb−Pb collisions at √sNN = 2.76 TeV

      Acharya, S.; Adamová, D.; Adler, A.; Adolfsson, J.; Aggarwal, M.M.; Agha, S.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; et al. (Elsevier BV, 2020-12-17)
      In this paper, the first femtoscopic analysis of pion–kaon correlations at the LHC is reported. The analysis was performed on the Pb–Pb collision data at √sNN = 2.76 TeV recorded with the ALICE detector. The non-identical particle correlations probe the spatio-temporal separation between sources of different particle species as well as the average source size of the emitting system. The sizes of the pion and kaon sources increase with centrality, and pions are emitted closer to the centre of the system and/or later than kaons. This is naturally expected in a system with strong radial flow and is qualitatively reproduced by hydrodynamic models. ALICE data on pion–kaon emission asymmetry are consistent with a calculation using (3+1)-dimensional viscous hydrodynamics coupled to THERMINATOR 2, a statistical hadronisation, resonance propagation, and decay code, with an additional time delay of between 1 and 2 fm/c for kaons. The delay can be interpreted as evidence for a significant hadronic rescattering phase in heavy-ion collisions at the LHC.
    • Transverse-momentum and event-shape dependence of D-meson flow harmonics in Pb–Pb collisions at √sNN = 5.02 TeV

      Acharya, S.; Adamová, D.; Adler, A.; Adolfsson, J.; Aggarwal, M.M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; et al. (Elsevier BV, 2020-12-29)
      The elliptic and triangular flow coefficients v2 and v3 of prompt D0, D+, and D*+ mesons were measured at midrapidity (|y|<0.8) in Pb–Pb collisions at the centre-of-mass energy per nucleon pair of √sNN = 5.02 TeV with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decays in the transverse momentum interval 1 <p_T < 36 GeV/c in central (0–10%) and semi-central (30–50%) collisions. Compared to pions, protons, and J/ψ mesons, the average D-meson v_n harmonics are compatible within uncertainties with a mass hierarchy for p_T ≤ 3 GeV/c, and are similar to those of charged pions for higher p_T. The coupling of the charm quark to the light quarks in the underlying medium is further investigated with the application of the event-shape engineering (ESE) technique to the D-meson v2 and p_T-differential yields. The D-meson v2 is correlated with average bulk elliptic flow in both central and semi-central collisions. Within the current precision, the ratios of per-event D-meson yields in the ESE-selected and unbiased samples are found to be compatible with unity. All the measurements are found to be reasonably well described by theoretical calculations including the effects of charm-quark transport and the recombination of charm quarks with light quarks in a hydrodynamically expanding medium.
    • Search for a common baryon source in high-multiplicity pp collisions at the LHC

      Acharya, S.; Adamová, D.; Adler, A.; Adolfsson, J.; Aggarwal, M.M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; et al. (Elsevier BV, 2020-10-08)
      We report on the measurement of the size of the particle-emitting source from two-baryon correlations with ALICE in high-multiplicity pp collisions at √s = 13 TeV. The source radius is studied with low relative momentum p–p, p̄–p̄, p–Λ, and p̄–Λ̄ pairs as a function of the pair transverse mass m_T, considering for the first time in a quantitative way the effect of strong resonance decays. After correcting for this effect, the radii extracted for pairs of different particle species agree. This indicates that protons, antiprotons, Λ, and Λ̄ baryons originate from the same source. Within the measured m_T range (1.1–2.2) GeV/c^2 the invariant radius of this common source varies between 1.3 and 0.85 fm. These results provide a precise reference for studies of the strong hadron–hadron interactions and for the investigation of collective properties in small colliding systems.
    • Blessing of dimensionality at the edge and geometry of few-shot learning

      Tyukin, Ivan Y.; Gorban, Alexander N.; McEwan, Alistair A.; Meshkinfamfard, Sepehr; Tang, Lixin; University of Leicester; Lobachevsky University, Russia; St Petersburg State Electrotechnical University, Russia; University College London; Northeastern University, China; et al. (Elsevier BV, 2021-02-03)
      In this paper we present theory and algorithms enabling classes of Artificial Intelligence (AI) systems to continuously and incrementally improve with a priori quantifiable guarantees – or more specifically, remove classification errors – over time. This is distinct from state-of-the-art machine learning, AI, and software approaches. The theory enables building few-shot AI correction algorithms and provides conditions justifying their successful application. Another feature of this approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples. At the time of classification, the computational complexity is bounded by a few inner product calculations. Moreover, the implementation is shown to be very scalable. This makes it viable for deployment in applications where computational power and memory are limited, such as embedded environments. It enables fast on-line optimisation using improved training samples. The approach is based on concentration of measure effects and stochastic separation theorems, and is illustrated with an example on the identification of faulty processes in Computer Numerical Control (CNC) milling and with a case study on the adaptive removal of false positives in an industrial video surveillance and analytics system.
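      The core mechanism, separating a misclassified point from the bulk of the data with a single linear functional, can be sketched as follows. This is a generic Fisher-style separator under a Gaussian-cloud assumption, illustrating why stochastic separation works in high dimension; it is not the authors' exact construction, and the `margin` parameter is a hypothetical knob.

```python
import numpy as np

def fisher_corrector(X, x_err, margin=0.9):
    """Build a linear functional that flags points near the error point x_err.
    In high dimension, stochastic separation theorems guarantee that a single
    such functional separates x_err from almost all of the cloud X, so a
    one-shot 'correction' costs one inner product per classified point."""
    mu = X.mean(axis=0)
    cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])   # regularised covariance
    w = np.linalg.solve(cov, x_err - mu)            # Fisher discriminant direction
    theta = margin * (w @ (x_err - mu))             # threshold just below the error
    return lambda z: (z - mu) @ w >= theta
```

      At inference time the corrector is just one dot product and a comparison, which is what makes the approach attractive for edge devices.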
    • Bringing the Blessing of Dimensionality to the Edge

      Tyukin, Ivan Y.; Gorban, Alexander N; McEwan, Alistair; Meshkinfamfard, Sepehr; University of Leicester; Lobachevsky University, Russia (IEEE, 2019-09-30)
      In this work we present a novel approach and algorithms for equipping Artificial Intelligence systems with capabilities to become better over time. A distinctive feature of the approach is that, in the supervised setting, its computational complexity is sub-linear in the number of training samples. This makes it particularly attractive in applications in which computational power and memory are limited. The approach is based on concentration of measure effects and stochastic separation theorems. The algorithms are illustrated with examples.
    • Multiplicity dependence of J/ψ production at midrapidity in pp collisions at √s = 13 TeV

      Acharya, S.; Adamová, D.; Adler, A.; Adolfsson, J.; Aggarwal, M.M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; et al. (Elsevier BV, 2020-09-03)
      Measurements of the inclusive J/ψ yield as a function of charged-particle pseudorapidity density dNch/dη in pp collisions at √s = 13 TeV with ALICE at the LHC are reported. The J/ψ meson yield is measured at midrapidity (|y|<0.9) in the dielectron channel, for events selected based on the charged-particle multiplicity at midrapidity (|η|<1) and at forward rapidity ( -3.7 < η < -1.7 and 2.8 < η < 5.1); both observables are normalized to their corresponding averages in minimum bias events. The increase of the normalized J/ψ yield with normalized dNch/dη is significantly stronger than linear and dependent on the transverse momentum. The data are compared to theoretical predictions, which describe the observed trends well, albeit not always quantitatively.
    • Control strategies of a gas turbine generator: a comparative study

      Abbassen, Lyes; Zaouia, Mustapha; Benamrouche, Nacereddine; Bousbaine, Amar; University Mouloud Mammeri of Tizi Ouzou,Tizi Ouzou, 15000 Algeria; University of Derby; SONATRACH Direction Centrale Recherche et Développement DC- R&D, Boumerdes, Algeria (Indonesian Journal of Electrical Engineering and Informatics (IJEEI), 2020-12-04)
      Gas turbine generators are commonly used in the oil and gas industries due to their robustness and their association with other operating systems in combined cycles. The electrical generators may become unstable under severe load fluctuations, so maintaining stability is paramount to ensure continuous functionality. This paper deals with the modeling and simulation of a single-shaft gas turbine generator using the model developed by Rowen, incorporating different types of controllers: a Ziegler–Nichols PID controller, a Fuzzy Logic Controller (FLC), an FLC-PID controller, and finally a hybrid PID/FLC/FLC-PID controller. The study was undertaken in the Matlab/Simulink environment with data from an in-service power plant owned by Sonatrach, Algiers, Algeria. The results show that the FLC-PID and hybrid tuned controllers provide the best time-domain performance.
    • Unveiling the strong interaction among hadrons at the LHC

      Barnby, Lee; ALICE Collaboration; STFC Daresbury Laboratory; University of Derby (Springer Science and Business Media LLC, 2020-12-09)
      One of the key challenges for nuclear physics today is to understand from first principles the effective interaction between hadrons with different quark content. First successes have been achieved using techniques that solve the dynamics of quarks and gluons on discrete space-time lattices. Experimentally, the dynamics of the strong interaction have been studied by scattering hadrons off each other. Such scattering experiments are difficult or impossible for unstable hadrons and so high-quality measurements exist only for hadrons containing up and down quarks. Here we demonstrate that measuring correlations in the momentum space between hadron pairs produced in ultrarelativistic proton–proton collisions at the CERN Large Hadron Collider (LHC) provides a precise method with which to obtain the missing information on the interaction dynamics between any pair of unstable hadrons. Specifically, we discuss the case of the interaction of baryons containing strange quarks (hyperons). We demonstrate how, using precision measurements of proton–omega baryon correlations, the effect of the strong interaction for this hadron–hadron pair can be studied with precision similar to, and compared with, predictions from lattice calculations. The large number of hyperons identified in proton–proton collisions at the LHC, together with accurate modelling of the small (approximately one femtometre) inter-particle distance and exact predictions for the correlation functions, enables a detailed determination of the short-range part of the nucleon-hyperon interaction.
    • Energy-aware scheduling of streaming applications on edge-devices in IoT based healthcare

      Tariq, Umair Ullah; Ali, Haider; Liu, Lu; Hardy, James; Kazim, Muhammad; Ahmed, Waqar; Central Queensland University, Sydney, Australia.; University of Derby; University of Leicester; De Montfort University; et al. (Institute of Electrical and Electronics Engineers (IEEE), 2021-02-02)
      The reliance on Network-on-Chip (NoC) based Multiprocessor Systems-on-Chips (MPSoCs) is proliferating in modern embedded systems to satisfy the higher performance requirements of multimedia streaming applications. Task-level coarse-grained software pipelining, also called re-timing, combined with Dynamic Voltage and Frequency Scaling (DVFS), has been shown to be an effective approach for significantly reducing the energy consumption of multiprocessor systems at the expense of additional delay. In this paper we develop a novel energy-aware scheduler for tasks with conditional constraints on Voltage Frequency Island (VFI) based heterogeneous NoC-MPSoCs, deploying re-timing integrated with DVFS for real-time streaming applications. We propose a novel task-level re-timing approach called R-CTG and integrate it with a nonlinear-programming-based scheduling and voltage scaling approach referred to as ALI-EBAD. The R-CTG approach aims to minimize the latency caused by re-timing without compromising energy-efficiency. Compared to R-DAG, the state-of-the-art approach designed for traditional Directed Acyclic Graph (DAG) based task graphs, R-CTG significantly reduces the re-timing latency because it only re-times tasks that free up wasted slack. To validate our claims we performed experiments using 12 real benchmarks; the results demonstrate that ALI-EBAD outperforms the CA-TMES-Search and CA-TMES-Quick task schedulers in terms of energy-efficiency.
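      The energy lever exploited by DVFS can be illustrated with the usual dynamic-power model E ≈ C_eff · V² · f · t: stretching a task into available slack at a lower voltage/frequency operating point reduces energy, because the voltage term enters quadratically. The operating points below are hypothetical, not those of the paper's platform.

```python
def dynamic_energy(c_eff, voltage, freq, cycles):
    """Dynamic energy of a task: E = C_eff * V^2 * f * t with t = cycles / f,
    so E = C_eff * V^2 * cycles -- for a fixed cycle count, frequency cancels
    and the supply voltage dominates."""
    exec_time = cycles / freq          # seconds
    return c_eff * voltage**2 * freq * exec_time

# Hypothetical operating points: halving f usually permits a lower V,
# trading a doubled execution time (filled by re-timing slack) for energy.
e_fast = dynamic_energy(c_eff=1e-9, voltage=1.2, freq=2.0e9, cycles=1e9)  # 0.5 s
e_slow = dynamic_energy(c_eff=1e-9, voltage=0.9, freq=1.0e9, cycles=1e9)  # 1.0 s
```

      This is why re-timing matters: by pipelining across iterations it creates the slack that lets the scheduler pick the slower, lower-voltage point without missing deadlines.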
    • Prescribed k-symmetric curvature hypersurfaces in de Sitter space

      Ballesteros-Chávez, Daniel; Klingenberg, Wilhelm; Lambert, Ben; Silesian University of Technology, Kaszubska; University of Durham; University of Derby (Cambridge University Press, 2020-11-26)
      We prove existence of compact spacelike hypersurfaces with prescribed k-curvature in de Sitter space, where the prescription function depends on both space and the tilt function.
    • Large-scale Data Integration Using Graph Probabilistic Dependencies (GPDs)

      Zada, Muhammad Sadiq Hassan; Yuan, Bo; Anjum, Ashiq; Azad, Muhammad Ajmal; Khan, Wajahat Ali; Reiff-Marganiec, Stephan; University of Derby; University of Leicester (IEEE, 2020-12-28)
      The diversity and proliferation of knowledge bases have made data integration one of the key challenges in the data science domain. The imperfect representation of entities, particularly in graphs, adds further challenges. Graph dependencies (GDs) have been investigated in existing studies for the integration and maintenance of data quality on graphs. However, the majority of graphs contain plenty of duplicates with high diversity, so the existence of dependencies over these graphs becomes highly uncertain. In this paper, we propose graph probabilistic dependencies (GPDs), a novel class of dependencies for graphs, to address this uncertainty over large-scale graphs. GPDs provide a probabilistic explanation for dealing with uncertainty while discovering dependencies over graphs. Furthermore, a case study is provided to verify the correctness of the data integration process based on GPDs. Preliminary results demonstrate the effectiveness of GPDs in reducing redundancies and inconsistencies over the benchmark datasets.
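      A graph probabilistic dependency can be read informally as a graph dependency annotated with a confidence, and a minimal way to estimate that confidence is to count how often entity pairs matched by the antecedent also satisfy the consequent. The toy schema below (equal `isbn` implies equal `title`) is a hypothetical illustration of that counting, not the paper's formalism.

```python
def dependency_confidence(nodes, key_attr, dep_attr):
    """Confidence of 'equal key_attr -> equal dep_attr' over all node pairs
    that match on key_attr; 1.0 means the dependency holds exactly."""
    matched = satisfied = 0
    items = list(nodes)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if a[key_attr] == b[key_attr]:
                matched += 1
                satisfied += a[dep_attr] == b[dep_attr]
    return satisfied / matched if matched else 1.0

# Toy graph entities with a noisy duplicate violating the dependency:
books = [{"isbn": "1", "title": "Dune"},
         {"isbn": "1", "title": "Dune"},
         {"isbn": "1", "title": "DUNE"},
         {"isbn": "2", "title": "Solaris"}]
```

      A confidence below 1.0, as in this toy example, is exactly the situation where a hard graph dependency would be rejected outright but a probabilistic one still carries usable signal for deduplication.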
    • Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks

      Saleem, Rabia; Yuan, Bo; Kurugollu, Fatih; Anjum, Ashiq; University of Derby; University of Leicester (IEEE, 2020-12-30)
      Artificial Intelligence (AI) models can learn from data and make decisions without any human intervention. However, the deployment of such models is challenging and risky because we do not know how the internal decision-making happens in these models. In particular, high-risk decisions such as medical diagnosis or automated navigation demand explainability and verification of the decision-making process in AI algorithms. This paper aims to explain AI models by discretizing the black-box process model of deep neural networks using partial differential equations (PDEs). The resulting PDE-based deterministic models would minimize the time and computational cost of the decision-making process and reduce uncertainty, making the predictions more trustworthy.