• Adversarial Thresholding Semi-Bandits

      Anjum, Ashiq; Bagdasar, Ovidiu; Xue, Yong; Bower, Craig (University of Derby, 2020-12)
      The classical multi-armed bandit is one of the most common examples of sequential decision-making, either trading off between exploiting and exploring arms to maximise some payoff or purely exploring arms until the optimal arm is identified. In particular, a bandit player wanting to pull only arms whose stochastic feedback exceeds a given threshold has been studied extensively in a pure exploration context. However, numerous applications cannot be expressed in this framework, in which a player wishes to balance the need to observe regions of an uncertain environment that are currently interesting (exploit) against checking whether neglected regions have become interesting since they were last observed (explore). We introduce the adversarial thresholding semi-bandit problem: a non-stochastic bandit model in which a player wants to pull only (potentially several) arms whose feedback meets some threshold condition. Our main objective is to design algorithms that meet the requirements of the adversarial thresholding semi-bandit problem theoretically, empirically and algorithmically for a given application. In other words, we want to develop a machine that learns to select options according to some threshold condition and adapts quickly if the feedback from selecting an option unexpectedly changes. This work has many real-world applications and is motivated by online detector control monitoring in high-energy physics experiments at the Large Hadron Collider. We begin by describing the adversarial thresholding semi-bandit problem (ATSBP) in terms of a multi-armed bandit with multiple plays and by extending the stochastic thresholding bandit problem to the adversarial setting. The adversarial thresholding exponentially-weighted exploration and exploitation with multiple plays algorithm (T-Exp3.M) and an algorithm combining it with label efficient prediction (LET-Exp3.M) are introduced; both satisfy theoretical and computational research specifications, but either perform poorly or fail completely under certain threshold conditions. To meet empirical performance requirements, we propose the dynamic label efficient adversarial thresholding exponentially-weighted exploration and exploitation with multiple plays algorithm (dLET-Exp3.M). Whilst its computational requirements match those of T-Exp3.M, its theoretical upper bounds on performance are proven to be worse. We also introduce an ATSBP algorithm (AliceBandit) that decomposes the action of pulling an arm into separate selection and observation decisions. Its computational complexity and empirical performance under two different threshold conditions are significantly improved compared with exponentially-weighted adversarial thresholding semi-bandits, and its theoretical upper bounds on performance are also significantly improved for certain environments. In the latter part of this thesis, we address the challenge of efficiently monitoring multiple condition parameters in high-energy experimental physics. Due to the extreme conditions experienced in heavy-ion particle colliders, the power supply to any device exceeding safe operating parameters is automatically shut down, or tripped, to preserve the integrity and functionality of the device. Prior to recent upgrades, a device or channel trip would halt data-taking for the entire experiment, and post-trip recovery requires a costly procedure both in terms of expertise and data-taking time. After the completion of the current upgrade phase (scheduled for 2021), the detector will collect data continuously.
In this new regime, a channel trip will result in only the affected components of the experiment being shut down. However, since the upgraded experiment will enable data-taking to increase by a factor of 100, each trip will have a significant impact on the experiment's ability to provide physicists with reliable data to analyse. We demonstrate that adversarial thresholding semi-bandits efficiently identify device channels either exceeding a fixed threshold or deviating by more than a prescribed range prior to a trip, extending the state-of-the-art in high-energy physics detector control.
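      As a flavour of the exponentially-weighted machinery these algorithms build on, the following is a minimal single-play sketch in which the threshold condition is encoded as a binary reward; the function names and the simplification to one pull per round are illustrative assumptions, not the thesis's T-Exp3.M.

```python
import numpy as np

def exp3_threshold(K, T, gamma, feedback, tau, seed=0):
    """Minimal Exp3-style sketch with a threshold reward: an arm earns
    reward 1 in a round if its (possibly adversarial) feedback meets the
    threshold tau, else 0. T-Exp3.M plays several arms per round; this
    sketch keeps a single play for brevity."""
    w = np.ones(K)                                   # exponential weights
    rng = np.random.default_rng(seed)
    for t in range(T):
        p = (1 - gamma) * w / w.sum() + gamma / K    # mix in uniform exploration
        arm = rng.choice(K, p=p)
        x = feedback(t, arm)                         # adversarial feedback in [0, 1]
        r = 1.0 if x >= tau else 0.0                 # threshold condition as reward
        w[arm] *= np.exp(gamma * (r / p[arm]) / K)   # importance-weighted update
    return w / w.sum()

# Example: arm 2's feedback drifts above the threshold halfway through.
probs = exp3_threshold(K=5, T=2000, gamma=0.1, tau=0.5,
                       feedback=lambda t, a: 0.9 if (a == 2 and t > 1000) else 0.1)
```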
    • Assessing the credibility of online social network messages.

      Makinde, Oghenefejiro Winnie; University of Derby (University of Derby, 2018-01)
      Information gathered socially online is a key feature of the growth and development of modern society. Presently, the Internet is a platform for the distribution of data. Millions of people use Online Social Networks daily as a tool to keep updated with social, political, educational or other occurrences. In many cases, information derived from an Online Social Network is acted upon, and often shared with other networks, without further assessment or judgement. Many people do not check whether the information shared is credible. A user may trust the information generated by a close friend without questioning its credibility, in contrast to a message generated by an unknown user. This work considers the concept of credibility in the wider sense, by asking whether a user can trust the service provider or even the information itself. Two key components of credibility have been explored: trustworthiness and expertise. Credibility has been researched in the past using Twitter as a validation tool; that research focused on automatic methods of assessing the credibility of sets of tweets, analysing microblog postings related to trending topics to determine their credibility. This research develops a framework that can assist the assessment of the credibility of messages in Online Social Networks. Four types of credibility are explored (experienced, surface, reputed and presumed credibility), resulting in a credibility hierarchy. To determine the credibility of messages generated and distributed in Online Social Networks, a virtual network is created in which nodes with individual views generate messages at random; data from the network are recorded and analysed based on the behaviour exhibited by agents (an agent-based modelling approach). The factors considered in the experiment design included peer-to-peer networking, collaboration, opinion formation and network rewiring. The behaviour of agents, the frequency with which messages are shared and used, the pathway of the messages and how this affects the credibility of messages are also considered. A framework is designed and the resulting data are tested against the design. The resulting data validated the framework in part, supporting an approach whereby tagging the message status assists the understanding and application of the credibility hierarchy. Validation was carried out with Twitter data acquired through Twitter's Application Programming Interface (API). There were similarities in the generation and frequency of the message distributions in the network; these findings were also recorded and analysed using the proposed framework. Some limitations were encountered while acquiring data from Twitter; however, there was sufficient evidence of correlation between the simulated and real social network datasets to indicate the validity of the framework.
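      To make the agent-based setup concrete, here is a toy sketch of such a simulation loop under stated assumptions (a random graph, a hidden per-agent reliability, and a simple trust update standing in for the experienced credibility type); none of the thesis's actual parameters are reproduced.

```python
import random
import networkx as nx

def simulate(n=100, p=0.05, steps=2000, seed=1):
    """Toy agent-based sketch: agents forward messages to neighbours; each
    receiver updates its perceived credibility of the sender based on whether
    the message turned out to be accurate."""
    random.seed(seed)
    g = nx.erdos_renyi_graph(n, p, seed=seed)
    reliability = {v: random.random() for v in g}        # hidden ground truth
    perceived = {(u, v): 0.5 for u in g for v in g[u]}   # (receiver, sender) trust
    for _ in range(steps):
        sender = random.choice(list(g))
        accurate = random.random() < reliability[sender]
        for receiver in g[sender]:
            old = perceived[(receiver, sender)]
            # move perceived credibility toward the observed behaviour
            perceived[(receiver, sender)] = old + 0.1 * (accurate - old)
    return perceived
```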
    • Cloud BI: A Multi-party Authentication Framework for Securing Business Intelligence on the Cloud

      Al-Aqrabi, Hussain; University of Derby (2016)
      Business intelligence (BI) has emerged as a key technology to be hosted on Cloud computing. BI offers a method to analyse data, thereby enabling informed decision-making to improve business performance and profitability. However, within the shared domains of Cloud computing, BI is exposed to increased security and privacy threats because an unauthorised user may be able to gain access to highly sensitive, consolidated business information. The business process contains collaborating services and users from multiple Cloud systems in different security realms which need to be engaged dynamically at runtime. If the heterogeneous Cloud systems located in different security realms do not have direct authentication relationships, then it is technically difficult to enable a secure collaboration. In order to address these security challenges, a new authentication framework is required to establish trust relationships among these BI service instances and users by distributing a common session secret to all participants of a session. The author addresses this challenge by designing and implementing a multi-party authentication framework for dynamic secure interactions when members of different security realms want to access services. The framework takes advantage of the trust relationship between session members in different security realms to enable a user to obtain security credentials to access Cloud resources in a remote realm. This mechanism helps Cloud session users authenticate their session membership, improving the authentication processes within multi-party sessions. The correctness of the proposed framework has been verified using BAN logic. The performance and overhead have been evaluated via simulation in a dynamic environment. A prototype authentication system has been designed, implemented and tested based on the proposed framework. The research concludes that the proposed framework and its supporting protocols are an effective functional basis for practical implementation testing, as the framework achieves good scalability and imposes only minimal performance overhead, comparable with other state-of-the-art methods.
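      The core idea of distributing a common session secret can be illustrated with a deliberately simplified sketch: a coordinator that already shares a long-term key with each realm member wraps a fresh session secret for every participant. This is a didactic stand-in, not the thesis's formally verified multi-party protocol.

```python
import secrets
from cryptography.fernet import Fernet

class SessionCoordinator:
    """Toy session-secret distribution: each member holds a long-term key
    shared with the coordinator, which wraps the fresh session secret under
    each member's key so only enrolled members can recover it."""
    def __init__(self):
        self.member_keys = {}                 # member id -> long-term key

    def enroll(self, member_id):
        key = Fernet.generate_key()
        self.member_keys[member_id] = key
        return key                            # delivered out-of-band in practice

    def open_session(self):
        session_secret = secrets.token_bytes(32)
        wrapped = {m: Fernet(k).encrypt(session_secret)
                   for m, k in self.member_keys.items()}
        return wrapped                        # broadcast; members unwrap locally

coord = SessionCoordinator()
alice_key = coord.enroll("alice@realmA")
wrapped = coord.open_session()
secret = Fernet(alice_key).decrypt(wrapped["alice@realmA"])  # shared session secret
```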
    • Computational fluid dynamics model of a quad-rotor helicopter for dynamic analysis

      Poyi, Gwangtim Timothy; Wu, Mian Hong; Bousbaine, Amar; University of Derby (Pioneer Research and Development Group, 2016-06-30)
      The control and performance of a quad-rotor helicopter UAV are greatly influenced by its aerodynamics, which in turn is affected by interactions with features in its remote environment. This paper presents details of the Computational Fluid Dynamics (CFD) simulation and analysis of a quad-rotor helicopter. It starts by presenting how SolidWorks software is used to develop a 3-D Computer Aided Design (CAD) model of the quad-rotor helicopter, then describes how CFD is used as a computer-based mathematical modelling tool to simulate and analyse the effects of wind flow patterns on the performance and control of the quad-rotor helicopter. To support the development of a robust adaptive controller enabling the quad-rotor helicopter to withstand environmental constraints, which is not within the scope of this paper, this work accurately models the quad-rotor's static and dynamic characteristics from a limited number of time-accurate CFD simulations.
    • Computer aided design of 3D renewable energy platform for Togo's smart grid power system infrastructure

      Komlanvi, Moglo; University of Derby (2018-09-04)
      The global requirement for sustainable energy provision will become increasingly important over the next fifty years as the environmental effects of fossil fuel use become apparent. Therefore, the issues surrounding the integration of renewable energy supplies need to be considered carefully. The focus of this work was the development of an innovative computer-aided design of a 3-dimensional renewable energy platform for Togo's smart grid power system infrastructure, and its validation for industrial, commercial and domestic applications. The wind, hydro and PV systems forming this 3-dimensional renewable energy power generation platform introduce a new path for hybrid systems, extending system capabilities to include a stable and constant clean energy supply, reduced harmonic distortion, and improved power system efficiency. Issues requiring consideration in high-percentage renewable energy systems therefore include the reliability of the supply when intermittent sources of electricity are being used, and the subsequent necessity for storage and back-up generation. The adoption of genetic algorithms was well suited to minimising the total harmonic distortion (THD), while the adoption of the cascaded H-bridge multilevel inverter (CHB-MLI) was ideal for connecting renewable energy sources to an AC grid. Cascaded inverters have also been proposed for use as the main traction drive in electric vehicles, where several batteries or ultra-capacitors are well suited to serve as separate DC sources. Simulations under various non-linear load conditions showed that an integral-control-based compensating cascaded passive filter balanced the system even under non-linear loads. The measured total harmonic distortion of the source currents was found to be 2.36%, in compliance with the IEEE 519-1992 and IEC 61000-3 standards for harmonics. This work has succeeded in developing a more complete tool for analysing the feasibility of integrated renewable energy systems. This will allow informed decisions to be made about the technical feasibility of supply mix and control strategies, plant type and sizing, and storage sizing, for any given area and range of supply options. The developed 3D renewable energy platform was examined and evaluated using CAD software analysis and a laboratory-based mini test. The initial results showed improvements compared with other hybrid systems and their existing control systems: there was a notable improvement in the dynamic load demand and response, and in the stability of the system, with reduced harmonic distortion. This research therefore proposes an innovative solution and a path for Togo in its intention of switching to renewable energy, especially for its smart grid power system infrastructure, and demonstrates its validation for industrial, commercial and domestic applications.
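      For reference, the THD figure quoted above follows the standard definition (RMS of the harmonics over the fundamental). Below is a sketch of how such a measurement could be computed from a sampled current waveform; all signal parameters are illustrative.

```python
import numpy as np

def total_harmonic_distortion(signal, fs, f0, n_harmonics=40):
    """Estimate THD via FFT: the ratio of the RMS of harmonics 2..N to the
    fundamental magnitude (IEEE 519-style definition)."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    def mag_at(f):                       # magnitude of the nearest FFT bin
        return spectrum[np.argmin(np.abs(freqs - f))]
    fundamental = mag_at(f0)
    harmonics = np.sqrt(sum(mag_at(k * f0) ** 2
                            for k in range(2, n_harmonics + 1)))
    return harmonics / fundamental

# Example: 50 Hz fundamental with a 5% fifth harmonic -> THD close to 5%.
t = np.arange(0, 0.2, 1 / 10_000)
i = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 250 * t)
print(f"THD = {total_harmonic_distortion(i, 10_000, 50):.2%}")
```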
    • Contractors’ selection criteria for sustainable infrastructure delivery in Nigeria

      Ceranic, Boris; Dean, Angela; Arowosafe, Oluwumi I. (University of Derby, 2020)
      The research reported in this study developed and validated a framework for the pre-evaluation of contractors for sustainable infrastructure projects procured through Public-Private Partnership (PPP) in Nigeria. The proposed framework uses the Analytic Network Process (ANP) to select contractors for build-operate-transfer (BOT) contracts. Theoretically grounded in systems theory, a sustainable infrastructure delivery (SID) model is developed in this research. One of its important features is the ability to solve complex decision problems, typical of a decision-making process that involves the selection of contractors for PPP projects. At the deductive phase of the proposed model is the integration of the ANP (a multi-criteria decision-making technique) for data synthesis. An extensive literature review was conducted with regard to selection criteria for contractors. Furthermore, a web-based questionnaire survey was undertaken, aimed at capturing the perceptions of Nigerian construction professionals regarding the importance of these criteria for the pre-evaluation of contractors for public infrastructure procurement. A total of 143 questionnaires were received and the feedback was analysed with the IBM SPSS statistical package. The findings revealed a broad range of 55 relevant criteria linked to sustainable contractor selection. Through the application of factor analysis, the number of criteria was reduced to 16, after multicollinearity issues in the data set had been resolved. The 16 factors were modelled as pairwise comparison matrices, transforming the decision-making process from a linear to a systemic approach. A purposeful sampling methodology was then applied to select a decision-making (DM) panel, who completed the pairwise comparison survey. The survey results were synthesised by ANP. The final solution derived an order of significance of the two categories of contractors, multinational construction corporations (MCC) and local construction contractors (LCC), in respect of the delivery of sustainable infrastructure. Sensitivity analysis of the research findings reveals that the 16 criteria have differential comparative advantages, which requires critical judgement during the contractors' pre-evaluation process. Although the overall priorities rank multinational construction corporations (MCC) higher than local construction contractors (LCC), it is not absolute that MCC will deliver better value for money on all tangible and intangible elements of sustainable infrastructure attributes. LCC outperform on some key criteria, such as local employment creation and local material sourcing, which are essential pre-evaluation criteria. This research proposes a novel framework to harmonise sustainability indicators in contractor selection and offers a new theoretical insight into the approach to contractors' selection criteria during the pre-evaluation process, which contributes to the enhancement of PPP delivery in Nigeria. Overall, the proposed SID model has demonstrated the need for a shift in the modus operandi of the government's ministries, departments and agencies (MDAs) in Nigeria from unidirectional to systemic selection techniques. It clearly demonstrates the appropriateness of the ANP to predict the contractor that will deliver more sustainable infrastructure.
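      The ANP synthesis builds on pairwise comparison matrices. The sketch below shows the standard principal-eigenvector derivation of criterion weights, with a hypothetical three-criterion matrix for illustration; the thesis's 16-factor model is not reproduced.

```python
import numpy as np

def priority_vector(A, tol=1e-10):
    """Principal eigenvector of a pairwise comparison matrix (the standard
    AHP/ANP synthesis step), computed by power iteration and normalised to
    sum to 1."""
    w = np.ones(A.shape[0]) / A.shape[0]
    while True:
        w_next = A @ w
        w_next /= w_next.sum()
        if np.abs(w_next - w).max() < tol:
            return w_next
        w = w_next

# Hypothetical comparison of three criteria (e.g. cost vs. local employment
# vs. material sourcing); entries are Saaty-scale judgements, reciprocals below.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(priority_vector(A).round(3))   # relative weights of the three criteria
```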
    • Crashworthiness Characterisation of the Car Front Bumper System Based on FEA Analysis

      Lu, Yiling; Harmanto, Dani; Zhang, Xiyuan (University of Derby, 2020-11-19)
      This thesis investigated different designs and material selections for a vehicle front bumper system to improve crashworthiness at low impact speed (impact velocity = 15 km/h, 9.32 mph) via FEA simulations. The primary purpose was to identify, using a numerical parametric study, the parameters most directly related to improved crashworthiness. The cross-section profile, curvature shape and material of the bumper beam, together with its connection to the crash box, were all identified as directly influencing the crashworthiness performance of the front bumper system. The bumper system, including sub-components such as the bumper beam, crash box and connection methods, was built as CAD models in SolidWorks, with all parameters, including the number of folds, curvature shapes and spot welds, built in at creation. The final assembled bumper system was then imported into ANSYS for further geometry checks and adjustment. The Autodyn solver was used to perform the FEA simulations, and a number of results files were generated; results such as force reaction, plastic work, equivalent stress and normal stress were exported into Excel for parametric analysis and discussion. Cross-section profile: of the proposed single-fold (fold 1), double-fold (fold 2) and triple-fold (fold 3) bumper beam profiles, the double-fold profile presented the best force reaction results in both smoothness and force value, while its plastic work remained almost identical to that of folds 1 and 3. The fold 2 profile is considered the best performer, since it regulated the deformation behaviour of the beam and produced a smoothly increasing force reaction curve, whereas the force reaction curves of folds 1 and 3 fluctuated dramatically due to catastrophic structural failure. Material: between the structural steel and aluminium alloy considered for the bumper beam, the structural steel beam achieved good force reaction and plastic work, while switching to aluminium achieved a similar force reaction trend and rate with only a negligible reduction in plastic work. In particular, the weight of the bumper beam dropped to 5.357 kg while maintaining crashworthiness performance similar to structural steel. Crash box connection: the bonded connection is considered an idealised scenario and is favoured in much of the literature because it simplifies the connection set-up in the FEA environment, which automatically treats it as perfect contact. Three alternative connection methods were therefore proposed to simulate more realistic scenarios, defined as welded connections constituted by spot welds at the left, right, top and bottom of the crash box. Since the bonded method contains no spot welds, a weld L+R method was defined by four spot welds at the left and right sides of the crash box; on top of this, four additional spot welds at the top and bottom defined a weld full method, and four spot welds only at the top and bottom (weld T+B) extended the comparison. Both the bonded and weld L+R methods suffered from buckling of the crash box, concentrated at the left and right sides with high equivalent and normal stresses.
The weld full method was found to provide promising results, reducing the buckling effect at both the left and right faces of the crash box and lowering the equivalent stress to 336.48 MPa and the normal stress on the connection surface to 66 MPa. Weld T+B showed similar performance compared with both the bonded and weld L+R methods: while registering only very small equivalent and normal stresses, the buckling effect was significantly reduced. This thesis contributes knowledge to the improvement of the vehicle front bumper system, particularly the failure modes of the bumper beam and crash box, and offers related optimisation.
    • A critical analysis of the continued use of Georgian buildings: a case study of Darley Abbey Mills, Derbyshire.

      Deakin, Emmie Louise; University of Derby (2016)
      This thesis undertakes a critical assessment of the impact of statutory legislation and UNESCO World Heritage designation upon the sustainability and continued use of historic industrial buildings, utilising the late 18th-century Georgian industrial buildings of Darley Abbey Mills, Derby, as a case study. It provides an in-depth and longitudinal analysis of the morphology and evolution of Darley Abbey Mills between 2006 and 2015. During this time, the assessment of whether the mills would find a sustainable and continued contemporary use shifted from a concern that the site was slowly disintegrating, with the danger of an important historical artefact being lost forever or becoming irrevocably damaged through lack of maintenance and repair, to a position where the future of the mills looks promising. What makes Darley Abbey Mills so unusual, if not unique, is that it possesses the highest possible levels of statutory protection yet is also under private ownership. The initial finding from an analysis of policy documents and planning applications between 2006 and 2010 was that there was limited engagement with external heritage and conservation stakeholders or the Local Authority; the 'umbrella of statutory protection' was neither creating barriers nor protecting the site, there was simply a lack of action by all parties. This changed during the period 2010-13, when the site came under new unified ownership. The new owners started to make small adaptations and repairs that enabled them to attract new tenants from the creative and artisan communities; however, none of this work was authorised, nor was planning permission sought. Although there was still a lack of enforcement of what can be seen as 'aspirational urbanism', a dialogue was started between the owners and the wider stakeholder community. Between 2013 and 2015, the relationship between all of the stakeholders became more formalised, and an unofficial partnership was formed between the owners and the monitoring bodies that resulted in the successful planning application to adapt the West Mills and Long Mill, which moved some way towards ensuring the sustainable and continued use of Darley Abbey Mills.
    • Data Collection and Analysis in Urban Scenarios

      Bagdasar, Ovidiu; Liotta, Antonio; Barnby, Lee; Ferrara, Enrico (University of Derby, College of Science and Engineering, 2021-12-03)
      The United Nations estimates that the world population will continue to grow, with projections indicating a world population of approximately 8.5 billion people in 2030, 9.7 billion in 2050 and 10.9 billion in 2100. In addition to population growth, the United Nations also estimates that by 2050 about 70% of the total world population will live in cities. These conditions increase the complexity of the services that public administrations and private companies must provide to citizens with the aim of optimising resources and increasing quality of life. The adequate design, implementation and management of these services require an extensive effort towards effective solutions for data collection and analysis, applying Data Science and Artificial Intelligence techniques. Several approaches were addressed during the development of this research thesis, and different real-world use cases are introduced in which the presented work was tested and validated. The first part of the thesis focuses on the analysis of data collected using crowdsourcing. A real case study used for the analyses was conducted in Sheffield, with the goal of understanding people's interaction with green areas and their wellbeing. In this study, an app with a chatbot was used to ask questions targeted to the study, collecting not only the subjective answers but also objective data such as users' locations. Through the analysis of this data, it was possible to extract insights that would not be easily obtainable in other ways. Some limitations arose for less frequented areas, where not enough information was collected for the insights to reach statistical significance; conversely, more information than necessary was collected in the most frequented areas. For this reason, a framework was developed that analyses the amount of information and its statistical significance in real time, increasing the efficiency of the study and reducing intrusiveness towards the study participants. The limitation of this approach is certainly the small sample of data that can be acquired. The second part of this thesis moves on to passive data collection, where the user does not have to interact in any way. Any data acquired is pseudonymised upon capture so that privacy legislation is respected. A system is then presented that collects probe requests generated by Wi-Fi devices while they scan radio channels to detect Access Points. The system processes the collected data to extract key information on people's mobility, such as crowd density by area of interest, people flow, permanence time, return time, heat maps, origin-destination matrices and estimates of people's locations. The main novelty with respect to the state of the art relates to new powerful indicators needed for key city services, such as safety management and passenger transport, and to experimental activities carried out in real scenarios. Furthermore, a de-randomisation algorithm to solve the problem of MAC address randomisation is presented.
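      As an illustration of pseudonymisation at capture and of one of the mobility indicators, the following sketch hashes captured MAC addresses with a keyed hash and counts distinct devices per zone; the secret, zone names and truncation length are assumptions for illustration only.

```python
import hashlib
import hmac

SECRET = b"rotating-deployment-secret"     # rotated periodically in practice

def pseudonymise(mac: str) -> str:
    """Keyed hash of a captured MAC address so identities are unlinkable
    without the secret (pseudonymisation at the moment of capture)."""
    return hmac.new(SECRET, mac.encode(), hashlib.sha256).hexdigest()[:16]

def crowd_density(sightings):
    """sightings: iterable of (mac, zone) pairs from probe-request captures.
    Returns the number of distinct pseudonymised devices per zone of interest."""
    per_zone = {}
    for mac, zone in sightings:
        per_zone.setdefault(zone, set()).add(pseudonymise(mac))
    return {zone: len(devices) for zone, devices in per_zone.items()}

print(crowd_density([("aa:bb:cc:dd:ee:01", "plaza"),
                     ("aa:bb:cc:dd:ee:01", "plaza"),
                     ("aa:bb:cc:dd:ee:02", "station")]))
# {'plaza': 1, 'station': 1}
```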
    • Dynamic collaboration and secure access of services in multi-cloud environments

      Liu, Lu; Zhu, Shao Ying; Kazim, Muhammad (University of Derby, College of Engineering and Technology, 2019-08-19)
      Cloud computing services have gained popularity in both public and enterprise domains, and they process large amounts of user data with varying privacy levels. The increasing demand for cloud services, including storage and computation, requires new functional elements and provisioning schemes to meet user requirements. Multi-clouds can optimise user requirements by allowing users to choose the best services from the large number offered by various cloud providers, as they are massively scalable, can be dynamically configured, and are delivered on demand with large-scale infrastructure resources. A major concern related to multi-cloud adoption is the lack of models for multi-clouds and for their associated security issues, which become more unpredictable in a multi-cloud environment. Moreover, in order to trust the services in a foreign cloud, users depend on the assurances given by the cloud provider, but cloud providers give very limited evidence or accountability to users, which gives providers the ability to hide some behaviour of the service. In this thesis, we propose a model for multi-cloud collaboration that can securely establish dynamic collaboration between heterogeneous clouds using the cloud on-demand model. Initially, threat modelling for cloud services is performed, leading to the identification of various threats to service interfaces, along with the possible attackers and the mechanisms to exploit those threats. Based on these threats, the cloud provider can apply suitable mechanisms to protect services and user data. In the next phase, we present a lightweight and novel authentication mechanism, which is formally verified and provides users with single sign-on (SSO) for authentication at runtime between multi-clouds before granting them service access. Next, we provide a service scheduling mechanism to select the best services from multiple cloud providers that closely match users' quality-of-service (QoS) requirements. The scheduling mechanism achieves high accuracy by applying a distance correlation weighting mechanism across a large number of service QoS parameters. In the next stage, novel service level agreement (SLA) management mechanisms are proposed to ensure secure service execution in the foreign cloud. The SLA mechanisms ensure that users' QoS parameters, including functional requirements (CPU, RAM, memory, etc.) and non-functional requirements (bandwidth, latency, availability, reliability, etc.) for a particular service, are negotiated before secure collaboration between multi-clouds is set up. The multi-cloud handling user requests is responsible for enforcing mechanisms that fulfil the QoS requirements agreed in the SLA, while the monitoring phase involves monitoring the service execution in the foreign cloud to check its compliance with the SLA and report it back to the user. Finally, we present use cases of applying the proposed model in scenarios such as the Internet of Things (IoT) and e-healthcare in multi-clouds. Moreover, the designed protocols are empirically implemented on two different clouds, OpenStack and Amazon AWS. Experiments indicate that the proposed model is scalable, that the authentication protocols result in only limited overhead compared with standard authentication protocols, that service scheduling achieves high efficiency, and that any SLA violations by a cloud provider can be recorded and reported back to the user.
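      Distance correlation, the statistic underpinning the scheduling mechanism's weighting, can be computed from double-centred pairwise distance matrices. A minimal sketch follows; the exact weighting scheme used in the thesis is not reproduced.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D samples (Szekely et al.).
    Shown here as one way of weighting a QoS parameter by its statistical
    dependence on an overall quality score."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])              # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])              # pairwise distances in y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()  # double centring
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

# Hypothetical use: weight each QoS parameter (latency, availability, ...)
# by its distance correlation with historical user satisfaction scores.
latency = [12, 30, 25, 8, 40, 15]
satisfaction = [0.9, 0.4, 0.5, 0.95, 0.2, 0.8]
print(f"weight(latency) = {distance_correlation(latency, satisfaction):.3f}")
```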
    • Effects of the graphene on the mechanical properties of fibre reinforced polymer - a numerical and experimental study

      Lu, Yiling; Dean, Angela; Pawlik, Marzena (University of Derby, 2019-11)
      The mechanical properties of carbon fibre reinforced polymer (CFRP) are greatly affected by the interphase between fibre and matrix. Coating the fibre with nanofillers, i.e. graphene nanoplatelets (GNPs) or carbon nanotubes (CNTs), has been suggested to improve the interphase properties. Although the interphase is of small thickness, it plays an important role. Quantitative characterisation of the interphase region using experimental techniques such as nanoindentation or dynamic mechanical mapping remains challenging. More recently, computational modelling has become an alternative way to study the effects of the interphase on CFRP properties. Simulation work on CFRP reinforced with nanofillers mainly focuses on CNTs grown on the fibre surface, so-called fuzzy fibre reinforced polymers; modelling work on the effects of GNPs on CFRP properties is rather limited. This project aims to study numerically and experimentally the effects of the nano-reinforced interphase on the mechanical properties of CFRP. A multiscale model was developed to study the effects of the GNP-reinforced interphase on the elastic properties of CFRP laminate. The effective material properties of the reinforced interphase were determined by considering the transversely isotropic features of GNPs and their various orientations. The presence of GNPs in the interphase enhances the elastic properties of the CFRP lamina, and the enhancement depends on their volume fraction. The incorporation of randomly orientated GNPs in the interphase increased the longitudinal and transverse lamina moduli by 5% and 12% respectively, while aligned GNPs in the interphase yielded less improvement. The present multiscale modelling was able to reproduce experimental measurements for GNP-reinforced CFRP laminates well, and it also proved successful in predicting fuzzy fibre reinforced polymer behaviour. Moreover, the interphase properties were inversely quantified by combining the multiscale model with standard material testing. A two-step optimisation process was proposed, involving microscale and macroscale modelling. Based on experimental data on flexural modulus, the lamina properties were derived from the macroscale model and later used to determine the interphase properties by optimisation at the microscale. The GNP-reinforced interphase modulus was 129.1 GPa, significantly higher than the 60.51 GPa of epoxy-coated carbon fibre. In the experiments, a simple spraying technique was proposed to introduce GNPs and CNTs into the CFRP: carbon fibre prepreg was sprayed with a nanofiller-ethanol solution using an airbrush. The extremely low volume fraction of nanofillers introduced between prepreg plies caused a noticeable improvement in mechanical properties, i.e. a 7% increase in strain energy release. For the first time, a GNP-ethanol-epoxy solution was sprayed directly onto the carbon fibre fabric; the resulting nano-reinforced interphase created on the fibre surface showed moderate improvement in the samples' flexural properties. In conclusion, a multiscale modelling framework was developed and tested. The GNP-reinforced interphase improved the mechanical properties of CFRP, and this enhancement depended on the orientation and volume fraction of GNPs in the interphase. Spraying was a cost-effective method to introduce nanofillers into CFRP and showed huge potential for scale-up in the manufacturing process. Combining the multiscale framework with the optimisation process, the properties of the nanofiller-reinforced interphase were determined for the first time.
This framework could be used to optimise the development process of new fibre-reinforced composites.
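      As a rough illustration of why a stiff interphase matters, a simple three-phase rule-of-mixtures estimate is sketched below. The thesis's multiscale model is far richer than this, and the interphase volume fraction used here is an assumed value; only the 129.1 GPa interphase modulus comes from the text.

```python
def longitudinal_modulus(Ef, Em, Ei, Vf, Vi):
    """Three-phase Voigt (rule-of-mixtures) estimate of the lamina
    longitudinal modulus with an interphase layer between fibre and matrix.
    Ef/Em/Ei: fibre/matrix/interphase moduli (GPa); Vf/Vi: volume fractions."""
    Vm = 1.0 - Vf - Vi                      # matrix takes the remaining volume
    return Ef * Vf + Ei * Vi + Em * Vm

# Illustrative inputs: carbon fibre ~230 GPa, epoxy ~3 GPa, and the inversely
# identified GNP-reinforced interphase of 129.1 GPa at an assumed 2% volume.
print(f"E1 = {longitudinal_modulus(230, 3.0, 129.1, 0.55, 0.02):.1f} GPa")
```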
    • The efficiency of bacterial self-healing concrete incubated in ground conditions

      Esaker, Mohamed (University of Derby, College of Science and Engineering; Direct Science, Springer, 2021-12-20)
      Innovative bacterial self-healing concrete is a promising solution to improve the sustainability of concrete structures by sealing cracks in an autonomous way. Regardless of the type of bacterial-based approach, the provision of a suitable incubation environment is essential for the activation of the bacteria and thus for a successful self-healing application. However, research to date has mainly focused on the self-healing process within humid air or water environments. This research aims to investigate the performance of bacterial self-healing concrete within ground conditions, which can potentially benefit the development of more sustainable underground concrete structures such as deep foundations, retaining walls and tunnels. The research method comprises a laboratory experimental programme with several stages. In the first stage, control tests were conducted to examine the influence of different delivery techniques for the healing agent, such as the capsule material, on the healing performance in water. The outputs from this stage were used as a control test to inform the subsequent stages, in which fine-grained concrete/mortar specimens were incubated inside soil. Three different delivery techniques for the healing agent were examined, namely direct addition, calcium alginate beads and perlite. The results showed that the crack-healing capacity was significantly improved by the use of the bacterial agent for all delivery techniques, and the maximum healed crack width was about 0.57 mm after 60 days of incubation for specimens incorporating perlite (set ID: M4). The volume stability of the perlite capsules made them more compatible with the cement mortar matrix than the calcium alginate capsules. The results from Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray (EDX) analysis indicated that the mineral precipitations on crack surfaces were calcium carbonate. The second stage investigated the effect of different ground conditions on the efficiency of bio self-healing concrete. This stage presents the major part of the experimental programme and contains three experimental parts based on the types of soils and their conditions in which the bio self-healing of cement mortar specimens was examined. The first part investigated the effect of the presence of microbial and organic materials within the soil on the performance of self-healing, by incubating cracked mortar specimens in sterilised and non-sterilised soil; the aim was to establish whether the bacteria existing in the soil can produce any self-healing. In the second part, the investigation focused on bio self-healing in specimens incubated in coarse-grained soil (sand). The soil was subjected to fully and partially saturated cycles and conditioned with different pH and sulphate levels representing industrially recognised classes of exposure (namely X0, XA1 and XA3). These classes were selected according to BS EN 206:2013+A1:2016, based on the risk of corrosion and chemical attack from an aggressive ground environment. In the third part, cement mortar specimens were incubated in fully and partially saturated fine-grained soil (clay) with aggressive environments similar to those in part 2. The results showed that the indigenous bacteria naturally present within the soil can enhance the mortar self-healing process. For specimens incubated within coarse-grained soil (sand), the reduction in pH of the incubation environment affected the bio self-healing performance.
However, for fine-grained soil (clay), the healing ratios of specimens incubated in identical exposure conditions were almost the same, with better results observed in the pH-neutral condition. The results also showed that the self-healing efficiencies in both the control and bio-mortar specimens were significantly affected by the soil's moisture content. This indicates that the mineral precipitation of calcium carbonate caused by the metabolic conversion of nutrients by bacteria is heavily reliant on the moisture content of the soil; the hydration of un-hydrated cement particles, the primary source of autogenous healing, is also influenced by soil moisture content. The third stage investigated the use of a non-destructive technique utilising the concrete's electrical resistivity to monitor the crack-healing performance of specimens incubated within soil. The results showed that the improvement in the electrical resistivity of bio-mortar specimens was remarkably higher in comparison with control specimens; this improvement can be used as an indication of the healing performance of bio-mortar specimens in comparison with the autogenous healing of control specimens. In general, the study suggests that the bio self-healing process can protect underground concrete structures such as foundations, bridge piers and tunnels in a range of standard exposure conditions, and that this is facilitated by the commonly applied bacterial agent Bacillus subtilis or similar strains. However, as the experimental findings indicated, the exposure conditions can affect the healing efficiency. Therefore, future work should consider how formulations, application methods and ground preparation can be optimised to achieve the best possible incubation environment and thus improved protection for underground concrete structures.
    • Electro-thermal modelling of electrical power drive systems.

      Trigkidis, Georgios; University of Derby (2008)
    • Evaluation and improvement on service quality of Chinese university libraries under new information environments.

      Fan, Yue Qian; University of Derby (2018-06)
      The rapid development of information technology in recent years has added a range of new features to the traditional information environment, which has a profound impact on university library services and users. There is broad consensus that quality of service in libraries directly reflects customer satisfaction and loyalty. The importance of exploring evaluation frameworks for service quality in university libraries cannot be overstated in this context. Moreover, existing frameworks for evaluating the service quality of university libraries face numerous challenges due to their imperfections. Thus, there is an urgency and necessity to explore and enhance the efficiency of these evaluation frameworks. To this end, this thesis conducts a systematic analysis of evaluation frameworks, with the motivation of identifying, through empirical methods, the core components that need enhancement to achieve effective service quality in Chinese university libraries. Furthermore, the inferences extracted from the analysis have been exploited to provide suitable recommendations for improving the service quality of university libraries.
    • High Performance Video Stream Analytics System for Object Detection and Classification

      Anjum, Ashiq; Yaseen, Muhammad Usman (University of Derby, College of Engineering and Technology, 2019-02-05)
      Due to recent advances in cameras, cell phones and camcorders, particularly the resolution at which they can record images and video, large amounts of data are generated daily. This video data is often so large that manually inspecting it for object detection and classification would be time-consuming and error-prone; it therefore requires automated analysis to extract useful information and metadata. Automated analysis of video streams also comes with numerous challenges, such as blurred content and variations in illumination conditions and pose. In this thesis we investigate an automated video analytics system that takes into account characteristics from both the shallow and deep learning domains. We propose the fusion of features from the spatial frequency domain to perform highly accurate blur- and illumination-invariant object classification using deep learning networks. We also propose tuning the hyper-parameters associated with the deep learning network through a mathematical model, which improved the performance of the proposed system during training. The effects of various hyper-parameters on the system's performance are compared, and the parameters that contribute towards the most optimal performance are selected for video object classification. The proposed video analytics system has been demonstrated to process a large number of video streams, and the underlying infrastructure is able to scale based on the number and size of the video streams being processed. Extensive experimentation on publicly available image and video datasets reveals that the proposed system is significantly more accurate and scalable and can be used as a general-purpose video analytics system.
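      A minimal sketch of the spatial-frequency side of such a fusion follows, assuming radially pooled log-magnitude FFT bands concatenated with deep features; the band count and pooling scheme are illustrative, not the thesis's exact descriptors.

```python
import numpy as np

def spectral_features(image, k=16):
    """Spatial-frequency features intended to be robust to blur and
    illumination changes: log-magnitude FFT pooled into k radial bands.
    These can be concatenated with CNN features before the classifier head."""
    f = np.fft.fftshift(np.fft.fft2(image))
    mag = np.log1p(np.abs(f))                      # log compresses illumination
    h, w = image.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)           # radius of each frequency bin
    edges = np.linspace(0, r.max() + 1, k + 1)
    return np.array([mag[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Fusion sketch: spectral bands concatenated with a stand-in deep feature vector.
image = np.random.rand(64, 64)
fused = np.concatenate([spectral_features(image), np.zeros(128)])
```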
    • High Voltage Optical Fibre Sensor for Use in Wire Relay Electrical Protection Systems

      Bashour, Rami; University of Derby (2016)
      The last few decades have seen widespread use of optical fibre sensors in many applications. Optical fibre sensors have significant benefits over existing conventional sensors, such as high immunity to electromagnetic interference, the ability to transmit signals over long distances at high bandwidth, high resolution, usability in hazardous environments and no need for isolation when working at high voltages. The measurement of high voltages is essential for electrical power systems, as it is used as a source of electrical information for Relay Protection Systems (RPS) and load management systems. Electrical power systems need to be protected from faults, which can range from short circuits to voltage dips, surges and transients. The optical high voltage sensor developed is based on the principle that the electrostriction displacement of Lead Zirconate Titanate (PZT) changes when a voltage is applied to it. The displacement causes the fibre Bragg grating (FBG) bonded to the PZT material to undergo a resultant change in wavelength. An optical fibre sensor prototype has been developed and evaluated that measures up to 250 V DC. Simulation using ANSYS software has been used to demonstrate the operational capability of the sensor up to 300 kV AC. This sensor overcomes some of the challenges of conventional sensors, such as electromagnetic interference, signal transmission and resolution. A novel optical fibre high voltage sensor based on the Kerr effect has also been demonstrated. The Kerr effect was determined using Optsim (R-Soft) software, and Maxwell software was used to model an optical Kerr cell. Maxwell is an electromagnetic/electric field software package used for simulating, analysing and designing 2D and 3D electromagnetic materials and devices; it uses highly accurate finite element techniques to solve time-varying, static, and frequency-domain electric and electromagnetic fields. Relay protection systems on electrical networks are also discussed in detail. Keywords: Fibre Bragg Grating, Fibre Optic Sensors, Piezoelectricity, Kerr effect, Relay Protection Systems.
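      The underlying sensing relations are the standard fibre Bragg grating formulas; the PZT strain-voltage coupling in the last relation is shown schematically as an assumption about a simple stack geometry, not as the thesis's calibrated model.

```latex
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda,
\qquad
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon,
\qquad
\varepsilon \approx \frac{d_{33}\, V}{t},
```

      where \(\lambda_B\) is the Bragg wavelength, \(n_{\mathrm{eff}}\) the effective refractive index of the fibre core, \(\Lambda\) the grating period, \(p_e \approx 0.22\) the effective photo-elastic coefficient of silica, and \(\varepsilon\) the strain transferred from a PZT element of thickness \(t\) and piezoelectric coefficient \(d_{33}\) under applied voltage \(V\).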
    • Life cycle costing methodology for sustainable commercial office buildings

      Oduyemi, Olufolahan Ifeoluwa; University of Derby (2015)
      The need for a more authoritative approach to investment decision-making and cost control has been a requirement of office spending for many years now. The commercial office sector finds itself in an increasingly demanding position, needing to allocate its budgets as wisely and prudently as possible. The significant percentage of total spending that goes on buildings demands a more accurate and adaptable method of achieving quality of service within budget constraints. By adopting life cycle costing (LCC) techniques with risk management, practitioners have the ability to make accurate forecasts of likely future running costs. This thesis presents a novel framework (Artificial Neural Networks and probabilistic simulations) for modelling historical operating and maintenance costs as well as the economic performance measures of LCC. The methodology consisted of eight steps and presented a novel approach to modelling the LCC of operating and maintenance costs for two sustainable commercial office buildings. Finally, a set of performance measurement indicators was utilised to draw inferences from the results. The contribution this research aimed to make was therefore to develop a dynamic LCC framework for sustainable commercial office buildings and, by means of two existing buildings, to demonstrate how assumption modelling can be utilised within a probabilistic environment. In this research, the key themes of risk assessment, probabilistic assumption modelling and stochastic assessment of LCC have been addressed. Significant improvements to existing LCC models have been achieved in an attempt to make the LCC model more accurate and meaningful to estate managers and high-level capital investment decision makers. A new approach to modelling historical costs and forecasting these costs in sustainable commercial office buildings is presented, based upon a combination of ANN methods and stochastic modelling of the annual forecasted data. These models provide a far more accurate representation of long-term building costs, as the inherent risk associated with the forecasts is easily quantifiable and the forecasts are based on a sounder approach than was previously used in the commercial sector. A novel framework for modelling the facilities management costs of two sustainable commercial office buildings is also presented. This is useful not only for modelling the LCC of existing commercial office buildings, as presented here, but also has wider implications for LCC modelling when comparing competing options for commercial office buildings. The assumption-modelling processes presented in this work can easily be modified to represent other types of commercial office building. Discussions with policy makers in the real estate industry revealed concerns over how these building costs can be modelled, given that available historical data represent broad spending and are not cost-specific to commercial office buildings. Similarly, a pilot and a main survey questionnaire were aimed at ascertaining the current level of LCC application in sustainable construction, ranking the drivers and barriers of sustainable commercial office buildings, and determining the applications and limitations of LCC.
The survey results showed that respondents strongly agreed that key performance indicators and economic performance measures need to be incorporated into LCC, and that it is important to consider the initial, operating and maintenance costs of a building when conducting LCC analysis. Respondents disagreed that current LCC techniques are suitable for calculating the whole costs of buildings, but agreed that the accuracy of historical cost data is low.
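      A probabilistic LCC run of the kind described can be sketched as a Monte Carlo simulation over assumed cost distributions; every number and distribution below is illustrative rather than drawn from the two case-study buildings.

```python
import numpy as np

rng = np.random.default_rng(42)

def lcc_npv(n_sims=10_000, years=25, rate=0.035):
    """Monte Carlo sketch of probabilistic LCC: annual operating and
    maintenance costs drawn from assumed distributions (triangular and
    lognormal, purely for illustration) and discounted to present value."""
    capital = 2.0e6                                    # illustrative capital cost
    discount = (1 + rate) ** -np.arange(1, years + 1)  # discount factors per year
    operating = rng.triangular(80e3, 100e3, 140e3, (n_sims, years))
    maintenance = rng.lognormal(np.log(30e3), 0.4, (n_sims, years))
    return capital + ((operating + maintenance) * discount).sum(axis=1)

npv = lcc_npv()
print(f"median LCC: {np.median(npv):,.0f}; "
      f"95th percentile: {np.percentile(npv, 95):,.0f}")
```

      Reporting percentiles rather than a single figure is what makes the inherent forecast risk quantifiable for decision makers.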
    • Multiprocessor System-on-Chips based Wireless Sensor Network Energy Optimization

      Panneerselvam, John; Xue, Yong; Ali, Haider (University of Derby, Department of Electronics, Computing and Mathematics, 2020-10-08)
      Wireless Sensor Networks (WSNs) are an integral part of the Internet of Things (IoT), used to monitor physical or environmental conditions without human intervention. One of the major challenges in WSNs is reducing energy consumption at both the sensor node and network levels. High energy consumption not only causes an increased carbon footprint but also limits the lifetime (LT) of the network. Network-on-Chip (NoC) based Multiprocessor System-on-Chips (MPSoCs) are becoming the de facto computing platform for computationally intensive real-time applications in IoT due to their high performance and exceptional quality of service. In this thesis, a task scheduling problem is investigated using the MPSoC architecture for tasks with precedence and deadline constraints, in order to minimise the processing energy consumption while guaranteeing the timing constraints. Moreover, energy-aware node clustering is also performed to reduce the transmission energy consumption of the sensor nodes. Three distinct energy optimisation problems are investigated, as follows. First, contention-aware energy-efficient static scheduling is performed for real-time tasks with individual deadline and precedence constraints on a NoC-based heterogeneous MPSoC. An offline meta-heuristic-based contention-aware energy-efficient task scheduler is developed that performs task ordering, mapping and voltage assignment in an integrated manner; compared with state-of-the-art scheduling, the proposed algorithm significantly improves energy efficiency. Second, energy-aware scheduling is investigated for a set of tasks with precedence constraints deploying Voltage Frequency Island (VFI) based heterogeneous NoC-MPSoCs. A novel population-based algorithm called ARSH-FATI is developed that can dynamically switch between explorative and exploitative search modes at run-time; its performance is superior to existing task schedulers developed for homogeneous VFI-NoC-MPSoCs. Third, the transmission energy consumption of the sensor nodes in the WSN is reduced by developing an ARSH-FATI-based Cluster Head Selection (ARSH-FATI-CHS) algorithm integrated with a heuristic called Novel Ranked-Based Clustering (NRC). In cluster formation, parameters such as residual energy, distance and the workload on CHs are considered to improve the LT of the network. The results prove that ARSH-FATI-CHS outperforms other state-of-the-art clustering algorithms in terms of LT.
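      A toy illustration of the cluster-head selection trade-off, scoring candidates on the factors named above, is sketched below; the linear form and the weights are assumptions for illustration, not ARSH-FATI-CHS itself.

```python
def select_cluster_head(nodes, w_energy=0.5, w_dist=0.3, w_load=0.2):
    """Pick the node with the best weighted score: high residual energy is
    rewarded; distance to the sink and existing CH workload are penalised
    (all quantities assumed normalised to [0, 1])."""
    def score(n):
        return (w_energy * n["residual_energy"]
                - w_dist * n["dist_to_sink"]
                - w_load * n["load"])
    return max(nodes, key=score)

nodes = [{"id": 1, "residual_energy": 0.9, "dist_to_sink": 0.4, "load": 0.2},
         {"id": 2, "residual_energy": 0.7, "dist_to_sink": 0.1, "load": 0.1},
         {"id": 3, "residual_energy": 0.3, "dist_to_sink": 0.2, "load": 0.0}]
print(select_cluster_head(nodes)["id"])   # node 2 wins on the assumed weights
```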
    • Network Features in Complex Applications

      Bagdasar, Ovidiu; Kurugollu, Fatih; Liotta, Antonio; Cavallaro, Lucia (University of Derby, 2021-12-20)
      The aim of this thesis is to show the potential of Graph Theory and Network Science applied to real-case scenarios. Indeed, there is a gap in the state-of-the-art in combining mathematical theory with more practical applications, such as helping Law Enforcement Agencies (LEAs) to conduct their investigations, or in Deep Learning techniques that enable Artificial Neural Networks (ANNs) to work more efficiently. In particular, three main case studies were considered on which to evaluate the effectiveness of Social Network Analysis (SNA) tools: (i) criminal network analysis, (ii) network resilience, and (iii) ANN topology. We addressed two typical problems in dealing with criminal networks: (i) how to efficiently slow down information spreading within the criminal organisation through prompt and targeted investigative operations by LEAs, and (ii) the impact of missing data during LEA investigations. In the first case, we identified the appropriate centrality metric to effectively identify the criminals to be arrested, showing how, by neutralising only 5% of the top-ranking affiliates, the network connectivity dropped by 70%. In the second case, we simulated the missing data problem by pruning criminal networks, removing nodes or links, and compared the pruned networks against the originals using four metrics for computing graph similarity. We discovered that only a small error (i.e., a 30% difference from the real network) is introduced when, for example, some wiretaps are missing. On the other hand, it is crucial to investigate suspects in a timely fashion, since any exclusion of suspects from an investigation may lead to significant errors (i.e., an 80% difference). Next, we defined a new approach for simulating network resilience with a probabilistic failure model: while the classical node-removal approach is always successful, that assumption is not realistic, so we defined models simulating scenarios in which nodes oppose resistance against removal. Having identified the centrality metric that, on average, causes the greatest damage to the connectivity of the networks under scrutiny, we compared our outcomes against the classical node-removal approach, ranking the nodes according to the same centrality metric, which confirmed our intuition. Lastly, we adopted SNA techniques to analyse ANNs. In particular, we moved a step forward from earlier works: not only did our experiments confirm the efficiency arising from training sparse ANNs, but they also managed to further exploit sparsity through a better-tuned algorithm, featuring increased speed at a negligible accuracy loss. We focused on the role of the parameter used to fine-tune the training phase of sparse ANNs; our intuition was that this step can be avoided, as the accuracy loss is negligible and, as a consequence, the execution time is significantly reduced. It is evident that Network Science algorithms, by maintaining sparsity in ANNs, are a promising direction for accelerating their training processes. All these studies pave the way for a range of unexplored possibilities for the effective use of Network Science at the service of society.
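      The targeted-disruption experiment can be sketched in a few lines with networkx: rank nodes by a centrality metric, remove the top 5%, and measure the shrinkage of the largest connected component. Betweenness centrality and the synthetic graph below are assumptions; the thesis's criminal networks and its chosen metric may differ.

```python
import networkx as nx

def connectivity_drop(g, fraction=0.05):
    """Remove the top-ranking nodes by betweenness centrality and report the
    relative shrinkage of the largest connected component."""
    k = max(1, int(fraction * g.number_of_nodes()))
    ranked = sorted(nx.betweenness_centrality(g).items(),
                    key=lambda kv: kv[1], reverse=True)
    h = g.copy()
    h.remove_nodes_from(node for node, _ in ranked[:k])   # "arrests"
    before = len(max(nx.connected_components(g), key=len))
    after = len(max(nx.connected_components(h), key=len))
    return 1 - after / before

g = nx.barabasi_albert_graph(200, 2, seed=7)              # synthetic stand-in
print(f"connectivity drop after removing 5% of nodes: {connectivity_drop(g):.0%}")
```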