• Dynamic collaboration and secure access of services in multi-cloud environments

      Liu, Lu; Zhu, Shao Ying; Kazim, Muhammad (University of Derby, College of Engineering and Technology, 2019-08-19)
      Cloud computing services have gained popularity in both public and enterprise domains, and they process large amounts of user data with varying privacy levels. The increasing demand for cloud services, including storage and computation, requires new functional elements and provisioning schemes to meet user requirements. Multi-clouds can optimise user requirements by allowing users to choose the best services from the large number offered by various cloud providers, as they are massively scalable, can be dynamically configured, and are delivered on demand with large-scale infrastructure resources. A major concern related to multi-cloud adoption is the lack of models for such systems and for their associated security issues, which become more unpredictable in a multi-cloud environment. Moreover, in order to trust the services in a foreign cloud, users depend on the assurances given by the cloud provider, but cloud providers offer very limited evidence or accountability, which gives them the ability to hide some behaviour of the service. In this thesis, we propose a model for multi-cloud collaboration that can establish dynamic collaboration between heterogeneous clouds using the cloud on-demand model in a secure way. Initially, threat modelling for cloud services is carried out, leading to the identification of various threats to service interfaces along with the possible attackers and the mechanisms by which those threats can be exploited. Based on this analysis, the cloud provider can apply suitable mechanisms to protect services and user data. In the next phase, we present a novel, lightweight and formally verified authentication mechanism which provides single sign-on (SSO) to users at runtime between multi-clouds before granting them service access. Next, we provide a service scheduling mechanism to select the services from multiple cloud providers that most closely match users' quality of service (QoS) requirements. The scheduling mechanism achieves high accuracy by applying a distance correlation weighting mechanism across a large number of service QoS parameters. In the next stage, novel service level agreement (SLA) management mechanisms are proposed to ensure secure service execution in the foreign cloud. The SLA mechanisms ensure that users' QoS parameters, including functional (CPU, RAM, memory, etc.) and non-functional (bandwidth, latency, availability, reliability, etc.) requirements for a particular service, are negotiated before secure collaboration between multi-clouds is set up. The multi-cloud handling user requests is responsible for enforcing mechanisms that fulfil the QoS requirements agreed in the SLA, while the monitoring phase of the SLA involves monitoring service execution in the foreign cloud, checking its compliance with the SLA and reporting it back to the user. Finally, we present use cases of applying the proposed model in scenarios such as the Internet of Things (IoT) and E-Healthcare in multi-clouds. Moreover, the designed protocols are empirically implemented on two different clouds, OpenStack and Amazon AWS. Experiments indicate that the proposed model is scalable, the authentication protocols incur only limited overhead compared to standard authentication protocols, the service scheduling achieves high efficiency, and any SLA violations by a cloud provider can be recorded and reported back to the user.
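The automatic weighting idea above can be illustrated briefly. The sketch below is a minimal Python example, not the thesis's implementation: it derives normalised weights for QoS parameters from their distance correlation with an overall utility signal. The `qos` matrix and `utility` vector are hypothetical stand-ins for real service measurements.

```python
import numpy as np

def distance_correlation(x, y):
    """Szekely-Rizzo distance correlation between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                # pairwise distances
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()  # double centring
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                             # squared distance covariance
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(0)
qos = rng.random((50, 4))                        # 50 services x 4 QoS parameters (hypothetical)
utility = qos @ np.array([0.5, 0.2, 0.2, 0.1])   # stand-in overall utility signal
raw = np.array([distance_correlation(qos[:, j], utility) for j in range(qos.shape[1])])
weights = raw / raw.sum()                        # normalised automatic weights
print(weights)
```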
    • Effects of the graphene on the mechanical properties of fibre reinforced polymer - a numerical and experimental study

      Lu, Yiling; Dean, Angela; Pawlik, Marzena (University of Derby, 2019-11)
      The mechanical properties of carbon fibre reinforced polymer (CFRP) are greatly affected by the interphase between fibre and matrix. Coating fibres with nanofillers, i.e. graphene nanoplatelets (GNPs) or carbon nanotubes (CNTs), has been suggested as a way of improving the interphase properties. Although the interphase is of small thickness, it plays an important role. Quantitative characterisation of the interphase region using experimental techniques such as nanoindentation and dynamic mechanical mapping remains challenging. More recently, computational modelling has become an alternative way to study the effects of the interphase on CFRP properties. Simulation work on CFRP reinforced with nanofillers has mainly focused on CNTs grown on the fibre surface, so-called fuzzy fibre reinforced polymers; modelling work on the effects of GNPs on CFRP properties is rather limited. This project aims to study, numerically and experimentally, the effects of the nano-reinforced interphase on the mechanical properties of CFRP. A multiscale model was developed to study the effects of the GNP-reinforced interphase on the elastic properties of CFRP laminate. The effective material properties of the reinforced interphase were determined by considering the transversely isotropic features of GNPs and their various orientations. The presence of GNPs in the interphase enhances the elastic properties of the CFRP lamina, and the enhancement depends on the GNP volume fraction. The incorporation of randomly orientated GNPs in the interphase increased the longitudinal and transverse lamina moduli by 5% and 12% respectively, while aligned GNPs in the interphase yielded less improvement. The present multiscale modelling was able to reproduce experimental measurements for GNP-reinforced CFRP laminates well, and was also proven successful in predicting the behaviour of fuzzy fibre reinforced polymer. Moreover, the interphase properties were inversely quantified by combining the multiscale model with standard material testing. A two-step optimisation process was proposed, involving microscale and macroscale modelling. Based on experimental flexural modulus data, the lamina properties were derived at the macroscale and later used to determine the interphase properties through optimisation at the microscale. The modulus of the GNP-reinforced interphase was 129.1 GPa, significantly higher than the 60.51 GPa of epoxy-coated carbon fibre. In the experimental work, a simple spraying technique was proposed to introduce GNPs and CNTs into the CFRP: carbon fibre prepreg was sprayed with a nanofiller-ethanol solution using an airbrush. The extremely low volume fraction of nanofillers introduced between prepreg plies caused a noticeable improvement in mechanical properties, i.e. a 7% increase in strain energy release. For the first time, a GNPs-ethanol-epoxy solution was sprayed directly onto the carbon fibre fabric; the resultant nano-reinforced interphase created on the fibre surface showed moderate improvement in the samples' flexural properties. In conclusion, a multiscale modelling framework was developed and tested. The GNP-reinforced interphase improved the mechanical properties of CFRP, and this enhancement depended on the orientation and volume fraction of GNPs in the interphase. Spraying proved a cost-effective method of introducing nanofillers into CFRP and showed great potential for scaled-up manufacturing. By combining the multiscale framework with the optimisation process, the properties of the nanofiller-reinforced interphase were determined for the first time. This framework could be used to optimise the development process of new fibre-reinforced composites.
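As a purely illustrative first-order check of the effect described above (not the thesis's finite-element multiscale model), a three-phase rule of mixtures shows how the interphase modulus feeds into the lamina moduli. The fibre and matrix moduli and the volume fractions below are assumed placeholder values; 129.1 GPa and 60.51 GPa are the interphase moduli reported in the abstract.

```python
def lamina_moduli(vf, vi, ef=230.0, ei=60.51, em=3.5):
    """Three-phase Voigt/Reuss bounds for a fibre/interphase/matrix lamina (GPa)."""
    vm = 1.0 - vf - vi
    e1 = vf * ef + vi * ei + vm * em            # longitudinal (Voigt, rule of mixtures)
    e2 = 1.0 / (vf / ef + vi / ei + vm / em)    # transverse (Reuss, inverse rule)
    return e1, e2

epoxy = lamina_moduli(vf=0.6, vi=0.02, ei=60.51)   # epoxy-coated fibre interphase
gnp = lamina_moduli(vf=0.6, vi=0.02, ei=129.1)     # GNP-reinforced interphase
print(f"E1: {epoxy[0]:.1f} -> {gnp[0]:.1f} GPa, E2: {epoxy[1]:.2f} -> {gnp[1]:.2f} GPa")
```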
    • The efficiency of bacterial self-healing concrete incubated in ground conditions

      Esaker, Mohamed (University of Derby, College of Science and Engineering; Direct Science, Springer, 2021-12-20)
      Innovative bacterial self-healing concrete is a promising solution to improve the sustainability of concrete structures by sealing cracks in an autonomous way. Regardless of the type of bacterial-based approach, the provision of a suitable incubation environment is essential for the activation of bacteria and thus for a successful self-healing application. However, research to date has mainly focused on the self-healing process within humid air or water environments. This research aims to investigate the performance of bacterial self-healing concrete within ground conditions, which can potentially benefit the development of more sustainable underground concrete structures such as deep foundations, retaining walls and tunnels. The research method comprises a laboratory experimental programme with several stages. In the first stage, control tests were conducted to examine the influence of different delivery techniques for healing agents, such as the capsule material, on the healing performance in water. The outputs from this stage were used as a control to inform the next stages, in which fine-grained concrete/mortar specimens were incubated inside soil. In this stage, three different delivery techniques for the healing agent were examined, namely direct addition, calcium alginate beads and perlite. The results showed that the crack-healing capacity was significantly improved by the use of the bacterial agent for all delivery techniques, and the maximum healed crack width was about 0.57 mm after 60 days of incubation for specimens incorporating perlite (set ID: M4). The volume stability of the perlite capsules made them more compatible with the cement mortar matrix than the calcium alginate capsules. The results from Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray (EDX) analysis indicated that the mineral precipitations on crack surfaces were calcium carbonate. The second stage investigated the effect of different ground conditions on the efficiency of bio self-healing concrete. This stage represents the major part of the experimental programme and contains three experimental parts based on the types of soils and their conditions in which the bio self-healing of cement mortar specimens was examined. The first part investigated the effect of the presence of microbial and organic materials within the soil on the performance of self-healing by incubating cracked mortar specimens in sterilised and non-sterilised soil, the aim being to determine whether bacteria already present in the soil can produce any self-healing. In the second part, the investigation focused on bio self-healing in specimens incubated in coarse-grained soil (sand). The soil was subjected to fully and partially saturated cycles and conditioned with different pH and sulphate levels representing industrially recognised classes of exposure (namely X0, XA1 and XA3). These classes were selected according to BS EN 206:2013+A1:2016, based on the risk of corrosion and chemical attack from an aggressive ground environment. In the third part, cement mortar specimens were incubated in fully and partially saturated fine-grained soil (clay) with similar aggressive environments as in part two. The results showed that the indigenous bacteria naturally present within the soil can enhance the mortar self-healing process. For specimens incubated within coarse-grained soil (sand), the reduction in pH of the incubation environment affected the bio self-healing performance. For fine-grained soil (clay), however, the healing ratios of specimens incubated under identical exposure conditions were similar, with better results observed in the pH-neutral condition. The results also showed that the self-healing efficiencies of both the control and bio-mortar specimens were significantly affected by the soil's moisture content. This indicates that the mineral precipitation of calcium carbonate caused by the bacterial metabolic conversion of nutrients is heavily reliant on the moisture content of the soil. The hydration of un-hydrated cement particles, the primary source of autogenous healing, is also influenced by soil moisture content. The third stage investigated the use of a non-destructive technique utilising concrete electrical resistivity to monitor the crack-healing performance of specimens incubated within soil. The results showed that the improvement in the electrical resistivity of bio-mortar specimens was remarkably higher than that of control specimens. This improvement can be used as an indication of the healing performance of bio-mortar specimens in comparison with autogenous healing in control specimens. In general, the study suggests that the bio self-healing process can protect underground concrete structures such as foundations, bridge piers and tunnels in a range of standard exposure conditions, and that this is facilitated by the commonly applied bacterial agent Bacillus subtilis or similar strains. However, as the experimental findings indicated, exposure conditions can affect the healing efficiency. Therefore, future work should consider how formulations, application methods and ground preparation can be optimised to achieve the best possible incubation environment and thus improved protection for underground concrete structures.
    • Electro-thermal modelling of electrical power drive systems.

      Trigkidis, Georgios; University of Derby (2008)
    • Evaluation and improvement on service quality of Chinese university libraries under new information environments.

      Fan, Yue Qian; University of Derby (2018-06)
      The rapid development of information technology in recent years has added a range of new features to the traditional information environment, which has a profound impact on university library services and users. Quality of service in library services, which directly reflects customer satisfaction and loyalty, has reached a broad consensus as a key parameter. Exploring evaluation frameworks for service quality in university libraries is therefore essential in this context. Besides, existing frameworks for evaluating the service quality of university libraries face numerous challenges due to their imperfections. Thus, there is an urgent need to explore and enhance the efficiency of service quality evaluation frameworks. To this end, this thesis conducts a systematic analysis of evaluation frameworks with the aim of identifying, through empirical methods, the core components that need enhancement to achieve effective service quality in Chinese university libraries. Furthermore, the inferences drawn from the analysis have been used to provide suitable recommendations for improving the service quality of university libraries.
    • High Performance Video Stream Analytics System for Object Detection and Classification

      Anjum, Ashiq; Yaseen, Muhammad Usman (University of Derby, College of Engineering and Technology, 2019-02-05)
      Due to recent advances in cameras, cell phones and camcorders, particularly the resolution at which they can record images and video, large amounts of data are generated daily. This video data is often so large that manually inspecting it for object detection and classification can be time-consuming and error-prone, and it therefore requires automated analysis to extract useful information and metadata. Automated analysis of video streams also comes with numerous challenges, such as blurred content and variations in illumination and pose. This thesis investigates an automated video analytics system that takes into account characteristics from both the shallow and deep learning domains. We propose the fusion of features from the spatial frequency domain to perform highly accurate blur- and illumination-invariant object classification using deep learning networks. We also propose the tuning of the hyper-parameters associated with the deep learning network through a mathematical model. The mathematical model used to support hyper-parameter tuning improved the performance of the proposed system during training. The effects of various hyper-parameters on the system's performance are compared, and the parameters that contribute to the most optimal performance are selected for video object classification. The proposed video analytics system has been demonstrated to process a large number of video streams, and the underlying infrastructure is able to scale based on the number and size of the video streams being processed. Extensive experimentation on publicly available image and video datasets reveals that the proposed system is significantly more accurate and scalable and can be used as a general-purpose video analytics system.
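The feature-fusion idea can be sketched compactly. The following is a hedged Python illustration, not the thesis's network: spatial pixel features are concatenated with log-magnitude spatial-frequency features (the low-frequency block is comparatively robust to blur), producing the fused vector a classifier would consume. The `frame` array is a hypothetical grayscale video frame.

```python
import numpy as np

def fused_features(frame, block=8):
    spatial = frame.astype(float).ravel() / 255.0        # normalised pixel features
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(frame))))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    low = spectrum[cy - block:cy + block, cx - block:cx + block]  # low-frequency block
    return np.concatenate([spatial, low.ravel()])         # fused feature vector

frame = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
print(fused_features(frame).shape)
```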
    • High Voltage Optical Fibre Sensor for Use in Wire Relay Electrical Protection Systems

      Bashour, Rami; University of Derby (2016)
      The last few decades have seen widespread use of optical fibre sensors in many applications. Optical fibre sensors have significant benefits over existing conventional sensors, such as high immunity to electromagnetic interference, the ability to transmit signals over long distances at high bandwidth, high resolution, usability in hazardous environments and no need for isolation when working at high voltages. The measurement of high voltages is essential for electrical power systems, as it is used as a source of electrical information for Relay Protection Systems (RPS) and load management systems. Electrical power systems need to be protected from faults, which range from short circuits and voltage dips to surges and transients. The optical high voltage sensor developed here is based on the principle that a lead zirconate titanate (PZT) element undergoes an electrostrictive displacement when a voltage is applied to it. This displacement strains the fibre Bragg grating (FBG) bonded to the PZT material, producing a corresponding change in the reflected wavelength. An optical fibre sensor prototype has been developed and evaluated that measures up to 250 V DC. Simulation using ANSYS software has been used to demonstrate the operational capability of the sensor up to 300 kV AC. This sensor overcomes some of the challenges of conventional sensors, such as electromagnetic interference, signal transmission and resolution. A novel optical fibre high voltage sensor based on the Kerr effect has also been demonstrated. The Kerr effect was determined using Optsim (R-Soft) software, and Maxwell software was used to model an optical Kerr cell. Maxwell is an electromagnetic/electric field package used for simulating, analysing and designing 2D and 3D electromagnetic materials and devices; it uses highly accurate finite element techniques to solve time-varying, static and frequency-domain electric and electromagnetic fields. A relay protection system on electrical networks is also discussed in detail. Keywords: Fibre Bragg Grating, Fibre Optics Sensors, Piezoelectricity, Kerr effect, Relay Protection Systems.
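The sensing principle lends itself to a small worked example. The sketch below assumes textbook-order constants (a d33 of roughly 300 pm/V, a silica photo-elastic coefficient of about 0.22), not the thesis's calibrated values: the applied voltage strains the PZT, and the bonded FBG's Bragg wavelength shifts in proportion, following Δλ = λ_B(1 − p_e)ε.

```python
def bragg_shift_nm(voltage, d33=3.0e-10, thickness=1e-3, lambda_b=1550.0, p_e=0.22):
    strain = d33 * voltage / thickness        # converse piezoelectric strain (field = V/t)
    return lambda_b * (1.0 - p_e) * strain    # Bragg wavelength shift in nm

for v in (50, 100, 250):                      # prototype range up to 250 V DC
    print(f"{v:>3} V -> {bragg_shift_nm(v) * 1e3:.1f} pm shift")
```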
    • Life cycle costing methodology for sustainable commercial office buildings

      Oduyemi, Olufolahan Ifeoluwa; University of Derby (2015)
      The need for a more authoritative approach to investment decision-making and cost control in office spending has been recognised for many years. The commercial office sector finds itself under increasing pressure to allocate its budgets as wisely and prudently as possible. The significant percentage of total spending on buildings demands a more accurate and adaptable method of achieving quality of service within budget constraints. By adopting life cycle costing (LCC) techniques together with risk management, practitioners can make accurate forecasts of likely future running costs. This thesis presents a novel framework (Artificial Neural Networks and probabilistic simulations) for modelling historical operating and maintenance costs as well as the economic performance measures of LCC. The methodology consisted of eight steps and presented a novel approach to modelling the LCC of the operating and maintenance costs of two sustainable commercial office buildings. Finally, a set of performance measurement indicators was utilised to draw inferences from these results. The contribution of this research is therefore a dynamic LCC framework for sustainable commercial office buildings which, by means of two existing buildings, demonstrates how assumption modelling can be utilised within a probabilistic environment. In this research, the key themes of risk assessment, probabilistic assumption modelling and stochastic assessment of LCC have been addressed. Significant improvements to existing LCC models have been achieved in an attempt to make the LCC model more accurate and meaningful to estate managers and high-level capital investment decision makers. A new approach to modelling historical costs and forecasting these costs in sustainable commercial office buildings is presented, based upon a combination of ANN methods and stochastic modelling of the annual forecasted data. These models provide a far more accurate representation of long-term building costs, as the inherent risk associated with the forecasts is easily quantifiable and the forecasts rest on a sounder approach than that previously used in the commercial sector. A novel framework for modelling the facilities management costs of two sustainable commercial office buildings is also presented. This is useful not only for modelling the LCC of existing commercial office buildings, as presented here, but also has wider implications for LCC modelling when comparing competing options for commercial office buildings. The assumption modelling processes presented in this work can easily be modified to represent other types of commercial office buildings. Discussions with policy makers in the real estate industry revealed concerns over how these building costs can be modelled, given that available historical data represent broad spending and are not cost-specific to commercial office buildings. Similarly, pilot and main survey questionnaires were aimed at ascertaining the current level of LCC application in sustainable construction, ranking the drivers of and barriers to sustainable commercial office buildings, and determining the applications and limitations of LCC. The survey results showed that respondents strongly agreed that key performance indicators and economic performance measures need to be incorporated into LCC, and that it is important to consider the initial, operating and maintenance costs of a building when conducting LCC analysis. Respondents disagreed that current LCC techniques are suitable for calculating the whole costs of buildings, but agreed that the accuracy of historical cost data is low.
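The probabilistic part of such a framework can be sketched in a few lines. The example below is an illustrative Monte Carlo life cycle cost model, not the thesis's ANN-driven framework: annual operating and maintenance forecasts are drawn from an assumed distribution and discounted, so the spread of the resulting net present value quantifies forecast risk. All figures are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
years, runs, discount = 25, 10_000, 0.035               # assumed horizon and real rate
annual = rng.normal(120_000, 15_000, (runs, years))     # stand-in O&M cost forecasts
factors = (1.0 + discount) ** -np.arange(1, years + 1)  # discount factors per year
npv = annual @ factors                                  # discounted life cycle cost per run
print(f"mean {npv.mean():,.0f}; 5th-95th percentile "
      f"{np.percentile(npv, 5):,.0f} - {np.percentile(npv, 95):,.0f}")
```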
    • Multiprocessor System-on-Chips based Wireless Sensor Network Energy Optimization

      Panneerselvam, John; Xue, Yong; Ali, Haider (University of Derby, Department of Electronics, Computing and Mathematics, 2020-10-08)
      A Wireless Sensor Network (WSN) is an integral part of the Internet of Things (IoT), used to monitor physical or environmental conditions without human intervention. One of the major challenges in WSNs is reducing energy consumption at both the sensor node and network levels. High energy consumption not only causes an increased carbon footprint but also limits the lifetime (LT) of the network. Network-on-Chip (NoC) based Multiprocessor System-on-Chips (MPSoCs) are becoming the de facto computing platform for computationally intensive real-time applications in IoT due to their high performance and exceptional quality of service. In this thesis, a task scheduling problem is investigated using the MPSoC architecture for tasks with precedence and deadline constraints, in order to minimise processing energy consumption while guaranteeing the timing constraints. Moreover, energy-aware node clustering is also performed to reduce the transmission energy consumption of the sensor nodes. Three distinct energy optimisation problems are investigated, as follows. First, contention-aware energy-efficient static scheduling on NoC-based heterogeneous MPSoCs is performed for real-time tasks with individual deadlines and precedence constraints. An offline meta-heuristic based contention-aware energy-efficient task scheduler is developed that performs task ordering, mapping and voltage assignment in an integrated manner. Compared to state-of-the-art schedulers, the proposed algorithm significantly improves energy efficiency. Second, energy-aware scheduling is investigated for a set of tasks with precedence constraints, deploying Voltage Frequency Island (VFI) based heterogeneous NoC-MPSoCs. A novel population-based algorithm called ARSH-FATI is developed that can dynamically switch between explorative and exploitative search modes at run-time. ARSH-FATI's performance is superior to that of existing task schedulers developed for homogeneous VFI-NoC-MPSoCs. Third, the transmission energy consumption of the sensor nodes in a WSN is reduced by developing an ARSH-FATI based Cluster Head Selection (ARSH-FATI-CHS) algorithm integrated with a heuristic called Novel Ranked Based Clustering (NRC). In cluster formation, parameters such as residual energy, distance and workload on CHs are considered to improve the LT of the network. The results show that ARSH-FATI-CHS outperforms other state-of-the-art clustering algorithms in terms of LT.
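The leverage that voltage assignment gives the schedulers above comes from the dynamic energy relation E ≈ C·V²·f·t; for a fixed cycle count this reduces to C·V²·cycles, so lowering the voltage (with a frequency that still meets the deadline) saves energy quadratically. A back-of-envelope sketch with placeholder constants, not the thesis's integrated scheduler:

```python
def dynamic_energy(c_eff, v, f_hz, cycles):
    return c_eff * v**2 * f_hz * (cycles / f_hz)   # = C * V^2 * cycles (f cancels)

task_cycles = 2e9                                  # hypothetical task workload
nominal = dynamic_energy(1e-9, 1.1, 2.0e9, task_cycles)
scaled = dynamic_energy(1e-9, 0.9, 1.5e9, task_cycles)  # slower island, lower voltage
print(f"{nominal:.2f} J -> {scaled:.2f} J "
      f"({100 * (1 - scaled / nominal):.0f}% saving if the deadline still holds)")
```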
    • Network Features in Complex Applications

      Bagdasar, Ovidiu; Kurugollu, Fatih; Liotta, Antonio; Cavallaro, Lucia (University of Derby, 2021-12-20)
      The aim of this thesis is to show the potential of Graph Theory and Network Science applied to real-world scenarios. Indeed, there is a gap in the state of the art in combining mathematical theory with more practical applications, such as helping Law Enforcement Agencies (LEAs) to conduct their investigations, or Deep Learning techniques which enable Artificial Neural Networks (ANNs) to work more efficiently. In particular, three main case studies were considered for evaluating the effectiveness of Social Network Analysis (SNA) tools: (i) criminal network analysis, (ii) network resilience, and (iii) ANN topology. We addressed two typical problems in dealing with criminal networks: (i) how to efficiently slow down information spreading within the criminal organisation through prompt and targeted investigative operations by LEAs, and (ii) the impact of missing data during LEA investigations. In the first case, we identified the appropriate centrality metric for effectively identifying the criminals to be arrested, showing how, by neutralising only 5% of the top-ranking affiliates, the network connectivity dropped by 70%. In the second case, we simulated the missing data problem by pruning criminal networks, removing nodes or links, and compared these networks against the originals using four metrics to compute graph similarity. We discovered that a tolerable error (i.e., a 30% difference from the real network) resulted when, for example, some wiretaps were missing. On the other hand, it is crucial to investigate suspects in a timely fashion, since any exclusion of suspects from an investigation may lead to significant errors (i.e., an 80% difference). Next, we defined a new approach for simulating network resilience with a probabilistic failure model. While the classical approach assumes that node removal is always successful, this assumption is not realistic; we therefore defined models simulating scenarios in which nodes resist removal. Having identified the centrality metric that, on average, causes the greatest damage to the connectivity of the networks under scrutiny, we compared our outcomes against the classical node removal approach by ranking the nodes according to the same centrality metric, which confirmed our intuition. Lastly, we adopted SNA techniques to analyse ANNs. In particular, we moved a step forward from earlier works because not only did our experiments confirm the efficiency arising from training sparse ANNs, but they also managed to further exploit sparsity through a better tuned algorithm, featuring increased speed at a negligible accuracy loss. We focused on the role of the parameter used to fine-tune the training phase of sparse ANNs. Our intuition was that this step can be avoided, as the accuracy loss is negligible and, as a consequence, the execution time is significantly reduced. It is evident that Network Science algorithms, by maintaining sparsity in ANNs, are a promising direction for accelerating their training. All these studies pave the way for a range of unexplored possibilities for the effective use of Network Science at the service of society.
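The criminal-network experiment described above maps directly onto a few lines of networkx. The sketch below uses a synthetic scale-free graph rather than one of the thesis's criminal networks: rank nodes by a centrality metric, remove the top 5%, and measure the connectivity drop via the giant component.

```python
import networkx as nx

def giant(g):
    """Size of the largest connected component."""
    return len(max(nx.connected_components(g), key=len))

G = nx.barabasi_albert_graph(200, 2, seed=42)      # synthetic stand-in network
before = giant(G)
rank = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])
top = [n for n, _ in rank[: int(0.05 * G.number_of_nodes())]]
G.remove_nodes_from(top)                           # "arrest" the top-ranking 5%
print(f"giant component {before} -> {giant(G)} "
      f"({100 * (1 - giant(G) / before):.0f}% connectivity drop)")
```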
    • A Novel Mathematical Layout Optimisation Method and Design Framework for Modularisation in Industrial Process Plants and SMRs

      Wood, Paul; Hall, Richard; Robertson, Daniel; Wrigley, Paul (University of Derby, Institute for Innovation in Sustainable Engineering, 2021-01-19)
      Nuclear power has been proposed as a low-carbon solution for electricity generation when intermittent wind and solar renewables are not generating. Nuclear can also provide co-generation through district heating, desalination, hydrogen production or aid in the production of synfuels. However, current new large nuclear power plants are expensive, time-consuming to build and plagued by delays and cost increases. An emerging trend in the construction industry is to manufacture parts off the critical path, off site in factories, through modular design, to reduce schedules and direct costs. A study from shipbuilding estimates that work done in a factory may be eight times more efficient than performing the same work on site. This productivity increase could be a solution to the problems in nuclear power plant construction. It is an emerging area, and the International Atomic Energy Agency records over 50 Small Modular Reactor designs in commercial development worldwide. Most Small Modular Reactor designs focus on integrating the Nuclear Steam Supply System into one module. The aim of this Applied Research Programme was to develop an efficient and effective analysis tool for modularisation in industrial plant systems. The first objective was to understand the state of the art in modular construction and design automation through a literature review. The literature review in this thesis highlighted that the automation of earlier parts of the plant design process (equipment databases, selection tools and modular Process and Instrumentation Diagrams) has been developed in modular industrial process plant research, but 3D layout has not been studied. It was also found that layout optimisation for industrial process plants has not considered modularisation. It was therefore proposed to develop a novel mathematical layout optimisation method for the modularisation of industrial plants. Furthermore, integration within the plant design process would be improved by developing a method to link the output of the optimisation with the plant design software. A case study was developed to analyse how this new method would compare against the current design process at Rolls-Royce. A systems engineering approach was taken to develop the capabilities of the optimisation by decomposing the three required constituents of modularisation: a model to optimise the layout of modules utilising the module designs from previous research (Lapp, 1989); a model to optimise the layout of equipment within modules; and a combined, integrated model to optimise the assignment and layout of equipment to modules. The objective function was to reduce pipe length, as it can constitute up to 20% of process plant costs (Peters, Timmerhaus, & West, 2003), and to reduce the number of modules utilised. The results from the mathematical model were compared against previous layout designs (Lapp, 1989), highlighting a 46-88.7% reduction in pipework; considering that pipework can constitute up to 20% of a process plant's cost, this could be a significant saving, without even counting the significant schedule and productivity savings from moving this work offsite. The second model (Bi) analysed the layout of the Chemical Volume and Control System and Boron Thermal Regeneration System into one and two modules, reducing pipe cost and installation by 67.6% and 85% respectively compared to the previously designed systems (Lapp, 1989). The third model (Bii) considered the allocation of equipment to multiple modules, reducing pipe cost and installation by 80.5% compared to the previously designed systems (Lapp, 1989), creating new data and knowledge. Mixed Integer Linear Programming formulations and soft constraints within the genetic algorithm function were utilised within MATLAB and Gurobi. Furthermore, by integrating the optimisation output with the plant design software to update the new locations of equipment and concept pipe routing, efficiency is vastly improved when the plant design engineer interprets the optimisation results. Not only can the mathematical layout optimisation analyse millions more possible layouts than an engineering designer, but it can also do so in a fraction of the time, saving time and costs. At the least, it gives the design engineer a suitable starting point which can be analysed, with the optimisation model updated in an iterative process. This novel method was compared against the current design process at Rolls-Royce; it was found that an update to a module would take minutes with the novel optimisation integrated with the plant design software, rather than the days or weeks required by the manual process. However, the disadvantage is that more upfront work is required to convert engineering knowledge into mathematical terms and relationships. The research is limited by the publicly available nuclear power plant data. Future work could include applying this novel method to wider industrial plant design to understand the broader impact. The mathematical optimisation model could also be extended to include constraints from other research, such as assembly, operation and maintenance costs.
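As a toy illustration of the pipe-length objective (a small linear program in Python's PuLP, standing in for the MATLAB/Gurobi MILP models developed in the thesis), consider placing three items along one axis of a module; the connectivity, footprints and module length are hypothetical.

```python
import pulp

items = ["pump", "tank", "hx"]
width = {"pump": 1.0, "tank": 2.0, "hx": 1.5}        # assumed footprints (m)
pipes = [("pump", "hx"), ("tank", "hx")]             # assumed connectivity

prob = pulp.LpProblem("module_layout", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"x_{i}", 0, 10) for i in items}
d = {p: pulp.LpVariable(f"d_{p[0]}_{p[1]}", 0) for p in pipes}
prob += pulp.lpSum(d.values())                       # total pipe run to minimise
for a, b in pipes:                                   # linearised |x_a - x_b|
    prob += d[(a, b)] >= x[a] - x[b]
    prob += d[(a, b)] >= x[b] - x[a]
prob += x["pump"] + width["pump"] <= x["tank"]       # fixed left-to-right ordering
prob += x["tank"] + width["tank"] <= x["hx"]
prob.solve()
print({i: x[i].value() for i in items}, pulp.value(prob.objective))
```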
    • A novel service discovery model for decentralised online social networks.

      Yuan, Bo; University of Derby (2018-03)
      Online social networks (OSNs) have become the most popular Internet application, attracting billions of users to share information, disseminate opinions and interact with others in the online society. The unprecedented growth in the popularity of OSNs has naturally made the use of social network services a pervasive phenomenon in our daily life. The majority of OSN service providers adopt a centralised architecture because of its management simplicity and content controllability. However, a centralised architecture for large-scale OSN applications incurs costly deployment of computing infrastructure and suffers from performance bottlenecks. Moreover, the centralised architecture has two major shortcomings, the single point of failure problem and the lack of privacy, which challenge uninterrupted service provision and raise serious privacy concerns. This thesis proposes a decentralised approach based on peer-to-peer (P2P) networks as an alternative to the traditional centralised architecture. Firstly, a self-organised architecture with self-sustaining social network adaptation has been designed to support decentralised topology maintenance. This self-organised architecture exhibits small-world characteristics, with a short average path length and a large average clustering coefficient, to support efficient information exchange. Based on this self-organised architecture, a novel decentralised service discovery model has been developed to achieve semantic-aware and interest-aware query routing in the P2P social network. The proposed model encompasses a service matchmaking module to capture hidden semantic information for query-service matching, and a homophily-based query processing module to characterise users' common social status and interests for personalised query routing. Furthermore, in order to optimise the efficiency of service discovery, a swarm intelligence inspired algorithm has been designed to reduce the query routing overhead. This algorithm employs an adaptive forwarding strategy that can adapt to various social network structures, and achieves promising search performance with low redundant query overhead in dynamic environments. Finally, a configurable software simulator has been implemented to simulate complex networks and to evaluate the proposed service discovery model. Extensive experiments have been conducted through simulations, and the obtained results demonstrate the efficiency and effectiveness of the proposed model.
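The small-world claim is easy to check numerically. The snippet below, a sketch on a Watts-Strogatz graph rather than the thesis's P2P overlay, computes the two metrics named above:

```python
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=7)
print("average path length:", nx.average_shortest_path_length(G))
print("average clustering: ", nx.average_clustering(G))
```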
    • Numerical Study of Tractor-Trailer Gap Aerodynamics

      Yang, Zhiyin; Lu, Yiling; Charles, Terrance Priestley (University of Derby, 2020-12-08)
      Aerodynamic design has become an essential process for ground vehicles in order to improve fuel consumption, lower emissions and increase the range of vehicles using different sources of power. A significant portion of the world's CO2 emissions comes from ground vehicles, with a large share contributed by trucks. The boxy shape of trucks is desirable for carrying maximum payload; however, a box-shaped geometry is not aerodynamically efficient. Several manufacturers have developed aerodynamic add-on devices optimised to the shape of the truck, in order to achieve gains in lowering emissions and improving range through a deeper understanding of the flow physics around the vehicle. This thesis reports an in-depth study of the flow field within the gap region of a tractor-trailer combination truck and of how several aerodynamic add-on devices reduce the overall drag of the truck. The gap region of a truck typically contributes about 20-25% of the overall vehicle drag and hence presents an opportunity for considerable drag reduction. A basic two-box bluff body model (2D and 3D) was used to investigate how the flow field changes with the gap width between the two bluff bodies. A section of the thesis investigates the sudden increase in the drag coefficient of the downstream body in the 2D tandem configuration. Distinct flow patterns were observed in the gap and around the 2D tandem bodies at different gap ratios. The sudden change in drag coefficient for the 2D downstream bluff body is well captured numerically and is due to the wake of the upstream body impinging on the front face of the downstream body. A steady increase in drag coefficient is observed for the 3D cubes, consistent with previous experimental findings; it arises because the vortical structures formed around the 3D cubes are different and undergo a smooth transition, hence the steady increase in drag coefficient. A second study was conducted on a realistic truck-like test case using the simplified truck model, in which the leading edges of the tractor were rounded to manipulate flow separation. As a result of the leading-edge rounding, flow separation reduced significantly, so that a major portion of the flow remained attached to the lateral walls of the tractor; this was seen to increase the flow entering the gap region between the tractor and trailer. Finally, several add-on devices, subdivided into tractor-mounted and trailer-mounted devices, were numerically assessed together with several other devices within the gap region. A significant level of drag reduction was achieved for the entire truck with these add-on devices, the highest being achieved with the base bleeding technique. Overall, the research has shown that it is important to control the flow condition within the gap region and maintain an even pressure on the front face of the trailer. The base bleeding method proved to be a vital technique for further drag reduction.
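For reference, the drag bookkeeping behind figures such as the 20-25% gap contribution follows the standard definition C_d = 2F_d/(ρU²A); the values below are placeholders, not the thesis's CFD results.

```python
def drag_coefficient(force_n, rho=1.225, u=25.0, area=10.0):
    return 2.0 * force_n / (rho * u**2 * area)   # C_d = 2 F_d / (rho U^2 A)

cd_total = drag_coefficient(4000.0)              # hypothetical whole-truck drag force
print(f"C_d = {cd_total:.3f}, of which ~{0.22 * cd_total:.3f} from the gap region")
```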
    • Parallaxical identities: Architectural semantics of contemporary arts institutions and the curation of cultural identity

      Tracada, Eleni; D'Arcy-Reed, Louis (University of Derby, 2019-09-19)
      The research project interrogates the identity-forming principles beneath contemporary arts museum architecture across physical and psychoanalytical dimensions. In identifying a metaphysical distance, or barrier, between the unconscious of the cultural architectural intervention and the identity within the city’s fabric, the state of a parallaxical identity manifests itself. The parallaxical identity, developed from Slavoj Žižek’s parallax gap in psychoanalysis, elicits the presentation of the ego-ideal, ideal-ego, and superego of architectural interventions seen as regenerative for culture, the city and its communities. Developing the parallax within architecture allows the thesis to include a rigorous interrogation of theory across the disciplines of psychoanalysis, architecture, contemporary art and museology, whilst also remediating the position of architectural practice beyond its conventional boundaries and rhetoric. Adopting a mixed methodology across theoretical and practical disciplines, the thesis reveals unconscious interpretations and embodied analyses through a weaving of para-architectural methods including photography, questionnaires, exploratory installations, written prose, and imagined cultural visualisations. Three major arts institutions act as case study analysands for psychoanalytical observation and diagnosis, informing the resulting framework for observing parallaxical identities, whilst also producing recommendations for the future of the cultural institution of the museum/gallery. Alongside the thesis’ position as a critical commentary, a supplementary PhD exhibition proposal centred on parallaxical identities questions the role of architecture as a discipline that necessitates para-architectural and psychoanalytic methodologies, whilst also presenting new artistic works in response to the thesis to reveal to audiences the haptic and hidden structures within architecture and the ‘expected or unexpected’ parallaxical interventions of place.
    • Power efficient and power attacks resistant system design and analysis using aggressive scaling with timing speculation

      Rathnala, Prasanthi; University of Derby (2017-05)
      The growing usage of smart and portable electronic devices demands that embedded system designers provide solutions with better performance and reduced power consumption. With the growth of IoT and embedded systems, not only the power and performance of these devices but also their security is becoming an important design constraint. In this work, a novel aggressive scaling technique based on timing speculation is proposed to overcome the drawbacks of traditional DVFS and, at the same time, provide security against power analysis attacks. Dynamic voltage and frequency scaling (DVFS) has proven to be the most suitable technique for power efficiency in processor designs, and due to its promising benefits it continues to attract researchers' attention for trading off power and performance in modern processor designs. Traditional DVFS has two issues: 1) because it relies on pre-calculated operating points, the system cannot adapt to modern process variations; 2) since Process, Voltage and Temperature (PVT) variations are not considered, large timing margins are added to guarantee safe operation in the presence of variations. The research work presented here addresses these issues by employing aggressive scaling mechanisms to achieve greater power savings with increased performance. The approach uses in-situ timing error monitoring and recovery mechanisms to reduce the extra timing margins and to account for process variations. A novel timing error detection and correction mechanism is presented that achieves greater power savings or higher performance, and it has also been shown to improve the security of processors against differential power analysis attacks. Differential power analysis attacks can extract secret information from embedded systems without detailed knowledge of the internal architecture of the device. Simulated and experimental data show that the novel technique can provide a performance improvement of 24% or power savings of 44% while incurring low area and power overhead. Overall, the proposed aggressive scaling technique improves power consumption and performance while increasing the security of processors against power analysis attacks.
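A conceptual control loop captures the aggressive-scaling idea: push the voltage down until the in-situ monitor reports timing errors above a threshold, then recover and back off. This is a hedged sketch of the mechanism's logic, not the thesis's circuit-level implementation; the error-rate model is a stand-in for real silicon.

```python
def scale_voltage(read_error_rate, v=1.0, v_min=0.7, v_max=1.1,
                  target=0.01, step=0.01, iterations=100):
    for _ in range(iterations):
        if read_error_rate(v) > target:   # errors detected: recover, back off
            v = min(v + step, v_max)
        else:                             # margin remains: scale down further
            v = max(v - step, v_min)
    return v

# Stand-in silicon model: error rate rises sharply below ~0.85 V.
print(scale_voltage(lambda v: max(0.0, (0.85 - v) * 0.5)))
```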
    • A prescriptive analytics approach for energy efficiency in datacentres.

      Panneerselvam, John; University of Derby (University of Derby, 2018-02-19)
      Given the evolution of Cloud Computing in recent years, users and clients adopting Cloud Computing for both personal and business needs have increased at an unprecedented scale. This has naturally led to increased deployments and implementations of Cloud datacentres across the globe. As a consequence of this increasing adoption of Cloud Computing, Cloud datacentres have become massive energy consumers and environmental polluters. Whilst the energy implications of Cloud datacentres are being addressed from various research perspectives, predicting the future trends and behaviours of workloads at the datacentres, and thereby reducing active server resources, is one particular dimension of green computing gaining the interest of researchers and Cloud providers. However, this involves various practical and analytical challenges imposed by the increased dynamism of Cloud systems. The behavioural characteristics of Cloud workloads and users are still not perfectly understood, which restrains the reliability of the prediction accuracy of existing research works in this context. To this end, this thesis presents a comprehensive descriptive analytics of Cloud workload and user behaviours, uncovering their causes and energy-related implications. The characteristics of Cloud workloads and users, including latency levels, job heterogeneity, user dynamicity, straggling task behaviours, the energy implications of stragglers, job execution and termination patterns, and the inherent periodicity among Cloud workload and user behaviours, are presented empirically. Driven by the descriptive analytics, a novel user behaviour forecasting framework has been developed, aimed at a three-fold forecast of user behaviours: the session duration of users, the anticipated number of submissions, and the arrival trend of incoming workloads. Furthermore, a novel resource optimisation framework has been proposed to provision the optimum level of resources for executing jobs with reduced server energy expenditure and fewer job terminations. This optimisation framework encompasses a resource estimation module to predict the anticipated resource consumption level of arriving jobs, and a classification module to classify tasks based on their resource intensiveness. Both of the proposed frameworks have been verified theoretically and tested experimentally based on Google Cloud trace logs. Experimental analysis demonstrates the effectiveness of the proposed frameworks in terms of the reliability of the forecast results and in reducing the server energy expended on executing jobs at the datacentres.
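The classification module's role can be sketched simply. The rules and fields below are illustrative, not those derived from the thesis's Google trace analysis: tasks are labelled by resource intensiveness so the estimator can pack them onto fewer active servers.

```python
def classify(task):
    cpu, mem = task["cpu"], task["mem"]       # normalised [0, 1] demands (hypothetical)
    if cpu >= 0.5 and mem >= 0.5:
        return "intensive"
    if cpu >= 0.5:
        return "cpu-bound"
    return "mem-bound" if mem >= 0.5 else "light"

jobs = [{"cpu": 0.7, "mem": 0.2}, {"cpu": 0.3, "mem": 0.8}, {"cpu": 0.1, "mem": 0.1}]
print([classify(t) for t in jobs])
```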
    • Proposing a framework for organisational sustainable development: integrating quality management, supply chain management and sustainability

      Liyanage, Kapila; Bastas, Ali (University of Derby, College of Engineering and Technology, 2019-07-04)
      Increasing worldwide demand for products and services is placing significant operational and financial pressure on firms and supply chains, along with negative implications for our planet and the public. New approaches need to be adopted by all members of society, including businesses, for sustainable development. On the other hand, enabling such integration from an organisational management perspective is not straightforward, due to the complexities and conflicts associated with balanced integration of economic, environmental and social agendas. Aimed at addressing this important research requirement, a tailored conceptual framework is presented, constructed upon the synergistic principles of quality management (QM) and supply chain management (SCM), to facilitate the integration of triple bottom line sustainability into business management. As the first step of the research, a systematic literature review was conducted, evidencing research gaps and opportunities. A conceptual framework was established, and an implementation procedure to facilitate its operationalisation was developed, including a business diagnostic tool that aids current-state maturity assessment as one of the key implementation steps. These developments were verified, validated and improved through the Delphi method, and applied at an organisation in Cyprus as the final validation step, using the action research method. Positive relationships were established and verified conceptually between the ISO 9001 principles of QM, the supply chain integration principle of SCM, and organisational triple bottom line sustainability integration. The relative importance of the principles adopted in the framework was determined based on expert Delphi panel feedback. The action research demonstrated the application of the framework, outlined its contextual implementation factors, and concluded that it had positive effects on the sustainable development of the participating organisation. Several contributions to knowledge were made, including the refinement of existing QM and SCM concepts for organisational sustainability improvement, and the formulation of a practical framework, including a novel diagnostic tool, to facilitate the integration of triple bottom line sustainability through QM and SCM. In particular, a new management perspective was introduced, with implications for the many organisational managers that adopt ISO 9001 and supply chain integration principles, paving the way for extending these principles beyond their original QM and SCM agendas towards organisational sustainable development.
    • The Role of L-type Voltage Gated Calcium Channels in Ovarian Cancer

      Shiva, Subramanam; Luderman, William (University of Derby, 2021-03)
      Ovarian cancer is the most lethal gynaecological malignancy. Of the four ovarian cancer subtypes - serous, mucinous, endometrioid and clear cell - serous ovarian cancer is the most common, comprising around 70% of cases. The median stage of diagnosis of ovarian cancer is stage III, and patients present with widespread metastasis, usually facilitated by peritoneal fluid accumulation in the abdomen (ascites). Live single cancer cells and aggregates from the ascites represent a population with the potential to become metastatic and are thought to be the main contributors to disease recurrence, displaying increased chemoresistance after traditional first-line therapy, which consists of extensive tumour debulking followed by combination platinum and taxane chemotherapy. For this reason, research into potential therapeutic targets in this important cell population is critical for the development of more efficacious treatments for ovarian cancer. The opportunity to re-purpose existing pharmaceuticals for use as chemotherapeutics is an idea that has recently gained traction in the medical research field. This approach circumvents the requirement for lengthy target identification and validation and extensive toxicity testing. One class of drugs that is a possible candidate for use against ovarian cancer is the dihydropyridines, which are antagonists of voltage-gated calcium channels (CaV). In ovarian cancer, CaV have already been shown to play a role in malignant behaviours such as migration and proliferation, although most expression and functional data in the literature are from ovarian cancer cell lines; there has been very little research on the role of CaV in primary ovarian cancer cells. This work addresses the questions of whether CaV are expressed in primary ovarian cancer cells and whether the channels in ovarian cancer cells are functional, particularly in cells derived from malignant ascitic fluid. Here, the results from immunohistochemical staining of ovarian tumour sections and RT-qPCR using normal ovaries, tumours and cells derived from malignant ascitic fluid suggest that, in the majority of cases, CaV1.2 and CaV1.3 become expressed in ovarian cancer cells only after the cells have been shed from the primary tumour. CaV1.2 and CaV1.3 mRNA is expressed in ascites-derived cells, although the highest expression of these mRNAs was seen in a sample from a patient with mucinous ovarian cancer. In serous ovarian cancer cells from ascites, CaV1.2 protein was shown by flow cytometry to be highly expressed, and immunofluorescent staining confirmed that this expression is localised to the nucleus. Functionally, dihydropyridine antagonism with nifedipine was found to prevent cell migration only in a 2-dimensional wound-healing model, whereas the invasion of cell lines and ascites-derived primary cells into basement membrane extract was unchanged. Cell lines display differing apoptotic responses to nifedipine, which triggers apoptosis in SKOV-3 but no response in OVCAR-8 at the same concentration. To assess channel functionality, fluorescent measurement of cytosolic Ca2+ flux using Fluo4-AM was performed on cell lines in the presence of the CaV agonist Bay K8644, and patch clamp electrophysiology was performed on cell lines and malignant ascites-derived cells. Both of these techniques confirmed that no detectable L-type CaV current is present in the ovarian cancer cell lines. CaV current was, however, observed in cells derived from malignant ascites, from the mucinous sample. Transient receptor potential (TRP) currents were also detected in OVCAR-8 cells, as well as in single cells and spheroids from the ascites-derived mucinous sample, and were most likely carried by the Ca2+-selective TRPV5 or TRPV6. These results suggest that functional L-type CaV are present in cancer cells from the malignant ascites of patients with mucinous ovarian cancer. Although CaV1.2 was expressed in the nucleus in serous samples, similar research in other cancers has shown that these channels may still function in migration even when expression is restricted to the nucleus, although nuclear channels would not be amenable to therapeutic targeting. These results fit into the wider context of contemporary research, which attributes a greater role to CaV and other ion channels in cancer, although more research needs to be performed in ovarian cancer ascites-derived cells to determine the possible function of these nuclear ion channels.
    • Service recommendation and selection in centralized and decentralized environments.

      Ahmed, Mariwan; University of Derby (2017-07-20)
      With the increasing use of web services in everyday tasks, we are entering an era of the Internet of Services (IoS). Service discovery and selection in both centralized and decentralized environments have become a critical issue in the area of web services, in particular when services have similar functionality but different Quality of Service (QoS). As a result, selecting a high-quality service that best suits consumer requirements from a large list of functionally equivalent services is a challenging task. As the number of services in the discovery and selection process increases, there is a corresponding increase in service consumers and a consequent diversity in the QoS available. Growth on both sides leads to diversity in the demand and supply of services, which results in partial matches between requirements and offers. Furthermore, it is challenging for customers to select suitable services from a large number of services that satisfy their functional requirements. Web service recommendation therefore becomes an attractive solution for providing consumers with recommended services that can satisfy their requirements. In this thesis, a service ranking and selection algorithm is first proposed that considers multiple QoS requirements and allows partially matched services to be counted as candidates in the selection process. From the initial list of available services, the approach considers those services with a partial match to consumer requirements and ranks them based on the QoS parameters, allowing the consumer to select a suitable service. In addition, since providing weight values for QoS parameters may not be an easy or intuitive task for consumers, an automatic weight calculation method has been included that utilizes the distance correlation between QoS parameters. The second aspect of the work in the thesis is the process of QoS-based web service recommendation. With an increasing number of web services having similar functionality, it is challenging for service consumers to find suitable web services that meet their requirements. We propose a personalised service recommendation method using the LDA topic model, which extracts the latent interests of consumers and the latent topics of services in the form of probability distributions. The proposed method also improves the accuracy of QoS property prediction by considering the correlation between neighbouring services, and returns a list of recommended services that best satisfy consumer requirements. The third part of the thesis concerns service discovery and selection in a decentralized environment. Service discovery approaches are often supported by centralized repositories, which can suffer from single points of failure, performance bottlenecks, and scalability issues in large-scale systems. To address these issues, we propose a context-aware service discovery and selection approach for a decentralized peer-to-peer environment. In this approach, homophily similarity is used for bootstrapping and the distribution of nodes. The discovery process is based on the similarity of nodes and on their previous interactions and behaviour, which aids discovery in a dynamic environment. Our approach considers not only service discovery but also the selection of suitable web services, taking into account their QoS properties. The major contribution of the thesis is a comprehensive QoS-based service recommendation and selection approach for centralized and decentralized environments. With the proposed approach, consumers are able to select suitable services based on their requirements. Experimental results on real-world service datasets showed that the proposed approaches achieve better performance and efficiency in the recommendation and selection process.
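The LDA-based matching step described above can be sketched with scikit-learn: service descriptions and the consumer's interest text are mapped to topic distributions and ranked by similarity. The corpus below is a toy placeholder, not a real service registry, and the thesis's full recommendation pipeline is not reproduced.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

services = ["fast cloud storage backup", "image recognition vision api",
            "secure payment gateway", "scalable object storage service"]
query = ["cheap reliable storage"]

vec = CountVectorizer()
X = vec.fit_transform(services)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
s_topics = lda.transform(X)                      # latent topics of services
q_topics = lda.transform(vec.transform(query))   # latent interests of the consumer

sim = (s_topics @ q_topics.T).ravel()            # topic-space similarity
print([services[i] for i in np.argsort(-sim)])   # recommended ranking
```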