• Multiprocessor System-on-Chips based Wireless Sensor Network Energy Optimization

      Panneerselvam, John; Xue, Yong; Ali, Haider (University of Derby, Department of Electronics, Computing and Mathematics, 2020-10-08)
      Wireless Sensor Networks (WSNs) are an integral part of the Internet of Things (IoT), used to monitor physical or environmental conditions without human intervention. One of the major challenges in WSNs is reducing energy consumption at both the sensor-node and network levels. High energy consumption not only increases the carbon footprint but also limits the lifetime (LT) of the network. Network-on-Chip (NoC) based Multiprocessor System-on-Chips (MPSoCs) are becoming the de facto computing platform for computationally intensive real-time IoT applications due to their high performance and exceptional quality of service. In this thesis, a task scheduling problem is investigated on MPSoC architectures for tasks with precedence and deadline constraints, with the aim of minimising processing energy consumption while guaranteeing timing constraints. Moreover, energy-aware node clustering is also performed to reduce the transmission energy consumption of the sensor nodes. Three distinct energy optimisation problems are investigated. First, contention-aware energy-efficient static scheduling on NoC-based heterogeneous MPSoCs is performed for real-time tasks with individual deadline and precedence constraints. An offline, meta-heuristic, contention-aware energy-efficient task scheduler is developed that performs task ordering, mapping and voltage assignment in an integrated manner; compared with state-of-the-art schedulers, the proposed algorithm significantly improves energy efficiency. Second, energy-aware scheduling is investigated for sets of tasks with precedence constraints on Voltage Frequency Island (VFI) based heterogeneous NoC-MPSoCs. A novel population-based algorithm called ARSH-FATI is developed that can dynamically switch between explorative and exploitative search modes at run-time; its performance is superior to existing task schedulers developed for homogeneous VFI-NoC-MPSoCs. Third, the transmission energy consumption of sensor nodes in the WSN is reduced by developing an ARSH-FATI based Cluster Head Selection (ARSH-FATI-CHS) algorithm integrated with a heuristic called Novel Ranked Based Clustering (NRC). During cluster formation, parameters such as residual energy, distance and workload on the CHs are considered to improve the LT of the network. The results show that ARSH-FATI-CHS outperforms other state-of-the-art clustering algorithms in terms of LT.
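      The clustering stage above weighs residual energy, distance and cluster-head workload when forming clusters. The following is a minimal sketch of how such criteria can be combined into a single fitness score for candidate cluster heads; the node fields, weights and linear combination are illustrative assumptions, not the ARSH-FATI-CHS formulation.

```python
# Toy fitness score for candidate cluster heads, combining the three criteria
# named in the abstract: residual energy, distance to members, and workload.
# The weights and the linear combination are illustrative assumptions only.
from dataclasses import dataclass
import math

@dataclass
class Node:
    x: float
    y: float
    residual_energy: float   # joules remaining
    workload: float          # e.g. packets queued for forwarding

def ch_fitness(candidate, members, w_energy=0.5, w_dist=0.3, w_load=0.2):
    avg_dist = sum(math.dist((candidate.x, candidate.y), (m.x, m.y))
                   for m in members) / len(members)
    # higher residual energy is better; larger distance and workload are worse
    return (w_energy * candidate.residual_energy
            - w_dist * avg_dist
            - w_load * candidate.workload)

nodes = [Node(0, 0, 4.8, 2), Node(10, 5, 3.1, 6), Node(4, 2, 4.5, 1)]
best = max(nodes, key=lambda n: ch_fitness(n, [m for m in nodes if m is not n]))
print(best)
```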
    • A Novel Mathematical Layout Optimisation Method and Design Framework for Modularisation in Industrial Process Plants and SMRs

      Wood, Paul; Hall, Richard; Robertson, Daniel; Wrigley, Paul (University of Derby, Institute for Innovation in Sustainable Engineering, 2021-01-19)
      Nuclear power has been proposed as a low-carbon solution to electricity generation when intermittent wind and solar renewables are not generating. Nuclear can provide co-generation through district heating, desalination, hydrogen production, or aid the production of synfuels. However, current large nuclear power plants are expensive, time consuming to build and plagued by delays and cost increases. An emerging trend in the construction industry is to manufacture parts off the critical path, off site in factories, through modular design to reduce schedules and direct costs. A study from shipbuilding estimates that work done in a factory may be eight times more efficient than performing the same work on site. This productivity increase could be a solution to the problems in nuclear power plant construction. It is an emerging area, and the International Atomic Energy Agency records over 50 Small Modular Reactor designs in commercial development worldwide. Most Small Modular Reactor designs focus on integrating the Nuclear Steam Supply System into one module. The aim of this Applied Research Programme was to develop an efficient and effective analysis tool for modularisation in industrial plant systems. The first objective was to understand the state of the art in modular construction and design automation through a literature review. The literature review in this thesis highlighted that automation of earlier parts of the plant design process (equipment databases, selection tools and modular Process and Instrumentation Diagrams) has been developed in modular industrial process plant research, but 3D layout has not been studied. It was also found that layout optimisation for industrial process plants has not considered modularisation. It was therefore proposed to develop a novel mathematical layout optimisation method for modularisation of industrial plants. Furthermore, integration within the plant design process would be improved by developing a method to link the output of the optimisation with the plant design software. A case study was developed to analyse how this new method would compare against the current design process at Rolls-Royce. A systems engineering approach was taken to develop the capabilities of the optimisation by decomposing the three required constituents of modularisation: a model to optimise the layout of modules utilising the module designs from previous research (Lapp, 1989), a model to optimise the layout of equipment within modules, and a combined and integrated model to optimise the assignment and layout of equipment to modules. The objective function was to reduce pipe length, as piping can constitute up to 20% of process plant costs (Peters, Timmerhaus, & West, 2003), and to reduce the number of modules utilised. The results from the mathematical model were compared against previous layout designs (Lapp, 1989), highlighting a 46-88.7% reduction in pipework; given that pipework can account for up to 20% of a process plant's cost, this could be a significant saving, before even considering the schedule and productivity savings from moving this work offsite. The second model (Bi) analysed the layout of the Chemical Volume and Control System and Boron Thermal Regeneration System into one and two modules, reducing pipe cost and installation by 67.6% and 85% respectively compared to the previously designed systems from Lapp (1989).
The third model (Bii) considered the allocation of equipment to multiple modules, reducing pipe cost and installation by 80.5% compared to the previously designed systems from Lapp (1989), creating new data and knowledge. Mixed Integer Linear Programming formulations and soft constraints within the genetic algorithm function were utilised within MATLAB and Gurobi. Furthermore, by integrating the optimisation output with the plant design software to update the new locations of equipment and concept pipe routing, efficiency is vastly improved when the plant design engineer interprets the optimisation results. Not only can the mathematical layout optimisation analyse millions more possible layouts than an engineering designer, it can perform the function in a fraction of the time, saving time and costs. It at least gives the design engineer a suitable starting point which can be analysed, with the optimisation model updated in an iterative process. When this novel method was compared against the current design process at Rolls-Royce, it was found that an update to a module would take minutes with the optimisation and plant design software integration, rather than days or weeks with the manual process. However, the disadvantage is that more upfront work is required to convert engineering knowledge into mathematical terms and relationships. The research is limited by the publicly available nuclear power plant data. Future work could include applying this novel method to wider industrial plant design to understand the broader impact. The mathematical optimisation model could also be developed to include constraints considered in other research, such as assembly, operation and maintenance costs.
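To make the kind of formulation described above concrete, here is a minimal mixed-integer sketch of equipment-to-module assignment that penalises pipe runs crossing module boundaries and the number of modules used. It uses the open-source PuLP package as a stand-in for the MATLAB/Gurobi tooling in the thesis; the equipment list, connections and cost figures are invented placeholders, and within-module pipe routing is ignored.

```python
# Toy assignment MILP: place equipment into modules, minimising a penalty for
# pipe connections that cross module boundaries plus a fixed cost per module
# used. Data and costs are placeholders, not values from the thesis.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

equipment = ["pump", "heat_exchanger", "tank", "valve"]
modules = [0, 1]
connections = [("pump", "heat_exchanger"), ("heat_exchanger", "tank"), ("tank", "valve")]
inter_module_cost = 10.0   # assumed cost of a pipe run crossing module boundaries
module_use_cost = 5.0      # assumed fixed cost per module used

prob = LpProblem("module_assignment", LpMinimize)
x = {(e, m): LpVariable(f"x_{e}_{m}", cat=LpBinary) for e in equipment for m in modules}
use = {m: LpVariable(f"use_{m}", cat=LpBinary) for m in modules}
cross = {c: LpVariable(f"cross_{c[0]}_{c[1]}", cat=LpBinary) for c in connections}

prob += (inter_module_cost * lpSum(cross.values())
         + module_use_cost * lpSum(use.values()))

# each item sits in exactly one module; a module is "used" if anything is placed in it
for e in equipment:
    prob += lpSum(x[e, m] for m in modules) == 1
    for m in modules:
        prob += x[e, m] <= use[m]
# a connection "crosses" if its two ends sit in different modules
for (a, b) in connections:
    for m in modules:
        prob += cross[(a, b)] >= x[a, m] - x[b, m]

prob.solve()
for e in equipment:
    print(e, [m for m in modules if x[e, m].value() == 1])
```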
    • A novel service discovery model for decentralised online social networks.

      Yuan, Bo; University of Derby (2018-03)
      Online social networks (OSNs) have become the most popular Internet application, attracting billions of users to share information, disseminate opinions and interact with others in the online society. The unprecedented growth in the popularity of OSNs has made the use of social network services a pervasive phenomenon in our daily life. The majority of OSN service providers adopt a centralised architecture because of its management simplicity and content controllability. However, the centralised architecture for large-scale OSN applications incurs costly deployment of computing infrastructure and suffers from performance bottlenecks. Moreover, the centralised architecture has two major shortcomings: the single point of failure problem and the lack of privacy, which challenge uninterrupted service provision and raise serious privacy concerns. This thesis proposes a decentralised approach based on peer-to-peer (P2P) networks as an alternative to the traditional centralised architecture. Firstly, a self-organised architecture with self-sustaining social network adaptation has been designed to support decentralised topology maintenance. This self-organised architecture exhibits small-world characteristics, with short average path length and a large average clustering coefficient, to support efficient information exchange. Based on this self-organised architecture, a novel decentralised service discovery model has been developed to achieve semantic-aware and interest-aware query routing in the P2P social network. The proposed model encompasses a service matchmaking module to capture hidden semantic information for query-service matching and a homophily-based query processing module to characterise users' common social status and interests for personalised query routing. Furthermore, in order to optimise the efficiency of service discovery, a swarm intelligence inspired algorithm has been designed to reduce the query routing overhead. This algorithm employs an adaptive forwarding strategy that can adapt to various social network structures and achieves promising search performance with low redundant query overhead in dynamic environments. Finally, a configurable software simulator is implemented to simulate complex networks and to evaluate the proposed service discovery model. Extensive experiments have been conducted through simulations, and the obtained results demonstrate the efficiency and effectiveness of the proposed model.
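      As a rough illustration of the interest-aware forwarding idea described above, the sketch below scores neighbouring peers by similarity between a query's interest profile and each neighbour's profile and forwards to the top few. The interest vectors, cosine measure and fan-out are assumptions for the sketch; the thesis combines semantic matchmaking with social-status homophily rather than this simple measure.

```python
# Sketch of interest-based (homophily-style) neighbour scoring for query forwarding.
import math

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def forward_targets(query_interests, neighbours, fanout=2):
    """Rank neighbours by interest similarity and forward to the top `fanout`."""
    ranked = sorted(neighbours.items(),
                    key=lambda kv: cosine(query_interests, kv[1]),
                    reverse=True)
    return [peer for peer, _ in ranked[:fanout]]

neighbours = {
    "peer_a": {"music": 0.9, "travel": 0.2},
    "peer_b": {"sports": 0.8},
    "peer_c": {"music": 0.4, "movies": 0.7},
}
print(forward_targets({"music": 1.0, "movies": 0.5}, neighbours))
```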
    • Numerical Study of Tractor-Trailer Gap Aerodynamics

      Yang, Zhiyin; Lu, Yiling; Charles, Terrance Priestley (University of Derby, 2020-12-08)
      Aerodynamics has become an essential part of the design process for ground vehicles, in order to improve fuel consumption, lower emissions and increase the range of vehicles using different sources of power. A significant portion of the world's CO2 emissions is produced by ground vehicles, with a substantial share of that contributed by trucks. The boxy shape of trucks is desirable for carrying maximum payload, but a box-shaped geometry is not aerodynamically efficient. Several manufacturers have developed aerodynamic add-on devices, optimised to the shape of the truck, to lower emissions and improve range through a deeper understanding of the flow physics around the vehicle. The thesis reports an in-depth study of the flow field within the gap region of a tractor-trailer combination truck and of how several aerodynamic add-on devices reduce the overall drag of a truck. The gap region of a truck typically contributes about 20-25% of the overall vehicle drag and hence presents an opportunity for a considerable level of drag reduction. A basic two-box bluff body model (2D and 3D) was used to investigate how the flow field changes with the gap width between the two bluff bodies. A section of the thesis investigates the sudden increase in drag coefficient of the downstream cube for 2D tandem bluff bodies. Distinct flow patterns were observed in the gap and around the 2D tandem bodies at different gap ratios. The sudden change in drag coefficient for the 2D downstream bluff body is well captured numerically and is caused by the wake of the upstream cube impinging onto the front face of the downstream cube. A steady increase in drag coefficient is observed for the 3D cubes, consistent with previous experimental findings; this steady increase is due to the different vortical structures formed around the 3D cubes, which undergo a smooth transition and therefore produce a gradual rise in drag coefficient. A second study was conducted on a more realistic, simplified truck model in which the leading edges of the tractor were rounded off to manipulate the flow separation. As a result of rounding off the leading edges, flow separation reduced significantly, so that a major portion of the flow remained attached to the lateral walls of the tractor; this was seen to increase the flow entering the gap region between the tractor and trailer. Finally, several add-on devices, subdivided into tractor-mounted and trailer-mounted devices, were numerically assessed together with several other devices within the gap region. A significant level of drag reduction was achieved for the entire truck with these add-on devices, the highest being achieved with the base bleeding technique. Overall, the research has shown that it is important to control the flow condition within the gap region and maintain an even pressure on the front face of the trailer. The base bleeding method proved to be a vital technique to further reduce drag.
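      For readers unfamiliar with the quantities discussed above, the short sketch below evaluates the standard drag-coefficient definition, Cd = 2*Fd / (rho * v^2 * A), and the percentage drag reduction obtained from an add-on device. All numbers are placeholders, not results from the thesis.

```python
# Drag coefficient and percentage drag reduction; inputs are placeholder values.
def drag_coefficient(drag_force_n, air_density=1.225, speed_ms=25.0, frontal_area_m2=10.0):
    return 2.0 * drag_force_n / (air_density * speed_ms ** 2 * frontal_area_m2)

def drag_reduction_percent(cd_baseline, cd_modified):
    return 100.0 * (cd_baseline - cd_modified) / cd_baseline

cd_base = drag_coefficient(2600.0)   # baseline truck
cd_dev = drag_coefficient(2300.0)    # with a gap-region add-on device
print(f"Cd baseline={cd_base:.3f}, with device={cd_dev:.3f}, "
      f"reduction={drag_reduction_percent(cd_base, cd_dev):.1f}%")
```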
    • Parallaxical identities: Architectural semantics of contemporary arts institutions and the curation of cultural identity

      Tracada, Eleni; D'Arcy-Reed, Louis (University of Derby, 2019-09-19)
      The research project interrogates the identity-forming principles beneath contemporary arts museum architecture across physical and psychoanalytical dimensions. In identifying a metaphysical distance, or barrier, between the unconscious of the cultural architectural intervention and the identity within the cities' fabric, the state of a parallaxical identity manifests itself. The parallaxical identity, developed from Slavoj Žižek's parallax gap in psychoanalysis, elicits the presentation of ego-ideal, ideal-ego, and superego of architectural interventions seen as regenerative for culture, the city and its communities. Developing the parallax within architecture allows the thesis to include a rigorous interrogation of theory across the disciplines of psychoanalysis, architecture, contemporary art and museology, whilst also remediating the position of architectural practice beyond its conventional boundaries and rhetoric. Adopting a mixed methodology across theoretical and practical disciplines, the thesis reveals unconscious interpretations and embodied analyses through a weaving of para-architectural methods including photography, questionnaires, exploratory installations, written prose, and imagined cultural visualisations. Three major arts institutions act as case study analysands for psychoanalytical observation and diagnosis to take place, informing the resulting framework for observing parallaxical identities, whilst also producing recommendations for the future of the cultural institution of the museum/gallery. Alongside the thesis' position as a critical commentary, a supplementary PhD exhibition proposal centred on Parallaxical Identities questions the role of architecture as a discipline that necessitates para-architectural and psychoanalytic methodologies, whilst also presenting new artistic works in response to the thesis to reveal to audiences the haptic and hidden structures within architecture and the 'expected or unexpected' parallaxical interventions of place.
    • Power efficient and power attacks resistant system design and analysis using aggressive scaling with timing speculation

      Rathnala, Prasanthi; University of Derby (2017-05)
      Growing usage of smart and portable electronic devices demands that embedded system designers provide solutions with better performance and reduced power consumption. With the growth of IoT and embedded systems, not only the power and performance of these devices but also their security is becoming an important design constraint. In this work, a novel aggressive scaling approach based on timing speculation is proposed to overcome the drawbacks of traditional DVFS and, at the same time, to provide security against power analysis attacks. Dynamic voltage and frequency scaling (DVFS) is proven to be the most suitable technique for power efficiency in processor designs; due to its promising benefits, the technique continues to attract researchers' attention as a means of trading off power and performance in modern processor designs. Traditional DVFS has two issues: 1) because its operating points are pre-calculated, the system cannot adapt to modern process variations; 2) since Process, Voltage and Temperature (PVT) variations are not considered, large timing margins are added to guarantee safe operation in the presence of variations. The research work presented here addresses these issues by employing aggressive scaling mechanisms to achieve more power savings with increased performance. This approach uses in-situ timing error monitoring and recovery mechanisms to reduce the extra timing margins and to account for process variations. A novel timing error detection and correction mechanism is presented to achieve more power savings or higher performance. This technique has also been shown to improve the security of processors against differential power analysis attacks, which can extract secret information from embedded systems without knowing many details of the internal architecture of the device. Simulated and experimental data show that the novel technique can provide a performance improvement of 24% or power savings of 44%, with low area and power overhead. Overall, the proposed aggressive scaling technique improves power consumption and performance while increasing the security of processors against power analysis attacks.
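      The closed loop described above can be pictured with a small conceptual toy: lower the supply voltage step by step while the in-situ monitor sees no timing errors, and back off when an error is flagged. The error model, step size and the P proportional to V^2*f power relation are illustrative assumptions, not the proposed circuit or its measured figures.

```python
# Conceptual aggressive-scaling loop: speculate one voltage step lower while no
# timing error is detected; on an error, recover and keep the last safe point.
# The error model and all constants are illustrative assumptions only.
import random

V_NOMINAL, V_MIN, STEP = 1.10, 0.70, 0.05

def timing_error(voltage, critical_path_margin=0.85):
    """Toy in-situ monitor: errors become likely once voltage drops below the path's need."""
    return voltage < critical_path_margin * V_NOMINAL * random.uniform(0.95, 1.05)

def scale_down():
    v = V_NOMINAL
    while v - STEP >= V_MIN:
        if timing_error(v - STEP):
            break                       # error detected: recover, keep current point
        v -= STEP                       # no error: speculate one step lower
    return v

v_op = scale_down()
print(f"operating point {v_op:.2f} V, "
      f"dynamic power saving ~{100 * (1 - (v_op / V_NOMINAL) ** 2):.0f}% (P ∝ V²f at fixed f)")
```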
    • A prescriptive analytics approach for energy efficiency in datacentres.

      Panneerselvam, John; University of Derby (University of Derby, 2018-02-19)
      Given the evolution of Cloud Computing in recent years, users and clients adopting Cloud Computing for both personal and business needs have increased at an unprecedented scale. This has naturally led to increased deployments and implementations of Cloud datacentres across the globe. As a consequence of this increasing adoption of Cloud Computing, Cloud datacentres have become massive energy consumers and environmental polluters. Whilst the energy implications of Cloud datacentres are being addressed from various research perspectives, predicting the future trends and behaviours of workloads at the datacentres, and thereby reducing the active server resources, is one particular dimension of green computing gaining the interest of researchers and Cloud providers. However, this involves various practical and analytical challenges imposed by the increased dynamism of Cloud systems. The behavioural characteristics of Cloud workloads and users are still not perfectly understood, which restrains the reliability of the prediction accuracy of existing research works in this context. To this end, this thesis presents comprehensive descriptive analytics of Cloud workload and user behaviours, uncovering the causes and energy-related implications of Cloud Computing. Furthermore, the characteristics of Cloud workloads and users, including latency levels, job heterogeneity, user dynamicity, straggling task behaviours, energy implications of stragglers, job execution and termination patterns and the inherent periodicity among Cloud workload and user behaviours, are empirically presented. Driven by the descriptive analytics, a novel user behaviour forecasting framework has been developed, aimed at a tri-fold forecast of user behaviours: the session duration of users, the anticipated number of submissions and the arrival trend of the incoming workloads. Furthermore, a novel resource optimisation framework has been proposed to provision the optimum level of resources for executing jobs with reduced server energy expenditure and fewer job terminations. This optimisation framework encompasses a resource estimation module to predict the anticipated resource consumption of arriving jobs and a classification module to classify tasks based on their resource intensiveness. Both of the proposed frameworks have been verified theoretically and tested experimentally based on Google Cloud trace logs. Experimental analysis demonstrates the effectiveness of the proposed frameworks in terms of the reliability of the forecast results and in reducing the server energy expenditure spent on executing jobs at the datacentres.
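      Two of the building blocks mentioned above, workload arrival forecasting and classifying tasks by resource intensiveness, can be sketched very simply. The exponential smoothing forecast, the thresholds and the field names below are assumptions for illustration; the thesis builds its forecasts on descriptive analytics of the Google cluster traces rather than this toy.

```python
# Toy arrival forecast plus a threshold classification of task resource intensiveness.
def exp_smooth_forecast(arrivals_per_interval, alpha=0.4):
    level = arrivals_per_interval[0]
    for x in arrivals_per_interval[1:]:
        level = alpha * x + (1 - alpha) * level
    return level   # forecast for the next interval

def classify_task(cpu_request, mem_request, cpu_thresh=0.5, mem_thresh=0.5):
    return "resource-intensive" if cpu_request > cpu_thresh or mem_request > mem_thresh \
           else "lightweight"

print(exp_smooth_forecast([120, 135, 150, 160, 158]))
print(classify_task(cpu_request=0.7, mem_request=0.2))
```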
    • Proposing a framework for organisational sustainable development: integrating quality management, supply chain management and sustainability

      Liyanage, Kapila; Bastas, Ali (University of Derby, College of Engineering and Technology, 2019-07-04)
      Increasing worldwide demand for products and services is placing significant pressure on firms and supply chains, both operationally and financially, along with negative implications for our planet and the public. New approaches need to be adopted by all members of society, including businesses, to achieve sustainable development. On the other hand, enabling such integration from an organisational management perspective is not straightforward, due to the complexities and conflicts associated with the balanced integration of economic, environmental and social agendas. Aimed at addressing this important research requirement, a tailored conceptual framework is presented, constructed upon the synergistic principles of quality management (QM) and supply chain management (SCM), to facilitate the integration of triple bottom line sustainability into business management. As the first step of the research, a systematic literature review was conducted, evidencing research gaps and opportunities. A conceptual framework was established, and an implementation procedure to facilitate operationalisation of the framework was developed, including a business diagnostic tool contribution aiding current-state maturity assessment as one of the key implementation steps. These developments were verified, validated and improved through the Delphi method, and applied at an organisation in Cyprus as the final validation step, using the action research method. Positive relationships were established and verified conceptually between the ISO 9001 principles of QM, the supply chain integration principle of SCM, and organisational triple bottom line sustainability integration. The relative importance of the principles adopted in the framework was determined based on expert Delphi panel feedback. The action research demonstrated the application of the framework, outlined its contextual implementation factors, and concluded positive effects on the sustainable development of the participating organisation. Several contributions to knowledge were made, including the refinement of existing QM and SCM concepts for organisational sustainability improvement, and the formulation of a practical framework, including a novel diagnostic tool, to facilitate the integration of triple bottom line sustainability through QM and SCM. In particular, a new management perspective was introduced with implications for the many organisational managers that adopt ISO 9001 and supply chain integration principles, setting the way for extending these principles beyond their original QM and SCM agendas towards organisational sustainable development.
    • The Role of L-type Voltage Gated Calcium Channels in Ovarian Cancer

      Shiva, Subramanam; Luderman, William (University of Derby, 2021-03)
      Ovarian cancer is the most lethal gynaecological malignancy. Of the four ovarian cancer subtypes - serous, mucinous, endometroid and clear cell - serous ovarian cancer is the most common, comprising around 70% of cases. The median stage of diagnosis of ovarian cancer is stage III and patients present with widespread metastasis, usually facilitated by peritoneal fluid accumulation in the abdomen (ascites). Live single cancer cells and aggregates from the ascites represent a population with the potential to become metastatic and are thought to be the main contributors to disease recurrence, displaying increased chemoresistance after traditional first line therapy, which consists of extensive tumour debulking followed by combination platinum and taxane chemotherapy. For this reason, research into potential therapeutic targets in this important cell population is critical for development of more efficacious treatments for ovarian cancer. The opportunity to re-purpose existing pharmaceuticals for use as chemotherapeutics is an idea that has gained traction recently in the medical research field. This method circumvents the requirement for lengthy target identification and validation and extensive toxicity testing. One class of drugs that are a possible candidate for use against ovarian cancer are the dihydropyridines, which are antagonists of voltage-gated calcium channels (CaV). In ovarian cancer, CaV have already been shown to play a role in malignant behaviours such as migration and proliferation, although most expression and functional data in the literature are from ovarian cancer cell lines. There has been very little research on the role of CaV in primary ovarian cancer cells. This work addresses the question of whether CaV are expressed in primary ovarian cancer cells and whether channels in ovarian cancer cells are functional, particularly in cells derived from malignant ascitic fluid. Here, the results from immunohistochemical staining of ovarian tumour sections and RT-qPCR using normal ovaries, tumours and cells derived from malignant ascitic fluid suggest that in the majority of cases, CaV1.2 and CaV1.3 become expressed in ovarian cancer cells only after cells have been shed from the primary tumour. CaV1.2 and CaV1.3 mRNA is expressed in ascites derived cells, although the highest expression of these mRNAs was seen in a sample from a patient with mucinous ovarian cancer. In serous ovarian cancer cells from ascites, CaV1.2 protein was shown with flow cytometry to be highly expressed and immunofluorescent staining confirmed that this expression is localised to the nucleus. Functionally, dihydropyridine antagonism with nifedipine was found to prevent cell migration only in a 2-dimensional wound-healing model, whereas invasion of cell lines and ascites derived primary cells into basement membrane extract was unchanged. Cell lines display differing apoptotic responses to nifedipine, which triggers apoptosis in SKOV-3 but no response in OVCAR-8 at the same concentration. To assess channel functionality, fluorescent measurement of cytosolic Ca2+ flux using Fluo4-AM was performed on cell lines in the presence of the CaV agonist Bay K8644 and patch clamp electrophysiology was performed on cell lines and malignant ascites derived cells. Both of these techniques confirmed that no detectable L-type CaV current is present in the ovarian cancer cell lines. CaV current was observed in cells derived from malignant ascites, from the mucinous sample. 
Transient receptor potential (TRP) currents were also detected in OVCAR-8 cells, as well as in single cells and spheroids from the ascites-derived mucinous sample, and were most likely carried by the Ca2+-selective TRPV5 or TRPV6. These results suggest that functional L-type CaV are present in cancer cells from the malignant ascites of patients with mucinous ovarian cancer. Although CaV1.2 showed nuclear expression in serous samples, similar research in other cancers has shown that these channels may still function in migration even when expression is restricted to the nucleus, although nuclear channels would not be amenable to therapeutic targeting. These results fit into a wider context of contemporary research ascribing a greater role to CaV and ion channels in cancer, although more research needs to be performed in ovarian cancer ascites-derived cells to determine the possible function of these nuclear ion channels.
    • Service recommendation and selection in centralized and decentralized environments.

      Ahmed, Mariwan; University of Derby (2017-07-20)
      With the increasing use of web services in everyday tasks, we are entering an era of the Internet of Services (IoS). Service discovery and selection in both centralised and decentralised environments have become a critical issue in the area of web services, in particular when services have similar functionality but different Quality of Service (QoS). As a result, selecting a high quality service that best suits consumer requirements from a large list of functionally equivalent services is a challenging task. As the number of services in the discovery and selection process increases, there is a corresponding increase in service consumers and a consequent diversity in the Quality of Service (QoS) available. Increases on both sides lead to diversity in the demand and supply of services, resulting in partial matches between requirements and offers. Furthermore, it is challenging for customers to select suitable services from a large number of services that satisfy their functional requirements. Therefore, web service recommendation becomes an attractive solution for providing recommended services to consumers that satisfy their requirements. In this thesis, a service ranking and selection algorithm is first proposed that considers multiple QoS requirements and allows partially matched services to be counted as candidates for the selection process. Starting from the initial list of available services, the approach considers those services that partially match the consumer requirements and ranks them based on the QoS parameters, allowing the consumer to select a suitable service. In addition, since providing weight values for QoS parameters may not be an easy or intuitive task for consumers, an automatic weight calculation method has been included for consumer requirements, utilising the distance correlation between QoS parameters. The second aspect of the work in the thesis is QoS-based web service recommendation. With an increasing number of web services having similar functionality, it is challenging for service consumers to find suitable web services that meet their requirements. We propose a personalised service recommendation method using the LDA topic model, which extracts the latent interests of consumers and the latent topics of services in the form of probability distributions. In addition, the proposed method improves the accuracy of prediction of QoS properties by considering the correlation between neighbouring services, and returns a list of recommended services that best satisfy consumer requirements. The third part of the thesis concerns service discovery and selection in a decentralised environment. Service discovery approaches are often supported by centralised repositories that can suffer from single point of failure, performance bottleneck and scalability issues in large-scale systems. To address these issues, we propose a context-aware service discovery and selection approach in a decentralised peer-to-peer environment. In this approach, homophily similarity is used for bootstrapping and the distribution of nodes. The discovery process is based on the similarity of nodes and on the previous interactions and behaviour of the nodes, which aids discovery in a dynamic environment. Our approach considers not only service discovery but also the selection of a suitable web service, taking into account the QoS properties of the web services.
The major contribution of the thesis is providing comprehensive QoS-based service recommendation and selection in centralised and decentralised environments. With the proposed approach, consumers are able to select a suitable service based on their requirements. Experimental results on real-world service datasets show that the proposed approaches achieve better performance and efficiency in the recommendation and selection process.
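As a rough sketch of the QoS-aware ranking with partial matching described in this entry, the snippet below keeps services that miss some requirements as candidates but penalises them, min-max normalises each QoS attribute, and scores with a weighted sum. The attribute names, penalty and fixed weights are assumptions; the thesis derives the weights automatically from the distance correlation between QoS parameters rather than setting them by hand.

```python
# Toy QoS ranking with partial matches; data, weights and penalty are placeholders.
def normalise(values, higher_is_better=True):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def rank(services, required, weights, miss_penalty=0.2):
    names = list(weights)
    norm = {n: normalise([s[n] for s in services],
                         higher_is_better=(n != "latency_ms"))
            for n in names}
    scores = []
    for i, s in enumerate(services):
        score = sum(weights[n] * norm[n][i] for n in names)
        # count requirements this service fails to meet, but keep it as a candidate
        misses = sum(1 for n in names
                     if (n == "latency_ms" and s[n] > required[n])
                     or (n != "latency_ms" and s[n] < required[n]))
        scores.append((s["name"], score - miss_penalty * misses))
    return sorted(scores, key=lambda t: t[1], reverse=True)

services = [{"name": "s1", "latency_ms": 120, "availability": 0.99, "throughput": 40},
            {"name": "s2", "latency_ms": 60,  "availability": 0.95, "throughput": 55},
            {"name": "s3", "latency_ms": 90,  "availability": 0.999, "throughput": 30}]
required = {"latency_ms": 100, "availability": 0.98, "throughput": 35}
print(rank(services, required,
           weights={"latency_ms": 0.4, "availability": 0.4, "throughput": 0.2}))
```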
    • Shape grammar based adaptive building envelopes: Towards a novel climate responsive facade systems for sustainable architectural design in Vietnam.

      Ceranic, Boris; Tracada, Eleni; Nguyen, Ngoc Son Tung (University of Derby, 2020-01-14)
      The concept of a dynamic building enclosure is a relatively novel and unexplored area in sustainable architectural design and engineering and, as such, could be considered a new paradigm. These façade systems, kinetic and adaptive in their nature, can provide opportunities for significant reductions in building energy use and CO2 emissions, whilst at the same time having a positive impact on the quality of the indoor environment. Current research in this area reports a growing increase in the application of new generative design approaches and computational techniques to assist the design of adaptable kinetic systems and to help quantify the relationships between the building envelope and the environment. In this research, a novel application of shape grammar for the design of kinetic façade shading systems has been developed, based upon a generative design approach that controls the creation of complex shape composites, starting from a set of initial shapes and pre-defined rules of their composition. Shape grammars provide an interesting generative design archetype in which a set of shape rules can be recursively applied to create a language of designs, with the rules themselves becoming descriptors of the generated designs. The research is inspired by traditional patterns and ornaments in Vietnam, seen as an important symbol of its cultural heritage, especially in the era of globalisation, where many developing countries, including Vietnam, are experiencing substantial modernist transformations in their cities. These transformations are often perceived as a cause of the loss of both visual and historical connections with indigenous architectural origins and traditions. This research hence investigates how these aspects of spatial culture could be interpreted and used in the design of novel façade shading systems that draw their inspiration from Vietnamese vernacular styles and cultural identity, while also satisfying modern building performance demands, such as reduced energy consumption and enhanced indoor comfort. This led to the exploration of creative form-finding for different building façade shading configurations, the performance of which was tested via simulation and evaluation of indoor daylight levels and corresponding heating and cooling loads. The developed façade structures are intended to adapt in real time, responding both to the results of the simulation and to data-regulation protocols responsible for sensing and processing building performance data. To this end, a strategy for BIM-integrated sustainable design analysis (SDA) has also been set out, as a framework for exploring the integration of building management systems (BMS) into smart building environments (SBEs). Finally, the research reports the findings of a prototype system development and its testing, allowing continuous evaluation of multiple solutions and presenting an opportunity for further improvement via multi-objective optimisation, which would be very difficult, if not impossible, with conventional design methods.
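      The recursive rule application at the heart of a shape grammar can be illustrated with a very small symbolic toy: each rule rewrites one symbol into a composition of symbols, and repeated application generates a "language" of patterns. The symbols and rules below are invented stand-ins, not the Vietnamese motifs or the rule set developed in the thesis, and real shape grammars operate on geometry rather than strings.

```python
# Toy recursive rule application in the spirit of a shape grammar.
rules = {
    "PANEL":   ["FRAME", "MOTIF", "FRAME"],
    "MOTIF":   ["LOTUS", "LATTICE", "LOTUS"],
    "LATTICE": ["STRIP", "STRIP"],
}

def derive(symbol, depth):
    """Recursively apply rules until terminals or the depth limit is reached."""
    if depth == 0 or symbol not in rules:
        return [symbol]
    out = []
    for s in rules[symbol]:
        out.extend(derive(s, depth - 1))
    return out

for d in range(3):
    print(d, derive("PANEL", d))
```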
    • Simulation-based impact analysis for sustainable manufacturing design and management

      University of Derby (2018)
      This research focuses on effective decision-making for sustainable manufacturing design and management. The research contributes to the decision-making tools that enable sustainability analysts to capture aspects of the economic, environmental and social dimensions in a common framework. The framework enables practitioners to conduct a sustainability impact analysis of a real or proposed manufacturing system and to use the outcome to support sustainability decisions. In the past, industries focused more on the economic aspects in gaining and sustaining their competitive positions; this has changed in recent years following the Brundtland report, which centred on incorporating the sustainability of future generations into our decisions for meeting today's needs (Brundtland, 1987). Government regulations and legislation, coupled with changes in consumers' preference for ethical and environmentally friendly products, are other factors that are challenging and changing the way companies and organisations perceive and drive their competitive goals (Gu et al., 2015). Another challenge is the lack of adequate tools to address the dynamism of the manufacturing environment and the need to balance the business's competitive goals with sustainability requirements. The launch of the Life Cycle Sustainability Analysis (LCSA) framework further emphasised the need for the integration and analysis of the interdependencies of the three dimensions for effective decision-making and the control of unintended consequences (UNEP, 2011). Various studies have also demonstrated the importance of interdependence impact analysis and the integration of the three sustainability dimensions at the product, process and system levels (Jayal et al., 2010; Valdivia et al., 2013; Eastwood and Haapala, 2015). Although there are tools capable of assessing the performance of one or two of the three sustainability dimensions, these tools have not adequately integrated the three dimensions or addressed holistic sustainability issues. Hence, this research proposes an approach for successful interdependence impact analysis and trade-off amongst the three sustainability dimensions, enabling support for effective decision-making in a manufacturing environment. This novel approach explores and integrates the concepts and principles of existing sustainability methodologies and frameworks and the simulation model construction process into a common descriptive framework for process-level assessment. The thesis deploys a Delphi study to verify and validate the descriptive framework and demonstrates its applicability in a case study of a real manufacturing system. The results of the research demonstrate the completeness, conciseness, correctness, clarity and applicability of the descriptive framework. Thus, the outcome of this research is a simulation-based impact analysis framework which provides a new way for sustainability practitioners to build an integrated and holistic computer simulation model of a real system, capable of assessing both the production and sustainability performance of a dynamic manufacturing system.
    • Smart City: A Traffic Signal Control System for Reducing the Effects of Traffic Congestion in Urban Environments

      Hardy, James (University of Derby, 2019-06-10)
      This thesis addresses the detrimental effects of road traffic congestion in the Smart City environment. Urban congestion is a recognisable problem that affects much of the world's population through delays and pollution, although the delays are not an entirely modern phenomenon. The progressive increase in urbanisation and in the number of powered road vehicles has led to an increasing need to control traffic in order to maintain flows and avoid gridlock situations. Signalised methods typically control flows through reduction, frequently increasing delays, holding traffic within the urban area and increasing local pollution. Current levels of vehicular congestion may relate to an increase in traffic volumes of 300% over 50 years, while traffic control methods based on delaying moving traffic have changed very little. Mobility and socio-economic trends indicate that the number of active road vehicles will increase, or at least remain at current levels, in the foreseeable future, and as a result congestion will continue to be a problem. The Smart City concept is intended to improve the urban environment through the application of advanced technology. Within the context of road transportation, the urban area consists of a wide variety of low- to moderate-speed transportation systems, ranging from pedestrians to heavy goods vehicles. Urban roadways have a large number of junctions where the transport systems and flows interact, presenting additional and more complex challenges compared to high-speed dual carriageways and motorways. Congestion is a function of population density, while car ownership is an indicator of affluence; road congestion can therefore be seen as an indicator of local economic and social prosperity. Congestion cannot be resolved while there is a social benefit to urbanisation, high-density living and a materialistic population. Recognising that congestion cannot be resolved, this research proposes a method to reduce the undesirable consequences and side effects of traffic congestion, such as transit delays, inefficient fuel use and chemical pollution, without adversely affecting the social and economic benefits. Existing traffic signal systems manage traffic flows based on traffic arrivals, prediction and traffic census models. Flow modification is accomplished by introducing delays through signal transitions in order to prioritise a conflicting direction. It is incorrectly assumed that traffic will always be able to move and therefore that signal changes will always have an effect. Signal transitions result in lost time at the junction. Existing Urban Traffic Control systems have limited capability: they are unable to adapt immediately to unexpected conditions, have a finite response, cannot modify stationary flow and may introduce needless losses through inefficient transitions. This research proposes and develops Available Forward Road Capacity (AFRC), an algorithm which detects the onset of congestion, actively promotes clearance, prevents unnecessary losses due to ineffective transitions and can influence other AFRC-equipped junctions to ensure the most efficient use of unoccupied road capacity. AFRC is an additional function that can be applied to existing traffic controllers, becoming active only during congestion conditions; as a result it cannot increase congestion above current levels. By reducing the duration of congestion periods, AFRC reduces delays, improves the efficiency of fuel use and reduces pollution.
AFRC is a scalable, multi-junction generalised solution which is able to manage traffic from multiple directions without prior tuning; it can detect and actively resolve problems with stationary traffic. AFRC is evaluated using a commercial traffic simulation system and is shown to resolve inbound and outbound congestion in less time than Vehicle Actuated and Fully Timed systems when simulating both morning and evening rush-hours.
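The published abstract does not give the AFRC algorithm itself; the following is only a reader's sketch of the gating idea it describes, in which a green phase is granted only to an approach whose downstream link has spare capacity, so transitions are not wasted on traffic that cannot move. The link fields and thresholds are assumptions.

```python
# Reader's sketch (not the AFRC algorithm): prefer the approach with queued
# vehicles and the most usable forward capacity; hold the phase otherwise.
def available_forward_capacity(link):
    return max(0, link["capacity_veh"] - link["occupancy_veh"])

def choose_phase(approaches):
    usable = [(min(a["queue_veh"], available_forward_capacity(a["downstream"])), name)
              for name, a in approaches.items()]
    best_flow, best = max(usable)
    return best if best_flow > 0 else None   # None: hold current phase, avoid lost time

approaches = {
    "north": {"queue_veh": 12, "downstream": {"capacity_veh": 30, "occupancy_veh": 30}},
    "east":  {"queue_veh": 7,  "downstream": {"capacity_veh": 25, "occupancy_veh": 10}},
}
print(choose_phase(approaches))   # -> "east": north's exit is full, so a green there is wasted
```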
    • A Systematic Review Approach Using the Behaviour Change Wheel, COM-B Behaviour Model and Theoretical Domains Framework to Evaluate Physical Activity Engagement in a University Setting

      Bussell, Chris; Faghy, Mark; Staples, Vicki; Lipka, Sigrid; Ndupu, Lawrence (University of Derby, Engineering and Technology, 2021-09-09)
      Introduction: Physical activity has been recognised to offer health benefits and reduce the risks of developing chronic diseases such as diabetes, cardiovascular diseases, hypertension, cancer, depression and atherosclerosis. However, even with the known health benefits of physical activity, over a quarter of adults globally are physically inactive, which is a serious public health concern and calls for concerted efforts to increase physical activity levels in diverse settings. A university is a unique setting in which to promote health-enhancing behaviours such as physical activity, because it offers opportunities to be active (e.g., in-built sports facilities), provides flexible working conditions that allow staff and students a reasonable level of autonomy in managing their time, and is endowed with a highly educated and well-informed staff base, which has previously been shown to influence individuals' engagement in physical activity. Therefore, the overall aim of the PhD research project was to understand the barriers and enablers to physical activity among university staff and students, design an intervention informed by this understanding, and implement the intervention to address these barriers, in order to create behaviours that lead to better engagement in physical activity. Methods: A mixed-methods experimental design was utilised throughout the research, incorporating both qualitative (group interviews) and quantitative (surveys) data collection. The four experimental studies that make up this programme of work were designed using established behaviour change models, i.e., the Behaviour Change Wheel (BCW), the Capability, Opportunity, Motivation-Behaviour (COM-B) model and/or the Theoretical Domains Framework (TDF). The qualitative data were analysed in NVivo 12 using deductive content analysis, while the quantitative data were analysed using SPSS statistical software 26.0, with the significance level set at 0.05. Results: Six prominent domains were identified as enablers and barriers to physical activity among university staff and students: knowledge; social influences; social/professional role and identity; environmental context and resources; beliefs about capabilities; and intentions (study 1). About 78.0% of the administrative staff and 67.0% of the PhD students were physically inactive, i.e., achieving less than 600 MET-minutes/week of moderate-intensity physical activity. A multiple regression analysis showed that, of the 14 domains of the TDF, the 'physical skills' domain (t(106) = 2.198, p = 0.030) was the only significant predictor of physical inactivity among the administrative staff, while 'knowledge' (t(99) = 2.018, p = 0.046) and 'intentions' (t(99) = 4.240, p = 0.001) were the only predictors of physical inactivity amongst the PhD students (study 2). The administrative staff who were assigned to supervised exercise sessions (experimental group) reported higher physical skills scores and overall physical activity levels compared to the control group (study 3). The PhD students allocated to the education and intentions group, who received educational materials and were asked to form implementation intentions of the times, days and places at which they intended to carry out physical activity, reported higher overall physical activity levels compared to the other treatment groups, i.e., the intentions only, education only and control groups (study 4).
Conclusion: This thesis contributes to the knowledge on adults' physical activity by detailing the development, implementation, and assessment of a bespoke brief 4-week behaviour change intervention that effectively increased university administrative staff and PhD students' total physical activity levels, as well as the time they spent in physical activity weekly. The university was established as a unique setting in which to promote health-enhancing behaviours such as physical activity. Therefore, theory-based interventions underpinned by the BCW, COM-B model and TDF may provide an effective strategy to improve university staff and students' engagement in physical activity, as well as their overall wellbeing.
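The 600 MET-minutes/week inactivity threshold cited above can be illustrated with a short worked example. The MET values per activity used below are generic approximations (of the kind used in IPAQ-style scoring), not data from the studies.

```python
# Worked example of the 600 MET-minutes/week classification threshold.
MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}   # assumed MET values

def weekly_met_minutes(sessions):
    """sessions: list of (activity, days_per_week, minutes_per_day)."""
    return sum(MET[a] * days * mins for a, days, mins in sessions)

total = weekly_met_minutes([("walking", 3, 20), ("moderate", 1, 30)])
print(total, "MET-min/week ->", "inactive" if total < 600 else "active")
```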
    • Thermo-mechanical reliability studies of lead-free solder interconnects

      Mallik, Sabuj; Lu, Yiling; Depiver, Joshua Adeniyi (University of Derby, 2021-06-03)
      Solder interconnections, also known as solder joints, are the weakest link in electronics packaging. The reliability of these miniature joints is of utmost interest, especially in safety-critical applications in the automotive, medical, aerospace, power grid and oil and drilling sectors. Studies have shown that critical thermal and mechanical loading of these joints culminates in accelerated creep, fatigue, and combinations of these induced failures. Ball grid array (BGA) components are an integral part of many electronic modules functioning in mission-critical systems. This study investigates the response of solder joints in BGAs to the key reliability-influencing parameters derived from creep, visco-plastic and fatigue damage of the joints: plastic strain, shear strain, plastic shear strain, creep energy density, strain energy density, deformation, equivalent (von Mises) stress, etc. The obtained magnitudes of these parameters are input into established life prediction models – Coffin-Manson, Engelmaier, Solomon (low cycle fatigue) and Syed (accumulated creep energy density) – to determine the fatigue lives of several BGA assemblies. The joints are subjected to thermal, mechanical and random vibration loadings. Finite element analysis (FEA) in a commercial software package is employed to model and simulate the responses of the solder joints of the representative assemblies' finite element models. As the magnitude and rate of degradation of solder joints in a BGA depend significantly on the composition of the solder alloys used to assemble the BGA on the printed circuit board, this research studies the response of various mainstream lead-free Sn-Ag-Cu (SAC) solders (SAC305, SAC387, SAC396 and SAC405) and benchmarks them against the eutectic lead-based solder Sn63Pb37. In the creep response study, the effects of thermal ageing and temperature cycling on the behaviour of these solder alloys are explored. The results show superior creep properties for the SAC405 and SAC396 lead-free solder alloys. The lead-free SAC405 solder joint is the most effective solder under thermal cycling conditions, and the SAC396 solder joint is the most effective under isothermal ageing. The findings show that SAC405 and SAC396 solders accumulated the lowest magnitudes of stress, strain rate, deformation rate and strain energy density of all the solders considered in this study. The hysteresis loops show that lead-free SAC405 has the lowest dissipated energy per cycle and thus the highest fatigue life, followed by the eutectic lead-based Sn63Pb37 solder; the solders with the highest dissipated energy per cycle were the lead-free SAC305, SAC387 and SAC396 alloys. In the thermal fatigue life prediction research, four lead-free solder alloys (SAC305, SAC387, SAC396 and SAC405) and one eutectic lead-based alloy (Sn63Pb37) are evaluated against their thermal fatigue lives (TFLs) to predict their mean time to failure for preventive maintenance advice. Five finite element (FE) models of the BGA assemblies with the different solder alloy compositions and properties are created in SolidWorks. The models are subjected to standard IEC 60749-25 temperature cycling in the ANSYS 19.0 Mechanical package environment. SAC405 joints have the highest predicted TFL of circa 13.2 years, while SAC387 joints have the shortest life of circa 1.4 years. The predicted lives are inversely proportional to the areas of the stress-strain hysteresis loops of the solder joints.
The prediction models are significantly consistent in their predicted magnitudes across the solder joints, irrespective of the damage parameters used. The research identifies the failure modes and damage mechanisms that drive solder joint degradation and explains the essential variation in the models' predicted values. This investigation presents a method of managing the preventive maintenance time of BGA electronic components in mission-critical systems, and recommends developing a novel life prediction model based on a combination of the damage parameters for enhanced prediction. The FEA random vibration simulation test results showed that the different solder alloys have comparable performance during random vibration testing. The fatigue life results show that SAC405 and SAC396 have the highest fatigue lives before being prone to failure. From the FEA simulation outcomes combined with Coffin-Manson's empirical formula, the author can predict the fatigue life of solder joint alloys to an average accuracy of around 93% in an actual service environment, such as that experienced under the hood of an automobile or in aerospace applications. Therefore, it is concluded that the combination of FEA simulation and empirical formulas employed in this study could be used in the computation and prediction of the fatigue life of solder joint alloys subjected to random vibration. Based on the thermal and mechanical responses of the lead-free SAC405 and SAC396 solder alloys, they are recommended as suitable replacements for the lead-based eutectic Sn63Pb37 solder alloy for improved thermo-mechanical performance of devices subjected to random (non-deterministic) vibration. The outcomes of the FEA simulation studies are validated against experimental and analytical results reviewed in published, peer-reviewed literature.
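One of the life prediction models named above, Coffin-Manson, estimates cycles to failure from the plastic shear strain range per cycle via N_f = 0.5 * (delta_gamma / (2 * eps_f'))**(1 / c). The sketch below evaluates that relation; the fatigue ductility coefficient and exponent are placeholder values, not the constants fitted in the thesis.

```python
# Coffin-Manson style life estimate; eps_f (ductility coefficient) and c
# (fatigue ductility exponent) below are placeholder material constants.
def coffin_manson_cycles(delta_gamma, eps_f=0.325, c=-0.5):
    """Cycles to failure from the plastic shear strain range per thermal cycle."""
    return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)

for dg in (0.01, 0.02, 0.04):
    print(f"delta_gamma={dg:.2f} -> Nf ~ {coffin_manson_cycles(dg):,.0f} cycles")
```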
    • Towards an efficient indexing and searching model for service discovery in a decentralised environment.

      Miao, Dejun; University of Derby (2018-05)
      Given the growth and outreach of new information, communication, computing and electronic technologies in various dimensions, the amount of data has increased explosively in recent years. Centralised systems suffer limitations in dealing with this issue because all data are stored in central data centres. Thus, decentralised systems are attracting more attention and increasing in popularity. Moreover, efficient service discovery mechanisms have naturally become an essential component in both large-scale and small-scale decentralised systems. This research study is aimed at modelling a novel, efficient indexing and searching model for service discovery in decentralised environments comprising numerous repositories with massive numbers of stored services. The main contributions of this research study can be summarised in three components: a novel distributed multilevel indexing model, an optimised searching algorithm and a new simulation environment. Indexing models have been widely used for efficient service discovery; for instance, the inverted index is one of the popular indexing models used for service retrieval in consistent repositories. However, redundancies are inevitable in the inverted index, which makes the service discovery and retrieval process significantly time-consuming. This thesis proposes a novel distributed multilevel indexing model (DM-index), which offers an efficient solution for service discovery and retrieval in distributed service repositories comprising massive numbers of stored services. The architecture of the proposed indexing model encompasses four hierarchical levels to eliminate redundant information in service repositories, to narrow the search space and to reduce the number of traversed services whilst discovering services. Distributed Hash Tables have been widely used to provide data lookup services with logarithmic message costs while requiring maintenance of only limited amounts of routing state. This thesis develops an optimised searching algorithm, named Double-layer No-redundancy Enhanced Bi-direction Chord (DNEB-Chord), to handle retrieval requests in distributed destination repositories efficiently. The DNEB-Chord algorithm achieves faster routing performance through a double-layer routing mechanism and an optimal routing index. The efficiency of the developed indexing and searching model is evaluated through theoretical analysis and experimental evaluation in a newly developed simulation environment, named the Distributed Multilevel Bi-direction Simulator (DMBSim), which can be used as a cost-efficient tool for exploring various service configurations, user retrieval requirements and other parameter settings. Both the theoretical validation and the experimental evaluations demonstrate that the service discovery efficiency of the DM-index outperforms the sequential index and inverted index configurations. Furthermore, the experimental evaluation results demonstrate that the DNEB-Chord algorithm performs better than Chord in terms of reducing the incurred hop counts. Finally, simulation results demonstrate that the proposed indexing and searching model can achieve better service discovery performance in large-scale decentralised environments comprising numerous repositories with massive numbers of stored services.
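      The DHT routing that DNEB-Chord builds on can be illustrated with a minimal sketch of plain Chord-style lookup using finger tables, which reaches the key's successor in roughly logarithmic hops. This is ordinary Chord on a static ring, not the bi-directional, double-layer variant developed in the thesis; the node identifiers are arbitrary examples.

```python
# Minimal Chord-style key lookup with finger tables (static ring, no churn).
M = 6                      # identifier bits -> ring of 2**M positions
RING = 2 ** M
NODES = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(ident):
    """First node at or after ident on the ring."""
    for n in NODES:
        if n >= ident % RING:
            return n
    return NODES[0]

def between(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b

def fingers(n):
    return [successor((n + 2 ** i) % RING) for i in range(M)]

def lookup(start, key):
    n, hops = start, 0
    while not between(key, n, successor((n + 1) % RING)):
        # jump to the closest preceding finger that does not overshoot the key
        for f in reversed(fingers(n)):
            if between(f, n, key) and f != key:
                n = f
                break
        else:
            n = successor((n + 1) % RING)
        hops += 1
    return successor((n + 1) % RING), hops

print(lookup(start=1, key=54))   # (owning node, hops taken)
```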
    • A Trust Evaluation Framework in Vehicular Ad-Hoc Networks

      Adnane, Asma; Franqueira, Virginia N. L.; Anjum, Ashiq; Ahmad, Farhan (University of Derby, College of Engineering and Technology, 2019-03-11)
      Vehicular Ad-Hoc Networks (VANET) are a cutting-edge technology providing connectivity to millions of vehicles around the world. VANET is central to the future of Intelligent Transportation Systems (ITS) and plays a significant role in the success of emerging smart cities and the Internet of Things (IoT). VANET provides a unique platform for vehicles to intelligently exchange critical information, such as collision avoidance or steep-curve warnings. It is therefore paramount that this information remains reliable and authentic, i.e., originated from a legitimate and trusted vehicle. Due to the sensitive nature of the messages in VANET, a secure, attack-free and trusted network is imperative for the propagation of reliable, accurate and authentic information. In the case of VANET, ensuring such a network is extremely difficult due to its large-scale and open nature, making it susceptible to a diverse range of attacks including man-in-the-middle (MITM), replay, jamming and eavesdropping. Trust establishment among vehicles can increase network security by identifying dishonest vehicles and revoking messages with malicious content. For this purpose, several trust models (TMs) have been proposed, but currently there is no effective way to compare how they would behave in practice under adversarial conditions. Further, the proposed TMs are mostly context-dependent; because vehicles are randomly distributed and highly mobile, context changes very frequently in VANET, and ideally TMs should perform well in every context. Therefore, it is important to have a common framework for the validation and evaluation of TMs. In this thesis, we propose a novel Trust Evaluation And Management (TEAM) framework, which serves as a unique paradigm for the design, management and evaluation of TMs in various contexts and in the presence of malicious vehicles. Our framework incorporates an asset-based threat model and ISO-based risk assessment for the identification of attacks against critical risks. TEAM has been built using VEINS, an open source simulation environment which incorporates the SUMO traffic simulator and the OMNeT++ discrete event simulator. The framework has been tested with the implementation of three types of TM (data-oriented, entity-oriented and hybrid) under four different VANET contexts based on the mobility of both honest and malicious vehicles. Results indicate that TEAM is effective for simulating a wide range of TMs, where efficiency is evaluated against different Quality of Service (QoS) and security-related criteria. Such a framework may be instrumental for planning smart cities and for car manufacturers.
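      To make the idea of a pluggable trust model concrete, the snippet below shows one very simple rule a framework like this could host: blend a vehicle's directly observed trust with neighbour recommendations and accept a message only above a threshold. The weights, threshold and update rule are assumptions for the sketch, not any of the three trust models evaluated in the thesis.

```python
# Illustrative trust update for a received VANET message (toy rule, not TEAM itself).
def update_trust(direct, recommendations, w_direct=0.6, w_rec=0.4):
    rec = sum(recommendations) / len(recommendations) if recommendations else direct
    return w_direct * direct + w_rec * rec

def accept_message(sender_trust, threshold=0.5):
    return sender_trust >= threshold

trust = update_trust(direct=0.7, recommendations=[0.9, 0.4, 0.6])
print(f"trust={trust:.2f}, accept={accept_message(trust)}")
```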
    • Vulnerability and adaptive capacity of rural coastal fishing communities in Ghana to climatic and socio-economic stressors

      Davies-Vollum, Kathrine; Raha, Debadayita; Koomson, Daniel (University of Derby, 2021-08-13)
      The global fishing industry is a source of livelihood for about 820 million people. About 90% of these are small-scale fisherfolk and traders living in rural fishing-dependent communities in tropical, developing and least developed countries. Although the industry generates about $362 billion annually, fishing-dependent communities are generally characterised by chronic poverty and deprivation. Decreases in fish productivity and availability in tropical regions, as well as increases in the frequency and intensity of extreme weather events due to climate change processes, have exacerbated the plight of fishing-dependent communities. In 1970, an agenda for research and development of small-scale fishing was set out; however, rural fishing communities are still considered the poorest of the poor today. They are also considered the most vulnerable, as future climate change predictions indicate more extreme events and further reductions in maximum fish catch and revenue potentials. Therefore, there are continued calls for research efforts to understand the impacts of multiple climatic and socio-economic stressors on small-scale fishing livelihoods, in order to identify viable, context-specific management and policy interventions that can reduce their vulnerability. Using two rural coastal fishing communities in Ghana as a case study, the purpose of this study was to explicate how rural coastal fishing-dependent communities in a tropical context are impacted by the interaction of climatic and socio-economic factors, and to identify viable policy and management options to enhance their adaptive capacity. Three key research questions guided the study: (i) what are the various factors that impact small-scale fishing livelihoods and households, and how do they interact to shape vulnerability? (ii) how are the fishing communities adapting to current livelihood stressors? and (iii) what context-specific policy and management interventions are needed to enhance their adaptive capacity and safeguard their wellbeing? The Intergovernmental Panel on Climate Change's (IPCC) vulnerability framework and the Sustainable Livelihoods Approach (SLA) were integrated as the theoretical underpinnings of the study. A mixed-methods approach was adopted: a total of 120 fishing households were selected and surveyed through a stratified snowball sampling technique, and several gender- and age-group-disaggregated focus groups with participatory activities, semi-structured interviews and key informant discussions were also conducted to collect primary data. These were combined with climatic data to assess each household's vulnerability and, through triangulated analyses, to explicate how it is mediated by socio-cultural, institutional and policy structures.