Fault propagation analysis is a process used to determine the consequences of faults residing in a computer system. A typical computer system consists of diverse components (e.g., electronic and software components); thus, the faults contained in these components tend to possess diverse characteristics. How to describe and model such diverse faults, and how to determine their propagation through different components, are challenging problems in fault propagation analysis. This paper proposes an ontology-based approach: an integrated method allowing for the generation, injection, and propagation through inference of diverse faults at an early stage of the design of a computer system. The results generated by the proposed framework can verify system robustness and identify safety and reliability risks with limited design-level information. We apply the ontological framework to analyze an example safety-critical computer system. The analysis result shows that the proposed framework is capable of inferring fault propagation paths through software and hardware components and is effective in predicting the impact of faults.
In this survey, we present an overview of (Modal) Temporal Logic Programming in view of its application to Knowledge Representation and Declarative Problem Solving. The syntax of this extension of logic programs results from combining usual rules with temporal modal operators, as in Linear-time Temporal Logic (LTL). We focus on the main recent results on the non-monotonic formalism called Temporal Equilibrium Logic (TEL), which is defined for the full syntax of LTL but involves a model selection criterion based on Equilibrium Logic, a well-known logical characterization of Answer Set Programming (ASP). As a result, we obtain a proper extension of the stable model semantics for the general case of temporal formulas in the syntax of LTL. We recall the basic definitions for TEL and its monotonic basis, the temporal logic of Here-and-There (THT), and study the differences between finite and infinite trace length. We also provide further useful results, such as translations into other formalisms like Quantified Equilibrium Logic and second-order LTL, and some techniques for computing temporal stable models based on automata constructions. In the remainder of the paper, we focus on practical aspects, defining a syntactic fragment called (modal) temporal logic programs that is closer to ASP, and explaining how this fragment has been exploited in the construction of the solver telingo, a temporal extension of the well-known ASP solver clingo that uses its incremental solving capabilities.
Data have played a role in urban mobility policy planning for decades, especially in forecasting demand, but much less so in policy evaluations and assessments. The surge in the availability and openness of (big) data over the last decade seems to provide new opportunities to meet the demand for evidence-based policymaking. This paper reviews how different types of data are employed in assessments published in academic journals by analyzing 74 cases. Our review finds that (a) the academic literature has so far provided limited insight into new data developments in policy practice; (b) research shows that the new types of big data provide new opportunities for evidence-based policymaking; however, (c) they cannot replace traditional data sources (surveys and statistics). Instead, combining big data with survey and Geographic Information System data in ex-ante assessments, as well as in developing decision support tools, is found to be the most effective. This could help policymakers not only gain much more insight from policy assessments but also avoid the limitations of any one type of data. Finally, current research projects are rather data supply-driven. Future research should engage with policy practitioners to reveal best practices, constraints, and the potential of more demand-driven data use in mobility policy assessments in practice.
Recently, many question answering systems that derive answers from linked data repositories have been developed. The purpose of this survey is to identify the common features and approaches of semantic question answering (SQA) systems, even though many different prototype systems have been designed. SQA systems use a formal query language like SPARQL and knowledge of a specific vocabulary. This paper analyses different frameworks, architectures, and systems that perform SQA and classifies them according to several criteria.
Seasonal effects can significantly impact the robustness of socio-technical systems (STS) to demand fluctuations. There is an increasing need for novel design approaches that can support capacity planning decisions for enhancing the robustness of STS against seasonal effects. This paper proposes a new network motif-based approach to supporting capacity planning in STS for improved seasonal robustness. Network motifs are underlying nonrandom subgraphs within a complex network. In this approach, we introduce three motif-based metrics for system performance evaluation and capacity planning decision-making: the imbalance score of a motif (e.g., a local service network), a measure of a motif’s seasonal robustness, and a capacity planning decision criterion. Based on these three metrics, we validate that the sensitivity of STS performance to seasonal effects is highly correlated with imbalanced capacity between service nodes in an STS. Correspondingly, we formulate a design optimisation problem to improve the robustness of STS by rebalancing the resources at critical service nodes. To demonstrate the utility of the approach, a case study on the Divvy bike-sharing system in Chicago is conducted. With a focus on size-3 motifs (subgraphs consisting of three docked stations), we find a significant correlation between the difference in the number of docks among the stations in a motif and the return/rental performance of that motif under seasonal changes. Guided by this finding, our design approach can successfully balance out the number of docks between those stations that have caused the most severe seasonal perturbations. The results also imply that network motifs can be an effective local structural representation in support of STS robust design. Our approach can be applied generally to other STS whose performance is significantly impacted by seasonal changes, for example, supply chain networks, transportation systems, and power grids.
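The abstract does not define the imbalance score precisely, but the idea of quantifying capacity imbalance within a size-3 motif can be sketched in a few lines (the spread-over-mean metric below is our own hypothetical stand-in, not the paper's definition):

```python
def imbalance(dock_counts):
    """Hypothetical imbalance score for the stations in one motif:
    spread of dock counts relative to their mean. 0 means perfectly
    balanced; larger values mean more imbalanced capacity."""
    mean = sum(dock_counts) / len(dock_counts)
    return (max(dock_counts) - min(dock_counts)) / mean

print(imbalance([20, 20, 20]))  # 0.0 (balanced motif)
print(imbalance([10, 20, 30]))  # 1.0 (highly imbalanced motif)
```

Under the paper's finding, motifs scoring high on such a metric would be the first candidates for dock rebalancing.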
The vertex-centric programming model is now widely used for processing large graphs. User-defined vertex programs are executed in parallel over every vertex of a graph, but the imperative and explicit message-passing style of existing systems makes defining a vertex program unintuitive and difficult. This article presents Fregel, a purely functional domain-specific language for processing large graphs and describes its model, design, and implementation. Fregel is a subset of Haskell, so Haskell tools can be used to test and debug Fregel programs. The vertex-centric computation is abstracted using compositional programming that uses second-order functions on graphs provided by Fregel. A Fregel program can be compiled into imperative programs for use in the Giraph and Pregel+ vertex-centric frameworks. Fregel’s functional nature without side effects enables various transformations and optimizations during the compilation process. Thus, the programmer is freed from the burden of program optimization, which is manually done for existing imperative systems. Experimental results for typical examples demonstrated that the compiled code can be executed with reasonable and promising performance.
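To make the vertex-centric model concrete, here is a minimal Pregel-style superstep loop in Python (our own illustration; Fregel itself is a Haskell subset that compiles to Giraph and Pregel+): each vertex combines the values sent by its neighbours and updates its own value until a fixed point, here computing connected components by minimum-label propagation.

```python
def min_label_components(graph):
    """graph: dict mapping vertex -> list of neighbours (edges listed
    in both directions). Each superstep reads the previous snapshot,
    mimicking synchronous message passing; iteration stops when no
    vertex value changes."""
    value = {v: v for v in graph}          # initial label: own id
    changed = True
    while changed:
        changed = False
        old = dict(value)                  # snapshot = incoming messages
        for v, neighbours in graph.items():
            new = min([old[v]] + [old[u] for u in neighbours])
            if new != value[v]:
                value[v] = new
                changed = True
    return value

g = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
print(min_label_components(g))  # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```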
Large-eddy simulation (LES) using an unstructured overset numerical method is performed to study the flow around a ducted marine propeller for the highly unsteady off-design condition called crashback. Known as one of the most challenging propeller states to analyse, the propeller rotates in the reverse direction to yield negative thrust while the vehicle is still in forward motion. The LES results for the marine propeller David Taylor Model Basin 4381 with a neutrally loaded duct are validated against experiments, showing good agreement. The simulations are performed at Reynolds number of 561 000 and an advance ratio $J=-0.82$. The flow field around the different components (duct, rotor blades and stator blades) and their impact on the unsteady loading are examined. The side-force coefficient $K_S$ is mostly generated from the duct surface, consistent with experiments. The majority of the thrust and torque coefficients $K_T$ and $K_Q$ arise from the rotor blades. A prominent contribution to $K_Q$ is also produced from the stator blades. Tip-leakage flow between the rotor blade tips and duct surface is shown to play a major role in the local unsteady loads on the rotor blades and duct. The physical mechanisms responsible for the overall unsteady loads and large side-force production are identified as globally, the vortex ring and locally, leading-edge separation as well as tip-leakage flow which forms blade-local recirculation zones.
Passive wearable exoskeletons are desirable because they can assist user movements while maintaining a simple, low-profile design. They can be useful in industrial tasks, where an ergonomic device could aid load lifting without inconveniencing the wearer, reducing fatigue and stress in the lower limbs. The SpringExo is a coil-spring design that aids knee extension. In this paper, we describe the muscle activation of the knee flexors and extensors from seven healthy participants during repeated squats. The outcome measures are the timings of the key events during the squat, flexion angle, muscle activation of the rectus femoris and biceps femoris, and foot pressure characteristics of the participants. These outcome measures assess the possible effects of the device during lifting operations, where reduced muscle effort is desired during the ascent phase of the squat without changing the knee and foot kinematics. The results show that the SpringExo significantly decreased rectus femoris activation during ascent (−2%) without significantly affecting either biceps femoris or rectus femoris activation during descent. This implies that the user could perform the descent without added effort and the ascent with reduced effort. The exoskeleton had other effects on the biomechanics of the user, increasing average squat time (+0.02 s) and maximum squat time (+0.1 s), and decreasing average knee flexion angle (−4°). The exoskeleton had no effect on foot loading or placement; that is, the user did not have to revise their stance while using the device.
Modern programming languages typically provide some form of comprehension syntax which renders programs manipulating collection types more readable and understandable. However, comprehension syntax corresponds to nested loops in general. There is no simple way of using it to express efficient general synchronized iterations on multiple ordered collections, such as linear-time algorithms for low-selectivity database joins. Synchrony fold is proposed here as a novel characterization of synchronized iteration. Central to this characterization is a monotonic isBefore predicate for relating the orderings on the two collections being iterated on and an antimonotonic canSee predicate for identifying matching pairs in the two collections to synchronize and act on. A restriction is then placed on Synchrony fold, cutting its extensional expressive power to match that of comprehension syntax, giving us Synchrony generator. Synchrony generator retains sufficient intensional expressive power for expressing efficient synchronized iteration on ordered collections. In particular, it is proved to be a natural generalization of the database merge join algorithm, extending the latter to more general database joins. Finally, Synchrony iterator is derived from Synchrony generator as a novel form of iterator. While Synchrony iterator has the same extensional and intensional expressive power as Synchrony generator, the former is better dovetailed with comprehension syntax. Thereby, algorithms requiring synchronized iterations on multiple ordered collections, including those for efficient general database joins, become expressible naturally in comprehension syntax.
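The merge-join connection can be illustrated with a small Python sketch (our own; the isBefore/canSee names follow the abstract, while this simple equi-join instance is our illustration, not the paper's construction): a single forward pass over two key-sorted lists, skipping elements that isBefore says can no longer match and emitting the pairs that canSee accepts.

```python
def synchro_join(xs, ys, key):
    """Synchronized iteration in the spirit of a merge join. Both
    inputs must be sorted by `key`; the cursor i only moves forward,
    so the pass is linear apart from rescanning duplicate keys."""
    is_before = lambda x, y: key(x) < key(y)   # monotonic: x precedes y
    can_see = lambda x, y: key(x) == key(y)    # antimonotonic: x matches y
    out, i = [], 0
    for y in ys:
        while i < len(xs) and is_before(xs[i], y):
            i += 1                             # xs[i] can never match y or any later y
        j = i
        while j < len(xs) and can_see(xs[j], y):
            out.append((xs[j], y))             # emit a matching pair
            j += 1
    return out

pairs = synchro_join([1, 2, 2, 4], [2, 3, 4, 4], key=lambda v: v)
print(pairs)  # [(2, 2), (2, 2), (4, 4), (4, 4)]
```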
Many programmers use dependently typed languages such as Coq to machine-verify high-assurance software. However, existing compilers for these languages provide no guarantees after compiling, nor when linking after compilation. Type-preserving compilers preserve guarantees encoded in types and then use type checking to verify compiled code and ensure safe linking with external code. Unfortunately, standard compiler passes do not preserve the dependent typing of commonly used (intensional) type theories. This is because assumptions valid in simpler type systems no longer hold, and intensional dependent type systems are highly sensitive to syntactic changes, including compilation. We develop an A-normal form (ANF) translation with join-point optimization—a standard translation for making control flow explicit in functional languages—from the Extended Calculus of Constructions (ECC) with dependent elimination of booleans and natural numbers (a representative subset of Coq). Our dependently typed target language has equality reflection, allowing the type system to encode semantic equality of terms. This is key to proving type preservation and correctness of separate compilation for this translation. This is the first ANF translation for dependent types. Unlike related translations, it supports the universe hierarchy, and does not rely on parametricity or impredicativity.
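A-normal form itself is easy to illustrate on a tiny first-order expression language (this toy converter is our own and is vastly simpler than the paper's dependently typed ECC translation): every compound subexpression is bound to a fresh name, making evaluation order, and hence control flow, explicit.

```python
def to_anf(expr):
    """expr is a nested tuple like ('add', ('mul', 2, 3), 4).
    Returns (bindings, atom): a list of let-bindings in evaluation
    order plus the atom naming the overall result."""
    bindings = []
    counter = 0

    def fresh():
        nonlocal counter
        counter += 1
        return f"t{counter}"

    def norm(e):
        if not isinstance(e, tuple):
            return e                     # literals/variables are already atomic
        op, *args = e
        atoms = [norm(a) for a in args]  # normalize subexpressions first
        name = fresh()
        bindings.append((name, (op, *atoms)))
        return name

    return bindings, norm(expr)

b, res = to_anf(('add', ('mul', 2, 3), 4))
print(b, res)  # [('t1', ('mul', 2, 3)), ('t2', ('add', 't1', 4))] t2
```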
Prototyping is interwoven with nearly all product, service, and systems development efforts. A prototype is a pre-production representation of some aspect of a concept or final design. Prototyping often predetermines a large portion of resource deployment in development and influences design project success. This review surveys literature sources in engineering, management, design science, and architecture, focusing on design prototyping for early-stage design. Insights are synthesized from a critical review of the literature: key objectives of prototyping, major techniques, relationships between techniques, and a strategy matrix connecting objectives to techniques. The review is supported with exemplar prototypes drawn from industrial design efforts. Techniques are roughly categorized into those that improve the outcomes of prototyping directly and those that enable prototyping by lowering cost and time. Compact descriptions of each technique provide a foundation for comparing the potential benefits and drawbacks of each. The review concludes with a summary of key observations, highlighted opportunities in the research, and a vision of the future of prototyping. This review aims to provide a resource for designers as well as to set a trajectory for continuing innovation in the scientific research of design prototyping.
COVID-19 is causing a significant burden on medical and healthcare resources globally due to the high numbers of hospitalisations and deaths recorded as the pandemic continues. This research aims to assess the effects of climate factors (i.e., daily average temperature and average relative humidity) on the effective reproductive number of the COVID-19 outbreak in Wuhan, China during its early stage. Our research showed that the effective reproductive number of COVID-19 increases by 7.6% (95% confidence interval: 5.4%–9.8%) per 1°C drop in mean temperature, using a prior moving average of 0–8 days lag, in Wuhan, China. Our results indicate that temperature was negatively associated with COVID-19 transmissibility during the early stages of the outbreak in Wuhan, suggesting temperature is likely to affect COVID-19 transmission. These results suggest that increased precautions should be taken in the colder seasons to reduce COVID-19 transmission in the future, based on past success in controlling the pandemic in Wuhan, China.
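As a back-of-envelope reading of the headline estimate (our own illustrative calculation, which assumes the 7.6%-per-degree effect compounds multiplicatively across degrees):

```python
def r_multiplier(temp_drop_c, pct_per_degree=0.076):
    """Relative change in the effective reproductive number for a
    given drop in mean temperature, assuming the estimated 7.6%
    per-degree effect compounds multiplicatively."""
    return (1 + pct_per_degree) ** temp_drop_c

print(round(r_multiplier(1), 3))  # 1.076
print(round(r_multiplier(5), 3))  # 1.442, i.e. a 5°C drop raises R by ~44%
```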
We review the scholarly contributions that utilise natural language processing (NLP) techniques to support the design process. Using a heuristic approach, we gathered 223 articles published in 32 journals between 1991 and the present. We present the state of the art in NLP in-and-for design research by reviewing these articles according to the type of natural language text source: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions, and others. After summarising these contributions and identifying their gaps, we use an existing design innovation framework to identify the applications that are currently supported by NLP. We then propose several methodological and theoretical directions for future NLP in-and-for design research.
This paper introduces a method to improve the design procedure of axiomatic design theory (AD) with Extenics. A comprehensive review of AD indicates that its powerful principles have been widely studied and applied in many areas; however, inexperienced practitioners of AD theory still find it difficult to follow or apply the principles in their designs, which often leads to misunderstanding and skepticism. The lack of definitive descriptions for all the elements, and of specific approaches to guiding the mapping process, restricts the development and application of AD theory. This paper improves the design procedure of AD with Extenics: the elements in the AD domains are expressed by the basic-elements of Extenics, the corresponding formulations are generated, and a mapping process based on AD and Extenics is developed. The improved design procedure provides designers with a theoretical foundation based on a logical and rational thought process, while the solution space is expanded and innovative designs are inspired. Based on the proposed design procedure, a computer-aided system is developed, which makes the complex and fuzzy design activity clear and easy to follow by filling in the blanks in a step-by-step manner. The design of a novel corn harvester header is considered to illustrate the validity of the improved design procedure.
GPT-3 made the mainstream media headlines this year, generating far more interest than we’d normally expect of a technical advance in NLP. People are fascinated by its ability to produce apparently novel text that reads as if it was written by a human. But what kind of practical applications can we expect to see, and can they be trusted?
Dielectric elastomers (DEs) find applications in many areas, particularly in the field of soft robotics. When modeling and simulating DE-based actuators and sensors, a substantial portion of the literature assumes the selected DE material to behave in some perfectly hyperelastic manner, and the vast majority assumes invariant permittivity. However, studies on simple planar DEs have revealed instabilities and hastened breakdowns when a variable permittivity is allowed, due in part to the intertwined electromechanical properties of DEs rooted in their labyrinthine polymeric microstructures. This work studies the effects of a permittivity that varies with stretch on the out-of-plane deformation of a circular DE, using a model derived from principles of strain-induced polymer birefringence. In addition, we utilize the Edward–Vilgis model, which attempts to account for effects related to crosslinking and to the length extension, slippage, and entanglement of polymer chains. Our approach reveals the presence of “stagnation” regions in the electromechanical behavior of the DE actuator material. These stagnation regions are characterized by both electrical and mechanical critical electrostrictive coefficient ratios. Mechanically, certain values of the electrostrictive coefficient ratio predict cases where deformation does not occur in response to a change in voltage. Electrically, certain cases are predicted where changes in capacitance cannot be measured in response to changes in deformation. Thus, some combined conditions of loading and material properties could limit the effectiveness of DE membranes in either actuation or sensing. Our results therefore reveal mechanisms that could be useful to designers of actuators and sensors and unveil an opportunity for exploring new theoretical materials with potential novel applications.
Furthermore, since there are known analogous formulations between electrical and optical properties, criticality principles studied in this article could be extended to optomechanical coupling.
Many real-world networks, including social networks and computer networks, are temporal networks: their vertices and edges change over time. However, most approaches for modeling and analyzing temporal networks do not explicitly discuss the underlying notion of time. In this paper, we therefore introduce a generalized notion of discrete time for modeling temporal networks. Our approach also allows for nondeterministic time and incomplete data, two issues often encountered when analyzing datasets extracted from online social networks, for example. To demonstrate the consequences of our generalized notion of time, we discuss its implications for the computation of (shortest) temporal paths in temporal networks. In addition, we implemented an R package that provides programming support for all concepts discussed in this paper; it is publicly available for download.
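The role an explicit notion of time plays is easy to see in the classic earliest-arrival computation (our own minimal sketch for deterministic discrete time with instantaneous contacts, not the paper's generalized model or its R package): a temporal path is only valid if its contacts occur in non-decreasing time order.

```python
def earliest_arrival(edges, source, start=0):
    """edges: list of time-stamped contacts (u, v, t); traversal is
    instantaneous. One scan over time-sorted contacts yields, for each
    vertex, the earliest time it is reachable via a time-respecting
    path from `source`."""
    arrival = {source: start}
    inf = float("inf")
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        if arrival.get(u, inf) <= t and t < arrival.get(v, inf):
            arrival[v] = t
    return arrival

contacts = [(1, 2, 1), (2, 3, 2), (1, 3, 5)]
print(earliest_arrival(contacts, source=1))  # {1: 0, 2: 1, 3: 2}

# A contact that happens before its source vertex is ever reached is
# useless: below, (2, 3, 0) precedes the arrival at 2, so vertex 3
# stays unreachable.
print(earliest_arrival([(2, 3, 0), (1, 2, 1)], source=1))  # {1: 0, 2: 1}
```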
Nowadays, automated essay evaluation (AEE) systems play an important role in evaluating essays and have been successfully used in large-scale writing assessments. However, existing AEE systems mostly focus on grammar or shallow content measurements rather than higher-order traits such as ideas. This paper proposes a new formulation of graph-based features over concept maps, using word embeddings, to evaluate the quality of ideas in Chinese compositions. The concept map derived from a student’s composition is composed of the concepts appearing in the essay and the co-occurrence relationships between them. On real compositions written by eighth-grade students in a large-scale assessment, the scoring accuracy of the resulting evaluation system (named AECC-I: Automated Evaluation for Chinese Compositions—Ideas) is higher than that of the baselines. The results indicate that the proposed method deepens the construct-relevant coverage of automatic ideas evaluation in compositions and can provide constructive feedback for students.
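The concept-map construction can be illustrated with a minimal sketch (our own, not AECC-I itself): concepts co-occurring in the same sentence become edges, and a simple graph-level feature such as edge density can then be read off the map.

```python
from itertools import combinations

def concept_map(sentences):
    """sentences: lists of concept tokens. Concepts co-occurring in a
    sentence are linked by an undirected edge."""
    nodes, edges = set(), set()
    for sent in sentences:
        concepts = set(sent)
        nodes |= concepts
        for a, b in combinations(sorted(concepts), 2):
            edges.add((a, b))       # canonical (sorted) pair, undirected
    return nodes, edges

def density(nodes, edges):
    """Fraction of possible concept pairs actually linked."""
    n = len(nodes)
    return 0.0 if n < 2 else 2 * len(edges) / (n * (n - 1))

sents = [["river", "flood"], ["flood", "city"], ["river", "city"]]
nodes, edges = concept_map(sents)
print(len(nodes), len(edges), density(nodes, edges))  # 3 3 1.0
```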
Functional modeling is an effective method of depicting products in the design process. Using this approach, product architecture, concept generation, and physical modeling all contribute to a design that is high in quality and functionality. The functional basis approach provides a taxonomy of uniform vocabulary for producing function structures with consistent functions (verbs) and flows (nouns). Material and energy flows dominate function structures in the mechanical engineering domain, with only a small percentage including signal flows. Research suggests that this signal flow gap is due to the requirement for “carrier” flows of either material or energy to transport signals between functions. This research suggests that incorporating controls engineering methodologies may increase the number of signal flows in function structures. We show correlations between functional modeling and controls engineering in four facets: schematic similarities, performance matching through flows, mathematical function creation using bond graphs, and isomorphic matching of the aforementioned characteristics, which allows for analogical solutions. Control systems use block diagrams to represent the sequential steps of a system; these block diagrams parallel the function structures of engineering design. Performance metrics between the two domains can be complementary when decomposed into nondimensional engineering units. Mathematical descriptions of the actions in control systems can resemble the functional basis functions, with bond graphs identifying the characteristic behavior of the functions on the flows. Isomorphic matching, using the schematic diagrams, produces analogies based upon similar functionality and target performance metrics. These four similarities bridge the mechanical and electrical domains via the controls domain. We provide concepts and contextualization for the methodology using domain-agnostic examples, and conclude with suggestions of pathways forward for this preliminary research.