A novel end-to-end fault detection and localization protocol for wavelength-routed WDM networks
NASA Astrophysics Data System (ADS)
Zeng, Hongqing; Vukovic, Alex; Huang, Changcheng
2005-09-01
Wavelength division multiplexing (WDM) networks have recently become prevalent in telecommunications. However, because of their high data rates and the increasing number and density of wavelengths, even a very short service disruption caused by a network fault can lead to heavy data loss. Network survivability is therefore critical and has been intensively studied; fault detection and localization is a vital part of it but has received disproportionately little attention. In this paper we describe and analyze an end-to-end lightpath fault detection scheme in the data plane with fault notification in the control plane. The effort focuses on reducing fault detection time. In this protocol, the source node of each lightpath keeps sending hello packets to the destination node along exactly the same path as the data traffic. The destination node generates an alarm once a certain number of consecutive hello packets are missed within a given time period. The network management unit then collects all alarms and locates the fault source based on the network topology, and sends fault notification messages via the control plane to either the source node or all upstream nodes along the lightpath. Performance evaluation shows that the protocol achieves fast fault detection while the overhead that hello packets add to user data is negligible.
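The destination-side rule described above (raise an alarm after a run of consecutive missed hellos) can be sketched as follows; the interval and threshold values are illustrative assumptions, not parameters from the paper.

```python
# Sketch of the destination-node detection rule (illustrative parameters,
# not values from the paper).

class HelloMonitor:
    """Raise an alarm after `miss_threshold` consecutive hello packets
    fail to arrive within their expected interval."""

    def __init__(self, hello_interval=0.01, miss_threshold=3):
        self.hello_interval = hello_interval  # expected hello period (s)
        self.miss_threshold = miss_threshold  # consecutive misses -> alarm
        self.missed = 0
        self.alarm = False

    def on_hello(self):
        # A hello arrived in time: the consecutive-miss count resets.
        self.missed = 0

    def on_timeout(self):
        # Called by a timer when a hello interval elapses with no packet.
        self.missed += 1
        if self.missed >= self.miss_threshold:
            self.alarm = True  # destination would now alert management
        return self.alarm
```

A single in-time hello clears the miss count, so only genuinely consecutive losses trigger the alarm, which keeps false positives from isolated packet drops low.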
Hyperswitch Network For Hypercube Computer
NASA Technical Reports Server (NTRS)
Chow, Edward; Madan, Herbert; Peterson, John
1989-01-01
Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.
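In a fixed hypercube topology, each of the 2^d nodes has a point-to-point link to the d nodes whose binary addresses differ from its own in exactly one bit. A minimal sketch of that adjacency rule:

```python
def hypercube_neighbors(node, dim):
    """Node ids are integers in [0, 2**dim); each neighbor's id differs
    from `node` in exactly one bit, matching the point-to-point links
    of a fixed hypercube topology."""
    return [node ^ (1 << i) for i in range(dim)]
```

The same bit view yields a routing step: forwarding toward a destination corrects one differing address bit per hop, which is the static topology a dynamic hyperswitch can then reroute around on congestion or faults.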
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumsdaine, Andrew
2013-03-08
The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis and making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and on using fault information exchange and coordination between MPI and the HPC system software stack, from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
Distributed adaptive diagnosis of sensor faults using structural response data
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-10-01
The reliability and consistency of wireless structural health monitoring (SHM) systems can be compromised by sensor faults, leading to miscalibrations, corrupted data, or even data loss. Several research approaches towards fault diagnosis, referred to as ‘analytical redundancy’, have been proposed that analyze the correlations between different sensor outputs. In wireless SHM, most analytical redundancy approaches require centralized data storage on a server for data analysis, while other approaches exploit the on-board computing capabilities of wireless sensor nodes, analyzing the raw sensor data directly on board. However, using raw sensor data poses an operational constraint due to the limited power resources of wireless sensor nodes. In this paper, a new distributed autonomous approach towards sensor fault diagnosis based on processed structural response data is presented. The inherent correlations among Fourier amplitudes of acceleration response data, at peaks corresponding to the eigenfrequencies of the structure, are used for diagnosis of abnormal sensor outputs at a given structural condition. Representing an entirely data-driven analytical redundancy approach that does not require any a priori knowledge of the monitored structure or of the SHM system, artificial neural networks (ANN) are embedded into the sensor nodes enabling cooperative fault diagnosis in a fully decentralized manner. The distributed analytical redundancy approach is implemented into a wireless SHM system and validated in laboratory experiments, demonstrating the ability of wireless sensor nodes to self-diagnose sensor faults accurately and efficiently with minimal data traffic. Besides enabling distributed autonomous fault diagnosis, the embedded ANNs are able to adapt to the actual condition of the structure, thus ensuring accurate and efficient fault diagnosis even in case of structural changes.
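As a much-simplified, hypothetical stand-in for the paper's embedded ANNs, the analytical-redundancy idea (predict one sensor's Fourier-peak amplitude from a correlated sensor's and flag large residuals as faults) can be sketched as:

```python
# Hypothetical sketch of analytical redundancy on processed data: a
# least-squares line fitted on healthy Fourier-peak amplitudes predicts
# one sensor from a correlated neighbor; a large residual flags a fault.
# (The paper uses ANNs; this linear model is only an illustration.)

def fit_line(x, y):
    """Ordinary least squares for a single predictor: y ~ slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def is_faulty(neighbor_amp, own_amp, model, tol):
    """Flag the sensor if its peak amplitude deviates from the value
    predicted from the neighbor's by more than `tol`."""
    slope, intercept = model
    return abs(own_amp - (slope * neighbor_amp + intercept)) > tol
```

Because only peak amplitudes (a handful of numbers per node) are exchanged rather than raw acceleration records, the data traffic stays minimal, which is the operational point the abstract makes.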
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with a fault tree potentially existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
Fault Tolerance for VLSI Multicomputers
1985-08-01
that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers...technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for...order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error
Waggle: A Framework for Intelligent Attentive Sensing and Actuation
NASA Astrophysics Data System (ADS)
Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.
2014-12-01
Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for study of small-scale effects in Chicago's urban core and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology and computer science. Built around open-source software, hardware, and Linux OS, the Waggle system comprises two components - the Waggle field-node and Waggle cloud-computing infrastructure. Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure.
The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.
Achieving Agreement in Three Rounds With Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2015-01-01
A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F + 1 nodes, where F is the maximum number of simultaneous faults in the network, provided each faulty node induces no more than F faults and each good node experiences no more than F faults. The algorithm is based on the Oral Message algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
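For reference, the classic Oral Messages recursion OM(m) of Lamport, Shostak, and Pease, on which the paper builds, can be sketched as follows. This is the textbook recursion, not the paper's three-round variant, and the traitor behavior (equivocating by receiver parity) is an illustrative assumption:

```python
from collections import Counter

def majority(values, default="RETREAT"):
    # Strict majority; ties fall back to the agreed default value.
    top, count = Counter(values).most_common(1)[0]
    return top if 2 * count > len(values) else default

def send(sender, value, receiver, traitors):
    # Illustrative adversary: a traitorous sender equivocates based on
    # the receiver's parity (any per-receiver behavior would do).
    if sender in traitors:
        return "ATTACK" if receiver % 2 == 0 else "RETREAT"
    return value

def om(commander, lieutenants, value, m, traitors):
    """Oral Messages OM(m): return each lieutenant's decided value."""
    # Round 1: the commander distributes its value.
    received = {l: send(commander, value, l, traitors) for l in lieutenants}
    if m == 0:
        return received
    # Each lieutenant relays what it received via OM(m-1); every node
    # then takes the majority of its own value and the relayed values.
    votes = {l: [received[l]] for l in lieutenants}
    for l in lieutenants:
        others = [o for o in lieutenants if o != l]
        sub = om(l, others, received[l], m - 1, traitors)
        for o in others:
            votes[o].append(sub[o])
    return {l: majority(votes[l]) for l in lieutenants}
```

With four nodes and one traitor (the K greater than or equal to 3F + 1 bound at F = 1), OM(1) makes the loyal lieutenants agree with each other, and agree with the commander whenever the commander is loyal.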
NASA Astrophysics Data System (ADS)
Huang, Yuehua; Li, Xiaomin; Cheng, Jiangzhou; Nie, Deyu; Wang, Zhuoyuan
2018-02-01
This paper presents a novel fault location method based on injecting a travelling-wave current. The method uses Time Difference Of Arrival (TDOA) measurements available at the injection point and at the end node of the main radial. The TDOA is the lag of maximum correlation, at which the reflected crests of the injected and fault signals appear simultaneously; the fault distance is then the wave velocity multiplied by the TDOA. Furthermore, when transformers are connected at the end of the feeder, the method must be combined with a comparison of transient voltage amplitudes. Finally, to verify the effectiveness of the method, several simulations were carried out using the MATLAB/SIMULINK software packages. The proposed fault location method shortens positioning time while preserving accuracy, with errors of 5.1% and 13.7% in the simulated cases.
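The TDOA step (find the lag of maximum cross-correlation between the injected signal and its reflection, then convert lag to distance via the wave velocity) can be sketched as follows; the signals and constants are illustrative, not from the paper:

```python
# Illustrative TDOA sketch: brute-force cross-correlation lag search,
# then distance = wave velocity x TDOA (as stated in the abstract).

def tdoa_samples(sig, reflection):
    """Return the lag (in samples) at which `reflection` best matches
    the injected template `sig` (maximum cross-correlation)."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(reflection) - len(sig) + 1):
        score = sum(s * reflection[lag + i] for i, s in enumerate(sig))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def fault_distance(lag, sample_period, velocity):
    # Per the abstract: distance = wave velocity multiplied by TDOA.
    return velocity * lag * sample_period
```

A production implementation would use an FFT-based correlation and sub-sample interpolation of the peak, but the lag-search above is the essential computation.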
ROBUS-2: A Fault-Tolerant Broadcast Communication System
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.
2005-01-01
The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. The ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs), in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, clock synchronization, and distributed diagnosis (group membership). The ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 is tolerant to internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. This version of the ROBUS is intended for laboratory experimentation and demonstrations of the capability to reintegrate failed nodes, dynamically update the communication schedule, and tolerate and recover from correlated transient faults.
An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks
Abba, Sani; Lee, Jeong-A
2015-01-01
We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of the self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failures, and efficient adaptation to instantaneous network topology changes. Simulations comparing ASAART with the SHR and SSR protocols in five scenarios, in the presence of transient and permanent node failures, show greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC-layer packets, and packet error rate, as well as more efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236
An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.
Abba, Sani; Lee, Jeong-A
2015-08-18
We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of the self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failures, and efficient adaptation to instantaneous network topology changes. Simulations comparing ASAART with the SHR and SSR protocols in five scenarios, in the presence of transient and permanent node failures, show greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC-layer packets, and packet error rate, as well as more efficient energy conservation in a highly congested, faulty, and scalable sensor network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values--for example, checksums--to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
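The comparison step can be sketched as follows, using a plain additive checksum as the commutative error detection value; the names and data are illustrative:

```python
# Sketch of cross-run checksum comparison. Addition is commutative, so
# the checksum is insensitive to packet arrival order, which is why a
# commutative value works for a reproducible program section.

def checksum(packets):
    """Order-independent 32-bit checksum over injected packet words."""
    return sum(packets) & 0xFFFFFFFF

def suspect_nodes(run_a, run_b):
    """run_a / run_b map node_id -> packet words injected by that node
    during the same reproducible section on two runs; any mismatch in
    per-node checksums flags the node as possibly faulty."""
    return sorted(n for n in run_a
                  if checksum(run_a[n]) != checksum(run_b[n]))
```

Note that reordered but identical traffic (a legal nondeterminism in the network) produces equal checksums, while corrupted data does not.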
All-to-all sequenced fault detection system
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward
2010-11-02
An apparatus, program product and method enable nodal fault detection by sequencing communications between all system nodes. A master node may coordinate communications between two slave nodes before sequencing to and initiating communications between a new pair of slave nodes. The communications may be analyzed to determine the nodal fault.
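A minimal sketch of the sequencing idea, assuming a `probe(a, b)` callable (a hypothetical name) that reports whether a master-coordinated exchange between two slaves succeeded:

```python
from itertools import combinations

def sequenced_fault_scan(nodes, probe):
    """Master-side loop: step through every pair of slave nodes, one
    pair at a time, and record the pairs whose exchange failed."""
    failures = []
    for a, b in combinations(nodes, 2):
        if not probe(a, b):
            failures.append((a, b))
    return failures

def implicate(failures):
    """Heuristic analysis: the node appearing in the most failed pairs
    is the prime suspect for the nodal fault."""
    counts = {}
    for a, b in failures:
        counts[a] = counts.get(a, 0) + 1
        counts[b] = counts.get(b, 0) + 1
    return max(counts, key=counts.get) if counts else None
```

All-to-all sequencing costs O(n^2) exchanges but gives the analysis full pairwise coverage, so a single faulty node shows up in every pair it belongs to.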
Achieving Agreement in Three Rounds with Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F + 1 nodes, where F is the maximum number of simultaneous faults in the network, provided each faulty node induces no more than F faults and each good node experiences no more than F faults. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
Modeling and Simulation Reliable Spacecraft On-Board Computing
NASA Technical Reports Server (NTRS)
Park, Nohpill
1999-01-01
The proposed project will investigate modeling- and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast, and cost-effective on-board computing system, which is known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before a fault tolerance scheme is employed. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.
Multi-directional fault detection system
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward
2010-11-23
An apparatus, program product and method checks for nodal faults in a group of nodes comprising a center node and all adjacent nodes. The center node concurrently communicates with the immediately adjacent nodes in three dimensions. The communications are analyzed to determine a presence of a faulty node or connection.
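Assuming a 3-D torus with wraparound links (an assumption for concreteness; the patent text only says three dimensions), the center node's neighborhood check can be sketched as:

```python
# Sketch of a center-node check in an assumed 3-D torus: probe the six
# immediately adjacent nodes; any failed probe implicates that neighbor
# or the link to it. `probe` is a hypothetical callable.

def neighbors3d(node, dims):
    """Six face-adjacent neighbors of `node` with wraparound."""
    x, y, z = node
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [((x + dx) % dims[0], (y + dy) % dims[1], (z + dz) % dims[2])
            for dx, dy, dz in steps]

def check_center(node, dims, probe):
    """Return the neighbors whose probe from the center node failed."""
    return [n for n in neighbors3d(node, dims) if not probe(node, n)]
```

Probing all six directions concurrently from each center node bounds the per-node work at a constant, which is what makes the scheme practical at large node counts.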
Multi-directional fault detection system
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2009-03-17
An apparatus, program product and method checks for nodal faults in a group of nodes comprising a center node and all adjacent nodes. The center node concurrently communicates with the immediately adjacent nodes in three dimensions. The communications are analyzed to determine a presence of a faulty node or connection.
Multi-directional fault detection system
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward
2010-06-29
An apparatus, program product and method checks for nodal faults in a group of nodes comprising a center node and all adjacent nodes. The center node concurrently communicates with the immediately adjacent nodes in three dimensions. The communications are analyzed to determine a presence of a faulty node or connection.
Interactive Planning for Capability Driven Air & Space Operations
2008-04-30
Time, Routledge and Kegan, London, UK, 1980. [5] A. Bochman, Concerted instant–interval temporal semantics I: Temporal ontologies, Notre Dame Journal...then return true else deleteStatement(X, rj, Y) end if end for return false. Figure 8 shows the search space for the instance in Table 2. The green ...nodes are those for which the set of relations corresponding to the path from the root form a consistent set. A path from root to a green leaf node
All row, planar fault detection system
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D; Smith, Brian Edward
2013-07-23
An apparatus, program product and method for detecting nodal faults may simultaneously cause designated nodes of a cell to communicate with all nodes adjacent to each of the designated nodes. Furthermore, all nodes along the axes of the designated nodes are made to communicate with their adjacent nodes, and the communications are analyzed to determine if a node or connection is faulty.
Metric Ranking of Invariant Networks with Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Changxia; Ge, Yong; Song, Qinbao
The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in the distributed information systems. A promising approach is to discover invariant relationships among the monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two monitoring data sources. Such an invariant network representation can help system experts to localize and diagnose system faults by examining broken invariant relationships and their related metrics, because system faults usually propagate among the monitoring data and eventually lead to some broken invariant relationships. However, at any one time there are usually many broken links (invariant relationships) within an invariant network. Without proper guidance, it is difficult for system experts to manually inspect this large number of broken links. Thus, a critical challenge is how to effectively and efficiently rank the metrics (nodes) of invariant networks according to their anomaly levels. The ranked list of metrics provides system experts with useful guidance for localizing and diagnosing system faults. To this end, we propose to model the nodes and the broken links as a Markov Random Field (MRF) and develop an iterative algorithm to infer the anomaly of each node based on belief propagation (BP). Finally, we validate the proposed algorithm on both real-world and synthetic data sets to illustrate its effectiveness.
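As a crude stand-in for the paper's MRF/belief-propagation inference, the ranking intuition (a broken invariant link is better explained by one anomalous endpoint than by two) can be sketched as an iterative score:

```python
# Simplified, illustrative ranking loop (not the paper's BP algorithm):
# each metric's anomaly score is its share of broken incident links,
# discounting broken links whose other endpoint already looks anomalous.

def rank_metrics(edges, broken, iters=10):
    """edges: list of (u, v) invariant links; broken: set of links
    observed broken. Returns metrics sorted by descending anomaly."""
    nodes = {n for e in edges for n in e}
    score = {n: 0.5 for n in nodes}  # uninformative prior
    for _ in range(iters):
        nxt = {}
        for n in nodes:
            incident = [e for e in edges if n in e]
            s = 0.0
            for u, v in incident:
                other = v if u == n else u
                if (u, v) in broken:
                    # Blame shifts toward n when the other end is healthy.
                    s += 1.0 - score[other]
            nxt[n] = s / len(incident) if incident else 0.0
        score = nxt
    return sorted(score, key=score.get, reverse=True)
```

On a network where one metric's links are all broken, the iteration concentrates blame on that metric and exonerates its neighbors, mirroring the explain-away behavior BP provides more rigorously.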
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2008-10-14
An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2012-02-07
An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward
2010-02-23
An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
Simpson, R.W.; Lienkaemper, J.J.; Galehouse, J.S.
2001-01-01
Variations in surface creep rate along the Hayward fault are modeled as changes in locking depth using 3D boundary elements. Model creep is driven by screw dislocations at 12 km depth under the Hayward and other regional faults. Inferred depth to locking varies along strike from 4 to 12 km (12 km implies no locking). Our models require locked patches under the central Hayward fault, consistent with a M6.8 earthquake in 1868, but the geometry and extent of locking under the north and south ends depend critically on assumptions regarding continuity and creep behavior of the fault at its ends. For the northern onshore part of the fault, our models contain 1.4-1.7 times more stored moment than the model of Bürgmann et al. [2000]; 45-57% of this stored moment resides in creeping areas. It is important for seismic hazard estimation to know how much of this moment is released coseismically or as aseismic afterslip.
NASA Astrophysics Data System (ADS)
Lauer, Rachel M.; Saffer, Demian M.
2015-04-01
Observations of seafloor seeps on the continental slope of many subduction zones illustrate that splay faults represent a primary hydraulic connection to the plate boundary at depth, carry deeply sourced fluids to the seafloor, and are in some cases associated with mud volcanoes. However, the role of these structures in forearc hydrogeology remains poorly quantified. We use a 2-D numerical model that simulates coupled fluid flow and solute transport driven by fluid sources from tectonically driven compaction and smectite transformation to investigate the effects of permeable splay faults on solute transport and pore pressure distribution. We focus on the Nicoya margin of Costa Rica as a case study, where previous modeling and field studies constrain flow rates, thermal structure, and margin geology. In our simulations, splay faults accommodate up to 33% of the total dewatering flux, primarily along faults that outcrop within 25 km of the trench. The distribution and fate of dehydration-derived fluids is strongly dependent on thermal structure, which determines the locus of smectite transformation. In simulations of a cold end-member margin, smectite transformation initiates 30 km from the trench, and 64% of the dehydration-derived fluids are intercepted by splay faults and carried to the middle and upper slope, rather than exiting at the trench. For a warm end-member, smectite transformation initiates 7 km from the trench, and the associated fluids are primarily transmitted to the trench via the décollement (50%), and faults intercept only 21% of these fluids. For a wide range of splay fault permeabilities, simulated fluid pressures are near lithostatic where the faults intersect overlying slope sediments, providing a viable mechanism for the formation of mud volcanoes.
Trust index based fault tolerant multiple event localization algorithm for WSNs.
Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue
2011-01-01
This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves localization accuracy and fault tolerance in multiple event source localization. The experiment results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.
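The trust-index bookkeeping can be sketched as follows; the update constant and fusion rule are illustrative assumptions, not the paper's exact TISNAP formulas:

```python
# Illustrative trust-index sketch: nodes whose binary report agreed
# with the fused decision gain trust, disagreeing nodes lose it, and
# each node's report is weighted by its current trust when fusing.
# The delta constant is an assumption for illustration.

def weighted_vote(trust, reports):
    """Fuse binary alarms, weighting each node's bit by its trust.
    trust: dict node -> [0, 1]; reports: dict node -> 0/1 alarm bit."""
    yes = sum(trust[n] for n, b in reports.items() if b == 1)
    no = sum(trust[n] for n, b in reports.items() if b == 0)
    return 1 if yes > no else 0

def update_trust(trust, reports, decision, delta=0.1):
    """After a reporting cycle, adjust each node's trust toward or away
    from 1.0 according to agreement with the fused decision."""
    for node, bit in reports.items():
        if bit == decision:
            trust[node] = min(1.0, trust[node] + delta)
        else:
            trust[node] = max(0.0, trust[node] - delta)
    return trust
```

Over repeated cycles, persistently wrong nodes drift toward zero trust and stop influencing the fused decision, which is how the scheme tolerates high node-fault probabilities.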
Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs
Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue
2011-01-01
This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves localization accuracy and fault tolerance in multiple event source localization. The experiment results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms. PMID:22163972
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes: identifying a next compute node as a parent node and as the root of a parent test tree; identifying, for each child compute node of the parent node, a child test tree having that child compute node as its root; running the same test suite on the parent test tree and on each child test tree; and identifying the parent compute node as having a defective link to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
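The localization rule in this patent abstract can be sketched as below, under assumed data structures: `tree` maps each node to its children, and `bad_links` is hypothetical ground truth used only to simulate test outcomes.

```python
# Hedged sketch: if a test suite fails on the tree rooted at a parent node
# but succeeds on every subtree rooted at its children, the defect lies on
# a link from the parent to one of its children.

def subtree_links(tree, root):
    links, stack = [], [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            links.append((node, child))
            stack.append(child)
    return links

def run_test_suite(tree, root, bad_links):
    # The test suite "passes" only if no link under `root` is defective.
    return not any(l in bad_links for l in subtree_links(tree, root))

def locate_faulty_parent(tree, root, bad_links):
    for node in [root] + [c for cs in tree.values() for c in cs]:
        parent_ok = run_test_suite(tree, node, bad_links)
        children_ok = all(run_test_suite(tree, c, bad_links)
                          for c in tree.get(node, []))
        if not parent_ok and children_ok:
            return node  # a defective link hangs off this node
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": []}
print(locate_faulty_parent(tree, "A", {("B", "E")}))  # B
```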
NASA Astrophysics Data System (ADS)
Soelistijanto, B.; Muliadi, V.
2018-03-01
Diffie-Hellman (DH) provides an efficient key exchange system by reducing the number of cryptographic keys distributed in the network. In this method, a node broadcasts a single public key to all nodes in the network, and in turn each peer uses this key to establish a shared secret key which then can be utilized to encrypt and decrypt traffic between the peer and the given node. In this paper, we evaluate the key transfer delay and cost performance of DH in opportunistic mobile networks, a specific scenario of MANETs where complete end-to-end paths rarely exist between sources and destinations; consequently, the end-to-end delays in these networks are much greater than typical MANETs. Simulation results, driven by a random node movement model and real human mobility traces, showed that DH outperforms a typical key distribution scheme based on the RSA algorithm in terms of key transfer delay, measured by average key convergence time; however, DH performs as well as the benchmark in terms of key transfer cost, evaluated by total key (copies) forwards.
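The key agreement this abstract evaluates can be sketched as below; the modulus `P` and generator `G` are illustrative toy values chosen for the sketch, and a real deployment would use a standardized, much larger group:

```python
# Minimal Diffie-Hellman sketch matching the abstract's description: one
# node broadcasts a single public key, and each peer combines it with its
# own private key to derive a shared secret for encrypting traffic.
import secrets

P = 0xFFFFFFFFFFFFFFC5  # toy 64-bit prime modulus (illustrative only)
G = 5                   # generator

def make_keypair():
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

def shared_secret(my_private, peer_public):
    return pow(peer_public, my_private, P)

# Both sides reach the same secret from one broadcast public key each.
a_priv, a_pub = make_keypair()
b_priv, b_pub = make_keypair()
assert shared_secret(a_priv, b_pub) == shared_secret(b_priv, a_pub)
```

The efficiency argument in the abstract rests on this structure: a node distributes one public key to the whole network, rather than one encrypted key per peer as in an RSA-based scheme.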
Multi-hop routing mechanism for reliable sensor computing.
Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min
2009-01-01
Current research on routing in wireless sensor computing concentrates on increasing service lifetime, enabling scalability to large numbers of sensors, and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol, to specify the most reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationships between clusters. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and reduce energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and Ad hoc On-demand Distance Vector (AODV) routing mechanisms.
Fault tolerant features and experiments of ANTS distributed real-time system
NASA Astrophysics Data System (ADS)
Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.
1995-01-01
The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.
Selection of test paths for solder joint intermittent connection faults under DC stimulus
NASA Astrophysics Data System (ADS)
Huakang, Li; Kehong, Lv; Jing, Qiu; Guanjun, Liu; Bailiang, Chen
2018-06-01
The selection of test paths for solder-joint intermittent connection faults under direct-current stimulus is examined in this paper. According to the physical structure of the circuit, a network model is established first: each network node represents a test node, and each path edge carries the number of intermittent connection faults along the path. Then, selection criteria for the test paths, based on a node-degree index, are proposed so that the solder-joint intermittent connection faults are covered using fewer test paths. Finally, three circuits are selected to verify the method; to test whether an intermittent fault is covered by the test paths, the fault is simulated by a switch. The results show that the proposed method can detect solder-joint intermittent connection faults using fewer test paths, and the number of detection steps is greatly reduced without compromising fault coverage.
Distributed fault detection over sensor networks with Markovian switching topologies
NASA Astrophysics Data System (ADS)
Ge, Xiaohua; Han, Qing-Long
2014-05-01
This paper deals with distributed fault detection for discrete-time Markov jump linear systems over sensor networks with Markovian switching topologies. The sensors are deployed throughout the sensor field and the fault detectors are physically distributed via a communication network. The system dynamics changes and sensing topology variations are modeled by a discrete-time Markov chain with incomplete mode transition probabilities. Each sensor node first collects measurement outputs from all of its underlying neighboring nodes, processes these data in accordance with the Markovian switching topologies, and then transmits the processed data to the remote fault detector node. Network-induced delays and accumulated data packet dropouts are incorporated in the data transmission between the sensor nodes and the distributed fault detector nodes through the communication network. To generate localized residual signals, mode-independent distributed fault detection filters are proposed. By means of the stochastic Lyapunov functional approach, the residual system performance analysis is carried out such that the overall residual system is stochastically stable and the error between each residual signal and the fault signal is made as small as possible. Furthermore, a sufficient condition on the existence of the mode-independent distributed fault detection filters is derived in the simultaneous presence of incomplete mode transition probabilities, Markovian switching topologies, network-induced delays, and accumulated data packet dropouts. Finally, a stirred-tank reactor system is given to show the effectiveness of the developed theoretical results.
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimal cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimal cut set for a node is a cut set with the property that, if any one of the failures in the set were removed, the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree.
A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut set calculation, without reducing the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette.
It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun, and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computers Inc. Microsoft Word is a trademark of Microsoft Corporation.
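The upstream traversal with cycle detection that this entry describes might be sketched as follows; the digraph encoding (`inputs` maps each node to the nodes feeding into it) and the OR-only gating are assumptions for the sketch, since the real translator also emits AND and NOP gates and replaces cycles with minimal-cut-set subtrees rather than a placeholder:

```python
# Hedged sketch of a digraph-to-fault-tree walk: start at a user-chosen
# terminal node and convert every upstream node; the path back to the
# terminal node is tracked, so revisiting a node on the current path
# signals a cycle, which is broken instead of recursed into.

def digraph_to_fault_tree(inputs, terminal):
    def build(node, path):
        if node in path:
            # Cycle found: mark the spot rather than recurse forever.
            return ("cycle-broken", node)
        preds = inputs.get(node, [])
        if not preds:
            return ("basic", node)
        # Failure of the node itself, OR failure propagated from inputs.
        children = [("basic", node)] + [build(p, path | {node}) for p in preds]
        return ("or", node, children)
    return build(terminal, frozenset())

dg = {"T": ["A", "B"], "A": ["B"], "B": ["A"]}  # A and B form a loop
tree = digraph_to_fault_tree(dg, "T")
```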
Fault-tolerant three-level inverter
Edwards, John; Xu, Longya; Bhargava, Brij B.
2006-12-05
A method for driving a neutral point clamped three-level inverter is provided. In one exemplary embodiment, DC current is received at a neutral point-clamped three-level inverter. The inverter has a plurality of nodes including first, second and third output nodes. The inverter also has a plurality of switches. Faults are checked for in the inverter and predetermined switches are automatically activated responsive to a detected fault such that three-phase electrical power is provided at the output nodes.
Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks
Huang, Rimao; Qiu, Xuesong; Rui, Lanlan
2011-01-01
Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, leads to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjustment rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
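The probe-station selection and frequency adjustment described above can be sketched as below; the sample size, back-off factors, and interval bounds are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: instead of always probing from the manager node, draw k
# cluster heads uniformly at random each round (simple random sampling),
# and adapt the probing interval to recent fault activity.
import random

def pick_probe_stations(cluster_heads, k, rng=random):
    return rng.sample(cluster_heads, k)   # sample without replacement

def adjust_probe_interval(interval, faults_found, min_s=5, max_s=120):
    # Probe more often right after faults are detected, back off otherwise,
    # to cut useless probing traffic.
    if faults_found:
        return max(min_s, interval // 2)
    return min(max_s, interval * 2)

heads = [f"node-{i}" for i in range(20)]
stations = pick_probe_stations(heads, 4)
print(adjust_probe_interval(60, faults_found=False))  # 120
```

Rotating the probing burden this way spreads energy use across the network instead of draining the manager node.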
Barall, Michael
2009-01-01
We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.
NASA Astrophysics Data System (ADS)
Wang, Rongxi; Gao, Xu; Gao, Jianmin; Gao, Zhiyong; Kang, Jiani
2018-02-01
As one of the most important approaches for analyzing the mechanism of fault pervasion, fault root cause tracing is a powerful tool for detecting the fundamental causes of faults so as to prevent any further propagation and amplification. Focused on the problems arising from the lack of systematic and comprehensive integration, a novel information transfer-based, data-driven framework for fault root cause tracing of complex electromechanical systems in the processing industry was proposed, taking into consideration the experience and qualitative analysis of conventional fault root cause tracing methods. Firstly, an improved symbolic transfer entropy method was presented to construct a directed-weighted information model for a specific complex electromechanical system based on the information flow. Secondly, considering the feedback mechanisms in complex electromechanical systems, a method for determining the threshold values of weights was developed to explore the disciplines of fault propagation. Lastly, an iterative method was introduced to identify the fault development process. The fault root cause was traced by analyzing the changes in information transfer between the nodes along the fault propagation pathway. An actual fault root cause tracing application of a complex electromechanical system is used to verify the effectiveness of the proposed framework. A unique fault root cause is obtained regardless of the choice of the initial variable. Thus, the proposed framework can be flexibly and effectively used in fault root cause tracing for complex electromechanical systems in the processing industry, and it lays the foundation for system vulnerability analysis and condition prediction, as well as other engineering applications.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between nodes, including loops. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node.
The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc. DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
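The recursive top-down cut-set parse this entry describes can be sketched as follows; the node encoding (`("basic", name)`, `("or", [children])`, `("and", [children])`) is an assumption for the sketch, not CUTSETS' actual text input format:

```python
# Hedged sketch of minimal-cut-set calculation on a fault tree: OR gates
# union their children's cut sets, AND gates take cross-products, and
# non-minimal sets (proper supersets of another set) are pruned.
from itertools import product

def minimize(cutsets):
    sets = [frozenset(c) for c in cutsets]
    return {c for c in sets if not any(o < c for o in sets)}

def cut_sets(node):
    kind = node[0]
    if kind == "basic":
        return {frozenset([node[1]])}
    children = [cut_sets(c) for c in node[1]]
    if kind == "or":
        combined = set().union(*children)
    else:  # "and": one cut set from each child, merged
        combined = {frozenset().union(*combo)
                    for combo in product(*children)}
    return minimize(combined)

tree = ("or", [("basic", "pump"),
               ("and", [("basic", "pump"), ("basic", "valve")])])
print(sorted(sorted(c) for c in cut_sets(tree)))  # [['pump']]
```

The pruning step is what makes the result *minimal*: the set {pump, valve} is discarded because {pump} alone already causes the top event.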
Is Slow Slip a Cause or a Result of Tremor?
NASA Astrophysics Data System (ADS)
Luo, Y.; Ampuero, J. P.
2017-12-01
While various modeling efforts have been conducted to reproduce subsets of observations of tremor and slow-slip events (SSE), a fundamental but yet unanswered question is whether slow slip is a cause or a result of tremor. Tremor is commonly regarded as driven by SSE. This view is mainly based on observations of SSE without detected tremors and on (frequency-limited) estimates of total tremor seismic moment being lower than 1% of their concomitant SSE moment. In previous studies we showed that models of heterogeneous faults, composed of seismic asperities embedded in an aseismic fault zone matrix, reproduce quantitatively the hierarchical patterns of tremor migration observed in Cascadia and Shikoku. To address the title question, we design two end-member models of a heterogeneous fault. In the SSE-driven-tremor model, slow slip events are spontaneously generated by the matrix (even in the absence of seismic asperities) and drive tremor. In the Tremor-driven-SSE model the matrix is stable (it slips steadily in the absence of asperities) and slow slip events result from the collective behavior of tremor asperities interacting via transient creep (local afterslip fronts). We study these two end-member models through 2D quasi-dynamic multi-cycle simulations of faults governed by rate-and-state friction with heterogeneous frictional properties and effective normal stress, using the earthquake simulation software QDYN (https://zenodo.org/record/322459). We find that both models reproduce first-order observations of SSE and tremor and have very low seismic to aseismic moment ratio. However, the Tremor-driven-SSE model assumes a simpler rheology than the SSE-driven-tremor model and matches key observations better and without fine tuning, including the ratio of propagation speeds of forward SSE and rapid tremor reversals and the decay of inter-event times of Low Frequency Earthquakes. 
These modeling results indicate that, in contrast to a common view, SSE could be a result of tremor activity. We also find that, despite important interactions between asperities, tremor activity rates are proportional to the underlying aseismic slip rate, supporting an approach to estimate SSE properties with high spatial-temporal resolutions via tremor activity.
Cell boundary fault detection system
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2009-05-05
A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.
A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2015-01-01
This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults of various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault-tolerant algorithm to solve the general case of the problem. A protocol (algorithm) that tolerates symmetric faults, provided that there are more good nodes than faulty ones, is also presented. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that the interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock nor centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of a proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.
Computer hardware fault administration
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-09-14
Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
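The fall-back routing idea in this patent abstract can be sketched as below; the path and link encodings here are illustrative assumptions, not the patent's data structures:

```python
# Hedged sketch: once a defective link has been located in the first data
# communications network, traffic whose path would cross that link is
# routed over the second, independent network instead.

def choose_network(path, defective_links):
    """Decide which network carries a message that follows `path`
    (a sequence of node names), given the primary network's known
    defective links (as (src, dst) pairs)."""
    hops = set(zip(path, path[1:]))
    if hops & defective_links:
        return "secondary"
    return "primary"

route = ["n0", "n1", "n2", "n3"]
print(choose_network(route, {("n1", "n2")}))  # secondary
print(choose_network(route, set()))           # primary
```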
Cell boundary fault detection system
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2011-04-19
An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.
A Self-Stabilizing Byzantine-Fault-Tolerant Clock Synchronization Protocol
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2009-01-01
This report presents a rapid Byzantine-fault-tolerant self-stabilizing clock synchronization protocol that is independent of application-specific requirements. It is focused on clock synchronization of a system in the presence of Byzantine faults after the cause of any transient faults has dissipated. A model of this protocol is mechanically verified using the Symbolic Model Verifier (SMV), where the entire state space is examined and proven to self-stabilize in the presence of one arbitrary faulty node. Instances of the protocol are proven to tolerate bursts of transient failures and deterministically converge with a linear convergence time with respect to the synchronization period. This protocol does not rely on assumptions about the initial state of the system other than the presence of a sufficient number of good nodes. All timing measures of variables are based on the node's local clock, and no central clock or externally generated pulse is used. The Byzantine faulty behavior modeled here is a node with arbitrarily malicious behavior that is allowed to influence other nodes at every clock tick. The only constraint is that the interactions are restricted to defined interfaces.
Napolitano, Jr., Leonard M.
1995-01-01
The Lambda network is a single stage, packet-switched interprocessor communication network for a distributed memory, parallel processor computer. Its design arises from the desired network characteristics of minimizing mean and maximum packet transfer time, local routing, expandability, deadlock avoidance, and fault tolerance. The network is based on fixed degree nodes and has mean and maximum packet transfer distances where n is the number of processors. The routing method is detailed, as are methods for expandability, deadlock avoidance, and fault tolerance.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of fault data. This is especially the case in the low-to-moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and will also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool is illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that accommodates the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At each node of the logic tree, different options for that step of the fault-related hazard computation are considered. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry and slip rate of the faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e.
minimum distance between faults) and a second one that relies on physically-based simulations. The following nodes represents for each rupture scenario different rupture forecast models (i.e; characteristic or Gutenberg-Richter) and for a given rupture forecast, two probability models commonly used in seismic hazard assessment: poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen in order to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each sites. Results will be discussed for a few specific localities of the West Corinth Gulf.
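The weighted enumeration of logic-tree branches described above can be sketched as follows. The branch names, weights, and per-branch hazard function are hypothetical placeholders for illustration, not values from the study; the point is only that the mean hazard is the weight-product-averaged hazard over all branch combinations.

```python
from itertools import product

# Hypothetical branch sets for three logic-tree nodes (weights sum to 1 per node).
geometry = [("geomA", 0.6), ("geomB", 0.4)]
rupture_model = [("characteristic", 0.5), ("gutenberg_richter", 0.5)]
probability_model = [("poisson", 0.7), ("time_dependent", 0.3)]

def hazard_for_branch(geom, rup, prob):
    """Placeholder: annual probability of exceeding a ground-motion level."""
    base = {"geomA": 0.010, "geomB": 0.014}[geom]
    factor = 1.2 if rup == "characteristic" else 1.0
    scale = 0.9 if prob == "time_dependent" else 1.0
    return base * factor * scale

def mean_hazard():
    """Weighted mean over every branch combination of the logic tree."""
    total = 0.0
    for (g, wg), (r, wr), (p, wp) in product(geometry, rupture_model,
                                             probability_model):
        total += wg * wr * wp * hazard_for_branch(g, r, p)
    return total
```

A real implementation would also keep the full distribution of branch hazards (not just the mean) to quantify the uncertainty the abstract discusses.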
Sleep, Norman H.; Blanpied, M.L.
1994-01-01
A simple cyclic process is proposed to explain why major strike-slip fault zones, including the San Andreas, are weak. Field and laboratory studies suggest that the fluid within fault zones is often mostly sealed from that in the surrounding country rock. Ductile creep driven by the difference between fluid pressure and lithostatic pressure within a fault zone leads to compaction that increases fluid pressure. The increased fluid pressure allows frictional failure in earthquakes at shear tractions far below those required when fluid pressure is hydrostatic. The frictional slip associated with earthquakes creates porosity in the fault zone. The cycle adjusts so that no net porosity is created (if the fault zone remains constant width). The fluid pressure within the fault zone reaches long-term dynamic equilibrium with the (hydrostatic) pressure in the country rock. One-dimensional models of this process lead to repeatable and predictable earthquake cycles. However, even modest complexity, such as two parallel fault splays with different pressure histories, will lead to complicated earthquake cycles. Two-dimensional calculations allowed computation of stress and fluid pressure as a function of depth but had complicated behavior with the unacceptable feature that numerical nodes failed one at a time rather than in large earthquakes. A possible way to remove this unphysical feature from the models would be to include a failure law in which the coefficient of friction increases at first with frictional slip, stabilizing the fault, and then decreases with further slip, destabilizing it. © 1994 Birkhäuser Verlag.
NASA Technical Reports Server (NTRS)
Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven
2010-01-01
Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.
Napolitano, L.M. Jr.
1995-11-28
The Lambda network is a single stage, packet-switched interprocessor communication network for a distributed memory, parallel processor computer. Its design arises from the desired network characteristics of minimizing mean and maximum packet transfer time, local routing, expandability, deadlock avoidance, and fault tolerance. The network is based on fixed degree nodes and has mean and maximum packet transfer distances where n is the number of processors. The routing method is detailed, as are methods for expandability, deadlock avoidance, and fault tolerance. 14 figs.
Robust Routing Protocol For Digital Messages
NASA Technical Reports Server (NTRS)
Marvit, Maclen
1994-01-01
Refinement of digital-message-routing protocol increases fault tolerance of polled networks. AbNET-3 is latest of generic AbNET protocols for transmission of messages among computing nodes. AbNET concept described in "Multiple-Ring Digital Communication Network" (NPO-18133). Specifically aimed at increasing fault tolerance of network in broadcast mode, in which one node broadcasts message to and receives responses from all other nodes. Communication in network of computers maintained even when links fail.
Built-in-test by signature inspection (bitsi)
Bergeson, Gary C.; Morneau, Richard A.
1991-01-01
A system and method for fault detection for electronic circuits. A stimulus generator sends a signal to the input of the circuit under test. Signature inspection logic compares the resultant signal from test nodes on the circuit to an expected signal. If the signals do not match, the signature inspection logic sends a signal to the control logic for indication of fault detection in the circuit. A data input multiplexer between the test nodes of the circuit under test and the signature inspection logic can provide for identification of the specific node at fault by the signature inspection logic. Control logic responsive to the signature inspection logic conveys information about fault detection for use in determining the condition of the circuit. When used in conjunction with a system test controller, the built-in test by signature inspection system and method can be used to poll a plurality of circuits automatically and continuously for faults and record the results of such polling in the system test controller.
Fault isolation through no-overhead link level CRC
Chen, Dong; Coteus, Paul W.; Gara, Alan G.
2007-04-24
A fault isolation technique for checking the accuracy of data packets transmitted between nodes of a parallel processor. An independent CRC is kept of all data sent from one processor to another, and of all data received by one processor from another. At the end of each checkpoint, the CRCs are compared. If they do not match, there was an error. The CRCs may be cleared and restarted at each checkpoint. In the preferred embodiment, the basic functionality is to calculate a CRC of all packet data that has been successfully transmitted across a given link. This CRC is done on both ends of the link, thereby allowing an independent check on all data believed to have been correctly transmitted. Preferably, all links have this CRC coverage, and the CRC used in this link-level check is different from that used in the packet transfer protocol. This independent check, if successfully passed, virtually eliminates the possibility that any data errors were missed during the previous transfer period.
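The link-level check described above can be illustrated with a running CRC maintained on both ends of a link, compared and cleared at each checkpoint. This sketch uses Python's `zlib.crc32` in place of the machine's actual hardware CRC.

```python
import zlib

class LinkCRC:
    """Running CRC over every payload that crossed one end of a link."""
    def __init__(self):
        self.crc = 0

    def update(self, payload: bytes):
        # Fold this payload into the running CRC for the interval.
        self.crc = zlib.crc32(payload, self.crc)

    def checkpoint(self):
        value, self.crc = self.crc, 0   # report, then restart for the next interval
        return value

sender, receiver = LinkCRC(), LinkCRC()
for pkt in (b"alpha", b"beta", b"gamma"):
    sender.update(pkt)
    receiver.update(pkt)                # delivered intact in this example
link_ok = sender.checkpoint() == receiver.checkpoint()
```

A corrupted delivery (say, the receiver folding in `b"bet@"` instead of `b"beta"`) would make the two checkpoint values differ, flagging the link even if the per-packet protocol CRC happened to miss the error.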
Fault current limiter and alternating current circuit breaker
Boenig, Heinrich J.
1998-01-01
A solid-state circuit breaker and current limiter for a load served by an alternating current source having a source impedance, the solid-state circuit breaker and current limiter comprising a thyristor bridge interposed between the alternating current source and the load, the thyristor bridge having four thyristor legs and four nodes, with a first node connected to the alternating current source, and a second node connected to the load. A coil is connected from a third node to a fourth node, the coil having an impedance of a value calculated to limit the current flowing therethrough to a predetermined value. Control means are connected to the thyristor legs for limiting the alternating current flow to the load under fault conditions to a predetermined level, and for gating the thyristor bridge under fault conditions to quickly reduce alternating current flowing therethrough to zero and thereafter to maintain the thyristor bridge in an electrically open condition preventing the alternating current from flowing therethrough for a predetermined period of time.
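For a rough feel for the coil sizing mentioned above, the required series impedance can be estimated from the source voltage, the source impedance, and the target fault-current limit. This scalar sketch ignores phase angles and is not taken from the patent; it is only the magnitude-level arithmetic.

```python
def limiting_coil_impedance(v_source, z_source, i_limit):
    """Series impedance the coil must add so a bolted fault draws at most i_limit.

    Magnitudes only: |Z_coil| = V / I_limit - |Z_source| (phase angles ignored).
    """
    z_coil = v_source / i_limit - z_source
    if z_coil < 0:
        raise ValueError("source impedance alone already limits the fault current")
    return z_coil
```

For example, a 1000 V source with 0.1 Ω source impedance and a 500 A limit would call for about 1.9 Ω of series impedance.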
Fault current limiter and alternating current circuit breaker
Boenig, H.J.
1998-03-10
A solid-state circuit breaker and current limiter are disclosed for a load served by an alternating current source having a source impedance, the solid-state circuit breaker and current limiter comprising a thyristor bridge interposed between the alternating current source and the load, the thyristor bridge having four thyristor legs and four nodes, with a first node connected to the alternating current source, and a second node connected to the load. A coil is connected from a third node to a fourth node, the coil having an impedance of a value calculated to limit the current flowing therethrough to a predetermined value. Control means are connected to the thyristor legs for limiting the alternating current flow to the load under fault conditions to a predetermined level, and for gating the thyristor bridge under fault conditions to quickly reduce alternating current flowing therethrough to zero and thereafter to maintain the thyristor bridge in an electrically open condition preventing the alternating current from flowing therethrough for a predetermined period of time. 9 figs.
Lee, Y; Tien, J M
2001-01-01
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of a multiple number of destinations. A growing number of communication services is based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that would result in an optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help leverage link utilization and cost. The LU approach has in fact been implemented by a long distance carrier resulting in a considerable efficiency improvement in its international direct services, as summarized.
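The link-utilization (LU) idea above can be caricatured as picking, per call, the candidate destination whose path leaves the smallest worst-case link utilization. The network, candidate paths, and demand value below are invented for illustration; they are not the carrier's data.

```python
def pick_destination(link_util, candidate_paths, demand):
    """Return the candidate destination whose path minimizes peak link utilization."""
    best_dest, best_peak = None, float("inf")
    for dest, links in candidate_paths.items():
        # Peak utilization this call would cause along the path to `dest`.
        peak = max(link_util[link] + demand for link in links)
        if peak < best_peak:
            best_dest, best_peak = dest, peak
    return best_dest

link_util = {"a": 0.5, "b": 0.7, "c": 0.2}          # current per-link utilization
candidate_paths = {"D1": ["a", "b"], "D2": ["a", "c"]}
choice = pick_destination(link_util, candidate_paths, 0.1)
```

Here D1's path would peak at 0.8 on link "b" while D2's peaks at 0.6 on link "a", so the call is routed to D2; a cost-driven (NC) variant would simply replace the objective with path cost.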
Heterogeneous delays making parents synchronized: A coupled maps on Cayley tree model
NASA Astrophysics Data System (ADS)
Singh, Aradhana; Jalan, Sarika
2014-06-01
We study the phase synchronized clusters in diffusively coupled maps on Cayley tree networks for heterogeneous delay values. Cayley tree networks comprise two parts: the inner nodes and the boundary nodes. We find that heterogeneous delays lead to various cluster states, such as (a) a cluster state consisting of inner nodes and boundary nodes, and (b) a cluster state consisting of only boundary nodes. The former state may comprise nodes from all the generations, forming a self-organized cluster, or nodes from a few generations, yielding driven clusters, depending upon the parity of the heterogeneous delay values. Furthermore, heterogeneity in delays leads to lag synchronization between the siblings lying on the boundary by destroying the exact synchronization among them, with the time lag equal to the difference in the delay values. The Lyapunov function analysis sheds light on the destruction of the exact synchrony among the last generation nodes. Toward the end, we discuss the relevance of our results with respect to their applications in family business as well as in understanding the occurrence of genetic diseases.
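A minimal numerical sketch of delay-coupled logistic maps on a two-generation Cayley tree (branching ratio 2) is given below. The coupling form, parameter values, and delay assignment are illustrative assumptions, not the paper's exact setup.

```python
import random

def logistic(x, mu=4.0):
    return mu * x * (1.0 - x)

def simulate(edges, delays, n_nodes, eps=0.6, steps=200, seed=1):
    """Iterate x_i(t+1) = (1-eps) f(x_i(t)) + (eps/k_i) sum_j f(x_j(t - tau_ij))."""
    rng = random.Random(seed)
    neigh = {i: [] for i in range(n_nodes)}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    max_tau = max(delays.values())
    # Seed enough history to look back max_tau steps.
    hist = [[rng.random() for _ in range(n_nodes)] for _ in range(max_tau + 1)]
    for _ in range(steps):
        t = len(hist) - 1
        nxt = []
        for i in range(n_nodes):
            coup = sum(logistic(hist[t - delays[frozenset((i, j))]][j])
                       for j in neigh[i])
            nxt.append((1.0 - eps) * logistic(hist[t][i])
                       + eps * coup / len(neigh[i]))
        hist.append(nxt)
    return hist

# Root 0, inner generation {1, 2}, boundary generation {3, 4, 5, 6}.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
delays = {frozenset(e): (1 if 0 in e else 2) for e in edges}  # heterogeneous delays
trajectory = simulate(edges, delays, n_nodes=7)
```

Inspecting which nodes' trajectories coincide (exactly or up to a lag) after transients is how the cluster states discussed in the abstract would be identified.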
Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay
2017-11-01
Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs the JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault-detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
Fault-tolerant power distribution system
NASA Technical Reports Server (NTRS)
Volp, Jeffrey A. (Inventor)
1987-01-01
A fault-tolerant power distribution system which includes a plurality of power sources and a plurality of nodes responsive thereto for supplying power to one or more loads associated with each node. Each node includes a plurality of switching circuits, each of which preferably uses a power field effect transistor which provides a diode operation when power is first applied to the nodes and which thereafter provides bi-directional current flow through the switching circuit in a manner such that a low voltage drop is produced in each direction. Each switching circuit includes circuitry for disabling the power field effect transistor when the current in the switching circuit exceeds a preselected value.
Sum, John Pui-Fai; Leung, Chi-Sing; Ho, Kevin I-J
2012-02-01
Improving the fault tolerance of a neural network has been studied for more than two decades, and various training algorithms have been proposed to that end. The on-line node fault injection-based algorithm is one of these algorithms, in which hidden nodes randomly output zeros during training. While the idea is simple, theoretical analyses of this algorithm are far from complete. This paper presents its objective function and the convergence proof. We consider three cases for multilayer perceptrons (MLPs): (1) MLPs with a single linear output node; (2) MLPs with multiple linear output nodes; and (3) MLPs with a single sigmoid output node. For the convergence proof, we show that the algorithm converges with probability one. For the objective function, we show that the corresponding objective functions of cases (1) and (2) are of the same form: they both consist of a mean squared error term, a regularizer term, and a weight decay term. For case (3), the objective function is slightly different from that of cases (1) and (2). With the objective functions derived, we can compare the similarities and differences among various algorithms and various cases.
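The on-line node fault injection idea — hidden nodes randomly outputting zero during training — can be sketched for case (1), an MLP with a single linear output node. The network size, learning rate, fault probability, and regression target below are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP: 1 input -> H tanh hidden units -> 1 linear output.
H, lr, p_fault = 8, 0.05, 0.05
W1 = rng.normal(0, 0.5, (H, 1)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (1, H)); b2 = np.zeros(1)

X = rng.uniform(-1, 1, (200, 1))
y = 0.5 * X                      # simple regression target

def forward(x, mask=None):
    h = np.tanh(x @ W1.T + b1)
    if mask is not None:
        h = h * mask             # injected faults: hidden nodes stuck at zero
    return h, h @ W2.T + b2

def mse():
    _, out = forward(X)          # evaluate the fault-free network
    return float(np.mean((out - y) ** 2))

loss0 = mse()
for step in range(500):
    i = rng.integers(len(X))
    x, t = X[i:i+1], y[i:i+1]
    mask = (rng.random(H) > p_fault).astype(float)  # each node fails w.p. p_fault
    h, out = forward(x, mask)
    err = out - t                                   # gradient of 0.5 * err^2
    gW2, gb2 = err.T @ h, err.ravel()
    dh = (err @ W2) * mask * (1 - h ** 2)
    gW1, gb1 = dh.T @ x, dh.ravel()
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
loss1 = mse()
```

Training under random node zeroing in this way is what, per the abstract, yields an effective objective of mean squared error plus regularizer and weight-decay terms.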
Fault Injection Campaign for a Fault Tolerant Duplex Framework
NASA Technical Reports Server (NTRS)
Sacco, Gian Franco; Ferraro, Robert D.; von llmen, Paul; Rennels, Dave A.
2007-01-01
Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach, allowing two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft where the fault rate produced by cosmic rays is not very high.
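The monitor-and-restart loop of a duplex scheme can be sketched as below. The replica interfaces, restart callback, and the simulated transient fault are invented placeholders, not the Duplex Framework's actual API.

```python
def duplex_run(step_a, step_b, inputs, restart):
    """Feed each input to two replicas; keep a result only when both agree."""
    accepted = []
    for x in inputs:
        a, b = step_a(x), step_b(x)
        if a != b:          # inconsistency detected by the monitor process
            restart(x)      # restart the replicas, drop this result
            continue
        accepted.append(a)
    return accepted

def healthy(x):
    return x * x

def flaky(x):
    return x * x if x != 3 else -1   # simulated transient fault on input 3

restarts = []
results = duplex_run(healthy, flaky, [1, 2, 3, 4], restarts.append)
```

The comparison catches the single divergent computation and triggers one restart; all agreeing results pass through untouched.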
Bisectional fault detection system
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2012-02-14
An apparatus, program product and method logically divide a group of nodes and cause node pairs comprising a node from each section to communicate. Results from the communications may be analyzed to determine performance characteristics, such as bandwidth and proper connectivity.
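A toy version of the bisection idea: split the node list into two halves, pair them index-by-index, exercise each pair, and flag pairs whose measured bandwidth falls below a threshold. The bandwidth function and threshold here are made up for illustration.

```python
def bisection_pairs(nodes):
    """Pair the i-th node of the first half with the i-th node of the second half."""
    half = len(nodes) // 2
    return list(zip(nodes[:half], nodes[half:]))

def suspect_pairs(pairs, bandwidth, threshold=1.0):
    """Flag node pairs whose measured bandwidth is below the threshold."""
    return [(a, b) for a, b in pairs if bandwidth(a, b) < threshold]

pairs = bisection_pairs([0, 1, 2, 3, 4, 5, 6, 7])
slow = suspect_pairs(pairs, bandwidth=lambda a, b: 0.2 if b == 6 else 2.0)
```

Repeating the split along different dimensions narrows a flagged pair down to a specific faulty node or link.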
Bisectional fault detection system
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2009-08-04
An apparatus and program product logically divide a group of nodes and cause node pairs comprising a node from each section to communicate. Results from the communications may be analyzed to determine performance characteristics, such as bandwidth and proper connectivity.
Bisectional fault detection system
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2008-11-11
An apparatus, program product and method logically divides a group of nodes and causes node pairs comprising a node from each section to communicate. Results from the communications may be analyzed to determine performance characteristics, such as bandwidth and proper connectivity.
Dynamical jumping real-time fault-tolerant routing protocol for wireless sensor networks.
Wu, Guowei; Lin, Chi; Xia, Feng; Yao, Lin; Zhang, He; Liu, Bing
2010-01-01
In time-critical wireless sensor network (WSN) applications, a high degree of reliability is commonly required. A dynamical jumping real-time fault-tolerant routing protocol (DMRF) is proposed in this paper. Each node utilizes the remaining transmission time of the data packets and the state of the forwarding candidate node set to dynamically choose the next hop. Once node failure, network congestion or a void region occurs, the transmission switches to jumping mode, which can reduce the transmission time delay, guaranteeing that the data packets are sent to the destination node within the specified time limit. By using a feedback mechanism, each node dynamically adjusts its jumping probabilities to increase the ratio of successful transmission. Simulation results show that DMRF can not only efficiently reduce the effects of failed nodes, congestion and void regions, but also yield a higher ratio of successful transmission, smaller transmission delay and a reduced number of control packets.
Tien, Nguyen Xuan; Kim, Semog; Rhee, Jong Myung; Park, Sang Yoon
2017-07-25
Fault tolerance has long been a major concern for sensor communications in fault-tolerant cyber physical systems (CPSs). Network failure problems often occur in wireless sensor networks (WSNs) due to various factors such as the insufficient power of sensor nodes, the dislocation of sensor nodes, the unstable state of wireless links, and unpredictable environmental interference. Fault tolerance is thus one of the key requirements for data communications in WSN applications. This paper proposes a novel path redundancy-based algorithm, called dual separate paths (DSP), that provides fault-tolerant communication with the improvement of the network traffic performance for WSN applications, such as fault-tolerant CPSs. The proposed DSP algorithm establishes two separate paths between a source and a destination in a network based on the network topology information. These paths are node-disjoint paths and have optimal path distances. Unicast frames are delivered from the source to the destination in the network through the dual paths, providing fault-tolerant communication and reducing redundant unicast traffic for the network. The DSP algorithm can be applied to wired and wireless networks, such as WSNs, to provide seamless fault-tolerant communication for mission-critical and life-critical applications such as fault-tolerant CPSs. The analyzed and simulated results show that the DSP-based approach not only provides fault-tolerant communication, but also improves network traffic performance. For the case study in this paper, when the DSP algorithm was applied to high-availability seamless redundancy (HSR) networks, the proposed DSP-based approach reduced the network traffic by 80% to 88% compared with the standard HSR protocol, thus improving network traffic performance.
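The dual-path construction can be sketched with a greedy remove-and-reroute heuristic: find one shortest path by BFS, then search again with that path's interior nodes banned. This is a simplification of the DSP algorithm, and unlike Suurballe-style algorithms it is not guaranteed to find a disjoint pair whenever one exists.

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest path (fewest hops) from src to dst avoiding `banned` nodes."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and v not in banned:
                prev[v] = u
                q.append(v)
    return None

def dual_separate_paths(adj, src, dst):
    """Heuristic pair of node-disjoint paths: route, then re-route around path 1."""
    p1 = bfs_path(adj, src, dst)
    if p1 is None:
        return None
    p2 = bfs_path(adj, src, dst, banned=frozenset(p1[1:-1]))
    return (p1, p2) if p2 else None
```

On a four-node ring, for instance, the two returned paths traverse opposite sides of the ring and share only the source and destination.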
Proactive Fault Tolerance Using Preemptive Migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Vallee, Geoffroy R; Naughton, III, Thomas J
2009-01-01
Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Qing, E-mail: qing.gao.chance@gmail.com; Dong, Daoyi, E-mail: daoyidong@gmail.com; Petersen, Ian R., E-mail: i.r.petersen@gmai.com
The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.
Fault-Tolerant Local-Area Network
NASA Technical Reports Server (NTRS)
Morales, Sergio; Friedman, Gary L.
1988-01-01
Local-area network (LAN) for computers prevents single-point failure from interrupting communication between nodes of network. Includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link cables to such network-node devices as work stations, print servers, and file servers. Slave switches respond to commands from master switch, connecting nodes to two cable networks or disconnecting them so they are completely isolated. System monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. Network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with potential dramatic reduction in time out of service.
Proactive Fault Tolerance for HPC with Xen Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Arun Babu; Mueller, Frank; Engelmann, Christian
2007-01-01
with thousands of processors. At such large counts of compute nodes, faults are becoming commonplace. Current techniques to tolerate faults focus on reactive schemes to recover from faults and generally rely on a checkpoint/restart mechanism. Yet, in today's systems, node failures can often be anticipated by detecting a deteriorating health status. Instead of a reactive scheme for fault tolerance (FT), we are promoting a proactive one where processes automatically migrate from unhealthy nodes to healthy ones. Our approach relies on operating system virtualization techniques exemplified by, but not limited to, Xen. This paper contributes an automatic and transparent mechanism for proactive FT for arbitrary MPI applications. It leverages virtualization techniques combined with health monitoring and load-based migration. We exploit Xen's live migration mechanism for a guest operating system (OS) to migrate an MPI task from a health-deteriorating node to a healthy one without stopping the MPI task during most of the migration. Our proactive FT daemon orchestrates the tasks of health monitoring, load determination and initiation of guest OS migration. Experimental results demonstrate that live migration hides migration costs and limits the overhead to only a few seconds, making it an attractive approach to realize FT in HPC systems. Overall, our enhancements make proactive FT a valuable asset for long-running MPI applications that is complementary to reactive FT using full checkpoint/restart schemes, since checkpoint frequencies can be reduced as fewer unanticipated failures are encountered. In the context of OS virtualization, we believe that this is the first comprehensive study of proactive fault tolerance where live migration is actually triggered by health monitoring.
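The migrate-before-failure decision can be sketched as a simple threshold policy over per-node health scores. The scoring, threshold, and capacity model below are stand-ins for the actual health monitoring and load determination that drive the live migration.

```python
def plan_migrations(health, threshold, capacity):
    """Map each unhealthy node to a healthy target, largest spare capacity first."""
    donors = [n for n, h in health.items() if h < threshold]
    receivers = sorted((n for n, h in health.items() if h >= threshold),
                       key=lambda n: capacity[n], reverse=True)
    return dict(zip(donors, receivers))  # unmatched donors simply wait for capacity

plan = plan_migrations(health={"n1": 0.9, "n2": 0.3, "n3": 0.8},
                       threshold=0.5,
                       capacity={"n1": 4, "n2": 2, "n3": 8})
```

Here node n2's deteriorating health score selects it for migration, and n3 is chosen as the target because it has the most spare capacity.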
Delay-induced cluster patterns in coupled Cayley tree networks
NASA Astrophysics Data System (ADS)
Singh, A.; Jalan, S.
2013-07-01
We study the effects of delay in diffusively coupled logistic maps on Cayley tree networks. We find that smaller coupling values exhibit sensitivity to the value of delay and lead to different cluster patterns of self-organized and driven types, whereas larger coupling strengths exhibit robustness against changes in delay values and lead to stable driven clusters comprising nodes from the last generation of the Cayley tree. Furthermore, the introduction of delay exhibits suppression as well as enhancement of synchronization depending upon coupling strength values. Toward the end, we discuss the importance of these results for understanding conflicts and cooperation observed in family business.
Observations of Displacement-driven Maturation along a Subduction-Transform Edge Propagator Fault
NASA Astrophysics Data System (ADS)
Neely, J. S.; Furlong, K. P.
2016-12-01
The Solomon Islands-Vanuatu composite subduction zone represents a tectonically complex region along the Pacific-Australia plate boundary in the southwest Pacific Ocean. Here the Australia plate subducts under the Pacific plate in two parts - the Solomon Trench and the Vanuatu Trench - with the two segments separated by a transform fault produced by a tear in the approaching Australia plate. As a result of the Australia plate tearing, the two subducting sections are offset by the 280 km long San Cristobal Trough (SCT) transform fault, which acts as a Subduction-Transform Edge Propagator (STEP) fault. The formation of this transform fault provides an opportunity to study the evolution of a newly created transform plate boundary. As distance from the tear increases, both the magnitude and frequency of earthquakes along the transform increase reflecting the coalescence of fault segments into a through-going structure. Over the past few decades, there have been several instances of larger magnitude earthquakes migrating westward along the STEP through a rapid succession of events. A recent May 2015 sequence of MW 6.8, MW 6.9, and MW 6.8 earthquakes followed this pattern, with an east to west migration over three days. However, neither this 2015 sequence, nor a previous 1993 progression, ruptured into or nucleated a large earthquake within the region near the tear. SCT sequence termination outside the region of the newly formed fault occurs even though Coulomb Failure Stress analyses reveal that the tear end of the SCT is positively loaded for failure by the earthquake sequence. Changing seismicity patterns along the SCT are also mapped by b-value variations that correspond to the rupture patterns of these propagating sequences. These seismicity pattern changes along the SCT reveal a fault maturation process with strain localization driven by cumulative slip corresponding to approximately 80-100 km of displacement.
Response of faults to climate-driven changes in ice and water volumes on Earth's surface.
Hampel, Andrea; Hetzel, Ralf; Maniatis, Georgios
2010-05-28
Numerical models including one or more faults in a rheologically stratified lithosphere show that climate-induced variations in ice and water volumes on Earth's surface considerably affect the slip evolution of both thrust and normal faults. In general, the slip rate and hence the seismicity of a fault decreases during loading and increases during unloading. Here, we present several case studies to show that a postglacial slip rate increase occurred on faults worldwide in regions where ice caps and lakes decayed at the end of the last glaciation. Of note is that the postglacial amplification of seismicity was not restricted to the areas beneath the large Laurentide and Fennoscandian ice sheets but also occurred in regions affected by smaller ice caps or lakes, e.g. the Basin-and-Range Province. Our results have important consequences not only for the interpretation of palaeoseismological records from faults in these regions but also for the evaluation of future seismicity in regions currently affected by deglaciation, like Greenland and Antarctica: shrinkage of the modern ice sheets owing to global warming may ultimately lead to an increase in earthquake frequency in these regions.
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need of high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
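The re-routing behavior described above can be sketched as a work queue that re-queues a job whenever the node running it fails. The round-robin dispatch and the failure signal (a `None` result) are simplifications for illustration, not Squid's actual scheduler.

```python
import collections

def run_jobs(jobs, nodes, execute, max_tries=10):
    """Dispatch jobs round-robin; re-route a job to another node when one fails."""
    pending = collections.deque(jobs)
    tries = collections.Counter()
    done, i = {}, 0
    while pending:
        job = pending.popleft()
        node = nodes[i % len(nodes)]
        i += 1
        result = execute(node, job)
        if result is None:                # node failure reported for this attempt
            tries[job] += 1
            if tries[job] < max_tries:    # give up only after repeated failures
                pending.append(job)       # re-route instead of losing the query
        else:
            done[job] = result
    return done

nodes = ["good-1", "bad", "good-2"]
run = run_jobs([1, 2, 3], nodes, lambda n, j: None if n == "bad" else j * 10)
```

Every job lands on a working node despite one node failing on each of its attempts; the `max_tries` cap keeps a permanently dead cluster from looping forever.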
Li, Yunji; Wu, QingE; Peng, Li
2018-01-23
In this paper, a synthesized design of a fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of randomly occurring deception attacks. To achieve a fault-detection residual that is only sensitive to faults while robust to disturbances, a coordinate transformation approach is exploited. This approach can transform the considered system into two subsystems, and the unknown disturbances are removed from one of the subsystems. The gain of the fault-detection filter is derived by minimizing an upper bound on the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented for improving the working life of the wireless sensor node when measurement information is aperiodically transmitted. Finally, a scaled version of an industrial system consisting of a local PC, remote estimator and wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault detection is guaranteed when the event condition is triggered.
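The event-triggered transmission scheme can be caricatured by a send-on-deviation rule: the sensor transmits only when the measurement drifts more than a threshold away from the last transmitted value. The scalar rule below is a deliberate simplification of the paper's trigger condition.

```python
def event_triggered(measurements, delta):
    """Send a measurement only when it deviates from the last sent value by > delta."""
    sent, last = [], None
    for t, y in enumerate(measurements):
        if last is None or abs(y - last) > delta:
            sent.append((t, y))   # transmit (time stamp, value) over the radio
            last = y              # remote side holds this value until the next event
    return sent

sent = event_triggered([0.0, 0.05, 0.3, 0.32, 1.0], delta=0.1)
```

Only three of the five samples are transmitted, which is exactly the radio-energy saving that motivates event triggering on a battery-powered sensor node.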
Delivery and application of precise timing for a traveling wave powerline fault locator system
NASA Technical Reports Server (NTRS)
Street, Michael A.
1990-01-01
The Bonneville Power Administration (BPA) has successfully operated an in-house developed powerline fault locator system since 1986. The BPA fault locator system consists of remotes installed at cardinal power transmission line system nodes and a central master which polls the remotes for traveling wave time-of-arrival data. A power line fault produces a fast rise-time traveling wave which emanates from the fault point and propagates throughout the power grid. The remotes time-tag the traveling wave leading edge as it passes through the power system cardinal substation nodes. A synchronizing pulse transmitted via the BPA analog microwave system on a wideband channel synchronizes the time-tagging counters in the remote units to a differential accuracy of better than one microsecond. The remote units correct the raw time tags for synchronizing pulse propagation delay and return the corrected values to the fault locator master. The master then calculates the location of the power system disturbance using the collected time tags. The system design objective is a fault location accuracy of 300 meters. BPA's fault locator system operation, error-producing phenomena, and the method of distributing precise timing are described.
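The two-ended time-of-arrival principle behind such a system can be sketched as follows; the propagation speed and the function below are illustrative assumptions, not BPA's actual implementation:

```python
def fault_distance(t_a, t_b, line_length_km, v_km_per_s=2.9e5):
    """Distance of the fault from terminal A using two-ended time tags.

    A wave from a fault x km from A arrives at A at t0 + x/v and at B at
    t0 + (L - x)/v, so x = (L + v*(t_a - t_b)) / 2.  The propagation speed
    v ~ 2.9e5 km/s (slightly below the speed of light) is an assumed value.
    """
    return (line_length_km + v_km_per_s * (t_a - t_b)) / 2.0
```

With counters synchronized to better than one microsecond, the differential timing error contributes roughly v*dt/2, on the order of 145 m, which is consistent with the 300 m design objective quoted above.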
AF-DHNN: Fuzzy Clustering and Inference-Based Node Fault Diagnosis Method for Fire Detection
Jin, Shan; Cui, Wen; Jin, Zhigang; Wang, Ying
2015-01-01
Wireless Sensor Networks (WSNs) have been utilized for node fault diagnosis in the fire detection field since the 1990s. However, the traditional methods have some problems, including complicated system structures, intensive computation needs, unsteady data detection and local minimum values. In this paper, a new diagnosis mechanism for WSN nodes is proposed, based on fuzzy theory and an Adaptive Fuzzy Discrete Hopfield Neural Network (AF-DHNN). First, the original status of each sensor over time is obtained with two features. One is the root mean square of the filtered signal (FRMS); the other is the normalized summation of the positive amplitudes of the difference spectrum between the measured signal and the healthy one (NSDS). Second, distributed fuzzy inference is introduced, and evidently abnormal node statuses are pre-alarmed to save time. Third, according to the dimensions of the diagnostic data, an adaptive diagnostic status system is established with a Fuzzy C-Means Algorithm (FCMA) and a Sorting and Classification Algorithm to reduce the complexity of the fault determination. Fourth, a Discrete Hopfield Neural Network (DHNN) with iterations is improved with the optimization of the sensors' detected status information and standard diagnostic levels, with which associative memory is achieved and search efficiency is improved. The experimental results show that the AF-DHNN method can diagnose abnormal WSN node faults promptly and effectively, which improves WSN reliability. PMID:26193280
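The two features can be sketched in a few lines; the naive DFT and the normalization used here are assumptions for illustration, since the abstract does not define them precisely:

```python
import cmath
import math

def frms(signal):
    """Root mean square of a (pre-filtered) signal -- the FRMS feature."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def nsds(measured, healthy):
    """Normalized summation of positive amplitudes of the difference spectrum
    between a measured signal and a healthy reference (NSDS).  A naive DFT is
    used for self-containment; the normalization is an assumed choice.
    """
    n = len(measured)

    def mag_spectrum(x):
        # magnitudes of the first n//2 DFT bins
        return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) for k in range(n // 2)]

    diff = [m - h for m, h in zip(mag_spectrum(measured), mag_spectrum(healthy))]
    positive = sum(d for d in diff if d > 0)
    total = sum(abs(d) for d in diff) or 1.0  # avoid 0/0 for identical signals
    return positive / total
```

A healthy node yields NSDS near 0, while a signal with excess spectral energy relative to the healthy reference pushes NSDS toward 1.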
Zhao, Kaihui; Li, Peng; Zhang, Changfan; Li, Xiangfei; He, Jing; Lin, Yuliang
2017-12-06
This paper proposes a new scheme for reconstructing current sensor faults and estimating unknown load disturbance in a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems: the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults but is free from unknown load disturbances. By introducing a new state variable, the augmented subsystem with sensor faults can be transformed into one with actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem with unknown load disturbance, and the sensor faults are reconstructed by the second SMO in the augmented subsystem with sensor faults. The gains of the proposed SMOs and their stability analysis are developed via the solution of a linear matrix inequality (LMI). Finally, the effectiveness of the proposed scheme was verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate unknown load disturbance for the PMSM-driven system.
Deformation driven by subduction and microplate collision: Geodynamics of Cook Inlet basin, Alaska
Bruhn, R.L.; Haeussler, Peter J.
2006-01-01
Late Neogene and younger deformation in Cook Inlet basin is caused by dextral transpression in the plate margin of south-central Alaska. Collision and subduction of the Yakutat microplate at the northeastern end of the Aleutian subduction zone is driving the accretionary complex of the Chugach and Kenai Mountains toward the Alaska Range on the opposite side of the basin. This deformation creates belts of fault-cored anticlines that are prolific traps of hydrocarbons and are also potential sources for damaging earthquakes. The faults dip steeply, extend into the Mesozoic basement beneath the Tertiary basin fill, and form conjugate flower structures at some localities. Comparing the geometry of the natural faults and folds with analog models created in a sandbox deformation apparatus suggests that some of the faults accommodate significant dextral as well as reverse-slip motion. We develop a tectonic model in which dextral shearing and horizontal shortening of the basin is driven by microplate collision with an additional component of thrust-type strain caused by plate subduction. This model predicts temporally fluctuating stress fields that are coupled to the recurrence intervals of large-magnitude subduction zone earthquakes. The maximum principal compressive stress is oriented east-southeast to east-northeast with nearly vertical least compressive stress when the basin's lithosphere is mostly decoupled from the underlying subduction megathrust. This stress tensor is compatible with principal stresses inferred from focal mechanisms of earthquakes that occur within the crust beneath Cook Inlet basin. Locking of the megathrust between great magnitude earthquakes may cause the maximum principal compressive stress to rotate toward the northwest. 
Moderately dipping faults that strike north to northeast may be optimally oriented for rupture in the ambient stress field, but steeply dipping faults within the cores of some anticlines are unfavorably oriented with respect to both modeled and observed stress fields, suggesting that elevated fluid pressure may be required to trigger fault rupture. © 2006 Geological Society of America.
Evidence of displacement-driven maturation along the San Cristobal Trough transform plate boundary
NASA Astrophysics Data System (ADS)
Neely, James S.; Furlong, Kevin P.
2018-03-01
The San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon Islands, provides an opportunity to study the transform boundary development process. Recent seismicity (2013-2016) along the 280 km long SCT, known as a Subduction-Transform Edge Propagator (STEP) fault, highlights the tearing process and ongoing development of the plate boundary. The region's earthquakes reveal two key characteristics. First, earthquakes at the western terminus of the SCT, which we interpret to indicate the Australia plate tearing, display disparate fault geometries. These events demonstrate that plate tearing is accommodated via multiple intersecting planes rather than a single through-going fault. Second, the SCT hosts sequences of Mw ∼7 strike-slip earthquakes that migrate westward through a rapid succession of events. Sequences in 1993 and 2015 both began along the eastern SCT and propagated west, but neither progression ruptured into or nucleated a large earthquake within the region near the tear. Utilizing b-value and Coulomb Failure Stress analyses, we examine these along-strike variations in the SCT's seismicity. b-Values are highest along the youngest, western end of the SCT and decrease with increasing distance from the tear. This trend may reflect increasing strain localization with increasing displacement. Coulomb Failure Stress analyses indicate that the stress conditions were conducive to continued western propagation of the 1993 and 2015 sequences suggesting that the unruptured western SCT may have fault geometries or properties that inhibit continued rupture. Our results indicate a displacement-driven fault maturation process. The multi-plane Australia plate tearing likely creates a western SCT with diffuse strain accommodated along a network of disorganized faults. 
After ∼90 km of cumulative displacement (∼900,000 yr of plate motion), strain localizes and faults align, allowing the SCT to host large earthquakes.
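A b-value of the kind used in the analysis above is commonly estimated with the Aki-Utsu maximum-likelihood formula; the binning correction and default values below are standard choices, not parameters taken from this study:

```python
import math

def b_value(magnitudes, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for events with M >= mc.

    mc is the magnitude of completeness; dm is the catalog's magnitude
    binning width (Utsu's correction).  b = log10(e) / (mean(M) - (mc - dm/2)).
    """
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

Higher mean magnitude above completeness gives a lower b-value, which is the sense in which decreasing b-values along the SCT can be read as increasing strain localization.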
Fault-Tolerant Algorithms for Connectivity Restoration in Wireless Sensor Networks.
Zeng, Yali; Xu, Li; Chen, Zhide
2015-12-22
As wireless sensor networks (WSNs) are often deployed in hostile environments, nodes in such networks are prone to large-scale failures that prevent the network from working normally. In this case, an effective restoration scheme is needed to restore the faulty network in a timely manner. Most existing restoration schemes focus on the number of deployed nodes or on fault tolerance alone, but fail to take into account that network coverage and topology quality are also important to a network. To address this issue, we present two algorithms named Full 2-Connectivity Restoration Algorithm (F2CRA) and Partial 3-Connectivity Restoration Algorithm (P3CRA), which restore a faulty WSN in different respects. F2CRA constructs a fan-shaped topology structure to reduce the number of deployed nodes, while P3CRA constructs a dual-ring topology structure to improve the fault tolerance of the network. F2CRA is suitable when restoration cost is given priority, and P3CRA is suitable when network quality is considered first. Compared with other algorithms, these two algorithms ensure that the network has stronger fault tolerance, a larger coverage area and a better balanced load after restoration.
NASA Astrophysics Data System (ADS)
Chen, Chunfeng; Liu, Hua; Fan, Ge
2005-02-01
In this paper we consider the problem of designing a network of optical cross-connects (OXCs) to provide end-to-end lightpath services to label switched routers (LSRs). Like some previous work, we take the number of OXCs as our objective. Unlike previous studies, however, we also take into account the fault tolerance of the logical topology. First, we generate a tree from a randomly generated Prüfer sequence. By adding edges to this tree, we obtain a physical topology consisting of a number of OXCs and the fiber links connecting them. Notably, we limit, for the first time, the number of layers of the tree produced by this method. We then design logical topologies over these physical topologies. In principle, we select the shortest path, with some consideration of link load balancing and of the constraints imposed by shared risk link groups (SRLGs). Notably, we run the routing algorithm over the nodes in increasing order of node degree. For the wavelength assignment problem, we adopt a commonly used graph-coloring heuristic. The problem is clearly computationally intractable, especially when the network is large, so we adopt a tabu search algorithm to find a near-optimal solution for our objective. We present numerical results for up to 1000 LSRs and for a wide range of system parameters, such as the number of wavelengths supported by each fiber link and the traffic load. The results indicate that it is possible to build large-scale optical networks with rich connectivity in a cost-effective manner, using relatively few but properly dimensioned OXCs.
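The first step, generating a tree from a random Prüfer sequence, decodes as follows (a standard algorithm, sketched here for illustration):

```python
def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence into the edge list of a labeled tree on
    nodes 1..n (the sequence has length n - 2).  A random sequence yields
    a random spanning tree, to which extra edges can then be added to form
    a richer physical topology.
    """
    degree = {v: 1 for v in range(1, n + 1)}
    for s in seq:
        degree[s] += 1
    edges = []
    for s in seq:
        leaf = min(x for x in degree if degree[x] == 1)  # smallest current leaf
        edges.append((leaf, s))
        degree[leaf] -= 1  # leaf is used up (degree drops to 0)
        degree[s] -= 1
    u, v = sorted(x for x in degree if degree[x] == 1)   # two nodes remain
    edges.append((u, v))
    return edges
```

Because each label can repeat in the sequence, node degrees (and hence the tree's layer count) are controlled by how the sequence is drawn, which is where the layer limit mentioned above would be enforced.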
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
NASA Astrophysics Data System (ADS)
Dawers, N. H.; McLindon, C.
2017-12-01
A synthesis of late Quaternary faults within the Mississippi River deltaic plain aims to provide a more accurate assessment of regional and local fault architecture, and interactions between faulting, sediment loading, salt withdrawal and compaction. This effort was initiated by the New Orleans Geological Society and has resulted in access to industry 3D seismic reflection data, as well as fault trace maps, and various types of well data and biostratigraphy. An unexpected outgrowth of this project is a hypothesis that gravity-driven normal faults in deltaic settings may be good candidates for shallow aseismic and slow-slip phenomena. The late Quaternary fault population is characterized by several large, highly segmented normal fault arrays: the Baton Rouge-Tepetate fault zone, the Lake Pontchartrain-Lake Borgne fault zone, the Golden Meadow fault zone (GMFZ), and a major counter-regional salt withdrawal structure (the Bay Marchand-Timbalier Bay-Caillou Island salt complex and West Delta fault zone) that lies just offshore of southeastern Louisiana. In comparison to the other, more northerly fault zones, the GMFZ is still significantly salt-involved. Salt structures segment the GMFZ with fault tips ending near or within salt, resulting in highly localized fault and compaction related subsidence separated by shallow salt structures, which are inherently buoyant and virtually incompressible. At least several segments within the GMFZ are characterized by marsh breaks that formed aseismically over timescales of days to months, such as near Adams Bay and Lake Enfermer. One well-documented surface rupture adjacent to a salt dome propagated over a 3 day period in 1943. We suggest that Louisiana's coastal faults make excellent analogues for deltaic faults in general, and propose that a series of positive feedbacks keep them active in the near surface.
These include differential sediment loading and compaction, weak fault zone materials, high fluid pressure, low elastic stiffness in surrounding materials, and low confining pressure.
NASA Astrophysics Data System (ADS)
Dalguer, L. A.; Day, S. M.
2006-12-01
Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods, the Thick Fault (TF) proposed by Madariaga et al. (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) method proposed by Dalguer and Day (2006), each implemented in a fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and the SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone whose thickness is one spatial step dx for the SG model and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. These tractions in turn are controlled by the jump conditions and a friction law. Our solutions to 3D test problems show that the inelastic-zone TF and SG methods perform much more poorly than the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitative agreement with the reference solutions to the 3D test problem, and proved sufficiently computationally inefficient that it was not feasible to explore its convergence quantitatively. The SGSN method gives very accurate solutions, and is also very efficient.
Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial to reduce uncertainties in numerical simulations of earthquake source dynamics and ground motion, and therefore important to improving our understanding of earthquake physics in general.
Coordinated Fault-Tolerance for High-Performance Computing Final Project Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panda, Dhabaleswar Kumar; Beckman, Pete
2011-07-28
With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a lightweight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses.
This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
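The publish-subscribe pattern at the heart of a backplane like the FTB can be sketched as a minimal event bus; the class and method names here are hypothetical and do not reflect the actual FTB API specification:

```python
from collections import defaultdict

class FaultBackplane:
    """A minimal publish-subscribe fault-information bus, in the spirit of
    the FTB described above (illustrative interface, not the FTB spec).
    Any subsystem can publish fault events; any other can subscribe.
    """
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, event_name, handler):
        """Register a callback for a named fault event."""
        self._subs[event_name].append(handler)

    def publish(self, event_name, payload):
        """Deliver a fault event to every registered subscriber."""
        for handler in self._subs[event_name]:
            handler(event_name, payload)
```

For example, a job scheduler could subscribe to "node_failure" events published by a monitoring daemon and reschedule affected jobs, which is the kind of coordinated response the project targeted.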
NASA Astrophysics Data System (ADS)
Yamashita, F.; Fukuyama, E.; Xu, S.; Kawakata, H.; Mizoguchi, K.; Takizawa, S.
2017-12-01
We report two types of foreshock activity observed in meter-scale laboratory experiments: a slow-slip-driven type and a cascade-up type. We used two rectangular metagabbro blocks as experimental specimens, whose nominal contacting area was 1.5 m long and 0.1 m wide. To monitor stress changes and seismic activity on the fault, we installed dense arrays of 32 triaxial rosette strain gauges and 64 PZT seismic sensors along the fault. We repeatedly conducted experiments with the same pair of rock specimens, causing the evolution of damage on the fault. We focus on two experiments conducted successively under the same loading condition (normal stress of 6.7 MPa and loading rate of 0.01 mm/s) but different initial fault surface conditions: the first experiment preserved the gouge generated by the previous experiment, while the second experiment started with all gouge removed. Note that the distribution of gouge was heterogeneous, because we did not make the gouge layer uniform. We observed many foreshocks in both experiments, but found that the b-value of foreshocks was smaller in the first experiment with pre-existing gouge (PEG). In the second experiment without PEG, we observed in the strain measurements premonitory slow slip associated with a nucleation process preceding most main events. We also found that foreshocks were triggered by the slow slip at the end of the nucleation process. In the experiment with PEG, on the contrary, no clear premonitory slow slips were found. Instead, foreshock activity accelerated towards the main event, as confirmed by a decreasing b-value. The spatiotemporal distribution of foreshock hypocenters suggests that foreshocks migrated and cascaded up to the main event. We infer that heterogeneous gouge distribution created stress-concentrated, unstable patches, which impeded stable slow slip but promoted foreshocks on the fault. Further, our results suggest that the b-value is a useful parameter for characterizing these observations.
A Self-Stabilizing Hybrid-Fault Tolerant Synchronization Protocol
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2014-01-01
In this report we present a strategy for solving the Byzantine generals problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults of various severities, including any number of arbitrary (Byzantine) faulty nodes. Our solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. Our solution does not rely on assumptions about the initial state of the system, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. We also present a mechanical verification of the proposed protocol: a bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period. We believe that our proposed solution solves the general case of the clock synchronization problem.
Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zhu, G.; Chen, X.
2011-12-01
We first implement a non-staggered finite difference method with split nodes to solve the dynamic rupture problem for non-planar faults. The split-node method has been used widely in dynamic simulation because it represents the fault plane more precisely than other methods, such as the thick fault and stress glut approaches. The finite difference method is also a popular numerical method for solving kinematic and dynamic problems in seismology. However, previous work has focused mostly on the staggered-grid method because of its simplicity and computational efficiency, even though it has disadvantages relative to non-staggered finite difference methods in some respects, for example in describing boundary conditions, especially irregular boundaries or non-planar faults. Zhang and Chen (2006) proposed a high-order non-staggered MacCormack finite difference method based on curved grids to solve irregular boundary problems precisely. Building on this non-staggered grid method, we have succeeded in simulating the spontaneous rupture problem. The fault plane is a kind of boundary condition, which may of course be irregular, so we are confident that we can simulate the rupture process for any kind of bending fault plane. We first validate the method in Cartesian coordinates; for bending faults, curvilinear grids are used.
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
In this paper, the traditional fault tree analysis method is presented and its application to medical equipment maintenance is described in detail. Significant changes are made when the traditional fault tree analysis method is introduced into medical equipment maintenance: the logic symbols, logic analysis and calculations, and their complicated procedures are abandoned, and only the intuitive and practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logical tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.
Fault Tolerance in ZigBee Wireless Sensor Networks
NASA Technical Reports Server (NTRS)
Alena, Richard; Gilstrap, Ray; Baldwin, Jarren; Stone, Thom; Wilson, Pete
2011-01-01
Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 PRO Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. This technology is supported by System-on-a-Chip solutions, resulting in extremely small and low-power nodes. The Wireless Connections in Space Project addresses the aerospace flight domain for both flight-critical and non-critical avionics. WSNs provide the inherent fault tolerance required for aerospace applications utilizing such technology. The team from Ames Research Center has developed techniques for assessing the fault tolerance of ZigBee WSNs challenged by radio frequency (RF) interference or WSN node failure.
NASA Astrophysics Data System (ADS)
de Laat, Cees; Develder, Chris; Jukan, Admela; Mambretti, Joe
This topic is devoted to communication issues in scalable compute and storage systems, such as parallel computers, networks of workstations, and clusters. All aspects of communication in modern systems were solicited, including advances in the design, implementation, and evaluation of interconnection networks, network interfaces, system and storage area networks, on-chip interconnects, communication protocols, routing and communication algorithms, and communication aspects of parallel and distributed algorithms. In total 15 papers were submitted to this topic, of which we selected the 7 strongest. We grouped the papers into two sessions of 3 papers each, and one paper was selected for the best paper session. We noted a number of papers dealing with changing topologies, stability and forwarding convergence in source-routing-based cluster interconnect network architectures, and grouped these into the first session. The authors of the paper titled "Implementing a Change Assimilation Mechanism for Source Routing Interconnects" propose a mechanism that can obtain the new topology, and compute and distribute a new set of fabric paths to the source routed network end points to minimize the impact on the forwarding service. The article entitled "Dependability Analysis of a Fault-tolerant Network Reconfiguration Strategy" reports on a case study analyzing the effects of network size, mean time to node failure, mean time to node repair, mean time to network repair and coverage of the failure when using a 2D mesh network with a fault-tolerant mechanism (similar to the one used in the BlueGene/L system) that is able to remove rows and/or columns in the presence of failures. The last paper in this session, "RecTOR: A New and Efficient Method for Dynamic Network Reconfiguration", presents a new dynamic reconfiguration method that ensures deadlock-freedom during the reconfiguration without causing performance degradation such as increased latency or decreased throughput.
The second session groups 3 papers presenting methods, protocols and architectures that enhance capacities in networks. The paper titled "NIC-assisted Cache-Efficient Receive Stack for Message Passing over Ethernet" presents the addition of multiqueue support in the Open-MX receive stack so that all incoming packets for the same process are treated on the same core. It then introduces the idea of binding the target end process near its dedicated receive queue. In general this multiqueue receive stack performs better than the original single-queue stack, especially on large communication patterns where multiple processes are involved and manual binding is difficult. The authors of "A Multipath Fault-Tolerant Routing Method for High-Speed Interconnection Networks" focus on the problem of fault tolerance for high-speed interconnection networks by designing a fault-tolerant routing method. The goal was to tolerate a certain number of link and node failures, considering their impact and probability of occurrence. Their experiments show that their method allows applications to successfully finalize their execution in the presence of several faults, with an average performance value of 97% with respect to the fault-free scenarios. The paper "Hardware implementation study of the Self-Clocked Fair Queuing Credit Aware (SCFQ-CA) and Deficit Round Robin Credit Aware (DRR-CA) scheduling algorithms" proposes specific implementations of the two schedulers taking into account the characteristics of current high-performance networks. A comparison is presented of the complexity of these two algorithms in terms of silicon area and computation delay. Finally we selected one paper for the special paper session: "A Case Study of Communication Optimizations on 3D Mesh Interconnects". In this paper the authors present topology-aware mapping as a technique to optimize communication on 3-dimensional mesh interconnects and hence improve performance.
Results are presented for OpenAtom on up to 16,384 processors of Blue Gene/L, 8,192 processors of Blue Gene/P and 2,048 processors of Cray XT3.
Message Efficient Checkpointing and Rollback Recovery in Heterogeneous Mobile Networks
NASA Astrophysics Data System (ADS)
Jaggi, Parmeet Kaur; Singh, Awadhesh Kumar
2016-06-01
Heterogeneous networks provide an appealing way of expanding the computing capability of mobile networks by combining infrastructure-less mobile ad-hoc networks with infrastructure-based cellular mobile networks. The nodes in such a network range from low-power nodes to macro base stations and thus vary greatly in capabilities such as computation power and battery power. The nodes are susceptible to different types of transient and permanent failures, and therefore the algorithms designed for such networks need to be fault-tolerant. The article presents a checkpointing algorithm for the rollback recovery of mobile hosts in a heterogeneous mobile network. Checkpointing is a well-established approach to provide fault tolerance in static and cellular mobile distributed systems. However, the use of checkpointing for fault tolerance in a heterogeneous environment remains to be explored. The proposed protocol is based on Netzer and Xu's results on zigzag paths and zigzag cycles. Considering the heterogeneity prevalent in the network, an uncoordinated checkpointing technique is employed. Yet, useless checkpoints are avoided without incurring a high message overhead.
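To illustrate how uncoordinated checkpoints can become useless: Netzer and Xu characterize useless checkpoints exactly via zigzag cycles; the toy sketch below instead flags checkpoints lying on a cycle of a rollback-dependency graph, a coarser but simpler check used here purely for illustration (it is not the paper's algorithm, and the data representation is invented for the example).

```python
# Simplified sketch: build a rollback-dependency graph from message events
# and flag checkpoint intervals that lie on a cycle. Netzer and Xu's exact
# characterization uses zigzag cycles; this coarser cycle check is only an
# illustration of the underlying idea.
from collections import defaultdict

def rollback_dependency_cycles(num_intervals, messages):
    """num_intervals: {process: number of checkpoint intervals}.
    messages: list of (send_proc, send_interval, recv_proc, recv_interval).
    Returns the set of (process, interval) pairs that lie on a cycle."""
    graph = defaultdict(set)
    # Sequential edge: each interval depends on its successor on the same process.
    for p, n in num_intervals.items():
        for i in range(n - 1):
            graph[(p, i)].add((p, i + 1))
    # Message edge: the sender's interval points to the interval *after* the
    # receive, since rolling the receiver back past it would orphan the message.
    for sp, si, rp, ri in messages:
        if ri + 1 < num_intervals[rp]:
            graph[(sp, si)].add((rp, ri + 1))

    def reaches(start, target):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in graph[node]:
                if nxt == target:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    nodes = {(p, i) for p, n in num_intervals.items() for i in range(n)}
    # A node is on a cycle iff it can reach itself.
    return {v for v in nodes if reaches(v, v)}
```

With no messages the graph is acyclic and no checkpoint is flagged; two messages that cross checkpoint lines in opposite directions create a cycle, marking the involved checkpoints as candidates for uselessness.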
Data driven CAN node reliability assessment for manufacturing system
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Yuan, Yong; Lei, Yong
2017-01-01
The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to the inaccessibility of node information and the complexity of node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data-driven node bus-off time assessment method for CAN networks is proposed that directly uses network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. Accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer-controlled error injection system. Experiment results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data-driven node time-to-bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
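The bus-off mechanism the paper targets follows the standard CAN fault-confinement rules: a node's transmit error counter (TEC) rises by 8 on a transmit error, falls by 1 on a successful transmit, and the node goes bus-off once TEC exceeds 255. A minimal Monte Carlo sketch of frames-to-bus-off under these rules is below; note the paper fits a GZIP model to measured error sequences, whereas this sketch assumes a simple i.i.d. error process for illustration.

```python
import random

def simulate_bus_off(error_prob, trials=2000, seed=1):
    """Monte Carlo estimate of mean frames-to-bus-off for one CAN node.
    Standard fault-confinement rules: TEC += 8 on a transmit error,
    TEC -= 1 on a successful transmit, bus-off when TEC > 255.
    The i.i.d. per-frame error probability is a simplifying assumption
    (the paper uses a fitted GZIP error process instead)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        tec, frames = 0, 0
        # Cap iterations: for error_prob < 1/9 the counter drifts down
        # and bus-off may effectively never occur.
        while tec <= 255 and frames < 10**6:
            frames += 1
            if rng.random() < error_prob:
                tec += 8
            elif tec > 0:
                tec -= 1
        total += frames
    return total / trials
```

As expected, higher injected error rates drive the node to bus-off in fewer frames, which is the qualitative behavior the accelerated case studies exploit.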
Mini-Ckpts: Surviving OS Failures in Persistent Memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David; Mueller, Frank; Ferreira, Kurt Brian
Concern is growing in the high-performance computing (HPC) community about the reliability of future extreme-scale systems. Current efforts have focused on application fault-tolerance rather than the operating system (OS), despite the fact that recent studies have suggested that failures in OS memory are more likely. The OS is critical to the correct and efficient operation of the node and the processes it governs -- and in HPC also for any other nodes a parallelized application runs on and communicates with: any single node failure generally forces all processes of this application to terminate due to tight communication in HPC. Therefore, the OS itself must be capable of tolerating failures. In this work, we introduce mini-ckpts, a framework which enables application survival despite the occurrence of a fatal OS failure or crash. Mini-ckpts achieves this tolerance by ensuring that the critical data describing a process is preserved in persistent memory prior to the failure. Following the failure, the OS is rejuvenated via a warm reboot and the application continues execution, effectively making the failure and restart transparent. The mini-ckpts rejuvenation and recovery process is measured to take between three and six seconds and has a failure-free overhead of 3-5% for a number of key HPC workloads. In contrast to current fault-tolerance methods, this work ensures that the operating and runtime system can continue in the presence of faults. This is a much finer-grained and dynamic method of fault-tolerance than the current, coarse-grained, application-centric methods. Handling faults at this level has the potential to greatly reduce overheads and enables mitigation of additional fault scenarios.
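The core idea — mirror the critical state of a computation in durable storage so a crash-and-restart is transparent — can be sketched as a toy analogy. This is emphatically not the mini-ckpts framework (which preserves kernel process state in persistent memory and warm-reboots the OS); here a JSON file stands in for persistent memory and an exception stands in for the OS failure, with all names invented for the example.

```python
import json
import os
import tempfile

# Toy analogy to the mini-ckpts idea: mirror critical state durably so the
# process can be "rejuvenated" after a simulated crash and resume where it
# left off. A temp file stands in for persistent memory.
CKPT = os.path.join(tempfile.gettempdir(), "mini_ckpt_demo.json")

def persist(state):
    # Write-then-rename gives an atomic update, mimicking a snapshot that
    # is always consistent even if the crash lands mid-write.
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CKPT)

def restore():
    with open(CKPT) as f:
        return json.load(f)

def run(crash_at=None):
    # Resume from the checkpoint if one survives a previous "crash".
    state = restore() if os.path.exists(CKPT) else {"i": 0, "total": 0}
    while state["i"] < 10:
        if crash_at is not None and state["i"] == crash_at:
            raise RuntimeError("simulated OS failure")
        state["total"] += state["i"]
        state["i"] += 1
        persist(state)
    return state["total"]
```

A first call with `crash_at=5` dies mid-computation; a second plain call resumes from the persisted state and completes, returning the same sum an uninterrupted run would produce.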
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using simple training data sets consisting only of single faults, without requiring simultaneous-fault data. Experimental results show the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring only a small amount of training data and a low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
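The multi-label trick — train one per-label decision on single-fault data only, then predict a simultaneous fault as the union of the labels that fire — can be shown with a deliberately tiny stand-in classifier. The sketch below uses nearest centroids instead of the paper's ML-SVM, and the two-dimensional features and fault names are invented for the example.

```python
import math

def centroid(samples):
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

class MultiLabelNearestCentroid:
    """Illustrative stand-in for the paper's ML-SVM: one per-label decision
    trained only on normal and single-fault data; simultaneous faults are
    predicted as the union of firing labels. Features and label names here
    are made up for the example."""

    def fit(self, normal, single_fault_sets):
        self.normal = centroid(normal)
        self.centroids = {lab: centroid(s) for lab, s in single_fault_sets.items()}
        return self

    def predict(self, x):
        # A label fires when the sample sits closer to that fault's centroid
        # than to the normal-operation centroid.
        return {lab for lab, c in self.centroids.items()
                if math.dist(x, c) < math.dist(x, self.normal)}
```

A sample that superposes the signatures of two single faults then triggers both labels, even though no simultaneous-fault example was seen during training.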
CFTLB: a novel cross-layer fault tolerant and load balancing protocol for WMN
NASA Astrophysics Data System (ADS)
Krishnaveni, N. N.; Chitra, K.
2017-12-01
Wireless mesh networks (WMNs) form a wireless backbone for multi-hop transmission among the routers and clients in an extensible coverage area. To improve the throughput of WMNs with multiple gateways (GWs), several issues related to GW selection, load balancing and frequent link failures due to the presence of dynamic obstacles and channel interference should be addressed. This paper presents a novel cross-layer fault tolerant and load balancing (CFTLB) protocol to overcome these issues in WMNs. Initially, the neighbour GWs are searched and the channel load is calculated; the GW with the least channel load, estimated on the arrival of a new node, is selected. The proposed algorithm finds alternate GWs and calculates the channel availability under high-load scenarios: if the current load on the GW is high, another GW is found and its channel availability is calculated. The protocol then initiates channel switching and establishes communication with the mesh client effectively. The hashing technique used in the proposed CFTLB verifies the status of the packets. Compared to existing protocols, CFTLB achieves better performance in terms of average router throughput, overall throughput and average channel access time, with lower end-to-end delay, communication overhead and average data loss in the channel.
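The gateway-selection step described above can be sketched as follows. The field names, the load threshold, and the channel representation are all illustrative assumptions, not values from the paper.

```python
def select_gateway(gateways, high_load=0.8):
    """Sketch of the CFTLB gateway-selection step as described in the
    abstract: prefer the GW with the least channel load; under high load,
    fall back to an alternate GW with a free channel and switch to it.
    Dict keys and the high_load threshold are illustrative assumptions."""
    # Step 1: pick the gateway with the least channel load.
    best = min(gateways, key=lambda g: g["load"])
    if best["load"] < high_load:
        return best["id"], best["channel"]
    # Step 2 (high-load scenario): search alternates, least-loaded first,
    # for one with an available channel, and switch to it.
    for g in sorted(gateways, key=lambda g: g["load"]):
        if g is not best and g["free_channels"]:
            return g["id"], g["free_channels"][0]  # channel switching
    return best["id"], best["channel"]  # no viable alternate: stay put
```

A lightly loaded network keeps traffic on the least-loaded gateway; once that gateway is saturated, the fallback path exercises the channel-switching branch.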
A hierarchical stress release model for synthetic seismicity
NASA Astrophysics Data System (ADS)
Bebbington, Mark
1997-06-01
We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. 
It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to use the model for forward prediction.
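The stress input/release/transfer loop described above can be sketched in a few dozen lines for a single segment. Parameter values here are illustrative defaults, not those fitted to the Middle America Trench, and the residual-stress and dissipation rules are simplified relative to the paper's model.

```python
import random

def simulate_fault(n_nodes=50, steps=5000, input_rate=0.1,
                   strength=1.0, transfer=0.9, seed=0):
    """Minimal sketch of a stress release model on one fault segment:
    random stress input at each node, rupture when stress exceeds strength,
    excess stress transferred to neighbours (the remainder dissipated).
    Parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    stress = [0.0] * n_nodes
    events = []  # (time step, number of nodes that slipped)
    for t in range(steps):
        # Tectonic loading: small random stress input at every node.
        for i in range(n_nodes):
            stress[i] += rng.uniform(0.0, 2 * input_rate)
        over = [i for i in range(n_nodes) if stress[i] > strength]
        if not over:
            continue
        # An event: cascade of slips, possibly triggering neighbours.
        ruptured = set()
        front = list(over)
        while front:
            i = front.pop()
            if i in ruptured or stress[i] <= strength:
                continue
            ruptured.add(i)
            excess = stress[i] - 0.1 * strength  # drop to a residual level
            stress[i] -= excess
            # Transfer a fraction of the excess to neighbours; the rest
            # is dissipated.
            for j in (i - 1, i + 1):
                if 0 <= j < n_nodes:
                    stress[j] += transfer * excess / 2
                    front.append(j)
        events.append((t, len(ruptured)))
    return events
```

Running the simulation yields an event catalogue whose size distribution can then be compared against bulk seismicity statistics, which is the spirit of the calibration procedure the paper describes.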
NASA Astrophysics Data System (ADS)
Martel, Stephen J.; Pollard, David D.
1989-07-01
We exploit quasi-static fracture mechanics models for slip along pre-existing faults to account for the fracture structure observed along small exhumed faults and small segmented fault zones in the Mount Abbot quadrangle of California and to estimate stress drop and shear fracture energy from geological field measurements. Along small strike-slip faults, cracks that splay from the faults are common only near fault ends. In contrast, many cracks splay from the boundary faults at the edges of a simple fault zone. Except near segment ends, the cracks preferentially splay into a zone. We infer that shear displacement discontinuities (slip patches) along a small fault propagated to near the fault ends and caused fracturing there. Based on elastic stress analyses, we suggest that slip on one boundary fault triggered slip on the adjacent boundary fault, and that the subsequent interaction of the slip patches preferentially led to the generation of fractures that splayed into the zones away from segment ends and out of the zones near segment ends. We estimate the average stress drop for slip events along the fault zones as ~1 MPa and the shear fracture energy release rate during slip as 5 × 10² to 2 × 10⁴ J/m². This estimate is similar to those obtained from shear fracture of laboratory samples, but orders of magnitude less than those for large fault zones. These results suggest that the shear fracture energy release rate increases as the structural complexity of fault zones increases.
Real-Time System Verification by k-Induction
NASA Technical Reports Server (NTRS)
Pike, Lee S.
2005-01-01
We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.
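The k-induction proof rule the paper applies (via SAL's bounded model checking machinery) can be illustrated by brute force on an explicit finite-state system. One common formulation is sketched below; the function and variable names are invented for the example, and a `False` result means either a real counterexample or that k is too small.

```python
def k_induction(states, init, trans, prop, k):
    """Brute-force k-induction over an explicit finite-state system.
    Base case: prop holds on all states reachable in at most k steps
    from an initial state. Inductive step: any segment of k+1 consecutive
    prop-satisfying states forces prop on every successor.
    trans(s) returns the set of successors of s."""
    # Base case: breadth-first exploration to depth k.
    frontier = set(init)
    for _ in range(k + 1):
        if any(not prop(s) for s in frontier):
            return False  # counterexample within k steps
        frontier = {t for s in frontier for t in trans(s)}
    # Inductive step: enumerate all segments of k+1 consecutive states.
    def paths(length):
        if length == 0:
            return [(s,) for s in states]
        return [p + (t,) for p in paths(length - 1) for t in trans(p[-1])]
    for path in paths(k):
        if all(prop(s) for s in path):
            if any(not prop(t) for t in trans(path[-1])):
                return False  # induction fails at this k; try a larger k
    return True
```

The path enumeration is exponential in k, which mirrors the paper's motivation for optimizing the formal model to keep k small.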
Fault tolerant hypercube computer system architecture
NASA Technical Reports Server (NTRS)
Madan, Herb S. (Inventor); Chow, Edward (Inventor)
1989-01-01
A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communication between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message conducting paths for connecting the first computing nodes to the first watch dog node independent from the first network; the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first watch dog node. There is, additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network.
The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node operably connected to the first multiplexer whereby the second watch dog node can selectively communicate with individual ones of the computing nodes through the second and fourth networks. The branch is completed by a first load balancing node; and a second multiplexer connected between the first load balancing node and the first and second watch dog nodes, allowing the first load balancing node to selectively communicate with the first and second watch dog nodes.
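The hypercube interconnection underlying this architecture has a compact description: node IDs are binary strings, and two nodes are linked iff their IDs differ in exactly one bit, so each of the 2^d nodes has d neighbours and any pair is at most d hops apart. The sketch below shows this general hypercube property and simple dimension-order routing; it is illustrative background, not the patent's routing design.

```python
def hypercube_neighbors(node, dim):
    """Neighbours of a node in a hypercube of 2**dim nodes: flip each of
    the dim ID bits in turn (general hypercube property)."""
    return [node ^ (1 << b) for b in range(dim)]

def hypercube_route(src, dst):
    """Dimension-order routing: fix differing ID bits one at a time.
    Path length equals the Hamming distance between src and dst."""
    path, cur = [src], src
    diff = src ^ dst
    b = 0
    while diff:
        if diff & 1:
            cur ^= (1 << b)  # cross the link along dimension b
            path.append(cur)
        diff >>= 1
        b += 1
    return path
```

For example, in a 3-cube node 0 is adjacent to nodes 1, 2, and 4, and routing from 000 to 101 takes exactly two hops.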
A bottom-driven mechanism for distributed faulting in the Gulf of California rift
NASA Astrophysics Data System (ADS)
Persaud, Patricia; Tan, Eh; Contreras, Juan; Lavier, Luc
2017-11-01
Observations of active faulting in the continent-ocean transition of the Northern Gulf of California show multiple oblique-slip faults distributed in a 200 × 70 km² area developed some time after a westward relocation of the plate boundary at 2 Ma. In contrast, north and south of this broad pull-apart structure, major transform faults accommodate Pacific-North America plate motion. Here we propose that the mechanism for distributed brittle deformation results from the boundary conditions present in the Northern Gulf, where basal shear is distributed between the Cerro Prieto strike-slip fault (southernmost fault of the San Andreas fault system) and the Ballenas Transform Fault. We hypothesize that in oblique-extensional settings, whether deformation is partitioned in a few dip-slip and strike-slip faults or in numerous oblique-slip faults may depend on (1) bottom-driven, distributed extension and shear deformation of the lower crust or upper mantle, and (2) the rift obliquity. To test this idea, we explore the effects of bottom-driven shear on the deformation of a brittle elastic-plastic layer with the help of pseudo-three-dimensional numerical models that include side forces. Strain localization results when the basal shear abruptly increases in a step-function manner, while oblique slip on numerous faults dominates when basal shear is distributed. We further explore how the style of faulting varies with obliquity and demonstrate that the style of delocalized faulting observed in the Northern Gulf of California is reproduced in models with an obliquity of 0.7 and distributed basal shear boundary conditions, consistent with the interpreted obliquity and boundary conditions of the study area.
NASA Astrophysics Data System (ADS)
Busby, Cathy J.; Bassett, Kari N.
2007-09-01
The three-dimensional arrangement of volcanic deposits in strike-slip basins is not only the product of volcanic processes, but also of tectonic processes. We use a strike-slip basin within the Jurassic arc of southern Arizona (Santa Rita Glance Conglomerate) to construct a facies model for a strike-slip basin dominated by volcanism. This model is applicable to releasing-bend strike-slip basins, bounded on one side by a curved and dipping strike-slip fault, and on the other by curved normal faults. Numerous, very deep unconformities are formed during localized uplift in the basin as it passes through smaller restraining bends along the strike-slip fault. In our facies model, the basin fill thins and volcanism decreases markedly away from the master strike-slip fault (“deep” end), where subsidence is greatest, toward the basin-bounding normal faults (“shallow” end). Talus cone-alluvial fan deposits are largely restricted to the master fault-proximal (deep) end of the basin. Volcanic centers are sited along the master fault and along splays of it within the master fault-proximal (deep) end of the basin. To a lesser degree, volcanic centers also form along the curved faults that form structural highs between sub-basins and those that bound the distal ends of the basin. Abundant volcanism along the master fault and its splays kept the deep (master fault-proximal) end of the basin overfilled, so that it could not provide accommodation for reworked tuffs and extrabasinally-sourced ignimbrites that dominate the shallow (underfilled) end of the basin. This pattern of basin fill contrasts markedly with that of nonvolcanic strike-slip basins on transform margins, where clastic sedimentation commonly cannot keep pace with subsidence in the master fault-proximal end. 
Volcanic and subvolcanic rocks in the strike-slip basin largely record polygenetic (explosive and effusive) small-volume eruptions from many vents in the complexly faulted basin, referred to here as multi-vent complexes. Multi-vent complexes like these reflect proximity to a continuously active fault zone, where numerous strands of the fault frequently plumb small batches of magma to the surface. Releasing-bend extension promotes small, multivent styles of volcanism in preference to caldera collapse, which is more likely to form at releasing step-overs along a strike-slip fault.
Topographically driven groundwater flow and the San Andreas heat flow paradox revisited
Saffer, D.M.; Bekins, B.A.; Hickman, S.
2003-01-01
Evidence for a weak San Andreas Fault includes (1) borehole heat flow measurements that show no evidence for a frictionally generated heat flow anomaly and (2) the inferred orientation of σ1 nearly perpendicular to the fault trace. Interpretations of the stress orientation data remain controversial, at least in close proximity to the fault, leading some researchers to hypothesize that the San Andreas Fault is, in fact, strong and that its thermal signature may be removed or redistributed by topographically driven groundwater flow in areas of rugged topography, such as typify the San Andreas Fault system. To evaluate this scenario, we use a steady state, two-dimensional model of coupled heat and fluid flow within cross sections oriented perpendicular to the fault and to the primary regional topography. Our results show that existing heat flow data near Parkfield, California, do not readily discriminate between the expected thermal signature of a strong fault and that of a weak fault. In contrast, for a wide range of groundwater flow scenarios in the Mojave Desert, models that include frictional heat generation along a strong fault are inconsistent with existing heat flow data, suggesting that the San Andreas Fault at this location is indeed weak. In both areas, comparison of modeling results and heat flow data suggests that advective redistribution of heat is minimal. The robust results for the Mojave region demonstrate that topographically driven groundwater flow, at least in two dimensions, is inadequate to obscure the frictionally generated heat flow anomaly from a strong fault. However, our results do not preclude the possibility of transient advective heat transport associated with earthquakes.
Shaper design in CMOS for high dynamic range
De Geronimo, Gianluigi; Li, Shaorui
2015-06-30
An analog filter is presented that comprises a chain of filter stages, a feedback resistor for providing a negative feedback, and a feedback capacitor for providing a positive feedback. Each filter stage has an input node and an output node. The output node of a filter stage is connected to the input node of an immediately succeeding filter stage through a resistor. The feedback resistor has a first end connected to the output node of the last filter stage along the chain of filter stages, and a second end connected to the input node of a first preceding filter stage. The feedback capacitor has a first end connected to the output node of one of the chain of filter stages, and a second end connected to the input node of a second preceding filter stage.
Stress evolution during caldera collapse
NASA Astrophysics Data System (ADS)
Holohan, E. P.; Schöpfer, M. P. J.; Walsh, J. J.
2015-07-01
The mechanics of caldera collapse are the subject of long-running debate. Particular uncertainties concern how stresses around a magma reservoir relate to fracturing as the reservoir roof collapses, and how roof collapse in turn impacts upon the reservoir. We used two-dimensional Distinct Element Method models to characterise the evolution of stress around a depleting sub-surface magma body during gravity-driven collapse of its roof. These models illustrate how principal stress orientations rotate during progressive deformation so that roof fracturing transitions from initial reverse faulting to later normal faulting. They also reveal four end-member stress paths to fracture, each corresponding to a particular location within the roof. Analysis of these paths indicates that fractures associated with ultimate roof failure initiate in compression (i.e. as shear fractures). We also report on how mechanical and geometric conditions in the roof affect pre-failure unloading and post-failure reloading of the reservoir. In particular, the models show how residual friction within a failed roof could, without friction reduction mechanisms or fluid-derived counter-effects, inhibit a return to a lithostatically equilibrated pressure in the magma reservoir. Many of these findings should be transferable to other gravity-driven collapse processes, such as sinkhole formation, mine collapse and subsidence above hydrocarbon reservoirs.
A Wireless Sensor System for Real-Time Monitoring and Fault Detection of Motor Arrays
Medina-García, Jonathan; Sánchez-Rodríguez, Trinidad; Galán, Juan Antonio Gómez; Delgado, Aránzazu; Gómez-Bravo, Fernando; Jiménez, Raúl
2017-01-01
This paper presents a wireless fault detection system for industrial motors that combines vibration, motor current and temperature analysis, thus improving the detection of mechanical faults. The design also considers the time of detection and further possible actions, which are also important for the early detection of possible malfunctions, and thus for avoiding irreversible damage to the motor. The remote motor condition monitoring is implemented through a wireless sensor network (WSN) based on the IEEE 802.15.4 standard. The deployed network uses the beacon-enabled mode to synchronize several sensor nodes with the coordinator node, and the guaranteed time slot mechanism provides data monitoring with a predetermined latency. A graphic user interface offers remote access to motor conditions and real-time monitoring of several parameters. The developed wireless sensor node exhibits very low power consumption since it has been optimized both in terms of hardware and software. The result is a low cost, highly reliable and compact design, achieving a high degree of autonomy of more than two years with just one 3.3 V/2600 mAh battery. Laboratory and field tests confirm the feasibility of the wireless system. PMID:28245623
NASA Astrophysics Data System (ADS)
Wang, Y.; Wei, S.; Tapponnier, P.; Wang, X.; Lindsey, E.; Sieh, K.
2016-12-01
A gravity-driven "Mega-Landslide" model has been invoked to explain the shortening seen offshore Sabah and Brunei in oil-company seismic data. Although this model is considered to account simultaneously for recent folding at the edge of the submarine NW Sabah trough and normal faulting on the Sabah shelf, such a gravity-driven model is not consistent with geodetic data or critical examination of extant structural restorations. The rupture that produced the 2015 Mw 6.0 Mt. Kinabalu earthquake is also inconsistent with the gravity-driven model. Our teleseismic analysis shows that the centroid depth of that earthquake's mainshock was 13 to 14 km, and its favored fault-plane solution is a 60° NW-dipping normal fault. Our finite-rupture model exhibits major fault slip between 5 and 15 km depth, in keeping with our InSAR analysis, which shows no appreciable surface deformation. Both the hypocentral depth and the depth of principal slip are far too deep to be explained by gravity-driven failure, as such a model would predict a listric normal fault connecting at a much shallower depth with a very gentle detachment. Our regional mapping of tectonic landforms also suggests the recent rupture is part of a 200-km-long system of narrowly distributed active extension in northern Sabah. Taken together, the nature of the 2015 rupture, the belt of active normal faults, and structural considerations indicate that active tectonic shortening plays the leading role in controlling the overall deformation of northern Sabah and that deep-seated, onland normal faulting likely results from an abrupt change in the dip angle of the collision interface beneath the Sabah accretionary prism.
NASA Astrophysics Data System (ADS)
Coussement, C.; Gente, P.; Rolet, J.; Tiercelin, J.-J.; Wafula, M.; Buku, S.
1994-10-01
The two branches of the East African Rift system include numerous hydrothermal fields, which are closely related to the present fault motion and to volcanic and seismic activity. In this study structural data from the Pemba and Cape Banza hydrothermal fields (western branch, North Tanganyika, Zaire) are discussed in terms of neotectonic phenomena. Different types of records, such as fieldwork (onshore and underwater) and LANDSAT and SPOT imagery, are used to explain structural controls on active and fossil hydrothermal systems and their significance. The Pemba site is located at the intersection of 000-020°-trending normal faults belonging to the Uvira Border Fault System and a 120-130°-trending transtensional fault zone and is an area of high seismicity, with events of relatively large magnitude (Ms < 6.5). The Cape Banza site occurs at the northern end of the Ubwari Peninsula horst. It is bounded by two fault systems trending 015° and is characterized seismically by events of small magnitude (Ms < 4). The hydrothermal area itself is tectonically controlled by structures striking 170-180° and 080°. The analysis of both hydrothermal areas demonstrates the rejuvenation of older Proterozoic structures during Recent rift faulting and the location of the hydrothermal activity at the junctions of submeridian and transverse faults. The fault motion is compatible with a regional direction of extension of 090-110°. The Cape Banza and Pemba hydrothermal fields may testify to magma chambers existing below the junctions of the faults. They appear to form at structural nodes and may represent a future volcanic province. Together with the four surface volcanic provinces existing along the western branch, they possibly indicate an incipient rift segmentation related to 'valley-valley' or 'transverse fault-valley' junctions, contrasting with the spacing of the volcanoes measured in the eastern branch.
These spacings appear to express the different elastic thicknesses between the eastern and western branches of the East African Rift system, perhaps related to a difference in stage of evolution of the two branches.
Staged-Fault Testing of Distance Protection Relay Settings
NASA Astrophysics Data System (ADS)
Havelka, J.; Malarić, R.; Frlan, K.
2012-01-01
In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.
Mapping fault-controlled volatile migration in equatorial layered deposits on Mars
NASA Astrophysics Data System (ADS)
Okubo, C. H.
2006-12-01
Research in terrestrial settings shows that clastic sedimentary deposits are productive host rocks for underground volatile reservoirs because of their high porosity and permeability. Within such reservoirs, faults play an important role in controlling pathways for volatile migration, because faults act as either barriers or conduits. Therefore faults are important volatile concentrators, which means that evidence of geochemical, hydrologic and biologic processes is commonly concentrated at these locations. Accordingly, faulted sedimentary deposits on Mars are plausible areas to search for evidence of past volatile activity and associated processes. Indeed, evidence for volatile migration through layered sedimentary deposits on Mars has been documented in detail by the Opportunity rover in Meridiani Planum. Thus evidence for past volatile-driven processes that could have occurred within the protective depths of these deposits may now be exposed at the surface and is most likely to be found around faults. Owing to the extensive distribution of layered deposits on Mars, a major challenge in looking for and investigating evidence of past volatile processes in these deposits is identifying and prioritizing study areas. Toward this end, this presentation details initial results of a multiyear project to develop quantitative maps of latent pathways for fault-controlled volatile migration through the layered sedimentary deposits on Mars. Available MOC and THEMIS imagery is used to map fault traces within equatorial layered deposits, with an emphasis on proposed regions for MSL landing sites. These fault maps define regions of interest for stereo imaging by HiRISE and identify areas to search for existing MOC stereo coverage. Stereo coverage of identified areas of interest allows for the construction of digital elevation models and ultimately extraction of fault plane and displacement vector orientations.
These fault and displacement data will be fed through numerical modeling techniques that are developed for exploring terrestrial geologic reservoirs. This will yield maps of latent pathways for volatile migration through the faulted layered deposits and provide insight into the geologic history of volatiles on Mars.
Chang, Liang-Cheng; Lee, Da-Sheng
2012-01-01
Installation of a Wireless and Powerless Sensing Node (WPSN) inside a spindle enables the direct transmission of monitoring signals through a metal case of a certain thickness instead of the traditional method of using connecting cables. Thus, the node can be conveniently installed inside motors to measure various operational parameters. This study extends this earlier finding by applying this advantage to the monitoring of spindle systems. After over 2 years of system observation and optimization, the system has been verified to be superior to traditional methods. The faults diagnosed in this study include unmatched assembly dimensions of the spindle system, system unbalance, and bearing damage. The results of the experiment demonstrate that the WPSN provides a desirable signal-to-noise ratio (SNR) in all three of the simulated faults, with the difference in SNR reaching a maximum of 8.6 dB. Following multiple repetitions of the three experiment types, 80% of the faults were diagnosed when the spindle revolved at 4,000 rpm, significantly higher than the 30% fault recognition rate of traditional methods. The experimental results of monitoring the spindle production line indicated that monitoring using the WPSN encounters less interference from noise than traditional methods. Therefore, this study has successfully developed a prototype concept into a well-developed monitoring system, which can be deployed on a spindle production line or used for real-time monitoring of machine tools. PMID:22368456
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
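The cycle-breaking step at the heart of such a translator can be sketched compactly. The fragment below is a hedged illustration, not the program described above: the node and gate names are invented, and skipping any cause already on the current expansion path is a crude stand-in for the minimal-cut-set computation the translator actually performs.

```python
# Illustrative sketch: convert a failure digraph {effect: [causes]} into
# a nested fault tree of OR gates. A cause already on the current
# expansion path is skipped, which breaks cycles (a simplification of
# the minimal-cut-set approach described in the abstract).

def digraph_to_fault_tree(digraph, top, path=frozenset()):
    causes = [c for c in digraph.get(top, []) if c not in path]
    if not causes:
        return top  # basic event: no remaining (acyclic) causes
    return {"event": top, "gate": "OR",
            "inputs": [digraph_to_fault_tree(digraph, c, path | {top})
                       for c in causes]}

# Toy digraph with a feedback loop between pump and controller.
g = {"system_down": ["pump_fail", "valve_fail"],
     "pump_fail": ["controller_fail"],
     "controller_fail": ["pump_fail", "power_loss"]}
tree = digraph_to_fault_tree(g, "system_down")
# The pump_fail branch expands to controller_fail -> power_loss only,
# because the pump_fail <-> controller_fail cycle has been broken.
```

A production translator would instead compute minimal cut sets for the nodes in each cycle, as the abstract notes, so that no failure combination is lost when the loop is opened.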
Three-dimensional models of deformation near strike-slip faults
ten Brink, Uri S.; Katzman, Rafael; Lin, J.
1996-01-01
We use three-dimensional elastic models to help guide the kinematic interpretation of crustal deformation associated with strike-slip faults. Deformation of the brittle upper crust in the vicinity of strike-slip fault systems is modeled with the assumption that upper crustal deformation is driven by the relative plate motion in the upper mantle. The driving motion is represented by displacement that is specified on the bottom of a 15-km-thick elastic upper crust everywhere except in a zone of finite width in the vicinity of the faults, which we term the "shear zone." Stress-free basal boundary conditions are specified within the shear zone. The basal driving displacement is either pure strike slip or strike slip with a small oblique component, and the geometry of the fault system includes a single fault, several parallel faults, and overlapping en echelon faults. We examine the variations in deformation due to changes in the width of the shear zone and due to changes in the shear strength of the faults. In models with weak faults the width of the shear zone has a considerable effect on the surficial extent and amplitude of the vertical and horizontal deformation and on the amount of rotation around horizontal and vertical axes. Strong fault models have more localized deformation at the tip of the faults, and the deformation is partly distributed outside the fault zone. The dimensions of large basins along strike-slip faults, such as the Rukwa and Dead Sea basins, and the absence of uplift around pull-apart basins fit models with weak faults better than models with strong faults. Our models also suggest that the length-to-width ratio of pull-apart basins depends on the width of the shear zone and the shear strength of the faults and is not constant as previously suggested. 
We show that pure strike-slip motion can produce tectonic features, such as elongate half grabens along a single fault, rotated blocks at the ends of parallel faults, or extension perpendicular to overlapping en echelon faults, which can be misinterpreted to indicate a regional component of extension. Zones of subsidence or uplift can become wider than expected for transform plate boundaries when a minor component of oblique motion is added to a system of parallel strike-slip faults.
Cross-layer design for intrusion detection and data security in wireless ad hoc sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2007-09-01
A wireless ad hoc sensor network is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support and sensor nodes communicate with each other only when they are in transmission range. The nodes are severely resource-constrained, with limited processing, memory and power capacities and must operate cooperatively to fulfill a common mission in typically unattended modes. In a wireless sensor network (WSN), each sensor node locally observes some underlying physical phenomenon and sends a quantized version of the observation to sink (destination) nodes via wireless links. Since the wireless medium can easily be eavesdropped on, links can be compromised by intrusion attacks from nodes that may mount denial-of-service attacks or insert spurious information into routing packets, leading to routing loops, long timeouts, impersonation, and node exhaustion. A cross-layer design based on protocol-layer interactions is proposed for detection and identification of various intrusion attacks on WSN operation. A feature set is formed from selected cross-layer parameters of the WSN protocol to detect and identify security threats due to intrusion attacks. A separate protocol is not constructed from the cross-layer design; instead, security attributes and quantified trust levels at and among nodes established during data exchanges complement customary WSN metrics of energy usage, reliability, route availability, and end-to-end quality-of-service (QoS) provisioning. Statistical pattern recognition algorithms are applied that use feature-set patterns observed during network operations, viewed as security audit logs. These algorithms provide the "best" network global performance in the presence of various intrusion attacks.
A set of mobile (software) agents distributed at the nodes implement the algorithms, by moving among the layers involved in the network response at each active node and trust neighborhood, collecting parametric information and executing assigned decision tasks. The communications overhead due to security mechanisms and the latency in network response are thus minimized by reducing the need to move large amounts of audit data through resource-limited nodes and by locating detection/identification programs closer to audit data. If network partitioning occurs due to uncoordinated node exhaustion, data compromise or other effects of the attacks, the mobile agents can continue to operate, thereby increasing fault tolerance in the network response to intrusions. Since the mobile agents behave like an ant colony in securing the WSN, published ant colony optimization (ACO) routines and other evolutionary algorithms are adapted to protect network security, using data at and through nodes to create audit records to detect and respond to denial-of-service attacks. Performance evaluations of algorithms are performed by simulation of a few intrusion attacks, such as black hole, flooding, Sybil and others, to validate the ability of the cross-layer algorithms to enable WSNs to survive the attacks. Results are compared for the different algorithms.
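As a deliberately minimal illustration of the pattern-recognition step, the sketch below labels a cross-layer feature vector by its nearest per-attack centroid. The feature names, centroid values, and nearest-centroid rule are assumptions for illustration only, not the algorithms evaluated in the paper.

```python
# Hypothetical feature vectors: (route-request rate, forwarding ratio,
# RSSI variance). Each attack class is summarized by a centroid; a new
# sample is labelled by its nearest centroid (squared Euclidean distance).

def classify(sample, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

centroids = {
    "normal":    (5.0, 0.95, 1.0),
    "flooding":  (80.0, 0.90, 1.2),   # route-request rate spikes
    "blackhole": (6.0, 0.05, 1.1),    # traffic absorbed, little forwarded
}
label = classify((75.0, 0.88, 1.3), centroids)  # -> "flooding"
```

In the architecture described above, such a decision routine would run inside a mobile agent at each active node, consuming locally collected audit features rather than centrally gathered data.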
NASA Astrophysics Data System (ADS)
Tranos, Markos D.
2018-02-01
Synthetic heterogeneous fault-slip data as driven by Andersonian compressional stress tensors were used to examine the efficiency of best-fit stress inversion methods in separating them. Heterogeneous fault-slip data are separated only if (a) they have been driven by stress tensors defining 'hybrid' compression (R < 0.375), and their σ1 axes differ in trend more than 30° (R = 0) or 50° (R = 0.25). Separation is not feasible if they have been driven by (b) 'real' (R ≥ 0.375) and 'hybrid' compressional tensors having their σ1 axes in similar trend, or (c) 'real' compressional tensors. In case (a), the Stress Tensor Discriminator Faults (STDF) exist in more than 50% of the activated fault slip data while in cases (b) and (c), they exist in percentages of much less than 50% or not at all. They constitute a necessary discriminatory tool for the establishment and comparison of two compressional stress tensors determined by a best-fit stress inversion method. The best-fit stress inversion methods are not able to determine more than one 'real' compressional stress tensor, as far as the thrust stacking in an orogeny is concerned. They can only possibly discern stress differences in the late-orogenic faulting processes, but not between the main- and late-orogenic stages.
A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
2017-09-29
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. In contrast to classic UAV sensor fault detection algorithms based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantage of this algorithm is that it combines real-time, model-free residual analysis of Kalman Filter (KF) estimates with the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
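A minimal sketch of the residual-analysis half of such a scheme is given below, with the ANFIS decision stage replaced by a fixed normalized-innovation gate. The filter model, noise values, and threshold here are illustrative assumptions, not the paper's.

```python
# Sketch: 1-D Kalman filter on a (nearly) constant sensor reading.
# A fault is flagged when the normalized innovation |nu|/sqrt(s)
# exceeds a gate (the paper feeds such residuals to an ANFIS instead
# of using a fixed threshold).

def detect_faults(measurements, q=1e-3, r=0.1, gate=4.0):
    x, p = measurements[0], 1.0          # initial state and variance
    flags = []
    for z in measurements[1:]:
        p += q                           # predict (random-walk model)
        s = p + r                        # innovation variance
        nu = z - x                       # innovation (residual)
        flags.append(abs(nu) / s ** 0.5 > gate)
        k = p / s                        # Kalman gain
        x += k * nu                      # update state estimate
        p *= 1 - k                       # update estimate variance
    return flags

clean = [1.0, 1.01, 0.99, 1.02, 1.0]
faulty = clean + [5.0]                   # injected bias fault
# detect_faults(faulty)[-1] -> True; all earlier flags are False
```

The appeal of the residual formulation, as the abstract notes, is that the detector needs no predefined fault model: anything that drives the innovation outside its statistical envelope is flagged.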
Going End to End to Deliver High-Speed Data
NASA Technical Reports Server (NTRS)
2005-01-01
By the end of the 1990s, the optical fiber "backbone" of the telecommunication and data-communication networks had evolved from megabits-per-second transmission rates to gigabits-per-second transmission rates. Despite this boom in bandwidth, however, users at the end nodes were still not being reached on a consistent basis. (An end node is any device that does not behave like a router or a managed hub or switch. Examples of end node objects are computers, printers, serial interface processor phones, and unmanaged hubs and switches.) The primary reason that prevents bandwidth from reaching the end nodes is the complex local network topology that exists between the optical backbone and the end nodes. This complex network topology consists of several layers of routing and switch equipment which introduce potential congestion points and network latency. By breaking down the complex network topology, a true optical connection can be achieved. Access Optical Networks, Inc., is making this connection a reality with guidance from NASA's nondestructive evaluation experts.
Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2008-01-01
This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.
Non-smooth saddle-node bifurcations III: Strange attractors in continuous time
NASA Astrophysics Data System (ADS)
Fuhrmann, G.
2016-08-01
Non-smooth saddle-node bifurcations give rise to minimal sets of interesting geometry built of so-called strange non-chaotic attractors. We show that certain families of quasiperiodically driven logistic differential equations undergo a non-smooth bifurcation. By a previous result on the occurrence of non-smooth bifurcations in forced discrete time dynamical systems, this yields that within the class of families of quasiperiodically driven differential equations, non-smooth saddle-node bifurcations occur in a set with non-empty C2-interior.
NASA Astrophysics Data System (ADS)
Pan, Jun; Chen, Jinglong; Zi, Yanyang; Yuan, Jing; Chen, Binqiang; He, Zhengjia
2016-12-01
It is important to perform condition monitoring and fault diagnosis on rolling mills in steel-making plants to ensure economic benefit. However, timely fault identification of key parts in a complicated industrial system under operating conditions is still a challenging task, since acquired condition signals are usually multi-modulated and inevitably mixed with strong noise. Therefore, a new data-driven mono-component identification method is proposed in this paper for diagnostic purposes. First, a modified nonlocal means algorithm (NLmeans) is proposed to reduce noise in vibration signals without destroying their original Fourier spectrum structure. Two modifications are investigated and performed to improve the denoising effect. Then, the modified empirical wavelet transform (MEWT) is applied to the de-noised signal to adaptively extract empirical mono-component modes. Finally, the modes are analyzed for mechanical fault identification based on the Hilbert transform. The results show that the proposed data-driven method offers superior performance during system operation compared with the MEWT method.
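To make the denoising step concrete, here is a plain 1-D nonlocal means (NLmeans) filter, i.e. the unmodified baseline that such a method starts from; the parameters and the demo signal are illustrative assumptions, not the authors' modified algorithm.

```python
import math, random

# Plain 1-D nonlocal means: each sample becomes a weighted mean of
# samples whose surrounding patches look similar. (The paper modifies
# this baseline so the signal's Fourier spectrum structure survives.)

def nlmeans_1d(x, patch=2, search=10, h=1.0):
    n, out = len(x), []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared distance between the patches centred at i and j
            d = sum((x[(i + k) % n] - x[(j + k) % n]) ** 2
                    for k in range(-patch, patch + 1))
            w = math.exp(-d / (h * h))   # patch-similarity weight
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

# Demo: smooth a noisy sine.
random.seed(0)
signal = [math.sin(0.2 * t) for t in range(60)]
noisy = [v + random.gauss(0, 0.3) for v in signal]
smooth = nlmeans_1d(noisy)
```

Because the weights depend on whole-patch similarity rather than sample proximity, repetitive vibration waveforms are averaged against their own repeats, which is what makes the approach attractive for multi-modulated machinery signals.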
Uncovering hidden nodes in complex networks in the presence of noise
Su, Ri-Qi; Lai, Ying-Cheng; Wang, Xiao; Do, Younghae
2014-01-01
Ascertaining the existence of hidden objects in a complex system, objects that cannot be observed from the external world, not only is curiosity-driven but also has significant practical applications. Generally, uncovering a hidden node in a complex network requires successful identification of its neighboring nodes, but a challenge is to differentiate its effects from those of noise. We develop a completely data-driven, compressive-sensing based method to address this issue by utilizing complex weighted networks with continuous-time oscillatory or discrete-time evolutionary-game dynamics. For any node, compressive sensing enables accurate reconstruction of the dynamical equations and coupling functions, provided that time series from this node and all its neighbors are available. For a neighboring node of the hidden node, this condition cannot be met, resulting in abnormally large prediction errors that, counterintuitively, can be used to infer the existence of the hidden node. Based on the principle of differential signal, we demonstrate that, when strong noise is present, insofar as at least two neighboring nodes of the hidden node are subject to weak background noise only, unequivocal identification of the hidden node can be achieved. PMID:24487720
Cost-effective and monitoring-active technique for TDM-passive optical networks
NASA Astrophysics Data System (ADS)
Chi, Chang-Chia; Lin, Hong-Mao; Tarn, Chen-Wen; Lin, Huang-Liang
2014-08-01
A reliable, detection-active and cost-effective method is proposed that employs hello and heartbeat signals to distinguish branched nodes and monitor fiber faults in any branch of the distribution fibers of a time division multiplexing passive optical network (TDM-PON). With this method, both the material cost of building an optical network monitoring system for a TDM-PON with 168 ONUs and the time needed to identify multiple branch faults are significantly reduced, for a TDM-PON system of any scale. Fault location in a 1 × 32 TDM-PON system using this method to identify the faulty branch is demonstrated.
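The alarm logic behind such hello-based monitoring can be sketched simply. The class below is a hedged illustration (the names and the miss-count rule are assumptions, not the paper's protocol): a branch raises an alarm once k consecutive expected hello packets go missing.

```python
# Minimal hello-monitor sketch: call tick() once per hello interval;
# an alarm fires once k consecutive hellos from a branch are missed.

class BranchMonitor:
    def __init__(self, k=3):
        self.k = k
        self.missed = {}                 # branch id -> consecutive misses

    def tick(self, branch, hello_received):
        if hello_received:
            self.missed[branch] = 0      # any hello resets the counter
            return False
        self.missed[branch] = self.missed.get(branch, 0) + 1
        return self.missed[branch] >= self.k

mon = BranchMonitor(k=3)
events = [True, False, False, False]     # branch 7 goes dark
alarms = [mon.tick(7, ok) for ok in events]
# alarms -> [False, False, False, True]
```

Requiring k consecutive misses trades a few hello intervals of detection delay for robustness against a single lost packet, the same balance weighed in the end-to-end detection scheme described earlier in this collection.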
Coordinated Fault Tolerance for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack; Bosilca, George; et al.
2013-04-08
Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain, and (2) using fault information exchange and coordination to achieve holistic, system-wide fault tolerance. The latter includes understanding how to design and implement interfaces for integrating fault-tolerance features across multiple layers of the software stack, from the application, math libraries, and programming-language runtime to other common system software such as job schedulers, resource managers, and monitoring tools.
NASA Astrophysics Data System (ADS)
Kinscher, J.; Krüger, F.; Woith, H.; Lühr, B. G.; Hintersberger, E.; Irmak, T. S.; Baris, S.
2013-11-01
The Armutlu peninsula, located in the eastern Marmara Sea, coincides with the western end of the rupture of the 17 August 1999 İzmit Mw 7.6 earthquake, which is the penultimate event of an apparently westward-migrating series of strong and disastrous earthquakes along the NAFZ during the past century. We present new seismotectonic data for this key region in order to evaluate previous seismotectonic models and their implications for seismic hazard assessment in the eastern Marmara Sea. Long-term kinematics were investigated by performing paleo-strain reconstruction from geological field investigations, using morphotectonic and kinematic analysis of exposed brittle faults. Short-term kinematics were investigated by inverting for the moment tensors of 13 small to moderate recent earthquakes using surface wave amplitude spectra. Our results confirm previous models interpreting the eastern Marmara Sea Region as an active transtensional pull-apart environment associated with significant NNE-SSW extension and vertical displacement. At the northern peninsula, the long-term deformation pattern has not changed significantly since Pliocene times, contradicting regional tectonic models which postulate a newly formed single dextral strike-slip fault in the Marmara Sea Region. This area is interpreted as a horsetail splay fault structure associated with a major normal fault segment that we call the Waterfall Fault. Apart from the Waterfall Fault, the stress-strain relation appears complex, associated with a complicated internal fault geometry, strain partitioning, and reactivation of pre-existing plane structures. At the southern peninsula, recent deformation indicates active pull-apart tectonics constituted by NE-SW trending dextral strike-slip faults. Earthquakes generated by stress release along large rupture zones seem to be less probable at the northern, but more probable at the southern peninsula.
Additionally, regional seismicity appears predominantly driven by plate boundary stresses as transtensional faulting is consistent with the southwest directed far field deformation of the Anatolian plate.
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple-robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
Bluetooth-based wireless sensor networks
NASA Astrophysics Data System (ADS)
You, Ke; Liu, Rui Qiang
2007-11-01
In this work a Bluetooth-based wireless sensor network is proposed. In this Bluetooth-based wireless sensor network, an information-driven star topology and an energy-saving mode are used, through which a Bluetooth master node can control more than seven slave nodes; the energy consumption of each sensor node is reduced and the secure management of each sensor node is improved.
NASA Astrophysics Data System (ADS)
Campbell, Jocelyn K.; Nicol, Andrew; Howard, Matthew E.
2003-09-01
Two sites are described from range front faults along the foothills of the Southern Alps of New Zealand, where apparently a period of 200-300 years of accelerated river incision preceded late Holocene coseismic ruptures, each probably in excess of Mw 7.5. They relate to separate fault segments and seismic events on a transpressive system associated with fault-driven folding, but both show similar evidence of off-plane aseismic deformation during the downcutting phase. The incision history is documented by the ages, relative elevations and profiles of degradation terraces. The surface dating is largely based on the weathering rind technique of McSaveney (McSaveney, M.J., 1992. A Manual for Weathering-rind Dating of Grey Sandstones of the Torlesse Supergroup, New Zealand. 92/4, Institute of Geological and Nuclear Sciences), supported by some consistent radiocarbon ages. On the Porters Pass Fault, drainage from Red Lakes has incised up to 12 m into late Pleistocene recessional outwash, but the oldest degradation terrace surface T I is dated at only 690±50 years BP. The upper terraces T I and T II converge uniformly downstream right across the fault trace, but by T III the terrace has a reversed gradient upstream. T II and T III break into multiple small terraces on the hanging wall only, close to the fault trace. Continued backtilting during incision caused T IV to diverge downstream relative to the older surfaces. Coseismic faulting displaced T V and all the older terraces by a metre-high reverse scarp and an uncertain right-lateral component. This event cannot be younger than a nearby ca. 500-year-old rock avalanche covering the trace. The second site, in the middle reaches of the Waipara River valley, involves the interaction of four faults associated with the Doctors Anticline. The main river and tributaries have incised steeply, by as much as 55 m, into a broad, 2000-year-old mid-Holocene degradation surface.
Beginning approximately 600 years ago, accelerating incision eventually attained rates in excess of 100 mm/year in those reaches closely associated with the Doctors Anticline and related thrust and transfer faults. All four faults ruptured, either synchronously or sequentially, between 250 and 400 years ago, when the river was close to 8 m above its present bed. Better cross-method checks on dating would eliminate some uncertainties, but the apparent similarities suggest a pattern of precursor events initiated by a period of base level drop extending for several kilometres across the structure, presumably in response to general uplift. Over time, deformation is concentrated close to the fault zone, causing tilting of degradation terraces, and, demonstrably in the Waipara case at least, coseismic rupture is preceded by marked acceleration of the downcutting rate. The overall base level drop is an order of magnitude greater than the throw on the eventual fault scarp. The Ostler Fault (Van Dissen et al., 1993) demonstrates that current deformation is taking place on similar thrust-fault-driven folding in the Southern Alps. Regular re-levelling since 1966 has shown uplift rates of 1.0-1.5 mm/year at the crest of a 1-2 km half-wavelength anticline, but this case also illustrates the general problem of interpreting the significance of rates derived from geophysical monitoring relative to the long term seismic cycle. If the geomorphic signals described can be shown to hold for other examples, then criteria for targeting faults approaching the end of the seismic cycle in some tectonic settings may be possible.
Focused exhumation along megathrust splay faults in Prince William Sound, Alaska
NASA Astrophysics Data System (ADS)
Haeussler, P. J.; Armstrong, P. A.; Liberty, L. M.; Ferguson, K.; Finn, S.; Arkle, J. C.; Pratt, T. L.
2011-12-01
Megathrust splay faults have been identified as important for generating tsunamis in some subduction zone earthquakes (1946 Nankai, 1964 Alaska, 2004 Sumatra). The larger role of megathrust splay faults in accretionary prisms is not well known. In Alaska, we have new evidence that megathrust splay faults are conduits for focused exhumation. In the southern Alaska accretionary complex, in the Prince William Sound region above the 1964 M9.2 earthquake rupture, apatite (U-Th)/He (AHe) ages, with closure temperatures of about 65°C, are typically in the range of 10-20 Ma. These relatively old ages indicate little to no accumulation of permanent strain during the megathrust earthquake cycle. However, the youngest AHe ages in all of Prince William Sound are from Montague Island, with two ages of 1.4 Ma on the southwest part of the island and two ages of 4 Ma at the northeast end of the island. Montague Island lies in the hanging wall of the Patton Bay megathrust splay fault, which ruptured during the 1964 earthquake, resulting in 9 m of vertical uplift. Two other megathrust splay faults also ruptured during the 1964 earthquake in the same area. New high-resolution bathymetry and seismic reflection profiles show abundant normal faults in the region adjacent to and north of the megathrust splay faults. The largest of these is the Montague Strait fault, which has 80 m of postglacial offset (~12 ka?). We interpret this extension in the hanging wall as accommodating the exhumation of the rocks on Montague Island along the megathrust splay faults. An examination of legacy seismic reflection profiles shows the megathrust splay faults rooting downward into the decollement. At least some extension in the hanging wall may also be related to thrusting over a ramp-flat geometry. These megathrust splay faults are out-of-sequence thrusts, as they are located about 130 km inboard from the trench.
This out-of-sequence thrusting, which is causing the exhumation on Montague Island, may be driven by underplating or by the Yakutat microplate collision. We suggest that rapid exhumation, in association with normal faulting, may occur along other megathrust splay faults around the world.
A data-driven multiplicative fault diagnosis approach for automation processes.
Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo
2014-09-01
This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented.
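The key contrast the abstract draws, additive faults shift the mean while multiplicative faults inflate the variability of process variables, can be illustrated with a toy example. This is a minimal sketch of the general idea only, not the paper's algorithm; the variance-ratio "contribution" statistic and the three-variable process are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal (fault-free) reference data: 3 process variables
X_ref = rng.normal(0.0, 1.0, size=(1000, 3))

# Faulty data: a multiplicative fault doubles the variability of variable 1
# without shifting its mean (so an additive-fault detector would miss it)
X_fault = rng.normal(0.0, 1.0, size=(1000, 3))
X_fault[:, 1] *= 2.0

# Per-variable variance ratio as a crude contribution to degradation
ratio = X_fault.var(axis=0) / X_ref.var(axis=0)
root_cause = int(np.argmax(ratio))   # variable with the largest variance increase
```

Ranking variables by such a variability-increase measure is one simple way to point at the low-level component responsible for the degradation.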
Fault Tolerance Middleware for a Multi-Core System
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.
2012-01-01
Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. 
This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
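The feedback loop described above (error factory, severity mapping, knowledge-base recommendation, responder feedback, escalation from rollback to restart) can be sketched in a few lines. All class, method, and severity names here are illustrative assumptions, not the actual FTM API:

```python
# Hypothetical sketch of the FTM introspection flow; names are invented.
SEVERITY = {"transient": 1, "data_corruption": 2, "node_unresponsive": 3}

class Introspection:
    def __init__(self):
        # Past (error, response) pairs, fed back by the responder module
        self.history = []

    def recommend(self, error_kind):
        severity = SEVERITY[error_kind]      # error factory + severity mapper
        if severity == 1:
            response = "log"                 # ignore/log minor errors
        elif severity == 2:
            response = "rollback"            # roll back to a saved checkpoint
            if ("data_corruption", "rollback") in self.history:
                response = "restart"         # rollback did not help: escalate
        else:
            response = "swap_node"           # replace the bad core/node
        self.history.append((error_kind, response))   # responder feedback
        return response

ftm = Introspection()
r1 = ftm.recommend("data_corruption")   # first occurrence
r2 = ftm.recommend("data_corruption")   # error persists after rollback
```

The second recommendation escalates precisely because the responder's feedback shows a rollback was already tried for the same error kind, mirroring the rollback-then-restart example in the abstract.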
A probabilistic dynamic energy model for ad-hoc wireless sensors network with varying topology
NASA Astrophysics Data System (ADS)
Al-Husseini, Amal
In this dissertation we investigate the behavior of Wireless Sensor Networks (WSNs) from the degree distribution and evolution perspective. Specifically, we focus on implementation of a scale-free degree distribution topology for energy-efficient WSNs. WSNs are an emerging technology with applications in areas such as environment monitoring, agricultural crop monitoring, forest fire monitoring, and hazardous chemical monitoring in war zones. This technology allows us to collect data without human presence or intervention. Energy conservation/efficiency is one of the major issues in prolonging the active life of WSNs. Recently, many energy-aware and fault-tolerant topology control algorithms have been presented, but there is a dearth of research focused on the energy conservation/efficiency of WSNs. Therefore, we study energy efficiency and fault tolerance in WSNs from the degree distribution and evolution perspective. Self-organization observed in natural and biological systems has been directly linked to their degree distribution. It is widely known that a scale-free distribution bestows robustness, fault tolerance, and access efficiency on a system. Motivated by these properties, we propose two complex-network-theoretic self-organizing models for adaptive WSNs. In particular, we focus on adapting the Barabási and Albert scale-free model to fit the constraints and limitations of WSNs. We developed simulation models to conduct numerical experiments and network analysis. The main objective of studying these models is to find ways to reduce the energy usage of each node and to balance the overall network energy disrupted by faulty communication among nodes. The first model constructs the wireless sensor network relative to the degree (connectivity) and remaining energy of every individual node. We observed that it results in a scale-free network structure which has good fault tolerance properties in the face of random node failures.
The second model considers additional constraints on the maximum degree of each node, as well as the energy consumption relative to degree changes. This gives more realistic results from a dynamical network perspective and results in balanced network-wide energy consumption. The results show that networks constructed using the proposed approach have good properties for different centrality measures. The outcomes of the presented research are beneficial to building WSN control models with greater self-organization properties, which lead to optimal energy consumption.
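The first model's growth rule, attachment probability proportional to both degree and remaining energy, can be sketched as a small variant of Barabási-Albert preferential attachment. This is a toy reconstruction under stated assumptions (one link per new node, uniform initial energy), not the dissertation's simulation code:

```python
import random

random.seed(1)

def grow_energy_aware_network(n_nodes, energy):
    """Attach each new node to an existing node chosen with probability
    proportional to degree * remaining energy (energy-aware BA sketch)."""
    degree = {0: 1, 1: 1}
    edges = [(0, 1)]                    # seed network: two linked nodes
    for new in range(2, n_nodes):
        existing = sorted(degree)
        weights = [degree[i] * energy[i] for i in existing]
        target = random.choices(existing, weights=weights)[0]
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return degree, edges

energy = {i: 1.0 for i in range(50)}    # uniform energy -> plain BA behavior
degree, edges = grow_energy_aware_network(50, energy)
```

With uniform energy this reduces to standard preferential attachment and grows hubs; lowering a node's remaining energy proportionally lowers its chance of acquiring further (energy-draining) links, which is the balancing effect the model aims for.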
NASA Astrophysics Data System (ADS)
Kissling, W. M.; Villamor, P.; Ellis, S. M.; Rae, A.
2018-05-01
Present-day geothermal activity on the margins of the Ngakuru graben and evidence of fossil hydrothermal activity in the central graben suggest that a graben-wide system of permeable intersecting faults acts as the principal conduit for fluid flow to the surface. We have developed numerical models of fluid and heat flow in a regional-scale 2-D cross-section of the Ngakuru Graben. The models incorporate simplified representations of two 'end-member' fault architectures (one symmetric at depth, the other highly asymmetric) which are consistent with the surface locations and dips of the Ngakuru graben faults. The models are used to explore controls on buoyancy-driven convective fluid flow which could explain the differences between the past and present hydrothermal systems associated with these faults. The models show that the surface flows from the faults are strongly controlled by the fault permeability, the fault system architecture and the location of the heat source with respect to the faults in the graben. In particular, fault intersections at depth allow exchange of fluid between faults, and the location of the heat source on the footwall of normal faults can facilitate upflow along those faults. These controls give rise to two distinct fluid flow regimes in the fault network. The first, a regular flow regime, is characterised by a nearly unchanging pattern of fluid flow vectors within the fault network as the fault permeability evolves. In the second, complex flow regime, the surface flows depend strongly on fault permeability, and can fluctuate in an erratic manner. The direction of flow within faults can reverse in both regimes as fault permeability changes. Both flow regimes provide insights into the differences between the present-day and fossil geothermal systems in the Ngakuru graben. 
Hydrothermal upflow along the Paeroa fault seems to have occurred, possibly continuously, for tens of thousands of years, while upflow in other faults in the graben has switched on and off during the same period. An asymmetric graben architecture with the Paeroa being the major boundary fault will facilitate the predominant upflow along this fault. Upflow on the axial faults is more difficult to explain with this modelling. It occurs most easily with an asymmetric graben architecture and heat sources close to the graben axis (which could be associated with remnant heat from recent eruptions from Okataina Volcanic Centre). Temporal changes in upflow can also be associated with acceleration and deceleration of fault activity if this is considered a proxy for fault permeability. Other explanations for temporal variations in hydrothermal activity not explored here are different permeability on different faults, and different permeability along fault strike.
Trade-offs between driving nodes and time-to-control in complex networks
Pequito, Sérgio; Preciado, Victor M.; Barabási, Albert-László; Pappas, George J.
2017-01-01
Recent advances in control theory provide us with efficient tools to determine the minimum number of driving (or driven) nodes needed to steer a complex network towards a desired state. Furthermore, we often need to do so within a given time window, so it is of practical importance to understand the trade-offs between the minimum number of driving/driven nodes and the minimum time required to reach a desired state. Therefore, we introduce the notion of actuation spectrum to capture such trade-offs, and use it to find that in many complex networks only a small fraction of driving (or driven) nodes is required to steer the network to a desired state within a relatively small time window. Furthermore, our empirical studies reveal that, even though synthetic network models are designed to present structural properties similar to those observed in real networks, their actuation spectra can be dramatically different. These results support the need to develop new synthetic network models able to replicate the controllability properties of real-world networks. PMID:28054597
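One endpoint of the actuation spectrum, the minimum number of driver nodes with no time constraint, is classically computed from structural controllability as N_D = max(N − |maximum matching|, 1) over the directed network (Liu, Slotine & Barabási). A compact augmenting-path matching illustrates this; the sketch below is a standard construction, not the paper's own code:

```python
def max_matching(edges, nodes):
    """Maximum matching in the bipartite (out-copy -> in-copy) view
    of a directed graph, via augmenting paths."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    match = {}                      # matched "in" endpoint -> its "out" endpoint

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(1 for u in nodes if augment(u, set()))

# Directed star: node 0 drives nodes 1, 2, 3; only one link can be matched,
# so three driver nodes are needed, the classic hub-and-spokes result.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3)]
n_drivers = max(len(nodes) - max_matching(edges, nodes), 1)
```

The actuation spectrum generalizes this single number into a curve of required actuators versus allowed time-to-control.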
Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation
Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan
2013-01-01
The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
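The flavor of randomized distributed aggregation that push-flow builds on can be illustrated with the simpler classical push-sum gossip protocol (Kempe et al.): each node keeps a (sum, weight) pair, repeatedly pushes half to a random neighbor, and the ratio converges to the global average. This is a related baseline for intuition, not the push-flow algorithm itself:

```python
import random

random.seed(0)

def push_sum(values, neighbors, rounds=60):
    """Gossip averaging: every node halves its (sum, weight) pair and
    pushes one half to a uniformly random neighbor each round."""
    s = list(values)                # running sums
    w = [1.0] * len(values)         # running weights
    for _ in range(rounds):
        for i in range(len(values)):
            j = random.choice(neighbors[i])
            half_s, half_w = s[i] / 2, w[i] / 2
            s[i], w[i] = half_s, half_w     # keep half locally
            s[j] += half_s                  # push the other half
            w[j] += half_w
    return [si / wi for si, wi in zip(s, w)]

# 4-node ring (a dimension-2 hypercube); true average is 2.5
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
est = push_sum([1.0, 2.0, 3.0, 4.0], neighbors)
```

Because mass (the global sum of the pairs) is conserved no matter which messages are exchanged, every node's ratio converges to the same average; push-flow's contribution is making this style of scheme resilient to message loss and node failures.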
Distributed downhole drilling network
Hall, David R.; Hall, Jr., H. Tracy; Fox, Joe; Pixton, David S.
2006-11-21
A high-speed downhole network providing real-time data from downhole components of a drill string includes a bottom-hole node interfacing to a bottom-hole assembly located proximate the bottom end of the drill string. A top-hole node is connected proximate the top end of the drill string. One or several intermediate nodes are located along the drill string between the bottom-hole node and the top-hole node. The intermediate nodes are configured to receive and transmit data packets transmitted between the bottom-hole node and the top-hole node. A communications link, integrated into the drill string, is used to operably connect the bottom-hole node, the intermediate nodes, and the top-hole node. In selected embodiments, a personal or other computer may be connected to the top-hole node, to analyze data received from the intermediate and bottom-hole nodes.
A review on data-driven fault severity assessment in rolling bearings
NASA Astrophysics Data System (ADS)
Cerrada, Mariela; Sánchez, René-Vinicio; Li, Chuan; Pacheco, Fannia; Cabrera, Diego; Valente de Oliveira, José; Vásquez, Rafael E.
2018-01-01
Health condition monitoring of rotating machinery is a crucial task to guarantee reliability in industrial processes. In particular, bearings are mechanical components used in most rotating devices and they represent the main source of faults in such equipment, which is why research activities on detecting and diagnosing their faults have increased. Fault detection aims at identifying whether or not the device is in a fault condition, and diagnosis is commonly oriented towards identifying the fault mode of the device after detection. An important step after fault detection and diagnosis is the analysis of the magnitude or the degradation level of the fault, because this supports the decision-making process in condition-based maintenance. However, few works are devoted to analysing this problem, and some tackle it only from the fault diagnosis point of view. Roughly speaking, fault severity is associated with the magnitude of the fault. In bearings, fault severity can be related to the physical size of the fault or to a general degradation of the component. Because the literature on the severity assessment of bearing damage is limited, this paper discusses the recent methods and techniques used to achieve fault severity evaluation in the main components of rolling bearings, such as the inner race, outer race, and balls. The review is mainly focused on data-driven approaches such as signal processing, for extracting the proper fault signatures associated with the damage degradation, and learning approaches, which are used to identify degradation patterns with regard to health conditions. Finally, new challenges are highlighted in order to develop new contributions in this field.
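Two of the classic signal-processing severity indicators this literature relies on, RMS (overall energy) and kurtosis (impulsiveness), can be shown on a synthetic vibration signal. The signal model here (a sinusoidal baseline plus periodic impulses standing in for a localized defect signature) is an illustrative assumption, not data from any reviewed study:

```python
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    """Fourth standardized moment; near 1.5 for a sinusoid, large for impulses."""
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    m4 = sum((v - m) ** 4 for v in x) / len(x)
    return m4 / (var ** 2)

def synthetic_signal(impulse_amp, n=2048):
    # baseline vibration plus an impulse every 64 samples (a defect period)
    sig = [math.sin(0.3 * i) for i in range(n)]
    for i in range(0, n, 64):
        sig[i] += impulse_amp
    return sig

healthy = synthetic_signal(0.0)
degraded = synthetic_signal(5.0)
```

Both indicators grow with impulse amplitude, which is why trending them over time is a simple data-driven proxy for fault severity before any learning-based model is applied.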
Robot Position Sensor Fault Tolerance
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.
1997-01-01
Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault-tolerant designs require the addition of directly redundant position sensors, which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow for continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and to improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer-based methods are proposed to determine the position of the joint. The joint-specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor. The joint-specific method is not computationally complex and its position error is bounded. The system-wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the position of sets of multiple joints. The system-wide method requires fewer accelerometers than the joint-specific method to make all joint position sensors fault tolerant, but it is more computationally complex and has slower convergence. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.
An Evaluation of Concurrent Priority Queue Algorithms
1991-02-01
In the array representation of the heap, node i occupies location i: the left child of node i, LCHILD(i), occupies location 2i, its right child, RCHILD(i), occupies location 2i + 1, and the parent of node i is at location i/2 (integer division). Associated with the heap are data fields lastelem and fulllevel. [The remainder of this passage is OCR residue of the pseudocode in Figure 2.2: Insert operation on binary heap.]
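The array layout and the insert (sift-up) operation of Figure 2.2 can be reconstructed as runnable code. This is a standard 1-indexed min-heap insert consistent with the indexing rules quoted above, not a transcription of the report's exact pseudocode:

```python
# 1-indexed array heap: node i at location i, children at 2i and 2i+1.
def lchild(i): return 2 * i          # mirrors LCHILD(i)
def rchild(i): return 2 * i + 1      # mirrors RCHILD(i)
def parent(i): return i // 2

def insert(heap, key):
    """Append the new key, then sift it up until the min-heap
    property (parent <= child) is restored."""
    heap.append(key)
    i = len(heap) - 1
    while i > 1 and heap[parent(i)] > key:
        heap[i] = heap[parent(i)]    # pull the larger parent down
        i = parent(i)
    heap[i] = key

h = [None]                           # location 0 unused; root at location 1
for k in [5, 3, 8, 1]:
    insert(h, k)
```

After these four inserts the smallest key sits at the root (location 1), and every node satisfies heap order with respect to its parent.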
Aftershocks driven by afterslip and fluid pressure sweeping through a fault-fracture mesh
Ross, Zachary E.; Rollins, Christopher; Cochran, Elizabeth S.; Hauksson, Egill; Avouac, Jean-Philippe; Ben-Zion, Yehuda
2017-01-01
A variety of physical mechanisms are thought to be responsible for the triggering and spatiotemporal evolution of aftershocks. Here we analyze a vigorous aftershock sequence and postseismic geodetic strain that occurred in the Yuha Desert following the 2010 Mw 7.2 El Mayor-Cucapah earthquake. About 155,000 detected aftershocks occurred in a network of orthogonal faults and exhibit features of two distinct mechanisms for aftershock triggering. The earliest aftershocks were likely driven by afterslip that spread away from the main shock with the logarithm of time. A later pulse of aftershocks swept again across the Yuha Desert with square root time dependence and swarm-like behavior; together with local geological evidence for hydrothermalism, these features suggest that the events were driven by fluid diffusion. The observations illustrate how multiple driving mechanisms and the underlying fault structure jointly control the evolution of an aftershock sequence.
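The two migration laws that distinguish the triggering mechanisms, an afterslip-driven front expanding with the logarithm of time versus a fluid-diffusion front expanding with the square root of time, can be compared in a toy calculation. The parameterizations below (r = a·log(1 + t/t0) and the common diffusion-front form r = sqrt(4πDt)), along with all constants, are illustrative assumptions in arbitrary units, not values from the study:

```python
import math

def afterslip_front(t, a=1.0, t0=0.01):
    """Logarithmic expansion, characteristic of afterslip-driven triggering."""
    return a * math.log(1 + t / t0)

def diffusion_front(t, D=1.0):
    """Square-root expansion, characteristic of fluid-pressure diffusion."""
    return math.sqrt(4 * math.pi * D * t)

# Early on the logarithmic front is ahead; at late times sqrt(t) overtakes it
early = afterslip_front(0.1) > diffusion_front(0.1)
late = afterslip_front(1000.0) < diffusion_front(1000.0)
```

This crossover in growth behavior is what makes the time dependence of aftershock front migration a useful discriminant between the two mechanisms.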
Initiation and development of the southern California uplift along its northern margin
Stein, R.S.; Thatcher, W.; Castle, R.O.
1979-01-01
Analysis of three first-order leveling lines that traverse the White Wolf fault (site of the 1952 M = 7.7 earthquake), each resurveyed nine times between 1926 and 1974, reveals probable preseismic tilting, major coseismic movements, and a spatial association between these movements and the subsequently recognized southern California uplift. In examining the vertical control record, we have both searched for evidence of systematic errors and excluded from consideration portions of the lines contaminated by subsurface fluid and gas extraction. Movements have been referred to an invariant datum based on the 1926 position of tidal BM 8 in San Pedro, corrected for subsequent eustatic sea-level change. An 8 μrad up-to-the-north preseismic tilt (6 cm/7.5 km) was apparently recorded on two adjacent line segments within 10 km of the 1952 epicenter between 1942 and 1947. It is possible, however, that this tilt was in part caused by extraction-induced subsidence at one of the six releveled benchmarks. Data also show evidence of episodic tilts that are not earthquake related. At the junction of the Garlock and San Andreas faults, for example, an ~5 μrad up-to-the-north tilt (7.2 cm/~16 km) took place between Lebec and Grapevine within three months during 1964. Comparison of the 1947 and 1953 surveys, which includes the coseismic interval, shows that the SW-fault end (nearest the epicenter) and the central fault reach sustained four times the uplift recorded at the NE end of the fault (+72 cm SW, +53 cm Central, +16 cm NE). A regional postseismic uplift of 4 cm extended ~25 km to either side of the fault after the main event, from 1953 to 1956. An interval of relative quiescence followed at least through 1959, in which the elevation change did not exceed ±3 cm.
The detailed pattern of aseismic uplift demonstrates that movement proceeded in space-time pulses: one half of the uplift at the SW-fault end and extending southward occurred between 1959 and 1961, one half of the uplift at the NE-fault end and extending eastward occurred between 1961 and 1965, while the central fault reach sustained successive pulses of subsidence, uplift, and collapse (-4 cm, 1953-1960; +7 cm, 1960-1965; -2 cm, 1965-1970). In addition, the number of aftershocks concentrated near the fault ends increased in the NE relative to the SW from 1952 to 1974. These observations suggest that the aseismic uplift may have migrated northeastward from 1959 to 1965 at an approximate rate of 7-16 km/yr. Evidence for a mechanical coupling between the earthquake and the subsequent aseismic uplift is equivocal. At both fault ends, the major NW-bounding flexure or tilted front of the southern California uplift is spatially coincident with the coseismic flexure that preceded it. In addition, the postulated migration of vertical deformation is similar to the 1952 seismic event, in which the rupture initiated at the SW end of the fault and then propagated to the NE-fault end. However, the spatial distribution of aseismic uplift, nearly identical at both fault ends and to the south and east, and near zero in the central fault reach, is distinctly different from the nonuniform and localized coseismic deformation.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
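The core DEFT idea, an event-tree pivot whose branch probability is supplied by a fault tree rather than a single basic event, can be shown with a toy quantification. For brevity the pivot here uses static AND/OR gates (real DFTs add dynamic gates such as PAND and spares), and all component probabilities and the system structure are invented for illustration:

```python
def and_gate(p_a, p_b):
    """Both independent components fail."""
    return p_a * p_b

def or_gate(p_a, p_b):
    """At least one of two independent components fails."""
    return 1 - (1 - p_a) * (1 - p_b)

# Pivot node: "backup system fails" = (pump A AND pump B) OR controller
p_pivot = or_gate(and_gate(0.01, 0.02), 0.001)

# Event-tree sequences following an initiating event with frequency f:
f = 1e-3
p_bad_sequence = f * p_pivot          # initiator occurs AND backup fails
p_ok_sequence = f * (1 - p_pivot)     # initiator occurs, backup succeeds
```

Hanging a fault tree under the pivot is what lets the PRA event tree reason about redundant, computer-based subsystems instead of treating the branch probability as a single input number.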
An Architectural Concept for Intrusion Tolerance in Air Traffic Networks
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Miner, Paul S.
2003-01-01
The goal of an intrusion tolerant network is to continue to provide predictable and reliable communication in the presence of a limited number of compromised network components. The behavior of a compromised network component ranges from a node that no longer responds to a node that is under the control of a malicious entity that is actively trying to cause other nodes to fail. Most current data communication networks do not include support for tolerating unconstrained misbehavior of components in the network. However, the fault tolerance community has developed protocols that provide both predictable and reliable communication in the presence of the worst possible behavior of a limited number of nodes in the system. One may view a malicious entity in a communication network as a node that has failed and is behaving in an arbitrary manner. NASA/Langley Research Center has developed one such fault-tolerant computing platform called SPIDER (Scalable Processor-Independent Design for Electromagnetic Resilience). The protocols and interconnection mechanisms of SPIDER may be adapted to large-scale, distributed communication networks such as would be required for future Air Traffic Management systems. The predictability and reliability guarantees provided by the SPIDER protocols have been formally verified. This analysis can be readily adapted to similar network structures.
NASA Astrophysics Data System (ADS)
Henry, C. D.; Faulds, J. E.
2006-12-01
The Gulf of California (GC) and Walker Lane (WL) have undergone strikingly similar development, with strike-slip faulting following initial extension. They differ significantly in the amount of Pacific-North American plate motion taken up by each: essentially all relative motion in the GC and ~25% in the WL. In both areas, ancestral arc magmatism preceded and probably focused deformation, perhaps because heating and/or hydration weakened the lithosphere. However, differences in migration of the Rivera (RTJ) and Mendocino triple junctions (MTJ), related to differences in the orientation of plate boundaries, determined how strike-slip faulting developed. Abrupt southward jumps in the RTJ led to abrupt cessation of magmatism over arc lengths of as much as 1000 km and initiation of east-northeast extension within the future GC. The best known jump was at ~13 Ma, but an earlier jump occurred at ~18 Ma. Arc magmatism has been best documented in Baja California, Sonora, and Nayarit, although Baja constituted the most-trenchward fringe of the ancestral arc. New and published data indicate that Sinaloa underwent a similar history of arc magmatism. The greatest volume of the arc immediately preceding RTJ jumps was probably in mainland Mexico. Arc magmatism shut off following these jumps, extension began in the future GC, and strike-slip faulting either followed or accompanied extension in the GC. In contrast, the MTJ migrated progressively northward. New and published data indicate magmatism generally shut off coincident with this retreat, but distinct nodes or zones of magmatism, presumably unrelated to subduction, persisted or initiated after arc activity ceased. We have suggested that the WL has grown progressively northward, following the retreating arc, and that the northern WL is its youngest part. However, the timing of initiation of strike-slip faulting in most of the WL is poorly known and controversial.
Testing our hypothesis requires determining initiation and magnitudes of total slip across different parts. Despite the progressive migration of the MTJ, arc magmatism ceased abruptly at the latitude of Lake Tahoe (39.2°) at about 3 Ma, and the southern end of the active Cascade arc jumped ~160 km northward to Lassen Peak (40.5°), where it remains. Geologic data indicate strike-slip faulting began between these two areas immediately following the end of arc magmatism. The southern Cascade arc is undergoing ~east-west extension, which was the case for the northern Walker Lane immediately before strike-slip faulting began. Further progression or steps in magmatism and strike-slip faulting will likely follow further northward migration of the MTJ.
Formation mechanism of fivefold deformation twins in a face-centered cubic alloy.
Zhang, Zhenyu; Huang, Siling; Chen, Leilei; Zhu, Zhanwei; Guo, Dongming
2017-03-28
Fivefold deformation twins are generally considered to originate from grain boundaries in nanocrystalline materials; consequently, fivefold deformation twins derived from a single crystal have not been reported by molecular dynamics simulations. In this study, fivefold deformation twins are observed in a single crystal of a face-centered cubic (fcc) alloy, and a new formation mechanism is proposed for fivefold deformation twins in a single crystal. A partial dislocation is emitted from incoherent twin boundaries (ITBs) with high energy, generating a stacking fault along a {111} plane and nucleating and growing a twin through the successive emission of partials. A node is fixed at the intersecting center of the four different {111} slip planes. With increasing stress under the indentation, ITBs form close to the node, leading to the emission of a partial from the node. This generates a stacking fault along a {111} plane, nucleating and growing a twin by the continuous emission of partials. This process repeats until the formation of fivefold deformation twins.
A bottom-driven mechanism for distributed faulting: Insights from the Gulf of California Rift
NASA Astrophysics Data System (ADS)
Persaud, P.; Tan, E.; Choi, E.; Contreras, J.; Lavier, L. L.
2017-12-01
The Gulf of California is a young oblique rift that displays a variation in rifting style along strike. Despite the rapid localization of strain in the Gulf at 6 Ma, the northern rift segment has the characteristics of a wide rift, with broadly distributed extensional strain and small gradients in topography and crustal thinning. Observations of active faulting in the continent-ocean transition of the Northern Gulf show multiple oblique-slip faults distributed in a 200 × 70 km² area that developed some time after a westward relocation of the plate boundary at 2 Ma. In contrast, north and south of this broad pull-apart structure, major transform faults accommodate Pacific-North America plate motion. Here we propose that the mechanism for distributed brittle deformation results from the boundary conditions present in the Northern Gulf, where basal shear is distributed between the Cerro Prieto strike-slip fault (the southernmost fault of the San Andreas fault system) and the Ballenas Transform fault. We hypothesize that in oblique-extensional settings, whether deformation is partitioned on a few dip-slip and strike-slip faults or on numerous oblique-slip faults may depend on (1) bottom-driven, distributed extension and shear deformation of the lower crust or upper mantle, and (2) the rift obliquity. To test this idea, we explore the effects of bottom-driven shear on the deformation of a brittle elastic-plastic layer with pseudo-three-dimensional numerical models that include side forces. Strain localization results when the basal shear is a step function, while oblique slip on numerous faults dominates when basal shear is distributed. We further investigate how the style of faulting varies with obliquity and demonstrate that the style of faulting observed in the Northern Gulf of California is reproduced in models with an obliquity of 0.7 and distributed basal shear boundary conditions, consistent with the interpreted obliquity and boundary conditions of the study area.
Our findings motivate a suite of 3D models of the early plate boundary evolution in the Gulf, and highlight the importance of local stress field perturbations as a mechanism for broadening the deformation zone in other regions such as the Basin and Range, Rio Grande Rift and Malawi Rift.
Scalable cloud without dedicated storage
NASA Astrophysics Data System (ADS)
Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.
2015-05-01
We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage, which is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built on the basis of open-source components such as OpenStack and Ceph.
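The utilization argument above can be made concrete with back-of-envelope sizing arithmetic for replicated software storage: N identical nodes each contributing capacity C yield N·C/r usable bytes at replication factor r, while tolerating r-1 simultaneous node losses. This is a generic sketch under assumed parameters, not figures from the prototype:

```python
# Generic capacity/fault-tolerance arithmetic for replicated distributed
# storage; node count, per-node capacity, and replication factor below are
# illustrative assumptions, not measurements from the prototype.

def usable_capacity(n_nodes, capacity_per_node, replication):
    """Usable capacity when every object is stored `replication` times."""
    return n_nodes * capacity_per_node / replication

def tolerated_failures(replication):
    """Number of simultaneous node losses the replicas can absorb."""
    return replication - 1

# Example: 10 combined compute/storage nodes with 4 TB each, 2-way replication.
print(usable_capacity(10, 4.0, 2))   # 20.0 (TB usable)
print(tolerated_failures(3))         # 1 extra replica per object -> 2 losses
```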
Model Checking a Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2011-01-01
This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period.
Fault-Tolerant Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R. (Inventor)
2014-01-01
A self-stabilizing network in the form of an arbitrary, non-partitioned digraph includes K nodes, each having a synchronizer executing a protocol. The K-1 monitors of each node may receive a Sync message transmitted from a directly connected node. When the Sync message is received, the logical clock value for the receiving node is set to between 0 and a communication latency value (gamma) if the clock value is less than a minimum event-response delay (D). A new Sync message is also transmitted to any directly connected nodes if the clock value is greater than or equal to both D and a graph threshold (T_S). When the Sync message is not received, the synchronizer increments the clock value if the clock value is less than a resynchronization period (P), and resets the clock value and transmits a new Sync message to all directly connected nodes when the clock value equals or exceeds P.
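The per-tick rules in this description translate directly into a small state machine. The sketch below is one interpretation; the parameter values (D, gamma, T_S, P) are illustrative assumptions chosen so that D <= T_S < P, not values from the patent:

```python
# Interpretation of the per-tick clock-update rules described above.
# All parameter values are illustrative assumptions.
D = 1        # minimum event-response delay
GAMMA = 2    # communication latency bound (gamma)
T_S = 5      # graph threshold
P = 10       # resynchronization period

def on_tick(clock, sync_received):
    """Return (new_clock, transmit_sync) for one local oscillator tick."""
    if sync_received:
        if clock < D:
            return GAMMA, False      # reset the logical clock into [0, gamma]
        if clock >= T_S:
            return clock, True       # clock >= D and >= T_S: relay a new Sync
        return clock, False
    if clock < P:
        return clock + 1, False      # no Sync seen: keep counting
    return 0, True                   # period elapsed: reset and broadcast Sync
```

With D <= T_S, the transmit condition "clock >= D and clock >= T_S" collapses to the single comparison used above.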
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while the price of tree uplinks remains high.
NASA Astrophysics Data System (ADS)
Wan, Yongge; Shen, Zheng-Kang; Bürgmann, Roland; Sun, Jianbao; Wang, Min
2017-02-01
We revisit the problem of coseismic rupture of the 2008 Mw 7.9 Wenchuan earthquake. Precise determination of the fault structure and slip distribution provides critical information about the mechanical behaviour of the fault system and the earthquake rupture. We use all the geodetic data available, craft a more realistic Earth structure and fault model than previous studies, and employ a nonlinear inversion scheme to optimally solve for the fault geometry and slip distribution. Compared to a homogeneous elastic half-space model and laterally uniform layered models, adopting separate layered elastic structure models on the two sides of the Beichuan fault significantly improves the data fit. Our results reveal that: (1) The Beichuan fault is listric in shape, with near-surface fault dip angles increasing from ~36° at the southwest end to ~83° at the northeast end of the rupture. (2) The fault rupture style changes from predominantly thrust at the southwest end to dextral at the northeast end of the rupture. (3) Fault slip peaks near the surface for most parts of the fault, with ~8.4 m thrust and ~5 m dextral slip near Hongkou and ~6 m thrust and ~8.4 m dextral slip near Beichuan, respectively. (4) The peak slips are located around fault geometric complexities, suggesting that earthquake style and rupture propagation were controlled by fault zone geometric barriers. Such barriers exist primarily along restraining left-stepping discontinuities of the dextral-compressional fault system. (5) The seismic moment released on the fault above 20 km depth is 8.2 × 10^20 N·m, corresponding to an Mw 7.9 event. The seismic moments released on the local slip concentrations are equivalent to events of Mw 7.5 at Yingxiu-Hongkou, Mw 7.3 at Beichuan-Pingtong, Mw 7.2 near Qingping, Mw 7.1 near Qingchuan, and Mw 6.7 near Nanba, respectively.
(6) The fault geometry and kinematics are consistent with a model in which crustal deformation at the eastern margin of the Tibetan plateau is decoupled by differential motion across a decollement in the mid crust, above which deformation is dominated by brittle reverse faulting and below which deformation occurs by viscous horizontal shortening and vertical thickening.
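As a generic cross-check of the quoted magnitudes (not code from the paper), the standard Hanks-Kanamori relation converts scalar seismic moment in N·m to moment magnitude; a moment of 8.2 × 10^20 N·m indeed corresponds to Mw ≈ 7.9:

```python
import math

def moment_magnitude(m0):
    """Moment magnitude Mw from scalar seismic moment m0 in N*m
    (Hanks-Kanamori relation: Mw = (2/3)(log10 m0 - 9.1))."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

mw = moment_magnitude(8.2e20)   # ≈ 7.9, the mainshock value quoted above
```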
Opinion formation driven by PageRank node influence on directed networks
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Shepelyansky, Dima L.
2015-10-01
We study a two-state opinion formation model driven by PageRank node influence and report an extensive numerical study of how PageRank affects collective opinion formation in large-scale empirical directed networks. In our model, the opinion of a node can be updated at each step by the sum of its neighbor nodes' opinions weighted by the node influence of those neighbors. We consider the PageRank probability and its sublinear power as node influence measures and investigate the evolution of opinion under various conditions. First, we observe that all networks reach a steady-state opinion after a certain relaxation time. This time scale decreases with the heterogeneity of node influence in the networks. Second, we find that our model shows consensus and non-consensus behavior in the steady state depending on the type of network: the Web graph, a citation network of physics articles, and the LiveJournal social network show non-consensus behavior, while the Wikipedia article network shows consensus behavior. Third, we find that a more heterogeneous influence distribution leads to a more uniform opinion state in the cases of the Web graph, Wikipedia, and LiveJournal. However, the opposite behavior is observed in the citation network. Finally, we identify that a small number of influential nodes can impose their own opinion on a significant fraction of other nodes in all considered networks. Our study shows that the effects of heterogeneity of node influence on opinion formation can be significant and suggests further investigation of the interplay between node influence and collective opinion in networks.
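The update rule can be sketched in a few lines: each node sums the opinions of its in-neighbors weighted by their PageRank and adopts the sign of the sum. The power-iteration PageRank and the tiny 4-node digraph below are illustrative stand-ins for the paper's large empirical networks:

```python
# Minimal PageRank (power iteration) plus one opinion-update step, as a sketch
# of the model described above; graph, damping, and opinions are toy values.

def pagerank(nodes, edges, d=0.85, iters=100):
    """PageRank of a digraph given as (src, dst) pairs, with dangling-mass
    redistribution; returns a probability vector over nodes."""
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    out = {v: [dst for src, dst in edges if src == v] for v in nodes}
    for _ in range(iters):
        dangling = sum(pr[v] for v in nodes if not out[v])
        pr = {v: (1 - d) / n
                 + d * (sum(pr[u] / len(out[u]) for u in nodes if v in out[u])
                        + dangling / n)
              for v in nodes}
    return pr

def update_opinion(v, opinions, in_neighbors, influence):
    """New opinion of v: sign of the influence-weighted sum of in-neighbors."""
    s = sum(influence[u] * opinions[u] for u in in_neighbors[v])
    return 1 if s >= 0 else -1
```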
2013-05-01
...representation of a centralized control system on a turbine engine. All actuators and sensors are point-to-point cabled to the controller (FADEC), which... electronics themselves. Figure 1: Centralized Control System. Each function resides within the FADEC and uses unique point-to-point analog... distributed control system on the same turbine engine. The actuators and sensors interface to Smart Nodes which, in turn, communicate to the FADEC via...
Le, Duc Van; Oh, Hoon; Yoon, Seokhoon
2013-07-05
In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay.
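The link reinforcement step lends itself to a compact sketch: scan the routing path for the weakest link and position a robotic relay between its endpoints. Link qualities, coordinates, and the midpoint placement rule below are illustrative assumptions, not RoCoMAR's actual optimization:

```python
# Toy sketch of one link-reinforcement iteration: identify the lowest-quality
# hop on the path and propose a relay position for a robotic node. Quality
# values and the midpoint rule are illustrative assumptions.

def weakest_link(path, quality):
    """Return the (u, v) hop on `path` with the lowest link quality."""
    hops = list(zip(path, path[1:]))
    return min(hops, key=lambda hop: quality[hop])

def relay_position(pos_u, pos_v):
    """Relay placement, simplified here to the midpoint of the two endpoints."""
    return tuple((a + b) / 2 for a, b in zip(pos_u, pos_v))
```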
Classification between Failed Nodes and Left Nodes in Mobile Asset Tracking Systems
Kim, Kwangsoo; Jin, Jae-Yeon; Jin, Seong-il
2016-01-01
Medical asset tracking systems track a medical device equipped with a mobile node and determine its status as either in or out, because the device can leave the monitoring area. Due to a failed node, such a system may decide that a mobile asset is outside the area even though it is within it. In this paper, an efficient classification method is proposed to separate mobile nodes disconnected from a wireless sensor network into nodes with faults and nodes that have actually left the monitoring region. The proposed scheme uses two trends extracted from the neighboring nodes of a disconnected mobile node: the first is the trend in a series of neighbor counts; the second is the trend in the ratios of boundary nodes included in the neighbors. Based on these trends, the proposed method separates failed nodes from mobile nodes that are disconnected from the wireless sensor network without failures. The proposed method is evaluated using both real data generated from a medical asset tracking system and simulations with the network simulator (ns-2). The experimental results show that the proposed method correctly differentiates between failed nodes and nodes that are no longer in the monitoring region, including cases that conventional methods fail to detect. PMID:26901200
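A toy version of the trend test (with assumed decision criteria, not the paper's exact rule) separates the two cases: a node that left the region shows a shrinking neighbor count together with a rising share of boundary nodes among its neighbors, while a failed interior node shows neither trend:

```python
# Illustrative classifier for a disconnected mobile node, based on the two
# trends described above. The simple first-vs-last comparison is an assumed
# stand-in for the paper's trend analysis.

def classify_disconnected(neighbor_counts, boundary_ratios):
    """Return 'left' or 'failed' from trends observed before disconnection."""
    count_falling = neighbor_counts[-1] < neighbor_counts[0]
    boundary_rising = boundary_ratios[-1] > boundary_ratios[0]
    if count_falling and boundary_rising:
        return 'left'      # drifted toward the boundary, then out of the area
    return 'failed'        # died in place: no boundary-ward trend
```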
Vajdi, Ahmadreza; Zhang, Gongxuan; Zhou, Junlong; Wei, Tongquan; Wang, Yongli; Wang, Tianshu
2018-01-01
We study the problem of employing a mobile sink in a large-scale Event-Driven Wireless Sensor Network (EWSN) for the purpose of data harvesting from sensor nodes. Generally, this employment mitigates the main weakness of WSNs, namely energy consumption in battery-driven sensor nodes. The main motivation of our work is to address challenges related to the network's topology by adopting a mobile sink that moves along a predefined trajectory in the environment. Since, in this fashion, it is not possible to gather data from sensor nodes individually, we adopt the approach of designating some of the sensor nodes as Rendezvous Points (RPs) in the network. We argue that RP planning in this case is a tradeoff between minimizing the number of RPs and decreasing the number of hops for a sensor node that needs to transmit data to the related RP, which leads to minimizing average energy consumption in the network. We address the problem by formulating the challenges and expectations as a Mixed Integer Linear Program (MILP). Then, having proved the NP-hardness of the problem, we propose three effective and distributed heuristics for RP planning, identifying sojourn locations, and constructing routing trees. Finally, experimental results prove the effectiveness of our approach. PMID:29734718
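The RP-planning tradeoff can be made concrete with a toy cost function (illustrative weights and Manhattan hop counts, not the paper's MILP): fewer RPs lower the sink-side cost but raise forwarding hops, and vice versa:

```python
# Toy objective for the RP trade-off described above. Sensor positions, the
# cost weights alpha/beta, and Manhattan-distance hops are all illustrative
# assumptions, not the paper's formulation.

def total_cost(sensors, rps, alpha=1.0, beta=0.5):
    """alpha * (number of RPs) + beta * (total Manhattan hop distance
    from each sensor to its nearest RP)."""
    hops = sum(min(abs(x - rx) + abs(y - ry) for rx, ry in rps)
               for x, y in sensors)
    return alpha * len(rps) + beta * hops

sensors = [(0, 0), (2, 0), (4, 0)]
central = total_cost(sensors, [(2, 0)])          # 1 RP, 4 total hops -> 3.0
spread = total_cost(sensors, [(0, 0), (4, 0)])   # 2 RPs, 2 total hops -> 3.0
```

Under these assumed weights, the one-RP and two-RP layouts tie exactly, which is the kind of balance the proposed heuristics search for.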
Multi-asperity models of slow slip and tremor
NASA Astrophysics Data System (ADS)
Ampuero, Jean Paul; Luo, Yingdi; Lengline, Olivier; Inbal, Asaf
2016-04-01
Field observations of exhumed faults indicate that fault zones can comprise mixtures of materials with different dominant deformation mechanisms, including contrasts in strength, frictional stability, and hydrothermal transport properties. Computational modeling helps quantify the potential effects of fault zone heterogeneity on fault slip styles ranging from seismic to aseismic slip, including slow slip and tremor phenomena, foreshock sequences and swarms, and high- and low-frequency radiation during large earthquakes. We summarize results of ongoing modeling studies of slow slip and tremor in which the fault zone comprises a collection of frictionally unstable patches capable of seismic slip (tremorgenic asperities) embedded in a frictionally stable matrix hosting aseismic slip transients. Such models are consistent with the current view that tremor results from repeated shear failure of multiple asperities as Low Frequency Earthquakes (LFEs). The collective behavior of asperities embedded in creeping faults generates a rich spectrum of tremor migration patterns, as observed on natural faults, whose seismicity rate, recurrence time, and migration speed can be mechanically related to the underlying transient slow slip rate. Tremor activity and slow slip also respond to periodic loading induced by tides or surface waves, and models relate tremor tidal sensitivity to frictional properties, fluid pressure, and creep rate. The overall behavior of a heterogeneous fault is affected by structural parameters, such as the ratio of stable to unstable materials, but also by time-dependent variables, such as pore pressure and loading rate. Some behaviors are well predicted by homogenization theory based on spatially averaged frictional properties, but others are somewhat unexpected, such as seismic slip behavior found in asperities that are much smaller than their nucleation size.
Two end-member regimes are obtained in rate-and-state models with velocity-weakening asperities embedded in a matrix with either (A) velocity-strengthening friction or (B) a transition from velocity-weakening to velocity-strengthening friction at increasing slip velocity. The most conventional regime is tremor driven by slow slip. However, if the interaction between asperities mediated by intervening transient creep is strong enough, a regime of slow slip driven by tremor emerges. These two regimes lead to different statistics of inter-event times of LFE sequences, which we compare with observations from LFE catalogs in Mexico, Cascadia, and Parkfield. These models also suggest that the depth dependence of tremor and slow slip behavior, for instance the shorter recurrence time and weaker amplitude with increasing depth, is not necessarily related to a depth-dependent size distribution of asperities, but could be due to depth dependence of the properties of the intervening creep materials. Simplified fracture mechanics models illustrate how the resistance of the fault zone matrix can control the effective distance of interaction between asperities and lead to transitions between Gutenberg-Richter and size-bounded (exponential) frequency-magnitude distributions. Structural fault zone properties such as the thickness of the damage zone can also introduce characteristic length scales that may affect the size distribution of tremor. Earthquake cycle simulations on heterogeneous faults also provide insight into the conditions that allow asperities to generate foreshock activity and high-frequency radiation during large earthquakes.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes in engineering, it forms a common technique for good industrial practice. In contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a knowledge base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected; (ii) the formation of an interface made of input faults that might occur upstream; (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance; and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
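To make the synthesis target concrete, the dendritic modules mentioned above combine basic events through AND/OR gates. The minimal fault-tree evaluator below is a generic sketch; the two-gate tree, the event names, and the probabilities are hypothetical illustrations loosely inspired by the anodic-film example, not content from the paper:

```python
# Minimal fault-tree evaluation over nested ('and' | 'or' | 'event') nodes,
# assuming independent basic events. Event names and probabilities are
# hypothetical illustrations.

def evaluate(node, p):
    """Probability of the top event of a fault tree."""
    kind = node[0]
    if kind == 'event':
        return p[node[1]]
    probs = [evaluate(child, p) for child in node[1]]
    if kind == 'and':
        out = 1.0
        for q in probs:
            out *= q           # all inputs must occur
        return out
    out = 1.0                  # 'or' gate: complement of no input occurring
    for q in probs:
        out *= (1.0 - q)
    return 1.0 - out

tree = ('or', [('event', 'seal_temp'),
               ('and', [('event', 'anodize_current'),
                        ('event', 'bath_contamination')])])
p = {'seal_temp': 0.1, 'anodize_current': 0.2, 'bath_contamination': 0.5}
top = evaluate(tree, p)   # 1 - (1 - 0.1) * (1 - 0.2 * 0.5) = 0.19
```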
NASA Astrophysics Data System (ADS)
Li, Qi; Tan, Kai; Wang, Dong Zhen; Zhao, Bin; Zhang, Rui; Li, Yu; Qi, Yu Jie
2018-02-01
The spatio-temporal slip distribution of the earthquake that occurred on 8 August 2017 in Jiuzhaigou, China, was estimated from teleseismic body waves and near-field Global Navigation Satellite System (GNSS) data (coseismic displacements and high-rate GPS data) based on a finite fault model. Compared with the inversion results from the teleseismic body waves alone, the near-field GNSS data better constrain the rupture area, the maximum slip, the source time function, and the surface rupture. The results show that the maximum slip of the earthquake approaches 1.4 m, the scalar seismic moment is 8.0 × 10^18 N·m (Mw ≈ 6.5), and the centroid depth is 15 km. The slip is predominantly left-lateral strike-slip, and it is initially inferred that the seismogenic fault is the south branch of the Tazang fault or an undetected fault, a NW-trending left-lateral strike-slip fault, which belongs to one of the tail structures at the easternmost end of the eastern Kunlun fault zone. The earthquake rupture is mainly concentrated at depths of 5-15 km, which results in the complete rupture of the seismic gap left by the previous four earthquakes with magnitudes > 6.0 in 1973 and 1976. Therefore, the possibility of a strong aftershock on the Huya fault is low. The source duration is 30 s and there are two major ruptures. The main rupture occurs in the first 10 s, peaking 4 s after the earthquake; the second rupture peak arrives at 17 s. In addition, the Coulomb stress study shows that the epicenter of the earthquake is located in an area where the static Coulomb stress change increased because of the 12 May 2008 Mw 7.9 Wenchuan, China, earthquake. Therefore, the Wenchuan earthquake promoted the occurrence of the 8 August 2017 Jiuzhaigou earthquake.
Flow-driven triboelectric generator for directly powering a wireless sensor node.
Wang, Shuhua; Mu, Xiaojing; Yang, Ya; Sun, Chengliang; Gu, Alex Yuandong; Wang, Zhong Lin
2015-01-14
A triboelectric generator (TEG) for scavenging flow-driven mechanical energy to directly power a wireless sensor node is demonstrated for the first time. The output performances of TEGs with different dimensions are systematically investigated, indicating that a maximum output power of about 3.7 mW for one TEG can be achieved under an external load of 3 MΩ. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
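A quick back-of-envelope check of the quoted operating point (a generic calculation, not from the paper): delivering P = 3.7 mW into a 3 MΩ load implies an RMS output voltage of sqrt(P·R), on the order of a hundred volts, which is characteristic of triboelectric generators:

```python
import math

# Implied electrical operating point from the quoted power and load.
P_OUT = 3.7e-3   # W, maximum output power quoted above
R_LOAD = 3e6     # ohm, external load

v_rms = math.sqrt(P_OUT * R_LOAD)   # ≈ 105 V implied RMS voltage
i_rms = v_rms / R_LOAD              # ≈ 35 µA implied RMS current
```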
Modeling Dynamic Evolution of Online Friendship Network
NASA Astrophysics Data System (ADS)
Wu, Lian-Ren; Yan, Qiang
2012-10-01
In this paper, we study the dynamic evolution of the friendship network in an SNS (Social Networking Site). Our analysis suggests that whether an individual joins a community depends not only on the number of friends he or she has within the community, but also on the friendship network generated by those friends. In addition, we propose a model based on two processes: first, connecting nearest neighbors; second, a strength-driven attachment mechanism. The model reflects two facts: first, in social networks it is a universal phenomenon that two nodes become connected when they have at least one common neighbor; second, new nodes are more likely to connect to nodes with larger weights and more interactions, a phenomenon called strength-driven (also called weight-driven) attachment. From the simulation results, we find that the degree distribution P(k), the strength distribution P(s), and the degree-strength correlation are all consistent with empirical data.
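The second process, strength-driven attachment, amounts to roulette-wheel selection over node strengths. The sketch below is a generic implementation with toy strengths, not the paper's full model (which additionally closes triangles via the connecting-nearest-neighbors step):

```python
import random

# Roulette-wheel selection: an existing node is chosen with probability
# proportional to its strength. The strength values used below are toy numbers.

def pick_by_strength(strength, rng):
    """Choose a node with probability proportional to strength[node]."""
    nodes = list(strength)
    total = sum(strength[v] for v in nodes)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for v in nodes:
        acc += strength[v]
        if r <= acc:
            return v
    return nodes[-1]   # guard against floating-point rounding at the top end
```

In the full model, the arriving node would then also link to a neighbor of the chosen target, realizing the common-neighbor mechanism described above.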
Fault Diagnosis in HVAC Chillers
NASA Technical Reports Server (NTRS)
Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann
2005-01-01
Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often as the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems differ significantly from generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to its simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
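The pattern-recognition view of FDI can be illustrated with a deliberately simple stand-in for MPCA/MPLS/SVM: label training samples as normal or faulty, then assign new samples to the nearest class centroid. The feature values below are toy numbers, not ASHRAE simulator outputs:

```python
# Nearest-centroid classification as a minimal stand-in for the data-driven
# FDI classifiers named above; training features are toy values.

def centroid(samples):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(x[i] for x in samples) / n for i in range(len(samples[0]))]

def classify(sample, centroids):
    """Return the label of the nearest class centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))
```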
Shakal, A.; Haddadi, H.; Graizer, V.; Lin, K.; Huang, M.
2006-01-01
The 2004 Parkfield, California, earthquake was recorded by an extensive set of strong-motion instruments well positioned to record details of the motion in the near-fault region, where there had previously been very little recorded data. The strong-motion measurements obtained are highly varied, with significant variations occurring over only a few kilometers. The peak accelerations in the near-fault region range from 0.13g to over 1.8g (one of the highest accelerations recorded to date, exceeding the capacity of the recording instrument). The largest accelerations occurred near the northwest end of the inferred rupture zone. These motions are consistent with directivity for a fault rupturing from the hypocenter near Gold Hill toward the northwest. However, accelerations up to 0.8g were also observed in the opposite direction, at the south end of the Cholame Valley near Highway 41, consistent with bilateral rupture, with rupture southeast of the hypocenter. Several stations near and over the rupturing fault recorded relatively weak motions, consistent with seemingly paradoxical observations of low shaking damage near strike-slip faults. This event had more ground-motion observations within 10 km of the fault than many other earthquakes combined. At moderate distances, peak horizontal ground acceleration (PGA) values dropped off more rapidly with distance than standard relationships predict. At close-in distances, the wide variation of PGA suggests a distance-dependent sigma may be important to consider. The near-fault ground-motion variation is greater than that assumed in ShakeMap interpolations based on the existing set of observed data. A higher density of stations near faults may be the only means in the near future to reduce uncertainty in the interpolations. Outside of the near-fault zone the variance is closer to that assumed.
This set of data provides the first case where near-fault radiation has been observed at an adequate number of stations around the fault to allow detailed study of the fault-normal and fault-parallel motion and the near-field S-wave radiation. The fault-normal motions are significant, but they are not large at the central part of the fault, away from the ends. The fault-normal and fault-parallel motions drop off quite rapidly with distance from the fault. Analysis of directivity indicates increased values of peak velocity in the rupture direction. No such dependence is observed in the peak acceleration, except for stations close to the strike of the fault near and beyond the ends of the faulting.
Self-stabilizing byzantine-fault-tolerant clock synchronization system and method
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R. (Inventor)
2012-01-01
Systems and methods for rapid Byzantine-fault-tolerant self-stabilizing clock synchronization are provided. The systems and methods are based on a protocol comprising a state machine and a set of monitors that execute once every local oscillator tick. The protocol is independent of application-specific requirements. The faults are assumed to be arbitrary and/or malicious. All timing measures of variables are based on the node's local clock, and thus no central clock or externally generated pulse is used. Instances of the protocol are shown to tolerate bursts of transient failures and deterministically converge with a convergence time linear in the synchronization period, as predicted.
Optimally Distributed Kalman Filtering with Data-Driven Communication
Dormann, Katharina
2018-01-01
For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
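The data-driven transmission idea, sending only when a measurement is informative, can be sketched with a scalar Kalman filter that transmits (and incorporates) a measurement only when its innovation exceeds a gate. This is an illustrative send-on-innovation variant, not the paper's optimally distributed algorithm; all parameter values are assumptions:

```python
import numpy as np

def send_on_innovation(measurements, q=0.01, r=0.25, gate=1.0):
    """Scalar Kalman filter at a sensor node: a measurement is transmitted (and
    incorporated) only when its innovation exceeds `gate` standard deviations,
    so quiet periods cost no communication."""
    x, p = 0.0, 1.0            # state estimate and its variance
    sent = []
    for z in measurements:
        p += q                 # predict step for a random-walk state model
        s = p + r              # innovation variance
        nu = z - x             # innovation
        transmit = abs(nu) > gate * np.sqrt(s)
        if transmit:           # update only with informative measurements
            k = p / s
            x += k * nu
            p *= 1.0 - k
        sent.append(transmit)
    return x, float(np.mean(sent))

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.1, size=200))   # slowly drifting state
z = truth + rng.normal(0.0, 0.5, size=200)          # noisy sensor readings
estimate, tx_rate = send_on_innovation(z)           # tx_rate < 1: energy saved
```

The transmission rate drops well below full-rate communication while the node still tracks large state changes, which is the trade-off the article's bounds formalize.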
Liu, Chang; Wang, Guofeng; Xie, Qinglu; Zhang, Yanchao
2014-01-01
Effective fault classification of rolling element bearings provides an important basis for ensuring safe operation of rotating machinery. In this paper, a novel vibration sensor-based fault diagnosis method using an Ellipsoid-ARTMAP network (EAM) and a differential evolution (DE) algorithm is proposed. The original features are first extracted from vibration signals based on wavelet packet decomposition. Then, a minimum-redundancy maximum-relevance algorithm is introduced to select the most prominent features so as to decrease feature dimensions. Finally, a DE-based EAM (DE-EAM) classifier is constructed to realize the fault diagnosis. The major characteristic of EAM is that the sample distribution of each category is represented by a hyper-ellipsoid node and a smoothing operation algorithm. Therefore, it can depict the decision boundary of disperse samples accurately and effectively avoid over-fitting. To optimize the EAM network parameters, the DE algorithm is presented, and two objectives, classification accuracy and node number, are simultaneously introduced as the fitness functions. Meanwhile, an exponential criterion is proposed to realize final selection of the optimal parameters. To prove the effectiveness of the proposed method, the vibration signals of four types of rolling element bearings under different loads were collected. Moreover, to improve the robustness of the classifier evaluation, a two-fold cross validation scheme is adopted and the order of feature samples is randomly arranged ten times within each fold. The results show that the DE-EAM classifier can recognize the fault categories of the rolling element bearings reliably and accurately. PMID:24936949
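The DE portion of the method can be illustrated with a minimal DE/rand/1/bin optimizer minimizing a composite fitness that, like the paper's, trades an accuracy term against a node-count penalty. The fitness function below is a hypothetical stand-in, not the EAM network:

```python
import numpy as np

def de_optimize(fitness, bounds, pop=20, gens=60, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin: scaled difference-vector mutation, binomial
    crossover, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    f = np.array([fitness(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], size=3, replace=False)
            a, b, c = X[idx]
            trial = np.where(rng.random(len(bounds)) < CR,
                             np.clip(a + F * (b - c), lo, hi), X[i])
            ft = fitness(trial)
            if ft < f[i]:
                X[i], f[i] = trial, ft
    best = int(np.argmin(f))
    return X[best], f[best]

def fitness(params):
    """Hypothetical composite objective: an error term (stand-in for 1 - accuracy)
    plus a penalty that grows as the 'network' needs more nodes."""
    vigilance, smoothing = params
    error = (vigilance - 0.6) ** 2 + (smoothing - 0.3) ** 2
    node_penalty = 0.01 / (vigilance + 0.05)
    return error + node_penalty

best, best_f = de_optimize(fitness, [(0.0, 1.0), (0.0, 1.0)])
```

The greedy selection step is what makes DE robust for the non-smooth fitness landscapes that classifier-parameter tuning typically produces.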
Geology and structure of the North Boqueron Bay-Punta Montalva Fault System
NASA Astrophysics Data System (ADS)
Roig Silva, Coral Marie
The North Boqueron Bay-Punta Montalva Fault Zone is an active fault system that cuts across the Lajas Valley in southwestern Puerto Rico. The fault zone has been recognized and mapped based upon detailed analysis of geophysical data, satellite images, and field mapping. The fault zone consists of a series of Cretaceous bedrock faults that reactivated and deformed Miocene limestone and Quaternary alluvial fan sediments. The fault zone is seismically active (ML < 5.0) with numerous locally felt earthquakes. Focal mechanism solutions and structural field data suggest strain partitioning, with predominantly east-west left-lateral displacements and small normal faults oriented mostly toward the northeast. Evidence for recent displacement consists of fractures and small normal faults, oriented mostly northeast, found in intermittent streams that cut through the Quaternary alluvial fan deposits along the southern margin of the Lajas Valley. Areas of preferred erosion within the alluvial fan trend toward the west-northwest, parallel to the on-land projection of the North Boqueron Bay Fault. Beyond the faulted alluvial fan and southeast of the Lajas Valley, the North Boqueron Bay Fault joins with the Punta Montalva Fault. The Punta Montalva Fault is defined by a strong topographic WNW lineament along which stream channels are displaced left-laterally 200 meters and Miocene strata are steeply tilted to the south. Along the western end of the fault zone in northern Boqueron Bay, the older strata are tilted only 3° south and are covered by flat-lying Holocene sediments. Focal mechanism solutions along the western end suggest NW-SE shortening, which is inconsistent with left-lateral strain partitioning along the fault zone. The limited deformation of older strata and inconsistent strain partitioning may be explained by westerly propagation of the fault system from the southwest end.
The limited geomorphic structural expression along the North Boqueron Bay Fault segment could also be because most of the displacement along the fault zone is older than the Holocene and that the rate of displacement is low, such that the development of fault escarpments and deformation all along the fault zone has yet to occur.
NASA Astrophysics Data System (ADS)
Shi, Xuhua; Wang, Yu; Sieh, Kerry; Weldon, Ray; Feng, Lujia; Chan, Chung-Han; Liu-Zeng, Jing
2018-03-01
Characterizing the 700 km wide system of active faults on the Shan Plateau, southeast of the eastern Himalayan syntaxis, is critical to understanding the geodynamics and seismic hazard of the large region that straddles neighboring China, Myanmar, Thailand, Laos, and Vietnam. Here we evaluate the fault styles and slip rates over multi-timescales, reanalyze previously published short-term Global Positioning System (GPS) velocities, and evaluate slip-rate gradients to interpret the regional kinematics and geodynamics that drive the crustal motion. Relative to the Sunda plate, GPS velocities across the Shan Plateau define a broad arcuate tongue-like crustal motion with a progressively northwestward increase in sinistral shear over a distance of 700 km followed by a decrease over the final 100 km to the syntaxis. The cumulative GPS slip rate across the entire sinistral-slip fault system on the Shan Plateau is 12 mm/year. Our observations of the fault geometry, slip rates, and arcuate southwesterly directed tongue-like patterns of GPS velocities across the region suggest that the fault kinematics is characterized by a regional southwestward distributed shear across the Shan Plateau, compared to more block-like rotation and indentation north of the Red River fault. The fault geometry, kinematics, and regional GPS velocities are difficult to reconcile with regional bookshelf faulting between the Red River and Sagaing faults or localized lower crustal channel flows beneath this region. The crustal motion and fault kinematics can be driven by a combination of basal traction of a clockwise, southwestward asthenospheric flow around the eastern Himalayan syntaxis and gravitation or shear-driven indentation from north of the Shan Plateau.
Advanced information processing system: Authentication protocols for network communication
NASA Technical Reports Server (NTRS)
Harper, Richard E.; Adams, Stuart J.; Babikyan, Carol A.; Butler, Bryan P.; Clark, Anne L.; Lala, Jaynarayan H.
1994-01-01
In safety-critical I/O and intercomputer communication networks, reliable message transmission is an important concern. Difficulties of communication and fault identification in networks arise primarily because the sender of a transmission cannot be identified with certainty, an intermediate node can corrupt a message without certainty of detection, and a babbling node cannot be identified and silenced without lengthy diagnosis and reconfiguration. Authentication protocols use digital signature techniques to verify the authenticity of messages with high probability. Such protocols appear to provide an efficient solution to many of these problems. The objective of this program is to develop, demonstrate, and evaluate intercomputer communication architectures which employ authentication. As a context for the evaluation, the authentication-protocol-based communication concept was demonstrated under this program by hosting a real-time flight-critical guidance, navigation, and control algorithm on a distributed, heterogeneous, mixed-redundancy system of workstations and embedded fault-tolerant computers.
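The core mechanism, appending a cryptographic check so a receiver can verify both the sender and message integrity, can be sketched with a keyed MAC from the Python standard library. Note this is a symmetric-key stand-in for illustration; the protocols described above use digital-signature techniques:

```python
import hashlib
import hmac

TAG = 32  # bytes of SHA-256 output appended to each message

def sign(key: bytes, message: bytes) -> bytes:
    """Append a keyed MAC so a receiver can check, in one comparison, that the
    sender knew the key and that the message arrived unmodified."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, packet: bytes):
    """Return the message if authentic, or None if corrupted or forged."""
    message, tag = packet[:-TAG], packet[-TAG:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(tag, expected) else None

key = b"per-channel shared key (assumed)"
packet = sign(key, b"guidance update")
ok = verify(key, packet)                     # -> b"guidance update"

# A corrupting intermediate node flips one bit; verification fails.
tampered = bytes([packet[0] ^ 0x01]) + packet[1:]
bad = verify(key, tampered)                  # -> None
```

A babbling or corrupting node can therefore be detected by any receiver holding the key, which is the property the authentication protocols exploit to avoid lengthy diagnosis.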
An Estimation Method of System Voltage Sag Profile using Recorded Sag Data
NASA Astrophysics Data System (ADS)
Tanaka, Kazuyuki; Sakashita, Tadashi
The influence of voltage sags on electric equipment has become a big issue because of the wider use of voltage-sensitive devices. In order to reduce the influence of voltage sags appearing on the customer side, it is necessary to know the level of receiving-voltage drop due to lightning faults on transmission lines. However, it is hard to measure sag levels directly at every load node. In this report, a new method for efficiently estimating the system voltage sag profile is proposed based on symmetrical coordinates. In the proposed method, the limited recorded sag data captured at each substation in the power system is used as the estimation condition. Since the number of recorded nodes is generally far smaller than the number of transmission routes, a fast solution method is developed that calculates only the recorded nodes' faulted voltages by applying the reciprocity theorem to the Y matrix. Furthermore, an effective screening process is incorporated, in which a limited set of candidate faulted transmission lines is chosen. Demonstrative results are presented using the IEEJ East10 standard system and an actual 1700-bus system. The results show that the estimation accuracy is acceptable at a reduced computational cost.
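The underlying relation between a fault location and the sag seen at other buses can be illustrated with the classical bus-impedance calculation: for a bolted three-phase fault at bus f under a flat pre-fault profile, V_k = |1 - Z_kf/Z_ff|. This sketch does not implement the paper's reciprocity-based fast solver or its screening process, and the network values are arbitrary:

```python
import numpy as np

def add_branch(Y, i, j, x):
    """Stamp a series branch of reactance x (per-unit) between buses i and j."""
    y = 1.0 / (1j * x)
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y

def sag_profile(Y, fault_bus):
    """Per-unit during-fault voltage magnitude at every bus for a bolted
    three-phase fault, assuming a flat 1.0 pu pre-fault profile."""
    Z = np.linalg.inv(Y)                     # bus impedance matrix
    return np.abs(1.0 - Z[:, fault_bus] / Z[fault_bus, fault_bus])

# Toy radial system: source (x=0.05) at bus 0, lines 0-1 (x=0.1) and 1-2 (x=0.2).
Y = np.zeros((3, 3), dtype=complex)
add_branch(Y, 0, 1, 0.1)
add_branch(Y, 1, 2, 0.2)
Y[0, 0] += 1.0 / (1j * 0.05)     # source impedance to ground makes Y invertible

sags = sag_profile(Y, fault_bus=2)   # deepest sag at the faulted bus itself
```

In the radial case the ratios reduce to shared-path impedances, so the sag decays with electrical distance from the fault, which is exactly the profile the estimation method reconstructs from a few recorded nodes.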
NASA Astrophysics Data System (ADS)
Youn, Joo-Sang; Seok, Seung-Joon; Kang, Chul-Hee
This paper presents a new QoS model for end-to-end service provisioning in multi-hop wireless networks. In legacy IEEE 802.11e based multi-hop wireless networks, the fixed assignment of service classes according to a flow's priority at every node causes a priority inversion problem when performing end-to-end service differentiation. Thus, this paper proposes a new QoS provisioning model called Dynamic Hop Service Differentiation (DHSD) to alleviate the problem and support effective service differentiation between end nodes. Many previous works on QoS models based on 802.11e service differentiation focus on packet scheduling over several service queues with different service rates and priorities. Our model, however, concentrates on a dynamic class selection scheme, called Per Hop Class Assignment (PHCA), in each node's MAC layer, which selects a proper service class for each packet, in accordance with queue states and service requirements, at every node along the packet's end-to-end route. The proposed QoS solution is evaluated using the OPNET simulator. The simulation results show that the proposed model outperforms both best-effort and 802.11e based strict-priority service models in mobile ad hoc environments.
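A per-hop class selection of the PHCA flavor might look like the toy function below, which promotes a packet to a higher-priority class when its preferred class's queue is congested at the current node. The 0.8 occupancy threshold and the slack-to-class mapping are assumptions for illustration, not the paper's algorithm:

```python
def assign_class(queues, capacities, slack, n_classes=4):
    """Choose a per-hop service class for a packet (0 = highest priority).
    A packet with ample end-to-end delay slack asks for a low class; when that
    class's queue at this node is congested, the packet is promoted one class
    at a time so it can still meet its end-to-end budget."""
    wanted = min(n_classes - 1, int(slack * n_classes))
    occupancy = [q / c for q, c in zip(queues, capacities)]
    while wanted > 0 and occupancy[wanted] > 0.8:   # 0.8: assumed congestion threshold
        wanted -= 1                                  # promote at this hop only
    return wanted

# A packet halfway through its delay budget arriving where class 2 is congested:
chosen = assign_class(queues=[1, 2, 9, 3], capacities=[10, 10, 10, 10], slack=0.5)
```

Because the decision is re-made at every hop, a packet delayed early in its route can recover later, which is the mechanism that avoids the fixed-assignment priority inversion.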
NASA Astrophysics Data System (ADS)
Harkins, Nathan W.
A mechanical description of the interplay between ongoing crustal deformation and topographic evolution within the Tibetan Plateau remains outstanding, and thus our ability to describe the mechanisms responsible for the creation of this and other continental plateaus is limited. In this work, we employ a multidisciplinary approach to investigate the Quaternary record of active tectonism and coeval topographic evolution in the northeastern Tibetan Plateau. Fluvial channel topographic data paired with geochronologically calibrated measures of erosion rate reveal a headward-migrating wave of dramatically accelerated incision rates in the headwaters of the Yellow River, which drains a large portion of northeastern Tibet. This transient increase in incision is likely driven by downstream base-level changes along the plateau margin and is superimposed onto a broad region of higher erosion rates confined to the plateau itself, within the Anyemaqen Shan (mountains). The Kunlun fault, one of the major active strike-slip faults of Tibet, trends through the Anyemaqen Shan. Using a careful approach towards quantifying millennial slip rates along this fault zone based on the age of offset landforms, we constrain the Pleistocene kinematics of the eastern portion of the Kunlun fault and link this deformation to tectonically driven erosion in the Anyemaqen Shan. Consideration of the age and morphology of fluvial terraces offset by the fault both highlights uncertainties associated with slip-rate determinations and allows more confident quantification of the allowable range of slip rates at sites that take advantage of these features. Several new slip-rate determinations from this study at select locations corroborate a small number of previous determinations to identify an eastward-decreasing slip-rate gradient and termination of the Kunlun fault within the Anyemaqen Shan.
Existing geodetic data reveal a similar pattern of eastward-decreasing distributed shear across the fault zone. The spatial coincidence of tectonically driven erosion in the Anyemaqen Shan with the slip-rate gradient and termination of the Kunlun fault implies that the crust of the northeastern plateau has the ability to accumulate regionally distributed permanent strain. Therefore, traditional 'rigid-body' rotation-type descriptions of Tibetan Plateau kinematics fail to describe deformation on the northeastern plateau.
Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks
Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh
2017-01-01
In wireless sensor networks, sensor nodes collect plenty of data in each time period. If all of these data are transmitted to a Fusion Center (FC), the power of the sensor nodes runs out rapidly. On the other hand, the data also need filtering to remove noise. Therefore, an efficient fusion estimation model that can save the energy of the sensor nodes while maintaining high accuracy is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters of the quantization method are discussed, and we determine them by an optimization method with some prior knowledge. In addition, calculation methods for important parameters are investigated, which make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed. Meanwhile, compared with other related models, MHEEFE shows better performance in accuracy, energy efficiency, and fault tolerance. PMID:29280950
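The fault-tolerant weighting intuition, down-weighting nodes whose estimates are unreliable, can be illustrated with plain inverse-variance fusion. This is a simple stand-in for illustration, not the paper's iteration-based mixed H2/H∞ weight calculation:

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion: w_i is proportional to 1/sigma_i^2, so
    a degraded or faulty node (large variance) is automatically down-weighted."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    w = inv / inv.sum()                  # normalized weights
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / inv.sum()          # never worse than the best single node
    return fused, fused_var, w

# Three nodes report estimates; the third is unreliable (e.g. a failing sensor).
est, var, w = fuse([1.0, 1.2, 5.0], [0.1, 0.2, 10.0])
```

The faulty node's wild value of 5.0 barely moves the fused estimate, and the fused variance is smaller than any individual node's, which is the behavior an iterative weighting scheme preserves under harder assumptions.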
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Amjad Majid; Albert, Don; Andersson, Par
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Gil, Joon-Min
2015-03-01
The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has led to a great deal of attention being focused on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization, which has helped facilitate remote manufacturing in association with high network performance, since SDN is designed to control network paths and traffic flows, guaranteeing improved quality of service by obtaining network requests from end-applications on demand through the separated SDN controller or control plane. However, current SDN approaches are generally focused on the control and automation of the networks, which indicates that there is a lack of management-plane development designed for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantages of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault tolerance of SDN, in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for the SDN failover processes, based on four principal SDN breakdown scenarios derived from failures of the controller, SDN nodes, and connected links. The abovementioned SDN failures significantly reduce network reachability to remote devices (e.g., 3D printers, super-high-definition cameras, etc.) and the reliability of the relevant control processes. Our performance consideration and analysis results show that the proposed scheme can reduce the operations and management overheads of SDN, which leads to enhanced responsiveness and reliability of SDN for remote 3D printing and control processes.
NASA Astrophysics Data System (ADS)
Prévost, Jean H.; Sukumar, N.
2016-01-01
Faults are geological entities with thicknesses several orders of magnitude smaller than the grid blocks typically used to discretize reservoir and/or over-under-burden geological formations. Introducing faults in a complex reservoir and/or geomechanical mesh therefore poses significant meshing difficulties. In this paper, we consider the strong-coupling of solid displacement and fluid pressure in a three-dimensional poro-mechanical (reservoir-geomechanical) model. We introduce faults in the mesh without meshing them explicitly, by using the extended finite element method (X-FEM) in which the nodes whose basis function support intersects the fault are enriched within the framework of partition of unity. For the geomechanics, the fault is treated as an internal displacement discontinuity that allows slipping to occur using a Mohr-Coulomb type criterion. For the reservoir, the fault is either an internal fluid flow conduit that allows fluid flow in the fault as well as to enter/leave the fault or is a barrier to flow (sealing fault). For internal fluid flow conduits, the continuous fluid pressure approximation admits a discontinuity in its normal derivative across the fault, whereas for an impermeable fault, the pressure approximation is discontinuous across the fault. Equal-order displacement and pressure approximations are used. Two- and three-dimensional benchmark computations are presented to verify the accuracy of the approach, and simulations are presented that reveal the influence of the rate of loading on the activation of faults.
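The enrichment described above follows the standard X-FEM form, in which a Heaviside function added to the basis lets the displacement jump across the fault without meshing it explicitly:

```latex
u^h(\mathbf{x}) \;=\; \sum_{i \in I} N_i(\mathbf{x})\,\mathbf{u}_i
\;+\; \sum_{j \in J} N_j(\mathbf{x})\, H(\mathbf{x})\,\mathbf{a}_j
```

Here \(N_i\) are the usual finite-element shape functions over all nodes \(I\), \(J \subset I\) is the set of enriched nodes whose basis-function support intersects the fault, \(H\) is the generalized Heaviside (sign) function across the fault surface, and \(\mathbf{a}_j\) are the enriched degrees of freedom; this is the textbook X-FEM form consistent with the displacement-discontinuity treatment the paper describes.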
Fault tolerant architectures for integrated aircraft electronics systems, task 2
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.
1984-01-01
The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made, is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand driven data-flow architecture, which appears to have possible application in fault tolerant systems, is described, and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.
Re-evaluation of heat flow data near Parkfield, CA: Evidence for a weak San Andreas Fault
Fulton, P.M.; Saffer, D.M.; Harris, Reid N.; Bekins, B.A.
2004-01-01
Improved interpretations of the strength of the San Andreas Fault near Parkfield, CA based on thermal data require quantification of processes causing significant scatter and uncertainty in existing heat flow data. These effects include topographic refraction, heat advection by topographically-driven groundwater flow, and uncertainty in thermal conductivity. Here, we re-evaluate the heat flow data in this area by correcting for full 3-D terrain effects. We then investigate the potential role of groundwater flow in redistributing fault-generated heat, using numerical models of coupled heat and fluid flow for a wide range of hydrologic scenarios. We find that a large degree of the scatter in the data can be accounted for by 3-D terrain effects, and that for plausible groundwater flow scenarios frictional heat generated along a strong fault is unlikely to be redistributed by topographically-driven groundwater flow in a manner consistent with the 3-D corrected data. Copyright 2004 by the American Geophysical Union.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
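For contrast, a conventional grounded-reference nodal assembly and solve can be sketched as below; the generalized formulation in the paper removes exactly the common-reference assumption this simple sketch relies on. The feeder values are arbitrary per-unit numbers chosen for illustration:

```python
import numpy as np

def admittance_matrix(n_nodes, branches):
    """Assemble a nodal admittance matrix from (node_i, node_j, admittance)
    branch records, as in a standard graph-based network model."""
    Y = np.zeros((n_nodes, n_nodes))
    for i, j, y in branches:
        Y[i, i] += y
        Y[j, j] += y
        Y[i, j] -= y
        Y[j, i] -= y
    return Y

# Toy single-phase feeder, all per-unit and resistive for simplicity.
Y = admittance_matrix(3, [(0, 1, 10.0), (1, 2, 5.0)])
ys = 100.0
Y[0, 0] += ys                            # stiff source admittance to ground at node 0
I = np.array([ys * 1.0, -1.0, -0.5])     # Norton source current and two load currents
V = np.linalg.solve(Y, I)                # nodal voltages: [0.985, 0.835, 0.735]
```

The stamped matrix is exactly the graph Laplacian of the branch admittances plus shunt terms, which is why linear-graph-theoretic assembly generalizes cleanly to arbitrary single-phase configurations.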
Cenozoic Spatio-temporal Variations of Tian Shan Deformation
NASA Astrophysics Data System (ADS)
Sobel, E. R.; Bande, A.; Chen, J.; Thiede, R. C.; Macaulay, E. A.; Mikolaichuk, A.; Gilder, S. A.; Kley, J.
2016-12-01
The Cenozoic deformation of the Tian Shan is driven by north-vergent compression caused by the India-Asia collision, the indentation of the Pamir, and/or right-lateral transpression driven by the indentation of Arabia into Eurasia. The Talas-Fergana fault (TFF) region corresponds to the widest portion of high topography of the Tianshan Mountains. The width of the range tapers both east and west, albeit the geometry is more complex to the west. We synthesize published AFT, apatite (U-Th)/He, magnetostratigraphic and paleomagnetically-determined rotation data combined with our own work from the Tianshan domain to map spatial patterns of exhumation and deformation. Prior to middle Cenozoic deformation, the area of the present range was characterized by low relief; adjacent sedimentary basins record very low accumulation rates or hiatuses. Localized Eocene deformation events have been proposed but do not appear to reflect significant shortening. The first large pulse of deformation commenced in the Late Oligocene or Early Miocene, represented by isolated range uplifts, often related to reactivation of older structures, and pulses of clastic sedimentation. Perhaps the most significant deformation at this time occurred north of the Pamir along the NW-SE trending dextral TFF, in the Chatkal ranges at its NW end, and the Kokshaal and At-Bashi ranges at the SE end of the fault. The Fergana basin, west of the TFF, underwent significant counter-clockwise rotation that was accommodated by these structures. Relatively rapid slip along the TFF persisted from ca. 25 Ma until at least 13.5 Ma. A second, larger deformation episode commenced in the Middle-Late Miocene along the length of the Tian Shan. Similar-aged deformation is reported from the Tadjik depression and within the Pamir. Important questions to address include whether the drivers for the two episodes were the same and what were the relative roles of the Tarim block and the Pamir indenter in producing the deformation.
Is there a "blind" strike-slip fault at the southern end of the San Jacinto Fault system?
NASA Astrophysics Data System (ADS)
Tymofyeyeva, E.; Fialko, Y. A.
2015-12-01
We have studied the interseismic deformation at the southern end of the San Jacinto fault system using Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data. To complement the continuous GPS measurements from the PBO network, we have conducted campaign-style GPS surveys of 19 benchmarks along Highway 78 in the years 2012, 2013, and 2014. We processed the campaign GPS data using GAMIT to obtain horizontal velocities. The data show high velocity gradients East of the surface trace of the Coyote Creek Fault. We also processed InSAR data from the ascending and descending tracks of the ENVISAT mission between the years 2003 and 2010. The InSAR data were corrected for atmospheric artifacts using an iterative common point stacking method. We combined average velocities from different look angles to isolate the fault-parallel velocity field, and used fault-parallel velocities to compute strain rate. We filtered the data over a range of wavelengths prior to numerical differentiation, to reduce the effects of noise and to investigate both shallow and deep sources of deformation. At spatial wavelengths less than 2 km the strain rate data show prominent anomalies along the San Andreas and Superstition Hills faults, where shallow creep has been documented by previous studies. Similar anomalies are also observed along parts of the Coyote Creek Fault, San Felipe Fault, and an unmapped southern continuation of the Clark strand of the San Jacinto Fault. At wavelengths on the order of 20 km, we observe elevated strain rates concentrated east of the Coyote Creek Fault. The long-wavelength strain anomaly east of the Coyote Creek Fault, and the localized shallow creep observed in the short-wavelength strain rate data over the same area, suggest that there may be a "blind" segment of the Clark Fault that accommodates a significant portion of the deformation on the southern end of the San Jacinto Fault.
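Computing strain rate as the filtered spatial derivative of fault-parallel velocity can be sketched on a synthetic interseismic (arctangent) profile; the slip rate, locking depth, and boxcar filter below are illustrative assumptions, not values from the study:

```python
import numpy as np

def strain_rate(x_km, v_mm_yr, smooth_km=0.0):
    """Spatial derivative of fault-parallel velocity, after an optional boxcar
    low-pass filter (edge samples are padding-affected and should be ignored)."""
    v = np.asarray(v_mm_yr, dtype=float)
    if smooth_km > 0:
        dx = x_km[1] - x_km[0]
        n = max(1, int(round(smooth_km / dx)))
        v = np.convolve(v, np.ones(n) / n, mode="same")
    return np.gradient(v, x_km)

# Synthetic interseismic screw-dislocation profile: v = (s/pi) * arctan(x/D).
x = np.linspace(-50.0, 50.0, 1001)          # km across the fault
v = (12.0 / np.pi) * np.arctan(x / 10.0)    # s = 12 mm/yr, D = 10 km (assumed)
rate = strain_rate(x, v, smooth_km=2.0)     # peaks over the fault trace at x = 0
```

Varying the filter length is what separates scales: a short window keeps the narrow peaks of shallow creep, while a long window leaves only the broad signature of deep slip, as in the short- versus long-wavelength analysis above.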
NASA Astrophysics Data System (ADS)
Eisses, A.; Kell, A. M.; Kent, G.; Driscoll, N. W.; Karlin, R. E.; Baskin, R. L.; Louie, J. N.; Smith, K. D.; Pullammanappallil, S.
2011-12-01
Preliminary slip rates measured across the East Pyramid Lake fault, or Lake Range fault, help provide new estimates of extension across the Pyramid Lake basin. Multiple stratigraphic horizons spanning 48 ka were tracked throughout the lake, with layer offsets measured across all significant faults in the basin. A chronostratigraphic framework acquired from four sediment cores allows slip rates of the Lake Range and other faults to be calculated accurately. This region of the northern Walker Lane, strategically placed between the right-lateral strike-slip faults of Honey and Eagle Lakes to the north and the normal-fault-bounded basins to the southwest (e.g., Tahoe, Carson), is critical in understanding the underlying structural complexity that is necessary not only for geothermal exploration but also for earthquake hazard assessment, given the proximity of the Reno-Sparks metropolitan area. In addition, our seismic CHIRP imaging with submeter resolution allows the construction of the first fault map of Pyramid Lake. The Lake Range fault can be clearly traced west of Anaho Island, extending north along the east end of the lake in numerous CHIRP lines. Initial drafts of the fault map reveal active transtension through a series of numerous small, northwest-striking, oblique-slip faults in the north end of the lake. A previously field-mapped northwest-striking fault near Sutcliffe can be extended into the west end of Pyramid Lake. This fault map, along with the calculated slip rates of the Lake Range and potentially multiple other faults, gives a clearer picture of the geothermal potential, tectonic regime, and earthquake hazards in the Pyramid Lake basin and the northern Walker Lane. These new results have also been merged with seismicity maps, along with focal mechanisms for the larger events, to begin to extend our fault map in depth.
The optimal community detection of software based on complex networks
NASA Astrophysics Data System (ADS)
Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong
2016-02-01
The community structure is important for software in terms of understanding design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among software functions. First, by analyzing the information of multiple execution traces of one software system, we construct a Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by the measured results. Third, we select the top K (K = 1, 2, …) nodes as the cores of the primal communities (each community has exactly one core node). By comparing the dependency relationships between each node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity with different initial K to obtain the optimal division. With experiments, the method OPSN is verified to be efficient in detecting the optimal community structure in various software systems.
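The modularity used for the final selection is the standard Newman measure, sketched here on a toy graph (numpy only; this is not the OPSN implementation):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) [c_i == c_j]."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                        # node degrees
    two_m = k.sum()                          # 2m: twice the edge count
    same = np.equal.outer(labels, labels)    # same-community indicator
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by one edge: the natural split beats an arbitrary one.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
q_good = modularity(A, [0, 0, 0, 1, 1, 1])   # = 6/7 - 1/2, about 0.357
q_bad = modularity(A, [0, 1, 0, 1, 0, 1])
```

Scanning K and keeping the partition with the highest Q, as OPSN does, amounts to maximizing this quantity over the candidate divisions.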
Low latency, high bandwidth data communications between compute nodes in a parallel computer
Blocksome, Michael A
2014-04-01
Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, an RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
Low latency, high bandwidth data communications between compute nodes in a parallel computer
Blocksome, Michael A
2014-04-22
Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, an RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
Low latency, high bandwidth data communications between compute nodes in a parallel computer
Blocksome, Michael A
2013-07-02
Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, an RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
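The two-phase transfer claimed in these records can be illustrated with a toy simulation. This is a sketch of the idea only, not of any DMA API: the chunk size, the acknowledgement timing, and the function name are all hypothetical.

```python
def two_phase_transfer(buffer, chunk, ack_after_chunks):
    """Simulate the overlap described in the claim: the origin DMA streams
    chunk-sized memory-FIFO packets from the front of the buffer until the
    RTS acknowledgement arrives (modeled here as arriving after a fixed
    number of packets), then ships the untouched remainder from the other
    end in one direct-put operation."""
    fifo_part = []
    sent = 0
    for _ in range(ack_after_chunks):        # packets sent before the ACK
        if sent >= len(buffer):
            break
        fifo_part.append(buffer[sent:sent + chunk])
        sent += chunk
    direct_put = buffer[sent:]               # remainder, one RDMA-style write
    return b"".join(fifo_part), direct_put

head, tail = two_phase_transfer(b"0123456789abcdef", chunk=4, ack_after_chunks=2)
# the target reassembles head + tail; no origin core is needed for the tail
```

The point of the split is latency hiding: useful data moves while the RTS handshake is still in flight, and the bulk remainder goes as a single zero-copy write.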
A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity
NASA Astrophysics Data System (ADS)
Healy, D.; Rizzo, R. E.
2015-12-01
Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Deformation in the Earth, by contrast, is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from that of conjugate fault networks, and this presents challenges to our understanding of faulting as well as an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns than in rocks with conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, we assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.
Method and system for controlling a permanent magnet machine during fault conditions
Krefta, Ronald John; Walters, James E.; Gunawan, Fani S.
2004-05-25
A method and system for controlling a permanent magnet machine driven by an inverter are provided. The method allows for monitoring a signal indicative of a fault condition. The method further allows for generating, during the fault condition, a respective signal configured to maintain a field-weakening current even though electrical power from an energy source is absent during said fault condition. The level of the maintained field-weakening current enables the machine to operate in a safe mode so that the inverter is protected from excess voltage.
Crustal velocity field near the big bend of California's San Andreas fault
Snay, R.A.; Cline, M.W.; Philipp, C.R.; Jackson, D.D.; Feng, Y.; Shen, Z.-K.; Lisowski, M.
1996-01-01
We use geodetic data spanning the 1920-1992 interval to estimate the horizontal velocity field near the big bend segment of California's San Andreas fault (SAF). More specifically, we estimate a horizontal velocity vector for each node of a two-dimensional grid that has a 15-min-by-15-min mesh and that extends between latitudes 34.0°N and 36.0°N and longitudes 117.5°W and 120.5°W. For this estimation process, we apply bilinear interpolation to transfer crustal deformation information from geodetic sites to the grid nodes. The data include over a half century of triangulation measurements, over two decades of repeated electronic distance measurements, a decade of repeated very long baseline interferometry measurements, and several years of Global Positioning System measurements. Magnitudes for our estimated velocity vectors have formal standard errors ranging from 0.7 to 6.8 mm/yr. Our derived velocity field shows that (1) relative motion associated with the SAF exceeds 30 mm/yr and is distributed on the Earth's surface across a band (> 100 km wide) that is roughly centered on this fault; (2) when velocities are expressed relative to a fixed North America plate, the motion within our primary study region has a mean orientation of N44°W ± 2° and the surface trace of the SAF is congruent in shape to nearby contours of constant speed yet this trace is oriented between 5° and 10° counterclockwise relative to these contours; and (3) large strain rates (shear rates > 150 nrad/yr and/or areal dilatation rates < -150 nstr/yr) exist near the Garlock fault, near the White Wolf fault, and in the Ventura basin.
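The bilinear transfer step is standard and easy to make concrete. A minimal sketch follows; the node coordinates and corner velocity values are illustrative, not taken from the paper.

```python
def bilinear(x, y, x0, x1, y0, y1, v00, v10, v01, v11):
    """Bilinear interpolation inside the cell [x0, x1] x [y0, y1];
    vij is the value known at corner (xi, yj)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return (v00 * (1 - tx) * (1 - ty) + v10 * tx * (1 - ty)
            + v01 * (1 - tx) * ty + v11 * tx * ty)

# east velocity component (mm/yr) known at the four corners of a
# 15-min-by-15-min cell, interpolated to a grid node at the cell centre
v_node = bilinear(-119.875, 34.125,             # node longitude, latitude
                  -120.0, -119.75, 34.0, 34.25, # cell bounds
                  -20.0, -24.0, -18.0, -22.0)   # illustrative corner values
```

At the cell centre the formula reduces to the average of the four corner values, and at a corner it returns that corner's value exactly.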
A fuzzy Petri-net-based mode identification algorithm for fault diagnosis of complex systems
NASA Astrophysics Data System (ADS)
Propes, Nicholas C.; Vachtsevanos, George
2003-08-01
Complex dynamical systems such as aircraft, manufacturing systems, chillers, motor vehicles, submarines, etc. exhibit continuous and event-driven dynamics. These systems undergo several discrete operating modes from startup to shutdown. For example, a certain shipboard system may be operating at half load or full load or may be at start-up or shutdown. Of particular interest are extreme or "shock" operating conditions, which tend to severely impact fault diagnosis or the progression of a fault leading to a failure. Fault conditions are strongly dependent on the operating mode. Therefore, it is essential that in any diagnostic/prognostic architecture, the operating mode be identified as accurately as possible so that such functions as feature extraction, diagnostics, prognostics, etc. can be correlated with the predominant operating conditions. This paper introduces a mode identification methodology that incorporates both time- and event-driven information about the process. A fuzzy Petri net is used to represent the possible successive mode transitions and to detect events from processed sensor signals signifying a mode change. The operating mode is initialized and verified by analysis of the time-driven dynamics through a fuzzy logic classifier. An evidence combiner module is used to combine the results from both the fuzzy Petri net and the fuzzy logic classifier to determine the mode. Unlike most event-driven mode identifiers, this architecture will provide automatic mode initialization through the fuzzy logic classifier and robustness through the combining of evidence of the two algorithms. The mode identification methodology is applied to an AC Plant typically found as a component of a shipboard system.
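The evidence-combiner step can be sketched as a fusion of the two mode-belief vectors. The weighted-average rule below is an assumption for illustration: the paper describes an evidence combiner module but its exact rule is not given here, and the mode names are hypothetical.

```python
def combine_evidence(petri_belief, classifier_belief, weight=0.5):
    """Fuse mode beliefs from the fuzzy Petri net and the fuzzy logic
    classifier (dicts mapping mode -> membership in [0, 1]) and return
    the winning mode plus the fused vector."""
    modes = petri_belief.keys() | classifier_belief.keys()
    fused = {m: weight * petri_belief.get(m, 0.0)
                + (1 - weight) * classifier_belief.get(m, 0.0)
             for m in modes}
    return max(fused, key=fused.get), fused

# Petri net (event-driven) and classifier (time-driven) disagree slightly;
# combining evidence still yields a single mode decision
mode, fused = combine_evidence({"startup": 0.8, "half_load": 0.3},
                               {"startup": 0.6, "half_load": 0.7})
```

Combining the two sources is what gives the architecture both automatic initialization (from the classifier) and robustness (from agreement between the algorithms).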
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
Ho, Kevin I-J; Leung, Chi-Sing; Sum, John
2010-06-01
In the last two decades, many online fault/noise injection algorithms have been developed to attain fault-tolerant neural networks. However, little theoretical work on their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that the convergence of these six online algorithms is almost sure. Moreover, their true objective functions being minimized are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error. Thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
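One of the six procedures is easy to write down. Below is a sketch of online RBF learning with additive weight noise injected at each update; the centers, widths, learning rate, and toy data are all illustrative and the sketch does not reproduce the paper's analysis.

```python
import math
import random

def rbf_outputs(x, centers, width):
    """Gaussian basis function activations for a scalar input."""
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def train_with_weight_noise(data, centers, width=0.5, lr=0.1,
                            sigma=0.01, epochs=200, seed=0):
    """Online gradient descent for an RBF network in which additive
    Gaussian noise is injected into the weights when computing each
    update (procedure 2 in the abstract's list)."""
    rng = random.Random(seed)
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in data:
            phi = rbf_outputs(x, centers, width)
            noisy_w = [wi + rng.gauss(0.0, sigma) for wi in w]
            err = sum(nw * p for nw, p in zip(noisy_w, phi)) - y
            w = [wi - lr * err * p for wi, p in zip(w, phi)]
    return w

data = [(0.0, 0.0), (0.5, 0.25), (1.0, 1.0)]   # toy target y = x**2
w = train_with_weight_noise(data, centers=[0.0, 0.5, 1.0])
```

Consistent with the paper's result for this variant, the weight-noise version still drives the plain mean square training error down; the injected noise does not add a regularization term.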
NASA Astrophysics Data System (ADS)
Kaduri, Maor; Gratier, Jean-Pierre; Renard, François; Çakir, Ziyadin; Lasserre, Cécile
2017-04-01
In the last decade, aseismic creep has been recognized as one of the key processes along tectonic plate boundaries. It contributes to the energy budget during the seismic cycle, delaying or triggering the occurrence of large earthquakes. Several major continental active faults show spatial alternation of creeping and locked segments. A great challenge is to understand which parameters control the transition from seismic to aseismic deformation in fault zones, such as the lithology, the degree of deformation from damage rocks to gouge, and the stress-driven fault architecture transformations at all scales. The present study focuses on the North Anatolian Fault (Turkey) and characterizes the mechanisms responsible for the partition between seismic and aseismic deformation. Strain values were calculated using various methods (e.g., Fry and Rf/φ) from microstructural measurements in gouge and damage-zone samples collected from more than 30 outcrops along the fault. Maps of mineral composition were reconstructed from microprobe measurements of gouge and damage-rock microstructure in order to calculate the relative mass changes due to stress-driven processes during deformation. Strain values were extracted, in addition to the geometrical properties of grain orientation and size distribution. Our data cover subsamples in the damage zones that were protected from deformation and preserve the host-rock microstructure and composition, and subsamples that were highly deformed and recorded both seismic and aseismic deformation. 
The increase of strain values is linked to the evolution of grain orientation from random to sheared sub-parallel and may be related to various parameters: (1) relative mass transfer increases with increasing strain, indicating how stress-driven mass transfer processes control the evolution of aseismic creep with time; (2) measured strain is strongly related to the initial lithology and to the evolution of mineral composition: monomineralic rocks are stronger (less deformed) than polymineralic ones; (3) strain measurements allow us to evaluate the cumulative geological displacement accommodated by aseismic creep and the relative ratio between seismic and aseismic displacement for each section of an active fault. These relations allow us to quantify more accurately the aseismic creep processes and their evolution with time along the North Anatolian Fault, which are controlled by a superposition of two kinds of mechanisms: (1) stress-driven mass transfer (pressure solution and metamorphism), which controls local and regional mass transfer and the associated rheology evolution, and (2) grain-boundary sliding along weak mineral interfaces (initially weak minerals, or minerals transformed by deformation-related reactions).
Roig‐Silva, Coral Marie; Asencio, Eugenio; Joyce, James
2013-01-01
The North Boquerón Bay–Punta Montalva fault zone has been mapped crossing the Lajas Valley in southwest Puerto Rico. Identification of the fault was based upon detailed analysis of geophysical data, satellite images, and field mapping. The fault zone consists of a series of Cretaceous bedrock faults that reactivated and deformed Miocene limestone and Quaternary alluvial fan sediments. The fault zone is seismically active (local magnitude greater than 5.0) with numerous locally felt earthquakes. Focal mechanism solutions suggest strain partitioning with predominantly east–west left-lateral displacements with small normal faults striking mostly toward the northeast. Northeast-trending fractures and normal faults can be found in intermittent streams that cut through the Quaternary alluvial fan deposits along the southern margin of the Lajas Valley, an east–west-trending 30-km-long fault-controlled depression. Areas of preferred erosion within the alluvial fan trend toward the west-northwest parallel to the onland projection of the North Boquerón Bay fault. The North Boquerón Bay fault aligns with the Punta Montalva fault southeast of the Lajas Valley. Both faults show strong southward tilting of Miocene strata. On the western end, the Northern Boquerón Bay fault is covered with flat-lying Holocene sediments, whereas at the southern end the Punta Montalva fault shows left-lateral displacement of stream drainage on the order of a few hundred meters.
Nearly frictionless faulting by unclamping in long-term interaction models
Parsons, T.
2002-01-01
In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.
Wireless Avionics Packet to Support Fault Tolerance for Flight Applications
NASA Technical Reports Server (NTRS)
Block, Gary L.; Whitaker, William D.; Dillon, James W.; Lux, James P.; Ahmad, Mohammad
2009-01-01
In this protocol and packet format, data traffic is monitored by all network interfaces to determine the health of transmitters and subsystems. When failures are detected, the network interface applies its recovery policies to provide continued service despite the presence of faults. The protocol, packet format, and interface are independent of the data link technology used. The current demonstration system supports both commercial off-the-shelf wireless connections and wired Ethernet connections. Other technologies such as 1553 or serial data links can be used for the network backbone. The Wireless Avionics packet is divided into three parts: a header, a data payload, and a checksum. The header has the following components: magic number, version, quality of service, time to live, sending transceiver, function code, payload length, source Application Data Interface (ADI) address, destination ADI address, sending node address, target node address, and a sequence number. The magic number is used to identify WAV packets and allows the packet format to be updated in the future. The quality of service field allows routing decisions to be made based on this value and can be used to route critical management data over a dedicated channel. The time to live value is used to discard misrouted packets, while the sending transceiver field is updated at each hop; this information is used to monitor the health of each transceiver in the network. The function code identifies the packet type. Besides regular data packets, the system supports diagnostic packets for fault detection and isolation. The payload length specifies the number of data bytes in the payload, which supports variable-length packets in the network. The source ADI is the address of the originating interface. 
This can be used by the destination application to identify the originating source of the packet where the address consists of a subnet, subsystem class within the subnet, a subsystem unit, and the local ADI number. The destination ADI is used to route the packet to its ultimate destination. At each hop, the sending interface uses the destination address to determine the next node for the data. The sending node is the node address of the interface that is broadcasting the packet. This field is used to determine the health of the subsystem that is sending the packet. In the case of a packet that traverses several intermediate nodes, it may be the node address of the intermediate node. The target node is the node address of the next hop for the packet. It may be an intermediate node, or the final destination for the packet. The sequence number is used to identify duplicate packets. Because each interface has multiple transceivers, the same packet will appear at both receivers. The sequence number allows the interface to correlate the reception and forward a single, unique packet for additional processing. The subnet field allows data traffic to be partitioned into segregated local networks to support large networks while keeping each subnet at a manageable size. This also keeps the routing table small enough so routing can be done by a simple table lookup in an FPGA device. The subsystem class identifies members of a set of redundant subsystems, and, in a hot standby configuration, all members of the subsystem class will receive the data packets. Only the active subsystem will generate data traffic. Specific units in a class of redundant units can be identified and, if the hot standby configuration is not used, packets will be directed to a specific subsystem unit.
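The header layout described above can be made concrete with a packing sketch. The field order follows the article, but every field width, the magic value, and the checksum rule are assumptions for illustration only; the article does not give on-wire sizes.

```python
import struct

# assumed widths: 32-bit magic and sequence number, one byte for version,
# QoS, TTL, sending transceiver, and function code, 16 bits for the
# payload length and the four addresses (all widths are hypothetical)
HEADER = struct.Struct(">I B B B B B H H H H H I")
MAGIC = 0x57415601   # hypothetical WAV packet magic number

def pack_header(qos, ttl, xcvr, func, payload, src_adi, dst_adi,
                sending_node, target_node, seq, version=1):
    """Serialize the header fields in the order the article lists them."""
    return HEADER.pack(MAGIC, version, qos, ttl, xcvr, func,
                       len(payload), src_adi, dst_adi,
                       sending_node, target_node, seq)

def checksum(data):
    """Placeholder 16-bit sum; the real checksum algorithm is unspecified."""
    return sum(data) & 0xFFFF

payload = b"telemetry"
hdr = pack_header(qos=2, ttl=8, xcvr=1, func=0, payload=payload,
                  src_adi=0x0102, dst_adi=0x0203,
                  sending_node=10, target_node=11, seq=42)
frame = hdr + payload + struct.pack(">H", checksum(hdr + payload))
```

Fixed-width fields like these are what make the FPGA table-lookup routing mentioned in the article practical: every field sits at a known bit offset.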
Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks
Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav
2017-01-01
Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed varies depending on the desired spatio-temporal resolution. Selecting the optimal number, positions, and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most current solutions are either theoretical or simulation-based, tackling the problem with random field theory, computational geometry, or computer simulations, which limits their applicability to any specific sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach in which co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show that 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable to spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
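The selection criterion can be sketched with a much simpler stand-in for the co-integration test: if a fixed linear map of one sensor's series tracks another sensor within the error bound, the second node is redundant and its data can be synthesized. All numbers below are synthetic, and least squares here merely stands in for the paper's co-integration analysis.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def redundant(x, y, bound=0.5):
    """Sensor y can be dropped if a stable linear map of sensor x
    reproduces its series within `bound` degrees C."""
    slope, icept = linear_fit(x, y)
    return max(abs(slope * a + icept - b) for a, b in zip(x, y)) <= bound

temp_a = [20.0, 21.5, 23.0, 24.5, 26.0]   # synthetic reference sensor
temp_b = [19.1, 20.5, 21.9, 23.3, 24.7]   # tracks A with stable gain/offset
```

A short-term dense deployment supplies the training series; nodes flagged redundant are then omitted from the long-term deployment, which is where the cost saving comes from.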
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
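The relationship between component-level metrics and system behavior is easy to demonstrate in the simplest case. Below is a minimal Monte Carlo sketch (not the framework from the report) for a series system with exponential lifetimes, where a closed form exists to check against:

```python
import random

def simulate_series_mttf(n_components, comp_mttf, trials=20000, seed=0):
    """A series system fails at its first component failure, so with
    exponential lifetimes its MTTF is comp_mttf / n_components. The
    simulation recovers this value, and unlike the closed form it keeps
    working when failure models become non-exponential or dependent."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.expovariate(1.0 / comp_mttf)
                     for _ in range(n_components))
    return total / trials

# ten components, each with a 100-unit MTTF: system MTTF is near 10
est = simulate_series_mttf(n_components=10, comp_mttf=100.0)
```

Replacing `rng.expovariate` with an empirical or Weibull lifetime model, or adding repair and dependency logic, is exactly the kind of what-if analysis a simulation framework enables and a closed form does not.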
NASA Astrophysics Data System (ADS)
Michaud, François; Calmus, Thierry; Ratzov, Gueorgui; Royer, Jean-Yves; Sosson, Marc; Bigot-Cormier, Florence; Bandy, William; Mortera Gutiérrez, Carlos
2011-08-01
The relative motion of the Pacific plate with respect to the North America plate is partitioned between transcurrent faults located along the western margin of Baja California and transform faults and spreading ridges in the Gulf of California. However, the amount of right lateral offset along the Baja California western margin is still debated. We revisited multibeam swath bathymetry data along the southern end of the Tosco-Abreojos fault system. In this area the depths are less than 1,000 m and allow a finer gridding at 60 m cell spacing. This improved resolution unveils several transcurrent right lateral faults offsetting the seafloor and canyons, which can be used as markers to quantify local offsets. The seafloor of the southern end of the Tosco-Abreojos fault system (south of 24°N) displays NW-SE elongated bathymetric highs and lows, suggesting a transtensional tectonic regime associated with the formation of pull-apart basins. In such an active tectonic context, submarine canyon networks are unstable. Using the deformation rate inferred from kinematic predictions and pull-apart geometry, we suggest a minimum age for the reorganization of the canyon network.
NASA Astrophysics Data System (ADS)
Huang, Jinhui; Liu, Wenxiang; Su, Yingxue; Wang, Feixue
2018-02-01
Space networks, in which connectivity is deterministic and intermittent, can be modeled by delay/disruption tolerant networks. In space delay/disruption tolerant networks, a packet is usually transmitted from the source node to the destination node indirectly via a series of relay nodes. If any one of the nodes in the path becomes congested, the packet will be dropped due to buffer overflow. One of the main reasons behind congestion is an unbalanced network traffic distribution. We propose a load balancing strategy which takes the congestion status of both the local node and relay nodes into account. The congestion status, together with the end-to-end delay, is used in the routing selection. A lookup-table enhancement is also proposed. The off-line computation and the on-line adjustment are combined to give a more precise estimate of the end-to-end delay while reducing the onboard computation. Simulation results show that the proposed strategy helps to distribute network traffic more evenly and therefore reduces the packet drop ratio. In addition, the average delay is also decreased in most cases. The lookup-table enhancement provides a compromise between the need for better communication performance and the desire for less onboard computation.
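The routing selection can be sketched as a scoring rule over candidate paths. The blend below is an assumption: the paper combines end-to-end delay with local and relay congestion, but its exact formula is not reproduced here, and the candidate routes are hypothetical.

```python
def pick_route(routes, alpha=0.5):
    """Pick the candidate path minimizing a blend of normalized
    end-to-end delay and the worst buffer occupancy along the path."""
    max_delay = max(r["delay"] for r in routes)
    def score(r):
        congestion = max(r["occupancy"])   # fullest buffer on the path
        return alpha * r["delay"] / max_delay + (1 - alpha) * congestion
    return min(routes, key=score)

routes = [  # hypothetical candidates from source A to destination D
    {"path": ["A", "B", "D"], "delay": 40.0, "occupancy": [0.2, 0.9]},
    {"path": ["A", "C", "D"], "delay": 55.0, "occupancy": [0.3, 0.4]},
]
best = pick_route(routes)
# the lower-delay path through B loses because relay B's buffer is 90% full
```

This is the load-balancing effect the abstract describes: traffic is steered away from congested relays even at some cost in nominal delay, which lowers the drop ratio.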
Three dimensional modelling of earthquake rupture cycles on frictional faults
NASA Astrophysics Data System (ADS)
Simpson, Guy; May, Dave
2017-04-01
We are developing an efficient MPI-parallel numerical method to simulate earthquake sequences on preexisting faults embedded within a three-dimensional viscoelastic half-space. We solve the velocity form of the elasto(visco)dynamic equations using a continuous Galerkin finite element method on an unstructured pentahedral mesh, which permits local spatial refinement in the vicinity of the fault. Frictional sliding is coupled to the viscoelastic solid via rate- and state-dependent friction laws using the split-node technique. Our coupled formulation employs a Picard-type nonlinear solver with a fully implicit, first-order accurate time integrator that utilises an adaptive time step to efficiently evolve the system through multiple seismic cycles. The implementation leverages advanced parallel solvers, preconditioners and linear algebra from the Portable Extensible Toolkit for Scientific Computing (PETSc) library. The model can treat heterogeneous frictional properties and stress states on the fault and surrounding solid, as well as non-planar fault geometries. Preliminary tests show that the model successfully reproduces dynamic rupture on a vertical strike-slip fault in a half-space governed by rate-state friction with the ageing law.
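The ageing law mentioned at the end can be written out directly. A forward-Euler sketch follows; the production code described above uses a fully implicit adaptive integrator, and the parameter values here are illustrative only.

```python
def evolve_state(theta0, v, d_c, dt, steps):
    """Integrate the ageing law d(theta)/dt = 1 - v * theta / d_c at
    constant slip speed v; the state variable theta relaxes toward the
    steady state d_c / v (state grows during near-lock, decays with slip)."""
    theta = theta0
    for _ in range(steps):
        theta += dt * (1.0 - v * theta / d_c)
    return theta

# v = 1e-6 m/s, d_c = 0.01 m: steady-state theta is d_c / v = 1e4 s
theta = evolve_state(theta0=1.0, v=1e-6, d_c=0.01, dt=100.0, steps=2000)
```

The stiff contrast between this slow interseismic relaxation and the rapid state drop during rupture is precisely why the adaptive implicit time stepping described in the abstract is needed.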
Recognition on space photographs of structural elements of Baja California
NASA Technical Reports Server (NTRS)
Hamilton, W.
1971-01-01
Gemini and Apollo photographs provide illustrations of known structural features of the peninsula and some structures not recognized previously. An apparent transform relationship between strike-slip and normal faulting is illustrated by the overlapping vertical photographs of northern Baja California. The active Agua Blanca right-lateral strike-slip fault trends east-southeastward to end at the north end of the Valle San Felipe and Valle Chico. The uplands of the high Sierra San Pedro Martir are a low-relief surface deformed by young faults, monoclines, and warps, which mostly produce west-facing steps and slopes; the topography is basically structural. The Sierra Cucapas of northeasternmost Baja California and the Colorado River delta of northwesternmost Sonora are broken by northwest-trending strike-slip faults. A strike-slip fault is inferred to trend northward obliquely from near Cabo San Lucas to La Paz, thence offshore until it comes ashore again as the Bahia Concepcion strike-slip fault.
Qayyum, M A; Shaad, F U
1976-01-01
Anatomy, histology and innervation of the heart of the rose-ringed parakeet, Psittacula krameri, have been studied in the present investigation. The sinuatrial node is found to be well-developed. It is located towards the right side of the cephalic end of the interatrial septum and is composed of a few nucleated cells and a large fibrous mass. The atrioventricular node is poorly defined and lies at the caudal end of the interatrial septum. The node is somewhat triangular in shape and is composed of elongated and multinucleated specialized fibres. The node is not covered by any connective tissue sheath. The poor development of the atrioventricular node and the absence of any sheath around it may be correlated with the fast rate of the heartbeat. The atrioventricular bundle is observed at the cephalic end of the interventricular septum. A branch from the right limb of the atrioventricular bundle is noted to pass directly into the right atrioventricular valve. The heart is richly innervated. Ganglion cells along with nerve fibres have been observed at the sulcus terminalis and the atrioventricular junction. A direct nervous connection could be observed between the sinuatrial and atrioventricular nodes. It is argued that the impulse which originates in the sinuatrial node would reach the atrioventricular node through the unspecialized muscle fibres and nerve fibres of the interatrial septum. Nerve cells could not be traced in the substance of the sinuatrial node, atrioventricular node and atrioventricular bundle.
Fault zone hydrogeologic properties and processes revealed by borehole temperature monitoring
NASA Astrophysics Data System (ADS)
Fulton, P. M.; Brodsky, E. E.
2015-12-01
High-resolution borehole temperature monitoring can provide valuable insight into the hydrogeologic structure of fault zones and transient processes that affect fault zone stability. Here we report on results from a subseafloor temperature observatory within the Japan Trench plate boundary fault. In our efforts to interpret this unusual dataset, we have developed several new methods for probing hydrogeologic properties and processes. We illustrate how spatial variations in the thermal recovery of the borehole after drilling and other spectral characteristics provide a measure of the subsurface permeability architecture. More permeable zones allow for greater infiltration of cool drilling fluids, are more greatly thermally disturbed, and take longer to recover. The results from the JFAST (Japan Trench Fast Drilling Project) observatory are consistent with geophysical logs, core data, and other hydrologic observations and suggest that a permeable damage zone consisting of steeply dipping faults and fractures overlies a low-permeability, clay-rich plate boundary fault. Using high-resolution time series data, we have also developed methods to map out when and where fluid advection occurs in the subsurface over time. In the JFAST data, these techniques reveal dozens of transient earthquake-driven fluid pulses that are spatially correlated and consistently located around inferred permeable areas of the fault damage zone. These observations are suspected to reflect transient fluid flow driven by pore pressure changes in response to dynamic and/or static stresses associated with nearby earthquakes. This newly recognized hydrologic phenomenon has implications for understanding subduction zone heat and chemical transport as well as the redistribution of pore fluid pressure which influences fault stability and can trigger other earthquakes.
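The abstract's core idea, that more permeable zones are more strongly invaded by cool drilling fluid and therefore take longer to recover thermally, can be illustrated with a minimal curve-fitting sketch. The exponential recovery model, the parameter names, and the numbers below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery_model(t, T_eq, dT, tau):
    """Exponential return toward equilibrium after the drilling disturbance."""
    return T_eq - dT * np.exp(-t / tau)

def recovery_time_constant(t_days, temps):
    """Fit the recovery curve and return tau (days); a larger tau would flag a
    zone more strongly invaded by cool drilling fluid, i.e. more permeable."""
    p0 = (temps[-1], max(temps[-1] - temps[0], 1e-3), 10.0)
    (T_eq, dT, tau), _ = curve_fit(recovery_model, t_days, temps, p0=p0)
    return tau

# Synthetic sensor at one depth: 25 C equilibrium, 3 C disturbance, tau = 20 days
t = np.linspace(0, 120, 60)
obs = recovery_model(t, 25.0, 3.0, 20.0)
print(round(recovery_time_constant(t, obs), 1))  # recovers tau close to 20
```

Comparing the fitted tau across sensors at different depths would then yield the kind of permeability-architecture profile the abstract describes.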
phylo-node: A molecular phylogenetic toolkit using Node.js.
O'Halloran, Damien M
2017-01-01
Node.js is an open-source, cross-platform environment that provides a JavaScript codebase for back-end server-side applications. JavaScript has been used to develop very fast and user-friendly front-end tools for bioinformatic and phylogenetic analyses, but no such toolkit is available in Node.js for conducting comprehensive molecular phylogenetic analysis. To address this gap, I have developed phylo-node, a stable and scalable Node.js toolkit that allows the user to perform diverse molecular and phylogenetic tasks. phylo-node can execute, and process the outputs of, a suite of software options covering read processing and genome alignment, sequence retrieval, multiple sequence alignment, primer design, evolutionary modeling, and phylogeny reconstruction. Furthermore, phylo-node enables the user to deploy server-dependent applications, and provides simple integration and interoperation with other Node modules and languages using Node inheritance patterns and a customized piping module to support the production of diverse pipelines. phylo-node is open-source and freely available to all users without sign-up or login requirements. All source code and user guidelines are openly available at the GitHub repository: https://github.com/dohalloran/phylo-node.
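The wrap-an-external-tool-and-pipe pattern that phylo-node builds on can be sketched generically. phylo-node itself is JavaScript; to keep the examples in this document in one language, the sketch below uses Python, and the `run_tool`/`pipe` helpers are hypothetical names, not phylo-node's API.

```python
import subprocess
import sys

def run_tool(cmd, stdin_text=None):
    """Run an external analysis program, capture its stdout, and fail loudly
    on a non-zero exit code: the core pattern behind toolkit wrappers."""
    result = subprocess.run(cmd, input=stdin_text, capture_output=True,
                            text=True, check=True)
    return result.stdout

def pipe(stages, data):
    """Feed the output of each stage into the next (a minimal pipeline)."""
    for cmd in stages:
        data = run_tool(cmd, stdin_text=data)
    return data

# Demo with the Python interpreter standing in for, e.g., an alignment tool:
upper = [sys.executable, "-c",
         "import sys; print(sys.stdin.read().upper(), end='')"]
print(pipe([upper], ">seq1\nacgt\n"))  # -> >SEQ1 / ACGT
```

Chaining several such stages is what the abstract's "customized piping module" automates for the user.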
Comparing Different Fault Identification Algorithms in Distributed Power System
NASA Astrophysics Data System (ADS)
Alkaabi, Salim
A power system is a huge, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power has increased, distributed power generation was introduced to the power system. Faults may occur in the power system at any time and in different locations. These faults can cause severe damage to the system, as they may lead to full failure of the power system. Using distributed generation in the power system makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis, different fault location identification algorithms have been tested and compared while varying the amount of power injected from distributed generators. The algorithms were tested on the IEEE 34-node test feeder using MATLAB, and the results were compared to determine when these algorithms might fail and how reliable they are.
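The abstract does not name the algorithms compared, but single-ended impedance methods are a classical family in such studies. As an illustration only (the line parameters below are hypothetical), the simple reactance method estimates fault distance from one end's voltage/current phasors during the fault:

```python
import cmath

def reactance_fault_distance(V, I, x_per_km):
    """Simple reactance method: the imaginary part of the apparent impedance
    seen from the relay, divided by the per-km line reactance, gives the
    distance to the fault (exact for a bolted, purely resistive fault)."""
    Z_apparent = V / I
    return Z_apparent.imag / x_per_km

# Line impedance 0.05 + j0.40 ohm/km, bolted fault 12 km from the relay
z_per_km = complex(0.05, 0.40)
d_true = 12.0
I = 1500.0 * cmath.exp(-1j * 0.5)   # fault current phasor (A)
V = z_per_km * d_true * I           # voltage measured at the relay
print(round(reactance_fault_distance(V, I, 0.40), 2))  # -> 12.0
```

Distributed generation complicates exactly this calculation: infeed from downstream generators alters the apparent impedance, which is why such algorithms can fail as injected power grows.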
Morpho-functional characterization of the systemic venous pole of the reptile heart.
Jensen, Bjarke; Vesterskov, Signe; Boukens, Bastiaan J; Nielsen, Jan M; Moorman, Antoon F M; Christoffels, Vincent M; Wang, Tobias
2017-07-27
Mammals evolved from reptile-like ancestors, and while the mammalian heart is driven by a distinct sinus node, a sinus node is not apparent in reptiles. We characterized the myocardial systemic venous pole, the sinus venosus, in reptiles to identify the dominant pacemaker and to assess whether the sinus venosus remodels and adopts an atrium-like phenotype as observed in mammals. Anolis lizards had an extensive sinus venosus of myocardium expressing Tbx18. A small sub-population of cells encircling the sinuatrial junction expressed Isl1, Bmp2, Tbx3, and Hcn4, homologues of genes marking the mammalian sinus node. Electrical mapping showed that hearts of Anolis lizards and Python snakes were driven from the sinuatrial junction. The electrical impulse was delayed between the sinus venosus and the right atrium, allowing the sinus venosus to contract and aid right atrial filling. In proximity of the systemic veins, the Anolis sinus venosus expressed markers of the atrial phenotype Nkx2-5 and Gja5. In conclusion, the reptile heart is driven by a pacemaker region with an expression signature similar to that of the immature sinus node of mammals. Unlike mammals, reptiles maintain a sinuatrial delay of the impulse, allowing the partly atrialized sinus venosus to function as a chamber.
Development of a space-systems network testbed
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Adams, Stuart; Burkhardt, Laura; Nagle, Gail; Murray, Nicholas
1988-01-01
This paper describes a communications network testbed which has been designed to allow the development of architectures and algorithms that meet the functional requirements of future NASA communication systems. The central hardware components of the Network Testbed are programmable circuit switching communication nodes which can be adapted by software or firmware changes to customize the testbed to particular architectures and algorithms. Fault detection, isolation, and reconfiguration have been implemented in the Network with a hybrid approach which utilizes features of both centralized and distributed techniques to provide efficient handling of faults within the Network.
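The paper does not detail its fault detection mechanism, but a common building block in such testbeds (and in the end-to-end hello-packet protocol described earlier in this collection) is declaring a node or link faulty after k consecutive missed heartbeats. A minimal sketch, with k and the class name as assumptions:

```python
class HeartbeatMonitor:
    """Raise an alarm after k consecutive missed hello packets."""

    def __init__(self, k=3):
        self.k = k
        self.missed = 0
        self.alarm = False

    def tick(self, hello_received):
        """Process one hello interval; return current alarm state."""
        if hello_received:
            self.missed = 0          # any received hello resets the counter
        else:
            self.missed += 1
            if self.missed >= self.k:
                self.alarm = True    # k consecutive misses -> declare fault
        return self.alarm

mon = HeartbeatMonitor(k=3)
history = [True, True, False, False, True, False, False, False]
alarms = [mon.tick(h) for h in history]
print(alarms)  # alarm raised only at the third consecutive miss
```

Requiring k consecutive misses trades detection latency for robustness against isolated packet loss, the same trade-off the end-to-end hello-packet protocol evaluates.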
NASA Astrophysics Data System (ADS)
Belapurkar, Rohit K.
Future aircraft engine control systems will be based on a distributed architecture, in which the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques while achieving weight reduction, improved performance, and lower life-cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on the ARINC 825 communication protocol. Stability conditions and a control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance in the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in the presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single-input and multiple-input multiple-output control design techniques.
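The destabilizing effect of data dropouts can be seen in a toy closed loop: an unstable scalar plant whose controller holds the last successfully received measurement when a sample is lost. All plant and gain values below are illustrative assumptions, not the T-700 model.

```python
def simulate(dropouts, a=1.05, b=1.0, K=0.6, x0=5.0, steps=20):
    """Unstable scalar plant x+ = a*x + b*u under u = -K*x_hat, where x_hat
    is the last measurement the controller actually received."""
    x, x_hat = x0, x0
    for k in range(steps):
        if k not in dropouts:      # measurement arrives this sample
            x_hat = x
        u = -K * x_hat             # control computed on possibly stale data
        x = a * x + b * u
    return abs(x)

print(simulate(set()))                 # lossless network: state decays toward 0
print(simulate({3, 4, 5, 6, 7, 8}))    # dropout burst: control goes stale,
                                       # leaving a much larger residual error
```

With no losses the closed-loop factor is a - b*K = 0.45 per sample, so the state decays geometrically; during a dropout burst the open-loop factor a = 1.05 takes over, which is exactly the degradation mode the thesis's stability conditions must guard against.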
NASA Astrophysics Data System (ADS)
Viesca, R. C.
2015-12-01
Subsurface fluid injection is often followed by observations of an enlarging cloud of microseismicity. The cloud's diffusive growth is thought to be a direct response to the diffusion of elevated pore fluid pressure reaching pre-stressed faults, triggering small instabilities; the observed high rates of this growth are interpreted to reflect a relatively high permeability of a fractured subsurface [e.g., Shapiro, GJI 1997]. We investigate an alternative mechanism for growing a microseismic cloud: the elastic transfer of stress due to slow, aseismic slip on a subset of the pre-existing faults in this damaged subsurface. We show that the growth of the slipping region of the fault may be self-similar in a diffusive manner. While this slip is driven by fluid injection, we show that, for critically stressed faults, the apparent diffusion of this slow slip may quickly exceed the poroelastically driven diffusion of the elevated pore fluid pressure. Under these conditions, microseismicity can be first triggered by the off-fault stress perturbation due to the expanding region of slip on principal faults. This provides an alternative interpretation of diffusive growth rates in terms of the subsurface stress state rather than an enhanced hydraulic diffusivity. That such aseismic slip may occur, outpace fluid diffusion, and in turn trigger microseismic events, is also suggested by on- and near-fault observations in past and recently reported fluid injection experiments [e.g., Cornet et al., PAGEOPH 1997; Guglielmi et al., Science 2015]. The model of injection-induced slip assumes elastic off-fault behavior and a fault strength determined by the product of a constant friction coefficient and the local effective normal stress. The sliding region is enlarged by the pore pressure increase resolved on the fault plane. 
Remarkably, the rate of self-similar expansion may be determined by a single parameter reflecting both the initial stress state and the magnitude of the pore pressure increase.
Using Trust to Establish a Secure Routing Model in Cognitive Radio Network.
Zhang, Guanghua; Chen, Zhenguo; Tian, Liqin; Zhang, Dongwen
2015-01-01
To counter the selective forwarding attack on routing in cognitive radio networks, this paper proposes a trust-based secure routing model. By monitoring nodes' forwarding behaviors, trusts of nodes are constructed to identify malicious nodes. Because route selection must be closely coordinated with spectrum allocation, a route request piggybacking available spectrum opportunities is sent to non-malicious nodes. In the routing decision phase, nodes' trusts are used to construct available path trusts, and delay measurement is combined with them for making routing decisions. According to their trust classification, nodes receive different responses to their service requests. By adopting stricter punishment of malicious behaviors from non-trusted nodes, the cooperation of nodes in routing can be stimulated. Simulation results and analysis indicate that this model performs well in network throughput and end-to-end delay under the selective forwarding attack.
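The routing decision step, combining path trust with measured delay, can be sketched as follows. The product-of-node-trusts aggregation, the weight w, and the normalization are plausible assumptions for illustration; the paper's exact scoring rule is not reproduced here.

```python
def path_trust(node_trusts, path):
    """Trust of a path as the product of its nodes' trust values."""
    t = 1.0
    for n in path:
        t *= node_trusts[n]
    return t

def choose_route(node_trusts, delays, paths, w=0.7):
    """Score each path by w * trust - (1 - w) * normalized delay and pick
    the best: a hedged stand-in for the combined trust/delay decision."""
    max_d = max(delays[tuple(p)] for p in paths)
    return max(paths, key=lambda p: w * path_trust(node_trusts, p)
                                    - (1 - w) * delays[tuple(p)] / max_d)

trusts = {"A": 0.9, "B": 0.95, "C": 0.4, "D": 0.9}          # C looks malicious
delays = {("A", "B"): 30.0, ("C",): 10.0, ("A", "D"): 25.0}  # ms
paths = [["A", "B"], ["C"], ["A", "D"]]
print(choose_route(trusts, delays, paths))  # -> ['A', 'D']
```

Note that the low-trust single-hop path through C loses despite its shorter delay, which is the intended effect of weighting trust heavily under a selective forwarding attack.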
NASA Astrophysics Data System (ADS)
Ellis, A. P.; DeMets, C.; Briole, P.; Cosenza, B.; Flores, O.; Guzman-Speziale, M.; Hernandez, D.; Kostoglodov, V.; La Femina, P. C.; Lord, N. E.; Lasserre, C.; Lyon-Caen, H.; McCaffrey, R.; Molina, E.; Rodriguez, M.; Staller, A.; Rogers, R.
2017-12-01
We describe plate rotations, fault slip rates, and fault locking estimated from a new 100-station GPS velocity field at the western end of the Caribbean plate, where the Motagua-Polochic fault zone, Middle America trench, and Central America volcanic arc faults converge. In northern Central America, fifty-one upper-plate earthquakes have caused approximately 40,000 fatalities since 1900. The proximity of main population centers to these destructive earthquakes and the resulting loss of human life provide strong motivation for studying the present-day tectonics of Central America. Plate rotations, fault slip rates, and deformation are quantified via a two-stage inversion of daily GPS position time series using TDEFNODE modeling software. In the first stage, transient deformation associated with three M>7 earthquakes in 2009 and 2012 is estimated and removed from the GPS position time series. In the second stage, linear velocities determined from the corrected GPS time series are inverted to estimate deformation within the western Caribbean plate, slip rates along the Motagua-Polochic faults and faults in the Central America volcanic arc, and the gradient of extension in the Honduras-Guatemala wedge. Major outcomes of the second inversion include the following: (1) Confirmation that slip rates on the Motagua fault decrease from 17-18 mm/yr at its eastern end to 0-5 mm/yr at its western end, in accord with previous results. (2) A transition from moderate subduction zone locking offshore from southern Mexico and parts of southern Guatemala to weak or zero coupling offshore from El Salvador and parts of Nicaragua along the Middle America trench. (3) Evidence for significant east-west extension in southern Guatemala between the Motagua fault and volcanic arc. Our study also shows evidence for creep on the eastern Motagua fault that diminishes westward along the North America-Caribbean plate boundary.
Asynchronous Data Retrieval from an Object-Oriented Database
NASA Astrophysics Data System (ADS)
Gilbert, Jonathan P.; Bic, Lubomir
We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.
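The message-driven evaluation scheme, where each node is a logical processing element that handles and forwards query messages without shared memory or central control, can be sketched as a toy propagation over a node/arc graph. The class and function names are illustrative, not the paper's notation.

```python
from collections import deque

class Node:
    """A logical processing element: local state plus outgoing arcs."""
    def __init__(self, name, value=None, neighbors=()):
        self.name, self.value, self.neighbors = name, value, list(neighbors)

def message_driven_query(nodes, start, predicate):
    """Propagate a query message along directed arcs; each node evaluates
    the predicate locally and forwards the message to its neighbors, so no
    centralized control is needed during query processing."""
    inbox = deque([start])
    visited, results = set(), []
    while inbox:
        name = inbox.popleft()
        if name in visited:
            continue
        visited.add(name)
        node = nodes[name]
        if node.value is not None and predicate(node.value):
            results.append((name, node.value))
        inbox.extend(node.neighbors)   # forward the query message
    return results

nodes = {
    "persons": Node("persons", neighbors=["alice", "bob"]),
    "alice": Node("alice", value=34),
    "bob": Node("bob", value=52),
}
print(message_driven_query(nodes, "persons", lambda age: age > 40))
```

Because each node touches only its own state and its message queue, the same logic maps naturally onto the loosely coupled multiprocessor architecture the paper targets.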
NASA Astrophysics Data System (ADS)
Lu, Siliang; Wang, Xiaoxian; He, Qingbo; Liu, Fang; Liu, Yongbin
2016-12-01
Transient signal analysis (TSA) has been proven an effective tool for motor bearing fault diagnosis, but has yet to be applied in processing bearing fault signals with variable rotating speed. In this study, a new TSA-based angular resampling (TSAAR) method is proposed for fault diagnosis under speed fluctuation condition via sound signal analysis. By applying the TSAAR method, the frequency smearing phenomenon is eliminated and the fault characteristic frequency is exposed in the envelope spectrum for bearing fault recognition. The TSAAR method can accurately estimate the phase information of the fault-induced impulses using neither complicated time-frequency analysis techniques nor external speed sensors, and hence it provides a simple, flexible, and data-driven approach that realizes variable-speed motor bearing fault diagnosis. The effectiveness and efficiency of the proposed TSAAR method are verified through a series of simulated and experimental case studies.
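The envelope-spectrum step at the heart of such methods, demodulating the fault-induced impulses so the fault characteristic frequency appears as a spectral line, can be sketched with the analytic signal. This is a generic envelope analysis, not the TSAAR method itself; the signal parameters are synthetic.

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope (demodulated) spectrum via the analytic signal: the FFT of
    the amplitude envelope exposes fault frequencies that appear only as
    sidebands around structural resonances in the raw spectrum."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0        # keep positive frequencies, drop negative ones
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = np.abs(np.fft.ifft(X * h))     # amplitude envelope of x
    env -= env.mean()                    # remove DC before the spectrum
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, spec

# Bearing-like signal: a 3 kHz resonance amplitude-modulated at 100 Hz
fs = 20000
t = np.arange(fs) / fs                   # 1 s of data
x = (1 + 0.8 * np.cos(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)
print(freqs[np.argmax(spec)])            # peak at the 100 Hz fault frequency
```

Under speed fluctuation this 100 Hz line would smear across neighboring bins, which is precisely the problem the angular-resampling step of TSAAR is designed to remove before the envelope spectrum is computed.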
NASA Astrophysics Data System (ADS)
Wang, F.; Bevis, M. G.; Blewitt, G.; Gomez, D.
2017-12-01
We study the postseismic transient displacements following the 2011 Mw 9.0 Tohoku earthquake using the Nevada Geodetic Laboratory's daily and 5-minute interval PPP solutions for 1,272 continuous GPS stations in Japan, with particular emphasis on the early transient displacements of these stations. One significant complication is the Mw 7.9 aftershock that occurred just 29.3 minutes after the main shock, since the coseismic (and postseismic) displacements driven by the aftershock are superimposed on the postseismic transients driven by the main shock. We address the question of whether or not the stresses induced by the Mw 9.0 main shock were relaxed by any major faults within Japan. The notion is that significant stress relaxation which is localized on a fault system should be manifested in the spatial pattern of the postseismic transient displacement field in the vicinity of that system. This would provide a basis for distinguishing between faults that engage in stick-slip behavior and those that creep instead. The distinction is important in that it has implications for the seismic risk associated with upper plate faulting. We will make the case that we do detect localized fault creeping in response to the coseismic stress field produced by the Mw 9 event.
Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M.
2009-09-09
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
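The third step of the procedure, evaluating residuals with a statistical test, can be sketched minimally: a residual is the gap between measurement and model prediction, and a fault is flagged when a windowed statistic crosses a threshold. The moving-average test, window size, and threshold below are simplified stand-ins for the paper's actual statistical tests.

```python
from statistics import mean

def detect_fault(measured, predicted, window=5, threshold=0.5):
    """Flag a fault whenever the moving average of the residual
    (measured minus model prediction) exceeds the threshold."""
    residuals = [m - p for m, p in zip(measured, predicted)]
    flags = []
    for k in range(len(residuals)):
        recent = residuals[max(0, k - window + 1):k + 1]
        flags.append(abs(mean(recent)) > threshold)
    return flags

# Fuzzy model tracks well until a +1.0 sensor bias appears at sample 6
predicted = [20.0] * 12
measured = [20.1, 19.9, 20.0, 20.1, 19.9, 20.0] + [21.0] * 6
flags = detect_fault(measured, predicted)
print(flags.index(True))   # first sample at which the fault is flagged
```

Averaging over a window delays detection by a few samples but suppresses false alarms from measurement noise, the same trade-off any residual-evaluation test must make.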
GTRF: a game theory approach for regulating node behavior in real-time wireless sensor networks.
Lin, Chi; Wu, Guowei; Pirozmand, Poria
2015-06-04
The selfish behaviors of nodes (or selfish nodes) cause packet loss, network congestion or even void regions in real-time wireless sensor networks, which greatly decrease the network performance. Previous methods have focused on detecting selfish nodes or avoiding selfish behavior, but little attention has been paid to regulating selfish behavior. In this paper, a Game Theory-based Real-time & Fault-tolerant (GTRF) routing protocol is proposed. GTRF is composed of two stages. In the first stage, a game theory model named VA is developed to regulate nodes' behaviors while balancing energy cost. In the second stage, a jumping transmission method is adopted, which ensures that real-time packets can be successfully delivered to the sink before a specific deadline. We prove that GTRF theoretically meets real-time requirements with low energy cost. Finally, extensive simulations are conducted to demonstrate the performance of our scheme. Simulation results show that GTRF not only balances the energy cost of the network, but also prolongs network lifetime.
Submicron Systems Architecture
1983-11-01
…hours, and is producing successively more refined statistics. It will run for several hundred more hours before improving significantly on the… splitting a node into two parts and connecting to… Similarly, a stuck-open transistor fault is modeled by putting a fault transistor in series…
Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data
NASA Astrophysics Data System (ADS)
Cossalter, Michele; Mengshoel, Ole J.; Selker, Ted
2013-01-01
Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enable interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to a specially designed software tool.
enhancedGraphics: a Cytoscape app for enhanced node graphics
Morris, John H.; Kuchinsky, Allan; Ferrin, Thomas E.; Pico, Alexander R.
2014-01-01
enhancedGraphics ( http://apps.cytoscape.org/apps/enhancedGraphics) is a Cytoscape app that implements a series of enhanced charts and graphics that may be added to Cytoscape nodes. It enables users and other app developers to create pie, line, bar, and circle plots that are driven by columns in the Cytoscape Node Table. Charts are drawn using vector graphics to allow full-resolution scaling. PMID:25285206
Distributed Multihoming Routing Method by Crossing Control MIPv6 with SCTP
NASA Astrophysics Data System (ADS)
Shi, Hongbo; Hamagami, Tomoki
Various wireless communication technologies, such as 3G and WiFi, are widely used around the world. Recently, not only laptops but also smartphones can be equipped with multiple wireless devices. Communication terminals implemented with multiple interfaces are usually called multi-homed nodes. Meanwhile, a multi-homed node with multiple interfaces can also be regarded as multiple single-homed nodes. For example, when a person uses a smartphone and a laptop to connect to the Internet concurrently, we may regard that person as a multi-homed node in the Internet. This paper proposes a new routing method, Multi-homed Mobile Cross-layer Control, to handle multi-homed mobile nodes. Our suggestion provides a distributed end-to-end routing method for handling the communications among multi-homed nodes at the fundamental network layer.
An enhanced performance through agent-based secure approach for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Bisen, Dhananjay; Sharma, Sanjeev
2018-01-01
This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc network. In this approach, agent nodes are selected through optimal node reliability as a factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval of node. After selection of agent nodes, a procedure of malicious behaviour detection is performed using fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, comparative analysis is done with conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.
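The agent-selection step, ranking nodes by an optimal-reliability factor built from the listed performance features, can be sketched as a weighted score. The weights and the assumption that every metric is pre-normalized to [0, 1] are illustrative, not AB-SEP's actual values.

```python
def node_reliability(metrics, weights=None):
    """Weighted combination of the performance features named in the paper;
    all metrics are assumed normalized to [0, 1] with higher = better."""
    weights = weights or {"degree_difference": 0.2, "normalized_distance": 0.2,
                          "energy_level": 0.3, "mobility": 0.15,
                          "hello_interval": 0.15}
    return sum(weights[k] * metrics[k] for k in weights)

def select_agents(candidates, count=2):
    """Pick the `count` most reliable nodes to act as agent nodes."""
    ranked = sorted(candidates, key=lambda c: node_reliability(c[1]),
                    reverse=True)
    return [name for name, _ in ranked[:count]]

candidates = [
    ("n1", {"degree_difference": 0.9, "normalized_distance": 0.8,
            "energy_level": 0.9, "mobility": 0.7, "hello_interval": 0.8}),
    ("n2", {"degree_difference": 0.4, "normalized_distance": 0.5,
            "energy_level": 0.3, "mobility": 0.9, "hello_interval": 0.6}),
    ("n3", {"degree_difference": 0.8, "normalized_distance": 0.9,
            "energy_level": 0.95, "mobility": 0.6, "hello_interval": 0.9}),
]
print(select_agents(candidates))  # -> ['n3', 'n1']
```

Once agents are chosen this way, the fuzzy-based secure architecture (FBSA) would run on those nodes to watch for malicious behavior.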
Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines
NASA Astrophysics Data System (ADS)
Singh, Dheeraj Sharan; Zhao, Qing
2016-12-01
This paper presents a novel data-driven technique for the detection and isolation of faults which generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis and a pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF) is identified using EMD of the raw signal, which contains all the necessary information about the faults. The envelope of this IMF is often modulated by multiple vibration sources and noise. A second-level decomposition is performed by applying pseudo-fault signal (PFS) assisted EMD on the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using the external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode mixing problem inherent in EMD, (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation, and has been validated on simulated fault and experimental data corresponding to a bearing and a gear-box set-up, respectively.
Quantum key distribution with an entangled light emitting diode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dzurnak, B.; Stevenson, R. M.; Nilsson, J.
Measurements performed on entangled photon pairs shared between two parties can allow unique quantum cryptographic keys to be formed, creating secure links between users. An advantage of using such entangled photon links is that they can be adapted to propagate entanglement to end users of quantum networks with only untrusted nodes. However, demonstrations of quantum key distribution with entangled photons have so far relied on sources optically excited with lasers. Here, we realize a quantum cryptography system based on an electrically driven entangled-light-emitting diode. Measurement bases are passively chosen and we show formation of an error-free quantum key. Our measurements also simultaneously reveal Bell's parameter for the detected light, which exceeds the threshold for quantum entanglement.
Quantum key distribution with an entangled light emitting diode
NASA Astrophysics Data System (ADS)
Dzurnak, B.; Stevenson, R. M.; Nilsson, J.; Dynes, J. F.; Yuan, Z. L.; Skiba-Szymanska, J.; Farrer, I.; Ritchie, D. A.; Shields, A. J.
2015-12-01
Measurements performed on entangled photon pairs shared between two parties can allow unique quantum cryptographic keys to be formed, creating secure links between users. An advantage of using such entangled photon links is that they can be adapted to propagate entanglement to end users of quantum networks with only untrusted nodes. However, demonstrations of quantum key distribution with entangled photons have so far relied on sources optically excited with lasers. Here, we realize a quantum cryptography system based on an electrically driven entangled-light-emitting diode. Measurement bases are passively chosen and we show formation of an error-free quantum key. Our measurements also simultaneously reveal Bell's parameter for the detected light, which exceeds the threshold for quantum entanglement.
NASA Astrophysics Data System (ADS)
Wu, S.; Mclaskey, G.
2017-12-01
We investigate foreshocks and aftershocks of dynamic stick-slip events generated on a newly constructed 3 m biaxial friction apparatus at Cornell University (attached figure). In a typical experiment, two rectangular granite blocks are squeezed together under 4 or 7 MPa of normal pressure (approximately 4 or 7 million N on a 1 m2 fault surface), and then shear stress is increased until the fault slips 10-400 microns in a dynamic rupture event similar to a M -2 to M -3 earthquake. Some ruptures nucleate near the north end of the fault, where the shear force is applied; other ruptures nucleate 2 m from the north end of the fault. The samples are instrumented with 16 piezoelectric sensors, 16 eddy current sensors, and 8 strain gage rosettes, evenly placed along the fault to measure vertical ground motion, local slip, and local stress, respectively. We studied sequences of tens of slip events, identified a total of 194 foreshocks and 66 aftershocks located within 6 s time windows around the stick-slip events, and analyzed their timing and locations relative to the quasistatic nucleation process. We found that the locations of the foreshocks and aftershocks were distributed all along the length of the fault, with the majority located at the ends of the fault where local normal and shear stress is highest (caused by both edge effects and the finite stiffness of the steel frame surrounding the granite blocks). We also opened the laboratory fault and inspected the fault surface, and found increased wear at the sample ends. To explore the foreshocks' and aftershocks' relationship to the nucleation and afterslip, we compared the occurrence of foreshocks to the local slip rate on the laboratory fault closest to each foreshock in space and time. We found that the majority of foreshocks were generated at local slip rates between 1 and 100 microns/s, though we were not able to resolve slip rates lower than about 1 micron/s.
Our experiments provide insight into how foreshocks and aftershocks in natural earthquakes may be influenced both by fault structure and slow slip associated with nucleation or afterslip.
NASA Astrophysics Data System (ADS)
Ott, B.; Mann, P.; Saunders, M.
2013-12-01
Previous workers, mainly mapping onland active faults on Caribbean islands, defined the northern Caribbean plate boundary zone as a 200-km-wide zone bounded by two active and parallel strike-slip faults: the Oriente fault along the northern edge of the Cayman trough with a GPS rate of 14 mm/yr, and the Enriquillo-Plaintain Garden fault zone (EPGFZ) with a rate of 5-7 mm/yr. In this study we use 5,000 km of industry and academic data from the Nicaraguan Rise south and southwest of the EPGFZ in the maritime areas of Jamaica, Honduras, and Colombia to define an offshore, 700-km-long, active, left-lateral strike-slip fault in what has previously been considered the stable interior of the Caribbean plate as determined from plate-wide GPS studies. The fault was named by previous workers as the Pedro Banks fault zone (PBFZ) because a 100-km-long segment of the fault forms an escarpment along the Pedro carbonate bank of the Nicaraguan Rise. Two fault segments of the PBFZ are defined: a 400-km-long eastern segment that exhibits large negative flower structures 10-50 km in width, with fault segments rupturing the sea floor as defined by high-resolution 2D seismic data, and a 300-km-long western segment that is defined by a narrow zone of anomalous seismicity first observed by previous workers. The western end of the PBFZ terminates on a Quaternary rift structure, the San Andres rift, associated with Plio-Pleistocene volcanism and thickening trends indicating initial rifting in the Late Miocene. The southern end of the San Andres rift terminates on the western Hess fault, which also exhibits active strands consistent with left-lateral, strike-slip faulting. The total length of the PBFZ-San Andres rift-Southern Hess escarpment fault is 1,200 km, traversing the entire western end of the Caribbean plate.
Our interpretation is similar to previous models that have proposed the "stable" western Caribbean plate is broken by this fault whose rate of displacement is less than the threshold recognizable from the current GPS network (~3 mm/yr). The Late Miocene age of the fault indicates it may have activated during the Late Miocene to recent Hispaniola-Bahamas oblique collision event.
NASA Astrophysics Data System (ADS)
Camafort, Miquel; Booth-Rea, Guillermo; Pérez-Peña, Jose Vicente; Melki, Fetheddine; Gracia, Eulalia; Azañón, Jose Miguel; Ranero, César R.
2017-04-01
Active tectonics in North Africa is fundamentally driven by NW-SE directed slow convergence between the Nubia and Eurasia plates, producing a region of thrust and strike-slip faulting. In this paper we analyze the morphometric characteristics of the little-studied northern Tunisia sector. The study aimed to identify previously unknown active tectonic structures and to further understand the mechanisms that drive drainage evolution in this region of slow convergence. The interpretation of morphometric data was supported by a field campaign targeting a selection of structures. The analysis indicates that recent fluvial captures have been the main factor rejuvenating drainage catchments. The Medjerda River, the main catchment in northern Tunisia, has increased its drainage area during the Quaternary by capturing adjacent axial valleys to the north and south of its drainage divide. These captures are probably driven by gradual uplift of adjacent axial valleys by reverse/oblique faults or associated folds, such as the El Alia-Teboursouk and Dkhila faults. Our fieldwork found that these faults cut Holocene colluvial fans containing seismites such as clastic dikes and sand volcanoes, indicating recent seismogenic faulting. The growth and stabilization of the axial Medjerda River against the natural tendency of transverse drainages might be caused by a combination of dynamic topography and transpressive tectonics. The orientation of the large axial Medjerda drainage, which runs from eastern Algeria towards northeastern Tunisia into the Gulf of Tunis, might be associated with negative buoyancy caused by the underlying Nubia slab at its mouth, together with uplift of the Medjerda headwaters along the South Atlassic dextral transfer zone.
NASA Astrophysics Data System (ADS)
Booth-Rea, Guillermo; Pérez-Peña, Vicente; Azañón, José Miguel; de Lis Mancilla, Flor; Morales, Jose; Stich, Daniel; Giaconia, Flavio
2014-05-01
Most of the geological features of the Betics and Rif have resulted from slab tearing, edge delamination and punctual slab breakoff events between offset STEP faults. New P-receiver function data on the deep structure under the Betics and Rif have helped to map the deep boundaries of slab tearing and rupture in the area. Linking surface geological features with the deep structure shows that STEP faulting under the Betics occurred along ENE-WSW segments offset towards the south, probably due to the westward narrowing of the Tethys slab. The surface expression of STEP faulting in the Betics consists of ENE-WSW dextral strike-slip fault segments such as the Crevillente, Alpujarras or Torcal faults, which are interrupted by basins and elongated extensional domes where exhumed high-pressure (HP) middle crust occurs. Exhumation of deep crust erases the effects of strike-slip faulting in the overlying brittle crust. Slab tearing affected the eastern Betics during the Tortonian to Messinian, producing the Fortuna and Lorca basins, and later propagated westward, generating the end-Messinian to Pleistocene Guadix-Baza basins and the Granada Pliocene-Pleistocene depocentre. At present, slab tearing is occurring beneath the Málaga depression, where the Torcal dextral strike-slip fault ends in a region of active distributed shortening and where intermediate-depth seismicity occurs. STEP fault migration has occurred at average rates between 2 and 4 cm/yr since the late Miocene, producing a wave of alternating uplift-subsidence pulses. These initiate with uplift related to slab flexure, followed by subsidence related to slab pull, then uplift after rupture, ending with thermal subsidence. This "yo-yo" type tectonic evolution leads to the generation of endorheic basins that later become exorheic when they are uplifted and captured above the region where asthenospheric upwelling occurs.
Dual Interlocked Logic for Single-Event Transient Mitigation
2017-03-01
SPICE simulation and fault-injection analysis. Exemplar SPICE simulations have been performed in a 32nm partially-depleted silicon-on-insulator...in this work. The model has been validated at the 32nm SOI technology node with extensive heavy-ion data [7]. For the SPICE simulations, three
Analytical Study of different types Of network failure detection and possible remedies
NASA Astrophysics Data System (ADS)
Saxena, Shikha; Chandra, Somnath
2012-07-01
Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control-plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared-risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed to localize an SRLG failure in an arbitrary graph. Procedures for protection and restoration of SRLG failures using a backup re-provisioning algorithm are also discussed.
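The MC/MP construction above can be illustrated with a small sketch. This is not the paper's algorithm; the topology, monitor sets, and SRLGs below are hypothetical, and it only shows the core idea: each monitor fails exactly when it traverses a link of the failed SRLG, so an SRLG is uniquely localizable when its failure "syndrome" differs from every other SRLG's.

```python
# Each monitoring path/cycle is modeled as the set of links it traverses.
# A monitor fails iff it shares a link with the failed SRLG.

def syndrome(srlg, monitors):
    """Bit-vector of which monitors fail when this SRLG fails."""
    return tuple(1 if srlg & set(path) else 0 for path in monitors)

# Hypothetical 5-link network with four monitors (link-sets of each MC/MP).
monitors = [{'a', 'b'}, {'b', 'c', 'd'}, {'d', 'e'}, {'a', 'e'}]
# Candidate SRLGs: the single links plus one shared-risk pair.
srlgs = [{'a'}, {'b'}, {'c'}, {'d'}, {'e'}, {'a', 'b'}]

table = {}
ambiguous = []
for g in srlgs:
    s = syndrome(g, monitors)
    if s in table:
        ambiguous.append((table[s], g))  # two SRLGs share a syndrome: not localizable
    table[s] = g

def localize(failed_monitors):
    """Map an observed syndrome back to the failed SRLG (unique if no ambiguity)."""
    return table.get(tuple(failed_monitors))

print(localize(syndrome({'c'}, monitors)))  # -> {'c'}
```

With this particular monitor set every candidate SRLG happens to get a distinct syndrome, so `ambiguous` stays empty; the paper's contribution is characterizing when such a monitor set exists for an arbitrary graph.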
NASA Astrophysics Data System (ADS)
Okada, Hironao; Kobayashi, Takeshi; Masuda, Takashi; Itoh, Toshihiro
2009-07-01
We describe a low-power-consumption wireless sensor node designed for monitoring the conditions of animals, especially chickens. The node detects variations in 24-h behavior patterns by counting, per unit time, the movements of an animal whose acceleration exceeds a threshold. Wireless sensor nodes operated intermittently are likely to miss necessary data during their sleep-mode state and to waste power acquiring useless data. We therefore design the node to operate only when the required acceleration is detected, using a piezoelectric accelerometer and a comparator as the wake-up source for the microcontroller unit.
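The counting scheme can be sketched in a few lines. The sample data, threshold, and window size here are illustrative, not from the paper; the point is that only threshold-crossing samples (the events that would wake the MCU) contribute to the per-window movement counts.

```python
# Each sample is (timestamp_seconds, |acceleration|). The comparator fires only
# when the magnitude exceeds the threshold; each firing is one counted movement.

def movement_counts(samples, threshold, window):
    """Return {window_index: number of threshold-crossing events in that window}."""
    counts = {}
    for t, a in samples:
        if a > threshold:                      # comparator fires -> MCU wakes
            idx = int(t // window)
            counts[idx] = counts.get(idx, 0) + 1
    return counts

samples = [(0.5, 0.2), (1.1, 1.4), (1.8, 1.6), (3.2, 0.1), (4.0, 2.0)]
print(movement_counts(samples, threshold=1.0, window=2.0))  # {0: 2, 2: 1}
```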
NASA Astrophysics Data System (ADS)
El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali
2015-09-01
The paper develops a set membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained based on data recorded in several flight scenarios of a highly representative aircraft benchmark.
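The detection logic can be reduced to a one-line interval test. This is a minimal sketch, not the paper's method in full: the numbers are invented, and the model-driven interval predictor and data-driven noise characterisation are stood in for by fixed bounds.

```python
# A measurement consistent with the (noise-inflated) predicted interval of the
# healthy system is accepted; anything outside it is flagged as a fault.

def detect(measured, predicted_lo, predicted_hi, noise_bound):
    """True if the measured surface position falls outside the robust interval."""
    lo = predicted_lo - noise_bound
    hi = predicted_hi + noise_bound
    return not (lo <= measured <= hi)

print(detect(measured=5.2, predicted_lo=1.0, predicted_hi=4.0, noise_bound=0.5))  # True
print(detect(measured=3.9, predicted_lo=1.0, predicted_hi=4.0, noise_bound=0.5))  # False
```

Widening the interval by the noise bound trades fault sensitivity for robustness against false alarms, which is exactly the tension the paper's combined data-driven/model-driven design addresses.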
NASA Astrophysics Data System (ADS)
Shao, Xinxin; Naghdy, Fazel; Du, Haiping
2017-03-01
A fault-tolerant fuzzy H∞ control design approach for active suspension of in-wheel motor driven electric vehicles in the presence of sprung mass variation, actuator faults and control input constraints is proposed. The controller is designed based on the quarter-car active suspension model with a dynamic-damping-in-wheel-motor-driven-system, in which the suspended motor is operated as a dynamic absorber. The Takagi-Sugeno (T-S) fuzzy model is used to model this suspension with possible sprung mass variation. The parallel-distributed compensation (PDC) scheme is deployed to derive a fault-tolerant fuzzy controller for the T-S fuzzy suspension model. In order to reduce the motor wear caused by the dynamic force transmitted to the in-wheel motor, the dynamic force is taken as an additional controlled output besides the traditional optimization objectives such as sprung mass acceleration, suspension deflection and actuator saturation. The H∞ performance of the proposed controller is derived as linear matrix inequalities (LMIs) comprising three equality constraints which are solved efficiently by means of MATLAB LMI Toolbox. The proposed controller is applied to an electric vehicle suspension and its effectiveness is demonstrated through computer simulation.
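The parallel-distributed compensation idea in the abstract can be sketched without the LMI machinery. Everything numeric below is an assumption for illustration (two rules, made-up gains and mass range): PDC blends local linear feedback gains by the T-S membership of the scheduling variable, here the sprung mass.

```python
# Two-rule T-S sketch: a 'light' gain K_lo and a 'heavy' gain K_hi are blended by
# the membership h(m) of the 'heavy mass' rule; u = -K(h) x.

def pdc_control(x, m, m_lo=300.0, m_hi=450.0, K_lo=(-2.0, -0.5), K_hi=(-3.0, -0.8)):
    h = min(max((m - m_lo) / (m_hi - m_lo), 0.0), 1.0)   # membership in [0, 1]
    K = [(1 - h) * kl + h * kh for kl, kh in zip(K_lo, K_hi)]
    return -sum(k * xi for k, xi in zip(K, x))

print(round(pdc_control(x=(0.05, -0.2), m=375.0), 4))
```

In the paper the individual gains are not hand-picked as here but come from solving the H∞ LMIs, which guarantee performance for every admissible blend.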
Cooperation Helps Power Saving
2009-04-07
the destination node hears the poll, the link between the two nodes is activated. In the original STEM, two radios working on two separate channels are used: one radio is...
NASA Astrophysics Data System (ADS)
Joo, Seong-Soon; Nam, Hyun-Soon; Lim, Chang-Kyu
2003-08-01
With the rapid growth of the Optical Internet, high-capacity pipes are finally destined to support end-to-end IP on the WDM optical network. Newly launched 2D MEMS optical switching modules on the market, and expectations of upcoming transparent optical cross-connects in the network, have encouraged field-applicable research on establishing a real all-optical transparent network. To open up customer-driven bandwidth services, design of the optical transport network becomes a more challenging task in terms of optimal network resource usage. This paper presents a practical approach to finding a route and wavelength assignment (RWA) for a wavelength-routed all-optical network, which has λ-plane OXC switches and wavelength converters, and supports optical paths that are randomly set up and released by dynamic wavelength provisioning to create bandwidth between end users on timescales of the order of seconds or milliseconds. We suggest three constraints that make the RWA problem more practical for deployment in a wavelength-routed all-optical network, from a network viewpoint: a limitation on the maximum hops of a route within bearable optical network impairments, a limitation on the minimum hops to travel before converting a wavelength, and a limitation on the calculation time to find all routes for connections requested at once. We design the NRCD (Normalized Resource and Constraints for All-Optical Network RWA Design) algorithm for the Tera OXC: the network resource for a route is calculated from the number of internal switching paths established in each OXC node on the route, normalized by the ratio of the number of paths established to the number of paths equipped in a node. We show that it fits as the RWA algorithm for the wavelength-routed all-optical network through real experiments on the distributed objects platform.
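The first of the three constraints, a maximum hop count per route, can be sketched as a bounded path enumeration. The topology below is hypothetical and this is not the NRCD algorithm itself, only the candidate-route filter it would operate on.

```python
# Enumerate simple paths from src to dst using at most max_hops links (BFS over
# partial paths). Routes longer than the impairment budget are never generated.

from collections import deque

def routes_within_hops(graph, src, dst, max_hops):
    found, q = [], deque([[src]])
    while q:
        path = q.popleft()
        if path[-1] == dst:
            found.append(path)
            continue
        if len(path) - 1 == max_hops:          # hop budget exhausted
            continue
        for nxt in graph[path[-1]]:
            if nxt not in path:                # keep paths simple (no loops)
                q.append(path + [nxt])
    return found

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(routes_within_hops(graph, 'A', 'D', max_hops=2))
```

An NRCD-style cost (switching paths used per node, normalized by node capacity) would then rank the surviving candidates.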
NASA Astrophysics Data System (ADS)
Fouquet, Yves; Cambon, Pierre; Etoubleau, Joël; Charlou, Jean Luc; Ondréas, Hélène; Barriga, Fernando J. A. S.; Cherkashov, Georgy; Semkova, Tatiana; Poroshina, Irina; Bohn, M.; Donval, Jean Pierre; Henry, Katell; Murphy, Pamela; Rouxel, Olivier
Several hydrothermal deposits associated with ultramafic rocks have recently been found along slow spreading ridges with a low magmatic budget. Three preferential settings are identified: (1) rift valley walls near the amagmatic ends of ridge segments; (2) nontransform offsets; and (3) ultramafic domes at inside corners of ridge transform-fault intersections. The exposed mantle at these sites is often interpreted to be a detachment fault. Hydrothermal cells in ultramafic rocks may be driven by regional heat flow, cooling gabbroic intrusions, and exothermic heat produced during serpentinization. Along the Mid-Atlantic Ridge (MAR), hydrothermal deposits in ultramafic rocks include the following: (1) sulfide mounds related to high-temperature low-pH fluids (Logatchev, Rainbow, and Ashadze); (2) carbonate chimneys related to low-temperature, high-pH fluids (Lost City); (3) low-temperature diffuse venting and high-methane discharge associated with silica, minor sulfides, manganese oxides, and pervasive alteration (Saldanha); and (4) stockwork quartz veins with sulfides at the base of detachment faults (15°05'N). These settings are closely linked to preferential circulation of fluid along permeable detachment faults. Compared to mineralization in basaltic environments, sulfide deposits associated with ultramafic rocks are enriched in Cu, Zn, Co, Au, and Ni. Gold has a bimodal distribution in low-temperature Zn-rich and in high-temperature Cu-rich mineral assemblages. The Cu-Zn-Co-Au deposits along the MAR seem to be more abundant than in ophiolites on land. This may be because ultramafic-hosted volcanogenic massive sulfide deposits on slow spreading ridges are usually not accreted to continental margins during obduction and may constitute a specific marine type of mineralization.
Experimental fault characterization of a neural network
NASA Technical Reports Server (NTRS)
Tan, Chang-Huong
1990-01-01
The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.
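The fault-injection methodology can be illustrated on a toy scale. This is not the paper's simulator: the classifier, data, and stuck-at fault below are invented, and the sketch only shows the measurement loop, injecting a permanent fault into one parameter and comparing misclassification rates before and after.

```python
# Toy one-layer classifier; a 'link fault' is modeled as a weight stuck at zero.

def classify(x, w):
    score = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if score > 0 else 0

data = [([1.0, -0.5], 1), ([-1.0, 0.2], 0), ([0.8, 0.8], 1), ([-0.3, -0.9], 0)]
w = [0.9, 0.4]

def misclassification_rate(w):
    return sum(classify(x, w) != y for x, y in data) / len(data)

baseline = misclassification_rate(w)
w_faulty = list(w)
w_faulty[0] = 0.0                       # inject permanent stuck-at-zero fault
print(baseline, misclassification_rate(w_faulty))  # 0.0 0.5
```

The study's relearning step would correspond to retraining the remaining weights with the fault held in place and measuring how much of the lost accuracy is recovered.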
Dating paleo-seismic faulting in the Taiwan Mountain Belt
NASA Astrophysics Data System (ADS)
Lo, C. H.; Wu, C. Y.; Chu, H. T.; Yui, T. F.
2017-12-01
In-situ 40Ar/39Ar laser microprobe dating was carried out on the Hoping pseudotachylite from a mylonite-fault zone in the metamorphosed basement complex of the active Taiwan Mountain Belt to determine the timing of the responsible earthquake(s). The dating results, distributed between 3.2 and 1.6 Ma with errors ranging from 0.2 to 1.1 Ma, were derived from a combination of two Ar isotopic system end-members with inverse isochron ages of 1.55±0.05 and 2.87±0.07 Ma, respectively. Fault melt was found mixed with ultracataclasite in petrographic observations; therefore the older inverse isochron end-member may be attributed to the relic wall-rock Ar isotopic system contained in micro-breccia, consistent with published 40Ar/39Ar mylonitization ages of 4.1 to 3.0 Ma. With no significant Ar loss expected, the young 1.6 Ma end-member represents the Ar isotopic system, and hence the age, of the pseudotachylite itself. Seismic faulting therefore occurred during basement-rock exhumation in the Taiwanese hinterland.
A distributed fault-tolerant signal processor /FTSP/
NASA Astrophysics Data System (ADS)
Bonneau, R. J.; Evett, R. C.; Young, M. J.
1980-01-01
A digital fault-tolerant signal processor (FTSP), an example of a self-repairing programmable system, is analyzed. The design configuration is discussed in terms of fault tolerance, system-level fault detection and isolation, and common memory. Special attention is given to the FDIR (fault detection, isolation and reconfiguration) logic, noting that the reconfiguration decisions are based on configuration, summary status, end-around tests, and north marker/synchro data. Several mechanisms of fault detection are described which initiate reconfiguration at different levels. It is concluded that the reliability of a signal processor can be significantly enhanced by the use of fault-tolerant techniques.
NASA Astrophysics Data System (ADS)
Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes
2018-04-01
Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.
NASA Astrophysics Data System (ADS)
Neely, J. S.; Huang, Y.; Furlong, K.
2017-12-01
Subduction-Transform Edge Propagator (STEP) faults, produced by the tearing of a subducting plate, allow us to study the development of a transform plate boundary and improve our understanding of both long-term geologic processes and short-term seismic hazards. The 280 km long San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, shows along-strike variations in earthquake behaviors. The segment of the SCT closest to the tear rarely hosts earthquakes > Mw 6, whereas the SCT sections more than 80-100 km from the tear experience Mw 7 earthquakes with repeated rupture along the same segments. To understand the effect of cumulative displacement on SCT seismicity, we analyze b-values, centroid time delays and corner frequencies of the SCT earthquakes. We use the spectral ratio method based on Empirical Green's Functions (eGfs) to isolate source effects from propagation and site effects. We find high b-values along the SCT closest to the tear, with values decreasing with distance before finally increasing again towards the far end of the SCT. Centroid time delays for the Mw 7 strike-slip earthquakes increase with distance from the tear, but corner frequency estimates for a recent sequence of Mw 7 earthquakes are approximately equal, indicating a growing complexity in earthquake behavior with distance from the tear due to a displacement-driven transform boundary development process (see figure). The increasing complexity possibly stems from the earthquakes along the eastern SCT rupturing through multiple asperities, resulting in multiple moment pulses. If not for the bounding Vanuatu subduction zone at the far end of the SCT, the eastern SCT section, which has experienced the most displacement, might be capable of hosting larger earthquakes.
When assessing the seismic hazard of other STEP faults, cumulative fault displacement should be considered a key input in determining potential earthquake size.
Aagaard, Brad T.; Hall, J.F.; Heaton, T.H.
2004-01-01
We study how the fault dip and slip rake angles affect near-source ground velocities and displacements as faulting transitions from strike-slip motion on a vertical fault to thrust motion on a shallow-dipping fault. Ground motions are computed for five fault geometries with different combinations of fault dip and rake angles and common values for the fault area and the average slip. The nature of the shear-wave directivity is the key factor in determining the size and distribution of the peak velocities and displacements. Strong shear-wave directivity requires that (1) the observer is located in the direction of rupture propagation and (2) the rupture propagates parallel to the direction of the fault slip vector. We show that predominantly along-strike rupture of a thrust fault (geometry similar to that of the Chi-Chi earthquake) minimizes the area subjected to large-amplitude velocity pulses associated with rupture directivity, because the rupture propagates perpendicular to the slip vector; that is, the rupture propagates in the direction of a node in the shear-wave radiation pattern. In our simulations with a shallow hypocenter, the maximum peak-to-peak horizontal velocities exceed 1.5 m/sec over an area of only 200 km² for the 30°-dipping fault (geometry similar to the Chi-Chi earthquake), whereas for the 60°- and 75°-dipping faults this velocity is exceeded over an area of 2700 km². These simulations indicate that the area subjected to large-amplitude long-period ground motions would be larger for events of the same size as Chi-Chi that have different styles of faulting or a deeper hypocenter.
On the stochastic dissemination of faults in an admissible network
NASA Technical Reports Server (NTRS)
Kyrala, A.
1987-01-01
The dynamic distribution of faults in a general type of network is discussed. The starting point is a uniquely branched network in which each pair of nodes is connected by a single branch. Mathematical expressions for the uniquely branched network transition matrix are derived to show that sufficient stationarity exists to ensure the validity of the use of the Markov chain model to analyze networks. In addition, the conditions for the use of semi-Markov models are discussed. General mathematical expressions are derived in an examination of branch-redundancy techniques commonly used to increase reliability.
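The Markov chain view described above can be sketched numerically. The three-state transition matrix below is an invented example, not from the report: once the transition probabilities are shown to be stationary, the fault-state distribution simply evolves by repeated multiplication with the transition matrix.

```python
# p_{k+1} = p_k @ P for a stationary transition matrix P.
# States: 0 = healthy, 1 = degraded, 2 = failed (absorbing).

def step(p, P):
    n = len(p)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.90, 0.08, 0.02],
     [0.00, 0.80, 0.20],
     [0.00, 0.00, 1.00]]

p = [1.0, 0.0, 0.0]          # start fully healthy
for _ in range(10):
    p = step(p, P)
print([round(x, 3) for x in p])  # probability mass drifts toward the failed state
```

A semi-Markov variant would additionally let the holding time in each state follow an arbitrary distribution rather than the geometric one implied here.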
Pavlis, T.L.; Picornell, C.; Serpa, L.; Bruhn, R.L.; Plafker, G.
2004-01-01
Oblique convergence in the St. Elias orogen of southern Alaska and northwestern Canada has constructed the world's highest coastal mountain range and is the principal driver constructing all of the high topography in northern North America. The orogen originated when the Yakutat terrane was excised from the Cordilleran margin and was transported along margin-parallel strike-slip faults into the subduction-transform transition at the eastern end of the Aleutian trench. We examine the last 3 m.y. of this collision through an analysis of Euler poles for motion of the Yakutat microplate with respect to North America and the Pacific. This analysis indicates a Yakutat-Pacific pole near the present southern triple junction of the microplate and predicts convergence to dextral-oblique convergence across the offshore Transition fault, onland structures adjacent to the Yakutat foreland, or both, with plate speeds increasing from 10 to 30 mm/yr from southeast to northwest. Reconstructions based on these poles show that NNW transport of the collided block into the NE trending subduction zone forced contraction of EW line elements as the collided block was driven into the subduction-transform transition. This suggests the collided block was constricted as it was driven into the transition. Constriction provides an explanation for observed vertical-axis refolding of both earlier formed fold-thrust systems and the collisional suture at the top of the fold-thrust stack. We also suggest that this motion was partially accommodated by lateral extrusion of the western portion of the orogen toward the Aleutian trench. Important questions remain regarding which structures accommodated parts of this motion. The Transition fault may have accommodated much of the Yakutat-Pacific convergence on the basis of our analysis and previous interpretations of GPS-based geodetic data.
Nonetheless, it is locally overlapped by up to 800 m of undeformed sediment, yet elsewhere shows evidence of young deformation. This contradiction could be produced if the overlapping sediments are too young to have accumulated significant deformation, or GPS motions may be deflected by transient strains or strains from poorly understood fault interactions. In either case, more data are needed to resolve the paradox. Copyright 2004 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Shipton, Z.; Caine, J. S.; Lunn, R. J.
2013-12-01
Geologists are tiny creatures living on the 2-and-a-bit-D surface of a sphere who observe essentially 1D, vanishingly small portions (boreholes, roadcuts, stream and beach sections) of complex, 4D tectonic-scale structures. Field observations of fault zones are essential to understand the processes of fault growth and to make predictions of fault zone mechanical and hydraulic properties at depth. Here, we argue that a failure of geologists to communicate their knowledge effectively to other scientists and engineers can lead to unrealistic assumptions being made about fault properties, and may result in poor economic performance and a lack of robustness in industrial safety cases. Fault zones are composed of many heterogeneously distributed deformation-related elements. Low-permeability features include regions of intense grain-size reduction, pressure solution, cementation and shale smears. Other elements are likely to have enhanced permeability through fractures and breccias. Slip surfaces can have either enhanced or reduced permeability depending on whether they are open or closed, and on the local stress state. The highly variable nature of (1) the architecture of faults and (2) the properties of deformation-related elements demonstrates that there are many factors controlling the evolution of fault zone internal structures (fault architecture). The aim of many field studies of faults is to provide data to constrain predictions at depth. For these data to be useful, pooling of data from multiple sites is usually necessary. This effort is frequently hampered by variability in the usage of fault terminologies. In addition, these terms are often used in such a way as to make it easy for 'end-users' such as petroleum reservoir engineers, mining geologists, and seismologists to misinterpret or over-simplify the implications of field studies.
Field geologists are comfortable knowing that if you walk along strike or up dip of a fault zone you will find variations in fault rock type, number and orientations of slip surfaces, variation in fracture density, relays, asperities, variable juxtaposition relationships, etc. Problems can arise when "users" of structural geology try to apply models to general cases without understanding that these are simplified models. For example, a section like the one in Chester and Logan 1996 may get projected infinitely into the third dimension along a fault the size of the San Andreas (seismology), or Shale Gouge Ratios may be blindly applied to an Allen diagram without recognising that sub-seismic-scale relays may provide "hidden" juxtapositions resulting in fluids bypassing low-permeability fault cores. Phrases like 'low-permeability fault core and high-permeability damage zone' fail to appreciate fault zone complexity. Internecine arguments over the details of terminology that baffle the "end-users" can make detailed field studies that characterise fault heterogeneity seem irrelevant. We argue that the field geology community needs to consider ways to ensure that we educate end-users in appropriate and cautious approaches to the use of the data we provide, with an appreciation of the uncertainties inherent in our limited ability to characterize 4D tectonic structures, while also conveying the value of carefully collected field data.
Nucleation and triggering of earthquake slip: effect of periodic stresses
Dieterich, J.H.
1987-01-01
Results of stability analyses for spring and slider systems, with state variable constitutive properties, are applied to slip on embedded fault patches. Unstable slip may nucleate only if the slipping patch exceeds some minimum size. Subsequent to the onset of instability the earthquake slip may propagate well beyond the patch. It is proposed that the seismicity of a volume of the earth's crust is determined by the distribution of initial conditions on the population of fault patches that nucleate earthquake slip, and the loading history acting upon the volume. Patches with constitutive properties inferred from laboratory experiments are characterized by an interval of self-driven accelerating slip prior to instability, if initial stress exceeds a minimum threshold. This delayed instability of the patches provides an explanation for the occurrence of aftershocks and foreshocks, including the decay of earthquake rates as 1/time. A population of patches subjected to loading with a periodic component results in periodic variation of the rate of occurrence of instabilities. The change of the rate of seismicity for a sinusoidal load is proportional to the amplitude of the periodic stress component and inversely proportional to both the normal stress acting on the fault patches and the constitutive parameter, A1, that controls the direct velocity dependence of fault slip. Values of A1 representative of laboratory experiments indicate that in a homogeneous crust, correlation of earthquake rates with earth tides should not be detectable at normal stresses in excess of about 8 MPa. Correlation of earthquakes with tides at higher normal stresses can be explained if there exist inhomogeneities that locally amplify the magnitude of the tidal stresses. Such amplification might occur near magma chambers or other soft inclusions in the crust and possibly near the ends of creeping fault segments if the creep or afterslip rates vary in response to tides.
Observations of seismicity rate variations associated with seasonal fluctuations of reservoir levels appear to be consistent with the model. © 1987.
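The scaling in the abstract can be made concrete with a short worked example. The numbers are illustrative assumptions, not values from the paper: the modulation of seismicity rate under a stress perturbation dtau scales roughly as exp(dtau / (A1 * sigma)), so it grows with the stress amplitude and shrinks with both normal stress sigma and the direct-effect parameter A1.

```python
import math

def rate_ratio(dtau, A1, sigma):
    """Peak seismicity rate relative to background for a stress perturbation dtau.
    dtau and sigma in MPa; A1 dimensionless (rate-and-state direct effect)."""
    return math.exp(dtau / (A1 * sigma))

tidal_dtau = 0.003    # kPa-scale tidal stress amplitude, expressed in MPa (assumed)
A1 = 0.01             # lab-representative direct-effect parameter (assumed)
for sigma in (1.0, 8.0, 50.0):
    print(sigma, round(rate_ratio(tidal_dtau, A1, sigma), 3))
```

At low normal stress the tidal modulation is tens of percent, while near and above the ~8 MPa threshold quoted in the abstract it falls to a few percent or less, consistent with tidal correlation becoming undetectable in a homogeneous crust.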
A routing protocol based on energy and link quality for Internet of Things applications.
Machado, Kássio; Rosário, Denis; Cerqueira, Eduardo; Loureiro, Antonio A F; Neto, Augusto; Souza, José Neuman de
2013-02-04
The Internet of Things (IoT) is attracting considerable attention from the universities, industries, citizens and governments for applications, such as healthcare, environmental monitoring and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing and memory. Additionally, low-power radios are very sensitive to noise, interference and multipath distortions. In this context, this article proposes a routing protocol based on Routing by Energy and Link quality (REL) for IoT applications. To increase reliability and energy-efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy and hop count. Furthermore, REL proposes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes/networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases the network lifetime and services availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate, compared with the performance of well-known protocols.
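The route-selection criteria named in the abstract (end-to-end link quality, residual energy, hop count) can be sketched as a ranking function. The weighting scheme and metrics below are assumptions for illustration, not the published REL protocol: end-to-end link quality is bounded by the weakest link, and the bottleneck node's residual energy is what premature depletion depends on.

```python
# Each candidate route carries per-link quality estimates (0..1), per-node residual
# energy (0..1), and a hop count; routes are compared lexicographically.

def route_score(route):
    e2e_lq = min(route['lq'])            # weakest link bounds end-to-end quality
    min_energy = min(route['energy'])    # avoid draining the most depleted node
    return (e2e_lq, min_energy, -route['hops'])

routes = [
    {'name': 'short',  'lq': [0.9, 0.4],      'energy': [0.8, 0.7],      'hops': 2},
    {'name': 'robust', 'lq': [0.8, 0.8, 0.7], 'energy': [0.9, 0.6, 0.9], 'hops': 3},
]
best = max(routes, key=route_score)
print(best['name'])  # 'robust': better bottleneck link quality despite an extra hop
```

The event-driven load-balancing mechanism in the paper would correspond to re-running this selection when a node's residual energy drops below a threshold.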
A Routing Protocol Based on Energy and Link Quality for Internet of Things Applications
Machado, Kassio; Rosário, Denis; Cerqueira, Eduardo; Loureiro, Antonio A. F.; Neto, Augusto; de Souza, José Neuman
2013-01-01
The Internet of Things (IoT) is attracting considerable attention from universities, industry, citizens and governments for applications such as healthcare, environmental monitoring and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing and memory. Additionally, low-power radios are very sensitive to noise, interference and multipath distortion. This article proposes Routing by Energy and Link quality (REL), a routing protocol for IoT applications. To increase reliability and energy-efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy and hop count. Furthermore, REL proposes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes/networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases the network lifetime and service availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate, compared with the performance of well-known protocols. PMID:23385410
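REL's route selection (end-to-end link quality first, then residual energy, then hop count, with load balancing away from depleted nodes) can be sketched as a lexicographic choice. The field names, threshold, and tie-breaking order below are assumptions for illustration, not REL's actual packet formats or weighting:

```python
def select_route(routes, energy_threshold=0.2):
    """Choose a route by end-to-end link quality ('lqi', higher is
    better), breaking ties by the weakest node's residual energy
    ('min_energy', 0..1), then by fewest hops. Routes whose weakest
    node is below energy_threshold are avoided whenever an alternative
    exists, mimicking load balancing against premature depletion."""
    usable = [r for r in routes if r["min_energy"] >= energy_threshold] or list(routes)
    return max(usable, key=lambda r: (r["lqi"], r["min_energy"], -r["hops"]))
```

With this policy a route through a nearly depleted node loses to a slightly lower-quality but healthier alternative, which is the behavior the abstract credits for extending network lifetime.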
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2007-01-01
This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.
Trade-off Analysis of Underwater Acoustic Sensor Networks
NASA Astrophysics Data System (ADS)
Tuna, G.; Das, R.
2017-09-01
In the last couple of decades, Underwater Acoustic Sensor Networks (UASNs) have been used for various commercial and non-commercial purposes. However, underwater environments impose specific inherent constraints, such as a high bit error rate, large and variable propagation delay, limited bandwidth, and short-range communication, which severely degrade the performance of UASNs and limit the lifetime of underwater sensor nodes. Ensuring the reliability of UASN applications therefore poses a challenge. In this study, we try to balance the energy consumption of underwater acoustic sensor networks and minimize end-to-end delay using an efficient node placement strategy. Our simulation results reveal that reducing the number of hops reduces energy consumption but increases end-to-end delay. Hence, application-specific requirements must be taken into consideration when determining a node deployment strategy.
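The reported trade-off (fewer hops lowers energy but raises end-to-end delay) can be reproduced by a toy model that assumes a fixed per-hop relay energy and an acoustic bit rate that degrades with per-hop range. All constants and the rate law here are assumptions of the sketch, not parameters from the study:

```python
def hop_tradeoff(total_dist_m, n_hops, bits=256 * 8, e_per_hop_j=0.5,
                 r0_bps=10_000, d0_m=100.0, c_ms=1500.0):
    """Toy UASN model. Each relayed hop costs a fixed energy, and the
    achievable acoustic bit rate falls off with per-hop range d as
    (d0/d)**2 (assumed). Sound speed c_ms is ~1500 m/s in seawater.
    Returns (energy_J, end_to_end_delay_s)."""
    d = total_dist_m / n_hops
    rate = r0_bps * (d0_m / d) ** 2              # longer range -> lower rate
    energy = n_hops * e_per_hop_j                # fewer hops -> less energy
    delay = n_hops * bits / rate + total_dist_m / c_ms  # serialization + propagation
    return energy, delay
```

Under these assumptions a 2-hop path over 2 km spends less energy but takes several times longer than an 8-hop path, mirroring the simulation finding.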
A framework for visualization of battlefield network behavior
NASA Astrophysics Data System (ADS)
Perzov, Yury; Yurcik, William
2006-05-01
An extensible network simulation application was developed to study wireless battlefield communications. The application monitors node mobility and depicts broadcast and unicast traffic as expanding rings and directed links. The network simulation was specially designed to support fault injection to show the impact of air strikes on disabling nodes. The application takes standard ns-2 trace files as an input and provides for performance data output in different graphical forms (histograms and x/y plots). Network visualization via animation of simulation output can be saved in AVI format that may serve as a basis for a real-time battlefield awareness system.
Software/hardware distributed processing network supporting the Ada environment
NASA Astrophysics Data System (ADS)
Wood, Richard J.; Pryk, Zen
1993-09-01
A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor, VHSIC ASICs for high-speed, reliable inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit for a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.
NASA Astrophysics Data System (ADS)
Lee, J.; Chen, H.; Hsu, Y.; Yu, S.
2013-12-01
Active faults have developed into a rather complex three-thrust fault system at the southern end of the narrow Longitudinal Valley in eastern Taiwan, a present-day on-land plate suture between the Philippine Sea plate and Eurasia. Based on more than ten years of geodetic data (including GPS and levelling), field geological investigation, seismological data, and regional tomography, this paper aims at elucidating the architecture of this three-thrust system and the associated surface deformation, as well as providing insights on fault kinematics, slip behaviors and implications for regional tectonics. Combining the results of interseismic (secular) horizontal and vertical velocities, we are able to map the surface traces of the three active faults in the Taitung area. The west-verging Longitudinal Valley Fault (LVF), along which the Coastal Range of the northern Luzon arc is thrusting over the Central Range of the Chinese continental margin, branches into two active strands bounding both sides of an uplifted, folded Quaternary fluvial deposit (the Peinanshan massif) within the valley: the Lichi fault to the east and the Luyeh fault to the west. Both faults are creeping, to some extent, at shallow levels. However, while the Luyeh fault shows nearly pure thrust motion, the Lichi fault reveals a transpressional regime in the north and transtension at the south end of the LVF in the Taitung plain. The results suggest that the deformation at the southern end of the Longitudinal Valley corresponds to a transition zone from present arc-collision to the pre-collision zone offshore SE Taiwan. 
Concerning the Central Range fault, the third major fault in the area, the secular velocities indicate that the fault is mostly locked during the interseismic period and the accumulated strain would be able to produce a moderate earthquake, such as the 2006 M6.1 Peinan earthquake, which was expressed by an oblique thrust (verging east) with a significant left-lateral strike-slip component. Taking into account the recent study of regional seismic Vp tomography, a high-velocity zone with a steep east-dipping angle fills the gap under the Longitudinal Valley between the oppositely verging LVF and the Central Range fault, implying a possible rolled-back forearc basement under the Coastal Range.
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; Gholami, Khalid El
2014-01-01
Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS, the constrained WSN resources (processor, memory and energy) can be utilized efficiently and user application development can be served soundly. In this article, a new hybrid, real-time, memory-efficient, energy-efficient, user-friendly and fault-tolerant WSN OS, MIROS, is designed and implemented. MIROS implements a hybrid scheduler and a dynamic memory allocator, so real-time scheduling can be achieved with low memory consumption. In addition, it implements a mid-layer software, EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment), to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover, MIROS combines both software and multi-core hardware techniques to conserve energy resources, improve node reliability, and achieve a new debugging method. To evaluate the performance of MIROS, it is compared with other WSN OSes (TinyOS, Contiki, SOS, openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable for use even on tightly resource-constrained WSN nodes. It can support real-time WSN applications. Furthermore, it is energy-efficient, user-friendly and fault-tolerant. PMID:25248069
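A hybrid scheduler of the kind the abstract describes can be sketched as a priority-ordered real-time queue layered over a run-to-completion event queue, as in purely event-driven sensor kernels. The class below is an illustrative assumption, not the MIROS implementation:

```python
import heapq
from collections import deque

class HybridScheduler:
    """Illustrative hybrid scheduler: real-time tasks run first,
    ordered by priority from a min-heap; non-real-time event tasks run
    to completion in FIFO order only when no real-time task is ready."""
    def __init__(self):
        self._rt = []            # (priority, seq, fn) min-heap
        self._events = deque()   # FIFO event tasks
        self._seq = 0            # tie-breaker preserving post order

    def post_rt(self, priority, fn):
        heapq.heappush(self._rt, (priority, self._seq, fn))
        self._seq += 1

    def post_event(self, fn):
        self._events.append(fn)

    def run(self):
        """Drain both queues, returning task results in execution order."""
        order = []
        while self._rt or self._events:
            if self._rt:                     # RT tasks take precedence
                _, _, fn = heapq.heappop(self._rt)
            else:
                fn = self._events.popleft()
            order.append(fn())
        return order
```

The design point this illustrates is the memory argument from the abstract: only the real-time set needs per-task bookkeeping, while event tasks share one stack.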
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ofek, Y.
1994-05-01
This work describes a new technique, based on exchanging control signals between neighboring nodes, for constructing a stable and fault-tolerant global clock in a distributed system with an arbitrary topology. It is shown that it is possible to construct a global clock reference with a time step that is much smaller than the propagation delay over the network's links. The synchronization algorithm ensures that the global clock 'tick' has a stable periodicity, and therefore, it is possible to tolerate failures of links and clocks that operate faster and/or slower than nominally specified, as well as hard failures. The approach taken in this work is to generate a global clock from the ensemble of the local transmission clocks and not to directly synchronize these high-speed clocks. The steady-state algorithm, which generates the global clock, is executed in hardware by the network interface of each node. At the network interface, it is possible to measure accurately the propagation delay between neighboring nodes with a small error or uncertainty and thereby to achieve global synchronization that is proportional to these error measurements. It is shown that the local clock drift (or rate uncertainty) has only a secondary effect on the maximum global clock rate. The synchronization algorithm can tolerate any physical failure. 18 refs.
Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.
Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan
2017-06-26
Sensor networks become increasingly a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock driven method with a dynamic, event driven scheduling technique, in order to provide high execution predictability, while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H²RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with ARM7 microcontroller.
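The sufficiency tests in the paper are specific to H²RTS; as a generic illustration of a processor-demand schedulability check, here is the classical demand-bound test for implicit-deadline periodic tasks under EDF (the task model and test are standard real-time theory, not the H²RTS-specific conditions):

```python
from math import floor, gcd
from functools import reduce

def demand_bound(tasks, t):
    """Processor demand h(t) for implicit-deadline periodic tasks
    (C_i, T_i): total execution time of all jobs with deadline <= t."""
    return sum(floor(t / T) * C for C, T in tasks)

def edf_schedulable(tasks):
    """EDF schedulability for implicit-deadline periodic tasks:
    h(t) <= t at every deadline up to the hyperperiod (lcm of periods)."""
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (T for _, T in tasks))
    deadlines = sorted({k * T for _, T in tasks
                        for k in range(1, hyper // T + 1)})
    return all(demand_bound(tasks, t) <= t for t in deadlines)
```

For example, tasks (C=1, T=4), (C=2, T=6), (C=3, T=12) utilize about 83% of the CPU and pass, while (C=3, T=4), (C=2, T=6) exceed 100% utilization and fail at t = 12.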
Energy modelling in sensor networks
NASA Astrophysics Data System (ADS)
Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.
2007-06-01
Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce, so a key challenge is the design of energy-efficient communication protocols. Models of energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimization in a model-driven design process. We propose a novel methodology to create energy models for sensor nodes based on a few simple measurements. In a case study, the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in an SDL runtime framework of a model-driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power-saving strategies.
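A state-based energy model of the kind described reduces to summing power-draw times dwell-time over the node's operating states. The per-state figures below are illustrative assumptions for a MICAz-class node, not the authors' measured values:

```python
# Hypothetical per-state power draws (mW); the paper derives the real
# figures from a few simple measurements on actual MICAz hardware.
POWER_MW = {"cpu_active": 24.0, "cpu_sleep": 0.03,
            "radio_rx": 59.1, "radio_tx": 52.2, "radio_off": 0.002}

def energy_mj(schedule):
    """State-based energy model: sum of P_state * duration over a
    schedule of (state, seconds) pairs, returned in millijoules."""
    return sum(POWER_MW[state] * secs for state, secs in schedule)

# A 1% duty cycle vs. an always-on second of operation:
duty_cycled = energy_mj([("cpu_sleep", 0.99), ("radio_off", 0.99),
                         ("cpu_active", 0.01), ("radio_rx", 0.01)])
always_on = energy_mj([("cpu_active", 1.0), ("radio_rx", 1.0)])
```

Even this crude model shows why power-saving strategies dominate: the radio's listening state, not computation, sets the energy budget.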
Broadcasting Topology and Routing Information in Computer Networks
1985-05-01
[Abstract not recoverable from the source scan. Surviving fragments reference Figure 1.2.1 ("Topology Problem Example"), DOWN messages received from nodes 2 and 3, per-link distance entries from a node to each of the link's end nodes, and Figure 3.4.2 ("SPTA Port Distance Table Example").]
Pierce, Kenneth L.; Morgan, Lisa A.
2009-01-01
Both the belts of faulting and the YCHT are asymmetrical across the volcanic hotspot track, flaring out 1.6 times more on the south than the north side. This and the southeast tilt of the Yellowstone plume may reflect southeast flow of the upper mantle.
NASA Astrophysics Data System (ADS)
He, J.; Wang, W.; Xiao, J.
2015-12-01
The 2013 Mw7.7 Balochistan, Pakistan, earthquake occurred on the curved Hoshab fault. This fault connects with the north-south trending Chaman strike-slip fault to the northeast, and with the west-east trending Makran thrust fault system to the southwest. Teleseismic waveform inversion, incorporating coseismic ground surface deformation data, shows that the rupture of this earthquake nucleated around the northeast segment of the fault and then propagated southwestward along the northwest-dipping Hoshab fault for about 200 km, with a maximum coseismic displacement, featuring almost purely left-lateral strike-slip motion, of about 10 meters. In the context of the India-Asia collision framework, and given the fault geometry in this region, the rupture propagation of this earthquake seems not to follow an optimal path along the fault segment, because after nucleation of this event the Hoshab fault southwest of the hypocenter is clamped by the elastic stress change. Here, we build a three-dimensional finite-element model to explore the evolution of both stress and pore pressure during the rupture process of this earthquake. In the model, the crustal deformation is treated as an undrained poroelastic medium as described by Biot's theory, and the instantaneous rupture process is specified with a split-node technique. By testing a reasonable range of parameters, including the coefficient of friction, the undrained Poisson's ratio, and the permeability of the fault zone and the bulk crust, the numerical results show that after nucleation of the rupture around the northeast of the Hoshab fault, the positive change of normal stress (clamping the fault) on the fault plane is greatly reduced by the instantaneous increase of pore pressure (unclamping the fault). This process could hasten the change of Coulomb failure stress resolved on the Hoshab fault, explaining a possible mechanism for the southwestward propagation of the rupture of the Mw7.7 Balochistan earthquake along the Hoshab fault.
Superconducting matrix fault current limiter with current-driven trigger mechanism
Yuan; Xing
2008-04-15
A modular and scalable Matrix-type Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. An inductor is connected in series with the trigger superconductor in the trigger matrix and physically surrounds the superconductor. The current surge during a fault will generate a trigger magnetic field in the series inductor to cause fast and uniform quenching of the trigger superconductor to significantly reduce burnout risk due to superconductor material non-uniformity.
A new method of converter transformer protection without commutation failure
NASA Astrophysics Data System (ADS)
Zhang, Jiayu; Kong, Bo; Liu, Mingchang; Zhang, Jun; Guo, Jianhong; Jing, Xu
2018-01-01
With the development of AC/DC hybrid transmission technology, the converter transformer, as the node of AC/DC conversion in HVDC transmission, must operate reliably, safely and stably. Commutation failure, a common problem in DC transmission, poses a serious threat to the safe and stable operation of the power grid. According to the commutation relation between the AC bus voltage of the converter station and the output DC voltage of the converter, the generalized transformation ratio is defined, and a new method of converter transformer protection based on the generalized transformation ratio is put forward. The method uses the generalized ratio for on-line monitoring of faulty or abnormal commutation components, and uses the current characteristics of the valve-side bushing CT to identify converter transformer faults accurately; it is not influenced by the presence of commutation failure. Fault analysis and EMTDC/PSCAD simulation show that the protection operates correctly under various converter faults.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Shuai; Wang, Jian
In this work, using the Cu–Ni (111) semi-coherent interface as a model system, we combine atomistic simulations and defect theory to reveal the relaxation mechanisms, structure, and properties of semi-coherent interfaces. By calculating the generalized stacking fault energy (GSFE) profile of the interface, two stable structures and a high-energy structure are located. During the relaxation, the regions that possess the stable structures expand and develop into coherent regions; the regions with the high-energy structure shrink into the intersections of misfit dislocations (nodes). This process reduces the interface excess potential energy but increases the core energy of the misfit dislocations and nodes. The core width is dependent on the GSFE of the interface. The high-energy structure relaxes by relative rotation and dilatation between the crystals. The relative rotation is responsible for the spiral pattern at nodes. The relative dilatation is responsible for the creation of free volume at nodes, which facilitates the nodes' structural transformation. Several node structures have been observed and analyzed. In conclusion, the various structures have significant impact on plastic deformation in terms of lattice dislocation nucleation, as well as point defect formation energies.
The 1999 Izmit, Turkey, earthquake: A 3D dynamic stress transfer model of intraearthquake triggering
Harris, R.A.; Dolan, J.F.; Hartleb, R.; Day, S.M.
2002-01-01
Before the August 1999 Izmit (Kocaeli), Turkey, earthquake, theoretical studies of earthquake ruptures and geological observations had provided estimates of how far an earthquake might jump to get to a neighboring fault. Both numerical simulations and geological observations suggested that 5 km might be the upper limit if there were no transfer faults. The Izmit earthquake appears to have followed these expectations. It did not jump across any step-over wider than 5 km and was instead stopped by a narrower step-over at its eastern end and possibly by a stress shadow caused by a historic large earthquake at its western end. Our 3D spontaneous rupture simulations of the 1999 Izmit earthquake provide two new insights: (1) the west- to east-striking fault segments of this part of the North Anatolian fault are oriented so as to be low-stress faults and (2) the easternmost segment involved in the August 1999 rupture may be dipping. An interesting feature of the Izmit earthquake is that a 5-km-long gap in surface rupture and an adjacent 25° restraining bend in the fault zone did not stop the earthquake. The latter observation is a warning that significant fault bends in strike-slip faults may not arrest future earthquakes.
Gao, Yuan; Min, Kyungji; Zhang, Yibing; Su, John; Greenwood, Matthew; Gronert, Karsten
2015-01-01
Immune-driven dry eye disease primarily affects women; the cause of this sex-specific prevalence is unknown. PMN have distinct phenotypes that drive inflammation but also regulate lymphocytes, and are the rate-limiting cell for generating the anti-inflammatory lipoxin A4 (LXA4). Estrogen regulates the LXA4 circuit to induce delayed female-specific wound healing in the cornea. However, the role of PMN in dry eye disease remains unexplored. We discovered an LXA4-producing tissue-PMN population in the corneal limbus, lacrimal glands and cervical lymph nodes of healthy male and female mice. These tissue-PMN, unlike inflammatory PMN, expressed a highly amplified LXA4 circuit and were sex-specifically regulated during immune-driven dry eye disease. Desiccating stress in females, unlike in males, triggered a remarkable decrease in lymph node PMN and LXA4 formation that remained depressed during dry eye disease. Depressed lymph node PMN and LXA4 in females correlated with an increase in T effector cells (TH1 and TH17), a decrease in regulatory T cells (Treg) and increased dry eye pathogenesis. Antibody depletion of tissue-PMN abrogated LXA4 formation in lymph nodes and caused a marked increase in TH1 and TH17 and a decrease in Treg cells. To establish an immune-regulatory role for PMN-derived LXA4 in dry eye, females were treated with LXA4. LXA4 treatment markedly inhibited TH1 and TH17 and amplified Treg cells in draining lymph nodes, while reducing dry eye pathogenesis. These results identify female-specific regulation of LXA4-producing tissue-PMN as a potential key factor in aberrant T effector cell activation and the initiation of immune-driven dry eye disease. PMID:26324767
Fault Location Based on Synchronized Measurements: A Comprehensive Survey
Al-Mohammed, A. H.; Abido, M. A.
2014-01-01
This paper presents a comprehensive survey of transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators, and therefore fault location on series-compensated lines is discussed. The paper reports work carried out on adaptive fault location algorithms aiming at better fault location accuracy. Work on fault location in power system networks, although limited, is also summarized. Additionally, nonstandard high-frequency fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research. PMID:24701191
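The simplest two-end synchronized-measurement method the survey covers can be written in closed form: both terminals see the same fault-point voltage, so the distance to the fault falls out of one complex equation. This is a sketch for a short line (shunt capacitance neglected); the variable names are assumptions:

```python
def two_end_fault_location(Vs, Is, Vr, Ir, z, line_len):
    """Two-end synchronized-phasor fault location on a short line with
    series impedance z (ohm per unit length) and length line_len.
    Both ends see the fault voltage:
        Vs - d*z*Is = Vr - (line_len - d)*z*Ir
    Solving for the distance d from the sending end:
        d = (Vs - Vr + line_len*z*Ir) / (z*(Is + Ir))
    Phasors are complex; d is returned as a real distance."""
    d = (Vs - Vr + line_len * z * Ir) / (z * (Is + Ir))
    return d.real
```

Because both measurements are time-synchronized, the method needs no assumption about the fault resistance, which is the main advantage over one-end impedance methods.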
NASA Astrophysics Data System (ADS)
Mulyadi, Y.; Sucita, T.; Rahmawan, M. D.
2018-01-01
This study was a case study of PT. PLN (Ltd.) in the APJ Bandung area, focusing on the installation of distributed generation (DG) on 20-kV distribution feeders. The purpose of this study is to find out the effect of DG on the voltage profile and on three-phase short-circuit faults in the 20-kV distribution system, with load conditions assumed to be balanced. The aim is to determine how far DG can improve the voltage profile of the feeder and to what degree DG increases the three-phase short-circuit current on each bus. The method used was comparing the simulation results of power flow and short-circuit faults from the ETAP Power System software with manual calculations. The power flow simulation before DG installation showed a voltage drop at the end of the feeder of 2.515%, while the three-phase short-circuit current at the beginning of the feeder was 13.43 kA. After installing DG with an injection of 50% of DG power, the voltage drop at the end of the feeder was 1.715% and the fault current at the beginning of the network was 14.05 kA. With an injection of 90% of DG power, the voltage drop at the end of the feeder was 1.06% and the fault current at the beginning of the network was 14.13 kA.
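The direction of the fault-current change is easy to see from a Thevenin equivalent: adding DG puts its source impedance in parallel with the grid's, lowering the equivalent impedance and raising the bolted-fault current. The impedance values below are assumptions chosen only to land near the abstract's figures, not data from the study:

```python
import math

def fault_current_ka(v_ll_kv, z_grid_ohm, z_dg_ohm=None):
    """Three-phase bolted-fault current (kA) from a Thevenin equivalent:
        I_f = V_LL / (sqrt(3) * Z_eq)
    With DG present, its source impedance appears in parallel with the
    grid impedance, so Z_eq drops and I_f rises."""
    if z_dg_ohm is None:
        z = z_grid_ohm
    else:
        z = (z_grid_ohm * z_dg_ohm) / (z_grid_ohm + z_dg_ohm)
    return v_ll_kv / (math.sqrt(3) * z)

no_dg = fault_current_ka(20.0, 0.86)        # ~13.4 kA, near the study's value
with_dg = fault_current_ka(20.0, 0.86, 20.0)  # parallel DG impedance -> higher I_f
```

The same parallel-source argument explains the improved voltage profile: DG injection reduces the current, and hence the voltage drop, along the feeder upstream of the DG bus.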
Diversity Driven Coexistence: Collective Stability in the Cyclic Competition of Three Species
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Frey, Erwin; Zia, R. K. P.
2015-03-01
The basic physics of collective behavior are often difficult to quantify and understand, particularly when the system is driven out of equilibrium. Many complex systems are usefully described as complex networks, consisting of nodes and links. The nodes specify individual components of the system and the links describe their interactions. When both nodes and links change dynamically, or `co-evolve', as happens in many realistic systems, complex mathematical structures are encountered, posing challenges to our understanding. In this context, we introduce a minimal system of node and link degrees of freedom, co-evolving with stochastic rules. Specifically, we show that diversity of social temperament (intro- or extroversion) can produce collective stable coexistence when three species compete cyclically. It is well-known that when only extroverts exist in a stochastic rock-paper-scissors game, or in a conserved predator-prey, Lotka-Volterra system, extinction occurs at times of O(N), where N is the number of nodes. We find that when both introverts and extroverts exist, where introverts sever social interactions and extroverts create them, collective coexistence prevails in long-living, quasi-stationary states. Work supported by the NSF through Grants DMR-1206839 (KEB) and DMR-1244666 (RKPZ), and by the AFOSR and DARPA through Grant FA9550-12-1-0405 (KEB).
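The O(N)-time extinction of the all-extrovert cyclic game that the abstract cites can be reproduced with a minimal stochastic simulation. The pairwise update rule below is an assumption of the illustration, not the authors' exact model (which also evolves the link structure):

```python
import random

def rps_moran(n_agents, rng, max_steps=1_000_000):
    """Stochastic rock-paper-scissors on a fully connected population:
    each step a random ordered pair interacts and the cyclic loser
    adopts the winner's species. Returns the step count at which one
    species fixates (all others extinct), or max_steps if none does."""
    beats = {0: 1, 1: 2, 2: 0}            # species k beats species beats[k]
    pop = [k % 3 for k in range(n_agents)]
    for step in range(max_steps):
        if len(set(pop)) == 1:            # fixation: a single species left
            return step
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if beats[pop[i]] == pop[j]:
            pop[j] = pop[i]               # i beats j: j converts
        elif beats[pop[j]] == pop[i]:
            pop[i] = pop[j]               # j beats i: i converts
    return max_steps
```

Once any one species goes extinct, the cyclic symmetry is broken and the remaining pair collapses quickly to fixation, which is why coexistence in this all-extrovert baseline is only transient.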
Principal facts for gravity stations in the vicinity of San Bernardino, Southern California
Anderson, Megan L.; Roberts, Carter W.; Jachens, Robert C.
2000-01-01
New gravity measurements in the vicinity of San Bernardino, California were collected to help define the characteristics of the Rialto-Colton fault. The data were processed using standard reduction formulas and parameters. Rock properties such as lithology, magnetic susceptibility and density also were measured at several locations. Rock property measurements will be helpful for future modeling and density inversion calculations from the gravity data. On both the Bouguer and isostatic gravity maps, a prominent, 13-km long (8 mi), approximately 1-km (0.62 mi) wide gradient with an amplitude of 7 mGal, down to the northeast, is interpreted as the gravity expression of the Rialto-Colton fault. The gravity gradient strikes in a northwest direction and runs from the San Jacinto fault zone at its south end to San Sevine Canyon at the foot of the San Gabriel mountains at its north end. The Rialto-Colton fault has experienced both right-lateral strike-slip and normal fault motion that has offset basement rocks; therefore it is interpreted as a major, through-going fault.
Geometry and kinematics of adhesive wear in brittle strike-slip fault zones
NASA Astrophysics Data System (ADS)
Swanson, Mark T.
2005-05-01
Detailed outcrop surface mapping in Late Paleozoic cataclastic strike-slip faults of coastal Maine shows that asymmetric sidewall ripouts, 0.1-200 m in length, are a significant component of many mapped faults and an important wall rock deformation mechanism during faulting. The geometry of these structures ranges from simple lenses to elongate slabs cut out of the sidewalls of strike-slip faults by a lateral jump of the active zone of slip during adhesion along a section of the main fault. The new irregular trace of the active fault after this jump creates an indenting asperity that is forced to plow through the adjoining wall rock during continued adhesion or be cut off by renewed motion along the main section of the fault. Ripout translation during adhesion sets up the structural asymmetry, with trailing extensional and leading contractional ends to the ripout block. The inactive section of the main fault trace at the trailing end can develop a 'sag' or 'half-graben' type geometry due to block movement along the scallop-shaped connecting ramp to the flanking ripout fault. Leading contractional ramps can develop 'thrust' type imbrication and force the 'humpback' geometry of the ripout slab through distortion of the inactive main fault surface by ripout translation. Similar asymmetric ripout geometries are recognized in many other major crustal-scale strike-slip fault zones worldwide. Ripout structures in the 5-500 km length range can be found on the Atacama fault system of northern Chile, the Qujiang and Xiaojiang fault zones in western China, the Yalakom-Hozameen fault zone in British Columbia and the San Andreas fault system in southern California. For active crustal-scale faults, the surface expression of ripout translation includes a coupled system of extensional trailing ramps, as normal oblique-slip faults with pull-apart basin sedimentation, and contractional leading ramps, as oblique thrust or high-angle reverse faults with associated uplift and erosion. 
The sidewall ripout model, as a mechanism for adhesive wear during fault zone deformation, can be useful in studies of fault zone geometry, kinematics and evolution from outcrop- to crustal-scales.
NASA Astrophysics Data System (ADS)
Zuza, Andrew V.; Yin, An
2016-05-01
Collision-induced continental deformation commonly involves complex interactions between strike-slip faulting and off-fault deformation, yet this relationship has rarely been quantified. In northern Tibet, Cenozoic deformation is expressed by the development of the > 1000-km-long east-striking left-slip Kunlun, Qinling, and Haiyuan faults. Each has a maximum slip in its central segment of tens of kilometers to ~100 km but a much smaller slip magnitude (~< 10% of the maximum slip) at its terminations. The along-strike variation of fault offsets and pervasive off-fault deformation create a strain pattern that departs from the expectations of the classic plate-like rigid-body motion and flow-like distributed deformation end-member models for continental tectonics. Here we propose a non-rigid bookshelf-fault model for the Cenozoic tectonic development of northern Tibet. Our model, quantitatively relating discrete left-slip faulting to distributed off-fault deformation during regional clockwise rotation, explains several puzzling features, including: (1) the clockwise rotation of east-striking left-slip faults against the northeast-striking left-slip Altyn Tagh fault along the northwestern margin of the Tibetan Plateau, (2) alternating fault-parallel extension and shortening in the off-fault regions, and (3) the eastward-tapering map-view geometries of the Qimen Tagh, Qaidam, and Qilian Shan thrust belts that link with the three major left-slip faults in northern Tibet. We refer to this specific non-rigid bookshelf-fault system as a passive bookshelf-fault system because the rotating bookshelf panels are detached from the rigid bounding domains. As a consequence, the wallrock of the strike-slip faults deforms to accommodate both the clockwise rotation of the left-slip faults and the off-fault strain that arises at the fault ends.
An important implication of our model is that the style and magnitude of Cenozoic deformation in northern Tibet vary considerably in the east-west direction. Thus, any single north-south cross section and its kinematic reconstruction through the region do not properly quantify the complex deformational processes of plateau formation.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2002-12-19
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2003-04-22
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
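The components listed above (machine status, partition management, job management, scheduling) are tied together through SLURM's central configuration file. A minimal slurm.conf sketch, with hypothetical cluster, controller, and node names, might look like:

```ini
# Hypothetical minimal slurm.conf (names and sizes are illustrative only)
ClusterName=demo
SlurmctldHost=head0

# Machine status: four compute nodes, 4 CPUs each
NodeName=n[001-004] CPUs=4 State=UNKNOWN

# Partition management: one default partition over all nodes
PartitionName=batch Nodes=n[001-004] Default=YES MaxTime=INFINITE State=UP
```

Jobs would then be submitted against the `batch` partition with the standard `sbatch`/`srun` commands.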
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state. The error detection circuit can detect an upset node in the latch so that the faulty output is corrected; an upset node in the error detection circuit itself is corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has about 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
A Byzantine-Fault Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2006-01-01
Embedded distributed systems have become an integral part of safety-critical computing applications, necessitating system designs that incorporate fault tolerant clock synchronization in order to achieve ultra-reliable assurance levels. Many efficient clock synchronization protocols do not, however, address Byzantine failures, and most protocols that do tolerate Byzantine failures do not self-stabilize. The Byzantine self-stabilizing clock synchronization algorithms that exist in the literature are based either on unjustifiably strong assumptions about initial synchrony of the nodes or on the existence of a common pulse at the nodes. The Byzantine self-stabilizing clock synchronization protocol presented here does not rely on any assumptions about the initial state of the clocks. Furthermore, there is neither a central clock nor an externally generated pulse system. The proposed protocol converges deterministically, is scalable, and self-stabilizes in a short amount of time. The convergence time is linear with respect to the self-stabilization period. Proofs of the correctness of the protocol as well as the results of formal verification efforts are reported.
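To make the fault-masking idea concrete, here is a toy sketch (not the paper's protocol) of a classic Byzantine-tolerant averaging step: each node sorts the clock readings it collected, discards the `f` lowest and `f` highest (which may come from Byzantine nodes), and adopts the midpoint of the survivors. The function name and the `n > 3f` requirement follow the standard approximate-agreement literature, not this paper.

```python
def tolerant_midpoint(readings, f):
    """Discard the f lowest and f highest clock readings (possibly
    Byzantine) and return the midpoint of the survivors.
    Requires more than 3f readings to mask f Byzantine faults."""
    if len(readings) <= 3 * f:
        raise ValueError("need more than 3f readings to mask f faults")
    s = sorted(readings)
    trimmed = s[f:len(s) - f]
    return (trimmed[0] + trimmed[-1]) / 2

# Four honest clocks near 100 plus one Byzantine outlier (f = 1):
print(tolerant_midpoint([99.0, 100.0, 101.0, 100.5, 10000.0], 1))  # -> 100.5
```

Note how the wild outlier is trimmed away before averaging, so a single faulty node cannot drag the corrected clock arbitrarily far.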
Cyber situational awareness and differential hardening
NASA Astrophysics Data System (ADS)
Dwivedi, Anurag; Tebben, Dan
2012-06-01
The advent of cyber threats has created a need for new paradigms in network planning, design, architecture, operations, control, situational awareness, management, and maintenance. Primary considerations include the ability to assess the cyber-attack resiliency of the network, and to rapidly detect, isolate, and operate during deliberate simultaneous attacks against the network nodes and links. Legacy network planning relied on automatic protection of a network in the event of a single fault or a very few simultaneous faults in mesh networks, but in the future it must be augmented to include improved network resiliency and vulnerability awareness to cyber attacks. The ability to design a resilient network requires methods to define and quantify network resiliency to attacks, and new optimization strategies for maintaining operations in the midst of these newly emerging cyber threats. Ways to quantify resiliency, and its use in visualizing cyber vulnerability awareness and in identifying node or link criticality, are presented in the current work, as well as a methodology of differential network hardening based on the criticality profile of cyber network components.
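One simple, concrete way to flag node criticality of the kind the abstract describes is to find articulation points: nodes whose removal disconnects the network and which are therefore prime candidates for differential hardening. The sketch below (an illustration, not the paper's metric) uses the standard DFS low-link algorithm on an adjacency-set graph.

```python
def articulation_points(adj):
    """Return the cut vertices of an undirected graph given as
    {node: set(neighbors)}; removing any of them disconnects the
    network, making them natural hardening priorities."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:    # root with >1 subtree
            cuts.add(u)

    for n in adj:
        if n not in disc:
            dfs(n, None)
    return cuts

# Chain a-b-c: the middle node b is the single point of failure.
print(articulation_points({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}))  # -> {'b'}
```

A ring topology, by contrast, has no articulation points, which is one reason mesh and ring designs tolerate single faults.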
The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic
NASA Technical Reports Server (NTRS)
Armstrong, Curtis D.; Humphreys, William M.
2003-01-01
We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
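The Monte Carlo step described above can be illustrated with a toy stand-in: randomly inject defects into a majority gate (the basic QCA logic element) and estimate, over many trials, the fraction of instances that still compute correctly. The defect model here (independent input bit-flips with probability `p_defect`) is an assumption for illustration, not the TOMAS defect model.

```python
import random

def majority(a, b, c):
    """Majority vote, the basic QCA gate function."""
    return int(a + b + c >= 2)

def mc_yield(p_defect, trials=10_000, seed=1):
    """Monte Carlo estimate of the fraction of defective-gate
    instances that still produce the correct majority output on a
    random input (toy stand-in for a TOMAS-style statistical sweep)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        a, b, c = (rng.randint(0, 1) for _ in range(3))
        # each input independently flips with probability p_defect
        noisy = [x ^ (rng.random() < p_defect) for x in (a, b, c)]
        ok += majority(*noisy) == majority(a, b, c)
    return ok / trials

print(mc_yield(0.0))   # -> 1.0 (no defects, always correct)
print(mc_yield(0.05))  # high but imperfect yield
```

Sweeping `p_defect` and plotting the yield curve is the kind of statistical report a framework like TOMAS automates at scale.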
Kinematics of shallow backthrusts in the Seattle fault zone, Washington State
Pratt, Thomas L.; Troost, K.G.; Odum, Jackson K.; Stephenson, William J.
2015-01-01
Near-surface thrust fault splays and antithetic backthrusts at the tips of major thrust fault systems can distribute slip across multiple shallow fault strands, complicating earthquake hazard analyses based on studies of surface faulting. The shallow expression of the fault strands forming the Seattle fault zone of Washington State shows the structural relationships and interactions between such fault strands. Paleoseismic studies document an ∼7000 yr history of earthquakes on multiple faults within the Seattle fault zone, with some backthrusts inferred to rupture in small (M ∼5.5–6.0) earthquakes at times other than during earthquakes on the main thrust faults. We interpret seismic-reflection profiles to show three main thrust faults, one of which is a blind thrust fault directly beneath downtown Seattle, and four small backthrusts within the Seattle fault zone. We then model fault slip, constrained by shallow deformation, to show that the Seattle fault forms a fault propagation fold rather than the alternatively proposed roof thrust system. Fault slip modeling shows that back-thrust ruptures driven by moderate (M ∼6.5–6.7) earthquakes on the main thrust faults are consistent with the paleoseismic data. The results indicate that paleoseismic data from the back-thrust ruptures reveal the times of moderate earthquakes on the main fault system, rather than indicating smaller (M ∼5.5–6.0) earthquakes involving only the backthrusts. Estimates of cumulative shortening during known Seattle fault zone earthquakes support the inference that the Seattle fault has been the major seismic hazard in the northern Cascadia forearc in the late Holocene.
Greedy data transportation scheme with hard packet deadlines for wireless ad hoc networks.
Lee, HyungJune
2014-01-01
We present a greedy data transportation scheme with hard packet deadlines in ad hoc sensor networks of stationary nodes and multiple mobile nodes with scheduled trajectory paths and arrival times. In the proposed routing strategy, each stationary ad hoc node en route decides whether to relay packets to a shortest-path stationary node toward the destination or to a passing-by mobile node that will carry them closer to the destination. We aim to utilize mobile nodes to minimize the total routing cost as long as the selected route can satisfy the end-to-end packet deadline. We evaluate our proposed routing algorithm in terms of routing cost, packet delivery ratio, packet delivery time, and usability of mobile nodes based on network-level simulations. Simulation results show that our proposed algorithm fully exploits the remaining time until the packet deadline, turning it into the networking benefits of reduced overall routing cost and improved packet delivery performance. We also demonstrate that the routing scheme guarantees packet delivery with hard deadlines, contributing to QoS improvement in various network services.
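The per-hop decision can be sketched as a simple rule: take the cheaper mobile carry only when the remaining deadline slack still covers its extra delay, otherwise fall back to the shortest-path stationary relay. This is a minimal illustration under assumed cost and delay parameters; the paper's actual cost model is richer.

```python
def choose_next_hop(slack_s, static_cost, mobile_cost, mobile_delay_s):
    """Greedy deadline-aware relay choice (illustrative sketch).
    Prefer the mobile carrier only if it is cheaper AND its extra
    delay still fits within the remaining deadline slack."""
    if mobile_delay_s <= slack_s and mobile_cost < static_cost:
        return "mobile"
    return "static"

# Plenty of slack: the cheap mobile carry wins.
print(choose_next_hop(slack_s=30, static_cost=5, mobile_cost=1,
                      mobile_delay_s=20))  # -> "mobile"
# Tight deadline: fall back to the stationary shortest path.
print(choose_next_hop(slack_s=10, static_cost=5, mobile_cost=1,
                      mobile_delay_s=20))  # -> "static"
```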
2003-08-27
KENNEDY SPACE CENTER, FLA. - The U.S. Node 2 is undergoing a Multi-Element Integrated Test (MEIT) in the Space Station Processing Facility. Node 2 attaches to the end of the U.S. Lab on the ISS and provides attach locations for the Japanese laboratory, European laboratory, the Centrifuge Accommodation Module and, eventually, Multipurpose Logistics Modules. It will provide the primary docking location for the Shuttle when a pressurized mating adapter is attached to Node 2. Installation of the module will complete the U.S. Core of the ISS.
An improved PRoPHET routing protocol in delay tolerant network.
Han, Seung Deok; Chung, Yun Won
2015-01-01
In a delay tolerant network (DTN), an end-to-end path is not guaranteed and packets are delivered from a source node to a destination node via store-carry-forward based routing. In a DTN, a source node or an intermediate node stores packets in a buffer and carries them while it moves around. These packets are forwarded to other nodes based on predefined criteria and are finally delivered to a destination node via multiple hops. In this paper, we improve the dissemination speed of the PRoPHET (probabilistic routing protocol using history of encounters and transitivity) protocol by employing the epidemic protocol for disseminating a message m when the forwarding counter and hop counter values are smaller than or equal to threshold values. The performance of the proposed protocol was analyzed in terms of delivery probability, average delay, and overhead ratio. Numerical results show that the proposed protocol can improve the delivery probability, average delay, and overhead ratio of the PRoPHET protocol by appropriately selecting the threshold forwarding counter and threshold hop counter values.
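The hybrid rule can be sketched in a few lines: flood epidemically while both counters are at or below their thresholds, then revert to probability-based PRoPHET forwarding. The delivery-predictability update shown is the standard PRoPHET encounter formula; the `P_INIT` value of 0.75 is a commonly cited default, not a value from this paper.

```python
P_INIT = 0.75  # commonly used PRoPHET initialization constant (assumed)

def forwarding_mode(forward_count, hop_count, f_th, h_th):
    """Proposed hybrid rule: epidemic flooding while both the
    forwarding counter and hop counter are within their thresholds,
    probability-based PRoPHET forwarding afterwards."""
    if forward_count <= f_th and hop_count <= h_th:
        return "epidemic"
    return "prophet"

def on_encounter(p_old):
    """Standard PRoPHET delivery-predictability update applied when
    two nodes meet: P = P_old + (1 - P_old) * P_init."""
    return p_old + (1.0 - p_old) * P_INIT

print(forwarding_mode(1, 1, 2, 2))  # -> "epidemic" (young message)
print(forwarding_mode(3, 1, 2, 2))  # -> "prophet"  (counter exceeded)
print(on_encounter(0.0))            # -> 0.75
```

Raising the thresholds speeds dissemination at the cost of overhead, which is exactly the trade-off the paper tunes.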
NASA Astrophysics Data System (ADS)
Che-Aron, Z.; Abdalla, A. H.; Abdullah, K.; Hassan, W. H.
2013-12-01
In recent years, Cognitive Radio (CR) technology has attracted significant study and research. A Cognitive Radio Ad Hoc Network (CRAHN) is an emerging self-organized, multi-hop, wireless network which allows unlicensed users to opportunistically access available licensed spectrum bands for data communication in an intelligent and cautious manner. However, in CRAHNs, many failures can easily occur during data transmission, caused by PU (Primary User) activity, topology changes, node faults, or link degradation. In this paper, an attempt has been made to evaluate the performance of the Multi-Radio Link-Quality Source Routing (MR-LQSR) protocol in CRAHNs under different path failure rates. In the MR-LQSR protocol, the Weighted Cumulative Expected Transmission Time (WCETT) is used as the routing metric. The simulations are carried out using the NS-2 simulator. The protocol performance is evaluated with respect to metrics such as average throughput, packet loss, average end-to-end delay and average jitter. From the simulation results, it is observed that the number of path failures depends on the number of PUs and the mobility rate of the SUs (Secondary Users). Moreover, the protocol performance is greatly affected when the path failure rate is high, leading to major service outages.
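For readers unfamiliar with the WCETT metric used by MR-LQSR, it combines the sum of per-hop expected transmission times (ETTs) with the worst per-channel ETT sum, penalizing paths that reuse one channel on consecutive hops. A minimal sketch of the published formula:

```python
def wcett(link_etts, link_channels, beta=0.5):
    """Weighted Cumulative Expected Transmission Time for a path.
    link_etts[i] is the ETT of hop i; link_channels[i] is its radio
    channel. WCETT = (1 - beta) * sum(ETT)
                     + beta * max over channels of per-channel ETT sum,
    so intra-path channel reuse (self-interference) is penalized."""
    per_channel = {}
    for ett, ch in zip(link_etts, link_channels):
        per_channel[ch] = per_channel.get(ch, 0.0) + ett
    return (1 - beta) * sum(link_etts) + beta * max(per_channel.values())

# Two 1-second hops on the SAME channel score worse...
print(wcett([1.0, 1.0], [1, 1]))  # -> 2.0
# ...than the same two hops on DIFFERENT channels.
print(wcett([1.0, 1.0], [1, 2]))  # -> 1.5
```

The tunable `beta` trades path length against channel diversity; 0.5 here is just a representative choice.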
Automatic Generation Control Study in Two Area Reheat Thermal Power System
NASA Astrophysics Data System (ADS)
Pritam, Anita; Sahu, Sibakanta; Rout, Sushil Dev; Ganthia, Sibani; Prasad Ganthia, Bibhu
2017-08-01
Industrial pollution degrades our living environment. An electric grid system comprises vital equipment such as generators, motors, transformers and loads. There is always an imbalance between the sending end and the receiving end of the system, which can make the system unstable, so such errors and faults should be corrected as soon as possible before they propagate and reduce the efficiency of the whole power system. The main problem arising from such faults is frequency deviation, which causes instability in the power system and may cause permanent damage to it. The mechanism studied in this paper therefore keeps the system stable and balanced by regulating frequency at both the sending- and receiving-end power systems through automatic generation control, using various controllers in a two-area reheat thermal power system.
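The core mechanism can be illustrated with a toy single-area simulation (the paper studies a two-area reheat system with several controller types, so this is a simplification with assumed per-unit parameters): a load step pulls frequency down, and an integral controller trims generation until the deviation returns to zero.

```python
def agc_sim(dP_load=0.1, M=10.0, D=1.0, Ki=0.5, dt=0.01, t_end=200.0):
    """Toy single-area AGC sketch with assumed per-unit parameters:
    M = inertia, D = load damping, Ki = integral (AGC) gain.
    A load step dP_load perturbs frequency; the integral controller
    restores the deviation to zero. Returns the final deviation."""
    df, dPm = 0.0, 0.0          # frequency and mechanical-power deviations
    for _ in range(int(t_end / dt)):
        df += dt * (dPm - dP_load - D * df) / M   # swing equation
        dPm += dt * (-Ki * df)                    # integral AGC action
    return df

final_dev = agc_sim()
print(final_dev)  # very close to zero: secondary control restores frequency
```

Without the integral term (`Ki = 0`), the simulation settles at a nonzero offset `-dP_load / D`, which is precisely why AGC (secondary control) is needed on top of droop/damping.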
A Fresh Look at Longitudinal Standing Waves on a Spring
ERIC Educational Resources Information Center
Rutherford, Casey
2013-01-01
Transverse standing waves produced on a string, as shown in Fig. 1, are a common demonstration of standing wave patterns that have nodes at both ends. Longitudinal standing waves can be produced on a helical spring that is mounted vertically and attached to a speaker, as shown in Fig. 2, and used to produce both node-node (NN) and node-antinode…
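The two boundary conditions mentioned give different harmonic series: node-node (NN) patterns support all integer harmonics of `v/2L`, while node-antinode (NA) patterns support only odd quarter-wave modes. A quick numeric check (with an arbitrary wave speed and length):

```python
def nn_freqs(v, L, n_max=3):
    """Node-node standing-wave frequencies: f_n = n * v / (2L)."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

def na_freqs(v, L, n_max=3):
    """Node-antinode frequencies: only odd quarter-wave modes,
    f_n = (2n - 1) * v / (4L)."""
    return [(2 * n - 1) * v / (4 * L) for n in range(1, n_max + 1)]

# Example: wave speed 10 m/s on a 1 m spring.
print(nn_freqs(10.0, 1.0))  # -> [5.0, 10.0, 15.0]
print(na_freqs(10.0, 1.0))  # -> [2.5, 7.5, 12.5]
```

Note the NA modes sit at odd multiples of half the NN fundamental, which is what makes the two patterns easy to distinguish in the demonstration.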
Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Stephen; Heaney, Michael; Jin, Xin
Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.
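A minimal sketch of the model-plus-statistics idea (an illustration, not the paper's algorithms): compare measured consumption against a model's prediction and flag hours whose residual deviates from the residual mean by more than `k` standard deviations. The threshold `k = 2.0` is an assumed tuning constant.

```python
def detect_faults(measured, predicted, k=2.0):
    """Flag samples whose model residual deviates from the mean
    residual by more than k standard deviations - a minimal stand-in
    for a hybrid model-based + statistical FDD test. Note that a
    plain mean/sigma test is not robust to large outliers; real tools
    use robust statistics or a separate training baseline."""
    residuals = [m - p for m, p in zip(measured, predicted)]
    mu = sum(residuals) / len(residuals)
    var = sum((r - mu) ** 2 for r in residuals) / len(residuals)
    sigma = var ** 0.5 or 1e-12          # avoid zero division
    return [abs(r - mu) > k * sigma for r in residuals]

meas = [10.1, 9.9, 10.0, 10.2, 25.0, 10.1]   # one anomalous reading
pred = [10.0] * 6                            # model says ~10 every hour
print(detect_faults(meas, pred))             # only the 25.0 hour is flagged
```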
NASA Astrophysics Data System (ADS)
Sasaki, T.; Ueta, K.; Inoue, D.; Aoyagi, Y.; Yanagida, M.; Ichikawa, K.; Goto, N.
2010-12-01
It is important to evaluate the magnitude of earthquakes caused by multiple active faults, taking simultaneous rupture into account. The simultaneity of adjacent active faults is often judged on the basis of geometric distance, except where paleoseismic records are known. Since 2009 we have studied the step area between the Nukumi fault and the Neodani fault, which ruptured consecutively in the 1891 Nobi earthquake. The purpose of this study is to establish an improved technique for evaluating the simultaneity of adjacent active faults, in addition to the paleoseismic record and the geometric distance. Geomorphological, geological and reconnaissance microearthquake surveys were conducted. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi fault and the Neodani fault by high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, together with field surveys of outcrops and location surveys. The study area of this work is the southeastern Nukumi fault and the northwestern Neodani fault. We interpret the DEM using shaded relief maps and stereoscopic bird's-eye views made from 2-m mesh DEM data obtained by the airborne laser scanner of Kokusai Kogyo Co., Ltd. Aerial photographic surveys, using 1/16,000-scale photos, confirm the DEM interpretation. As a result of the topographic survey, we found consecutive tectonic topography, comprising left-lateral displacement of ridge and valley lines and reverse scarplets, along the Nukumi fault and the Neodani fault. From Ogotani, 2 km southeast of Nukumi pass (previously regarded as the southeastern end of the surface rupture along the Nukumi fault), to Neooppa, 9 km southeast of the pass, detailed DEM investigation reveals left-lateral topographies and small uphill-facing fault scarps on the terrace surface. These features are unrecognizable in the aerial photographic survey because of heavy vegetation.
We have found several new outcrops in this area, where surface ruptures of the 1891 Nobi earthquake had not previously been known. These outcrops share an active fault that cuts the terrace deposits and slope deposits up to the bottom of the present soil layer. At the Ogotani outcrop, a humic layer dated by 14C to the 14th-15th centuries is deformed by the active fault. The vertical displacement of the humic layer is 0.8-0.9 m, and that of the underlying terrace deposit is ca. 1.3 m. For this reason, and because fine-grained deposits including the AT tephra (28 ka) occur in the footwall of the fault, this fault has moved more than once since the last glacial age. We conclude that the surface rupture of the Nukumi fault in the 1891 Nobi earthquake continues to 9 km southeast of Nukumi pass. In other words, these findings indicate a 10-km parallel overlap between the surface rupture at the southeastern end of the Nukumi fault and the northwestern end of the Neodani fault.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning in a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system.
Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, providing a new method for solving the problem of multi-constraint and multi-objective fault diagnosis and reasoning in complex systems.
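The pseudo-random-proportional rule mentioned above is the classic ant-colony-system choice rule; a minimal sketch (with assumed parameter values, not those of the paper) looks like this, where `candidates` would already be filtered by the reachability matrix:

```python
import random

def next_node(candidates, tau, eta, alpha=1.0, beta=2.0, q0=0.9, rng=random):
    """Pseudo-random-proportional node choice (ant-colony-system
    style). With probability q0, exploit the best-scored feasible
    node; otherwise sample in proportion to pheromone^alpha *
    heuristic^beta. `candidates` holds only nodes permitted by the
    reachability matrix; tau/eta map node -> pheromone / heuristic."""
    scores = {c: (tau[c] ** alpha) * (eta[c] ** beta) for c in candidates}
    if rng.random() < q0:
        return max(scores, key=scores.get)     # exploitation
    total = sum(scores.values())               # biased exploration
    r = rng.uniform(0.0, total)
    acc = 0.0
    for c, s in scores.items():
        acc += s
        if acc >= r:
            return c
    return c

# With q0 = 1.0 the rule is purely greedy on pheromone * heuristic:
print(next_node(['a', 'b'], {'a': 1.0, 'b': 5.0},
                {'a': 1.0, 'b': 1.0}, q0=1.0))  # -> 'b'
```

Lowering `q0` shifts the balance from exploitation toward exploration, which is one lever for balancing the three conflicting objectives.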
Dense, Efficient Chip-to-Chip Communication at the Extremes of Computing
ERIC Educational Resources Information Center
Loh, Matthew
2013-01-01
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of wireless link to a neural…
An adaptive density-based routing protocol for flying Ad Hoc networks
NASA Astrophysics Data System (ADS)
Zheng, Xueli; Qi, Qian; Wang, Qingwen; Li, Yongqiang
2017-10-01
An Adaptive Density-based Routing Protocol (ADRP) for Flying Ad Hoc Networks (FANETs) is proposed in this paper. The main objective is to calculate the forwarding probability adaptively in order to increase the efficiency of forwarding in FANETs. ADRP dynamically fine-tunes the rebroadcasting probability of a node for routing request packets according to its number of neighbour nodes: retransmission by nodes with few neighbour nodes is privileged. We describe the protocol, implement it and evaluate its performance using the NS-2 network simulator. Simulation results reveal that ADRP achieves better performance than AODV in terms of packet delivery fraction, average end-to-end delay, normalized routing load, normalized MAC load and throughput.
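The density-adaptive idea can be sketched as a clamped inverse relation between neighbour count and rebroadcast probability. The constants `n_ref`, `p_min`, and `p_max` below are assumed tuning parameters for illustration, not values from the paper.

```python
def rebroadcast_probability(n_neighbors, p_min=0.3, p_max=1.0, n_ref=10):
    """Adaptive rebroadcast probability sketch: sparse nodes (few
    neighbours) forward route requests with high probability, dense
    nodes are throttled to curb redundant flooding. All constants
    are hypothetical tuning parameters."""
    if n_neighbors <= 0:
        return p_max                      # isolated node: always try
    p = n_ref / n_neighbors               # inverse-density heuristic
    return max(p_min, min(p_max, p))      # clamp to [p_min, p_max]

print(rebroadcast_probability(2))   # -> 1.0 (sparse: always rebroadcast)
print(rebroadcast_probability(50))  # -> 0.3 (dense: heavily throttled)
```

This is the mechanism that reduces normalized routing and MAC load in dense regions while preserving reachability in sparse ones.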
NASA Astrophysics Data System (ADS)
Ferrer, O.; Vendeville, B. C.; Roca, E.
2012-04-01
Using sandbox analogue modelling, we determine the role played by a pre-kinematic or a syn-kinematic viscous salt layer during rollover folding of the hangingwall of a normal fault with a variable kinked-planar geometry, and seek to understand the origin and the mechanisms that control the formation, kinematic evolution and geometry of salt structures developed in the hangingwall of this fault. The experiments consisted of nine models made of dry quartz sand (35 μm average grain size) simulating brittle rocks and a viscous silicone polymer (SMG 36 from Dow Corning) simulating salt in nature. The models were constructed between two end walls, one of which was fixed, whereas the other was moved by a motor-driven worm screw. The fixed wall was part of the rigid footwall of the model's master border fault. This fault was simulated using three different wood-block configurations, overlain by a flexible (but not stretchable) sheet that was attached to the mobile endwall of the model. We applied three different hangingwall infill configurations to each fault geometry: (1) without silicone (sand only), (2) sand overlain by a pre-kinematic silicone layer deposited above the entire hangingwall, and (3) sand partly overlain by a syn-kinematic silicone layer covering only parts of the hangingwall. All models were subjected to 14 cm of basement extension in a direction orthogonal to the border fault. Results show that the presence of a viscous layer (silicone) clearly controls the deformation pattern of the hangingwall. Thus, regardless of the silicone layer's geometry (either pre- or syn-extensional) or the geometry of the extensional fault, the silicone layer acts as a very efficient detachment level separating two different structural styles in each unit. In particular, the silicone layer acts as an extensional ductile shear zone inhibiting upward propagation of normal faults and/or shear bands from the sub-silicone layers.
Whereas the basement is affected by antithetic normal faults that are more or less complex depending on the geometry of the master fault, the lateral flow of the silicone produces salt-cored anticlines, walls and diapirs in the overburden of the hangingwall. The mechanical behavior of the silicone layer as an extensional shear zone, combined with the lateral changes in pressure gradients due to overburden thickness changes, triggered the silicone migration from the half-graben depocenter towards the rollover shoulder. As a result, the accumulation of silicone produces gentle silicone-cored anticlines and local diapirs with minor extensional faults. Upwards fault propagation from the sub-silicone "basement" to the supra-silicone unit only occurs either when the supra- and sub-silicone materials are welded, or when the amount of slip along the master fault is large enough so that the tip of the silicone reaches the junction between the upper and lower panels of the master faults. Comparison between the results of these models with data from the western offshore Parentis Basin (Eastern Bay of Biscay) validates the structural interpretation of this region.
NASA Astrophysics Data System (ADS)
Zhou, Yu; Walker, Richard T.; Elliott, John R.; Parsons, Barry
2016-04-01
Fault dips are usually measured from outcrops in the field or inferred through geodetic or seismological modeling. Here we apply the classic structural geology approach of calculating dip from a fault's 3-D surface trace using recent, high-resolution topography. A test study applied to the 2010 El Mayor-Cucapah earthquake shows very good agreement between our results and those previously determined from field measurements. To obtain a reliable estimate, a fault segment ≥120 m long with a topographic variation ≥15 m is suggested. We then applied this method to the 2013 Balochistan earthquake, getting dips similar to previous estimates. Our dip estimates show a switch from north to south dipping at the southern end of the main trace, which appears to be a response to local extension within a stepover. We suggest that this previously unidentified geometrical complexity may act as the endpoint of earthquake ruptures for the southern end of the Hoshab fault.
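The structural-geology calculation behind this approach can be sketched as a least-squares plane fit through the 3-D fault-trace points, with the dip taken from the fitted gradient. The test points below are synthetic (a trace lying exactly on a 45-degree plane), not data from the paper.

```python
import math

def dip_from_trace(points):
    """Fit the plane z = a*x + b*y + c through 3-D fault-trace points
    (x, y, z) by least squares, then return the dip angle in degrees,
    atan(|gradient|). Requires non-collinear points with some
    topographic relief, as the abstract's length/relief criteria imply."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]   # normal equations
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def replaced(col):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][col] = rhs[i]
        return m

    d0 = det3(A)                       # Cramer's rule for a, b
    a = det3(replaced(0)) / d0
    b = det3(replaced(1)) / d0
    return math.degrees(math.atan(math.hypot(a, b)))

# Synthetic trace on the plane z = x, i.e. a 45-degree dip:
pts = [(0, 0, 0), (1, 2, 1), (2, 1, 2), (3, 3, 3)]
print(dip_from_trace(pts))  # ≈ 45
```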
Structural Data for the Columbus Salt Marsh Geothermal Area - GIS Data
Faulds, James E.
2011-12-31
Shapefiles and spreadsheets of structural data, including attitudes of faults and strata and slip orientations of faults. - Detailed geologic mapping of ~30 km2 was completed in the vicinity of the Columbus Marsh geothermal field to obtain critical structural data that would elucidate the structural controls of this field. - Documenting E‐ to ENE‐striking left lateral faults and N‐ to NNE‐striking normal faults. - Some faults cut Quaternary basalts. - This field appears to occupy a displacement transfer zone near the eastern end of a system of left‐lateral faults. ENE‐striking sinistral faults diffuse into a system of N‐ to NNE‐striking normal faults within the displacement transfer zone. - Columbus Marsh therefore corresponds to an area of enhanced extension and contains a nexus of fault intersections, both conducive for geothermal activity.
Dependence of frictional strength on compositional variations of Hayward fault rock gouges
Morrow, Carolyn A.; Moore, Diane E.; Lockner, David A.
2010-01-01
The northern termination of the locked portion of the Hayward Fault near Berkeley, California, is found to coincide with the transition from strong Franciscan metagraywacke to melange on the western side of the fault. Both of these units are juxtaposed with various serpentinite, gabbro and graywacke units to the east, suggesting that the gouges formed within the Hayward Fault zone may vary widely due to the mixing of adjacent rock units and that the mechanical behavior of the fault would be best modeled by determining the frictional properties of mixtures of the principal rock types. To this end, room temperature, water-saturated, triaxial shearing tests were conducted on binary and ternary mixtures of fine-grained gouges prepared from serpentinite and gabbro from the Coast Range Ophiolite, a Great Valley Sequence graywacke, and three different Franciscan Complex metasedimentary rocks. Friction coefficients ranged from 0.36 for the serpentinite to 0.84 for the gabbro, with four of the rock types having coefficients of friction ranging from 0.67-0.84. The friction coefficients of the mixtures can be predicted reliably by a simple weighted average of the end-member dry-weight percentages and strengths for all samples except those containing serpentinite. For the serpentinite mixtures, a linear trend between end-member values slightly overestimates the coefficients of friction in the midcomposition ranges. The range in strength for these rock admixtures suggests that both theoretical and numerical modeling of the fault should attempt to account for variations in rock and gouge properties.
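The weighted-average predictor described above is simple enough to state in a few lines; recall that for serpentinite-bearing mixtures it slightly overestimates midcomposition strengths. A sketch using the reported gabbro and graywacke end-member values; the 50/50 split is illustrative:

```python
def mixture_friction(weight_fractions, end_member_mu):
    """Predict the friction coefficient of a gouge mixture as the
    dry-weight-weighted average of the end-member coefficients, the
    simple rule the study found reliable for serpentinite-free mixes."""
    assert abs(sum(weight_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(w * mu for w, mu in zip(weight_fractions, end_member_mu))

# 50/50 gabbro/graywacke mix with the reported end members (0.84, 0.67):
mu_mix = mixture_friction([0.5, 0.5], [0.84, 0.67])   # -> 0.755
```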
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
The high technology content and complexity of medical equipment, together with its safety and effectiveness requirements, place high demands on maintenance work. This paper introduces some basic methods of medical instrument maintenance, including fault tree analysis, the node method, and the exclusion method, three important methods in medical equipment maintenance; by using them, hardware breakdown maintenance can be performed easily for instruments that have circuit drawings. The paper also introduces processing methods for some special fault conditions, to help maintainers avoid detours when meeting the same problems. Continued learning is very important for staff newly engaged in this area.
Seo, Jaewan; Kim, Moonseong; Hur, In; Choi, Wook; Choo, Hyunseung
2010-01-01
Recent studies have shown that links are extremely unreliable in realistic wireless sensor network environments. To recover corrupted packets, most routing schemes, designed under an assumption of ideal radio conditions, fall back on a retransmission mechanism, which may cause unnecessary retransmissions. Guaranteeing energy-efficient reliable data transmission is therefore a fundamental routing issue in wireless sensor networks. Replacing every existing routing scheme with a new one is impractical, however, so a drop-in approach is preferable. This paper proposes a Distributed and Reliable Data Transmission (DRDT) scheme that aims to guarantee reliable data transmission efficiently. In particular, it is based on a pluggable modular approach, so it can extend existing routing schemes. DRDT offers reliable data transmission using neighbor nodes, i.e., helper nodes. A helper node is selected, in a distributed manner, from among those neighbors of the receiver node that overhear the data packet. DRDT effectively reduces the number of retransmissions by delegating the retransmission task from the sender node to a helper node that has higher link quality to the receiver node when packet reception fails due to low link quality between the sender and receiver. Comprehensive simulation results show that DRDT reduces end-to-end transmission cost by up to about 45% and delay by about 40% compared to existing schemes.
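The helper-node selection at the heart of DRDT can be sketched as picking, among the neighbors that overheard the packet, the one with the best link to the receiver. A minimal sketch; the names and the link-quality metric are assumptions, not DRDT's exact protocol fields:

```python
def select_helper(receiver, overhearers, link_quality):
    """Choose as helper the overhearing neighbor with the best link
    quality to the receiver; None when no overhearing neighbor is known."""
    candidates = [n for n in overhearers if (n, receiver) in link_quality]
    if not candidates:
        return None
    return max(candidates, key=lambda n: link_quality[(n, receiver)])

# Nodes B and C overheard the packet; B has the better link to receiver R.
lq = {("B", "R"): 0.9, ("C", "R"): 0.6}
helper = select_helper("R", ["B", "C"], lq)   # -> "B"
```

When no helper exists, the scheme would fall back to ordinary sender retransmission.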
Determination of Algorithm Parallelism in NP Complete Problems for Distributed Architectures
1990-03-05
Appendix excerpt (garbled in scanning): a STACK abstract data type declaring the operations OpenStack(S_NODE **TopPtr) -> TopPtr, FlushStack(S_NODE **TopPtr) -> TopPtr, and PushOnStack(S_NODE **TopPtr, ITEM *NewItemPtr), followed by a second STACK structure repeating the OpenStack and FlushStack declarations.
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2012-01-01
Distributed Embedded Systems have significant applications in automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real time system. We use heartbeat monitoring, check pointing and model based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosis and shutting down of faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
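The heartbeat-monitoring component can be sketched as a per-node timeout check: a node is declared faulty once too many heartbeat periods pass without a beat. A minimal sketch, assuming a fixed period and a missed-beat threshold; the class and parameter names are ours, and the paper's framework additionally covers scheduling, temperature control, and redundancy:

```python
class HeartbeatMonitor:
    """Declare a node faulty once more than `max_missed` heartbeat
    periods elapse without a beat being received."""
    def __init__(self, period, max_missed=3):
        self.period = period
        self.max_missed = max_missed
        self.last_seen = {}

    def beat(self, node, t):
        self.last_seen[node] = t          # record heartbeat arrival time

    def is_faulty(self, node, now):
        last = self.last_seen.get(node)
        return last is None or (now - last) > self.max_missed * self.period

mon = HeartbeatMonitor(period=0.1)
mon.beat("brake_actuator", t=0.0)
healthy = not mon.is_faulty("brake_actuator", now=0.25)   # within 3 periods
failed = mon.is_faulty("brake_actuator", now=0.5)         # 5 periods silent
```

Diagnosing a node as faulty would then trigger shutdown of the corresponding actuator before the system becomes unsafe.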
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. 
Experimental results for an illustrative sample of imaging data reveal that our new HBase policy yields a three-fold improvement in the time to convert classic DICOM to NIfTI file formats compared with the default HBase region-split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file-access latency is lower than with network-attached storage.
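The row-key idea can be illustrated directly: encoding the project/subject/session/scan/slice hierarchy into the key means HBase's lexicographic row ordering keeps related imaging data contiguous, which is what the collocation policy exploits. A sketch; the delimiter and zero-padded width are our assumptions, not the paper's exact key layout:

```python
def row_key(project, subject, session, scan, slice_no):
    """Compose a hierarchical row key so that lexicographic ordering
    keeps all slices of a scan (and scans of a session, etc.) together."""
    return f"{project}:{subject}:{session}:{scan}:{slice_no:05d}"

# Keys inserted out of order still sort into hierarchical groups:
keys = [row_key("proj1", "subj01", "sess1", "scanA", i) for i in (2, 0, 1)]
keys.sort()   # HBase stores rows in sorted key order
```

Zero-padding the slice number matters because HBase compares keys as bytes, not as integers.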
Evolution of the continental margin of southern Spain and the Alboran Sea
Dillon, William P.; Robb, James M.; Greene, H. Gary; Lucena, Juan Carlos
1980-01-01
Seismic reflection profiles and magnetic intensity measurements were collected across the southern continental margin of Spain and the Alboran basin between Spain and Africa. Correlation of the distinct seismic stratigraphy observed in the profiles to stratigraphic information obtained from cores at Deep Sea Drilling Project site 121 allows effective dating of tectonic events. The Alboran Sea basin occupies a zone of motion between the African and Iberian lithospheric plates that probably began to form by extension in late Miocene time (Tortonian). At the end of Miocene time (end of Messinian) profiles show that an angular unconformity was cut, and then the strata were block faulted before subsequent deposition. The erosion of the unconformity probably resulted from lowering of Mediterranean sea level by evaporation when the previous channel between the Mediterranean and Atlantic was closed. Continued extension probably caused the block faulting and, eventually the opening of the present channel to the Atlantic through the Strait of Gibraltar and the reflooding of the Mediterranean. Minor tectonic movements at the end of Calabrian time (early Pleistocene) apparently resulted in minor faulting, extensive transgression in southeastern Spain, and major changes in the sedimentary environment of the Alboran basin. Active faulting observed at five locations on seismic profiles seems to form a NNE zone of transcurrent movement across the Alboran Sea. This inferred fault trend is coincident with some bathymetric, magnetic and seismicity trends and colinear with active faults that have been mapped on-shore in Morocco and Spain. The faults were probably caused by stresses related to plate movements, and their direction was modified by inherited fractures in the lithosphere that floors the Alboran Sea.
Diverse Geological Applications For Basil: A 2d Finite-deformation Computational Algorithm
NASA Astrophysics Data System (ADS)
Houseman, Gregory A.; Barr, Terence D.; Evans, Lynn
Geological processes are often characterised by large finite-deformation continuum strains, on the order of 100% or greater. Microstructural processes cause deformation that may be represented by a viscous constitutive mechanism, with viscosity that may depend on temperature, pressure, or strain-rate. We have developed an effective computational algorithm for the evaluation of 2D deformation fields produced by Newtonian or non-Newtonian viscous flow. With the implementation of this algorithm as a computer program, Basil, we have applied it to a range of diverse applications in Earth Sciences. Viscous flow fields in 2D may be defined for the thin-sheet case or, using a velocity-pressure formulation, for the plane-strain case. Flow fields are represented using 2D triangular elements with quadratic interpolation for velocity components and linear for pressure. The main matrix equation is solved by an efficient and compact conjugate gradient algorithm with iteration for non-Newtonian viscosity. Regular grids may be used, or grids based on a random distribution of points. Definition of the problem requires that velocities, tractions, or some combination of the two, are specified on all external boundary nodes. Compliant boundaries may also be defined, based on the idea that traction is opposed to and proportional to boundary displacement rate. Internal boundary segments, allowing fault-like displacements within a viscous medium, have also been developed, and we find that the computed displacement field around the fault tip is accurately represented for Newtonian and non-Newtonian viscosities, in spite of the stress singularity at the fault tip.
Basil has been applied by us and colleagues to problems that include: thin-sheet calculations of continental collision, Rayleigh-Taylor instability of the continental mantle lithosphere, deformation fields around fault terminations at the outcrop scale, stress and deformation fields in and around porphyroblasts, and deformation of the subducted oceanic slab. Application of Basil to a diverse range of topics is facilitated by the use of command-syntax input files that allow most aspects of the calculation to be controlled easily, together with a post-processing package, Sybil, for display and interpretation of the results. Sybil uses a menu-driven graphical interface to access a powerful combination of commands, together with log files that allow repetitive tasks to be automated.
Discrete Wavelet Transform for Fault Locations in Underground Distribution System
NASA Astrophysics Data System (ADS)
Apisit, C.; Ngaopitakkul, A.
2010-10-01
In this paper, a technique for detecting faults in underground distribution system is presented. Discrete Wavelet Transform (DWT) based on traveling wave is employed in order to detect the high frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of fault from sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The result is found that the proposed technique provides satisfactory result and will be very useful in the development of power systems protection scheme.
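The distance calculation from peak times reduces to a standard traveling-wave relation: between the first surge peak and its reflection from the fault, the wave covers twice the fault distance, so d = v(t2 - t1)/2. A hedged single-ended sketch of that relation (a common variant, not necessarily the paper's exact formulation); the propagation speed for underground cable is an assumed value:

```python
def fault_distance_m(t_first, t_reflect, v=1.8e8):
    """Single-ended traveling-wave estimate: d = v * (t2 - t1) / 2,
    where t1 is the first peak at the sending end and t2 its fault
    reflection. v (m/s) is an assumed cable propagation speed."""
    return v * (t_reflect - t_first) / 2.0

d = fault_distance_m(10e-6, 54e-6)   # peaks 44 us apart -> 3960.0 m
```

In the described scheme the peak times themselves come from the high-frequency detail coefficients of the DWT applied to the measured transients.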
NASA Astrophysics Data System (ADS)
Copley, Alex; Mitra, Supriyo; Sloan, R. Alastair; Gaonkar, Sharad; Reynolds, Kirsty
2014-08-01
We present observations of active faulting within peninsular India, far from the surrounding plate boundaries. Offset alluvial fan surfaces indicate one or more magnitude 7.6-8.4 thrust-faulting earthquakes on the Tapti Fault (Maharashtra, western India) during the Holocene. The high ratio of fault displacement to length on the alluvial fan offsets implies high stress-drop faulting, as has been observed elsewhere in the peninsula. The along-strike extent of the fan offsets is similar to the thickness of the seismogenic layer, suggesting a roughly equidimensional fault rupture. The subsiding footwall of the fault is likely to have been responsible for altering the continental-scale drainage pattern in central India and creating the large west flowing catchment of the Tapti river. A preexisting sedimentary basin in the uplifting hanging wall implies that the Tapti Fault was active as a normal fault during the Mesozoic and has been reactivated as a thrust, highlighting the role of preexisting structures in determining the rheology and deformation of the lithosphere. The slip sense of faults and earthquakes in India suggests that deformation south of the Ganges foreland basin is driven by the compressive force transmitted between India and the Tibetan Plateau. The along-strike continuation of faulting to the east of the Holocene ruptures we have studied represents a significant seismic hazard in central India.
Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit
2015-01-01
Restrictions on right of way and increasing power demand have boosted the development of six-phase transmission, which offers a viable alternative for transmitting more power without major modification of the existing structure of three-phase double-circuit transmission systems. Despite these advantages, the low acceptance of six-phase systems is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six-phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six-phase lines using single-end data only. The standard deviations of the approximate coefficients of the voltage and current signals, obtained using the discrete wavelet transform, are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults with variation in location, fault resistance, and fault inception angle. The effect of variation in power system parameters, viz. the short-circuit capacity of the source and its X/R ratio, voltage, frequency, and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it suitable for real-time implementation.
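The feature-extraction step can be sketched end to end: take the DWT approximation coefficients of each of the six phase signals and feed their standard deviations to the network as inputs. A minimal sketch using a one-level Haar transform; the paper's actual mother wavelet and decomposition level are not restated here, and the signals below are synthetic:

```python
import numpy as np

def haar_approx(signal):
    """One-level Haar approximation coefficients (illustrative choice)."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]
    return (s[0::2] + s[1::2]) / np.sqrt(2.0)

def ann_features(channels):
    """Standard deviation of each channel's approximate coefficients,
    the quantity fed to the modular ANN in the described scheme."""
    return [float(np.std(haar_approx(c))) for c in channels]

t = np.linspace(0.0, 0.04, 512)                       # two 50 Hz cycles
six_phase = [np.sin(2 * np.pi * 50 * t + k * np.pi / 3) for k in range(6)]
features = ann_features(six_phase)                    # one input per phase
```

A fault distorts one or more phases, changing the corresponding standard deviations, which is what lets the modular networks separate the 120 shunt-fault types.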
NASA Technical Reports Server (NTRS)
Wysocky, Terry; Kopf, Edward, Jr.; Katanyoutananti, Sunant; Steiner, Carl; Balian, Harry
2010-01-01
The high-speed ring bus developed at the Jet Propulsion Laboratory (JPL) accommodates the growth trends in spacecraft anticipated for future scientific missions. This innovation constitutes an enhancement of the 1393 bus as documented in the Institute of Electrical and Electronics Engineers (IEEE) 1393-1999 standard for a spaceborne fiber-optic data bus. It allows for high bandwidth and time synchronization of all nodes on the ring. The JPL ring bus allows for interconnection of active units with autonomous operation and increased fault handling at high bandwidths. It minimizes the flight software interface with an intelligent physical-layer design that has few states to manage, as well as simplified testability. The design will soon be documented in the AS-1393 standard (Serial Hi-Rel Ring Network for Aerospace Applications). The framework is designed for "Class A" spacecraft operation and provides redundant data paths. It is based on "fault containment regions" and "redundant functional regions (RFR)" and has a method for allocating cables that completely supports the redundancy in spacecraft design, allowing a complete RFR to fail. This design reduces the mass of the bus by incorporating both the Control Unit and the Data Unit in the same hardware. The standard uses ATM (asynchronous transfer mode) packets, standardized by ITU-T, ANSI, ETSI, and the ATM Forum. The IEEE 1393 standard uses the UNI form of the packet and provides no protection for the data portion of the cell; the JPL design adds optional formatting to this data portion, extending fault protection beyond that of the interconnect. This includes adding protection to the data portion contained within the Bus Interface Units (BIUs) and to the signal interface between the Data Host and the JPL 1393 Ring Bus. Data transfer on the ring bus does not involve a master or initiator: following bus protocol, any BIU may transmit data on the ring whenever it has data received from its host.
There is no centralized arbitration or bus granting. The JPL design provides for autonomous synchronization of the nodes on the ring bus. An address-synchronous latency adjust buffer (LAB) has been designed that cannot get out of synchronization and needs no external input. Also, a priority-driven cable-selection behavior has been programmed into each unit on the ring bus. This makes the bus able to connect itself up, according to a maximum-redundancy priority system, without the need for computer intervention at startup. Switching around a failed or switched-off unit is also autonomous. The JPL bus provides a map of all the active units for the host computer to read and use for fault management. With regard to timing, this enhanced bus recognizes coordinated timing on a spacecraft as critical and addresses it with a single source of absolute and relative time, which is broadcast to all units on the bus with synchronization maintained to tens of nanoseconds. Each BIU includes up to five programmable triggers, which may be programmed for synchronization of events within the spacecraft or instrument. All JPL-formatted data transmitted on the ring bus are automatically time-stamped.
The effect of roughness on the nucleation and propagation of shear rupture on small faults
NASA Astrophysics Data System (ADS)
Tal, Y.; Hager, B. H.
2016-12-01
Faults are rough at all scales and can be described as self-affine fractals. This deviation from planarity results in geometric asperities and a locally heterogeneous stress field, which affect the nucleation and propagation of shear rupture. We study this effect numerically and aim to understand the relative effects of different fault geometries, remote stresses, and medium and fault properties, focusing on small earthquakes, in which realistic geometry and friction-law parameters can be incorporated in the model. Our numerical approach includes three main features. First, to enable slip that is large relative to the size of the elements near the fault, as well as the variation of normal stress during slip, we implement slip-weakening and rate-and-state friction laws into the Mortar Finite Element Method, in which non-matching meshes are allowed across the fault and the contacts are continuously updated. Second, we refine the mesh near the fault using hanging nodes, thereby enabling accurate representation of the fault geometry. Finally, using a variable time step size, we gradually increase the remote stress and let the rupture nucleate spontaneously. This procedure involves a quasi-static backward Euler scheme for the inter-seismic stages and a dynamic implicit Newmark scheme for the co-seismic stages. In general, under the same range of external loads, rougher faults experience more events but with smaller slips, stress drops, and slip rates, with the roughest faults experiencing only slow-slip aseismic events. Moreover, the roughness complicates the nucleation process, with asymmetric expansion of the rupture and a larger nucleation length. In the propagation phase of the seismic events, the roughness results in larger breakdown zones.
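Of the two friction laws implemented, linear slip weakening is the simpler and can be stated in a few lines: the friction coefficient decays from its static to its dynamic value over a critical slip distance. A sketch with illustrative parameter values, not the study's:

```python
def slip_weakening_mu(slip, mu_s=0.6, mu_d=0.4, d_c=1e-3):
    """Linear slip-weakening friction: strength drops from the static
    coefficient mu_s to the dynamic coefficient mu_d over the critical
    slip distance d_c (all parameter values here are assumptions)."""
    if slip >= d_c:
        return mu_d
    return mu_s - (mu_s - mu_d) * slip / d_c

mu_mid = slip_weakening_mu(5e-4)   # halfway through weakening
```

In the paper's framework this law (or rate-and-state friction) is evaluated at every contact of the mortar interface as the contacts are updated.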
Knowledge diffusion of dynamical network in terms of interaction frequency.
Liu, Jian-Guo; Zhou, Qing; Guo, Qiang; Yang, Zhen-Hua; Xie, Fei; Han, Jing-Ti
2017-09-07
In this paper, we present a knowledge diffusion (SKD) model for dynamic networks that takes into account the interaction frequency commonly used to measure social closeness. A set of agents, initially interconnected to form a random network, either exchange knowledge with their neighbors or move toward a new location through an edge-rewiring procedure. Knowledge exchange between agents is governed by a transfer rule: with probability p, the target node preferentially selects one neighbor to exchange knowledge with according to their interaction frequency rather than the knowledge distance; otherwise, with probability 1 - p, the target node preferentially builds a new link with a second-order neighbor or randomly selects one node in the system. The simulation results show that, compared with a null model based on random selection and the traditional knowledge diffusion (TKD) model driven by knowledge distance, knowledge spreads faster under SKD driven by interaction frequency. In particular, the network structure under SKD evolves to be assortative, a fundamental feature of social networks. This work should be helpful for deeply understanding the coevolution of knowledge diffusion and network structure.
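The transfer rule reduces to a single probabilistic choice per interaction: with probability p, pick an exchange partner among neighbors weighted by interaction frequency; otherwise rewire. A minimal sketch of that choice; the function and field names are ours, and the rewiring itself is only signalled, not performed:

```python
import random

def skd_partner(node, neighbors, freq, p, rng=None):
    """With probability p, return an exchange partner chosen among
    `neighbors` with weights given by interaction frequency; return
    None to signal an edge-rewiring move instead."""
    rng = rng or random.Random()
    if neighbors and rng.random() < p:
        weights = [freq[(node, n)] for n in neighbors]
        return rng.choices(neighbors, weights=weights)[0]
    return None

rng = random.Random(42)
freq = {("a", "b"): 9.0, ("a", "c"): 1.0}
picks = [skd_partner("a", ["b", "c"], freq, p=1.0, rng=rng) for _ in range(200)]
# "b" is picked far more often, reflecting its higher interaction frequency
```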
The Quaternary Silver Creek Fault Beneath the Santa Clara Valley, California
Wentworth, Carl M.; Williams, Robert A.; Jachens, Robert C.; Graymer, Russell W.; Stephenson, William J.
2010-01-01
The northwest-trending Silver Creek Fault is a 40-km-long strike-slip fault in the eastern Santa Clara Valley, California, that has exhibited different behaviors within a changing San Andreas Fault system over the past 10-15 Ma. Quaternary alluvium several hundred meters thick that buries the northern half of the Silver Creek Fault, and that has been sampled by drilling and imaged in a detailed seismic reflection profile, provides a record of the Quaternary history of the fault. We assemble evidence from areal geology, stratigraphy, paleomagnetics, ground-water hydrology, potential-field geophysics, and reflection and earthquake seismology to determine the long history of the fault in order to evaluate its current behavior. The fault formed in the Miocene more than 100 km to the southeast, as the southwestern fault in a 5-km-wide right step to the Hayward Fault, within which the 40-km-long Evergreen pull-apart basin formed. Later, this basin was obliquely cut by the newly recognized Mt. Misery Fault to form a more direct connection to the Hayward Fault, although continued growth of the basin was sufficient to accommodate at least some late Pliocene alluvium. Large offset along the San Andreas-Calaveras-Mt Misery-Hayward Faults carried the basin northwestward almost to its present position when, about 2 Ma, the fault system was reorganized. This led to near abandonment of the faults bounding the pull-apart basin in favor of right slip extending the Calaveras Fault farther north before stepping west to the Hayward Fault, as it does today. Despite these changes, the Silver Creek Fault experienced a further 200 m of dip slip in the early Quaternary, from which we infer an associated 1.6 km or so of right slip, based on the ratio of the 40-km length of the strike-slip fault to a 5-km depth of the Evergreen Basin. 
This dip slip ends at a mid-Quaternary unconformity, above which the upper 300 m of alluvial cover exhibits a structural sag at the fault that we interpret as a negative flower structure. This structure implies some continuing strike slip on the Silver Creek Fault in the late Quaternary as well, with a transtensional component but no dip slip. Our only basis for estimating the rate of this later Quaternary strike slip on the Silver Creek Fault is to assume continuation of the inferred early Quaternary rate of less than 2 mm/yr. Faulting evident in a detailed seismic reflection profile across the Silver Creek Fault extends up to the limit of data at a depth of 50 m and age of about 140 ka, and the course of Coyote Creek suggests Holocene capture in a structural depression along the fault. No surface trace is evident on the alluvial plain, however, and convincing evidence of Holocene offset is lacking. Few instrumentally recorded earthquakes are located near the fault, and those that are near its southern end represent cross-fault shortening, not strike slip. The fault might have been responsible, however, for two poorly located moderate earthquakes that occurred in the area in 1903. Its southeastern end does mark an abrupt change in the pattern of abundant instrumentally recorded earthquakes along the Calaveras Fault-in both its strike and in the depth distribution of hypocenters-that could indicate continuing influence by the Silver Creek Fault. In the absence of convincing evidence to the contrary, and as a conservative estimate, we presume that the Silver Creek Fault has continued its strike-slip movement through the Holocene, but at a very slow rate. Such a slow rate would, at most, yield very infrequent damaging earthquakes. If the 1903 earthquakes did, in fact, occur on the Silver Creek Fault, they would have greatly reduced the short-term future potential for large earthquakes on the fault.
NASA Astrophysics Data System (ADS)
Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan
2016-04-01
Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and the movement of output back to main memory. In applications that do not exploit GPUs, on the other hand, CPU usage is dominant while the GPUs sit idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize usage of resources on a compute node to expedite an application's end-to-end workflow. This approach differs from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources therein. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous while executing GPU-enabled configurations of CESM, when the CPUs would otherwise be idle during portions of the runtime.
Our implementation results demonstrate that it is more efficient to offload these tasks to GPUs through the HFP framework than to perform them in the main application. We observe increased resource utilization and overall productivity with this approach, using the HFP framework for the end-to-end workflow.
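The push-to-daemon pattern can be sketched with an ordinary queue and worker thread standing in for the node-local HFP daemon; the GPU/OpenACC kernel is replaced here by a CPU mean purely for illustration, and all names are assumptions:

```python
import queue
import statistics
import threading

def analytics_daemon(tasks, results):
    """Node-local daemon loop: receive variables pushed by model ranks
    and compute a single-variate statistic in situ (CPU mean here,
    standing in for a GPU-accelerated kernel)."""
    while True:
        name, values = tasks.get()
        if name is None:                      # shutdown sentinel
            break
        results[name] = statistics.fmean(values)

tasks, results = queue.Queue(), {}
daemon = threading.Thread(target=analytics_daemon, args=(tasks, results))
daemon.start()
tasks.put(("surface_temp", [280.0, 282.0, 284.0]))   # a rank pushes a field
tasks.put((None, None))
daemon.join()
```

The essential point is decoupling: the model rank returns to time-stepping immediately after the push, while the statistic is computed on otherwise idle resources.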
Identification and interpretation of tectonic features from Skylab imagery. [California to Arizona
NASA Technical Reports Server (NTRS)
Abdel-Gawad, M. (Principal Investigator)
1974-01-01
The author has identified the following significant results. S190-B imagery confirmed previous conclusions from S190-A that the Garlock fault does not extend eastward beyond its known termination near the southern end of Death Valley. In the Avawatz Mountains, California, two faults related to the Garlock fault zone (the Mule Spring fault and the Leach Spring fault) show evidence of recent activity. There is evidence that faulting related to the Death Valley fault zone extends southeastward across the Old Dad Mountains, where the Old Dad fault shows evidence of recent activity. A significant fault lineament has been identified from the McCullough Range, California, southeastward to the Eagle Tail Mountains in southwestern Arizona. The lineament appears to control Tertiary and possibly Cretaceous intrusives. Considerable right-lateral shear is suspected to have taken place along parts of this lineament.
NASA Astrophysics Data System (ADS)
Bennett, J. T.; Sorlien, C. C.; Cormier, M.; Bauer, R. L.
2011-12-01
The San Andreas fault system is distributed across hundreds of kilometers in southern California. This transform system includes offshore faults along the shelf, slope, and basin, comprising part of the Inner California Continental Borderland. Previously, offshore faults have been interpreted as being discontinuous and striking parallel to the coast between Long Beach and San Diego. Our recent work, based on several thousand kilometers of deep-penetration industry multi-channel seismic reflection data (MCS) as well as high-resolution U.S. Geological Survey MCS, indicates that many of the offshore faults are more geometrically continuous than previously reported. Stratigraphic interpretations of MCS profiles included the ca. 1.8 Ma Top Lower Pico, which was correlated from wells located offshore Long Beach (Sorlien et al., 2010). Based on this age constraint, four younger (Late) Quaternary unconformities are interpreted through the slope and basin. The right-lateral Newport-Inglewood fault continues offshore near Newport Beach. We map a single fault for 25 kilometers that continues to the southeast along the base of the slope. There, the Newport-Inglewood fault splits into the San Mateo-Carlsbad fault, which is mapped for 55 kilometers along the base of the slope to a sharp bend. This bend is the northern end of a right step-over of 10 kilometers to the Descanso fault and about 17 km to the Coronado Bank fault. We map these faults for 50 kilometers as they continue over the Mexican border. The San Mateo-Carlsbad and Newport-Inglewood faults, and likewise the Coronado Bank and Descanso faults, are paired faults that form flower structures (positive and negative, respectively) in cross section. Preliminary kinematic models indicate ~1 km of right-lateral slip since ~1.8 Ma at the north end of the step-over. We are modeling the slip on the southern segment to test our hypothesis of a kinematically continuous right-lateral fault system.
We are correlating four younger Quaternary unconformities across portions of these faults to test whether the post-~1.8 Ma deformation continues into the late Quaternary. This will provide critical information for a meaningful assessment of the seismic hazards facing Newport Beach through metropolitan San Diego.
NASA Astrophysics Data System (ADS)
Naderi, E.; Khorasani, K.
2018-02-01
In this work, a data-driven fault detection, isolation, and estimation (FDI&E) methodology is proposed and developed specifically for monitoring aircraft gas turbine engine actuators and sensors. The proposed FDI&E filters are constructed directly from the available system I/O data at each operating point of the engine. The healthy gas turbine engine is stimulated by a sinusoidal input containing a limited number of frequencies. First, the associated system Markov parameters are estimated by using the FFT of the input and output signals to obtain the frequency response of the gas turbine engine. These data are then used for direct design and realization of the fault detection, isolation, and estimation filters. Our proposed scheme therefore does not require any a priori knowledge of the system linear model or of its number of poles and zeros at each operating point. We have investigated the effects of the size of the frequency response data on the performance of our proposed schemes. We have shown through comprehensive case-study simulations that desirable fault detection, isolation, and estimation performance metrics, defined in terms of the confusion matrix criterion, can be achieved by having access to the frequency response of the system at only a limited number of frequencies.
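The frequency-response estimation step described above can be sketched as follows (a minimal illustration, not the authors' implementation; the plant, sampling rate, and excitation frequencies are hypothetical):

```python
import numpy as np

def frequency_response(u, y, fs, freqs):
    """Estimate the plant frequency response H(jw) = Y(w)/U(w) from sampled
    input u and output y, at the frequencies actually excited by the input."""
    n = len(u)
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    bins = np.round(np.asarray(freqs) * n / fs).astype(int)
    return Y[bins] / U[bins]

# Toy plant: first-order system y' = -a*y + u, excited at two frequencies.
fs, a = 200.0, 5.0
t = np.arange(0, 20, 1 / fs)
freqs = [1.0, 3.0]                       # Hz; integer cycles fit the window
u = sum(np.sin(2 * np.pi * f * t) for f in freqs)
y = np.zeros_like(t)
for k in range(1, len(t)):               # forward-Euler simulation
    y[k] = y[k - 1] + (u[k - 1] - a * y[k - 1]) / fs
H = frequency_response(u, y, fs, freqs)
# |H(jw)| should be close to 1/sqrt(a^2 + w^2) for this plant
```

Choosing excitation frequencies that fit an integer number of cycles into the record avoids spectral leakage, so the FFT ratio at the excited bins approximates the true frequency response.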
Monitoring of Microseismicity with Array Techniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study is focused on the analysis of microseismicity along the San Andreas Fault in the Peach Tree Valley region. This zone is part of the transition between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jolivet et al., JGR, 2014). The data for the study come from a 2-week deployment of 116 ZLand nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
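The backprojection idea can be sketched as a delay-and-stack over candidate source locations (a generic 2-D illustration with synthetic data, not the study's actual processing; geometry, velocity, and sampling rate are invented):

```python
import numpy as np

def backproject(env, station_xy, grid_xy, v, fs):
    """Delay-and-stack back-projection: for each candidate grid point, align
    the station envelopes by predicted travel time and keep the peak of the
    stacked trace as that point's image amplitude."""
    n_sta, n_samp = env.shape
    heat = np.zeros(len(grid_xy))
    for g, pt in enumerate(grid_xy):
        d = np.linalg.norm(station_xy - pt, axis=1)
        shifts = np.round(d / v * fs).astype(int)
        m = n_samp - shifts.max()
        if m <= 0:
            continue
        stack = np.zeros(m)
        for tr, sh in zip(env, shifts):
            stack += tr[sh:sh + m]
        heat[g] = stack.max()
    return heat

# Synthetic check: an impulsive source at (2, 3) km recorded by 4 stations.
fs, v = 100.0, 3.0                        # Hz, km/s (illustrative values)
sta = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
src = np.array([2.0, 3.0])
env = np.zeros((4, 1000))
for i, s in enumerate(sta):
    env[i, int(round(np.linalg.norm(s - src) / v * fs))] = 1.0
grid = np.array([[x, y] for x in range(9) for y in range(9)], float)
best = grid[np.argmax(backproject(env, sta, grid, v, fs))]
# the argmax of the heat map recovers the source location
```

Only at the true grid point do the travel-time shifts align all four impulses at the same stacked sample, so the heat map peaks there.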
Fault detection for hydraulic pump based on chaotic parallel RBF network
NASA Astrophysics Data System (ADS)
Lu, Chen; Ma, Ning; Wang, Zhipeng
2011-12-01
In this article, a parallel radial basis function network in conjunction with chaos theory (CPRBF network) is presented and applied to practical fault detection for hydraulic pumps, which are critical components in aircraft. The CPRBF network consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes for each RBF subnet is determined by a different embedding dimension based on chaotic phase-space reconstruction. The output of the CPRBF is a weighted sum of all RBF subnets. The network was first trained using a dataset from the normal, fault-free state; a residual error generator was then designed on top of the trained CPRBF network, so that failures can be detected by analyzing the residual error. Finally, two case studies are introduced to compare the proposed CPRBF network with traditional RBF networks in terms of prediction and detection accuracy.
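The residual-based detection principle can be sketched with a single RBF one-step predictor trained on healthy data (a simplified stand-in, not the CPRBF architecture; the embedding dimension here is fixed rather than chosen by phase-space reconstruction, and the signal is synthetic):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix for inputs X of shape (n, m)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def embed(series, m):
    """Delay-embedding of a scalar series into m-dimensional input vectors."""
    return np.array([series[i:i + m] for i in range(len(series) - m)])

def fit_predictor(series, m, n_centers=40, width=1.0):
    """Train a one-step-ahead RBF predictor on a healthy signal."""
    X, y = embed(series, m), series[m:]
    centers = X[np.linspace(0, len(X) - 1, n_centers).astype(int)]
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
    return centers, width, w

def residuals(series, m, model):
    centers, width, w = model
    return series[m:] - rbf_design(embed(series, m), centers, width) @ w

# Healthy signal: a sine; "fault": a step change added partway through.
t = np.linspace(0, 20 * np.pi, 2000)
healthy = np.sin(t)
model = fit_predictor(healthy, m=4)
thr = 3 * residuals(healthy, 4, model).std()
faulty = healthy.copy()
faulty[1200:] += 0.8
alarm = np.abs(residuals(faulty, 4, model)) > thr
# alarms concentrate after the fault onset near index 1200
```

The predictor reproduces the healthy dynamics, so residuals stay below the 3-sigma threshold until the fault shifts the signal off the training manifold.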
Robustness and percolation of holes in complex networks
NASA Astrophysics Data System (ADS)
Zhou, Andu; Maletić, Slobodan; Zhao, Yi
2018-07-01
The robustness and fault tolerance of a complex network are significantly influenced by its connectivity, commonly modeled by the structure of pairwise relations between network elements, i.e., nodes. Nevertheless, aggregations of nodes build higher-order structures embedded in the network, which may be more vulnerable when a fraction of nodes is removed. The structure of higher-order aggregations of nodes can be naturally modeled by simplicial complexes, and the removal of nodes affects the values of topological invariants, such as the number of higher-dimensional holes quantified by Betti numbers. Following the methodology of percolation theory, as the fraction of removed nodes grows, new holes appear, which act as mergers between already present holes. In the present article, the relationship between the robustness and the homological properties of a complex network is studied by relating graph-theoretical signatures of robustness to quantities derived from topological invariants. Simulation results for random failures and intentional attacks on networks suggest that changes in the graph-theoretical signatures of robustness are accompanied by differences in the distribution of the number of holes per cluster under different attack strategies. In a broader sense, the results indicate the importance of research on topological invariants for gaining further insight into the dynamics taking place over complex networks.
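The graph-theoretical side of such experiments, random failures versus degree-targeted attacks tracked by giant-component size, can be sketched as follows (a generic percolation simulation on a random graph; it does not compute Betti numbers or the paper's homological quantities):

```python
import random
from collections import defaultdict, deque

def giant_component(nodes, edges):
    """Size of the largest connected component among surviving nodes (BFS)."""
    adj = defaultdict(set)
    alive = set(nodes)
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        q, comp = deque([s]), 0
        seen.add(s)
        while q:
            x = q.popleft()
            comp += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    q.append(y)
        best = max(best, comp)
    return best

def attack(n, edges, order):
    """Remove nodes in the given order; track giant-component sizes."""
    nodes, sizes = set(range(n)), []
    for v in order:
        nodes.discard(v)
        sizes.append(giant_component(nodes, edges))
    return sizes

random.seed(1)
n = 200
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.03]          # Erdos-Renyi, mean degree ~6
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
by_degree = sorted(range(n), key=lambda v: -deg[v])   # targeted attack
at_random = random.sample(range(n), n)                # random failure
targeted = attack(n, edges, by_degree[:n // 2])
rand = attack(n, edges, at_random[:n // 2])
# the degree-targeted attack shrinks the giant component much faster
```

Removing hubs first disconnects the graph far more effectively than random failures, which is the baseline against which the hole-distribution signatures are compared.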
Model Checking A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2012-01-01
This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV) for a subset of digraphs. Modeling challenges of the protocol and the system are addressed. The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period.
On the implementation of faults in finite-element glacial isostatic adjustment models
NASA Astrophysics Data System (ADS)
Steffen, Rebekka; Wu, Patrick; Steffen, Holger; Eaton, David W.
2014-01-01
Stresses induced in the crust and mantle by continental-scale ice sheets during glaciation have triggered earthquakes along pre-existing faults, commencing near the end of the deglaciation. In order to get a better understanding of the relationship between glacial loading/unloading and fault movement due to the spatio-temporal evolution of stresses, a commonly used model for glacial isostatic adjustment (GIA) is extended by including a fault structure. Solving this problem is enabled by development of a workflow involving three cascaded finite-element simulations. Each step has identical lithospheric and mantle structure and properties, but evolving stress conditions along the fault. The purpose of the first simulation is to compute the spatio-temporal evolution of rebound stress when the fault is tied together. An ice load with a parabolic profile and simple ice history is applied to represent glacial loading of the Laurentide Ice Sheet. The results of the first step describe the evolution of the stress and displacement induced by the rebound process. The second step in the procedure augments the results of the first, by computing the spatio-temporal evolution of total stress (i.e. rebound stress plus tectonic background stress and overburden pressure) and displacement with reaction forces that can hold the model in equilibrium. The background stress is estimated by assuming that the fault is in frictional equilibrium before glaciation. The third step simulates fault movement induced by the spatio-temporal evolution of total stress by evaluating fault stability in a subroutine. If the fault remains stable, no movement occurs; in case of fault instability, the fault displacement is computed. We show an example of fault motion along a 45°-dipping fault at the ice-sheet centre for a two-dimensional model. Stable conditions along the fault are found during glaciation and the initial part of deglaciation. 
Before deglaciation ends, the fault starts to move, and fault offsets of up to 22 m are obtained; a surface fault scarp of 19.74 m results. The fault is stable in the following time steps, with high stress accumulation at the fault tip. Along the upper part of the fault, GIA stresses are released in a single earthquake.
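The fault-stability evaluation in the third simulation step can be illustrated with a standard Mohr-Coulomb criterion (a generic sketch, not the paper's finite-element subroutine; the stress magnitudes, friction coefficient, and geometry below are purely illustrative):

```python
import math

def plane_stresses(sig1, sig3, theta_deg):
    """Resolve 2-D principal stresses (compression positive) onto a plane
    whose normal makes angle theta with the sigma-1 axis."""
    t = math.radians(theta_deg)
    sn = 0.5 * (sig1 + sig3) + 0.5 * (sig1 - sig3) * math.cos(2 * t)
    tau = 0.5 * (sig1 - sig3) * math.sin(2 * t)
    return sn, tau

def coulomb_failure_stress(sig1, sig3, theta_deg, mu=0.6, pore=0.0, c=0.0):
    """CFS = tau - mu*(sigma_n - pore) - c; CFS >= 0 means the plane slips."""
    sn, tau = plane_stresses(sig1, sig3, theta_deg)
    return tau - mu * (sn - pore) - c

# A 45-degree fault under vertical sigma-1 of 30 MPa, horizontal 10 MPa:
print(coulomb_failure_stress(30.0, 10.0, 45.0))   # ~ -2 MPa -> still stable
```

A stability subroutine of this kind is evaluated at each time step; once rebound stresses push CFS to zero, the tied contact is released and fault displacement is computed.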
NASA Astrophysics Data System (ADS)
Xu, G.; Lavelle, J. W.
2016-12-01
A numerical model of ocean flow and transport is used to extrapolate observations of currents and hydrography and infer patterns of material flux in the deep ocean around Axial Volcano--the destination node of the Ocean Observatories Initiative (OOI)'s Cabled Array. Using an inverse method, the model is made to approximate measured deep ocean flow around this site during a 35-day time period in 2002. The model is then used to extract month-long mean patterns and examine smaller-scale spatial and temporal variability around Axial. Like prior observations, model month-long mean currents flow anti-cyclonically (clockwise) around the volcano's summit in toroidal form at speeds of up to 7 cm/s. The mean vertical circulation has a net effect of pumping water out of the caldera. Temperature and salinity iso-surfaces sweep upward and downward on opposite sides of the volcano with vertical excursions of up to 70 m. As a time mean, the temperature (salinity) anomaly takes the form of a cold (briny) dome above the summit. Passive tracer material released at the location of the ASHES vent field exits the caldera through its southern open end and over the western bounding wall driven by vertical flow. Once outside the caldera, the tracer circles the summit in clockwise fashion, while gradually bleeding southwestward into the ambient ocean. Another tracer release experiment using a source of 2-day duration inside and near the northern end of the caldera suggests a residence time of the fluid at that locale of 5-6 days.
KWICgrouper--Designing a Tool for Corpus-Driven Concordance Analysis
ERIC Educational Resources Information Center
O'Donnell, Matthew Brook
2008-01-01
The corpus-driven analysis of concordance data often results in the identification of groups of lines in which repeated patterns around the node item establish membership in a particular function meaning group (Mahlberg, 2005). This paper explains the KWICgrouper, a concept designed to support this kind of concordance analysis. Groups are defined…
LoRa Scalability: A Simulation Model Based on Interference Measurements
Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen
2017-01-01
LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The amount of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data. PMID:28545239
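The pure-Aloha baseline quoted above can be reproduced with a toy collision simulation (an illustrative model, not the paper's simulator; the ~70 ms airtime and one-packet-per-minute schedule are assumed values, and no capture effect is modeled):

```python
import random

def aloha_loss(n_nodes, airtime, period, sim_time, seed=0):
    """Fraction of packets lost to collisions when every node sends one packet
    of length `airtime` per `period` at a random phase (pure Aloha)."""
    rng = random.Random(seed)
    tx = []
    for _ in range(n_nodes):
        t = rng.uniform(0, period)
        while t < sim_time:
            tx.append((t, t + airtime))
            t += period
    tx.sort()
    collided = [False] * len(tx)
    for i, (s_i, e_i) in enumerate(tx):
        j = i + 1
        while j < len(tx) and tx[j][0] < e_i:  # packet j starts during packet i
            collided[i] = collided[j] = True
            j += 1
    return sum(collided) / len(tx)

# 1000 nodes, ~70 ms airtime, one packet per minute: offered load G ~ 1.17,
# so pure-Aloha theory predicts about 1 - exp(-2G) ~ 90% collisions.
loss = aloha_loss(1000, 0.07, 60.0, 600.0)
```

The simulated loss matches the classical pure-Aloha collision probability, which is why LoRaWAN's duty-cycle limits matter so much at scale.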
Fluvial-Deltaic Strata as a High-Resolution Recorder of Fold Growth and Fault Slip
NASA Astrophysics Data System (ADS)
Anastasio, D. J.; Kodama, K. P.; Pazzaglia, F. P.
2008-12-01
Fluvial-deltaic systems characterize the depositional record of most wedge-top and foreland basins, where the synorogenic stratigraphy responds to interactions between sediment supply driven by tectonic uplift, climate-modulated sea level change and erosion rate variability, and fold growth patterns driven by unsteady fault slip. We integrate kinematic models of fault-related folds with growth strata and fluvial terrace records to determine incremental rates of shortening, rock uplift, limb tilting, and fault slip with 10^4-10^5 year temporal resolution in the Pyrenees and Apennines. At Pico del Aguila anticline, a transverse décollement fold along the south Pyrenean mountain front, formation-scale synorogenic deposition and clastic facies patterns in prodeltaic and slope facies reflect tectonic forcing of sediment supply, sea level variability controlling delta front position, and climate-modulated changes in terrestrial runoff. Growth geometries record a pinned anticline and migrating syncline hinges during folding above the emerging Guarga thrust sheet. Lithologic and anhysteretic remanent magnetization (ARM) data series from the Eocene Arguis Fm. show cyclicity at Milankovitch frequencies, allowing detailed reconstruction of unsteady fold growth. Multiple variations in limb tilting rates from <8°/my to 28°/my over 7 my are attributed to unsteady fault slip along the roof ramp and basal décollement. Along the northern Apennine mountain front, the age and geometry of strath terraces preserved across the Salsomaggiore anticline record the Pleistocene-Recent kinematics of the underlying fault-propagation fold as occurring with a fixed anticline hinge, a rolling syncline hinge, and along-strike variations in uplift and forelimb tilting. The uplifted intersection of terrace deposits documents syncline axial surface migration and underlying fault-tip propagation at a rate of ~1.4 cm/yr since the Middle Pleistocene.
Because this record of fault slip coincides with the well-known large amplitude oscillations in global climate that contribute to the filling and deformation of the Po foreland, we hypothesize that climatically-modulated surface processes are reflected in the observed rates of fault slip and fold growth.
Khan, Anwar; Ahmedy, Ismail; Anisi, Mohammad Hossein; Javaid, Nadeem; Ali, Ihsan; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan
2018-01-09
Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay.
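The abstract does not reproduce LF-IEHM's actual holding-time formula, but the general mechanism, staggered per-node timers so that the best-placed forwarder relays first while overhearing nodes suppress duplicates, can be illustrated with a simple depth-based rule (all names, the timing rule, and the numbers below are hypothetical):

```python
def holding_time(depth, max_depth, max_delay=2.0):
    """Illustrative rule (not the paper's formula): shallower candidates wait
    less before forwarding, so the best-placed relay transmits first."""
    return max_delay * depth / max_depth

def elect_forwarder(depths, max_depth):
    """Each candidate arms a timer from its depth; the first timer to fire
    forwards the packet, and nodes that overhear it suppress their copies."""
    timers = sorted((holding_time(d, max_depth), d) for d in depths)
    winner = timers[0][1]
    suppressed = [d for _, d in timers[1:]]
    return winner, suppressed

# Three candidate forwarders at different depths (meters, illustrative):
winner, suppressed = elect_forwarder([120.0, 80.0, 200.0], max_depth=500.0)
# the shallowest candidate relays; the deeper ones hold their copies
```

Because each node derives its timer locally, no localization information is needed, which is the property the paper emphasizes for nodes drifting with water currents.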
Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks
Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan
2017-01-01
Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor-node-specific requirements, often materialized as predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H2RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From a detailed, integrated schedulability analysis of H2RTS, a set of sufficiency tests is introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller. PMID:28672856
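A processor-demand sufficiency test of the kind referenced above can be sketched for plain EDF (a generic check, not the H2RTS-specific analysis; the task set is invented):

```python
import math

def demand_bound(tasks, t):
    """EDF demand-bound function: total worst-case execution time of jobs
    with release and deadline both inside any interval of length t.
    Each task is (C, D, P) = (WCET, relative deadline, period)."""
    return sum((math.floor((t - D) / P) + 1) * C
               for C, D, P in tasks if t >= D)

def edf_schedulable(tasks, horizon):
    """Sufficient processor-demand test up to `horizon`: the demand bound
    dbf(t) must not exceed t at any absolute deadline in the interval."""
    deadlines = sorted({D + k * P for C, D, P in tasks if D <= horizon
                        for k in range(int((horizon - D) // P) + 1)})
    return all(demand_bound(tasks, t) <= t for t in deadlines)

tasks = [(1.0, 4.0, 4.0), (2.0, 6.0, 6.0), (1.0, 12.0, 12.0)]  # U ~ 0.67
# this set passes; raising the first WCET to 3.0 makes dbf(12) = 14 > 12
```

Checking dbf(t) only at absolute deadlines suffices because the demand-bound function is piecewise constant between them.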
A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis
NASA Astrophysics Data System (ADS)
Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.
2016-12-01
Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, either associated to healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into few physically meaningful modes by avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting in a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable to analyze vibration signals that may be periodically acquired within the operating life of rotating machineries. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performances of the method and the provided benefits.
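The merging of adjacent modes by spectral dissimilarity can be sketched as follows (a simplified stand-in for the paper's CMF criterion: hand-made "IMFs" instead of an actual EMD, and a Jensen-Shannon divergence between coarse spectral densities instead of the authors' exact measure):

```python
import numpy as np

def spectral_pdf(x, nbins=64):
    """Normalize the amplitude spectrum of x into a density over nbins bands."""
    spec = np.abs(np.fft.rfft(x))
    p = np.array([b.sum() for b in np.array_split(spec, nbins)])
    return p / p.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete densities."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * np.log(a / b)).sum()
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def merge_modes(imfs, threshold=0.2):
    """Greedily sum adjacent modes whose spectral densities are similar,
    yielding fewer combined mode functions (CMFs)."""
    cmfs = [imfs[0].copy()]
    for imf in imfs[1:]:
        if js_divergence(spectral_pdf(cmfs[-1]), spectral_pdf(imf)) < threshold:
            cmfs[-1] = cmfs[-1] + imf      # similar content: merge
        else:
            cmfs.append(imf.copy())
    return cmfs

# Three synthetic "IMFs": two near 50 Hz and one at 5 Hz -> expect 2 CMFs.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
imfs = [np.sin(2 * np.pi * 50 * t), 0.5 * np.sin(2 * np.pi * 55 * t),
        np.sin(2 * np.pi * 5 * t)]
cmfs = merge_modes(imfs)
# len(cmfs) should be 2: {50 Hz + 55 Hz} and {5 Hz}
```

Because summing adjacent IMFs is lossless, the final CMF set still reconstructs the original signal while collapsing spurious over-decomposition.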
Relaxation mechanisms, structure and properties of semi-coherent interfaces
Shao, Shuai; Wang, Jian
2015-10-15
In this work, using the Cu–Ni (111) semi-coherent interface as a model system, we combine atomistic simulations and defect theory to reveal the relaxation mechanisms, structure, and properties of semi-coherent interfaces. By calculating the generalized stacking fault energy (GSFE) profile of the interface, two stable structures and a high-energy structure are located. During the relaxation, the regions that possess the stable structures expand and develop into coherent regions; the regions with high-energy structure shrink into the intersection of misfit dislocations (nodes). This process reduces the interface excess potential energy but increases the core energy of the misfit dislocations and nodes. The core width is dependent on the GSFE of the interface. The high-energy structure relaxes by relative rotation and dilatation between the crystals. The relative rotation is responsible for the spiral pattern at nodes. The relative dilatation is responsible for the creation of free volume at nodes, which facilitates the nodes’ structural transformation. Several node structures have been observed and analyzed. In conclusion, the various structures have significant impact on the plastic deformation in terms of lattice dislocation nucleation, as well as the point defect formation energies.
Error recovery to enable error-free message transfer between nodes of a computer network
Blumrich, Matthias A.; Coteus, Paul W.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd; Steinmacher-Burow, Burkhard; Vranas, Pavlos M.
2016-01-26
An error-recovery method to enable error-free message transfer between nodes of a computer network. A first node of the network sends a packet to a second node of the network over a link between the nodes, and the first node keeps a copy of the packet on a sending end of the link until the first node receives acknowledgment from the second node that the packet was received without error. The second node tests the packet to determine if the packet is error free. If the packet is not error free, the second node sets a flag to mark the packet as corrupt. The second node returns acknowledgement to the first node specifying whether the packet was received with or without error. When the packet is received with error, the link is returned to a known state and the packet is sent again to the second node.
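The retain-until-acknowledged loop described in this abstract can be sketched as a toy link simulation (an illustrative model of the protocol's control flow, not the patented hardware implementation; the corruption probability and retry cap are invented):

```python
import random

def send_reliably(packet, p_corrupt, rng, max_tries=50):
    """The sender retains a copy of the packet on its end of the link until a
    positive acknowledgment arrives; a corrupt-flagged packet triggers a link
    reset and retransmission of the retained copy."""
    copy = packet                               # kept until acked
    for attempt in range(1, max_tries + 1):
        corrupted = rng.random() < p_corrupt    # channel model
        if not corrupted:                       # receiver's error check passes
            return attempt                      # positive ack: copy released
        # receiver sets the corrupt flag; the link returns to a known state
        # and the retained copy is resent on the next iteration
    raise RuntimeError("link declared down after max_tries")

rng = random.Random(42)
tries = [send_reliably(b"msg", 0.3, rng) for _ in range(1000)]
# mean attempts per delivered packet approaches 1 / (1 - p_corrupt)
```

Keeping the copy on the sending end means no higher-layer retransmission buffer is needed: the link itself guarantees eventual error-free delivery.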
Triggering of destructive earthquakes in El Salvador
NASA Astrophysics Data System (ADS)
Martínez-Díaz, José J.; Álvarez-Gómez, José A.; Benito, Belén; Hernández, Douglas
2004-01-01
We investigate the existence of a mechanism of static stress triggering driven by the interaction of normal faults in the Middle American subduction zone and strike-slip faults in the El Salvador volcanic arc. The local geology points to a large strike-slip fault zone, the El Salvador fault zone, as the source of several destructive earthquakes in El Salvador along the volcanic arc. We modeled the Coulomb failure stress (CFS) change produced by the June 1982 and January 2001 subduction events on planes parallel to the El Salvador fault zone. The results have broad implications for future risk management in the region, as they suggest a causative relationship between the position of the normal-slip events in the subduction zone and the strike-slip events in the volcanic arc. After the February 2001 event, an important area of the El Salvador fault zone was loaded with a positive change in Coulomb failure stress (>0.15 MPa). This scenario must be considered in the seismic hazard assessment studies that will be carried out in this area.
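The Coulomb failure stress change used in such models has the standard form dCFS = d_tau + mu' * d_sigma_n. A minimal sketch (the split of the abstract's >0.15 MPa loading into shear and normal components below is purely illustrative, as is the effective friction value):

```python
def delta_cfs(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with shear stress change d_tau positive
    in the fault's slip direction and normal stress change positive in
    unclamping (reduced compression)."""
    return d_tau + mu_eff * d_sigma_n

# Hypothetical decomposition of a ~0.15 MPa positive loading:
print(delta_cfs(0.10, 0.125))   # ~0.15 MPa -> fault brought closer to failure
```

A positive dCFS on the receiver fault means the stress transfer from the subduction events moved the El Salvador fault zone closer to failure.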
Deconvoluting complex structural histories archived in brittle fault zones
NASA Astrophysics Data System (ADS)
Viola, G.; Scheiber, T.; Fredin, O.; Zwingmann, H.; Margreth, A.; Knies, J.
2016-11-01
Brittle deformation can saturate the Earth's crust with faults and fractures in an apparently chaotic fashion. The details of brittle deformational histories, and their implications for, e.g., seismotectonics and landscape evolution, can thus be difficult to untangle. Fortunately, brittle faults archive subtle details of the stress and physical/chemical conditions at the time of initial strain localization and eventual subsequent slip(s). Hence, reading those archives offers the possibility to deconvolute protracted brittle deformation. Here we report K-Ar isotopic dating of synkinematic/authigenic illite coupled with structural analysis to illustrate an innovative approach to the high-resolution deconvolution of brittle faulting and fluid-driven alteration of a reactivated fault in western Norway. Permian extension preceded coaxial reactivation in the Jurassic and Early Cretaceous fluid-related alteration with pervasive clay authigenesis. This approach represents important progress towards time-constrained structural models, in which illite characterization and K-Ar analysis are a fundamental tool to date faulting and alteration in crystalline rocks.
2003-06-03
KENNEDY SPACE CENTER, FLA. - An overhead crane in the Space Station Processing Facility lifts the U.S. Node 2 out of its shipping container. The node will be moved to a workstand. The second of three connecting modules on the International Space Station, the Italian-built Node 2 attaches to the end of the U.S. Lab and provides attach locations for the Japanese laboratory, European laboratory, the Centrifuge Accommodation Module and, later, Multipurpose Logistics Modules. It will provide the primary docking location for the Shuttle when a pressurized mating adapter is attached to Node 2. Installation of the module will complete the U.S. Core of the ISS. Node 2 is the designated payload for mission STS-120. No orbiter or launch date has been determined yet.
Implementation of bipartite or remote unitary gates with repeater nodes
NASA Astrophysics Data System (ADS)
Yu, Li; Nemoto, Kae
2016-08-01
We propose some protocols to implement various classes of bipartite unitary operations on two remote parties with the help of repeater nodes in-between. We also present a protocol to implement a single-qubit unitary with parameters determined by a remote party with the help of up to three repeater nodes. It is assumed that the neighboring nodes are connected by noisy photonic channels, and the local gates can be performed quite accurately, while the decoherence of memories is significant. A unitary is often a part of a larger computation or communication task in a quantum network, and to reduce the amount of decoherence in other systems of the network, we focus on the goal of saving the total time for implementing a unitary including the time for entanglement preparation. We review some previously studied protocols that implement bipartite unitaries using local operations and classical communication and prior shared entanglement, and apply them to the situation with repeater nodes without prior entanglement. We find that the protocols using piecewise entanglement between neighboring nodes often require less total time compared to preparing entanglement between the two end nodes first and then performing the previously known protocols. For a generic bipartite unitary, as the number of repeater nodes increases, the total time could approach the time cost for direct signal transfer from one end node to the other. We also prove some lower bounds of the total time when there are a small number of repeater nodes. The application to position-based cryptography is discussed.
NASA Technical Reports Server (NTRS)
Liggett, M. A. (Principal Investigator); Childs, J. F.
1974-01-01
The author has identified the following significant results. The pattern of faulting associated with the termination of the Death Valley-Furnace Creek Fault Zone in northern Fish Lake Valley, Nevada was studied in ERTS-1 MSS color composite imagery and color IR U-2 photography. Imagery analysis was supported by field reconnaissance and low altitude aerial photography. The northwest-trending right-lateral Death Valley-Furnace Creek Fault Zone changes northward to a complex pattern of discontinuous dip slip and strike slip faults. This fault pattern terminates to the north against an east-northeast trending zone herein called the Montgomery Fault Zone. No evidence for continuation of the Death Valley-Furnace Creek Fault Zone is recognized north of the Montgomery Fault Zone. Penecontemporaneous displacement in the Death Valley-Furnace Creek Fault Zone, the complex transitional zone, and the Montgomery Fault Zone suggests that the systems are genetically related. Mercury mineralization appears to have been localized along faults recognizable in ERTS-1 imagery within the transitional zone and the Montgomery Fault Zone.
Won, Jongho; Ma, Chris Y. T.; Yau, David K. Y.; ...
2016-06-01
Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively by using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.
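The trust-distribution idea can be illustrated with a generic mask-based private aggregation sketch. This is a textbook construction, not the paper's future-ciphertext protocol, and every name and parameter below is illustrative:

```python
import math
import random

rng = random.Random(42)

def laplace(scale):
    """Sample Laplace(0, scale) by inverse-CDF, for the DP noise."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def masked_reports(readings, eps, sensitivity=1.0):
    """Each meter i hides its reading behind pairwise random masks
    shared with every other meter (mask m_ij is added by i and
    subtracted by j, so all masks cancel in the aggregate), plus its
    own Laplace noise share.  Simplified: each meter adds full-scale
    noise, which over-protects relative to a tuned distributed scheme."""
    n = len(readings)
    masks = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            masks[i][j] = rng.uniform(-100, 100)  # shared secret m_ij
    reports = []
    for i, x in enumerate(readings):
        r = x + laplace(sensitivity / eps)
        r += sum(masks[i][j] for j in range(i + 1, n))  # masks i adds
        r -= sum(masks[j][i] for j in range(i))         # masks i removes
        reports.append(r)
    return reports

readings = [3.2, 1.7, 4.4, 2.9]
agg = sum(masked_reports(readings, eps=1.0))
# The aggregate equals the true sum plus noise only: masks cancel.
print(round(agg, 2), "vs true sum", sum(readings))
```

Individual reports look random to the aggregator, yet the sum is usable; fault tolerance (the paper's main contribution) is exactly about what happens when some meters' masks never arrive.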
Swetapadma, Aleena; Yadav, Anamika
2015-01-01
Many schemes have been reported for shunt fault location estimation, but fault location estimation of series (open conductor) faults has not been dealt with so far. Existing numerical relays only detect the open conductor (series) fault and indicate the faulty phase(s); they are unable to locate the series fault, so the repair crew must patrol the complete line to find it. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized voltage and current phasors from both ends of the line are used as inputs to the proposed fuzzy-based fault location scheme. The percentage error in fault location is within 1% for series faults and within 5% for shunt faults across all tested fault cases. The location errors are validated using the chi-square test at both the 1% and 5% levels of significance. PMID:26413088
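A minimal sketch of the kind of fuzzy classification involved, using toy triangular memberships on a per-unit phase current. The membership shapes and thresholds are invented for illustration, not the paper's tuned rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(i_pu):
    """Toy fuzzy rule on per-unit phase current: near zero suggests a
    series (open-conductor) fault, near 1 is normal load, well above
    1 suggests a shunt fault.  Illustrative memberships only."""
    mu_series = tri(i_pu, -0.5, 0.0, 0.5)
    mu_normal = tri(i_pu, 0.5, 1.0, 1.5)
    mu_shunt = tri(i_pu, 1.2, 4.0, 20.0)
    return max((mu_series, "series"), (mu_normal, "normal"),
               (mu_shunt, "shunt"))[1]

print(classify(0.05))  # broken conductor, almost no current -> series
print(classify(1.02))  # healthy load current -> normal
print(classify(6.0))   # heavy overcurrent -> shunt
```

A real scheme would fuzzify several phasor-derived quantities per phase and defuzzify a location estimate as well.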
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Ning; Gombos, Gergely; Mousavi, Mirrasoul J.
A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistor (MOV). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulation results show that the proposed algorithm is qualified for practical implementation.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reducing maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
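Permutation entropy, the quantity underlying the MHPE feature, is straightforward to compute. Below is a standard Bandt-Pompe implementation, not the paper's modified hierarchical variant:

```python
import math
import random

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy (Bandt & Pompe): embed the
    series in `order`-length windows, map each window to its ordinal
    pattern (the permutation that sorts it), and take the Shannon
    entropy of the pattern distribution, normalized to [0, 1]."""
    counts = {}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = tuple(signal[i + j * delay] for j in range(order))
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))

# A monotone ramp uses a single ordinal pattern (entropy 0); white
# noise uses all patterns nearly equally (entropy close to 1).
print(permutation_entropy(list(range(100))))
rng = random.Random(0)
print(permutation_entropy([rng.random() for _ in range(2000)]))
```

A healthy-gear vibration signal tends to sit between these extremes, and fault-induced impulses shift the value, which is what makes it usable as a feature.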
NASA Astrophysics Data System (ADS)
Ma, S.; Ma, J.; Liu, L.; Liu, P.
2007-12-01
The digital speckle correlation method (DSCM) is a photomechanical deformation measurement technique. DSCM obtains continuous deformation fields contactlessly by capturing speckle images from the specimen surface, making it well suited to observing high-spatial-resolution deformation fields in tectonophysical experiments. However, in a typical DSCM experiment the inspected surface of the specimen must be painted to bear speckle grains in order to obtain high-quality speckle images, which also interferes with other measurement techniques. In this study, an improved DSCM system is developed and used to measure the deformation field of rock specimens without surface painting. Granodiorite, with its high-contrast natural grains, is chosen as the specimen material, and a specially designed DSCM algorithm is developed to analyze such natural speckle images. Verification and calibration experiments show that the system can continuously (at about 15 Hz) acquire a high-resolution displacement field (resolution of 5 μm) and strain field (resolution of 50 με) without any preparation of the rock specimen, so it can be conveniently used to study the failure of rock structures. Samples with compressive en echelon faults and extensional en echelon faults were studied on a two-direction servo-controlled test machine, and the failure process of the samples is discussed based on the DSCM results. The experimental results show that: 1) The contours of the displacement field clearly indicate the activity of faults and new cracks; the displacement gradient adjacent to active faults and cracks is much greater than in other areas. 2) Before failure of the samples, the mean strain of the jog area is largest for the compressive en echelon fault and smallest for the extensional en echelon fault.
This is consistent with the understanding that the jog area of a compressive fault is subjected to compression while that of an extensional fault is subjected to tension. 3) For the extensional en echelon sample, the dislocation across the fault at the load-driving end is greater than that across the fault at the fixed end. Within the same fault, the dislocation across the branch far from the jog area is greater than that across the branch near the jog area, indicating the restricting effect of the jog area on fault activity. Moreover, the average dislocation across the faults is much greater than that across the cracks. 4) For the compressive en echelon fault, wing cracks initiated first and propagated outward from the jog area. Subsequently, a wedge-shaped strain concentration area initiated and developed in the jog area because of the interaction of the two faults. Finally, the jog area failed when one crack propagated rapidly and connected the two ends of the faults. The DSCM system used in this study clearly reveals the deformation and failure process of the en echelon fault samples. DSCM experiments can be performed without any specimen preparation and without affecting other measurements, so DSCM is expected to be a suitable tool for the experimental study of fault samples in the laboratory.
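The subset-matching step at the heart of DSCM can be sketched in one dimension. Real DSCM correlates 2-D subsets with subpixel refinement; the function names and the synthetic "speckle" below are purely illustrative:

```python
import random

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-size subsets."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_shift(ref, cur, start, size, search=5):
    """1-D sketch of DSCM subset matching: slide the reference subset
    over the current image and keep the integer shift that maximizes
    the correlation coefficient."""
    subset = ref[start:start + size]
    scores = {}
    for s in range(-search, search + 1):
        cand = cur[start + s:start + s + size]
        if len(cand) == size:
            scores[s] = ncc(subset, cand)
    return max(scores, key=scores.get)

rng = random.Random(1)
ref = [rng.random() for _ in range(60)]  # natural speckle texture
cur = [0.0] * 3 + ref                    # whole field displaced by +3
print(best_shift(ref, cur, start=20, size=15))  # recovers the shift
```

Doing this for every subset center yields the displacement field; differentiating that field gives the strain field reported in the abstract.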
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
Genetic transformation of carnation (Dianthus caryophylus L.).
Nontaswatsri, Chalermsri; Fukai, Seiichi
2010-01-01
This chapter describes a rapid and efficient protocol for explant preparation and genetic transformation of carnation. Node explants from greenhouse-grown plants and leaf explants from in vitro plants are infected with Agrobacterium tumefaciens AGL0 harboring the pKT3 plasmid, which carries the GUS and NPTII genes. Explant preparation is an important factor in obtaining transformed plants: the GUS-staining area was located only on the cut end of explants, and only explants with a cut end close to the connecting area between node and leaf produced transformed shoots. The cocultivation medium is also important for successful genetic transformation of carnation node and leaf explants. High transformation efficiency of node and leaf explants cocultured with Agrobacterium tumefaciens was achieved when the explants were cocultivated on filter paper soaked with water or a water and acetosyringone (AS) mixture.
Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa
NASA Technical Reports Server (NTRS)
Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.
2012-01-01
We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and determine relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
Implications of the earthquake cycle for inferring fault locking on the Cascadia megathrust
Pollitz, Fred; Evans, Eileen
2017-01-01
GPS velocity fields in the Western US have been interpreted with various physical models of the lithosphere-asthenosphere system: (1) time-independent block models; (2) time-dependent viscoelastic-cycle models, where deformation is driven by viscoelastic relaxation of the lower crust and upper mantle from past faulting events; (3) viscoelastic block models, a time-dependent variation of the block model. All three models are generally driven by a combination of loading on locked faults and (aseismic) fault creep. Here we construct viscoelastic block models and viscoelastic-cycle models for the Western US, focusing on the Pacific Northwest and the earthquake cycle on the Cascadia megathrust. In the viscoelastic block model, the western US is divided into blocks selected from an initial set of 137 microplates using the method of Total Variation Regularization, allowing potential trade-offs between faulting and megathrust coupling to be determined algorithmically from GPS observations. Fault geometry, slip rate, and locking rates (i.e. the locking fraction times the long term slip rate) are estimated simultaneously within the TVR block model. For a range of mantle asthenosphere viscosity (4.4 × 10^18 to 3.6 × 10^20 Pa s) we find that fault locking on the megathrust is concentrated in the uppermost 20 km in depth, and a locking rate contour line of 30 mm/yr extends deepest beneath the Olympic Peninsula, characteristics similar to previous time-independent block model results. These results are corroborated by viscoelastic-cycle modelling. The average locking rate required to fit the GPS velocity field depends on mantle viscosity, being higher the lower the viscosity. Moreover, for viscosity ≲ 10^20 Pa s, the amount of inferred locking is higher than that obtained using a time-independent block model. This suggests that time-dependent models for a range of admissible viscosity structures could refine our knowledge of the locking distribution and its epistemic uncertainty.
Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar
2014-05-01
We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in a 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based, second-order finite-difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best combines the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirement of a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
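A toy version of such a slip parameterization might look like the sketch below. The functional form (Gaussian bump, error-function skew, quadratic taper) and every parameter value are assumptions chosen to satisfy the abstract's stated requirement of smooth decay from the nucleation point to zero at the patch border; the paper's exact skewed-Gaussian form is not given in the abstract:

```python
import math

def slip(x, y, ax=10e3, ay=5e3, peak=2.0, sigma=0.35, skew=3.0,
         x0=-3e3, y0=0.0):
    """Toy slip distribution in the spirit of the DSM source model:
    a skewed-Gaussian bump centered near the nucleation point
    (x0, y0), tapered to zero at the border of an elliptical patch
    with semi-axes (ax, ay).  All values illustrative."""
    r2 = (x / ax) ** 2 + (y / ay) ** 2
    if r2 >= 1.0:
        return 0.0                      # outside the rupture patch
    d2 = ((x - x0) / ax) ** 2 + ((y - y0) / ay) ** 2
    gauss = math.exp(-d2 / (2 * sigma ** 2))
    skew_term = 1 + math.erf(skew * (x - x0) / ax)  # along-strike skew
    taper = 1.0 - r2                    # smooth decay to the border
    return peak * gauss * skew_term * taper / 2

print(round(slip(-3e3, 0.0), 3))  # large slip near the nucleation point
print(slip(10e3, 0.0))            # exactly zero on the patch border
```

Evaluating this on a grid of fault nodes and converting slip to moment per node is how such a source would feed a finite-difference solver.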
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view the application's vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
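One statistically grounded way to budget fault-injection experiments is to bound the confidence interval on the estimated corruption probability. The sketch below uses a generic Wilson score interval and a worst-case sample-size rule; this is a common approach, not necessarily XRay's actual selection rule:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for the probability that an injected
    fault corrupts the output, given k corruptions in n injections."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def experiments_needed(width, z=1.96):
    """Worst-case (p = 0.5) number of injections so the normal-
    approximation interval is no wider than `width`: n = (z/width)^2."""
    return math.ceil((z / width) ** 2)

lo, hi = wilson_interval(12, 200)       # 12 corruptions in 200 runs
print(f"corruption rate in [{lo:.3f}, {hi:.3f}]")
print(experiments_needed(0.05), "injections for a +/-2.5% interval")
```

Running injections region by region until each interval is tight enough is one way to avoid the "unnecessary experiments" the abstract mentions.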
NASA Astrophysics Data System (ADS)
Pan, J.; Li, H.; Chevalier, M.; Liu, D.; Sun, Z.; Pei, J.; Wu, F.; Xu, W.
2013-12-01
Located at the northwestern end of the Himalayan-Tibetan orogenic belt, the Kongur Shan extensional system (KES) is a significant tectonic unit in the Chinese Pamir. E-W extension of the KES accommodates deformation due to the India/Asia collision in this area. The Cenozoic evolution of the KES has been extensively studied, whereas Late Quaternary deformation along the KES is still poorly constrained. In addition, whether the KES is the northern extension of the Karakorum fault is still debated. Well-preserved normal fault scarps are present all along the KES. Interpretation of satellite images as well as field investigation allowed us to map active normal faults and associated vertically offset geomorphological features along the KES. At one site along the northern Kongur Shan detachment fault, in the eastern Muji basin, a Holocene alluvial fan is vertically offset by the active fault. We measured the vertical displacement of the fan with a total station and collected quartz cobbles for cosmogenic nuclide 10Be dating. Combining the 5-7 m offset with the preliminary surface-exposure ages of ~2.7 ka, we obtain a Holocene vertical slip rate of 1.8-2.6 mm/yr along the fault. This vertical slip rate is comparable to the right-lateral horizontal slip rate along the Muji fault (~4.5 mm/yr), which is the northern end of the KES. Our result is also similar to the Late Quaternary slip rates derived along the KES around the Muztagh Ata as well as the Tashkurgan normal fault (1-3 mm/yr). The geometry, kinematics, and geomorphology of the KES, combined with the compatible slip rates between the right-lateral strike-slip Muji fault and the Kongur Shan normal fault, indicate that the KES may be an elongated pull-apart basin formed between the EW-striking right-lateral strike-slip Muji fault and the NW-SE-striking Karakorum fault.
This unique elongated pull-apart structure, with a long normal fault in the N-S direction and a relatively short strike-slip fault in the ~E-W direction, seems to still be in formation, with the Karakorum fault still propagating to the north.
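The quoted slip rate follows directly from dividing the measured offset range by the exposure age; a minimal sketch of that arithmetic (the slight spread versus the abstract's 1.8-2.6 mm/yr presumably reflects the age uncertainty folded into their estimate):

```python
def slip_rate_range(offset_m, age_ka):
    """Vertical slip rate (mm/yr) bracketed by an offset range (m)
    and a single age (ka): rate = offset / age, with m -> mm and
    ka -> yr cancelling to a factor of 1."""
    lo = offset_m[0] * 1000 / (age_ka * 1000)
    hi = offset_m[1] * 1000 / (age_ka * 1000)
    return lo, hi

lo, hi = slip_rate_range((5, 7), 2.7)  # 5-7 m offset, ~2.7 ka fan
print(f"{lo:.1f}-{hi:.1f} mm/yr")
```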
Stress transfer to the Denali and other regional faults from the M 9.2 Alaska earthquake of 1964
Bufe, C.G.
2004-01-01
Stress transfer from the great 1964 Prince William Sound earthquake is modeled on the Denali fault, including the Denali-Totschunda fault segments that ruptured in 2002, and on other regional fault systems where M 7.5 and larger earthquakes have occurred since 1900. The results indicate that analysis of Coulomb stress transfer from the dominant earthquake in a region is a potentially powerful tool in assessing time-varying earthquake hazard. Modeled Coulomb stress increases on the northern Denali and Totschunda faults from the great 1964 earthquake coincide with zones that ruptured in the 2002 Denali fault earthquake, although stress on the Susitna Glacier thrust plane, where the 2002 event initiated, was decreased. A southeasterly-trending Coulomb stress transect along the right-lateral Totschunda-Fairweather-Queen Charlotte trend shows stress transfer from the 1964 event advancing slip on the Totschunda, Fairweather, and Queen Charlotte segments, including the southern Fairweather segment that ruptured in 1972. Stress transfer retarding right-lateral strike slip was observed from the southern part of the Totschunda fault to the northern end of the Fairweather fault (1958 rupture). This region encompasses a gap with shallow thrust faulting but with little evidence of strike-slip faulting connecting the segments to the northwest and southeast. Stress transfer toward failure was computed on the north-south trending right-lateral strike-slip faults in the Gulf of Alaska that ruptured in 1987 and 1988, with inhibitory stress changes at the northern end of the northernmost (1987) rupture. The northern Denali and Totschunda faults, including the zones that ruptured in the 2002 earthquakes, follow very closely (within 3%), for about 90°, an arc of a circle of radius 375 km. The center of this circle is within a few kilometers of the intersection at depth of the Patton Bay fault with the Alaskan megathrust.
This inferred asperity edge may be the pole of counterclockwise rotation of the block south of the Denali fault. These observations suggest that the asperity and its recurrent rupture in great earthquakes as in 1964 may have influenced the tectonics of the region during the later stages of evolution of the Denali strike-slip fault system.
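The Coulomb stress metric underlying this kind of modeling reduces to a one-line formula once the stress changes are resolved onto a receiver fault. The sketch below uses a commonly chosen effective friction value, which is an assumption, not a number from this study:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, where d_tau is the shear stress
    change resolved in the slip direction and d_sigma_n is the normal
    stress change with unclamping (tension) taken as positive.
    mu' = 0.4 is a common effective-friction choice."""
    return d_shear + mu_eff * d_normal

# A fault loaded by 0.2 MPa in shear but clamped by 0.3 MPa still
# moves toward failure overall under this convention:
print(round(coulomb_stress_change(0.2, -0.3), 3), "MPa")
```

Positive dCFS on a segment (as computed here for the Totschunda-Fairweather trend) is read as stress transfer "advancing" slip; negative values "retard" it.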
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-07-08
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
Neural networks for aircraft control
NASA Technical Reports Server (NTRS)
Linse, Dennis
1990-01-01
Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow, and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against established results.
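The hybrid switching idea can be caricatured as a control-flow sketch: large implicit steps while stress stays below a failure threshold, small explicit steps while rupture relieves stress. This is a cartoon of the switching logic only, with invented numbers, and says nothing about Defmod's actual numerics:

```python
def hybrid_drive(loads, strength, dt_static=1.0, dt_dynamic=0.01):
    """March through load increments with large quasi-static steps;
    when the failure criterion (stress >= strength) is met, switch to
    small dynamic steps until a toy stress drop completes, then
    return to quasi-static stepping."""
    stress, t, log = 0.0, 0.0, []
    for load in loads:
        stress += load
        if stress >= strength:               # failure criterion met
            while stress > 0.5 * strength:   # dynamic rupture phase
                stress *= 0.8                # toy stress drop per step
                t += dt_dynamic
            log.append(("dynamic", round(t, 3)))
        else:
            t += dt_static                   # quasi-static phase
            log.append(("static", round(t, 3)))
    return log

events = hybrid_drive([0.3, 0.3, 0.3, 0.3], strength=1.0)
print(events)  # three quiet static steps, then one rupture episode
```

The payoff mirrored here is that the expensive small time step is only paid during the brief rupture episodes, not during the long inter-event loading.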
Thatcher, W.; England, P.C.
1998-01-01
We have carried out two-dimensional (2-D) numerical experiments on the bulk flow of a layer of fluid that is driven in a strike-slip sense by constant velocities applied at its boundaries. The fluid has the (linearized) conventional rheology assumed to apply to lower crust/upper mantle rocks. The temperature dependence of the effective viscosity of the fluid and the shear heating that accompanies deformation have been incorporated into the calculations, as has thermal conduction in an overlying crustal layer. Two end-member boundary conditions have been considered, corresponding to a strong upper crust driving a weaker ductile substrate and a strong ductile layer driving a passive, weak crust. In many cases of practical interest, shear heating is concentrated close to the axial plane of the shear zone for either boundary condition. For these cases, the resulting steady state temperature field is well approximated by a cylindrical heat source embedded in a conductive half-space at a depth corresponding to the top of the fluid layer. This approximation, along with the application of a theoretical result for one-dimensional shear zones, permits us to obtain simple analytical approximations to the thermal effects of 2-D ductile shear zones for a range of assumed rheologies and crustal geotherms, making complex numerical calculations unnecessary. Results are compared with observable effects on heat flux near the San Andreas fault using constraints on the slip distribution across the entire fault system. Ductile shearing in the lower crust or upper mantle can explain the observed increase in surface heat flux southeast of the Mendocino triple junction and match the amplitude of the regional heat flux anomaly in the California Coast Ranges. 
Because ductile dissipation depends only weakly on slip rate, faults moving only a few millimeters per year can be important heat sources, and the superposition of effects of localized ductile shearing on both currently active and now inactive strands of the San Andreas system can explain the breadth of the heat flux anomaly across central California.
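The cylindrical (line) heat source approximation invoked above has a closed-form surface expression: a steady 2-D line source of strength Q at depth d below an isothermal surface produces a surface heat-flow anomaly q(x) = Q d / (π (x² + d²)), which integrates to Q. The sketch below checks this numerically; the depth and source strength are assumed values, not the paper's:

```python
import math

def surface_heat_flux(x, depth, q_line):
    """Steady surface heat-flow anomaly (W/m^2) at horizontal
    distance x from a buried 2-D line heat source of strength q_line
    (W per meter along strike) at the given depth, using the
    image-source result for an isothermal surface."""
    return q_line * depth / (math.pi * (x * x + depth * depth))

d, q = 15e3, 100.0   # 15 km deep shear zone, 100 W/m (assumed values)
dx = 100.0
total = sum(surface_heat_flux(i * dx, d, q) * dx
            for i in range(-10000, 10000))
print(round(total / q, 3))  # nearly all the source heat exits the surface
```

The Lorentzian shape of q(x) is why localized ductile shearing on several now-inactive strands can sum into the broad anomaly described across central California.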
NASA Astrophysics Data System (ADS)
Scharf, A.; Handy, M. R.; Favaro, S.; Schmid, S. M.; Bertrand, A.
2013-09-01
The Tauern Window exposes a Paleogene nappe stack consisting of highly metamorphosed oceanic (Alpine Tethys) and continental (distal European margin) thrust sheets. In the eastern part of this window, this nappe stack (Eastern Tauern Subdome, ETD) is bounded by a Neogene system of shear (the Katschberg Shear Zone System, KSZS) that accommodated orogen-parallel stretching, orogen-normal shortening, and exhumation with respect to the structurally overlying Austroalpine units (Adriatic margin). The KSZS comprises a ≤5-km-thick belt of retrograde mylonite, the central segment of which is a southeast-dipping, low-angle extensional shear zone with a brittle overprint (Katschberg Normal Fault, KNF). At the northern and southern ends of this central segment, the KSZS loses its brittle overprint and swings around both corners of the ETD to become subvertical, dextral, and sinistral strike-slip faults. The latter represent stretching faults whose displacements decrease westward to near zero. The kinematic continuity of top-east to top-southeast ductile shearing along the central, low-angle extensional part of the KSZS with strike-slip shearing along its steep ends, combined with maximum tectonic omission of nappes of the ETD in the footwall of the KNF, indicates that north-south shortening, orogen-parallel stretching, and normal faulting were coeval. Stratigraphic and radiometric ages constrain exhumation of the folded nappe complex in the footwall of the KSZS to have begun at 23-21 Ma, leading to rapid cooling between 21 and 16 Ma. This exhumation involved a combination of tectonic unroofing by extensional shearing, upright folding, and erosional denudation. The contribution of tectonic unroofing is greatest along the central segment of the KSZS and decreases westward to the central part of the Tauern Window. 
The KSZS formed in response to the indentation of wedge-shaped blocks of semi-rigid Austroalpine basement located in front of the South-Alpine indenter that was part of the Adriatic microplate. Northward motion of this indenter along the sinistral Giudicarie Belt offset the Periadriatic Fault and triggered rapid exhumation of orogenic crust within the entire Tauern Window. Exhumation involved strike-slip and normal faulting that accommodated about 100 km of orogen-parallel extension and was contemporaneous with about 30 km of orogen-perpendicular, north-south shortening of the ETD. Extension of the Pannonian Basin related to roll-back subduction in the Carpathians began at 20 Ma, but did not affect the Eastern Alps before about 17 Ma. The effect of this extension was to reduce the lateral resistance to eastward crustal flow away from the zone of greatest thickening in the Tauern Window area. Therefore, we propose that roll-back subduction temporarily enhanced rather than triggered exhumation and orogen-parallel motion in the Eastern Alps. Lateral extrusion and orogen-parallel extension in the Eastern Alps have continued from 12-10 Ma to the present and are driven by the northward push of Adria.
Cascading Policies Provide Fault Tolerance for Pervasive Clinical Communications.
Williams, Rose; Jalan, Srikant; Stern, Edie; Lussier, Yves A
2005-03-21
We implemented an end-to-end notification system that pushed urgent clinical laboratory results to Blackberry 7510 devices over the Nextel cellular network. We designed our system to use user roles and notification policies to abstract and execute clinical notification procedures. We anticipated some problems with dropped and non-delivered messages when the device was out of network; however, we did not expect the same problems in other situations, such as device reconnection to the network. We addressed these problems by creating cascading "fault tolerance" policies to drive notification escalation when messages timed out or delivery failed. This paper describes our experience in providing an adaptable, fault-tolerant, pervasive notification system for delivering secure, critical, time-sensitive patient laboratory results.
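The cascading escalation logic can be sketched as a policy chain that falls through on delivery failure. The contact chain, role names, and `send` interface below are hypothetical, chosen only to illustrate the pattern described:

```python
import time

def notify(result, contacts, send, timeout_s=0.0):
    """Cascading escalation sketch: try each (role, address) in the
    policy chain in order; on a failed or unacknowledged delivery,
    back off and fall through to the next contact.  `send` is a
    pluggable delivery function returning True on acknowledged
    delivery (a hypothetical interface)."""
    for role, address in contacts:
        if send(address, result):
            return role          # acknowledged: stop escalating
        time.sleep(timeout_s)    # back off before escalating
    return "unreachable"         # exhausted the whole policy chain

policy = [("resident", "dev-1"), ("attending", "dev-2"),
          ("ward desk", "landline")]

# Simulate the first device being out of network coverage:
reachable = {"dev-2", "landline"}
handled_by = notify("K+ 6.8 mmol/L", policy,
                    send=lambda addr, msg: addr in reachable)
print(handled_by)
```

A real deployment would also persist undelivered messages and handle the reconnection-time duplicate deliveries the abstract found surprising.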
Plafker, George
1967-01-01
Two reverse faults on southwestern Montague Island in Prince William Sound were reactivated during the earthquake of March 27, 1964. New fault scarps, fissures, cracks, and flexures appeared in bedrock and unconsolidated surficial deposits along or near the fault traces. Average strike of the faults is between N. 37° E. and N. 47° E.; they dip northwest at angles ranging from 50° to 85°. The dominant motion was dip slip; the blocks northwest of the reactivated faults were relatively upthrown, and both blocks were upthrown relative to sea level. No other earthquake faults have been found on land. The Patton Bay fault on land is a complex system of en echelon strands marked by a series of spectacular landslides along the scarp and (or) by a zone of fissures and flexures on the upthrown block that locally is as much as 3,000 feet wide. The fault can be traced on land for 22 miles, and it has been mapped on the sea floor to the southwest of Montague Island an additional 17 miles. The maximum measured vertical component of slip is 20 to 23 feet and the maximum indicated dip slip is about 26 feet. A left-lateral strike-slip component of less than 2 feet occurs near the southern end of the fault on land where its strike changes from northeast to north. Indirect evidence from the seismic sea waves and aftershocks associated with the earthquake, and from the distribution of submarine scarps, suggests that the faulting on and near Montague Island occurred at the northeastern end of a reactivated submarine fault system that may extend discontinuously for more than 300 miles from Montague Island to the area offshore of the southeast coast of Kodiak Island. The Hanning Bay fault is a minor rupture only 4 miles long that is marked by an exceptionally well defined almost continuous scarp. The maximum measured vertical component of slip is 16⅓ feet near the midpoint, and the indicated dip slip is about 20 feet. 
There is a maximum left-lateral strike-slip component of one-half foot near the southern end of the scarp. Warping and extension cracking occurred in bedrock near the midpoint on the upthrown block within about 1,000 feet of the fault scarp. The reverse faults on Montague Island and their postulated submarine extensions lie within a tectonically important narrow zone of crustal attenuation and maximum uplift associated with the earthquake. However, there are no significant lithologic differences in the rock sequences across these faults to suggest that they form major tectonic boundaries. Their spatial distribution relative to the regional uplift associated with the earthquake, the earthquake focal region, and the epicenter of the main shock suggests that they are probably subsidiary features rather than the causative faults along which the earthquake originated. Approximately 70 percent of the new breakage along the Patton Bay and the Hanning Bay faults on Montague Island was along obvious preexisting active fault traces. The estimated ages of undisturbed trees on and near the fault trace indicate that no major displacement had occurred on these faults for at least 150 to 300 years before the 1964 earthquake.
NASA Astrophysics Data System (ADS)
Gratier, J. P.; Noiriel, C. N.; Renard, F.
2014-12-01
Natural deformation of rocks is often associated with differentiation processes leading to irreversible transformations of their microstructure, which in turn modify their rheological properties. The mechanisms by which such processes develop during diagenesis, metamorphism, or fault differentiation are poorly known, as they are not easy to reproduce in the laboratory due to the long durations required by most chemically controlled differentiation processes. Here we show that experimental compaction with layering development, similar to what happens in natural deformation, can be obtained in the laboratory by indenter techniques. Samples of plaster mixed with clay and samples of diatomite loosely interbedded with clays were loaded for several months at 40°C (plaster) and 150°C (diatomite) in the presence of their saturated solutions. High-resolution X-ray tomography and SEM studies show that the layering development is a self-organized process. Stress-driven dissolution of the soluble minerals (gypsum in plaster, silica in diatomite) is initiated in the zones initially richer in clays, because the kinetics of diffusive mass transfer along the clay/soluble mineral interfaces is much faster than along the healed boundaries of the soluble minerals. The passive concentration of the clay minerals amplifies the localization of the dissolution along some layers oriented perpendicular to the maximum compressive stress component. Conversely, in areas with low initial clay content and clustered soluble minerals, dissolution is more difficult because the grain boundaries of the soluble species are healed together. These areas are less deformed, and they act as rigid objects that concentrate the dissolution near their boundaries, thus amplifying the differentiation.
Applications to fault processes are discussed: i) localized pressure solution and sealing processes may lead to fault rheology differentiation with a partition between two end-member behaviors: seismic (in sealed zones) and aseismic (in dissolved zones); ii) tectonic layering may lead to highly anisotropic structures with a drastic decrease of the rock strength parallel to the layering.
Modeling and Fault Simulation of Propellant Filling System
NASA Astrophysics Data System (ADS)
Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo
2012-05-01
The propellant filling system is one of the key ground facilities at the launch site of rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode Effect Analysis (FMEA) is a well-suited approach to this need. Driven by the need for more fault information for FMEA, and because of the high cost of actual propellant filling, this paper studies the working process of the propellant filling system under fault conditions through simulation based on AMESim. First, based on an analysis of its structure and function, the filling system was decomposed into modules and the mathematical model of each module was derived, from which the whole filling system was modeled in AMESim. Second, a general method for injecting faults into a dynamic system model was proposed; as an example, two typical faults, leakage and blockage, were injected into the filling system model, yielding two fault models in AMESim. Fault simulations were then run and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model effectively simulates the two faults and can be used to guide maintenance and improvement of the filling system.
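The fault-injection idea described above can be illustrated outside AMESim with a toy flow module. This is a minimal sketch under assumed physics (orifice flow, Q = C·√Δp) and illustrative parameter values; it is not the paper's model.

```python
from math import sqrt

# Toy sketch of injecting the two example faults, leakage and blockage,
# into a simple orifice-flow module. A blockage reduces the effective flow
# coefficient C, while a leak diverts a fraction of the delivered flow.
# All values are illustrative, not from the paper's AMESim model.
def filling_flow(dp, c=2.0, blockage=0.0, leak_fraction=0.0):
    """Delivered propellant flow for pressure drop dp (arbitrary units)."""
    c_eff = c * (1.0 - blockage)          # blockage: 0 (none) .. 1 (full)
    q = c_eff * sqrt(dp)                  # healthy-module orifice law
    return q * (1.0 - leak_fraction)      # leakage: fraction lost upstream

nominal = filling_flow(4.0)                       # healthy system
blocked = filling_flow(4.0, blockage=0.5)         # 50% blockage fault
leaking = filling_flow(4.0, leak_fraction=0.2)    # 20% leakage fault
```

Comparing the faulted outputs against the nominal trace is the same comparison the paper performs on the simulated key parameters.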
Amaya, N; Irfan, M; Zervas, G; Nejabati, R; Simeonidou, D; Sakaguchi, J; Klaus, W; Puttnam, B J; Miyazawa, T; Awaji, Y; Wada, N; Henning, I
2013-04-08
We present the first elastic, space division multiplexing, and multi-granular network based on two 7-core MCF links and four programmable optical nodes able to switch traffic utilising the space, frequency and time dimensions with over 6000-fold bandwidth granularity. Results show good end-to-end performance on all channels with power penalties between 0.75 dB and 3.7 dB.
Chou, Ming-Chung; Ko, Chih-Hung; Chang, Jer-Ming; Hsieh, Tsyh-Jyi
2018-05-04
End-stage renal disease (ESRD) patients on hemodialysis were demonstrated to exhibit silent and invisible white-matter alterations which would likely lead to disruptions of brain structural networks. Therefore, the purpose of this study was to investigate the disruptions of brain structural network in ESRD patients. Thirty-three ESRD patients with normal-appearing brain tissues and 29 age- and gender-matched healthy controls were enrolled in this study and underwent both cognitive ability screening instrument (CASI) assessment and diffusion tensor imaging (DTI) acquisition. Brain structural connectivity network was constructed using probabilistic tractography with automatic anatomical labeling template. Graph-theory analysis was performed to detect the alterations of node-strength, node-degree, node-local efficiency, and node-clustering coefficient in ESRD patients. Correlational analysis was performed to understand the relationship between network measures, CASI score, and dialysis duration. Structural connectivity, node-strength, node-degree, and node-local efficiency were significantly decreased, whereas node-clustering coefficient was significantly increased in ESRD patients as compared with healthy controls. The disrupted local structural networks were generally associated with common neurological complications of ESRD patients, but the correlational analysis did not reveal significant correlation between network measures, CASI score, and dialysis duration. Graph-theory analysis was helpful to investigate disruptions of brain structural network in ESRD patients with normal-appearing brain tissues. Copyright © 2018. Published by Elsevier Masson SAS.
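The node measures named in this abstract have standard graph-theoretic definitions. As a sketch (a textbook computation on a small made-up weighted connectivity matrix, not the study's tractography pipeline):

```python
# Standard node measures on a weighted, undirected connectivity matrix W
# (symmetric, zero diagonal): degree, strength, and clustering coefficient.
# The 4-node matrix below is invented purely for illustration.
W = [
    [0.0, 0.8, 0.5, 0.0],
    [0.8, 0.0, 0.3, 0.0],
    [0.5, 0.3, 0.0, 0.2],
    [0.0, 0.0, 0.2, 0.0],
]
n = len(W)

def degree(i):      # number of connections of node i
    return sum(1 for j in range(n) if W[i][j] > 0)

def strength(i):    # sum of connection weights of node i
    return sum(W[i])

def clustering(i):  # fraction of node i's neighbour pairs that are linked
    nbrs = [j for j in range(n) if W[i][j] > 0]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and W[a][b] > 0)
    return 2.0 * links / (k * (k - 1))
```

Decreased strength and degree with increased clustering, as reported for the ESRD group, would correspond to weaker but more locally redundant connectivity in terms of these measures.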
Williams, R.A.; Simpson, R.W.; Jachens, R.C.; Stephenson, W.J.; Odum, J.K.; Ponce, D.A.
2005-01-01
A 1.6-km-long seismic reflection profile across the creeping trace of the southern Hayward fault near Fremont, California, images the fault to a depth of 650 m. Reflector truncations define a fault dip of about 70 degrees east in the 100 to 650 m depth range that projects upward to the creeping surface trace, and is inconsistent with a nearly vertical fault in this vicinity as previously believed. This fault projects to the Mission seismicity trend located at 4-10 km depth about 2 km east of the surface trace and suggests that the southern end of the fault is as seismically active as the part north of San Leandro. The seismic hazard implication is that the Hayward fault may have a more direct connection at depth with the Calaveras fault, affecting estimates of potential event magnitudes that could occur on the combined fault surfaces, thus affecting hazard assessments for the south San Francisco Bay region.
McLaren, Marcia K.; Hardebeck, Jeanne L.; Van Der Elst, Nicholas; Unruh, Jeffrey R.; Bawden, Gerald W.; Blair, James Luke
2008-01-01
We use data from two seismic networks and satellite interferometric synthetic aperture radar (InSAR) imagery to characterize the 22 December 2003 Mw 6.5 San Simeon earthquake sequence. Absolute locations for the mainshock and nearly 10,000 aftershocks were determined using a new three-dimensional (3D) seismic velocity model; relative locations were obtained using double difference. The mainshock location found using the 3D velocity model is 35.704° N, 121.096° W at a depth of 9.7±0.7 km. The aftershocks concentrate at the northwest and southeast parts of the aftershock zone, between the mapped traces of the Oceanic and Nacimiento fault zones. The northwest end of the mainshock rupture, as defined by the aftershocks, projects from the mainshock hypocenter to the surface a few kilometers west of the mapped trace of the Oceanic fault, near the Santa Lucia Range front and the >5 mm postseismic InSAR imagery contour. The Oceanic fault in this area, as mapped by Hall (1991), is therefore probably a second-order synthetic thrust or reverse fault that splays upward from the main seismogenic fault at depth. The southeast end of the rupture projects closer to the mapped Oceanic fault trace, suggesting much of the slip was along this fault, or at a minimum is accommodating much of the postseismic deformation. InSAR imagery shows ∼72 mm of postseismic uplift in the vicinity of maximum coseismic slip in the central section of the rupture, and ∼48 and ∼45 mm at the northwest and southeast end of the aftershock zone, respectively. From these observations, we model a ∼30-km-long northwest-trending northeast-dipping mainshock rupture surface—called the mainthrust—which is likely the Oceanic fault at depth, a ∼10-km-long southwest-dipping backthrust parallel to the mainthrust near the hypocenter, several smaller southwest-dipping structures in the southeast, and perhaps additional northeast-dipping or subvertical structures southeast of the mainshock plane. 
Discontinuous backthrust features opposite the mainthrust in the southeast part of the aftershock zone may offset the relic Nacimiento fault zone at depth. The InSAR data image surface deformation associated with both aseismic slip and aftershock production on the mainthrust and the backthrusts at the northwest and southeast ends of the aftershock zone. The well-defined mainthrust at the latitude of the epicenter and antithetic backthrust illuminated by the aftershock zone indicate uplift of the Santa Lucia Range as a popup block; aftershocks in the southeast part of the zone also indicate a popup block, but it is less well defined. The absence of backthrust features in the central part of the zone suggests range-front uplift by fault-propagation folding, or backthrusts in the central part were not activated during the mainshock.
Integration and validation of a data grid software
NASA Astrophysics Data System (ADS)
Carenton-Madiec, Nicolas; Berger, Katharina; Cofino, Antonio
2014-05-01
The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) system is a software infrastructure for the management, dissemination, and analysis of model output and observational data. The ESGF grid is composed of several types of nodes with different roles: about 40 data nodes host model outputs and datasets using THREDDS catalogs; about 25 compute nodes offer remote visualization and analysis tools; about 15 index nodes crawl data node catalogs and implement faceted, federated search in a web interface; and about 15 identity provider nodes manage accounts, authentication, and authorization. Here we present an actual-size test federation spread across different institutes in different countries, and a Python test suite started in December 2013. The first objective of the test suite is to provide a simple tool that helps test and validate a single data node and its closest index, compute, and identity provider peers. The next objective is to run this test suite on every data node of the federation and thereby test and validate every single node of the whole federation. The suite already uses the nosetests, requests, myproxy-logon, subprocess, selenium, and fabric Python libraries in order to test web front ends, back ends, and security services. The goal of this project is to improve the quality of deliverables in the context of a small team of developers who are widely spread around the world, working collaboratively and without hierarchy. This working context highlighted the need for a federated integration test and validation process.
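The per-node validation idea can be sketched as a pure function plus an injected prober, so it is testable without a live federation. The endpoint paths and host below are hypothetical, not the actual ESGF suite's.

```python
# Hypothetical sketch of validating one node and its closest peers:
# derive the service endpoint for each role, then probe each with an
# injected fetcher. Paths and host names are illustrative only.
SERVICES = {
    "data": "/thredds/catalog.html",       # data node: THREDDS catalog
    "index": "/esg-search/search",         # index node: search back end
    "identity": "/esgf-idp/openid/",       # identity provider endpoint
}

def endpoints(host):
    """Map each service role to the URL to probe on `host`."""
    return {role: "https://" + host + path for role, path in SERVICES.items()}

def validate_node(host, fetch):
    """Return the roles whose endpoint the (injected) fetcher reports up."""
    return [role for role, url in endpoints(host).items() if fetch(url)]

# Usage with a stub fetcher in which only the search endpoint responds.
up = validate_node("esgf.example.org", lambda url: "esg-search" in url)
```

In the real suite the stub would be replaced by an HTTP client (e.g. requests) plus selenium for the web front ends, as the abstract describes.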
Jachens, Robert C.; Wentworth, Carl M.; Graymer, Russell W.; Williams, Robert; Ponce, David A.; Mankinen, Edward A.; Stephenson, William J.; Langenheim, Victoria
2017-01-01
The Evergreen basin is a 40-km-long, 8-km-wide Cenozoic sedimentary basin that lies mostly concealed beneath the northeastern margin of the Santa Clara Valley near the south end of San Francisco Bay (California, USA). The basin is bounded on the northeast by the strike-slip Hayward fault and an approximately parallel subsurface fault that is structurally overlain by a set of west-verging reverse-oblique faults which form the present-day southeastward extension of the Hayward fault. It is bounded on the southwest by the Silver Creek fault, a largely dormant or abandoned fault that splays from the active southern Calaveras fault. We propose that the Evergreen basin formed as a strike-slip pull-apart basin in the right step from the Silver Creek fault to the Hayward fault during a time when the Silver Creek fault served as a segment of the main route by which slip was transferred from the central California San Andreas fault to the Hayward and other East Bay faults. The dimensions and shape of the Evergreen basin, together with palinspastic reconstructions of geologic and geophysical features surrounding it, suggest that during its lifetime, the Silver Creek fault transferred a significant portion of the ∼100 km of total offset accommodated by the Hayward fault, and of the 175 km of total San Andreas system offset thought to have been accommodated by the entire East Bay fault system. As shown previously, at ca. 1.5–2.5 Ma the Hayward-Calaveras connection changed from a right-step, releasing regime to a left-step, restraining regime, with the consequent effective abandonment of the Silver Creek fault. This reorganization was, perhaps, preceded by development of the previously proposed basin-bisecting Mount Misery fault, a fault that directly linked the southern end of the Hayward fault with the southern Calaveras fault during extinction of pull-apart activity. 
Historic seismicity indicates that slip below a depth of 5 km is mostly transferred from the Calaveras fault to the Hayward fault across the Mission seismic trend northeast of the Evergreen basin, whereas slip above a depth of 5 km is transferred through a complex zone of oblique-reverse faults along and over the northeast basin margin. However, a prominent groundwater flow barrier and related land-subsidence discontinuity coincident with the concealed Silver Creek fault, a discontinuity in the pattern of seismicity on the Calaveras fault at the Silver Creek fault intersection, and a structural sag indicative of a negative flower structure in Quaternary sediments along the southwest basin margin indicate that the Silver Creek fault has had minor ongoing slip over the past few hundred thousand years. Two earthquakes with ∼M6 occurred in A.D. 1903 in the vicinity of the Silver Creek fault, but the available information is not sufficient to reliably identify them as Silver Creek fault events.
NASA Technical Reports Server (NTRS)
Bruhn, Ronald L.; Sauber, Jeanne; Cotton, Michele M.; Pavlis, Terry L.; Burgess, Evan; Ruppert, Natalia; Forster, Richard R.
2012-01-01
The northwest-directed motion of the Pacific plate is accompanied by migration and collision of the Yakutat terrane into the cusp of southern Alaska. The nature and magnitude of accretion and translation on upper crustal faults and folds are poorly constrained, however, due to pervasive glaciation. In this study we used high-resolution topography, geodetic imaging, seismic, and geologic data to advance understanding of the transition from strike-slip motion on the Fairweather fault to plate margin deformation on the Bagley fault, which cuts through the upper plate of the collisional suture above the subduction megathrust. The Fairweather fault terminates by oblique-extensional splay faulting within a structural syntaxis, allowing rapid tectonic upwelling of rocks driven by thrust faulting and crustal contraction. Plate motion is partly transferred from the Fairweather to the Bagley fault, which extends 125 km farther west as a dextral shear zone that is partly reactivated by reverse faulting. The Bagley fault dips steeply through the upper plate to intersect the subduction megathrust at depth, forming a narrow fault-bounded crustal sliver in the obliquely convergent plate margin. Since ca. 20 Ma the Bagley fault has accommodated more than 50 km of dextral shearing and several kilometers of reverse motion along its southern flank during terrane accretion. The fault is considered capable of generating earthquakes because it is linked to faults that generated large historic earthquakes, is suitably oriented for reactivation in the contemporary stress field, and is locally marked by seismicity. The fault may generate earthquakes of Mw ≤ 7.5.
NASA Technical Reports Server (NTRS)
Markley, R. W.; Williams, B. F.
1993-01-01
NASA has proposed missions to the Moon and Mars that reflect three areas of emphasis: human presence, exploration, and space resource development for the benefit of Earth. A major requirement for such missions is a robust and reliable communications architecture. Network management, the ability to maintain some degree of human and automatic control over the span of the network from the space elements to the end users on Earth, is required to realize such robust and reliable communications. This article addresses several of the architectural issues associated with space network management. Round-trip delays, such as the 5- to 40-min delays in the Mars case, introduce a host of problems that must be solved by delegating significant control authority to remote nodes; management hierarchy is therefore one of the important architectural issues. We propose a network management approach based on emerging standards that covers the needs for fault, configuration, and performance management, delegated control authority, and hierarchical reporting of events. A relatively simple approach based on standards was demonstrated in the DSN 2000 Information Systems Laboratory, and the results are described.
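The delegation argument above can be made concrete: when the round-trip delay exceeds what a ground-based control loop can tolerate, fault-management authority must move to the remote node. The threshold below is a hypothetical illustration, not a figure from the article.

```python
# Sketch of delay-driven delegation of control authority. The threshold
# is illustrative only; the article's point is simply that Mars-scale
# round-trip delays (5-40 min) rule out Earth-closed fault-control loops.
DELEGATION_THRESHOLD_S = 10.0   # assumed max tolerable control-loop RTT

def control_mode(round_trip_s):
    """Choose who closes the fault-management loop for a given RTT."""
    if round_trip_s <= DELEGATION_THRESHOLD_S:
        return "earth-managed"          # near-Earth links
    return "delegated-to-remote-node"   # remote node handles faults locally,
                                        # reporting events up the hierarchy

mars_rtt = 40 * 60.0   # worst-case Mars round trip from the article, seconds
mode = control_mode(mars_rtt)
```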
Transient many-body instability in driven Dirac materials
NASA Astrophysics Data System (ADS)
Pertsova, Anna; Triola, Christopher; Balatsky, Alexander
The defining feature of a Dirac material (DM) is the presence of nodes in the low-energy excitation spectrum leading to a strong energy dependence of the density of states (DOS). The vanishing of the DOS at the nodal point implies a very low effective coupling constant, which leads to stability of the node against electron-electron interactions. Non-equilibrium or driven DMs, in which the DOS and hence the effective coupling can be controlled by an external drive, offer a new platform for investigating collective instabilities. In this work, we discuss the possibility of realizing transient collective states in driven DMs. Motivated by recent pump-probe experiments which demonstrate the existence of long-lived photo-excited states in DMs, we consider the example of a transient excitonic instability in an optically pumped DM. We identify experimental signatures of the transient excitonic condensate and provide estimates of the critical temperatures and lifetimes of these states for a few important examples of DMs, such as single-layer graphene and topological-insulator surfaces.
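The strong energy dependence invoked here is, for a two-dimensional DM such as graphene, the standard linear density of states (a textbook result stated for context, not derived in the abstract; $g$ is the spin-valley degeneracy and $v_F$ the Fermi velocity):

```latex
\rho(E) = \frac{g\,|E|}{2\pi \hbar^2 v_F^2}, \qquad \rho(0) = 0 .
```

Since the dimensionless effective coupling scales as $\lambda \sim \rho(E_F)\,V$ for an interaction strength $V$, it vanishes as the Fermi level approaches the node; a drive that repopulates states away from the node raises the effective DOS and hence the coupling, which is what opens the door to the transient instabilities discussed.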
Fault Interaction and Stress Accumulation in Chaman Fault System, Balouchistan, Pakistan, Since 1892
NASA Astrophysics Data System (ADS)
Riaz, M. S.; Shan, B.; Xiong, X.; Xie, Z.
2017-12-01
The curve-shaped, left-lateral Chaman fault, approximately 1000 km long, forms the western boundary of the Indian plate. The Chaman fault is active and is the locus of many catastrophic earthquakes. Since the inception of strike-slip movement at 20-25 Ma along the western collision boundary between the Indian and Eurasian plates, the geologically constrained average slip rate of 24 to 35 mm/yr accounts for a total displacement of 460±10 km along the Chaman fault system (Beun et al., 1979; Lawrence et al., 1992). According to earthquake triggering theory, a change in Coulomb failure stress (ΔCFS) either delays (stress shadow) or advances (positive stress) the occurrence of subsequent earthquakes. Several major earthquakes have occurred in the Chaman fault system, yet the region remains poorly studied in terms of earthquake/fault interaction and hazard assessment. To address this, we analyzed the earthquake catalog and collected significant earthquakes with M ≥ 6.2 since 1892. We then computed the evolution of ΔCFS in the Chaman fault system by integrating coseismic static and postseismic viscoelastic relaxation stress transfer since 1892, using the code PSGRN/PSCMP (Wang et al., 2006). For the postseismic stress transfer simulation, we adopted a linear Maxwell rheology to calculate the viscoelastic effects. Our results show that three out of four earthquakes were triggered by preceding earthquakes. The 1892 earthquake (Mw 6.8), which occurred on the northern segment of the Chaman fault, did not influence the 1935 earthquake, which occurred on the Ghazaband fault, a parallel fault 20 km east of the Chaman fault.
The 1935 earthquake (Mw 7.7) significantly loaded both ends of its rupture with positive stress (ΔCFS ≥ 0.01 MPa). This later triggered the 1975 earthquake on the Chaman fault, with 23% of its rupture length at ΔCFS ≥ 0.01 MPa, and the 1990 earthquake on the Ghazaband fault, with 58% of its rupture length at ΔCFS ≥ 0.01 MPa. Because the 1935 earthquake significantly increased the stress at both ends of its rupture, the 2013 earthquake (Mw 7.7) occurred on the Hoshab fault in the positive stress zone, with 26% of its rupture length at ΔCFS ≥ 0.01 MPa (Fig. 1). Our results reveal the interaction among the earthquakes, as well as among the faults, in the study region.
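The triggering criterion used above follows the standard Coulomb failure stress change. As a sketch (the textbook formula with illustrative stress values and an assumed effective friction, not numbers from this study):

```python
# Standard Coulomb failure stress change:
#   dCFS = d_tau + mu_eff * d_sigma_n
# with d_tau the shear stress change resolved in the slip direction and
# d_sigma_n the normal stress change (positive = unclamping). The input
# values and mu_eff below are illustrative, not from the study.
TRIGGER_THRESHOLD_MPA = 0.01   # the 0.01 MPa criterion used in the abstract

def dcfs(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    """Coulomb failure stress change in MPa (mu_eff: effective friction)."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

def promotes_failure(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    return dcfs(d_tau_mpa, d_sigma_n_mpa, mu_eff) >= TRIGGER_THRESHOLD_MPA

loaded = promotes_failure(0.015, 0.01)    # 0.019 MPa: above threshold
shadow = promotes_failure(-0.02, 0.005)   # -0.018 MPa: stress shadow
```

Positive ΔCFS above the threshold advances a receiver fault toward failure; negative ΔCFS places it in a stress shadow, the two regimes contrasted in the abstract.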
2003-06-03
KENNEDY SPACE CENTER, FLA. - An overhead crane in the Space Station Processing Facility is attached to the U.S. Node 2 to lift it out of its shipping container. The node will be moved to a workstand. The second of three connecting modules on the International Space Station, the Italian-built Node 2 attaches to the end of the U.S. Lab and provides attach locations for the Japanese laboratory, European laboratory, the Centrifuge Accommodation Module and, later, Multipurpose Logistics Modules. It will provide the primary docking location for the Shuttle when a pressurized mating adapter is attached to Node 2. Installation of the module will complete the U.S. Core of the ISS. Node 2 is the designated payload for mission STS-120. No orbiter or launch date has been determined yet.
McLaughlin, Robert J.; Sarna-Wojcicki, Andrei M.; Wagner, David L.; Fleck, Robert J.; Langenheim, V.E.; Jachens, Robert C.; Clahan, Kevin; Allen, James R.
2012-01-01
The Rodgers Creek–Maacama fault system in the northern California Coast Ranges (United States) takes up substantial right-lateral motion within the wide transform boundary between the Pacific and North American plates, over a slab window that has opened northward beneath the Coast Ranges. The fault system evolved in several right steps and splays preceded and accompanied by extension, volcanism, and strike-slip basin development. Fault and basin geometries have changed with time, in places with younger basins and faults overprinting older structures. Along-strike and successional changes in fault and basin geometry at the southern end of the fault system probably are adjustments to frequent fault zone reorganizations in response to Mendocino Triple Junction migration and northward transit of a major releasing bend in the northern San Andreas fault. The earliest Rodgers Creek fault zone displacement is interpreted to have occurred ca. 7 Ma along extensional basin-forming faults that splayed northwest from a west-northwest proto-Hayward fault zone, opening a transtensional basin west of Santa Rosa. After ca. 5 Ma, the early transtensional basin was compressed and extensional faults were reactivated as thrusts that uplifted the northeast side of the basin. After ca. 2.78 Ma, the Rodgers Creek fault zone again splayed from the earlier extensional and thrust faults to steeper dipping faults with more north-northwest orientations. In conjunction with the changes in orientation and slip mode, the Rodgers Creek fault zone dextral slip rate increased from ∼2–4 mm/yr 7–3 Ma, to 5–8 mm/yr after 3 Ma. The Maacama fault zone is shown from several data sets to have initiated ca. 3.2 Ma and has slipped right-laterally at ∼5–8 mm/yr since its initiation. 
The initial Maacama fault zone splayed northeastward from the south end of the Rodgers Creek fault zone, accompanied by the opening of several strike-slip basins, some of which were later uplifted and compressed during late-stage fault zone reorganization. The Santa Rosa pull-apart basin formed ca. 1 Ma, during the reorganization of the right stepover geometry of the Rodgers Creek–Maacama fault system, when the maturely evolved overlapping geometry of the northern Rodgers Creek and Maacama fault zones was overprinted by a less evolved, non-overlapping stepover geometry. The Rodgers Creek–Maacama fault system has contributed at least 44–53 km of right-lateral displacement to the East Bay fault system south of San Pablo Bay since 7 Ma, at a minimum rate of 6.1–7.8 mm/yr.
Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)
Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R
2005-01-01
Background BLAST is one of the most common and useful tools for genetic research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy-to-use, fault-tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LANs). W.ND-BLAST provides intuitive Graphic User Interfaces (GUIs) for BLAST database creation, BLAST execution, BLAST output evaluation, and BLAST result exportation. The software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job that took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower-performance-class machines. Finally, there are comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components, which provide comprehensive exportation of BLAST hits to text files, annotated FASTA files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses. The install package for W.ND-BLAST is freely downloadable from .
The software is free with registration; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992
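The reported timings imply a concrete speedup and parallel efficiency, which can be checked directly (standard definitions; the efficiency is only approximate here since, as the abstract notes, some of the 17 nodes were slower machines):

```python
# Speedup and parallel efficiency for the reported distributed-BLAST run:
# 662.68 minutes serially vs. 44.97 minutes on 17 nodes.
def speedup(serial_min, parallel_min):
    return serial_min / parallel_min

def efficiency(serial_min, parallel_min, n_nodes):
    return speedup(serial_min, parallel_min) / n_nodes

s = speedup(662.68, 44.97)          # ~14.7x faster than one machine
e = efficiency(662.68, 44.97, 17)   # ~0.87, i.e. roughly 87% efficiency
```

This is consistent with the near-100% efficiency reported for 12 machines of uniform performance class, degraded somewhat by the heterogeneous 17-node pool.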
NASA Astrophysics Data System (ADS)
Wang, Guo-Hong; Li, He; Zhao, Hai-Wei; Zhang, Wei-Kang
2017-05-01
This study aimed to elucidate the relationship between climate and the phylogenetic and morphological divergence of spruces (Picea) worldwide. Climatic and georeferenced data were collected from a total of 3388 sites distributed within the global domain of spruce species. A phylogenetic tree and a morphological tree for the global spruces were reconstructed based on DNA sequences and morphological characteristics. Spatial evolutionary and ecological vicariance analysis (SEEVA) was used to detect the ecological divergence among spruces. A divergence index (D) with (0, 1) scaling was calculated for each climatic factor at each node for both trees. The annual mean values, extreme values and annual range of the climatic variables were among the major determinants for spruce divergence. The ecological divergence was significant (P < 0.001) for 185 of the 279 comparisons at 31 nodes in the phylogenetic tree, as well as for 196 of the 288 comparisons at 32 nodes in the morphological tree. Temperature parameters and precipitation parameters tended to be the main driving factors for the primary divergences of spruce phylogeny and morphology, respectively. Generally, the maximum D of the climatic variables was smaller in the basal nodes than in the remaining nodes. Notably, the primary divergence of morphology and phylogeny among the investigated spruces tended to be driven by different selective pressures. Given the climate scenario of severe and widespread drought over land areas in the next 30-90 years, our findings shed light on the prediction of spruce distribution under future climate change.
Kinematics of polygonal fault systems: observations from the northern North Sea
NASA Astrophysics Data System (ADS)
Wrona, Thilo; Magee, Craig; Jackson, Christopher A.-L.; Huuse, Mads; Taylor, Kevin G.
2017-12-01
Layer-bound, low-displacement normal faults, arranged into a broadly polygonal pattern, are common in many sedimentary basins. Despite having constrained their gross geometry, we have a relatively poor understanding of the processes controlling the nucleation and growth (i.e. the kinematics) of polygonal fault systems. In this study we use high-resolution 3-D seismic reflection and borehole data from the northern North Sea to undertake a detailed kinematic analysis of faults forming part of a seismically well-imaged polygonal fault system hosted within the up to 1000 m thick, Early Palaeocene-to-Middle Miocene mudstones of the Hordaland Group. Growth strata and displacement-depth profiles indicate faulting commenced during the Eocene to early Oligocene, with reactivation possibly occurring in the late Oligocene to middle Miocene. Mapping the position of displacement maxima on 137 polygonal faults suggests that the majority (64%) nucleated in the lower 500 m of the Hordaland Group. The uniform distribution of polygonal fault strikes in the area indicates that nucleation and growth were not driven by gravity or far-field tectonic extension as has previously been suggested. Instead, fault growth was likely facilitated by low coefficients of residual friction on existing slip surfaces, and probably involved significant layer-parallel contraction (strains of 0.01-0.19) of the host strata. To summarize, our kinematic analysis provides new insights into the spatial and temporal evolution of polygonal fault systems.
Seasonal Modulation of Earthquake Swarm Activity Near Maupin, Oregon
NASA Astrophysics Data System (ADS)
Braunmiller, J.; Nabelek, J.; Trehu, A. M.
2012-12-01
Between December 2006 and November 2011, the Pacific Northwest Seismic Network (PNSN) reported 464 earthquakes in a swarm about 60 km east-southeast of Mt. Hood near the town of Maupin, Oregon. Relocation of forty-five MD≥2.5 earthquakes and regional moment tensor analysis of nine 3.3≤Mw≤3.9 earthquakes reveal a north-northwest trending, less than 1 km² sized active fault patch on a 70° west dipping fault. At about 17 km depth, the swarm occurred at or close to the bottom of the seismogenic crust. The swarm's cumulative seismic moment release, equivalent to an Mw=4.4 earthquake, is not dominated by a single shock; rather, it is mainly due to 20 MD≥3.0 events, which occurred throughout the swarm. The swarm started at the southern end and, during the first 18 months of activity, migrated to the northwest at a rate of about 1-2 m/d until reaching its northern terminus. A 10° fault bend, inferred from locations and fault plane solutions, acted as a geometrical barrier that temporarily halted event migration in mid-2007 before activity continued north in early 2008. The slow event migration points to a pore pressure diffusion process, suggesting the swarm onset was triggered by fluid inflow into the fault zone. At 17 km depth, triggering by meteoric water seems unlikely for normal crustal permeability. The double couple source mechanisms preclude a magmatic intrusion at the depth of the earthquakes. However, fluids (or gases) associated with a deeper, though undocumented, magma injection beneath the Cascade Mountains could trigger seismicity in a pre-stressed region once they have migrated upward and reached the seismogenic crust. Superimposed on the overall swarm evolution, we found a statistically significant annual seismicity variation, which is likely surface driven. The annual seismicity peak during spring (March-May) coincides with the maximum snow load on the nearby Cascades.
The load corresponds to a surface pressure variation of about 6 kPa, which likely causes an annual peak-to-peak vertical displacement of about 1 cm at GPS sites in the Cascades and GPS signals that decay with increasing distance from the Cascades. Stress changes due to loading and unloading of snow pack in the Cascades can act in two ways to instantaneously enhance seismicity. For a strike-slip fault roughly parallel to the trend of the load and 10s of km away from it, normal stress decreases slightly leading to slight fault unclamping. The load also leads to simultaneous compression of fluid conduits at greater depth driving fluids rapidly upward into the swarm source region. The small, temporally variable stress changes on the order of a few kPa or less seem to be adequate to modulate seismicity by varying fault normal stresses and controlling fluid injection into a critically stressed fault zone. The swarm region has been quiet since February 2012 suggesting stresses on the fault have been nearly completely released.
Centralized Routing and Scheduling Using Multi-Channel System Single Transceiver in 802.16d
NASA Astrophysics Data System (ADS)
Al-Hemyari, A.; Noordin, N. K.; Ng, Chee Kyun; Ismail, A.; Khatun, S.
This paper proposes a cross-layer optimized strategy that reduces the effect of interference from neighboring nodes within a mesh network. The cross-layer design relies on the routing information in the network layer and the scheduling table in the medium access control (MAC) layer. A proposed routing algorithm in the network layer is exploited to find the best route for all subscriber stations (SS). Also, a proposed centralized scheduling algorithm in the MAC layer is exploited to assign a time slot for each possible node transmission. The cross-layer optimized strategy uses multi-channel single-transceiver and single-channel single-transceiver systems for WiMAX mesh networks (WMNs). Each node in a WMN has a transceiver that can be tuned to any available channel to eliminate secondary interference. The parameters considered in the performance analysis include interference from neighboring nodes, hop count to the base station (BS), number of children per node, slot reuse, load balancing, quality of service (QoS), and node identifier (ID). Results show that the proposed algorithms significantly improve system performance in terms of length of scheduling, channel utilization ratio (CUR), system throughput, and average end-to-end transmission delay.
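The centralized scheduling step described above can be illustrated with a toy greedy algorithm: visit the nodes (e.g. in order of hop count to the BS) and give each transmission the smallest time slot not already used within its two-hop neighborhood, which rules out both primary and secondary interference on a single channel. This is a minimal sketch for intuition only; the conflict model, the node ordering, and the Python data structures are assumptions, not the algorithm from the paper.

```python
from collections import defaultdict

def centralized_schedule(links, transmissions):
    """Greedy slot assignment: each transmitting node gets the smallest
    slot not used by any node within two hops of it."""
    # build adjacency from undirected links
    adj = defaultdict(set)
    for u, v in links:
        adj[u].add(v)
        adj[v].add(u)

    def two_hop(n):
        # node itself, its neighbors, and their neighbors
        nbrs = set(adj[n])
        for m in list(nbrs):
            nbrs |= adj[m]
        nbrs.add(n)
        return nbrs

    slot_of = {}                    # transmitting node -> assigned slot
    schedule = defaultdict(list)    # slot -> nodes transmitting in it
    for node in transmissions:      # e.g. ordered by hop count to the BS
        banned = {slot_of[m] for m in two_hop(node) if m in slot_of}
        slot = 0
        while slot in banned:
            slot += 1
        slot_of[node] = slot
        schedule[slot].append(node)
    return dict(schedule)
```

On a four-node chain 1-2-3-4 this packs nodes 1 and 4 into the same slot, since they are more than two hops apart, so the whole chain needs only three slots instead of four.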
2003-07-18
KENNEDY SPACE CENTER, FLA. - STS-120 Mission Specialists Piers Sellers and Michael Foreman are in the Space Station Processing Facility for hardware familiarization. The mission will deliver the second of three Station connecting modules, Node 2, which attaches to the end of the U.S. Lab. It will provide attach locations for the Japanese laboratory, the European laboratory, the Centrifuge Accommodation Module and, later, Multi-Purpose Logistics Modules. The addition of Node 2 will complete the U.S. core of the International Space Station.
Traffic-driven epidemic spreading on scale-free networks with tunable degree distribution
NASA Astrophysics Data System (ADS)
Yang, Han-Xin; Wang, Bing-Hong
2016-04-01
We study traffic-driven epidemic spreading on scale-free networks with tunable degree distribution. The heterogeneity of the networks is controlled by the exponent γ of the power-law degree distribution. It is found that the epidemic threshold is minimized at about γ=2.2. Moreover, we find that nodes with larger algorithmic betweenness are more likely to be infected. We expect our work to provide new insights into the effect of network structure on traffic-driven epidemic spreading.
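The role of the tunable exponent γ can be made concrete with a small sketch. The paper's threshold is governed by algorithmic betweenness under traffic-driven spreading; as a rough stand-in, the code below samples power-law degree sequences for two values of γ and compares the standard heterogeneity ratio <k²>/<k>, whose inverse sets the classic (non-traffic) SIS threshold. The bounded-Pareto sampler, the cutoffs kmin/kmax, and the use of degree rather than betweenness heterogeneity are all assumptions for illustration.

```python
import random

def powerlaw_degrees(n, gamma, kmin=2, kmax=1000):
    """Sample n degrees from P(k) ~ k^-gamma via inverse-transform
    sampling of a (continuous) bounded Pareto distribution, floored."""
    a = kmin ** (1 - gamma)
    b = kmax ** (1 - gamma)
    degs = []
    for _ in range(n):
        u = random.random()
        k = (a + u * (b - a)) ** (1 / (1 - gamma))
        degs.append(int(k))
    return degs

def heterogeneity(degs):
    """<k^2>/<k>: the classic SIS threshold scales as <k>/<k^2>,
    so a larger ratio means a lower epidemic threshold."""
    n = len(degs)
    k1 = sum(degs) / n
    k2 = sum(d * d for d in degs) / n
    return k2 / k1

random.seed(1)
h_22 = heterogeneity(powerlaw_degrees(50000, 2.2))
h_35 = heterogeneity(powerlaw_degrees(50000, 3.5))
# the heavier tail (smaller gamma) gives the larger <k^2>/<k>
```

Under this degree-based proxy, lowering γ monotonically increases heterogeneity; the non-monotonic minimum at γ≈2.2 reported in the paper is a property of the traffic-driven (betweenness-mediated) dynamics that this sketch does not capture.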
Oak Ridge fault, Ventura fold belt, and the Sisar decollement, Ventura basin, California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeats, R.S.; Huftile, G.J.; Grigsby, F.B.
1988-12-01
The rootless Ventura Avenue, San Miguelito, and Rincon anticlines (Ventura fold belt) in Pliocene-Pleistocene turbidites are fault-propagation folds related to south-dipping reverse faults rising from a decollement in Miocene shale. To the east, the Sulfur Mountain anticlinorium overlies and is cut by the Sisar, Big Canyon, and Lion south-dipping thrusts that merge downward into the Sisar decollement in lower Miocene shale. Shortening of the Miocene and younger sequence is ~3 km greater than that of the underlying competent Paleogene strata in the Ventura fold belt and ~7 km greater farther east at Sulfur Mountain. Cross-section balancing requires that this difference be taken up by the Paleogene sequence at the Oak Ridge fault to the south. Convergence is northeast to north-northeast on the basis of earthquake focal mechanisms, borehole breakouts, and piercing-point offset of the South Mountain seaknoll by the Oak Ridge fault. A northeast-trending line connecting the west end of the Oak Ridge fault and the east end of the Sisar fault separates an eastern domain, where late Quaternary displacement is taken up entirely on the Oak Ridge fault, from a western domain, where displacement is transferred to the Sisar decollement and its overlying rootless folds. This implies that (1) the Oak Ridge fault near the coast presents as much seismic risk as it does farther east, despite negligible near-surface late Quaternary movement; (2) ground-rupture hazard is high for the Sisar fault set in the upper Ojai Valley; and (3) the decollement itself could produce an earthquake analogous to the 1987 Whittier Narrows event in Los Angeles.
Dickinson, William R.; Ducea, M.; Rosenberg, Lewis I.; Greene, H. Gary; Graham, Stephan A.; Clark, Joseph C.; Weber, Gerald E.; Kidder, Steven; Ernst, W. Gary; Brabb, Earl E.
2005-01-01
Reinterpretation of onshore and offshore geologic mapping, examination of a key offshore well core, and revision of cross-fault ties indicate Neogene dextral strike slip of 156 ± 4 km along the San Gregorio–Hosgri fault zone, a major strand of the San Andreas transform system in coastal California. Delineating the full course of the fault, defining net slip across it, and showing its relationship to other major tectonic features of central California helps clarify the evolution of the San Andreas system. San Gregorio–Hosgri slip rates over time are not well constrained, but were greater than at present during early phases of strike slip following fault initiation in late Miocene time. Strike slip took place southward along the California coast from the western flank of the San Francisco Peninsula to the Hosgri fault in the offshore Santa Maria basin without significant reduction by transfer of strike slip into the central California Coast Ranges. Onshore coastal segments of the San Gregorio–Hosgri fault include the Seal Cove and San Gregorio faults on the San Francisco Peninsula, and the Sur and San Simeon fault zones along the flank of the Santa Lucia Range. Key cross-fault ties include porphyritic granodiorite and overlying Eocene strata exposed at Point Reyes and at Point Lobos, the Nacimiento fault contact between Salinian basement rocks and the Franciscan Complex offshore within the outer Santa Cruz basin and near Esalen on the flank of the Santa Lucia Range, Upper Cretaceous (Campanian) turbidites of the Pigeon Point Formation on the San Francisco Peninsula and the Atascadero Formation in the southern Santa Lucia Range, assemblages of Franciscan rocks exposed at Point Sur and at Point San Luis, and a lithic assemblage of Mesozoic rocks and their Tertiary cover exposed near Point San Simeon and at Point Sal, as restored for intrabasinal deformation within the onshore Santa Maria basin. Slivering of the Salinian block by San Gregorio–Hosgri displacements
elongated its northern end and offset its western margin delineated by the older Nacimiento fault, a sinistral strike-slip fault of latest Cretaceous to Paleocene age. North of its juncture with the San Andreas fault, dextral slip along the San Gregorio–Hosgri fault augments net San Andreas displacement. Alternate restorations of the Gualala block imply that nearly half the net San Gregorio–Hosgri slip was accommodated along the offshore Gualala fault strand lying west of the Gualala block, which is bounded on the east by the current master trace of the San Andreas fault. With San Andreas and San Gregorio–Hosgri slip restored, there remains an unresolved proto–San Andreas mismatch of ∼100 km between the offset northern end of the Salinian block and the southern end of the Sierran-Tehachapi block. On the south, San Gregorio–Hosgri strike slip is transposed into crustal shortening associated with vertical-axis tectonic rotation of fault-bounded crustal panels that form the western Transverse Ranges, and with kinematically linked deformation within the adjacent Santa Maria basin. The San Gregorio–Hosgri fault serves as the principal link between transrotation in the western Transverse Ranges and strike slip within the San Andreas transform system of central California.
Mechanical end joint system for structural column elements
NASA Technical Reports Server (NTRS)
Bush, H. G.; Wallsom, R. E. (Inventor)
1982-01-01
A mechanical end joint system, useful for the transverse connection of strut elements to a common node, comprises a node joint half with a semicircular tongue and groove, and a strut joint half with a semicircular tongue and groove. The two joint halves are engaged transversely and the connection is made secure by the inherent physical property characteristics of locking latches and/or by a spring-actioned shaft. A quick release mechanism provides rapid disengagement of the joint halves.
An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks
Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed
2016-01-01
Body Area Networks (BANs) consist of various sensors which gather a patient's vital signs and deliver them to doctors. One of the most significant challenges faced is the design of an energy-efficient next hop selection algorithm that satisfies Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed for multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and link reliability of the neighboring nodes, and is used to balance energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
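The next-hop rule described above — restrict to minimum-hop-count neighbors, then rank them by a link cost built from residual energy, free buffer size and link reliability — can be sketched as follows. The weights, the inverse-cost functional form, and the neighbor-table fields are illustrative assumptions; the abstract specifies only which quantities enter the cost.

```python
def link_cost(neighbor, w_e=0.4, w_b=0.3, w_r=0.3):
    """Illustrative cost: more residual energy, more free buffer and
    higher link reliability all lower the cost. The weights w_e/w_b/w_r
    and the 1/x form are assumed, not taken from the paper."""
    return (w_e / max(neighbor["energy"], 1e-9)
            + w_b / max(neighbor["free_buffer"], 1e-9)
            + w_r / max(neighbor["reliability"], 1e-9))

def next_hop(neighbors):
    """Restrict to neighbors with the minimum hop count to the sink,
    then pick the candidate with the lowest link cost."""
    min_hops = min(n["hops"] for n in neighbors)
    candidates = [n for n in neighbors if n["hops"] == min_hops]
    return min(candidates, key=link_cost)

# hypothetical neighbor table of one sensor node (all values normalized)
nbrs = [
    {"id": "A", "hops": 2, "energy": 0.9, "free_buffer": 0.8, "reliability": 0.95},
    {"id": "B", "hops": 2, "energy": 0.3, "free_buffer": 0.9, "reliability": 0.60},
    {"id": "C", "hops": 3, "energy": 1.0, "free_buffer": 1.0, "reliability": 0.99},
]
best = next_hop(nbrs)  # C is excluded by hop count; A beats B on cost
```

Filtering on hop count first keeps the end-to-end delay bounded, while the cost tie-break steers traffic away from depleted or unreliable relays, which is the energy/QoS balance the abstract describes.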
2004-08-24
KENNEDY SPACE CENTER, FLA. - In the Space Station Processing Facility, a worker observes data from the Traveled Work Systems Test (TWST) conducted on the Node 2. The TWST executes open work that traveled with the Node 2 from Italy and simulates the on-orbit activation sequence. Node 2 was powered up Aug. 19 for the testing. The second of three Space Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Node 2 is scheduled to launch on mission STS-120, assembly flight 10A to the International Space Station.
Enabling Service Discovery in a Federation of Systems: WS-Discovery Case Study
2014-06-01
found that Pastry [3] coupled with SCRIBE [4] provides everything we require from the overlay network: Pastry nodes form a decentralized, self...application-independent manner. Furthermore, Pastry provides mechanisms that support and facilitate application-specific object replication, caching, and fault...recovery. Add SCRIBE to Pastry , and you get a generic, scalable and efficient group communication and event notification system providing
Release of mineral-bound water prior to subduction tied to shallow seismogenic slip off Sumatra
NASA Astrophysics Data System (ADS)
Hüpers, Andre; Torres, Marta E.; Owari, Satoko; McNeill, Lisa C.; Dugan, Brandon; Henstock, Timothy J.; Milliken, Kitty L.; Petronotis, Katerina E.; Backman, Jan; Bourlange, Sylvain; Chemale, Farid; Chen, Wenhuang; Colson, Tobias A.; Frederik, Marina C. G.; Guèrin, Gilles; Hamahashi, Mari; House, Brian M.; Jeppson, Tamara N.; Kachovich, Sarah; Kenigsberg, Abby R.; Kuranaga, Mebae; Kutterolf, Steffen; Mitchison, Freya L.; Mukoyoshi, Hideki; Nair, Nisha; Pickering, Kevin T.; Pouderoux, Hugo F. A.; Shan, Yehua; Song, Insun; Vannucchi, Paola; Vrolijk, Peter J.; Yang, Tao; Zhao, Xixi
2017-05-01
Plate-boundary fault rupture during the 2004 Sumatra-Andaman subduction earthquake extended closer to the trench than expected, increasing earthquake and tsunami size. International Ocean Discovery Program Expedition 362 sampled incoming sediments offshore northern Sumatra, revealing recent release of fresh water within the deep sediments. Thermal modeling links this freshening to amorphous silica dehydration driven by rapid burial-induced temperature increases in the past 9 million years. Complete dehydration of silicates is expected before plate subduction, contrasting with prevailing models for subduction seismogenesis calling for fluid production during subduction. Shallow slip offshore Sumatra appears driven by diagenetic strengthening of deeply buried fault-forming sediments, contrasting with weakening proposed for the shallow Tohoku-Oki 2011 rupture, but our results are applicable to other thickly sedimented subduction zones including those with limited earthquake records.
NASA Astrophysics Data System (ADS)
Yin, A.; Pappalardo, R. T.
2013-12-01
Detailed photogeologic mapping of the tiger-stripe fractures in the South Polar Terrain (SPT) of Enceladus indicates that these structures are left-slip faults and terminate at hook-shaped fold-thrust zones and/or Y-shaped horsetail splay-fault zones. The semi-square-shaped tectonic domain that hosts the tiger-stripe faults is bounded by right-slip and left-slip faults on the north and south edges and fold-thrust and extensional zones on the western and eastern edges. We explain the above observations by a passive bookshelf-faulting model in which individual tiger-stripe faults are bounded by deformable wall rocks accommodating distributed deformation. Based on topographic data, we suggest that gravitational spreading had caused the SPT to spread unevenly from west to east. This process was accommodated by right-slip and left-slip faulting on the north and south sides and thrusting and extension along the eastern and southern margins of the tiger-stripe tectonic domain. The uneven spreading, expressed by a gradual northward increase in the number of extensional faults and thrusts/folds along the western and eastern margins, was accommodated by distributed right-slip simple shear across the whole tiger-stripe tectonic domain. This mode of deformation in turn resulted in the development of a passive bookshelf-fault system characterized by left-slip faulting on individual tiger-stripe fractures.
Fault-controlled CO2 leakage from natural reservoirs in the Colorado Plateau, East-Central Utah
NASA Astrophysics Data System (ADS)
Jung, Na-Hyun; Han, Weon Shik; Watson, Z. T.; Graham, Jack P.; Kim, Kue-Young
2014-10-01
The study investigated a natural analogue for soil CO2 fluxes where CO2 has naturally leaked on the Colorado Plateau, East-Central Utah in order to identify various factors that control CO2 leakage and to understand regional-scale CO2 leakage processes in fault systems. A total of 332 and 140 measurements of soil CO2 flux were made at 287 and 129 sites in the Little Grand Wash (LGW) and Salt Wash (SW) fault zones, respectively. Measurement sites for CO2 flux involved not only conspicuous CO2 degassing features (e.g., CO2-driven springs/geysers) but also linear features (e.g., joints/fractures and areas of diffusive leakage around a fault damage zone). CO2 flux anomalies were mostly observed along the fault traces. Specifically, CO2 flux anomalies were focused in the northern footwall of both the LGW and SW faults, supporting the existence of a north-plunging anticlinal CO2 trap against the south-dipping faults, as well as a higher probability that the northern major fault traces act as conduits. Anomalous CO2 fluxes also appeared in active travertines adjacent to CO2-driven cold springs and geysers (e.g., 36,259 g m-2 d-1 at Crystal Geyser), ancient travertines (e.g., 5,917 g m-2 d-1), joint zones in sandstone (e.g., 120 g m-2 d-1), and brine discharge zones (e.g., 5,515 g m-2 d-1). These observations indicate that CO2 has escaped through those pathways and that CO2 leakage from these fault zones does not correspond to point source leakage. The magnitude of CO2 flux is progressively reduced from north (i.e. the LGW fault zone, ∼36,259 g m-2 d-1) to south (i.e. the SW fault zone, ∼1,428 g m-2 d-1) despite new inputs of CO2 and CO2-saturated brine to the northerly SW fault from depth. This discrepancy in CO2 flux most likely results from the differences in fault zone architecture and associated permeability structure. 
CO2-rich fluids from the LGW fault zone may become depleted with respect to CO2 during lateral transport, resulting in an additional decrease in CO2 fluxes within the SW fault zone. In other words, CO2 and CO2-charged brine originating from the LGW fault zone could migrate southward over 10-20 km through a series of highly permeable aquifers (e.g., Entrada, Navajo, Kayenta, Wingate, and White Rim Sandstones). These CO2-rich fluids could finally reach the southernmost Tumbleweed and Chaffin Ranch Geysers across the SW fault zone. The potential lateral transport of both CO2 and CO2-laden brine is further supported by similar CO2/3He and 3He/4He ratios of gas and a systematic chemical evolution of water emitted from the regional springs and geysers, which suggest the same crustal origins of CO2 and CO2-rich brine for the region.
Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang
2014-01-01
A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is designed to eliminate the Doppler effect embedded in the acoustic signal recorded from the bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified from the signal itself. The signal resampler is then applied to eliminate the Doppler effect using the identified parameters. With its ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197
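The core of the Doppler effect eliminator is the signal resampler: once the kinematic parameters (source speed, closest-approach distance, passing time) are known, the recorded arrival-time signal can be re-read on a uniform emission-time grid, undoing the Doppler time warp. The sketch below is a simplified reconstruction under an assumed straight-line pass-by geometry with linear interpolation; the paper's PMDW-based parameter identification is not reproduced.

```python
import math

def arrival_time(t_e, v, r, t0, c=340.0):
    """Map emission time t_e to wayside arrival time for a source moving
    at speed v along a straight line, at closest distance r from the
    microphone, passing abreast at time t0 (c = speed of sound)."""
    d = math.sqrt(r * r + (v * (t_e - t0)) ** 2)
    return t_e + d / c

def remove_doppler(signal, fs, v, r, t0, c=340.0):
    """Resample the recorded (arrival-time) signal on a uniform
    emission-time grid, undoing the Doppler time warp, using linear
    interpolation. Aligning to the first sample's arrival is an
    assumed convention."""
    n = len(signal)
    out = []
    for i in range(n):
        t_e = i / fs
        t_a = arrival_time(t_e, v, r, t0, c) - arrival_time(0.0, v, r, t0, c)
        j = t_a * fs              # fractional index into the recording
        j0 = int(j)
        if j0 + 1 >= n:
            break                 # warped time runs past the recording
        frac = j - j0
        out.append((1 - frac) * signal[j0] + frac * signal[j0 + 1])
    return out
```

With v=0 the time warp is the identity and the signal is returned unchanged, which makes a convenient sanity check; with v>0 samples emitted during approach are stretched back out, restoring the bearing's characteristic frequencies for the subsequent transient analysis.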
Dynamic modeling of gearbox faults: A review
NASA Astrophysics Data System (ADS)
Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng
2018-01-01
Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the gearbox's health will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, resulting in safer operation and greater cost savings. Recently, many studies have been done to develop gearbox dynamic models with faults, aiming to understand gear fault generation mechanisms and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and open challenges are reviewed and discussed. This detailed literature review is organized around the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
NASA Astrophysics Data System (ADS)
Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico
2017-06-01
Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits, of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to address such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most promising routes, as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage of the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits, and the consequent space constraints on qubit operations, have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing, and it depends on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. 
According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon complementary metal-oxide-semiconductor quantum processor to lie between 1 and 100 GHz. This constraint limits the feasibility of fault-tolerant quantum information processing with complementary metal-oxide-semiconductor technology to only the most advanced nodes. The compatibility with classical complementary metal-oxide-semiconductor control circuitry is discussed, focusing on the cryogenic complementary metal-oxide-semiconductor operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm² for the 10 and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.
Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide
NASA Astrophysics Data System (ADS)
Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.
2017-12-01
GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, these algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare peak ground acceleration (PGA) predicted from first-alert solutions with values recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.
Kuusk, Teele; De Bruijn, Roderick; Brouwer, Oscar R; De Jong, Jeroen; Donswijk, Maarten; Grivas, Nikolaos; Hendricksen, Kees; Horenblas, Simon; Prevoo, Warner; Valdés Olmos, Renato A; Van Der Poel, Henk G; Van Rhijn, Bas W G; Wit, Esther M; Bex, Axel
2018-06-01
Lymphatic drainage from renal tumors is unpredictable. In vivo drainage studies of primary lymphatic landing sites may reveal the variability and dynamics of lymphatic connections. The purpose of this study was to investigate the lymphatic drainage pattern of renal tumors in vivo with single photon emission/computerized tomography after intratumor radiotracer injection. We performed a phase II, prospective, single arm study to investigate the distribution of sentinel nodes from renal tumors on single photon emission/computerized tomography. Patients with cT1-3 (less than 10 cm) cN0M0 renal tumors of any subtype were enrolled in the analysis. After intratumor ultrasound guided injection of 0.4 ml 99mTc-nanocolloid we performed preoperative imaging of sentinel nodes with lymphoscintigraphy and single photon emission/computerized tomography. Sentinel and locoregional nonsentinel nodes were resected with a γ probe combined with a mobile γ camera. The primary study end point was the location of sentinel nodes outside the locoregional retroperitoneal templates on single photon emission/computerized tomography. Using a Simon minimax 2-stage design to detect a 25% extralocoregional retroperitoneal template location of sentinel nodes on imaging at α = 0.05 and 80% power, at least 40 patients with sentinel node imaging on single photon emission/computerized tomography were needed. Of the 68 patients 40 underwent preoperative single photon emission/computerized tomography of sentinel nodes and were included in the primary end point analysis. Lymphatic drainage outside the locoregional retroperitoneal templates was observed in 14 patients (35%). Eight patients (20%) had supradiaphragmatic sentinel nodes. Sentinel nodes from renal tumors were mainly located in the respective locoregional retroperitoneal templates. 
At the same time, sentinel nodes were located outside the suggested lymph node dissection templates, including supradiaphragmatic sentinel nodes, in more than a third of the patients. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
The influence of climatically-driven surface loading variations on continental strain and seismicity
NASA Astrophysics Data System (ADS)
Craig, Tim; Calais, Eric; Fleitout, Luce; Bollinger, Laurent; Scotti, Oona
2016-04-01
In slowly deforming regions of plate interiors, secondary sources of stress and strain can result in transient deformation rates comparable to, or greater than, the background tectonic rates. Highly variable in space and time, these transients have the potential to influence the spatio-temporal distribution of seismicity, interfering with any background tectonic effects to either promote or inhibit the failure of pre-existing faults, and potentially leading to a clustered, or 'pulse-like', seismic history. Here, we investigate the ways in which the large-scale deformation field resulting from climatically-controlled changes in surface ice mass over the Pleistocene and Holocene may have influenced not only the seismicity of glaciated regions, but also the wider seismicity around the ice periphery. We first use a set of geodynamic models to demonstrate that a major pulse of seismic activity in Fennoscandia, coincident with the time of end-glaciation, occurred in a setting where the contemporaneous horizontal strain rate resulting from the changing ice mass was extensional - opposite to the reverse sense of coseismic displacement accommodated on these faults. Faulting therefore did not release extensional elastic strain that was building up at the time of failure, but compressional elastic strain that had accumulated in the lithosphere on timescales longer than the glacial cycle, illustrating the potential for a non-tectonic trigger to tap into the background tectonic stress state. We then investigate the more distal influence that changing ice (and ocean) volumes may have had on the evolving strain field across intraplate Europe, how this is reflected in the seismicity there, and what impact this might have on the paleoseismic record.
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
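The re-routing decision described above can be sketched as a simple predicate: if no static batch node can satisfy a job's platform requirements, the job is sent to a dynamically launched VM worker node. The node descriptions, field names, and routing labels below are invented for illustration and are not the actual HTCondor-CE job-router configuration.

```python
# Hypothetical sketch of the job-routing decision: route to the static
# batch system when possible, otherwise provision a cloud VM worker node.

STATIC_NODES = [
    {"os": "centos7", "arch": "x86_64"},
    {"os": "centos7", "arch": "x86_64"},
]

def satisfiable_by_static(job_requirements: dict) -> bool:
    # A static node satisfies the job if it matches every stated requirement.
    return any(all(node.get(k) == v for k, v in job_requirements.items())
               for node in STATIC_NODES)

def route(job_requirements: dict) -> str:
    if satisfiable_by_static(job_requirements):
        return "static-batch"
    # Otherwise launch a VM (e.g. on OpenStack) whose image matches the
    # job's platform requirements, transparently to the end user.
    return "dynamic-vm"

print(route({"os": "centos7"}))   # matches a static node
print(route({"os": "debian9"}))   # needs a dynamically launched VM
```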
Mobility based key management technique for multicast security in mobile ad hoc networks.
Madhusudhanan, B; Chitra, S; Rajan, C
2015-01-01
In MANET multicasting, enforcing forward and backward secrecy results in an increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead, which increases energy consumption and end-to-end delay. In particular, the prevailing group key management techniques cope poorly with frequent mobility and disconnections. There is therefore a need for a multicast key management technique that overcomes these problems. In this paper, we propose a mobility-based key management technique for multicast security in MANETs. Initially, the nodes are categorized according to their stability index, which is estimated from link availability and mobility. A multicast tree is constructed such that every weak node has a strong parent node. A session-key-based encryption technique is used to transmit multicast data. The rekeying process is performed periodically by the initiator node, and the rekeying interval is set according to the node category, which greatly reduces the rekeying overhead. Simulation results show that the proposed approach reduces the packet drop rate and improves data confidentiality.
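The node categorization and adaptive rekeying interval can be sketched as follows. The stability-index formula, the category threshold, and the interval policy are paraphrased assumptions for illustration, not the authors' exact definitions.

```python
from dataclasses import dataclass

# Illustrative sketch: nodes are scored by a stability index derived from
# link availability and mobility, then categorized; stable nodes tolerate
# longer rekeying intervals, cutting rekeying message overhead.

@dataclass
class Node:
    name: str
    link_availability: float  # fraction of time links to neighbours are up (0..1)
    mobility: float           # normalized speed (0 = static, 1 = highly mobile)

    @property
    def stability_index(self) -> float:
        # Higher availability and lower mobility -> more stable node.
        return self.link_availability * (1.0 - self.mobility)

def categorize(node: Node, threshold: float = 0.5) -> str:
    return "strong" if node.stability_index >= threshold else "weak"

def rekey_interval(node: Node, base_seconds: float = 60.0) -> float:
    # Strong nodes are rekeyed half as often in this toy policy.
    return base_seconds * (2.0 if categorize(node) == "strong" else 1.0)

a = Node("a", link_availability=0.9, mobility=0.1)
b = Node("b", link_availability=0.4, mobility=0.8)
print(categorize(a), rekey_interval(a))  # strong 120.0
print(categorize(b), rekey_interval(b))  # weak 60.0
```

In a tree built under the "strong parent" rule, a weak node such as `b` would be attached beneath a strong node such as `a`, so disconnections of unstable leaves do not force subtree-wide rekeying.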
Dendritic cells control fibroblastic reticular network tension and lymph node expansion.
Acton, Sophie E; Farrugia, Aaron J; Astarita, Jillian L; Mourão-Sá, Diego; Jenkins, Robert P; Nye, Emma; Hooper, Steven; van Blijswijk, Janneke; Rogers, Neil C; Snelgrove, Kathryn J; Rosewell, Ian; Moita, Luis F; Stamp, Gordon; Turley, Shannon J; Sahai, Erik; Reis e Sousa, Caetano
2014-10-23
After immunogenic challenge, infiltrating and dividing lymphocytes markedly increase lymph node cellularity, leading to organ expansion. Here we report that the physical elasticity of lymph nodes is maintained in part by podoplanin (PDPN) signalling in stromal fibroblastic reticular cells (FRCs) and its modulation by CLEC-2 expressed on dendritic cells. We show in mouse cells that PDPN induces actomyosin contractility in FRCs via activation of RhoA/C and downstream Rho-associated protein kinase (ROCK). Engagement by CLEC-2 causes PDPN clustering and rapidly uncouples PDPN from RhoA/C activation, relaxing the actomyosin cytoskeleton and permitting FRC stretching. Notably, administration of CLEC-2 protein to immunized mice augments lymph node expansion. In contrast, lymph node expansion is significantly constrained in mice selectively lacking CLEC-2 expression in dendritic cells. Thus, the same dendritic cells that initiate immunity by presenting antigens to T lymphocytes also initiate remodelling of lymph nodes by delivering CLEC-2 to FRCs. CLEC-2 modulation of PDPN signalling permits FRC network stretching and allows for the rapid lymph node expansion--driven by lymphocyte influx and proliferation--that is the critical hallmark of adaptive immunity.
Role of the Kazerun fault system in active deformation of the Zagros fold-and-thrust belt (Iran)
NASA Astrophysics Data System (ADS)
Authemayou, Christine; Bellier, Olivier; Chardon, Dominique; Malekzade, Zaman; Abassi, Mohammad
2005-04-01
Field structural and SPOT image analyses document the kinematic framework that enables transfer of strike-slip partitioned motion from the backstop to the interior of the Zagros fold-and-thrust belt in a context of slightly oblique plate convergence. Transfer occurs by slip on the north-trending right-lateral Kazerun Fault System (KFS), which connects to the Main Recent Fault, a major northwest-trending dextral fault partitioning oblique convergence at the rear of the belt. The KFS, formed by three fault zones that terminate in bent orogen-parallel thrusts, allows slip along the Main Recent Fault to be distributed by transfer onto longitudinal thrusts and folds. To cite this article: C. Authemayou et al., C. R. Geoscience 337 (2005).
Gamell, Marc; Teranishi, Keita; Kolla, Hemanth; ...
2017-10-26
In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graph) architectures, currently embed resilience features, whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it remains necessary to exploit application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower cost in faulty environments. In particular, this paper targets enabling online, semi-transparent local recovery for stencil computations on current leadership-class systems, and presents programming support and scalable runtime mechanisms. Also described and demonstrated is the effect of failure masking, which effectively reduces the impact of multiple failures on total time to solution. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all the aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (on the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scale. This demonstration also displays the increase in failure-masking probability that results from combining ghost region expansion with cell-to-rank remapping.
A New On-Line Diagnosis Protocol for the SPIDER Family of Byzantine Fault Tolerant Architectures
NASA Technical Reports Server (NTRS)
Geser, Alfons; Miner, Paul S.
2004-01-01
This paper presents the formal verification of a new protocol for online distributed diagnosis for the SPIDER family of architectures. An instance of the Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) architecture consists of a collection of processing elements communicating over a Reliable Optical Bus (ROBUS). The ROBUS is a specialized fault-tolerant device that guarantees Interactive Consistency, Distributed Diagnosis (Group Membership), and Synchronization in the presence of a bounded number of physical faults. Formal verification of the original SPIDER diagnosis protocol provided a detailed understanding that led to the discovery of a significantly more efficient protocol. The original protocol was adapted from the formally verified protocol used in the MAFT architecture. It required O(N) message exchanges per defendant to correctly diagnose failures in a system with N nodes. The new protocol achieves the same diagnostic fidelity, but only requires O(1) exchanges per defendant. This paper presents this new diagnosis protocol and a formal proof of its correctness using PVS.
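The efficiency difference can be illustrated with a generic majority-vote diagnosis sketch. This is not the formally verified SPIDER protocol; the node count, the accusation format, and the strict-majority rule are invented for illustration. The key point it mirrors is that each observer broadcasts a single accusation vector covering all N defendants, so the exchange cost per defendant is constant rather than O(N).

```python
# Generic distributed-diagnosis sketch: each node broadcasts one accusation
# vector for all defendants; a defendant is convicted by strict majority.

N = 5
# accusations[i][j] = True if node i accuses node j of being faulty.
# Here nodes 0, 1, 3, 4 correctly accuse node 2; node 2 (faulty) stays silent.
accusations = [
    [False, False, True,  False, False],
    [False, False, True,  False, False],
    [False, False, False, False, False],  # a faulty observer may lie or omit
    [False, False, True,  False, False],
    [False, False, True,  False, False],
]

def diagnose(accusations):
    n = len(accusations)
    convicted = []
    for defendant in range(n):
        votes = sum(accusations[observer][defendant] for observer in range(n))
        if votes > n // 2:  # strict majority convicts the defendant
            convicted.append(defendant)
    return convicted

print(diagnose(accusations))  # only node 2 is convicted
```

Under the usual bounded-fault assumption (fewer than half the observers faulty), a correct node is never convicted by this rule, since faulty observers alone cannot form a majority.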
Interface For Fault-Tolerant Control System
NASA Technical Reports Server (NTRS)
Shaver, Charles; Williamson, Michael
1989-01-01
Interface unit and controller emulator developed for research on electronic helicopter-flight-control systems equipped with artificial intelligence. Interface unit interrupt-driven system designed to link microprocessor-based, quadruply-redundant, asynchronous, ultra-reliable, fault-tolerant control system (controller) with electronic servocontrol unit that controls set of hydraulic actuators. Receives digital feedforward messages from, and transmits digital feedback messages to, controller through differential signal lines or fiber-optic cables (thus far only differential signal lines have been used). Analog signals transmitted to and from servocontrol unit via coaxial cables.
Sentinel lymph node biopsy for early oral cancers: Westmead Hospital experience.
Abdul-Razak, Muzib; Chung, Hsiang; Wong, Eva; Palme, Carsten; Veness, Michael; Farlow, David; Coleman, Hedley; Morgan, Gary
2017-01-01
Sentinel lymph node biopsy (SLNB) has become an alternative option to elective neck dissection (END) for early oral cavity squamous cell carcinoma (OCSCC) outside of Australia. We sought to assess the technical feasibility of SLNB and validate its accuracy against that of END in an Australian setting. We performed a prospective cohort study consisting of 30 consecutive patients with cT1-2N0 OCSCC referred to the Head and Neck Cancer Service, Westmead Hospital, Sydney, between 2011 and 2014. All patients underwent SLNB followed by immediate selective neck dissection (levels I-III). A total of 30 patients were diagnosed with an early clinically node-negative OCSCC (seven cT1 and 23 cT2), with the majority located on the oral tongue. A median of three (range: 1-14) sentinel nodes were identified on lymphoscintigraphy, and all sentinel nodes were successfully retrieved, with 50% having a pathologically positive sentinel node. No false-negative sentinel nodes were identified using selective neck dissection as the gold standard. The negative predictive value (NPV) of SLNB was 100%, with 40% having a sentinel node identified outside the field of planned neck dissection on lymphoscintigraphy. Of these, one patient had a positive sentinel node outside of the ipsilateral supraomohyoid neck dissection template. SLNB for early OCSCC is technically feasible in an Australian setting. It has a high NPV and can potentially identify at-risk lymphatic basins outside the traditional selective neck dissection levels even in well-lateralized lesions. © 2016 Royal Australasian College of Surgeons.
The Kumamoto Mw7.1 mainshock: deep initiation triggered by the shallow foreshocks
NASA Astrophysics Data System (ADS)
Shi, Q.; Wei, S.
2017-12-01
The Kumamoto Mw7.1 earthquake and its Mw6.2 foreshock struck the central Kyushu region in mid-April 2016. The surface ruptures are characterized by multiple fault segments and a mix of strike-slip and normal motion, extending from the intersection of the Hinagu and Futagawa faults to the southwest of Mt. Aso. Despite the complex surface ruptures, most finite fault inversions use two fault segments to approximate the fault geometry. To study the rupture process and the complex fault geometry of this earthquake, we performed a multiple point source inversion for the mainshock using data from 93 K-net and KiK-net stations. With path calibration from the Mw6.0 foreshock, we selected the frequency ranges for the Pnl waves (0.02-0.26 Hz) and surface waves (0.02-0.12 Hz), as well as the components that can be well modeled with the 1D velocity model. Our four-point-source results reveal a unilateral rupture towards Mt. Aso and varying fault geometries. The first sub-event is a high-angle (~79°) right-lateral strike-slip event at a depth of 16 km at the north end of the Hinagu fault. Notably, the two M>6 foreshocks were located by our previous studies near the north end of the Hinagu fault at depths of 5-9 km, which may give rise to stress concentration at depth. The following three sub-events are distributed along the surface rupture of the Futagawa fault, with focal depths of 4-10 km. Their focal mechanisms show similar right-lateral fault slip with relatively small dip angles (62-67°) and an apparent normal-fault component. Thus, the mainshock rupture initiated in the relatively deep part of the Hinagu fault and propagated through the fault bend toward the NE along the relatively shallow part of the Futagawa fault until it terminated near Mt. Aso. Based on the four-point-source solution, we conducted a finite-fault inversion and obtained a kinematic rupture model of the mainshock.
We then performed the Coulomb Stress analyses on the two foreshocks and the mainshock. The results support that the stress alternation after the foreshocks may have triggered the failure on the fault plane of the Mw7.1 earthquake. Therefore, the 2016 Kumamoto earthquake sequence is dominated by a series of large triggering events whose initiation is associated with the geometric barrier in the intersection of the Futagawa and Hinagu faults.
Evaluation of the Effects of Hidden Node Problems in IEEE 802.15.7 Uplink Performance
Ley-Bosch, Carlos; Alonso-González, Itziar; Sánchez-Rodríguez, David; Ramírez-Casañas, Carlos
2016-01-01
In the last few years, the use of LEDs in illumination systems has increased with the emergence of Visible Light Communication (VLC) technologies, in which data communication is performed by transmitting through the visible band of the electromagnetic spectrum. In 2011, the Institute of Electrical and Electronics Engineers (IEEE) published the IEEE 802.15.7 standard for Wireless Personal Area Networks based on VLC. Due to limitations in the coverage of the transmitted signal, wireless networks can suffer from the hidden node problem, which arises when there are nodes in the network whose transmissions are not detected by other nodes. This problem can cause an important degradation of communications made by means of the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) access control method, which is used in IEEE 802.15.7. This research work evaluates the effects of the hidden node problem on the performance of the IEEE 802.15.7 standard. We implement a simulator and analyze VLC performance in terms of parameters such as end-to-end goodput and message loss rate. As part of this research work, a solution to the hidden node problem is proposed, based on the use of the idle patterns defined in the standard. Idle patterns are sent by the network coordinator node to communicate to the other nodes that there is an ongoing transmission. The validity of the proposed solution is demonstrated with simulation results.
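A minimal slotted-time simulation conveys both the hidden-node effect and the idle-pattern fix. The frame model, transmission probabilities, and deferral behaviour below are invented simplifications, not the IEEE 802.15.7 CSMA/CA state machine: two nodes cannot carrier-sense each other, so without coordinator feedback they freely start frames that overlap an ongoing reception.

```python
import random

# Toy model: two hidden nodes start frames independently each slot. Without
# idle patterns, a start during an ongoing frame corrupts it; with idle
# patterns, the coordinator's broadcast makes both nodes defer instead.

def simulate(slots: int, p: float, frame_len: int, idle_patterns: bool,
             seed: int = 1) -> float:
    """Return the fraction of frames lost to hidden-node collisions."""
    rng = random.Random(seed)
    ok = bad = 0
    remaining = 0          # slots left in the frame currently being received
    corrupted = False
    for _ in range(slots):
        starts = [rng.random() < p, rng.random() < p]  # the two hidden nodes
        if remaining > 0:
            if idle_patterns:
                # Both nodes hear the coordinator's idle pattern and defer.
                starts = [False, False]
            if any(starts):
                corrupted = True       # overlap destroys the ongoing frame
            remaining -= 1
            if remaining == 0:
                bad += corrupted
                ok += not corrupted
                corrupted = False
        elif starts[0] and starts[1]:
            bad += 1                   # simultaneous starts always collide
        elif starts[0] or starts[1]:
            remaining = frame_len - 1  # one node begins a new frame
    return bad / max(1, ok + bad)

loss_plain = simulate(100_000, 0.2, 4, idle_patterns=False)
loss_idle = simulate(100_000, 0.2, 4, idle_patterns=True)
print(loss_idle < loss_plain)  # idle patterns sharply cut hidden-node losses
```

With idle patterns enabled, frames can only be lost to simultaneous starts in the same slot; the much larger vulnerable window spanning the rest of the frame is closed.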
NASA Technical Reports Server (NTRS)
Tai, Ann T.; Chau, Savio N.; Alkalai, Leon
2000-01-01
Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to the long-term survivable systems for deep-space missions, the major challenge for us is, under stringent power and mass constraints, to achieve ultra-high reliability of the system comprising COTS products and standards that are not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, though these standard features may not be originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a "stack-tree topology" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.
Rutter, Ernest; Hackston, Abigail
2017-09-28
Fluid injection into rocks is increasingly used for energy extraction and for fluid wastes disposal, and can trigger/induce small- to medium-scale seismicity. Fluctuations in pore fluid pressure may also be associated with natural seismicity. The energy release in anthropogenically induced seismicity is sensitive to amount and pressure of fluid injected, through the way that seismic moment release is related to slipped area, and is strongly affected by the hydraulic conductance of the faulted rock mass. Bearing in mind the scaling issues that apply, fluid injection-driven fault motion can be studied on laboratory-sized samples. Here, we investigate both stable and unstable induced fault slip on pre-cut planar surfaces in Darley Dale and Pennant sandstones, with or without granular gouge. They display contrasting permeabilities, differing by a factor of 10^5, but mineralogies are broadly comparable. In permeable Darley Dale sandstone, fluid can access the fault plane through the rock matrix and the effective stress law is followed closely. Pore pressure change shifts the whole Mohr circle laterally. In tight Pennant sandstone, fluid only injects into the fault plane itself; stress state in the rock matrix is unaffected. Sudden access by overpressured fluid to the fault plane via hydrofracture causes seismogenic fault slips. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'. © 2017 The Authors.
The Trans-Rocky Mountain Fault System - A Fundamental Precambrian Strike-Slip System
Sims, P.K.
2009-01-01
Recognition of a major Precambrian continental-scale, two-stage conjugate strike-slip fault system - here designated as the Trans-Rocky Mountain fault system - provides new insights into the architecture of the North American continent. The fault system consists chiefly of steep linear to curvilinear, en echelon, braided and branching ductile-brittle shears and faults, and local coeval en echelon folds of northwest strike, that cut indiscriminately across both Proterozoic and Archean cratonic elements. The fault system formed during late stages of two distinct tectonic episodes: Neoarchean and Paleoproterozoic orogenies at about 2.70 and 1.70 billion years (Ga). In the Archean Superior province, the fault system formed (about 2.70-2.65 Ga) during a late stage of the main deformation that involved oblique shortening (dextral transpression) across the region and progressed from crystal-plastic to ductile-brittle deformation. In Paleoproterozoic terranes, the fault system formed about 1.70 Ga, shortly following amalgamation of Paleoproterozoic and Archean terranes and the main Paleoproterozoic plastic-fabric-producing events in the protocontinent, chiefly during sinistral transpression. The postulated driving force for the fault system is subcontinental mantle deformation, the bottom-driven deformation of previous investigators. This model, based on seismic anisotropy, invokes mechanical coupling and subsequent shear between the lithosphere and the asthenosphere such that a major driving force for plate motion is deep-mantle flow.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
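The data-flow-graph structure of NGB can be sketched with a toy DAG in which each node runs a task and forwards its result as initialization data to its successors. The node names below echo NPB benchmark codes, but the graph shape, payloads, and `run_task` body are invented for illustration and do not reproduce any actual NGB problem class.

```python
from graphlib import TopologicalSorter

# Toy data-flow graph: node -> set of predecessors it receives data from.
graph = {
    "BT": set(),
    "SP": {"BT"},
    "LU": {"BT"},
    "FT": {"SP", "LU"},
}

def run_task(name: str, inputs: list[int]) -> int:
    # Stand-in for an NPB task instance: combine inputs and do some "work".
    return sum(inputs, len(name))

results: dict[str, int] = {}
for node in TopologicalSorter(graph).static_order():
    # A node runs only after all predecessors have sent it their data.
    inputs = [results[p] for p in sorted(graph[node])]
    results[node] = run_task(node, inputs)

print(results)  # FT runs last, after receiving data from both SP and LU
```

An implementor is free to distribute these nodes across grid resources in any language or environment; only the verification values and turnaround time matter, exactly as the specification above states.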
NASA Astrophysics Data System (ADS)
Yu, Jing-xing; Zheng, Wen-jun; Zhang, Pei-zhen; Lei, Qi-yun; Wang, Xu-long; Wang, Wei-tao; Li, Xin-nan; Zhang, Ning
2017-11-01
The Hexi Corridor and the southern Gobi Alashan are composed of a discontinuous set of active faults with various strikes and slip motions, located to the north of the northern Tibetan Plateau. Despite growing understanding of the geometry and kinematics of these active faults, the late Quaternary deformation pattern in the Hexi Corridor and the southern Gobi Alashan remains controversial. The active E-W trending Taohuala Shan-Ayouqi fault zone is located in the southern Gobi Alashan. Study of the geometry and nature of slip along this fault zone is crucial for better understanding the regional deformation pattern. Field investigations combined with high-resolution imagery show that the Taohuala Shan fault and the E-W trending faults within the Ayouqi fault zone (F2 and F5) are left-lateral strike-slip faults, whereas the NW- or WNW-trending faults within the Ayouqi fault zone (F1 and F3) are reverse faults. We collected Optically Stimulated Luminescence (OSL) and cosmogenic exposure dating samples from offset alluvial fan surfaces, and estimated a vertical slip rate of 0.1-0.3 mm/yr and a strike-slip rate of 0.14-0.93 mm/yr for the Taohuala Shan fault. Strata revealed in a trench excavated across the major fault (F5) in the Ayouqi fault zone, together with OSL dating results, indicate that the most recent earthquake occurred between ca. 11.05 ± 0.52 ka and ca. 4.06 ± 0.29 ka. The geometry and kinematics of the Taohuala Shan-Ayouqi fault zone enable us to build a deformation pattern for the entire Hexi Corridor and the southern Gobi Alashan, which suggests that this region accommodates northeastward oblique extrusion of the northern Tibetan Plateau. The left-lateral strike-slip faults in the region are driven by oblique compression and are not associated with the northeastward extension of the Altyn Tagh fault.
Joint estimation of preferential attachment and node fitness in growing complex networks
NASA Astrophysics Data System (ADS)
Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi
2016-09-01
Complex network growth across diverse fields of science is hypothesized to be driven mainly by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or the fitness function, or both. We introduce a Bayesian statistical method called PAFit that estimates preferential attachment and node fitness without imposing such functional constraints; it works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit.
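The idea of measuring an attachment function from growth data can be sketched with a much simpler moment-based estimator: count how often new edges attach to degree-k nodes, normalized by how many degree-k "chances" there were. This is an illustration only, not the Bayesian PAFit method, and it ignores node fitness entirely.

```python
import random
from collections import Counter

# Grow a network by linear preferential attachment (the conventional
# kernel), then recover the attachment rate A_k empirically.

rng = random.Random(0)
degrees = [1, 1]            # start from a single edge between two nodes
attach_counts = Counter()   # k -> attachments received by degree-k nodes
exposure = Counter()        # k -> number of (node, step) chances at degree k

for _ in range(2000):
    for k in degrees:
        exposure[k] += 1
    # Pick an existing node with probability proportional to its degree.
    r = rng.uniform(0, sum(degrees))
    acc, target = 0.0, 0
    for i, k in enumerate(degrees):
        acc += k
        if r <= acc:
            target = i
            break
    attach_counts[degrees[target]] += 1
    degrees[target] += 1
    degrees.append(1)       # each newcomer arrives with one edge

# Estimated attachment rate A_k: should grow roughly linearly in k.
A = {k: attach_counts[k] / exposure[k] for k in exposure if attach_counts[k]}
print(A[2] > A[1])  # higher-degree nodes attract new edges at a higher rate
```

PAFit generalizes this by estimating A_k jointly with per-node fitness values under regularization, rather than assuming the degree alone drives attachment.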
NASA Astrophysics Data System (ADS)
Ritzberger, D.; Jakubek, S.
2017-09-01
In this work, a data-driven identification method, based on polynomial nonlinear autoregressive models with exogenous inputs (NARX) and the Volterra series, is proposed to describe the dynamic and nonlinear voltage and current characteristics of polymer electrolyte membrane fuel cells (PEMFCs). The structure selection and parameter estimation of the NARX model are performed on broad-band voltage/current data. By transforming the time-domain NARX model into a Volterra series representation using the harmonic probing algorithm, a frequency-domain description of the linear and nonlinear dynamics is obtained. With the Volterra kernels corresponding to different operating conditions, information from existing diagnostic tools in the frequency domain, such as electrochemical impedance spectroscopy (EIS) and total harmonic distortion analysis (THDA), is effectively combined. Additionally, the time-domain NARX model can be utilized for fault detection by evaluating the difference between measured and simulated output. To increase the fault detectability, an optimization problem is introduced which maximizes this output residual to obtain proper excitation frequencies. As a possible extension, it is shown that by optimizing the periodic signal shape itself, the fault detectability is further increased.
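As a rough illustration of the least-squares side of such an identification (the paper's structure-selection step and the Volterra transform are omitted, and the toy second-order system below is an assumption), a polynomial NARX model with linear and quadratic regressors can be fitted directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.uniform(-1, 1, N)            # broad-band excitation input
y = np.zeros(N)
for k in range(2, N):                # toy nonlinear system to be identified
    y[k] = 0.5 * y[k-1] - 0.2 * y[k-2] + 0.8 * u[k-1] + 0.3 * u[k-1]**2

# Regressor matrix with linear and quadratic (Volterra-like) terms,
# one row per predicted sample y[2], ..., y[N-1].
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[1:-1]**2])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
residual = y[2:] - X @ theta         # output residual used for fault detection
```

Because the toy system lies exactly in the model class and the data are noise-free, the estimated coefficients reproduce the true ones; in the paper's fault-detection use, a grown residual between measured and simulated output flags the fault.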
Network Connectivity for Permanent, Transient, Independent, and Correlated Faults
NASA Technical Reports Server (NTRS)
White, Allan L.; Sicher, Courtney; Henry, Courtney
2012-01-01
This paper develops a method for the quantitative analysis of network connectivity in the presence of both permanent and transient faults. Even though transient noise is considered a common occurrence in networks, a survey of the literature reveals an emphasis on permanent faults. Transient faults introduce a time element into the analysis of network reliability. With permanent faults it is sufficient to consider the faults that have accumulated by the end of the operating period. With transient faults the arrival and recovery times must be included. The number and location of faults in the system are dynamic variables. Transient faults also introduce system recovery into the analysis. The goal is the quantitative assessment of network connectivity in the presence of both permanent and transient faults. The approach is to construct a global model that includes all classes of faults: permanent, transient, independent, and correlated. A theorem is derived about this model that gives distributions for (1) the number of fault occurrences, (2) the type of fault occurrence, (3) the time of the fault occurrences, and (4) the location of the fault occurrence. These results are applied to compare and contrast the connectivity of different network architectures in the presence of permanent, transient, independent, and correlated faults. The examples below use a Monte Carlo simulation, but the theorem mentioned above could be used to guide fault injections in a laboratory.
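A hedged sketch of such a Monte Carlo connectivity study follows. The ring topology, the per-step fault probability, and the fixed repair time are all assumptions chosen for illustration; the point is that transient faults force connectivity to be checked over time rather than only at the end of the operating period.

```python
import random
from collections import deque

def connected(n, up_links):
    """BFS check that all n nodes are reachable over the currently-up links."""
    adj = [[] for _ in range(n)]
    for a, b in up_links:
        adj[a].append(b)
        adj[b].append(a)
    seen, q = {0}, deque([0])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return len(seen) == n

def sim_ring(n=8, steps=2000, p_fault=0.01, repair=5, seed=1):
    """Fraction of time steps during which a ring network stays connected."""
    rng = random.Random(seed)
    links = [(i, (i + 1) % n) for i in range(n)]
    down_until = {l: -1 for l in links}   # transient fault state per link
    ok = 0
    for t in range(steps):
        for l in links:
            if t > down_until[l] and rng.random() < p_fault:
                down_until[l] = t + repair    # fault arrival plus recovery time
        up = [l for l in links if t > down_until[l]]
        ok += connected(n, up)
    return ok / steps

availability = sim_ring()
```

A ring survives any single transient link fault but disconnects whenever two links are down simultaneously, so the simulated availability sits strictly between the no-fault case and total loss; richer topologies can be compared by swapping the link list.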
Model-Based Diagnosis and Prognosis of a Water Recycling System
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Hafiychuk, Vasyl; Goebel, Kai Frank
2013-01-01
A water recycling system (WRS) deployed at NASA Ames Research Center's Sustainability Base (an energy efficient office building that integrates some novel technologies developed for space applications) will serve as a testbed for long duration testing of next generation spacecraft water recycling systems for future human spaceflight missions. This system cleans graywater (waste water collected from sinks and showers) and recycles it into clean water. Like all engineered systems, the WRS is prone to standard degradation due to regular use, as well as other faults. Diagnostic and prognostic applications will be deployed on the WRS to ensure its safe, efficient, and correct operation. The diagnostic and prognostic results can be used to enable condition-based maintenance to avoid unplanned outages, and perhaps extend the useful life of the WRS. Diagnosis involves detecting when a fault occurs, isolating the root cause of the fault, and identifying the extent of damage. Prognosis involves predicting when the system will reach its end of life, irrespective of whether an abnormal condition is present or not. In this paper, first, we develop a physics model of both nominal and faulty system behavior of the WRS. Then, we apply an integrated model-based diagnosis and prognosis framework to the simulation model of the WRS for several different fault scenarios to detect, isolate, and identify faults and predict the end of life in each fault scenario, and we present the experimental results.
The Northern end of the Dead Sea Basin: Geometry from reflection seismic evidence
Al-Zoubi, A. S.; Heinrichs, T.; Qabbani, I.; ten Brink, Uri S.
2007-01-01
Recently released reflection seismic lines from the eastern side of the Jordan River north of the Dead Sea were interpreted using borehole data and incorporated with the previously published seismic lines of the eastern side of the Jordan River. For the first time, the lines from the eastern side of the Jordan River were combined with the published reflection seismic lines from the western side of the Jordan River. In the complete cross sections, the inner deep basin is strongly asymmetric toward the Jericho Fault, supporting the interpretation of this segment of the fault as the long-lived and presently active part of the Dead Sea Transform. There is no indication for a shift of the depocenter toward a hypothetical eastern major fault with time, as recently suggested. Rather, the north-eastern margin of the deep basin takes the form of a large flexure, modestly faulted. In the N-S section along its depocenter, the floor of the basin at its northern end appears to deepen continuously by roughly 0.5 km over 10 km distance, without evidence of a transverse fault. The asymmetric and gently-dipping shape of the basin can be explained by models in which the basin is located outside the area of overlap between en-echelon strike-slip faults. © 2007 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Clausen, O. R.; Egholm, D. L.; Wesenberg, R.
2012-04-01
Salt deformation has been the topic of numerous studies through the 20th century and up until present because of the close relation between commercial hydrocarbons and salt structure provinces of the world (Hudec & Jackson, 2007). The fault distribution in sediments above salt structures influences, among other things, the productivity, due to the segmentation of the reservoir (Stewart 2006). 3D seismic data above salt structures can map such fault patterns in great detail, and studies have shown that a variety of fault patterns exists. Yet, most patterns fall between two end members: concentric and radiating fault patterns. Here we use a modified version of the numerical spring-slider model introduced by Malthe-Sørenssen et al. (1998a) for simulating the emergence of small scale faults and fractures above a rising salt structure. The three-dimensional spring-slider model enables us to control the rheology of the deforming overburden, the mechanical coupling between the overburden and the underlying salt, as well as the kinematics of the moving salt structure. In this presentation, we demonstrate how the horizontal component of the salt motion influences the fracture patterns within the overburden. The modeling shows that purely vertical movement of the salt introduces a mesh of concentric normal faults in the overburden, and that the frequency of radiating faults increases with the amount of lateral movement across the salt-overburden interface. The two end-member fault patterns (concentric vs. radiating) can thus be linked to two different styles of salt movement: i) the vertical rising of a salt indenter and ii) the inflation of a 'salt-balloon' beneath the deformed strata. The results are in accordance with published analogue and theoretical models, as well as natural systems, and the model may, when used appropriately, provide new insight into how the internal dynamics of the salt in a structure controls the generation of fault patterns above the structure.
The model is thus an important contribution to the understanding of small-scale faults, which may be unresolved by seismic data, when optimizing hydrocarbon production from reservoirs located above salt structures.
LoWMob: Intra-PAN Mobility Support Schemes for 6LoWPAN
Bag, Gargi; Raza, Muhammad Taqi; Kim, Ki-Hyung; Yoo, Seung-Wha
2009-01-01
Mobility in 6LoWPAN (IPv6 over Low Power Personal Area Networks) is being utilized in realizing many applications where sensor nodes, while moving, sense and transmit the gathered data to a monitoring server. By employing IEEE802.15.4 as a baseline for the link layer technology, 6LoWPAN implies low data rate and low power consumption with periodic sleep and wakeups for sensor nodes, without requiring them to incorporate complex hardware. Also enabling sensor nodes with IPv6 ensures that the sensor data can be accessed anytime and anywhere from the world. Several existing mobility-related schemes like HMIPv6, MIPv6, HAWAII, and Cellular IP require active participation of mobile nodes in the mobility signaling, thus leading to the mobility-related changes in the protocol stack of mobile nodes. In this paper, we present LoWMob, which is a network-based mobility scheme for mobile 6LoWPAN nodes in which the mobility of 6LoWPAN nodes is handled at the network-side. LoWMob ensures multi-hop communication between gateways and mobile nodes with the help of the static nodes within a 6LoWPAN. In order to reduce the signaling overhead of static nodes for supporting mobile nodes, LoWMob proposes a mobility support packet format at the adaptation layer of 6LoWPAN. Also we present a distributed version of LoWMob, named as DLoWMob (or Distributed LoWMob), which employs Mobility Support Points (MSPs) to distribute the traffic concentration at the gateways and to optimize the multi-hop routing path between source and destination nodes in a 6LoWPAN. Moreover, we have also discussed the security considerations for our proposed mobility schemes. The performance of our proposed schemes is evaluated in terms of mobility signaling costs, end-to-end delay, and packet success ratio. PMID:22346730
Modelling crash propensity of carshare members.
Dixit, Vinayak; Rashidi, Taha Hossein
2014-09-01
Carshare systems are considered a promising solution for sustainable development of cities. To promote carsharing it is imperative to make them cost effective, which includes reduction in costs associated with crashes and insurance. To achieve this goal, it is important to characterize carshare users involved in crashes and understand factors that can explain at-fault and not-at-fault drivers. This study utilizes data from GoGet carshare users in Sydney, Australia. Based on this study it was found that carshare users who utilize cars less frequently, own one or more cars, have had fewer accidents in the past ten years, have chosen a higher insurance excess, and have had a license for a longer period of time are less likely to be involved in a crash. However, if a crash occurs, not needing a car on the weekend, driving less than 1000 km in the last year, rarely using a car, and having an Australian license increase the likelihood of being at fault. Since the dataset contained information about all members as well as not-at-fault drivers, it provided a unique opportunity to explore some aspects of quasi-induced exposure. The results indicate systematic differences in the distributions between the not-at-fault drivers and the carshare members based on the kilometres driven last year, main mode of travel, car ownership status, and how often the car is needed. Finally, based on this study it is recommended that creating an incentive structure based on training and experience (based on kilometres driven), possibly tagged to the insurance excess, could improve safety and reduce costs associated with crashes for carshare systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
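The quasi-induced-exposure comparison mentioned above can be sketched numerically. The assumption of the method is that not-at-fault drivers in two-party crashes approximate a random sample of drivers on the road, so a systematic difference between their characteristics and the full membership indicates exposure bias. The counts below are invented for illustration, not taken from the GoGet data.

```python
# Hypothetical counts: not-at-fault drivers vs. the whole membership.
not_at_fault = {"owns_car": 60, "no_car": 40}
members = {"owns_car": 75, "no_car": 25}

def share(d):
    """Convert raw counts into proportions."""
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def representation_ratio(sample, population):
    """Ratio > 1 means the group is over-represented among not-at-fault
    drivers relative to the membership, suggesting higher exposure."""
    s, p = share(sample), share(population)
    return {k: s[k] / p[k] for k in sample}

ratios = representation_ratio(not_at_fault, members)
```

With these invented counts, car-owning members are under-represented among not-at-fault drivers (ratio 0.8) and car-free members over-represented (ratio 1.6), the kind of systematic difference the paper reports for kilometres driven, travel mode, and car ownership.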
Affinity-aware checkpoint restart
Saini, Ajay; Rezaei, Arash; Mueller, Frank; ...
2014-12-08
Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness of the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA node specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, compared with execution times up to nearly four times longer without affinity-aware restarts on 16 cores.
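A minimal sketch of the affinity metadata an affinity-aware C/R mechanism would persist alongside each checkpoint follows. BLCR's actual on-disk format and restart path are different; the task IDs, file name, and JSON encoding here are all hypothetical.

```python
import json
import os
import tempfile

def save_affinity(tasks, path):
    """Persist {task_id: (core, numa_node)} captured at checkpoint time."""
    with open(path, "w") as f:
        json.dump({str(t): list(cn) for t, cn in tasks.items()}, f)

def restore_affinity(path):
    """Reload the saved task-to-core/NUMA map at restart time."""
    with open(path) as f:
        saved = {int(t): tuple(cn) for t, cn in json.load(f).items()}
    # A real restart wrapper would now re-pin each restarted task, e.g. on
    # Linux via os.sched_setaffinity(pid, {core}) plus a NUMA memory policy.
    return saved

mapping = {4242: (3, 0), 4243: (7, 1)}   # hypothetical task -> (core, NUMA node)
path = os.path.join(tempfile.gettempdir(), "ckpt_affinity.json")
save_affinity(mapping, path)
restored = restore_affinity(path)
```

Keeping this map with the checkpoint is what lets the restarted tasks land on their original cores and NUMA nodes instead of wherever the scheduler happens to place them.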
Linking Europa’s Plume Activity to Tides, Tectonics, and Liquid Water
NASA Astrophysics Data System (ADS)
Rhoden, Alyssa R.; Hurford, Terry; Roth, Lorenz; Retherford, Kurt
2014-11-01
Much of the geologic activity preserved on Europa’s icy surface has been attributed to tidal deformation, mainly due to Europa’s eccentric orbit. Although the surface is geologically young, evidence of ongoing tidally-driven processes has been lacking. However, a recent observation of water vapor near Europa’s south pole suggests that it may be geologically active. Non-detections in previous and follow-up observations indicate a temporal variation in plume visibility and suggest a relationship to Europa’s tidal cycle. Similarly, the Cassini spacecraft has observed plumes emanating from the south pole of Saturn’s moon, Enceladus, and variability in the intensity of eruptions has been linked to its tidal cycle. The inference that a similar mechanism controls plumes at both Europa and Enceladus motivates further analysis of Europa’s plume behavior and the relationship between plumes, tides, and liquid water on these two satellites. We determine the locations and orientations of hypothetical tidally-driven fractures that best match the temporal variability of the plumes observed at Europa. Specifically, we identify model faults that are in tension at the time in Europa’s orbit when a plume was detected and in compression at times when the plume was not detected. We find that tidal stress driven solely by eccentricity is incompatible with the observations unless additional mechanisms are controlling the eruption timing or restricting the longevity of the plumes. In contrast, the addition of obliquity tides, and corresponding precession of the spin pole, can generate a number of model faults that are consistent with the pattern of plume detections. The locations and orientations of the model faults are robust across a broad range of precession rates and spin pole directions. Analysis of the stress variations across model faults suggests that the plumes would be best observed earlier in Europa’s orbit.
Our results indicate that Europa’s plumes, if confirmed, differ in many respects from the Enceladean plumes and that either active fractures or volatile sources are rare.
NASA Astrophysics Data System (ADS)
Bialas, Jörg; Dannowski, Anke; Reston, Timothy J.
2015-12-01
A wide-angle seismic section across the Mid-Atlantic Ridge just south of the Ascension transform system reveals laterally varying crustal thickness, and to the east a strongly distorted Moho that appears to result from slip along a large-offset normal fault, termed an oceanic detachment fault. Gravity modelling supports the inferred crustal structure. We investigate the interplay between magmatism, detachment faulting and the changing asymmetry of crustal accretion, and consider several possible scenarios. The one that appears most likely is remarkably simple: an episode of detachment faulting, which accommodates all plate divergence and results in the westward migration of the ridge axis, is interspersed with dominantly magmatic and moderately asymmetric (most on the western side) spreading which moves the spreading axis back towards the east. Following the runaway weakening of a normal fault and its development into an oceanic detachment fault, magma intrudes the footwall of the fault, producing a layer of gabbro (subsequently partially exhumed).
2004-02-03
KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra aids in Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.
2004-02-03
KENNEDY SPACE CENTER, FLA. - In the Space Station Processing Facility, workers check over the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.
Numerical Modeling of the Deformation Behavior of Fault Bounded Lens Shaped Bodies in 2D
NASA Astrophysics Data System (ADS)
van der Zee, W.; Urai, J. L.
2001-12-01
Fault zones cause dramatic discontinuous changes in mechanical properties. The early stages of evolution of fault zones are important for their long-term behavior. We consider faults which develop from deformation bands or pre-existing joints, which are the initially unconnected discontinuities. With further deformation, these coalesce into a connected network and develop into a 'mature' fault gouge. When segments are not coplanar, soft linkage or bends in the fault plane (releasing and restraining bends, fault-bounded lens-shaped bodies, etc.) necessarily occur. Further movement causes additional deformation, and the fault zone has a strongly variable thickness. Here, we present the results of detailed fieldwork combined with numerical modeling on the deformation of fault-bounded lens-shaped bodies in the fault zone. Detailed study of a number of lenses in the field shows that the lens is invariably more deformed than the surrounding material. This observation can be explained in several ways. In one end member, most of the deformation in the future lens occurs before full coalescence of the slip planes and the formation of the lens. In the other end member, the slip planes coalesce before plastic deformation of the lens occurs. The internal deformation of the lens occurs after the lens is formed, due to the redistributed stresses in the structure. If this is the case, then lens-shaped bodies can always be expected to deform preferentially. Finite element models were used to investigate the shear behavior of a planar fault with a lens-shaped body or a sinusoidal asperity. In a sensitivity analysis, we consider different lens shapes and fault friction coefficients.
Results show that: 1) during slip, the asperity shears off to form a lens-shaped body; 2) the lens interior deforms more than the surroundings, due to the redistribution of stresses; 3) important parameters in this system are the length-thickness ratio of the lens and the fault friction coefficient; and 4) lens structures can evolve in different ways, but in the final stage the result is a lens with a deformed interior. In the later stages, after further displacement, these zones of preferential deformation evolve into sections containing thick gouge, and the initial lens width controls long-term fault gouge thickness.
NASA Astrophysics Data System (ADS)
Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.
2016-12-01
The seismic hazard posed by offshore faults for coastal communities in Southern California is poorly understood and may be considerable, especially when these communities are located near long faults that have the ability to produce large earthquakes. The San Diego Trough fault (SDTF) and San Pedro Basin fault (SPBF) systems are active northwest striking, right-lateral faults in the Inner California Borderland that extend offshore between San Diego and Los Angeles. Recent work shows that the SDTF slip rate accounts for 25% of the 6-8 mm/yr of deformation accommodated by the offshore fault network, and seismic reflection data suggest that these two fault zones may be one continuous structure. Here, we use recently acquired CHIRP, high-resolution multichannel seismic (MCS) reflection, and multibeam bathymetric data in combination with USGS and industry MCS profiles to characterize recent deformation on the SDTF and SPBF zones and to evaluate the potential for an end-to-end rupture that spans both fault systems. The SDTF offsets young sediments at the seafloor for 130 km between the US/Mexico border and Avalon Knoll. The northern SPBF has robust geomorphic expression and offsets the seafloor in the Santa Monica Basin. The southern SPBF lies within a 25-km gap between high-resolution MCS surveys. Although there does appear to be a through-going fault at depth in industry MCS profiles, the low vertical resolution of these data inhibits our ability to confirm recent slip on the southern SPBF. Empirical scaling relationships indicate that a 200-km-long rupture of the SDTF and its southern extension, the Bahia Soledad fault, could produce a M7.7 earthquake. If the SDTF and the SPBF are linked, the length of the combined fault increases to >270 km. This may allow ruptures initiating on the SDTF to propagate within 25 km of the Los Angeles Basin. At present, the paleoseismic histories of the faults are unknown. 
We present new observations from CHIRP and coring surveys at three locations where thin lenses of sediment mantle the SDTF, providing the ideal sedimentary record to constrain the timing of the most recent event. Characterizing the paleoseismic histories is a key step toward defining the extent and variability of past ruptures, which in turn, will improve maximum magnitude estimates for the SDTF and SPBF systems.
CONTROL AND FAULT DETECTOR CIRCUIT
Winningstad, C.N.
1958-04-01
A power control and fault detector circuit for a radio-frequency system is described. The circuit controls the power output of a radio-frequency power supply: it automatically starts the flow of energizing power to the radio-frequency power supply and gradually increases the power to a predetermined level, which is below the point where destruction would occur upon the occurrence of a fault. If the radio-frequency power supply output fails to increase during such period, the control does not further increase the power. On the other hand, if the output of the radio-frequency power supply properly increases, then the control continues to increase the power to a maximum value. After the maximum value of radio-frequency output has been achieved, the control is responsive to a "fault," such as a short circuit in the radio-frequency system being driven, so that the flow of power is interrupted for an interval before the cycle is repeated.
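The soft-start interlock described above can be sketched as a simple loop: drive power ramps toward the maximum only while the measured output tracks the drive level, and the ramp halts if the output fails to follow. The step size, tracking margin, and the two load models are invented for illustration, not taken from the patent.

```python
def soft_start(read_output, p_max=100.0, step=5.0, track_margin=0.5):
    """Ramp drive power toward p_max; halt the ramp if the measured
    output falls below a fixed fraction of the drive level."""
    p = 0.0
    while p < p_max:
        p = min(p + step, p_max)
        if read_output(p) < track_margin * p:
            return ("held", p)     # output failed to follow: stop increasing
    return ("at_max", p)           # full power reached; fault trip armed here

# A healthy load follows the drive; a shorted load does not.
healthy = soft_start(lambda p: 0.9 * p)
shorted = soft_start(lambda p: 0.0)
```

In the patent's cycle, reaching full power arms the fault detector, and a subsequent short circuit interrupts the power for an interval before this ramp is repeated; the sketch shows only the ramp-and-hold portion.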
NASA Astrophysics Data System (ADS)
Chan, Christine S.; Ostertag, Michael H.; Akyürek, Alper Sinan; Šimunić Rosing, Tajana
2017-05-01
The Internet of Things envisions a web-connected infrastructure of billions of sensors and actuation devices. However, the current state-of-the-art presents another reality: monolithic end-to-end applications tightly coupled to a limited set of sensors and actuators. Growing such applications with new devices or behaviors, or extending the existing infrastructure with new applications, involves redesign and redeployment. We instead propose a modular approach to these applications, breaking them into an equivalent set of functional units (context engines) whose input/output transformations are driven by general-purpose machine learning, demonstrating an improvement in compute redundancy and computational complexity with minimal impact on accuracy. In conjunction with formal data specifications, or ontologies, we can replace application-specific implementations with a composition of context engines that use common statistical learning to generate output, thus improving context reuse. We implement interconnected context-aware applications using our approach, extracting user context from sensors in both healthcare and grid applications. We compare our infrastructure to single-stage monolithic implementations with single-point communications between sensor nodes and the cloud servers, demonstrating a reduction in combined system energy by 22-45%, and multiplying the battery lifetime of power-constrained devices by at least 22x, with easy deployment across different architectures and devices.
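The decomposition into context engines can be caricatured as function composition: each engine is an input-to-output transformation, and an application is a chain of them. The toy smoothing and classification engines below are stand-ins for the paper's learned transformations; the names and thresholds are invented.

```python
def engine(fn):
    """A context engine here is just an input->output transformation;
    in the paper each unit wraps a general-purpose statistical learner."""
    return fn

smooth = engine(lambda xs: sum(xs) / len(xs))            # sensor smoothing stage
classify = engine(lambda x: "high" if x > 30 else "ok")  # context classification stage

def compose(*engines):
    """Chain context engines into an application pipeline."""
    def app(x):
        for e in engines:
            x = e(x)
        return x
    return app

hr_context = compose(smooth, classify)
result = hr_context([28, 31, 35])   # mean of the window exceeds the threshold
```

The benefit claimed in the paper comes from reuse: the same smoothing engine can feed several downstream applications instead of each monolithic application re-implementing the stage.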
A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment
Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda
2017-01-01
In the fog computing environment, the encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server. Since the fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource constrained end users. Compared to existing schemes only supporting either index encryption with search ability or data encryption with fine-grained access control ability, the proposed hybrid scheme supports both abilities simultaneously, and index ciphertext and data ciphertext are constructed based on a single ciphertext-policy attribute based encryption (CP-ABE) primitive and share the same key pair, thus the data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, the resource constrained end devices are allowed to rapidly assemble ciphertexts online and securely outsource most of decryption task to fog nodes, and mediated encryption mechanism is also adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies in many fog nodes. The security and the performance analysis show that our scheme is suitable for a fog computing environment. PMID:28629131
Fluid-driven normal faulting earthquake sequences in the Taiwan orogen
NASA Astrophysics Data System (ADS)
Wang, Ling-hua; Rau, Ruey-Juin; Lee, En-Jui
2017-04-01
Seismicity in the Central Range of Taiwan shows normal faulting mechanisms with T-axes directed NE, subparallel to the strike of the mountain belt. We analyze earthquake sequences that occurred during 2012-2015 in the Nanshan area of northern Taiwan, which exhibit swarm behavior and migration. We select events larger than magnitude 2.0 from the Central Weather Bureau catalog and relocate them with the double-difference program hypoDD, using waveform cross-correlation, in the Nanshan area. We obtained a final count of 1406 (95%) relocated earthquakes. Moreover, we compute focal mechanisms with the USGS program HASH from P-wave first motions and S/P amplitude-ratio picks, determining 114 fault plane solutions for events of M 3.0-5.87. To test for fluid diffusion, we model seismicity using the relation of Shapiro et al. (1997), fitting the hydraulic diffusivity D during the migration period. According to the relocation results, seismicity in the Taiwan orogenic belt mostly trends N25E, parallel to the mountain belt and in the same direction as the tension axis. In addition, another seismic fracture delineated by seismicity is rotated 35 degrees counterclockwise, toward the NW. Nearly all focal mechanisms are of normal-fault type. In the Nanshan area, events show a N10W distribution at focal depths of 5-12 km and delineate a fault plane dipping about 45-60 degrees to the SW. Three months before the M 5.87 mainshock of March 2013, foreshocks occurred in the shallow part of the mainshock fault plane. For half a year following the mainshock, earthquakes migrated to the north and south, respectively, in a manner matching the diffusion model at a rate of 0.2-0.6 m2/s. This migration pattern and diffusion rate offer evidence of a fluid-driven process in the fault zone. We also find upward migration of earthquakes in the mainshock source region.
These phenomena are likely caused by the opening of a permeable conduit by the M 5.87 earthquake and the ascent of high-pressure fluid.
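The diffusion modeling mentioned above follows Shapiro et al. (1997), whose triggering front is r(t) = sqrt(4πDt). A minimal sketch, using hypothetical event distances and times (not the study's data), estimates D by least squares on r² = 4πDt:

```python
import numpy as np

def triggering_front(t, D):
    """Shapiro et al. (1997) triggering front r(t) = sqrt(4*pi*D*t)."""
    return np.sqrt(4.0 * np.pi * D * t)

# Hypothetical event distances (m) versus time since sequence onset (s):
t = np.array([1e5, 5e5, 1e6, 2e6])
r = np.array([700.0, 1500.0, 2200.0, 3100.0])

# Least-squares estimate of D from the linear relation r^2 = 4*pi*D*t:
D_hat = np.sum(r**2 * t) / (4.0 * np.pi * np.sum(t**2))  # ~0.4 m^2/s here
```

With these made-up numbers the estimate falls in the 0.2-0.6 m²/s range the abstract reports, which is the kind of fit used to argue for a fluid-driven process.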
Ring-fault activity at subsiding calderas studied from analogue experiments and numerical modeling
NASA Astrophysics Data System (ADS)
Liu, Y. K.; Ruch, J.; Vasyura-Bathke, H.; Jonsson, S.
2017-12-01
Several subsiding calderas, such as those in the Galápagos archipelago and the Axial Seamount in the Pacific Ocean, have shown a complex but similar ground deformation pattern, composed of a broad deflation signal affecting the entire volcanic edifice and a localized subsidence signal focused within the caldera. However, it is still debated how deep processes at subsiding calderas, including magmatic pressure changes, source locations and ring-faulting, relate to this observed surface deformation pattern. We combine analogue sandbox experiments with numerical modeling to study the processes involved from the initial subsidence to the later collapse of calderas. The sandbox apparatus is composed of a motor-driven subsiding half-piston connected to the bottom of a glass box. During the experiments, observations are made with five digital cameras photographing from various perspectives. We use Photoscan, a photogrammetry software package, and PIVLab, a time-resolved digital image correlation tool, to retrieve time series of digital elevation models and velocity fields from the acquired photographs. This setup allows us to track the processes acting both at depth and at the surface, and to assess their relative importance as the subsidence evolves into a collapse. We also use the Boundary Element Method to build a numerical model of the experimental setup, comprising a contracting sill-like source interacting with a ring-fault in an elastic half-space. We then compare the results from these two approaches with examples observed in nature. Our preliminary experimental and numerical results show that at the initial stage of magmatic withdrawal, when the ring-fault is not yet well formed, broad and smooth deflation dominates at the surface. As the withdrawal increases, a narrower subsidence bowl develops, accompanied by upward propagation of the ring-faulting.
This indicates that the broad deflation, affecting the entire volcano edifice, is primarily driven by the contraction of the magmatic source, whereas the ring-faulting tends to concentrate deformation within the caldera. This interaction between ring-faulting and pressure decrease in a magma reservoir therefore provides a possible explanation for the deformation pattern observed at several subsiding calderas.
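The broad deflation attributed to contraction of the magmatic source can be illustrated, to first order, with a Mogi point-source approximation (a deliberate simplification; the study itself uses a Boundary Element model of a sill interacting with a ring-fault, and the depth and volume change below are assumed values):

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source in an elastic
    half-space: uz = (1 - nu) * dV * depth / (pi * (r^2 + depth^2)^(3/2))."""
    return (1.0 - nu) * dV * depth / (np.pi * (r**2 + depth**2) ** 1.5)

r = np.linspace(0.0, 10e3, 201)          # radial distance from source axis (m)
uz = mogi_uz(r, depth=3e3, dV=-1e7)      # contracting source: dV < 0

# Broad, smooth deflation: maximum subsidence directly above the source,
# decaying gently outward across the whole edifice.
assert uz[0] == uz.min() and abs(uz[-1]) < abs(uz[0])
```

Such a smooth source-contraction signal contrasts with the sharp, localized subsidence that ring-faulting adds inside the caldera.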
Mann, G.M.; Meyer, C.E.
1993-01-01
Late Cenozoic fault geometry, structure, paleoseismicity, and patterns of recent seismicity at two seismic zones along the Olympic-Wallowa lineament (OWL) of western Idaho, northeast Oregon, and southeast Washington indicate limited right-oblique slip displacement along multiple northwest-striking faults that constitute the lineament. The southern end of the OWL originates in the Long Valley fault system and western Snake River Plain in western Idaho. The OWL in northeast Oregon consists of a wide zone of northwest-striking faults and is associated with several large, inferred pull-apart basins. The OWL then emerges from the Blue Mountain uplift as a much narrower zone of faults in the Columbia Plateau known as the Wallula fault zone (WFZ). Structural relationships in the WFZ strongly suggest that it is a right-slip extensional duplex. -from Authors
Astypalaea Linea: A Large-Scale Strike-Slip Fault on Europa
NASA Astrophysics Data System (ADS)
Tufts, B. Randall; Greenberg, Richard; Hoppa, Gregory; Geissler, Paul
1999-09-01
Astypalaea Linea is an 810-km strike-slip fault, located near the south pole of Europa. In length, it rivals the San Andreas Fault in California, and it is the largest strike-slip fault yet known on Europa. The fault was discovered using Voyager 2 images, based upon the presence of familiar strike-slip features including linearity, pull-aparts, and possible braids, and upon the offset of multiple piercing points. Fault displacement is 42 km, right-lateral, in the southern and central parts and probably throughout. Pull-aparts present along the fault trace are probably gaps in the lithosphere bounded by vertical cracks, which opened due to fault motion and were filled with material from below. Crosscutting relationships suggest the fault to be of intermediate relative age. The fault may have initiated as a crack due to tension from combined diurnal tides and nonsynchronous rotation, according to the tectonic model of R. Greenberg et al. (1998a, Icarus 135, 64-78). Under the influence of varying diurnal tides, strike-slip offset may have occurred through a process called “walking,” which depends upon an inelastic lithospheric response to displacement. Alternatively, fault displacement may have been driven by currents in the theorized Europan ocean, which may have created simple shear structures such as braids. The discovery of Astypalaea Linea extends the geographical range of lateral motion on Europa. Such motion requires the presence of a decoupling zone of ductile ice or liquid water, a sufficiently rigid lithosphere, and a mechanism to consume surface area.
Furquim, Gustavo; Filho, Geraldo P R; Jalali, Roozbeh; Pessin, Gustavo; Pazzi, Richard W; Ueyama, Jó
2018-03-19
The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and the extent of the damage to goods and property that is caused. Until now, feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. Recently, however, there have been promising innovations in technology which have supplemented the tasks of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks that follow emerging IoT patterns. In light of this, this study sets out and describes the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSNs for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos, Brazil, which carries out data collection from rivers in the region. Fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of nodes during disasters: intelligence is added to the nodes so that they can carry out data distribution and forecasting even in extreme situations. A case study of flash-flood forecasting is also included, making use of the ns-3 SENDI model and data collected by the WSN.
Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations.
Cheng, Wei; Zhang, Kai; Chen, Haifeng; Jiang, Guofei; Chen, Zhengzhang; Wang, Wei
2016-08-01
The modern world has witnessed a dramatic increase in our ability to collect, transmit and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts a significant amount of interest in many fields such as security, fault management, and industrial optimization. Recently, the invariant network has been shown to be a powerful way of characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detecting causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network-diffusion-based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network, and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations, and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.
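The diffusion-based ranking idea can be sketched as a random-walk-with-restart smoothing of per-node broken-correlation evidence over the invariant network. This toy example (an illustrative formulation, not the paper's exact algorithm; the network and evidence vector are made up) shows how a hub touching many vanishing correlations can outrank the nodes that directly exhibit them:

```python
import numpy as np

def rank_causal_anomalies(A, broken, restart=0.5):
    """Rank nodes by diffusing broken-invariance evidence over the network.

    A: adjacency matrix of the invariant network.
    broken: per-node evidence of vanishing correlations.
    Solves the random-walk-with-restart fixed point
    s = restart * W s + (1 - restart) * e, then ranks by score.
    """
    col = A.sum(axis=0)
    W = A / np.where(col == 0, 1, col)      # column-normalized transition matrix
    n = A.shape[0]
    e = broken / max(broken.sum(), 1)       # normalized evidence distribution
    scores = (1 - restart) * np.linalg.solve(np.eye(n) - restart * W, e)
    return np.argsort(scores)[::-1]

# Toy invariant network: node 0 links to nodes 1, 2, 3; the vanishing
# correlations are observed at nodes 1-3, but node 0 is the common neighbor.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], float)
broken = np.array([0.0, 1.0, 1.0, 1.0])
ranking = rank_causal_anomalies(A, broken)  # node 0 ranks first
```

Here the propagation concentrates score on node 0 even though its own broken-correlation percentage is zero, which is exactly limitation 2) that the paper's framework is designed to overcome.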
Furquim, Gustavo; Filho, Geraldo P. R.; Pessin, Gustavo; Pazzi, Richard W.
2018-01-01
The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and the extent of the damage to goods and property that is caused. Until now, feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. Recently, however, there have been promising innovations in technology which have supplemented the tasks of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks that follow emerging IoT patterns. In light of this, this study sets out and describes the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSNs for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos, Brazil, which carries out data collection from rivers in the region. Fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of nodes during disasters: intelligence is added to the nodes so that they can carry out data distribution and forecasting even in extreme situations. A case study of flash-flood forecasting is also included, making use of the ns-3 SENDI model and data collected by the WSN. PMID:29562657
A Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2011-01-01
This report presents a self-stabilizing distributed clock synchronization protocol in the absence of faults in the system. It is focused on the distributed clock synchronization of an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. We present an outline of a deductive proof of the correctness of the protocol. A model of the protocol was mechanically verified using the Symbolic Model Verifier (SMV) for a variety of topologies, and results of this mechanical proof of correctness are provided. The model checking results have verified the correctness of the protocol as applied to networks with unidirectional and bidirectional links. In addition, the results confirm the claims of determinism and linear convergence. As a result, we conjecture that the protocol solves the general case of this problem. We also present several variations of the protocol and argue that this synchronization protocol is indeed an emergent system.
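A toy synchronous-round simulation conveys the flavor of self-stabilization from an arbitrary initial state on a 1-connected digraph (this is an illustrative max-adoption rule, not the protocol specified in the report; the ring topology and initial clocks are assumptions):

```python
# Each round, every node adopts the largest clock among itself and its
# in-neighbors, then all nodes tick. On a strongly connected digraph the
# clocks agree within (diameter) rounds, regardless of the initial state.
def sync_round(clocks, in_neighbors):
    heard = [max([clocks[j] for j in in_neighbors[i]] + [clocks[i]])
             for i in range(len(clocks))]
    return [c + 1 for c in heard]  # every node ticks once per round

# 1-connected ring digraph: node i hears only from node (i - 1) mod n.
n = 5
in_neighbors = [[(i - 1) % n] for i in range(n)]
clocks = [3, 0, 7, 1, 4]            # arbitrary initial state (self-stabilization)
for _ in range(n):                  # ring diameter is n - 1 rounds
    clocks = sync_round(clocks, in_neighbors)
assert len(set(clocks)) == 1        # all clocks now agree
```

The toy captures two properties the report emphasizes: no reliance on initial state, and convergence time linear in the network extent; it does not model the report's drift, delay, or anonymity constraints.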
NASA Astrophysics Data System (ADS)
Suzuki, K.; Nakano, M.; Hori, T.; Takahashi, N.
2015-12-01
The Japan Agency for Marine-Earth Science and Technology installed a permanent ocean-bottom observation network, the Dense Oceanfloor Network System for Earthquakes and Tsunamis (DONET), off the Kii Peninsula, southwest of Japan, to monitor earthquakes and tsunamis. We detected long-term vertical displacements of the sea floor in the ocean-bottom pressure records, starting from March 2013, at several DONET stations (Suzuki et al., 2014). We consider that these displacements were caused by crustal deformation due to a slow slip event (SSE). We estimated the fault geometry of the SSE using the observed ocean-bottom displacements. The ocean-bottom displacements were obtained by removing the tidal components from the pressure records. We also subtracted from each record the average of the pressure changes over the stations connected to each science node, in order to remove the contributions of atmospheric pressure changes and non-tidal ocean dynamic mass variations. In the fault geometry estimation, we therefore compared the observed displacements with theoretical ones from which the average displacement had likewise been subtracted. We also compared the observed and theoretical average displacements for model evaluation; in this study, the observed average displacements were assumed to be zero. Although there are nine parameters in the fault model, we observed vertical displacements at only four stations. We therefore assumed three fault geometries: (1) a reverse-fault slip along the plate boundary, (2) a strike slip along a splay fault, and (3) a reverse-fault slip along the splay fault. We found that model (3) gives the smallest residual between observed and calculated displacements. We also observed that this SSE was synchronized with a decrease in the background seismicity within the area of a nearby earthquake cluster. In the future, we will investigate the relationship between the SSE and the seismicity change.
Analysis of Earthquake Source Spectra in Salton Trough
NASA Astrophysics Data System (ADS)
Chen, X.; Shearer, P. M.
2009-12-01
Previous studies of the source spectra of small earthquakes in southern California show that average Brune-type stress drops vary among different regions, with particularly low stress drops observed in the Salton Trough (Shearer et al., 2006). The Salton Trough marks the southern end of the San Andreas Fault and is prone to earthquake swarms, some of which are driven by aseismic creep events (Lohman and McGuire, 2007). In order to learn the stress state and understand the physical mechanisms of swarms and slow slip events, we analyze the source spectra of earthquakes in this region. We obtain Southern California Seismic Network (SCSN) waveforms for earthquakes from 1977 to 2009 archived at the Southern California Earthquake Center (SCEC) data center, which includes over 17,000 events. After resampling the data to a uniform 100 Hz sample rate, we compute spectra for both signal and noise windows for each seismogram, and select traces with a P-wave signal-to-noise ratio greater than 5 between 5 Hz and 15 Hz. Using selected displacement spectra, we isolate the source spectra from station terms and path effects using an empirical Green’s function approach. From the corrected source spectra, we compute corner frequencies and estimate moments and stress drops. Finally, we analyze spatial and temporal variations in stress drop in the Salton Trough and compare them with studies of swarms and creep events to assess the evolution of faulting and stress in the region. References: Lohman, R. B., and J. J. McGuire (2007), Earthquake swarms driven by aseismic creep in the Salton Trough, California, J. Geophys. Res., 112, B04405, doi:10.1029/2006JB004596. Shearer, P. M., G. A. Prieto, and E. Hauksson (2006), Comprehensive analysis of earthquake source spectra in southern California, J. Geophys. Res., 111, B06303, doi:10.1029/2005JB003979.
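The stress-drop estimate from a corner frequency follows the standard Brune model: source radius r = k·β/fc (Brune's k ≈ 0.372) and stress drop Δσ = 7M0/(16r³). A sketch with an assumed shear-wave speed and a hypothetical M 2 event (the numbers are illustrative, not from this study):

```python
def brune_stress_drop(M0, fc, beta=3500.0, k=0.372):
    """Brune-model stress drop (Pa) from seismic moment M0 (N·m) and corner
    frequency fc (Hz). Source radius r = k * beta / fc; dsigma = 7*M0/(16*r^3).
    beta (shear-wave speed, m/s) and k are assumed typical values."""
    r = k * beta / fc
    return 7.0 * M0 / (16.0 * r**3)

# Hypothetical M ~2 event: M0 = 10^(1.5*Mw + 9.1) N·m, fc = 10 Hz
M0 = 10 ** (1.5 * 2.0 + 9.1)
dsigma_MPa = brune_stress_drop(M0, 10.0) / 1e6   # stress drop in MPa
```

For these inputs the result is a few tenths of an MPa, i.e., the kind of low stress drop the abstract associates with the Salton Trough.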
Perl Extension to the Bproc Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grunau, Daryl W.
2004-06-07
The Beowulf Distributed Process Space (Bproc) software stack is comprised of UNIX/Linux kernel modifications and a support library by which a cluster of machines, each running its own private kernel, can present itself as a unified process space to the user. A Bproc cluster contains a single front-end machine and many back-end nodes which receive and run processes given to them by the front-end. Any process which is migrated to a back-end node is also visible as a ghost process on the front-end, and may be controlled there using traditional UNIX semantics (e.g. ps(1), kill(1), etc.). This software is a Perl extension to the Bproc library which enables the Perl programmer to make direct calls to functions within the Bproc library. See http://www.clustermatic.org, http://bproc.sourceforge.net, and http://www.perl.org.
Tectonics of the Jemez Lineament in the Jemez Mountains and Rio Grande Rift
NASA Astrophysics Data System (ADS)
Aldrich, M. J., Jr.
1986-02-01
The Jemez lineament is a NE trending crustal flaw that controlled volcanism and tectonism in the Jemez Mountains and the Rio Grande rift zone. The fault system associated with the lineament in the rift zone includes, from west to east, the Jemez fault zone southwest of the Valles-Toledo caldera complex, a series of NE trending faults on the resurgent dome in the Valles caldera, a structural discontinuity with a high fracture intensity in the NE Jemez Mountains, and the Embudo fault zone in the Española Basin. The active western boundary faulting of the Española Basin may have been restricted to the south side of the lineament since the mid-Miocene. The faulting apparently began on the Sierrita fault on the east side of the Nacimiento Mountains in the late Oligocene and stepped eastward in the early Miocene to the Canada de Cochiti fault zone. At the end of the Miocene (about 5 Ma) the active boundary faulting again stepped eastward to the Pajarito fault zone on the east side of the Jemez Mountains. The north end of the Pajarito fault terminates against the Jemez lineament at a point where it changes from a structural discontinuity (zone of high fracture intensity) on the west to the Embudo fault zone on the east. Major transcurrent movement occurred on the Embudo fault zone during the Pliocene and has continued at a much slower rate since then. The relative sense of displacement changes from right slip on the western part of the fault zone to left slip on the east. The kinematics of this faulting probably reflect the combined effects of faster spreading in the Española Basin than the area north of the lineament (Abiquiu embayment and San Luis Basin), the right step in the rift that juxtaposes the San Luis Basin against the Picuris Mountains, and counterclockwise rotation of various crustal blocks within the rift zone. 
No strike-slip displacements have occurred on the lineament in the central and eastern Jemez Mountains since at least the mid-Miocene, although movements on the still active Jemez fault zone, in the western Jemez Mountains, may have a significant strike-slip component. Basaltic volcanism was occurring in the Jemez Mountains at four discrete vent areas on the lineament between about 15 Ma and 10 Ma and possibly as late as 7 Ma, indicating that it was being extended during that time.
Region-Based Collision Avoidance Beaconless Geographic Routing Protocol in Wireless Sensor Networks.
Lee, JeongCheol; Park, HoSung; Kang, SeokYoon; Kim, Ki-Il
2015-06-05
Due to the lack of dependency on beacon messages for location exchange, the beaconless geographic routing protocol has attracted considerable attention from the research community. However, existing beaconless geographic routing protocols are likely to generate duplicated data packets when multiple winners in the greedy area are selected. Furthermore, these protocols are designed for a uniform sensor field, so they cannot be directly applied to practical irregular sensor fields with partial voids. To prevent failure to find a forwarding node and to remove unnecessary duplication, in this paper we propose a region-based collision avoidance beaconless geographic routing protocol that increases forwarding opportunities in randomly-deployed sensor networks. By assigning different contention priorities to the mutually-communicable nodes and to the rest of the nodes in the greedy area, every neighbor node in the greedy area can be used for data forwarding without any packet duplication. Moreover, simulation results demonstrate an increased packet delivery ratio and a shortened end-to-end delay relative to well-known comparative protocols.
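The contention mechanism common to beaconless geographic routing can be sketched as a distance-based timer: each candidate forwarder delays its reply in inverse proportion to the progress it offers toward the destination, so the best-placed node answers first and suppresses the rest. The timer formula, constants, and coordinates below are illustrative assumptions, not this paper's exact rule:

```python
import math

T_MAX = 0.05  # maximum contention delay (s); illustrative value

def contention_delay(node, sender, dest, radio_range):
    """Contention timer for beaconless forwarding: more progress toward
    the destination -> shorter delay -> wins the forwarding contention."""
    d = math.dist(sender, dest)
    progress = d - math.dist(node, dest)           # advance toward destination
    progress = max(0.0, min(progress, radio_range))
    return T_MAX * (1.0 - progress / radio_range)

sender, dest, rng = (0.0, 0.0), (100.0, 0.0), 30.0
near = contention_delay((10.0, 5.0), sender, dest, rng)   # good progress
far = contention_delay((2.0, 20.0), sender, dest, rng)    # little progress
assert near < far  # the node offering more progress responds first
```

The paper's contribution layers region-based priorities on top of this basic idea so that duplicate replies from mutually hidden winners are avoided.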
Region-Based Collision Avoidance Beaconless Geographic Routing Protocol in Wireless Sensor Networks
Lee, JeongCheol; Park, HoSung; Kang, SeokYoon; Kim, Ki-Il
2015-01-01
Due to the lack of dependency on beacon messages for location exchange, the beaconless geographic routing protocol has attracted considerable attention from the research community. However, existing beaconless geographic routing protocols are likely to generate duplicated data packets when multiple winners in the greedy area are selected. Furthermore, these protocols are designed for a uniform sensor field, so they cannot be directly applied to practical irregular sensor fields with partial voids. To prevent failure to find a forwarding node and to remove unnecessary duplication, in this paper we propose a region-based collision avoidance beaconless geographic routing protocol that increases forwarding opportunities in randomly-deployed sensor networks. By assigning different contention priorities to the mutually-communicable nodes and to the rest of the nodes in the greedy area, every neighbor node in the greedy area can be used for data forwarding without any packet duplication. Moreover, simulation results demonstrate an increased packet delivery ratio and a shortened end-to-end delay relative to well-known comparative protocols. PMID:26057037
Dodia, Nazera; El-Sharief, Deena; Kirwan, Cliona C
2015-01-01
Sentinel lymph nodes are mapped using (99m)Technetium, injected on the day of surgery (1-day protocol) or the day before (2-day protocol). This retrospective cohort study compares efficacy between the two protocols. Histopathology for all unilateral sentinel lymph node biopsies (March 2012-March 2013) in a single centre was reviewed. The numbers of sentinel lymph nodes and non-sentinel lymph nodes, and the pathology, were compared. 2/270 (0.7 %) patients in the 1-day protocol and 8/192 (4 %) in the 2-day protocol had no sentinel lymph nodes removed (p = 0.02). The median (range) number of sentinel lymph nodes removed per patient was 2 (0-7) and 1 (0-11) in the 1- and 2-day protocols respectively (p = 0.08). There was a trend toward removing more non-sentinel lymph nodes in the 2-day protocol [1-day: 52/270 (19 %); 2-day: 50/192 (26 %), p = 0.07]. With the 2-day protocol, the sentinel lymph node identification failure rate is higher, although still within acceptable limits. The 1- and 2-day protocols are both effective, so the choice of protocol should be driven by patient convenience and hospital efficiency. However, this study raises the possibility that the 1-day protocol may be preferable when a higher sentinel lymph node count is beneficial, for example following neoadjuvant chemotherapy.
SeaMARC II mapping of transform faults in the Cayman Trough, Caribbean Sea
Rosencrantz, Eric; Mann, Paul
1992-01-01
SeaMARC II maps of the southern wall of the Cayman Trough between Honduras and Jamaica show zones of continuous, well-defined fault lineaments adjacent and parallel to the wall, both to the east and west of the Cayman spreading axis. These lineaments mark the present, active traces of transform faults which intersect the southern end of the spreading axis at a triple junction. The Swan Islands transform fault to the west is dominated by two major lineaments that overlap with right-stepping sense across a large push-up ridge beneath the Swan Islands. The fault zone to the east of the axis, named the Walton fault, is more complex, containing multiple fault strands and a large pull-apart structure. The Walton fault links the spreading axis to Jamaican and Hispaniolan strike-slip faults, and it defines the southern boundary of a microplate composed of the eastern Cayman Trough and western Hispaniola. The presence of this microplate raises questions about the reliability of Caribbean plate velocities based primarily on Cayman Trough opening rates.
Highball: A high speed, reserved-access, wide area network
NASA Technical Reports Server (NTRS)
Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.
1990-01-01
A network architecture called Highball and a preliminary design for a prototype, wide-area data network designed to operate at speeds of 1 Gbps and beyond are described. It is intended for applications requiring high-speed burst transmissions where some latency between requesting a transmission and granting the request can be anticipated and tolerated. Examples include real-time video and disk-to-disk transfers, national filestore access, remote sensing, and similar applications. The network nodes include an intelligent crossbar switch, but have no buffering capabilities; thus, data must be queued at the end nodes. There are no restrictions on the network topology, link speeds, or end-to-end protocols. The end systems, nodes, and links can operate at any speed up to the limits imposed by the physical facilities. An overview of an initial design approach is presented and is intended as a benchmark upon which a detailed design can be developed. It describes the network architecture and proposed access protocols, as well as functional descriptions of the hardware and software components that could be used in a prototype implementation. It concludes with a discussion of additional issues to be resolved in continuing stages of this project.
Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.
Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun
2017-10-03
This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework, with an unknown input considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness. In addition, the weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.
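The residual-based detection idea can be sketched, in a much simpler setting than the switched T-S fuzzy, unknown-input case treated here, with a plain linear Luenberger observer: the output residual stays at zero until an additive fault enters the plant. All system matrices and the fault profile below are assumed toy values:

```python
import numpy as np

# Plant:    x+ = A x + B u + E f   (f = fault signal),  y = C x
# Observer: xh+ = A xh + B u + L (y - C xh); residual r = y - C xh
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
E = np.array([[1.0], [0.0]])
L = np.array([[0.5], [0.2]])        # makes A - L C stable (eigenvalues < 1)

x = np.zeros((2, 1))
xh = np.zeros((2, 1))
residual = []
for k in range(40):
    u = np.array([[1.0]])
    f = np.array([[1.0]]) if k >= 20 else np.array([[0.0]])  # fault at k = 20
    y = C @ x
    r = (y - C @ xh).item()         # scalar output residual
    residual.append(abs(r))
    xh = A @ xh + B @ u + L * r     # observer update
    x = A @ x + B @ u + E @ f       # plant update (fault enters through E)
# residual is exactly zero before the fault and grows once it appears.
```

The paper's unknown-input observer goes further by decoupling the residual from disturbances (the H₋/H∞ trade-off), which this plain observer does not attempt.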
Staging studies for cutaneous melanoma in the United States: a population-based analysis.
Wasif, Nabil; Etzioni, David; Haddad, Dana; Gray, Richard J; Bagaria, Sanjay P; Pockaj, Barbara A
2015-04-01
Routine cross-sectional imaging for staging of early-stage cutaneous melanoma is not recommended. This study sought to investigate the use of imaging for staging of cutaneous melanoma in the United States. Patients with nonmetastatic cutaneous melanoma newly diagnosed between 2000 and 2007 were identified from the Surveillance, Epidemiology, and End Results (SEER)-Medicare registry. Any imaging study performed within 90 days after diagnosis was considered a staging study. The study identified 25,643 patients, 3,116 (12.2 %) of whom underwent cross-sectional imaging: positron emission tomography (PET) (7.2 %), computed tomography (CT) (5.9 %), and magnetic resonance imaging (MRI) (0.6 %). From 2000 to 2007, the use of cross-sectional imaging increased from 8.7 to 16.1 % (p < 0.001), driven predominantly by increased usage of PET (from 4.2 to 12.1 %). Stratification by T and N classification showed that cross-sectional imaging was used for 8.6 % of T1, 14.3 % of T2, 18.6 % of T3, and 26.7 % of T4 tumors (p < 0.001) and for 33.3 % of node-positive patients versus 11.1 % of node-negative patients (p < 0.001). Factors predictive of cross-sectional imaging included T classification [odds ratio (OR) for T4 vs T1, 2.66; 95 % confidence interval (CI) 2.33-3.03], node positivity (OR 2.70; 95 % CI 2.36-3.10), more recent year of diagnosis (OR 2.05 for 2007 vs 2000; 95 % CI 1.74-2.42), atypical histology, and non-Caucasian race (OR 1.32; 95 % CI 1.02-1.73). The use of cross-sectional imaging for staging of early-stage cutaneous melanoma is increasing in the Medicare population. Better dissemination of guidelines and judicious use of imaging should be encouraged.
Kujala, Rainer; Glerean, Enrico; Pan, Raj Kumar; Jääskeläinen, Iiro P; Sams, Mikko; Saramäki, Jari
2016-11-01
Networks have become a standard tool for analyzing functional magnetic resonance imaging (fMRI) data. In this approach, brain areas and their functional connections are mapped to the nodes and links of a network. Even though this mapping reduces the complexity of the underlying data, it remains challenging to understand the structure of the resulting networks due to the large number of nodes and links. One solution is to partition networks into modules and then investigate the modules' composition and relationship with brain functioning. While this approach works well for single networks, understanding differences between two networks by comparing their partitions is difficult and alternative approaches are thus necessary. To this end, we present a coarse-graining framework that uses a single set of data-driven modules as a frame of reference, enabling one to zoom out from the node- and link-level details. As a result, differences in the module-level connectivity can be understood in a transparent, statistically verifiable manner. We demonstrate the feasibility of the method by applying it to networks constructed from fMRI data recorded from 13 healthy subjects during rest and movie viewing. While independently partitioning the rest and movie networks is shown to yield little insight, the coarse-graining framework enables one to pinpoint differences in the module-level structure, such as the increased number of intra-module links within the visual cortex during movie viewing. In addition to quantifying differences due to external stimuli, the approach could also be applied in clinical settings, such as comparing patients with healthy controls. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
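The coarse-graining step, collapsing node-level links into module-level connectivity given one reference partition, can be sketched as a matrix aggregation (a generic illustration of the idea, not the authors' exact pipeline; the toy network and module labels are made up):

```python
import numpy as np

def coarse_grain(A, modules):
    """Collapse a node-level adjacency matrix A into module-level link
    counts, given an integer module label for each node."""
    m = int(modules.max()) + 1
    # Indicator matrix: P[i, k] = 1 iff node i belongs to module k.
    P = np.eye(m)[modules]
    # Entry (k, l) sums all links between module k and module l;
    # the diagonal holds (twice) the intra-module link counts.
    return P.T @ A @ P

# Toy network: two modules of two nodes each, denser within module 0.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], float)
modules = np.array([0, 0, 1, 1])
M = coarse_grain(A, modules)   # 2x2 module-level connectivity matrix
```

Comparing such module-level matrices between two conditions (e.g., rest versus movie viewing) is what makes the differences statistically testable at a scale coarser than individual links.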
NASA Astrophysics Data System (ADS)
Amato, Vincenzo; Aucelli, Pietro P. C.; Bellucci Sessa, Eliana; Cesarano, Massimo; Incontri, Pietro; Pappone, Gerardo; Valente, Ettore; Vilardo, Giuseppe
2017-04-01
A multidisciplinary methodology, integrating stratigraphic, geomorphological and structural data, combined with GIS-aided analysis and PS-InSAR interferometric data, was applied to characterize the relationships between ground deformations and the stratigraphic and morphostructural setting of the Venafro intermontane basin. This basin is a morphostructural depression related to NW-SE and NE-SW oriented high-angle normal faults bordering and crossing it. In particular, a well-known active fault crossing the plain is the Aquae Juliae Fault, whose recent activity is evidenced by archeoseismological data. The approach applied here reveals new evidence of possible faulting, acting during the Lower to Upper Pleistocene, which has driven the morphotectonic and environmental evolution of the basin. In particular, the tectonic setting emerging from this study highlights the influence of the NW-SE oriented extensional phase during the late Lower Pleistocene - early Middle Pleistocene in generating NE-SW trending, SE dipping, high-angle faults and NW-SE trending, high-angle transtensive faults. This phase was followed by a NE-SW extensional one, responsible for the formation of NW-SE trending, both NW and SE dipping, high-angle normal faults, and for the reactivation of the older NE-SW oriented structures. These NW-SE trending normal faults include the Aquae Juliae Fault and a new, previously unknown one crossing the plain between the Venafro village and the Colle Cupone Mt. (hereinafter named the Venafro-Colle Cupone Fault, VCCF). This fault has controlled deposition of the youngest sedimentary units (late Middle Pleistocene to late Upper Pleistocene), suggesting recent activity; it is also well constrained by PS-InSAR data, as evidenced by the increased subsidence rate in the hanging-wall block.
Gögler, E
1985-01-01
Tables present the most important faults in enteral sutures and anastomoses, both in general and in specific operations: end-to-end anastomoses with congruent diameters, anastomoses with differing diameters, Billroth I, Billroth II, low anterior resection, and esophago-jejunostomy. The advantages of mechanical staplers can only be realized, and their specific risks avoided, if the surgeon is experienced in the standard technique and familiar with the faults and risks of both mechanical staplers and manual sutures. Surgeons without such experience, and without training in manual suture technique, may produce real catastrophes that can prove hopeless.
NASA Astrophysics Data System (ADS)
Martínez-Martínez, José Miguel; Booth-Rea, Guillermo; Azañón, José Miguel; Torcal, Federico
2006-08-01
Pliocene and Quaternary tectonic structures in the central Betics, mainly consisting of segmented northwest-southeast normal faults, and the associated seismicity do not agree with the transpressive tectonic nature of the Africa-Eurasia plate boundary in the Ibero-Maghrebian region. Active extensional deformation here is heterogeneous, with individual segmented normal faults linked by relay ramps and transfer faults, including oblique-slip and both dextral and sinistral strike-slip faults. Normal faults extend the hanging wall of an extensional detachment that is the active segment of a complex system of successive WSW-directed extensional detachments which have thinned the Betic upper crust since the middle Miocene. Present-day extension is concentrated in two areas connected by an active 40-km-long dextral strike-slip transfer fault zone. Both the seismicity distribution and the focal mechanisms agree with the position and regime of the observed faults. The activity of the transfer zone from the middle Miocene to the present implies a mode of extension that must have remained substantially the same over the entire period. Thus, the mechanisms driving extension should still be operating. Both the westward migration of the extensional loci and the high asymmetry of the extensional systems can be related to edge delamination below the south Iberian margin coupled with roll-back under the Alborán Sea, involving the asymmetric westward inflow of asthenospheric material under the margins.
Late Quaternary faulting in the Sevier Desert driven by magmatism.
Stahl, T; Niemi, N A
2017-03-14
Seismic hazard in continental rifts varies as a function of strain accommodation by tectonic or magmatic processes. The nature of faulting in the Sevier Desert, located in the eastern Basin and Range of central Utah, and how this faulting relates to the Sevier Desert Detachment low-angle normal fault, have been debated for nearly four decades. Here, we show that the geodetic signal of extension across the eastern Sevier Desert is best explained by magma-assisted rifting associated with Plio-Pleistocene volcanism. GPS velocities from 14 continuous sites across the region are best fit by interseismic strain accumulation on the southern Wasatch Fault at c. 3.4 mm yr−1 with a c. 0.5 mm yr−1 tensile dislocation opening in the eastern Sevier Desert. The characteristics of surface deformation from field surveys are consistent with dike-induced faulting and not with faults soling into an active detachment. Geologic extension rates of c. 0.6 mm yr−1 over the last c. 50 kyr in the eastern Sevier Desert are consistent with the rates estimated from the geodetic model. Together, these findings suggest that Plio-Pleistocene extension is not likely to have been accommodated by low-angle normal faulting on the Sevier Desert Detachment and is instead accomplished by strain localization in a zone of narrow, magma-assisted rifting.
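The way such a geodetic model ties surface velocities to a fault's slip rate and locking depth can be illustrated with the classic one-dimensional elastic screw-dislocation (arctangent) profile. This is a generic strike-slip approximation shown for illustration only, not the study's dip-slip and tensile-dislocation model, and the default parameter values here are placeholders:

```python
import math

def interseismic_velocity(x_km, slip_rate=3.4, locking_depth_km=15.0):
    """Fault-parallel surface velocity (mm/yr) at distance x_km from a
    fault locked from the surface down to locking_depth_km, using the
    standard elastic screw-dislocation model:

        v(x) = (s / pi) * arctan(x / D)

    The profile is antisymmetric about the fault and approaches
    +/- s/2 in the far field, which is how a fit to GPS velocities
    constrains the deep slip rate s.
    """
    return (slip_rate / math.pi) * math.atan(x_km / locking_depth_km)
```

Fitting a profile like this (plus additional dislocation sources, such as a tensile opening term) to the continuous GPS velocities is what allows rates on the order of a few mm/yr to be partitioned between structures.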
Complex rupture during the 12 January 2010 Haiti earthquake
Hayes, G.P.; Briggs, R.W.; Sladen, A.; Fielding, E.J.; Prentice, C.; Hudnut, K.; Mann, P.; Taylor, F.W.; Crone, A.J.; Gold, R.; Ito, T.; Simons, M.
2010-01-01
Initially, the devastating Mw 7.0, 12 January 2010 Haiti earthquake seemed to involve straightforward accommodation of oblique relative motion between the Caribbean and North American plates along the Enriquillo-Plantain Garden fault zone. Here, we combine seismological observations, geologic field data and space geodetic measurements to show that, instead, the rupture process may have involved slip on multiple faults. Primary surface deformation was driven by rupture on blind thrust faults with only minor, deep, lateral slip along or near the main Enriquillo-Plantain Garden fault zone; thus the event only partially relieved centuries of accumulated left-lateral strain on a small part of the plate-boundary system. Together with the predominance of shallow off-fault thrusting, the lack of surface deformation implies that remaining shallow shear strain will be released in future surface-rupturing earthquakes on the Enriquillo-Plantain Garden fault zone, as occurred in inferred Holocene and probable historic events. We suggest that the geological signature of this earthquake (broad warping and coastal deformation rather than surface rupture along the main fault zone) will not be easily recognized by standard palaeoseismic studies. We conclude that similarly complex earthquakes in tectonic environments that accommodate both translation and convergence, such as the San Andreas fault through the Transverse Ranges of California, may be missing from the prehistoric earthquake record. © 2010 Macmillan Publishers Limited. All rights reserved.
Test experience on an ultrareliable computer communication network
NASA Technical Reports Server (NTRS)
Abbott, L. W.
1984-01-01
The dispersed sensor processing mesh (DSPM) is an experimental, ultrareliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imparted to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulations of larger DSPM-type networks are used to examine the inherent limitations the growth algorithm places on growth time, and the relationship of growth time to network size and topology.
Fallon, Nevada FORGE 3D Geologic Model
Blankenship, Doug; Siler, Drew
2018-03-01
The 3D geologic model for the Fallon FORGE site was constructed in EarthVision software using methods similar to those of Moeck et al. (2009, 2010), Faulds et al. (2010b), Jolie et al. (2012, 2015), Hinz et al. (2013a), Siler and Faulds (2013), and Siler et al. (2016a, b); references are included in the archive. The model contains 48 faults (numbered 1-48) and four stratigraphic surfaces, from oldest to youngest: (1) undivided Mesozoic basement, consisting of Mesozoic metasedimentary, metavolcanic, and plutonic units (Mzu); (2) Miocene volcanic and interbedded sedimentary rocks, consisting primarily of basaltic and basaltic-andesite lava flows (Tvs); (3) late Miocene to Pliocene (i.e., Neogene) undivided sedimentary rocks (Ns); and (4) Quaternary sediments (Qs). The two files contain points that describe nodes along the fault surfaces and stratigraphic horizons.
Current Sensor Fault Diagnosis Based on a Sliding Mode Observer for PMSM Driven Systems
Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; Huang, Yi-Shan; Zhao, Kai-Hui
2015-01-01
This paper proposes a current sensor fault detection method based on a sliding mode observer for the torque closed-loop control system of interior permanent magnet synchronous motors. First, a sliding mode observer based on the extended flux linkage is built to simplify the motor model; it effectively eliminates the phenomenon of salient poles and the dependence on the direct-axis inductance parameter, and can also be used for real-time calculation of feedback torque. Then a sliding mode current observer is constructed in αβ coordinates to generate the fault residuals of the phase current sensors. The method can accurately identify abrupt gain faults and slow-variation offset faults in faulty sensors in real time, and the generated residuals of the designed fault detection system are not affected by the unknown input; the structure of the observer, the theoretical derivation, and the stability proof are concise and simple. The RT-LAB real-time simulation is used to build a hardware-in-the-loop simulation model. The simulation and experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:25970258
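The residual-evaluation stage of a scheme like this, comparing each measured phase current against the observer's estimate and declaring a fault when the residual persists above a threshold, can be sketched as follows. The observer itself is not implemented here, and the threshold and debouncing count are illustrative placeholders, not values from the paper:

```python
def detect_sensor_fault(measured, estimated, threshold=0.5, min_consecutive=3):
    """Flag a current-sensor fault when the residual |i_meas - i_est|
    exceeds a threshold for several consecutive samples.

    measured, estimated : sequences of phase-current samples; the
        estimate would come from a sliding mode current observer.
    Requiring min_consecutive hits debounces noise spikes so a single
    outlier does not trigger a false alarm.
    Returns the sample index at which the fault is declared, or None.
    """
    count = 0
    for k, (m, e) in enumerate(zip(measured, estimated)):
        if abs(m - e) > threshold:
            count += 1
            if count >= min_consecutive:
                return k
        else:
            count = 0  # residual back inside the band; reset
    return None
```

An abrupt gain fault shows up as a sudden, sustained residual, while a slow-variation offset fault produces a residual that drifts across the threshold; both are caught by the same persistence test.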
NASA Astrophysics Data System (ADS)
Mercuri, Marco; Scuderi, Marco Maria; Tesei, Telemaco; Carminati, Eugenio; Collettini, Cristiano
2018-04-01
A great number of earthquakes occur within thick carbonate sequences in the shallow crust. At the same time, carbonate fault rocks exhumed from depths < 6 km (i.e., from seismogenic depths) exhibit the coexistence of structures related to brittle (i.e., cataclasis) and ductile deformation processes (i.e., pressure solution and granular plasticity). We performed friction experiments on water-saturated simulated carbonate-bearing faults for a wide range of normal stresses (from 5 to 120 MPa) and slip velocities (from 0.3 to 100 μm/s). At high normal stresses (σn > 20 MPa) fault gouges undergo strain weakening, which is more pronounced at slow slip velocities and causes a significant reduction of frictional strength, from μ = 0.7 to μ = 0.47. Microstructural analyses show that fault gouge weakening is driven by deformation accommodated by cataclasis and pressure-insensitive deformation processes (pressure solution and granular plasticity) that become more efficient at slow slip velocities. The reduction in frictional strength caused by the strain-weakening behaviour promoted by the activation of pressure-insensitive deformation might play a significant role in the mechanics of carbonate-bearing faults.
McBride, J.H.; Pugin, Andre J.M.; Nelson, W.J.; Larson, T.H.; Sargent, S.L.; Devera, J.A.; Denny, F.B.; Woolery, E.W.
2003-01-01
High-resolution shallow seismic reflection profiles across the northwesternmost part of the New Madrid seismic zone (NMSZ) and northwestern margin of the Reelfoot rift, near the confluence of the Ohio and Mississippi Rivers in the northern Mississippi embayment, reveal intense structural deformation that apparently took place during the late Paleozoic and/or Mesozoic up to near the end of the Cretaceous Period. The seismic profiles were sited on both sides of the northeast-trending Olmsted fault, defined by varying elevations of the top of Mississippian (locally base of Cretaceous) bedrock. The trend of this fault is close to and parallel with an unusually straight segment of the Ohio River and is approximately on trend with the westernmost of two groups of northeast-aligned epicenters ("prongs") in the NMSZ. Initially suspected on the basis of pre-existing borehole data, the deformation along the fault has been confirmed by four seismic reflection profiles, combined with some new information from drilling. The new data reveal (1) many high-angle normal and reverse faults expressed as narrow grabens and anticlines (suggesting both extensional and compressional regimes) that involved the largest displacements during the late Cretaceous (McNairy); (2) a different style of deformation involving probably more horizontal displacements (i.e., thrusting) that occurred at the end of this phase near the end of McNairy deposition, with some fault offsets of Paleocene and younger units; (3) zones of steeply dipping faults that bound chaotic blocks similar to that observed previously from the nearby Commerce geophysical lineament (CGL); and (4) complex internal deformation stratigraphically restricted to the McNairy, suggestive of major sediment liquefaction or landsliding. Our results thus confirm the prevalence of complex Cretaceous deformation continuing up into Tertiary strata near the northern terminus of the NMSZ. © 2003 Elsevier Science B.V. All rights reserved.
Van Noten, Koen; Lecocq, Thomas; Shah, Anjana K.; Camelbeeck, Thierry
2015-01-01
Between 12 July 2008 and 18 January 2010 a seismic swarm occurred close to the town of Court-Saint-Etienne, 20 km SE of Brussels (Belgium). The Belgian network and a temporary seismic network covering the epicentral area established a seismic catalogue in which magnitudes vary between ML -0.7 and ML 3.2. Based on waveform cross-correlation of co-located earthquakes, the spatial distribution of the hypocentre locations was improved considerably and shows a dense cluster displaying a 200 m-wide, 1.5-km-long, NW-SE oriented fault structure at a depth range between 5 and 7 km, located in the Cambrian basement rocks of the Lower Palaeozoic Anglo-Brabant Massif. Waveform comparison of the largest events of the 2008–2010 swarm with an ML 4.0 event that occurred during swarm activity between 1953 and 1957 in the same region shows similar P- and S-wave arrivals at the Belgian Uccle seismic station. The geometry depicted by the hypocentral distribution is consistent with a nearly vertical, left-lateral strike-slip fault in the current WNW–ESE oriented local maximum horizontal stress field. To identify the relevant tectonic structure, a systematic matched filtering of aeromagnetic data, which can approximately locate isolated anomalies associated with hypocentral depths, was applied. Matched filtering shows that the 2008–2010 seismic swarm occurred along a fault of limited size situated in slaty, low-magnetic rocks of the Mousty Formation. The fault is bordered at both ends by obliquely oriented magnetic gradients. Whereas the NW end of the fault is structurally controlled, its SE end is controlled by a magnetic gradient representing an early-orogenic detachment fault separating the low-magnetic slaty Mousty Formation from the high-magnetic Tubize Formation.
The seismic swarm is therefore interpreted as a sinistral reactivation of an inherited NW–SE oriented isolated fault in a weakened crust within the Cambrian core of the Brabant Massif.
NASA Astrophysics Data System (ADS)
Ferguson, Kelly M.
Deformation related to the transition from strike-slip to convergent slip during flat-slab subduction of the Yakutat microplate has resulted in regions of focused rock uplift and exhumation. In the St. Elias and Chugach Mountains, faulting related to transpressional processes and bending of fault systems coupled with enhanced glacial erosion causes rapid exhumation. Underplating below the syntaxial bend farther west in the Chugach Mountains and central Prince William Sound causes focused, but less rapid, exhumation. Farther south in the Prince William Sound, plate boundary deformation transitions from strike-slip to nearly full convergence in the Montague Island and Hinchinbrook Island region, which is ~20 km above the megathrust between the Yakutat microplate and overriding North American Plate. Montague and Hinchinbrook Islands are narrow, elongate, and steep, with a structural grain formed by several megathrust fault splays, some of which slipped during the 1964 M9.2 earthquake. Presented here are 32 new apatite (U-Th)/He (AHe) and 28 new apatite fission-track (AFT) ages from the Montague and Hinchinbrook Island regions. Most AHe ages are <5 Ma, with some as young as 1.1 Ma. AHe ages are youngest at the southwest end of Montague Island, where maximum fault displacement occurred on the Hanning Bay and Patton Bay faults during the 1964 earthquake. AFT ages range from ~5 Ma to ~20 Ma and are also younger at the SW end of Montague Island. These ages and corresponding exhumation rates indicate that the Montague and Hinchinbrook Island region is a narrow zone of intense deformation probably related to duplex thrusting along one or more megathrust fault splays. I interpret the rates of rock uplift and exhumation to have increased in the last ~5 My, especially at the southwest end of the island system and farthest from the region dominated by strike-slip and transpressional deformation to the northeast. 
The narrow band of deformation along these islands likely represents the northwestern edge of a broader swath of plate boundary deformation between the Montague-Hinchinbrook Island region and the Kayak Island fault zone.
Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens, Karen; Fesq, Lorraine M.
2010-01-01
Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. 
This paper summarizes the findings and recommendations from that workshop, particularly as fault management development issues affect operations and the development of operations capabilities.
Dynamic rupture simulations of the 2016 Mw7.8 Kaikōura earthquake: a cascading multi-fault event
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.; Ampuero, J. P.; Xu, W.; Feng, G.
2017-12-01
The Mw7.8 Kaikōura earthquake struck the northern part of New Zealand's South Island roughly one year ago. It ruptured multiple segments of the contractional North Canterbury fault zone and of the Marlborough fault system. Field observations combined with satellite data suggest a rupture path involving partly unmapped faults separated by stepover distances larger than 5 km, the maximum distance usually considered by the latest seismic hazard assessment methods. This might imply distant rupture transfer mechanisms generally not considered in seismic hazard assessment. We present high-resolution 3D dynamic rupture simulations of the Kaikōura earthquake under physically self-consistent initial stress and strength conditions. Our simulations are based on recent finite-fault slip inversions that constrain fault system geometry and final slip distribution from remote sensing, surface rupture and geodetic data (Xu et al., 2017). We assume a uniform background stress field, without lateral fault stress or strength heterogeneity. We use the open-source software SeisSol (www.seissol.org), which is based on an arbitrary high-order accurate derivative discontinuous Galerkin (ADER-DG) method. Our method can account for complex fault geometries, high-resolution topography and bathymetry, 3D subsurface structure, off-fault plasticity and modern friction laws. It enables the simulation of seismic wave propagation with high-order accuracy in space and time in complex media. We show that a cascading rupture driven by dynamic triggering can break all fault segments that were involved in this earthquake without mechanically requiring an underlying thrust fault. Our preferred fault geometry connects most fault segments and does not feature stepovers larger than 2 km. 
The best scenario matches the main macroscopic characteristics of the earthquake, including its apparently slow rupture propagation caused by zigzag cascading, the moment magnitude, and the overall inferred slip distribution. We observe a high sensitivity of the cascading dynamics to fault stepover distance and off-fault energy dissipation.
Yu, Qingbao; Du, Yuhui; Chen, Jiayu; He, Hao; Sui, Jing; Pearlson, Godfrey; Calhoun, Vince D
2017-11-01
A key challenge in building a brain graph using fMRI data is how to define the nodes. Spatial brain components estimated by independent component analysis (ICA) and regions of interest (ROIs) determined by a brain atlas are two popular methods to define nodes in brain graphs. It is difficult to evaluate which method is better in real fMRI data. Here we perform a simulation study and evaluate the accuracies of a few graph metrics in graphs with nodes defined by ICA components, ROIs, or modified ROIs in four simulation scenarios. Graph measures with ICA nodes are more accurate than graphs with ROI nodes in all cases. Graph measures with modified ROI nodes are modulated by artifacts. The correlations of graph metrics across subjects between graphs with ICA nodes and ground truth are higher than the correlations between graphs with ROI nodes and ground truth in scenarios with large overlapping spatial sources. Moreover, moving the location of ROIs substantially decreases the correlations in all scenarios. Evaluating graphs with different nodes is more tractable in simulated data than in real data because different scenarios can be simulated and measures of different graphs can be compared against a known ground truth. Since ROIs defined using a brain atlas may not correspond well to real functional boundaries, the overall findings of this work suggest that it is more appropriate to define nodes using data-driven ICA than ROI approaches in real fMRI data. Copyright © 2017 Elsevier B.V. All rights reserved.
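Whichever node definition is used, the downstream steps are the same: each node contributes one time course, edges come from pairwise correlation, and graph metrics are computed on the result. A minimal sketch of that pipeline, assuming thresholded Pearson correlation and node degree as the metric (the threshold value and function names are illustrative, not from the study):

```python
import numpy as np

def build_graph(timeseries, threshold=0.3):
    """Build a binary functional-connectivity graph from node time series.

    timeseries : (n_nodes, n_timepoints) array; each row is one node's
        signal (an ICA component time course or an ROI-averaged signal).
    Returns the binary adjacency matrix obtained by thresholding the
    absolute Pearson correlation between node pairs (self-loops removed).
    """
    corr = np.corrcoef(timeseries)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def degrees(adj):
    """Node degree, one of the simplest graph metrics such studies compare."""
    return adj.sum(axis=1)
```

The point of the simulation comparison is that a poor node definition (e.g., an atlas ROI straddling two functional sources) corrupts the rows of `timeseries`, and every metric computed downstream inherits that error.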
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-12-01
Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7. 
Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
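The Map/Reduce split works here because backprojection is linear: partial volumes built from disjoint projection subsets simply sum to the full volume. A toy sketch of that data flow, with a trivial accumulation standing in for real FDK ramp filtering and cone-beam geometry (all names are illustrative, not from the paper's Hadoop code):

```python
from functools import reduce
import numpy as np

def map_backproject(projection_chunk, volume_shape):
    """Map step (stand-in): filter and backproject one subset of
    projections into a partial volume. Real FDK would ramp-filter each
    projection and smear it along cone-beam rays; here each projection
    is simply accumulated to show the data flow."""
    partial = np.zeros(volume_shape)
    for proj in projection_chunk:
        partial += proj  # placeholder for filtered backprojection
    return partial

def reduce_volumes(v1, v2):
    """Reduce step: partial backprojections sum into the whole volume,
    which is valid because backprojection is linear."""
    return v1 + v2

def reconstruct(projections, n_chunks, volume_shape):
    """Split projections into chunks, map each to a partial volume,
    then reduce the partials into the final reconstruction."""
    chunks = np.array_split(np.asarray(projections), n_chunks)
    partials = map(lambda c: map_backproject(c, volume_shape), chunks)
    return reduce(reduce_volumes, partials)
```

Because the reduce operation is associative and commutative, the framework can combine partials in any order on any node, which is also what makes the computation tolerant of node failures.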
Soft-core processor study for node-based architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James
2008-09-01
Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power, and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary, with varying degrees of mission-specific performance requirements, on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA-based processors for use in future NBA systems: two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.
Fault tolerant operation of switched reluctance machine
NASA Astrophysics Data System (ADS)
Wang, Wei
The energy crisis and environmental challenges have driven industry towards more energy-efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industrial sector, advancement in the efficiency of electric drive systems is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance, as well as dramatically improved system efficiency, compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky, low-efficiency hydraulic and mechanical auxiliary systems. With the vast penetration of ASDS, fault tolerant operation capability is increasingly recognized as an important feature of drive performance, especially for aerospace, automotive, and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low-cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults, such as converter faults, position sensor faults, winding shorts, and eccentricity, are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on the transient and steady state performance of SRM is developed via simulation and experimental study, providing the necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for fast and reliable fault diagnosis.
To improve SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same Root Mean Square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for combinations of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transitions between various numbers of faulted phases is achieved.
Managing Fault Management Development
NASA Technical Reports Server (NTRS)
McDougal, John M.
2010-01-01
As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.
Cheating and Anti-Cheating in Gossip-Based Protocol: An Experimental Investigation
NASA Astrophysics Data System (ADS)
Xiao, Xin; Shi, Yuanchun; Tang, Yun; Zhang, Nan
During recent years, there has been rapid growth in the deployment of gossip-based protocols in many multicast applications. In a typical gossip-based protocol, each node acts in the dual roles of receiver and sender, independently exchanging data with its neighbors to facilitate scalability and resilience. However, most previous work in this literature has seldom considered the cheating behavior of end users, which is important given that mutual cooperation inherently determines overall system performance. In this paper, we investigate dishonest behaviors in decentralized gossip-based protocols through extensive experimental study. Our original contributions are twofold. In the first part, a cheating study, we analytically discuss two typical cheating strategies, namely intentionally increasing subscription requests and untruthfully calculating forwarding probability, and evaluate their negative impacts. The results indicate that more attention should be paid to defending against cheating behaviors in gossip-based protocols. In the second part, an anti-cheating study, we propose a receiver-driven measurement mechanism, which evaluates individual forwarding traffic from the perspective of receivers and thus identifies cheating nodes with a high incoming/outgoing ratio. Furthermore, we extend our mechanism by introducing a reliability factor to further improve its accuracy. Experiments under various conditions show that it performs quite well in cases of serious cheating and achieves considerable performance in other cases.
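The receiver-driven detection idea above, flagging nodes whose incoming/outgoing traffic ratio is abnormally high, can be sketched simply. The function, parameter names, and threshold below are illustrative assumptions, not the paper's actual mechanism or values:

```python
def flag_cheaters(traffic, ratio_threshold=2.0, reliability=0.8):
    """Receiver-driven check: a node that receives much more than it
    forwards is suspected of under-forwarding (cheating).
    `traffic` maps node -> (units_in, units_out); the returned score
    is the ratio weighted by a reliability factor in [0, 1]."""
    suspects = {}
    for node, (units_in, units_out) in traffic.items():
        ratio = units_in / max(units_out, 1)   # guard against zero forwarding
        if ratio > ratio_threshold:
            # Weight the raw ratio by how much we trust the measurements.
            suspects[node] = ratio * reliability
    return suspects

# n2 forwards far less than it receives; n1 and n3 cooperate normally.
traffic = {"n1": (100, 95), "n2": (100, 20), "n3": (80, 75)}
print(flag_cheaters(traffic))  # only n2 exceeds the ratio threshold
```

In a cooperative gossip protocol a node's forwarded traffic should roughly track what it receives, so a large sustained imbalance is the observable signature that receivers can measure without trusting the sender's self-reports.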
NASA Astrophysics Data System (ADS)
Benesh, N. P.; Plesch, A.; Shaw, J. H.; Frost, E. K.
2007-03-01
Using the discrete element modeling method, we examine the two-dimensional nature of fold development above an anticlinal bend in a blind thrust fault. Our models were composed of numerical disks bonded together to form pregrowth strata overlying a fixed fault surface. This pregrowth package was then driven along the fault surface at a fixed velocity using a vertical backstop. Additionally, new particles were generated and deposited onto the pregrowth strata at a fixed rate to produce sequential growth layers. Models with and without mechanical layering were used, and the process of folding was analyzed in comparison with fold geometries predicted by kinematic fault bend folding as well as those observed in natural settings. Our results show that parallel fault bend folding behavior holds to first order in these models; however, a significant decrease in limb dip is noted for younger growth layers in all models. On the basis of comparisons to natural examples, we believe this deviation from kinematic fault bend folding to be a realistic feature of fold development resulting from an axial zone of finite width produced by materials with inherent mechanical strength. These results have important implications for how growth fold structures are used to constrain slip and paleoearthquake ages above blind thrust faults. Most notably, deformation localized about axial surfaces and structural relief across the fold limb seem to be the most robust observations that can readily constrain fault activity and slip. In contrast, fold limb width and shallow growth layer dips appear more variable and dependent on mechanical properties of the strata.
Fault Mechanics and Post-seismic Deformation at Bam, SE Iran
NASA Astrophysics Data System (ADS)
Wimpenny, S. E.; Copley, A.
2017-12-01
The extent to which aseismic deformation relaxes co-seismic stress changes on a fault zone is fundamental to assessing the future seismic hazard following any earthquake, and to understanding the mechanical behaviour of faults. We used models of stress-driven afterslip and visco-elastic relaxation, in conjunction with a dense time series of post-seismic InSAR measurements, to show that there has been minimal release of co-seismic stress changes through post-seismic deformation following the 2003 Mw 6.6 Bam earthquake. Our modelling indicates that the faults at Bam may remain predominantly locked, and that the co- plus inter-seismically accumulated elastic strain stored down-dip of the 2003 rupture patch may be released in a future Mw 6 earthquake. Modelling also suggests that parts of the fault that experienced post-seismic creep between 2003 and 2009 overlapped with areas that also slipped co-seismically. Our observations and models also provide an opportunity to probe how aseismic fault slip leads to the growth of topography at Bam. We find that reconciling our modelled afterslip distribution with both the geodetic observations and the formation of the sharp step in the local topography at Bam over repeated earthquake cycles requires either (1) far-field tectonic loading equivalent to a 2-10 MPa deviatoric stress acting across the fault system, implying that it supports stresses 60-100 times lower than classical views of static fault strength, or (2) that the fault surface has some form of mechanical anisotropy, potentially related to corrugations on the fault plane, that controls the sense of slip.
NASA Astrophysics Data System (ADS)
Attal, M.; Hobley, D.; Cowie, P. A.; Whittaker, A. C.; Tucker, G. E.; Roberts, G. P.
2008-12-01
Prominent convexities in channel long profiles, or knickzones, are an expected feature of bedrock rivers responding to a change in the rate of base level fall driven by tectonic processes. In response to a change in relative uplift rate, the simple stream power model, which is characterized by a slope exponent equal to unity, predicts that knickzone retreat velocity is independent of uplift rate and that channel slope and uplift rate are linearly related along the reaches that have re-equilibrated with respect to the new uplift condition (i.e., downstream of the profile convexity). However, a threshold for erosion has been shown to introduce non-linearity between slope and uplift rate when associated with stochastic rainfall variability. We present field data regarding the height and retreat rates of knickzones in rivers upstream of active normal faults in the central Apennines, Italy, where excellent constraints exist on the temporal and spatial history of fault movement. The knickzones developed in response to an independently constrained increase in fault throw rate at 0.75 Ma. Channel characteristics and Shields stress values suggest that these rivers lie close to the detachment-limited end-member, but the knickzone retreat velocity (calculated from the time since fault acceleration) has been found to scale systematically with the known fault throw rates, even after accounting for differences in drainage area. In addition, the relationship between measured channel slope and relative uplift rate is non-linear, suggesting that a threshold for erosion might be effective in this setting. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to quantify the effect of such a threshold on river long profile development and knickzone retreat in response to tectonic perturbation.
In particular, we investigate the evolution of three Italian catchments of different sizes, characterized by contrasting degrees of tectonic perturbation, using physically realistic threshold values based on sediment grain-size measurements along the studied rivers. We show that the threshold alone cannot account for field observations of the size, position, and retreat rate of profile convexities, and that other factors neglected by the simple stream power law (e.g., the role of sediments) must be invoked to explain the discrepancy between field observations and modeled topographies.
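For reference, the detachment-limited stream power model discussed above, and a common form of its erosion-threshold extension, can be written as follows (notation is the standard usage in this literature, not taken verbatim from the paper):

```latex
% Simple stream power erosion law: A drainage area, S channel slope,
% K erodibility; the "simple" model in the text takes n = 1.
E = K A^{m} S^{n}

% Threshold version: erosion occurs only when the erosive forcing
% exceeds a critical value \Theta_c, introducing non-linearity
% between channel slope and uplift rate.
E = \max\left( K A^{m} S^{n} - \Theta_c,\; 0 \right)
```

With $n = 1$ and $\Theta_c = 0$, steady-state slope is proportional to uplift rate and knickzone retreat velocity is independent of it; a finite $\Theta_c$ breaks that linearity, which is the effect the CHILD simulations quantify.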
NASA Astrophysics Data System (ADS)
Barcos, Leticia; Balanyá, Juan Carlos; Díaz-Azpiroz, Manuel; Expósito, Inmaculada; Jiménez-Bonilla, Alejandro
2014-05-01
Structural trend line patterns of orogenic arcs depict diverse geometries resulting from multiple factors such as indenter geometry, thickness of pre-deformational sequences and rheology of major decollement surfaces. Within them, salient-recess transitions often result in transpressive deformation bands. The Gibraltar Arc results from the Neogene collision of a composite metamorphic terrane (Alboran Domain, acting as a relative backstop) against two foreland margins (Southiberian and Maghrebian Domains). Within it, the Western Gibraltar Arc (WGA) is a protruded salient, 200 km in length cord, closely coinciding with the apex zone of the major arc. The WGA terminates at two transpressional zones. The main structure in the northern (Betic) end zone is a 70 km long and 4-5 km wide brittle deformation band, the so-called Torcal Shear Zone (TSZ). The TSZ forms a W-E topographic alignment along which the kinematic data show an overall dextral transpression. Within the TSZ strain is highly partitioned into mainly shortening, extensional and strike-slip structures. The strain partitioning is heterogeneous along the band and, accordingly, four distinct sectors can be identified. i) The Peñarrubia-Almargen Transverse Zone (PATZ), located at the W-end of the TSZ presents WNW-ESE folds and dextral faults, together with normal faults that accommodate extension parallel to the dominant structural trend. WNW ESE dextral faults might be related with synthetic splays at the lateral end of the TSZ. ii) The Sierra del Valle de Abdalajís (SVA) is characterized by WSW-ENE trending folds and dextral-reverse faults dipping to SSE, and NW-SE normal faults. The southern boundary of the SVA is a dextral fault zone. iii) The Torcal de Antequera Massif (TAM) presents two types of structural domains. 
Two outer domains located at both margins characterized by E-W trending, dextral strike-slip structures, and an inner domain, characterized by en echelon SE-vergent open folds and reverse shear zones as well as normal faults accommodating fold axis parallel extension. iiii) The Sierra de las Cabras-Camorolos sector, located at the E-end of the TSZ, is divided into two structural domains: a western domain, dominated by N120ºE dextral strike-slip faults, and an eastern domain structured by a WSW-ENE thrust system and normal faults with extension subparallel to the direction of the shortening structures. TSZ displacement at the lateral tip of this sector seems to be mainly accommodated by NNE trending thrusts in the northern TSZ block. The TSZ induces the near vertical extrusion of paleomargin rock units within the deformation band and the dextral deflection of the structural trend shaping the lateral end of the WGA salient. Our results suggest the TSZ started in the Upper Miocene and is still active. Moreover, the TSZ trends oblique to regional transport direction assessed both by field data and modelling. The estimated WNW-ESE far-field velocity vector in the TAM and the SVA points to the importance of the westward drift of the Internal Zones relative to the external wedge and fits well with the overall WGA kinematic frame. Nor the WGA salient neither the TSZ can be fully explained by the single Europe-Africa plate convergence.
Radial basis function neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Dhawan, Atam P.
1993-01-01
This paper presents a brief report on the application of Radial Basis Function Neural Networks (RBFNN) to the prediction of sensor values for fault detection and diagnosis of the Space Shuttle Main Engine (SSME). The location of the Radial Basis Function (RBF) node centers was determined with a K-means clustering algorithm. A neighborhood operation about these center points was used to determine the variances of the individual processing nodes.
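The two-step design described above (K-means to place RBF centers, then a neighborhood operation to set each node's variance) can be sketched as follows. This is a generic sketch: the k-nearest-neighbor variance heuristic is a common choice and is assumed here; the paper's exact neighborhood operation may differ.

```python
import numpy as np

def rbf_design(X, n_centers, k_neighbors=5, n_iters=20, seed=0):
    """Place RBF centers with Lloyd's K-means, then set each node's
    variance from the squared distances to its k nearest training
    points (an assumed, commonly used neighborhood heuristic)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)].copy()
    for _ in range(n_iters):
        # Assign each sample to its nearest center, then recompute means.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(n_centers):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # Variance per center: mean squared distance to its k nearest samples.
    d2 = ((X[:, None] - centers) ** 2).sum(-1)
    variances = np.sort(d2, axis=0)[:k_neighbors].mean(axis=0)
    return centers, variances

# Two synthetic clusters standing in for sensor operating regimes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
centers, variances = rbf_design(X, n_centers=2)
print(centers.round(2), variances.round(4))
```

Setting each Gaussian's width from its local neighborhood keeps the basis functions overlapping just enough to interpolate smoothly between centers, which is why this heuristic is popular for RBFNN sensor-value prediction.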
Impact of network structure on the capacity of wireless multihop ad hoc communication
NASA Astrophysics Data System (ADS)
Krause, Wolfram; Glauche, Ingmar; Sollacher, Rudolf; Greiner, Martin
2004-07-01
As a representative of a complex technological system, the so-called wireless multihop ad hoc communication networks are discussed. They represent an infrastructure-less generalization of today's wireless cellular phone networks. Lacking a central control authority, the ad hoc nodes have to coordinate themselves such that the overall network performs in an optimal way. A performance indicator is the end-to-end throughput capacity. Various models, generating differing ad hoc network structures via differing transmission power assignments, are constructed and characterized. They serve as input for a generic data traffic simulation as well as some semi-analytic estimations. The latter reveal that, due to the most-critical-node effect, the end-to-end throughput capacity sensitively depends on the underlying network structure, resulting in differing scaling laws with respect to network size.
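The most-critical-node effect mentioned above has a simple core: end-to-end throughput is bounded by the relay node that the most flows pass through, since that node's capacity is shared among them. A toy sketch of that estimate (the network, routes, and unit capacity are illustrative assumptions, not the paper's models):

```python
from collections import Counter

def end_to_end_capacity(routes, node_capacity=1.0):
    """Estimate per-flow throughput when the busiest node is the
    bottleneck: its capacity is split among every route crossing it.
    `routes` is a list of node sequences, one per end-to-end flow."""
    load = Counter(node for route in routes for node in route)
    bottleneck, n_flows = load.most_common(1)[0]
    return node_capacity / n_flows, bottleneck

# Three multihop flows in a toy ad hoc network; node 'c' relays all of them.
routes = [("a", "c", "e"), ("b", "c", "e"), ("d", "c", "f")]
per_flow, bottleneck = end_to_end_capacity(routes)
print(bottleneck, per_flow)  # c is the most critical node; each flow gets 1/3
```

Because different transmission power assignments change which nodes become such hubs, the same traffic demand can yield very different throughput scaling laws, which is the sensitivity the semi-analytic estimates expose.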
Design and Manufacture of Structurally Efficient Tapered Struts
NASA Technical Reports Server (NTRS)
Brewster, Jebediah W.
2009-01-01
Composite materials offer the potential of weight savings for numerous spacecraft and aircraft applications. A composite strut is just one integral part of the node-to-node system, and optimization of the strut and node assembly is needed to take full advantage of the benefits of composite materials. Lockheed Martin designed and manufactured a very lightweight, one-piece composite tapered strut that is fully representative of a full-scale flight article. In addition, the team designed and built a prototype of the node and end fitting system that will effectively integrate and work with the full-scale flight articles.
A Fresh Look at Longitudinal Standing Waves on a Spring
NASA Astrophysics Data System (ADS)
Rutherford, Casey
2013-01-01
Transverse standing waves produced on a string, as shown in Fig. 1, are a common demonstration of standing wave patterns that have nodes at both ends. Longitudinal standing waves can be produced on a helical spring that is mounted vertically and attached to a speaker, as shown in Fig. 2, and used to produce both node-node (NN) and node-antinode (NA) standing waves. The resonant frequencies of the two standing wave patterns can be related using theory that is accessible to students in algebra-based introductory physics courses, and actual measurements show good agreement with theoretical predictions.
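The introductory-physics theory referred to above gives the familiar resonance conditions: node-node patterns at f_n = n·v/2L and node-antinode patterns at f_n = (2n-1)·v/4L. A small sketch for the idealized uniform-medium case (a real hanging spring has a varying wave speed, so this is an approximation; the numeric values are illustrative only):

```python
def resonant_frequencies(v, L, pattern, n_modes=4):
    """Resonant frequencies (Hz) of a medium of length L (m) with wave
    speed v (m/s), for the idealized uniform-medium case.
    'NN' (node at both ends):      f_n = n * v / (2L),  n = 1, 2, 3, ...
    'NA' (node at one end only):   f_n = (2n - 1) * v / (4L), odd multiples."""
    if pattern == "NN":
        return [n * v / (2 * L) for n in range(1, n_modes + 1)]
    if pattern == "NA":
        return [(2 * n - 1) * v / (4 * L) for n in range(1, n_modes + 1)]
    raise ValueError("pattern must be 'NN' or 'NA'")

# Example: wave speed 10 m/s on a 1 m spring.
print(resonant_frequencies(10, 1.0, "NN"))  # [5.0, 10.0, 15.0, 20.0]
print(resonant_frequencies(10, 1.0, "NA"))  # [2.5, 7.5, 12.5, 17.5]
```

Note that the NA fundamental is half the NN fundamental and that NA resonances skip the even multiples, which is the measurable distinction between the two patterns.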
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications, including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high-resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared, from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
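Dynamic time warping, as applied above, compares two signals while allowing local stretching and compression of the time axis, so signatures that occur at slightly different speeds still match. A textbook O(n·m) DTW sketch with a nearest-template classification step; the fault templates and signal below are toy stand-ins, not the paper's simulated bearing data:

```python
def dtw_distance(a, b):
    """Classic dynamic programming DTW distance between two 1-D
    sequences; smaller means more similar up to time warping."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Classify a measured snippet against (toy) simulated fault templates.
templates = {"outer_race": [0, 1, 0, 1, 0], "inner_race": [0, 2, 0, 2, 0]}
signal = [0, 1, 1, 0, 1, 0]   # same shape as outer_race, slightly stretched
label = min(templates, key=lambda k: dtw_distance(signal, templates[k]))
print(label)  # outer_race
```

The appeal for this application is that DTW needs no tuned parameters or learned features: simulated fault signatures can serve directly as templates, sidestepping the lack of historical failure data.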
Models of recurrent strike-slip earthquake cycles and the state of crustal stress
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.
1991-01-01
Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.