Sample records for grid failure detection

  1. Reliable Detection and Smart Deletion of Malassez Counting Chamber Grid in Microscopic White Light Images for Microbiological Applications.

    PubMed

    Denimal, Emmanuel; Marin, Ambroise; Guyot, Stéphane; Journaux, Ludovic; Molin, Paul

    2015-08-01

    In biology, hemocytometers such as Malassez slides are widely used and are effective tools for counting cells manually. In a previous work, a robust algorithm was developed for grid extraction in Malassez slide images. This algorithm was evaluated on a set of 135 images and grids were accurately detected in most cases, but there remained failures for the most difficult images. In this work, we present an optimization of this algorithm that allows for 100% grid detection and a 25% improvement in grid positioning accuracy. These improvements make the algorithm fully reliable for grid detection. This optimization also allows complete erasing of the grid without altering the cells, which eases their segmentation.

  2. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.

  3. Prediction and Control of Network Cascade: Example of Power Grid or Networking Adaptability from WMD Disruption and Cascading Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael

    2012-07-24

    The goal of the DTRA project is to develop a mathematical framework that will provide the fundamental understanding of network survivability, algorithms for detecting/inferring pre-cursors of abnormal network behaviors, and methods for network adaptability and self-healing from cascading failures.

  4. Robustness analysis of complex networks with power decentralization strategy via flow-sensitive centrality against cascading failures

    NASA Astrophysics Data System (ADS)

    Guo, Wenzhang; Wang, Hao; Wu, Zhengping

    2018-03-01

    Most existing cascading failure mitigation strategies for power grids based on complex network theory ignore the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of the generation nodes. The simulation results of the IEEE bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can remarkably enhance the robustness of the network. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of a network with an obvious small-world effect depends more on the contribution of generation nodes detected by community structure; otherwise, the contribution of generation nodes with an important influence on power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.

  5. Association rule mining on grid monitoring data to detect error sources

    NASA Astrophysics Data System (ADS)

    Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin

    2010-04-01

    Error handling is a crucial task in an infrastructure as complex as a grid. There are several monitoring tools in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault that caused the job failure. Human time and knowledge are required to manually trace errors back to the real underlying fault. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the grid components' behavior by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically and this information - expressed by association rules - is visualized in a web interface. This work achieves a decrease in time for fault recovery and yields an improvement of a grid's reliability.
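    As a toy illustration of the association-rule idea described above (not the authors' actual mining system), one can compute single-item rules of the form "attribute = value → job failed" together with their confidence and support. All job records, attribute names, and thresholds below are invented for the sketch.

```python
# Toy grid-job monitoring records (hypothetical attributes and values).
jobs = [
    {"site": "CERN", "queue": "long", "status": "failed"},
    {"site": "CERN", "queue": "long", "status": "failed"},
    {"site": "CERN", "queue": "short", "status": "ok"},
    {"site": "FNAL", "queue": "long", "status": "ok"},
    {"site": "FNAL", "queue": "short", "status": "ok"},
]

def rules(jobs, target=("status", "failed"), min_conf=0.6):
    """Return single-item rules (antecedent, confidence, support) that
    point at the target item with at least min_conf confidence."""
    out = []
    items = {(k, v) for job in jobs for k, v in job.items() if k != target[0]}
    for key, val in sorted(items):
        match = [j for j in jobs if j.get(key) == val]
        failed = sum(1 for j in match if j.get(target[0]) == target[1])
        conf = failed / len(match)
        if conf >= min_conf:
            out.append(((key, val), conf, len(match) / len(jobs)))
    return out

for antecedent, conf, supp in rules(jobs):
    print(antecedent, "-> failed", f"conf={conf:.2f}", f"supp={supp:.2f}")
```

    A real deployment would mine multi-item antecedents (e.g. with Apriori or FP-growth) over far larger logs; this sketch only shows the support/confidence bookkeeping.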

  6. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. With optical-network-based distributed computing systems extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analysing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when an application described by a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability, improve network resource utilization, and realize a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
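    The core quantity in such an analysis can be sketched in a few lines: if each task of a DAG-structured application fails independently with a known probability, the application failure probability is one minus the product of the task survival probabilities, and a backup replica (one simple backup strategy) squares a task's failure probability. The task names and probabilities below are invented for illustration; the paper's actual model and MDSA algorithm are more elaborate.

```python
# Hypothetical sketch: application failure probability for a DAG of tasks,
# assuming independent task failures and that the application fails if any
# task fails. A backup replica on an identical resource means both copies
# must fail, i.e. the task's effective failure probability becomes p*p.

def app_failure_prob(task_p, backed_up=frozenset()):
    survive = 1.0
    for task, p in task_p.items():
        eff = p * p if task in backed_up else p
        survive *= 1.0 - eff
    return 1.0 - survive

tasks = {"stage_in": 0.01, "compute": 0.05, "stage_out": 0.01}
print(app_failure_prob(tasks))                         # no backups
print(app_failure_prob(tasks, backed_up={"compute"}))  # replicate riskiest task
```

    Comparing the two calls shows why backup placement is an optimization problem: replicating the riskiest task buys the largest drop in overall failure probability.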

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Traditionally, power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two-part paper, inspired by the proliferation of metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient (polynomial in time), which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  8. A decision support system using combined-classifier for high-speed data stream in smart grid

    NASA Astrophysics Data System (ADS)

    Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun

    2016-11-01

    Large volumes of high-speed streaming data are generated by big power grids continuously. In order to detect and avoid power grid failure, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among all the decision-making algorithms, the incremental decision tree is the most widely used one. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier can return more accurate answers than other existing ones.

  9. The impact of the topology on cascading failures in a power grid model

    NASA Astrophysics Data System (ADS)

    Koç, Yakup; Warnier, Martijn; Mieghem, Piet Van; Kooij, Robert E.; Brazier, Frances M. T.

    2014-05-01

    Cascading failures are one of the main reasons for large-scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly assessed by purely topological approaches that fail to capture the essence of electric power flow. This paper proposes a metric, the effective graph resistance, to relate the topology of a power grid to its robustness against cascading failures caused by deliberate attacks, while also taking fundamental characteristics of the electric power grid into account, such as power flow allocation according to Kirchhoff's laws. Experimental verification on synthetic power systems shows that the proposed metric reflects grid robustness accurately. The proposed metric is used to optimize a grid topology for a higher level of robustness. To demonstrate its applicability, the metric is applied to the IEEE 118-bus power system to improve its robustness against cascading failures.
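    The effective graph resistance mentioned in the abstract has a compact spectral form: for a connected graph of N nodes it equals N times the sum of the reciprocals of the nonzero Laplacian eigenvalues. A minimal sketch (the triangle graph here is a stand-in for illustration, not one of the paper's test systems):

```python
import numpy as np

def effective_graph_resistance(adj):
    """R_G = N * sum(1 / mu_i) over the nonzero Laplacian eigenvalues mu_i;
    assumes the (undirected) graph is connected, so only mu_1 = 0."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    mu = np.sort(np.linalg.eigvalsh(L))
    return A.shape[0] * float(np.sum(1.0 / mu[1:]))

# Triangle graph: each pairwise effective resistance is 1 || 2 = 2/3,
# so R_G = 3 * (2/3) = 2.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(effective_graph_resistance(triangle))  # ≈ 2.0
```

    A lower value indicates more parallel paths between node pairs, which is why the metric serves as a robustness proxy.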

  10. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, their interactions and relationships, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. Based on percolation theory, we also study the effect of cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
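    The giant-component calculation underlying such percolation arguments can be illustrated with a small Monte Carlo sketch (a single Erdős–Rényi layer rather than the paper's interdependent model; the network size and mean degree are invented): remove a growing fraction of nodes and watch the largest connected component collapse once the surviving mean degree drops below one.

```python
import random
from collections import deque

def giant_component_fraction(n, edges, removed):
    """Fraction of all n nodes in the largest connected component
    after the nodes in `removed` are deleted."""
    alive = set(range(n)) - removed
    nbrs = {v: [] for v in alive}
    for u, v in edges:
        if u in alive and v in alive:
            nbrs[u].append(v)
            nbrs[v].append(u)
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        seen.add(s)
        q, size = deque([s]), 0
        while q:  # BFS over one component
            u = q.popleft()
            size += 1
            for w in nbrs[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, size)
    return best / n

random.seed(0)
n = 1000
p = 4.0 / n  # Erdos-Renyi graph with mean degree ~4
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < p]
for frac in (0.0, 0.5, 0.8):
    removed = set(random.sample(range(n), int(frac * n)))
    print(frac, round(giant_component_fraction(n, edges, removed), 3))
```

    For mean degree 4, random removal of a fraction 1 - 1/4 = 0.75 of the nodes is the classical percolation threshold, so the run at 0.8 sits in the collapsed phase.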

  11. Cascading failures in ac electricity grids.

    PubMed

    Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan

    2016-09-01

    Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
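    The domino mechanism itself can be caricatured in a few lines. The sketch below is a deliberately crude load-redistribution model, not the paper's dynamic AC model: each line has a capacity equal to a tolerance factor alpha times its initial load, and the load of failed lines is shared equally among the survivors. All loads and tolerance values are invented.

```python
# Crude cascading-failure sketch: shed load of failed lines is split
# equally over surviving lines; any line pushed over its capacity
# (alpha * initial load) fails in the next round.

def cascade(loads, alpha, initial_failure):
    cap = [alpha * l for l in loads]
    failed = {initial_failure}
    while True:
        alive = [i for i in range(len(loads)) if i not in failed]
        if not alive:
            break
        shed = sum(loads[i] for i in failed)
        new = [i for i in alive if loads[i] + shed / len(alive) > cap[i]]
        if not new:
            break
        failed.update(new)
    return failed

loads = [1.0, 1.0, 1.0, 2.0, 0.5]
print(sorted(cascade(loads, alpha=2.0, initial_failure=0)))  # contained: [0]
print(sorted(cascade(loads, alpha=1.3, initial_failure=0)))  # full collapse
```

    The two runs show the threshold sensitivity the abstract describes: the same initial failure is either absorbed or takes down every line, depending only on the tolerance parameter.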

  12. A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution

    NASA Astrophysics Data System (ADS)

    Musani, Aatif

    The subject of this thesis is distribution level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a specialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled, therefore all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy consumption error. This leads to one of the main focuses of the thesis, which is to identify and study component failures of the designed control system. Moreover, the H-infinity robust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attack, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes and the application of the robust control technique. The EMS was represented by a linear z-domain model. The transfer function between the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection methodologies were implemented and a voting policy was used to declare bad data. The running mean and standard deviation analysis method proves to be the best method to detect bad data. An H-infinity robust control technique was applied for the first time to design a discrete EMS controller for the FREEDM system.
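    Of the three bad data detection methodologies, the winning one lends itself to a compact sketch. A hedged illustration of a running mean and standard deviation check on a price signal (window length, threshold, and prices are invented; the thesis pairs this detector with two others and a voting policy):

```python
from collections import deque
from math import sqrt

def bad_data_flags(prices, window=5, k=3.0):
    """Flag a sample as bad data when it lies more than k standard
    deviations from the mean of the previous `window` samples."""
    flags, recent = [], deque(maxlen=window)
    for p in prices:
        if len(recent) == window:
            m = sum(recent) / window
            sd = sqrt(sum((x - m) ** 2 for x in recent) / window)
            flags.append(sd > 0 and abs(p - m) > k * sd)
        else:
            flags.append(False)  # not enough history yet
        recent.append(p)
    return flags

prices = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 55.0, 10.2]
print(bad_data_flags(prices))  # only the 55.0 spike is flagged
```

    Note that once a bad sample enters the window it inflates the standard deviation, which is one reason a single detector is paired with others under a voting policy.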

  13. Real-time estimation of ionospheric delay using GPS measurements

    NASA Astrophysics Data System (ADS)

    Lin, Lao-Sheng

    1997-12-01

    When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual- frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. A 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. 
This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
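    The exponential weighting modification can be sketched directly (a hypothetical stand-in for the UNSW grid-based algorithm, with invented coordinates, delays, and e-folding distance): the delay at a grid point is estimated as a weighted average of reference-station delays, with weights decaying exponentially with distance.

```python
from math import exp, hypot

def grid_estimate(grid_pt, stations, tau=100.0):
    """Ionospheric delay at a grid point as an exponentially weighted
    average of reference-station delays; tau is the assumed e-folding
    distance of the weighting function (same units as the coordinates)."""
    num = den = 0.0
    for (x, y), delay in stations:
        w = exp(-hypot(x - grid_pt[0], y - grid_pt[1]) / tau)
        num += w * delay
        den += w
    return num / den

stations = [((0, 0), 10.0), ((200, 0), 14.0)]
print(grid_estimate((50, 0), stations))  # pulled toward the nearer station
```

    Compared with an inverse-distance weight, the exponential form discounts distant reference stations more aggressively, which is the design choice the thesis argues for.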

  14. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  15. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  16. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  17. Geomagnetism applications

    USGS Publications Warehouse

    Campbell, Wallace H.

    1995-01-01

    The social uses of geomagnetism include the physics of the space environment, satellite damage, pipeline corrosion, electric power-grid failure, communication interference, global positioning disruption, mineral-resource detection, interpretation of the Earth's formation and structure, navigation, weather, and magnetoreception in organisms. The need for continuing observations of the geomagnetic field, together with careful archiving of these records and mechanisms for dissemination of these data, is emphasized.

  18. Local vs. global redundancy - trade-offs between resilience against cascading failures and frequency stability

    NASA Astrophysics Data System (ADS)

    Plietzsch, A.; Schultz, P.; Heitzig, J.; Kurths, J.

    2016-05-01

    When designing or extending electricity grids, both frequency stability and resilience against cascading failures have to be considered, amongst other aspects of energy security and economics such as construction costs due to total line length. Here, we compare an improved simulation model for cascading failures with state-of-the-art simulation models for short-term grid dynamics. Random ensembles of realistic power grid topologies are generated using a recent model that allows for a tuning of global vs. local redundancy. The former can be measured by the algebraic connectivity of the network, whereas the latter can be measured by the network's transitivity. We show that, while the frequency stability of an electricity grid benefits from a global form of redundancy, resilience against cascading failures rather requires a more local form of redundancy, and we further analyse the corresponding trade-off.
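    Both redundancy measures named above are cheap to compute. A minimal sketch contrasting them on two toy four-node topologies (a ring, which has global but no local redundancy, and a complete graph, which has both; real grid ensembles are of course far larger):

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest Laplacian eigenvalue: the global redundancy measure."""
    L = np.diag(A.sum(axis=1)) - A
    return float(np.sort(np.linalg.eigvalsh(L))[1])

def transitivity(A):
    """3 * (number of triangles) / (number of connected triples):
    the local redundancy measure."""
    triangles = np.trace(A @ A @ A) / 6.0
    deg = A.sum(axis=1)
    triples = float(np.sum(deg * (deg - 1)) / 2.0)
    return 3.0 * triangles / triples if triples else 0.0

ring = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
full = np.ones((4, 4)) - np.eye(4)
print(algebraic_connectivity(ring), transitivity(ring))  # ≈ 2.0, 0.0
print(algebraic_connectivity(full), transitivity(full))  # ≈ 4.0, 1.0
```

    The ring scores zero on transitivity because it contains no triangles, even though the cycle gives every node pair two disjoint paths; the paper's trade-off is between raising the first number and raising the second under a fixed line budget.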

  19. Brief analysis of Jiangsu grid security and stability based on multi-infeed DC index in power system

    NASA Astrophysics Data System (ADS)

    Zhang, Wenjia; Wang, Quanquan; Ge, Yi; Huang, Junhui; Chen, Zhengfang

    2018-02-01

    The impact of multi-infeed HVDC on the security and stability of the Jiangsu power grid has gradually increased. In this paper, an appraisal method for multi-infeed HVDC power grid security and stability is proposed, based on the Multi-Infeed Effective Short Circuit Ratio, the Multi-Infeed Interaction Factor and the Commutation Failure Immunity Index. These indices are adopted in security and stability simulation of the Jiangsu multi-infeed HVDC system. The simulation results indicate that the Jiangsu power grid operates with a strong DC system; it has a high level of security and stability and meets safe-operation requirements. The Jinpin-Suzhou DC system is located at the receiving end with huge capacity, which can easily lead to commutation failure of the transmission line. To resolve this problem, dynamic reactive power compensation can be applied in the power grid near the Jinpin-Suzhou DC system. Simulation results show that this method is feasible for mitigating commutation failure.

  20. Feature combination analysis in smart grid based using SOM for Sudan national grid

    NASA Astrophysics Data System (ADS)

    Bohari, Z. H.; Yusof, M. A. M.; Jali, M. H.; Sulaima, M. F.; Nasir, M. N. M.

    2015-12-01

    In the investigation of power grid security, cascading failure in multi-contingency situations has been a challenge because of its topological complexity and computational expense. Both system analyses and load-ranking methods have their limits. In this project, based on clustering with Self-Organizing Maps (SOM), an integrated methodology combining spatial feature (distance)-based clustering with electrical attributes (load) is used to evaluate the vulnerability and cascading impact of various component sets in the power grid. Using the clustering result from SOM, sets of heavily loaded initial victims are selected to perform attack schemes and assess the subsequent cascading effect of their failures; this SOM-based approach effectively identifies more vulnerable sets of substations than conventional load ranking and other clustering strategies. The robustness of power grids is a central topic in the design of the so-called "smart grid". In this paper, we analyze measures of the importance of the nodes in a power grid under cascading failure. With these efforts, we can distinguish the most vulnerable nodes and protect them, improving the safety of the power grid, and we can also assess whether a structure is suitable for power grids.

  1. Detection of Local Temperature Change on HTS Cables via Time-Frequency Domain Reflectometry

    NASA Astrophysics Data System (ADS)

    Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Ji, Gyeong Hwan; Sohn, Songho; Park, Kijun; Shin, Yong-June

    2017-07-01

    High temperature superconducting (HTS) cables are drawing attention as transmission and distribution cables in the future grid, and related research on HTS cables has been conducted actively. As HTS cables have come to the demonstration stage, failures of cooling systems inducing the quench phenomenon in HTS cables have become significant. Several diagnostic techniques for HTS cables have been developed, but there are still some limitations in the experimental setups. In this paper, a non-destructive diagnostic technique for the detection of the local temperature change point is proposed. Also, a simulation model of HTS cables with a local temperature change point is suggested to verify the proposed diagnosis. The performance of the diagnosis is checked by comparative analysis between the proposed simulation results and experimental results from a real-world HTS cable. It is expected that the suggested simulation model and diagnosis will contribute to the commercialization of HTS cables in the power grid.

  2. Design of power cable grounding wire anti-theft monitoring system

    NASA Astrophysics Data System (ADS)

    An, Xisheng; Lu, Peng; Wei, Niansheng; Hong, Gang

    2018-01-01

    In order to prevent the serious consequences of power grid failure caused by theft of power cable grounding wires, this paper presents a GPRS-based anti-theft monitoring system for power cable grounding wires, which includes a camera module, a sensor module, a micro-processing system module, a data monitoring center module and a mobile terminal module. Our design utilizes two kinds of methods, detection and comprehensive image reporting; it can effectively solve the problem of theft of power cable grounding wires and boxes, follow up grounding wire theft events in a timely manner, prevent faults on high-voltage transmission lines, and improve the reliability and safe operation of the power grid.

  3. Dynamically induced cascading failures in power grids.

    PubMed

    Schäfer, Benjamin; Witthaut, Dirk; Timme, Marc; Latora, Vito

    2018-05-17

    Reliable functioning of infrastructure networks is essential for our modern society. Cascading failures are the cause of most large-scale network outages. Although cascading failures often exhibit dynamical transients, the modeling of cascades has so far mainly focused on the analysis of sequences of steady states. In this article, we focus on electrical transmission networks and introduce a framework that takes into account both the event-based nature of cascades and the essentials of the network dynamics. We find that transients of the order of seconds in the flows of a power grid play a crucial role in the emergence of collective behaviors. We finally propose a forecasting method to identify critical lines and components in advance or during operation. Overall, our work highlights the relevance of dynamically induced failures on the synchronization dynamics of national power grids of different European countries and provides methods to predict and model cascading failures.
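    The seconds-scale transients referred to above arise from rotating-machine (swing equation) dynamics. A minimal sketch, not the paper's model: one generator and one consumer as second-order Kuramoto oscillators, whose phase difference relaxes to the stable fixed point sin(δ*) = P/K when the line is strong enough (K > P). Power, coupling, and damping values are invented.

```python
from math import asin, sin

def phase_difference(K, P=1.0, damping=0.5, dt=1e-3, t_end=30.0):
    """Euler-integrate the phase difference d between a generator (+P) and
    a consumer (-P): d'' = 2P - damping*d' - 2K*sin(d)."""
    d, w = 0.0, 0.0  # phase difference and its rate of change
    for _ in range(int(t_end / dt)):
        w += dt * (2.0 * P - damping * w - 2.0 * K * sin(d))
        d += dt * w
    return d, w

d, w = phase_difference(K=2.0)
print(round(d, 3), round(w, 3))   # settles near the fixed point, w ~ 0
print(round(asin(1.0 / 2.0), 3))  # analytic fixed point: 0.524
```

    When K drops below P the fixed point vanishes and the phase difference accelerates without bound, which is the dynamical loss-of-synchrony mechanism behind the cascades studied in the paper.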

  4. Security Policies for Mitigating the Risk of Load Altering Attacks on Smart Grid Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryutov, Tatyana; AlMajali, Anas; Neuman, Clifford

    2015-04-01

    While demand response programs implement energy efficiency and power quality objectives, they bring potential security threats to the Smart Grid. The ability to influence load in a system enables attackers to cause system failures and impacts the quality and integrity of power delivered to customers. This paper presents a security mechanism to monitor and control load according to a set of security policies during normal system operation. The mechanism monitors, detects, and responds to load altering attacks. We examined the security requirements of Smart Grid stakeholders and constructed a set of load control policies enforced by the mechanism. We implemented a proof of concept prototype and tested it using a simulation environment. By enforcing the proposed policies in this prototype, the system is maintained in a safe state in the presence of load drop attacks.

  5. Small vulnerable sets determine large network cascades in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. Using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  6. Small vulnerable sets determine large network cascades in power grids

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-11-17

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. Using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  7. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuffner, Francis K.; Williams, Tess L.; Schneider, Kevin P.

    Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation’s electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration afterward, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and gaining improved insight into operations so that failures or mis-operations are detected before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected most of the time, and implementing and operating a grid-connected microgrid differs substantially from operating an islanded one. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements to simulation tools to capture the characteristics of microgrids and show how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation’s electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded here to use advanced metering infrastructure and other distribution-level measurements to create a three-phase, unbalanced distribution state estimation approach. With distribution-level state estimation, the grid can be operated more efficiently, and outages or equipment failures can be caught faster, improving the overall resilience and reliability of the grid.

  8. Applicability of out-of-pile fretting wear tests to in-reactor fretting wear-induced failure time prediction

    NASA Astrophysics Data System (ADS)

    Kim, Kyu-Tae

    2013-02-01

    In order to investigate whether grid-to-rod fretting wear-induced fuel failure will occur over the fuel lifetime for newly developed spacer grid spring designs, out-of-pile fretting wear tests with one or two fuel assemblies are performed. In this study, out-of-pile fretting wear tests were performed to compare the potential for wear-induced fuel failure in two newly developed Korean PWR spacer grid designs. Lasting 20 days, the tests simulated maximum grid-to-rod gap conditions and the worst flow-induced vibration effects that might take place over the fuel lifetime. The fuel rod perforation times calculated from the out-of-pile tests are greater than 1933 days for 2 μm oxidized fuel rods with a 100 μm grid-to-rod gap, whereas those estimated from an in-reactor fretting wear failure database are in the range of 60 to 100 days. This large discrepancy may arise from irradiation-induced changes in the cladding oxide microstructure on the one hand, and from a temperature-gradient-induced hydrogen content profile across the cladding metal region on the other, both of which may embrittle the grid-contacting cladding oxide and metal regions during reactor operation. A three-phase grid-to-rod fretting wear model is proposed to simulate in-reactor fretting wear progress into the cladding, considering the microstructure changes of the cladding oxide and the hydrogen content profile across the cladding metal region combined with the temperature gradient. The out-of-pile tests are therefore not directly applicable to predicting in-reactor fretting wear-induced cladding perforations; they can only be used to evaluate the relative wear resistance of one grid design against another.

  9. Reliable Broadcast under Cascading Failures in Interdependent Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Lee, Sangkeun; Chinthavali, Supriya

    Reliable broadcast is an essential tool to disseminate information among a set of nodes in the presence of failures. We present a novel study of reliable broadcast in interdependent networks, in which the failures in one network may cascade to another network. In particular, we focus on the interdependency between the communication network and power grid network, where the power grid depends on the signals from the communication network for control and the communication network depends on the grid for power. In this paper, we build a resilient solution to handle crash failures in the communication network that may cause cascading failures and may even partition the network. In order to guarantee that all the correct nodes deliver the messages, we use soft links, which are inactive backup links to non-neighboring nodes that are only active when failures occur. At the core of our work is a fully distributed algorithm for the nodes to predict and collect the information of cascading failures so that soft links can be maintained to correct nodes prior to the failures. In the presence of failures, soft links are activated to guarantee message delivery and new soft links are built accordingly for long term robustness. Our evaluation results show that the algorithm achieves low packet drop rate and handles cascading failures with little overhead.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobson, Ian; Hiskens, Ian; Linderoth, Jeffrey

    Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: their simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and the cascading stall of induction machines.

  11. Interrelation of structure and operational states in cascading failure of overloading lines in power grids

    NASA Astrophysics Data System (ADS)

    Xue, Fei; Bompard, Ettore; Huang, Tao; Jiang, Lin; Lu, Shaofeng; Zhu, Huaiying

    2017-09-01

    As the modern power system develops into a more intelligent and efficient version, i.e. the smart grid, or into the central backbone of an energy internet for free energy interactions, security concerns related to cascading failures have been raised in view of their potentially catastrophic consequences. Research on topological analysis based on complex networks has contributed greatly to revealing structural vulnerabilities of power grids, including cascading failure analysis. However, the existing literature, built on inappropriate modeling assumptions, still cannot distinguish the effects of structure from those of the operational state, and so gives little meaningful guidance for system operation. This paper reveals the interrelation between network structure and operational states in cascading failure and gives a quantitative evaluation integrating both perspectives. For structural analysis, cascading paths are identified by extended betweenness and quantitatively described by cascading drop and cascading gradient. The operational state along a cascading path is then described by its loading level. The risk of cascading failure along a specific cascading path can thus be quantitatively evaluated from these two factors, and the maximum cascading gradient over all possible cascading paths serves as an overall metric of the entire power grid's susceptibility to cascading failure. The proposed method is tested and verified on the IEEE 30-bus and IEEE 118-bus systems; the simulation evidence suggests that the proposed model can identify structural causes of cascading failure and is promising for guiding the protection of system operation in the future.
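
    The path metrics above can be sketched in a few lines of Python. This is an illustrative reconstruction under assumed definitions — the paper's exact formulas for extended betweenness, cascading drop, and cascading gradient are not reproduced here, so the normalization and function names are hypothetical:

```python
# Illustrative sketch (assumed definitions, not the paper's exact formulas):
# a cascading path is a sequence of lines ordered by extended betweenness.

def cascading_drop(b_from, b_to):
    """Drop in extended betweenness between consecutive lines on a path."""
    return b_from - b_to

def cascading_gradient(b_from, b_to):
    """Drop normalized by the upstream line's betweenness (assumed form)."""
    return cascading_drop(b_from, b_to) / b_from if b_from else 0.0

def path_risk(betweenness, loading):
    """Combine structure (steepest gradient on the path) with the
    operational state (highest loading level on the path)."""
    grads = [cascading_gradient(a, b)
             for a, b in zip(betweenness, betweenness[1:])]
    return max(grads, default=0.0) * max(loading)

# A 3-line path with decreasing betweenness and moderate loading levels.
risk = path_risk([10.0, 6.0, 3.0], [0.6, 0.8, 0.5])
```

    Under these assumptions, a path whose betweenness falls off steeply while its lines are heavily loaded scores the highest risk, matching the paper's idea of combining structure with operational state.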

  12. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in a Grid is no longer an exception but a regularly occurring event, as resources are increasingly used by the scientific community to solve computationally intensive problems that typically run for days or even months. It is therefore essential that these long-running applications tolerate failures and avoid recomputation from scratch after a resource failure, in order to satisfy the user’s Quality of Service (QoS) requirements. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs execute successfully even when resource failures occur. The technique employed in this paper uses the resource failure rate together with a checkpoint-based rollback recovery strategy. Checkpointing reduces the amount of work lost upon system failure by saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user’s QoS requirements over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was measured in terms of three main scheduling metrics: makespan, throughput, and average turnaround time. PMID:28545075
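
    The value of checkpoint-based rollback recovery can be illustrated with a toy simulation; the failure model and step counts below are invented for illustration and are not the paper's algorithm:

```python
def run_job(total_steps, fail_at, checkpoint_every):
    """Simulate a job that fails once after `fail_at` executed steps.
    Returns total steps spent, counting redone work: on failure, progress
    rolls back to the last checkpoint (or to zero if none was taken)."""
    done = 0        # completed steps of the job itself
    executed = 0    # total effort spent, including redone work
    last_ckpt = 0
    failed = False
    while done < total_steps:
        executed += 1
        done += 1
        if done % checkpoint_every == 0:
            last_ckpt = done            # save state
        if not failed and executed == fail_at:
            failed = True
            done = last_ckpt            # roll back
    return executed

# No checkpoints (interval longer than the job) vs. a checkpoint every 10 steps.
no_ckpt = run_job(100, fail_at=65, checkpoint_every=1000)
with_ckpt = run_job(100, fail_at=65, checkpoint_every=10)
```

    With a checkpoint every 10 steps, only the work since the last checkpoint is redone after the failure, instead of restarting the whole job from scratch.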

  13. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform for data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling the development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.

  14. Multiple perspective vulnerability analysis of the power network

    NASA Astrophysics Data System (ADS)

    Wang, Shuliang; Zhang, Jianhua; Duan, Na

    2018-02-01

    To understand the vulnerability of the power network from multiple perspectives, this paper proposes multi-angle and multi-dimensional vulnerability analysis as well as community-based vulnerability analysis. Taking the central China power grid as an example, the correlation of different vulnerability models is discussed. Then, the vulnerabilities produced by different vulnerability metrics under the given vulnerability models and failure scenarios are analyzed. Finally, applying a community detection approach, critical areas of the central China power grid are identified, and vulnerable and robust communities from both topological and functional perspectives are obtained and analyzed. The approach introduced in this paper can help decision makers develop optimal protection strategies and will also be useful for multi-perspective vulnerability analysis of other infrastructure systems.

  15. The judgement of simultaneous commutation failure in HVDC about hierarchical connection to AC grid

    NASA Astrophysics Data System (ADS)

    Li, Ming; Song, Xinli; Huang, Daoshan; Liu, Wenzhuo; Zhao, Shutao; Ye, Xiaohui; Meng, Hang

    2017-09-01

    Hierarchical connection of the inverter side to the AC grid has been adopted in several UHVDC projects. This paper introduces the structure of this hierarchical access connection mode and compares it with the traditional one using the CIGRE HVDC benchmark case. A criterion for simultaneous commutation failure, based on identical valve currents, is then deduced. To verify the accuracy of the criterion, voltage drops were simulated in the East China power grid using PSD-BPA (Bonneville Power Administration), confirming the correctness of the formula.

  16. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.
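
    A minimal preferential-attachment generator shows the kind of scale-free topology the Barabási-Albert model produces; this sketch covers only graph growth and its hub-dominated degree distribution, not the reliability index or failure-propagation model used in the study:

```python
import random

def barabasi_albert(n, m, seed=42):
    """Grow a scale-free graph: each new node attaches m edges to existing
    nodes with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    pool = [v for e in edges for v in e]  # degree-weighted sampling pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:            # m distinct preferential targets
            chosen.add(rng.choice(pool))
        for t in chosen:
            edges.append((new, t))
            pool += [new, t]
    return edges

edges = barabasi_albert(200, 2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
max_deg = max(degree.values())
mean_deg = sum(degree.values()) / len(degree)  # hubs sit far above the mean
```

    The heavy-tailed degree distribution that emerges is what makes such networks robust to random failures yet fragile to targeted hub failures, the property exploited in this reliability analysis.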

  17. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.

  18. Statistical analysis of cascading failures in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael; Pfitzner, Rene; Turitsyn, Konstantin

    2010-12-01

    We introduce a new microscopic model of cascading failures in transmission power grids. This model accounts for the automatic response of the grid to load fluctuations that take place on the scale of minutes, when optimum power flow adjustments and load shedding controls are unavailable. We describe extreme events, caused by load fluctuations, that lead to cascading failures of loads, generators, and lines. Our model is quasi-static, with causal, discrete-time, sequential resolution of individual failures. The model, in its simplest realization based on the direct current (DC) description of the power flow problem, is tested on three standard IEEE systems consisting of 30, 39, and 118 buses. Our statistical analysis suggests a straightforward classification of cascading and islanding phases in terms of the ratios between the average numbers of removed loads, generators, and links. The analysis also demonstrates sensitivity to variations in line capacities. Future research challenges in modeling and control of cascading outages over real-world power networks are discussed.
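
    A drastically simplified overload-redistribution cascade conveys the flavor of such models; note that this sketch splits a failed line's load evenly among survivors rather than recomputing a DC power flow at each step, so it illustrates only the cascade mechanism, not the authors' model:

```python
def cascade(line_loads, capacity, trigger):
    """Toy overload cascade: a failed line's load is split evenly among all
    surviving lines; any line pushed over its capacity fails in the next
    round. (A real model would re-solve the DC power flow each round.)"""
    loads = dict(line_loads)
    failed = {trigger}
    frontier = [trigger]
    while frontier:
        extra = sum(loads[f] for f in frontier)
        for f in frontier:
            del loads[f]
        if not loads:
            break
        share = extra / len(loads)        # even redistribution (assumption)
        for line in loads:
            loads[line] += share
        frontier = [l for l in loads if loads[l] > capacity[l]]
        failed.update(frontier)
    return failed

line_loads = {"a": 0.9, "b": 0.6, "c": 0.5, "d": 0.2}
# Tight capacities let the trigger cascade system-wide; ample ones contain it.
blackout = cascade(line_loads, {k: 0.85 for k in line_loads}, trigger="a")
contained = cascade(line_loads, {k: 2.00 for k in line_loads}, trigger="a")
```

    Even this toy version reproduces the qualitative sensitivity to line capacities that the paper's statistical analysis reports.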

  19. Reliability of lead-calcium automotive batteries in practical operations

    NASA Astrophysics Data System (ADS)

    Burghoff, H.-G.; Richter, G.

    In order to reach a statistically sound conclusion on the suitability of maintenance-free, lead-calcium automotive batteries for practical operations, the failure behaviour of such batteries has been observed in a large-scale experiment carried out by Mercedes Benz AG and Robert Bosch GmbH in different climatic zones of North America. The results show that the average failure behaviour is not significantly different to that of batteries from other manufacturers using other grid alloy systems and operated under otherwise identical conditions; the cumulative failure probability after 30 months is 17%. The principal causes of failure are: (i) early failure: transport damage, filling errors, and short-circuits due to the outer plates being pushed up during plate-block assembly (manufacturing defect); (ii) statistical failure: short-circuits due to growth of positive plates caused by a reduction in the mechanical strength of the cast positive grid as a result of corrosion; (iii) late failure due to an increased occurrence of short-circuits, especially frequent in the outer cell facing the engine of the vehicle (subjected to high temperatures), and to defects caused by capacity decay. As expected, the batteries exhibit extremely low water loss in each cell. The poor cyclical performance of stationary batteries, caused by acid stratification and well known from laboratory tests, has no detrimental effect on the batteries in use. After a thorough analysis of the corrosion process, the battery manufacturer changed the grid alloy and the method of its production, and thus limited the corrosion problem with cast lead-calcium grids and with it the possibility of plate growth. The mathematical methods used in this study, and in particular the characteristic factors derived from them, have proven useful for assessing the suitability of automotive batteries.

  20. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), which provide synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and offers a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as a result of error, the possibility of failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  1. Cyber-Physical System Security of Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagle, Jeffery E.

    2012-01-31

    Abstract—This panel presentation will provide perspectives of cyber-physical system security of smart grids. As smart grid technologies are deployed, the interconnected nature of these systems is becoming more prevalent and more complex, and the cyber component of this cyber-physical system is increasing in importance. Studying system behavior in the face of failures (e.g., cyber attacks) allows a characterization of the systems’ response to failure scenarios, loss of communications, and other changes in system environment (such as the need for emergent updates and rapid reconfiguration). The impact of such failures on the availability of the system can be assessed and mitigation strategies considered. Scenarios associated with confidentiality, integrity, and availability are considered. The cyber security implications associated with the American Recovery and Reinvestment Act of 2009 in the United States are discussed.

  2. Metrics for Assessment of Smart Grid Data Integrity Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annarita Giani; Miles McQueen; Russell Bent

    2012-07-01

    There is an emerging consensus that the nation’s electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on remote measurements, transmitted over legacy data networks to system operators who make critical decisions based on the available data. Data integrity attacks are a class of cyber attacks that involve compromising information processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper focuses on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks in which the compromised data are consistent with the physics of power flow and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures, including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.

  3. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, a distributed, load-balanced intrusion detection system can be realized. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then presents an application of grid computing characteristics in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it gives a distributed intrusion detection system based on the grid security model that reduces processing delay while maintaining detection rates.

  4. Spatial correlation analysis of cascading failures: Congestions and Blackouts

    PubMed Central

    Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo

    2014-01-01

    Cascading failures have become major threats to network robustness due to their potentially catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need for protection and mitigation strategies in networks such as power grids and transportation systems, the propagation behavior of cascading failures is essentially unknown. Here we find, by analyzing our collected data, that jams in city traffic and faults in the power grid are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in daily traffic that the correlation length increases dramatically and reaches a maximum when the morning or evening rush hour is approaching. Our study can inform all efforts toward actively improving system resilience, ranging from the evaluation of design schemes and the development of protection strategies to the implementation of mitigation programs. PMID:24946927
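
    The notion of correlations decaying with distance can be illustrated with synthetic signals: here two sites share a common driver whose weight decays exponentially with their separation. This is an assumed toy model, not the traffic or grid data collected in the study:

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(0)
driver = [rng.random() for _ in range(500)]  # shared regional fluctuation

def pair_signals(distance, decay=0.1):
    """Two sites mix the shared driver with local noise; the shared weight
    decays exponentially with their separation (toy assumption)."""
    w = math.exp(-decay * distance)
    s1 = [w * d + (1 - w) * rng.random() for d in driver]
    s2 = [w * d + (1 - w) * rng.random() for d in driver]
    return s1, s2

near = pearson(*pair_signals(distance=1))   # strongly correlated
far = pearson(*pair_signals(distance=30))   # nearly independent
```

    Estimating such a correlation-versus-distance curve from real failure data, and tracking how its correlation length evolves over the day, is the core of the analysis described in the abstract.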

  5. Optimizing Data Management in Grid Environments

    NASA Astrophysics Data System (ADS)

    Zissimos, Antonis; Doka, Katerina; Chazapis, Antony; Tsoumakos, Dimitrios; Koziris, Nectarios

    Grids currently serve as platforms for numerous scientific as well as business applications that generate and access vast amounts of data. In this paper, we address the need for efficient, scalable and robust data management in Grid environments. We propose a fully decentralized and adaptive mechanism comprising two components: a Distributed Replica Location Service (DRLS) and a data transfer mechanism called GridTorrent. Both adopt Peer-to-Peer techniques in order to overcome performance bottlenecks and single points of failure. On one hand, DRLS ensures resilience by relying on a Byzantine-tolerant protocol and is able to handle massive concurrent requests even during node churn. On the other hand, GridTorrent allows for maximum bandwidth utilization through collaborative sharing among the various data providers and consumers. The proposed integrated architecture is completely backwards-compatible with already deployed Grids. To demonstrate these points, experiments have been conducted in LAN as well as WAN environments under various workloads. The evaluation shows that our scheme vastly outperforms the conventional mechanisms in both efficiency (up to 10 times faster) and robustness in case of failures and flash crowd instances.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unneberg, L.

    The main features of the 16 core grids (top guides) designed by ABB ATOM AB are briefly described and the evolution of the design is discussed. One important characteristic of the first nine grids is the existence of bolts securing guide bars to the core grid plates. These bolts are made of precipitation-hardened or solution-annealed stainless steel. During operation, bolts in all nine grids have cracked. The failure analyses indicate that intergranular stress corrosion cracking (IGSCC), possibly accelerated by crevice conditions and/or irradiation, was the cause of failure. Fast neutron fluences approaching or exceeding the levels considered critical for irradiation-assisted stress corrosion cracking (IASCC) will be reached in a few cases only. Temporary measures were taken immediately after the discovery of the cracking. For five of the nine reactors affected, it was decided to replace the complete grids. Two of these replacements have been successfully carried out to date. IASCC as a potential future problem is discussed, and it is pointed out that, during their lifetimes, the ABB ATOM core grids will be exposed to fast neutron fluences high enough to cause some concern.

  7. Comprehensive risk assessment method of catastrophic accident based on complex network properties

    NASA Astrophysics Data System (ADS)

    Cui, Zhen; Pang, Jun; Shen, Xiaohong

    2017-09-01

    On the macro level, the structural properties of the network, together with the electrical characteristics of its components on the micro level, determine the risk of cascading failures. And since a cascading failure is a dynamically developing process, not only the direct risk but also the potential risk should be considered. In this paper, we comprehensively consider the direct and potential risks of failures based on uncertain risk analysis theory and connection number theory, quantify the uncertain correlation using node degree and node clustering coefficient, and establish a comprehensive risk indicator of failure. The proposed method is demonstrated by simulation on a network modeled after an actual power grid, verifying its rationality.
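
    An illustrative composite indicator built from node degree and local clustering coefficient might look as follows; the weighting `alpha` and the use of (1 - clustering) are assumptions for illustration only — the paper's connection-number formulation is not reproduced here:

```python
def clustering(adj, v):
    """Local clustering coefficient of node v in an undirected graph given
    as a dict of neighbor sets."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
    return 2 * links / (k * (k - 1))

def risk_indicator(adj, v, alpha=0.5):
    """Hypothetical composite: high degree raises risk (a failure touches
    more lines); high clustering lowers it (neighbors can reroute around v).
    Both the weighting alpha and this functional form are assumptions."""
    max_deg = max(len(nbrs) for nbrs in adj.values())
    return alpha * len(adj[v]) / max_deg + (1 - alpha) * (1 - clustering(adj, v))

# Tiny example grid: hub "a" feeds b, c, d; b and c are also linked.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
risks = {v: risk_indicator(adj, v) for v in adj}  # hub "a" scores highest
```

    In this toy example the high-degree hub with few alternative paths around it receives the largest risk score, which is the qualitative behavior a degree-and-clustering-based indicator is meant to capture.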

  8. Cascades in interdependent flow networks

    NASA Astrophysics Data System (ADS)

    Scala, Antonio; De Sanctis Lucentini, Pier Giorgio; Caldarelli, Guido; D'Agostino, Gregorio

    2016-06-01

    In this manuscript, we investigate the abrupt breakdown behavior of coupled distribution grids under load growth. This scenario mimics the ever-increasing customer demand and the foreseen introduction of energy hubs interconnecting the different energy vectors. We extend an analytical model of cascading behavior due to line overloads to the case of interdependent networks and find evidence of first-order transitions due to the long-range nature of the flows. Our results indicate that the foreseen increase in the couplings between the grids has two competing effects: on the one hand, it enlarges the safety region in which grids can operate without suffering systemic failures; on the other hand, it increases the possibility of a joint systems' failure.

  9. Mitigation of commutation failures in LCC-HVDC systems based on superconducting fault current limiters

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Lee, Ho-Yun; Lim, Sung-Woo; Lee, Bang-Wook

    2016-11-01

    Commutation failures in line-commutated converter (LCC) based HVDC systems cause severe damage to the entire power grid. In an LCC-HVDC system, thyristor valves are turned on by a firing signal, but turn-off is governed by the AC voltage applied from the surrounding network. When a fault occurs in the AC system, turn-off of the thyristor valves becomes unavailable due to the voltage collapse at the point of common coupling (PCC), which causes commutation failure in the LCC-HVDC link. A commutation failure can lead to interruption of power transfer, DC voltage drop, and severe voltage fluctuation in the AC system; in severe situations, it might cause the protection system to block the valves. In this paper, as a solution to prevent voltage collapse at the PCC and to limit the fault current, an application study of a resistive superconducting fault current limiter (SFCL) in an LCC-HVDC grid was performed with mathematical and simulation analyses. The simulation model was designed in Matlab/Simulink based on the Haenam-Jeju HVDC power grid in Korea, which includes a conventional AC system, an onshore wind farm, and a resistive SFCL model. The results show that applying an SFCL to an LCC-HVDC system is an effective way to mitigate commutation failure. The process of determining the optimum quench resistance of the SFCL that enables recovery from commutation failure was then investigated in depth.

  10. Characterization of the High-Speed-Stage Bearing Skidding of Wind Turbine Gearboxes Induced by Dynamic Electricity Grid Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helsen, Jan; Guillaume, Patrick; Guo, Yi

    Bearing behavior is an important factor for wind turbine drivetrain reliability. Extreme loads and dynamic excitations pose challenges to the bearing design and therefore its performance. Excessive skidding of the bearing rollers should be avoided because it can cause scuffing failures. Excitations coming from the wind and the electricity grid can subject the drivetrain to fluctuating torque and nontorque loads. Wind-induced excitations have been investigated predominantly in the literature. However, modern wind turbines are increasingly subjected to grid-induced loads because of stricter electricity grid regulations; for example, during fault-ride-through events, turbines are required to stay connected for a longer period of time during the grid failure. This work investigates the influence of electrically induced excitations on the skidding behavior of the tapered roller bearings on the high-speed stage of a wind turbine gearbox. This skidding behavior during dynamic events is described as a potential bearing failure initiator by many researchers; however, only limited full-scale dynamic testing is documented. Therefore, a dedicated grid-loss-type event is defined in the paper and conducted in a dynamometer test on a full-scale wind turbine nacelle. During the event, a complete electricity grid failure is simulated while the turbine is at rated speed and predefined torque levels. Particular focus is on the characterization of the high-speed shaft tapered roller bearing slip behavior. Strain-gauge bridges in grooves along the circumference of the outer ring are used to characterize the bearing load zone in detail. It is shown that roller slip can be induced during the torque reversals of the transient event, indicating that the applied load case can exceed the preload of the tapered roller bearing. Furthermore, the relation between the applied torque and the skidding level is studied.
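Roller skidding of the kind characterized here is commonly quantified as the deficit of the measured cage speed relative to the kinematic (no-slip) epicyclic cage speed. A sketch using the standard rolling-bearing kinematics for a rotating inner ring and stationary outer ring (the geometry and speeds are made-up numbers, not the test article's):

```python
import math

def kinematic_cage_speed(shaft_rpm, d_roller, d_pitch, contact_angle_deg):
    """No-slip cage speed for a bearing with rotating inner ring and
    stationary outer ring: n_c = n_i / 2 * (1 - (d/dm) * cos(alpha))."""
    gamma = (d_roller / d_pitch) * math.cos(math.radians(contact_angle_deg))
    return 0.5 * shaft_rpm * (1.0 - gamma)

def slip_percent(measured_cage_rpm, shaft_rpm, d_roller, d_pitch, angle_deg):
    """Skidding expressed as the percentage cage-speed deficit."""
    ideal = kinematic_cage_speed(shaft_rpm, d_roller, d_pitch, angle_deg)
    return 100.0 * (1.0 - measured_cage_rpm / ideal)
```

With an illustrative 20 mm roller on a 100 mm pitch diameter at zero contact angle, 1800 rpm shaft speed gives a 720 rpm no-slip cage speed; a measured 648 rpm would correspond to 10% slip.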

  11. Characterization of the High-Speed-Stage Bearing Skidding of Wind Turbine Gearboxes Induced by Dynamic Electricity Grid Events: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helsen, Jan; Guillaume, Patrick; Guo, Yi

    Bearing behavior is an important factor for wind turbine drivetrain reliability. Extreme loads and dynamic excitations pose challenges to the bearing design and therefore its performance. Excessive skidding of the bearing rollers should be avoided because it can cause scuffing failures. Excitations coming from the wind and the electricity grid can subject the drivetrain to fluctuating torque and nontorque loads. Wind-induced excitations have been investigated predominantly in the literature. However, modern wind turbines are increasingly subjected to grid-induced loads because of stricter electricity grid regulations; for example, during fault-ride-through events, turbines are required to stay connected for a longer period of time during the grid failure. This work investigates the influence of electrically induced excitations on the skidding behavior of the tapered roller bearings on the high-speed stage of a wind turbine gearbox. This skidding behavior during dynamic events is described as a potential bearing failure initiator by many researchers; however, only limited full-scale dynamic testing is documented. Therefore, a dedicated grid-loss-type event is defined in the paper and conducted in a dynamometer test on a full-scale wind turbine nacelle. During the event, a complete electricity grid failure is simulated while the turbine is at rated speed and predefined torque levels. Particular focus is on the characterization of the high-speed shaft tapered roller bearing slip behavior. Strain-gauge bridges in grooves along the circumference of the outer ring are used to characterize the bearing load zone in detail. It is shown that roller slip can be induced during the torque reversals of the transient event, indicating that the applied load case can exceed the preload of the tapered roller bearing. Furthermore, the relation between the applied torque and the skidding level is studied.

  12. Reliability considerations in the placement of control system components

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1983-01-01

    This paper presents a methodology, along with applications to a grid-type structure, for incorporating reliability considerations into the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid, and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components to the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.
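The expected-value criterion described here can be sketched numerically: each failure state's integral-square performance cost is weighted by its occurrence probability, with the per-actuator failure probability derived from the mission-life-to-MTTF ratio. A minimal sketch (the costs, actuator count, and exponential reliability model are illustrative assumptions, not the paper's data):

```python
import math

def state_probability(n_failed, n_working, p_fail):
    """Probability of one specific failure state, assuming independent
    actuators each failing with probability p_fail over the mission."""
    return (p_fail ** n_failed) * ((1.0 - p_fail) ** n_working)

def expected_criterion(costs, p_fail, n_actuators):
    """E[J] = sum over failure states of P(state) * J(state).
    `costs` maps number-of-failed-actuators -> J for one such state;
    states with equal failure counts share a cost (a simplification)."""
    total = 0.0
    for k, j_cost in costs.items():
        n_states = math.comb(n_actuators, k)
        total += n_states * state_probability(k, n_actuators - k, p_fail) * j_cost
    return total

# Failure probability over mission life T with exponential reliability:
# p = 1 - exp(-T / MTTF).  T/MTTF = 0.1 here (illustrative).
p = 1.0 - math.exp(-0.1)
expected_J = expected_criterion({0: 1.0, 1: 3.0, 2: 20.0}, p, 2)
```

Sweeping T/MTTF and re-optimizing locations reproduces the paper's qualitative point: the best placement for a short mission differs from that for a long one.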

  13. Power system voltage stability and agent based distribution automation in smart grid

    NASA Astrophysics Data System (ADS)

    Nguyen, Cuong Phuc

    2011-12-01

    Our interconnected electric power system is presently facing many challenges that it was not originally designed and engineered to handle. Increased inter-area power transfers, aging infrastructure, and old technologies have caused many problems, including voltage instability, widespread blackouts, and slow control response. These problems have created an urgent need to transform the present electric power system into the highly stable, reliable, efficient, and self-healing electric power system of the future, which has been termed the "smart grid". This dissertation begins with an investigation of voltage stability in bulk transmission networks. A new continuation power flow tool for studying the impacts of generator merit-order-based dispatch on inter-area transfer capability and static voltage stability is presented. The load demands are represented by lumped load models on the transmission system. While this representation is acceptable in traditional power system analysis, it may not be valid in the future smart grid, where the distribution system will be integrated with intelligent and fast control capabilities to mitigate voltage problems before they propagate into the entire system. Therefore, before analyzing the operation of the whole smart grid, it is important to understand the distribution system first. The second part of this dissertation presents a new platform for studying and testing emerging technologies in advanced Distribution Automation (DA) within smart grids. Due to its key benefits over the traditional centralized approach, namely flexible deployment, scalability, and avoidance of a single point of failure, a distributed approach is employed to design and develop all elements of the platform. A multi-agent system (MAS), which has the three key characteristics of autonomy, local view, and decentralization, is selected to implement the advanced DA functions. 
The intelligent agents utilize a communication network for cooperation and negotiation. Communication latency is modeled using a user-defined probability density function. Failure-tolerant communication strategies are developed for agent communications. Major elements of advanced DA are developed in a completely distributed way and successfully tested for several IEEE standard systems, including: Fault Detection, Location, Isolation, and Service Restoration (FLISR); Coordination of Distributed Energy Storage Systems (DES); Distributed Power Flow (DPF); Volt-VAR Control (VVC); and Loss Reduction (LR).
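Modeling message latency with a user-defined probability density, as the platform does for agent communications, can be sketched with a shifted-exponential draw (the floor value and mean below are hypothetical choices, not the dissertation's density):

```python
import random

def sample_latency(base_ms=5.0, mean_extra_ms=10.0, rng=random):
    """Draw one agent-message latency: a fixed propagation floor plus an
    exponentially distributed queueing delay (an illustrative density;
    any user-defined distribution could be substituted here)."""
    return base_ms + rng.expovariate(1.0 / mean_extra_ms)

random.seed(0)
delays = [sample_latency() for _ in range(1000)]
```

Injecting such delays into the simulated message channel lets failure-tolerant communication strategies (timeouts, retries, fallback peers) be exercised before deployment.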

  14. Design and Implementation of Real-Time Off-Grid Detection Tool Based on FNET/GridEye

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Jiahui; Zhang, Ye; Liu, Yilu

    2014-01-01

    Real-time situational awareness tools are of critical importance to power system operators, especially during emergencies. The availability of electric power has become a linchpin of most post-disaster response efforts, as it is the primary dependency for public- and private-sector services as well as individuals. Knowledge of the scope and extent of the facilities impacted, as well as the duration of their dependence on backup power, enables emergency response officials to plan for contingencies and provide a better overall response. Based on real-time data acquired by Frequency Disturbance Recorders (FDRs) deployed in the North American power grid, a real-time detection method is proposed. This method monitors critical electrical loads and detects the transition of these loads from an on-grid state, where the loads are fed by the power grid, to an off-grid state, where the loads are fed by an uninterruptible power supply (UPS) or a backup generation system. The details of the proposed detection algorithm are presented, and case studies and off-grid detection scenarios are provided to verify its effectiveness and robustness. The algorithm has been implemented on the Grid Solutions Framework (GSF) and has effectively detected several off-grid situations.
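The on-grid/off-grid transition described above is detectable because an islanded load's frequency drifts away from the interconnection-wide frequency observed by the rest of the FDR network. A toy sketch of that comparison (the 0.05 Hz threshold and median aggregation are illustrative assumptions, not the FNET/GridEye algorithm):

```python
def is_off_grid(local_freq_hz, grid_freqs_hz, threshold_hz=0.05):
    """Flag a monitored load as off-grid when its measured frequency
    deviates from the network-wide median frequency by more than
    threshold_hz. Both the threshold and the median aggregation are
    illustrative choices."""
    ordered = sorted(grid_freqs_hz)
    grid_median = ordered[len(ordered) // 2]
    return abs(local_freq_hz - grid_median) > threshold_hz
```

A backup generator free-running at, say, 60.35 Hz stands out immediately against FDRs elsewhere reporting near-nominal frequency.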

  15. Fatigue and Fracture Characterization of GlasGrid® Reinforced Asphalt Concrete Pavement

    NASA Astrophysics Data System (ADS)

    Safavizadeh, Seyed Amirshayan

    The purpose of this research is to develop an experimental and analytical framework for describing, modeling, and predicting the reflective cracking patterns and crack growth rates in GlasGrid®-reinforced asphalt pavements. To fulfill this objective, the effects of different interfacial conditions (mixture and tack coat type, and grid opening size) on reflective cracking-related failure mechanisms and the fatigue and fracture characteristics of fiberglass grid-reinforced asphalt concrete beams were studied by means of four- and three-point bending notched beam fatigue tests (NBFTs) and cyclic and monotonic interface shear tests. The digital image correlation (DIC) technique was utilized to obtain the displacement and strain contours of specimen surfaces during each test. The DIC analysis results were used to develop crack tip detection methods, which were in turn used to determine interfacial crack lengths in the shear tests, and vertical and horizontal (interfacial) crack lengths in the notched beam fatigue tests. Linear elastic fracture mechanics (LEFM) principles were applied to the crack length data to describe the crack growth. In the case of the NBFTs, a finite element (FE) code was developed and used to model each beam at different stages of testing and to back-calculate the stress intensity factors (SIFs) for the vertical and horizontal cracks. The local effect of reinforcement on the stiffness of the system at a vertical crack-interface intersection, i.e., the resistance of the grid system to the deflection differential at the joint/crack (hereinafter called joint stiffness), was determined for GlasGrid-reinforced asphalt concrete beams by implementing a joint stiffness parameter in the finite element code. The strain-level dependency of the fatigue and fracture characteristics of the GlasGrid-reinforced beams was studied by performing four-point bending notched beam fatigue tests at strain levels of 600, 750, and 900 microstrain. 
These beam tests were conducted at 15°C, 20°C, and 23°C, with the main focus on the characteristics at 20°C. The results obtained at the different temperatures were used to investigate the effects of temperature on the reflective cracking performance of the grid-reinforced beam specimens. The temperature tests were also used to investigate the validity of the time-temperature superposition (t-TS) principle in shear and in the beam fatigue performance of the grid-reinforced specimens. The NBFT results suggest that different interlayer conditions do not reflect a unique failure mechanism; thus, to predict and model the performance of grid-reinforced pavement, all the mechanisms involved in weakening its structural integrity, including damage within the asphalt layers and along the interface, must be considered. The shear and beam fatigue test results suggest that the grid opening size, interfacial bond quality, and mixture type play important roles in the reflective cracking performance of GlasGrid-reinforced asphalt pavements. According to the NBFT results, GlasGrid reinforcement retards reflective crack growth by stiffening the composite system and introducing a joint stiffness parameter. The results also show that the higher the bond strength and interlayer stiffness values, the higher the joint stiffness and retardation effects. The t-TS studies confirmed the validity of this principle in terms of the reflective crack growth of the grid-reinforced beam specimens and the shear modulus and shear strength of the grid-reinforced interfaces.
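The LEFM description of crack growth referred to above is commonly a Paris-type power law, da/dN = C(ΔK)^m, integrated cycle by cycle from the back-calculated stress intensity factors. A sketch (C, m, the stress range, and the geometry factor are illustrative values, not the fitted GlasGrid parameters):

```python
import math

def grow_crack(a0, cycles, c=1e-11, m=3.0, delta_sigma=2.0, geometry=1.12):
    """Integrate the Paris law da/dN = C * (dK)^m cycle by cycle, with
    dK = Y * d_sigma * sqrt(pi * a). Units and constants are illustrative."""
    a = a0
    history = [a]
    for _ in range(cycles):
        delta_k = geometry * delta_sigma * math.sqrt(math.pi * a)
        a += c * delta_k ** m          # crack extension this cycle
        history.append(a)
    return history

crack = grow_crack(a0=1.0, cycles=1000)
```

Because ΔK grows with crack length, the growth rate accelerates as the crack extends; reinforcement effectively lowers the SIF seen by the vertical crack, which is how the joint stiffness parameter enters the FE back-calculation.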

  16. Reliability analysis for the smart grid : from cyber control and communication to physical manifestations of failure.

    DOT National Transportation Integrated Search

    2010-01-01

    The Smart Grid is a cyber-physical system comprised of physical components, such as transmission lines and generators, and a : network of embedded systems deployed for their cyber control. Our objective is to qualitatively and quantitatively analyze ...

  17. Adaptive Connectivity Restoration from Node Failure(s) in Wireless Sensor Networks

    PubMed Central

    Wang, Huaiyuan; Ding, Xu; Huang, Cheng; Wu, Xiaobei

    2016-01-01

    Recently, there has been growing interest in the applications of wireless sensor networks (WSNs). In some applications, such as battlefield reconnaissance, a set of sensor nodes is deployed to collectively survey an area of interest and/or perform specific surveillance tasks. Due to harsh deployment environments and a limited energy supply, nodes may fail, which impacts the connectivity of the whole network. Since the failure of a single cut-vertex node destroys connectivity and divides the network into disjoint blocks, most existing studies focus on the problem of single node failure. However, the failure of multiple nodes would be a disaster for the whole network and must be repaired effectively. Only a few studies have addressed the problem of multiple cut-vertex failures, which is a special case of multiple node failures. Therefore, this paper proposes a comprehensive solution to the problems of both single and multiple node failure. The Collaborative Single Node Failure Restoration algorithm (CSFR) solves the single node failure problem using cooperative communication alone, while CSFR-M, an extension of CSFR, handles it more effectively with node motion. Moreover, the Collaborative Connectivity Restoration Algorithm (CCRA) is proposed on the basis of cooperative communication and node maneuverability to restore network connectivity after multiple nodes fail. CSFR-M and CCRA are reactive methods that initiate connectivity restoration after detecting the node failure(s). To further minimize energy dissipation, CCRA simplifies the recovery process by gridding, and the distance an individual node must travel during recovery is reduced by choosing the nearest suitable candidates. Finally, extensive simulations validate the performance of CSFR, CSFR-M, and CCRA. PMID:27690030
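A cut-vertex, as used above, is a node whose removal disconnects the network; such nodes can be found with the classic depth-first-search low-link computation. A generic sketch of that standard algorithm (not the CSFR/CCRA restoration procedure itself):

```python
def articulation_points(adj):
    """Return the cut-vertices of an undirected graph given as
    {node: set_of_neighbors}, using the DFS discovery/low-link method."""
    disc, low, cut = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # no path from v's subtree bypasses u -> u is a cut-vertex
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)
        if parent is None and children > 1:    # root with >1 DFS subtree
            cut.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return cut
```

Proactively knowing which nodes are cut-vertices is what lets restoration schemes pre-position candidate replacements near the nodes whose loss would partition the network.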

  18. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce the cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences from other techniques used in systems health management and philosophies of prognostics used in other domains will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of the constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning, will be explained. A discussion of performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.
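The "time to impending failure" that prognostics quantifies can be sketched as extrapolating a fitted degradation trend to a failure threshold (the linear health-index model below is an illustrative simplification, not a method from the talk):

```python
def remaining_useful_life(times, health, threshold):
    """Fit a least-squares line to a degrading health index and return
    the predicted time remaining until it crosses `threshold`.
    A linear degradation model is an illustrative simplification."""
    n = len(times)
    t_mean = sum(times) / n
    h_mean = sum(health) / n
    slope = sum((t - t_mean) * (h - h_mean) for t, h in zip(times, health)) \
            / sum((t - t_mean) ** 2 for t in times)
    if slope >= 0:
        return float("inf")           # no degradation trend detected
    intercept = h_mean - slope * t_mean
    t_fail = (threshold - intercept) / slope
    return t_fail - times[-1]         # time remaining from the last sample

rul = remaining_useful_life([0.0, 1.0, 2.0], [1.0, 0.9, 0.8], threshold=0.5)
```

Real prognostic algorithms replace the straight line with physics-based or data-driven degradation models and report uncertainty around the predicted crossing time, but the threshold-crossing structure is the same.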

  19. SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.

    PubMed

    Yuan, Y; Duan, J; Popple, R; Brezovich, I

    2012-06-01

    To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung, and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by the percentage error for ion chamber dose and by the γ > 1 failure rate in gamma analysis (3%/3 mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm was closer to the ion chamber measurement than that calculated with the PB algorithm at a grid size of 2.5 mm, though all calculated ion chamber doses were within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ > 1 failure rate was significantly reduced (to within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ > 1 failure rates were typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in the film dosimetry of 11 patients and in the ion chamber measurements of 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ > 1 failure rate within 5% can be achieved with AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
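The gamma analysis used above (3%/3 mm) combines dose difference and distance-to-agreement into one index per point, with γ > 1 counted as a failure. A 1-D sketch on uniformly spaced points (real QA software works on 2-D/3-D dose grids with interpolation; this is only the core formula):

```python
import math

def gamma_index(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """1-D gamma per reference point: the minimum over measured points of
    sqrt((dose_diff/dose_tol)^2 + (distance/dist_tol)^2).
    dose_tol is a fraction of the maximum reference dose (global gamma)."""
    d_max = max(ref)
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_meas in enumerate(meas):
            dd = (d_meas - d_ref) / (dose_tol * d_max)
            dr = (j - i) * spacing_mm / dist_tol_mm
            best = min(best, math.hypot(dd, dr))
        gammas.append(best)
    return gammas

def failure_rate(gammas):
    """Fraction of points with gamma > 1 (the quantity reported above)."""
    return sum(g > 1.0 for g in gammas) / len(gammas)
```

The distance term is what makes gamma forgiving in steep dose gradients, where a small spatial shift produces a large local dose difference.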

  20. Improving battery safety by early detection of internal shorting with a bifunctional separator

    NASA Astrophysics Data System (ADS)

    Wu, Hui; Zhuo, Denys; Kong, Desheng; Cui, Yi

    2014-10-01

    Lithium-based rechargeable batteries have been widely used in portable electronics and show great promise for emerging applications in transportation and wind-solar-grid energy storage, although their safety remains a practical concern. Failures in the form of fire and explosion can be initiated by internal short circuits associated with lithium dendrite formation during cycling. Here we report a new strategy for improving safety by designing a smart battery that allows internal battery health to be monitored in situ. Specifically, we achieve early detection of lithium dendrites inside batteries through a bifunctional separator, which offers a third sensing terminal in addition to the cathode and anode. The sensing terminal provides unique signals in the form of a pronounced voltage change, indicating imminent penetration of dendrites through the separator. This detection mechanism is highly sensitive, accurate and activated well in advance of shorting and can be applied to many types of batteries for improved safety.

  1. Multiplex Superconducting Transmission Line for green power consolidation on a Smart Grid

    NASA Astrophysics Data System (ADS)

    McIntyre, P.; Gerity, J.; Kellams, J.; Sattarov, A.

    2017-12-01

    A multiplex superconducting transmission line (MSTL) is being developed for applications requiring interconnection of multi-MW electric power generation among a number of locations. MSTL consists of a cluster of many 2- or 3-conductor transmission lines within a coaxial cryostat envelope. Each line operates autonomously, so that the interconnection of multiple power loads can be done in a failure-tolerant network. Specifics of the electrical, mechanical, and cryogenic design are presented. The consolidation of transformation and conditioning and the failure-tolerant interconnects have the potential to offer important benefit for the green energy components of a Smart Grid.

  2. Research on Ultrasonic Flaw Detection of Steel Weld in Spatial Grid Structure

    NASA Astrophysics Data System (ADS)

    Du, Tao; Sun, Jiandong; Fu, Shengguang; Zhang, Changquan; Gao, Qing

    2017-06-01

    The welding quality of spatial grid members is an important link in the quality control of steel structures. This paper analyzes, from both theoretical and practical viewpoints, why the weld seams of the small-bore, thin-walled pipes used in grid structures are difficult to inspect by ultrasonic testing. A series of feasible detection methods based on improved probes and operating approaches is proposed and verified on project cases. Over the years, the spatial grid structure has been widely used in engineering by virtue of several outstanding characteristics, such as a rational structural form, standardized members, excellent spatial integrity, and quick installation. The wide application of spatial grid structures places higher requirements on their nondestructive testing. The implementation of the new Code for Construction Quality Acceptance of Steel Structure Work GB50205-2001 strengthens the site inspection of steel structures, especially ultrasonic flaw detection of steel welds. Inspection of spatial grid members built from small-bore, thin-walled pipes is difficult because of the irregular sound pressure in the near-field region of the sound field, the beam spread caused by the small pipe diameter, and the resulting loss of sensitivity; selecting the correct detection conditions is therefore essential. A spatial grid structure of welded spherical and bolted spherical joints is a highly redundant axial-force structure of member bars connected at nodes; the welds join the sealing plates or cone heads of the members and connect the members to the bolted spherical nodes. Clearly, ensuring the quality of these welded connections is critical to the quality of the overall grid structure. However, the complexity of the weld geometry and the limitations of ultrasonic detection methods cause many difficulties in inspection. No satisfactory results can be obtained with conventional detection techniques, so special approaches must be used.
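The near-field effect blamed above for the inspection difficulty can be estimated from the standard near-field length of a circular probe, N = D²f/(4c): echoes from reflectors closer than N sit in the region of irregular sound pressure. A sketch (the probe frequency, crystal diameter, and wave velocity are illustrative values, not ones from the paper):

```python
def near_field_length(crystal_diameter_mm, freq_mhz, velocity_mm_per_us):
    """Near-field length N = D^2 / (4 * wavelength) for a circular probe,
    with wavelength = c / f. Units: mm, MHz, mm/us -> result in mm."""
    wavelength_mm = velocity_mm_per_us / freq_mhz
    return crystal_diameter_mm ** 2 / (4.0 * wavelength_mm)

# Illustrative: 5 MHz probe, 6 mm crystal, shear waves in steel (~3.2 mm/us)
n_mm = near_field_length(6.0, 5.0, 3.2)
```

When the wall thickness and sound path of a small-bore pipe weld fall inside N, amplitude-based sizing becomes unreliable, which motivates the modified probes and techniques the paper proposes.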

  3. Effect of molding conditions on fracture mechanisms and stiffness of a composite of grid structure

    NASA Astrophysics Data System (ADS)

    Nikolaev, V. P.; Pichugin, V. S.; Korobeinikov, A. G.

    1999-01-01

    Methods were elaborated for determining a complex of stiffness and deformability characteristics of a composite with a rhomb-type grid structure. Rhomb-type specimens were used for testing the ribs of the structure in tension, compression, and bending, and the nodal points in shear in the plane of the ribs. The effect of additional tensioning of the ribs preceding the curing of the binder was investigated (ten tensioning levels ranging from 8 to 70 N/bundle, with a linear density of 390 tex, were applied). In testing epoxy-carbon specimens (UKN-5000+EHD-MK) in compression and tension, the failure mode changed depending on the tensioning level, i.e., the presence or absence of delamination and the appearance of "dry" fibers were detected. The dependences of the mechanical properties on tensioning exhibited pronounced extrema. The methods elaborated allow us to investigate the effect of other molding parameters, as well as the conditions and nature of loading, on the mechanical characteristics of composites.

  4. Single and double grid long-range alpha detectors

    DOEpatents

    MacArthur, Duncan W.; Allander, Krag S.

    1993-01-01

    Alpha particle detectors capable of detecting alpha radiation from distant sources. In one embodiment, a voltage is generated in a single electrically conductive grid while a fan draws air containing air molecules ionized by alpha particles through an air passage and across the conductive grid. The current in the conductive grid can be detected and used for measurement or alarm. Another embodiment builds on this concept and provides an additional grid so that air ions of both polarities can be detected. The detector can be used in many applications, such as for pipe or duct, tank, or soil sample monitoring.

  5. Single and double grid long-range alpha detectors

    DOEpatents

    MacArthur, D.W.; Allander, K.S.

    1993-03-16

    Alpha particle detectors capable of detecting alpha radiation from distant sources. In one embodiment, a voltage is generated in a single electrically conductive grid while a fan draws air containing air molecules ionized by alpha particles through an air passage and across the conductive grid. The current in the conductive grid can be detected and used for measurement or alarm. Another embodiment builds on this concept and provides an additional grid so that air ions of both polarities can be detected. The detector can be used in many applications, such as for pipe or duct, tank, or soil sample monitoring.

  6. Parallel Proximity Detection for Computer Simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)

    1997-01-01

    The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
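The check-in/check-out bookkeeping described in these two patent records can be sketched as hashing positions into grid cells so that a sensor only tests movers registered in cells its coverage radius can reach (a generic spatial-hashing sketch of the idea, not the patented implementation with fuzzy grids and lookahead):

```python
def cell_of(x, y, cell_size):
    """Map a position to its integer grid cell."""
    return (int(x // cell_size), int(y // cell_size))

class ProximityGrid:
    """Movers check in and out of cells; a sensor only examines movers
    registered in cells overlapped by its coverage radius."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}                      # cell -> set of mover ids

    def check_in(self, mover, x, y):
        cell = cell_of(x, y, self.cell_size)
        self.cells.setdefault(cell, set()).add(mover)

    def check_out(self, mover, x, y):
        cell = cell_of(x, y, self.cell_size)
        self.cells.get(cell, set()).discard(mover)

    def candidates(self, x, y, radius):
        """All movers in every cell a sensor of given radius could reach."""
        reach = int(radius // self.cell_size) + 1
        cx, cy = cell_of(x, y, self.cell_size)
        found = set()
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                found |= self.cells.get((cx + dx, cy + dy), set())
        return found
```

The payoff is that each sensor examines only nearby cells instead of every mover in the simulation, which is what makes the scheme attractive on parallel architectures.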

  7. Parallel Proximity Detection for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)

    1998-01-01

    The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.

  8. Controlling Electron Backstreaming Phenomena Through the Use of a Transverse Magnetic Field

    NASA Technical Reports Server (NTRS)

    Foster, John E.; Patterson, Michael J.

    2002-01-01

    Deep-space mission propulsion requirements can be satisfied by high-specific-impulse systems such as ion thrusters. For such missions, however, the ion thruster will be required to provide thrust for long periods of time. To meet the long operation time and high propellant throughput requirements, thruster lifetime must be increased. In general, potential ion thruster failure mechanisms associated with long-duration thrusting can be grouped into four areas: (1) ion optics failure; (2) discharge cathode failure; (3) neutralizer failure; and (4) electron backstreaming caused by accelerator grid aperture enlargement brought on by accelerator grid erosion. The work presented here focuses on electron backstreaming, which occurs when the potential at the center of an accelerator grid aperture is insufficient to prevent the backflow of electrons into the ion thruster. The likelihood of this occurring depends on ion source operation time, plasma density, and grid voltages, as accelerator grid apertures enlarge as a result of erosion. Electrons that enter the gap between the high-voltage screen and accelerator grids are accelerated to energies approximately equal to the beam voltage. This energetic electron beam (typically higher than 1 kV) can damage not only the ion source discharge cathode assembly but also any of the discharge surfaces upstream of the ion acceleration optics that the electrons happen to impact. Indeed, past backstreaming studies have shown that near the backstreaming limit, which corresponds to the absolute value of the accelerator grid voltage below which electrons can backflow into the thruster, there is a rather sharp rise in temperature at structures such as the cathode keeper electrode. For this reason, operation at accelerator grid voltages near the backstreaming limit is avoided. 
Generally speaking, electron backstreaming is prevented by operating the accelerator grid at a voltage negative enough to ensure a sufficiently negative aperture center potential. This approach can provide the necessary margin for an expected degree of aperture enlargement. Operation at very negative accelerator grid voltages, however, enhances charge-exchange and direct-impingement ion erosion of the accelerator grid. The focus of the work presented here is the mitigation of electron backstreaming by the use of a magnetic field. A magnetic field oriented perpendicular to the thruster axis can significantly decrease the magnitude of the backflowing electron current by reducing the electron diffusion coefficient. Negative ion sources utilize this principle to reduce the fraction of electrons in the negative ion beam; those efforts have focused on attenuating the electron current diffusing from the discharge plasma into the negative ion extraction optics by placing the transverse magnetic field upstream of the extraction electrodes. In contrast, for positive ion sources such as ion thrusters, the approach taken in the work presented here is to apply the transverse field downstream of the ion extraction system so as to prevent electrons from flowing back into the source. It was found that the magnetic field also reduces the absolute value of the electron backstreaming limit voltage. In this respect, the applied transverse magnetic field provides two mechanisms for electron backstreaming mitigation: (1) electron current attenuation and (2) a backstreaming limit voltage shift. Such a shift to less negative voltages can lead to reduced accelerator grid erosion rates.

  9. The equal load-sharing model of cascade failures in power grids

    NASA Astrophysics Data System (ADS)

    Scala, Antonio; De Sanctis Lucentini, Pier Giorgio

    2016-11-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing power demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
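
    The equal load-sharing mechanism described in the abstract can be illustrated with a minimal fiber-bundle-style simulation. This is only a sketch with hypothetical capacity thresholds and loads, not the authors' actual model:

```python
import random

def cascade_size(n, load, seed=0):
    """Equal load-sharing cascade sketch: n lines with random capacity
    thresholds share the total load equally; when a line fails, its
    share is redistributed over the surviving lines."""
    rng = random.Random(seed)
    thresholds = [rng.uniform(0.5, 1.5) for _ in range(n)]
    alive = n
    failed = True
    while failed and alive > 0:
        per_line = load / alive  # equal sharing among survivors
        survivors = [t for t in thresholds if t >= per_line]
        failed = len(survivors) < alive
        alive = len(survivors)
        thresholds = survivors
    return n - alive  # number of failed lines

# Breakdown is abrupt: a modest load increase tips the system
# from a small number of failures into total collapse.
print(cascade_size(1000, 520.0))
print(cascade_size(1000, 650.0))
```

    Sweeping the load upward and plotting `cascade_size` reproduces the sharp, first-order-like jump that the paper analyzes in the large-size limit.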

  10. Abruptness of Cascade Failures in Power Grids

    NASA Astrophysics Data System (ADS)

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into ``super-grids''.

  11. Abruptness of cascade failures in power grids.

    PubMed

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-15

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".

  12. Complex Dynamics of the Power Transmission Grid (and other Critical Infrastructures)

    NASA Astrophysics Data System (ADS)

    Newman, David

    2015-03-01

    Our modern societies depend crucially on a web of complex critical infrastructures such as power transmission networks, communication systems, transportation networks and many others. These infrastructure systems display a great number of the characteristic properties of complex systems. Important among these characteristics, they exhibit infrequent large cascading failures that often obey a power law distribution in their probability versus size. This power law behavior suggests that conventional risk analysis does not apply to these systems. It is thought that much of this behavior comes from the dynamical evolution of the system as it ages, is repaired, upgraded, and as the operational rules evolve, with human decision making playing an important role in the dynamics. In this talk, infrastructure systems as complex dynamical systems will be introduced and some of their properties explored. The majority of the talk will then be focused on the electric power transmission grid, though many of the results can be easily applied to other infrastructures. General properties of the grid will be discussed and results from a dynamical complex systems power transmission model will be compared with real world data. Then we will look at a variety of uses of this type of model. As examples, we will discuss the impact of size and network homogeneity on grid robustness, the change in risk of failure as the generation mix changes (more distributed vs centralized, for example), as well as the effect of operational changes such as changing the operational risk aversion or grid upgrade strategies. One of the important outcomes from this work is the realization that ``improvements'' in the system components and operational efficiency do not always improve the system robustness, and can in fact greatly increase the risk, when measured as a risk of large failure.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babun, Leonardo; Aksu, Hidayet; Uluagac, A. Selcuk

    The core vision of the smart grid concept is the realization of reliable two-way communications between smart devices (e.g., IEDs, PLCs, PMUs). The benefits of the smart grid also come with tremendous security risks and new challenges in protecting smart grid systems from cyber threats. In particular, the use of untrusted counterfeit smart grid devices represents a real problem. The consequences of propagating false or malicious data, as well as of stealing valuable user or smart grid state information from counterfeit devices, are costly. Hence, early detection of counterfeit devices is critical for protecting the smart grid's components and users. To address these concerns, in this poster, we introduce our initial design of a configurable framework that utilizes system call tracing, library interposition, and statistical techniques for monitoring and detection of counterfeit smart grid devices. In our framework, we consider six different counterfeit device scenarios with different smart grid devices and adversarial settings. Our initial results on a realistic testbed utilizing actual smart-grid GOOSE messages with the IEC-61850 communication protocol are very promising. Our framework shows excellent detection rates for smart grid counterfeit devices posing as legitimate ones.
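
    The statistical-monitoring idea can be sketched with a toy trace comparison. The traces, call names, and the use of total variation distance below are all illustrative assumptions; the poster only names its techniques without specifying them:

```python
from collections import Counter

def counterfeit_score(baseline_trace, observed_trace):
    """Compare the observed system-call frequency distribution against a
    trusted baseline using total variation distance; a large distance
    suggests the device is not behaving like a genuine one."""
    base = Counter(baseline_trace)
    obs = Counter(observed_trace)
    n_base, n_obs = len(baseline_trace), len(observed_trace)
    calls = set(base) | set(obs)
    # Counter returns 0 for missing keys, so unseen calls count fully
    return 0.5 * sum(abs(base[c] / n_base - obs[c] / n_obs) for c in calls)

trusted = ["read", "write", "read", "ioctl", "read", "write"]
device  = ["read", "connect", "send", "read", "connect", "send"]
print(round(counterfeit_score(trusted, device), 2))  # → 0.67
```

    In practice the baseline would be built from many traces of a known-good device, and the threshold on the score tuned against the false-positive rate.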

  14. GridMass: a fast two-dimensional feature detection method for LC/MS.

    PubMed

    Treviño, Victor; Yañez-Garza, Irma-Luz; Rodriguez-López, Carlos E; Urrea-López, Rafael; Garza-Rodriguez, Maria-Lourdes; Barrera-Saldaña, Hugo-Alberto; Tamez-Peña, José G; Winkler, Robert; Díaz de-la-Garza, Rocío-Isabel

    2015-01-01

    One of the initial and critical procedures in the analysis of metabolomics data using liquid chromatography and mass spectrometry is feature detection. Feature detection is the process of detecting boundaries of features on the mass surface of the raw data, which consists of detected abundances arranged in a two-dimensional (2D) matrix of mass/charge and elution time. MZmine 2 is one of the leading software environments that provide a full analysis pipeline for these data. However, the feature detection algorithms provided in MZmine 2 are based mainly on the analysis of one dimension at a time. We propose GridMass, an efficient algorithm for 2D feature detection. The algorithm is based on landing probes across the chromatographic space that are moved to find local maxima, providing accurate boundary estimations. We tested GridMass on a controlled marker experiment, on plasma samples, on plant fruits, and on a proteome sample. Compared with other algorithms, GridMass is faster and may achieve comparable or better sensitivity and specificity. As a proof of concept, GridMass has been implemented in Java under the MZmine 2 environment and is available at http://www.bioinformatica.mty.itesm.mx/GridMass and MASSyPup. It has also been submitted to the MZmine 2 developing community. Copyright © 2015 John Wiley & Sons, Ltd.
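
    The probe idea can be illustrated in a few lines: probes seeded on a regular grid climb the intensity surface until they reach a local maximum, and probes converging to the same cell are merged. This is a simplified illustration, not the published GridMass implementation:

```python
def find_features(intensity, probes_per_axis):
    """Toy probe-based 2D feature detection: seed probes on a regular
    grid over the intensity matrix, greedily climb to a local maximum,
    and merge probes that converge to the same cell."""
    rows, cols = len(intensity), len(intensity[0])

    def climb(r, c):
        # move to the highest 8-neighbour until no neighbour is higher
        while True:
            best = (intensity[r][c], r, c)
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        best = max(best, (intensity[rr][cc], rr, cc))
            if (best[1], best[2]) == (r, c):
                return r, c
            r, c = best[1], best[2]

    features = set()
    for r in range(0, rows, max(1, rows // probes_per_axis)):
        for c in range(0, cols, max(1, cols // probes_per_axis)):
            features.add(climb(r, c))
    return sorted(features)

grid = [[0, 1, 0, 0],
        [1, 5, 1, 0],
        [0, 1, 0, 2],
        [0, 0, 2, 9]]
print(find_features(grid, 4))  # → [(1, 1), (3, 3)]
```

    Real LC/MS surfaces are far larger and noisier, so the published algorithm also estimates feature boundaries rather than just apex positions.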

  15. Inverter Anti-Islanding with Advanced Grid Support in Single- and Multi-Inverter Islands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Andy

    As PV and other DER systems are connected to the grid at increased penetration levels, island detection may become more challenging for two reasons: 1. In islands containing many DERs, active inverter-based anti-islanding methods may have more difficulty detecting islands because each individual inverter's efforts to detect the island may be interfered with by the other inverters in the island. 2. The increasing numbers of DERs are leading to new requirements that DERs ride through grid disturbances and even actively try to regulate grid voltage and frequency back towards nominal operating conditions. These new grid support requirements may directly or indirectly interfere with anti-islanding controls. This report describes a series of tests designed to examine the impacts of both grid support functions and multi-inverter islands on anti-islanding effectiveness.

  16. Method and apparatus for detecting cyber attacks on an alternating current power grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEachern, Alexander; Hofmann, Ronald

    A method and apparatus for detecting cyber attacks on remotely-operable elements of an alternating current distribution grid. Two state estimates of the distribution grid are prepared, one of which uses micro-synchrophasors. A difference between the two state estimates indicates a possible cyber attack.
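
    The two-estimate comparison lends itself to a compact sketch. The bus names, per-unit voltages, and tolerance below are illustrative assumptions; the patent abstract does not specify these details:

```python
def detect_attack(scada_estimate, upmu_estimate, tol=0.02):
    """Flag buses where the SCADA-based state estimate and an independent
    micro-synchrophasor-based estimate disagree by more than a tolerance;
    a persistent mismatch suggests tampered remote measurements."""
    suspicious = []
    for bus, v_scada in scada_estimate.items():
        if abs(v_scada - upmu_estimate[bus]) > tol:
            suspicious.append(bus)
    return suspicious

scada = {"bus1": 1.00, "bus2": 0.98, "bus3": 1.05}  # possibly spoofed path
upmu  = {"bus1": 1.00, "bus2": 0.99, "bus3": 0.96}  # independent measurement
print(detect_attack(scada, upmu))  # → ['bus3']
```

    The strength of the approach is that the two estimates travel through different measurement and communication paths, so an attacker must compromise both consistently to stay hidden.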

  17. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy

    NASA Astrophysics Data System (ADS)

    Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng

    2018-02-01

    Because islanding is easily confounded with grid disturbance, an island detection device may make misjudgments, causing the photovoltaic system to be taken out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing step after wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which the intrinsic features distinguishing islanding from grid disturbance can be extracted. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so that mistaken withdrawal of the photovoltaic system from the power grid can be avoided.

  18. Convectively cooled electrical grid structure

    DOEpatents

    Paterson, J.A.; Koehler, G.W.

    1980-11-10

    Undesirable distortions of electrical grid conductors from thermal cycling are minimized and related problems such as unwanted thermionic emission and structural failure from overheating are avoided by providing for a flow of fluid coolant within each conductor. The conductors are secured at each end to separate flexible support elements which accommodate to individual longitudinal expansion and contraction of each conductor while resisting lateral displacements, the coolant flow preferably being directed into and out of each conductor through passages in the flexible support elements. The grid may have a modular or divided construction which facilitates manufacture and repairs.

  19. An Experimental Framework for Executing Applications in Dynamic Grid Environments

    NASA Technical Reports Server (NTRS)

    Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed in different administrative domains. However, efficient job submission and management continue to be far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration following performance degradation, 'better' resource discovery, requirement change, owner decision or remote resource failure. The report also includes experimental results of the behavior of our framework on the TRGP testbed.

  20. Islanding detection technique using wavelet energy in grid-connected PV system

    NASA Astrophysics Data System (ADS)

    Kim, Il Song

    2016-08-01

    This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point-of-common-coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify islanding, even under variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method for obtaining the wavelet energies is developed and verified by experimental results.
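
    The wavelet-energy feature can be sketched with a hand-rolled Haar decomposition. The waveforms, sampling rate, and the added high-frequency ripple standing in for an islanding transient are all hypothetical; the paper's actual wavelet family and detection logic are not specified here:

```python
import math

def haar_level(signal):
    """One level of a (simplified, unnormalized) Haar transform:
    returns (approximation, detail) coefficient lists."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        approx.append((signal[i] + signal[i + 1]) / 2)
        detail.append((signal[i] - signal[i + 1]) / 2)
    return approx, detail

def wavelet_energies(signal, levels=3):
    """Energy of the detail coefficients at each decomposition level;
    a spectral change shows up as a shift in these energies."""
    energies = []
    for _ in range(levels):
        signal, detail = haar_level(signal)
        energies.append(sum(d * d for d in detail))
    return energies

# 50 Hz fundamental sampled at a hypothetical 1 kHz
normal = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(256)]
# hypothetical islanded waveform: fundamental plus high-frequency ripple
islanded = [v + 0.3 * math.sin(2 * math.pi * 400 * t / 1000)
            for t, v in enumerate(normal)]
print(wavelet_energies(normal))
print(wavelet_energies(islanded))  # first-level energy rises sharply
```

    The detector in the paper goes further, tracking the autocorrelation of the coefficients over time so that steady-state harmonic variation is not mistaken for an islanding event.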

  1. Sunlight Helps Laboratory Get Ready for Y2K

    Science.gov Websites

    A solar generator was installed by the end of December to provide emergency electricity to the Site Entrance Building (SEB), supplying solar power if the supply of electricity from the local utility grid is interrupted or a failure disrupts electricity supplies. If a power failure should be protracted, a secondary propane backup is available.

  2. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  3. Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.

    PubMed

    Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee

    2017-01-01

    At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. 
The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.

  4. Machine learning for the New York City power grid.

    PubMed

    Rudin, Cynthia; Waltz, David; Anderson, Roger N; Boulanger, Albert; Salleb-Aouissi, Ansaf; Chow, Maggie; Dutta, Haimonti; Gross, Philip N; Huang, Bert; Ierome, Steve; Isaac, Delfina F; Kressner, Arthur; Passonneau, Rebecca J; Radeva, Axinia; Wu, Leon

    2012-02-01

    Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce 1) feeder failure rankings, 2) cable, joint, terminator, and transformer rankings, 3) feeder Mean Time Between Failure (MTBF) estimates, and 4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy, sources that are historical (static), semi-real-time, or realtime, incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF), and includes an evaluation of results via cross-validation and blind test. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City’s electrical grid.
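
    The ranking step can be sketched as a scored sort over feeder feature vectors. The feature names and weights below are illustrative stand-ins, not the paper's learned models, which use supervised ranking and MTBF estimation trained on historical data:

```python
def rank_feeders(feeders):
    """Toy failure-risk ranking: score each feeder from weighted,
    domain-meaningful features and sort by descending predicted risk.
    Weights are hypothetical; a real system would learn them."""
    weights = {"age_years": 0.04, "past_outages": 0.30, "load_ratio": 0.50}

    def score(feeder):
        return sum(weights[k] * feeder[k] for k in weights)

    return sorted(feeders, key=score, reverse=True)

feeders = [
    {"id": "F1", "age_years": 35, "past_outages": 2, "load_ratio": 0.7},
    {"id": "F2", "age_years": 10, "past_outages": 6, "load_ratio": 0.9},
    {"id": "F3", "age_years": 20, "past_outages": 0, "load_ratio": 0.4},
]
print([f["id"] for f in rank_feeders(feeders)])  # → ['F2', 'F1', 'F3']
```

    The point the abstract stresses is that such features must stay meaningful to domain experts so that maintenance planners can trust and act on the ranked list.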

  5. Influence of Different Coupling Modes on the Robustness of Smart Grid under Targeted Attack.

    PubMed

    Kang, WenJie; Hu, Gang; Zhu, PeiDong; Liu, Qiang; Hang, Zhi; Liu, Xin

    2018-05-24

    Many previous works focused only on the cascading failure of the global coupling of one-to-one structures in interdependent networks; the local coupling of dual coupling structures has rarely been studied due to its complex structure. This can have the serious consequence that many conclusions drawn for the one-to-one structure are incorrect in the dual coupling network and do not apply to the smart grid. It is therefore necessary to subdivide the dual coupling link into a top-down coupling link and a bottom-up coupling link in order to study their influence on network robustness in combination with different coupling modes. Additionally, the power flow of the power grid can cause the load of a failed node to be allocated to its neighboring nodes and trigger a new round of load distribution when the load of these nodes exceeds their capacity. This means that the robustness of smart grids may be affected by four factors, i.e., load redistribution, local coupling, the dual coupling link and the coupling mode; however, research on the influence of those factors on network robustness is missing. In this paper, firstly, we construct the smart grid as a two-layer network with a dual coupling link and divide the power grid and communication network into many subnets based on the geographical location of their nodes. Secondly, we define node importance (NI) as an evaluation index to assess the impact of nodes on the cyber or physical network and propose three types of coupling modes based on the NI of nodes in the cyber and physical subnets, i.e., Assortative Coupling in Subnets (ACIS), Disassortative Coupling in Subnets (DCIS), and Random Coupling in Subnets (RCIS).
Thirdly, a cascading failure model is proposed for studying the effect of the local coupling of the dual coupling link, in combination with ACIS, DCIS, and RCIS, on the robustness of the smart grid against a targeted attack, and the survival rate of functional nodes is used to assess that robustness. Finally, we use the IEEE 118-Bus System and the Italian High-Voltage Electrical Transmission Network to verify our model and obtain the same conclusions: (I) DCIS applied to the top-down coupling link is better able to enhance the robustness of the smart grid against a targeted attack than RCIS or ACIS, (II) ACIS applied to the bottom-up coupling link is better able to enhance the robustness of the smart grid against a targeted attack than RCIS or DCIS, and (III) the robustness of the smart grid can be improved by increasing the tolerance α. This paper provides some guidelines for slowing down cascading failures in the design of the architecture and optimization of interdependent networks, such as a top-down link with DCIS, a bottom-up link with ACIS, and an increased tolerance α.
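
    The load-redistribution mechanism underlying such cascade models can be sketched on a tiny graph. Node capacities, loads, and topology below are hypothetical, and the sketch omits the paper's coupling links entirely:

```python
def cascade(capacity, load, edges, start):
    """Local load redistribution: when a node fails, its load is split
    equally among its live neighbours, which may overload them in turn."""
    neighbours = {n: set() for n in capacity}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    failed = set()
    queue = [start]
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        live = [n for n in neighbours[node] if n not in failed]
        for n in live:
            load[n] += load[node] / len(live)  # equal split to neighbours
            if load[n] > capacity[n] and n not in queue:
                queue.append(n)
    return failed

capacity = {"a": 2.0, "b": 1.5, "c": 1.2, "d": 3.0}
load     = {"a": 1.8, "b": 1.0, "c": 1.0, "d": 1.0}
edges    = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
print(sorted(cascade(capacity, load, edges, "a")))  # → ['a', 'b', 'c', 'd']
```

    Raising the tolerance (i.e., the capacity margin over the initial load) is exactly the knob the paper's conclusion (III) refers to: with larger margins the same initial failure stops after one or two nodes.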

  6. Measure of robustness for complex networks

    NASA Astrophysics Data System (ADS)

    Youssef, Mina Nabil

    Critical infrastructures are repeatedly attacked by external triggers causing tremendous amounts of damage. Any infrastructure can be studied using the powerful theory of complex networks. A complex network is composed of an extremely large number of different elements that exchange commodities providing significant services. The main functions of complex networks can be damaged by different types of attacks and failures that degrade the network performance. These attacks and failures are considered as disturbing dynamics, such as the spread of viruses in computer networks, the spread of epidemics in social networks, and the cascading failures in power grids. Depending on the network structure and the attack strength, every network suffers damage and performance degradation differently. Hence, quantifying the robustness of complex networks becomes an essential task. In this dissertation, new metrics are introduced to measure the robustness of technological and social networks with respect to the spread of epidemics, and the robustness of power grids with respect to cascading failures. First, we introduce a new metric called the Viral Conductance (VCSIS) to assess the robustness of networks with respect to the spread of epidemics that are modeled through the susceptible/infected/susceptible (SIS) epidemic approach. In contrast to assessing the robustness of networks based on a classical metric, the epidemic threshold, the new metric integrates the fraction of infected nodes at steady state for all possible effective infection strengths. Through examples, VCSIS provides more insights about the robustness of networks than the epidemic threshold. In addition, both the paradoxical robustness of Barabasi-Albert preferential attachment networks and the effect of the topology on the steady state infection are studied, to show the importance of quantifying the robustness of networks.
Second, a new metric VCSIR is introduced to assess the robustness of networks with respect to the spread of susceptible/infected/recovered (SIR) epidemics. To compute VCSIR, we propose a novel individual-based approach to model the spread of SIR epidemics in networks, which captures the infection size for a given effective infection rate. Thus, VCSIR quantitatively integrates the infection strength with the corresponding infection size. To optimize the VCSIR metric, a new mitigation strategy is proposed, based on a temporary reduction of contacts in social networks. The social contact network is modeled as a weighted graph that describes the frequency of contacts among the individuals. Thus, we consider the spread of an epidemic as a dynamical system, and the total number of infection cases as the state of the system, while the weight reduction in the social network is the controller variable used to slow or reduce the spread of epidemics. Using optimal control theory, the obtained solution represents an optimal adaptive weighted network defined over a finite time interval. Moreover, given the high complexity of the optimization problem, we propose two heuristics to find near-optimal solutions by reducing the contacts among the individuals in a decentralized way. Finally, the cascading failures that can take place in power grids and have recently caused several blackouts are studied. We propose a new metric to assess the robustness of the power grid with respect to cascading failures. The power grid topology is modeled as a network, which consists of nodes and links representing power substations and transmission lines, respectively. We also propose an optimal islanding strategy to protect the power grid when a cascading failure event takes place in the grid. The robustness metrics are numerically evaluated using real and synthetic networks to quantify their robustness with respect to disturbing dynamics.
We show that the proposed metrics outperform the classical metrics in quantifying the robustness of networks and the efficiency of the mitigation strategies. In summary, our work advances the network science field in assessing the robustness of complex networks with respect to various disturbing dynamics.

  7. Integrity Verification for SCADA Devices Using Bloom Filters and Deep Packet Inspection

    DTIC Science & Technology

    2014-03-27

    prevent intrusions in smart grids [PK12]. Parthasarathy proposed an anomaly detection based IDS that takes into account system state. In his implementation...Security, 25(7):498–506, 10 2006. [LMV12] O. Linda, M. Manic, and T. Vollmer. Improving cyber-security of smart grid systems via anomaly detection and...6 2012. 114 [PK12] S. Parthasarathy and D. Kundur. Bloom filter based intrusion detection for smart grid SCADA. In Electrical & Computer Engineering

  8. Wire-chamber radiation detector with discharge control

    DOEpatents

    Perez-Mendez, V.; Mulera, T.A.

    1982-03-29

    A wire-chamber radiation detector has spaced-apart parallel electrodes and grids defining an ignition region in which charged particles or other ionizing radiations initiate brief localized avalanche discharges, and defining an adjacent memory region in which sustained glow discharges are initiated by the primary discharges. Conductors of the grids at each side of the memory section extend in orthogonal directions, enabling readout of the X-Y coordinates of locations at which charged particles were detected by sequentially transmitting pulses to the conductors of one grid while detecting transmissions of the pulses to the orthogonal conductors of the other grid through glow discharges. One of the grids bounding the memory region is defined by an array of conductive elements each of which is connected to the associated readout conductor through a separate resistance. The wire chamber avoids ambiguities and imprecisions in the readout of coordinates when large numbers of simultaneous or near-simultaneous charged particles have been detected. Down time between detection periods and the generation of radio frequency noise are also reduced.

  9. Convectively cooled electrical grid structure

    DOEpatents

    Paterson, James A.; Koehler, Gary W.

    1982-01-01

    Undesirable distortions of electrical grid conductors (12) from thermal cycling are minimized and related problems such as unwanted thermionic emission and structural failure from overheating are avoided by providing for a flow of fluid coolant within each conductor (12). The conductors (12) are secured at each end to separate flexible support elements (16) which accommodate to individual longitudinal expansion and contraction of each conductor (12) while resisting lateral displacements, the coolant flow preferably being directed into and out of each conductor through passages (48) in the flexible support elements (16). The grid (11) may have a modular or divided construction which facilitates manufacture and repairs.

  10. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.

    Increased coupling between critical infrastructure networks, such as power and communication systems, has important implications for the reliability and security of these systems. To understand the effects of power-communication coupling, several researchers have studied models of interdependent networks and reported that increased coupling can increase vulnerability. However, these conclusions come largely from models that have substantially different mechanisms of cascading failure, relative to those found in actual power and communication networks, and that do not capture the benefits of connecting systems with complementary capabilities. In order to understand the importance of these details, this paper compares network vulnerability in simple topological models and in models that more accurately capture the dynamics of cascading in power systems. First, we compare a simple model of topological contagion to a model of cascading in power systems and find that the power grid model shows a higher level of vulnerability, relative to the contagion model. Second, we compare a percolation model of topological cascading in coupled networks to three different models of power networks coupled to communication systems. Again, the more accurate models suggest very different conclusions than the percolation model. In all but the most extreme case, the physics-based power grid models indicate that increased power-communication coupling decreases vulnerability. This is opposite from what one would conclude from the percolation model, in which zero coupling is optimal. Only in an extreme case, in which communication failures immediately cause grid failures, did we find that increased coupling can be harmful. Together, these results suggest design strategies for reducing the risk of cascades in interdependent infrastructure systems.

  11. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    DOE PAGES

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.; ...

    2017-03-20

    Increased coupling between critical infrastructure networks, such as power and communication systems, has important implications for the reliability and security of these systems. To understand the effects of power-communication coupling, several researchers have studied models of interdependent networks and reported that increased coupling can increase vulnerability. However, these conclusions come largely from models that have substantially different mechanisms of cascading failure, relative to those found in actual power and communication networks, and that do not capture the benefits of connecting systems with complementary capabilities. In order to understand the importance of these details, this paper compares network vulnerability in simple topological models and in models that more accurately capture the dynamics of cascading in power systems. First, we compare a simple model of topological contagion to a model of cascading in power systems and find that the power grid model shows a higher level of vulnerability, relative to the contagion model. Second, we compare a percolation model of topological cascading in coupled networks to three different models of power networks coupled to communication systems. Again, the more accurate models suggest very different conclusions than the percolation model. In all but the most extreme case, the physics-based power grid models indicate that increased power-communication coupling decreases vulnerability. This is opposite from what one would conclude from the percolation model, in which zero coupling is optimal. Only in an extreme case, in which communication failures immediately cause grid failures, did we find that increased coupling can be harmful. Together, these results suggest design strategies for reducing the risk of cascades in interdependent infrastructure systems.

  12. Grid-connected photovoltaic (PV) systems with batteries storage as solution to electrical grid outages in Burkina Faso

    NASA Astrophysics Data System (ADS)

    Abdoulaye, D.; Koalaga, Z.; Zougmore, F.

    2012-02-01

    This paper presents grid-connected photovoltaic (PV) systems with battery storage as a solution to the power-outage problem experienced by many African countries. African grids are characterized by insufficient power supply and frequent interruptions. Because of this, users of conventional grid-connected photovoltaic systems cannot benefit from their installations even when the sun is shining. In this study, we propose a grid-connected photovoltaic system with battery storage as a solution to these problems. The system injects surplus electricity production into the grid and can also deliver electricity as a stand-alone system with all the security needed. To achieve the study objectives, we first surveyed the actual situation of one African electrical grid, that of Burkina Faso (SONABEL: National Electricity Company of Burkina). Second, as a case study, we sized, modeled, and simulated a grid-connected PV system with battery storage for the LAME laboratory at the University of Ouagadougou. The simulation shows that the proposed grid-connected system allows users to benefit from their photovoltaic installation at any time, even when the public electrical grid fails, whether during the day or at night.

  13. Radiation detector based on a matrix of crossed wavelength-shifting fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kross, Brian J.; Weisenberger, Andrew; Zorn, Carl

    A radiation detection system comprising a detection grid of wavelength shifting fibers with a volume of scintillating material at the intersecting points of the fibers. Light detectors, preferably Silicon Photomultipliers, are positioned at the ends of the fibers. The position of radiation is determined from data obtained from the detection grid. The system is easily scalable, customizable, and also suitable for use in soil and underground applications. An alternate embodiment employs a fiber grid sheet or layer which is comprised of multiple fibers secured to one another within the same plane. This embodiment further includes shielding in order to prevent radiation cross-talk within the grid layer.

  14. Groundwater-quality data in the Santa Cruz, San Gabriel, and Peninsular Ranges Hard Rock Aquifers study unit, 2011-2012: results from the California GAMA program

    USGS Publications Warehouse

    Davis, Tracy A.; Shelton, Jennifer L.

    2014-01-01

    Results for constituents with nonregulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in samples from 19 grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 27 grid wells. Chloride was detected at a concentration greater than the SMCL-CA upper benchmark of 500 mg/L in one grid well. TDS concentrations in three grid wells were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  15. Method for the depth corrected detection of ionizing events from a co-planar grids sensor

    DOEpatents

    De Geronimo, Gianluigi [Syosset, NY; Bolotnikov, Aleksey E [South Setauket, NY; Carini, Gabriella [Port Jefferson, NY

    2009-05-12

    A method for the detection of ionizing events utilizing a co-planar grids sensor comprising a semiconductor substrate, a cathode electrode, a collecting grid, and a non-collecting grid. The semiconductor substrate is sensitive to ionizing radiation. A voltage less than 0 volts is applied to the cathode electrode; a voltage greater than the cathode voltage is applied to the non-collecting grid; and a voltage greater than the non-collecting grid voltage is applied to the collecting grid. The signals from the collecting grid and the non-collecting grid are summed and subtracted, creating a sum and a difference respectively, and the difference is divided by the sum to create a ratio. A gain coefficient is determined for each depth (the distance between the ionizing event and the collecting grid), whereby the energy of each ionizing event is the difference between the collecting grid and non-collecting grid signals multiplied by the corresponding gain coefficient. The depth of the ionizing event can also be determined from the ratio.
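    The sum/difference/ratio arithmetic described in the patent lends itself to a short sketch. The function below is an illustrative reconstruction, not the patented implementation; in particular, the `gain_for_ratio` callable is a hypothetical interface standing in for the per-depth gain coefficients the patent determines empirically.

```python
def depth_corrected_energy(collecting, non_collecting, gain_for_ratio):
    """Depth-corrected energy for one ionizing event.

    collecting / non_collecting: induced signal amplitudes on the two grids.
    gain_for_ratio: callable mapping the depth ratio to a gain coefficient
    (assumed interface; the patent derives these coefficients per depth).
    """
    difference = collecting - non_collecting   # depth-dependent amplitude
    total = collecting + non_collecting        # sum of the two grid signals
    ratio = difference / total                 # encodes interaction depth
    return gain_for_ratio(ratio) * difference, ratio
```

    With an identity gain the corrected energy reduces to the raw grid difference, which makes the role of the per-depth coefficient easy to see.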

  16. Numerical simulation of deformation and failure processes of a complex technical object under impact loading

    NASA Astrophysics Data System (ADS)

    Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.

    2018-04-01

    The main points of development of numerical tools for simulation of deformation and failure of complex technical objects under nonstationary conditions of extreme loading are presented. The possibility of extending the dynamic method for construction of difference grids to the 3D case is shown. A 3D realization of discrete-continuum approach to the deformation and failure of complex technical objects is carried out. The efficiency of the existing software package for 3D modelling is shown.

  17. Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells

    PubMed Central

    Trimper, John B.; Trettel, Sean G.; Hwaun, Ernie; Colgin, Laura Lee

    2017-01-01

    At rest, hippocampal “place cells,” neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These “replay” events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. 
The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay. PMID:28824388

  18. Development of new positive-grid alloy and its application to long-life batteries for automotive industry

    NASA Astrophysics Data System (ADS)

    Furukawa, Jun; Nehyo, Y.; Shiga, S.

    Positive-grid corrosion and its resulting creep or growth is one of the major causes of the failure of automotive lead-acid batteries. The importance of grid corrosion and growth is increasing given the tendency for rising temperatures in the engine compartments of modern vehicles. In order to cope with this situation, a new lead alloy has been developed for positive-grids by utilizing an optimized combination of lead-calcium-tin and barium. In addition to enhanced mechanical strength at high temperature, the corrosion-resistance of the grid is improved by as much as two-fold so that the high temperature durability of batteries using such grids has been demonstrated in both hot SAE J240 tests and in field trials in Japan and Thailand. A further advantage of the alloy is its recyclability compared with alloys containing silver. The new alloy gives superior performance in both 12-V flooded and 36-V valve-regulated lead-acid (VRLA) batteries.

  19. A New Approach to Micro-arcsecond Astrometry with SIM Allowing Early Mission Narrow Angle Measurements of Compelling Astronomical Targets

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart; Pan, Xiaopei

    2004-01-01

    The Space Interferometry Mission (SIM) is capable of detecting and measuring the mass of terrestrial planets around stars other than our own. It can measure the mass of black holes and the visual orbits of radio and x-ray binary sources. SIM makes possible a new level of understanding of complex astrophysical processes. SIM achieves its high precision in the so-called narrow-angle regime. This is defined by a 1 degree diameter field in which the position of a target star is measured with respect to a set of reference stars. The observation is performed in two parts: first, SIM observes a grid of stars that spans the full sky. After a few years, repeated observations of the grid allow one to determine the orientation of the interferometer baseline. Second, throughout the mission, SIM periodically observes in the narrow-angle mode. Every narrow-angle observation is linked to the grid to determine the precise attitude and length of the baseline. The narrow angle process demands patience. It is not until five years after launch that SIM achieves its ultimate accuracy of 1 microarcsecond. The accuracy is degraded by a factor of approx. 2 at mid-mission. Our work proposes a technique for narrow angle astrometry that does not rely on the measurement of grid stars. This technique, called Gridless Narrow Angle Astrometry (GNAA) can obtain microarcsecond accuracy and can detect extra-solar planets and other exciting objects with a few days of observation. It can be applied as early as during the first six months of in-orbit calibration (IOC). The motivations for doing this are strong. First, and obviously, it is an insurance policy against a catastrophic mid-mission failure. Second, at the start of the mission, with several space-based interferometers in the planning or implementation phase, NASA will be eager to capture the public's imagination with interferometric science. 
Third, early results and a technique that can duplicate those results throughout the mission will give the analysts important experience in the proper use and calibration of SIM.

  20. Sandia and NJ TRANSIT Authority Developing Resilient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanley, Charles J.; Ellis, Abraham

    2014-11-01

    Through the memorandum of understanding between the Department of Energy (DOE), the New Jersey Transit Authority (NJ Transit), and the New Jersey Board of Public Utilities, Sandia National Labs is assisting NJ Transit in developing NJ TransitGrid: an electric microgrid that will include a large-scale gas-fired generation facility and distributed energy resources (photovoltaics [PV], energy storage, electric vehicles, combined heat and power [CHP]) to supply reliable power during storms or other times of significant power failure. The NJ TransitGrid was awarded $410M from the Department of Transportation to develop a first-of-its-kind electric microgrid capable of supplying highly-reliable power.

  1. System and method for islanding detection and prevention in distributed generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhowmik, Shibashis; Mazhari, Iman; Parkhideh, Babak

    Various examples are directed to systems and methods for detecting an islanding condition at an inverter configured to couple a distributed generation system to an electrical grid network. A controller may determine a command frequency and a command frequency variation. The controller may determine that the command frequency variation indicates a potential islanding condition and send to the inverter an instruction to disconnect the distributed generation system from the electrical grid network. When the distributed generation system is disconnected from the electrical grid network, the controller may determine whether the grid network is valid.
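    The command-frequency check described above can be sketched as a rolling-window monitor at the inverter controller. The class below is a minimal illustration only; the window length and variation threshold are assumed values, not figures from the patent.

```python
from collections import deque

class IslandingDetector:
    """Flags a potential islanding condition when the command frequency
    varies too much over a short window (illustrative thresholds)."""

    def __init__(self, max_variation_hz=0.5, window=10):
        self.history = deque(maxlen=window)
        self.max_variation = max_variation_hz

    def update(self, command_frequency_hz):
        """Record one command-frequency sample; return True when the
        spread over the full window exceeds the allowed variation."""
        self.history.append(command_frequency_hz)
        if len(self.history) < self.history.maxlen:
            return False  # not enough samples to judge yet
        return max(self.history) - min(self.history) > self.max_variation
```

    On a True result the controller would instruct the inverter to disconnect, then test grid validity before reconnecting, as the record describes.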

  2. Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.

    PubMed

    Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J

    2016-05-26

    There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.

  3. Continuous glucose monitoring: quality of hypoglycaemia detection.

    PubMed

    Zijlstra, E; Heise, T; Nosek, L; Heinemann, L; Heckermann, S

    2013-02-01

    To evaluate the accuracy of a (widely used) continuous glucose monitoring (CGM)-system and its ability to detect hypoglycaemic events. A total of 18 patients with type 1 diabetes mellitus used continuous glucose monitoring (Guardian REAL-Time CGMS) during two 9-day in-house periods. A hypoglycaemic threshold alarm alerted patients to sensor readings <70 mg/dl. Continuous glucose monitoring sensor readings were compared to laboratory reference measurements taken every 4 h and in case of a hypoglycaemic alarm. A total of 2317 paired data points were evaluated. Overall, the mean absolute relative difference (MARD) was 16.7%. The percentage of data points in the clinically accurate or acceptable Clarke Error Grid zones A + B was 94.6%. In the hypoglycaemic range, accuracy worsened (MARD 38.8%) leading to a failure to detect more than half of the true hypoglycaemic events (sensitivity 37.5%). Furthermore, more than half of the alarms that warn patients for hypoglycaemia were false (false alert rate 53.3%). Above the low alert threshold, the sensor confirmed 2077 of 2182 reference values (specificity 95.2%). Patients using continuous glucose monitoring should be aware of its limitation to accurately detect hypoglycaemia. © 2012 Blackwell Publishing Ltd.
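    The accuracy metrics reported above (MARD, sensitivity, specificity) follow directly from the paired sensor/reference readings. The helpers below are a generic sketch of those standard definitions, using the study's 70 mg/dl hypoglycaemic threshold; they are not the study's analysis code.

```python
def mard(sensor, reference):
    """Mean absolute relative difference (%) over paired readings."""
    pairs = list(zip(sensor, reference))
    return 100.0 * sum(abs(s - r) / r for s, r in pairs) / len(pairs)

def hypo_detection(sensor, reference, threshold=70.0):
    """Sensitivity and specificity of a <threshold mg/dl hypo alarm,
    treating the reference measurement as ground truth."""
    tp = sum(1 for s, r in zip(sensor, reference) if s < threshold and r < threshold)
    fn = sum(1 for s, r in zip(sensor, reference) if s >= threshold and r < threshold)
    tn = sum(1 for s, r in zip(sensor, reference) if s >= threshold and r >= threshold)
    fp = sum(1 for s, r in zip(sensor, reference) if s < threshold and r >= threshold)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```

    The false alert rate quoted in the abstract is the complement of the alarm's positive predictive value, fp / (tp + fp), computable from the same counts.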

  4. Structured light: theory and practice and practice and practice...

    NASA Astrophysics Data System (ADS)

    Keizer, Richard L.; Jun, Heesung; Dunn, Stanley M.

    1991-04-01

    We have developed a structured light system for noncontact 3-D measurement of human body surface areas and volumes. We illustrate the image processing steps and algorithms used to recover range data from a single camera image, reconstruct a complete surface from one or more sets of range data, and measure areas and volumes. The development of a working system required the solution to a number of practical problems in image processing and grid labeling (the stereo correspondence problem for structured light). In many instances we found that the standard cookbook techniques for image processing failed. This was due in part to the domain (human body), the restrictive assumptions of the models underlying the cookbook techniques, and the inability to consistently predict the outcome of the image processing operations. In this paper, we will discuss some of our successes and failures in two key steps in acquiring range data using structured light: First, the problem of detecting intersections in the structured light grid, and secondly, the problem of establishing correspondence between projected and detected intersections. We will outline the problems and solutions we have arrived at after several years of trial and error. We can now measure range data with an r.m.s. relative error of 0.3% and measure areas on the human body surface within 3% and volumes within 10%. We have found that the solution to building a working vision system requires the right combination of theory and experimental verification.

  5. Effect of Component Failures on Economics of Distributed Photovoltaic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubin, Barry T.

    2012-02-02

    This report describes an applied research program to assess the realistic costs of grid connected photovoltaic (PV) installations. A Board of Advisors was assembled that included management from the regional electric power utilities, as well as other participants from companies that work in the electric power industry. Although the program started with the intention of addressing effective load carrying capacity (ELCC) for utility-owned photovoltaic installations, results from the literature study and recommendations from the Board of Advisors led investigators to the conclusion that obtaining effective data for this analysis would be difficult, if not impossible. The effort was then re-focused on assessing the realistic costs and economic valuations of grid-connected PV installations. The 17 kW PV installation on the University of Hartford's Lincoln Theater was used as one source of actual data. The change in objective required a more technically oriented group; the re-organized working group made site visits to medium-sized PV installations in Connecticut with the objective of developing sources of operating histories. An extensive literature review helped to focus efforts in several technical and economic subjects. The objective of determining the consequences of component failures on both generation and economic returns required three analyses. The first was a Monte-Carlo-based simulation model for failure occurrences and the resulting downtime. Published failure data, though limited, was used to verify the results. A second model was developed to predict the reduction in or loss of electrical generation related to the downtime due to these failures. Finally, a comprehensive economic analysis, including these failures, was developed to determine realistic net present values of installed PV arrays.
Two types of societal benefits were explored, with quantitative valuations developed for both. Some societal benefits associated with financial benefits to the utility of having a distributed generation capacity that is not fossil-fuel based have been included into the economic models. Also included and quantified in the models are several benefits to society more generally: job creation and some estimates of benefits from avoiding greenhouse emissions. PV system failures result in a lowering of the economic values of a grid-connected system, but this turned out to be a surprisingly small effect on the overall economics. The most significant benefit noted resulted from including the societal benefits accrued to the utility. This provided a marked increase in the valuations of the array and made the overall value proposition a financially attractive one, in that net present values exceeded installation costs. These results indicate that the Department of Energy and state regulatory bodies should consider focusing on societal benefits that create economic value for the utility, confirm these quantitative values, and work to have them accepted by the utilities and reflected in the rate structures for power obtained from grid-connected arrays. Understanding and applying the economic benefits evident in this work can significantly improve the business case for grid-connected PV installations. This work also indicates that the societal benefits to the population are real and defensible, but not nearly as easy to justify in a business case as are the benefits that accrue directly to the utility.
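    The report's first analysis, a Monte Carlo simulation of failure occurrences and resulting downtime, can be sketched in a few lines. Everything below is illustrative: the failure rate, repair time, daily yield, and the binomial draw approximating a Poisson failure count are assumptions, not figures or methods taken from the report.

```python
import random

def simulate_lost_energy(annual_failure_rate, repair_days, daily_kwh,
                         years=25, trials=2000, seed=1):
    """Average generation (kWh) lost to component failures over the
    system life, estimated by Monte Carlo draws of failure counts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        down_days = 0
        for _ in range(years):
            # Binomial(10, rate/10) draw approximating a Poisson-distributed
            # yearly failure count with the given annual rate.
            failures = sum(1 for _ in range(10)
                           if rng.random() < annual_failure_rate / 10.0)
            down_days += failures * repair_days
        total += down_days * daily_kwh
    return total / trials
```

    Feeding the expected lost energy into a net-present-value calculation, as the report's third analysis does, shows why the effect on overall economics is small: downtime scales the revenue stream only modestly when failures are rare.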

  6. Overload cascading failure on complex networks with heterogeneous load redistribution

    NASA Astrophysics Data System (ADS)

    Hou, Yueyi; Xing, Xiaoyun; Li, Menghui; Zeng, An; Wang, Yougui

    2017-09-01

    Many real systems including the Internet, power-grid and financial networks experience rare but large overload cascading failures triggered by small initial shocks. Many models on complex networks have been developed to investigate this phenomenon. Most of these models are based on the load redistribution process and assume that the load on a failed node shifts to nearby nodes in the networks either evenly or according to the load distribution rule before the cascade. Inspired by the fact that real power-grid tends to place the excess load on the nodes with high remaining capacities, we study a heterogeneous load redistribution mechanism in a simplified sandpile model in this paper. We find that weak heterogeneity in load redistribution can effectively mitigate the cascade while strong heterogeneity in load redistribution may even enlarge the size of the final failure. With a parameter θ to control the degree of the redistribution heterogeneity, we identify a rather robust optimal θ∗ = 1. Finally, we find that θ∗ tends to shift to a larger value if the initial sand distribution is homogeneous.
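    The heterogeneous redistribution rule can be sketched for a single failed node whose remaining peers all receive a share (a simplification of the paper's sandpile model on a network). Weighting by remaining capacity raised to the power θ is the paper's control parameter; θ = 0 recovers even sharing, and larger θ concentrates load on high-headroom nodes.

```python
def redistribute(load, capacity, failed, theta):
    """Shift the failed node's load onto the other nodes in proportion
    to (remaining capacity) ** theta; theta = 0 gives even sharing."""
    others = [i for i in range(len(load)) if i != failed]
    headroom = [max(capacity[i] - load[i], 0.0) for i in others]
    weights = [h ** theta for h in headroom]
    total = sum(weights)
    new_load = list(load)
    for i, w in zip(others, weights):
        new_load[i] += load[failed] * w / total
    new_load[failed] = 0.0
    return new_load
```

    In the paper's full model this step is applied to network neighbours and iterated, so that nodes pushed past capacity fail in turn and the cascade size can be measured as a function of θ.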

  7. Wire chamber radiation detector with discharge control

    DOEpatents

    Perez-Mendez, Victor; Mulera, Terrence A.

    1984-01-01

    A wire chamber radiation detector (11) has spaced apart parallel electrodes (16) and grids (17, 18, 19) defining an ignition region (21) in which charged particles (12) or other ionizing radiations initiate brief localized avalanche discharges (93) and defining an adjacent memory region (22) in which sustained glow discharges (94) are initiated by the primary discharges (93). Conductors (29, 32) of the grids (18, 19) at each side of the memory section (22) extend in orthogonal directions enabling readout of the X-Y coordinates of locations at which charged particles (12) were detected by sequentially transmitting pulses to the conductors (29) of one grid (18) while detecting transmissions of the pulses to the orthogonal conductors (36) of the other grid (19) through glow discharges (94). One of the grids (19) bounding the memory region (22) is defined by an array of conductive elements (32) each of which is connected to the associated readout conductor (36) through a separate resistance (37). The wire chamber (11) avoids ambiguities and imprecisions in the readout of coordinates when large numbers of simultaneous or near simultaneous charged particles (12) have been detected. Down time between detection periods and the generation of radio frequency noise are also reduced.

  8. The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability

    NASA Astrophysics Data System (ADS)

    Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, zhanqing

    2018-01-01

    Lightning protection of power systems has traditionally focused on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines or analyzing the effect of faults on grid reliability. As a result, lightning protection designed for general transmission lines can be excessive, yet still insufficient for key lines. In order to solve this problem, an analysis method for lightning strikes on transmission lines aimed at power grid reliability is given. Full-wave process theory is used to analyze lightning back-striking, and a leader propagation model is used to describe the shielding failure of transmission lines. An index of power grid reliability is introduced, and the effect of transmission-line faults on the reliability of the power system is discussed in detail.

  9. CANDID: Companion Analysis and Non-Detection in Interferometric Data

    NASA Astrophysics Data System (ADS)

    Gallenne, A.; Mérand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.

    2015-05-01

    CANDID finds faint companions around stars in interferometric data in the OIFITS format. It allows systematically searching for faint companions in OIFITS data and, if none is found, estimates the detection limit. The tool is based on model fitting and Chi2 minimization, with a grid of starting points for the companion position. It ensures all positions are explored by estimating a posteriori whether the grid is dense enough, and provides an estimate of the optimum grid density.
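    The grid of starting points can be illustrated with a brute-force chi2 scan. This sketch omits CANDID's actual model fitting and a-posteriori density check; it simply evaluates a caller-supplied chi2 function on a square grid of candidate companion positions.

```python
def grid_search(chi2, half_width, step):
    """Scan chi2(x, y) over a square grid of starting points and return
    the (x, y) with the lowest value. CANDID additionally refines each
    starting point with a local fit; that refinement is omitted here."""
    best = None
    x = -half_width
    while x <= half_width:
        y = -half_width
        while y <= half_width:
            value = chi2(x, y)
            if best is None or value < best[0]:
                best = (value, x, y)
            y += step
        x += step
    return best[1], best[2]
```

    The grid density check matters because a too-coarse grid can start every local fit in the basin of the wrong minimum, which is exactly the failure mode the a-posteriori estimate guards against.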

  10. A Wireless Sensor System for Real-Time Measurement of Pressure Profiles at Lower Limb Protheses to Ensure Proper Fitting

    DTIC Science & Technology

    2011-10-01

    been developed. The next step is to develop the base technology into a grid-like mapping sensor, construct the excitation and detection circuits...the project involves advancing the base technology into a grid-like mapping sensor, constructing the excitation and detection circuits, modifying and...further. In conclusion, the screen printing and etching process allows for precise repeatable production of sensing elements for grid fabrication

  11. Cybersecurity Awareness in the Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Franklin, Lyndsey; Le Blanc, Katya L.

    2016-07-10

    We report on a series of interviews and observations conducted with control room dispatchers in a bulk electrical system. These dispatchers must react quickly to incidents as they happen in order to ensure the reliability and safe operation of the power grid. They do not have the time to evaluate incidents for signs of cyber-attack as part of their initial response. Cyber-attack detection involves multiple personnel from a variety of roles at both local and regional levels. Smart grid technology will improve the detection and defense capabilities of the future grid; however, the current infrastructure remains a mixture of old and new equipment which will continue to operate for some time. Thus, research still needs to focus on strategies for the detection of malicious activity on current infrastructure as well as protection and remediation.

  12. A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert

    1996-01-01

    The objective of this work was to enhance the predictive capability of widely used computational fluid dynamic (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure is presented which detects and resolves these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach=2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.

  13. Detection of new-onset choroidal neovascularization using optical coherence tomography: the AMD DOC Study.

    PubMed

    Do, Diana V; Gower, Emily W; Cassard, Sandra D; Boyer, David; Bressler, Neil M; Bressler, Susan B; Heier, Jeffrey S; Jefferys, Joan L; Singerman, Lawrence J; Solomon, Sharon D

    2012-04-01

    To determine the sensitivity of time domain optical coherence tomography (OCT) in detecting conversion to neovascular age-related macular degeneration (AMD) in eyes at high risk for choroidal neovascularization (CNV), compared with detection using fluorescein angiography (FA) as the gold standard. Prospective, multicenter, observational study. Individuals aged ≥50 years with nonneovascular AMD at high risk of progressing to CNV in the study eye and evidence of neovascular AMD in the fellow eye. At study entry and every 3 months through 2 years, participants underwent best-corrected visual acuity, supervised Amsler grid testing, preferential hyperacuity perimetry (PHP) testing, stereoscopic digital fundus photographs with FA, and OCT imaging. A central Reading Center graded all images. The sensitivity of OCT in detecting conversion to neovascular AMD by 2 years, using FA as the reference standard. Secondary outcomes included comparison of sensitivity, specificity, positive predictive value, and negative predictive value of OCT, PHP, and supervised Amsler grid relative to FA for detecting incident CNV. A total of 98 participants were enrolled; 87 (89%) of these individuals either completed the 24-month visit or exited the study after developing CNV. Fifteen (17%) study eyes had incident CNV confirmed on FA by the Reading Center. The sensitivity of each modality for detecting CNV was: OCT 0.40 (95% confidence interval [CI], 0.16-0.68), supervised Amsler grid 0.42 (95% CI, 0.15-0.72), and PHP 0.50 (95% CI, 0.23-0.77). Treatment for incident CNV was recommended by the study investigator in 13 study eyes. Sensitivity of the testing modalities for detection of CNV in these 13 eyes was 0.69 (95% CI, 0.39-0.91) for OCT, 0.50 (95% CI, 0.19-0.81) for supervised Amsler grid, and 0.70 (95% CI, 0.35-0.93) for PHP. Specificity of the OCT was higher than that of the Amsler grid and PHP. 
Time-domain OCT, supervised Amsler grid, and PHP have low to moderate sensitivity for detection of new-onset CNV compared with FA. Optical coherence tomography has greater specificity than Amsler grid or PHP. Among fellow eyes of individuals with unilateral CNV, FA remains the best method to detect new-onset CNV. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  14. Quality assurance in mammography: artifact analysis.

    PubMed

    Hogge, J P; Palmer, C H; Muller, C C; Little, S T; Smith, D C; Fatouros, P P; de Paredes, E S

    1999-01-01

    Evaluation of mammograms for artifacts is essential for mammographic quality assurance. A variety of mammographic artifacts (i.e., variations in mammographic density not caused by true attenuation differences) can occur and can create pseudolesions or mask true abnormalities. Many artifacts are readily identified, whereas others present a true diagnostic challenge. Factors that create artifacts may be related to the processor (eg, static, dirt or excessive developer buildup on the rollers, excessive roller pressure, damp film, scrapes and scratches, incomplete fixing, power failure, contaminated developer), the technologist (eg, improper film handling and loading, improper use of the mammography unit and related equipment, positioning and darkroom errors), the mammography unit (eg, failure of the collimation mirror to rotate, grid inhomogeneity, failure of the reciprocating grid to move, material in the tube housing, compression failure, improper alignment of the compression paddle with the Bucky tray, defective compression paddle), or the patient (e.g., motion, superimposed objects or substances [jewelry, body parts, clothing, hair, implanted medical devices, foreign bodies, substances on the skin]). Familiarity with the broad range of artifacts and the measures required to eliminate them is vital. Careful attention to darkroom cleanliness, care in film handling, regularly scheduled processor maintenance and chemical replenishment, daily quality assurance activities, and careful attention to detail during patient positioning and mammography can reduce or eliminate most mammographic artifacts.

  15. Evaluation of acoustic telemetry grids for determining aquatic animal movement and survival

    USGS Publications Warehouse

    Kraus, Richard T.; Holbrook, Christopher; Vandergoot, Christopher; Stewart, Taylor R.; Faust, Matthew D.; Watkinson, Douglas A.; Charles, Colin; Pegg, Mark; Enders, Eva C.; Krueger, Charles C.

    2018-01-01

    Acoustic telemetry studies have frequently prioritized linear configurations of hydrophone receivers, such as perpendicular from shorelines or across rivers, to detect the presence of tagged aquatic animals. This approach introduces unknown bias when receivers are stationed for convenience at geographic bottlenecks (e.g., at the mouth of an embayment or between islands) as opposed to deployments following a statistical sampling design. We evaluated two-dimensional acoustic receiver arrays (grids: receivers spread uniformly across space) as an alternative approach to provide estimates of survival, movement, and habitat use. Performance of variably-spaced receiver grids (5–25 km spacing) was evaluated by simulating (1) animal tracks as correlated random walks (speed: 0.1–0.9 m/s; turning angle standard deviation: 5–30 degrees); (2) variable tag transmission intervals along each track (nominal delay: 15–300 seconds); and (3) probability of detection of each transmission based on logistic detection range curves (midpoint: 200–1500 m). From simulations, we quantified i) time between successive detections on any receiver (detection time), ii) time between successive detections on different receivers (transit time), and iii) distance between successive detections on different receivers (transit distance). In the most restrictive detection range scenario (200 m), the 95th percentile of transit time was 3.2 days at 5 km grid spacing, 5.7 days at 7 km, and 15.2 days at 25 km; for the 1500 m detection range scenario, it was 0.1 days at 5 km, 0.5 days at 7 km, and 10.8 days at 25 km. These values represented upper bounds on the expected maximum time that an animal could go undetected.
Comparison of the simulations with pilot studies on three fishes (walleye Sander vitreus, common carp Cyprinus carpio, and channel catfish Ictalurus punctatus) from two independent large lake ecosystems (lakes Erie and Winnipeg) revealed shorter detection and transit times than what simulations predicted. By spreading effort uniformly across space, grids can improve understanding of fish migration over the commonly employed receiver line approach, but at increased time cost for maintaining grids.
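
    The simulation design described in this record — correlated random walks plus a logistic detection-range curve — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the time step, slope parameter, and function names are assumptions.

```python
import math
import random

def simulate_track(steps, speed_mps, turn_sd_deg, dt=60.0):
    """Correlated random walk: each step, the heading changes by a
    normally distributed turning angle, then the animal moves ahead."""
    x, y, heading = 0.0, 0.0, 0.0
    track = [(x, y)]
    for _ in range(steps):
        heading += math.radians(random.gauss(0.0, turn_sd_deg))
        x += speed_mps * dt * math.cos(heading)
        y += speed_mps * dt * math.sin(heading)
        track.append((x, y))
    return track

def detection_prob(distance_m, midpoint_m, slope=0.01):
    """Logistic detection-range curve: probability of detecting one
    transmission is 0.5 at the midpoint distance and falls off beyond it."""
    return 1.0 / (1.0 + math.exp(slope * (distance_m - midpoint_m)))
```

    A transmission at each tag ping would then be scored as detected on a receiver with probability `detection_prob(distance, midpoint)`, and detection/transit times computed from the resulting detection record.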

  16. Multi-state time-varying reliability evaluation of smart grid with flexible demand resources utilizing Lz transform

    NASA Astrophysics Data System (ADS)

    Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao

    2017-01-01

    With the expanding proportion of renewable energy generation and the development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, multiple uncertainties of FDRs may influence the reliable and secure operation of the smart grid. Multi-state reliability models for a single FDR and for aggregated FDRs are proposed in this paper, taking into account the responsive abilities of FDRs and random failures of both FDR devices and the information system. The proposed reliability evaluation technique is based on the Lz transform method, which can formulate time-varying reliability indices. A modified IEEE-RTS is utilized as an illustration of the proposed technique.

  17. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; ...

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
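
    The one-dimensional subproblem named in this abstract — locating the jump radius along a ray from the origin in hyperspherical coordinates — can be illustrated with a simple bisection sketch. The `inside` indicator, unit `direction`, and tolerance are hypothetical stand-ins for sampling the actual quantity of interest.

```python
def jump_radius(inside, direction, r_max, tol=1e-6):
    """1D discontinuity detection along a ray: bisect for the radius at
    which `inside` flips from True to False (assumes a single crossing
    between radius 0 and r_max, with `direction` a unit vector)."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        point = tuple(mid * d for d in direction)
        if inside(point):
            lo = mid  # still on the inner side of the discontinuity
        else:
            hi = mid  # crossed the jump; shrink from above
    return 0.5 * (lo + hi)
```

    Evaluating `jump_radius` over a sparse grid of angular directions would, as the abstract describes, build up an approximation of the discontinuity hypersurface as a function of angle.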

  18. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  19. Analysis on the Operation Strategy of the 220kV External Transmission Channel for the Nujiang Power Grid after the Installation of Series Compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoxin; Feng, Peilei; Jan, Lisheng; Dai, Xiaozhong; Cai, Pengcheng

    2018-01-01

    In recent years, Nujiang Prefecture has vigorously developed hydropower, but the grid structure in the northwest of Yunnan Province is not yet well developed, so research on and construction of the power grid lag behind the development of hydropower. In 2015, in view of the difficulty of delivering Nujiang hydropower, the company decided to install a series compensation device on the external transmission channel of the Nujiang power grid in order to improve its transmission capacity. The company planned for the main system-level problems involved, but did not study the regional distribution network in detail. After the installation of series compensation, the Nujiang power grid has a unique structure: the whole network is divided into two parts, a power delivery channel and a load-supply network, and the character of the grid has fundamentally changed, so the original operation strategy is no longer applicable. Particularly noteworthy are the problem of independent (islanded) operation after a failure on the main network and the question of how to avoid emergencies in the locally series-compensated section, both of which severely test the tolerance of the regional power grid. Through analysis of existing data and simulation, this paper provides a reference for the stable operation of the power grid after the installation of series compensation.

  20. Accelerated Thermal Cycling and Failure Mechanisms

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    1999-01-01

    This paper reviews the accelerated thermal cycling test methods that are currently used by industry to characterize the interconnect reliability of commercial-off-the-shelf (COTS) ball grid array (BGA) and chip scale package (CSP) assemblies.

  1. Occupancy change detection system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-01

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes instructions for producing an occupancy grid map of an environment around the robot, scanning the environment to generate a current obstacle map relative to a current robot position, and converting the current obstacle map to a current occupancy grid map. The instructions also include processing each grid cell in the occupancy grid map. Within the processing of each grid cell, the instructions include comparing each grid cell in the occupancy grid map to a corresponding grid cell in the current occupancy grid map. For grid cells with a difference, the instructions include defining a change vector for each changed grid cell, wherein the change vector includes a direction from the robot to the changed grid cell and a range from the robot to the changed grid cell.
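
    The grid-comparison step of this patent can be sketched as follows, assuming Python lists of 0/1 cells for the two occupancy grids and a robot position given in cell coordinates; the function name and cell-size parameter are illustrative assumptions, not the patent's implementation.

```python
import math

def change_vectors(prev_grid, curr_grid, robot_pos, cell_size=1.0):
    """Compare the stored occupancy grid map with the current one; for each
    changed cell, return ((row, col), direction_rad, range) where direction
    and range are measured from the robot to the changed cell."""
    rx, ry = robot_pos  # robot position: (column, row) in cell units
    vectors = []
    for i, (prev_row, curr_row) in enumerate(zip(prev_grid, curr_grid)):
        for j, (p, c) in enumerate(zip(prev_row, curr_row)):
            if p != c:  # occupancy changed in this grid cell
                dx = (j - rx) * cell_size
                dy = (i - ry) * cell_size
                vectors.append(((i, j), math.atan2(dy, dx), math.hypot(dx, dy)))
    return vectors
```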

  2. Groundwater-quality data in the Monterey–Salinas shallow aquifer study unit, 2013: Results from the California GAMA Program

    USGS Publications Warehouse

    Goldrath, Dara A.; Kulongoski, Justin T.; Davis, Tracy A.

    2016-09-01

    Groundwater quality in the 3,016-square-mile Monterey–Salinas Shallow Aquifer study unit was investigated by the U.S. Geological Survey (USGS) from October 2012 to May 2013 as part of the California State Water Resources Control Board Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project. The GAMA Monterey–Salinas Shallow Aquifer study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the shallow-aquifer systems in parts of Monterey and San Luis Obispo Counties and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The shallow-aquifer system in the Monterey–Salinas Shallow Aquifer study unit was defined as those parts of the aquifer system shallower than the perforated depth intervals of public-supply wells, which generally corresponds to the part of the aquifer system used by domestic wells. Groundwater quality in the shallow aquifers can differ from the quality in the deeper water-bearing zones; shallow groundwater can be more vulnerable to surficial contamination. Samples were collected from 170 sites that were selected by using a spatially distributed, randomized grid-based method. The study unit was divided into 4 study areas, each study area was divided into grid cells, and 1 well was sampled in each of the 100 grid cells (grid wells). The grid wells were domestic wells or wells with screen depths similar to those in nearby domestic wells.
A greater spatial density of data was achieved in 2 of the study areas by dividing grid cells in those study areas into subcells, and in 70 subcells, samples were collected from exterior faucets at sites where there were domestic wells or wells with screen depths similar to those in nearby domestic wells (shallow-well tap sites). Field water-quality indicators (dissolved oxygen, water temperature, pH, and specific conductance) were measured, and samples for analysis of inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids, and alkalinity) were collected at all 170 sites. In addition to these constituents, the samples from grid wells were analyzed for organic constituents (volatile organic compounds, pesticides and pesticide degradates), constituents of special interest (perchlorate and N-nitrosodimethylamine, or NDMA), radioactive constituents (radon-222 and gross-alpha and gross-beta radioactivity), and geochemical and age-dating tracers (stable isotopes of carbon in dissolved inorganic carbon, carbon-14 abundances, stable isotopes of hydrogen and oxygen in water, and tritium activities). Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 11 percent of the wells in the Monterey–Salinas Shallow Aquifer study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. With the exception of trace elements, blanks rarely contained detectable concentrations of any constituent, indicating that contamination from sample-collection procedures was not a significant source of bias in the data for the groundwater samples. Low concentrations of some trace elements were detected in blanks; therefore, the data were re-censored at higher reporting levels. Replicate samples generally were within the limits of acceptable analytical reproducibility.
The median values of matrix-spike recoveries were within the acceptable range (70 to 130 percent) for the volatile organic compounds (VOCs) and N-nitrosodimethylamine (NDMA), but were only approximately 64 percent for pesticides and pesticide degradates. The sample-collection protocols used in this study were designed to obtain representative samples of groundwater. The quality of groundwater can differ from the quality of drinking water because water chemistry can change as a result of contact with plumbing systems or the atmosphere; because of treatment, disinfection, or blending with water from other sources; or some combination of these. Water quality in domestic wells is not regulated in California; however, to provide context for the water-quality data presented in this report, results were compared to benchmarks established for drinking-water quality. The primary comparison benchmarks were maximum contaminant levels established by the U.S. Environmental Protection Agency and the State of California (MCL-US and MCL-CA, respectively). Non-regulatory benchmarks were used for constituents without maximum contaminant levels (MCLs), including Health Based Screening Levels (HBSLs) developed by the USGS and State of California secondary maximum contaminant levels (SMCL-CA) and notification levels. Most constituents detected in samples from the Monterey–Salinas Shallow Aquifer study unit had concentrations less than their respective benchmarks. Of the 148 organic constituents analyzed in the 100 grid-well samples, 38 were detected, and all concentrations were less than the benchmarks. Volatile organic compounds were detected in 26 of the grid wells, and pesticides and pesticide degradates were detected in 28 grid wells.
The special-interest constituent NDMA was detected above the HBSL in three samples, one of which also had a perchlorate concentration greater than the MCL-CA. Of the inorganic constituents, 6 were detected at concentrations above their respective MCL benchmarks in grid-well samples: arsenic (5 grid wells above the MCL of 10 micrograms per liter, μg/L), selenium (3 grid wells, MCL of 50 μg/L), uranium (4 grid wells, MCL of 30 μg/L), nitrate (16 grid wells, MCL of 10 milligrams per liter, mg/L), adjusted gross alpha particle activity (10 grid wells, MCL of 15 picocuries per liter, pCi/L), and gross beta particle activity (1 grid well, MCL of 50 pCi/L). An additional 4 inorganic constituents were detected at concentrations above their respective HBSL benchmarks in grid-well samples: boron (1 grid well above the HBSL of 6,000 μg/L), manganese (8 grid wells, HBSL of 300 μg/L), molybdenum (6 grid wells, HBSL of 40 μg/L), and strontium (6 grid wells, HBSL of 4,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health based SMCL benchmarks in grid-well samples: iron (9 grid wells above the SMCL of 300 μg/L), chloride (7 grid wells, SMCL of 500 mg/L), sulfate (14 grid wells, SMCL of 500 mg/L), and total dissolved solids (27 grid wells, SMCL of 1,000 mg/L). Of the inorganic constituents analyzed in the 70 shallow-well tap sites, 10 were detected at concentrations above the benchmarks. Of the inorganic constituents, 3 were detected at concentrations above their respective MCL benchmarks in shallow-well tap sites: arsenic (2 shallow-well tap sites above the MCL of 10 μg/L), uranium (2 shallow-well tap sites, MCL of 30 μg/L), and nitrate (24 shallow-well tap sites, MCL of 10 mg/L).
An additional 3 inorganic constituents were detected above their respective HBSL benchmarks in shallow-well tap sites: manganese (4 shallow-well tap sites above the HBSL of 300 μg/L), molybdenum (4 shallow-well tap sites, HBSL of 40 μg/L), and zinc (2 shallow-well tap sites, HBSL of 2,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health based SMCL benchmarks in shallow-well tap sites: iron (6 shallow-well tap sites above the SMCL of 300 μg/L), chloride (1 shallow-well tap site, SMCL of 500 mg/L), sulfate (9 shallow-well tap sites, SMCL of 500 mg/L), and total dissolved solids (15 shallow-well tap sites, SMCL of 1,000 mg/L).

  3. Impingement-Current-Erosion Characteristics of Accelerator Grids on Two-Grid Ion Thrusters

    NASA Technical Reports Server (NTRS)

    Barker, Timothy

    1996-01-01

    Accelerator grid sputter erosion resulting from charge-exchange-ion impingement is considered to be a primary cause of failure for electrostatic ion thrusters. An experimental method was developed and implemented to measure erosion characteristics of ion-thruster accel-grids for two-grid systems as a function of beam current, accel-grid potential, and facility background pressure. Intricate accelerator grid erosion patterns that are typically produced in a short time (a few hours) are shown. Accelerator grid volumetric and depth-erosion rates are calculated from these erosion patterns and reported for each of the parameters investigated. A simple theoretical volumetric erosion model yields results that are compared to experimental findings. Results from the model and experiments agree to within 10%, thereby verifying the testing technique. In general, the local distribution of erosion is concentrated in pits between three adjacent holes and trenches that join pits. The shapes of the pits and trenches are shown to be dependent upon operating conditions. Increases in beam current and the accel-grid voltage magnitude lead to deeper pits and trenches. Competing effects cause complex changes in depth-erosion rates as background pressure is increased. Shape factors that describe pits and trenches (i.e., the ratio of the average erosion width to the maximum possible width) are also affected in relatively complex ways by changes in beam current, accel-grid voltage magnitude, and background pressure. In all cases, however, gross volumetric erosion rates agree with theoretical predictions.

  4. Design and Implementation of a C++ Multithreaded Operational Tool for the Generation of Detection Time Grids in 2D for P- and S-waves taking into Consideration Seismic Network Topology and Data Latency

    NASA Astrophysics Data System (ADS)

    Sardina, V.

    2017-12-01

    The Pacific Tsunami Warning Center's round-the-clock operations rely on the rapid determination of the source parameters of earthquakes occurring around the world. To rapidly estimate source parameters such as earthquake location and magnitude, the PTWC analyzes data streams ingested in near-real time from a global network of more than 700 seismic stations. Both the density of this network and the data latency of its member stations at any given time have a direct impact on the speed at which the PTWC scientists on duty can locate an earthquake and estimate its magnitude. In this context, it is operationally advantageous to be able to assess how quickly the PTWC operational system can reasonably detect and locate an earthquake, estimate its magnitude, and send the corresponding tsunami message whenever appropriate. For this purpose, we designed and implemented a multithreaded C++ software package to generate detection time grids for both P- and S-waves after taking into consideration the seismic network topology and the data latency of its member stations. We first encapsulate all the parameters of interest at a given geographic point, such as geographic coordinates, P- and S-wave detection times at at least a minimum number of stations, and maximum allowed azimuth gap, into a DetectionTimePoint class. Then we apply composition and inheritance to define a DetectionTimeLine class that handles a vector of DetectionTimePoint objects along a given latitude. A DetectionTimesGrid class in turn handles the dynamic allocation of new TravelTimeLine objects and assigns the calculation of the corresponding P- and S-wave detection times to new threads. Finally, we added a GUI that allows the user to interactively set all initial calculation parameters and output options.
Initial testing on an eight-core system shows that generation of a global 2D grid at 1 degree resolution, with detection on at least 5 stations and no azimuth gap restriction, takes under 25 seconds. Under the same initial conditions, generation of a 2D grid at 0.1 degree resolution (2.6 million grid points) takes no more than 22 minutes. These preliminary results show a significant gain in grid generation speed when compared to other implementations via either scripts or previous versions of the C++ code that did not implement multithreading.
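
    In Python rather than C++, the line-per-thread decomposition described in this record might look like the following sketch. The haversine distance, constant P-wave velocity, and `(lat, lon, latency)` station tuples are stand-in assumptions; a real system would use a proper travel-time model.

```python
import math
from concurrent.futures import ThreadPoolExecutor

P_VELOCITY_KM_S = 8.0  # illustrative constant, not a real travel-time model

def p_travel_time(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine) divided by a constant P velocity."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist_km = 6371.0 * 2 * math.asin(math.sqrt(a))
    return dist_km / P_VELOCITY_KM_S

def detection_time(lat, lon, stations, min_stations=5):
    """Time until at least `min_stations` stations have seen the P-wave,
    counting each station's data latency on top of its travel time."""
    arrivals = sorted(p_travel_time(lat, lon, s_lat, s_lon) + latency
                      for s_lat, s_lon, latency in stations)
    return arrivals[min_stations - 1]

def detection_time_line(lat, lons, stations, min_stations=5):
    """All detection times along one latitude (one DetectionTimeLine)."""
    return [detection_time(lat, lon, stations, min_stations) for lon in lons]

def detection_time_grid(lats, lons, stations, min_stations=5, workers=8):
    """One worker task per latitude line, mirroring the per-line threading."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(
            lambda lat: detection_time_line(lat, lons, stations, min_stations),
            lats))
```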

  5. Self-Healing Networks: Redundancy and Structure

    PubMed Central

    Quattrociocchi, Walter; Caldarelli, Guido; Scala, Antonio

    2014-01-01

    We introduce the concept of self-healing in the field of complex networks modelling; in particular, self-healing capabilities are implemented through distributed communication protocols that exploit redundant links to recover the connectivity of the system. We then analyze the effect of the level of redundancy on the resilience to multiple failures; in particular, we measure the fraction of nodes still served for increasing levels of network damages. Finally, we study the effects of redundancy under different connectivity patterns—from planar grids, to small-world, up to scale-free networks—on healing performances. Small-world topologies show that introducing some long-range connections in planar grids greatly enhances the resilience to multiple failures with performances comparable to the case of the most resilient (and least realistic) scale-free structures. Obvious applications of self-healing are in the important field of infrastructural networks like gas, power, water, oil distribution systems. PMID:24533065
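
    The paper's "fraction of nodes still served" metric reduces to a reachability check from the supply source. A minimal sketch, assuming an undirected edge list and a single source (both simplifications of the paper's setting):

```python
from collections import deque

def served_fraction(nodes, edges, source):
    """Fraction of nodes still reachable from the supply source after
    failures, computed by breadth-first search over the surviving edges."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) / len(nodes)
```

    Adding a redundant long-range edge to the surviving edge list and recomputing `served_fraction` reproduces, in miniature, the paper's comparison between planar grids and small-world topologies.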

  6. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The key issues in performing such analyses efficiently on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
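
    A reflected mixed-radix Gray code gives one such Hamiltonian path: consecutive grid points differ in exactly one input, by exactly one step, so each solve can warm-start from its neighbor's solution. This sketch illustrates the general construction and is not the paper's algorithm.

```python
def gray_path(radices):
    """Reflected mixed-radix Gray-code path over a grid with the given
    number of levels per input: consecutive tuples differ in exactly one
    coordinate, by exactly +/-1 (a Hamiltonian path on the grid graph)."""
    path = [()]
    for base in reversed(radices):
        new_path = []
        for d in range(base):
            # Reflect the tail path on odd digits so block boundaries
            # change only the newly prepended coordinate.
            block = path if d % 2 == 0 else path[::-1]
            new_path.extend((d,) + t for t in block)
        path = new_path
    return path
```

    For a 2x3 grid this yields (0,0), (0,1), (0,2), (1,2), (1,1), (1,0): a snake-like scan in two dimensions, generalizing to any number of inputs and levels.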

  7. Abruptness of Cascade Failures in Power Grids

    PubMed Central

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into “super-grids”. PMID:24424239

  8. Best Practices for Unstructured Grid Shock-Fitting

    NASA Technical Reports Server (NTRS)

    McCoud, Peter L.

    2017-01-01

    Unstructured grid solvers have well-known issues predicting surface heat fluxes when strong shocks are present. Various efforts have been made to address the underlying numerical issues that cause the erroneous predictions. The present work addresses some of the shortcomings of unstructured grid solvers, not by addressing the numerics, but by applying structured grid best practices to unstructured grids. A methodology for robust shock detection and shock-fitting is outlined and applied to production-relevant cases. Results

  9. A Framework for Testing Automated Detection, Diagnosis, and Remediation Systems on the Smart Grid

    NASA Technical Reports Server (NTRS)

    Lau, Shing-hon

    2011-01-01

    America's electrical grid is currently undergoing a multi-billion dollar modernization effort aimed at producing a highly reliable critical national infrastructure for power - a Smart Grid. While the goals for the Smart Grid include upgrades to accommodate large quantities of clean, but transient, renewable energy and upgrades to provide customers with real-time pricing information, perhaps the most important objective is to create an electrical grid with a greatly increased robustness.

  10. Health management system for rocket engines

    NASA Technical Reports Server (NTRS)

    Nemeth, Edward

    1990-01-01

    The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.

  11. An Optimal Current Controller Design for a Grid Connected Inverter to Improve Power Quality and Test Commercial PV Inverters.

    PubMed

    Algaddafi, Ali; Altuwayjiri, Saud A; Ahmed, Oday A; Daho, Ibrahim

    2017-01-01

    Grid connected inverters play a crucial role in generating energy to be fed into the grid. A filter is commonly used to suppress the switching-frequency harmonics produced by the inverter; this filter is passive, either an L-filter or an LCL-filter. The latter is smaller than the L-filter, but choosing optimal values for the LCL-filter is challenging because of resonance, which can affect stability. This paper presents a simple inverter controller design with an L-filter. The control topology is simple and easily applied using traditional control theory. Fast Fourier Transform analysis is used to compare different grid connected inverter control topologies. The modelled grid connected inverter with the proposed controller complies with the IEEE-1547 standard, and the total harmonic distortion of the output current of the modelled inverter was just 0.25%, with an improved output waveform. Experimental work on a commercial PV inverter is then presented, including the effect of strong and weak grid connection. Inverter effects on a resistive load connected at the point of common coupling are presented. Results show that the voltage and current of the resistive load increase when the grid is interrupted, which may cause failure of, or damage to, connected appliances.
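
    Total harmonic distortion of a sampled current waveform can be estimated from single-frequency DFT bins, as in this stdlib-only sketch. The 50 Hz fundamental, 20-harmonic cutoff, and coherent (integer-period) sampling are illustrative assumptions, not details from the paper.

```python
import math

def harmonic_magnitude(samples, sample_rate, freq):
    """Amplitude of one frequency component via a single-bin DFT
    (exact when the record holds an integer number of periods)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

def thd(samples, sample_rate, fundamental=50.0, n_harmonics=20):
    """Total harmonic distortion: RMS sum of harmonics 2..N divided by
    the fundamental's amplitude."""
    h1 = harmonic_magnitude(samples, sample_rate, fundamental)
    harmonics = [harmonic_magnitude(samples, sample_rate, m * fundamental)
                 for m in range(2, n_harmonics + 1)]
    return math.sqrt(sum(h * h for h in harmonics)) / h1
```

    For example, a unit 50 Hz sine with a 5% third harmonic yields a THD of about 0.05 (5%).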

  12. An Optimal Current Controller Design for a Grid Connected Inverter to Improve Power Quality and Test Commercial PV Inverters

    PubMed Central

    Altuwayjiri, Saud A.; Ahmed, Oday A.; Daho, Ibrahim

    2017-01-01

    Grid connected inverters play a crucial role in generating energy to be fed into the grid. A filter is commonly used to suppress the switching-frequency harmonics produced by the inverter; this filter is passive, either an L-filter or an LCL-filter. The latter is smaller than the L-filter, but choosing optimal values for the LCL-filter is challenging because of resonance, which can affect stability. This paper presents a simple inverter controller design with an L-filter. The control topology is simple and easily applied using traditional control theory. Fast Fourier Transform analysis is used to compare different grid connected inverter control topologies. The modelled grid connected inverter with the proposed controller complies with the IEEE-1547 standard, and the total harmonic distortion of the output current of the modelled inverter was just 0.25%, with an improved output waveform. Experimental work on a commercial PV inverter is then presented, including the effect of strong and weak grid connection. Inverter effects on a resistive load connected at the point of common coupling are presented. Results show that the voltage and current of the resistive load increase when the grid is interrupted, which may cause failure of, or damage to, connected appliances. PMID:28540362

  13. Groundwater-quality data in the North San Francisco Bay Shallow Aquifer study unit, 2012: results from the California GAMA Program

    USGS Publications Warehouse

    Bennett, George L.; Fram, Miranda S.

    2014-01-01

    For constituents with non-regulatory benchmarks set for aesthetic concerns, results from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in 13 grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in two grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in two grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. Total dissolved solids (TDS) concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 15 grid wells, and concentrations in 4 of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.
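    The screening described above is simple threshold comparison against recommended and upper benchmarks. A minimal sketch (benchmark values taken from the abstract; the constituent keys, helper name, and sample values are hypothetical):

```python
# Sketch: flag a grid-well measurement against the SMCL-CA benchmarks quoted
# above. Benchmark values are from the abstract; sample inputs are made up.
SMCL_CA = {
    "iron_ug_per_L": {"recommended": 300},
    "chloride_mg_per_L": {"recommended": 250},
    "sulfate_mg_per_L": {"recommended": 250, "upper": 500},
    "tds_mg_per_L": {"recommended": 500, "upper": 1000},
}

def classify(constituent, value):
    """Return which SMCL-CA benchmark(s) a measured value exceeds."""
    bench = SMCL_CA[constituent]
    if "upper" in bench and value > bench["upper"]:
        return "exceeds upper benchmark"
    if value > bench["recommended"]:
        return "exceeds recommended benchmark"
    return "within benchmarks"

print(classify("tds_mg_per_L", 1200))     # exceeds upper benchmark
print(classify("sulfate_mg_per_L", 300))  # exceeds recommended benchmark
```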

  14. Influence of grid control and object detection on radiation exposure and image quality using mobile C-arms - first results.

    PubMed

    Gosch, D; Ratzmer, A; Berauer, P; Kahn, T

    2007-09-01

    The objective of this study was to examine the extent to which the image quality on mobile C-arms can be improved by an innovative exposure rate control system (grid control). In addition, the possible dose reduction in the pulsed fluoroscopy mode using 25 pulses/sec produced by automatic adjustment of the pulse rate through motion detection was to be determined. As opposed to conventional exposure rate control systems, which use a measuring circle in the center of the field of view, grid control is based on a fine mesh of square cells which are overlaid on the entire fluoroscopic image. The system uses only those cells for exposure control that are covered by the object to be visualized. This is intended to ensure optimally exposed images, regardless of the size, shape and position of the object to be visualized. The system also automatically detects any motion of the object. If a pulse rate of 25 pulses/sec is selected and no changes in the image are observed, the pulse rate used for pulsed fluoroscopy is gradually reduced. This may decrease the radiation exposure. The influence of grid control on image quality was examined using an anthropomorphic phantom. The dose reduction achieved with the help of object detection was determined by evaluating the examination data of 146 patients from 5 different countries. The image of the static phantom made with grid control was always optimally exposed, regardless of the position of the object to be visualized. The average dose reduction when using 25 pulses/sec resulting from object detection and automatic down-pulsing was 21 %, and the maximum dose reduction was 60 %. Grid control facilitates C-arm operation, since optimum image exposure can be obtained independently of object positioning. Object detection may lead to a reduction in radiation exposure for the patient and operating staff.

  15. An adaptive method for a model of two-phase reactive flow on overlapping grids

    NASA Astrophysics Data System (ADS)

    Schwendeman, D. W.

    2008-11-01

    A two-phase model of heterogeneous explosives is handled computationally by a new numerical approach that is a modification of the standard Godunov scheme. The approach generates well-resolved and accurate solutions using adaptive mesh refinement on overlapping grids, and treats rationally the nozzling terms that render the otherwise hyperbolic model incapable of a conservative representation. The evolution and structure of detonation waves for a variety of one and two-dimensional configurations will be discussed with a focus given to problems of detonation diffraction and failure.

  16. Large-scale data analysis of power grid resilience across multiple US service regions

    NASA Astrophysics Data System (ADS)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
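    Findings like "the top 20% of failures interrupted 84% of services" are statements about how concentrated impact is across events. A small sketch of that statistic on synthetic, heavy-tailed outage sizes (not the study's data):

```python
import numpy as np

# Sketch: what share of total customer interruptions do the largest events
# carry? The outage sizes below are synthetic (heavy-tailed), purely to
# illustrate the computation behind the concentration figures quoted above.
rng = np.random.default_rng(0)
customers_interrupted = rng.pareto(1.5, size=1000) * 100.0

def top_share(sizes, top_fraction=0.2):
    """Fraction of total impact carried by the largest `top_fraction` of events."""
    s = np.sort(np.asarray(sizes))[::-1]
    k = max(1, int(len(s) * top_fraction))
    return float(s[:k].sum() / s.sum())

print(f"top 20% of failures -> {top_share(customers_interrupted):.0%} of total impact")
```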

  17. Accelerated Thermal Cycling and Failure Mechanisms for BGA and CSP Assemblies

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2000-01-01

    This paper reviews the accelerated thermal cycling test methods that are currently used by industry to characterize the interconnect reliability of commercial-off-the-shelf (COTS) ball grid array (BGA) and chip scale package (CSP) assemblies. For CSPs, the failure mechanisms induced by accelerated cycling differed from conventional surface mount (SM) failures. Examples of unrealistic life projections for other CSPs are also presented. The cumulative cycles to failure for ceramic BGA assemblies tested under different conditions, including plots of their two Weibull parameters, are presented. The results are for cycles in the range of -30 C to 100 C, -55 C to 100 C, and -55 C to 125 C. Failure mechanisms as well as cycles to failure for thermal shock and thermal cycling conditions in the range of -55 C to 125 C were compared. Projection to other temperature cycling ranges using a modified Coffin-Manson relationship is also presented.
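    The Coffin-Manson projection mentioned at the end scales cycles to failure by a power of the temperature-range ratio. A hedged sketch of the basic form only (the paper's modified relationship and fitted constants are not reproduced; the exponent m = 2.5 is a typical solder-joint value assumed for illustration, as is the 1000-cycle life):

```python
# Sketch of projecting cycles to failure between thermal-cycling ranges with
# basic Coffin-Manson scaling: N_target = N_ref * (dT_ref / dT_target)^m.
# m = 2.5 and the 1000-cycle reference life are assumptions for illustration.
def project_cycles(n_fail_ref, dT_ref, dT_target, m=2.5):
    """Project cycles to failure from a reference delta-T to a target delta-T."""
    return n_fail_ref * (dT_ref / dT_target) ** m

# e.g. a hypothetical 1000-cycle life over -55..125 C (dT = 180 C) projected
# to the milder -30..100 C range (dT = 130 C):
print(round(project_cycles(1000, 180, 130)))   # ~2256 cycles
```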

  18. Islanding the power grid on the transmission level: less connections for more security

    PubMed Central

    Mureddu, Mario; Caldarelli, Guido; Damiano, Alfonso; Scala, Antonio; Meyer-Ortmanns, Hildegard

    2016-01-01

    Islanding is known as a management procedure of the power system that is implemented at the distribution level to preserve sensitive loads from outages and to guarantee continuity of electricity supply when a high amount of distributed generation occurs. In this paper we study islanding at the level of the transmission grid and show that it is a suitable measure to enhance energy security and grid resilience. We consider the German and Italian transmission grids. We remove links either randomly, to mimic random failure events, or according to a topological characteristic, their so-called betweenness centrality, to mimic an intentional attack, and test whether the resulting fragments are self-sustainable. We test this option via the tool of optimized DC power flow equations. When transmission lines are removed according to their betweenness centrality, the resulting islands have a higher chance of being dynamically self-sustainable than for a random removal. Fewer connections may even increase the grid’s stability. These facts should be taken into account in the design of future power grids. PMID:27713509
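    The link-removal experiment can be sketched on a toy grid topology. The stdlib-only code below approximates edge betweenness centrality by counting one BFS shortest path per node pair, removes the most central lines, and counts the resulting islands; the self-sustainability check via optimized DC power flow is omitted:

```python
from collections import defaultdict, deque
from itertools import product

def grid_edges(n):
    """Edges of an n x n grid graph (toy stand-in for a transmission topology)."""
    edges = []
    for x, y in product(range(n), repeat=2):
        if x + 1 < n:
            edges.append(((x, y), (x + 1, y)))
        if y + 1 < n:
            edges.append(((x, y), (x, y + 1)))
    return edges

def components(nodes, edges):
    """Connected components ('islands') via BFS."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            for v in adj[q.popleft()]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    q.append(v)
        comps.append(comp)
    return comps

def edge_betweenness(nodes, edges):
    """Crude edge betweenness: count one BFS shortest path per node pair."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    score = defaultdict(int)
    nodes = list(nodes)
    for i, s in enumerate(nodes):
        prev = {s: None}                       # BFS tree rooted at s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        for t in nodes[i + 1:]:                # walk each path back to s
            u = t
            while u in prev and prev[u] is not None:
                score[frozenset((u, prev[u]))] += 1
                u = prev[u]
    return score

nodes = list(product(range(5), repeat=2))
edges = grid_edges(5)
for _ in range(8):                             # targeted attack: 8 central lines
    score = edge_betweenness(nodes, edges)
    worst = max(score, key=score.get)
    edges = [e for e in edges if frozenset(e) != worst]
print(len(components(nodes, edges)), "islands after 8 targeted removals")
```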

  19. Data distribution service-based interoperability framework for smart grid testbed infrastructure

    DOE PAGES

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    2016-03-02

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
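    The data-centric idea can be illustrated with a toy in-process bus: participants publish typed samples to named topics and never address each other directly, so any node can join or fail independently. Real DDS adds QoS policies, discovery, and network transport, none of which are modeled here; all names below are made up:

```python
from collections import defaultdict

# Toy data-centric bus in the DDS spirit: publishers write samples to topics,
# subscribers receive them by topic name, and the bus keeps the latest sample
# so late joiners still see current state. Purely illustrative.
class DataBus:
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.last_sample = {}                  # topic -> latest value

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        if topic in self.last_sample:          # late joiners see current state
            callback(self.last_sample[topic])

    def publish(self, topic, sample):
        self.last_sample[topic] = sample
        for cb in self.subscribers[topic]:
            cb(sample)

bus = DataBus()
readings = []
bus.subscribe("feeder1/voltage", readings.append)
bus.publish("feeder1/voltage", 11.02)
bus.publish("feeder1/voltage", 10.97)
print(readings)   # [11.02, 10.97]
```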

  20. Islanding the power grid on the transmission level: less connections for more security

    NASA Astrophysics Data System (ADS)

    Mureddu, Mario; Caldarelli, Guido; Damiano, Alfonso; Scala, Antonio; Meyer-Ortmanns, Hildegard

    2016-10-01

    Islanding is known as a management procedure of the power system that is implemented at the distribution level to preserve sensible loads from outages and to guarantee the continuity in electricity supply, when a high amount of distributed generation occurs. In this paper we study islanding on the level of the transmission grid and shall show that it is a suitable measure to enhance energy security and grid resilience. We consider the German and Italian transmission grids. We remove links either randomly to mimic random failure events, or according to a topological characteristic, their so-called betweenness centrality, to mimic an intentional attack and test whether the resulting fragments are self-sustainable. We test this option via the tool of optimized DC power flow equations. When transmission lines are removed according to their betweenness centrality, the resulting islands have a higher chance of being dynamically self-sustainable than for a random removal. Less connections may even increase the grid’s stability. These facts should be taken into account in the design of future power grids.

  1. [Comparison of Preferential Hyperacuity Perimeter (PHP) test and Amsler grid test in the diagnosis of different stages of age-related macular degeneration].

    PubMed

    Kampmeier, J; Zorn, M M; Lang, G K; Botros, Y T; Lang, G E

    2006-09-01

    Age-related macular degeneration (ARMD) is the leading cause of blindness in people over 65 years of age. A rapid loss of vision occurs especially in cases with choroidal neovascularisation. Early detection of ARMD and timely treatment are mandatory. We prospectively studied the results of two diagnostic self tests for the early detection of metamorphopsia and scotoma, the PHP test and the Amsler grid test, in different stages of ARMD. Patients with ARMD and best corrected visual acuity of 6/30 or better (Snellen charts) were examined with a standardised protocol, including supervised Amsler grid examination and PHP, a new device for metamorphopsia or scotoma measurement based on the hyperacuity phenomenon in the central 14 degrees of the visual field. The stages of ARMD were independently graded in a masked fashion by stereoscopic ophthalmoscopy, stereoscopic fundus colour photographs, fluorescein angiography, and OCT. The patients were subdivided into 3 non-neovascular groups [early, late (RPE atrophy > 175 microm) and geographic atrophy], a neovascular group (classic and occult CNV) and an age-matched control group (healthy volunteers). 140 patients, with ages ranging from 50 to 90 years (median 68 years), were included in the study. Best corrected visual acuity ranged from 6/30 to 6/6 with a median of 6/12. 95 patients were diagnosed as non-neovascular ARMD. Thirty eyes had early ARMD (9 tested positive by the PHP test and 9 by the Amsler grid test), and 50 eyes had late ARMD (positive: PHP test 23, Amsler grid test 26). The group with geographic atrophy consisted of 15 eyes (positive: PHP test 13, Amsler grid test 10). Forty-five patients presented with neovascular ARMD (positive: PHP test 38, Amsler grid test 36), and 34 volunteers served as the control group (positive: PHP test 1, Amsler grid test 5). The PHP and Amsler grid tests revealed comparable results in detecting metamorphopsia and scotoma in early ARMD (30 vs. 30 %) and late ARMD (46 vs. 52 %). However, the PHP test more often revealed disease-related functional changes in the groups of geographic atrophy (87 vs. 67 %) and neovascular ARMD (84 vs. 80 %). This implies that the PHP and Amsler grid self tests are useful tools for detection of ARMD and that the PHP test has greater sensitivity in the groups of geographic atrophy and neovascular AMD.
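    The detection rates quoted follow directly from the reported counts (positives divided by eyes per group, for each test); this just reproduces them:

```python
# Reproduce the per-group detection rates from the counts reported above.
groups = {
    "early ARMD":         (30, 9, 9),    # (eyes, PHP positives, Amsler positives)
    "late ARMD":          (50, 23, 26),
    "geographic atrophy": (15, 13, 10),
    "neovascular ARMD":   (45, 38, 36),
}
for name, (n, php, amsler) in groups.items():
    print(f"{name}: PHP {php / n:.0%} vs Amsler {amsler / n:.0%}")
```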

  2. Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant

    PubMed Central

    Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.

    2016-01-01

    There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant’s components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid. PMID:27240365

  3. Alternating current long range alpha particle detector

    DOEpatents

    MacArthur, Duncan W.; McAtee, James L.

    1993-01-01

    An alpha particle detector, utilizing alternating currents, which is capable of detecting alpha particles from distinct sources. The use of alternating currents allows use of simpler ac circuits which, in turn, are not susceptible to dc error components. It also allows the benefit of gas gain, if desired. In the invention, a voltage source creates an electric field between two conductive grids, and between the grids and a conductive enclosure. Air containing air ions created by collision with alpha particles is drawn into the enclosure and detected. In some embodiments, the air flow into the enclosure is interrupted, creating an alternating flow of ions. In another embodiment, a modulated voltage is applied to the grid, also modulating the detection of ions.

  4. Alternating current long range alpha particle detector

    DOEpatents

    MacArthur, D.W.; McAtee, J.L.

    1993-02-16

    An alpha particle detector, utilizing alternating currents, which is capable of detecting alpha particles from distinct sources. The use of alternating currents allows use of simpler ac circuits which, in turn, are not susceptible to dc error components. It also allows the benefit of gas gain, if desired. In the invention, a voltage source creates an electric field between two conductive grids, and between the grids and a conductive enclosure. Air containing air ions created by collision with alpha particles is drawn into the enclosure and detected. In some embodiments, the air flow into the enclosure is interrupted, creating an alternating flow of ions. In another embodiment, a modulated voltage is applied to the grid, also modulating the detection of ions.

  5. Multi-Dimensional Damage Detection

    NASA Technical Reports Server (NTRS)

    Gibson, Tracy L. (Inventor); Williams, Martha K. (Inventor); Roberson, Luke B. (Inventor); Lewis, Mark E. (Inventor); Snyder, Sarah J. (Inventor); Medelius, Pedro J. (Inventor)

    2016-01-01

    Methods and systems may provide for a structure having a plurality of interconnected panels, wherein each panel has a plurality of detection layers separated from one another by one or more non-detection layers. The plurality of detection layers may form a grid of conductive traces. Additionally, a monitor may be coupled to each grid of conductive traces, wherein the monitor is configured to detect damage to the plurality of interconnected panels in response to an electrical property change with respect to one or more of the conductive traces. In one example, the structure is part of an inflatable space platform such as a spacecraft or habitat.

  6. Real-Time Smart Grids Control for Preventing Cascading Failures and Blackout using Neural Networks: Experimental Approach for N-1-1 Contingency

    NASA Astrophysics Data System (ADS)

    Zarrabian, Sina; Belkacemi, Rabie; Babalola, Adeniyi A.

    2016-12-01

    In this paper, a novel intelligent control is proposed based on Artificial Neural Networks (ANN) to mitigate cascading failure (CF) and prevent blackout in smart grid systems after N-1-1 contingency condition in real-time. The fundamental contribution of this research is to deploy the machine learning concept for preventing blackout at early stages of its occurrence and to make smart grids more resilient, reliable, and robust. The proposed method provides the best action selection strategy for adaptive adjustment of generators' output power through frequency control. This method is able to relieve congestion of transmission lines and prevent consecutive transmission line outage after N-1-1 contingency condition. The proposed ANN-based control approach is tested on an experimental 100 kW test system developed by the authors to test intelligent systems. Additionally, the proposed approach is validated on the large-scale IEEE 118-bus power system by simulation studies. Experimental results show that the ANN approach is very promising and provides accurate and robust control by preventing blackout. The technique is compared to a heuristic multi-agent system (MAS) approach based on communication interchanges. The ANN approach showed more accurate and robust response than the MAS algorithm.

  7. On the Adaptive Protection of Microgrids: A Review on How to Mitigate Cyber Attacks and Communication Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Hany F; Lashway, Christopher R; Mohammed, Osama A

    One main challenge in the practical implementation of a microgrid is the design of an adequate protection scheme in both grid connected and islanded modes. Conventional overcurrent protection schemes face selectivity and sensitivity issues during grid and microgrid faults since the fault current level is different in both cases for the same relay. Various approaches have been implemented in the past to deal with this problem, yet the most promising ones are the implementation of adaptive protection techniques abiding by the IEC 61850 communication standard. This paper presents a critical review of existing adaptive protection schemes, the technical challenges for the use of classical protection techniques and the need for an adaptive, smart protection system. However, the risk of communication link failures and cyber security threats still remains a challenge in implementing a reliable adaptive protection scheme. A contingency is needed where a communication issue prevents the relay from adjusting to a lower current level during islanded mode. An adaptive protection scheme is proposed that utilizes energy storage (ES) and hybrid ES (HESS) already available in the network as a mechanism to source the higher fault current. Four common grid ES and HESS are reviewed for their suitability in feeding the fault, and some solutions are proposed.

  8. A preliminary evaluation of a failure detection filter for detecting and identifying control element failures in a transport aircraft

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1985-01-01

    The application of the failure detection filter to the detection and identification of aircraft control element failures was evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 Aircraft. Simulation results show that with a simple correlator and threshold detector used to process the filter residuals, the failure detection performance is seriously degraded by the effects of turbulence.

  9. Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method

    NASA Astrophysics Data System (ADS)

    Lee, Hikweon; Ong, See Hong

    2018-03-01

    At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings on the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (S_h and S_H) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θ_B = 90° − half of breakout width) is compared to that observed along the borehole to determine the set of S_h and S_H having the lowest misfit between them. An important advantage of the method is that S_h and S_H can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests has been carried out. The stress estimations using the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
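    The grid-searching idea can be sketched with a deliberately simplified forward model: hoop stress around a vertical borehole, σ(θ) = (S_H + S_h) − 2(S_H − S_h)cos 2θ, causes a breakout wherever it exceeds the rock strength. The strength profile, stress values, noise level, and search ranges below are all hypothetical; the paper's forward model and misfit definition are not reproduced:

```python
from math import acos, degrees
import random

# Simplified forward model: breakout half-width from the Kirsch-type hoop
# stress, with depth-varying strength to break the single-depth degeneracy.
def breakout_half_width(SH, Sh, ucs):
    """Breakout half-width in degrees (0 = no breakout). Simplified model."""
    x = (SH + Sh - ucs) / (2.0 * (SH - Sh))
    if x >= 1.0:
        return 0.0                      # hoop stress never reaches strength
    if x <= -1.0:
        return 90.0
    return 90.0 - degrees(acos(x)) / 2.0

# synthetic "observed" widths: strength grows with depth, true stresses are
# SH = 45, Sh = 30 (MPa), plus small measurement noise
random.seed(1)
ucs_profile = [50.0 + i for i in range(20)]
observed = [breakout_half_width(45.0, 30.0, u) + random.gauss(0, 0.2)
            for u in ucs_profile]

def misfit(SH, Sh):
    return sum((breakout_half_width(SH, Sh, u) - w) ** 2
               for u, w in zip(ucs_profile, observed))

# grid search over (SH, Sh) pairs, keeping the lowest-misfit state
best = min(((SH, Sh) for SH in range(35, 56) for Sh in range(20, 41) if Sh < SH),
           key=lambda s: misfit(*s))
print("estimated (SH, Sh):", best)
```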

  10. Research on fault characteristics about switching component failures for distribution electronic power transformers

    NASA Astrophysics Data System (ADS)

    Sang, Z. X.; Huang, J. Q.; Yan, J.; Du, Z.; Xu, Q. S.; Lei, H.; Zhou, S. X.; Wang, S. C.

    2017-11-01

    Protection is an essential part of any power device, especially those in the power grid, as failures may cause great losses to society. A study on voltage and current abnormalities in the power electronic devices of a Distribution Electronic Power Transformer (D-EPT) during failures of its switching components is presented, together with the operational principles of the 10 kV rectifier, 10 kV/400 V DC-DC converter and 400 V inverter in a D-EPT. Derived from the discussion of the effects of voltage and current distortion, the fault characteristics as well as a fault diagnosis method for the D-EPT are introduced.

  11. Utilizing Metalized Fabrics for Liquid and Rip Detection and Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Stephen; Mahan, Cody; Kuhn, Michael J

    2013-01-01

    This paper proposes a novel technique for utilizing conductive textiles as a distributed sensor for detecting and localizing liquids (e.g., blood), rips (e.g., bullet holes), and potentially biosignals. The proposed technique is verified through both simulation and experimental measurements. Circuit theory is utilized to depict conductive fabric as a bounded, near-infinite grid of resistors. Solutions to the well-known infinite resistance grid problem are used to confirm the accuracy and validity of this modeling approach. Simulations allow for discontinuities to be placed within the resistor matrix to illustrate the effects of bullet holes within the fabric. A real-time experimental system was developed that uses a multiplexed Wheatstone bridge approach to reconstruct the resistor grid across the conductive fabric and detect liquids and rips. The resistor grid model is validated through a comparison of simulated and experimental results. Results suggest accuracy proportional to the electrode spacing in determining the presence and location of discontinuities in conductive fabric samples. Future work is focused on refining the experimental system to provide more accuracy in detecting and localizing events as well as developing a complete prototype that can be deployed for field testing. Potential applications include intelligent clothing, flexible, lightweight sensing systems, and combat wound detection.
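    The "bounded, near-infinite grid of resistors" model can be exercised numerically: relax node potentials on a small unit-resistor grid and compare the two-electrode resistance with and without cut links (a stand-in for a rip). Grid size, electrode positions, and cut locations below are arbitrary choices, not the paper's setup:

```python
# Sketch: fabric as a grid of unit resistors. Fix 1 V across two corner
# electrodes, relax interior potentials (Gauss-Seidel toward the harmonic
# solution), and read off the two-electrode resistance. Cutting links raises
# that resistance, which is the observable a bridge-based system measures.
def effective_resistance(n, cut_links=frozenset(), iters=2000):
    nodes = [(x, y) for x in range(n) for y in range(n)]

    def neighbors(p):
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= q[0] < n and 0 <= q[1] < n and frozenset((p, q)) not in cut_links:
                yield q

    src, sink = (0, 0), (n - 1, n - 1)
    V = {p: 0.0 for p in nodes}
    V[src] = 1.0
    for _ in range(iters):                    # relax interior node potentials
        for p in nodes:
            if p in (src, sink):
                continue
            nb = list(neighbors(p))
            if nb:
                V[p] = sum(V[q] for q in nb) / len(nb)
    current = sum(V[src] - V[q] for q in neighbors(src))   # unit resistors
    return 1.0 / current

intact = effective_resistance(5)
ripped = effective_resistance(5, {frozenset({(2, 2), (2, 3)}),
                                  frozenset({(2, 2), (3, 2)})})
print(f"intact: {intact:.3f} ohm  ripped: {ripped:.3f} ohm")
```

Cutting links can only increase the effective resistance (Rayleigh monotonicity), which is why the intact value serves as a baseline for rip detection.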

  12. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    NASA Astrophysics Data System (ADS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are attractive targets for attackers seeking huge computational resources. Since users can execute arbitrary code on the worker nodes of the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring, using a Machine Learning approach, to detect intrusions even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.

  13. Study on improved Ip-iq APF control algorithm and its application in micro grid

    NASA Astrophysics Data System (ADS)

    Xie, Xifeng; Shi, Hua; Deng, Haiyingv

    2018-01-01

    In order to enhance the tracking velocity and accuracy of harmonic detection by the ip-iq algorithm, a novel ip-iq control algorithm based on instantaneous reactive power theory is presented. The improved algorithm adds a lead-correction link to adjust the zero point of the detection system, and fuzzy self-tuning adaptive PI control is introduced to dynamically adjust the DC-link voltage, which meets the harmonic compensation requirements of the micro grid. Simulation and experimental results verify that the proposed method is feasible and effective in a micro grid.

  14. Asbestos Air Monitoring Results at Eleven Family Housing Areas throughout the United States.

    DTIC Science & Technology

    1991-05-23

    limits varied depending on sampling volumes and grid openings scanned. Therefore, the detection limits presented in the results summary tables vary ... the required air volume is (1 f/10 grid squares) × (855 mm²) × (1 liter) / [(0.005 f/cc) × (0.0056 mm²) × (1000 cc)] = 3054 liters. Where: * 1 f/10 grid squares (the maximum recommended ... diameter filter. * 0.0056 mm² is the area of each grid square (75 μm per side) in a 200 mesh electron microscope grid. This value will vary from 0.0056
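    The detection-limit figure reduces to one air-volume calculation: the volume that must be sampled so that 1 fibre per 10 grid openings corresponds to the 0.005 f/cc target concentration. Values are taken from the text; the variable names are ours:

```python
# Air volume at which 1 fibre per 10 grid openings equals 0.005 f/cc.
filter_area_mm2 = 855.0        # effective collection area of the filter
grid_opening_mm2 = 0.0056      # area of one grid square (75 um per side)
target_f_per_cc = 0.005        # target analytical sensitivity
fibres_per_opening = 1 / 10    # 1 fibre per 10 grid openings scanned

volume_cc = (fibres_per_opening * filter_area_mm2) / (target_f_per_cc * grid_opening_mm2)
print(round(volume_cc / 1000), "liters")   # 3054 liters, matching the text
```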

  15. Best Practices for Unstructured Grid Shock Fitting

    NASA Technical Reports Server (NTRS)

    McCloud, Peter L.

    2017-01-01

    Unstructured grid solvers have well-known issues predicting surface heat fluxes when strong shocks are present. Various efforts have been made to address the underlying numerical issues that cause the erroneous predictions. The present work addresses some of the shortcomings of unstructured grid solvers, not by addressing the numerics, but by applying structured grid best practices to unstructured grids. A methodology for robust shock detection and shock fitting is outlined and applied to production relevant cases. Results achieved by using the Loci-CHEM Computational Fluid Dynamics solver are provided.

  16. Intrusion detection system using Online Sequence Extreme Learning Machine (OS-ELM) in advanced metering infrastructure of smart grid.

    PubMed

    Li, Yuancheng; Qiu, Rixuan; Jing, Sitong

    2018-01-01

    Advanced Metering Infrastructure (AMI) realizes two-way communication of electricity data by interconnecting with a computer network as the core component of the smart grid. Meanwhile, it brings many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on Online Sequence Extreme Learning Machine (OS-ELM) is established, which is used to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in detection speed and accuracy.
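    An OS-ELM pairs a fixed random hidden layer with a recursive least-squares update of the output weights, which is what lets it keep learning from streaming traffic without full retraining. A minimal numpy sketch on synthetic data (the layer sizes, ridge term, and data are assumptions, not the paper's setup):

```python
import numpy as np

# Minimal OS-ELM: random sigmoid features, batch least-squares initialization,
# then recursive least-squares updates for arriving data chunks. Synthetic data.
rng = np.random.default_rng(0)

class OSELM:
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # hidden features

    def fit_initial(self, X, y):
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))  # small ridge
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):                     # online sequential update
        H = self._h(X)
        PHt = self.P @ H.T
        K = np.linalg.inv(np.eye(len(X)) + H @ PHt)
        self.P -= PHt @ K @ H @ self.P
        self.beta += self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta

# toy anomaly labels: 1 when the feature sum is large
X = rng.normal(size=(300, 4))
y = (X.sum(axis=1) > 1).astype(float)
model = OSELM(4, 40)
model.fit_initial(X[:100], y[:100])
for i in range(100, 300, 50):                        # chunks arriving over time
    model.partial_fit(X[i:i + 50], y[i:i + 50])
acc = ((model.predict(X) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```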

  17. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis containing feature extraction, training paired diagnostic models, generating a decision threshold, and recognizing simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs obtained from the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
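    The threshold-generation step can be sketched directly: sweep candidate decision thresholds over classifier probability outputs and keep the one that maximizes F1. The probabilities and labels below are synthetic stand-ins, not the paper's data:

```python
# Grid search over a decision threshold, scored by F1 on probability outputs.
def f1(y_true, y_pred):
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(probs, labels, step=0.01):
    grid = [i * step for i in range(1, int(1 / step))]
    return max(grid, key=lambda th: f1(labels, [p >= th for p in probs]))

probs  = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.8, 0.9]     # synthetic outputs
labels = [False, False, False, True, True, True, True, True]
th = best_threshold(probs, labels)
print(f"best threshold ~ {th:.2f}, F1 = {f1(labels, [p >= th for p in probs]):.2f}")
```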

  18. Optical remote sensing and correlation of office equipment functional state and stress levels via power quality disturbances inefficiencies

    NASA Astrophysics Data System (ADS)

    Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.

    2016-09-01

    Non-invasive optical techniques pertaining to the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive-based techniques. Algorithms and methods to analyze and address PQD such as probabilistic neural networks and fully informed particle swarms have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate power quality empirical models to the observed optical response. We also empirically demonstrate a first-order approach to map household, office and commercial equipment PQD to user functions and stress levels. We employ a physics-based image and signal processing approach, which demonstrates measured non-invasive (remote sensing) techniques to detect and map the base frequency associated with the power source to the various PQD on a calibrated source.

  19. Redundancy relations and robust failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.

    1984-01-01

    All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations which are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
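    A minimal illustration of a redundancy relation, assuming the simplest case of two sensors measuring the same quantity: the parity residual stays near zero until a bias failure is injected, after which a fixed threshold flags it. All signals and thresholds here are invented for the sketch.

```python
import numpy as np

# Toy analytical-redundancy check: two sensors measure the same quantity,
# so the parity residual r = y1 - y2 should stay near zero. A persistent
# bias in the residual signals a failure in one of the sensors.
rng = np.random.default_rng(0)
n = 200
truth = np.sin(np.linspace(0, 4 * np.pi, n))
y1 = truth + 0.01 * rng.standard_normal(n)
y2 = truth + 0.01 * rng.standard_normal(n)
y2[100:] += 0.5                  # inject a bias failure halfway through

residual = y1 - y2
alarm = np.abs(residual) > 0.1   # threshold chosen well above the noise level
detected_at = int(np.argmax(alarm))
```

    Model uncertainty would widen the nominal residual, which is exactly why the paper seeks relations that remain reliable under such uncertainty.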

  20. Characterization of steel rebar spacing using synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Hu, Jie; Tang, Qixiang; Twumasi, Jones Owusu; Yu, Tzuyang

    2018-03-01

    Steel rebar is a vital component of reinforced concrete (RC) and prestressed concrete structures, since it provides mechanical function to those structures. Damage to steel rebars can lead to premature failure of concrete structures. Characterization of steel rebars using nondestructive evaluation (NDE) offers engineers and decision makers important information for effective repair of aging concrete structures. Among existing NDE techniques, microwave/radar NDE has proven to be a promising technique for surface and subsurface sensing of concrete structures. The objective of this paper is to use microwave/radar NDE to characterize steel rebar grids in free space, as a basis for subsurface sensing of steel rebars inside RC structures. A portable 10-GHz radar system based on synthetic aperture radar (SAR) imaging was used in this paper. The effect of rebar grid spacing was considered and used to define subsurface steel rebar grids. Five rebar grid spacings were used: 12.7 cm (5 in.), 17.78 cm (7 in.), 22.86 cm (9 in.), 27.94 cm (11 in.), and 33.02 cm (13 in.). #3 rebars were used in all grid specimens. All SAR images were collected inside an anechoic chamber. It was found that SAR images can successfully capture the change of rebar grid spacing and can be used to quantify the spacing of rebar grids. Empirical models were proposed to estimate actual rebar spacing and contour area from SAR images.
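    A hedged sketch of how spacing might be read off a SAR image, in the spirit of the paper's empirical models: take a 1-D amplitude profile across the grid, locate the scattering peaks, and average their separation. The profile below is synthetic; the paper's actual processing chain is not reproduced.

```python
import numpy as np

def estimate_spacing_cm(profile, pixel_size_cm):
    """Estimate rebar spacing from a 1-D SAR amplitude profile: find local
    maxima above half the global peak and average their separation."""
    inner = profile[1:-1]
    is_peak = ((inner > profile[:-2]) & (inner >= profile[2:])
               & (inner > 0.5 * profile.max()))
    peaks = np.where(is_peak)[0] + 1
    if peaks.size < 2:
        raise ValueError("need at least two rebar returns")
    return float(np.mean(np.diff(peaks)) * pixel_size_cm)

# Synthetic profile: Gaussian returns every 127 pixels (12.7 cm at 0.1 cm/pixel)
x = np.arange(600)
profile = sum(np.exp(-0.5 * ((x - c) / 5.0) ** 2) for c in (50, 177, 304, 431, 558))
```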

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).

  2. The application of the detection filter to aircraft control surface and actuator failure detection and isolation

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Motyka, P.; Hall, S. R.

    1985-01-01

    The performance of the detection filter in detecting and isolating aircraft control surface and actuator failures is evaluated. The basic detection filter theory assumption of no direct input-output coupling is violated in this application due to the use of acceleration measurements for detecting and isolating failures. With this coupling, residuals produced by control surface failures may only be constrained to a known plane rather than to a single direction. A detection filter design with such planar failure signatures is presented, with the design issues briefly addressed. In addition, a modification to constrain the residual to a single known direction even with direct input-output coupling is also presented. Both the detection filter and the modification are tested using a nonlinear aircraft simulation. While no thresholds were selected, both filters demonstrated an ability to detect control surface and actuator failures. Failure isolation may be a problem if there are several control surfaces which produce similar effects on the aircraft. In addition, the detection filter was sensitive to wind turbulence and modeling errors.

  3. Internal erosion rates of a 10-kW xenon ion thruster

    NASA Technical Reports Server (NTRS)

    Rawlin, Vincent K.

    1988-01-01

    A 30 cm diameter divergent magnetic field ion thruster, developed for mercury operation at 2.7 kW, was modified and operated with xenon propellant at a power level of 10 kW for 567 h to evaluate thruster performance and lifetime. The major differences between this thruster and its baseline configuration were elimination of the three mercury vaporizers, use of a main discharge cathode with a larger orifice, reduction in discharge baffle diameter, and use of an ion accelerating system with larger acceleration grid holes. Grid thickness measurement uncertainties, combined with estimates of the effects of reactive residual facility background gases gave a minimum screen grid lifetime of 7000 h. Discharge cathode orifice erosion rates were measured with three different cathodes with different initial orifice diameters. Three potential problems were identified during the wear test: the upstream side of the discharge baffle eroded at an unacceptable rate; two of the main cathode tubes experienced oxidation, deformation, and failure; and the accelerator grid impingement current was more than an order of magnitude higher than that of the baseline mercury thruster. The charge exchange ion erosion was not quantified in this test. There were no measurable changes in the accelerator grid thickness or the accelerator grid hole diameters.

  4. Model Predictive Control of A Matrix-Converter Based Solid State Transformer for Utility Grid Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Yaosuo

    The matrix converter solid state transformer (MC-SST), formed from the back-to-back connection of two three-to-single-phase matrix converters, is studied for use in the interconnection of two ac grids. The matrix converter topology provides a light weight and low volume single-stage bidirectional ac-ac power conversion without the need for a dc link. Thus, the lifetime limitations of dc-bus storage capacitors are avoided. However, space vector modulation of this type of MC-SST requires computing switch vectors for each of the two MCs, which must be carefully coordinated to avoid commutation failure. An additional controller is also required to control power exchange between the two ac grids. In this paper, model predictive control (MPC) is proposed for an MC-SST connecting two different ac power grids. The proposed MPC predicts the circuit variables based on the discrete model of the MC-SST system, and the cost function is formulated so that the optimal switch vector for the next sample period is selected, thereby generating the required grid currents for the SST. Simulation and experimental studies are carried out to demonstrate the effectiveness and simplicity of the proposed MPC for such MC-SST-based grid interfacing systems.
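    The switch-vector selection step of finite-control-set MPC can be sketched as follows, with an illustrative Euler-discretized RL model standing in for the paper's MC-SST model: predict the next-step current for every candidate switch vector and apply the one minimizing the cost function. All parameters are assumptions for the sketch.

```python
import numpy as np

def fcs_mpc_step(i_now, i_ref, v_candidates, R=0.5, L=10e-3, Ts=1e-4):
    """Return the index of the candidate voltage vector whose predicted
    next-step current is closest to the reference (one-step FCS-MPC)."""
    costs = []
    for v in v_candidates:
        i_next = i_now + Ts / L * (v - R * i_now)  # Euler-discretized RL model
        costs.append(abs(i_ref - i_next))          # cost: current tracking error
    return int(np.argmin(costs))
```

    In a real converter the candidate set is the finite list of valid switch states, and the cost function typically also penalizes switching effort and constraint violations.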

  5. NaradaBrokering as Middleware Fabric for Grid-based Remote Visualization Services

    NASA Astrophysics Data System (ADS)

    Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.

    2003-12-01

    Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. The simplicity of these approaches is offset by problems such as single points of failure, scaling, and availability. Furthermore, as the complexity, scale and scope of the services hosted on this paradigm increase, this approach becomes increasingly unsuitable. We propose a scheme built on top of a distributed brokering infrastructure, NaradaBrokering, which comprises a distributed network of broker nodes. These broker nodes are organized in a cluster-based architecture that can scale to very large sizes. The broker network is resilient to broker failures and efficiently routes interactions to entities that expressed an interest in them. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, mediate access to specialized datasets and finally those that manage the execution of specified tasks. There could be multiple instances of each of these services and the system ensures that load for a given service is distributed efficiently over these service instances. Among the features provided in our approach are efficient discovery of services and asynchronous interactions between services and service requestors (which could themselves be other services). Entities need not be online during the execution of the service request. The system also ensures that entities can be notified about task executions, partial results and failures that might have taken place during service execution. The system also facilitates specification of task overrides, distribution of execution results to alternate devices (which were not used to originally request service execution) and to multiple users.
These RVS services could of course be either OGSA (Open Grid Services Architecture) based Grid services or traditional Web services. The brokering infrastructure will manage the service advertisements and the invocation of these services. This scheme ensures that the fundamental Grid computing concept is met: providing the computing capabilities of those that are willing to offer them to those that seek them. [1] The NaradaBrokering Project: http://www.naradabrokering.org

  6. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  7. Intrusion detection system using Online Sequence Extreme Learning Machine (OS-ELM) in advanced metering infrastructure of smart grid

    PubMed Central

    Li, Yuancheng; Jing, Sitong

    2018-01-01

    As the core component of the smart grid, Advanced Metering Infrastructure (AMI) realizes two-way communication of electricity data by interconnecting with a computer network. Meanwhile, it brings many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on the Online Sequence Extreme Learning Machine (OS-ELM) is established, which is used to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in detection speed and accuracy. PMID:29485990
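    A minimal OS-ELM sketch, assuming the standard formulation (random sigmoid hidden layer, recursive least-squares update of the output weights); the class, the small ridge term, and all parameters are illustrative, not the paper's implementation.

```python
import numpy as np

class OSELM:
    """Online Sequential ELM: fixed random hidden layer, output weights
    updated batch-by-batch with recursive least squares."""

    def __init__(self, n_in, n_hidden, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.standard_normal((n_in, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)          # random biases
        self.P = None
        self.beta = None

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid features

    def fit_initial(self, X, y):
        H = self._h(X)
        # Small ridge term keeps the initial inverse well-conditioned.
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):
        H = self._h(X)
        S = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ S @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```

    The sequential update is what makes the method attractive for AMI traffic: new observations refine the detector without retraining on the full history.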

  8. Analysis of turbine-grid interaction of grid-connected wind turbine using HHT

    NASA Astrophysics Data System (ADS)

    Chen, A.; Wu, W.; Miao, J.; Xie, D.

    2018-05-01

    This paper processes the output power of a grid-connected wind turbine with a denoising and extraction method based on the Hilbert-Huang transform (HHT) to discuss the turbine-grid interaction. First, the detailed Empirical Mode Decomposition (EMD) and the Hilbert Transform (HT) are introduced. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), energy ratio and power volatility are calculated to detect the unessential components. Meanwhile, combined with the vibration function of turbine-grid interaction, data fitting of the instantaneous amplitude and phase of each IMF is implemented to extract characteristic parameters of different interactions. Finally, utilizing measured data from actual parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of turbine-grid interaction of a grid-connected wind turbine.
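    The HT step of HHT, extracting the instantaneous amplitude and frequency of a mono-component signal (one IMF), can be sketched with a numpy-only analytic signal; the test signal below is synthetic.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (a numpy-only stand-in for scipy.signal.hilbert):
    zero the negative frequencies, double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = 2.0 * np.cos(2 * np.pi * 5 * t)   # one IMF-like mono-component, 5 Hz
z = analytic_signal(x)
amplitude = np.abs(z)                                       # instantaneous amplitude
freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)   # instantaneous frequency
```

    For a multi-component power signal, EMD would first split the record into IMFs and this step would be applied per IMF.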

  9. Resiliency of the Nation's Power Grid: Assessing Risks of Premature Failure of Large Power Transformers Under Climate Warming and Increased Heat Waves

    NASA Astrophysics Data System (ADS)

    Schlosser, C. A.; Gao, X.; Morgan, E.

    2017-12-01

    The aging pieces of our nation's power grid - the largest machine ever built - are at a critical time. Key assets in the transmission system, including large power transformers (LPTs), are approaching their originally designed lifetimes. Moreover, extreme weather and climate events upon which these design lifetimes are partially based are expected to change. In particular, more frequent and intense heat waves can accelerate the degradation of LPTs' insulation/cooling system. Thus, there are likely thousands of LPTs across the United States under increasing risk of premature failure - yet this risk has not been assessed. In this study, we investigate the impact of climate warming and corresponding shifts in heat waves for critical LPTs located in the Northeast corridor of the United States to assess: To what extent do changes in heat waves/events present a rising threat to the transformer network over the Northeast U.S. and to what extent can climate mitigation reduce this risk? This study focuses on a collection of LPTs with a high degree of "betweenness" - while recognizing other factors such as: connectivity, voltage rating, MVA rating, approximate price, weight, location/proximity to major transportation routes, and age. To assess the risk of future change in heat wave occurrence we use an analogue method, which detects the occurrence of heat waves based on associated large-scale atmospheric conditions. This method is compared to the more conventional approach that uses model-simulated daily maximum temperature. Under future climate warming scenarios, multi-model medians of both methods indicate strong increases in heat wave frequency during the latter half of this century. Under weak climate mitigation - the risks imposed from heat wave occurrence could quadruple, but a modest mitigation scenario cuts the increasing threat in half. 
As important, the analogue method substantially improves the model consensus through reduction of the interquartile range by a factor of three. The improved inter-model consensus is viewed as a promising step toward providing more actionable climate information. Ultimately, this technique could be applied to the entirety of the U.S. power grid and to other weather extremes (e.g. precipitation, ice, and wind), as well as to assess current and future topologies of any electricity system.
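    A toy version of the conventional daily-maximum-temperature approach mentioned above might flag heat waves as runs of consecutive hot days; the 35-degree threshold and 3-day minimum run below are illustrative assumptions, not the study's definition.

```python
def heat_wave_days(tmax, threshold=35.0, min_run=3):
    """Count days belonging to heat waves: runs of at least `min_run`
    consecutive days whose daily maximum temperature exceeds `threshold`."""
    total = 0
    run = 0
    for day_max in tmax:
        run = run + 1 if day_max > threshold else 0
        if run == min_run:
            total += min_run   # the whole run becomes a heat wave at once
        elif run > min_run:
            total += 1         # each further day extends the wave
    return total
```

    The analogue method replaces this local threshold test with detection of the large-scale atmospheric patterns associated with heat waves.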

  10. Localized strain measurements of the intervertebral disc annulus during biaxial tensile testing.

    PubMed

    Karakolis, Thomas; Callaghan, Jack P

    2015-01-01

    Both inter-lamellar and intra-lamellar failures of the annulus have been described as potential modes of disc herniation. Attempts to characterize initial lamellar failure of the annulus have involved tensile testing of small tissue samples. The purpose of this study was to evaluate a method of measuring local surface strains through image analysis of a tensile test conducted on an isolated sample of annular tissue, in order to enhance future studies of intervertebral disc failure. An annulus tissue sample was biaxially strained to 10%. High-resolution images captured the tissue surface throughout testing. Three test conditions were evaluated: submerged, non-submerged and marker. Surface strains were calculated for the two non-marker conditions based on the motion of virtual tracking points. Tracking algorithm parameters (grid resolution and template size) were varied to determine the effect on estimated strains. Accuracy of point tracking was assessed through a comparison of the non-marker conditions to a condition involving markers placed on the tissue surface. Grid resolution had a larger effect on local strain than template size. Average local strain error ranged from 3% to 9.25% and 0.1% to 2.0% for the non-submerged and submerged conditions, respectively. Local strain estimation has a relatively high potential for error. Submerging the tissue provided superior strain estimates.
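    Local strain from tracked points can be illustrated with a minimal sketch: the engineering strain of each segment between consecutive tracking points, here for an invented chain of points under a uniform 10% stretch.

```python
import numpy as np

def local_strain(p0, p1):
    """Engineering strain of each segment between tracked surface points.

    p0, p1: (n_points, 2) arrays of point coordinates before/after stretch.
    Returns one strain value per segment along the point chain.
    """
    l0 = np.linalg.norm(np.diff(p0, axis=0), axis=1)  # reference lengths
    l1 = np.linalg.norm(np.diff(p1, axis=0), axis=1)  # deformed lengths
    return (l1 - l0) / l0

# Virtual tracking points on a line, uniformly stretched by 10% in x
before = np.column_stack([np.linspace(0.0, 10.0, 6), np.zeros(6)])
after = before * np.array([1.10, 1.0])
```

    In the paper's setting the point positions come from template matching in the images, so tracking error propagates directly into these segment strains.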

  11. Analysis of Loss-of-Offsite-Power Events 1997-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Nancy Ellen; Schroeder, John Alton

    2016-07-01

    Loss of offsite power (LOOP) can have a major negative impact on a power plant's ability to achieve and maintain safe shutdown conditions. LOOP event frequencies and times required for subsequent restoration of offsite power are important inputs to plant probabilistic risk assessments. This report presents a statistical and engineering analysis of LOOP frequencies and durations at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience during calendar years 1997 through 2015. LOOP events during critical operation that do not result in a reactor trip are not included. Frequencies and durations were determined for four event categories: plant-centered, switchyard-centered, grid-related, and weather-related. Emergency diesel generator reliability is also considered (failure to start, failure to load and run, and failure to run more than 1 hour). There is an adverse trend in LOOP durations. The previously reported adverse trend in LOOP frequency was not statistically significant for 2006-2015. Grid-related LOOPs happen predominantly in the summer. Switchyard-centered LOOPs happen predominantly in winter and spring. Plant-centered and weather-related LOOPs do not show statistically significant seasonality. The engineering analysis of LOOP data shows that human errors have been much less frequent since 1997 than in the 1986-1996 time period.
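    The frequency part of such an analysis can be sketched as a Poisson rate estimate with an exact chi-square confidence interval; the event count and reactor-year exposure below are illustrative, not the report's data.

```python
from scipy.stats import chi2

def loop_frequency(n_events, exposure_rx_years, conf=0.90):
    """Maximum-likelihood event frequency (events per reactor-year) with an
    exact Poisson confidence interval via the chi-square relation."""
    lam = n_events / exposure_rx_years
    a = (1.0 - conf) / 2.0
    lower = (chi2.ppf(a, 2 * n_events) / (2 * exposure_rx_years)
             if n_events > 0 else 0.0)
    upper = chi2.ppf(1.0 - a, 2 * n_events + 2) / (2 * exposure_rx_years)
    return lam, lower, upper
```

    Applied per category (plant-centered, switchyard-centered, grid-related, weather-related), the intervals make the category frequencies comparable despite very different event counts.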

  12. Electrostatic dust detector

    DOEpatents

    Skinner, Charles H [Lawrenceville, NJ

    2006-05-02

    An apparatus for detecting dust in a variety of environments, including radioactive and other hostile environments, both in a vacuum and in a pressurized system. The apparatus consists of a grid coupled to a selected bias voltage. The signal generated when dust impacts and shorts out the grid is electrically filtered, analyzed by a signal analyzer, and then sent to a counter. For fine grids, a correlation can be developed to relate the number of counts observed to the amount of dust which impacts the grid.
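    The counting stage can be sketched as rising-edge detection on the filtered bias signal, assuming each grid short produces one pulse; the synthetic trace and threshold here are illustrative.

```python
import numpy as np

def count_impacts(filtered_signal, threshold):
    """Count dust impacts as rising crossings of a threshold on the
    filtered grid-bias signal (one pulse per momentary short circuit)."""
    above = np.asarray(filtered_signal) > threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

# Synthetic trace: three pulses riding on low-level noise
t = np.linspace(0.0, 1.0, 1000)
trace = 0.02 * np.sin(40 * t)
for t0 in (0.2, 0.5, 0.8):
    trace += np.where(np.abs(t - t0) < 0.01, 1.0, 0.0)
```

    With the count in hand, the grid-specific correlation mentioned in the record converts counts to a dust quantity.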

  13. Indicator of reliability of power grids and networks for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Shaptsev, V. A.

    2017-10-01

    The energy supply of mining enterprises includes, in particular, power networks. Environmental monitoring relies on the data network between the observers and the facilitators. Weather and operating conditions change randomly over time. Temperature, humidity, wind strength and other stochastic processes are interconnected across different segments of the power grid. The article presents analytical expressions for the probability of failure of the power grid as a whole or of a particular segment. These expressions can contain one or more parameters of the operating conditions, simulated by the Monte Carlo method. In some cases, one can obtain an ultimate mathematical formula for calculation on a computer. In conclusion, an expression including the probability characteristic function of one random parameter, for example, wind, temperature or humidity, is given. The parameters of this characteristic function can be obtained from retrospective or special observations (measurements).
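    A Monte Carlo sketch in the spirit of the article, assuming an invented logistic link between wind speed and segment failure probability for a series chain of grid segments:

```python
import numpy as np

rng = np.random.default_rng(42)

def p_segment_fails(wind):
    """Illustrative logistic link: failure probability rises steeply
    once wind speed approaches 25 m/s."""
    return 1.0 / (1.0 + np.exp(-(wind - 25.0) / 3.0))

def simulate(n_trials=100_000, n_segments=5):
    """Estimate P(grid fails) for a series chain of segments under
    random Weibull-distributed wind loads."""
    wind = rng.weibull(2.0, size=(n_trials, n_segments)) * 10.0  # m/s
    p = p_segment_fails(wind)
    fails = rng.random((n_trials, n_segments)) < p
    return fails.any(axis=1).mean()   # series system: any segment down
```

    Replacing the simulated wind with temperature or humidity samples drawn from a fitted characteristic function gives the variant described in the article's conclusion.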

  14. National Wind Technology Center Dynamic 5-Megawatt Dynamometer

    ScienceCinema

    Felker, Fort

    2018-06-06

    The National Wind Technology Center (NWTC) offers wind industry engineers a unique opportunity to conduct a wide range of tests. Its custom-designed dynamometers can test wind turbine systems from 1 kilowatt (kW) to 5 megawatts (MW). The NWTC's new dynamometer facility simulates operating field conditions to assess the reliability and performance of wind turbine prototypes and commercial machines, thereby reducing deployment time, failures, and maintenance or replacement costs. Funded by the U.S. Department of Energy with American Recovery and Reinvestment Act (ARRA) funds, the 5-MW dynamometer will provide the ability to test wind turbine drivetrains and connect those drivetrains directly to the electricity grid or through a controllable grid interface (CGI). The CGI tests the low-voltage ride-through capability of a drivetrain as well as its response to faults and other abnormal grid conditions.

  15. Data grid: a distributed solution to PACS

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyan; Zhang, Jianguo

    2004-04-01

    In a hospital, various kinds of medical images acquired from different modalities are generally used and stored in different departments, and each modality usually attaches several workstations to display or process images. For better diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build up a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious. Besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach to developing a storage grid in PACS, which can provide more reliable image storage and more efficient query/retrieval for hospital-wide applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirror, cluster, and grid.

  16. Impacts of Inverter-Based Advanced Grid Support Functions on Islanding Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Hoke, Anderson; Miller, Brian

    A long-standing requirement for inverters paired with distributed energy resources is that they must disconnect from the electrical power system (EPS) when an electrical island is formed. In recent years, advanced grid support controls have been developed for inverters to provide voltage and frequency support by integrating functions such as voltage and frequency ride-through, volt-VAr control, and frequency-Watt control. With these new capabilities integrated into the inverter, additional examination is needed to determine how voltage and frequency support will impact pre-existing inverter functions like island detection. This paper inspects how advanced inverter functions impact an inverter's ability to detect the formation of an electrical island. Results are presented for the unintentional islanding laboratory tests of three common residential-scale photovoltaic inverters performing various combinations of grid support functions. For the inverters tested, grid support functions prolonged island disconnection times slightly; however, it was found that in all scenarios the inverters disconnected well within two seconds, the limit imposed by IEEE Std 1547-2003.

  17. Modeling and distributed gain scheduling strategy for load frequency control in smart grids with communication topology changes.

    PubMed

    Liu, Shichao; Liu, Xiaoping P; El Saddik, Abdulmotaleb

    2014-03-01

    In this paper, we investigate the modeling and distributed control problems for load frequency control (LFC) in a smart grid. In contrast with existing works, we consider more practical and realistic scenarios, where the communication topology of the smart grid changes because of either link failures or packet losses. These topology changes are modeled as a time-varying communication topology matrix. By using this matrix, a new closed-loop power system model is proposed to integrate the communication topology changes into the dynamics of a physical power system. The global asymptotic stability of this closed-loop power system is analyzed. A distributed gain scheduling LFC strategy is proposed to compensate for the potential degradation of dynamic performance (mean square errors of state vectors) of the power system under communication topology changes. In comparison to conventional centralized control approaches, the proposed method can improve the robustness of the smart grid to variations of the communication network as well as reduce the computation load. Simulation results show that the proposed distributed gain scheduling approach is capable of improving the robustness of the smart grid to communication topology changes. © 2013 ISA. Published by ISA. All rights reserved.

  18. Control system and method for a universal power conditioning system

    DOEpatents

    Lai, Jih-Sheng; Park, Sung Yeul; Chen, Chien-Liang

    2014-09-02

    A new current loop control system method is proposed for a single-phase grid-tie power conditioning system that can be used under a standalone or a grid-tie mode. This type of inverter utilizes an inductor-capacitor-inductor (LCL) filter as the interface in between inverter and the utility grid. The first set of inductor-capacitor (LC) can be used in the standalone mode, and the complete LCL can be used for the grid-tie mode. A new admittance compensation technique is proposed for the controller design to avoid low stability margin while maintaining sufficient gain at the fundamental frequency. The proposed current loop controller system and admittance compensation technique have been simulated and tested. Simulation results indicate that without the admittance path compensation, the current loop controller output duty cycle is largely offset by an undesired admittance path. At the initial simulation cycle, the power flow may be erratically fed back to the inverter causing catastrophic failure. With admittance path compensation, the output power shows a steady-state offset that matches the design value. Experimental results show that the inverter is capable of both a standalone and a grid-tie connection mode using the LCL filter configuration.

  19. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.

    PubMed

    Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-08-31

    The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach-using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. 
From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. Using automated methods, we detected almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies used to process patient-generated text.
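    As a quick arithmetic cross-check, the quoted F1 score is consistent with the quoted precision and recall, since F1 is their harmonic mean:

```python
# F1 = 2PR / (P + R): the reported 83.00% precision and 92.57% recall
# should reproduce the reported 87.52% F1.
precision, recall = 0.8300, 0.9257
f1 = 2 * precision * recall / (precision + recall)
```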

  20. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Treesearch

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...

  1. A geometric approach to failure detection and identification in linear systems

    NASA Technical Reports Server (NTRS)

    Massoumnia, M. A.

    1986-01-01

    Using the concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, assuming: (1) the components can fail simultaneously, and (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard, and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concept of failure sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.

  2. Adaptive Harmonic Detection Control of Grid Interfaced Solar Photovoltaic Energy System with Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Singh, B.; Goel, S.

    2015-03-01

    This paper presents a grid interfaced solar photovoltaic (SPV) energy system with a novel adaptive harmonic detection control for power quality improvement at ac mains under balanced as well as unbalanced and distorted supply conditions. The SPV energy system is capable of compensation of linear and nonlinear loads with the objectives of load balancing, harmonics elimination, power factor correction and terminal voltage regulation. The proposed control increases the utilization of PV infrastructure and brings down its effective cost due to its other benefits. The adaptive harmonic detection control algorithm is used to detect the fundamental active power component of load currents which are subsequently used for reference source currents estimation. An instantaneous symmetrical component theory is used to obtain instantaneous positive sequence point of common coupling (PCC) voltages which are used to derive inphase and quadrature phase voltage templates. The proposed grid interfaced PV energy system is modelled and simulated in MATLAB Simulink and its performance is verified under various operating conditions.

  3. Electric propulsion reliability: Statistical analysis of on-orbit anomalies and comparative analysis of electric versus chemical propulsion failure rates

    NASA Astrophysics Data System (ADS)

    Saleh, Joseph Homer; Geng, Fan; Ku, Michelle; Walker, Mitchell L. R.

    2017-10-01

    With a few hundred spacecraft launched to date with electric propulsion (EP), it is possible to conduct an epidemiological study of EP's on orbit reliability. The first objective of the present work was to undertake such a study and analyze EP's track record of on orbit anomalies and failures by different covariates. The second objective was to provide a comparative analysis of EP's failure rates with those of chemical propulsion. Satellite operators, manufacturers, and insurers will make reliability- and risk-informed decisions regarding the adoption and promotion of EP on board spacecraft. This work provides evidence-based support for such decisions. After a thorough data collection, 162 EP-equipped satellites launched between January 1997 and December 2015 were included in our dataset for analysis. Several statistical analyses were conducted, at the aggregate level and then with the data stratified by severity of the anomaly, by orbit type, and by EP technology. Mean Time To Anomaly (MTTA) and the distribution of the time to (minor/major) anomaly were investigated, as well as anomaly rates. The important findings in this work include the following: (1) Post-2005, EP's reliability has outperformed that of chemical propulsion; (2) Hall thrusters have robustly outperformed chemical propulsion, and they maintain a small but shrinking reliability advantage over gridded ion engines. Other results were also provided, for example the differentials in MTTA of minor and major anomalies for gridded ion engines and Hall thrusters. 
It was shown that: (3) Hall thrusters exhibit minor anomalies very early on orbit, which might be indicative of infant anomalies, and thus would benefit from better ground testing and acceptance procedures; (4) Strong evidence exists that EP anomalies (onset and likelihood) and orbit type are dependent, a dependence likely mediated by either the space environment or differences in thrusters duty cycles; (5) Gridded ion thrusters exhibit both infant and wear-out failures, and thus would benefit from a reliability growth program that addresses both these types of problems.

  4. Analysis of arrhythmic events is useful to detect lead failure earlier in patients followed by remote monitoring.

    PubMed

    Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi

    2018-03-01

Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM has allowed the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure has often been identified only by arrhythmic events, not by impedance abnormalities. The aim of this study was to compare the usefulness of arrhythmic events with that of conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center in Okayama University Hospital. All transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients have been followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, which were not detected by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic event 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none experienced inappropriate therapy. RM can detect lead failure earlier, before clinical adverse events. However, CIEDs often diagnose lead failure as just arrhythmic events without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.
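The quoted 95% confidence intervals for proportions (e.g. 76.2% of 42 events, CI 60.5-87.9%) can be approximated with a Wilson score interval. The paper's exact interval method is not stated, so treat this as a generic sketch rather than a reproduction of the published bounds:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    phat = successes / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 32 of 42 lead failures were detected only by arrhythmic events (76.2%)
lo, hi = wilson_ci(32, 42)
```

The Wilson interval lands close to, but not exactly on, the published bounds, which suggests the authors used an exact binomial method.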

  5. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the grids sensitive to the output functions are detected and refined after grid adaptation, and the accuracy of the output functions is clearly improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to traditional feature-based grid adaptation.
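The adjoint relationship described above is usually written as follows (generic notation, not taken from the paper): the discrete adjoint $\psi_H$ weights the local residual to give both a correction to the output $J$ and a local refinement indicator $\epsilon_k$.

```latex
% Discrete adjoint equation and adjoint-weighted output error estimate
\left(\frac{\partial R}{\partial u}\right)^{T}\psi_H
  = \left(\frac{\partial J}{\partial u}\right)^{T},
\qquad
J(u) \;\approx\; J(u_H) \;-\; \psi_H^{T} R(u_H),
\qquad
\epsilon_k \;=\; \bigl|\,\psi_{H,k}\, R_k(u_H)\,\bigr| .
```

Cells with large $\epsilon_k$ are flagged for refinement, and the term $-\psi_H^{T} R(u_H)$ is the correction applied to the integral output.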

  6. Detection of system failures in multi-axes tasks. [pilot monitored instrument approach

    NASA Technical Reports Server (NTRS)

    Ephrath, A. R.

    1975-01-01

    The effects of the pilot's participation mode in the control task on his workload level and failure detection performance were examined considering a low visibility landing approach. It is found that the participation mode had a strong effect on the pilot's workload, the induced workload being lowest when the pilot acted as a monitoring element during a coupled approach and highest when the pilot was an active element in the control loop. The effects of workload and participation mode on failure detection were separated. The participation mode was shown to have a dominant effect on the failure detection performance, with a failure in a monitored (coupled) axis being detected significantly faster than a comparable failure in a manually controlled axis.

  7. On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alex; Ragaller, Paul; Herman, Andrew

The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate particulate filter failure, prevent the escape of emissions exceeding permissible limits, and extend component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.

  8. Evaluation of load flow and grid expansion in a unit-commitment and expansion optimization model SciGRID International Conference on Power Grid Modelling

    NASA Astrophysics Data System (ADS)

    Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven

    2018-02-01

Energy system models serve as a basis for long-term system planning. Joint optimization of electricity generating technologies, storage systems, and the electricity grid leads to lower total system cost compared to an approach in which grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good tradeoff between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load flow model to evaluate the validity of using a simple but faster transport model within the system optimization model in terms of system reliability. The main findings are that a higher regional resolution leads to better results than an approach in which regions are clustered, since more overloads can be detected. An aggregation of lines between two model regions, compared to a line-sharp representation, has little influence on grid expansion within a system optimizer. In a DC load flow model, overloads can be detected in the line-sharp case, which is therefore preferred. Overall, the regions that need grid reinforcement are identified within the system optimizer. Finally, the paper recommends the use of a load-flow model to test the validity of the model results.

  9. Automation and quality assurance of the production cycle

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Didenko, L.; Lauret, J.

    2010-04-01

Processing datasets on the order of tens of terabytes is an onerous task faced by production coordinators everywhere. Users solicit data productions and, especially for simulation data, the vast number of parameters (and sometimes incomplete requests) points to the need for tracking, controlling, and archiving all requests so that the production team can handle them in a coordinated manner. With the advent of grid computing, parallel processing power has increased, but traceability has become increasingly problematic due to the heterogeneous nature of Grids. Any one of a number of components may fail, invalidating the job or execution flow at various stages of completion and making re-submission of a few of the multitude of jobs (while keeping the entire dataset production consistent) a difficult and tedious process. From the definition of the workflow to its execution, there is a strong need for validation, tracking, monitoring and reporting of problems. To ease the process of requesting production workflows, STAR has implemented several components addressing full workflow consistency. A Web-based online submission request module, implemented using Drupal's Content Management System API, enforces that all parameters are described in advance in a uniform fashion. Upon submission, all jobs are independently tracked, and (sometimes experiment-specific) discrepancies are detected and recorded, providing detailed information on where/how/when the job failed. Aggregate information on successes and failures is also provided in near real-time.

  10. [Analytical figures of merit of Hildebrand grid and ultrasonic nebulizations in inductively coupled plasma atomic emission].

    PubMed

    Tian, Mei; Han, Xiao-yuan; Zhuo, Shang-jun; Zhang, Rui-rong

    2012-05-01

Hildebrand grid nebulizer is a kind of improved Babington nebulizer, which can nebulize solutions with high total dissolved solids, while the ultrasonic nebulizer (USN) offers the advantages of high nebulization efficiency and fine droplets. In the present paper, the detection limits, matrix effects, ICP robustness, and memory effects of the Hildebrand grid and ultrasonic nebulizers for ICP-AES were studied. The results show that the detection limits using the USN are improved by a factor of 6-23 in comparison to the Hildebrand grid nebulizer for Cu, Pb, Zn, Cr, Cd and Ni. With the USN the matrix effects were heavier, and the degree of intensity enhancement or suppression depends on the element line and on the composition and concentrations of the matrices. Moreover, matrix effects induced by Ca and Mg are more significant than those caused by Na and Mg, and intensities of ionic lines are affected more easily than those of atomic lines. At the same time, the ICP is less robust with the USN. In addition, the memory effect of the USN is also heavier than that of the Hildebrand grid nebulizer.

  11. Bearing failure detection of micro wind turbine via power spectral density analysis for stator current signals spectrum

    NASA Astrophysics Data System (ADS)

    Mahmood, Faleh H.; Kadhim, Hussein T.; Resen, Ali K.; Shaban, Auday H.

    2018-05-01

Failures such as air gap irregularities, rubbing, and scraping between the stator and rotor of a generator arise unavoidably and may have severe consequences for a wind turbine. Therefore, more attention should be paid to detecting and identifying bearing failures in wind turbines to improve operational reliability. This paper applies a power spectral density analysis method to detect inner-race and outer-race bearing failures in a micro wind turbine by analyzing the stator current signal of the generator. The results show that the detection method is well suited and effective for bearing failure detection.
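The spectral step can be sketched with a plain periodogram (magnitude-squared DFT of the signal). The 50 Hz supply component and the 37 Hz fault component below are invented for illustration; real characteristic bearing-fault frequencies depend on the bearing geometry and shaft speed:

```python
import math

def periodogram(x, fs):
    """One-sided periodogram: a basic power spectral density estimate."""
    n = len(x)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        psd.append((re * re + im * im) / (fs * n))
    return freqs, psd

fs, n = 200.0, 200  # sampling rate and length give 1 Hz bin resolution
# Synthetic "stator current": 50 Hz supply plus a weak 37 Hz fault component
x = [math.sin(2 * math.pi * 50 * t / fs) + 0.3 * math.sin(2 * math.pi * 37 * t / fs)
     for t in range(n)]
freqs, psd = periodogram(x, fs)
```

A fault shows up as a spectral line at its characteristic frequency standing out above the neighbouring bins.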

  12. Turbofan engine demonstration of sensor failure detection

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood

    1991-01-01

In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full (non-afterburning) power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.

  13. Low-cost wireless voltage & current grid monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, Jacqueline

This report describes the development and demonstration of a novel low-cost wireless power distribution line monitoring system. This system measures voltage, current, and relative phase on power lines of up to 35 kV-class. The line units operate without any batteries, and without harvesting energy from the power line. Thus, data on grid condition is provided even in outage conditions, when line current is zero. This enhances worker safety by detecting the presence of voltage and current that may appear from stray sources on nominally isolated lines. Availability of low-cost power line monitoring systems will enable widespread monitoring of the distribution grid. Real-time data on local grid operating conditions will enable grid operators to optimize grid operation, implement grid automation, and understand the impact of solar and other distributed sources on grid stability. The latter will enable utilities to implement energy storage and control systems to enable greater penetration of solar into the grid.

  14. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  15. Study of an automatic trajectory following control system

    NASA Technical Reports Server (NTRS)

    Vanlandingham, H. F.; Moose, R. L.; Zwicke, P. E.; Lucas, W. H.; Brinkley, J. D.

    1983-01-01

It is shown that the estimator part of the Modified Partitioned Adaptive Controller (MPAC), developed for the nonlinear aircraft dynamics of a small jet transport, can adapt to sensor failures. In addition, an investigation is made into the potential usefulness of the configuration detection technique used in the MPAC, and a failure detection filter is developed that determines how a noisy plant output is associated with a line or plane characteristic of a failure. It is shown by computer simulation that the estimator part and the configuration detection part of the MPAC can readily adapt to actuator and sensor failures, and that the failure detection filter technique cannot detect actuator or sensor failures accurately for this type of system because of plant modeling errors. In addition, it is shown that the decision technique developed for the failure detection filter can accurately determine that the plant output is related to the characteristic line or plane in the presence of sensor noise.

  16. Experimental Evaluation of PV Inverter Anti-Islanding with Grid Support Functions in Multi-Inverter Island Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson; Nelson, Austin; Miller, Brian

As PV and other DER systems are connected to the grid at increased penetration levels, island detection may become more challenging for two reasons: 1.) In islands containing many DERs, active inverter-based anti-islanding methods may have more difficulty detecting islands because each individual inverter's efforts to detect the island may be interfered with by the other inverters in the island. 2.) The increasing numbers of DERs are leading to new requirements that DERs ride through grid disturbances and even actively try to regulate grid voltage and frequency back towards nominal operating conditions. These new grid support requirements may directly or indirectly interfere with anti-islanding controls. This report describes a series of tests designed to examine the impacts of both grid support functions and multi-inverter islands on anti-islanding effectiveness. Crucially, the multi-inverter anti-islanding tests described in this report examine scenarios with multiple inverters connected to multiple different points on the grid. While this so-called 'solar subdivision' scenario has been examined to some extent through simulation, this is the first known work to test it using hardware inverters. This was accomplished through the use of power hardware-in-the-loop (PHIL) simulation, which allows the hardware inverters to be connected to a real-time transient simulation of an electric power system that can be easily reconfigured to test various distribution circuit scenarios. The anti-islanding test design was a modified version of the unintentional islanding test in IEEE Standard 1547.1, which creates a balanced, resonant island with the intent of creating a highly challenging condition for island detection. Three common, commercially available single-phase PV inverters from three different manufacturers were tested. 
The first part of this work examined each inverter individually using a series of pure hardware resistive-inductive-capacitive (RLC) resonant load based anti-islanding tests to determine the worst-case configuration of grid support functions for each inverter. A grid support function is a function an inverter performs to help stabilize the grid or drive the grid back towards its nominal operating point. The four grid support functions examined here were voltage ride-through, frequency ride-through, Volt-VAr control, and frequency-Watt control. The worst-case grid support configuration was defined as the configuration that led to the maximum island duration (or run-on time, ROT) out of 50 tests of each inverter. For each of the three inverters, it was observed that maximum ROT increased when voltage and frequency ride-through were activated. No conclusive evidence was found that Volt-VAr control or frequency-Watt control increased maximum ROT. Over all single-inverter test cases, the maximum ROT was 711 ms, well below the two-second limit currently imposed by IEEE Standard 1547-2003. A subsequent series of 244 experiments tested all three inverters simultaneously in the same island. These tests again used a procedure based on the IEEE 1547.1 unintentional islanding test to create a difficult-to-detect island condition. For these tests, which used the two worst-case grid support function configurations from the single-inverter tests, the inverters were connected to a variety of island circuit topologies designed to represent the variety of multiple-inverter islands that may occur on real distribution circuits. The interconnecting circuits and the resonant island load itself were represented in the real-time PHIL model. 
PHIL techniques similar to those employed here have been previously used and validated for anti-islanding tests, and the PHIL resonant load model used in this test was successfully validated by comparing single-inverter PHIL tests to conventional tests using an RLC load bank.

  17. Influence of Grid Reinforcement Placed In Masonry Bed Joints on Its Flexural Strength

    NASA Astrophysics Data System (ADS)

    Piekarczyk, Adam

    2017-10-01

The paper presents test results for the flexural strength of masonry when the plane of failure is perpendicular to the bed joints. Comparative tests of unreinforced specimens and specimens reinforced with steel wire, glass, and basalt fibre grids applied in the masonry bed joints showed higher flexural strength and crack resistance for masonry reinforced and loaded in this manner. Reinforced masonry exhibited plastic behaviour after cracking, allowing large horizontal displacements and transferring considerable loads perpendicular to its surface. In most tests of reinforced specimens, strengthening was observed, with the maximum load occurring in the post-cracking phase.

  18. Groundwater-quality data in the Santa Barbara study unit, 2011: results from the California GAMA Program

    USGS Publications Warehouse

    Davis, Tracy A.; Kulongoski, Justin T.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 48-square-mile Santa Barbara study unit was investigated by the U.S. Geological Survey (USGS) from January to February 2011, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The Santa Barbara study unit was the thirty-fourth study unit to be sampled as part of the GAMA-PBP. The GAMA Santa Barbara study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as those parts of the aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the Santa Barbara study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the Santa Barbara study unit located in Santa Barbara and Ventura Counties, groundwater samples were collected from 24 wells. Eighteen of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and six wells were selected to aid in evaluation of water-quality issues (understanding wells). 
The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds); constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]); naturally occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and arsenic, chromium, and iron species); and radioactive constituents (radon-222 and gross alpha and gross beta radioactivity). Naturally occurring isotopes (stable isotopes of hydrogen and oxygen in water, stable isotopes of inorganic carbon and boron dissolved in water, isotope ratios of dissolved strontium, tritium activities, and carbon-14 abundances) and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 281 constituents and water-quality indicators were measured. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 12 percent of the wells in the Santa Barbara study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 82 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. 
However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 18 grid wells in the Santa Barbara study unit were detected at concentrations less than drinking-water benchmarks. Of the 220 organic and special-interest constituents sampled for at the 18 grid wells, 13 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and non-regulatory health-based benchmarks. In total, VOCs were detected in 61 percent of the 18 grid wells sampled, pesticides and pesticide degradates were detected in 11 percent, and perchlorate was detected in 67 percent. Polar pesticides and their degradates, pharmaceutical compounds, and NDMA were not detected in any of the grid wells sampled in the Santa Barbara study unit. Eighteen grid wells were sampled for trace elements, major and minor ions, nutrients, and radioactive constituents; most detected concentrations were less than health-based benchmarks. Exceptions are one detection of boron greater than the CDPH notification level (NL-CA) of 1,000 micrograms per liter (μg/L) and one detection of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L). 
Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in three grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in seven grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in four grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in eight grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 17 grid wells, and concentrations in six of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  19. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate; the gyro and accelerometer failure rates together; false alarms; the probabilities of failure detection, failure isolation, and damage effects; and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
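The Markov reliability calculation described above can be illustrated with a much smaller model than the paper's 27-state one. The sketch below uses a hypothetical 3-state chain (fully operational, fail-operational after one isolated failure, failed) with assumed failure rate and detection/isolation coverage; reliability is the probability of being in either working state.

```python
# Minimal 3-state Markov reliability sketch (hypothetical rates, NOT the
# paper's 27-state RSDIMU model): state 0 = fully operational,
# state 1 = one failure detected and isolated (fail-op), state 2 = failed.
lam = 1e-4   # per-hour sensor failure rate (assumed)
cov = 0.98   # probability a failure is detected and isolated (assumed)

def step(p, dt):
    """One Euler step of the Chapman-Kolmogorov equations.
    0 -> 1: covered first failure; 0 -> 2: uncovered first failure;
    1 -> 2: second failure."""
    p0, p1, p2 = p
    dp0 = -2 * lam * p0
    dp1 = 2 * lam * cov * p0 - lam * p1
    dp2 = 2 * lam * (1 - cov) * p0 + lam * p1
    return (p0 + dp0 * dt, p1 + dp1 * dt, p2 + dp2 * dt)

p = (1.0, 0.0, 0.0)                 # start fully operational
dt, t_end = 0.1, 10_000.0           # 10,000-hour mission
for _ in range(int(t_end / dt)):
    p = step(p, dt)

reliability = p[0] + p[1]           # system works in states 0 and 1
print(round(reliability, 4))
```

With these assumed numbers the closed-form answer is about 0.591; the parametric studies in the paper vary exactly these inputs (failure rates, coverage) and observe the effect on this quantity.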

  20. An islanding detection methodology combining decision trees and Sandia frequency shift for inverter-based distributed generations

    DOE PAGES

    Azim, Riyasat; Li, Fangxing; Xue, Yaosuo; ...

    2017-07-14

Distributed generations (DGs) for grid-connected applications require an accurate and reliable islanding detection methodology (IDM) for secure system operation. This paper presents an IDM for grid-connected inverter-based DGs. The proposed method is a combination of passive and active islanding detection techniques for aggregation of their advantages and elimination/minimisation of the drawbacks. In the proposed IDM, the passive method utilises critical system attributes extracted from local voltage measurements at target DG locations as well as employs decision tree-based classifiers for characterisation and detection of islanding events. The active method is based on Sandia frequency shift technique and is initiated only when the passive method is unable to differentiate islanding events from other system events. Thus, the power quality degradation introduced into the system by active islanding detection techniques can be minimised. Furthermore, a combination of active and passive techniques allows detection of islanding events under low power mismatch scenarios eliminating the disadvantage associated with the use of passive techniques alone. Finally, detailed case study results demonstrate the effectiveness of the proposed method in detection of islanding events under various power mismatch scenarios, load quality factors and in the presence of single or multiple grid-connected inverter-based DG units.
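The passive/active gating idea can be sketched as follows. The feature thresholds are hypothetical stand-ins for the paper's trained decision tree, and the Sandia frequency shift enters only through its standard chopping-fraction relation cf = cf0 + k(f − fn); a real inverter would inject that perturbation and observe the frequency response rather than inspect cf directly.

```python
# Hedged sketch of passive/active islanding-detection gating. Thresholds and
# feature names are hypothetical, not the paper's trained decision tree.
def passive_classify(freq_dev_hz, volt_dev_pu):
    """Return 'island', 'grid', or 'uncertain' from local measurements."""
    if freq_dev_hz > 0.5 or volt_dev_pu > 0.10:
        return "island"
    if freq_dev_hz < 0.1 and volt_dev_pu < 0.02:
        return "grid"
    return "uncertain"

def sandia_frequency_shift(freq_hz, f_nominal=60.0, cf0=0.02, k=0.05):
    """Chopping fraction for the SFS active method: cf = cf0 + k*(f - fn).
    The positive feedback drives frequency away from nominal when islanded."""
    return cf0 + k * (freq_hz - f_nominal)

def detect_islanding(freq_hz, volt_pu, f_nominal=60.0):
    verdict = passive_classify(abs(freq_hz - f_nominal), abs(volt_pu - 1.0))
    if verdict != "uncertain":
        return verdict, False          # active method not needed
    cf = sandia_frequency_shift(freq_hz, f_nominal)
    # Placeholder decision: a real inverter injects cf and watches the
    # frequency drift; here we only flag that the active stage engaged.
    return ("island" if abs(cf - 0.02) > 0.01 else "grid"), True

print(detect_islanding(60.02, 1.005))   # small deviations: passive alone decides
print(detect_islanding(60.25, 1.04))    # ambiguous case: active stage engages
```

Engaging the active stage only on ambiguous cases is what limits the power-quality degradation the abstract mentions.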

  2. Dynamic response characteristics analysis of the doubly-fed wind power system under grid voltage drop

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Wang, J.; Wang, H. H.; Yang, L.; Chen, W.; Xu, Y. T.

    2016-08-01

Doubly-fed induction generators (DFIGs) are sensitive to grid disturbances, so the security and stability of both the grid and the DFIG itself come under threat as DFIG penetration rapidly increases. It is therefore important to study the dynamic response of the DFIG when a voltage drop occurs in the power system. In this paper, mathematical models and the control strategies governing the mechanical and electrical response processes are first introduced. Analysis of the response process then shows that the dynamic response characteristics depend on the depth of the voltage drop, the operating state of the DFIG, and the control strategy adopted on the rotor side. Finally, these conclusions are validated by simulations of the mechanical and electrical response processes for different voltage-drop depths and different DFIG output levels on the DIgSILENT/PowerFactory software platform.

  3. Value Creation Through Integrated Networks and Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Martini, Paul; Taft, Jeffrey D.

    2015-04-01

Customer adoption of distributed energy resources and public policies are driving changes in the uses of the distribution system. A system originally designed and built for one-way energy flows from central generating facilities to end-use customers is now experiencing injections of energy from customers anywhere on the grid and frequent reversals in the direction of energy flow. In response, regulators and utilities are re-thinking the design and operations of the grid to create more open and transactive electric networks. This evolution has the opportunity to unlock significant value for customers and utilities. Alternatively, failure to seize this potential may instead lead to an erosion of value if customers seek to defect and disconnect from the system. This paper will discuss how current grid modernization investments may be leveraged to create open networks that increase value through the interaction of intelligent devices on the grid and prosumerization of customers. Moreover, even greater value can be realized through the synergistic effects of convergence of multiple networks. This paper will highlight examples of the emerging nexus of non-electric networks with electricity.

  4. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    DOE PAGES

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.; ...

    2017-08-19

Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy’s Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to 1-cm-scale. Separate models were created for two areas, herein referred to as the test table grid region and the nearfield grid region. The test table grid includes the region within ~40 m from surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid area covered a broader area, 90–130 m from surface ground zero, and collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm resolution in the topographic data to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological change detection. The near-field grid region data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including: uplift and subsidence, rock fall, and slope sloughing.
This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.
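Conceptually, the change mapping reduces to differencing the pre- and post-event DEMs and keeping cells whose elevation change exceeds the DEM uncertainty. The toy sketch below uses invented elevations and plain lists; a real workflow would operate on georeferenced rasters (e.g. via GDAL or rasterio).

```python
# Toy DEM differencing sketch. Elevations (metres) are invented; the ~2 cm
# threshold loosely mirrors the near-field DEM pixel size quoted above.
pre  = [[100.00, 100.02], [100.01, 100.03]]
post = [[100.03, 100.02], [100.00, 100.08]]
threshold = 0.02   # ignore changes at/below the DEM uncertainty

changes = []
for i, (row_pre, row_post) in enumerate(zip(pre, post)):
    for j, (z0, z1) in enumerate(zip(row_pre, row_post)):
        dz = z1 - z0                       # positive = uplift, negative = subsidence
        if abs(dz) > threshold:
            changes.append((i, j, round(dz, 2),
                            "uplift" if dz > 0 else "subsidence"))

print(changes)   # cells exceeding the detection threshold
```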

  5. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    NASA Astrophysics Data System (ADS)

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.; Swanson, Erika M.; Cooley, James A.

    2017-08-01

    Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy's Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to 1-cm-scale. Separate models were created for two areas, herein referred to as the test table grid region and the nearfield grid region. The test table grid includes the region within 40 m from surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid area covered a broader area, 90-130 m from surface ground zero, and collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm resolution in the topographic data to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological change detection. The near-field grid region data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including: uplift and subsidence, rock fall, and slope sloughing. 
This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.

  7. Silicon nitride grids are compatible with correlative negative staining electron microscopy and tip-enhanced Raman spectroscopy for use in the detection of micro-organisms.

    PubMed

    Lausch, V; Hermann, P; Laue, M; Bannert, N

    2014-06-01

Successive application of negative staining transmission electron microscopy (TEM) and tip-enhanced Raman spectroscopy (TERS) is a new correlative approach that could be used to rapidly and specifically detect and identify single pathogens including bioterrorism-relevant viruses in complex samples. Our objective is to evaluate the TERS-compatibility of commonly used electron microscopy (EM) grids (sample supports), chemicals and negative staining techniques and, if required, to devise appropriate alternatives. While phosphotungstic acid (PTA) is suitable as a heavy metal stain, uranyl acetate, paraformaldehyde in HEPES buffer and alcian blue are unsuitable due to their relatively high Raman scattering. Moreover, the low thermal stability of the carbon-coated pioloform film on copper grids (pioloform grids) negates their utilization. The silicon in the cantilever of the silver-coated atomic force microscope tip used to record TERS spectra suggested that Si-based grids might be employed as alternatives. Of all the evaluated Si-based TEM grids, the silicon nitride (SiN) grid was found to be best suited, with almost no background Raman signals in the relevant spectral range, a low surface roughness and good particle adhesion properties that could be further improved by glow discharge. Charged SiN grids have excellent particle adhesion properties. The use of these grids in combination with PTA for contrast in the TEM is suitable for subsequent analysis by TERS. The study reports fundamental modifications and optimizations of the negative staining EM method that allows a combination with near-field Raman spectroscopy to acquire a spectroscopic signature from nanoscale biological structures. This should facilitate a more precise diagnosis of single viral particles and other micro-organisms previously localized and visualized in the TEM. © 2014 The Society for Applied Microbiology.

  8. Groundwater-quality data in the Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts study unit, 2008-2010--Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Wright, Michael T.; Beuttel, Brandon S.; Belitz, Kenneth

    2012-01-01

    Groundwater quality in the 12,103-square-mile Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts (CLUB) study unit was investigated by the U.S. Geological Survey (USGS) from December 2008 to March 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CLUB study unit was the twenty-eighth study unit to be sampled as part of the GAMA-PBP. The GAMA CLUB study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer systems, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer systems (hereinafter referred to as primary aquifers) are defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the CLUB study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifers; shallow groundwater may be more vulnerable to surficial contamination. In the CLUB study unit, groundwater samples were collected from 52 wells in 3 study areas (Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts) in San Bernardino, Riverside, Kern, San Diego, and Imperial Counties. Forty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and three wells were selected to aid in evaluation of water-quality issues (understanding wells). 
The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally-occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and species of inorganic chromium), and radioactive constituents (radon-222, radium isotopes, and gross alpha and gross beta radioactivity). Naturally-occurring isotopes (stable isotopes of hydrogen, oxygen, boron, and strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance) and dissolved noble gases also were measured to help identify the sources and ages of sampled groundwater. In total, 223 constituents and 12 water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 10 percent of the wells in the CLUB study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Median matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 85 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. 
However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 49 grid wells were detected at concentrations less than drinking-water benchmarks. In addition, all detections of organic constituents from the CLUB study-unit grid-well samples were less than health-based benchmarks. In total, VOCs were detected in 17 of the 49 grid wells sampled (approximately 35 percent), pesticides and pesticide degradates were detected in 5 of the 47 grid wells sampled (approximately 11 percent), and perchlorate was detected in 41 of 49 grid wells sampled (approximately 84 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells, and radioactive constituents were sampled for at 23 grid wells; most detected concentrations were less than health-based benchmarks. 
Exceptions in the grid-well samples include seven detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L); four detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L; six detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L; two detections of uranium greater than the MCL-US of 30 μg/L; nine detections of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L); one detection of nitrite plus nitrate (NO2-+NO3-), as nitrogen, greater than the MCL-US of 10 mg/L; and four detections of gross alpha radioactivity (72-hour count), and one detection of gross alpha radioactivity (30-day count), greater than the MCL-US of 15 picocuries per liter. Results for constituents with non-regulatory benchmarks set for aesthetic concerns showed that a manganese concentration greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 50 μg/L was detected in one grid well. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were detected in three grid wells, and one of these wells also had a concentration that was greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in six grid wells. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 20 grid wells, and concentrations in 2 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.
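Benchmark comparisons of this kind are simple threshold counts per constituent. The sketch below uses benchmark values quoted above but entirely invented well concentrations, purely to illustrate the bookkeeping; it is not the study's data.

```python
# Hypothetical concentrations (ug/L) compared against the health-based
# benchmarks named in the text. Well values below are invented.
benchmarks_ug_per_L = {"arsenic": 10, "boron": 1000, "molybdenum": 40, "uranium": 30}
wells = [
    {"arsenic": 12, "boron": 150,  "molybdenum": 8,  "uranium": 2},
    {"arsenic": 4,  "boron": 1200, "molybdenum": 55, "uranium": 1},
    {"arsenic": 3,  "boron": 300,  "molybdenum": 12, "uranium": 35},
]

# Count, for each constituent, the wells whose concentration exceeds its benchmark
exceedances = {c: sum(1 for w in wells if w[c] > limit)
               for c, limit in benchmarks_ug_per_L.items()}
print(exceedances)
```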

  9. Cascading failure in scale-free networks with tunable clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Jun; Gu, Bo; Guan, Xiang-Min; Zhu, Yan-Bo; Lv, Ren-Li

    2016-02-01

Cascading failure is ubiquitous in many networked infrastructure systems, such as power grids, the Internet and air transportation systems. In this paper, we extend the cascading failure model to a scale-free network with tunable clustering and focus on the effect of the clustering coefficient on system robustness. It is found that network robustness undergoes a nonmonotonic transition as the clustering coefficient increases: both highly and lowly clustered networks are fragile under intentional attack, while a network with a moderate clustering coefficient can better resist the spread of cascading failures. We then provide an extensive explanation of this phenomenon from a microscopic point of view and through quantitative analysis. Our work can be useful to the design and optimization of infrastructure systems.
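A minimal cascading-failure simulation in the load/capacity spirit of the Motter–Lai model (the paper's exact formulation may differ) can be sketched as follows; here load is simply node degree, and a failed node's load is shed equally onto its surviving neighbours.

```python
# Cascading-failure sketch on a tiny illustrative graph. alpha is the
# tolerance parameter: capacity = (1 + alpha) * initial load.
alpha = 0.2                        # assumed tolerance margin
graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3}}

load = {n: float(len(nbrs)) for n, nbrs in graph.items()}
capacity = {n: (1 + alpha) * l for n, l in load.items()}

frontier = {0}                     # intentional attack on a highest-degree node
failed = set(frontier)
while frontier:
    for n in frontier:             # shed the load of newly failed nodes
        alive = [m for m in graph[n] if m not in failed]
        for m in alive:
            load[m] += load[n] / len(alive)
    # any surviving node pushed over its capacity fails next
    frontier = {n for n in graph
                if n not in failed and load[n] > capacity[n]}
    failed |= frontier

print(sorted(failed))              # nodes lost in the cascade
```

With this small tolerance margin the single attack propagates until the whole toy network has failed, which is the kind of robustness question the paper studies as a function of clustering.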

  10. Differentiated protection services with failure probability guarantee for workflow-based applications

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2010-12-01

A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grids. We develop three differentiated protection service provisioning strategies that provide security-level guarantees and network-resource optimization for workflow-based applications. Simulations demonstrate that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure-probability requirements.
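Under the common simplifying assumption of independent link failures, the failure probability of an unprotected path, and of a path with a dedicated link-disjoint backup, can be sketched as follows; the per-link probabilities and the class guarantee are hypothetical, not the paper's algorithms.

```python
# Failure-probability bookkeeping sketch (independent link failures assumed).
def path_failure_prob(link_probs):
    """An unprotected path fails if any link on it fails."""
    survive = 1.0
    for p in link_probs:
        survive *= (1.0 - p)
    return 1.0 - survive

primary = [0.01, 0.02, 0.01]       # hypothetical per-link failure probabilities
backup  = [0.02, 0.02]             # link-disjoint backup path

p_unprotected = path_failure_prob(primary)
# With dedicated protection, the connection fails only if both paths fail.
p_protected = p_unprotected * path_failure_prob(backup)

guarantee = 0.01                   # hypothetical failure-probability class
print(round(p_unprotected, 4), round(p_protected, 6))
print(p_protected <= guarantee)    # protection is needed to meet this class
```

This is the basic trade-off a differentiated strategy exploits: spend backup resources only on connections whose class guarantee the unprotected path cannot meet.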

  11. A Bayesian Approach Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data

    DOE PAGES

    Yue, Meng; Toto, Tami; Jensen, Michael P.; ...

    2017-05-18

Severe weather events such as strong thunderstorms are some of the most significant and frequent threats to the electrical grid infrastructure. Outages resulting from storms can be very costly. While some tools are available to utilities to predict storm occurrences and damage, they are typically very crude and provide little means of facilitating restoration efforts. This study developed a methodology to use historical high-resolution (both temporal and spatial) radar observations of storm characteristics and outage information to develop weather condition dependent failure rate models (FRMs) for different grid components. Such models can provide an estimation or prediction of the outage numbers in small areas of a utility’s service territory once the real-time measurement or forecasted data of weather conditions become available as the input to the models. Considering the potential value provided by real-time outages reported, a Bayesian outage prediction (BOP) algorithm is proposed to account for both strength and uncertainties of the reported outages and failure rate models. The potential benefit of this outage prediction scheme is illustrated in this study.
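One simple way to combine a failure-rate-model prediction with real-time reports, in the Bayesian spirit described above, is a conjugate Gamma–Poisson update; the numbers below are illustrative and this is not the paper's BOP algorithm.

```python
# Gamma-Poisson sketch of fusing a model-based outage forecast (prior)
# with reported outages (likelihood). All numbers are illustrative.
# Prior: the FRM predicts ~6 outages in this area for the storm.
prior_shape, prior_rate = 6.0, 1.0      # Gamma(a, b): prior mean a/b = 6

# Real-time data: 11 outages reported in one reporting window, treated
# as a Poisson observation of the true outage rate.
reported, windows = 11, 1

# Conjugate update: posterior is Gamma(a + k, b + n)
post_shape = prior_shape + reported
post_rate = prior_rate + windows
post_mean = post_shape / post_rate      # updated expected outage count

print(round(post_mean, 2))              # pulled from the prior toward the reports
```

The posterior mean sits between the model forecast (6) and the raw reports (11), weighted by how much each is trusted, which is the qualitative behaviour the abstract describes.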

  13. Application of the NEXT Ion Thruster Lifetime Assessment to Thruster Throttling

    NASA Technical Reports Server (NTRS)

    VanNoord, Jonathan L.; Herman, Daniel A.

    2010-01-01

Ion thrusters are low-thrust, high-specific-impulse devices with typical operational lifetimes of 10,000 to 30,000 hr over a range of throttling conditions. The NEXT ion thruster is the latest generation of ion thrusters under development. The NEXT ion thruster currently has a qualification-level propellant throughput requirement of 450 kg of xenon, which corresponds to roughly 22,000 hr of operation at the highest input power throttling point. This paper provides a brief review of previous life assessment predictions for various throttling conditions. A further assessment is presented examining the anticipated accelerator grid hole wall erosion and the related electron backstreaming limit. The continued assessment of the NEXT ion thruster indicates that the first failure mode across the throttling range, charge-exchange-induced groove erosion, is not expected until beyond 36,000 hr of operation. It is at that duration that the groove is predicted to penetrate the accelerator grid, possibly resulting in structural failure. Based on these lifetime and mission assessments, a throttling approach is presented for the Long Duration Test to demonstrate NEXT thruster lifetime and validate modeling.
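As a quick consistency check of the figures quoted above, the 450-kg throughput requirement and the roughly 22,000-hr operating time imply an average xenon flow rate of about 5.7 mg/s at the highest-power throttle point:

```python
# Back-of-envelope check of the abstract's figures: 450 kg of xenon
# over ~22,000 hours of operation.
throughput_kg = 450.0
hours = 22_000.0

# Convert kg -> mg and hours -> seconds to get an average mass flow rate
flow_mg_per_s = throughput_kg * 1e6 / (hours * 3600.0)
print(round(flow_mg_per_s, 2))   # average xenon flow rate in mg/s
```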

  14. Multi-terabyte EIDE disk arrays running Linux RAID5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.

    2004-11-01

High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.
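The single-failure protection mentioned above comes from the fact that the RAID-5 parity block is the bytewise XOR of the data blocks in a stripe, so any one missing block can be rebuilt from the survivors (two losses in the same stripe are unrecoverable):

```python
# Why RAID-5 tolerates exactly one disk failure: parity = XOR of the data
# blocks, and XOR-ing the survivors with the parity reproduces the lost block.
def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"AAAA", b"BBBB", b"CCCC"]     # one stripe across three data disks
parity = xor_blocks(data)              # stored on the stripe's parity disk

lost = data[1]                          # simulate disk 1 failing
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == lost)                  # the single failure is recovered
```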

  15. Network topology and resilience analysis of South Korean power grid

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hwan; Eisenberg, Daniel A.; Chun, Yeong Han; Park, Jeryang

    2017-01-01

In this work, we present topological and resilience analyses of the South Korean power grid (KPG) across a broad range of voltage levels. While topological analysis of the KPG restricted to high-voltage infrastructure shows an exponential degree distribution, providing further empirical evidence on power grid topology, the inclusion of low-voltage components generates a distribution with a larger variance and a smaller average degree. This result suggests that the topology of a power grid may converge to a highly skewed degree distribution if more low-voltage data are considered. Moreover, when compared to ER random and BA scale-free networks, the KPG has a lower efficiency and a higher clustering coefficient, implying that a highly clustered structure does not necessarily guarantee the functional efficiency of a network. Error- and attack-tolerance analysis, evaluated with efficiency, indicates that the KPG is more vulnerable to random or degree-based attacks than to betweenness-based intentional attacks. Cascading failure analysis with a recovery mechanism demonstrates that the resilience of the network depends on both tolerance capacity and recovery initiation time. Also, when the two factors are fixed, the KPG is the most vulnerable among the three networks. Based on our analysis, we propose that the topology of power grids should be designed so that loads are homogeneously distributed, or so that functional hubs and their neighbors have high tolerance capacity, to enhance resilience.
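The efficiency measure used in such tolerance analyses is typically the global efficiency, the mean of 1/d(i, j) over node pairs (taken as zero for disconnected pairs). The sketch below computes it with breadth-first search on a small illustrative ring graph, then shows the drop after removing one node; the graph and the removal are made up for illustration.

```python
# Global-efficiency sketch: E = mean over ordered node pairs of 1/d(i, j),
# with 1/d = 0 for disconnected pairs. Graph is illustrative only.
from collections import deque

def efficiency(adj):
    nodes = list(adj)
    total, pairs = 0.0, 0
    for s in nodes:
        dist = {s: 0}                  # BFS shortest hop counts from s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += 1.0 / dist[t] if t in dist else 0.0
                pairs += 1
    return total / pairs if pairs else 0.0

ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
e_full = efficiency(ring)

removed = 0                            # simulate an attack on one node
damaged = {n: nbrs - {removed} for n, nbrs in ring.items() if n != removed}
e_after = efficiency(damaged)

print(round(e_full, 3), round(e_after, 3))   # efficiency drops after the attack
```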

  16. A method for producing digital probabilistic seismic landslide hazard maps

    USGS Publications Warehouse

    Jibson, R.W.; Harp, E.L.; Michael, J.A.

    2000-01-01

    The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include: (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24 000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10 m grid spacing using ARC/INFO GIS software on a UNIX computer. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure. © 2000 Elsevier Science B.V. All rights reserved.
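
    The displacement-to-probability step can be sketched as a saturating curve fitted to the fraction of landslide cells observed at each modeled displacement. The Weibull-type form and the coefficients below are placeholders for illustration, not the paper's fitted values:

```python
import math

def failure_probability(dn_cm, a=0.3, b=0.05, c=1.5):
    """Probability of slope failure given Newmark displacement Dn (cm).
    a = asymptotic maximum probability, b and c shape the rise; all
    three are placeholder values to be regressed against a landslide
    inventory, cell by cell."""
    return a * (1.0 - math.exp(-b * dn_cm ** c))

# mapping: apply the fitted curve to every grid cell's modeled displacement
displacement_grid = [[0.0, 2.5], [12.0, 40.0]]  # cm, hypothetical cells
hazard_map = [[failure_probability(dn) for dn in row]
              for row in displacement_grid]
```

Because the curve is monotonic and bounded by `a`, the resulting raster can be read directly as a probabilistic hazard map for the shaking scenario used to compute the displacements.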

  17. A method for producing digital probabilistic seismic landslide hazard maps; an example from the Los Angeles, California, area

    USGS Publications Warehouse

    Jibson, Randall W.; Harp, Edwin L.; Michael, John A.

    1998-01-01

    The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24,000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10-m grid spacing in the ARC/INFO GIS platform. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure.

  18. Robust-yet-fragile nature of interdependent networks

    NASA Astrophysics Data System (ADS)

    Tan, Fei; Xia, Yongxiang; Wei, Zhi

    2015-05-01

    Interdependent networks have been shown to be extremely vulnerable based on the percolation model. Parshani et al. [Europhys. Lett. 92, 68002 (2010), 10.1209/0295-5075/92/68002] further indicated that the more intersimilar networks are, the more robust they are to random failures. When traffic load is considered, how do the coupling patterns impact cascading failures in interdependent networks? This question has been largely unexplored until now. In this paper, we address this question by investigating the robustness of interdependent Erdős-Rényi random graphs and Barabási-Albert scale-free networks under either random failures or intentional attacks. It is found that interdependent Erdős-Rényi random graphs are robust yet fragile under either random failures or intentional attacks. Interdependent Barabási-Albert scale-free networks, however, are only robust yet fragile under random failures but fragile under intentional attacks. We further analyze the interdependent communication network and power grid and achieve similar results. These results advance our understanding of how interdependency shapes network robustness.
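
    A load-based cascade of the kind studied here can be sketched as repeated removal of overloaded nodes, with each removed node's load shed evenly onto its surviving neighbors. This redistribution rule is a deliberate simplification of the paper's traffic model, and the toy graph is ours:

```python
def cascade(adj, load, capacity):
    """Remove overloaded nodes one at a time, shedding each removed
    node's load evenly onto its surviving neighbors, until no node
    exceeds its capacity. Returns the set of surviving nodes."""
    alive = set(adj)
    overloaded = True
    while overloaded:
        overloaded = False
        for u in sorted(alive):
            if load[u] > capacity[u]:
                alive.discard(u)
                nbrs = [v for v in adj[u] if v in alive]
                for v in nbrs:          # loop is skipped if no neighbors
                    load[v] += load[u] / len(nbrs)
                overloaded = True
                break

    return alive

# a 3-node chain where one initial overload propagates node by node
# until the whole network has failed
adj = {0: {1}, 1: {0, 2}, 2: {1}}
survivors = cascade(adj,
                    load={0: 2.0, 1: 1.0, 2: 1.0},
                    capacity={0: 1.0, 1: 1.5, 2: 2.0})
assert survivors == set()
```

Raising the capacities (the tolerance parameter in such models) stops the cascade after the first removal, which is the robust-yet-fragile trade-off in miniature.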

  19. Fault tolerance in computational grids: perspectives, challenges, and issues.

    PubMed

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is an extended classification of the problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related faults, have been identified that need to be handled at various layers of the computational grid. In this survey, the fault tolerance and fault detection mechanisms are also analyzed and examined. Our conclusion is that a dependable and reliable grid can only be established when greater emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  20. A vector-based failure detection and isolation algorithm for a dual fail-operational redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, Frederick R.; Bailey, Melvin L.

    1987-01-01

    A vector-based failure detection and isolation technique for a skewed array of two-degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables which are keyed to pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected over all turbulence environments. For both types of inertial sensor, hard-over and null-type failures were detected in all environments without incident. The algorithm functioned without false alarms or false isolations over all turbulence environments for the runs tested.
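
    The detect-then-isolate logic can be illustrated for the simplest redundant case, three sensors measuring a common axis: each pairwise parity residual is compared to a threshold, and a sensor is isolated only when every parity equation containing it has tripped. The sensor names, pairing scheme, and threshold below are illustrative, not the paper's skewed-array geometry:

```python
def parity_residuals(meas, pairs):
    """Pairwise parity residuals: differences that should cancel for
    fault-free redundant sensors measuring the same quantity."""
    return {pair: abs(meas[pair[0]] - meas[pair[1]]) for pair in pairs}

def detect_and_isolate(meas, pairs, threshold):
    """Trip each parity equation against the threshold, then isolate a
    sensor only if every equation it appears in has tripped."""
    flags = {p: r > threshold
             for p, r in parity_residuals(meas, pairs).items()}
    suspects = set()
    for s in meas:
        eqs = [tripped for p, tripped in flags.items() if s in p]
        if eqs and all(eqs):
            suspects.add(s)
    return suspects

pairs = [("a", "b"), ("a", "c"), ("b", "c")]
# sensor "a" has a large bias jump; both equations containing it trip,
# while the (b, c) equation stays quiet, so "a" is isolated
assert detect_and_isolate({"a": 5.0, "b": 1.0, "c": 1.1}, pairs, 0.5) == {"a"}
```

The threshold choice embodies the trade-off discussed in the abstract: sensor noise and turbulence inflate the residuals, so a threshold tight enough to catch a 0.2 deg/hr bias jump in calm air may miss it, or false-alarm, in severe turbulence.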

  1. Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Milos Manic

    The planned large scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures multiple communication streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between false-positive and false-negative rates. In order to alleviate these issues, this paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams, allowing accurate models of normal network behavior to be learned. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on an experimental smart grid system, demonstrating enhanced cyber-security.
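
    The threshold-adaptation idea can be sketched with a plain mean/standard-deviation detector whose sigma multiplier is scaled down as domain knowledge raises the possibility of an attack. The linear scaling rule below is a simple stand-in for the paper's Interval Type-2 fuzzy inference, and all names are ours:

```python
from statistics import mean, stdev

def adaptive_threshold(base_sigma, attack_possibility):
    """Shrink the sigma multiplier as the domain-knowledge attack
    possibility rises (0 = none, 1 = certain). Placeholder linear rule
    standing in for fuzzy-logic inference."""
    return base_sigma * (1.0 - 0.5 * attack_possibility)

def is_anomalous(history, value, base_sigma=3.0, attack_possibility=0.0):
    """Flag a value that falls outside the adaptively scaled
    mean +/- k*sigma band of recent history for this stream."""
    mu, sd = mean(history), stdev(history)
    k = adaptive_threshold(base_sigma, attack_possibility)
    return abs(value - mu) > k * sd

history = [9.0, 10.0, 11.0, 10.0, 9.0, 11.0]
# the same observation passes at the default sensitivity but is flagged
# once domain knowledge says an attack is plausible
assert not is_anomalous(history, 12.5)
assert is_anomalous(history, 12.5, attack_possibility=1.0)
```

Tightening the band this way trades more false positives for fewer false negatives exactly when the missed-detection cost is believed to be highest, which is the tradeoff the abstract describes.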

  2. Detection, Diagnosis and Prognosis: Contribution to the energy challenge: Proceedings of the Meeting of the Mechanical Failures Prevention Group

    NASA Technical Reports Server (NTRS)

    Shives, T. R. (Editor); Willard, W. A. (Editor)

    1981-01-01

    The contribution of failure detection, diagnosis and prognosis to the energy challenge is discussed. Areas of special emphasis included energy management, techniques for failure detection in energy related systems, improved prognostic techniques for energy related systems and opportunities for detection, diagnosis and prognosis in the energy field.

  3. Design study of the geometry of the blanking tool to predict the burr formation of Zircaloy-4 sheet

    NASA Astrophysics Data System (ADS)

    Ha, Jisun; Lee, Hyungyil; Kim, Dongchul; Kim, Naksoo

    2013-12-01

    In this work, we investigated the factors that influence burr formation in Zircaloy-4 sheet used for the spacer grids of nuclear fuel rods. Two groups of factors were considered: the failure parameters of the Gurson-Tvergaard-Needleman (GTN) model, varied through the clearance and punch velocity, and the geometric factors of the punch, varied through the shearing angle and corner radius of an L-shaped punch. First, we carried out blanking tests with the GTN failure parameters using the L-shaped punch and, by analyzing the sheared edges, investigated how the failure parameters and geometric factors affect burr formation. The geometric factors were found to influence burr formation as strongly as the failure parameters. The sheared edges and burr formation were then investigated as functions of the failure parameters and geometric factors using an FE analysis model; analyzing the sheared edges with these variables confirmed that the geometric factors affect burr formation more than the failure parameters do. To check the reliability of the FE model, the blanking force and sheared edges obtained from experiments were compared with computations that account for heat transfer.

  4. Development of an adaptive failure detection and identification system for detecting aircraft control element failures

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1990-01-01

    A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.

  5. Investigation of the cross-ship comparison monitoring method of failure detection in the HIMAT RPRV. [digital control techniques using airborne microprocessors

    NASA Technical Reports Server (NTRS)

    Wolf, J. A.

    1978-01-01

    The highly maneuverable aircraft technology (HIMAT) remotely piloted research vehicle (RPRV) uses cross-ship comparison monitoring of the actuator ram positions to detect a failure in the aileron, canard, and elevator control surface servosystems. Some possible sources of nuisance trips for this failure detection technique are analyzed. A FORTRAN model of the simplex servosystems and the failure detection technique was used to provide a convenient means of changing parameters and introducing system noise. The sensitivity of the technique to differences between servosystems and operating conditions was determined. The cross-ship comparison monitoring method presently appears to be marginal in its capability both to detect an actual failure and to withstand nuisance trips.

  6. Operation of an InGrid based X-ray detector at the CAST experiment

    NASA Astrophysics Data System (ADS)

    Krieger, Christoph; Desch, Klaus; Kaminski, Jochen; Lupberger, Michael

    2018-02-01

    The CERN Axion Solar Telescope (CAST) is searching for axions and other particles which could be candidates for Dark Matter and even Dark Energy. These particles could be produced in the Sun and detected via conversion into soft X-ray photons inside a strong magnetic field. In order to increase the sensitivity to physics beyond the Standard Model, detectors with a threshold below 1 keV as well as efficient background rejection methods are required to compensate for the low energies and weak couplings, which result in very low detection rates. Those criteria are fulfilled by a detector combining a pixelized readout chip with an integrated Micromegas stage. These InGrid (Integrated Grid) devices can be built by photolithographic postprocessing techniques, resulting in a close-to-perfect match of grid and pixels that facilitates the detection of single electrons on the chip surface. The high spatial resolution allows for energy determination by simple electron counting as well as for an event-shape-based analysis as a background rejection method. Tests at an X-ray generator revealed the energy threshold of an InGrid based X-ray detector to be well below the carbon Kα line at 277 eV. After the successful demonstration of the detector's key features, the detector was mounted at one of CAST's four detector stations behind an X-ray telescope in 2014. After several months of successful operation without any detector-related interruptions, the InGrid based X-ray detector continues data taking at CAST in 2015. During operation at the experiment, background rates on the order of 10^-5 keV^-1 cm^-2 s^-1 have been achieved by application of a likelihood-based method discriminating the non-photon background originating mostly from cosmic rays.
For continued operation in 2016, an upgraded InGrid based detector is to be installed, among other improvements including decoupling and sampling of the signal induced on the grid as well as a veto scintillator, to further lower the observed background rate and improve sensitivity.

  7. Real-time failure control (SAFD)

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.

    1990-01-01

    The Real Time Failure Control program involves the development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
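
    A signal-based monitor of this kind reduces to checking each measurement against a mean ± k·σ band; requiring the violation to persist for several consecutive samples is one common way to suppress transient trips. A minimal sketch (the persistence rule and all parameter names are our illustration, not SAFD's actual logic):

```python
class SignalMonitor:
    """Flag a failure only after a signal stays outside its
    mean +/- k*sigma band for `persist` consecutive samples."""

    def __init__(self, mu, sigma, k=3.0, persist=3):
        self.mu, self.sigma = mu, sigma
        self.k, self.persist = k, persist
        self.count = 0

    def update(self, x):
        """Feed one sample; return True once a failure is declared."""
        if abs(x - self.mu) > self.k * self.sigma:
            self.count += 1
        else:
            self.count = 0
        return self.count >= self.persist

# band is 94..106 for mu=100, sigma=2, k=3; the trip fires only on the
# third consecutive out-of-band sample
mon = SignalMonitor(mu=100.0, sigma=2.0)
results = [mon.update(x) for x in (105.0, 110.0, 111.0, 112.0)]
assert results == [False, False, False, True]
```

One such monitor per instrumented engine measurement, each with its own predetermined or computed mean and standard deviation, gives the overall structure the abstract describes.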

  8. Characterizing the digital radiography system in terms of effective detective quantum efficiency and CDRAD measurement

    NASA Astrophysics Data System (ADS)

    Yalcin, A.; Olgar, T.

    2018-07-01

    The aim of this study was to assess the performance of a digital radiography system in terms of effective detective quantum efficiency (eDQE) for different tube voltages, polymethyl methacrylate (PMMA) phantom thicknesses, and grid types. The image performance of the digital radiography system was also evaluated by using CDRAD measurements under the same conditions, and the correlation of the CDRAD results with eDQE was examined. The eDQE was calculated via measurement of the effective modulation transfer function (eMTF), effective normalized noise power spectra (eNNPS), scatter fraction (SF), and transmission factors (TF). SFs and TFs were also calculated for different beam qualities by using the MCNP4C Monte Carlo simulation code. The integrated eDQE (IeDQE) over the frequency range was used to find the correlation with the inverse image quality figure (IQFinv) obtained from the CDRAD measurements. The highest eDQE was obtained with a 60 lp/cm grid frequency and a 10:1 grid ratio. No remarkable effect of grid frequency on eDQE was observed, but eDQE decreased with increasing grid ratio. A significant correlation was found between IeDQE and IQFinv.

  9. Groundwater-quality data in the Cascade Range and Modoc Plateau study unit, 2010-Results from the California GAMA Program

    USGS Publications Warehouse

    Shelton, Jennifer L.; Fram, Miranda S.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 39,000-square-kilometer Cascade Range and Modoc Plateau (CAMP) study unit was investigated by the U.S. Geological Survey (USGS) from July through October 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CAMP study unit is the thirty-second study unit to be sampled as part of the GAMA PBP. The GAMA CAMP study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as that part of the aquifer corresponding to the open or screened intervals of wells listed in the California Department of Public Health (CDPH) database for the CAMP study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifer system; shallow groundwater may be more vulnerable to surficial contamination. In the CAMP study unit, groundwater samples were collected from 90 wells and springs in 6 study areas (Sacramento Valley Eastside, Honey Lake Valley, Cascade Range and Modoc Plateau Low Use Basins, Shasta Valley and Mount Shasta Volcanic Area, Quaternary Volcanic Areas, and Tertiary Volcanic Areas) in Butte, Lassen, Modoc, Plumas, Shasta, Siskiyou, and Tehama Counties. Wells and springs were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). 
Groundwater samples were analyzed for field water-quality indicators, organic constituents, perchlorate, inorganic constituents, radioactive constituents, and microbial indicators. Naturally occurring isotopes and dissolved noble gases also were measured to provide a dataset that will be used to help interpret the sources and ages of the sampled groundwater in subsequent reports. In total, 221 constituents were investigated for this study. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 10 percent of the wells in the CAMP study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 90 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. 
Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 90 grid wells in the CAMP study unit were detected at concentrations less than drinking-water benchmarks. Of the 148 organic constituents analyzed, 27 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and nonregulatory health-based benchmarks, and all were less than 1/10 of benchmark levels. One or more organic constituents were detected in 52 percent of the grid wells in the CAMP study unit: VOCs were detected in 30 percent, and pesticides and pesticide degradates were detected in 31 percent. Trace elements, major ions, nutrients, and radioactive constituents were sampled for at 90 grid wells in the CAMP study unit, and most detected concentrations were less than health-based benchmarks. Exceptions include three detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (µg/L), two detections of boron greater than the CDPH notification level (NL-CA) of 1,000 µg/L, two detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 µg/L, two detections of vanadium greater than the CDPH notification level (NL-CA) of 50 µg/L, one detection of nitrate, as nitrogen, greater than the MCL-US of 10 milligrams per liter (mg/L), two detections of uranium greater than the MCL-US of 30 µg/L and the MCL-CA of 20 picocuries per liter (pCi/L), one detection of radon-222 greater than the proposed MCL-US of 4,000 pCi/L, and two detections of gross alpha particle activity greater than the MCL-US of 15 pCi/L. 
Results for inorganic constituents with non-regulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 µg/L were detected in four grid wells. Manganese concentrations greater than the SMCL-CA of 50 µg/L were detected in nine grid wells. Chloride and TDS were detected at concentrations greater than the upper SMCL-CA benchmarks of 500 mg/L and 1,000 mg/L, respectively, in one grid well. Microbial indicators (total coliform and Escherichia coli [E. coli]) were detected in 11 percent of the 83 grid wells sampled for these analyses in the CAMP study unit. The presence of total coliform was detected in nine grid wells, and the presence of E. coli was detected in one of these same grid wells.

  10. Multidrug-resistant tuberculosis treatment failure detection depends on monitoring interval and microbiological method

    PubMed Central

    White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret

    2016-01-01

    Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552

  11. AE (Acoustic Emission) for Flip-Chip CGA/FCBGA Defect Detection

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    C-mode scanning acoustic microscopy (C-SAM) is a nondestructive inspection technique that uses ultrasound to show the internal features of a specimen. A very high or ultra-high-frequency ultrasound passes through a specimen to produce a visible acoustic microimage (AMI) of its inner features. As ultrasound travels into a specimen, the wave is absorbed, scattered, or reflected. The response is highly sensitive to the elastic properties of the materials and is especially sensitive to air gaps. This characteristic makes AMI the preferred method for finding "air gaps" such as delamination, cracks, voids, and porosity. C-SAM analysis, a type of AMI, was widely used in the past for evaluation of plastic microelectronic circuits, especially for detecting delamination of direct die bonding. With the introduction of flip-chip die attachment in packages, its use has been expanded to nondestructive characterization of flip-chip solder bumps and underfill. Figure 1.1 compares visual and C-SAM inspection approaches for defect detection, especially for solder joint interconnections and hidden defects. C-SAM is specifically useful for package features like internal cracks and delamination. C-SAM not only allows for the visualization of interior features, it can produce images on a layer-by-layer basis. Visual inspection, however, is superior to C-SAM only for exposed features, including solder dewetting, microcracks, and contamination. Ideally, a combination of various inspection techniques - visual, optical and SEM microscopy, C-SAM, and X-ray - needs to be performed in order to assure quality at the part, package, and system levels. This report presents evaluations performed on various advanced packages/assemblies, especially the flip-chip die versions of ball grid array/column grid array (BGA/CGA) packages, using C-SAM equipment. Both external and in-house equipment was used for the evaluation. 
The outside facility provided images of the key features that could be detected using the most advanced C-SAM equipment with a skilled operator. The investigation then continued using in-house equipment, with its limitations. For comparison, representative X-ray images of the assemblies were also gathered to show the key defect detection capabilities of these non-destructive techniques. The key images gathered and compared were as follows. The 2D X-ray and C-SAM images of a plastic LGA assembly were compared, showing features that could be detected by either NDE technique; for this specific case, X-ray was the clear winner. Flip-chip CGA and FCBGA assemblies with and without heat sinks were evaluated by C-SAM; only the FCCGA package without a heat sink could be fully analyzed for underfill and bump quality, and cross-sectional microscopy did not reveal the peripheral delamination features detected by C-SAM. A number of fine-pitch PBGA assemblies were analyzed by C-SAM; even though the internal features of the package assemblies could be detected, C-SAM was unable to detect solder joint failures at either the package or board level. Twenty touch-ups with a soldering iron at a 700 °F tip temperature, each lasting about 5 seconds, did not induce defects detectable in C-SAM images; other techniques need to be considered to induce known defects for characterization. Given NASA's emphasis on the use of microelectronic packages and assemblies and on quality assurance for workmanship defect detection, understanding the key features of the various inspection systems that detect defects in the early stages of package and assembly is critical to developing approaches that will minimize future failures. Additional specific, tailored non-destructive inspection approaches could enable low-risk insertion of these advanced electronic packages having hidden and fine features.

  12. SCADA alarms processing for wind turbine component failure detection

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Reder, M.; Melero, J. J.

    2016-09-01

    Wind turbine failures and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information for assessing component status. Different alarm analysis techniques are then applied for two purposes: evaluating the capability of the SCADA alarm system to detect failures, and investigating how faults in some components are followed by failure occurrences in others. Various case studies are presented and discussed. The study highlights the relationships between faulty behaviour in different components, and between failures and adverse environmental conditions.

  13. A Testbed Environment for Buildings-to-Grid Cyber Resilience Research and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridhar, Siddharth; Ashok, Aditya; Mylrea, Michael E.

    The Smart Grid is characterized by the proliferation of advanced digital controllers at all levels of its operational hierarchy from generation to end consumption. Such controllers within modern residential and commercial buildings enable grid operators to exercise fine-grained control over energy consumption through several emerging Buildings-to-Grid (B2G) applications. Though this capability promises significant benefits in terms of operational economics and improved reliability, cybersecurity weaknesses in the supporting infrastructure could be exploited to cause a detrimental effect, and this necessitates focused research efforts on two fronts. First, the understanding of how cyber attacks in the B2G space could impact grid reliability and to what extent. Second, the development and validation of cyber-physical application-specific countermeasures that are complementary to traditional infrastructure cybersecurity mechanisms for enhanced cyber attack detection and mitigation. The PNNL B2G testbed is currently being developed to address these core research needs. Specifically, the B2G testbed combines high-fidelity buildings+grid simulators, industry-grade building automation and Supervisory Control and Data Acquisition (SCADA) systems in an integrated, realistic, and reconfigurable environment capable of supporting attack-impact-detection-mitigation experimentation. In this paper, we articulate the need for research testbeds to model various B2G applications broadly by looking at the end-to-end operational hierarchy of the Smart Grid. Finally, the paper not only describes the architecture of the B2G testbed in detail, but also addresses the broad spectrum of B2G resilience research it is capable of supporting based on the smart grid operational hierarchy identified earlier.

  14. Synchrophasor Sensor Networks for Grid Communication and Protection.

    PubMed

    Gharavi, Hamid; Hu, Bin

    2017-07-01

    This paper focuses primarily on leveraging synchronized current/voltage amplitudes and phase angle measurements to foster new categories of applications, such as improving the effectiveness of grid protection and minimizing outage duration for distributed grid systems. The motivation for such an application arises from the fact that with the support of communication, synchronized measurements from multiple sites in a grid network can greatly enhance the accuracy and timeliness of identifying the source of instabilities. The paper first provides an overview of synchrophasor networks and then presents techniques for power quality assessment, including fault detection and protection. To achieve this we present a new synchrophasor data partitioning scheme that is based on the formation of a joint space and time observation vector. Since communication is an integral part of synchrophasor networks, the newly adopted wireless standard for machine-to-machine (M2M) communication, known as IEEE 802.11ah, has been investigated. The paper also presents a novel implementation of a hardware in the loop testbed for real-time performance evaluation. The purpose is to illustrate the use of both hardware and software tools to verify the performance of synchrophasor networks under more realistic environments. The testbed is a combination of grid network modeling, and an Emulab-based communication network. The combined grid and communication network is then used to assess power quality for fault detection and location using the IEEE 39-bus and 390-bus systems.

  15. Synchrophasor Sensor Networks for Grid Communication and Protection

    PubMed Central

    Gharavi, Hamid

    2017-01-01

    This paper focuses primarily on leveraging synchronized current/voltage amplitudes and phase angle measurements to foster new categories of applications, such as improving the effectiveness of grid protection and minimizing outage duration for distributed grid systems. The motivation for such an application arises from the fact that with the support of communication, synchronized measurements from multiple sites in a grid network can greatly enhance the accuracy and timeliness of identifying the source of instabilities. The paper first provides an overview of synchrophasor networks and then presents techniques for power quality assessment, including fault detection and protection. To achieve this we present a new synchrophasor data partitioning scheme that is based on the formation of a joint space and time observation vector. Since communication is an integral part of synchrophasor networks, the newly adopted wireless standard for machine-to-machine (M2M) communication, known as IEEE 802.11ah, has been investigated. The paper also presents a novel implementation of a hardware in the loop testbed for real-time performance evaluation. The purpose is to illustrate the use of both hardware and software tools to verify the performance of synchrophasor networks under more realistic environments. The testbed is a combination of grid network modeling, and an Emulab-based communication network. The combined grid and communication network is then used to assess power quality for fault detection and location using the IEEE 39-bus and 390-bus systems. PMID:28890553

  16. White Light Schlieren Optics Using Bacteriorhodopsin as an Adaptive Image Grid

    NASA Technical Reports Server (NTRS)

    Peale, Robert; Ruffin, Boh; Donahue, Jeff; Barrett, Carolyn

    1996-01-01

    A Schlieren apparatus using a bacteriorhodopsin film as an adaptive image grid with white light illumination is demonstrated for the first time. The time dependent spectral properties of the film are characterized. Potential applications include a single-ended Schlieren system for leak detection.

  17. Development and testing of an algorithm to detect implantable cardioverter-defibrillator lead failure.

    PubMed

    Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A

    2006-02-01

    Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was <0.35 mV-mV, then the detected rhythm was considered noise due to a lead failure. The first ICD-detected episode of lead failure and inappropriate detection from 24 ICD patients with a pace/sense lead failure and all ventricular arrhythmias from 56 ICD patients without a lead failure were selected. The stored data were analyzed to determine the sensitivity and specificity of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.
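
    The abstract's baseline measure is concrete enough to illustrate in code. The sketch below is a hypothetical rendering, not the authors' device implementation; the function names, the sampling layout of the 188-ms window, and the magnitude handling of the sum are assumptions. It computes, per sensed event, the product of the sum and standard deviation of the far-field electrogram samples, and flags lead-failure noise when the minimum over the last 12 beats falls below 0.35 mV-mV.

```python
import numpy as np

def baseline_measure(samples_mv):
    """Baseline measure for one sensed event: the product of the sum (mV)
    and the standard deviation (mV) of the far-field electrogram voltage
    samples in a 188-ms window centered on the event (units: mV-mV).
    The sum is taken in magnitude here; sign handling is an assumption."""
    s = np.asarray(samples_mv, dtype=float)
    return abs(s.sum()) * s.std()

def is_lead_failure(windows_mv, threshold=0.35, n_beats=12):
    """Declare lead-failure noise when the minimum baseline measure over
    the last n_beats sensed events falls below the threshold (mV-mV)."""
    measures = [baseline_measure(w) for w in windows_mv[-n_beats:]]
    return bool(min(measures) < threshold)
```

    During lead-failure oversensing, the far-field channel sits near the isoelectric line, so both the sum and the standard deviation are tiny and their product falls well under the threshold, whereas true ventricular tachyarrhythmias produce large deflections on every beat.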

  18. Artificial-neural-network-based failure detection and isolation

    NASA Astrophysics Data System (ADS)

    Sadok, Mokhtar; Gharsalli, Imed; Alouani, Ali T.

    1998-03-01

    This paper presents the design of a systematic failure detection and isolation system that uses the concept of failure sensitive variables (FSV) and artificial neural networks (ANN). The proposed approach was applied to tube leak detection in a utility boiler system. Results of the experimental testing are presented in the paper.

  19. Automatic patient respiration failure detection system with wireless transmission

    NASA Technical Reports Server (NTRS)

    Dimeff, J.; Pope, J. M.

    1968-01-01

    Automatic respiration failure detection system detects respiration failure in patients with a surgically implanted tracheostomy tube, and actuates an audible and/or visual alarm. The system incorporates a miniature radio transmitter so that the patient is unencumbered by wires yet can be monitored from a remote location.

  20. Design and evaluation of a failure detection and isolation algorithm for restructurable control systems

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.; Hsu, John Y.

    1986-01-01

    The use of a decentralized approach to failure detection and isolation for use in restructurable control systems is examined. This work has produced: (1) A method for evaluating fundamental limits to FDI performance; (2) Application using flight recorded data; (3) A working control element FDI system with maximal sensitivity to critical control element failures; (4) Extensive testing on realistic simulations; and (5) A detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.

  1. MeDICi Software Superglue for Data Analysis Pipelines

    ScienceCinema

    Ian Gorton

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to solve the data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust across multiple languages, protocols, and hardware platforms, and it is in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.

  2. GPS Spoofing Attack Characterization and Detection in Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Rick S.; Pradhan, Parth; Nagananda, Kyatsandra

    The problem of global positioning system (GPS) spoofing attacks on smart grids endowed with phasor measurement units (PMUs) is addressed, taking into account the dynamical behavior of the states of the system. First, it is shown how GPS spoofing introduces a timing synchronization error in the phasor readings recorded by the PMUs and alters the measurement matrix of the dynamical model. Then, a generalized likelihood ratio-based hypotheses testing procedure is devised to detect changes in the measurement matrix when the system is subjected to a spoofing attack. Monte Carlo simulations are performed on the 9-bus, 3-machine test grid to demonstrate the implication of the spoofing attack on dynamic state estimation and to analyze the performance of the proposed hypotheses test.

  3. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Wu, Yifu; Wei, Jin

    Distributed Energy Resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. As most of these generators are geographically dispersed, dedicated communications investments for every generator are capital cost prohibitive. Real-time distributed communications middleware, which supervises, organizes and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs, allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that effectively detects and mitigates congestion attacks by exploiting Quality of Experience (QoE) measures to complement the conventional Quality of Service (QoS) information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  4. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yifu; Wei, Jin; Hodge, Bri-Mathias

    Distributed energy resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. Because most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware - which supervises, organizes, and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs - allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that effectively detects and mitigates congestion attacks by exploiting quality of experience measures to complement the conventional quality of service information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  5. The cosmic web in CosmoGrid void regions

    NASA Astrophysics Data System (ADS)

    Rieder, Steven; van de Weygaert, Rien; Cautun, Marius; Beygu, Burcu; Portegies Zwart, Simon

    2016-10-01

    We study the formation and evolution of the cosmic web, using the high-resolution CosmoGrid ΛCDM simulation. In particular, we investigate the evolution of the large-scale structure around void halo groups, and compare this to observations of the VGS-31 galaxy group, which consists of three interacting galaxies inside a large void. The structure around such haloes shows a great deal of tenuous structure, with most of such systems being embedded in intra-void filaments and walls. We use the NEXUS+ algorithm to detect walls and filaments in CosmoGrid, and find them to be present and detectable at every scale. The void regions embed tenuous walls, which in turn embed tenuous filaments. We hypothesize that the void galaxy group of VGS-31 formed in such an environment.

  6. Enhancing the cyber-security of smart grids with applications to synchrophasor data

    NASA Astrophysics Data System (ADS)

    Pal, Seemita

    In the power grids, Supervisory Control and Data Acquisition (SCADA) systems are used as part of the Energy Management System (EMS) for enabling grid monitoring, control and protection. In recent times, with the ongoing installation of thousands of Phasor Measurement Units (PMUs), system operators are becoming increasingly reliant on PMU-generated synchrophasor measurements for executing wide-area monitoring and real-time control. The availability of PMU data facilitates dynamic state estimation of the system, thus improving the efficiency and resiliency of the grid. Since the SCADA and PMU data are used to make critical control decisions including actuation of physical systems, the timely availability and integrity of this networked data is of paramount importance. Absence or wrong control actions can potentially lead to disruption of operations, monetary loss, damage to equipments or surroundings or even blackout. This has posed new challenges to information security especially in this age of ever-increasing cyber-attacks. In this thesis, potential cyber-attacks on smart grids are presented and effective and implementable schemes are proposed for detecting them. The focus is mainly on three kinds of cyber-attacks and their detection: (i) gray-hole attacks on synchrophasor systems, (ii) PMU data manipulation attacks and (iii) data integrity attacks on SCADA systems. In the case of gray-hole attacks, also known as packet-drop attacks, the adversary may arbitrarily drop PMU data packets as they traverse the network, resulting in unavailability of time-sensitive data for the various critical power system applications. 
The fundamental challenge is to distinguish packets dropped by the adversary from those dropped naturally due to network congestion. The proposed gray-hole attack detection technique is based on exploiting the inherent timing information in the GPS time-stamped PMU data packets and using the temporal trends of the latencies to classify the cause of packet drops and finally detect attacks, if any. In the case of PMU data manipulation attacks, the attacker may modify the data in the PMU packets in order to bias the system states and influence the control center into taking wrong decisions. The proposed detection technique is based on evaluating the equivalent impedances of the transmission lines and classifying the observed anomalies to determine the presence of an attack and its location. The scheme for detecting data integrity attacks on SCADA systems is based on utilizing synchrophasor measurements from available PMUs in the grid. The proposed method uses a difference measure, developed in this thesis, to determine the relative divergence and mis-correlation between the datasets. Based on the estimated difference measure, tampered and genuine data can be distinguished. The proposed detection mechanisms have demonstrated high accuracy in real-time detection of attacks of various magnitudes, simulated on real PMU data obtained from the NY grid. By performing alarm clustering, the occurrence of false alarms has been reduced to almost zero. The solutions are computationally inexpensive, low in cost, do not add any overhead, and do not require any feedback from the network.

  7. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  8. Detection of new-onset choroidal neovascularization.

    PubMed

    Do, Diana V

    2013-05-01

    To highlight the most common methods that are used to detect new-onset choroidal neovascularization (CNV) as a result of age-related macular degeneration (AMD). Numerous modalities are available to try to detect CNV. Amsler grid testing, preferential hyperacuity perimetry (PHP), optical coherence tomography (OCT), and fluorescein angiography are tools that may be used to detect CNV. The Age-Related Macular Degeneration: Detection of Onset of new Choroidal neovascularization Study (AMD DOC Study) evaluated the sensitivity of time domain OCT, relative to fluorescein angiography, in detecting new-onset neovascular AMD within a 2-year period. The sensitivity of each modality for detecting CNV was OCT, 0.40 (95% confidence interval [CI], 0.16-0.68); supervised Amsler grid, 0.42 (95% CI, 0.15-0.72); and PHP, 0.50 (95% CI, 0.23-0.77). The prospective AMD DOC Study demonstrated that fluorescein angiography still remains the best method to detect new-onset CNV.

  9. TU-FG-201-09: Predicting Accelerator Dysfunction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, C; Nguyen, C; Baydush, A

    Purpose: To develop an integrated statistical process control (SPC) framework using digital performance and component data accumulated within the accelerator system that can detect dysfunction prior to unscheduled downtime. Methods: Seven digital accelerators were monitored for 12 to 18 months. The accelerators were operated in a ‘run to failure’ mode, with the individual institutions determining when service would be initiated. Institutions were required to submit detailed service reports. Trajectory and text log files resulting from a robust daily VMAT QA delivery were decoded and evaluated using Individual and Moving Range (I/MR) control charts. The SPC evaluation was presented in a customized dashboard interface that allows the user to review 525 monitored parameters (480 MLC parameters). Chart limits were calculated using a hybrid technique that includes the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. The individual (I) grand mean values and control limit ranges of the I/MR charts of all accelerators were compared using statistical (ranked analysis of variance [ANOVA]) and graphical analyses to determine the consistency of operating parameters. Results: When an alarm or warning was directly connected to field service, process control charts predicted dysfunction consistently on beam generation related parameters (BGP): RF Driver Voltage, Gun Grid Voltage, and Forward Power (W); beam uniformity parameters: angle and position steering coil currents; and the gantry position accuracy parameter: cross-correlation max-value. Control charts for individual MLC cross-correlation max-value/position detected 50% to 60% of MLCs serviced prior to dysfunction or failure. In general, non-random changes were detected 5 to 80 days prior to a service intervention. The ANOVA comparison of BGP determined that each accelerator parameter operated at a distinct value. Conclusion: The SPC framework shows promise. Long-term monitoring coordinated with service will be required to definitively determine the effectiveness of the model. Varian Medical Systems, Inc. provided funding in support of the research presented.
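
    The I/MR chart limits referred to above can be sketched with the standard textbook formulas, in which sigma is estimated from the mean moving range. This is a generic I/MR computation, not the paper's hybrid technique; the empirical specification-based factor it adds is not reproduced here, and all names are illustrative.

```python
def imr_limits(values):
    """Individual and Moving Range (I/MR) chart limits. Sigma is estimated
    from the mean moving range (MR-bar / d2, with d2 = 1.128 for n = 2),
    so the I-chart 3-sigma limits reduce to mean +/- 2.66 * MR-bar, and
    the MR-chart upper limit is 3.267 * MR-bar."""
    n = len(values)
    mean = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "i_center": mean,
        "i_ucl": mean + 2.66 * mr_bar,
        "i_lcl": mean - 2.66 * mr_bar,
        "mr_ucl": 3.267 * mr_bar,
    }

def out_of_control(values, limits):
    """Indices of individual values outside the I-chart limits."""
    return [i for i, v in enumerate(values)
            if v > limits["i_ucl"] or v < limits["i_lcl"]]
```

    In the monitoring setting described above, limits would be established from a stable baseline period and new daily QA values checked against them, with out-of-limit points raising warnings ahead of a service call.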

  10. 21 CFR 886.1330 - Amsler grid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Amsler grid. 886.1330 Section 886.1330 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES... the patient and intended to rapidly detect central and paracentral irregularities in the visual field...

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadi, Mohammad A. H.; Dasgupta, Dipankar; Ali, Mohammad Hassan

    The important backbone of the smart grid is the cyber/information infrastructure, which is primarily used to communicate with different grid components. A smart grid is a complex cyber-physical system containing numerous and varied sources, devices, controllers, and loads, and it is therefore vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink- and Opnet-based co-simulated platform to carry out cyber intrusions into the cyber network of modern power systems and the smart grid. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated testbed. The experiments were performed by disturbing the circuit breakers' reclosing times through a cyber-attack. Different disturbance situations in the test system are considered, and the results indicate the effectiveness of the proposed co-simulated scheme.

  12. Feasibility study on an integrated AEC-grid device for the optimization of image quality and exposure dose in mammography

    NASA Astrophysics Data System (ADS)

    Kim, Kyo-Tae; Yun, Ryang-Young; Han, Moo-Jae; Heo, Ye-Ji; Song, Yong-Keun; Heo, Sung-Wook; Oh, Kyeong-Min; Park, Sung-Kwang

    2017-10-01

    Currently, in the radiation diagnosis field, mammography is used for the early detection of breast cancer. In addition, studies are being conducted on grids to produce high-quality images. Although the grid ratio, which determines the scatter removal rate, must be increased to improve image quality, doing so increases the total exposure dose. While the use of automatic exposure control (AEC) is recommended to minimize this problem, in existing mammography equipment, unlike general radiography equipment, the AEC sensor is mounted on the back of the detector. The sensor is therefore greatly affected by the detector and its supporting device, and it is difficult to control the exposure dose. Accordingly, in this research, an integrated AEC-grid device that simultaneously performs AEC and grid functions was used to minimize the unnecessary exposure dose while removing scatter, thereby realizing superior image quality.

  13. Real-Time Occupancy Change Analyzer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The Real-Time Occupancy Change Analyzer (ROCA) produces an occupancy grid map of an environment around the robot, scans the environment to generate a current obstacle map relative to a current robot position, and converts the current obstacle map to a current occupancy grid map. Changes in the occupancy grid can be reported in real time to support a number of tracking capabilities. The benefit of ROCA is that rather than only providing a vector to the detected change, it provides the actual x,y position of the change.

  14. Achieving fast and stable failure detection in WDM Networks

    NASA Astrophysics Data System (ADS)

    Gao, Donghui; Zhou, Zhiyu; Zhang, Hanyi

    2005-02-01

    In dynamic networks, the failure detection time accounts for a major part of the convergence time, an important network performance index. To detect a node or link failure, traditional protocols, such as the Hello protocol in OSPF or RSVP, exchange keep-alive messages between neighboring nodes to track the link/node state. With default settings, however, the minimum detection time is on the order of tens of seconds, which cannot meet the demands of fast network convergence and failure recovery, and tuning the related parameters to reduce the detection time introduces notable instability problems. In this paper, we analyze the problem and design a new failure detection algorithm that reduces the network overhead of detection signaling. Our experiments show that stability can be enhanced effectively by implicitly treating other signaling messages as keep-alive messages. We evaluated our proposal and the previous approaches on an ASON testbed. The experimental results show that our algorithm outperforms previous schemes, with roughly an order-of-magnitude reduction in both false failure alarms and queuing delay for other messages, especially under light traffic load.
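
    The core idea of counting any received signaling message as an implicit keep-alive can be sketched in a few lines. The class and method names below are hypothetical and the paper's actual protocol machinery is not reproduced; the sketch only shows the liveness-timer bookkeeping under that assumption.

```python
import time

class FailureDetector:
    """Per-neighbor failure detector in which any received signaling message
    (not only explicit Hello/keep-alive packets) refreshes the liveness
    timer, so dedicated keep-alives are needed only when a link is idle."""

    def __init__(self, hold_time, clock=time.monotonic):
        self.hold_time = hold_time   # seconds without any message -> failure
        self.clock = clock
        self.last_heard = {}

    def on_message(self, neighbor, now=None):
        # Any message type counts as an implicit acknowledgment of liveness.
        self.last_heard[neighbor] = self.clock() if now is None else now

    def failed_neighbors(self, now=None):
        t = self.clock() if now is None else now
        return [n for n, last in self.last_heard.items()
                if t - last > self.hold_time]
```

    Because routing updates and other signaling refresh the timer, the hold time can be shortened for faster detection without the false alarms that a pure keep-alive scheme would generate under load.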

  15. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore being performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of the sensor measurements in the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed so that indications of sensor failure can be detected early. The method uses Principal Component Analysis (PCA) to reduce the dimensionality of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
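
    A minimal sketch of the PCA residual test described above, assuming training data from normal operation; the FASSIP sensor layout, component count, and alarm thresholds are not reproduced, and the function names are illustrative. A sample's Squared Prediction Error is the squared norm of its residual after projection onto the retained principal subspace, and it grows when a faulty sensor breaks the learned correlation structure.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA on normal-operation sensor data X (rows = samples).
    Returns the data mean and the loading matrix P."""
    mean = X.mean(axis=0)
    # Principal directions from the SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_components].T
    return mean, P

def spe(x, mean, P):
    """Squared Prediction Error: squared norm of the residual after
    projecting a sample onto the retained principal subspace."""
    xc = x - mean
    residual = xc - P @ (P.T @ xc)
    return float(residual @ residual)
```

    In practice a control limit for the SPE would be set from its distribution on the training data, with sustained exceedances flagged as a sensor failure indication.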

  16. Machine Learning Methods for Attack Detection in the Smart Grid.

    PubMed

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
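
    As a minimal illustration of the supervised batch setting described above, the sketch below trains a simple logistic-regression classifier to label measurement vectors as secure or attacked. This is a generic stand-in, not the paper's algorithms or its IEEE test-system experiments; the synthetic bias-injection data and all names are illustrative.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=1500):
    """Minimal batch-gradient-descent logistic regression for labeling
    measurement vectors as secure (0) or attacked (1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted attack probability
        g = p - y                                # cross-entropy loss gradient term
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    """Label 1 (attacked) when the decision function is positive."""
    return (X @ w + b > 0).astype(int)
```

    Trained on labeled examples of normal measurements and measurements with an injected bias vector, such a classifier learns a hyperplane separating the two classes; the paper's point is that richer learning machinery in the same mold can catch even attacks that are unobservable to state-estimation residual tests.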

  17. Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis on Over 10,000 Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Rice, Mark J.

    Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of the contingency analysis are used to ensure grid reliability, and in power market operation for the feasibility test of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a set of pre-selected contingencies, which might result in overlooking some critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load balancing schemes for a massive contingency analysis program on more than 10,000 cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. Speedups of 3,964 with 4,096 cores and 7,877 with 10,240 cores were obtained. This paper reports the performance of the load balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
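
    The single-counter scheme can be illustrated with threads standing in for compute cores: a shared counter hands out the next contingency case to whichever worker frees up first, so faster workers naturally take more cases. This is a sketch of the general technique under stated assumptions, not the paper's MPI implementation, and the function names are illustrative.

```python
import threading

def run_contingencies(n_cases, n_workers, analyze):
    """Counter-based dynamic load balancing: workers repeatedly grab the
    next unprocessed case index from a shared counter until all cases
    are done, so no static partition of cases is needed."""
    counter = [0]
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                 # atomically claim the next case index
                case = counter[0]
                counter[0] += 1
            if case >= n_cases:
                return
            results[case] = analyze(case)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    At scale the counter becomes a contention point, which is why the paper also examines a two-counter variant and disk I/O effects.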

  18. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.

  19. Groundwater-quality data in the Western San Joaquin Valley study unit, 2010 - Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Landon, Matthew K.; Shelton, Jennifer L.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the approximately 2,170-square-mile Western San Joaquin Valley (WSJV) study unit was investigated by the U.S. Geological Survey (USGS) from March to July 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The WSJV study unit was the twenty-ninth study unit to be sampled as part of the GAMA-PBP. The GAMA Western San Joaquin Valley study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated groundwater quality throughout California. The primary aquifer system is defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the WSJV study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the WSJV study unit, groundwater samples were collected from 58 wells in 2 study areas (Delta-Mendota subbasin and Westside subbasin) in Stanislaus, Merced, Madera, Fresno, and Kings Counties. Thirty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and 19 wells were selected to aid in the understanding of aquifer-system flow and related groundwater-quality issues (understanding wells). 
The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], low-level fumigants, and pesticides and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally occurring inorganic constituents (trace elements, nutrients, dissolved organic carbon [DOC], major and minor ions, silica, total dissolved solids [TDS], alkalinity, total arsenic and iron [unfiltered] and arsenic, chromium, and iron species [filtered]). Isotopic tracers (stable isotopes of hydrogen, oxygen, and boron in water, stable isotopes of nitrogen and oxygen in dissolved nitrate, stable isotopes of sulfur in dissolved sulfate, isotopic ratios of strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance), dissolved standard gases (methane, carbon dioxide, nitrogen, oxygen, and argon), and dissolved noble gases (argon, helium-4, krypton, neon, and xenon) were measured to help identify sources and ages of sampled groundwater. In total, 245 constituents and 8 water-quality indicators were measured. Quality-control samples (blanks, replicates, or matrix spikes) were collected at 16 percent of the wells in the WSJV study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples all were within acceptable limits of variability. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 87 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. 
After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 39 grid wells were detected at concentrations less than health-based benchmarks. Detections of organic and special-interest constituents from grid wells sampled in the WSJV study unit also were less than health-based benchmarks. In total, VOCs were detected in 12 of the 39 grid wells sampled (approximately 31 percent), pesticides and pesticide degradates were detected in 9 grid wells (approximately 23 percent), and perchlorate was detected in 15 grid wells (approximately 38 percent). Trace elements, major and minor ions, and nutrients were sampled at 39 grid wells; most concentrations were less than health-based benchmarks. Exceptions include two detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L), 20 detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L, 2 detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L, 1 detection of selenium greater than the MCL-US of 50 μg/L, 2 detections of strontium greater than the HAL-US of 4,000 μg/L, and 3 detections of nitrate greater than the MCL-US of 10 milligrams per liter (mg/L). 
Results for inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, sulfate, and TDS) showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in five grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 16 grid wells. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 milligrams per liter (mg/L) were detected in 14 grid wells, and concentrations in 5 of these wells also were greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in 21 grid wells, and concentrations in 13 of these wells also were greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 36 grid wells, and concentrations in 20 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  20. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE PAGES

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef; ...

    2016-10-01

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting GTRF wear and the concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid-structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made toward achieving a predictive simulation capability for the GTRF problem.

  1. Applying artificial neural networks to predict communication risks in the emergency department.

    PubMed

    Bagnasco, Annamaria; Siri, Anna; Aleo, Giuseppe; Rocco, Gennaro; Sasso, Loredana

    2015-10-01

    To describe the utility of artificial neural networks in predicting communication risks. In health care, effective communication reduces the risk of error. Therefore, it is important to identify the predictive factors of effective communication. Non-technical skills are needed to achieve effective communication. This study explores how artificial neural networks can be applied to predict the risk of communication failures in emergency departments. A multicentre observational study. Data were collected between March-May 2011 by observing the communication interactions of 840 nurses with their patients during their routine activities in emergency departments. The tools used for our observation were a questionnaire to collect personal and descriptive data, level of training and experience, and Guilbert's observation grid, applying the Situation-Background-Assessment-Recommendation technique to communication in emergency departments. A total of 840 observations were made on the nurses working in the emergency departments. Based on Guilbert's observation grid, the output variables likely to influence the risk of communication failure were 'terminology', 'listening', 'attention' and 'clarity', whereas nurses' personal characteristics were used as input variables in the artificial neural network model. A model based on the multilayer perceptron topology was developed and trained. The receiver operating characteristic analysis confirmed that the artificial neural network model correctly predicted more than 80% of the communication failures. The application of the artificial neural network model could offer a valid tool to forecast and prevent harmful communication errors in the emergency department. © 2015 John Wiley & Sons Ltd.
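A multilayer perceptron of the kind the abstract describes can be sketched on synthetic data (features, labels, and architecture are all invented stand-ins, not the study's variables):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data standing in for nurse-level features
# (e.g. experience, training scores) and a binary communication-failure
# label generated by a simple planted rule -- NOT the study's dataset.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One-hidden-layer perceptron trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    p = sigmoid(h @ W2 + b2)           # predicted failure probability
    eps = 1e-9
    losses.append(float(-np.mean(y * np.log(p + eps)
                                 + (1 - y) * np.log(1 - p + eps))))
    dz = (p - y) / len(X)              # cross-entropy gradient wrt logits
    dW2 = h.T @ dz; db2 = dz.sum(0)
    dh = dz @ W2.T * (1 - h ** 2)      # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
```

On this separable toy rule the training loss falls steadily and the classifier reaches high accuracy; the study's reported figure of over 80% correct predictions is of course a property of its own data, not of this sketch.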

  2. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting GTRF wear and the concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid-structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made toward achieving a predictive simulation capability for the GTRF problem.

  3. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
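The patent abstract does not disclose the exact maps used; a minimal illustration of the underlying idea, that chaotic trajectories amplify even a tiny arithmetic discrepancy into an easily detected divergence, might look like this (the logistic map, fault size, and comparison rule are assumptions for the sketch):

```python
def logistic_trajectory(x0, n, r=3.99):
    """Generate a chaotic logistic-map trajectory from seed x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def node_compute(x0, n, fault=False):
    """Simulate one compute node running the same map; a faulty
    arithmetic unit perturbs the first multiplication, and the
    chaotic dynamics amplify that single tiny error."""
    xs = [x0]
    for i in range(n):
        x = 3.99 * xs[-1] * (1.0 - xs[-1])
        if fault and i == 0:
            x += 1e-12        # a single minuscule arithmetic error
        xs.append(x)
    return xs

reference = logistic_trajectory(0.3, 60)
healthy = node_compute(0.3, 60, fault=False)
faulty = node_compute(0.3, 60, fault=True)

healthy_dev = max(abs(a - b) for a, b in zip(reference, healthy))
faulty_dev = max(abs(a - b) for a, b in zip(reference, faulty))
```

A healthy node reproduces the reference trajectory bit-for-bit, while the 1e-12 error grows exponentially (roughly doubling per iteration near full chaos) until it is macroscopic, so a simple trajectory comparison flags the failed component.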

  4. Failure detection and isolation investigation for strapdown skew redundant tetrad laser gyro inertial sensor arrays

    NASA Technical Reports Server (NTRS)

    Eberlein, A. J.; Lahm, T. G.

    1976-01-01

    The degree to which flight-critical failures in a strapdown laser gyro tetrad sensor assembly can be isolated in short-haul aircraft after a failure occurrence has been detected by the skewed sensor failure-detection voting logic is investigated along with the degree to which a failure in the tetrad computer can be detected and isolated at the computer level, assuming a dual-redundant computer configuration. The tetrad system was mechanized with two two-axis inertial navigation channels (INCs), each containing two gyro/accelerometer axes, computer, control circuitry, and input/output circuitry. Gyro/accelerometer data is crossfed between the two INCs to enable each computer to independently perform the navigation task. Computer calculations are synchronized between the computers so that calculated quantities are identical and may be compared. Fail-safe performance (identification of the first failure) is accomplished with a probability approaching 100 percent of the time, while fail-operational performance (identification and isolation of the first failure) is achieved 93 to 96 percent of the time.

  5. Multi-agent coordination algorithms for control of distributed energy resources in smart grids

    NASA Astrophysics Data System (ADS)

    Cortes, Andres

    Sustainable energy is a top priority for researchers today, since electricity and transportation are pillars of modern society. Integration of clean energy technologies such as wind, solar, and plug-in electric vehicles (PEVs), is a major engineering challenge in operation and management of power systems. This is due to the uncertain nature of renewable energy technologies and the large amount of extra load that PEVs would add to the power grid. Given the networked structure of a power system, multi-agent control and optimization strategies are natural approaches to address the various problems of interest for the safe and reliable operation of the power grid. The distributed computation in multi-agent algorithms addresses three problems at the same time: i) it allows for the handling of problems with millions of variables that a single processor cannot compute, ii) it allows certain independence and privacy to electricity customers by not requiring any usage information, and iii) it is robust to localized failures in the communication network, being able to solve problems by simply neglecting the failing section of the system. We propose various algorithms to coordinate storage, generation, and demand resources in a power grid using multi-agent computation and decentralized decision making. First, we introduce a hierarchical vehicle-one-grid (V1G) algorithm for coordination of PEVs under usage constraints, where energy only flows from the grid into the batteries of PEVs. We then present a hierarchical vehicle-to-grid (V2G) algorithm for PEV coordination that takes into consideration line capacity constraints in the distribution grid, and where energy flows both ways, from the grid into the batteries, and from the batteries to the grid. Next, we develop a greedy-like hierarchical algorithm for management of demand response events with on/off loads. 
Finally, we introduce distributed algorithms for the optimal control of distributed energy resources, i.e., generation and storage in a microgrid. The algorithms we present are provably correct and tested in simulation. Each algorithm is assumed to work on a particular network topology, and simulation studies are carried out in order to demonstrate their convergence properties to a desired solution.

  6. Control system failure monitoring using generalized parity relations. M.S. Thesis Interim Technical Report

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan Mauritz

    1991-01-01

    Many applications require that a control system be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of the control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast, are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all the cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
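The core step the abstract describes, identifying a parity relation directly from input-output data and using its residual for detection, can be sketched as follows (the first-order model, bias failure, and thresholds are invented for illustration, not the Mini-Mast setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated healthy input-output data from a first-order system
# y[k] = 0.9*y[k-1] + 0.5*u[k-1] (a stand-in for one sensor channel).
u = rng.normal(size=300)
y = np.zeros(300)
for k in range(1, 300):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]

# Identify the parity relation directly from input-output data:
# find coefficients (a, b) such that y[k] - a*y[k-1] - b*u[k-1] ~ 0.
A = np.column_stack([y[:-1], u[:-1]])
coeff, *_ = np.linalg.lstsq(A, y[1:], rcond=None)

# Residual generator: near zero while the sensor is healthy, nonzero
# once a simulated sensor bias failure is injected at sample 200.
y_faulty = y.copy()
y_faulty[200:] += 2.0
resid = y_faulty[1:] - coeff[0] * y_faulty[:-1] - coeff[1] * u[:-1]
```

On noiseless data the least-squares fit recovers the relation essentially exactly, so the residual jumps by the full bias magnitude at the failure onset; with measurement noise a threshold would be chosen against the residual's healthy variance.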

  7. Anomaly Detection Using Optimally-Placed μPMU Sensors in Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamei, Mahdi; Scaglione, Anna; Roberts, Ciaran

    As the distribution grid moves toward a tightly-monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase the operators' situational awareness about the system. Here, focusing on Micro-Phasor Measurement Unit (μPMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Because of the key role of the μPMU devices in our architecture, a source-constrained optimal μPMU placement is also described that finds the best location of the devices with respect to our rules. The effectiveness of the proposed methods is tested on synthetic and real μPMU data.

  8. Rapid detection of Salmonella spp. in food by use of the ISO-GRID hydrophobic grid membrane filter.

    PubMed Central

    Entis, P; Brodsky, M H; Sharpe, A N; Jarvis, G A

    1982-01-01

    A rapid hydrophobic grid-membrane filter (HGMF) method was developed and compared with the Health Protection Branch cultural method for the detection of Salmonella spp. in 798 spiked samples and 265 naturally contaminated samples of food. With the HGMF method, Salmonella spp. were isolated from 618 of the spiked samples and 190 of the naturally contaminated samples. The conventional method recovered Salmonella spp. from 622 spiked samples and 204 unspiked samples. The isolation rates from Salmonella-positive samples for the two methods were not significantly different (94.6% overall for the HGMF method and 96.7% for the conventional approach), but the HGMF results were available in only 2 to 3 days after sample receipt compared with 3 to 4 days by the conventional method. PMID:7059168

  9. Anomaly Detection Using Optimally-Placed μPMU Sensors in Distribution Grids

    DOE PAGES

    Jamei, Mahdi; Scaglione, Anna; Roberts, Ciaran; ...

    2017-10-25

    As the distribution grid moves toward a tightly-monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase the operators' situational awareness about the system. Here, focusing on Micro-Phasor Measurement Unit (μPMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Because of the key role of the μPMU devices in our architecture, a source-constrained optimal μPMU placement is also described that finds the best location of the devices with respect to our rules. The effectiveness of the proposed methods is tested on synthetic and real μPMU data.

  10. Method, memory media and apparatus for detection of grid disconnect

    DOEpatents

    Ye, Zhihong [Clifton Park, NY; Du, Pengwei [Troy, NY

    2008-09-23

    A phase shift procedure for detecting a disconnect of a power grid from a feeder that is connected to a load and a distributed generator. The phase shift procedure compares the current phase shift of the output voltage of the distributed generator with a predetermined threshold and, if it is greater, a command is issued to disconnect the distributed generator from the feeder. To extend the range of detection, the phase shift procedure is used when the power mismatch between the distributed generator and the load exceeds a threshold, and one or both of an under/over-frequency procedure and an under/over-voltage procedure are used when the power mismatch does not exceed the threshold.
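The two-regime logic of the abstract can be sketched as follows (the thresholds and frequency band are illustrative assumptions, not values from the patent):

```python
def grid_disconnect_command(phase_shift_deg, power_mismatch_kw, freq_hz,
                            phase_thresh=10.0, mismatch_thresh=50.0,
                            freq_band=(59.3, 60.5)):
    """Sketch of the patent's two-regime islanding detection:
    - large generator/load power mismatch: rely on the phase-shift test;
    - small mismatch (phase shift insensitive): fall back to an
      under/over-frequency check on the measured feeder frequency.
    Returns True when a disconnect command should be issued."""
    if abs(power_mismatch_kw) > mismatch_thresh:
        return abs(phase_shift_deg) > phase_thresh
    return not (freq_band[0] <= freq_hz <= freq_band[1])
```

The fallback exists because with a near-perfect generation/load balance an islanded feeder shows little phase drift, so frequency (or voltage) excursions become the more sensitive signature.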

  11. Novel Concept for Flexible and Resilient Large Power Transformers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhyay, Parag; Englebretson, Steven; Ramanan, V. R. R.

    This feasibility study investigates a flexible and adaptable large power transformer (LPT) design solution which can facilitate long-term replacement in the event of both catastrophic failures and scheduled replacements, thereby increasing grid resilience. The scope of this project has been defined based on an initial system study and identification of the transformer requirements from an overall system load flow perspective.

  12. Reliability Assessment of Critical Electronic Components

    DTIC Science & Technology

    1992-07-01


  13. Composite Grids for Reinforcement of Concrete Structures.

    DTIC Science & Technology

    1998-06-01

    ...to greater compressive loads before induced shear failure occurs. Concrete columns were tested in compression to explore alternative... columns were tested on the same day as the fiber-reinforced concrete columns. Load/deflection readings were taken with the load cell to determine the... Figure 78. Ultimate load vs toughness for the different beam types tested.

  14. NASA DOD Lead Free Electronics Project

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt R.

    2008-01-01

    The primary technical objective of this project is to undertake comprehensive testing to generate information on failure modes/criteria to better understand the reliability of: packages (e.g., Thin Small Outline Package [TSOP], Ball Grid Array [BGA], Plastic Dual In-line Package [PDIP]) assembled and reworked with lead-free alloys, and packages (e.g., TSOP, BGA, PDIP) assembled and reworked with mixed (lead/lead-free) alloys.

  15. Dynamic Modeling and Grid Interaction of a Tidal and River Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Gevorgian, Vahan; Donegan, James

    This presentation provides a high-level overview of the deployment of a river generator installed in a small system. The turbine dynamics of a river generator, electrical generator, and power converter are modeled in detail. Various simulations can be exercised, and the impact of different control algorithms, failures of power switches, and corresponding impacts can be examined.

  16. Cascading Failures as Continuous Phase-Space Transitions

    DOE PAGES

    Yang, Yang; Motter, Adilson E.

    2017-12-14

    In network systems, a local perturbation can amplify as it propagates, potentially leading to a large-scale cascading failure. We derive a continuous model to advance our understanding of cascading failures in power-grid networks. The model accounts for both the failure of transmission lines and the desynchronization of power generators and incorporates the transient dynamics between successive steps of the cascade. In this framework, we show that a cascade event is a phase-space transition from an equilibrium state with high energy to an equilibrium state with lower energy, which can be suitably described in closed form using a global Hamiltonian-like function. From this function, we show that a perturbed system cannot always reach the equilibrium state predicted by quasi-steady-state cascade models, which would correspond to a reduced number of failures, and may instead undergo a larger cascade. We also show that, in the presence of two or more perturbations, the outcome depends strongly on the order and timing of the individual perturbations. These results offer new insights into the current understanding of cascading dynamics, with potential implications for control interventions.

  17. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

    Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.

  18. Cascading Failures as Continuous Phase-Space Transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Motter, Adilson E.

    In network systems, a local perturbation can amplify as it propagates, potentially leading to a large-scale cascading failure. We derive a continuous model to advance our understanding of cascading failures in power-grid networks. The model accounts for both the failure of transmission lines and the desynchronization of power generators and incorporates the transient dynamics between successive steps of the cascade. In this framework, we show that a cascade event is a phase-space transition from an equilibrium state with high energy to an equilibrium state with lower energy, which can be suitably described in closed form using a global Hamiltonian-like function. From this function, we show that a perturbed system cannot always reach the equilibrium state predicted by quasi-steady-state cascade models, which would correspond to a reduced number of failures, and may instead undergo a larger cascade. We also show that, in the presence of two or more perturbations, the outcome depends strongly on the order and timing of the individual perturbations. These results offer new insights into the current understanding of cascading dynamics, with potential implications for control interventions.

  19. Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data

    PubMed Central

    Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-01-01

    Because solder joint interconnections are the weak points of microelectronic packaging, their reliability has great influence on the reliability of the entire packaging structure. Based on an accelerated life test, the reliability assessment and life prediction of lead-free solder joints using the Weibull distribution are investigated. The type-I interval-censored lifetime data were collected from a thermal cycling test, which was implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of lead-free solder joints is predicted by using a modified Engelmaier fatigue life model and a type-I censored data processing method. Then, the Pan model is employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and those calculated directly by Matlab and Minitab is conducted to demonstrate the practicability and effectiveness of the proposed method. Finally, failure analysis and microstructure evolution of lead-free solders are carried out to provide useful guidance for regular maintenance, replacement of substructures, and subsequent processing of electronic products. PMID:25121138
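The type-I interval-censoring idea can be illustrated with a hedged sketch: fitting a Weibull model by maximum likelihood when each failure is known only to lie within an inspection interval (the inspection counts, parameter ranges, and crude grid search are invented for illustration; the paper's actual method differs):

```python
import numpy as np

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def interval_censored_loglik(shape, scale, intervals, n_survive, t_end):
    """Log-likelihood for type-I interval-censored data: each failure
    contributes F(hi) - F(lo) for its inspection window (lo, hi];
    units still alive at t_end contribute the survival term S(t_end)."""
    ll = 0.0
    for lo, hi, count in intervals:
        p = weibull_cdf(hi, shape, scale) - weibull_cdf(lo, shape, scale)
        ll += count * np.log(max(p, 1e-300))
    ll += n_survive * (-(t_end / scale) ** shape)   # log S(t_end)
    return ll

# Hypothetical thermal-cycling inspection data: counts of solder joints
# first found failed in each 500-cycle window; 40 still alive at 2000.
intervals = [(0, 500, 5), (500, 1000, 14), (1000, 1500, 22), (1500, 2000, 19)]
n_survive, t_end = 40, 2000

# Crude maximum-likelihood fit by grid search over (shape, scale).
shapes = np.linspace(0.5, 5.0, 91)
scales = np.linspace(500.0, 4000.0, 141)
best = max(((interval_censored_loglik(b, s, intervals, n_survive, t_end), b, s)
            for b in shapes for s in scales))
_, shape_hat, scale_hat = best
```

For these toy counts the fitted shape parameter comes out near 2, i.e., an increasing hazard consistent with wear-out failure, which is the qualitative behavior expected of thermally cycled solder joints.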

  20. Distributed analysis functional testing using GangaRobot in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Legger, Federica; ATLAS Collaboration

    2011-12-01

    Automated distributed analysis tests are necessary to ensure smooth operation of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the workload on the available resources.
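The blacklist step can be sketched with a hedged toy policy (site names, thresholds, and the aggregation rule are illustrative, not GangaRobot's actual configuration):

```python
from collections import defaultdict

def build_blacklist(test_results, max_failure_rate=0.5, min_tests=3):
    """Aggregate functional-test outcomes per site and blacklist sites
    whose observed failure rate exceeds a threshold, ignoring sites
    with too few tests to judge (an illustrative policy)."""
    stats = defaultdict(lambda: [0, 0])          # site -> [failed, total]
    for site, ok in test_results:
        stats[site][0] += 0 if ok else 1
        stats[site][1] += 1
    return sorted(site for site, (failed, total) in stats.items()
                  if total >= min_tests and failed / total > max_failure_rate)

# Toy test log: SITE_B fails 4 of 5 runs; SITE_C has too few runs to judge.
results = ([("SITE_A", True)] * 5 + [("SITE_B", False)] * 4
           + [("SITE_B", True)] + [("SITE_C", False)])
blacklist = build_blacklist(results)
```

Requiring a minimum number of tests before blacklisting is one way to separate systematic site failures from one-off transient errors, which is the distinction the abstract emphasizes.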

  1. Development of a fountain detector for spectroscopy of secondary electrons in scanning electron microscopy

    NASA Astrophysics Data System (ADS)

    Agemura, Toshihide; Kimura, Takashi; Sekiguchi, Takashi

    2018-04-01

    The low-pass secondary electron (SE) detector, the so-called “fountain detector (FD)”, for scanning electron microscopy has high potential for application to the imaging of low-energy SEs. Low-energy SE imaging may be used for detecting the surface potential variations of a specimen. However, the detected SEs include a certain fraction of tertiary electrons (SE3s) because some of the high-energy backscattered electrons hit the grid to yield SE3s. We have overcome this difficulty by increasing the aperture ratio of the bias and ground grids and using the lock-in technique, in which the AC field with the DC offset was applied on the bias grid. The energy-filtered SE images of a 4H-SiC p-n junction show complex behavior according to the grid bias. These observations are clearly explained by the variations of Auger spectra across the p-n junction. The filtered SE images taken with the FD can be applied to observing the surface potential variation of specimens.

  2. High pressure xenon ionization detector

    DOEpatents

    Markey, J.K.

    1989-11-14

    A method is provided for detecting ionization comprising allowing particles that cause ionization to contact high pressure xenon maintained at or near its critical point and measuring the amount of ionization. An apparatus is provided for detecting ionization, the apparatus comprising a vessel containing an ionizable medium, the vessel having an inlet to allow high pressure ionizable medium to enter the vessel, a means to permit particles that cause ionization of the medium to enter the vessel, an anode, a cathode, a grid and a plurality of annular field shaping rings, the field shaping rings being electrically isolated from one another, the anode, cathode, grid and field shaping rings being electrically isolated from one another in order to form an electric field between the cathode and the anode, the electric field originating at the anode and terminating at the cathode, the grid being disposed between the cathode and the anode, the field shaping rings being disposed between the cathode and the grid, the improvement comprising the medium being xenon and the vessel being maintained at a pressure of 50 to 70 atmospheres and a temperature of 0 to 30 °C. 2 figs.

  3. High pressure xenon ionization detector

    DOEpatents

    Markey, John K.

    1989-01-01

    A method is provided for detecting ionization comprising allowing particles that cause ionization to contact high pressure xenon maintained at or near its critical point and measuring the amount of ionization. An apparatus is provided for detecting ionization, the apparatus comprising a vessel containing an ionizable medium, the vessel having an inlet to allow high pressure ionizable medium to enter the vessel, a means to permit particles that cause ionization of the medium to enter the vessel, an anode, a cathode, a grid and a plurality of annular field shaping rings, the field shaping rings being electrically isolated from one another, the anode, cathode, grid and field shaping rings being electrically isolated from one another in order to form an electric field between the cathode and the anode, the electric field originating at the anode and terminating at the cathode, the grid being disposed between the cathode and the anode, the field shaping rings being disposed between the cathode and the grid, the improvement comprising the medium being xenon and the vessel being maintained at a pressure of 50 to 70 atmospheres and a temperature of 0° to 30° C.

  4. Optimally robust redundancy relations for failure detection in uncertain systems

    NASA Technical Reports Server (NTRS)

    Lou, X.-C.; Willsky, A. S.; Verghese, G. C.

    1986-01-01

    All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
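
A minimal sketch of the parity-space idea behind this record: redundancy relations are left null-space vectors of the measurement matrix (z = Cx), and an SVD yields candidate relations ordered by how well they annihilate the model. The matrix and fault values here are invented; the paper's actual optimization over model uncertainties is more involved.

```python
import numpy as np

C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # 3 sensors measuring 2 states

U, s, Vt = np.linalg.svd(C)       # full SVD: U is 3x3, s has 2 values
# The first rank(C) columns of U span the column space; the remaining
# left singular vector spans the left null space. In the robust version,
# smaller singular values correspond to more robust relations.
w = U[:, 2]                        # parity vector: w @ C = 0

x = np.array([3.0, -1.0])
z = C @ x                          # consistent (fault-free) measurements
residual_ok = float(w @ z)         # ~0: redundancy relation satisfied

z_faulty = z + np.array([0.0, 0.5, 0.0])   # bias fault on sensor 2
residual_fault = float(w @ z_faulty)       # nonzero: fault is exposed
```

The parity residual is insensitive to the true state but responds to sensor faults, which is exactly the redundancy property the abstract refers to.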

  5. Cyber attacks against state estimation in power systems: Vulnerability analysis and protection strategies

    NASA Astrophysics Data System (ADS)

    Liu, Xuan

    The power grid is one of the most critical infrastructures in a nation and could suffer a variety of cyber attacks. With the development of the Smart Grid, the false data injection attack has recently attracted wide research interest. This thesis proposes a false data attack model with incomplete network information and develops optimal attack strategies for attacking load measurements and the real-time topology of a power grid. The impacts of false data on the economic and reliable operations of power systems are quantitatively analyzed in this thesis. To mitigate the risk of cyber attacks, distributed protection strategies are also developed. It has been shown that an attacker can design false data to avoid being detected by the control center if the network information of a power grid is known to the attacker. In practice, however, it is very hard or even impossible for an attacker to obtain all network information of a power grid. In this thesis, we propose a local load redistribution attack model based on incomplete network information and show that an attacker only needs to obtain the network information of the local attacking region to inject false data into smart meters in the local region without being detected by the state estimator. A heuristic algorithm is developed to determine a feasible attacking region by obtaining reduced network information. This thesis investigates the impacts of false data on the operations of power systems. It has been shown that false data can be designed by an attacker to: 1) mask the real-time topology of a power grid; 2) overload a transmission line; 3) disturb the line outage detection based on PMU data. To mitigate the risk of cyber attacks, this thesis proposes a new protection strategy, which intends to mitigate the damage effects of false data injection attacks by protecting a small set of critical measurements.
To further reduce the computational complexity, a mixed integer linear programming approach is also proposed to separate the power grid into several subnetworks; the distributed protection strategy is then applied to each subnetwork.

  6. Sensor failure and multivariable control for airbreathing propulsion systems. Ph.D. Thesis - Dec. 1979 Final Report

    NASA Technical Reports Server (NTRS)

    Behbehani, K.

    1980-01-01

    A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed, and the effects of model degradation are studied.

  7. Age of Palos Verdes submarine debris avalanche, southern California

    USGS Publications Warehouse

    Normark, W.R.; McGann, M.; Sliter, R.

    2004-01-01

    The Palos Verdes debris avalanche is the largest, by volume, late Quaternary mass-wasted deposit recognized from the inner California Borderland basins. Early workers speculated that the sediment failure giving rise to the deposit is young, taking place well after sea level reached its present position. A newly acquired, closely-spaced grid of high-resolution, deep-tow boomer profiles of the debris avalanche shows that the Palos Verdes debris avalanche fills a turbidite leveed channel that extends seaward from San Pedro Sea Valley, with the bulk of the avalanche deposit appearing to result from a single failure on the adjacent slope. Radiocarbon dates from piston-cored sediment samples acquired near the distal edge of the avalanche deposit indicate that the main failure took place about 7500 yr BP. ?? 2003 Elsevier B.V. All rights reserved.

  8. Analysis of lead-acid battery accelerated testing data

    NASA Astrophysics Data System (ADS)

    Clifford, J. E.; Thomas, R. E.

    1983-06-01

    Battelle conducted an independent review and analysis of the accelerated test procedures and test data obtained by Exide in the 3-year Phase 1 program to develop advanced lead-acid batteries for utility load leveling. Of special importance is the extensive data obtained in deep-discharge cycling tests on 60 cells at elevated temperatures over a 2-1/2 year period. The principal uncertainty in estimating cell life relates to projecting cycle life data at elevated temperature to the lower operating temperatures. The accelerated positive-grid corrosion test, involving continuous overcharge at 50 °C, provided some indication of the degree of grid corrosion that might be tolerable before failure. The accelerated positive-material shedding test was not examined in any detail. Recommendations are made for additional studies.
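
Projecting elevated-temperature test life down to operating temperature, as this record discusses, is commonly done with an Arrhenius acceleration factor. The sketch below is illustrative only; the activation energy and cycle counts are assumed values, not figures from the Exide/Battelle data.

```python
import math

K_BOLTZ_EV = 8.617e-5   # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    # Acceleration factor between a stress temperature and a use
    # temperature for a thermally activated degradation mechanism.
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZ_EV) * (1.0 / t_use - 1.0 / t_stress))

# Assumed activation energy of 0.6 eV; stress test at 50 C, use at 25 C.
af = arrhenius_af(ea_ev=0.6, t_use_c=25.0, t_stress_c=50.0)

life_at_stress_cycles = 800.0            # hypothetical measured cycle life
projected_life = life_at_stress_cycles * af   # projected life at 25 C
```

The projection is only as good as the activation-energy estimate, which is precisely the "principal uncertainty" the abstract points to.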

  9. Large Area Coverage of a TPC Endcap with GridPix Detectors

    NASA Astrophysics Data System (ADS)

    Kaminski, Jochen

    2018-02-01

    The Large Prototype TPC at DESY, Hamburg, was built by the LCTPC collaboration as a testbed for new readout technologies of Time Projection Chambers. Up to seven modules of about 400 cm2 each can be placed in the endcap. Three of these modules were equipped with a total of 160 GridPix detectors. This is a combination of a highly pixelated readout ASIC and a Micromegas built on top. GridPix detectors have a very high efficiency of detecting primary electrons, which leads to excellent spatial and energy resolutions. For the first time a large number of GridPix detectors has been operated and long segments of tracks have been recorded with excellent precision.

  10. Recent Results on Gridpix Detectors: An Integrated Micromegas Grid and a Micromegas Ageing Test

    NASA Astrophysics Data System (ADS)

    Chefdeville, M.; Aarts, A.; van der Graaf, H.; van der Putten, S.

    2006-04-01

    A new gas-filled detector combining a Micromegas with a CMOS pixel chip has recently been tested. A procedure to integrate the Micromegas grid onto silicon wafers (‘wafer post-processing’) has been developed. We aim to eventually integrate the grid on top of wafers of CMOS pixel chips. The first part of this contribution describes an application in vertex detection (GOSSIP). Then tests of the first detector prototype, a grid integrated on a bare silicon wafer, are shown. Finally an ageing test of a Micromegas chamber is presented. After verifying the chamber's proportionality at very high dose rates, the device was irradiated until ageing became apparent.

  11. Grid Stability Awareness System (GSAS) Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feuerborn, Scott; Ma, Jian; Black, Clifton

    The project team developed a software suite named Grid Stability Awareness System (GSAS) for power system near real-time stability monitoring and analysis based on synchrophasor measurements. The software suite consists of five analytical tools: an oscillation monitoring tool, a voltage stability monitoring tool, a transient instability monitoring tool, an angle difference monitoring tool, and an event detection tool. These tools have been integrated into one framework to provide power grid operators with both real-time and near real-time stability status of a power grid and historical information about system stability status. These tools are being considered for real-time use in the operation environment.

  12. Sensor failure detection system. [for the F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Beattie, E. C.; Laprad, R. F.; Mcglone, M. E.; Rock, S. M.; Akhter, M. M.

    1981-01-01

    Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon such techniques as Kalman filters, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum of squared residuals technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguration of the normal-mode Kalman filter, eliminating the failed input, to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique and was shown to be a viable approach for detecting, isolating, and accommodating sensor failures in gas turbine applications.
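
A minimal, hypothetical sketch of the residual-based soft-failure detection idea in this record: a scalar Kalman filter tracks a constant measured quantity, and a weighted sum of squared residuals (WSSR) over a sliding window flags a sensor bias. All numbers (noise levels, bias size, threshold) are invented for illustration.

```python
import random

random.seed(1)
TRUE_VALUE, R = 10.0, 0.25          # truth and measurement noise variance
x_hat, P = 0.0, 100.0               # filter estimate and covariance
WINDOW_N, THRESH = 10, 35.0         # window size; ~chi-square(10) tail
window = []

first_alarm = None
for k in range(120):
    bias = 3.0 if k >= 60 else 0.0          # soft sensor failure at k = 60
    z = TRUE_VALUE + bias + random.gauss(0.0, 0.5)
    S = P + R                               # innovation variance
    r = z - x_hat                           # innovation (residual)
    K = P / S                               # Kalman gain
    x_hat += K * r
    P *= (1.0 - K)
    window.append(r * r / S)                # normalized squared residual
    if len(window) > WINDOW_N:
        window.pop(0)
    if first_alarm is None and len(window) == WINDOW_N \
            and sum(window) > THRESH:
        first_alarm = k                     # WSSR exceeds threshold
```

Before the failure the WSSR behaves like a chi-square variable with WINDOW_N degrees of freedom; the injected bias drives it well past the threshold within a sample or two of the failure onset.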

  13. Fault detection and identification in missile system guidance and control: a filtering approach

    NASA Astrophysics Data System (ADS)

    Padgett, Mary Lou; Evers, Johnny; Karplus, Walter J.

    1996-03-01

    Real-world applications of computational intelligence can enhance the fault detection and identification capabilities of a missile guidance and control system. A simulation of a bank-to-turn missile demonstrates that actuator failure may cause the missile to roll and miss the target. Failure of one fin actuator can be detected using a filter and depicting the filter output as fuzzy numbers. The properties and limitations of artificial neural networks fed by these fuzzy numbers are explored. A suite of networks is constructed to (1) detect a fault and (2) determine which fin (if any) failed. Both the zero-order moment term and the fin rate term show changes during actuator failure. Simulations address the following questions: (1) How severe must the actuator failure be for detection to occur? (2) How severe must it be for fault detection and isolation to occur? (3) Are both the zero-order moment and fin rate terms needed? A suite of target trajectories is simulated, and the properties and limitations of the approach are reported. In some cases, detection of the failed actuator occurs within 0.1 second, and isolation of the failure occurs 0.1 second after that. Suggestions for further research are offered.

  14. Ground-Water Quality Data in the Southeast San Joaquin Valley, 2005-2006 - Results from the California GAMA Program

    USGS Publications Warehouse

    Burton, Carmen A.; Belitz, Kenneth

    2008-01-01

    Ground-water quality in the approximately 3,800 square-mile Southeast San Joaquin Valley study unit (SESJ) was investigated from October 2005 through February 2006 as part of the Priority Basin Assessment Project of the Ground-Water Ambient Monitoring and Assessment (GAMA) Program. The GAMA Statewide Basin Assessment project was developed in response to the Ground-Water Quality Monitoring Act of 2001 and is being conducted by the California State Water Resources Control Board (SWRCB) in collaboration with the U.S. Geological Survey (USGS) and the Lawrence Livermore National Laboratory (LLNL). The SESJ study was designed to provide a spatially unbiased assessment of raw ground-water quality within SESJ, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 99 wells in Fresno, Tulare, and Kings Counties, 83 of which were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 16 of which were sampled to evaluate changes in water chemistry along ground-water flow paths or across alluvial fans (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine, and 1,2,3-trichloropropane), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon) and dissolved noble gases also were measured to help identify the source and age of the sampled ground water.
Quality-control samples (blanks, replicates, samples for matrix spikes) were collected at approximately 10 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Assessment of the quality-control data resulted in censoring of less than 1 percent of the detections of constituents measured in ground-water samples. This study did not attempt to evaluate the quality of drinking water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, and (or) blended with other waters to maintain acceptable drinking-water quality. Regulatory thresholds apply to the treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and other health-based thresholds established by the U.S. Environmental Protection Agency and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns by CDPH. Two VOCs were detected above health-based thresholds: 1,2-dibromo-3-chloropropane (DBCP) and benzene. DBCP was detected above the U.S. Environmental Protection Agency's maximum contaminant level (MCL-US) in three grid wells and five understanding wells. Benzene was detected above the CDPH's maximum contaminant level (MCL-CA) in one grid well. All pesticide detections were below health-based thresholds. Perchlorate was detected above its maximum contaminant level for California in one grid well. Nitrate was detected above the MCL-US in six samples from understanding wells, of which one was a public supply well. Two trace elements were detected above MCLs-US: arsenic and uranium. Arsenic was detected above the MCL-US in four grid wells and two understanding wells; uranium was detected above the MCL-US in one grid well and one understanding well.
Gross alpha radiation was detected above the MCL-US in five samples, four of them from understanding wells, and uranium isotope activity was greater than the MCL-US in one understanding well.

  15. An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks

    PubMed Central

    Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei

    2014-01-01

    The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, network complexity and high churn lead to high message loss rates. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been employed widely in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and, on this basis, an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
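
A toy sketch of the heartbeat-style adaptive detection this record builds on, in the spirit of Chen's model: the next-arrival estimate is the mean of recent inter-arrival times plus a safety margin alpha. The class name, parameters, and timings are invented; B-AFD itself additionally models retransmissions.

```python
class AdaptiveFailureDetector:
    def __init__(self, alpha=0.2, history=5):
        self.alpha = alpha          # safety margin added to the estimate
        self.history = history      # number of recent arrivals kept
        self.arrivals = []

    def heartbeat(self, now):
        # Record a heartbeat arrival time, keeping a bounded history.
        self.arrivals.append(now)
        self.arrivals = self.arrivals[-self.history:]

    def suspect(self, now):
        # Suspect the peer once 'now' passes the adaptive deadline:
        # last arrival + mean inter-arrival gap + alpha.
        if len(self.arrivals) < 2:
            return False
        gaps = [b - a for a, b in zip(self.arrivals, self.arrivals[1:])]
        expected_next = self.arrivals[-1] + sum(gaps) / len(gaps) + self.alpha
        return now > expected_next

fd = AdaptiveFailureDetector()
for t in [0.0, 1.0, 2.0, 3.0, 4.0]:       # regular 1 s heartbeats
    fd.heartbeat(t)
alive = fd.suspect(4.9)                    # before the adaptive deadline
dead = fd.suspect(6.0)                     # well past it: peer suspected
```

Because the deadline tracks observed inter-arrival times, the detector automatically relaxes under slow networks and tightens under fast ones, which is the core of the QoS trade-off the abstract describes.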

  16. Land use change detection based on multi-date imagery from different satellite sensor systems

    NASA Technical Reports Server (NTRS)

    Stow, Douglas A.; Collins, Doretta; Mckinsey, David

    1990-01-01

    An empirical study is conducted to assess the accuracy of land use change detection using satellite image data acquired ten years apart by sensors with differing spatial resolutions. The primary goals of the investigation were to (1) compare standard change detection methods applied to image data of varying spatial resolution, (2) assess whether to transform the raster grid of the higher resolution image data to that of the lower resolution raster grid or vice versa in the registration process, and (3) determine whether Landsat/Thematic Mapper or SPOT/High Resolution Visible multispectral data provide more accurate detection of land use changes when registered to historical Landsat/MSS data. It is concluded that image ratioing of multisensor, multidate satellite data produced higher change detection accuracies than did principal components analysis, and that it is useful as a land use change enhancement method.
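
Image ratioing, the method this record finds most accurate, can be illustrated with a minimal sketch: pixels whose band ratio between the two dates departs strongly from 1 are flagged as change. The tiny 3x3 "images" and the ratio thresholds are invented for the example.

```python
# Two co-registered single-band images acquired on different dates.
date1 = [
    [100, 102,  98],
    [ 99, 101, 100],
    [100, 100,  60],   # bottom-right pixel changed between dates
]
date2 = [
    [101,  99, 100],
    [100, 102,  99],
    [ 99, 101, 120],
]

def ratio_change_mask(img_a, img_b, low=0.75, high=1.25):
    # Flag pixels whose date2/date1 ratio falls outside [low, high].
    mask = []
    for row_a, row_b in zip(img_a, img_b):
        mask.append([not (low <= b / a <= high)
                     for a, b in zip(row_a, row_b)])
    return mask

mask = ratio_change_mask(date1, date2)
changed = sum(sum(row) for row in mask)   # number of change pixels
```

Only the bottom-right pixel (ratio 2.0) is flagged; unchanged pixels have ratios near 1 regardless of their absolute brightness, which is what makes ratioing robust across sensors.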

  17. Incipient failure detection of space shuttle main engine turbopump bearings using vibration envelope detection

    NASA Technical Reports Server (NTRS)

    Hopson, Charles B.

    1987-01-01

    The results of an analysis performed on seven successive Space Shuttle Main Engine (SSME) static test firings, utilizing envelope detection of external accelerometer data are discussed. The results clearly show the great potential for using envelope detection techniques in SSME incipient failure detection.

  18. Device for detecting imminent failure of high-dielectric stress capacitors. [Patent application

    DOEpatents

    McDuff, G.G.

    1980-11-05

    A device is described for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations of digital data supplied by detection circuitry for comparison of pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry or any circuitry where capacitors or capacitor banks are utilized.

  19. Device for detecting imminent failure of high-dielectric stress capacitors

    DOEpatents

    McDuff, George G.

    1982-01-01

    A device for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations of digital data supplied by detection circuitry for comparison of pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry, or any circuitry where capacitors or capacitor banks are utilized.
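
The comparison logic described in these two patent records can be sketched as a simple window check: digitized pulse width and magnitude are compared against preselected ranges, and imminent failure is flagged when either drifts outside its window. All limits and units here are invented for illustration.

```python
# Hypothetical preselected ranges for a healthy capacitor's pulses.
WIDTH_RANGE = (9.0, 11.0)      # pulse width, microseconds
MAG_RANGE = (4.5, 5.5)         # pulse magnitude, kilovolts

def imminent_failure(width_us, magnitude_kv):
    # Flag failure when width or magnitude leaves its preselected range.
    w_lo, w_hi = WIDTH_RANGE
    m_lo, m_hi = MAG_RANGE
    return not (w_lo <= width_us <= w_hi) or not (m_lo <= magnitude_kv <= m_hi)

healthy = imminent_failure(10.1, 5.0)     # inside both windows
drifting = imminent_failure(11.8, 5.0)    # pulse width out of range
```

In the patent this comparison runs on inexpensive microprocessor circuitry fed by the pulse digitization front end.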

  20. Lining seam elimination algorithm and surface crack detection in concrete tunnel lining

    NASA Astrophysics Data System (ADS)

    Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling

    2016-11-01

    Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected based on the PSM by the accelerated percolation algorithm, so that the fracture unit areas can be scanned and connected. Finally, the real surface cracks in concrete tunnel lining are obtained by removing the lining seam and performing percolation denoising. Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect real surface cracks. Furthermore, by removing the lining seam, it fills a gap in existing surface crack detection for concrete tunnel lining.

  1. Long-term change of activity of very low-frequency earthquakes in southwest Japan

    NASA Astrophysics Data System (ADS)

    Baba, S.; Takeo, A.; Obara, K.; Kato, A.; Maeda, T.; Matsuzawa, T.

    2017-12-01

    On the plate interface near the seismogenic zone of megathrust earthquakes, various types of slow earthquakes have been detected, including non-volcanic tremors, slow slip events (SSEs) and very low-frequency earthquakes (VLFEs). VLFEs are classified into deep VLFEs, which occur on the downdip side of the seismogenic zone, and shallow VLFEs, which occur on the updip side, i.e., at several kilometers depth in southwest Japan. As a member of the slow earthquake family, VLFE activity is expected to be a proxy for inter-plate slip because VLFEs have the same mechanisms as inter-plate slip and are detected during episodic tremor and slip (ETS). However, the long-term change of VLFE seismicity has not been well constrained compared to deep low-frequency tremor. We thus studied long-term changes in the activity of VLFEs in southwest Japan, where ETS and long-term SSEs have been most intensive. We used continuous seismograms of F-net broadband seismometers operated by NIED from April 2004 to March 2017. After applying a band-pass filter with a frequency range of 0.02-0.05 Hz, we adopted the matched-filter technique to detect VLFEs. We prepared templates by calculating synthetic waveforms for each hypocenter grid assuming typical focal mechanisms of VLFEs. The correlation coefficients between templates and continuous F-net seismograms were calculated at each grid every 1 s in all components. The grid interval is 0.1 degree in both longitude and latitude. A VLFE was declared at a grid whenever the average of the correlation coefficients exceeded the detection threshold, defined as eight times the median absolute deviation of the distribution. At grids in the Bungo channel, where long-term SSEs occurred frequently, the cumulative number of detected VLFEs increased rapidly in 2010 and 2014, modulated by stress loading from the long-term SSEs. At inland grids near the Bungo channel, the cumulative number increases steeply every half a year.
This stepwise change coincides with ETS episodes. During long-term SSEs, the interval between steps is shorter and the number of VLFEs in each step is smaller than usual. The most remarkable point is that the rate of deep VLFEs has been low in this region since late 2014. A likely explanation of the VLFE quiescence is a temporal change of inter-plate coupling in the Nankai subduction zone.

  2. High-resolution computer-aided moire

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.

  3. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalimunthe, Amty Ma’rufah Ardhiyah; Mindara, Jajat Yuda; Panatarani, Camellia

    Smart grids and distributed generation may help address global climate change and the energy crisis arising from reliance on fossil fuels as the main source of electrical power generation. In order to meet rising electrical power demand and increasing service-quality demands, as well as to reduce pollution, the existing power grid infrastructure should be developed into a smart grid with distributed power generation, which provides a great opportunity to address issues related to energy efficiency, energy security, power quality and aging infrastructure. The conventional existing distributed generation system is an AC grid, whereas renewable resources require a DC grid system. This paper explores a model of a smart DC grid with stable power generation, built from minimal and compact circuitry that can be implemented very cost-effectively with simple components. PC-based application software was developed to display the condition of the grid and to control it, making the grid 'smart'. The model is then subjected to severe system perturbations, such as incremental changes in loads, to test the stability of the system. It is concluded that the system is able to detect and control voltage stability, indicating the ability of the power system to maintain steady voltage within permissible ranges under normal conditions.

  5. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm that provides consistency guarantees even in very large and extreme-scale systems while remaining memory and bandwidth efficient.
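    The gossip-based idea can be illustrated with a minimal push-gossip toy in Python. This is not any of the paper's three algorithms, only a sketch showing how local views of the failed-process list spread between random peers until every alive process holds the same list; all names are illustrative:

```python
import random

def gossip_consensus(n_procs, failed, seed=0):
    """Count push-gossip cycles until every alive process knows the full
    failed set. A toy sketch, not the paper's protocols."""
    rng = random.Random(seed)
    alive = [p for p in range(n_procs) if p not in failed]
    # each alive process starts knowing only one locally observed failure
    views = {p: {rng.choice(list(failed))} for p in alive}
    cycles = 0
    while any(v != failed for v in views.values()):
        cycles += 1
        for p in alive:
            peer = rng.choice(alive)
            views[peer] |= views[p]   # push local view to a random peer
    return cycles

print(gossip_consensus(64, failed={3, 17}))
```

    Because information spreads multiplicatively, the cycle count grows roughly logarithmically with the number of processes, consistent with the scaling result reported above.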

  6. Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Bruton, William M.

    1987-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details of the microprocessor implementation of the algorithm as well as a description of the algorithm itself.

  7. Sensitivity Analysis of Repeat Track Estimation Techniques for Detection of Elevation Change in Polar Ice Sheets

    NASA Astrophysics Data System (ADS)

    Harpold, R. E.; Urban, T. J.; Schutz, B. E.

    2008-12-01

    Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.
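    The core of repeat-track analysis is fitting the elevation observations within one grid cell to a model over time; the slope of the fit is the elevation change rate. A minimal sketch fitting only the linear trend h(t) = a + b·t (seasonal terms would add sin/cos columns to the design matrix); the data values are illustrative, not ICESat measurements:

```python
# Ordinary least-squares slope of elevation vs. time within one grid cell.
def change_rate(times, elevations):
    n = len(times)
    tbar = sum(times) / n
    hbar = sum(elevations) / n
    num = sum((t - tbar) * (h - hbar) for t, h in zip(times, elevations))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den  # metres per year, if t is in years

t = [0.0, 0.5, 1.0, 1.5, 2.0]          # years since first pass
h = [100.0, 99.9, 99.8, 99.7, 99.6]    # metres: thinning at 0.2 m/yr
print(change_rate(t, h))                # ≈ -0.2
```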

  8. Information security threats and an easy-to-implement attack detection framework for wireless sensor network-based smart grid applications

    NASA Astrophysics Data System (ADS)

    Tuna, G.; Örenbaş, H.; Daş, R.; Kogias, D.; Baykara, M.; K, K.

    2016-03-01

    Wireless Sensor Networks (WSNs), when combined with energy harvesting solutions that prolong the overall lifetime of the system and with the enhanced communication protocols of modern sensor nodes, are used efficiently in the Smart Grid (SG), an evolutionary system for the modernization of existing power grids. However, wireless communication technology brings various types of security threats. In this study, the use of WSNs for SG applications is first presented. Second, the security-related issues and challenges as well as the security threats are presented. In addition, proposed security mechanisms for WSN-based SG applications are discussed. Finally, an easy-to-implement and simple attack detection framework to prevent attacks directed at sink and gateway nodes with web interfaces is proposed, and its efficiency is demonstrated using a case study.

  9. GridPix detectors: Production and beam test results

    NASA Astrophysics Data System (ADS)

    Koppert, W. J. C.; van Bakel, N.; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; van der Graaf, H.; Hartjes, F.; Hessey, N. P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.

    2013-12-01

    The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. Using wafer post-processing techniques, an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron-induced avalanches, which are detected by the pixels. The time-to-digital converter (TDC) records the drift time, enabling the reconstruction of high-precision 3D track segments. Recently, GridPixes were produced at full wafer scale to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test, the contributions of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap were studied in detail. In addition, long-term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip.

  10. The marginalization of "small is beautiful": Micro-hydroelectricity, common property, and the politics of rural electricity provision in Thailand

    NASA Astrophysics Data System (ADS)

    Greacen, Christopher Edmund

    This study analyzes forces that constrain sustainable deployment of cost-effective renewable energy in a developing country. By many economic and social measures, community micro-hydro is a superior electrification option for remote mountainous communities in Thailand. Yet despite a 20 year government program, only 59 projects were built and of these less than half remain operating. By comparison, the national grid has extended to over 69,000 villages. Based on microeconomic, engineering, social barriers, common pool resource, and political economic theories, this study investigates first, why so few micro-hydro projects were built, and second, why so few remain operating. Drawing on historical information, site visits, interviews, surveys, and data logging, this study shows that the marginal status of micro-hydro arises from multiple linked factors spanning from village experiences to geopolitical concerns. The dominance of the parastatal rural electrification utility, the PEA, and its singular focus on grid extension are crucial in explaining why so few projects were built. Buffered from financial consequences by domestic and international subsidies, grid expansion proceeded without consideration of alternatives. High costs borne by villagers for micro-hydro discouraged village choice. PEA remains catalytic in explaining why few systems remain operating: grid expansion plans favor villages with existing loads and most villages abandon micro-hydro generators when the grid arrives. Village experiences are fundamental: most projects suffer blackouts, brownouts, and equipment failures due to poor equipment and collective over-consumption. Over-consumption is linked to mismatch between tariffs and generator technical characteristics. Opportunities to resolve problems languished as limited state support focused on building projects and immediate repairs rather than fundamentals. Despite frustrations, many remain proud of "their power plant". 
Interconnecting and selling electricity to PEA offers a mutually beneficial opportunity for the Thai public and for villagers, but one thus far thwarted by bureaucratic challenges. Explanations of renewable energy dissemination in countries with strong state involvement in rural electrification should borrow approaches from political economy concerning the ways in which politics and constellations of other factors eclipse rational economic behavior. At the village level, common pool resource theory reveals causal linkages between appliance use, equipment limitations, power quality, and equipment failures.

  11. The Failure Models of Lead Free Sn-3.0Ag-0.5Cu Solder Joint Reliability Under Low-G and High-G Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2017-02-01

    The reliability of Sn-3.0Ag-0.5Cu (SAC 305) solder joints under a broad range of drop impacts was studied. The failure performance, failure probability, and failure position of the solder joints were analyzed under two shock test conditions, i.e., 1000 g for 1 ms and 300 g for 2 ms. The stress distribution in the solder joint was calculated with ABAQUS. The results revealed that the dominant cause of failure was the tension arising from the difference in stiffness between the printed circuit board and the ball grid array; the maximum tension, 121.1 MPa and 31.1 MPa under the 1000 g and 300 g drop impacts, respectively, was concentrated at the corner of the solder joint located in the outermost corner of the solder ball row. The failure modes were grouped into four categories: crack initiation and propagation through (1) the intermetallic compound layer, (2) the Ni layer, (3) the Cu pad, or (4) the Sn matrix. The outermost corner of the solder ball row had a high failure probability under both the 1000 g and 300 g drop impacts. The number of solder ball failures under the 300 g drop impact was higher than under the 1000 g drop impact; the characteristic drop values for failure were 41 and 15,199, respectively, according to the statistics.

  12. The biometric-based module of smart grid system

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Ermoshkina, A.

    2015-10-01

    Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective Neural Network is developed. To form the selective Neural Network, the biometric-based module uses a method comprising three main stages: preliminary image processing, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective Neural Network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.

  13. EEMD-based wind turbine bearing failure detection using the generator stator current homopolar component

    NASA Astrophysics Data System (ADS)

    Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed

    2013-12-01

    Failure detection has always been a demanding task in the electrical machines community, and it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms depend heavily on reducing operational and maintenance costs. Indeed, the most efficient way of reducing these costs is to continuously monitor the condition of these systems, which allows early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper therefore assesses a failure detection technique based on the homopolar component of the generator stator current and highlights the use of ensemble empirical mode decomposition (EEMD) as a tool for failure detection in wind turbine generators in both stationary and non-stationary cases.
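    For reference, the homopolar (zero-sequence) component is simply the mean of the three stator phase currents; it is near zero for a healthy, balanced machine, and failure signatures modulate it. The EEMD decomposition applied to this signal in the paper is omitted here. A minimal sketch with illustrative names:

```python
import math

def homopolar(ia, ib, ic):
    """Zero-sequence (homopolar) component of three stator phase currents."""
    return [(a + b + c) / 3.0 for a, b, c in zip(ia, ib, ic)]

# balanced 50 Hz three-phase currents: the homopolar component vanishes
t = [k / 1000.0 for k in range(1000)]
w = 2 * math.pi * 50.0
ia = [math.sin(w * x) for x in t]
ib = [math.sin(w * x - 2 * math.pi / 3) for x in t]
ic = [math.sin(w * x + 2 * math.pi / 3) for x in t]
i0 = homopolar(ia, ib, ic)
print(max(abs(v) for v in i0))  # ~0 for a healthy, balanced machine
```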

  14. Impact Dynamics: Theory and Experiment

    DTIC Science & Technology

    1980-10-01

    Finite-difference codes such as HEMP (Hydrodynamic, Elastic, Magneto & Plastic) employ a quadrilateral grid and may be solved in plane coordinates or with cylindrical symmetry. Material constitution, strain rate, localized plastic flow, and failure are manifest at various stages of the impact process. [The remainder of this excerpt is residue of a table relating impact-velocity regimes to material response: roughly 50-500 m/s, primarily plastic behavior; 500-1000 m/s (powder guns), viscous-material response with strength still significant and strength/density the dominant parameters.]

  15. NASA-DoD Lead-Free Electronics Project

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt R.

    2009-01-01

    The primary technical objective of this project is to undertake comprehensive testing to generate information on failure modes/criteria to better understand the reliability of: (1) Packages (e.g., Thin Small Outline Package [TSOP], Ball Grid Array [BGA], Plastic Dual In-line Package [PDIP]) assembled and reworked with lead-free alloys, (2) Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with mixed (lead/lead-free) alloys.

  16. Evaluation of a grid based molecular dynamics approach for polypeptide simulations.

    PubMed

    Merelli, Ivan; Morra, Giulia; Milanesi, Luciano

    2007-09-01

    Molecular dynamics is very important for biomedical research because it makes it possible to simulate the behavior of a biological macromolecule in silico. However, molecular dynamics is computationally expensive: simulating a few nanoseconds of dynamics for a large macromolecule such as a protein takes a very long time, owing to the large number of operations needed to solve Newton's equations for a system of thousands of atoms. To obtain biologically significant data, it is desirable to use high-performance computing resources for these simulations. Recently, a distributed computing approach based on replacing a single long simulation with many independent short trajectories has been introduced, which in many cases provides valuable results. This study concerns the development of an infrastructure to run molecular dynamics simulations on a grid platform in a distributed way. The implemented software allows the parallel submission of different simulations that are individually short but together yield important biological information. Moreover, each simulation is divided into a chain of jobs to avoid data loss in case of system failure and to limit the size of each data transfer from the grid. The results confirm that the distributed approach on grid computing is particularly suitable for molecular dynamics simulations thanks to its high scalability.
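    The job-chaining idea, splitting one long run into short segments that each restart from the previous segment's checkpoint, can be sketched generically. All names here are illustrative; no real grid middleware or MD engine is invoked, and `run_segment` is a stand-in for one short simulation job:

```python
def run_segment(state, steps):
    """Stand-in for one short MD job: advance a toy 'state' by `steps`."""
    return state + steps

def chained_run(total_steps, segment_steps):
    """Split a long run into a chain of jobs; a mid-run failure loses
    at most one segment because each segment's result is checkpointed."""
    state, done, checkpoints = 0, 0, []
    while done < total_steps:
        n = min(segment_steps, total_steps - done)
        state = run_segment(state, n)   # one grid job
        done += n
        checkpoints.append(state)       # persisted between jobs
    return state, checkpoints

state, ckpts = chained_run(total_steps=1_000_000, segment_steps=250_000)
print(state, len(ckpts))  # 1000000 4
```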

  17. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft provides the measurements that are applied to the regression algorithms. The sensing techniques are applied to the F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
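    The chi-squared test at the heart of such schemes compares normalized residuals between measured and model-predicted port pressures against a threshold. A minimal sketch; the noise level, threshold, and pressure values are illustrative assumptions, not HI-FADS parameters:

```python
def chi_squared(measured, predicted, sigma):
    """Normalized chi-squared statistic of the pressure residuals."""
    return sum(((m - p) / sigma) ** 2 for m, p in zip(measured, predicted))

SIGMA = 0.05       # assumed per-port measurement noise (illustrative)
THRESHOLD = 20.0   # illustrative fault threshold

model   = [1.00, 0.98, 0.95, 0.98, 1.00]   # pressures predicted by the model
healthy = [1.01, 0.97, 0.95, 0.99, 1.00]   # small residuals everywhere
faulty  = [1.01, 0.97, 0.40, 0.99, 1.00]   # one port reads badly low

print(chi_squared(healthy, model, SIGMA) > THRESHOLD)  # False: no fault flagged
print(chi_squared(faulty,  model, SIGMA) > THRESHOLD)  # True: port failure flagged
```

    Once the system-level test fires, per-port residuals can be inspected to isolate the failed orifice and drop it from the regression.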

  18. Failure Detecting Method of Fault Current Limiter System with Rectifier

    NASA Astrophysics Data System (ADS)

    Tokuda, Noriaki; Matsubara, Yoshio; Asano, Masakuni; Ohkuma, Takeshi; Sato, Yoshibumi; Takahashi, Yoshihisa

    A fault current limiter (FCL) is extensively needed to suppress fault current, particularly required for trunk power systems connecting high-voltage transmission lines, such as 500kV class power system which constitutes the nucleus of the electric power system. We proposed a new type FCL system (rectifier type FCL), consisting of solid-state diodes, DC reactor and bypass AC reactor, and demonstrated the excellent performances of this FCL by developing the small 6.6kV and 66kV model. It is important to detect the failure of power devices used in the rectifier under the normal operating condition, for keeping the excellent reliability of the power system. In this paper, we have proposed a new failure detecting method of power devices most suitable for the rectifier type FCL. This failure detecting system is simple and compact. We have adapted the proposed system to the 66kV prototype single-phase model and successfully demonstrated to detect the failure of power devices.

  19. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
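    The statistical process control idea above can be sketched with a simple Shewhart-style rule: flag a day whose index falls outside mean ± k·stdev of a baseline window. This is a simplification of the paper's time-series models; the data and threshold are illustrative:

```python
import statistics

def out_of_control(history, value, k=3.0):
    """Flag `value` if it falls outside mean ± k*stdev of the baseline index."""
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(value - mu) > k * sd

# monitored index: total laboratory records created per day (toy baseline week)
records_per_day = [1020, 998, 1011, 987, 1005, 1013, 992]
print(out_of_control(records_per_day, 1003))  # False: a normal day
print(out_of_control(records_per_day, 610))   # True: record-level data loss
```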

  20. X-ray photon correlation spectroscopy using a fast pixel array detector with a grid mask resolution enhancer.

    PubMed

    Hoshino, Taiki; Kikuchi, Moriya; Murakami, Daiki; Harada, Yoshiko; Mitamura, Koji; Ito, Kiminori; Tanaka, Yoshihito; Sasaki, Sono; Takata, Masaki; Jinnai, Hiroshi; Takahara, Atsushi

    2012-11-01

    The performance of a fast pixel array detector with a grid mask resolution enhancer has been demonstrated for X-ray photon correlation spectroscopy (XPCS) measurements to investigate fast dynamics on a microscopic scale. A detecting system, in which each pixel of a single-photon-counting pixel array detector, PILATUS, is covered by grid mask apertures, was constructed for XPCS measurements of silica nanoparticles in polymer melts. The experimental results are confirmed to be consistent by comparison with other independent experiments. By applying this method, XPCS measurements can be carried out by customizing the hole size of the grid mask to suit the experimental conditions, such as beam size, detector size and sample-to-detector distance.

  1. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  2. Fire Detections and Fire Radiative Power Intercomparison Using Multiple Sensor Products over a Predominantly Gas Flaring Region

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Wang, J.

    2014-12-01

    Gas flaring is a global environmental hazard severely impacting climate, economy, and public health. The associated emissions are frequently unreported and have large uncertainties. Prior studies have established a direct relationship between the radiative energy released from fires and the biomass burned, making fire radiative power (FRP), i.e., the rate of radiative energy release, an important proxy for characterizing emissions. In this study, fire properties from four different satellite products were obtained over a 10° × 10° gas flaring region in Russia for all days of May 2013. The target area is part of Russia's biggest gas flaring region, the Khanty-Mansiysk autonomous okrug. The objective of the study is to investigate the consistency of fire detections and FRP retrievals, and the effects of gridding FRP data from the region onto a uniform grid. The four products used were: MODIS Terra Level 2 thermal anomalies (MOD14), MODIS Aqua Level 2 thermal anomalies (MYD14), the VIIRS active fire product, and the recent NOAA Nightfire product. FRP at 1 km nominal resolution from MOD14 and MYD14, subpixel radiant heat (RH) from the NOAA Nightfire product, and fire detections from all four products were recorded on a 0.25° × 0.25° grid on a daily basis. Results revealed that the Nightfire product had the most detections, almost six times the number from the other products, mainly because of the use of the M10 (1.6 µm) band as its primary detection band. The M10 band is highly efficient in identifying radiant emissions from hot sources at night. The correlation (after omitting outliers) between gridded NOAA Nightfire RH and the corresponding MOD14 and MYD14 FRP gave a moderate regression value, with MODIS FRP being mostly higher than RH.
As an extension to this work, a comprehensive study for a larger temporal domain also incorporating viewing geometries and cloud cover would advance our understanding of flare detections and associated FRP retrievals not just for the target region but also gas flaring regions globally.
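    Gridding per-detection FRP onto a uniform lat/lon grid, as done above, amounts to mapping each detection's coordinates to a cell index and accumulating. A minimal sketch with illustrative coordinates (not the study's data):

```python
from collections import defaultdict

def grid_frp(detections, cell=0.25):
    """Accumulate per-detection FRP onto a uniform lat/lon grid.
    Each detection is (lat, lon, frp_mw); cell indices floor to the grid."""
    grid = defaultdict(float)
    for lat, lon, frp in detections:
        key = (int(lat // cell), int(lon // cell))
        grid[key] += frp
    return dict(grid)

dets = [(61.10, 72.60, 35.0),   # two detections falling in the same 0.25° cell
        (61.12, 72.58, 20.0),
        (61.40, 73.10, 12.0)]
print(grid_frp(dets))
```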

  3. Investigation of accelerated stress factors and failure/degradation mechanisms in terrestrial solar cells

    NASA Technical Reports Server (NTRS)

    Lathrop, J. W.

    1983-01-01

    Results of an ongoing research program into the reliability of terrestrial solar cells are presented. Laboratory accelerated testing procedures are used to identify failure/degradation modes which are then related to basic physical, chemical, and metallurgical phenomena. In the most recent tests, ten different types of production cells, both with and without encapsulation, from eight different manufacturers were subjected to a variety of accelerated tests. Results indicated the presence of a number of hitherto undetected failure mechanisms, including Schottky barrier formation at back contacts and loss of adhesion of grid metallization. The mechanism of Schottky barrier formation is explained by hydrogen, formed by the dissociation of water molecules at the contact surface, diffusing to the metal semiconductor interface. This same mechanism accounts for the surprising increase in sensitivity to accelerated stress conditions that was observed in some cells when encapsulated.

  4. A highly optimized grid deployment: the metagenomic analysis example.

    PubMed

    Aparicio, Gabriel; Blanquer, Ignacio; Hernández, Vicente

    2008-01-01

    Computational resources and computationally expensive processes are two topics that are not growing at the same rate. The availability of large amounts of computing resources in Grid infrastructures does not mean that efficiency is unimportant. It is necessary to analyze the whole process to improve partitioning and submission schemas, especially in the most critical experiments. This is the case for metagenomic analysis, and this text shows the work done to optimize a Grid deployment, which has led to a reduction in response times and failure rates. Metagenomic studies aim at processing samples of multiple specimens to extract the genes and proteins that belong to the different species. In many cases, sequencing the DNA of many microorganisms is hindered by the impossibility of growing significant samples of isolated specimens. Many bacteria cannot survive alone and require interaction with other organisms. In such cases, the available DNA information belongs to different kinds of organisms. One important stage in metagenomic analysis consists of the extraction of fragments, followed by a comparison and functional analysis stage. By comparison to existing chains whose function is well known, fragments can be classified. This process is computationally intensive and requires several iterations of alignment and phylogeny classification steps. Source samples reach several million sequences, which can be up to thousands of nucleotides each. These sequences are compared to a selected part of the "Non-redundant" database covering only eukaryotic species. From this first analysis, a refining process is performed and the alignment analysis is restarted from the results. This process requires several CPU-years. The article describes and analyzes the difficulties of fragmenting, automating, and checking the above operations in current Grid production environments. 
This environment has been tuned based on an experimental study that tested the most efficient and reliable resources, the optimal job size, and the data transfer and database reindexation overhead. The environment must re-submit faulty jobs, detect endless tasks, and ensure that results are correctly retrieved and the workflow synchronised. The paper gives an outline of the structure of the system and the preparation steps performed for this experiment.

  5. A Low-Pressure Oxygen Storage System for Oxygen Supply in Low-Resource Settings.

    PubMed

    Rassool, Roger P; Sobott, Bryn A; Peake, David J; Mutetire, Bagayana S; Moschovis, Peter P; Black, Jim Fp

    2017-12-01

    Widespread access to medical oxygen would reduce global pneumonia mortality. Oxygen concentrators are one proposed solution, but they have limitations, in particular vulnerability to electricity fluctuations and failure during blackouts. The low-pressure oxygen storage system addresses these limitations in low-resource settings. This study reports testing of the system in Melbourne, Australia, and nonclinical field testing in Mbarara, Uganda. The system included a power-conditioning unit, a standard oxygen concentrator, and an oxygen store. In Melbourne, pressure and flows were monitored during cycles of filling/emptying, with forced voltage fluctuations. The bladders were tested by increasing pressure until they ruptured. In Mbarara, the system was tested by accelerated cycles of filling/emptying and then run on grid power for 30 d. The low-pressure oxygen storage system performed well, including sustaining a pressure approximately twice the standard working pressure before rupture of the outer bag. Flow of 1.2 L/min was continuously maintained to a simulated patient during 30 d on grid power, despite power failures totaling 2.9% of the total time, with durations of 1-176 min (mean 36.2, median 18.5). The low-pressure oxygen storage system was robust and durable, with accelerated testing equivalent to at least 2 y of operation revealing no visible signs of imminent failure. Despite power cuts, the system continuously provided oxygen, equivalent to the treatment of one child, for 30 d under typical power conditions for sub-Saharan Africa. The low-pressure oxygen storage system is ready for clinical field trials. Copyright © 2017 by Daedalus Enterprises.

  6. NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test as of 736 kg of Propellant Throughput

    NASA Technical Reports Server (NTRS)

    Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is developing the next-generation solar-electric ion propulsion system with significant enhancements beyond the state-of-the-art NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) ion propulsion system to provide future NASA science missions with enhanced mission capabilities. A Long-Duration Test (LDT) was initiated in June 2005 to validate the thruster service life modeling and to qualify the thruster propellant throughput capability. The thruster has set electric propulsion records for the longest operating duration, highest propellant throughput, and most total impulse demonstrated. At the time of this publication, the NEXT LDT has surpassed 42,100 h of operation, processed more than 736 kg of xenon propellant, and demonstrated greater than 28.1 MN·s total impulse. Thruster performance has been steady with negligible degradation. The NEXT thruster design has mitigated several lifetime-limiting mechanisms encountered in the NSTAR design, including the NSTAR first failure mode, thereby drastically improving thruster capabilities. Component erosion rates and the progression of the predicted life-limiting erosion mechanism for the thruster compare favorably to pretest predictions based upon semi-empirical ion thruster models used in the thruster service life assessment. Service life model validation has been accomplished by the NEXT LDT. Assuming full-power operation until test article failure, the models and extrapolated erosion data predict penetration of the accelerator grid grooves after more than 45,000 hours of operation while processing over 800 kg of xenon propellant. Thruster failure due to degradation of the accelerator grid structural integrity is expected after groove penetration.

  8. Lagrangian displacement tracking using a polar grid between endocardial and epicardial contours for cardiac strain imaging.

    PubMed

    Ma, Chi; Varghese, Tomy

    2012-04-01

    Accurate cardiac deformation analysis for cardiac displacement and strain imaging over time requires a Lagrangian description of the deformation of myocardial tissue structures. Failure to couple the estimated displacement and strain information with the correct myocardial tissue structures will lead to erroneous results in the displacement and strain distribution over time. Lagrangian-based tracking in this paper divides the tissue structure into a fixed number of pixels whose deformation is tracked over the cardiac cycle. An algorithm that utilizes a polar grid generated between the estimated endocardial and epicardial contours for cardiac short-axis images is proposed to ensure a Lagrangian description of the pixels. Displacement estimates from consecutive radiofrequency frames were then mapped onto the polar grid to obtain a distribution of the actual displacement that is mapped to the polar grid over time. A finite element based canine heart model coupled with an ultrasound simulation program was used to verify this approach. Segmental analysis of the accumulated displacement and strain over a cardiac cycle demonstrates excellent agreement between the ideal result obtained directly from the finite element model and our Lagrangian approach to strain estimation. Traditional Eulerian-based estimation results, on the other hand, show significant deviation from the ideal result. An in vivo comparison of the displacement and strain estimated using parasternal short-axis views is also presented. Lagrangian displacement tracking using a polar grid provides accurate tracking of myocardial deformation, demonstrated using both finite element and in vivo radiofrequency data acquired on a volunteer. In addition to the cardiac application, this approach can also be utilized for transverse scans of arteries, where a polar grid can be generated between the contours delineating the outer and inner walls of the vessel from the blood flowing through it.
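    The polar-grid construction described above can be sketched as follows; the input form (contour radii sampled at uniform angles about a common centroid) and the grid dimensions are simplifying assumptions for illustration, not the paper's segmentation pipeline:

```python
import numpy as np

def polar_grid(inner_r, outer_r, n_radial=8):
    """Generate a polar grid of points between two closed contours.

    inner_r, outer_r: arrays of contour radii sampled at equally spaced
    angles about a common centroid (hypothetical simplified input).
    Returns x, y arrays of shape (n_radial, n_theta); the same (i, j)
    cell is followed over the cardiac cycle, giving a Lagrangian
    description of the material points between the contours.
    """
    inner_r = np.asarray(inner_r, dtype=float)
    outer_r = np.asarray(outer_r, dtype=float)
    n_theta = len(inner_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Fractional radial positions between the two contours.
    frac = np.linspace(0.0, 1.0, n_radial)
    r = inner_r[None, :] + frac[:, None] * (outer_r - inner_r)[None, :]
    x = r * np.cos(theta)[None, :]
    y = r * np.sin(theta)[None, :]
    return x, y

# Toy case: circular endo- and epicardial contours of radius 10 and 15.
x, y = polar_grid(np.full(36, 10.0), np.full(36, 15.0), n_radial=5)
```

Frame-to-frame displacement estimates would then be interpolated onto these (x, y) points and accumulated per cell over the cycle.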

  9. Remote Structural Health Monitoring and Advanced Prognostics of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Brown; Bernard Laskowski

    Wind energy generation represents a significant capital investment. In order to maximize the life-cycle of wind turbines, associated rotors, gears, and structural towers, a capability to detect and predict (prognostics) the onset of mechanical faults at a sufficiently early stage for maintenance actions to be planned would significantly reduce both maintenance and operational costs. Advancement towards this effort has been made through the development of anomaly detection, fault detection and fault diagnosis routines to identify selected fault modes of a wind turbine based on available sensor data preceding an unscheduled emergency shutdown. The anomaly detection approach employs spectral techniques to find an approximation of the data using a combination of attributes that capture the bulk of variability in the data. Fault detection and diagnosis (FDD) is performed using a neural network-based classifier trained from baseline and fault data recorded during known failure conditions. The approach has been evaluated for known baseline conditions and three selected failure modes: pitch rate failure, low oil pressure failure and a gearbox gear-tooth failure. Experimental results demonstrate the approach can distinguish between these failure modes and normal baseline behavior within a specified statistical accuracy.
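    The spectral anomaly-detection step can be illustrated with a minimal principal-subspace residual check: fit a low-dimensional subspace capturing the bulk of baseline variability, then score new samples by their residual energy outside it. The data shapes and the two-component choice below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def fit_baseline(X, k=2):
    # Principal components capturing the bulk of baseline variability
    # (rows of X are sensor snapshots recorded under normal operation).
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def anomaly_score(x, mu, components):
    # Residual energy of a new snapshot outside the baseline subspace;
    # large scores flag behavior the baseline model cannot approximate.
    r = x - mu
    proj = components.T @ (components @ r)
    return float(np.linalg.norm(r - proj))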

  10. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent, very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed that relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one or several clusters and is able to directly control and submit jobs to them even if the connection to other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented as well.

  11. Reliability of CGA/LGA/HDI Package Board/Assembly (Final Report)

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    Package manufacturers are now offering commercial-off-the-shelf column grid array (COTS CGA) packaging technologies in high-reliability versions. Understanding the process and quality assurance (QA) indicators for reliability is important for low-risk insertion of these advanced electronics packages. The previous reports, released in January 2012 and January 2013, presented package test data, assembly information, and reliability evaluation by thermal cycling for CGA packages with 1752, 1517, 1509, and 1272 inputs/outputs (I/Os) and 1-mm pitch, including thermal cycling (-55C to either 100C or 125C) test results for up to 200 cycles. This report presents up to 500 thermal cycles with quality assurance and failure analysis evaluation, represented by optical photomicrographs, 2D real-time X-ray images, dye-and-pry photomicrographs, and optical/scanning electron microscopy (SEM) cross-sectional images. The report also presents assembly challenges using reflow, by either vapor phase or a rework station, of CGA and land grid array (LGA) versions of three high-I/O packages in both ceramic and plastic configurations. A new test vehicle was designed with a high-density interconnect (HDI) printed circuit board (PCB) with microvia-in-pad to accommodate both LGA packages and a large number of fine-pitch ball grid arrays (BGAs). The LGAs were either assembled onto the HDI PCB as LGAs or were solder-paste printed and reflowed first to form solder domes on the pads before assembly. Both plastic BGAs with 1156 I/Os and ceramic LGAs were assembled. X-ray inspection results are presented, as well as failures after 200 thermal cycles. Lessons learned on assembly of ceramic LGAs are also presented.

  12. Very low frequency earthquakes (VLFEs) detected during episodic tremor and slip (ETS) events in Cascadia using a match filter method indicate repeating events

    NASA Astrophysics Data System (ADS)

    Hutchison, A. A.; Ghosh, A.

    2016-12-01

    Very low frequency earthquakes (VLFEs) occur in transitional zones of faults, releasing seismic energy in the 0.02-0.05 Hz frequency band over a 90 s duration, and typically have magnitudes within the range of Mw 3.0-4.0. VLFEs can occur down-dip of the seismogenic zone, where they can transfer stress up-dip, potentially bringing the locked zone closer to a critical failure stress. VLFEs also occur up-dip of the seismogenic zone in a region along the plate interface that can rupture coseismically during large megathrust events, such as the 2011 Tohoku-Oki earthquake [Ide et al., 2011]. VLFEs were first detected in Cascadia during the 2011 episodic tremor and slip (ETS) event, occurring coincidentally with tremor [Ghosh et al., 2015]. However, during the 2014 ETS event, VLFEs were spatially and temporally asynchronous with tremor activity [Hutchison and Ghosh, 2016]. Such contrasting behaviors remind us that the mechanics behind such events remain elusive, yet they are responsible for the largest portion of the moment release during an ETS event. Here, we apply a match filter method using known VLFEs as template events to detect additional VLFEs. Using a grid-search centroid moment tensor inversion method, we invert stacks of the resulting match filter detections to ensure moment tensor solutions are similar to those of the respective template events. Our ability to successfully employ a match filter method for VLFE detection in Cascadia intrinsically indicates that these events can be repeating, implying that the same asperities are likely responsible for generating multiple VLFEs.
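    The match filter step can be sketched as a normalized sliding correlation of a template against a continuous record, with detections declared above a robust threshold. The synthetic template and the MAD-based threshold below are illustrative assumptions; the study's templates are known VLFEs and its detection criteria may differ:

```python
import numpy as np

def match_filter(data, template, threshold=8.0):
    """Slide a normalized template over the record; return detection
    indices and the correlation trace. Detections are samples whose
    normalized correlation exceeds `threshold` times the median absolute
    deviation (MAD) of the trace -- a common robust choice, used here
    purely for illustration."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot((w - w.mean()) / s, t) / n
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.nonzero(cc > threshold * mad)[0], cc
```

Stacking the windows at the detected indices and inverting the stack, as in the abstract, would then test whether the detections share the template's moment tensor.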

  13. Liquid crystal-based glucose biosensor functionalized with mixed PAA and QP4VP brushes.

    PubMed

    Khan, Mashooq; Park, Soo-Young

    2015-06-15

    4-Cyano-4'-pentylbiphenyl (5CB) in a transmission electron microscopy (TEM) grid was developed for glucose detection by coating with a monolayer of mixed polymer brushes using poly(acrylicacid-b-4-cynobiphenyl-4'-oxyundecylacrylate) (PAA-b-LCP) and quaternized poly(4-vinylpyridine-b-4-cynobiphenyl-4'-oxyundecylacrylate) (QP4VP-b-LCP) (LCP stands for liquid crystal polymer) at the 5CB/aqueous interface. The resultant 5CB in TEM grid was functionalized with the PAA and QP4VP brushes, which were strongly anchored by the LCP block. The PAA brush rendered the 5CB/aqueous interface pH-responsive and the QP4VP brush immobilized glucose oxidase (GOx) through electrostatic interactions without the aid of coupling agents. The glucose was detected through a homeotropic-to-planar orientational transition of the 5CB observed through a polarized optical microscope (POM) under crossed polarizers. The optimum immobilization with a 0.78 µM GOx solution on the dual-brush-coated TEM grid enabled glucose detection at concentrations higher than 0.5 mM with response times shorter than 180 s. This TEM grid glucose sensor provided a linear response of birefringence of the 5CB to glucose concentrations ranging from 0.5 to 11 mM with a Michaelis-Menten constant (Km) of 1.67 mM. This new and sensitive glucose biosensor has the advantages of low production cost, simple enzyme immobilization, high enzyme sensitivity and stability, and easy detection with POM, and may be useful for prescreening the glucose level in the human body. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. A two-phase investment model for optimal allocation of phasor measurement units considering transmission switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousavian, Seyedamirabbas; Valenzuela, Jorge; Wang, Jianhui

    2015-02-01

    Ensuring the reliability of an electrical power system requires a wide-area monitoring and full observability of the state variables. Phasor measurement units (PMUs) collect in real time synchronized phasors of voltages and currents which are used for the observability of the power grid. Due to the considerable cost of installing PMUs, it is not possible to equip all buses with PMUs. In this paper, we propose an integer linear programming model to determine the optimal PMU placement plan in two investment phases. In the first phase, PMUs are installed to achieve full observability of the power grid whereas additional PMUs are installed in the second phase to guarantee the N - 1 observability of the power grid. The proposed model also accounts for transmission switching and single contingencies such as failure of a PMU or a transmission line. Results are provided on several IEEE test systems which show that our proposed approach is a promising enhancement to the methods available for the optimal placement of PMUs.
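    The first-phase placement problem (full observability with the fewest PMUs) can be sketched by exhaustive search on a small system, using the standard rule that a PMU observes its own bus and all adjacent buses. This is a simplified stand-in for the paper's integer linear program, which additionally handles N - 1 contingencies and transmission switching:

```python
from itertools import combinations

def min_pmu_placement(n_bus, lines):
    """Exhaustive search for the smallest PMU set giving full
    observability: a PMU on a bus observes that bus and every bus
    connected to it by a line. Feasible only for small systems; a real
    study would solve the equivalent integer linear program."""
    adj = {b: {b} for b in range(n_bus)}
    for a, b in lines:
        adj[a].add(b)
        adj[b].add(a)
    for k in range(1, n_bus + 1):
        for subset in combinations(range(n_bus), k):
            observed = set().union(*(adj[b] for b in subset))
            if len(observed) == n_bus:
                return set(subset)
```

The second investment phase would rerun the search with each single PMU or line failure imposed, keeping the phase-one PMUs fixed.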

  15. Modeling Geomagnetically Induced Currents From Magnetometer Measurements: Spatial Scale Assessed With Reference Measurements

    NASA Astrophysics Data System (ADS)

    Butala, Mark D.; Kazerooni, Maryam; Makela, Jonathan J.; Kamalabadi, Farzad; Gannon, Jennifer L.; Zhu, Hao; Overbye, Thomas J.

    2017-10-01

    Solar-driven disturbances generate geomagnetically induced currents (GICs) that can result in power grid instability and, in the most extreme cases, even failure. Magnetometers provide direct measurements of the geomagnetic disturbance (GMD) effect on the surface magnetic field and GIC response can be determined from the power grid topology and engineering parameters. This paper considers this chain of models: transforming surface magnetic field disturbance to induced surface electric field through an electromagnetic transfer function and, then, induced surface electric field to GIC using the PowerWorld simulator to model a realistic power grid topology. Comparisons are made to transformer neutral current reference measurements provided by the American Transmission Company. Three GMD intervals are studied, with the Kp index reaching 8- on 2 October 2013, 7 on 1 June 2013, and 6- on 9 October 2013. Ultimately, modeled to measured GIC correlations are analyzed as a function of magnetometer to GIC sensor distance. Results indicate that modeling fidelity during the three studied GMD intervals is strongly dependent on both magnetometer to substation transformer baseline distance and GMD intensity.
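    The first link of the model chain, surface magnetic field disturbance to induced electric field through an electromagnetic transfer function, can be sketched in the frequency domain under a uniform half-space, plane-wave assumption. The resistivity value and the 1-D impedance are illustrative simplifications; the study's transfer functions and its PowerWorld grid model are more elaborate:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def surface_e_field(b, dt, rho=100.0):
    """Map a surface magnetic field time series b (tesla, sampled at dt
    seconds) to the induced electric field via the uniform half-space
    transfer function E(w) = Z(w) B(w) / mu0 with plane-wave impedance
    Z(w) = sqrt(i w mu0 rho), rho the ground resistivity in ohm-m."""
    n = len(b)
    B = np.fft.rfft(b)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
    Z = np.sqrt(1j * w * MU0 * rho)
    return np.fft.irfft(Z * B / MU0, n)
```

The second link is linear: each transformer's GIC is a fixed combination of the northward and eastward E-field components, with coefficients set by the grid topology and engineering parameters.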

  16. Real-time diagnostics of the reusable rocket engine using on-line system identification

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1990-01-01

    A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
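    The identification-based idea, reducing detection and isolation to comparing parameter values before and after a failure, can be sketched with a scalar recursive least-squares (RLS) tracker. The model form, forgetting factor, and fault values below are illustrative, not the SSME fault model:

```python
import numpy as np

def rls(u, y, lam=0.98):
    """Recursive least squares with exponential forgetting, tracking a
    scalar gain theta in y = theta * u. On-line identification: a
    sustained shift of the tracked gain away from its nominal value
    flags a fault and its estimated extent."""
    theta, P = 0.0, 1e6  # large initial covariance: uninformative prior
    est = []
    for ui, yi in zip(u, y):
        k = P * ui / (lam + ui * P * ui)   # gain
        theta += k * (yi - theta * ui)     # innovation update
        P = (P - k * ui * P) / lam         # covariance update
        est.append(theta)
    return np.array(est)
```

In a diagnosis setting, the nominal gain comes from healthy operation and the deviation of the post-fault estimate measures the extent of the degradation.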

  17. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
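    The key geometric idea, search-sector centers spaced a fixed physical distance apart rather than a fixed number of grid points apart, can be sketched as follows. The spacing rule and polar-cap handling are a simplified reading of the approach, not the published algorithm:

```python
import math

def stride_search_centers(dx_km, radius_km=6371.0):
    """Place search-sector centers roughly dx_km apart on the sphere.
    The longitudinal stride widens as 1/cos(latitude) so sectors keep a
    fixed physical size; rows within half a stride of a pole collapse to
    a single center, which is how a physical-distance stride avoids the
    polar failure of lat-lon grid-point searches."""
    centers = []
    dlat = math.degrees(dx_km / radius_km)  # latitudinal stride, degrees
    lat = -90.0
    while lat <= 90.0:
        if abs(lat) >= 90.0 - 0.5 * dlat:
            centers.append((max(min(lat, 90.0), -90.0), 0.0))
        else:
            dlon = dlat / math.cos(math.radians(lat))
            lon = 0.0
            while lon < 360.0:
                centers.append((lat, lon))
                lon += dlon
        lat += dlat
    return centers
```

Each center would then anchor a fixed-radius spatial search for storm criteria, followed by the temporal correlation step.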

  18. Coordinated learning of grid cell and place cell spatial and temporal properties: multiple scales, attention and oscillations.

    PubMed

    Grossberg, Stephen; Pilly, Praveen K

    2014-02-05

    A neural model proposes how entorhinal grid cells and hippocampal place cells may develop as spatial categories in a hierarchy of self-organizing maps (SOMs). The model responds to realistic rat navigational trajectories by learning both grid cells with hexagonal grid firing fields of multiple spatial scales, and place cells with one or more firing fields, that match neurophysiological data about their development in juvenile rats. Both grid and place cells can develop by detecting, learning and remembering the most frequent and energetic co-occurrences of their inputs. The model's parsimonious properties include: similar ring attractor mechanisms process linear and angular path integration inputs that drive map learning; the same SOM mechanisms can learn grid cell and place cell receptive fields; and the learning of the dorsoventral organization of multiple spatial scale modules through medial entorhinal cortex to hippocampus (HC) may use mechanisms homologous to those for temporal learning through lateral entorhinal cortex to HC ('neural relativity'). The model clarifies how top-down HC-to-entorhinal attentional mechanisms may stabilize map learning, simulates how hippocampal inactivation may disrupt grid cells, and explains data about theta, beta and gamma oscillations. The article also compares the three main types of grid cell models in the light of recent data.

  19. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. 
These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation, or forcing functions to attract/repel points in an elliptic system, or to trigger local refinement, based upon application of an equidistribution principle. The popularity of solution-adaptive techniques is growing in tandem with unstructured methods. The difficulty of precisely controlling mesh densities and orientations with current unstructured grid generation systems has driven the use of solution-adaptive meshing. Derivatives of density or pressure are widely used for construction of such weight functions and have proven very successful for inviscid flows with shocks. However, less success has been realized for flowfields with viscous layers, vortices, or shocks of disparate strength. It is difficult to maintain the appropriate mesh point spacing in the various regions which require a fine spacing for adequate resolution. Mesh points often migrate from important regions due to refinement of dominant features. An example of this is the well-known tendency of adaptive methods to increase the resolution of shocks in the flowfield around airfoils, but in the incorrect location due to inadequate resolution of the stagnation region. This problem has been the motivation for this research.
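    A minimal one-dimensional illustration of a gradient-based weight function driving point redistribution through the equidistribution principle; the weight form and its parameter are illustrative assumptions, not the report's multi-block feature detector:

```python
import numpy as np

def equidistribute(x, u, alpha=20.0):
    """Redistribute 1-D grid points under the weight w = 1 + alpha*|du/dx|
    (a simple gradient-based feature detector mimicking local truncation
    error) so that each cell carries an equal weight integral. The
    equidistribution principle clusters points where u varies rapidly."""
    w = 1.0 + alpha * np.abs(np.gradient(u, x))
    # Cumulative weight integral (trapezoid rule), then invert it at
    # equally spaced levels to obtain the new point locations.
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    levels = np.linspace(0.0, W[-1], len(x))
    return np.interp(levels, W, x)
```

The scaling and normalization issues raised above appear here in miniature: alpha sets how strongly the detected feature competes with the uniform background weight.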

  20. Enhanced Product Generation at NASA Data Centers Through Grid Technology

    NASA Technical Reports Server (NTRS)

    Barkstrom, Bruce R.; Hinke, Thomas H.; Gavali, Shradha; Seufzer, William J.

    2003-01-01

    This paper describes how grid technology can support the ability of NASA data centers to provide customized data products. A combination of grid technology and commodity processors are proposed to provide the bandwidth necessary to perform customized processing of data, with customized data subsetting providing the initial example. This customized subsetting engine can be used to support a new type of subsetting, called phenomena-based subsetting, where data is subsetted based on its association with some phenomena, such as mesoscale convective systems or hurricanes. This concept is expanded to allow the phenomena to be detected in one type of data, with the subsetting requirements transmitted to the subsetting engine to subset a different type of data. The subsetting requirements are generated by a data mining system and transmitted to the subsetter in the form of an XML feature index that describes the spatial and temporal extent of the phenomena. For this work, a grid-based mining system called the Grid Miner is used to identify the phenomena and generate the feature index. This paper discusses the value of grid technology in facilitating the development of a high performance customized product processing and the coupling of a grid mining system to support phenomena-based subsetting.
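    The XML feature index can be illustrated with a hypothetical schema; the paper specifies only that the index carries each phenomenon's spatial and temporal extent, so the element and attribute names below are invented for the sketch:

```python
import xml.etree.ElementTree as ET

# Hypothetical feature index as the Grid Miner might emit it: one entry
# per detected phenomenon, carrying its temporal and spatial extent.
FEATURE_INDEX = """<featureIndex>
  <phenomenon type="mesoscale_convective_system">
    <time start="2002-07-01T00:00" end="2002-07-01T06:00"/>
    <bbox latMin="25.0" latMax="35.0" lonMin="-95.0" lonMax="-80.0"/>
  </phenomenon>
</featureIndex>"""

def parse_feature_index(xml_text):
    """Read the spatial/temporal extents from a feature index so a
    subsetting engine can cut matching granules from a *different*
    data set (phenomena-based subsetting)."""
    out = []
    for ph in ET.fromstring(xml_text).iter("phenomenon"):
        t = ph.find("time").attrib
        bbox = {k: float(v) for k, v in ph.find("bbox").attrib.items()}
        out.append({"type": ph.attrib["type"],
                    "time": (t["start"], t["end"]),
                    "bbox": bbox})
    return out
```

The subsetter would intersect each granule's coverage with these extents and return only the overlapping portions.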

  1. Electron gas grid semiconductor radiation detectors

    DOEpatents

    Lee, Edwin Y.; James, Ralph B.

    2002-01-01

    An electron gas grid semiconductor radiation detector (EGGSRAD) useful for gamma-ray and x-ray spectrometers and imaging systems is described. The radiation detector employs doping of the semiconductor and variation of the semiconductor detector material to form a two-dimensional electron gas, and to allow transistor action within the detector. This radiation detector provides superior energy resolution and radiation detection sensitivity over the conventional semiconductor radiation detector and the "electron-only" semiconductor radiation detectors which utilize a grid electrode near the anode. In a first embodiment, the EGGSRAD incorporates delta-doped layers adjacent the anode which produce an internal free electron grid well to which an external grid electrode can be attached. In a second embodiment, a quantum well is formed between two of the delta-doped layers, and the quantum well forms the internal free electron gas grid to which an external grid electrode can be attached. Two other embodiments which are similar to the first and second embodiment involve a graded bandgap formed by changing the composition of the semiconductor material near the first and last of the delta-doped layers to increase or decrease the conduction band energy adjacent to the delta-doped layers.

  2. Co-Simulation Platform For Characterizing Cyber Attacks in Cyber Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadi, Mohammad A. H.; Ali, Mohammad Hassan; Dasgupta, Dipankar

    Smart grid is a complex cyber physical system containing a large number and variety of sources, devices, controllers and loads. The communication/information infrastructure is the backbone of the smart grid system, through which the different grid components are connected with each other. Therefore, the drawbacks of information-technology-related issues are also becoming a part of the smart grid. Further, the smart grid is also vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink and OPNET based co-simulated test bed to carry out a cyber-intrusion in a cyber-network for modern power systems and smart grid. The effect of the cyber intrusion on the physical power system is also presented. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated test bed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber-attack in the cyber network. Different disturbance situations in the proposed test system are considered, and the results indicate the effectiveness of the proposed co-simulated scheme.

  3. Analysis and Reduction of Complex Networks Under Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger G

    2014-07-31

    This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations, 2) methodology and algorithms to characterize probability measures on graph structures with random flows. This is an important problem in characterizing random demand (encountered in smart grid) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov Chains (with ubiquitous relevance!). 3) methodology and algorithms for treating inequalities in uncertain systems. This is an important problem in the context of models for material failure and network flows under uncertainty where conditions of failure or flow are described in the form of inequalities between the state variables.

  4. Syndromic surveillance for health information system failures: a feasibility study

    PubMed Central

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-01-01

    Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
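    The control-chart step can be sketched with an EWMA chart over one of the monitored indices (e.g. total records created per interval). The chart type, parameters, and in-control baseline below are illustrative, not the paper's fitted time-series models:

```python
import numpy as np

def ewma_alarms(counts, baseline, lam=0.3, L=3.0):
    """EWMA control chart over a record-count index. `baseline` is the
    (mean, std) of the in-control count; an alarm fires whenever the
    smoothed statistic leaves the +/- L-sigma EWMA limits. Parameters
    lam and L are common textbook choices, used here for illustration."""
    mu, sigma = baseline
    z, alarms = mu, []
    for i, c in enumerate(counts):
        z = lam * c + (1 - lam) * z
        halfwidth = L * sigma * np.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
        if abs(z - mu) > halfwidth:
            alarms.append(i)
    return alarms
```

A sustained drop in created records, as in the simulated record-level data-loss failures, pushes the smoothed statistic below the lower limit within a few intervals.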

  5. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling detection not only of technical risks but also of risks related to human failure. Failure Mode and Effect Analysis (FMEA) can be applied, using categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are re-interpreted through this probabilistic modification. Using this probabilistic modification of FMEA, the frequency of occurrence of undetected failure modes can be estimated quantitatively, for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
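    The probabilistic scoring can be sketched directly: each mode's estimated relative frequency of occurrence and of detection replaces the categorical O and D scores, and the frequency of undetected failures multiplies out per mode and sums over a set of modes. The mode names and probabilities below are hypothetical:

```python
def undetected_failure_rate(modes):
    """Probabilistic FMEA scoring. `modes` maps a failure-mode name to
    (p_occurrence, p_detection), both estimated relative frequencies;
    severity remains a separate categorical score. Returns the per-mode
    and total frequency of failures that escape detection."""
    per_mode = {name: p_occ * (1.0 - p_det)
                for name, (p_occ, p_det) in modes.items()}
    return per_mode, sum(per_mode.values())
```

Unlike the RPN, which multiplies ordinal scores, this quantity has a direct interpretation as a frequency and is additive over failure modes.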

  6. Glucose biosensor based on GOx/HRP bienzyme at liquid-crystal/aqueous interface.

    PubMed

    Khan, Mashooq; Park, Soo-Young

    2015-11-01

    Glucose oxidase (GOx) and horseradish peroxidase (HRP) were co-immobilized to the polyacrylicacid block of a poly(acrylicacid-b-4-cyanobiphenyl-4'-undecylacrylate) (PAA-b-LCP) copolymer in water. PAA-b-LCP was strongly anchored by the LCP block in 4-cyano-4'-pentylbiphenyl (5CB), which was contained in a transmission electron microscope (TEM) grid for glucose detection. The optimal conditions for the performance of the TEM grid glucose biosensor were studied in terms of the activity and stability of the immobilized enzymes. Glucose in water was detected by the 5CB changing from a planar to a homeotropic orientation, as observed through a polarized optical microscope. The TEM biosensor detected glucose concentrations at ⩾0.02 mM, with an optimal GOx/HRP molar ratio of 3/1. This glucose biosensor offers enzyme sensitivity and stability, reusability, ease of use, and selective glucose detection, and may provide a new way of detecting glucose. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. DC-to-AC inverter ratio failure detector

    NASA Technical Reports Server (NTRS)

    Ebersole, T. J.; Andrews, R. E.

    1975-01-01

    The failure detection technique is based upon input-output ratios and is independent of inverter loading. Since the inverter has a fixed relationship between V-in/V-out and I-in/I-out, the failure detection criteria are based on this ratio, which is simply the inverter transformer turns ratio, K, equal to primary turns divided by secondary turns.
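    The ratio criterion can be sketched as a simple check: with turns ratio K = N_primary/N_secondary, a healthy transformer-coupled inverter keeps V_out/V_in near 1/K and I_out/I_in near K regardless of load. The tolerance below is illustrative:

```python
def ratio_fault(v_in, v_out, i_in, i_out, k, tol=0.05):
    """Ratio-based failure check for a DC-to-AC inverter. With
    transformer turns ratio k = primary turns / secondary turns, a
    healthy inverter keeps v_out/v_in ~ 1/k and i_out/i_in ~ k
    independent of loading; a relative deviation beyond `tol` on either
    ratio flags a failure (tolerance chosen for illustration)."""
    v_ok = abs(v_out / v_in - 1.0 / k) <= tol / k
    i_ok = abs(i_out / i_in - k) <= tol * k
    return not (v_ok and i_ok)
```

Because both ratios are checked against the same constant K, the detector needs no load model.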

  8. Impaired face detection may explain some but not all cases of developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Duchaine, Brad

    2016-05-01

    Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e., abnormally slow or failing to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account, suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.

  9. Online production validation in a HEP environment

    NASA Astrophysics Data System (ADS)

    Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.

    2017-03-01

    In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient simulation production and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.

  10. Systematic detection of long-term slow slip events along Hyuga-nada to central Shikoku, Nankai subduction zone, using GNSS data

    NASA Astrophysics Data System (ADS)

    Takagi, R.; Obara, K.; Uchida, N.

    2017-12-01

    Understanding slow earthquake activity improves our knowledge of slip behavior in the brittle-ductile transition zone and of subduction processes, including megathrust earthquakes. To understand the overall picture of slow slip activity, it is important to build a comprehensive catalog of slow slip events (SSEs). Although short-term SSEs have been detected systematically from GNSS and tiltmeter records, analysis of long-term SSEs has relied on individual slip inversions. We develop an algorithm to systematically detect long-term SSEs and estimate their source parameters using GNSS data. The algorithm is similar to GRiD-MT (Tsuruoka et al., 2009), a grid-based automatic determination of moment tensor solutions. Instead of fitting moment tensors to long-period seismic records, we estimate the parameters of a single rectangular fault to fit GNSS displacement time series. First, we make a two-dimensional grid covering the possible locations of SSEs. Second, we estimate the best-fit parameters (length, width, slip, and rake) of the rectangular fault at each grid point by an iterative damped least-squares method; depth, strike, and dip are fixed on the plate boundary, and a ramp function with a duration of 300 days expresses the time evolution of the fault slip. Third, the grid point maximizing variance reduction is selected as a candidate long-term SSE; the onset of the ramp function is also found by grid search. We applied the method to GNSS data in southwest Japan to detect long-term SSEs in the Nankai subduction zone. With the current selection criteria, we found 13 events with Mw 6.2-6.9 in Hyuga-nada, the Bungo channel, and central Shikoku from 1998 to 2015, including previously unreported events. A key finding is the along-strike migration of long-term SSEs from Hyuga-nada to the Bungo channel and from the Bungo channel to central Shikoku. In particular, three successive events migrating northward in Hyuga-nada preceded the 2003 Bungo channel SSE, and one event in central Shikoku followed it. The space-time dimensions of the possible along-strike migration are about 300 km in length and 6 years in time. Systematic detection with various assumed durations for the time evolution of SSEs may further improve the picture of SSE activity and of possible interactions with neighboring SSEs.
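
    The detection scheme above can be miniaturized into a one-dimensional sketch: fit a ramp of fixed duration to a displacement time series by least squares at each candidate onset, and keep the onset that maximizes variance reduction. The ramp shape, synthetic data, and search grid below are illustrative assumptions, not the paper's actual configuration (which also searches over space and fits fault geometry).

```python
def ramp(t, onset, duration):
    """0 before onset, linear rise over `duration`, 1 afterwards."""
    return min(max((t - onset) / duration, 0.0), 1.0)

def variance_reduction(data, onset, duration):
    """Closed-form least-squares amplitude fit of the ramp,
    then VR = 1 - SSR/SST; returns (VR, amplitude)."""
    r = [ramp(t, onset, duration) for t in range(len(data))]
    denom = sum(x * x for x in r)
    amp = sum(d * x for d, x in zip(data, r)) / denom if denom else 0.0
    ssr = sum((d - amp * x) ** 2 for d, x in zip(data, r))
    sst = sum(d * d for d in data)
    return 1.0 - ssr / sst, amp

# Synthetic noiseless displacement: a ramp starting at t=40, duration 30.
data = [5.0 * ramp(t, 40, 30) for t in range(100)]

# Grid search over candidate onsets (the real method adds a spatial loop).
best = max(range(0, 70, 10), key=lambda t0: variance_reduction(data, t0, 30)[0])
print(best)  # prints 40: the true onset gives the highest variance reduction
```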

  11. Evidence for Feature and Location Learning in Human Visual Perceptual Learning

    ERIC Educational Resources Information Center

    Moreno-Fernández, María Manuela; Salleh, Nurizzati Mohd; Prados, Jose

    2015-01-01

    In Experiment 1, human participants were pre-exposed to two similar checkerboard grids (AX and X) in alternation, and to a third grid (BX) in a separate block of trials. In a subsequent test, the unique feature A was better detected than the feature B when they were presented in the same location during the pre-exposure and test phases. However,…

  12. Cyber-Physical System Security of a Power Grid: State-of-the-Art

    DOE PAGES

    Sun, Chih -Che; Liu, Chen -Ching; Xie, Jing

    2016-07-14

    Here, as part of the smart grid development, more and more technologies are developed and deployed on the power grid to enhance the system reliability. A primary purpose of the smart grid is to significantly increase the capability of computer-based remote control and automation. As a result, the level of connectivity has become much higher, and cyber security also becomes a potential threat to the cyber-physical systems (CPSs). In this paper, a survey of the state-of-the-art is conducted on the cyber security of the power grid concerning issues of: the structure of CPSs in a smart grid; cyber vulnerability assessment; cyber protection systems; and testbeds of a CPS. At Washington State University (WSU), the Smart City Testbed (SCT) has been developed to provide a platform to test, analyze and validate defense mechanisms against potential cyber intrusions. A test case is provided in this paper to demonstrate how a testbed helps the study of cyber security and the anomaly detection system (ADS) for substations.

  13. Cyber-Physical System Security of a Power Grid: State-of-the-Art

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Chih -Che; Liu, Chen -Ching; Xie, Jing

    Here, as part of the smart grid development, more and more technologies are developed and deployed on the power grid to enhance the system reliability. A primary purpose of the smart grid is to significantly increase the capability of computer-based remote control and automation. As a result, the level of connectivity has become much higher, and cyber security also becomes a potential threat to the cyber-physical systems (CPSs). In this paper, a survey of the state-of-the-art is conducted on the cyber security of the power grid concerning issues of: the structure of CPSs in a smart grid; cyber vulnerability assessment; cyber protection systems; and testbeds of a CPS. At Washington State University (WSU), the Smart City Testbed (SCT) has been developed to provide a platform to test, analyze and validate defense mechanisms against potential cyber intrusions. A test case is provided in this paper to demonstrate how a testbed helps the study of cyber security and the anomaly detection system (ADS) for substations.

  14. A dual-mode generalized likelihood ratio approach to self-reorganizing digital flight control system design

    NASA Technical Reports Server (NTRS)

    Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.

    1975-01-01

    Research on the problems of failure detection and reliable system design for digital aircraft control systems is reported. Failure modes, cross-detection probability, wrong-time detection, application of performance tools, and the GLR computer package are discussed.

  15. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or combinations of the two. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of the voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability in a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of an H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would additionally shorten the acquisition time by a factor of 8.

  16. Evaluation of a fault tolerant system for an integrated avionics sensor configuration with TSRV flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1985-01-01

    The performance analysis results of a fault inferring nonlinear detection system (FINDS) using sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment are presented. First, a statistical analysis of the flight-recorded sensor data was made in order to determine the characteristics of sensor inaccuracies. Next, modifications were made to the detection and decision functions in the FINDS algorithm in order to improve false alarm and failure detection performance under the real modelling errors present in the flight data. Finally, the failure detection and false alarm performance of the FINDS algorithm were analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five-minute flight data. In general, the detection speed, failure level estimation, and false alarm performance showed a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed was faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers.

  17. Effects of pressure angle and tip relief on the life of speed increasing gearbox: a case study.

    PubMed

    Shanmugasundaram, Sankar; Kumaresan, Manivarma; Muthusamy, Nataraj

    2014-01-01

    This paper examines failure of a helical gear in the speed-increasing gearbox used in a wind turbine generator (WTG). In addition, an attempt has been made to find suitable gear micro-geometry, such as pressure angle and tip relief, to minimize gear failure in wind turbines. As the gear trains in the wind turbine gearbox are arranged with a high speed ratio and the gearboxes experience shock loads due to atmospheric turbulence, gust wind speeds, non-synchronization of pitching, frequent grid drops and braking failures, gear failure occurs either in the intermediate or the high-speed stage pinion. KISSsoft gear calculation software was used to determine the gear specifications, and analysis was carried out in ANSYS version 11.0 for the existing and the proposed gears to evaluate bending stress, tooth deflection and stiffness. The main objective of this research study is to propose suitable gear micro-geometry, namely a tip relief and pressure angle combination, for increasing the tooth strength of the helical gear used in the wind turbine for trouble-free operation.

  18. 22nd Annual Logistics Conference and Exhibition

    DTIC Science & Technology

    2006-04-20

    Slide-deck excerpt: "Prognostics & Health Management at GE" (Dr. Piero P. Bonissone, Industrial AI Lab, GE Global Research), covering detection-model selection, anomaly detection results, failure mode histograms, anomaly detection from event-log data, and diagnostics/prognostics for failure monitoring and assessment in tactical C4ISR sense-and-respond applications.

  19. A Review of Transmission Diagnostics Research at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Zakajsek, James J.

    1994-01-01

    This paper presents a summary of the transmission diagnostics research work conducted at NASA Lewis Research Center over the last four years. In 1990, the Transmission Health and Usage Monitoring Research Team at NASA Lewis conducted a survey to determine the critical needs of the diagnostics community. Survey results indicated that experimental verification of gear and bearing fault detection methods, improved fault detection in planetary systems, and damage magnitude assessment and prognostics research were all critical to a highly reliable health and usage monitoring system. In response to this, a variety of transmission fault detection methods were applied to experimentally obtained fatigue data. Failure modes of the fatigue data include a variety of gear pitting failures, tooth wear, tooth fracture, and bearing spalling failures. Overall results indicate that, of the gear fault detection techniques, no one method can successfully detect all possible failure modes. The more successful methods need to be integrated into a single more reliable detection technique. A recently developed method, NA4, in addition to being one of the more successful gear fault detection methods, was also found to exhibit damage magnitude estimation capabilities.

  20. Performance analysis of a fault inferring nonlinear detection system algorithm with integrated avionics flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1985-01-01

    This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.

  1. Deep Learning-Based Data Forgery Detection in Automatic Generation Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fengli; Li, Qinghua

    Automatic Generation Control (AGC) is a key control system in the power grid. It is used to calculate the Area Control Error (ACE) based on frequency and tie-line power flow between balancing areas, and then adjust power generation to maintain the power system frequency in an acceptable range. However, attackers might inject malicious frequency or tie-line power flow measurements to mislead AGC into false generation corrections that harm power grid operation. Such attacks are hard to detect since they do not violate physical power system models. In this work, we propose algorithms based on Neural Networks and the Fourier Transform to detect data forgery attacks in AGC. Different from the few previous works that rely on accurate load prediction to detect data forgery, our solution only uses the ACE data already available in existing AGC systems. In particular, our solution learns the normal patterns of ACE time series and detects abnormal patterns caused by artificial attacks. Evaluations on the real ACE dataset show that our methods have high detection accuracy.
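
    As a toy illustration of the Fourier-based idea — not the paper's trained detector — one can flag ACE windows whose high-frequency spectral energy greatly exceeds that of normal operation. The window length, injected disturbance, and threshold factor below are all invented:

```python
import math

def dft_magnitudes(window):
    """Naive discrete Fourier transform magnitudes (fine for short windows)."""
    n = len(window)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(-2 * math.pi * k * t / n) for t, x in enumerate(window))
        im = sum(x * math.sin(-2 * math.pi * k * t / n) for t, x in enumerate(window))
        mags.append(math.hypot(re, im))
    return mags

def hf_energy(window):
    """Energy in the top half of the retained spectrum (k >= n/4)."""
    mags = dft_magnitudes(window)
    return sum(m * m for m in mags[len(mags) // 2:])

def is_anomalous(window, baseline_energy, factor=5.0):
    """Flag windows whose high-frequency energy far exceeds the baseline."""
    return hf_energy(window) > factor * baseline_energy + 1e-9

# Normal ACE: slow oscillation; attack: an injected step disturbance.
normal = [math.sin(2 * math.pi * t / 32) for t in range(32)]
attacked = [x + (1.5 if 10 <= t < 16 else 0.0) for t, x in enumerate(normal)]
baseline = hf_energy(normal)
print(is_anomalous(attacked, baseline))  # prints True
print(is_anomalous(normal, baseline))    # prints False
```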

  2. Prevention of Unwanted Free-Declaration of Static Obstacles in Probability Occupancy Grids

    NASA Astrophysics Data System (ADS)

    Krause, Stefan; Scholz, M.; Hohmann, R.

    2017-10-01

    Obstacle detection and avoidance are major research fields in unmanned aviation. Map-based obstacle detection approaches often use discrete world representations such as probabilistic grid maps to fuse incremental environment data from different views or sensors into a comprehensive representation. The integration of continuous measurements into a discrete representation can result in rounding errors which, in turn, lead to differences between the artificial model and the real environment. The cause of these deviations is a spatial resolution of the world representation that is low in comparison to the sensor data used. Differences between artificial representations used for path planning or obstacle avoidance and the real world can lead to unexpected behavior, up to collisions with unmapped obstacles. This paper presents three approaches for treating the errors that can occur during the integration of continuous laser measurements into the discrete probabilistic grid. Further, the quality of the error prevention and the processing performance are compared using real sensor data.
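
    The rounding problem can be made concrete with a minimal sketch: a continuous laser hit snapped to a single cell can "free" a neighbouring cell that the obstacle actually touches, while a conservative update marks every cell the measurement footprint overlaps. The cell size and footprint radius are invented example values, not the paper's approaches:

```python
CELL = 0.5  # grid resolution in metres (illustrative)

def hit_cell(x, y):
    """Naive integration: round the hit to a single cell index."""
    return (int(x // CELL), int(y // CELL))

def conservative_cells(x, y, radius):
    """All cells overlapped by a disc of `radius` around the hit
    (e.g. sensor noise / beam footprint)."""
    cells = set()
    for cx in range(int((x - radius) // CELL), int((x + radius) // CELL) + 1):
        for cy in range(int((y - radius) // CELL), int((y + radius) // CELL) + 1):
            cells.add((cx, cy))
    return cells

# A hit 2 cm from a cell border: the naive update occupies one cell only,
# while the conservative update also covers the neighbouring cell.
print(hit_cell(0.98, 0.25))                            # prints (1, 0)
print(sorted(conservative_cells(0.98, 0.25, 0.05)))    # prints [(1, 0), (2, 0)]
```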

  3. A High Order Finite Difference Scheme with Sharp Shock Resolution for the Euler Equations

    NASA Technical Reports Server (NTRS)

    Gerritsen, Margot; Olsson, Pelle

    1996-01-01

    We derive a high-order finite difference scheme for the Euler equations that satisfies a semi-discrete energy estimate, and present an efficient strategy for the treatment of discontinuities that leads to sharp shock resolution. The formulation of the semi-discrete energy estimate is based on a symmetrization of the Euler equations that preserves the homogeneity of the flux vector, a canonical splitting of the flux derivative vector, and the use of difference operators that satisfy a discrete analogue to the integration by parts procedure used in the continuous energy estimate. Around discontinuities or sharp gradients, refined grids are created on which the discrete equations are solved after adding a newly constructed artificial viscosity. The positioning of the sub-grids and computation of the viscosity are aided by a detection algorithm which is based on a multi-scale wavelet analysis of the pressure grid function. The wavelet theory provides easy to implement mathematical criteria to detect discontinuities, sharp gradients and spurious oscillations quickly and efficiently.

  4. Introduction

    NASA Astrophysics Data System (ADS)

    Dum, Ralph

    Various types of networks — communication networks, transport networks, global business networks, networks of friends, or the Internet — shape our daily life and the way we think and act. We depend on various social, economic, and technological networks that weave a tissue of businesses, governments, and technologies and that contain us as citizens, users, or customers. We only become aware of our dependence when failures occur in these networks: when cities are plunged into darkness because of a breakdown of the power grid, as happened recently in New York; when national economies collapse because of a failure of global financial systems, as happened in the South-Asian banking crisis; or when computer viruses spread with mind-boggling speed over information networks, destroying or, even worse, exposing sensitive data.

  5. Update of the NEXT Ion Thruster Service Life Assessment with Post-Test Correlation to the Long Duration Test

    NASA Technical Reports Server (NTRS)

    Yim, John T.; Soulas, George C.; Shastry, Rohit; Choi, Maria; Mackey, Jonathan A.; Sarver-Verhey, Timothy R.

    2017-01-01

    The service life assessment for NASA's Evolutionary Xenon Thruster is updated to incorporate the results from the successful and voluntarily early completion of the 51,184 hour long duration test which demonstrated 918 kg of total xenon throughput. The results of the numerous post-test investigations including destructive interrogations have been assessed against all of the critical known and suspected failure mechanisms to update the life and throughput expectations for each major component. Analysis results of two of the most acute failure mechanisms, namely pit-and-groove erosion and aperture enlargement of the accelerator grid, are not updated in this work but will be published at a future time after analysis completion.

  6. Convergence issues in domain decomposition parallel computation of hovering rotor

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong

    2018-05-01

    The implicit LU-SGS time integration algorithm has been widely used in parallel computation in spite of its lack of information from adjacent domains. When applied to the parallel computation of hovering rotor flows in a rotating frame, it brings about convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms; it shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate along with the rotation, and eventually lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which is desirable in domain decomposition parallel computations.

  7. Fracture Behaviors of Sn-Cu Intermetallic Compound Layer in Ball Grid Array Induced by Thermal Shock

    NASA Astrophysics Data System (ADS)

    Shen, Jun; Zhai, Dajun; Cao, Zhongming; Zhao, Mali; Pu, Yayun

    2014-02-01

    In this work, thermal shock reliability testing and finite-element analysis (FEA) of solder joints between ball grid array components and printed circuit boards with Cu pads were used to investigate the failure mechanism of solder interconnections. The morphologies, composition, and thickness of Sn-Cu intermetallic compounds (IMC) at the interface of Sn-3.0Ag-0.5Cu lead-free solder alloy and Cu substrates were investigated by scanning electron microscopy and transmission electron microscopy. Based on the experimental observations and FEA results, it can be recognized that the origin and propagation of cracks are caused primarily by the difference between the coefficient of thermal expansion of different parts of the packaged products, the growth behaviors and roughness of the IMC layer, and the grain size of the solder balls.
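
    The coefficient-of-thermal-expansion (CTE) mismatch mechanism can be illustrated with a back-of-the-envelope estimate: the in-plane displacement mismatch at a solder ball a distance L from the package's neutral point is roughly d = |α₁ − α₂| · ΔT · L. The material values below are typical textbook numbers, not measurements from this study:

```python
def cte_mismatch_displacement(alpha_a_ppm, alpha_b_ppm, delta_t, dist_mm):
    """Displacement mismatch in micrometres between two joined materials
    with CTEs in ppm/degC, for a temperature swing delta_t (degC) at a
    point dist_mm from the neutral point."""
    return abs(alpha_a_ppm - alpha_b_ppm) * 1e-6 * delta_t * dist_mm * 1000.0

# FR-4 PCB ~17 ppm/degC vs. BGA package ~7 ppm/degC, 100 degC thermal
# shock swing, ball 10 mm from the neutral point:
print(round(cte_mismatch_displacement(17, 7, 100, 10), 2))  # prints 10.0 (micrometres)
```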

  8. Technical, economic and legal aspects of wind energy utilization

    NASA Astrophysics Data System (ADS)

    Obermair, G. M.; Jarass, L.

    Potentially problematic areas of the implementation of wind turbines for electricity production in West Germany are identified and briefly discussed. Variations in wind generator output due to source variability may cause power regulation difficulties in the grid and also raise uncertainties in utility capacity planning for new construction. Catastrophic machine component failures, such as a thrown blade, are hazardous to life and property, while lulls in the resource can strain power regulation capabilities once grid penetration has reached significant levels. Economically, the lack of actual data from large-scale wind projects is cited as a barrier to accurate cost comparisons of wind-derived power relative to other generating sources, although breakeven costs for wind power have been found to be $2000/kW installed capacity, i.e., a marginal cost of $0.10/kWh.

  9. Triplexer Monitor Design for Failure Detection in FTTH System

    NASA Astrophysics Data System (ADS)

    Fu, Minglei; Le, Zichun; Hu, Jinhua; Fei, Xia

    2012-09-01

    The triplexer is one of the key components in FTTH systems, which employ an analog overlay channel for video broadcasting in addition to bidirectional digital transmission. To enhance the survivability of the triplexer as well as the robustness of the FTTH system, a multi-port device named the triplexer monitor was designed and realized, by which failures at triplexer ports can be detected and localized. The triplexer monitor was composed of integrated circuits, and its four input ports were connected to beam splitters whose power division ratio was 95∶5. By detecting the sampled optical signal from the beam splitters, the triplexer monitor tracked the status of the four ports of the triplexer (i.e., the 1310 nm, 1490 nm, 1550 nm and com ports). In this paper, the operation scenario of the triplexer monitor with external optical devices is addressed, and the integrated circuit structure of the triplexer monitor is given. Furthermore, a failure localization algorithm is proposed, based on a state transition diagram. In order to measure the failure detection and localization times for different failed ports, an experimental test-bed was built. Experimental results showed that the detection time by the triplexer monitor was less than 8.20 ms for a failure at the 1310 nm, 1490 nm or 1550 nm port, and less than 7.20 ms for a failure at the com port.
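
    A minimal sketch of port-level failure localization from the tapped 5% arms of the beam splitters; the power threshold and the blame-com-first rule are assumptions for illustration, not the paper's actual state-transition diagram:

```python
THRESHOLD_MW = 0.01  # assumed minimum tapped power for a healthy port

def localize_failure(power_mw):
    """power_mw: tapped readings for the 1310/1490/1550 nm and com ports.
    If the com port is dark, every branch appears dark too, so the com
    port is blamed first; otherwise each dark branch port is reported."""
    if power_mw["com"] < THRESHOLD_MW:
        return ["com"]
    return [port for port in ("1310", "1490", "1550")
            if power_mw[port] < THRESHOLD_MW]

print(localize_failure({"1310": 0.2, "1490": 0.0, "1550": 0.3, "com": 0.5}))
# prints ['1490']
print(localize_failure({"1310": 0.0, "1490": 0.0, "1550": 0.0, "com": 0.0}))
# prints ['com']
```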

  10. Detection of Failure in Asynchronous Motor Using Soft Computing Method

    NASA Astrophysics Data System (ADS)

    Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.

    2018-04-01

    This paper investigates stator short-winding failures of an asynchronous motor and their effects on motor current spectra. A fuzzy logic approach, i.e., a model-based technique, can help detect asynchronous motor failures; fuzzy logic resembles human reasoning in that it enables inferences to be drawn linguistically from vague data. A dynamic model is developed for the asynchronous motor with a fuzzy logic classifier to investigate stator inter-turn failure and open-phase failure. A hardware implementation was carried out with LabVIEW for the online monitoring of faults.

  11. A Solution Adaptive Technique Using Tetrahedral Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2000-01-01

    An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.

  12. Metallic wire grid behavior and testing in a low pressure gaseous noble elements detector

    NASA Astrophysics Data System (ADS)

    Ji, W.

    2018-05-01

    High voltage performance has been a challenge for noble element detectors. One piece of this challenge is the emission of electrons from metal electrodes when high voltage is applied. This has become a major concern for low-background detectors such as LUX-ZEPLIN (LZ). LZ is a liquid xenon Time Projection Chamber (TPC) searching for Weakly Interacting Massive Particles (WIMPs). In this work, we demonstrate a method to measure electron emission from metallic electrode grids via detection of proportional scintillation light. We find consistency with Fowler-Nordheim emission with a surface parameter β = 1988 after electro-polishing treatment of a stainless steel grid.

  13. Measurement of neutron dose equivalent outside and inside of the treatment vault of GRID therapy.

    PubMed

    Wang, Xudong; Charlton, Michael A; Esquivel, Carlos; Eng, Tony Y; Li, Ying; Papanikolaou, Nikos

    2013-09-01

    To evaluate the neutron and photon dose equivalent rates at the treatment vault entrance (Hn,D and HG), and to study the secondary radiation to the patient in GRID therapy; the radiation activation of the grid was also studied. A Varian Clinac 23EX accelerator was operated in 18 MV mode with a grid manufactured by .decimal, Inc. The Hn,D and HG were measured using an Andersson-Braun neutron REM meter and a Geiger-Müller counter. The radiation activation of the grid was measured after irradiation with an ion chamber γ-ray survey meter. The secondary radiation dose equivalent to the patient was evaluated by etched track detectors and OSL detectors on a RANDO(®) phantom. Within the measurement uncertainty, there is no significant difference between the Hn,D and HG with and without a grid. However, the neutron dose equivalent to the patient with the grid is, on average, 35.3% lower than that without the grid when using the same field size and the same number of monitor units. The photon dose equivalent to the patient with the grid is, on average, 44.9% lower. The measured average half-life of the radiation activation in the grid is 12.0 (± 0.9) min. The activation can be categorized into a fast decay component and a slow decay component with half-lives of 3.4 (± 1.6) min and 15.3 (± 4.0) min, respectively. No detectable radioactive contamination was found on the surface of the grid in a wipe test. This work indicates that there is no significant change in the Hn,D and HG in GRID therapy compared with conventional external beam therapy. However, the neutron and scattered photon dose equivalents to the patient decrease dramatically with the grid and can be clinically irrelevant. Meanwhile, users of a grid should be aware of the possible high dose to radiation workers from the radiation activation on the surface of the grid. A delay in handling the grid after beam delivery is suggested.
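
    The reported two-component activation decay lends itself to a back-of-the-envelope handling-delay estimate. The half-lives are those reported above; the 50/50 split between fast and slow components is an assumption for illustration only:

```python
T_FAST, T_SLOW = 3.4, 15.3  # reported half-lives in minutes

def activity_fraction(t, fast_share=0.5):
    """Remaining fraction of the initial grid activation after t minutes,
    modelled as two independent exponential decay components."""
    slow_share = 1.0 - fast_share
    return (fast_share * 2.0 ** (-t / T_FAST)
            + slow_share * 2.0 ** (-t / T_SLOW))

def delay_for(target_fraction, fast_share=0.5):
    """Smallest whole-minute delay bringing activity below the target."""
    t = 0
    while activity_fraction(t, fast_share) > target_fraction:
        t += 1
    return t

print(delay_for(0.5))  # minutes until half the activation remains
print(delay_for(0.1))  # minutes until a tenth remains
```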

  14. A novel strategy for rapid detection of NT-proBNP

    NASA Astrophysics Data System (ADS)

    Cui, Qiyao; Sun, Honghao; Zhu, Hui

    2017-09-01

    To establish a simple, rapid, sensitive, and specific quantitative assay for detecting biomarkers of heart failure, this study combined biotin-streptavidin technology with a fluorescence immunochromatographic assay to measure biomarker concentrations in serum. The method was applied to NT-proBNP, which is valuable for the diagnostic evaluation of heart failure.

  15. Scaling effects on spring phenology detections from MODIS data at multiple spatial resolutions over the contiguous United States

    NASA Astrophysics Data System (ADS)

    Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin

    2017-10-01

    Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected the enhanced vegetation index (EVI, 250 m) from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple-spatial-resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of the spring season (SOS) at the various spatial resolutions. Further, the SOS variation across scales was examined at each coarse-resolution grid (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation within each reference grid. The results revealed the influence of satellite spatial resolution on SOS retrievals and the related impact factors. Specifically, SOS varied significantly, either linearly or logarithmically, across scales, although the relationship could be either positive or negative. The overall SOS values averaged over spatial resolutions between 250 m and 35 × 250 m at large ecosystem regions were generally similar, with a difference of less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in northeastern than in southern and western regions. The SOS scaling effect was significantly associated with the heterogeneity of vegetation properties characterized by landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation, with the latter being the most important impact factor.
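
    The coarsening step described above (250 m data aggregated to k × 250 m) can be illustrated with a block-averaging sketch; the array values, the block size, and the function name are hypothetical, not the paper's procedure:

```python
import numpy as np

def resample(evi, k):
    """Aggregate a 2-D EVI array from native resolution to k x k blocks
    by block averaging (toy stand-in for 2x .. 35x resampling)."""
    h, w = evi.shape
    h2, w2 = h - h % k, w - w % k            # drop edge rows/cols that don't fit
    return evi[:h2, :w2].reshape(h2 // k, k, w2 // k, k).mean(axis=(1, 3))

evi = np.arange(36.0).reshape(6, 6)          # synthetic 6x6 native-resolution grid
coarse = resample(evi, 3)                    # 2x2 grid of 3x3 block means
print(coarse.tolist())                       # → [[7.0, 10.0], [25.0, 28.0]]
```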

  16. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid

    PubMed Central

    Byambasuren, Bat-erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-01-01

    Smart sensing and power line tracking are very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot, so accurate detection methods for illegal electricity usage are needed, and stable, correct power line tracking is a prominent issue. To track correctly and make accurate measurements, the swing path of a power line should first be fitted and predicted by a mathematical function using the inspection robot; the robot can then follow the power line and measure the current remotely. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274

  17. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid.

    PubMed

    Byambasuren, Bat-Erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-02-19

    Smart sensing and power line tracking are very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot, so accurate detection methods for illegal electricity usage are needed, and stable, correct power line tracking is a prominent issue. To track correctly and make accurate measurements, the swing path of a power line should first be fitted and predicted by a mathematical function using the inspection robot; the robot can then follow the power line and measure the current remotely. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results.
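
    The parabolic-fitting step can be sketched as an ordinary least-squares polynomial fit; the sampled swing positions below are synthetic, not the paper's measurements:

```python
import numpy as np

# Synthetic (x, y) samples of a swinging power line's path
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 0.05 * (x - 2.0) ** 2 + 1.0        # parabola the samples lie on

coeffs = np.polyfit(x, y, 2)           # fit y = a*x^2 + b*x + c
predicted = np.polyval(coeffs, 2.5)    # predict the line's position ahead
print(round(float(predicted), 4))      # → 1.0125
```

    With the swing path fitted, the robot can evaluate the polynomial ahead of its current position to stay on the line while measuring current.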

  18. Sensor failure detection for jet engines

    NASA Technical Reports Server (NTRS)

    Beattie, E. C.; Laprad, R. F.; Akhter, M. M.; Rock, S. M.

    1983-01-01

    Revisions to the advanced sensor failure detection, isolation, and accommodation (DIA) algorithm, developed under the sensor failure detection system program, were studied to eliminate the steady-state errors due to estimation filter biases. Three algorithm revisions were formulated and one was chosen for detailed evaluation. The selected version modifies the DIA algorithm to feed back the actual sensor outputs to the integral portion of the control for the no-failure case. In case of a failure, the estimate of the failed sensor's output is fed back to the integral portion. The estimator outputs are fed back to the linear regulator portion of the control at all times. The revised algorithm is evaluated and compared to the baseline algorithm developed previously.

  19. DESPIC: Detecting Early Signatures of Persuasion in Information Cascades

    DTIC Science & Technology

    2015-08-27

    Reported work includes a paper, "... over NoSQL Databases," in the Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), Chicago, IL, USA, 26 May 2014. Using distributed NoSQL databases, including HBase and Riak, the requirements of the optimal computational architecture to support the framework were finalized.

  20. Development of a Whole Container Seal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn, Michael J; Pickett, Chris A; Stinson, Brad J

    This paper outlines a technique for utilizing electrically conductive textiles as a whole container seal. This method has the potential to provide more robustness for ensuring that the container has not been breached versus conventional sealing methods that only provide tamper indication at the area used for normal access. The conductive textile is used as a distributed sensor for detecting and localizing container tamper or breach. For sealing purposes, the conductive fabric represents a bounded, near-infinite grid of resistors. The well-known infinite resistance grid problem was used to model and confirm the expected accuracy and validity of this approach. An experimental setup was built that uses a multiplexed Wheatstone bridge measurement to determine the resistances of a coarse electrode grid across the conductive fabric. Non-uniform resistance values of the grid infer the presence of damage or tears in the fabric. Results suggest accuracy proportional to the electrode spacing in determining the presence and location of disturbances in conductive fabric samples. Current work is focused on constructing experimental prototypes for field and environmental testing to gauge the performance of these whole container seals in real-world conditions. We are also developing software and hardware to interface with the whole container seals. The latest prototypes are expected to provide more accuracy in detecting and localizing events, although detection of a penetration should be adequate for most sealing applications. We are also developing smart sensing nodes that integrate digital hardware and additional sensors (e.g., motion, humidity) into the electrode nodes within the whole container seal.
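
    The detection principle (non-uniform grid resistances indicating damage) can be sketched as a deviation test against an intact-fabric baseline; the function name, threshold, and resistance values are hypothetical, not the authors' implementation:

```python
import numpy as np

def localize_tear(baseline, measured, threshold=0.10):
    """Flag electrode pairs whose measured resistance deviates from the
    intact-fabric baseline by more than `threshold` (fractional change);
    a tear raises resistance along current paths that cross it."""
    deviation = (measured - baseline) / baseline
    return np.argwhere(deviation > threshold)

# Toy 4x4 map of electrode-pair resistances (ohms), uniform when intact
baseline = np.full((4, 4), 1.0)
measured = baseline.copy()
measured[2, 1] = 1.5          # simulated tear near electrode pair (2, 1)
print(localize_tear(baseline, measured))   # → [[2 1]]
```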

  1. Centrifugal Modelling of Soil Structures. Part I. Centrifugal Modelling of Slope Failures.

    DTIC Science & Technology

    1979-03-01

    ... comparing successive photographs in which soil movement was noted by the change in position of the original grid of silvered indicator balls. ... of uplift forces was also observed. In nineteen coal mine waste embankment dam models, throughout which the soil particle size distribution was altered for modelling of different ...

  2. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming (EP), (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with SVMs for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, distributed training architectures are 50 times faster than standard iterative training methods.

  3. Integrated failure detection and management for the Space Station Freedom external active thermal control system

    NASA Technical Reports Server (NTRS)

    Mesloh, Nick; Hill, Tim; Kosyk, Kathy

    1993-01-01

    This paper presents the integrated approach toward failure detection, isolation, and recovery/reconfiguration to be used for the Space Station Freedom External Active Thermal Control System (EATCS). The on-board and on-ground diagnostic capabilities of the EATCS are discussed. Time and safety critical features, as well as noncritical failures, and the detection coverage for each provided by existing capabilities are reviewed. The allocation of responsibility between on-board software and ground-based systems, to be shown during ground testing at the Johnson Space Center, is described. Failure isolation capabilities allocated to the ground include some functionality originally found on orbit but moved to the ground to reduce on-board resource requirements. Complex failures requiring the analysis of multiple external variables, such as environmental conditions, heat loads, or station attitude, are also allocated to ground personnel.

  4. Rainfall-Runoff and Slope Failure in a Steep, Tropical Landscape

    NASA Astrophysics Data System (ADS)

    Deane, J.; Freyberg, D. L.

    2016-12-01

    Tropical forests are often located on short, steep slopes with pronounced heterogeneity in vegetation over small distances. Further, they are distinguished from their temperate counterparts by a thinner organic horizon and large interannual and subseasonal variability in precipitation. However, hydrologic processes in tropical watersheds are difficult to quantify and study because of data scarcity, accessibility difficulties and complex topography. As a result, there has been little work on disentangling the effects of spatial and temporal heterogeneity on flow generation and slope failure on tropical hillslopes. In this work we analyze the connections between terrain properties, subsurface formation, land cover, and precipitation variability in changing water table dynamics at the interface between a thin soil mantle and underlying bedrock. We have developed a fully distributed integrated hydrologic model at two different scales using ParFlow.CLM: 1) a 100 m idealized hillslope (1 m model grid size) representative of physiographic regions on tropical islands and 2) a 48 sq. km tropical island watershed in Trinidad and Tobago (30 m model grid size). Additionally, we couple ParFlow to an infinite slope stability module to investigate the initiation of rainfall-induced landslides under different precipitation scenarios. The characteristic hillslopes are used to generalize the near-subsurface response of a soil-saprolite aquifer to a range of landscape properties. In particular, we investigate the role of mean slope, soil properties and road cuts in altering the partitioning of runoff and infiltration, and in increasing slope stability. Moving from the idealized models to the steep tropical watershed, we evaluate the effects of different land cover and precipitation scenarios (consistent with climate change projections) on flooding and hillslope failure incidence.
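
    The infinite-slope criterion such a stability module evaluates can be sketched in its textbook form; the soil parameters below are illustrative, not values from the study:

```python
import math

def factor_of_safety(c, gamma, z, beta_deg, phi_deg, u):
    """Textbook infinite-slope factor of safety:
    FS = (c' + (gamma*z*cos^2(beta) - u)*tan(phi')) / (gamma*z*sin(beta)*cos(beta))
    with cohesion c' (kPa), unit weight gamma (kN/m^3), failure-plane depth z (m),
    slope angle beta, friction angle phi, and pore pressure u (kPa).
    FS < 1 indicates failure."""
    b = math.radians(beta_deg)
    p = math.radians(phi_deg)
    return (c + (gamma * z * math.cos(b) ** 2 - u) * math.tan(p)) / \
           (gamma * z * math.sin(b) * math.cos(b))

# Same slope, dry versus elevated pore pressure after heavy rainfall
fs_dry = factor_of_safety(5.0, 18.0, 2.0, 35.0, 30.0, u=0.0)    # FS > 1 (stable)
fs_wet = factor_of_safety(5.0, 18.0, 2.0, 35.0, 30.0, u=15.0)   # FS < 1 (failure)
print(round(fs_dry, 2), round(fs_wet, 2))
```

    The rainfall scenarios in the study act through the pore-pressure term: raising u at the soil-bedrock interface can push FS below 1 without any change in slope geometry.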

  5. Hot streak characterization in serpentine exhaust nozzles

    NASA Astrophysics Data System (ADS)

    Crowe, Darrell S.

    Modern aircraft of the United States Air Force face increasingly demanding cost, weight, and survivability requirements. Serpentine exhaust nozzles within an embedded engine allow a weapon system to fulfill mission survivability requirements by providing denial of direct line-of-sight into the high-temperature components of the engine. Recently, aircraft have experienced material degradation and failure along the aft deck due to extreme thermal loading. Failure has occurred in specific regions along the aft deck where concentrations of hot gas have come into contact with the surface, causing hot streaks. The prevention of these failures will be aided by the accurate prediction of hot streaks. Additionally, hot streak prediction will improve future designs by identifying areas of the nozzle and aft deck surfaces that require thermal management. To this end, the goal of this research is to observe and characterize the underlying flow physics of hot streak phenomena. The goal is accomplished by applying computational fluid dynamics to determine how hot streak phenomena are affected by changes in nozzle geometry. The present research first validates the computational methods using serpentine inlet experimental and computational studies. A design methodology is then established for creating the six serpentine exhaust nozzles investigated in this research. A grid-independent solution is obtained on a nozzle using several figures of merit and the grid-convergence index method. An investigation into the application of a second-order closure turbulence model is accomplished. Simulations are performed for all serpentine nozzles at two flow conditions. The research introduces a set of characterization and performance parameters based on the temperature distribution and flow conditions at the nozzle throat and exit.
Examination of the temperature distribution on the upper and lower nozzle surfaces reveals critical information concerning changes in hot streak phenomena due to changes in nozzle geometry.

  6. Foundations for Protecting Renewable-Rich Distribution Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Abraham; Brahma, Sukumar; Ranade, Satish

    High proliferation of Inverter Interfaced Distributed Energy Resources (IIDERs) on the electric distribution grid introduces new challenges to the protection of such systems. This is because existing protection systems are designed with two assumptions: (1) the system is single-sourced, resulting in unidirectional fault current, and (2) fault currents are easily detectable due to their much higher magnitudes compared to load currents. Because most renewables interface with the grid through inverters, and inverters restrict their current output to levels close to the full load current, both these assumptions are no longer valid: the system becomes multi-sourced, and overcurrent-based protection does not work. The primary scope of this study is to analyze the response of a grid-tied inverter to different faults in the grid, leading to new guidelines on protecting renewable-rich distribution systems.

  7. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  8. Use of Failure Mode and Effects Analysis to Improve Emergency Department Handoff Processes.

    PubMed

    Sorrentino, Patricia

    2016-01-01

    The purpose of this article is to describe a quality improvement process using failure mode and effects analysis (FMEA) to evaluate systems handoff communication processes, improve emergency department (ED) throughput, reduce crowding through development of a standardized handoff, and, ultimately, improve patient safety. Risk of patient harm through ineffective communication during handoff transitions is a major reason for breakdown of systems. The complexities of ED processes put patient safety at risk. An increased incidence of submitted patient safety event reports for handoff communication failures between the ED and inpatient units solidified a decision to use FMEA to identify handoff failures and mitigate patient harm through redesign. The clinical nurse specialist implemented an FMEA. Handoff failure themes were created from deidentified retrospective reviews. Weekly meetings were held over a 3-month period to identify failure modes and determine their cause and effect on the process. A functional block diagram process map tool was used to illustrate handoff processes. An FMEA grid was used to list failure modes and assign a risk priority number to quantify results. Multiple areas with actionable failures were identified. A majority of causes for high-priority failure modes were specific to communications. The findings demonstrate the complexity of transition and handoff processes. The FMEA served to identify and evaluate the risk of handoff failures and provide a framework for process improvement. A focus on mentoring nurses in quality handoff processes, so that they become habitual practice, is crucial to safe patient transitions. Standardizing content and hardwiring it within the system are best practice. The clinical nurse specialist is prepared to provide strong leadership to drive and implement system-wide quality projects.
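
    The risk priority number assigned on an FMEA grid is conventionally the product of severity, occurrence, and detection ratings (each scored 1-10); the failure modes and ratings below are hypothetical examples, not the article's data:

```python
def risk_priority_number(severity, occurrence, detection):
    """Conventional FMEA scoring: RPN = S x O x D, each rated 1-10."""
    return severity * occurrence * detection

# Hypothetical handoff failure modes with (severity, occurrence, detection)
failure_modes = {
    "incomplete handoff content": (8, 6, 4),   # RPN 192
    "delayed bed assignment":     (5, 7, 3),   # RPN 105
    "missed critical lab value":  (9, 3, 6),   # RPN 162
}

# Rank failure modes so redesign effort targets the highest-risk ones first
ranked = sorted(failure_modes,
                key=lambda m: risk_priority_number(*failure_modes[m]),
                reverse=True)
print(ranked[0])   # → incomplete handoff content
```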

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvagnini, Elena; Bosmans, Hilde; Marshall, Nicholas W.

    Purpose: The aim of this paper was to illustrate the value of the new metric effective detective quantum efficiency (eDQE) in relation to more established measures in the optimization process of two digital mammography systems. The following metrics were included for comparison against eDQE: detective quantum efficiency (DQE) of the detector, signal difference to noise ratio (SdNR), and detectability index (d′) calculated using a standard nonprewhitened observer with eye filter. Methods: The two systems investigated were the Siemens MAMMOMAT Inspiration and the Hologic Selenia Dimensions. The presampling modulation transfer function (MTF) required for the eDQE was measured using two geometries: a geometry containing scattered radiation and a low scatter geometry. The eDQE, SdNR, and d′ were measured for poly(methyl methacrylate) (PMMA) thicknesses of 20, 40, 60, and 70 mm, with and without the antiscatter grid and for a selection of clinically relevant target/filter (T/F) combinations. Figures of merit (FOMs) were then formed from SdNR and d′ using the mean glandular dose as the factor to express detriment. Detector DQE was measured at energies covering the range of typical clinically used spectra. Results: The MTF measured in the presence of scattered radiation showed a large drop at low spatial frequency compared to the low scatter method and led to a corresponding reduction in eDQE. The eDQE for the Siemens system at 1 mm⁻¹ ranged between 0.15 and 0.27, depending on T/F and grid setting. For the Hologic system, eDQE at 1 mm⁻¹ varied from 0.15 to 0.32, again depending on T/F and grid setting. The eDQE results for both systems showed that the grid increased the system efficiency for PMMA thicknesses of 40 mm and above but showed only small sensitivity to T/F setting. While results of the SdNR and d′ based FOMs confirmed the eDQE grid position results, they were also more specific in terms of T/F selection.
    For the Siemens system at 20 mm PMMA, the FOMs indicated Mo/Mo (grid out) as optimal while W/Rh (grid in) was the optimal configuration at 40, 60, and 70 mm PMMA. For the Hologic, the FOMs pointed to W/Rh (grid in) at 20 and 40 mm of PMMA while W/Ag (grid in) gave the highest FOM at 60 and 70 mm PMMA. Finally, DQE at 1 mm⁻¹ averaged for the four beam qualities studied was 0.44 ± 0.02 and 0.55 ± 0.03 for the Siemens and Hologic detectors, respectively, indicating only a small influence of energy on detector DQE. Conclusions: Both the DQE and eDQE data showed only a small sensitivity to T/F setting for these two systems. The eDQE showed clear preferences in terms of scatter reduction, being highest for the grid-in geometry for PMMA thicknesses of 40 mm and above. The SdNR and d′ based figures of merit, which contain additional weighting for contrast and dose, pointed to specific T/F settings for both systems.

  10. Integrated Access to Solar Observations With EGSO

    NASA Astrophysics Data System (ADS)

    Csillaghy, A.

    2003-12-01

    Co-Authors: J. Aboudarham (2), E. Antonucci (3), R. D. Bentely (4), L. Ciminiera (5), A. Finkelstein (4), J. B. Gurman (6), F. Hill (7), D. Pike (8), I. Scholl (9), V. Zharkova (10) and the EGSO development team. Institutions: (2) Observatoire de Paris-Meudon (France); (3) INAF - Istituto Nazionale di Astrofisica (Italy); (4) University College London (U.K.); (5) Politecnico di Torino (Italy); (6) NASA Goddard Space Flight Center (USA); (7) National Solar Observatory (USA); (8) Rutherford Appleton Lab. (U.K.); (9) Institut d'Astrophysique Spatial, Universite de Paris-Sud (France); (10) University of Bradford (U.K.). Abstract: The European Grid of Solar Observations is the European contribution to the deployment of a virtual solar observatory. The project is funded under the Information Society Technologies (IST) thematic programme of the European Commission's Fifth Framework. EGSO started in March 2002 and will last until March 2005. The project is categorized as a computer science effort. Evidently, a fair number of the issues it addresses are general to grid projects. Nevertheless, EGSO is also of benefit to the application domains, including solar physics, space weather, climate physics and astrophysics. With EGSO, researchers as well as the general public can access and combine solar data from distributed archives in an integrated virtual solar resource. Users express queries based on various search parameters. The search possibilities of EGSO extend those of traditional data access systems. For instance, users can formulate a query to search for simultaneous observations of a specific solar event in a given number of wavelengths. In other words, users can search for observations on the basis of events and phenomena, rather than just time and location. The software architecture consists of three collaborating components: a consumer, a broker and a provider.
The first component, the consumer, organizes the end user interaction and controls requests submitted to the grid. The consumer is thus in charge of tasks such as request handling, request composition, data visualization and data caching. The second component, the provider, is dedicated to data providing and processing. It links the grid to individual data providers and data centers. The third component, the broker, collects information about providers and allows consumers to perform the searches on the grid. Each component can exist in multiple instances. This follows a basic grid concept: The failure or unavailability of a single component will not generate a failure of the whole system, as other systems will take over the processing of requests. The architecture relies on a global data model for the semantics. The data model is in some way the brains of the grid. It provides a description of the information entities available within the grid, as well as a description of their relationships. EGSO is now in the development phase. A demonstration (www.egso.org/demo) is provided to get an idea about how the system will function once the project is completed. The demonstration focuses on retrieving data needed to determine the energy released in the solar atmosphere during the impulsive phase of flares. It allows finding simultaneous observations in the visible, UV, Soft X-rays, hard X-rays, gamma-rays, and radio. The types of observations that can be specified are images at high space and time resolutions as well as integrated emission and spectra from a yet limited set of instruments, including the NASA spacecraft TRACE, SOHO, RHESSI, and the ground-based observatories Phoenix-2 in Switzerland and Meudon Observatory in France

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
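
    The logarithmic scaling of gossip dissemination can be illustrated with a toy push-gossip model (each informed process contacts one random peer per cycle); this is a generic sketch of the mechanism, not the paper's three algorithms:

```python
import random

def gossip_cycles(n, seed=0):
    """Count push-gossip cycles until all n processes hold the failure
    information, starting from a single informed process."""
    rng = random.Random(seed)
    informed = {0}
    cycles = 0
    while len(informed) < n:
        for _ in range(len(informed)):
            informed.add(rng.randrange(n))   # each informed process pushes once
        cycles += 1
    return cycles

# Cycle counts grow roughly like log2(n), not linearly with n
for n in (64, 1024, 16384):
    print(n, gossip_cycles(n))
```

    Since the informed set can at most double per cycle, at least log2(n) cycles are needed; the expected total stays within a small constant factor of that, which is the logarithmic scaling the paper reports.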

  12. A Fault Tolerant System for an Integrated Avionics Sensor Configuration

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Lancraft, R. E.

    1984-01-01

    An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual redundant sensor complement, are presented for bias, hardover, null, ramp, increased noise and scale factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing excellent detection speed for bias failures in the MLS, indicated airspeed, attitude and radar altimeter sensors.

  13. Best chirplet chain: Near-optimal detection of gravitational wave chirps

    NASA Astrophysics Data System (ADS)

    Chassande-Mottin, Éric; Pai, Archana

    2006-02-01

    The list of putative sources of gravitational waves possibly detected by the ongoing worldwide network of large-scale interferometers has been growing continuously in recent years. For some of them, detection is made difficult by the lack of complete information about the expected signal. We concentrate on the case where the expected gravitational wave (GW) is a quasiperiodic frequency-modulated signal, i.e., a chirp. In this article, we address the question of detecting an a priori unknown GW chirp. We introduce a general chirp model and claim that it includes all physically realistic GW chirps. We produce a finite grid of template waveforms which samples the resulting set of possible chirps. If we followed the classical approach (used for the detection of inspiralling binary chirps, for instance), we would build a bank of quadrature matched filters comparing the data to each of the templates of this grid. The detection would then be achieved by thresholding the output, the maximum giving the individual which best fits the data. In the present case, this exhaustive search is not tractable because of the very large number of templates in the grid. We show that the exhaustive search can be reformulated (using approximations) as a pattern search in the time-frequency plane. This motivates an approximate but feasible alternative solution which is clearly linked to the optimal one. The time-frequency representation and pattern search algorithm are fully determined by the reformulation. This contrasts with the other time-frequency based methods presented in the literature for the same problem, where these choices are justified by “ad hoc” arguments. In particular, the time-frequency representation has to be unitary. Finally, we assess the performance, robustness and computational cost of the proposed method with several benchmarks using simulated data.
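
    The classical template-bank approach the paper contrasts against can be sketched as a bank of normalized matched filters over a small chirp grid; the chirp parameters, grid size, and noise level below are illustrative, not the paper's benchmarks:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n) / n

# Small grid of chirp templates differing only in frequency-sweep rate
rates = np.linspace(20.0, 60.0, 9)
templates = [np.sin(2 * np.pi * (10 * t + 0.5 * r * t**2)) for r in rates]
templates = [h / np.linalg.norm(h) for h in templates]   # unit-norm filters

# Data: the chirp at rates[4] buried in white noise
data = np.sin(2 * np.pi * (10 * t + 0.5 * rates[4] * t**2)) \
       + 0.3 * rng.standard_normal(n)

# Matched-filter bank: correlate against every template, keep the maximum
scores = [abs(h @ data) for h in templates]
best = int(np.argmax(scores))
print(rates[best])   # recovers the injected sweep rate
```

    The paper's point is precisely that this exhaustive correlation becomes intractable when the chirp grid is very large, motivating the time-frequency pattern-search reformulation.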

  14. Smart Grid Risk Management

    NASA Astrophysics Data System (ADS)

    Abad Lopez, Carlos Adrian

    Current electricity infrastructure is being stressed from several directions: high demand, unreliable supply, extreme weather conditions, and accidents, among others. Infrastructure planners have traditionally focused only on the cost of the system; today, resilience and sustainability are becoming increasingly important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth and large scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries a framework for making informed planning and control decisions in the presence of uncertainty. Demand-side management is considered the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances than the alternative of increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices and reduce maintenance costs.
We develop sparse optimization methods for detecting a small change in the demand for electricity of a customer in response to a price change or signal from the utility; dynamic learning methods for scheduling the maintenance of direct load control switches whose operating state is not directly observable and can only be inferred from the metered electricity consumption; and machine learning methods for accurately forecasting the load of hundreds of thousands of residential, commercial and industrial customers. These algorithms have been implemented in the software system provided by AutoGrid, Inc., and this system has helped several utilities in the Pacific Northwest, Oklahoma, California and Texas provide more reliable power to their customers at significantly reduced prices. Providing power to widely spread-out communities in developing countries using the conventional power grid is not economically feasible. The most attractive alternative source of affordable energy for these communities is solar micro-grids. We discuss risk-aware robust methods to optimally size and operate solar micro-grids in the presence of uncertain demand and uncertain renewable generation. These algorithms help system operators increase their revenue while making their systems more resilient to inclement weather conditions.
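
    The core of the sparse load-reduction detection idea can be illustrated with a toy example (all names and numbers are invented; this is not AutoGrid's implementation): form the residual between a baseline forecast and observed consumption during the DR window, then soft-threshold it so that noise-level deviations are reported as exactly zero.

```python
import numpy as np

# Hypothetical sketch of sparse demand-response detection: the DR-window
# load is modeled as (baseline - reduction), and the reduction estimate is
# soft-thresholded (the proximal step behind l1/sparse estimators) so that
# small, noise-level deviations are reported as zero.
rng = np.random.default_rng(1)
hours = 24 * 7
baseline = 5.0 + np.sin(np.arange(hours) * 2 * np.pi / 24)  # kW, assumed profile
dr_hours = np.zeros(hours, bool)
dr_hours[100:104] = True                                    # a utility DR event
true_reduction = 0.8                                        # kW shed (invented)
observed = baseline - true_reduction * dr_hours + 0.1 * rng.standard_normal(hours)

residual = baseline - observed            # positive where load actually dropped
raw = residual[dr_hours].mean()           # naive per-event estimate
lam = 0.2                                 # sparsity threshold (tuning choice)
detected = np.sign(raw) * max(abs(raw) - lam, 0.0)   # soft-threshold: 0 if small
```

    A customer who did not respond would produce a noise-level `raw` value and a `detected` value of exactly zero, which is the property that makes the estimate sparse across a large customer population.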

  15. Designing efficient surveys: spatial arrangement of sample points for detection of invasive species

    Treesearch

    Ludek Berec; John M. Kean; Rebecca Epanchin-Niell; Andrew M. Liebhold; Robert G. Haight

    2015-01-01

    Effective surveillance is critical to managing biological invasions via early detection and eradication. The efficiency of surveillance systems may be affected by the spatial arrangement of sample locations. We investigate how the spatial arrangement of sample points, ranging from random to fixed grid arrangements, affects the probability of detecting a target...

  16. Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network

    NASA Technical Reports Server (NTRS)

    Kuhn, D. Richard; Kacker, Raghu; Lei, Yu

    2010-01-01

    This study compared random and t-way combinatorial inputs to a network simulator to determine whether the two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or for determining modifications that attackers could make to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interaction. The paper reviews explanations for these results and implications for modeling and simulation.
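
    As a minimal illustration of the coverage criterion (not the tooling used in the study), the following sketch counts how many random tests are needed to cover every 2-way parameter-value combination for a hypothetical configuration space of 6 parameters with 3 values each:

```python
from itertools import combinations, product
import random

# Measure 2-way (pairwise) coverage: every (parameter_i=a, parameter_j=b)
# combination must appear in at least one test. Configuration space invented.
params = [range(3)] * 6                  # 6 parameters, 3 values each

def pairs_covered(tests):
    cov = set()
    for test in tests:
        for (i, a), (j, b) in combinations(enumerate(test), 2):
            cov.add((i, a, j, b))
    return cov

# the full factorial (729 tests) covers every pair by definition
all_pairs = pairs_covered(product(*[list(p) for p in params]))

# draw random tests until all 2-way combinations are covered
random.seed(0)
tests, covered = [], set()
while covered != all_pairs:
    tests.append(tuple(random.choice(list(p)) for p in params))
    covered |= pairs_covered([tests[-1]])
```

    A constructed 2-way covering array for this space needs only on the order of a dozen tests, so the efficiency gap between random and combinatorial suites is visible even at t = 2, and it widens at t = 3 and 4.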

  17. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses.
Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.

  18. Speedy routing recovery protocol for large failure tolerance in wireless sensor networks.

    PubMed

    Lee, Joa-Hyoung; Jung, In-Bum

    2010-01-01

    Wireless sensor networks are expected to play an increasingly important role in data collection in hazardous areas. However, the physical fragility of a sensor node makes reliable routing in hazardous areas a challenging problem. Because several sensor nodes in a hazardous area could be damaged simultaneously, the network should be able to recover routing after node failures over large areas. Many routing protocols take single-node failure recovery into account, but it is difficult for these protocols to recover the routing after large-scale failures. In this paper, we propose a routing protocol, referred to as ARF (Adaptive routing protocol for fast Recovery from large-scale Failure), to recover a network quickly after failures over large areas. ARF detects failures by counting the packet losses from parent nodes, and upon failure detection, it decreases the routing interval to notify the neighbor nodes of the failure. Our experimental results indicate that ARF can recover from large-area failures quickly, with fewer packets and lower energy consumption than previous protocols.
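
    The detection rule described above can be sketched as a small state machine. The threshold and interval values below are invented; the abstract does not give ARF's actual parameters:

```python
# Sketch of loss-count failure detection: count consecutive losses of
# expected packets from the parent node and, past a threshold, declare
# failure and shrink the routing-update interval so neighbors learn of
# the failure quickly. All constants are illustrative assumptions.
class ParentMonitor:
    def __init__(self, loss_threshold=3, normal_interval=60.0, fast_interval=5.0):
        self.loss_threshold = loss_threshold   # consecutive losses tolerated
        self.normal_interval = normal_interval # routing update period (s)
        self.fast_interval = fast_interval     # period after failure detected
        self.losses = 0
        self.failed = False

    def on_expected_packet(self, received: bool) -> float:
        """Record one expected-packet slot; return the current routing interval."""
        self.losses = 0 if received else self.losses + 1
        if self.losses >= self.loss_threshold:
            self.failed = True
        return self.fast_interval if self.failed else self.normal_interval
```

    Resetting the counter on every received packet makes the detector sensitive to sustained outages (a damaged parent) rather than isolated losses.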

  19. Failure detection and identification for a reconfigurable flight control system

    NASA Technical Reports Server (NTRS)

    Dallery, Francois

    1987-01-01

    Failure detection and identification logic for a fault-tolerant longitudinal control system were investigated. Aircraft dynamics were based upon the cruise condition for a hypothetical transonic business jet transport configuration. The fault-tolerant control system consists of conventional control and estimation plus a new outer loop containing failure detection, identification, and reconfiguration (FDIR) logic. It is assumed that the additional logic has access to all measurements, as well as to the outputs of the control and estimation logic. The pilot may also command the FDIR logic to perform special tests.

  20. A dual-mode generalized likelihood ratio approach to self-reorganizing digital flight control system design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time at which the failure occurred, allowing it to be sensitive to new data and consequently increasing the chances of fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
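
    A minimal sketch of the GLR idea on simulated filter residuals (all numbers invented): under the no-failure hypothesis the innovations are zero-mean white noise, while a failure adds a bias from some onset time onward. For a constant bias, the GLR statistic at each candidate onset reduces to a squared, normalized cumulative residual sum, and maximizing over onsets estimates when the failure occurred:

```python
import numpy as np

# Simulated Kalman-filter residuals: white noise before the failure, a
# constant bias afterward. Onset time, bias, and threshold are invented.
rng = np.random.default_rng(2)
n, sigma, onset, bias = 200, 1.0, 120, 0.8
r = sigma * rng.standard_normal(n)
r[onset:] += bias                        # post-failure residual bias

def glr(res, sigma):
    """GLR statistic for a constant residual bias starting at each time k."""
    stats = []
    for k in range(len(res)):
        tail = res[k:]
        stats.append(tail.sum() ** 2 / (sigma ** 2 * len(tail)))
    return np.array(stats)

stats = glr(r, sigma)
k_hat = int(np.argmax(stats))            # estimated failure time
alarm = stats[k_hat] > 10.0              # threshold is a design choice
```

    The statistic peaks near the true onset; choosing the alarm threshold trades false-alarm rate against detection delay.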

  1. Bearing system

    DOEpatents

    Kapich, Davorin D.

    1987-01-01

    A bearing system includes backup bearings for supporting a rotating shaft upon failure of primary bearings. In the preferred embodiment, the backup bearings are rolling element bearings having their rolling elements disposed out of contact with their associated respective inner races during normal functioning of the primary bearings. Displacement detection sensors are provided for detecting displacement of the shaft upon failure of the primary bearings. Upon detection of the failure of the primary bearings, the rolling elements and inner races of the backup bearings are brought into mutual contact by axial displacement of the shaft.

  2. Failure detection system risk reduction assessment

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.

  3. Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.

    2010-01-01

    In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.

  4. Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott

    2008-01-01

    A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.

  5. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation depends critically on both cyber and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results of game-theoretic analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses produced by the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability). These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game-theoretic rules decomposed from the failure scenarios, in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.

  6. Security attack detection algorithm for electric power gis system based on mobile application

    NASA Astrophysics Data System (ADS)

    Zhou, Chao; Feng, Renjun; Wang, Liming; Huang, Wei; Guo, Yajuan

    2017-05-01

    Electric power GIS is one of the key information technologies supporting power grid construction in China and is widely used in grid construction planning, weather monitoring, and power distribution management. Electric power GIS based on mobile applications is an effective extension of the geographic information system already widely used in the electric power industry, providing reliable, inexpensive, and sustainable power service for the country. Accurate state estimation is an important condition for maintaining normal operation of the electric power GIS. Recent research has shown that attackers can inject complex false data into the power system. This new type of false data injection attack, the load integrity attack (LIA), can bypass routine detection and cause the control center to make a series of wrong decisions, eventually leading to an uneven distribution of power in the grid. To ensure the safety of mobile-application-based electric power GIS systems, it is therefore important to analyze the attack mechanism, propose this new type of attack, and study the corresponding detection methods and prevention strategies in that environment.
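
    For context, the routine check that stealthy attacks of this kind are crafted to bypass is the classic residual (chi-square) bad-data test on state estimation. The toy DC model below (invented matrices and numbers, not the paper's system) shows why: a crude injection inflates the residual, while an attack vector lying in the column space of H leaves the residual, and hence the test, unchanged.

```python
import numpy as np

# Toy DC state estimation: z = H x + e, states x estimated by least squares,
# bad data flagged when the normalized residual exceeds a chi-square threshold.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])               # 4 measurements, 2 states (invented)
x_true = np.array([0.3, -0.1])
rng = np.random.default_rng(3)
sigma = 0.01
z = H @ x_true + sigma * rng.standard_normal(4)

def residual_stat(z):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    r = z - H @ x_hat
    return float(r @ r) / sigma**2       # ~ chi-square with 4 - 2 = 2 dof

threshold = 9.21                         # ~1% significance for chi2(2)
z_gross = z.copy()
z_gross[2] += 0.2                        # crude, non-stealthy injection
gross_flagged = residual_stat(z_gross) > threshold

# a stealthy attack a = H @ c shifts the estimate but not the residual:
z_stealth = z + H @ np.array([0.05, 0.05])
stealth_flagged = residual_stat(z_stealth) > threshold   # same stat as clean z
```

    This is why attacks consistent with the measurement model can mislead the control center without tripping the residual test, motivating the new detection methods the abstract calls for.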

  7. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  8. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
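
    The length-dependent percolation model can be sketched as follows (the radius-graph construction, failure coefficient, and sizes are invented for illustration; the paper's networks and failure law may differ): each link fails independently with probability proportional to its spatial length, and connectivity is then checked with union-find.

```python
import math
import random

# Length-dependent link percolation on a small spatial network: nodes in the
# unit square, links between nearby nodes, each link failing with probability
# beta * length. All parameters are illustrative assumptions.
def connected(n, edges):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)}) == 1

def survival_fraction(n=30, beta=0.5, trials=200, seed=0):
    """Fraction of failure trials after which the network stays connected."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    links = [(i, j, math.dist(pts[i], pts[j]))
             for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) < 0.35]   # simple radius graph
    alive = 0
    for _ in range(trials):
        kept = [(u, v) for u, v, length in links
                if rng.random() > beta * length]    # longer links fail more
        alive += connected(n, kept)
    return alive / trials
```

    Raising `beta` makes long links, which are exactly the ones that stitch distant regions together, fail more often, which is the mechanism behind the paper's finding that spatially embedded networks are more fragile than length-blind models suggest.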

  9. Causes of unusual distribution of coseismic landslides triggered by the Mw 6.1 2014 Ludian, Yunnan, China earthquake

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-li; Liu, Chun-guo; Wang, Ming-ming; Zhou, Qing

    2018-06-01

    The Mw 6.1 2014 Ludian, Yunnan, China earthquake triggered numerous coseismic landslides that do not appear to be associated with any previously known seismogenic fault. Traditional models of triggering for seismically generated landslides do not provide a reasonable explanation for the landslide pattern observed here. Here the Newmark method is applied on a grid to calculate the minimum accelerations required for slope failures throughout the affected region. The results demonstrate that for much of the study area the distribution of failure-prone slopes is similar to the actual pattern of coseismic landslides; however, there are some areas where the model predicts considerably fewer failures than occurred. We suggest that this is a result of the complex source faults that generated the Ludian earthquake, which produced a half-conjugate rupture on nearly EW- and NNW-trending faults at depth. The rupture directed much of its seismic moment southeast of the epicenter, increasing ground shaking and the number of resulting landslides.
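
    The per-cell calculation rests on the standard simplified Newmark relation for the critical (yield) acceleration a slope can withstand before sliding. A minimal sketch, assuming the thrust angle is approximated by the slope angle:

```python
import math

# Simplified Newmark critical acceleration: a_c = (FS - 1) * g * sin(alpha),
# with FS the static factor of safety and alpha the slope angle. A slope at
# FS <= 1 is already at (or past) failure, so its yield acceleration is zero.
def critical_acceleration(fs: float, alpha_deg: float, g: float = 9.81) -> float:
    """Minimum ground acceleration (m/s^2) needed to initiate sliding."""
    return max(fs - 1.0, 0.0) * g * math.sin(math.radians(alpha_deg))
```

    Evaluating this on every grid cell, with FS derived from local slope and strength data, yields the map of minimum triggering accelerations that the study compares against the observed landslide pattern.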

  10. A new method of converter transformer protection without commutation failure

    NASA Astrophysics Data System (ADS)

    Zhang, Jiayu; Kong, Bo; Liu, Mingchang; Zhang, Jun; Guo, Jianhong; Jing, Xu

    2018-01-01

    With the development of AC/DC hybrid transmission technology, the converter transformer, as the node where AC and DC conversion occurs in HVDC transmission, plays an important role in the reliable, safe, and stable operation of DC transmission. Commutation failure, a common problem in DC transmission, poses a serious threat to the safe and stable operation of the power grid. Based on the commutation relation between the AC bus voltage of the converter station and the output DC voltage of the converter, a generalized transformation ratio is defined, and a new converter transformer protection method based on this generalized ratio is proposed. The method uses the generalized ratio for on-line monitoring of faulty or abnormal commutation components, uses the current characteristics of the valve-side bushing CT to identify converter transformer faults accurately, and is unaffected by the presence of commutation failure. Fault analysis and EMTDC/PSCAD simulation show that the protection operates correctly under various converter fault conditions.

  11. HIV resistance testing and detected drug resistance in Europe.

    PubMed

    Schultze, Anna; Phillips, Andrew N; Paredes, Roger; Battegay, Manuel; Rockstroh, Jürgen K; Machala, Ladislav; Tomazic, Janez; Girard, Pierre M; Januskevica, Inga; Gronborg-Laut, Kamilla; Lundgren, Jens D; Cozzi-Lepri, Alessandro

    2015-07-17

    To describe regional differences and trends in resistance testing among individuals experiencing virological failure and the prevalence of detected resistance among those individuals who had a genotypic resistance test done following virological failure. Multinational cohort study. Individuals in EuroSIDA with virological failure (>1 RNA measurement >500 on ART after >6 months on ART) after 1997 were included. Adjusted odds ratios (aORs) for resistance testing following virological failure and aORs for the detection of resistance among those who had a test were calculated using logistic regression with generalized estimating equations. Compared to 74.2% of ART-experienced individuals in 1997, only 5.1% showed evidence of virological failure in 2012. The odds of resistance testing declined after 2004 (global P < 0.001). Resistance was detected in 77.9% of the tests, NRTI resistance being most common (70.3%), followed by NNRTI (51.6%) and protease inhibitor (46.1%) resistance. The odds of detecting resistance were lower in tests done in 1997-1998, 1999-2000 and 2009-2010, compared to those carried out in 2003-2004 (global P < 0.001). Resistance testing was less common in Eastern Europe [aOR 0.72, 95% confidence interval (CI) 0.55-0.94] compared to Southern Europe, whereas the detection of resistance given that a test was done was less common in Northern (aOR 0.29, 95% CI 0.21-0.39) and Central Eastern (aOR 0.47, 95% CI 0.29-0.76) Europe, compared to Southern Europe. Despite a concurrent decline in virological failure and testing, drug resistance was commonly detected. This suggests a selective approach to resistance testing. The regional differences identified indicate that policy aiming to minimize the emergence of resistance is of particular relevance in some European regions, notably in the countries in Eastern Europe.

  12. Spacecraft dynamics characterization and control system failure detection. Volume 3: Control system failure monitoring

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan M.

    1992-01-01

    We discuss the application of Generalized Parity Relations to two experimental flexible space structures, the NASA Langley Mini-Mast and the Marshall Space Flight Center ACES mast. We concentrate on the generation of residuals and make no attempt to implement the Decision Function. It should be clear from the examples that are presented whether it would be possible to detect the failure of a specific component. We derive the equations for Generalized Parity Relations. Two special cases are treated: namely, Single Sensor Parity Relations (SSPR) and Double Sensor Parity Relations (DSPR). Generalized Parity Relations for actuators are also derived. The NASA Langley Mini-Mast and the application of SSPR and DSPR to a set of displacement sensors located at the tip of the Mini-Mast are discussed. The performance of a reduced order model that includes the first five modes of the mast is compared to a set of parity relations that was identified on a set of input-output data. Both time domain and frequency domain comparisons are made. The effect of the sampling period and model order on the performance of the Residual Generators is also discussed. Failure detection experiments where the sensor set consisted of two gyros and an accelerometer are presented. The effects of model order and sampling frequency are again illustrated. The detection of actuator failures is discussed. We use Generalized Parity Relations to monitor control system component failures on the ACES mast. An overview is given of the Failure Detection Filter and experimental results are discussed. Conclusions and directions for future research are given.

  13. Real-Time Detection of Infusion Site Failures in a Closed-Loop Artificial Pancreas.

    PubMed

    Howsmon, Daniel P; Baysal, Nihat; Buckingham, Bruce A; Forlenza, Gregory P; Ly, Trang T; Maahs, David M; Marcal, Tatiana; Towers, Lindsey; Mauritzen, Eric; Deshpande, Sunil; Huyett, Lauren M; Pinsker, Jordan E; Gondhalekar, Ravi; Doyle, Francis J; Dassau, Eyal; Hahn, Juergen; Bequette, B Wayne

    2018-05-01

    As evidence emerges that artificial pancreas systems improve clinical outcomes for patients with type 1 diabetes, the burden of this disease will hopefully begin to be alleviated for many patients and caregivers. However, reliance on automated insulin delivery potentially means patients will be slower to act when devices stop functioning appropriately. One such scenario involves an insulin infusion site failure, where the insulin that is recorded as delivered fails to affect the patient's glucose as expected. Alerting patients to these events in real time would potentially reduce hyperglycemia and ketosis associated with infusion site failures. An infusion site failure detection algorithm was deployed in a randomized crossover study with artificial pancreas and sensor-augmented pump arms in an outpatient setting. Each arm lasted two weeks. Nineteen participants wore infusion sets for up to 7 days. Clinicians contacted patients to confirm infusion site failures detected by the algorithm and instructed on set replacement if failure was confirmed. In real time and under zone model predictive control, the infusion site failure detection algorithm achieved a sensitivity of 88.0% (n = 25) while issuing only 0.22 false positives per day, compared with a sensitivity of 73.3% (n = 15) and 0.27 false positives per day in the SAP arm (as indicated by retrospective analysis). No association between intervention strategy and duration of infusion sets was observed (P = .58). As patient burden is reduced by each generation of advanced diabetes technology, fault detection algorithms will help ensure that patients are alerted when they need to manually intervene. Clinical Trial Identifier: www.clinicaltrials.gov, NCT02773875.

  14. Pilot evaluation of electricity-reliability and power-quality monitoring in California's Silicon Valley with the I-Grid(R) system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph; Divan, Deepak; Brumsickle, William

    2004-02-01

    Power-quality events are of increasing concern for the economy because today's equipment, particularly computers and automated manufacturing devices, is susceptible to these imperceptible voltage changes. A small variation in voltage can cause this equipment to shut down for long periods, resulting in significant business losses. Tiny variations in power quality are difficult to detect except with expensive monitoring equipment used by trained technicians, so many electricity customers are unaware of the role of power-quality events in equipment malfunctioning. This report describes the findings from a pilot study coordinated through the Silicon Valley Manufacturers Group in California to explore the capabilities of I-Grid(R), a new power-quality monitoring system. This system is designed to improve the accessibility of power-quality information and to increase understanding of the growing importance of electricity reliability and power quality to the economy. The study used data collected by I-Grid sensors at seven Silicon Valley firms to investigate the impacts of power quality on individual study participants as well as to explore the capabilities of the I-Grid system to detect events on the larger electricity grid by means of correlation of data from the sensors at the different sites. In addition, study participants were interviewed about the value they place on power quality, and their efforts to address electricity-reliability and power-quality problems. Issues were identified that should be taken into consideration in developing a larger, potentially nationwide, network of power-quality sensors.

  15. A facile and cost-effective TEM grid approach to design gold nano-structured substrates for high throughput plasmonic sensitive detection of biomolecules.

    PubMed

    Jia, Kun; Bijeon, Jean Louis; Adam, Pierre Michel; Ionescu, Rodica Elena

    2013-02-21

    A commercial TEM grid was used as a mask for the creation of extremely well-organized gold micro-/nano-structures on a glass substrate via a high temperature annealing process at 500 °C. The structured substrate was (bio)functionalized and used for the high throughput LSPR immunosensing of different concentrations of a model protein named bovine serum albumin.

  16. Enhancement of surface definition and gridding in the EAGLE code

    NASA Technical Reports Server (NTRS)

    Thompson, Joe F.

    1991-01-01

    Algorithms for smoothing of curves and surfaces for the EAGLE grid generation program are presented. The method uses an existing automated technique which detects undesirable geometric characteristics by using a local fairness criterion. The geometry entity is then smoothed by repeated removal and insertion of spline knots in the vicinity of the geometric irregularity. The smoothing algorithm is formulated for use with curves in Beta spline form and tensor product B-spline surfaces.

  17. Camouflage Traffic: Minimizing Message Delay for Smart Grid Applications Under Jamming

    DTIC Science & Technology

    2015-01-16

    Conf. Wireless Netw. Security, 2011, pp. 47–52. [26] M. Strasser, B. Danev, and S. Capkun, “Detection of reactive jamming in sensor networks,” ACM...Evaluation of two anti-islanding schemes for a radial distribution system equipped with self-excited induction generator wind turbines,” IEEE Trans...technologies. To facilitate efficient information exchange, wireless networks have been proposed to be widely used in the smart grid. However, the jamming

  18. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain: with high probability, the true angles do not lie on the quantization grid, which causes power leakage and severely degrades CE performance. A new model is first proposed to formulate the off-grid problem. The model divides each continuously-distributed angle into a quantized discrete part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to iteratively refine the CE by exchanging information between the integral grid part and the fractional off-grid part under the turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS), which iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations show that the proposed method greatly enhances the angle detection resolution.
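    The integral-plus-fractional angle decomposition can be sketched in a few lines (a uniform quantization grid over [0, π) and the grid size are assumptions for illustration, not the paper's exact parameterization):

```python
import math

def decompose_angle(theta, n_grid):
    """Split a continuous angle theta (radians, in [0, pi)) into the nearest
    quantization-grid angle (the 'integral grid' part) and the residual
    'fractional off-grid' offset, so that theta = k * step + offset."""
    step = math.pi / n_grid
    k = round(theta / step) % n_grid   # integral grid index
    offset = theta - k * step          # fractional off-grid part
    return k, offset

k, off = decompose_angle(0.37, 64)
```

    By construction the offset magnitude never exceeds half a grid step, which is the quantity the fractional-offset estimator has to recover.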

  19. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists), it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures. We then design three groups of features from the image data of the nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using leave-one-out cross validation. Results show that LR performs worst of the four classifiers while the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
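    The univariate box-whisker test amounts to an interquartile-range fence. A minimal sketch on a hypothetical segmentation-quality feature (the 1.5×IQR factor is the conventional whisker rule, assumed here rather than taken from the paper):

```python
def iqr_outliers(values, k=1.5):
    """Return the values falling outside the box-whisker fences
    [Q1 - k*IQR, Q3 + k*IQR]; these are flagged as candidate failures."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # linear interpolation between order statistics
        pos = q * (n - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, n - 1)
        return xs[lo] * (1 - frac) + xs[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo_fence or v > hi_fence]

# hypothetical per-subject quality scores; one obvious failure
scores = [0.91, 0.93, 0.92, 0.94, 0.90, 0.40]
```
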

  20. Ferrographic and spectrographic analysis of oil sampled before and after failure of a jet engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1980-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph as well as plasma, atomic absorption, and emission spectrometers. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the oil sample from the engine just prior to the test in which the failure occurred. However, low concentrations of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure.

  1. Ontology for the Gridded Met Database

    DTIC Science & Technology

    2007-07-01

    several concerned one or other of the following two possibilities: A “crib sheet” interpreted as a cloth sheet used in a baby’s crib or, alternately...brain decided that the math professor was the guy, but as it turned out, they both were—the math professor had made a mid-life career change to artist...bulletproof. If you tell your teenager to take out the garbage and she doesn’t, should you attribute the failure to lack of understanding? With

  2. Energy Reduction Strategies for Marine Corps Base Camp Pendleton: Assessment and Recommendations Professional Report

    DTIC Science & Technology

    2012-05-07

    kV kilovolt CNG compressed natural gas LCOE levelized cost of electricity CAES compressed air energy storage LED light-emitting diode COP...conservation to ensure a high-quality sustainable water supply. Report Objectives The purpose of the report is to identify the most economic and...report stated that critical military missions are at a high risk of failure in the event of an electric grid breakdown. (Defense Science Board, 2008

  3. Applicability of grid-net detection system for landfill leachate and diesel fuel release in the subsurface.

    PubMed

    Oh, Myounghak; Seo, Min Woo; Lee, Seunghak; Park, Junboum

    2008-02-19

    The grid-net system, which estimates electrical conductivity changes, was evaluated as a potential detection system for leakage of diesel fuel and landfill leachate. The pattern of electrical conductivity change varied with the type of contaminant. The electrical conductivity in homogeneous mixtures of soil and landfill leachate increased linearly with the ionic concentration of the pore fluid, and the effect was more pronounced at higher volumetric water contents. In contrast, the electrical conductivity in soil/diesel fuel mixtures decreased with diesel fuel content, and the effect was more pronounced at lower water contents. The electrode spacing should therefore be determined by considering the type of contaminant, to enhance electrode sensitivity, especially when two-electrode sensors are to be used. The electrode sensitivity for landfill leachate remained constant regardless of electrode spacing, while that for diesel fuel increased significantly at smaller electrode spacings. This is likely because the insulating-barrier effect of diesel fuel in the non-aqueous phase is less predominant at large electrode spacings, since electrical current can form round-about paths through the volume with relatively little diesel fuel content. The model test results showed that the grid-net detection system can be used to monitor leakage at waste landfill and underground storage tank sites. However, for successful field application of the detection system, data under various field conditions should be accumulated.

  4. Evaluation of myocardial defect detection between parallel-hole and fan-beam SPECT using the Hotelling trace

    NASA Astrophysics Data System (ADS)

    Wollenweber, S. D.; Tsui, B. M. W.; Lalush, D. S.; Frey, E. C.; Gullberg, G. T.

    1998-08-01

    The objective of this study was to implement the Hotelling trace (HT) to evaluate the potential increase in defect detection in myocardial SPECT using high-resolution fan-beam (HRF) versus parallel-hole (HRP) collimation and compare results to a previously reported human observer study (G.K. Gregoriou et al., ibid., vol. 42, p. 1267-75, 1995). Projection data from the 3D MCAT torso phantom were simulated including the effects of attenuation, collimator-detector response blurring and scatter. Poisson noise fluctuations were then simulated. The HRP and HRF collimators had the same spatial resolution at 20 cm. The total counts in the projection data sets were proportional to the detection efficiencies of the collimators and on the order of that found in clinical Tc-99m studies. In six left-ventricular defect locations, the HT found for HRF was superior to that for HRP collimation. For HRF collimation, the HT was calculated for reconstructed images using 64×64, 128×128 and 192×192 grid sizes. The results demonstrate substantial improvement in myocardial defect detection when the grid size was increased from 64×64 to 128×128 and slight improvement from 128×128 to 192×192. Also, the performance of the Hotelling observer in terms of the HT at the different grid sizes correlates at better than 0.95 to that found in human observers in a previously reported observer experiment and ROC study.
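    For two classes of feature vectors (defect present vs. absent), the Hotelling trace is J = tr(Sw⁻¹Sb), the between-class scatter measured against the pooled within-class scatter. A minimal pure-Python sketch for 2-D features (the sample vectors are illustrative, not the study's channel outputs):

```python
def hotelling_trace(class_a, class_b):
    """Hotelling trace J = tr(Sw^-1 Sb) for two classes of 2-D feature
    vectors; a larger J indicates better class separability."""
    def mean(pts):
        n = float(len(pts))
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    ma, mb = mean(class_a), mean(class_b)

    # pooled within-class scatter (tiny ridge keeps Sw invertible)
    sw = [[1e-12, 0.0], [0.0, 1e-12]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    sw[i][j] += d[i] * d[j]

    # between-class scatter from the class-mean difference
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    sb = [[d[i] * d[j] for j in range(2)] for i in range(2)]

    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    # tr(inv @ sb)
    return sum(inv[i][k] * sb[k][i] for i in range(2) for k in range(2))

present = [(0.0, 0.0), (0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]
far     = [(5.0, 5.0), (5.1, 5.0), (4.9, 5.0), (5.0, 5.1), (5.0, 4.9)]
near    = [(0.2, 0.1), (0.1, 0.2), (0.0, 0.0), (-0.1, -0.1), (0.05, 0.0)]
j_separated = hotelling_trace(present, far)
j_overlapping = hotelling_trace(present, near)
```

    Well-separated classes yield a much larger trace than overlapping ones, which is how the study ranks collimator/grid-size combinations.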

  5. Real time flaw detection and characterization in tube through partial least squares and SVR: Application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, Shamim; Miorelli, Roberto; Calmon, Pierre; Anselmi, Nicola; Salucci, Marco

    2018-04-01

    This paper describes a Learning-By-Examples (LBE) technique for performing quasi real-time flaw localization and characterization within a conductive tube based on Eddy Current Testing (ECT) signals. Within the framework of LBE, the combination of full-factorial (i.e., GRID) sampling and Partial Least Squares (PLS) feature extraction (i.e., GRID-PLS) techniques is applied to generate a suitable training set in the offline phase. Support Vector Regression (SVR) is utilized for model development and inversion during the offline and online phases, respectively. The performance and robustness of the proposed GRID-PLS/SVR strategy on a noisy test set is evaluated and compared with the standard GRID/SVR approach.
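    Full-factorial (GRID) sampling simply enumerates every combination of the chosen parameter levels to build the offline training set. A short sketch (the flaw parameters and levels below are hypothetical, not taken from the paper):

```python
from itertools import product

def full_factorial(levels):
    """Full-factorial (GRID) sampling: one training point per combination
    of the discrete levels chosen for each parameter."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

# hypothetical flaw parameters for an ECT tube inspection
grid = full_factorial({
    "depth_mm":  [0.2, 0.5, 0.8],
    "length_mm": [1.0, 2.0],
    "angle_deg": [0, 45, 90],
})
```

    The 3×2×3 design yields 18 training configurations, which would then be fed to the forward ECT solver to simulate signals for SVR training.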

  6. Ant colony clustering with fitness perception and pheromone diffusion for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Ji, Junzhong; Song, Xiangjing; Liu, Chunnian; Zhang, Xiuzhen

    2013-08-01

    Community structure detection in complex networks has been intensively investigated in recent years. In this paper, we propose an adaptive approach based on ant colony clustering to discover communities in a complex network. The focus of the method is the clustering process of an ant colony in a virtual grid, where each ant represents a node in the complex network. During the ant colony search, the method uses a new fitness function to perceive the local environment and employs a pheromone diffusion model as a global information feedback mechanism to realize information exchange among ants. A significant advantage of our method is that the locations in the grid environment and the connections of the complex network structure are simultaneously taken into account as the ants move. Experimental results on computer-generated and real-world networks show the capability of our method to successfully detect community structures.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
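    The key idea, spacing search-sector centers by a fixed great-circle arc so that the number of longitudinal samples shrinks toward the poles instead of blowing up, can be sketched as follows (a simplified illustration, not the released Stride Search code; the sector radius is an assumed parameter):

```python
import math

def stride_centers(radius_km, earth_radius_km=6371.0):
    """Place search-sector centers so neighbouring centers are roughly one
    sector radius apart in great-circle distance at every latitude."""
    stride_deg = math.degrees(radius_km / earth_radius_km)
    centers = []
    lat = -90.0
    while lat <= 90.0:
        if abs(lat) >= 90.0 - 1e-9:
            centers.append((lat, 0.0))          # a single center at the pole
        else:
            # widen the longitude stride by 1/cos(lat) at high latitude
            lon_stride = min(stride_deg / math.cos(math.radians(lat)), 360.0)
            n = max(1, int(360.0 / lon_stride))
            centers.extend((lat, i * 360.0 / n) for i in range(n))
        lat += stride_deg
    return centers

centers = stride_centers(500.0)
row_counts = {}
for lat, _ in centers:
    row_counts[lat] = row_counts.get(lat, 0) + 1
```

    A uniform lat-lon grid point search would instead visit every longitude at every latitude row, oversampling the poles by a factor of 1/cos(lat).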

  8. A novel image-based BRDF measurement system and its application to human skin

    NASA Astrophysics Data System (ADS)

    Bintz, Jeffrey R.; Mendenhall, Michael J.; Marciniak, Michael A.; Butler, Samuel D.; Lloyd, James Tommy

    2016-09-01

    Human skin detection is an important first step in search and rescue (SAR) scenarios. Previous research performed human skin detection through an application-specific camera system that exploits the spectral properties of human skin at two visible and two near-infrared (NIR) wavelengths. The current theory assumes human skin is diffuse; however, it is observed that human skin exhibits both specular and diffuse reflectance properties. This paper presents a novel image-based bidirectional reflectance distribution function (BRDF) measurement system, and applies it to the collection of human skin BRDF. The system uses a grid-projecting laser and a novel signal processing chain to extract the surface normal from each grid location. Human skin BRDF measurements are shown for a variety of melanin content and hair coverage at the four spectral channels needed for human skin detection. The NIR results represent a novel contribution to the existing body of human skin BRDF measurements.

  9. Views of the self and others at different ages: utility of repertory grid technique in detecting the positivity effect in aging.

    PubMed

    Williams, Ben D; Harter, Stephanie Lewis

    2010-01-01

    Socioemotional selectivity theory (Carstensen, 1995) posits a "positivity effect" in older adults, describing an increasing tendency to attend to, process, interpret, and remember events and others in life in a positive fashion as one ages. Drawing on personal construct theory, Viney (1993) observes increasing integration of constructions of self with others across the lifespan. The current study extends assessment of the positivity effect, integrating it with personal construct theory, by use of Repertory Grid (RepGrid) analysis. Consistent with the positivity effect, older adults (ages 54-86) described others more positively on RepGrid measures in comparison to younger adults (ages 18-25). Older adults also described the self as more similar to others and tended to describe the self more positively. The age groups did not differ in measures of psychological distress or well-being, with the exception of older adults describing more autonomy.

  10. Fault Detection, Diagnosis, and Mitigation for Long-Duration AUV Missions with Minimal Human Intervention

    DTIC Science & Technology

    2014-09-30

    Duration AUV Missions with Minimal Human Intervention James Bellingham Monterey Bay Aquarium Research Institute 7700 Sandholdt Road Moss Landing...subsystem failures and environmental challenges. For example, should an AUV suffer the failure of one of its internal actuators, can that failure be...reduce the need for operator intervention in the event of performance anomalies on long-duration AUV deployments, - To allow the vehicle to detect

  11. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.

    PubMed

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-04-28

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability.
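    The two stages, a self-credibility check from a sensor's own temporal data correlation followed by neighbor-cooperation diagnosis, can be sketched as a toy model (the deviation threshold and the majority-vote rule are illustrative assumptions, not the paper's exact credibility model):

```python
def suspicious(history, reading, tol=3.0):
    """A sensor flags itself suspicious when a new reading breaks its own
    temporal correlation: it deviates from the recent mean by more than
    tol standard deviations (illustrative threshold)."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n
    std = var ** 0.5 or 1e-9
    return abs(reading - mean) > tol * std

def diagnose(neighbor_votes):
    """A suspicious sensor launches a diagnosis request; neighbors reply
    'faulty' or 'normal', and a simple majority decides its status."""
    faulty = sum(1 for v in neighbor_votes if v == "faulty")
    return "faulty" if faulty > len(neighbor_votes) / 2 else "normal"
```

    Only sensors that first fail the self-check send diagnosis requests, which is what keeps the transmission overhead low in the paper's scheme.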

  12. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids

    PubMed Central

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-01-01

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability. PMID:28452925

  13. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    PubMed

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. 
    We developed a novel de-arraying approach for TMA analysis. By combining wavelet-based detection, active contour segmentation, and thin-plate spline interpolation, our approach is able to handle TMA images with high dynamic range, poor signal-to-noise ratio, complex background, and non-linear deformation of the TMA grid. In addition, the deformation estimation produces quantitative information to assess the manufacturing quality of TMAs.
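    The core sample-matching step can be illustrated with a greedy nearest-neighbour assignment of detected cores to design-grid positions (a simplified stand-in for ATMAD's thin-plate-spline matching; the coordinates below are illustrative):

```python
def match_to_grid(detections, design, max_dist):
    """Greedily pair each detected tissue core with the closest unclaimed
    design-grid position, skipping pairs farther apart than max_dist.
    Returns {design_index: detection_index}; unmatched design positions
    correspond to missing cores."""
    pairs = sorted(
        (((dx - gx) ** 2 + (dy - gy) ** 2) ** 0.5, gi, di)
        for gi, (gx, gy) in enumerate(design)
        for di, (dx, dy) in enumerate(detections)
    )
    matched, used = {}, set()
    for dist, gi, di in pairs:
        if dist > max_dist:
            break
        if gi not in matched and di not in used:
            matched[gi] = di
            used.add(di)
    return matched

# two detected cores, slightly displaced; the third design spot is empty
matched = match_to_grid([(0.4, 0.2), (10.3, -0.1)],
                        [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                        max_dist=2.0)
```

    In ATMAD the residual displacements of such matches drive the thin-plate-spline deformation estimate, which in turn recovers cores rejected in the first segmentation pass.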

  14. On-line detection of key radionuclides for fuel-rod failure in a pressurized water reactor.

    PubMed

    Qin, Guoxiu; Chen, Xilin; Guo, Xiaoqing; Ni, Ning

    2016-08-01

    For early on-line detection of fuel rod failure, the key radionuclides useful in monitoring must leak easily from failing rods. Yield, half-life, and mass share of fission products that enter the primary coolant also need to be considered in on-line analyses. From all the nuclides that enter the primary coolant during fuel-rod failure, (135)Xe and (88)Kr were ultimately chosen as crucial for on-line monitoring of fuel-rod failure. A monitoring system for fuel-rod failure detection for pressurized water reactor (PWR) based on the LaBr3(Ce) detector was assembled and tested. The samples of coolant from the PWR were measured using the system as well as a HPGe γ-ray spectrometer. A comparison showed the method was feasible. Finally, the γ-ray spectra of primary coolant were measured under normal operations and during fuel-rod failure. The two peaks of (135)Xe (249.8keV) and (88)Kr (2392.1keV) were visible, confirming that the method is capable of monitoring fuel-rod failure on-line. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. SU-E-J-81: Beveled Needle Tip Detection Error in Ultrasound-Guided Prostate Brachytherapy.

    PubMed

    Leu, S; Ruiz, B; Podder, T

    2012-06-01

    To quantify needle tip detection errors in ultrasound images due to bevel-tip orientation in relation to the location on the template grid. A transrectal ultrasound (TRUS) system (BK Medical) with a physical template grid and an 18-gauge bevel-tip (20-deg beveled angle) brachytherapy needle (Bard Medical, Covington, GA) were used. The TRUS was set at 6.5MHz in a water phantom at 40°C and measurements were taken with 50% and 100% TRUS gains. Needles were oriented with the bevel-tip facing up (0-degree) and inserted through template grid-holes. Reference needle depths were measured when needle tip image intensity was bright enough for potentially consistent readings. A high-resolution digital vernier caliper was used to measure needle depth. The needle bevel-tip orientation was then changed to bevel down (by rotating 180-degree) and the needle depth was adjusted by retracting so that the needle-tip image intensity appeared similar to when the bevel-tip was at the 0-degree orientation. Clinically relevant locations were considered for needle placement on the template grids (1st row to 9th row, and 'a-f' columns). For 50% TRUS gain, bevel tip detection errors/differences were 0.69±0.30mm (1st row) to 3.23±0.22mm (9th row) and 0.78±0.71mm (1st row) to 4.14±0.56mm (9th row) in columns 'a' and 'D', respectively. The corresponding errors for 100% TRUS gain were 0.57±0.25mm to 5.24±0.36mm and 0.84±0.30mm to 4.2±0.20mm in columns 'a' and 'D', respectively. These errors/differences varied linearly, from smaller to larger, for grid-hole locations on the rows and columns in between, depending on distance from the TRUS probe. No effect of gain (50% vs. 100%) was observed along the 'D' column, which was directly above the TRUS probe. Experimental results revealed that the beveled needle tip orientation could significantly impact the detection accuracy of the needle tips, based on which the seeds might be delivered.
These errors may lead to considerable dosimetric deviations in prostate brachytherapy seed implantation. © 2012 American Association of Physicists in Medicine.

  16. Signal analysis techniques for incipient failure detection in turbomachinery

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1985-01-01

    Signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery were developed, implemented and evaluated. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques were implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. Plans for further technique evaluation and data base development to characterize turbopump incipient failure modes from Space Shuttle main engine (SSME) hot firing measurements are outlined.

  17. Evaluation of ENEPIG and Immersion Silver Surface Finishes Under Drop Loading

    NASA Astrophysics Data System (ADS)

    Pearl, Adam; Osterman, Michael; Pecht, Michael

    2016-01-01

    The effect of printed circuit board surface finish on the drop loading reliability of ball grid array (BGA) solder interconnects has been examined. The finishes examined include electroless nickel/electroless palladium/immersion gold (ENEPIG) and immersion silver (ImAg). For the ENEPIG finish, the effect of the Pd plating layer thickness was evaluated by testing two different thicknesses: 0.05 μm and 0.15 μm. BGA components were assembled onto the boards using either eutectic Sn-Pb or Sn-3.0Ag-0.5Cu (SAC305) solder. Prior to testing, the assembled boards were aged at 100°C for 24 h or 500 h. The boards were then subjected to multiple 1500-g drop tests. Failure analysis indicated the primary failure site for the BGAs to be the solder balls at the board-side solder interface. Cratering of the board laminate under the solder-attached pads was also observed. In all cases, isothermal aging reduced the number of drops to failure. The components soldered onto the boards with the 0.15- μm-Pd ENEPIG finish with the SAC305 solder had the highest characteristic life, at 234 drops to failure, compared with the other finish-solder combinations.

  18. Sample Introduction Using the Hildebrand Grid Nebulizer for Plasma Spectrometry

    DTIC Science & Technology

    1988-01-01

    linear dynamic ranges, precision, and peak width were determined for elements in methanol and acetonitrile solutions. The grid nebulizer was...FIA) with ICP-OES detection were evaluated. Detection limits, linear dynamic ranges, precision, and peak width were determined for elements in...Concentration vs. Log Peak Area for Mn, 59 Cd, Zn, Au, Ni in Methanol (CMSC) 3-28 Log Concentration vs. Log Peak Area for Mn, 60 Cd, Au, Ni in

  19. The Challenges of Defense Support of Civil Authorities and Homeland Defense in the Cyber Domain

    DTIC Science & Technology

    2013-05-20

    Information Grid (GIG) against a cyber attack has taken the forefront in national level discussions. The U.S. homeland’s assumed sanctuary against...other U.S. government agencies and key operators within the private sector to detect, deter, prevent, and thwart exploitation of CIKR and the GIG...CIKR) and the Global Information Grid (GIG) against a cyber attack has taken the forefront in national level discussions. The U.S. homeland’s

  20. Data Fusion Analysis for Range Test Validation System

    DTIC Science & Technology

    2010-07-14

    simulants were released during the RTVS ’08 test series: triethyl phosphate (TEP), methyl salicylate (MeS), and acetic acid (AA). A total of 29 release...the combination of a grid of point sensors at ground level and a standoff FTIR system monitoring above ground areas proved effective in detecting the...presence of simulants over the test grid. A Dempster-Shafer approach for data fusion was selected as the most effective strategy for RTVS data fusion

  1. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA-Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angles of attack, angles of sideslip, dynamic pressure, and static pressure as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with this sensing system indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple-port sensing system is more tolerant of small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. It is shown how system and individual port failures may be detected using chi-square analysis. Once identified, the effects of failures are eliminated using weighted least squares.
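    The chi-square failure test can be sketched on the port-pressure residuals (the pressures, sigma, and the threshold below are illustrative values, not the FADS flight algorithm's):

```python
def chi_square_check(measured, predicted, sigma, threshold):
    """Global failure test on port-pressure residuals: if the chi-square
    statistic exceeds the threshold, report the port with the largest
    normalized residual as the failure candidate; otherwise report None."""
    residuals = [(m - p) / sigma for m, p in zip(measured, predicted)]
    chi2 = sum(r * r for r in residuals)
    if chi2 <= threshold:
        return chi2, None
    worst = max(range(len(residuals)), key=lambda i: abs(residuals[i]))
    return chi2, worst

# hypothetical port pressures (normalized); port 3 has failed high
chi2, bad_port = chi_square_check([1.00, 1.01, 0.99, 2.50],
                                  [1.0, 1.0, 1.0, 1.0],
                                  sigma=0.05, threshold=9.49)
```

    Once a port is flagged, a weighted least-squares refit with that port's weight zeroed removes its influence on the airdata solution.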

  2. Gear Fault Detection Effectiveness as Applied to Tooth Surface Pitting Fatigue Damage

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Dempsey, Paula J.; Heath, Gregory F.; Shanthakumaran, Perumal

    2009-01-01

    A study was performed to evaluate fault detection effectiveness as applied to gear tooth pitting fatigue damage. Vibration and oil-debris monitoring (ODM) data were gathered from 24 sets of spur pinion and face gears run during a previous endurance evaluation study. Three common condition indicators (RMS, FM4, and NA4) were computed from the time-averaged vibration data and used with the ODM to evaluate their performance for gear fault detection. The NA4 parameter proved to be a very good condition indicator for detecting gear tooth surface pitting failures. The FM4 and RMS parameters performed average to below average in detecting gear tooth surface pitting failures. The ODM sensor successfully detected a significant amount of debris from all of the gear tooth pitting fatigue failures. Excluding outliers, the average cumulative mass at the end of a test was 40 mg.
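    The RMS and FM4 indicators follow directly from their standard definitions: RMS is the signal's root-mean-square level, and FM4 is the normalized fourth moment (kurtosis) of the difference signal, i.e. the vibration signal with the regular gear-mesh components removed. A sketch (the sine test signal is illustrative):

```python
import math

def rms(x):
    """Root-mean-square level of a signal."""
    return (sum(v * v for v in x) / len(x)) ** 0.5

def fm4(diff):
    """FM4: fourth central moment of the difference signal divided by the
    square of its variance; impulsive peaks from isolated tooth damage
    drive it up from its smooth-signal baseline."""
    n = len(diff)
    mean = sum(diff) / n
    d = [v - mean for v in diff]
    m2 = sum(v * v for v in d) / n
    m4 = sum(v ** 4 for v in d) / n
    return m4 / (m2 * m2)

sig = [math.sin(2 * math.pi * i / 256) for i in range(256)]  # healthy baseline
damaged = list(sig)
damaged[10] += 10.0                                          # impulsive spike
```

    For a pure sine FM4 is 1.5; a single impulse raises it sharply, which is the behavior exploited for pitting detection.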

  3. A preliminary design for flight testing the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1986-01-01

    This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.

  4. Making MUSIC: A multiple sampling ionization chamber

    NASA Astrophysics Data System (ADS)

    Shumard, B.; Henderson, D. J.; Rehm, K. E.; Tang, X. D.

    2007-08-01

    A multiple sampling ionization chamber (MUSIC) was developed for use in conjunction with the Atlas scattering chamber (ATSCAT). This chamber was developed to study the (α, p) reaction in stable and radioactive beams. The gas-filled ionization chamber is used as a target and detector for both particles in the outgoing channel (p + beam particles for elastic scattering or p + residual nucleus for (α, p) reactions). The MUSIC detector is followed by a Si array to provide a trigger for anode events. The anode events are gated by a gating grid so that only (α, p) reactions where the proton reaches the Si detector result in an anode event. The MUSIC detector is a segmented ionization chamber. The active length of the chamber is 11.95 in. and is divided into 16 equal anode segments (3.5 in. × 0.70 in. with 0.3 in. spacing between pads). The dead area of the chamber was reduced by the addition of a Delrin snout that extends 0.875 in. into the chamber from the front face, to which a mylar window is affixed. 0.5 in. above the anode is a Frisch grid that is held at ground potential. 0.5 in. above the Frisch grid is a gating grid, which functions as a drift-electron barrier: setting its two sets of alternating wires at differing potentials creates a lateral electric field that traps the drift electrons, halting the collection of anode signals. The chamber also has a reinforced mylar exit window separating the Si array from the target gas. This allows protons from the (α, p) reaction to be detected. The detection of these protons opens the gating grid to allow the drift electrons released from the ionizing gas during the (α, p) reaction to reach the anode segment below the reaction.

  5. Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.; Ramirez, Margarita

    2014-01-01

    A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed-design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in levels of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency has a significant effect on pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher level of failures, regardless of level of automation.
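
D Prime and Decision Criterion are the standard signal-detection-theory measures derived from hit and false-alarm rates. A minimal sketch follows; the response tallies are hypothetical, and the log-linear correction is one common convention for avoiding infinite z-scores:

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf   # inverse standard-normal CDF

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    # log-linear correction keeps z-scores finite when a rate is 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = Z(hit_rate) - Z(fa_rate)                 # sensitivity
    criterion = -0.5 * (Z(hit_rate) + Z(fa_rate))      # response bias
    return d_prime, criterion

# hypothetical tallies for one pilot's failure-diagnosis responses
d, c = dprime_and_criterion(hits=18, misses=2,
                            false_alarms=3, correct_rejections=17)
```

A higher d′ means better discrimination of real failures from nominal operation; a criterion near zero means no systematic bias toward reporting or withholding failure calls.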

  6. Early detection of nonneurologic organ failure in patients with severe traumatic brain injury: Multiple organ dysfunction score or sequential organ failure assessment?

    PubMed

    Ramtinfar, Sara; Chabok, Shahrokh Yousefzadeh; Chari, Aliakbar Jafari; Reihanian, Zoheir; Leili, Ehsan Kazemnezhad; Alizadeh, Arsalan

    2016-10-01

    The aim of this study is to compare the discriminant function of the multiple organ dysfunction score (MODS) and sequential organ failure assessment (SOFA) components in predicting Intensive Care Unit (ICU) mortality and neurologic outcome. A descriptive-analytic study was conducted at a level I trauma center. Data were collected from patients with severe traumatic brain injury admitted to the neurosurgical ICU. Basic demographic data and SOFA and MOD scores were recorded daily for all patients. Odds ratios (ORs) were calculated to determine the relationship of each component score to mortality, and the area under the receiver operating characteristic (AUROC) curve was used to compare the discriminative ability of the two tools with respect to ICU mortality. The most common organ failure observed was respiratory, detected by SOFA in 26% and by MODS in 13% of patients; the second most common was cardiovascular, detected by SOFA in 18% and by MODS in 13%. No hepatic or renal failure occurred, and coagulation failure was reported as 2.5% by both SOFA and MODS. Cardiovascular failure defined by both tools correlated with ICU mortality, and the correlation was stronger for SOFA (OR = 6.9, CI = 3.6-13.3, P < 0.05 for SOFA; OR = 5, CI = 3-8.3, P < 0.05 for MODS; AUROC = 0.82 for SOFA; AUROC = 0.73 for MODS). The relationship of cardiovascular failure to dichotomized neurologic outcome was not statistically significant. ICU mortality was not associated with respiratory or coagulation failure. Cardiovascular failure defined by either tool was significantly related to ICU mortality. Compared to MODS, SOFA-defined cardiovascular failure was a stronger predictor of death. ICU mortality was not affected by respiratory or coagulation failures.
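
The AUROC used to compare the two scoring tools can be computed directly as the Mann-Whitney statistic: the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor. A minimal sketch with invented sub-scores (not study data):

```python
def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney statistic: fraction of (positive,
    negative) pairs where the positive case scores higher, ties = 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# toy cardiovascular sub-scores: non-survivors vs survivors (hypothetical)
dead = [3, 4, 4, 2]
alive = [0, 1, 2, 1, 0]
area = auroc(dead, alive)
```

An AUROC of 0.5 means no discrimination; values such as the reported 0.82 (SOFA) vs 0.73 (MODS) quantify how much more often the failing tool's score ranks a non-survivor above a survivor.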

  7. The Identification of Software Failure Regions

    DTIC Science & Technology

    1990-06-01

    be used to detect non-obviously redundant test cases. A preliminary examination of the manual analysis method is performed with a set of programs ...failure regions are defined and a method of failure region analysis is described in detail. The thesis describes how this analysis may be used to detect...is the termination of the ability of a functional unit to perform its required function. (Glossary, 1983) The presence of faults in program code

  8. Fault Detection and Isolation for Hydraulic Control

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Pressure sensors and isolation valves act to shut down a defective servochannel. Redundant hydraulic system indirectly senses failure in any of its electrical control channels and mechanically isolates the hydraulic channel controlled by the faulty electrical channel so that it cannot participate in operating the system. With the failure-detection and isolation technique, the system can sustain two failed channels and still function at full performance levels. Scheme useful on aircraft or other systems with hydraulic servovalves where failure cannot be tolerated.

  9. Recent development of the Multi-Grid detector for large area neutron scattering instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerard, Bruno

    2015-07-01

    Most of the Neutron Scattering facilities are committed to a continuous program of modernization of their instruments, requiring large-area and high-performance thermal neutron detectors. Besides scintillator detectors, {sup 3}He detectors, like linear PSDs (Position Sensitive Detectors) and MWPCs (Multi-Wire Proportional Chambers), are the most widely used techniques today. Time-of-flight instruments use {sup 3}He PSDs mounted side by side to cover tens of m{sup 2}. As a result of the so-called '{sup 3}He shortage crisis', the volume of {sup 3}He needed to build one of these instruments is not accessible anymore. The development of alternative techniques requiring no {sup 3}He has been given high priority to secure the future of neutron scattering instrumentation. This is particularly important in the context where the future ESS (European Spallation Source) will start its operation in 2019-2020. Improved scintillators represent one of the alternative techniques. Another one is the Multi-Grid introduced at the ILL in 2009. A Multi-Grid detector is composed of several independent modules of typically 0.8 m x 3 m sensitive area, mounted side by side in air or in a vacuum TOF chamber. One module is composed of segmented boron-lined proportional counters mounted in a gas vessel; the counters, of square section, are assembled with Aluminium grids electrically insulated and stacked together. This design provides two advantages: first, magnetron sputtering techniques can be used to coat B{sub 4}C films on planar substrates, and second, the neutron position along the anode wires can be measured by reading out individually the grid signals with fast shaping amplifiers followed by comparators.
Unlike charge-division localisation in linear PSDs, the individual readout of the grids allows operating the Multi-Grid at a low amplification gain; hence this detector is tolerant of mechanical defects and its production is accessible to laboratories with standard equipment. Prototypes of different configurations and sizes have been developed and tested. A demonstrator, with a sensitive area of 0.8 m x 3 m, has been studied during the CRISP European project; it contains 1024 grids and a surface of isotopically enriched B{sub 4}C film close to 80 m{sup 2}. Its size represented a challenge in terms of fabrication and mounting of the detection elements. Another challenge was to make the gas chamber mechanically compatible with operation in a vacuum TOF chamber. Optimal working condition of this detector was achieved by flushing Ar-CO{sub 2} at a pressure of 50 mbar and by applying 400 Volts on the anodes. This unusual gas pressure greatly simplifies the mechanics of the gas vessel in vacuum. The detection efficiency has been measured with high precision for different film thicknesses; 52% has been measured at 2.5 Angstrom, in good agreement with the MC simulation. A high position resolution has been achieved by centre-of-gravity measurement of the TOT (Time-Over-Threshold) signals between neighbouring grids. These results, as well as other detection parameters, including gamma sensitivity and spatial uniformity, will be presented. (author)
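
The centre-of-gravity interpolation over neighbouring grid signals amounts to a charge-weighted mean of the TOT amplitudes; a minimal sketch, with illustrative grid pitch and amplitudes (not measured values):

```python
def cog_position(grid_pos, tot):
    """Charge-weighted centre of gravity of time-over-threshold
    amplitudes on neighbouring grids: sub-grid hit interpolation."""
    total = sum(tot)
    return sum(p * a for p, a in zip(grid_pos, tot)) / total

# three adjacent grids at 10 mm pitch sharing one neutron's induced charge
pos = cog_position([0.0, 10.0, 20.0], tot=[0.3, 1.0, 0.3])
```

With a symmetric charge split the reconstructed position sits on the central grid; an asymmetric split shifts it proportionally, which is what yields resolution finer than the grid pitch.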

  10. Dielectric Spectroscopic Detection of Early Failures in 3-D Integrated Circuits.

    PubMed

    Obeng, Yaw; Okoro, C A; Ahn, Jung-Joon; You, Lin; Kopanski, Joseph J

    The commercial introduction of three-dimensional integrated circuits (3D-ICs) has been hindered by reliability challenges, such as stress-related failures, resistivity changes, and unexplained early failures. In this paper, we discuss a new RF-based metrology, based on dielectric spectroscopy, for detecting and characterizing electrically active defects in fully integrated 3D devices. These defects are traceable to the chemistry of the isolation dielectrics used in the through-silicon via (TSV) construction. We show that these defects may be responsible for some of the unexplained early reliability failures observed in TSV-enabled 3D devices.

  11. Redundancy management of multiple KT-70 inertial measurement units applicable to the space shuttle

    NASA Technical Reports Server (NTRS)

    Cook, L. J.

    1975-01-01

    Results of an investigation of velocity failure detection and isolation for three-IMU and two-IMU (inertial measurement unit) configurations are presented. The failure detection and isolation algorithm performance was highly successful, and most types of velocity errors were detected and isolated. The failure detection and isolation algorithm also included attitude FDI, but this was not evaluated because of the lack of time and the low resolution of the gimbal angle synchro outputs. The shuttle KT-70 IMUs will have dual-speed resolvers and high-resolution gimbal angle readouts. It was demonstrated by these tests that a single computer utilizing a serial data bus can successfully control a redundant three-IMU system and perform FDI.

  12. Detecting Structural Failures Via Acoustic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Joshi, Sanjay S.

    1995-01-01

    Advanced method of acoustic pulse reflectivity testing developed for use in determining sizes and locations of failures within structures. Used to detect breaks in electrical transmission lines, detect faults in optical fibers, and determine mechanical properties of materials. In method, structure vibrationally excited with acoustic pulse (a "ping") at one location and acoustic response measured at same or different location. Measured acoustic response digitized, then processed by finite-impulse-response (FIR) filtering algorithm unique to method and based on acoustic-wave-propagation and -reflection properties of structure. Offers several advantages: does not require training, does not require prior knowledge of mathematical model of acoustic response of structure, enables detection and localization of multiple failures, and yields data on extent of damage at each location.

  13. Heart Failure and Frailty in the Community-Living Elderly Population: What the UFO Study Will Tell Us.

    PubMed

    Fung, Erik; Hui, Elsie; Yang, Xiaobo; Lui, Leong T; Cheng, King F; Li, Qi; Fan, Yiting; Sahota, Daljit S; Ma, Bosco H M; Lee, Jenny S W; Lee, Alex P W; Woo, Jean

    2018-01-01

    Heart failure and frailty are clinical syndromes that present with overlapping phenotypic characteristics. Importantly, their co-presence is associated with increased mortality and morbidity. While mechanical and electrical device therapies for heart failure are vital for select patients with advanced stage disease, the majority of patients and especially those with undiagnosed heart failure would benefit from early disease detection and prompt initiation of guideline-directed medical therapies. In this article, we review the problematic interactions between heart failure and frailty, introduce a focused cardiac screening program for community-living elderly initiated by a mobile communication device app leading to the Undiagnosed heart Failure in frail Older individuals (UFO) study, and discuss how the knowledge of pre-frailty and frailty status could be exploited for the detection of previously undiagnosed heart failure or advanced cardiac disease. The widespread use of mobile devices coupled with increasing availability of novel, effective medical and minimally invasive therapies have incentivized new approaches to heart failure case finding and disease management.

  14. A Methodology for the Estimation of the Wind Generator Economic Efficiency

    NASA Astrophysics Data System (ADS)

    Zaleskis, G.

    2017-12-01

    Integration of renewable energy sources and the improvement of the technological base may not only reduce the consumption of fossil fuel and environmental load, but also ensure the power supply in regions with difficult fuel delivery or power failures. The main goal of the research is to develop the methodology of evaluation of the wind turbine economic efficiency. The research has demonstrated that the electricity produced from renewable sources may be much more expensive than the electricity purchased from the conventional grid.

  15. Research on Spectroscopy, Opacity, and Atmospheres

    NASA Technical Reports Server (NTRS)

    Kurucz, Robert L.

    1996-01-01

    I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible such as the factor of ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration.

  16. Flight test results of the strapdown ring laser gyro tetrad inertial navigation system

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.

    1983-01-01

    A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser-gyro inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off-line parity-residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue to navigate successfully by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the 4 yr of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.

  17. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    PubMed

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  18. Magnetic Field Would Reduce Electron Backstreaming in Ion Thrusters

    NASA Technical Reports Server (NTRS)

    Foster, John E.

    2003-01-01

    The imposition of a magnetic field has been proposed as a means of reducing the electron backstreaming problem in ion thrusters. Electron backstreaming refers to the backflow of electrons into the ion thruster. Backstreaming electrons are accelerated by the large potential difference that exists between the ion-thruster acceleration electrodes, which otherwise accelerates positive ions out of the engine to develop thrust. The energetic beam formed by the backstreaming electrons can damage the discharge cathode, as well as other discharge surfaces upstream of the acceleration electrodes. The electron-backstreaming condition occurs when the center potential of the ion accelerator grid is no longer sufficiently negative to prevent electron diffusion back into the ion thruster. This typically occurs over extended periods of operation as accelerator-grid apertures enlarge due to erosion. As a result, ion thrusters are required to operate at increasingly negative accelerator-grid voltages in order to prevent electron backstreaming. These larger negative voltages give rise to higher accelerator-grid erosion rates, which in turn accelerate aperture enlargement. Electron backstreaming due to accelerator-grid hole enlargement has been identified as a failure mechanism that will limit ion-thruster service lifetime. The proposed method would make it possible not only to reduce the electron backstreaming current at and below the backstreaming voltage limit, but also to reduce the backstreaming voltage limit itself. This reduction in the voltage at which electron backstreaming occurs provides operating margin and thereby reduces the magnitude of negative voltage that must be placed on the accelerator grid. Such a reduction reduces accelerator-grid erosion rates. The basic idea behind the proposed method is to impose a spatially uniform magnetic field downstream of the accelerator electrode that is oriented transverse to the thruster axis.
The magnetic field must be sufficiently strong to impede backstreaming electrons, but not so strong as to significantly perturb ion trajectories. An electromagnet or permanent magnetic circuit can be used to impose the transverse magnetic field downstream of the accelerator-grid electrode. For example, in the case of an accelerator grid containing straight, parallel rows of apertures, one can apply nearly uniform magnetic fields across all the apertures by the use of permanent magnets of alternating polarity connected to pole pieces laid out parallel to the rows, as shown in the left part of the figure. For low-temperature operation, the pole pieces can be replaced with bar magnets of alternating polarity. Alternatively, for the same accelerator grid, one could use an electromagnet in the form of current-carrying rods laid out parallel to the rows.
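
The field-strength window described above — strong enough to turn backstreaming electrons within the grid gap, weak enough to leave beam ions essentially straight — can be checked with Larmor radii. The 1 keV energies and 100 G field below are illustrative assumptions, not thruster design values:

```python
import math

E_CHARGE = 1.602e-19      # elementary charge, C
M_ELECTRON = 9.109e-31    # electron mass, kg
M_XENON = 2.18e-25        # Xe atom mass, kg

def larmor_radius(mass_kg, energy_eV, B_tesla):
    """Gyroradius r = m*v/(q*B) for a singly charged particle of the
    given kinetic energy moving perpendicular to the field."""
    v = math.sqrt(2 * energy_eV * E_CHARGE / mass_kg)
    return mass_kg * v / (E_CHARGE * B_tesla)

B = 0.01                                     # 100 gauss, illustrative
r_e = larmor_radius(M_ELECTRON, 1000.0, B)   # ~1 keV backstreaming electron
r_i = larmor_radius(M_XENON, 1000.0, B)      # ~1 keV xenon beam ion
```

At this field the electron gyroradius is on the order of a centimetre while the xenon-ion gyroradius is metres, so the same field that deflects backstreaming electrons leaves the ion trajectories nearly unperturbed, which is the mass-ratio argument the proposal rests on.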

  19. Sensor failure detection for jet engines using analytical redundancy

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1984-01-01

    Analytical redundant sensor failure detection, isolation and accommodation techniques for gas turbine engines are surveyed. Both the theoretical technology base and demonstrated concepts are discussed. Also included is a discussion of current technology needs and ongoing Government sponsored programs to meet those needs.

  20. A Cosmic Dust Sensor Based on an Array of Grid Electrodes

    NASA Astrophysics Data System (ADS)

    Li, Y. W.; Bugiel, S.; Strack, H.; Srama, R.

    2014-04-01

    We describe a low-mass, high-sensitivity cosmic dust trajectory sensor using an array of grid segments [1]. The sensor determines the particle velocity vector and the particle mass. An impact target is used for the detection of the impact plasma of high-speed particles like interplanetary dust grains or high-speed ejecta. Slower particles are measured by three planes of grid electrodes using charge induction. In contrast to conventional dust trajectory sensors based on wire electrodes, grid electrodes provide a robust and sensitive design with a trajectory resolution of a few degrees. Coulomb simulations and laboratory tests were performed to verify the instrument design. The signal shapes are used to derive the particle plane-intersection points and from them the exact particle trajectory. The accuracy of the instrument for the incidence angle depends on the particle charge, the position of the intersection point, and the signal-to-noise ratio of the charge-sensitive amplifier (CSA). This grid-electrode design has some advantages over conventional trajectory sensors using individual wire electrodes: the grid-segment electrodes show higher amplitudes (close to 100% induced charge) and the overall number of measurement channels can be reduced. This allows a compact instrument with low power and mass requirements.
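
Recovering the velocity direction from the three plane-intersection points reduces to fitting a straight line through them; a minimal sketch using the principal axis of the centred points (the coordinates below are invented, not instrument data):

```python
import numpy as np

# measured (x, y, z) intersection points on three grid planes, cm;
# z is the known height of each grid plane
points = np.array([[0.00, 0.00, 0.0],
                   [0.52, 0.19, 2.0],
                   [1.01, 0.41, 4.0]])

# least-squares straight line through the points: the first right-singular
# vector of the centred point cloud is the flight direction
centred = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centred)
direction = vt[0]
if direction[2] < 0:
    direction = -direction               # orient along increasing z

incidence_deg = float(np.degrees(np.arccos(direction[2])))  # angle from normal
```

With noisy intersection points the same SVD fit averages the measurement error over all three planes, which is why the angular accuracy depends on the charge-sensitive amplifier's signal-to-noise ratio.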

  1. Secure smart grid communications and information integration based on digital watermarking in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Yan, Xin; Zhang, Ling; Wu, Yang; Luo, Youlong; Zhang, Xiaoxing

    2017-02-01

    As more and more wireless sensor nodes and networks are employed to acquire and transmit the state information of power equipment in smart grid, we are in urgent need of some viable security solutions to ensure secure smart grid communications. Conventional information security solutions, such as encryption/decryption, digital signature and so forth, are not applicable to wireless sensor networks in smart grid any longer, where bulk messages need to be exchanged continuously. The reason is that these cryptographic solutions will account for a large portion of the extremely limited resources on sensor nodes. In this article, a security solution based on digital watermarking is adopted to achieve the secure communications for wireless sensor networks in smart grid by data and entity authentications at a low cost of operation. Our solution consists of a secure framework of digital watermarking, and two digital watermarking algorithms based on alternating electric current and time window, respectively. Both watermarking algorithms are composed of watermark generation, embedding and detection. The simulation experiments are provided to verify the correctness and practicability of our watermarking algorithms. Additionally, a new cloud-based architecture for the information integration of smart grid is proposed on the basis of our security solutions.
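
The generation/embedding/detection structure described here can be illustrated with a deliberately simple keyed least-significant-bit scheme. This is a generic sketch for intuition only, not the article's alternating-current or time-window watermarking algorithms:

```python
import hashlib

def gen_watermark(key, n):
    """Generation: a keyed pseudo-random bit sequence from hash chaining."""
    h = hashlib.sha256(key).digest()
    bits = []
    while len(bits) < n:
        h = hashlib.sha256(h).digest()
        bits.extend((byte >> i) & 1 for byte in h for i in range(8))
    return bits[:n]

def embed(samples, key):
    """Embedding: write watermark bits into the LSBs of quantized readings."""
    wm = gen_watermark(key, len(samples))
    return [(s & ~1) | b for s, b in zip(samples, wm)]

def detect(samples, key):
    """Detection: authentic if the LSBs match the keyed bit stream."""
    wm = gen_watermark(key, len(samples))
    matches = sum((s & 1) == b for s, b in zip(samples, wm))
    return matches / len(samples) > 0.95

readings = [512, 498, 505, 511, 700, 643, 652, 660] * 8  # quantized samples
marked = embed(readings, b"shared-key")
```

The appeal for sensor nodes is visible even in this toy: embedding and detection cost a few hashes and bit operations per batch, far less than signing every message, at the price of a one-bit perturbation per sample.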

  2. An Experimental Study of Launch Vehicle Propellant Tank Fragmentation

    NASA Technical Reports Server (NTRS)

    Richardson, Erin; Jackson, Austin; Hays, Michael; Bangham, Mike; Blackwood, James; Skinner, Troy; Richman, Ben

    2014-01-01

    In order to better understand launch vehicle abort environments, Bangham Engineering Inc. (BEi) built a test assembly that fails sample materials (steel and aluminum plates of various alloys and thicknesses) under quasi-realistic vehicle failure conditions. Samples are exposed to pressures similar to those expected in vehicle failure scenarios and filmed at high speed to increase understanding of complex fracture mechanics. After failure, the fragments of each test sample are collected, catalogued and reconstructed for further study. Post-test analysis shows that aluminum samples consistently produce fewer fragments than steel samples of similar thickness and at similar failure pressures. Video analysis shows that there are several failure 'patterns' that can be observed for all test samples based on configuration. Fragment velocities are also measured from high speed video data. Sample thickness and material are analyzed for trends in failure pressure. Testing is also done with cryogenic and noncryogenic liquid loading on the samples. It is determined that liquid loading and cryogenic temperatures can decrease material fragmentation for sub-flight thicknesses. A method is developed for capture and collection of fragments that is greater than 97 percent effective in recovering sample mass, addressing the generation of tiny fragments. Currently, samples tested do not match actual launch vehicle propellant tank material thicknesses because of size constraints on test assembly, but test findings are used to inform the design and build of another, larger test assembly with the purpose of testing actual vehicle flight materials that include structural components such as iso-grid and friction stir welds.

  3. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion and artificial-neural-network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of identifying the failure category and classifying it according to the failed element. During the third phase, a general evaluation of the failure is performed, with the estimation of the magnitude/severity of the failure and the prediction of its effect on reducing the flight envelope of the aircraft system.
Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty-space optimization, data fusion and duplicate removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. They were shown to have an important effect on detection performance and are a critical aspect of the AIS configuration design. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful to increase pilot situational awareness and determine automated compensation.

  4. The Amsterdam quintuplet nuclear microprobe

    NASA Astrophysics Data System (ADS)

    van den Putte, M. J. J.; van den Brand, J. F. J.; Jamieson, D. N.; Rout, B.; Szymanski, R.

    2003-09-01

    A new nuclear microprobe comprising a quintuplet lens system is being constructed at the Ion Beam Facility of the "Vrije Universiteit" Amsterdam in collaboration with the Microanalytical Research Centre of the University of Melbourne. An overview of the Amsterdam set-up will be presented. Detailed characterisation of the individual lenses was performed with the grid shadow method using a 2000 mesh Cu grid mounted at a relative angle of 0.5° to the vertical lens line focus. The lenses were found to have very low parasitic aberrations, at or below the minimum detectable limit of the method, which was approximately 0.1% for the sextupole component and 0.2% for the octupole component. We present experimental and theoretical grid shadow patterns, showing results for all five lenses.

  5. An energy-efficient failure detector for vehicular cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many roadside units (RSUs) are deployed along the road to improve connectivity. Many of them are equipped with solar batteries because wired electrical power is unavailable or too expensive, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to conserve RSU battery power. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy, and battery consumption.
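    The 2E-FD algorithm itself is not reproduced here, but the general shape of a heartbeat-style failure detector, where a longer heartbeat interval trades detection speed for radio use and hence battery drain, can be sketched as follows; the adaptive timeout rule and all constants are assumptions for illustration.

```python
# Illustrative heartbeat-style failure detector (not the actual 2E-FD
# algorithm): the timeout adapts to observed inter-arrival times, and a
# longer heartbeat interval saves radio use, hence RSU battery.

class HeartbeatDetector:
    def __init__(self, safety_margin=0.1, window=8):
        self.arrivals = []
        self.window = window              # heartbeats kept for estimation
        self.safety_margin = safety_margin

    def heartbeat(self, t):
        self.arrivals.append(t)
        self.arrivals = self.arrivals[-self.window:]

    def deadline(self):
        """Last arrival + mean inter-arrival gap + safety margin."""
        if len(self.arrivals) < 2:
            return float("inf")
        gaps = [b - a for a, b in zip(self.arrivals, self.arrivals[1:])]
        return self.arrivals[-1] + sum(gaps) / len(gaps) + self.safety_margin

    def suspect(self, now):
        return now > self.deadline()

fd = HeartbeatDetector()
for t in [0.0, 1.0, 2.0, 3.0]:
    fd.heartbeat(t)
print(fd.suspect(3.5), fd.suspect(5.0))  # False True: overdue at t = 5.0
```

    Energy-aware designs like 2E-FD go further by also tuning how often heartbeats are sent, not just how they are interpreted.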

  6. An energy-efficient failure detector for vehicular cloud computing

    PubMed Central

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many roadside units (RSUs) are deployed along the road to improve connectivity. Many of them are equipped with solar batteries because wired electrical power is unavailable or too expensive, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to conserve RSU battery power. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy, and battery consumption. PMID:29352282

  7. Failure of the MicroScan WalkAway System To Detect Heteroresistance to Carbapenems in a Patient with Enterobacter aerogenes Bacteremia▿

    PubMed Central

    Gordon, N. C.; Wareham, D. W.

    2009-01-01

    We report the failure of the automated MicroScan WalkAway system to detect carbapenem heteroresistance in Enterobacter aerogenes. Carbapenem resistance has become an increasing concern in recent years, and robust surveillance is required to prevent dissemination of resistant strains. Reliance on automated systems may delay the detection of emerging resistance. PMID:19641071

  8. Early detection of glaucoma by means of a novel 3D computer‐automated visual field test

    PubMed Central

    Nazemi, Paul P; Fink, Wolfgang; Sadun, Alfredo A; Francis, Brian; Minckler, Donald

    2007-01-01

    Purpose A recently devised 3D computer‐automated threshold Amsler grid test was used to identify early and distinctive defects in people with suspected glaucoma. Further, the location, shape and depth of these field defects were characterised. Finally, the visual fields were compared with those obtained by standard automated perimetry. Patients and methods Glaucoma suspects were defined as those having elevated intraocular pressure (>21 mm Hg) or a cup‐to‐disc ratio of >0.5. 33 patients and 66 eyes with risk factors for glaucoma were examined. 15 patients and 23 eyes with no risk factors were tested as controls. The recently developed 3D computer‐automated threshold Amsler grid test was used. The test exhibits a grid on a computer screen at a preselected greyscale and angular resolution, and allows patients to trace those areas on the grid that are missing in their visual field using a touch screen. The 5‐minute test required that the patients repeatedly outline scotomas on a touch screen with varied displays of contrast while maintaining their gaze on a central fixation marker. A 3D depiction of the visual field defects was then obtained and further characterised by the location, shape and depth of the scotomas. The exam was repeated three times per eye. The results were compared to Humphrey visual field tests (ie, achromatic standard or SITA standard 30‐2 or 24‐2). Results In this pilot study, 79% of the eyes tested in the glaucoma‐suspect group repeatedly demonstrated visual field loss with the 3D perimetry. The 3D depictions of visual field loss associated with these risk factors were all characteristic of or compatible with glaucoma. 71% of the eyes demonstrated arcuate defects or a nasal step. Constricted visual fields were shown in 29% of the eyes. No visual field changes were detected in the control group. Conclusions The 3D computer‐automated threshold Amsler grid test may demonstrate visual field abnormalities characteristic of glaucoma in glaucoma suspects with normal achromatic Humphrey visual field testing. This test may be used as a screening tool for the early detection of glaucoma. PMID:17504855

  9. Covariance analysis of the airborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.; Hammond, C. L.; Gibbs, B. P.

    1981-01-01

    The requirements and limitations of employing an airborne laser ranging system for detecting crustal shifts of the Earth within centimeters over a region of approximately 200 by 400 km are presented. The system consists of an aircraft which flies over a grid of ground deployed retroreflectors, making six passes over the grid at two different altitudes. The retroreflector baseline errors are assumed to result from measurement noise, a priori errors on the aircraft and retroreflector positions, tropospheric refraction, and sensor biases.

  10. Failure detection and identification

    NASA Technical Reports Server (NTRS)

    Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.

    1989-01-01

    Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.
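    The unobservability-subspace construction is beyond a short example, but the basic residual idea it refines can be sketched for a discrete-time LTI system: an observer tracks the plant, and the output residual, which stays near zero under nominal conditions, becomes persistently nonzero once a fault enters the dynamics. All matrices and the fault below are invented for illustration.

```python
# Hedged sketch: not the paper's unobservability-subspace design, just the
# simplest observer-based residual generator for a discrete-time LTI system.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[0.9, 0.1], [0.0, 0.8]]   # plant dynamics: x[k+1] = A x[k] (+ fault)
C = [1.0, 0.0]                 # measured output: y = C x (first state only)
L = [0.5, 0.2]                 # observer gain, chosen so A - L*C is stable

x, xh = [1.0, -0.5], [0.0, 0.0]   # true state and observer estimate
residuals = []
for k in range(100):
    r = sum(c * (xi - xhi) for c, xi, xhi in zip(C, x, xh))  # r = y - C x_hat
    residuals.append(abs(r))
    fault = [0.3, 0.0] if k >= 60 else [0.0, 0.0]  # additive fault from k = 60
    x = [a + f for a, f in zip(matvec(A, x), fault)]
    xh = [a + l * r for a, l in zip(matvec(A, xh), L)]

print(residuals[55] < 1e-3, residuals[99] > 0.1)  # True True: fault detected
```

    The paper's contribution is to shape such residual generators so that each failure excites a distinct residual direction, allowing identification as well as detection.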

  11. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for detecting abrupt changes (such as failures) in stochastic dynamical systems are surveyed. The survey concentrates on linear systems, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  12. Integrated condition monitoring of a fleet of offshore wind turbines with focus on acceleration streaming processing

    NASA Astrophysics Data System (ADS)

    Helsen, Jan; Gioia, Nicoletta; Peeters, Cédric; Jordaens, Pieter-Jan

    2017-05-01

    Particularly offshore, there is a trend to cluster wind turbines in large wind farms and, in the near future, to operate such a farm as an integrated power production plant. Predictability of individual turbine behavior across the entire fleet is key in such a strategy. Failure of turbine subcomponents should be detected well in advance to allow early planning of all necessary maintenance actions, so that they can be performed during periods of low wind and low electricity demand. In order to obtain the insights to predict component failure, it is necessary to have an integrated, clean dataset spanning all turbines of the fleet for a sufficiently long period of time. This paper illustrates our big-data approach to achieving this. In addition, advanced failure detection algorithms are necessary to detect failures in this dataset. This paper discusses a multi-level monitoring approach that combines machine learning with advanced physics-based signal-processing techniques. The advantage of combining different data sources to detect system degradation is the higher certainty provided by multivariable criteria. To be able to perform long-term signal processing of high-frequency acceleration data, a streaming processing approach is necessary. This allows the data to be analysed as the sensors generate it. This paper illustrates the streaming concept on 5 kHz acceleration data from real-life offshore wind turbines: a continuous spectrogram is generated from the data stream. Using this streaming approach to calculate bearing failure features on continuous acceleration data will support the detection of failure propagation.
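    The streaming spectrogram idea can be sketched in a few lines: samples are consumed one at a time, as a sensor would deliver them, and a DFT magnitude column is emitted whenever a window fills. The sample rate, window length, and test tone below are invented (the paper's data are 5 kHz turbine accelerations).

```python
import cmath
import math

# Streaming sketch: consume samples one at a time, emit a DFT magnitude
# column per full window. Sample rate, window, and tone are invented.

def dft_mag(frame):
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame))) / n
            for k in range(n // 2)]

def streaming_spectrogram(stream, window=64):
    buf = []
    for sample in stream:        # samples arrive incrementally
        buf.append(sample)
        if len(buf) == window:
            yield dft_mag(buf)   # one spectrogram column
            buf = []

fs = 640.0                       # assumed sample rate in Hz
tone = 100.0                     # a hypothetical 100 Hz bearing tone
stream = (math.sin(2 * math.pi * tone * i / fs) for i in range(640))
columns = list(streaming_spectrogram(stream))
peak_bin = max(range(len(columns[0])), key=columns[0].__getitem__)
print(len(columns), peak_bin * fs / 64)  # 10 100.0: peak lands at 100 Hz
```

    A production system would use overlapping windows and an FFT, but the incremental structure, analyse as the data arrives rather than after batch collection, is the same.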

  13. Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.

    PubMed

    Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente

    2014-07-15

    Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
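    The scoring step described above is commonly summarised as a risk priority number, RPN = incidence × detectability × severity, with the highest-scoring failure modes prioritised for corrective action. The failure modes and ratings below are invented for illustration, not taken from the study.

```python
# Invented failure modes with (incidence, detectability, severity) ratings
# on an assumed 1-10 scale; RPN = O * D * S ranks them for attention.

failure_modes = [
    ("Transcription error in medical order", 6, 7, 8),
    ("Wrong electrolyte dose at formulation", 3, 6, 9),
    ("Label mix-up when dispensing", 2, 4, 7),
]

ranked = sorted(((name, o * d * s) for name, o, d, s in failure_modes),
                key=lambda item: item[1], reverse=True)
for name, rpn in ranked:
    print(f"RPN {rpn:4d}  {name}")   # highest-risk mode first
```

    The checklist in the study targets exactly the top of such a ranking, where improved detectability buys the largest risk reduction.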

  14. Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...

    2016-08-04

    This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
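    The coordinate change underlying the framework can be illustrated directly: a point in R^n is rewritten as a radius plus n-1 angles, so that a discontinuity surface enclosing a point becomes a smooth function r(phi) of the angles alone. A minimal round-trip sketch:

```python
import math

# Round-trip between Cartesian and hyperspherical coordinates in R^n.
# In the hyperspherical system, a discontinuity surface around the origin
# becomes a function r(phi_1, ..., phi_{n-1}) of the angles alone.

def to_hyperspherical(x):
    r = math.sqrt(sum(v * v for v in x))
    phis = []
    for i in range(len(x) - 2):
        tail = math.sqrt(sum(v * v for v in x[i:]))
        phis.append(math.acos(x[i] / tail) if tail else 0.0)
    phis.append(math.atan2(x[-1], x[-2]))  # last angle spans (-pi, pi]
    return r, phis

def to_cartesian(r, phis):
    x, sin_prod = [], 1.0
    for phi in phis:
        x.append(r * sin_prod * math.cos(phi))
        sin_prod *= math.sin(phi)
    x.append(r * sin_prod)
    return x

p = [1.0, 2.0, 2.0]
r, phis = to_hyperspherical(p)
q = to_cartesian(r, phis)
print(r, [round(v, 12) for v in q])  # 3.0 [1.0, 2.0, 2.0]
```

    With the surface expressed as r(phi), each ray through the origin yields the one-dimensional detection problem the paper describes.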

  15. Detecting vapour bubbles in simulations of metastable water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    González, Miguel A.; Abascal, Jose L. F.; Valeriani, Chantal, E-mail: christoph.dellago@univie.ac.at, E-mail: cvaleriani@quim.ucm.es

    2014-11-14

    The investigation of cavitation in metastable liquids with molecular simulations requires an appropriate definition of the volume of the vapour bubble forming within the metastable liquid phase. Commonly used approaches for bubble detection exhibit two significant flaws: first, when applied to water they often identify the voids within the hydrogen bond network as bubbles, thus masking the signature of emerging bubbles and, second, they lack thermodynamic consistency. Here, we present two grid-based methods, the M-method and the V-method, to detect bubbles in metastable water specifically designed to address these shortcomings. The M-method incorporates information about neighbouring grid cells to distinguish between liquid- and vapour-like cells, which allows for a very sensitive detection of small bubbles and high spatial resolution of the detected bubbles. The V-method is calibrated such that its estimates for the bubble volume correspond to the average change in system volume and are thus thermodynamically consistent. Both methods are computationally inexpensive such that they can be used in molecular dynamics and Monte Carlo simulations of cavitation. We illustrate them by computing the free energy barrier and the size of the critical bubble for cavitation in water at negative pressure.
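    A toy 2D version conveys the neighbour-aware idea (the real M-method works on 3D grids over molecular configurations with a more careful classification rule): a cell counts as vapour-like only if neither it nor any of its von Neumann neighbours contains a molecule, and connected vapour-like cells are grouped into bubbles. The grid size and particle positions below are invented.

```python
# Toy 2D analogue of neighbour-aware bubble detection on an 8x8 grid:
# a cell is vapour-like only if it and its in-grid von Neumann neighbours
# hold no molecule; connected vapour-like cells form one bubble.

N = 8
occupied = {(1, 1), (1, 2), (2, 1), (6, 6)}   # cells containing molecules

def vapour_like(cell):
    x, y = cell
    neigh = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return all(p not in occupied for p in neigh
               if 0 <= p[0] < N and 0 <= p[1] < N)

def bubbles():
    """Flood-fill connected components of vapour-like cells."""
    vap = {(i, j) for i in range(N) for j in range(N) if vapour_like((i, j))}
    comps = []
    while vap:
        stack, comp = [vap.pop()], set()
        while stack:
            x, y = stack.pop()
            comp.add((x, y))
            for p in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if p in vap:
                    vap.remove(p)
                    stack.append(p)
        comps.append(comp)
    return comps

sizes = sorted(len(c) for c in bubbles())
print(sizes)  # [1, 1, 47]: two single-cell pockets plus one large region
```

    The neighbour condition is what suppresses the one-cell "voids" next to molecules that naive density thresholds would miscount as bubbles.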

  16. Towards a Low-Cost Remote Memory Attestation for the Smart Grid

    PubMed Central

    Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing

    2015-01-01

    In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most existing schemes for detecting compromised devices depend on the incremental response time in the attestation process, which is sensitive to data transmission delay and leads to high computation and network overhead. To address this issue, in this paper we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters under real-time network delay while achieving low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated by investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, so the total number of attestations can be reduced while maintaining low computation and network overhead. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes. PMID:26307998

  17. Towards a Low-Cost Remote Memory Attestation for the Smart Grid.

    PubMed

    Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing

    2015-08-21

    In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most existing schemes for detecting compromised devices depend on the incremental response time in the attestation process, which is sensitive to data transmission delay and leads to high computation and network overhead. To address this issue, in this paper we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters under real-time network delay while achieving low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated by investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, so the total number of attestations can be reduced while maintaining low computation and network overhead. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes.

  18. Change Detection of Mobile LIDAR Data Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Boehm, Jan; Alis, Christian

    2016-06-01

    Change detection has long been a challenging problem, although much research has been conducted in fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend voxel grids and Apache Spark to propose an efficient method that addresses the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform that provides fault tolerance and in-memory caching. These features significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
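    Stripped of the Spark layer, the voxel-grid comparison at the heart of such a method reduces to set operations on occupied voxel indices; in a Spark implementation, each epoch's voxelisation would be distributed across executors. The coordinates and voxel size below are invented.

```python
# Voxelise two epochs of a point cloud and report voxels occupied in only
# one of them. Coordinates and the voxel size are invented; a Spark version
# would distribute the voxelisation of each epoch across executors.

def voxelize(points, size):
    return {tuple(int(c // size) for c in p) for p in points}

def changed_voxels(epoch_a, epoch_b, size=1.0):
    va, vb = voxelize(epoch_a, size), voxelize(epoch_b, size)
    return va ^ vb            # symmetric difference: appeared or disappeared

scan_t0 = [(0.2, 0.3, 0.1), (2.5, 0.4, 0.0), (5.1, 5.2, 0.3)]
scan_t1 = [(0.4, 0.1, 0.2), (5.3, 5.1, 0.1), (9.0, 9.0, 0.0)]
print(sorted(changed_voxels(scan_t0, scan_t1)))  # [(2, 0, 0), (9, 9, 0)]
```

    Reducing each epoch to voxel indices first is what makes the comparison embarrassingly parallel: voxels can be compared independently, partition by partition.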

  19. Weld failure detection

    DOEpatents

    Pennell, William E.; Sutton, Jr., Harry G.

    1981-01-01

    Method and apparatus for detecting failure in a welded connection, particularly applicable to welds that are not readily accessible, such as those joining components within the reactor vessel of a nuclear reactor system. A preselected tag gas is sealed within a chamber which extends through selected portions of the base metal and weld deposit. In the event of a failure, such as development of a crack extending from the chamber to an outer surface, the tag gas is released. The environment about the welded area is directed to an analyzer which, in the event of presence of the tag gas, evidences the failure. A trigger gas can be included with the tag gas to actuate the analyzer.

  20. An investigation of gear mesh failure prediction techniques. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Zakrajsek, James J.

    1989-01-01

    A study was performed in which several gear failure prediction methods were investigated and applied to experimental data from a gear fatigue test apparatus. The primary objective was to provide a baseline understanding of the prediction methods and to evaluate their diagnostic capabilities. The methods investigated use the signal average in both the time and frequency domain to detect gear failure. Data from eleven gear fatigue tests were recorded at periodic time intervals as the gears were run from initiation to failure. Four major failure modes, consisting of heavy wear, tooth breakage, single pits, and distributed pitting were observed among the failed gears. Results show that the prediction methods were able to detect only those gear failures which involved heavy wear or distributed pitting. None of the methods could predict fatigue cracks, which resulted in tooth breakage, or single pits. It is suspected that the fatigue cracks were not detected because of limitations in data acquisition rather than in methodology. Additionally, the frequency response between the gear shaft and the transducer was found to significantly affect the vibration signal. The specific frequencies affected were filtered out of the signal average prior to application of the methods.

  1. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for the detection of abrupt changes (such as failures) in stochastic dynamical systems were surveyed. The class of linear systems was emphasized, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  2. Solar-blind deep-UV band-pass filter (250 - 350 nm) consisting of a metal nano-grid fabricated by nanoimprint lithography.

    PubMed

    Li, Wen-Di; Chou, Stephen Y

    2010-01-18

    We designed, fabricated, and demonstrated a solar-blind deep-UV band-pass filter that has a measured optical performance of a 27% transmission peak at 290 nm, a pass-band width of 100 nm (from 250 to 350 nm), and a 20 dB rejection ratio between deep-UV and visible wavelengths. The filter consists of an aluminum nano-grid, made by coating 20 nm of Al on a SiO(2) square grid with 190 nm pitch, 30 nm linewidth, and 250 nm depth. The measured performance agrees with rigorous coupled-wave analysis. The wavelength of the peak transmission and the pass-bandwidth can be tuned by adjusting the metal nano-grid dimensions. The filter was fabricated by nanoimprint lithography and is hence large-area and low-cost. Combined with Si photodetectors, the filter offers simple yet effective and low-cost solar-blind deep-UV detection, at either a single-device or large-area complex integrated imaging array level.

  3. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag model based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be captured well with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but at a large increase in computational cost.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agalgaonkar, Yashodhan P.; Hammerstrom, Donald J.

    The Pacific Northwest Smart Grid Demonstration (PNWSGD) was a smart grid technology performance evaluation project that included multiple U.S. states and cooperation from multiple electric utilities in the northwest region. One of the local objectives for the project was to achieve improved distribution system reliability. Toward this end, some PNWSGD utilities automated their distribution systems, including the application of fault detection, isolation, and restoration and advanced metering infrastructure. In light of this investment, a major challenge was to establish a correlation between implementation of these smart grid technologies and actual improvements of distribution system reliability. This paper proposes using Welch's t-test to objectively determine and quantify whether distribution system reliability is improving over time. The proposed methodology is generic, and it can be implemented by any utility after calculation of the standard reliability indices. The effectiveness of the proposed hypothesis testing approach is demonstrated through comprehensive practical results. It is believed that wider adoption of the proposed approach can help utilities to evaluate a realistic long-term performance of smart grid technologies.
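    Welch's t-test makes no equal-variance assumption, which suits reliability indices drawn from periods with different operating conditions. A minimal self-contained sketch on invented annual SAIFI values (the utilities' real indices are not reproduced here):

```python
import math
from statistics import mean, variance

# Welch's t statistic and Welch-Satterthwaite degrees of freedom for two
# samples with unequal variances. The SAIFI values below are invented.

def welch_t(a, b):
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

before = [1.42, 1.55, 1.38, 1.61, 1.47]   # interruptions/customer, pre-deployment
after = [1.18, 1.22, 1.31, 1.15, 1.24]    # post-deployment years
t, df = welch_t(before, after)
print(round(t, 2), round(df, 1))  # 5.3 6.9: a clear shift in the index
```

    Comparing t against the t-distribution with df degrees of freedom then yields the p-value for whether reliability genuinely improved.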

  5. Next generation molten NaI batteries for grid scale energy storage

    NASA Astrophysics Data System (ADS)

    Small, Leo J.; Eccleston, Alexis; Lamb, Joshua; Read, Andrew C.; Robins, Matthew; Meaders, Thomas; Ingersoll, David; Clem, Paul G.; Bhavaraju, Sai; Spoerke, Erik D.

    2017-08-01

    Robust, safe, and reliable grid-scale energy storage continues to be a priority for improved energy surety, expanded integration of renewable energy, and greater system agility required to meet modern dynamic and evolving electrical energy demands. We describe here a new sodium-based battery based on a molten sodium anode, a sodium iodide/aluminum chloride (NaI/AlCl3) cathode, and a high conductivity NaSICON (Na1+xZr2SixP3-xO12) ceramic separator. This NaI battery operates at intermediate temperatures (120-180 °C) and boasts an energy density of >150 Wh kg-1. The energy-dense NaI-AlCl3 ionic liquid catholyte avoids lifetime-limiting plating and intercalation reactions, and the use of earth-abundant elements minimizes materials costs and eliminates economic uncertainties associated with lithium metal. Moreover, the inherent safety of this system under internal mechanical failure is characterized by negligible heat or gas production and benign reaction products (Al, NaCl). Scalability in design is exemplified through evolution from 0.85 to 10 Ah (28 Wh) form factors, displaying lifetime average Coulombic efficiencies of 99.45% and energy efficiencies of 81.96% over dynamic testing lasting >3000 h. This demonstration promises a safe, cost-effective, and long-lifetime technology as an attractive candidate for grid scale storage.

  6. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.

  7. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

    BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computationally intensive, repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and to recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.

  8. Gear Fault Detection Effectiveness as Applied to Tooth Surface Pitting Fatigue Damage

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Dempsey, Paula J.; Heath, Gregory F.; Shanthakumaran, Perumal

    2010-01-01

    A study was performed to evaluate fault detection effectiveness as applied to gear-tooth-pitting-fatigue damage. Vibration and oil-debris monitoring (ODM) data were gathered from 24 sets of spur pinion and face gears run during a previous endurance evaluation study. Three common condition indicators (RMS, FM4, and NA4 [Ed.'s note: See Appendix A, Definitions]) were deduced from the time-averaged vibration data and used with the ODM to evaluate their performance for gear fault detection. The NA4 parameter proved to be a very good condition indicator for the detection of gear tooth surface pitting failures. The FM4 and RMS parameters performed average to below average in detection of gear tooth surface pitting failures. The ODM sensor was successful in detecting a significant amount of debris from all the gear tooth pitting fatigue failures. Excluding outliers, the average cumulative mass at the end of a test was 40 mg.
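    Two of the condition indicators named above are easy to state: RMS of the signal average, and FM4, the normalised fourth moment (kurtosis) of the difference signal, i.e. the signal average with the regular gear-mesh components removed. FM4 is 1.5 for a sinusoidal residual and rises sharply when isolated impacts from surface damage appear. The signals below are synthetic, invented for illustration.

```python
import math

# RMS of a signal and FM4 (normalised fourth moment) of a difference
# signal. A pure sinusoidal residual gives FM4 = 1.5; a single impact,
# as from a pitted tooth, drives FM4 up sharply. Signals are synthetic.

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def fm4(diff):
    m = sum(diff) / len(diff)
    m2 = sum((v - m) ** 2 for v in diff) / len(diff)
    m4 = sum((v - m) ** 4 for v in diff) / len(diff)
    return m4 / (m2 * m2)

n = 1024
healthy = [0.01 * math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
damaged = list(healthy)
damaged[300] += 0.2               # one impact event in the difference signal
print(round(fm4(healthy), 2))     # 1.5
print(fm4(damaged) > 50)          # True: the impact dominates the 4th moment
```

    This sensitivity to isolated impacts, rather than to distributed energy, is consistent with the study's finding that FM4-style indicators respond differently to localized pits than to heavy wear.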

  9. Sensors and systems for space applications: a methodology for developing fault detection, diagnosis, and recovery

    NASA Astrophysics Data System (ADS)

    Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun

    2007-04-01

    Human space travel is inherently dangerous, and hazardous conditions will exist. Real-time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost-effective process for developing critical-subsystem failure detection, diagnosis, and response (FDDR). We also present the results of a real-time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements, and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary for real-time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions, and formats failure protocols.

  10. Analytical study of different types of network failure detection and possible remedies

    NASA Astrophysics Data System (ADS)

    Saxena, Shikha; Chandra, Somnath

    2012-07-01

    Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control-plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs start and end at the same monitoring location, while MPs start and end at distinct locations. They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed for localizing an SRLG failure in an arbitrary graph. Procedures for protection and restoration after an SRLG failure using a backup re-provisioning algorithm are also discussed.
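
    The unique-signature condition behind MP/MC-based localization can be checked mechanically. A minimal sketch, assuming each monitor (path or cycle) and each SRLG is represented simply as a set of link identifiers; the representation is illustrative, not the paper's formulation:

```python
def failure_signature(srlg, monitors):
    """Which monitors fail when every link in the SRLG fails: a monitoring
    path or cycle goes dark iff it traverses at least one failed link."""
    return frozenset(i for i, links in enumerate(monitors) if links & srlg)

def localizes_all(srlgs, monitors):
    """True iff every SRLG yields a distinct, non-empty combination of failed
    monitors, i.e. the monitor set uniquely localizes any single SRLG failure."""
    sigs = [failure_signature(s, monitors) for s in srlgs]
    return frozenset() not in sigs and len(set(sigs)) == len(sigs)
```

    For example, with monitors {a,b} and {b,c}, the SRLGs {a}, {b}, {c} are distinguishable, but {b} and {a,b} are not, since both darken the same pair of monitors.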

  11. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year, for a sample rate of 1 per day with each detection followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which offer the unique possibility of a relatively low-cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Center and tested against current data.

  12. A polar-region-adaptable systematic bias collaborative measurement method for shipboard redundant rotational inertial navigation systems

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Wu, Wenqi; Wei, Guo; Lian, Junxiang; Yu, Ruihang

    2018-05-01

    The shipboard redundant rotational inertial navigation system (RINS) configuration, comprising a dual-axis RINS and a single-axis RINS, can satisfy the demand for especially high reliability in marine INSs while achieving a trade-off between position accuracy and cost. Generally, the dual-axis RINS is the master INS, and the single-axis RINS is the hot backup INS for high reliability purposes. An integrity monitoring system performs a fault detection function to ensure sailing safety. However, improving the accuracy of the backup INS in case of master INS failure has not been given enough attention. Without the aid of any external information, a systematic bias collaborative measurement method based on an augmented Kalman filter is proposed for the redundant RINSs. Estimates of inertial sensor biases can be used by the built-in integrity monitoring system to monitor the RINS running condition. On the other hand, a position error prediction model is designed for the single-axis RINS to estimate the systematic error caused by its azimuth gyro bias. After position error compensation, the position information provided by the single-axis RINS still remains highly accurate, even if the integrity monitoring system detects a dual-axis RINS fault. Moreover, use of a grid frame as a navigation frame makes the proposed method applicable in any area, including the polar regions. Semi-physical simulation and experiments including sea trials verify the validity of the method.
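
    The bias-observation idea can be illustrated with a toy augmented Kalman filter. This is only a sketch under strong simplifying assumptions (a scalar position error driven linearly by a constant azimuth-gyro bias; the coupling k and tuning values q, r are invented for illustration), not the paper's RINS error model:

```python
import numpy as np

def estimate_gyro_bias(z, dt=1.0, k=1.0, q=1e-8, r=1.0):
    """Toy augmented Kalman filter: the position-error difference z between
    the master and backup RINS is modeled as driven by a constant gyro bias b.
    State x = [position error p, bias b]; p grows by k*b*dt each step, and
    only p is measured. Returns the final bias estimate."""
    F = np.array([[1.0, k * dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])                  # we observe p only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    for zk in z:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                     # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[zk]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return float(x[1, 0])
```

    Augmenting the bias into the state like this is what lets the built-in integrity monitor read off sensor-bias estimates directly from the filter.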

  13. Spatio-temporal Outlier Detection in Precipitation Data

    NASA Astrophysics Data System (ADS)

    Wu, Elizabeth; Liu, Wei; Chawla, Sanjay

    The detection of outliers from spatio-temporal data is an important task due to the increasing amount of spatio-temporal data available and the need to understand and interpret it. Due to the limitations of current data mining techniques, new techniques to handle this data need to be developed. We propose a spatio-temporal outlier detection algorithm called Outstretch, which discovers the outlier movement patterns of the top-k spatial outliers over several time periods. The top-k spatial outliers are found using the Exact-Grid Top-k and Approx-Grid Top-k algorithms, which are an extension of algorithms developed by Agarwal et al. [1]. Since they use the Kulldorff spatial scan statistic, they are capable of discovering all outliers, unaffected by neighbouring regions that may contain missing values. After generating the outlier sequences, we show one way they can be interpreted, by comparing them to the phases of the El Niño Southern Oscillation (ENSO) weather phenomenon to provide a meaningful analysis of the results.
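
    The Kulldorff scan statistic at the heart of these algorithms is simple to state. The sketch below is a brute-force baseline that scores every axis-aligned rectangle of grid cells against a baseline population; it is illustrative only, not the optimized scan order that makes Exact-Grid practical:

```python
import math

def kulldorff_llr(c, e, C):
    """Kulldorff spatial scan log-likelihood ratio for a region with observed
    count c and expected count e, out of C total observations. Only regions
    with more cases than expected (c > e) score positively."""
    if c <= e or e <= 0:
        return 0.0
    inside = c * math.log(c / e)
    outside = (C - c) * math.log((C - c) / (C - e)) if C > c else 0.0
    return inside + outside

def exact_grid_top_k(counts, baseline, k=1):
    """Score every rectangle (i1,j1)-(i2,j2) of grid cells and return the k
    highest-scoring ones as (score, rectangle) pairs."""
    n, m = len(counts), len(counts[0])
    C = sum(map(sum, counts))
    B = sum(map(sum, baseline))
    scored = []
    for i1 in range(n):
        for i2 in range(i1, n):
            for j1 in range(m):
                for j2 in range(j1, m):
                    c = sum(counts[i][j] for i in range(i1, i2 + 1)
                            for j in range(j1, j2 + 1))
                    b = sum(baseline[i][j] for i in range(i1, i2 + 1)
                            for j in range(j1, j2 + 1))
                    e = C * b / B        # expected count under the null
                    scored.append((kulldorff_llr(c, e, C), (i1, j1, i2, j2)))
    scored.sort(reverse=True)
    return scored[:k]
```

    The published algorithms reduce the per-rectangle cost with incremental count updates; the statistic itself is unchanged.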

  14. Detection of faults in rotating machinery using periodic time-frequency sparsity

    NASA Astrophysics Data System (ADS)

    Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.

    2016-11-01

    This paper addresses the problem of extracting periodic oscillatory features in vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are formulated to promote periodicity. To solve the proposed optimization problem, we develop an augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. As examples, the proposed approach is applied to simulated data and to real data as a tool for diagnosing faults in bearings and gearboxes, and is compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.
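
    The full MM/SALSA solver is beyond a short sketch, but the role of the periodic binary weights can be shown with a single proximal (soft-threshold) step in the STFT domain. The period, weight pattern, and regularization value here are illustrative assumptions, not the paper's iteration:

```python
import numpy as np

def soft(x, t):
    """Complex soft-thresholding: shrink the magnitude by t, keep the phase."""
    return np.maximum(np.abs(x) - t, 0.0) * np.exp(1j * np.angle(x))

def periodic_shrink(S, period, lam):
    """One weighted-l1 proximal step on an STFT matrix S (freq x time).
    Binary weights are 0 on time frames lying on the assumed fault period
    (those columns pay no penalty and survive) and 1 elsewhere (those
    columns are soft-thresholded toward zero)."""
    w = np.ones(S.shape[1])
    w[::period] = 0.0                # no penalty on the periodic grid
    return soft(S, lam * w[np.newaxis, :])
```

    Iterating steps like this inside an augmented-Lagrangian loop, with the weights chosen to promote periodicity, is the general shape of the approach.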

  15. Respiratory failure in diabetic ketoacidosis.

    PubMed

    Konstantinov, Nikifor K; Rohrscheib, Mark; Agaba, Emmanuel I; Dorin, Richard I; Murata, Glen H; Tzamaloukas, Antonios H

    2015-07-25

    Respiratory failure complicating the course of diabetic ketoacidosis (DKA) is a source of increased morbidity and mortality. Detection of respiratory failure in DKA requires focused clinical monitoring, careful interpretation of arterial blood gases, and investigation for conditions that can adversely affect respiration. Conditions that compromise respiratory function caused by DKA can be detected at presentation but are usually more prevalent during treatment. These conditions include deficits of potassium, magnesium and phosphate and hydrostatic or non-hydrostatic pulmonary edema. Conditions not caused by DKA that can worsen respiratory function under the added stress of DKA include infections of the respiratory system, pre-existing respiratory or neuromuscular disease and miscellaneous other conditions. Prompt recognition and management of the conditions that can lead to respiratory failure in DKA may prevent respiratory failure and improve mortality from DKA.

  16. Respiratory failure in diabetic ketoacidosis

    PubMed Central

    Konstantinov, Nikifor K; Rohrscheib, Mark; Agaba, Emmanuel I; Dorin, Richard I; Murata, Glen H; Tzamaloukas, Antonios H

    2015-01-01

    Respiratory failure complicating the course of diabetic ketoacidosis (DKA) is a source of increased morbidity and mortality. Detection of respiratory failure in DKA requires focused clinical monitoring, careful interpretation of arterial blood gases, and investigation for conditions that can adversely affect respiration. Conditions that compromise respiratory function caused by DKA can be detected at presentation but are usually more prevalent during treatment. These conditions include deficits of potassium, magnesium and phosphate and hydrostatic or non-hydrostatic pulmonary edema. Conditions not caused by DKA that can worsen respiratory function under the added stress of DKA include infections of the respiratory system, pre-existing respiratory or neuromuscular disease and miscellaneous other conditions. Prompt recognition and management of the conditions that can lead to respiratory failure in DKA may prevent respiratory failure and improve mortality from DKA. PMID:26240698

  17. Heart Failure and Frailty in the Community-Living Elderly Population: What the UFO Study Will Tell Us

    PubMed Central

    Fung, Erik; Hui, Elsie; Yang, Xiaobo; Lui, Leong T.; Cheng, King F.; Li, Qi; Fan, Yiting; Sahota, Daljit S.; Ma, Bosco H. M.; Lee, Jenny S. W.; Lee, Alex P. W.; Woo, Jean

    2018-01-01

    Heart failure and frailty are clinical syndromes that present with overlapping phenotypic characteristics. Importantly, their co-presence is associated with increased mortality and morbidity. While mechanical and electrical device therapies for heart failure are vital for select patients with advanced stage disease, the majority of patients and especially those with undiagnosed heart failure would benefit from early disease detection and prompt initiation of guideline-directed medical therapies. In this article, we review the problematic interactions between heart failure and frailty, introduce a focused cardiac screening program for community-living elderly initiated by a mobile communication device app leading to the Undiagnosed heart Failure in frail Older individuals (UFO) study, and discuss how the knowledge of pre-frailty and frailty status could be exploited for the detection of previously undiagnosed heart failure or advanced cardiac disease. The widespread use of mobile devices coupled with increasing availability of novel, effective medical and minimally invasive therapies have incentivized new approaches to heart failure case finding and disease management. PMID:29740330

  18. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.

  19. A novel method for the extraction of local gravity wave parameters from gridded three-dimensional data: description, validation, and application

    NASA Astrophysics Data System (ADS)

    Schoon, Lena; Zülicke, Christoph

    2018-05-01

    For the local diagnosis of wave properties, we develop, validate, and apply a novel method which is based on the Hilbert transform. It is called Unified Wave Diagnostics (UWaDi). It provides the wave amplitude and three-dimensional wave number at any grid point for gridded three-dimensional data. UWaDi is validated for a synthetic test case comprising two different wave packets. In comparison with other methods, the performance of UWaDi is very good with respect to wave properties and their location. For a first practical application of UWaDi, a minor sudden stratospheric warming on 30 January 2016 is chosen. Specifying the diagnostics for hydrostatic inertia-gravity waves in analyses from the European Centre for Medium-Range Weather Forecasts, we detect the local occurrence of gravity waves throughout the middle atmosphere. The local wave characteristics are discussed in terms of vertical propagation using the diagnosed local amplitudes and wave numbers. We also note some hints on local inertia-gravity wave generation by the stratospheric jet from the detection of shallow slow waves in the vicinity of its exit region.
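
    In one dimension, the core of the method (envelope and local wavenumber from the analytic signal) fits in a few lines. This NumPy sketch is a 1-D analogue of what UWaDi computes along each axis of a gridded 3-D field, not the published implementation:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero the negative frequencies, double the
    positive ones (the discrete Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def local_wave_parameters(x, dz=1.0):
    """Local wave amplitude (the envelope) and local wavenumber (gradient of
    the unwrapped instantaneous phase) at every grid point of a 1-D signal."""
    a = analytic_signal(x)
    amplitude = np.abs(a)
    wavenumber = np.gradient(np.unwrap(np.angle(a)), dz)
    return amplitude, wavenumber
```

    For a pure sinusoid the envelope is flat and the diagnosed wavenumber matches the true one at every point, which is the property the synthetic two-wave-packet validation exploits.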

  20. Micro-electro-fluidic grids for nematodes: a lens-less, image-sensor-less approach for on-chip tracking of nematode locomotion.

    PubMed

    Liu, Peng; Martin, Richard J; Dong, Liang

    2013-02-21

    This paper reports on the development of a lens-less and image-sensor-less micro-electro-fluidic (MEF) approach for real-time monitoring of the locomotion of microscopic nematodes. The technology showed promise for overcoming the constraint of the limited field of view of conventional optical microscopy, with relatively low cost, good spatial resolution, and high portability. The core of the device was microelectrode grids formed by orthogonally arranging two identical arrays of microelectrode lines. The two microelectrode arrays were spaced by a microfluidic chamber containing a liquid medium of interest. As a nematode (e.g., Caenorhabditis elegans) moved inside the chamber, the invasion of part of its body into some intersection regions between the microelectrodes caused changes in the electrical resistance of these intersection regions. The worm's presence at, or absence from, a detection unit was determined by a comparison between the measured resistance variation of this unit and a pre-defined threshold resistance variation. An electronic readout circuit was designed to address all the detection units and read out their individual electrical resistances. By this means, it was possible to obtain the electrical resistance profile of the whole MEF grid, and thus, the physical pattern of the swimming nematode. We studied the influence of a worm's body on the resistance of an addressed unit. We also investigated how the full-frame scanning and readout rates of the electronic circuit and the dimensions of a detection unit posed an impact on the spatial resolution of the reconstructed images of the nematode. Other important issues, such as the manufacturing-induced initial non-uniformity of the grids and the electrotaxic behaviour of nematodes, were also studied. A drug resistance screening experiment was conducted by using the grids with a good resolution of 30 × 30 μm². The phenotypic differences in the locomotion behaviours (e.g., moving speed and oscillation frequency extracted from the reconstructed images with the help of software) between the wild-type (N2) and mutant (lev-8) C. elegans worms in response to different doses of the anthelmintic drug, levamisole, were investigated. The locomotive parameters obtained by the MEF grids agreed well with those obtained by optical microscopy. Therefore, this technology will benefit whole-animal assays by providing a structurally simple, potentially cost-effective device capable of tracking the movement and phenotypes of important nematodes in various microenvironments.
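
    The per-unit detection rule described above is just a threshold comparison. A minimal sketch, with the grid represented as nested lists of resistance readings (values, layout, and names invented for illustration):

```python
def worm_pattern(resistance, baseline, threshold):
    """Reconstruct the nematode's silhouette on the micro-electrode grid:
    a detection unit (electrode intersection) counts as occupied when its
    measured resistance deviates from that unit's baseline by more than a
    pre-defined threshold, mirroring the comparison described above."""
    return [[1 if abs(r - b) > threshold else 0
             for r, b in zip(rrow, brow)]
            for rrow, brow in zip(resistance, baseline)]
```

    Scanning the readout circuit over all intersections and applying this rule frame by frame yields the binary images from which speed and oscillation frequency are extracted.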
