On-Die Sensors for Transient Events
NASA Astrophysics Data System (ADS)
Suchak, Mihir Vimal
Failures caused by transient electromagnetic events such as Electrostatic Discharge (ESD) are a major concern for embedded systems. The component that fails is often an integrated circuit (IC), and determining which IC is affected in a multi-device system is a challenging task. Debugging these errors often requires sophisticated lab setups in which various parts of the system, some not easily accessible, must be intentionally disturbed and probed. Opening the system and adding probes may change its response to the transient event, which further compounds the problem. On-die transient event sensors were developed that require relatively little die area, making them inexpensive; they consume negligible static current and do not interfere with normal operation of the IC. When a transient event affects the IC, these circuits can be used to determine the pin involved and the level of the event, allowing the user to debug system-level transient events without modifying the system. The circuit and detection scheme design has been completed and verified in simulations in the Cadence Virtuoso environment. Simulations accounted for the impact of the ESD protection circuits and for parasitics from the I/O pin, package, and I/O ring, and included a model of an ESD gun to test the circuit's response to an ESD pulse as specified in IEC 61000-4-2. Multiple detection schemes are proposed. The final detection scheme consists of an event detector and a level sensor. The event detector latches on the presence of an event at a pad, identifying the pin on which the event occurred. The level sensor generates a current proportional to the level of the event; this current is converted to a voltage and digitized by an A/D converter to be read by the microprocessor. The detection scheme shows good performance in simulations when checked against process variations and different kinds of events.
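The readout path described above (a per-pad event latch plus a level-sensor current digitized by an A/D converter) lends itself to a simple polling loop on the microprocessor side. The sketch below is a hypothetical illustration of that loop; the function names and calibration constant are invented, not part of the published design.

```python
# Hypothetical microprocessor-side polling loop for the on-die sensors: all
# names (read_event_latch, read_adc, clear_latch) and the calibration
# constant are invented for illustration, not part of the published design.

CAL_GAIN = 0.5  # assumed ADC-code-to-event-level calibration factor

def poll_transient_sensors(num_pins, read_event_latch, read_adc, clear_latch):
    """Scan all pads; return (pin, estimated event level) for latched events."""
    hits = []
    for pin in range(num_pins):
        if read_event_latch(pin):      # event detector latched on this pad
            code = read_adc(pin)       # digitized level-sensor output
            hits.append((pin, code * CAL_GAIN))
            clear_latch(pin)           # re-arm the detector for the next event
    return hits
```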
Event-Triggered Fault Detection of Nonlinear Networked Systems.
Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping
2017-04-01
This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system, and a novel polynomial event-triggered scheme is proposed to determine when the signal is transmitted. The filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series expansion are employed for the filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOS) constraints that can be solved by SOS tools in the MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.
Onyango, Laura A.; Quinn, Chloe; Tng, Keng H.; Wood, James G.; Leslie, Greg
2015-01-01
Potable reuse is implemented in several countries around the world to augment strained water supplies. This article presents a public health perspective on potable reuse by comparing the critical infrastructure and institutional capacity characteristics of two well-established potable reuse schemes with conventional drinking water schemes in developed nations that have experienced waterborne outbreaks. Analysis of failure events in conventional water systems between 2003 and 2013 showed that despite advances in water treatment technologies, drinking water outbreaks caused by microbial contamination were still frequent in developed countries and can be attributed to failures in infrastructure or institutional practices. Numerous institutional failures linked to ineffective treatment protocols, poor operational practices, and negligence were detected. In contrast, potable reuse schemes that use multiple barriers, online instrumentation, and operational measures were found to address the events that have resulted in waterborne outbreaks in conventional systems in the past decade. Syndromic surveillance has emerged as a tool in outbreak detection and was useful in detecting some outbreaks; increases in emergency department visits and GP consultations being the most common data source, suggesting potential for an increasing role in public health surveillance of waterborne outbreaks. These results highlight desirable characteristics of potable reuse schemes from a public health perspective with potential for guiding policy on surveillance activities. PMID:27053920
Nuclear Explosion and Infrasound Event Resources of the SMDC Monitoring Research Program
2008-09-01
Excerpt (fragmentary): dozens of detected infrasound signals are shown (Figure 7); the report investigates alternative detection schemes at the two infrasound arrays based on frequency-wavenumber (fk) processing and the F-statistic, with results evaluating infrasound signal-detection processing schemes.
NASA Astrophysics Data System (ADS)
Kornhuber, K.; Petoukhov, V.; Petri, S.; Rahmstorf, S.; Coumou, D.
2017-09-01
Several recent northern hemisphere summer extremes have been linked to persistent high-amplitude wave patterns (e.g., the heat waves in Europe 2003, Russia 2010, and the US 2011, and the floods in Pakistan 2010 and Europe 2013). Recently, quasi-resonant amplification (QRA) was proposed as a mechanism that, when certain dynamical conditions are fulfilled, can lead to such high-amplitude wave events. Based on these resonance conditions, a detection scheme was implemented to scan reanalysis data for QRA events in boreal summer months. With this objective detection scheme we analyzed the occurrence and duration of QRA events and the associated atmospheric flow patterns in 1979-2015 reanalysis data. We detect a total of 178 events for waves 6, 7, and 8, and find that QRA conditions were met for the respective waves during roughly one-third of all high-amplitude events. Our analysis reveals a significant shift towards high amplitudes for quasi-stationary waves 6 and 7 during QRA events, typically lagging the first QRA detection by one week. The results provide further evidence for the validity of the QRA hypothesis and its important role in generating high-amplitude waves in boreal summer.
Developing a disease outbreak event corpus.
Conway, Mike; Kawazoe, Ai; Chanlekha, Hutchatai; Collier, Nigel
2010-09-28
In recent years, there has been a growth in work on the use of information extraction technologies for tracking disease outbreaks from online news texts, yet publicly available evaluation standards (and associated resources) for this new area of research have been noticeably lacking. This study seeks to create a "gold standard" data set against which to test how accurately disease outbreak information extraction systems can identify the semantics of disease outbreak events. Additionally, we hope that the provision of an annotation scheme (and associated corpus) to the community will encourage open evaluation in this new and growing application area. We developed an annotation scheme for identifying infectious disease outbreak events in news texts. An event, in the context of our annotation scheme, consists minimally of geographical (e.g., country and province) and disease name information. However, the scheme also allows for the rich encoding of other domain-salient concepts (e.g., international travel, species, and food contamination). The work resulted in a 200-document corpus of event-annotated disease outbreak reports that can be used to evaluate the accuracy of event detection algorithms (in this case, for the BioCaster biosurveillance online news information extraction system). In the 200 documents, 394 distinct events were identified (mean 1.97 events per document, range 0-25 events per document). We also provide a download script and graphical user interface (GUI)-based event browsing software to facilitate corpus exploration. In summary, we present an annotation scheme and corpus that can be used in the evaluation of disease outbreak event extraction algorithms. The annotation scheme and corpus were designed both with the particular evaluation requirements of the BioCaster system in mind and with the wider need for further evaluation resources in this growing research area.
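The minimal event semantics described above (geography plus disease name, with optional richer slots) can be captured in a small record type. The sketch below is an illustrative Python rendering; the field names and the example event are assumptions, not the actual BioCaster schema.

```python
# Illustrative record type for an outbreak event under the scheme described
# above: country and disease name are the minimal required slots; the other
# fields stand in for the richer domain concepts the scheme allows.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OutbreakEvent:
    country: str                      # minimal geographic slot
    disease: str                      # minimal disease-name slot
    province: Optional[str] = None    # finer-grained geography
    species: Optional[str] = None     # e.g., host species
    flags: List[str] = field(default_factory=list)  # e.g., ["international_travel"]

# Hypothetical annotation of one news document
doc_events = [OutbreakEvent("Vietnam", "avian influenza",
                            province="Ha Tay", species="poultry")]
print(len(doc_events), "event(s) annotated in this document")
```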
Smith, Brian T; Coiro, Daniel J; Finson, Richard; Betz, Randal R; McCarthy, James
2002-03-01
Force-sensing resistors (FSRs) were used to detect the transitions between five main phases of gait for the control of electrical stimulation (ES) during walking with seven children with spastic diplegia, cerebral palsy. The FSR positions within each child's insoles were customized based on plantar pressure profiles determined using a pressure-sensitive membrane array (Tekscan Inc., Boston, MA). The FSRs were placed in the insoles so that pressure transitions coincided with an ipsilateral or contralateral gait event. The transitions between the following gait phases were determined: loading response, mid- and terminal stance, and pre- and initial swing. Following several months of walking on a regular basis with FSR-triggered intramuscular ES to the hip and knee extensors, hip abductors, and ankle dorsiflexors and plantar flexors, the accuracy and reliability of the FSRs in detecting gait phase transitions were evaluated. Accuracy was evaluated with four of the subjects by synchronizing the output of the FSR detection scheme with a VICON (Oxford Metrics, U.K.) motion analysis system, which was used as the gait event reference. While mean differences between each FSR-detected gait event and that of the standard (VICON) ranged from +35 ms (indicating that the FSR detection scheme recognized the event before it actually happened) to -55 ms (indicating that the FSR scheme recognized the event after it occurred), the difference data were widely distributed, which appeared to be due in part to both intrasubject (step-to-step) and intersubject variability. Terminal stance exhibited the largest mean difference and standard deviation, while initial swing exhibited the smallest deviation and pre-swing the smallest mean difference. To determine step-to-step reliability, all seven children walked on a level walkway for at least 50 steps. Of 642 steps, there were no detection errors in 94.5% of the steps. Of the steps that contained a detection error, 80% were due to the failure of the FSR signal to reach the programmed threshold level during the transition to loading response. Recovery from an error always occurred one to three steps later.
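A minimal sketch of the threshold-crossing logic implied by the FSR scheme: a programmed threshold on each FSR channel, with rising crossings mapped to loading transitions and falling crossings to unloading transitions. The threshold value and channel mapping are illustrative assumptions.

```python
import numpy as np

# Illustrative threshold-crossing detector in the spirit of the FSR scheme:
# a gait-phase transition is signalled when an FSR channel crosses a
# programmed threshold. Threshold and channel mapping are assumptions.

def detect_transitions(fsr, threshold):
    """Return sample indices where the FSR signal crosses the threshold.
    Rising crossings suggest loading (e.g., heel contact); falling
    crossings suggest unloading (e.g., toe off)."""
    above = fsr >= threshold
    rising = np.where(~above[:-1] & above[1:])[0] + 1
    falling = np.where(above[:-1] & ~above[1:])[0] + 1
    return rising, falling

heel = np.array([0.0, 0.1, 0.6, 0.9, 0.8, 0.3, 0.05, 0.0])
print(detect_transitions(heel, threshold=0.5))   # -> (array([2]), array([5]))
```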
Sensor data monitoring and decision level fusion scheme for early fire detection
NASA Astrophysics Data System (ADS)
Rizogiannis, Constantinos; Thanos, Konstantinos Georgios; Astyakopoulos, Alkiviadis; Kyriazanos, Dimitris M.; Thomopoulos, Stelios C. A.
2017-05-01
The aim of this paper is to present the sensor monitoring and decision-level fusion scheme for early fire detection which has been developed in the context of the AF3 (Advanced Forest Fire Fighting) European FP7 research project, adopted specifically in the OCULUS-Fire control and command system and tested during a firefighting field test in Greece with a prescribed real fire, generating early-warning detection alerts and notifications. For this purpose, and in order to improve the reliability of the fire detection system, a two-level fusion scheme is developed exploiting a variety of observation solutions from the air (e.g., UAV infrared cameras), the ground (e.g., meteorological and atmospheric sensors), and ancillary sources (e.g., public information channels, citizens' smartphone applications, and social media). In the first level, a change-point detection technique is applied to detect changes in the mean value of each parameter measured by the ground sensors, such as temperature, humidity, and CO2, and the rate-of-rise of each changed parameter is calculated. In the second level, the fire-event basic probability assignment (BPA) function is determined for each ground sensor using fuzzy-logic theory, and the corresponding mass values are combined in a decision-level fusion process using evidential reasoning theory to estimate the final fire event probability.
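A compact sketch of the second fusion level just described, under simplifying assumptions: each ground sensor's rate-of-rise is turned into a basic probability assignment over {fire, no_fire, unknown} via an assumed linear fuzzy membership, and two BPAs are combined with Dempster's rule of combination. The membership shapes, mass ceiling, and sensor values are invented.

```python
import numpy as np

# Each sensor contributes a BPA over the frame {fire, no_fire, unknown},
# derived here from an assumed linear fuzzy membership of its rate-of-rise;
# BPAs are then combined with Dempster's rule of combination.

def bpa_from_rate(rate, lo, hi):
    """Fuzzy 'fire' membership rising linearly between lo and hi; 0.1 of
    mass is always kept on 'unknown' (ignorance)."""
    m_fire = float(np.clip((rate - lo) / (hi - lo), 0.0, 1.0)) * 0.9
    return {"fire": m_fire, "no_fire": 0.9 - m_fire, "unknown": 0.1}

def dempster(m1, m2):
    """Dempster's rule for the two-hypothesis frame with ignorance."""
    k = m1["fire"] * m2["no_fire"] + m1["no_fire"] * m2["fire"]   # conflict
    combine = lambda a: (m1[a] * m2[a] + m1[a] * m2["unknown"]
                         + m1["unknown"] * m2[a]) / (1.0 - k)
    out = {"fire": combine("fire"), "no_fire": combine("no_fire")}
    out["unknown"] = 1.0 - out["fire"] - out["no_fire"]
    return out

temp_bpa = bpa_from_rate(rate=2.5, lo=0.5, hi=3.0)     # deg C per minute
co2_bpa = bpa_from_rate(rate=40.0, lo=10.0, hi=50.0)   # ppm per minute
print(dempster(temp_bpa, co2_bpa))                     # fused fire probability mass
```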
Application of a Hybrid Detection and Location Scheme to Volcanic Systems
NASA Astrophysics Data System (ADS)
Thurber, C. H.; Lanza, F.; Roecker, S. W.
2017-12-01
We are using a hybrid method for automated detection and onset estimation, called REST, that combines a modified version of the nearest-neighbor similarity scheme of Rawles and Thurber (2015; RT15) with the regression approach of Kushnir et al. (1990; K90). This approach incorporates some of the windowing ideas proposed by RT15 into the regression techniques described in K90. The K90 and RT15 algorithms both define an onset as that sample where a segment of noise at earlier times is most "unlike" a segment of data at later times; the main difference between the approaches is how one defines "likeness." Hence, it is fairly straightforward to adapt the RT15 ideas to a K90 approach. We also incorporated the running mean normalization scheme of Bensen et al. (2007), used in ambient noise pre-processing, to reduce the effects of coherent signals (such as earthquakes) in defining noise segments. This is especially useful for aftershock sequences, when the persistent high amplitudes due to many earthquakes bias the true noise level. We use the fall-off of the K90 estimation function to assign uncertainties and the asymmetry of the function as a causality constraint. The detection and onset estimation stage is followed by iterative pick association and event location using a grid-search method. Some fine-tuning of parameters is generally required for optimal results. We present two applications of this scheme to data from volcanic systems: Makushin volcano, Alaska, and Laguna del Maule (LdM), Chile. In both cases, there are permanent seismic networks, operated by the Alaska Volcano Observatory (AVO) and Observatorio Volcanológico de Los Andes del Sur (OVDAS), respectively, and temporary seismic arrays were deployed for a year or more. For Makushin, we have analyzed a year of data, from summer 2015 to summer 2016. The AVO catalog has 691 events in our study volume; REST processing yields 1784 more events. After quality control, the event numbers are 151 AVO events and 344 additional REST events. For LdM, we have analyzed 3 months of data from the beginning of 2017. The OVDAS catalog contains only 6 events. In contrast, REST processing yields over 100 events. We will show both the grid search locations and relocations obtained as part of tomographic inversions, and discuss some of the advantages and weaknesses of the REST processing scheme.
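The onset definition above (the sample where a preceding noise segment is most "unlike" the following data segment) can be illustrated with a simple variance-based unlikeness score. This is in the spirit of K90/RT15 but is not their actual likeness function; the window length and the AIC-style score are assumptions.

```python
import numpy as np

# Minimal stand-in for the onset-estimation idea: the onset is the sample
# where the preceding "noise" window is most unlike the following "data"
# window, scored here with a two-window variance log-likelihood (AIC-style).

def pick_onset(x, min_win=20):
    n = len(x)
    best_k, best_score = None, -np.inf
    for k in range(min_win, n - min_win):
        s1 = np.var(x[:k]) + 1e-12    # noise segment before candidate onset
        s2 = np.var(x[k:]) + 1e-12    # signal segment after candidate onset
        score = -(k * np.log(s1) + (n - k) * np.log(s2))  # higher = more unlike
        if score > best_score:
            best_k, best_score = k, score
    return best_k

rng = np.random.default_rng(0)
trace = np.concatenate([0.1 * rng.standard_normal(200),   # noise
                        1.0 * rng.standard_normal(100)])  # arrival at 200
print(pick_onset(trace))   # picks close to sample 200
```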
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-05-01
This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the fault sensitivity are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegative conditions for the estimation error variables are presented with the aid of slack matrix variables; this technique allows a more general Lyapunov function to be considered. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Li, Yunji; Wu, QingE; Peng, Li
2018-01-23
In this paper, a synthesized design of a fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the randomly occurring deception attacks. To obtain a fault-detection residual that is sensitive only to faults while robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems, with the unknown disturbances removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound on the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee the fault estimator performance. Furthermore, a corresponding event-triggered sensor data transmission scheme is presented to improve the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator, and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault detection is guaranteed when the event condition is triggered.
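The event-triggered transmission idea above amounts to a send-on-deviation rule at the sensor node. A minimal sketch, assuming a simple norm-based condition (the paper's actual triggering condition and threshold differ in detail):

```python
import numpy as np

# Send-on-deviation trigger: transmit measurement y_k only when it deviates
# from the last transmitted value by more than delta. The norm and threshold
# are illustrative assumptions, not the paper's exact event condition.

def event_triggered_stream(measurements, delta):
    last_sent = None
    for k, y in enumerate(measurements):
        if last_sent is None or np.linalg.norm(y - last_sent) > delta:
            last_sent = y
            yield k, y                  # transmit over the network

ys = np.array([[0.0], [0.05], [0.4], [0.42], [1.0]])
print(list(event_triggered_stream(ys, delta=0.2)))   # transmits samples 0, 2, 4
```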
Subdecoherence time generation and detection of orbital entanglement in quantum dots.
Brange, F; Malkoc, O; Samuelsson, P
2015-05-01
Recent experiments have demonstrated subdecoherence time control of individual single-electron orbital qubits. Here we propose a quantum-dot-based scheme for generation and detection of pairs of orbitally entangled electrons on a time scale much shorter than the decoherence time. The electrons are entangled, via two-particle interference, and transferred to the detectors during a single cotunneling event, making the scheme insensitive to charge noise. For sufficiently long detector dot lifetimes, cross-correlation detection of the dot charges can be performed with real-time counting techniques, providing for an unambiguous short-time Bell inequality test of orbital entanglement.
Network hydraulics inclusion in water quality event detection using multiple sensor stations data.
Oliker, Nurit; Ostfeld, Avi
2015-09-01
Event detection is one of the current most challenging topics in water distribution systems analysis: how regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations can be efficiently utilized to detect water quality contamination events. This study describes an integrated event detection model which combines multiple sensor stations data with network hydraulics. To date, event detection modelling has largely been limited to a single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and as a result might be significantly exposed to false positive alarms. This work is aimed at decreasing this limitation through integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort through discovering events with lower signatures by exploring the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons
Krishnan, Jeyashree; Porta Mana, PierGianLuca; Helias, Moritz; Diesmann, Markus; Di Napoli, Edoardo
2018-01-01
Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work offers an alternative geometric point of view on neuronal dynamics, and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (a fraction of missed spikes below 1/10^8) in biologically relevant settings. PMID:29379430
A Reverse Localization Scheme for Underwater Acoustic Sensor Networks
Moradi, Marjan; Rezazadeh, Javad; Ismail, Abdul Samad
2012-01-01
Underwater Wireless Sensor Networks (UWSNs) provide new opportunities to observe and predict the behavior of aquatic environments. In some applications like target tracking or disaster prevention, sensed data is meaningless without location information. In this paper, we propose a novel 3D centralized, localization scheme for mobile underwater wireless sensor network, named Reverse Localization Scheme or RLS in short. RLS is an event-driven localization method triggered by detector sensors for launching localization process. RLS is suitable for surveillance applications that require very fast reactions to events and could report the location of the occurrence. In this method, mobile sensor nodes report the event toward the surface anchors as soon as they detect it. They do not require waiting to receive location information from anchors. Simulation results confirm that the proposed scheme improves the energy efficiency and reduces significantly localization response time with a proper level of accuracy in terms of mobility model of water currents. Major contributions of this method lie on reducing the numbers of message exchange for localization, saving the energy and decreasing the average localization response time. PMID:22666034
Lognormal Assimilation of Water Vapor in a WRF-GSI Cycled System
NASA Astrophysics Data System (ADS)
Fletcher, S. J.; Kliewer, A.; Jones, A. S.; Forsythe, J. M.
2015-12-01
Recent publications have demonstrated both the detectability of a lognormally distributed signal in water vapor mixing ratio and the improved quality of satellite retrievals from a 1DVAR mixed lognormal-Gaussian assimilation scheme relative to a Gaussian-only system. This mixed scheme is incorporated into the Gridpoint Statistical Interpolation (GSI) assimilation scheme with the goal of improving forecasts from the Weather Research and Forecasting (WRF) Model in a cycled system. Results are presented of the impact of treating water vapor as a lognormal random variable. Included in the analysis are: 1) the evolution of Tropical Storm Chris from 2006, and 2) an analysis of a "Pineapple Express" water vapor event from 2005 where a lognormal signal has been previously detected.
LAN attack detection using Discrete Event Systems.
Hubballi, Neminath; Biswas, Santosh; Roopa, S; Ratti, Ritesh; Nandi, Sukumar
2011-01-01
Address Resolution Protocol (ARP) is used to determine the link layer or Medium Access Control (MAC) address of a network host, given its Internet Layer (IP) or Network Layer address. ARP is a stateless protocol, and any IP-MAC pairing sent by a host is accepted without verification. This weakness in ARP may be exploited by malicious hosts in a Local Area Network (LAN) by spoofing IP-MAC pairs. Several schemes have been proposed in the literature to circumvent these attacks; however, these techniques either make IP-MAC pairing static, modify the existing ARP, or patch the operating systems of all hosts. In this paper we propose a Discrete Event System (DES) approach to an Intrusion Detection System (IDS) for LAN-specific attacks which does not require any extra constraint such as static IP-MAC pairing or changes to the ARP. A DES model is built for the LAN under both normal and compromised (i.e., spoofed request/response) situations based on the sequences of ARP-related packets. Sequences of ARP events in normal and spoofed scenarios are similar, thereby yielding the same DES models for both cases. To create different ARP events under normal and spoofed conditions, the proposed technique uses active ARP probing; this probing, however, adds extra ARP traffic to the LAN. A DES detector is then built to determine, from the observed ARP-related events, whether the LAN is operating under a normal or a compromised situation. The scheme minimizes extra ARP traffic by probing the source IP-MAC pair of only those ARP packets which are yet to be determined as genuine or spoofed by the detector. Spoofed IP-MAC pairs determined by the detector are also stored in tables to detect other LAN attacks triggered by spoofing, namely man-in-the-middle (MitM), denial of service, etc. The scheme is successfully validated in a test bed. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
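The active ARP probing step can be sketched with scapy (a real packet library; root privileges required). The probe broadcasts an ARP request and inspects the set of answering MACs; the DES detector that would consume these outcomes is only stubbed here, and the function names are ours, not the paper's.

```python
# Sketch of active ARP probing using scapy. More than one distinct MAC in
# the replies (or a reply disagreeing with the claimed MAC) is evidence of
# spoofing; the outcome would be fed as an event into the DES detector.
from scapy.all import ARP, Ether, srp  # pip install scapy

def probe_ip(ip, timeout=2):
    """Broadcast an ARP request for `ip` and collect all answering MACs."""
    pkt = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip)
    answered, _ = srp(pkt, timeout=timeout, verbose=False)
    return {rcv[ARP].hwsrc for _, rcv in answered}

def check_claimed_pair(ip, claimed_mac):
    macs = probe_ip(ip)
    if len(macs) > 1 or (macs and claimed_mac not in macs):
        return "spoofed"       # 'spoofed' event for the DES detector
    return "genuine" if macs else "no-response"
```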
Event generators for address event representation transmitters
NASA Astrophysics Data System (ADS)
Serrano-Gotarredona, Rafael; Serrano-Gotarredona, Teresa; Linares Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate 'events' according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. In a typical AER transmitter chip, there is an array of neurons that generate events. They send events to a peripheral circuit (call it the "AER Generator") that transforms those events into neuron coordinates (addresses), which are put sequentially on an interchip high-speed digital bus. This bus includes a parallel multi-bit address word plus Rqst (request) and Ack (acknowledge) handshaking signals for asynchronous data exchange. Two main approaches have been published in the literature for implementing such "AER Generator" circuits; they differ in the way they handle event collisions coming from the array of neurons. One approach is based on detecting and discarding collisions, while the other incorporates arbitration for sequencing colliding events. The first approach is simpler and faster, while the second is able to handle much higher event traffic. In this article we concentrate on the second, arbiter-based approach. Boahen has published several techniques for implementing and improving the arbiter-based approach. Originally, he proposed an arbitration scheme by rows, followed by column arbitration. In this scheme, while one neuron was selected by the arbiters to transmit its event off chip, the rest of the neurons in the array were frozen and could not transmit any further events during this time window, which limited the maximum transmission speed. To improve this speed, Boahen proposed an improved 'burst mode' scheme in which, after the row arbitration, a complete row of events is pipelined out of the array and arbitrated out of the chip at higher speed. During this single-row event arbitration, the array is free to generate new events and communicate them to the row arbiter in a pipelined mode. This scheme significantly improves the maximum event transmission speed, especially for high-traffic situations where speed is most critical. We have analyzed and studied this approach and have detected some shortcomings in the circuits reported by Boahen, which may produce erroneous behavior under some statistical conditions. The present paper proposes improvements to overcome such situations. The improved "AER Generator" has been implemented in an AER transmitter system.
Alsep data processing: How we processed Apollo Lunar Seismic Data
NASA Technical Reports Server (NTRS)
Latham, G. V.; Nakamura, Y.; Dorman, H. J.
1979-01-01
The Apollo lunar seismic station network gathered data continuously at a rate of 3 × 10^8 bits per day for nearly eight years, until its termination in September 1977. The data were processed and analyzed using a PDP-15 minicomputer. On average, 1500 long-period seismic events were detected yearly. Automatic event detection and identification schemes proved unsuccessful because of occasional high noise levels and, above all, the risk of overlooking unusual natural events. The processing procedures finally settled on consist of first plotting all the data on a compressed time scale, visually picking events from the plots, transferring event data to separate sets of tapes, and performing detailed analyses on the latter. Many problems remain, especially for automatic processing of extraterrestrial seismic signals.
NASA Technical Reports Server (NTRS)
Quilligan, Gerard; DeMonthier, Jeffrey; Suarez, George
2011-01-01
This innovation addresses challenges in lidar imaging, particularly with the detection scheme and the shapes of the detected signals. Ideally, the echoed pulse widths should be extremely narrow to resolve fine detail at high event rates. However, narrow pulses require wideband detection circuitry with increased power dissipation to minimize thermal noise. Filtering is also required to shape each received signal into a form suitable for processing by a constant fraction discriminator (CFD) followed by a time-to-digital converter (TDC). As the intervals between the echoes decrease, the finite bandwidth of the shaping circuits blends the pulses into an analog signal (luminance) with multiple modes, reducing the ability of the CFD to discriminate individual events.
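For reference, the constant fraction discriminator named above can be modeled digitally: the pulse is attenuated and subtracted from a delayed copy, and the zero crossing of the bipolar result gives an amplitude-independent arrival time. The fraction, delay, and arming condition below are illustrative assumptions.

```python
import numpy as np

# Digital model of a constant fraction discriminator (CFD): subtract an
# attenuated pulse from a delayed copy; the zero crossing of the bipolar
# waveform marks an arrival time independent of pulse amplitude.

def cfd_time(pulse, fraction=0.4, delay=5):
    delayed = np.concatenate([np.zeros(delay), pulse[:-delay]])
    bipolar = delayed - fraction * pulse
    for i in range(1, len(bipolar)):
        # trigger on the first zero crossing while the pulse is "armed"
        if bipolar[i - 1] < 0.0 <= bipolar[i] and pulse[i] > 0.1 * pulse.max():
            frac = -bipolar[i - 1] / (bipolar[i] - bipolar[i - 1])
            return (i - 1) + frac      # sub-sample timing via interpolation
    return None

t = np.arange(100.0)
pulse = np.exp(-0.5 * ((t - 40.0) / 4.0) ** 2)    # Gaussian echo
print(cfd_time(pulse), cfd_time(3.0 * pulse))     # same timing at both amplitudes
```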
NASA Astrophysics Data System (ADS)
Faridatussafura, Nurzaka; Wandala, Agie
2018-05-01
The meteorological model WRF-ARW version 3.8.1 is used to simulate the heavy rainfall in Semarang that occurred on February 12th, 2015. Two different convective schemes and two different microphysics schemes were chosen in a nested configuration, and the sensitivity of those schemes in capturing the extreme weather event was tested. GFS data were used for the initial and boundary conditions. Verification of the twenty-four-hour accumulated rainfall using GSMaP satellite data shows that the Kain-Fritsch convective scheme combined with the Lin microphysics scheme is the best combination among those tested. This combination also gives the highest success ratio in placing the high-intensity rainfall area. Based on the ROC diagram, KF-Lin shows the best performance in detecting high-intensity rainfall; however, the combination still has a high bias value.
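The verification quantities mentioned above (success ratio, frequency bias, detection performance) are standard contingency-table scores. A quick reference sketch with invented counts:

```python
# Standard categorical verification scores from a 2x2 contingency table.
# The sample counts below are invented, purely for illustration.

def categorical_scores(hits, misses, false_alarms):
    pod = hits / (hits + misses)                     # probability of detection
    sr = hits / (hits + false_alarms)                # success ratio = 1 - FAR
    bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    return pod, sr, bias

print(categorical_scores(hits=42, misses=18, false_alarms=10))
```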
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action (e.g., smoking, eating, and phoning) analysis is an important task in various application domains like video surveillance, video retrieval, human-computer interaction systems, and so on. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety of urban areas, public parks, airplanes, hospitals, schools, and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture, and color. In addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is capable of detecting small smoking events of uncertain actions with various cigarette sizes, colors, and shapes.
NASA Astrophysics Data System (ADS)
Jie, Cao; Zhi-Hai, Wu; Li, Peng
2016-05-01
This paper investigates the consensus tracking problems of second-order multi-agent systems with a virtual leader via event-triggered control. A novel distributed event-triggered transmission scheme is proposed, which is intermittently examined at constant sampling instants. Only partial neighbor information and local measurements are required for event detection. Then the corresponding event-triggered consensus tracking protocol is presented to guarantee second-order multi-agent systems to achieve consensus tracking. Numerical simulations are given to illustrate the effectiveness of the proposed strategy. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203147, 61374047, and 61403168).
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying, and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies; most of these strategies have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup[1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. The present study represents a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
Investigating Montara platform oil spill accident by implementing RST-OIL approach.
NASA Astrophysics Data System (ADS)
Satriano, Valeria; Ciancia, Emanuele; Coviello, Irina; Di Polito, Carmine; Lacava, Teodosio; Pergola, Nicola; Tramutoli, Valerio
2016-04-01
Oil spills are among the most harmful events for marine ecosystems, and their timely detection is crucial for mitigation and management. The potential of satellite data for their detection and monitoring has been investigated extensively. Traditional satellite techniques usually identify oil spill presence by applying a fixed threshold scheme only after the occurrence of an event, which makes them poorly suited to prompt identification. The Robust Satellite Technique (RST) approach, in its oil spill detection version (RST-OIL), is based on the comparison of the latest satellite acquisition with its previously identified historical reference, and allows the automatic and near real-time detection of events. This technique has already been successfully applied to data from different sources (AVHRR, the Advanced Very High Resolution Radiometer, and MODIS, the Moderate Resolution Imaging Spectroradiometer), showing excellent performance in detecting oil spills in both day- and night-time conditions, with a high level of sensitivity (detection even of low-intensity events) and reliability (no false alarms on scene). In this paper, RST-OIL has been implemented on MODIS thermal infrared data for the analysis of the Montara platform (Timor Sea, Australia) oil spill disaster that occurred in August 2009. Preliminary achievements are presented and discussed.
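The RST family of techniques rests on a local change index that compares the latest acquisition with a pixel-wise historical reference. A minimal sketch, assuming brightness-temperature images and an illustrative -3 threshold; the exact RST-OIL index definition and threshold are in the cited literature.

```python
import numpy as np

# Local change index in the spirit of RST: compare the latest image
# pixel-wise against the temporal mean and standard deviation of a stack of
# co-located historical acquisitions. Strongly anomalous sea pixels are
# candidate detections; the -3 threshold below is an assumption.

def rst_index(latest, history):
    """history: (n, H, W) stack of past images; latest: (H, W)."""
    mu = np.nanmean(history, axis=0)      # pixel-wise temporal mean
    sigma = np.nanstd(history, axis=0)    # pixel-wise temporal std
    return (latest - mu) / np.where(sigma > 0, sigma, np.nan)

rng = np.random.default_rng(1)
hist = 290.0 + rng.standard_normal((20, 4, 4))   # 20 past TIR scenes [K]
scene = hist.mean(axis=0)
scene[1, 2] -= 5.0                               # anomalously cold pixel
anomaly = rst_index(scene, hist) < -3.0          # candidate detection mask
print(np.argwhere(anomaly))                      # -> [[1 2]]
```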
Global Seismic Event Detection Using Surface Waves: 15 Possible Antarctic Glacial Sliding Events
NASA Astrophysics Data System (ADS)
Chen, X.; Shearer, P. M.; Walker, K. T.; Fricker, H. A.
2008-12-01
To identify overlooked or anomalous seismic events not listed in standard catalogs, we have developed an algorithm to detect and locate global seismic events using intermediate-period (35-70 s) surface waves. We apply our method to continuous vertical-component seismograms from the global seismic networks as archived in the IRIS UV FARM database from 1997 to 2007. We first bandpass filter the seismograms, apply automatic gain control, and compute envelope functions. We then examine 1654 target event locations defined at 5 degree intervals and stack the seismogram envelopes along the predicted Rayleigh-wave travel times. The resulting function has spatial and temporal peaks that indicate possible seismic events. We visually check these peaks using a graphical user interface to eliminate artifacts and assign an overall reliability grade (A, B or C) to the new events. We detect 78% of events in the Global Centroid Moment Tensor (CMT) catalog. However, we also find 840 new events not listed in the PDE, ISC and REB catalogs. Many of these new events were previously identified by Ekstrom (2006) using a different Rayleigh-wave detection scheme. Most of these new events are located along oceanic ridges and transform faults. Some new events can be associated with volcanic eruptions such as the 2000 Miyakejima sequence near Japan and others with apparent glacial sliding events in Greenland (Ekstrom et al., 2003). We focus our attention on 15 events detected from near the Antarctic coastline and relocate them using a cross-correlation approach. The events occur in 3 groups which are well-separated from areas of cataloged earthquake activity. We speculate that these are iceberg calving and/or glacial sliding events, and hope to test this by inverting for their source mechanisms and examining remote sensing data from their source regions.
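A minimal sketch of the detection pipeline just described: bandpass to the 35-70 s band, envelope via the analytic signal, and stacking of station envelopes along predicted Rayleigh travel times for one trial source. The filter order, the 3.7 km/s group velocity, and the omission of the AGC step are simplifying assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Envelope-stacking detector sketch: bandpass, envelope, then shift-and-stack
# station envelopes along predicted Rayleigh travel times for a trial source.

def envelope(trace, fs, f1=1 / 70.0, f2=1 / 35.0):
    """Bandpass to the 35-70 s band and return the signal envelope."""
    b, a = butter(4, [f1, f2], btype="band", fs=fs)
    return np.abs(hilbert(filtfilt(b, a, trace)))

def stack_for_source(envelopes, fs, dists_km, group_vel_kms=3.7):
    """Shift each station envelope by its predicted travel time and stack."""
    n = min(len(e) for e in envelopes)
    stack = np.zeros(n)
    for env, dist in zip(envelopes, dists_km):
        shift = int(round(dist / group_vel_kms * fs))   # travel time in samples
        shifted = np.roll(env[:n], -shift)              # align to origin time
        if shift > 0:
            shifted[-shift:] = 0.0                      # drop wrapped samples
        stack += shifted
    return stack / len(envelopes)   # temporal peaks mark candidate origin times
```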
Detecting and characterizing coal mine related seismicity in the Western U.S. using subspace methods
NASA Astrophysics Data System (ADS)
Chambers, Derrick J. A.; Koper, Keith D.; Pankow, Kristine L.; McCarter, Michael K.
2015-11-01
We present an approach for subspace detection of small seismic events that includes methods for estimating magnitudes and associating detections from multiple stations into unique events. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalogue as ground truth, we assess detector performance in terms of verified detections, false positives and failed detections. We are able to correctly identify over 95 per cent of the surface coal mine blasts and about 33 per cent of the events from the underground mining district, while keeping the number of potential false positives relatively low by requiring all detections to occur on two stations. We find that most of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogues. We note a trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal-to-noise ratio, and stations at larger distances, which have greater waveform similarity. We also explore the increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, in identifying events that can be described as linear combinations of training events. We find, in our data set, that such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
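A bare-bones subspace detector in the spirit of this study: an orthonormal basis is taken from the SVD of aligned training waveforms, and the detection statistic is the fraction of sliding-window energy captured by the basis. The subspace dimension and any detection threshold are assumptions.

```python
import numpy as np

# Subspace detection sketch: build an orthonormal basis from aligned
# training events via the SVD; the statistic for each sliding window is the
# fraction of its energy captured by projection onto that basis.

def build_basis(training, d):
    """training: (n_events, n_samples) aligned waveforms -> (n_samples, d)."""
    u, _, _ = np.linalg.svd(training.T, full_matrices=False)
    return u[:, :d]                      # d leading left singular vectors

def subspace_statistic(stream, basis):
    m = basis.shape[0]
    stats = np.zeros(len(stream) - m + 1)
    for i in range(len(stats)):
        w = stream[i:i + m]
        energy = w @ w
        proj = basis.T @ w
        stats[i] = (proj @ proj) / energy if energy > 0 else 0.0
    return stats   # declare detections where stats exceed a chosen threshold
```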
An Overview of Recent Advances in Event-Triggered Consensus of Multiagent Systems.
Ding, Lei; Han, Qing-Long; Ge, Xiaohua; Zhang, Xian-Ming
2018-04-01
Event-triggered consensus of multiagent systems (MASs) has attracted tremendous attention from both theoretical and practical perspectives due to the fact that it enables all agents eventually to reach an agreement upon a common quantity of interest while significantly alleviating utilization of communication and computation resources. This paper aims to provide an overview of recent advances in event-triggered consensus of MASs. First, a basic framework of multiagent event-triggered operational mechanisms is established. Second, representative results and methodologies reported in the literature are reviewed and some in-depth analysis is made on several event-triggered schemes, including event-based sampling schemes, model-based event-triggered schemes, sampled-data-based event-triggered schemes, and self-triggered sampling schemes. Third, two examples are outlined to show applicability of event-triggered consensus in power sharing of microgrids and formation control of multirobot systems, respectively. Finally, some challenging issues on event-triggered consensus are proposed for future research.
Real-Time Gait Event Detection Based on Kinematic Data Coupled to a Biomechanical Model.
Lambrecht, Stefan; Harutyunyan, Anna; Tanghe, Kevin; Afschrift, Maarten; De Schutter, Joris; Jonkers, Ilse
2017-03-24
Real-time detection of multiple stance events, more specifically initial contact (IC), foot flat (FF), heel off (HO), and toe off (TO), could greatly benefit neurorobotic (NR) and neuroprosthetic (NP) control. Three real-time threshold-based algorithms have been developed, detecting the aforementioned events based on kinematic data in combination with a biomechanical model. Data from seven subjects walking at three speeds on an instrumented treadmill were used to validate the presented algorithms, accumulating to a total of 558 steps. The reference for the gait events was obtained using marker and force plate data. All algorithms had excellent precision and no false positives were observed. Timing delays of the presented algorithms were similar to current state-of-the-art algorithms for the detection of IC and TO, whereas smaller delays were achieved for the detection of FF. Our results indicate that, based on their high precision and low delays, these algorithms can be used for the control of an NR/NP, with the exception of the HO event. Kinematic data is used in most NR/NP control schemes and is thus available at no additional cost, resulting in a minimal computational burden. The presented methods can also be applied for screening pathological gait or gait analysis in general in/outside of the laboratory.
On event-based optical flow detection
Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko
2015-01-01
Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high dynamic range, and sparse sensing. This stands in contrast to whole-image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection, ranging from gradient-based methods over plane-fitting to filter-based methods, and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed, and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations. PMID:25941470
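The plane-fitting idea above can be shown in a few lines: an edge moving at constant velocity traces a plane in (x, y, t) event space, and a least-squares fit of t = a*x + b*y + c recovers the normal flow. Real AER streams would need robust local fitting; this toy example assumes noise-free events.

```python
import numpy as np

# Toy plane-fitting flow estimate: fit t = a*x + b*y + c to a local event
# cloud; (a, b) is the inverse-velocity gradient, from which the velocity
# component normal to the edge follows as (a, b) / (a^2 + b^2).

def flow_from_events(xs, ys, ts):
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, ts, rcond=None)
    denom = a * a + b * b
    return (a / denom, b / denom)        # normal flow in pixels per time unit

# vertical edge sweeping right at 2 px per time unit -> events on t = x / 2
xs = np.array([0.0, 1, 2, 3, 0, 1, 2, 3])
ys = np.array([0.0, 0, 0, 0, 1, 1, 1, 1])
ts = xs / 2.0
print(flow_from_events(xs, ys, ts))      # approx (2.0, 0.0)
```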
Haynes, Edward; Helgason, Thorunn; Young, J Peter W; Thwaites, Richard; Budge, Giles E
2013-08-01
Melissococcus plutonius is the bacterial pathogen that causes European Foulbrood of honeybees, a globally important honeybee brood disease. We have used next-generation sequencing to identify highly polymorphic regions in an otherwise genetically homogeneous organism, and used these loci to create a modified MLST scheme. This synthesis of a proven typing scheme format with next-generation sequencing combines reliability and low costs with insights only available from high-throughput sequencing technologies. Using this scheme we show that the global distribution of M. plutonius variants is not uniform. We use the scheme in epidemiological studies to trace movements of infective material around England, insights that would have been impossible to confirm without the typing scheme. We also demonstrate the persistence of local variants over time. © 2013 Crown copyright. Reproduced with the permission of the Controller of Her Majesty's Stationery Office/Queen's Printer for Scotland and Food and Environment Research Agency.
Accelerometer and Camera-Based Strategy for Improved Human Fall Detection.
Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane
2016-12-01
In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. The EWMA statistic also identifies the features that correspond to a particular type of fall, allowing us to classify falls; only features corresponding to detected falls were used in the classification phase. A benefit of using a subset of the original data to design classification models is that it minimizes training time and simplifies the models. Based on the features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We apply this strategy to the publicly available fall detection databases from the University of Rzeszow. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in the event of fall situations and its capability for classification of detected falls. Comparison of the classification results of the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers (neural network, K-nearest neighbor, and naïve Bayes) proved our model superior.
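A sketch of the EWMA monitoring step described above, applied to an accelerometer-magnitude stream: the EWMA statistic is tracked against steady-state control limits, and a potential fall is flagged when it exits them. The smoothing factor, limit width, and baseline window are illustrative assumptions.

```python
import numpy as np

# EWMA monitoring scheme sketch: smooth the accelerometer magnitude with an
# exponentially weighted moving average and raise an alarm when the
# statistic leaves its steady-state control limits.

def ewma_alarms(x, lam=0.3, L=3.0):
    mu, sigma = np.mean(x[:50]), np.std(x[:50])   # baseline from early data
    half_width = L * sigma * np.sqrt(lam / (2 - lam))  # steady-state limits
    z, alarms = mu, []
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z              # EWMA update
        if abs(z - mu) > half_width:
            alarms.append(i)                      # potential fall sample
    return alarms

rng = np.random.default_rng(2)
accel = 1.0 + 0.05 * rng.standard_normal(200)    # ~1 g at rest
accel[120:126] += 2.5                            # impact-like burst
print(ewma_alarms(accel))                        # flags samples near 120
```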
Scheduling Randomly-Deployed Heterogeneous Video Sensor Nodes for Reduced Intrusion Detection Time
NASA Astrophysics Data System (ADS)
Pham, Congduc
This paper proposes to use video sensor nodes to provide an efficient intrusion detection system. We use a scheduling mechanism that takes into account the criticality of the surveillance application, and present a performance study of various cover set construction strategies that account for cameras with heterogeneous angles of view, including those with very small angles of view. We show by simulation how a dynamic criticality management scheme can provide fast event detection for mission-critical surveillance applications by increasing the network lifetime and providing a low stealth time of intrusions.
Hu, Wenfeng; Liu, Lu; Feng, Gang
2016-09-02
This paper addresses the output consensus problem of heterogeneous linear multi-agent systems. We first propose a novel distributed event-triggered control scheme. It is shown that, with the proposed control scheme, the output consensus problem can be solved if two matrix equations are satisfied. Then, we further propose a novel self-triggered control scheme, with which continuous monitoring is avoided. By introducing a fixed timer into both event- and self-triggered control schemes, Zeno behavior can be ruled out for each agent. The effectiveness of the event- and self-triggered control schemes is illustrated by an example.
Isbarn, Hendrik; Briganti, Alberto; De Visschere, Pieter J L; Fütterer, Jurgen J; Ghadjar, Pirus; Giannarini, Gianluca; Ost, Piet; Ploussard, Guillaume; Sooriakumaran, Prasanna; Surcel, Christian I; van Oort, Inge M; Yossepowitch, Ofer; van den Bergh, Roderick C N
2015-04-01
Prostate biopsy (PB) is the gold standard for the diagnosis of prostate cancer (PCa). However, the optimal number of biopsy cores remains debatable. We sought to compare contemporary standard (10-12 cores) vs. saturation (≥18 cores) schemes on initial as well as repeat PB. A non-systematic review of the literature was performed from 2000 through 2013. Studies of highest evidence (randomized controlled trials, prospective non-randomized studies, and retrospective reports of high quality) comparing standard vs. saturation schemes on initial and repeat PB were evaluated. Outcome measures were overall PCa detection rate, detection rate of insignificant PCa, and procedure-associated morbidity. On initial PB, there is growing evidence that a saturation scheme is associated with a higher PCa detection rate compared to a standard one in men with lower PSA levels (<10 ng/ml), larger prostates (>40 cc), or lower PSA density values (<0.25 ng/ml/cc). However, these cut-offs are not uniform and differ among studies. Detection rates of insignificant PCa do not differ in a significant fashion between standard and saturation biopsies. On repeat PB, PCa detection rate is likewise higher with saturation protocols. Estimates of insignificant PCa vary widely due to differing definitions of insignificant disease. However, the rates of insignificant PCa appear to be comparable for the schemes in patients with only one prior negative biopsy, while saturation biopsy seems to detect more cases of insignificant PCa compared to standard biopsy in men with two or more prior negative biopsies. Very extensive sampling is associated with a high rate of acute urinary retention, whereas other severe adverse events, such as sepsis, appear not to occur more frequently with saturation schemes. Current evidence suggests that saturation schemes are associated with a higher PCa detection rate compared to standard ones on initial PB in men with lower PSA levels or larger prostates, and on repeat PB. Since most data are derived from retrospective studies, other endpoints, such as the detection rate of insignificant disease (especially on repeat PB), show broad variations throughout the literature and must, thus, be interpreted with caution. Future prospective controlled trials should be conducted to compare extended templates with newer techniques, such as image-guided sampling, in order to optimize the PCa diagnostic strategy.
Modeling and Detection of Ice Particle Accretion in Aircraft Engine Compression Systems
NASA Technical Reports Server (NTRS)
May, Ryan D.; Simon, Donald L.; Guo, Ten-Huei
2012-01-01
The accretion of ice particles in the core of commercial aircraft engines has been an ongoing aviation safety challenge. While no accidents have resulted from this phenomenon to date, numerous engine power loss events ranging from uneventful recoveries to forced landings have been recorded. As a first step to enabling mitigation strategies during ice accretion, a detection scheme must be developed that is capable of being implemented on board modern engines. In this paper, a simple detection scheme is developed and tested using a realistic engine simulation with approximate ice accretion models based on data from a compressor design tool. These accretion models are implemented as modified Low Pressure Compressor maps and have the capability to shift engine performance based on a specified level of ice blockage. Based on results from this model, it is possible to detect the accretion of ice in the engine core by observing shifts in the typical sensed engine outputs. Results are presented in which, for a 0.1 percent false positive rate, a true positive detection rate of 98 percent is achieved.
Dual-stage periodic event-triggered output-feedback control for linear systems.
Ruan, Zhen; Chen, Wu-Hua; Lu, Xiaomei
2018-05-01
This paper proposes an event-triggered control framework, called dual-stage periodic event-triggered control (DSPETC), which unifies periodic event-triggered control (PETC) and switching event-triggered control (SETC). Specifically, two period parameters h1 and h2 are introduced to characterize the new event-triggering rule, where h1 denotes the sampling period and h2 denotes the monitoring period. By choosing specific values of h2, the proposed control scheme reduces to the PETC or SETC scheme. In the DSPETC framework, the controlled system is represented as a switched system model and its stability is analyzed via a switching-time-dependent Lyapunov functional. Both cases, with and without network-induced delays, are investigated. Simulation and experimental results show that the DSPETC scheme is superior to the PETC scheme and the SETC scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
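To make the two-period idea concrete, here is a minimal sketch of such a loop for a scalar plant: the state is sampled every h1, the event condition is checked only every h2, and the controller holds the last transmitted state between events. The plant, gain, and the relative-error trigger condition are illustrative assumptions, not the paper's exact rule.

```python
# Dual-stage periodic event-triggered loop (toy scalar example).
h1, h2 = 0.01, 0.05           # sampling period and monitoring period (s)
sigma = 0.15                  # trigger threshold (assumed relative-error rule)
a, b, k = 0.5, 1.0, 2.0       # unstable open loop, stabilizing feedback gain

x, x_sent = 1.0, 1.0          # plant state and last transmitted state
transmissions, t, T, step = 1, 0.0, 5.0, 0
while t < T:
    u = -k * x_sent                          # control uses last transmitted state
    x += h1 * (a * x + b * u)                # Euler step on the sampling grid
    t += h1
    step += 1
    if step % round(h2 / h1) == 0:           # event condition checked every h2 only
        if abs(x - x_sent) > sigma * abs(x): # trigger: transmit the fresh state
            x_sent = x
            transmissions += 1
print(f"x(T) = {x:.4f}, transmissions = {transmissions}")
```

Setting h2 equal to h1 recovers a PETC-like rule, which mirrors the unification claim in the abstract.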
Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elkhatib, Mohamed; Ellis, Abraham; Biswal, Milan
Keywords: Microgrid Protection, Impedance Relay, Signal Processing-based Fault Detection, Networked Microgrids, Communication-Assisted Protection. In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution on a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project, a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods. The only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from the feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients appears feasible, locating faults based on analyzing transients is still an open question.
NASA Astrophysics Data System (ADS)
Hasan, Md Alfi; Islam, A. K. M. Saiful
2018-05-01
Accurate forecasting of heavy rainfall is crucial for improving flood warnings to prevent loss of life and property damage due to flash-flood-related landslides in the hilly region of Bangladesh. Forecasting heavy rainfall events is challenging, and the microphysics and cumulus parameterization schemes of the Weather Research and Forecasting (WRF) model play an important role. In this study, a comparison was made between observed and simulated rainfall using 19 different combinations of microphysics and cumulus schemes available in WRF over Bangladesh. Two severe rainfall events, on 11 June 2007 and 24-27 June 2012 over the eastern hilly region of Bangladesh, were selected for performance evaluation using a number of indicators. A combination of the Stony Brook University microphysics scheme with the Tiedtke cumulus scheme was found to be the most suitable for reproducing these events. Another combination, the single-moment 6-class microphysics scheme with the New Grell 3D cumulus scheme, also showed reasonable performance in forecasting heavy rainfall over this region. The sensitivity analysis confirms that cumulus schemes play a greater role than microphysics schemes in reproducing heavy rainfall events using WRF.
Barbhuiya, F A; Agarwal, Mayank; Purwar, Sanketh; Biswas, Santosh; Nandi, Sukumar
2015-09-01
TCP is the most widely accepted transport layer protocol. The major emphasis during the development of TCP was on functionality and efficiency; little consideration was given to the possibility of attackers exploiting the protocol, which has led to several attacks on TCP. This paper deals with the induced low-rate TCP attack. Since the attack is relatively new, only a few schemes have been proposed to mitigate it. However, the main issues with these schemes are scalability, changes to the TCP header, lack of formal frameworks, etc. In this paper, we have adapted the stochastic discrete event system (DES) framework for detecting the attack, which addresses most of these issues. We have successfully deployed and tested the proposed DES-based IDS on a test bed. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Banerjee, Torsha
Unlike conventional networks, wireless sensor networks (WSNs) are limited in power, have much smaller memory buffers, and possess relatively slower processing speeds. These characteristics necessitate minimal transfer and storage of information in order to prolong the network lifetime. In this dissertation, we exploit the spatio-temporal nature of sensor data to approximate the current values of the sensors based on readings obtained from neighboring sensors and from the sensor itself. We propose a Tree-based polynomial REGression algorithm (TREG) that addresses the problem of data compression in wireless sensor networks. Instead of aggregated data, a polynomial function (P) is computed by the regression function, TREG. The coefficients of P are then passed to achieve the following goals: (i) the sink can get attribute values in the regions devoid of sensor nodes, and (ii) readings over any portion of the region can be obtained at one time by querying the root of the tree. As the size of the data packet from each tree node to its parent remains constant, the proposed scheme scales very well with growing network density or increased coverage area. Since physical attributes exhibit a gradual change over time, we propose an iterative scheme, UPDATE_COEFF, which obviates the need to perform the regression function repeatedly and uses approximations based on previous readings. Extensive simulations are performed on real-world data to demonstrate the effectiveness of our proposed aggregation algorithm, TREG. Results reveal that for a network density of 0.0025 nodes/m², a complete binary tree of depth 4 could keep the absolute error below 6%. A data compression ratio of about 0.02 is achieved using our proposed algorithm, almost independently of the tree depth. In addition, our proposed updating scheme makes the aggregation process faster while maintaining the desired error bounds. We also propose a Polynomial-based scheme that addresses the problem of Event Region Detection (PERD) for WSNs. When a single event occurs, a child of the tree sends a Flagged Polynomial (FP) to its parent if the readings approximated by it fall outside the data range defining the existing phenomenon. After the aggregation process is over, the root, having the two polynomials P and FP, can be queried for FP (approximating the new event region) instead of flooding the whole network. For multiple such events, instead of computing a polynomial corresponding to each new event, areas with the same data range are combined by the corresponding tree nodes and the aggregated coefficients are passed on. Results reveal that a new event can be detected by PERD with a detection error that remains constant and below a threshold of 10%. As the node density increases, accuracy and delay for event detection remain almost constant, making PERD highly scalable. Whenever an event occurs in a WSN, data is generated by nearby sensors, and relaying the data to the base station (BS) makes sensors closer to the BS run out of energy at a much faster rate than sensors in other parts of the network. This gives rise to an unequal distribution of residual energy in the network and makes the sensors with lower remaining energy die at a much faster rate than others. We propose a scheme for enhancing network lifetime using mobile cluster heads (CHs) in a WSN. To keep the remaining energy more evenly distributed, some energy-rich nodes are designated as CHs, which move in a controlled manner towards sensors rich in energy and data.
This eliminates the multihop transmission required of the static sensors and thus increases the overall lifetime of the WSN. We combine the ideas of clustering and mobile CHs to first form clusters of static sensor nodes. A collaborative strategy among the CHs further increases the lifetime of the network. The time taken to transmit data to the BS is reduced further by making the CHs follow a connectivity strategy that always maintains a connected path to the BS. Spatial correlation of sensor data can be further exploited for dynamic channel selection in cellular communication. In such a scenario, wireless sensors can be deployed within a licensed band (each sensor tuned to a channel frequency at a particular time) to sense the interference power of the frequency band. In an ideal channel, the interference temperature (IT), which is directly proportional to the interference power, can be assumed to vary with the frequency of the subchannel. We propose a scheme for fitting the subchannel frequencies and corresponding ITs to a regression model for calculating the IT of a random subchannel, for further analysis of the channel interference at the base station. Our scheme, based on the readings reported by sensors, helps in Dynamic Channel Selection (S-DCS) in the extended C-band for assignment to unlicensed secondary users. S-DCS proves economical from the energy consumption point of view and achieves accuracy with an error bound within 6.8%. Moreover, users are assigned empty subchannels without actually probing them, incurring minimal delay in the process. The overall channel throughput is maximized along with fairness to individual users.
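A toy version of the coefficient-passing idea behind TREG is sketched below, assuming a quadratic basis in the sensor coordinates: a node fits the polynomial once and forwards only the fixed-size coefficient vector, which also lets the sink query locations with no sensors. The basis, field, and data are illustrative assumptions.

```python
# Fit a low-order polynomial to readings over sensor positions; the payload
# forwarded up the tree is just the coefficient vector (constant size).
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(50, 2))      # sensor positions (m), invented
temp = 20 + 0.05 * xy[:, 0] - 0.02 * xy[:, 1] + rng.normal(0, 0.1, 50)

def basis(xy):
    # Quadratic 2-D basis: [1, x, y, xy, x^2, y^2]
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

coeffs, *_ = np.linalg.lstsq(basis(xy), temp, rcond=None)   # fixed-size payload

# The sink can now estimate the attribute anywhere, including coverage gaps.
query = np.array([[30.0, 70.0]])
print(basis(query) @ coeffs)
```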
NASA Astrophysics Data System (ADS)
Pötzi, W.; Veronig, A. M.; Temmer, M.
2018-06-01
In the framework of the Space Situational Awareness program of the European Space Agency (ESA/SSA), an automatic flare detection system was developed at Kanzelhöhe Observatory (KSO). The system has been in operation since mid-2013. The event detection algorithm was upgraded in September 2017, and all data back to 2014 were reprocessed using the new algorithm. In order to evaluate both algorithms, we apply verification measures that are commonly used for forecast validation. To overcome the problem of rare events, which biases the verification measures, we introduce a new event-based method: we divide the timeline of the Hα observations into positive events (flaring periods) and negative events (quiet periods), independent of the length of each event. In total, 329 positive and negative events were detected between 2014 and 2016. The new algorithm reached a hit rate of 96% (just five events were missed) and a false-alarm ratio of 17%. This is a significant improvement, as the original system had a hit rate of 85% and a false-alarm ratio of 33%. The true skill score and the Heidke skill score both reach values of 0.8 for the new algorithm; originally, they were at 0.5. The mean flare positions are accurate within ±1 heliographic degree for both algorithms, and the peak times improve from a mean difference of 1.7 ± 2.9 minutes to 1.3 ± 2.3 minutes. The flare start times, which had been systematically late by about 3 minutes with the original algorithm, now match visual inspection within -0.47 ± 4.10 minutes.
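The quoted hit rate, false-alarm ratio, and skill scores are standard contingency-table measures; the helper below spells out the definitions. The counts are made up for illustration and are not the KSO values.

```python
# Standard 2x2-contingency verification measures used in forecast validation.
def scores(hits, misses, false_alarms, correct_negatives):
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                          # probability of detection (hit rate)
    far = b / (a + b)                          # false-alarm ratio
    pofd = b / (b + d)                         # probability of false detection
    tss = pod - pofd                           # true skill score
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, tss, hss

# Illustrative counts only, not the numbers from the paper.
print(scores(hits=120, misses=5, false_alarms=25, correct_negatives=179))
```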
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Ackermann, M.; Adams, J.
2015-03-11
Here we present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999% of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV. Compared to the previous measurement using the detector in the 40-string configuration, the analysis presented here extends the upper end of the atmospheric neutrino spectrum by more than a factor of two, reaching an energy region that has not been previously accessed by spectral measurements.
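A toy stand-in for the selection-plus-classification chain is sketched below: features are ranked by mutual information with the label (relevance) and greedily penalized by correlation with already-selected features (a crude proxy for mRMR's redundancy term), then a random forest classifies. The data are synthetic; nothing here reproduces the IceCube analysis.

```python
# Greedy relevance-minus-redundancy feature selection followed by a random
# forest classifier, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)

selected = [int(np.argmax(relevance))]
while len(selected) < 5:
    # Mean absolute correlation with already-chosen features (redundancy proxy).
    redundancy = np.abs(np.corrcoef(X.T))[:, selected].mean(axis=1)
    score = relevance - redundancy
    score[selected] = -np.inf
    selected.append(int(np.argmax(score)))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:1000, selected], y[:1000])
print("selected:", selected, "test accuracy:", clf.score(X[1000:, selected], y[1000:]))
```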
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Moses; Kim, Keonhui; Muljadi, Eduard
This paper proposes a torque limit-based inertial control scheme for a doubly-fed induction generator (DFIG) that supports the frequency control of a power system. If a frequency deviation occurs, the proposed scheme aims to release a large amount of the kinetic energy (KE) stored in the rotating masses of a DFIG to raise the frequency nadir (FN). Upon detecting the event, the scheme instantly increases its output to the torque limit and then reduces the output with the rotor speed so that it converges to the stable operating range. To restore the rotor speed while causing only a small second frequency dip (SFD), after the rotor speed converges the power reference is reduced by a small amount and maintained until it meets the reference for maximum power point tracking control. The test results demonstrate that the scheme can improve the FN and the maximum rate of change of frequency while causing only a small SFD under any wind conditions and in a power system that has a high penetration of wind power, and thus the scheme helps maintain the required level of system reliability. The scheme releases KE of 2.9 to 3.7 times the Hydro-Quebec requirement, depending on the power reference.
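The control sequence described above can be sketched as a per-unit simulation of the swing equation: the power reference jumps to the torque-limit curve, rides it down as the rotor slows, is stepped down once the speed converges, and rejoins the MPPT curve as the rotor re-accelerates. All constants (inertia, torque limit, step size, MPPT coefficient) are illustrative assumptions, not values from the paper.

```python
# Schematic per-unit sketch of torque-limit inertial control for a DFIG.
H, T_LIM, STEP = 4.0, 1.0, 0.05      # inertia constant (s), torque limit, step (pu)
k_opt = 0.5                           # assumed MPPT coefficient

def mppt(omega):
    return k_opt * omega ** 3         # maximum power point tracking reference

omega, p_mech, dt = 1.2, 0.6, 0.02    # rotor speed, mechanical power (pu)
phase, p_hold = "release", None
for _ in range(20000):
    if phase == "release":
        p_ref = T_LIM * omega                     # output rides the torque limit down
        if abs(p_mech - p_ref) < 1e-3:            # rotor speed has converged
            p_hold = p_ref - STEP                 # reduce reference by a small amount
            phase = "restore"
    else:
        p_ref = p_hold                            # hold the reduced reference ...
        if p_ref <= mppt(omega):                  # ... until it meets the MPPT curve
            p_ref = mppt(omega)
    omega += dt * (p_mech - p_ref) / (2 * H * omega)   # swing equation
print(f"final speed {omega:.3f} pu, final output {p_ref:.3f} pu")
```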
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method, BackTrackBB (Poiata et al. 2016), for imaging energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale, frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary time series by means of higher-order statistics or energy envelope characteristic functions. This processing is designed to detect signal transients of different scales and a priori unknown predominant frequency, potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme, exploiting the 3-component records, makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated in different tectonic environments: (1) analysis of the year-long precursory phase of the 2014 Iquique earthquake in Chile, and (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
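A one-dimensional toy of the backprojection step is sketched below: for each candidate source position, station-pair correlation traces are evaluated at the theoretical arrival-time differences and stacked, and the best-scoring position is taken as the location. Geometry, velocity, and waveforms are invented for illustration.

```python
# Stack station-pair correlations of characteristic functions onto a grid of
# candidate source positions (1-D analogue of the 3-D time-delay grids).
import numpy as np

v, dt = 3.0, 0.01                        # wave speed (km/s), sample interval (s)
stations = np.array([0.0, 40.0, 80.0])   # station coordinates (km)
true_src, t0 = 25.0, 2.0
t = np.arange(0, 40, dt)

def char_func(x_sta):
    arrival = t0 + abs(x_sta - true_src) / v
    return np.exp(-0.5 * ((t - arrival) / 0.2) ** 2)   # smooth transient

cf = [char_func(x) for x in stations]

grid = np.arange(0.0, 80.0, 0.5)
score = np.zeros_like(grid)
for i in range(len(stations)):
    for j in range(i + 1, len(stations)):
        cc = np.correlate(cf[i], cf[j], mode="full")
        lags = (np.arange(len(cc)) - (len(t) - 1)) * dt   # lag = t_i - t_j
        for k, x in enumerate(grid):
            pred = (abs(stations[i] - x) - abs(stations[j] - x)) / v
            score[k] += np.interp(pred, lags, cc)
print("best grid location:", grid[np.argmax(score)], "km (true 25.0)")
```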
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection
Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem
2013-01-01
The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model through remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by the covariance matrix descriptor encoding the moving information, then is classified into a normal or an abnormal frame. Experiments are conducted, on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset, to demonstrate the promising results of the proposed online LS-OC-SVM method. PMID:24351629
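A simplified, batch stand-in for this pipeline is sketched below: each frame is described by the covariance of its motion features, the symmetric matrix is flattened to a vector, and a one-class SVM trained on normal frames flags outliers. It uses scikit-learn's OneClassSVM rather than the paper's online LS-OC-SVM, and synthetic features instead of real video.

```python
# Covariance-descriptor frames classified by a one-class SVM (batch sketch).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def frame_descriptor(scale):
    feats = rng.normal(0, scale, size=(200, 3))   # e.g. (dx, dy, magnitude)
    cov = np.cov(feats.T)
    iu = np.triu_indices(3)
    return cov[iu]                                # 6-dim flattened descriptor

normal = np.array([frame_descriptor(1.0) for _ in range(300)])
abnormal = np.array([frame_descriptor(3.0) for _ in range(20)])

clf = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
print("abnormal frames flagged:", (clf.predict(abnormal) == -1).mean())
```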
Fault-tolerant locomotion of the hexapod robot.
Yang, J M; Kim, J H
1998-01-01
In this paper, we propose a scheme for fault detection and tolerance in hexapod robot locomotion on even terrain. The fault stability margin is defined to represent the potential stability a gait retains should a sudden fault event occur in one leg. Based on this, fault-tolerant quadruped periodic gaits for the hexapod walking over perfectly even terrain are derived. It is demonstrated that the derived quadruped gait is the optimal gait the hexapod can adopt while keeping the fault stability margin nonnegative, and that a geometric condition must be satisfied for optimal locomotion. Under this scheme, when one leg fails, the hexapod robot switches to the modified tripod gait to continue optimal locomotion.
Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events.
Botwey, Ransford Henry; Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G
2014-01-01
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on (i) Dempster-Shafer Evidential Theory (DST), (ii) Genetic Algorithms (GA), and (iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
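As a minimal sketch of the fusion step, two imperfect glucose predictors are combined below with a convex weight chosen to minimize validation RMSE (a simple stand-in for the DST/GA/GP machinery), and the fused prediction drives a hypoglycemia alarm. All signals and thresholds are synthetic.

```python
# Convex-weight fusion of two predictors plus a simple alarm rule.
import numpy as np

rng = np.random.default_rng(0)
true = 120 + 60 * np.sin(np.linspace(0, 8 * np.pi, 500))     # glucose (mg/dl)
pred_a = true + rng.normal(0, 15, 500)                       # e.g. cARX-like
pred_b = np.roll(true, 5) + rng.normal(0, 8, 500)            # e.g. RNN-like, lagged

weights = np.linspace(0, 1, 101)
rmse = [np.sqrt(np.mean((w * pred_a + (1 - w) * pred_b - true) ** 2))
        for w in weights]
w = weights[int(np.argmin(rmse))]                            # best convex weight
fused = w * pred_a + (1 - w) * pred_b

alarms = fused < 70                                          # hypoglycemia threshold
print(f"weight on predictor A: {w:.2f}, alarm fraction: {alarms.mean():.3f}")
```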
Wang, Chao Saul; Fu, Zhong-Chuan; Chen, Hong-Song; Wang, Dong-Sheng
2014-01-01
As semiconductor technology scales into the nanometer regime, intermittent faults have become an increasing threat. This paper focuses on the effects of intermittent faults on NET versus REG on the one hand, and the implications for dependability strategy on the other. First, the vulnerability characteristics of representative units in OpenSPARC T2 are revealed, and in particular, the highly sensitive modules are identified. Second, an arch-level dependability enhancement strategy is proposed, showing that events such as core/strand running status and core-memory interface events can serve as detectable symptoms. A simple watchdog can be deployed to detect application running status (IEXE event). The SDC (silent data corruption) rate is then evaluated, demonstrating the strategy's potential. Third and last, the responses of the traditional protection schemes in the target CMT to intermittent faults are quantitatively studied in terms of the contribution of each trap type, demonstrating the necessity of taking this factor into account in the strategy.
A scalable multi-photon coincidence detector based on superconducting nanowires.
Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K
2018-06-04
Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on a superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.
Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter
2018-01-01
Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variables interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930
Real-time prediction of the occurrence of GLE events
NASA Astrophysics Data System (ADS)
Núñez, Marlon; Reyes-Santiago, Pedro J.; Malandraki, Olga E.
2017-07-01
A tool for predicting the occurrence of Ground Level Enhancement (GLE) events using the UMASEP scheme is presented. This real-time tool, called HESPERIA UMASEP-500, is based on the detection of the magnetic connection along which protons arrive in the near-Earth environment, by estimating the lag correlation between the time derivatives of the 1 min soft X-ray flux (SXR) and the 1 min near-Earth proton fluxes observed by the GOES satellites. Unlike current GLE warning systems, this tool can predict GLE events before detection by any neutron monitor (NM) station. The prediction performance measured over the period from 1986 to 2016 is presented for two consecutive sub-periods, because of their notable difference in performance. For the 2000-2016 period, this prediction tool obtained a probability of detection (POD) of 53.8% (7 of 13 GLE events), a false alarm ratio (FAR) of 30.0%, and average warning times (AWT) of 8 min with respect to the first NM station's alert and 15 min with respect to the GLE Alert Plus warning. We have tested the model by replacing the GOES proton data with SOHO/EPHIN proton data, and the results are similar in terms of POD, FAR, and AWT for the same period. The paper also presents a comparison with a GLE warning system.
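The lag-correlation test at the heart of the scheme can be illustrated in a few lines: differentiate the soft X-ray and proton flux series, scan lags for the best correlation, and warn when it exceeds a threshold. The series, lag range, and the 0.6 threshold below are illustrative assumptions, not the HESPERIA UMASEP-500 settings.

```python
# Toy magnetic-connection test: lag correlation between the derivatives of a
# soft X-ray flare profile and a delayed proton-flux rise.
import numpy as np

rng = np.random.default_rng(0)
n = 240                                                   # minutes of data
sxr = np.zeros(n); sxr[100:140] = np.hanning(40)          # flare in X-rays
protons = np.zeros(n); protons[115:155] = 0.8 * np.hanning(40)  # delayed protons
sxr += rng.normal(0, 0.005, n); protons += rng.normal(0, 0.005, n)

dsxr, dprot = np.diff(sxr), np.diff(protons)
corrs = {lag: np.corrcoef(dsxr[:-lag], dprot[lag:])[0, 1] for lag in range(1, 60)}
best_lag = max(corrs, key=corrs.get)
if corrs[best_lag] > 0.6:                                 # assumed warning threshold
    print(f"warning: connection detected, lag {best_lag} min, r = {corrs[best_lag]:.2f}")
```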
Freedman, Kevin J; Bastian, Arangassery R; Chaiken, Irwin; Kim, Min Jun
2013-03-11
Protein conjugation provides a unique look into many biological phenomena and has been used for decades for molecular recognition purposes. In this study, the use of solid-state nanopores for the detection of gp120-associated complexes is investigated. These complexes exhibit monovalent and multivalent binding to anti-gp120 antibody monomers and dimers. In order to investigate the feasibility of many practical applications related to nanopores, detection of specific protein complexes is attempted within a heterogeneous protein sample, and the effect of voltage on complexed proteins is examined. It is found that the electric field within the pore can result in unbinding of a freely translocating protein complex within the transient event durations measured experimentally. The strong dependence of the unbinding time on voltage can be used to improve the detection capability of the nanopore system by adding an additional level of specificity that can be probed. These data provide a strong framework for future protein-specific detection schemes, which are shown to be feasible for 'real-world' samples together with an automated, multidimensional method of detecting events. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
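Since the analysis above hinges on event durations, here is a standard way such translocation events are pulled from a current trace: find excursions below a blockage threshold and record their dwell times. The trace is simulated; a real one would come from the patch-clamp amplifier.

```python
# Threshold-crossing event detection on a simulated nanopore current trace.
import numpy as np

rng = np.random.default_rng(0)
fs = 100_000                                   # samples per second
current = 4.0 + rng.normal(0, 0.05, fs)        # 1 s open-pore baseline (nA)
for start in (10_000, 40_000, 70_000):         # three synthetic translocations
    current[start:start + 300] -= 1.5          # current blockage during the event

below = current < 4.0 - 6 * 0.05               # 6-sigma blockage threshold
edges = np.flatnonzero(np.diff(below.astype(int)))
starts, ends = edges[::2], edges[1::2]         # assumes trace starts above threshold
for s, e in zip(starts, ends):
    print(f"event at {s / fs * 1e3:.2f} ms, dwell {(e - s) / fs * 1e6:.0f} us")
```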
Online track detection in triggerless mode for INO
NASA Astrophysics Data System (ADS)
Jain, A.; Padmini, S.; Joseph, A. N.; Mahesh, P.; Preetha, N.; Behere, A.; Sikder, S. S.; Majumder, G.; Behera, S. P.
2018-03-01
The India-based Neutrino Observatory (INO) is a proposed particle physics research project to study atmospheric neutrinos. The INO Iron Calorimeter (ICAL) will consist of 28,800 detectors with 3.6 million electronic channels, expected to activate at a 100 Hz singles rate and produce data at a rate of 3 GBps. The collected data contain a few real hits generated by muon tracks; the rest are noise-induced spurious hits. The estimated reduction factor after filtering out the data of interest is of the order of 10³. This makes trigger generation critical for efficient data collection and storage. A trigger is generated by detecting coincidences across multiple channels satisfying trigger criteria within a small window of 200 ns in the trigger region. As the probability of neutrino interaction is very low, the track detection algorithm has to be efficient and fast enough to process 5 × 10⁶ event candidates/s without introducing significant dead time, so that not even a single neutrino event is missed. A hardware-based trigger system is presently proposed for online track detection, considering the stringent timing requirements. Though such a trigger system can be designed to be scalable, the many hardware devices and interconnections make it a complex and expensive solution with limited flexibility. A software-based track detection approach working on the hit information offers an elegant solution, with the possibility of varying trigger criteria to select various potentially interesting physics events. An event selection approach for an alternative triggerless readout scheme has been developed. The algorithm is mathematically simple, robust, and parallelizable. It has been validated by detecting simulated muon events with energies in the range of 1 GeV-10 GeV with 100% efficiency at a processing rate of 60 μs/event on a 16-core machine. The algorithm and the result of a proof-of-concept for its faster implementation over multiple cores are presented. The paper also discusses harnessing the computing capabilities of a multi-core computing farm, thereby optimizing the number of nodes required for the proposed system.
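A bare-bones software track trigger in the spirit described above is sketched here: hits are bucketed into a short time window, and a window is flagged when its hits line up across detector layers under a straight-line fit. Geometry, rates, and cuts are invented for illustration, not ICAL parameters.

```python
# Triggerless event selection sketch: time-window bucketing + linear track test.
import numpy as np

rng = np.random.default_rng(0)

# Hits as (time_ns, layer, strip): uniform noise plus one injected muon track.
noise = np.column_stack([rng.uniform(0, 1e6, 500),
                         rng.integers(0, 150, 500),
                         rng.integers(0, 64, 500)])
layers = np.arange(30, 60)
track = np.column_stack([5.001e5 + rng.normal(0, 30, 30),   # tight in time
                         layers, 10 + 0.8 * (layers - 30)]) # straight in space
hits = np.vstack([noise, track])

def is_track(window_hits, min_layers=10, max_resid=1.0):
    if len(np.unique(window_hits[:, 1])) < min_layers:
        return False
    layer, strip = window_hits[:, 1], window_hits[:, 2]
    coef = np.polyfit(layer, strip, 1)                  # straight-line fit
    resid = strip - np.polyval(coef, layer)
    return np.median(np.abs(resid)) < max_resid         # robust residual cut

window = 200.0                                          # ns, coincidence window
for t0 in np.arange(0, 1e6, window):
    sel = hits[(hits[:, 0] >= t0) & (hits[:, 0] < t0 + window)]
    if len(sel) >= 10 and is_track(sel):
        print(f"track candidate in window starting {t0:.0f} ns")
```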
NASA Technical Reports Server (NTRS)
Stano, Geoffrey T.; Fuelberg, Henry E.; Roeder, William P.
2010-01-01
This research addresses the 45th Weather Squadron's (45WS) need for improved guidance regarding lightning cessation at Cape Canaveral Air Force Station and Kennedy Space Center (KSC). KSC's Lightning Detection and Ranging (LDAR) network was the primary observational tool to investigate both cloud-to-ground and intracloud lightning. Five statistical and empirical schemes were created from LDAR, sounding, and radar parameters derived from 116 storms. Four of the five schemes were unsuitable for operational use, since lightning advisories would be canceled prematurely, leading to safety risks to personnel. These include a correlation and regression tree analysis, three variants of multiple linear regression, event time trending, and the time delay between the greatest height of the maximum dBZ value and the last flash. These schemes failed to adequately forecast the maximum interval, the greatest time between any two flashes in the storm. The majority of storms had a maximum interval less than 10 min, which biased the schemes toward small values. Success was achieved with the percentile method (PM), which separates the maximum interval into percentiles for the 100 dependent storms.
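The percentile method itself is simple to state in code: from the dependent storms' maximum inter-flash intervals, pick a high percentile as the wait time after the most recent flash before canceling an advisory. The interval distribution and the 95th percentile below are fabricated choices for illustration.

```python
# Percentile-method sketch for lightning-cessation guidance.
import numpy as np

rng = np.random.default_rng(0)
max_intervals_min = rng.gamma(shape=2.0, scale=4.0, size=100)  # per-storm maxima

wait = np.percentile(max_intervals_min, 95)   # chosen operating percentile
print(f"cancel advisory {wait:.1f} min after the last detected flash")
print(f"storms whose max interval exceeds this wait: "
      f"{(max_intervals_min > wait).mean() * 100:.0f}%")
```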
NASA Technical Reports Server (NTRS)
Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.;
2014-01-01
The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning, and radar) to study deep convective storms, their convective transport of trace gases, and the associated lightning occurrence and production of nitrogen oxides (NOx). Based on the measurements taken of the 29-30 May 2012 Oklahoma thunderstorm, an analysis against a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the same event at 3-km horizontal resolution was performed. One of the main objectives was to include various flash rate parameterization schemes (FRPSs) in the model and identify which scheme(s) best captured the flash rates observed by the National Lightning Detection Network (NLDN) and the Oklahoma Lightning Mapping Array (LMA). The comparison indicates how well the schemes predicted the timing, location, and number of lightning flashes. The FRPSs implemented in the model were based on the simulated thunderstorm's physical features, such as maximum vertical velocity, cloud top height, and updraft volume. Adjustment factors were added to each FRPS to best capture the observed flash trend, and a sensitivity study was performed to compare the range in model-simulated lightning-generated nitrogen oxides (LNOx) produced by each FRPS over the storm's lifetime. Based on the best FRPS, model-simulated LNOx was compared against aircraft-measured NOx. The trace gas analysis, along with the increased detail in the model's specification of the vertical distribution of lightning flashes as suggested by the LMA data, provides guidance in determining the scenario of NO production per intracloud and cloud-to-ground flash that best matches the NOx mixing ratios observed by the aircraft.
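The flash rate parameterizations referred to above map a simulated storm property to a flash rate, scaled by an adjustment factor. A generic power-law form in maximum updraft speed is sketched below; the coefficient and exponent are placeholders, not the calibrated DC3 values.

```python
# Generic power-law flash rate parameterization with an adjustment factor.
def flash_rate(w_max_ms, coeff=5e-6, exponent=4.5, adjustment=1.0):
    """Flashes per minute from maximum updraft speed (m/s); constants assumed."""
    return adjustment * coeff * w_max_ms ** exponent

for w in (20, 40, 60):
    print(f"w_max = {w:2d} m/s -> {flash_rate(w):7.1f} flashes/min")
```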
A previously unreported type of seismic source in the firn layer of the East Antarctic Ice Sheet
NASA Astrophysics Data System (ADS)
Lough, Amanda C.; Barcheck, C. Grace; Wiens, Douglas A.; Nyblade, Andrew; Anandakrishnan, Sridhar
2015-11-01
We identify a unique type of seismic source in the uppermost part of the East Antarctic Ice Sheet recorded by temporary broadband seismic arrays in East Antarctica. These sources, termed "firnquakes," are characterized by dispersed surface wave trains with frequencies of 1-10 Hz detectable at distances up to 1000 km. Events show strong dispersed Rayleigh wave trains and an absence of observable body wave arrivals; most events also show weaker Love waves. Initial events were discovered by standard detection schemes; additional events were then detected with a correlation scanner using the initial arrivals as templates. We locate sources by determining the L2 misfit for a grid of potential source locations using Rayleigh wave arrival times and polarization directions. We then perform a multiple-filter analysis to calculate the Rayleigh wave group velocity dispersion and invert the group velocity for shear velocity structure. The resulting velocity structure is used as an input model to calculate synthetic seismograms. Inverting the dispersion curves yields ice velocity structures consistent with a low-velocity firn layer ~100 m thick and shows that the velocity structure is laterally variable. The absence of observable body wave phases and the relative amplitudes of Rayleigh waves and noise constrain the source depth to be less than 20 m. The presence of Love waves for most events suggests the source is not isotropic. We propose the events are linked to the formation of small crevasses in the firn, and several events correlate with shallow crevasse fields mapped in satellite imagery.
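A compact version of the grid-search locator is sketched below: for each candidate epicenter, Rayleigh arrival times are predicted from an assumed group velocity and scored by L2 misfit against the picks, with the unknown origin time removed by demeaning (the polarization term is omitted). Station geometry and velocity are invented.

```python
# Grid-search epicenter location from Rayleigh-wave arrival-time picks.
import numpy as np

v_group = 1.6                                    # km/s, assumed group velocity
stations = np.array([[0, 0], [120, 30], [60, 150], [200, 200]], float)
true_src, t0 = np.array([80.0, 90.0]), 10.0

dist = np.linalg.norm(stations - true_src, axis=1)
picks = t0 + dist / v_group + np.random.default_rng(0).normal(0, 0.2, 4)

xs = ys = np.arange(0, 220, 2.0)
best = (np.inf, None)
for x in xs:
    for y in ys:
        d = np.linalg.norm(stations - np.array([x, y]), axis=1)
        resid = picks - d / v_group
        resid -= resid.mean()                    # removes the unknown origin time
        misfit = np.sum(resid ** 2)              # L2 misfit over stations
        if misfit < best[0]:
            best = (misfit, (x, y))
print("best location:", best[1], "true:", tuple(true_src))
```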
Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.
1987-07-01
The circuit is a CMOS gate array for the study of concurrent error detection (CED) schemes and temporary failures. It consists of six different adders with concurrent error detection schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding.
The best prostate biopsy scheme is dictated by the gland volume: a monocentric study.
Dell'Atti, L
2015-08-01
The accuracy of a biopsy scheme depends on several parameters. The prostate-specific antigen (PSA) level and digital rectal examination (DRE) influence the detection rate and suggest the biopsy scheme with which to approach each patient. Another parameter is the prostate volume: sampling accuracy tends to decrease progressively with increasing prostate volume. We prospectively observed the cancer detection rate in men with suspected prostate cancer (PCa) and improved it by applying a biopsy protocol according to prostate volume (PV). Clinical data and pathological features of 1356 patients were analysed and included in this study. The protocol is a combined scheme that includes transrectal (TR) 12-core PBx (TR12PBx) for PV ≤ 30 cc, TR 14-core PBx (TR14PBx) for PV > 30 cc but < 60 cc, and TR 18-core PBx (TR18PBx) for PV ≥ 60 cc. Out of the 1356 patients, PCa was identified in 111 (8.2%) through the TR12PBx scheme, in 198 (14.6%) through the TR14PBx scheme, and in 253 (18.6%) through the TR18PBx scheme. The PCa detection rate was increased by 44% by adding two TZ cores (TR14PBx scheme). The TR18PBx scheme increased this rate by a further 21.7% vs. the TR14PBx scheme. The diagnostic yield offered by TR18PBx was statistically significantly higher than the detection rate offered by the TR14PBx scheme (p < 0.003). The biopsy Gleason score and the percentage of core involvement were comparable between PCa detected by the TR14PBx scheme and those detected by the TR18PBx scheme (p = 0.362). In our opinion, the PV parameter alone can be decisive in choosing the best biopsy scheme for a first setting of biopsies, increasing the PCa detection rate.
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
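The key point, that detecting a threshold crossing in the recent past is algorithmically simple, shows up directly in a time-driven leaky integrate-and-fire loop: integrate on a fixed grid and, when the threshold is exceeded, interpolate backwards for the crossing time. The neuron parameters below are illustrative.

```python
# Time-driven LIF neuron with retrospective threshold-crossing detection.
tau, v_th, v_reset, dt = 20.0, -50.0, -70.0, 0.1   # ms and mV
v, i_ext = -70.0, 25.0                             # state and constant drive
spike_times = []
for step in range(5000):                           # 500 ms of simulated time
    v_prev = v
    v += dt * (-(v + 70.0) + i_ext) / tau          # Euler step on the grid
    if v >= v_th:                                  # crossing happened within the
        frac = (v_th - v_prev) / (v - v_prev)      # last step: interpolate back
        spike_times.append((step + frac) * dt)
        v = v_reset
print(f"{len(spike_times)} spikes, first at {spike_times[0]:.2f} ms")
```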
Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok; Narayanan, Shrikanth
2017-01-01
Non-verbal communication involves the encoding, transmission and decoding of non-lexical cues and is realized using vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues perform the function of maintaining conversational flow, expressing emotions, and marking personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughter, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughter is associated with affective expressions, while fillers (e.g. um, ah) are used to hold the floor during a conversation. In this paper we present an automatic non-verbal vocal event detection system focusing on the detection of laughter and fillers. We extend our system presented during the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristic curve of 95.3% for detecting laughter and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to features carrying higher discriminability, with implications towards better system design. PMID:28713197
New coherent laser communication detection scheme based on channel-switching method.
Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren
2015-04-01
A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors, which output the in-phase (I) channel and quadrature-phase (Q) channel signal currents, respectively. With this method, ultrahigh-speed analog-to-digital conversion of the I or Q channel signal is not required. The phase error between the signal and local lasers is obtained by a simple analog circuit. Using the phase error signal, the signals of the I and Q channels are switched alternately. The principle of this detection scheme is presented. Moreover, the sensitivity of this scheme is compared with that of homodyne detection with an optical phase-locked loop. An experimental setup was constructed to verify the proposed detection scheme, and the offline processing procedure and results are presented. This scheme can be realized with a simple structure and has potential applications in cost-effective high-speed laser communication.
A Low-Cost Tracking System for Running Race Applications Based on Bluetooth Low Energy Technology.
Perez-Diaz-de-Cerio, David; Hernández-Solana, Ángela; Valdovinos, Antonio; Valenzuela, Jose Luis
2018-03-20
Timing points used in running races and other competition events are generally based on radio-frequency identification (RFID) technology. Athletes' times are calculated via passive RFID tags and reader kits. The reader infrastructure needed is complex and requires the deployment of a mat or ramps which hide the receiver antennae under them. Moreover, with the employed tags it is not possible to transmit additional, dynamic information such as pulse or oximetry monitoring, alarms, etc. In this paper we present a system based on two low-complexity schemes allowed in Bluetooth Low Energy (BLE): the non-connectable undirected advertisement process and a modified version of the scannable undirected advertisement process using the new capabilities present in Bluetooth 5. After fully describing the system architecture, which allows full real-time position monitoring of the runners using mobile phones on the organizer side and BLE sensors on the participants' side, we derive the mobility patterns of runners and the capacity requirements, which are determinant for evaluating the performance of the proposed system. They have been obtained from the analysis of real data measured in the last Barcelona Marathon. By means of simulations, we demonstrate that, even under disadvantageous conditions (50% error ratio), both schemes perform reliably and are able to detect 100% of the participants in all cases. The cell coverage of the system needs to be adjusted when the non-connectable process is considered. Nevertheless, through simulations and experiments, we show that the proposed scheme based on the new events available in Bluetooth 5 is clearly the best implementation alternative in all cases, no matter the coverage area and the runner speed. The proposal widely exceeds the detection requirements of the real scenario, surpassing the measured peaks of 20 sensors per second entering the coverage area, moving at speeds that range from 1.5 m/s to 6.25 m/s. The designed real test-bed shows that the scheme is able to detect 72 sensors in under 600 ms, comfortably fulfilling the requirements of the intended application. The main disadvantage of this system is that the sensors are active, but we have shown that their consumption can be so low (9.5 µA) that, with a typical button cell, the sensor battery life would be over 10,000 h of use.
Parallel discrete-event simulation schemes with heterogeneous processing elements.
Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung
2014-07-01
To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model is the model for the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme). The Family model is the model for the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered. In the first kind, the computing capacities of the PEs are not much different, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on complex networks shows synchronizability and scalability regardless of the kind of PEs. The EW scheme never shows synchronizability for a random configuration of PEs of the first kind. However, by regularizing the arrangement of PEs of the first kind, the EW scheme can be made to show synchronizability. In contrast, the EW scheme never shows synchronizability for any configuration of PEs of the second kind.
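The conservative update rule behind such virtual-time-horizon models is compact enough to state in code. The following minimal sketch is our illustration (assuming a ring topology and identical PEs): a processing element may advance its local virtual time only if it is not ahead of its neighbors, which is the mechanism whose fluctuations map onto KPZ-type surface growth.

import random

N = 100                           # number of processing elements on a ring
tau = [0.0] * N                   # local virtual times (the "surface heights")

def step():
    i = random.randrange(N)
    left, right = tau[(i - 1) % N], tau[(i + 1) % N]
    # Conservative rule: PE i may process its next event only if it is
    # not ahead of either neighbor (otherwise it must idle and wait).
    if tau[i] <= min(left, right):
        tau[i] += random.expovariate(1.0)   # random exponential time increment

for _ in range(100_000):
    step()
mean = sum(tau) / N
width2 = sum((t - mean) ** 2 for t in tau) / N   # roughness of the time horizon
print(f"mean virtual time {mean:.2f}, squared width {width2:.2f}")

The fraction of attempted steps that succeed measures the scheme's utilization, and the growth of the squared width with N is what distinguishes synchronizable from non-synchronizable configurations.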
NASA Astrophysics Data System (ADS)
Cua, G. B.; Fischer, M.; Caprio, M.; Heaton, T. H.; CISN Earthquake Early Warning Project Team
2010-12-01
The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of 3 EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system that could potentially be implemented in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the ongoing earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network since July 2008, and at the Northern California Seismic Network since February 2009. We discuss recent enhancements to the VS EEW algorithm that are being integrated into CISN ShakeAlert. We developed and continue to test a multiple-threshold event detection scheme, which uses different association/location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to initiate an event declaration, with the goal of reducing false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and the requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) into an on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Real-time and offline analysis on Swiss and California waveform datasets indicates that the multiple-threshold approach is faster and more reliable for larger events than the earlier version of the VS codes. In addition, we provide evolutionary estimates of the probability of false alarms (PFA), which is an envisioned output stream of the CISN ShakeAlert system. The real-time decision-making approach envisioned for CISN ShakeAlert users, where users specify a threshold PFA in addition to thresholds on peak ground motion estimates, has the potential to increase the available warning time for users with high tolerance to false alarms without compromising the needs of users with lower tolerances to false alarms.
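The multiple-threshold association logic described above reduces to a simple decision rule. The sketch below is our paraphrase of that logic; the amplitude threshold and pick counts are assumed placeholders, not the operational CISN values.

# Assumed illustrative thresholds (placeholders, not operational values).
HIGH_AMP = 1.0e-3   # peak amplitude (m/s) above which one station suffices
MIN_PICKS = 4       # picks required for low-amplitude (likely small) events

def declare_event(picks):
    """picks: list of (station_id, peak_amplitude) for associated P picks."""
    if any(amp >= HIGH_AMP for _, amp in picks):
        return len(picks) >= 1     # single-station declaration for strong shaking
    return len(picks) >= MIN_PICKS # demand more picks to suppress false alarms

print(declare_event([("STA1", 2.5e-3)]))                # True: one strong pick
print(declare_event([("STA1", 1e-5), ("STA2", 2e-5)]))  # False: too few weak picks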
Recovery and normalization of triple coincidences in PET.
Lage, Eduardo; Parot, Vicente; Moore, Stephen C; Sitek, Arkadiusz; Udías, Jose M; Dave, Shivang R; Park, Mi-Ae; Vaquero, Juan J; Herraiz, Joaquin L
2015-03-01
Triple coincidences in positron emission tomography (PET) are events in which three γ-rays are detected simultaneously. These events, though potentially useful for enhancing the sensitivity of PET scanners, are discarded or processed without special consideration in current systems, because there is not a clear criterion for assigning them to a unique line-of-response (LOR). Methods proposed for recovering such events usually rely on the use of highly specialized detection systems, hampering general adoption, and/or are based on Compton-scatter kinematics and, consequently, are limited in accuracy by the energy resolution of standard PET detectors. In this work, the authors propose a simple and general solution for recovering triple coincidences, which does not require specialized detectors or additional energy resolution requirements. To recover triple coincidences, the authors' method distributes such events among their possible LORs using the relative proportions of double coincidences in these LORs. The authors show analytically that this assignment scheme represents the maximum-likelihood solution for the triple-coincidence distribution problem. The PET component of a preclinical PET/CT scanner was adapted to enable the acquisition and processing of triple coincidences. Since the efficiencies for detecting double and triple events were found to be different throughout the scanner field-of-view, a normalization procedure specific to triple coincidences was also developed. The effect of including triple coincidences using their method was compared against the cases of equally weighting the triples among their possible LORs and discarding all the triple events. As figures of merit for this comparison, the authors used sensitivity, noise-equivalent count (NEC) rates and image quality calculated as described in the NEMA NU-4 protocol for the assessment of preclinical PET scanners. The addition of triple-coincidence events with the authors' method increased the peak NEC rates of the scanner by 26.6% and 32% for mouse- and rat-sized objects, respectively. This increase in NEC-rate performance was also reflected in the image-quality metrics. Images reconstructed using double and triple coincidences recovered with their method had a better signal-to-noise ratio than those obtained using only double coincidences, while preserving spatial resolution and contrast. Distribution of triple coincidences using an equal-weighting scheme increased apparent system sensitivity but degraded image quality. The performance boost provided by the inclusion of triple coincidences using their method made it possible to reduce the acquisition time of standard imaging procedures by up to ∼25%. Recovering triple coincidences with the proposed method can effectively increase the sensitivity of current clinical and preclinical PET systems without compromising other parameters like spatial resolution or contrast.
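The assignment rule at the heart of the method admits a very short implementation. The sketch below is our reading of the description above: each triple coincidence is distributed fractionally among its candidate LORs in proportion to the double-coincidence counts already accumulated on those LORs; the function and variable names are ours.

def distribute_triple(candidate_lors, double_counts, lor_counts):
    """Fractionally assign one triple coincidence to its candidate LORs.

    candidate_lors: LOR identifiers compatible with the three detected gammas.
    double_counts:  dict mapping LOR -> number of double coincidences seen.
    lor_counts:     dict accumulating (fractional) event counts per LOR.
    """
    weights = [double_counts.get(lor, 0) for lor in candidate_lors]
    total = sum(weights)
    if total == 0:
        weights = [1] * len(candidate_lors)   # no information: split evenly
        total = len(candidate_lors)
    for lor, w in zip(candidate_lors, weights):
        lor_counts[lor] = lor_counts.get(lor, 0.0) + w / total

counts = {}
distribute_triple(["lor_a", "lor_b"], {"lor_a": 30, "lor_b": 10}, counts)
print(counts)   # {'lor_a': 0.75, 'lor_b': 0.25}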
Classifying GRB 170817A/GW170817 in a Fermi duration-hardness plane
NASA Astrophysics Data System (ADS)
Horváth, I.; Tóth, B. G.; Hakkila, J.; Tóth, L. V.; Balázs, L. G.; Rácz, I. I.; Pintér, S.; Bagoly, Z.
2018-03-01
GRB 170817A, associated with the LIGO-Virgo GW170817 neutron-star merger event, lacks the short duration and hard spectrum expected of a Short gamma-ray burst (GRB) under long-standing classification models. Correctly identifying the class to which this burst belongs requires comparison with other GRBs detected by the Fermi GBM. The aim of our analysis is to classify Fermi GRBs and to test whether or not GRB 170817A belongs, as suggested, to the Short GRB class. The Fermi GBM catalog provides a large database with many measured variables that can be used to explore gamma-ray burst classification. We use statistical techniques to look for clustering in a sample of 1298 gamma-ray bursts described by duration and spectral hardness. Classification of the detected bursts shows that GRB 170817A most likely belongs to the Intermediate, rather than the Short GRB class. We discuss this result in light of theoretical neutron-star merger models and existing GRB classification schemes. It appears that GRB classification schemes may not yet be linked to appropriate theoretical models, and that theoretical models may not yet adequately account for known GRB class properties. We conclude that GRB 170817A may not fit into a simple phenomenological classification scheme.
Diagnosis diagrams for passing signals on an automatic block signaling railway section
NASA Astrophysics Data System (ADS)
Spunei, E.; Piroi, I.; Chioncel, C. P.; Piroi, F.
2018-01-01
This work presents a diagnosis method for railway traffic security installations. More specifically, the authors present a series of diagnosis charts for passing signals on a railway block equipped with an automatic block signaling installation. These charts are based on the exploitation electric schemes and are subsequently used to develop a diagnosis software package. The resulting software package contributes substantially to reducing the time needed to detect and remedy these types of installation faults. The use of the software package eliminates wrong decisions in the fault detection process, decisions that may result in longer remedy times and, sometimes, in railway traffic events.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M>=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time coding based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Event-triggered attitude control of spacecraft
NASA Astrophysics Data System (ADS)
Wu, Baolin; Shen, Qiang; Cao, Xibin
2018-02-01
The problem of spacecraft attitude stabilization with limited communication and external disturbances is investigated based on an event-triggered control scheme. In the proposed scheme, information on attitude and control torque only needs to be transmitted at discrete triggered times, when a defined measurement error exceeds a state-dependent threshold. The proposed control scheme not only guarantees that spacecraft attitude control errors converge toward a small invariant set containing the origin, but also ensures that there is no accumulation of triggering instants. The performance of the proposed control scheme is demonstrated through numerical simulation.
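The defining ingredient of such schemes is the triggering test itself. The sketch below shows the generic form, in which a new control transmission occurs only when the measurement error exceeds a state-dependent threshold; the gains sigma and epsilon are assumed illustrative values, not the ones derived in the paper.

import numpy as np

sigma, epsilon = 0.1, 1e-4    # assumed trigger gains (illustrative only)

def should_trigger(x, x_last_sent):
    """Trigger a transmission when the error accumulated since the last
    sent state exceeds a state-dependent threshold; epsilon > 0 enforces
    a minimum inter-event time (no accumulation of triggering instants)."""
    e = np.linalg.norm(x - x_last_sent)
    return e >= sigma * np.linalg.norm(x) + epsilon

x_sent = np.array([0.10, -0.05, 0.02])   # last transmitted attitude error state
x_now = np.array([0.12, -0.04, 0.02])    # current attitude error state
print(should_trigger(x_now, x_sent))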
Fei, Lanfang; Peng, Zhou
2017-02-01
In 2005, China introduced an administrative no-fault one-time compensation scheme for adverse events following immunization (AEFI). The scheme aims to ensure fair compensation for those injured by adverse reactions following immunization. These individuals bear a significant burden for the benefits of widespread immunization. However, there is little empirical evidence of how the scheme has been implemented and how it functions in practice. The article aims to fill this gap. Based on an analysis of the legal basis of the scheme and of practical compensation cases, this article examines the structuring, function, and effects of the scheme; evaluates loopholes in the scheme; evaluates the extent to which the scheme has achieved its intended objectives; and discusses further development of the scheme.
Anti-islanding Protection of Distributed Generation Using Rate of Change of Impedance
NASA Astrophysics Data System (ADS)
Shah, Pragnesh; Bhalja, Bhavesh
2013-08-01
Distributed Generation (DG), which is interlinked with the distribution system, has an inevitable effect on the distribution system. Integrating DG with the utility network demands an anti-islanding scheme to protect the system. Failure to trip islanded generators can lead to problems such as threats to personnel safety, out-of-phase reclosing, and degradation of power quality. In this article, a new method for anti-islanding protection based on impedance monitoring of the distribution network is carried out in the presence of DG. The impedance measured between two phases is used to derive the rate of change of impedance (dz/dt), and its peak values are used for the final trip decision. Test data are generated using the PSCAD/EMTDC software package and the performance of the proposed method is evaluated in MatLab software. The simulation results show the effectiveness of the proposed scheme, as it is capable of detecting islanding conditions accurately. Subsequently, it is also observed that the proposed scheme does not mal-operate during other disturbances such as short circuits and switching events.
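The trip logic reduces to differentiating the measured phase-to-phase impedance and comparing the resulting peaks against a setting. The following sketch is our simplified illustration; the relay threshold and sampling rate are assumed values.

import numpy as np

fs = 1000.0                 # assumed sampling rate (Hz)
dz_dt_threshold = 50.0      # assumed relay setting (ohm/s), illustrative

def islanding_trip(z_measured):
    """z_measured: array of impedance magnitudes between two phases (ohm)."""
    dz_dt = np.gradient(z_measured, 1.0 / fs)   # rate of change of impedance
    peak = np.max(np.abs(dz_dt))
    return peak >= dz_dt_threshold, peak

# Synthetic example: impedance jumps when the DG islands at t = 0.5 s.
t = np.arange(0, 1, 1 / fs)
z = 10.0 + 5.0 * (t >= 0.5)
trip, peak = islanding_trip(z)
print(trip, f"{peak:.1f} ohm/s")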
An active monitoring method for flood events
NASA Astrophysics Data System (ADS)
Chen, Zeqiang; Chen, Nengcheng; Du, Wenying; Gong, Jianya
2018-07-01
Timely and active detecting and monitoring of a flood event are critical for a quick response, effective decision-making and disaster reduction. To achieve this purpose, this paper proposes an active service framework for flood monitoring based on Sensor Web services, and an active model for the concrete implementation of the active service framework. The framework consists of two core components: active warning and active planning. The active warning component is based on a publish-subscribe mechanism implemented by the Sensor Event Service. The active planning component employs the Sensor Planning Service to control the execution of the schemes and models and plans the model input data. The active model, called SMDSA, defines the quantitative calculation method for five elements, scheme, model, data, sensor, and auxiliary information, as well as their associations. Experimental monitoring of the Liangzi Lake flood in the summer of 2010 is conducted to test the proposed framework and model. The results show that 1) the proposed active service framework is efficient for timely and automated flood monitoring; 2) the active model, SMDSA, is a quantitative calculation method that moves flood monitoring from manual intervention to automatic computation; and 3) as much preliminary work as possible should be done to take full advantage of the active service framework and the active model.
Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy
NASA Astrophysics Data System (ADS)
Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente
2017-02-01
We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single hologram acquisition and using a fast convergence algorithm for image processing. Altogether, MISHELF (initials coming from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel domain diffraction patterns in a single camera snapshot obtained by illuminating the sample with three coherent lights at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (µm range) phase-retrieved (twin image elimination) quantitative phase imaging of dynamic events (video rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method thus becomes an alternative instrument that improves some capabilities of existing lensless microscopes.
Current Saturation Avoidance with Real-Time Control using DPCS
NASA Astrophysics Data System (ADS)
Ferrara, M.; Hutchinson, I.; Wolfe, S.; Stillerman, J.; Fredian, T.
2008-11-01
Tokamak ohmic-transformer and equilibrium-field coils need to be able to operate near their maximum current capabilities. However, if they reach their upper limit during high-performance discharges or in the presence of a strong off-normal event, shape control is compromised, and instability and even plasma disruptions can result. On Alcator C-Mod we designed and tested an anti-saturation routine which detects the impending saturation of OH and EF currents and interpolates to a neighboring safe equilibrium in real time. The routine was implemented with a multi-processor, multi-time-scale control scheme, which is based on a master process and multiple asynchronous slave processes. The scheme is general and can be used for any computationally intensive algorithm. USDoE award DE-FC02-99ER545512.
A Lightweight Data Integrity Scheme for Sensor Networks
Kamel, Ibrahim; Juma, Hussam
2011-01-01
Limited energy is the most critical constraint that limits the capabilities of wireless sensor networks (WSNs). Most sensors operate on batteries with limited power. Battery recharging or replacement may be impossible. Security mechanisms that are based on public key cryptographic algorithms, such as RSA and digital signatures, are prohibitively expensive in terms of energy consumption and storage requirements, and thus unsuitable for WSN applications. This paper proposes a new fragile watermarking technique to detect unauthorized alterations in WSN data streams. We propose the FWC-D scheme, which uses group delimiters to keep the sender and receivers synchronized and help them avoid ambiguity in the event of data insertion or deletion. The watermark, which is computed using a hash function, is stored in the previous group in a linked-list fashion to ensure data freshness and mitigate replay attacks. FWC-D generates a serial number SN that is attached to each group to help the receiver determine how many group insertions or deletions occurred. A detailed security analysis that compares the proposed FWC-D scheme with SGW, one of the latest integrity schemes for WSNs, shows that FWC-D is more robust than SGW. Simulation results further show that the proposed scheme is much faster than SGW. PMID:22163840
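A minimal sketch of the chaining idea follows, under our own simplifying assumptions (SHA-256 as the keyed hash, '|' as the group delimiter, and the previous group's watermark carried forward alongside a plain serial counter); the actual FWC-D encoding is richer than what is shown.

import hashlib

DELIM = b"|"   # assumed group delimiter byte (illustrative)

def make_groups(readings, group_size, key):
    """Chain groups of sensor readings: each group carries the serial number
    and the keyed hash (watermark) of the previous group, so insertions,
    deletions, and replays break the chain at a detectable point."""
    groups, prev_wm, sn = [], b"", 0
    for i in range(0, len(readings), group_size):
        payload = DELIM.join(str(r).encode() for r in readings[i:i + group_size])
        groups.append((sn, payload, prev_wm))
        prev_wm = hashlib.sha256(key + payload + sn.to_bytes(4, "big")).digest()
        sn += 1
    return groups

def verify(groups, key):
    prev_wm = b""
    for sn, payload, carried_wm in groups:
        if carried_wm != prev_wm:
            return False, sn          # chain broken: tampering localized here
        prev_wm = hashlib.sha256(key + payload + sn.to_bytes(4, "big")).digest()
    return True, None

g = make_groups([21.5, 21.7, 21.6, 21.9, 22.0, 22.1], 2, b"secret")
print(verify(g, b"secret"))           # (True, None)
g[1] = (g[1][0], b"9.9|9.9", g[1][2]) # tamper with the second group's payload
print(verify(g, b"secret"))           # chain breaks at the following group

Tampering with a group's payload breaks the hash chain at the next group, which localizes the modification without storing any extra marks inside the data values themselves.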
Autonomous Detection of Eruptions, Plumes, and Other Transient Events in the Outer Solar System
NASA Astrophysics Data System (ADS)
Bunte, M. K.; Lin, Y.; Saripalli, S.; Bell, J. F.
2012-12-01
The outer solar system abounds with visually stunning examples of dynamic processes such as eruptive events that jettison materials from satellites and small bodies into space. The most notable examples of such events are the prominent volcanic plumes of Io, the wispy water jets of Enceladus, and the outgassing of comet nuclei. We are investigating techniques that will allow a spacecraft to autonomously detect those events in visible images. This technique will allow future outer planet missions to conduct sustained event monitoring and automate prioritization of data for downlink. Our technique detects plumes by searching for concentrations of large local gradients in images. Applying a Scale Invariant Feature Transform (SIFT) to either raw or calibrated images identifies interest points for further investigation based on the magnitude and orientation of local gradients in pixel values. The interest points are classified as possible transient geophysical events when they share characteristics with similar features in user-classified images. A nearest neighbor classification scheme assesses the similarity of all interest points within a threshold Euclidean distance and classifies each according to the majority classification of other interest points. Thus, features marked by multiple interest points are more likely to be classified positively as events; isolated large plumes or multiple small jets are easily distinguished from a textured background surface due to the higher magnitude gradient of the plume or jet when compared with the small, randomly oriented gradients of the textured surface. We have applied this method to images of Io, Enceladus, and comet Hartley 2 from the Voyager, Galileo, New Horizons, Cassini, and Deep Impact EPOXI missions, where appropriate, and have successfully detected up to 95% of manually identifiable events that our method was able to distinguish from the background surface and surface features of a body. Dozens of distinct features are identifiable under a variety of viewing conditions and hundreds of detections are made in each of the aforementioned datasets. In this presentation, we explore the controlling factors in detecting transient events and discuss causes of success or failure due to distinct data characteristics. These include the level of calibration of images, the ability to differentiate an event from artifacts, and the variety of event appearances in user-classified images. Other important factors include the physical characteristics of the events themselves: albedo, size as a function of image resolution, and proximity to other events (as in the case of multiple small jets which feed into the overall plume at the south pole of Enceladus). A notable strength of this method is the ability to detect events that do not extend beyond the limb of a planetary body or are adjacent to the terminator or other strong edges in the image. The former scenario strongly influences the success rate of detecting eruptive events in nadir views.
Team interaction during surgery: a systematic review of communication coding schemes.
Tiferes, Judith; Bisantz, Ann M; Guru, Khurshid A
2015-05-15
Communication problems have been systematically linked to human errors in surgery and a deep understanding of the underlying processes is essential. Although a number of tools exist to assess nontechnical skills, methods to study communication and other team-related processes are far from being standardized, making comparisons challenging. We conducted a systematic review to analyze methods used to study events in the operating room (OR) and to develop a synthesized coding scheme for OR team communication. Six electronic databases were accessed to search for articles that collected individual events during surgery and included detailed coding schemes. Additional articles were added based on cross-referencing. That collection was then classified based on type of events collected, environment type (real or simulated), number of procedures, type of surgical task, team characteristics, method of data collection, and coding scheme characteristics. All dimensions within each coding scheme were grouped based on emergent content similarity. Categories drawn from articles, which focused on communication events, were further analyzed and synthesized into one common coding scheme. A total of 34 of 949 articles met the inclusion criteria. The methodological characteristics and coding dimensions of the articles were summarized. A priori coding was used in nine studies. The synthesized coding scheme for OR communication included six dimensions as follows: information flow, period, statement type, topic, communication breakdown, and effects of communication breakdown. The coding scheme provides a standardized coding method for OR communication, which can be used to develop a priori codes for future studies especially in comparative effectiveness research.
NASA Astrophysics Data System (ADS)
Singh, K. S.; Bonthu, Subbareddy; Purvaja, R.; Robin, R. S.; Kannan, B. A. M.; Ramesh, R.
2018-04-01
This study attempts to investigate the real-time prediction of a heavy rainfall event over the Chennai Metropolitan City, Tamil Nadu, India that occurred on 01 December 2015 using the Advanced Research Weather Research and Forecasting (WRF-ARW) model. The study evaluates the impact of six microphysical (Lin, WSM6, Goddard, Thompson, Morrison and WDM6) parameterization schemes of the model on the prediction of the heavy rainfall event. In addition, model sensitivity has also been evaluated with six Planetary Boundary Layer (PBL) and two Land Surface Model (LSM) schemes. Model forecasts were carried out using nested domains, and the impact of the model's horizontal grid resolution was assessed at 9 km, 6 km and 3 km. Analysis of the synoptic features using National Center for Environmental Prediction Global Forecast System (NCEP-GFS) analysis data revealed that strong upper-level divergence and high moisture content at lower levels were favorable for the occurrence of the heavy rainfall event over the northeast coast of Tamil Nadu. The study showed that the forecasted rainfall was more sensitive to the microphysics and PBL schemes than to the LSM schemes. The model provided a better forecast of the heavy rainfall event using the logical combination of the Goddard microphysics, YSU PBL and Noah LSM schemes, and this was mostly attributed to the timely initiation and development of the convective system. The forecasts with different horizontal resolutions using cumulus parameterization indicated that the rainfall prediction was not well represented at 9 km and 6 km. The forecast with 3 km horizontal resolution provided a better prediction in terms of the timely initiation and development of the event. The study highlights that forecasts of heavy rainfall events using a high-resolution mesoscale model with suitable representations of physical parameterization schemes are useful for disaster management and planning to minimize the potential loss of life and property.
Mining patterns in persistent surveillance systems with smart query and visual analytics
NASA Astrophysics Data System (ADS)
Habibi, Mohammad S.; Shirkhodaie, Amir
2013-05-01
In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps take pre-emptive steps to counter an adversary's actions. The interactive Visual Analytics (VA) model offers this platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need for identifying and offsetting these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require a method for their filtration before being processed further. In this paper, we have introduced an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from sensor-generated semantically annotated messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a Temporal-Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) for Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. In this paper, we present a new visual analytic tool for testing and evaluating group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
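Since DTW carries the similarity matching in this approach, a compact reference implementation is useful. The sketch below is the textbook dynamic program over two 1-D sequences, not the authors' temporal variant.

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match from the previous cells:
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two activity signatures that differ mainly in timing, not in shape:
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 1]))   # small
print(dtw_distance([0, 1, 2, 3, 2, 1], [3, 2, 1, 0, 1, 2]))          # larger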
Learning-automaton-based online discovery and tracking of spatiotemporal event patterns.
Yazidi, Anis; Granmo, Ole-Christoffer; Oommen, B John
2013-06-01
Discovering and tracking of spatiotemporal patterns in noisy sequences of events are difficult tasks that have become increasingly pertinent due to recent advances in ubiquitous computing, such as community-based social networking applications. The core activities for applications of this class include the sharing and notification of events, and the importance and usefulness of these functionalities increase as event sharing expands into larger areas of one's life. Ironically, instead of being helpful, an excessive number of event notifications can quickly render the functionality of event sharing obtrusive. Indeed, any notification of events that provides redundant information to the application/user can be seen as an unnecessary distraction. In this paper, we introduce a new scheme for discovering and tracking noisy spatiotemporal event patterns, with the purpose of suppressing reoccurring patterns while discerning novel events. Our scheme is based on maintaining a collection of hypotheses, each one conjecturing a specific spatiotemporal event pattern. A dedicated learning automaton (LA), the spatiotemporal pattern LA (STPLA), is associated with each hypothesis. By processing events as they unfold, we attempt to infer the correctness of each hypothesis through a real-time guided random walk. Consequently, the scheme that we present is computationally efficient, with a minimal memory footprint. Furthermore, it is ergodic, allowing adaptation. Empirical results involving extensive simulations demonstrate the superior convergence and adaptation speed of STPLA, as well as an ability to operate successfully with noise, including both the erroneous inclusion and omission of events. An empirical comparison study was performed and confirms the superiority of our scheme compared to a similar state-of-the-art approach. In particular, the robustness of the STPLA to inclusion as well as to omission noise constitutes a unique property compared to other related approaches. In addition, the results included, which involve the so-called "presence sharing" application, are both promising and, in our opinion, impressive. It is thus our opinion that the proposed STPLA scheme is, in general, ideal for improving the usefulness of event notification and sharing systems, since it is capable of significantly, robustly, and adaptively suppressing redundant information.
Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications.
Pérez-Torres, Rafael; Torres-Huitzil, César; Galeana-Zapién, Hiram
2016-10-13
The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications as a way to adapt the information and services provided to smartphone users according to their moving patterns. Location-based applications usually employ the GPS receiver along with Wi-Fi hot-spots and cellular cell-tower mechanisms for estimating user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay points detection. Such a Mobile Cloud Computing approach has been successfully employed for extending the smartphone's battery lifetime by offloading computation costs, under the assumption that on-device stay points detection is prohibitive. In this article, we propose and validate the feasibility of an alternative event-driven mechanism for stay points detection that is executed fully on-device, and that provides higher energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, where a stream of GPS location updates is collected in the background, supporting duty-cycling schemes, and incrementally analyzed following an event-driven paradigm for stay points detection. To evaluate the performance of the proposed middleware, real-world experiments were conducted under different stress levels, validating its power efficiency when compared against a Mobile Cloud Computing oriented solution.
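For reference, the classic batch stay-point extraction rule (a distance threshold plus a minimum dwell time over a GPS stream) fits in a few lines. This is the standard formulation rather than the event-driven incremental variant the middleware implements, and the 200 m / 20 min thresholds are assumed values.

import math

def haversine_m(p, q):
    """Great-circle distance in meters between (lat, lon, ...) points in degrees."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p[:2], *q[:2]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def stay_points(track, d_max=200.0, t_min=20 * 60):
    """track: list of (lat, lon, unix_time). Assumed thresholds: 200 m, 20 min."""
    points, i = [], 0
    while i < len(track):
        j = i + 1
        # Extend the window while successive fixes stay within d_max meters.
        while j < len(track) and haversine_m(track[i], track[j]) <= d_max:
            j += 1
        # A stay point is declared only if the dwell time exceeds t_min.
        if track[j - 1][2] - track[i][2] >= t_min:
            lat = sum(p[0] for p in track[i:j]) / (j - i)
            lon = sum(p[1] for p in track[i:j]) / (j - i)
            points.append((lat, lon, track[i][2], track[j - 1][2]))
        i = j
    return points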
Improved Fake-State Attack to the Quantum Key Distribution Systems
NASA Astrophysics Data System (ADS)
Zhang, Sheng; Wang, Jian; Tang, Chao-jing
2012-09-01
It has been shown that most commercial quantum cryptosystems are vulnerable to fake-state attacks, which exploit the loophole that avalanche photodiodes used as single-photon detectors still produce detection events in the linear mode. However, previous fake-state attacks may be easily prevented by either installing a watchdog or reconfiguring the dead-time assigning component. In this paper, we present a new technique to counteract the after-pulse effect enhanced by fake-state attacks, in order to lower the quantum bit error rate. Obviously, it is more difficult to detect the presented attack scheme. Indeed, this work contributes to the implementation of secure quantum cryptosystems in real life.
NASA Astrophysics Data System (ADS)
Tsai, F.; Lai, J. S.; Chiang, S. H.
2015-12-01
Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data consisting of land-cover and environmental information have been widely used for producing landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, represents the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images have become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition, etc.) directly from remote sensing imagery. Consequently, the landslide extents extracted using image-based visual interpretation and automatic detection may contain many depositions, which can reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating depositions from detected landslide areas to improve landslide susceptibility modeling. In this study, a Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed located in northern Taiwan. Eleven causative factors are considered, including terrain slope, aspect, curvature, elevation, geology, land-use, NDVI, soil, and distance to fault, river and road. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy. Experimental results demonstrate that the accuracies of landslide susceptibility analysis in all sequential predictions are improved significantly after eliminating landslide depositions.
Khan, Aihab; Husain, Syed Afaq
2013-01-01
We put forward a fragile zero watermarking scheme to detect and characterize malicious modifications made to a database relation. Most of the existing watermarking schemes for relational databases introduce intentional errors or permanent distortions as marks into the database's original content. These distortions inevitably degrade the data quality and data usability, as the integrity of the relational database is violated. Moreover, these fragile schemes can detect malicious data modifications but do not characterize the tampering attack, that is, the nature of the tampering. The proposed fragile scheme is based on a zero watermarking approach to detect malicious modifications made to a database relation. In zero watermarking, the watermark is generated (constructed) from the contents of the original data rather than by introducing permanent distortions as marks into the data. As a result, the proposed scheme is distortion-free; thus, it also resolves the inherent conflict between security and imperceptibility. The proposed scheme also characterizes the malicious data modifications to quantify the nature of the tampering attacks. Experimental results show that even minor malicious modifications made to a database relation can be detected and characterized successfully.
On resilience studies of system detection and recovery techniques against stealthy insider attacks
NASA Astrophysics Data System (ADS)
Wei, Sixiao; Zhang, Hanlin; Chen, Genshe; Shen, Dan; Yu, Wei; Pham, Khanh D.; Blasch, Erik P.; Cruz, Jose B.
2016-05-01
With the explosive growth of network technologies, insider attacks have become a major concern to business operations that largely rely on computer networks. To better detect insider attacks that marginally manipulate network traffic over time, and to recover the system from attacks, in this paper we implement a temporal-based detection scheme using the sequential hypothesis testing technique. Two hypothetical states are considered: the null hypothesis that the collected information is from benign historical traffic, and the alternative hypothesis that the network is under attack. The objective of such a detection scheme is to recognize the change within the shortest time by comparing the two defined hypotheses. In addition, once the attack is detected, a server migration-based system recovery scheme can be triggered to recover the system to its state prior to the attack. To understand the mitigation of insider attacks, a multi-functional web display of the detection analysis was developed for real-time analytics. Experiments using real-world traffic traces evaluate the effectiveness of the Detection System and Recovery (DeSyAR) scheme. The evaluation data validate that the detection scheme based on sequential hypothesis testing and the server migration-based system recovery scheme can perform well in effectively detecting insider attacks and recovering the system under attack.
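The sequential hypothesis test underlying such a detection scheme is Wald's SPRT. The sketch below gives the generic log-likelihood-ratio form for a Gaussian-distributed traffic feature; all distribution parameters and error targets are assumed for illustration.

import math

# Assumed illustrative models of a traffic feature (e.g., normalized bytes/s):
MU0, MU1, SIGMA = 0.0, 1.0, 1.0    # benign mean, attack mean, shared std dev
ALPHA, BETA = 0.01, 0.01           # target false-alarm and miss probabilities
A = math.log((1 - BETA) / ALPHA)   # upper threshold: accept H1 (attack)
B = math.log(BETA / (1 - ALPHA))   # lower threshold: accept H0 (benign)

def sprt(samples):
    """Return ('attack'|'benign'|'undecided', number of samples consumed)."""
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log p1(x)/p0(x) for equal-variance Gaussians:
        llr += (MU1 - MU0) * (x - (MU0 + MU1) / 2) / SIGMA ** 2
        if llr >= A:
            return "attack", n
        if llr <= B:
            return "benign", n
    return "undecided", len(samples)

print(sprt([0.9, 1.2, 0.8, 1.1, 1.3, 0.7, 1.0, 1.4]))

The appeal of the sequential form is exactly what the abstract claims: the test stops as soon as the evidence crosses either threshold, minimizing detection delay for given error rates.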
Earthquake Fingerprints: Representing Earthquake Waveforms for Similarity-Based Detection
NASA Astrophysics Data System (ADS)
Bergen, K.; Beroza, G. C.
2016-12-01
New earthquake detection methods, such as Fingerprint and Similarity Thresholding (FAST), use fast approximate similarity search to identify similar waveforms in long-duration data without templates (Yoon et al. 2015). These methods have two key components: fingerprint extraction and an efficient search algorithm. Fingerprint extraction converts waveforms into fingerprints, compact signatures that represent short-duration waveforms for identification and search. Earthquakes are detected using an efficient indexing and search scheme, such as locality-sensitive hashing, that identifies similar waveforms in a fingerprint database. The quality of the search results, and thus the earthquake detection results, is strongly dependent on the fingerprinting scheme. Fingerprint extraction should map similar earthquake waveforms to similar waveform fingerprints to ensure a high detection rate, even under additive noise and small distortions. Additionally, fingerprints corresponding to noise intervals should have mutually dissimilar fingerprints to minimize false detections. In this work, we compare the performance of multiple fingerprint extraction approaches for the earthquake waveform similarity search problem. We apply existing audio fingerprinting (used in content-based audio identification systems) and time series indexing techniques and present modified versions that are specifically adapted for seismic data. We also explore data-driven fingerprinting approaches that can take advantage of labeled or unlabeled waveform data. For each fingerprinting approach we measure its ability to identify similar waveforms in a low signal-to-noise setting, and quantify the trade-off between true and false detection rates in the presence of persistent noise sources. We compare the performance using known event waveforms from eight independent stations in the Northern California Seismic Network.
The role of pore geometry in single nanoparticle detection
Davenport, Matthew; Healy, Ken; Pevarnik, Matthew; ...
2012-08-22
In this study, we observe single nanoparticle translocation events via resistive pulse sensing using silicon nitride pores described by a range of lengths and diameters. Pores are prepared by focused ion beam milling in 50 nm-, 100 nm-, and 500 nm-thick silicon nitride membranes with diameters fabricated to accommodate spherical silica nanoparticles with sizes chosen to mimic those of virus particles. In this manner, we are able to characterize the role of pore geometry in three key components of the detection scheme, namely, event magnitude, event duration, and event frequency. We find that the electric field created by the applied voltage and the pore's geometry is a critical factor. We develop approximations to describe this field, which are verified with computer simulations, and interactions between particles and this field. In so doing, we formulate what we believe to be the first approximation for the magnitude of ionic current blockage that explicitly addresses the invariance of access resistance of solid-state pores during particle translocation. These approximations also provide a suitable foundation for estimating the zeta potential of the particles and/or pore surface when studied in conjunction with event durations. We also verify that translocation achieved by electro-osmotic transport is an effective means of slowing the translocation velocities of highly charged particles without compromising particle capture rate as compared to more traditional approaches based on electrophoretic transport.
NASA Astrophysics Data System (ADS)
Wong, Elaine; Nadarajah, Nishaanthan; Chae, Chang-Joon; Nirmalathas, Ampalavanapillai; Attygalle, Sanjeewa M.
2006-01-01
We describe two optical-layer schemes which simultaneously facilitate local area network emulation and automatic protection switching against distribution fiber breaks in passive optical networks. One scheme employs a narrowband fiber Bragg grating placed close to the star coupler in the feeder fiber of the passive optical network, while the other uses an additional short-length distribution fiber from the star coupler to each customer for the redirection of the customer traffic. Both schemes use RF subcarrier multiplexed transmission for intercommunication between customers in conjunction with upstream access to the central office at baseband. Failure detection and automatic protection switching are performed independently by each optical network unit that is located at the customer premises in a distributed manner. The restoration of traffic transported between the central office and an optical network unit in the event of a distribution fiber break is performed by interconnecting adjacent optical network units and carrying out signal transmissions via an independent but interconnected optical network unit. Such a protection mechanism enables multiple adjacent optical network units to be simultaneously protected by a single optical network unit utilizing its maximum available bandwidth. We experimentally verify the feasibility of both schemes with 1.25 Gb/s upstream baseband transmission to the central office and 155 Mb/s local area network data transmission on an RF subcarrier frequency. The experimental results obtained from both schemes are compared, and the power budgets are calculated to analyze the scalability of each scheme.
NASA Astrophysics Data System (ADS)
Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.
2018-02-01
Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can result in long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FP interferometer (FPI) sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made of the SNR of PA signals detected using 1) a photodiode in a conventional raster-scanning detection scheme and 2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.
Asynchronous discrete event schemes for PDEs
NASA Astrophysics Data System (ADS)
Stone, D.; Geiger, S.; Lord, G. J.
2017-08-01
A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
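To make the mass-quanta mechanism concrete, here is a minimal event-driven sketch for 1-D diffusion on a finite volume grid (our illustration, not the authors' code): each face is assigned a next-event time inversely proportional to its current flux, and firing an event moves one quantum of mass across that face.

import heapq

N, D, dx = 20, 1.0, 1.0        # number of cells, diffusivity, cell size
quantum = 0.01                  # mass moved per event (controls accuracy)
m = [0.0] * N
m[N // 2] = 1.0                 # initial mass pulse in the central cell

def face_rate(f):
    """Diffusive flux magnitude across face f (between cells f and f+1)."""
    return D * abs(m[f] - m[f + 1]) / dx

def schedule(queue, f, now):
    r = face_rate(f)
    if r > 0:
        heapq.heappush(queue, (now + quantum / r, f))  # timescale ~ quantum/flux

t, queue = 0.0, []
for f in range(N - 1):
    schedule(queue, f, 0.0)

while queue and t < 5.0:
    t, f = heapq.heappop(queue)
    hi, lo = (f, f + 1) if m[f] > m[f + 1] else (f + 1, f)
    dm = min(quantum, m[hi])    # move one quantum down the current gradient
    m[hi] -= dm
    m[lo] += dm
    # Stale queue entries are tolerated in this sketch: the transfer direction
    # is recomputed at firing time, so they only add harmless extra events.
    for g in {max(f - 1, 0), f, min(f + 1, N - 2)}:
        schedule(queue, g, t)

print(f"t = {t:.3f}, total mass = {sum(m):.4f}")

Note the locality: firing one face only requires rescheduling its immediate neighbors, which is what makes such schemes self-adaptive in both time and space.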
PHACK: An Efficient Scheme for Selective Forwarding Attack Detection in WSNs.
Liu, Anfeng; Dong, Mianxiong; Ota, Kaoru; Long, Jun
2015-12-09
In this paper, a Per-Hop Acknowledgement (PHACK)-based scheme is proposed for each packet transmission to detect selective forwarding attacks. In our scheme, the sink and each node along the forwarding path generate an acknowledgement (ACK) message for each received packet to confirm the normal packet transmission. The scheme, in which each ACK is returned to the source node along a different routing path, can significantly increase the resilience against attacks because it prevents an attacker from compromising nodes in the return routing path, which can otherwise interrupt the return of nodes' ACK packets. For this case, the PHACK scheme also has better potential to detect abnormal packet loss and identify suspect nodes as well as better resilience against attacks. Another pivotal issue is the network lifetime of the PHACK scheme, as it generates more acknowledgements than previous ACK-based schemes. We demonstrate that the network lifetime of the PHACK scheme is not lower than that of other ACK-based schemes because the scheme just increases the energy consumption in non-hotspot areas and does not increase the energy consumption in hotspot areas. Moreover, the PHACK scheme greatly simplifies the protocol and is easy to implement. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high detection probability and the ability to identify suspect nodes.
NASA Astrophysics Data System (ADS)
2011-09-01
Competition: Physics Olympiad hits Thailand
Report: Institute carries out survey into maths in physics at university
Event: A day for everyone teaching physics
Conference: Welsh conference celebrates birthday
Schools: Researchers in Residence scheme set to close
Teachers: A day for new physics teachers
Social: Network combines fun and physics
Forthcoming events
Intelligent Power Swing Detection Scheme to Prevent False Relay Tripping Using S-Transform
NASA Astrophysics Data System (ADS)
Mohamad, Nor Z.; Abidin, Ahmad F.; Musirin, Ismail
2014-06-01
Distance relay designs are equipped with an out-of-step tripping scheme to ensure correct distance relay operation during power swings. The out-of-step condition results from an unstable power swing. It requires proper detection of the power swing to initiate a tripping signal, followed by separation of the unstable part from the rest of the power system. Distinguishing an unstable swing from a stable swing poses a challenging task. This paper presents an intelligent approach to detect power swings based on the S-Transform signal processing tool. The proposed scheme is based on the use of the S-Transform feature of active power at the distance relay measurement point. It is demonstrated that the proposed scheme is able to detect and discriminate unstable swings from stable swings occurring in the system. To ascertain the validity of the proposed scheme, simulations were carried out with the IEEE 39-bus system and its performance has been compared with a wavelet transform-based power swing detection scheme.
Parametric Amplification For Detecting Weak Optical Signals
NASA Technical Reports Server (NTRS)
Hemmati, Hamid; Chen, Chien; Chakravarthi, Prakash
1996-01-01
Optical-communication receivers of proposed type implement high-sensitivity scheme of optical parametric amplification followed by direct detection for reception of extremely weak signals. Incorporates both optical parametric amplification and direct detection into optimized design enhancing effective signal-to-noise ratios during reception in photon-starved (photon-counting) regime. Eliminates need for complexity of heterodyne detection scheme and partly overcomes limitations imposed on older direct-detection schemes by noise generated in receivers and by limits on quantum efficiencies of photodetectors.
Adaptive Detection and ISI Mitigation for Mobile Molecular Communication.
Chang, Ge; Lin, Lin; Yan, Hao
2018-03-01
Current studies on modulation and detection schemes in molecular communication mainly focus on the scenarios with static transmitters and receivers. However, mobile molecular communication is needed in many envisioned applications, such as target tracking and drug delivery. Until now, investigations about mobile molecular communication have been limited. In this paper, a static transmitter and a mobile bacterium-based receiver performing random walk are considered. In this mobile scenario, the channel impulse response changes due to the dynamic change of the distance between the transmitter and the receiver. Detection schemes based on fixed distance fail in signal detection in such a scenario. Furthermore, the intersymbol interference (ISI) effect becomes more complex due to the dynamic character of the signal which makes the estimation and mitigation of the ISI even more difficult. In this paper, an adaptive ISI mitigation method and two adaptive detection schemes are proposed for this mobile scenario. In the proposed scheme, adaptive ISI mitigation, estimation of dynamic distance, and the corresponding impulse response reconstruction are performed in each symbol interval. Based on the dynamic channel impulse response in each interval, two adaptive detection schemes, concentration-based adaptive threshold detection and peak-time-based adaptive detection, are proposed for signal detection. Simulations demonstrate that the ISI effect is significantly reduced and the adaptive detection schemes are reliable and robust for mobile molecular communication.
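A minimal sketch of the idea, assuming a purely diffusive channel: the receiver re-estimates the transmitter-receiver distance each symbol interval, rebuilds the expected concentration pulse from the 3-D diffusion Green's function, and scales its decision threshold to the predicted peak. All parameter values are illustrative.

import math

D = 1e-10        # diffusion coefficient (m^2/s), assumed
N_TX = 1e5       # molecules released per bit '1', assumed

def concentration(d, t, n=N_TX):
    """Expected concentration at distance d, time t after an impulsive
    release of n molecules (3-D free-diffusion Green's function)."""
    return n / (4 * math.pi * D * t) ** 1.5 * math.exp(-d * d / (4 * D * t))

def peak(d, n=N_TX):
    """Peak concentration and its arrival time (t* = d^2 / 6D in 3-D)."""
    t_star = d * d / (6 * D)
    return concentration(d, t_star, n), t_star

def adaptive_threshold(d_estimate, fraction=0.5):
    """Decision threshold as an assumed fraction of the predicted peak
    at the current distance estimate (re-computed every symbol interval)."""
    c_peak, _ = peak(d_estimate)
    return fraction * c_peak

for d in (20e-6, 40e-6):   # receiver drifting away from the transmitter
    print(f"d = {d * 1e6:.0f} um -> threshold {adaptive_threshold(d):.3e}")

The key point the example makes is that a threshold fixed for one distance is wrong for another: the predicted peak drops steeply with distance, so the threshold must track the distance estimate.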
A threshold-based fixed predictor for JPEG-LS image compression
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the locality of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges, but also diagonal edges. For certain thresholds, the proposed scheme simplifies to other existing schemes, so it can also be regarded as an integration of these existing schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
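For context, the standard JPEG-LS MED fixed predictor that the proposal generalizes is only a few lines; a, b and c denote the left, above and above-left neighbors of the current pixel. This is the textbook rule, shown here for reference.

def med_predict(a, b, c):
    """JPEG-LS median edge detector (MED) fixed predictor.
    a: left neighbor, b: above neighbor, c: above-left neighbor."""
    if c >= max(a, b):
        return min(a, b)      # edge detected: predict the smaller neighbor
    if c <= min(a, b):
        return max(a, b)      # edge in the other direction: the larger one
    return a + b - c          # smooth region: planar prediction

print(med_predict(100, 110, 105))   # smooth region -> planar value 105
print(med_predict(100, 50, 100))    # edge detected -> 50

Because the rule keys only on the relative order of a, b and c, it reacts to horizontal and vertical edges but not diagonal ones, which is exactly the gap the threshold-based scheme above targets.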
Wu, Chunxue; Wu, Wenliang; Wan, Caihua
2017-01-01
Sensors are increasingly used in mobile environments with wireless network connections. Multiple sensor types measure distinct aspects of the same event. Their measurements are then combined to produce integrated, reliable results. As the number of sensors in networks increases, low energy requirements and changing network connections complicate event detection and measurement. We present a data fusion scheme for use in mobile wireless sensor networks with high energy efficiency and low network delays that still produces reliable results. In the first phase, we used a network simulation in which mobile agents dynamically select the next-hop migration node based on the stability parameter of the link and perform the data fusion at the migration node. Agents use the fusion results to decide whether to return the results to the processing center or continue to collect more data. In the second phase, the feasibility of data fusion at the node level was confirmed by an experimental design in which fused data from color sensors showed near-identical results to actual physical temperatures. These results are potentially important for new large-scale sensor network applications. PMID:29099793
NASA Technical Reports Server (NTRS)
Wang, Shuguang; Sobel, Adam H.; Fridlind, Ann; Feng, Zhe; Comstock, Jennifer M.; Minnis, Patrick; Nordeen, Michele L.
2015-01-01
The recently completed CINDY/DYNAMO field campaign observed two Madden-Julian oscillation (MJO) events in the equatorial Indian Ocean from October to December 2011. Prior work has indicated that the moist static energy anomalies in these events grew and were sustained to a significant extent by radiative feedbacks. We present here a study of radiative fluxes and clouds in a set of cloud-resolving simulations of these MJO events. The simulations are driven by the large-scale forcing data set derived from the DYNAMO northern sounding array observations, and carried out in a doubly periodic domain using the Weather Research and Forecasting (WRF) model. Simulated cloud properties and radiative fluxes are compared to those derived from the S-PolKa radar and satellite observations. To accommodate the uncertainty in simulated cloud microphysics, a number of single-moment (1M) and double-moment (2M) microphysical schemes in the WRF model are tested. The 1M schemes tend to underestimate radiative flux anomalies in the active phases of the MJO events, while the 2M schemes perform better, but can overestimate radiative flux anomalies. All the tested microphysics schemes exhibit biases in the shapes of the histograms of radiative fluxes and radar reflectivity. Histograms of radiative fluxes and brightness temperature indicate that radiative biases are not evenly distributed; the most significant bias occurs in rainy areas with OLR less than 150 W m-2 in the 2M schemes. Analysis of simulated radar reflectivities indicates that this radiative flux uncertainty is closely related to the simulated stratiform cloud coverage. Single-moment schemes underestimate stratiform cloudiness by a factor of 2, whereas 2M schemes simulate much more stratiform cloud.
Research on time synchronization scheme of MES systems in manufacturing enterprise
NASA Astrophysics Data System (ADS)
Yuan, Yuan; Wu, Kun; Sui, Changhao; Gu, Jin
2018-04-01
With the spread of informatization and automated production in manufacturing enterprises, data interaction between business systems is increasingly frequent, and the demands on time accuracy are correspondingly higher. However, NTP network time synchronization methods lack corresponding redundancy and monitoring mechanisms: when a failure occurs, it can only be remedied after the event, which strongly affects production data and system interaction. To address this, the paper proposes an RHCS-based NTP server architecture that automatically detects NTP status and performs failover via scripts.
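A minimal monitoring sketch in the spirit of the proposed architecture, using the third-party ntplib package to query server offsets; the offset threshold and the promote_backup() hook are hypothetical, and the RHCS clustering logic itself is not reproduced here.

```python
import ntplib

MAX_OFFSET = 0.1  # seconds; hypothetical health threshold

def ntp_healthy(host):
    """Return True when the NTP server answers and its offset is sane."""
    try:
        response = ntplib.NTPClient().request(host, version=3, timeout=2)
        return abs(response.offset) < MAX_OFFSET
    except (ntplib.NTPException, OSError):
        return False

def monitor(primary, backup, promote_backup):
    """Fail over to the backup server when the primary goes bad.
    promote_backup is a hypothetical hook (e.g., a script that repoints
    clients or reconfigures the cluster)."""
    if not ntp_healthy(primary) and ntp_healthy(backup):
        promote_backup(backup)
```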
Quantum Logic with Cavity Photons From Single Atoms.
Holleczek, Annemarie; Barter, Oliver; Rubenok, Allison; Dilley, Jerome; Nisbet-Jones, Peter B R; Langfahl-Klabes, Gunnar; Marshall, Graham D; Sparrow, Chris; O'Brien, Jeremy L; Poulios, Konstantinos; Kuhn, Axel; Matthews, Jonathan C F
2016-07-08
We demonstrate quantum logic using narrow linewidth photons that are produced with an a priori nonprobabilistic scheme from a single ^{87}Rb atom strongly coupled to a high-finesse cavity. We use a controlled-not gate integrated into a photonic chip to entangle these photons, and we observe nonclassical correlations between photon detection events separated by periods exceeding the travel time across the chip by 3 orders of magnitude. This enables quantum technology that will use the properties of both narrow-band single photon sources and integrated quantum photonics.
NASA Astrophysics Data System (ADS)
Goertz-Allmann, B. P.; Oye, V.
2015-12-01
The occurrence of induced and triggered microseismicity is of increasing concern to the general public. The underlying human causes are numerous and include hydrocarbon production and geological storage of CO2. The concerns of induced seismicity are the potential hazards from large seismic events and the creation of fluid pathways. However, microseismicity is also a unique tool to gather information about real-time changes in the subsurface, a fact generally ignored by the public. The ability to detect, locate and characterize microseismic events provides a snapshot of the stress conditions within and around a geological reservoir. In addition, data on rapid stress changes (i.e. microseismic events) can be used as input to hydro-mechanical models, often used to map fluid propagation. In this study, we investigate the impact of microseismic event location accuracy using surface seismic stations in addition to downhole geophones. Due to signal-to-noise conditions and the small magnitudes inherent in microseismicity, downhole systems detect significantly more events with better precision of phase arrival times than surface networks. However, downhole systems are often limited in their ability to obtain the large observational apertures required for accurate locations. We therefore jointly locate the largest microseismic events using surface and downhole data. This requires careful evaluation of the weighting of input data when inverting for the event location. For the smaller events only observed on the downhole geophones, we define event clusters using waveform cross-correlation methods. We apply this methodology to microseismic data collected in the Illinois Basin-Decatur Project. A previous study revealed over 10,000 events detected by the downhole sensors. In our analysis, we include up to 12 surface sensors, installed by the USGS. The weighting scheme for this assembly of data needs to take into account significant uncertainties in the near-surface velocity structure. The re-located event clusters allow an investigation of systematic spatio-temporal variations of source parameters (e.g. stress drop) and statistical parameters (e.g. b-value). We examine these observations together with injection parameters to deduce constraints on the long-term stability of the injection site.
Large Area Flat Panel Imaging Detectors for Astronomy and Night Time Sensing
NASA Astrophysics Data System (ADS)
Siegmund, O.; McPhate, J.; Frisch, H.; Elam, J.; Mane, A.; Wagner, R.; Varner, G.
2013-09-01
Sealed tube photo-sensing detectors for optical/IR detection have applications in astronomy, nighttime remote reconnaissance, and airborne/space situational awareness. The potential development of large area photon counting, imaging, timing detectors has significance for these applications and a number of other areas (high energy particle detection (RICH), biological single-molecule fluorescence lifetime imaging microscopy, neutron imaging, time of flight mass spectroscopy, diffraction imaging). We will present details of progress towards the development of a 20 cm sealed tube optical detector with nanoengineered microchannel plates for photon counting, imaging and sub-ns event time stamping. In the operational scheme of the photodetector, incoming light passes through an entrance window and interacts with a semitransparent photocathode on the inside of the window. The photoelectrons emitted are accelerated across a proximity gap and are detected by an MCP pair. The pair of novel borosilicate-substrate MCPs, functionalized by atomic layer deposition (ALD), amplify the signal, and the resulting electron cloud is detected by a conductive strip-line anode for determination of the event positions and times of arrival. The physical package is ~ 25 x 25 cm but only 1.5 cm thick. Development of such a device in a square 20 cm format presents challenges: hermetic sealing to a large entrance window, a 20 cm semitransparent photocathode with good efficiency and uniformity, 20 cm MCPs with reasonable cost and performance, robust construction to preserve high vacuum and withstand an atmospheric pressure differential. We will discuss the schemes developed to address these issues and present the results for the first test devices. The novel microchannel plates employing borosilicate micro-capillary arrays provide many performance characteristics typical of conventional MCPs, but have been made in sizes up to 20 cm, have low intrinsic background (0.08 events cm-2 s-1) and have very stable gain behavior over > 7 C cm-2 of charge extracted. They are high temperature compatible and have minimal outgassing, which shortens and simplifies the sealed tube production process and should improve overall lifetimes. Bialkali (NaKSb) semitransparent photocathodes with > 20% quantum efficiency have also been made on 20 cm borosilicate windows compatible with the window seals for the large sealed tube device. The photocathodes have good response uniformity and have been stable for > 5 months in testing. Tests with a 20 cm detector with a cross delay line readout have achieved ~50 µm FWHM imaging with single photon sub-ns timing and MHz event rates, and tests with a 10 x 10 cm detector with cross strip readout have achieved ~20 µm FWHM imaging with >4 MHz event rates with ~10% deadtime. We will discuss the details and implications of these novel detector implementations and their potential applications.
Meal-Insulin Cycle: A Visual Summary of the Biochemical Events between Meals
ERIC Educational Resources Information Center
Kalogiannis, Stavros
2017-01-01
In the present article, a scheme that summarizes the biochemical events occurring in the human body after the consumption of a meal is proposed. The scheme illustrates the metabolic sequence as a series of counteracting components occupying opposite positions in a cycle, indicating their opposite actions or physiological states, such as meal…
Fei, Zhongyang; Guan, Chaoxu; Gao, Huijun
2018-06-01
This paper is concerned with the exponential synchronization of a master-slave chaotic delayed neural network under an event-trigger control scheme. The model is established on a network control framework, where both external disturbance and network-induced delay are taken into consideration. The desired aim is to synchronize the master and slave systems with limited communication capacity and network bandwidth. In order to save network resources, we adopt a hybrid event-trigger approach, which not only reduces the number of data packets sent out, but also rules out the Zeno phenomenon. By using an appropriate Lyapunov functional, a sufficient criterion for stability is proposed for the error system with an extended dissipativity performance index. Moreover, the hybrid event-trigger scheme and controller are codesigned for the network-based delayed neural network to guarantee exponential synchronization between the master and slave systems. The effectiveness and potential of the proposed results are demonstrated through a numerical example.
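A minimal sketch of a generic event-trigger test of the kind used in such schemes: a sample is transmitted only when the error since the last transmission grows beyond a fraction of the current state norm. This is a standard rule for illustration, not the paper's exact hybrid condition.

```python
import numpy as np

def should_transmit(x_current, x_last_sent, sigma=0.1):
    """Generic relative-error event trigger (sigma is an assumed
    design parameter): send a new sample only when the trigger fires,
    saving network bandwidth between events."""
    e = x_current - x_last_sent
    return float(np.dot(e, e)) > sigma * float(np.dot(x_current, x_current))
```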
Stable Short-Term Frequency Support Using Adaptive Gains for a DFIG-Based Wind Power Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jinsik; Jang, Gilsoo; Muljadi, Eduard
For the fixed-gain inertial control of wind power plants (WPPs), a large gain setting provides a large contribution to supporting system frequency control, but it may cause over-deceleration for a wind turbine generator that has a small amount of kinetic energy (KE). Further, if the wind speed decreases during inertial control, even a small gain may cause over-deceleration. This paper proposes a stable inertial control scheme using adaptive gains for a doubly fed induction generator (DFIG)-based WPP. The scheme aims to improve the frequency nadir (FN) while ensuring stable operation of all DFIGs, particularly when the wind speed decreases during inertial control. In this scheme, adaptive gains are set to be proportional to the KE stored in DFIGs, which is spatially and temporally dependent. To improve the FN, upon detecting an event, large gains are set to be proportional to the KE of DFIGs; to ensure stable operation, the gains decrease with the declining KE. The simulation results demonstrate that the scheme improves the FN while ensuring stable operation of all DFIGs in various wind and system conditions. Further, it prevents over-deceleration even when the wind speed decreases during inertial control.
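A minimal sketch of the adaptive-gain idea: the inertial-control gain is set proportional to the kinetic energy releasable by each DFIG, so it shrinks as the rotor decelerates. The proportionality constant and the releasable-energy formulation are assumptions for illustration.

```python
def adaptive_gain(J, omega, omega_min, k=1.0):
    """Inertial-control gain proportional to releasable kinetic energy.

    J: rotor inertia (kg m^2); omega: rotor speed (rad/s);
    omega_min: minimum allowed speed protecting against over-deceleration;
    k: proportionality constant (an assumption, tuned per plant).
    """
    ke_releasable = 0.5 * J * max(omega**2 - omega_min**2, 0.0)
    return k * ke_releasable   # decays toward zero as omega -> omega_min
```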
Computerized Detection of Lung Nodules by Means of “Virtual Dual-Energy” Radiography
Chen, Sheng; Suzuki, Kenji
2014-01-01
Major challenges in current computer-aided detection (CADe) schemes for nodule detection in chest radiographs (CXRs) are to detect nodules that overlap with ribs and/or clavicles and to reduce the frequent false positives (FPs) caused by ribs. Detection of such nodules by a CADe scheme is very important, because radiologists are likely to miss such subtle nodules. Our purpose in this study was to develop a CADe scheme with improved sensitivity and specificity by use of “virtual dual-energy” (VDE) CXRs where ribs and clavicles are suppressed with massive-training artificial neural networks (MTANNs). To reduce rib-induced FPs and detect nodules overlapping with ribs, we incorporated the VDE technology in our CADe scheme. The VDE technology suppressed rib and clavicle opacities in CXRs while maintaining soft-tissue opacity by use of the MTANN technique that had been trained with real dual-energy imaging. Our scheme detected nodule candidates on VDE images by use of a morphologic filtering technique. Sixty morphologic and gray-level-based features were extracted from each candidate from both original and VDE CXRs. A nonlinear support vector classifier was employed for classification of the nodule candidates. A publicly available database containing 140 nodules in 140 CXRs and 93 normal CXRs was used for testing our CADe scheme. All nodules were confirmed by computed tomography examinations, and the average size of the nodules was 17.8 mm. Thirty percent (42/140) of the nodules were rated “extremely subtle” or “very subtle” by a radiologist. The original scheme without VDE technology achieved a sensitivity of 78.6% (110/140) with 5 (1165/233) FPs per image. By use of the VDE technology, more nodules overlapping with ribs or clavicles were detected and the sensitivity was improved substantially to 85.0% (119/140) at the same FP rate in a leave-one-out cross-validation test, whereas the FP rate was reduced to 2.5 (583/233) per image at the same sensitivity level as the original CADe scheme obtained (Difference between the specificities of the original and the VDE-based CADe schemes was statistically significant). In particular, the sensitivity of our VDE-based CADe scheme for subtle nodules (66.7% = 28/42) was statistically significantly higher than that of the original CADe scheme (57.1% = 24/42). Therefore, by use of VDE technology, the sensitivity and specificity of our CADe scheme for detection of nodules, especially subtle nodules, in CXRs were improved substantially. PMID:23193306
Song, Guo-Zhu; Wu, Fang-Zhou; Zhang, Mei; Yang, Guo-Jian
2016-06-28
Quantum repeater is the key element in quantum communication and quantum information processing. Here, we investigate the possibility of achieving a heralded quantum repeater based on the scattering of photons off single emitters in one-dimensional waveguides. We design the compact quantum circuits for nonlocal entanglement generation, entanglement swapping, and entanglement purification, and discuss the feasibility of our protocols with current experimental technology. In our scheme, we use a parametric down-conversion source instead of ideal single-photon sources to realize the heralded quantum repeater. Moreover, our protocols can turn faulty events into the detection of photon polarization, and the fidelity can reach 100% in principle. Our scheme is attractive and scalable, since it can be realized with artificial solid-state quantum systems. With developed experimental technique on controlling emitter-waveguide systems, the repeater may be very useful in long-distance quantum communication.
Computerized scheme for vertebra detection in CT scout image
NASA Astrophysics Data System (ADS)
Guo, Wei; Chen, Qiang; Zhou, Hanxun; Zhang, Guodong; Cong, Lin; Li, Qiang
2016-03-01
Our purpose was to develop a vertebra detection scheme for automated scan planning, which would assist radiological technologists in their routine work of imaging vertebrae. Because the orientations of the vertebrae vary, and Haar-like features represent structure only in the vertical, horizontal, or diagonal directions, we rotated the CT scout image seven times so that the vertebrae were roughly horizontal in at least one of the rotated images. Then, we employed the AdaBoost learning algorithm to construct a strong classifier for vertebra detection by use of Haar-like features, and combined the detection results in overlapping regions according to the number of times they were detected. Finally, most of the false positives were removed by use of the contextual relationship between them. The detection scheme was evaluated on a database of 76 CT scout images. Our detection scheme reported 1.65 false positives per image at a sensitivity of 94.3% for initial detection of vertebral candidates, and the performance was then improved to 0.95 false positives per image at a sensitivity of 98.6% by the further false-positive reduction steps. The proposed scheme achieved a high performance for the detection of vertebrae with different orientations.
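A minimal sketch of the rotate-and-detect loop described above, assuming a trained orientation-sensitive classifier is available; the detect callable and the eight equally spaced orientations are assumptions, and the merging of overlapping detections is omitted.

```python
import numpy as np
from scipy.ndimage import rotate

def detect_over_rotations(scout, detect, n_rot=8):
    """Run an orientation-sensitive detector over rotated copies of a CT
    scout image so vertebrae are roughly horizontal in at least one copy.

    detect: placeholder for the trained AdaBoost/Haar classifier, returning
    candidate detections for one image. Returns (angle, detections) pairs
    for later merging of overlapping results."""
    results = []
    for angle in np.linspace(0.0, 180.0, n_rot, endpoint=False):
        rotated = rotate(scout, angle, reshape=False, order=1)
        results.append((angle, detect(rotated)))
    return results
```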
Improved Readout Scheme for SQUID-Based Thermometry
NASA Technical Reports Server (NTRS)
Penanen, Konstantin
2007-01-01
An improved readout scheme has been proposed for high-resolution thermometers (HRTs) based on the use of superconducting quantum interference devices (SQUIDs) to measure temperature-dependent magnetic susceptibilities. The proposed scheme would eliminate counting ambiguities that arise in the conventional scheme, while maintaining the superior magnetic-flux sensitivity of the conventional scheme. The proposed scheme is expected to be especially beneficial for HRT-based temperature control of multiplexed SQUID-based bolometer sensor arrays. SQUID-based HRTs have become standard for measuring and controlling temperatures in the sub-nano-Kelvin temperature range in a broad range of low-temperature scientific and engineering applications. A typical SQUID-based HRT that utilizes the conventional scheme includes a coil wound on a core made of a material that has temperature-dependent magnetic susceptibility in the temperature range of interest. The core and the coil are placed in a DC magnetic field provided either by a permanent magnet or as magnetic flux inside a superconducting outer wall. The aforementioned coil is connected to an input coil of a SQUID. Changes in temperature lead to changes in the susceptibility of the core and to changes in the magnetic flux detected by the SQUID. The SQUID readout instrumentation is capable of measuring magnetic-flux changes that correspond to temperature changes down to a noise limit of ~0.1 nK/√Hz. When the flux exceeds a few fundamental flux units, which typically corresponds to a temperature of ~100 nK, the SQUID is reset. The temperature range can be greatly expanded if the reset events are carefully tracked and counted, either by a computer running appropriate software or by a dedicated piece of hardware.
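A minimal sketch of the counting problem the proposed scheme addresses: with a conventional resetting readout, the continuous flux record must be reconstructed by detecting and accumulating reset events, and any miscounted reset corrupts every later sample. The reset span constant is illustrative.

```python
import numpy as np

def unwrap_resets(signal, reset_span):
    """Reconstruct a continuous flux record from a resetting SQUID readout.

    signal: sampled output that jumps by ~reset_span at each reset event
    reset_span: flux removed per reset (illustrative constant)
    A missed or double-counted jump shifts every subsequent sample, which
    is the counting ambiguity the proposed readout scheme eliminates."""
    steps = np.diff(signal)
    jumps = np.round(steps / reset_span) * reset_span  # ~0 except at resets
    return signal - np.concatenate([[0.0], np.cumsum(jumps)])
```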
NASA Astrophysics Data System (ADS)
Jolly, Gill; Sandri, Laura; Lindsay, Jan; Scott, Brad; Sherburn, Steve; Jolly, Art; Fournier, Nico; Keys, Harry; Marzocchi, Warner
2010-05-01
The Bayesian Event Tree for Eruption Forecasting software (BET_EF) is a probabilistic model based on an event tree scheme that was created specifically to compute long- and short-term probabilities of different outcomes (volcanic unrest, magmatic unrest, eruption, vent location and eruption size) at long-dormant and routinely monitored volcanoes. It is based on the assumption that upward movements of magma in a closed-conduit volcano will produce detectable changes in the monitored parameters at the surface. In this perspective, the goal of BET_EF is to compute probabilities by merging information from geology, models, past data and present monitoring measurements, through a Bayesian inferential method. In the present study, we attempt to apply BET_EF to Mt Ruapehu, a very active and well-monitored volcano exhibiting the typical features of open-conduit volcanoes. In such conditions, current monitoring at the surface is not necessarily able to detect short-term changes at depth that may occur only seconds to minutes before an eruption. This results in so-called "blue sky eruptions" of Mt Ruapehu (for example in September 2007), which are volcanic eruptions apparently not preceded by any presently detectable signal in the current monitoring. A further complication at Mt Ruapehu arises from the well-developed hydrothermal system and the permanent crater lake sitting on top of the magmatic conduit. Both the hydrothermal system and crater lake may act to mask or change monitoring signals (if present) that magma produces deeper in the edifice. Notwithstanding these potential drawbacks, we think that an attempt to apply BET_EF at Ruapehu is worthwhile, for several reasons. First, with the exception of a few "blue sky" events, monitoring data at Mt Ruapehu can be helpful in forecasting major events, especially if a large amount of magma is intruded into the edifice and becomes available for phreatomagmatic or magmatic eruptions, as for example in 1995-96. Secondly, in setting up BET_EF for Mt Ruapehu we are forced to define quantitatively what the background activity is. This will result in a quantitative evaluation of which changes in long-term monitored parameters may influence the probability of future eruptions. The slopes of Mt Ruapehu host the largest ski area in the North Island of New Zealand. Lahars have been generated as a result of several eruptions in the last 50 years, and some of these have reached the ski runs in a very short time frame (around 90 seconds from the beginning of the eruption). In the light of these potentially hazardous lahars, we use the output probabilities provided by BET_EF in a practical and rational decision scheme recently proposed by Marzocchi and Woo (2009) based on a cost/benefit analysis (CBA). In such a scheme, a C/L ratio is computed, based on the costs (C) of practical mitigation actions to reduce risk (e.g., a public warning scheme and other means of raising awareness, and a call for a temporary and/or partial closure of the ski area) and on the potential loss (L) if no mitigation action is taken and an eruption occurs, causing lahars down the ski fields. By comparing the probability of eruption-driven lahars and the C/L ratio, it is possible to define the most rational mitigation actions that can be taken to reduce the risk to skiers, snowboarders and staff on the ski field.
As BET_EF probability of eruption changes dynamically as updated monitoring data are received, the authorities can decide, at any specific point in time, what is the best action according to the current monitoring of the volcano. In this respect, CBA represents a bridge linking scientific output (probabilities) and Decision Makers (practical mitigation actions).
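A minimal sketch of the cost/benefit rule described above: mitigation is rational when the event probability exceeds the cost/loss ratio. The numbers in the example are placeholders, not figures from the study.

```python
def mitigate(p_event, cost, loss):
    """Cost/benefit decision rule: act when the expected avoided loss
    (p_event * loss) exceeds the cost of mitigation, i.e. p_event > C/L."""
    return p_event > cost / loss

# Placeholder numbers: closing part of the ski area costs 0.2 M$, while an
# unmitigated eruption-driven lahar would cause a 50 M$ loss.
print(mitigate(p_event=0.01, cost=0.2e6, loss=50e6))  # True: 0.01 > 0.004
```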
Practical scheme for optimal measurement in quantum interferometric devices
NASA Astrophysics Data System (ADS)
Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide
2003-06-01
We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.
USDA-ARS?s Scientific Manuscript database
The dust emission scheme of Shao (2004) has been implemented into the regional atmospheric model COSMO-ART and has been applied to a severe dust event in northeastern Germany on 8th April 2011. The model sensitivity to soil moisture and vegetation cover has been studied. Soil moisture has been found...
Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations
NASA Astrophysics Data System (ADS)
Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.
2017-12-01
Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
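A minimal sketch of the Weighted Ensemble resampling step mentioned above, assuming trajectories are binned by some progress coordinate (e.g., simulated TC intensity); the split/merge rules follow the standard algorithm, and all names are illustrative.

```python
import random

def resample_bin(walkers, target):
    """Resample one bin's trajectories to `target` walkers, conserving weight.

    walkers: list of (state, weight) tuples in one progress-coordinate bin.
    Merging keeps one of the two lightest walkers with probability
    proportional to weight; splitting halves the heaviest walker."""
    walkers = list(walkers)
    while len(walkers) > target:            # too many: merge lightest pair
        walkers.sort(key=lambda w: w[1])
        (s1, w1), (s2, w2) = walkers[0], walkers[1]
        keep = s1 if random.random() < w1 / (w1 + w2) else s2
        walkers = [(keep, w1 + w2)] + walkers[2:]
    while 0 < len(walkers) < target:        # too few: split heaviest
        walkers.sort(key=lambda w: w[1])
        s, w = walkers.pop()
        walkers.extend([(s, w / 2.0), (s, w / 2.0)])
    return walkers
```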
Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications
Pérez-Torres, Rafael; Torres-Huitzil, César; Galeana-Zapién, Hiram
2016-01-01
The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications as a way to adapt the information and services provided to smartphone users according to their moving patterns. Location-based applications usually employ the GPS receiver along with Wi-Fi hot-spots and cellular cell tower mechanisms for estimating user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay points detection. Such a Mobile Cloud Computing approach has been successfully employed for extending the smartphone’s battery lifetime by offloading computation, under the assumption that on-device stay points detection is prohibitive. In this article, we propose and validate the feasibility of an alternative event-driven mechanism for stay points detection that is executed fully on-device, and that provides higher energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, where a stream of GPS location updates is collected in the background, supporting duty cycling schemes, and incrementally analyzed following an event-driven paradigm for stay points detection. To evaluate the performance of the proposed middleware, real-world experiments were conducted under different stress levels, validating its power efficiency when compared against a Mobile Cloud Computing oriented solution. PMID:27754388
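A minimal sketch of a classic stay-point extraction rule consistent with the description above: a stay point is declared when the user remains within a distance threshold for at least a time threshold. The thresholds and the flat-earth distance approximation are assumptions, and the incremental, event-driven formulation of the middleware is not reproduced.

```python
import math

def stay_points(track, d_max=200.0, t_min=20 * 60):
    """track: time-ordered list of (lat, lon, t_seconds).
    Returns mean positions of segments where the user stayed within
    d_max meters for at least t_min seconds."""
    def dist(p, q):  # small-displacement flat-earth approximation
        dy = (p[0] - q[0]) * 111_320.0
        dx = (p[1] - q[1]) * 111_320.0 * math.cos(math.radians(p[0]))
        return math.hypot(dx, dy)

    points, i = [], 0
    while i < len(track):
        j = i + 1
        while j < len(track) and dist(track[i], track[j]) <= d_max:
            j += 1
        if track[j - 1][2] - track[i][2] >= t_min:   # stayed long enough
            seg = track[i:j]
            points.append((sum(p[0] for p in seg) / len(seg),
                           sum(p[1] for p in seg) / len(seg)))
        i = j
    return points
```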
Electromechanical Displacement Detection With an On-Chip High Electron Mobility Transistor Amplifier
NASA Astrophysics Data System (ADS)
Oda, Yasuhiko; Onomitsu, Koji; Kometani, Reo; Warisawa, Shin-ichi; Ishihara, Sunao; Yamaguchi, Hiroshi
2011-06-01
We developed a highly sensitive displacement detection scheme for a GaAs-based electromechanical resonator using an integrated high electron mobility transistor (HEMT). Piezoelectric voltage generated by the vibration of the resonator is applied to the gate of the HEMT, resulting in the on-chip amplification of the signal voltage. This detection scheme achieves a displacement sensitivity of ~9 pm·Hz^(-1/2), which is one of the highest among on-chip purely electrical displacement detection schemes at room temperature.
NASA Astrophysics Data System (ADS)
Alkilani, Amjad; Shirkhodaie, Amir
2013-05-01
Handling, manipulation, and placement of objects, hereon called Human-Object Interaction (HOI), in the environment generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environment noises, recognition of minute HOI sounds is challenging, though vital for improvement of multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can be used as a precursor to detection of pertinent threats that other sensor modalities may otherwise miss. In this paper, we present a robust method for detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are preliminarily identified and segmented from background via a sound-energy tracking method. Upon this segmentation, the frequency spectral pattern of each sound event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the training feature space, a Principal Component Analysis (PCA) technique is employed. To expedite classification of test feature vectors, kd-tree and Random Forest classifiers are trained on the training sound waves; each classifier employs a different similarity distance-matching technique for classification. The performance of the classifiers is compared for classification of a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on the Transducer Markup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective, and can be extended to future PSS applications.
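A minimal sketch of the classification stage under stated assumptions: feature vectors have already been extracted from segmented sound events, and scikit-learn stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def train_classifiers(X, y, n_components=20):
    """X: (n_events, n_features) spectral feature vectors; y: HOI labels."""
    pca = PCA(n_components=n_components).fit(X)   # reduce dimensionality
    Xp = pca.transform(X)
    knn = KNeighborsClassifier(n_neighbors=3, algorithm="kd_tree").fit(Xp, y)
    rf = RandomForestClassifier(n_estimators=100).fit(Xp, y)
    return pca, knn, rf

def classify(pca, clf, x):
    """Project one test feature vector and classify it."""
    return clf.predict(pca.transform(np.atleast_2d(x)))[0]
```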
Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E
2007-01-01
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.
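As one concrete example of an entropy-based similarity measure of the kind compared in this study, below is a minimal mutual-information computation between two mammographic regions of interest; the bin count is an assumption, and the paper's full set of eight measures is not reproduced.

```python
import numpy as np

def mutual_information(roi_a, roi_b, bins=64):
    """Mutual information between two equally sized image patches,
    estimated from their joint gray-level histogram (in bits)."""
    joint, _, _ = np.histogram2d(roi_a.ravel(), roi_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of roi_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of roi_b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```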
NASA Astrophysics Data System (ADS)
Gaffney, Kevin P.; Aghaei, Faranak; Battiste, James; Zheng, Bin
2017-03-01
Detection of residual brain tumor is important to evaluate the efficacy of brain cancer surgery, determine the optimal strategy of further radiation therapy if needed, and assess the ultimate prognosis of the patient. Brain MR is a commonly used imaging modality for this task. In order to distinguish between residual tumor and surgery-induced scar tissue, two sets of MRI scans are conducted pre- and post-gadolinium contrast injection. The residual tumors are enhanced only in the post-contrast-injection images. However, subjectively reading and quantifying this type of brain MR image is difficult when detecting real residual tumor regions and measuring total residual tumor volume. In order to help solve this clinical difficulty, we developed and tested a new interactive computer-aided detection scheme, which consists of three consecutive image processing steps, namely, 1) segmentation of the intracranial region, 2) image registration and subtraction, and 3) tumor segmentation and refinement. The scheme also includes a specially designed and implemented graphical user interface (GUI) platform. When using this scheme, two sets of pre- and post-contrast-injection images are first automatically processed to detect and quantify residual tumor volume. Then, a user can visually examine segmentation results and conveniently guide the scheme to correct any detection or segmentation errors if needed. The scheme has been repeatedly tested using five cases. Given the high performance and robustness observed in testing, the scheme is currently ready for conducting clinical studies and helping clinicians investigate the association between this quantitative image marker and the outcome of patients.
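A minimal sketch of the subtraction-and-segmentation core (steps 2 and 3), assuming the pre- and post-contrast volumes are already co-registered and skull-stripped; Otsu thresholding and the small-component filter stand in for the scheme's actual segmentation, and the GUI-driven refinement is omitted.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def residual_tumor_mask(pre, post, voxel_volume_mm3):
    """Enhancing (residual-tumor) voxels appear only post-contrast."""
    diff = np.clip(post.astype(float) - pre.astype(float), 0, None)
    mask = diff > threshold_otsu(diff[diff > 0])   # keep strong enhancement
    labels, n = ndimage.label(mask)                # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    mask = np.isin(labels, 1 + np.flatnonzero(sizes >= 10))  # drop specks
    return mask, mask.sum() * voxel_volume_mm3     # mask and total volume
```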
NASA Astrophysics Data System (ADS)
Lough, A. C.; Roman, D. C.; Haney, M. M.
2015-12-01
Deep long period (DLP) earthquakes are commonly observed in volcanic settings such as the Aleutian Arc in Alaska. DLPs are poorly understood but are thought to be associated with movements of fluids, such as magma or hydrothermal fluids, deep in the volcanic plumbing system. These events have been recognized for several decades, but few studies have gone beyond their identification and location. All long period events are more difficult to identify and locate than volcano-tectonic (VT) earthquakes because traditional detection schemes focus on high frequency (short period) energy. In addition, DLPs present analytical challenges because they tend to be emergent, so it is difficult to accurately pick the onset of arriving body waves. We now expect to find DLPs at most volcanic centers; the challenge lies in identification and location. We aim to reduce the element of human error in location by applying back projection to better constrain the depth and horizontal position of these events. Power et al. (2004) provided the first compilation of DLP activity in the Aleutian Arc. This study focuses on the reanalysis of 162 cataloged DLPs beneath 11 volcanoes in the Aleutian arc (we expect to ultimately identify and reanalyze more DLPs). We are currently adapting the approach of Haney (2014) for volcanic tremor to use back projection over a 4D grid to determine the position and origin time of DLPs. This method holds great potential in that it will allow automated, high-accuracy picking of arrival times and could reduce the number of arrival-time picks necessary for traditional location schemes to well constrain event origins. Back projection can also calculate a relative focal mechanism (difficult with traditional methods due to the emergent nature of DLPs), allowing the first in-depth analysis of source properties. Our event catalog (spanning over 25 years and 11 volcanoes) is one of the longest and largest and enables us to investigate spatial and temporal variation in DLPs.
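A minimal sketch of 4D back projection as described above: waveform envelopes are shifted by predicted travel times and stacked over a grid of trial locations and origin times, so no manual picks are needed. Travel-time computation and the actual weighting of Haney (2014) are assumed away behind placeholder inputs.

```python
import numpy as np

def back_project(envelopes, dt, travel_times, origin_times):
    """envelopes: (n_stations, n_samples) smoothed waveform envelopes
    dt: sample interval (s)
    travel_times: (n_nodes, n_stations) predicted travel times (s)
    origin_times: candidate origin times (s)
    Returns a stack image of shape (n_nodes, n_origin_times); its argmax
    gives the best-fitting location node and origin time."""
    n_nodes, n_sta = travel_times.shape
    stack = np.zeros((n_nodes, len(origin_times)))
    for k in range(n_sta):
        idx = ((origin_times[None, :] + travel_times[:, k, None]) / dt).astype(int)
        idx = np.clip(idx, 0, envelopes.shape[1] - 1)
        stack += envelopes[k][idx]   # shift-and-stack via fancy indexing
    return stack

# best_node, best_t0 = np.unravel_index(stack.argmax(), stack.shape)
```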
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
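A minimal sketch of the receiver-side control flow for this concatenated ARQ scheme; the inner_decode and outer_check callables are illustrative stand-ins for the actual codes.

```python
def concatenated_arq_receive(frame, inner_decode, outer_check):
    """inner_decode returns (ok, data); outer_check returns True when the
    outer error-detecting code finds no errors. Names are illustrative."""
    ok, data = inner_decode(frame)   # inner code corrects and detects
    if not ok:
        return "NAK"                 # inner decoder failure -> retransmit
    if not outer_check(data):
        return "NAK"                 # outer code detects residual errors
    return "ACK"                     # accept the frame
```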
NASA Astrophysics Data System (ADS)
Kwiatek, G.; Plenkers, K.; Zang, A.; Stephansson, O.; Stenberg, L.
2016-12-01
The geothermic Fatigue Hydraulic Fracturing (FHF) in situ experiment (Nova project 54-14-1) took place in the Äspö Hard Rock Laboratory/Sweden in a 1.8 Ma old granitic to dioritic rock mass. The experiment aims at optimizing geothermal heat exchange in crystalline rock mass by multistage hydraulic fracturing at 10 m scale. Six fractures are driven by three different water injection schemes (continuous, cyclic, pulse pressurization) inside a 28 m long, horizontal borehole at depth level 410 m. The rock volume subject to hydraulic fracturing and monitored by three different networks with acoustic emission (AE), micro-seismicity and electromagnetic sensors is about 30 m x 30 m x 30 m in size. The 16-channel in-situ AE monitoring network by GMuG monitored the rupture generation and propagation in the frequency range 1000 Hz to 100,000 Hz, corresponding to rupture dimensions from cm- to dm-scale. The in-situ AE monitoring system detected and analyzed AE activity in-situ (P- and S-wave picking, localization). The results were used to review the ongoing microfracturing activity in near real-time. The in-situ AE monitoring network successfully recorded and localized 196 seismic events for most, but not all, hydraulic fractures. All AE events detected in-situ occurred during fracturing time periods. The source parameters (fracture sizes, moment magnitudes, static stress drop) of AE events framing injection periods were calculated using combined spectral-fitting/spectral-ratio techniques. The AE activity is clustered in space and clearly outlines the fracture locations, orientations, and expansion, as well as their temporal evolution. An outward migration of AE events away from the borehole is observed. Fractures extend up to 7 m from the injection interval in the horizontal borehole. For most fractures, the orientations and locations correlate roughly with the results gained by image packer. Clear differences in seismic response between hydraulic fractures in different formations and injection schemes are visible, which need further investigation. For further analysis, all AE data of fracturing time periods were recorded continuously with 1 MHz sampling frequency per channel.
Secure Distributed Detection under Energy Constraint in IoT-Oriented Sensor Networks.
Zhang, Guomei; Sun, Hao
2016-12-16
We study the secure distributed detection problems under energy constraint for IoT-oriented sensor networks. The conventional channel-aware encryption (CAE) is an efficient physical-layer secure distributed detection scheme in light of its energy efficiency, good scalability and robustness over diverse eavesdropping scenarios. However, in the CAE scheme, it remains an open problem how to optimize the key thresholds for the estimated channel gain, which are used to determine the sensor's reporting action. Moreover, the CAE scheme does not jointly consider the accuracy of local detection results in determining whether a sensor should stay dormant. To solve these problems, we first analyze the error probability and derive the optimal thresholds in the CAE scheme under a specified energy constraint. These results build a convenient mathematical framework for our further innovative design. Under this framework, we propose a hybrid secure distributed detection scheme. Our proposal can satisfy the energy constraint by keeping some sensors inactive according to the local detection confidence level, which is characterized by the likelihood ratio. Meanwhile, the security is guaranteed through randomly flipping the local decisions forwarded to the fusion center based on the channel amplitude. We further optimize the key parameters of our hybrid scheme, including two local decision thresholds and one channel comparison threshold. Performance evaluation results demonstrate that our hybrid scheme outperforms the CAE under stringent energy constraints, especially in the high signal-to-noise ratio scenario, while the security is still assured.
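A minimal sketch of the sensor-side logic of the hybrid scheme as described: a sensor stays dormant when its likelihood ratio is uninformative, and otherwise flips its one-bit decision according to the channel amplitude so that an eavesdropper without channel knowledge is confounded. All threshold names and values are illustrative.

```python
def sensor_report(likelihood_ratio, channel_gain,
                  lr_low=0.5, lr_high=2.0, gain_flip=1.0):
    """Return a 0/1 decision to transmit, or None to stay dormant.

    lr_low/lr_high: local-confidence thresholds (dormant in between),
    enforcing the energy constraint; gain_flip: channel-amplitude
    threshold controlling the flipping, known to the fusion center
    (but not the eavesdropper) through channel knowledge."""
    if lr_low < likelihood_ratio < lr_high:
        return None                      # uninformative: save energy
    decision = 1 if likelihood_ratio >= lr_high else 0
    if channel_gain < gain_flip:         # flip on weak channels
        decision ^= 1
    return decision
```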
A computerized scheme for lung nodule detection in multiprojection chest radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo Wei; Li Qiang; Boyce, Sarah J.
2012-04-15
Purpose: Our previous study indicated that multiprojection chest radiography could significantly improve radiologists' performance for lung nodule detection in clinical practice. In this study, the authors further verify that multiprojection chest radiography can greatly improve the performance of a computer-aided diagnostic (CAD) scheme. Methods: Our database consisted of 59 subjects, including 43 subjects with 45 nodules and 16 subjects without nodules. The 45 nodules included 7 real and 38 simulated ones. The authors developed a conventional CAD scheme and a new fusion CAD scheme to detect lung nodules. The conventional CAD scheme consisted of four steps for (1) identification of initial nodule candidates inside lungs, (2) nodule candidate segmentation based on dynamic programming, (3) extraction of 33 features from nodule candidates, and (4) false positive reduction using a piecewise linear classifier. The conventional CAD scheme processed each of the three projection images of a subject independently and discarded the correlation information between the three images. The fusion CAD scheme included the four steps in the conventional CAD scheme and two additional steps for (5) registration of all candidates in the three images of a subject, and (6) integration of correlation information between the registered candidates in the three images. The integration step retained all candidates detected at least twice in the three images of a subject and removed those detected only once in the three images as false positives. A leave-one-subject-out testing method was used for evaluation of the performance levels of the two CAD schemes. Results: At the sensitivities of 70%, 65%, and 60%, our conventional CAD scheme reported 14.7, 11.3, and 8.6 false positives per image, respectively, whereas our fusion CAD scheme reported 3.9, 1.9, and 1.2 false positives per image, and 5.5, 2.8, and 1.7 false positives per patient, respectively. The low performance of the conventional CAD scheme may be attributed to the high noise level in chest radiography, and the small size and low contrast of most nodules. Conclusions: This study indicated that the fusion of correlation information in multiprojection chest radiography can markedly improve the performance of CAD scheme for lung nodule detection.
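A minimal sketch of the integration step (6): after registration, a candidate is kept only if it was detected in at least two of the three projections. The data representation is illustrative.

```python
def fuse_candidates(groups):
    """groups: one entry per registered candidate for a subject, holding
    the set of projection indices (0, 1, 2) in which that candidate was
    detected. The fusion rule keeps candidates detected at least twice and
    discards single-view detections as false positives."""
    return [views for views in groups if len(views) >= 2]

# Example: three registered candidates for one subject
print(fuse_candidates([{0, 1, 2}, {1}, {0, 2}]))  # -> [{0, 1, 2}, {0, 2}]
```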
Performance analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
Bi-orthogonal Symbol Mapping and Detection in Optical CDMA Communication System
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang
2017-12-01
In this paper, a bi-orthogonal symbol mapping and detection scheme is investigated in a time-spreading wavelength-hopping optical CDMA communication system. The carrier-hopping prime code is exploited as the signature sequence, whose out-of-phase autocorrelation is zero. Based on the orthogonality of the carrier-hopping prime code, an equal-weight orthogonal signaling scheme can be constructed, and the proposed scheme using bi-orthogonal symbol mapping and detection can be developed. The transmitted binary data bits are mapped into corresponding bi-orthogonal symbols, where the orthogonal matrix code and its complement are utilized. In the receiver, the received bi-orthogonal data symbol is fed into the maximum-likelihood decoder for detection. Under such symbol mapping and detection, the proposed scheme can greatly enlarge the Euclidean distance; hence, the system performance can be drastically improved.
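A minimal sketch of bi-orthogonal symbol mapping and maximum-likelihood (correlation) detection using a Hadamard matrix and its complement; the code length and the additive-noise channel are generic illustrations, not the paper's carrier-hopping prime-code construction.

```python
import numpy as np
from scipy.linalg import hadamard

def make_biorthogonal(n=8):
    """2n bi-orthogonal symbols: the rows of H_n and their complements."""
    H = hadamard(n)
    return np.vstack([H, -H])          # shape (2n, n)

def ml_detect(received, symbols):
    """ML detection: pick the symbol with the largest correlation
    against the received chip sequence."""
    return int(np.argmax(symbols @ received))

symbols = make_biorthogonal(8)          # 16 symbols carry 4 bits each
tx = symbols[11]
rng = np.random.default_rng(0)
rx = tx + 0.3 * rng.standard_normal(8)  # noisy channel
print(ml_detect(rx, symbols))           # -> 11 for this noise level
```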
Weak beacon detection for air-to-ground optical wireless link establishment.
Han, Yaoqiang; Dang, Anhong; Tang, Junxiong; Guo, Hong
2010-02-01
In an air-to-ground free-space optical communication system, strong background interference seriously affects beacon detection, which makes it difficult to establish the optical link. In this paper, we propose a correlation beacon detection scheme for strong background interference conditions. As opposed to traditional beacon detection schemes, the beacon is modulated by an m-sequence at the transmitting terminal, with a digital differential matched filter (DDMF) array introduced at the receiving end to detect the modulated beacon. This scheme is capable of suppressing both strong interference and noise by correlation reception of the received image sequence. In addition, the DDMF array enables each pixel of the image sensor to have its own DDMF of the same structure to process its received image sequence in parallel, thus making fast beacon detection possible. Theoretical analysis and an outdoor experiment show that the proposed scheme can realize fast and effective beacon detection under strong background interference conditions. Consequently, the required beacon transmission power can also be reduced dramatically.
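A minimal per-pixel illustration of correlation detection against an m-sequence; the LFSR taps and code length are generic, and the DDMF structure itself is not reproduced.

```python
import numpy as np

def m_sequence(taps=(7, 6), n=7):
    """Maximal-length sequence from an n-bit LFSR, mapped to +/-1.
    Taps (7, 6) give a length-127 m-sequence for n = 7."""
    state = [1] * n
    out = []
    for _ in range(2**n - 1):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]   # feedback bit
        state = [fb] + state[:-1]                       # shift register
    return 2 * np.array(out) - 1

def beacon_score(pixel_series, code):
    """Correlate one pixel's frame sequence against the beacon code;
    a modulated beacon yields a large score while background (and its
    slow variations) averages out."""
    x = pixel_series - pixel_series.mean()   # remove DC background
    return float(np.dot(x, code))
```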
NASA Astrophysics Data System (ADS)
Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.
2009-12-01
The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes’ theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events, for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event, to reduce false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks, as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Offline analysis on Swiss and California waveform datasets indicates that the multiple-threshold approach is faster and more reliable for larger events than the earlier version of the VS codes. This multiple-threshold approach is well-suited for implementation on a wide range of devices, from embedded processor systems installed at seismic stations, to small autonomous networks for local warnings, to large-scale regional networks such as the CISN. In addition, we quantify the influence of systematic use of prior information and Vs30-based corrections for site amplification on VS magnitude estimation performance, and describe how components of the VS algorithm will be integrated into non-EEW standard network processing procedures at CHNet, the national broadband / strong motion network in Switzerland. These enhancements to the VS codes will be transitioned from off-line to real-time testing at CHNet in Europe in the coming months, and will be incorporated into the development of key components of the CISN ShakeAlert prototype system in California.
Detection-enhanced steady state entanglement with ions.
Bentley, C D B; Carvalho, A R R; Kielpinski, D; Hope, J J
2014-07-25
Driven dissipative steady state entanglement schemes take advantage of coupling to the environment to robustly prepare highly entangled states. We present a scheme for two trapped ions to generate a maximally entangled steady state with fidelity above 0.99, appropriate for use in quantum protocols. Furthermore, we extend the scheme by introducing detection of our dissipation process, significantly enhancing the fidelity. Our scheme is robust to anomalous heating and requires no sympathetic cooling.
Robust Spacecraft Component Detection in Point Clouds.
Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng
2018-03-21
Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected through iterations of energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by the Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected from the detected patches using pairwise geometric relations. After successive detection of cylinders, planar patches and cuboids, a mid-level geometric representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized from computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and is robust against noise and variations in point distribution density. PMID:29561828
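For illustration, a minimal RANSAC plane fit (a stand-in chosen for brevity, not the Hough-transform plane detection the paper uses; all parameters are placeholders) shows how a planar patch can be pulled out of a noisy cloud:

```python
# Minimal sketch of robust plane extraction from a point cloud via RANSAC.
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.02, rng=np.random.default_rng(0)):
    """Return (normal, d, inlier mask) of the best plane n.x + d = 0."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

# Noisy plane z = 0 plus scattered outliers.
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                         rng.normal(0, 0.005, 500)])
outliers = rng.uniform(-1, 1, (100, 3))
n, d, mask = ransac_plane(np.vstack([plane, outliers]))
print(n.round(2), int(mask.sum()))  # normal close to (0, 0, +/-1), ~500 inliers
```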
Earthquake Monitoring with the MyShake Global Smartphone Seismic Network
NASA Astrophysics Data System (ADS)
Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.
2017-12-01
Smartphone arrays have the potential to significantly improve seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from the communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone located <10 km from the epicenter exceeds 70%. Due to the sensor's self-noise, smaller magnitude events at short epicentral distances are very difficult to detect. To increase the signal-to-noise ratio, we employ array back-projection techniques on continuous data recorded by thousands of phones. In this class of methods, the array is used as a spatial filter that suppresses signals emitted from shallow noise sources. Filtered traces are stacked to further enhance seismic signals from deep sources. We benchmark our technique against traditional location algorithms using recordings from California, a region with a large MyShake user database. We find that locations derived from back-projection images of M 3 events recorded by >20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km² are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
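A schematic shift-and-stack sketch conveys the back-projection idea: each trace is advanced by the predicted travel time to a trial source so coherent arrivals align and sum, while incoherent noise averages down. The velocity, geometry, and amplitudes below are illustrative assumptions, not MyShake's actual processing.

```python
# Shift-and-stack toward a trial source; parameters are synthetic placeholders.
import numpy as np

def shift_and_stack(traces, distances_km, dt, v_km_s=3.5):
    shifts = np.round(distances_km / v_km_s / dt).astype(int)  # moveout (samples)
    n = traces.shape[1] - shifts.max()
    return np.mean([tr[s:s + n] for tr, s in zip(traces, shifts)], axis=0)

rng = np.random.default_rng(0)
dt, wavelet = 0.01, np.hanning(50)
dists = rng.uniform(1, 20, 100)            # 100 phones at 1-20 km
traces = rng.normal(0, 1.0, (100, 2000))   # instrument noise, std 1
for tr, d in zip(traces, dists):
    i = int(round(d / 3.5 / dt))           # predicted arrival sample
    tr[i:i + 50] += 0.5 * wavelet          # weak common signal, below noise

stack = shift_and_stack(traces, dists, dt)
# Stacking 100 traces drops noise to ~0.1 while the aligned 0.5 peak survives.
print(f"noise level: {stack.std():.2f}, peak: {np.abs(stack).max():.2f}")
```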
Fravolini, M L; Fabietti, P G
2014-01-01
This paper proposes a scheme for the control of the blood glucose in subjects with type-1 diabetes mellitus based on the subcutaneous (s.c.) glucose measurement and s.c. insulin administration. The tuning of the controller is based on an iterative learning strategy that exploits the repetitiveness of the daily feeding habit of a patient. The control consists of a mixed feedback and feedforward contribution whose parameters are tuned through an iterative learning process that is based on the day-by-day automated analysis of the glucose response to the infusion of exogenous insulin. The scheme does not require any a priori information on the patient insulin/glucose response, on the meal times and on the amount of ingested carbohydrates (CHOs). Thanks to the learning mechanism the scheme is able to improve its performance over time. A specific logic is also introduced for the detection and prevention of possible hypoglycaemia events. The effectiveness of the methodology has been validated using long-term simulation studies applied to a set of nine in silico patients considering realistic uncertainties on the meal times and on the quantities of ingested CHOs.
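A minimal sketch of the day-by-day learning idea follows: the feedforward insulin profile for the next day is corrected by the glucose tracking error observed today. The gain, target, discretization, and the fact that the plant response is omitted are all assumptions of this toy, not the paper's tuning law.

```python
# Toy iterative-learning update of a daily feedforward profile; the
# glucose response to insulin is deliberately omitted, so only the
# update law itself is shown. All values are illustrative.
import numpy as np

def ilc_update(u_today, glucose_today, target=5.5, gain=0.05):
    """Return tomorrow's feedforward profile (one value per time slot)."""
    error = glucose_today - target                      # mmol/L above target
    return np.clip(u_today + gain * error, 0.0, None)   # insulin rate >= 0

u = np.zeros(48)                                        # 48 half-hour slots
glucose = 5.5 + 2.0 * np.exp(-((np.arange(48) - 16) ** 2) / 20.0)  # meal peak
for _ in range(10):                                     # ten simulated "days"
    u = ilc_update(u, glucose)
print(f"{u.max():.2f}")  # dosing concentrates around the (fixed) meal peak
```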
Takahiro Sayama; Jeffrey J. McDonnell
2009-01-01
Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...
Preliminary analysis of EUSO—TA data
NASA Astrophysics Data System (ADS)
Fenu, F.; Piotrowski, L. W.; Shin, H.; Jung, A.; Bacholle, S.; Bisconti, F.; Capel, F.; Eser, J.; Kawasaki, Y.; Kuznetsov, E.; Larsson, O.; Mackovjak, S.; Miyamoto, H.; Plebaniak, Z.; Prevot, G.; Putis, M.; Shinozaki, K.; Adams, J.; Bertaina, M.; Bobik, P.; Casolino, M.; Matthews, J. N.; Ricci, M.; Wiencke, L.;
2016-05-01
The EUSO-TA detector is a pathfinder for the JEM-EUSO project and is currently installed at Black Rock Mesa (Utah) on the site of the Telescope Array fluorescence detectors. The aim of this experiment is to validate the observation principle of JEM-EUSO on air showers measured from the ground. The experiment acquires data in coincidence with the TA triggers to increase the likelihood of cosmic ray detection. In this framework the collaboration is also testing the detector response to several test events from lasers and LED flashers. Moreover, another aim of the project is the validation of the stability of the data acquisition chain in real sky conditions and the optimization of the trigger scheme for the rejection of background. Data analysis is ongoing to identify cosmic ray events in coincidence with the TA detector. In this contribution we will show the response of the EUSO-TA detector to all the different types of events and present some preliminary results on the trigger optimization performed on such data.
Unsupervised iterative detection of land mines in highly cluttered environments.
Batman, Sinan; Goutsias, John
2003-01-01
An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.
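As a simplified illustration of one nonlinear morphological detection pass, the sketch below uses a white top-hat transform to pull small bright targets out of smooth clutter. This is an assumed stand-in for the paper's hybrid filter (which couples a decorrelating linear transform with the morphological detector); the scene, structuring element size, and threshold are placeholders.

```python
# White top-hat detection of a compact target in synthetic clutter.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
scene = ndimage.gaussian_filter(rng.normal(0, 1, (128, 128)), 4)  # smooth clutter
scene[60:64, 60:64] += 1.5                                        # mine-like target

opened = ndimage.grey_opening(scene, size=(9, 9))  # removes structures smaller than SE
top_hat = scene - opened                           # residue keeps the compact target
detections = top_hat > 0.8                         # threshold the residue
print(np.argwhere(detections).mean(axis=0))        # centroid near (61.5, 61.5)
```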
A united event grand canonical Monte Carlo study of partially doped polyaniline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byshkin, M. S.; Correa, A.; Buonocore, F.
2013-12-28
A Grand Canonical Monte Carlo scheme, based on united events combining protonation/deprotonation and insertion/deletion of HCl molecules, is proposed for the generation of polyaniline structures at intermediate doping levels between 0% (PANI EB) and 100% (PANI ES). A procedure based on this scheme and subsequent structure relaxations using molecular dynamics is described and validated. Using the proposed scheme and the corresponding procedure, atomistic models of amorphous PANI-HCl structures were generated and studied at different doping levels. Density, structure factors, and solubility parameters were calculated. Their values agree well with available experimental data. The interactions of HCl with PANI have been studied and the distribution of their energies has been analyzed. The procedure has also been extended to the generation of PANI models including adsorbed water, and the effect of including water molecules on PANI properties has been modeled and discussed. The protocol described here is general, and the proposed United Event Grand Canonical Monte Carlo scheme can be easily extended to similar polymeric materials used in gas sensing and to other systems involving adsorption and chemical reaction steps.
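For orientation, a schematic grand-canonical acceptance test for an insertion move is sketched below. The energy model, temperature, and the absorption of thermal-wavelength factors into the chemical potential are simplifying assumptions, not the paper's force field or exact united-event move.

```python
# Schematic Metropolis acceptance for inserting one molecule in a GCMC step.
import numpy as np
rng = np.random.default_rng(0)

def accept_insertion(dE, mu, volume, n_mol, beta=1.0 / 0.025):  # beta in 1/eV
    """Grand-canonical insertion acceptance (ideal-gas prefactor; the
    thermal de Broglie wavelength factor is absorbed into mu for brevity)."""
    ratio = volume / (n_mol + 1) * np.exp(beta * (mu - dE))
    return rng.random() < min(1.0, ratio)

# Example: a favorable insertion (dE < 0) is very likely accepted.
print(accept_insertion(dE=-0.1, mu=0.0, volume=1000.0, n_mol=50))
```

In a united-event move of the kind described above, the insertion step would be attempted jointly with the protonation step, with dE covering the combined change.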
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
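The accept/retransmit logic of such a scheme can be sketched as follows; the codes themselves are stand-ins (the inner decoder is abstracted to a success flag and the outer error-detecting code is represented by zlib's CRC-32), so this shows only the decision structure, not the NASA Telecommand codes.

```python
# Decision logic for concatenated error control with selective-repeat ARQ:
# accept a frame only if the inner decode succeeds AND the outer code
# detects no residual errors; otherwise request retransmission.
import zlib

def receive(frame: bytes, outer_check: int, inner_ok: bool) -> str:
    if not inner_ok:                       # inner decoder failure -> retransmit
        return "NAK"
    if zlib.crc32(frame) != outer_check:   # outer code detects residual errors
        return "NAK"
    return "ACK"

payload = b"telemetry block"
print(receive(payload, zlib.crc32(payload), inner_ok=True))      # ACK
print(receive(payload, zlib.crc32(payload) ^ 1, inner_ok=True))  # NAK
```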
Optical measurement of sound using time-varying laser speckle patterns
NASA Astrophysics Data System (ADS)
Leung, Terence S.; Jiang, Shihong; Hebden, Jeremy
2011-02-01
In this work, we introduce an optical technique to measure sound. The technique involves pointing a coherent pulsed laser beam at the surface of the measurement site and capturing the time-varying speckle patterns using a CCD camera. Sound manifests itself as vibrations on the surface, which induce a periodic translation of the speckle pattern over time. Using a parallel speckle detection scheme, the dynamics of the time-varying speckle patterns can be captured and processed to produce spectral information about the sound. One potential clinical application is to measure pathological sounds from the brain as a screening test. We performed experiments to demonstrate the principle of the detection scheme using head phantoms. The results show that the detection scheme can measure the spectra of single-frequency sounds between 100 and 2000 Hz. The detection scheme worked equally well in both a flat geometry and an anatomical head geometry. However, the current detection scheme is too slow for use in living biological tissues, which have decorrelation times of a few milliseconds. Further improvements have been suggested.
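The processing chain can be sketched in one dimension: the frame-to-frame speckle shift is estimated by cross-correlation, and the Fourier transform of the shift time series recovers the sound spectrum. The frame rate, tone, and 1-D reduction are assumptions for this sketch, not the paper's parameters.

```python
# Recover a tone's frequency from simulated speckle-pattern translations.
import numpy as np

def frame_shift(ref, frame):
    """Integer shift of `frame` relative to `ref` via FFT cross-correlation."""
    xc = np.fft.ifft(np.fft.fft(frame) * np.conj(np.fft.fft(ref))).real
    lag = int(np.argmax(xc))
    return lag if lag <= ref.size // 2 else lag - ref.size

fps, n_frames = 8000, 4096                 # assumed camera frame rate (Hz)
t = np.arange(n_frames) / fps
true_shift = np.round(3 * np.sin(2 * np.pi * 440 * t)).astype(int)  # 440 Hz tone
base = np.random.default_rng(0).normal(size=256)        # 1-D "speckle" line
shifts = [frame_shift(base, np.roll(base, s)) for s in true_shift]

spectrum = np.abs(np.fft.rfft(np.array(shifts) - np.mean(shifts)))
print(np.fft.rfftfreq(n_frames, 1 / fps)[np.argmax(spectrum)])  # ~440 Hz
```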
A DFIG Islanding Detection Scheme Based on Reactive Power Infusion
NASA Astrophysics Data System (ADS)
Wang, M.; Liu, C.; He, G. Q.; Li, G. H.; Feng, K. H.; Sun, W. W.
2017-07-01
A lot of research has been done on photovoltaic (PV) power system islanding detection in recent years. By comparison, much less attention has been paid to islanding in wind turbines. Meanwhile, wind turbines can operate in islanding conditions for quite a long period, which can be harmful to equipment and cause safety hazards. This paper presents and examines a doubly fed induction generation (DFIG) islanding detection scheme based on feedback of reactive power and frequency, using a reactive power infusion trigger signal obtained by dividing the voltage total harmonic distortion (THD) by the voltage THD of the last cycle, so as to avoid deterioration of the power quality. This DFIG islanding detection scheme uses a reactive power current feedback loop to amplify the frequency differences between islanding and normal conditions. Simulation results show that the DFIG islanding detection scheme is effective.
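The cycle-to-cycle THD-ratio trigger can be sketched as below; the sampling rate, harmonic content, and trigger band are illustrative assumptions, not values from the paper.

```python
# Trigger reactive power infusion when this cycle's voltage THD departs
# from the previous cycle's THD; all parameters are placeholders.
import numpy as np

def thd(cycle, fs, f0=50.0):
    """Voltage THD of one fundamental cycle: harmonic RMS over fundamental."""
    spec = np.abs(np.fft.rfft(cycle)) / len(cycle)
    k0 = int(round(f0 * len(cycle) / fs))          # fundamental bin
    harm = np.sqrt(sum(spec[k] ** 2 for k in range(2 * k0, len(spec), k0)))
    return harm / spec[k0]

def trigger(thd_now, thd_prev, band=0.15):
    return abs(thd_now / thd_prev - 1.0) > band    # True -> start infusion

fs, t = 5000, np.arange(100) / 5000                # one 50 Hz cycle
cyc = lambda a3: np.sin(2 * np.pi * 50 * t) + a3 * np.sin(2 * np.pi * 150 * t)
print(trigger(thd(cyc(0.20), fs), thd(cyc(0.05), fs)))  # True: THD jumped 4x
```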
Kaplan, H S
2005-11-01
Safety and reliability in blood transfusion are not static, but are dynamic non-events. Since performance deviations continually occur in complex systems, their detection and correction must be accomplished over and over again. Non-conformance must be detected early enough to allow for recovery or mitigation. Near-miss events afford early detection of possible system weaknesses and provide an early chance at correction. National event reporting systems, both voluntary and involuntary, have begun to include near-miss reporting in their classification schemes, raising awareness for their detection. MERS-TM is a voluntary safety reporting initiative in transfusion. Currently 22 hospitals submit reports anonymously to a central database which supports analysis of a hospital's own data and that of an aggregate database. The system encourages reporting of near-miss events, where the patient is protected from receiving an unsuitable or incorrect blood component due to a planned or unplanned recovery step. MERS-TM data suggest approximately 90% of events are near-misses, with 10% caught after issue but before transfusion. Near-miss reporting may increase total reports ten-fold. The ratio of near-misses to events with harm is 339:1, consistent with other industries' ratio of 300:1, which has been proposed as a measure of reporting in event reporting systems. Use of a risk matrix and an event's relation to protective barriers allows prioritization of these events. Near-misses recovered by planned barriers occur ten times more frequently than unplanned recoveries. A bedside check of the patient's identity against that on the blood component is an essential, final barrier. How the typical two-person check is performed is critical. Even properly done, this check is ineffective against sampling and testing errors. Blood testing at the bedside just prior to transfusion minimizes the risk of such upstream events. However, even with simple and well-designed devices, training may be a critical issue. Sample errors account for more than half of reported events. The most dangerous miscollection is a blood sample passing acceptance with no previous patient results for comparison. Bar code labels or collection of a second sample may counter this upstream vulnerability. Further upstream barriers have been proposed to counter the precariousness of urgent blood sample collection in a changing, unstable situation. One, a linking device, allows safer labeling of tubes away from the bedside; the second, a forcing function, prevents omission of critical patient identification steps. Errors in the blood bank itself account for 15% of errors, with a high potential severity. In one such event, a component incorrectly issued, but safely detected prior to transfusion, focused attention on multitasking's contribution to laboratory error. In sum, use of near-miss information, by enhancing barriers supporting error prevention and mitigation, increases our capacity to get the right blood to the right patient.
Park, Seonhwa; Singh, Amardeep; Kim, Sinyoung; Yang, Haesik
2014-02-04
We compare herein biosensing performance of two electroreduction-based electrochemical-enzymatic (EN) redox-cycling schemes [the redox cycling combined with simultaneous enzymatic amplification (one-enzyme scheme) and the redox cycling combined with preceding enzymatic amplification (two-enzyme scheme)]. To minimize unwanted side reactions in the two-enzyme scheme, β-galactosidase (Gal) and tyrosinase (Tyr) are selected as an enzyme label and a redox enzyme, respectively, and Tyr is selected as a redox enzyme label in the one-enzyme scheme. The signal amplification in the one-enzyme scheme consists of (i) enzymatic oxidation of catechol into o-benzoquinone by Tyr and (ii) electroreduction-based EN redox cycling of o-benzoquinone. The signal amplification in the two-enzyme scheme consists of (i) enzymatic conversion of phenyl β-d-galactopyranoside into phenol by Gal, (ii) enzymatic oxidation of phenol into catechol by Tyr, and (iii) electroreduction-based EN redox cycling of o-benzoquinone including further enzymatic oxidation of catechol to o-benzoquinone by Tyr. Graphene oxide-modified indium-tin oxide (GO/ITO) electrodes, simply prepared by immersing ITO electrodes in a GO-dispersed aqueous solution, are used to obtain better electrocatalytic activities toward o-benzoquinone reduction than bare ITO electrodes. The detection limits for mouse IgG, measured with GO/ITO electrodes, are lower than when measured with bare ITO electrodes. Importantly, the detection of mouse IgG using the two-enzyme scheme allows lower detection limits than that using the one-enzyme scheme, because the former gives higher signal levels at low target concentrations although the former gives lower signal levels at high concentrations. The detection limit for cancer antigen (CA) 15-3, a biomarker of breast cancer, measured using the two-enzyme scheme and GO/ITO electrodes is ca. 0.1 U/mL, indicating that the immunosensor is highly sensitive.
NASA Astrophysics Data System (ADS)
Salimun, Ester; Tangang, Fredolin; Juneng, Liew
2010-06-01
A comparative study has been conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), the Betts-Miller (BM), the Kain-Fritsch (KF), and the Grell (GR) schemes, in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth Generation Mesoscale Model (MM5). The event is a commonly occurring westward-propagating tropical depression weather system during a boreal winter, resulting from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments, and hence any difference in simulation performance can be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better at resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated by the BM simulation was of the convective type. The failure of the other schemes (AK, GR and KF) in simulating the event may be attributed to the trigger function, closure assumption, and precipitation scheme. On the other hand, the suitability of the BM scheme for this episode may not generalize to other episodes or convective environments.
A Multisensor Investigation of Convection During HyMeX SOP1 IOP13
NASA Technical Reports Server (NTRS)
Roberto, N.; Adirosi, E.; Baldini, L.; Casella, D.; Dietrich, S.; Panegrossi, G.; Petracca, M.; Sano, P.; Gatlin, P.
2014-01-01
A multisensor analysis of the convective precipitation event that occurred over Rome during IOP13 (October 15th, 2012) of the HyMeX (Hydrological cycle in the Mediterranean eXperiment) Special Observation Period (SOP) 1 is presented. Thanks to the cooperation between the Italian meteorological services and the scientific community, and a specific agreement with NASA-GSFC, different types of devices for meteorological measurements were made available during HyMeX SOP 1. This event was investigated using the 3-D lightning data provided by LINET, the CNR ISAC dual-pol C-band radar (Polar 55C) located in Rome, the drop size distributions (DSDs) collected by the 2D Video Disdrometer (2DVD), and the collocated Micro Rain Radar (MRR) installed at the Radio Meteorology Lab of "Sapienza" University of Rome, located 14 km from the Polar 55C radar. The relation between microphysical structure and electrical activity during the convective phase of the event was investigated using LINET lightning data and Polar 55C observations (working in both PPI and RHI scanning modes). Regions of high horizontal reflectivity (Zh) values (> 50 dBZ), indicating convective precipitation, were found to be associated with a high number of LINET strokes. In addition, a hydrometeor classification scheme applied to the Polar 55C scans was used to detect graupel and to identify a relation between the number of LINET strokes and the integrated IWC of graupel over the course of the event. Properties of DSDs measured by the 2DVD and vertical DSD profiles estimated by the MRR, and their relation to the lightning activity registered by LINET, were investigated with specific focus on the transition from convective to stratiform regimes. A good agreement was found between the convection detected by these instruments and the number of strokes detected by LINET.
LCP method for a planar passive dynamic walker based on an event-driven scheme
NASA Astrophysics Data System (ADS)
Zheng, Xu-Dong; Wang, Qi
2018-06-01
The main purpose of this paper is to present a linear complementarity problem (LCP) method for a planar passive dynamic walker with round feet based on an event-driven scheme. The passive dynamic walker is treated as a planar multi-rigid-body system. The dynamic equations of the passive dynamic walker are obtained by using Lagrange's equations of the second kind. The normal forces and frictional forces acting on the feet of the passive walker are described based on a modified Hertz contact model and Coulomb's law of dry friction. The state transition problem of stick-slip between feet and floor is formulated as an LCP, which is solved with an event-driven scheme. Finally, to validate the methodology, four gaits of the walker are simulated: the stance leg neither slips nor bounces; the stance leg slips without bouncing; the stance leg bounces without slipping; the walker stands after walking several steps.
Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H
2018-05-17
This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save limited communication resources. Second, a novel communication framework is employed by the logarithmic quantizer, which quantifies and reduces the data transmission rate in the network and thereby improves the communication efficiency of the network. Third, a stabilization criterion is derived, based on a sufficient condition that guarantees a prescribed H∞ performance level for the estimation error system, in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme.
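A relative event-triggering rule of the kind used in such schemes can be sketched as follows: the current sample is released to the network only when its deviation from the last transmitted sample exceeds a threshold that scales with the signal itself. The threshold value and vector dimensions are assumptions for illustration.

```python
# Relative event-trigger: transmit when ||x - x_sent||^2 > sigma * ||x||^2.
import numpy as np

def should_transmit(x_now, x_last_sent, sigma=0.1):
    err = x_now - x_last_sent
    return float(err @ err) > sigma * float(x_now @ x_now)

x_sent = np.array([1.0, 0.0])
for x in (np.array([1.02, 0.01]), np.array([1.4, -0.3])):
    if should_transmit(x, x_sent):
        x_sent = x  # release the sample to the network/quantizer
    print(x, "->", x_sent)  # only the second sample triggers a transmission
```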
A study of malware detection on smart mobile devices
NASA Astrophysics Data System (ADS)
Yu, Wei; Zhang, Hanlin; Xu, Guobin
2013-05-01
The growing use of smart mobile devices for everyday applications has stimulated the spread of mobile malware, especially on popular mobile platforms. As a consequence, malware detection becomes ever more critical to sustaining the mobile market and providing a better user experience. In this paper, we review existing malware and detection schemes. Using real-world malware samples with known signatures, we evaluate four popular commercial anti-virus tools, and our data show that these tools can achieve high detection accuracy. To deal with new malware with unknown signatures, we study anomaly-based detection using a decision tree algorithm. We evaluate the effectiveness of our detection scheme using malware and legitimate software samples. Our data show that the detection scheme using a decision tree can achieve a detection rate of up to 90% and a false positive rate as low as 10%.
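A minimal sketch of the decision-tree stage follows, with synthetic per-app feature vectors standing in for real features (e.g., permission counts, network call rates); the feature set and data are placeholders, not the paper's dataset.

```python
# Decision-tree classification of app feature vectors into benign/malware.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
benign = rng.normal(loc=[2, 5, 1], scale=1.0, size=(200, 3))    # synthetic
malware = rng.normal(loc=[6, 9, 4], scale=1.5, size=(200, 3))   # synthetic
X = np.vstack([benign, malware])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"detection accuracy: {clf.score(X_te, y_te):.2f}")
```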
Properties of induced seismicity at the geothermal reservoir Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Thomas, Meier
2017-04-01
Within the framework of the German MAGS2 Project, the processing of induced events at the geothermal power plant Insheim, Germany, has been reassessed and evaluated. The power plant is located close to the western rim of the Upper Rhine Graben in a region with a strongly heterogeneous subsurface. Therefore, the location of seismic events, particularly the depth estimation, is challenging. The seismic network, consisting of up to 50 stations, has an aperture of approximately 15 km around the power plant. Consequently, the manual processing is time consuming. Using a waveform similarity detection algorithm, the existing dataset from 2012 to 2016 has been reprocessed to complete the catalog of induced seismic events. Based on waveform similarity, clusters of similar events have been detected. Automated P- and S-arrival time determination using an improved multi-component autoregressive prediction algorithm yields approximately 14,000 P- and S-arrivals for 758 events. Using a dataset of manual picks as reference, the automated picking algorithm has been optimized, resulting in a standard deviation of the residuals between automated and manual picks of about 0.02 s. The automated locations show uncertainties comparable to locations of the manual reference dataset. 90% of the automated relocations fall within the error ellipsoid of the manual locations. The remaining locations are either badly resolved due to low numbers of picks or so well resolved that the automatic location is outside the error ellipsoid although located close to the manual location. The developed automated processing scheme proved to be a useful tool to supplement real-time monitoring. The event clusters are located at small patches of faults known from reflection seismic studies. The clusters are observed close to both the injection and the production wells.
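The waveform-similarity detection step can be illustrated as sliding-window template matching: a known event waveform is correlated along the continuous record and detections are declared where the normalized cross-correlation exceeds a threshold. All waveforms and the threshold below are synthetic placeholders.

```python
# Template matching by normalized cross-correlation on a synthetic record.
import numpy as np

def template_detect(record, template, threshold=0.7):
    nt = template.size
    t = (template - template.mean()) / (template.std() * nt)
    hits = []
    for i in range(record.size - nt):
        w = record[i:i + nt]
        cc = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)  # NCC in [-1, 1]
        if cc > threshold:
            hits.append((i, round(cc, 2)))
    return hits

rng = np.random.default_rng(0)
tmpl = np.sin(np.linspace(0, 6 * np.pi, 100)) * np.hanning(100)
rec = rng.normal(0, 0.2, 2000)
rec[700:800] += tmpl                       # hidden repeat of the template event
print(template_detect(rec, tmpl)[:3])      # detections cluster near sample 700
```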
Increasing sensitivity of pulse EPR experiments using echo train detection schemes.
Mentink-Vigier, F; Collauto, A; Feintuch, A; Kaminker, I; Tarle, V; Goldfarb, D
2013-11-01
Modern pulse EPR experiments are routinely used to study the structural features of paramagnetic centers. They are usually performed at low temperatures, where relaxation times are long and polarization is high, to achieve a sufficient signal-to-noise ratio (SNR). However, when working with samples whose amount and/or concentration are limited, sensitivity becomes an issue and measurements may require a significant accumulation time, up to 12 h or more. As the detection scheme of practically all pulse EPR sequences is based on the integration of a spin echo (either primary, stimulated or refocused), a considerable increase in SNR can be obtained by replacing the single-echo detection scheme with a train of echoes. All these echoes, generated by Carr-Purcell type sequences, are integrated and summed together to improve the SNR. This scheme is commonly used in NMR, and here we demonstrate its applicability to a number of frequently used pulse EPR experiments: echo-detected EPR, Davies and Mims ENDOR (Electron-Nuclear Double Resonance), DEER (Electron-Electron Double Resonance) and EDNMR (Electron-Electron Double Resonance (ELDOR)-Detected NMR), which were combined with a Carr-Purcell-Meiboom-Gill (CPMG) type detection scheme at W-band. By collecting the transient signal and integrating a number of refocused echoes, this detection scheme yielded a 1.6- to 5-fold SNR improvement, depending on the paramagnetic center and the pulse sequence applied. This improvement is achieved while keeping the experimental time constant, and it does not introduce signal distortion.
NASA Technical Reports Server (NTRS)
Chui, T. C. P.; Shao, M.; Redding, D.; Gursel, Y.; Boden, A.
1995-01-01
We discuss the effect of mirror birefringence in two optical schemes designed to detect the quantum-electrodynamics (QED) prediction of vacuum birefringence under the influence of a strong magnetic field, B. Both schemes make use of a high-finesse Fabry-Perot cavity (F-P) to increase the average path length of the light in the magnetic field. The first scheme, which we call the frequency scheme, is based on measurement of the beat frequency of two orthogonally polarized laser beams in the cavity. We show that mirror birefringence contributes to the detection uncertainties at first order, resulting in a high susceptibility to small thermal disturbances. We estimate that an unreasonably high thermal stability of 10⁻⁹ K is required to resolve the effect to 0.1%. In the second scheme, which we call the polarization rotation scheme, laser light polarized at 45° relative to the B field is injected into the cavity.
NASA Technical Reports Server (NTRS)
Tian, Jialin; Madaras, Eric I.
2009-01-01
The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
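The residual test at the core of such model-based schemes can be sketched as follows: measured outputs are compared with the outputs of a nominal model, and a sensor fault is flagged when a residual exceeds its threshold. The model, thresholds, and sensor names below are illustrative assumptions, not the T700 values.

```python
# Threshold test on normalized residuals between measured and model outputs.
import numpy as np

SENSORS = ["gas_generator_speed", "power_turbine_speed", "turbine_temp"]
THRESHOLDS = np.array([0.02, 0.02, 0.05])    # fraction of nominal (assumed)

def detect_sensor_faults(measured, predicted):
    residual = np.abs(measured - predicted) / np.abs(predicted)
    return [s for s, r, th in zip(SENSORS, residual, THRESHOLDS) if r > th]

measured = np.array([0.98, 1.00, 1.12])      # normalized sensor readings
predicted = np.array([1.00, 1.00, 1.00])     # nominal-model outputs
print(detect_sensor_faults(measured, predicted))  # ['turbine_temp']
```

In the scheme described above, the flagged channel would then be passed to the fault-parameter estimator for isolation and magnitude estimation.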
NASA Astrophysics Data System (ADS)
Qiao, F.; Liang, X.
2011-12-01
Accurate prediction of U.S. summer precipitation, including its geographic distribution, the occurrence frequency and intensity, and diurnal cycle, has been a long-standing problem for most climate and weather models. This study employs the Climate-Weather Research and Forecasting model (CWRF) to investigate the effects of cumulus parameterization on prediction of these key precipitation features during the summers of 1993 and 2008, when severe floods occurred over the U.S. Midwest. Among the 12 widely used cumulus schemes incorporated in the CWRF, the Ensemble Cumulus Parameterization modified from G3 (ECP) scheme and the Zhang-McFarland cumulus scheme modified by Liang (ZML) reproduce the geographic distributions of the observed 1993 and 2008 floods well, albeit both slightly underestimating the maximum amount. However, the ZML scheme greatly overestimates the rainfall amount over the North American Monsoon region and the Southeast U.S., while the ECP scheme performs better over the entire U.S. Compared to global general circulation models, which tend to produce too frequent rainy events at reduced intensity, the CWRF better captures both the frequency and intensity of extreme events (heavy rainfall and dry spells). However, most existing cumulus schemes in the CWRF are likely to convert atmospheric moisture into rainfall too fast, leading to fewer rainy days and stronger heavy rainfall events. A few cumulus schemes can depict the diurnal characteristics in certain but not all regions of the U.S. For example, the Grell scheme shows its superiority in reproducing the eastward diurnal phase transition and the nocturnal peaks over the Great Plains, whereas the other schemes all fail to capture this feature. Investigating the critical trigger function(s) that enable these cumulus schemes to capture the observed features provides an opportunity to better understand the underlying mechanisms that drive the diurnal variation, and thus to significantly improve prediction of the U.S. summer rainfall diurnal cycle.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
A Real-Time Decision Support System for Voltage Collapse Avoidance in Power Supply Networks
NASA Astrophysics Data System (ADS)
Chang, Chen-Sung
This paper presents a real-time decision support system (RDSS) based on artificial intelligence (AI) for voltage collapse avoidance (VCA) in power supply networks. The RDSS scheme employs a fuzzy hyperrectangular composite neural network (FHRCNN) to carry out voltage risk identification (VRI). In the event that a threat to the security of the power supply network is detected, an evolutionary programming (EP)-based algorithm is triggered to determine the operational settings required to restore the power supply network to a secure condition. The effectiveness of the RDSS methodology is demonstrated through its application to the American Electric Power Provider System (AEP, 30-bus system) under various heavy load conditions and contingency scenarios. In general, the numerical results confirm the ability of the RDSS scheme to minimize the risk of voltage collapse in power supply networks. In other words, RDSS provides Power Provider Enterprises (PPEs) with a viable tool for performing on-line voltage risk assessment and power system security enhancement functions.
An inter-lighting interference cancellation scheme for MISO-VLC systems
NASA Astrophysics Data System (ADS)
Kim, Kyuntak; Lee, Kyujin; Lee, Kyesan
2017-08-01
In this paper, we propose an inter-lighting interference cancellation (ILIC) scheme to reduce the interference between adjacent light-emitting diodes (LEDs) and enhance the transmission capacity of multiple-input-single-output (MISO) visible light communication (VLC) systems. In indoor environments, multiple LEDs are normally used as lighting sources, allowing the design of MISO-VLC systems. To enhance the transmission capacity, different data should be transmitted simultaneously from each LED; however, this can lead to interference between adjacent LEDs. In that case, signals received at relatively low power are subject to large interference, because wireless optical systems generally use intensity modulation and direct detection. Thus, only the signal with the highest received power can be detected, while the other received signals cannot. To solve this problem, we propose the ILIC scheme for MISO-VLC systems. The proposed scheme first detects the highest-power received signal, which an interference component generator then regenerates as the interference signal. The relatively low-power received signal can then be detected by cancelling the interference signal from the total received signal. Therefore, the proposed scheme can improve the total average bit error rate and throughput of a MISO-VLC system.
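The successive-cancellation idea can be sketched as follows: the strongest LED's symbol is detected first, regenerated with its estimated channel gain, and subtracted, after which the weaker symbol is detected from the remainder. The channel gains, OOK signaling, and noise level are illustrative assumptions, not the paper's system parameters.

```python
# Successive interference cancellation for a 2-LED, 1-photodiode toy link.
import numpy as np
rng = np.random.default_rng(0)

h = np.array([1.0, 0.3])                   # channel gains: LED1 strong, LED2 weak
s = rng.integers(0, 2, size=(2, 1000))     # OOK symbols from each LED
r = h @ s + rng.normal(0, 0.05, 1000)      # single photodiode observation

s1_hat = (r > (h[0] + h[1]) / 2).astype(int)   # detect the strong signal first
r_res = r - h[0] * s1_hat                      # cancel its contribution
s2_hat = (r_res > h[1] / 2).astype(int)        # then detect the weak signal
print((s1_hat == s[0]).mean(), (s2_hat == s[1]).mean())  # both near 1.0
```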
Towards a Low-Cost Remote Memory Attestation for the Smart Grid
Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing
2015-01-01
In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most of the existing schemes detecting compromised devices depend on the incremental response time in the attestation process, which are sensitive to data transmission delay and lead to high computation and network overhead. To address the issue, in this paper, we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters considering real-time network delay and achieve low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated via investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, and then, the total number of attestations could be reduced while low computation and network overhead can be achieved. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes. PMID:26307998
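A timing-based attestation check in the spirit of this approach can be sketched as follows: the verifier subtracts a network-delay estimate built from relay-node time differences from the measured response time, and compares the remainder with the expected computation time. The margin, delay model, and timings are illustrative assumptions, not LRMA's exact protocol.

```python
# Attestation pass/fail from response time corrected for network delay.
def attest(response_time_ms, relay_delays_ms, expected_compute_ms, margin_ms=2.0):
    network_delay = sum(relay_delays_ms)       # per-hop delays from relay reports
    compute_time = response_time_ms - network_delay
    return compute_time <= expected_compute_ms + margin_ms  # True = passes

print(attest(48.0, [10.0, 9.5, 8.5], expected_compute_ms=18.0))  # True
print(attest(60.0, [10.0, 9.5, 8.5], expected_compute_ms=18.0))  # False: extra work
```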
NASA Astrophysics Data System (ADS)
Chen, Shih-Hao; Chow, Chi-Wai
2015-01-01
The multiple-input multiple-output (MIMO) scheme can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses the mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n×n red-green-blue (RGB) LED array is desirable. The key step in decoding this signal is to detect the signal direction. If the LED transmitter (Tx) is rotated, the Rx may not realize the rotation, and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme which can reduce the computational complexity of rotation detection in an LED array VLC system. We use the n×n RGB LED array as the MIMO Tx. In our study, a novel two-dimensional Hadamard coding scheme is proposed. By using the different LED color layers to indicate the rotation, a low-complexity rotation detection method can be used to improve the quality of the received signal. The detection correction rate is above 95% within the indoor usage distance. Experimental results confirm the feasibility of the proposed scheme.
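For orientation, two-dimensional Hadamard-type codewords for an n×n array can be built from the Sylvester construction as below; the mapping of codewords to LED colors and positions is an illustrative assumption, not the paper's exact scheme.

```python
# 2-D separable Hadamard codewords for an 8x8 LED array (Sylvester construction).
import numpy as np

def hadamard(n):
    """Sylvester Hadamard matrix; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
c1 = np.outer(H[1], H[2])    # one 8x8 codeword (outer product of two rows)
c2 = np.outer(H[1], H[3])    # a different codeword
print(int(np.sum(c1 * c1)), int(np.sum(c1 * c2)))  # 64 and 0: orthogonal codes
```

The orthogonality shown in the last line is what lets the receiver separate codewords, and hence infer orientation, with simple correlations.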
Gradient field of undersea sound speed structure extracted from the GNSS-A oceanography
NASA Astrophysics Data System (ADS)
Yokota, Yusuke; Ishikawa, Tadashi; Watanabe, Shun-ichi
2018-06-01
Since the beginning of the twenty-first century, the Global Navigation Satellite System-Acoustic ranging (GNSS-A) technique has detected geodetic events such as co- and postseismic effects following the 2011 Tohoku-oki earthquake and slip-deficit rate distributions along the Nankai Trough subduction zone. Although these are extremely important discoveries in geodesy and seismology, more accurate observations that can capture temporal and spatial changes are required for future earthquake disaster prevention. In order to upgrade the accuracy of the GNSS-A technique, it is necessary to understand disturbances in undersea sound speed structures, which are major error sources. In particular, detailed temporal and spatial variations are difficult to observe accurately, and their effect was not sufficiently extracted in previous studies. In the present paper, we reconstruct an inversion scheme for extracting this effect from GNSS-A data and experimentally apply the scheme to seafloor sites around the Kuroshio. The extracted gradient effects are believed to represent not only a broad sound speed structure but also a more detailed structure generated by unsteady disturbance. The accuracy of the seafloor positioning was also improved by this new method. The obtained results demonstrate the feasibility of using the GNSS-A technique to detect seafloor crustal deformation and for oceanography research.
A two-stage spectrum sensing scheme based on energy detection and a novel multitaper method
NASA Astrophysics Data System (ADS)
Qi, Pei-Han; Li, Zan; Si, Jiang-Bo; Xiong, Tian-Yi
2015-04-01
Wideband spectrum sensing has drawn much attention in recent years since it provides more opportunities to secondary users. However, wideband spectrum sensing requires a long sensing time and a complex mechanism at the sensing terminal. To tackle this predicament, a two-stage wideband spectrum sensing scheme is considered, which performs spectrum sensing with low time consumption and high performance. In this scheme, a novel multitaper spectrum sensing (MSS) method is proposed to mitigate the poor performance of energy detection (ED) in the low signal-to-noise ratio (SNR) region. The closed-form expression of the decision threshold is derived based on the Neyman-Pearson criterion, and the probability of detection in the Rayleigh fading channel is analyzed. An optimization problem is formulated to maximize the probability of detection of the proposed two-stage scheme, and the average sensing time of the two-stage scheme is analyzed. Numerical results validate the efficiency of MSS and show that the two-stage spectrum sensing scheme achieves higher performance in the low SNR region and lower time cost in the high SNR region than the single-stage scheme.
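The energy-detection first stage with a Neyman-Pearson threshold can be sketched as below, using the standard Gaussian approximation to the chi-square test statistic for large N; the sample count, noise variance, and SNR are illustrative assumptions.

```python
# Energy detection with a Neyman-Pearson threshold (Gaussian approximation).
import numpy as np
from scipy.stats import norm

def ed_threshold(n_samples, noise_var, pfa):
    """Threshold on summed energy for a target false-alarm probability."""
    return noise_var * (n_samples + norm.isf(pfa) * np.sqrt(2 * n_samples))

rng = np.random.default_rng(0)
N, s2 = 1000, 1.0
lam = ed_threshold(N, s2, pfa=0.01)

noise_only = rng.normal(0, np.sqrt(s2), N)
with_signal = noise_only + rng.normal(0, 0.5, N)   # ~ -6 dB SNR signal present
print(np.sum(noise_only**2) > lam, np.sum(with_signal**2) > lam)  # False, True
```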
Generation and coherent detection of QPSK signal using a novel method of digital signal processing
NASA Astrophysics Data System (ADS)
Zhao, Yuan; Hu, Bingliang; He, Zhen-An; Xie, Wenjia; Gao, Xiaohui
2018-02-01
We demonstrate an optical quadrature phase-shift keying (QPSK) signal transmitter and an optical receiver for demodulating the optical QPSK signal with homodyne detection and digital signal processing (DSP). DSP is employed in the homodyne detection scheme without locking the phase of the local oscillator (LO). In this paper, we present a down-sampling method that extracts a one-dimensional array to reduce unwanted samples in the constellation diagram measurement. This scheme offers the following major advantages over other conventional optical QPSK signal detection methods. First, the homodyne detection scheme does not place strict requirements on the LO, in contrast to linear optical sampling, which requires a flat spectral density and phase over the spectral support of the source under test. Second, LabVIEW software is used directly to recover the QPSK signal constellation without employing a complex DSP circuit. Third, the scheme is applicable, with minor changes, to multilevel modulation formats such as M-ary PSK and quadrature amplitude modulation (QAM) and to higher-speed signals.
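At the constellation level, Gray-mapped QPSK generation and symbol-by-symbol decision can be sketched as follows; carrier-phase handling and the paper's down-sampling step are omitted, and the noise level is an assumption.

```python
# Gray-mapped QPSK: modulate, add complex noise, decide per quadrant.
import numpy as np
rng = np.random.default_rng(0)

bits = rng.integers(0, 2, 2000).reshape(-1, 2)
# Gray mapping: (b0, b1) -> ((1 - 2*b0) + 1j*(1 - 2*b1)) / sqrt(2)
symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

received = symbols + (rng.normal(0, 0.1, symbols.size)
                      + 1j * rng.normal(0, 0.1, symbols.size))

# Quadrant decision: negative in-phase/quadrature component -> bit 1.
dec = np.column_stack([received.real < 0, received.imag < 0]).astype(int)
print(f"BER: {(dec != bits).mean():.4f}")   # ~0 at this SNR
```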
Event-triggered output feedback control for distributed networked systems.
Mahmoud, Magdi S; Sabih, Muhammad; Elshafei, Moustafa
2016-01-01
This paper addresses the problem of output-feedback communication and control within an event-triggered framework in the context of distributed networked control systems. The design problem of the event-triggered output-feedback control is posed as a linear matrix inequality (LMI) feasibility problem. The scheme is developed for distributed systems where only partial states are available. In this scheme, a subsystem uses local observers and shares its information with its neighbors only when the subsystem's local error exceeds a specified threshold. The developed method is illustrated using a coupled-cart example from the literature.
Analysis of Alerting System Failures in Commercial Aviation Accidents
NASA Technical Reports Server (NTRS)
Mumaw, Randall J.
2017-01-01
The role of an alerting system is to make the system operator (e.g., pilot) aware of an impending hazard or unsafe state so the hazard can be avoided or managed successfully. A review of 46 commercial aviation accidents (between 1998 and 2014) revealed that, in the vast majority of events, either the hazard was not alerted or relevant hazard alerting occurred but failed to aid the flight crew sufficiently. For this set of events, alerting system failures were placed in one of five phases: Detection, Understanding, Action Selection, Prioritization, and Execution. This study also reviewed the evolution of alerting system schemes in commercial aviation, which revealed naive assumptions about pilot reliability in monitoring flight path parameters; specifically, pilot monitoring was assumed to be more effective than it actually is. Examples are provided of the types of alerting system failures that have occurred, and recommendations are provided for alerting system improvements.
da Silva, Thiago Ferreira; Xavier, Guilherme B; Temporão, Guilherme P; von der Weid, Jean Pierre
2012-08-13
By employing real-time monitoring of single-photon avalanche photodiodes we demonstrate how two types of practical eavesdropping strategies, the after-gate and time-shift attacks, may be detected. Both attacks are identified with the detectors operating without any special modifications, making this proposal well suited for real-world applications. The monitoring system is based on accumulating statistics of the times between consecutive detection events, and extracting the afterpulse and overall efficiency of the detectors in real-time using mathematical models fit to the measured data. We are able to directly observe changes in the afterpulse probabilities generated from the after-gate and faint after-gate attacks, as well as different timing signatures in the time-shift attack. We also discuss the applicability of our scheme to other general blinding attacks.
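The statistics-based monitoring idea can be sketched as follows: inter-detection times are collected, the long-gap tail is fit as a memoryless (Poissonian) process, and the short-gap excess over that fit is read as the afterpulse contribution. The rates and the 10% afterpulse fraction are synthetic assumptions, not values from the experiment.

```python
# Estimate the afterpulse fraction from inter-detection time statistics.
import numpy as np

rng = np.random.default_rng(0)
gaps = np.concatenate([rng.exponential(100.0, 9000),   # us, Poissonian counts
                       rng.exponential(2.0, 1000)])    # short-gap afterpulses

cut = 20.0                                   # afterpulsing negligible beyond this
tail = gaps[gaps > cut]
rate = 1.0 / (tail.mean() - cut)             # memoryless: mean beyond cut = cut + 1/rate
n_poisson = tail.size / np.exp(-rate * cut)  # extrapolated Poissonian population
ap_fraction = (np.sum(gaps <= cut) - (n_poisson - tail.size)) / gaps.size
print(f"estimated afterpulse fraction: {ap_fraction:.3f}")  # ~0.10 by construction
```

A sudden rise in this estimate during operation would flag attacks, such as the after-gate attack, that alter the detector's afterpulsing signature.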
Towards an integrated defense system for cyber security situation awareness experiment
NASA Astrophysics Data System (ADS)
Zhang, Hanlin; Wei, Sixiao; Ge, Linqiang; Shen, Dan; Yu, Wei; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe
2015-05-01
In this paper, an implemented defense system is demonstrated to carry out cyber security situation awareness. The developed system consists of distributed passive and active network sensors designed to effectively capture suspicious information associated with cyber threats, effective detection schemes to accurately distinguish attacks, and network actors to rapidly mitigate attacks. Based on the collected data from network sensors, image-based and signals-based detection schemes are implemented to detect attacks. To further mitigate attacks, deployed dynamic firewalls on hosts dynamically update detection information reported from the detection schemes and block attacks. The experimental results show the effectiveness of the proposed system. A future plan to design an effective defense system is also discussed based on system theory.
A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks
Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.
2017-01-01
This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results for false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented for different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false alarm probability, demonstrating its usefulness for detecting cross-layer attacks. PMID:28555023
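A combination of per-layer trust values into an overall node trust can be sketched as below, with each layer's trust derived from the deviation of one key parameter from its expected value; the weights, parameters, and decay model are illustrative assumptions, not the paper's metrics.

```python
# Weighted combination of per-layer trust values for one sensor node.
import numpy as np

def layer_trust(observed, expected, scale):
    """Trust decays with the normalized deviation of a layer parameter."""
    return float(np.exp(-abs(observed - expected) / scale))

weights = {"phy": 0.3, "mac": 0.3, "net": 0.4}            # assumed weights
trusts = {
    "phy": layer_trust(observed=-72, expected=-70, scale=5),     # RSSI (dBm)
    "mac": layer_trust(observed=0.12, expected=0.05, scale=0.1), # collision rate
    "net": layer_trust(observed=0.30, expected=0.05, scale=0.1), # drop rate
}
overall = sum(weights[k] * trusts[k] for k in weights)
print(f"overall trust: {overall:.2f}")   # flag the node if below, say, 0.5
```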
NASA Astrophysics Data System (ADS)
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2018-04-01
Appropriate choice of physics options among many physics parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary due to geographical location, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble that comprises 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we evaluate not only temperature, but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No specific ensemble member explicitly showed the best performance for all events, all variables, and all evaluation metrics. This study also found that the choice of planetary boundary layer (PBL) scheme had the largest influence, the radiation scheme a moderate influence, and the microphysics scheme the least influence on temperature simulations. The PBL and microphysics schemes were found to be more sensitive than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of the Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play a significant role, the Noah LSM showed better performance than the CLM4 and Noah-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although our results are invariably region-specific, they will be useful to WRF users investigating heatwave dynamics elsewhere.
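The 27-member ensemble arises as the Cartesian product of three options each for the PBL, radiation, and microphysics schemes. A small sketch enumerating such an ensemble follows; the scheme names in the option pools are placeholders and may differ from the study's actual choices.

```python
from itertools import product

# Hypothetical option pools; the paper's actual scheme choices may differ.
pbl_schemes = ["YSU", "MYJ", "ACM2"]
radiation_schemes = ["RRTMG", "CAM", "Dudhia/RRTM"]
microphysics_schemes = ["WSM6", "Thompson", "Morrison"]

ensemble = list(product(pbl_schemes, radiation_schemes, microphysics_schemes))
assert len(ensemble) == 27  # 3 x 3 x 3 configurations

for member_id, (pbl, rad, mp) in enumerate(ensemble, start=1):
    print(f"member {member_id:02d}: PBL={pbl}, radiation={rad}, microphysics={mp}")
```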
Increasing cancer detection yield of breast MRI using a new CAD scheme of mammograms
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Hollingsworth, Alan B.; Stough, Rebecca G.; Liu, Hong; Zheng, Bin
2016-03-01
Although breast MRI is the most sensitive imaging modality for detecting early breast cancer, its cancer detection yield in breast cancer screening is quite low to date (< 3 to 4%, even for the small group of high-risk women). The purpose of this preliminary study is to test the potential of developing and applying a new computer-aided detection (CAD) scheme for digital mammograms to identify women at high risk of harboring mammography-occult breast cancers, which can be detected by breast MRI. For this purpose, we retrospectively assembled a dataset involving 30 women who had both mammography and breast MRI screening examinations. All mammograms were interpreted as negative, while 5 cancers were detected using breast MRI. We developed a CAD scheme of mammograms, which includes a new risk model based on quantitative mammographic image feature analysis, to stratify women into two groups with high and low risk of harboring mammography-occult cancer. Among the 30 women, 9 were classified into the high-risk group by the CAD scheme, including all 5 women who had cancer detected by breast MRI. All 21 low-risk women remained negative on the breast MRI examinations. The cancer detection yield of breast MRI applied to this dataset substantially increased from 16.7% (5/30) to 55.6% (5/9), while eliminating 84% (21/25) of unnecessary breast MRI screenings. The study demonstrated the potential of applying a new CAD scheme to significantly increase the cancer detection yield of breast MRI while simultaneously reducing the number of negative MRIs in breast cancer screening.
Smart sensor technology for advanced launch vehicles
NASA Astrophysics Data System (ADS)
Schoess, Jeff
1989-07-01
Next-generation advanced launch vehicles will require improved use of sensor data and the management of multisensor resources to achieve automated preflight checkout, prelaunch readiness assessment and vehicle inflight condition monitoring. Smart sensor technology is a key component in meeting these needs. This paper describes the development of a smart sensor-based condition monitoring system concept referred to as the Distributed Sensor Architecture. A significant event and anomaly detection scheme that provides real-time condition assessment and fault diagnosis of advanced launch system rocket engines is described. The design and flight test of a smart autonomous sensor for Space Shuttle structural integrity health monitoring is presented.
NASA Astrophysics Data System (ADS)
de Ruiter, Marleen; Hudson, Paul; de Ruig, Lars; Kuik, Onno; Botzen, Wouter
2017-04-01
This paper provides an analysis of the insurance schemes that cover extreme weather events in twelve different EU countries and the risk reduction incentives offered by these schemes. Economic impacts of extreme weather events in many regions in Europe and elsewhere are on the rise due to climate change and increasing exposure as driven by urban development. In an attempt to manage impacts from extreme weather events, natural disaster insurance schemes can provide incentives for taking measures that limit weather-related risks. Insurance companies can influence public risk management policies and risk-reducing behaviour of policyholders by "rewarding behaviour that reduces risks and potential damages" (Botzen and Van den Bergh, 2008, p. 417). Examples of insurance market systems that directly or indirectly aim to incentivize risk reduction with varying degrees of success are: the U.S. National Flood Insurance Program; the French Catastrophes Naturelles system; and the U.K. Flood Re program, which requires certain levels of protection standards for properties to be insurable. In our analysis, we distinguish between four different disaster types (i.e. coastal and fluvial floods, droughts and storms) and three different sectors (i.e. residential, commercial and agriculture). The selected case studies also provide a wide coverage of different insurance market structures, including public, private and public-private insurance provision, and different methods of coping with extreme loss events, such as re-insurance, governmental aid and catastrophe bonds. The analysis of existing mechanisms for risk reduction incentives provides recommendations about incentivizing adaptive behaviour, in order to assist policy makers and other stakeholders in designing more effective insurance schemes for extreme weather risks.
Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments
Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel
2011-01-01
There are different schemes based on observers to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI) there are different diagnosis schemes based on the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in a single sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output, but this time there is a bank of observers capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of unique faults. This work proposes a new scheme named the Simplified Interval Observer (SIOS-FDI), which does not require the measurement of any input and, with just one output, allows the detection of unique faults in sensors. Because it does not require any input, it simplifies in an important way the diagnosis of faults in processes in which it is difficult to measure all the inputs, as in the case of biological reactors. PMID:22346593
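As a concrete illustration of observer-based FDI, the sketch below runs a single Luenberger-style observer on one measured output and raises an alarm when the output residual exceeds a threshold, mimicking the single-observer SOS idea. The plant matrices, observer gain, and threshold are toy values, not taken from the paper.

```python
import numpy as np

# Toy discrete-time plant x+ = A x + B u, y = C x (values are illustrative)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.2]])                 # observer gain (assumed stabilizing)

def run_observer(u_seq, y_seq, threshold=0.05):
    """Flag the time steps at which the output residual indicates a sensor fault."""
    x_hat = np.zeros((2, 1))
    alarms = []
    for k, (u, y) in enumerate(zip(u_seq, y_seq)):
        r = y - (C @ x_hat).item()           # residual: measured minus estimated
        if abs(r) > threshold:
            alarms.append(k)
        x_hat = A @ x_hat + B * u + L * r    # observer update
    return alarms

# Simulate the plant and inject a 0.2 sensor bias from step 60 onward
x = np.zeros((2, 1)); us, ys = [], []
for k in range(100):
    u = 1.0
    x = A @ x + B * u
    ys.append((C @ x).item() + (0.2 if k >= 60 else 0.0))
    us.append(u)
print(run_observer(us, ys)[:5])              # alarms begin at the fault onset (k = 60)
```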
Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
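The first scheme's minimum-makespan formulation can be approximated with classic list scheduling; the sketch below assigns antennas (jobs) to targets (machines) greedily. The illumination loads are hypothetical, and this is a generic approximation, not the paper's branch-and-bound or enhanced factor-2 algorithm.

```python
import heapq

def greedy_allocate(antenna_loads, n_targets):
    """List-scheduling sketch for the minimum-makespan formulation: treat each
    antenna as a job whose load is its illumination effort, and assign it to
    the currently least-loaded target. Processing jobs in decreasing load
    order (the LPT rule) tightens the classic factor-2 greedy bound."""
    heap = [(0.0, t) for t in range(n_targets)]   # (current load, target)
    heapq.heapify(heap)
    assignment = {}
    for antenna, load in sorted(antenna_loads.items(), key=lambda kv: -kv[1]):
        total, target = heapq.heappop(heap)
        assignment[antenna] = target
        heapq.heappush(heap, (total + load, target))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

# Hypothetical illumination loads for five antennas split over two targets
loads = {"ant0": 3.0, "ant1": 2.5, "ant2": 2.0, "ant3": 1.5, "ant4": 1.0}
assignment, makespan = greedy_allocate(loads, n_targets=2)
print(assignment, makespan)
```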
Frequency analysis of urban runoff quality in an urbanizing catchment of Shenzhen, China
NASA Astrophysics Data System (ADS)
Qin, Huapeng; Tan, Xiaolong; Fu, Guangtao; Zhang, Yingying; Huang, Yuefei
2013-07-01
This paper investigates the frequency distribution of urban runoff quality indicators using a long-term continuous simulation approach and evaluates the impacts of proposed runoff control schemes on runoff quality in an urbanizing catchment in Shenzhen, China. Four different indicators are considered to provide a comprehensive assessment of the potential impacts: total runoff depth, event pollutant load, Event Mean Concentration, and peak concentration during a rainfall event. The results obtained indicate that urban runoff quantity and quality in the catchment vary significantly across rainfall events and show a very high rate of non-compliance with surface water quality regulations. Three runoff control schemes with the capacity to intercept an initial runoff depth of 5 mm, 10 mm, and 15 mm, respectively, are evaluated, and diminishing marginal benefits are found with increasing interception levels in terms of water quality improvement. The effects of seasonal variation in rainfall events are investigated to provide a better understanding of the performance of the runoff control schemes. The pre-flood season has a higher risk of poor water quality than the other seasons after runoff control. This study demonstrates that frequency analysis of urban runoff quantity and quality provides a probabilistic evaluation of pollution control measures, and thus helps frame risk-based decision making for urban runoff quality management in an urbanizing catchment.
Long period gratings in multimode optical fibers: application in chemical sensing
NASA Astrophysics Data System (ADS)
Thomas Lee, S.; Dinesh Kumar, R.; Suresh Kumar, P.; Radhakrishnan, P.; Vallabhan, C. P. G.; Nampoori, V. P. N.
2003-09-01
We propose and demonstrate a new technique for evanescent wave chemical sensing by writing long period gratings in a bare multimode plastic clad silica fiber. The sensing length of the present sensor is only 10 mm, but is as sensitive as a conventional unclad evanescent wave sensor having about 100 mm sensing length. The minimum measurable concentration of the sensor reported here is 10 nmol/l and the operating range is more than 4 orders of magnitude. Moreover, the detection is carried out in two independent detection configurations viz., bright field detection scheme that detects the core-mode power and dark field detection scheme that detects the cladding mode power. The use of such a double detection scheme definitely enhances the reliability and accuracy of the results. Furthermore, the cladding of the present fiber need not be removed as done in conventional evanescent wave fiber sensors.
Physics conditions for robust control of tearing modes in a rotating tokamak plasma
NASA Astrophysics Data System (ADS)
Lazzaro, E.; Borgogno, D.; Brunetti, D.; Comisso, L.; Fevrier, O.; Grasso, D.; Lutjens, H.; Maget, P.; Nowak, S.; Sauter, O.; Sozzi, C.; the EUROfusion MST1 Team
2018-01-01
The disruptive collapse of the current sustained equilibrium of a tokamak is perhaps the single most serious obstacle on the path toward controlled thermonuclear fusion. The current disruption is generally too fast to be identified early enough and tamed efficiently, and may be associated with a variety of initial perturbing events. However, a common feature of all disruptive events is that they proceed through the onset of magnetohydrodynamic instabilities and field reconnection processes developing magnetic islands, which eventually destroy the magnetic configuration. Therefore the avoidance and control of magnetic reconnection instabilities is of foremost importance and great attention is focused on the promising stabilization techniques based on localized rf power absorption and current drive. Here a short review is proposed of the key aspects of high power rf control schemes (specifically electron cyclotron heating and current drive) for tearing modes, considering also some effects of plasma rotation. From first principles physics considerations, new conditions are presented and discussed to achieve control of the tearing perturbations by means of high power (P_EC ≥ P_ohm) in regimes where strong nonlinear instabilities may be driven, such as secondary island structures, which can blur the detection and limit the control of the instabilities. Here we consider recent work that has motivated the search for the improvement of some traditional control strategies, namely the feedback schemes based on strict phase tracking of the propagating magnetic islands.
A Component-Based Approach for Securing Indoor Home Care Applications
Agirre, Aitor; Armentia, Aintzane; Estévez, Elisabet; Marcos, Marga
2017-01-01
eHealth systems have adopted recent advances on sensing technologies together with advances in information and communication technologies (ICT) in order to provide people-centered services that improve the quality of life of an increasingly elderly population. As these eHealth services are founded on the acquisition and processing of sensitive data (e.g., personal details, diagnosis, treatments and medical history), any security threat would damage the public’s confidence in them. This paper proposes a solution for the design and runtime management of indoor eHealth applications with security requirements. The proposal allows applications definition customized to patient particularities, including the early detection of health deterioration and suitable reaction (events) as well as security needs. At runtime, security support is twofold. A secured component-based platform supervises applications execution and provides events management, whilst the security of the communications among application components is also guaranteed. Additionally, the proposed event management scheme adopts the fog computing paradigm to enable local event related data storage and processing, thus saving communication bandwidth when communicating with the cloud. As a proof of concept, this proposal has been validated through the monitoring of the health status in diabetic patients at a nursing home. PMID:29278370
Stochastic modeling of soundtrack for efficient segmentation and indexing of video
NASA Astrophysics Data System (ADS)
Naphade, Milind R.; Huang, Thomas S.
1999-12-01
Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature. This corresponds to the mixing of sounds from different sources. Speech in foreground and music in background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
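A sketch of the maximum-likelihood indexing step in the spirit of the approach above: one Gaussian HMM is trained per audio event class, and each segment is labeled by the model with the highest log-likelihood. It assumes the hmmlearn package and uses random stand-in features in place of real soundtrack frames (e.g., MFCCs).

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_class_models(features_by_class, n_states=3):
    """Fit one Gaussian HMM per audio event class (e.g. speech, music, silence).
    `features_by_class` maps a label to a (n_frames, n_features) array of
    pre-computed frame features such as MFCCs."""
    models = {}
    for label, X in features_by_class.items():
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X)
        models[label] = m
    return models

def label_segment(models, X_segment):
    """Index a segment with the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(X_segment))

# Toy stand-in features (replace with real MFCC frames from the soundtrack)
rng = np.random.default_rng(0)
train = {"speech": rng.normal(0, 1, (200, 13)),
         "music": rng.normal(3, 1, (200, 13)),
         "silence": rng.normal(-3, 0.1, (200, 13))}
models = train_class_models(train)
print(label_segment(models, rng.normal(3, 1, (50, 13))))   # -> "music"
```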
Induction detection of concealed bulk banknotes
NASA Astrophysics Data System (ADS)
Fuller, Christopher; Chen, Antao
2012-06-01
The smuggling of bulk cash across borders is a serious issue that has increased in recent years. In an effort to curb the illegal transport of large numbers of paper bills, a detection scheme has been developed based on the magnetic characteristics of bank notes. The results show that volumes of paper currency can be detected through common concealing materials such as plastics, cardboard, and fabrics, making the scheme a potential addition to border security methods. The scheme holds the potential to detect both the presence and number of concealed bulk notes, while reducing or eliminating false positives caused by nearby metallic materials, by observing the stark difference in received signals caused by metal and currency.
A new detection scheme for ultrafast 2D J-resolved spectroscopy
NASA Astrophysics Data System (ADS)
Giraudeau, Patrick; Akoka, Serge
2007-06-01
Recent ultrafast techniques enable 2D NMR spectra to be obtained in a single scan. A modification of the detection scheme involved in this technique is proposed, permitting the achievement of 2D 1H J-resolved spectra in 500 ms. The detection gradient echoes are substituted by spin echoes to obtain spectra where the coupling constants are encoded along the direct ν2 domain. The use of this new J-resolved detection block after continuous phase-encoding excitation schemes is discussed in terms of resolution and sensitivity. J-resolved spectra obtained on cinnamic acid and 3-ethyl bromopropionate are presented, revealing the expected 2D J-patterns with coupling constants as small as 2 Hz.
Fault Analysis and Detection in Microgrids with High PV Penetration
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham
In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative-sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.
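For reference, the quantity a negative-sequence directional element monitors comes from the Fortescue transform of the three-phase currents; a minimal numpy sketch follows. The example phasors are illustrative.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)   # 120-degree rotation operator

def sequence_components(ia, ib, ic):
    """Fortescue transform: return (zero, positive, negative) sequence phasors."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a**2 * ic) / 3
    i2 = (ia + a**2 * ib + a * ic) / 3
    return i0, i1, i2

# Balanced three-phase currents: the negative-sequence component vanishes
print(abs(sequence_components(1.0, a**2, a)[2]))   # ~0
# Unbalance from a phase-a fault: a negative-sequence component appears,
# which is what a negative-sequence directional element keys on
print(abs(sequence_components(3.0, a**2, a)[2]))   # ~0.67
```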
Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang
2016-01-01
Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI. PMID:26880873
A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.
Do, Nhu Tri; An, Beongku
2015-02-13
In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive a closed-form expression for the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
Li, Congcong; Zhang, Xi; Wang, Haiping; Li, Dongfeng
2018-01-11
Vehicular sensor networks have been widely applied in intelligent traffic systems in recent years. Because of the specificity of vehicular sensor networks, they require an enhanced, secure and efficient authentication scheme. Existing authentication protocols are vulnerable to several problems, such as high computational overhead for certificate distribution and revocation, strong reliance on tamper-proof devices, limited scalability when building many secure channels, and an inability to detect hardware tampering attacks. In this paper, an improved authentication scheme using certificateless public key cryptography is proposed to address these problems. A security analysis of our scheme shows that our protocol provides enhanced secure anonymous authentication, which is resilient against major security threats. Furthermore, the proposed scheme reduces the incidence of node compromise and replication attacks. The scheme also provides a malicious-node detection and warning mechanism, which can quickly identify compromised static nodes and immediately alert the administrative department. Performance evaluations show that the scheme obtains better trade-offs between security and efficiency than the well-known available schemes.
Enhancing Community Detection By Affinity-based Edge Weighting Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Andy; Sanders, Geoffrey; Henson, Van
Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs that offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and therefore, it is ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.
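The abstract does not define the vertex affinity measure, so the sketch below substitutes a simple stand-in (Jaccard similarity of endpoint neighborhoods) to show the overall pipeline: weight the edges by affinity, then run a weighted community detection. It assumes the networkx package; the paper's actual affinity differs.

```python
import networkx as nx

def weight_edges_by_affinity(G):
    """Weight each edge by a stand-in affinity: the Jaccard similarity of its
    endpoints' closed neighborhoods, a rough proxy for how strongly the two
    vertices cluster together. (The paper's affinity measure differs.)"""
    for u, v in G.edges():
        nu, nv = set(G[u]) | {u}, set(G[v]) | {v}
        G[u][v]["weight"] = len(nu & nv) / len(nu | nv)
    return G

G = weight_edges_by_affinity(nx.karate_club_graph())
communities = nx.community.greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```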
Combining image-processing and image compression schemes
NASA Technical Reports Server (NTRS)
Greenspan, H.; Lee, M.-C.
1995-01-01
An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.
NASA Technical Reports Server (NTRS)
Wu, Di; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kennedy, Aaron; Mullendore, Gretchen; Gilmore, Matthew; Tao, Wei-Kuo
2013-01-01
This study investigates the impact of snow, graupel, and hail processes on simulated squall lines over the Southern Great Plains in the United States. The Weather Research and Forecasting (WRF) model is used to simulate two squall line events in Oklahoma during May 2007, and the simulations are validated against radar and surface observations. Several microphysics schemes are tested in this study, including the WRF 5-Class Microphysics (WSM5), WRF 6-Class Microphysics (WSM6), Goddard Cumulus Ensemble (GCE) Three Ice (3-ice) with graupel, Goddard Two Ice (2-ice), and Goddard 3-ice hail schemes. Simulated surface precipitation is sensitive to the microphysics scheme when the graupel or hail categories are included. All of the 3-ice schemes overestimate the total precipitation with WSM6 having the largest bias. The 2-ice schemes, without a graupel/hail category, produce less total precipitation than the 3-ice schemes. By applying a radar-based convective/stratiform partitioning algorithm, we find that including graupel/hail processes increases the convective areal coverage, precipitation intensity, updraft, and downdraft intensities, and reduces the stratiform areal coverage and precipitation intensity. For vertical structures, simulations have higher reflectivity values distributed aloft than the observed values in both the convective and stratiform regions. Three-ice schemes produce more high reflectivity values in convective regions, while 2-ice schemes produce more high reflectivity values in stratiform regions. In addition, this study has demonstrated that the radar-based convective/stratiform partitioning algorithm can reasonably identify WRF-simulated precipitation, wind, and microphysical fields in both convective and stratiform regions.
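The radar-based convective/stratiform partitioning can be sketched in the spirit of Steiner-type algorithms: a grid point is labeled convective if its reflectivity exceeds an absolute threshold or stands out from the local background. The thresholds and window radius below are assumptions, not the values used in the study.

```python
import numpy as np

def partition(refl_dbz, intensity_thresh=40.0, peakedness_thresh=5.0, radius=5):
    """Label grid points convective (True) or stratiform (False) from a 2-D
    reflectivity field: a point is convective if it exceeds an absolute
    threshold or rises well above the mean background reflectivity around it.
    Parameter values are illustrative."""
    ny, nx = refl_dbz.shape
    convective = np.zeros_like(refl_dbz, dtype=bool)
    for j in range(ny):
        for i in range(nx):
            window = refl_dbz[max(0, j - radius):j + radius + 1,
                              max(0, i - radius):i + radius + 1]
            background = np.nanmean(window)
            z = refl_dbz[j, i]
            if z >= intensity_thresh or z - background >= peakedness_thresh:
                convective[j, i] = True
    return convective

field = np.full((40, 40), 25.0)         # stratiform background
field[18:22, 18:22] = 48.0              # embedded convective core
mask = partition(field)
print(mask.sum(), "convective points")  # the embedded core is flagged
```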
Photoinitiator Nucleotide for Quantifying Nucleic Acid Hybridization
Johnson, Leah M.; Hansen, Ryan R.; Urban, Milan; Kuchta, Robert D.; Bowman, Christopher N.
2010-01-01
This first report of a photoinitiator-nucleotide conjugate demonstrates a novel approach for sensitive, rapid and visual detection of DNA hybridization events. This approach holds potential for various DNA labeling schemes and for applications benefiting from selective DNA-based polymerization initiators. Here, we demonstrate covalent, enzymatic incorporation of an eosin-photoinitiator 2′-deoxyuridine-5′-triphosphate (EITC-dUTP) conjugate into surface-immobilized DNA hybrids. Subsequent radical chain photoinitiation from these sites using an acrylamide/bis-acrylamide formulation yields a dynamic detection range between 500 pM and 50 nM of DNA target. Increasing EITC-nucleotide surface densities leads to an increase in surface-based polymer film heights until achieving a film height plateau of 280 nm ± 20 nm at 610 ± 70 EITC-nucleotides/μm². Film heights of 10–20 nm were obtained from eosin surface densities of approximately 20 EITC-nucleotides/μm², while below the detection limit of ~10 EITC-nucleotides/μm² no detectable films were formed. This unique threshold behavior is utilized for instrument-free, visual quantification of target DNA concentration ranges. PMID:20337438
Real-Time Detection of Staphylococcus Aureus Using Whispering Gallery Mode Optical Microdisks
Ghali, Hala; Chibli, Hicham; Nadeau, Jay L.; Bianucci, Pablo; Peter, Yves-Alain
2016-01-01
Whispering Gallery Mode (WGM) microresonators have recently been studied as a means to achieve real-time label-free detection of biological targets such as virus particles, specific DNA sequences, or proteins. Due to their high quality (Q) factors, WGM resonators can be highly sensitive. A biosensor also needs to be selective, requiring proper functionalization of its surface with the appropriate ligand that will attach the biomolecule of interest. In this paper, WGM microdisks are used as biosensors for detection of Staphylococcus aureus. The microdisks are functionalized with LysK, a phage protein specific for staphylococci at the genus level. A binding event on the surface shifts the resonance peak of the microdisk resonator towards longer wavelengths. This reactive shift can be used to estimate the surface density of bacteria that bind to the surface of the resonator. The limit of detection of a microdisk with a Q-factor around 10^4 is on the order of 5 pg/mL, corresponding to 20 cells. No binding of Escherichia coli to the resonators is seen, supporting the specificity of the functionalization scheme. PMID:27153099
Moradi, Saber; Qiao, Ning; Stefanini, Fabio; Indiveri, Giacomo
2018-02-01
Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.
Macedo, Gleicy A.; Gonin, Michelle Luiza C.; Pone, Sheila M.; Cruz, Oswaldo G.; Nobre, Flávio F.; Brasil, Patrícia
2014-01-01
Background The clinical definition of severe dengue fever remains a challenge for researchers in hyperendemic areas like Brazil. The ability of the traditional (1997) as well as the revised (2009) World Health Organization (WHO) dengue case classification schemes to detect severe dengue cases was evaluated in 267 children admitted to hospital with laboratory-confirmed dengue. Principal Findings Using the traditional scheme, 28.5% of patients could not be assigned to any category, while the revised scheme categorized all patients. Intensive therapeutic interventions were used as the reference standard to evaluate the ability of both the traditional and revised schemes to detect severe dengue cases. Analyses of the classified cases (n = 183) demonstrated that the revised scheme had better sensitivity (86.8%, P<0.001), while the traditional scheme had better specificity (93.4%, P<0.001) for the detection of severe forms of dengue. Conclusions/Significance This improved sensitivity of the revised scheme allows for better case capture and increased ICU admission, which may aid pediatricians in avoiding deaths due to severe dengue among children, but, in turn, it may also result in the misclassification of the patients' condition as severe, reflected in the observed lower positive predictive value (61.6%, P<0.001) when compared with the traditional scheme (82.6%, P<0.001). The inclusion of unusual dengue manifestations in the revised scheme has not shifted the emphasis from the most important aspects of dengue disease and the major factors contributing to fatality in this study: shock with consequent organ dysfunction. PMID:24777054
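To make the reported metrics concrete, the sketch below computes sensitivity, specificity, and positive predictive value from a 2x2 confusion table. The counts are hypothetical, chosen only so the behaviour matches the reported pattern of the revised scheme (high sensitivity, lower PPV); the paper reports percentages, not the underlying table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a 2x2 table."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}

# Hypothetical counts summing to the 183 classified cases
print(diagnostic_metrics(tp=33, fp=21, fn=5, tn=124))
# -> sensitivity ~0.87, specificity ~0.86, ppv ~0.61
```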
Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme
NASA Astrophysics Data System (ADS)
Hsin, Cheng-Ho; Inigo, Rafael M.
1990-03-01
The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed. This is derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L . This scheme basically has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided. Therefore the error in gradients measurement is reduced significantly. The third advantage is that during the processing of motion detection and estimation algorithm, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a great range of velocity can be detected. The scheme has been tested for both synthetic and real images. The results of simulations are very satisfactory.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
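A minimal stop-and-wait ARQ sketch of the idea above: the sender appends a CRC-32 for error detection and retransmits until the receiver's check passes. The byte-flip channel model and retry limit are illustrative.

```python
import random
import zlib

def send_with_arq(payload: bytes, p_corrupt=0.3, max_tries=10):
    """Stop-and-wait ARQ sketch: attach a CRC-32 for error detection and
    retransmit until the receiver's CRC check passes (ACK) or tries run out."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    for attempt in range(1, max_tries + 1):
        received = bytearray(frame)
        if random.random() < p_corrupt:          # channel flips one byte
            i = random.randrange(len(received))
            received[i] ^= 0xFF
        data, crc = bytes(received[:-4]), int.from_bytes(received[-4:], "big")
        if zlib.crc32(data) == crc:              # receiver: CRC passes -> ACK
            return data, attempt
        # CRC fails -> NAK, sender retransmits
    raise RuntimeError("retry limit exceeded")

data, tries = send_with_arq(b"telemetry block")
print(f"delivered after {tries} attempt(s)")
```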
Experimental quantum-cryptography scheme based on orthogonal states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avella, Alessio; Brida, Giorgio; Degiovanni, Ivo Pietro
2010-12-15
Since, in general, nonorthogonal states cannot be cloned, any eavesdropping attempt in a quantum-communication scheme using nonorthogonal states as carriers of information introduces some errors in the transmission, leading to the possibility of detecting the spy. Usually, orthogonal states are not used in quantum-cryptography schemes since they can be faithfully cloned without altering the transmitted data. Nevertheless, L. Goldberg and L. Vaidman [Phys. Rev. Lett. 75, 1239 (1995)] proposed a protocol in which, even if the data exchange is realized using two orthogonal states, any attempt to eavesdrop is detectable by the legal users. In this scheme the orthogonal states are superpositions of two localized wave packets traveling along separate channels. Here we present an experiment realizing this scheme.
Privacy-Aware Image Encryption Based on Logistic Map and Data Hiding
NASA Astrophysics Data System (ADS)
Sun, Jianglin; Liao, Xiaofeng; Chen, Xin; Guo, Shangwei
The increasing need for image communication and storage has created a great necessity for securely transforming and storing images over a network. Whereas traditional image encryption algorithms usually consider the security of the whole plain image, region of interest (ROI) encryption schemes, which are of great importance in practical applications, protect the privacy regions of plain images. Existing ROI encryption schemes usually adopt approximate techniques to detect the privacy region and measure the quality of encrypted images; however, their performance is usually inconsistent with a human visual system (HVS) and is sensitive to statistical attacks. In this paper, we propose a novel privacy-aware ROI image encryption (PRIE) scheme based on logistic mapping and data hiding. The proposed scheme utilizes salient object detection to automatically, adaptively and accurately detect the privacy region of a given plain image. After private pixels have been encrypted using chaotic cryptography, the significant bits are embedded into the nonprivacy region of the plain image using data hiding. Extensive experiments are conducted to illustrate the consistency between our automatic ROI detection and HVS. Our experimental results also demonstrate that the proposed scheme exhibits satisfactory security performance.
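The chaotic-keystream step can be sketched as follows: iterate the logistic map x <- r*x*(1-x), quantize each iterate to a key byte, and XOR the keystream over the pixels of the privacy region. Here the ROI is a given bounding box; the salient-object detection and data-hiding steps of the proposed scheme are outside this sketch, and the map parameters are illustrative.

```python
import numpy as np

def logistic_keystream(n, x0=0.6180339887, r=3.9999):
    """Iterate x <- r*x*(1-x) and quantize each iterate to one key byte."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_roi(image, box, x0=0.6180339887):
    """Encrypt (or, applied twice, decrypt) a privacy region in place via XOR
    with a logistic-map keystream. `box` = (top, left, height, width);
    detecting the box with a saliency model is outside this sketch."""
    t, l, h, w = box
    roi = image[t:t + h, l:l + w]
    ks = logistic_keystream(roi.size, x0).reshape(roi.shape)
    image[t:t + h, l:l + w] = roi ^ ks
    return image

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc = xor_roi(img.copy(), (2, 2, 4, 4))
dec = xor_roi(enc.copy(), (2, 2, 4, 4))
assert np.array_equal(dec, img)   # the XOR keystream is its own inverse
```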
Symbol Synchronization for Diffusion-Based Molecular Communications.
Jamali, Vahid; Ahmadzadeh, Arman; Schober, Robert
2017-12-01
Symbol synchronization refers to the estimation of the start of a symbol interval and is needed for reliable detection. In this paper, we develop several symbol synchronization schemes for molecular communication (MC) systems where we consider some practical challenges, which have not been addressed in the literature yet. In particular, we take into account that in MC systems, the transmitter may not be equipped with an internal clock and may not be able to emit molecules with a fixed release frequency. Such restrictions hold for practical nanotransmitters, e.g., modified cells, where the lengths of the symbol intervals may vary due to the inherent randomness in the availability of food and energy for molecule generation, the process for molecule production, and the release process. To address this issue, we develop two synchronization-detection frameworks which both employ two types of molecule. In the first framework, one type of molecule is used for symbol synchronization and the other one is used for data detection, whereas in the second framework, both types of molecule are used for joint symbol synchronization and data detection. For both frameworks, we first derive the optimal maximum likelihood (ML) symbol synchronization schemes as performance upper bounds. Since ML synchronization entails high complexity, for each framework, we also propose three low-complexity suboptimal schemes, namely a linear filter-based scheme, a peak observation-based scheme, and a threshold-trigger scheme, which are suitable for MC systems with limited computational capabilities. Furthermore, we study the relative complexity and the constraints associated with the proposed schemes and the impact of the insertion and deletion errors that arise due to imperfect synchronization. Our simulation results reveal the effectiveness of the proposed synchronization schemes and suggest that the end-to-end performance of MC systems significantly depends on the accuracy of the symbol synchronization.
Testing seismic amplitude source location for fast debris-flow detection at Illgraben, Switzerland
NASA Astrophysics Data System (ADS)
Walter, Fabian; Burtin, Arnaud; McArdell, Brian W.; Hovius, Niels; Weder, Bianca; Turowski, Jens M.
2017-06-01
Heavy precipitation can mobilize tens to hundreds of thousands of cubic meters of sediment in steep Alpine torrents in a short time. The resulting debris flows (mixtures of water, sediment and boulders) move downstream with velocities of several meters per second and have a high destruction potential. Warning protocols for affected communities rely on raising awareness about the debris-flow threat, precipitation monitoring and rapid detection methods. The latter, in particular, is a challenge because debris-flow-prone torrents have their catchments in steep and inaccessible terrain, where instrumentation is difficult to install and maintain. Here we test amplitude source location (ASL) as a processing scheme for seismic network data for early warning purposes. We use debris-flow and noise seismograms from the Illgraben catchment, Switzerland, a torrent system which produces several debris-flow events per year. Automatic in situ detection is currently based on geophones mounted on concrete check dams and radar stage sensors suspended above the channel. The ASL approach has the advantage that it uses seismometers, which can be installed at more accessible locations where a stable connection to mobile phone networks is available for data communication. Our ASL processing uses time-averaged ground vibration amplitudes to estimate the location of the debris-flow front. Applied to continuous data streams, inversion of the seismic amplitude decay throughout the network is robust and efficient, requires no manual identification of seismic phase arrivals and eliminates the need for a local seismic velocity model. We apply the ASL technique to a small debris-flow event on 19 July 2011, which was captured with a temporary seismic monitoring network. The processing rapidly detects the debris-flow event half an hour before arrival at the outlet of the torrent and several minutes before detection by the in situ alarm system. An analysis of continuous seismic records furthermore indicates that detectability of Illgraben debris flows of this size is unaffected by changing environmental and anthropogenic seismic noise and that false detections can be greatly reduced with simple processing steps.
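A minimal sketch of the amplitude-decay inversion described above: model each station amplitude as A_i = A0 * exp(-b*r_i) / r_i, fit log(A0) in closed form for every trial source on a grid, and keep the node with the smallest misfit. The decay parameter, network geometry, and amplitudes are synthetic, not Illgraben values.

```python
import numpy as np

def asl_locate(stations, amplitudes, grid, b=0.02):
    """Grid-search ASL: predict station amplitudes as A_i = A0*exp(-b*r_i)/r_i
    and return the grid node with the smallest log-amplitude misfit."""
    log_obs = np.log(amplitudes)
    best, best_misfit = None, np.inf
    for src in grid:
        r = np.maximum(np.linalg.norm(stations - src, axis=1), 1.0)  # avoid log(0)
        pred = -b * r - np.log(r)              # log amplitude up to log(A0)
        a0 = np.mean(log_obs - pred)           # least-squares log(A0)
        misfit = np.sum((log_obs - (pred + a0)) ** 2)
        if misfit < best_misfit:
            best, best_misfit = src, misfit
    return best

stations = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
true_src = np.array([300.0, 700.0])
r = np.linalg.norm(stations - true_src, axis=1)
amps = 5e4 * np.exp(-0.02 * r) / r             # synthetic observations
grid = np.array([[x, y] for x in range(0, 1001, 50) for y in range(0, 1001, 50)])
print(asl_locate(stations, amps, grid))        # -> [300 700]
```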
Qubit-loss-free fusion of atomic W states via photonic detection
NASA Astrophysics Data System (ADS)
Ding, Cheng-Yun; Kong, Fan-Zhen; Yang, Qing; Yang, Ming; Cao, Zhuo-Liang
2018-06-01
In this paper, we propose two new qubit-loss-free (QLF) fusion schemes for W states in a cavity QED system. Resonant interactions between atoms and a single cavity mode constitute the main fusion mechanism, with which atomic |W_{n+m}> and |W_{n+m+q}> states can be generated, respectively, from a |W_n> and a |W_m>, and from a |W_n>, a |W_m> and a |W_q>, by detecting the cavity mode. The QLF property of the schemes makes them more efficient and simpler than the currently existing ones, and fewer intermediate steps and memory resources are required for generating a target large-scale W state. Furthermore, the fusion of atomic states can be realized via detection on the cavity mode rather than the much more complicated atomic detection, which makes our schemes feasible. In addition, the analyses of the optimal resource cost and the experimental feasibility indicate that the present schemes are simple and efficient, and may be implementable within current experimental techniques.
Index files for Belle II - very small skim containers
NASA Astrophysics Data System (ADS)
Sevior, Martin; Bloomfield, Tristan; Kuhr, Thomas; Ueda, I.; Miyake, H.; Hara, T.
2017-10-01
The Belle II experiment[1] employs the root file format[2] for recording data and is investigating the use of “index-files” to reduce the size of data skims. These files contain pointers to the location of interesting events within the total Belle II data set and reduce the size of data skims by 2 orders of magnitude. We implement this scheme on the Belle II grid by recording the parent file metadata and the event location within the parent file. While the scheme works, it is substantially slower than a normal sequential read of standard skim files using default root file parameters. We investigate the performance of the scheme by adjusting the “splitLevel” and “autoflushsize” parameters of the root files in the parent data files.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
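A sketch of such a design loop under stated assumptions: fit a Gaussian process surrogate to a toy limit-state function, estimate the failure probability by thresholding the surrogate mean over a candidate set, and pick the next sample with a U-type criterion (small |mean|/std, i.e., uncertain and near the failure boundary), as in AK-MCS-style designs. It uses scikit-learn; the limit state and kernel are illustrative, and the paper's actual design criterion may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                                  # toy stand-in for the expensive model
    return np.sin(3 * x).ravel() + 0.5     # "failure" wherever g(x) < 0

rng = np.random.default_rng(1)
X = rng.uniform(0, 4, (6, 1))              # small initial design
candidates = np.linspace(0, 4, 400).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
    gp.fit(X, g(X))
    mu, sd = gp.predict(candidates, return_std=True)
    p_fail = np.mean(mu < 0)               # failure estimate on the cheap surrogate
    u = np.abs(mu) / np.maximum(sd, 1e-12) # U-criterion: uncertain and near g = 0
    X = np.vstack([X, candidates[np.argmin(u)]])  # "run" the model there next

print(f"estimated failure probability: {p_fail:.3f}")
```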
Passive Wireless Vibration Sensing for Measuring Aerospace Structural Flutter
NASA Technical Reports Server (NTRS)
Wilson, William C.; Moore, Jason P.
2017-01-01
To reduce energy consumption, emissions, and noise, NASA is exploring the use of high aspect ratio wings on subsonic aircraft. Because high aspect ratio wings are susceptible to flutter events, NASA is also investigating methods of flutter detection and suppression. In support of that work a new remote, non-contact method for measuring flutter-induced vibrations has been developed. The new sensing scheme utilizes a microwave reflectometer to monitor the reflected response from an aeroelastic structure to ultimately characterize structural vibrations. To demonstrate the ability of microwaves to detect flutter vibrations, a carbon fiber-reinforced polymer (CFRP) composite panel was vibrated at various frequencies from 1 Hz to 130 Hz. The reflectometer response was found to closely resemble the sinusoidal response as measured with an accelerometer up to 100 Hz. The data presented demonstrate that microwaves can be used to measure flutter-induced aircraft vibrations.
Multiple Embedded Processors for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy
2005-01-01
A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
Using the Moon As A Low-Noise Seismic Detector For Strange Quark Nuggets
NASA Technical Reports Server (NTRS)
Banerdt, W. Bruce; Chui, Talso; Griggs, Cornelius E.; Herrin, Eugene T.; Nakamura, Yosio; Paik, Ho Jung; Penanen, Konstantin; Rosenbaum, Doris; Teplitz, Vigdor L.; Young, Joseph
2006-01-01
Strange quark matter made of up, down and strange quarks has been postulated by Witten [1]. Strange quark matter would be nearly charge neutral and would have the density of nuclear matter (10^14 g/cm^3). Witten also suggested that nuggets of strange quark matter, or strange quark nuggets (SQNs), could have formed shortly after the Big Bang, and that they would be viable candidates for cold dark matter. As suggested by de Rujula and Glashow [2], an SQN may pass through a celestial body releasing detectable seismic energy along a straight line. The Moon, being much quieter seismically than the Earth, would be a favorable place to search for such events. We review previous searches for SQNs to illustrate the parameter space explored by using the Moon as a low-noise detector of SQNs. We also discuss possible detection schemes using a single seismometer, and using an International Lunar Seismic Network.
NASA Astrophysics Data System (ADS)
Schaefer, Semjon; Gregory, Mark; Rosenkranz, Werner
2016-11-01
We present simulative and experimental investigations of different coherent receiver designs for high-speed optical intersatellite links. We focus on frequency offset (FO) compensation in homodyne and intradyne detection systems. The considered laser communication terminal uses an optical phase-locked loop (OPLL), which ensures stable homodyne detection. However, the hardware complexity increases with the modulation order. Therefore, we show that software-based intradyne detection is an attractive alternative to OPLL-based homodyne systems. Our approach is based on digital FO and phase noise compensation, in order to achieve a more flexible coherent detection scheme. Analytic results further show the theoretical impact of the different detection schemes on the receiver sensitivity. Finally, we compare the schemes in terms of bit error ratio measurements and optimal receiver design.
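A common DSP approach to blind FO estimation for QPSK intradyne reception (an assumption here; the paper's exact algorithm is not specified in the abstract) raises the signal to the 4th power to strip the modulation and then locates the spectral peak at four times the offset:

```python
# Hedged sketch of 4th-power FO estimation and compensation for QPSK.
import numpy as np

fs, n = 1e9, 2 ** 14                          # sample rate and length (assumed)
rng = np.random.default_rng(0)
sym = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))   # QPSK symbols
fo = 25e6                                     # true frequency offset, Hz
t = np.arange(n) / fs
rx = sym * np.exp(2j * np.pi * fo * t)        # received, offset-rotated signal

spec = np.abs(np.fft.fft(rx ** 4))            # 4th power removes QPSK data
freqs = np.fft.fftfreq(n, 1 / fs)
fo_hat = freqs[np.argmax(spec)] / 4.0         # spectral peak sits at 4*FO
rx_comp = rx * np.exp(-2j * np.pi * fo_hat * t)   # compensated signal
print(f"estimated FO = {fo_hat / 1e6:.1f} MHz")
```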
Shaikh, Riaz Ahmed; Jameel, Hassan; d’Auriol, Brian J.; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae
2009-01-01
Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim is generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm. PMID:22454568
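As a rough illustration of the validation idea (our simplification, not the authors' algorithm; the threshold and trust values are invented), a claim against a node can be accepted only when the trust-weighted majority of reporters corroborates it:

```python
# Minimal sketch: trust-weighted validation of an intrusion claim.
def validate_claim(reports, trust, threshold=0.5):
    """reports: {node_id: True/False claim about the accused node}
       trust:   {node_id: trust value in [0, 1]}"""
    weight = sum(trust[n] for n, says_bad in reports.items() if says_bad)
    total = sum(trust[n] for n in reports)
    return total > 0 and weight / total >= threshold

reports = {"s1": True, "s2": True, "s3": False}
trust = {"s1": 0.9, "s2": 0.4, "s3": 0.8}
print(validate_claim(reports, trust))   # True: weighted majority agrees
```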
An Automatic Detection System of Lung Nodule Based on Multi-Group Patch-Based Deep Learning Network.
Jiang, Hongyang; Ma, He; Qian, Wei; Gao, Mengdi; Li, Yan
2017-07-14
High-efficiency lung nodule detection contributes dramatically to the risk assessment of lung cancer. Quickly locating the exact positions of lung nodules is a significant and challenging task, and extensive work has been done in this domain for approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming, since they may require additional image processing modules, such as computed tomography (CT) image transformation, lung nodule segmentation and feature extraction, to construct a whole CADe system. It is difficult for those schemes to process and analyze the enormous volume of data as the number of medical images continues to increase. Besides, some state-of-the-art deep learning schemes impose strict requirements on the underlying database. This study proposes an effective lung nodule detection scheme based on multi-group patches cut out from the lung images, which are enhanced by the Frangi filter. By combining two groups of images, a four-channel convolutional neural network (CNN) model is designed to learn the knowledge of radiologists for detecting nodules of four levels. This CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multi-group patch-based learning system is efficient in improving the performance of lung nodule detection and greatly reduces false positives under a huge amount of image data.
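A minimal PyTorch sketch of the four-channel idea follows; the layer sizes, patch size and channel assignment are our assumptions, not the paper's architecture. Two groups of enhanced patches are stacked channel-wise and classified into one of four nodule levels:

```python
# Hedged sketch: a small four-input-channel CNN for patch classification.
import torch
import torch.nn as nn

class FourChannelCNN(nn.Module):
    def __init__(self, num_levels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_levels)

    def forward(self, x):                    # x: (batch, 4, 32, 32) patches
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FourChannelCNN()
logits = model(torch.randn(2, 4, 32, 32))    # two dummy 32x32 patch stacks
print(logits.shape)                          # torch.Size([2, 4])
```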
Deep ensemble learning of virtual endoluminal views for polyp detection in CT colonography
NASA Astrophysics Data System (ADS)
Umehara, Kensuke; Näppi, Janne J.; Hironaka, Toru; Regge, Daniele; Ishida, Takayuki; Yoshida, Hiroyuki
2017-03-01
Robust training of a deep convolutional neural network (DCNN) requires a very large number of annotated datasets that are currently not available in CT colonography (CTC). We previously demonstrated that deep transfer learning provides an effective approach for robust application of a DCNN in CTC. However, at high detection accuracy, the differentiation of small polyps from non-polyps was still challenging. In this study, we developed and evaluated a deep ensemble learning (DEL) scheme for the review of virtual endoluminal (VE) images to improve the performance of computer-aided detection (CADe) of polyps in CTC. Nine different types of image renderings were generated from the VE images of polyp candidates detected by a conventional CADe system. Eleven DCNNs that represented three types of publicly available pre-trained DCNN models were re-trained by transfer learning to identify polyps from the VE images. A DEL scheme that determines the final detected polyps by a review of the nine types of VE images was developed by combining the DCNNs using a random forest classifier as a meta-classifier. For evaluation, we sampled 154 CTC cases from a large CTC screening trial and divided the cases randomly into a training dataset and a test dataset. At 3.9 false-positive (FP) detections per patient on average, the detection sensitivities of the conventional CADe system, the highest-performing single DCNN, and the DEL scheme were 81.3%, 90.7%, and 93.5%, respectively, for polyps ≥6 mm in size. For small polyps, the DEL scheme reduced the number of false positives by up to 83% compared with using a single DCNN alone. These preliminary results indicate that the DEL scheme provides an effective approach for improving the polyp detection performance of CADe in CTC, especially for small polyps.
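The meta-classification step can be sketched as a standard stacking ensemble (toy data below; in the real pipeline the inputs would be the eleven DCNNs' per-candidate scores across the nine rendering types):

```python
# Hedged stacking sketch: random forest meta-classifier over base scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_candidates, n_dcnn_scores = 500, 11
scores = rng.random((n_candidates, n_dcnn_scores))   # stand-in DCNN outputs
labels = scores.mean(axis=1) + 0.1 * rng.standard_normal(n_candidates) > 0.5

meta = RandomForestClassifier(n_estimators=200, random_state=0)
meta.fit(scores[:400], labels[:400])                 # train meta-classifier
prob_polyp = meta.predict_proba(scores[400:])[:, 1]  # ensemble polyp score
print(prob_polyp[:5])
```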
A robust trust establishment scheme for wireless sensor networks.
Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob
2015-03-23
Security techniques like cryptography and authentication can fail to protect a network once a node is compromised. Hence, trust establishment continuously monitors and evaluates node behavior to detect malicious and compromised nodes. However, just like other security schemes, trust establishment is also vulnerable to attack. Moreover, malicious nodes might misbehave intelligently to trick trust establishment schemes. Unfortunately, attack-resistance and robustness issues with trust establishment schemes have not received much attention from the research community. Considering the vulnerability of trust establishment to different attacks and the unique features of sensor nodes in wireless sensor networks, we propose a lightweight and robust trust establishment scheme. The proposed trust scheme is lightweight thanks to a simple trust estimation method. The comprehensiveness and flexibility of the proposed trust estimation scheme make it robust against different types of attack and misbehavior. Performance evaluation under different types of misbehavior and on-off attacks shows that the detection rate of the proposed trust mechanism is higher and more stable compared to other trust mechanisms.
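As a rough illustration of how a lightweight, attack-resistant trust update might look (the asymmetric reward/penalty constants below are purely our assumptions; the paper's estimation formula is not reproduced here), misbehavior can be penalized more strongly than good behavior is rewarded, so an on-off attacker loses trust faster than it can be rebuilt:

```python
# Hedged sketch: asymmetric trust update that dampens on-off attacks.
def update_trust(trust, behaved_well, reward=0.05, penalty=0.20):
    delta = reward if behaved_well else -penalty
    return min(1.0, max(0.0, trust + delta))

t = 0.5
for good in [True, True, False, True, False]:   # on-off style behavior
    t = update_trust(t, good)
print(f"trust after on-off pattern: {t:.2f}")   # 0.25: attack not rewarded
```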
Quantum cryptography using single-particle entanglement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jae-Weon; Lee, Eok Kyun; Chung, Yong Wook
2003-07-01
A quantum cryptography scheme based on entanglement between a single-particle state and a vacuum state is proposed. The scheme utilizes linear optics devices to detect the superposition of the vacuum and single-particle states. Existence of an eavesdropper can be detected by using a variant of Bell's inequality.
Li, Congcong; Zhang, Xi; Wang, Haiping; Li, Dongfeng
2018-01-01
Vehicular sensor networks have been widely applied in intelligent traffic systems in recent years. Because of their specific characteristics, vehicular sensor networks require an enhanced, secure and efficient authentication scheme. Existing authentication protocols suffer from several problems, such as high computational overhead for certificate distribution and revocation, strong reliance on tamper-proof devices, limited scalability when building many secure channels, and an inability to detect hardware tampering attacks. In this paper, an improved authentication scheme using certificateless public key cryptography is proposed to address these problems. A security analysis of our scheme shows that our protocol provides enhanced secure anonymous authentication, which is resilient against major security threats. Furthermore, the proposed scheme reduces the incidence of node compromise and replication attacks. The scheme also provides a malicious-node detection and warning mechanism, which can quickly identify compromised static nodes and immediately alert the administrative department. Performance evaluations show that the scheme obtains better trade-offs between security and efficiency than the well-known available schemes. PMID:29324719
NASA Astrophysics Data System (ADS)
Bustamante, J. F. F.; Chou, S. C.; Gomes, J. L.
2009-04-01
Southeast Brazil, in the coastal and mountainous region called Serra do Mar between Sao Paulo and Rio de Janeiro, is subject to frequent landslides and floods. The Eta Model has been producing good-quality forecasts over South America at about 40-km horizontal resolution. For these types of hazards, however, more detailed and probabilistic information on the risks should be provided with the forecasts. Thus, a short-range ensemble prediction system (SREPS) based on the Eta Model is being constructed. Ensemble members derived from perturbed initial and lateral boundary conditions did not provide enough spread for the forecasts, so members with model physics perturbations are being included and tested. The objective of this work is to construct more members for the Eta SREPS by adding physics-perturbed members. The Eta Model is configured at 10-km resolution with 38 layers in the vertical. The domain covers most of Southeast Brazil, centered over the Serra do Mar region. The constructed members comprise variations of the Betts-Miller-Janjic (BMJ) and Kain-Fritsch (KF) cumulus parameterization schemes. Three members were constructed from the BMJ scheme by varying the saturation pressure deficit profile over land and sea, and two members used the KF scheme: the standard KF and a version with a momentum flux added. One of the BMJ runs is the control run, as it was used for the initial-condition-perturbation SREPS. The forecasts were tested on 6 cases of South Atlantic Convergence Zone (SACZ) events. The SACZ is a common summer-season feature of the Southern Hemisphere that causes persistent rain for a few days over Southeast Brazil and frequently organizes over the Serra do Mar region. These events are particularly interesting because the persistent rains can accumulate large amounts and cause widespread landslides and deaths. With respect to precipitation, the KF scheme versions were shown to be able to reach the larger precipitation peaks of the events. On the other hand, for predicted 850-hPa temperature, the KF versions produce a positive bias and the BMJ versions a negative bias; therefore, the ensemble-mean forecast of 850-hPa temperature from this SREPS exhibits smaller error than the control member. Specific humidity shows a smaller bias in the KF scheme. In general, the ensemble mean produced forecasts closer to the observations than the control run.
Statistical process control based chart for information systems security
NASA Astrophysics Data System (ADS)
Khan, Mansoor S.; Cui, Lirong
2015-07-01
Intrusion detection systems play a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions. We put forward the concept of statistical process control (SPC) for monitoring intrusions in computer networks and information systems. In this article we propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from past versions. We construct the control limits for the proposed scheme and investigate their effectiveness. We provide an industrial example for the sake of clarity for practitioners. We compare the proposed scheme with existing EWMA schemes and the p chart, and finally provide some recommendations for future work.
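For concreteness, here is the textbook EWMA chart applied to a stream of, say, per-interval alert counts (this is the standard two-parameter chart, not the article's single-parameter variant; the smoothing constant, limit width and data are illustrative):

```python
# Standard EWMA monitoring sketch with steady-state control limits.
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    mu0, sigma = np.mean(x[:50]), np.std(x[:50])   # in-control estimates
    z, alarms = mu0, []
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z               # EWMA statistic
        half_width = L * sigma * np.sqrt(lam / (2 - lam))  # limit half-width
        if abs(z - mu0) > half_width:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(2)
stream = np.concatenate([rng.normal(10, 2, 100), rng.normal(13, 2, 20)])
print(ewma_chart(stream))   # indices flagged after the mean shift at t=100
```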
Experimental quantum-cryptography scheme based on orthogonal states
NASA Astrophysics Data System (ADS)
Avella, Alessio; Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco; Gramegna, Marco; Traina, Paolo
2010-12-01
Since, in general, nonorthogonal states cannot be cloned, any eavesdropping attempt in a quantum-communication scheme using nonorthogonal states as carriers of information introduces some errors in the transmission, leading to the possibility of detecting the spy. Usually, orthogonal states are not used in quantum-cryptography schemes since they can be faithfully cloned without altering the transmitted data. Nevertheless, L. Goldenberg and L. Vaidman [Phys. Rev. Lett. 75, 1239 (1995)] proposed a protocol in which, even if the data exchange is realized using two orthogonal states, any attempt to eavesdrop is detectable by the legal users. In this scheme the orthogonal states are superpositions of two localized wave packets traveling along separate channels. Here we present an experiment realizing this scheme.
Detection scheme for acoustic quantum radiation in Bose-Einstein condensates.
Schützhold, Ralf
2006-11-10
Based on doubly detuned Raman transitions between (meta)stable atomic or molecular states and recently developed atom counting techniques, a detection scheme for sound waves in dilute Bose-Einstein condensates is proposed whose accuracy might reach down to the level of a few or even single phonons. This scheme could open up a new range of applications including the experimental observation of quantum radiation phenomena such as the Hawking effect in sonic black-hole analogues or the acoustic analogue of cosmological particle creation.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
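To make the "basic" two-stage idea concrete, here is a toy sketch (the pass/fail cut and early-stopping margin are illustrative assumptions, not the paper's calibrated values): half the full sample is scored first, sampling stops early if the estimate is clearly below or above the cut, and otherwise the second half is scored.

```python
# Toy two-stage sequential sampling with early stopping.
import numpy as np

def two_stage_assess(lame_flags, full_n, cut=0.15, margin=0.05):
    """lame_flags: randomized 0/1 lameness scores for sampled cows."""
    p1 = lame_flags[: full_n // 2].mean()
    if p1 <= cut - margin:
        return "pass", full_n // 2          # early stop, half the cost
    if p1 >= cut + margin:
        return "fail", full_n // 2
    p = lame_flags[:full_n].mean()          # continue to the full sample
    return ("fail" if p >= cut else "pass"), full_n

rng = np.random.default_rng(3)
herd = (rng.random(200) < 0.10).astype(float)   # 10% lameness prevalence
print(two_stage_assess(rng.permutation(herd), full_n=60))
```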
NASA Astrophysics Data System (ADS)
Park, Sang Cheol; Zheng, Bin; Wang, Xiao-Hui; Gur, David
2008-03-01
Digital breast tomosynthesis (DBT) has emerged as a promising imaging modality for screening mammography. However, visually detecting micro-calcification clusters depicted on DBT images is a difficult task. Computer-aided detection (CAD) schemes for detecting micro-calcification clusters depicted on mammograms can achieve high performance, and the use of CAD results can assist radiologists in detecting subtle micro-calcification clusters. In this study, we compared the performance of an available 2D-based CAD scheme with one that includes a new grouping and scoring method when applied to both projection and reconstructed DBT images. We selected a dataset involving 96 DBT examinations acquired on 45 women. Each DBT image set included 11 low-dose projection images and a varying number of reconstructed image slices ranging from 18 to 87. In this dataset, 20 true-positive micro-calcification clusters were visually detected on the projection images and 40 on the reconstructed images. We first applied the CAD scheme previously developed in our laboratory to the DBT dataset. We then tested a new grouping method that defines an independent cluster by grouping the same cluster detected on different projection or reconstructed images, and compared four scoring methods to assess CAD performance. The maximum sensitivity levels observed with the different grouping and scoring methods were 70% and 88% for the projection and reconstructed images, with maximum false-positive rates of 4.0 and 15.9 per examination, respectively. This preliminary study demonstrates that (1) among the maximum, minimum and average CAD-generated scores, using the maximum score of the grouped cluster regions achieved the highest performance level, (2) the histogram-based scoring method is reasonably effective in reducing false-positive detections on the projection images, but the overall CAD sensitivity there is lower due to the lower signal-to-noise ratio, and (3) CAD achieved higher sensitivity and a higher false-positive rate (per examination) on the reconstructed images. We conclude that, without changing the detection threshold or performing pre-filtering to possibly increase detection sensitivity, current CAD schemes developed and optimized for 2D mammograms perform relatively poorly on DBT examinations; they need to be re-optimized using DBT datasets, and new grouping and scoring methods need to be incorporated into the schemes.
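A simplified sketch of the grouping-and-scoring idea (our reconstruction, not the exact CAD code; the grouping radius is an assumption): detections of the same cluster on different slices are grouped by (x, y) proximity, and each group is scored by the maximum of its members' CAD scores.

```python
# Hedged sketch: group per-slice detections, fuse with the maximum score.
import numpy as np

def group_detections(dets, radius=10.0):
    """dets: list of (x, y, slice_index, score); returns fused group scores."""
    groups = []
    for x, y, z, s in dets:
        for g in groups:
            gx, gy = np.mean([m[0] for m in g]), np.mean([m[1] for m in g])
            if np.hypot(x - gx, y - gy) <= radius:   # same (x, y) footprint
                g.append((x, y, z, s))
                break
        else:
            groups.append([(x, y, z, s)])
    return [max(m[3] for m in g) for g in groups]    # max-score fusion

dets = [(100, 120, 3, 0.6), (102, 119, 4, 0.9), (300, 50, 10, 0.4)]
print(group_detections(dets))   # [0.9, 0.4]
```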
This draft report is a preliminary assessment that describes how biological indicators are likely to respond to climate change, how well current sampling schemes may detect climate-driven changes, and how likely it is that these sampling schemes will continue to detect impairment...
A novel scheme for abnormal cell detection in Pap smear images
NASA Astrophysics Data System (ADS)
Zhao, Tong; Wachman, Elliot S.; Farkas, Daniel L.
2004-07-01
Finding malignant cells in Pap smear images is a "needle in a haystack"-type problem, tedious, labor-intensive and error-prone. It is therefore desirable to have an automatic screening tool, so that human experts can concentrate on the evaluation of the more difficult cases. Most research on automatic cervical screening tries to extract morphometric and texture features at the cell level, in accordance with the NIH "The Bethesda System" rules. Due to variations in image quality and features, such as brightness, magnification and focus, morphometric and texture analysis is insufficient to provide robust cervical cancer detection. Using a microscopic spectral imaging system, we have produced a set of multispectral Pap smear images with wavelengths from 400 nm to 690 nm, containing both spectral signatures and spatial attributes. We describe a novel scheme that combines spatial information (including texture and morphometric features) with spectral information to significantly improve abnormal cell detection. Three kinds of wavelet features, orthogonal, bi-orthogonal and non-orthogonal, are carefully chosen to optimize recognition performance. Multispectral feature sets are then extracted in the wavelet domain. Using a Back-Propagation Neural Network classifier that greatly decreases the influence of spurious events, we obtain a classification error rate of 5%. Cell morphometric features, such as area and shape, are then used to eliminate most remaining small artifacts. We report initial results from 149 cells from 40 separate image sets, in which only one abnormal cell was missed (TPR = 97.6%) and one normal cell was falsely classified as cancerous (FPR = 1%).
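A hedged sketch of the wavelet-domain feature extraction per spectral band follows (using PyWavelets; the specific bi-orthogonal basis, the mean-absolute-coefficient features and the toy data cube are our assumptions, not the paper's exact choices):

```python
# Hedged sketch: per-band 2D wavelet features from a multispectral cube.
import numpy as np
import pywt

def wavelet_features(cell_img_bands):
    """cell_img_bands: array (n_bands, H, W) of multispectral intensities."""
    feats = []
    for band in cell_img_bands:
        cA, (cH, cV, cD) = pywt.dwt2(band, "bior2.2")   # bi-orthogonal DWT
        feats += [np.mean(np.abs(c)) for c in (cA, cH, cV, cD)]
    return np.array(feats)

cube = np.random.rand(8, 64, 64)      # 8 toy bands between 400 and 690 nm
print(wavelet_features(cube).shape)   # (32,) = 4 features x 8 bands
```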
Phase magnification by two-axis countertwisting for detection-noise robust interferometry
NASA Astrophysics Data System (ADS)
Anders, Fabian; Pezzè, Luca; Smerzi, Augusto; Klempt, Carsten
2018-04-01
Entanglement-enhanced atom interferometry has the potential of surpassing the standard quantum limit and eventually reaching the ultimate Heisenberg bound. The experimental progress is, however, hindered by various technical noise sources, including the noise in the detection of the output quantum state. The influence of detection noise can be largely overcome by exploiting echo schemes, where the entanglement-generating interaction is repeated after the interferometer sequence. Here, we propose an echo protocol that uses two-axis countertwisting as the main nonlinear interaction. We demonstrate that the scheme is robust to detection noise and its performance is superior compared to the already demonstrated one-axis twisting echo scheme. In particular, the sensitivity maintains the Heisenberg scaling in the limit of a large particle number. Finally, we show that the protocol can be implemented with spinor Bose-Einstein condensates. Our results thus outline a realistic approach to mitigate the detection noise in quantum-enhanced interferometry.
SWIFT Detects a remarkable Gamma-ray Burst, GRB 060614, that introduces a New Classification Scheme
NASA Technical Reports Server (NTRS)
Gehrels, N.; Norris, J. P.; Mangano, V.; Barthelmy, S. D.; Burrows, D. N.; Granot, J.; Kaneko, Y.; Kouveliotou, C.; Markwardt, C. B.; Meszaros, P.;
2007-01-01
Gamma-ray bursts (GRBs) are known to come in two duration classes, separated at approx. 2 s. Long bursts originate from star-forming regions in galaxies, have accompanying supernovae (SNe) when near enough to observe, and are likely caused by massive-star collapsars. Recent observations show that short bursts originate in regions within their host galaxies with lower star formation rates, consistent with binary neutron star (NS) or NS-black hole (BH) mergers. Moreover, although their hosts are predominantly nearby galaxies, no SNe have so far been associated with short GRBs. We report here on the bright, nearby GRB 060614 that does not fit in either class. Its approx. 102 s duration groups it with long GRBs, while its temporal lag and peak luminosity fall entirely within the short GRB subclass. Moreover, very deep optical observations exclude an accompanying supernova, similar to short GRBs. This combination of a long-duration event without an accompanying SN poses a challenge to both the collapsar and merging-NS interpretations and opens the door on a new GRB classification scheme that straddles both long and short bursts.
NASA Technical Reports Server (NTRS)
Tao, Wei Kuo; Chen, C.-S.; Jia, Y.; Baker, D.; Lang, S.; Wetzel, P.; Lau, W. K.-M.
2001-01-01
Several heavy precipitation episodes occurred over Taiwan from August 10 to 13, 1994. Precipitation patterns and characteristics are quite different between the events of August 10 and 11 and those of August 12 and 13. In Part I (Chen et al. 2001), the environmental situation and precipitation characteristics were analyzed using the EC/TOGA data, ground-based radar data, surface rainfall patterns, surface wind data, and upper-air soundings. In this study (Part II), the Penn State/NCAR Mesoscale Model (MM5) is used to study the precipitation characteristics of these heavy precipitation events. Several physical processes (schemes) developed at NASA Goddard Space Flight Center (i.e., a cloud microphysics scheme, a radiative transfer model, and a land-soil-vegetation surface model) have recently been implemented into the MM5; these physical packages are described in the paper. Two-way interactive nested grids are used with horizontal resolutions of 45, 15 and 5 km. The model results indicated that cloud physics, land surface and radiation processes generally do not change the location (horizontal distribution) of heavy precipitation. The Goddard 3-class ice scheme produced more rainfall than the 2-class scheme. The Goddard multi-broad-band radiative transfer model reduced precipitation compared to a one-broad-band (emissivity) radiation model. The Goddard land-soil-vegetation surface model also reduced the rainfall compared to a simple surface model in which the surface temperature is computed from a surface energy budget following the "force-restore" method. However, model runs including all Goddard physical processes enhanced precipitation significantly for both cases, and the results from these runs are in better agreement with observations. Despite improved simulations using different physical schemes, there are still some deficiencies in the model simulations; some potential problems are discussed. Sensitivity tests (removing either terrain or radiative processes) were performed to identify the physical processes that determine the precipitation patterns and characteristics of heavy rainfall events. These tests indicated that terrain can play a major role in determining the exact location of both precipitation events. The terrain can also play a major role in determining the intensity of precipitation for both events; however, it has a large impact on one event and a smaller one on the other. The radiative processes are important for determining the precipitation patterns for one case but not the other, and can also affect the total rainfall for both cases to different extents.
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Li, Deng; Lu, Xiaoming; Cheng, Xinyi; Wang, Liwei
2014-10-01
Continuous crystal-based positron emission tomography (PET) detectors could be an ideal alternative to current high-resolution pixelated PET detectors if the issues of high-performance γ interaction position estimation and its real-time implementation are solved. Unfortunately, existing position estimators are not very feasible for implementation on a field-programmable gate array (FPGA). In this paper, we propose a new self-organizing map neural network-based nearest neighbor (SOM-NN) positioning scheme aiming not only at providing high performance, but also at being realistic for FPGA implementation. Benefitting from the SOM feature mapping mechanism, the large set of input reference events at each calibration position is approximated by a small set of prototypes, and the computation of the nearest-neighbor search for unknown events is largely reduced. Using our experimental data, the scheme was evaluated, optimized and compared with the smoothed k-NN method. The spatial resolutions in full-width-at-half-maximum (FWHM) of the two methods, averaged over the center axis of the detector, were 1.87 ± 0.17 mm and 1.92 ± 0.09 mm, respectively. The test results show that the SOM-NN scheme has positioning performance equivalent to the smoothed k-NN method, but the amount of computation is only about one-tenth of that of the smoothed k-NN method. In addition, the algorithm structure of the SOM-NN scheme is more feasible for implementation on an FPGA. It has the potential to realize real-time position estimation on an FPGA with a high event-processing throughput.
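The core idea can be condensed into a toy sketch (the simple competitive-learning update, 16-channel event vectors and 1D calibration positions below are our assumptions; the paper's SOM training differs): each calibration position's many reference events are compressed to a few prototype vectors, and an unknown event is assigned the position of its nearest prototype.

```python
# Hedged sketch: prototype compression plus nearest-prototype positioning.
import numpy as np

def learn_prototypes(events, n_proto=4, iters=200, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    protos = events[rng.choice(len(events), n_proto, replace=False)].copy()
    for _ in range(iters):                   # simple competitive learning
        e = events[rng.integers(len(events))]
        k = np.argmin(np.sum((protos - e) ** 2, axis=1))
        protos[k] += lr * (e - protos[k])    # move the winner toward the event
    return protos

# calib: {position: (n_events, n_signals) reference data per position}
rng = np.random.default_rng(1)
calib = {p: rng.normal(p, 1.0, size=(500, 16)) for p in (0.0, 5.0, 10.0)}
bank = [(p, pr) for p, e in calib.items() for pr in learn_prototypes(e)]

unknown = rng.normal(5.0, 1.0, size=16)      # event from position 5.0
pos = min(bank, key=lambda t: np.sum((t[1] - unknown) ** 2))[0]
print(f"estimated position: {pos}")
```

Searching a few prototypes per position instead of all reference events is what makes the per-event cost small enough for an FPGA pipeline.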
An improved PCA method with application to boiler leak detection.
Sun, Xi; Marquez, Horacio J; Chen, Tongwen; Riaz, Muhammad
2005-07-01
Principal component analysis (PCA) is a popular fault detection technique. It has been widely used in process industries, especially in the chemical industry. In industrial applications, achieving a sensitive system capable of detecting incipient faults while keeping the false alarm rate to a minimum is a crucial issue. Although a lot of research has focused on these issues for PCA-based fault detection and diagnosis methods, the trade-off between detection sensitivity and false alarm rate continues to be an important issue. In this paper, an improved PCA method is proposed to address this problem. In this method, a new data preprocessing scheme and a new fault detection scheme designed for Hotelling's T2 as well as the squared prediction error are developed. A dynamic PCA model is also developed for boiler leak detection. This new method is applied to boiler water/steam leak detection with real data from Syncrude Canada's utility plant in Fort McMurray, Canada. Our results demonstrate that the proposed method can effectively reduce the false alarm rate, provide effective and correct leak alarms, and give early warning to operators.
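For orientation, here is the generic PCA fault-detection recipe the paper builds on (Hotelling's T2 and the squared prediction error, SPE; the paper's improved preprocessing, limits and dynamic model are not reproduced, and the data below are toy values):

```python
# Generic PCA monitoring sketch: T2 in the model subspace, SPE outside it.
import numpy as np

def fit_pca(X, n_pc):
    mu, sd = X.mean(0), X.std(0)
    Xs = (X - mu) / sd
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_pc].T                           # loading matrix
    lam = (S[:n_pc] ** 2) / (len(X) - 1)      # retained PC variances
    return mu, sd, P, lam

def t2_spe(x, mu, sd, P, lam):
    xs = (x - mu) / sd
    t = P.T @ xs
    t2 = np.sum(t ** 2 / lam)                 # Hotelling's T2 statistic
    resid = xs - P @ t
    return t2, resid @ resid                  # SPE (squared pred. error)

rng = np.random.default_rng(4)
normal = rng.normal(size=(500, 8))            # "normal operation" data
mu, sd, P, lam = fit_pca(normal, n_pc=3)
faulty = rng.normal(size=8); faulty[2] += 4.0 # leak-like shift in sensor 3
print(t2_spe(faulty, mu, sd, P, lam))         # large SPE flags the fault
```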
Atomic Interferometric Gravitational-Wave Space Observatory (AIGSO)
NASA Astrophysics Data System (ADS)
Gao, Dong-Feng; Wang, Jin; Zhan, Ming-Sheng
2018-01-01
We propose a space-borne gravitational-wave detection scheme, called the atom interferometric gravitational-wave space observatory (AIGSO). It is motivated by progress in atomic matter-wave interferometry, which solely utilizes standing light waves to split, deflect and recombine the atomic beam. Our scheme consists of three drag-free satellites orbiting the Earth. The phase shift of AIGSO is dominated by the Sagnac effect of gravitational waves, which is proportional to the area enclosed by the atom interferometer and to the frequency and amplitude of the gravitational waves. The scheme has a strain sensitivity below 10^-20/√Hz in the 100 mHz-10 Hz frequency range, which fills the detection gap between space-based and ground-based laser interferometric detectors. Thus, our proposed AIGSO can be a good complementary detection scheme to the space-borne laser interferometric schemes, such as LISA. Considering the current status of the relevant technology readiness, we expect AIGSO to be a promising candidate for the future space-based gravitational-wave detection plan. Supported by the National Key Research Program of China under Grant No. 2016YFA0302002, the National Science Foundation of China under Grant Nos. 11227803 and 91536221, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB21010100
Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique
NASA Astrophysics Data System (ADS)
Nagashima, Hiroyuki; Harakawa, Tetsumi
We developed a computer-aided diagnostic (CAD) scheme for detecting the subtle image findings of acute cerebral infarction in brain computed tomography (CT) by using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image is first corrected automatically by rotating and shifting. The contralateral subtraction image is then derived by subtracting the left-right reversed image from the original image. Initial candidates for acute cerebral infarction are identified using multiple-thresholding and image filtering techniques. As the first step in removing false-positive candidates, fourteen image features are extracted for each initial candidate, and halfway candidates are detected by applying a rule-based test to these image features. In the second step, five image features are extracted using the overlap of each halfway candidate between the slice of interest and the adjacent upper/lower slices. Finally, acute cerebral infarction candidates are detected by applying a rule-based test to these five image features. The sensitivity in the detection for 74 training cases was 97.4% with 3.7 false positives per image. The performance of the CAD scheme on 44 testing cases was similar to that on the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarction in CT images.
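The subtraction step itself is compact enough to sketch directly (tilt correction is omitted and the array values are toy numbers): mirroring the slice about its midline and subtracting makes left-right asymmetries, such as early infarcts, stand out.

```python
# Minimal sketch of the contralateral subtraction step.
import numpy as np

def contralateral_subtraction(ct_slice):
    mirrored = ct_slice[:, ::-1]              # reverse left-right
    return ct_slice.astype(float) - mirrored

slice_ = np.zeros((6, 6)); slice_[2:4, 4:6] = 30.0   # one-sided asymmetry
diff = contralateral_subtraction(slice_)
print(np.argwhere(np.abs(diff) > 10))         # candidate asymmetric pixels
```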
Park, Sang Cheol; Chapman, Brian E; Zheng, Bin
2011-06-01
This study developed a computer-aided detection (CAD) scheme for pulmonary embolism (PE) detection and investigated several approaches to improve CAD performance. In the study, 20 computed tomography examinations with various lung diseases were selected, which include 44 verified PE lesions. The proposed CAD scheme consists of five basic steps: 1) lung segmentation; 2) PE candidate extraction using an intensity mask and tobogganing region growing; 3) PE candidate feature extraction; 4) false-positive (FP) reduction using an artificial neural network (ANN); and 5) a multifeature-based k-nearest neighbor for positive/negative classification. In this study, we also investigated the following additional methods to improve CAD performance: 1) grouping 2-D detected features into a single 3-D object; 2) selecting features with a genetic algorithm (GA); and 3) limiting the number of allowed suspicious lesions to be cued in one examination. The results showed that 1) the CAD scheme using tobogganing, an ANN, and the grouping method achieved a maximum detection sensitivity of 79.2%; 2) the maximum scoring method achieved superior performance over the other scoring fusion methods; 3) the GA was able to delete "redundant" features and further improve CAD performance; and 4) limiting the maximum number of cued lesions per examination reduced the FP rate by a factor of 5.3. Combining these approaches, the CAD scheme achieved 63.2% detection sensitivity with 18.4 FP lesions per examination. The study suggested that the performance of CAD schemes for PE detection depends on many factors, including 1) optimizing the 2-D region grouping and scoring methods; 2) selecting the optimal feature set; and 3) limiting the number of allowed cued lesions per examination.
Sensor Data Security Level Estimation Scheme for Wireless Sensor Networks
Ramos, Alex; Filho, Raimir Holanda
2015-01-01
Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates. PMID:25608215
NASA Astrophysics Data System (ADS)
Wang, Wenkai; Li, Husheng; Sun, Yan(Lindsay); Han, Zhu
2009-12-01
Cognitive radio is a revolutionary paradigm to mitigate the spectrum scarcity problem in wireless networks. In cognitive radio networks, collaborative spectrum sensing is considered an effective method to improve the performance of primary user detection. Current collaborative spectrum sensing schemes usually assume that secondary users report their sensing information honestly. However, compromised nodes can send false sensing information to mislead the system. In this paper, we study the detection of untrustworthy secondary users in cognitive radio networks. We first analyze the case in which there is only one compromised node in the collaborative spectrum sensing scheme, and then investigate the scenario with multiple compromised nodes. Defense schemes are proposed to detect malicious nodes according to their reporting histories. We calculate the suspicious level of all nodes based on their reports, and reports from nodes with high suspicious levels are excluded from decision-making. Compared with existing defense methods, the proposed scheme can effectively differentiate malicious nodes from honest nodes and, as a result, significantly improve the performance of collaborative sensing. For example, when there are 10 secondary users and the primary user detection rate is 0.99, one malicious user can make the false alarm rate increase to 72%; the proposed scheme reduces it to 5%. Two malicious users can make the false alarm rate increase to 85%, and the proposed scheme reduces it to 8%.
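A hedged reconstruction of the suspicion idea follows (the exact formula is not given in the abstract, so the smoothed disagreement rate and exclusion threshold below are our assumptions): each user's reports are compared with the fused decision, persistent disagreement raises suspicion, and highly suspicious users are dropped from future fusion.

```python
# Hedged sketch: suspicion-weighted exclusion in collaborative sensing.
import numpy as np

def fuse_and_score(reports, suspicion, thresh=0.3):
    """reports: (n_users,) 0/1 sensing reports for one round."""
    trusted = suspicion < thresh
    decision = int(reports[trusted].mean() >= 0.5)   # majority of trusted
    suspicion = 0.9 * suspicion + 0.1 * (reports != decision)  # smoothed
    return decision, suspicion

rng = np.random.default_rng(5)
suspicion = np.zeros(10)
for _ in range(50):
    honest = rng.random(9) < 0.9            # 9 honest users, PU present
    reports = np.append(honest, 0)          # user 9 always reports "idle"
    decision, suspicion = fuse_and_score(reports.astype(int), suspicion)
print(suspicion.round(2))                   # the malicious user stands out
```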
Evaluation schemes for video and image anomaly detection algorithms
NASA Astrophysics Data System (ADS)
Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael
2016-05-01
Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently based on differences in the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
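Once an evaluation scheme has turned proposed detections into per-item labels, the metrics themselves are mechanical; a short sketch (toy scores, not the paper's data) using scikit-learn:

```python
# Building ROC and precision-recall operating points from anomaly scores.
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve, auc

rng = np.random.default_rng(6)
y_true = np.r_[np.ones(50), np.zeros(450)]           # 50 true anomalies
scores = np.r_[rng.normal(2, 1, 50), rng.normal(0, 1, 450)]

fpr, tpr, _ = roc_curve(y_true, scores)
prec, rec, _ = precision_recall_curve(y_true, scores)
print(f"ROC AUC = {auc(fpr, tpr):.3f}")
```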
Enhancing a Simple MODIS Cloud Mask Algorithm for the Landsat Data Continuity Mission
NASA Technical Reports Server (NTRS)
Wilson, Michael J.; Oreopoulos, Lazarous
2011-01-01
The presence of clouds in images acquired by the Landsat series of satellites is usually an undesirable, but generally unavoidable, fact. With the emphasis of the program being on land imaging, the suspended liquid/ice particles of which clouds are made fully or partially obscure the desired observational target. Knowing the amount and location of clouds in a Landsat scene is therefore valuable information for scene selection, for making clear-sky composites from multiple scenes, and for scheduling future acquisitions. The two instruments in the upcoming Landsat Data Continuity Mission (LDCM) will include new channels that will enhance our ability to detect high clouds, which are often also thin in the sense that a large fraction of solar radiation can pass through them. This work studies the potential impact of these new channels on enhancing LDCM's cloud detection capabilities compared to previous Landsat missions. We revisit a previously published scheme for cloud detection and add new tests to capture more of the thin clouds that are harder to detect with the more limited arsenal of channels. Since no Landsat data yet include the new LDCM channels, we resort to data from another instrument, MODIS, which has these bands as well as the other bands of LDCM, to test the capabilities of our new algorithm. By comparing our revised scheme's performance against that of the official MODIS cloud detection scheme, we conclude that the new scheme performs better than the earlier scheme, which was not very good at thin cloud detection.
NASA Astrophysics Data System (ADS)
Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.
2004-06-01
This paper is framed within the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma-ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma-ray sensor based on a newly released photodetector. However, a dedicated PEM detector requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 Ms/s and must achieve event count rates up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain the event parameters necessary for performing the image reconstruction. A new-generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to manage the DAQ system at up to 80 Ms/s and to execute intensive calculations on the detector signals. This paper describes our DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high data throughput with extensive optimization of the signal processing algorithms.
New, Improved Goddard Bulk-Microphysical Schemes for Studying Precipitation Processes in WRF
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2007-01-01
An improved bulk microphysical parameterization is implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three different options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel) and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The Goddard 3ICE scheme with a cloud ice-snow-hail configuration agreed better with observations in terms of rainfall intensity and a narrow convective line than did simulations with a cloud ice-snow-graupel or cloud ice-snow (i.e., 2ICE) configuration. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) particle with a very fast fall speed (over 10 m/s). For the Atlantic hurricane case, the Goddard microphysical schemes had no significant impact on the track forecast but did affect the intensity slightly. The improved Goddard schemes are also compared with WRF's three other 3ICE bulk microphysical schemes: WSM6, Purdue-Lin and Thompson. For the summer midlatitude convective line system, all of the schemes resulted in simulated precipitation events that were elongated in the southwest-northeast direction, in qualitative agreement with the observed feature. However, the Goddard 3ICE scheme with the hail option and the Thompson scheme agree better with observations in terms of rainfall intensity, except that the Goddard scheme simulated more heavy rainfall (over 48 mm/h). For the Atlantic hurricane case, none of the schemes had a significant impact on the track forecast; however, the simulated intensity using the Purdue-Lin scheme was much stronger than with the other schemes. The vertical distributions of model-simulated cloud species (e.g., snow) are quite sensitive to the microphysical schemes, which is an important issue for future verification against satellite retrievals. Both the Purdue-Lin and WSM6 schemes simulated very little snow compared to the other schemes for both the midlatitude convective line and hurricane cases. Sensitivity tests performed for these two WRF schemes show that snow production could be increased by increasing the snow intercept, turning off the auto-conversion from snow to graupel, and reducing the transfer processes from cloud-sized particles to precipitation-sized ice.
Re-formulation and Validation of Cloud Microphysics Schemes
NASA Astrophysics Data System (ADS)
Wang, J.; Georgakakos, K. P.
2007-12-01
The research focuses on improving quantitative precipitation forecasts by removing significant uncertainties in the cloud microphysics schemes embedded in models such as WRF and MM5 and in cloud-resolving models such as GCE. Reformulation of several production terms in these microphysics schemes was found necessary. When estimating the four graupel production terms involved in the accretion between rain, snow and graupel, current microphysics schemes assume that all raindrops and snow particles fall at their mass-weighted mean terminal velocities, so that analytic solutions can be found for these production terms. Initial analysis and tests showed that these approximate analytic solutions give significant and systematic overestimates of these terms and thus become one of the major sources of graupel overproduction and the associated extreme radar reflectivity in simulations; these results are corroborated by several reports. For example, the analytic solution overestimates the graupel production by collisions between raindrops and snow by up to 230%. The dichotomy between "pure" snow (not rimed) and "pure" graupel (completely rimed) in current microphysics schemes excludes intermediate forms between the two and thus becomes a significant cause of graupel overproduction in hydrometeor simulations. In addition, generating the same-density graupel from both the freezing of supercooled water and the riming of snow may cause underestimation of graupel production by freezing. A parameterization scheme for the riming degree of snow is proposed, and a dynamic fallspeed-diameter relationship and a density-diameter relationship of rimed snow are then assigned to graupel based on the diagnosed riming degree. To test whether these new treatments can improve quantitative precipitation forecasts, Hurricane Katrina and a severe winter snowfall event in the Sierra Nevada Range are selected as case studies. A series of control simulations and sensitivity tests was conducted for these two cases. Two statistical methods are used to compare the radar reflectivity simulated by the model with that detected by ground-based and airborne radar at different height levels. It was found that the changes made to the current microphysical schemes improve QPF and the microphysics simulation significantly.
Novel MDM-PON scheme utilizing self-homodyne detection for high-speed/capacity access networks.
Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Wu, Zhongying; Zhou, Peng; Tian, Yu; Ren, Fang; Yu, Jinyi; Ge, Dawei; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan
2015-12-14
In this paper, we propose a cost-effective, energy-saving mode-division-multiplexing passive optical network (MDM-PON) scheme utilizing self-homodyne detection for high-speed/capacity access networks, based on low modal-crosstalk few-mode fiber (FMF) and an all-fiber mode multiplexer/demultiplexer (MUX/DEMUX). In the proposed scheme, one of the spatial modes is used to transmit a portion of the signal carrier (namely, a pilot tone) as the local oscillator (LO), while the others are used for signal-bearing channels. At the receiver, the pilot tone and the signal can be separated without strong crosstalk and sent to the receiver for coherent detection. The spectral efficiency (SE) is significantly enhanced when multiple spatial channels are used. Meanwhile, the self-homodyne detection scheme can effectively suppress laser phase noise, which relaxes the requirement on the laser linewidth at the optical line terminal or optical network units (OLT/ONUs). The digital signal processing (DSP) at the receiver is also simplified, since it removes the need for frequency offset compensation and complex phase correction, which reduces the computational complexity and energy consumption. Polarization division multiplexing (PDM), which doubles the SE, is also supported by the scheme. The proposed scheme is scalable to multi-wavelength application when a wavelength MUX/DEMUX is utilized. Utilizing the proposed scheme, we demonstrate a proof-of-concept 4 × 40-Gb/s orthogonal frequency division multiplexing (OFDM) transmission over 55-km FMF using low modal-crosstalk two-mode FMF and MUX/DEMUX with error-free operation. Compared with the back-to-back case, less than 1-dB Q-factor penalty is observed after 55-km FMF transmission for all four channels. The signal power and pilot-tone power are also optimized to achieve the optimal transmission performance.
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-01-01
The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380
A Foreign Object Damage Event Detector Data Fusion System for Turbofan Engines
NASA Technical Reports Server (NTRS)
Turso, James A.; Litt, Jonathan S.
2004-01-01
A data fusion system designed to provide a reliable assessment of the occurrence of Foreign Object Damage (FOD) in a turbofan engine is presented. The FOD-event feature-level fusion scheme combines knowledge of shifts in engine gas path performance, obtained using a Kalman filter, with bearing accelerometer signal features extracted via wavelet analysis, to positively identify a FOD event. A fuzzy inference system provides basic probability assignments (bpa), based on features extracted from the gas path analysis and bearing accelerometers, to a fusion algorithm based on the Dempster-Shafer-Yager Theory of Evidence. Details are provided on the wavelet transforms used to extract the foreign object strike features from the noisy data and on the Kalman filter-based gas path analysis. The system is demonstrated using a turbofan engine combined-effects model (CEM), which provides both gas path and rotor-dynamic structural responses and is suitable for rapid prototyping of control and diagnostic systems. The fusion of the disparate data can provide significantly more reliable detection of a FOD event than the use of either method alone. The use of fuzzy inference techniques combined with the Dempster-Shafer-Yager Theory of Evidence provides a theoretical justification for drawing conclusions based on imprecise or incomplete data.
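To illustrate the evidence-combination step, here is a small sketch over the two-hypothesis frame {FOD, no-FOD} using Yager's rule, in which conflicting mass is assigned to ignorance rather than renormalized away; the bpa values are invented for illustration, not taken from the paper:

```python
# Hedged sketch: Yager's combination rule over a two-element frame.
def yager_combine(m1, m2):
    """m1, m2: dicts mapping frozensets over {'fod', 'ok'} to masses."""
    theta = frozenset({"fod", "ok"})
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    fused[theta] = fused.get(theta, 0.0) + conflict  # conflict -> ignorance
    return fused

gas_path = {frozenset({"fod"}): 0.6, frozenset({"fod", "ok"}): 0.4}
accel    = {frozenset({"fod"}): 0.7, frozenset({"ok"}): 0.1,
            frozenset({"fod", "ok"}): 0.2}
print(yager_combine(gas_path, accel))   # strong combined belief in FOD
```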
A new EEMD-based scheme for detection of insect damaged wheat kernels using impact acoustics
USDA-ARS?s Scientific Manuscript database
Internally feeding insects inside wheat kernels cause significant, but unseen economic damage to stored grain. In this paper, a new scheme based on ensemble empirical mode decomposition (EEMD) using impact acoustics is proposed for detection of insect-damaged wheat kernels, based on its capability t...
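As a hedged illustration of the EEMD front end (using the PyEMD library; the toy impact signal and the per-IMF energy features are our assumptions, not the manuscript's feature set):

```python
# Hedged sketch: EEMD decomposition of an impact-acoustic signal into IMFs,
# with per-IMF energies as candidate classification features.
import numpy as np
from PyEMD import EEMD

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
signal = np.exp(-60 * t) * np.sin(2 * np.pi * 3000 * t)   # toy impact ping

eemd = EEMD(trials=50)                     # ensemble of noise-assisted EMDs
imfs = eemd.eemd(signal)                   # intrinsic mode functions
energies = [float(np.sum(imf ** 2)) for imf in imfs]   # per-IMF features
print(len(imfs), np.round(energies, 3))
```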
Inferring Recent Demography from Isolation by Distance of Long Shared Sequence Blocks
Ringbauer, Harald; Coop, Graham
2017-01-01
Recently it has become feasible to detect long blocks of nearly identical sequence shared between pairs of genomes. These identity-by-descent (IBD) blocks are direct traces of recent coalescence events and, as such, contain ample signal to infer recent demography. Here, we examine sharing of such blocks in two-dimensional populations with local migration. Using a diffusion approximation to trace genetic ancestry, we derive analytical formulas for patterns of isolation by distance of IBD blocks, which can also incorporate recent population density changes. We introduce an inference scheme that uses a composite-likelihood approach to fit these formulas. We then extensively evaluate our theory and inference method on a range of scenarios using simulated data. We first validate the diffusion approximation by showing that the theoretical results closely match the simulated block-sharing patterns. We then demonstrate that our inference scheme can accurately and robustly infer dispersal rate and effective density, as well as bounds on recent dynamics of population density. To demonstrate an application, we use our estimation scheme to explore the fit of a diffusion model to Eastern European samples in the Population Reference Sample data set. We show that ancestry diffusing with a rate of σ ≈ 50–100 km/gen during the last centuries, combined with accelerating population growth, can explain the observed exponential decay of block sharing with increasing pairwise sample distance. PMID:28108588
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement on the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting the components of the data by the inverse of their error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple-source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of the deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as initial values, we apply a gradient method to determine the horizontal and vertical components of a hypocenter. Sometimes, several source locations are determined in a time window of 5 minutes. We estimate the resolution, defined as the minimum distance at which two sources can be detected separately by the location method, to be about 100 km. The validity of this estimation is confirmed by a numerical test using synthetic waveforms. Applying the method to continuous seismograms in western Japan spanning over 10 years, it detected 27% more tremors than the previous method, owing to multiple-source detection and the improved accuracy of the appropriate weighting scheme.
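A minimal sketch of the weighted envelope cross-correlation objective (the ACC) and its coarse grid search, assuming a homogeneous S-wave speed, three stations, and synthetic Gaussian envelopes; all values are illustrative, and the real method uses many stations, error-variance weights, and a subsequent gradient refinement:

    import numpy as np

    FS = 1.0      # envelope sampling rate [Hz]
    VS = 4.0      # assumed S-wave speed [km/s]
    stations = np.array([[0.0, 0.0], [80.0, 0.0], [0.0, 80.0]])   # km

    def synth_envelope(t0, n=600):
        t = np.arange(n) / FS
        return np.exp(-0.5 * ((t - t0) / 5.0) ** 2)

    true_src = np.array([30.0, 40.0])
    env = [synth_envelope(100.0 + d / VS)
           for d in np.linalg.norm(stations - true_src, axis=1)]
    weights = np.ones(len(stations))    # inverse error variances in practice

    def acc(pos):
        tt = np.linalg.norm(stations - pos, axis=1) / VS
        total, wsum = 0.0, 0.0
        for i in range(len(stations)):
            for j in range(i + 1, len(stations)):
                lag = int(round((tt[i] - tt[j]) * FS))
                a, b = env[i], env[j]
                if lag >= 0:
                    cc = np.dot(a[lag:], b[:len(b) - lag])
                else:
                    cc = np.dot(a[:lag], b[-lag:])
                w = weights[i] * weights[j]
                total += w * cc / (np.linalg.norm(a) * np.linalg.norm(b))
                wsum += w
        return total / wsum

    # coarse grid search; candidates would seed a gradient method afterwards
    grid = [(x, y) for x in np.arange(0, 81, 10) for y in np.arange(0, 81, 10)]
    print("best grid node:", max(grid, key=lambda p: acc(np.array(p))))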
Multirate and event-driven Kalman filters for helicopter flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Smith, Phillip; Suorsa, Raymond E.; Hussien, Bassam
1993-01-01
A vision-based obstacle detection system that provides information about objects as a function of azimuth and elevation is discussed. The range map is computed using a sequence of images from a passive sensor, and an extended Kalman filter is used to estimate range to obstacles. The magnitude of the optical flow that provides measurements for each Kalman filter varies significantly over the image depending on the helicopter motion and object location. In a standard Kalman filter, the measurement update takes place at fixed intervals. It may be necessary to use a different measurement update rate in different parts of the image in order to maintain the same signal-to-noise ratio in the optical flow calculations. A range estimation scheme that accepts the measurement only under certain conditions is presented. The estimation results from the standard Kalman filter are compared with results from a multirate Kalman filter and an event-driven Kalman filter for a sequence of helicopter flight images.
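A minimal sketch of the event-driven idea in one dimension: the Kalman time update always runs, but the measurement update is applied only when the optical-flow magnitude clears a quality threshold. The scalar model, noise levels, and threshold are illustrative assumptions, not the paper's configuration:

    import numpy as np

    def kf_step(x, P, z, flow_mag, q=0.01, r=0.5, flow_min=0.2):
        P = P + q                      # time update (constant-state model)
        if flow_mag >= flow_min:       # event-driven measurement update
            K = P / (P + r)
            x = x + K * (z - x)
            P = (1.0 - K) * P
        return x, P

    rng = np.random.default_rng(1)
    x_hat, P = 50.0, 10.0              # initial range estimate [m], variance
    for _ in range(20):
        z = 42.0 + rng.normal(0.0, 0.7)     # noisy range measurement
        flow = rng.uniform(0.0, 0.5)        # optical-flow magnitude proxy
        x_hat, P = kf_step(x_hat, P, z, flow)
    print(f"final range estimate: {x_hat:.1f} m")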
NASA Technical Reports Server (NTRS)
Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.
2004-01-01
This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and simulation results for the application of the scheme to a single-axis spacecraft attitude control system with a 3rd-order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is autonomous and adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter, which generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation, with a fixed-gain Kalman filter, which generates optimal state estimates and provides model-based state estimates that form an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
NASA Astrophysics Data System (ADS)
Wang, Xingwei; Zheng, Bin; Li, Shibo; Mulvihill, John J.; Chen, Xiaodong; Liu, Hong
2010-07-01
Karyotyping is an important process to classify chromosomes into standard classes, and the results are routinely used by clinicians to diagnose cancers and genetic diseases. However, visual karyotyping using microscopic images is time-consuming and tedious, which reduces diagnostic efficiency and accuracy. Although many efforts have been made to develop computerized schemes for automated karyotyping, no scheme can operate without substantial human intervention. Instead of developing a method to classify all chromosome classes, we develop an automatic scheme to detect abnormal metaphase cells by identifying a specific class of chromosomes (class 22) and to prescreen for suspected chronic myeloid leukemia (CML). The scheme includes three steps: (1) iteratively segment randomly distributed individual chromosomes, (2) process segmented chromosomes and compute image features to identify the candidates, and (3) apply an adaptive matching template to identify chromosomes of class 22. An image data set of 451 metaphase cells extracted from bone marrow specimens of 30 positive and 30 negative cases for CML was selected to test the scheme's performance. The overall case-based classification accuracy is 93.3% (100% sensitivity and 86.7% specificity). The results demonstrate the feasibility of applying an automated scheme to detect or prescreen suspicious cancer cases.
A Novel Physical Layer Assisted Authentication Scheme for Mobile Wireless Sensor Networks
Wang, Qiuhua
2017-01-01
Physical-layer authentication can address physical layer vulnerabilities and security threats in wireless sensor networks, and has been considered an effective complementary enhancement to existing upper-layer authentication mechanisms. In this paper, to advance the existing research and improve the authentication performance, we propose a novel physical layer assisted authentication scheme for mobile wireless sensor networks. In our proposed scheme, we exploit the reciprocity and spatial uncorrelation of the wireless channel to verify the identities of the transmitting users involved and to decide whether all data frames are from the same sender. In addition, a new method is developed for the legitimate users to compare their received signal strength (RSS) records, which prevents this information from being disclosed to an adversary. Our proposed scheme can detect a spoofing attack even in a highly dynamic environment. We evaluate our scheme through experiments under indoor and outdoor environments. Experimental results show that our proposed scheme is more efficient and achieves a higher detection rate while maintaining a lower false alarm rate. PMID:28165423
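A minimal sketch of the channel-reciprocity check, assuming each side holds a short trace of per-frame RSS values; the fading model and the correlation threshold are illustrative, not the paper's protocol:

    import numpy as np

    def same_sender(rss_a, rss_b, rho_min=0.8):
        # frames from one location share the channel's fading pattern
        return np.corrcoef(rss_a, rss_b)[0, 1] >= rho_min

    rng = np.random.default_rng(2)
    channel = np.sin(np.linspace(0, 6, 50))            # slow fading pattern
    legit_1 = channel + 0.1 * rng.standard_normal(50)
    legit_2 = channel + 0.1 * rng.standard_normal(50)
    spoofer = rng.standard_normal(50)                  # uncorrelated location
    print(same_sender(legit_1, legit_2))   # True  -> accept frames
    print(same_sender(legit_1, spoofer))   # False -> flag spoofing attack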
A joint asymmetric watermarking and image encryption scheme
NASA Astrophysics Data System (ADS)
Boato, G.; Conotter, V.; De Natale, F. G. B.; Fontanari, C.
2008-02-01
Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing one both to cipher watermarked data and to mark encrypted data without interfering with the detection process. In order to demonstrate the effectiveness of the above principles, we present an explicit example where the watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random permutation, are integrated in a commutative scheme.
Automatic background updating for video-based vehicle detection
NASA Astrophysics Data System (ADS)
Hu, Chunhai; Li, Dongmei; Liu, Jichuan
2008-03-01
Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The most widely used video-based vehicle detection technique is background subtraction, and its key problem is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution is proposed for vehicle detection, to resolve the problems caused by sudden camera perturbation, sudden or gradual illumination change, and the sleeping person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
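A minimal sketch of selective background updating: pixels flagged as foreground are excluded from the blend so that a stopped vehicle is not absorbed into the background, one common answer to the sleeping person problem. The learning rate and threshold are illustrative assumptions, not the paper's zone-distribution method:

    import numpy as np

    def update_background(bg, frame, alpha=0.05, fg_thresh=25):
        diff = np.abs(frame.astype(np.int16) - bg.astype(np.int16))
        foreground = diff > fg_thresh
        blended = (1 - alpha) * bg + alpha * frame
        new_bg = np.where(foreground, bg, blended)   # freeze foreground zones
        return new_bg.astype(np.uint8), foreground

    bg = np.full((4, 4), 100, dtype=np.uint8)
    frame = bg.copy()
    frame[1:3, 1:3] = 200                            # a "vehicle" appears
    bg, fg = update_background(bg, frame)
    print(fg.sum(), "foreground pixels; background preserved beneath them")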
An online outlier identification and removal scheme for improving fault detection performance.
Ferdowsi, Hasan; Jagannathan, Sarangapani; Zawodniok, Maciej
2014-05-01
Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers makes the data (or system states) more trustworthy and reliable, since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, necessitating a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in the OIR is designed such that the detected outliers have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between faulty and healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is introduced. Finally, a three-tank benchmark system and a simple linear system are used to verify the proposed scheme in simulations, and the scheme is then applied to an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subject to change due to both unknown faults and operating conditions.
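A minimal sketch of the window-based outlier test described above: each new residual is compared against the median and standard deviation of a moving window. The window length and the factor k are illustrative assumptions; the paper couples this test to an NN state estimator:

    import numpy as np
    from collections import deque

    class OutlierDetector:
        def __init__(self, win=25, k=3.0):
            self.buf = deque(maxlen=win)
            self.k = k

        def step(self, residual):
            self.buf.append(residual)
            med = np.median(self.buf)
            std = np.std(self.buf)
            return std > 0 and abs(residual - med) > self.k * std

    det = OutlierDetector()
    rng = np.random.default_rng(3)
    for t in range(100):
        r = rng.normal(0, 0.1) + (5.0 if t == 60 else 0.0)  # injected outlier
        if det.step(r):
            print(f"outlier detected at t={t}")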
NASA Technical Reports Server (NTRS)
Tao, W.K.; Shi, J.J.; Braun, S.; Simpson, J.; Chen, S.S.; Lang, S.; Hong, S.Y.; Thompson, G.; Peters-Lidard, C.
2009-01-01
A Goddard bulk microphysical parameterization is implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three different options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel), and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of microphysical schemes on different weather events: a midlatitude linear convective system and an Atlantic hurricane. The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The Goddard 3ICE scheme with the cloud ice-snow-hail configuration agreed better with observations in terms of rainfall intensity and the narrow convective line than did simulations with the cloud ice-snow-graupel and cloud ice-snow (i.e., 2ICE) configurations. This is because the Goddard 3ICE-hail configuration has denser precipitating ice particles (hail) with very fast fall speeds (over 10 m/s). For an Atlantic hurricane case, the Goddard microphysical scheme (with 3ICE-hail, 3ICE-graupel and 2ICE configurations) had no significant impact on the track forecast but did affect the intensity slightly. The Goddard scheme is also compared with WRF's three other 3ICE bulk microphysical schemes: WSM6, Purdue-Lin and Thompson. For the summer midlatitude convective line system, all of the schemes resulted in simulated precipitation events that were elongated in a southwest-northeast direction, in qualitative agreement with the observed feature. However, the Goddard 3ICE-hail and Thompson schemes were closest to the observed rainfall intensities, although the Goddard scheme simulated more heavy rainfall (over 48 mm/h). For the Atlantic hurricane case, none of the schemes had a significant impact on the track forecast; however, the simulated intensity using the Purdue-Lin scheme was much stronger than with the other schemes. The vertical distributions of model-simulated cloud species (e.g., snow) are quite sensitive to the microphysical schemes, which is an issue for future verification against satellite retrievals. Both the Purdue-Lin and WSM6 schemes simulated very little snow compared to the other schemes for both the midlatitude convective line and the hurricane case. Sensitivity tests with these two schemes showed that increasing the snow intercept, turning off the auto-conversion from snow to graupel, eliminating dry growth, and reducing the transfer processes from cloud-sized particles to precipitation-sized ice collectively resulted in a net increase in those schemes' snow amounts.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
Notes on two multiparty quantum secret sharing schemes
NASA Astrophysics Data System (ADS)
Gao, Gan
In the paper [H. Abulkasim et al., Int. J. Quantum Inform. 15 (2017) 1750023], Abulkasim et al. proposed a quantum secret sharing scheme based on Bell states. We study the security of the multiparty case in the proposed scheme and find that it is not secure. In the paper [Y. Du and W. Bao, Opt. Commun. 308 (2013) 159], Du and Bao presented Gao's scheme and gave an attack strategy on the scheme they listed. We point out that the scheme they listed is not the genuine Gao scheme and that their research method is not advisable.
Elves, Forbush Decreases and Solar Activity Studies at the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Colalillo, Roberta
The Pierre Auger Observatory, designed to observe cosmic rays at the highest energies, can also serve as a valid ground-based instrument for the observation of transient luminous events and for studying the modulation of galactic cosmic rays due to solar activity. The Fluorescence Detector can observe elves, transient luminous emissions from altitudes between 80 and 95 km above sea level, with timescales of tens of microseconds, which are triggered by lightning activity. A dedicated trigger and an extended readout scheme were introduced to enhance the detection efficiency for these events and to improve the knowledge of some of their peculiar characteristics. The low-energy mode of the Surface Detector, on the other hand, records variations in the flux of low-energy secondary particles in extreme detail. With the Scaler mode, it is possible to register the rate of signals for deposited energies between 15 and 100 MeV; the Histogram mode, using the calibration peak and charge histograms of the individual pulses detected by each water-Cherenkov station, covers different deposited energy ranges up to 1 GeV. The variations in the flux of galactic cosmic rays have been studied on short and intermediate time scales (Forbush decreases), and a long-term analysis, which shows the sensitivity of the Observatory to the solar cycle variation, is also in progress.
Clothier, Hazel J; Crawford, Nigel W; Russell, Melissa; Kelly, Heath; Buttery, Jim P
2017-06-01
Australia is traditionally an early adopter of vaccines; therefore, comprehensive and effective post-licensure vaccine pharmacovigilance is critical to maintain confidence in immunisation, both nationally and internationally. With adverse event following immunisation (AEFI) surveillance being the responsibility of Australian jurisdictions, Victoria operates an enhanced passive AEFI surveillance system integrated with clinical services, called 'SAEFVIC' (Surveillance of Adverse Events Following Vaccination In the Community). The aim of this study was to evaluate Victoria's current AEFI surveillance system 'SAEFVIC' and inform ongoing quality improvement of vaccine pharmacovigilance in Victoria and Australia. We conducted a retrospective structured desktop evaluation of AEFI reporting received by SAEFVIC from 2007 to 2014, to evaluate the system according to its stated objectives, i.e. to improve AEFI reporting, provide AEFI signal detection, and maintain consumer confidence in vaccination. AEFI reporting has tripled since SAEFVIC commenced (incidence risk ratio [IRR] 3.04, 95% confidence interval [CI] 2.35-3.93), raising Victoria to be the lead jurisdiction by AEFI reporting volume and to rank third by population reporting rate nationally. The largest increase was observed in children. Data were utilised to investigate potential signal events and inform vaccine policy. Signal detection required clinical suspicion by surveillance nurses, or prior vaccine-specific concerns. Subsequent vaccination post-AEFI was documented for 56.2% (95% CI 54.1-58.4) of reports, and the proportion of children due or overdue for vaccination was 2.3% higher for those reporting AEFI compared with the general population. SAEFVIC has improved AEFI surveillance, facilitates signal investigation and validation, and supports consumer confidence in immunisation. Expansion of the system nationally has the potential to improve the capacity and capability of vaccine pharmacovigilance, particularly through data consistency and jurisdictional comparability in Australia.
The Onset of a Novel Environmental Offset: A case study for diverse pollutant scheme in Australia.
NASA Astrophysics Data System (ADS)
Sengupta, A.; Arora, M.; Delbridge, N.; Pettigrove, V.; Feldman, D.
2014-12-01
Environmental offset schemes employ a crediting system to mitigate the impacts of pollutants. In this talk, we present a novel trade-off concept comparing diverse groups of pollutants: environmental flows, micropollutants (heavy metals, pesticides, estrogen compounds) and nutrients in a test watershed (Jacksons Creek) in the vicinity of Melbourne. A reservoir in the upper watershed and a wastewater treatment plant (WTP) are the main sources of flow into Jacksons Creek. The current land use is a mix of agricultural and rural, though rapid urbanization is anticipated, with a 40% increase in the population by 2040. The creek is impacted by: 1) low flow, especially during dry periods (the contribution from the reservoir drops dramatically), 2) nutrient enrichment (WTP and agricultural runoff), and 3) micropollutants: heavy metals (urban runoff), estrogenic compounds (WTP), and pesticides (agricultural runoff). In this offset framework, we evaluated current and future scenarios to identify the main stressor in Jacksons Creek. We collected monitoring data at 15 sites for 3 separate events. We then developed a watershed model to assess sources of pollutant loads to the creek, using two different tools: the Model for Urban Stormwater Improvement Conceptualisation (MUSIC) for the preliminary flow and water quality modeling, and eWater Source for integrated water resource management (IWRM) and a decision support system for stakeholders. Scenario analysis includes urbanization and population growth, and anticipated discharges from the WTP and the reservoir. Measured nutrient concentrations were high for all sampling events. Micropollutants were detected at concentrations higher than the trigger value at several locations. Preliminary analysis shows that low flow is one of the major stressors in the creek, causing elevated micropollutant and nutrient concentrations (non-point), and that discharge from the WTP is essential to maintain the minimum environmental flows, though nutrient enrichment downstream could occur. This study demonstrates an innovative case for evaluating net environmental benefits and might hold important lessons for the design of offset schemes in comparable environments elsewhere.
Design of fire detection equipment based on ultraviolet detection technology
NASA Astrophysics Data System (ADS)
Liu, Zhenji; Liu, Jin; Chu, Sheng; Ping, Chao; Yuan, Xiaobing
2015-03-01
Utilizing the wide-bandgap semiconductor MgZnO, a mid-ultraviolet-band (MUV) ultraviolet detector was researched and developed, and it passed simulation experiments under solar illumination. Based on this ultraviolet detector, a design scheme for a gunshot detection device is given; the device is composed of twelve ultraviolet detectors, a signal amplifier, a processor, an annunciator, an azimuth indicator, and a bracket. Through analysis of the solar-blind feature, the ultraviolet responsivity, the flash characteristics of gunshots, and the detection distance, the feasibility of this design scheme is demonstrated.
NASA Astrophysics Data System (ADS)
Gamer, L.; Schulz, D.; Enss, C.; Fleischmann, A.; Gastaldo, L.; Kempf, S.; Krantz, C.; Novotný, O.; Schwalm, D.; Wolf, A.
2016-08-01
We present the design of MOCCA, a large-area particle detector that is developed for the position- and energy-resolving detection of neutral molecule fragments produced in electron-ion interactions at the Cryogenic Storage Ring at the Max Planck Institute for Nuclear Physics in Heidelberg. The detector is based on metallic magnetic calorimeters and consists of 4096 particle absorbers covering a total detection area of 44.8 mm × 44.8 mm. Groups of four absorbers are thermally coupled to a common paramagnetic temperature sensor, where the strength of the thermal link is different for each absorber. This allows attributing a detector event within this group to the corresponding absorber by discriminating the signal rise times. A novel readout scheme further allows reading out all 1024 temperature sensors, which are arranged in a 32 × 32 square array, using only 16+16 current-sensing superconducting quantum interference devices. Numerical calculations taking into account a simplified detector model predict an energy resolution of ΔE_FWHM ≤ 80 eV for all pixels of this detector.
Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander
2016-11-21
Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
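A minimal sketch of a location-based trigger rule of the kind compared above: fire a self-report only when the current position falls in a rarely visited land-use class and the previous report is old enough. The land-use lookup, classes, and timing constant are hypothetical stand-ins for a real geodatabase query:

    import time

    RARE_LAND_USE = {"park", "forest", "water"}
    MIN_GAP_S = 40 * 60                  # at least 40 min between reports

    def land_use_at(lat, lon):
        # hypothetical stand-in for a real land-use database query
        return "park" if (lat > 49.40 and lon > 8.67) else "residential"

    def should_trigger(lat, lon, last_report_ts, now=None):
        now = time.time() if now is None else now
        if now - last_report_ts < MIN_GAP_S:
            return False
        return land_use_at(lat, lon) in RARE_LAND_USE

    print(should_trigger(49.41, 8.68, last_report_ts=0))   # True -> trigger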
NASA Technical Reports Server (NTRS)
Reed, M. A.
1974-01-01
The need for an obstacle detection system on the Mars roving vehicle was assumed, and a practical scheme was investigated and simulated. The principal sensing device on this vehicle was taken to be a laser range finder. Both existing and original algorithms, ending with thresholding operations, were used to obtain the outlines of obstacles from the raw data of this laser scan. A theoretical analysis was carried out to show how proper value of threshold may be chosen. Computer simulations considered various mid-range boulders, for which the scheme was quite successful. The extension to other types of obstacles, such as craters, was considered. The special problems of bottom edge detection and scanning procedure are discussed.
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed-signal microelectronic circuits at the transistor level. As analog cells can behave in an unpredictable way when a particle strikes critical areas, designers need a software tool that allows an automatic and exhaustive analysis of Single-Event Effect influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions, and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also because a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Gu, Guojun; Nelkin, Eric J.; Bowman, Kenneth P.; Stocker, Erich; Wolff, David B.
2006-01-01
The TRMM Multi-satellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining multiple precipitation estimates from satellites, as well as gauge analyses where feasible, at fine scales (0.25 degrees x 0.25 degrees and 3-hourly). It is available both after and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at present. The data set covers the latitude band 50 degrees N-S for the period 1998 to the delayed present. Early validation results are as follows: The TMPA provides reasonable performance at monthly scales, although it is shown to have a precipitation-rate-dependent low bias due to lack of sensitivity to low precipitation rates in one of the input products (based on AMSU-B). At finer scales the TMPA is successful at approximately reproducing the surface-observation-based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other fine-scale estimators. Examples are provided of a flood event and diurnal cycle determination.
A Blind Reversible Robust Watermarking Scheme for Relational Databases
Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen
2013-01-01
Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as “histogram shifting of adjacent pixel difference” (APD), is used to obtain reversibility. The proposed scheme can detect successfully 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks. PMID:24223033
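The histogram-shifting idea can be illustrated on a one-dimensional sequence: differences equal to the histogram peak each carry one payload bit, larger differences are shifted by one to make room, and the process inverts exactly. This generic sketch ignores overflow handling and is not the paper's APD implementation:

    import numpy as np

    def embed(seq, bits, peak=0):
        d, out = np.diff(seq), []
        it = iter(bits)                  # one bit per peak-valued difference
        for v in d:
            if v == peak:
                out.append(v + next(it)) # peak carries the payload bit
            elif v > peak:
                out.append(v + 1)        # shift to keep decoding unambiguous
            else:
                out.append(v)
        return np.concatenate(([seq[0]], seq[0] + np.cumsum(out)))

    def extract(marked, peak=0):
        bits, d0 = [], []
        for v in np.diff(marked):
            if v == peak:
                bits.append(0); d0.append(peak)
            elif v == peak + 1:
                bits.append(1); d0.append(peak)
            elif v > peak + 1:
                d0.append(v - 1)
            else:
                d0.append(v)
        return bits, np.concatenate(([marked[0]], marked[0] + np.cumsum(d0)))

    seq = np.array([10, 10, 12, 12, 15, 15, 15])
    marked = embed(seq, bits=[1, 0, 1, 1])  # capacity = #peak differences
    bits, restored = extract(marked)
    print(bits, np.array_equal(restored, seq))   # [1, 0, 1, 1] True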
Development of a fully automatic scheme for detection of masses in whole breast ultrasound images.
Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako
2007-11-01
Ultrasonography has been used for breast cancer screening in Japan. Screening using a conventional hand-held probe is operator dependent, and thus it is possible that some areas of the breast may not be scanned. To overcome such problems, a mechanical whole breast ultrasound (US) scanner has been proposed and developed for screening purposes. However, another issue is that radiologists might tire while interpreting all images in a large-volume screening; this increases the likelihood that masses may remain undetected. Therefore, the aim of this study is to develop a fully automatic scheme for the detection of masses in whole breast US images in order to assist the interpretations of radiologists and potentially improve the screening accuracy. The authors' database comprised 109 whole breast US images, which include 36 masses (16 malignant masses, 5 fibroadenomas, and 15 cysts). A whole breast US image with 84 slice images (interval between two slice images: 2 mm) was obtained by the ASU-1004 US scanner (ALOKA Co., Ltd., Japan). A feature based on the edge directions in each slice and a method for subtracting between the slice images were used for the detection of masses in the authors' proposed scheme. The Canny edge detector was applied to detect edges in US images; these edges were classified as near-vertical edges or near-horizontal edges using a morphological method. The positions of mass candidates were located using the near-vertical edges as a cue. Then, the located positions were segmented by the watershed algorithm, and mass candidate regions were detected using the segmented regions and the low-density regions extracted by the slice subtraction method. For the removal of false positives (FPs), rule-based schemes and a quadratic discriminant analysis were applied for discrimination between masses and FPs. As a result, the sensitivity of the authors' scheme for the detection of masses was 80.6% (29/36) with 3.8 FPs per whole breast image. The authors' computer-aided detection scheme may be useful in improving the screening performance and efficiency.
Data quality enhancement and knowledge discovery from relevant signals in acoustic emission
NASA Astrophysics Data System (ADS)
Mejia, Felipe; Shyu, Mei-Ling; Nanni, Antonio
2015-10-01
The increasing popularity of structural health monitoring has brought with it a growing need for automated data management and data analysis tools. Of great importance are filters that can systematically detect unwanted signals in acoustic emission datasets. This study presents a semi-supervised data mining scheme that detects data belonging to unfamiliar distributions. This type of outlier detection scheme is useful for detecting the presence of new acoustic emission sources, given a training dataset of unwanted signals. In addition to classifying new observations (herein referred to as "outliers") within a dataset, the scheme generates a decision tree that classifies sub-clusters within the outlier context set. The obtained tree can be interpreted as a series of characterization rules for newly observed data, which can potentially describe the basic structure of different modes within the outlier distribution. The data mining scheme is first validated on a synthetic dataset, and an attempt is made to confirm the algorithm's ability to discriminate outlier acoustic emission sources from a controlled pencil-lead-break experiment. Finally, the scheme is applied to data from two fatigue crack-growth steel specimens, where it is shown that extracted rules can adequately describe crack-growth-related acoustic emission sources while filtering out background "noise." Results show promising performance in filter generation, thereby allowing analysts to extract, characterize, and focus only on meaningful signals.
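A minimal sketch of the two-stage idea: flag observations that look unlike a training set of unwanted signals, then fit a small decision tree to the flagged set to obtain readable characterization rules. IsolationForest and KMeans stand in for the paper's semi-supervised detector and sub-clustering; the two features and all data are synthetic:

    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(4)
    noise = rng.normal(0, 1, (300, 2))      # training set of unwanted signals
    novel = rng.normal(6, 0.5, (30, 2))     # a new acoustic emission source
    detector = IsolationForest(random_state=0).fit(noise)

    stream = np.vstack([noise[:50], novel])
    outliers = stream[detector.predict(stream) == -1]   # -1 marks outliers

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(outliers)
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(outliers, labels)
    print(export_text(tree, feature_names=["amplitude", "duration"]))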
A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques
NASA Astrophysics Data System (ADS)
Schiabel, Homero; Matheus, Bruno R. N.; Angelo, Michele F.; Patrocínio, Ana Claudia; Ventura, Liliane
2011-03-01
As all women over the age of 40 are recommended to undergo mammographic exams every two years, the demands on radiologists to evaluate mammographic images in short periods of time have increased considerably. As a tool to improve quality and accelerate analysis, CADe/Dx (computer-aided detection/diagnosis) schemes have been investigated, but very few complete CADe/Dx schemes have been developed, and most are restricted to detection and not diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme developed by our research group, integrated with an imaging quality evaluation process. The basic structure consists of pre-processing modules based on image acquisition and digitization procedures (FFDM, CR or film + scanner), a segmentation tool to detect clustered microcalcifications and suspect masses, and a classification scheme, which evaluates both the presence of microcalcification clusters and possible malignant masses based on their contours. The aim is to provide enough information not only on the detected structures but also a pre-report with a BI-RADS classification. At this time the system still lacks an interface integrating all the modules. Despite this, it is functional as a prototype for clinical practice testing, with results comparable to others reported in the literature.
NASA Astrophysics Data System (ADS)
Nishikawa, Robert M.; Giger, Maryellen L.; Doi, Kunio; Vyborny, Carl J.; Schmidt, Robert A.; Metz, Charles E.; Wu, Chris Y.; Yin, Fang-Fang; Jiang, Yulei; Huo, Zhimin; Lu, Ping; Zhang, Wei; Ema, Takahiro; Bick, Ulrich; Papaioannou, John; Nagel, Rufus H.
1993-07-01
We are developing an 'intelligent' workstation to assist radiologists in diagnosing breast cancer from mammograms. The hardware for the workstation will consist of a film digitizer, a high speed computer, a large volume storage device, a film printer, and 4 high resolution CRT monitors. The software for the workstation is a comprehensive package of automated detection and classification schemes. Two rule-based detection schemes have been developed, one for breast masses and the other for clustered microcalcifications. The sensitivity of both schemes is 85% with a false-positive rate of approximately 3.0 and 1.5 false detections per image, for the mass and cluster detection schemes, respectively. Computerized classification is performed by an artificial neural network (ANN). The ANN has a sensitivity of 100% with a specificity of 60%. Currently, the ANN, which is a three-layer, feed-forward network, requires as input ratings of 14 different radiographic features of the mammogram that were determined subjectively by a radiologist. We are in the process of developing automated techniques to objectively determine these 14 features. The workstation will be placed in the clinical reading area of the radiology department in the near future, where controlled clinical tests will be performed to measure its efficacy.
An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery
Leng, Xiangguang; Ji, Kefeng; Zhou, Shilin; Xing, Xiangwei; Zou, Huanxin
2016-01-01
With the rapid development of spaceborne synthetic aperture radar (SAR) and the increasing need of ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery. It is able to process a wide range of sensors, imaging modes and resolutions. Two main stages are identified in this paper, namely: ship candidate detection and ship discrimination. Firstly, this paper proposes an adaptive land masking method using ship size and pixel size. Secondly, taking into account the imaging mode, incidence angle, and polarization channel of SAR imagery, it implements adaptive ship candidate detection in spaceborne SAR imagery by applying different strategies to different resolution SAR images. Finally, aiming at different types of typical false alarms, this paper proposes a comprehensive ship discrimination method in spaceborne SAR imagery based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the adaptive scheme proposed in this paper is able to detect ship targets in a fast, efficient and robust way. PMID:27563902
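A minimal sketch of an adaptive intensity test for the ship-candidate stage, using a CFAR-style local rule as a stand-in for the paper's resolution-dependent strategies: a pixel is a candidate when it exceeds the local clutter mean by k local standard deviations. The window size and k are illustrative, and a real detector would also use a guard band around the target:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def ship_candidates(img, win=21, k=5.0):
        mean = uniform_filter(img, size=win)
        mean_sq = uniform_filter(img ** 2, size=win)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 1e-12))
        return img > mean + k * std

    rng = np.random.default_rng(5)
    sea = rng.gamma(shape=1.0, scale=1.0, size=(128, 128))  # speckle-like clutter
    sea[60:63, 60:64] += 40.0                               # a bright ship
    print(ship_candidates(sea).sum(), "candidate pixels near (60, 60)")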
NASA Astrophysics Data System (ADS)
Kwon, Seong Kyung; Hyun, Eugin; Lee, Jin-Hee; Lee, Jonghun; Son, Sang Hyuk
2017-11-01
Object detection is a critical technology for the safety of pedestrians and drivers in autonomous vehicles. Above all, occluded pedestrian detection is still a challenging topic. We propose a new detection scheme for occluded pedestrians by means of lidar-radar sensor fusion. In the proposed method, the lidar and radar regions of interest (RoIs) are selected based on the respective sensor measurements. Occluded depth is a new means to determine whether an occluded target exists. The occluded depth is a region projected out by extending the longitudinal distance while maintaining the angle formed by the two outermost end points of the lidar RoI. The occlusion RoI is the overlapping region made by superimposing the radar RoI and the occluded depth. An object within the occlusion RoI is detected from the radar measurement information, and the occluded object is classified as a pedestrian based on the human Doppler distribution. Additionally, various experiments were performed on detecting a partially occluded pedestrian in outdoor as well as indoor environments. According to the experimental results, the proposed sensor fusion scheme has much better detection performance than the case without it.
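A minimal sketch of the occluded-depth test in 2-D bird's-eye coordinates: keep the angular sector spanned by the two outermost lidar RoI points, extend it beyond the occluder in range, and check whether a radar return falls inside. The geometry helpers and range limit are illustrative assumptions:

    import math

    def sector_of(points):
        angles = [math.atan2(y, x) for x, y in points]
        ranges = [math.hypot(x, y) for x, y in points]
        return min(angles), max(angles), max(ranges)

    def in_occluded_depth(pt, lidar_roi, max_range=80.0):
        a_min, a_max, r_near = sector_of(lidar_roi)
        a, r = math.atan2(pt[1], pt[0]), math.hypot(pt[0], pt[1])
        return a_min <= a <= a_max and r_near < r <= max_range

    lidar_roi = [(4.8, -0.6), (5.0, 0.5)]   # occluding object's end points [m]
    radar_hit = (9.5, 0.1)                  # radar return behind the occluder
    if in_occluded_depth(radar_hit, lidar_roi):
        print("radar target in occlusion RoI -> test Doppler for pedestrian")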
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Tariq; Majumdar, Shantanu; Udpa, Lalita
2012-05-17
The objective of this work is to develop processing algorithms to detect and localize flaws using ultrasonic phased-array data. Data were collected on cast austenitic stainless steel (CASS) weld specimens on loan from the U.S. nuclear power industry's Pressurized Water Reactor Owners Group (PWROG) traveling specimen set. Each specimen consists of a centrifugally cast stainless steel (CCSS) pipe section welded to a statically cast (SCSS) or wrought (WRSS) section. The paper presents a novel automated flaw detection and localization scheme using low-frequency ultrasonic phased-array inspection signals from the weld and heat-affected zone of the base materials. The major steps of the overall scheme are preprocessing and region of interest (ROI) detection, followed by the Hilbert-Huang transform (HHT) of A-scans in the detected ROIs. The HHT offers a time-frequency-energy distribution for each ROI. The accumulation of energy in a particular frequency band is used as a classification feature for the particular ROI.
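The band-energy feature can be sketched for a single intrinsic mode function: form the analytic signal, estimate instantaneous amplitude and frequency, and accumulate energy inside a band of interest. The synthetic tone stands in for an EMD/HHT mode, and the band edges are illustrative:

    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0
    t = np.arange(0, 1.0, 1 / fs)
    imf = np.sin(2 * np.pi * 80 * t) * np.exp(-2 * t)  # decaying 80 Hz mode

    analytic = hilbert(imf)
    amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

    band = (inst_freq > 60) & (inst_freq < 100)
    band_energy = np.sum(amp[1:][band] ** 2) / fs
    print(f"energy in 60-100 Hz band: {band_energy:.4f}")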
Enhanced DNA Sensing via Catalytic Aggregation of Gold Nanoparticles
Huttanus, Herbert M.; Graugnard, Elton; Yurke, Bernard; Knowlton, William B.; Kuang, Wan; Hughes, William L.; Lee, Jeunghoon
2014-01-01
A catalytic colorimetric detection scheme that incorporates a DNA-based hybridization chain reaction into gold nanoparticles was designed and tested. While direct aggregation forms an inter-particle linkage from only one target DNA strand, catalytic aggregation forms multiple linkages from a single target DNA strand. Gold nanoparticles were functionalized with thiol-modified DNA strands capable of undergoing hybridization chain reactions. The changes in their absorption spectra were measured at different times and target concentrations and compared against direct aggregation. Catalytic aggregation showed a multifold increase in sensitivity at low target concentrations when compared to direct aggregation. Gel electrophoresis was performed to compare DNA hybridization reactions in the catalytic and direct aggregation schemes, and product formation was confirmed in the catalytic aggregation scheme at low levels of target concentration. The catalytic aggregation scheme also showed high target specificity. This application of a DNA reaction network to gold nanoparticle-based colorimetric detection enables highly sensitive, field-deployable, colorimetric readout systems capable of detecting a variety of biomolecules. PMID:23891867
Lin, Yuehe; Bennett, Wendy D.; Timchalk, Charles; Thrall, Karla D.
2004-03-02
Microanalytical systems based on a microfluidics/electrochemical detection scheme are described. Individual modules, such as microfabricated piezoelectrically actuated pumps and a microelectrochemical cell, were integrated onto portable platforms. This allowed rapid change-out and repair of individual components by incorporating "plug and play" concepts now standard in PCs. Different integration schemes were used for construction of the microanalytical systems based on microfluidics/electrochemical detection. In one scheme, all individual modules were integrated on the surface of the standard microfluidic platform based on a plug-and-play design. A microelectrochemical flow cell integrating three electrodes based on a wall-jet design was fabricated on a polymer substrate. The microelectrochemical flow cell was then plugged directly into the microfluidic platform. Another integration scheme was based on a multilayer lamination method utilizing stacked modules with different functionality to achieve a compact microanalytical device. Application of the microanalytical system for detection of lead in, for example, river water and saliva samples using stripping voltammetry is described.
An adaptive morphological gradient lifting wavelet for detecting bearing defects
NASA Astrophysics Data System (ADS)
Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng
2012-05-01
This paper presents a novel wavelet decomposition scheme, named the adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in its ability to select between two filters, namely the average filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from a bearing are employed to evaluate and compare the proposed AMGLW scheme with the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW clearly outperforms the LWT and ALW for detecting bearing defects. The impulsive components can be enhanced and the noise depressed simultaneously by the presented AMGLW scheme. Thus the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computational efficiency. It is quite suitable for online condition monitoring of bearings and other rotating machinery.
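A minimal sketch of one adaptive lifting step: split into even/odd samples, predict the odd samples from the even ones, then update the approximation with either an averaging or a morphological-gradient operator depending on the local gradient magnitude. The filters and the switching threshold are illustrative, not the paper's exact operators:

    import numpy as np

    def adaptive_lifting_step(x, grad_thresh=1.0):
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        n = min(len(even), len(odd))
        even, odd = even[:n], odd[:n]
        detail = odd - even                       # simple prediction step
        grad = np.abs(np.gradient(even))
        avg_update = even + 0.5 * detail          # average filter branch
        dil = np.maximum(np.maximum(np.roll(even, 1), even), np.roll(even, -1))
        ero = np.minimum(np.minimum(np.roll(even, 1), even), np.roll(even, -1))
        morph_update = even + 0.5 * (dil - ero)   # morphological gradient branch
        approx = np.where(grad > grad_thresh, morph_update, avg_update)
        return approx, detail

    sig = np.sin(np.linspace(0, 8 * np.pi, 256))
    sig[128] += 5.0                               # bearing-defect-like impulse
    approx, detail = adaptive_lifting_step(sig)
    print("max detail coefficient:", np.abs(detail).max())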
Subranging scheme for SQUID sensors
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor)
2008-01-01
A readout scheme for measuring the output from a SQUID-based sensor array using an improved subranging architecture that includes multiple resolution channels (such as a coarse resolution channel and a fine resolution channel). The scheme employs a flux sensing circuit with a sensing coil connected in series to multiple input coils, each input coil being coupled to a corresponding SQUID detection circuit having a high-resolution SQUID device with independent linearizing feedback. A two-resolution configuration (coarse and fine) is illustrated with a primary SQUID detection circuit for generating a fine readout, and a secondary SQUID detection circuit for generating a coarse readout, both having feedback current coupled to the respective SQUID devices via feedback/modulation coils. The primary and secondary SQUID detection circuits operate with independently derived feedback. Thus, the SQUID devices may be monitored independently of each other (and read simultaneously) to dramatically increase slew rates and dynamic range.
NASA Astrophysics Data System (ADS)
Atta Yaseen, Amer; Bayart, Mireille
2017-01-01
In this work, a new approach is introduced as a development of the attack-tolerant scheme in the Networked Control System (NCS). The objective is to be able to detect an attack such as the Stuxnet case, where the controller is reprogrammed and hijacked. Besides the ability to detect the stealthy controller hijacking attack, the advantage of this approach is that no a priori mathematical model of the controller is needed. To implement the proposed scheme, a specific detector for the controller hijacking attack is designed. The performance of the scheme is evaluated by connecting the detector to an NCS with basic security elements such as the Data Encryption Standard (DES), Message Digest (MD5), and timestamps. The detector is tested along with a networked PI controller under a stealthy hijacking attack. The test results show that the hijacked controller can be reliably detected and recovered.
Ding, Chao; Yang, Lijun; Wu, Meng
2017-01-01
Due to the unattended nature and poor security guarantee of the wireless sensor networks (WSNs), adversaries can easily make replicas of compromised nodes, and place them throughout the network to launch various types of attacks. Such an attack is dangerous because it enables the adversaries to control large numbers of nodes and extend the damage of attacks to most of the network with quite limited cost. To stop the node replica attack, we propose a location similarity-based detection scheme using deployment knowledge. Compared with prior solutions, our scheme provides extra functionalities that prevent replicas from generating false location claims without deploying resource-consuming localization techniques on the resource-constraint sensor nodes. We evaluate the security performance of our proposal under different attack strategies through heuristic analysis, and show that our scheme achieves secure and robust replica detection by increasing the cost of node replication. Additionally, we evaluate the impact of network environment on the proposed scheme through theoretic analysis and simulation experiments, and indicate that our scheme achieves effectiveness and efficiency with substantially lower communication, computational, and storage overhead than prior works under different situations and attack strategies. PMID:28098846
Wavelet Representation of the Corneal Pulse for Detecting Ocular Dicrotism
Melcer, Tomasz; Danielewska, Monika E.; Iskander, D. Robert
2015-01-01
Purpose To develop a reliable and powerful method for detecting ocular dicrotism from non-invasively acquired signals of corneal pulse, without knowledge of the underlying cardiopulmonary information present in signals of ocular blood pulse and the electrical heart activity. Methods Retrospective data from a study on glaucomatous and age-related changes in corneal pulsation [PLOS ONE 9(7), (2014): e102814] involving 261 subjects were used. A continuous wavelet transform (CWT) representation of the derivative of the corneal pulse signal was considered, with a complex Gaussian derivative function chosen as the mother wavelet. A gray-level co-occurrence matrix was applied to the heat-map images of the CWT to yield a set of parameters for devising ocular dicrotic pulse detection schemes based on Conditional Inference Tree and Random Forest models. The detection scheme was first tested on synthetic signals resembling those of a dicrotic and a non-dicrotic ocular pulse before being used on all 261 real recordings. Results A detection scheme based on a single feature of the CWT of the corneal pulse signal resulted in a low detection rate. Combining a set of texture features (homogeneity, correlation, energy, and contrast) resulted in a high detection rate reaching 93%. Conclusion It is possible to reliably detect a dicrotic ocular pulse from signals of corneal pulsation without the need to acquire additional signals related to heart activity, which was the previous state of the art. The proposed scheme can be applied to other non-stationary biomedical signals related to ocular dynamics. PMID:25906236
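A minimal sketch of the texture-feature step: quantize a CWT heat-map, compute the four co-occurrence properties named above, and feed them to a random forest. The synthetic maps and labels are stand-ins for the corneal-pulse recordings:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import RandomForestClassifier

    def glcm_features(heatmap, levels=16):
        q = (heatmap * (levels - 1)).astype(np.uint8)  # quantize to gray levels
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                            symmetric=True, normed=True)
        props = ("homogeneity", "correlation", "energy", "contrast")
        return [graycoprops(glcm, p)[0, 0] for p in props]

    rng = np.random.default_rng(7)
    X = [glcm_features(rng.random((32, 32))) for _ in range(40)]
    y = rng.integers(0, 2, 40)            # dicrotic / non-dicrotic labels
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))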
Herz, Markus; Bouvron, Samuel; Ćavar, Elizabeta; Fonin, Mikhail; Belzig, Wolfgang; Scheer, Elke
2013-10-21
We present a measurement scheme that enables quantitative detection of the shot noise in a scanning tunnelling microscope while scanning the sample. As test objects we study defect structures produced on an iridium single crystal at low temperatures. The defect structures appear in the constant current images as protrusions with curvature radii well below the atomic diameter. The measured power spectral density of the noise is very near to the quantum limit with Fano factor F = 1. While the constant current images show detailed structures expected for tunnelling involving d-atomic orbitals of Ir, we find the current noise to be without pronounced spatial variation as expected for shot noise arising from statistically independent events.
NASA Technical Reports Server (NTRS)
Berg, M.; Kim, H.; Phan, A.; Seidleck, C.; LaBel, K.; Pellish, J.; Campola, M.
2015-01-01
Space applications are complex systems that require intricate trade analyses for optimum implementations. We focus on a subset of the trade process, using classical reliability theory and single-event upset (SEU) data, to illustrate appropriate triple modular redundancy (TMR) scheme selection.
Mesoscale data assimilation for a local severe rainfall event with the NHM-LETKF system
NASA Astrophysics Data System (ADS)
Kunii, M.
2013-12-01
This study aims to improve forecasts of local severe weather events through data assimilation and ensemble forecasting approaches. Here, the local ensemble transform Kalman filter (LETKF) is implemented with the Japan Meteorological Agency's nonhydrostatic model (NHM). The newly developed NHM-LETKF contains an adaptive inflation scheme and a spatial covariance localization scheme based on physical distance. One-way nested analysis, in which a finer-resolution LETKF is conducted by using the outputs of an outer model, also becomes feasible. These new capabilities should enhance the potential of the LETKF for convective-scale events. The NHM-LETKF is applied to a local severe rainfall event in Japan in 2012. Comparison of the root mean square errors between the model first guess and the analysis reveals that the system assimilates observations appropriately. Analysis ensemble spreads indicate a significant increase around the time torrential rainfall occurred, which would imply an increase in the uncertainty of the environmental fields. Forecasts initialized with LETKF analyses successfully capture intense rainfall, suggesting that the system can work effectively for local severe weather. Investigation of probabilistic forecasts by ensemble forecasting indicates that this could become a reliable data source for decision making in the future. A one-way nested data assimilation scheme is also tested. The experiment results demonstrate that assimilation with a finer-resolution model provides an advantage in the quantitative precipitation forecasting of local severe weather conditions.
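For readers unfamiliar with the ensemble transform machinery, the sketch below shows a minimal global ETKF analysis step, the update that an LETKF applies independently within each localized patch. It omits covariance localization and adaptive inflation, assumes a linear observation operator, and all names are illustrative rather than taken from the NHM-LETKF code.

```python
# Minimal global ETKF analysis step (a sketch of the update LETKF performs
# per local patch; no localization or adaptive inflation is included here).
import numpy as np

def etkf_analysis(Xb, yo, H, R):
    """Xb: (n, k) background ensemble; yo: (p,) obs; H: (p, n); R: (p, p)."""
    n, k = Xb.shape
    xbm = Xb.mean(axis=1)
    Xp = Xb - xbm[:, None]                 # background perturbations
    Yb = H @ Xb
    ybm = Yb.mean(axis=1)
    Yp = Yb - ybm[:, None]                 # perturbations in observation space
    Rinv = np.linalg.inv(R)
    A = (k - 1) * np.eye(k) + Yp.T @ Rinv @ Yp
    evals, evecs = np.linalg.eigh(A)       # A is symmetric positive definite
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T
    wm = Pa @ (Yp.T @ Rinv @ (yo - ybm))   # mean update weights
    W = evecs @ np.diag(np.sqrt((k - 1) / evals)) @ evecs.T  # sqrt((k-1)Pa)
    return xbm[:, None] + Xp @ (wm[:, None] + W)  # analysis ensemble (n, k)
```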
Narayanan, Vignesh; Jagannathan, Sarangapani
2017-06-08
This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward in time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic and result in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and that the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.
NASA Astrophysics Data System (ADS)
Murray, Jon E.; Brindley, Helen E.; Bryant, Robert G.; Russell, Jacqui E.; Jenkins, Katherine F.
2013-04-01
Understanding the processes governing the availability and entrainment of mineral dust into the atmosphere requires dust sources to be identified and the evolution of dust events to be monitored. To achieve this aim a wide range of approaches has been developed utilising observations from a variety of different satellite sensors. Global maps of source regions and their relative strengths have been derived from instruments in low Earth orbit (e.g. Total Ozone Monitoring Spectrometer (TOMS) (Prospero et al., 2002), MODerate resolution Imaging Spectrometer (MODIS) (Ginoux et al., 2012)). Instruments such as MODIS can also be used to locate sources more precisely (Baddock et al., 2009), but the information available is restricted to the satellite overpass times, which may not coincide with active dust emission from the source. Hence, at a regional scale, some of the more successful approaches used to characterise the activity of different sources use the high temporal resolution data available from instruments in geostationary orbit. For example, the widely used red-green-blue (RGB) dust scheme developed by Lensky and Rosenfeld (2008) (hereafter LR2008) makes use of observations from selected thermal channels of the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) in a false colour rendering scheme in which dust appears pink. This scheme has provided the basis for numerous studies of north African dust sources and the factors governing their activation (e.g. Schepanski et al., 2007, 2009, 2012). However, the LR2008 imagery can fail to identify dust events due to the effects of atmospheric moisture, variations in dust layer height and optical properties, and surface conditions (Brindley et al., 2012). Here we introduce a new method designed to circumvent some of these issues and enhance the signature of dust events using observations from SEVIRI. The approach involves the derivation of a composite clear-sky signal for selected channels on an individual time-step and pixel basis. These composite signals are subtracted from each observation in the relevant channels to enhance weak transient signals associated with low levels of dust emission. Different channel combinations are then rendered in false colour imagery to better identify dust source locations and activity. We have applied this new clear-sky difference (CSD) algorithm over three key source regions in southern Africa: the Makgadikgadi Basin, Etosha Pan, and the Namibian and western South African coast. Case studies indicate that the advantages associated with the CSD approach include an improved ability to detect dust and distinguish multiple sources, the observation of source activation earlier in the diurnal cycle, and an improved ability to pinpoint dust source locations. These advantages are confirmed by a survey of four years of data, comparing the results obtained using the CSD technique with those derived from LR2008 dust imagery. On average the new algorithm more than doubles the number of dust events identified, with the greatest improvement for the Makgadikgadi Basin and coastal regions. We anticipate exploiting this new activation record derived using the CSD approach to better understand the surface and meteorological conditions controlling dust uplift and subsequent atmospheric transport.
Two particle tracking and detection in a single Gaussian beam optical trap.
Praveen, P; Yogesha; Iyengar, Shruthi S; Bhattacharya, Sarbari; Ananthamurthy, Sharath
2016-01-20
We have studied in detail the situation wherein two microbeads are trapped axially in a single-beam Gaussian intensity profile optical trap. We find that the corner frequency extracted from a power spectral density analysis of intensity fluctuations recorded on a quadrant photodetector (QPD) is dependent on the detection scheme. Using forward- and backscattering detection schemes with single and two laser wavelengths along with computer simulations, we conclude that fluctuations detected in backscattering bear true position information of the bead encountered first in the beam propagation direction. Forward scattering, on the other hand, carries position information of both beads with substantial contribution from the bead encountered first along the beam propagation direction. Mie scattering analysis further reveals that the interference term from the scattering of the two beads contributes significantly to the signal, precluding the ability to resolve the positions of the individual beads in forward scattering. In QPD-based detection schemes, detection through backscattering, thereby, is imperative to track the true displacements of axially trapped microbeads for possible studies on light-mediated interbead interactions.
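The corner-frequency extraction referred to above is conventionally done by fitting a Lorentzian to the power spectral density of the QPD signal. The sketch below shows that standard step on a synthetic stand-in trace; the sampling rate, AR(1) surrogate data, fit band, and initial guesses are assumptions, not values from the paper.

```python
# Sketch: extract a trap corner frequency by fitting a Lorentzian to the
# PSD of a position signal. An AR(1)-filtered noise trace stands in for
# a real QPD recording with a corner near 500 Hz.
import numpy as np
from scipy.signal import welch, lfilter
from scipy.optimize import curve_fit

def lorentzian(f, D, fc):
    # standard trapped-bead PSD model: S(f) = D / (pi^2 (fc^2 + f^2))
    return D / (np.pi**2 * (fc**2 + f**2))

fs = 20_000.0                                 # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
x = lfilter([1.0], [1.0, -0.85], rng.standard_normal(200_000))
f, psd = welch(x, fs=fs, nperseg=4096)
mask = (f > 20) & (f < 8000)                  # avoid DC bin and band edges
popt, _ = curve_fit(lorentzian, f[mask], psd[mask], p0=[1e3, 300.0])
print(f"fitted corner frequency ~ {popt[1]:.0f} Hz")
```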
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
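A minimal sketch of the send-on-delta rule described above: a sample is transmitted only when it differs from the last transmitted value by more than a threshold. The example signal and threshold are illustrative, not taken from the paper.

```python
# Send-on-delta event-triggered sampling: emit (time, value) pairs only
# when the value moves by more than delta from the last transmitted one.
def send_on_delta(samples, delta):
    last_sent = None
    for t, value in enumerate(samples):
        if last_sent is None or abs(value - last_sent) > delta:
            last_sent = value
            yield t, value               # transmit this sample

signal = [0.0, 0.1, 0.4, 0.45, 1.2, 1.25, 0.2]
print(list(send_on_delta(signal, delta=0.3)))
# -> [(0, 0.0), (2, 0.4), (4, 1.2), (6, 0.2)]
```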
Robust Fault Detection and Isolation for Stochastic Systems
NASA Technical Reports Server (NTRS)
George, Jemin; Gregory, Irene M.
2010-01-01
This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.
Mahfuz, Mohammad Upal
2016-10-01
In this paper, expressions for the achievable strength-based detection probabilities of a concentration-encoded molecular communication (CEMC) system have been derived based on a finite-pulsewidth (FP) pulse-amplitude-modulated (PAM) on-off keying (OOK) modulation scheme and a strength threshold. An FP-PAM system is characterized by its duty cycle α, which indicates the fraction of the entire symbol duration during which the transmitter remains on and transmits the signal. Results show that the detection performance of an FP-PAM OOK CEMC system depends significantly on the statistical distribution parameters of the diffusion-based propagation noise and intersymbol interference (ISI). The analytical detection performance of an FP-PAM OOK CEMC system under an ISI scenario has been explained and compared based on receiver operating characteristics (ROC) for impulse (i.e., spike)-modulated (IM) and FP-PAM CEMC schemes. It is shown that the effects of diffusion noise and ISI on the ROC can be explained separately based on their communication range-dependent statistics. With full duty cycle, an FP-PAM scheme provides significantly worse performance than an IM scheme. The paper also analyzes the performance of the system when the duty cycle, transmission data rate, and quantity of molecules vary.
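The paper derives its detection probabilities from diffusion-noise and ISI statistics; as a much simpler illustration of the same strength-threshold mechanics, the sketch below computes points on an ROC curve under assumed Gaussian received-strength statistics. All numbers are placeholders, not the paper's model.

```python
# Illustrative ROC points for strength-threshold OOK detection, using
# simple Gaussian statistics in place of the paper's diffusion/ISI model.
import math

def q(x):  # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2.0))

mu0, mu1, sigma = 0.0, 4.0, 1.5   # assumed received-strength statistics
for thr in [1.0, 2.0, 3.0]:
    pfa = q((thr - mu0) / sigma)  # false alarm: noise-only exceeds threshold
    pd = q((thr - mu1) / sigma)   # detection: signal-plus-noise exceeds it
    print(f"thr={thr:.1f}  Pfa={pfa:.3f}  Pd={pd:.3f}")
```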
A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei
2018-01-01
Image signals acquired by a wireless visual sensor network can be used to capture specific events. This event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used to evaluate the reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wenfang; Du, Jinjin; Wen, Ruijuan
We have investigated the transmission spectra of a Fabry-Perot interferometer (FPI) with squeezed vacuum state injection and non-Gaussian detection, including photon number resolving detection and parity detection. In order to show the suitability of the system, parallel studies were made of the performance of two other light sources: a coherent state of light and a Fock state of light, either with classical mean intensity detection or with non-Gaussian detection. This shows that by using the squeezed vacuum state and non-Gaussian detection simultaneously, the resolution of the FPI can go far beyond the cavity standard bandwidth limit based on current techniques. The sensitivity of the scheme has also been explored, and it shows that the minimum detectable sensitivity is better than that of the other schemes.
NASA Technical Reports Server (NTRS)
Greenberg, Albert G.; Lubachevsky, Boris D.; Nicol, David M.; Wright, Paul E.
1994-01-01
Fast, efficient parallel algorithms are presented for discrete event simulations of dynamic channel assignment schemes for wireless cellular communication networks. The driving events are call arrivals and departures, in continuous time, to cells geographically distributed across the service area. A dynamic channel assignment scheme decides which call arrivals to accept, and which channels to allocate to the accepted calls, attempting to minimize call blocking while ensuring co-channel interference is tolerably low. Specifically, the scheme ensures that the same channel is used concurrently at different cells only if the pairwise distances between those cells are sufficiently large. Much of the complexity of the system comes from ensuring this separation. The network is modeled as a system of interacting continuous time automata, each corresponding to a cell. To simulate the model, conservative methods are used; i.e., methods in which no errors occur in the course of the simulation and so no rollback or relaxation is needed. Implemented on a 16K processor MasPar MP-1, an elegant and simple technique provides speedups of about 15 times over an optimized serial simulation running on a high speed workstation. A drawback of this technique, typical of conservative methods, is that processor utilization is rather low. To overcome this, new methods were developed that exploit slackness in event dependencies over short intervals of time, thereby raising the utilization to above 50 percent and the speedup over the optimized serial code to about 120 times.
Kalantzis, Georgios; Tachibana, Hidenobu
2014-01-01
For microdosimetric calculations event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphic processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both GPU and multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of the speedup up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Xu, Wenying; Wang, Zidong; Ho, Daniel W C
2018-05-01
This paper is concerned with the finite-horizon consensus problem for a class of discrete time-varying multiagent systems with external disturbances and missing measurements. To improve the communication reliability, redundant channels are introduced and the corresponding protocol is constructed for the information transmission over redundant channels. An event-triggered scheme is adopted to determine whether the information of agents should be transmitted to their neighbors. Subsequently, an observer-type event-triggered control protocol is proposed based on the latest received neighbors' information. The purpose of the addressed problem is to design a time-varying controller based on the observed information to achieve the consensus performance in a finite horizon. By utilizing a constrained recursive Riccati difference equation approach, some sufficient conditions are obtained to guarantee the consensus performance, and the controller parameters are also designed. Finally, a numerical example is provided to demonstrate the desired reliability of redundant channels and the effectiveness of the event-triggered control protocol.
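As a concrete illustration of the event-triggered transmission rule described above (though not of the paper's finite-horizon Riccati design), the sketch below runs single-integrator consensus in which each agent rebroadcasts its state only when it has drifted from its last broadcast value by more than a trigger threshold. The network, gain, and thresholds are all assumptions.

```python
# Event-triggered consensus sketch: agents broadcast only on large drift
# and update using the latest *broadcast* (not instantaneous) states.
import numpy as np

def consensus_step(x, x_bcast, adjacency, eps=0.1, trigger=0.05):
    n = len(x)
    for i in range(n):                       # event check per agent
        if abs(x[i] - x_bcast[i]) > trigger:
            x_bcast[i] = x[i]                # triggered: rebroadcast state
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                x_new[i] += eps * (x_bcast[j] - x_bcast[i])
    return x_new, x_bcast

x = np.array([1.0, 0.0, -1.0])
x_bcast = x.copy()
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # assumed line graph
for _ in range(100):
    x, x_bcast = consensus_step(x, x_bcast, A)
print(x)  # states approach agreement with few broadcasts
```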
NASA Astrophysics Data System (ADS)
Markowitz, Alex; Krumpe, Mirko; Nikutta, R.
2016-06-01
In two papers (Markowitz, Krumpe, & Nikutta 2014, and Nikutta et al., in prep.), we derive the first X-ray statistical constraints for clumpy-torus models in Seyfert AGN by quantifying multi-timescale variability in line-of-sight X-ray absorbing gas as a function of optical classification. We systematically search for discrete absorption events in the vast archive of RXTE monitoring of 55 nearby type Is and Compton-thin type IIs. We are sensitive to discrete absorption events due to clouds of full-covering, neutral/mildly ionized gas transiting the line of sight. Our results apply to both dusty and non-dusty clumpy media, and probe model parameter space complementary to that for eclipses observed with XMM-Newton, Suzaku, and Chandra. We detect twelve eclipse events in eight Seyferts, roughly tripling the number previously published from this archive. Event durations span hours to years. Most of our detected clouds are Compton-thin, and most clouds' distances from the black hole are inferred to be commensurate with the outer portions of the BLR or the inner regions of infrared-emitting dusty tori. We present the density profiles of the highest-quality eclipse events; the column density profile for an eclipsing cloud in NGC 3783 is doubly spiked, possibly indicating a cloud that is being tidally sheared. We discuss implications for cloud distributions in the context of clumpy-torus models. We calculate eclipse probabilities for orientation-dependent Type I/II unification schemes. We present constraints on cloud sizes, stability, and radial distribution. We infer that clouds' small angular sizes as seen from the SMBH imply ~10^7 clouds required across the BLR + torus. Cloud size is roughly proportional to distance from the black hole, hinting at the formation processes (e.g., disk fragmentation). All observed clouds are sub-critical with respect to tidal disruption; self-gravity alone cannot contain them. External forces, such as magnetic fields or ambient pressure, are needed to contain them; otherwise, clouds must be short-lived.
Discriminative Cooperative Networks for Detecting Phase Transitions
NASA Astrophysics Data System (ADS)
Liu, Ye-Hua; van Nieuwenburg, Evert P. L.
2018-04-01
The classification of states of matter and their corresponding phase transitions is a special kind of machine-learning task, where physical data allow for the analysis of new algorithms, which have not been considered in the general computer-science setting so far. Here we introduce an unsupervised machine-learning scheme for detecting phase transitions with a pair of discriminative cooperative networks (DCNs). In this scheme, a guesser network and a learner network cooperate to detect phase transitions from fully unlabeled data. The new scheme is efficient enough for dealing with phase diagrams in two-dimensional parameter spaces, where we can utilize an active contour model—the snake—from computer vision to host the two networks. The snake, with a DCN "brain," moves and learns actively in the parameter space, and locates phase boundaries automatically.
High-Performance Sensors Based on Resistance Fluctuations of Single-Layer-Graphene Transistors.
Amin, Kazi Rafsanjani; Bid, Aveek
2015-09-09
One of the most interesting predicted applications of graphene-monolayer-based devices is as high-quality sensors. In this article, we demonstrate, through systematic experiments, a chemical vapor sensor based on the measurement of low-frequency resistance fluctuations of single-layer-graphene field-effect-transistor devices. The sensor has extremely high sensitivity, very high specificity, high fidelity, and fast response times. The performance of the device using this scheme of measurement (which uses resistance fluctuations as the detection parameter) is more than 2 orders of magnitude better than that of a detection scheme in which changes in the average value of the resistance are monitored. We propose a number-density-fluctuation-based model to explain the superior characteristics of the noise-measurement-based detection scheme presented in this article.
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang; Hsu, Yi-Kai
2017-03-01
A three-arm dual-balanced detection scheme is studied in an optical code division multiple access system. As multiple-access interference (MAI) and beat noise are the main deleterious sources of system performance degradation, we utilize optical hard-limiters to alleviate such channel impairments. In addition, once the channel condition is improved effectively, the proposed two-dimensional error correction code can remarkably enhance the system performance. In our proposed scheme, the optimal thresholds of the optical hard-limiters and decision circuitry are fixed, and they will not change with other system parameters. Our proposed scheme can accommodate a large number of users simultaneously and is suitable for burst traffic with asynchronous transmission. Therefore, it is highly recommended as the platform for broadband optical access networks.
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
NASA Astrophysics Data System (ADS)
Wang, Jinhong; Guan, Liang; Chapman, J.; Zhou, Bing; Zhu, Junjie
2017-11-01
We present a programmable time alignment scheme used in an ASIC for the ATLAS forward muon trigger development. The scheme utilizes regenerated clocks with programmable phases to compensate for the timing offsets introduced by different detector trace lengths. Each ASIC used in the design has 104 input channels with delay compensation circuitry providing steps of ∼3 ns and a full range of 25 ns for each channel. Detailed implementation of the scheme including majority logic to suppress single-event effects is presented. The scheme is flexible and fully synthesizable. The approach is adaptable to other applications with similar phase shifting requirements. In addition, the design is resource efficient and is suitable for cost-effective digital implementation with a large number of channels.
Comparative efficiency of a scheme of cyclic alternating-period subtraction
NASA Astrophysics Data System (ADS)
Golikov, V. S.; Artemenko, I. G.; Malinin, A. P.
1986-06-01
The estimation of the detection quality of a signal against a background of correlated noise according to the Neyman-Pearson criterion is examined. It is shown that, in a number of cases, the cyclic alternating-period subtraction scheme has a higher noise immunity than the conventional alternating-period subtraction scheme.
NASA Astrophysics Data System (ADS)
Tang, Tingting
In this dissertation, we develop structured population models to examine how changes in the environment affect population processes. In Chapter 2, we develop a general continuous-time size-structured model describing a susceptible-infected (SI) population coupled with the environment. This model applies to problems arising in ecology, epidemiology, and cell biology. The model consists of a system of quasilinear hyperbolic partial differential equations coupled with a system of nonlinear ordinary differential equations that represent the environment. We develop a second-order high-resolution finite difference scheme to numerically solve the model. Convergence of this scheme to a weak solution with bounded total variation is proved. We numerically compare the second-order high-resolution scheme with a first-order finite difference scheme. A higher order of convergence and the high-resolution property are observed in the second-order finite difference scheme. In addition, we apply our model to a multi-host wildlife disease problem, and questions regarding the impact of the initial population structure and the transition rate within each host are numerically explored. In Chapter 3, we use a stage-structured matrix model for a wildlife population to study the recovery process of the population given an environmental disturbance. We focus on the time it takes for the population to recover to its pre-event level and develop general formulas to calculate the sensitivity or elasticity of the recovery time to changes in the initial population distribution, vital rates, and event severity. Our results suggest that the recovery time is independent of the initial population size, but is sensitive to the initial population structure. Moreover, it is more sensitive to the reduction in the vital rates of the population caused by the catastrophic event than to the duration of the event's impact. We present potential applications of our model to amphibian population dynamics and the recovery of a certain plant population. In addition, we explore, in detail, the application of the model to the sperm whale population in the Gulf of Mexico after the Deepwater Horizon oil spill. In Chapter 4, we summarize the results from Chapters 2 and 3 and explore some further avenues of our research.
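The first-order scheme the dissertation compares against is, in its simplest form, an upwind discretization of a size-structured transport equation u_t + (g(x)u)_x = -m(x)u. The sketch below shows that building block under assumed growth and mortality rates (the second-order high-resolution scheme in the dissertation adds limited slope reconstruction on top of this idea).

```python
# First-order upwind sketch for u_t + (g(x) u)_x = -m(x) u with g >= 0,
# so the upwind direction is one-sided. g, m, grid, and boundary condition
# are assumptions for illustration only.
import numpy as np

nx, L, T = 200, 1.0, 0.5
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
g = 0.5 * (1.0 - x)                    # assumed growth rate, >= 0 on [0, 1]
m = 0.2 + 0.0 * x                      # assumed mortality rate
u = np.exp(-200 * (x - 0.2) ** 2)      # initial size distribution
dt = 0.5 * dx / max(g.max(), 1e-12)    # CFL-limited time step

t = 0.0
while t < T:
    flux = g * u                       # upwind flux for g >= 0
    u[1:] -= dt / dx * (flux[1:] - flux[:-1])
    u[0] = 0.0                         # assumed zero-influx (no-birth) boundary
    u -= dt * m * u                    # mortality, via operator splitting
    t += dt
print("total population ~", np.trapz(u, x))
```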
NASA Astrophysics Data System (ADS)
Chaouch, Naira; Temimi, Marouane; Weston, Michael; Ghedira, Hosni
2017-05-01
In this study, we intercompare seven different PBL schemes in WRF over the United Arab Emirates (UAE) and assess their impact on the performance of the simulations. The study covered five fog events reported in 2014 at Abu Dhabi International Airport. The analysis of synoptic conditions indicated that, during all examined events, the UAE was under high geopotential pressure with light winds not exceeding 7 m/s at 850 hPa (~1.5 km). Seven PBL schemes, namely, Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM) and MYNN level 3, were tested. In situ observations used in the model's assessment included radiosonde data from Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles. Overall, all the tested PBL schemes showed comparable skill, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and BIAS for all PBLs were 15.75% and -9.07%, respectively, whereas the RMSE and BIAS obtained with QNSE were 14.65% and -6.3%, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were larger at the surface than at higher altitudes. The sensitivity to lead time showed that the best simulation performance was obtained for lead times between 12 and 18 h. In addition, the results of the simulations show that better performance is obtained when the starting condition is dry.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.
2011-01-01
Increases in computing resources have allowed for the utilization of high-resolution weather forecast models capable of resolving cloud microphysical and precipitation processes among varying numbers of hydrometeor categories. Several microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, ranging from single-moment predictions of precipitation content to double-moment predictions that include a prediction of particle number concentrations. Each scheme incorporates several assumptions related to the size distribution, shape, and fall speed relationships of ice crystals in order to simulate cold-cloud processes and resulting precipitation. Field campaign data offer a means of evaluating the assumptions present within each scheme. The Canadian CloudSat/CALIPSO Validation Project (C3VP) represented collaboration among the CloudSat, CALIPSO, and NASA Global Precipitation Measurement mission communities, to observe cold season precipitation processes relevant to forecast model evaluation and the eventual development of satellite retrievals of cloud properties and precipitation rates. During the C3VP campaign, widespread snowfall occurred on 22 January 2007, sampled by aircraft and surface instrumentation that provided particle size distributions, ice water content, and fall speed estimations along with traditional surface measurements of temperature and precipitation. In this study, four single-moment and two double-moment microphysics schemes were utilized to generate hypothetical WRF forecasts of the event, with C3VP data used in evaluation of their varying assumptions. Schemes that incorporate flexibility in size distribution parameters and density assumptions are shown to be preferable to fixed constants, and that a double-moment representation of the snow category may be beneficial when representing the effects of aggregation. These results may guide forecast centers in optimal configurations of their forecast models for winter weather and identify best practices present within these various schemes.
Benefits of an ultra large and multiresolution ensemble for estimating available wind power
NASA Astrophysics Data System (ADS)
Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik
2016-04-01
In this study we investigate the benefits of an ultra-large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall be used as a basis to detect events of extreme errors in wind power forecasting. The forecast value is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions have served as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations related to the specific merits of wind power production are still missing. Thus, single extreme error events which are not detected by such ensemble power forecasts occur infrequently. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations which are connected to extreme error events are located and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at the Forschungszentrum Juelich.
NASA Astrophysics Data System (ADS)
Koch, Karl
2002-10-01
The Vogtland region, in the border region of Germany and the Czech Republic, is of special interest for the identification of seismic events on a local and regional scale, since both earthquakes and explosions occur frequently in the same area, and thus are relevant for discrimination research for verification of the Comprehensive Nuclear Test Ban Treaty. Previous research on event discrimination using spectral decay and variance from data recorded by the GERESS array indicated that spectral variance determined for the S phase for the seismic events in the Vogtland region seems to be the most promising parameter for event discrimination, because this parameter provides for almost complete separation of the earthquake and explosion populations. Almost the entire set of Vogtland events used in this research and more than 3000 local events detected in Germany in 1998 and 1999 were analysed to determine spectral slopes and variance for the P- and S-wave windows from stacked spectra of recordings at the GERESS array. The results suggest that small values for the spectral variance are associated not only with earthquakes in the Vogtland region, but also with earthquakes in other parts of Germany and neighbouring countries. While mining blasts show larger spectral variance values, mining-induced events yield a wide range of values, for example, in the Lubin area. A threshold-based identification scheme was applied; almost all events classified as earthquakes are found in seismically active regions. While the earthquakes are uniformly distributed throughout the day, events classified as explosions correlate with normal working hours, which is when blasting is done in Germany. In this study spectral variance provides good event discrimination for events in other parts of Germany, not only for the Vogtland region, showing that this identification parameter may be transported to other geological regions.
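The discriminant described above, spectral variance about a fitted spectral slope, is straightforward to compute. The sketch below illustrates the idea: fit a straight line to the log amplitude spectrum of the S-phase window and use the residual variance as the screening statistic. The window, frequency band, and threshold are placeholders, not the study's calibrated values.

```python
# Sketch: spectral slope and residual variance of an S-phase window as an
# earthquake/explosion screening statistic (small variance -> earthquake).
import numpy as np

def spectral_slope_and_variance(waveform, fs, fmin=1.0, fmax=20.0):
    spec = np.abs(np.fft.rfft(waveform * np.hanning(len(waveform))))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    logf, logs = np.log10(freqs[band]), np.log10(spec[band] + 1e-12)
    slope, intercept = np.polyfit(logf, logs, 1)
    resid = logs - (slope * logf + intercept)
    return slope, resid.var()

fs = 40.0                                  # Hz, assumed sampling rate
s_window = np.random.randn(2048)           # stand-in for an S-phase window
slope, var = spectral_slope_and_variance(s_window, fs)
label = "earthquake" if var < 0.05 else "explosion"  # placeholder threshold
print(slope, var, label)
```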
Cryptanalysis of a semi-quantum secret sharing scheme based on Bell states
NASA Astrophysics Data System (ADS)
Gao, Gan; Wang, Yue; Wang, Dong
2018-03-01
In the paper [Mod. Phys. Lett. B 31 (2017) 1750150], Yin et al. proposed a semi-quantum secret sharing scheme by using Bell states. We find that the proposed scheme cannot finish the quantum secret sharing task. In addition, we also find that the proposed scheme has a security loophole, that is, it will not be detected that the dishonest participant, Charlie attacks on the quantum channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Sheng; Suzuki, Kenji; MacMahon, Heber
2011-04-15
Purpose: To develop a computer-aided detection (CADe) scheme for nodules in chest radiographs (CXRs) with a high sensitivity and a low false-positive (FP) rate. Methods: The authors developed a CADe scheme consisting of five major steps, which were developed for improving the overall performance of CADe schemes. First, to segment the lung fields accurately, the authors developed a multisegment active shape model. Then, a two-stage nodule-enhancement technique was developed for improving the conspicuity of nodules. Initial nodule candidates were detected and segmented by using the clustering watershed algorithm. Thirty-one shape-, gray-level-, surface-, and gradient-based features were extracted from each segmented candidate for determining the feature space, including one of the new features based on the Canny edge detector to eliminate a major FP source caused by rib crossings. Finally, a nonlinear support vector machine (SVM) with a Gaussian kernel was employed for classification of the nodule candidates. Results: To evaluate and compare the scheme to other published CADe schemes, the authors used a publicly available database containing 140 nodules in 140 CXRs and 93 normal CXRs. The CADe scheme based on the SVM classifier achieved sensitivities of 78.6% (110/140) and 71.4% (100/140) with averages of 5.0 (1165/233) FPs/image and 2.0 (466/233) FPs/image, respectively, in a leave-one-out cross-validation test, whereas the CADe scheme based on a linear discriminant analysis classifier had a sensitivity of 60.7% (85/140) at an FP rate of 5.0 FPs/image. For nodules classified as "very subtle" and "extremely subtle," a sensitivity of 57.1% (24/42) was achieved at an FP rate of 5.0 FPs/image. When the authors used a database developed at the University of Chicago, the sensitivities were 83.3% (40/48) and 77.1% (37/48) at FP rates of 5.0 (240/48) FPs/image and 2.0 (96/48) FPs/image, respectively. Conclusions: These results compare favorably to those described for other commercial and noncommercial CADe nodule detection systems.
Automatic epileptic seizure detection in EEGs using MF-DFA, SVM based on cloud computing.
Zhang, Zhongnan; Wen, Tingxi; Huang, Wei; Wang, Meihong; Li, Chunfeng
2017-01-01
Epilepsy is a chronic disease with transient brain dysfunction that results from the sudden abnormal discharge of neurons in the brain. Since the electroencephalogram (EEG) is a harmless and noninvasive detection method, it plays an important role in the detection of neurological diseases. However, the process of analyzing EEG to detect neurological diseases is often difficult because the brain electrical signals are random, non-stationary and nonlinear. In order to overcome this difficulty, this study aims to develop a new computer-aided scheme for automatic epileptic seizure detection in EEGs based on multi-fractal detrended fluctuation analysis (MF-DFA) and support vector machine (SVM). The new scheme first extracts features from the EEG by MF-DFA during the first stage. Then, the scheme applies a genetic algorithm (GA) to calculate parameters used in the SVM and classify the training data according to the selected features using the SVM. Finally, the trained SVM classifier is exploited to detect neurological diseases. The algorithm utilizes the MLlib library from Spark and runs on a cloud platform. Applied to a public dataset in experiments, the study results show that the new feature extraction method and scheme can detect signals with fewer features and that the accuracy of the classification reached up to 99%. MF-DFA is a promising approach to extract features for analyzing EEG, because of its simple algorithmic procedure and few parameters. The features obtained by MF-DFA can represent samples as well as traditional wavelet transforms and Lyapunov exponents. GA can always find useful parameters for the SVM given enough execution time. The results illustrate that the classification model can achieve comparable accuracy, which means that it is effective in epileptic seizure detection.
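A compact MF-DFA feature sketch along the lines described above: compute the fluctuation function F_q(s) over window sizes s and take the generalized Hurst exponents h(q) (the log-log slopes) as features. This is a bare-bones version with linear detrending and two q-orders; the paper's exact settings are not reproduced.

```python
# Bare-bones MF-DFA: profile, windowed polynomial detrending, q-order
# fluctuation functions, and generalized Hurst exponents as features.
import numpy as np

def mfdfa_features(x, scales=(16, 32, 64, 128, 256), qs=(-2, 2)):
    y = np.cumsum(x - np.mean(x))            # profile of the signal
    feats = []
    for q in qs:
        logF = []
        for s in scales:
            n_seg = len(y) // s
            f2 = []
            for v in range(n_seg):
                seg = y[v * s:(v + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                f2.append(np.mean((seg - trend) ** 2))
            f2 = np.asarray(f2)
            Fq = (np.mean(f2 ** (q / 2.0))) ** (1.0 / q)
            logF.append(np.log(Fq))
        h_q, _ = np.polyfit(np.log(scales), logF, 1)
        feats.append(h_q)                    # generalized Hurst exponent h(q)
    return feats

eeg = np.random.randn(4096)                  # stand-in for an EEG channel
print(mfdfa_features(eeg))                   # ~[0.5, 0.5] for white noise
```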
Construction of an annotated corpus to support biomedical information extraction
Thompson, Paul; Iqbal, Syed A; McNaught, John; Ananiadou, Sophia
2009-01-01
Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes. PMID:19852798
NASA Astrophysics Data System (ADS)
Anastasiadis, Anastasios; Sandberg, Ingmar; Papaioannou, Athanasios; Georgoulis, Manolis; Tziotziou, Kostas; Jiggens, Piers; Hilgers, Alain
2015-04-01
We present a novel integrated prediction system, of both solar flares and solar energetic particle (SEP) events, which is in place to provide short-term warnings for hazardous solar radiation storms. The FORSPEF system provides forecasting of solar eruptive events, such as solar flares with a projection to coronal mass ejections (CMEs) (occurrence and velocity) and the likelihood of occurrence of a SEP event. It also provides nowcasting of SEP events based on actual solar flare and CME near real-time alerts, as well as SEP characteristics (peak flux, fluence, rise time, duration) per parent solar event. The prediction of solar flares relies on a morphological method based on the sophisticated derivation of the effective connected magnetic field strength (Beff) of potentially flaring active-region (AR) magnetic configurations, and it utilizes analysis of a large number of AR magnetograms. For the prediction of SEP events a new reductive statistical method has been implemented based on a newly constructed database of solar flares, CMEs and SEP events that covers the large time span 1984-2013. The method is based on flare location (longitude), flare size (maximum soft X-ray intensity), and the occurrence (or not) of a CME. Warnings are issued for all > C1.0 soft X-ray flares. The warning time in the forecasting scheme extends to 24 hours with a refresh rate of 3 hours, while the respective warning time for the nowcasting scheme depends on the availability of the near real-time data and falls between 15-20 minutes. We discuss the modules of the FORSPEF system, their interconnection and the operational set-up. The dual approach in the development of FORSPEF (i.e. forecasting and nowcasting schemes) permits the refinement of predictions upon the availability of new data that characterize changes on the Sun and in interplanetary space, while the combined usage of solar flare and SEP forecasting methods upgrades FORSPEF to an integrated forecasting solution. This work has been funded through the "FORSPEF: FORecasting Solar Particle Events and Flares", ESA Contract No. 4000109641/13/NL/AK
An Improved Forwarding of Diverse Events with Mobile Sinks in Underwater Wireless Sensor Networks.
Raza, Waseem; Arshad, Farzana; Ahmed, Imran; Abdul, Wadood; Ghouzali, Sanaa; Niaz, Iftikhar Azim; Javaid, Nadeem
2016-11-04
In this paper, a novel routing strategy to cater for the energy consumption and delay sensitivity issues in deep underwater wireless sensor networks is proposed. This strategy is named ESDR: Event Segregation based Delay-sensitive Routing. In this strategy, sensed events are segregated on the basis of their criticality and are forwarded to their respective destinations based on forwarding functions. These functions depend on different routing metrics, namely: Signal Quality Index, Localization free Signal to Noise Ratio, Energy Cost Function and Depth Dependent Function. The problem of incomparable values of the previously defined forwarding functions causes uneven delays in the forwarding process. Hence the forwarding functions are redefined to ensure their comparable values in different depth regions. The packet forwarding strategy is based on the event segregation approach, which forwards one third of the generated events (delay-sensitive) to surface sinks and two thirds (normal events) to mobile sinks. The motion of mobile sinks is influenced by the relative distribution of normal nodes. We have also incorporated two different mobility patterns for mobile sinks: adaptive mobility and uniform mobility. The latter is implemented for collecting the packets generated by the normal nodes. These improvements ensure optimum holding time, uniform delay and in-time reporting of delay-sensitive events. This scheme is compared with existing ones and outperforms them in terms of network lifetime, delay and throughput.
Xu, Zhezhuang; Liu, Guanglun; Yan, Haotian; Cheng, Bin; Lin, Feilong
2017-10-27
In wireless sensor and actor networks, when an event is detected, the sensor node needs to transmit an event report to inform the actor. Since the actor moves in the network to execute missions, its location is always unavailable to the sensor nodes. A popular solution is a search strategy that can forward the data to a node without its location information. However, most existing works have not considered the mobility of the node, and thus incur significant energy consumption or transmission delay. In this paper, we propose the trail-based search (TS) strategy that takes advantage of the actor's mobility to improve the search efficiency. The main idea of TS is that, when the actor moves in the network, it leaves a trail composed of continuous footprints. The search packet with the event report is transmitted in the network to search for the actor or its footprints. Once an effective footprint is discovered, the packet is forwarded along the trail until it is received by the actor. Moreover, we derive the condition that guarantees trail connectivity, and propose a redundancy reduction scheme based on TS (TS-R) to reduce the nontrivial transmission redundancy that is generated by the trail. Theoretical and numerical analysis is provided to prove the efficiency of TS. Compared with the well-known expanding ring search (ERS), TS significantly reduces the energy consumption and search delay.
Innovative hazard detection and avoidance strategy for autonomous safe planetary landing
NASA Astrophysics Data System (ADS)
Jiang, Xiuqiang; Li, Shuang; Tao, Ting
2016-09-01
Autonomous hazard detection and avoidance (AHDA) is one of the key technologies for future safe planetary landing missions. In this paper, we address the latest progress on planetary autonomous hazard detection and avoidance technologies. First, the innovative autonomous relay hazard detection and avoidance strategy adopted in Chang'e-3 lunar soft landing mission and its flight results are reported in detail. Second, two new conceptual candidate schemes of hazard detection and avoidance are presented based on the Chang'e-3 AHDA system and the latest developing technologies for the future planetary missions, and some preliminary testing results are also given. Finally, the related supporting technologies for the two candidate schemes above are analyzed.
Fog-Based Two-Phase Event Monitoring and Data Gathering in Vehicular Sensor Networks
Yang, Fan; Su, Jinsong; Zhou, Qifeng; Wang, Tian; Zhang, Lu; Xu, Yifan
2017-01-01
Vehicular nodes are equipped with more and more sensing units, and a large amount of sensing data is generated. Recently, more and more research considers cooperative urban sensing as the heart of intelligent and green city traffic management. The key component of the platform will be a combination of a pervasive vehicular sensing system and a central control and analysis system, where data gathering is a fundamental component. However, data gathering and monitoring are also challenging issues in vehicular sensor networks because of the large amount of data and the dynamic nature of the network. In this paper, we propose an efficient continuous event-monitoring and data-gathering framework based on fog nodes in vehicular sensor networks. A fog-based two-level threshold strategy is adopted to suppress unnecessary data uploads and transmissions. In the monitoring phase, nodes sense the environment in a low-cost sensing mode and generate sensed data. When the probability of an event is high and exceeds some threshold, nodes transfer to the event-checking phase, and some nodes are selected to switch to a deep sensing mode to generate more accurate data about the environment. Furthermore, the framework adaptively adjusts the threshold to upload a suitable amount of data for decision making, while at the same time suppressing unnecessary message transmissions. Simulation results showed that the proposed scheme could reduce data transmissions by more than 84 percent compared with other existing algorithms, while still detecting the events and gathering the event data. PMID:29286320
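A minimal sketch of the two-phase, two-threshold idea described above: nodes stay in a cheap monitoring mode, escalate to deep sensing when the event likelihood passes a first threshold, and upload only when a second, stricter threshold is passed. The threshold values and probability readings are illustrative, and the adaptive threshold adjustment of the paper is omitted.

```python
# Two-phase event monitoring with two fixed thresholds (adaptive
# adjustment from the paper is omitted for brevity).
T_CHECK, T_UPLOAD = 0.5, 0.8   # assumed escalation / upload thresholds

def node_step(event_prob, mode):
    if mode == "monitor":
        if event_prob > T_CHECK:
            return "deep", None            # escalate: enable deep sensing
        return "monitor", None             # stay cheap, suppress upload
    else:  # deep sensing mode
        if event_prob > T_UPLOAD:
            return "deep", "upload"        # report detailed data to fog node
        if event_prob < T_CHECK:
            return "monitor", None         # de-escalate
        return "deep", None

mode = "monitor"
for p in [0.2, 0.55, 0.6, 0.85, 0.4, 0.1]:
    mode, action = node_step(p, mode)
    print(f"p={p:.2f} -> mode={mode}, action={action}")
```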
A computerized scheme of SARS detection in early stage based on chest image of digital radiograph
NASA Astrophysics Data System (ADS)
Zheng, Zhong; Lan, Rihui; Lv, Guozheng
2004-05-01
A computerized scheme for early severe acute respiratory syndrome (SARS) lesion detection in digital chest radiographs is presented in this paper. The total scheme consists of two main parts: the first part determines suspect lesions using the theory of locally orderless images (LOI) and their spatial features; the second part selects real lesions from among these suspects by their frequency features. The method used in the second part was first developed by Katsuragawa et al., with necessary modifications. Preliminary results indicate that these features are good criteria for telling early SARS lesions apart from other normal lung structures.
Yang, R G; Zhang, J; Zhai, Z H; Zhai, S Q; Liu, K; Gao, J R
2015-08-10
Low-frequency (Hz~kHz) squeezing is very important in many schemes of quantum precision measurement, but it is more difficult to achieve than squeezing at megahertz frequencies because of the introduction of laser low-frequency technical noise. In this paper, we propose a scheme to obtain a low-frequency signal beyond the quantum limit from the frequency comb in a non-degenerate frequency and degenerate polarization optical parametric amplifier (NOPA) operating below threshold with type I phase matching, using frequency-shift detection. Low-frequency squeezing immune to laser technical noise is obtained by a detection system with a local beam of two-frequency intense laser light. Furthermore, the low-frequency squeezing can be used for phase measurement in a Mach-Zehnder interferometer, and the signal-to-noise ratio (SNR) can be enhanced greatly.
James, Conrad D; Galambos, Paul C; Derzon, Mark S; Graf, Darin C; Pohl, Kenneth R; Bourdon, Chris J
2012-10-23
Systems and methods for combining dielectrophoresis, magnetic forces, and hydrodynamic forces to manipulate particles in channels formed on top of an electrode substrate are discussed. A magnet placed in contact under the electrode substrate while particles are flowing within the channel above the electrode substrate allows these three forces to be balanced when the system is in operation. An optical detection scheme using near-confocal microscopy for simultaneously detecting two wavelengths of light emitted from the flowing particles is also discussed.
Tan, Chun Kiat; Ng, Jason Changwei; Xu, Xiaotian; Poh, Chueh Loo; Guan, Yong Liang; Sheah, Kenneth
2011-06-01
Teleradiology applications and universal availability of patient records using web-based technology are rapidly gaining importance. Consequently, digital medical image security has become an important issue when images and their pertinent patient information are transmitted across public networks, such as the Internet. Health mandates such as the Health Insurance Portability and Accountability Act require healthcare providers to adhere to security measures in order to protect sensitive patient information. This paper presents a fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images. The scheme utilizes concepts of public-key cryptography and reversible data-hiding technique. The scheme was tested using medical images in DICOM format. The results show that the scheme is able to ensure image authenticity and integrity, and to locate tampered regions in the images.
Investigation of an Optimum Detection Scheme for a Star-Field Mapping System
NASA Technical Reports Server (NTRS)
Aldridge, M. D.; Credeur, L.
1970-01-01
An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.
Fault detection and multiclassifier fusion for unmanned aerial vehicles (UAVs)
NASA Astrophysics Data System (ADS)
Yan, Weizhong
2001-03-01
UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level that is comparable to that of a piloted aircraft. This paper attempts to apply multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the fault detection of the UAV are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely, majority vote, weighted majority vote, and Naive Bayes combination. All three schemes are simple and need no retraining. The fusion schemes (except the majority vote, which gives the average performance of the three classifiers) show classification performance that is better than or equal to that of the best individual classifier. The unavoidable correlation between classifiers with binary outputs is observed in this study. We conclude that it is this correlation between the classifiers that prevents the fusion schemes from achieving an even better performance.
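The three fusion rules named above are simple to state for binary classifier outputs. The sketch below shows one plausible form of each; the weights, accuracies, prior, and toy decisions are illustrative assumptions, and the naive Bayes combiner presumes per-classifier accuracies estimated on validation data.

```python
# Three fusion rules for binary fault flags from multiple classifiers.
import math

def majority_vote(decisions):
    return int(sum(decisions) > len(decisions) / 2)

def weighted_majority_vote(decisions, weights):
    score = sum(w if d else -w for d, w in zip(decisions, weights))
    return int(score > 0)

def naive_bayes_combine(decisions, accuracies, prior_fault=0.5):
    # log-odds of "fault" given independent classifier decisions
    log_odds = math.log(prior_fault / (1 - prior_fault))
    for d, acc in zip(decisions, accuracies):
        p_d_given_fault = acc if d else 1 - acc
        p_d_given_ok = 1 - acc if d else acc
        log_odds += math.log(p_d_given_fault / p_d_given_ok)
    return int(log_odds > 0)

decisions = [1, 0, 1]                     # three classifiers' fault flags
print(majority_vote(decisions))                              # 1
print(weighted_majority_vote(decisions, [0.5, 0.9, 0.6]))    # 1
print(naive_bayes_combine(decisions, [0.8, 0.7, 0.9]))       # 1
```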
A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images
Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong
2016-01-01
A new hybrid vehicle detection scheme integrating the Viola-Jones (V-J) method and a linear SVM classifier with HOG features (HOG + SVM) is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to the in-plane rotation of on-road vehicles, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so that the original V-J or HOG + SVM method can be applied directly to achieve fast detection and high accuracy. To address the decreasing detection speed of V-J and HOG + SVM, the scheme further develops an adaptive switching strategy that integrates the two methods based on their different speed-degradation trends to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, significantly improves the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed method is competitive with existing vehicle detection methods. Furthermore, since it can be applied to videos captured from moving UAV platforms without image registration or an additional road database, it has great potential for field applications. Future research will focus on extending the current method to other transportation modes such as buses, trucks, motorcycles, bicycles, and pedestrians. PMID:27548179
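A minimal sketch of the rotate-then-detect idea using OpenCV follows; the cascade file, the linear-SVM weight file, and the externally estimated road angle are placeholders (the paper's trained models are not given), and the adaptive switching strategy is reduced here to a simple flag.

```python
import cv2
import numpy as np

def rotate_to_horizontal(img, road_angle_deg):
    """Rotate the frame so the (externally estimated) road direction is horizontal."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), road_angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))

# Placeholders: a vehicle-trained Haar cascade and linear-SVM HOG weights.
vj = cv2.CascadeClassifier('cars_cascade.xml')
hog = cv2.HOGDescriptor()
hog.setSVMDetector(np.load('vehicle_svm.npy'))

def detect_vehicles(img_bgr, road_angle_deg, use_vj=True):
    """Rotate, then run either detector; a real switching strategy would
    choose use_vj from the detectors' observed speed-degradation trends."""
    frame = rotate_to_horizontal(img_bgr, road_angle_deg)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if use_vj:
        return vj.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    boxes, _weights = hog.detectMultiScale(gray, winStride=(8, 8))
    return boxes
```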
Introduction to the Apollo collections: Part 2: Lunar breccias
NASA Technical Reports Server (NTRS)
Mcgee, P. E.; Simonds, C. H.; Warner, J. L.; Phinney, W. C.
1979-01-01
Basic petrographic, chemical, and age data for a representative suite of lunar breccias are presented for students and potential lunar sample investigators. Emphasis is on sample description and data presentation. Samples are listed together with a classification scheme based on matrix texture and mineralogy and on the nature and abundance of glass present both in the matrix and as clasts. A synopsis of the classification scheme describes the characteristic features of each breccia group. The cratering process, which comprises the sequence of events immediately following an impact, is discussed, especially the thermal and material transport processes affecting the two major components of lunar breccias (clastic debris and fused material).
Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens
2014-07-07
The centrifugal "lab-on-a-disc" concept has proven to have great potential for the process integration of bioanalytical assays, in particular where ease of use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling in the level of flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the first time, a liquid handling sequence can be controlled in response to the completion of a preceding liquid transfer event, i.e., completely independently of external stimuli or changes in the speed of disc rotation. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps, including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc capable of implementing common laboratory unit operations such as metering and selective routing of flows. Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from mammalian cell homogenate.
Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab
2016-01-01
In this article, a novel and accurate scheme for fault detection, classification, and fault distance estimation on a fixed series compensated transmission line is proposed. The scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, and employs the Levenberg-Marquardt training algorithm. Its novelty lies in using the MOV energy signals of the fixed series capacitors (FSC) as inputs to train the ANN, an approach not used in earlier fault analysis algorithms. The scheme requires only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault; these energy signals are then fed to the ANN for fault distance estimation. The feasibility and reliability of the scheme have been evaluated for all ten fault types in a test power system model, at different fault inception angles and over numerous fault locations. Real transmission system parameters of the 3-phase, 400 kV Wardha-Aurangabad transmission line (400 km) with 40 % FSC at the Power Grid Wardha Substation, India, are used for this research. Extensive simulation experiments show that the proposed scheme gives accurate results, demonstrating a complete protection scheme with high accuracy, simplicity, and robustness.
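A hedged sketch of the regression stage is shown below with scikit-learn; note that scikit-learn offers no Levenberg-Marquardt trainer, so the lbfgs solver is used as a stand-in, and the training arrays are placeholders for simulated per-phase MOV energies and known fault distances.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: one-cycle MOV energy features for the 3 phases
# (columns) for each simulated fault (rows), with fault distance labels in km.
rng = np.random.default_rng(0)
X = rng.random((200, 3))              # stand-in for simulated MOV energies
y = 400.0 * rng.random(200)          # stand-in distances on a 400 km line

# lbfgs is a substitute here; the paper trains with Levenberg-Marquardt.
model = MLPRegressor(hidden_layer_sizes=(10,), solver='lbfgs', max_iter=2000)
model.fit(X, y)
estimated_km = model.predict(X[:5])   # fault distance estimates
```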
Digitally balanced detection for optical tomography.
Hafiz, Rehan; Ozanyan, Krikor B
2007-10-01
Analog balanced photodetection has found extensive use in sensing a weak absorption signal buried in laser intensity noise. This paper proposes schemes for a compact, affordable, and flexible digital implementation of the established analog balanced detection, as part of a multichannel digital tomography system. Variants of digitally balanced detection (DBD), suitable for weak signals on a strongly varying background or for weakly varying envelopes of high-frequency carrier waves, are introduced analytically and elaborated in terms of algorithmic and hardware flow. The DBD algorithms are implemented on low-cost general-purpose reconfigurable hardware (a field-programmable gate array), utilizing less than half of its resources. The performance of the DBD schemes compares favorably with their analog counterpart: a common-mode rejection ratio of 50 dB was observed over a bandwidth of 300 kHz, limited mainly by the host digital hardware. The close relationship between the DBD outputs and those of known analog balancing circuits is discussed in principle and shown experimentally in the example case of propane gas detection.
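The core balancing operation, independent of the paper's FPGA variants, can be sketched in floating point as a least-squares gain estimate followed by subtraction; the function name is illustrative.

```python
import numpy as np

def digitally_balanced_detect(sample: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Estimate the balance gain by least squares, then subtract the scaled
    reference channel: intensity noise common to both photodiode channels
    cancels, leaving the weak absorption signal."""
    k = np.dot(sample, reference) / np.dot(reference, reference)
    return sample - k * reference
```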
Corona-Strauss, Farah I; Delb, Wolfgang; Schick, Bernhard; Strauss, Daniel J
2010-01-01
Auditory brainstem responses (ABRs) are used as an objective method for the diagnosis and quantification of hearing loss. Many methods for the automatic recognition of ABRs have been developed, but none of them includes the individual measurement setup in the analysis. The purpose of this work was to design a fast recognition scheme for chirp-evoked ABRs that is adjusted to the individual measurement condition using spontaneous electroencephalographic activity (SA). For classification, the kernel-based novelty detection scheme used features based on inter-sweep instantaneous phase synchronization as well as energy and entropy relations in the time-frequency domain. This method discriminated SA from stimulations above the hearing threshold with a minimal number of sweeps, i.e., 200 individual responses. It is concluded that the proposed paradigm, processing procedures, and stimulation techniques improve the detection of ABRs in terms of the degree of objectivity, i.e., automation of the procedure, and measurement time.
NASA Astrophysics Data System (ADS)
Dalarmelina, Carlos A.; Adegbite, Saheed A.; Pereira, Esequiel da V.; Nunes, Reginaldo B.; Rocha, Helder R. O.; Segatto, Marcelo E. V.; Silva, Jair A. L.
2017-05-01
Block-level detection is required to decode what may be classified as selective control information (SCI), such as the control format indicator in 4G Long-Term Evolution (LTE) systems. Using optical orthogonal frequency division multiplexing (OFDM) over radio-over-fiber (RoF) links, we report an experimental evaluation of an SCI detection scheme based on a time-domain correlation (TDC) technique, compared with the conventional maximum likelihood (ML) approach. It is shown that the TDC method improves detection performance over both 20 and 40 km of standard single-mode fiber (SSMF) links. We also report a performance analysis of the TDC scheme in noisy visible light communication channel models after propagation through 40 km of SSMF. Experimental and simulation results confirm that the TDC method is attractive for practical OFDM-based RoF and fiber-wireless systems. Unlike the ML method, the TDC also requires no channel estimation, which is another key benefit.
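A time-domain correlation detector of this kind reduces to picking the candidate waveform with the largest correlation against the received block; the sketch below assumes the candidate set is precomputed (e.g., IFFTs of the possible SCI codewords mapped onto their subcarriers) and omits synchronization.

```python
import numpy as np

def tdc_detect(rx_block: np.ndarray, candidates: list) -> int:
    """Correlate the received time-domain block against each candidate SCI
    waveform and return the index of the best match; no channel estimate
    is needed, unlike an ML detector operating on equalized symbols."""
    scores = [abs(np.vdot(c, rx_block)) for c in candidates]
    return int(np.argmax(scores))
```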
Optical detection of tracer species in strongly scattering media.
Brauser, Eric M; Rose, Peter E; McLennan, John D; Bartl, Michael H
2015-03-01
A combination of optical absorption and scattering is used to detect tracer species in a strongly scattering medium. An optical setup was developed consisting of a dual-beam scattering detection scheme in which the sample scattering beam overlaps with the characteristic absorption feature of the quantum dot tracer species, while the reference scattering beam lies outside any absorption feature of the tracer. This scheme was successfully tested in engineered breakthrough tests typical of wastewater and subsurface fluid analysis, as well as in batch analysis of oil and gas reservoir fluids and biological samples. Tracers were detected even under highly scattering conditions in which conventional absorption or fluorescence methods failed.
NASA Technical Reports Server (NTRS)
Milner, G. Martin; Black, Mike; Hovenga, Mike; Mcclure, Paul; Miller, Patrice
1988-01-01
The application of vibration monitoring to the rotating machinery typical of ECLSS components in advanced NASA spacecraft was studied. It is found that the weighted summation of the accelerometer power spectrum is the most successful detection scheme for a majority of problem types. Other detection schemes studied included high-frequency demodulation, cepstrum, clustering, and amplitude processing.
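A sketch of a weighted power-spectrum statistic is shown below using SciPy's Welch estimator; the Gaussian weighting band, sampling rate, and thresholding strategy are hypothetical choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import welch

def weighted_psd_statistic(accel, fs, weights):
    """Weighted summation of the accelerometer power spectrum; 'weights' is a
    function of frequency emphasizing the fault band of interest."""
    f, pxx = welch(accel, fs=fs, nperseg=1024)
    return float(np.sum(weights(f) * pxx))

# Hypothetical usage: emphasize a band around 2 kHz and flag a fault when the
# statistic exceeds a baseline threshold learned from healthy-machine data.
stat = weighted_psd_statistic(np.random.randn(8192), fs=10_000,
                              weights=lambda f: np.exp(-((f - 2000.0) / 300.0) ** 2))
```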
Studies on Radar Sensor Networks
2007-08-08
scheme in which a 2-D image was created by adding voltages with the appropriate time offset. Simulation results show that our DCT-based scheme works... using RSNs in terms of the probability of miss detection (PMD) and the root mean square error (RMSE). Simulation results showed that multi-target detection... Simulation results are presented to evaluate the feasibility and effectiveness of the proposed JMIC algorithm in a query surveillance region. SVD-QR and...
Cosmic-Ray Extremely Distributed Observatory: a global cosmic ray detection framework
NASA Astrophysics Data System (ADS)
Sushchov, O.; Homola, P.; Dhital, N.; Bratek, Ł.; Poznański, P.; Wibig, T.; Zamora-Saa, J.; Almeida Cheminant, K.; Alvarez Castillo, D.; Góra, D.; Jagoda, P.; Jałocha, J.; Jarvis, J. F.; Kasztelan, M.; Kopański, K.; Krupiński, M.; Michałek, M.; Nazari, V.; Smelcerz, K.; Smolek, K.; Stasielak, J.; Sułek, M.
2017-12-01
The main objective of the Cosmic-Ray Extremely Distributed Observatory (CREDO) is the detection and analysis of extended cosmic ray phenomena, so-called super-preshowers (SPS), using existing as well as new infrastructure (cosmic-ray observatories, educational detectors, single detectors, etc.). The search for ensembles of cosmic ray events initiated by SPS is as yet untouched ground, in contrast to the current state-of-the-art analysis, which focuses on the detection of single cosmic ray events. A theoretical explanation of SPS could be given within either classical (e.g., photon-photon interaction) or exotic (e.g., Super Heavy Dark Matter decay or annihilation) scenarios; detection of SPS would therefore provide a better understanding of particle physics, high energy astrophysics, and cosmology. Ensembles of cosmic rays can be classified by the spatial and temporal extent of the particles constituting the ensemble. Some classes of SPS are predicted to have a huge spatial distribution, a unique signature detectable only with a facility of global size. Since developing and commissioning a completely new facility with such requirements is economically unwarranted and time-consuming, the global analysis goals become achievable when all types of existing detectors are merged into a worldwide network. The idea of using instruments already in operation rests on a novel trigger algorithm: in parallel to looking for neighbouring surface detectors receiving a signal simultaneously, one should also look for spatially isolated stations clustered in a small time window. The CREDO strategy is also aimed at actively engaging a large number of participants, who will contribute to the project by using common electronic devices (e.g., smartphones) capable of detecting cosmic rays. This will help not only in expanding the geographical spread of CREDO but also in managing the large workforce necessary for a more efficient crowd-sourced pattern recognition scheme to identify and classify SPS. A worldwide network of cosmic-ray detectors would not only be a unique tool for studying fundamental physics; it would also provide a number of other opportunities, including space-weather and geophysics studies. Among the latter is the potential to predict earthquakes by monitoring the rate of low energy cosmic-ray events. The diversity of goals motivates us to advertise this concept across the astroparticle physics community.
NASA Astrophysics Data System (ADS)
Kawazoe, S.; Gutowski, W. J., Jr.
2015-12-01
We analyze the ability of regional climate models (RCMs) to simulate very heavy daily precipitation and its supporting processes, for both contemporary and future-scenario simulations during summer (JJA). RCM output comes from North American Regional Climate Change Assessment Program (NARCCAP) simulations, all run at a spatial resolution of 50 km. Analysis focuses on the upper Mississippi basin in summer, for 1982-1998 in the contemporary climate and 2052-2068 in the scenario climate. We also compare simulated precipitation and supporting processes with those obtained from observed precipitation and reanalysis atmospheric states. Precipitation observations are from the University of Washington (UW) and Climate Prediction Center (CPC) gridded datasets; using two observational datasets helps determine whether uncertainties arise from differences in precipitation gridding schemes. Reanalysis fields come from the North American Regional Reanalysis. The NARCCAP models generally reproduce well the precipitation frequency-vs.-intensity spectrum seen in observations, while producing overly strong precipitation at high intensity thresholds. In the future-scenario climate, the frequency of light to moderate precipitation intensities decreases, while the frequency of higher intensity events increases. Further analysis focuses on precipitation events exceeding the 99.5th percentile that occur simultaneously at several points in the region, yielding so-called "widespread events". For widespread events, we analyze local and large-scale environmental parameters, such as 2-m temperature and specific humidity, 500-hPa geopotential heights, Convective Available Potential Energy (CAPE), and vertically integrated moisture flux convergence, among others, to compare the atmospheric states and processes leading to such events in the models and observations. The results suggest that analyzing the atmospheric states supporting very heavy precipitation events is a more fruitful path for understanding and detecting changes than looking at precipitation alone.
NASA Astrophysics Data System (ADS)
Zhou, C.; Zhang, X.; Gong, S.; Wang, Y.; Xue, M.
2016-01-01
A comprehensive aerosol-cloud-precipitation interaction (ACI) scheme has been developed within a China Meteorological Administration (CMA) chemical weather modeling system, GRAPES/CUACE (Global/Regional Assimilation and PrEdiction System, CMA Unified Atmospheric Chemistry Environment). Cloud condensation nuclei (CCN), calculated by a sectional aerosol activation scheme from the size and mass information in CUACE and the thermodynamic and moisture states from the weather model GRAPES at each time step, are fed online into a two-moment cloud scheme (the WRF Double-Moment 6-class scheme, WDM6) and a convective parameterization to drive cloud physics and precipitation formation processes. The modeling system has been applied to study ACI for January 2013, when several persistent haze-fog events and eight precipitation events occurred.
The results show that aerosols interacting with WDM6 in GRAPES/CUACE clearly increase the total cloud water, liquid water content, and cloud droplet number concentrations, while decreasing the mean cloud droplet diameter, with the magnitude of the changes varying by case and region. These interactive cloud microphysical properties improve the calculation of collection growth rates in some regions, and hence the modeled precipitation rate and distribution, showing 24 to 48 % enhancements of the threat score for 6 h precipitation in almost all regions. Aerosols interacting with WDM6 also reduce the regional mean temperature bias by 3 °C during certain precipitation events, but the monthly mean bias is reduced by only about 0.3 °C.
Event-driven simulations of nonlinear integrate-and-fire neurons.
Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique
2007-12-01
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
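For the quadratic integrate-and-fire model with constant input between events, the next spike time has a closed form, which is what makes exact event-driven simulation possible. The sketch below assumes I > 0 and a spike threshold at infinity, the standard convention for this model; the simulator loop is summarized in a comment.

```python
import math

def qif_next_spike_time(v0: float, I: float) -> float:
    """Exact time to the next spike of dv/dt = v**2 + I with constant I > 0.
    The solution v(t) = sqrt(I) * tan(sqrt(I) * t + atan(v0 / sqrt(I)))
    diverges (a spike, with threshold at infinity) when the tangent's
    argument reaches pi/2."""
    s = math.sqrt(I)
    return (math.pi / 2.0 - math.atan(v0 / s)) / s

# An event-driven simulator keeps these times in a priority queue, pops the
# earliest spike, applies instantaneous synaptic jumps to the postsynaptic
# potentials, and recomputes only the affected neurons' next-spike times.
print(qif_next_spike_time(0.0, 1.0))   # equals pi/2 for v0 = 0, I = 1
```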
NASA Astrophysics Data System (ADS)
de Oliveira, Helder C. R.; Mencattini, Arianna; Casti, Paola; Martinelli, Eugenio; di Natale, Corrado; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.
2018-02-01
This paper proposes a method to reduce the number of false positives (FP) in a computer-aided detection (CAD) scheme for the automated detection of architectural distortion (AD) in digital mammography. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect than microcalcifications and masses, and is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for the automated detection of AD in breast images. The usual approach is to automatically detect possible sites of AD in a mammographic image (segmentation step) and then use a classifier to eliminate false positives and identify the suspicious regions (classification step). This paper focuses on optimizing the segmentation step to reduce the number of FPs passed to the classifier. The proposal is to use statistical measurements to score the segmented regions and then apply a threshold to select a small subset of regions for the classification step, improving the detection performance of the CAD scheme. We evaluated 12 image features for scoring and selecting suspicious regions in 74 clinical full-field digital mammography (FFDM) images. All images in this dataset contained at least one region with AD previously marked by an expert radiologist. The results show that the proposed method can reduce the false positives of the segmentation step from 43.4 FP per image to 34.5 FP per image, without increasing the number of false negatives.
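The segmentation-stage selection reduces to scoring regions and keeping the top fraction; a minimal sketch follows, with the quantile-based threshold as a hypothetical operating point rather than the paper's tuned value.

```python
import numpy as np

def select_suspicious_regions(scores: np.ndarray, keep_fraction: float = 0.8):
    """Keep only the highest-scoring segmented regions before classification.
    keep_fraction is a hypothetical operating point; the paper tunes its
    threshold on the 12 evaluated image features."""
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    return np.flatnonzero(scores >= threshold)   # indices of retained regions
```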
Determination of Plasma Screening Effects for Thermonuclear Reactions in Laser-generated Plasmas
NASA Astrophysics Data System (ADS)
Wu, Yuanbin; Pálffy, Adriana
2017-03-01
Due to screening effects, nuclear reactions in astrophysical plasmas may behave differently than in the laboratory. The possibility to determine the magnitude of these screening effects in colliding laser-generated plasmas is investigated theoretically, having as a starting point a proposed experimental setup with two laser beams at the Extreme Light Infrastructure facility. A laser pulse interacting with a solid target produces a plasma through the Target Normal Sheath Acceleration scheme, and this rapidly streaming plasma (ion flow) impacts a secondary plasma created by the interaction of a second laser pulse on a gas jet target. We model this scenario here and calculate the reaction events for the astrophysically relevant reaction 13C(4He, n)16O. We find that it should be experimentally possible to determine the plasma screening enhancement factor for fusion reactions by detecting the difference in reaction events between two scenarios of ion flow interacting with the plasma target and a simple gas target. This provides a way to evaluate nuclear reaction cross-sections in stellar environments and can significantly advance the field of nuclear astrophysics.
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-08-12
With prevalent attacks on communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. Specifically, we employ entangled states prepared from m-bonacci sequences to detect eavesdropping, and we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations and that it permits secret sharing among an arbitrary number of classical participants (no fewer than the threshold value) with much lower bandwidth. In comparison with existing quantum secret sharing schemes, it also continues to work under dynamic changes, such as the unavailability of a quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong
2017-06-02
Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving-object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of measurement inaccuracies and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information provides key constraints for the trajectory tracking procedure. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions, and the common side of adjacent triangular regions is regarded as a regional boundary. The scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions; the trajectory is then formed by a dynamic time-warping, position-fingerprint-matching algorithm with heuristic-information constraints. Field experiments show that the average error distance of the scheme is less than 1.5 m and that errors do not accumulate across regions.
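A minimal dynamic-time-warping kernel for matching RSSI fingerprint sequences is sketched below; the region restriction imposed by the boundary heuristics is noted in a comment rather than implemented.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping between two RSSI
    fingerprint sequences (rows are time samples, columns are anchor nodes)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# In a TTDH-style scheme, a tracked trace would be matched only against the
# stored fingerprints of regions consistent with the detected boundary
# crossings, rather than against the whole field.
```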
Wang, Ji-Chao; Yao, Hong-Chang; Fan, Ze-Yu; Zhang, Lin; Wang, Jian-She; Zang, Shuang-Quan; Li, Zhong-Jun
2016-02-17
Rational design and construction of Z-scheme photocatalysts has received much attention in the field of CO2 reduction because of its great potential to address the current energy and environmental crises. In this study, a series of Z-scheme BiOI/g-C3N4 photocatalysts is synthesized, and their photocatalytic performance for CO2 reduction to CO, H2 and/or CH4 is evaluated under visible light irradiation (λ > 400 nm). The results show that the as-synthesized composites exhibit markedly higher photocatalytic activity than pure g-C3N4 and BiOI, and that the product yields change remarkably with reaction conditions such as the irradiation wavelength. Emphasis is placed on identifying how charge transfers across the heterojunctions: an indirect Z-scheme charge transfer mechanism is verified by detecting the intermediate I3- ions, and the reaction mechanism is further proposed based on detection of the intermediates •OH and H2O2. This work may be useful for the rational design of new types of Z-scheme photocatalysts and provides illuminating insights into the Z-scheme transfer mechanism.
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD, including viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed, leaving the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations, and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears) and redundant multi-resolution wavelets (WAV) (for all of the above flow features). These filter approaches also provide a natural and efficient way to minimize the Div(B) numerical error. The filter scheme consists of spatially sixth-order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high-order linear dissipation is used to remove spurious high-frequency oscillations; for example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme-independent wavelet flow detector can be used in conjunction with spatially compact, spectral, or spectral-element base schemes. The ACM and wavelet filter schemes use the dissipative portion of a second-order shock-capturing scheme, a spatially sixth-order central base scheme for both the inviscid and viscous MHD flux derivatives, and a fourth-order Runge-Kutta time integration.
2016-08-19
...isolation and storage. RESULTS: A retrosynthetic route for the synthesis of compound 1 is illustrated in Scheme 1. The three steps are an oxidative... cleavage to compound 1, and will determine detectable byproducts, both of which are issues of interest. The synthesis of the hexamethyl triolefin-trioxane 2a... Scheme 1: retrosynthetic route toward the CO2 trimer; Scheme 2: synthesis of the substituted triolefin-trioxane; Scheme 3: attempted synthesis of...
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Fukushima, Y.; Katagi, T.; Hashizume, M.; Satomura, M.; Wu, P.; Ishii, M.; Kato, T.; Fukuda, Y.
2009-04-01
Since the occurrence of the 2004 Sumatra-Andaman earthquake (Mw 9.2), the Sumatra-Andaman subduction zone has attracted geophysicists' attention. We have been carrying out continuous GPS (CGPS) observations in Thailand and Myanmar to detect postseismic deformation following this gigantic event. Since CGPS on land alone cannot resolve the detailed pattern of postseismic deformation, we also perform InSAR analyses in the Andaman Islands and Phuket. On September 12, 2007, another Mw 8.4 event occurred SW off Sumatra; we report deformation observed with GPS and SAR, including co- and postseismic deformation following this event. We have analyzed CGPS data up to the end of 2007 and detected postseismic displacements all over the Indochina peninsula. Phuket, which suffered about 26 cm of coseismic displacement, had shifted ~26 cm southwestward by July 2007. A postseismic transient is clearly recognized and already exceeds the coseismic movement at remote sites such as Bangkok and Chiang Mai in Thailand. We invert the observed postseismic displacements and estimate the distribution of afterslip using Yabuki and Matsu'ura's (1992) scheme. Afterslip appears to have decayed rapidly in and around the source region of the Nias earthquake and beneath the Andaman Islands, while it still continues beneath the northern tip of Sumatra and Nicobar Island; this implies spatial variation in the frictional properties of the plate interface. Our GPS sites are located in the far field, so the afterslip distribution obtained above does not have enough resolution in the depth direction. To examine near-field displacement, we also process three ALOS/PALSAR images acquired between June 19, 2007 and May 6, 2008 in the Andaman Islands to detect the postseismic transient. The result shows a negative line-of-sight displacement in the southern part, consistent with the CGPS observations of Paul et al. (2007); this movement can be simulated by afterslip on a shallow part of the plate interface.
Bodensteiner, David; Scott, C Ronald; Sims, Katherine B; Shepherd, Gillian M; Cintron, Rebecca D; Germain, Dominique P
2008-05-01
To determine if enzyme replacement therapy, involving intravenous infusions of recombinant human alpha-galactosidase A (agalsidase beta; Fabrazyme), could be safely continued in patients with Fabry disease who had been withdrawn from a previous clinical trial as a precautionary, protocol-specified measure due to detection of serum IgE antibodies or skin-test reactivity to agalsidase beta. The rechallenge infusion protocol specified strict patient monitoring conditions and graded dosing and infusion-rate schemes that were adjusted according to each patient's tolerance to the infusion. Six males (age: 26-66 years) were enrolled. During rechallenge, five patients received between 4 and 27 infusions; one patient voluntarily withdrew after one infusion because of recurrence of infusion-associated reactions. No anaphylactic reactions occurred. All adverse events, including four serious adverse events, were mild or moderate in intensity. Most treatment-related adverse events occurred during infusions (most commonly urticaria, vomiting, nausea, chills, pruritus, hypertension) and were resolved by infusion rate reductions and/or medication. After participation in the study, all patients, including the one who withdrew after one infusion, transitioned to commercial drug. Agalsidase beta therapy can be successfully reinstated in patients with Fabry disease who have developed IgE antibodies or skin test reactivity to the recombinant enzyme.
Li, Chunguang; Zheng, Chuantao; Dong, Lei; ...
2016-06-20
A ppb-level mid-infrared ethane (C2H6) sensor was developed using a continuous-wave, thermoelectrically cooled, distributed feedback interband cascade laser emitting at 3.34 μm and a miniature dense-patterned multipass gas cell with a 54.6 m optical path length. The performance of the sensor was investigated using two different techniques based on the tunable interband cascade laser: direct absorption spectroscopy (DAS) and second-harmonic wavelength modulation spectroscopy (2f-WMS). Three measurement schemes (DAS, WMS, and quasi-simultaneous DAS and WMS) were realized on the same optical sensor core. A detection limit of ~7.92 ppbv with a precision of ±30 ppbv was achieved for the separate DAS scheme with a 1 s averaging time, and a detection limit of ~1.19 ppbv with a precision of about ±4 ppbv for the separate WMS scheme with a 4 s averaging time. An Allan–Werle variance analysis indicated that the precisions can be further improved to 777 pptv @ 166 s for the separate DAS scheme and 269 pptv @ 108 s for the WMS scheme, respectively. For the quasi-simultaneous DAS and WMS scheme, both the 2f signal and the direct absorption signal were extracted simultaneously using a LabVIEW platform, and four C2H6 samples (0, 30, 60 and 90 ppbv, with nitrogen as the balance gas) were used as target gases to assess the sensor performance. A detailed comparison of the three measurement schemes is reported. Atmospheric C2H6 measurements on the Rice University campus and a field test at a compressed natural gas station in Houston, TX, were conducted to evaluate the performance of the sensor system as a robust, reliable, field-deployable system.
Imran, Muhammad; Zafar, Nazir Ahmad
2012-01-01
Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments and leaves a coverage hole, hindering network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate the failure of a critical actor. As part of pre-failure planning, PCR identifies critical and non-critical actors from localized information and designates for each critical node an appropriate backup (preferably a non-critical one). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove its correctness, we construct a formal specification of PCR in Z notation, modeling the WSAN topology as a dynamic graph; the specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
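PCR's notion of a critical actor coincides with a cut vertex of the inter-actor graph. A sketch using networkx follows; the toy topology and the backup-selection rule (prefer a non-critical neighbor) are illustrative, and PCR itself works from localized rather than global information.

```python
import networkx as nx

# Toy inter-actor topology; edges are bidirectional communication links.
G = nx.Graph([(1, 2), (2, 3), (3, 4), (2, 5), (5, 6)])

# Critical actors are cut vertices: removing one disconnects the network,
# which is exactly the condition PCR's pre-failure planning must detect.
critical = set(nx.articulation_points(G))

# Designate a backup per critical actor, preferring a non-critical neighbor.
backups = {}
for c in critical:
    non_critical = [n for n in G[c] if n not in critical]
    backups[c] = non_critical[0] if non_critical else next(iter(G[c]))
```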
High-Order Residual-Distribution Schemes for Discontinuous Problems on Irregular Triangular Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2016-01-01
In this paper, we develop second- and third-order non-oscillatory shock-capturing hyperbolic residual-distribution schemes for irregular triangular grids, extending our second- and third-order schemes to discontinuous problems. We present extended first-order N- and Rusanov-scheme formulations for the hyperbolic advection-diffusion system, and demonstrate that the hyperbolic diffusion term does not affect the solution of inviscid problems for vanishingly small viscous coefficient. We then propose second- and third-order blended hyperbolic residual-distribution schemes built on the extended first-order Rusanov scheme. We show that these proposed schemes are extremely accurate in predicting non-oscillatory solutions for discontinuous problems. We also propose a characteristics-based nonlinear wave sensor for accurately detecting shocks, compression, and expansion regions. Using this proposed sensor, we demonstrate that the developed hyperbolic blended schemes do not produce entropy-violating solutions (unphysical shocks). We then verify the design order of accuracy of these blended schemes on irregular triangular grids.
NASA Astrophysics Data System (ADS)
Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako
2007-03-01
The comparison of left and right mammograms is a common technique used by radiologists for the detection and diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using a bilateral subtraction technique have been reported; in breast ultrasonography, however, there are no reports of CAD schemes that compare the left and right breasts. In this study, we propose a false-positive reduction scheme based on a bilateral subtraction technique for whole breast ultrasound images. Mass candidate regions are detected using edge-direction information. Bilateral breast images are registered with reference to the nipple positions and skin lines. A false-positive region is identified by comparing the average gray value of a mass candidate region with that of the region at the same position and of the same size in the contralateral breast. To evaluate the effectiveness of the method, three normal and three abnormal bilateral pairs of whole breast images were employed; the abnormal breasts included six masses larger than 5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying the proposed reduction method. Applying the method reduced the false positives to 4.5 (54/12) per breast without removing any true-positive region. This preliminary study indicates that the bilateral subtraction technique is effective for improving the performance of a CAD scheme for whole breast ultrasound images.
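The bilateral test reduces to comparing the mean gray values of a candidate region and its registered counterpart; a minimal sketch follows, with the tolerance as a hypothetical parameter rather than the study's criterion.

```python
import numpy as np

def is_false_positive(candidate: np.ndarray, contralateral: np.ndarray,
                      tol: float = 0.1) -> bool:
    """Reject a candidate region when its mean gray value is close to that of
    the same-position, same-size region in the registered contralateral
    breast; tol is a hypothetical tolerance, not the paper's tuned value."""
    mc, mo = float(candidate.mean()), float(contralateral.mean())
    return abs(mc - mo) <= tol * max(mo, 1e-6)
```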
NASA Technical Reports Server (NTRS)
Moore, Timothy; Dowell, Mark; Franz, Bryan A.
2012-01-01
A generalized coccolithophore bloom classifier has been developed for use with ocean color imagery. The bloom classifier was developed using satellite reflectance data extracted from SeaWiFS images screened by the default bloom detection mask. In the current application, we extend the optical water type (OWT) classification scheme by adding a new coccolithophore bloom class formed from these extracted reflectances. Based on an in situ coccolithophore data set from the North Atlantic, the detection levels of the new scheme were between 1,500 and 1,800 coccolithophore cells/mL and between 43,000 and 78,000 liths/mL. The detected bloom area using the OWT method was on average 1.75 times greater than that of the default bloom detector on a collection of SeaWiFS 1 km imagery. The versatility of the scheme is shown with SeaWiFS, MODIS Aqua, CZCS, and MERIS imagery at the 1 km scale. The OWT scheme was applied to the daily global SeaWiFS imagery mission data set (years 1997-2010). Based on our results, the average annual coccolithophore bloom area was more than two times greater in the southern hemisphere than in the northern hemisphere, with values of 2.00 × 10^6 km^2 and 0.75 × 10^6 km^2, respectively. The new algorithm detects larger bloom areas in the Southern Ocean than the default algorithm, and our revised global annual average of 2.75 × 10^6 km^2 is dominated by contributions from the Southern Ocean.
PSK Shift Timing Information Detection Using Image Processing and a Matched Filter
2009-09-01
phase shifts are enhanced. Develop, design, and test the resulting phase shift identification scheme... The resulting phase shift identification algorithm is investigated for SNR levels in the range -2 dB to 12 dB, and detection performances are derived... Develop, design, and test an optional analysis window overlapping technique to improve phase...
An Investigation of the Pareto Distribution as a Model for High Grazing Angle Clutter
2011-03-01
radar detection schemes under controlled conditions. Complicated clutter models result in mathematical difficulties in the determination of optimal and... a population [7]. It has been used in the modelling of actuarial data; an example is in excess-of-loss quotations in insurance [8]. Its usefulness as... modified Bessel functions, making it difficult to employ in radar detection schemes. The Pareto distribution is amenable to mathematical...
Wavelet Based Protection Scheme for Multi Terminal Transmission System with PV and Wind Generation
NASA Astrophysics Data System (ADS)
Manju Sree, Y.; Goli, Ravi kumar; Ramaiah, V.
2017-08-01
A hybrid generation system is part of a larger power system in which a number of sources, usually attached to power electronic converters, and clustered loads can operate independently of the main power system. Protection against faults is crucial, and traditional overcurrent protection is problematic because fault current levels differ between modes of operation. This paper adopts a new approach for the detection and discrimination of faults for multi-terminal transmission line protection in the presence of hybrid generation. A transient-current-based protection scheme is developed with the discrete wavelet transform. Fault indices of all phase currents at all terminals are obtained by analyzing the detail coefficients of the current signals using the bior1.5 mother wavelet. The scheme is tested for different types of faults and is found effective for fault detection and discrimination over a range of fault inception angles and fault impedances.
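A sketch of a detail-coefficient fault index using PyWavelets' bior1.5 wavelet follows; the single-level transform, the energy measure, and any decision threshold are illustrative choices in the spirit of the scheme rather than its exact formulation.

```python
import numpy as np
import pywt

def fault_index(phase_current: np.ndarray, wavelet: str = 'bior1.5') -> float:
    """Energy of the level-1 detail coefficients of a one-cycle phase-current
    window; a transient fault inflates this index well above its pre-fault
    value. The decision threshold is application-specific."""
    _approx, detail = pywt.dwt(phase_current, wavelet)
    return float(np.sum(detail ** 2))

# Comparing indices across phases and terminals supports fault detection,
# phase discrimination, and faulted-section identification.
```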
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
NASA Astrophysics Data System (ADS)
Phan, Raymond; Androutsos, Dimitrios
2008-01-01
In this paper, we present a logo and trademark retrieval system for unconstrained color image databases that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme. We introduce more accurate information into the CECH by incorporating color edge detection based on vector order statistics, which produces a more accurate representation of edges in color images than the simple color pixel difference classification of edges used in the CECH. Our proposed method thus relies on edge gradient information, and we accordingly call it the Color Edge Gradient Co-occurrence Histogram (CEGCH). We use it as the main mechanism of our unconstrained color logo and trademark retrieval scheme. Results illustrate that the proposed system retrieves logos and trademarks with good accuracy and outperforms the CECH object detection scheme with higher precision and recall.
A novel scheme to aid coherent detection of GMSK signals in fast Rayleigh fading channels
NASA Technical Reports Server (NTRS)
Leung, Patrick S. K.; Feher, Kamilo
1990-01-01
A novel scheme is proposed to insert a carrier pilot into a Gaussian Minimum Shift Keying (GMSK) signal using a Binary Block Code (BBC) and a baseband highpass filter. This allows the signal to be coherently demodulated even in a fast Rayleigh fading environment. As an illustrative example, the scheme is applied to a 16 kb/s GMSK signal, and its performance over a fast Rayleigh fading channel is investigated by computer simulation. The modem's 'irreducible error rate' is found to be Pe = 5.5 × 10^-5, an improvement over differential detection. The modem's performance in a Rician fading channel is currently under investigation.
NASA Astrophysics Data System (ADS)
Michels, François; Mazzoni, Federico; Becucci, Maurizio; Müller-Dethlefs, Klaus
2017-10-01
An improved detection scheme is presented for threshold ionization spectroscopy with simultaneous recording of the Zero Electron Kinetic Energy (ZEKE) and Mass Analysed Threshold Ionisation (MATI) signals. The objective is to obtain accurate dissociation energies for larger molecular clusters by simultaneously detecting the fragment and parent ion MATI signals with identical transmission. The scheme preserves an optimal ZEKE spectral resolution together with excellent separation of the spontaneous ion and MATI signals in the time-of-flight mass spectrum. The resulting improvement in sensitivity will allow for the determination of dissociation energies in clusters with substantial mass difference between parent and daughter ions.
Two-stage Keypoint Detection Scheme for Region Duplication Forgery Detection in Digital Images.
Emam, Mahmoud; Han, Qi; Zhang, Hongli
2018-01-01
In digital image forensics, copy-move (region duplication) forgery detection has recently become a vital research topic. Most existing keypoint-based forgery detection methods, although robust to geometric changes, fail to detect forgeries in smooth regions. To solve this problem and detect keypoints covering all regions, we propose a two-step keypoint detection. First, we employ the scale-invariant feature operator to detect spatially distributed keypoints in textured regions. Second, keypoints in the remaining regions are detected using the Harris corner detector with non-maximal suppression to distribute the detected keypoints evenly. To improve matching performance, local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor. A comprehensive performance evaluation based on precision-recall rates was performed on a commonly tested dataset. The results demonstrate that the proposed scheme has better detection performance and robustness against several geometric transformation attacks than state-of-the-art methods.
Gating capacitive field-effect sensors by the charge of nanoparticle/molecule hybrids.
Poghossian, Arshak; Bäcker, Matthias; Mayer, Dirk; Schöning, Michael J
2015-01-21
The semiconductor field-effect platform is a powerful tool for chemical and biological sensing with direct electrical readout. In this work, the field-effect capacitive electrolyte-insulator-semiconductor (EIS) structure - the simplest field-effect (bio-)chemical sensor - modified with citrate-capped gold nanoparticles (AuNPs) has been applied for label-free electrostatic detection of charged molecules via their intrinsic molecular charge. The EIS sensor detects the charge changes in AuNP/molecule inorganic/organic hybrids induced by molecular adsorption or binding events. The feasibility of the proposed detection scheme is demonstrated with capacitive EIS sensors consisting of an Al-p-Si-SiO2-silane-AuNP structure for the label-free detection of positively charged cytochrome c and poly-d-lysine molecules, and for monitoring the layer-by-layer formation of polyelectrolyte multilayers of poly(allylamine hydrochloride)/poly(sodium 4-styrene sulfonate); these represent typical model cases of detecting small proteins and macromolecules and of the consecutive adsorption of positively/negatively charged polyelectrolytes. For comparison, EIS sensors without AuNPs were investigated as well. The adsorption of molecules on the AuNP surface was verified by X-ray photoelectron spectroscopy. In addition, a theoretical model of the functioning of the capacitive field-effect EIS sensor functionalized with AuNP/charged-molecule hybrids is discussed.
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used to protect digital content from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for the copyright protection of digital content. SMS defines a sound as a combination of deterministic events plus a stochastic component, which makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original. In the proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame along peak trajectories. Simulation results indicate that the scheme is highly robust against various attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression, achieving similarity values ranging from 17 to 22 and signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
Adaptive Information Dissemination Control to Provide Diffdelay for the Internet of Things.
Liu, Xiao; Liu, Anfeng; Huang, Changqin
2017-01-12
Applications running on the Internet of Things, such as the Wireless Sensor and Actuator Networks (WSANs) platform, generally have different quality of service (QoS) requirements. For urgent events, it is crucial that information be reported to the actuator quickly, and communication cost is only a secondary factor. For merely interesting events, however, communication cost, network lifetime, and delay all become important factors. In most situations, these different requirements cannot be satisfied simultaneously. In this paper, an adaptive communication control based on differentiated delay (ACCDS) scheme is proposed to resolve this conflict. In ACCDS, source nodes of events adaptively send varying numbers of searching-actuator routings (SARs) based on the degree of delay sensitivity, while maintaining the network lifetime. For a delay-sensitive event, the source node sends a large number of SARs to identify and inform the actuators in an extremely short time; thus, action can be taken quickly, but at higher communication cost. For delay-insensitive events, the source node sends fewer SARs to reduce communication cost and improve network lifetime. Therefore, ACCDS can meet the QoS requirements of different events using a differentiated delay framework. Theoretical analysis and simulation results indicate that ACCDS provides differentiated delay and communication-cost services; the scheme can reduce the network delay by 11.111%-53.684% for delay-sensitive events and reduce the communication costs by 5%-22.308% for interesting events, while reducing the network lifetime by about 28.713%. PMID:28085097
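A toy sketch of the adaptive SAR-count rule described above, assuming a simple linear mapping from delay sensitivity to the number of SARs; the counts and the mapping are illustrative, not taken from the paper.

```python
def num_sars(delay_sensitivity, base=2, max_sars=12):
    """Pick how many searching-actuator routings (SARs) a source node
    issues: urgent (delay-sensitive) events flood more SARs to find the
    mobile actor quickly; delay-insensitive events send few to save
    energy. The linear rule and counts here are illustrative."""
    return base + round(delay_sensitivity * (max_sars - base))

print(num_sars(1.0), num_sars(0.1))  # 12 SARs for an urgent event, 3 otherwise
```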
Elbashir, Ahmed B; Abdelbagi, Azhari O; Hammad, Ahmed M A; Elzorgani, Gafar A; Laing, Mark D
2015-03-01
Ninety-six human blood samples were collected from six locations that represent areas of intensive pesticide use in Sudan, which included irrigated cotton schemes (Wad Medani, Hasaheesa, Elmanagil, and Elfaw) and sugarcane schemes (Kenana and Gunaid). Blood samples were analyzed for organochlorine pesticide residues by gas liquid chromatography (GLC) equipped with an electron capture detector (ECD). Residues of p,p'-dichlorodiphenyldichloroethylene (DDE), heptachlor epoxide, γ-HCH, and dieldrin were detected in blood from all locations surveyed. Aldrin was not detected in any of the samples analyzed, probably due to its conversion to dieldrin. The levels of total organochlorine burden detected were higher in the blood from people in the irrigated cotton schemes (mean 261 ng ml(-1), range 38-641 ng ml(-1)) than in the blood of people from the irrigated sugarcane schemes (mean 204 ng ml(-1), range 59-365 ng ml(-1)). The highest levels of heptachlor epoxide (170 ng ml(-1)) and γ-HCH (92 ng ml(-1)) were observed in blood samples from Hasaheesa, while the highest levels of DDE (618 ng ml(-1)) and dieldrin (82 ng ml(-1)) were observed in blood samples from Wad Medani and Kenana, respectively. The organochlorine levels in blood samples seemed to decrease with increasing distance from the old irrigated cotton schemes (Wad Medani, Hasaheesa, and Elmanagil) where the heavy application of these pesticides took place historically.
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described, and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate-1/2 convolutional coding is given. The performance loss due to Rician fading is shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is described. This scheme exhibits similar performance to coherent detection at high bit error rates, while it is superior at lower bit error rates.
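For context on coherent BPSK detection, a toy Monte-Carlo bit-error-rate simulation over an AWGN channel; it omits the rate-1/2 convolutional coding, hard-limiting amplifiers, and Rician fading studied in the paper.

```python
import numpy as np

def bpsk_ber(ebno_db, nbits=1_000_000, rng=np.random.default_rng(0)):
    """Monte-Carlo bit error rate of coherently detected BPSK in AWGN."""
    bits = rng.integers(0, 2, nbits)
    symbols = 2.0 * bits - 1.0                          # map {0,1} -> {-1,+1}
    sigma = np.sqrt(1.0 / (2 * 10 ** (ebno_db / 10)))   # noise std for given Eb/N0
    received = symbols + sigma * rng.standard_normal(nbits)
    return np.mean((received > 0) != (bits == 1))       # threshold at zero

print(bpsk_ber(6.0))  # roughly 2.4e-3, matching the Q(sqrt(2 Eb/N0)) theory
```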
Oriented regions grouping based candidate proposal for infrared pedestrian detection
NASA Astrophysics Data System (ADS)
Wang, Jiangtao; Zhang, Jingai; Li, Huaijiang
2018-04-01
Effectively and accurately locating pedestrian candidates in an image is a key task for an infrared pedestrian detection system. In this work, a novel similarity measuring metric is designed. Based on the selective search scheme, the developed similarity metric is used to generate possible locations for pedestrian candidates. In addition, corresponding diversification strategies are provided according to the characteristics of the infrared thermal imaging system. Experimental results indicate that the presented scheme achieves more efficient outputs than the traditional selective search methodology for the infrared pedestrian detection task.
A symmetric metamaterial element-based RF biosensor for rapid and label-free detection
NASA Astrophysics Data System (ADS)
Lee, Hee-Jo; Lee, Jung-Hyun; Jung, Hyo-Il
2011-10-01
A symmetric metamaterial element-based RF biosensing scheme is experimentally demonstrated by detecting biomolecular binding between a prostate-specific antigen (PSA) and its antibody. The metamaterial element in a high-impedance microstrip line shows an intrinsic S21 resonance having a Q-factor of 55. The frequency shift with PSA concentration, i.e., 100 ng/ml, 10 ng/ml, and 1 ng/ml, is observed and the changes are Δf ≈ 20 MHz, 10 MHz, and 5 MHz, respectively. The proposed biosensor offers advantages of label-free detection, a simple and direct scheme, and cost-efficient fabrication.
The 20-22 January 2007 Snow Events over Canada: Microphysical Properties
NASA Technical Reports Server (NTRS)
Tao, W.K.; Shi, J.J.; Matsui, T.; Hao, A.; Lang, S.; Peters-Lidard, C.; Skofronick-Jackson, G.; Petersen, W.; Cifelli, R.; Rutledge, S.
2009-01-01
One of the grand challenges of the Global Precipitation Measurement (GPM) mission is to improve precipitation measurements in mid- and high-latitudes during cold seasons through the use of high-frequency passive microwave radiometry. Toward this end, the Weather Research and Forecasting (WRF) model with the Goddard microphysics scheme is coupled with a Satellite Data Simulation Unit (WRF-SDSU) that has been developed to facilitate over-land snowfall retrieval algorithms by providing a virtual cloud library and microwave brightness temperature (Tb) measurements consistent with the GPM Microwave Imager (GMI). This study tested the Goddard cloud microphysics scheme in WRF for snowstorm events (January 20-22, 2007) that took place over the Canadian CloudSat/CALIPSO Validation Project (C3VP) ground site (Centre for Atmospheric Research Experiments - CARE) in Ontario, Canada. In this paper, the performance of the Goddard cloud microphysics scheme both with 2ice (ice and snow) and 3ice (ice, snow and graupel) as well as other WRF microphysics schemes will be presented. The results are compared with data from the Environment Canada (EC) King Radar, an operational C-band radar located near the CARE site. In addition, the WRF model output is used to drive the Goddard SDSU to calculate radiances and backscattering signals consistent with direct satellite observations for evaluating the model results.
A cloud detection scheme for the Chinese Carbon Dioxide Observation Satellite (TANSAT)
NASA Astrophysics Data System (ADS)
Wang, Xi; Guo, Zheng; Huang, Yipeng; Fan, Hongjie; Li, Wanbiao
2017-01-01
Cloud detection is an essential preprocessing step for retrieving carbon dioxide from satellite observations of reflected sunlight. During the pre-launch study of the Chinese Carbon Dioxide Observation Satellite (TANSAT), a cloud-screening scheme was developed for the Cloud and Aerosol Polarization Imager (CAPI), which only performs measurements in five channels located in the visible to near-infrared regions of the spectrum. The scheme for CAPI, based on previous cloud-screening algorithms, defines a method to regroup individual threshold tests for each pixel in a scene according to the derived clear confidence level. This scheme proves to be more effective for sensors with few channels. The work relies upon radiance data from the Visible and Infrared Radiometer (VIRR) onboard the Chinese FengYun-3A Polar-orbiting Meteorological Satellite (FY-3A), which uses four wavebands similar to those of CAPI and can serve as a proxy for its measurements. The scheme has been applied to a number of VIRR scenes over four target areas (desert, snow, ocean, forest) for all seasons. To assess the screening results, comparisons against the cloud-screening product from MODIS are made. The evaluation suggests that the proposed scheme inherits the advantages of schemes described in previous publications and shows improved cloud-screening results. A seasonal analysis reveals that this scheme performs better during warmer seasons, except for observations over oceans, where results are much better in colder seasons.
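As a hedged illustration of regrouping individual threshold tests into a single clear confidence level, a sketch patterned on MODIS-style confidence aggregation; the group structure, combination rule, and example values are assumptions, not CAPI's actual tests.

```python
import numpy as np

def clear_confidence(test_confidences, groups):
    """Combine per-test clear-sky confidences (each in [0, 1]) into one
    clear confidence level: geometric mean within a group of related
    tests, product across groups (a conservative MODIS-style rule)."""
    group_conf = []
    for idx in groups:                      # e.g. [[0, 1], [2]]
        vals = np.array([test_confidences[i] for i in idx])
        group_conf.append(vals.prod() ** (1.0 / len(vals)))
    return float(np.prod(group_conf))

# Example: a pixel passes two reflectance tests strongly but a third test weakly.
print(clear_confidence([0.9, 0.8, 0.3], [[0, 1], [2]]))  # ~0.25 -> likely cloudy
```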
Kettelhut, M M; Chiodini, P L; Edwards, H; Moody, A
2003-01-01
Background: The burden of parasitic disease imported into the temperate zone is increasing, and in the tropics remains very high. Thus, high quality diagnostic parasitology services are needed, but to implement clinical governance a measure of quality of service is required. Aim: To examine performance in the United Kingdom National External Quality Assessment Scheme for Parasitology for evidence of improved standards in parasite diagnosis in clinical specimens. Methods: Analysis of performance was made for the period 1986 to 2001, to look for trends in performance scores. Results: An overall rise in performance in faecal and blood parasitology schemes was found from 1986 to 2001. This was seen particularly in the identification of ova, cysts, and larvae in the faecal scheme, the detection of Plasmodium ovale and Plasmodium vivax in the blood scheme, and also in the correct identification of non-malarial blood parasites. Despite this improvement, there are still problems. In the faecal scheme, participants still experience difficulty in recognising small protozoan cysts, differentiating vegetable matter from cysts, and detecting ova and cysts when more than one species is present. In the blood scheme, participants have problems in identifying mixed malarial infections, distinguishing between P ovale and P vivax, and estimating the percentage parasitaemia. The reasons underlying these problems have been identified via the educational part of the scheme, and have been dealt with by distributing teaching sheets and undertaking practical sessions. Conclusions: UK NEQAS for Parasitology has helped to raise the standard of diagnostic parasitology in the UK. PMID:14645352
Proposal to upgrade the MIPP data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, W.; Carey, D.; Johnstone, C.
2005-03-01
The MIPP TPC is the largest contributor to the MIPP event size by far. Its readout system and electronics were designed in the 1990s and limit it to a readout rate of 60 Hz for simple events and approximately 20 Hz for complicated events. Using the readout chips designed for the ALICE collaboration at the LHC, we propose a low-cost, effective scheme for upgrading the MIPP data acquisition speed to 3000 Hz.
Grunow, Roland; Ippolito, G; Jacob, D; Sauer, U; Rohleder, A; Di Caro, A; Iacovino, R
2014-01-01
Quality assurance exercises and networking on the detection of highly infectious pathogens (QUANDHIP) is a joint action initiative set up in 2011 that has successfully unified the primary objectives of the European Network on Highly Pathogenic Bacteria (ENHPB) and of P4-laboratories (ENP4-Lab), both of which aimed to improve the efficiency, effectiveness, and response capabilities of laboratories directed at protecting the health of European citizens against high consequence bacteria and viruses of significant public health concern. Both networks have established a common collaborative consortium of 37 nationally and internationally recognized institutions with laboratory facilities from 22 European countries. The specific objectives and achievements include the initiation and establishment of a recognized and acceptable quality assurance scheme, including practical external quality assurance exercises, comprising living agents, that aims to improve laboratory performance, accuracy, and detection capabilities in support of patient management and public health responses; recognized training schemes for diagnostics and handling of highly pathogenic agents; international repositories comprising highly pathogenic bacteria and viruses for the development of standardized reference material; a standardized and transparent Biosafety and Biosecurity strategy protecting healthcare personnel and the community in dealing with high consequence pathogens; the design and organization of response capabilities dealing with cross-border events with highly infectious pathogens including the consideration of diagnostic capabilities of individual European laboratories. The project tackled several sensitive issues regarding Biosafety, Biosecurity and "dual use" concerns. The article will give an overview of the project outcomes and discuss the assessment of potential "dual use" issues. PMID:25426479
Detection of dominant runoff generation processes for catchment classification
NASA Astrophysics Data System (ADS)
Gioia, A.; Manfreda, S.; Iacobellis, V.; Fiorentino, M.
2009-04-01
The identification of similar hydroclimatic regions, in order to reduce the uncertainty of flood prediction in ungauged basins, represents one of the most exciting challenges faced by hydrologists in recent years (e.g., the IAHS Decade on Predictions in Ungauged Basins (PUB); Sivapalan et al. [2003]). In this context, the investigation of the dominant runoff generation mechanisms may provide a strategy for catchment classification and identification of hydrologically homogeneous groups of basins. In particular, the present study focuses on two classical schemes responsible for runoff production: saturation excess and infiltration excess. In principle, the occurrence of either mechanism may be detected in the same basin according to the climatic forcing. Here the dynamics of runoff generation are investigated over a set of basins in order to identify the dynamics responsible for the transition between the two schemes and to recognize homogeneous groups of basins. We exploit a basin characterization obtained by means of a theoretical flood probability distribution, applied to a broad number of arid and humid river basins in the Southern Italy region, with the aim of describing the effect of different runoff production mechanisms on the generation of ordinary and extraordinary flood events. Sivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S. and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003-2012: Shaping an exciting future for the hydrological sciences, Hydrol. Sci. J., 48(6), 857-880, 2003.
A Secure Trust Establishment Scheme for Wireless Sensor Networks
Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob
2014-01-01
Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471
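The paper's modified one-step M-estimator is not reproduced here; as a hedged illustration of the underlying idea, a standard one-step Huber M-estimator that aggregates recommendations while bounding the influence of outlying (potentially malicious) values.

```python
import numpy as np

def one_step_huber(recs, c=1.345):
    """One-step Huber M-estimator: start from the median, take a single
    reweighting step so dishonest (outlying) recommendations get bounded
    influence. Standard construction; the paper uses a modified variant."""
    recs = np.asarray(recs, dtype=float)
    m0 = np.median(recs)
    scale = 1.4826 * np.median(np.abs(recs - m0)) or 1.0  # MAD, fall back to 1
    r = (recs - m0) / scale
    w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))      # Huber weights
    return float(np.sum(w * recs) / np.sum(w))

print(one_step_huber([0.8, 0.82, 0.79, 0.81, 0.1]))  # outlier 0.1 is discounted
```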
Exploring Sampling in the Detection of Multicategory EEG Signals
Siuly, Siuly; Kabir, Enamul; Wang, Hua; Zhang, Yanchun
2015-01-01
The paper presents a structure based on sampling and machine learning techniques for the detection of multicategory EEG signals, where random sampling (RS) and optimal allocation sampling (OS) are explored. In the proposed framework, before using the RS and OS schemes, the entire EEG signal of each class is partitioned into several groups based on a particular time period. The RS and OS schemes are used in order to obtain representative observations from each group of each category of EEG data. All of the samples selected by RS from the groups of each category are then combined into one set, named the RS set. In a similar way, an OS set is obtained for the OS scheme. Then eleven statistical features are extracted from the RS and OS sets, separately. Finally, this study employs three well-known classifiers: k-nearest neighbor (k-NN), multinomial logistic regression with a ridge estimator (MLR), and support vector machine (SVM), to evaluate the performance of the RS and OS feature sets. The experimental outcomes demonstrate that the RS scheme represents the EEG signals well and that k-NN with RS is the optimum choice for detection of multicategory EEG signals. PMID:25977705
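A sketch of the optimal-allocation step, assuming the classical Neyman rule (sample each group in proportion to its size times its within-group standard deviation); the group sizes and standard deviations below are invented for illustration.

```python
import numpy as np

def neyman_allocation(group_sizes, group_stds, n_total):
    """Optimal allocation: group h gets n * N_h*s_h / sum(N_j*s_j) samples,
    so variable (high-variance) EEG segments are sampled more heavily than
    flat ones. Plain random sampling would ignore the group variability."""
    weights = np.asarray(group_sizes) * np.asarray(group_stds)
    return np.round(n_total * weights / weights.sum()).astype(int)

print(neyman_allocation([1000, 1000, 1000], [5.0, 20.0, 75.0], 300))
# -> [ 15  60 225]: most samples are drawn from the most variable group
```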
Classification of Dust Days by Satellite Remotely Sensed Aerosol Products
NASA Technical Reports Server (NTRS)
Sorek-Hammer, M.; Cohen, A.; Levy, Robert C.; Ziv, B.; Broday, D. M.
2013-01-01
Considerable progress in satellite remote sensing (SRS) of dust particles has been made in the last decade. From an environmental health perspective, detection of dust events, once linked to ground particulate matter (PM) concentrations, can serve as a proxy for acute exposure to respirable particles of certain properties (i.e. size, composition, and toxicity). Being affected considerably by atmospheric dust, previous studies in the Eastern Mediterranean, and in Israel in particular, have focused on mechanistic and synoptic prediction, classification, and characterization of dust events. In particular, a scheme for identifying dust days (DD) in Israel based on ground PM10 (particulate matter smaller than 10 μm) measurements has been suggested, which has been validated by compositional analysis. This scheme requires information on ground PM10 levels, which is naturally limited in places with sparse ground-monitoring coverage. In such cases, SRS may be an efficient and cost-effective alternative to ground measurements. This work demonstrates a new model for identifying DD and non-DD (NDD) over Israel based on an integration of aerosol products from different satellite platforms (the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Ozone Monitoring Instrument (OMI)). Analysis of ground-monitoring data from 2007 to 2008 in southern Israel revealed 67 DD, with more than 88 percent occurring during winter and spring. A Classification and Regression Tree (CART) model applied to a database containing ground-monitoring records (the dependent variable) and SRS aerosol product records (the independent variables) revealed an optimal set of binary variables for the identification of DD. These variables are combinations of the following primary variables: the calendar month, ground-level relative humidity (RH), the aerosol optical depth (AOD) from MODIS, and the aerosol absorbing index (AAI) from OMI. A logistic regression that uses these variables, coded as binary variables, demonstrated 93.2 percent correct classification of DD and NDD. Evaluation of the combined CART-logistic regression scheme in an adjacent geographical region (Gush Dan) demonstrated good results. Using SRS aerosol products for DD and NDD identification may enable us to distinguish between health, ecological, and environmental effects that result from exposure to these distinct particle populations.
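A minimal scikit-learn sketch of the combined CART-logistic-regression idea: fit a shallow tree to derive binary split indicators, then feed them into a logistic regression. The feature columns and the synthetic labeling rule are placeholders, not the study's fitted variables.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# X columns (placeholders): [month, RH, MODIS_AOD, OMI_AAI]; y: 1 = dust day.
rng = np.random.default_rng(1)
X = rng.random((500, 4)) * [12, 100, 2, 5]
y = ((X[:, 2] > 1.0) & (X[:, 3] > 2.5)).astype(int)    # synthetic labeling rule

cart = DecisionTreeClassifier(max_depth=3).fit(X, y)    # CART finds the splits
leaf = cart.apply(X)                                    # leaf id per sample
binary = (leaf[:, None] == np.unique(leaf)[None, :]).astype(int)  # one-hot leaves
logit = LogisticRegression(max_iter=1000).fit(binary, y)
print(logit.score(binary, y))   # in-sample accuracy of the combined scheme
```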
Retrieval of volcanic SO2 from HIRS/2 using optimal estimation
NASA Astrophysics Data System (ADS)
Miles, Georgina M.; Siddans, Richard; Grainger, Roy G.; Prata, Alfred J.; Fisher, Bradford; Krotkov, Nickolay
2017-07-01
We present an optimal-estimation (OE) retrieval scheme for stratospheric sulfur dioxide from the High-Resolution Infrared Radiation Sounder 2 (HIRS/2) instruments on the NOAA and MetOp platforms, an infrared radiometer that has been operational since 1979. This algorithm is an improvement upon a previous method based on channel brightness temperature differences, which demonstrated the potential for monitoring volcanic SO2 using HIRS/2. The Prata method is fast but of limited accuracy. This algorithm uses an optimal-estimation retrieval approach yielding increased accuracy for only moderate computational cost. This is principally achieved by fitting the column water vapour and accounting for its interference in the retrieval of SO2. A cloud and aerosol model is used to evaluate the sensitivity of the scheme to the presence of ash and water/ice cloud. This identifies that cloud or ash above 6 km limits the accuracy of the water vapour fit, increasing the error in the SO2 estimate. Cloud top height is also retrieved. The scheme is applied to a case study event, the 1991 eruption of Cerro Hudson in Chile. The total erupted mass of SO2 is estimated to be 2300 kT ± 600 kT. This confirms it as one of the largest events since the 1991 eruption of Pinatubo, and of comparable scale to the Northern Hemisphere eruption of Kasatochi in 2008. This retrieval method yields a minimum mass per unit area detection limit of 3 DU, which is slightly less than that for the Total Ozone Mapping Spectrometer (TOMS), the only other instrument capable of monitoring SO2 from 1979 to 1996. We show an initial comparison to TOMS for part of this eruption, with broadly consistent results. Operating in the infrared (IR), HIRS has the advantage of being able to measure both during the day and at night, and there have frequently been multiple HIRS instruments operated simultaneously for better than daily sampling. If applied to all data from the series of past and future HIRS instruments, this method presents the opportunity to produce a comprehensive and consistent volcanic SO2 time series spanning over 40 years.
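The scheme's Jacobians and covariance matrices are instrument-specific; as a generic illustration, the linear optimal-estimation (maximum a posteriori) update for a linearized forward model y = Kx + error:

```python
import numpy as np

def oe_update(y, K, x_a, S_a, S_e):
    """One linear optimal-estimation (MAP) step:
    x_hat = x_a + S_a K^T (K S_a K^T + S_e)^-1 (y - K x_a),
    where x_a, S_a are the prior state and covariance and S_e the
    measurement-error covariance. For a mildly nonlinear forward model this
    is one Gauss-Newton iteration with K the Jacobian evaluated at x_a."""
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)   # gain matrix
    return x_a + G @ (y - K @ x_a)
```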
Research on the system scheme and experiment for the active laser polarization imaging
NASA Astrophysics Data System (ADS)
Fu, Qiang; Duan, Jin; Zhao, Rui; Li, Zheng; Zhang, Su; Zhan, Juntong; Zhu, Yong; Jiang, Hui-Lin
2015-10-01
Polarization imaging detection adds polarization information to conventional intensity imaging and is widely applied in military, civil, and other fields. The research status and development trends of polarization imaging detection technology are reviewed, a system scheme for active laser polarization imaging detection is put forward, and key technologies such as polarization information detection, optical system design, polarization radiometric calibration, and image fusion are analyzed. On this basis, a detection system was assembled from existing laboratory equipment, and different targets such as wood, metal, and plastic were imaged to demonstrate active polarization imaging detection. The results show that the image contrast of metallic and man-made objects is higher and the polarization effect is stronger, providing a basis for improving the performance of polarization imaging instruments.
Evaluation of hardware costs of implementing PSK signal detection circuit based on "system on chip"
NASA Astrophysics Data System (ADS)
Sokolovskiy, A. V.; Dmitriev, D. D.; Veisov, E. A.; Gladyshev, A. B.
2018-05-01
The article deals with the choice of architecture for the digital signal processing units that implement a PSK signal detection scheme. The effectiveness of each architecture is assessed by the number of shift registers and computational resources required for a system-on-chip implementation. A statistical estimate of the normalized code-sequence offset in the signal synchronization scheme is compared across the hardware block architectures.
Understanding the types of fraud in claims to South African medical schemes.
Legotlo, T G; Mutezo, A
2018-03-28
Medical schemes play a significant role in funding private healthcare in South Africa (SA). However, the sector is negatively affected by a high rate of fraudulent claims. This study aimed to identify the types of fraudulent activities committed in SA medical scheme claims. A cross-sectional qualitative study was conducted, adopting a case study strategy. A sample of 15 employees was purposively selected from a single medical scheme administration company in SA. Semi-structured interviews were conducted to collect data from study participants. A thematic analysis of the data was done using ATLAS.ti software (ATLAS.ti Scientific Software Development, Germany). The study population comprised the 17 companies that administer medical schemes in SA; data were collected from 15 participants selected from the medical scheme administrator chosen as the case study. The study found that medical schemes were defrauded in numerous ways. The perpetrators of this type of fraud include healthcare service providers, medical scheme members, employees, brokers, and syndicates. Medical schemes are mostly defrauded by the submission of false claims by service providers and syndicates. Fraud committed by medical scheme members encompasses the sharing of medical scheme benefits with non-members (card farming) and the non-disclosure of pre-existing conditions at the application stage. The study concluded that perpetrators of fraud have found several ways of defrauding SA medical schemes through claims. Understanding and identifying the types of fraud events facing medical schemes is the first step towards establishing methods to mitigate this risk. Future studies should examine strategies to manage fraudulent medical scheme claims.
Studies of elongation factor Tu in Streptococcus faecium (ATCC 9790)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourbeau, P.P.
1986-01-01
It has been known for over twenty years that elongation factor Tu (Ef-Tu) is one of the proteins involved in protein synthesis in bacteria. Several years ago, it was proposed that Ef-Tu may, in addition, have other structural functions in bacteria. The author's research examined the function of Ef-Tu in Streptococcus faecium. Using the antibiotic kirromycin, which specifically inhibits Ef-Tu function, the effects upon a number of cellular parameters were determined. Inhibition of both protein and RNA synthesis was found to be similar to the effect of chloramphenicol. Using the residual division technique for the determination of cell cycle events with both heterogeneous and sucrose-gradient-fractionated cell populations, a kirromycin-sensitive event was detected between 8 min (Td = 30 min) and 19 min (Td = 175 min) later in the cell cycle than the chloramphenicol-sensitive event. This suggests that kirromycin inhibits a terminal cell cycle event in addition to inhibiting protein synthesis. Purification of Ef-Tu was performed using two different methods: ion exchange and molecular exclusion chromatography, and GDP affinity chromatography. Various schemes were employed to obtain optimum cellular fractionation, allowing for both proper separation of ribosomes from the other cellular fractions and retention of enzymatic activity by Ef-Tu, as determined by a 3H-GDP binding assay. Analysis of the cell cycle of S. faecium using the residual division technique was also performed. In addition, certain cell wall antibiotics were used to determine whether other cell cycle events could be identified using the residual division technique.
WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'
NASA Astrophysics Data System (ADS)
Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne
2015-10-01
Numerical weather modelling has gained considerable attention in the field of hydrology, especially for ungauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the York Flood 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
Syed Ali, M; Vadivel, R; Saravanakumar, R
2018-06-01
This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms, under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables satisfying certain probabilistic failure distributions for every actuator, and a new distribution-based event-triggered fault model that accounts for the transmission delay is proposed. Second, a Takagi-Sugeno (T-S) fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of this study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life application of the benchmark problem. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
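The paper's delay-dependent LMIs are far larger; as a minimal illustration of checking an LMI numerically (here the basic continuous-time Lyapunov condition A^T P + P A < 0), using cvxpy as a stand-in for MATLAB's LMI toolbox:

```python
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -1.5]])   # toy stable system matrix
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),                       # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]        # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)             # pure feasibility
prob.solve()
print(prob.status)   # 'optimal' -> the LMI is feasible, i.e. the system is stable
```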
Immunity-based detection, identification, and evaluation of aircraft sub-system failures
NASA Astrophysics Data System (ADS)
Moncayo, Hever Y.
This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for the detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on nonlinear dynamic inversion and artificial-neural-network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented with three main components. The first component performs detection when one of the considered failures is present in the system. The second component identifies the failure category and classifies it according to the failed element. During the third phase, a general evaluation of the failure is performed, with estimation of the magnitude/severity of the failure and prediction of its effect in reducing the flight envelope of the aircraft. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty-space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. They are shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful for increasing pilot situational awareness and determining automated compensation.
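A hedged sketch of the detection component's flavor, using the classic negative-selection construction (random detectors retained only if they fail to match "self" data); the thesis's AIS uses clustered detectors, richer features, and additional identification and evaluation stages not shown here.

```python
import numpy as np

def train_detectors(self_data, n_detectors, radius, seed=0, max_tries=100_000):
    """Classic negative-selection sketch: keep random detectors only if they
    lie outside the 'self' (normal flight) region. Illustrative only."""
    rng = np.random.default_rng(seed)
    detectors = []
    for _ in range(max_tries):
        if len(detectors) >= n_detectors:
            break
        d = rng.random(self_data.shape[1])   # features normalized to [0, 1]
        if np.min(np.linalg.norm(self_data - d, axis=1)) > radius:
            detectors.append(d)              # does not match self -> keep it
    return np.array(detectors)

def is_failure(sample, detectors, radius):
    """A sample matching any detector (within its radius) is flagged abnormal."""
    return bool(np.min(np.linalg.norm(detectors - sample, axis=1)) <= radius)
```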
Impacts of extreme flooding on riverbank filtration water quality.
Ascott, M J; Lapworth, D J; Gooddy, D C; Sage, R C; Karapanos, I
2016-06-01
Riverbank filtration schemes form a significant component of public water treatment processes on a global level. Understanding the resilience and water quality recovery of these systems following severe flooding is critical for effective water resources management under potential future climate change. This paper assesses the impact of floodplain inundation on the water quality of a shallow aquifer riverbank filtration system and how water quality recovers following an extreme (1 in 17 year, duration >70 days, 7 day inundation) flood event. During the inundation event, riverbank filtrate water quality is dominated by rapid direct recharge and floodwater infiltration (high fraction of surface water, dissolved organic carbon (DOC) >140% baseline values, >1 log increase in micro-organic contaminants, microbial detects and turbidity, low specific electrical conductivity (SEC) <90% baseline, high dissolved oxygen (DO) >400% baseline). A rapid recovery is observed in water quality with most floodwater impacts only observed for 2-3 weeks after the flooding event and a return to normal groundwater conditions within 6 weeks (lower fraction of surface water, higher SEC, lower DOC, organic and microbial detects, DO). Recovery rates are constrained by the hydrogeological site setting, the abstraction regime and the water quality trends at site boundary conditions. In this case, increased abstraction rates and a high transmissivity aquifer facilitate rapid water quality recoveries, with longer term trends controlled by background river and groundwater qualities. Temporary reductions in abstraction rates appear to slow water quality recoveries. Flexible operating regimes such as the one implemented at this study site are likely to be required if shallow aquifer riverbank filtration systems are to be resilient to future inundation events. Development of a conceptual understanding of hydrochemical boundaries and site hydrogeology through monitoring is required to assess the suitability of a prospective riverbank filtration site. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Looking at flood trends with different eyes: the value of a fuzzy flood classification scheme
NASA Astrophysics Data System (ADS)
Sikorska, A. E.; Viviroli, D.; Brunner, M. I.; Seibert, J.
2016-12-01
Natural floods can be governed by several processes such as heavy rainfall or intense snow- or glacier-melt. These processes result in different flood characteristics in terms of flood shape and magnitude. Pooling floods of different types might therefore impair analyses of flood frequencies and trends. Thus, the categorization of flood events into different flood type classes and the determination of their respective frequencies are essential for a better understanding and for the prediction of floods. In reality, however, most flood events are caused by a mix of processes, and a unique determination of one flood type per event often becomes difficult. This study proposes an innovative method for a more reliable categorization of floods according to similarities in flood drivers. The categorization of floods into subgroups relies on a fuzzy decision tree. While the classical (crisp) decision tree allows for the identification of only one flood type per event, the fuzzy approach enables the detection of mixed types. Hence, events are represented as a spectrum of six possible flood types, while a degree of acceptance attributed to each type specifies its importance during event formation. The types considered are flash, short rainfall, long rainfall, snow-melt, rainfall-on-snow, and, in high-altitude watersheds, glacier-melt floods. The fuzzy concept also enables uncertainty, both in the identification of flood processes and in the method itself, to be incorporated into the flood categorization process. We demonstrate, for a set of nine Swiss watersheds and 30 years of observations, that this new concept provides more reliable flood estimates than the classical approach, as it allows for a more dedicated flood prevention technique adapted to a specific flood type.
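A toy sketch of fuzzy degrees of acceptance for a subset of the six flood types; the membership shapes, thresholds, and inputs are invented for illustration and are not the study's calibrated decision tree.

```python
def ramp(x, lo, hi):
    """Fuzzy ramp membership: 0 below lo, 1 above hi, linear in between.
    Thresholds here are illustrative, not the study's calibrated values."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def flood_type_degrees(rain_frac, melt_frac, peak_intensity_mmh):
    """Degrees of acceptance for three of the six flood types; a crisp
    tree would instead force a single winning type per event."""
    flash = ramp(peak_intensity_mmh, 10.0, 30.0)
    return {
        "flash": flash,
        "long_rainfall": rain_frac * (1.0 - flash),
        "snow_melt": melt_frac,
    }

print(flood_type_degrees(0.6, 0.5, 22.0))  # a mixed rain/snow-melt event
```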
Paschalidou, A K; Kassomenos, P A
2016-01-01
Wildfire management is closely linked to robust forecasts of changes in wildfire risk related to meteorological conditions. This link can be bridged either through fire weather indices or through statistical techniques that directly relate atmospheric patterns to wildfire activity. In the present work the COST-733 classification schemes are applied in order to link wildfires in Greece with synoptic circulation patterns. The analysis reveals that the majority of wildfire events can be explained by a small number of specific synoptic circulations, hence reflecting the synoptic climatology of wildfires. All 8 classification schemes used, prove that the most fire-dangerous conditions in Greece are characterized by a combination of high atmospheric pressure systems located N to NW of Greece, coupled with lower pressures located over the very Eastern part of the Mediterranean, an atmospheric pressure pattern closely linked to the local Etesian winds over the Aegean Sea. During these events, the atmospheric pressure has been reported to be anomalously high, while anomalously low 500hPa geopotential heights and negative total water column anomalies were also observed. Among the various classification schemes used, the 2 Principal Component Analysis-based classifications, namely the PCT and the PXE, as well as the Leader Algorithm classification LND proved to be the best options, in terms of being capable to isolate the vast amount of fire events in a small number of classes with increased frequency of occurrence. It is estimated that these 3 schemes, in combination with medium-range to seasonal climate forecasts, could be used by wildfire risk managers to provide increased wildfire prediction accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
A high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen
NASA Astrophysics Data System (ADS)
Luo, ZhiJie; Luo, JianKun; Zhao, WenWen; Cao, Yang; Lin, WeiJie; Zhou, GuoFu
2018-02-01
Electrowetting display technology works by tuning the surface energy of a hydrophobic surface through an applied voltage, based on the electrowetting mechanism. With the rapid development of the electrowetting industry, efficiently assessing the quality of an electrowetting display screen is of great significance. There are two kinds of dead pixels on an electrowetting display screen: in one, the pixel's oil cannot completely cover the display area; in the other, the indium tin oxide wire connecting the pixel to the foil has burned out. In this paper, we propose a high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen. First, we built an aperture ratio-capacitance model based on the electrical characteristics of the electrowetting display. A field-programmable gate array is used as the integrated logic hub of the system for highly reliable and efficient control of the circuit. Dead pixels can be detected and displayed on a PC-based 2D graphical interface in real time. The proposed dead pixel detection scheme has promise for automating electrowetting display experiments.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin
2016-03-01
Current commercialized CAD schemes have high false-positive (FP) detection rates and also correlate highly with radiologists in positive lesion detection. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global-feature-based CAD approach/scheme that can cue a warning sign on cases with a high risk of being positive. In this study, we investigate the possibility of fusing global-feature or case-based scores with the local or lesion-based CAD scores using an adaptive cueing method. We hypothesize that the information from global feature extraction (features extracted from the whole breast regions) differs from, and can provide supplementary information to, the locally extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset with 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adaptively adjust the original CAD-generated detection scores (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with this detected region. Using the adaptive cueing method, better sensitivity results were obtained at lower FP rates (<= 1 FP per image). Namely, increases in sensitivity (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI-based and case-based results, respectively.
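The paper's exact adjustment rule is not given in the abstract; as a hypothetical illustration of adaptive cueing, a simple rule that shifts the lesion-based score up or down according to the global case-based risk score:

```python
import numpy as np

def adaptive_cue(s_org, s_case, alpha=0.3):
    """Hypothetical fusion rule: shift the lesion-based CAD score (s_org) by
    the global case-based risk score (s_case), both assumed in [0, 1]. The
    weighting alpha and the linear form are illustrative assumptions."""
    return float(np.clip(s_org + alpha * (s_case - 0.5), 0.0, 1.0))

print(adaptive_cue(0.55, 0.9))  # 0.67: a high-risk case boosts the detection
```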
Modeling and experimental verification of single event upsets
NASA Technical Reports Server (NTRS)
Fogarty, T. N.; Attia, J. O.; Kumar, A. A.; Tang, T. S.; Lindner, J. S.
1993-01-01
The research performed and the results obtained at the Laboratory for Radiation Studies, Prairie View A&M University and Texas A&I University, on the problem of single event upsets, the various schemes employed to limit them, and the effects they have on reliability and fault tolerance at the system level, for example in robotic systems, are reviewed.
Surface Management System Departure Event Data Analysis
NASA Technical Reports Server (NTRS)
Monroe, Gilena A.
2010-01-01
This paper presents a data analysis of the Surface Management System (SMS) performance for departure events, including push-back and runway departure events. The paper focuses on the detection performance, or the ability to detect departure events, as well as the prediction performance of SMS. The results detail a modest overall detection performance for push-back events and a significantly high overall detection performance for runway departure events. The overall detection performance of SMS for push-back events is approximately 55%. The overall detection performance of SMS for runway departure events nears 100%. This paper also presents the overall SMS prediction performance for runway departure events, as well as the timeliness of the Aircraft Situation Display for Industry data source for SMS predictions.
Brodziak, Andrzej
2015-01-01
The recent plane crash caused by the pilot has increased interest in the possibility of a medical examination that would be able to detect the intention of committing suicide. The development of such a diagnostic procedure is important not only for the prevention of events in civil and military aviation, but also because of the increase in the incidence of various suicide terrorist acts. The author expresses his opinion on the nature of such an examination, based on his experience of working in an Acute Poisoning Treatment Centre. The Centre admits about 1 000 patients per year who have been rescued after suicide attempts made by the intake of a toxic substance. He discusses the structured interview scheme that was developed; however, he believes that the ability to detect existing suicidal ideation was significantly improved as a result of the formulation of the Interpersonal Theory of Suicide, which distinguishes four stages: 1) passive suicidal ideation, 2) suicidal desire, 3) suicidal intent, 4) lethal and near-lethal suicide attempts. Next, the author presents his own prediction of the development of methods enabling the objective detection of a "suicidal intent" (plan). In his opinion, such an examination would, in the future, be based on brain imaging techniques that could detect the specific configuration of a person's brain neural circuits representing an existing plan of suicide. The real ability to detect such a configuration of neural circuits can be predicted on the basis of new, quoted results of neurophysiological studies.
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
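A minimal sketch of the hybrid L1/L2-regularized volume-change inversion, using scikit-learn's ElasticNet as a stand-in solver; G is a Green's-function matrix mapping volume changes to surface displacements, and the boundary-element and uniform-pressure steps of the paper are omitted.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def sparse_volume_change(G, d, l1_ratio=0.7, alpha=1e-3):
    """Solve min ||G m - d||^2 + penalties, mixing an L1 term (promotes a
    well-localized, sparse source: melt bodies as 'outliers' in the crust)
    with an L2 term (stabilizes amplitudes). alpha and l1_ratio are
    illustrative regularization settings, not the paper's values."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                       fit_intercept=False, max_iter=50_000)
    model.fit(G, d)          # d: observed displacements, G: Green's functions
    return model.coef_       # volume-change distribution, mostly zeros
```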
Fast retrievals of tropospheric carbonyl sulfide with IASI
NASA Astrophysics Data System (ADS)
Vincent, R. Anthony; Dudhia, Anu
2017-02-01
Iterative retrievals of trace gases, such as carbonyl sulfide (OCS), from satellites can be exceedingly slow. The algorithm may even fail to keep pace with data acquisition such that analysis is limited to local events of special interest and short time spans. With this in mind, a linear retrieval scheme was developed to estimate total column amounts of OCS at a rate roughly 104 times faster than a typical iterative retrieval. This scheme incorporates two concepts not utilized in previously published linear estimates. First, all physical parameters affecting the signal are included in the state vector and accounted for jointly, rather than treated as effective noise. Second, the initialization point is determined from an ensemble of atmospheres based on comparing the model spectra to the observations, thus improving the linearity of the problem. All of the 2014 data from the Infrared Atmospheric Sounding Interferometer (IASI), instruments A and B, were analysed and showed spatial features of OCS total columns, including depletions over tropical rainforests, seasonal enhancements over the oceans, and distinct OCS features over land. Error due to assuming linearity was found to be on the order of 11 % globally for OCS. However, systematic errors from effects such as varying surface emissivity and extinction due to aerosols have yet to be robustly characterized. Comparisons to surface volume mixing ratio in situ samples taken by NOAA show seasonal correlations greater than 0.7 for five out of seven sites across the globe. Furthermore, this linear scheme was applied to OCS, but may also be used as a rapid estimator of any detectable trace gas using IASI or similar nadir-viewing instruments.
Automatic identification of epileptic seizures from EEG signals using linear programming boosting.
Hassan, Ahnaf Rashik; Subasi, Abdulhamit
2016-11-01
Computerized epileptic seizure detection is essential for expediting epilepsy diagnosis and research and for assisting medical professionals. Moreover, the implementation of an epilepsy monitoring device that has low power and is portable requires a reliable and successful seizure detection scheme. In this work, the problem of automated epileptic seizure detection using single-channel EEG signals has been addressed. At first, segments of EEG signals are decomposed using a newly proposed signal processing scheme, namely complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). Six spectral moments are extracted from the CEEMDAN mode functions, and train and test matrices are formed afterward. These matrices are fed into the classifier to identify epileptic seizures from EEG signal segments. In this work, we implement an ensemble-learning-based machine learning algorithm, namely linear programming boosting (LPBoost), to perform classification. The efficacy of spectral features in the CEEMDAN domain is validated by graphical and statistical analyses. The performance of CEEMDAN is compared to those of its predecessors to further inspect its suitability. The effectiveness and appropriateness of LPBoost are demonstrated as opposed to the commonly used classification models. Resubstitution and 10-fold cross-validation error analyses confirm the superior algorithm performance of the proposed scheme. The algorithmic performance of our epilepsy seizure identification scheme is also evaluated against state-of-the-art works in the literature. Experimental outcomes manifest that the proposed seizure detection scheme performs better than the existing works in terms of accuracy, sensitivity, specificity, and Cohen's Kappa coefficient. It can be anticipated that owing to its use of only one channel of EEG signal, the proposed method will be suitable for device implementation, eliminate the onus on clinicians of analyzing a large bulk of data manually, and expedite epilepsy diagnosis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
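A sketch of the decomposition-plus-feature step, assuming the third-party PyEMD package (installed as EMD-signal) for CEEMDAN; the sampling rate shown is that of the widely used Bonn EEG dataset (an assumption, not necessarily the paper's data), and only the first three spectral moments are computed for brevity rather than the paper's six.

```python
import numpy as np
from PyEMD import CEEMDAN   # pip install EMD-signal (assumed dependency)

def spectral_moments(x, fs, n_moments=3):
    """First few spectral moments of the normalized power spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    p = psd / psd.sum()
    return [float(np.sum(p * f ** k)) for k in range(1, n_moments + 1)]

def ceemdan_features(segment, fs=173.61):
    """Decompose one EEG segment into mode functions and collect spectral
    moments from each, forming one feature row for the classifier."""
    imfs = CEEMDAN()(segment)            # intrinsic mode functions
    return np.hstack([spectral_moments(imf, fs) for imf in imfs])
```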
CAD scheme for detection of hemorrhages and exudates in ocular fundus images
NASA Astrophysics Data System (ADS)
Hatanaka, Yuji; Nakagawa, Toshiaki; Hayashi, Yoshinori; Mizukusa, Yutaka; Fujita, Akihiro; Kakogawa, Masakatsu; Kawase, Kazuhide; Hara, Takeshi; Fujita, Hiroshi
2007-03-01
This paper describes a method for detecting hemorrhages and exudates in ocular fundus images. The detection of hemorrhages and exudates is important in order to diagnose diabetic retinopathy. Diabetic retinopathy is one of the most significant factors contributing to blindness, and early detection and treatment are important. In this study, hemorrhages and exudates were automatically detected in fundus images without using fluorescein angiograms. Subsequently, the blood vessel regions incorrectly detected as hemorrhages were eliminated by first examining the structure of the blood vessels and then evaluating the length-to-width ratio. Finally, the false positives were eliminated by checking the following features extracted from candidate images: the number of pixels, contrast, 13 features calculated from the co-occurrence matrix, two features based on gray-level difference statistics, and two features calculated from the extrema method. The sensitivity of detecting hemorrhages in the fundus images was 85% and that of detecting exudates was 77%. Our fully automated scheme could accurately detect hemorrhages and exudates.
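To illustrate the co-occurrence-matrix part of the false-positive elimination step, the sketch below computes a few Haralick-style texture properties for one candidate patch with scikit-image; it covers only a subset of the 13 co-occurrence features the paper lists, and the distances, angles, and property choices are assumptions.

```python
# Hedged sketch: co-occurrence (GLCM) texture features for one
# candidate lesion patch. Older scikit-image releases spell these
# functions greycomatrix/greycoprops.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch):
    """patch: 2D uint8 array around a candidate hemorrhage/exudate."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    return {p: float(graycoprops(glcm, p).mean()) for p in props}
```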
Xu, Zhezhuang; Liu, Guanglun; Yan, Haotian; Cheng, Bin; Lin, Feilong
2017-01-01
In wireless sensor and actor networks, when an event is detected, the sensor node needs to transmit an event report to inform the actor. Since the actor moves in the network to execute missions, its location is always unavailable to the sensor nodes. A popular solution is a search strategy that can forward the data to a node without its location information. However, most existing works have not considered the mobility of the node, and thus generate significant energy consumption or transmission delay. In this paper, we propose the trail-based search (TS) strategy that takes advantage of the actor's mobility to improve the search efficiency. The main idea of TS is that, when the actor moves in the network, it can leave a trail composed of continuous footprints. The search packet with the event report is transmitted in the network to search for the actor or its footprints. Once an effective footprint is discovered, the packet will be forwarded along the trail until it is received by the actor. Moreover, we derive the condition that guarantees trail connectivity, and propose a redundancy reduction scheme based on TS (TS-R) to reduce the nontrivial transmission redundancy that is generated by the trail. Theoretical and numerical analysis is provided to demonstrate the efficiency of TS. Compared with the well-known expanding ring search (ERS), TS significantly reduces the energy consumption and search delay. PMID:29077017
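The forwarding rule can be sketched as a toy graph search: flood the event report outward until either the actor or an effective footprint is reached, then follow the trail. Everything below (class names, hop limit) is illustrative; the paper's trail-connectivity condition and the TS-R redundancy reduction are not modeled.

```python
# Toy sketch of the TS forwarding rule on an abstract node graph.
from dataclasses import dataclass, field

@dataclass
class Node:
    nid: int
    is_actor: bool = False
    next_on_trail: "Node | None" = None   # set where the actor left a footprint
    neighbors: list = field(default_factory=list)

def trail_search(start: Node, max_hops: int = 50) -> bool:
    """Flood from the detecting node; switch to trail-following as soon
    as an effective footprint is discovered."""
    frontier, visited = [start], {start.nid}
    for _ in range(max_hops):
        nxt = []
        for n in frontier:
            if n.is_actor:
                return True                   # report delivered by flooding
            if n.next_on_trail is not None:   # footprint found: follow trail
                m = n
                while m.next_on_trail is not None:
                    m = m.next_on_trail
                return m.is_actor
            for v in n.neighbors:
                if v.nid not in visited:
                    visited.add(v.nid)
                    nxt.append(v)
        frontier = nxt
    return False
```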
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, G. Barratt; Womack, Caroline C.; Jiang, Jun
2015-04-14
Millimeter-wave detected, millimeter-wave optical double resonance (mmODR) spectroscopy is a powerful tool for the analysis of dense, complicated regions in the optical spectra of small molecules. The availability of cavity-free microwave and millimeter-wave spectrometers with frequency-agile generation and detection of radiation (required for chirped-pulse Fourier-transform spectroscopy) opens up new schemes for double resonance experiments. We demonstrate a multiplexed population labeling scheme for rapid acquisition of double resonance spectra, probing multiple rotational transitions simultaneously. We also demonstrate a millimeter-wave implementation of the coherence-converted population transfer scheme for background-free mmODR, which provides a ∼10-fold sensitivity improvement over the population labeling scheme. We analyze perturbations in the C̃ state of SO₂, and we rotationally assign a b₂ vibrational level at 45 328 cm⁻¹ that borrows intensity via a c-axis Coriolis interaction. We also demonstrate the effectiveness of our multiplexed mmODR scheme for rapid acquisition and assignment of three predissociated vibrational levels of the C̃ state of SO₂ between 46 800 and 47 650 cm⁻¹.
Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E
2017-10-01
In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed by solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.
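A common form of such a triggering rule, under which a measurement is sent only when it deviates sufficiently from the last transmitted one, is shown below; Ω ≻ 0 and σ are design parameters, and this generic form is an assumption rather than necessarily the paper's exact condition.

```latex
% Generic event-triggered transmission rule (assumed form): the next
% transmission instant k_{i+1} after k_i is
k_{i+1} = \min\Big\{ k > k_i \;:\;
  \big(y_k - y_{k_i}\big)^{\top} \Omega \,\big(y_k - y_{k_i}\big)
  \;>\; \sigma\, y_k^{\top} \Omega\, y_k \Big\}
```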
MIMO transmit scheme based on morphological perceptron with competitive learning.
Valente, Raul Ambrozio; Abrão, Taufik
2016-08-01
This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as a decision rule in the MIMO detection stage. The proposed MIMO transmission scheme is able to achieve double spectral efficiency; hence, in each time slot the receiver decodes two symbols at a time instead of one as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its polynomial complexity in the modulation order, which becomes linear when the data stream length is greater than the modulation order. The performance of the proposed scheme is compared to that of traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. The proposed scheme is also evaluated in a scenario with variable channel information along the frame. Numerical results show that the diversity gain of the space-time-coded Alamouti scheme is partially lost, which slightly reduces the bit-error-rate (BER) performance of the proposed MP/CL-NN MIMO scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Shi, J. J.; Tao, W.-K.; Matsui, T.; Cifelli, R.; Huo, A.; Lang, S.; Tokay, A.; Peters-Lidard, C.; Jackson, G.; Rutledge, S.;
2009-01-01
One of the grand challenges of the Global Precipitation Measurement (GPM) mission is to improve cold season precipitation measurements in middle and high latitudes through the use of high-frequency passive microwave radiometry. For this, the Weather Research and Forecasting (WRF) model with the Goddard microphysics scheme is coupled with a satellite data simulation unit (WRF-SDSU) that has been developed to facilitate over-land snowfall retrieval algorithms by providing a virtual cloud library and microwave brightness temperature (Tb) measurements consistent with the GPM Microwave Imager (GMI). This study tested the Goddard cloud microphysics scheme in WRF for two snowstorm events, a lake-effect event and a synoptic event, that occurred between 20 and 22 January 2007 over the Canadian CloudSat/CALIPSO Validation Project (C3VP) site in Ontario, Canada. The 24-h accumulated snowfall predicted by the WRF model with the Goddard microphysics was comparable to the accumulated snowfall observed by the ground-based radar for both events. The model correctly predicted the onset and ending of both snow events at the CARE site. WRF simulations capture the basic cloud properties as seen by the ground-based radar and satellite (i.e., CloudSat, AMSU-B) observations, as well as the observed cloud streak organization in the lake-effect event. This latter result reveals that WRF was able to capture the cloud macro-structure reasonably well.
Search Radar Track-Before-Detect Using the Hough Transform.
1995-03-01
…improved target detection scheme, applicable to search radars, using the Hough transform image processing technique. The system concept involves a track-before-detect processing method which allows previous data to help in target detection. The technique provides many advantages compared to…
NASA Astrophysics Data System (ADS)
Zhou, C.; Zhang, X.; Gong, S.
2015-12-01
A comprehensive aerosol-cloud-precipitation interaction (ACI) scheme has been developed under the CMA chemical weather modeling system GRAPES/CUACE. The cloud condensation nuclei (CCN), calculated at each time step by a sectional aerosol activation scheme from the size and mass information in CUACE and the thermodynamic and humidity states of the weather model GRAPES, are fed online and interactively into a two-moment cloud scheme (WDM6) and a convective parameterization to drive the cloud physics and precipitation formation processes. The modeling system has been applied to study ACI for January 2013, when several persistent haze-fog events and eight precipitation events occurred. The results show that interactive aerosols with WDM6 in GRAPES/CUACE clearly increase the total cloud water, liquid water content, and cloud droplet number concentrations, while decreasing the mean diameter of cloud droplets, with varying magnitudes of these changes in each case and region. These interactive microphysical properties of clouds improve the calculation of their collection growth rates in some regions and hence the precipitation rate and distributions in the model, showing 24% to 48% enhancements of the threat score (TS) for 6-h precipitation in almost all regions. The interactive aerosols with WDM6 also reduce the regional mean temperature bias by 3 °C during certain precipitation events, although the monthly mean bias is only reduced by about 0.3 °C.
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates of the transmission rate in flocks and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock-level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected and by the number of contaminated table eggs brought onto the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs every 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Li, Lei; Hu, Jianhao
2010-12-01
Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M. U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism, and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected through the redundant relationship of the residues at the correction module, which is allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance of the RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease in the soft error rate (SER) compared to non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10⁻¹² to 10⁻¹⁷ when the datapath has 10⁶ processing steps. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS can reduce the operational complexity of the datapath.
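Setting aside the publication issue, the detection mechanism of an RRNS itself is easy to illustrate: a value is represented by residues modulo pairwise-coprime moduli, and redundant residues expose a corrupted channel. The toy below uses illustrative moduli and omits the hardware mapping, module isolation, and correction logic.

```python
# Toy RRNS: reconstruct from the information moduli via the Chinese
# Remainder Theorem and check consistency against the redundant
# residues. Moduli are illustrative, not the paper's.
from math import prod

M_INFO = [7, 9, 11]        # information moduli (pairwise coprime)
M_RED = [13, 16]           # redundant moduli
MODULI = M_INFO + M_RED

def encode(x):
    return [x % m for m in MODULI]

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

def has_error(residues):
    """A single upset in any residue channel makes the CRT value of the
    information residues disagree with at least one redundant residue."""
    x = crt(residues[: len(M_INFO)], M_INFO)
    return any(x % m != r for m, r in zip(M_RED, residues[len(M_INFO):]))

# Example: word = encode(123); word[1] ^= 0b100; has_error(word) -> True
```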
A systematic review of the safety of kava extract in the treatment of anxiety.
Stevinson, Clare; Huntley, Alyson; Ernst, Edzard
2002-01-01
This paper systematically reviews the clinical evidence relating to the safety of extracts of the herbal anxiolytic kava (Piper methysticum). Literature searches were conducted in four electronic databases and the reference lists of all papers located were checked for further relevant publications. Information was also sought from the spontaneous reporting schemes of the WHO and national drug safety bodies and ten manufacturers of kava preparations were contacted. Data from short-term post-marketing surveillance studies and clinical trials suggest that adverse events are, in general, rare, mild and reversible. However, published case reports indicate that serious adverse events are possible including dermatological reactions, neurological complications and, of greatest concern, liver damage. Spontaneous reporting schemes also suggest that the most common adverse events are mild, but that serious ones occur. Controlled trials suggest that kava extracts do not impair cognitive performance and vigilance or potentiate the effects of central nervous system depressants. However, a possible interaction with benzodiazepines has been reported. It is concluded that when taken as a short-term monotherapy at recommended doses, kava extracts appear to be well tolerated by most users. Serious adverse events have been reported and further research is required to determine the nature and frequency of such events.
A 300 GHz collective scattering diagnostic for low temperature plasmas.
Hardin, Robert A; Scime, Earl E; Heard, John
2008-10-01
A compact and portable 300 GHz collective scattering diagnostic employing a homodyne detection scheme has been constructed and installed on the hot helicon experiment (HELIX). Verification of the homodyne detection scheme was accomplished with a rotating grooved aluminum wheel to Doppler shift the interaction beam. The HELIX chamber geometry and collection optics allow measurement of scattering angles ranging from 60 degrees to 90 degrees. Artificially driven ion-acoustic waves are also being investigated as a proof-of-principle test for the diagnostic system.
Piro, Benoit; Shi, Shihui; Reisberg, Steeve; Noël, Vincent; Anquetin, Guillaume
2016-02-29
We review here the most frequently reported targets among electrochemical immunosensors and aptasensors: antibiotics, bisphenol A, cocaine, ochratoxin A, and estradiol. In each case, the immobilization procedures are described, as well as the transduction schemes and the limits of detection. It is shown that limits of detection are generally two to three orders of magnitude lower for immunosensors than for aptasensors, due to the higher affinities of antibodies. No significant progress has been made in improving these affinities, but transduction schemes have been improved instead, which has led to a steady improvement of the limits of detection, by about five orders of magnitude over the last 10 years. This progress depends on the target, however.
Detection of Copy-Rotate-Move Forgery Using Zernike Moments
NASA Astrophysics Data System (ADS)
Ryu, Seung-Jin; Lee, Min-Jeong; Lee, Heung-Kyu
As forgeries have become popular, the importance of forgery detection has much increased. Copy-move forgery, one of the most commonly used methods, copies a part of the image and pastes it into another part of the same image. In this paper, we propose a detection method for copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant against rotation, the proposed method can detect a forged region even when it has been rotated. Our scheme is also resilient to intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is appropriate for identifying regions forged by copy-rotate-move forgery.
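A minimal sketch of the per-block descriptor is given below, using the mahotas implementation of Zernike moments; the block size and polynomial degree are illustrative, and the matching stage (e.g., lexicographically sorting descriptors to find near-duplicates) is omitted.

```python
# Sketch: rotation-invariant descriptor for one image block. The
# magnitudes of Zernike moments are invariant to rotation, so a
# copied-and-rotated region yields a near-identical descriptor.
import numpy as np
import mahotas

def block_descriptor(block, degree=5):
    """block: square grayscale patch (2D array)."""
    radius = block.shape[0] // 2
    return mahotas.features.zernike_moments(block.astype(np.float64),
                                            radius, degree=degree)
```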
Optimum detection of tones transmitted by a spacecraft
NASA Technical Reports Server (NTRS)
Simon, M. K.; Shihabi, M. M.; Moon, T.
1995-01-01
The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.
NASA Technical Reports Server (NTRS)
Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.
1992-01-01
In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for real-time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real-time identification and hypothesis testing for actuation, sensor, and performance degradation faults.
Detection and identification of concealed weapons using matrix pencil
NASA Astrophysics Data System (ADS)
Adve, Raviraj S.; Thayaparan, Thayananthan
2011-06-01
The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing the use of an effective approach to obtain the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, hence providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
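The Matrix Pencil step is compact enough to sketch: form a Hankel matrix from the sampled response, truncate its SVD to the model order for noise robustness, and obtain the poles from a small eigenvalue problem. The pencil parameter and model order below are the usual tuning knobs; this is a generic implementation, not the authors' code.

```python
# Generic Matrix Pencil sketch: estimate complex poles (damping and
# resonant frequency) of a sampled transient response.
import numpy as np

def matrix_pencil(y, M, dt=1.0, L=None):
    """y: sampled late-time response; M: model order; returns poles s_k
    where y[n] ~ sum_k a_k * exp(s_k * n * dt)."""
    N = len(y)
    L = L if L is not None else N // 3                    # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel matrix
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh[:M].conj().T                # dominant right singular vectors
    V1, V2 = V[:-1, :], V[1:, :]       # shift-invariant sub-matrices
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)
    return np.log(z) / dt              # s_k = (ln|z| + j*arg z) / dt
```

Once the poles are known, the amplitudes follow from a linear least-squares fit, and their relative magnitudes provide the confidence measure mentioned above.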
Gas detection by correlation spectroscopy employing a multimode diode laser.
Lou, Xiutao; Somesfalean, Gabriel; Zhang, Zhiguo
2008-05-01
A gas sensor based on the gas-correlation technique has been developed using a multimode diode laser (MDL) in a dual-beam detection scheme. Measurement of CO₂ mixed with CO as an interfering gas is successfully demonstrated using a 1570 nm tunable MDL. Despite overlapping absorption spectra and occasional mode hops, the interfering signals can be effectively excluded by a statistical procedure including correlation analysis and outlier identification. The gas concentration is retrieved from several pair-correlated signals by a linear-regression scheme, yielding a reliable and accurate measurement. This demonstrates the utility of unsophisticated MDLs as novel light sources for gas detection applications.
Heralded quantum steering over a high-loss channel
Weston, Morgan M.; Slussarenko, Sergei; Chrzanowski, Helen M.; Wollmann, Sabine; Shalm, Lynden K.; Verma, Varun B.; Allman, Michael S.; Nam, Sae Woo; Pryde, Geoff J.
2018-01-01
Entanglement is the key resource for many long-range quantum information tasks, including secure communication and fundamental tests of quantum physics. These tasks require robust verification of shared entanglement, but performing it over long distances is presently technologically intractable because the loss through an optical fiber or free-space channel opens up a detection loophole. We design and experimentally demonstrate a scheme that verifies entanglement in the presence of at least 14.8 ± 0.1 dB of added loss, equivalent to approximately 80 km of telecommunication fiber. Our protocol relies on entanglement swapping to herald the presence of a photon after the lossy channel, enabling event-ready implementation of quantum steering. This result overcomes the key barrier in device-independent communication under realistic high-loss scenarios and in the realization of a quantum repeater. PMID:29322093
Accounting for Effects of Orography in LDAS Precipitation Forcing Data
NASA Astrophysics Data System (ADS)
Schaake, J.; Higgins, W.; Cong, S.; Shi, W.; Duan, Q.; Yarosh, E.
2001-05-01
Precipitation analysis procedures that are widely used to make gridded precipitation estimates do not work well in mountainous areas because the gage density is too sparse relative to the spatial frequency content of the actual precipitation field. Moreover, in the western U.S. most of the precipitation observations are at low elevations and may not even detect the occurrence of storms at high elevations. Although there are indeed significant limits to how accurately actual fields of orographic precipitation can be estimated from gage data alone, it is possible to make estimates for each time period that, over a period of time, have a climatology that should approximate the true climatology of the actual events. Analysis schemes that use the PRISM precipitation climatology to aid the precipitation analysis are being tested. The results of these tests will be presented.
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate the hydrological and AR model parameters. The performance of the five schemes is compared in a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
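The back-fitting idea of scheme V can be demonstrated with a fully self-contained toy: alternate between recalibrating a one-parameter "hydrological" model (a linear reservoir, fit here by grid search) and an AR(1) error model. All modeling choices below are illustrative stand-ins for the paper's actual models and calibration algorithm.

```python
# Self-contained toy of the scheme V back-fitting loop.
import numpy as np

def reservoir(rain, k):
    """Toy linear reservoir with recession constant k."""
    q = np.zeros_like(rain, dtype=float)
    for t in range(1, len(rain)):
        q[t] = k * q[t - 1] + (1.0 - k) * rain[t]
    return q

def backfit(rain, obs, n_rounds=10):
    k, phi = 0.5, 0.0
    for _ in range(n_rounds):
        # 1) recalibrate the hydrological parameter given the AR model:
        #    minimize the variance of the one-step AR-corrected error
        def loss(kk):
            e = obs - reservoir(rain, kk)
            return float(np.mean((e[1:] - phi * e[:-1]) ** 2))
        k = min(np.linspace(0.01, 0.99, 99), key=loss)
        # 2) recalibrate the AR(1) coefficient on the new residuals
        e = obs - reservoir(rain, k)
        phi = float(e[:-1] @ e[1:] / (e[:-1] @ e[:-1]))
    return k, phi
```

Each pass reuses the other component's latest estimate, which is why the strategy tends to overcorrect less than inferring both parameter sets at once.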
SETIBURST: A Robotic, Commensal, Realtime Multi-science Backend for the Arecibo Telescope
NASA Astrophysics Data System (ADS)
Chennamangalam, Jayanth; MacMahon, David; Cobb, Jeff; Karastergiou, Aris; Siemion, Andrew P. V.; Rajwade, Kaustubh; Armour, Wes; Gajjar, Vishal; Lorimer, Duncan R.; McLaughlin, Maura A.; Werthimer, Dan; Williams, Christopher
2017-02-01
Radio astronomy has traditionally depended on observatories allocating time to observers for exclusive use of their telescopes. The disadvantage of this scheme is that the data thus collected is rarely used for other astronomy applications, and in many cases, is unsuitable. For example, properly calibrated pulsar search data can, with some reduction, be used for spectral line surveys. A backend that supports plugging in multiple applications to a telescope to perform commensal data analysis will vastly increase the science throughput of the facility. In this paper, we present “SETIBURST,” a robotic, commensal, realtime multi-science backend for the 305 m Arecibo Telescope. The system uses the 1.4 GHz, seven-beam Arecibo L-band Feed Array (ALFA) receiver whenever it is operated. SETIBURST currently supports two applications: SERENDIP VI, a SETI spectrometer that is conducting a search for signs of technological life, and ALFABURST, a fast transient search system that is conducting a survey of fast radio bursts (FRBs). Based on the FRB event rate and the expected usage of ALFA, we expect 0-5 FRB detections over the coming year. SETIBURST also provides the option of plugging in more applications. We outline the motivation for our instrumentation scheme and the scientific motivation of the two surveys, along with their descriptions and related discussions.
Multicolor Fluorescence Detection for Droplet Microfluidics Using Optical Fibers
Cole, Russell H.; Gartner, Zev J.; Abate, Adam R.
2016-01-01
Fluorescence assays are the most common readouts used in droplet microfluidics due to their bright signals and fast time response. Applications such as multiplex assays, enzyme evolution, and molecular biology enhanced cell sorting require the detection of two or more colors of fluorescence. Standard multicolor detection systems that couple free space lasers to epifluorescence microscopes are bulky, expensive, and difficult to maintain. In this paper, we describe a scheme to perform multicolor detection by exciting discrete regions of a microfluidic channel with lasers coupled to optical fibers. Emitted light is collected by an optical fiber coupled to a single photodetector. Because the excitation occurs at different spatial locations, the identity of emitted light can be encoded as a temporal shift, eliminating the need for more complicated light filtering schemes. The system has been used to detect droplet populations containing four unique combinations of dyes and to detect sub-nanomolar concentrations of fluorescein. PMID:27214249
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Lu, Xianglan; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Li, Shibo; Liu, Hong; Zheng, Bin
2016-03-01
Automated high throughput scanning microscopy is a fast developing screening technology used in cytogenetic laboratories for the diagnosis of leukemia or other genetic diseases. However, one of the major challenges of using this new technology is how to efficiently detect the analyzable metaphase chromosomes during the scanning process. The purpose of this investigation is to develop a computer aided detection (CAD) scheme based on deep learning technology that can identify metaphase chromosomes with high accuracy. The CAD scheme includes an eight-layer neural network. The first six layers form an automatic feature extraction module with an architecture of three convolution-max-pooling layer pairs containing 30, 20, and 20 feature maps, respectively. The seventh and eighth layers form a multilayer perceptron (MLP) based classifier, which is used to identify the analyzable metaphase chromosomes. The performance of the new CAD scheme was assessed by the receiver operating characteristic (ROC) method. A total of 150 regions of interest (ROIs) were selected to test the performance of our new CAD scheme. Each ROI contains either an interphase cell or metaphase chromosomes. The results indicate that the new scheme is able to achieve an area under the ROC curve (AUC) of 0.886 ± 0.043. This investigation demonstrates that applying a deep learning technique may significantly improve the accuracy of metaphase chromosome detection using scanning microscopic imaging technology in the future.
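The described architecture maps directly onto a few lines of Keras; the kernel sizes, input resolution, MLP width, and training settings are not given in the abstract and are assumed here for illustration.

```python
# Sketch of the described eight-layer network: three conv + max-pool
# pairs with 30/20/20 feature maps feeding a small MLP classifier
# (metaphase vs. interphase). Hyperparameters are assumptions.
from tensorflow.keras import layers, models

def build_cad_model(input_shape=(64, 64, 1)):
    m = models.Sequential([
        layers.Conv2D(30, (5, 5), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(20, (5, 5), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(20, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),   # MLP classifier head
        layers.Dense(1, activation='sigmoid'),
    ])
    m.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
    return m
```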
The December 2008 flood event in Rome: Was it really an extreme event?
NASA Astrophysics Data System (ADS)
Lastoria, B.; Mariani, S.; Casaioli, M.; Bussettini, M.
2009-04-01
In mid December 2008, Italy suffered bad weather, with heavy snowfall blanketing the north and strong winds and downpours pelting the centre-south. In particular, between 10 and 12 December, intense precipitation struck the Tyrrhenian side of the peninsula, inducing a flood event on the Tiber river and its tributary, the Aniene, which captured the attention of the national and international media. The relevance of the event stemmed from the actual damage that occurred in several zones of the Rome area, in particular due to the downpours, and from the damage that would have occurred had the Tiber overflowed its banks. The event, initially considered extreme, was indeed severe but not as exceptional as shown by the meteo-hydrological post-event analysis. The peak water level of 12.55 m, recorded on 13 December at 1:30 a.m. (local time) at the Ripetta station, situated along the Tiber in the centre of Rome, was higher than those observed during the last ten years (which at most reached 11.41 m in December 2005). However, it did not reach the historical maximum of 16.90 m observed in 1937. Moreover, on the basis of the Ripetta historical series, such a level is associated with an ordinary flood event. Even though the flood was ordinary, a state of emergency was declared by Rome's Mayor, since the event caused severe damage by disrupting flight and train services, blocking off major roads leading into Rome, flooding underpasses, and sealing off industrial activities sited in the flooded areas, in particular near the confluence of the Aniene with the Tiber. In addition, hundreds of people were evacuated, and a woman died in her car, which was submerged by a wave of water and mud in an underpass. Given these premises, the present work examines the relation between a severe, but not extraordinary, event and the considerable damage that occurred as a consequence. First, the meteorological evolution of the event, as modelled by the Hydro-Meteo-Marine forecasting system (Sistema Idro-Meteo-Mare, SIMM), is considered. SIMM, operational since 2000, is an integrated meteo-marine forecasting chain formed by a cascade of four numerical models, telescoping from the Mediterranean basin to the Venice Lagoon. In this operational integrated system, the meteorological model is the parallel version of the BOlogna Limited Area Model (BOLAM). In this study, BOLAM was run in three configurations: the operational one using the Kuo cumulus parameterisation scheme, an intermediate version employing the more advanced Kain-Fritsch convection scheme, and a fully updated version including a more advanced advection scheme, explicit advection of five hydrometeors, and state-of-the-art parameterization schemes for radiation, convection, boundary-layer turbulence and soil processes. The event that actually occurred is then described by means of both the rain and water-level gauges available over the Tiber river basin and EUMETSAT satellites. Ground-based observations were compared with historical data in order to evaluate the frequency and the relative magnitude of the December 2008 flood event. The last part of the work is dedicated to the description of the damage in terms of three different aspects: spatial distribution, type, and entity of the damage. Considerations about the vulnerability of the areas hit hardest and the predictability of the damage are also addressed.
A Computational Geometry Approach to Automated Pulmonary Fissure Segmentation in CT Examinations
Pu, Jiantao; Leader, Joseph K; Zheng, Bin; Knollmann, Friedrich; Fuhrman, Carl; Sciurba, Frank C; Gur, David
2010-01-01
Identification of pulmonary fissures, which form the boundaries between the lobes in the lungs, may be useful during clinical interpretation of CT examinations to assess the early presence and characterization of manifestation of several lung diseases. Motivated by the unique nature of the surface shape of pulmonary fissures in three-dimensional space, we developed a new automated scheme using computational geometry methods to detect and segment fissures depicted on CT images. After geometric modeling of the lung volume using the Marching Cubes algorithm, Laplacian smoothing is applied iteratively to enhance pulmonary fissures by depressing non-fissure structures while smoothing the surfaces of lung fissures. Next, an Extended Gaussian Image based procedure is used to locate the fissures in a statistical manner that approximates the fissures using a set of plane "patches." This approach has several advantages, such as independence from anatomic knowledge of the lung structure except the surface shape of fissures, limited sensitivity to other lung structures, and ease of implementation. The scheme performance was evaluated by two experienced thoracic radiologists using a set of 100 images (slices) randomly selected from 10 screening CT examinations. In this preliminary evaluation, 98.7% and 94.9% of scheme-segmented fissure voxels are within 2 mm of the fissures marked independently by the two radiologists in the testing image dataset. Using the scheme-detected fissures as reference, 89.4% and 90.1% of manually marked fissure points have a distance ≤ 2 mm to the reference, suggesting possible under-segmentation by the scheme. The case-based RMS (root-mean-square) distances ("errors") between our scheme and the radiologists ranged from 1.48±0.92 to 2.04±3.88 mm. The discrepancy of fissure detection results between the automated scheme and either radiologist is smaller in this dataset than the inter-reader variability. PMID:19272987
NASA Astrophysics Data System (ADS)
Waigl, C. F.; Prakash, A.; Stuefer, M.; Ichoku, C. M.
2016-12-01
The aim of this work is to present and evaluate an algorithm that generates near real-time fire detections suitable for use by fire and related hazard management agencies in Alaska. Our scheme offers benefits over available global products and is sensitive to low-intensity residual burns, while at the same time avoiding common sources of false detections as they are observed in the Alaskan boreal forest, such as reflective river banks and old fire scars. The algorithm is based on I-band brightness temperature data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on NOAA's Suomi NPP spacecraft. Using datasets covering the entire 2015 Alaska fire season, we first evaluate the performance of two global fire products: MOD14/MYD14, derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the more recent global VIIRS I-band product. A comparison with the fire perimeter and properties data published by the Alaska Interagency Coordination Center (AICC) shows that both the MODIS and VIIRS fire products successfully detect all fires larger than approx. 1000 hectares, with the VIIRS I-band product only moderately outperforming MOD14/MYD14. For smaller fires, the VIIRS I-band product offers higher detection likelihood, but still misses one fifth of the fire events overall. Furthermore, some daytime detections are missing, possibly due to processing difficulties or incomplete data transfer. Second, as an alternative, we present a simple algorithm that uses the normalized difference between the 3.74 µm and 11.45 µm VIIRS I-band at-sensor brightness temperatures to map both low- and high-intensity burn areas. Such an approach has the advantage that it makes use of data that are available via the direct readout station operated by the Geographic Information Network of Alaska (GINA). We apply this scheme to known Alaskan boreal forest fires and validate it using GIS data produced by fire management agencies, fire detections from near-simultaneous Landsat imagery, and sub-pixel analysis. We find that our VIIRS-derived fire product more accurately captures the fire spread, can differentiate well between low- and high-intensity burn areas, and has fewer errors of omission compared to the MODIS and VIIRS global fire products.
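The index underlying the alternative algorithm can be sketched in a few lines; the detection threshold below is a placeholder, since the abstract does not state one.

```python
# Sketch of the normalized-difference index from VIIRS I-band 4
# (3.74 um) and I-band 5 (11.45 um) at-sensor brightness temperatures.
# A hot sub-pixel fire raises the 3.74 um band strongly, driving the
# index toward +1. The threshold is an assumed placeholder.
import numpy as np

def fire_index(t_i4, t_i5):
    return (t_i4 - t_i5) / (t_i4 + t_i5)

def candidate_mask(t_i4, t_i5, threshold=0.05):
    """t_i4, t_i5: brightness-temperature arrays in kelvin."""
    return fire_index(t_i4, t_i5) > threshold
```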
Zhu, Jane M; Zhu, Yiliang; Liu, Rui
2008-11-03
As China re-establishes its health insurance system through various cooperative schemes, little is known about schoolchildren's health insurance. This paper reports findings from a study that examined schoolchildren's insurance coverage, disparities between farmer and non-farmer households, and effects of low-premium cooperative schemes on healthcare access and utilization. It also discusses barriers to sustainable enrollment and program growth. A survey of elementary school students was conducted in Pinggu, a rural/suburban district of Beijing. Statistical analyses of association and adjusted odds ratios via logistic regression were conducted to examine various aspects of health insurance. Children's health insurance coverage rose to 54% by 2005, with comparable rates for farmers' and non-farmers' children. However, 76% of insured farmers' children were covered under a low-premium scheme protecting only major medical events, compared to 42% among insured non-farmers' children. The low-premium schemes improved parental perceptions of children's access to and affordability of healthcare, their healthcare-seeking behaviors, and overall satisfaction with healthcare, but had little impact on utilization of outpatient care. Enrolling and retaining schoolchildren in health insurance are threatened by the limited tangible value for routine care and the low reimbursement rate for major medical events under the low-premium cooperative schemes. Coverage rates may be improved by offering complementary and supplementary benefit options with flexible premiums via a multi-tier system consisting of national, regional, and commercial programs. Health insurance education by means of community outreach can reinforce positive parental perceptions, hence promoting and retaining insurance enrollment in the short term.
Coupled assimilation for an intermediate coupled ENSO prediction model
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2010-10-01
The value of coupled assimilation is discussed using an intermediate coupled model in which the wind stress is the only atmospheric state variable, slaved to the model sea surface temperature (SST). In the coupled assimilation analysis, based on the coupled wind-ocean state covariance calculated from the coupled state ensemble, the ocean state is adjusted by assimilating wind data using the ensemble Kalman filter. As revealed by a series of assimilation experiments using simulated observations, the coupled assimilation of wind observations yields better results than the assimilation of SST observations. Specifically, the coupled assimilation of wind observations can help to improve the accuracy of the surface and subsurface currents, because the correlation between the wind and ocean currents is stronger than that between SST and ocean currents in the equatorial Pacific. Thus, the coupled assimilation of wind data can decrease the initial-condition errors in the surface/subsurface currents that can significantly contribute to SST forecast errors. The value of the coupled assimilation of wind observations is further demonstrated by comparing the prediction skills of three 12-year (1997-2008) hindcast experiments initialized by the ocean-only assimilation scheme that assimilates SST observations, the coupled assimilation scheme that assimilates wind observations, and a nudging scheme that nudges the observed wind stress data, respectively. The prediction skills of the two assimilation schemes are significantly better than those of the nudging scheme, and the prediction skills of assimilating wind observations are better than those of assimilating SST observations. Assimilating wind observations for the 2007/2008 La Niña event triggers better predictions, while assimilating SST observations fails to provide an early warning for that event.
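The analysis step being described is the standard stochastic EnKF update, in which the ensemble cross covariance lets a wind observation correct the full coupled state; a generic sketch follows, with illustrative shapes and names.

```python
# Generic stochastic EnKF analysis step: a wind observation y updates
# the full coupled state through the ensemble wind-ocean cross
# covariance. Shapes and names are illustrative.
import numpy as np

def enkf_update(X, y, H, r, rng):
    """X: (n_state, n_ens) ensemble; y: (n_obs,) observed wind;
    H: (n_obs, n_state) operator picking wind out of the state;
    r: observation error variance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    Pxy = A @ HA.T / (n_ens - 1)               # state-obs cross covariance
    Pyy = HA @ HA.T / (n_ens - 1) + r * np.eye(len(y))
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r), (len(y), n_ens))
    return X + K @ (Y - HX)
```

Because the wind-current cross covariance in Pxy is stronger than the SST-current one, assimilating wind corrects the subsurface currents more effectively, which is the mechanism the abstract describes.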
Buffer Management Simulation in ATM Networks
NASA Technical Reports Server (NTRS)
Yaprak, E.; Xiao, Y.; Chronopoulos, A.; Chow, E.; Anneberg, L.
1998-01-01
This paper presents a simulation of a new dynamic buffer allocation management scheme in ATM networks. To achieve this objective, an algorithm that detects congestion and updates the dynamic buffer allocation scheme was developed for the OPNET simulation package via the creation of a new ATM module.
Optical temperature compensation schemes of spectral modulation sensors for aircraft engine control
NASA Astrophysics Data System (ADS)
Berkcan, Ertugrul
1993-02-01
Optical temperature compensation schemes for the ratiometric interrogation of spectral modulation sensors, providing robustness to source temperature, are presented. We have obtained better than a 50-100× reduction in the temperature coefficient of the sensitivity using these types of compensation. We have also developed a spectrographic interrogation scheme that provides increased source temperature robustness; this affords significantly improved accuracy over FADEC temperature ranges, as well as a temperature coefficient of the sensitivity that is substantially further reduced. This latter compensation scheme can be integrated in a small E/O package including the detection and the analog and digital signal processing. We find that these interrogation schemes can be used within a detector spatially multiplexed architecture.
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, and a corresponding decoding scheme, are proposed. The RA-MLC scheme combines multilevel coding and modulation technology with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances the performance. Simulations are carried out in an intensity-modulation direct-detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.
Playing quantum games by a scheme with pre- and post-selection
NASA Astrophysics Data System (ADS)
Weng, Guo-Fu; Yu, Yang
2016-01-01
We propose a scheme to play quantum games by assuming that the two players interact with each other. Through pre-selection, the two players can choose their initial states, and some dilemmas in the classical game may be removed by post-selection, which is particularly useful for cooperative games. We apply the proposal to both the BoS and Prisoner's Dilemma games in cooperative situations. The examples show that the proposal would guarantee a remarkably binding agreement between the two parties: any deviation during the game will be detected, and the game may be abnegated. Through these examples, we find that the initial state in the cooperative game does not impede the process of obtaining preferable payoffs by pre- and post-selection, which is not true of other schemes for implementing quantum games. We point out that one player can use the scheme to detect his opponent's choices if he has an advantage in information theory and technology.
Fault Diagnosis for Centre Wear Fault of Roll Grinder Based on a Resonance Demodulation Scheme
NASA Astrophysics Data System (ADS)
Wang, Liming; Shao, Yimin; Yin, Lei; Yuan, Yilin; Liu, Jing
2017-05-01
The roll grinder is one of the important parts of rolling machinery, and the grinding precision of the roll surface directly influences the surface quality of the steel strip. During the grinding process, however, the centre bears the weight of the roll and alternating stress. Therefore, wear or spalling faults easily develop on the centre, which leads to anomalous vibration of the roll grinder. In this study, a resonance demodulation scheme is proposed to detect the centre wear fault of a roll grinder. First, the fast kurtogram method is employed to help select the sub-band filter parameters for optimal resonance demodulation. The envelope spectrum is then derived from the filtered signal. Finally, two health indicators are designed to diagnose the centre wear fault. The proposed scheme is assessed by analysing experimental data from a roll grinder of a twenty-high rolling mill. The results show that the proposed scheme can effectively detect the centre wear fault of the roll grinder.
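The demodulation core of such a scheme is commonly implemented as band-pass filtering around the kurtogram-selected resonance followed by a Hilbert envelope and its spectrum; the sketch below is that generic chain, with placeholder band edges standing in for the kurtogram's output.

```python
# Generic resonance-demodulation chain: band-pass filter, Hilbert
# envelope, envelope spectrum. Band edges are placeholders for the
# fast kurtogram's selected sub-band.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(x, fs, band=(2000.0, 4000.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    xf = filtfilt(b, a, x)
    env = np.abs(hilbert(xf))                   # demodulated envelope
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec
```

Peaks in the returned spectrum at a fault characteristic frequency (and its harmonics) are what health indicators of this kind typically quantify.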
Free-Space Quantum Signatures Using Heterodyne Measurements
NASA Astrophysics Data System (ADS)
Croal, Callum; Peuntinger, Christian; Heim, Bettina; Khan, Imran; Marquardt, Christoph; Leuchs, Gerd; Wallden, Petros; Andersson, Erika; Korolkova, Natalia
2016-09-01
Digital signatures guarantee the authorship of electronic communications. Currently used "classical" signature schemes rely on unproven computational assumptions for security, while quantum signatures rely only on the laws of quantum mechanics to sign a classical message. Previous quantum signature schemes have used unambiguous quantum measurements. Such measurements, however, sometimes give no result, reducing the efficiency of the protocol. Here, we instead use heterodyne detection, which always gives a result, although there is always some uncertainty. We experimentally demonstrate feasibility in a real environment by distributing signature states through a noisy 1.6 km free-space channel. Our results show that continuous-variable heterodyne detection improves the signature rate for this type of scheme and therefore represents an interesting direction in the search for practical quantum signature schemes. For transmission values ranging from 100% to 10%, but otherwise assuming an ideal implementation with no other imperfections, the signature length is shorter by a factor of 2 to 10. As compared with previous relevant experimental realizations, the signature length in this implementation is several orders of magnitude shorter.
Extended linear detection range for optical tweezers using image-plane detection scheme
NASA Astrophysics Data System (ADS)
Hajizadeh, Faegheh; Masoumeh Mousavi, S.; Khaksar, Zeinab S.; Reihani, S. Nader S.
2014-10-01
The ability to measure pico- and femto-Newton range forces using optical tweezers (OT) strongly relies on the sensitivity of the detection system. We show that the commonly used back-focal-plane detection method provides a linear response range which is shorter than that of the restoring force of OT for large beads. This limits the measurable force range of OT. We show, both theoretically and experimentally, that utilizing a second laser beam for tracking could solve the problem. We also propose a new detection scheme in which the quadrant photodiode is positioned at the plane optically conjugate to the object plane (the image plane). This method solves the problem without the need for a second laser beam for the bead sizes that are commonly used in force spectroscopy applications of OT.
Low-Complexity Noncoherent Signal Detection for Nanoscale Molecular Communications.
Li, Bin; Sun, Mengwei; Wang, Siyi; Guo, Weisi; Zhao, Chenglin
2016-01-01
Nanoscale molecular communication is a viable way of exchanging information between nanomachines. In this investigation, a low-complexity and noncoherent signal detection technique is proposed to mitigate the inter-symbol-interference (ISI) and additive noise. In contrast to existing coherent detection methods of high complexity, the proposed noncoherent signal detector is more practical when the channel conditions are hard to acquire accurately or hidden from the receiver. The proposed scheme employs the molecular concentration difference to detect the ISI corrupted signals and we demonstrate that it can suppress the ISI effectively. The difference in molecular concentration is a stable characteristic, irrespective of the diffusion channel conditions. In terms of complexity, by excluding matrix operations or likelihood calculations, the new detection scheme is particularly suitable for nanoscale molecular communication systems with a small energy budget or limited computation resource.
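As a toy of the noncoherent idea, one can decide each symbol from the slot-to-slot change in received concentration rather than from absolute levels, which depend on the unknown diffusion channel; the sketch below is illustrative only and does not reproduce the paper's exact detector.

```python
# Toy noncoherent detector: average the received molecule concentration
# over each symbol slot and decide from the slot-to-slot difference,
# which is insensitive to the slowly varying ISI floor.
import numpy as np

def detect_bits(conc, samples_per_symbol):
    """conc: 1D array of received concentration samples."""
    n = len(conc) // samples_per_symbol * samples_per_symbol
    slots = conc[:n].reshape(-1, samples_per_symbol).mean(axis=1)
    diff = np.diff(slots, prepend=slots[0])
    return (diff > 0).astype(int)   # rising concentration -> '1'
```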
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2015-04-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited to processing large volumes of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient numbers of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
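A numerically cheap naive Bayesian cloud mask of the kind discussed can be sketched from per-feature class-conditional histograms; the bin layout, feature scaling, and prior below are illustrative assumptions.

```python
# Naive Bayes cloud mask from 1D class-conditional histograms learned
# on manually classified ("truth") pixels. Features are assumed to be
# scaled to [0, 1]; the prior is illustrative.
import numpy as np

def fit_histograms(features, labels, bins=64, rng=(0.0, 1.0)):
    """features: (n_pixels, n_feat); labels: 1 = cloudy, 0 = clear."""
    hists = []
    for f in features.T:
        h_c, edges = np.histogram(f[labels == 1], bins=bins, range=rng, density=True)
        h_s, _ = np.histogram(f[labels == 0], bins=bins, range=rng, density=True)
        hists.append((h_c + 1e-9, h_s + 1e-9, edges))   # avoid log(0)
    return hists

def p_cloudy(features, hists, prior=0.5):
    """Per-pixel posterior probability of cloud under the naive
    (feature-independence) assumption."""
    log_ratio = np.zeros(features.shape[0])
    for f, (h_c, h_s, edges) in zip(features.T, hists):
        idx = np.clip(np.digitize(f, edges) - 1, 0, len(h_c) - 1)
        log_ratio += np.log(h_c[idx]) - np.log(h_s[idx])
    odds = np.exp(log_ratio) * prior / (1.0 - prior)
    return odds / (1.0 + odds)
```

The classical (non-naive) variant replaces the product of 1D histograms with one multidimensional histogram, which is where the resolution and Gaussian-smoothing sensitivity study comes in.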
NASA Astrophysics Data System (ADS)
Nishiyama, M.; Igawa, H.; Kasai, T.; Watanabe, N.
2013-09-01
In this paper, we present the static and dynamic distributed strain measurement characteristics of a long-gauge fiber Bragg grating (FBG) interrogated with a Delayed Transmission/Reflection Ratiometric Reflectometry (DTR3) scheme. The DTR3 scheme is capable of detecting distributed strain along the long-gauge FBG with 50-cm spatial resolution. Additionally, dynamic strain measurement can be achieved using this technique at a 100-Hz sampling rate. We evaluated the strain sensing characteristics of the long-gauge FBG attached to a 2.5-m aluminum bar using four-point bending equipment. Experimental results showed that the DTR3 scheme using the long-gauge FBG could detect distributed strain in static tests and the resonance frequency of the structure in free vibration tests. These results suggest that the DTR3 scheme using the long-gauge FBG is attractive for structural health monitoring (SHM), such as dynamic deformation detection of structures of a few to tens of meters, for example airplane wings and helicopter blades.
NASA Astrophysics Data System (ADS)
Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin
2018-02-01
This paper presents a novel signal processing scheme, a feature-selection-based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features that reflect fault characteristics most effectively and representatively are selected using the max-relevance and min-redundancy principle. A filtering scale selection approach for the MMF, based on feature selection and grey relational analysis, is then proposed. The feature-selection-based MMF method is tested on the diagnosis of artificially created damage to the rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis-criterion-based MMF and the spectral-kurtosis-criterion-based MMF. The proposed feature-selection-based MMF method outperforms these two methods in detecting train axle bearing faults.
A model of human event detection in multiple process monitoring situations
NASA Technical Reports Server (NTRS)
Greenstein, J. S.; Rouse, W. B.
1978-01-01
It is proposed that human decision making in many multi-task situations might be modeled in terms of the manner in which the human detects events related to his tasks and the manner in which he allocates his attention among his tasks once he feels events have occurred. A model of human event detection performance in such a situation is presented. An assumption of the model is that, in attempting to detect events, the human generates the probability that events have occurred. Discriminant analysis is used to model the human's generation of these probabilities. An experimental study of human event detection performance in a multiple process monitoring situation is described, and the application of the event detection model to this situation is addressed. The experimental study employed a situation in which subjects simultaneously monitored several dynamic processes for the occurrence of events and made yes/no decisions on the presence of events in each process. Feeding the event detection model the same information displayed to the experimental subjects allows comparison of the model's performance with the performance of the subjects.
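A hedged illustration of the modeling idea: discriminant analysis maps displayed process information to an event probability. The sketch uses scikit-learn's LDA on invented two-feature data standing in for the displayed process state; everything about the data is assumed.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented stand-in features (e.g. deviation from nominal, recent trend)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),    # no event
               rng.normal(1.5, 1.0, size=(200, 2))])   # event
y = np.r_[np.zeros(200), np.ones(200)]

lda = LinearDiscriminantAnalysis().fit(X, y)
# P(event | displayed information): the probability the model assumes
# the human generates before allocating attention among tasks
p_event = lda.predict_proba(X[:5])[:, 1]
```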
Advection of Microphysical Scalars in Terminal Area Simulation System (TASS)
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2011-01-01
The Terminal Area Simulation System (TASS) is a large eddy scale atmospheric flow model with extensive turbulence and microphysics packages. It has been applied successfully in the past to a diverse set of problems, ranging from the prediction of severe convective events (Proctor et al. 2002) and the tracking of storms to the simulation of weapons effects such as the dispersion and fallout of fission debris (Bacon and Sarma 1991). More recently, TASS has been used for predicting the transport and decay of wake vortices behind aircraft (Proctor 2009). An essential part of the TASS model is its comprehensive microphysics package, which relies on the accurate computation of microphysical scalar transport. This paper describes an evaluation of the Leonard scheme implemented in the TASS model for transporting microphysical scalars. The scheme is validated against benchmark cases with exact solutions and compared with two other schemes - a Monotone Upstream-centered Scheme for Conservation Laws (MUSCL)-type scheme after van Leer, and LeVeque's high-resolution wave propagation method. Finally, a comparison between the schemes is made against an incident of severe tornadic super-cell convection near Del City, Oklahoma.
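For readers who want to see what a MUSCL-type advection step looks like, here is a minimal 1-D sketch with a minmod slope limiter on a periodic grid. It is a generic textbook construction, not the TASS implementation or the exact van Leer variant evaluated in the paper.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(q, c):
    """One step of 1-D linear advection (periodic, rightward) at Courant
    number 0 < c <= 1 using limited piecewise-linear reconstruction."""
    dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    q_face = q + 0.5 * (1 - c) * dq      # upwind-side face value
    flux = c * q_face
    return q - (flux - np.roll(flux, 1))

# one revolution of a square wave on 100 cells at c = 0.5:
q = np.where((np.arange(100) > 20) & (np.arange(100) < 40), 1.0, 0.0)
for _ in range(200):
    q = muscl_step(q, 0.5)
```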
A Studio Project Based on the Events of September 11
ERIC Educational Resources Information Center
Ruby, Nell
2004-01-01
A week after the 9/11 WTC event, the collage project that Nell Ruby and her class had been working on in a basic design classroom lacked relevance. They had been working from master works, analyzing hue and value relationships, color schemes, shape, and composition. The master works seemed unimportant because of the immense emotional impact of the…
Narayanan, Vignesh; Jagannathan, Sarangapani
2017-09-07
In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented, using a novel hybrid-learning-scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and, subsequently, a near-optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and the event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of the system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using the identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. System states and the NN weight estimation errors are regulated, and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.
Validation of satellite-based rainfall in Kalahari
NASA Astrophysics Data System (ADS)
Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter
2018-06-01
Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMORPH∼27 km and CMORPH∼8 km were evaluated against nine daily rain gauge records in the Central Kalahari Basin (CKB) over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed (TVSF) bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. The FEWS-RFE∼11 km performed best, providing better results of descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable SRE performance indicators were the frequency of "miss" rainfall events and the "miss-bias", as they directly indicated the SREs' sensitivity and bias of rainfall detection, respectively. The TVSF bias-correction scheme improved some error measures but reduced the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs. This study highlighted SREs as a valuable source of daily rainfall data with good spatio-temporal coverage, especially suitable for areas with few rain gauges such as the CKB, but also emphasized the SREs' drawbacks, creating an avenue for follow-up research.
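The categorical statistics used in such validations can be computed in a few lines. The sketch below derives the probability of detection, false alarm ratio, critical success index and frequency bias from daily gauge and satellite series; the 1 mm rain-day threshold is an assumption, not the study's value.

```python
import numpy as np

def categorical_stats(gauge, sre, threshold=1.0):
    """Daily rain-detection skill of a satellite estimate vs. a gauge.
    A 'rain day' is any day with accumulation >= threshold (mm)."""
    obs = np.asarray(gauge) >= threshold
    est = np.asarray(sre) >= threshold
    hits = np.sum(obs & est)
    misses = np.sum(obs & ~est)            # gauge rain the satellite missed
    false_alarms = np.sum(~obs & est)
    return {
        "POD": hits / (hits + misses),                   # probability of detection
        "FAR": false_alarms / (hits + false_alarms),     # false alarm ratio
        "CSI": hits / (hits + misses + false_alarms),    # critical success index
        "bias": (hits + false_alarms) / (hits + misses), # frequency bias
    }
```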
Research on SEU hardening of heterogeneous Dual-Core SoC
NASA Astrophysics Data System (ADS)
Huang, Kun; Hu, Keliu; Deng, Jun; Zhang, Tao
2017-08-01
Single-Event Upset (SEU) hardening can be implemented in various ways; however, some schemes require substantial human, material and financial resources. This paper proposes a low-cost SEU-hardening scheme for a Heterogeneous Dual-Core SoC (HD SoC) that combines three techniques. First, automatic Triple Modular Redundancy (TMR) is adopted to harden the register files of the processor and the instruction-fetch module. Second, Hamming codes are used to harden the random access memory (RAM). Last, a software signature technique is applied to check the programs running on the CPU. The scheme requires few additional resources and has little influence on CPU performance. These techniques are mature, easy to implement and inexpensive. According to the simulation results, the scheme satisfies the basic requirements of SEU hardening.
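Hamming codes correct single bit flips in RAM, which is exactly the SEU failure mode targeted above. Below is a minimal Hamming(7,4) encoder and single-error corrector as a generic illustration of the principle; it is not the SoC's actual RAM protection logic.

```python
import numpy as np

# Hamming(7,4) in systematic form over GF(2): codeword = [d0 d1 d2 d3 p0 p1 p2]
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(nibble):
    """Encode 4 data bits into a 7-bit codeword."""
    return nibble @ G % 2

def correct(codeword):
    """Correct a single bit flip (e.g. an SEU) and return the data bits."""
    syndrome = H @ codeword % 2
    if syndrome.any():
        # a single-bit error produces a syndrome equal to that bit's column of H
        pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        codeword = codeword.copy()
        codeword[pos] ^= 1
    return codeword[:4]
```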
COMCAN: a computer program for common cause analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, G.R.; Marshall, N.H.; Wilson, J.R.
1976-05-01
The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
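The core search can be illustrated with simple set logic: a minimal cut set is a common-cause candidate when all of its components share a susceptibility. The sketch below uses an invented data layout and is only a schematic of the idea, not COMCAN's actual implementation.

```python
def common_cause_candidates(min_cut_sets, susceptibility):
    """Flag minimal cut sets whose components all share a secondary-event
    susceptibility (e.g. 'vibration', 'moisture', or a common manufacturer).
    susceptibility maps component -> set of common causes (assumed layout)."""
    hits = []
    for cut_set in min_cut_sets:
        shared = set.intersection(*(susceptibility[c] for c in cut_set))
        if shared:
            hits.append((cut_set, shared))
    return hits

# illustrative data, not from the report:
cuts = [("pumpA", "pumpB"), ("valveC", "pumpA")]
suscept = {"pumpA": {"vibration", "mfr_X"},
           "pumpB": {"vibration"},
           "valveC": {"moisture"}}
print(common_cause_candidates(cuts, suscept))  # pumpA+pumpB share 'vibration'
```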
An On-Demand Emergency Packet Transmission Scheme for Wireless Body Area Networks.
Al Ameen, Moshaddique; Hong, Choong Seon
2015-12-04
The rapid development of sensor devices that can actively monitor human activities has given rise to a new field called the wireless body area network (BAN). A BAN can manage devices in, on and around the human body. Major requirements of such a network are energy efficiency, long lifetime, low delay, security, etc. Traffic in a BAN can be scheduled (normal) or event-driven (emergency). Traditional media access control (MAC) protocols use duty cycling to improve performance. A sleep/wake-up cycle is employed to save energy. However, this mechanism lacks features to handle emergency traffic in a prompt and immediate manner. To deliver an emergency packet, a node has to wait until the receiver is awake. It also suffers from overheads such as idle listening, overhearing and control packet handshakes. An external radio-triggered wake-up mechanism is proposed to handle prompt communication. It can reduce the overheads and improve the performance through an on-demand scheme. In this work, we present a simple-to-implement on-demand packet transmission scheme that takes into consideration the requirements of a BAN. The major concern is handling the event-based emergency traffic. The performance analysis of the proposed scheme is presented. The results showed significant improvements in the overall performance of a BAN compared to state-of-the-art protocols in terms of energy consumption, delay and lifetime.
Foretelling Flares and Solar Energetic Particle Events: the FORSPEF tool
NASA Astrophysics Data System (ADS)
Anastasiadis, Anastasios; Papaioannou, Athanasios; Sandberg, Ingmar; Georgoulis, Manolis K.; Tziotziou, Kostas; Jiggens, Piers
2017-04-01
A novel integrated prediction system for both solar flares (SFs) and solar energetic particle (SEP) events is presented. The Forecasting Solar Particle Events and Flares (FORSPEF) tool provides forecasting of solar eruptive events, such as SFs with a projection to coronal mass ejections (CMEs) (occurrence and velocity), and the likelihood of occurrence of a SEP event. In addition, FORSPEF also provides nowcasting of SEP events based on actual SF and CME near real-time data, as well as the complete SEP profile (peak flux, fluence, rise time, duration) per parent solar event. The prediction of SFs relies on a morphological method, the effective connected magnetic field strength (Beff); it is based on an assessment of potentially flaring active-region (AR) magnetic configurations and utilizes sophisticated analysis of a large number of AR magnetograms. For the prediction of SEP events, new methods have been developed for both the likelihood of SEP occurrence and the expected SEP characteristics. In particular, using the location of the flare (longitude) and the flare size (maximum soft X-ray intensity), a reductive statistical method has been implemented. Moreover, employing CME parameters (velocity and width), proper functions per width (i.e. halo, partial halo, non-halo) and integral energy (E>30, 60, 100 MeV) have been identified. In our technique, warnings are issued for all >C1.0 soft X-ray flares. The prediction time in the forecasting scheme extends to 24 hours with a refresh rate of 3 hours, while the respective prediction time for the nowcasting scheme depends on the availability of the near real-time data and falls between 15-20 minutes for solar flares and 6 hours for CMEs. We present the modules of the FORSPEF system, their interconnection and the operational setup. The dual approach in the development of FORSPEF (i.e. forecasting and nowcasting schemes) permits the refinement of predictions upon the availability of new data that characterize changes on the Sun and in interplanetary space, while the combined usage of SF and SEP forecasting methods upgrades FORSPEF to an integrated forecasting solution. Finally, we demonstrate the validation of the modules of the FORSPEF tool using categorical scores constructed on archived data, and we further discuss independent case studies. This work has been funded through the "FORSPEF: FORecasting Solar Particle Events and Flares", ESA Contract No. 4000109641/13/NL/AK and the "SPECS: Solar Particle Events foreCasting Studies" project of the National Observatory of Athens.
Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future Potential
Doi, Kunio
2007-01-01
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. In this article, the motivation and philosophy for early development of CAD schemes are presented together with the current status and future potential of CAD in a PACS environment. With CAD, radiologists use the computer output as a “second opinion” and make the final decisions. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance by computers does not have to be comparable to or better than that by physicians, but needs to be complementary to that by physicians. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral chest images has the potential to improve the overall performance in the detection of lung nodules when combined with another CAD scheme for PA chest images. Because vertebral fractures can be detected reliably by computer on lateral chest radiographs, radiologists’ accuracy in the detection of vertebral fractures would be improved by the use of CAD, and thus early diagnosis of osteoporosis would become possible. In MRA, a CAD system has been developed for assisting radiologists in the detection of intracranial aneurysms. On successive bone scan images, a CAD scheme for detection of interval changes has been developed by use of temporal subtraction images. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for chest CAD may include the computerized detection of lung nodules, interstitial opacities, cardiomegaly, vertebral fractures, and interval changes in chest radiographs as well as the computerized classification of benign and malignant nodules and the differential diagnosis of interstitial lung diseases. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with known pathology, which would be very similar to a new unknown case, from PACS when a reliable and useful method has been developed for quantifying the similarity of a pair of images for visual comparison by radiologists. PMID:17349778
NASA Astrophysics Data System (ADS)
Hiramatsu, Yuya; Muramatsu, Chisako; Kobayashi, Hironobu; Hara, Takeshi; Fujita, Hiroshi
2017-03-01
Breast cancer screening with mammography and ultrasonography is expected to improve sensitivity compared with mammography alone, especially for women with dense breasts. An automated breast volume scanner (ABVS) provides operator-independent whole-breast data which facilitate double reading and comparison with past exams, the contralateral breast, and multimodality images. However, the large volumetric data in screening practice increase radiologists' workload. Therefore, our goal is to develop a computer-aided detection scheme for breast masses in ABVS data to assist radiologists' diagnosis and comparison with mammographic findings. In this study, a false positive (FP) reduction scheme using a deep convolutional neural network (DCNN) was investigated. For training the DCNN, true positive and FP samples were obtained from the results of our initial mass detection scheme using the vector convergence filter. Regions of interest including the detected regions were extracted from the multiplanar reconstruction slices. We investigated methods to select effective FP samples for training the DCNN. Based on free-response receiver operating characteristic analysis, simple random sampling from the entire candidate pool was most effective in this study. Using the DCNN, the number of FPs could be reduced by 60% while retaining 90% of true masses. The result indicates the potential usefulness of DCNN-based FP reduction in automated mass detection on ABVS images.
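As a rough sketch of DCNN-based FP reduction, the PyTorch snippet below defines a small binary classifier that scores candidate regions of interest; the patch size, architecture and training details are illustrative assumptions, not the network used in the study.

```python
import torch
import torch.nn as nn

class FPReductionCNN(nn.Module):
    """Small binary classifier: mass-candidate patch -> true mass vs. FP."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # for 64x64 input ROIs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Candidates from the initial detector are cropped into 64x64 ROIs;
# low-scoring candidates are discarded as false positives.
model = FPReductionCNN()
scores = model(torch.randn(8, 1, 64, 64)).softmax(dim=1)[:, 1]
```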
Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Gyongyosi, Laszlo; Imre, Sandor
2015-03-01
We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey information and Gaussian sub-channels for the transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by a sophisticated sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels, and this information is then used in the adaptive quadrature decoding process. We define a technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error minimum in the presence of Gaussian noise. We introduce the notions of single and collective adaptive quadrature detection. We also extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.
A ZigBee-Based Location-Aware Fall Detection System for Improving Elderly Telecare
Huang, Chih-Ning; Chan, Chia-Tai
2014-01-01
Falls are the primary cause of accidents among the elderly and frequently cause fatal and non-fatal injuries associated with a large amount of medical costs. Fall detection using wearable wireless sensor nodes has the potential of improving elderly telecare. This investigation proposes a ZigBee-based location-aware fall detection system for elderly telecare that provides an unobstructed communication between the elderly and caregivers when falls happen. The system is based on ZigBee-based sensor networks, and the sensor node consists of a motherboard with a tri-axial accelerometer and a ZigBee module. A wireless sensor node worn on the waist continuously detects fall events and starts an indoor positioning engine as soon as a fall happens. In the fall detection scheme, this study proposes a three-phase threshold-based fall detection algorithm to detect critical and normal falls. The fall alarm can be canceled by pressing and holding the emergency fall button only when a normal fall is detected. On the other hand, there are three phases in the indoor positioning engine: path loss survey phase, Received Signal Strength Indicator (RSSI) collection phase and location calculation phase. Finally, the location of the faller will be calculated by a k-nearest neighbor algorithm with weighted RSSI. The experimental results demonstrate that the fall detection algorithm achieves 95.63% sensitivity, 73.5% specificity, 88.62% accuracy and 88.6% precision. Furthermore, the average error distance for indoor positioning is 1.15 ± 0.54 m. The proposed system successfully delivers critical information to remote telecare providers who can then immediately help a fallen person. PMID:24743841
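A minimal sketch of a three-phase threshold test on tri-axial accelerometer data is given below: free fall, impact, then post-impact inactivity. The thresholds and window lengths are illustrative assumptions, not the published values of the proposed algorithm.

```python
import numpy as np

def detect_fall(ax, ay, az, fs=50,
                free_fall_g=0.4, impact_g=2.5, window_s=1.0):
    """Three-phase threshold check on tri-axial accelerometer arrays (in g):
    (1) free fall (magnitude well below 1 g), (2) impact spike shortly
    after, (3) post-impact inactivity near 1 g (lying still)."""
    mag = np.sqrt(ax**2 + ay**2 + az**2)
    w = int(window_s * fs)
    for i in np.where(mag < free_fall_g)[0]:          # phase 1: free fall
        seg = mag[i:i + w]
        if seg.size and seg.max() > impact_g:          # phase 2: impact
            rest = mag[i + w:i + 3 * w]
            if rest.size and np.all(np.abs(rest - 1.0) < 0.3):
                return True                            # phase 3: inactivity
    return False
```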
Travel associated legionnaires' disease in Europe: 2003.
Ricketts, K; Joseph, C
2004-10-01
Six hundred and thirty two cases of travel-associated legionnaires' disease with onset in 2003 were reported to the EWGLINET surveillance scheme by 24 countries. Eighty nine clusters were detected, 35 (39%) of which would not have been detected without the EWGLINET scheme. One hundred and seven accommodation sites were investigated and 22 sites were published on the EWGLI website. The proportion of cases diagnosed primarily by the urinary antigen test was 81.2%, and 48 positive cultures were obtained. Thirty eight deaths were reported to the EWGLINET scheme, giving a crude fatality rate of 6%. Countries are encouraged to inform the coordinating centre of cases that fall ill after travelling within their own country of residence ('internal travel'), and are also encouraged to obtain patient isolates for culture where at all possible.
Power of sign surveys to monitor population trend
Kendall, Katherine C.; Metzgar, Lee H.; Patterson, David A.; Steele, Brian M.
1992-01-01
The urgent need for an effective monitoring scheme for grizzly bear (Ursus arctos) populations led us to investigate the effort required to detect changes in populations of low-density dispersed animals, using sign (mainly scats and tracks) they leave on trails. We surveyed trails in Glacier National Park for bear tracks and scats during five consecutive years. Using these data, we modeled the occurrence of bear sign on trails, then estimated the power of various sampling schemes. Specifically, we explored the power of bear sign surveys to detect a 20% decline in sign occurrence. Realistic sampling schemes appear feasible if the density of sign is high enough, and we provide guidelines for designs with adequate replication to monitor long-term trends of dispersed populations using sign occurrences on trails.
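The power of such a sampling design can be approximated by simulation. The sketch below estimates, by Monte Carlo, the power of a one-sided two-proportion test to detect a 20% decline in per-transect sign occurrence; the test statistic and parameters are generic assumptions, not the authors' exact model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def power_to_detect_decline(p0, decline=0.20, n_transects=100,
                            alpha=0.05, n_sim=5000):
    """Monte Carlo power of a one-sided two-proportion z-test for a
    fractional decline in the per-transect probability of finding sign."""
    p1 = p0 * (1 - decline)
    z_crit = norm.ppf(1 - alpha)
    rejections = 0
    for _ in range(n_sim):
        x0 = rng.binomial(n_transects, p0)   # transects with sign, before
        x1 = rng.binomial(n_transects, p1)   # transects with sign, after
        p_pool = (x0 + x1) / (2 * n_transects)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_transects)
        if se > 0 and (x0 - x1) / n_transects / se > z_crit:
            rejections += 1
    return rejections / n_sim

# e.g. power_to_detect_decline(p0=0.6, n_transects=150)
```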
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
NASA Technical Reports Server (NTRS)
Shen, C. N.; YERAZUNIS
1979-01-01
The feasibility of using range/pointing angle data, such as might be obtained by a laser rangefinder, for terrain evaluation in the 10-40 meter range on which to base the guidance of an autonomous rover was investigated. The decision procedure of the rapid estimation scheme for the detection of discrete obstacles has been modified to reinforce the detection ability. With the introduction of the logarithmic scanning scheme and the obstacle identification scheme, previously developed algorithms are combined to demonstrate the overall performance of the integrated route designation system using the laser rangefinder. In an attempt to cover a greater range, 30 m to 100 m, the problem of estimating gradients in the presence of pointing angle noise at middle range is investigated.
NASA Astrophysics Data System (ADS)
Uchiyama, Yoshikazu; Asano, Tatsunori; Hara, Takeshi; Fujita, Hiroshi; Kinosada, Yasutomi; Asano, Takahiko; Kato, Hiroki; Kanematsu, Masayuki; Hoshi, Hiroaki; Iwama, Toru
2009-02-01
The detection of cerebrovascular diseases such as unruptured aneurysms, stenoses, and occlusions is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, which is an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray-level transformation to calibrate voxel values. To adjust for variations in patient positioning, registration was subsequently employed to maximize the overlap of the vessel regions in the target image and reference image. The vessel regions were then segmented from the background using gray-level thresholding and region growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. The segmentation of cerebral arteries was attained with acceptable results in 97.1% (829/854) of the MRA studies. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.
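Region growing, one of the segmentation steps above, can be sketched as a breadth-first flood fill over voxels whose intensities fall within a band. This is a generic 6-connected illustration, not the study's implementation.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, low, high):
    """6-connected 3-D region growing from a seed voxel: collect all
    connected voxels whose intensity lies in [low, high]."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        if grown[z, y, x] or not (low <= volume[z, y, x] <= high):
            continue
        grown[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return grown
```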
NASA Astrophysics Data System (ADS)
Yuki, Akiyama; Satoshi, Ueyama; Ryosuke, Shibasaki; Adachi, Ryuichiro
2016-06-01
In this study, we developed a method to detect sudden population concentrations on a certain day and area, that is, "Events," all over Japan in 2012 using mass GPS data provided by mobile phone users. First, the stay locations of all phone users were detected using existing methods. Second, the areas and days where Events occurred were detected by aggregating the mass stay locations into 1-km-square grid polygons. Finally, the proposed method could detect Events with an especially large number of visitors in the year by removing the influence of Events that occurred continuously throughout the year. In addition, we demonstrated reasonable reliability of the proposed Event detection method by comparing the results of Event detection with light intensities obtained from DMSP/OLS night light images. Our method can detect not only positive events such as festivals but also negative events such as natural disasters and road accidents. These results are expected to support policy development for urban planning, disaster prevention, and transportation management.
NASA Astrophysics Data System (ADS)
Jacobsen, Svein; Klemetsen, Øystein; Birkelund, Yngve
2012-09-01
Microwave radiometry is evaluated for renal thermometry tailored to detect the pediatric condition of vesicoureteral urine reflux (VUR) from the bladder through the ureter into the kidney. Prior to a potential reflux event, the urine is heated within the bladder by an external, body-contacting hyperthermia applicator to generate a fluidic contrast temperature relative to normal body temperature. A single-band, miniaturized radiometer (operating at 3.5 GHz) is connected to an electromagnetic-interference-shielded and suction-coupled elliptical antenna to receive thermal radiation from an ex vivo porcine phantom model. Brightness (radiometric) and fiberoptic temperature data are recorded for varying urine phantom reflux volumes (20-40 mL) and contrast temperatures ranging from 2 to 10 °C within the kidney phantom. The kidney phantom itself is located at 40 mm depth (skin-to-kidney center distance) and surrounded by the porcine phantom. Radiometric step responses to injection of urine simulant by a syringe are shown to be highly correlated with in situ kidney temperatures measured by fiberoptic probes. Statistically, the performance of the VUR detection scheme is evaluated by the error probabilities of making a wrong decision. Laboratory testing of the radiometric system supports the feasibility of passive non-invasive kidney thermometry for the detection of VUR classified within the two highest grades.
Multi-Station Broad Regional Event Detection Using Waveform Correlation
NASA Astrophysics Data System (ADS)
Slinkard, M.; Stephen, H.; Young, C. J.; Eckert, R.; Schaff, D. P.; Richards, P. G.
2013-12-01
Previous waveform correlation studies have established the occurrence of repeating seismic events in various regions, and the utility of waveform-correlation event-detection on broad regional or even global scales to find events currently not included in traditionally-prepared bulletins. The computational burden, however, is high, limiting previous experiments to relatively modest template libraries and/or processing time periods. We have developed a distributed computing waveform correlation event detection utility that allows us to process years of continuous waveform data with template libraries numbering in the thousands. We have used this system to process several years of waveform data from IRIS stations in East Asia, using libraries of template events taken from global and regional bulletins. Detections at a given station are confirmed by 1) comparison with independent bulletins of seismicity, and 2) consistent detections at other stations. We find that many of the detected events are not in traditional catalogs, hence the multi-station comparison is essential. In addition to detecting the similar events, we also estimate magnitudes very precisely based on comparison with the template events (when magnitudes are available). We have investigated magnitude variation within detected families of similar events, false alarm rates, and the temporal and spatial reach of templates.
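The core of waveform-correlation detection is a normalized cross-correlation of a template event against continuous data. The brute-force sketch below illustrates the idea; production systems use FFT-based correlation and the distributed processing described above.

```python
import numpy as np

def correlation_detect(trace, template, threshold=0.7):
    """Slide a template over continuous data and return sample offsets
    where the normalized cross-correlation exceeds the threshold."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    detections = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        s = w.std()
        if s == 0:
            continue  # flat window: correlation undefined
        cc = np.dot((w - w.mean()) / s, t) / n   # in [-1, 1]
        if cc >= threshold:
            detections.append((i, cc))
    return detections
```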
Ray-based acoustic localization of cavitation in a highly reverberant environment.
Chang, Natasha A; Dowling, David R
2009-05-01
Acoustic detection and localization of cavitation have inherent advantages over optical techniques because cavitation bubbles are natural sound sources, and acoustic transduction of cavitation sounds does not require optical access to the region of cavitating flow. In particular, near cavitation inception, cavitation bubbles may be visually small and occur infrequently, but may still emit audible sound pulses. In this investigation, direct-path acoustic recordings of cavitation events are made with 16 hydrophones mounted on the periphery of a water tunnel test section containing a low-cavitation-event-rate vortical flow. These recordings are used to localize the events in three dimensions via cross correlations to obtain arrival time differences. Here, bubble localization is hindered by reverberation, background noise, and the fact that both the pulse emission time and waveform are unknown. These hindrances are partially mitigated by a signal-processing scheme that incorporates straight-ray acoustic propagation and Monte-Carlo techniques for compensating ray-path, sound-speed, and hydrophone-location uncertainties. The acoustic localization results are compared to simultaneous optical localization results from dual-camera high-speed digital-video recordings. For 53 bubbles and a peak-signal to noise ratio frequency of 6.7 kHz, the root-mean-square spatial difference between optical and acoustic bubble location results was 1.94 cm. Parametric dependences in acoustic localization performance are also presented.
Nagy-Soper Subtraction: a Review
NASA Astrophysics Data System (ADS)
Robens, Tania
2013-07-01
In this review, we present an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e-→3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.
NASA Astrophysics Data System (ADS)
Muramatsu, Chisako; Ishida, Kyoko; Sawada, Akira; Hatanaka, Yuji; Yamamoto, Tetsuya; Fujita, Hiroshi
2016-03-01
Early detection of glaucoma is important to slow down or cease progression of the disease and to prevent total blindness. We have previously proposed an automated scheme for the detection of retinal nerve fiber layer defects (NFLDs), which are one of the early signs of glaucoma observed on retinal fundus images. In this study, a new multi-step detection scheme was included to improve the detection of subtle and narrow NFLDs. In addition, new features were added to distinguish between NFLDs and blood vessels, which are frequent sites of false positives (FPs). The result was evaluated with a new test dataset consisting of 261 cases, including 130 cases with NFLDs. Using the proposed method, the initial detection rate was improved from 82% to 98%. At a sensitivity of 80%, the number of FPs per image was reduced from 4.25 to 1.36. The result indicates the potential usefulness of the proposed method for early detection of glaucoma.
NASA Astrophysics Data System (ADS)
Lartizien, Carole; Marache-Francisco, Simon; Prost, Rémy
2012-02-01
Positron emission tomography (PET) using fluorine-18 deoxyglucose (18F-FDG) has become an increasingly recommended tool in clinical whole-body oncology imaging for the detection, diagnosis, and follow-up of many cancers. One way to improve the diagnostic utility of PET oncology imaging is to assist physicians facing difficult cases of residual or low-contrast lesions. This study aimed at evaluating different schemes of computer-aided detection (CADe) systems for the guided detection and localization of small and low-contrast lesions in PET. These systems are based on two supervised classifiers, linear discriminant analysis (LDA) and the nonlinear support vector machine (SVM). The image feature sets that serve as input data consisted of the coefficients of an undecimated wavelet transform. An optimization study was conducted to select the best combination of parameters for both the SVM and the LDA. Different false-positive reduction (FPR) methods were evaluated to reduce the number of false-positive detections per image (FPI). This includes the removal of small detected clusters and the combination of the LDA and SVM detection maps. The different CAD schemes were trained and evaluated based on a simulated whole-body PET image database containing 250 abnormal cases with 1230 lesions and 250 normal cases with no lesion. The detection performance was measured on a separate series of 25 testing images with 131 lesions. The combination of the LDA and SVM score maps was shown to produce very encouraging detection performance for both the lung lesions, with 91% sensitivity and 18 FPIs, and the liver lesions, with 94% sensitivity and 10 FPIs. Comparison with human performance indicated that the different CAD schemes significantly outperformed human detection sensitivities, especially regarding the low-contrast lesions.
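A rough sketch of combining the two classifiers' score maps: train LDA and an SVM on the wavelet-coefficient features, then average their per-voxel scores before cluster-based FP reduction. The scikit-learn calls are generic stand-ins; the kernel choice and feature layout are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def combined_score_map(X_train, y_train, X_vox):
    """X_train: (n_samples, n_features) wavelet-coefficient vectors with
    lesion/background labels y_train; X_vox: feature vectors for each voxel
    of a test image. Returns the averaged LDA+SVM lesion score per voxel."""
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    s_lda = lda.predict_proba(X_vox)[:, 1]
    s_svm = svm.predict_proba(X_vox)[:, 1]
    # averaging the two score maps is one simple combination rule;
    # thresholding and small-cluster removal would follow as FP reduction
    return (s_lda + s_svm) / 2.0
```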
Hardware accelerator design for change detection in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil
2011-10-01
Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change in order to minimize communication and processing overhead. Among the many algorithms for change detection, one based on a clustering scheme was proposed for smart camera systems. However, such an algorithm achieves a low frame rate, far from real-time requirements, on the general-purpose processors (like the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change detection scheme. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
Wang, Zhuo; Camino, Acner; Zhang, Miao; Wang, Jie; Hwang, Thomas S.; Wilson, David J.; Huang, David; Li, Dengwang; Jia, Yali
2017-01-01
Diabetic retinopathy is a pathology where microvascular circulation abnormalities ultimately result in photoreceptor disruption and, consequently, permanent loss of vision. Here, we developed a method that automatically detects photoreceptor disruption in mild diabetic retinopathy by mapping ellipsoid zone reflectance abnormalities from en face optical coherence tomography images. The algorithm uses a fuzzy c-means scheme with a redefined membership function to assign a defect severity level on each pixel and generate a probability map of defect category affiliation. A novel scheme of unsupervised clustering optimization allows accurate detection of the affected area. The achieved accuracy, sensitivity and specificity were about 90% on a population of thirteen diseased subjects. This method shows potential for accurate and fast detection of early biomarkers in diabetic retinopathy evolution. PMID:29296475
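The fuzzy c-means core used above can be sketched in a few lines: alternate between membership and center updates, with the membership matrix serving as the per-pixel probability map. This is the standard algorithm, without the paper's redefined membership function or unsupervised clustering optimization.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on feature vectors X (n_samples, n_features).
    Returns cluster centers and the membership matrix U, interpretable as
    a per-pixel probability map of defect-category affiliation."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1))
                   * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# e.g. centers, U = fuzzy_cmeans(enface_reflectance.reshape(-1, 1))
```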
Piro, Benoit; Shi, Shihui; Reisberg, Steeve; Noël, Vincent; Anquetin, Guillaume
2016-01-01
We review here the most frequently reported targets among electrochemical immunosensors and aptasensors: antibiotics, bisphenol A, cocaine, ochratoxin A and estradiol. In each case, the immobilization procedures are described, as well as the transduction schemes and the limits of detection. It is shown that limits of detection are generally two to three orders of magnitude lower for immunosensors than for aptasensors, due to the higher affinities of antibodies. No significant progress has been made in improving these affinities, but transduction schemes have been improved instead, which has led to a steady improvement in limits of detection of ca. five orders of magnitude over the last 10 years. This progress depends on the target, however. PMID:26938570
Real-Time Event Detection for Monitoring Natural and Source ...
The use of event detection systems in finished drinking water systems is increasing in order to monitor water quality in both operational and security contexts. Recent incidents involving harmful algal blooms and chemical spills into watersheds have increased interest in monitoring source water quality prior to treatment. This work highlights the use of the CANARY event detection software in detecting suspected illicit events in an actively monitored watershed in South Carolina. CANARY is an open source event detection software that was developed by USEPA and Sandia National Laboratories. The software works with any type of sensor, utilizes multiple detection algorithms and approaches, and can incorporate operational information as needed. Monitoring has been underway for several years to detect events related to intentional or unintentional dumping of materials into the monitored watershed. This work evaluates the feasibility of using CANARY to enhance the detection of events in this watershed. This presentation will describe the real-time monitoring approach used in this watershed, the selection of CANARY configuration parameters that optimize detection for this watershed and monitoring application, and the performance of CANARY during the time frame analyzed. Further, this work will highlight how rainfall events impacted analysis, and the innovative application of CANARY taken in order to effectively detect the suspected illicit events. This presentation d
Mishra, Raghavendra; Barnwal, Amit Kumar
2015-05-01
The Telecare medical information system (TMIS) provides effective healthcare delivery services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. Later, it was proved insecure against various active and passive attacks. To erase the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password change phases. To present an authentication scheme with pre-smart-card authentication, we present an improved anonymous smart-card-based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, the proposed scheme provides efficient login and password change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overhead with relevant schemes.
Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics
NASA Astrophysics Data System (ADS)
Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong
2018-02-01
Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. The update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Usually, different update schemes make the models behave in different ways, which should be carefully recalibrated. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the ordered-sequential scheme and the shuffled scheme, on pedestrian dynamics. A multi-velocity floor field cellular automaton (FFCA) considering the changes of pedestrians' moving properties along walking paths and the heterogeneity of pedestrians' walking abilities was used. For the parallel scheme only, collision detection and resolution must be considered, resulting in a great difference from the other update schemes. For pedestrian evacuation, the evacuation time is enlarged, and the difference in pedestrians' walking abilities is better reflected, under the parallel scheme. In the face of a bottleneck, for example an exit, using the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies under the four different update schemes when we simulate pedestrian flow with high desired velocity. Update schemes may have no influence on simulated pedestrians' tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with the environment.
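A minimal sketch of how the update schemes differ: under sequential schemes, pedestrians move one at a time so conflicts cannot arise, while the parallel scheme needs explicit collision detection and resolution. The conflict rule below (contested cells stay empty) is a simplification; FFCA models usually resolve conflicts probabilistically.

```python
import random

def step(agents, choose_target, scheme="random"):
    """One CA time step. agents: dict id -> cell; choose_target maps a
    cell to the desired next cell (e.g. down the floor-field gradient)."""
    if scheme in ("random", "ordered", "shuffled"):
        order = list(agents)
        if scheme != "ordered":           # shuffled/random: permute each step
            random.shuffle(order)
        occupied = set(agents.values())
        for a in order:                   # sequential moves: no conflicts
            target = choose_target(agents[a])
            if target not in occupied:
                occupied.discard(agents[a])
                agents[a] = target
                occupied.add(target)
    elif scheme == "parallel":
        wishes = {a: choose_target(c) for a, c in agents.items()}
        occupied = set(agents.values())   # cell state at start of the step
        counts = {}
        for t in wishes.values():
            counts[t] = counts.get(t, 0) + 1
        for a, target in wishes.items():
            # collision detection: only free, uncontested cells are entered
            if target not in occupied and counts[target] == 1:
                agents[a] = target
    return agents
```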
Follicle Detection on the USG Images to Support Determination of Polycystic Ovary Syndrome
NASA Astrophysics Data System (ADS)
Adiwijaya; Purnama, B.; Hasyim, A.; Septiani, M. D.; Wisesty, U. N.; Astuti, W.
2015-06-01
Polycystic Ovary Syndrome (PCOS) is the most common endocrine disorder affecting females in their reproductive years. It has gained attention from married couples affected by infertility. One of the diagnostic criteria considered by the doctor is manually analysing the ovarian USG image to detect the number and size of the ovary's follicles. Such manual analysis suffers from high variability and low reproducibility and efficiency. To overcome these problems, an automatic scheme is proposed to detect the follicles in USG images in support of PCOS diagnosis. The first step determines the initial homogeneous regions, which are then segmented into actual follicle forms. The next step selects the regions matching follicle criteria and measures the attributes of each segmented region as a follicle. The measured number and size of follicles are used to categorize the image as PCOS or non-PCOS. The method used is region growing, including region-based and seed-based variants. To measure follicle diameter, different methods including stereology and Euclidean distance are compared. The most effective system design for PCO detection uses region growing with Euclidean distance for follicle quantification.
Iglesias-Rojas, Juan Carlos; Gomez-Castañeda, Felipe; Moreno-Cadenas, Jose Antonio
2017-06-14
In this paper, a Least Mean Square (LMS) programming scheme is used to set the offset voltage of two operational amplifiers built using floating-gate transistors, enabling a 0.95 V RMS trimmer-less flame detection sensor. The programming scheme is capable of setting the offset voltage over a wide range of values by means of electron injection. The flame detection sensor consists of two programmable-offset operational amplifiers; the first amplifier serves as a 26 μV offset voltage follower, whereas the second amplifier acts as a programmable trimmer-less voltage comparator. Both amplifiers form the proposed sensor, whose principle of operation is based on detecting the electrical changes produced by flame ionization. The experimental results show that it is possible to measure the presence of a flame accurately after programming the amplifiers with a maximum of 35 LMS-algorithm iterations. Current commercial flame detectors are mainly used in absorption refrigerators and large industrial gas heaters, where a high-voltage AC source and several mechanical trimmers are needed to accurately measure the presence of the flame.
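The LMS programming loop can be sketched as measure-then-inject iterations that drive the offset toward a target. Here read_offset() and inject() are hypothetical stand-ins for the measurement and floating-gate electron-injection hardware, and the step size and tolerance are illustrative, not the paper's values.

```python
def lms_program_offset(read_offset, inject, target=0.0,
                       mu=0.5, tol=26e-6, max_iter=35):
    """LMS-style iteration: measure the amplifier offset, then command a
    charge-injection step proportional to the error, until the offset is
    within tolerance. read_offset() and inject(dv) are hypothetical
    hardware stand-ins."""
    for i in range(max_iter):
        error = target - read_offset()
        if abs(error) < tol:
            return i                 # converged after i iterations
        inject(mu * error)           # gradient-descent-like correction
    return max_iter
```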
How quickly do breast screeners learn their skills?
NASA Astrophysics Data System (ADS)
Nevisi, Hossein; Dong, Leng; Chen, Yan; Gale, Alastair G.
2017-03-01
The UK's Breast Screening Programme is 27 years old, and many experienced breast radiologists are now retiring while new screening personnel enter the Programme. It is important to the ongoing Programme that new mammography readers quickly reach the skill level of experienced readers. This raises the question of how quickly the necessary cancer detection skills are learnt. All breast screening radiologists in the UK read educational training sets of challenging FFDM images (the PERFORMS® scheme) yearly to maintain and improve their performance in real-life screening. Data were examined from the annual PERFORMS® scheme for 54 new screeners, 55 screeners who had been screening for one year, and more experienced screeners (597 screeners). Not surprisingly, significant differences in cancer detection rate were found between new readers and both of the other groups. Additionally, the performance of 48 new readers who had been screening for about a year and had taken part twice in the PERFORMS® scheme was further examined; again, a significant difference in cancer detection was found. These data imply that cancer detection skills are learnt quickly in the first year of screening. Information was also examined concerning the volume of cases participants read and other factors.
You, Hongjian
2018-01-01
Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land-masking strategy, an appropriate model for sea clutter and a neural network as the discrimination scheme. Firstly, a fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model of the Constant False Alarm Rate (CFAR) detector from the K-distribution, Gamma distribution and Rayleigh distribution based on a tradeoff between sea clutter modeling accuracy and computational complexity. Furthermore, in order to better implement CFAR detection, we also use truncated statistics (TS) as a preprocessing scheme and an iterative censoring scheme (ICS) to boost the performance of the detector. Finally, we employ a neural network to re-examine the results in the discrimination stage. Experimental results on three GF-3 SAR images verify the effectiveness and efficiency of this approach. PMID:29364194
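For illustration, a 1-D cell-averaging CFAR under the simplest (exponential intensity, i.e. Rayleigh amplitude) clutter assumption is sketched below; the paper's detector instead selects among K, Gamma and Rayleigh models and adds truncated statistics and iterative censoring.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-6):
    """1-D cell-averaging CFAR on a power (intensity) profile. For each
    cell under test, the clutter level is estimated from 'train' cells on
    each side, skipping 'guard' cells around the test cell."""
    n = 2 * train                              # number of training cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)      # threshold factor for given Pfa
    hits = []
    for i in range(guard + train, len(power) - guard - train):
        left = power[i - guard - train:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = (left.sum() + right.sum()) / n
        if power[i] > alpha * noise:
            hits.append(i)                     # candidate target cell
    return hits
```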