Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection; and (5) error control techniques for ring and star networks.
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat, but rather lie in a chaotic region. However, these data sequences are correlated between past, present, and future data in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data to predict future data under limited weight quantization constraints. This helps predict future information that provides better timely estimates for intelligent control systems. In our earlier work, it was shown that CEP can learn the 5- to 8-bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as little as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization become available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is a learning technique that is implementable in hardware.
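As a minimal sketch of the two quantization techniques compared above (the bit width, range, and function names are our own illustrative choices, not the paper's): round-off maps each weight to the nearest level of an n-bit grid, while truncation rounds toward zero, which makes the error distribution less symmetric around zero.

```python
import numpy as np

def quantize(w, bits=4, w_max=1.0, mode="round"):
    """Quantize weights onto a signed fixed-point grid with 2**bits levels.

    mode="round": round to the nearest level (error roughly symmetric about 0).
    mode="trunc": round toward zero (error biased toward zero from each side).
    """
    step = 2.0 * w_max / (2 ** bits)          # spacing between adjacent levels
    scaled = w / step
    q = np.round(scaled) if mode == "round" else np.trunc(scaled)
    return np.clip(q * step, -w_max, w_max)   # saturate at the representable range

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, 10000)
for mode in ("round", "trunc"):
    err = quantize(w, bits=4, mode=mode) - w
    print(f"{mode}: mean |error| = {np.abs(err).mean():.5f}, "
          f"mean error for w > 0 = {err[w > 0].mean():+.5f}")
```

The signed bias visible in the truncation case is consistent with the asymmetric error surfaces reported above.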
Zainudin, Suhaila; Arif, Shereena M.
2017-01-01
Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods has been inaccurate prediction of cascade motifs. A cascade error is the wrong prediction of a cascade motif, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion of specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted in past studies were not specifically geared towards proving the ability of GRN prediction methods to avoid the occurrence of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRNs from gene expression data and to avoid wrongly inferring an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations in the real experiment datasets was far less than the number of predictors, some predictors were eliminated by extracting random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade errors by using a novel experimental procedure proposed in this work. The experiment revealed that the number of cascade errors was very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity greatly affected the datasets used in this experiment. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5. PMID:28250767
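As a toy sketch of why multiple regression can avoid the cascade error described above (the synthetic data, coefficients, and gene names are invented for illustration): regressing C jointly on A and B assigns the effect to the direct regulator B, whereas a pairwise correlation would wrongly suggest a direct A → C edge.

```python
import numpy as np

# Synthetic cascade A -> B -> C with no direct A -> C edge.
rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=n)
B = 0.9 * A + 0.1 * rng.normal(size=n)
C = 0.8 * B + 0.1 * rng.normal(size=n)

# Multiple linear regression of C on both candidate regulators.
X = np.column_stack([A, B])
coef, *_ = np.linalg.lstsq(X, C, rcond=None)
print(f"coef(A -> C) = {coef[0]:+.3f}, coef(B -> C) = {coef[1]:+.3f}")

# A pairwise method sees a strong A-C correlation (a cascade error in waiting);
# MLR shrinks the A coefficient toward zero once B is included.
print("corr(A, C) =", round(float(np.corrcoef(A, C)[0, 1]), 3))
```

Note that the strong A-B collinearity in this toy example is exactly the kind of condition the Belsley test above is meant to flag.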
Cascade Error Projection: A Learning Algorithm for Hardware Implementation
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1996-01-01
In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm operates on only one layer, whereas the other set of weights can be calculated deterministically. In association with the dynamical stepsize change concept, which converts the weight update from an infinite space into a finite space, the relation between the current stepsize and the previous energy level is also given, and the estimation procedure for the optimal stepsize is used to validate our proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme. Therefore, simplicity in hardware implementation is also obtained. Furthermore, this analysis allows us to select from other methods (such as conjugate gradient descent or Newton's second-order method) one that will be a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis presents the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower-bit weight quantization.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascaded coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Some example schemes are evaluated; they seem to be quite suitable for satellite downlink error control.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Lin, S.
1985-01-01
A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Some example schemes are studied; they seem to be quite suitable for satellite downlink error control.
On-chip learning of hyper-spectral data for real time target recognition
NASA Technical Reports Server (NTRS)
Duong, T. A.; Daud, T.; Thakoor, A.
2000-01-01
In this paper, we use the cascade error projection (CEP) learning algorithm (shown to be hardware-implementable) with an on-chip learning (OCL) scheme to obtain a three-orders-of-magnitude speed-up in target recognition compared with software-based learning schemes. Thus, it is shown that real-time learning as well as data processing for target recognition can be achieved.
Woolf, Steven H.; Kuzel, Anton J.; Dovey, Susan M.; Phillips, Robert L.
2004-01-01
BACKGROUND Notions about the most common errors in medicine currently rest on conjecture and weak epidemiologic evidence. We sought to determine whether cascade analysis is of value in clarifying the epidemiology and causes of errors and whether physician reports are sensitive to the impact of errors on patients. METHODS Eighteen US family physicians participating in a 6-country international study filed 75 anonymous error reports. The narratives were examined to identify the chain of events and the predominant proximal errors. We tabulated the consequences to patients, both reported by physicians and inferred by investigators. RESULTS A chain of errors was documented in 77% of incidents. Although 83% of the errors that ultimately occurred were mistakes in treatment or diagnosis, 2 of 3 were set in motion by errors in communication. Fully 80% of the errors that initiated cascades involved informational or personal miscommunication. Examples of informational miscommunication included communication breakdowns among colleagues and with patients (44%), misinformation in the medical record (21%), mishandling of patients’ requests and messages (18%), inaccessible medical records (12%), and inadequate reminder systems (5%). When asked whether the patient was harmed, physicians answered affirmatively in 43% of cases in which their narratives described harms. Psychological and emotional effects accounted for 17% of physician-reported consequences but 69% of investigator-inferred consequences. CONCLUSIONS Cascade analysis of physicians’ error reports is helpful in understanding the precipitant chain of events, but physicians provide incomplete information about how patients are affected. Miscommunication appears to play an important role in propagating diagnostic and treatment mistakes. PMID:15335130
Cascade control of superheated steam temperature with neuro-PID controller.
Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang
2012-11-01
In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing an error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing a receding horizon window technique. The measurable disturbances in superheated processes are input to the neuro-PID controller in addition to the sequences of tracking error in the outer loop control system; hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergence condition of the neural networks is analyzed. The implementation procedures of the proposed cascade control approach are summarized. Compared with a neuro-PID controller that minimizes a squared error criterion, the proposed neuro-PID controller minimizing the error entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1*l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
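As a worked illustration of the inner-code analysis mentioned above (the code parameters are our own, not the report's): on a binary symmetric channel with crossover probability eps, an inner block code of length n that corrects up to t errors decodes a word correctly whenever at most t of its n bits are flipped, so the probability of correct inner decoding is a binomial tail sum.

```python
from math import comb

def p_correct_decoding(n, t, eps):
    """P(at most t bit errors in an n-bit codeword) on a BSC with crossover eps."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1))

# Illustrative: a length-63 inner code correcting t = 5 errors, at the two
# channel bit-error rates quoted above.
for eps in (1e-1, 1e-2):
    print(f"eps = {eps}: P(correct inner decoding) = "
          f"{p_correct_decoding(63, 5, eps):.6f}")
```

The interleaved outer code then only has to cope with the much rarer inner-decoder failures, which is what drives the extremely low overall error probabilities reported.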
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and the discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory for the error in signal propagation in an enzymatic push-pull network over a certain range of parameters. Inclusion of second-order perturbative corrections shows that the differences between simulations and theoretical predictions are minimized. Our study establishes that a field-theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
NASA Astrophysics Data System (ADS)
Mao, Yaya; Wu, Chongqing; Liu, Bo; Ullah, Rahat; Tian, Feng
2017-12-01
We experimentally investigate, for the first time, the polarization insensitivity and cascadability of an all-optical wavelength converter for differential phase-shift keyed (DPSK) signals. The proposed wavelength converter is composed of a one-bit delay interferometer demodulation stage followed by a single semiconductor optical amplifier. The impact of input DPSK signal polarization fluctuation on the receiver sensitivity for the converted signal is investigated. It is found that this scheme is almost insensitive to the state of polarization of the input DPSK signal. Furthermore, the cascadability of the converter is demonstrated in a two-path recirculating loop. Error-free transmission is achieved with 20-stage cascaded wavelength conversions over 2800 km, where the power penalty is <3.4 dB at a bit error rate of 10^-9.
NASA Astrophysics Data System (ADS)
Li, Xinying; Xiao, Jiangnan
2015-06-01
We propose a novel scheme for optical frequency-locked multi-carrier generation based on one electro-absorption modulated laser (EML) and one phase modulator (PM) in cascade, driven by different sinusoidal radio-frequency (RF) clocks. The optimal operating zone for the cascaded EML and PM is determined based on theoretical analysis and numerical simulation. We experimentally demonstrate that 25 optical subcarriers with a frequency spacing of 12.5 GHz and a power difference of less than 5 dB can be generated by the cascaded EML and PM operating in the optimal zone, which agrees well with the numerical simulation. We also experimentally demonstrate 28-Gbaud polarization division multiplexing quadrature phase shift keying (PDM-QPSK) modulated coherent optical transmission based on the cascaded EML and PM. The bit error ratio (BER) can be below the pre-forward-error-correction (pre-FEC) threshold of 3.8 × 10^-3 after 80-km single-mode fiber-28 (SMF-28) transmission.
Closed Loop Fuzzy Logic Controlled PV Based Cascaded Boost Five-Level Inverter System
NASA Astrophysics Data System (ADS)
Revana, Guruswamy; Kota, Venkata Reddy
2018-04-01
Recent developments in intelligent control methods and power electronics have produced PV-based DC-to-AC converters for AC drives. A cascaded boost converter and an inverter are used to interconnect the PV array and the induction motor. This paper deals with digital simulation and implementation of a closed-loop-controlled five-level inverter based photovoltaic (PV) system. The objective of this work is to reduce the harmonics using a multilevel-inverter-based system. The DC output from the PV panel is boosted using cascaded boost converters. The DC output of these cascaded boost converters is applied to the bridges of the cascaded inverter. The AC output voltage is obtained by series cascading of the output voltages of the two inverters. The investigations are done with an induction motor load. The cascaded boost converter is proposed in the present work to produce the required DC voltage at the input of the bridge inverter. A simple fuzzy logic controller (FLC) is applied to the CBFLIIM system; the FLC is proposed to reduce the steady-state error. The simulation results are compared with the hardware results, and the comparison shows the improvement in dynamic response in terms of settling time and steady-state error. The design procedure and control strategy are presented in detail.
NASA Technical Reports Server (NTRS)
Seasholtz, R. G.
1977-01-01
A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self-contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.
Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
NASA Astrophysics Data System (ADS)
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.
Cascading Influences on the Production of Speech: Evidence from Articulation
ERIC Educational Resources Information Center
McMillan, Corey T.; Corley, Martin
2010-01-01
Recent investigations have supported the suggestion that phonological speech errors may reflect the simultaneous activation of more than one phonemic representation. This presents a challenge for speech error evidence which is based on the assumption of well-formedness, because we may continue to perceive well-formed errors, even when they are not…
Cascaded Microinverter PV System for Reduced Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellus, Daniel R.; Ely, Jeffrey A.
2013-04-29
In this project, a team led by Delphi will develop and demonstrate a novel cascaded photovoltaic (PV) inverter architecture using advanced components. This approach will reduce the cost and improve the performance of medium and large-sized PV systems. The overall project objective is to develop, build, and test a modular 11-level cascaded three-phase inverter building block for photovoltaic applications and to develop and analyze the associated commercialization plan. The system will be designed to utilize photovoltaic panels and will supply power to the electric grid at 208 VAC, 60 Hz 3-phase. With the proposed topology, three inverters, each with an embedded controller, will monitor and control each of the cascade sections, reducing costs associated with extra control boards. This report details the final disposition on this project.
Cascading activation from lexical processing to letter-level processing in written word production.
Buchwald, Adam; Falconer, Carolyn
2014-01-01
Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.
Vanishing Point Extraction and Refinement for Robust Camera Calibration
Tsai, Fuan
2017-01-01
This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
Water Level Prediction of Lake Cascade Mahakam Using Adaptive Neural Network Backpropagation (ANNBP)
NASA Astrophysics Data System (ADS)
Mislan; Gaffar, A. F. O.; Haviluddin; Puspitasari, N.
2018-04-01
Natural hazard information on flood events is indispensable for prevention and mitigation. One such hazard is flooding in the areas around a lake. Therefore, forecasting the lake water level to anticipate flooding is required. The purpose of this paper is to implement a computational intelligence method, namely Adaptive Neural Network Backpropagation (ANNBP), to forecast the water level of Lake Cascade Mahakam. Based on the experiments, the performance of ANNBP indicates that the lake water level predictions are accurate as measured by mean square error (MSE) and mean absolute percentage error (MAPE). In other words, the computational intelligence method can produce good accuracy. Hybrid and optimized computational intelligence methods are the focus of future work.
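For reference, the two error measures quoted above in a minimal implementation of their standard definitions (the lake-level numbers are hypothetical):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

observed  = [12.1, 12.4, 12.9, 13.2]   # hypothetical lake levels (m)
predicted = [12.0, 12.5, 12.7, 13.3]   # hypothetical ANNBP outputs
print(f"MSE = {mse(observed, predicted):.4f}, MAPE = {mape(observed, predicted):.2f}%")
```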
Identification of cascade water tanks using a PWARX model
NASA Astrophysics Data System (ADS)
Mattsson, Per; Zachariah, Dave; Stoica, Petre
2018-06-01
In this paper we consider the identification of a discrete-time nonlinear dynamical model for a cascade water tank process. The proposed method starts with a nominal linear dynamical model of the system, and proceeds to model its prediction errors using a model that is piecewise affine in the data. As data is observed, the nominal model is refined into a piecewise ARX model which can capture a wide range of nonlinearities, such as the saturation in the cascade tanks. The proposed method uses a likelihood-based methodology which adaptively penalizes model complexity and directly leads to a computationally efficient implementation.
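A rough sketch of the two-step idea under our own simplifying assumptions (a first-order ARX nominal model and a two-region piecewise-affine correction split on a single regressor; the paper's likelihood-based complexity penalty and adaptive refinement are omitted):

```python
import numpy as np

def fit_arx(u, y):
    """Least-squares fit of the nominal ARX model y[k] = a*y[k-1] + b*u[k-1]."""
    X = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta  # [a, b]

def fit_pwa_residual(u, y, theta, split):
    """Fit one affine correction per region, split on the regressor y[k-1]."""
    X = np.column_stack([y[:-1], u[:-1]])
    resid = y[1:] - X @ theta
    pieces = {}
    for name, mask in (("low", y[:-1] < split), ("high", y[:-1] >= split)):
        Z = np.column_stack([X[mask], np.ones(mask.sum())])  # affine offset term
        pieces[name], *_ = np.linalg.lstsq(Z, resid[mask], rcond=None)
    return pieces

# Hypothetical data with a saturation-like nonlinearity, as in cascade tanks.
rng = np.random.default_rng(2)
u = rng.uniform(0.0, 1.0, 500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = min(0.9 * y[k - 1] + 0.5 * u[k - 1], 1.0) + 0.01 * rng.normal()

theta = fit_arx(u, y)
pieces = fit_pwa_residual(u, y, theta, split=0.8)
print("nominal ARX:", theta.round(3))
print("piecewise corrections:", {k: v.round(3) for k, v in pieces.items()})
```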
Strain-free Ge/GeSiSn Quantum Cascade Lasers Based on L-Valley Intersubband Transitions
2007-01-01
…found in III-V quantum cascade lasers (QCLs). Various groups have obtained electroluminescence from Si-rich Si/SiGe quantum cascade structures, but… The authors propose a Ge/Ge0.76Si0.19Sn0.05 quantum cascade laser using intersubband transitions at the L valleys of the conduction band.
Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters
Park, Chan Gook
2018-01-01
An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
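A grossly simplified scalar sketch of the cascaded idea (the actual TCKF runs full INS-EKF-ZUPT state vectors; the filter tuning and measurement values below are invented): stage one estimates the course-angle error from course measurements, and stage two consumes that estimate as a measurement of the heading error.

```python
import numpy as np

def kf_step(x, P, z, R=0.1, Q=0.01):
    """One predict/update step of a scalar Kalman filter on a random-walk state."""
    P = P + Q                      # predict: state modeled as a slow random walk
    K = P / (P + R)                # Kalman gain (measurement matrix H = 1)
    x = x + K * (z - x)            # update with measurement z
    P = (1.0 - K) * P
    return x, P

# Stage 1: course-angle error measurements, e.g. magnetic course minus
# position-trace course (hypothetical values, in radians).
z1 = np.deg2rad([2.1, 1.8, 2.4, 2.0])
x1, P1 = 0.0, 1.0
for z in z1:
    x1, P1 = kf_step(x1, P1, z)

# Stage 2: the stage-1 estimate becomes a heading-error measurement,
# with the stage-1 covariance as its measurement noise.
x2, P2 = kf_step(0.0, 1.0, x1, R=P1)
print(f"course-angle error ~ {np.rad2deg(x1):.2f} deg, "
      f"heading error ~ {np.rad2deg(x2):.2f} deg")
```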
NASA Astrophysics Data System (ADS)
Chien, Pie-Yau; Chao, Chen-Hsing
1993-03-01
An optical phase-locked loop system based on a triangular phase-modulated cascaded Mach-Zehnder modulator is demonstrated. A reference oscillator of 10 MHz is multiplied such that it can be used to lock a target oscillator of 120 MHz. A phase error of Δθe ≤ 2.0 × 10^-4 rad/√Hz has been achieved in this system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
The goal of the DTRA project is to develop a mathematical framework that will provide the fundamental understanding of network survivability, algorithms for detecting/inferring pre-cursors of abnormal network behaviors, and methods for network adaptability and self-healing from cascading failures.
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e., area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference to only the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.
Cascading failure in the wireless sensor scale-free networks
NASA Astrophysics Data System (ADS)
Liu, Hao-Ran; Dong, Ming-Ru; Yin, Rong-Rong; Han, Li
2015-05-01
In practical wireless sensor networks (WSNs), the cascading failure caused by a failed node has a serious impact on network performance. In this paper, we investigate in depth the cascading failure of scale-free topologies in WSNs. First, a cascading failure model for scale-free topologies in WSNs is studied. By analyzing the influence of the node load on cascading failure, the critical load triggering large-scale cascading failure is obtained. Then, based on the critical load, a control method for cascading failure is presented. In addition, simulation experiments are performed to validate the effectiveness of the control method. The results show that the control method can effectively prevent cascading failure. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. F2014203239), the Autonomous Research Fund of Young Teacher in Yanshan University (Grant No. 14LGB017), and the Yanshan University Doctoral Foundation, China (Grant No. B867).
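A generic load-redistribution sketch of a cascading failure (in the spirit of standard capacity-load cascade models; this is not the paper's specific WSN model, and all parameters are illustrative): nodes whose load exceeds capacity fail and shed their load onto surviving neighbors, possibly triggering further failures, and raising the capacity margin above a critical value stops the cascade.

```python
import numpy as np

def run_cascade(load, capacity, adjacency):
    """Fail overloaded nodes and redistribute their load until none is overloaded."""
    alive = np.ones(len(load), dtype=bool)
    while True:
        overloaded = alive & (load > capacity)
        if not overloaded.any():
            return alive
        for i in np.flatnonzero(overloaded):
            nbrs = np.flatnonzero(adjacency[i] & alive)
            nbrs = nbrs[nbrs != i]
            if len(nbrs):
                load[nbrs] += load[i] / len(nbrs)  # shed load onto live neighbors
            alive[i] = False

rng = np.random.default_rng(3)
n = 50
adjacency = rng.random((n, n)) < 0.1
adjacency |= adjacency.T                 # undirected links
load = rng.uniform(0.5, 1.0, n)
for margin in (1.1, 1.5):                # capacity = margin * initial load
    trial_load = load.copy()
    trial_load[0] = 10.0                 # overload one node to trigger the cascade
    alive = run_cascade(trial_load, margin * load, adjacency)
    print(f"margin {margin}: {int(alive.sum())} of {n} nodes survive")
```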
The cascade high productivity language
NASA Technical Reports Server (NTRS)
Callahan, David; Chamberlain, Branford L.; Zima, Hans P.
2004-01-01
This paper describes the design of Chapel, the Cascade High Productivity Language, which is being developed in the DARPA-funded HPCS project Cascade led by Cray Inc. Chapel pushes the state of the art in languages for HEC system programming by focusing on productivity, in particular by combining the goal of highest possible object code performance with the programmability offered by a high-level user interface.
Pang, Xiaodan; Ozolins, Oskars; Schatz, Richard; Storck, Joakim; Udalcovs, Aleksejs; Navarro, Jaime Rodrigo; Kakkar, Aditya; Maisons, Gregory; Carras, Mathieu; Jacobsen, Gunnar; Popov, Sergei; Lourdudoss, Sebastian
2017-09-15
Gigabit free-space transmissions are experimentally demonstrated with a quantum cascade laser (QCL) emitting at a mid-wavelength infrared of 4.65 μm and a commercial infrared photovoltaic detector. The QCL operating at room temperature is directly modulated using on-off keying and, for the first time to the best of our knowledge, four- and eight-level pulse amplitude modulations (PAM-4, PAM-8). By applying pre- and post-digital equalization, we achieve up to 3 Gbit/s line data rate in all three modulation configurations with a bit error rate performance below the 7% overhead hard-decision forward error correction limit of 3.8×10^-3. The proposed transmission link also shows stable operational performance in the lab environment.
NASA Astrophysics Data System (ADS)
Foresti, Loris; Reyniers, Maarten; Delobbe, Laurent
2014-05-01
The Short-Term Ensemble Prediction System (STEPS) is a probabilistic precipitation nowcasting scheme developed at the Australian Bureau of Meteorology in collaboration with the UK Met Office. In order to account for the multiscaling nature of rainfall structures, the radar field is decomposed into an 8-level multiplicative cascade using a Fast Fourier Transform. The cascade is advected using the velocity field estimated with optical flow and evolves stochastically according to a hierarchy of auto-regressive processes. This allows reproducing the empirical observation that the small scales evolve faster in time than the large scales. The uncertainty in radar rainfall measurement and the unknown future development of the velocity field are also considered by stochastic modelling in order to reflect their typical spatial and temporal variability. Recently, a 4-year national research program was initiated by the University of Leuven, the Royal Meteorological Institute (RMI) of Belgium and 3 other partners: PLURISK ("forecasting and management of extreme rainfall induced risks in the urban environment"). The project deals with the nowcasting of rainfall and subsequent urban inundations, as well as socio-economic risk quantification, communication, warning and prevention. At the urban scale it is widely recognized that the uncertainty of hydrological and hydraulic models is largely driven by the input rainfall estimation and forecast uncertainty. In support of the PLURISK project, the RMI aims at integrating STEPS into the current operational deterministic precipitation nowcasting system INCA-BE (Integrated Nowcasting through Comprehensive Analysis). This contribution illustrates examples of STEPS ensemble and probabilistic nowcasts for a few selected case studies of stratiform and convective rain in Belgium. The paper focuses on the development of STEPS products for potential hydrological users and a preliminary verification of the nowcasts, especially to analyze the spatial distribution of forecast errors. The analysis of nowcast biases reveals the locations where convective initiation, rainfall growth and decay processes significantly reduce the forecast accuracy, but also points out the need for improving the radar-based quantitative precipitation estimation product that is used both to generate and verify the nowcasts. The collection of fields of verification statistics is implemented using an online update strategy, which potentially enables the system to learn from forecast errors as the archive of nowcasts grows. The study of the spatial or temporal distribution of nowcast errors is a key step to convey to the users an overall estimation of the nowcast accuracy and to drive future model developments.
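A schematic illustration of the FFT-based cascade decomposition described above, assuming octave-spaced annular bands in the 2-D Fourier domain (the exact band definitions and weighting used in STEPS differ; this is our own minimal sketch):

```python
import numpy as np

def fft_cascade_levels(field, n_levels=8):
    """Split a 2-D field into n_levels spectral bands, largest scales first."""
    F = np.fft.fftshift(np.fft.fft2(field))
    ny, nx = field.shape
    y, x = np.ogrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    radius = np.hypot(x, y)               # distance from the zero frequency
    r_max = radius.max()
    # Octave-spaced band edges; the last edge exceeds r_max so nothing is lost.
    edges = ([0.0]
             + [r_max / 2 ** (n_levels - 1 - k) for k in range(n_levels - 1)]
             + [r_max + 1.0])
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.where((radius >= lo) & (radius < hi), F, 0)
        levels.append(np.real(np.fft.ifft2(np.fft.ifftshift(band))))
    return levels                          # summing the levels recovers the field

field = np.random.default_rng(4).gamma(2.0, size=(64, 64))  # stand-in radar image
levels = fft_cascade_levels(field)
print("levels:", len(levels),
      "| reconstruction error:", float(np.abs(sum(levels) - field).max()))
```

In STEPS each such level is then advected and evolved with its own auto-regressive parameters, faster for the small-scale levels than for the large-scale ones.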
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-28
... DEPARTMENT OF AGRICULTURE Forest Service Eastern Washington Cascades Provincial Advisory Committee and the Yakima Provincial Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting... Chief's 10-Year Stewardship Challenge, Upper Yakima Basin Water Enhancement Project, Holden Mine...
ERIC Educational Resources Information Center
Thorson, Annette, Ed.
1999-01-01
This issue of ENC Focus focuses on the topic of inquiry and problem solving. Featured articles include: (1) "Inquiry in the Everyday World of Schools" (Ronald D. Anderson); (2) "In the Cascade Reservoir Restoration Project Students Tackle Real-World Problems" (Clint Kennedy with Advanced Biology Students from Cascade High…
Free Space Optical Communication Utilizing Mid-Infrared Interband Cascade Laser
NASA Technical Reports Server (NTRS)
Soibel, A.; Wright, M.; Farr, W.; Keo, S.; Hill, C.; Yang, R. Q.; Liu, H. C.
2010-01-01
A free-space optical (FSO) link utilizing mid-IR interband cascade lasers has been demonstrated in the 3-5 micron atmospheric transmission window with data rates up to 70 Mb/s and a bit-error rate (BER) of less than 10^-8. The performance of the mid-IR FSO link has been compared with the performance of a near-IR link under various fog conditions using an indoor communication testbed. These experiments demonstrated that a mid-IR FSO link suffers lower attenuation and scintillation through fog than a 1550 nm FSO link.
Jung, Sang Min; Mun, Kyoung Hak; Kang, Soo Min; Han, Sang Kook
2017-09-18
An optical signal suppression technique based on a cascaded SOA and RSOA is proposed for the reflective passive optical networks (PONs) with wavelength division multiplexing (WDM). By suppressing the downstream signal of the optical carrier, the proposed reflective PON effectively reuses the downstream optical carrier for upstream signal transmission. As an experimental demonstration, we show that the proposed optical signal suppression technique is effective in terms of the signal bandwidth and bit-error-rate (BER) performance of the remodulated upstream transmission.
INCAS: an analytical model to describe displacement cascades
NASA Astrophysics Data System (ADS)
Jumel, Stéphanie; Claude Van-Duysen, Jean
2004-07-01
REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that the INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.
Wallis, R; Degl'Iinnocenti, R; Jessop, D S; Ren, Y; Klimont, A; Shah, Y D; Mitrofanov, O; Bledt, C M; Melzer, J E; Harrington, J A; Beere, H E; Ritchie, D A
2015-10-05
The growth in terahertz frequency applications utilising the quantum cascade laser is hampered by a lack of targeted power delivery solutions over large distances (>100 mm). Here we demonstrate the efficient coupling of double-metal quantum cascade lasers into flexible polystyrene lined hollow metallic waveguides via the use of a hollow copper waveguide integrated into the laser mounting block. Our approach exhibits low divergence, Gaussian-like emission, which is robust to misalignment error, at distances > 550 mm, with a coupling efficiency from the hollow copper waveguide into the flexible waveguide > 90%. We also demonstrate the ability to nitrogen purge the flexible waveguide, increasing the power transmission by up to 20% at 2.85 THz, which paves the way for future fibre based terahertz sensing and spectroscopy applications.
NASA Astrophysics Data System (ADS)
Begum, A. Yasmine; Gireesh, N.
2018-04-01
In a superheater, the steam temperature is controlled in a cascade control loop. The cascade control loop consists of PI and PID controllers. To improve superheated steam temperature control, the controller gains in the cascade control loop have to be tuned efficiently. The mathematical model of the superheater is given by sets of nonlinear partial differential equations. The tuning methods studied here are designed for a first-order-plus-time-delay (FOPTD) transfer function model. Hence, from the dynamical model of the superheater, a FOPTD model is derived using the frequency response method. Then, using the Chien-Hrones-Reswick tuning algorithm and the gain-phase assignment algorithm, optimum controller gains are found based on the least value of the integral of time-weighted absolute error (ITAE).
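For reference, a minimal sketch of one common Chien-Hrones-Reswick table (setpoint tracking, 0% overshoot) applied to a FOPTD model G(s) = K·e^(-Ls)/(Ts + 1); the FOPTD numbers below are hypothetical and the authors may use a different CHR variant:

```python
def chr_pid_setpoint_0(K, T, L):
    """CHR PID tuning, setpoint-tracking / 0% overshoot column of the table."""
    Kp = 0.6 * T / (K * L)   # proportional gain
    Ti = T                   # integral time
    Td = 0.5 * L             # derivative time
    return Kp, Ti, Td

# Hypothetical FOPTD fit for a superheater stage: gain 1.2, T = 90 s, L = 30 s.
Kp, Ti, Td = chr_pid_setpoint_0(K=1.2, T=90.0, L=30.0)
print(f"Kp = {Kp:.3f}, Ti = {Ti:.1f} s, Td = {Td:.1f} s")
```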
GHz Modulation of GaAs-Based Bipolar Cascade VCSELs (Preprint)
2006-11-01
VCSELs were grown on n+ GaAs substrates by molecular beam epitaxy. The laser cavities consist of 1-, 2-, or 3-stage 52λ microcavities, each containing… (Authors: W.J. Siskaninetz, R.G. Bedford, T.R. Nelson, Jr., J.E…; report AFRL-SN-WP-TP-2006-128)
Enhancing GTEx by bridging the gaps between genotype, gene expression, and disease.
2017-12-01
Genetic variants have been associated with myriad molecular phenotypes that provide new insight into the range of mechanisms underlying genetic traits and diseases. Identifying any particular genetic variant's cascade of effects, from molecule to individual, requires assaying multiple layers of molecular complexity. We introduce the Enhancing GTEx (eGTEx) project that extends the GTEx project to combine gene expression with additional intermediate molecular measurements on the same tissues to provide a resource for studying how genetic differences cascade through molecular phenotypes to impact human health.
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of reservoir deterministic optimal operation can improve the utilization of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was regarded as a variable in developing a short-term reservoir optimal operation model for reducing operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, which presents more flexible options for decision makers; the highest assurance rate can reach 99%, much higher than that without considering output error, 48%. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
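As a minimal sketch of the VaR concept used above (the loss sample is invented; in the paper, losses come from simulated output errors of the generation schedule): the empirical VaR at confidence level alpha is simply the alpha-quantile of the loss distribution.

```python
import numpy as np

def value_at_risk(losses, alpha=0.95):
    """Empirical VaR: the loss level that is not exceeded with probability alpha."""
    return float(np.quantile(np.asarray(losses, dtype=float), alpha))

# Hypothetical generation-shortfall losses (MWh) from inflow-forecast errors.
rng = np.random.default_rng(5)
losses = rng.gamma(shape=2.0, scale=15.0, size=10_000)
print(f"95% VaR of the generation shortfall: {value_at_risk(losses, 0.95):.1f} MWh")
```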
User-Defined Data Distributions in High-Level Programming Languages
NASA Technical Reports Server (NTRS)
Diaconescu, Roxana E.; Zima, Hans P.
2006-01-01
One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
Lu, Guo-Wei; Shinada, Satoshi; Furukawa, Hideaki; Wada, Naoya; Miyazaki, Tetsuya; Ito, Hiromasa
2010-03-15
We experimentally demonstrated ultra-fast phase-transparent wavelength conversion using cascaded sum- and difference-frequency generation (cSFG-DFG) in linear-chirped periodically poled lithium niobate (PPLN). Error-free wavelength conversion of a 160-Gb/s return-to-zero differential phase-shift keying (RZ-DPSK) signal was successfully achieved. Thanks to the enhanced conversion bandwidth of the PPLN with linear-chirped periods, no optical equalizer was required to compensate for the spectral distortion after conversion, unlike a previous demonstration of 160-Gb/s RZ on-off keying (OOK) using fixed-period PPLN.
Puerto, G; Ortega, B; Manzanedo, M D; Martínez, A; Pastor, D; Capmany, J; Kovacs, G
2006-10-30
This paper describes both experimental and theoretical investigations of the cascadability of all-optical routers in optical label swapping networks incorporating multistage wavelength conversion with 2R regeneration. A full description of a novel experimental setup allows packet-by-packet measurement of up to 16 hops with a 10 Gb/s payload, showing a 1 dB penalty at a 10^-12 bit error rate. Similarly, simulations of the system allow a prediction of the cascadability of the router up to 64 hops.
The Cascades of the US Pacific Northwest are a climatically sensitive area. Projections of continued winter warming in this area are expected to induce a switch from a snow-dominated to a rain-dominated winter precipitation regime with a likely impact on subsurface thermal and h...
The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.
The role of moist processes and the possibility of error cascade from cloud scale processes affecting the intrinsic predictable time scale of a high resolution convection permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to the larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in cloud permitting simulation at 3.3 km resolutions compared to simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the needs for probabilistic forecasts of these events, even at the mesoscales.
NASA Astrophysics Data System (ADS)
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprised a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction interact at a catchment level and propagate to an estimated inundation area and depth. For this, a hindcast scenario is utilised, removing non-behavioural ensemble members at each stage based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was set up following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g., rain gauges, discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input to a distributed hydrological model, and the resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than that observed in estimated flood inundation extents.
NASA Astrophysics Data System (ADS)
Tang, Shaojie; Tang, Xiangyang
2016-03-01
Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan-beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. Unfortunately, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).
Multidimensional phase space methods for mass measurements and decay topology determination
NASA Astrophysics Data System (ADS)
Altunkaynak, Baris; Kilic, Can; Klimek, Matthew D.
2017-02-01
Collider events with multi-stage cascade decays fill out the kinematically allowed region in phase space with a density that is enhanced at the boundary. The boundary encodes all available information as regards the spectrum and is well populated even with moderate signal statistics due to this enhancement. In previous work, the improvement in the precision of mass measurements for cascade decays with three visible and one invisible particles was demonstrated when the full boundary information is used instead of endpoints of one-dimensional projections. We extend these results to cascade decays with four visible and one invisible particles. We also comment on how the topology of the cascade decay can be determined from the differential distribution of events in these scenarios.
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Goldman, L. J.; Seasholtz, R. G.; Mclallin, K. L.
1976-01-01
A laser Doppler velocimeter (LDV) was used to determine the flow conditions downstream of an annular cascade of stator blades operating at an exit critical velocity ratio of 0.87. Two modes of LDV operation (continuous scan and discrete point) were investigated. Conventional pressure probe measurements were also made for comparison with the LDV results. Biasing errors that occur in the LDV measurement of velocity components were also studied. In addition, the effect of pressure probe blockage on the flow conditions was determined with the LDV. Photographs and descriptions of the test equipment used are given.
Quadrotor trajectory tracking using PID cascade control
NASA Astrophysics Data System (ADS)
Idres, M.; Mustapha, O.; Okasha, M.
2017-12-01
Quadrotors have been applied to collect information for traffic and weather monitoring, surveillance and aerial photography. In order to accomplish their mission, quadrotors have to follow specific trajectories. This paper presents proportional-integral-derivative (PID) cascade control of a quadrotor for the path-tracking problem when velocity and acceleration are small. It is based on a near-hover controller for small attitude angles. The integral of time-weighted absolute error (ITAE) criterion is used to determine the PID gains as a function of quadrotor modeling parameters. The controller is evaluated in a three-dimensional environment in Simulink. Overall, the tracking performance is found to be excellent for the small-velocity condition.
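A minimal sketch of the cascade structure, assuming a generic outer position loop feeding an inner attitude loop (the gains and loop variables are illustrative, not the paper's ITAE-tuned values):

```python
class PID:
    """Textbook PID controller (illustrative, not the paper's design)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Cascade: the outer position loop commands an attitude setpoint; the
# inner attitude loop commands a control moment (near-hover assumption,
# so small angles map roughly linearly to horizontal acceleration).
outer = PID(kp=1.2, ki=0.05, kd=0.4)   # hypothetical gains, not the ITAE values
inner = PID(kp=8.0, ki=0.10, kd=1.5)

def cascade_step(pos_ref, pos, att, dt):
    att_ref = outer.update(pos_ref - pos, dt)   # outer output = inner setpoint
    moment = inner.update(att_ref - att, dt)    # inner loop tracks attitude
    return moment
```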
Climate change impacts on maritime mountain snowpack in the Oregon Cascades
E. Sproles; A.W. Nolin; K. Rittger; T.H. Painter
2013-01-01
This study investigates the effect of projected temperature increases on maritime mountain snowpack in the McKenzie River Basin (MRB; 3041 km²) in the Cascades Mountains of Oregon, USA. We simulated the spatial distribution of snow water equivalent (SWE) in the MRB for the period of 1989–2009 with SnowModel, a spatially distributed, process-based...
Cascade Distillation System Development
NASA Technical Reports Server (NTRS)
Callahan, Michael R.; Sargushingh, Miriam; Shull, Sarah
2014-01-01
NASA's Advanced Exploration Systems (AES) Life Support System (LSS) Project is chartered with developing advanced life support systems that will enable NASA human exploration beyond low Earth orbit (LEO). The goal of AES is to increase the affordability of long-duration life support missions, and to reduce the risk associated with integrating and infusing new enabling technologies required to ensure mission success. Because of the robust nature of distillation systems, the AES LSS Project is pursuing development of the Cascade Distillation Subsystem (CDS) as part of its technology portfolio. Currently, the system is being developed into a flight forward Generation 2.0 design.
Three-dimensional Navier-Stokes analysis of turbine passage heat transfer
NASA Technical Reports Server (NTRS)
Ameri, Ali A.; Arnone, Andrea
1991-01-01
The three-dimensional Reynolds-averaged Navier-Stokes equations are numerically solved to obtain the pressure distribution and heat transfer rates on the endwalls and the blades of two linear turbine cascades. The TRAF3D code, which was recently developed in a joint project between researchers from the University of Florence and the NASA Lewis Research Center, is used. The effect of turbulence is taken into account by using the eddy viscosity hypothesis and the two-layer mixing length model of Baldwin and Lomax. Predictions of surface heat transfer are made for Langston's cascade and compared with the data obtained for that cascade by Graziani. The comparison was found to be favorable. The code is also applied to a linear transonic rotor cascade to predict the pressure distributions and heat transfer rates.
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
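A sketch of the strategy just described, in which several optimizers run in sequence and the design variables are perturbed pseudorandomly between stages; SciPy's general-purpose minimizers stand in for the optimizers in the study (an assumption, since the paper's optimizers are not named here):

```python
import numpy as np
from scipy.optimize import minimize

def cascade_optimize(objective, x0, methods=("Nelder-Mead", "Powell", "BFGS"),
                     perturb=0.01, seed=0):
    """Run several optimizers one after another in a specified sequence,
    pseudorandomly perturbing the design variables between stages."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    result = None
    for i, method in enumerate(methods):
        result = minimize(objective, x, method=method)
        x = result.x
        if i < len(methods) - 1:                 # perturb only between stages
            x = x * (1.0 + perturb * rng.standard_normal(x.size))
    return result

# Example: stage-wise progress on the Rosenbrock function.
# rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
# best = cascade_optimize(rosen, x0=[-1.2, 1.0])
```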
Intrinsic coincident linear polarimetry using stacked organic photovoltaics.
Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W
2016-06-27
Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three-cell OPV Stokes polarimeter capable of measuring incident linear polarization states. Our results indicate a polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
Bortolus, Alejandro
2008-03-01
Why do ecologists seem to underestimate the consequences of using bad taxonomy? Is it because the consequences of doing so have not been yet scrutinized well enough? Is it because these consequences are irrelevant? In this paper I examine and discuss these questions, focusing on the fact that because ecological works provide baseline information for many other biological disciplines, they play a key role in spreading and magnifying the abundance of a variety of conceptual and methodological errors. Although overlooked and underestimated, this cascade-like process originates from trivial taxonomical problems that affect hypotheses and ideas, but it soon shifts into a profound practical problem affecting our knowledge about nature, as well as the ecosystem structure and functioning and the efficiency of human health care programs. In order to improve the intercommunication among disciplines, I propose a set of specific requirements that peer reviewed journals should request from all authors, and I also advocate for urgent institutional and financial support directed at reinvigorating the formation of scientific collections that integrate taxonomy and ecology.
Islam, Mohammad Tariqul; Tanvir Ahmed, Sk.; Zabir, Ishmam; Shahnaz, Celia
2018-01-01
Photoplethysmographic (PPG) signals are gaining popularity for monitoring heart rate in wearable devices because of the simplicity of construction and low cost of the sensor. The task becomes very difficult due to the presence of various motion artefacts. In this study, an algorithm based on a cascade and parallel combination (CPC) of adaptive filters is proposed in order to reduce the effect of motion artefacts. First, preliminary noise reduction is performed by averaging two-channel PPG signals. Next, in order to reduce the effect of motion artefacts, a cascaded filter structure consisting of three cascaded adaptive filter blocks is developed, where three-channel accelerometer signals are used as references for the motion artefacts. To further reduce the effect of noise, a scheme based on a convex combination of two such cascaded adaptive noise cancelers is introduced, where two widely used adaptive filters, namely recursive least squares and least mean squares filters, are employed. Heart rates are estimated from the noise-reduced PPG signal in the spectral domain. Finally, an efficient heart rate tracking algorithm is designed based on the nature of heart rate variability. The performance of the proposed CPC method is tested on a widely used public database. It is found that the proposed method offers very low estimation error and smooth heart rate tracking with a simple algorithmic approach. PMID:29515812
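A minimal sketch of the cascade half of the scheme, assuming plain LMS stages driven by the three accelerometer axes (the convex combination of LMS and RLS branches is omitted; filter order and step size are illustrative):

```python
import numpy as np

def lms_cancel(signal, reference, order=8, mu=0.01):
    """One adaptive noise-canceler stage: predict the motion artefact
    from a single accelerometer axis and subtract it from the PPG."""
    w = np.zeros(order)
    out = np.array(signal, dtype=float)
    for n in range(order, len(signal)):
        x = reference[n - order:n][::-1]
        e = signal[n] - w @ x        # error = artefact-reduced sample
        w += 2.0 * mu * e * x        # LMS weight update
        out[n] = e
    return out

def cascaded_canceler(ppg, acc_xyz):
    """Three cascaded stages, one per accelerometer axis. The convex
    combination of an LMS branch and an RLS branch is omitted here."""
    s = np.array(ppg, dtype=float)
    for axis in acc_xyz:             # acc_xyz: three reference channels
        s = lms_cancel(s, axis)
    return s
```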
Higgs production via gluon fusion in k_T factorisation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hautmann, F.; Jung, H.; Pandis, V.
2011-07-15
Theoretical studies of Higgs production via gluon fusion are frequently carried out in the limit where the top quark mass is much larger than the Higgs mass, an approximation which reduces the top quark loop to an effective vertex. We present a numerical analysis of the error thus introduced by performing a Monte Carlo calculation for gg → h in k_T-factorisation, using the parton shower generator CASCADE. By examining both inclusive and exclusive quantities, we find that retaining the top-mass dependence results in only a small enhancement of the cross-section. We then proceed to compare CASCADE to the collinear Monte Carlos PYTHIA, MC@NLO and POWHEG.
Verduzco-Flores, Sergio O; O'Reilly, Randall C
2015-01-01
We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations.
Synthetic Air Data Estimation: A case study of model-aided estimation
NASA Astrophysics Data System (ADS)
Lie, F. Adhika Pradipta
A method for estimating airspeed, angle of attack, and sideslip without using a conventional pitot-static air data system is presented. The method relies on measurements from GPS, an inertial measurement unit (IMU) and a low-fidelity model of the aircraft's dynamics, which are fused using two cascaded Extended Kalman Filters. In the cascaded architecture, the first filter uses information from the IMU and GPS to estimate the aircraft's absolute velocity and attitude. These estimates are used as the measurement updates for the second filter, where they are fused with the aircraft dynamics model to generate estimates of airspeed, angle of attack and sideslip. Methods for dealing with the time and inter-state correlation in the measurements coming from the first filter are discussed. Simulation and flight test results of the method are presented. Simulation results using a high-fidelity nonlinear model show that airspeed, angle of attack, and sideslip angle estimation errors are less than 0.5 m/s, 0.1 deg, and 0.2 deg RMS, respectively. Factors that affect the accuracy, including the implication and impact of using a low-fidelity aircraft model, are discussed. It is shown using flight tests that a single linearized aircraft model can be used in lieu of a high-fidelity, non-linear model to provide reasonably accurate estimates of airspeed (less than 2 m/s error), angle of attack (less than 3 deg error), and sideslip angle (less than 5 deg error). This performance is shown to be relatively insensitive to off-trim attitudes but very sensitive to off-trim velocity.
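A bare-bones sketch of the cascaded-filter idea, using a minimal linear Kalman filter as a stand-in for the paper's EKFs (the matrices, dimensions, and the R-inflation remark are illustrative assumptions):

```python
import numpy as np

class KF:
    """Minimal linear Kalman filter (stand-in for the EKFs in the paper)."""
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = x0, P0

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x

# Usage sketch (1-D toy): the navigation filter estimates velocity from
# GPS; its output then serves as the "measurement" of the air-data filter.
# Inflating R in the second filter is one crude way to acknowledge the
# correlation in measurements coming from the first filter.
nav = KF(F=np.eye(1), H=np.eye(1), Q=np.eye(1) * 1e-3, R=np.eye(1) * 0.5,
         x0=np.zeros(1), P0=np.eye(1))
air = KF(F=np.eye(1), H=np.eye(1), Q=np.eye(1) * 1e-3, R=np.eye(1) * 1.0,
         x0=np.zeros(1), P0=np.eye(1))
v_est = nav.step(np.array([10.2]))   # GPS velocity measurement
airspeed = air.step(v_est)           # cascaded: stage-1 estimate feeds stage 2
```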
NASA Astrophysics Data System (ADS)
Rudaz, Benjamin; Loye, Alexandre; Mazotti, Benoit; Bardou, Eric; Jaboyedoff, Michel
2013-04-01
The Materosion project, conducted between the Swiss canton of Valais (CREALP) and the University of Lausanne (CRET), aims at forecasting sediment transfer in alpine torrents using the sediment cascade concept. The study site is the high Anniviers valley, around the village of Zinal (Valais). The torrents are divided into homogeneous reaches, to and from which sediments are transported by debris flows and bedload transport events. The model runs simulations of 100 years, with a 1-month time step, each with a random meteorological event ranging from no activity up to high-magnitude debris flows. These events are calibrated using local rain data and observed corresponding debris flow frequencies. The model is applied to ten torrent systems with variable geological context, watershed geometries and sediment supplies. Given the high number of possible event scenarios, 10'000 simulations per torrent are performed, giving a statistical distribution of cumulated volumes and an event size distribution. A way to visualize the complex results data is proposed, and a back-analysis of the internal sediment cascade dynamic is performed. The back-analysis shows that the results' distribution stabilizes after ~5'000 simulations. The model results, especially the range of debris flow volumes, are crucial to maintain mitigation measures such as retention dams, and give clues for future sediment cascade modeling.
Eddy-driven low-frequency variability: physics and observability through altimetry
NASA Astrophysics Data System (ADS)
Penduff, Thierry; Sérazin, Guillaume; Arbic, Brian; Mueller, Malte; Richman, James G.; Shriver, Jay F.; Morten, Andrew J.; Scott, Robert B.
2015-04-01
Model studies have revealed the propensity of the eddying ocean circulation to generate strong low-frequency variability (LFV) intrinsically, i.e. without low-frequency atmospheric variability. In the present study, gridded satellite altimeter products, idealized quasi-geostrophic (QG) turbulent simulations, and realistic high-resolution global ocean simulations are used to study the spontaneous tendency of mesoscale (relatively high frequency and high wavenumber) kinetic energy to non-linearly cascade towards larger time and space scales. The QG model reveals that large-scale variability, arising from the well-known spatial inverse cascade, is associated with low frequencies. Low-frequency, low-wavenumber energy is maintained primarily by nonlinearities in the QG model, with forcing (by large-scale shear) and friction playing secondary roles. In realistic simulations, nonlinearities also generally drive kinetic energy to low frequencies and low wavenumbers. In some, but not all, regions of the gridded altimeter product, surface kinetic energy is also found to cascade toward low frequencies. Exercises conducted with the realistic model suggest that the spatial and temporal filtering inherent in the construction of gridded satellite altimeter maps may contribute to the discrepancies seen in some regions between the direction of frequency cascade in models versus gridded altimeter maps. Finally, the range of frequencies that are highly energized and engaged in these cascades appears much greater than the range of highly energized and engaged wavenumbers. Global eddying simulations, performed in the context of the CHAOCEAN project in collaboration with the CAREER project, provide estimates of the range of timescales that these oceanic nonlinearities are likely to feed without external variability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
BEETIT Project: Battelle is developing a new air conditioning system that uses a cascade reverse osmosis (RO)-based absorption cycle. Analyses show that this new cycle can be as much as 60% more efficient than vapor compression, which is used in 90% of air conditioners. Traditional vapor-compression systems use polluting liquids for a cooling effect. Absorption cycles use benign refrigerants such as water, which is absorbed in a salt solution and pumped as liquid—replacing compression of vapor. The refrigerant is subsequently separated from the absorbing salt using heat for re-use in the cooling cycle. Battelle is replacing thermal separation of refrigerant with a more efficient reverse osmosis process. Research has shown that the cycle is possible, but further investment will be needed to reduce the number of cascade reverse osmosis stages and therefore cost.
Frequency and Phase-lock Control of a 3 THz Quantum Cascade Laser
NASA Technical Reports Server (NTRS)
Betz, A. L.; Boreiko, R. T.; Williams, B. S.; Kumar, S.; Hu, Q.; Reno, J. L.
2005-01-01
We have locked the frequency of a 3 THz quantum cascade laser (QCL) to that of a far-infrared gas laser with a tunable microwave offset frequency. The locked QCL line shape is essentially Gaussian, with linewidths of 65 and 141 kHz at the -3 and -10 dB levels, respectively. The lock condition can be maintained indefinitely, without requiring temperature or bias current regulation of the QCL other than that provided by the lock error signal. The result demonstrates that a terahertz QCL can be frequency controlled with 1-part-in-10(exp 8) accuracy, which is a factor of 100 better than that needed for a local oscillator in a heterodyne receiver for atmospheric and astronomical spectroscopy.
Special cascade LMS equalization scheme suitable for 60-GHz RoF transmission system.
Liu, Siming; Shen, Guansheng; Kou, Yanbin; Tian, Huiping
2016-05-16
We design a specific cascade least mean square (LMS) equalizer; to the best of our knowledge, this is the first time such an equalizer has been employed for a 60-GHz millimeter-wave (mm-wave) radio over fiber (RoF) system. The proposed cascade LMS equalizer consists of two sub-equalizers designated for optical and wireless channel compensation, respectively. We control the linear and nonlinear factors originating from the optical link and the wireless link separately. The cascade equalization scheme can keep the nonlinear distortions of the RoF system at a low level. We theoretically and experimentally investigate the parameters of the two sub-equalizers to reach their best performance. The experiment results show that the cascade equalization scheme has a faster convergence speed. It needs a training sequence with a length of 10,000 to reach its stable status, which is only half as long as the traditional LMS equalizer needs. With the proposed equalizer, the 60-GHz RoF system can successfully transmit a 5-Gbps BPSK signal over 10-km fiber and a 1.2-m wireless link under the forward error correction (FEC) limit of 10^-3. An improvement of 4 dBm and 1 dBm in power sensitivity at a BER of 10^-3 over the traditional LMS equalizer can be observed when the signals are transmitted through Back-to-Back (BTB) and 10-km fiber plus 1.2-m wireless links, respectively.
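A minimal sketch of the two-stage idea for a real-valued BPSK training sequence (filter order, step size, and the use of plain convolution for the equalization pass are illustrative assumptions):

```python
import numpy as np

def train_lms(rx, ref, order=11, mu=0.005):
    """Train one LMS sub-equalizer against a known training sequence
    (delay alignment between rx and ref is omitted for brevity)."""
    w = np.zeros(order)
    for n in range(order, len(ref)):
        x = rx[n - order:n][::-1]
        e = ref[n] - w @ x
        w += mu * e * x
    return w

def cascade_equalize(rx, train_seq, order=11):
    """Two LMS sub-equalizers in cascade: stage 1 targets the optical
    link, stage 2 the wireless link, each trained on the same sequence."""
    w1 = train_lms(rx, train_seq, order)
    stage1 = np.convolve(rx, w1, mode="same")
    w2 = train_lms(stage1, train_seq, order)
    return np.convolve(stage1, w2, mode="same")
```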
NASA Astrophysics Data System (ADS)
Perlekar, Prasad; Pal, Nairita; Pandit, Rahul
2017-03-01
We study two-dimensional (2D) binary-fluid turbulence by carrying out an extensive direct numerical simulation (DNS) of the forced, statistically steady turbulence in the coupled Cahn-Hilliard and Navier-Stokes equations. In the absence of any coupling, we choose parameters that lead (a) to spinodal decomposition and domain growth, which is characterized by the spatiotemporal evolution of the Cahn-Hilliard order parameter ϕ, and (b) to the formation of an inverse-energy-cascade regime in the energy spectrum E(k), in which energy cascades towards wave numbers k that are smaller than the energy-injection scale k_inj in the turbulent fluid. We show that the Cahn-Hilliard-Navier-Stokes coupling leads to an arrest of phase separation at a length scale L_c, which we evaluate from S(k), the spectrum of the fluctuations of ϕ. We demonstrate that (a) L_c ~ L_H, the Hinze scale that follows from balancing inertial and interfacial-tension forces, and (b) L_c is independent, within error bars, of the diffusivity D. We elucidate how this coupling modifies E(k) by blocking the inverse energy cascade at a wavenumber k_c, which we show is ≃ 2π/L_c. We compare our work with earlier studies of this problem.
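For context, the Hinze-type balance mentioned above equates inertial stresses at the scale L_H with interfacial-tension stresses; a standard dimensional sketch (my reconstruction, not the paper's exact derivation) reads:

```latex
% Inertial stress at scale L_H (Kolmogorov estimate) balances the
% interfacial-tension stress \sigma / L_H:
\rho\,(\epsilon L_H)^{2/3} \sim \frac{\sigma}{L_H}
\quad\Longrightarrow\quad
L_H \sim \left(\frac{\sigma}{\rho}\right)^{3/5} \epsilon^{-2/5}
```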
Error Cost Escalation Through the Project Life Cycle
NASA Technical Reports Server (NTRS)
Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory
2004-01-01
It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate, as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units.
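The escalation ranges quoted above, collected as a small lookup table for quick use (phase names shortened; the open-ended operations bound is truncated to the study's ">1500"):

```python
# Relative cost to fix a requirements error, keyed by the phase in which
# it is discovered (requirements phase = 1 unit; ranges from the study).
ESCALATION = {
    "requirements": (1, 1),
    "design": (3, 8),
    "manufacturing/build": (7, 16),
    "integration and test": (21, 78),
    "operations": (29, 1500),   # the study reports "more than 1500 units"
}

def relative_cost(phase: str) -> str:
    low, high = ESCALATION[phase]
    return f"{low}-{high} units"

# relative_cost("integration and test")  ->  "21-78 units"
```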
Modeling spallation reactions in tungsten and uranium targets with the Geant4 toolkit
NASA Astrophysics Data System (ADS)
Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter
2012-02-01
We study primary and secondary reactions induced by 600 MeV proton beams in monolithic cylindrical targets made of natural tungsten and uranium by using Monte Carlo simulations with the Geant4 toolkit [1-3]. The Bertini intranuclear cascade model, the Binary cascade model and the IntraNuclear Cascade Liège (INCL) model with ABLA [4] were used as calculational options to describe nuclear reactions. Fission cross sections, neutron multiplicity and mass distributions of fragments for 238U fission induced by 25.6 and 62.9 MeV protons are calculated and compared to recent experimental data [5]. Time distributions of neutron leakage from the targets and heat depositions are calculated. This project is supported by Siemens Corporate Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobson, Ian; Hiskens, Ian; Linderoth, Jeffrey
Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive to and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines.
Cascade Distiller System Performance Testing Interim Results
NASA Technical Reports Server (NTRS)
Callahan, Michael R.; Pensinger, Stuart; Sargusingh, Miriam J.
2014-01-01
The Cascade Distillation System (CDS) is a rotary distillation system with potential for greater reliability and lower energy costs than existing distillation systems. Based upon the results of the 2009 distillation comparison test (DCT) and recommendations of the expert panel, the Advanced Exploration Systems (AES) Water Recovery Project (WRP) advanced the technology by increasing reliability of the system through redesign of bearing assemblies and improved rotor dynamics. In addition, the project improved the CDS power efficiency by optimizing the thermoelectric heat pump (TeHP) and heat exchanger design. Testing at the NASA-JSC Advanced Exploration System Water Laboratory (AES Water Lab) using a prototype Cascade Distillation Subsystem (CDS) wastewater processor (Honeywell International, Torrance, Calif.) with test support equipment and a control system developed by Johnson Space Center was performed to evaluate performance of the system with the upgrades as compared to previous system performance. The system was challenged with Solution 1 from the NASA Exploration Life Support (ELS) distillation comparison testing performed in 2009. Solution 1 consisted of a mixed stream containing human-generated urine and humidity condensate. A secondary objective of this testing was to evaluate the performance of the CDS as compared to the state-of-the-art Distillation Assembly (DA) used in the ISS Urine Processor Assembly (UPA). This was done by challenging the system with ISS analog waste streams. This paper details the results of the AES WRP CDS performance testing.
Skinner, Stan; Holdefer, Robert; McAuliffe, John J; Sala, Francesco
2017-11-01
Error avoidance in medicine follows rules similar to those that apply within the design and operation of other complex systems. The error-reduction concepts that best fit the conduct of testing during intraoperative neuromonitoring are forgiving design (reversibility of signal loss to avoid/prevent injury) and system redundancy (reduction of false reports by the multiplication of the error rate of tests independently assessing the same structure). However, error reduction in intraoperative neuromonitoring is complicated by the dichotomous roles (and biases) of the neurophysiologist (test recording and interpretation) and surgeon (intervention). This "interventional cascade" can be given as follows: test → interpretation → communication → intervention → outcome. Observational and controlled trials within operating rooms demonstrate that optimized communication, collaboration, and situational awareness result in fewer errors. Well-functioning operating room collaboration depends on familiarity and trust among colleagues. Checklists represent one method to initially enhance communication and avoid obvious errors. All intraoperative neuromonitoring supervisors should strive to use sufficient means to secure situational awareness and trusted communication/collaboration. Face-to-face audiovisual teleconnections may help repair deficiencies when a particular practice model disallows personal operating room availability. All supervising intraoperative neurophysiologists need to reject an insular, deferential, or distant mindset.
Quasi-eccentricity error modeling and compensation in vision metrology
NASA Astrophysics Data System (ADS)
Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin
2018-04-01
Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error and needs to be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error becomes a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point to the true projection of the circle center. Both simulation and real experiment confirm the effectiveness of the proposed method in several vision applications.
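A minimal sketch of the iterative refinement just described (the bias callable is hypothetical; in practice it would come from the calibrated camera model and the quasi-eccentricity error model):

```python
import numpy as np

def refine_center(detected, eccentricity_bias, iterations=10):
    """Fixed-point refinement of the detected ellipse center.
    `eccentricity_bias(p)` is a hypothetical callable that returns the
    modeled offset between the detected quasi-center and the true
    projection of the circle center, evaluated at the estimate p."""
    detected = np.asarray(detected, dtype=float)
    p = detected.copy()
    for _ in range(iterations):              # converges when the bias model
        p = detected - eccentricity_bias(p)  # varies slowly around p
    return p
```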
NASA Astrophysics Data System (ADS)
O'Leary, D., III; Hall, D. K.; Medler, M. J.; Flower, A.; Matthews, R.
2017-12-01
The spring of 2015 brought an alarmingly early snowmelt to the Cascade Mountains, impacting flora, fauna, watersheds, and wildfire activity. It is important that we understand these events because model-based projections suggest that snowmelt may arrive an average of 10-40 days earlier across the continental US by the year 2100. Available snow measurement methods including SNOTEL stations and stream gauges offer insights into point locations and individual watersheds, but lack the detail needed to assess snowmelt anomalies across the landscape. In this study we describe our new MODIS-based snowmelt timing maps (STMs), validate them with SNOTEL measurements, then use them to explore the spatial patterns of the 2015 snowmelt in the Cascades. We found that the Cascade Mountains experienced snowmelt 41 days earlier than the 2001-2015 average, with many areas melting >70 days early. Of concern to land managers, these events may be the "new normal" in the decades to come.
Ellinas, Christos; Allan, Neil; Durugbo, Christopher; Johansson, Anders
2015-01-01
Current societal requirements necessitate the effective delivery of complex projects that can do more while using less. Yet recent large-scale project failures suggest that our ability to successfully deliver them is still in its infancy. Such failures can arise through various failure mechanisms; this work focuses on one such mechanism. Specifically, it examines the likelihood of a project sustaining a large-scale catastrophe, triggered by a single task failure and delivered via a cascading process. To do so, an analytical model was developed and tested on an empirical dataset by means of numerical simulation. This paper makes three main contributions. First, it provides a methodology to identify the tasks most capable of impacting a project. In doing so, it is noted that a significant number of tasks induce no cascades, while a handful are capable of triggering surprisingly large ones. Secondly, it illustrates that crude task characteristics cannot aid in identifying them, highlighting the complexity of the underlying process and the utility of this approach. Thirdly, it draws parallels with systems encountered within the natural sciences by noting the emergence of self-organised criticality, commonly found within natural systems. These findings strengthen the need to account for the structural intricacies of a project's underlying task precedence structure, as they can provide the conditions upon which large-scale catastrophes materialise.
Optimization Studies of the FERMI at ELETTRA FEL Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Ninno, Giovanni; Fawley, William M.; Penn, Gregory E.
The FERMI at ELETTRA project at Sincrotrone Trieste involves two FELs, each based upon the principle of seeded harmonic generation and using the existing ELETTRA injection linac at 1.2 GeV beam energy. Scheduled to be completed in 2008, FEL-1 will operate in the 40-100 nm wavelength range and will involve one stage of harmonic up-conversion. The second undulator line, FEL-2, will begin operation two years later in the 10-40 nm wavelength range and use two harmonic stages operating as a cascade. The FEL design assumes continuous wavelength tunability over the full wavelength range, and polarization tunability of the output radiation including vertical or horizontal linear as well as helical polarization. The design considers focusing properties and segmentation of realizable undulators and available input seed lasers. We review the studies that have led to our current design. We present results of simulations using the GENESIS and GINGER simulation codes, including studies of various shot-to-shot fluctuations and undulator errors. Findings for the expected output radiation in terms of power and transverse and longitudinal coherence are reported.
ERIC Educational Resources Information Center
Scott, Duncan; Cooper, Adam; Swartz, Sharlene
2014-01-01
This paper presents findings of four Grade 6 teachers' involvement as facilitators of a participatory action research (PAR) project conducted in three South African primary schools. Based on the results of Phase One research which indicated that Grade 6s learn about sexuality, Human Immunodeficiency Virus (HIV) and Acquired Immunodeficiency…
Tedja, Milly S; Wojciechowski, Robert; Hysi, Pirro G; Eriksson, Nicholas; Furlotte, Nicholas A; Verhoeven, Virginie J M; Iglesias, Adriana I; Meester-Smoor, Magda A; Tompson, Stuart W; Fan, Qiao; Khawaja, Anthony P; Cheng, Ching-Yu; Höhn, René; Yamashiro, Kenji; Wenocur, Adam; Grazal, Clare; Haller, Toomas; Metspalu, Andres; Wedenoja, Juho; Jonas, Jost B; Wang, Ya Xing; Xie, Jing; Mitchell, Paul; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Paterson, Andrew D; Hosseini, S Mohsen; Shah, Rupal L; Williams, Cathy; Teo, Yik Ying; Tham, Yih Chung; Gupta, Preeti; Zhao, Wanting; Shi, Yuan; Saw, Woei-Yuh; Tai, E-Shyong; Sim, Xue Ling; Huffman, Jennifer E; Polašek, Ozren; Hayward, Caroline; Bencic, Goran; Rudan, Igor; Wilson, James F; Joshi, Peter K; Tsujikawa, Akitaka; Matsuda, Fumihiko; Whisenhunt, Kristina N; Zeller, Tanja; van der Spek, Peter J; Haak, Roxanna; Meijers-Heijboer, Hanne; van Leeuwen, Elisabeth M; Iyengar, Sudha K; Lass, Jonathan H; Hofman, Albert; Rivadeneira, Fernando; Uitterlinden, André G; Vingerling, Johannes R; Lehtimäki, Terho; Raitakari, Olli T; Biino, Ginevra; Concas, Maria Pina; Schwantes-An, Tae-Hwi; Igo, Robert P; Cuellar-Partida, Gabriel; Martin, Nicholas G; Craig, Jamie E; Gharahkhani, Puya; Williams, Katie M; Nag, Abhishek; Rahi, Jugnoo S; Cumberland, Phillippa M; Delcourt, Cécile; Bellenguez, Céline; Ried, Janina S; Bergen, Arthur A; Meitinger, Thomas; Gieger, Christian; Wong, Tien Yin; Hewitt, Alex W; Mackey, David A; Simpson, Claire L; Pfeiffer, Norbert; Pärssinen, Olavi; Baird, Paul N; Vitart, Veronique; Amin, Najaf; van Duijn, Cornelia M; Bailey-Wilson, Joan E; Young, Terri L; Saw, Seang-Mei; Stambolian, Dwight; MacGregor, Stuart; Guggenheim, Jeremy A; Tung, Joyce Y; Hammond, Christopher J; Klaver, Caroline C W
2018-06-01
Refractive errors, including myopia, are the most frequent eye disorders worldwide and an increasingly common cause of blindness. This genome-wide association meta-analysis in 160,420 participants and replication in 95,505 participants increased the number of established independent signals from 37 to 161 and showed high genetic correlation between Europeans and Asians (>0.78). Expression experiments and comprehensive in silico analyses identified retinal cell physiology and light processing as prominent mechanisms, and also identified functional contributions to refractive-error development in all cell types of the neurosensory retina, retinal pigment epithelium, vascular endothelium and extracellular matrix. Newly identified genes implicate novel mechanisms such as rod-and-cone bipolar synaptic neurotransmission, anterior-segment morphology and angiogenesis. Thirty-one loci resided in or near regions transcribing small RNAs, thus suggesting a role for post-transcriptional regulation. Our results support the notion that refractive errors are caused by a light-dependent retina-to-sclera signaling cascade and delineate potential pathobiological molecular drivers.
The effects of center of rotation errors on cardiac SPECT imaging
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.
2003-10-01
In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For the simulation studies, we generate projection data using a uniform MCAT phantom, first without modeling any physical effects (NPH), then with the modeling of the detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoid and step COR errors. For the other studies, we introduce sinusoid COR errors to projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoid COR errors in the axial direction lead to intensity decrease in the inferoapical region; 2) step COR errors in the axial direction lead to intensity decrease in the distal anterior region. The intensity decrease is more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in the transaxial direction seem to be insignificant. In the other studies, COR errors slightly degrade point source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in the images of the uniform cardiac phantom; COR errors up to 0.96 cm in the transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in the images of the cardiac phantom with defects, and COR errors up to 0.64 cm in the axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when using an iterative reconstruction algorithm without detector response correction.
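A small sketch of how a sinusoidal transaxial COR error can be imposed on a sinogram for this kind of study (amplitude, pixel size, and the linear-interpolation shift are illustrative choices; an axial error would instead shift data across slices):

```python
import numpy as np

def apply_cor_error(sinogram, angles, amplitude_cm, pixel_cm):
    """Shift each projection row by a sinusoidal center-of-rotation error
    (transaxial case); shifts are in detector bins, via linear interpolation."""
    bins = np.arange(sinogram.shape[1])
    out = np.empty_like(sinogram, dtype=float)
    for i, theta in enumerate(angles):
        shift = amplitude_cm * np.sin(theta) / pixel_cm
        out[i] = np.interp(bins - shift, bins, sinogram[i])
    return out
```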
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
... Bonneville Power Administration (BPA) transmission lines through National Forest lands. The second alternative would follow the existing BPA transmission lines through the Confederated Tribes of Warm Springs...
Busing, Richard T.; Solomon, Allen M.
2004-01-01
Two forest dynamics simulators are compared along climatic gradients in the Pacific Northwest. The ZELIG and FORCLIM models are tested against forest survey data from western Oregon. Their ability to generate accurate patterns of forest basal area and species composition is evaluated for series of sites with contrasting climate. Projections from both models approximate the basal area and composition patterns for three sites along the elevation gradient at H.J. Andrews Experimental Forest in the western Cascade Range. The ZELIG model is somewhat more accurate than FORCLIM at the two low-elevation sites. Attempts to project forest composition along broader climatic gradients reveal limitations of ZELIG, however. For example, ZELIG is less accurate than FORCLIM at projecting the average composition of a west Cascades ecoregion selected for intensive analysis. Also, along a gradient consisting of several sites on an east to west transect at 44.1°N latitude, both the FORCLIM model and the actual data show strong changes in composition and total basal area, but the ZELIG model shows a limited response. ZELIG does not simulate the declines in forest basal area and the diminished dominance of mesic coniferous species east of the Cascade crest. We conclude that ZELIG is suitable for analyses of certain sites for which it has been calibrated. FORCLIM can be applied in analyses involving a range of climatic conditions without requiring calibration for specific sites.
The PoET (Prevention of Error-Based Transfers) Project.
Oliver, Jill; Chidwick, Paula
2017-01-01
The PoET (Prevention of Error-based Transfers) Project is one of the Ethics Quality Improvement Projects (EQIPs) taking place at William Osler Health System. This specific project is designed to reduce transfers from long-term care to hospital that are caused by legal and ethical errors related to consent, capacity and substitute decision-making. The project is currently operating in eight long-term care homes in the Central West Local Health Integration Network and has seen a 56% reduction in multiple transfers before death in hospital.
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were analyzed to determine if the errors could be detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
Wen, Xin; Liu, Zhehua; Lei, Xiaohui; Lin, Rongjie; Fang, Guohua; Tan, Qiaofeng; Wang, Chao; Tian, Yu; Quan, Jin
2018-08-15
The eco-hydrological system in southwestern China is undergoing great changes in recent decades owing to climate change and extensive cascading hydropower exploitation. With a growing recognition that multiple drivers often interact in complex and nonadditive ways, the purpose of this study is to predict the potential future changes in streamflow and fish habitat quality in the Yuan River and to quantify the individual and cumulative effects of cascade damming and climate change. The bias-corrected and spatially downscaled Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Model (GCM) projections are employed to drive the Soil and Water Assessment Tool (SWAT) hydrological model and to simulate and predict runoff responses under diverse scenarios. A physical habitat simulation model is established to quantify the relationship between river hydrology and fish habitat, and the relative change rate is used to assess the individual and combined effects of cascade damming and climate change. Mean annual temperature, precipitation and runoff in 2015-2100 show an increasing trend compared with those in 1951-2010, with a particularly pronounced difference between dry and wet years. The ecological habitat quality is improved under cascade hydropower development, since ecological requirements have been incorporated in the reservoir operation policy. As for the middle reach, the runoff change from January to August is determined mainly by damming, and the climate change influence becomes more pronounced in dry seasons from September to December. Cascade development has an effect on runoff of the lower reach only in dry seasons due to the limited regulation capacity of reservoirs, and climate changes have an effect on runoff in wet seasons. Climate changes have a less significant effect on fish habitat quality in the middle reach than damming, but a more significant effect in the lower reach. In addition, the effect of climate changes on fish habitat quality in the lower reach is high in dry seasons but low in flood seasons. Copyright © 2018 Elsevier B.V. All rights reserved.
Memory Errors in Alibi Generation: How an Alibi Can Turn Against Us.
Crozier, William E; Strange, Deryn; Loftus, Elizabeth F
2017-01-01
Alibis play a critical role in the criminal justice system. Yet research on the process of alibi generation and evaluation is still nascent. Indeed, similar to other widely investigated psychological phenomena in the legal system - such as false confessions, historical claims of abuse, and eyewitness memory - the basic assumptions underlying alibi generation and evaluation require closer empirical scrutiny. To date, the majority of alibi research investigates the social psychological aspects of the process. We argue that applying our understanding of basic human memory is critical to a complete understanding of the alibi process. Specifically, we challenge the use of alibi inconsistency as an indication of guilt by outlining the "cascading effects" that can put innocents at risk for conviction. We discuss how normal encoding and storage processes can pose problems at retrieval, particularly for innocent suspects, and can result in alibi inconsistencies over time. Those inconsistencies are typically misunderstood as intentional deception, first by law enforcement, affecting the investigation; then by prosecutors, affecting prosecution decisions; and finally by juries, ultimately affecting guilt judgments. Put differently, despite the universal nature of memory inconsistencies, a single error can produce a cascading effect, rendering an innocent individual's alibi, ironically, proof of guilt. Copyright © 2017 John Wiley & Sons, Ltd.
A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications
Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser
2017-01-01
In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications based on a cascaded Particle/Kalman filter framework structure is proposed. Two-dimensional indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values, are integrated for estimating positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are rationally used to correct the IMU-derived navigation solution through the extended Kalman Filter (EKF). Moreover, the map-aiding method and map-matching method are innovatively combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). Different sources of information are incorporated through a cascaded structure EKF/AVPF filter algorithm. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS), and provide a stable, continuous and reliable indoor location service. PMID:28574471
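A toy sketch of the map-aiding idea inside the particle filter stage: particles whose last step crossed a wall are zero-weighted before resampling (the `crosses_wall` predicate is a hypothetical callable supplied by the map layer; the AVPF machinery itself is omitted):

```python
import numpy as np

def map_aided_resample(particles, weights, crosses_wall, rng=None):
    """Zero-weight particles that violated the 2-D indoor map on their
    last step, renormalize, and resample with replacement."""
    rng = rng or np.random.default_rng()
    w = np.array([0.0 if crosses_wall(p) else wi
                  for p, wi in zip(particles, weights)], dtype=float)
    if w.sum() == 0.0:           # every particle violated the map: fall back
        w = np.asarray(weights, dtype=float)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return ([particles[i] for i in idx],
            np.full(len(particles), 1.0 / len(particles)))
```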
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y; Rottmann, J; Myronakis, M
2016-06-15
Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, allowing potential for optimizing detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performed on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as an input, examination of the modeled and measured NPS in the axial plane exhibits good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4. This improvement is largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a subsequent loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate the implementation of image binning to two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. R01CA188446-01 from the National Cancer Institute.
NASA Astrophysics Data System (ADS)
Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.
2018-03-01
This paper presents an Evolutionary Programming (EP) approach proposed to optimize the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failures. The data were collected from simulations of a hidden-failure probability model based on historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the number of neurons in the first and second hidden layers, are selected by EP-ANN. The IEEE 14-bus system has been tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).
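A minimal sketch of a classical EP loop for this kind of hyperparameter search, assuming a `fitness` callable that trains the ANN with a candidate (learning rate, momentum, hidden-1 size, hidden-2 size) and returns its validation MSE (population size, mutation scale, and bounds are illustrative):

```python
import numpy as np

def evolve_params(fitness, bounds, pop_size=20, generations=50, seed=0):
    """Evolutionary Programming: Gaussian mutation plus (mu + mu) selection.
    Integer parameters such as neuron counts would be rounded inside
    `fitness`; here all variables are treated as continuous."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        children = np.clip(pop + 0.1 * (hi - lo) * rng.standard_normal(pop.shape),
                           lo, hi)
        union = np.vstack([pop, children])
        scores = np.array([fitness(c) for c in union])
        pop = union[np.argsort(scores)[:pop_size]]   # keep lowest-MSE candidates
    return pop[0]
```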
Translation fidelity coevolves with longevity.
Ke, Zhonghe; Mallik, Pramit; Johnson, Adam B; Luna, Facundo; Nevo, Eviatar; Zhang, Zhengdong D; Gladyshev, Vadim N; Seluanov, Andrei; Gorbunova, Vera
2017-10-01
Whether errors in protein synthesis play a role in aging has been a subject of intense debate. It has been suggested that rare mistakes in protein synthesis in young organisms may result in errors in the protein synthesis machinery, eventually leading to an increasing cascade of errors as organisms age. Studies that followed generally failed to identify a dramatic increase in translation errors with aging. However, whether translation fidelity plays a role in aging remained an open question. To address this issue, we examined the relationship between translation fidelity and maximum lifespan across 17 rodent species with diverse lifespans. To measure translation fidelity, we utilized sensitive luciferase-based reporter constructs with mutations in an amino acid residue critical to luciferase activity, wherein misincorporation of amino acids at this mutated codon re-activated the luciferase. The frequency of amino acid misincorporation at the first and second codon positions showed strong negative correlation with maximum lifespan. This correlation remained significant after phylogenetic correction, indicating that translation fidelity coevolves with longevity. These results give new life to the role of protein synthesis errors in aging: Although the error rate may not significantly change with age, the basal rate of translation errors is important in defining lifespan across mammals. © 2017 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.
Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers
NASA Astrophysics Data System (ADS)
Caballero Morales, Santiago Omar; Cox, Stephen J.
2009-12-01
Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
Tangborn, Wendell V.; Rasmussen, Lowell A.
1976-01-01
On the basis of a linear relationship between winter (October-April) precipitation and annual runoff from a drainage basin (Rasmussen and Tangborn, 1976) a physically reasonable model for predicting summer (May-September) streamflow from drainages in the North Cascades region was developed. This hydrometeorological prediction method relates streamflow for a season beginning on the day of prediction to the storage (including snow, ice, soil moisture, and groundwater) on that day. The spring storage is inferred from an input-output relationship based on the principle of conservation of mass: spring storage equals winter precipitation on the basin less winter runoff from the basin and less winter evapotranspiration, which is presumed to be small. The method of prediction is based on data only from the years previous to the one for which the prediction is made, and the system is revised each year as data for the previous year become available. To improve the basin storage estimate made in late winter or early spring, a short-season runoff prediction is made. The errors resulting from this short-term prediction are used to revise the storage estimate and improve the later prediction. This considerably improves the accuracy of the later prediction, especially for periods early in the summer runoff season. The optimum length for the test period appears to be generally less than a month for east side basins and between 1 and 2 months for those on the west side of the Cascade Range. The time distribution of the total summer runoff can be predicted when this test season is used so that on May 1 monthly streamflow for the May-September season can be predicted. It was found that summer precipitation and the time of minimum storage are two error sources that were amenable to analysis. For streamflow predictions in seasons beginning in early spring the deviation of the subsequent summer precipitation from a long-period average will contribute up to 53% of the prediction error. This contribution decreases to nearly zero during the summer and then rises slightly for late summer predictions. The reason for the smaller than expected effect of summer precipitation is thought to be due to the compensating effect of increased evaporative losses and increased infiltration when precipitation is greater than normal during the summer months. The error caused by the beginning winter month (assumed to be October in this study) not coinciding with the time of minimum storage was examined; it appears that October may be the best average beginning winter month for most drainages but that a more detailed study is needed. The optimum beginning of the winter season appears to vary from August to October when individual years are examined. These results demonstrate that standard precipitation and runoff measurements in the North Cascades region are adequate for constructing a predictive hydrologic model. This model can be used to make streamflow predictions that compare favorably with current multiple regression methods based on mountain snow surveys. This method has the added advantages of predicting the space and time distributions of storage and summer runoff.
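A toy numeric illustration of the mass-balance predictor: spring storage is inferred as winter precipitation minus winter runoff (evapotranspiration neglected, as in the abstract), and summer runoff is regressed on storage using prior years only. All numbers below are synthetic, not North Cascades data.

```python
import numpy as np

P_winter = np.array([180., 150., 210., 120., 190., 160.])  # cm, Oct-Apr
R_winter = np.array([ 90.,  80., 100.,  65.,  95.,  85.])  # cm, Oct-Apr
S_spring = P_winter - R_winter   # storage: snow, ice, soil moisture, groundwater

R_summer = np.array([ 70.,  55.,  85.,  42.,  76.,  60.])  # cm, May-Sep

# Fit the linear relation on all years but the last, then predict the last,
# mimicking the "previous years only" scheme described above.
a, b = np.polyfit(S_spring[:-1], R_summer[:-1], 1)
pred = a * S_spring[-1] + b
print(f"predicted summer runoff: {pred:.1f} cm (observed {R_summer[-1]:.1f})")
```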
Love, Peter E D; Smith, Jim; Teo, Pauline
2018-05-01
Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide a much-needed context to demonstrate the relevance of error management theory to effectively address rework and safety problems in construction projects. The robust theoretical underpinning that is grounded in practice and presented in this paper provides a mechanism to engender learning from errors, which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.
Detector-Response Correction of Two-Dimensional γ -Ray Spectra from Neutron Capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rusev, G.; Jandel, M.; Arnold, C. W.
2015-05-28
The neutron-capture reaction produces a large variety of γ-ray cascades with different γ-ray multiplicities. A measured spectral distribution of these cascades for each γ-ray multiplicity is of importance to applications and studies of γ-ray statistical properties. The DANCE array, a 4π ball of 160 BaF2 detectors, is an ideal tool for measurement of neutron-capture γ-rays. The high granularity of DANCE enables measurements of high-multiplicity γ-ray cascades. The measured two-dimensional spectra (γ-ray energy, γ-ray multiplicity) have to be corrected for the DANCE detector response in order to compare them with predictions of the statistical model or use them in applications. The detector-response correction problem becomes more difficult for a 4π detection system than for a single detector. A trial-and-error approach and an iterative decomposition of γ-ray multiplets have been successfully applied to the detector-response correction. As a result, applications of the decomposition methods are discussed for two-dimensional γ-ray spectra measured at DANCE from γ-ray sources and from the 10B(n,γ) and 113Cd(n,γ) reactions.
NASA Astrophysics Data System (ADS)
Yong, WANG; Cong, LI; Jielin, SHI; Xingwei, WU; Hongbin, DING
2017-11-01
As advanced linear plasma sources, cascaded arc plasma devices have been used to generate steady plasma with high electron density, high particle flux and low electron temperature. To measure the electron density and electron temperature of the plasma device accurately, a laser Thomson scattering (LTS) system, generally recognized as the most precise plasma diagnostic method, has been established in our lab at Dalian University of Technology. The electron density has been measured successfully in the range of 4.5 × 10¹⁹ m⁻³ to 7.1 × 10²⁰ m⁻³ and the electron temperature in the range of 0.18 eV to 0.58 eV. For comparison, an optical emission spectroscopy (OES) system was established as well. The results showed that the electron excitation temperature (configuration temperature) measured by OES is significantly higher than the electron temperature (kinetic electron temperature) measured by LTS, by up to 40% in the given discharge conditions. The results indicate that the cascaded arc plasma is a recombining plasma and is not in local thermodynamic equilibrium (LTE). This leads to significant errors when OES is used to characterize the electron temperature of a non-LTE plasma.
Newborn Screening and Cascade Testing for FMR1 Mutations
Sorensen, Page L.; Gane, Louise W.; Yarborough, Mark; Hagerman, Randi; Tassone, Flora
2014-01-01
We describe an ongoing pilot project in which newborn screening (NBS) for FMR1 mutations and subsequent cascade testing are performed by the MIND Institute at the University of California, Davis Medical Center (UCDMC). To date, 3042 newborns have been initially screened, and 44 extended family members have been screened by cascade testing once a newborn with a mutation is identified. Fourteen newborns (7 males and 7 females) and 27 extended family members (5 males and 22 females) have been identified with FMR1 mutations. Three family histories are discussed in detail, each demonstrating some benefits and risks of NBS and cascade testing for FMR1 mutations in extended family members. While we acknowledge inherent risks, we propose that with genetic counseling, clinical follow-up of identified individuals and cascade testing, NBS has significant benefits. Treatment for individuals in the extended family who would otherwise not have received treatment can be beneficial. In addition, knowledge of carrier status can lead to lifestyle changes and prophylactic interventions that are likely to reduce the risk of late-onset neurological or psychiatric problems in carriers. Also, with identification of carrier family members through NBS, reproductive choices become available to those who would not have known that they were at risk of having offspring with fragile X syndrome. PMID:23239591
Improving laboratory data entry quality using Six Sigma.
Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks
2013-01-01
Makerere University provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce the problem. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, and analyzing data to determine the root causes of data-entry errors. Finally, the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources, and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors, from 423 errors a month (i.e., 4.34 Six Sigma) in the first month down to an average of 166 errors/month (i.e., 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 per month over one year saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote a quality environment. Laboratory staff can deliver excellent care at a lower cost by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve clinical workflow processes and yield cost savings across the health care continuum.
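The reported savings figure follows directly from the numbers given in the abstract; a quick arithmetic check:

```python
# Back-of-the-envelope check of the reported annual savings.
errors_before, errors_after = 423, 166     # data entry errors per month
cost_per_error = 16.25                     # USD to identify and fix one error
monthly_reduction = errors_before - errors_after       # 257 errors/month
annual_savings = monthly_reduction * cost_per_error * 12
print(annual_savings)                      # 50115.0, matching the abstract
```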
The, Bertram; Flivik, Gunnar; Diercks, Ron L; Verdonschot, Nico
2008-03-01
Wear curves from individual patients often show unexplained irregularities or impossible values (negative wear). We postulated that errors in two-dimensional wear measurements are mainly the result of radiographic projection differences. We tested a new method that makes two-dimensional wear measurements of cemented THAs less sensitive to radiographic projection differences. The measurement errors that occur when radiographically projecting a three-dimensional THA were modeled. Based on the model, we developed a method to reduce the errors, thus approximating three-dimensional linear wear values, which are less sensitive to projection differences. An error analysis was performed by virtually simulating 144 wear measurements under varying conditions with and without application of the correction: the mean absolute error was reduced from 1.8 mm (range, 0-4.51 mm) to 0.11 mm (range, 0-0.27 mm). For clinical validation, radiostereometric analysis was performed on 47 patients to determine the true wear at 1, 2, and 5 years. Subsequently, wear was measured on conventional radiographs with and without the correction: the overall occurrence of errors greater than 0.2 mm was reduced from 35% to 15%. Wear measurements are less sensitive to differences in two-dimensional projection of the THA when using the correction method.
Philippoff, Joanna; Baumgartner, Erin
2016-03-01
The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.
Innovations in Advanced Materials and Metals Manufacturing Project (IAM2)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Elizabeth
This project, under the Jobs and Innovation Accelerator Challenge, Innovations in Advanced Materials and Metals Manufacturing Project, contracted with Cascade Energy to provide a shared energy project manager/engineer to work with five different companies throughout the Portland metro grant region to implement ten energy efficiency projects and to develop a case study analyzing the project model. As part of the project, the energy project manager also investigated specific new technologies and methodologies that could change the way energy is consumed by manufacturers: from game-changing equipment and technology for monitoring energy use to methodologies that change the way companies interact with and use their machines to reduce energy consumption.
Influence of nuclear de-excitation on observables relevant for space exploration
NASA Astrophysics Data System (ADS)
Mancusi, Davide; Boudard, Alain; Cugnon, Joseph; David, Jean-Christophe; Leray, Sylvie
The composition of the space radiation environment inside spacecraft is modified by the interaction with shielding material, with equipment and even with the astronauts' bodies. Accurate quantitative estimates of the effects of nuclear reactions are necessary, for example, for dose estimation and prediction of single-event upset rates. To this end, it is necessary to construct predictive models for nuclear reactions, which usually consist of an intranuclear-cascade or quantum-molecular-dynamics stage, followed by a nuclear de-excitation stage. While it is generally acknowledged that it is necessary to accurately simulate the first reaction stage, transport-code users often neglect or underestimate the importance of the choice of the de-excitation code. The purpose of this work is to prove that the de-excitation model is in fact a non-negligible source of uncertainty for the prediction of several observables of crucial importance for space applications. For some particular observables, such as fragmentation cross sections, the systematic uncertainty due to the de-excitation model actually dominates the theoretical error. Our point is illustrated by calculations performed with several intranuclear-cascade/de-excitation models, such as the Liège Intranuclear Cascade model (INCL) and Isabel (for the cascade part) and ABLA, GEMINI++ and SMM (on the de-excitation side). We also rely on the results of the recent IAEA intercomparison of spallation models, which can be used as informative groundwork for the evaluation of the global uncertainties involved in nucleon-nucleus reactions.
Room temperature continuous wave operation of quantum cascade laser at λ ~ 9.4 μm
NASA Astrophysics Data System (ADS)
Hou, Chuncai; Zhao, Yue; Zhang, Jinchuan; Zhai, Shenqiang; Zhuo, Ning; Liu, Junqi; Wang, Lijun; Liu, Shuman; Liu, Fengqi; Wang, Zhanguo
2018-03-01
Continuous wave (CW) operation of long-wave infrared (LWIR) quantum cascade lasers (QCLs) is achieved up to a temperature of 303 K. For room temperature CW operation, the wafer with 35 stages was processed into buried heterostructure lasers. For a 2-mm-long and 10-μm-wide laser with a high-reflectivity (HR) coating on the rear facet, CW output power of 45 mW at 283 K and 9 mW at 303 K is obtained. The lasing wavelength is around 9.4 μm, located in the LWIR spectral range. Project supported by the National Key Research And Development Program (No. 2016YFB0402303), the National Natural Science Foundation of China (Nos. 61435014, 61627822, 61574136, 61774146, 61674144, 61404131), the Key Projects of Chinese Academy of Sciences (Nos. ZDRW-XH-2016-4, QYZDJ-SSW-JSC027), and the Beijing Natural Science Foundation (No. 4162060, 4172060).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, R.R.; Staub, W.P.
1993-08-01
Two environmental assessments considered the potential cumulative environmental impacts resulting from the development of eight proposed hydropower projects in the Nooksack River Basin and 11 proposed projects in the Skagit River Basin, North Cascades, Washington, respectively. While not identified as a target resource, slope stability and the alteration of sediment supply to creeks and river mainstems significantly affect other resources. The slope stability assessment emphasized the potential for cumulative impacts under disturbed conditions (e.g., road construction and timber harvesting) and a landslide-induced pipeline rupture scenario. In the case of small-scale slides, the sluicing action of ruptured pipeline water on the fresh landslide scarp was found to be capable of eroding significantly more material than the original landslide. For large-scale landslides, sluiced material was found to be a small increment of the original landslide. These results predicted that hypothetical accidental pipeline rupture by small-scale landslides may result in potential cumulative impacts for 12 of the 19 projects with pending license applications in both river basins. 5 refs., 2 tabs.
The effects of time-varying observation errors on semi-empirical sea-level projections
Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.; ...
2016-11-30
Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper-tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34%, and in the year 2100 by about 40%, compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.
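A minimal sketch of the key modeling choice: a Gaussian log-likelihood with time-varying (heteroskedastic) observation errors. The linear stand-in "model", the error schedule, and all numbers are assumptions replacing the semi-empirical sea-level model, and the autocorrelated (e.g., AR(1)) residual structure the study also considers is omitted for brevity.

```python
import numpy as np

def log_likelihood(params, t, obs, obs_sigma, model):
    """Gaussian log-likelihood with time-varying observation errors;
    model(params, t) is any semi-empirical sea-level model."""
    resid = obs - model(params, t)
    return -0.5 * np.sum(np.log(2 * np.pi * obs_sigma**2)
                         + (resid / obs_sigma)**2)

# Toy usage with a linear "model" and error bars that shrink over time
# (older tide-gauge data are noisier).
t = np.arange(1880, 2014)
model = lambda p, t: p[0] + p[1] * (t - t[0])
true = model([0.0, 1.7], t)                        # assumed mm/yr trend
sigma = np.linspace(15.0, 3.0, t.size)
obs = true + np.random.default_rng(1).normal(0, sigma)
print(log_likelihood([0.0, 1.7], t, obs, sigma, model))
```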
Belenko, Steven; Knight, Danica; Wasserman, Gail A; Dennis, Michael L; Wiley, Tisha; Taxman, Faye S; Oser, Carrie; Dembo, Richard; Robertson, Angela A; Sales, Jessica
2017-03-01
Substance use and substance use disorders are highly prevalent among youth under juvenile justice (JJ) supervision, and related to delinquency, psychopathology, social problems, risky sex and sexually transmitted infections, and health problems. However, numerous gaps exist in the identification of behavioral health (BH) problems and in the subsequent referral, initiation and retention in treatment for youth in community justice settings. This reflects both organizational and systems factors, including coordination between justice and BH agencies. This paper presents a new framework, the Juvenile Justice Behavioral Health Services Cascade ("Cascade"), for measuring unmet substance use treatment needs to illustrate how the cascade approach can be useful in understanding service delivery issues and identifying strategies to improve treatment engagement and outcomes for youth under community JJ supervision. We discuss the organizational and systems barriers for linking delinquent youth to BH services, and explain how the Cascade can help understand and address these barriers. We provide a detailed description of the sequential steps and measures of the Cascade, and then offer an example of its application from the Juvenile Justice - Translational Research on Interventions for Adolescents in the Legal System project (JJ-TRIALS), a multi-site research cooperative funded by the National Institute on Drug Abuse. As illustrated with substance abuse treatment, the Cascade has potential for informing and guiding efforts to improve behavioral health service linkages for adolescent offenders, developing and testing interventions and policies to improve interagency and cross-systems coordination, and informing the development of measures and interventions for improving the implementation of treatment in complex multisystem service settings. Clinical Trials Registration number - NCT02672150. Copyright © 2017 Elsevier Inc. All rights reserved.
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background: Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods: We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results: We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions: Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
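A minimal sketch of the comparison logic: flag cells whose projections differ between two parallel model versions by more than the ±5% materiality threshold. The numbers below are illustrative, not the study's data.

```python
import numpy as np

def material_errors(ref, alt, threshold=0.05):
    """Compare projections from two parallel model versions; cells whose
    relative difference exceeds the threshold are flagged as material."""
    rel = np.abs(alt - ref) / np.abs(ref)
    return rel > threshold

# Toy usage: the column/row-reference version drifts on one cell.
named_matrices = np.array([100., 250., 40., 75.])   # assumed error-free
col_row_refs   = np.array([100., 315., 40., 71.5])  # +26%, -4.7%
print(material_errors(named_matrices, col_row_refs))  # [False True False False]
```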
A continuous quality improvement project to reduce medication error in the emergency department.
Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts
2013-01-01
Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of factors that make it prone to medication errors. This project aimed to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and to develop strategies to eliminate these problems. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. Medication incidents (MI) fell from 16 to 6 between the periods before and after the improvement work. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.
Estimating tree bole volume using artificial neural network models for four species in Turkey.
Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V
2010-01-01
Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages, including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used, producing the back-propagation (BPANN) and cascade-correlation (CCANN) artificial neural network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques, including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented, and the advantages and limitations of each are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (averaging 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade-correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species, since they gave unbiased results and were superior to almost all other methods in terms of error (%), expressed as the mean of the percentage errors. © 2009 Elsevier Ltd. All rights reserved.
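For reference, Smalian's formula used for the validation volumes takes each section's volume as the section length times the average of its end cross-sectional areas. A minimal sketch with invented diameters:

```python
import math

def smalian_volume(diams_cm, section_len_m=1.0):
    """Aggregate bole volume (m^3) from diameters (cm) measured at the
    ends of successive sections, using Smalian's formula:
    V_section = L * (A_lower + A_upper) / 2."""
    areas = [math.pi * (d / 100.0) ** 2 / 4.0 for d in diams_cm]
    return sum(section_len_m * (a1 + a2) / 2.0
               for a1, a2 in zip(areas, areas[1:]))

# Toy taper: diameters every metre along an 8 m bole (values invented).
print(round(smalian_volume([32, 30, 28, 25, 22, 18, 14, 9, 4]), 4))
```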
NASA Astrophysics Data System (ADS)
Brandstetter, M.; Volgger, L.; Genner, A.; Jungbauer, C.; Lendl, B.
2013-02-01
This work reports on a compact sensor for fast and reagent-free point-of-care determination of glucose, lactate and triglycerides in blood serum, based on a tunable (1030-1230 cm-1) external-cavity quantum cascade laser (EC-QCL). For simple and robust operation, a single-beam set-up was designed, and only thermoelectric cooling was used for the employed laser and detector. Full computer control of the analysis, including liquid handling and data analysis, facilitated routine measurements. A high optical pathlength (>100 μm) is a prerequisite for robust measurements in clinical practice. Hence, the optimum optical pathlength for transmission measurements in aqueous solution was considered in theory and experiment. The experimentally determined pathlength of maximum signal-to-noise ratio (SNR) was around 140 μm for the QCL blood sensor and around 50 μm for a standard FT-IR spectrometer employing a liquid-nitrogen-cooled mercury cadmium telluride (MCT) detector. A single absorption spectrum was used to calculate the analyte concentrations simultaneously by means of partial-least-squares (PLS) regression analysis. Glucose was determined in blood serum with a prediction error (RMSEP) of 6.9 mg/dl, and triglycerides with a cross-validation error (RMSECV) of 17.5 mg/dl, in a set of 42 different patients. In spiked serum samples, the lactate concentration could be determined with an RMSECV of 8.9 mg/dl.
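A minimal sketch of the chemometric step: a PLS regression mapping absorption spectra to analyte concentrations. The synthetic spectra, the component count, and the noise levels are assumptions standing in for the EC-QCL data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for EC-QCL absorption spectra: 42 samples x 200
# wavenumber points; concentrations in mg/dl (glucose, lactate, TG).
C = rng.uniform([50, 5, 50], [300, 30, 400], size=(42, 3))
bands = rng.normal(0, 1, size=(3, 200))             # pure-component spectra
A = C @ bands * 1e-3 + rng.normal(0, 0.002, (42, 200))  # Beer-Lambert + noise

pls = PLSRegression(n_components=8)
pls.fit(A[:-10], C[:-10])                           # calibration set
pred = pls.predict(A[-10:])                         # prediction set
rmsep = np.sqrt(((pred - C[-10:]) ** 2).mean(axis=0))
print("RMSEP per analyte (mg/dl):", rmsep.round(1))
```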
Attention to Form or Meaning? Error Treatment in the Bangalore Project.
ERIC Educational Resources Information Center
Beretta, Alan
1989-01-01
Reports on an evaluation of the Bangalore/Madras Communicational Teaching Project (CTP), a content-based approach to language learning. Analysis of 21 lesson transcripts revealed a greater incidence of error treatment of content than linguistic error, consonant with the CTP focus on meaning rather than form. (26 references) (Author/CB)
Critical Transitions in Thin Layer Turbulence
NASA Astrophysics Data System (ADS)
Benavides, Santiago; Alexakis, Alexandros
2017-11-01
We investigate a model of thin-layer turbulence that follows the evolution of the two-dimensional motions u_2D(x, y) along the horizontal directions (x, y), coupled to a single Fourier mode along the vertical direction (z) of the form u_q(x, y, z) = [v_x(x, y) sin(qz), v_y(x, y) sin(qz), v_z(x, y) cos(qz)], thus reducing the system to two coupled two-dimensional equations. Its reduced dimensionality allows a thorough investigation of the transition from a forward to an inverse cascade of energy as the thickness of the layer H = π/q is varied. Starting from a thick layer and reducing its thickness, it is shown that two critical heights are encountered: (i) one at which the forward unidirectional cascade (similar to three-dimensional turbulence) transitions to a bidirectional cascade transferring energy to both small and large scales, and (ii) one at which the bidirectional cascade transitions to a unidirectional inverse cascade when the layer becomes very thin (similar to two-dimensional turbulence). The two critical heights are shown to have different properties close to criticality, which we analyze with numerical simulations for a wide range of Reynolds numbers and aspect ratios. This work was granted access to the HPC resources of MesoPSL, financed by the Région Île-de-France and the project Equip@Meso (reference ANR-10-EQPX-29-01).
Assessment of Critical Events Corridors through Multivariate Cascading Outages Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Samaan, Nader A.; Diao, Ruisheng
2011-10-17
Massive blackouts of electrical power systems in North America over the past decade have focused increasing attention upon ways to identify and simulate network events that may potentially lead to widespread network collapse. This paper summarizes a method to simulate power-system vulnerability to cascading failures from a supplied set of initiating events, synonymously termed extreme events. The implemented simulation method is currently confined to simulating the steady-state power-system response to a set of extreme events. The outlined method of simulation is meant to augment, and provide new insight into, bulk power transmission network planning, which at present remains mainly confined to maintaining power system security for single and double component outages under a number of projected future network operating conditions. Although one of the aims of this paper is to demonstrate the feasibility of simulating network vulnerability to cascading outages, a more important goal has been to determine vulnerable parts of the network that may potentially be strengthened in practice so as to mitigate system susceptibility to cascading failures. This paper proposes a systematic approach to analyze extreme events and identify vulnerable system elements that may be contributing to cascading outages. The hypothesis of critical events corridors is proposed to represent repeating sequential outages that can occur in the system for multiple initiating events. The new concept helps to identify system reinforcements that planners could engineer in order to 'break' the critical events sequences and therefore lessen the likelihood of cascading outages. This hypothesis has been successfully validated with a California power system model.
Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio
2018-06-01
Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-04
... would be located on National Forest System lands south of the city of Sisters, Oregon; east of the Three... acre Cascade Timberlands property which is being considered as a future Community Forest. The legal...
ORD's Sustainable & Healthy Communities (SHC) Nutrient research
Sustainable and Healthy Communities project 3.3.1, "Integrated Management of Reactive Nitrogen," aims to comprehensively examine the cascade of environmental, economic and human health problems stemming from excess reactive N. Our goals are to improve understanding of the impacts o...
James K. Agee; John F. (comps.) Lehmkuhl
2009-01-01
The Fire and Fire Surrogate (FFS) project is a large long-term metastudy established to assess the effectiveness and ecological impacts of burning and fire "surrogates" such as cuttings and mechanical fuel treatments that are used instead of fire, or in combination with fire, to restore dry forests. One of the 13 national FFS sites is the Northeastern...
An equivalent circuit model for terahertz quantum cascade lasers: Modeling and experiments
NASA Astrophysics Data System (ADS)
Yao, Chen; Xu, Tian-Hong; Wan, Wen-Jian; Zhu, Yong-Hao; Cao, Jun-Cheng
2015-09-01
Terahertz quantum cascade lasers (THz QCLs) emitting at 4.4 THz are fabricated and characterized. An equivalent circuit model is established based on five-level rate equations to describe their characteristics. To illustrate the capability of the model, the steady-state and dynamic performances of the fabricated THz QCLs are simulated with it. Compared with sophisticated numerical methods, the presented model has the advantages of fast calculation and good compatibility with circuit simulation for system-level design and optimization. The validity of the model is verified by the experimental and numerical results. Project supported by the National Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61404149), the Major National Development Project of Scientific Instrument and Equipment, China (Grant No. 2011YQ150021), the National Science and Technology Major Project, China (Grant No. 2011ZX02707), the Major Project, China (Grant No. YYYJ-1123-1), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology, China (Grant Nos. 14530711300).
MAPPING THE GAS TURBULENCE IN THE COMA CLUSTER: PREDICTIONS FOR ASTRO-H
DOE Office of Scientific and Technical Information (OSTI.GOV)
ZuHone, J. A.; Markevitch, M.; Zhuravleva, I.
2016-02-01
Astro-H will be able for the first time to map gas velocities and detect turbulence in galaxy clusters. One of the best targets for turbulence studies is the Coma cluster, due to its proximity, absence of a cool core, and lack of a central active galactic nucleus. To determine what constraints Astro-H will be able to place on the Coma velocity field, we construct simulated maps of the projected gas velocity and compute the second-order structure function, an analog of the velocity power spectrum. We vary the injection scale, dissipation scale, slope, and normalization of the turbulent power spectrum, and apply measurement errors and finite sampling to the velocity field. We find that even with sparse coverage of the cluster, Astro-H will be able to measure the Mach number and the injection scale of the turbulent power spectrum—the quantities determining the energy flux down the turbulent cascade and the diffusion rate for everything that is advected by the gas (metals, cosmic rays, etc.). Astro-H will not be sensitive to the dissipation scale or the slope of the power spectrum in its inertial range, unless they are outside physically motivated intervals. We give the expected confidence intervals for the injection scale and the normalization of the power spectrum for a number of possible pointing configurations, combining the structure function and velocity dispersion data. Importantly, we also determine that measurement errors on the line shift will bias the velocity structure function upward, and show how to correct this bias.
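A minimal sketch of the second-order structure function and of the upward noise bias noted in the last sentence: independent line-shift errors of standard deviation σ add approximately 2σ² to the measured structure function at every lag. The toy velocity field below is illustrative, not a cluster simulation.

```python
import numpy as np

def structure_function(vmap, max_lag):
    """Second-order structure function SF2(r) = <|v(x+r) - v(x)|^2> of a
    2D line-of-sight velocity map (x-offsets only, for brevity)."""
    return np.array([np.mean((vmap[:, r:] - vmap[:, :-r]) ** 2)
                     for r in range(1, max_lag + 1)])

rng = np.random.default_rng(2)
true = np.cumsum(rng.normal(0, 1, (64, 256)), axis=1)  # toy correlated field
sigma = 3.0                                            # line-shift error
noisy = true + rng.normal(0, sigma, true.shape)

sf_true = structure_function(true, 20)
sf_noisy = structure_function(noisy, 20)
# Independent measurement errors bias SF2 upward by ~2*sigma^2 at each lag:
print((sf_noisy - sf_true).mean(), 2 * sigma ** 2)     # both close to 18
```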
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic waverotor-topped engine, to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.
NASA Astrophysics Data System (ADS)
Ratto, Luca; Satta, Francesca; Tanda, Giovanni
2018-06-01
This paper presents an experimental and numerical investigation of heat transfer in the endwall region of a large-scale turbine cascade. The steady-state liquid crystal technique has been used to obtain the map of the heat transfer coefficient for a constant-heat-flux boundary condition. In the presence of two- and three-dimensional flows with significant spatial variations of the heat transfer coefficient, tangential heat conduction can lead to errors in the determination of the heat transfer coefficient, since local heat fluxes at the wall-to-fluid interface tend to differ from point to point and surface temperatures tend to be smoothed out, making the uniform-heat-flux boundary condition difficult to achieve perfectly. For this reason, numerical simulations of flow and heat transfer in the cascade, including the effect of tangential heat conduction inside the endwall, have been performed. The major objective of the numerical simulations was to investigate the influence of wall heat conduction on the convective heat transfer coefficient determined during a nominally iso-flux heat transfer experiment and to interpret possible differences between numerical and experimental heat transfer results. Results are presented and discussed in terms of the local Nusselt number and a convenient wall heat flux function for two values of the Reynolds number (270,000 and 960,000).
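A minimal sketch of the kind of thin-wall correction at issue: the locally convected flux is the generated flux plus the tangential conduction term k·t·∇²T_w, and the heat transfer coefficient follows from it. The finite-difference Laplacian, material values, and temperature map below are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def corrected_h(Tw, q_gen, T_ref, k_wall, thickness, dx):
    """Heat transfer coefficient from a nominally iso-flux experiment,
    with a thin-wall correction for tangential conduction:
    q_conv = q_gen + k * t * laplacian(Tw). Radiation and back-side
    losses are neglected in this sketch."""
    # Five-point Laplacian with periodic edges (adequate away from borders).
    lap = (np.roll(Tw, 1, 0) + np.roll(Tw, -1, 0) +
           np.roll(Tw, 1, 1) + np.roll(Tw, -1, 1) - 4 * Tw) / dx**2
    q_conv = q_gen + k_wall * thickness * lap
    return q_conv / (Tw - T_ref)

# Toy usage on a synthetic endwall temperature map (degrees C).
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
Tw = 60 + 10 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02)
h = corrected_h(Tw, q_gen=2000.0, T_ref=25.0, k_wall=15.0,
                thickness=0.002, dx=1.0 / 49)
print(h.mean())
```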
Inlet Turbulence and Length Scale Measurements in a Large Scale Transonic Turbine Cascade
NASA Technical Reports Server (NTRS)
Thurman, Douglas; Flegel, Ashlie; Giel, Paul
2014-01-01
Constant temperature hotwire anemometry data were acquired to determine the inlet turbulence conditions of a transonic turbine blade linear cascade. Flow conditions and angles were investigated that corresponded to the take-off and cruise conditions of the Variable Speed Power Turbine (VSPT) project and to an Energy Efficient Engine (EEE) scaled rotor blade tip section. Mean and turbulent flowfield measurements including intensity, length scale, turbulence decay, and power spectra were determined for high and low turbulence intensity flows at various Reynolds numbers and spanwise locations. The experimental data will be useful for establishing the inlet boundary conditions needed to validate turbulence models in CFD codes.
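A minimal sketch of how intensity and integral length scale are typically extracted from a hotwire record: intensity from the fluctuation RMS, the integral time scale from the autocorrelation integrated to its first zero crossing, and the length scale via Taylor's frozen-turbulence hypothesis. The signal and sampling rate below are synthetic, not the cascade data.

```python
import numpy as np

def turbulence_stats(u, fs):
    """Turbulence intensity and integral length scale from a hotwire
    velocity record u (m/s) sampled at fs (Hz)."""
    U = u.mean()
    up = u - U
    Tu = up.std() / U                              # turbulence intensity
    ac = np.correlate(up, up, mode="full")[up.size - 1:]
    ac = ac / ac[0]                                # normalized autocorrelation
    zero = np.argmax(ac <= 0)                      # first zero crossing
    zero = zero if zero > 0 else ac.size           # guard: no crossing found
    T_int = ac[:zero].sum() / fs                   # rectangle-rule integral
    return Tu, U * T_int                           # Taylor: L = U * T_int

# Synthetic record: 10 m/s mean flow with low-pass-filtered fluctuations.
rng = np.random.default_rng(3)
n, fs = 8192, 50_000.0
fluct = np.convolve(rng.normal(0, 1, n), np.ones(200) / 200, mode="same")
u = 10.0 + 3.0 * fluct
Tu, L = turbulence_stats(u, fs)
print(f"Tu = {Tu:.3f}, integral length scale = {L * 1000:.1f} mm")
```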
DEVELOPMENT AND LABORATORY PERFORMANCE EVALUATION OF A PERSONAL CASCADE IMPACTOR. (R825270)
TESTING THE GENERALITY OF A TROPHIC CASCADE MODEL FOR PLAGUE. (R829091)
Continuous wave power scaling in high power broad area quantum cascade lasers
NASA Astrophysics Data System (ADS)
Suttinger, M.; Leshin, J.; Go, R.; Figueiredo, P.; Shu, H.; Lyakh, A.
2018-02-01
Experimental and model results for high-power broad-area quantum cascade lasers are presented. Continuous wave power scaling from 1.62 W to 2.34 W has been experimentally demonstrated for 3.15-mm-long, high-reflection-coated 5.6 μm quantum cascade lasers with a 15-stage active region as the active region width was increased from 10 μm to 20 μm. A semi-empirical model for broad-area devices operating in continuous wave mode is presented. The model uses measured pulsed transparency current, injection efficiency, waveguide losses, and differential gain as input parameters. It also takes into account active region self-heating and the sub-linearity of the pulsed power vs current laser characteristic. The model predicts that an 11% improvement in maximum CW power and increased wall plug efficiency can be achieved for 3.15 mm × 25 μm devices with 21 stages of the same design but half the doping in the active region. For a 16-stage design with a reduced stage thickness of 300 Å, a pulsed roll-over current density of 6 kA/cm2, and InGaAs waveguide layers, an optical power increase of 41% is projected. Finally, the model projects that the power level can be increased to 4.5 W for 3.15 mm × 31 μm devices with the baseline configuration and T0 increased from 140 K for the present design to 250 K.
Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber
Acciarri, R.; Adams, C.; Asaadi, J.; ...
2017-03-09
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high-quality three-dimensional images. Our approach is based on a joint estimation of alignment errors and the object, using an iterative refinement procedure. With simulated data, where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using the singular perturbation method. For the known plant-actuator cascaded system, the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result from conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is of the order of this small parameter. For the unknown system, the adaptive counterpart is developed based on a prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
Anatomical background and generalized detectability in tomosynthesis and cone-beam CT.
Gang, G J; Tward, D J; Lee, J; Siewerdsen, J H
2010-05-01
Anatomical background presents a major impediment to detectability in 2D radiography as well as 3D tomosynthesis and cone-beam CT (CBCT). This article incorporates theoretical and experimental analysis of anatomical background "noise" in cascaded systems analysis of 2D and 3D imaging performance to yield "generalized" metrics of noise-equivalent quanta (NEQ) and detectability index as a function of the orbital extent of the (circular arc) source-detector orbit. A physical phantom was designed based on principles of fractal self-similarity to exhibit power-law spectral density (κ/f^β) comparable to various anatomical sites (e.g., breast and lung). Background power spectra [S_B(f)] were computed as a function of source-detector orbital extent, including tomosynthesis (~10°-180°) and CBCT (180° + fan to 360°), under two acquisition schemes: (1) constant angular separation between projections (variable dose) and (2) constant total number of projections (constant dose). The resulting S_B was incorporated in the generalized NEQ, and detectability index was computed from 3D cascaded systems analysis for a variety of imaging tasks. The phantom yielded power-law spectra within the expected spatial frequency range, quantifying the dependence of clutter magnitude (κ) and correlation (β) on increasing tomosynthesis angle. Incorporation of S_B in the 3D NEQ provided a useful framework for analyzing the tradeoffs among anatomical, quantum, and electronic noise with dose and orbital extent. Distinct implications are posed for breast and chest tomosynthesis imaging system design: these applications vary significantly in κ, β, and imaging task and, therefore, in the optimal selection of orbital extent, number of projections, and dose. For example, low-frequency tasks (e.g., soft-tissue masses or nodules) tend to benefit from larger orbital extent and more fully 3D tomographic imaging, whereas high-frequency tasks (e.g., microcalcifications) require careful, application-specific selection of orbital extent and number of projections to minimize the negative effects of quantum and electronic noise. The complex tradeoffs among anatomical background, quantum noise, and electronic noise in projection imaging, tomosynthesis, and CBCT can be described by generalized cascaded systems analysis, providing a useful framework for system design and optimization.
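As a concrete illustration of how a power-law background enters a generalized detectability calculation, the sketch below evaluates the standard task-based index d'^2 = ∫ |W(f)|² MTF²(f) / [S_Q + S_E + S_B(f)] df with S_B(f) = κ/f^β. The detector MTF, noise magnitudes, clutter parameters, and Gaussian task functions are all invented stand-ins, not the paper's measured values.

```python
"""Toy generalized-detectability calculation with power-law anatomical
background S_B(f) = kappa / f^beta. All numbers are illustrative."""
import numpy as np

f = np.linspace(0.05, 5.0, 500)              # spatial frequency (mm^-1)
MTF = np.abs(np.sinc(f / 5.0))               # toy detector MTF (assumed form)
S_Q, S_E = 1e-5, 2e-6                        # quantum / electronic NPS (a.u., assumed)

def detectability(kappa, beta, task_width_mm):
    S_B = kappa / f**beta                    # power-law anatomical clutter
    W = np.exp(-(np.pi * task_width_mm * f)**2)   # Gaussian task function
    integrand = W**2 * MTF**2 / (S_Q + S_E + S_B)
    return np.trapz(2.0 * np.pi * f * integrand, f)   # radially symmetric 2-D integral

# A broad, low-frequency task (nodule) vs a fine, high-frequency task:
for width_mm in (5.0, 0.2):
    d2 = detectability(kappa=1e-4, beta=3.0, task_width_mm=width_mm)
    print(f"task width {width_mm:4.1f} mm  ->  d'^2 = {d2:.3g}")
```

With these toy numbers the broad task is limited mainly by the anatomical term S_B, while the fine task is limited by quantum and electronic noise, mirroring the qualitative conclusions above.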
Current Problems in Turbomachinery Fluid Dynamics.
1982-05-21
Research Center. It is thought to result from the termination of the 3-D bow shock as the relative blade Mach number decreases from tip to hub. This low...project emphasized development of at least a plausible inverse scheme for mixed supersonic, subsonic flow with the possibility of shock waves appearing..."Calculation Procedure for Shock-Free or Strong Passage Shock Turbomachinery Cascades," ASME paper 82-GT-220. The next phase of this project was expected to
NASA Astrophysics Data System (ADS)
Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.
2011-12-01
The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which are known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ~16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.
Broad area quantum cascade lasers operating in pulsed mode above 100 °C λ ∼4.7 μm
NASA Astrophysics Data System (ADS)
Zhao, Yue; Yan, Fangliang; Zhang, Jinchuan; Liu, Fengqi; Zhuo, Ning; Liu, Junqi; Wang, Lijun; Wang, Zhanguo
2017-07-01
We demonstrate a broad area (400 μm) high power quantum cascade laser (QCL). A total peak power of 62 W at room temperature is achieved at λ ∼4.7 μm. The temperature dependence of the peak power is characterized experimentally, and the temperature of the active zone is simulated by the finite element method (FEM). We find that the interface roughness of the active core has a great effect on the temperature of the active zone and can be substantially improved using the solid-source molecular beam epitaxy (MBE) growth system. Project supported by the National Basic Research Program of China (No. 2013CB632801), the National Key Research and Development Program (No. 2016YFB0402303), the National Natural Science Foundation of China (Nos. 61435014, 61627822, 61574136, 61306058, 61404131), the Key Projects of Chinese Academy of Sciences (No. ZDRW-XH-20164), and the Beijing Natural Science Foundation (No. 4162060).
Knight, Danica; Wasserman, Gail A.; Dennis, Michael L.; Wiley, Tisha; Taxman, Faye S.; Oser, Carrie; Dembo, Richard; Robertson, Angela A.; Sales, Jessica
2017-01-01
Overview Substance use and substance use disorders are highly prevalent among youth under juvenile justice (JJ) supervision, and related to delinquency, psychopathology, social problems, risky sex and sexually transmitted infections, and health problems. However, numerous gaps exist in the identification of behavioral health (BH) problems and in the subsequent referral, initiation and retention in treatment for youth in community justice settings. This reflects both organizational and systems factors, including coordination between justice and BH agencies. Methods and Results This paper presents a new framework, the Juvenile Justice Behavioral Health Services Cascade (“Cascade”), for measuring unmet substance use treatment needs to illustrate how the cascade approach can be useful in understanding service delivery issues and identifying strategies to improve treatment engagement and outcomes for youth under community JJ supervision. We discuss the organizational and systems barriers for linking delinquent youth to BH services, and explain how the Cascade can help understand and address these barriers. We provide a detailed description of the sequential steps and measures of the Cascade, and then offer an example of its application from the Juvenile Justice – Translational Research on Interventions for Adolescents in the Legal System project (JJ-TRIALS), a multi-site research cooperative funded by the National Institute on Drug Abuse. Conclusion As illustrated with substance abuse treatment, the Cascade has potential for informing and guiding efforts to improve behavioral health service linkages for adolescent offenders, developing and testing interventions and policies to improve interagency and cross-systems coordination, and informing the development of measures and interventions for improving the implementation of treatment in complex multisystem service settings. PMID:28132705
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
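The spectral diagnostic used here, perturbation kinetic energy as a function of scale, is straightforward to compute from ensemble output. A toy 1-D version, with a synthetic ensemble standing in for the 100 WRF members, might look like the following sketch.

```python
"""Toy perturbation kinetic energy (PKE) spectrum from ensemble deviations,
to diagnose error growth by scale. Field and ensemble are synthetic."""
import numpy as np

rng = np.random.default_rng(0)
n_members, nx = 100, 256
base = np.cos(2 * np.pi * np.arange(nx) / nx * 4)        # large-scale flow
ens = base + 0.1 * rng.standard_normal((n_members, nx))  # member winds u_i (assumed)

def pke_spectrum(u):
    """PKE per wavenumber from ensemble deviations u' = u - ensemble mean."""
    up = u - u.mean(axis=0)                  # deviations from ensemble mean
    spec = np.abs(np.fft.rfft(up, axis=1))**2 / u.shape[1]
    return 0.5 * spec.mean(axis=0)           # KE, averaged over members

E = pke_spectrum(ens)
print("PKE at wavenumbers 1..8:", E[1:9])
```

Tracking how this spectrum evolves with forecast lead time is what distinguishes upscale error propagation from immediate amplification at all scales.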
Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B
2017-08-01
Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than in other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% of other claims (P = 0.001), with no significant difference in payment size. Among the 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.
Fujiwara, T
2012-01-01
Unlike in urban areas where intensive water reclamation systems are available, development of decentralized technologies and systems is required for water use to be sustainable in agricultural areas. To overcome various water quality issues in those areas, a research project entitled 'Development of an innovative water management system with decentralized water reclamation and cascading material-cycle for agricultural areas under the consideration of climate change' was launched in 2009. This paper introduces the concept of this research and provides detailed information on each of its research areas: (1) development of a diffuse agricultural pollution control technology using catch crops; (2) development of a decentralized differentiable treatment system for livestock and human excreta; and (3) development of a cascading material-cycle system for water pollution control and value-added production. The author also emphasizes that the innovative water management system for agricultural areas should incorporate a strategy for the voluntary collection of bio-resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
KINEROS2 – AGWA Suite of Modeling Tools
KINEROS2 (K2) originated in the 1960s as a distributed event-based rainfall-runoff erosion model abstracting the watershed as a cascade of overland flow elements contributing to channel model elements. Development and improvement of K2 has continued for a variety of projects and ...
2012-01-01
Background Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have commonly been studied in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is the impact of arm position variation, which changes the EMG patterns produced when performing identical motions in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and the solutions for reducing these impacts. Methods With five unilateral transradial (TR) amputees, the EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms when performing six classes of arm and hand movements in each of the five arm positions considered in the study. The effect of arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. Then the performance of three proposed methods in attenuating the impact of arm positions was evaluated. Results With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% for amputated arms, respectively, about 1.0% and 10% lower than those from intact arms. While ACC-MMG signals could yield a similar intra-position classification error (9.9%) to EMG, they had a much higher inter-position classification error, with an average value of 81.1% over the arm positions and subjects. When the EMG data from all five arm positions were included in the training set, the average classification error reached a value of around 10.8% for amputated arms. Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions The performance of EMG pattern-recognition based methods in classifying movements strongly depends on arm position. This dependency is somewhat stronger in the intact arm than in the amputated arm, which suggests that investigations associated with practical use of a myoelectric prosthesis should use limb amputees as subjects instead of able-bodied subjects. A two-stage cascade classifier, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049
Methods to achieve accurate projection of regional and global raster databases
Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.
2002-01-01
This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
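As a hedged illustration of theoretical goal (1), the sketch below shows the core arithmetic behind a latitude-adaptive ("dynamic") grid: in a plain geographic raster the east-west ground extent of a cell shrinks as cos(latitude), so widening the longitude step by 1/cos(latitude) keeps cells approximately equal-sized on the ground. This is one rendering of the stated principle, not the authors' actual projection formulas.

```python
"""Cell ground dimensions vs latitude, fixed vs latitude-adaptive longitude
step. An illustration of the 'dynamic projection' idea; numbers are examples."""
import math

R = 6371.0088          # mean Earth radius, km
dlat = 0.5             # cell height in degrees (fixed, assumed)

def cell_km(lat_deg, dlon_deg):
    """Approximate ground width/height (km) of a lat/lon cell at lat_deg."""
    h = math.radians(dlat) * R
    w = math.radians(dlon_deg) * R * math.cos(math.radians(lat_deg))
    return w, h

for lat in (0, 30, 60, 80):
    fixed_w, h = cell_km(lat, 0.5)                  # fixed 0.5 deg longitude step
    adaptive_dlon = 0.5 / math.cos(math.radians(lat))   # widened step
    adj_w, _ = cell_km(lat, adaptive_dlon)
    print(f"lat {lat:2d} deg: fixed {fixed_w:5.1f} km, "
          f"adaptive {adj_w:5.1f} km x {h:4.1f} km")
```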
Arbuthnott, Alexis E; Lewis, Stephen P; Bailey, Heidi N
2015-01-01
This study examined relations between repeated rumination trials and emotions in nonsuicidal self-injury (NSSI) and eating disorder behaviors (EDBs) within the context of the emotional cascade model (Selby, Anestis, & Joiner, 2008). Rumination was repeatedly induced in 342 university students (79.2% female, mean age = 18.61 years, standard error = .08); negative and positive emotions were reported after each rumination trial. Repeated measures analyses of variance were used to examine the relations between NSSI and EDB history and changes in emotions. NSSI history was associated with greater initial increases in negative emotions, whereas EDB history was associated with greater initial decreases in positive emotions. Baseline negative emotional states and trait emotion regulation mediated the relation between NSSI/EDB history and emotional states after rumination. Although NSSI and EDBs share similarities in emotion dysregulation, differences also exist. Both emotion dysregulation and maladaptive cognitive processes should be targeted in treatment for NSSI and EDBs. © 2014 Wiley Periodicals, Inc.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is the one that exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
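For context, the two baseline allocators the patent compares against are easy to state in code. The sketch below implements a minimum-norm pseudoinverse allocation and a simple cascaded generalized inverse (CGI) that re-solves after clipping saturated effectors; the 3×5 effectiveness matrix and limits are arbitrary examples, and this is not the patented method itself.

```python
"""Baseline control allocators: pseudoinverse and cascaded generalized
inverse (CGI). Effectiveness matrix, limits, and demand are examples."""
import numpy as np

B = np.array([[1.0, 0.5, 0.0, -0.4, 0.2],   # rolling-moment effectiveness
              [0.0, 0.8, 1.0,  0.3, 0.1],   # pitching-moment effectiveness
              [0.2, 0.0, 0.4,  1.0, 0.6]])  # yawing-moment effectiveness
u_min, u_max = -1.0, 1.0                    # effector position limits
d = np.array([1.5, 1.0, 2.0])               # desired moments

def pinv_alloc(B, d):
    return np.linalg.pinv(B) @ d            # minimum-norm; ignores limits

def cgi_alloc(B, d, iters=5):
    u = np.zeros(B.shape[1])
    free = np.ones(B.shape[1], dtype=bool)
    for _ in range(iters):
        # re-solve for unsaturated effectors, given the frozen ones
        u[free] = np.linalg.pinv(B[:, free]) @ (d - B[:, ~free] @ u[~free])
        sat = (u < u_min) | (u > u_max)
        if not sat.any():
            break
        u = np.clip(u, u_min, u_max)
        free &= ~sat                         # freeze saturated effectors
    return u

u_cgi = cgi_alloc(B, d)
print("pseudoinverse:", pinv_alloc(B, d))
print("CGI:          ", u_cgi, " moment error:", np.linalg.norm(B @ u_cgi - d))
```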
Structurally Controlled Geothermal Systems in the Central Cascades Arc-Backarc Regime, Oregon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wannamaker, Philip E.
The goal of this project has been to analyze available magnetotelluric (MT) geophysical surveys, structural geology based on mapping and LiDAR, and fluid geochemical data, to identify high-temperature fluid upwellings, critically stressed rock volumes, and other evidence of structurally controlled geothermal resources. Data were to be integrated to create conceptual models of volcanic-hosted geothermal resources along the Central Cascades arc segment, especially in the vicinity of Mt. Jefferson to the Three Sisters. LiDAR data sets available at Oregon State University (OSU) allowed detailed structural geology modeling through forest canopy. Copious spring and well fluid chemistries, including isotopes, were modeled using Geo-T and TOUGHREACT software.
The KINEROS2 – AGWA Suite of modeling tools
USDA-ARS?s Scientific Manuscript database
KINEROS2 (K2) originated in the 1960s as a distributed event-based rainfall-runoff erosion model abstracting the watershed as a cascade of overland flow elements contributing to channel model elements. Development and improvement of K2 has continued for a variety of projects and purposes resulting i...
ERIC Educational Resources Information Center
Lear, Rick
2007-01-01
This article describes how the Coalition of Essential Schools Northwest/Small Schools Project (CESNW/SSP) works with schools and districts to help them shape and then implement a coherent strategy that will lead to a redesigned high school system. The author highlights efforts taking place in two multiple high school districts: (1) Cascades School…
ERIC Educational Resources Information Center
Schon, Jennifer A.; Eitel, Karla B.; Bingaman, Deirdre; Miller, Brant G.; Rittenburg, Rebecca A.
2014-01-01
Donnelly, Idaho, is a small town surrounded by private ranches and Forest Service property. Through the center of Donnelly runs Boulder Creek, a small tributary feeding into Cascade Lake Reservoir. Boulder Creek originates from a mountain lake north of Donnelly. Since 1994 it has been listed as "impaired" by the Environmental Protection…
A root cause analysis project in a medication safety course.
Schafer, Jason J
2012-08-10
To develop, implement, and evaluate team-based root cause analysis projects as part of a required medication safety course for second-year pharmacy students. Lectures, in-class activities, and out-of-class reading assignments were used to develop students' medication safety skills and introduce them to the culture of medication safety. Students applied these skills within teams by evaluating cases of medication errors using root cause analyses. Teams also developed error prevention strategies and formally presented their findings. Student performance was assessed using a medication errors evaluation rubric. Of the 211 students who completed the course, the majority performed well on root cause analysis assignments and rated them favorably on course evaluations. Medication error evaluation and prevention was successfully introduced in a medication safety course using team-based root cause analysis projects.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.
Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
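A heavily simplified sketch of the joint estimation idea, for orientation only: alternate between reconstructing from the currently shift-corrected sinogram and re-estimating each projection's detector-axis shift against the reprojection of that reconstruction. Plain unfiltered backprojection and integer 1-D cross-correlation stand in for the paper's actual reconstruction and refinement machinery.

```python
"""Toy joint alignment-and-reconstruction loop for parallel-beam data.
Object, jitter, and reconstruction method are simplified stand-ins."""
import numpy as np
from scipy.ndimage import rotate

def project(img, angles):
    return np.stack([rotate(img, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def backproject(sino, angles):
    n = sino.shape[1]
    rec = np.zeros((n, n))
    for row, a in zip(sino, angles):
        rec += rotate(np.tile(row, (n, 1)), a, reshape=False, order=1)
    return rec / len(angles)

def est_shift(a, b):
    """Integer d maximizing correlation of a with roll(b, d), via FFT."""
    c = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=a.size)
    d = int(np.argmax(c))
    return d if d <= a.size // 2 else d - a.size

n = 64
img = np.zeros((n, n))
img[20:40, 25:45] = 1.0                                   # toy object
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
rng = np.random.default_rng(1)
jitter = rng.integers(-3, 4, size=angles.size)            # unknown stage errors
sino = np.stack([np.roll(p, j) for p, j in zip(project(img, angles), jitter)])

shifts = np.zeros(angles.size, dtype=int)                 # current estimates
for _ in range(5):                                        # joint refinement loop
    corrected = np.stack([np.roll(p, -s) for p, s in zip(sino, shifts)])
    reproj = project(backproject(corrected, angles), angles)
    shifts += np.array([est_shift(corrected[i], reproj[i])
                        for i in range(angles.size)])
print("mean |shift error| (pixels):", np.abs(shifts - jitter).mean())
```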
CaRe high - Cascade screening and registry for high cholesterol in Germany.
Schmidt, Nina; Grammer, Tanja; Gouni-Berthold, Ioanna; Julius, Ulrich; Kassner, Ursula; Klose, Gerald; König, Christel; Laufs, Ulrich; Otte, Britta; Steinhagen-Thiessen, Elisabeth; Wanner, Christoph; März, Winfried
2017-11-01
Familial hypercholesterolemia (FH) is an inherited disorder of LDL metabolism, leading to cardiovascular disease, even at a young age. This risk can be significantly lowered by early diagnosis and treatment. About 270,000 patients affected in Germany are not diagnosed correctly, and only a small number are treated properly. To improve FH diagnosis in the general population, cascade screening and registry data are warranted, yet missing in Germany. This project aims to fill this gap. Study assistants approach physicians and lipid clinics to introduce the cascade screening and registry. The physicians identify potential FH patients and include them in the study. Patient data are acquired via questionnaires about medical history. Patients meeting at least two inclusion criteria (LDL-C >190 mg/dl or total cholesterol >290 mg/dl; tendon xanthomas; family history of hypercholesterolemia or early myocardial infarction) are included in the registry. Family members will be contacted, and physicians get feedback about diagnosis and treatment options. Ethical approvals for all German states have been collected. So far, physicians, lipid clinics, and patients within the Rhein-Neckar region, the Saarland, North Rhine-Westphalia, Upper Bavaria, Bremen, Saxony, and Berlin have joined the study. We expect to include more than 3000 patients during the next two years. After initial patient and data collection, the project aims to improve FH diagnosis and treatment. Utilizing registry data might advance diagnostic criteria and improve detection of FH and thus prevent CVD in this population. Copyright © 2017. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.
Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
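The correction procedure lends itself to a short numerical sketch: fit a Chebyshev polynomial to normalized whole-AR flux versus disk radial distance, then divide a measurement by the fitted average projection factor at its position. The synthetic data below merely stand in for the 30,845 HMI measurements.

```python
"""Chebyshev fit of center-to-limb projection error and its removal.
Synthetic data replace the real normalized flux measurements."""
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(2)
r = rng.uniform(0.0, 0.87, 3000)             # radial distance (R_sun); ~sin(60 deg)
true_falloff = 1.0 - 0.35 * r**2             # stand-in center-to-limb behavior
norm_flux = true_falloff * (1 + 0.05 * rng.standard_normal(r.size))

coef = C.chebfit(r, norm_flux, deg=4)        # fit average projection factor

def corrected_flux(measured_flux, radial_distance):
    """Remove the average projection error at this disk position."""
    return measured_flux / C.chebval(radial_distance, coef)

print("correction factor at r = 0.8:", 1.0 / C.chebval(0.8, coef))
```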
Fermi Observations of γ-Ray Emission from the Moon
NASA Astrophysics Data System (ADS)
Abdo, A. A.; Ackermann, M.; Ajello, M.; Atwoo, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bonamente, E.; Borgland, A. W.; Bottacini, E.; Bouvier, A.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; D'Ammando, F.; de Angelis, A.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Favuzzi, C.; Fegan, S. J.; Focke, W. B.; Fortin, P.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gehrels, N.; Germani, S.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Gomez-Vargas, G. A.; Grenier, I. A.; Grove, J. E.; Guiriec, S.; Hadasch, D.; Hays, E.; Hill, A. B.; Horan, D.; Hou, X.; Hughes, R. E.; Iafrate, G.; Jackson, M. S.; Jóhannesson, G.; Johnson, A. S.; Kamae, T.; Katagiri, H.; Kataoka, J.; Knödlseder, J.; Kuss, M.; Lande, J.; Larsson, S.; Latronico, L.; Lemoine-Goumard, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Mehault, J.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Naumann-Godo, M.; Nolan, P. L.; Norris, J. P.; Nuss, E.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Ozaki, M.; Paneque, D.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Pierbattista, M.; Piron, F.; Pivato, G.; Poon, H.; Porter, T. A.; Prokhorov, D.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Reposeur, T.; Rochester, L. S.; Roth, M.; Sadrozinski, H. F.-W.; Sanchez, D. A.; Sbarra, C.; Schalk, T. L.; Sgrò, C.; Share, G. H.; Siskind, E. J.; Spandre, G.; Spinelli, P.; Stawarz, Ł.; Takahashi, H.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Uchiyama, Y.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wang, P.; Winer, B. L.; Wood, D. L.; Wood, K. S.; Yang, Z.; Zimmer, S.
2012-10-01
We report on the detection of high-energy γ-ray emission from the Moon during the first 24 months of observations by the Fermi Large Area Telescope (LAT). This emission comes from particle cascades produced by cosmic-ray (CR) nuclei and electrons interacting with the lunar surface. The differential spectrum of the Moon is soft and can be described as a log-parabolic function with an effective cutoff at 2-3 GeV, while the average integral flux measured with the LAT from the beginning of observations in 2008 August to the end of 2010 August is F(>100 MeV) = (1.04 ± 0.01 [statistical error] ± 0.1 [systematic error]) × 10^-6 cm^-2 s^-1. This flux is about a factor of 2-3 higher than that observed between 1991 and 1994 by the EGRET experiment on board the Compton Gamma Ray Observatory, F(>100 MeV) ≈ 5 × 10^-7 cm^-2 s^-1, when solar activity was relatively high. The higher γ-ray flux measured by Fermi is consistent with the deep solar minimum conditions during the first 24 months of the mission, which reduced the effects of heliospheric modulation and thus increased the heliospheric flux of Galactic CRs. A detailed comparison of the light curve with McMurdo Neutron Monitor rates suggests a correlation of the trends. The Moon and the Sun are so far the only known bright emitters of γ-rays with fast celestial motion. Their paths across the sky are projected onto the Galactic center and high Galactic latitudes as well as onto other areas crowded with high-energy γ-ray sources. Analysis of the lunar and solar emission may thus be important for studies of weak and transient sources near the ecliptic.
Validation of Innovative Exploration Technologies for Newberry Volcano: Drill Site Location Map 2010
Jaffe, Todd
2012-01-01
Newberry seeks to explore "blind" (no surface evidence) convective hydrothermal systems associated with a young silicic pluton on the flanks of Newberry Volcano. This project will employ a combination of innovative and conventional techniques to identify the location of subsurface geothermal fluids associated with the hot pluton. Newberry project drill site location map, 2010. Once the exploration methodology is validated, it can be applied throughout the Cascade Range and elsewhere to locate and develop "blind" geothermal resources.
NASA Astrophysics Data System (ADS)
Zhao, C.; Vassiljev, N.; Konstantinidis, A. C.; Speller, R. D.; Kanicki, J.
2017-03-01
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm^-1) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
Final Report: Correctness Tools for Petascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
2014-10-27
In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, the University of Maryland, the University of Wisconsin-Madison, and Rice University worked to develop lightweight tools to help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is a final report about research and development work at Rice University as part of this project.
Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A
2006-11-01
Patient safety is an increasingly visible and important mission for clinical laboratories. Attention to improving processes related to patient identification and specimen labeling is being paid by accreditation and regulatory organizations because errors in these areas that jeopardize patient safety are common and avoidable through improvement in the total testing process. To assess patient identification and specimen labeling improvement after multiple implementation projects using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months) for a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. Student t test showed a significant decrease in the most serious error, mislabeled specimens (P < .001) when compared to before implementation of the 3 patient safety projects. Trend analysis demonstrated decreases in all 3 error types for 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, therefore improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.
NASA Astrophysics Data System (ADS)
Olson, J.; Kenyon, J.; Brown, J. M.; Angevine, W. M.; Marquis, M.; Pichugina, Y. L.; Choukulkar, A.; Bonin, T.; Banta, R. M.; Bianco, L.; Djalalova, I.; McCaffrey, K.; Wilczak, J. M.; Lantz, K. O.; Long, C. N.; Redfern, S.; McCaa, J. R.; Stoelinga, M.; Grimit, E.; Cline, J.; Shaw, W. J.; Lundquist, J. K.; Lundquist, K. A.; Kosovic, B.; Berg, L. K.; Kotamarthi, V. R.; Sharp, J.; Jiménez, P.
2017-12-01
The Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR) are NOAA real-time operational hourly updating forecast systems run at 13- and 3-km grid spacing, respectively. Both systems use the Advanced Research version of the Weather Research and Forecasting model (WRF-ARW) as the model component of the forecast system. During the second installment of the Wind Forecast Improvement Project (WFIP 2), the RAP/HRRR have been targeted for the improvement of low-level wind forecasts in the complex terrain within the Columbia River Basin (CRB), which requires much finer grid spacing to resolve important terrain peaks in the Cascade Mountains as well as the Columbia River Gorge. This project therefore provides a unique opportunity to test and develop the RAP/HRRR physics suite within a very high-resolution nest (Δx = 750 m) over the northwestern US. Special effort is made to incorporate scale-aware aspects into the model physical parameterizations to improve RAP/HRRR wind forecasts for any application at any grid spacing. Many wind profiling and scanning instruments were deployed in the CRB in support of the WFIP 2 field project, which spanned 01 October 2015 to 31 March 2017. During the project, several forecast error modes were identified, such as: (1) too-shallow cold pools during the cool season, which can mix out more frequently than observed, and (2) a low wind speed bias in thermal trough-induced gap flows during the warm season. Development has focused on the column-based turbulent mixing scheme to improve upon these biases, but investigating the effects of horizontal (and 3D) mixing has also helped improve some of the common forecast failure modes. This presentation will highlight the testing and development of various model components, showing the improvements over original versions for temperature and wind profiles. Case studies and retrospective periods will be presented to illustrate the improvements. We will demonstrate that the improvements made in WFIP 2 are extendable to other regions, whether in complex or flat terrain. Ongoing and future challenges in RAP/HRRR physics development will also be touched upon.
Blood transfusion sampling and a greater role for error recovery.
Oldham, Jane
Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.
A recent Cleanroom success story: The Redwing project
NASA Technical Reports Server (NTRS)
Hausler, Philip A.
1992-01-01
Redwing is the largest completed Cleanroom software engineering project in IBM, both in terms of lines of code and project staffing. The product provides a decision-support facility that utilizes artificial intelligence (AI) technology for predicting and preventing complex operating problems in an MVS environment. The project used the Cleanroom process for development and realized a defect rate of 2.6 errors/KLOC, measured from first execution. This represents the total number of errors found in testing and installation at three field test sites. Development productivity was 486 LOC/PM, which included all development labor expended from design specification through completion of incremental testing. In short, the Redwing team produced a complex systems software product with an extraordinarily low error rate, while maintaining high productivity. All of this was accomplished by a project team using Cleanroom for the first time. An "introductory implementation" of Cleanroom was defined and used on Redwing. This paper describes the quality and productivity results, the Redwing project, and how Cleanroom was implemented.
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R² = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R² = 0.99; P < .05), as well as for the entire procedure (R² = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
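The geometric core of projective standardization can be sketched as homography estimation from landmark correspondences. Below is a minimal direct-linear-transform (DLT) version with invented landmark coordinates; the intensity normalization step the study also performed is omitted.

```python
"""Homography from >= 4 landmark correspondences (DLT), as used to map
follow-up image geometry onto the baseline before subtraction.
Landmark coordinates are invented for illustration."""
import numpy as np

def homography_dlt(src, dst):
    """src, dst: (N, 2) corresponding landmarks, N >= 4. Returns 3x3 H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)               # null vector holds the H entries
    return H / H[2, 2]

def map_point(H, p):
    q = H @ np.array([p[0], p[1], 1.0])    # homogeneous transform
    return q[:2] / q[2]

src = np.array([[10.0, 10.0], [100.0, 12.0], [95.0, 90.0], [8.0, 85.0]])  # follow-up
dst = np.array([[12.0, 11.0], [101.0, 10.0], [97.0, 93.0], [9.0, 88.0]])  # baseline
H = homography_dlt(src, dst)
print("landmark 0 maps to", map_point(H, src[0]), "target", dst[0])
```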
Washington Geothermal Play Fairway Analysis Data From Potential Field Studies
Anderson, Megan; Ritzinger, Brent; Glen, Jonathan; Schermerhorn, William
2017-12-20
A recent study adapting play fairway analysis (PFA) methodology to assess geothermal potential was conducted at three locations (Mount Baker, Mount St. Helens seismic zone, and Wind River valley) along the Washington Cascade Range (Forson et al. 2017). Potential field (gravity and magnetic) methods, which can detect subsurface contrasts in physical properties, provide a means for mapping and modeling subsurface geology and structure. As part of the WA-Cascade PFA project, we performed potential field studies by collecting high-resolution gravity and ground-magnetic data and rock property measurements to (1) identify and constrain fault geometries, (2) constrain subsurface lithologic distribution, (3) study fault interactions, (4) identify areas favorable to hydrothermal flow, and ultimately (5) guide future geothermal exploration at each location.
NASA Astrophysics Data System (ADS)
Zhao, L. N.; Liu, J.; Yuan, Y.; Hu, X. P.; Zhao, G.; Gao, Z. D.; Zhu, S. N.
2012-03-01
We present a high-power red-green-blue (RGB) laser light source based on cascaded quasi-phase-matched wavelength conversions in a single stoichiometric lithium tantalate crystal. The advantages of the experimental setup are that the incident beam spot is elliptical, to increase the interaction volume, and that the cavity is an idler-resonant configuration, for more efficient red and blue light output. An average power of 2 W of quasi-white light was obtained by proper combination of the three RGB colors. The conversion efficiency, defined as the quasi-white-light power over the pump power, reached 36%. This efficient and powerful RGB laser light source has potential applications in laser-based projection displays.
Investigations of the polarization behavior of quantum cascade lasers by Stokes parameters.
Janassek, Patrick; Hartmann, Sébastien; Molitor, Andreas; Michel, Florian; Elsäßer, Wolfgang
2016-01-15
We experimentally investigate the full polarization behavior of mid-infrared emitting quantum cascade lasers (QCLs) by measuring the complete set of Stokes parameters, instead of only projecting the emission onto a linear polarization basis. We demonstrate that besides the predominant linear TM polarization of the emitted light, as governed by the selection rules of the intersubband transition, small non-TM contributions, e.g., circularly polarized light, are present, reflecting the birefringent behavior of the semiconductor quantum well waveguide. Notably, these polarization properties persist even well below laser threshold. These investigations give further insight into understanding, manipulating, and exploiting the polarization properties of QCLs, both from a laser physics point of view and with respect to applications.
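For reference, the Stokes parameters the authors measure can be recovered from the standard six-intensity scheme, sketched below with illustrative numbers for nearly TM-polarized light carrying a small circular component (these values are not the paper's data).

```python
"""Stokes parameters from the standard six intensity measurements.
Intensity values are illustrative, not measured data."""
import numpy as np

# Intensities behind: linear polarizer at 0/90/45/135 degrees, and right/left
# circular analyzers (quarter-wave plate + polarizer). Assumed example values.
I0, I90, I45, I135, IR, IL = 0.98, 0.02, 0.52, 0.48, 0.53, 0.47

S0 = I0 + I90            # total intensity
S1 = I0 - I90            # horizontal vs vertical linear
S2 = I45 - I135          # +45 vs -45 linear
S3 = IR - IL             # right vs left circular
dop = np.sqrt(S1**2 + S2**2 + S3**2) / S0   # degree of polarization
print(f"S = ({S0:.2f}, {S1:.2f}, {S2:.2f}, {S3:.2f}), DOP = {dop:.3f}")
```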
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-19
... Settlement; ACM Smelter and Refinery Site, Located in Cascade County, MT AGENCY: Environmental Protection... projected future response costs concerning the ACM Smelter and Refinery NPL Site (Site), Operable Unit 1..., Helena, MT 59626. Mr. Sturn can be reached at (406) 457-5027. Comments should reference the ACM Smelter...
A Never Ending Journey: Inclusive Education Is a Principle of Practice, Not an End Game
ERIC Educational Resources Information Center
Kozleski, Elizabeth B.; Yu, Ting; Satter, Allyson L.; Francis, Grace L.; Haines, Shana J.
2015-01-01
A team from Schoolwide Integrated Framework for Transformation (SWIFT), a federally funded technical assistance project focused on creating cascading, aligned systems for inclusive education, conducted a series of focus groups and interviews with school administrators, general and special educators, and related service providers in six schools…
Nate the Great: Administrator of the Year
ERIC Educational Resources Information Center
Oleck, Joan
2007-01-01
This article profiles Nate Greenberg, a superintendent of the Londonderry (NH) School District who viewed action research as a means for supporting libraries, which he sees as central to education. During the past six years, Londonderry's cascade of action-research projects (led by librarians) has tackled challenges such as how to get high school…
BRDF-dependent accuracy of array-projection-based 3D sensors.
Heist, Stefan; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther
2017-03-10
In order to perform high-speed three-dimensional (3D) shape measurements with structured light systems, high-speed projectors are required. One possibility is an array projector, which allows pattern projection at several tens of kilohertz by switching on and off the LEDs of various slide projectors. The different projection centers require a separate analysis, as the intensity received by the cameras depends on the projection direction and the object's bidirectional reflectance distribution function (BRDF). In this contribution, we investigate the BRDF-dependent errors of array-projection-based 3D sensors and propose an error compensation process.
Cascade Distillation Subsystem Development: Progress Toward a Distillation Comparison Test
NASA Technical Reports Server (NTRS)
Callahan, M. R.; Lubman, A.; Pickering, Karen D.
2009-01-01
Recovery of potable water from wastewater is essential for the success of long-duration manned missions to the Moon and Mars. Honeywell International and a team from NASA Johnson Space Center (JSC) are developing a wastewater processing subsystem that is based on centrifugal vacuum distillation. The wastewater processor, referred to as the Cascade Distillation Subsystem (CDS), utilizes an innovative and efficient multistage thermodynamic process to produce purified water. The rotary centrifugal design of the system also provides gas/liquid phase separation and liquid transport under microgravity conditions. A five-stage subsystem unit has been designed, built, delivered and integrated into the NASA JSC Advanced Water Recovery Systems Development Facility for performance testing. A major test objective of the project is to demonstrate the advancement of the CDS technology from the breadboard level to a subsystem level unit. An initial round of CDS performance testing was completed in fiscal year (FY) 2008. Based on FY08 testing, the system is now in development to support an Exploration Life Support (ELS) Project distillation comparison test expected to begin in early 2009. As part of the project objectives planned for FY09, the system will be reconfigured to support the ELS comparison test. The CDS will then be challenged with a series of human-generated waste streams representative of those anticipated for a lunar outpost. This paper provides a description of the CDS technology, a status of the current project activities, and data on the system's performance to date.
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.
1993-01-01
A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.
NASA Astrophysics Data System (ADS)
Marconcini, Michele; Pacciani, Roberto; Arnone, Andrea
2015-11-01
The aerodynamic performance of a gas turbine nozzle vane cascade was investigated over a range of Mach and Reynolds numbers. The work is part of a vast research project aimed at the analysis of fluid dynamics and heat transfer phenomena in cooled blades. In this paper computed results on the "solid vane" (without cooling devices) are presented and discussed in comparison with experimental data. Detailed measurements were provided by the University of Bergamo, where the experimental campaign was carried out by means of a subsonic wind tunnel. The impact of boundary layer transition is investigated by using a novel laminar kinetic energy transport model and the widely used Langtry-Menter γ-Reθ,t model. The comparison between calculations and measurements is presented in terms of blade loading distributions, total pressure loss coefficient contours downstream of the cascade, and velocity/turbulence-intensity profiles within the boundary layer at selected blade surface locations at mid-span. It is shown that the transitional calculations compare favorably with experiments.
The perceptual shaping of anticipatory actions.
Maffei, Giovanni; Herreros, Ivan; Sanchez-Fibla, Marti; Friston, Karl J; Verschure, Paul F M J
2017-12-20
Humans display anticipatory motor responses to minimize the adverse effects of predictable perturbations. A widely accepted explanation for this behaviour relies on the notion of an inverse model that, learning from motor errors, anticipates corrective responses. Here, we propose and validate the alternative hypothesis that anticipatory control can be realized through a cascade of purely sensory predictions that drive the motor system, reflecting the causal sequence of the perceptual events preceding the error. We compare both hypotheses in a simulated anticipatory postural adjustment task. We observe that adaptation in the sensory domain, but not in the motor one, supports the robust and generalizable anticipatory control characteristic of biological systems. Our proposal unites the neurobiology of the cerebellum with the theory of active inference and provides a concrete implementation of its core tenets with great relevance both to our understanding of biological control systems and, possibly, to their emulation in complex artefacts. © 2017 The Author(s).
Hamel, Louis-Philippe; Nicole, Marie-Claude; Duplessis, Sébastien; Ellis, Brian E.
2012-01-01
Mitogen-activated protein kinases (MAPKs) are evolutionarily conserved proteins that function as key signal transduction components in fungi, plants, and mammals. During interaction between phytopathogenic fungi and plants, fungal MAPKs help to promote mechanical and/or enzymatic penetration of host tissues, while plant MAPKs are required for activation of plant immunity. However, new insights suggest that MAPK cascades in both organisms do not operate independently but that they mutually contribute to a highly interconnected molecular dialogue between the plant and the fungus. As a result, some pathogenesis-related processes controlled by fungal MAPKs lead to the activation of plant signaling, including the recruitment of plant MAPK cascades. Conversely, plant MAPKs promote defense mechanisms that threaten the survival of fungal cells, leading to a stress response mediated in part by fungal MAPK cascades. In this review, we make use of the genomic data available following completion of whole-genome sequencing projects to analyze the structure of MAPK protein families in 24 fungal taxa, including both plant pathogens and mycorrhizal symbionts. Based on conserved patterns of sequence diversification, we also propose the adoption of a unified fungal MAPK nomenclature derived from that established for the model species Saccharomyces cerevisiae. Finally, we summarize current knowledge of the functions of MAPK cascades in phytopathogenic fungi and highlight the central role played by MAPK signaling during the molecular dialogue between plants and invading fungal pathogens. PMID:22517321
Forest dynamics in Oregon landscapes: Evaluation and application of an individual-based model
Busing, R.T.; Solomon, A.M.; McKane, R.B.; Burdick, C.A.
2007-01-01
The FORCLIM model of forest dynamics was tested against field survey data for its ability to simulate basal area and composition of old forests across broad climatic gradients in western Oregon, USA. The model was also tested for its ability to capture successional trends in ecoregions of the west Cascade Range. It was then applied to simulate present and future (1990-2050) forest landscape dynamics of a watershed in the west Cascades. Various regimes of climate change and harvesting in the watershed were considered in the landscape application. The model was able to capture much of the variation in forest basal area and composition in western Oregon even though temperature and precipitation were the only inputs that were varied among simulated sites. The measured decline in total basal area from tall coastal forests eastward to interior steppe was matched by simulations. Changes in simulated forest dominants also approximated those in the actual data. Simulated abundances of a few minor species did not match actual abundances, however. Subsequent projections of climate change and harvest effects in a west Cascades landscape indicated no change in forest dominance as of 2050. Yet, climate-driven shifts in the distributions of some species were projected. The simulation of both stand-replacing and partial-stand disturbances across western Oregon improved agreement between simulated and actual data. Simulations with fire as an agent of partial disturbance suggested that frequent fires of low severity can alter forest composition and structure as much or more than severe fires at historic frequencies. © 2007 by the Ecological Society of America.
Beam hardening correction in CT myocardial perfusion measurement
NASA Astrophysics Data System (ADS)
So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim
2009-05-01
This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
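The correction pipeline described in the abstract (threshold-segment, forward-project, estimate a polynomial error, back-project, subtract a scaled error image) maps onto a few array operations. Below is a minimal sketch under stated assumptions: scikit-image's radon/iradon (recent versions) stand in for the scanner's projection pair, and the threshold, polynomial coefficients, and scale factor are illustrative placeholders, not values from the paper.

```python
# Hedged sketch of an image-domain beam-hardening correction; threshold,
# polynomial coefficients, and scale are hypothetical placeholders.
import numpy as np
from skimage.transform import radon, iradon  # assumes scikit-image >= 0.19

def correct_beam_hardening(image, high_thresh=300.0,
                           poly=(0.0, 0.0, 1e-4), scale=1.0):
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

    # 1) Threshold to isolate high-attenuating material (bone/contrast).
    high = np.where(image > high_thresh, image, 0.0)

    # 2) Forward-project the segmented image.
    sino_high = radon(high, theta=theta)

    # 3) Estimate the BH error in each projection as a polynomial of the
    #    forward projection (a quadratic here, purely for illustration).
    c0, c1, c2 = poly
    sino_err = c0 + c1 * sino_high + c2 * sino_high ** 2

    # 4) Reconstruct an error image by back-projecting the estimated errors.
    err_image = iradon(sino_err, theta=theta, filter_name="ramp",
                       output_size=image.shape[0])

    # 5) Subtract a scaled error image from the original image.
    return image - scale * err_image
```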
Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.
2002-01-01
Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.
1981-04-01
78-C-0064, Project Task No. 2307/A4 61102 F. The performance period covered by this report was from 1 June 1980 to 31 March 1981. The project... Report R80-914 388-12, Sept. 1980. 2. Blair, M. F., D. A. Bailey and R. H. Schlinker: Development of a Large Scale Wind Tunnel for the Simulation of... Two-Dimensional Potential Cascade Flow Using Finite Area Methods. AIAA Journal, Vol. 18, No. 1, Jan. 1980. 5. Blackwell, B. F. and R. J. Moffat
Attitude output feedback control for rigid spacecraft with finite-time convergence.
Hu, Qinglei; Niu, Guanglin
2017-09-01
The main problem addressed is the quaternion-based attitude stabilization control of rigid spacecraft without angular velocity measurements in the presence of external disturbances and reaction wheel friction as well. As a stepping stone, an angular velocity observer is proposed for the attitude control of a rigid body in the absence of angular velocity measurements. The observer design ensures finite-time convergence of angular velocity state estimation errors irrespective of the control torque or the initial attitude state of the spacecraft. Then, a novel finite-time control law is employed as the controller in which the estimate of the angular velocity is used directly. It is then shown that the observer and the controlled system form a cascaded structure, which allows the application of the finite-time stability theory of cascaded systems to prove the finite-time stability of the closed-loop system. A rigorous analysis of the proposed formulation is provided and numerical simulation studies are presented to help illustrate the effectiveness of the angular-velocity observer for rigid spacecraft attitude control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Modified energy cascade model adapted for a multicrop Lunar greenhouse prototype
NASA Astrophysics Data System (ADS)
Boscheri, G.; Kacira, M.; Patterson, L.; Giacomelli, G.; Sadler, P.; Furfaro, R.; Lobascio, C.; Lamantea, M.; Grizzaffi, L.
2012-10-01
Models are required to accurately predict mass and energy balances in a bioregenerative life support system. A modified energy cascade model was used to predict outputs of a multi-crop (tomatoes, potatoes, lettuce and strawberries) Lunar greenhouse prototype. The model performance was evaluated against measured data obtained from several system closure experiments. The model predictions corresponded well to those obtained from experimental measurements for the overall system closure test period (five months), especially for biomass produced (0.7% underestimated), water consumption (0.3% overestimated) and condensate production (0.5% overestimated). However, the model was less accurate when the results were compared with data obtained from a shorter experimental time period, with 31%, 48% and 51% error for biomass uptake, water consumption, and condensate production, respectively, which were obtained under more complex crop production patterns (e.g. tall tomato plants covering part of the lettuce production zones). These results, together with a model sensitivity analysis highlighted the necessity of periodic characterization of the environmental parameters (e.g. light levels, air leakage) in the Lunar greenhouse.
Cascading effects of fishing on Galapagos rocky reef communities: reanalysis using corrected data
Sonnenholzner, Jorge I.; Ladah, Lydia B.; Lafferty, Kevin D.
2009-01-01
This article replaces Sonnenholzner et al. (2007; Mar Ecol Prog Ser 343:77–85), which was retracted on September 19, 2007, due to errors in entry of data on sea urchins. We sampled 10 highly fished and 10 (putatively) lightly fished shallow rocky reefs in the southeastern area of the Galapagos Marine Reserve, Ecuador. After the correction, these are the new results: there was a negative association between slate-pencil urchins Eucidaris galapagensis and non-coralline algae. In addition, pencil urchins were less abundant where there were many predators. An indirect positive association between predators and non-coralline algae occurred. Fishing appeared to affect this trophic cascade. The spiny lobster Panulirus penicillatus, the slipper lobster Scyllarides astori, and the Mexican hogfish Bodianus diplotaenia were significantly less abundant at highly fished sites. Urchin density was higher at highly fished sites. Non-coralline algae were nearly absent from highly fished sites, where a continuous carpet of the anemone Aiptasia sp. was recorded, and the algal assemblage was mainly structured by encrusting coralline and articulated calcareous algae.
Wall-resolved spectral cascade-transport turbulence model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Shaver, D. R.; Lahey, R. T.
A spectral cascade-transport model has been developed and applied to turbulent channel flows (Reτ = 550, 950, and 2000 based on friction velocity, uτ; or ReδM = 8,500; 14,800 and 31,000, based on the mean velocity and channel half-width). This model is an extension of a spectral model previously developed for homogeneous single and two-phase decay of isotropic turbulence and uniform shear flows; and a spectral turbulence model for wall-bounded flows without resolving the boundary layer. Data from direct numerical simulation (DNS) of turbulent channel flow was used to help develop this model and to assess its performance in the 1D direction across the channel width. The resultant spectral model is capable of predicting the mean velocity, turbulent kinetic energy and energy spectrum distributions for single-phase wall-bounded flows all the way to the wall, where the model source terms have been developed to account for the wall influence. We implemented the model into the 3D multiphase CFD code NPHASE-CMFD and the latest results are within reasonable error of the 1D predictions.
Wang, Jing; Qi, Minghao; Xuan, Yi; Huang, Haiyang; Li, You; Li, Ming; Chen, Xin; Jia, Qi; Sheng, Zhen; Wu, Aimin; Li, Wei; Wang, Xi; Zou, Shichang; Gan, Fuwan
2014-01-01
A novel silicon-on-insulator (SOI) polarization splitter-rotator (PSR) with a large fabrication tolerance is proposed based on cascaded multimode interference (MMI) couplers and an assisted mode-evolution taper. The tapers are designed to adiabatically convert the input TM0 mode into the TE1 mode, which is output as the TE0 mode after being processed by the subsequent MMI mode converter, 90-degree phase shifter (PS) and MMI 3 dB coupler. The numerical simulation results show that the proposed device has a < 0.5 dB insertion loss with < −17 dB crosstalk in the C optical communication band. Fabrication tolerance analysis is also performed with respect to the deviations of MMI coupler width, PS width, slab height and upper-cladding refractive index, showing that this device could work well even when affected by considerable fabrication errors. With such a robust performance over a large bandwidth, this device offers potential applications for CMOS-compatible polarization diversity, especially in the booming 100 Gb/s coherent optical communications based on silicon photonics technology. PMID:25402029
Predicting neutron damage using TEM with in situ ion irradiation and computer modeling
NASA Astrophysics Data System (ADS)
Kirk, Marquis A.; Li, Meimei; Xu, Donghua; Wirth, Brian D.
2018-01-01
We have constructed a computer model of irradiation defect production closely coordinated with TEM and in situ ion irradiation of Molybdenum at 80 °C over a range of dose, dose rate and foil thickness. We have reexamined our previous ion irradiation data to assign appropriate error and uncertainty based on more recent work. The spatially dependent cascade cluster dynamics model is updated with recent Molecular Dynamics results for cascades in Mo. After a careful assignment of both ion and neutron irradiation dose values in dpa, TEM data are compared for both ion and neutron irradiated Mo from the same source material. Using the computer model of defect formation and evolution based on the in situ ion irradiation of thin foils, the defect microstructure, consisting of densities and sizes of dislocation loops, is predicted for neutron irradiation of bulk material at 80 °C and compared with experiment. Reasonable agreement between model prediction and experimental data demonstrates a promising direction in understanding and predicting neutron damage using a closely coordinated program of in situ ion irradiation experiment and computer simulation.
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
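To make the parameter cascading idea concrete, here is a minimal sketch under stated assumptions: a toy model u'' = θu stands in for a PDE, Gaussian radial basis functions represent the dynamic process, the inner level solves for basis coefficients by penalized linear least squares, and the outer level optimizes θ. All names and values are illustrative, not the authors' implementation.

```python
# Hedged sketch of two nested optimization levels (parameter cascading).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 60)
y = np.cos(2.0 * x) + 0.05 * rng.standard_normal(x.size)  # theta_true = -4

m = np.linspace(0.0, np.pi, 15)           # basis centres
s = 0.25                                  # basis width
Phi = np.exp(-(x[:, None] - m) ** 2 / (2 * s ** 2))
Phi_xx = ((x[:, None] - m) ** 2 / s ** 4 - 1.0 / s ** 2) * Phi
lam = 10.0                                # PDE-fidelity penalty weight

def inner_coeffs(theta):
    # Inner level: fit coefficients to data while penalising the residual
    # of the governing equation u'' - theta * u = 0.
    A = np.vstack([Phi, np.sqrt(lam) * (Phi_xx - theta * Phi)])
    b = np.concatenate([y, np.zeros(x.size)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

def outer_objective(theta):
    # Outer level: data misfit with the coefficients profiled out.
    return np.sum((y - Phi @ inner_coeffs(theta)) ** 2)

res = minimize_scalar(outer_objective, bounds=(-10.0, -0.5), method="bounded")
print(f"estimated theta = {res.x:.3f} (true value -4)")
```

Profiling the coefficients out at the inner level is what distinguishes this approach from the baseline of numerically re-solving the PDE for every candidate θ.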
Gregory M. Filip; Alan Kanaskie; Will R. Littke; John Browning; Kristen L. Chadwick; David C. Shaw; Robin L. Mulvey
2014-01-01
Swiss needle cast (SNC), caused by the fungus Phaeocryptopus gaeumannii, is one of the most damaging diseases of coast Douglas-fir (Pseudotsuga menziesii var. menziesii) in the Pacific Northwest (Hansen and others 2000, Mainwaring and others 2005, Shaw and others 2011).
Deep groundwater mediates streamflow response to climate warming in the Oregon Cascades
Christina Tague; Gordon Grant; Mike Farrell; Janet Choate; Anne Jefferson
2008-01-01
Recent studies predict that projected climate change will lead to significant reductions in summer streamflow in the mountainous regions of the Western United States. Hydrologic modeling directed at quantifying these potential changes has focused on the magnitude and timing of spring snowmelt as the key control on the spatial temporal pattern of summer streamflow. We...
Broadscale assessment of aquatic species and habitats [Chapter 4
Danny C. Lee; James R. Sedell; Bruce F. Rieman; Russell F. Thurow; Jack E. Williams
1997-01-01
In this chapter, we report on a broad-scale scientific assessment of aquatic resources conducted as part of the Interior Columbia Basin Ecosystem Management Project. Our assessment area, collectively referred to as the Basin, includes the Columbia River Basin east of the crest of the Cascade Mountains (Washington, Oregon, Idaho, western Montana, and small portions of...
Estimated water use and general hydrologic conditions for Oregon, 1985 and 1990
Broad, T.M.; Collins, C.A.
1996-01-01
Water-use information is vital to planners, engineers, and hydrologists in water resources. This report is a compilation of water-use information for Oregon for calendar years 1985 and 1990. The report presents water-use data by geographic region for several categories of use, including public supply, domestic, commercial, industrial, mining, thermoelectric power, hydroelectric power, livestock, irrigation, reservoir evaporation, and wastewater treatment. Hydroelectric power is the only instream use discussed; all other uses are considered offstream. The Appendix presents 1985 and 1990 data by region and by drainage basin for the previously mentioned categories of use. The Cascade Range divides Oregon into two distinct climatic zones. The area west of the Cascade Range has an average annual precipitation that ranges from 40 to 200 inches, and precipitation in the area east of the Cascade Range ranges from 10 to 20 inches. The differences in precipitation and geology have an effect on the sources, uses, and amounts of water withdrawn. Most of the large public-supply systems west of the Cascade Range rely on surface water, whereas many of the large public-supply systems east of the Cascade Range rely on wells or springs. Irrigators west of the Cascade Range rely primarily on nearby surface-water sources; however, irrigators east of the Cascade Range use primarily surface water that commonly is delivered from distant sources through irrigation ditches. A variety of methods was used to estimate water-use information. Most withdrawals for public-water suppliers were metered; however, irrigation withdrawals usually were estimated by using information on crops, climate, application efficiencies, and conveyance losses. The accuracy of the estimated total withdrawal values for public supply was estimated to be within 4 percent of the values that would be obtained if all public-supply withdrawals were metered. Total withdrawals for irrigation were estimated to be within 40 percent of metered irrigation withdrawals. The estimates of error are presented to show the relative, rather than absolute, accuracy of the data for each water-use category. A total of 8,400 million gallons of water per day was withdrawn in Oregon during 1990, about 1,900 million gallons per day more than the 6,500 million gallons per day withdrawn in 1985. Whereas actual water use increased in 1990, the major differences between 1985 and 1990 were attributed to the inclusion of offstream fish hatcheries, the use of different crop coefficients to estimate irrigation, and the availability of more detailed information in the 1990 estimates. Surface-water withdrawals accounted for 92 percent of the total withdrawals in 1990; irrigation was the largest category of water use, accounting for 82 percent of the total withdrawals.
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the best suited protocols for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring any modifications to the application source by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, in most cases causing a cascading pattern of corruption that spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
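The detect-and-correct idea is easy to illustrate in miniature. The sketch below is a conceptual stand-in, not RedMPI's actual interface: with triple redundancy, the payloads produced by three replicas are compared elementwise, and a majority vote both flags and repairs a single corrupted copy.

```python
# Hedged sketch of majority voting over triple-redundant message buffers.
import numpy as np

def vote(replica_msgs):
    """Majority-vote three replica message buffers of equal shape."""
    a, b, c = (np.asarray(m) for m in replica_msgs)
    corrected = np.where(a == b, a, c)        # if a and b agree, c is outvoted
    corrupted = ~((a == b) & (b == c)).all()  # any disagreement => detection
    return corrected, corrupted

good = np.arange(8)
bad = good.copy()
bad[3] = -999                                 # inject a silent data corruption
msg, flagged = vote([good, bad, good])
print(msg, "corruption detected:", flagged)   # payload repaired, flag is True
```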
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error correcting code is advantageous in many ways: it allows reducing the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in critical safety applications, e.g., monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst errors is the so-called salt and pepper noise, which can largely be removed with error correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
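As an illustration of how an error-correcting code can protect a pixel's temporal grey-level sequence, the sketch below applies a Hamming(7,4) code to a binary pattern sequence so that any single misread frame at a pixel can be corrected. The specific code is our choice for illustration and not necessarily the one used in the paper.

```python
# Hedged sketch: Hamming(7,4) encoding/decoding of a per-pixel bit sequence.
import numpy as np

G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])             # systematic generator matrix
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])             # parity-check matrix

def encode(nibble):
    return (np.array(nibble) @ G) % 2       # 4 data bits -> 7 pattern bits

def decode(word):
    word = np.array(word).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():                      # syndrome matches a column of H
        bad = np.where((H.T == syndrome).all(axis=1))[0][0]
        word[bad] ^= 1                      # flip the located bad bit
    return word[:4]

sent = encode([1, 0, 1, 1])                 # fringe-order bits for one pixel
noisy = sent.copy(); noisy[5] ^= 1          # one frame misread at this pixel
assert (decode(noisy) == [1, 0, 1, 1]).all()
```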
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
The Effect of Data Quality on Short-term Growth Model Projections
David Gartner
2005-01-01
This study was designed to determine the effect of FIA's data quality on short-term growth model projections. The data from Georgia's 1996 statewide survey were used for the Southern variant of the Forest Vegetation Simulator to predict Georgia's first annual panel. The effect of several data error sources on growth modeling prediction errors...
NASA Technical Reports Server (NTRS)
1982-01-01
An effective data collection methodology for evaluating software development methodologies was applied to four different software development projects. Goals of the data collection included characterizing changes and errors, characterizing projects and programmers, identifying effective error detection and correction techniques, and investigating ripple effects. The data collected consisted of changes (including error corrections) made to the software after code was written and baselined, but before testing began. Data collection and validation were concurrent with software development. Changes reported were verified by interviews with programmers.
Software errors and complexity: An empirical investigation
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Perricone, Berry T.
1983-01-01
The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Generating higher-order quantum dissipation from lower-order parametric processes
NASA Astrophysics Data System (ADS)
Mundhada, S. O.; Grimm, A.; Touzard, S.; Vool, U.; Shankar, S.; Devoret, M. H.; Mirrahimi, M.
2017-06-01
The stabilisation of quantum manifolds is at the heart of error-protected quantum information storage and manipulation. Nonlinear driven-dissipative processes achieve such stabilisation in a hardware efficient manner. Josephson circuits with parametric pump drives implement these nonlinear interactions. In this article, we propose a scheme to engineer a four-photon drive and dissipation on a harmonic oscillator by cascading experimentally demonstrated two-photon processes. This would stabilise a four-dimensional degenerate manifold in a superconducting resonator. We analyse the performance of the scheme using numerical simulations of a realisable system with experimentally achievable parameters.
NASA Technical Reports Server (NTRS)
Rosenberg, L.
1978-01-01
The paper presents an environmental analysis performed in evaluating various proposed geothermal demonstration projects at Desert Hot Springs. These are categorized as either (1) indirect or (2) direct uses. Among the former are greenhouses, industrial complexes, and car washes. The latter include aquaculture, a cascaded agribusiness system, and a mobile home park. Major categories of environmental impact covered are: (1) the site, (2) construction of the projects, and (3) the use of the geothermal source. Attention is also given to the disposal of the geothermal fluid after use. Finally, it is concluded that no major problems are foreseen for any project, and future objectives are discussed.
The Reliability and Sources of Error of Using Rubrics-Based Assessment for Student Projects
ERIC Educational Resources Information Center
Menéndez-Varela, José-Luis; Gregori-Giralt, Eva
2018-01-01
Rubrics are widely used in higher education to assess performance in project-based learning environments. To date, the sources of error that may affect their reliability have not been studied in depth. Using generalisability theory as its starting-point, this article analyses the influence of the assessors and the criteria of the rubrics on the…
USGS Blind Sample Project: monitoring and evaluating laboratory analytical quality
Ludtke, Amy S.; Woodworth, Mark T.
1997-01-01
The U.S. Geological Survey (USGS) collects and disseminates information about the Nation's water resources. Surface- and ground-water samples are collected and sent to USGS laboratories for chemical analyses. The laboratories identify and quantify the constituents in the water samples. Random and systematic errors occur during sample handling, chemical analysis, and data processing. Although all errors cannot be eliminated from measurements, the magnitude of their uncertainty can be estimated and tracked over time. Since 1981, the USGS has operated an independent, external, quality-assurance project called the Blind Sample Project (BSP). The purpose of the BSP is to monitor and evaluate the quality of laboratory analytical results through the use of double-blind quality-control (QC) samples. The information provided by the BSP assists the laboratories in detecting and correcting problems in the analytical procedures. The information also can aid laboratory users in estimating the extent that laboratory errors contribute to the overall errors in their environmental data.
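A minimal sketch of how double-blind QC results might be screened, assuming each blind sample has a known reference concentration and a historical analytical standard deviation; the function, limits, and numbers are illustrative, not the BSP's actual procedure.

```python
# Hedged sketch: flag blind-sample results whose standardised difference
# from the known reference concentration exceeds a control limit.
import numpy as np

def evaluate_blind_samples(reported, reference, sigma, limit=2.0):
    """Return per-sample z-scores and out-of-control flags."""
    z = (np.asarray(reported) - np.asarray(reference)) / sigma
    return z, np.abs(z) > limit

reported  = np.array([4.9, 5.3, 5.1, 6.4])   # lab results, mg/L (illustrative)
reference = np.array([5.0, 5.0, 5.0, 5.0])   # known blind concentrations
z, flags = evaluate_blind_samples(reported, reference, sigma=0.2)
print(z, flags)   # the 6.4 result falls well outside the +/-2 sigma limits
```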
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush
1999-01-01
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what constitutes sufficient testing and when is the most cost-effective time to stop testing.
75 FR 51155 - Notice of Projects Approved for Consumptive Uses of Water
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-18
...: June 11, 2010. 35. Anadarko E&P Company, LP; Pad ID: David C Duncan Pad A, ABR-20100635, Cascade.... Anadarko E&P Company, LP; Pad ID: COP Tract 289 C, ABR-20100636, McHenry Township, Lycoming County, Pa.... Cairo, General Counsel, telephone: (717) 238-0423, ext. 306; fax: (717) 238-2436; e-mail: [email protected
Cyclotrons and FFAG Accelerators as Drivers for ADS
Calabretta, Luciano; Méot, François
2015-01-01
Our review summarizes projects and studies on circular accelerators proposed for driving subcritical reactors. The early isochronous cyclotron cascades, proposed about 20 years ago, and the evolution of these layouts up to the most recent solutions or designs based on cyclotrons and fixed field alternating gradient accelerators, are reported. Additionally, the newest ideas and their prospects for development are discussed.
Aeroelastic stability and response of rotating structures
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.
1993-01-01
A summary of the work performed during the progress period is presented. Analysis methods for predicting loads and instabilities of wind turbines were developed. Three new areas of research to aid the Advanced Turboprop Project (ATP) were initiated and developed. These three areas of research are aeroelastic analysis methods for cascades including blade and disk flexibility; stall flutter analysis; and computational aeroelasticity.
Expeditious reconciliation for practical quantum key distribution
NASA Astrophysics Data System (ADS)
Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.
2004-08-01
The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
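The core primitive behind Cascade-style reconciliation is a binary search over block parities. The sketch below shows that primitive only, under stated assumptions: the parities are computed locally here for brevity, whereas in practice they are the messages exchanged over the classical channel, and the paper's improvements (FEC, segment elimination, massive arrays) are not shown.

```python
# Hedged sketch of the BINARY step used by Cascade-style reconciliation.
import numpy as np

def parity(bits, lo, hi):
    return int(np.sum(bits[lo:hi]) % 2)

def binary_correct(alice, bob, lo, hi):
    """Locate and flip one error in bob[lo:hi], given a parity mismatch."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid                      # the error lies in the left half
        else:
            lo = mid                      # otherwise it is in the right half
    bob[lo] ^= 1                          # flip the located error bit

rng = np.random.default_rng(1)
alice = rng.integers(0, 2, 64)
bob = alice.copy(); bob[37] ^= 1          # one channel error
if parity(alice, 0, 64) != parity(bob, 0, 64):
    binary_correct(alice, bob, 0, 64)
assert (alice == bob).all()
```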
Cascade Model of Ionization Multiplication of Electrons in Glow Discharge Plasma
NASA Astrophysics Data System (ADS)
Romanenko, V. A.; Solodky, S. A.; Kudryavtsev, A. A.; Suleymanov, I. A.
1996-10-01
For determination of the EDF in non-uniform fields, Monte-Carlo simulation (Tran Ngoc An et al., J. Phys. D: Appl. Phys. 10, 2317 (1977); J. P. Boeuf et al., J. Phys. D: Appl. Phys. 15, 2169 (1982)) is applied. As an alternative, a multi-beam cascade model (H. B. Valentini, Contrib. Plasma Phys. 27, 331 (1987)) has been offered. Our model eliminates the defects of that model and enables determination of the EDF of low-pressure plasma in non-uniform fields. A cascade model (with the EDF divided into monoenergetic electron groups) for an arbitrary electric potential profile was used. Modeling was carried out for electron forward scattering only and a constant electron mean free path; only ionization was considered. The equation system was solved for the region with kinetic energies above the ionization energy. The boundary conditions (on the ionization energy curve) take into account electron transitions from higher-lying levels into the region below the ionization energy, as well as secondary electron production. The problem solution was obtained in analytical functions. The insertion of additional processes does not present significant difficulties. The EDF and electrokinetic parameters in helium from numerical calculations agree well with those of the above-mentioned authors. This work was carried out with the support of RFFI (project N 96-02-18417).
Feature combination analysis in smart grid based using SOM for Sudan national grid
NASA Astrophysics Data System (ADS)
Bohari, Z. H.; Yusof, M. A. M.; Jali, M. H.; Sulaima, M. F.; Nasir, M. N. M.
2015-12-01
In the investigation of power grid security, cascading failure in multi-contingency situations has been a challenge because of its topological complexity and computational expense. Both system analyses and load-ranking methods have their limits. In this project, an integrated approach based on Self Organizing Maps (SOM) combines spatial feature (distance)-based clustering with electrical attributes (load) to evaluate the vulnerability and cascading impact of various component sets in the power grid. Using the clustering results from SOM, sets of heavily loaded initial victim nodes are selected to construct attack schemes and assess the subsequent cascading effects of their failures; this SOM-based approach identifies more vulnerable sets of substations than traditional load ranking and other clustering methods. The robustness of power grids is a central topic in the design of the so-called "smart grid". In this paper, we analyze the measures of importance of the nodes in a power grid under cascading failure. With these efforts, we can distinguish the most vulnerable nodes and protect them, improving the safety of the power grid. We can also assess whether a given structure is appropriate for power grids.
CasCADe: A Novel 4D Visualization System for Virtual Construction Planning.
Ivson, Paulo; Nascimento, Daniel; Celes, Waldemar; Barbosa, Simone Dj
2018-01-01
Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.
NASA Astrophysics Data System (ADS)
Yarnell, S. M.; Pope, K.; Podolak, K.; Wolf, E.; Burnett, R.
2016-12-01
Due to extensive livestock grazing and widespread removal of beaver and willows, headwater meadows have transformed from multi-thread channels with seasonally active floodplains into single thread, incised channels that store less carbon, retain less water, and are lower in habitat quality for a diverse suite of meadow-dependent wildlife. Meadow restoration techniques often include willow planting and cattle exclosures; however, few studies have rigorously tested the long-term efficacy of these methods or evaluated alternative restoration techniques such as reintroduction of beaver or installation of beaver dam analogues (BDAs). This project seeks to evaluate the installation of BDAs as a restoration technique in Childs Meadow, a heavily grazed meadow in the Cascade Range representative of low-gradient meadows across northern California. Using a before-after-control-impact study design, the study tests the impacts of two restoration techniques (willow planting with cattle exclusion and willow planting with cattle exclusion and BDAs) on hydrology, carbon sequestration, and sensitive species. Results will be compared with measurements in an unrestored section of the meadow that currently supports an active beaver population and two imperiled species (Cascades Frog and Willow Flycatcher). One specific project objective is to measure the response of hydrogeomorphic conditions (e.g. groundwater, surface water, temperature, habitat) and Cascades Frog and Willow Flycatcher to restorative actions. Pre-treatment data was collected in summer 2015, a cattle exclosure was established and willows were planted in fall 2015, and installation of the BDAs is planned for fall 2016. Three years of post-implementation monitoring will be completed to assess impacts of the treatments. Here, we will present our sampling design and first year results following initiation of the treatments.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE=1.0 (range 0.5-1.5) and an error in projected high air temperature dTa=2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs=0.8 °C.
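In code, the standard deviate method and the abstract's error budget reduce to a few lines; the temperature series below is illustrative.

```python
# Hedged sketch of the standard deviate estimate and its error budget.
import numpy as np

partial_maxima = np.array([27.1, 28.4, 26.9, 29.0, 27.7, 28.2])  # deg C
K_E = 7.5                                   # enveloping standard deviate (7-8)

T_extreme = partial_maxima.mean() + K_E * partial_maxima.std(ddof=1)

# Error budget using the sensitivities reported in the abstract:
dK_E, dTa = 1.0, 2.0                        # K_E uncertainty; air-temp error
dTs = 0.5 * dK_E + 0.16 * dTa               # ~0.5 C per unit K_E, 0.16 C per C
print(f"extreme estimate {T_extreme:.1f} C, projected error ~{dTs:.1f} C")
```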
Human factors evaluation of remote afterloading brachytherapy. Volume 2, Function and task analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callan, J.R.; Gwynne, J.W. III; Kelly, T.T.
1995-05-01
A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the first phase of the project, which involved an extensive function and task analysis of RAB. This analysis identified the functions and tasks in RAB, made preliminary estimates of the likelihood of human error in each task, and determined the skills needed to perform each RAB task. The findings of the function and task analysis served as the foundation for the remainder of the project, which evaluated four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training and qualifications of RAB staff; and organizational practices and policies. At its completion, the project identified and prioritized areas for recommended NRC and industry attention based on all of the evaluations and analyses.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1995-01-01
This report focuses on the results obtained during the PI's recent sabbatical leave at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, from January 1, 1995 through June 30, 1995. Two projects investigated various properties of TURBO codes, a new form of concatenated coding that achieves near channel capacity performance at moderate bit error rates. The performance of TURBO codes is explained in terms of the code's distance spectrum. These results explain both the near capacity performance of the TURBO codes and the observed 'error floor' for moderate and high signal-to-noise ratios (SNR's). A semester project, entitled 'The Realization of the Turbo-Coding System,' involved a thorough simulation study of the performance of TURBO codes and verified the results claimed by previous authors. A copy of the final report for this project is included as Appendix A. A diploma project, entitled 'On the Free Distance of Turbo Codes and Related Product Codes,' includes an analysis of TURBO codes and an explanation for their remarkable performance. A copy of the final report for this project is included as Appendix B.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine fringe order errors, which makes the method more reliable and avoids the search procedure used in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
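For orientation, the sketch below shows a generic two-frequency temporal phase unwrapping step with two simple fringe-order sanity checks (a range constraint and a median filter over the order map). These are illustrative stand-ins for the paper's two constraints, which differ in detail.

```python
# Hedged sketch of two-frequency temporal phase unwrapping with simple
# fringe-order checks; the constraints are illustrative, not the paper's.
import numpy as np
from scipy.signal import medfilt

def unwrap_two_freq(phi_high, phi_unit, freq_ratio):
    """phi_high: wrapped phase of the high-frequency pattern; phi_unit:
    absolute phase of a unit-frequency pattern; freq_ratio: their ratio."""
    k = np.rint((freq_ratio * phi_unit - phi_high) / (2 * np.pi))
    # Check 1: fringe orders must lie in [0, freq_ratio - 1].
    k = np.clip(k, 0, freq_ratio - 1)
    # Check 2: isolated jumps in k are likely order errors; replace them
    # with the median of their neighbourhood.
    k = medfilt(k, kernel_size=5)
    return phi_high + 2 * np.pi * k

ratio = 16
truth = np.linspace(0.0, 2 * np.pi * ratio, 500, endpoint=False)
unit = truth / ratio + 0.02 * np.random.default_rng(2).standard_normal(500)
recovered = unwrap_two_freq(truth % (2 * np.pi), unit, ratio)
print(np.abs(recovered - truth).max())    # 0 once all orders are correct
```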
Eccentricity error identification and compensation for high-accuracy 3D optical measurement
He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z
2016-01-01
The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement system. The identification and compensation of the circular target systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require taking care of the geometric parameters of the measurement system regarding target and camera. Therefore, the proposed approach is very flexible in practical applications, and in particular, it is also applicable in the case of only one image with a single target available. The experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation. PMID:26900265
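The eccentricity error itself is easy to reproduce numerically. The sketch below illustrates the error rather than the authors' concentric-circles correction: it projects a tilted 3D circle through a pinhole camera, fits the resulting ellipse, and compares the fitted centre with the true projection of the circle centre; all geometry values are illustrative.

```python
# Hedged sketch: measure the eccentricity error of a projected circle.
import numpy as np

f = 1000.0                                   # focal length, pixels
C = np.array([50.0, 30.0, 800.0])            # circle centre in camera frame
u = np.array([1.0, 0.0, 0.0])                # orthonormal basis of the
v = np.array([0.0, np.cos(0.6), np.sin(0.6)])  # tilted circle plane
r = 40.0

t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
P = C + r * (np.outer(np.cos(t), u) + np.outer(np.sin(t), v))
x, y = f * P[:, 0] / P[:, 2], f * P[:, 1] / P[:, 2]

# Fit a general conic a x^2 + b xy + c y^2 + d x + e y + g = 0 via SVD.
A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
a, b, c, d, e, g = np.linalg.svd(A)[2][-1]
den = 4 * a * c - b * b
ellipse_centre = np.array([(b * e - 2 * c * d) / den,
                           (b * d - 2 * a * e) / den])

true_centre = f * C[:2] / C[2]               # projection of the 3D centre
print("eccentricity error (pixels):", ellipse_centre - true_centre)
```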
NASA Technical Reports Server (NTRS)
Gejji, Raghvendra R.
1992-01-01
Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
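As a hedged illustration of the kind of statistical threshold described above (the report does not give its formula): one simple realization models collision counts in a monitoring window as Poisson and flags counts more than about three standard deviations above a measured baseline. The window, baseline, and numbers below are assumptions of this sketch, not values from the report.

```python
# Minimal sketch of a statistical collision threshold, assuming roughly
# Poisson-distributed collision counts per monitoring window.
from math import sqrt

def collisions_abnormal(count, baseline_mean):
    # Poisson standard deviation is sqrt(mean); flag ~3-sigma excursions
    return count > baseline_mean + 3 * sqrt(baseline_mean)

print(collisions_abnormal(140, 100))   # True: likely a real network problem
```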
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1996-01-01
We study a novel characterization of errors for numerical weather predictions. In its simplest form, we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing an objective function with respect to the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. We are initially focusing on the assessment application, restricted to a realistic but univariate two-dimensional situation. Specifically, we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
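A toy sketch of the distortion idea described above: split forecast error into a displacement part plus a bias by minimizing a squared residual. The real method fits smoothly varying displacement and bias-correction fields expanded in spherical harmonics; the uniform shift, periodic 1-D grid, and synthetic fields here are illustrative assumptions only.

```python
# Toy 1-D illustration (not the authors' code) of decomposing forecast error
# into a phase (displacement) component and a bias by brute-force minimization.
import numpy as np

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
analysis = np.sin(3 * x)                      # "verifying" anomaly field
forecast = np.sin(3 * (x - 0.2)) + 0.1        # phase-shifted and biased forecast

bias = forecast.mean() - analysis.mean()      # simple bias estimate (about 0.1)

def residual(shift_pts):
    # squared residual after undoing a uniform displacement and the bias
    return np.sum((np.roll(forecast, -shift_pts) - bias - analysis) ** 2)

best_shift = min(range(-32, 33), key=residual)   # ~8 grid points here
print("displacement (grid points):", best_shift, " bias:", round(bias, 3))
```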
Latest developments for low-power infrared laser-based trace gas sensors for sensor networks
NASA Astrophysics Data System (ADS)
So, Stephen; Thomazy, David; Wang, Wen; Marchat, Oscar; Wysocki, Gerard
2011-09-01
Academic and industrial researchers require ultra-low-power, compact, laser-based trace-gas sensor systems for the most demanding environmental and space-borne applications. Here the latest results from research projects addressing these applications are discussed: 1) an ultra-compact CO2 sensor based on a continuous-wave quantum cascade laser, 2) an ultra-sensitive Faraday rotation spectrometer for O2 detection, 3) a fully ruggedized, compact, low-power laser spectrometer, and 4) a novel non-paraxial, non-thin multipass cell. Preliminary tests and performance projections for future sensors based on this technology are presented.
Roch, N; Schwartz, M E; Motzoi, F; Macklin, C; Vijay, R; Eddins, A W; Korotkov, A N; Whaley, K B; Sarovar, M; Siddiqi, I
2014-05-02
The creation of a quantum network requires the distribution of coherent information across macroscopic distances. We demonstrate the entanglement of two superconducting qubits, separated by more than a meter of coaxial cable, by designing a joint measurement that probabilistically projects onto an entangled state. By using a continuous measurement scheme, we are further able to observe single quantum trajectories of the joint two-qubit state, confirming the validity of the quantum Bayesian formalism for a cascaded system. Our results allow us to resolve the dynamics of continuous projection onto the entangled manifold, in quantitative agreement with theory.
Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick
2007-11-01
We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-alpha) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-alpha model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of ≈ 1300. Analysis of the third-order structure function scaling supports the predicted l^3 scaling; it corresponds to a k^(-1) scaling of the energy spectrum for scales smaller than alpha. The energy spectrum itself shows a different scaling, which goes as k^(+1). This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-alpha model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) ~ k^(-1)] scaling is subdominant to k^(+1) in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-alpha attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-alpha model, or for obtaining a formulation of large eddy simulation optimal in the context of the alpha models. The fully converged grid-independent LANS-alpha model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using LANS-alpha instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-alpha model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large alpha. These small-scale features, however, do not preclude the LANS-alpha model from reproducing correctly the intermittency properties of high-Reynolds-number flow.
Changing drainage patterns within South Cascade Glacier, Washington, USA, 1964-1992
Fountain, A.G.; Vaughn, B.H.
1995-01-01
The theoretical patterns of water drainage are presented for South Cascade Glacier for four different years between 1964 and 1992, during which the glacier was thinning and receding. The theoretical pattern compares well, in a broad sense, with the flow pattern determined from tracer injections in 1986 and 1987. Differences between the patterns may result from the routing of surface meltwater in crevasses prior to entering the body of the glacier. The changing drainage pattern was caused by glacier thinning. The migration of a drainage divide eventually rerouted most of the surface meltwater from the main stream that drained the glacier in 1987 to another, formerly smaller, stream by 1992. On the basis of projected glacier thinning between 1992 and 1999, we predict that the drainage divide will continue to migrate across the glacier.
Custom map projections for regional groundwater models
Kuniansky, Eve L.
2017-01-01
For regional groundwater flow models (areas greater than 100,000 km2), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by applying a rate to cell area from model discretization) and length (rivers simulated with a head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone), without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections have been developed for different purposes, as all four properties cannot be preserved simultaneously. Preservation of area and length is most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the north-south length by 6 and placing the standard parallels 1/6th of that length above the southern extent and below the northern extent, preserves both area and length for continental areas in mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. One must also use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km2) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of an improper map projection is one model construction problem easily avoided.
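The standard-parallel rule described above reduces to simple arithmetic; the sketch below implements it. The latitude limits in the example call are hypothetical placeholders, not values from the abstract.

```python
# Minimal sketch of the rule for custom Albers equal-area conic standard
# parallels: place them one sixth of the north-south extent inside the
# southern and northern limits of the model domain.
def albers_standard_parallels(lat_south, lat_north):
    """Return the two standard parallels (degrees) for a custom Albers projection."""
    sixth = (lat_north - lat_south) / 6.0
    return lat_south + sixth, lat_north - sixth

sp1, sp2 = albers_standard_parallels(24.0, 35.0)   # hypothetical domain limits
print(f"standard_parallel_1={sp1:.2f}, standard_parallel_2={sp2:.2f}")
```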
Synthesis fidelity and time-varying spectral change in vowels
NASA Astrophysics Data System (ADS)
Assmann, Peter F.; Katz, William F.
2005-02-01
Recent studies have shown that synthesized versions of American English vowels are less accurately identified when the natural time-varying spectral changes are eliminated by holding the formant frequencies constant over the duration of the vowel. A limitation of these experiments has been that vowels produced by formant synthesis are generally less accurately identified than the natural vowels after which they are modeled. To overcome this limitation, a high-quality speech analysis-synthesis system (STRAIGHT) was used to synthesize versions of 12 American English vowels spoken by adults and children. Vowels synthesized with STRAIGHT were identified as accurately as the natural versions, in contrast with previous results from our laboratory showing identification rates 9%-12% lower for the same vowels synthesized using the cascade formant model. Consistent with earlier studies, identification accuracy was not reduced when the fundamental frequency was held constant across the vowel. However, elimination of time-varying changes in the spectral envelope using STRAIGHT led to a greater reduction in accuracy (23%) than was previously found with cascade formant synthesis (11%). A statistical pattern recognition model, applied to acoustic measurements of the natural and synthesized vowels, predicted both the higher identification accuracy for vowels synthesized using STRAIGHT compared to formant synthesis, and the greater effects of holding the formant frequencies constant over time with STRAIGHT synthesis. Taken together, the experiment and modeling results suggest that formant estimation errors and incorrect rendering of spectral and temporal cues by cascade formant synthesis contribute to lower identification accuracy and underestimation of the role of time-varying spectral change in vowels.
Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results
NASA Astrophysics Data System (ADS)
Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef
2017-04-01
The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself, but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results with focus on the geodetic part.
NASA Astrophysics Data System (ADS)
Tada, Kohei; Koga, Hiroaki; Okumura, Mitsutaka; Tanaka, Shingo
2018-06-01
Spin contamination error in the total energy of the Au2/MgO system was estimated using the density functional theory/plane-wave scheme and approximate spin projection methods. This is the first investigation in which the errors in chemical phenomena on a periodic surface are estimated. The spin contamination error of the system was 0.06 eV. This value is smaller than that of the dissociation of Au2 in the gas phase (0.10 eV). This is because of the destabilization of the singlet spin state due to the weakening of the Au-Au interaction caused by the Au-MgO interaction.
NASA Astrophysics Data System (ADS)
Wdowinski, S.; Peng, Z.; Ferrier, K.; Lin, C. H.; Hsu, Y. J.; Shyu, J. B. H.
2017-12-01
Earthquakes, landslides, and tropical cyclones are extreme hazards that pose significant threats to human life and property. Some of the couplings between these hazards are well known. For example, sudden, widespread landsliding can be triggered by large earthquakes and by extreme rainfall events like tropical cyclones. Recent studies have also shown that earthquakes can be triggered by erosional unloading over 100-year timescales. In a NASA supported project, titled "Cascading hazards: Understanding triggering relations between wet tropical cyclones, landslides, and earthquake", we study triggering relations between these hazard types. The project focuses on such triggering relations in Taiwan, which is subjected to very wet tropical storms, landslides, and earthquakes. One example for such triggering relations is the 2009 Morakot typhoon, which was the wettest recorded typhoon in Taiwan (2850 mm of rain in 100 hours). The typhoon caused widespread flooding and triggered more than 20,000 landslides, including the devastating Hsiaolin landslide. Six months later, the same area was hit by the 2010 M=6.4 Jiashian earthquake near Kaohsiung city, which added to the infrastructure damage induced by the typhoon and the landslides. Preliminary analysis of temporal relations between main-shock earthquakes and the six wettest typhoons in Taiwan's past 50 years reveals similar temporal relations between M≥5 events and wet typhoons. Future work in the project will include remote sensing analysis of landsliding, seismic and geodetic monitoring of landslides, detection of microseismicity and tremor activities, and mechanical modeling of crustal stress changes due to surface unloading.
Development of an Exploration-Class Cascade Distillation System: Flight Like Prototype Design Status
NASA Technical Reports Server (NTRS)
Sargusingh, Miriam C.; Callahan, Michael R.
2016-01-01
The ability to recover and purify water through physiochemical processes is crucial for realizing long-term human space missions, including both planetary habitation and space travel. Because of their robust nature, distillation systems have been actively pursued as one of the technologies for water recovery. One such technology is the Cascade Distillation System (CDS), a multi-stage vacuum rotary distiller system designed to recover water in a microgravity environment. The CDS provides a similar function to the state of the art (SOA) vapor compressor distiller (VCD) currently employed on the International Space Station, but its control scheme and ancillary components are judged to be more straightforward and simpler to implement into a more reliable and efficient system. Through the Advanced Exploration Systems (AES) Life Support Systems (LSS) Project, the NASA Johnson Space Center (JSC) in collaboration with Honeywell International is developing a second generation flight forward prototype (CDS 2.0). A preliminary design of the CDS 2.0 was presented to the project in September 2014. Following this review, detailed design of the system continued. The existing ground test prototype was used as a platform to demonstrate key 2.0 design and operational concepts to support this effort and mitigate design risk. A volumetric prototype was also developed to evaluate the packaging design for operability and maintainability. The updated system design was reviewed by the AES LSS Project and other key stakeholders in September 2015. This paper details the status of the CDS 2.0 design.
Demonstration of an 8×10-Gb/s OTDM system
NASA Astrophysics Data System (ADS)
Huo, Li; Yang, Yanfu; Lou, Caiyun; Gao, Yizhi
2005-03-01
An 8×10-Gb/s optical time-division-multiplexing (OTDM) system was demonstrated with three key components: an electroabsorption modulator (EAM) based short-pulse generator followed by a two-stage nonlinear compression scheme, which generated a stable 10-GHz, 2-ps full-width at half-maximum (FWHM) pulse train; an opto-electronic oscillator (OEO) that extracted a 10-GHz clock with a timing jitter of 300 fs from the 80-Gb/s OTDM signal; and a self-cascaded EAM that produced a switching window of about 10 ps. A back-to-back error-free demultiplexing experiment with a power penalty of 3.25 dB was carried out to verify the system performance.
ADRC for spacecraft attitude and position synchronization in libration point orbits
NASA Astrophysics Data System (ADS)
Gao, Chen; Yuan, Jianping; Zhao, Yakun
2018-04-01
This paper addresses the problem of spacecraft attitude and position synchronization in libration point orbits between a leader and a follower. Using dual quaternions, the dimensionless relative coupled dynamical model is derived with computational efficiency and accuracy in mind. A model-independent dimensionless cascaded pose-feedback active disturbance rejection controller is then designed for the spacecraft attitude and position tracking control problem, considering parameter uncertainties and external disturbances. Numerical simulations of the final approach phase in spacecraft rendezvous and docking and in formation flying are performed; the results show high-precision tracking and satisfactory convergence rates under bounded control torque and force, which validates the proposed approach.
Referees Often Miss Obvious Errors in Computer and Electronic Publications
NASA Astrophysics Data System (ADS)
de Gloucester, Paul Colin
2013-05-01
Misconduct is extensive and damaging. So-called science is prevalent. Articles resulting from so-called science are often cited in other publications. This can have damaging consequences for society and for science. The present work includes a scientometric study of 350 articles (published by the Association for Computing Machinery; Elsevier; The Institute of Electrical and Electronics Engineers, Inc.; John Wiley; Springer; Taylor & Francis; and World Scientific Publishing Co.). A lower bound of 85.4% of the articles are found to be incongruous. Authors cite inherently self-contradictory articles more than valid articles. Incorrect informational cascades ruin the literature's signal-to-noise ratio even for uncomplicated cases.
Premstaller, Georg; Cavedon, Valentina; Pisaturo, Giuseppe Roberto; Schweizer, Steffen; Adami, Vito; Righetti, Maurizio
2017-01-01
A hydropeaking mitigation project on the Valsura River in the Italian Alps is described. The project is of particular interest for several reasons. First, the Valsura torrent has braided morphological characteristics that are unique in the reach of the Adige valley between Merano and Bolzano, and it has good reproduction potential for fish, especially in the terminal stretch along a biotope before its confluence with the Adige River. Moreover, the Valsura hydropower cascade, which consists overall of six high-head hydropower plants, has exceptional economic importance for the local hydropower industry. Lastly, the last HPP in the cascade is a multipurpose plant, so interesting interactions between hydropeaking mitigation, irrigation supply and peak energy production are considered. The project started from a hydrological and limnological measuring campaign and from an energetic, hydraulic and legislative framework analysis. The ecological findings are combined into a deficit analysis, forming the basis for the definition of a hydrological target state, which aims to achieve good natural reproduction for brown trout in the hydropeaked stretch while fulfilling human safety conditions. Finally, mitigation measures are described that comply with the following manifold aspects: a. maintenance of the requested target limits for fish reproduction; b. maintenance of the water release for agricultural irrigation; c. enhancement of the flexibility of the hydropower plant's operation; d. reduction of the risk for the local population. The paper compares operational and constructive mitigation measures and shows that, for the present case study, constructive hydropeaking mitigation measures can combine the positive effects of ecological improvement with higher safety standards and more flexible energy production. Copyright © 2016 Elsevier B.V. All rights reserved.
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function. This gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
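A hedged sketch of the mechanism described above: a standard APA update followed by a zero-attractor step derived from a smooth l0-norm approximation. The step sizes, regularization constant, and the exponential form of the attractor are illustrative choices common in the sparsity-aware filtering literature, not the paper's exact formulation.

```python
# Sketch of one affine-projection update with a smoothed-l0 zero attractor.
import numpy as np

def sl0_apa_step(w, A, d, mu=0.5, delta=1e-3, rho=5e-4, beta=5.0):
    """w: (N,) tap estimate; A: (K, N) last K input vectors; d: (K,) desired outputs."""
    e = d - A @ w                                   # a priori error vector
    # standard APA correction (regularized projection onto the last K inputs)
    w = w + mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(len(d)), e)
    # zero attractor: gradient of sum(1 - exp(-beta*|w|)) pulls small taps to zero
    w = w - rho * beta * np.sign(w) * np.exp(-beta * np.abs(w))
    return w
```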
String Stability of a Linear Formation Flight Control System
NASA Technical Reports Server (NTRS)
Allen, Michael J.; Ryan, Jack; Hanson, Curtis E.; Parle, James F.
2002-01-01
String stability analysis of an autonomous formation flight system was performed using linear and nonlinear simulations. String stability is a measure of how position errors propagate from one vehicle to another in a cascaded system. In the formation flight system considered here, each i-th aircraft uses information from itself and the preceding (i-1)-th aircraft to track a commanded relative position. A possible solution for meeting performance requirements with such a system is to allow string instability. This paper explores two results of string instability and outlines analysis techniques for string-unstable systems. The three analysis techniques presented here are: linear, nonlinear formation performance, and ride quality. The linear technique was developed from a worst-case scenario and could be applied to the design of a string-unstable controller. The nonlinear formation performance and ride quality analysis techniques both use nonlinear formation simulation. Three of the four formation-controller gain-sets analyzed in this paper were limited more by ride quality than by performance. Formations of up to seven aircraft in a cascaded formation could be used in the presence of light gusts with this string-unstable system.
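A minimal numeric illustration of the string-instability notion defined above: if the worst-case per-vehicle error gain exceeds one at some frequency, position errors grow geometrically down the string. The gain value below is hypothetical, not a number from the paper.

```python
# Toy demonstration of error growth in a cascaded (leader-to-follower) string.
import numpy as np

def propagate(err0, n_vehicles, gain=1.15):
    # gain > 1 models a string-unstable worst-case frequency: each vehicle's
    # peak position error is `gain` times the preceding vehicle's error
    errs = [err0]
    for _ in range(n_vehicles - 1):
        errs.append(gain * errs[-1])
    return np.array(errs)

print(propagate(1.0, 7))   # the 7th aircraft's error is ~2.3x the leader's
```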
Coherence-length-gated distributed optical fiber sensing based on microwave-photonic interferometry.
Hua, Liwei; Song, Yang; Cheng, Baokai; Zhu, Wenge; Zhang, Qi; Xiao, Hai
2017-12-11
This paper presents a new optical fiber distributed sensing concept based on coherent microwave-photonics interferometry (CMPI), which uses a microwave-modulated coherent light source to interrogate cascaded interferometers for distributed measurement. By scanning the microwave frequencies, the complex microwave spectrum is obtained and converted to time-domain signals at known locations by a complex Fourier transform. The amplitudes of these time-domain pulses are a function of the optical path differences (OPDs) of the distributed interferometers. Cascaded fiber Fabry-Perot interferometers (FPIs) fabricated by femtosecond laser micromachining were used to demonstrate the concept. The experimental results indicated that the strain measurement resolution can be better than 0.6 µε using an FPI with a cavity length of 1.5 cm. Further improvement of the strain resolution to the nε level is achievable by increasing the cavity length of the FPI to over 1 m. The tradeoff between sensitivity and dynamic range was also analyzed in detail. To minimize errors induced by optical power instability (from either the light source or fiber loss), a single reflector was added in front of an individual FPI as an optical power reference for compensation.
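A numeric sketch of the signal model described above, assuming two ideal reflectors: sweeping the microwave frequency builds a complex spectrum whose inverse Fourier transform shows pulses at the reflector delays. The sweep range, delays, and amplitudes are illustrative assumptions, not experimental values.

```python
# Toy CMPI model: complex microwave spectrum of two reflectors -> time pulses.
import numpy as np

f = np.linspace(1e9, 5e9, 1024)                  # swept microwave frequencies (Hz)
delays = [20e-9, 35e-9]                          # hypothetical round-trip delays (s)
amps = [1.0, 0.6]
spectrum = sum(a * np.exp(-2j * np.pi * f * t) for a, t in zip(amps, delays))

signal = np.fft.ifft(spectrum)                   # time-domain pulse train
dt = 1.0 / (f[-1] - f[0])                        # time resolution = 1/bandwidth
peaks = np.argsort(np.abs(signal))[-2:]          # bins of the two strongest pulses
print(sorted(peaks * dt))                        # locations near 20 ns and 35 ns
```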
Aerial video mosaicking using binary feature tracking
NASA Astrophysics Data System (ADS)
Minnehan, Breton; Savakis, Andreas
2015-05-01
Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
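A hedged OpenCV sketch of the matching stage described above (ORB binary descriptors, Hamming-distance matching, RANSAC homography). This is not the authors' implementation; their frame-to-ground estimation, which avoids cascaded error, is only indicated in a comment.

```python
# Sketch of binary-feature matching and robust homography estimation between
# consecutive aerial frames, using standard OpenCV calls.
import cv2
import numpy as np

def interframe_homography(frame_prev, frame_cur):
    orb = cv2.ORB_create(nfeatures=1000)         # binary (ORB) keypoints/descriptors
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_cur, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # Hamming matching
    matches = matcher.match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outliers, e.g. keypoints on moving objects
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # NOTE: the paper instead estimates each frame-to-ground homography,
    # avoiding the error cascade of composing frame-to-frame estimates.
    return H
```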
DOT National Transportation Integrated Search
2012-03-01
This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...
Science for a wilder Anthropocene: Synthesis and future directions for trophic rewilding research.
Svenning, Jens-Christian; Pedersen, Pil B M; Donlan, C Josh; Ejrnæs, Rasmus; Faurby, Søren; Galetti, Mauro; Hansen, Dennis M; Sandel, Brody; Sandom, Christopher J; Terborgh, John W; Vera, Frans W M
2016-01-26
Trophic rewilding is an ecological restoration strategy that uses species introductions to restore top-down trophic interactions and associated trophic cascades to promote self-regulating biodiverse ecosystems. Given the importance of large animals in trophic cascades and their widespread losses and resulting trophic downgrading, it often focuses on restoring functional megafaunas. Trophic rewilding is increasingly being implemented for conservation, but remains controversial. Here, we provide a synthesis of its current scientific basis, highlighting trophic cascades as the key conceptual framework, discussing the main lessons learned from ongoing rewilding projects, systematically reviewing the current literature, and highlighting unintentional rewilding and spontaneous wildlife comebacks as underused sources of information. Together, these lines of evidence show that trophic cascades may be restored via species reintroductions and ecological replacements. It is clear, however, that megafauna effects may be affected by poorly understood trophic complexity effects and interactions with landscape settings, human activities, and other factors. Unfortunately, empirical research on trophic rewilding is still rare, fragmented, and geographically biased, with the literature dominated by essays and opinion pieces. We highlight the need for applied programs to include hypothesis testing and science-based monitoring, and outline priorities for future research, notably assessing the role of trophic complexity, interplay with landscape settings, land use, and climate change, as well as developing the global scope for rewilding and tools to optimize benefits and reduce human-wildlife conflicts. Finally, we recommend developing a decision framework for species selection, building on functional and phylogenetic information and with attention to the potential contribution from synthetic biology.
Buchini, Sara; Quattrin, Rosanna
2012-04-01
To record the frequency of interruptions and their causes, to identify 'avoidable' interruptions and to build an improvement project to reduce 'avoidable' interruptions. In Italy, 30,000-35,000 deaths per year are attributed to health-care system errors, of which 19% are caused by medication errors. The factors that contribute to drug management error also include interruptions and carelessness during treatment administration. A descriptive study design was used to record the frequency of interruptions and their causes and to identify 'avoidable' interruptions in an intensive rehabilitation ward in Northern Italy. A data collection grid was used to record the data over a 6-month period. A total of 3000 work hours were observed. During the study period 1170 interruptions were observed. The study identified 14 causes of interruption, of which at least nine can be defined as 'avoidable'. An improvement project has been proposed to reduce unnecessary interruptions and distractions and thereby avoid errors. An additional useful step to reduce the incidence of treatment errors would be to implement a single patient medication sheet for recording drug prescription, preparation and administration, together with incident reporting. © 2011 Blackwell Publishing Ltd.
A high-density remote reference magnetic variation profile in the Pacific northwest of North America
Hermance, J.F.; Lusi, S.; Slocum, W.; Neumann, G.A.; Green, A.W.
1989-01-01
During the summer of 1985, as part of the EMSLAB Project, Brown University conducted a detailed magnetic variation study of the Oregon Coast Range and Cascades volcanic system along an E-W profile in central Oregon. Comprising a sequence of 75 remote reference magnetic variation (MV) stations spaced 3-4 km apart, the profile stretched for 225 km from Newport, on the Oregon coast, across the Coast Range, the Willamette Valley, and the High Cascades to a point approximately 50 km east of Santiam Pass. At all of the MV stations, data were collected for short periods (16-100 s), and at 17 of these stations data were also obtained at longer periods (100-1600 s). Data were monitored with a three-component ring core fluxgate magnetometer (Nanotesla) and were recorded with a microcomputer (DEC PDP 11/73) based data acquisition system. A 2-D generalized inversion of the magnetic transfer coefficients over the period range of 16-1600 s indicates four distinct conductors. First, we see the coast effect caused by a large sedimentary wedge offshore. Second, we see the effect of currents flowing in the conductive sediments of the Willamette Valley. Our inversion suggests that the Willamette Valley consists of two electrically distinct features, due perhaps to a horst-like structure imprinted on the valley sediments. Next we note an electric current system centered beneath the High Cascades. This latter feature may be associated with a sediment-filled graben beneath Santiam Pass, as suggested by some of the gravity and MT results reported to date. Finally, we detect the presence of a deep conductor at mid-crustal depths which extends laterally westward from beneath the Basin and Range Province and terminates beneath the western Cascades. One view of this last result is that modern Basin and Range structure appears to be imprinted on pre-existing Cascade structure. © 1989.
The Southern Washington Cascades magmatic system imaged with magnetotellurics
NASA Astrophysics Data System (ADS)
Bowles-martinez, E.; Bedrosian, P.; Schultz, A.; Hill, G. J.; Peacock, J.
2016-12-01
The goal of the interdisciplinary iMUSH project (Imaging Magma Under Saint Helens) is to image the magmatic system of Mount Saint Helens (MSH), and to determine the relationship of this system to the greater Cascades volcanic arc. We are especially interested in an anomalously conductive crustal zone between MSH and Mount Adams known as the Southern Washington Cascades Conductor (SWCC), which early studies interpreted as accreted sediments, but more recently has been interpreted as a broad region of partial melt. MSH is located 50 km west of the main arc and is the most active of the Cascade volcanoes. Its 1980 eruption highlighted the need to understand this potentially hazardous volcanic system. We use wideband magnetotelluric (MT) data collected in 2014-2015 along with data from earlier studies to create a 3D model of the electrical resistivity throughout the region, covering MSH as well as Mount Adams and Mount Rainier along the main volcanic arc. We look at not only the volcanoes themselves, but also their relationship to one another and to regional geologic structures. Preliminary modeling identifies several conductive features, including a mid-crustal conductive region between MSH and Mount Adams that passes below Indian Heaven Volcanic Field and coincides with a region with a high Vp/Vs ratio identified in the seismic component of iMUSH. This suggests that it could be magmatic, but does not preclude the possibility of conductive sediments. Synthesis of seismic and MT data to address this question is ongoing. We also note a conductive zone running north-south just west of MSH that is likely associated with fluids within faults of the Saint Helens Seismic Zone. We finally note that curvature of the conductive lineament that defines the main Cascade arc suggests that the boundary of magmatism is influenced by compression within the Yakima Fold and Thrust Belt, east and southeast of Mount Adams.
Evaluating the utility of dynamical downscaling in agricultural impacts projections
Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J.
2014-01-01
Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections. PMID:24872455
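For readers unfamiliar with bias correction, below is a generic empirical quantile-mapping sketch of the kind commonly applied to climate-model output before it drives an impacts model. It is an assumption-laden illustration, not the study's specific procedure; the synthetic data are invented.

```python
# Generic empirical quantile mapping: map model values onto the observed
# distribution by matching quantiles in the historical (training) period.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(0)
obs = rng.normal(20, 5, 500)                 # "observed" climate variable
mod = rng.normal(23, 6, 500)                 # biased model climate
print(quantile_map(mod, obs, np.array([23.0])))   # corrected back toward ~20
```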
Translating Climate Projections for Bridge Engineering
NASA Astrophysics Data System (ADS)
Anderson, C.; Takle, E. S.; Krajewski, W.; Mantilla, R.; Quintero, F.
2015-12-01
A bridge vulnerability pilot study was conducted by Iowa Department of Transportation (IADOT) as one of nineteen pilots supported by the Federal Highway Administration Climate Change Resilience Pilots. Our pilot study team consisted of the IADOT senior bridge engineer who is the preliminary design section leader as well as climate and hydrological scientists. The pilot project culminated in a visual graphic designed by the bridge engineer (Figure 1), and an evaluation framework for bridge engineering design. The framework has four stages. The first two stages evaluate the spatial and temporal resolution needed in climate projection data in order to be suitable for input to a hydrology model. The framework separates streamflow simulation error into errors from the streamflow model and from the coarseness of input weather data series. In the final two stages, the framework evaluates credibility of climate projection streamflow simulations. Using an empirically downscaled data set, projection streamflow is generated. Error is computed in two time frames: the training period of the empirical downscaling methodology, and an out-of-sample period. If large errors in projection streamflow were observed during the training period, it would indicate low accuracy and, therefore, low credibility. If large errors in streamflow were observed during the out-of-sample period, it would mean the approach may not include some causes of change and, therefore, the climate projections would have limited credibility for setting expectations for changes. We address uncertainty with confidence intervals on quantiles of streamflow discharge. The results show the 95% confidence intervals have significant overlap. Nevertheless, the use of confidence intervals enabled engineering judgement. In our discussions, we noted the consistency in direction of change across basins, though the flood mechanism was different across basins, and the high bound of bridge lifetime period quantiles exceeded that of the historical period. This suggested the change was not isolated, and it systemically altered the risk profile. One suggestion to incorporate engineering judgement was to consider degrees of vulnerability using the median discharge of the historical period and the upper bound discharge for the bridge lifetime period.
Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.
Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc
2017-10-01
The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
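A minimal sketch of the dictionary-indexing principle evaluated above: compare an experimental pattern against a dictionary of simulated patterns, one per candidate orientation, and keep the best normalized correlation. Array names and shapes are illustrative, not from the paper.

```python
# Sketch of dictionary-based EBSD indexing via normalized dot products.
import numpy as np

def dictionary_index(pattern, dictionary, orientations):
    """pattern: (P,) flattened EBSP; dictionary: (M, P); orientations: length-M list."""
    p = pattern - pattern.mean()
    p /= np.linalg.norm(p)
    d = dictionary - dictionary.mean(axis=1, keepdims=True)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    best = np.argmax(d @ p)          # highest normalized correlation wins
    return orientations[best]
```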
FERMI Observations of Gamma -Ray Emission From the Moon
NASA Technical Reports Server (NTRS)
Abdo, A. A.; Ackermann, M.; Ajello, M.; Atwood, W. B.; Baldini, I.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2012-01-01
We report on the detection of high-energy gamma-ray emission from the Moon during the first 24 months of observations by the Fermi Large Area Telescope (LAT). This emission comes from particle cascades produced by cosmic-ray (CR) nuclei and electrons interacting with the lunar surface. The differential spectrum of the Moon is soft and can be described as a log-parabolic function with an effective cutoff at 2-3 GeV, while the average integral flux measured with the LAT from the beginning of observations in 2008 August to the end of 2010 August is F(>100 MeV) = (1.04 ± 0.01 [statistical error] ± 0.1 [systematic error]) × 10^-6 cm^-2 s^-1. This flux is about a factor of 2-3 higher than that observed between 1991 and 1994 by the EGRET experiment on board the Compton Gamma Ray Observatory, F(>100 MeV) ≈ 5 × 10^-7 cm^-2 s^-1, when solar activity was relatively high. The higher gamma-ray flux measured by Fermi is consistent with the deep solar minimum conditions during the first 24 months of the mission, which reduced the effects of heliospheric modulation and thus increased the heliospheric flux of Galactic CRs. A detailed comparison of the light curve with McMurdo Neutron Monitor rates suggests a correlation of the trends. The Moon and the Sun are so far the only known bright gamma-ray emitters with fast celestial motion. Their paths across the sky are projected onto the Galactic center and high Galactic latitudes as well as onto other areas crowded with high-energy gamma-ray sources. Analysis of the lunar and solar emission may thus be important for studies of weak and transient sources near the ecliptic.
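For reference, the log-parabolic spectral form commonly used in gamma-ray astronomy can be written as below; the symbols are generic parameters of that parameterization, not fitted values reported in this abstract.

```latex
% Generic log-parabola spectrum; N_0, E_0, \alpha, \beta are illustrative symbols.
\frac{dN}{dE} = N_0 \left( \frac{E}{E_0} \right)^{-\alpha - \beta \ln(E/E_0)}
```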
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. The upset node in the error detection circuit can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
Strontium-90 Error Discovered in Subcontract Laboratory Spreadsheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. D. Brown A. S. Nagel
1999-07-31
West Valley Demonstration Project health physicists and environment scientists discovered a series of errors in a subcontractor's spreadsheet being used to reduce data as part of their strontium-90 analytical process.
Pierson, T.C.
2007-01-01
Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time, or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where the timing of such geomorphic changes can be critical to emergency planning. Earliest-colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total of early colonizers was less. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach has a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction has an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years).
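The recommended corrections amount to simple arithmetic; the sketch below applies them to hypothetical ring counts (the sampling year and counts are invented for illustration).

```python
# Worked sketch of the recommended corrections: estimate the year a surface
# stabilized from ring counts of the five largest early-colonizing Douglas firs.
def surface_formation_year(sample_year, ring_counts, cored_at_breast_height=True):
    mean_age = sum(ring_counts) / len(ring_counts)
    # add 11 years (CTG) for breast-height cores, 5 years (GLT) for ground-level cores
    correction = 11 if cored_at_breast_height else 5
    return sample_year - (mean_age + correction)

print(surface_formation_year(1995, [62, 60, 59, 57, 55]))  # 1995 - (58.6 + 11) ≈ 1925
```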
NASA Technical Reports Server (NTRS)
Wagner, Michael Broderick
1987-01-01
The modeled cascade cells offer an alternative to conventional series cascade designs that require a monolithic intercell ohmic contact. Selective electrodes provide a simple means of fabricating three-terminal devices, which can be configured in complementary pairs to circumvent the attendant losses and fabrication complexities of intercell ohmic contacts. Moreover, selective electrodes allow incorporation of additional layers in the upper subcell, which can improve spectral response and increase radiation tolerance. Realistic simulations of such cells operating under one-sun AM0 conditions show that the seven-layer structure is optimum from the standpoint of beginning-of-life efficiency and radiation tolerance. Projected efficiencies exceed 26 percent. Under higher concentration factors, it should be possible to achieve efficiencies beyond 30 percent. However, simulating operation at high concentration will require a model for resistive losses. Overall, these devices appear to be a promising contender for future space applications.
Combined Brayton-JT cycles with refrigerants for natural gas liquefaction
NASA Astrophysics Data System (ADS)
Chang, Ho-Myung; Park, Jae Hoon; Lee, Sanggyu; Choe, Kun Hyung
2012-06-01
Thermodynamic cycles for natural gas liquefaction with single-component refrigerants are investigated under a governmental project in Korea, aiming at new processes to meet the requirements on high efficiency, large capacity, and simple equipment. Based upon the optimization theory recently published by the present authors, it is proposed to replace the methane-JT cycle in conventional cascade process with a nitrogen-Brayton cycle. A variety of systems to combine nitrogen-Brayton, ethane-JT and propane-JT cycles are simulated with Aspen HYSYS and quantitatively compared in terms of thermodynamic efficiency, flow rate of refrigerants, and estimated size of heat exchangers. A specific Brayton-JT cycle is suggested with detailed thermodynamic data for further process development. The suggested cycle is expected to be more efficient and simpler than the existing cascade process, while still taking advantage of easy and robust operation with single-component refrigerants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shevenell, Lisa; Coolbaugh, Mark; Hinz, Nick
This project brings a global perspective to volcanic arc geothermal play fairway analysis by developing statistics for the occurrence of geothermal reservoirs and their geoscience context worldwide in order to rank U.S. prospects. The focus of the work was to develop play fairways for the Cascade and Aleutian arcs and to rank the individual volcanic centers in these arcs by their potential to host electricity-grade geothermal systems. The Fairway models were developed by describing key geologic factors expected to be indicative of productive geothermal systems in a global training set, which includes 74 volcanic centers worldwide with current power production. To our knowledge, this is the most robust geothermal benchmark training set for magmatic systems to date that will be made public.
Reducing operating costs for struvite formation with a carbon dioxide stripper.
Fattah, K P; Sabrina, N; Mavinic, D S; Koch, F A
2008-01-01
One of the major operational costs of phosphorus recovery as struvite is the cost of the caustic chemical added to maintain a desired operative pH. A study was conducted at the Lulu Island Wastewater Treatment Plant (LIWWTP), Richmond, BC, using a struvite crystallizer and a cascade stripper designed at the University of British Columbia (UBC). The stripper was tested under different operating conditions to determine the effectiveness of CO2 stripping in increasing the pH of the water matrix and thereby reducing caustic chemical use. This reduction is expected to reduce the operational costs of struvite production. Throughout the project, a high percentage (90%) of phosphorus removal was achieved under each condition. The cascade stripper was very effective in reducing caustic usage, with savings ranging from 35% to 86% depending on the operating conditions. However, the stripper showed relatively poor performance regarding ammonia stripping. Copyright IWA Publishing 2008.
Blind phase error suppression for color-encoded digital fringe projection profilometry
NASA Astrophysics Data System (ADS)
Ma, S.; Zhu, R.; Quan, C.; Li, B.; Tay, C. J.; Chen, L.
2012-04-01
Color-encoded digital fringe projection profilometry (CDFPP) has the advantage of fast speed, non-contact and full-field testing. It is one of the most important dynamic three-dimensional (3D) profile measurement techniques. However, due to factors such as color cross-talk and gamma distortion of electro-optical devices, phase errors arise when conventional phase-shifting algorithms with fixed phase shift values are utilized to retrieve phases. In this paper, a simple and effective blind phase error suppression approach based on isotropic n-dimensional fringe pattern normalization (INFPN) and carrier squeezing interferometry (CSI) is proposed. It does not require pre-calibration for the gamma and color-coupling coefficients or the phase shift values. Simulation and experimental works show that our proposed approach is able to effectively suppress phase errors and achieve accurate measurement results in CDFPP.
Besharati Tabrizi, Leila; Mahvash, Mehran
2015-07-01
An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system could be proved intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.
What Can We Learn From Point-of-Care Blood Glucose Values Deleted and Repeated by Nurses?
Corl, Dawn; Yin, Tom; Ulibarri, May; Lien, Heather; Tylee, Tracy; Chao, Jing; Wisse, Brent E
2018-03-01
Hospitals rely on point-of-care (POC) blood glucose (BG) values to guide important decisions related to insulin administration and glycemic control. Evaluation of POC BG in hospitalized patients is associated with measurement and operator errors. Based on a previous quality improvement (QI) project we introduced an option for operators to delete and repeat POC BG values suspected as erroneous. The current project evaluated our experience with deleted POC BG values over a 2-year period. A retrospective QI project included all patients hospitalized at two regional academic medical centers in the Pacific Northwest during 2014 and 2015. Laboratory Medicine POC BG data were reviewed to evaluate all inpatient episodes of deleted and repeated POC BG. Inpatient operators choose to delete and repeat only 0.8% of all POC BG tests. Hypoglycemic and extreme hyperglycemic BG values are more likely to be deleted and repeated. Of initial values <40 mg/dL, 58% of deleted values (18% of all values) are errors. Of values >400 mg/dL, 40% of deleted values (5% of all values) are errors. Not all repeated POC BG values are first deleted. Optimal use of the option to delete and repeat POC BG values <40 mg/dL could decrease reported rates of severe hypoglycemia by as much as 40%. This project demonstrates that operators are frequently able to identify POC BG values that are measurement/operator errors. Eliminating these errors significantly reduces documented rates of severe hypoglycemia and hyperglycemia, and has the potential to improve patient safety.
Jones, Tiffany M.; Hill, Karl G.; Epstein, Marina; Lee, Jungeun Olivia; Hawkins, J. David; Catalano, Richard F.
2016-01-01
This study examines the interplay between individual and social-developmental factors in the development of positive functioning, substance use problems, and mental health problems. This interplay is nested within positive and negative developmental cascades that span childhood, adolescence, the transition to adulthood, and adulthood. Data are drawn from the Seattle Social Development Project, a gender-balanced, ethnically diverse community sample of 808 participants interviewed 12 times from ages 10 to 33. Path modeling showed short- and long-term cascading effects of positive social environments, family history of depression, and substance using social environments throughout development. Positive family social environments set a template for future partner social environment interaction and had positive influences on proximal individual functioning, both in the next developmental period and long term. Family history of depression adversely affected mental health functioning throughout adulthood. Family substance use began a cascade of substance-specific social environments across development, which was the pathway through which increasing severity of substance use problems flowed. The model also indicated that adolescent, but not adult, individual functioning influenced selection into positive social environments, and significant cross-domain effects were found in which substance using social environments affected subsequent mental health. PMID:27427802
Food, Feed and Fuel: a Story About Nitrogen
NASA Astrophysics Data System (ADS)
Galloway, J. N.; Burke, M. B.; Mooney, H. A.; Steinfeld, H.
2008-12-01
Humans obtain metabolic energy by eating food. Nitrogen is required to grow food, but natural supplies of N for human purposes have been inadequate since the beginning of the twentieth century. The Haber-Bosch process now provides a virtually inexhaustible supply of nitrogen, limited primarily by the cost of energy. However, most nitrogen used in food production is lost to the environment, where it cascades through environmental reservoirs contributing to many of the major environmental issues of the day. Furthermore, growing international trade in nitrogen-containing commodities is increasingly replacing wind and water as an important international transporter of nitrogen around the globe. Finally, the rapid growth in crop-based biofuels, and its attendant effects on the global production and trade of all agricultural commodities, could greatly affect global patterns of N use and loss. In the light of the findings above, this paper examines the role of nitrogen in food, feed and fuel production. It describes the beneficial consequences for food production and the negative consequences associated with the commodity nitrogen cascade and the environmental nitrogen cascade. The paper reviews estimates of future projections of nitrogen demands for food and fuel, including the impact of changing diets in the developing world. The paper concludes by presenting the potential interactions among global change, agricultural production and the nitrogen and carbon cycles.
NASA Astrophysics Data System (ADS)
Gupta, P.; Williams, G. V. M.; Hübner, R.; Vajandar, S.; Osipowicz, T.; Heinig, K.-H.; Becker, H.-W.; Markwitz, A.
2017-04-01
Mono-energetic cobalt implantation into hydrogenated diamond-like carbon at room temperature results in a bimodal distribution of implanted atoms without any thermal treatment. The ~100 nm thin films were synthesised by mass selective ion beam deposition. The films were implanted with cobalt at an energy of 30 keV and an ion current density of ~5 μA cm⁻². Simulations suggest the implantation profile to be single Gaussian with a projected range of ~37 nm. High resolution Rutherford backscattering measurements reveal that a bimodal distribution evolves from a single near-Gaussian distribution as the fluence increases from 1.2 to 7 × 10¹⁶ cm⁻². Cross-sectional transmission electron microscopy further reveals that the implanted atoms cluster into nanoparticles. At high implantation doses, the nanoparticles assemble primarily in two bands: one near the surface with nanoparticle diameters of up to 5 nm and the other beyond the projected range with ~2 nm nanoparticles. The bimodal distribution along with the nanoparticle formation is explained with diffusion enhanced by energy deposited during collision cascades, relaxation of thermal spikes, and defects formed during ion implantation. This unique distribution of magnetic nanoparticles with the bimodal size and range is of significant interest to magnetic semiconductor and sensor applications.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
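A minimal two-class Fisher classifier with a brute-force leave-one-out error estimate is sketched below (Python, synthetic Gaussian data; the midpoint threshold and the refit-per-sample loop are simplifications, since the paper derives the optimal threshold and computationally efficient leave-one-out expressions that avoid refitting):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, (50, 2)),    # class 0 samples
               rng.normal([2, 2], 1.0, (50, 2))])   # class 1 samples
y = np.repeat([0, 1], 50)

def fisher_fit(X, y):
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    # Within-class scatter matrix
    Sw = (np.cov(X[y == 0].T) * ((y == 0).sum() - 1)
          + np.cov(X[y == 1].T) * ((y == 1).sum() - 1))
    w = np.linalg.solve(Sw, m1 - m0)      # Fisher direction
    thr = 0.5 * (m0 + m1) @ w             # midpoint threshold (simplification)
    return w, thr

# Brute-force leave-one-out: refit with sample i held out, then classify it
errors = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    w, thr = fisher_fit(X[mask], y[mask])
    errors += int((X[i] @ w > thr) != y[i])
print("leave-one-out error estimate: %.3f" % (errors / len(y)))
```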
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, B; Miften, M
2014-06-15
Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. We developed a method using these projections to determine the trajectory and dose of highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, where the trajectory mimicked a lung tumor with high amplitude (2.4 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each projection. A Gaussian probability density function for tumor position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two methods to improve the accuracy of tumor track reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation, and second, using the Monte Carlo method to sample the estimated Gaussian tumor position distribution. 15 clinically-drawn abdominal/lung CTV volumes were used to evaluate the accuracy of the proposed methods by comparing the known and calculated BB trajectories. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square (RMS) trajectory errors were lower than 5% of marker amplitude. Use of respiratory phase information decreased RMS errors by 30%, and decreased the fraction of large errors (>3 mm) by half. Mean dose to the clinical volumes was calculated with an average error of 0.1% and average absolute error of 0.3%. Dosimetric parameters D90/D95 were determined within 0.5% of maximum dose. Monte-Carlo sampling increased RMS trajectory and dosimetric errors slightly, but prevented over-estimation of dose in trajectories with high noise. Conclusions: Tumor trajectory and dose-of-the-day were accurately calculated using CBCT projections. This technique provides a widely-available method to evaluate highly-mobile tumors, and could facilitate better strategies to mitigate or compensate for motion during SBRT.
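The Gaussian-plus-Monte-Carlo idea can be illustrated with a toy sketch (Python; synthetic 2D marker positions stand in for the template-matched BB locations, and the fit in the imager geometry is not modeled):

```python
import numpy as np

# Toy version of the Gaussian-position + Monte Carlo idea: fit a Gaussian
# probability density function to observed marker positions, then sample it.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200)
# Synthetic 2D BB trajectory with high amplitude and hysteresis (illustrative)
pos = np.column_stack([12.0 * np.sin(t), 6.0 * np.sin(2 * t)])   # mm
obs = pos + rng.normal(0, 1.0, pos.shape)                        # noisy detections

mu = obs.mean(axis=0)          # fitted Gaussian mean
cov = np.cov(obs.T)            # fitted Gaussian covariance

# Monte Carlo sampling of the estimated tumor-position distribution
samples = rng.multivariate_normal(mu, cov, size=5000)
print("fitted mean position (mm):", np.round(mu, 2))
print("sampled std per axis (mm):", np.round(samples.std(axis=0), 2))
```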
THz transceiver characterization : LDRD project 139363 final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordquist, Christopher Daniel; Wanke, Michael Clement; Cich, Michael Joseph
2009-09-01
LDRD Project 139363 supported experiments to quantify the performance characteristics of monolithically integrated Schottky diode + quantum cascade laser (QCL) heterodyne mixers at terahertz (THz) frequencies. These integrated mixers are the first all-semiconductor THz devices to successfully incorporate a rectifying diode directly into the optical waveguide of a QCL, obviating the conventional optical coupling between a THz local oscillator and rectifier in a heterodyne mixer system. This integrated mixer was shown to function as a true heterodyne receiver of an externally received THz signal, a breakthrough which may lead to more widespread acceptance of this new THz technology paradigm. In addition, questions about QCL mode shifting in response to temperature, bias, and external feedback, and to what extent internal frequency locking can improve stability have been answered under this project.
Current status of SK-Gd project and EGADS
NASA Astrophysics Data System (ADS)
Xu, Chenyuan;
2016-05-01
Supernova relic neutrinos (SRN) have not yet been observed because of their low event rate and high background. By adding gadolinium to a water Cherenkov detector, inverse beta decay yields two signals: a prompt positron signal and a delayed gamma cascade of ~8 MeV from neutron capture on gadolinium. In this way, the background for SRN can be largely reduced by detecting the prompt and delayed signals in coincidence, and Super-K will also gain the ability to distinguish neutrinos from anti-neutrinos. SK-Gd is an R&D project proposed to dissolve gadolinium into Super-K. As a part of it, EGADS, a 200 ton water Cherenkov detector, was built in the Kamioka mine. The current status of the SK-Gd project and the physics work being performed in EGADS are presented here.
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication errors to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.
Modeling to Improve the Risk Reduction Process for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Bryant, Larry; Waggoner, Bruce
2013-01-01
The Jet Propulsion Laboratory has learned that even innocuous errors in the spacecraft command process can have significantly detrimental effects on a space mission. Consequently, such Command File Errors (CFE), regardless of their effect on the spacecraft, are treated as significant events for which a root cause is identified and corrected. A CFE during space mission operations is often the symptom of imbalance or inadequacy within the system that encompasses the hardware and software used for command generation as well as the human experts and processes involved in this endeavor. As we move into an era of increased collaboration with other NASA centers and commercial partners, these systems become more and more complex. Consequently, the ability to thoroughly model and analyze CFEs formally in order to reduce the risk they pose is increasingly important. In this paper, we summarize the results of applying previously developed modeling techniques to the DAWN flight project. The original models were built with the input of subject matter experts from several flight projects. We have now customized these models to address specific questions for the DAWN flight project and formulated use cases to address its unique mission needs. The goal of this effort is to enhance the project's ability to meet commanding reliability requirements for operations and to assist it in managing its Command File Errors.
Data Policy Construction Set - Building Blocks from Childhood Constructions
NASA Astrophysics Data System (ADS)
Fleischer, Dirk; Paul-Stueve, Thilo; Jobmann, Alexandra; Farrenkopf, Stefan
2016-04-01
A complete construction set of building blocks usually comes with instructions, and these instructions include building stages. The products of these stages progress from very general parts to highly specialized parts serving unique features of the whole model. This sounds very much like the construction or organization of an interdisciplinary research project, institution or association. The creation of an overarching data policy for a project group or institution is exactly such a combination of individual interests with the common goal of a collaborative data policy, and it can be compared with the building stages and instructions of a construction set. With this in mind, we created the data policy construction set of textual building blocks. The set is subdivided into several building stages or parts, each containing multiple building blocks as text blocks. By combining building blocks from all subdivisions, a cascading data policy document can be created: it cascades from the top level, which provides the construction set, down through levels such as projects, themes and work packages (or universities, faculties and institutes) to the working level of individual working groups. Each working group picks from the construction set the blocks suitable for its working procedures, creating a very specific policy from the set provided by the top-level community. If a working group finds that building blocks, or worse, whole building parts, are missing, it can add the missing pieces to the construction set for direct and future use. This cascading approach enables project- or institution-wide application of the encoded rules, from the textual level down to access rules on data storage infrastructure. The approach is flexible enough to accommodate the fact that interdisciplinary research projects always bring together a very diverse range of working habits, methods and requirements, all of which must be considered when creating a general document on data sharing and research data management. The approach follows the recommendation of the RDA practical policy working group to implement practical policies derived from the textual level. It therefore aims to move data policy creation and implementation forward to the formation of the consortium or institution, so that the benefits of an existing data policy construction set are available already during proposal creation and review. The metaphor of real building blocks also suggests that existing building blocks and parts can be reused as they are, or redesigned with small changes or a full overhaul.
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function that embeds high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data onto several dimensions of real values, and each projected dimension is then quantized into one bit by thresholding. The variances of the projected dimensions differ, and real-valued projection introduces large quantization error. To avoid real-valued projection with large quantization error, in this paper we propose to use cosine similarity projection for each dimension; the angle projection preserves the original structure and is more compact with the cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
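For contrast with the proposed angle projection, the plain PCA-hashing baseline that the paper improves on can be sketched as follows (Python, toy data; sign thresholding of real-valued PCA projections, with the ITQ rotation and the cosine projection omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))           # toy feature vectors
Xc = X - X.mean(axis=0)                   # center before PCA

# PCA projection to k real-valued dimensions, then one bit per dimension
k = 16
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:k].T                              # top-k principal directions
codes = (Xc @ W > 0).astype(np.uint8)     # sign thresholding -> binary codes

# Retrieval: rank database items by Hamming distance to a query code
q = codes[0]
hamming = (codes ^ q).sum(axis=1)
print("nearest neighbours of item 0:", np.argsort(hamming)[:5])
```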
Improved Calibration through SMAP RFI Change Detection
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng
2017-01-01
Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.
NASA Astrophysics Data System (ADS)
Papa, Marco
The effect of secondary flows on mass transfer from a simulated gas turbine blade and hubwall is investigated. Measurements performed using naphthalene sublimation provide non-dimensional mass transfer coefficients, in the form of Sherwood numbers, that can be converted to heat transfer coefficients through the use of an analogy. Tests are conducted in a linear cascade composed of five blades having the profile of a first stage rotor blade of a high-pressure turbine aircraft engine. Detailed mass transfer maps on the airfoil and endwall surfaces allow the identification of significant flow features that are in good agreement with existing secondary flow models. These results are well-suited for validation of numerical codes, as they are obtained with an accurate technique that does not suffer from conduction or radiation errors and allows the imposition of precise boundary conditions. The performance of a RANS (Reynolds Averaged Navier-Stokes) numerical code that simulates the flow and heat/mass transfer in the cascade using the SST (Shear Stress Transport) k-ω model is evaluated through a comparison with the experimental results. Tests performed with a modified blade leading edge show that the introduction of a fillet at the junction with the endwall reduces the effects of the horseshoe vortex in the first part of the passage, while no measurable changes in mass transfer are observed further downstream. Air injected through a slot located upstream of the cascade simulates the engine wheelspace coolant injection between the stator and the rotor. Local mass transfer data obtained injecting naphthalene-free and naphthalene-saturated air are reduced to derive maps of cooling effectiveness on the blade and endwall. Oil dot tests show the surface flow on the endwall. The surface downstream of the gap is coplanar to the upstream surface in the baseline configuration and is shifted to form a forward and backward facing step to investigate the effects of component misalignments. Sufficiently high injection rates alter the structure of the secondary flows and significantly improve the cooling performance.
Investigation of Back-Projection Uncertainties with M6 Earthquakes
NASA Astrophysics Data System (ADS)
Fan, W.; Shearer, P. M.
2017-12-01
We investigate possible biasing effects of inaccurate timing corrections on teleseismic P-wave back-projection imaging of large earthquake ruptures. These errors occur because empirically-estimated time shifts based on aligning P-wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-7 earthquakes over a ten-year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross-correlation of their initial P-wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare back-projection images for each earthquake using its own timing corrections with those obtained using the time corrections for other earthquakes. This provides a measure of how well sub-events can be resolved with back-projection of a large rupture as a function of distance from the hypocenter. Our results show that back-projection is generally very robust and that sub-event location errors average about 20 km across the entire study region (~700 km). The back-projection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine back-projection images using aftershock calibration, at least in this region.
NASA Astrophysics Data System (ADS)
Ushakov, Anton; Orlov, Alexey; Sovach, Victor P.
2018-03-01
This article presents results on the filling of a gas centrifuge cascade for separation of a multicomponent isotope mixture with process gas at various feed flow rates. A mathematical model of the nonstationary hydraulic and separation processes occurring in the gas centrifuge cascade is used. The object of the research is to determine the transient behavior of nickel isotopes in the cascade during filling. It is shown that the isotope concentrations in the cascade stages after filling depend on the variable parameters and are not equal to the concentrations in the initial isotope mixture (the cascade feed flow), an assumption used by earlier researchers when modeling nonstationary processes such as the settling of steady-state isotope concentrations in the cascade. The article describes the physical regularities of the isotope distribution across cascade stages after filling. It is shown that by varying the cascade parameters (feed flow rate, feed stage number or number of cascade stages) it is possible to change the isotope concentrations in the cascade output flows (light or heavy fraction) so as to shorten the subsequent transient to steady-state isotope concentrations in the cascade.
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Ouliang; Gary, S. Peter; Wang, Joseph
2015-02-20
We present the results of the first fully three-dimensional particle-in-cell simulations of decaying whistler turbulence in a magnetized, homogeneous, collisionless plasma in which both forward cascades to shorter wavelengths, and inverse cascades to longer wavelengths are allowed to proceed. For the electron beta β_e = 0.10 initial value considered here, the early-time rate of inverse cascade is very much smaller than the rate of forward cascade, so that at late times the fluctuation energy in the regime of the inverse cascade is much weaker than that in the forward cascade regime. Similarly, the wavevector anisotropy in the inverse cascade regime is much weaker than that in the forward cascade regime.
ERIC Educational Resources Information Center
Polo, Blanca J.
2013-01-01
Much research has been done in regards to student programming errors, online education and studio-based learning (SBL) in computer science education. This study furthers this area by bringing together this knowledge and applying it to proactively help students overcome impasses caused by common student programming errors. This project proposes a…
Kirychuk, Shelley P; Reynolds, Stephen J; Koehncke, Niels; Nakatsu, J; Mehaffy, John
2009-01-01
The health of persons engaged in agricultural activities is often related to or associated with environmental exposures in their workplace. Accurately measuring, analyzing, and reporting these exposures is paramount to outcomes interpretation. This paper describes issues related to sampling air in poultry barns with a cascade impactor. Specifically, the authors describe how particle bounce can affect measurement outcomes and how the use of impaction grease can affect particle bounce and laboratory analyses such as endotoxin measurements. This project was designed (1) to study the effect of particle bounce in Marple cascade impactors that use polyvinyl chloride (PVC) filters and (2) to determine the effect of impaction grease on endotoxin assays when sampling poultry barn dust. A pilot study was undertaken utilizing six-stage Marple cascade impactors with PVC filters. Distortion of particulate size distributions and the effects of impaction grease on endotoxin analysis in samples of poultry dust distributed into a wind tunnel were studied. Although there was no significant difference in the overall dust concentration between utilizing impaction grease and not, there was a greater than 50% decrease in the mass median aerodynamic diameter (MMAD) values when impaction grease was not utilized. There was no difference in airborne endotoxin concentration or endotoxin MMAD between filters treated with impaction grease and those not treated. The results indicate that particle bounce should be a consideration when sampling poultry barn dust with Marple samplers containing PVC filters with no impaction grease. Careful consideration should be given to the utilization of impaction grease on PVC filters that will undergo endotoxin analysis, as there is potential for interference, particularly if high or low levels of endotoxin are anticipated.
The application of improved neural network in hydrocarbon reservoir prediction
NASA Astrophysics Data System (ADS)
Peng, Xiaobo
2013-03-01
This paper uses BP neural network techniques to make hydrocarbon reservoir prediction easier and faster for oil wells in the Tarim Basin. A grey-cascade neural network model is proposed that achieves faster convergence and a lower error rate. The new method overcomes the shortcomings of the traditional BP neural network: slow convergence and a tendency to settle into local minima. This study used 220 sets of measured logging data as training samples. By varying the number of neurons and the types of transfer function in the hidden layers, the best-performing prediction model is identified. The conclusion is that the model produces good prediction results in general and can be used for hydrocarbon reservoir prediction.
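A minimal one-hidden-layer BP network of the kind tuned in the paper is sketched below (Python; synthetic data stand in for the 220 logging samples, and the grey-model stage of the grey-cascade design is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(220, 5))                   # stand-in for 220 logging samples
y = np.tanh(X @ rng.normal(size=5))[:, None]    # synthetic target values

# One-hidden-layer BP network trained by batch gradient descent
n_hidden, lr = 10, 0.05
W1 = rng.normal(0, 0.5, (5, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                    # hidden layer activations
    out = H @ W2 + b2                           # linear output layer
    err = out - y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)      # output-layer gradients
    dH = (err @ W2.T) * (1 - H ** 2)                 # backpropagated error
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)        # hidden-layer gradients
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final training MSE: %.5f" % np.mean(err ** 2))
```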
Yoon, Paul K; Zihajehzadeh, Shaghayegh; Bong-Soo Kang; Park, Edward J
2015-08-01
This paper proposes a novel indoor localization method using Bluetooth Low Energy (BLE) and an inertial measurement unit (IMU). The multipath and non-line-of-sight errors from low-power wireless localization systems commonly result in outliers, affecting the positioning accuracy. We address this problem by adaptively weighting the estimates from the IMU and BLE in our proposed cascaded Kalman filter (KF). The positioning accuracy is further improved with the Rauch-Tung-Striebel smoother. The performance of the proposed algorithm is compared against that of the standard KF experimentally. The results show that the proposed algorithm can maintain high accuracy when tracking the sensor position in the presence of outliers.
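The adaptive-weighting idea can be sketched with a 1D constant-velocity Kalman filter (Python; the inflation heuristic and all noise values are illustrative assumptions, not the authors' exact rule, and the RTS smoother is omitted):

```python
import numpy as np

# 1D constant-velocity Kalman filter: IMU-style prediction plus BLE position
# fixes, with the measurement noise inflated when the innovation is large so
# that multipath/NLOS outliers are down-weighted. All values are illustrative.
dt, q, r0 = 0.1, 0.5, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])

x = np.zeros(2)                  # state: [position, velocity]
P = np.eye(2)
rng = np.random.default_rng(0)
for k in range(100):
    true_pos = 0.5 * k * dt
    z = true_pos + rng.normal(0, 1.0)
    if k % 20 == 10:
        z += 8.0                 # injected multipath-like outlier
    x = F @ x                    # predict (stands in for IMU propagation)
    P = F @ P @ F.T + Q
    nu = z - x[0]                # innovation (measurement matrix H = [1, 0])
    S = P[0, 0] + r0
    R = r0 * max(1.0, nu**2 / S) # adaptive inflation heuristic (assumption)
    S = P[0, 0] + R
    K = P[:, 0] / S              # Kalman gain
    x = x + K * nu
    P = P - np.outer(K, P[0, :])
print("final position estimate %.2f m (truth %.2f m)" % (x[0], true_pos))
```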
NASA Technical Reports Server (NTRS)
Kobayashi, H.
1978-01-01
Two dimensional, quasi three dimensional and three dimensional theories for the prediction of pure tone fan noise due to the interaction of inflow distortion with a subsonic annular blade row were studied with the aid of an unsteady three dimensional lifting surface theory. The effects of compact and noncompact source distributions on pure tone fan noise in an annular cascade were investigated. Numerical results show that the strip theory and quasi three-dimensional theory are reasonably adequate for fan noise prediction. The quasi three-dimensional method is more accurate for acoustic power and modal structure prediction, with an acoustic power estimation error of about ±2 dB.
Data for Design of Entrance Vanes from Two-Dimensional Tests of Airfoils in Cascade
1945-10-01
[OCR-damaged abstract; legible fragments mention boundary-layer flow in the vanes, the difficulty of measuring the entrance velocity for the highest turning angles, and the observation that changes in solidity produce little change in turning angle, so the direction of mean flow is shifted only slightly.]
Measurement of an asymmetry parameter in the decay of the cascade-minus hyperon
NASA Astrophysics Data System (ADS)
Chakravorty, Alak
2000-10-01
Fermilab experiment E756 collected a large dataset of polarized Ξ⁻ hyperon decays, produced by 800-GeV/c unpolarized protons on a beryllium target. Of principal interest was the decay process Ξ⁻ → Λ⁰π⁻ → pπ⁻π⁻. An analysis of the asymmetry parameters of this decay was carried out on a sample of 1.3 × 10⁶ Ξ⁻ decays. φ_Ξ was measured to be −1.33° ± 2.66° ± 1.22°, where the first error is statistical and the second is systematic. This corresponds to a measurement of the asymmetry parameter β_Ξ = −0.021 ± 0.042 ± 0.019, which is consistent with current theoretical estimates.
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks concerning the azimuth-dependent residual errors. However, image domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part need to be extended, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA disposes of the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse resolution image via the back-projection integral. Then, the sub-aperture images are straightforwardly fused together in the azimuth wavenumber domain to obtain a full resolution image. Moreover, chirp-Z transform (CZT) is also introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By dispensing with the image domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposal.
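For orientation, the time-domain back-projection integral that fast back-projection methods such as FDFBPA accelerate can be sketched as follows (Python; toy geometry and an idealized range-compressed echo, with no motion errors, sub-apertures or CZT):

```python
import numpy as np

# Time-domain back-projection: for every image pixel, coherently sum the
# range-compressed echoes at the computed two-way range. Toy geometry/signal.
c, fc = 3e8, 1e9
lam = c / fc
plat = np.column_stack([np.linspace(-50, 50, 101),
                        np.zeros(101), np.full(101, 500.0)])  # platform path (m)
target = np.array([0.0, 200.0, 0.0])

r_axis = np.linspace(400, 700, 2048)                          # range bins (m)
dr = r_axis[1] - r_axis[0]
echoes = np.zeros((len(plat), len(r_axis)), dtype=complex)
for i, p in enumerate(plat):                                  # simulate echoes
    R = np.linalg.norm(p - target)
    echoes[i, np.argmin(np.abs(r_axis - R))] = np.exp(-4j * np.pi * R / lam)

xs = np.linspace(-5, 5, 41); ys = np.linspace(195, 205, 41)   # image grid (m)
img = np.zeros((len(ys), len(xs)), dtype=complex)
for i, p in enumerate(plat):
    for iy, yv in enumerate(ys):
        R = np.sqrt((p[0] - xs) ** 2 + (p[1] - yv) ** 2 + p[2] ** 2)
        j = np.clip(np.round((R - r_axis[0]) / dr).astype(int), 0, len(r_axis) - 1)
        img[iy] += echoes[i, j] * np.exp(4j * np.pi * R / lam)  # phase-corrected sum

iy, ix = np.unravel_index(np.abs(img).argmax(), img.shape)
print("image peak at x=%.2f m, y=%.2f m" % (xs[ix], ys[iy]))
```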
Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke
2007-11-01
To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = −21.64% ± 8.23%) and lung tumor patient studies (ε = −20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = −5.13ν − 6.71, r² = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation
NASA Astrophysics Data System (ADS)
Skrzypacz, Piotr
2017-09-01
The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation at high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed bed type. A detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on equal order interpolation for the velocity and pressure. Optimal error bounds for the velocity and pressure are verified numerically.
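For reference, one common linearized Brinkman-Forchheimer-Darcy form reads as follows (a textbook-style sketch; the paper's exact formulation with nonconstant porosity may scale the terms differently):

```latex
-\nu \,\Delta u \;+\; \sigma(x)\, u \;+\; \beta(x)\,\lvert w \rvert\, u \;+\; \nabla p \;=\; f,
\qquad \nabla \cdot u \;=\; 0 \quad \text{in } \Omega,
```

where σ is the Darcy (permeability) coefficient, β the Forchheimer coefficient, and w the given velocity field about which the nonlinear Forchheimer term |u|u has been linearized.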
2016-07-01
[Report front-matter fragments: an acronym list (CAC common access card; DoD Department of Defense; FOUO For Official Use Only; GIS geographic information systems; GUI graphical user interface; HISA…), a distribution statement marking the project UNCLASS/For Official Use Only with access restricted to DoD common access card users, and a truncated reference: "…Boko Haram Fuel Dump Discovered in Maiduguru." Available: http://saharareporters.com/2015/10/01/another-boko-haram-fuel-dump-discovered-maiduguri]
The Islamic State Battle Plan: Press Release Natural Language Processing
2016-06-01
[Report documentation page fragments: keywords — natural language processing, text mining, corpus, generalized linear model, cascade, R Shiny, leaflet, data visualization; 83 pages; an acronym list (TDM Term Document Matrix; TF Term Frequency; TF-IDF Term Frequency-Inverse Document Frequency; tm text mining R package); and references to the R leaflet package and to Feinerer I, Hornik K (2015) Text Mining Package "tm," Version 0.6-2, https://cran.r-project.org/web/packages/tm/tm.pdf]
Urban rail transit projects : forecast versus actual ridership and costs. final report
DOT National Transportation Integrated Search
1989-10-01
Substantial errors in forecasting ridership and costs for the ten rail transit projects reviewed in this report raise the possibility that more accurate forecasts would have led decision-makers to select projects other than those reviewed in this report.
Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.
Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A
2007-01-01
CT-images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.
Camera calibration based on the back projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui
2015-12-01
Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee a minimum of the 2D projection errors on the image plane, not a minimum of the 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is more physically meaningful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
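The back-projection step can be sketched as follows (Python; K, R, t and the board layout are illustrative placeholders, and the final non-linear refinement over the camera parameters is omitted):

```python
import numpy as np

# Back-projection step of calibration: cast a ray through each detected corner,
# intersect it with the checkerboard plane Z=0 in board coordinates, and compare
# with the known corner positions. K, R, t and the board layout are placeholders.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 500.0])

def back_project(uv, K, R, t):
    """Map image points onto the board plane Z = 0 (board frame)."""
    rays = np.linalg.inv(K) @ np.column_stack([uv, np.ones(len(uv))]).T
    C = -R.T @ t                      # camera center in the board frame
    D = R.T @ rays                    # ray directions in the board frame
    s = -C[2] / D[2]                  # scale so each ray reaches Z = 0
    return (C[:, None] + D * s)[:2].T

board = np.array([[x * 30.0, y * 30.0] for y in range(4) for x in range(5)])  # mm
proj = K @ (R @ np.column_stack([board, np.zeros(len(board))]).T + t[:, None])
uv = (proj[:2] / proj[2]).T
uv += np.random.default_rng(0).normal(0, 0.3, uv.shape)   # detection noise

err3d = np.linalg.norm(back_project(uv, K, R, t) - board, axis=1)
print("mean 3D back-projection error: %.3f mm" % err3d.mean())
```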
Renal artery origins: best angiographic projection angles.
Verschuyl, E J; Kaatee, R; Beek, F J; Patel, N H; Fontaine, A B; Daly, C P; Coldwell, D M; Bush, W H; Mali, W P
1997-10-01
To determine the best projection angles for imaging the renal artery origins in profile. A mathematical model of the anatomy at the renal artery origins in the transverse plane was used to analyze the amount of aortic lumen that projects over the renal artery origins at various projection angles. Computed tomographic (CT) angiographic data about the location of 400 renal artery origins in 200 patients were statistically analyzed. In patients with an abdominal aortic diameter no larger than 3.0 cm, approximately 0.5 mm of the proximal part of the renal artery and origin may be hidden from view if there is a projection error of ±10 degrees from the ideal image. A combination of anteroposterior and 20 degrees and 40 degrees left anterior oblique projections resulted in a 92% yield of images that adequately profiled the renal artery origins. Right anterior oblique projections resulted in the least useful images. An error in projection angle of ±10 degrees is acceptable for angiographic imaging of the renal artery origins. Patient sex, site of interest (left or right artery), and local diameter of the abdominal aorta are important factors to consider.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face major challenges in modeling and simulation for after-market power systems because of system degradation and measurement errors. Currently, the majority of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and with it the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees that arise from uncertainties in performance simulation.
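The weighted least-squares core of data reconciliation has a closed form in the linear-constraint case, sketched below (Python; the flows and variances are toy values, and the simultaneous nonlinear reconciliation-calibration with Levenberg-Marquardt and gross error detection is beyond this sketch):

```python
import numpy as np

# Weighted least-squares data reconciliation with linear constraints A x = 0
# (e.g., a mass balance): the reconciled estimate has the closed form
#   x_hat = x - Sigma A^T (A Sigma A^T)^{-1} A x
A = np.array([[1.0, -1.0, -1.0]])         # toy splitter balance: F1 = F2 + F3
x = np.array([100.0, 61.0, 41.5])         # raw measurements (imbalance -2.5)
Sigma = np.diag([2.0, 1.0, 1.0]) ** 2     # measurement variances (assumed)

lam = np.linalg.solve(A @ Sigma @ A.T, A @ x)
x_hat = x - Sigma @ A.T @ lam
print("reconciled flows:", np.round(x_hat, 3))   # now satisfies F1 = F2 + F3
print("constraint residual: %.2e" % (A @ x_hat).item())
```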
Zeng, Yi; Land, Kenneth C.; Wang, Zhenglian; Gu, Danan
2012-01-01
This article presents the core methodological ideas, empirical assessments, and applications of an extended cohort-component approach (known as the “ProFamy model”) to simultaneously project household composition, living arrangements, and population sizes at the subnational level in the United States. Comparisons of projections from 1990 to 2000 using this approach with census counts in 2000 for each of the 50 states and Washington, DC show that 68.0 %, 17.0 %, 11.2 %, and 3.8 % of the absolute percentage errors are <3.0 %, 3.0 % to 4.99 %, 5.0 % to 9.99 %, and ≥10.0 %, respectively. Another analysis compares average forecast errors between the extended cohort-component approach and the still widely used classic headship-rate method, by projecting number-of-bedrooms–specific housing demands from 1990 to 2000 and then comparing those projections with census counts in 2000 for each of the 50 states and Washington, DC. The results demonstrate that, compared with the extended cohort-component approach, the headship-rate method produces substantially more serious forecast errors because it cannot project households by size while the extended cohort-component approach projects detailed household sizes. We also present illustrative household and living arrangement projections for the five decades from 2000 to 2050, with medium-, small-, and large-family scenarios for each of the 50 states; Washington, DC; six counties of southern California, and the Minneapolis–St. Paul metropolitan area. Among many interesting numerical outcomes of household and living arrangement projections with medium, low, and high bounds, the aging of American households over the next few decades across all states/areas is particularly striking. Finally, the limitations of the present study and potential future lines of research are discussed. PMID:23208782
Magnetic field errors tolerances of Nuclotron booster
NASA Astrophysics Data System (ADS)
Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet
2018-04-01
Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for obtaining the required parameters and high-quality accelerator operation. Studies of the linear and nonlinear dynamics of the ¹⁹⁷Au³¹⁺ ion beam in the booster have been carried out with the MADX program. An analytical estimation of the magnetic field error tolerances and a numerical computation of the dynamic aperture of the booster DFO magnetic lattice are presented. Closed orbit distortion due to random errors of the magnetic fields and errors in the layout of the booster units was evaluated.
A median filter approach for correcting errors in a vector field
NASA Technical Reports Server (NTRS)
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
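A minimal version of such a median-filter detection-and-replacement pass is sketched below (Python; grid size, outlier rate and threshold are illustrative choices, not the NSCAT settings):

```python
import numpy as np
from scipy.ndimage import median_filter

# Median-filter outlier correction for one component of a 2D vector (wind)
# field: a value is flagged when it deviates from the local median by more
# than a threshold, and flagged values are replaced by that median.
rng = np.random.default_rng(0)
u = np.tile(np.linspace(5, 10, 32), (32, 1))      # smooth zonal wind (m/s)
bad = rng.random((32, 32)) < 0.03                 # 3% corrupted cells
u[bad] += rng.normal(0, 15, bad.sum())            # gross errors

def correct(comp, size=3, thresh=4.0):
    med = median_filter(comp, size=size)
    outlier = np.abs(comp - med) > thresh         # detection step
    out = comp.copy()
    out[outlier] = med[outlier]                   # replacement step
    return out, outlier

u_fixed, flags = correct(u)
print("flagged %d of %d vectors" % (flags.sum(), flags.size))
```

The same pass would be applied to the meridional component; in practice the two components can also be filtered jointly on the vector magnitude and direction.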
Correction for specimen movement and rotation errors for in-vivo Optical Projection Tomography
Birk, Udo Jochen; Rieckher, Matthias; Konstantinides, Nikos; Darrell, Alex; Sarasa-Renedo, Ana; Meyer, Heiko; Tavernarakis, Nektarios; Ripoll, Jorge
2010-01-01
The application of optical projection tomography to in-vivo experiments is limited by specimen movement during the acquisition. We present a set of mathematical correction methods applied to the acquired data stacks to correct for movement in both directions of the image plane. These methods have been applied to correct experimental data taken from in-vivo optical projection tomography experiments in Caenorhabditis elegans. Successful reconstructions for both fluorescence and white light (absorption) measurements are shown. Since no distinction is made between movement of the animal and movement of the rotation axis, this approach at the same time removes artifacts due to mechanical drifts and errors in the assumed center of rotation. PMID:21258448
NASA Astrophysics Data System (ADS)
Szeląg, Bartosz; Barbusiński, Krzysztof; Studziński, Jan; Bartkiewicz, Lidia
2017-11-01
In this study, models developed using data mining methods are proposed for predicting wastewater quality indicators: biochemical and chemical oxygen demand, total suspended solids, total nitrogen and total phosphorus at the inflow to a wastewater treatment plant (WWTP). The models are based on values measured in previous time steps and daily wastewater inflows. Independent prediction systems that can be used in case of monitoring device malfunction are also provided. Models of wastewater quality indicators were developed using the MARS (multivariate adaptive regression spline) method, artificial neural networks (ANN) of the multilayer perceptron type combined with a classification model (SOM), and cascade neural networks (CNN). The lowest absolute and relative errors were obtained using ANN+SOM, whereas the MARS method produced the highest error values. It was shown that for the analysed WWTP it is possible to obtain continuous prediction of selected wastewater quality indicators using the two developed independent prediction systems. Such models can ensure reliable WWTP operation when wastewater quality monitoring systems become inoperable or are under maintenance.
Indoor Map Aided Wi-Fi Integrated LBS on Smartphone Platforms
NASA Astrophysics Data System (ADS)
Yu, C.; El-Sheimy, N.
2017-09-01
In this research, an indoor-map-aided INS/Wi-Fi integrated location based services (LBS) application is proposed and implemented on smartphone platforms. Indoor map information together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values from Wi-Fi are collected to obtain an accurate, continuous, and low-cost position solution. The main challenge of this research is to make effective use of various measurements that complement each other without increasing the computational burden of the system. The integrated system in this paper includes three modules: INS, Wi-Fi (if a signal is available) and indoor maps. A cascade-structure Particle/Kalman filter framework is applied to combine the different modules. First, the INS position and the Wi-Fi fingerprint position are integrated through a Kalman filter to estimate the position. Then, indoor map information is applied to correct the error of the INS/Wi-Fi estimated position through a particle filter. Indoor tests show that the proposed method can effectively reduce the accumulated positioning errors of stand-alone INS systems and provide a stable, continuous and reliable indoor location service.
Wang, Ning; Sun, Jing-Chao; Han, Min; Zheng, Zhongjiu; Er, Meng Joo
2017-09-06
In this paper, for a general class of uncertain nonlinear (cascade) systems, including unknown dynamics, which are not feedback linearizable and cannot be solved by existing approaches, an innovative adaptive approximation-based regulation control (AARC) scheme is developed. Within the framework of adding a power integrator (API), by deriving adaptive laws for output weights and prediction error compensation pertaining to the single-hidden-layer feedforward network (SLFN) from the Lyapunov synthesis, a series of SLFN-based approximators are explicitly constructed to exactly dominate completely unknown dynamics. By virtue of significant advancements in the API technique, an adaptive API methodology is eventually established in combination with SLFN-based adaptive approximators, and it contributes to a recursive mechanism for the AARC scheme. As a consequence, the output regulation error can asymptotically converge to the origin, and all other signals of the closed-loop system are uniformly ultimately bounded. Simulation studies and comprehensive comparisons with backstepping- and API-based approaches demonstrate that the proposed AARC scheme achieves remarkable performance and superiority in dealing with unknown dynamics.
Hippo pathway and protection of genome stability in response to DNA damage.
Pefani, Dafni E; O'Neill, Eric
2016-04-01
The integrity of DNA is constantly challenged by exposure to the damaging effects of chemical and physical agents. Elucidating the cellular mechanisms that maintain genomic integrity via DNA repair and cell growth control is vital because errors in these processes lead to genomic damage and the development of cancer. By gaining a deep molecular understanding of the signaling pathways regulating genome integrity it is hoped to uncover new therapeutics and treatment designs to combat cancer. Components of the Hippo pathway, a tumor-suppressor cascade, have recently been shown to limit cancer transformation in response to DNA damage. In this review, we briefly introduce the Hippo signaling cascade in mammals and discuss in detail how the Hippo pathway has been established as part of the DNA damage response, activated by apical signaling kinases that recognize breaks in DNA. We also highlight the significance of the Hippo pathway activator RASSF1A, a tumor suppressor and direct target of ataxia telangiectasia mutated (ATM) and ataxia telangiectasia and Rad3-related (ATR). Furthermore, we discuss how the Hippo pathway, in response to DNA lesions, can induce cell death via Yes-associated protein (YAP) (the canonical Hippo pathway effector) or promote maintenance of genome integrity in a YAP-independent manner. © 2015 FEBS.
K-Ar dating of young volcanic rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damon, P.E.; Shafiqullah, M.
1991-01-31
Potassium-Argon (K-Ar) age dates were determined for forty-two young geologic samples by the Laboratory of Isotope Geochemistry, Department of Geosciences, in the period February 1, 1986 to June 30, 1989. Under the terms of Department of Energy Grant No. FG07-86ID12622, The University of Arizona was to provide state-of-the-art K-Ar age dating services, including sample preparation, analytical procedures, and computations, for forty-two young geologic samples submitted by DOE geothermal researchers. We billed only for forty samples. Age dates were determined for geologic samples from five regions with geothermal potential: the Cascade Mountains (Oregon); the Cascade Mountains (Washington); Ascension Island, South Atlantic Ocean; Cerro Prieto, Mexico; and Las Azufres, Mexico. The ages determined varied from 5.92 m.a. to 0.62 m.a. The integration of K-Ar dates with geologic data and the interpretation in terms of geologic and geothermal significance has been reported separately by the various DOE geothermal researchers. Table 1 presents a detailed listing of all samples dated, general sample location, researcher, researcher's organization, rock type, age, and probable error (1 standard deviation). Additional details regarding the geologic samples may be obtained from the respective geothermal researcher. 1 tab.
Passivity-based control of linear time-invariant systems modelled by bond graph
NASA Astrophysics Data System (ADS)
Galindo, R.; Ngwompo, R. F.
2018-02-01
Closed-loop control systems are designed for linear time-invariant (LTI) controllable and observable systems modelled by bond graph (BG). Cascade and feedback interconnections of BG models are realised through active bonds with no loading effect. The use of active bonds may lead to non-conservation of energy, and the overall system is modelled by the proposed pseudo-junction structures. These structures are built by adding parasitic elements to the BG models, and the overall system may become singularly perturbed. The structures for these interconnections can be seen as consisting of inner structures that satisfy energy conservation properties and outer structures including multiport-coupled dissipative fields. These fields highlight energy properties like passivity that are useful for control design. In both interconnections, junction structures and dissipative fields for the controllers are proposed, and passivity is guaranteed for the closed-loop systems, assuring robust stability. The cascade interconnection is applied to the structural representation of closed-loop transfer functions when a stabilising controller is applied to a given nominal plant. Applications are given when the plant and the controller are described by state-space realisations. The feedback interconnection is used to derive necessary and sufficient stability conditions based on the closed-loop characteristic polynomial, to solve a pole-placement problem, and to achieve zero steady-state error.
Sainz de Murieta, Iñaki; Rodríguez-Patón, Alfonso
2012-08-01
Despite the many designs of devices operating via DNA strand displacement, surprisingly none is explicitly devoted to the implementation of logical deductions. The present article introduces a new model of biosensor device that uses nucleic acid strands to encode simple rules such as "IF DNA_strand(1) is present THEN disease(A)" or "IF DNA_strand(1) AND DNA_strand(2) are present THEN disease(B)". Taking advantage of the strand displacement operation, our model makes these simple rules interact with input signals (either DNA or any type of RNA) to generate an output signal (in the form of nucleotide strands). This output signal represents a diagnosis, which can either be measured using FRET techniques, cascaded as the input of another logical deduction with different rules, or even be a drug that is administered in response to a set of symptoms. The encoding introduces an implicit error cancellation mechanism, which increases the system's scalability, enabling longer inference cascades with a bounded and controllable signal-to-noise ratio. It also allows the same rule to be used in forward or backward inference, providing the option of validly outputting negated propositions (e.g. "diagnosis A excluded"). The models presented in this paper can be used to implement smart logical DNA devices that perform genetic diagnosis in vitro. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
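A software analogue of the rule encoding may help fix ideas (hedged: this sketches only the logical layer, not the molecular strand-displacement implementation; strand and disease names are illustrative):

```python
# Each rule maps a frozenset of required input strands to an output signal.
RULES = {
    frozenset({"DNA_strand_1"}): "disease_A",
    frozenset({"DNA_strand_1", "DNA_strand_2"}): "disease_B",
}

def infer(present_strands, rules):
    """Forward inference: fire every rule whose inputs are all present;
    outputs can be cascaded as inputs to further rules."""
    signals = set(present_strands)
    changed = True
    while changed:
        changed = False
        for inputs, output in rules.items():
            if inputs <= signals and output not in signals:
                signals.add(output)
                changed = True
    return signals - set(present_strands)

print(infer({"DNA_strand_1", "DNA_strand_2"}, RULES))
# -> {'disease_A', 'disease_B'}
```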
NASA Astrophysics Data System (ADS)
O'Hagan, S.; Northern, J. H.; Gras, B.; Ewart, P.; Kim, C. S.; Kim, M.; Merritt, C. D.; Bewley, W. W.; Canedy, C. L.; Vurgaftman, I.; Meyer, J. R.
2016-06-01
The application of an interband cascade laser (ICL) to multi-mode absorption spectroscopy (MUMAS) in the mid-infrared region is reported. Measurements of individual mode linewidths of the ICL, derived from the pressure dependence of lineshapes in MUMAS signatures of single, isolated lines in the spectrum of HCl, were found to be in the range 10-80 MHz. Multi-line spectra of methane were recorded using spectrally limited bandwidths, of approximate width 27 cm⁻¹, defined by an interference filter, and consist of approximately 80 modes at spectral locations spanning the 100 cm⁻¹ bandwidth of the ICL output. Calibration of the methane pressures derived from MUMAS data using a capacitance manometer provided measurements with an uncertainty of 1.1%. Multi-species sensing is demonstrated by the simultaneous detection of methane, acetylene, and formaldehyde in a gas mixture. Individual partial pressures of the three gases are derived from best fits of model MUMAS signatures to the data with an experimental error of 10%. Using an ICL with an inter-mode interval of ~10 GHz, MUMAS spectra were recorded at pressures in the range 1-10 mbar, and, based on the data, a potential minimum detection limit of the order of 100 ppmv is estimated for MUMAS at atmospheric pressure using an inter-mode interval of 80 GHz.
An adaptive strategy for active debris removal
NASA Astrophysics Data System (ADS)
White, Adam E.; Lewis, Hugh G.
2014-04-01
Many parameters influence the evolution of the near-Earth debris population, including launch, solar, explosion, and mitigation activities, as well as other future uncertainties such as advances in space technology or changes in the social and economic drivers that affect the utilisation of space. These factors lead to uncertainty in the long-term debris population. This uncertainty makes it difficult to identify potential remediation strategies, involving active debris removal (ADR), that will perform effectively in all possible future cases. Strategies that cannot perform effectively, because of this uncertainty, risk either not achieving their intended purpose or becoming a hindrance to the efforts of spacecraft manufacturers and operators to address the challenges posed by space debris. One method to tackle this uncertainty is to create a strategy that can adapt and respond to the space debris population. This work explores the concept of an adaptive strategy, in terms of the number of objects required to be removed by ADR, to prevent the low Earth orbit (LEO) debris population from growing in size. This was demonstrated by using the University of Southampton's Debris Analysis and Monitoring Architecture to the Geosynchronous Environment (DAMAGE) tool to investigate ADR rates (number of removals per year) that change over time in response to the current space environment, with the requirement of achieving zero growth of the LEO population. DAMAGE was used to generate multiple Monte Carlo projections of the future LEO debris environment. Within each future projection, the debris removal rate was derived at five-year intervals by a new statistical debris evolutionary model called the Computational Adaptive Strategy to Control Accurately the Debris Environment (CASCADE) model. CASCADE predicted the long-term evolution of the current DAMAGE population under a variety of different ADR rates in order to identify a removal rate that produced zero net growth for that particular projection after 200 years. The results show that using an adaptive ADR rate generated by CASCADE, alongside good compliance with existing mitigation measures, increases the probability of achieving a constant LEO population of objects greater than 10 cm. This probability was 12% greater than when removing five objects per year, with the additional advantage of requiring only 3.1 removals per year, on average.
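A minimal sketch of the adaptive idea, assuming a hypothetical `project` function standing in for a full evolutionary model such as DAMAGE/CASCADE (all numbers are toy values, not the study's results):

```python
def zero_growth_rate(population, project, candidate_rates, horizon=200):
    """Pick the smallest candidate ADR rate whose `horizon`-year projection
    keeps the population at or below its current size. `project` is a
    hypothetical stand-in for a full debris evolution model."""
    for rate in sorted(candidate_rates):
        if project(population, rate, horizon) <= population:
            return rate
    return max(candidate_rates)

# Toy projection: net growth of 120 objects/yr, each removal/yr avoiding 25.
toy = lambda n0, r, h: n0 + h * (120.0 - 25.0 * r)
print(zero_growth_rate(17000, toy, [0, 1, 2, 3, 4, 5]))  # -> 5
```

Re-running this selection every five years against the then-current population is what makes the strategy adaptive rather than fixed.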
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1973-01-01
The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.
New Challenges in Computational Thermal Hydraulics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadigaroglu, George; Lakehal, Djamel
New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.
Fluctuations of a Temperate Mountain Glacier in Response to Climate Change
NASA Astrophysics Data System (ADS)
Bachmann, M.; Bidlake, W.
2012-12-01
Glacier mass balance is a fundamental parameter for understanding and predicting the evolution of glaciers on the landscape in response to climate change. The USGS Ice and Climate Project (ICP) continues to extend the longest-running USGS benchmark glacier mass-balance record at South Cascade Glacier, Washington. Due to the importance of South Cascade Glacier data sets for glaciological and climate research, ICP is releasing decades-old previously unpublished glacier surface and bed maps, mass balance data at individual sites, ice velocity data, and an updated ice inventory for the surrounding basin. The complete record includes a pre-Industrial Revolution reconstruction of the glacier and seasonal mass balance measurements for the past 54 years (1958-2012). Since 2000, the glacier has experienced four of the five most negative summer balances and two of the largest positive accumulation years, indicating that the glacier is continuing to respond to recent warming and precipitation changes. Recently, ICP has developed a temperature-index glacier melt model that extrapolates daily accumulation and melt rates from intermittent field observations based on regional meteorological data, and an expert system for mass balance that captures the strengths of both measurement and modeling for assessing mass balance. The models have been successfully calibrated at South Cascade Glacier, where ample observations are available, but are designed to be used with as few or as many glaciological field data as are available for a given ice mass.
High fishery catches through trophic cascades in China.
Szuwalski, Cody S; Burgess, Matthew G; Costello, Christopher; Gaines, Steven D
2017-01-24
Indiscriminate and intense fishing has occurred in many marine ecosystems around the world. Although this practice may have negative effects on biodiversity and populations of individual species, it may also increase total fishery productivity by removing predatory fish. We examine the potential for this phenomenon to explain the high reported wild catches in the East China Sea-one of the most productive ecosystems in the world that has also had its catch reporting accuracy and fishery management questioned. We show that reported catches can be approximated using an ecosystem model that allows for trophic cascades (i.e., the depletion of predators and consequent increases in production of their prey). This would be the world's largest known example of marine ecosystem "engineering" and suggests that trade-offs between conservation and food production exist. We project that fishing practices could be modified to increase total catches, revenue, and biomass in the East China Sea, but single-species management would decrease both catches and revenue by reversing the trophic cascades. Our results suggest that implementing single-species management in currently lightly managed and highly exploited multispecies fisheries (which account for a large fraction of global fish catch) may result in decreases in global catch. Efforts to reform management in these fisheries will need to consider system wide impacts of changes in management, rather than focusing only on individual species.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors into their objective functions to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and a complex convex problem must be solved. In this paper, we present a novel method to eliminate the effects of the errors from the projection space (representation) rather than from the input space. We first prove that ℓ1-, ℓ2-, ℓ∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least squares regression, sparse subspace clustering, and locally linear representation.
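A rough numpy sketch of an L2-graph-style construction, reading the ℓ2 projection space as ridge regression of each point over the remaining data and keeping only the dominant coefficients (the parameters `lam` and `k` are illustrative choices, not the paper's settings):

```python
import numpy as np

def l2_graph(X, lam=0.1, k=10):
    """Each column x_i of X (features x samples) is coded over the other
    columns via ridge regression; only the k largest-magnitude coefficients
    are kept, following intrasubspace projection dominance."""
    d, n = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        D = np.delete(X, i, axis=1)                    # dictionary without x_i
        c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        small = np.argsort(np.abs(c))[:-k]             # indices of small coefficients
        c[small] = 0.0
        W[np.arange(n) != i, i] = np.abs(c)
    return (W + W.T) / 2                               # symmetric affinity matrix
```

The resulting affinity matrix would then feed a standard spectral clustering or graph-embedding step.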
NASA Astrophysics Data System (ADS)
Deligne, Natalia; Cashman, Katharine; Grant, Gordon; Jefferson, Anne
2013-04-01
Lava flows are often considered to be natural hazards with localized bimodal impact: they completely destroy everything in their path but, apart from the occasional forest fire, cause little or no damage outside their immediate footprint. However, in certain settings, lava flows can have surprisingly far-reaching impacts with the potential to cause serious problems in distant urban areas. Here we present results from a study of the interaction between lava flows and surface water in the central Oregon Cascades, USA, where we find that lava flows in the High Cascades have the potential to cause considerable water shortages in Eugene, Oregon (Oregon's second largest metropolitan area) and the greater Willamette Valley (home to ~70% of Oregon's population). The High Cascades host a groundwater-dominated hydrological regime with water residence times on the order of years. Due to the steady output of groundwater, rivers sourced in the High Cascades are a critical water resource for Oregon, particularly in August and September when it has not rained for several months. One such river, the McKenzie River, is the sole source of drinking water for Eugene, Oregon, and prior to the installation of dams in the 1960s accounted for ~40% of late summer river flow in the Willamette River in Portland, 445 river km downstream of the source of the McKenzie River. The McKenzie River has been dammed at least twice by lava flows during the Holocene; depending on the time of year these eruptions occurred, we project that available water would have decreased by 20% in present-day Eugene, Oregon, for days to weeks at a time. Given the importance of the McKenzie River and its location on the margin of an active volcanic area, we expect that future volcanic eruptions could likewise impact water supplies in Eugene and the greater Willamette Valley. As such, the urban center of Eugene, Oregon, and the greater Willamette Valley more broadly, is vulnerable to the most benign of volcanic hazards, lava flows, located over 100 km away.
1987-12-01
Study for Sun River Electric Cooperative, Inc., Fairfield, Montana. Only front-matter fragments survive: a reference to Butler, G.C., C. Hyslop, and O. Huntzinger (editors), 1980, Anthropogenic Compounds, and table-of-contents entries covering actual and projected population of selected Montana counties and cities (1980-1984), enrollment by grade level, City of Great Falls revenues and expenditures for all governmental funds (FY 1980-2000), and Cascade County.
Steinwand, Daniel R.; Hutchinson, John A.; Snyder, J.P.
1995-01-01
In global change studies the effects of map projection properties on data quality are apparent, and the choice of projection is significant. To aid compilers of global and continental data sets, six equal-area projections were chosen: the interrupted Goode Homolosine, the interrupted Mollweide, the Wagner IV, and the Wagner VII for global maps; the Lambert Azimuthal Equal-Area for hemisphere maps; and the Oblated Equal-Area and the Lambert Azimuthal Equal-Area for continental maps. Distortions in small-scale maps caused by reprojection, and the additional distortions incurred when reprojecting raster images, were quantified and graphically depicted. For raster images, the errors caused by the usual resampling methods (pixel brightness level interpolation) were responsible for much of the additional error where the local resolution and scale change were the greatest.
NASA Astrophysics Data System (ADS)
McAfee, S. A.; DeLaFrance, A.
2017-12-01
Investigating the impacts of climate change often entails using projections from inherently imperfect general circulation models (GCMs) to drive models that simulate biophysical or societal systems in great detail. Error or bias in the GCM output is often assessed in relation to observations, and the projections are adjusted so that the output from impacts models can be compared to historical or observed conditions. Uncertainty in the projections is typically accommodated by running more than one future climate trajectory to account for differing emissions scenarios, model simulations, and natural variability. The current methods for dealing with error and uncertainty treat them as separate problems. In places where observed and/or simulated natural variability is large, however, it may not be possible to identify a consistent degree of bias in mean climate, blurring the lines between model error and projection uncertainty. Here we demonstrate substantial instability in mean monthly temperature bias across a suite of GCMs used in CMIP5. This instability is greatest in the highest latitudes during the cool season, where shifts from average temperatures below to above freezing could have profound impacts. In models with the greatest degree of bias instability, the timing of regional shifts from below to above average normal temperatures in a single climate projection can vary by about three decades, depending solely on the degree of bias assessed. This suggests that current bias correction methods based on comparison to 20- or 30-year normals may be inappropriate, particularly in the polar regions.
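As a minimal sketch of how such bias instability can be diagnosed (assuming co-located monthly-mean model and observed series on a common grid cell; function and variable names are ours, not from the study):

```python
import numpy as np

def bias_by_window(model_t, obs_t, length=30):
    """Monthly-mean temperature bias assessed against every possible
    `length`-year normal; a large spread across windows signals bias
    instability (the bias is not a well-defined constant)."""
    n = len(model_t) - length + 1
    return np.array([model_t[i:i + length].mean() - obs_t[i:i + length].mean()
                     for i in range(n)])

# If biases.max() - biases.min() is comparable to the projected warming,
# shift timings corrected with different normals can differ by decades.
```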
NASA Astrophysics Data System (ADS)
Suttinger, Matthew; Go, Rowel; Figueiredo, Pedro; Todi, Ankesh; Shu, Hong; Leshin, Jason; Lyakh, Arkadiy
2018-01-01
Experimental and model results for 15-stage broad area quantum cascade lasers (QCLs) are presented. Continuous wave (CW) power scaling from 1.62 to 2.34 W has been experimentally demonstrated for 3.15-mm long, high reflection-coated QCLs for an active region width increased from 10 to 20 μm. A semiempirical model for broad area devices operating in CW mode is presented. The model uses measured pulsed transparency current, injection efficiency, waveguide losses, and differential gain as input parameters. It also takes into account active region self-heating and sublinearity of pulsed power versus current laser characteristic. The model predicts that an 11% improvement in maximum CW power and increased wall-plug efficiency can be achieved from 3.15 mm×25 μm devices with 21 stages of the same design, but half doping in the active region. For a 16-stage design with a reduced stage thickness of 300 Å, pulsed rollover current density of 6 kA/cm2, and InGaAs waveguide layers, an optical power increase of 41% is projected. Finally, the model projects that power level can be increased to ˜4.5 W from 3.15 mm×31 μm devices with the baseline configuration with T0 increased from 140 K for the present design to 250 K.
Model dependence and its effect on ensemble projections in CMIP5
NASA Astrophysics Data System (ADS)
Abramowitz, G.; Bishop, C.
2013-12-01
Conceptually, the notion of model dependence within climate model ensembles is relatively simple: modelling groups share a literature base, parametrisations, data sets, and even model code, so the potential for dependence in sampling different climate futures is clear. How, though, can this conceptual problem inform a practical solution that demonstrably improves the ensemble mean and ensemble variance as an estimate of system uncertainty? While some research has already focused on error correlation or error covariance as a candidate to improve ensemble mean estimates, a complete definition of independence must at least implicitly subscribe to an ensemble interpretation paradigm, such as the 'truth-plus-error', 'indistinguishable', or more recently 'replicate Earth' paradigm. Using a definition of model dependence based on error covariance within the replicate Earth paradigm, this presentation will show that accounting for dependence in surface air temperature gives cooler projections in CMIP5 (by as much as 20% globally in some RCPs), although results differ significantly for each RCP, especially regionally. The fact that accounting for dependence changes projections differently across RCPs is not an inconsistent result: different numbers of submissions to each RCP by different modelling groups mean that differences in projections from different RCPs are not entirely about RCP forcing conditions; they also reflect different sampling strategies.
Study of a co-designed decision feedback equalizer, deinterleaver, and decoder
NASA Technical Reports Server (NTRS)
Peile, Robert E.; Welch, Loyd
1990-01-01
A technique that promises better quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have driven considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both the ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by co-designing the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were investigated, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.
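To make the error-propagation point concrete, here is a toy DFE over a fixed ISI channel (hedged: a bare illustration with a trivial feedforward path and no interleaver or decoder, i.e., not the co-designed system studied in the report; the channel taps and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.3])                # ISI channel impulse response
bits = rng.integers(0, 2, 10_000)
x = 2.0 * bits - 1.0                         # BPSK symbols
y = np.convolve(x, h)[:len(x)] + 0.3 * rng.standard_normal(len(x))

# DFE feedback path: past *decisions* re-synthesize the ISI tail, which is
# subtracted before slicing; one wrong decision corrupts the next len(h)-1 taps.
d = np.zeros(len(x))
for n in range(len(x)):
    tail = sum(h[k] * d[n - k] for k in range(1, len(h)) if n - k >= 0)
    d[n] = 1.0 if y[n] - tail >= 0 else -1.0
print(f"BER = {np.mean(d != x):.4f}")
```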
ERIC Educational Resources Information Center
Rice, Bart F.; Wilde, Carroll O.
It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
B. Studies: 6. The Yugoslav Serbo-Croatian-English Contrastive Project.
ERIC Educational Resources Information Center
Filipovic, Rudolf, Ed.
Articles in this volume relate to the Yugoslav Serbo-Croatian-English Contrastive Project: (1) "The Yugoslav Serbo-Croatian-English Contrastive Project at the End of its Second Phase (1971-1975)," Rudolf Filipovic: Pedagogical goals and application of contrastive analysis are best achieved when accompanied by error analysis. Reports,…
Sensitivity-enhanced optical temperature sensor with cascaded LPFGs
NASA Astrophysics Data System (ADS)
Tsutsumi, Yasuhiro; Miyoshi, Yuji; Ohashi, Masaharu
2011-12-01
We propose a new structure of optical fiber temperature sensor with cascaded long-period fiber gratings (LPFGs) and investigate the temperature-dependent loss of the cascaded LPFGs. Each of the cascaded LPFGs has the same resonance wavelength and the same response to temperature change, because the cascaded LPFGs are fabricated with a heat-shrinkable tube and a screw. The total resonance loss of the proposed cascaded LPFGs shows higher temperature sensitivity than that of a single LPFG. The thermal coefficient of the 4-cascaded LPFG is also more than 4 times larger than that of a single LPFG.
Code of Federal Regulations, 2011 CFR
2011-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2013 CFR
2013-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2014 CFR
2014-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2012 CFR
2012-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Error compensation for hybrid-computer solution of linear differential equations
NASA Technical Reports Server (NTRS)
Kemp, N. H.
1970-01-01
Z-transform technique compensates for digital transport delay and digital-to-analog hold. Method determines best values for compensation constants in multi-step and Taylor series projections. Technique also provides hybrid-calculation error compared to continuous exact solution, plus system stability properties.
Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi
2015-03-01
The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the difference among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, which represent a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not wide-spread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the width of bias intervals. For example, the mean width of SFSTP's β-expectation is 1.10-fold (CV=7.6%) of that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) of that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties/obstacles associated with the application of total error approaches in routine analysis and their desirable future improvements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
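For orientation, a β-expectation tolerance interval of the kind used in accuracy profiles can be computed as below (a sketch using the standard one-sample formula; the error values and the ±15% limits are illustrative, not from the study):

```python
import numpy as np
from scipy.stats import t

def beta_expectation_interval(errors, beta=0.95):
    """beta-expectation tolerance interval on relative errors (%):
    mean +/- t_{(1+beta)/2, n-1} * s * sqrt(1 + 1/n). A batch or method is
    accepted if the interval lies within the acceptance limits."""
    e = np.asarray(errors, float)
    n, m, s = len(e), e.mean(), e.std(ddof=1)
    k = t.ppf((1 + beta) / 2, n - 1) * np.sqrt(1 + 1 / n)
    return m - k * s, m + k * s

lo, hi = beta_expectation_interval([-2.1, 3.4, 0.8, -1.2, 2.5, 1.9])
print(f"[{lo:.1f}%, {hi:.1f}%] vs. +/-15% acceptance limits")
```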
Evaluation of a UMLS Auditing Process of Semantic Type Assignments
Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua
2007-01-01
The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845
The effects of training on errors of perceived direction in perspective displays
NASA Technical Reports Server (NTRS)
Tharp, Gregory K.; Ellis, Stephen R.
1990-01-01
An experiment was conducted to determine the effects of training on the characteristic direction errors that are observed when subjects estimate exocentric directions on perspective displays. Changes in five subjects' perceptual errors were measured during a training procedure designed to eliminate the error. The training was provided by displaying to each subject both the sign and the direction of his judgment error. The feedback provided by the error display was found to decrease but not eliminate the error. A lookup table model of the source of the error was developed in which the judgment errors were attributed to overestimates of both the pitch and the yaw of the viewing direction used to produce the perspective projection. The model predicts the quantitative characteristics of the data somewhat better than previous models did. A mechanism is proposed for the observed learning, and further tests of the model are suggested.
This project summary highlights recent findings from research undertaken to develop improved methods to assess potential human health risks related to drinking water disinfection byproduct (DBP) exposures.
Prairie Monitoring Protocol Development: North Coast and Cascades Network
McCoy, Allen; Dalby, Craig
2009-01-01
The purpose of the project was to conduct research that will guide development of a standard approach to monitoring several components of prairies within the North Coast and Cascades Network (NCCN) parks. Prairies are an important element of the natural environment at many parks, including San Juan Island National Historical Park (NHP) and Ebey's Landing National Historical Reserve (NHR). Forests have been encroaching on these prairies for many years, and so monitoring of the prairies is an important resource issue. This project specifically focused on San Juan Island NHP. Prairies at Ebey's Landing NHR will be monitored in the future, but that park was not mapped as part of this prototype project. In the interest of efficiency, the Network decided to investigate two main issues before launching a full protocol development effort: (1) the imagery requirements for monitoring prairie components, and (2) the effectiveness of software to assist in extracting features from the imagery. Several components of prairie monitoring were initially identified as being easily tracked using aerial imagery. These components included prairie/forest edge, broad prairie composition (for example, shrubs, scattered trees), and internal exclusions (for example, shrubs, bare ground). In addition, we believed that it might be possible to distinguish different grasses in the prairies if the imagery were of high enough resolution. Although the areas in question at San Juan Island NHP are small enough that mapping on the ground with GPS (Global Positioning System) would be feasible, other applications could benefit from aerial image acquisition on a regular, recurring basis and thereby make the investment in aerial imagery worthwhile. The additional expense of orthorectifying the imagery also was determined to be cost-effective.
NASA Astrophysics Data System (ADS)
Xue, Fei; Bompard, Ettore; Huang, Tao; Jiang, Lin; Lu, Shaofeng; Zhu, Huaiying
2017-09-01
As the modern power system is expected to develop into a more intelligent and efficient version, i.e. the smart grid, or to become the central backbone of the energy internet for free energy interactions, security concerns related to cascading failures and their potentially catastrophic results have been raised. Research on topological analysis based on complex networks has made great contributions to revealing structural vulnerabilities of power grids, including cascading failure analysis. However, the existing literature, with inappropriate assumptions in modeling, still cannot distinguish the effects of structure from those of the operational state, and so gives little meaningful guidance for system operation. This paper reveals the interrelation between network structure and operational states in cascading failure and gives a quantitative evaluation integrating both perspectives. For the structural analysis, cascading paths are identified by extended betweenness and quantitatively described by cascading drop and cascading gradient. Furthermore, the operational state along cascading paths is described by the loading level. Then, the risk of cascading failure along a specific cascading path can be quantitatively evaluated considering these two factors. The maximum cascading gradient of all possible cascading paths can be used as an overall metric to evaluate the entire power grid for its features related to cascading failure. The proposed method is tested and verified on the IEEE 30-bus and IEEE 118-bus systems; simulation evidence presented in this paper suggests that the proposed model can identify the structural causes of cascading failure and is promising for giving meaningful guidance for the protection of system operation in the future.
NASA Astrophysics Data System (ADS)
Mao, Cuili; Lu, Rongsheng; Liu, Zhijian
2018-07-01
In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of periodic phase errors is analyzed. The periodic phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodic phase error without calibrating the measurement system. Some simulation and experimental results are presented to demonstrate the validity of the proposed approach.
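One common realization of the inverse-phase idea, sketched under our own simplifying assumptions (an ideal N-step phase-shifting model, with the second pattern set carrying an extra π/N initial offset so that its periodic gamma-induced ripple inverts; this is an illustration of the principle, not the paper's exact algorithm):

```python
import numpy as np

def n_step_phase(images):
    """Wrapped phase from an ideal N-step phase-shifting set (shifts 2*pi*k/N)."""
    N = len(images)
    s = sum(I * np.sin(2 * np.pi * k / N) for k, I in enumerate(images))
    c = sum(I * np.cos(2 * np.pi * k / N) for k, I in enumerate(images))
    return -np.arctan2(s, c)

def inverse_phase_compensate(set_a, set_b):
    """set_b is projected with an extra pi/N initial offset, so its periodic
    ripple has opposite sign; averaging the two wrapped maps (after removing
    the known pi/N offset) cancels most of the ripple."""
    N = len(set_a)
    pa = n_step_phase(set_a)
    pb = n_step_phase(set_b) - np.pi / N
    d = np.angle(np.exp(1j * (pb - pa)))      # wrap the difference first
    return pa + d / 2
```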
Methods to achieve accurate projection of regional and global raster databases
Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan
2002-01-01
Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand, et al., 1995; Yang, et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.
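As a small, hedged check of the per-cell distortion at stake (using the real pyproj library with a Mollweide proj-string; the 1° cell size and latitudes are arbitrary), one can forward-project a cell and compare areas across latitudes:

```python
import numpy as np
from pyproj import Transformer

to_moll = Transformer.from_crs("EPSG:4326", "+proj=moll", always_xy=True)

def projected_cell_area(lon, lat, d=1.0):
    """Planar area (shoelace formula) of a d-degree cell after projection."""
    xs, ys = to_moll.transform([lon, lon + d, lon + d, lon],
                               [lat, lat, lat + d, lat + d])
    xs, ys = np.asarray(xs), np.asarray(ys)
    return 0.5 * abs(xs @ np.roll(ys, -1) - ys @ np.roll(xs, -1))

# Under an equal-area projection this ratio should track the true spherical
# ratio cos(0.5 deg)/cos(60.5 deg) ~ 2.03; in a raster workflow, remaining
# error comes mainly from the pixel resampling, not the projection itself.
print(projected_cell_area(0, 0) / projected_cell_area(0, 60))
```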
Wang, Nu; Boswell, Paul G
2017-10-20
Gradient retention times are difficult to project from the underlying retention factor (k) vs. solvent composition (φ) relationships. A major reason for this difficulty is that gradients produced by HPLC pumps are imperfect: gradient delay, gradient dispersion, and solvent mis-proportioning are all difficult to account for in calculations. However, we recently showed that a gradient "back-calculation" methodology can measure these imperfections and take them into account. In RPLC, when the back-calculation methodology was used, the error in projected gradient retention times was as low as could be expected based on the repeatability of the k vs. φ relationships. HILIC, however, presents a new challenge: the selectivity of HILIC columns drifts strongly over time. Retention is repeatable in the short term, but selectivity frequently drifts over the course of weeks. In this study, we set out to determine whether the issue of selectivity drift can be avoided by doing our experiments quickly, and whether any other factors make it difficult to predict gradient retention times from isocratic k vs. φ relationships when gradient imperfections are taken into account with the back-calculation methodology. While in past reports the error in retention projections was >5%, the back-calculation methodology brought our error down to ∼1%. This result was 6-43 times more accurate than projections made using ideal gradients and 3-5 times more accurate than the same retention projections made using offset gradients (i.e., gradients that only took gradient delay into account). Still, the error remained higher in our HILIC projections than in RPLC. Based on the shape of the back-calculated gradients, we suspect the higher error is a result of prominent gradient distortion caused by strong, preferential water uptake from the mobile phase into the stationary phase during the gradient, a factor our model did not properly take into account. It appears that, at least with the stationary phase we used, gradient distortion is an important factor to take into account in retention projection in HILIC that is not usually important in RPLC. Copyright © 2017 Elsevier B.V. All rights reserved.
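The projection step itself rests on the fundamental equation of gradient elution; the sketch below integrates it numerically with an assumed linear ln k vs. φ model and an illustrative ramp (none of the numbers are the paper's measured values, and a back-calculated φ(t) would simply replace the programmed one):

```python
import numpy as np

def project_retention(phi_of_t, k_of_phi, t0, dt=0.001, t_max=60.0):
    """Integrate the fundamental equation of gradient elution,
    int_0^{tR - t0} dt / (t0 * k(phi(t))) = 1, for whatever gradient
    phi_of_t is supplied (programmed or back-calculated)."""
    integral, t = 0.0, 0.0
    while integral < 1.0 and t < t_max:
        integral += dt / (t0 * k_of_phi(phi_of_t(t)))
        t += dt
    return t + t0          # retention time = migration time + dead time

# Illustrative linear-solvent-strength model and a 10-min 5->95% ramp:
k = lambda phi: np.exp(np.log(500.0) - 8.0 * phi)
phi = lambda t: np.clip(0.05 + 0.9 * t / 10.0, 0.05, 0.95)
print(project_retention(phi, k, t0=1.0))
```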
A Unified Fault-Tolerance Protocol
NASA Technical Reports Server (NTRS)
Miner, Paul; Geser, Alfons; Pike, Lee; Maddalon, Jeffrey
2004-01-01
Davies and Wakerly show that Byzantine fault tolerance can be achieved by a cascade of broadcasts and middle value select functions. We present an extension of the Davies and Wakerly protocol, the unified protocol, and its proof of correctness. We prove that it satisfies validity and agreement properties for communication of exact values. We then introduce bounded communication error into the model. Inexact communication is inherent for clock synchronization protocols. We prove that validity and agreement properties hold for inexact communication, and that exact communication is a special case. As a running example, we illustrate the unified protocol using the SPIDER family of fault-tolerant architectures. In particular we demonstrate that the SPIDER interactive consistency, distributed diagnosis, and clock synchronization protocols are instances of the unified protocol.
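The middle value select primitive at the heart of such protocols can be sketched in a few lines (hedged: this shows only the single-round select with τ suspected faults, not the full SPIDER protocol machinery or the inexact-communication analysis):

```python
def middle_value_select(values, tau):
    """Discard the tau smallest and tau largest received values (the ones a
    Byzantine source could have pushed to the extremes), then take the
    middle of the remainder."""
    v = sorted(values)[tau:len(values) - tau]
    return v[len(v) // 2]

# A faulty node's wild value cannot move the result outside the range of
# values held by good nodes:
print(middle_value_select([10.0, 10.2, 9.9, 10.1, 999.0], tau=1))  # -> 10.1
```

Cascading broadcasts with this select function is what bounds disagreement among good nodes, for exact values and, with bounded error, for inexact ones such as clock readings.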
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects onto SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects onto subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner. • The nicotine-induced secondary motoneuron axonal pathfinding errors can occur independent of any muscle fiber alterations. • Nicotine exposure primarily affects dorsal projecting secondary motoneuron axons. • Nicotine-induced primary motoneuron axon pathfinding errors can influence secondary motoneuron axon morphology.
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of errors is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering led to some undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
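A bare-bones POCS loop in this spirit might look as follows (a hedged sketch using PyWavelets, with a fixed uniform filter standing in for the paper's edge-map-adaptive mask; `coeff_mask` is a hypothetical structure marking which received coefficients are trusted):

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def pocs_conceal(img, coeff_mask, wavelet="db4", level=3, iters=10, size=3):
    """Alternate two convex projections: (1) low-pass filtering in the
    spatial domain, (2) restoration of the wavelet coefficients known to
    be received correctly (coeff_mask mirrors the coefficient structure,
    with 1 where a coefficient is trusted)."""
    ref = pywt.wavedec2(img, wavelet, level=level)   # coefficients as received
    x = img.astype(float).copy()
    for _ in range(iters):
        x = uniform_filter(x, size=size)             # projection 1: smoothness
        cur = pywt.wavedec2(x, wavelet, level=level)
        restored = [np.where(coeff_mask[0], ref[0], cur[0])]
        for a, b, m in zip(ref[1:], cur[1:], coeff_mask[1:]):
            restored.append(tuple(np.where(mi, ai, bi)
                                  for ai, bi, mi in zip(a, b, m)))
        x = pywt.waverec2(restored, wavelet)[:img.shape[0], :img.shape[1]]
    return x
```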
NASA Astrophysics Data System (ADS)
Tilg, Anna-Maria; Schöber, Johannes; Huttenlau, Matthias; Messner, Jakob; Achleitner, Stefan
2017-04-01
Hydropower is a renewable energy source which can help to stabilize fluctuations in the volatile energy market. Pumped-storage infrastructures in the European Alps in particular play an important role within the European energy grid. Today, the runoff of rivers in the Alps is often influenced by cascades of hydropower infrastructures whose operational procedures are triggered by energy market demands, water deliveries, and flood control aspects rather than by hydro-meteorological variables. An example of such a highly regulated river is the Inn in the Eastern Alps, originating in the Engadin (Switzerland). A new hydropower plant is going to be built as a transboundary project at the border of Switzerland and Austria using the water of the Inn River. For its operation, a runoff forecast to the plant is required. The challenge in this case is that a high proportion of the runoff is turbine water from an upstream pumped-storage cascade. The newly developed physically based hydrological forecasting system mainly covers natural runoff processes caused by storms and snow melt and can model only a small degree of human impact. The discontinuous runoff components downstream of the pumped storage are therefore described by an additional statistical model that has been developed. The main goal of the statistical model is to forecast the turbine water up to five days in advance; this lead time exceeds that of the available energy production forecast. Additionally, the amount of turbine water is linked to the need for electricity production and the electricity price. It has been shown that the day-ahead prognosis of energy production and the turbine inflow of the previous week in particular are good predictors and are therefore used as input parameters for the model. As the data are bounded due to technical conditions, so-called Tobit models have been used to develop a linear regression for the runoff forecast. Although the day-ahead prognosis cannot always be kept, the regression model delivers very reasonable results, especially during office hours. In the remaining hours the error between measurement and forecast increases. Overall, the inflow forecast can be substantially improved by implementing the developed regression in the hydrological modelling system.
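For reference, a left-censored Tobit fit can be sketched by maximum likelihood as below (a generic sketch, not the study's model; here the design matrix X would hold predictors such as the day-ahead production prognosis and the previous week's turbine inflow, and the censoring point, starting values, and scaling are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_fit(X, y, lower=0.0):
    """Left-censored Tobit regression: observations at or below `lower`
    (e.g. no turbine flow) contribute through the normal CDF, uncensored
    ones through the normal PDF."""
    def nll(p):
        beta, log_s = p[:-1], p[-1]
        mu, s = X @ beta, np.exp(log_s)
        cens = y <= lower
        ll = np.where(cens,
                      norm.logcdf((lower - mu) / s),
                      norm.logpdf((y - mu) / s) - log_s)
        return -ll.sum()
    p0 = np.zeros(X.shape[1] + 1)          # crude start; scale data in practice
    return minimize(nll, p0, method="BFGS").x
```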
From Here to There: Lessons from an Integrative Patient Safety Project in Rural Health Care Settings
2005-05-01
...errors and patient falls. The medication errors generally involved one of three issues: incorrect dose, time, or port. Although most of the health... statistics about trends; and the summary of events related to patient safety and medical errors. The interplay among factors: these three domains... the medical staff. We explored these issues further when administering a staff-wide Patient Safety Survey. Responses mirrored the findings that...
PLANNING QUALITY IN GEOSPATIAL PROJECTS
This presentation will briefly review some legal drivers and present a structure for the writing of geospatial Quality Assurance Project Plans. In addition, the Geospatial Quality Council's geospatial information life-cycle and sources-of-error flowchart will be reviewed.
SITE project. Phase 1: Continuous data bit-error-rate testing
NASA Technical Reports Server (NTRS)
Fujikawa, Gene; Kerczewski, Robert J.
1992-01-01
The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.
TNFa/TNFR2 signaling is required for glial ensheathment at the dorsal root entry zone
Smith, Cody J.; Bagnat, Michel; Deppmann, Christopher D.
2017-01-01
Somatosensory information from the periphery is routed to the spinal cord through centrally-projecting sensory axons that cross into the central nervous system (CNS) via the dorsal root entry zone (DREZ). The glial cells that ensheath these axons ensure rapid propagation of this information. Despite the importance of this glial-axon arrangement, how this afferent nerve is assembled during development is unknown. Using in vivo, time-lapse imaging we show that as centrally-projecting pioneer axons from dorsal root ganglia (DRG) enter the spinal cord, they initiate expression of the cytokine TNFalpha. This induction coincides with ensheathment of these axons by associated glia via a TNF receptor 2 (TNFR2)-mediated process. This work identifies a signaling cascade that mediates peripheral glial-axon interactions and it functions to ensure that DRG afferent projections are ensheathed after pioneer axons complete their navigation, which promotes efficient somatosensory neural function. PMID:28379965
An Empirical Analysis of the Cascade Secret Key Reconciliation Protocol for Quantum Key Distribution
2011-09-01
...performance, with the parity checks within each pass increasing; as a result, the processing time is expected to increase as well. A conclusion is drawn... timely manner has driven efforts to develop new key distribution methods. The most promising method is Quantum Key Distribution (QKD) and is...
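The core of Cascade, block parity comparison plus binary search, can be sketched as follows (hedged: this omits the shuffling between passes and the back-tracking into earlier passes that give Cascade its name, and it counts no channel costs):

```python
def parity(bits):
    return sum(bits) % 2

def binary_search_fix(alice, bob, lo, hi):
    """Locate and flip one error in bob[lo:hi] using about log2(hi-lo)
    parity exchanges (each would cost bits on the public channel)."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid
        else:
            lo = mid
    bob[lo] ^= 1

def cascade_pass(alice, bob, block_size):
    """One pass: fix every block whose parity disagrees. Full Cascade also
    shuffles between passes and revisits earlier blocks whenever a flip
    exposes an odd error count in a previously clean block."""
    for start in range(0, len(bob), block_size):
        end = min(start + block_size, len(bob))
        if parity(alice[start:end]) != parity(bob[start:end]):
            binary_search_fix(alice, bob, start, end)

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = a.copy(); b[5] ^= 1          # one channel error
cascade_pass(a, b, block_size=4)
print(b == a)                    # -> True
```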
National Guard and Reserve Equipment Report for Fiscal Year 2013 (NGRER FY 2013)
2012-02-01
MTOEs and modernization of equipment; however, the net result has been a more ready and modern force, prepared for utilization as an... projections for cascades to the ARNG through FY 2015. 3. Funding for New and Displaced Equipment Training. New Equipment Training (NET)/Displaced Equipment Training (DET) funding is dependent on the amount of new equipment scheduled to be received. In FY 2011, the ARNG received $79.6M in NET funding to
Orthogonal Projection in Teaching Regression and Financial Mathematics
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2010-01-01
Two improvements in teaching linear regression are suggested. The first is to include the population regression model at the beginning of the topic. The second is to use a geometric approach: to interpret the regression estimate as an orthogonal projection and the estimation error as the distance (which is minimized by the projection). Linear…
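In symbols, the geometric reading is (standard least-squares notation, with X the design matrix of full column rank; this is the textbook identity, stated here for completeness):

```latex
\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y, \qquad
\hat{y} = Py, \quad P = X(X^{\top}X)^{-1}X^{\top}, \qquad
X^{\top}(y - \hat{y}) = 0, \qquad
\|y - \hat{y}\| = \min_{\beta}\|y - X\beta\|.
```

The fitted vector ŷ = Py is the orthogonal projection of y onto the column space of X, and the orthogonality condition expresses the estimation error as the minimized distance to that subspace.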
Empirical prediction intervals improve energy forecasting
Kaack, Lynn H.; Apt, Jay; Morgan, M. Granger; McSharry, Patrick
2017-01-01
Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)’s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks. PMID:28760997
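A Gaussian density forecast built from past errors, scored with the closed-form CRPS for a normal distribution, can be sketched as follows (the past errors and the projection value are illustrative placeholders, not AEO numbers):

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(x, mu, sigma):
    """Closed-form CRPS of a Gaussian density forecast N(mu, sigma^2)
    against the realized value x (lower is better)."""
    z = (x - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

# Density forecast: point projection shifted by the mean past error, with
# spread taken from the past-error standard deviation.
past_errors = np.array([-3.1, 0.4, 2.2, -1.0, 4.5, -0.8])   # illustrative
mu_err, s_err = past_errors.mean(), past_errors.std(ddof=1)
projection = 100.0                                          # hypothetical value
print(crps_gaussian(97.0, projection + mu_err, s_err))
```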
Fringe-period selection for a multifrequency fringe-projection phase unwrapping method
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Jiang, Kejian
2016-08-01
The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of correctly accomplishing phase unwrapping even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered, and FOE results in phase unwrapping error. It is preferable for the phase unwrapping to remain correct while the fewest sets of lower-frequency fringe patterns are used. To achieve this goal, this paper defines a parameter called fringe order inaccuracy (FOI), theoretically analyzes the dominant factors which may induce FOE, proposes a method to optimally select the fringe periods for the MFPPUM with the aid of FOI, and presents experiments that investigate the impact of the dominant factors on phase unwrapping and demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and detect phase unwrapping error for the MFPPUM.
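The abstract does not reproduce the paper's FOI definition, but the fringe-order computation whose failure it analyzes is standard in temporal phase unwrapping: the unwrapped lower-frequency phase, scaled by the fringe-frequency ratio, selects the integer fringe order of the wrapped high-frequency phase. The sketch below (synthetic data, an assumed ratio of 8) shows how noise amplified by the ratio produces fringe order errors.

```python
import numpy as np

def unwrap_high(phi_h_wrapped, Phi_l_unwrapped, ratio):
    # Fringe order k from the scaled low-frequency phase; noise in Phi_l is
    # amplified by `ratio`, which is what triggers fringe order errors (FOE).
    k = np.round((ratio * Phi_l_unwrapped - phi_h_wrapped) / (2 * np.pi))
    return phi_h_wrapped + 2 * np.pi * k

rng = np.random.default_rng(0)
true_phase = np.linspace(0, 16 * np.pi, 500)          # one scan line
phi_h = np.angle(np.exp(1j * true_phase))             # wrapped high-freq phase
Phi_l = true_phase / 8 + rng.normal(0, 0.15, 500)     # noisy low-freq phase
recovered = unwrap_high(phi_h, Phi_l, 8)
foe = np.mean(np.abs(recovered - true_phase) > np.pi) # fringe-order failures
print(f"fringe order error rate: {foe:.3%}")
```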
Error Analysis of CM Data Products Sources of Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, Brian D.; Eckert-Gallup, Aubrey Celia; Cochran, Lainy Dromgoole
The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE's Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision-making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.
Global land cover mapping: a review and uncertainty analysis
Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu
2014-01-01
Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times, including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered by considerable uncertainty and inconsistency. A thorough review of these global land cover projects, including an evaluation of the sources of error and uncertainty, is prudent and enlightening. Therefore, this paper describes our work comparing, summarizing and conducting an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed-type classes, for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for future global mapping projects, including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.
CBO’s Revenue Forecasting Record
2015-11-01
[Figure: forecast errors for CBO's and the Administration's two-year revenue projections, 1983-2013; CBO's mean forecast error is 1.1%.] Congress of the United States, Congressional Budget Office, CBO's Revenue Forecasting Record, November 2015.
Application of the SEIPS Model to Analyze Medication Safety in a Crisis Residential Center.
Steele, Maria L; Talley, Brenda; Frith, Karen H
2018-02-01
Medication safety and error reduction have been studied in acute and long-term care settings, but little research is found in the literature regarding mental health settings. Because mental health settings are complex, medication administration is vulnerable to a variety of errors, from transcription to administration. The purpose of this study was to analyze critical factors related to a mental health work system structure and processes that threaten safe medication administration practices. The Systems Engineering Initiative for Patient Safety (SEIPS) model provides a framework to analyze factors affecting medication safety. The model approach analyzes the work system concepts of technology, tasks, persons, environment, and organization to guide the collection of data. In the study, Lean methodology tools were used to identify vulnerabilities in the system that could be targeted later for improvement activities. The project director completed face-to-face interviews, asked nurses to record disruptions in a log, and administered a questionnaire to nursing staff. The project director also conducted medication chart reviews and recorded medication errors using a standardized taxonomy for errors that allowed categorization of the prevalent types of medication errors. Results of the study revealed disruptions during the medication process, pharmacology training needs, and documentation processes as the primary opportunities for improvement. The project engaged nurses to identify sustainable quality improvement strategies to improve patient safety. The mental health setting carries challenges for safe medication administration practices. Through analysis of the structure, process, and outcomes of medication administration, opportunities for quality improvement and sustainable interventions were identified, including minimizing the number of distractions during medication administration, training nurses on psychotropic medications, and improving the documentation system. A task force was created to analyze the descriptive data and to establish objectives aimed at improving efficiency of the work system and care process involved in medication administration at the end of the project. Copyright © 2017 Elsevier Inc. All rights reserved.
Tampering with the turbulent energy cascade with polymer additives
NASA Astrophysics Data System (ADS)
Valente, Pedro; da Silva, Carlos; Pinho, Fernando
2014-11-01
We show that the strong depletion of the viscous dissipation in homogeneous viscoelastic turbulence reported by previous authors does not necessarily imply a depletion of the turbulent energy cascade. However, for large polymer relaxation times there is an onset of a polymer-induced kinetic energy cascade which competes with the non-linear energy cascade leading to its depletion. Remarkably, the total energy cascade flux from both cascade mechanisms remains approximately the same fraction of the kinetic energy over the turnover time as the non-linear energy cascade flux in Newtonian turbulence. The authors acknowledge the funding from COMPETE, FEDER and FCT (Grant PTDC/EME-MFE/113589/2009).
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
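A compact sketch of the screening-and-spread procedure as the abstract describes it: products within +/-50% of the GPCP zonal means are retained, and their standard deviation s (and relative error s/m) is taken as the estimated bias error. The arrays below are hypothetical zonal means, not GPCP data.

```python
import numpy as np

def bias_error(gpcp, products):
    # gpcp: zonal-mean base estimate (1-D array over latitude bands);
    # products: list of same-shape arrays from other algorithms/satellites.
    stack = []
    for p in products:
        ratio = p / gpcp
        # Keep a product in a band only where it lies within +/-50% of GPCP.
        stack.append(np.where((ratio > 0.5) & (ratio < 1.5), p, np.nan))
    stack = np.array(stack)
    s = np.nanstd(stack, axis=0, ddof=1)   # estimated systematic (bias) error
    return s, s / gpcp                     # absolute error and relative s/m

gpcp = np.array([2.1, 3.4, 5.0, 4.2, 1.8])         # hypothetical mm/day means
prods = [gpcp * f for f in (0.9, 1.1, 1.25, 0.8)]  # hypothetical products
s, rel = bias_error(gpcp, prods)
print("relative bias error s/m:", np.round(rel, 3))
```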
First order error corrections in common introductory physics experiments
NASA Astrophysics Data System (ADS)
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform standard laboratory experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, and air drag. These errors are usually ignored by students, and little thought is given to their sources. However, paying attention to the factors that give rise to errors helps students build better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of error in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support involving this project.
Cosmic-ray cascades photographed in scintillator
NASA Technical Reports Server (NTRS)
Barrowes, S. C.; Huggett, R. W.; Levit, L. B.; Porter, L. G.
1974-01-01
Light produced by nuclear-electromagnetic cascades in a plastic scintillator can be photographed, and the resulting images on film used to measure both the energy content of the cascades and also the positions at which the cascades passed through the scintillator. The energy content of a cascade can be measured to 20% and its position determined to plus or minus 0.8 cm in each scintillator. Techniques for photographing the cascades and analyzing the film are described. Sample data are presented and discussed.
Higher-order Kerr effect and harmonic cascading in gases.
Bache, Morten; Eilenberger, Falk; Minardi, Stefano
2012-11-15
The higher-order Kerr effect (HOKE) has recently been advocated to explain measurements of the saturation of the nonlinear refractive index in gases. Here we show that cascaded third-harmonic generation results in an effective fifth-order nonlinearity that is negative and significant. Higher-order harmonic cascading will also occur from the HOKE, and the cascading contributions may significantly modify the observed nonlinear index change. At lower wavelengths, cascading increases the HOKE saturation intensity, while for longer wavelengths cascading will decrease the HOKE saturation intensity.
Aerodynamics of a linear oscillating cascade
NASA Technical Reports Server (NTRS)
Buffum, Daniel H.; Fleeter, Sanford
1990-01-01
The steady and unsteady aerodynamics of a linear oscillating cascade are investigated using experimental and computational methods. Experiments are performed to quantify the torsion mode oscillating cascade aerodynamics of the NASA Lewis Transonic Oscillating Cascade for subsonic inlet flowfields using two methods: simultaneous oscillation of all the cascaded airfoils at various values of interblade phase angle, and the unsteady aerodynamic influence coefficient technique. Analysis of these data and correlation with classical linearized unsteady aerodynamic analysis predictions indicate that the wind tunnel walls enclosing the cascade have, in some cases, a detrimental effect on the cascade unsteady aerodynamics. An Euler code for oscillating cascade aerodynamics is modified to incorporate improved upstream and downstream boundary conditions and the unsteady aerodynamic influence coefficient technique. The new boundary conditions are shown to improve the unsteady aerodynamic predictions of the code, and the computational unsteady aerodynamic influence coefficient technique is shown to be a viable alternative for calculation of oscillating cascade aerodynamics.
Investigation of oscillating cascade aerodynamics by an experimental influence coefficient technique
NASA Technical Reports Server (NTRS)
Buffum, Daniel H.; Fleeter, Sanford
1988-01-01
Fundamental experiments are performed in the NASA Lewis Transonic Oscillating Cascade Facility to investigate the torsion mode unsteady aerodynamics of a biconvex airfoil cascade at realistic values of the reduced frequency for all interblade phase angles at a specified mean flow condition. In particular, an unsteady aerodynamic influence coefficient technique is developed and utilized in which only one airfoil in the cascade is oscillated at a time and the resulting airfoil surface unsteady pressure distribution measured on one dynamically instrumented airfoil. The unsteady aerodynamics of an equivalent cascade with all airfoils oscillating at a specified interblade phase angle are then determined through a vector summation of these data. These influence coefficient determined oscillation cascade data are correlated with data obtained in this cascade with all airfoils oscillating at several interblade phase angle values. The influence coefficients are then utilized to determine the unsteady aerodynamics of the cascade for all interblade phase angles, with these unique data subsequently correlated with predictions from a linearized unsteady cascade model.
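The vector summation at the heart of the influence coefficient technique can be stated compactly: if L_n is the complex unsteady-pressure response measured with only airfoil n oscillating, the equivalent response with all airfoils oscillating at interblade phase angle sigma is the phase-weighted sum of the L_n. The sketch below uses made-up coefficients to illustrate the summation only.

```python
import numpy as np

def cascade_response(influence, sigma):
    # influence[n]: complex influence coefficient measured with only airfoil
    # n oscillating (n relative to the instrumented airfoil). The equivalent
    # all-airfoils response at interblade phase angle sigma is the
    # phase-weighted vector sum of the coefficients.
    return sum(c * np.exp(1j * n * sigma) for n, c in influence.items())

# Hypothetical influence coefficients (complex amplitudes) for 5 airfoils.
L = {-2: 0.02 - 0.01j, -1: 0.10 + 0.05j, 0: 1.00 - 0.30j,
      1: 0.12 - 0.04j,  2: 0.03 + 0.02j}
for deg in (0, 90, 180, 270):
    c = cascade_response(L, np.deg2rad(deg))
    print(f"sigma = {deg:3d} deg: |Cp| = {abs(c):.3f}, "
          f"phase = {np.angle(c, deg=True):+.1f} deg")
```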
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinisch, H.L.
1997-04-01
The intracascade evolution of the defect distributions of cascades in copper is investigated using stochastic annealing simulations applied to cascades generated with molecular dynamics (MD). The temperature and energy dependencies of annihilation, clustering and free defect production are determined for individual cascades. The annealing simulation results illustrate the strong influence on intracascade evolution of the defect configuration existing in the primary damage state. Another factor significantly affecting the evolution of the defect distribution is the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. This phenomenon introduces a cascade energy dependence of defect evolution that is apparent only beyond the primary damage state, amplifying the need for further study of the annealing phase of cascade evolution and for performing many more MD cascade simulations at higher energies.
NASA Astrophysics Data System (ADS)
Chao, Li; Peigang, Yan; Xiangfeng, Wang; Wanjin, Han; Qingchao, Wang
2017-08-01
This paper investigates the feasibility of improving the aerodynamic performance of low pressure turbine (LPT) blade cascades and developing low solidity LPT blade cascades through a deflected trailing edge. A deflected trailing edge improved the aerodynamic performance of both standard and low solidity LPT blade cascades. For standard solidity LPT cascades, deflecting the trailing edge decreased the energy loss coefficient by 20.61% at a Reynolds number (Re) of 25,000 and freestream turbulence intensity (FSTI) of 1%. For a low solidity LPT cascade, aerodynamic performance was also improved by deflecting the trailing edge. Solidity of the LPT cascade can be reduced by 12.5% for blades with a deflected trailing edge without a drop in efficiency. The flow control mechanism surrounding a deflected trailing edge is also revealed.
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
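The abstract specifies a modified Hamming code without giving its construction, so the sketch below shows a generic extended Hamming (SEC-DED) codec for an 8-bit data word: parity bits at power-of-two positions give the syndrome for single-error correction, and one overall parity bit separates single from double errors. The flight design's 32-bit layout and its modifications are not reproduced here.

```python
def hamming_secded_encode(data_bits):
    # Extended Hamming (SEC-DED) for an 8-bit word: positions 1..12 hold
    # 8 data bits plus 4 parity bits; index 0 holds the overall parity.
    n = 12
    code = [0] * (n + 1)
    data = list(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):              # not a power of two -> data position
            code[pos] = data.pop(0)
    for p in (1, 2, 4, 8):               # parity over positions with bit p set
        code[p] = sum(code[i] for i in range(1, n + 1) if i & p) % 2
    code[0] = sum(code) % 2              # overall parity for double detection
    return code

def hamming_secded_check(code):
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, len(code)) if i & p) % 2:
            syndrome |= p
    overall = sum(code) % 2
    if syndrome == 0 and overall == 0:
        return "ok"
    if overall == 1:                     # odd flip count -> single error;
        code[syndrome] ^= 1              # the syndrome is its position
        return f"corrected bit {syndrome}"
    return "double error detected"

word = hamming_secded_encode([1, 0, 1, 1, 0, 0, 1, 0])
word[5] ^= 1                             # inject a single-bit fault
print(hamming_secded_check(word))        # -> corrected bit 5
```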
Quantum-electrodynamic cascades in intense laser fields
NASA Astrophysics Data System (ADS)
Narozhny, N. B.; Fedotov, A. M.
2015-01-01
It is shown that in an intense laser field, along with cascades similar to extensive air showers, self-sustaining field-energized cascades can develop. For intensities of 10^24 W cm^-2 or higher, such cascades can even be initiated by a particle at rest in the focal area of a tightly focused laser pulse. The cascade appearance effect can considerably alter the progression of any process occurring in a high-intensity laser field. At very high intensities, the evolvement of such cascades can lead to the depletion of the laser field. This paper presents a design of an experiment to observe these two cascade types simultaneously in next-generation laser facilities.
NASA Astrophysics Data System (ADS)
Li, Jiqing; Yang, Xiong
2018-06-01
In this paper, to explore the efficiency and rationality of cascade combined generation, a cascade combined optimal model with maximum generating capacity is established, and the model is solved by the modified GA-POA method. It provides a useful reference for the joint development of cascade hydro-power stations in large river basins. Typical annual runoff data are selected to calculate the difference between the computed results under different representative years. The results show that the cascade operation of cascaded hydro-power stations can significantly increase the overall power generation of the cascade and ease the flood risk caused by flow concentration in the flood season.
Computation of flow in radial- and mixed-flow cascades by an inviscid-viscous interaction method
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Hansen, E. C.
1980-01-01
The use of inviscid-viscous interaction methods for the case of radial or mixed-flow cascade diffusers is discussed. A literature review of investigations considering cascade flow-field prediction by inviscid-viscous iterative computation is given. Cascade aerodynamics in the third blade row of a multiple-row radial cascade diffuser are specifically investigated.
Design of an off-axis visual display based on a free-form projection screen to realize stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong
2017-10-01
A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A method based on a point cloud is proposed for the design of the free-form surface, and the design of the point cloud is controlled by a program written in the macro-language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1 arcmin, which meets the resolution requirement of the human eye.
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, owing to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. AT strongly influences the NLW error in southern China because of the large AT error there. The total precipitable water has a weak influence on the Rn error even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.
Cascaded analysis of signal and noise propagation through a heterogeneous breast model.
Mainprize, James G; Yaffe, Martin J
2010-10-01
The detectability of lesions in radiographic images can be impaired by patterns caused by the surrounding anatomic structures. The presence of such patterns is often referred to as anatomic noise. Others have previously extended signal and noise propagation theory to include variable background structure as an additional noise term and used in simulations for analysis by human and ideal observers. Here, the analytic forms of the signal and noise transfer are derived to obtain an exact expression for any input random distribution and the "power law" filter used to generate the texture of the tissue distribution. A cascaded analysis of propagation through a heterogeneous model is derived for x-ray projection through simulated heterogeneous backgrounds. This is achieved by considering transmission through the breast as a correlated amplification point process. The analytic forms of the cascaded analysis were compared to monoenergetic Monte Carlo simulations of x-ray propagation through power law structured backgrounds. As expected, it was found that although the quantum noise power component scales linearly with the x-ray signal, the anatomic noise will scale with the square of the x-ray signal. There was a good agreement between results obtained using analytic expressions for the noise power and those from Monte Carlo simulations for different background textures, random input functions, and x-ray fluence. Analytic equations for the signal and noise properties of heterogeneous backgrounds were derived. These may be used in direct analysis or as a tool to validate simulations in evaluating detectability.
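The central scaling result, quantum noise power linear in the x-ray signal versus anatomic noise power quadratic in it, can be checked with a toy Monte Carlo: Poisson counting noise on top of a randomly varying transmission factor. The fixed 5% transmission spread below stands in for the power-law structured background; this is not the paper's cascaded-systems formalism, only a numerical illustration of the scaling it predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def projection_noise(fluence, n_trials=20000):
    # Anatomic "noise": a random transmission factor t with a fixed relative
    # spread, standing in for the structured background.
    t = 0.5 * (1 + 0.05 * rng.standard_normal(n_trials))
    # Quantum noise: Poisson counting statistics on the transmitted fluence.
    counts = rng.poisson(fluence * t)
    signal = fluence * 0.5
    var_total = counts.var()
    var_quantum = (fluence * t).mean()   # Poisson: variance equals mean
    return signal, var_quantum, var_total - var_quantum

for q in (1e3, 1e4, 1e5):
    s, vq, va = projection_noise(q)
    print(f"signal {s:9.0f}: quantum var/signal = {vq/s:.3f}, "
          f"anatomic var/signal^2 = {va/s**2:.2e}")
# The first ratio stays ~1 and the second stays ~constant, showing the
# linear vs. quadratic scaling with the x-ray signal.
```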
Honeywell Cascade Distiller System Performance Testing Interim Results
NASA Technical Reports Server (NTRS)
Callahan, Michael R.; Sargusingh, Miriam
2014-01-01
The ability to recover and purify water through physiochemical processes is crucial for realizing long-term human space missions, including both planetary habitation and space travel. Because of their robust nature, distillation systems have been actively pursued as one of the technologies for water recovery. The Cascade Distillation System (CDS) is a vacuum rotary distillation system with potential for greater reliability and lower energy costs than existing distillation systems. The CDS was previously under development by Honeywell and NASA. In 2009, an assessment was performed to collect data to support down-selection and development of a primary distillation technology for application in a lunar outpost water recovery system. Based on the results of this testing, an expert panel concluded that the CDS showed adequate development maturity (TRL 4), together with the best product water quality and competitive weight and power estimates, to warrant further development. The Advanced Exploration Systems (AES) Water Recovery Project (WRP) worked to address the weaknesses identified by the panel, namely bearing design and heat pump power efficiency. Testing at the NASA-JSC Advanced Exploration System Water Laboratory (AES Water Lab) using a prototype Cascade Distillation Subsystem (CDS) wastewater processor (Honeywell International, Torrance, Calif.) with test support equipment and a control system developed by Johnson Space Center was performed to evaluate performance of the system with the upgrades. The CDS was also challenged with ISS analog waste streams and a subset of those being considered for Exploration architectures. This paper details interim results of the AES WRP CDS performance testing.
NASA Technical Reports Server (NTRS)
Callahan, Michael R.; Sargusingh, Miriam J.
2015-01-01
The ability to recover and purify water through physiochemical processes is crucial for realizing long-term human space missions, including both planetary habitation and space travel. Because of their robust nature, distillation systems have been actively pursued as one of the technologies for water recovery. One such technology is the Cascade Distillation System (CDS), a multi-stage vacuum rotary distiller system designed to recover water in a microgravity environment. Its rotating cascading distiller operates similarly to the state-of-the-art (SOA) vapor compression distiller (VCD), but its control scheme and ancillary components are judged to be straightforward and simpler to implement in a successful design. Through the Advanced Exploration Systems (AES) Life Support Systems (LSS) Project, the NASA Johnson Space Center (JSC), in collaboration with Honeywell International, is developing a second-generation flight-forward prototype (CDS 2.0). The key objectives of the CDS 2.0 design task are to provide a flight-forward ground prototype that demonstrates improvements over the SOA system in the areas of increased reliability and robustness and reduced mass, power, and volume. It will also incorporate exploration-class automation. The products of this task are a preliminary flight system design and a high-fidelity prototype of an exploration-class CDS. These products will inform the design and development of the third-generation CDS, which is targeted for an on-orbit DTO. This paper details the preliminary design of the CDS 2.0.
Cascade aeroacoustics including steady loading effects
NASA Astrophysics Data System (ADS)
Chiang, Hsiao-Wei D.; Fleeter, Sanford
A mathematical model is developed to analyze the effects of airfoil and cascade geometry, steady aerodynamic loading, and the characteristics of the unsteady flow field on the discrete frequency noise generation of a blade row in an incompressible flow. The unsteady lift which generates the noise is predicted with a complex first-order cascade convected gust analysis. This model was then applied to the Gostelow airfoil cascade and variations, demonstrating that steady loading, cascade solidity, and the gust direction are significant. Also, even at zero incidence, the classical flat plate cascade predictions are unacceptable.
A whole stand basal area projection model for Appalachian hardwoods
John R. Brooks; Lichun Jiang; Matthew Perkowski; Benktesh Sharma
2008-01-01
Two whole-stand basal area projection models were developed for Appalachian hardwood stands. The proposed equations are an algebraic difference projection form based on existing basal area and the change in age, trees per acre, and/or dominant height. Average equation error was less than 10 square feet per acre and residuals exhibited no irregular trends.
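The paper's fitted equations are not reproduced in the abstract, so the sketch below shows the general shape of an algebraic difference projection: a base model whose site-specific intercept cancels when differenced, leaving a projection that needs only current basal area and the change in age. The Schumacher-type form and its coefficient are hypothetical stand-ins; the actual models also use trees per acre and dominant height.

```python
import math

def project_basal_area(ba1, age1, age2, b1=-25.0):
    # Algebraic difference form of the Schumacher base model
    # ln(BA) = b0 + b1/A: differencing cancels the site-specific b0, so
    # projection needs only the current basal area and the age change.
    # b1 here is a hypothetical coefficient for illustration.
    return ba1 * math.exp(b1 * (1.0 / age2 - 1.0 / age1))

# Project a stand from age 60 to 70, starting at 95 sq ft/acre.
print(f"{project_basal_area(95.0, 60, 70):.1f} sq ft/acre")
```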
The Aerospace Safety Advisory panel's report to Doctor Robert A. Frosch, 1977
NASA Technical Reports Server (NTRS)
1978-01-01
Risks attendant to NASA's operations are identified for the following: mission operations; orbiter readiness for orbital flight tests; space shuttle main engine; avionics; thermal protection system; hazard assessment; human error. Past and future projects are assessed.
Energy flow along the medium-induced parton cascade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaizot, J.-P., E-mail: jean-paul.blaizot@cea.fr; Mehtar-Tani, Y., E-mail: ymehtar@uw.edu
2016-05-15
We discuss the dynamics of parton cascades that develop in dense QCD matter, and contrast their properties with those of similar cascades of gluon radiation in vacuum. We argue that such cascades belong to two distinct classes that are characterized respectively by an increasing or a constant (or decreasing) branching rate along the cascade. In the former class, of which the BDMPS, medium-induced, cascade constitutes a typical example, it takes a finite time to transport a finite amount of energy to very soft quanta, while this time is essentially infinite in the latter case, to which the DGLAP cascade belongs. The medium-induced cascade is accompanied by a constant flow of energy towards arbitrarily soft modes, leading eventually to the accumulation of the initial energy of the leading particle at zero energy. It also exhibits scaling properties akin to wave turbulence. These properties do not show up in the cascade that develops in vacuum. There, the energy accumulates in the spectrum at smaller and smaller energy as the cascade develops, but the energy never flows all the way down to zero energy. Our analysis suggests that the way the energy is shared among the offsprings of a splitting gluon has little impact on the qualitative properties of the cascades, provided the kernel that governs the splittings is not too singular.
Environmental solid particle effects on compressor cascade performance
NASA Technical Reports Server (NTRS)
Tabakoff, W.; Balan, C.
1982-01-01
The effect of suspended solid particles on the performance of the compressor cascade was investigated experimentally in a specially built cascade tunnel, using quartz sand particles. The cascades were made of NACA 65(10)10 airfoils. Three cascades were tested: one accelerating cascade and two diffusing cascades. The theoretical analysis assumes inviscid and incompressible two-dimensional flow. The momentum exchange between the fluid and the particle is accounted for by the interphase force terms in the fluid momentum equation. The modified fluid phase momentum equations and the continuity equation are reduced to the conventional stream function-vorticity formulation. The method treats the fluid phase in an Eulerian frame and the particle phase in a Lagrangian frame. The experimental results indicate a small increase in the blade surface static pressures, while the theoretical results indicate a small decrease. The theoretical analysis also predicts the loss in total pressure associated with the particulate flow through the cascade.
Integrated Broadband Quantum Cascade Laser
NASA Technical Reports Server (NTRS)
Mansour, Kamjou (Inventor); Soibel, Alexander (Inventor)
2016-01-01
A broadband, integrated quantum cascade laser is disclosed, comprising ridge waveguide quantum cascade lasers formed by applying standard semiconductor process techniques to a monolithic structure of alternating layers of claddings and active region layers. The resulting ridge waveguide quantum cascade lasers may be individually controlled by independent voltage potentials, resulting in control of the overall spectrum of the integrated quantum cascade laser source. Other embodiments are described and claimed.
The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.
ERIC Educational Resources Information Center
Dunivant, Noel
The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…
ERIC Educational Resources Information Center
Freund, Barbara; Petrakos, Davithoula
2008-01-01
We developed driving restrictions that are linked to specific driving errors, allowing cognitively impaired individuals to continue to independently meet mobility needs while minimizing risk to themselves and others. The purpose of this project was to evaluate the efficacy and duration expectancy of these restrictions in promoting safe continued…
DOT National Transportation Integrated Search
2014-02-01
The purpose of this memorandum is to provide recommended Total System Error (TSE) models for aircraft using RNAV (GPS) guidance when analyzing the wake encounter risk of proposed simultaneous dependent (paired) approach operations to Closel...
Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd
2018-01-01
Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile species of birds that are at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called “error due to sampling”. Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km2) and the sampling type (random, systematic or stratified) and was greatest on 90 km2 plots. However, use of random sampling resulted in lowest error due to sampling within intermediate sized plots. In addition sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations to identify specific influences on error and also potential improvements to sampling that consider the context-specific manner that point counts are laid out on the landscape.
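A stripped-down version of the comparison the authors simulate: overlay a movement track on a footprint, measure the fraction of fixes falling in the survey plots versus the whole footprint, and take the area-scaled difference as the error due to sampling. The random-walk track, box geometry, and exact scaling below are assumptions for illustration; the paper's metric may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(7)

def error_due_to_sampling(track, footprint, plots):
    # track: (n, 2) array of telemetry fixes; footprint and plots are
    # axis-aligned boxes (xmin, xmax, ymin, ymax).
    def frac_inside(box):
        x, y = track[:, 0], track[:, 1]
        return np.mean((x >= box[0]) & (x <= box[1]) &
                       (y >= box[2]) & (y <= box[3]))
    t_footprint = frac_inside(footprint)
    t_plots = sum(frac_inside(p) for p in plots)
    area_plots = sum((p[1] - p[0]) * (p[3] - p[2]) for p in plots)
    area_fp = (footprint[1] - footprint[0]) * (footprint[3] - footprint[2])
    # Scale the plot-based use estimate to the footprint and compare.
    return t_plots / (area_plots / area_fp) - t_footprint

# Hypothetical random-walk "eagle" track over a footprint with two plots.
track = np.cumsum(rng.normal(0, 0.05, (5000, 2)), axis=0)
footprint = (-5, 5, -4.5, 4.5)
plots = [(-4, -2, -3, -1), (1, 3, 0, 2)]
print(f"error due to sampling: "
      f"{error_due_to_sampling(track, footprint, plots):+.3f}")
```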
Calculating tumor trajectory and dose-of-the-day using cone-beam CT projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Bernard L., E-mail: bernard.jones@ucdenver.edu; Westerly, David; Miften, Moyed
2015-02-15
Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. The authors developed and validated a method which uses these projections to determine the trajectory of and dose to highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, the trajectory of which mimicked a lung tumor with high amplitude (up to 2.5 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each CBCT projection, and a Gaussian probability density function for the absolute BB position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two modifications of the trajectory reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation (Phase), and second, using the Monte Carlo (MC) method to sample the estimated Gaussian tumor position distribution. The accuracies of the proposed methods were evaluated by comparing the known and calculated BB trajectories in phantom-simulated clinical scenarios using abdominal tumor volumes. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square trajectory errors averaged 3.8% ± 1.1% of the marker amplitude. Dosimetric calculations using Phase methods were more accurate, with mean absolute error less than 0.5%, and with error less than 1% in the highest-noise trajectory. MC-based trajectories prevent the overestimation of dose, but when viewed in an absolute sense, add a small amount of dosimetric error (<0.1%). Conclusions: Marker trajectory and target dose-of-the-day were accurately calculated using CBCT projections. This technique provides a method to evaluate highly mobile tumors using ordinary CBCT data, and could facilitate better strategies to mitigate or compensate for motion during stereotactic body radiotherapy.
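The template-matching step can be illustrated with a standard fast normalized cross-correlation; the paper's Gaussian trajectory model and its Phase/MC refinements are not reproduced here. The synthetic projection, blob template, and noise level below are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def ncc_locate(projection, template):
    # Fast normalized cross-correlation (Lewis-style): returns the (row, col)
    # of the best match between the projection and the marker template.
    t = template - template.mean()
    num = fftconvolve(projection, t[::-1, ::-1], mode="same")
    win = np.ones_like(template)
    local_sum = fftconvolve(projection, win, mode="same")
    local_sq = fftconvolve(projection**2, win, mode="same")
    var = np.maximum(local_sq - local_sum**2 / template.size, 1e-12)
    ncc = num / np.sqrt(var * (t**2).sum())
    return np.unravel_index(np.argmax(ncc), ncc.shape)

# Synthetic projection: a bright BB-like blob at (40, 60) plus noise.
yy, xx = np.mgrid[0:128, 0:128]
proj = np.exp(-((yy - 40)**2 + (xx - 60)**2) / 8.0)
proj += 0.05 * np.random.default_rng(0).standard_normal(proj.shape)
ty, tx = np.mgrid[0:15, 0:15]
tpl = np.exp(-((ty - 7)**2 + (tx - 7)**2) / 8.0)
print(ncc_locate(proj, tpl))   # expected near (40, 60)
```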
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
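A toy illustration of the offline/online decoupling in component (iii), using a small affine-parametric linear system A(mu) = A0 + mu*A1 in place of a discretized PDE: the reduced basis and the projected operators are built once offline, and each online query solves only an N-by-N system. The a posteriori error bounds of component (ii) are not sketched here.

```python
import numpy as np

# Offline: snapshots u(mu_i) of a parametrized system A(mu) u = f, with
# affine dependence A(mu) = A0 + mu*A1 (a stand-in for the paper's PDE).
n = 200
A0 = (np.diag(2 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
      + np.diag(-np.ones(n - 1), -1))
A1 = np.eye(n)
f = np.ones(n)
snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in (0.1, 1.0, 10.0)]
W, _ = np.linalg.qr(np.array(snapshots).T)   # orthonormal reduced basis W_N

# Offline precomputation: project each affine term once.
A0r, A1r, fr = W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f

def online_output(mu):
    # Online: the cost depends only on N (= 3 here), not on n.
    uN = np.linalg.solve(A0r + mu * A1r, fr)
    return fr @ uN                           # linear-functional output

for mu in (0.5, 5.0):
    exact = f @ np.linalg.solve(A0 + mu * A1, f)
    print(f"mu={mu}: RB output {online_output(mu):.4f}, exact {exact:.4f}")
```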
Huang, Hao; Milione, Giovanni; Lavery, Martin P. J.; Xie, Guodong; Ren, Yongxiong; Cao, Yinwen; Ahmed, Nisar; An Nguyen, Thien; Nolan, Daniel A.; Li, Ming-Jun; Tur, Moshe; Alfano, Robert R.; Willner, Alan E.
2015-01-01
Mode division multiplexing (MDM), using a multimode optical fiber's N spatial modes as data channels to transmit N independent data streams, has received interest as it can potentially increase optical fiber data transmission capacity N-times with respect to single mode optical fibers. Two challenges of MDM are (1) designing mode (de)multiplexers with high mode selectivity and (2) designing mode (de)multiplexers without cascaded beam splitting's 1/N insertion loss. One spatial mode basis that has received interest is that of orbital angular momentum (OAM) modes. In this paper, using a device referred to as an OAM mode sorter, we show that OAM modes can be (de)multiplexed over a multimode optical fiber with higher than -15 dB mode selectivity and without cascaded beam splitting's 1/N insertion loss. As a proof of concept, the OAM modes of the LP11 mode group (OAM-1,0 and OAM+1,0), each carrying 20-Gbit/s polarization division multiplexed and quadrature phase shift keyed data streams, are transmitted 5 km over a graded-index, few-mode optical fibre. Channel crosstalk is mitigated using 4 × 4 multiple-input-multiple-output digital-signal-processing with <1.5 dB power penalties at a bit-error-rate of 2 × 10^-3. PMID:26450398
Chang, Yuan-Pin; Merer, Anthony J; Chang, Hsun-Hui; Jhang, Li-Ji; Chao, Wen; Lin, Jim Jr-Min
2017-06-28
The region 1273-1290 cm^-1 of the ν4 fundamental of the simplest Criegee intermediate, CH2OO, has been measured using a quantum cascade laser transient absorption spectrometer, which offers greater sensitivity and spectral resolution (<0.004 cm^-1) than previous works based on thermal light sources. Gas phase CH2OO was generated from the reaction of CH2I + O2 at 298 K and 4 Torr. The analysis of the absorption spectrum has provided precise values for the vibrational frequency and the rotational constants, with fitting errors of a few MHz. The determined ratios of the rotational constants, A'/A″ = 0.9986, B'/B″ = 0.9974, and C'/C″ = 1.0010, and the relative intensities of the a- and b-type transitions, 90:10, are in good agreement with literature values from a theoretical calculation using the MULTIMODE approach, based on a high-level ab initio potential energy surface. The low-K (= Ka) lines can be fitted extremely well, but rotational perturbations by other vibrational modes disrupt the structure for K = 4 and K ≥ 6. Not only the spectral resolution but also the detection sensitivity of CH2OO IR transitions has been greatly improved in this work, allowing for unambiguous monitoring of CH2OO in kinetic studies at low concentrations.
NASA Astrophysics Data System (ADS)
Ullah, Rahat; Liu, Bo; Zhang, Qi; Saad Khan, Muhammad; Ahmad, Ibrar; Ali, Amjad; Khan, Razaullah; Tian, Qinghua; Yan, Cheng; Xin, Xiangjun
2016-09-01
An architecture for flattened and broad-spectrum multicarriers is presented by generating 60 comb lines from a pulsed laser driven by a user-defined bit stream in cascade with three modulators. The proposed scheme is a cost-effective architecture for the optical line terminal (OLT) in a wavelength division multiplexed passive optical network (WDM-PON) system. The optical frequency comb generator consists of a pulsed laser in cascade with a phase modulator and two Mach-Zehnder modulators driven by an RF source, incorporating no phase shifter, filter, or electrical amplifier. The comb generator is deployed in a simulated WDM-PON OLT supporting a 1.2-Tbps data rate. With 10-GHz frequency spacing, each frequency tone carries a 20-Gbps data signal based on differential quadrature phase shift keying (DQPSK) in downlink transmission. We adopt DQPSK-based modulation in the downlink because it supports 2 bits per symbol, which increases the data rate of the WDM-PON system. Furthermore, the DQPSK format is tolerant to different types of dispersion and has high spectral efficiency with a less complex configuration. Part of the downlink power is utilized in the uplink transmission, which is based on intensity-modulated on-off keying. Minimal power penalties were observed, with excellent eye diagrams and good transmission performance at the specified bit error rates.
Land-use threats and protected areas: a scenario-based, landscape level approach
Wilson, Tamara S.; Sleeter, Benjamin M.; Sleeter, Rachel R.; Soulard, Christopher E.
2014-01-01
Anthropogenic land use will likely present a greater challenge to biodiversity than climate change this century in the Pacific Northwest, USA. Even if species are equipped with the adaptive capacity to migrate in the face of a changing climate, they will likely encounter a human-dominated landscape as a major dispersal obstacle. Our goal was to identify, at the ecoregion-level, protected areas in close proximity to lands with a higher likelihood of future land-use conversion. Using a state-and-transition simulation model, we modeled spatially explicit (1 km2) land use from 2000 to 2100 under seven alternative land-use and emission scenarios for ecoregions in the Pacific Northwest. We analyzed scenario-based land-use conversion threats from logging, agriculture, and development near existing protected areas. A conversion threat index (CTI) was created to identify ecoregions with highest projected land-use conversion potential within closest proximity to existing protected areas. Our analysis indicated nearly 22% of land area in the Coast Range, over 16% of land area in the Puget Lowland, and nearly 11% of the Cascades had very high CTI values. Broader regional-scale land-use change is projected to impact nearly 40% of the Coast Range, 30% of the Puget Lowland, and 24% of the Cascades (i.e., two highest CTI classes). A landscape level, scenario-based approach to modeling future land use helps identify ecoregions with existing protected areas at greater risk from regional land-use threats and can help prioritize future conservation efforts.
How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?
Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C
2016-10-01
The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.
Cooper, Lauren A; Stringer, Anne M; Wade, Joseph T
2018-04-17
In clustered regularly interspaced short palindromic repeat (CRISPR)-Cas (CRISPR-associated) immunity systems, short CRISPR RNAs (crRNAs) are bound by Cas proteins, and these complexes target invading nucleic acid molecules for degradation in a process known as interference. In type I CRISPR-Cas systems, the Cas protein complex that binds DNA is known as Cascade. Association of Cascade with target DNA can also lead to acquisition of new immunity elements in a process known as primed adaptation. Here, we assess the specificity determinants for Cascade-DNA interaction, interference, and primed adaptation in vivo for the type I-E system of Escherichia coli. Remarkably, as few as 5 bp of crRNA-DNA are sufficient for association of Cascade with a DNA target. Consequently, a single crRNA promotes Cascade association with numerous off-target sites, and the endogenous E. coli crRNAs direct Cascade binding to >100 chromosomal sites. In contrast to the low specificity of Cascade-DNA interactions, >18 bp are required for both interference and primed adaptation. Hence, Cascade binding to suboptimal, off-target sites is inert. Our data support a model in which the initial Cascade association with DNA targets requires only limited sequence complementarity at the crRNA 5' end, whereas recruitment and/or activation of the Cas3 nuclease, a prerequisite for interference and primed adaptation, requires extensive base pairing. IMPORTANCE: Many bacterial and archaeal species encode CRISPR-Cas immunity systems that protect against invasion by foreign DNA. In the Escherichia coli CRISPR-Cas system, a protein complex, Cascade, binds 61-nucleotide (nt) CRISPR RNAs (crRNAs). The Cascade complex is directed to invading DNA molecules through base pairing between the crRNA and target DNA. This leads to recruitment of the Cas3 nuclease, which destroys the invading DNA molecule and promotes acquisition of new immunity elements. We made the first in vivo measurements of Cascade binding to DNA targets. Thus, we show that Cascade binding to DNA is highly promiscuous; endogenous E. coli crRNAs can direct Cascade binding to >100 chromosomal locations. In contrast, we show that targeted degradation and acquisition of new immunity elements require highly specific association of Cascade with DNA, limiting CRISPR-Cas function to the appropriate targets. Copyright © 2018 Cooper et al.
Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections
NASA Astrophysics Data System (ADS)
Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.
2017-02-01
Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
NASA Astrophysics Data System (ADS)
Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon
2018-03-01
All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface project show a summertime surface air temperature (T2m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project simulations from the Coupled Model Intercomparison Project Phase 5, with long-term observations mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budget and large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm season bias is characterized by an overestimation of T2m and underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, which is due to a combination of insufficient cloud reflection and clear-sky shortwave absorption by water vapor and an underestimation in surface albedo. The bias in cloud is shown to contribute most to the radiation bias. The surface layer soil moisture impacts T2m through its control on evaporative fraction. The error in evaporative fraction is another important contributor to T2m. Similar sources of error are found in hindcasts from other Clouds Above the United States and Errors at the Surface studies. In Atmospheric Model Intercomparison Project simulations, biases in meridional wind velocity associated with the low-level jet and the 500 hPa vertical velocity may also relate to the T2m bias through their control on the surface energy and water budget.
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. As soon as the adaptive filter reaches the steady state, no update is performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
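To make the two procedures concrete, here is a minimal NumPy sketch of one selective-update affine projection iteration. The MSE threshold and the input-vector selection rule are illustrative stand-ins for the paper's criteria, which the abstract does not specify.

```python
# One APA iteration over the K most recent input vectors, with a crude
# state-decision (skip updates in steady state) and input-vector selection.
import numpy as np

def apa_selective_update(w, X, d, mu=0.5, delta=1e-4, steady_tol=1e-6):
    """w : (L,) filter coefficients; X : (K, L) recent input vectors;
    d : (K,) desired samples aligned with the rows of X."""
    e = d - X @ w                       # a-priori errors of the candidate vectors
    mse = np.mean(e ** 2)
    if mse < steady_tol:                # state decision: steady state -> no update
        return w, mse
    # input-vector selection: keep only vectors still carrying information,
    # approximated here by the largest-error rows (illustrative rule).
    keep = np.abs(e) >= 0.1 * np.abs(e).max()
    Xs, es = X[keep], e[keep]
    # standard regularized affine projection update on the selected subset
    G = Xs @ Xs.T + delta * np.eye(Xs.shape[0])
    w = w + mu * Xs.T @ np.linalg.solve(G, es)
    return w, mse
```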
NASA Technical Reports Server (NTRS)
Ketchum, E.
1988-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) will be responsible for performing ground attitude determination for Gamma Ray Observatory (GRO) support. The study reported in this paper provides the FDD and the GRO project with ground attitude determination error information and illustrates several uses of the Generalized Calibration System (GCS). GCS, an institutional software tool in the FDD, automates the computation of the expected attitude determination uncertainty that a spacecraft will encounter during its mission. The GRO project is particularly interested in the uncertainty in the attitude determination using Sun sensors and a magnetometer when both star trackers are inoperable. In order to examine the expected attitude errors for GRO, a systematic approach was developed including various parametric studies. The approach identifies pertinent parameters and combines them to form a matrix of test runs in GCS. This matrix formed the basis for this study.
Error Characterization of Altimetry Measurements at Climate Scales
NASA Astrophysics Data System (ADS)
Ablain, Michael; Larnicol, Gilles; Faugere, Yannice; Cazenave, Anny; Meyssignac, Benoit; Picot, Nicolas; Benveniste, Jerome
2013-09-01
Thanks to studies performed in the framework of the SALP project (supported by CNES) since the TOPEX era, and more recently in the framework of the Sea-Level Climate Change Initiative project (supported by ESA), strong improvements have been made in the estimation of the global and regional mean sea level over the whole altimeter period for all altimetric missions. These efforts have enabled a better characterization of altimeter measurement errors at climate scales, which is presented in this paper. These errors are compared with user requirements in order to determine whether the scientific goals of the altimeter missions are being met. The main message of this paper is the importance of strengthening the link between the altimetry and climate communities in order to improve or refine user requirements, to better specify future altimeter systems for climate applications, and to reprocess older missions beyond their original specifications.
A multiscale filter for noise reduction of low-dose cone beam projections.
Yao, Weiguang; Farr, Jonathan B
2015-08-21
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector element. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object in the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/(2σ_f^2)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression for σ_f, the scale of the filter, by minimizing the local noise-to-signal ratio, and we analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
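The scale-selection idea can be sketched numerically with a small bank of fixed-scale Gaussian filters, choosing the scale per pixel from an estimate of σ_f² proportional to the fluence and damped by a local structure term. The constant k, the scale bank and the structure estimate below are illustrative assumptions, not the paper's derived expressions.

```python
# Spatially varying Gaussian smoothing approximated by a filter bank:
# per pixel, pick the bank image whose scale is closest to the target sigma.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_denoise(proj, k=0.5):
    smooth = gaussian_filter(proj, 3.0)          # crude noiseless-fluence estimate
    structure = np.abs(proj - smooth) + 1e-6     # stand-in for the linear fitting error
    sigma2 = k * smooth / structure              # larger scale where flat and bright
    sigmas = np.array([0.5, 1.0, 2.0, 4.0])      # fixed-scale filter bank
    bank = np.stack([gaussian_filter(proj, s) for s in sigmas])
    idx = np.abs(np.sqrt(sigma2)[None] - sigmas[:, None, None]).argmin(axis=0)
    return np.take_along_axis(bank, idx[None], axis=0)[0]
```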
Comparison of forward- and back-projection in vivo EPID dosimetry for VMAT treatment of the prostate
NASA Astrophysics Data System (ADS)
Bedford, James L.; Hanson, Ian M.; Hansen, Vibeke N.
2018-01-01
In the forward-projection method of portal dosimetry for volumetric modulated arc therapy (VMAT), the integrated signal at the electronic portal imaging device (EPID) is predicted at the time of treatment planning, against which the measured integrated image is compared. In the back-projection method, the measured signal at each gantry angle is back-projected through the patient CT scan to give a measure of total dose to the patient. This study aims to investigate the practical agreement between the two types of EPID dosimetry for prostate radiotherapy. The AutoBeam treatment planning system produced VMAT plans together with corresponding predicted portal images, and a total of 46 sets of gantry-resolved portal images were acquired in 13 patients using an iViewGT portal imager. For the forward-projection method, each acquisition of gantry-resolved images was combined into a single integrated image and compared with the predicted image. For the back-projection method, iViewDose was used to calculate the dose distribution in the patient for comparison with the planned dose. A gamma index for 3% and 3 mm was used for both methods. The results were investigated by delivering the same plans to a phantom and repeating some of the deliveries with deliberately introduced errors. The strongest agreement between forward- and back-projection methods is seen in the isocentric intensity/dose difference, with moderate agreement in the mean gamma. The strongest correlation is observed within a given patient, with less correlation between patients, the latter representing the accuracy of prediction of the two methods. The error study shows that each of the two methods has its own distinct sensitivity to errors, but that overall the response is similar. The forward- and back-projection EPID dosimetry methods show moderate agreement in this series of prostate VMAT patients, indicating that both methods can contribute to the verification of dose delivered to the patient.
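For readers unfamiliar with the 3%/3 mm criterion used by both methods, the following is a self-contained 1D sketch of the gamma index with global normalization; the grid spacing and brute-force search are illustrative choices.

```python
# 1D gamma index: for each reference point, search all evaluated points for
# the smallest combined (distance, dose-difference) deviation.
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, dx_mm=1.0, dta_mm=3.0, dd_frac=0.03):
    """Return the gamma value at each reference point (global normalization)."""
    x = np.arange(len(dose_ref)) * dx_mm
    norm = dd_frac * dose_ref.max()
    gammas = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((dose_eval - di) / norm) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas  # pass rate = fraction of points with gamma <= 1
```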
Sciacovelli, Laura; O'Kane, Maurice; Skaik, Younis Abdelwahab; Caciagli, Patrizio; Pellegrini, Cristina; Da Rin, Giorgio; Ivanov, Agnes; Ghys, Timothy; Plebani, Mario
2011-05-01
The adoption of Quality Indicators (QIs) has prompted the development of tools to measure and evaluate the quality and effectiveness of laboratory testing, first in the hospital setting and subsequently in ambulatory and other care settings. While Laboratory Medicine has an important role in the delivery of high-quality care, no consensus exists as yet on the use of QIs covering all steps of the laboratory total testing process (TTP), and further research in this area is required. In order to reduce errors in laboratory testing, the IFCC Working Group on "Laboratory Errors and Patient Safety" (WG-LEPS) developed a series of Quality Indicators specifically designed for clinical laboratories. In the first phase of the project, specific QIs for key processes of the TTP were identified, covering all the pre-, intra- and post-analytical steps. The overall aim of the project is to create a common reporting system for clinical laboratories based on standardized data collection, and to define the state of the art and Quality Specifications (QSs) for each QI independent of: a) the size of the organization and the type of activities; b) the complexity of the processes undertaken; and c) the different degrees of knowledge and ability of the staff. The aim of the present paper is to report the results collected from participating laboratories from February 2008 to December 2009 and to identify preliminary QSs. The results demonstrate that a Model of Quality Indicators managed as an External Quality Assurance Program can serve as a tool to monitor and control the pre-, intra- and post-analytical activities. It might also allow clinical laboratories to identify risks that lead to errors resulting in patient harm, to identify and design practices that eliminate medical errors, to share information and educate clinical and laboratory teams on practices that reduce or prevent errors, and to monitor and evaluate improvement activities.
Zeng, Bowei; Meng, Fanle; Ding, Hui; Wang, Guangzhi
2017-08-01
Using existing stereoelectroencephalography (SEEG) electrode implantation surgical robot systems, it is difficult to intuitively validate registration accuracy and to display to the surgeon the electrode entry points (EPs) and the anatomical structure around the electrode trajectories in the patient space. This paper proposes a prototype system that realizes video see-through augmented reality (VAR) and spatial augmented reality (SAR) for SEEG implantation. The system helps the surgeon quickly and intuitively confirm the registration accuracy, locate EPs and visualize the internal anatomical structure in the image space and patient space. We designed and developed a projector-camera system (PCS) attached to the distal flange of a robot arm. First, system calibration is performed. Second, the PCS is used to obtain point clouds of the surface of the patient's head, which are utilized for patient-to-image registration. Finally, VAR is produced by merging the real-time video of the patient and the preoperative three-dimensional (3D) operational planning model. In addition, SAR is implemented by projecting the planned electrode trajectories and local anatomical structure onto the patient's scalp. The registration error and the errors at the electrode EPs and target points (TPs) are evaluated on a phantom. The fiducial registration error is [Formula: see text] mm (max 1.22 mm), and the target registration error is [Formula: see text] mm (max 1.18 mm). The projection overlay error is [Formula: see text] mm, and the TP error after the pre-warped projection is [Formula: see text] mm. The TP error caused by a surgeon's viewpoint deviation is also evaluated. The presented system can help surgeons quickly verify registration accuracy during SEEG procedures and can provide accurate EP locations and internal structural information to the surgeon. With more intuitive surgical information, the surgeon may have more confidence and be able to perform surgeries with better outcomes.
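The fiducial and target registration errors reported above are standard point-based measures. A minimal sketch, assuming the classical Kabsch/Procrustes solution for the rigid fit (the abstract does not state the authors' registration algorithm):

```python
# Rigid point-based registration and the two error measures: FRE on the
# fiducials used for the fit, TRE on independent held-out target points.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def rms_error(R, t, pts, pts_true):
    return np.sqrt(np.mean(np.sum((pts @ R.T + t - pts_true) ** 2, axis=1)))

# FRE: rms_error on the fiducials used for the fit.
# TRE: rms_error on targets that were not used for the fit.
```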
NASA Astrophysics Data System (ADS)
Gang, Jin; Yiqi, Zhuang; Yue, Yin; Miao, Cui
2015-03-01
A novel digitally controlled automatic gain control (AGC) loop for a global navigation satellite system (GNSS) receiver chip is presented. The entire AGC loop contains a programmable gain amplifier (PGA), an AGC circuit and an analog-to-digital converter (ADC); it is implemented in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process and measured. A binary-weighted approach is proposed in the PGA to achieve wide dB-linear gain control with small gain error. With binary-weighted cascaded amplifiers for coarse gain control and a parallel binary-weighted trans-conductance amplifier array for fine gain control, the PGA provides a 64 dB dynamic range from -4 to 60 dB in 1.14 dB gain steps with a gain error of less than 0.15 dB. Based on the Gaussian noise statistics of the GNSS signal, a digital AGC circuit is also proposed with low area and fast settling. The feed-backward AGC loop occupies an area of 0.27 mm2 and settles in less than 165 μs while consuming an average current of 1.92 mA from 1.8 V.
Seed, Thomas M; Xiao, Shiyun; Manley, Nancy; Nikolich-Zugich, Janko; Pugh, Jason; Van den Brink, Marcel; Hirabayashi, Yoko; Yasutomo, Koji; Iwama, Atsushi; Koyasu, Shigeo; Shterev, Ivo; Sempowski, Gregory; Macchiarini, Francesca; Nakachi, Kei; Kunugi, Keith C; Hammer, Clifford G; Dewerd, Lawrence A
2016-01-01
An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermoluminescence dosimeters (TLD) embedded midline into acrylic phantoms. The effort demonstrated that the majority (4/7) of the laboratories were able to deliver sufficiently accurate exposures, with maximum dosing errors of ≤5%. Comparable rates of 'dosimetric compliance' were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between 'measured' and 'target' doses, with errors falling largely between 0 and 20%. Outliers were most notable for OSL-based tests, while multiple tests by 'non-compliant' laboratories using orthovoltage X-rays contributed heavily to the wide variation in dosing errors. For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols, ensuring that dosing errors were minimized.
Automatic diagnostic system for measuring ocular refractive errors
NASA Astrophysics Data System (ADS)
Ventura, Liliane; Chiaradia, Caio; de Sousa, Sidney J. F.; de Castro, Jarbas C.
1996-05-01
Ocular refractive errors (myopia, hyperopia and astigmatism) are automatically and objectively determined by projecting a light target onto the retina using an infra-red (850 nm) diode laser. The light vergence which emerges from the eye (light scattered from the retina) is evaluated in order to determine the corresponding ametropia. The system basically consists of projecting a target (ring) onto the retina and analyzing the scattered light with a CCD camera. The light scattered by the eye is divided into six portions (3 meridians) by a mask and a set of six prisms. The distance between the two images provided by each meridian leads to the refractive error of that meridian. Hence, it is possible to determine the refractive error at three different meridians, which gives the exact solution for the eye's refractive error (spherical and cylindrical components and the axis of the astigmatism). The computational basis used for the image analysis is a heuristic search, which provides satisfactory calculation times for our purposes. The peculiar shape of the target, a ring, provides a wider range of measurement and also spares parts of the retina from unnecessary laser irradiation. Measurements were done in artificial and in vivo eyes (using cycloplegics) and the results were in good agreement with the retinoscopic measurements.
Sampling Technique for Robust Odorant Detection Based on MIT RealNose Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2012-01-01
This technique enhances the detection capability of the autonomous RealNose system from MIT to detect odorants and their concentrations in noisy and transient environments. The low-cost, portable system with low power consumption will operate at high speed and is suited for unmanned and remotely operated long-life applications. A deterministic mathematical model was developed to detect odorants and calculate their concentrations in noisy environments. Real data from MIT's NanoNose was examined, from which a signal conditioning technique was proposed to enable robust odorant detection for the RealNose system. Its sensitivity can reach sub-part-per-billion (sub-ppb) levels. A Space Invariant Independent Component Analysis (SPICA) algorithm was developed to deal with non-linear mixing in the over-complete case, and it is used as a preprocessing step to recover the original odorant sources for detection. This approach, combined with the Cascade Error Projection (CEP) neural network algorithm, was used to perform odorant identification. Signal conditioning is used to identify potential processing windows to enable robust detection for autonomous systems. So far, the software has been developed and evaluated with the current data sets provided by the MIT team. However, continuous data streams are made available in which even the occurrence of a new odorant is unannounced and needs to be noticed by the system autonomously before its unambiguous detection. The challenge for the software is to be able to separate the potential valid signal of the odorant from the noisy transition region when the odorant is first introduced.
Sciacovelli, Laura; Lippi, Giuseppe; Sumarac, Zorica; West, Jamie; Garcia Del Pino Castro, Isabel; Furtado Vieira, Keila; Ivanov, Agnes; Plebani, Mario
2017-03-01
The knowledge of error rates is essential in all clinical laboratories as it enables them to accurately identify their risk level, compare it with those of other laboratories in order to evaluate their performance in relation to the state of the art (i.e. benchmarking), and define priorities for improvement actions. Although no activity is risk free, it is widely accepted that the risk of error is minimized by the use of Quality Indicators (QIs) managed as part of a laboratory improvement strategy and proven to be suitable monitoring and improvement tools. The purpose of QIs is to keep the error risk at a level that minimizes the likelihood of harm to patients. However, identifying a suitable state of the art is challenging, because it calls for knowledge of error rates measured in a variety of laboratories throughout the world that differ in their organization and management, their context, and the population they serve. Moreover, it also depends on the choice of the events to keep under control and the individual measurement procedure. Although many laboratory professionals believe that the systematic use of QIs in Laboratory Medicine may be effective in decreasing errors occurring throughout the total testing process (TTP), improving patient safety and satisfying the requirements of the International Standard ISO 15189, they find it difficult to maintain standardized and systematic data collection and to promote a continued high level of interest, commitment and dedication in the entire staff. Although many laboratories worldwide express a willingness to participate in the Model of QIs (MQI) project of the IFCC Working Group "Laboratory Errors and Patient Safety", few systematically enter their own results and/or use a number of QIs designed to cover all phases of the TTP. Many laboratories justify their inadequate participation in QI data collection by claiming that the number of QIs included in the MQI is excessive. However, an analysis of the results suggests that some QIs need to be split into further measurements. As the International Standard on laboratory accreditation and the approved guidelines do not specify the appropriate number of QIs to be used in the laboratory, and the MQI project does not compel laboratories to use all the QIs proposed, it appears appropriate to include in the MQI all the indicators of apparent utility in monitoring critical activities. The individual laboratory should also be able to decide how many and which QIs to adopt. In conclusion, the MQI project is proving to be an important tool that, besides providing the TTP error rate and spreading awareness of the importance of QIs in enhancing patient safety, highlights critical aspects that compromise the widespread and appropriate use of QIs.
3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy
NASA Astrophysics Data System (ADS)
Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.
2014-01-01
An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and the covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and a cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ~0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ~10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from the conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
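A skeleton of the pose search, assuming the `cma` Python package for the covariance matrix adaptation evolution strategy; the projector and similarity functions are hypothetical stand-ins, since the abstract gives no implementation details.

```python
# 6-DOF pose search (3 rotations, 3 translations) by CMA-ES over one or
# two projection views.
import cma
import numpy as np

def render_drr(volume, pose, geo):
    """Hypothetical digitally reconstructed radiograph projector (stand-in)."""
    raise NotImplementedError("replace with a real DRR projector")

def gradient_info_similarity(drr, proj):
    """Hypothetical gradient-information similarity metric (stand-in)."""
    raise NotImplementedError("replace with the chosen similarity metric")

def register(ct_volume, projections, geometry, x0=np.zeros(6), sigma0=5.0):
    def cost(pose):
        # maximize similarity summed over views -> minimize its negative
        return -sum(gradient_info_similarity(render_drr(ct_volume, pose, g), p)
                    for p, g in zip(projections, geometry))
    es = cma.CMAEvolutionStrategy(x0, sigma0)
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [cost(np.asarray(c)) for c in candidates])
    return es.result.xbest
```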
NASA Astrophysics Data System (ADS)
Swanson, Steven Roy
The objective of the dissertation is to improve state estimation performance, as compared to a Kalman filter, when non-constant, or changing, biases exist in the measurement data. The state estimation performance increase will come from the use of a fuzzy model to determine the position and velocity gains of a state estimator. A method is proposed for incorporating heuristic knowledge into a state estimator through the use of a fuzzy model. This method consists of using a fuzzy model to determine the gains of the state estimator, converting the heuristic knowledge into the fuzzy model, and then optimizing the fuzzy model with a genetic algorithm. This method is applied to the problem of state estimation of a cascaded global positioning system (GPS)/inertial reference unit (IRU) navigation system. The GPS position data contains two major sources for position bias. The first bias is due to satellite errors and the second is due to the time delay or lag from when the GPS position is calculated until it is used in the state estimator. When a change in the bias of the measurement data occurs, a state estimator will converge on the new measurement data solution. This will introduce errors into a Kalman filter's estimated state velocities, which in turn will cause a position overshoot as it converges. By using a fuzzy model to determine the gains of a state estimator, the velocity errors and their associated deficiencies can be reduced.
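A toy version of the idea: an alpha-beta position/velocity estimator whose gains are scheduled by a tiny fuzzy model of the innovation. The membership functions, rules and gain values are illustrative assumptions, not the dissertation's optimized model.

```python
# Alpha-beta filter with fuzzy-scheduled gains: large residuals are treated
# as likely bias jumps, so the position gain rises while the velocity gain
# is lowered to keep the velocity estimate from being corrupted.
def fuzzy_gains(innovation, scale=5.0):
    large = min(abs(innovation) / scale, 1.0)   # membership in 'residual is large'
    small = 1.0 - large
    alpha = small * 0.2 + large * 0.8           # rule blend for position gain
    beta = small * 0.1 + large * 0.02           # rule blend for velocity gain
    return alpha, beta

def estimate(measurements, dt=1.0):
    x, v = measurements[0], 0.0
    out = []
    for z in measurements[1:]:
        x_pred = x + v * dt
        r = z - x_pred                          # innovation
        alpha, beta = fuzzy_gains(r)
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
        out.append((x, v))
    return out
```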
Stevenson, Ryan A; Schlesinger, Joseph J; Wallace, Mark T
2013-02-01
Anesthesiology requires performing visually oriented procedures while monitoring auditory information about a patient's vital signs. A concern in operating room environments is the amount of competing information and the effects that divided attention has on patient monitoring, such as detecting auditory changes in arterial oxygen saturation via pulse oximetry. The authors measured the impact of visual attentional load and auditory background noise on the ability of anesthesia residents to monitor the pulse oximeter auditory display in a laboratory setting. Accuracies and response times were recorded, reflecting the anesthesiologists' ability to detect changes in oxygen saturation across three levels of visual attention, in quiet and with noise. Results show that visual attentional load substantially affects the ability to detect changes in oxygen saturation conveyed by auditory cues signaling 99 and 98% saturation. These effects are compounded by auditory noise, with up to a 17% decline in performance. These deficits are seen both in the ability to accurately detect a change in oxygen saturation and in the speed of response. Most anesthesia accidents are initiated by small errors that cascade into serious events. Lack of monitor vigilance and inattention are two of the more commonly cited factors, so reducing such errors is a priority for improving patient safety. Specifically, efforts to reduce distractors and decrease background noise should be considered during induction and emergence, periods of especially high risk, when anesthesiologists have to attend to many tasks and are thus susceptible to error.
Ranking and validation of spallation models for isotopic production cross sections of heavy residua
NASA Astrophysics Data System (ADS)
Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef
2017-07-01
The production cross sections of isotopically identified residual nuclei from spallation reactions induced by 136Xe projectiles at 500A MeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models than that obtained from a qualitative inspection of the data reproduction. The disagreement was caused by the sensitivity of the deviation factors to the large statistical errors present in some of the data. A new deviation factor, the A-factor, was proposed that is not sensitive to the statistical errors of the cross sections. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions when the data cover a large range of values. The quality of the data reproduction by the theoretical models is discussed, and some systematic deviations of the theoretical predictions from the experimental results are observed.
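As a hedged illustration of deviation factors of this kind: the H-factor below follows its common error-weighted definition in spallation benchmarks, while the A-factor is written as a symmetric relative deviation that ignores the experimental error bars. Both formulas are assumptions to be checked against the paper, not quotations from it.

```python
# Two deviation factors: one weighted by experimental errors (and hence
# dominated by points with small error bars), one independent of them.
import numpy as np

def h_factor(calc, exp, exp_err):
    """Error-weighted RMS deviation between calculated and measured values."""
    return np.sqrt(np.mean(((calc - exp) / exp_err) ** 2))

def a_factor(calc, exp):
    """Unweighted symmetric relative deviation, insensitive to exp_err."""
    return np.mean(np.abs(calc - exp) / (calc + exp))
```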
NASA Astrophysics Data System (ADS)
Hughes, G.
Ultra High Energy Cosmic Rays (UHECRs) have an energy many times greater than that of particles accelerated in colliders. The Extended Air Showers (EAS) resulting from their interaction in the atmosphere give us the opportunity to study not only cosmic rays but also these extremely energetic cascades. A method to calculate the average longitudinal shower profile has been applied to the High Resolution Fly's Eye Detector (HiRes) data. A complete detector simulation was used to generate CORSIKA (QGSJET) showers, which were then analyzed using the same technique. The main features of the average showers are compared to the Monte Carlo as a function of energy, and systematic errors in the reconstruction of the profile are considered.
Closed-loop analysis and control of a non-inverting buck-boost converter
NASA Astrophysics Data System (ADS)
Chen, Zengshi; Hu, Jiangang; Gao, Wenzhong
2010-11-01
In this article, a cascade controller is designed and analysed for a non-inverting buck-boost converter. The fast inner current loop uses sliding mode control. The slow outer voltage loop uses the proportional-integral (PI) control. Stability analysis and selection of PI gains are based on the nonlinear closed-loop error dynamics incorporating both the inner and outer loop controllers. The closed-loop system is proven to have a nonminimum phase structure. The voltage transient due to step changes of input voltage or resistance is predictable. The operating range of the reference voltage is discussed. The controller is validated by a simulation circuit. The simulation results show that the reference output voltage is well-tracked under system uncertainties or disturbances, confirming the validity of the proposed controller.
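A toy simulation of the cascade structure on an averaged non-inverting buck-boost model: a PI outer loop generates the inductor current reference and a sign-based, sliding-mode-like inner loop drives the duty input. Component values and gains are illustrative assumptions, not the article's design.

```python
# Cascade control of an averaged non-inverting buck-boost converter:
# outer PI voltage loop -> inductor current reference; inner switching
# law chatters the duty input around that reference.
L, C, R, Vin, Vref = 1e-3, 470e-6, 10.0, 12.0, 24.0
kp, ki, dt, T = 0.05, 20.0, 1e-6, 0.05

iL = vo = integ = 0.0
for _ in range(int(T / dt)):
    ev = Vref - vo                    # outer voltage loop (PI)
    integ += ev * dt
    iL_ref = kp * ev + ki * integ
    u = 1.0 if (iL_ref - iL) > 0 else 0.0   # inner sliding-mode-like law
    # averaged non-inverting buck-boost dynamics
    iL += dt / L * (u * Vin - (1.0 - u) * vo)
    vo += dt / C * ((1.0 - u) * iL - vo / R)
print(f"final output voltage ~ {vo:.2f} V (target {Vref} V)")
```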
Analyzing Strategic Business Rules through Simulation Modeling
NASA Astrophysics Data System (ADS)
Orta, Elena; Ruiz, Mercedes; Toro, Miguel
Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping find a good configuration for strategic business objectives and IT parameters. The paper includes a case study in which a simulation model is built to support business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.
Design and Realization of Controllable Ultrasonic Fault Detector Automatic Verification System
NASA Astrophysics Data System (ADS)
Sun, Jing-Feng; Liu, Hui-Ying; Guo, Hui-Juan; Shu, Rong; Wei, Kai-Li
An ultrasonic flaw detector with a remote-control interface is investigated and an automatic verification system for it is developed. By using extensible markup language (XML) to build the instruction-set protocols and the data-analysis method database in the system software, the design becomes controllable and the diversity of unpublished device interfaces and protocols is accommodated. By cascading a signal generator with a fixed attenuator, a dynamic error compensation method is proposed that performs the role of the fixed attenuator in traditional verification and improves the accuracy of the verification results. Operation of the automatic verification system confirms the feasibility of the hardware and software architecture and the correctness of the analysis method, while eliminating the cumbersome operations of the traditional verification process and reducing the labor intensity for test personnel.
NASA Technical Reports Server (NTRS)
Gupta, Kajal (Technical Monitor); Kirby, Kelvin
2004-01-01
The NASA Cooperative Agreement NAG4-210 was granted under the FY2000 Faculty Awards for Research (FAR) Program. The project was proposed to examine the effects of charged particles and neutrons on selected random access memory (RAM) technologies. The concept of the project was to add to the current knowledge of Single Event Effects (SEE) concerning RAM and to explore the impact of selected forms of radiation on error detection and correction systems. The project was established as an extension of a previous FAR awarded to Prairie View A&M University (PVAMU), under the direction of Dr. Richard Wilkins as principal investigator. The NASA-sponsored Center for Applied Radiation Research (CARR) at PVAMU developed an electronic test-bed to explore and quantify SEE on RAM from charged particles and neutrons. The test-bed was developed using 486DX microprocessor technology (PC-104) and a custom test board to mount RAM integrated circuits or other electronic devices. The test-bed had two configurations: a bench test version for laboratory experiments and a 400 Hz powered rack version for flight experiments. The objectives of this project were to: 1) upgrade the electronic test-bed (ETB) to a Pentium configuration; 2) accommodate more than 8 Mbytes of RAM; 3) explore error detection and correction systems for radiation effects; and 4) test modern RAM technologies in radiation environments.
Cascades frog conservation assessment
Karen Pope; Catherine Brown; Marc Hayes; Gregory Green; Diane Macfarlane
2014-01-01
The Cascades frog (Rana cascadae) is a montane, lentic-breeding amphibian that has become rare in the southern Cascade Range and remains relatively widespread in the Klamath Mountains of northern California. In the southern Cascades, remaining populations occur primarily in meadow habitats where the fungal disease, chytridiomycosis, and habitat...
Calculation of transonic flow in radial turbine blade cascade
NASA Astrophysics Data System (ADS)
Petr, Straka
2017-09-01
Numerical modeling of transonic centripetal turbulent flow in a radial blade cascade is described in this paper. Attention is paid to the effect of the outlet confusor on the flow through the radial blade cascade. Parameters of the presented radial blade cascade are compared with those of its linear representation.
A Multi-State Model for Catalyzing the Home Energy Efficiency Market
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackmon, Glenn
The RePower Kitsap partnership sought to jump-start the market for energy efficiency upgrades in Kitsap County, an underserved market on Puget Sound in Washington State. The Washington State Department of Commerce partnered with the Washington State University (WSU) Energy Program to supplement and extend existing utility incentives offered by Puget Sound Energy (PSE) and Cascade Natural Gas and to offer energy efficiency finance options through the Kitsap Credit Union and Puget Sound Cooperative Credit Union (PSCCU). RePower Kitsap established a coordinated approach with a second Better Buildings Neighborhood Program project serving the two largest cities in the county – Bainbridge Island and Bremerton. These two projects shared both the "RePower" brand and implementation team (Conservation Services Group (CSG) and Earth Advantage).
Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution
NASA Astrophysics Data System (ADS)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.
2012-07-01
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
CAUSES: Clouds Above the United States and Errors at the Surface
NASA Astrophysics Data System (ADS)
Ma, H. Y.; Klein, S. A.; Xie, S.; Morcrette, C. J.; Van Weverberg, K.; Zhang, Y.; Lo, M. H.
2015-12-01
The Clouds Above the United States and Errors at the Surface (CAUSES) project is a new joint Global Atmospheric System Studies/Regional and Global Climate Model/Atmospheric System Research (GASS/RGCM/ASR) intercomparison project to evaluate the central U.S. summertime surface warm bias seen in many weather and climate models. The main focus is to identify the role of cloud, radiation, and precipitation processes in contributing to surface air temperature biases. In this project, we use a short-term hindcast approach and examine the growth of the error as a function of hindcast lead time. The study period runs from April 1 to August 31, 2011, covering the entire Midlatitude Continental Convective Clouds Experiment (MC3E) campaign. Preliminary results from several models will be presented. (http://portal.nersc.gov/project/capt/CAUSES/) (This study is funded by the RGCM and ASR programs of the U.S. Department of Energy as part of the Cloud-Associated Parameterizations Testbed. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-658017)
Integrating Six Sigma with total quality management: a case example for measuring medication errors.
Revere, Lee; Black, Ken
2003-01-01
Six Sigma is a new management philosophy that seeks a nonexistent error rate. It is ripe for healthcare because many healthcare processes require a near-zero tolerance for mistakes. For most organizations, establishing a Six Sigma program requires significant resources and produces considerable stress. However, in healthcare, management can piggyback Six Sigma onto current total quality management (TQM) efforts so that minimal disruption occurs in the organization. Six Sigma is an extension of the Failure Mode and Effects Analysis that is required by JCAHO; it can easily be integrated into existing quality management efforts. Integrating Six Sigma into the existing TQM program facilitates process improvement through detailed data analysis. A drilled-down approach to root-cause analysis greatly enhances the existing TQM approach. Using the Six Sigma metrics, internal project comparisons facilitate resource allocation while external project comparisons allow for benchmarking. Thus, the application of Six Sigma makes TQM efforts more successful. This article presents a framework for including Six Sigma in an organization's TQM plan while providing a concrete example using medication errors. Using the process defined in this article, healthcare executives can integrate Six Sigma into all of their TQM projects.
The HIV care cascade: a systematic review of data sources, methodology and comparability.
Medland, Nicholas A; McMahon, James H; Chow, Eric P F; Elliott, Julian H; Hoy, Jennifer F; Fairley, Christopher K
2015-01-01
The cascade of HIV diagnosis, care and treatment (HIV care cascade) is increasingly used to direct and evaluate interventions to increase population antiretroviral therapy (ART) coverage, a key component of treatment as prevention. The ability to compare cascades over time, sub-population, jurisdiction or country is important. However, differences in data sources and methodology used to construct the HIV care cascade might limit its comparability and ultimately its utility. Our aim was to review systematically the different methods used to estimate and report the HIV care cascade and their comparability. A search of published and unpublished literature through March 2015 was conducted. Cascades that reported the continuum of care from diagnosis to virological suppression in a demographically definable population were included. Data sources and methods of measurement or estimation were extracted. We defined the most comparable cascade elements as those that directly measured diagnosis or care from a population-based data set. Thirteen reports were included after screening 1631 records. The undiagnosed HIV-infected population was reported in seven cascades, each of which used different data sets and methods and could not be considered to be comparable. All 13 used mandatory HIV diagnosis notification systems to measure the diagnosed population. Population-based data sets, derived from clinical data or mandatory reporting of CD4 cell counts and viral load tests from all individuals, were used in 6 of 12 cascades reporting linkage, 6 of 13 reporting retention, 3 of 11 reporting ART and 6 of 13 cascades reporting virological suppression. Cascades with access to population-based data sets were able to directly measure cascade elements and are therefore comparable over time, place and sub-population. Other data sources and methods are less comparable. To ensure comparability, countries wishing to accurately measure the cascade should utilize complete population-based data sets from clinical data from elements of a centralized healthcare setting, where available, or mandatory CD4 cell count and viral load test result reporting. Additionally, virological suppression should be presented both as percentage of diagnosed and percentage of estimated total HIV-infected population, until methods to calculate the latter have been standardized.
NASA Astrophysics Data System (ADS)
Hajabdollahi, Farzaneh; Premnath, Kannan N.
2018-05-01
Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, and thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. Several conclusions are drawn from the analysis of the structure of the non-GI errors and the associated corrections, with particular emphasis on their dependence on the preconditioning parameter. The GI preconditioned central-moment LB method is validated for a number of complex flow benchmark problems and its effectiveness to achieve convergence acceleration and improvement in accuracy is demonstrated.
NASA Astrophysics Data System (ADS)
Bukhari, W.; Hong, S.-M.
2015-01-01
Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
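The cascade-plus-gating structure can be sketched with off-the-shelf tools. Below, a linear extrapolator stands in for the LCM-EKF, scikit-learn's Gaussian process regression models its residuals, and the beam is gated on the predictive variance; the stand-in predictor, kernel and threshold are assumptions, not the paper's tuned components.

```python
# Stage 1: model-based prediction (stand-in). Stage 2: GPR on the recent
# prediction errors. Gate: beam off when the GPR predictive variance is large.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def linear(h):
    """Stand-in model predictor: linear extrapolation from the last two samples."""
    return 2 * h[-1] - h[-2]

def predict_with_gating(history, model_pred_fn, lookahead, var_thresh=0.25):
    """history: 1D array of breathing samples (needs > 32 points)."""
    base = np.array([model_pred_fn(history[:t]) for t in range(32, len(history))])
    resid = history[32:] - base                      # past prediction errors
    X = np.arange(len(resid), dtype=float).reshape(-1, 1)
    gpr = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(1e-2))
    gpr.fit(X, resid)
    corr, std = gpr.predict([[len(resid) - 1 + lookahead]], return_std=True)
    pred = model_pred_fn(history) + corr[0]          # corrected prediction
    beam_on = std[0] ** 2 < var_thresh               # gate on predictive variance
    return pred, beam_on

# usage: pred, beam_on = predict_with_gating(trace, linear, lookahead=15)
```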
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, P.; Ghose, D.
The sputter ripple formation in polycrystalline metal thin films of Al, Co, Cu, and Ag has been studied by 16.7 keV Ar+ and O2+ ion bombardment as a function of the angle of ion incidence. The experimental results show the existence of a critical angle of ion incidence (θc) beyond which ripples with wave vectors perpendicular to the projected ion beam direction appear. Monte Carlo simulation (SRIM) is carried out to calculate the depth and the longitudinal and lateral straggling widths of the energy deposition, as these values are crucial in determining the critical angle θc. It is found that the radial energy distribution of the damage cascade has its maximum slightly away from the ion path, in contradiction to the Gaussian distribution, and the distribution is better characterized by an exponential function. The lateral straggling widths being lower than those extracted from the measured critical angles using the Bradley and Harper theory indicates a highly anisotropic deposited-energy distribution.
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable energy project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
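A Monte Carlo version of the propagation step is easy to sketch: perturb the measured wind speeds by the stated measurement error and push them through a power curve. The cubic curve below is an illustrative stand-in for the Lagrange-fitted manufacturer curves used in the paper.

```python
# Propagate a 10% wind speed measurement error into the mean power output.
import numpy as np

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2000.0):
    """Toy power curve (kW): cubic between cut-in and rated, flat to cut-out."""
    p = np.where((v >= v_in) & (v < v_rated),
                 p_rated * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
    return np.where((v >= v_rated) & (v <= v_out), p_rated, p)

rng = np.random.default_rng(0)
v_meas = rng.weibull(2.0, size=8760) * 8.0          # one year of hourly speeds
speed_err = 0.10                                    # 10% wind speed error
samples = [power_curve(v_meas * (1 + rng.normal(0, speed_err, v_meas.size))).mean()
           for _ in range(500)]
print(f"mean power {np.mean(samples):.0f} kW +/- {np.std(samples):.0f} kW")
```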
NASA Astrophysics Data System (ADS)
Gao, Chan; Tian, Dongfeng; Li, Maosheng; Qian, Dazhi
2017-04-01
Different interatomic potentials produce displacement cascades with different features, and hence they significantly influence the results obtained from displacement cascade simulations. Displacement cascade simulations in α-Fe have been carried out by molecular dynamics with three 'magnetic' potentials (MP) and a Mendelev-type potential in this paper. Prior to the cascade simulations, the 'magnetic' potentials are hardened to make them suitable for cascade simulations. We find that the peak time, maximum number of defects, cascade volume and cascade density with the 'magnetic' potentials are smaller than those with the Mendelev-type potential. Within statistical uncertainty, the defect production efficiencies obtained with the Mendelev-type potential and the second 'magnetic' potential do not differ significantly at the same cascade energy, but both are markedly smaller than those obtained with the first and third 'magnetic' potentials. Self-interstitial atom (SIA) clustered fractions with the 'magnetic' potentials are smaller than that with the Mendelev-type potential, especially at higher energies, owing to the larger interstitial formation energies that result from the 'magnetic' potentials. The defect clustered fractions, which are input data for radiation damage accumulation models, may influence the prediction of microstructural evolution under radiation.
Cascading costs: an economic nitrogen cycle.
Moomaw, William R; Birch, Melissa B L
2005-12-01
The chemical nitrogen cycle is becoming better characterized in terms of fluxes and reservoirs on a variety of scales. Galloway has demonstrated that reactive nitrogen can cascade through multiple ecosystems causing environmental damage at each stage before being denitrified to N2. We propose to construct a parallel economic nitrogen cascade (ENC) in which economic impacts of nitrogen fluxes can be estimated by the costs associated with each stage of the chemical cascade. Using economic data for the benefits of damage avoided and costs of mitigation in the Chesapeake Bay basin, we have constructed an economic nitrogen cascade for the region. Since a single tonne of nitrogen can cascade through the system, the costs also cascade. Therefore evaluating the benefits of mitigating a tonne of reactive nitrogen released needs to consider the damage avoided in all of the ecosystems through which that tonne would cascade. The analysis reveals that it is most cost effective to remove a tonne of nitrogen coming from combustion since it has the greatest impact on human health and creates cascading damage through the atmospheric, terrestrial, aquatic and coastal ecosystems. We will discuss the implications of this analysis for determining the most cost effective policy option for achieving environmental quality goals.
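The cascading-cost accounting can be illustrated with a toy calculation: a tonne of reactive nitrogen passes through successive ecosystems, a fraction is retained (and causes damage) at each stage, and the benefit of mitigating at the source is the sum of avoided damages along the whole cascade. All numbers below are placeholders, not the paper's Chesapeake Bay estimates.

```python
# Sum the avoided damages of one mitigated tonne along the nitrogen cascade.
stages = [                       # (ecosystem, damage $/tonne retained, fraction retained)
    ("atmosphere",  300.0, 0.2),
    ("terrestrial", 150.0, 0.3),
    ("aquatic",     200.0, 0.3),
    ("coastal",     400.0, 1.0),  # remainder assumed denitrified after this stage
]

flux, avoided = 1.0, 0.0          # tonnes entering the cascade
for name, damage, retained in stages:
    avoided += flux * retained * damage
    flux *= (1.0 - retained)      # what passes on to the next ecosystem
print(f"avoided damage per tonne mitigated at the source: ${avoided:.0f}")
```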
Zhang, Yonghong; Sun, Weihong; Gutchell, Emily M; Kvecher, Leonid; Kohr, Joni; Bekhash, Anthony; Shriver, Craig D; Liebman, Michael N; Mural, Richard J; Hu, Hai
2013-01-01
In clinical and translational research as well as clinical trial projects, clinical data collection is prone to errors such as missing data, and misinterpretation or inconsistency of the data. A good quality assurance (QA) program can resolve many such errors though this requires efficient communications between the QA staff and data collectors. Managing such communications is critical to resolving QA problems but imposes a major challenge for a project involving multiple clinical and data processing sites. We have developed a QA issue tracking (QAIT) system to support clinical data QA in the Clinical Breast Care Project (CBCP). This web-based application provides centralized management of QA issues with role-based access privileges. It has greatly facilitated the QA process and enhanced the overall quality of the CBCP clinical data. As a stand-alone system, QAIT can supplement any other clinical data management systems and can be adapted to support other projects. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is of order n log n, nearly independently of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to order n^(4/3). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
Pedagogical Materials 1. The Yugoslav Serbo-Croatian-English Contrastive Project.
ERIC Educational Resources Information Center
Filipovic, Rudolf, Ed.
The first volume in this series on Serbo-Croatian-English contrastive analysis contains six articles. They are: "Contrastive Analysis and Error Analysis in Pedagogical Materials," by Rudolf Filipovic; "Errors in the Morphology and Syntax of the Parts of Speech in the English of Learners from the Serbo-Croatian-Speaking Area," by Vera Andrassy;…
Quality in the Basic Grant Delivery System: Volume 2, Corrective Actions.
ERIC Educational Resources Information Center
Advanced Technology, Inc., McLean, VA.
Alternative management procedures are recommended that may lower the rate and magnitude of errors in the award of the Basic Educational Opportunity Grants (BEOGs), or Pell Grants. The recommendations are part of the BEOG quality control project and are based on a review of current (1980-1981) levels, distribution, and significance of error in the…
Desikan, Radhika
2016-01-01
Cellular signal transduction usually involves activation cascades, the sequential activation of a series of proteins following the reception of an input signal. Here, we study the classic model of weakly activated cascades and obtain analytical solutions for a variety of inputs. We show that in the special but important case of optimal gain cascades (i.e. when the deactivation rates are identical) the downstream output of the cascade can be represented exactly as a lumped nonlinear module containing an incomplete gamma function with real parameters that depend on the rates and length of the cascade, as well as parameters of the input signal. The expressions obtained can be applied to the non-identical case when the deactivation rates are random to capture the variability in the cascade outputs. We also show that cascades can be rearranged so that blocks with similar rates can be lumped and represented through our nonlinear modules. Our results can be used both to represent cascades in computational models of differential equations and to fit data efficiently, by reducing the number of equations and parameters involved. In particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we show how the obtained nonlinear modules can be used instead of delay differential equations to model delays in signal transduction. PMID:27581482
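As an illustration of the lumped module described in this abstract: for the classic case of identical deactivation rates and a unit step input, the normalized cascade output follows the regularized lower incomplete gamma function. The sketch below assumes that form; the gain prefactor and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def cascade_output(t, n, beta, gain=1.0):
    """Step response of a weakly activated cascade of length n with
    identical deactivation rates beta (the 'optimal gain' case).
    Because gammainc accepts a real-valued first argument, n can be
    fitted as a continuous parameter, like a Hill coefficient."""
    return gain * gammainc(n, beta * t)

t = np.linspace(0.0, 20.0, 200)
y = cascade_output(t, n=4.0, beta=0.8)  # a 4-step cascade; values assumed
```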
Conscious coupling: The challenges and opportunities of cascading enzymatic microreactors.
Gruber, Pia; Marques, Marco P C; O'Sullivan, Brian; Baganz, Frank; Wohlgemuth, Roland; Szita, Nicolas
2017-07-01
The continuous production of high value or difficult to synthesize products is of increasing interest to the pharmaceutical industry. Cascading reaction systems have already been employed for chemical synthesis with great success, allowing a quick change in reaction conditions and addition of new reactants as well as removal of side products. A cascading system can remove the need for isolating unstable intermediates, increasing the yield of a synthetic pathway. Based on the success for chemical synthesis, the question arises how cascading systems could be beneficial to chemo-enzymatic or biocatalytic synthesis. Microreactors, with their rapid mass and heat transfer, small reaction volumes and short diffusion pathways, are promising tools for the development of such processes. In this mini-review, the authors provide an overview of recent examples of cascaded microreactors. Special attention will be paid to how microreactors are combined and the challenges as well as opportunities that arise from such combinations. Selected chemical reaction cascades will be used to illustrate this concept, before the discussion is widened to include chemo-enzymatic and multi-enzyme cascades. The authors also present the state of the art of online and at-line monitoring for enzymatic microreactor cascades. Finally, the authors review work-up and purification steps and their integration with microreactor cascades, highlighting the potential and the challenges of integrated cascades. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Kocoulin, Valerii; Nefediev, Yurii
2016-07-01
At Kazan University, computer simulations are carried out for observations of the lunar physical libration in planned projects that will install measuring equipment on the lunar surface. One such project is ILOM (Japan), in which an optical telescope with a CCD will be placed at a lunar pole. As a result, the selenographic coordinates (x and y) of a star will be determined with an accuracy of 1 ms of arc. On the basis of the analytical theory of physical libration, we developed a technique for solving the inverse problem of the libration, and we have already shown, for example, that an error of about ɛ seconds in the determined selenographic coordinates does not lead to errors larger than 1.414ɛ in the libration angles ρ and Iσ. Libration in longitude is not determined from observations of the polar star (Petrova et al., 2012). The accuracy of the libration in the inverse problem depends on the accuracy of the star coordinates, α and δ, taken from star catalogues; checking this influence is the task of the present study. For the simulation we developed software that selects the stars falling in the field of view of the lunar telescope over the observation period. Equatorial coordinates of the stars were taken from several fundamental catalogues: UCAC2-BSS, Hipparcos, Tycho, FK6 (parts I, III) and the Astronomical Almanac. An analysis of these catalogues from the point of view of the accuracy of the star coordinates represented in them was performed by Nefediev et al., 2013. The largest errors, 20-70 ms, are found in the UCAC2 and Tycho catalogues; the others have errors of about a millisecond of arc. We simulated observations with the mentioned errors and obtained the following results. 1. An error Δδ in the declination of a star causes an error of the same order in the libration parameters ρ and Iσ, while the sensitivity of the libration to errors Δα is ten times smaller. Fortunately, due to statistics (30 to 70 stars, depending on the time of observation), this error is reduced by an order of magnitude, i.e. it does not exceed the error of the observed selenographic coordinates. 2. Worse, errors in the catalogue coordinates cause a small but constant shift in ρ and Iσ: when Δα, Δδ ~ 0.01", the shift reaches 0.0025", and there is a trend with a slight but noticeable slope. 3. The effect of an error in the declination of a star is substantially stronger than that of an error in right ascension; perhaps this is characteristic only of polar observations. To reach the required accuracy in determining the physical libration, these phenomena must be taken into account when processing the planned observations. References: Nefediev et al., 2013. Uchenye zapiski Kazanskogo universiteta, v. 155, 1, p. 188-194. Petrova, N., Abdulmyanov, T., Hanada, H. Some qualitative manifestations of the physical libration of the Moon by observing stars from the lunar surface. J. Adv. Space Res., 2012, v. 50, p. 1702-1711.
A line-source method for aligning on-board and other pinhole SPECT systems
Yan, Susu; Bowsher, James; Yin, Fang-Fang
2013-01-01
Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537
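For intuition, the angle-and-offset detection step described in the Methods can be sketched with an off-the-shelf Radon transform: a line source shows up as a peak in the sinogram. The peak-picking below is a simplification (no subpixel refinement), and scikit-image is assumed purely for illustration, not as the authors' implementation.

```python
import numpy as np
from skimage.transform import radon

def line_angle_offset(image):
    """Estimate the angle (alpha, degrees) and offset (rho, pixels from
    the detector center) of a single bright line in a 2D projection
    image via the peak of its Radon transform."""
    theta = np.arange(180.0)               # candidate projection angles
    sinogram = radon(image, theta=theta)   # shape: (n_offsets, n_angles)
    idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    rho = idx[0] - sinogram.shape[0] // 2  # offset relative to center
    alpha = theta[idx[1]]
    return alpha, rho
```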
A calibration method immune to the projector errors in fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Guo, Hongwei
2017-08-01
In the fringe projection technique, system calibration is a tedious task for establishing the mapping relationship between object depths and fringe phases. In particular, it is not easy to accurately determine the parameters of the projector in this system, which may induce errors in the measurement results. To solve this problem, this paper proposes a new calibration method that uses the cross-ratio invariance in the system geometry to determine the phase-to-depth relations. In it, we analyze the epipolar geometry of the fringe projection system. On each epipolar plane, a depth variation along an incident ray induces a pixel movement along the epipolar line on the image plane of the camera. These depth variations and pixel movements are connected by projective transformations, under which the cross-ratio of each of them remains invariant. Based on this fact, we suggest measuring the depth map by use of this cross-ratio invariance. Firstly, we shift the reference board in its perpendicular direction to three positions with known depths and measure their phase maps as the reference phase maps; secondly, when measuring an object, we calculate the object depth at each pixel by equating the cross-ratio of the depths to that of the corresponding pixels having the same phase on the image plane of the camera. This method is immune to errors originating from the projector, including distortions both in the geometric shapes and in the intensity profiles of the projected fringe patterns. The experimental results demonstrate the proposed method to be feasible and valid.
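The cross-ratio step lends itself to a compact sketch. Assuming three reference-board depths and the corresponding same-phase pixel coordinates along an epipolar line (all names and numbers below are hypothetical), the object depth follows from equating the two cross-ratios and solving the resulting linear relation.

```python
import numpy as np

def cross_ratio(x1, x2, x3, x4):
    # Cross-ratio of four collinear points: invariant under projective maps.
    return ((x1 - x3) * (x2 - x4)) / ((x2 - x3) * (x1 - x4))

def depth_from_cross_ratio(z_ref, u_ref, u_obj):
    """Recover object depth from three reference depths z_ref and the
    corresponding same-phase pixel coordinates u_ref on the epipolar line.
    Equating cross_ratio(z1, z2, z3, z) to the measured pixel cross-ratio
    c gives a linear equation in z, solved in closed form below."""
    z1, z2, z3 = z_ref
    u1, u2, u3 = u_ref
    c = cross_ratio(u1, u2, u3, u_obj)  # measured on the camera image
    a = z1 - z3
    b = c * (z2 - z3)
    return (b * z1 - a * z2) / (b - a)

# Hypothetical values: reference depths in mm, pixel positions in pixels.
z = depth_from_cross_ratio((0.0, 10.0, 20.0), (102.3, 155.8, 190.4), 173.0)
```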
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-18
... NUCLEAR REGULATORY COMMISSION [EA-11-013] USEC Inc. (American Centrifuge Lead Cascade Facility and American Centrifuge Plant); Order Approving Direct Transfer of Licenses and Conforming Amendment I USEC... Centrifuge Lead Cascade Facility (Lead Cascade) and American Centrifuge Plant (ACP), respectively, which...
Understanding global climate change scenarios through bioclimate stratification
NASA Astrophysics Data System (ADS)
Soteriades, A. D.; Murray-Rust, D.; Trabucco, A.; Metzger, M. J.
2017-08-01
Despite progress in impact modelling, communicating and understanding the implications of climatic change projections is challenging due to inherent complexity and a cascade of uncertainty. In this letter, we present an alternative representation of global climate change projections based on shifts in 125 multivariate strata characterized by relatively homogeneous climate. These strata form climate analogues that help in the interpretation of climate change impacts. A Random Forests classifier was calculated and applied to 63 Coupled Model Intercomparison Project Phase 5 climate scenarios at 5 arcmin resolution. Results demonstrate how shifting bioclimate strata can summarize future environmental changes and form a middle ground, conveniently integrating current knowledge of climate change impact with the interpretation advantages of categorical data but with a level of detail that resembles a continuous surface at global and regional scales. Both the agreement in major change and differences between climate change projections are visually combined, facilitating the interpretation of complex uncertainty. By making the data and the classifier available we provide a climate service that helps facilitate communication and provide new insight into the consequences of climate change.
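The stratification idea can be sketched as follows: learn a mapping from climate variables to bioclimate strata on current data, then apply it under a future scenario to map shifting strata. The synthetic arrays and forest size below are hypothetical stand-ins; the study's actual training setup may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-grid-cell climate variables and current stratum labels.
X_now = rng.normal(size=(5000, 6))        # (n_cells, n_climate_vars)
strata = rng.integers(1, 126, size=5000)  # stratum labels 1..125
X_future = X_now + rng.normal(0.3, 0.1, size=X_now.shape)  # shifted climate

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_now, strata)                    # classifier: climate -> stratum
strata_future = clf.predict(X_future)     # reclassify under the scenario
print("fraction of cells changing stratum:",
      np.mean(strata_future != strata))
```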
Ingram, W Scott; Yang, Jinzhong; Wendt, Richard; Beadle, Beth M; Rao, Arvind; Wang, Xin A; Court, Laurence E
2017-08-01
To assess the influence of non-rigid anatomy and differences in patient positioning between CT acquisition and endoscopic examination on endoscopy-CT image registration in the head and neck. Radiotherapy planning CTs and 31-35 daily treatment-room CTs were acquired for nineteen patients. Diagnostic CTs were acquired for thirteen of the patients. The surfaces of the airways were segmented on all scans and triangular meshes were created to render virtual endoscopic images with a calibrated pinhole model of an endoscope. The virtual images were used to take projective measurements throughout the meshes, with reference measurements defined as those taken on the planning CTs and test measurements defined as those taken on the daily or diagnostic CTs. The influence of non-rigid anatomy was quantified by 3D distance errors between reference and test measurements on the daily CTs, and the influence of patient positioning was quantified by 3D distance errors between reference and test measurements on the diagnostic CTs. The daily CT measurements were also used to investigate the influences of camera-to-surface distance, surface angle, and the interval of time between scans. Average errors in the daily CTs were 0.36 ± 0.61 cm in the nasal cavity, 0.58 ± 0.83 cm in the naso- and oropharynx, and 0.47 ± 0.73 cm in the hypopharynx and larynx. Average errors in the diagnostic CTs in those regions were 0.52 ± 0.69 cm, 0.65 ± 0.84 cm, and 0.69 ± 0.90 cm, respectively. All CTs had errors heavily skewed towards 0, albeit with large outliers. Large camera-to-surface distances were found to increase the errors, but the angle at which the camera viewed the surface had no effect. The errors in the Day 1 and Day 15 CTs were found to be significantly smaller than those in the Day 30 CTs (P < 0.05). Inconsistencies of patient positioning have a larger influence than non-rigid anatomy on projective measurement errors. In general, these errors are largest when the camera is in the superior pharynx, where it sees large distances and a lot of muscle motion. The errors are larger when the interval of time between CT acquisitions is longer, which suggests that the interval of time between the CT acquisition and the endoscopic examination should be kept short. The median errors found in this study are comparable to acceptable levels of uncertainty in deformable CT registration. Large errors are possible even when image alignment is very good, indicating that projective measurements must be made carefully to avoid these outliers. © 2017 American Association of Physicists in Medicine.
Radio detection of cosmic-ray air showers and high-energy neutrinos
NASA Astrophysics Data System (ADS)
Schröder, Frank G.
2017-03-01
In the last fifteen years, radio detection made it back to the list of promising techniques for extensive air showers, firstly due to the installation and successful operation of digital radio experiments and, secondly, due to the quantitative understanding of the radio emission from atmospheric particle cascades. The radio technique has an energy threshold of about 100 PeV, which coincides with the energy at which a transition from the highest-energy galactic sources to the even more energetic extragalactic cosmic rays is assumed. Thus, radio detectors are particularly useful to study the highest-energy galactic particles and ultra-high-energy extragalactic particles of all types. Recent measurements by various antenna arrays like LOPES, CODALEMA, AERA, LOFAR, Tunka-Rex, and others have shown that radio measurements can compete in precision with other established techniques, in particular for the arrival direction, the energy, and the position of the shower maximum, which is one of the best estimators for the composition of the primary cosmic rays. The scientific potential of the radio technique seems to be maximal in combination with particle detectors, because this combination of complementary detectors can significantly increase the total accuracy of air-shower measurements. This increase in accuracy is crucial for a better separation of different primary particles, like gamma-ray photons, neutrinos, or different types of nuclei, because showers initiated by these particles differ in the average depth of the shower maximum and in the ratio between the amplitude of the radio signal and the number of muons. In addition to air-shower measurements, the radio technique can be used to measure particle cascades in dense media, which is a promising technique for the detection of ultra-high-energy neutrinos. Several pioneering experiments like ARA, ARIANNA, and ANITA are currently searching for the radio emission of neutrino-induced particle cascades in ice. In the next years these two sub-fields of radio detection of cascades in air and in dense media will likely merge, because several future projects aim at the simultaneous detection of both high-energy cosmic rays and neutrinos. SKA will search for neutrino- and cosmic-ray-initiated cascades in the lunar regolith and simultaneously provide unprecedented detail for air-shower measurements. Moreover, detectors with huge exposure like GRAND, SWORD or EVA are being considered to study the highest-energy cosmic rays and neutrinos. This review provides an introduction to the physics of radio emission by particle cascades, an overview of the various experiments and their instrumental properties, and a summary of methods for reconstructing the most important air-shower properties from radio measurements. Finally, potential applications of the radio technique in high-energy astroparticle physics are discussed.
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.
2017-12-01
The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore/Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), aiming to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric observations of GHGs coupled to an atmospheric transport and dispersion model. The magnitude of the update depends upon the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman filter is then run using these covariances along with maximum likelihood estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling, which can help improve the skill of an inversion at urban scales.
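The covariance-weighted update at the heart of such a Kalman-filter inversion can be sketched in a few lines. All matrices below (prior error covariance B, observation operator H, model-data mismatch covariance R) are illustrative placeholders, not the NIST configuration.

```python
import numpy as np

def kalman_flux_update(x_prior, B, H, y_obs, R):
    """One Bayesian (Kalman-style) flux update: the prior fluxes x_prior
    are corrected by the observed enhancements y_obs, weighted by the
    assumed prior and model-data error covariances B and R."""
    S = H @ B @ H.T + R                      # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_post = x_prior + K @ (y_obs - H @ x_prior)
    B_post = (np.eye(len(x_prior)) - K @ H) @ B
    return x_post, B_post
```

The structure and magnitude of B and R are exactly the quantities the abstract proposes to make data-adaptive: the gain K, and hence the posterior, changes directly with them.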
Mertz, Leslie
2012-01-01
When the Defense Advanced Research Projects Agency (DARPA) asks research questions, it goes big. This is, after all, the same agency that put together teams of scientists and engineers to find a way to connect the world's computers and, in doing so, developed the precursor to the Internet. DARPA, the experimental research wing of the U.S. Department of Defense, funds the types of research queries that scientists and engineers dream of tackling. Unlike a traditional granting agency that conservatively metes out its funding, and only to projects with a good chance of success, DARPA puts its money on massive, multi-institutional projects that have no guarantees but enormous potential. In the 1990s, DARPA began its biological and medical science research to improve the safety, health, and well-being of military personnel, according to DARPA program manager and Army Colonel Geoffrey Ling, Ph.D., M.D. More recently, DARPA has entered the realm of neuroscience and neurotechnology. Its focus with these projects is on its prime customer, the U.S. Department of Defense, but Ling acknowledged that technologies developed in its programs "certainly have potential to cascade into civilian uses."
NASA Technical Reports Server (NTRS)
Riffel, R. E.; Rothrock, M. D.
1980-01-01
A two-dimensional cascade of harmonically oscillating airfoils was designed to model a near-tip section from a rotor which was known to have experienced supersonic translational mode flutter. This five-bladed cascade had a solidity of 1.52 and a setting angle of 0.90 rad. Unique graphite-epoxy airfoils were fabricated to achieve the realistic high reduced-frequency level of 0.15. The cascade was tested over a range of static pressure ratios approximating the blade element operating conditions of the rotor along a constant-speed line which penetrated the flutter boundary. The time-steady and time-unsteady flow fields surrounding the center cascade airfoil were investigated.
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
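One plausible instantiation of the maximum-likelihood fitting described above is a Poisson regression of error counts on the candidate drivers. The data values below are hypothetical, and the choice of a Poisson GLM is an assumption for illustration, not the paper's specific model.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-period data: files radiated, subjective workload (0-10),
# and the number of command file errors observed.
files_radiated = np.array([12, 30, 25, 8, 40, 22])
workload = np.array([3.0, 7.5, 6.0, 2.0, 9.0, 5.5])
errors = np.array([0, 2, 1, 0, 3, 1])

X = sm.add_constant(np.column_stack([files_radiated, workload]))
model = sm.GLM(errors, X, family=sm.families.Poisson()).fit()  # Poisson MLE
print(model.params)  # fitted log error-rate coefficients per driver
```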
QED cascade saturation in extreme high fields.
Luo, Wen; Liu, Wei-Yuan; Yuan, Tao; Chen, Min; Yu, Ji-Ye; Li, Fei-Yu; Del Sorbo, D; Ridgers, C P; Sheng, Zheng-Ming
2018-05-30
Upcoming ultrahigh power lasers at the 10 PW level will make it possible to experimentally explore electron-positron (e⁻e⁺) pair cascades and subsequent relativistic e⁻e⁺ jet formation, which are supposed to occur in extreme astrophysical environments, such as black holes, pulsars, quasars and gamma-ray bursts. In the latter case it is a long-standing question as to how the relativistic jets are formed and what their temperatures and compositions are. Here we report simulation results of pair cascades in two counter-propagating QED-strong laser fields. A scaling of QED cascade growth with laser intensity is found, showing clear cascade saturation above a threshold intensity of ~10^24 W/cm^2. QED cascade saturation leads to pair plasma cooling and longitudinal compression along the laser axis, resulting in the subsequent formation of relativistic dense e⁻e⁺ jets along transverse directions. Such laser-driven QED cascade saturation may open up the opportunity to study energetic astrophysical phenomena in the laboratory.
Cascade defense via routing in complex networks
NASA Astrophysics Data System (ADS)
Xu, Xiao-Lan; Du, Wen-Bo; Hong, Chen
2015-05-01
As the cascading failures in networked traffic systems are becoming more and more serious, research on cascade defense in complex networks has become a hotspot in recent years. In this paper, we propose a traffic-based cascading failure model, in which each packet in the network has its own source and destination. When cascade is triggered, packets will be redistributed according to a given routing strategy. Here, a global hybrid (GH) routing strategy, which uses the dynamic information of the queue length and the static information of nodes' degree, is proposed to defense the network cascade. Comparing GH strategy with the shortest path (SP) routing, efficient routing (ER) and global dynamic (GD) routing strategies, we found that GH strategy is more effective than other routing strategies in improving the network robustness against cascading failures. Our work provides insight into the robustness of networked traffic systems.
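The abstract does not spell out the GH weighting, so the sketch below assumes a simple linear mix of dynamic queue length and static degree purely for illustration; the paper's actual cost function, parameters, and redistribution rule may differ.

```python
import networkx as nx

def gh_route(G, src, dst, queue, beta=0.5):
    """Route a packet with a hybrid cost mixing the dynamic queue length
    and the static degree of each node. The linear mix (beta) is an
    illustrative assumption, not the paper's formula."""
    H = G.to_directed()
    for u, v in H.edges():
        # Charge the cost of entering node v.
        H[u][v]["w"] = beta * queue[v] + (1.0 - beta) * G.degree(v)
    return nx.shortest_path(H, src, dst, weight="w")

# Toy usage on a scale-free network with initially empty queues.
G = nx.barabasi_albert_graph(50, 3, seed=1)
queue = {n: 0 for n in G}
path = gh_route(G, 0, 7, queue)
```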
Cost Risk Analysis Based on Perception of the Engineering Process
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources. The historical cost data may be in error by some unknown amount. The managers' evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the distribution underlying the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
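The kind of risk curve described, built from elicited technical parameters rather than an assumed cost distribution, can be sketched with a small Monte Carlo. Every distribution and coefficient below is a hypothetical stand-in for the expert-elicited inputs and the parametric cost model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical engineering-process drivers elicited from the expert.
complexity = rng.triangular(0.8, 1.0, 1.5, n)  # design complexity factor
mass_kg = rng.normal(250.0, 25.0, n)           # subsystem mass

# Toy parametric cost model mapping drivers to cost (illustrative CER).
cost = 1.2 * mass_kg**0.8 * complexity

# The risk curve: cost values paired with confidence levels.
levels = np.percentile(cost, [10, 50, 90])
print(dict(zip(["P10", "P50", "P90"], np.round(levels, 1))))
```

No distributional shape is imposed on the cost itself; the spread of the curve emerges from the uncertainty in the technical drivers, which is the point the abstract makes.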
Carl N. Skinner; Alan H. Taylor
2006-01-01
The Cascade Range extends from British Columbia, Canada, south to northern California where it meets the Sierra Nevada. The Southern Cascades bioregion in California is bounded on the west by the Sacramento Valley and the Klamath Mountains, and on the east by the Modoc Plateau and Great Basin. The bioregion encompasses the Southern Cascades section of Miles and Goudey...
Cascaded Bragg scattering in fiber optics.
Xu, Y Q; Erkintalo, M; Genty, G; Murdoch, S G
2013-01-15
We report on a theoretical and experimental study of cascaded Bragg scattering in fiber optics. We show that the usual energy-momentum conservation of Bragg scattering can be considerably relaxed via cascade-induced phase-matching. Experimentally we demonstrate frequency translation over six- and 11-fold cascades, in excellent agreement with derived phase-matching conditions.
TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: The conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in a dose increase as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections by preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the mean CT reconstruction errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves a better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains similar quality of decomposed material images compared with FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density with similar quality compared with two full scans. Our future work includes more phantom studies to validate the performance of our method.
Cascading Failures as Continuous Phase-Space Transitions
Yang, Yang; Motter, Adilson E.
2017-12-14
In network systems, a local perturbation can amplify as it propagates, potentially leading to a large-scale cascading failure. We derive a continuous model to advance our understanding of cascading failures in power-grid networks. The model accounts for both the failure of transmission lines and the desynchronization of power generators and incorporates the transient dynamics between successive steps of the cascade. In this framework, we show that a cascade event is a phase-space transition from an equilibrium state with high energy to an equilibrium state with lower energy, which can be suitably described in a closed form using a global Hamiltonian-like function. From this function, we show that a perturbed system cannot always reach the equilibrium state predicted by quasi-steady-state cascade models, which would correspond to a reduced number of failures, and may instead undergo a larger cascade. We also show that, in the presence of two or more perturbations, the outcome depends strongly on the order and timing of the individual perturbations. These results offer new insights into the current understanding of cascading dynamics, with potential implications for control interventions.
Optical Wave Turbulence and Wave Condensation in a Nonlinear Optical Experiment
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bortolozzo, Umberto; Nazarenko, Sergey; Residori, Stefania
We present theory, numerical simulations and experimental observations of a 1D optical wave system. We show that this system is of a dual cascade type, namely, the energy cascading directly to small scales, and the photons or wave action cascading to large scales. In the optical context the inverse cascade is particularly interesting because it means the condensation of photons. We show that the cascades are induced by a six-wave resonant interaction process described by weak turbulence theory. We show that by starting with weakly nonlinear randomized waves as an initial condition, there exists an inverse cascade of photons towards the lowest wavenumbers. During the cascade nonlinearity becomes strong at low wavenumbers and, due to the focusing nature of the nonlinearity, it leads to modulational instability resulting in the formation of solitons. Further interaction of the solitons among themselves and with incoherent waves leads to the final condensate state dominated by a single strong soliton. In addition, we show the existence of the direct energy cascade numerically and that it agrees with the wave turbulence prediction.
2013-01-01
A new approach, the projective system approach, is proposed to realize modified projective synchronization between two different chaotic systems. By simple analysis of trajectories in the phase space, a projective system of the original chaotic systems is obtained to replace the error system in judging the occurrence of modified projective synchronization. Theoretical analysis and numerical simulations show that, although the projective system may not be unique, modified projective synchronization can be achieved provided that the origin of any of the projective systems is asymptotically stable. Furthermore, an example is presented to illustrate that even a necessary and sufficient condition for modified projective synchronization can be derived by using the projective system approach. PMID:24187522
Cooper, Lauren A.; Stringer, Anne M.
2018-01-01
ABSTRACT In clustered regularly interspaced short palindromic repeat (CRISPR)-Cas (CRISPR-associated) immunity systems, short CRISPR RNAs (crRNAs) are bound by Cas proteins, and these complexes target invading nucleic acid molecules for degradation in a process known as interference. In type I CRISPR-Cas systems, the Cas protein complex that binds DNA is known as Cascade. Association of Cascade with target DNA can also lead to acquisition of new immunity elements in a process known as primed adaptation. Here, we assess the specificity determinants for Cascade-DNA interaction, interference, and primed adaptation in vivo, for the type I-E system of Escherichia coli. Remarkably, as few as 5 bp of crRNA-DNA are sufficient for association of Cascade with a DNA target. Consequently, a single crRNA promotes Cascade association with numerous off-target sites, and the endogenous E. coli crRNAs direct Cascade binding to >100 chromosomal sites. In contrast to the low specificity of Cascade-DNA interactions, >18 bp are required for both interference and primed adaptation. Hence, Cascade binding to suboptimal, off-target sites is inert. Our data support a model in which the initial Cascade association with DNA targets requires only limited sequence complementarity at the crRNA 5′ end whereas recruitment and/or activation of the Cas3 nuclease, a prerequisite for interference and primed adaptation, requires extensive base pairing. PMID:29666291
Wakefield, Claire E; Sansom-Daly, Ursula M; McGill, Brittany C; Ellis, Sarah J; Doolan, Emma L; Robertson, Eden G; Mathur, Sanaa; Cohn, Richard J
2016-06-01
The aim of this study was to evaluate the feasibility and acceptability of "Cascade": an online, group-based, cognitive behavioral therapy intervention, delivered "live" by a psychologist, to assist parents of children who have completed cancer treatment. Forty-seven parents were randomized to Cascade (n = 25) or a 6-month waitlist (n = 22). Parents completed questionnaires at baseline, 1-2 weeks and 6 months post-intervention. Thirty parents completed full evaluations of the Cascade program (n = 21 randomized to Cascade, n = 9 completed Cascade post-waitlist). Ninety-six percent of Cascade participants completed the intervention (n = 24/25). Eighty percent of parents completed every questionnaire (mean completion time 25 min (SD = 12)). Cascade was described as at least "somewhat" helpful by all parents. None rated Cascade as "very/quite" burdensome. Parents reported that the "online format was easy to use" (n = 28, 93.3 %), "I learnt new skills" (n = 28, 93.3 %), and "I enjoyed talking to others" (n = 29, 96.7 %). Peer-to-peer benefits were highlighted by good group cohesion scores. Cascade is highly acceptable and feasible. Its online delivery mechanism may address inequities in post-treatment support for parents, a particularly acute concern for rural/remote families. Future research needs to establish the efficacy of the intervention. ACTRN12613000270718, https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12613000270718.
Error Correction for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei
2016-05-01
The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. Those errors, including horizontal, vertical, and longitudinal displacement, roll error in the transverse plane, strength errors of the main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, chromaticity problems, etc. These problems generally reduce the dynamic aperture at the Interaction Point (IP). Following real commissioning experience at other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been performed in this study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.
NASA Technical Reports Server (NTRS)
1985-01-01
A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed logically error-free software that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES: a user can write in English and the system converts it to computer languages. It is employed by several large corporations.
1980-02-01
formula for predicting the number of errors during system testing. The equation he presents is B = V/E_CRIT, where B is the number of errors expected, V is the volume, and E_CRIT is "the mean number of elementary discriminations between potential errors in programming" (p. 85). E_CRIT can also be used in the formula for the prediction of delivered bugs: B = V/E_CRIT = V/13,824. 2.3 McCabe's Complexity Metric. Thomas McCabe (1976) defined complexity in relation to
Sea otters, social justice, and ecosystem-service perceptions in Clayoquot Sound, Canada.
Levine, Jordan; Muthukrishna, Michael; Chan, Kai M A; Satterfield, Terre
2017-04-01
We sought to take a first step toward better integration of social concerns into empirical ecosystem service (ES) work. We did this by adapting cognitive anthropological techniques to study the Clayoquot Sound social-ecological system on the Pacific coast of Canada's Vancouver Island. We used freelisting and ranking exercises to elicit how locals perceive ESs and to determine locals' preferred food species. We analyzed these data with the freelist-analysis software package ANTHROPAC. We considered the results in light of an ongoing trophic cascade caused by the government reintroduction of sea otters (Enhydra lutris) and their spread along the island's Pacific coast. We interviewed 67 local residents (n = 29 females, n = 38 males; n = 26 self-identified First Nation individuals, and n = 41 non-First Nation individuals) and 4 government managers responsible for conservation policy in the region. We found that the mental categories that participants, including trained ecologists, used to think about ESs did not match the standard academic ES typology. With reference to the latest ecological model projections for the region, we found that First Nations individuals and women were most likely to perceive the most immediate ES losses from the trophic cascade, with the most certainty. The inverse was generally found for men and non-First Nations individuals. This suggests that 2 historically disadvantaged groups (i.e., First Nations and women) are poised to experience the immediate impacts of the government-initiated trophic cascade as yet another social injustice in a long line of perceived inequities. Left unaddressed, this could complicate efforts at multistakeholder ecosystem management in the region. © 2016 Society for Conservation Biology.
Collins, Susan J; Newhouse, Robin; Porter, Jody; Talsma, AkkeNeel
2014-07-01
Approximately 2,700 patients are harmed by wrong-site surgery each year. The World Health Organization created the surgical safety checklist to reduce the incidence of wrong-site surgery. A project team conducted a narrative review of the literature to determine the effectiveness of the surgical safety checklist in correcting and preventing errors in the OR. Team members used Reason's Swiss cheese model of error to analyze the findings. Analysis of the results indicated the effectiveness of the surgical checklist in reducing the incidence of wrong-site surgeries and other medical errors; however, checklists alone will not prevent all errors. Successful implementation requires perioperative stakeholders to understand the nature of errors, recognize the complex dynamic between systems and individuals, and create a just culture that encourages a shared vision of patient safety. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.
An investigation of error correcting techniques for OMV and AXAF
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
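The error-injection procedure described, Poisson-distributed gaps between bursts with Gaussian burst lengths, can be sketched as follows; all parameter values are illustrative assumptions, not those used in the report.

```python
import numpy as np

rng = np.random.default_rng(1)

def inject_burst_errors(data_bits, rate=1e-4, burst_mean=8.0, burst_sd=3.0):
    """Corrupt a bit array with bursty errors: exponential gaps between
    bursts (a Poisson process in time) and Gaussian burst lengths."""
    bits = data_bits.copy()
    n = len(bits)
    pos = 0
    while True:
        pos += int(rng.exponential(1.0 / rate))  # gap to next burst
        if pos >= n:
            break
        length = max(1, int(rng.normal(burst_mean, burst_sd)))
        bits[pos:pos + length] ^= 1              # flip a burst of bits
        pos += length
    return bits

clean = np.zeros(1_000_000, dtype=np.uint8)
noisy = inject_burst_errors(clean)
print("corrupted bits:", int(noisy.sum()))
```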
Development of multiple-eye PIV using mirror array
NASA Astrophysics Data System (ADS)
Maekawa, Akiyoshi; Sakakibara, Jun
2018-06-01
In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and the flow target to capture n images of identical particles from n (= 80 maximum) different directions. The 3D particle positions were determined from the ensemble average of the C(n,2) intersecting points of pairs of back-projected lines of sight from a particle found in any combination of two of the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, bias error and random error fell in ranges of ±0.02 pixels and 0.02-0.05 pixels, respectively, and the random error decreased as n increased. In the latter measurement, in which the measured value was compared to direct numerical simulation, the bias error was reduced and the random error likewise decreased with increasing n.
NASA Astrophysics Data System (ADS)
Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.
2018-06-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
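A minimal numpy sketch of the dictionary-based projection step described above, under stated assumptions: the dictionary stores (parameter vector, model-error realization) pairs accumulated during the inversion, neighbors are selected by Euclidean distance in parameter space, and the orthonormal basis comes from an SVD of the K neighboring error realizations. All names are illustrative; this is not the authors' code.

```python
import numpy as np

def remove_model_error(residual, theta, dictionary, K=20):
    """Subtract the model-error component of an MCMC residual.

    dictionary: list of (theta_i, err_i) pairs, where err_i is the stored
    difference between fine and approximate forward solves at theta_i.
    """
    thetas = np.array([t for t, _ in dictionary])
    errs = np.array([e for _, e in dictionary])
    # K nearest dictionary entries in parameter space.
    idx = np.argsort(np.linalg.norm(thetas - theta, axis=1))[:K]
    # Orthonormal basis for the span of the neighboring error realizations.
    U, _, _ = np.linalg.svd(errs[idx].T, full_matrices=False)
    # Remove the projection of the residual onto that basis.
    return residual - U @ (U.T @ residual)
```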
Ocean regional circulation model sensitivity to resolution of the lateral boundary conditions
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan
2017-04-01
Dynamical downscaling with nested regional oceanographic models is an effective approach for operational forecasting of coastal weather and for long-term climate projection of the ocean. Nesting procedures, however, introduce unwanted errors into dynamical downscaling because of differences in numerical grid size and updating step between the driving and nested models. Such unavoidable errors restrict the application of Ocean Regional Circulation Models (ORCMs) in both short-term forecasts and long-term projections. The current work identifies the effects of errors induced by computational limitations during nesting procedures on the downscaled results of the ORCMs. The errors are quantitatively evaluated, by source and by characteristic, with the Big-Brother Experiment (BBE). The BBE separates the identified errors from each other and quantitatively assesses the uncertainties, employing the same model for both the driving and the nested simulations. Here, we focus on the errors arising from the two main issues associated with nesting procedures: the difference in spatial grids and the temporal updating step. After running the diverse BBE cases separately, a Taylor diagram was adopted to analyze the results and to suggest an optimum in terms of grid size, updating period, and domain size. Key words: lateral boundary condition, error, ocean regional circulation model, Big-Brother Experiment. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Development of integrated estuarine management system" and a National Research Foundation of Korea (NRF) Grant (No. 2015R1A5A7037372) funded by MSIP of Korea. The authors thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.
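Taylor-diagram comparisons like the one described above reduce to three quantities per simulation: the standard deviations of the reference and simulated fields, their correlation, and the centred RMS difference. A minimal sketch of computing these statistics (illustrative, not the authors' code):

```python
import numpy as np

def taylor_stats(reference, simulated):
    """Statistics plotted on a Taylor diagram, comparing a nested
    (Little-Brother) simulation against the Big-Brother reference."""
    r = np.asarray(reference, float).ravel()
    s = np.asarray(simulated, float).ravel()
    corr = np.corrcoef(r, s)[0, 1]
    # Centred RMS difference: mean fields removed before differencing.
    crmsd = np.sqrt(np.mean(((s - s.mean()) - (r - r.mean())) ** 2))
    return {"sd_ref": r.std(), "sd_sim": s.std(),
            "corr": corr, "crmsd": crmsd}
```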
Human error in aviation operations
NASA Technical Reports Server (NTRS)
Billings, C. E.; Lauber, J. K.; Cooper, G. E.
1974-01-01
This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.
NASA Astrophysics Data System (ADS)
Doroszkiewicz, Joanna; Romanowicz, Renata
2016-04-01
Uncertainty in the results of a hydraulic model is not associated only with the limitations of that model and the shortcomings of data. A major factor in the uncertainty of flood risk assessment under changing climate conditions is the uncertainty of future climate scenarios (IPCC WG I, 2013). Future climate projections provided by global climate models are used to generate the future runoff required as input to the hydraulic models applied in the derivation of flood risk maps. The Biala Tarnowska catchment, situated in southern Poland, is used as a case study. Future discharges at the input to the hydraulic model are obtained using the HBV model and climate projections obtained from the EURO-CORDEX project. The study describes a cascade of uncertainty related to the different stages of the process of deriving flood risk maps under changing climate conditions. In this context it takes into account the uncertainty of future climate projections, the uncertainty of the flow-routing model, the propagation of that uncertainty through the hydraulic model, and finally, the uncertainty related to the derivation of flood risk maps. One of the aims of this study is an assessment of the relative impact of the different sources of uncertainty on the uncertainty of flood risk maps. Due to the complexity of the process, an assessment of the total uncertainty of maps of inundation probability can be very demanding in computer time. As a way forward we present an application of a hydraulic-model simulator based on a nonlinear transfer function model for chosen locations along the river reach. The transfer function model parameters are estimated from simulations of the hydraulic model at each model cross-section. The study shows that the application of the simulator substantially reduces the computer requirements related to the derivation of flood risk maps under future climatic conditions. Acknowledgements: This work was supported by the project CHIHE (Climate Change Impact on Hydrological Extremes), carried out in the Institute of Geophysics Polish Academy of Sciences, funded by Norway Grants (contract No. Pol-Nor/196243/80/2013). The hydro-meteorological observations were provided by the Institute of Meteorology and Water Management (IMGW), Poland.
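The transfer-function emulator idea above can be sketched in a few lines. The first-order structure with a power-law input nonlinearity below is one plausible minimal form, not the authors' actual model; the parameter names, initial guesses, and bounds are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_tf(params, u, delay=1):
    """First-order transfer function with a power-law input nonlinearity:
    y[t] = a*y[t-1] + b*u[t-delay]**c  (one plausible minimal emulator)."""
    a, b, c = params
    u = np.asarray(u, float)
    y = np.zeros(len(u))
    for t in range(delay, len(u)):
        y[t] = a * y[t - 1] + b * u[t - delay] ** c
    return y

def fit_emulator(u, y_hydraulic, delay=1):
    """Calibrate the emulator against water levels simulated by the full
    hydraulic model at one cross-section."""
    res = least_squares(
        lambda p: simulate_tf(p, u, delay) - y_hydraulic,
        x0=[0.8, 0.1, 1.0],                       # illustrative initial guess
        bounds=([0.0, 0.0, 0.1], [0.999, 10.0, 3.0]))
    return res.x
```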
1997 Toxic Hazards Research Annual Report
1998-05-01
Interim report covering October 1996 to September 1997. The recoverable table of contents includes a trichloroethylene (TCE) carcinogenicity project; a chapter on experimental error and interindividual variability (J.Z. Byczkowski and J.C. Lipscomb); and a Halon replacement toxicity project.
Preliminary Airworthiness Evaluation AH-1S Helicopter with OGEE Tip Shape Rotor Blades
1980-05-01
Approved for public release; distribution unlimited. The recoverable fragments note that corrections were made for compressibility effects between flights, and that airspeed and altitude were obtained from a boom-mounted pitot-static probe, with corrections applied for position error.
NASA Astrophysics Data System (ADS)
Liu, Yang; Song, Fazhi; Yang, Xiaofeng; Dong, Yue; Tan, Jiubin
2018-06-01
Due to their structural simplicity, linear motors are increasingly receiving attention for use in high-velocity and high-precision applications. The force ripple, a space-periodic disturbance, however, deteriorates the achievable dynamic performance. Conventional force ripple measurement approaches are time-consuming and place stringent requirements on the experimental conditions. In this paper, a novel learning identification algorithm is proposed for intelligent force ripple measurement and compensation. Existing identification schemes use the entire error signal to update the parameters of the force ripple model. However, the error induced by noise is not useful for force ripple identification, and can even deteriorate the identification process. In this paper only the most pertinent information in the error signal is utilized for force ripple identification. First, the effective error signals caused by the reference trajectory and the force ripple are extracted by projecting the overall error signal onto a subspace spanned by the physical model of the linear motor and the sinusoidal model of the force ripple. The time delay in the linear motor is compensated in the basis functions. Then, a data-driven approach is proposed to design the learning gain, balancing the trade-off between convergence speed and robustness against noise. Simulation and experimental results validate the proposed method and confirm its effectiveness and superiority.
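The projection step described above amounts to a least-squares fit of the tracking error onto position-periodic basis functions. A minimal sketch under stated assumptions (a known spatial pitch, a small set of harmonics, no time-delay compensation); names are illustrative:

```python
import numpy as np

def estimate_ripple(position, error, pitch, harmonics=(1, 2)):
    """Least-squares projection of the tracking error onto a sinusoidal
    basis in motor position, extracting the ripple-related component.

    pitch: spatial period of the ripple (e.g. the magnet pitch).
    Returns the basis weights and the reconstructed ripple estimate.
    """
    cols = []
    for k in harmonics:
        w = 2 * np.pi * k / pitch
        cols += [np.sin(w * position), np.cos(w * position)]
    Phi = np.column_stack(cols)                  # basis (regressor) matrix
    coeffs, *_ = np.linalg.lstsq(Phi, error, rcond=None)
    return coeffs, Phi @ coeffs
```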
An interacting boundary layer model for cascades
NASA Technical Reports Server (NTRS)
Davis, R. T.; Rothmayer, A. P.
1983-01-01
A laminar, incompressible interacting boundary layer model is developed for two-dimensional cascades. In the limit of large cascade spacing these equations reduce to the interacting boundary layer equations for a single body immersed in an infinite stream. A fully implicit numerical method is used to solve the governing equations, and is found to be at least as efficient as the same technique applied to the single body problem. Solutions are then presented for a cascade of finite flat plates and a cascade of finite sine-waves, with cusped leading and trailing edges.
NASA Astrophysics Data System (ADS)
Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro
2006-04-01
A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in a 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multi-phase pseudo-asynchronous operation with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. The chip operates at 33 MHz in an 8-LUT cascade configuration, dissipating 122 mW. Benchmark results show that it achieves performance comparable to field-programmable gate arrays (FPGAs).
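Functionally, a LUT cascade realizes a logic function as a chain of table lookups in which each cell's address combines the previous cell's "rail" outputs with a fresh slice of primary inputs. A minimal software model of this evaluation follows; the cell geometry and bit packing are illustrative assumptions, not the chip's actual organization.

```python
def eval_lut_cascade(luts, inputs_per_cell, x, rails0=0):
    """Evaluate a look-up table (LUT) cascade.

    luts: one lookup table (list indexed by address) per cell; each entry
          holds that cell's rail output value.
    x:    list of primary input bits, consumed inputs_per_cell at a time.
    """
    rails = rails0
    for i, table in enumerate(luts):
        chunk = x[i * inputs_per_cell:(i + 1) * inputs_per_cell]
        addr = rails
        for bit in chunk:            # append this cell's input bits
            addr = (addr << 1) | bit
        rails = table[addr]          # rail value passed to the next cell
    return rails

# Toy usage: two cells, 1 rail bit, 2 fresh inputs per cell
# (each table has 2**(1 + 2) = 8 entries).
luts = [[0, 1, 1, 0, 1, 0, 0, 1], [0, 0, 1, 1, 1, 1, 0, 0]]
print(eval_lut_cascade(luts, inputs_per_cell=2, x=[1, 0, 0, 1]))
```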
MOLECULAR DYNAMICS OF CASCADES OVERLAP IN TUNGSTEN WITH 20-KEV PRIMARY KNOCK-ON ATOMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setyawan, Wahyu; Nandipati, Giridhar; Roche, Kenneth J.
2015-04-16
Molecular dynamics simulations are performed to investigate the mutual influence of two subsequent cascades in tungsten. The influence is studied using 20-keV primary knock-on atoms, inducing one cascade after another separated by 15 ps, at a lattice temperature of 1025 K (i.e., 0.25 of the melting temperature of the interatomic potential). The center of mass of the vacancies at peak damage during a cascade is taken as the location of that cascade. The distance between this location and that of the next cascade is taken as the overlap parameter. Empirical fits describing the numbers of surviving vacancies and interstitial atoms as a function of overlap are presented.
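The overlap parameter defined above is simply the distance between the vacancy centers of mass of the two cascades, each taken at its own peak-damage time. A sketch (periodic-boundary wrapping, which a real analysis of an MD cell would need, is deliberately ignored here):

```python
import numpy as np

def overlap_distance(vac_xyz_1, vac_xyz_2):
    """Distance between the centers of mass of two cascades' vacancy
    populations; inputs are (N, 3) arrays of vacancy site coordinates."""
    com1 = np.mean(np.asarray(vac_xyz_1, float), axis=0)
    com2 = np.mean(np.asarray(vac_xyz_2, float), axis=0)
    return float(np.linalg.norm(com1 - com2))
```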
Geothermal segmentation of the Cascade Range in the USA
Guffanti, Marianne; Muffler, L.J.; Mariner, R.H.; Sherrod, D.R.; Smith, James G.; Blackwell, D.D.; Weaver, C.S.
1990-01-01
Characteristics of the crustal thermal regime of the Quaternary Cascades vary systematically along the range. Spatially congruent changes in volcanic vent distribution, volcanic extrusion rate, hydrothermal discharge rate, and regional conductive heat flow define 5 geothermal segments. These segments are, from north to south: (1) the Washington Cascades north of Mount Rainier, (2) the Cascades from Mount Rainier to Mount Hood, (3) the Oregon Cascades from south of Mount Hood to the California border, (4) northernmost California, including Mount Shasta and Medicine Lake volcano, and (5) the Lassen region of northern California. This segmentation indicates that geothermal resource potential is not uniform in the Cascade Range. Potential varies from high in parts of Oregon to low in Washington north of Mount Rainier.
Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F
2011-01-01
To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate the algorithm. The phantom was scanned on an ACUITY CBCT digital simulator over a full set of 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 degrees, respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 +/- 0.57) mm or less. The theta and phi angle errors were found to be (5.7 +/- 4.9) degrees and (6.0 +/- 4.1) degrees, respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 degrees compared to the measured seed projections. Each test trial converged in 10-20 iterations, with a computation time of 12-18 min/iteration on a 1 GHz processor. This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as centroids, from a small number of radiographic projections, in support of intraoperative planning and adaptive replanning. Unlike standard back-projection methods, gIFPM avoids the need to match corresponding seed images across the projections. The algorithm also successfully reconstructs overlapping, clustered, and highly migrated seeds in the implant. The accuracy of better than 1 mm and 6 degrees demonstrates that gIFPM has the potential to support 2D Task Group 43 calculations in clinical practice.
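The core of the matching step above is an optimization of seed poses against the measured projections. The sketch below shows only the shape of that objective: forward_project is a user-supplied renderer (the paper's projector is not reproduced here), the 5-parameter pose layout (x, y, z, theta, phi) is an assumption, and a derivative-free optimizer is used because the rendered cost is not smooth.

```python
import numpy as np
from scipy.optimize import minimize

def ifpm_cost(pose_vec, measured_projs, forward_project):
    """Sum-of-squared pixel differences between measured binary projections
    and projections computed from the current seed poses (the IFPM/gIFPM
    objective). pose_vec flattens an (n_seeds, 5) array: x, y, z, theta, phi."""
    poses = pose_vec.reshape(-1, 5)
    return sum(np.sum((forward_project(poses, view) - img) ** 2)
               for view, img in enumerate(measured_projs))

def match_poses(initial_poses, measured_projs, forward_project):
    """Refine seed poses by derivative-free minimization of the cost."""
    res = minimize(ifpm_cost, initial_poses.ravel(),
                   args=(measured_projs, forward_project), method="Powell")
    return res.x.reshape(-1, 5)
```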
North Atlantic observations sharpen meridional overturning projections
NASA Astrophysics Data System (ADS)
Olson, R.; An, S.-I.; Fan, Y.; Evans, J. P.; Caesar, L.
2018-06-01
Atlantic Meridional Overturning Circulation (AMOC) projections are uncertain due to both model errors and internal climate variability. An AMOC slowdown, projected by many climate models, is likely to have considerable effects on many aspects of global and North Atlantic climate. Previous studies making probabilistic AMOC projections have broken new ground. However, they do not drift-correct or cross-validate the projections, and do not fully account for internal variability. Furthermore, they consider a limited subset of models, and ignore the skill of models at representing the temporal North Atlantic dynamics. We improve on previous work by applying Bayesian Model Averaging to weight 13 Coupled Model Intercomparison Project phase 5 models by their skill at modeling the AMOC strength and its temporal dynamics, as approximated by the northern North-Atlantic temperature-based AMOC Index. We make drift-corrected projections accounting for structural model errors and for internal variability. Cross-validation experiments give approximately correct empirical coverage probabilities, which validates our method. Our results present more evidence that the AMOC has likely already started slowing down. While weighting considerably moderates and sharpens our projections, our results are at the low end of previously published estimates. We project mean AMOC changes between the periods 1960-1999 and 2060-2099 of -4.0 Sv and -6.8 Sv for the RCP4.5 and RCP8.5 emissions scenarios, respectively. The corresponding average 90% credible intervals for our weighted experiments are [-7.2, -1.2] and [-10.5, -3.7] Sv for the two scenarios.
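Mechanically, Bayesian Model Averaging of this kind reduces to likelihood-based weights applied to each model's projected change. A toy sketch, assuming per-model skill has already been summarized as a log-likelihood; the drift correction and internal-variability treatment the study performs are omitted here:

```python
import numpy as np

def bma_projection(skill_loglik, changes):
    """Weight each model's projected AMOC change by a skill-based
    likelihood and return the weighted mean, between-model spread,
    and the weights themselves."""
    skill_loglik = np.asarray(skill_loglik, float)
    changes = np.asarray(changes, float)
    w = np.exp(skill_loglik - skill_loglik.max())   # numerically stable
    w /= w.sum()
    mean = np.sum(w * changes)
    var = np.sum(w * (changes - mean) ** 2)         # between-model spread
    return mean, np.sqrt(var), w

# Toy usage with three hypothetical models.
mean, spread, w = bma_projection([-1.0, -0.2, -3.0], [-4.5, -6.1, -2.0])
print(mean, spread, w)
```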
Elimination of Emergency Department Medication Errors Due To Estimated Weights.
Greenwalt, Mary; Griffen, David; Wilkerson, Jim
2017-01-01
From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated patient weights as the root cause. The goal of this project was to reduce weight-based medication dosing errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of the estimated weights documented on admitted ED patients varied more than 10% from the actual admission weights subsequently recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a 2% significant-error rate). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated-weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.