Analysis of dead zone sources in a closed-loop fiber optic gyroscope.
Chong, Kyoung-Ho; Choi, Woo-Seok; Chong, Kil-To
2016-01-01
Analysis of the dead zone is one of the most intensively studied problems in closed-loop fiber optic gyroscopes. In a dead zone, a gyroscope cannot detect any rotation and produces a zero bias. In this study, an analysis of dead zone sources is performed in simulation and experiments. In general, the problem is mainly due to electrical cross coupling and phase modulation drift. Electrical cross coupling is caused by interference between the modulation voltage and the photodetector. The cross-coupled signal produces a spurious gyro bias and leads to a dead zone if it is larger than the input rate. Phase modulation drift, another dead zone source, is due to electrode contamination, the piezoelectric effect of the LiNbO3 substrate, or organic fouling. This modulation drift lasts for a short or long period of time, like a lead-lag filter response, and produces gyro bias error, noise spikes, or a dead zone. For a more detailed analysis, the cross-coupling effect and modulation phase drift are modeled as filters and simulated in both open-loop and closed-loop modes. The sources of the dead zone are analyzed in detail through the simulation and experimental results.
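The dead-zone mechanism the abstract describes can be illustrated with a minimal sketch (the bias threshold and the idealized response are assumptions for illustration, not values from the paper): a cross-coupled modulation signal adds a spurious bias to the detected rate, and any true input rate smaller than that bias is reported as zero.

```python
import math

def gyro_output(rate_dps, b_cc_dps=0.05):
    """Idealized closed-loop gyro response with a cross-coupling dead zone.

    b_cc_dps is a hypothetical spurious bias (deg/s) induced by electrical
    cross coupling between the modulation voltage and the photodetector.
    """
    if abs(rate_dps) <= b_cc_dps:       # true rate buried under spurious bias
        return 0.0                       # gyro reports zero rotation (dead zone)
    # outside the dead zone the output is shifted by the bias magnitude
    return rate_dps - math.copysign(b_cc_dps, rate_dps)

# rates inside +/-0.05 deg/s vanish; larger rates come out bias-shifted
outputs = [gyro_output(r) for r in (-0.2, -0.04, 0.0, 0.03, 0.2)]
```

The sketch also shows why a larger cross-coupled signal widens the dead zone: the threshold scales directly with the spurious bias.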
Bagherpoor, H M; Salmasi, Farzad R
2015-07-01
In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances, and actuator faults. Two new lemmas are proposed for the SISO and MIMO cases, under which the dead-zone modification rule is improved so that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but leaves a residual tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can ensure that all signals of the closed-loop system remain bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty, and output disturbances. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Liu, Yan-Jun; Gao, Ying; Tong, Shaocheng; Chen, C L Philip
2016-01-01
In this paper, an effective adaptive control approach is constructed to stabilize a class of nonlinear discrete-time systems that contain unknown functions, unknown dead-zone input, and unknown control direction. Unlike a linear dead zone, the dead zone considered in this paper is nonlinear. To overcome the noncausal problem, which would render the control scheme infeasible, the systems are transformed into an m-step-ahead predictor. Owing to the nonlinear dead zone, the transformed predictor still contains a nonaffine function. In addition, the gain function of the dead-zone input and the control direction are assumed unknown. These conditions complicate the controller design. Thus, the implicit function theorem is applied to deal with the nonaffine dead zone, the problem caused by the unknown control direction is resolved by applying the discrete Nussbaum gain, and neural networks are used to approximate the unknown functions. Based on Lyapunov theory, all signals of the resulting closed-loop system are proved to be semiglobally uniformly ultimately bounded. Moreover, the tracking error is proved to be regulated to a small neighborhood around zero. The feasibility of the proposed approach is demonstrated by a simulation example.
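For contrast with the nonlinear dead zone studied above, the standard piecewise-linear dead-zone input model can be sketched as follows (slope and break-points are illustrative assumptions): the actuator output is zero inside an interval and linear with a fixed slope outside it.

```python
def dead_zone(v, m=1.0, br=0.4, bl=-0.6):
    """Standard nonsymmetric linear dead-zone model.

    v  : commanded control input
    m  : slope outside the dead band (illustrative value)
    br : right break-point (> 0), bl : left break-point (< 0)
    """
    if v > br:
        return m * (v - br)   # right linear branch
    if v < bl:
        return m * (v - bl)   # left linear branch
    return 0.0                # output stuck at zero inside [bl, br]
```

In the nonlinear case the paper addresses, the two outer branches are unknown nonlinear functions rather than lines, which is what forces the implicit-function-theorem treatment.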
Robust adaptive precision motion control of hydraulic actuators with valve dead-zone compensation.
Deng, Wenxiang; Yao, Jianyong; Ma, Dawei
2017-09-01
This paper addresses the high-performance motion control of hydraulic actuators with parametric uncertainties, unmodeled disturbances, and unknown valve dead-zone. By constructing a smooth dead-zone inverse, a robust adaptive controller is proposed via the backstepping method, in which an adaptive law is synthesized to deal with parametric uncertainties and a continuous nonlinear robust control law is designed to suppress unmodeled disturbances. Since the unknown dead-zone parameters can be estimated by the adaptive law and the effect of the dead-zone can then be compensated effectively via the inverse operation, improved tracking performance can be expected. In addition, the disturbance upper bounds can also be updated online by adaptive laws, which increases the controller's operability in practice. The Lyapunov-based stability analysis shows that excellent asymptotic output tracking with zero steady-state error can be achieved by the developed controller even in the presence of unmodeled disturbances and unknown valve dead-zone. Finally, the proposed control strategy is experimentally tested on a servovalve-controlled hydraulic actuation system subjected to an artificial valve dead-zone. Comparative experimental results illustrate the effectiveness of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
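The core idea of dead-zone inverse compensation can be sketched as follows; this is a generic textbook-style construction under assumed parameters (symmetric slope, sigmoid blending), not the paper's specific inverse. Feeding the desired control effort through the inverse makes the cascade inverse-then-dead-zone approximately the identity.

```python
import math

def dead_zone(v, m=1.0, br=0.5, bl=-0.5):
    """Valve dead-zone: zero output for v in [bl, br], slope m outside."""
    if v >= br:
        return m * (v - br)
    if v <= bl:
        return m * (v - bl)
    return 0.0

def smooth_inverse(u, m=1.0, br=0.5, bl=-0.5, eps=50.0):
    """Smoothly blend the two exact inverse branches v = u/m + br (u > 0)
    and v = u/m + bl (u < 0) with a sigmoid switch; the blending keeps the
    inverse differentiable, which backstepping designs require."""
    phi = 1.0 / (1.0 + math.exp(-eps * u))
    return (u / m + br) * phi + (u / m + bl) * (1.0 - phi)

# away from u = 0 the compensation is nearly exact
u_actual_pos = dead_zone(smooth_inverse(0.8))    # ~ 0.8
u_actual_neg = dead_zone(smooth_inverse(-0.8))   # ~ -0.8
```

In the adaptive setting, the controller replaces m, br, and bl in the inverse with online estimates, which is why the dead-zone parameters need not be known in advance.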
NASA Technical Reports Server (NTRS)
Wyman, D.; Steinman, R. M.
1973-01-01
Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.
Liu, Yan-Jun; Tong, Shaocheng
2015-03-01
In the paper, an adaptive tracking control design is studied for a class of nonlinear discrete-time systems with dead-zone input. The considered systems are of the nonaffine pure-feedback form, and the dead-zone input appears nonlinearly in the systems. The contributions of the paper are: 1) it is the first to investigate the control problem for this class of discrete-time systems with dead-zone; 2) stabilizing such systems poses major difficulties, and to overcome them the systems are transformed into an n-step-ahead predictor, although a nonaffine function still remains; and 3) an adaptive compensative term is constructed to compensate for the parameters of the dead-zone. Neural networks are used to approximate the unknown functions in the transformed systems. Based on Lyapunov theory, it is proven that all signals in the closed-loop system are semi-globally uniformly ultimately bounded and the tracking error converges to a small neighborhood of zero. Two simulation examples are provided to verify the effectiveness of the control approach in the paper.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
NASA Astrophysics Data System (ADS)
Cui, Guozeng; Xu, Shengyuan; Ma, Qian; Li, Yongmin; Zhang, Zhengqiang
2018-05-01
In this paper, the problem of prescribed performance distributed output consensus for higher-order non-affine nonlinear multi-agent systems with unknown dead-zone input is investigated. Fuzzy logic systems are utilised to identify the unknown nonlinearities. By introducing prescribed performance, the transient and steady-state performance of the synchronisation errors is guaranteed. Based on Lyapunov stability theory and the dynamic surface control technique, a new distributed consensus algorithm for non-affine nonlinear multi-agent systems is proposed, which ensures cooperative uniform ultimate boundedness of all signals in the closed-loop systems and enables the output of each follower to synchronise with the leader within a predefined bounded error. Finally, simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
Fuss, Franz Konstantin; Düking, Peter; Weizman, Yehuda
2018-01-01
This paper provides evidence of a sweet spot on the boot/foot as well as a method for detecting it with a wearable pressure-sensitive device. This study confirmed the hypothesized existence of sweet and dead spots on a soccer boot or foot when kicking a ball. For a stationary curved kick, kicking the ball at the sweet spot maximized the probability of scoring a goal (58-86%), whereas having the impact point in the dead zone minimized the probability (11-22%). The sweet spot was found based on hypothesized favorable parameter ranges (center of pressure in the x/y-directions and/or peak impact force) and the dead zone based on hypothesized unfavorable parameter ranges. The sweet spot was rather concentrated, independent of which parameter combination was used (two- or three-parameter combination), whereas the dead zone, located 21 mm from the sweet spot, was more widespread.
Controlled sound field with a dual layer loudspeaker array
NASA Astrophysics Data System (ADS)
Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.
2014-08-01
Controlled sound interference has been extensively investigated using a prototype dual layer loudspeaker array comprising 16 loudspeakers. Results are presented for measures of array performance such as input signal power, directivity of sound radiation, and accuracy of sound reproduction resulting from the application of conventional control methods such as minimization of error in mean squared pressure, maximization of energy difference, and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters are also introduced. With these conventional approaches to producing acoustically bright and dark zones, all the control methods require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed which achieves better performance on the presented measures simultaneously by inserting a low-priority zone termed the "gray" zone. This involves the weighted minimization of mean-squared errors in the bright and dark zones together with the gray zone, in which the minimization error is given less importance. This results in the production of a directional bright zone in which the accuracy of sound reproduction is maintained with less required input power. The results of simulations and experiments are shown to be in excellent agreement.
Hua, Changchun; Zhang, Liuliu; Guan, Xinping
2017-01-01
This paper studies the problem of distributed output tracking consensus control for a class of high-order stochastic nonlinear multiagent systems with unknown nonlinear dead-zone under a directed graph topology. Adaptive neural networks are used to approximate the unknown nonlinear functions, and a new inequality is used to deal with the completely unknown dead-zone input. Then, we design the controllers based on the backstepping method and the dynamic surface control technique. It is strictly proved, based on Lyapunov stability theory, that the resulting closed-loop system is stable in probability in the sense of semiglobal uniform ultimate boundedness and that the tracking errors between the leader and the followers converge to a small residual set. Finally, two simulation examples are presented to show the effectiveness and the advantages of the proposed techniques.
Shephard, Roy J
2017-03-01
The Douglas bag technique is reviewed as one in a series of articles looking at historical insights into measurement of whole body metabolic rate. Consideration of all articles looking at the Douglas bag technique and chemical gas analysis has here focused on the growing appreciation of errors in measuring expired volumes and gas composition, and subjective reactions to airflow resistance and dead space. Multiple small sources of error have been identified and appropriate remedies proposed over a century of use of the methodology. Changes in the bag lining have limited gas diffusion, laboratories conducting gas analyses have undergone validation, and WHO guidelines on airflow resistance have minimized reactive effects. One remaining difficulty is contamination of the expirate by dead-space air, minimized by keeping the dead space <70 mL. Care must also be taken to ensure a steady state, and formal validation of the Douglas bag method still needs to be carried out. We may conclude that the Douglas bag method has helped to define key concepts in exercise physiology. Although now superseded in many applications, the errors in a meticulously completed measurement are sufficiently low to warrant retention of the Douglas bag as the gold standard when evaluating newer open-circuit methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connolly, R.; Dawson, C.; Jao, S.
2016-08-05
Three problems with the eIPMs were corrected during the 2015 summer shutdown. These involved ac coupling and 'negative profiles', a detector 'dead zone' created by biasing, and gain control on ramp. With respect to Run 16, problems dealt with included gain depletion on the horizontal MCP and rf pickup on profile signals; it was found that the MCP was severely damaged over part of the aperture. Various corrective measures were applied. Some results of these measures obtained during Run 16 are shown. At the end of Run 16 there was a three-day beam run to study polarized proton beams in the AGS. Attempts to minimize beam injection errors, which increase emittance, by using the eIPMs to measure the contribution of injection mismatch to the AGS output beam emittance are recounted.
Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang
2014-06-01
This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The concerned systems have certain characteristics, such as unknown nonlinear uncertainties, unknown dead-zones, and unmodeled dynamics, without direct measurements of the state variables. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all variables involved in the closed-loop system are input-to-state practically stable in probability and robust to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are provided to illustrate the effectiveness of the proposed approach.
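The representation used above, treating the dead-zone output as a linear term plus a bounded disturbance, can be sketched in a few lines (parameters are illustrative assumptions): writing D(v) = m*v + d(v), the residual d(v) is bounded by m*max(br, -bl) for every input v, which is what lets the controller treat it as a bounded disturbance.

```python
def dead_zone(v, m=1.0, br=0.5, bl=-0.5):
    """Linear dead-zone with slope m and break-points br > 0 > bl."""
    if v >= br:
        return m * (v - br)
    if v <= bl:
        return m * (v - bl)
    return 0.0

def residual(v, m=1.0, br=0.5, bl=-0.5):
    """d(v) = D(v) - m*v; satisfies |d(v)| <= m * max(br, -bl) for all v."""
    return dead_zone(v, m, br, bl) - m * v

bound = 1.0 * max(0.5, 0.5)   # = 0.5 for the illustrative parameters
```

Checking the residual over a grid of inputs confirms the bound numerically, which is the property the robust design exploits.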
Predictive and Neural Predictive Control of Uncertain Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
Accomplishments and future work are: (1) Stability analysis: the work completed includes characterization of the stability of receding-horizon-based MPC in the LQ setting. The current work-in-progress includes analyzing local as well as global stability of the closed-loop system under various nonlinearities, for example, actuator nonlinearities, sensor nonlinearities, and other plant nonlinearities. Actuator nonlinearities include three major types: saturation, dead-zone, and (0, ∞) sector nonlinearities. (2) Robustness analysis: it is shown that receding-horizon parameters such as the input and output horizon lengths have a direct effect on the robustness of the system. (3) Code development: a MATLAB code has been developed which can simulate various MPC formulations. The current effort is to generalize the code to handle all plant types and all MPC types. (4) Improved predictor: it is shown that MPC design using better predictors can minimize prediction errors. It is shown analytically and numerically that the Smith predictor can provide closed-loop stability under GPC operation for plants with dead times where the standard optimal predictor fails. (5) Neural network predictors: when a neural network is used as the predictor, it can be shown that the network predicts the plant output within some finite error bound under certain conditions. Our preliminary study shows that with a proper choice of update laws and network architectures such a bound can be obtained. However, much work needs to be done to obtain a similar result in the general case.
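The Smith-predictor idea referred to in item (4) can be illustrated with a minimal discrete-time sketch. All numbers below are made-up illustrative values, not from the report: a first-order plant y[k+1] = a*y[k] + b*u[k-d] with dead time d is controlled by a proportional law whose feedback signal is the delay-free internal model plus the plant/model mismatch.

```python
a, b, d = 0.9, 0.1, 5          # plant pole, gain, dead time (samples)
Kp, r = 2.0, 1.0               # proportional gain, setpoint
y = ym = 0.0                   # plant output, delay-free model output
u_hist = [0.0] * d             # control inputs still "in the pipe"
ym_hist = [0.0] * d            # delayed copies of the model output
for _ in range(100):
    # Smith predictor feedback: delay-free model plus plant/model mismatch,
    # so the controller effectively acts on the plant without its dead time
    fb = ym + (y - ym_hist[0])
    u = Kp * (r - fb)
    y = a * y + b * u_hist.pop(0)   # plant only sees the d-step-old input
    u_hist.append(u)
    ym_hist.pop(0)
    ym_hist.append(ym)
    ym = a * ym + b * u             # internal model without the delay
# settles at b*Kp*r/(1 - a + b*Kp) = 2/3 (the usual P-control offset)
```

With a perfect model the mismatch term vanishes, so the closed loop behaves as if the dead time were outside the loop; a plain proportional loop with this delay and gain would be far more oscillation-prone.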
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff among several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
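The back-calculation anti-windup mechanism mentioned above can be sketched as a single PI update step (gains, limits, and step size are illustrative assumptions, not the paper's tuned values): the difference between the saturated and unsaturated controller outputs is fed back into the integrator so it settles instead of winding up.

```python
def pi_antiwindup_step(e, integ, Kp=1.0, Ki=1.0, Kt=10.0,
                       u_min=-1.0, u_max=1.0, dt=0.01):
    """One Euler step of a PI controller with back-calculation anti-windup.

    e     : tracking error, integ : integrator state
    Kt    : back-calculation (tracking) gain that bleeds off windup
    """
    v = Kp * e + integ                       # unsaturated PI output
    u = min(max(v, u_min), u_max)            # actuator saturation
    integ += (Ki * e + Kt * (u - v)) * dt    # back-calculation correction
    return u, integ

# with a sustained error of 1.0 the actuator saturates at u_max = 1.0,
# and the integrator converges to a finite value instead of growing
integ = 0.0
for _ in range(200):
    u, integ = pi_antiwindup_step(1.0, integ)
```

With Kt = 0 the same loop would grow the integrator without bound; with back-calculation it settles at Ki*e/Kt = 0.1 for these illustrative gains, so recovery after the error changes sign is fast.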
TURBULENCE, TRANSPORT, AND WAVES IN OHMIC DEAD ZONES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gole, Daniel; Simon, Jacob B.; Armitage, Philip J.
We use local numerical simulations to study a vertically stratified accretion disk with a resistive mid-plane that damps magnetohydrodynamic (MHD) turbulence. This is an idealized model for the dead zones that may be present at some radii in protoplanetary and dwarf novae disks. We vary the relative thickness of the dead and active zones to quantify how forced fluid motions in the dead zone change. We find that the residual Reynolds stress near the mid-plane decreases with increasing dead zone thickness, becoming negligible in cases where the active to dead mass ratio is less than a few percent. This implies that purely Ohmic dead zones would be vulnerable to episodic accretion outbursts via the mechanism of Martin and Lubow. We show that even thick dead zones support a large amount of kinetic energy, but this energy is largely in fluid motions that are inefficient at angular momentum transport. Confirming results from Oishi and Mac Low, the perturbed velocity field in the dead zone is dominated by an oscillatory, vertically extended circulation pattern with a low frequency compared to the orbital frequency. This disturbance has the properties predicted for the lowest order r mode in a hydrodynamic disk. We suggest that in a global disk similar excitations would lead to propagating waves, whose properties would vary with the thickness of the dead zone and the nature of the perturbations (isothermal or adiabatic). Flows with similar amplitudes would buckle settled particle layers and could reduce the efficiency of pebble accretion.
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require prior knowledge of the control direction, and only three parameters need to be updated online for this MIMO system. It is proved that all signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set within the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
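A common Nussbaum-type function of the kind such controllers incorporate can be written down directly (this particular choice, N(k) = k^2 cos(k), is a standard textbook example and not necessarily the one used in the paper): as its argument grows it swings between arbitrarily large positive and negative values, which lets an adaptive law effectively search for the unknown sign of the control gain.

```python
import math

def nussbaum(k):
    """Standard Nussbaum-type function N(k) = k^2 * cos(k).

    It takes values of either sign with unbounded magnitude as k grows,
    the property exploited to cope with an unknown control direction.
    """
    return k * k * math.cos(k)

# sampled along the gain trajectory: alternately large positive and negative
positives = [nussbaum(2 * math.pi * n) for n in (1, 2, 3)]
negatives = [nussbaum(math.pi * (2 * n + 1)) for n in (1, 2, 3)]
```

In closed loop, the argument k is driven by an adaptation law (typically k_dot proportional to a squared error), so the gain sweeps through both signs until it locks onto a stabilizing one.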
Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei
2018-04-20
This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process, in order to obtain uniform optical power in the central optical zone of soft axially symmetric multifocal contact lenses (CLs). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens, can be minimized in the IM process by optimizing IM process parameters and then compensating with additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing four IM parameters at three levels each, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18 (2^1 x 3^4). Then, based on the Z-shrinkage error from the IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the closest shrinkage profile to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than in the other two cases. Moreover, actual IM experiments of SMs for casting soft multifocal CLs have been performed. The final wet CL product has been completed for both the original design and the new design. Optical performance results have verified the improvement of the compensated CL design. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.
Recipe for Hypoxia: Playing the Dead Zone Game
ERIC Educational Resources Information Center
Kastler, Jessica A.
2009-01-01
Dead zones--areas experiencing low levels of dissolved oxygen--are growing in shallow ocean waters around the world. Research has shown that dead zones form as a result of a specific type of pollution, called nutrient enrichment or eutrophication, and are found in almost every coastal zone where humans have large populations. Concepts related to…
Optimization of integrated impeller mixer via radiotracer experiments.
Othman, N; Kamarudin, S K; Takriff, M S; Rosli, M I; Engku Chik, E M F; Adnan, M A K
2014-01-01
Radiotracer experiments are carried out in order to determine the mean residence time (MRT) as well as the percentage of dead zone, V_dead (%), in an integrated mixer consisting of a Rushton turbine and a pitched blade turbine (PBT). Conventionally, optimization was performed by varying one factor at a time (OFAT) while holding the others constant, which leads to an enormous number of experiments. Thus, in this study, a 4-factor, 3-level Taguchi L9 orthogonal array was introduced to obtain an accurate optimization of mixing efficiency with a minimal number of experiments. This paper describes the optimal conditions of four process parameters, namely impeller speed, impeller clearance, type of impeller, and sampling time, in obtaining MRT and V_dead (%) using radiotracer experiments. The optimum conditions for the experiments were 100 rpm impeller speed, 50 mm impeller clearance, the Type A mixer, and 900 s sampling time.
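The standard residence-time-distribution calculation behind MRT and V_dead can be sketched as follows. The tracer curve and the nominal residence time below are synthetic illustrative values, not the study's data: MRT is the first moment of the tracer response C(t), and the dead-zone fraction follows from comparing MRT with the nominal residence time tau = V/Q.

```python
import numpy as np

t = np.arange(0.0, 901.0)                 # sampling times over 900 s, 1 s grid
theta = 100.0                             # shape parameter of synthetic curve
C = (t / theta**2) * np.exp(-t / theta)   # synthetic tracer response C(t)

# MRT = integral(t*C) / integral(C); on a uniform grid plain sums suffice
mrt = float((t * C).sum() / C.sum())

tau = 300.0                               # assumed nominal V/Q in seconds
v_dead_pct = max(0.0, 1.0 - mrt / tau) * 100.0   # dead-zone percentage
```

For this synthetic curve the MRT comes out near 2*theta (about 199 s after truncation at 900 s), so roughly a third of the nominal volume behaves as dead zone; with real tracer data the same two lines apply to the measured C(t).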
Investigating Aquatic Dead Zones
ERIC Educational Resources Information Center
Testa, Jeremy; Gurbisz, Cassie; Murray, Laura; Gray, William; Bosch, Jennifer; Burrell, Chris; Kemp, Michael
2010-01-01
This article features two engaging high school activities that include current scientific information, data, and authentic case studies. The activities address the physical, biological, and chemical processes that are associated with oxygen-depleted areas, or "dead zones," in aquatic systems. Students can explore these dead zones through both…
Water quality modeling in the dead end sections of drinking water distribution networks.
Abokifa, Ahmed A; Yang, Y Jeffrey; Lo, Cynthia S; Biswas, Pratim
2016-02-01
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals, allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead-end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogeneous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte Carlo ensembles was carried out to investigate the influence of spatial and temporal variations in flow demands on the simulation accuracy. A set of three correction factors was analytically derived to adjust residence time, dispersion rate, and wall demand to overcome the simulation error caused by the spatial aggregation approximation. The current model results show better agreement with field-measured concentrations of a conservative fluoride tracer and free chlorine disinfectant than the simulations of recent advection-dispersion-reaction models published in the literature. The accuracy of the simulated concentration profiles showed significant dependence on the spatial distribution of the flow demands compared to temporal variation. Copyright © 2015 Elsevier Ltd. All rights reserved.
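A stochastic demand generator of the kind described above can be sketched with the standard thinning (acceptance-rejection) method for a non-homogeneous Poisson process. The diurnal rate function and every numeric value here are illustrative assumptions, not the study's calibrated model: candidate arrivals are drawn from a homogeneous process at the peak rate, then accepted with probability rate(t)/rate_max.

```python
import math
import random

def demand_pulses(rate_fn, lam_max, horizon_s, seed=1):
    """Pulse arrival times of a non-homogeneous Poisson process via thinning.

    rate_fn   : instantaneous pulse rate (pulses/s), must satisfy
                rate_fn(t) <= lam_max for all t in [0, horizon_s]
    lam_max   : majorizing constant rate for the candidate process
    horizon_s : simulation horizon in seconds
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)            # next candidate arrival
        if t > horizon_s:
            return arrivals
        if rng.random() < rate_fn(t) / lam_max:  # thinning: keep w.p. rate/max
            arrivals.append(t)

# illustrative diurnal-looking rate over a 24 h horizon
rate = lambda t: 0.002 + 0.004 * math.sin(math.pi * t / 86400.0) ** 2
pulses = demand_pulses(rate, lam_max=0.006, horizon_s=86400.0)
# expected count = integral of rate over the day ~ 346 pulses on average
```

Each accepted arrival would then be dressed with a sampled pulse duration and intensity to form a residential withdrawal event; the thinning step is what imprints the time-varying demand pattern that hourly-averaged models smooth away.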
The evolution of a dead zone in a circumplanetary disk
NASA Astrophysics Data System (ADS)
Chen, Cheng; Martin, Rebecca; Zhu, Zhaohuan
2018-01-01
Studying the evolution of a circumplanetary disk can help us to understand the formation of Jupiter and the four Galilean satellites. With the grid-based hydrodynamic code FARGO3D, we simulate the evolution of a circumplanetary disk with a dead zone, a region of low turbulence. Tidal torques from the Sun constrain the size of the circumplanetary disk to about 0.4 R_H. The dead zone provides a cold environment for icy satellite formation. However, as material builds up there, the temperature of the dead zone may reach the critical temperature required for the magnetorotational instability to drive turbulence. Part of the dead zone then accretes onto the planet in an accretion outburst. We explore possible disk parameters that provide a suitable environment for satellite formation.
Climate change and dead zones.
Altieri, Andrew H; Gedan, Keryn B
2015-04-01
Estuaries and coastal seas provide valuable ecosystem services but are particularly vulnerable to the co-occurring threats of climate change and oxygen-depleted dead zones. We analyzed the severity of climate change predicted for existing dead zones, and found that 94% of dead zones are in regions that will experience at least a 2 °C temperature increase by the end of the century. We then reviewed how climate change will exacerbate hypoxic conditions through oceanographic, ecological, and physiological processes. We found evidence that suggests numerous climate variables including temperature, ocean acidification, sea-level rise, precipitation, wind, and storm patterns will affect dead zones, and that each of those factors has the potential to act through multiple pathways on both oxygen availability and ecological responses to hypoxia. Given the variety and strength of the mechanisms by which climate change exacerbates hypoxia, and the rates at which climate is changing, we posit that climate change variables are contributing to the dead zone epidemic by acting synergistically with one another and with recognized anthropogenic triggers of hypoxia including eutrophication. This suggests that a multidisciplinary, integrated approach that considers the full range of climate variables is needed to track and potentially reverse the spread of dead zones. © 2014 John Wiley & Sons Ltd.
Metagenomic insights into important microbes from the Dead Zone
NASA Astrophysics Data System (ADS)
Thrash, C.; Baker, B.; Seitz, K.; Temperton, B.; Gillies, L.; Rabalais, N. N.; Mason, O. U.
2015-12-01
Coastal regions of eutrophication-driven oxygen depletion are widespread and increasing in number. Also known as dead zones, these regions take their name from the deleterious effects of hypoxia (dissolved oxygen less than 2 mg/L) on shrimp, demersal fish, and other animal life. Dead zones result from nutrient enrichment of primary production, concomitant consumption by chemoorganotrophic aerobic microorganisms, and strong stratification that prevents ventilation of bottom water. One of the largest dead zones in the world occurs seasonally in the northern Gulf of Mexico (nGOM), where hypoxia can reach up to 22,000 square kilometers. While this dead zone shares many features with more well-known marine oxygen minimum zones, it is nevertheless understudied with regards to the microbial assemblages involved in biogeochemical cycling. We performed metagenomic and metatranscriptomic sequencing on six samples from the 2013 nGOM dead zone from both hypoxic and oxic bottom waters. Assembly and binning led to the recovery of over fifty partial to nearly complete metagenomes from key microbial taxa previously determined to be numerically abundant from 16S rRNA data, such as Thaumarcheaota, Marine Group II Euryarchaeota, SAR406, SAR324, Synechococcus spp., and Planctomycetes. These results provide information about the roles of these taxa in the nGOM dead zone, and opportunities for comparing this region of low oxygen to others around the globe.
Can dead zones create structures like a transition disk?
NASA Astrophysics Data System (ADS)
Pinilla, Paola; Flock, Mario; Ovelar, Maria de Juan; Birnstiel, Til
2016-12-01
Context. Regions of low ionisation where the activity of the magneto-rotational instability is suppressed, the so-called dead zones, have been suggested to explain gaps and asymmetries of transition disks. Dead zones are therefore a potential cause for the observational signatures of transition disks without requiring the presence of embedded planets. Aims: We investigate the gas and dust evolution simultaneously assuming simplified prescriptions for a dead zone and a magnetohydrodynamic (MHD) wind acting on the disk. We explore whether the resulting gas and dust distribution can create signatures similar to those observed in transition disks. Methods: We imposed a dead zone and/or an MHD wind in the radial evolution of gas and dust in protoplanetary disks. For the dust evolution, we included the transport, growth, and fragmentation of dust particles. To compare with observations, we produced synthetic images in scattered optical light and in thermal emission at mm wavelengths. Results: In all models with a dead zone, a bump in the gas surface density is produced that is able to efficiently trap large particles (≳ 1 mm) at the outer edge of the dead zone. The gas bump reaches an amplitude of a factor of 5, which can be enhanced by the presence of an MHD wind that removes mass from the inner disk. While our 1D simulations suggest that such a structure can be present only for 1 Myr, the structure may be maintained for a longer time when more realistic 2D/3D simulations are performed. In the synthetic images, gap-like low-emission regions are seen at scattered light and in thermal emission at mm wavelengths, as previously predicted in the case of planet-disk interaction. Conclusions: Main signatures of transition disks can be reproduced by assuming a dead zone in the disk, such as gap-like structure in scattered light and millimetre continuum emission, and a lower gas surface density within the dead zone. 
Previous studies showed that the Rossby wave instability can also develop at the edge of such dead zones, forming vortices and also creating asymmetries.
Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba
2016-05-01
In this paper, Model Predictive Control and Dead-beat predictive control strategies are proposed for the control of a PMSG based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables. It allows selecting the voltage vector to be applied that leads to a minimum error by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameters variations. The Dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero-steady state error. The optimum voltage vector is then applied through Space Vector Modulation (SVM) technique. The main advantages of the Dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of back-to-back converter in a wind turbine system based on PMSG. Simulation results (under Matlab-Simulink software environment tool) and experimental results (under developed prototyping platform) are presented in order to show the performances of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
ON HYDRODYNAMIC MOTIONS IN DEAD ZONES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oishi, Jeffrey S.; Mac Low, Mordecai-Mark, E-mail: jsoishi@astro.berkeley.ed, E-mail: mordecai@amnh.or
We investigate fluid motions near the midplane of vertically stratified accretion disks with highly resistive midplanes. In such disks, the magnetorotational instability drives turbulence in thin layers surrounding a resistive, stable dead zone. The turbulent layers in turn drive motions in the dead zone. We examine the properties of these motions using three-dimensional, stratified, local, shearing-box, non-ideal, magnetohydrodynamical simulations. Although the turbulence in the active zones provides a source of vorticity to the midplane, no evidence for coherent vortices is found in our simulations. It appears that this is because of strong vertical oscillations in the dead zone. By analyzing time series of azimuthally averaged flow quantities, we identify an axisymmetric wave mode particular to models with dead zones. This mode is reduced in amplitude, but not suppressed entirely, by changing the equation of state from isothermal to ideal. These waves are too low frequency to affect sedimentation of dust to the midplane, but may have significance for the gravitational stability of the resulting midplane dust layers.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
... rule eliminates the nine-month "dead zone" for filing an inter partes review petition challenging a..., section 1(d) of the AIA Technical Corrections Act and this final rule eliminate the nine-month "dead zone... the nine-month "dead zone" as to first-to-invent patents and reissue patents. Costs and Benefits...
Uncertainty evaluation of dead zone of diagnostic ultrasound equipment
NASA Astrophysics Data System (ADS)
Souza, R. M.; Alvarenga, A. V.; Braz, D. S.; Petrella, L. I.; Costa-Felix, R. P. B.
2016-07-01
This paper presents a model for evaluating measurement uncertainty of a feature used in the assessment of ultrasound images: dead zone. The dead zone was measured by two technicians of the INMETRO's Laboratory of Ultrasound using a phantom and following the standard IEC/TS 61390. The uncertainty model was proposed based on the Guide to the Expression of Uncertainty in Measurement. For the tested equipment, results indicate a dead zone of 1.01 mm, and based on the proposed model, the expanded uncertainty was 0.17 mm. The proposed uncertainty model contributes as a novel way for metrological evaluation of diagnostic imaging by ultrasound.
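The abstract's expanded uncertainty follows the standard GUM recipe: combine independent standard uncertainty components in quadrature, then multiply by a coverage factor. A minimal sketch (the example uncertainty budget below is hypothetical, not the components from the paper's model):

```python
import math

def expanded_uncertainty(components, k=2.0):
    """GUM-style combination: independent standard uncertainties are
    added in quadrature to give the combined standard uncertainty u_c,
    then expanded with coverage factor k (k = 2 ~ 95% coverage)."""
    u_c = math.sqrt(sum(u * u for u in components))
    return k * u_c

# Hypothetical budget for a dead-zone measurement (all values in mm):
# repeatability, phantom target position, depth-reading resolution
U = expanded_uncertainty([0.06, 0.05, 0.03], k=2.0)
```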
MAGNETIZED ACCRETION AND DEAD ZONES IN PROTOSTELLAR DISKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dzyurkevich, Natalia; Henning, Thomas; Turner, Neal J.
The edges of magnetically dead zones in protostellar disks have been proposed as locations where density bumps may arise, trapping planetesimals and helping form planets. Magneto-rotational turbulence in magnetically active zones provides both accretion of gas on the star and transport of mass to the dead zone. We investigate the location of the magnetically active regions in a protostellar disk around a solar-type star, varying the disk temperature, surface density profile, and dust-to-gas ratio. We also consider stellar masses between 0.4 and 2 M⊙, with corresponding adjustments in the disk mass and temperature. The dead zone's size and shape are found using the Elsasser number criterion with conductivities including the contributions from ions, electrons, and charged fractal dust aggregates. The charged species' abundances are found using the approach proposed by Okuzumi. The dead zone is in most cases defined by the ambipolar diffusion. In our maps, the dead zone takes a variety of shapes, including a fish tail pointing away from the star and islands located on and off the midplane. The corresponding accretion rates vary with radius, indicating locations where the surface density will increase over time, and others where it will decrease. We show that density bumps do not readily grow near the dead zone's outer edge, independently of the disk parameters and the dust properties. Instead, the accretion rate peaks at the radius where the gas-phase metals freeze out. This could lead to clearing a valley in the surface density, and to a trap for pebbles located just outside the metal freezeout line.
Analysis of archaeal communities in Gulf of Mexico dead zone sediments.
Sediments may contribute significantly to Louisiana continental shelf “dead zone” hypoxia but limited information hinders comparison of sediment biogeochemistry between normoxic and hypoxic seasons. Dead zone sediment cores collected during hypoxia (September 2006) had higher l...
Formation of Circumbinary Planets in a Dead Zone
NASA Astrophysics Data System (ADS)
Martin, Rebecca G.; Armitage, Philip J.; Alexander, Richard D.
2013-08-01
Circumbinary planets have been observed at orbital radii where binary perturbations may have significant effects on the gas disk structure, on planetesimal velocity dispersion, and on the coupling between turbulence and planetesimals. Here, we note that the impact of all of these effects on planet formation is qualitatively altered if the circumbinary disk structure is layered, with a non-turbulent midplane layer (dead zone) and strongly turbulent surface layers. For close binaries, we find that the dead zone typically extends from a radius close to the inner disk edge up to a radius of around 10-20 AU from the center of mass of the binary. The peak in the surface density occurs within the dead zone, far from the inner disk edge, close to the snow line, and may act as a trap for aerodynamically coupled solids. We suggest that circumbinary planet formation may be easier near this preferential location than for disks around single stars. However, dead zones around wide binaries are less likely, and hence planet formation may be more difficult there.
NASA Astrophysics Data System (ADS)
Lubberts, Ronald K.; Ben-Avraham, Zvi
2002-02-01
The Dead Sea Basin is a morphotectonic depression along the Dead Sea Transform. Its structure can be described as a deep rhomb-graben (pull-apart) flanked by two block-faulted marginal zones. We have studied the recent tectonic structure of the northwestern margin of the Dead Sea Basin in the area where the northern strike-slip master fault enters the basin and approaches the western marginal zone (Western Boundary Fault). For this purpose, we have analyzed 3.5-kHz seismic reflection profiles obtained from the northwestern corner of the Dead Sea. The seismic profiles give insight into the recent tectonic deformation of the northwestern margin of the Dead Sea Basin. A series of 11 seismic profiles are presented and described. Although several deformation features can be explained in terms of gravity tectonics, it is suggested that the occurrence of strike-slip in this part of the Dead Sea Basin is most likely. Seismic sections reveal a narrow zone of intensely deformed strata. This zone gradually merges into a zone marked by a newly discovered tectonic depression, the Qumran Basin. It is speculated that both structural zones originate from strike-slip along right-bending faults that splay-off from the Jordan Fault, the strike-slip master fault that delimits the active Dead Sea rhomb-graben on the west. Fault interaction between the strike-slip master fault and the normal faults bounding the transform valley seems the most plausible explanation for the origin of the right-bending splays. We suggest that the observed southward widening of the Dead Sea Basin possibly results from the successive formation of secondary right-bending splays to the north, as the active depocenter of the Dead Sea Basin migrates northward with time.
Control design based on dead-zone and leakage adaptive laws for artificial swarm mechanical systems
NASA Astrophysics Data System (ADS)
Zhao, Xiaomin; Chen, Y. H.; Zhao, Han
2017-05-01
We consider the control design of artificial swarm systems with emphasis on four characteristics. First, the agent is made of mechanical components. As a result, the motion of each agent is subject to physical laws that govern mechanical systems. Second, both nonlinearity and uncertainty of the mechanical system are taken into consideration. Third, the ideal agent kinematic performance is treated as a desired d'Alembert constraint. This in turn suggests a creative way of embedding the constraint into the control design. Fourth, two types of adaptive robust control schemes are designed. They both contain leakage and dead-zone. However, one design suggests a trade-off between the amount of leakage and the size of dead-zone, in exchange for a simplified dead-zone structure.
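The interplay of leakage and dead-zone that the abstract trades off can be sketched as a generic scalar adaptive update (an illustrative textbook-style law, not the authors' specific scheme; the gains and thresholds below are hypothetical):

```python
def adaptive_update(theta, err, phi, gamma, sigma, dz):
    """One step of a dead-zone + leakage adaptive law.
    Inside the dead zone (|err| <= dz) adaptation is frozen, which
    preserves stability under noise; outside, the gradient-like term
    gamma * err * phi drives adaptation while the leakage term
    -gamma * sigma * theta bounds parameter drift."""
    if abs(err) <= dz:
        return theta                      # adaptation ceased in dead zone
    return theta + gamma * (err * phi - sigma * theta)

# Example: update frozen for a small error, active for a large one
theta_small = adaptive_update(1.0, 0.05, 1.0, gamma=0.5, sigma=0.1, dz=0.1)
theta_large = adaptive_update(1.0, 1.00, 2.0, gamma=0.5, sigma=0.1, dz=0.1)
```

A larger dead zone gives more robustness to measurement noise but tolerates a larger residual error, which is exactly the trade-off against leakage that the abstract highlights.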
How big is the Ocean Dead Zone off the Coast of California?
NASA Astrophysics Data System (ADS)
Hofmann, A. F.; Peltzer, E. T.; Walz, P. M.; Brewer, P. G.
2010-12-01
The term “Ocean Dead Zone”, generally referring to a zone that is devoid of aerobic marine life of value to humans, is now widely used in the press and scientific literature, but it appears not to be universally defined. The global assessment and monitoring of ocean dead zones, however, is of high public concern due to the considerable economic value associated with impacted fisheries and with questions over the growth of these zones forced by climate change. We report on the existence of a zone at ~850 m depth off Santa Monica, California, where dissolved oxygen (DO) levels are 1 μmol/kg; an order of magnitude below any existing definition of an “Ocean Dead Zone”. ROV dives show the region to be visually devoid of all aerobic marine life. But how large is this dead zone, and how may its boundaries be defined? Without an accepted definition we cannot report this, nor can we compare it to other dead zones reported elsewhere in the world. “Dead zones” are now assessed solely by DO levels. A multitude of values in different units are used (Fig. 1), which are clearly not universally applicable. This seriously hampers an integrated global monitoring and management effort and frustrates the development of valid connections with climate change and assessment of the consequences. Furthermore, input of anthropogenic CO2 can also stress marine life. Recent work supported by classical data suggests that higher pCO2 influences the thermodynamic energy efficiency of oxic respiration (CH2O + O2 -> CO2 + H2O). The ratio pO2/pCO2, called the respiration index (RI), emerges as the critical variable, combining the impacts of warming on DO and rising CO2 levels within a single, well-defined quantity. We advocate that future monitoring efforts report pO2 and pCO2 concurrently, thus making it possible to classify, monitor and manage “dead zones” within a standard reference system that may include, as with, e.g., hurricanes, differing categories of intensity.
Fig. 1. A DO profile off Southern California with an overlay of commonly used DO thresholds (μmol O2/kg); “dead zones” may occur anywhere from 250 to 2,200 m depth. The widely reported “dead zone” off the Mississippi delta is defined by DO < 2 mg/l (~64 μmol/kg).
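The respiration index built from the pO2/pCO2 ratio is usually written in its published form as a base-10 logarithm. A minimal sketch (the partial-pressure values below are hypothetical illustrations, not measurements from the study):

```python
import math

def respiration_index(pO2_uatm, pCO2_uatm):
    """Respiration index RI = log10(pO2 / pCO2), combining oxygen
    depletion and CO2 accumulation in a single quantity; oxic
    respiration becomes thermodynamically marginal as RI approaches 0."""
    return math.log10(pO2_uatm / pCO2_uatm)

# Hypothetical examples: well-oxygenated surface water vs. a hypoxic core
surface = respiration_index(210000.0, 400.0)   # large positive RI
hypoxic = respiration_index(2000.0, 1500.0)    # RI near zero
```

Reporting pO2 and pCO2 concurrently, as the abstract advocates, is what makes such an index computable in the first place.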
Optimal sensor fusion for land vehicle navigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, J.D.
1990-10-01
Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straightforward method for autonomous path-following has been adopted and requires "digitizing" the desired road network by storing x,y coordinates every 2 m along the roads. The position location system used to define the locations consists of a radio beacon system which triangulates position off two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
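The "optimal" algorithm's variance-minimizing combination of two independent estimates is the classic inverse-variance weighting. A minimal one-dimensional sketch (the numbers are hypothetical; the paper's implementation details are not reproduced here):

```python
def fuse(x_beacon, var_beacon, x_dr, var_dr):
    """Minimum-variance fusion of two independent position estimates:
    each estimate is weighted by the inverse of its variance, so the
    fused variance is always smaller than either input variance."""
    w = var_dr / (var_beacon + var_dr)          # weight on the beacon fix
    x = w * x_beacon + (1.0 - w) * x_dr
    var = (var_beacon * var_dr) / (var_beacon + var_dr)
    return x, var

# Beacon fix 4x more precise than dead reckoning: fused estimate
# lands much closer to the beacon value
x, v = fuse(10.4, 1.0, 10.0, 4.0)
```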
Dead Zone Accretion Flows in Protostellar Disks
NASA Technical Reports Server (NTRS)
Turner, Neal; Sano, T.
2008-01-01
Planets form inside protostellar disks in a dead zone where the electrical resistivity of the gas is too high for magnetic forces to drive turbulence. We show that much of the dead zone nevertheless is active and flows toward the star while smooth, large-scale magnetic fields transfer the orbital angular momentum radially outward. Stellar X-ray and radionuclide ionization sustain a weak coupling of the dead zone gas to the magnetic fields, despite the rapid recombination of free charges on dust grains. Net radial magnetic fields are generated in the magnetorotational turbulence in the electrically conducting top and bottom surface layers of the disk, and reach the midplane by ohmic diffusion. A toroidal component to the fields is produced near the midplane by the orbital shear. The process is similar to the magnetization of the solar tachocline. The result is a laminar, magnetically driven accretion flow in the region where the planets form.
Adaptive NN controller design for a class of nonlinear MIMO discrete-time systems.
Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip
2015-05-01
An adaptive neural network tracking control is studied for a class of multiple-input multiple-output (MIMO) nonlinear systems. The studied systems are in discrete-time form and the discretized dead-zone inputs are considered. In addition, the studied MIMO systems are composed of N subsystems, and each subsystem contains unknown functions and external disturbance. The complicated structure of the discrete-time systems, the presence of the dead zone, and the noncausal problem in discrete time together make controlling such a class of systems difficult. To overcome the noncausal problem, by defining coordinate transformations, the studied systems are transformed into a special form, which is suitable for the backstepping design. The radial basis function NNs are utilized to approximate the unknown functions of the systems. The adaptation laws and the controllers are designed based on the transformed systems. By using the Lyapunov method, it is proved that the closed-loop system is stable in the sense that all the signals are semiglobally uniformly ultimately bounded and the tracking errors converge to a bounded compact set. The simulation examples and the comparisons with previous approaches are provided to illustrate the effectiveness of the proposed control algorithm.
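The dead-zone input nonlinearity that complicates such designs has a simple standard form: the actuator output is zero inside a break-point band and linear outside it. A minimal sketch (break points and slopes are hypothetical parameters):

```python
def dead_zone(v, bl, br, ml=1.0, mr=1.0):
    """Standard dead-zone input nonlinearity: zero output for
    bl <= v <= br, linear with slope ml (left) / mr (right) outside
    the band. The break points bl < 0 < br are typically unknown,
    which is what the adaptive design must handle."""
    if v > br:
        return mr * (v - br)
    if v < bl:
        return ml * (v - bl)
    return 0.0
```

Because small control commands inside the band produce no actuation at all, a controller that ignores the dead zone accumulates steady-state error; this is the motivation for compensating it explicitly.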
Cheng, Yuhua; Chen, Kai; Bai, Libing; Yang, Jing
2014-02-01
Precise control of the grid-connected current is a challenge in photovoltaic inverter research. Traditional Proportional-Integral (PI) control technology cannot eliminate steady-state error when tracking the sinusoidal signal from the grid, which results in a very high total harmonic distortion in the grid-connected current. A novel PI controller has been developed in this paper, in which the sinusoidal wave is discretized into an N-step input signal that is decided by the control frequency to eliminate the steady state error of the system. The effect of periodical error caused by the dead zone of the power switch and conduction voltage drop can be avoided; the current tracking accuracy and current harmonic content can also be improved. Based on the proposed PI controller, a 700 W photovoltaic grid-connected inverter is developed and validated. The improvement has been demonstrated through experimental results.
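The idea of discretizing the sinusoidal grid reference into an N-step signal tracked by a discrete PI update can be sketched as follows (a generic illustration of the structure, not the authors' controller; all gains are hypothetical):

```python
import math

def n_step_reference(amplitude, n_steps, k):
    """Sample k of a sinusoidal reference discretized into n_steps
    values per period, as set by the control frequency."""
    return amplitude * math.sin(2.0 * math.pi * k / n_steps)

def pi_step(error, integ, kp, ki, dt):
    """One discrete PI update; returns (control output, new integrator
    state). Tracking each constant step of the discretized sinusoid
    lets the integrator drive the per-step error to zero."""
    integ += error * dt
    return kp * error + ki * integ, integ

# Example: quarter-period sample of a 10 A reference, one PI update
r = n_step_reference(10.0, 200, 50)          # peak of the sinusoid
u, integ = pi_step(0.2, 0.0, kp=2.0, ki=0.5, dt=0.1)
```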
Closed-loop control of renal perfusion pressure in physiological experiments.
Campos-Delgado, D U; Bonilla, I; Rodríguez-Martínez, M; Sánchez-Briones, M E; Ruiz-Hernández, E
2013-07-01
This paper presents the design, experimental modeling, and control of a pump-driven renal perfusion pressure (RPP)-regulatory system to implement precise and relatively fast RPP regulation in rats. The mechatronic system is a simple, low-cost, and reliable device to automate the RPP regulation process based on flow-mediated occlusion. Hence, the regulated signal is the RPP measured in the left femoral artery of the rat, and the manipulated variable is the voltage applied to a dc motor that controls the occlusion of the aorta. The control system is implemented in a PC through the LabView software, and a data acquisition board NI USB-6210. A simple first-order linear system is proposed to approximate the dynamics in the experiment. The parameters of the model are chosen to minimize the error between the predicted and experimental output averaged from eight input/output datasets at different RPP operating conditions. A closed-loop servocontrol system based on a pole-placement PD controller plus dead-zone compensation was proposed for this purpose. First, the feedback structure was validated in simulation by considering parameter uncertainty, and constant and time-varying references. Several experimental tests were also conducted to validate in real time the closed-loop performance for stepwise and fast switching references, and the results show the effectiveness of the proposed automatic system to regulate the RPP in the rat, in a precise, accurate (mean error less than 2 mmHg) and relatively fast mode (10-15 s of response time).
A multidisciplinary glider survey of an open ocean dead-zone eddy
NASA Astrophysics Data System (ADS)
Karstensen, Johannes; Schütte, Florian; Pietri, Alice; Krahmann, Gerd; Fiedler, Björn; Löscher, Carolin; Grundle, Damian; Hauss, Helena; Körtzinger, Arne; Testor, Pierre; Viera, Nuno
2016-04-01
The physical (temperature, salinity) and biogeochemical (oxygen, nitrate, chlorophyll fluorescence, turbidity) structure of an anticyclonic modewater eddy, hosting an open ocean dead zone, is investigated using observational data sampled in high temporal and spatial resolution with autonomous gliders in March and April 2014. The core of the eddy is identified in the glider data as a volume of fresher (on isopycnals) water in the depth range from the mixed layer base (about 70 m) to about 200 m depth. The width is about 80 km. The core aligns well with the 40 μmol kg-1 oxygen contour. From two surveys about 1 month apart, changes in the minimal oxygen concentrations (below 5 μmol kg-1) are observed that indicate that small-scale processes are in operation. Several scales of coherent variability of physical and biogeochemical variables are identified - from a few meters to the mesoscale. One of the gliders carried an autonomous nitrate (N) sensor and the data is used to analyse the possible nitrogen pathways within the eddy. Although the highest N is accompanied by the lowest oxygen concentrations, the AOU:N ratio reveals a preferred oxygen cycling per N.
DEAD ZONE IN THE POLAR-CAP ACCELERATOR OF PULSARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Alexander Y.; Beloborodov, Andrei M.
We study plasma flows above pulsar polar caps using time-dependent simulations of plasma particles in the self-consistent electric field. The flow behavior is controlled by the dimensionless parameter α = j/(c ρ_GJ), where j is the electric current density and ρ_GJ is the Goldreich-Julian charge density. The region of the polar cap where 0 < α < 1 is a "dead zone": in this zone, particle acceleration is inefficient and pair creation is not expected even for young, rapidly rotating pulsars. Pulsars with polar caps near the rotation axis are predicted to have a hollow-cone structure of radio emission, as the dead zone occupies the central part of the polar cap. Our results apply to charge-separated flows of electrons (j < 0) or ions (j > 0). In the latter case, we consider the possibility of a mixed flow consisting of different ion species, and observe the development of two-stream instability. The dead zone at the polar cap is essential for the development of an outer gap near the null surface ρ_GJ = 0.
Water quality modeling in the dead end sections of drinking water (Supplement)
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogenous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variation
Water Quality Modeling in the Dead End Sections of Drinking ...
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of a distribution network. This study proposes a new approach for simulating disinfectant residuals in dead end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogenous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variations
Ackermann, Mark; Diels, Jean-Claude
2005-06-28
A scatterometer utilizes the dead zone resulting from lockup caused by scatter from a sample located in the optical path of a ring laser at a location where counter-rotating pulses cross. The frequency of one pulse relative to the other is varied across the lockup dead zone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyra, Wladimir; Mac Low, Mordecai-Mark, E-mail: wlyra@jpl.nasa.gov, E-mail: mordecai@amnh.org
It has been suggested that the transition between magnetorotationally active and dead zones in protoplanetary disks should be prone to the excitation of vortices via Rossby wave instability (RWI). However, the only numerical evidence for this has come from alpha disk models, where the magnetic field evolution is not followed, and the effect of turbulence is parameterized by Laplacian viscosity. We aim to establish the phenomenology of the flow in the transition in three-dimensional resistive-magnetohydrodynamical models. We model the transition by a sharp jump in resistivity, as expected in the inner dead zone boundary, using the PENCIL CODE to simulate the flow. We find that vortices are readily excited on the dead side of the transition. We measure the mass accretion rate, finding similar levels of Reynolds stress in the dead and active zones, at the α ≈ 10^-2 level. The vortex sits in a pressure maximum and does not migrate, surviving until the end of the simulation. A pressure maximum in the active zone also triggers the RWI. The magnetized vortex that results should be disrupted by parasitic magneto-elliptic instabilities, yet it subsists at high resolution. This suggests that either the parasitic modes are still numerically damped or that the RWI supplies vorticity faster than they can destroy it. We conclude that the resistive transition between the active and dead zones in the inner regions of protoplanetary disks, if sharp enough, can indeed excite vortices via RWI. Our results lend credence to previous works that relied on the alpha-disk approximation, and caution against the use of overly reduced azimuthal coverage in modeling this transition.
A Metagenomic Assembly-Based Approach to Decoding Taxa in the Dead Zone
NASA Astrophysics Data System (ADS)
Thrash, C.; Baker, B.; Seitz, K.; Gillies, L.; Temperton, B.; Rabalais, N. N.; Mason, O. U.
2016-02-01
Coastal regions of eutrophication-driven oxygen depletion are widespread and increasing in number. Also known as dead zones, these regions take their name from the deleterious effects of hypoxia (dissolved oxygen less than 2 mg/L) on shrimp, demersal fish, and other animal life. Dead zones result from nutrient enrichment of primary production, concomitant consumption by chemoorganotrophic aerobic microorganisms, and strong stratification that prevents ventilation of bottom water. One of the largest dead zones in the world occurs seasonally in the northern Gulf of Mexico (nGOM), where hypoxia can reach up to 22,000 square kilometers. To explore the underlying genomic variation and metabolic potential of microorganisms in hypoxia, we performed metagenomic and metatranscriptomic sequencing on six samples from the 2013 nGOM dead zone from both hypoxic and oxic bottom waters. Over 217 Mb of sequence was assembled into contigs of at least 3 kb with IDBA-UD, with 72 greater than 100 kb, and the largest 495 kb in length. Annotation by IMG recovered over 224 thousand genes in these contigs. Binning with tetra-ESOM and quality filtering based on relative coverage of sample-specific reads led to the recovery of 83 partial to near-complete (31 over 70%) high-quality genomes. These metagenomes represent key microbial taxa previously determined to be numerically abundant from 16S rRNA data, such as Thaumarchaeota, Marine Group II Euryarchaeota, SAR406, Synechococcus spp., Actinobacteria, and Planctomycetes. Ongoing work includes the recruitment of metatranscriptomic data to binned contigs for evaluation of relative gene expression, metabolic reconstruction, and comparative genomics with related organisms elsewhere in the global oceans. These data will provide us with detailed information regarding the metabolic potential and activity of many of the key players in the nGOM dead zone.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error function measurements, and this map forms the basis of the error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
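The core of such a postprocessor is a lookup into the measured error map followed by a compensating offset of the commanded coordinate. A reduced 1-D, per-axis sketch is given below; a real volumetric correction would interpolate a 3-D map, and all calibration numbers here are invented for illustration.

```python
from bisect import bisect_right

def make_corrector(cal_positions, cal_errors):
    """Build a per-axis corrector from laser-measured calibration data:
    cal_errors[i] is the (measured - commanded) positioning error at
    cal_positions[i]. The returned function subtracts the linearly
    interpolated systematic error from a commanded coordinate."""
    def corrected(x):
        i = bisect_right(cal_positions, x) - 1
        i = max(0, min(i, len(cal_positions) - 2))   # clamp to table range
        x0, x1 = cal_positions[i], cal_positions[i + 1]
        e0, e1 = cal_errors[i], cal_errors[i + 1]
        e = e0 + (e1 - e0) * (x - x0) / (x1 - x0)
        return x - e
    return corrected

# Illustrative calibration table (mm): interferometer-measured error at
# five points along one axis.
corr = make_corrector([0.0, 100.0, 200.0, 300.0, 400.0],
                      [0.000, 0.012, 0.020, 0.015, 0.005])
```

A postprocessor would apply such a corrector to every axis coordinate emitted in the CNC program.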
Postmortem procedures in the emergency department: using the recently dead to practise and teach.
Iserson, K V
1993-01-01
In generations past, it was common practice for doctors to learn lifesaving technical skills on patients who had recently died. But this practice has lately been criticised on religious, legal, and ethical grounds, and has fallen into disuse in many hospitals and emergency departments. This paper uses four questions to resolve whether doctors in emergency departments should practise and teach non-invasive and minimally invasive procedures on the newly dead: Is it ethically and legally permissible to practise and teach non-invasive and minimally invasive procedures on the newly dead emergency-department patient? What are the alternatives or possible consequences of not practising non-invasive and minimally invasive procedures on newly dead patients? Is consent from relatives required? Should doctors in emergency departments allow or even encourage this use of newly dead patients? PMID:8331644
Study on the hydraulic characteristics of side inlet/outlet by physical model test
NASA Astrophysics Data System (ADS)
Kong, Bo; Ye, Fei; Hu, Qiu-yue; Zhang, Jing
2017-04-01
The hydraulic characteristics at the side inlet/outlet of pumped storage plants were studied by physical model test. The gravity similarity rule was adopted, and head loss coefficients under pumped and power-generation conditions were given. The flow distribution under both conditions was studied. A scheme of changing the separation-pier section area proportion to minimize the velocity unevenness coefficient was put forward, and the cause of test error was investigated. Vortex evaluation and observation were carried out under the pumped condition at normal and dead reservoir water levels.
Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.
Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon
2017-01-01
In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
Cheatgrass Dead Zones in Northern Nevada
USDA-ARS?s Scientific Manuscript database
Reports of areas of cheatgrass die-off are becoming more frequent. In 2009, we investigated cheatgrass die-off in north-central Nevada. Dead zones ranged from several to hundreds of acres in size and were largely unvegetated and covered by cheatgrass litter with a distinct gray cast. We collected re...
Perils of using speed zone data to assess real-world compliance to speed limits.
Chevalier, Anna; Clarke, Elizabeth; Chevalier, Aran John; Brown, Julie; Coxon, Kristy; Ivers, Rebecca; Keay, Lisa
2017-11-17
Real-world driving studies, including those involving speeding alert devices and autonomous vehicles, can gauge an individual vehicle's speeding behavior by comparing measured speed with mapped speed zone data. However, there are complexities with developing and maintaining a database of mapped speed zones over a large geographic area that may lead to inaccuracies within the data set. When this approach is applied to large-scale real-world driving data or speeding alert device data to determine speeding behavior, these inaccuracies may result in invalid identification of speeding. We investigated speeding events based on service provider speed zone data. We compared service provider speed zone data (Speed Alert by Smart Car Technologies Pty Ltd., Ultimo, NSW, Australia) against a second set of speed zone data (Google Maps Application Programming Interface [API] mapped speed zones). We found a systematic error in the zones where speed limits of 50-60 km/h, typical of local roads, were allocated to high-speed motorways, which produced false speed limits in the speed zone database. The result was detection of false-positive high-range speeding. Through comparison of the service provider speed zone data against a second set of speed zone data, we were able to identify and eliminate data most affected by this systematic error, thereby establishing a data set of speeding events with a high level of sensitivity (a true positive rate of 92% or 6,412/6,960). Mapped speed zones can be a source of error in real-world driving when examining vehicle speed. We explored the types of inaccuracies found within speed zone data and recommend that a second set of speed zone data be utilized when investigating speeding behavior or developing mapped speed zone data to minimize inaccuracy in estimates of speeding.
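The paper's remedy, cross-checking a primary speed-zone database against an independent second set to eliminate systematically mis-zoned segments, can be sketched as a simple disagreement filter. The segment names, limits, and tolerance below are invented for illustration.

```python
def flag_suspect_zones(primary, secondary, tol_kmh=10):
    """Flag road segments whose mapped speed limits disagree between two
    independent speed-zone datasets by more than tol_kmh. Speeding events
    recorded on flagged segments would then be excluded as likely
    false positives. primary/secondary: dict segment_id -> limit (km/h)."""
    suspect = set()
    for seg, limit in primary.items():
        other = secondary.get(seg)
        if other is not None and abs(limit - other) > tol_kmh:
            suspect.add(seg)
    return suspect

# Illustrative example: a motorway segment wrongly mapped as 60 km/h in the
# primary dataset is caught by its disagreement with the second dataset.
primary = {"m1": 60, "local_a": 50, "arterial_b": 80}
secondary = {"m1": 110, "local_a": 50, "arterial_b": 70}
```

Events on segments returned by `flag_suspect_zones` would be dropped before computing compliance statistics.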
Wang, Lijie; Li, Hongyi; Zhou, Qi; Lu, Renquan
2017-09-01
This paper investigates the problem of observer-based adaptive fuzzy control for a category of nonstrict feedback systems subject to both unmodeled dynamics and fuzzy dead zone. Through constructing a fuzzy state observer and introducing a center-of-gravity method, unmeasurable states are estimated and the fuzzy dead zone is defuzzified, respectively. Fuzzy logic systems are employed to identify the unknown functions, and by combining the small-gain approach with the adaptive backstepping control technique, a novel adaptive fuzzy output feedback control strategy is developed, which ensures that all signals involved are semi-globally uniformly bounded. Simulation results are given to demonstrate the effectiveness of the presented method.
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
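The dead-zone quantizer at the heart of this source model maps small-magnitude inputs to zero and uses uniform bins outside the dead zone. A generic sketch follows; the midpoint reconstruction used here is a simplification (the paper's nearly uniform reconstruction quantizers place levels at an offset other than the bin midpoint, and H.264's integer transform adds rounding offsets not modeled here).

```python
import math

def dz_quantize(x, step, theta):
    """Dead-zone scalar quantizer: inputs with |x| <= theta map to index 0;
    outside the dead zone, bins are uniform with width `step`."""
    if abs(x) <= theta:
        return 0
    return int(math.copysign(math.ceil((abs(x) - theta) / step), x))

def dz_reconstruct(q, step, theta):
    """Midpoint reconstruction: level q maps back to the center of its bin.
    A nearly uniform reconstruction grid would instead use
    theta + (|q| - delta) * step for some offset delta != 0.5."""
    if q == 0:
        return 0.0
    return math.copysign(theta + (abs(q) - 0.5) * step, q)
```

Widening `theta` relative to `step` trades rate (more zeros) for distortion, which is exactly the knob the rate-quantization and distortion-quantization models characterize.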
Li, Dong-Juan; Li, Da-Peng
2017-09-14
In this paper, an adaptive output feedback control is framed for uncertain nonlinear discrete-time systems. The considered systems are a class of multi-input multioutput nonaffine nonlinear systems, and they are in the nested lower triangular form. Furthermore, the unknown dead-zone inputs are nonlinearly embedded into the systems. These properties of the systems will make it very difficult and challenging to construct a stable controller. By introducing a new diffeomorphism coordinate transformation, the controlled system is first transformed into a state-output model. By introducing a group of new variables, an input-output model is finally obtained. Based on the transformed model, the implicit function theorem is used to determine the existence of the ideal controllers and the approximators are employed to approximate the ideal controllers. By using the mean value theorem, the nonaffine functions of systems can become an affine structure but nonaffine terms still exist. The adaptation auxiliary terms are skillfully designed to cancel the effect of the dead-zone input. Based on the Lyapunov difference theorem, the boundedness of all the signals in the closed-loop system can be ensured and the tracking errors are kept in a bounded compact set. The effectiveness of the proposed technique is checked by a simulation study.
Bai, Mingsian R; Wen, Jheng-Ciang; Hsu, Hoshen; Hua, Yi-Hsin; Hsieh, Yu-Hao
2014-10-01
A sound reconstruction system is proposed for audio reproduction with extended sweet spot and reduced reflections. An equivalent source method (ESM)-based sound field synthesis (SFS) approach, with the aid of dark zone minimization is adopted in the study. Conventional SFS that is based on the free-field assumption suffers from synthesis error due to boundary reflections. To tackle the problem, the proposed system utilizes convex optimization in designing array filters with both reproduction performance and acoustic contrast taken into consideration. Control points are deployed in the dark zone to minimize the reflections from the walls. Two approaches are employed to constrain the pressure and velocity in the dark zone. Pressure matching error (PME) and acoustic contrast (AC) are used as performance measures in simulations and experiments for a rectangular loudspeaker array. Perceptual Evaluation of Audio Quality (PEAQ) is also used to assess the audio reproduction quality. The results show that the pressure-constrained (PC) method yields better acoustic contrast, but poorer reproduction performance than the pressure-velocity constrained (PVC) method. A subjective listening test also indicates that the PVC method is the preferred method in a live room.
Integration, Authenticity, and Relevancy in College Science through Engineering Design
ERIC Educational Resources Information Center
Turner, Ken L., Jr.; Hoffman, Adam R.
2018-01-01
Engineering design is an ideal perspective for engaging students in college science classes. An engineering design problem-solving framework was used to create a general chemistry lab activity focused on an important environmental issue--dead zones. Dead zones impact over 400 locations around the world and are a result of nutrient pollution, one…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okuzumi, Satoshi; Hirose, Shigenobu, E-mail: okuzumi@nagoya-u.jp
Turbulence driven by magnetorotational instability (MRI) affects planetesimal formation by inducing diffusion and collisional fragmentation of dust particles. We examine conditions preferred for planetesimal formation in MRI-inactive 'dead zones' using an analytic dead-zone model based on our recent resistive MHD simulations. We argue that successful planetesimal formation requires not only a sufficiently large dead zone (which can be produced by tiny dust grains) but also a sufficiently small net vertical magnetic flux (NVF). Although often ignored, the latter condition is indeed important since the NVF strength determines the saturation level of turbulence in MRI-active layers. We show that direct collisionalmore » formation of icy planetesimal across the fragmentation barrier is possible when the NVF strength is lower than 10 mG (for the minimum-mass solar nebula model). Formation of rocky planetesimals via the secular gravitational instability is also possible within a similar range of the NVF strength. Our results indicate that the fate of planet formation largely depends on how the NVF is radially transported in the initial disk formation and subsequent disk accretion processes.« less
NASA Astrophysics Data System (ADS)
Gandhi, Neeraj; Kim, Sungmin; Kazanzides, Peter; Lediju Bell, Muyinatu A.
2017-03-01
Minimally invasive surgery carries the deadly risk of rupturing major blood vessels, such as the internal carotid arteries hidden by bone in endonasal transsphenoidal surgery. We propose a novel approach to surgical guidance that relies on photoacoustic-based vessel separation measurements to assess the extent of safety zones during these type of surgical procedures. This approach can be implemented with or without a robot or navigation system. To determine the accuracy of this approach, a custom phantom was designed and manufactured for modular placement of two 3.18-mm diameter vessel-mimicking targets separated by 10-20 mm. Photoacoustic images were acquired as the optical fiber was swept across the vessels in the absence and presence of teleoperation with a research da Vinci Surgical System. When the da Vinci was used, vessel positions were recorded based on the fiber position (calculated from the robot kinematics) that corresponded to an observed photoacoustic signal. In all cases, compounded photoacoustic data from a single sweep displayed the four vessel boundaries in one image. Amplitude- and coherence-based photoacoustic images were used to estimate vessel separations, resulting in 0.52-0.56 mm mean absolute errors, 0.66-0.71 mm root mean square errors, and 65-68% more accuracy compared to fiber position measurements obtained through the da Vinci robot kinematics. Results indicate that with further development, photoacoustic image-based measurements of anatomical landmarks could be a viable method for real-time path planning in multiple interventional photoacoustic applications.
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. 
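The model's error-rate calculation can be illustrated with a toy version: true zone diameters follow a normal mixture, methodological variation adds independent normal noise, and the categorization error at a candidate breakpoint is the mixture-weighted probability of an observed diameter landing on the wrong side. All mixture parameters below are invented, not fitted values from the study.

```python
import math

def norm_cdf(x, mu, sigma):
    """Standard normal CDF evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def categorization_error(components, sigma_method, cbp_s):
    """Expected methodological categorization error at a breakpoint:
    observed diameter = true diameter + N(0, sigma_method).
    components: list of (weight, mu, sigma, is_susceptible) tuples for the
    normal mixture of true diameters; diameters >= cbp_s are called
    susceptible."""
    err = 0.0
    for w, mu, sigma, susceptible in components:
        s_obs = math.hypot(sigma, sigma_method)   # total observed spread
        p_called_resistant = norm_cdf(cbp_s, mu, s_obs)
        err += w * (p_called_resistant if susceptible else
                    1.0 - p_called_resistant)
    return err

# Illustrative two-component mixture (diameters in mm); numbers are made up.
mix = [(0.7, 26.0, 2.0, True), (0.3, 10.0, 2.0, False)]
```

Scanning `cbp_s` over candidate diameters and locating where the error exceeds a threshold such as 0.1% is the kind of optimization the ZMU procedure formalizes.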
Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Wensveen, Paul J; Thomas, Len; Miller, Patrick J O
2015-01-01
Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. 
By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
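The essence of the fusion above, dead-reckoned displacements whose uncertainty grows until a position fix resets it, can be sketched with a 1-D Kalman filter. This is a deliberate simplification of the paper's 3-D Bayesian state-space model, and every numeric value in the example is hypothetical.

```python
def fuse_track(dr_steps, gps_fixes, q_dr, r_gps, x0):
    """1-D Kalman-filter sketch of dead-reckoning/GPS track fusion.
    Each step the position is propagated by a dead-reckoned displacement
    (variance q_dr accumulates per step, mimicking drift); when a fix is
    available it is used as a direct position measurement with variance
    r_gps. dr_steps: displacement per step; gps_fixes: step index -> fix."""
    x, p = x0, 0.0
    track, var = [], []
    for k, dx in enumerate(dr_steps):
        x, p = x + dx, p + q_dr          # predict: dead-reckoned move
        z = gps_fixes.get(k)
        if z is not None:                # update: GPS-style position fix
            gain = p / (p + r_gps)
            x += gain * (z - x)
            p *= (1.0 - gain)
        track.append(x)
        var.append(p)
    return track, var

# Biased dead-reckoning (1.1 per step vs. true 1.0) corrected by two fixes.
track, var = fuse_track([1.1] * 10, {4: 5.0, 9: 10.0},
                        q_dr=0.01, r_gps=0.04, x0=0.0)
```

The returned `var` reproduces the qualitative behavior described in the abstract: uncertainty grows between fixes and collapses at each one.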
"Bagels Anyone?": Pedagogy of the Confused and Hungry in the Dead Zone.
ERIC Educational Resources Information Center
Katz, Julie
One instructor's "dead zone" (her windowless classroom in the depths of the Humanities building) was the place where little exchange between teacher and students took place. When one day she overheard the students talking about how little money they had left on their meal cards, she took a few dozen bagels to that afternoon's writing…
Explosive Model Tarantula V1/JWL++ Calibration of LX-17: #2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souers, P C; Vitello, P
2009-05-01
Tarantula V1 is a kinetic package for reactive flow codes that seeks to describe initiation, failure, dead zones and detonation simultaneously. The most important parameter is P1, the pressure between the initiation and failure regions. Both dead zone formation and failure can be largely controlled with this knob. However, V1 produces failure at low settings and dead zones at higher settings, so that it cannot fulfill its purpose in the current format. To this end, V2 is under test. The derivation of the initiation threshold P0 is discussed. The derivation of the initiation pressure-tau curve as an output of Tarantula shows that the initiation package is sound. A desensitization package is also considered.
The 20th-century development and expansion of Louisiana shelf hypoxia, Gulf of Mexico
Osterman, L.E.; Poore, R.Z.; Swarzenski, P.W.; Senn, D.B.; DiMarco, Steven F.
2009-01-01
Since systematic measurements of Louisiana continental-shelf waters were initiated in 1985, hypoxia (oxygen content <2 mg L-1) has increased considerably in an area termed the dead zone. Monitoring and modeling studies have concluded that the expansion of the Louisiana shelf dead zone is related to increased anthropogenically derived nutrient delivery from the Mississippi River drainage basin, physical and hydrographical changes of the Louisiana Shelf, and possibly coastal erosion of wetlands in southern Louisiana. In order to track the development and expansion of seasonal low-oxygen conditions on the Louisiana shelf prior to 1985, we used a specific low-oxygen foraminiferal faunal proxy, the PEB index, which has been shown statistically to represent the modern Louisiana hypoxia zone. We constructed a network of 13 PEB records with excess 210Pb-derived chronologies to establish the development of low-oxygen and hypoxic conditions over a large portion of the modern dead zone for the last 100 years. The PEB index record indicates that areas of low-oxygen bottom water began to appear in the early 1910s in isolated hotspots near the Mississippi Delta and rapidly expanded across the entire Louisiana shelf beginning in the 1950s. Since ~1950, the percentage of PEB species has steadily increased over a large portion of the modern dead zone. By 1960, subsurface low-oxygen conditions were occurring seasonally over a large part of the geographic area now known as the dead zone. The long-term trends in the PEB index are consistent with the 20th-century observational and proxy data for low oxygen and hypoxia. © 2009 US Government.
Effect of Detector Dead Time on the Performance of Optical Direct-Detection Communication Links
NASA Technical Reports Server (NTRS)
Chen, C.-C.
1988-01-01
Avalanche photodiodes (APDs) operating in the Geiger mode can provide a significantly improved single-photon detection sensitivity over conventional photodiodes. However, the quenching circuit required to remove the excess charge carriers after each photon event can introduce an undesirable dead time into the detection process. The effect of this detector dead time on the performance of a binary pulse-position-modulated (PPM) channel is studied by analyzing the error probability. It is shown that, when background noise is negligible, the performance of the detector with dead time is similar to that of a quantum-limited receiver. For systems with increasing background intensities, the error rate of the receiver starts to degrade rapidly with increasing dead time. The power penalty due to detector dead time is also evaluated and shown to depend critically on background intensity as well as dead time. Given the expected background strength in an optical channel, therefore, a constraint must be placed on the bandwidth of the receiver to limit the amount of power penalty due to detector dead time.
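The paper's analysis is analytic, but the count loss that drives it is easy to see in a Monte-Carlo sketch: a non-paralyzable detector discards any photon arriving within the dead time of the previously registered event, so the registered rate saturates near 1/tau for bright sources. The rates and dead time below are arbitrary illustrative values.

```python
import random

def count_with_dead_time(arrivals, tau):
    """Count events registered by a detector with non-paralyzable dead
    time tau: an arrival within tau of the previous *registered* event
    is lost."""
    count, last = 0, None
    for t in arrivals:
        if last is None or t - last >= tau:
            count += 1
            last = t
    return count

def poisson_arrivals(rate, t_end, rng):
    """Homogeneous Poisson arrival times on [0, t_end)."""
    t, out = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= t_end:
            return out
        out.append(t)

rng = random.Random(1)
arr = poisson_arrivals(rate=100.0, t_end=10.0, rng=rng)   # ~1000 photons
n_ideal = len(arr)
n_dead = count_with_dead_time(arr, tau=0.05)   # dead time ~ 5x mean spacing
```

For a non-paralyzable detector the expected registered rate is lambda/(1 + lambda*tau), here about 16.7 per second instead of 100, which is the kind of loss that erodes PPM slot statistics as background grows.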
NASA Astrophysics Data System (ADS)
Hassan, Mahmoud A.
2004-02-01
Digital elevation models (DEMs) are important tools in the planning, design and maintenance of mobile communication networks. This research paper proposes a method for generating high accuracy DEMs based on SPOT satellite 1A stereo pair images, ground control points (GCP) and Erdas OrthoBASE Pro image processing software. DEMs with 0.2911 m mean error were achieved for the hilly and heavily populated city of Amman. The generated DEM was used to design a mobile communication network, resulting in a minimum number of radio base transceiver stations, a maximum number of covered regions and less than 2% of dead zones.
THE QUANTITY AND TURNOVER OF DEAD WOOD IN PERMANENT FOREST PLOTS IN SIX LIFE ZONES OF VENEZUELA
Dead wood can be an important component of the carbon pool in many forests, but few measurements have been made of this pool in tropical forests. To fill this gap, we determined the quantity of dead wood (downed and standing dead) in 25 long-term (up to 30 yr) permanent forest pl...
Attempt to induce lightwood in eastern hemlock by treating with paraquat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiatgrajai, P.; Rowe, J.W.; Conner, A.H.
1976-01-01
Treatment of eastern hemlock (Tsuga canadensis (L.) Carr.) with the herbicide paraquat did not induce formation of lightwood. However, a dead zone of phloem extended above the paraquat-treatment site. Traumatic resin ducts were observed in the wood immediately adjacent to this dead zone. Although some of these ducts were filled with resin, most were empty. A very small amount of resin soaking was observed at the edges of the dead phloem. Although there was an increase in the extractives content of the wood behind the dead phloem, this did not reach levels with commercial potential. The increase in turpentine level was mostly due to slightly volatile waxes. The nonvolatile ether extractives were predominately acids other than the fatty and resin acids typical of pines. Electron microscopy revealed fungal hyphae in the wood behind the dead phloem, and several species of fungi were cultured from the wood.
Statistical Mechanics and Dynamics of the Outer Solar System. I. The Jupiter/Saturn Zone
NASA Technical Reports Server (NTRS)
Grazier, K. R.; Newman, W. I.; Kaula, W. M.; Hyman, J. M.
1996-01-01
We report on numerical simulations designed to understand how the solar system evolved through a winnowing of planetesimals accreted from the early solar nebula. This sorting process is driven by the energy and angular momentum and continues to the present day. We reconsider the existence and importance of stable niches in the Jupiter/Saturn Zone using greatly improved numerical techniques based on high-order optimized multi-step integration schemes coupled to roundoff error minimizing methods.
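The abstract does not specify which roundoff-error-minimizing methods were used; compensated (Kahan) summation is one standard such device in long integrations, sketched here purely as an illustration rather than as the authors' actual scheme.

```python
def kahan_sum(values):
    """Kahan compensated summation: carry a running compensation term so
    accumulated roundoff stays near one ulp instead of growing with the
    number of additions, the kind of device long orbital integrations use
    to keep energy and angular-momentum bookkeeping from drifting."""
    s, c = 0.0, 0.0
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y   # low-order bits lost in the addition s + y
        s = t
    return s

# Summing a million copies of 0.1: naive accumulation drifts measurably,
# compensated summation tracks the exact sum of the rounded inputs.
vals = [0.1] * 10**6
naive, compensated = sum(vals), kahan_sum(vals)
```

In a multi-step integrator the same trick is applied to the state-update accumulators at every step.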
Effect of baffle spacing and baffle cut on thermal-hydraulic characteristics of the fluid flow
NASA Astrophysics Data System (ADS)
Chernyateva, R. R.
2018-01-01
This article presents the results of investigations of the influence of baffle spacing and baffle cut on the size of the dead zone formed near the cross baffles, using numerical simulation methods. The structure of an additional baffle plate that can be used to reduce the dead zone and give a smoother flow distribution over the cross section is shown.
Aleiferis, Pavlos; Charalambides, Alexandros; Hardalupas, Yannis; Soulopoulos, Nikolaos; Taylor, A M K P; Urata, Yunichi
2015-05-10
Schlieren [Schlieren and Shadowgraphy Techniques (McGraw-Hill, 2001); Optics of Flames (Butterworths, 1963)] is a non-intrusive technique that can be used to detect density variations in a medium, and thus, under constant pressure and mixture concentration conditions, measure whole-field temperature distributions. The objective of the current work was to design a schlieren system to measure line-of-sight (LOS)-averaged temperature distribution with the final aim to determine the temperature distribution inside the cylinder of internal combustion (IC) engines. In a preliminary step, we assess theoretically the errors arising from the data reduction used to determine temperature from a schlieren measurement and find that the total error, random and systematic, is less than 3% for typical conditions encountered in the present experiments. A Z-type, curved-mirror schlieren system was used to measure the temperature distribution from a hot air jet in an open air environment in order to evaluate the method. Using the Abel transform, the radial distribution of the temperature was reconstructed from the LOS measurements. There was good agreement in the peak temperature between the reconstructed schlieren and thermocouple measurements. Experiments were then conducted in a four-stroke, single-cylinder, optical spark ignition engine with a four-valve, pentroof-type cylinder head to measure the temperature distribution of the reaction zone of an iso-octane-air mixture. The engine optical windows were designed to produce parallel rays and allow accurate application of the technique. The feasibility of the method to measure temperature distributions in IC engines was evaluated with simulations of the deflection angle combined with equilibrium chemistry calculations that estimated the temperature of the reaction zone at the position of maximum ray deflection as recorded in a schlieren image. 
Further simulations showed that the effects of exhaust gas recirculation and air-to-fuel ratio on the schlieren images were minimal under engine conditions compared to the temperature effect. At 20 crank angle degrees before top dead center (i.e., 20 crank angle degrees after ignition timing), the measured temperature of the flame front was in agreement with the simulations (730-1320 K depending on the shape of the flame front). Furthermore, the schlieren images identified the presence of hot gases ahead of the reaction zone due to diffusion and showed that there were no hot spots in the unburned mixture.
Near bottom temperature anomalies in the Dead Sea
NASA Astrophysics Data System (ADS)
Ben-Avraham, Zvi; Ballard, Robert D.
1984-12-01
A bottom photographic and temperature study was carried out in the Dead Sea using a miniature version of the unmanned camera system ANGUS (mini-ANGUS). Due to the low transparency of the Dead Sea water, the bottom photographs provide very poor results. Only in a very few locations was the floor visible and in those cases it was found to be a white undulating sedimentary surface. The bottom temperature measurements, which were made continuously along the ship track, indicate the presence of a large zone of temperature anomalies. This zone is located in the deep part of the north basin at a water depth of over 330 m. The anomalies occur above a portion of an east-west fault which cuts through the Dead Sea suggesting the presence of hydrothermal activity.
Bergfeld, D.; Goff, F.; Janik, C.J.
2001-01-01
In the latter part of the 1990s, a large die-off of desert shrubs occurred over an approximately 1 km2 area in the northwestern section of the Dixie Valley (DV) geothermal field. This paper reports results from accumulation-chamber measurements of soil CO2 flux from locations in the dead zone and stable isotope and chemical data on fluids from fumaroles, shallow wells, and geothermal production wells within and adjacent to the dead zone. A cumulative probability plot shows three types of flux sites within the dead zone: locations with a normal background CO2 flux (7 g m-2 day-1); moderate flux sites displaying "excess" geothermal flux; and high flux sites near young vents and fumaroles. A maximum CO2 flux of 570 g m-2 day-1 was measured at a location adjacent to a fumarole. Using statistical methods appropriate for lognormally distributed populations of data, estimates of the geothermal flux range from 7.5 t day-1 from a 0.14-km2 site near the Stillwater Fault to 0.1 t day-1 from a 0.01-km2 location of steaming ground (SG) on the valley floor. Anomalous CO2 flux is positively correlated with shallow temperature anomalies. The anomalous flux associated with the entire dead zone area declined about 35% over a 6-month period. The decline was most notable at a hot zone located on an alluvial fan and in the SG on the valley floor. Gas geochemistry indicates that older established fumaroles along the Stillwater Fault and a 2-year-old vent in the lower section of the dead zone discharge a mixture of geothermal gases and air or gases from air-saturated meteoric water (ASMW). Stable isotope data indicate that steam from the smaller fumaroles is produced by ~100 °C boiling of these mixed fluids and reservoir fluid. Steam from the Senator fumarole (SF) and from shallow wells penetrating the dead zone is probably derived by 140 °C to 160 °C boiling of reservoir fluid.
Carbon-13 isotope data suggest that the reservoir CO2 is produced mainly by thermal decarbonation of hydrothermal calcite in veins that cut reservoir rocks. Formation of the dead zone is linked to the reservoir pressure decline caused by continuous reservoir drawdown from 1986 to present. These reservoir changes have restricted flow and induced boiling in a subsurface hydrothermal outflow plume extending from the Stillwater Fault southeast toward the DV floor. We estimate that maximum CO2 flux in the upflow zone along the Stillwater Fault in 1998 was roughly seven to eight times greater than the pre-production flux in 1986. The eventual decline in CO2 flux reflects the drying out of the outflow plume. Published by Elsevier Science B.V.
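The lognormal flux statistics described above can be sketched as follows; this is a minimal illustration using made-up flux values and the textbook lognormal mean estimator, not the study's statistical method or data.

```python
import numpy as np

# Hypothetical chamber fluxes (g m^-2 day^-1), chosen to span the kind of
# range reported in the abstract; NOT the study's measurements.
fluxes = np.array([5.0, 8.0, 12.0, 40.0, 95.0, 570.0])

log_f = np.log(fluxes)
mu = log_f.mean()
s2 = log_f.var(ddof=1)

# For lognormal data, the population mean is E[X] = exp(mu + sigma^2 / 2),
# which exceeds the geometric mean exp(mu) whenever sigma^2 > 0.
mean_flux = np.exp(mu + s2 / 2.0)        # g m^-2 day^-1

area_m2 = 0.14e6                          # hypothetical 0.14 km^2 site
total_t_per_day = mean_flux * area_m2 / 1e6   # grams -> tonnes per day
```

Real surveys typically use a bias-corrected estimator (e.g., Sichel's t-estimator) rather than the plug-in formula above, but the structure of the calculation is the same.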
Umar, Amara; Javaid, Nadeem; Ahmad, Ashfaq; Khan, Zahoor Ali; Qasim, Umar; Alrajeh, Nabil; Hayat, Amir
2015-06-18
Performance enhancement of Underwater Wireless Sensor Networks (UWSNs) in terms of throughput maximization, energy conservation and Bit Error Rate (BER) minimization is a potential research area. However, limited available bandwidth, high propagation delay, highly dynamic network topology, and high error probability lead to performance degradation in these networks. In this regard, many cooperative communication protocols have been developed that investigate either the physical layer or the Medium Access Control (MAC) layer; however, the network layer is still unexplored. More specifically, cooperative routing has not yet been jointly considered with sink mobility. Therefore, this paper aims to enhance the network reliability and efficiency via dominating set based cooperative routing and sink mobility. The proposed work is validated via simulations which show relatively improved performance of our proposed work in terms of the selected performance metrics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Rebecca G.; Livio, Mario; Palaniswamy, Divya
Pulsar timing observations have revealed planets around only a few pulsars. We suggest that the rarity of these planets is due mainly to two effects. First, we show that the most likely formation mechanism requires the destruction of a companion star. Only pulsars with a suitable companion (with an extreme mass ratio) are able to form planets. Second, while a dead zone (a region of low turbulence) in the disk is generally thought to be essential for planet formation, it is most probably rare in disks around pulsars, because of the irradiation from the pulsar. The irradiation strongly heats the inner parts of the disk, thus pushing the inner boundary of the dead zone out. We suggest that the rarity of pulsar planets can be explained by the low probability for these two requirements to be satisfied: a very low-mass companion and a dead zone.
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
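The comparison between a composite average and the optimal (minimum mean-squared-error) linear estimate can be sketched numerically. The Gaussian autocovariance, decorrelation scale, observation times, and noise variance below are all illustrative assumptions, not the paper's signal statistics.

```python
import numpy as np

def gaussian_cov(t1, t2, var=1.0, tau=5.0):
    """Assumed signal autocovariance (Gaussian, decorrelation scale tau)."""
    return var * np.exp(-((t1 - t2) / tau) ** 2)

t_obs = np.array([0.5, 1.0, 4.0, 9.5])     # irregular observation times
noise_var = 0.2                             # measurement error variance
t_avg = np.linspace(0.0, 10.0, 201)         # grid defining the time average

# Data covariance: signal covariance plus uncorrelated measurement noise.
C = gaussian_cov(t_obs[:, None], t_obs[None, :]) + noise_var * np.eye(len(t_obs))

# Cross-covariance of each observation with the time-averaged signal.
c = gaussian_cov(t_obs[:, None], t_avg[None, :]).mean(axis=1)

w_opt = np.linalg.solve(C, c)                     # optimal weights
w_comp = np.full(len(t_obs), 1.0 / len(t_obs))    # composite-average weights

# Expected squared error of a linear estimate with weights w:
#   MSE(w) = Var(avg) - 2 w.c + w' C w
var_avg = gaussian_cov(t_avg[:, None], t_avg[None, :]).mean()

def mse(w):
    return var_avg - 2.0 * w @ c + w @ C @ w
```

By construction `w_opt` minimizes the quadratic MSE, so it can never do worse than the equal-weight composite average; how much better it does depends on the assumed covariances.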
Long-lived Dust Asymmetries at Dead Zone Edges in Protoplanetary Disks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miranda, Ryan; Li, Hui; Li, Shengtai
A number of transition disks exhibit significant azimuthal asymmetries in thermal dust emission. One possible origin for these asymmetries is dust trapping in vortices formed at the edges of dead zones. We carry out high-resolution, two-dimensional hydrodynamic simulations of this scenario, including the effects of dust feedback. We find that, although feedback weakens the vortices and slows down the process of dust accumulation, the dust distribution in the disk can nonetheless remain asymmetric for many thousands of orbits. We show that even after 10^4 orbits, or 2.5 Myr when scaled to the parameters of Oph IRS 48 (a significant fraction of its age), the dust is not dispersed into an axisymmetric ring, in contrast to the case of a vortex formed by a planet. This is because accumulation of mass at the dead zone edge constantly replenishes the vortex, preventing it from being fully destroyed. We produce synthetic dust emission images using our simulation results. We find that multiple small clumps of dust may be distributed azimuthally. These clumps, if not resolved from one another, appear as a single large feature. A defining characteristic of a disk with a dead zone edge is that an asymmetric feature is accompanied by a ring of dust located about twice as far from the central star.
Tropical dead zones and mass mortalities on coral reefs.
Altieri, Andrew H; Harrison, Seamus B; Seemann, Janina; Collin, Rachel; Diaz, Robert J; Knowlton, Nancy
2017-04-04
Degradation of coastal water quality in the form of low dissolved oxygen levels (hypoxia) can harm biodiversity, ecosystem function, and human wellbeing. Extreme hypoxic conditions along the coast, leading to what are often referred to as "dead zones," are known primarily from temperate regions. However, little is known about the potential threat of hypoxia in the tropics, even though the known risk factors, including eutrophication and elevated temperatures, are common. Here we document an unprecedented hypoxic event on the Caribbean coast of Panama and assess the risk of dead zones to coral reefs worldwide. The event caused coral bleaching and massive mortality of corals and other reef-associated organisms, but observed shifts in community structure combined with laboratory experiments revealed that not all coral species are equally sensitive to hypoxia. Analyses of global databases showed that coral reefs are associated with more than half of the known tropical dead zones worldwide, with >10% of all coral reefs at elevated risk for hypoxia based on local and global risk factors. Hypoxic events in the tropics and associated mortality events have likely been underreported, perhaps by an order of magnitude, because of the lack of local scientific capacity for their detection. Monitoring and management plans for coral reef resilience should incorporate the growing threat of coastal hypoxia and include support for increased detection and research capacity.
NASA Astrophysics Data System (ADS)
Rogener, M. K.; Roberts, B. J.; Rabalais, N. N.; Stewart, F. J.; Joye, S. B.
2016-02-01
Excess nitrogen in coastal environments leads to eutrophication, harmful algal blooms, habitat loss, oxygen depletion and reductions in biodiversity. As such, biological nitrogen (N) removal through the microbially-mediated process of denitrification is a critical ecosystem function that can mitigate the negative consequences of excess nitrogen loading. However, denitrification can produce nitrous oxide, a potent greenhouse gas, as a byproduct under some environmental conditions. To understand how excess nitrogen loading impacts denitrification, we measured rates of this process in the water column of the Gulf of Mexico "Dead Zone" three times over the summer of 2015. The Dead Zone is generated by excessive nitrogen loading from the Mississippi River co-occurring with strong water column stratification, which leads to a large summer-time hypoxic/anoxic area at the mouth of the river and along the coast of Louisiana. Rates of denitrification ranged from 31 to 153 nmol L-1 d-1. Dead Zone waters are also enriched in methane, and aerobic methane oxidation rates ranged from 0.1 to 4.3 nmol L-1 d-1. Maximal denitrification rates were observed at stations with the lowest oxygen concentrations and highest methane oxidation rates, suggesting a potential coupling between nitrate reduction and methane oxidation that both scrubs reactive N and methane from the system, thus performing a dual ecosystem service.
Plasma-Generating Glucose Monitor Accuracy Demonstrated in an Animal Model
Magarian, Peggy; Sterling, Bernhard
2009-01-01
Introduction Four randomized controlled trials have compared mortality and morbidity of tight glycemic control versus conventional glucose control for intensive care unit (ICU) patients. Two trials showed a positive outcome. However, one single-center trial and a large multicenter trial had negative results. The positive trials used accurate portable lab analyzers. The negative trial allowed the use of meters. The portable analyzer measures in filtered plasma, minimizing the interference effects. OptiScan Biomedical Corporation is developing a continuous glucose monitor using centrifuged plasma and mid-infrared spectroscopy for use in ICU medicine. The OptiScanner draws approximately 0.1 ml of blood every 15 min and creates a centrifuged plasma sample. Internal quality control minimizes sample preparation error. Interference adjustment using this technique has been presented at the Society of Critical Care Medicine in separate studies since 2006. Method A good laboratory practice study was conducted on three Yorkshire pigs using a central venous catheter over 6 h while performing a glucose challenge. Matching Yellow Springs Instrument glucose readings were obtained. Results Some 95.7% of the predicted values were in the Clarke Error Grid A zone and 4.3% in the B zone. Of those in the B zone, all were within 3.3% of the A zone boundaries. The coefficient of determination (R2) was 0.993. The coefficient of variation was 5.02%. Animal necropsy and blood panels demonstrated safety. Conclusion The OptiScanner investigational device performed safely and accurately in an animal model. Human studies using the device will begin soon. PMID:20144396
Jack Rabbit Pretest 2021E PT6 Photonic Doppler Velocimetry Data Volume 6 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT6 experiment was fired on April 1, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT6, 160 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 20, 30, 35, 45, 55, 65, 75 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The earliest PDV signal extinction was 54.2 microseconds at 30 millimeters. The latest PDV signal extinction time was 64.5 microseconds at the central axis. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 55 millimeters at 14.1 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1860 meters per second. At 55 millimeters the last measured velocity was 2408 meters per second. The low-to-high velocity ratio was 0.77. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 227 kilobars at 20.1 microseconds, indicating a late time chemical reaction in the LX-17 dead-zone. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 1.7 microseconds.
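The data reduction described (integrating velocity histories for plate profiles, differentiating them for pressure) can be sketched as follows. The synthetic velocity ramp, plate density, and thickness are hypothetical, and the plate-inertia estimate p ≈ ρ·h·a is only a simple stand-in for the actual analysis.

```python
import numpy as np

# Synthetic PDV-like velocity history: an exponential ramp toward a
# plateau comparable to the reported central-axis value. Illustrative only.
t = np.linspace(0.0, 10e-6, 1001)             # time, s
v = 1860.0 * (1.0 - np.exp(-t / 1e-6))        # velocity, m/s

# Cumulative trapezoidal integral of velocity -> plate displacement (m),
# which traced across probe positions would give a cross-section profile.
disp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (v[:-1] + v[1:]) / 2.0)))

# Numerical derivative of velocity -> acceleration (m/s^2).
accel = np.gradient(v, t)

# Crude pressure estimate from plate inertia, p = rho * h * a, for an
# assumed plate areal density; 1 kilobar = 1e8 Pa.
rho, h = 2700.0, 2.0e-3                       # assumed kg/m^3 and m
pressure = rho * h * accel                    # Pa
peak_kbar = pressure.max() / 1e8
```

With real shot data the differentiation step is noise-sensitive, so smoothing or fitting usually precedes it; the integral, by contrast, is well behaved.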
Wootton, Jeffery H; Hsu, I-Chow Joe; Diederich, Chris J
2011-02-01
The clinical success of hyperthermia adjunct to radiotherapy depends on adequate temperature elevation in the tumor with minimal temperature rise in organs at risk. Existing technologies for thermal treatment of the cervix have limited spatial control or rapid energy falloff. The objective of this work is to develop an endocervical applicator using a linear array of multisectored tubular ultrasound transducers to provide 3-D conformal, locally targeted hyperthermia concomitant to radiotherapy in the uterine cervix. The catheter-based device is integrated within a HDR brachytherapy applicator to facilitate sequential and potentially simultaneous heat and radiation delivery. Treatment planning images from 35 patients who underwent HDR brachytherapy for locally advanced cervical cancer were inspected to assess the dimensions of radiation clinical target volumes (CTVs) and gross tumor volumes (GTVs) surrounding the cervix and the proximity of organs at risk. Biothermal simulation was used to identify applicator and catheter material parameters to adequately heat the cervix with minimal thermal dose accumulation in nontargeted structures. A family of ultrasound applicators was fabricated with two to three tubular transducers operating at 6.6-7.4 MHz that are unsectored (360 degrees), bisectored (2 x 180 degrees), or trisectored (3 x 120 degrees) for control of energy deposition in angle and along the device length in order to satisfy anatomical constraints. The device is housed in a 6 mm diameter PET catheter with cooling water flow for endocervical implantation. Devices were characterized by measuring acoustic efficiencies, rotational acoustic intensity distributions, and rotational temperature distributions in phantom. The CTV in HDR brachytherapy plans extends 20.5 +/- 5.0 mm from the endocervical tandem with the rectum and bladder typically <8 mm from the target boundary. The GTV extends 19.4 +/- 7.3 mm from the tandem. 
Simulations indicate that for 60 min treatments the applicator can heat to 41 degrees C and deliver a thermal dose > 5 equivalent minutes at 43 degrees C (EM43) over a 4-5 cm diameter with Tmax < 45 degrees C and 1 kg m(-3) s(-1) blood perfusion. The 41 degrees C contour diameter is reduced to 3-4 cm at 3 kg m(-3) s(-1) perfusion. Differential power control to transducer elements and sectors demonstrates tailoring of heating along the device length and in angle. Sector cuts are associated with a 14-47 degree acoustic dead zone, depending on cut width, resulting in an approximately 2-4 degrees C temperature reduction within the dead zone below Tmax. Dead zones can be oriented for thermal protection of the rectum and bladder. Fabricated devices have acoustic efficiencies of 33.4%-51.8% with acoustic output that is well collimated in length, reflects the sectoring strategy, and is strongly correlated with temperature distributions. A catheter-based ultrasound applicator was developed for endocervical implantation with locally targeted, 3-D conformal thermal delivery to the uterine cervix. Feasibility of heating clinically relevant target volumes was demonstrated with power control along the device length and in angle to treat the cervix with minimal thermal dose delivery to the rectum and bladder.
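The thermal-dose bookkeeping used in hyperthermia work of this kind is the standard Sapareto-Dewey cumulative-equivalent-minutes metric (CEM43 or EM43). A minimal sketch, assuming a constant 41 degrees C history rather than a simulated treatment:

```python
import numpy as np

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C (Sapareto-Dewey).

    temps_c: temperature samples in degrees C; dt_min: sample spacing in
    minutes. R = 0.5 at or above 43 C, 0.25 below (the usual convention).
    """
    temps_c = np.asarray(temps_c, dtype=float)
    R = np.where(temps_c >= 43.0, 0.5, 0.25)
    return float(np.sum(dt_min * R ** (43.0 - temps_c)))

# 60 one-minute samples at a constant 41 C:
# each minute counts as 0.25**2 = 0.0625 equivalent minutes at 43 C.
dose = cem43(np.full(60, 41.0), dt_min=1.0)
```

This illustrates why long 41 degrees C treatments accumulate dose slowly: two degrees below the 43 degrees C breakpoint, each real minute is worth only 1/16 of an equivalent minute.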
Properties of the dead zone due to the gas cushion effect in PBX 9502
NASA Astrophysics Data System (ADS)
Anderson, William
2017-06-01
The gas cushion effect is a well-known phenomenon in which gas trapped between an impactor and an explosive precompresses and deadens a layer of the explosive. We have conducted a series of impact experiments, with and without a trapped gas layer, on the plastic bonded explosive PBX 9502 (95% TATB and 5% Kel-F 800). In each experiment, a 100-oriented LiF window was glued, with an intervening Al foil (a reflector for VISAR), to the surface of a thin (2.5-3.3 mm) PBX 9502 sample and the opposite surface impacted by an impactor at a velocity sufficient to produce an overdriven detonation. VISAR was used to observe arrival of the resulting shock wave and reverberations between the LiF window and the impactor. In three experiments, a gap of 25-38 mm, filled with He gas at a pressure of 0.79 bar, existed between the impactor and the sample at the beginning of the experiment. In these three experiments, a low-amplitude wave reflected from the interface between the reacted explosive and the dead zone was observed to precede the reflection from the impactor. We have used the observed wave amplitudes and arrival times to quantify the properties of the dead zone and, by comparison to existing EOS data for reacted and unreacted PBX 9502, estimate the extent of reaction in the dead zone. This work was supported by the US Department of Energy under contract DE-AC52-06NA25396.
Bed Erosion Process in Geophysical Viscoplastic Fluid
NASA Astrophysics Data System (ADS)
Luu, L. H.; Philippe, P.; Chambon, G.; Vigneaux, P.; Marly, A.
2017-12-01
The bulk behavior of materials involved in geophysical fluid dynamics, such as snow avalanches or debris flows, has often been modeled as a viscoplastic fluid that starts to flow once its stress state overcomes a critical yield value. This experimental and numerical study proposes to interpret the process of erosion in terms of a solid-fluid transition for these complex materials. The experimental setup consists of a closed rectangular channel with a cavity in its base. By means of high-resolution optical velocimetry (PIV), we examine the typical velocity profiles of a model elasto-viscoplastic flow (Carbopol) in the vicinity of the solid-fluid interface, which separates a yielded flowing layer above from an unyielded dead zone below. In parallel, numerical simulations in this expansion-contraction geometry, using Augmented Lagrangian and Finite-Differences methods, assess whether the specific flow associated with the dead zone can be described with a simple Bingham rheology. First results of this comparative analysis show that the simulations capture the main scalings and flow features, such as the non-monotonous evolution of the shear stress in the boundary layer between the central plug zone and the dead zone at the bottom of the cavity.
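A minimal sketch of the Bingham closure invoked above: below the yield stress the material is rigid (the dead zone), and above it the shear rate grows linearly with the excess stress. The yield stress and plastic viscosity values are illustrative assumptions.

```python
import numpy as np

tau_y, mu = 10.0, 0.5   # assumed yield stress (Pa) and plastic viscosity (Pa s)

def shear_rate(tau):
    """Shear rate of a Bingham fluid under shear stress tau.

    Zero below the yield stress (rigid / dead zone); otherwise
    proportional to the excess stress, with the sign of tau.
    """
    tau = np.asarray(tau, dtype=float)
    return np.where(np.abs(tau) > tau_y,
                    (np.abs(tau) - tau_y) / mu * np.sign(tau),
                    0.0)
```

In a cavity flow this closure is what produces a sharp yielded/unyielded interface: wherever the local stress stays below tau_y, the velocity gradient vanishes and the material sits as a dead zone.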
Control at stability's edge minimizes energetic costs: expert stick balancing
Meyer, Ryan; Zhvanetsky, Max; Ridge, Sarah; Insperger, Tamás
2016-01-01
Stick balancing on the fingertip is a complex voluntary motor task that requires the stabilization of an unstable system. For seated expert stick balancers, the time delay is 0.23 s, the shortest stick that can be balanced for 240 s is 0.32 m, and there is a dead zone for the estimation of the vertical displacement angle in the sagittal plane. These observations motivate a switching-type, pendulum-cart model for balance control which uses an internal model to compensate for the time delay by predicting the sensory consequences of the stick's movements. Numerical simulations using the semi-discretization method suggest that the feedback gains are tuned near the edge of stability. For these choices of the feedback gains, the cost function which takes into account the position of the fingertip and the corrective forces is minimized. Thus, expert stick balancers optimize control with a combination of quick manoeuvrability and minimum energy expenditure. PMID:27278361
Marjanovic, Josip; Weiger, Markus; Reber, Jonas; Brunner, David O; Dietrich, Benjamin E; Wilm, Bertram J; Froidevaux, Romain; Pruessmann, Klaas P
2018-02-01
For magnetic resonance imaging of tissues with very short transverse relaxation times, radio-frequency excitation must be immediately followed by data acquisition with fast spatial encoding. In zero-echo-time (ZTE) imaging, excitation is performed while the readout gradient is already on, causing data loss due to an initial dead time. One major dead time contribution is the settling time of the filters involved in signal down-conversion. In this paper, a multi-rate acquisition scheme is proposed to minimize dead time due to filtering. Short filters and high output bandwidth are used initially to minimize settling time. With increasing time since the signal onset, longer filters with better frequency selectivity enable stronger signal decimation. In this way, significant dead time reduction is accomplished at only a slight increase in the overall amount of output data. Multi-rate acquisition was implemented with a two-stage filter cascade in a digital receiver based on a field-programmable gate array. In ZTE imaging in a phantom and in vivo, dead time reduction by multi-rate acquisition is shown to improve image quality and expand the feasible bandwidth while increasing the amount of data collected by only a few percent.
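The multi-rate idea can be sketched with two windowed-sinc FIR stages; the filter lengths, rates, and signal below are hypothetical assumptions, not the FPGA filter cascade described. A short filter with small group delay handles the samples right after signal onset, and a longer, more selective filter with 8x decimation takes over afterwards.

```python
import numpy as np

def lowpass_fir(ntaps, cutoff):
    """Windowed-sinc low-pass FIR (cutoff as a fraction of Nyquist)."""
    n = np.arange(ntaps) - (ntaps - 1) / 2.0
    h = np.sinc(cutoff * n) * np.hamming(ntaps)
    return h / h.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)             # stand-in down-converted signal

short_fir = lowpass_fir(9, 0.45)          # short filter: fast settling
long_fir = lowpass_fir(129, 0.45 / 8)     # selective filter for 8x decimation

# Early samples: short kernel at full rate, minimizing initial dead time.
early = np.convolve(x[:64], short_fir, mode="valid")

# Later samples: sharper filter, output decimated by 8.
late = np.convolve(x[64:], long_fir, mode="valid")[::8]

# Dead time scales with the filter's group delay (~half its length).
dead_short = (len(short_fir) - 1) // 2    # samples
dead_long = (len(long_fir) - 1) // 2
```

The trade-off mirrors the abstract: the short stage settles in a few samples but passes a wide band (more output data per unit time), while the long stage buys frequency selectivity, and hence strong decimation, at the cost of a group delay that would be unacceptable right at signal onset.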
Error-Eliciting Problems: Fostering Understanding and Thinking
ERIC Educational Resources Information Center
Lim, Kien H.
2014-01-01
Student errors are springboards for analyzing, reasoning, and justifying. The mathematics education community recognizes the value of student errors, noting that "mistakes are seen not as dead ends but rather as potential avenues for learning." To induce specific errors and help students learn, choose tasks that might produce mistakes.…
Water Quality Modeling in the Dead End Sections of Drinking Water Distribution Networks
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Wate...
Science and Technology Review July/August 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blobaum, K M
2010-05-27
This issue has the following articles: (1) Deterrence with a Minimum Nuclear Stockpile - Commentary by Bruce T. Goodwin; (2) Enhancing Confidence in the Nation's Nuclear Stockpile - Livermore experts are participating in a national effort aimed at predicting how nuclear weapon materials and systems will likely change over time; (3) Narrowing Uncertainties - For climate modeling and many other fields, understanding uncertainty, or margin of error, is critical; (4) Insight into a Deadly Disease - Laboratory experiments reveal the pathogenesis of tularemia in host cells, bringing scientists closer to developing a vaccine for this debilitating disease. (5) Return to Rongelap - On the Rongelap Atoll, Livermore scientists are working to minimize radiological exposure for natives now living on or wishing to return to the islands.
NASA Technical Reports Server (NTRS)
Martin, D. L.; Perry, M. J.
1994-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
Sustained Accretion on Gas Giants Surrounded by Low-Turbulence Circumplanetary Disks
NASA Astrophysics Data System (ADS)
D'Angelo, Gennaro; Marzari, Francesco
2015-11-01
Gas giants more massive than Saturn acquire most of their envelope while surrounded by a circumplanetary disk (CPD), which extends over a fraction of the planet’s Hill radius. Akin to circumstellar disks, CPDs may be subject to MRI-driven turbulence and contain low-turbulence regions, i.e., dead zones. It was suggested that CPDs may inhibit sustained gas accretion, thus limiting planet growth, because gas transport through a CPD may be severely reduced by a dead zone, a consequence at odds with the presence of Jupiter-mass (and larger) planets. We studied how an extended dead zone influences gas accretion on a Jupiter-mass planet, using global 3D hydrodynamics calculations with mesh refinements. The accretion flow from the circumstellar disk to the CPD is resolved locally at the length scale Rj, Jupiter's radius. The gas kinematic viscosity is assumed to be constant and the dead zone around the planet is modeled as a region of much lower viscosity, extending from ~Rj out to ~60Rj and off the mid-plane for a few CPD scale heights. We obtain accretion rates only marginally smaller than those reported by, e.g., D'Angelo et al. (2003), Bate et al. (2003), Bodenheimer et al. (2013), who applied the same constant kinematic viscosity everywhere, including in the CPD. As found by several previous studies (e.g., D’Angelo et al. 2003; Bate et al. 2003; Tanigawa et al. 2012; Ayliffe and Bate 2012; Gressel et al. 2013; Szulágyi et al. 2014), the accretion flow does not proceed through the CPD mid-plane but rather at and above the CPD surface, hence involving MRI-active regions (Turner et al. 2014). We conclude that the presence of a dead zone in a CPD does not inhibit gas accretion on a giant planet. Sustained accretion in the presence of a CPD is consistent not only with the formation of Jupiter but also with observed extrasolar planets more massive than Jupiter. 
We place these results in the context of the growth and migration of a pair of giant planets locked in the 2:1 mean motion resonance.
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Wate...
Phosphorus release from ash and remaining tissues of two wetland species after a prescribed fire.
Liu, G D; Gu, B; Miao, S L; Li, Y C; Migliaccio, K W; Qian, Y
2010-01-01
Dead plant tissues and ash from a prescribed fire play an important role in nutrient balance and cycling in the Florida Everglades ecosystem. The objective of this study was to assess the dynamic changes in total phosphorus release (TPr) from ash or tissues of either cattail (Typha domingensis Pers.) or sawgrass (Cladium jamaicense Crantz) to water. Natural-dead (senesced-dead) and burning-dead (standing-dead due to a prescribed fire) cattail and sawgrass were collected from highly (H) and moderately (M) impacted zones in the Florida Everglades. This experiment was conducted by incubation and water-extraction of the materials in plastic bottles for 65 d at room temperature (24 +/- 1 degrees C). Results showed that 63 to 88%, 17 to 48%, 9 to 20%, and 13 to 28% of total P (TPp) were released as TPr from cattail and sawgrass ash, cattail tissues from the H zone, cattail tissues, and sawgrass tissues from the M zone, respectively. TPp means total P of plant tissues, whereas TPr is total P release from the tissues or ash. Most of the TPr was released within 24 h after burning. The quick release of TPr observed in this experiment may help explain the P surge in the surface water immediately following a fire in the marsh. These findings suggest that prescribed burning accelerates P release from cattail and sawgrass. They also imply that it is very important to keep the water stagnant in the first 24 h to maximize the benefits of a prescribed fire in the Everglades.
Hough, S.E.; Avni, R.
2009-01-01
In combination with the historical record, paleoseismic investigations have provided a record of large earthquakes in the Dead Sea Rift that extends back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of the relationships developed in other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision. The large uncertainties illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historic record and show that it is consistent with a b-value distribution with a b-value of 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results reveal that an earthquake of M7.8 is expected within the zone on average every 1000 years. © 2011 Science From Israel/LPP Ltd.
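The recurrence estimate above follows from the Gutenberg-Richter relation with b = 1. A minimal sketch of that arithmetic is below; the a-value is a hypothetical anchor chosen so that an M ≥ 7.8 event recurs on average every 1000 years, matching the abstract's estimate, and is not a fitted value from the study.

```python
def gr_cumulative_rate(M, a, b=1.0):
    """Cumulative annual rate of earthquakes with magnitude >= M under
    the Gutenberg-Richter relation: log10 N(M) = a - b*M."""
    return 10.0 ** (a - b * M)

# Hypothetical a-value anchoring an M >= 7.8 event to once per 1000 yr:
a = 4.8
recurrence_m78 = 1.0 / gr_cumulative_rate(7.8, a)   # about 1000 years
recurrence_m66 = 1.0 / gr_cumulative_rate(6.6, a)   # smaller events recur more often
```

With b = 1, each unit decrease in magnitude makes events tenfold more frequent, which is why M6.6-class events like the 1170 earthquake are expected far more often than M7.8-class events.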
NASA Astrophysics Data System (ADS)
Steckler, Michael S.; ten Brink, Uri S.
1986-08-01
The complex plate boundary between Arabia and Africa at the northern end of the Red Sea includes the Gulf of Suez rift and the Gulf of Aqaba—Dead Sea transform. Geologic evidence indicates that during the earliest phase of rifting the Red Sea propagated NNW towards the Mediterranean Sea creating the Gulf of Suez. Subsequently, the majority of the relative movement between the plates shifted eastward to the Dead Sea transform. We propose that an increase in the strength of the lithosphere across the Mediterranean continental margin acted as a barrier to the propagation of the rift. A new plate boundary, the Dead Sea transform formed along a zone of minimum strength. We present an analysis of lithospheric strength variations across the Mediterranean continental margin. The main factors controlling these variations are the geotherm, crustal thickness and composition, and sediment thickness. The analysis predicts a characteristic strength profile at continental margins which consists of a marked increase in strength seaward of the hinge zone and a strength minimum landward of the hinge zone. This strength profile also favors the creation of thin continental slivers such as the Levant west of the Dead Sea transform and the continental promontory containing Socotra Island at the mouth of the Gulf of Aden. Calculations of strength variations based on changes of crustal thickness, geotherm and sediment thickness can be extended to other geologic settings as well. They can explain the location of rerifting events at intracratonic basins, of backarc basins and of major continental strike-slip zones.
Indoor localization using pedestrian dead reckoning updated with RFID-based fiducials.
House, Samuel; Connell, Sean; Milligan, Ian; Austin, Daniel; Hayes, Tamara L; Chiang, Patrick
2011-01-01
We describe a low-cost wearable system that tracks the location of individuals indoors using commonly available inertial navigation sensors fused with radio frequency identification (RFID) tags placed around the smart environment. While conventional pedestrian dead reckoning (PDR) calculated with an inertial measurement unit (IMU) is susceptible to sensor drift inaccuracies, the proposed wearable prototype fuses the drift-sensitive IMU with a RFID tag reader. Passive RFID tags placed throughout the smart-building then act as fiducial markers that update the physical locations of each user, thereby correcting positional errors and sensor inaccuracy. Experimental measurements taken for a 55 m × 20 m 2D floor space indicate an over 1200% improvement in average error rate of the proposed RFID-fused system over dead reckoning alone.
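The fusion scheme described above, integrating IMU-derived steps and snapping to a known tag position whenever an RFID fiducial is read, can be sketched minimally as follows. The function name, step vectors, and tag position are invented for illustration; a real system would weight the fix (e.g., in a Kalman filter) rather than hard-reset.

```python
import numpy as np

def pdr_with_fiducials(steps, fiducials):
    """Integrate 2-D step vectors (pedestrian dead reckoning); when a
    step index has an RFID fiducial fix, snap the estimate to the tag's
    known position, cancelling the accumulated IMU drift.
    steps: list of (dx, dy); fiducials: dict step_index -> (x, y)."""
    pos = np.zeros(2)
    track = []
    for i, step in enumerate(steps):
        pos = pos + np.asarray(step, dtype=float)
        if i in fiducials:                 # passive tag detected
            pos = np.asarray(fiducials[i], dtype=float)
        track.append(pos.copy())
    return track

# Hypothetical walk with lateral drift; a tag at (2.0, 0.0) is read at step 1.
track = pdr_with_fiducials([(1.0, 0.1), (1.0, 0.1), (1.0, 0.1)],
                           {1: (2.0, 0.0)})
```

After the fiducial fix at step 1, the position error no longer carries the drift accumulated during the first two steps, which is the mechanism behind the large reported improvement over dead reckoning alone.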
EXor OUTBURSTS FROM DISK AMPLIFICATION OF STELLAR MAGNETIC CYCLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armitage, Philip J., E-mail: pja@jilau1.colorado.edu
EXor outbursts—moderate-amplitude disk accretion events observed in Class I and Class II protostellar sources—have timescales and amplitudes that are consistent with the viscous accumulation and release of gas in the inner disk near the dead zone boundary. We suggest that outbursts are indirectly triggered by stellar dynamo cycles, via poloidal magnetic flux that diffuses radially outward through the disk. Interior to the dead zone the strength of the net field modulates the efficiency of angular momentum transport by the magnetorotational instability. In the dead zone changes in the polarity of the net field may lead to stronger outbursts because of the dominant role of the Hall effect in this region of the disk. At the level of simple estimates we show that changes to kG-strength stellar fields could stimulate disk outbursts on 0.1 au scales, though this optimistic conclusion depends upon the uncertain efficiency of net flux transport through the inner disk. The model predicts a close association between observational tracers of stellar magnetic activity and EXor events.
NASA Astrophysics Data System (ADS)
Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon
2015-08-01
Exact measurement of the second-order correlation function g(2)(t) of a light source is essential when investigating the photon statistics and the light generation process of the source. For a stationary single-mode light source, the Mandel Q factor is directly related to g(2)(0). For a large mean photon number in the mode, the deviation of g(2)(0) from unity is so small that even a tiny error in measuring g(2)(0) would result in an inaccurate Mandel Q. In this work, we address the detector-dead-time effect on g(2)(0) of stationary sub-Poissonian light. It is then found that detector dead time can induce a serious error in g(2)(0) and thus in Mandel Q in those cases even in a two-detector configuration. Utilizing the cavity-QED microlaser, a well-established sub-Poissonian light source, we measured g(2)(0) with two different types of photodetectors with different dead times. We also introduced prolonged dead time by intentionally deleting the photodetection events following a preceding one within a specified time interval. We found that the observed Q of the cavity-QED microlaser was underestimated by 19% with respect to the dead-time-free Q when its mean photon number was about 600. We derived an analytic formula which well explains the behavior of g(2)(0) as a function of the dead time.
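The "prolonged dead time" manipulation described above, deleting any photodetection event that follows an accepted one within a specified interval, can be sketched as follows. A non-paralyzable detector model is assumed here, and the timestamps are invented; this illustrates the data processing step, not the authors' analytic formula.

```python
def apply_dead_time(timestamps, dead_time):
    """Remove detection events that fall within `dead_time` of the last
    accepted event (non-paralyzable detector model, an assumption)."""
    accepted = []
    last = None
    for t in sorted(timestamps):
        if last is None or t - last >= dead_time:
            accepted.append(t)
            last = t
    return accepted

# Hypothetical photon arrival times (arbitrary time units):
events = [0.0, 0.2, 0.9, 1.0, 2.5]
surviving = apply_dead_time(events, dead_time=0.5)
```

Because dead time preferentially removes closely spaced pairs, the coincidence statistics used to estimate g(2)(0) are biased, which is the effect the study quantifies.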
Free space optics: a viable last-mile alternative
NASA Astrophysics Data System (ADS)
Willebrand, Heinz A.; Clark, Gerald R.
2001-10-01
This paper explores Free Space Optics (FSO) as an access technology in the last mile of metropolitan area networks (MANs). These networks are based in part on fiber-optic telecommunications infrastructure, including network architectures of Synchronous Optical Network (commonly referred to as SONET), the North American standard for synchronous data transmission; and Synchronous Digital Hierarchy (commonly referred to as SDH), the international standard and equivalent of SONET. Several converging forces have moved FSO beyond a niche technology for use only in local area networks (LANs) as a bridge connecting two facilities. FSO now allows service providers to cost-effectively provide optical bandwidth for access networks and accelerate the extension of metro optical networks, bridging what has been termed by industry experts as the optical dead zone. The optical dead zone refers to both the slowdown in capital investment in the short-term future and the actual connectivity gap that exists today between core metro optical networks and the access optical networks. Service providers have built extensive core and minimal metro networks but have not yet provided optical bandwidth to the access market, largely due to the non-compelling economics of bridging the dead zone with fiber. Historically, such infrastructure build-out slowdowns have been blamed on a combination of economics, time-to-market constraints and limited technology options. However, new technology developments and market acceptance of FSO give service providers a new cost-effective alternative to provide high-bandwidth services with optical bandwidth in the access networks. Merrill Lynch predicts FSO will grow into a $2 billion market by 2005. The drivers for this market are a mere 5%-6% penetration of fiber to business buildings; a cost-effective solution versus RF or fiber; and significant capacity that can only be matched by a physical fiber link, Merrill Lynch reports.
This paper will describe FSO technology, its capabilities and its limitations. The paper will investigate how FSO technology has evolved to its current stage for deployment in MANs, LANs, wireless backhaul and metropolitan network extensions - applications that fall within the category of last mile. The paper will address the market, drivers and the adoption of FSO, plus provide a projection of future FSO technology, based on today's product roadmaps. The paper concludes with a summary of findings and recommendations.
Williams, George Sie; Naiene, Jeremias; Gayflor, Joseph; Malibiche, Theophil; Zoogley, Bentoe; Frank, Wimot G; Nayeri, Fariba
2015-08-01
As West Africa continues to suffer from a deadly Ebola epidemic, the national health sectors struggle to minimize the damages and stop the spread of disease. A cohort of inhabitants of a small village and an Ebola hot zone in Sinoe County of Liberia was followed on a day-by-day basis to search for new cases and to minimize the spread of Ebola to the other community members or to other regions. Technical, clinical, and humanistic aspects of the response are discussed in this report. Of the 22 confirmed Ebola cases in Sinoe County since the beginning of the outbreak (June 16, 2014), 7 cases were inhabitants of Polay Town, a small village 5.5 miles east of Greenville, the Sinoe County capital. After the last wave of the outbreak at the beginning of December, enhanced response activity provided essential coordination and mobilized the resources to stop the epidemic. Despite unprotected contacts in crowded houses, no new cases were detected among the contact families, or in the surrounding houses or communities. Strong national mobilization in a decentralized but harmonized system at the community level has been of great value in controlling the epidemic in Liberia. The major interventions include epidemiological surveillance, public information dissemination, effective communication, case management, and infection control. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Variability of intertidal foraminiferal assemblages in a salt marsh, Oregon, USA
Milker, Yvonne; Horton, Benjamin P.; Nelson, Alan R.; Engelhart, Simon E.; Witter, Robert C.
2015-01-01
We studied 18 sampling stations along a transect to investigate the similarity between live (rose Bengal stained) foraminiferal populations and dead assemblages, their small-scale spatial variations and the distribution of infaunal foraminifera in a salt marsh (Toms Creek marsh) at the upper end of the South Slough arm of the Coos Bay estuary, Oregon, USA. We aimed to test to what extent taphonomic processes, small-scale variability and infaunal distribution influence the accuracy of sea-level reconstructions based on intertidal foraminifera. Cluster analyses have shown that dead assemblages occur in distinct zones with respect to elevation, a prerequisite for using foraminifera as sea-level indicators. Our nonparametric multivariate analysis of variance showed that small-scale spatial variability has only a small influence on live (rose Bengal stained) populations and dead assemblages. The dissimilarity was higher, however, between live (rose Bengal stained) populations in the middle marsh. We observed early diagenetic dissolution of calcareous tests in the dead assemblages. If comparable post-depositional processes and similar minor spatial variability also characterize fossil assemblages, then dead assemblages are the best modern analogs for paleoenvironmental reconstructions. The Toms Creek tidal flat and low marsh vascular plant zones are dominated by Miliammina fusca, the middle marsh is dominated by Balticammina pseudomacrescens and Trochammina inflata, and the high marsh and upland–marsh transition zone are dominated by Trochamminita irregularis. Analysis of infaunal foraminifera showed that most living specimens are found in the surface sediments and the majority of live (rose Bengal stained) infaunal specimens are restricted to the upper 10 cm, but living individuals are found to depths of 50 cm. The dominant infaunal specimens are similar to those in the corresponding surface samples and no species have been found living solely infaunally.
The total numbers of infaunal foraminifera are small compared to the total numbers of dead specimens in the surface samples. This suggests that surface samples adequately represent the modern intertidal environment in Toms Creek.
The active structure of the Dead Sea depression
NASA Astrophysics Data System (ADS)
Shamir, G.
2003-04-01
The ~220 km long gravitational and structural Dead Sea Depression (DSD), situated along the southern section of the Dead Sea Transform (DST), is centered by the Dead Sea basin sensu stricto (DSB), which has been described since the 1960s as a pull-apart basin over a presumed left-hand fault step. However, several observations, or their lack thereof, question this scheme, e.g. (i) It is not supported by recent seismological and geomorphic data; (ii) It does not explain the fault pattern and mixed sinistral and dextral offset along the DSB western boundary; (iii) It does not simply explain the presence of intense deformation outside the presumed fault step zone; (iv) It is inconsistent with the orientation of seismically active faults within the Dead Sea and Jericho Valley; (v) It is apparently inconsistent with the symmetrical structure of the DSD; (vi) The length of the DSB exceeds the total offset along the Dead Sea Transform, while its subsidence is about the age of the DST. Integration of newly acquired and analyzed data (high resolution and petroleum seismic reflection data, earthquake relocation and fault plane solutions) with previously published data (structural mapping, fracture orientation distribution, Bouguer anomaly maps, sinkhole distribution, geomorphic lineaments) now shows that the active upper crustal manifestation of the DSD is a broad shear zone dominated by internal fault systems oriented NNE and NNW. These fault systems are identified by earthquake activity, seismic reflection observations, alignment of recent sinkholes, and distribution of Bouguer anomaly gradients. Motion on the NNE system is normal-dextral, suggesting that counterclockwise rotation may have taken place within the shear zone. The overall sinistral motion between the Arabian and Israel-Sinai plates along the DSD is thus accommodated by distributed shear across the N-S extending DSD.
The three-dimensionality of this motion at the DSD may be related to the rate of convergence between the two plates.
NASA Astrophysics Data System (ADS)
Zuo, Xiuling; Su, Fenzhen; Zhao, Huanting; Zhang, Junjue; Wang, Qi; Wu, Di
2017-05-01
Coral reefs in the Xisha Islands (also known as the Paracel Islands in English), South China Sea, have experienced dramatic declines in coral cover. However, the current regional-scale hard coral distribution of geomorphic and ecological zones, essential for reef management in the context of global warming and ocean acidification, is not well documented. We analyzed data from field surveys, Landsat-8 and GF-1 images to map the distribution of hard coral within geomorphic zones and reef flat ecological zones. In situ surveys conducted in June 2014 on nine reefs provided a complete picture of reef status with regard to live coral diversity, evenness of coral cover and reef health (live versus dead cover) for the Xisha Islands. Mean coral cover was 12.5% in 2014 and damaged reefs seemed to show signs of recovery. Coral cover in sheltered habitats such as lagoon patch reefs and biotic dense zones of reef flats was higher, but there were large regional differences and low diversity. In contrast, the more exposed reef slopes had high coral diversity, along with high and more equal distributions of coral cover. Mean hard coral cover of other zones was <10%. The total Xisha reef system was estimated to cover 1060 km2, and the emergent reefs covered 787 km2. Hard corals of emergent reefs were considered to cover 97 km2. The biotic dense zone of the reef flat was a very common zone on all simple atolls, especially the broader northern reef flats. The total cover of live and dead coral can reach above 70% in this zone, showing an equilibrium between live and dead coral as opposed to coral and algae. This information regarding the spatial distribution of hard coral can support and inform the management of Xisha reef ecosystems.
An Adaptive Mesh Algorithm: Mesh Structure and Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.
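The zone-selection idea, refine where the solution is modally dense and coarsen where it is sparse, can be illustrated with a minimal 1-D sketch. The gradient-jump criterion below is a stand-in for the abstract's mesh potential, and all values are invented for illustration.

```python
import numpy as np

def refine_flags(values, threshold):
    """Flag cells for refinement where the jump between neighbouring
    cell values exceeds `threshold` (a simple stand-in for detecting
    'modally dense' regions); unflagged cells could be coarsened."""
    jumps = np.abs(np.diff(values))
    flags = np.zeros(len(values), dtype=bool)
    flags[:-1] |= jumps > threshold    # cell left of a steep jump
    flags[1:] |= jumps > threshold     # cell right of a steep jump
    return flags

field = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
flags = refine_flags(field, threshold=1.0)
# Only the cells adjacent to the steep jump are marked for refinement.
```

Applied cyclically as the solution evolves, such a criterion lets the fine mesh follow moving features, which is the dynamic adjustment the abstract describes.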
View of the ODS in the Atlantis payload bay prior to docking
1996-09-17
STS079-824-081 (16-26 Sept. 1996) --- In this 70mm frame from the space shuttle Atlantis, the Jordan River Valley can be traced as it separates Lebanon, Palestine and Israel on the west, from Syria and Jordan on the east. The river flows along the Dead Sea rift; the east side of the fault zone (Syria, Jordan, Saudi Arabia) has moved north about 100 kilometers relative to the west side (Lebanon, Israel, Egypt) during the past 24 million years. The Dead Sea and Sea of Galilee are in depressions formed where faults of the zone diverge or step over. The Dead Sea once covered the area of salt evaporation pans (the bright blue water). The lagoon, barrier islands and evaporite deposits (bright white) along the Mediterranean coast of the Sinai Peninsula (lower left of frame) are just east of Port Said.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gressel, O.; Nelson, R. P.; Turner, N. J.
We present global hydrodynamic (HD) and magnetohydrodynamic (MHD) simulations with mesh refinement of accreting planets embedded in protoplanetary disks (PPDs). The magnetized disk includes Ohmic resistivity that depends on the overlying mass column, leading to turbulent surface layers and a dead zone near the midplane. The main results are: (1) the accretion flow in the Hill sphere is intrinsically three-dimensional for HD and MHD models. Net inflow toward the planet is dominated by high-latitude flows. A circumplanetary disk (CPD) forms. Its midplane flows outward in a pattern whose details differ between models. (2) The opening of a gap magnetically couples and ignites the dead zone near the planet, leading to stochastic accretion, a quasi-turbulent flow in the Hill sphere, and a CPD whose structure displays high levels of variability. (3) Advection of magnetized gas onto the rotating CPD generates helical fields that launch magnetocentrifugally driven outflows. During one specific epoch, a highly collimated, one-sided jet is observed. (4) The CPD's surface density is ∼30 g cm⁻², small enough for significant ionization and turbulence to develop. (5) The accretion rate onto the planet in the MHD simulation reaches a steady value of 8 × 10⁻³ M⊕ yr⁻¹ and is similar in the viscous HD runs. Our results suggest that gas accretion onto a forming giant planet within a magnetized PPD with a dead zone allows rapid growth from Saturnian to Jovian masses. As well as being relevant for giant planet formation, these results have important implications for the formation of regular satellites around gas giant planets.
Nebular dead zone effects on the D/H ratio in chondrites and comets
NASA Astrophysics Data System (ADS)
Ali-Dib, Mohamad; Martin, R. G.; Petit, J.-M.; Mousis, O.; Vernazza, P.; Lunine, J. I.
2015-11-01
Comets and chondrites show non-monotonic behaviour of their Deuterium to Hydrogen (D/H) ratio as a function of their formation location from the Sun. This is difficult to explain with a classical protoplanetary disk model that has a decreasing temperature structure with radius from the Sun. We want to understand if a protoplanetary disk with a dead zone, a region of zero or low turbulence, can explain the measured D/H values in comets and chondrites. We use time snapshots of a vertically layered disk model with turbulent surface layers and a dead zone at the midplane. The disk has a non-monotonic temperature structure due to increased heating from self-gravity in the outer parts of the dead zone. We couple this to a D/H ratio evolution model in order to quantify the effect of such thermal profiles on D/H enrichment in the nebula. We find that the local temperature peak in the disk can explain the diversity in the D/H ratios of different chondritic families. This disk temperature profile leads to a non-monotonic D/H enrichment evolution, allowing these families to acquire their different D/H values while forming in close proximity. The formation order we infer for these families is compatible with that inferred from their water abundances. However, we find that even for very young disks, the thermal profile reversal is too close to the Sun to be relevant for comets. [1] Ali-Dib, M., Martin, R. G., Petit, J.-M., Mousis, O., Vernazza, P., and Lunine, J. I. (2015, in press, A&A). arXiv:1508.00263.
High-performance etching of multilevel phase-type Fresnel zone plates with large apertures
NASA Astrophysics Data System (ADS)
Guo, Chengli; Zhang, Zhiyu; Xue, Donglin; Li, Longxiang; Wang, Ruoqiu; Zhou, Xiaoguang; Zhang, Feng; Zhang, Xuejun
2018-01-01
To ensure the etching depth uniformity of large-aperture Fresnel zone plates (FZPs) with controllable depths, a combination of a point source ion beam with a dwell-time algorithm has been proposed. According to the obtained distribution of the removal function, the latter can be used to optimize the etching time matrix by minimizing the root-mean-square error between the simulation results and the design value. Owing to the convolution operation in the utilized algorithm, the etching depth error is insensitive to the etching rate fluctuations of the ion beam, thereby reducing the requirement for the etching stability of the ion system. As a result, a 4-level FZP with a circular aperture of 300 mm was fabricated. The obtained results showed that the etching depth uniformity of the full aperture could be reduced to below 1%, which was sufficiently accurate for meeting the use requirements of FZPs. The proposed etching method may serve as an alternative way of etching high-precision diffractive optical elements with large apertures.
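The dwell-time idea above (achieved etch depth is the convolution of the dwell-time map with the measured removal function, so dwell times are chosen to minimize the RMS error against the design depth) can be sketched in 1-D as follows. This is an illustrative least-squares reconstruction, not the authors' optimization; the removal-function values and target depth are invented.

```python
import numpy as np

def solve_dwell_times(removal, target):
    """Least-squares dwell-time map for a 1-D etch. The achieved depth
    is the convolution of dwell time with the removal function, so we
    build the convolution matrix explicitly and minimize the RMS depth
    error between the achieved and design depths."""
    n = len(target)
    A = np.zeros((n, n))
    center = len(removal) // 2
    for i in range(n):
        for j in range(n):
            k = i - j + center
            if 0 <= k < len(removal):
                A[i, j] = removal[k]          # beam footprint contribution
    dwell, *_ = np.linalg.lstsq(A, target, rcond=None)
    rms = np.sqrt(np.mean((A @ dwell - target) ** 2))
    return dwell, rms

removal = np.array([0.2, 0.6, 0.2])   # hypothetical ion-beam footprint
target = np.full(8, 1.0)              # uniform design etch depth
dwell, rms = solve_dwell_times(removal, target)
```

Because the optimization works on the convolved depth rather than on pointwise rates, a uniform fluctuation in etching rate scales all depths together, which is consistent with the abstract's claim that depth error is insensitive to rate fluctuations.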
Thrash, J Cameron; Seitz, Kiley W; Baker, Brett J; Temperton, Ben; Gillies, Lauren E; Rabalais, Nancy N; Henrissat, Bernard; Mason, Olivia U
2017-09-12
Marine regions that have seasonal to long-term low dissolved oxygen (DO) concentrations, sometimes called "dead zones," are increasing in number and severity around the globe with deleterious effects on ecology and economics. One of the largest of these coastal dead zones occurs on the continental shelf of the northern Gulf of Mexico (nGOM), which results from eutrophication-enhanced bacterioplankton respiration and strong seasonal stratification. Previous research in this dead zone revealed the presence of multiple cosmopolitan bacterioplankton lineages that have eluded cultivation, and thus their metabolic roles in this ecosystem remain unknown. We used a coupled shotgun metagenomic and metatranscriptomic approach to determine the metabolic potential of Marine Group II Euryarchaeota, SAR406, and SAR202. We recovered multiple high-quality, nearly complete genomes from all three groups as well as candidate phyla usually associated with anoxic environments: Parcubacteria (OD1) and Peregrinibacteria. Two additional groups with putative assignments to ACD39 and PAUC34f supplement the metabolic contributions by uncultivated taxa. Our results indicate active metabolism in all groups, including prevalent aerobic respiration, with concurrent expression of genes for nitrate reduction in SAR406 and SAR202, and dissimilatory nitrite reduction to ammonia and sulfur reduction by SAR406. We also report a variety of active heterotrophic carbon processing mechanisms, including degradation of complex carbohydrate compounds by SAR406, SAR202, ACD39, and PAUC34f. Together, these data help constrain the metabolic contributions from uncultivated groups in the nGOM during periods of low DO and suggest roles for these organisms in the breakdown of complex organic matter. IMPORTANCE Dead zones receive their name primarily from the reduction of eukaryotic macrobiota (demersal fish, shrimp, etc.) that are also key coastal fisheries.
Excess nutrients contributed from anthropogenic activity such as fertilizer runoff result in algal blooms and therefore ample new carbon for aerobic microbial metabolism. Combined with strong stratification, microbial respiration reduces oxygen in shelf bottom waters to levels unfit for many animals (termed hypoxia). The nGOM shelf remains one of the largest eutrophication-driven hypoxic zones in the world, yet despite its potential as a model study system, the microbial metabolisms underlying and resulting from this phenomenon, many of which occur in bacterioplankton from poorly understood lineages, have received only preliminary study. Our work details the metabolic potential and gene expression activity for uncultivated lineages across several low DO sites in the nGOM, improving our understanding of the active biogeochemical cycling mediated by these "microbial dark matter" taxa during hypoxia. Copyright © 2017 Thrash et al.
Computer-aided design of high-contact-ratio gears for minimum dynamic load and stress
NASA Technical Reports Server (NTRS)
Lin, Hsiang Hsi; Lee, Chinwai; Oswald, Fred B.; Townsend, Dennis P.
1990-01-01
A computer-aided design procedure is presented for minimizing dynamic effects on high contact ratio gears by modification of the tooth profile. Both linear and parabolic tooth profile modifications of high contact ratio gears under various loading conditions are examined and compared. The effects of the total amount of modification and the length of the modification zone were systematically studied at various loads and speeds to find the optimum profile design for minimizing the dynamic load and the tooth bending stress. Parabolic profile modification is preferred over linear profile modification for high contact ratio gears because of its lower sensitivity to manufacturing errors. For parabolic modification, a greater amount of modification at the tooth tip and a longer modification zone are required. Design charts are presented for high contact ratio gears with various profile modifications operating under a range of loads. A procedure is illustrated for using the charts to find the optimum profile design.
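The difference between the two relief shapes can be illustrated with a minimal sketch. This is not the paper's design code; the function names, the 20 µm total modification, and the unit-length modification zone are assumptions for demonstration only.

```python
def linear_relief(x, total, zone):
    """Linear tip relief: material removal grows proportionally along the zone.

    x is the distance into the modification zone, `total` the modification at
    the tooth tip, `zone` the modification zone length (all hypothetical units)."""
    if x <= 0:
        return 0.0
    return total * min(x / zone, 1.0)

def parabolic_relief(x, total, zone):
    """Parabolic tip relief: shallow near the zone start, steep at the tip."""
    if x <= 0:
        return 0.0
    return total * min(x / zone, 1.0) ** 2

# Near the start of the modification zone the parabolic profile removes far
# less material than the linear one, which is consistent with its lower
# sensitivity to manufacturing (spacing) errors noted in the abstract.
lin = linear_relief(0.2, total=20e-6, zone=1.0)
par = parabolic_relief(0.2, total=20e-6, zone=1.0)
```

At 20% of the way into the zone, the parabolic relief is only 4% of the total modification versus 20% for the linear relief, illustrating why a greater tip modification and longer zone are needed in the parabolic case.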
The 'Soil Cover App' - a new tool for fast determination of dead and living biomass on soil
NASA Astrophysics Data System (ADS)
Bauer, Thomas; Strauss, Peter; Riegler-Nurscher, Peter; Prankl, Johann; Prankl, Heinrich
2017-04-01
Worldwide, many agricultural practices aim at soil protection strategies using living or dead biomass as soil cover. Especially when management practices focus on soil erosion mitigation, the effectiveness of these practices is directly driven by the amount of soil cover left on the soil surface. Hence there is a need for quick and reliable methods of soil cover estimation, not only for living biomass but particularly for dead biomass (mulch). Available methods for soil cover measurement are either subjective, depending on an educated guess, or time consuming, e.g., if the image is analysed manually at grid points. We therefore developed a mobile application using an algorithm based on entangled forest classification. The final output of the algorithm gives classified labels for each pixel of the input image as well as the percentage of each class: living biomass, dead biomass, stones and soil. Our training dataset consisted of more than 250 different images and their annotated class information. Images were taken under a range of environmental conditions such as varying light, soil coverage between 0% and 100%, and different materials such as living plants, residues, straw material and stones. We compared the results provided by our mobile application with a data set of 180 images that had been manually annotated. A comparison between both methods revealed a regression slope of 0.964 with a coefficient of determination R2 = 0.92, corresponding to an average error of about 4%. While the average error of living plant classification was about 3%, dead residue classification resulted in an 8% error. Thus the new mobile application tool offers a fast and easy way to obtain information on the protective potential of a particular agricultural management site.
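The final tally step (per-pixel labels to class percentages) can be sketched as follows. This is a minimal illustration, not the app's implementation; the integer label encoding and class ordering are assumed.

```python
import numpy as np

# Hypothetical label order; the app's actual encoding is not published here.
CLASSES = ["living", "dead", "stone", "soil"]

def cover_fractions(label_image):
    """Return the percentage of pixels assigned to each cover class.

    `label_image` holds one integer class label per pixel, as a per-pixel
    classifier (such as an entangled-forest model) would produce."""
    labels = np.asarray(label_image)
    total = labels.size
    return {name: 100.0 * np.count_nonzero(labels == i) / total
            for i, name in enumerate(CLASSES)}

# Toy 2x4 label image: 3 living, 2 dead, 1 stone, 2 soil pixels.
demo = [[0, 0, 1, 3],
        [0, 1, 2, 3]]
pct = cover_fractions(demo)
```

For the toy image this yields 37.5% living, 25% dead, 12.5% stone and 25% soil cover, summing to 100% by construction.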
MacIntyre, Hugh L; Cullen, John J
2016-08-01
Regulations for ballast water treatment specify limits on the concentrations of living cells in discharge water. The vital stains fluorescein diacetate (FDA) and 5-chloromethylfluorescein diacetate (CMFDA) in combination have been recommended for use in verification of ballast water treatment technology. We tested the effectiveness of FDA and CMFDA, singly and in combination, in discriminating between living and heat-killed populations of 24 species of phytoplankton from seven divisions, verifying with quantitative growth assays that uniformly live and dead populations were compared. The diagnostic signal, per-cell fluorescence intensity, was measured by flow cytometry and alternate discriminatory thresholds were defined statistically from the frequency distributions of the dead or living cells. Species were clustered by staining patterns: for four species, the staining of live versus dead cells was distinct, and live-dead classification was essentially error free. But overlap between the frequency distributions of living and heat-killed cells in the other taxa led to unavoidable errors, well in excess of 20% in many. In 4 very weakly staining taxa, the mean fluorescence intensity in the heat-killed cells was higher than that of the living cells, which is inconsistent with the assumptions of the method. Applying the criteria of ≤5% false negative plus ≤5% false positive errors, and no significant loss of cells due to staining, FDA and FDA+CMFDA gave acceptably accurate results for only 8-10 of 24 species (i.e., 33%-42%). CMFDA was the least effective stain and its addition to FDA did not improve the performance of FDA alone. © 2016 The Authors. Journal of Phycology published by Wiley Periodicals, Inc. on behalf of Phycological Society of America.
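The threshold-based live/dead discrimination can be illustrated with a small sketch. The synthetic populations, the threshold, and the error-rate definitions below are illustrative stand-ins, not the study's data or analysis code.

```python
import numpy as np

def classification_errors(live_intensity, dead_intensity, threshold):
    """Error rates for the rule 'per-cell fluorescence above threshold = live'."""
    live = np.asarray(live_intensity, dtype=float)
    dead = np.asarray(dead_intensity, dtype=float)
    false_negative = float(np.mean(live <= threshold))  # live cells scored dead
    false_positive = float(np.mean(dead > threshold))   # dead cells scored live
    return false_negative, false_positive

# Well-separated synthetic populations: classification is nearly error free,
# as for the four distinctly staining species in the study. Overlapping
# distributions would drive both error rates up, as observed in other taxa.
rng = np.random.default_rng(0)
live = rng.normal(100.0, 10.0, 1000)    # per-cell intensity, living cells
dead = rng.normal(20.0, 10.0, 1000)     # per-cell intensity, heat-killed cells
fn, fp = classification_errors(live, dead, threshold=60.0)
```

Under the study's acceptance criteria (≤5% false negative plus ≤5% false positive), this synthetic case would pass; taxa whose distributions overlap, or where killed cells stain more brightly than living ones, cannot.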
Theoretical Analysis of Pore Pressure Diffusion in Some Basic Rock Mechanics Experiments
NASA Astrophysics Data System (ADS)
Braun, Philipp; Ghabezloo, Siavash; Delage, Pierre; Sulem, Jean; Conil, Nathalie
2018-05-01
Non-homogeneity of the pore pressure field in a specimen is an issue for characterization of the thermo-poromechanical behaviour of low-permeability geomaterials, as in the case of the Callovo-Oxfordian claystone (k < 10⁻²⁰ m²), a possible host rock for deep radioactive waste disposal in France. In tests with drained boundary conditions, excess pore pressure can result in significant errors in the measurement of material parameters. Analytical solutions are presented for the change in time of the pore pressure field in a specimen submitted to various loading paths and different rates. The pore pressure field in mechanical and thermal undrained tests is simulated with a 1D finite difference model taking into account the dead volume of the drainage system of the triaxial cell connected to the specimen. These solutions provide a simple and efficient tool for the estimation of the conditions that must hold for reliable determination of material parameters and for optimization of various test conditions to minimize the experimental duration, while keeping the measurement errors at an acceptable level.
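The 1D finite-difference treatment of pore pressure diffusion can be sketched with a minimal explicit scheme. This sketch assumes one drained end and one no-flow end with illustrative parameter values, and omits the dead-volume storage term that the paper's model couples to the drainage system.

```python
import numpy as np

def diffuse_pore_pressure(p0, c, dx, dt, steps, p_drained):
    """Explicit 1D finite-difference diffusion of excess pore pressure.

    p0: initial pressure profile along the specimen axis; c: hydraulic
    diffusivity. One end is drained (held at p_drained), the other is a
    no-flow boundary."""
    p = np.asarray(p0, dtype=float).copy()
    r = c * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    for _ in range(steps):
        lap = np.empty_like(p)
        lap[1:-1] = p[2:] - 2.0 * p[1:-1] + p[:-2]
        lap[0] = 2.0 * (p[1] - p[0])      # mirror node: no-flow end
        lap[-1] = 0.0                     # overwritten by the drained BC below
        p += r * lap
        p[-1] = p_drained                 # drained end held at the cell pressure
    return p

# A 1 MPa excess pore pressure dissipating toward a drained boundary at zero,
# with illustrative low-permeability parameters.
p = diffuse_pore_pressure(np.full(21, 1.0e6), c=1e-8, dx=0.005, dt=1.0,
                          steps=5000, p_drained=0.0)
```

After 5000 s the excess pressure has relaxed only within a few millimetres of the drained end, illustrating why excess pore pressure persists in nominally drained tests on such low-permeability specimens.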
The Dipole Segment Model for Axisymmetrical Elongated Asteroids
NASA Astrophysics Data System (ADS)
Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong
2018-02-01
Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
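The potential underlying such a model can be sketched by combining the classical closed-form potential of a homogeneous straight segment with two endpoint point masses. The parameter values below are illustrative, not fitted to 1996 HW1 or any other asteroid.

```python
import math

G = 6.674e-11  # gravitational constant (SI)

def dipole_segment_potential(x, y, seg_len, m_seg, m_tip):
    """Gravitational potential of a homogeneous straight segment of mass
    m_seg and length seg_len lying along the x-axis, centred at the origin,
    plus point masses m_tip at both extremities (a sketch of the model)."""
    half = seg_len / 2.0
    r1 = math.hypot(x - half, y)          # distance to one endpoint
    r2 = math.hypot(x + half, y)          # distance to the other endpoint
    u_segment = -(G * m_seg / seg_len) * math.log(
        (r1 + r2 + seg_len) / (r1 + r2 - seg_len))
    u_tips = -G * m_tip * (1.0 / r1 + 1.0 / r2)
    return u_segment + u_tips

u = dipole_segment_potential(0.0, 5000.0, seg_len=2000.0,
                             m_seg=4.0e12, m_tip=1.0e12)
```

In the far field this expression reduces to the point-mass potential of the total mass, so the fitted segment parameters mainly shape the near field where the acceleration error is minimized.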
Open ocean dead zones in the tropical North Atlantic Ocean
NASA Astrophysics Data System (ADS)
Karstensen, J.; Fiedler, B.; Schütte, F.; Brandt, P.; Körtzinger, A.; Fischer, G.; Zantopp, R.; Hahn, J.; Visbeck, M.; Wallace, D.
2015-04-01
Here we present first observations, from instrumentation installed on moorings and a float, of unexpectedly low (<2 μmol kg-1) oxygen environments in the open waters of the tropical North Atlantic, a region where oxygen concentration does not normally fall much below 40 μmol kg-1. The low-oxygen zones are created at shallow depth, just below the mixed layer, in the euphotic zone of cyclonic eddies and anticyclonic mode-water eddies. Both types of eddies are prone to high surface productivity. Net respiration rates for the eddies are found to be 3 to 5 times higher when compared with surrounding waters. Oxygen is lowest in the centre of the eddies, in a depth range where the swirl velocity, defining the transition between eddy and surroundings, has its maximum. It is assumed that the strong velocity at the outer rim of the eddies hampers the transport of properties across the eddies' boundary and as such isolates their cores. This is supported by a remarkably stable hydrographic structure of the eddies' cores over periods of several months. The eddies propagate westward, at about 4 to 5 km day-1, from their generation region off the West African coast into the open ocean. High productivity and accompanying respiration, paired with sluggish exchange across the eddy boundary, create the "dead zone" inside the eddies, so far only reported for coastal areas or lakes. We observe a direct impact of the open ocean dead zones on the marine ecosystem in that the diurnal vertical migration of zooplankton is suppressed inside the eddies.
Autonomous vehicle navigation utilizing fuzzy controls concepts for a next generation wheelchair.
Hansen, J D; Barrett, S F; Wright, C H G; Wilcox, M
2008-01-01
Three different positioning techniques were investigated to create an autonomous vehicle that could accurately navigate towards a goal: Global Positioning System (GPS), compass dead reckoning, and Ackerman steering. Each technique utilized a fuzzy logic controller that maneuvered a four-wheel car towards a target. The reliability and the accuracy of the navigation methods were investigated by modeling the algorithms in software and implementing them in hardware. To implement the techniques in hardware, positioning sensors were interfaced to a remote control car and a microprocessor. The microprocessor utilized the sensor measurements to orient the car with respect to the target. Next, a fuzzy logic control algorithm adjusted the front wheel steering angle to minimize the difference between the heading and bearing. After minimizing the heading error, the car maintained a straight steering angle along its path to the final destination. The results of this research can be used to develop applications that require precise navigation. The design techniques can also be implemented on alternate platforms such as a wheelchair to assist with autonomous navigation.
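A fuzzy steering rule of the kind described can be sketched with triangular membership functions and weighted-average defuzzification. The membership breakpoints and output angles below are hypothetical, not the controller reported in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(heading_error_deg):
    """Map the heading-minus-bearing error to a front-wheel angle in degrees.

    Three rules: error negative -> steer left, near zero -> straight,
    positive -> steer right; defuzzified by a weighted average of rule outputs."""
    rules = [
        (tri(heading_error_deg, -90.0, -45.0, 0.0), -30.0),  # steer left
        (tri(heading_error_deg, -45.0,   0.0, 45.0),  0.0),  # go straight
        (tri(heading_error_deg,   0.0,  45.0, 90.0), 30.0),  # steer right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

angle = fuzzy_steering(20.0)
```

The weighted average blends adjacent rules, so the steering command varies smoothly with the heading error and goes to zero once heading and bearing coincide, matching the behaviour described above.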
Greenhalgh, S; Faeth, P
2001-11-22
Nutrient pollution, now the leading cause of water quality impairment in the U.S., has had significant impact on the nation's waterways. Excessive nutrient pollution has been linked to habitat loss, fish kills, blooms of toxic algae, and hypoxia (oxygen-depleted water). The hypoxic "dead zone" in the Gulf of Mexico is one of the most striking illustrations of what can happen when too many nutrients from inland watersheds reach coastal areas. Despite programs to improve municipal wastewater treatment facilities, more stringent industrial wastewater requirements, and agricultural programs designed to reduce sediment loads in waterways, water quality and nutrient pollution continue to be a problem and in many cases have worsened. We undertook a policy analysis to assess how the agricultural community could better reduce its contribution to the dead zone and also to evaluate the synergistic impacts of these policies on other environmental concerns such as climate change. Using a sectorial model of U.S. agriculture, we compared policies including untargeted conservation subsidies, nutrient trading, Conservation Reserve Program extension, agricultural sales of carbon and greenhouse gas credits, and fertilizer reduction. This economic and environmental analysis is watershed-based, primarily focusing on nitrogen in the Mississippi River basin, which allowed us to assess the distribution of nitrogen reduction in streams, environmental co-benefits, and impact on agricultural cash flows within the Mississippi River basin from various options. The model incorporates a number of environmental factors, making it possible to get a more complete picture of the costs and co-benefits of nutrient reduction. These elements also help to identify the policy options that minimize the costs to farmers and maximize benefits to society.
Application of artificial neural networks to chemostratigraphy
NASA Astrophysics Data System (ADS)
Malmgren, Björn A.; Nordlund, Ulf
1996-08-01
Artificial neural networks, a branch of artificial intelligence, are computer systems formed by a number of simple, highly interconnected processing units that have the ability to learn a set of target vectors from a set of associated input signals. Neural networks learn by self-adjusting a set of parameters, using some pertinent algorithm to minimize the error between the desired output and the network output. We explore the potential of this approach in solving a problem involving classification of geochemical data. The data, taken from the literature, are derived from four late Quaternary zones of volcanic ash of basaltic and rhyolitic origin from the Norwegian Sea. These ash layers span the oxygen isotope zones 1, 5, 7, and 11, respectively (last 420,000 years). The data consist of nine geochemical variables (oxides) determined in each of 183 samples. We employed a three-layer back propagation neural network to assess its efficiency to optimally differentiate samples from the four ash zones on the basis of their geochemical composition. For comparison, three statistical pattern recognition techniques, linear discriminant analysis, the k-nearest neighbor (k-NN) technique, and SIMCA (soft independent modeling of class analogy), were applied to the same data. All of these showed considerably higher error rates than the artificial neural network, indicating that the back propagation network was indeed more powerful in correctly classifying the ash particles to the appropriate zone on the basis of their geochemical composition.
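A three-layer back propagation network of the kind described can be sketched in a few lines of NumPy. The toy two-feature, two-class data below stands in for the nine-oxide, four-zone problem; the layer sizes, learning rate, and squared-error loss are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the geochemical data: 2 oxide-like features, 2 classes.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]                                  # one-hot target vectors

# "Three-layer" = input, one hidden layer, output.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):                             # full-batch gradient descent
    H = sigmoid(X @ W1 + b1)                      # hidden activations
    O = sigmoid(H @ W2 + b2)                      # network output
    dO = (O - T) * O * (1 - O)                    # output delta (squared error)
    dH = (dO @ W2.T) * H * (1 - H)                # backpropagated hidden delta
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

pred = np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1)
accuracy = float(np.mean(pred == y))
```

Training accuracy on this separable toy problem is high; on the real data, classification error on held-out samples is the relevant comparison against discriminant analysis, k-NN and SIMCA.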
Minimization of Dead-Periods in MRI Pulse Sequences for Imaging Oblique Planes
Atalar, Ergin; McVeigh, Elliot R.
2007-01-01
With the advent of breath-hold MR cardiac imaging techniques, the minimization of TR and TE for oblique planes has become a critical issue. The slew rates and maximum currents of gradient amplifiers limit the minimum possible TR and TE by adding dead-periods to the pulse sequences. We propose a method of designing gradient waveforms that will be applied to the amplifiers instead of the slice, readout, and phase encoding waveforms. Because this method ensures that the gradient amplifiers will always switch at their maximum slew rate, it results in the minimum possible dead-period for given imaging parameters and scan plane position. A GRASS pulse sequence has been designed and ultra-short TR and TE values have been obtained with standard gradient amplifiers and coils. For some oblique slices, we have achieved shorter TR and TE values than those for nonoblique slices. PMID:7869900
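The core slew-rate constraint can be illustrated by computing the shortest trapezoidal gradient lobe that achieves a required gradient area. This is a generic gradient-design calculation, not the paper's waveform algorithm, and the amplitude/slew numbers are illustrative.

```python
def trapezoid_times(area, g_max, slew):
    """Shortest trapezoid (or triangle) reaching gradient area `area`
    under amplitude limit `g_max` and slew-rate limit `slew`.

    Returns (ramp_time, flat_time) in seconds for consistent SI units:
    area in T/m*s, g_max in T/m, slew in T/m/s. A waveform that always
    ramps at the maximum slew rate has no dead-periods to add."""
    ramp_to_max = g_max / slew
    if area <= slew * ramp_to_max**2:          # a triangle lobe suffices
        ramp = (area / slew) ** 0.5
        return ramp, 0.0
    flat = area / g_max - ramp_to_max          # trapezoid with flat top
    return ramp_to_max, flat

# Illustrative numbers: 20 mT/m amplitude limit, 100 T/m/s slew limit.
ramp, flat = trapezoid_times(area=1e-4, g_max=0.02, slew=100.0)
```

For oblique planes the same budget applies per amplifier axis after rotation, which is why designing the waveforms on the amplifier axes (rather than on slice/readout/phase axes) lets every amplifier switch at its full slew rate.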
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
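The theoretical estimate referenced above can be sketched as a one-line calculation. The 1.158 prefactor is the commonly quoted Uenishi & Rice value for linear slip-weakening nucleation, and the material values below are illustrative, so treat this as an assumption-laden sketch rather than the study's code.

```python
def critical_half_length(mu_eff, tau_p, tau_r, d_c):
    """Uenishi & Rice-type estimate of the critical nucleation half-length
    under linear slip-weakening friction.

    mu_eff: effective shear modulus (mu for mode III, mu/(1-nu) for mode II);
    tau_p, tau_r: peak and residual strength (Pa); d_c: slip-weakening
    distance (m). The weakening rate is W = (tau_p - tau_r) / d_c."""
    W = (tau_p - tau_r) / d_c
    return 1.158 * mu_eff / W

# Illustrative crustal values: 30 GPa modulus, 20 MPa strength drop, 0.4 m d_c.
a_c = critical_half_length(mu_eff=30e9, tau_p=80e6, tau_r=60e6, d_c=0.4)
```

Because such estimates depend only on the weakening rate and modulus, not on the background stress, they can underpredict the required initiation zone size in highly stressed configurations, which is the regime the new equations in this study address.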
NASA Astrophysics Data System (ADS)
Regály, Zs.; Juhász, A.; Nehéz, D.
2017-12-01
Recent submillimeter observations show nonaxisymmetric brightness distributions with a horseshoe-like morphology for more than a dozen transition disks. The most-accepted explanation for the observed asymmetries is the accumulation of dust in large-scale vortices. Protoplanetary disks’ vortices can form by the excitation of Rossby wave instability in the vicinity of a steep pressure gradient, which can develop at the edges of a giant planet–carved gap or at the edges of an accretionally inactive zone. We studied the formation and evolution of vortices formed in these two distinct scenarios by means of two-dimensional locally isothermal hydrodynamic simulations. We found that the vortex formed at the edge of a planetary gap is short-lived, unless the disk is nearly inviscid. In contrast, the vortex formed at the outer edge of a dead zone is long-lived. The vortex morphology can be significantly different in the two scenarios: the vortex radial and azimuthal extensions are ∼1.5 and ∼3.5 times larger for the dead-zone edge compared to gap models. In some particular cases, the vortex aspect ratios can be similar in the two scenarios; however, the vortex azimuthal extensions can be used to distinguish the vortex formation mechanisms. We calculated predictions for vortex observability in the submillimeter continuum with ALMA. We found that the azimuthal and radial extent of the brightness asymmetry correlates with the vortex formation process within the limitations of α-viscosity prescription.
NOAA-NASA Coastal Zone Color Scanner reanalysis effort.
Gregg, Watson W; Conkright, Margarita E; O'Reilly, John E; Patt, Frederick S; Wang, Menghua H; Yoder, James A; Casey, Nancy W
2002-03-20
Satellite observations of global ocean chlorophyll span more than two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration (NOAA-NASA) CZCS reanalysis (NCR) effort. NCR consisted of (1) algorithm improvement (AI), where CZCS processing algorithms were improved with modernized atmospheric correction and bio-optical algorithms, and (2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.
Diatoms dominate the eukaryotic metatranscriptome during spring in coastal 'dead zone' sediments.
Broman, Elias; Sachpazidou, Varvara; Dopson, Mark; Hylander, Samuel
2017-10-11
An important characteristic of marine sediments is the oxygen concentration that affects many central metabolic processes. There has been a widespread increase in hypoxia in coastal systems (referred to as 'dead zones') mainly caused by eutrophication. Hence, it is central to understand the metabolism and ecology of eukaryotic life in sediments during changing oxygen conditions. Therefore, we sampled coastal 'dead zone' Baltic Sea sediment during autumn and spring, and analysed the eukaryotic metatranscriptome from field samples and after incubation in the dark under oxic or anoxic conditions. Bacillariophyta (diatoms) dominated the eukaryotic metatranscriptome in spring and were also abundant during autumn. A large fraction of the diatom RNA reads was associated with the photosystems suggesting a constitutive expression in darkness. Microscope observation showed intact diatom cells and these would, if hatched, represent a significant part of the pelagic phytoplankton biomass. Oxygenation did not significantly change the relative proportion of diatoms nor resulted in any major shifts in metabolic 'signatures'. By contrast, diatoms rapidly responded when exposed to light suggesting that light is limiting diatom development in hypoxic sediments. Hence, it is suggested that diatoms in hypoxic sediments are on 'standby' to exploit the environment if they reach suitable habitats. © 2017 The Author(s).
Yang, Kun; Wu, Yanqing; Huang, Fenglei
2018-08-15
A physical model is developed to describe the viscoelastic-plastic deformation, cracking damage, and ignition behavior of polymer-bonded explosives (PBXs) under mild impact. This model improves on the viscoelastic-statistical crack mechanical model (Visco-SCRAM) in several respects. (i) The proposed model introduces rate-dependent plasticity into the framework which is more suitable for explosives with relatively high binder content. (ii) Damage evolution is calculated by the generalized Griffith instability criterion with the dominant (most unstable) crack size rather than the averaged crack size over all crack orientations. (iii) The fast burning of cracks following ignition and the effects of gaseous products on crack opening are considered. The predicted uniaxial and triaxial stress-strain responses of PBX9501 sample under dynamic compression loading are presented to illustrate the main features of the materials. For an uncovered cylindrical PBX charge impacted by a flat-nosed rod, the simulated results show that a triangular-shaped dead zone is formed beneath the front of the rod. The cracks in the dead zone are stable due to friction-locked stress state, whereas the cracks near the front edges of dead zone become unstable and turn into hotspots due to high-shear effects. Copyright © 2018 Elsevier B.V. All rights reserved.
Fuzzy Adaptive Control Design and Discretization for a Class of Nonlinear Uncertain Systems.
Zhao, Xudong; Shi, Peng; Zheng, Xiaolong
2016-06-01
In this paper, tracking control problems are investigated for a class of uncertain nonlinear systems in lower triangular form. First, a state-feedback controller is designed by using adaptive backstepping technique and the universal approximation ability of fuzzy logic systems. During the design procedure, a developed method with less computation is proposed by constructing one maximum adaptive parameter. Furthermore, adaptive controllers with nonsymmetric dead-zone are also designed for the systems. Then, a sampled-data control scheme is presented to discretize the obtained continuous-time controller by using the forward Euler method. It is shown that both proposed continuous and discrete controllers can ensure that the system output tracks the target signal with a small bounded error and the other closed-loop signals remain bounded. Two simulation examples are presented to verify the effectiveness and applicability of the proposed new design techniques.
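The nonsymmetric dead-zone nonlinearity that such controllers must compensate can be written compactly. The breakpoints and slopes below are illustrative, not parameters from the paper.

```python
def dead_zone(u, bl, br, ml=1.0, mr=1.0):
    """Nonsymmetric dead-zone nonlinearity.

    Output is zero while the input u lies inside the band [bl, br]
    (bl < 0 < br) and grows linearly with slope ml to the left of bl
    and slope mr to the right of br."""
    if u > br:
        return mr * (u - br)
    if u < bl:
        return ml * (u - bl)
    return 0.0

# Input beyond the right breakpoint: only the excess over br gets through.
v = dead_zone(1.5, bl=-0.5, br=1.0)
```

Inside the band the actuator produces no output at all, which is why naive adaptive laws stall there; the asymmetry (bl ≠ -br, ml ≠ mr) is what makes the compensation problem nontrivial.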
Jack Rabbit Pretest 2021E PT3 Photonic Doppler Velocimetry Data Volume 3 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT3 was fired on March 12, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT3, 120 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 10, 20, 25, 30, 35, 40, 50 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The earliest PDV signal extinction was 41.7 microseconds at 30 millimeters. The latest PDV signal extinction time was 65.0 microseconds at 10 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 40 millimeters at 10.9 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1636 meters per second. At 40 millimeters the last measured velocity was 2056 meters per second. The low-to-high velocity ratio was 0.80. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 64.6 kilobars at 15.7 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 2.2 microseconds.
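The integration and differentiation steps described in these reports can be sketched generically. The areal mass and the toy velocity trace below are hypothetical, and pressure is approximated as areal mass times acceleration (Newton's second law per unit plate area), which is an assumption of this sketch rather than the reports' stated method.

```python
import numpy as np

def plate_motion(t_us, v_mps, areal_mass_kg_m2):
    """Displacement (integral) and pressure estimate (derivative) from a
    plate velocity-time trace.

    t_us: sample times in microseconds; v_mps: velocities in m/s;
    areal_mass_kg_m2: hypothetical plate mass per unit area."""
    t = np.asarray(t_us, dtype=float) * 1e-6           # microseconds -> s
    v = np.asarray(v_mps, dtype=float)
    # Trapezoid-rule integration of velocity gives plate displacement.
    displacement = float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t)))
    accel = np.gradient(v, t)                          # m/s^2
    pressure_pa = areal_mass_kg_m2 * accel             # Pa (1 kbar = 1e8 Pa)
    return displacement, pressure_pa

# Toy trace: the plate jumps off, accelerates, then coasts.
t = [0.0, 1.0, 2.0, 3.0]                               # microseconds
v = [0.0, 500.0, 1000.0, 1000.0]                       # m/s
d, p = plate_motion(t, v, areal_mass_kg_m2=20.0)
```

Repeating this per probe position yields the cross-section profiles (from the integrals) and the peak-pressure estimates (from the derivatives) that the reports tabulate.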
Jack Rabbit Pretest 2021E PT4 Photonic Doppler Velocimetry Data Volume 4 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT4 was fired on March 19, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT4, 120 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 10, 20, 25, 30, 35, 40, 50 millimeters from the central axis. The experiment was shot at an ambient room temperature of 64 degrees Fahrenheit. The earliest PDV signal extinction was 44.9 microseconds at 30 millimeters. The latest PDV signal extinction time was 69.5 microseconds at 10 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 50 millimeters at 13.3 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1558 meters per second. At 40 millimeters the last measured velocity was 2019 meters per second. The low-to-high velocity ratio was 0.77. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 98.6 kilobars at 15.0 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 0.7 microseconds.
Jack Rabbit Pretest 2021E PT5 Photonic Doppler Velocimetry Data Volume 5 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT5 was fired on March 17, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT5, 160 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 20, 30, 35, 45, 55, 65, 75 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The earliest PDV signal extinction was 40.0 microseconds at 45 millimeters. The latest PDV signal extinction time was 64.9 microseconds at 20 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 55 millimeters at 12.8 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1877 meters per second. At 65 millimeters the last measured velocity was 2277 meters per second. The low-to-high velocity ratio was 0.82. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 78 kilobars at 11.9 and 21.2 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 4.1 microseconds.
Jack Rabbit Pretest 2021E PT7 Photonic Doppler Velocimetry Data Volume 7 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT7 experiment was fired on April 3, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT7, 160 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 20, 30, 35, 45, 55, 65, 75 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The earliest PDV signal extinction was 50.7 microseconds at 45 millimeters. The latest PDV signal extinction time was 65.0 microseconds at 20 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 55 millimeters at 15.2 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1447 meters per second. At 65 millimeters the last measured velocity was 2360 meters per second. The low-to-high velocity ratio was 0.61. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 49 kilobars at 23.3 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 4.6 microseconds.
NASA Astrophysics Data System (ADS)
Closson, D.; Abou Karaki, N.; Milisavljevic, N.; Pasquali, P.; Holecz, F.; Bouaraba, A.
2012-04-01
For several decades, surface water and groundwater in the closed Dead Sea basin have experienced excessive exploitation. In fifty years, the level of the terminal lake has fallen by about 30 meters and its surface has shrunk by one third. The coastal zone is the one that best shows the stigma of the general environmental degradation, including sinkholes, landslides and subsidence. For years, these phenomena have been relatively well documented, particularly sinkholes and subsidence. Over the past five years, field observations combined with ground deformation measurements by radar interferometric stacking techniques have shown that the intensity (size, frequency) of the collapses is increasing in the most affected part of the southern Dead Sea area. The zones of the dried-up Lynch Strait, the Lisan Peninsula and Ghor Al Haditha in Jordan seem the most affected. Very high resolution (0.5 to 2 m) GeoEye satellite images have shown that many sinkholes have also formed below the level of the Dead Sea; the water transparency allows observations up to several meters deep. These data contribute to the validation of models developed in connection with the deformation of the fresh/saline water interface, driven by an increasingly pronounced imbalance between the levels of the surrounding groundwater and of the terminal lake.
Capelo, J L; Galesio, M M; Felisberto, G M; Vaz, C; Pessoa, J Costa
2005-06-15
Analytical minimalism is a concept that deals with the optimization of all stages of an analytical procedure so that it becomes less time-, cost-, sample-, reagent- and energy-consuming. The guidelines provided in the USEPA extraction method 3550B recommend the use of focused ultrasound (FU), i.e., probe sonication, for the solid-liquid extraction of polycyclic aromatic hydrocarbons, PAHs, but ignore the principle of analytical minimalism. The problems related to dead sonication zones, often present when high volumes are sonicated with a probe, are also not addressed. In this work, we demonstrate that successful extraction and quantification of PAHs from sediments can be done with low sample mass (0.125 g), low reagent volume (4 ml), short sonication time (3 min) and low sonication amplitude (40%). Two variables are particularly taken into account for total extraction: (i) the design of the extraction vessel and (ii) the solvent used to carry out the extraction. Results showed PAH recoveries (EPA priority list) ranging between 77 and 101%, accounting for more than 95% for most of the PAHs studied here, as compared with the values obtained after Soxhlet extraction. Taking into account the results reported in this work, we recommend a revision of the EPA guidelines for PAH extraction from solid matrices with focused ultrasound, so that they match the analytical minimalism concept.
Generalized energy detector for weak random signals via vibrational resonance
NASA Astrophysics Data System (ADS)
Ren, Yuhao; Pan, Yan; Duan, Fabing
2018-03-01
In this paper, the generalized energy (GE) detector is investigated for detecting weak random signals via vibrational resonance (VR). By artificially injecting the high-frequency sinusoidal interferences into an array of GE statistics formed for the detector, we show that the normalized asymptotic efficacy can be maximized when the interference intensity takes an appropriate non-zero value. It is demonstrated that the normalized asymptotic efficacy of the dead-zone-limiter detector, aided by the VR mechanism, outperforms that of the GE detector without the help of high-frequency interferences. Moreover, the maximum normalized asymptotic efficacy of dead-zone-limiter detectors can approach a quarter of the second-order Fisher information for a wide range of non-Gaussian noise types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
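The Parzen-window step can be illustrated with Rényi's quadratic entropy, whose kernel estimate has a closed form for Gaussian kernels. This is a generic minimum-error-entropy sketch, not the paper's recursive controller, and the kernel width is an arbitrary choice:

```python
import math
import random

def gaussian(x, sigma):
    """Gaussian kernel value at x."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def information_potential(errors, sigma):
    """Parzen estimate of the quadratic information potential V(e).
    For Gaussian kernels the pairwise form is exact: the convolution of
    two kernels of width sigma is a kernel of width sigma*sqrt(2)."""
    n = len(errors)
    s = sigma * math.sqrt(2)
    return sum(gaussian(ei - ej, s) for ei in errors for ej in errors) / (n * n)

def renyi_quadratic_entropy(errors, sigma=0.5):
    """H2 = -log V; minimizing H2 is equivalent to maximizing V."""
    return -math.log(information_potential(errors, sigma))

random.seed(0)
tight = [random.gauss(0.0, 0.1) for _ in range(200)]  # concentrated tracking errors
loose = [random.gauss(0.0, 1.0) for _ in range(200)]  # spread-out tracking errors

# A controller that concentrates the error distribution lowers its entropy:
print(renyi_quadratic_entropy(tight) < renyi_quadratic_entropy(loose))  # True
```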
Optimization of insulation of a linear Fresnel collector
NASA Astrophysics Data System (ADS)
Ardekani, Mohammad Moghimi; Craig, Ken J.; Meyer, Josua P.
2017-06-01
This study presents a simulation-based optimization of the insulation around the cavity receiver of a Linear Fresnel Collector (LFC). The optimization focuses on minimizing heat losses from the cavity receiver (maximizing plant thermal efficiency) while minimizing the insulation cross-sectional area (minimizing material cost and cavity dead load), which leads to a cheaper and thermally more efficient LFC cavity receiver.
Metabolic Roles of Uncultivated Bacterioplankton Lineages in the Northern Gulf of Mexico “Dead Zone”
Seitz, Kiley W.; Temperton, Ben; Gillies, Lauren E.; Rabalais, Nancy N.; Henrissat, Bernard; Mason, Olivia U.
2017-01-01
Marine regions that have seasonal to long-term low dissolved oxygen (DO) concentrations, sometimes called “dead zones,” are increasing in number and severity around the globe with deleterious effects on ecology and economics. One of the largest of these coastal dead zones occurs on the continental shelf of the northern Gulf of Mexico (nGOM), which results from eutrophication-enhanced bacterioplankton respiration and strong seasonal stratification. Previous research in this dead zone revealed the presence of multiple cosmopolitan bacterioplankton lineages that have eluded cultivation, and thus their metabolic roles in this ecosystem remain unknown. We used a coupled shotgun metagenomic and metatranscriptomic approach to determine the metabolic potential of Marine Group II Euryarchaeota, SAR406, and SAR202. We recovered multiple high-quality, nearly complete genomes from all three groups as well as candidate phyla usually associated with anoxic environments—Parcubacteria (OD1) and Peregrinibacteria. Two additional groups with putative assignments to ACD39 and PAUC34f supplement the metabolic contributions by uncultivated taxa. Our results indicate active metabolism in all groups, including prevalent aerobic respiration, with concurrent expression of genes for nitrate reduction in SAR406 and SAR202, and dissimilatory nitrite reduction to ammonia and sulfur reduction by SAR406. We also report a variety of active heterotrophic carbon processing mechanisms, including degradation of complex carbohydrate compounds by SAR406, SAR202, ACD39, and PAUC34f. Together, these data help constrain the metabolic contributions from uncultivated groups in the nGOM during periods of low DO and suggest roles for these organisms in the breakdown of complex organic matter. PMID:28900024
The growth of deactivated layers on CsI(Na) scintillating crystals
NASA Technical Reports Server (NTRS)
Goodman, N. B.
1975-01-01
An effective and sensitive measurement of the depth of a deactivated or dead layer can be obtained from the relative attenuation of the 22.162 keV and 87.9 keV X-rays emitted by Cd-109. The alpha particles emitted by Am-241 are also useful in measuring dead layers less than 25 microns. The properties and temporal development of dead layers are discussed in detail. The rate of growth of a dead layer is closely related to the ambient humidity, and the damage to the crystal is irreversible by any known process. The dead layer can be minimized by polishing all crystal surfaces and by keeping the crystal in a vacuum or a dry atmosphere. Since a dead layer seriously inhibits the response of a crystal to X-rays of energies below approximately 20 keV, CsI(Na) detectors should not be used at these energies unless precautions are taken to ensure that no dead layer forms.
Montazerhodjat, Vahid; Chaudhuri, Shomesh E; Sargent, Daniel J; Lo, Andrew W
2017-09-14
Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. 
For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences.
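The decision-analytic idea can be caricatured in a few lines: pick the alpha that minimizes a weighted sum of false-positive and false-negative harms for a one-sided z-test. All probabilities, costs, effect size, and sample size below are hypothetical placeholders, not values from the NCI SEER or Alliance analyses:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_quantile(p):
    """Inverse normal CDF by bisection (adequate for this sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def expected_harm(alpha, p_alt, cost_fp, cost_fn, effect, n):
    """Toy harm model for a one-sided z-test with fixed sample size n:
    harm = P(H0)*cost_fp*alpha + P(H1)*cost_fn*(1 - power)."""
    power = phi(effect * math.sqrt(n) - z_quantile(1.0 - alpha))
    return (1.0 - p_alt) * cost_fp * alpha + p_alt * cost_fn * (1.0 - power)

def optimal_alpha(p_alt, cost_fp, cost_fn, effect, n):
    """Grid search over alpha in (0.001, 0.499)."""
    grid = [a / 1000.0 for a in range(1, 500)]
    return min(grid, key=lambda a: expected_harm(a, p_alt, cost_fp, cost_fn, effect, n))

# Missing a real effect very costly (deadly disease, no alternatives) versus
# false positives very costly (good existing treatments, long survival):
a_deadly = optimal_alpha(p_alt=0.3, cost_fp=1.0, cost_fn=20.0, effect=0.2, n=100)
a_mild = optimal_alpha(p_alt=0.3, cost_fp=20.0, cost_fn=1.0, effect=0.2, n=100)
print(a_deadly > a_mild)  # True: the deadlier setting tolerates a larger alpha
```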
Microfluidic devices connected to fused-silica capillaries with minimal dead volume.
Bings, N H; Wang, C; Skinner, C D; Colyer, C L; Thibault, P; Harrison, D J
1999-08-01
Fused-silica capillaries have been connected to microfluidic devices for capillary electrophoresis by drilling into the edge of the device using 200-μm tungsten carbide drills. The standard pointed drill bits create a hole with a conical-shaped bottom that leads to a geometric dead volume of 0.7 nL at the junction, and significant band broadening when used with 0.2-nL sample plugs. The plate numbers obtained on the fused-silica capillary connected to the chip were about 16-25% of the predicted numbers. The conical area was removed with a flat-tipped drill bit and the band broadening was substantially eliminated (on average 98% of the predicted plate numbers were observed). All measurements were made while the device was operating with an electrospray from the end of the capillary. The effective dead volume of the flat-bottom connection is minimal and allows microfluidic devices to be connected to a wide variety of external detectors.
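The quoted 0.7 nL geometric dead volume is consistent with simple cone geometry. Assuming a standard 118° twist-drill point angle (the angle is not stated in the abstract), the conical bottom of a 200-μm hole holds roughly:

```python
import math

d_um = 200.0                            # drill diameter [micrometers]
r = d_um / 2.0
half_angle = math.radians(118.0 / 2.0)  # ASSUMED standard 118-degree point angle
h = r / math.tan(half_angle)            # depth of the conical bottom [um]
v_um3 = math.pi * r ** 2 * h / 3.0      # cone volume [um^3]
v_nl = v_um3 / 1e6                      # 1 nL = 1e6 um^3
print(round(v_nl, 2))                   # 0.63, the same order as the reported 0.7 nL
```

The reported 0.7 nL implies a slightly sharper (deeper) point than the assumed 118°; a flat-tipped bit removes this volume entirely.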
Oramasionwu, Christine U; Johnson, Terence L; Zule, William A; Carda-Auten, Jessica; Golin, Carol E
2015-06-01
Ongoing injection drug use contributes to the HIV and HCV epidemics in people who inject drugs. In many places, pharmacies are the primary source of sterile syringes for people who inject drugs; thus, pharmacies provide a viable public health service that reduces blood-borne disease transmission. Replacing the supply of high dead space syringes with low dead space syringes could have far-reaching benefits that include further prevention of disease transmission in people who inject drugs and reductions in dosing inaccuracies, medication errors, and medication waste in patients who use syringes. We explored using pharmacies in a structural intervention to increase the uptake of low dead space syringes as part of a comprehensive strategy to reverse these epidemics.
Coding for Communication Channels with Dead-Time Constraints
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2004-01-01
Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,∞) have been investigated theoretically and computationally.
The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
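The (d,k) constraint and the baseline PPM framing can be made concrete with a short sketch; the frame size and symbol values below are arbitrary examples:

```python
def satisfies_dk(slots, d, k):
    """Check a binary slot sequence against a (d, k) run-length constraint:
    every run of zeros *between* ones must be at least d and at most k long."""
    ones = [i for i, s in enumerate(slots) if s == 1]
    gaps = [b - a - 1 for a, b in zip(ones, ones[1:])]  # zeros between pulses
    return all(d <= g <= k for g in gaps)

def ppm_encode(symbols, M):
    """Map each symbol 0..M-1 to an M-slot frame with exactly one pulsed slot."""
    out = []
    for s in symbols:
        frame = [0] * M
        frame[s] = 1
        out += frame
    return out

# 4-slot PPM: pulses may land in adjacent slots of consecutive frames, so
# plain PPM only guarantees d = 0; the baseline scheme in the text inserts
# a d-slot dead time between frames to meet the minimum constraint.
seq = ppm_encode([3, 0, 1], M=4)    # zero-length gap between frames 1 and 2
print(satisfies_dk(seq, d=1, k=6))  # False: violates a minimum dead time of 1
print(satisfies_dk(seq, d=0, k=6))  # True
```

With d dead slots appended to each frame, every inter-pulse gap grows by d, so the stream meets the minimum constraint by construction.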
The stream subsurface: nitrogen cycling and the cleansing function of hyporheic zones
Rhonda Mazza; Steve Wondzell; Jay Zarnetske
2014-01-01
Nitrogen is an element essential to plant growth and ecosystem productivity. Excess nitrogen, however, is a common water pollutant. It can lead to algal blooms that deplete the water's dissolved oxygen, creating "dead zones" devoid of fish and aquatic insects. Previous research showed that the subsurface area of a stream, known as the hyporheic...
QF/PQM-102 Target System, Project PAVE DEUCE
1975-05-01
Actual scores are computed within the dead zone by mathematical computation utilizing missile velocity and time within the zone. Evaluation of... Previous roll instability was traced to a possible malfunction of the autopilot rate-sensing gyro, and it was replaced for the re-fly of QF Record Flight No. 14.
On-off nonlinear active control of floor vibrations
NASA Astrophysics Data System (ADS)
Díaz, Iván M.; Reynolds, Paul
2010-08-01
Human-induced floor vibrations can be mitigated by means of active control via an electromagnetic proof-mass actuator. Previous researchers have developed a system for floor vibration comprising linear velocity feedback control (LVFC) with a command limiter (saturation in the command signal to avoid actuator overloading). The performance of this control is highly dependent on the linear gain utilised, which has to be designed for a particular excitation and might not be optimum for other excitations. This work explores the use of on-off nonlinear velocity feedback control (NLVFC) as the natural evolution of LVFC when high gains and/or significant vibration level are present together with saturation in the control law. Firstly, the describing function tool is employed to analyse the stability properties of: (1) LVFC with saturation, (2) on-off NLVFC with a dead zone and (3) on-off NLVFC with a switching-off function. Particular emphasis is paid to the resulting limit cycle behaviour and the design of appropriate dead zone and switching-off levels to avoid it. Secondly, experimental trials using the three control laws are conducted on a laboratory test floor. The results corroborate the analytical stability predictions. The pros of on-off NLVFC are that no gain has to be chosen and maximum actuator energy is delivered to cancel the vibration. In contrast, the requirement to select a dead zone or switching-off function provides a drawback in its application.
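The two control laws compared above can be sketched as plain functions of the measured velocity; the gain, force limit, and dead-zone width below are hypothetical, not the paper's tuned values:

```python
def lvfc_saturated(v, gain, u_max):
    """Linear velocity feedback control with a command limiter (saturation)."""
    u = -gain * v
    return max(-u_max, min(u_max, u))

def nlvfc_dead_zone(v, u_max, dz):
    """On-off velocity feedback: full actuator force opposing the measured
    velocity, switched off inside a dead zone to avoid limit cycles."""
    if abs(v) <= dz:
        return 0.0
    return -u_max if v > 0 else u_max

print(lvfc_saturated(0.02, gain=500.0, u_max=5.0))   # -5.0 (saturated)
print(nlvfc_dead_zone(0.02, u_max=5.0, dz=0.001))    # -5.0 (full force)
print(nlvfc_dead_zone(0.0005, u_max=5.0, dz=0.001))  # 0.0 (inside dead zone)
```

The sketch shows the trade-off in the abstract: NLVFC needs no gain and always commands maximum force, but the dead-zone width must be chosen to suppress limit cycling.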
Fuzzy modelling and efficiency in health care systems.
Ozok, Ahmet F
2012-01-01
The American Medical Institute reports that at least fifty thousand people die each year because of medical error. For a safe, high-quality medical system, it is important that information systems are used in health care. Health information applications help to reduce human error and to support patient care. Recently, it has been reported that medical information system applications also have some negative effects on all integral elements of medical care. The cost of health care information systems is about 4.6% of the total cost. In this paper, a risk determination model is developed according to the principles of fuzzy logic. The improvement of health care systems has become a very popular topic in Turkey in recent years. Using the necessary information systems, it became possible to care for patients in a safer way. However, using the necessary HIS tools to manage administrative and clinical processes at hospitals became more important than before; for example, clinical workflows and communication among pharmacists, nurses and physicians are still not sufficiently investigated. We use fuzzy modeling as a research strategy and developed fuzzy membership functions to minimize human error. In the Turkish application, the results are significantly related to each other. Besides, sign differences in health care information systems strongly affect the risk magnitude. The obtained results are discussed and some comments are added.
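As a minimal illustration of the fuzzy-membership idea (the abstract does not publish its membership functions, so the sets and scale below are assumed):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# ASSUMED example sets for an error-risk score on a 0..10 scale:
low = lambda x: triangular(x, -1, 0, 4)
medium = lambda x: triangular(x, 2, 5, 8)
high = lambda x: triangular(x, 6, 10, 11)

score = 6.5
memberships = {"low": low(score), "medium": medium(score), "high": high(score)}
print(max(memberships, key=memberships.get))  # medium
```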
The Active Structure of the Greater Dead Sea Basin
NASA Astrophysics Data System (ADS)
Shamir, G.
2002-12-01
The Greater Dead Sea Basin (GDSB) is a 220km long depression situated along the southern section of the Dead Sea Transform (DST), between two structurally and gravitationally elevated points, Wadi Malih in the north and Paran fault zone in the south. In its center is the Dead Sea basin 'sensu strictu' (DSB), which has been described since the 1970s as a pull-apart basin at a left step-over along the DST. However, several observations, or their lack thereof, contradict this scheme, e.g. (i) It is not supported by recent seismological and geomorphic data; (ii) It does not explain the fault pattern and mixed sinistral and dextral offset along the DSB western boundary; (iii) It does not simply explain the presence of intense deformation outside the presumed fault step zone; (iv) It is inconsistent with the orientation of seismically active faults within the Dead Sea and Jericho Valley; (v) The length of the DSB exceeds the total offset along the Dead Sea Transform, while its subsidence is about the age of the DST. In this study, newly acquired and analyzed data (high resolution seismic reflection and earthquake relocation and fault plane solutions) has been integrated with previously published data (structural mapping, fracture orientation distribution, Bouguer anomaly maps, sinkhole distribution, geomorphic lineaments). The results show that the GDSB is dominated by two active fault systems, one trending NNE and showing normal-dextral motion, the other trending NW. These systems are identified by earthquake activity, seismic reflection observations, alignment of recent sinkholes, and distribution of Bouguer anomaly gradients. As a result, the intra-basin structure is of a series of rectangular blocks. 
The dextral slip component along NNE trending faults, the mixed sense of lateral offset along the western boundary of the DSB and temporal change in fracture orientation in the Jericho Valley suggest that the intra-basin blocks have rotated counterclockwise since the Pleistocene. The overall sinistral motion between the Arabian and Israel-Sinai plates along the GDSB may thus be accommodated by the postulated, internally rotating shear zone. Then, the subsidence of the DSB may possibly be explained if the rate of the resulting internal E-W shortening is greater than the rate of plate convergence.
Characterization and impact of "dead-zone" eddies in the tropical Northeast Atlantic Ocean
NASA Astrophysics Data System (ADS)
Schuette, Florian; Karstensen, Johannes; Krahmann, Gerd; Hauss, Helena; Fiedler, Björn; Brandt, Peter; Visbeck, Martin; Körtzinger, Arne
2016-04-01
Localized open-ocean low-oxygen dead-zones in the tropical Northeast Atlantic are recently discovered ocean features that can develop in dynamically isolated water masses within cyclonic eddies (CE) and anticyclonic modewater eddies (ACME). Analysis of a comprehensive oxygen dataset obtained from gliders, moorings, research vessels and Argo floats shows that eddies with low oxygen concentrations at 50-150 m depths can be found in surprisingly high numbers and in a large area (from about 5°N to 20°N, from the shelf at the eastern boundary to 30°W). Minimum oxygen concentrations of about 9 μmol/kg in CEs and close to anoxic concentrations (< 1 μmol/kg) in ACMEs were observed. In total, 495 profiles with oxygen concentrations below the minimum background concentration of 40 μmol/kg could be associated with 27 independent "dead-zone" eddies (10 CEs; 17 ACMEs). The low oxygen concentration right beneath the mixed layer has been attributed to the combination of high productivity in the surface waters of the eddies and the isolation of the eddies' cores. Indeed, eddies of both types feature a cold sea surface temperature anomaly and enhanced chlorophyll concentrations in their center. The oxygen minimum is located in the eddy core beneath the mixed layer at around 80 m depth. The mean oxygen anomaly between 50 and 150 m depth for CEs (ACMEs) is -49 (-81) μmol/kg. Eddies south of 12°N carry weak hydrographic anomalies in their cores and seem to be generated in the open ocean away from the boundary. North of 12°N, eddies of both types carry anomalously low salinity water of South Atlantic Central Water origin from the eastern boundary upwelling region into the open ocean. This points to an eddy generation near the eastern boundary. A conservative estimate yields that around 5 dead-zone eddies (4 CEs; 1 ACME) per year enter the area north of 12°N between the Cap Verde Islands and 19°W.
The associated contribution to the oxygen budget of the shallow oxygen minimum zone in that area is about -10.3 (-3.0) μmol/kg/yr for CEs (ACMEs). The consumption within these eddies represents an essential part of the total consumption in the open tropical Northeast Atlantic Ocean and might be partly responsible for the formation of the shallow oxygen minimum zone.
An Interview with Stephen King.
ERIC Educational Resources Information Center
Janeczko, Paul
1980-01-01
The author of five best-selling novels, including "Carrie," "'Salem's Lot," "The Shining," "The Stand," and "The Dead Zone," discusses the teaching of creative writing at high school and college levels. (DF)
Hypoxia by degrees: Establishing definitions for a changing ocean
NASA Astrophysics Data System (ADS)
Hofmann, A. F.; Peltzer, E. T.; Walz, P. M.; Brewer, P. G.
2011-12-01
The marked increase in occurrences of low oxygen events on continental shelves coupled with observed expansion of low oxygen regions of the ocean has drawn significant scientific and public attention. With this has come the need for the establishment of better definitions for widely used terms such as "hypoxia" and "dead zones". Ocean chemists and physicists use concentration units such as μmol O2/kg for reporting since these units are independent of temperature, salinity and pressure and are required for mass balances and for numerical models of ocean transport. Much of the reporting of dead zone occurrences is in volumetric concentration units of ml O2/l or mg O2/l for historical reasons. And direct measurements of the physiological state of marine animals require reporting of the partial pressure of oxygen (pO2) in matm or kPa since this provides the thermodynamic driving force for molecular transfer through tissue. This necessarily incorporates temperature and salinity terms and thus accommodates changes driven by climate warming and the influence of the very large temperature range around the world where oxygen limiting values are reported. Here we examine the various definitions used and boundaries set and place them within a common framework. We examine the large scale ocean pO2 fields required for pairing with pCO2 data for examination of the combined impacts of ocean acidification and global warming. The term "dead zones", which recently has received considerable attention in both the scientific literature and the press, usually describes shallow, coastal regions of low oxygen caused either by coastal eutrophication and organic matter decomposition or by upwelling of low oxygen waters.
While we make clear that bathyal low oxygen waters should not be confused with shallow-water "dead zones", as deep water species are well adapted, we show that those waters represent a vast global reservoir of low oxygen water which can readily be entrained in upwelling waters and contribute to coastal hypoxia around the world, and may be characterized identically. We examine the potential for expansion of those water masses onto continental shelves worldwide, thereby crossing limits set for many species that are not adapted to low oxygen.
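The unit systems discussed above can be tied together with simple conversion factors (molar mass ≈ 32.0 g/mol and real-gas STP molar volume ≈ 22.392 l/mol for O2; the seawater density used below is an assumed typical value):

```python
M_O2 = 31.998   # molar mass of O2 [g/mol]
VM_O2 = 22.392  # molar volume of O2 [l/mol] (real-gas value at STP)

def ml_per_l_to_umol_per_kg(c, density=1.025):
    """Historical ml O2/l to umol O2/kg (density in kg/l, ASSUMED seawater value)."""
    return c / VM_O2 * 1000.0 / density

def mg_per_l_to_umol_per_kg(c, density=1.025):
    """mg O2/l to umol O2/kg."""
    return c / M_O2 * 1000.0 / density

# The commonly quoted hypoxia threshold of 1.4 ml/l is about 61 umol/kg:
print(round(ml_per_l_to_umol_per_kg(1.4), 1))  # 61.0
```

Partial-pressure (pO2) conversions additionally need oxygen solubility as a function of temperature and salinity, which is the abstract's point about physiological units.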
Katzman, Rafael; ten Brink, Uri S.; Lin, Jian
1995-01-01
We model the three-dimensional (3-D) crustal deformation in a deep pull-apart basin as a result of relative plate motion along a transform system and compare the results to the tectonics of the Dead Sea Basin. The brittle upper crust is modeled by a boundary element technique as an elastic block, broken by two en echelon semi-infinite vertical faults. The deformation is caused by a horizontal displacement that is imposed everywhere at the bottom of the block except in a stress-free “shear zone” in the vicinity of the fault zone. The bottom displacement represents the regional relative plate motion. Results show that the basin deformation depends critically on the width of the shear zone and on the amount of overlap between basin-bounding faults. As the width of the shear zone increases, the depth of the basin decreases, the rotation around a vertical axis near the fault tips decreases, and the basin shape (the distribution of subsidence normalized by the maximum subsidence) becomes broader. In contrast, two-dimensional plane stress modeling predicts a basin shape that is independent of the width of the shear zone. Our models also predict full-graben profiles within the overlapped region between bounding faults and half-graben shapes elsewhere. Increasing overlap also decreases uplift near the fault tips and rotation of blocks within the basin. We suggest that the observed structure of the Dead Sea Basin can be described by a 3-D model having a large overlap (more than 30 km) that probably increased as the basin evolved as a result of a stable shear motion that was distributed laterally over 20 to 40 km.
Brainstem Encoding of Aided Speech in Hearing Aid Users with Cochlear Dead Region(s).
Hassaan, Mohammad Ramadan; Ibraheem, Ola Abdallah; Galhom, Dalia Helal
2016-07-01
Neural encoding of speech begins with the analysis of the signal as a whole broken down into its sinusoidal components in the cochlea, which has to be conserved up to the higher auditory centers. Some of these components target the dead regions of the cochlea causing little or no excitation. Measuring aided speech-evoked auditory brainstem response elicited by speech stimuli with different spectral maxima can give insight into the brainstem encoding of aided speech with spectral maxima at these dead regions. This research aims to study the impact of dead regions of the cochlea on speech processing at the brainstem level after a long period of hearing aid use. This study comprised 30 ears without dead regions and 46 ears with dead regions at low, mid, or high frequencies. For all ears, we measured the aided speech-evoked auditory brainstem response using speech stimuli of low, mid, and high spectral maxima. Aided speech-evoked auditory brainstem response was producible in all subjects. Responses evoked by stimuli with spectral maxima at dead regions had longer latencies and smaller amplitudes when compared with the control group or the responses of other stimuli. The presence of cochlear dead regions affects brainstem encoding of speech with spectral maxima corresponding to these regions. Brainstem neuroplasticity and the extrinsic redundancy of speech can minimize the impact of dead regions in chronic hearing aid users.
Sequential CFAR detectors using a dead-zone limiter
NASA Astrophysics Data System (ADS)
Tantaratana, Sawasd
1990-09-01
The performances of several proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, whose output is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. In one variant, the test statistic is the sum of the limiter outputs; in another, the test is performed on a reduced set of data (those with absolute value larger than c), with the test statistic being the sum of the signs of the reduced set. Both constant and linear boundaries are considered. Numerical results show a significant reduction in the average number of observations needed to achieve the same false-alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
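The dead-zone limiter and its sign-sum statistic are simple enough to sketch. The threshold c and the stopping boundaries below are illustrative choices, not values from the paper:

```python
import numpy as np

def dead_zone_limiter(x, c):
    """Map each observation to -1, 0, or +1 depending on whether it is
    below -c, inside [-c, c], or above +c."""
    return np.where(x > c, 1, np.where(x < -c, -1, 0))

def sequential_sign_test(obs, c, upper, lower):
    """Sequential test with constant boundaries: accumulate limiter
    outputs and stop as soon as the running sum crosses a boundary.
    Returns (decision, number_of_observations_used)."""
    s = 0
    for n, x in enumerate(obs, start=1):
        s += int(dead_zone_limiter(np.array([x]), c)[0])
        if s >= upper:
            return "signal", n
        if s <= lower:
            return "noise", n
    return "undecided", len(obs)
```

Observations inside the dead zone contribute nothing to the sum, so they never push the test toward a decision; only the "reduced set" with |x| > c carries information, as the abstract notes.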
Dependence of sound characteristics on the bowing position in a violin
NASA Astrophysics Data System (ADS)
Roh, YuJi; Kim, Young H.
2014-12-01
A quantitative analysis of violin sounds produced for different bowing positions over the full length of a violin string has been carried out. An automated bowing machine was employed in order to keep the bowing parameters constant. A 3-dimensional profile of the frequency spectrum was introduced in order to characterize the violin's sound. We found that the fundamental frequency did not change for different bowing positions, whereas the frequencies of the higher harmonics were different. Bowing the string at 30 mm from the bridge produced musical sounds. The middle of the string was confirmed to be a dead zone, as reported in previous works. In addition, the quarter position was also found to be a dead zone. Bowing the string 90 mm from the bridge dominantly produces a fundamental frequency of 864 Hz and its harmonics.
Safety illusion and error trap in a collectively-operated machine accident.
de Almeida, Ildeberto Muniz; Nobre, Hildeberto; do Amaral Dias, Maria Dionísia; Vilela, Rodolfo Andrade Gouveia
2012-01-01
Workplace accidents involving machines are relevant for their magnitude and their impacts on worker health. Despite consolidated criticism, explanations centered on operator error remain predominant among industry professionals, hampering preventive measures and the improvement of production-system reliability. Several initiatives were adopted by enforcement agencies in partnership with universities to stimulate the production and diffusion of analysis methodologies with a systemic approach. Starting from one accident case that occurred with a worker who operated a brake-clutch type mechanical press, the article explores cognitive aspects and the existence of traps in the operation of this machine. It deals with a large press that, despite being equipped with a light curtain in the areas of access to the pressing zone, did not meet legal requirements. The safety devices gave rise to an illusion of safety, permitting activation of the machine while a worker was still within the operational zone. Preventive interventions must encourage the tailoring of systems to the characteristics of workers, minimizing the creation of traps, and foster safety policies and practices that replace judgments of the behaviors involved in accidents with analyses of the reasons that lead workers to act as they do.
Low mass planet migration in magnetically torqued dead zones - I. Static migration torque
NASA Astrophysics Data System (ADS)
McNally, Colin P.; Nelson, Richard P.; Paardekooper, Sijme-Jan; Gressel, Oliver; Lyra, Wladimir
2017-12-01
Motivated by models suggesting that the inner planet forming regions of protoplanetary discs are predominantly lacking in viscosity-inducing turbulence, and are possibly threaded by Hall-effect generated large-scale horizontal magnetic fields, we examine the dynamics of the corotation region of a low-mass planet in such an environment. The corotation torque in an inviscid, isothermal, dead zone ought to saturate, with the libration region becoming both symmetrical and of a uniform vortensity, leading to fast inward migration driven by the Lindblad torques alone. However, in such a low viscosity situation, the material on librating streamlines essentially preserves its vortensity. If there is relative radial motion between the disc gas and the planet, the librating streamlines will no longer be symmetrical. Hence, if the gas is torqued by a large-scale magnetic field so that it undergoes a net inflow or outflow past the planet, driving evolution of the vortensity and inducing asymmetry of the corotation region, the corotation torque can grow, leading to a positive torque. In this paper, we treat this effect by applying a symmetry argument to the previously studied case of a migrating planet in an inviscid disc. Our results show that the corotation torque due to a laminar Hall-induced magnetic field in a dead zone behaves quite differently from that studied previously for a viscous disc. Furthermore, the magnetic field induced corotation torque and the dynamical corotation torque in a low viscosity disc can be regarded as one unified effect.
ten Brink, Uri S.; Al-Zoubi, A. S.; Flores, C.H.; Rotstein, Y.; Qabbani, I.; Harder, S.H.; Keller, Gordon R.
2006-01-01
New seismic observations from the Dead Sea basin (DSB), a large pull-apart basin along the Dead Sea transform (DST) plate boundary, show a low velocity zone extending to a depth of 18 km under the basin. The lower crust and Moho are not perturbed. These observations are incompatible with the current view of mid-crustal strength at low temperatures and with support of the basin's negative load by a rigid elastic plate. Strain softening in the middle crust is invoked to explain the isostatic compensation and the rapid subsidence of the basin during the Pleistocene. Whether the deformation is influenced by the presence of fluids and by a long history of seismic activity on the DST, and what the exact softening mechanism is, remain open questions. The uplift surrounding the DST also appears to be an upper crustal phenomenon but its relationship to a mid-crustal strength minimum is less clear. The shear deformation associated with the transform plate boundary motion appears, on the other hand, to cut throughout the entire crust. Copyright 2006 by the American Geophysical Union.
Electrical crosstalk-coupling measurement and analysis for digital closed loop fibre optic gyro
NASA Astrophysics Data System (ADS)
Jin, Jing; Tian, Hai-Ting; Pan, Xiong; Song, Ning-Fang
2010-03-01
The phase modulation and the closed-loop controller can generate electrical crosstalk-coupling in a digital closed-loop fibre optic gyro. Four electrical cross-coupling paths are verified by an open-loop testing approach. It is found that variation of the ramp amplitude leads to a change in gyro bias. The amplitude and phase parameters of the electrical crosstalk signal are measured with a lock-in amplifier, and the variation of gyro bias is confirmed to be caused by the change of the crosstalk phase with the ramp amplitude. A digital closed-loop fibre optic gyro electrical crosstalk-coupling model is built by approximating the electrical cross-coupling paths as a proportional-plus-integral element. The results of simulation and experiment show that electrical crosstalk-coupling of the modulation signal can cause a dead zone in the gyro when a small angular velocity is input, and can also lead to a periodic oscillation of the gyro bias error when a large angular velocity is input.
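A minimal discrete-time sketch of the kind of proportional-plus-integral crosstalk path used in such a model; the gains kp and ki are arbitrary illustrative values, not the measured ones:

```python
def pi_coupling(u, kp=0.01, ki=0.001):
    """Proportional-plus-integral approximation of a crosstalk path:
    output[n] = kp * u[n] + ki * running_sum(u[0..n])."""
    out, acc = [], 0.0
    for x in u:
        acc += x          # integral (accumulated) part
        out.append(kp * x + ki * acc)
    return out
```

Feeding the modulation/ramp signal through such a path and adding the result to the detected signal reproduces the qualitative behavior described above: a constant spurious bias (dead zone) for small input rates, and a ramp-period-dependent oscillation for large ones.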
Design of aerosol face masks for children using computerized 3D face analysis.
Amirav, Israel; Luder, Anthony S; Halamish, Asaf; Raviv, Dan; Kimmel, Ron; Waisman, Dan; Newhouse, Michael T
2014-08-01
Aerosol masks were originally developed for adults and downsized for children. Overall fit to minimize dead space and a tight seal are problematic, because children's faces undergo rapid and marked topographic and internal anthropometric changes in their first few months/years of life. Facial three-dimensional (3D) anthropometric data were used to design an optimized pediatric mask. Children's faces (n=271, aged 1 month to 4 years) were scanned with 3D technology. Data for the distance from the bridge of the nose to the tip of the chin (H) and the width of the mouth opening (W) were used to categorize the scans into "small," "medium," and "large" "clusters." "Average" masks were developed from each cluster to provide an optimal seal with minimal dead space. The resulting computerized contour, W and H, were used to develop the SootherMask® that enables children, "suckling" on their own pacifier, to keep the mask on their face, mainly by means of subatmospheric pressure. The relatively wide and flexible rim of the mask accommodates variations in facial size within and between clusters. Unique pediatric face masks were developed based on anthropometric data obtained through computerized 3D face analysis. These masks follow facial contours and gently seal to the child's face, and thus may minimize aerosol leakage and dead space.
Dead layer on silicon p-i-n diode charged-particle detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, B. L.; Amsbaugh, John F.; Beglarian, A.
Abstract Semiconductor detectors in general have a dead layer at their surfaces that is either a result of natural or induced passivation, or is formed during the process of making a contact. Charged particles passing through this region produce ionization that is incompletely collected and recorded, which leads to departures from the ideal in both energy deposition and resolution. The silicon p-i-n diode used in the KATRIN neutrino-mass experiment has such a dead layer. We have constructed a detailed Monte Carlo model for the passage of electrons from vacuum into a silicon detector, and compared the measured energy spectra to the predicted ones for a range of energies from 12 to 20 keV. The comparison provides experimental evidence that a substantial fraction of the ionization produced in the "dead" layer evidently escapes by diffusion, with 46% being collected in the depletion zone and the balance being neutralized at the contact or by bulk recombination. The most elementary model of a thinner dead layer from which no charge is collected is strongly disfavored.
SU-E-T-458: Determining Threshold-Of-Failure for Dead Pixel Rows in EPID-Based Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Wiant, D
Purpose: A pixel correction map is applied to all EPID-based applications on the TrueBeam (Varian Medical Systems, Palo Alto, CA). When dead pixels are detected, an interpolative smoothing algorithm is applied using neighboring-pixel information to supplement missing-pixel information. The vendor suggests that when the number of dead pixels exceeds 70,000, the panel should be replaced. It is common for entire detector rows to be dead, as well as their neighboring rows. Approximately 70 rows can be dead before the panel reaches this threshold. This study determines the number of neighboring dead-pixel rows that would create a large enough deviation in measured fluence to cause failures in portal dosimetry (PD). Methods: Four clinical two-arc VMAT plans were generated using Eclipse's AXB algorithm and PD plans were created using the PDIP algorithm. These plans were chosen to represent those commonly encountered in the clinic: prostate, lung, abdomen, and neck treatments. During each iteration of this study, an increasing number of dead-pixel rows are artificially applied to the correction map and a fluence QA is performed using the EPID (corrected with this map). To provide a worst-case scenario, the dead-pixel rows are chosen so that they present artifacts in the high-fluence region of the field. Results: For all eight arc-fields deemed acceptable via a 3%/3mm gamma analysis (pass rate greater than 99%), VMAT QA yielded identical results with a 5-pixel-width dead zone. When 10 dead rows were present, half of the fields had pass rates below 99%. With increasing dead rows, the pass rates were reduced substantially. Conclusion: While the vendor suggests requesting service once 70,000 dead pixels are reached, the authors suggest that service should be requested when there are more than 5 consecutive dead rows.
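The interpolative smoothing applied to dead rows can be sketched as a simple row interpolation between the nearest live rows; this is an illustrative stand-in, not Varian's actual algorithm:

```python
import numpy as np

def interpolate_dead_rows(img, dead_rows):
    """Replace each dead row by linear interpolation between the nearest
    live rows above and below (edge rows copy the nearest live row)."""
    out = img.astype(float).copy()
    dead = set(dead_rows)
    live = [r for r in range(img.shape[0]) if r not in dead]
    for r in dead_rows:
        above = max((x for x in live if x < r), default=None)
        below = min((x for x in live if x > r), default=None)
        if above is None:
            out[r] = out[below]
        elif below is None:
            out[r] = out[above]
        else:
            w = (r - above) / (below - above)
            out[r] = (1 - w) * out[above] + w * out[below]
    return out
```

With many consecutive dead rows the interpolation spans a wide gap and can no longer track rapid fluence gradients, which is consistent with the pass-rate degradation reported above.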
Boundary-layer mantle flow under the Dead Sea transform fault inferred from seismic anisotropy.
Rümpker, Georg; Ryberg, Trond; Bock, Günter
2003-10-02
Lithospheric-scale transform faults play an important role in the dynamics of global plate motion. Near-surface deformation fields for such faults are relatively well documented by satellite geodesy, strain measurements and earthquake source studies, and deeper crustal structure has been imaged by seismic profiling. Relatively little is known, however, about deformation taking place in the subcrustal lithosphere--that is, the width and depth of the region associated with the deformation, the transition between deformed and undeformed lithosphere and the interaction between lithospheric and asthenospheric mantle flow at the plate boundary. Here we present evidence for a narrow, approximately 20-km-wide, subcrustal anisotropic zone of fault-parallel mineral alignment beneath the Dead Sea transform, obtained from an inversion of shear-wave splitting observations along a dense receiver profile. The geometry of this zone and the contrast between distinct anisotropic domains suggest subhorizontal mantle flow within a vertical boundary layer that extends through the entire lithosphere and accommodates the transform motion between the African and Arabian plates within this relatively narrow zone.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
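For a quadratic objective over "rays" (here taken as rows of a system matrix), the approximate-error idea can be sketched as follows; the least-squares form and the subset choice are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def approx_error_grad(A, b, x, rows):
    """Gradient of 0.5 * ||A x - b||^2 restricted to a subset of rays
    (selected rows), i.e. the approximate error used in place of the
    full-data error."""
    As, bs = A[rows], b[rows]
    return As.T @ (As @ x - bs)

def subset_cg_step(A, b, x, d, rows):
    """Exact minimum along conjugate direction d for the subset
    objective: x_new = x + alpha * d with alpha from the 1-D quadratic."""
    As = A[rows]
    g = approx_error_grad(A, b, x, rows)
    denom = float(np.dot(As @ d, As @ d))
    alpha = -float(np.dot(g, d)) / denom if denom > 0 else 0.0
    return x + alpha * d
```

Because only the selected rows enter the gradient and the line search, each iteration costs a fraction of the full-data computation, at the price of minimizing an approximation of the true error.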
Anatomy of the Dead Sea Transform from lithospheric to microscopic scale
Weber, M.; Abu-Ayyash, K.; Abueladas, A.; Agnon, A.; Alasonati-Tasarova, Z.; Al-Zubi, H.; Babeyko, A.; Bartov, Y.; Bauer, K.; Becken, M.; Bedrosian, P.A.; Ben-Avraham, Z.; Bock, G.; Bohnhoff, M.; Bribach, J.; Dulski, P.; Ebbing, J.; El-Kelani, R.; Forster, A.; Forster, H.-J.; Frieslander, U.; Garfunkel, Z.; Goetze, H.J.; Haak, V.; Haberland, C.; Hassouneh, M.; Helwig, S.; Hofstetter, A.; Hoffmann-Rotrie, A.; Jackel, K.H.; Janssen, C.; Jaser, D.; Kesten, D.; Khatib, M.; Kind, R.; Koch, O.; Koulakov, I.; Laske, Gabi; Maercklin, N.; Masarweh, R.; Masri, A.; Matar, A.; Mechie, J.; Meqbel, N.; Plessen, B.; Moller, P.; Mohsen, A.; Oberhansli, R.; Oreshin, S.; Petrunin, A.; Qabbani, I.; Rabba, I.; Ritter, O.; Romer, R.L.; Rumpker, G.; Rybakov, M.; Ryberg, T.; Saul, J.; Scherbaum, F.; Schmidt, S.; Schulze, A.; Sobolev, S.V.; Stiller, M.; Stromeyer, D.; Tarawneh, K.; Trela, C.; Weckmann, U.; Wetzel, U.; Wylegalla, K.
2009-01-01
Fault zones are the locations where motion of tectonic plates, often associated with earthquakes, is accommodated. Despite a rapid increase in the understanding of faults in the last decades, our knowledge of their geometry, petrophysical properties, and controlling processes remains incomplete. The central questions addressed here in our study of the Dead Sea Transform (DST) in the Middle East are as follows: (1) What are the structure and kinematics of a large fault zone? (2) What controls its structure and kinematics? (3) How does the DST compare to other plate boundary fault zones? The DST has accommodated a total of 105 km of left-lateral transform motion between the African and Arabian plates since early Miocene (~20 Ma). The DST segment between the Dead Sea and the Red Sea, called the Arava/Araba Fault (AF), is studied here using a multidisciplinary and multiscale approach from the µm to the plate tectonic scale. We observe that under the DST a narrow, subvertical zone cuts through crust and lithosphere. First, from west to east the crustal thickness increases smoothly from 26 to 39 km, and a subhorizontal lower crustal reflector is detected east of the AF. Second, several faults exist in the upper crust in a 40 km wide zone centered on the AF, but none have kilometer-size zones of decreased seismic velocities or zones of high electrical conductivities in the upper crust expected for large damage zones. Third, the AF is the main branch of the DST system, even though it has accommodated only a part (up to 60 km) of the overall 105 km of sinistral plate motion. Fourth, the AF acts as a barrier to fluids to a depth of 4 km, and the lithology changes abruptly across it. Fifth, in the top few hundred meters of the AF a locally transpressional regime is observed in a 100-300 m wide zone of deformed and displaced material, bordered by subparallel faults forming a positive flower structure.
Other segments of the AF have a transtensional character with small pull-aparts along them. The damage zones of the individual faults are only 5-20 m wide at this depth range. Sixth, two areas on the AF show mesoscale to microscale faulting and veining in limestone sequences with faulting depths between 2 and 5 km. Seventh, fluids in the AF are carried downward into the fault zone. Only a minor fraction of fluids is derived from ascending hydrothermal fluids. However, we found that on the kilometer scale the AF does not act as an important fluid conduit. Most of these findings are corroborated using thermomechanical modeling where shear deformation in the upper crust is localized in one or two major faults; at larger depth, shear deformation occurs in a 20-40 km wide zone with a mechanically weak decoupling zone extending subvertically through the entire lithosphere. Copyright 2009 by the American Geophysical Union.
Fill-in binary loop pulse-torque quantizer
NASA Technical Reports Server (NTRS)
Lory, C. B.
1975-01-01
Fill-in binary (FIB) loop provides constant heating of torque generator, an advantage of binary current switching. At the same time, it avoids mode-related dead zone and data delay of binary, an advantage of ternary quantization.
Maercklin, N.; Bedrosian, P.A.; Haberland, C.; Ritter, O.; Ryberg, T.; Weber, M.; Weckmann, U.
2005-01-01
Seismic tomography, imaging of seismic scatterers, and magnetotelluric soundings reveal a sharp lithologic contrast along a ~10 km long segment of the Arava Fault (AF), a prominent fault of the southern Dead Sea Transform (DST) in the Middle East. Low seismic velocities and resistivities occur on its western side and higher values east of it, and the boundary between the two units coincides partly with a seismic scattering image. At 1-4 km depth the boundary is offset to the east of the AF surface trace, suggesting that at least two fault strands exist, and that slip occurred on multiple strands throughout the margin's history. A westward fault jump, possibly associated with straightening of a fault bend, explains both our observations and the narrow fault zone observed by others. Copyright 2005 by the American Geophysical Union.
Accuracy increase of self-compensator
NASA Astrophysics Data System (ADS)
Zhambalova, S. Ts; Vinogradova, A. A.
2018-03-01
In this paper, the authors consider a self-compensation system and a method for increasing its accuracy without compromising the conditions of the information theory of measuring devices. The result can be achieved using pulse control of the tracking system in the dead zone (the zone of the proportional section of the amplifier's characteristic). Pulse control allows one to increase the control power even though the input signal of the amplifier is infinitesimal; to do this, the authors use a conversion scheme for the input quantity. It is also possible to reduce the dead band, but then the system becomes unstable; correcting circuits complicate the system, and dramatically reducing the feedback coefficient reduces the speed. Thus, without compromising the measurement condition, the authors increase the accuracy of the self-compensation system. The implementation technique allows increasing the power of the input signal by many orders of magnitude.
Modeling initiation trains based on HMX and TATB
NASA Astrophysics Data System (ADS)
Drake, R. C.; Maisey, M.
2017-01-01
There will always be a requirement to reduce the size of initiation trains. However, as the size is reduced the performance characteristics can be compromised. A detailed science-based understanding of the processes (ignition and growth to detonation) which determine the performance characteristics is required to enable compact and robust initiation trains to be designed. To assess the use of numerical models in the design of initiation trains a modeling study has been undertaken, with the aim of understanding the initiation of TATB and HMX charges by a confined, surface mounted detonator. The effect of detonator diameter and detonator confinement on the formation of dead zones in the acceptor explosives has been studied. The size of dead zones can be reduced by increasing the diameter of the detonator and by increasing the impedance of the confinement. The implications for the design of initiation trains are discussed.
Observer-Based Adaptive Fuzzy Tracking Control for Switched Nonlinear Systems With Dead-Zone.
Tong, Shaocheng; Sui, Shuai; Li, Yongming
2015-12-01
In this paper, the problem of adaptive fuzzy output-feedback control is investigated for a class of uncertain switched nonlinear systems in strict-feedback form. The considered switched systems contain unknown nonlinearities, dead-zone, and immeasurable states. Fuzzy logic systems are utilized to approximate the unknown nonlinear functions, and a switched fuzzy state observer is designed to estimate the immeasurable states. By applying the adaptive backstepping design principle and the average dwell time method, an adaptive fuzzy output-feedback tracking control approach is developed. It is proved that the proposed approach guarantees that all variables in the closed-loop system are bounded under a class of switching signals with average dwell time, and that the system output tracks a given reference signal as closely as possible. Simulation results are provided to verify the effectiveness of the proposed approach.
A steep peripheral ring in irregular cornea topography, real or an instrument error?
Galindo-Ferreiro, Alicia; Galvez-Ruiz, Alberto; Schellini, Silvana A; Galindo-Alonso, Julio
2016-01-01
To demonstrate that the steep peripheral ring (red zone) on corneal topography after myopic laser in situ keratomileusis (LASIK) could be due to instrument error and not always to a real increase in corneal curvature. A spherical model of the corneal surface and modified topography software were used to analyze the cause of an error due to instrument design. This study involved modification of the software of a commercially available topographer. A small modification of the topography image results in a red zone on the corneal topography color map. Corneal modeling indicates that the red zone could be an artifact due to an instrument-induced error. The steep curvature change after LASIK, signified by the red zone, could also be an error due to the plotting algorithms of the corneal topographer, in addition to a real curvature change.
The Seven Deadly Sins of Online Microcomputing.
ERIC Educational Resources Information Center
King, Alan
1989-01-01
Offers suggestions for avoiding common errors in online microcomputer use. Areas discussed include learning the basics; hardware protection; backup options; hard disk organization; software selection; file security; and the use of dedicated communications lines. (CLB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charnoz, Sebastien; Taillifet, Esther, E-mail: charnoz@cea.fr
Dust is a major component of protoplanetary and debris disks as it is the main observable signature of planetary formation. However, since dust dynamics are size-dependent (because of gas drag or radiation pressure), any attempt to understand the full dynamical evolution of circumstellar dusty disks that neglects the coupling of collisional evolution with dynamical evolution is thwarted by the feedback between these two processes. Here, a new hybrid Lagrangian/Eulerian code is presented that overcomes some of these difficulties. The particles representing 'dust clouds' are tracked individually in a Lagrangian way. This system is then mapped onto an Eulerian spatial grid, inside the cells of which the local collisional evolutions are computed. Finally, the system is remapped back into a collection of discrete Lagrangian particles, keeping their number constant. An application example of dust growth in a turbulent protoplanetary disk at 1 AU is presented. First, the growth of dust is considered in the absence of a dead zone and the vertical distribution of dust is self-consistently computed. It is found that the mass is rapidly dominated by particles about a fraction of a millimeter in size. Then the same case with an embedded dead zone is investigated, and it is found that coagulation is much more efficient and produces, on a short timescale, 1-10 cm dust pebbles that dominate the mass. These pebbles may then be accumulated into embryo-sized objects inside large-scale turbulent structures as shown recently.
NASA Astrophysics Data System (ADS)
Khaibrakhmanov, S. A.; Dudorov, A. E.; Parfenov, S. Yu.; Sobolev, A. M.
2017-01-01
We investigate the fossil magnetic field in accretion and protoplanetary discs using the Shakura and Sunyaev approach. The distinguishing feature of this study is the accurate solution of the ionization balance equations and the induction equation with Ohmic diffusion, magnetic ambipolar diffusion, buoyancy and the Hall effect. We consider ionization by cosmic rays, X-rays and radionuclides, radiative recombinations, recombinations on dust grains and also thermal ionization. The buoyancy appears as an additional mechanism of magnetic flux escape in the steady-state solution of the induction equation. Calculations show that Ohmic diffusion and magnetic ambipolar diffusion constrain the generation of the magnetic field inside the 'dead' zones. The magnetic field in these regions is quasi-vertical. The buoyancy constrains the toroidal magnetic field strength close to the disc inner edge. As a result, the toroidal and vertical magnetic fields become comparable. The Hall effect is important in the regions close to the borders of the 'dead' zones because electrons are magnetized there. The magnetic field in these regions is quasi-radial. We calculate the magnetic field strength and geometry for discs with accretion rates of (10^-8 to 10^-6) M_⊙ yr^-1. The fossil magnetic field geometry does not change significantly during the disc evolution while the accretion rate decreases. We construct synthetic maps of dust emission polarized due to dust grain alignment by the magnetic field. In the polarization maps, the 'dead' zones appear as regions with reduced values of the polarization degree in comparison to those in the adjacent regions.
Mitigation of image artifacts in LWIR microgrid polarimeter images
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; Tyo, J. Scott; Boger, James K.; Black, Wiley T.; Bowers, David M.; Kumar, Rakesh
2007-09-01
Microgrid polarimeters, also known as division of focal plane (DoFP) polarimeters, are composed of an integrated array of micropolarizing elements that immediately precedes the FPA. The result of the DoFP device is that neighboring pixels sense different polarization states. The measurements made at each pixel can be combined to estimate the Stokes vector at every reconstruction point in a scene. DoFP devices have the advantage that they are mechanically rugged and inherently optically aligned. However, they suffer from the severe disadvantage that the neighboring pixels that make up the Stokes vector estimates have different instantaneous fields of view (IFOV). This IFOV error leads to spatial differencing that causes false polarization signatures, especially in regions of the image where the scene changes rapidly in space. Furthermore, when the polarimeter is operating in the LWIR, the FPA has inherent response problems such as nonuniformity and dead pixels that make the false polarization problem that much worse. In this paper, we present methods that use spatial information from the scene to mitigate two of the biggest problems that confront DoFP devices. The first is a polarimetric dead pixel replacement (DPR) scheme, and the second is a reconstruction method that chooses the most appropriate polarimetric interpolation scheme for each particular pixel in the image based on the scene properties. We have found that these two methods can greatly improve both the visual appearance of polarization products as well as the accuracy of the polarization estimates, and can be implemented with minimal computational cost.
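For context, a 2x2 superpixel with analyzers at 0°, 45°, 90°, and 135° gives the usual linear Stokes estimates by differencing neighboring pixels, which is exactly where differing IFOVs (and dead pixels) introduce false polarization. A minimal sketch assuming ideal polarizers:

```python
def stokes_from_superpixel(i0, i45, i90, i135):
    """Linear Stokes parameters from one 2x2 microgrid superpixel with
    ideal analyzers at 0, 45, 90, and 135 degrees.  S1 and S2 are
    differences of neighboring pixels, so any mismatch in what those
    pixels actually see (IFOV error, a dead pixel) appears directly as
    spurious polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                         # horizontal vs. vertical
    s2 = i45 - i135                       # +45 vs. -45
    return s0, s1, s2
```

In an unpolarized but spatially varying scene, i0 and i90 sample slightly different scene points, so s1 is nonzero even though the true polarization is zero; the interpolation and dead-pixel-replacement schemes described above aim to suppress exactly this artifact.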
The Effects: Dead Zones and Harmful Algal Blooms
Excess nitrogen and phosphorus can cause algae blooms. The overgrowth of algae consumes oxygen and blocks sunlight from underwater plants. When the algae die, the oxygen in the water is consumed, making it impossible for aquatic life to survive.
Kauffman, S A; Goodwin, B C
1990-06-07
We review the evidence presented in Part I showing that transcripts and protein products of maternal, gap, pair-rule, and segment polarity genes exhibit increasingly complex, multipeaked longitudinal waveforms in the early Drosophila embryo. The central problem we address in Part II is the use the embryo makes of these wave forms to specify longitudinal pattern. Based on the fact that mutants of many of these genes generate deletions and mirror symmetrical duplications of pattern elements on length scales ranging from about half the egg to within segments, we propose that position is specified by measuring a "phase angle" by use of the ratios of two or more variables. Pictorially, such a phase angle can be thought of as a colour on a colour wheel. Any such model contains a phaseless singularity where all or many phases, or colours, come together. We suppose as well that positional values sufficiently close to the singularity are meaningless, hence a "dead zone". Duplications and deletions are accounted for by deformation of the cycle of morphogen values occurring along the antero-posterior axis. If the cycle of values surrounds the singularity and lies outside the dead zone, pattern is normal. If the curve transects the dead zone, pattern elements are deleted. If the curve lies entirely on one side of the singularity, pattern elements are deleted and others are duplicated with mirror symmetry. The existence of different wavelength transcript patterns in maternal, gap, pair-rule, and segment polarity genes and the roles of those same genes in generating deletions and mirror symmetrical duplications on a variety of length scales lead us to propose that position is measured simultaneously on at least four colour wheels, which cycle different numbers of times along the anterior-posterior axis. These yield progressively finer grained positional information. Normal pattern specification requires a unique angle, outside of the dead zone, from each of the four wheels. 
Deformations of the cycle of gene product concentrations yield the deletions and mirror-symmetric duplications observed in the mutants discussed. The familiar alternative hypothesis, that longitudinal position is specified by an "on"/"off" combinatorial code, does not readily account for the duplication and deletion phenomena.
NASA Technical Reports Server (NTRS)
Jekeli, C.
1979-01-01
Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.
New evidence on the accurate displacement along the Arava/Araba segment of the Dead Sea Transform
NASA Astrophysics Data System (ADS)
Beyth, M.; Sagy, A.; Hajazi, H.; Alkhraisha, S.; Mushkin, A.; Ginat, H.
2018-06-01
The sinistral displacement along the Dead Sea Transform (DST), the plate boundary between the African and the Arabian plates, south of the Dead Sea basin, was previously attributed to two main fault zones: the Arava/Araba or Dead Sea fault and the Feinan or Al Quwayra fault zone. This was based on similarities of features on either side of the Araba Valley. In particular, the Timna and the Feinan copper mines, located north of the Themed and Dana faults, and the onlap of the Cambrian formations southward onto the Amram rhyolite and Ahyamir volcanics. To these we add a more accurate offset indicator in the form of an offset Early Cambrian (532 Ma) dolerite dyke previously mapped in Mount Amram (Israel) on the African plate and recently discovered across the Araba Valley in Jabal Sumr al Tayyiba (southwest Jordan) on the Arabian plate. This dolerite dyke is 20 m thick, strikes N50°E and is the only dyke intruding the Jabal Sumr al Tayyiba pink rhyolite flows of the Ahyamir Volcanics. Geochemical and geochronological correlations between the Jabal Sumr al Tayyiba dolerite dyke and the Mount Amram dolerite dyke demonstrate 85 km of sinistral offset across the Arava/Araba fault. Our results also suggest approximately 109 km of combined sinistral displacement across the Arava/Araba and Feinan faults based on petrological correlations between the Timna and Jabal Hanna igneous complexes on the African and Arabian plates, respectively. This constrains the total sinistral displacement of the Feinan fault and its accessory faults to be 24 km.
Integrated 3D density modelling and segmentation of the Dead Sea Transform
NASA Astrophysics Data System (ADS)
Götze, H.-J.; El-Kelani, R.; Schmidt, S.; Rybakov, M.; Hassouneh, M.; Förster, H.-J.; Ebbing, J.
2007-04-01
A 3D interpretation of the newly compiled Bouguer anomaly in the area of the “Dead Sea Rift” is presented. A high-resolution 3D model constrained with the seismic results reveals the crustal thickness and density distribution beneath the Arava/Araba Valley (AV), the region between the Dead Sea and the Gulf of Aqaba/Elat. The Bouguer anomalies along the axial portion of the AV, as deduced from the modelling results, are mainly caused by deep-seated sedimentary basins (D > 10 km). An inferred zone of intrusion coincides with the maximum gravity anomaly on the eastern flank of the AV. The intrusion is displaced at different sectors along the NNW-SSE direction. The zone of maximum crustal thinning (depth 30 km) is attained in the western sector at the Mediterranean. The southeastern plateau, on the other hand, shows by far the largest crustal thickness of the region (38-42 km). Linked to the left lateral movement of approx. 105 km at the boundary between the African and Arabian plate, and constrained with recent seismic data, a small asymmetric topography of the Moho beneath the Dead Sea Transform (DST) was modelled. The thickness and density of the crust suggest that the AV is underlain by continental crust. The deep basins, the relatively large intrusion and the asymmetric topography of the Moho lead to the conclusion that a small-scale asthenospheric upwelling could be responsible for the thinning of the crust and subsequent creation of the Dead Sea basin during the left lateral movement. A clear segmentation along the strike of the DST was obtained by curvature analysis: the northern part in the neighbourhood of the Dead Sea is characterised by high curvature of the residual gravity field. Flexural rigidity calculations result in very low values of effective elastic lithospheric thickness (t_e < 5 km). This points to decoupling of crust in the Dead Sea area. In the central AV, the curvature is less pronounced and t_e increases to approximately 10 km. 
Curvature is high again in the southernmost part near the Aqaba region. Solutions of Euler deconvolution were visualised together with modelled density bodies and fit very well into the density model structures.
Improving the Glucose Meter Error Grid With the Taguchi Loss Function.
Krouwer, Jan S
2016-07-01
Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics, such as mean absolute relative deviation (MARD), are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
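The averaging described in this abstract can be sketched in a few lines. This is a minimal illustration, not the author's implementation: the A-zone half-width passed in below (15 mg/dL) is a placeholder assumption, since the actual limit depends on the error grid and the glucose range.

```python
def taguchi_loss(measured, reference, a_zone_limit):
    """Squared (Taguchi) loss, scaled so that zero error gives 0.0 and an
    error equal to the A-zone limit gives exactly 1.0."""
    error = measured - reference
    return (error / a_zone_limit) ** 2

def mean_taguchi_loss(pairs, a_zone_limit):
    """Average loss over all (measured, reference) pairs: an indication of
    the risk of an incorrect medical decision."""
    return sum(taguchi_loss(m, r, a_zone_limit) for m, r in pairs) / len(pairs)

# Two hypothetical meters, both with 100% of values inside the A zone,
# but with different error distributions (values in mg/dL, illustrative).
meter_a = [(100, 100), (105, 100), (110, 100)]
meter_b = [(100, 100), (112, 100), (114, 100)]
```

Error grid analysis alone would rate both hypothetical meters identically (all values in the A zone), while the averaged Taguchi loss separates them.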
Experimental study of the solid-liquid interface in a yield-stress fluid flow upstream of a step
NASA Astrophysics Data System (ADS)
Luu, Li-Hua; Pierre, Philippe; Guillaume, Chambon
2014-11-01
We present an experimental study in which a yield-stress fluid is used to carefully examine the interface between a solid-like unyielded region and a liquid-like yielded region. The flow configuration consists of a rectangular pipe flow disturbed by the presence of a step. Upstream of the step, a solid-liquid interface between a dead zone and a flow zone appears. This configuration can model geophysical erosion phenomena in debris flows and has applications in industrial extrusion processes. We aim to investigate the dominant physical mechanism underlying the formation of the static domain by combining the rheological characterization of the yield-stress fluid with local measurements of the related hydrodynamic parameters. In this work, we use a model fluid, namely a polymer micro-gel (Carbopol), that exhibits a Herschel-Bulkley viscoplastic rheology. Exploiting the fluid's transparency, the flow is monitored by Particle Image Velocimetry using an internal visualization technique. In particular, we demonstrate that the flow above the dead zone roughly behaves as a plug flow whose velocity profile can successfully be described by a Poiseuille equation including a Herschel-Bulkley rheology (PHB theory), with the exception of a thin zone in the close vicinity of the static domain. The border inside the flow zone above which the so-called PHB flow starts is found to be the same regardless of the flow rate and to move with a constant velocity that increases with the flow rate. We interpret this feature as a slip frontier.
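The plug-flow profile invoked here can be illustrated with the textbook Herschel-Bulkley solution for plane Poiseuille flow between parallel walls; this is a generic sketch of that solution, not the authors' PHB fit, and all parameter values in the test are illustrative.

```python
def hb_velocity(y, h, G, tau_y, K, n):
    """Plane Poiseuille velocity u(y) for a Herschel-Bulkley fluid.
    y: distance from the channel centreline (0 <= y <= h), h: half-gap,
    G: pressure-gradient magnitude, tau_y: yield stress, K: consistency,
    n: flow index. Shear stress grows linearly as tau = G*y, so the fluid
    is unyielded (a plug / dead zone) wherever G*y < tau_y."""
    y0 = tau_y / G                     # yield-surface position
    if y0 >= h:
        return 0.0                     # stress never exceeds tau_y: no flow
    c = (n / (n + 1.0)) * (G / K) ** (1.0 / n)
    if y <= y0:                        # central plug: uniform velocity
        return c * (h - y0) ** ((n + 1.0) / n)
    # yielded layer: integrate gamma_dot = ((G*y - tau_y)/K)**(1/n) to the wall
    return c * ((h - y0) ** ((n + 1.0) / n) - (y - y0) ** ((n + 1.0) / n))
```

The flat region of `hb_velocity` around the centreline is the plug; the same mechanism, with stress set by the step geometry rather than a channel wall, produces the static dead zone upstream of the step.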
Seasonal hypoxia in the benthic waters of the Louisiana Coastal Shelf contributes to the Gulf of Mexico "dead zone" phenomena. Limited information is available on sedimentary biogeochemical interactions during periods of hypoxia.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows one to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
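The zone-splitting step of this methodology can be sketched as follows. The boundary glucose value (115 mg/dL) and the use of sample standard deviations are illustrative assumptions; the skew-normal/exponential maximum-likelihood fitting itself is omitted.

```python
import statistics

def split_zones(readings, references, boundary=115.0):
    """Zone 1 (reference below `boundary` mg/dL): absolute error, assumed
    to have constant SD. Zone 2 (at/above `boundary`): relative error,
    assumed to have constant SD. The boundary value is a placeholder."""
    zone1 = [r - ref for r, ref in zip(readings, references) if ref < boundary]
    zone2 = [(r - ref) / ref for r, ref in zip(readings, references) if ref >= boundary]
    return zone1, zone2

def zone_sds(readings, references, boundary=115.0):
    """Per-zone sample SDs, the quantities assumed constant within each zone."""
    z1, z2 = split_zones(readings, references, boundary)
    return statistics.stdev(z1), statistics.stdev(z2)
```

A per-zone distribution (skew-normal for errors, exponential for outliers, as in the paper) would then be fitted to `zone1` and `zone2` separately.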
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Y.; Gohar, Y.; Nuclear Engineering Division
In almost every detector counting system, a minimal dead time is required to record two successive events as two separate pulses. Due to the random nature of neutron interactions in the subcritical assembly, there is always some probability that a true neutron event will not be recorded because it occurs too close to the preceding event. These losses may become rather severe for counting systems with high counting rates, and should be corrected before any utilization of the experimental data. This report examines the dead time effects for the pulsed neutron experiments of the YALINA-Booster subcritical assembly. The nonparalyzable model is utilized to correct the experimental data for dead time. Overall, the reactivity values are increased by 0.19$ and 0.32$ after the spatial corrections for the YALINA-Booster 36% and 21% configurations, respectively. The differences between the reactivities obtained with He-3 long or short detectors at the same detector channel diminish after the dead time corrections of the experimental data for the 36% YALINA-Booster configuration. In addition, better agreement between reactivities obtained from different experimental data sets is also observed after the dead time corrections for the 21% YALINA-Booster configuration.
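The nonparalyzable correction used in this report has a standard closed form: if m is the measured count rate and tau the system dead time, the true rate is n = m / (1 - m*tau). A minimal sketch (the rate and dead-time values in the test are illustrative):

```python
def true_rate(measured_rate, dead_time):
    """Nonparalyzable dead-time correction: n = m / (1 - m*tau).
    measured_rate: counts/s, dead_time: s per recorded event."""
    loss_fraction = measured_rate * dead_time  # fraction of time the system is dead
    if loss_fraction >= 1.0:
        raise ValueError("measured rate saturates the counting system")
    return measured_rate / (1.0 - loss_fraction)
```

The correction is negligible at low rates and grows quickly as m*tau approaches 1, which is why high-count-rate data must be corrected before extracting reactivities.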
Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn
2017-02-01
This study aimed to examine the effect of the impact point on the golf ball on the horizontal launch angle and side spin during putting with a mechanical putting arm and human participants. Putts of 3.2 m were completed with a mechanical putting arm (four putter-ball combinations, total of 160 trials) and human participants (two putter-ball combinations, total of 337 trials). The centre of the dimple pattern (centroid) was located and the following variables were measured: distance and angle of the impact point from the centroid and surface area of the impact zone. Multiple regression analysis was conducted to identify whether impact variables had significant associations with ball roll variables, horizontal launch angle and side spin. Significant associations were identified between impact variables and horizontal launch angle with the mechanical putting arm but this was not replicated with human participants. The variability caused by "dimple error" was minimal with the mechanical putting arm and not evident with human participants. Differences between the mechanical putting arm and human participants may be due to the way impulse is imparted on the ball. Therefore it is concluded that variability of impact point on the golf ball has a minimal effect on putting performance.
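The multiple regression used in this study can be sketched with ordinary least squares. The data below are synthetic and the coefficient values arbitrary, purely to show the mechanics of associating impact variables with horizontal launch angle; they are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
impact_dist = rng.uniform(0, 5, n)       # hypothetical distance from centroid (mm)
impact_angle = rng.uniform(0, 360, n)    # hypothetical angle around centroid (deg)

# Synthetic response: launch angle depends weakly on both predictors.
launch = 0.4 * impact_dist + 0.002 * impact_angle + rng.normal(0, 0.05, n)

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n), impact_dist, impact_angle])
coef, *_ = np.linalg.lstsq(X, launch, rcond=None)
```

With real data, near-zero fitted slopes (as found for the human participants) would indicate that impact-point variability has little effect on ball roll.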
Analysis of the Isolated SecA DEAD Motor Suggests a Mechanism for Chemical-Mechanical Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithianantham, Stanley; Shilton, Brian H
2011-09-28
The preprotein cross-linking domain and C-terminal domains of Escherichia coli SecA were removed to create a minimal DEAD motor, SecA-DM. SecA-DM hydrolyzes ATP and has the same affinity for ADP as full-length SecA. The crystal structure of SecA-DM in complex with ADP was solved and shows the DEAD motor in a closed conformation. Comparison with the structure of the E. coli DEAD motor in an open conformation (Protein Data Bank ID 2FSI) indicates main-chain conformational changes in two critical sequences corresponding to Motif III and Motif V of the DEAD helicase family. The structures that the Motif III and Motif V sequences adopt in the DEAD motor open conformation are incompatible with the closed conformation. Therefore, when the DEAD motor makes the transition from open to closed, Motif III and Motif V are forced to change their conformations, which likely functions to regulate passage through the transition state for ATP hydrolysis. The transition state for ATP hydrolysis for the SecA DEAD motor was modeled based on the conformation of the Vasa helicase in complex with adenylyl imidodiphosphate and RNA (Protein Data Bank ID 2DB3). A mechanism for chemical-mechanical coupling emerges, where passage through the transition state for ATP hydrolysis is hindered by the conformational changes required in Motif III and Motif V, and may be promoted by binding interactions with the preprotein substrate and/or other translocase domains and subunits.
Gravitational Instabilities, Chondrule Formation, and the FU Orionis Phenomenon
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Durisen, Richard H.
2008-10-01
Using analytic arguments and numerical simulations, we examine whether chondrule formation and the FU Orionis phenomenon can be caused by the burstlike onset of gravitational instabilities (GIs) in dead zones. At least two scenarios for bursting dead zones can work, in principle. If the disk is on the verge of fragmentation, GI activation near r ~ 4-5 AU can produce chondrule-forming shocks, at least under extreme conditions. Mass fluxes are also high enough during the onset of GIs to suggest that the outburst is related to an FU Orionis phenomenon. This situation is demonstrated by numerical simulations. In contrast, as supported by analytic arguments, if the burst takes place close to r ~ 1 AU, then even low pitch angle spiral waves can create chondrule-producing shocks and outbursts. We also study the stability of the massive disks in our simulations against fragmentation and find that although disk evolution is sensitive to changes in opacity, the disks we study do not fragment, even at high resolution and even for extreme assumptions.
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.
Wu, Dongjin; Xia, Linyuan; Geng, Jijun
2018-06-19
Pedestrian dead reckoning (PDR) using smartphone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents long-term independent operation. Heading estimation error is one of the main sources of location error, and therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused in a Kalman filter (KF) in which heading measurements derived from accelerations and magnetic field data correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamically adaptive location tracking, compared with methods based on conventional KF.
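The underlying gyro/magnetometer fusion can be sketched as a one-state Kalman filter: integrated angular rate as the prediction, accelerometer/magnetometer-derived heading as the measurement. This is a minimal sketch only; angle wrap-around handling and the paper's M-estimator and adaptive factor are omitted, and the noise values are illustrative.

```python
class HeadingKF:
    """Minimal 1-state heading filter. x: heading (rad), P: variance,
    Q: process noise added per prediction, R: measurement noise variance."""
    def __init__(self, heading=0.0, P=1.0, Q=0.01, R=0.5):
        self.x, self.P, self.Q, self.R = heading, P, Q, R

    def predict(self, gyro_rate, dt):
        self.x += gyro_rate * dt      # integrate angular rate
        self.P += self.Q              # uncertainty grows between updates

    def update(self, measured_heading):
        K = self.P / (self.P + self.R)            # Kalman gain
        self.x += K * (measured_heading - self.x) # blend in the measurement
        self.P *= (1.0 - K)                       # uncertainty shrinks
        return self.x
```

The robust (M-estimator) and adaptive extensions of the paper would down-weight `measured_heading` when the innovation is an outlier, e.g. under magnetic disturbance.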
Nonlinear Dynamic Characteristics of the Railway Vehicle
NASA Astrophysics Data System (ADS)
Uyulan, Çağlar; Gokasan, Metin
2017-06-01
The nonlinear dynamic characteristics of a railway vehicle are investigated thoroughly by applying two different wheel-rail contact models: a heuristic nonlinear friction creepage model derived using Kalker's theory, and the Polach model including dead-zone clearance. These two models are matched with the quasi-static form of the LuGre model to obtain a more realistic wheel-rail contact model. The LuGre model parameters are determined using a nonlinear optimization method whose objective is to minimize the error between the output of the Polach and Kalker models and the quasi-static LuGre model for specific operating conditions. The symmetric/asymmetric bifurcation behaviour and stable/unstable motion of the railway vehicle in the presence of nonlinearities, namely yaw damping forces in the longitudinal suspension system, are analyzed in great detail by changing the vehicle speed. Phase portraits of the lateral displacement of the leading wheelset of the railway vehicle are drawn below and at the critical speeds, where sub-critical Hopf bifurcations take place, for the two wheel-rail contact models. Asymmetric periodic motions have been observed during the simulation in the lateral displacement of the wheelset under different vehicle speed ranges. The coexistence of multiple steady states causes jumps in the amplitude of vibrations, resulting in instability problems for the railway vehicle. By using Lyapunov's indirect method, the critical hunting speeds are calculated with respect to changes in the radius of the curved track. Hunting, defined as a large-amplitude oscillation of the lateral displacement of the wheelset, is described by a limit cycle-type oscillation. The accuracy of the LuGre model adapted from Kalker's model for predicting the critical speed is higher than that of the LuGre model adapted from Polach's model.
From the results of the analysis, the critical hunting speed should be verified through track tests under various kinds of excitation.
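The dead-zone clearance that appears in this contact model is, at its core, the standard dead-zone nonlinearity: zero output inside a clearance band, shifted linear response outside it. A generic sketch (the band half-width used in the test is illustrative):

```python
def dead_zone(u, band):
    """Symmetric dead-zone nonlinearity: output is zero for |u| <= band,
    and the excess beyond the band otherwise."""
    if u > band:
        return u - band
    if u < -band:
        return u + band
    return 0.0
```

In the railway context, `u` would be a relative displacement (e.g. wheelset-to-rail lateral clearance) and `band` the flange clearance within which no contact force develops.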
Simulation of the Aerosol-Atmosphere Interaction in the Dead Sea Area with COSMO-ART
NASA Astrophysics Data System (ADS)
Vogel, Bernhard; Bangert, Max; Kottmeier, Christoph; Rieger, Daniel; Schad, Tobias; Vogel, Heike
2014-05-01
The Dead Sea is a unique environment located in the Dead Sea Rift Valley. The fault system of the Dead Sea Rift Valley marks the political borders between Israel, Jordan, and Palestine. The Dead Sea region and the ambient Eastern Mediterranean coastal zone provide a natural laboratory for studying atmospheric processes ranging from the smallest scale of cloud processes to regional weather and climate. The virtual institute DESERVE is designed as a cross-disciplinary and cooperative international project of the Helmholtz Centers KIT, GFZ, and UFZ with well-established partners in Israel, Jordan and Palestine. One main focus of its work packages is the role of aerosols in modifying clouds and precipitation; the development of the Dead Sea haze layer is one of the most intriguing open questions. The haze influences visibility, solar radiation, and evaporation and may even affect economy and health. We applied the online coupled model system COSMO-ART, which is able to treat the feedback processes between aerosol, radiation, and cloud formation, for a case study above the Dead Sea and adjacent regions. Natural aerosol like mineral dust and sea salt as well as anthropogenic primary and secondary aerosol is taken into account. Some of the observed features like the vertical double structure of the haze layer are already covered by the simulation. We found that absorbing aerosol like mineral dust causes a temperature increase in parts of the model domain. In other areas a decrease in temperature due to cirrus clouds modified by elevated dust layers is simulated.
Large-Area Visually Augmented Navigation for Autonomous Underwater Vehicles
2005-06-01
constrain position drift. Corrections of errors in position and orientation are made each time the mosaic is updated, which occurs every Lth video frame. They ... are the greatest strength of a VAN methodology. It is these measurements which help to correct dead-reckoned drift error and enforce recovery of a ...
INSTRUMENT | VARIABLE | INTERNAL? | UPDATE RATE | PRECISION | RANGE | DRIFT
Acoustic Altimeter | Z - Altitude | yes | varies: 0.1-10 Hz | 0.01-1.0 m | ... | varies
NASA Astrophysics Data System (ADS)
Hamiel, Yariv; Piatibratova, Oksana; Mizrahi, Yaakov; Nahmias, Yoav; Sagy, Amir
2018-04-01
Detailed field and geodetic observations of crustal deformation across the Jericho Fault section of the Dead Sea Fault are presented. New field observations reveal several slip episodes that ruptured the surface, consistent with strike-slip and extensional deformation along a fault zone about 200 m wide. Using dense Global Positioning System measurements, we obtain the velocities of new stations across the fault. We find that this section is locked for strike-slip motion with a locking depth of 16.6 ± 7.8 km and a slip rate of 4.8 ± 0.7 mm/year. The Global Positioning System measurements also indicate asymmetrical extension at shallow depths of the Jericho Fault section, between 0.3 and 3 km. Finally, our results suggest that the vast majority of the sinistral slip along the Dead Sea Fault in the southern Jordan Valley is accommodated by the Jericho Fault section.
Statistical Sensor Fusion of a 9-DOF Mems Imu for Indoor Navigation
NASA Astrophysics Data System (ADS)
Chow, J. C. K.
2017-09-01
Sensor fusion of a MEMS IMU with a magnetometer is a popular system design, because such 9-DoF (degrees of freedom) systems are capable of achieving drift-free 3D orientation tracking. However, these systems are often vulnerable to ambient magnetic distortions and lack useful position information; in the absence of external position aiding (e.g. satellite/ultra-wideband positioning systems) the dead-reckoned position accuracy from a 9-DoF MEMS IMU deteriorates rapidly due to unmodelled errors. Positioning information is valuable in many satellite-denied geomatics applications (e.g. indoor navigation, location-based services, etc.). This paper proposes an improved 9-DoF IMU indoor pose tracking method using batch optimization. By adopting a robust in-situ user self-calibration approach to model the systematic errors of the accelerometer, gyroscope, and magnetometer simultaneously in a tightly-coupled post-processed least-squares framework, the accuracy of the estimated trajectory from a 9-DoF MEMS IMU can be improved. Through a combination of relative magnetic measurement updates and a robust weight function, the method is able to tolerate a high level of magnetic distortions. The proposed auto-calibration method was tested in-use under various heterogeneous magnetic field conditions to mimic a person walking with the sensor in their pocket, a person checking their phone, and a person walking with a smartwatch. In these experiments, the presented algorithm improved the in-situ dead-reckoning orientation accuracy by 79.8-89.5 % and the dead-reckoned positioning accuracy by 72.9-92.8 %, thus reducing the relative positioning error from metre-level to decimetre-level after ten seconds of integration, without making assumptions about the user's dynamics.
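The rapid deterioration of dead-reckoned position described in this abstract has a simple kinematic explanation: an uncompensated gyro bias b makes the heading error grow as b·t, and for small angles the cross-track position error then grows roughly quadratically, as 0.5·v·b·t². The sketch below illustrates only this first-order effect; all numeric values are illustrative.

```python
import math

def heading_error(gyro_bias, t):
    """Heading error (rad) accumulated from a constant gyro bias (rad/s)."""
    return gyro_bias * t

def cross_track_error(speed, gyro_bias, t):
    """Small-angle cross-track position error: the integral of
    v*sin(b*t) ~ v*b*t, i.e. approximately 0.5*v*b*t**2 (m)."""
    return 0.5 * speed * gyro_bias * t * t

# Hypothetical example: walking at 1.4 m/s with a 0.5 deg/s residual bias.
bias = math.radians(0.5)
```

This quadratic growth is why in-situ calibration of systematic sensor errors, as proposed in the paper, pays off so quickly after only seconds of integration.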
Datli, Asli; Suh, HyunSuk; Kim, Young Chul; Choi, Doon Hoon; Hong, Joon Pio Jp
2017-03-01
The reconstruction of the posterior trunk, especially with large dead spaces, remains challenging. Regional muscle flaps may lack adequate volume and reach. The purpose of this report was to evaluate the efficacy of deepithelialized free-style perforator-based propeller flaps to obliterate defects with large dead space. A total of 7 patients with defects on the posterior trunk with large dead spaces were evaluated. After complete debridement or resection, all flaps were designed on a single perforator adjacent to the defect, deepithelialized, and then rotated in a propeller fashion. Flaps were further modified in some cases such as folding the flap after deepithelialization to increase bulk and to obliterate the dead space. The flap dimension ranged from 10 × 5 × 1 to 15 × 8 × 2.5 cm based on a single perforator. The rotation arch of the flap ranged from 90 to 180 degrees. Uneventful healing was noted in all cases. One case showed latent redness and swelling at 7 months after falling down, which resolved with medication. During the average follow-up of 28 months, there were no other flap and donor site complications. The deepithelialized propeller flap can be used efficiently to obliterate dead spaces in the posterior trunk and retains advantages such as having a good vascular supply, adequate bulk, sufficient reach without tension, and minimal donor site morbidity.
Common but unappreciated sources of error in one, two, and multiple-color pyrometry
NASA Technical Reports Server (NTRS)
Spjut, R. Erik
1988-01-01
The most common sources of error in optical pyrometry are examined. They can be classified as either noise and uncertainty errors, stray radiation errors, or speed-of-response errors. Through judicious choice of detectors and optical wavelengths the effect of noise errors can be minimized, but one should strive to determine as many of the system properties as possible. Careful consideration of the optical-collection system can minimize stray radiation errors. Careful consideration must also be given to the slowest elements in a pyrometer when measuring rapid phenomena.
Six-state phase modulation for reduced crosstalk in a fiber optic gyroscope.
Zhang, Chunxi; Zhang, Shaobo; Pan, Xiong; Jin, Jing
2018-04-16
Electrical crosstalk in an interferometric fiber-optic gyroscope (IFOG) is regarded as the most significant factor influencing dead bands. Here, we present a six-state modulation (SSM) technique to reduce crosstalk. Compared to conventional four-state modulation (FSM) or square-wave modulation (SWM), the SSM reduces the correlation between modulation voltage and demodulation reference by separating their fundamental frequencies, and thus reduces the bias error induced by crosstalk. The measured dead band of a 1500-m IFOG is approximately 0.02 °/h using FSM and approximately 0.08 °/h using SWM, whereas there is no evidence of dead band using SSM. The IFOG using SSM also exhibits better angular random walk (ARW) and bias instability performance compared to the same IFOG using FSM or SWM. These results verify the crosstalk reduction effect of SSM. In theory, by using the relative intensity noise (RIN) suppressing technique with the optimal modulation depth of 2π/3, the SSM can eliminate the crosstalk, which offers the potential for a high-performance IFOG with low noise, high sensitivity, wide dynamic range, and no dead band.
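The frequency-separation principle behind SSM can be illustrated with a toy lock-in demodulation sketch. This is illustrative only: the waveforms and frequencies below are hypothetical, not the paper's actual modulation states. Crosstalk whose fundamental coincides with the demodulation reference leaves a DC bias after averaging, while crosstalk whose fundamental is moved to a different frequency averages out:

```python
import numpy as np

# Illustrative sketch (hypothetical waveforms, not the paper's actual SSM):
# a crosstalk signal proportional to the modulation voltage is demodulated
# by a square reference and averaged over a common period.
fs = 6000                      # samples per second (arbitrary)
T = 1.0                        # averaging window: 1 s
t = np.arange(0, T, 1 / fs)

f_ref = 6.0                                    # demodulation frequency (Hz)
ref = np.sign(np.sin(2 * np.pi * f_ref * t))   # square demodulation reference

# Case 1: crosstalk at the reference frequency (square-wave modulation):
# demodulation leaves a large DC bias.
xtalk_same = np.sign(np.sin(2 * np.pi * f_ref * t))
bias_same = np.mean(xtalk_same * ref)

# Case 2: crosstalk fundamental separated from the reference (here f_ref / 2,
# whose odd harmonics never coincide with those of the reference):
# the demodulated bias averages to ~0.
xtalk_sep = np.sign(np.sin(2 * np.pi * (f_ref / 2) * t))
bias_sep = np.mean(xtalk_sep * ref)
```

Moving the crosstalk energy away from the demodulation frequency is what suppresses the spurious bias, independent of the exact six-state waveform.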
NASA Astrophysics Data System (ADS)
Iliescu, Ciprian; Tresset, Guillaume; Xu, Guolin
2007-06-01
This letter presents a dielectrophoretic (DEP) separation method of particles under continuous flow. The method consists of flowing two particle populations through a microfluidic channel, in which the vertical walls are the electrodes of the DEP device. The irregular shape of the electrodes generates both electric field and fluid velocity gradients. As a result, the particles that exhibit negative DEP can be trapped in the fluidic dead zones, while the particles that experience positive DEP are concentrated in the regions with high velocity and collected at the outlet. The device was tested with dead and living yeast cells.
Geology and hydrocarbon potential of the Dead Sea Rift Basins of Israel and Jordan
Coleman, James; ten Brink, Uri S.
2016-01-01
Geochemical analyses indicate that the source of all oils, asphalts, and tars recovered in the Lake Lisan basin is the Ghareb Formation. Geothermal gradients along the Dead Sea fault zone vary from basin to basin. Syn-wrench potential reservoir rocks are highly porous and permeable, whereas pre-wrench strata commonly exhibit lower porosity and permeability. Biogenic gas has been produced from Pleistocene reservoirs. Potential sealing intervals may be present in Neogene evaporites and tight lacustrine limestones and shales. Simple structural traps are not evident; however, subsalt traps may exist. Unconventional source rock reservoir potential has not been tested.
Magnetic character of a large continental transform: an aeromagnetic survey of the Dead Sea Fault
ten Brink, Uri S.; Rybakov, Michael; Al-Zoubi, Abdallah S.; Rotstein, Yair
2007-01-01
New high-resolution airborne magnetic (HRAM) data along a 120-km-long section of the Dead Sea Transform in southern Jordan and Israel shed light on the shallow structure of the fault zone and on the kinematics of the plate boundary. Despite infrequent seismic activity and only intermittent surface exposure, the fault is delineated clearly on a map of the first vertical derivative of the magnetic intensity, indicating that the source of the magnetic anomaly is shallow. The fault is manifested by a 10–20 nT negative anomaly in areas where the fault cuts through magnetic basement and by a
NASA Astrophysics Data System (ADS)
Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.
2017-10-01
Biodiversity is commonly referred to as species diversity, but in forest ecosystems variability in structural and functional characteristics can also be treated as a measure of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means for characterizing a forest ecosystem with high spatial resolution, permitting the physical characteristics of a forest ecosystem to be measured from a biodiversity viewpoint. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter in mapping biodiversity indicators, such as structural complexity as well as the amount of deciduous and dead trees, at plot level in southern boreal forests. Standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator, resulting in a mean error of 0.5 m with a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviations of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. Spectral features describing brightness (i.e., higher reflectance values) prevailed in feature selection, but several wavelengths were represented. Thus, it can be concluded that structural complexity can be predicted reliably, although it can be expected to be underestimated, with photogrammetric point clouds obtained with a small UAV. Additionally, plot-level volume of dead trees can be predicted with small mean error, whereas identifying deciduous species was more challenging at plot level.
The NOAA-NASA CZCS Reanalysis Effort
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Conkright, Margarita E.; OReilly, John E.; Patt, Frederick S.; Wang, Meng-Hua; Yoder, James; Casey-McCabe, Nancy; Koblinsky, Chester J. (Technical Monitor)
2001-01-01
Satellite observations of global ocean chlorophyll span over two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the NOAA-NASA CZCS Reanalysis (NCR) Effort. NCR consisted of 1) algorithm improvement (AI), where CZCS processing algorithms were improved using modernized atmospheric correction and bio-optical algorithms, and 2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. The results indicated major improvement over the previously available CZCS archive. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes bit error probability, and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum aposteriori probability) decoding algorithm.
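The word/bit distinction can be made concrete with a toy example (hypothetical numbers, not from the chapter): for the (3,2) even-parity code on a binary symmetric channel, the bitwise MAP decisions minimize the bit error probability but need not form a codeword, whereas MLD always returns a codeword.

```python
# Toy comparison of ML word decoding and bitwise MAP decoding over a BSC
# for the (3,2) even-parity code. All numbers are hypothetical.
code = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # even-parity codewords
p = 0.1                       # BSC crossover probability
y = (0, 0, 1)                 # received word (distance 1 from three codewords)

def likelihood(c):
    d = sum(ci != yi for ci, yi in zip(c, y))        # Hamming distance to y
    return p ** d * (1 - p) ** (3 - d)

# ML word decoding: pick a codeword maximizing the likelihood.
# (Here three codewords tie at distance 1; any of them is an ML decision.)
ml_word = max(code, key=likelihood)

# Bitwise MAP decoding: for each position, compare the total posterior mass
# of codewords with that bit equal to 1 against the mass with it equal to 0.
Z = sum(likelihood(c) for c in code)
map_bits = tuple(
    int(sum(likelihood(c) for c in code if c[i] == 1) > Z / 2)
    for i in range(3)
)
# map_bits == (0, 0, 1), which is NOT a codeword: bitwise MAP minimizes the
# bit error probability but may output a sequence outside the code.
```

This is exactly the suboptimality of MLD with respect to bit error probability described above, seen from the other side: the two criteria can genuinely disagree.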
Discordance between living and death assemblages as evidence for anthropogenic ecological change
Kidwell, Susan M.
2007-01-01
Mismatches between the composition of a time-averaged death assemblage (dead remains sieved from the upper mixed-zone of the sedimentary column) and the local living community are typically attributed to natural postmortem processes. However, statistical analysis of 73 molluscan data sets from estuaries and lagoons reveals significantly poorer average “live-dead agreement” in settings of documented anthropogenic eutrophication (AE) than in areas where AE and other human impacts are negligible. Taxonomic similarity of paired live and dead species lists declines steadily among areas as a function of AE severity, and, for data sets comprising only adults, rank-order agreement in species abundance drops where AE is suspected. The observed live-dead differences in composition are consistent with eutrophication (anomalous abundance of seagrass-dwellers and/or scarcity of organic-loving species in the death assemblage), suggesting compositional inertia of death assemblages to recent environmental change. Molluscan data sets from open shelf settings (n = 34) also show higher average live-dead discordance in areas of AE. These results indicate that (i) live-dead discordance in surficial grab samples provides valuable evidence for strong anthropogenic modification of benthic communities, (ii) actualistic estimates of the ecological fidelity of molluscan death assemblages tend to be erroneously pessimistic when conducted in nonpristine settings, and (iii) based on their high fidelity in pristine study areas, death assemblages are a promising means of reconstructing otherwise elusive preimpact ecological baselines from sedimentary records. PMID:17965231
Flow Mapping Based on the Motion-Integration Errors of Autonomous Underwater Vehicles
NASA Astrophysics Data System (ADS)
Chang, D.; Edwards, C. R.; Zhang, F.
2016-02-01
Knowledge of a flow field is crucial in the navigation of autonomous underwater vehicles (AUVs) since the motion of AUVs is affected by ambient flow. Due to the imperfect knowledge of the flow field, it is typical to observe a difference between the actual and predicted trajectories of an AUV, which is referred to as a motion-integration error (also known as a dead-reckoning error if an AUV navigates via dead-reckoning). The motion-integration error has been essential for an underwater glider to compute its flow estimate from the travel information of the last leg and to improve navigation performance by using the estimate for the next leg. However, the estimate by nature exhibits a phase difference compared to ambient flow experienced by gliders, prohibiting its application in a flow field with strong temporal and spatial gradients. In our study, to mitigate the phase problem, we have developed a local ocean model by combining the flow estimate based on the motion-integration error with flow predictions from a tidal ocean model. Our model has been used to create desired trajectories of gliders for guidance. Our method is validated by Long Bay experiments in 2012 and 2013 in which we deployed multiple gliders on the shelf of South Atlantic Bight and near the edge of Gulf Stream. In our recent study, the application of the motion-integration error is further extended to create a spatial flow map. Considering that the motion-integration errors of AUVs accumulate along their trajectories, the motion-integration error is formulated as a line integral of ambient flow which is then reformulated into algebraic equations. By solving an inverse problem for these algebraic equations, we obtain the knowledge of such flow in near real time, allowing more effective and precise guidance of AUVs in a dynamic environment. This method is referred to as motion tomography. We provide the results of non-parametric and parametric flow mapping from both simulated and experimental data.
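The basic flow estimate behind this approach can be sketched in a few lines (hypothetical numbers; a constant-flow leg, so the leg-averaged estimate is exact). The motion-integration error over a leg, divided by the travel time, gives the leg-averaged ambient flow:

```python
import numpy as np

# Sketch of the flow estimate underlying motion tomography: the vehicle's
# motion-integration (dead-reckoning) error over one leg, divided by the
# leg duration, yields the leg-averaged flow. All numbers are hypothetical.
T = 3600.0                                   # leg duration (s)
v_through_water = np.array([0.30, 0.00])     # commanded velocity (m/s, east/north)
true_flow = np.array([0.05, -0.02])          # ambient flow (m/s), unknown to the AUV

start = np.array([0.0, 0.0])
predicted_end = start + v_through_water * T             # dead-reckoned position
actual_end = start + (v_through_water + true_flow) * T  # surfaced GPS fix

# Motion-integration error and the resulting leg-averaged flow estimate:
error = actual_end - predicted_end
flow_estimate = error / T
# Exact here because the flow is constant along the leg; in a real field it
# is only a leg average, which is the phase problem the paper addresses.
```

Motion tomography generalizes this single-leg average: each leg contributes one line integral of the flow, and many crossing legs are inverted jointly for a spatial flow map.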
Optimization of Highway Work Zone Decisions Considering Short-Term and Long-Term Impacts
2010-01-01
...combination of lane closure and traffic control strategies which can minimize the one-time work zone cost. Considering the complex and combinatorial nature of this optimization problem, a heuristic... Notation: NV = the number of vehicle classes; NPV = Net Present Value; p'(t) = adjusted traffic diversion rate at time t; p(t) = natural diversion rate.
Phosphorus (P) remediation is an extremely difficult and costly environmental problem and could cost $44.5 billion for treatment using conventional water treatment plants to meet EPA requirements. Phosphorus runoffs can lead to dead zones due to eutrophication and also ca...
Monitoring ecosystem restoration at various scales in LAEs can be challenging, frustrating and rewarding. Some of the major ecosystem restoration monitoring occurring in LAEs include: seagrass expansion/contraction; dead zone sizes; oyster reefs; sea turtle nesting; toxic and nu...
Agricultural production in the Corn Belt region of the Upper Mississippi River Basin (UMRB) remains a leading source of nitrogen runoff that contributes to the annual hypoxic 'Dead Zone' in the Gulf of Mexico. The rise of corn production, land conversion, and fertilizer use in re...
McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.
2016-01-01
The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
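The dissociation between the two strategies can be sketched with a toy skewed weight distribution (hypothetical numbers, not the study's actual stimuli): the minimal-squared-error prediction is the distribution's mean, the maximum a posteriori prediction is its mode, and with skew the two differ.

```python
import numpy as np

# Sketch (hypothetical numbers): for a skewed distribution of possible
# object weights, the predictor minimizing expected squared error is the
# mean, while the maximum a posteriori choice is the mode.
weights = np.array([2.0, 3.0, 4.0, 5.0, 10.0])      # possible object weights (N)
probs = np.array([0.40, 0.30, 0.15, 0.10, 0.05])    # skewed probabilities

mean_pred = np.sum(weights * probs)    # minimal-squared-error prediction (3.3)
map_pred = weights[np.argmax(probs)]   # most likely single weight (2.0)

def expected_sq_error(pred):
    return np.sum(probs * (weights - pred) ** 2)

# With skew the two predictions differ, so lifting behavior (grip and load
# force rates) can reveal which strategy the sensorimotor system uses.
```

The study's finding that lifting forces track the minimal-squared-error prediction corresponds to participants scaling forces toward the mean of the weight distribution rather than its mode.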
Perceptual Color Characterization of Cameras
Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo
2014-01-01
Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
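The conventional baseline that the paper improves on, a 3 × 3 matrix fitted by least squares, can be sketched as follows (synthetic data; the matrix and patch values are hypothetical, not a real camera):

```python
import numpy as np

# Sketch of the baseline least-squares color characterization: fit a 3x3
# matrix M mapping camera RGB to XYZ over a set of training patches.
# M_true and the patch values are hypothetical, synthetic data.
rng = np.random.default_rng(0)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])          # hypothetical ground truth
rgb = rng.uniform(0.0, 1.0, size=(24, 3))        # 24 training patches
xyz = rgb @ M_true.T                             # noiseless sensor model

# Least-squares solution of xyz ~= rgb @ M.T (one row of M per XYZ channel):
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T
```

The paper's contribution is to replace this squared-error objective with perceptual error measures (ΔE, S-CIELAB, CID), which are nonlinear in M and are therefore searched via spherical sampling rather than solved in closed form.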
Stripe-PZT Sensor-Based Baseline-Free Crack Diagnosis in a Structure with a Welded Stiffener.
An, Yun-Kyu; Shen, Zhiqi; Wu, Zhishen
2016-09-16
This paper proposes a stripe-PZT sensor-based baseline-free crack diagnosis technique in the heat affected zone (HAZ) of a structure with a welded stiffener. The proposed technique enables one to identify and localize a crack in the HAZ using only current data measured using a stripe-PZT sensor. The use of the stripe-PZT sensor makes it possible to significantly improve the applicability to real structures and minimize man-made errors associated with the installation process by embedding multiple piezoelectric sensors onto a printed circuit board. Moreover, a new frequency-wavenumber analysis-based baseline-free crack diagnosis algorithm minimizes false alarms caused by environmental variations by avoiding simple comparison with the baseline data accumulated from the pristine condition of a target structure. The proposed technique is numerically as well as experimentally validated using a plate-like structure with a welded stiffener, revealing that it successfully identifies and localizes a crack in the HAZ.
NASA Astrophysics Data System (ADS)
Palchan, Daniel; Stein, Mordechai; Goldstein, Steven L.; Almogi-Labin, Ahuva; Tirosh, Ofir; Erel, Yigal
2018-01-01
The sediments deposited at the depocenter of the Dead Sea comprise a high-resolution archive of hydrological changes in the lake's watershed and record the desert dust transport to the region. This paper reconstructs the dust transport to the region during the termination of glacial Marine Isotope Stage 6 (MIS 6; ∼135-129 ka) and the last interglacial peak period (MIS 5e, ∼129-116 ka). We use chemical and Nd and Sr isotope compositions of fine detrital material recovered from a sediment core drilled at the deepest floor of the Dead Sea. These data are integrated with data obtained from cores drilled at the floor of the Red Sea, thus forming a Red Sea-Dead Sea transect extending from the desert belt to the Mediterranean climate zone. The Dead Sea accumulated flood sediments derived from three regional surface cover types: settled desert dust, mountain loess-soils, and loess-soils filling valleys in the Dead Sea watershed, termed here "Valley Loess". The Valley Loess shows a distinct 87Sr/86Sr ratio of 0.7081 ± 1, inherited from dissolved detrital calcites that originate from dried waterbodies in the Sahara and are transported with the dust to the entire transect. Our reconstruction of hydro-climate and synoptic conditions illustrates the following history. During glacial period MIS 6, Mediterranean cyclones governed the transport of Saharan dust and rains to the Dead Sea watershed, driving the development of both mountain soils and Valley Loess. Then, at Heinrich event 11, dry western winds blew Saharan dust over the entire Red Sea-Dead Sea transect, marking a latitudinal expansion of the desert belt. Later, when global sea level rose, the Dead Sea watershed went through extreme aridity; the lake retreated, depositing salt and accumulating fine detritus of the Valley Loess. During peak interglacial MIS 5e, enhanced flooding activity flushed the mountain soils and fine detritus from all around the Dead Sea and Red Sea, marking a significant "contraction" of the desert belt. At the end of MIS 5e the effect of the regional precipitation diminished and the Dead Sea and Red Sea areas re-entered severe arid conditions, with extensive salt deposition at the Dead Sea.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Quartermaster 1 and C, Rate Training Manual.
ERIC Educational Resources Information Center
Naval Personnel Program Support Activity, Washington, DC.
The subject matter of this training manual is prepared for regular navy and naval reserve personnel. Operations of gyrocompasses and magnetic and magnesyn compasses are discussed with a background of error determination, compass adjustments, and degaussing applications. Navigation techniques are analyzed in terms of piloting, dead reckoning,…
Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.
Okubo, T; Shibata, H; Takishima, T
1983-07-01
By means of a mathematical model, we have studied a way to correct for the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow pattern: constant, exponential, and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A time correction smaller than this, e.g. the lag time only or the lag time plus the 50% response time, gives an overestimation, and a correction larger than this results in underestimation. The magnitude of error is dependent on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curve does not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at rather fast flow rates.
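The correction rule can be checked with a one-front simulation (hypothetical lag, time constant, and front position): a first-order analyzer shifts the apparent position of a concentration front by the lag time plus the time constant, so subtracting exactly that amount restores the true front position used in Fowler's equal-area construction.

```python
import numpy as np

# Sketch (hypothetical numbers): a gas analyzer with lag time L and
# first-order time constant tau shifts the equal-area (centroid) position
# of a concentration front by L + tau.
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
t0, L, tau = 0.50, 0.10, 0.05      # true front, analyzer lag, time constant

# Analyzer output for a unit concentration step at t0: delayed by L, then
# an exponential first-order rise with time constant tau.
y = np.where(t > t0 + L, 1.0 - np.exp(-(t - t0 - L) / tau), 0.0)

# Fowler's equal-area front position is the centroid of dy/dt:
dy = np.diff(y)
apparent_front = np.sum(t[1:] * dy) / np.sum(dy)

# Correcting by lag + time constant recovers the true front, whereas
# correcting by the lag alone would leave a residual shift of tau.
corrected_front = apparent_front - (L + tau)
```

This mirrors the paper's conclusion: a smaller correction (lag only, or lag plus 50% response time) leaves the front late and overestimates dead space, and a larger one underestimates it.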
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.
2013-01-01
The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6 yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20° S-20° N.
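The layer-merging effect can be sketched with the standard optimal-estimation expression for the smoothing error covariance, S_s = (A - I) S_a (A - I)^T, where A is the averaging-kernel matrix and S_a the covariance of the true variability. The kernel, covariance, and layer count below are toy choices, not the SBUV system's:

```python
import numpy as np

# Sketch (synthetic numbers) of the optimal-estimation smoothing error
# S_s = (A - I) S_a (A - I)^T and of why merging thin layers into one
# thick column suppresses it.
n = 12
# Toy circulant 3-layer boxcar averaging kernel: every row and every column
# sums to 1, standing in for the limited vertical resolution of the retrieval.
A = np.zeros((n, n))
for i in range(n):
    for j in (-1, 0, 1):
        A[i, (i + j) % n] = 1.0 / 3.0

S_a = np.eye(n)                              # toy a priori variability covariance
I = np.eye(n)
S_s = (A - I) @ S_a @ (A - I).T              # smoothing error covariance

per_layer_var = np.diag(S_s)                 # nonzero for every thin layer
w = np.ones(n)                               # merge all layers into one column
merged_var = w @ S_s @ w                     # ~0: the column conserves kernel mass
```

Because the kernel redistributes rather than destroys column mass, the merged column is nearly insensitive to the smoothing, which is the same mechanism behind the small total-column smoothing error and the recommended thick-layer combinations.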
Spatio-temporal development of sinkholes on the eastern shore of the Dead Sea
NASA Astrophysics Data System (ADS)
Holohan, Eoghan; Saberi, Leila; Al-Halbouni, Djamil; Sawarieh, Ali; Closson, Damien; Alrshdan, Hussam; Walter, Thomas; Dahm, Torsten
2017-04-01
The ongoing, largely anthropogenically-forced decline of the Dead Sea is associated with the most prolific development of sinkholes worldwide. The fall in hydrological base level since the 1960s is thought to enable relatively fresh ground waters to dissolve underground salt deposits that were previously in equilibrium with hypersaline Dead Sea brine. Sinkhole development in response to this dissolution began in the 1980s and is still ongoing; it represents a significant geohazard in the Dead Sea region. We present new research undertaken within the Dead Sea Research Venue (DESERVE) on the spatio-temporal evolution of the main sinkhole-affected site on the eastern shore of the Dead Sea, at Ghor Al-Haditha in Jordan. Our data set includes optical satellite imagery, aerial survey photographs and drone-based photogrammetric surveys with high spatial (<1 m² to 0.05 m per pixel) and temporal (decadal from 1970-2010, annual from 2004-2016) resolution. These enable new quantitative insights into this, the largest of all the Dead Sea sinkhole sites. Our analysis shows that there are now over 800 sinkholes at Ghor Al-Haditha. Sinkholes initiated as spatially distinct clusters in the late 1980s to early 1990s. While some clusters have since become inactive, most have expanded and merged with time. New clusters have also developed, mainly in the more recently exposed north of the area. With the retreat of the Dead Sea, the roughly coastline-parallel zone of sinkhole formation has expanded unevenly but systematically seawards. Such a seaward migration of sinkhole formation is predicted from hydrogeological theory, but is as yet not consistently observed elsewhere at the Dead Sea. The rate of sinkhole formation at Ghor Al-Haditha accelerated markedly during the late 2000s to a peak of about 100 per year in 2009. Similar accelerations are observed on the western shore, but differ in timing. The rate of sinkhole formation on the eastern shore has since declined to about 50 per year. Such differences in the overall spatio-temporal evolution of sinkholes on the eastern and western shores of the Dead Sea likely highlight the important role of local hydrogeological conditions and processes in governing sinkhole development.
NASA Technical Reports Server (NTRS)
Clerici, Giancarlo; Burnside, Walter D.
1989-01-01
In recent years, the compact range has become very popular for measuring Radar Cross Section (RCS) and antenna patterns. The compact range, in fact, offers several advantages due to reduced size, a controlled environment, and privacy. On the other hand, it has some problems of its own, which must be solved properly in order to achieve high-quality measurement results. For example, diffraction from the edges of the main reflector corrupts the plane wave in the target zone and creates spurious scattering centers in RCS measurements. While diffraction can be minimized by using rolled edges, the field of an offset single-reflector compact range is corrupted by three other errors: the taper of the reflected field, the cross-polarization introduced by the tilt of the feed, and the aperture blockage introduced by the feed itself. These three errors can be eliminated by the use of a subreflector system. A properly designed subreflector system offers very little aperture blockage, introduces no cross-polarization, and minimizes the taper of the reflected field. A Gregorian configuration has been adopted in order to enclose the feed and the ellipsoidal subreflector in a lower chamber, which is isolated by absorbers from the upper chamber, where the main parabolic reflector and the target zone are enclosed. The coupling between the two rooms is performed through a coupling aperture. The first-cut design for such a subreflector system is performed through Geometrical Optics (GO) ray-tracing techniques and is greatly simplified by the use of the concept of the central ray introduced by Dragone. The purpose of the GO design is to establish the basic dimensions of the main reflector and subreflector, the size of the primary and secondary illuminating surfaces, and the tilt angles of the subreflector and feed, and to estimate the feed beamwidth. At the same time, the shape of the coupling aperture is initially determined.
Specific effect of vincristine on epididymis.
Averal, H I; Stanley, A; Murugaian, P; Palanisamy, M; Akbarsha, M A
1996-01-01
Wistar strain male albino rats were administered with vincristine (VCR) sulphate (10 micrograms/day for 15 days); epithelial cell types of the caput (zone II) and cauda (zone V) were studied light microscopically adopting semithin sectioning. VCR caused conspicuous pathological changes in the principal and apical cells of the caput and the clear cells of the cauda. The study points to toxic effect of VCR on these cell types, suggesting impairment of epididymal function, particularly concerning sperm maturation and endocytotic removal of the contents of the cytoplasmic droplets and dead sperm.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
... document, which addresses safety achieved through drug product design, is the first in a series of planned...] Draft Guidance for Industry on Safety Considerations for Product Design To Minimize Medication Errors... Considerations for Product Design to Minimize Medication Errors.'' The draft guidance provides sponsors of...
Tay, C S
2000-02-01
Medical and dental errors and negligence are again in the spotlight in recent news reports: "Dead because of doctor's bad handwriting"; "Prescribing drug overdoses"; "Germ-infested soap pumps--infections in hospitals". This article explains dental negligence, including the dental duty of care and the standard of care expected of dentists in relation to the Bolam principle.
Explosive Model Tarantula 4d/JWL++ Calibration of LX-17
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souers, P C; Vitello, P A
2008-09-30
Tarantula is an explosive kinetic package intended to do detonation, shock initiation, failure, corner-turning with dead zones, gap tests and air gaps in reactive flow hydrocode models. The first, 2007-2008 version with monotonic Q is here run inside JWL++ with square zoning from 40 to 200 zones/cm on ambient LX-17. The model splits the rate behavior in every zone into sections set by the hydrocode pressure, P + Q. As the pressure rises, we pass through the no-reaction, initiation, ramp-up/failure and detonation sections sequentially. We find that the initiation and pure detonation rate constants are largely insensitive to zoning but that the ramp-up/failure rate constant is extremely sensitive. At no time does the model pass every test, but the pressure-based approach generally works. The best values for the ramp/failure region are listed here in Mb units.
High-rate dead-time corrections in a general purpose digital pulse processing system
Abbene, Leonardo; Gerardi, Gaetano
2015-01-01
Dead-time losses are well-recognized and well-studied drawbacks in counting and spectroscopic systems. In this work the dead-time correction capabilities of a real-time digital pulse processing (DPP) system for high-rate high-resolution radiation measurements are presented. The DPP system, through a fast and a slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting loss corrections even for variable or transient radiation. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs) and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, will be presented. The throughput formula for a series combination of two types of dead-time is also derived. The results of dead-time corrections, performed through different methods, will be reported and discussed, pointing out the error in ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (using 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270
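The textbook corrections that such systems automate can be sketched in closed form. The sketch below shows the standard non-paralyzable and paralyzable dead-time models, not the multi-parameter DPP procedure of the paper, and the example rate and dead-time values are assumptions.

```python
import math

def nonparalyzable_true_rate(m, tau):
    """Recover the true input rate n (counts/s) from the measured rate m
    for a non-paralyzable dead-time tau (s): since m = n / (1 + n*tau),
    inverting gives n = m / (1 - m*tau)."""
    return m / (1.0 - m * tau)

def paralyzable_measured_rate(n, tau):
    """Measured rate for a paralyzable (extendable) dead-time:
    m = n * exp(-n * tau)."""
    return n * math.exp(-n * tau)

# Example: 1 Mcps measured with an assumed 100 ns non-paralyzable dead-time
n_true = nonparalyzable_true_rate(1.0e6, 100e-9)  # ~1.11 Mcps actually arriving
```

At the multi-Mcps rates reported above, such correction factors become substantial, which is why accurate ICR estimation matters.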
DOT National Transportation Integrated Search
2012-02-01
The influence of the rebar configuration on the occurrence of dead zones (= zero velocity) during flow of Self-Consolidating Concrete : in formworks has been investigated by single fluid numerical simulations. The main findings showed that for small ...
Detonation corner turning in vapor-deposited explosives using the micromushroom test
NASA Astrophysics Data System (ADS)
Tappan, Alexander S.; Yarrington, Cole D.; Knepper, Robert
2017-06-01
Detonation corner turning describes the ability of a detonation wave to propagate into unreacted explosive that is not immediately in the path normal to the wave. The classic example of corner turning is cylindrical and involves a small diameter explosive propagating into a larger diameter explosive as described by Los Alamos' Mushroom test (e.g. (Hill, Seitz et al. 1998)), where corner turning is inferred from optical breakout of the detonation wave. We present a complementary method to study corner turning in millimeter-scale explosives through the use of vapor deposition to prepare the slab (quasi-2D) analog of the axisymmetric mushroom test. Because the samples are in a slab configuration, optical access to the explosive is excellent and direct imaging of the detonation wave and ``dead zone'' that results during corner turning is possible. Results are compared for explosives that demonstrate a range of behaviors, from pentaerythritol tetranitrate (PETN), which has corner turning properties that are nearly ideal; to HNAB (hexanitroazobenzene), which has corner turning properties that reveal a substantial dead zone. Results are discussed in the context of microstructure and detonation failure thickness.
NASA Technical Reports Server (NTRS)
Jewett, M. E.; Rimmer, D. W.; Duffy, J. F.; Klerman, E. B.; Kronauer, R. E.; Czeisler, C. A.
1997-01-01
Fifty-six resetting trials were conducted across the subjective day in 43 young men using a three-cycle bright-light stimulus (approximately 10,000 lx). The phase-response curve (PRC) to these trials was assessed for the presence of a "dead zone" of photic insensitivity and was compared with another three-cycle PRC that had used a background of approximately 150 lx. To assess possible transients after the light stimulus, the trials were divided into 43 steady-state trials, which occurred after several baseline days, and 13 consecutive trials, which occurred immediately after a previous resetting trial. We found that 1) bright light induces phase shifts throughout the subjective day with no apparent dead zone; 2) there is no evidence of transients in constant routine assessments of the fitted temperature minimum 1-2 days after completion of the resetting stimulus; and 3) the timing of background room light modulates the resetting response to bright light. These data indicate that the human circadian pacemaker is sensitive to light at virtually all circadian phases, implying that the entire 24-h pattern of light exposure contributes to entrainment.
NASA Astrophysics Data System (ADS)
Hur, Jin; Jung, In-Soung; Sung, Ha-Gyeong; Park, Soon-Sup
2003-05-01
This paper presents the force performance of a brushless dc motor with a continuous ring-type permanent magnet (PM), considering its magnetization patterns: trapezoidal, trapezoidal with dead zone, and unbalanced trapezoidal magnetization with dead zone. The radial force density in a PM motor causes vibration, because vibration is induced by the traveling force from the rotating PM acting on the stator. The magnetization distribution of the PM as well as the shape of the teeth determines the distribution of force density. In particular, the distribution has a three-dimensional (3-D) pattern because of overhang; that is, it is not uniform in the axial direction. Thus, the analysis of radial force density requires dynamic analysis considering the 3-D shape of the teeth and overhang. The results show that the force density as a source of vibration varies considerably depending on the overhang and magnetization distribution patterns. In addition, the validity of the developed method, a coupled 3-D equivalent magnetic circuit network method with driving circuit and motion equation, is confirmed by comparison with a conventional method using the 3-D finite element method.
Minimizing traffic-related work zone crashes in Illinois.
DOT National Transportation Integrated Search
2013-04-01
This report presents the findings of a research project to study and develop recommendations to minimize work : zone crashes in Illinois. The objectives of this project were (1) to provide in-depth comprehensive review of the : latest literature on t...
Seismic surface-wave prospecting methods for sinkhole hazard assessment along the Dead Sea shoreline
NASA Astrophysics Data System (ADS)
Ezersky, M.; Bodet, L.; Al-Zoubi, A.; Camerlynck, C.; Dhemaied, A.; Galibert, P.-Y.; Keydar, S.
2012-04-01
The Dead Sea's coastal areas have been dramatically hit by sinkhole occurrences since around 1990 and there is an obvious potential for further collapse beneath main highways, agricultural lands and other populated places. The sinkhole hazard in this area threatens human lives and compromises future economic developments. The understanding of such a phenomenon is consequently of great importance in the development of protective solutions. Several geological and geophysical studies tend to show that evaporite karsts, caused by slow salt dissolution, are linked to the mechanism of sinkhole formation along both the Israel and Jordan shorelines. The continuous drop of the Dead Sea level, at a rate of 1 m/yr during the past decade, is generally proposed as the main triggering factor. The water table lowering induces the desaturation of shallow sediments overlying buried cavities in 10- to 30-meter-thick salt layers, at depths from 25 to 50 meters. Both the timing and location of sinkholes suggest that: (1) the salt weakens as a result of increasing fresh water circulation, thus enhancing the karstification process; (2) sinkholes appear to be related to the decompaction of the sediments above karstified zones. The location, depth, thickness and weakening of salt layers along the Dead Sea shorelines, as well as the thickness and mechanical properties of the upper sedimentary deposits, are thus considered as controlling factors of this ongoing process. Pressure-wave seismic methods are typically used to study sinkhole developments in this area. P-wave refraction and reflection methods are very useful to delineate the salt layers and to determine the thickness of overlying sediments. But the knowledge of shear-wave velocities (Vs) should add valuable insights into their mechanical properties, particularly when the groundwater level plays an important role in the process.
However, from a practical point of view, the measurement of Vs remains delicate because of well-known shear-wave generation and picking issues in shear-wave refraction seismic methods. As an alternative, indirect estimation of Vs can be proposed thanks to surface-wave dispersion measurement and inversion, an emerging seismic prospecting method for near-surface engineering and environment applications. Surface-wave prospecting methods have thus been proposed to address the sinkhole development processes along the Dead Sea shorelines. Two approaches have been used: (1) Vs mapping has been performed to discriminate soft and hard zones within salt layers, after calibration of inverted Vs near boreholes. Preliminarily, soft zones, associated with karstified salt, were characterized by Vs values lower than 1000 m/s, whereas hard zones presented values greater than 1400 m/s (to be refined in following studies); (2) roll-along acquisition and dispersion stacking has been performed to achieve multi-modal dispersion measurements along linear profiles. Inverted pseudo-2D Vs sections presented low Vs anomalies in the vicinity of existing sinkholes and made it possible to detect loose sediment associated with potential sinkhole occurrences. Acknowledgements: This publication was made possible through support provided by the U.S. Agency for International Development (USAID) and MERC Program under terms of Award No M27-050.
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing an approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on minimizing the mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
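For context, the MMSE baseline that NEGMIN is compared against can be realized by a standard LMS-adapted linear equalizer. The sketch below is a generic illustration with an assumed channel, tap count, and step size; the paper's negentropy criterion itself is not shown.

```python
import numpy as np

# Minimal LMS-adapted linear equalizer, i.e. the MMSE-criterion baseline that
# NEGMIN is compared against. Channel, tap count, and step size are assumed.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)        # BPSK training sequence
channel = np.array([1.0, 0.4, 0.2])                 # assumed ISI channel (noise-free)
received = np.convolve(symbols, channel, mode="full")[:len(symbols)]

n_taps, mu = 11, 0.01
delay = n_taps // 2                                 # decision delay
w = np.zeros(n_taps)
for k in range(n_taps, len(symbols)):
    x = received[k - n_taps:k][::-1]                # regressor, newest sample first
    err = symbols[k - delay] - w @ x                # estimation error
    w += mu * err * x                               # LMS (stochastic-gradient MMSE) update
```

After training, `sign(w @ x)` recovers the delayed symbols essentially error-free on this noise-free channel; the NEGMIN idea replaces the squared-error cost above with an approximate negentropy of `err`.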
Lee, Sever; Pinhas, Alpert; Alexei, Lyapustin; Yujie, Wang; Alexandra, Chudnovsky A
2017-09-01
The extreme rate of evaporation of the Dead Sea (DS) has serious implications for the surrounding area, including atmospheric conditions. This study analyzes the aerosol properties over the western and eastern parts of the DS during the year 2013, using MAIAC (Multi-Angle Implementation of Atmospheric Correction) for MODIS, which retrieves aerosol optical depth (AOD) data at a resolution of 1 km. The main goal of the study is to evaluate MAIAC over the study area and determine, for the first time, the prevailing aerosol spatial patterns. First, the MAIAC-derived AOD data were compared with data from three nearby AERONET sites (Nes Ziona - an urban site, and Sede Boker and Masada - two arid sites), and with the conventional Dark Target (DT) and Deep Blue (DB) retrievals for the same days and locations, on a monthly basis throughout 2013. For the urban site, the correlation coefficient (r) for DT/DB products showed better performance than MAIAC (r=0.80, 0.75, and 0.64 respectively) year-round. However, in the arid zones, MAIAC showed better correspondence to AERONET sites than the conventional retrievals (r=0.58-0.60 and 0.48-0.50 respectively). We investigated the difference in AOD levels, and its variability, between the Dead Sea coasts on a seasonal basis and calculated monthly/seasonal AOD averages for presenting AOD patterns over arid zones. Thus, we demonstrated that aerosol concentrations show a strong preference for the western coast, particularly during the summer season. This preference is most likely a result of local anthropogenic emissions combined with the typical seasonal synoptic conditions, the Mediterranean Sea breeze, and the region's complex topography. Our results also indicate that a large industrial zone showed higher AOD levels compared to an adjacent reference site, i.e., 13% higher during the winter season.
The application of phase grating to CLM technology for the sub-65nm node optical lithography
NASA Astrophysics Data System (ADS)
Yoon, Gi-Sung; Kim, Sung-Hyuck; Park, Ji-Soong; Choi, Sun-Young; Jeon, Chan-Uk; Shin, In-Kyun; Choi, Sung-Woon; Han, Woo-Sung
2005-06-01
As a promising technology for sub-65nm node optical lithography, CLM (Chrome-Less Mask) technology, among the RETs (Resolution Enhancement Techniques) for low k1, has been researched worldwide in recent years. CLM has several advantages, such as a relatively simple manufacturing process and competitive performance compared to phase-edge PSMs. For low-k1 lithography, we have researched the CLM technique as a good solution, especially for the sub-65nm node. As a step toward developing sub-65nm node optical lithography, we have applied CLM technology to 80nm-node lithography with the mesa and trench methods. From the analysis of CLM technology in 80nm lithography, we found that there is an optimal shutter size for best performance in the technique, that the increment of wafer ADI CD varies with the pattern's pitch, and that there is a limitation in patterning various shapes and sizes caused by the OPC dead-zone - the OPC dead-zone in the CLM technique is the specific range of shutter sizes that does not increase the wafer CD beyond a specific size. Small patterns are also easily broken while fabricating the CLM mask by the mesa method. Generally, the trench method has better optical performance than the mesa method. These issues have so far restricted the application of CLM technology to a small field. We approached these issues with a 3-D topographic simulation tool and found that they could be overcome by applying phase grating in trench-type CLM. With the simulation data, we made test masks containing many kinds of patterns under many different conditions and analyzed their performance through AIMS fab 193 and exposure on wafer. Finally, we have developed a CLM technology that is free of the OPC dead-zone and of pattern breakage during the fabrication process. Therefore, we can apply the CLM technique to sub-65nm node optical lithography, including logic devices.
An Alternative Time Metric to Modified Tau for Unmanned Aircraft System Detect And Avoid
NASA Technical Reports Server (NTRS)
Wu, Minghong G.; Bageshwar, Vibhor L.; Euteneuer, Eric A.
2017-01-01
A new horizontal time metric, Time to Protected Zone, is proposed for use in the Detect and Avoid (DAA) systems carried by unmanned aircraft systems (UAS). This time metric has three advantages over the currently adopted time metric, modified tau: it corresponds to a physical event, it is linear with time, and it can be directly used to prioritize intruding aircraft. The protected zone defines an area around the UAS that can be a function of each intruding aircraft's surveillance measurement errors. Even with its advantages, the Time to Protected Zone depends explicitly on encounter geometry and may be more sensitive to surveillance sensor errors than modified tau. To quantify its sensitivity, simulation of 972 encounters using realistic sensor models and a proprietary fusion tracker is performed. Two sensitivity metrics, the probability of time reversal and the average absolute time error, are computed for both the Time to Protected Zone and modified tau. Results show that the sensitivity of the Time to Protected Zone is comparable to that of modified tau if the dimensions of the protected zone are adequately defined.
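The two time metrics can be contrasted in a small sketch. The modified tau expression below is the commonly used DAA form; the protected zone is simplified to a fixed circle solved against straight-line relative motion, whereas the paper lets its dimensions vary with each intruder's surveillance errors. All numbers are illustrative.

```python
import math

def modified_tau(r, rdot, dmod):
    """Modified tau (s), commonly written tau_mod = -(r**2 - dmod**2)/(r*rdot)
    for a closing geometry (rdot < 0); r is horizontal range and dmod a
    distance-modification threshold, in consistent length units."""
    if rdot >= 0.0:
        return math.inf          # not closing: no finite modified tau
    return -(r * r - dmod * dmod) / (r * rdot)

def time_to_protected_zone(px, py, vx, vy, radius):
    """Smallest t >= 0 at which the relative position p + v*t enters a
    circular protected zone of the given radius; inf if it never does."""
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4.0 * a * c   # discriminant of |p + v*t|^2 = radius^2
    if a == 0.0 or disc < 0.0:
        return math.inf
    t_entry = (-b - math.sqrt(disc)) / (2.0 * a)
    return t_entry if t_entry >= 0.0 else math.inf

# Head-on example: 10,000 ft apart, closing at 200 ft/s, 4,000 ft zone radius
t_pz = time_to_protected_zone(10000.0, 0.0, -200.0, 0.0, 4000.0)  # 30.0 s to entry
tau_m = modified_tau(10000.0, -200.0, 4000.0)                     # 42.0 s
```

The 30 s value corresponds to a physical event (zone entry) and decreases linearly as the encounter unfolds, which is the advantage claimed above over the nonlinear modified tau.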
Wanner, R A; Edwards, M J; Wright, R G
1976-04-01
Hyperthermia was induced in guinea-pigs on day 21 of gestation by placing them in an incubator set at 42.5-43.0 degrees C for 1 hr. At intervals thereafter foetuses were removed from the uterus and sections of the telencephalon were prepared for light and electron microscopy. The histologic and ultrastructural appearance of the telencephalon of the normal 21-day guinea-pig foetus was described for comparative purposes. Damage to cells in mitosis, characterised by clumping of chromosomes, and dispersal of polysomes in interphase cells were observed immediately after hyperthermia. Breakdown of the network of junctional complexes was apparent at 4 hr and cellular proliferation was inhibited for 6-8 hr. Degenerative changes and cell deaths were observed deep in the ventricular zone after 8 hr; the extent of cell death was related to the post-stressing temperature. Proliferation was resumed at 8 hr and damaged and dead cells moved outward toward the intermediate zone. Phagocytosis of debris by large mononuclear cells was a common finding. Cytoplasmic inclusions, some of which were Feulgen-positive, were present in otherwise normal ventricular cells. Occasional dead cells and empty spaces were present in the ventricular zone at 24 hr and by 48 hr the ventricular zone was normal in appearance. It was concluded that the previously observed micrencephaly in the offspring of guinea-pig mothers which were heat stressed on day 21 of gestation resulted from a temporary cessation of proliferation and partial depopulation of the proliferating neuroepithelium.
Abandon the dead donor rule or change the definition of death?
Veatch, Robert M
2004-09-01
Research by Siminoff and colleagues reveals that many lay people in Ohio classify legally living persons in irreversible coma or persistent vegetative state (PVS) as dead, and that additional respondents, although classifying such patients as living, would be willing to procure organs from them. This paper analyzes possible implications of these findings for public policy. A majority would procure organs from those in irreversible coma or in PVS. Two strategies for legitimizing such procurement are suggested. One strategy would be to make exceptions to the dead donor rule, permitting procurement from those in PVS or at least those who are in irreversible coma, while continuing to classify them as living. Another strategy would be to further amend the definition of death to classify one or both groups as deceased, thus permitting procurement without violation of the dead donor rule. Permitting exceptions to the dead donor rule would require substantial changes in law--such as authorizing procuring surgeons to end the lives of patients by means of organ procurement--and would weaken societal prohibitions on killing. The paper suggests that it would be easier and less controversial to further amend the definition of death to classify those in irreversible coma and PVS as dead. Incorporation of a conscience clause to permit those whose religious or philosophical convictions support whole-brain or cardiac-based death pronouncement would avoid violating their beliefs while causing no more than minimal social problems. The paper questions whether those who would support an exception to the dead donor rule in these cases and those who would support a further amendment to the definition of death could reach agreement to adopt a public policy permitting organ procurement from those in irreversible coma or PVS when proper consent is obtained.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions to minimize the error with a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions are investigated.
We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
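The trade-off described above can be illustrated with a toy error model: discretization error scaling as h^p plus Monte Carlo sampling error scaling as N^(-1/2), minimized over grid size h with the number of realizations N set by a fixed budget. All constants and exponents below are hypothetical placeholders, not the paper's calibrated error model.

```python
import math

# Toy trade-off: a finer grid h shrinks discretization error but raises the
# cost per realization, leaving fewer realizations N within the budget and
# hence a larger statistical error. All constants are hypothetical.
C_DISC, P_ORDER = 5.0, 2.0     # discretization error ~ C_DISC * h**P_ORDER
C_STAT = 2.0                   # statistical error ~ C_STAT / sqrt(N)
COST_PER_CELL = 1e-6           # cost of one realization ~ COST_PER_CELL * h**-3 (3-D grid)
BUDGET = 10.0                  # total computational budget (arbitrary units)

best = None
for h in (0.5, 0.2, 0.1, 0.05, 0.02, 0.01):
    n_real = round(BUDGET / (COST_PER_CELL * h ** -3))  # realizations affordable at this h
    if n_real < 1:
        continue
    total_err = C_DISC * h ** P_ORDER + C_STAT / math.sqrt(n_real)
    if best is None or total_err < best[0]:
        best = (total_err, h, n_real)

err_opt, h_opt, n_opt = best   # neither the finest grid nor the most realizations wins
```

Under these placeholder constants the optimum is an intermediate grid (h = 0.05 with 1250 realizations), illustrating why choosing the grid resolution independently of the number of realizations can waste the budget.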
Modeling work zone crash frequency by quantifying measurement errors in work zone length.
Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet
2013-06-01
Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that the use of the traditional NB approach in this context can lead to the overestimation of the effect of work zone length on the crash occurrence. Copyright © 2013 Elsevier Ltd. All rights reserved.
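The general mechanism that motivates an ME model can be seen in a simple linear analog: classical measurement error in a covariate (here a synthetic "work zone length") biases its estimated slope toward zero. The data, noise levels, and OLS setting below are illustrative simplifications; the paper's model is a negative binomial CF model, not OLS.

```python
import numpy as np

# Classical errors-in-variables attenuation with synthetic data. The noisy
# covariate's slope estimate shrinks by roughly var_x / (var_x + var_noise).
rng = np.random.default_rng(42)
n_obs = 20000
true_length = rng.uniform(1.0, 10.0, n_obs)                # true length (miles)
outcome = 0.5 * true_length + rng.normal(0.0, 1.0, n_obs)  # true slope is 0.5
noisy_length = true_length + rng.normal(0.0, 2.0, n_obs)   # mis-measured length

def ols_slope(x, y):
    """Simple-regression slope cov(x, y) / var(x)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

b_true = ols_slope(true_length, outcome)    # close to 0.5
b_noisy = ols_slope(noisy_length, outcome)  # attenuated toward zero
```

With var(true_length) = 6.75 here, the expected attenuation factor is 6.75/10.75, so the noisy slope lands near 0.31. Whether unmodeled error inflates or deflates a particular coefficient in the NB setting depends on the error structure, which is exactly what the ME model accounts for.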
Air Gaps, Size Effect, and Corner-Turning in Ambient LX-17
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souers, P C; Hernandez, A; Cabacungan, C
2008-02-05
Various ambient measurements are presented for LX-17. The size (diameter) effect has been measured with copper and Lucite confinement, where the failure radii are 4.0 and 6.5 mm, respectively. The air well corner-turn has been measured with an LX-07 booster, and the dead-zone results are comparable to the previous TATB-boosted work. Four double cylinders have been fired, and dead zones appear in all cases. The steel-backed samples are faster than the Lucite-backed samples by 0.6 µs. Bare LX-07 and LX-17 of 12.7 mm radius were fired with air gaps. Long acceptor regions were used to truly determine whether detonation occurred or not. The LX-07 crossed at 10 mm with a slight time delay. Steady state LX-17 crossed at a 3.5 mm gap but failed to cross at 4.0 mm. LX-17 with a 12.7 mm run after the booster crossed a 1.5 mm gap but failed to cross 2.5 mm. Timing delays were measured where the detonation crossed the gaps. The Tarantula model is introduced as embedded in the reactive flow codes JWL++ and Linked Cheetah V4, mostly at 4 zones/mm. Tarantula has four pressure regions: off, initiation, failure and detonation. The physical basis of the input parameters is considered.
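The four-region structure can be sketched as a pressure-switched rate constant. The band edges and values below are hypothetical placeholders for illustration only, not the calibrated LX-17 parameters reported in the paper.

```python
def burn_rate_constant(p_total):
    """Rate constant applied to a zone as a function of hydrocode pressure
    P+Q (Mbar). Band edges and values are hypothetical placeholders."""
    if p_total < 0.02:       # 'off' region: no reaction below threshold
        return 0.0
    elif p_total < 0.10:     # 'initiation' region: slow growth
        return 0.5
    elif p_total < 0.25:     # 'failure' (ramp-up) region
        return 5.0
    else:                    # 'detonation' region: fast burn
        return 50.0

# A zone's burn fraction F then evolves as, e.g., dF/dt = burn_rate_constant(pq) * (1 - F)
```

The point of the sketch is the switching itself: because each zone's rate constant jumps between bands as P+Q rises, behavior such as dead zones and detonation failure can emerge from the band a zone gets stuck in.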
NASA Astrophysics Data System (ADS)
Klaus, Julian; Smettem, Keith; Pfister, Laurent; Harris, Nick
2017-04-01
There is ongoing interest in understanding and quantifying the travel times and dispersion of solutes moving through stream environments, including the hyporheic zone and/or in-channel dead zones where retention affects biogeochemical cycling processes that are critical to stream ecosystem functioning. Modelling these transport and retention processes requires the acquisition of tracer data from injection experiments where the concentrations are recorded downstream. Such experiments are often time consuming and costly, which may be the reason many modelling studies of chemical transport have tended to rely on relatively few well documented field case studies. This leads to the need for fast and cheap distributed sensor arrays that respond instantly and record chemical transport at points of interest on timescales of seconds at various locations in the stream environment. To tackle this challenge we present data from several tracer experiments carried out in the Attert river catchment in Luxembourg employing low-cost (on the order of a euro per sensor) potentiometric chloride sensors in a distributed array. We injected NaCl under various baseflow conditions in streams of different morphologies and observed solute transport at various distances and locations. These data are used to benchmark the sensors against data obtained from more expensive electrical conductivity meters. Furthermore, the data allowed spatial resolution of hydrodynamic mixing processes and identification of chemical 'dead zones' in the study reaches.
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
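The fractional-order operators tuned in such controllers can be approximated numerically, for example with the Grünwald-Letnikov definition. This is a generic sketch of the operator itself, not the fuzzy controller structure or GA tuning procedure of the paper.

```python
def gl_fractional_derivative(samples, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled signal, evaluated at the last sample (step size h).
    The weights w_k = (-1)**k * C(alpha, k) are generated by the recurrence
    w_{k+1} = w_k * (k - alpha) / (k + 1)."""
    coeff, acc = 1.0, 0.0           # coeff starts at w_0 = 1
    n = len(samples)
    for k in range(n):
        acc += coeff * samples[n - 1 - k]   # walk backward through the history
        coeff *= (k - alpha) / (k + 1)      # advance the binomial weight
    return acc / h ** alpha

# Sanity check: alpha = 1 recovers the backward difference of f(t) = t, i.e. ~1.0
ramp = [0.01 * i for i in range(200)]       # f(t) = t sampled at h = 0.01
d1 = gl_fractional_derivative(ramp, 1.0, 0.01)
```

An FO-PID term such as a fractional rate of the error signal can then be realized by applying this operator to the stored error history at each control step, with alpha as one of the tuned parameters.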
NASA Astrophysics Data System (ADS)
Fisher, Len
2013-12-01
In his book Brilliant Blunders, Mario Livio offers a detailed and fascinating examination of major errors made by five great scientists - Charles Darwin, Linus Pauling, Lord Kelvin, Fred Hoyle and Albert Einstein - as they sought to understand the evolution of life on Earth, the evolution of the Earth itself and the evolution of the universe as a whole.
Antimicrobial Activity of Bacillus Persicus 24-DSM Isolated from Dead Sea Mud.
Al-Karablieh, Nehaya
2017-01-01
The Dead Sea is a hypersaline lake with 34% salinity that gains its name from the absence of any living macroscopic creatures. Despite the extreme hypersaline environment, it is a unique ecosystem for various halophilic microorganisms adapted to it. Halophilic microorganisms are known for various potential biotechnological applications; the purpose of the current research is the isolation and screening of halophilic bacteria from Dead Sea mud for potential antimicrobial applications. Screening for antagonistic bacteria was conducted by bacterial isolation from Dead Sea mud samples and an agar plate antagonistic assay. The potential antagonistic isolates were subjected to biochemical characterization and identification by 16S-rRNA sequencing. Among the collected isolates, four showed potential antagonistic activity against Bacillus subtilis 6633 and Escherichia coli 8739. The most active isolate (24-DSM) was tested for antagonistic activity and minimal inhibitory concentration against different gram-positive and gram-negative bacterial strains after cultivation in media of different salt concentrations. The results of 16S-rRNA analysis revealed that 24-DSM is very closely related to Bacillus persicus strain B48, which was isolated from a hypersaline lake in Iran. Therefore, the isolate 24-DSM is assigned as a new strain of B. persicus isolated from Dead Sea mud. B. persicus 24-DSM showed higher antimicrobial activity, when cultivated in saline medium, against all tested bacterial strains, where the most sensitive bacterial strain was Corynebacterium diphtheriae 51696.
Size-dependent control of colloid transport via solute gradients in dead-end channels
Shin, Sangwoo; Um, Eujin; Sabass, Benedikt; Ault, Jesse T.; Rahimi, Mohammad; Warren, Patrick B.; Stone, Howard A.
2016-01-01
Transport of colloids in dead-end channels is involved in widespread applications including drug delivery and underground oil and gas recovery. In such geometries, Brownian motion may be considered the sole mechanism that enables transport of colloidal particles into or out of the channels, but it is, unfortunately, an extremely inefficient transport mechanism for microscale particles. Here, we explore the possibility of diffusiophoresis as a means to control colloid transport in dead-end channels by introducing a solute gradient. We demonstrate that the transport of colloidal particles into dead-end channels can be either enhanced or completely prevented via diffusiophoresis. In addition, we show that size-dependent diffusiophoretic transport of particles can be achieved by considering a finite Debye layer thickness effect, which is commonly ignored. A combination of diffusiophoresis and Brownian motion leads to a strong size-dependent focusing effect such that the larger particles tend to concentrate more and reside deeper in the channel. Our findings have implications for all manner of controlled-release processes, especially for site-specific delivery systems where localized targeting of particles with minimal dispersion to the nontarget area is essential. PMID:26715753
Removal of the Magnetic Dead Layer by Geometric Design
Guo, Er-jia; Roldan, Manuel; Charlton, Timothy R.; ...
2018-05-28
The proximity effect is used to engineer interface effects such as magnetoelectric coupling, exchange bias, and emergent interfacial magnetism. However, the presence of a magnetic “dead layer” adversely affects the functionality of a heterostructure. Here, it is shown that by utilizing (111) polar planes, the magnetization of a manganite ultrathin layer can be maintained throughout its thickness. Combining structural characterization, magnetometry measurements, and magnetization depth profiling with polarized neutron reflectometry, it is found that the magnetic dead layer is absent in the (111)-oriented manganite layers but occurs in films with other orientations. Quantitative analysis of local structural and elemental spatial evolution using scanning transmission electron microscopy and electron energy loss spectroscopy reveals atomically sharp interfaces with minimal chemical intermixing in the (111)-oriented superlattices. The polar discontinuity across the (111) interfaces is suggested to induce charge redistribution within the SrTiO3 layers, which promotes ferromagnetism throughout the (111)-oriented ultrathin manganite layers. The approach of eliminating problematic magnetic dead layers by changing the crystallographic orientation suggests a conceptually useful recipe for engineering the intriguing physical properties of oxide interfaces, especially in low dimensionality.
Hogg, Edward H (Ted); Michaelian, Michael
2015-05-01
Increases in mortality of trembling aspen (Populus tremuloides Michx.) have been recorded across large areas of western North America following recent periods of exceptionally severe drought. The resultant increase in standing, dead tree biomass represents a significant potential source of carbon emissions to the atmosphere, but the timing of emissions is partially driven by dead-wood dynamics which include the fall down and breakage of dead aspen stems. The rate at which dead trees fall to the ground also strongly influences the period over which forest dieback episodes can be detected by aerial surveys or satellite remote sensing observations. Over a 12-year period (2000-2012), we monitored the annual status of 1010 aspen trees that died during and following a severe regional drought within 25 study areas across west-central Canada. Observations of stem fall down and breakage (snapping) were used to estimate woody biomass transfer from standing to downed dead wood as a function of years since tree death. For the region as a whole, we estimated that >80% of standing dead aspen biomass had fallen after 10 years. Overall, the rate of fall down was minimal during the year following stem death, but thereafter fall rates followed a negative exponential equation with k = 0.20 per year. However, there was high between-site variation in the rate of fall down (k = 0.08-0.37 per year). The analysis showed that fall down rates were positively correlated with stand age, site windiness, and the incidence of decay fungi (Phellinus tremulae (Bond.) Bond. and Boris.) and wood-boring insects. These factors are thus likely to influence the rate of carbon emissions from dead trees following periods of climate-related forest die-off episodes. © 2014 Her Majesty the Queen in Right of Canada Global Change Biology © 2014 John Wiley & Sons Ltd Reproduced with the permission of the Minister of Natural Resources Canada.
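The fall-down dynamics above reduce to a simple survival model. A minimal sketch (Python), assuming the reported initial year of negligible fall followed by negative exponential decay of the standing pool at rate k per year; the function and lag handling are illustrative, not code from the study:

```python
import math

def fraction_fallen(years_since_death, k=0.20, lag=1.0):
    """Fraction of dead aspen biomass fallen t years after stem death.

    Assumes minimal fall during an initial lag year, then a negative
    exponential decline of the standing pool at rate k per year.
    """
    t = max(0.0, years_since_death - lag)
    return 1.0 - math.exp(-k * t)

# Regional mean rate k = 0.20/yr gives >80% down after 10 years,
# matching the reported figure.
print(round(fraction_fallen(10), 3))  # 0.835
# The between-site range k = 0.08-0.37/yr spans a wide spread of outcomes.
print(round(fraction_fallen(10, k=0.08), 3), round(fraction_fallen(10, k=0.37), 3))
```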
2015-06-01
embassy bombings in Kenya and Tanzania that killed 225 people. An Islamist spokesman claimed that many nomadic tribesmen, including children, were...remained in the single digits. In September of 2011, Press TV, which is an Iranian news organization, claimed that there had been over 80 drone strikes
When Schools Become Dead Zones of the Imagination: A Critical Pedagogy Manifesto
ERIC Educational Resources Information Center
Giroux, Henry A.
2016-01-01
In this article Henry Giroux discusses the corporate school reform movement and its detrimental impact on the public school system, such as the closure of public schools in cities like Philadelphia, Chicago, and New York to make way for charter schools. Giroux argues that corporate school reform is not simply obsessed with measurements that degrade…
Lessons from aviation - the role of checklists in minimally invasive cardiac surgery.
Hussain, S; Adams, C; Cleland, A; Jones, P M; Walsh, G; Kiaii, B
2016-01-01
We describe an adverse event during minimally invasive cardiac surgery that resulted in a multi-disciplinary review of intra-operative errors and the creation of a procedural checklist. This checklist aims to prevent errors of omission and communication failures that result in increased morbidity and mortality. We discuss the application of the aviation-led "threats and errors model" to medical practice and the role of checklists and other strategies aimed at reducing medical errors. © The Author(s) 2015.
Attention in the predictive mind.
Ransom, Madeleine; Fazelpour, Sina; Mole, Christopher
2017-01-01
It has recently become popular to suggest that cognition can be explained as a process of Bayesian prediction error minimization. Some advocates of this view propose that attention should be understood as the optimization of expected precisions in the prediction-error signal (Clark, 2013, 2016; Feldman & Friston, 2010; Hohwy, 2012, 2013). This proposal successfully accounts for several attention-related phenomena. We claim that it cannot account for all of them, since there are certain forms of voluntary attention that it cannot accommodate. We therefore suggest that, although the theory of Bayesian prediction error minimization introduces some powerful tools for the explanation of mental phenomena, its advocates have been wrong to claim that Bayesian prediction error minimization is 'all the brain ever does'. Copyright © 2016 Elsevier Inc. All rights reserved.
Characteristics of the Central Costa Rican Seismogenic Zone Determined from Microseismicity
NASA Astrophysics Data System (ADS)
DeShon, H. R.; Schwartz, S. Y.; Bilek, S. L.; Dorman, L. M.; Protti, M.; Gonzalez, V.
2001-12-01
Large or great subduction zone thrust earthquakes commonly nucleate within the seismogenic zone, a region of unstable slip on or near the converging plate interface. A better understanding of the mechanical, thermal and hydrothermal processes controlling seismic behavior in these regions requires accurate earthquake locations. Using arrival time data from an onland and offshore local seismic array and advanced 3D absolute and relative earthquake location techniques, we locate interplate seismic activity northwest of the Osa Peninsula, Costa Rica. We present high resolution locations of ~600 aftershocks of the 8/20/1999 Mw=6.9 underthrusting earthquake recorded by our local network between September and December 1999. We have developed a 3D velocity model based on published refraction lines and located events within a subducting slab geometry using QUAKE3D, a finite-differences based grid-searching algorithm (Nelson & Vidale, 1990). These absolute locations are input into HYPODD, a location program that uses P and S wave arrival time differences from nearby events and solves for the best relative locations (Waldhauser & Ellsworth, 2000). The pattern of relative earthquake locations is tied to an absolute reference using the absolute positions of the best-located earthquakes in the entire population. By using these programs in parallel, we minimize location errors, retain the aftershock pattern and provide the best absolute locations within a complex subduction geometry. We use the resulting seismicity pattern to determine characteristics of the seismogenic zone including geometry and up- and down-dip limits. These are compared with thermal models of the Middle America subduction zone, structures of the upper and lower plates, and characteristics of the Nankai seismogenic zone.
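The absolute-location step described above is, at heart, a grid search that minimizes travel-time misfit. A toy 2-D sketch under strong simplifications (uniform 6 km/s half-space, synthetic arrivals, origin time absorbed by demeaning residuals), standing in for QUAKE3D's finite-difference travel times through the 3-D model:

```python
import itertools, math

# Station positions (km) and synthetic P arrivals from a known epicenter.
stations = [(0.0, 0.0), (40.0, 0.0), (0.0, 40.0), (40.0, 40.0)]
v_p = 6.0                      # assumed uniform velocity, km/s
true_src = (25.0, 10.0)
arrivals = [math.hypot(sx - true_src[0], sy - true_src[1]) / v_p
            for sx, sy in stations]

def rms_residual(x, y):
    """RMS of demeaned travel-time residuals; demeaning absorbs the
    unknown origin time as a free constant shift."""
    pred = [math.hypot(sx - x, sy - y) / v_p for sx, sy in stations]
    res = [a - p for a, p in zip(arrivals, pred)]
    mean = sum(res) / len(res)
    return math.sqrt(sum((r - mean) ** 2 for r in res) / len(res))

# Exhaustive search on a 1 km grid: the essence of a grid-searching locator.
best = min(itertools.product(range(41), repeat=2),
           key=lambda p: rms_residual(*p))
print(best)  # (25, 10): epicenter recovered within the grid spacing
```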
Pini, Giovanni; Brutschy, Arne; Scheidler, Alexander; Dorigo, Marco; Birattari, Mauro
2014-01-01
We study task partitioning in the context of swarm robotics. Task partitioning is the decomposition of a task into subtasks that can be tackled by different workers. We focus on the case in which a task is partitioned into a sequence of subtasks that must be executed in a certain order. This implies that the subtasks must interface with each other, and that the output of a subtask is used as input for the subtask that follows. A distinction can be made between task partitioning with direct transfer and with indirect transfer. We focus our study on the first case: The output of a subtask is directly transferred from an individual working on that subtask to an individual working on the subtask that follows. As a test bed for our study, we use a swarm of robots performing foraging. The robots have to harvest objects from a source, situated in an unknown location, and transport them to a home location. When a robot finds the source, it memorizes its position and uses dead reckoning to return there. Dead reckoning is appealing in robotics, since it is a cheap localization method and it does not require any additional external infrastructure. However, dead reckoning leads to errors that grow in time if not corrected periodically. We compare a foraging strategy that does not make use of task partitioning with one that does. We show that cooperation through task partitioning can be used to limit the effect of dead reckoning errors. This results in improved capability of locating the object source and in increased performance of the swarm. We use the implemented system as a test bed to study benefits and costs of task partitioning with direct transfer. We implement the system with real robots, demonstrating the feasibility of our approach in a foraging scenario.
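The growth of dead-reckoning error with distance travelled, which task partitioning mitigates by shortening each leg between hand-off points, can be illustrated with a toy odometry model; Gaussian heading noise per step and all parameter values are illustrative assumptions, not those of the robot experiments:

```python
import math, random

def dead_reckon(steps, step_len=0.1, heading_sigma=0.02, seed=0):
    """Integrate noisy odometry along a nominally straight path and
    return the final position error relative to the noise-free pose."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    for _ in range(steps):
        theta += rng.gauss(0.0, heading_sigma)   # heading drift accumulates
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
    return math.hypot(x - steps * step_len, y)

def mean_error(steps, trials=20):
    return sum(dead_reckon(steps, seed=s) for s in range(trials)) / trials

# Uncorrected drift compounds: a long leg ends far less accurately than
# a short one, motivating partitioned routes with periodic corrections.
short_leg, long_leg = mean_error(100), mean_error(2000)
print(short_leg < long_leg)  # True
```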
Cascade control of superheated steam temperature with neuro-PID controller.
Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang
2012-11-01
In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing an error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing a receding horizon window technique. The measurable disturbances in superheated processes are input to the neuro-PID controller in addition to the sequences of tracking error in the outer-loop control system; hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergence condition of the neural networks is analyzed. The implementation procedures of the proposed cascade control approach are summarized. Compared with a neuro-PID controller minimizing a squared error criterion, the proposed neuro-PID controller minimizing the error entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
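The error-entropy criterion is commonly realized as Renyi's quadratic entropy estimated with a Gaussian Parzen (kernel) window over the latest window of tracking errors. A minimal sketch under that assumption; the paper's exact recursive update and kernel width are not reproduced here:

```python
import math

def renyi_quadratic_entropy(errors, sigma=0.5):
    """H2 = -log V, where the information potential V is the average
    Gaussian kernel over all pairs of error samples."""
    n = len(errors)
    two_var = 2.0 * sigma * sigma            # pairwise kernel variance
    norm = 1.0 / math.sqrt(2.0 * math.pi * two_var)
    v = sum(norm * math.exp(-(ei - ej) ** 2 / (2.0 * two_var))
            for ei in errors for ej in errors) / (n * n)
    return -math.log(v)

# Receding-horizon use: re-evaluate on the newest error window each step.
# Tightly clustered errors have lower entropy than scattered ones, so
# minimizing H2 drives the tracking-error distribution to concentrate.
tight = [0.01, -0.02, 0.00, 0.015, -0.01]
loose = [0.8, -1.1, 0.3, -0.6, 1.0]
print(renyi_quadratic_entropy(tight) < renyi_quadratic_entropy(loose))  # True
```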
Shallow lithological structure across the Dead Sea Transform derived from geophysical experiments
Stankiewicz, J.; Munoz, G.; Ritter, O.; Bedrosian, P.A.; Ryberg, T.; Weckmann, U.; Weber, M.
2011-01-01
In the framework of the DEad SEa Rift Transect (DESERT) project, a 150 km magnetotelluric profile consisting of 154 sites was carried out across the Dead Sea Transform. The resistivity model presented shows conductive structures in the western section of the study area terminating abruptly at the Arava Fault. For a more detailed analysis we performed a joint interpretation of the resistivity model with a P wave velocity model from a partially coincident seismic experiment. The technique used is a statistical correlation of resistivity and velocity values in parameter space. Regions of high probability of a coexisting pair of values for the two parameters are mapped back into the spatial domain, illustrating the geographical location of lithological classes. In this study, four regions of enhanced probability have been identified and remapped as four lithological classes. This technique confirms that the Arava Fault marks the boundary of a highly conductive lithological class down to a depth of ~3 km. That the fault acts as an impermeable barrier to fluid flow is unusual for a large fault zone, which often exhibits a damage zone characterized by high conductivity and low seismic velocity. At greater depths it is possible to resolve the Precambrian basement into two classes characterized by vastly different resistivity values but similar seismic velocities. The boundary between these classes is approximately coincident with the Al Quweira Fault, with higher resistivities observed east of the fault. This is interpreted as evidence that deformation along the DST originally took place at the Al Quweira Fault, before shifting to the Arava Fault.
High efficiency x-ray nanofocusing by the blazed stacking of binary zone plates
NASA Astrophysics Data System (ADS)
Mohacsi, I.; Karvinen, P.; Vartiainen, I.; Diaz, A.; Somogyi, A.; Kewish, C. M.; Mercere, P.; David, C.
2013-09-01
The focusing efficiency of binary Fresnel zone plate lenses is fundamentally limited, and higher efficiency requires a multi-step lens profile. To overcome the manufacturing problems of high-resolution, high-efficiency multi-step zone plates, we investigate the concept of stacking two different binary zone plates in each other's optical near-field. We use a coarse zone plate with π phase shift and a double-density fine zone plate with π/2 phase shift to produce an effective 4-step profile. Using a compact experimental setup with piezo actuators for alignment, we demonstrated 47.1% focusing efficiency at 6.5 keV using a pair of zone plates with 500 μm diameter and 200 nm smallest zone width. Furthermore, we present a spatially resolved characterization method using multiple diffraction orders to identify manufacturing errors, alignment errors, and pattern distortions and their effect on diffraction efficiency.
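The gap between the binary limit and the stacked 4-step result can be put in numbers with the standard thin-grating estimate for an ideal N-level phase profile, eta = (sin(pi/N)/(pi/N))^2, a textbook formula assumed here rather than quoted from the abstract. The measured 47.1% then sits between the binary cap of about 40.5% and the ideal 4-level limit of about 81%, the shortfall being attributable to alignment and fabrication errors:

```python
import math

def ideal_efficiency(levels):
    """First-order efficiency of an ideal N-level phase profile,
    thin-grating estimate: (sin(pi/N) / (pi/N)) ** 2."""
    x = math.pi / levels
    return (math.sin(x) / x) ** 2

print(round(ideal_efficiency(2) * 100, 1))  # 40.5 (binary limit)
print(round(ideal_efficiency(4) * 100, 1))  # 81.1 (ideal 4-level limit)
```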
Binarization of apodizers by adapted one-dimensional error diffusion method
NASA Astrophysics Data System (ADS)
Kowalczyk, Marek; Cichocki, Tomasz; Martinez-Corral, Manuel; Andres, Pedro
1994-10-01
Two novel algorithms for the binarization of continuous, rotationally symmetric, real positive pupil filters are presented. Both algorithms are based on the 1-D error diffusion concept. The original gray-tone apodizer is replaced by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that pupils with equal-width zones give a Fraunhofer diffraction pattern more similar to that of the original continuous-tone pupil than those with equal-area zones, assuming in both cases the same resolution limit of the printing device.
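The core 1-D error diffusion step is easy to sketch: each annular zone is quantized to transparent or opaque, and the quantization residual is carried to the next zone so that the local average transmittance tracks the gray-tone profile. A minimal sketch assuming equal-width zones and a Gaussian apodizer; the sampling and threshold are illustrative, not the paper's exact parameters:

```python
import math

def binarize_radial(profile):
    """1-D error diffusion along the radius: quantize each zone to 0/1
    and propagate the full quantization error to the next zone."""
    out, err = [], 0.0
    for value in profile:
        corrected = value + err
        bit = 1 if corrected >= 0.5 else 0
        err = corrected - bit
        out.append(bit)
    return out

# Gray-tone Gaussian apodizer sampled on 20 equal-width annular zones.
gray = [math.exp(-4.0 * (i / 19) ** 2) for i in range(20)]
binary = binarize_radial(gray)
print(binary)
# The binary mask preserves the mean transmittance of the original.
print(abs(sum(binary) / 20 - sum(gray) / 20) < 0.05)  # True
```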
Ivanov, Vadim A
2016-02-01
The reduction of instrumental dead space is a recognized approach to preventing ventilation-induced lung injury in premature infants. However, there are no published data regarding the effectiveness of instrumental dead-space reduction in endotracheal tube (ETT) connectors. We tested the impact of the Y-piece/ETT connector pairs with reduced instrumental dead space on CO2 elimination in a model of the premature neonate lung. The standard ETT connector was compared with a low-dead-space ETT connector and with a standard connector equipped with an insert. We compared the setups by measuring the CO2 elimination rate in an artificial lung ventilated via the connectors. The lung was connected to a ventilator via a standard circuit, a 2.5-mm ETT, and one of the connectors under investigation. The ventilator was run in volume-controlled continuous mandatory ventilation mode. The low-dead-space ETT connector/Y-piece and insert-equipped standard connector/Y-piece pairs had instrumental dead space reduced by 36 and 67%, respectively. With set tidal volumes (VT) of 2.5, 5, and 10 mL, in comparison with the standard ETT connector, the low-dead-space connector reduced CO2 elimination time by 4.5% (P < .05), 4.4% (P < .01), and 7.1% (not significant), respectively. The insert-equipped standard connector reduced CO2 elimination time by 13.5, 25.1, and 16.1% (all P < .01). The low-dead-space connector increased inspiratory resistance by 17.8% (P < .01), 9.6% (P < .05), and 5.0% (not significant); the insert-equipped standard connector increased inspiratory resistance by 9.1, 8.4, and 5.9% (all not significant). The low-dead-space connector decreased expiratory resistance by 6.8% (P < .01) and 1.8% (not significant) and increased it by 1.4% (not significant); the insert-equipped standard connector decreased expiratory resistance by 1.5 and 1% and increased it by 1% (all not significant). 
The low-dead-space connector increased work of breathing by 4.7% (P < .01), 3.8% (P < .01), and 2.5% (not significant); the insert-equipped standard connector increased it by 0.8% (not significant), 2.5% (P < .01), and 2.8% (P < .01). Both methods of instrumental dead-space reduction led to improvements in artificial lung ventilation. Negative effects on resistance and work of breathing appeared minimal. Further testing in vivo should be performed to confirm the lung model results and, if successful, translated into clinical practice. Copyright © 2016 by Daedalus Enterprises.
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
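The dispersion measures used above come from standard circular statistics. A minimal sketch, assuming the usual axial-data convention of doubling angles before averaging (a trace azimuth theta and theta + 180° describe the same line); the paper's segment-length weighting is not reproduced:

```python
import math

def circular_stats(azimuths_deg):
    """Mean direction and circular standard deviation of axial data.
    Angles are doubled, averaged as unit vectors, then halved back."""
    doubled = [math.radians(2.0 * a) for a in azimuths_deg]
    c = sum(math.cos(t) for t in doubled) / len(doubled)
    s = sum(math.sin(t) for t in doubled) / len(doubled)
    r = math.hypot(c, s)                          # mean resultant length
    mean = (math.degrees(math.atan2(s, c)) / 2.0) % 180.0
    std = math.degrees(math.sqrt(-2.0 * math.log(r))) / 2.0
    return mean, std

# A straight, mature trace clusters tightly; a disordered trace does not.
straight = [44.0, 45.0, 46.0, 45.0, 44.0, 46.0]
wiggly = [20.0, 70.0, 45.0, 10.0, 80.0, 40.0]
print(circular_stats(straight)[1] < circular_stats(wiggly)[1])  # True
```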
NASA Astrophysics Data System (ADS)
Kinsey, Adam M.; Diederich, Chris J.; Nau, William H.; Ross, Anthony B.; Butts Pauly, Kim; Rieke, Viola; Sommer, Graham
2006-05-01
Multi-sectored ultrasound heating applicators with dynamic angular and longitudinal control of heating profiles are being investigated for the thermal treatment of tumors in sites such as prostate, uterus, and brain. Multi-sectored tubular ultrasound transducers with independent sector power control were incorporated into interstitial and transurethral applicators and provided dynamic angular control of a heating pattern without requiring device manipulation during treatment. Acoustic beam measurements of each applicator type demonstrated a 35-40° acoustic dead zone between each independent sector, with negligible mechanical or electrical coupling. Despite the acoustic dead zone between sectors, simulations and experiments under MR temperature (MRT) monitoring showed that the variance from the maximum lesion radius (scalloping) with all elements activated on a transducer was minimal and did not affect conformal heating of a target area. A biothermal model with a multi-point controller was used to adjust the applied power and treatment time of individual transducer segments as the tissue temperature changed in simulations of thermal lesions with both interstitial and transurethral applicators. Transurethral ultrasound applicators for benign prostatic hyperplasia (BPH) treatment with either three or four sectors conformed a thermal dose to a simulated target area in the angular and radial dimensions. The simulated treatment was controlled to a maximum temperature of 85°C and had a maximum duration of 5 min; power was turned off when the 52°C temperature contour reached a predetermined control point for each sector in the tissue. Experiments conducted with multi-sectored applicators under MRT monitoring showed that thermal ablation and hyperthermia treatments had little or no border `scalloping', conformed to a pretreatment target area, and correlated very well with the simulated thermal lesions. 
The radial penetration of the heat treatments in tissue with interstitial (1.5-1.8 mm OD transducer) and transurethral (2.5-4.0 mm OD transducer) applicators was at least 1.5 cm and 2.0 cm, respectively, for a treatment duration of 10 min. Angular control of thermal ablation and hyperthermia therapy often relies upon non-adjustable angular power deposition patterns and/or mechanical manipulation of the heating device. The multi-sectored ultrasound applicators developed in this study provide dynamic control of the angular heating distribution during treatment without device manipulation and maintain previously reported heating penetration and spatial control characteristics of similar ultrasound devices.
NASA Astrophysics Data System (ADS)
Pavan Kumar, G.; Mahesh, P.; Nagar, Mehul; Mahender, E.; Kumar, Virendhar; Mohan, Kapil; Ravi Kumar, M.
2017-05-01
Fluids play a prominent role in the genesis of earthquakes, particularly in intraplate settings. In this study, we present evidence for a highly heterogeneous nature of electrical conductivity in the crust and uppermost mantle beneath the Kachchh rift basin of northwestern India, which is host to large, deadly intraplate earthquakes. We interpret our results of high conductive zones inferred from magnetotelluric and 3-D local earthquake tomography investigations in terms of a fluid reservoir in the upper mantle. The South Wagad Fault (SWF) imaged as a near-vertical north dipping low resistivity zone traversing the entire crust and an elongated south dipping conductor demarcating the North Wagad Fault (NWF) serve as conduits for fluid flow from the reservoir to the middle to lower crustal depths. Importantly, the epicentral zone of the 2001 main shock is characterized as a fluid saturated zone at the rooting of NWF onto the SWF.
NASA Astrophysics Data System (ADS)
Jordan, T. A.; Ferraccioli, F.; Ross, N.; Siegert, M. J.; Corr, H.; Leat, P. T.; Bingham, R. G.; Rippin, D. M.; le Brocq, A.
2012-04-01
The >500 km wide Weddell Sea Rift was a major focus for Jurassic extension and magmatism during the early stages of Gondwana break-up, and underlies the Weddell Sea Embayment, which separates East Antarctica from a collage of crustal blocks in West Antarctica. Here we present new aeromagnetic data combined with airborne radar and gravity data collected during the 2010-11 field season over the Institute and Moeller ice streams in West Antarctica. Our interpretations identify the major tectonic boundaries between the Weddell Sea Rift, the Ellsworth-Whitmore Mountains block and East Antarctica. Digitally enhanced aeromagnetic data and gravity anomalies indicate the extent of Proterozoic basement, Middle Cambrian rift-related volcanic rocks, Jurassic granites, and post-Jurassic sedimentary infill. Two new joint magnetic and gravity models were constructed, constrained by 2D and 3D magnetic depth-to-source estimates, to assess the extent of Proterozoic basement and the thickness of major Jurassic intrusions and post-Jurassic sedimentary infill. The Jurassic granites are modelled as 5-8 km thick and emplaced at the transition between the thicker crust of the Ellsworth-Whitmore Mountains block and the thinner crust of the Weddell Sea Rift, and within the Pagano Fault Zone, a newly identified ~75 km wide left-lateral strike-slip fault system that we interpret as a major tectonic boundary between East and West Antarctica. We also suggest a possible analogy between the Pagano Fault Zone and the Dead Sea transform. In this scenario the Jurassic Pagano Fault Zone is the kinematic link between extension in the Weddell Sea Rift and convergence across the Pacific margin of West Antarctica, just as the Dead Sea transform links Red Sea extension to compression within the Zagros Mountains.
Samuel, Nir; Hirschhorn, Gil; Chen, Jacob; Steiner, Ivan P; Shavit, Itai
2013-03-01
In Israel, the Airborne Rescue and Evacuation Unit (AREU) provides prehospital trauma care in times of peace and during times of armed conflict. In peacetime, the AREU transports children who were involved in motor vehicle collisions (MVC) and those who fall off cliffs (FOC). During armed conflict, the AREU evacuates children who sustain firearm injuries (FI) from the fighting zones. Our objective was to report on the prehospital injury severity of children evacuated by the AREU from combat zones. A retrospective comparative analysis was conducted on indicators of prehospital injury severity for patients who had MVC, FOC, and FI. It included the National Advisory Committee for Aeronautics (NACA) score, the Glasgow Coma Scale (GCS) score on scene, and the number of procedures performed by emergency medical personnel and by the AREU air-crew. From January 2003 to December 2009, 36 MVC, 25 FOC, and 17 FI children were transported from the scene by the AREU. Five patients were dead at the scene: 1 (2.8%) MVC, 1 (4%) FOC, and 3 (17.6%) FI. Two (11.7%) FI patients were dead on arrival at the hospital. MVC, FOC, and FI patients had mean (±SD) NACA scores of 4.4 ± 1.2, 3.6 ± 1.2, and 5 ± 0.7, respectively. Mean (±SD) GCS scores were 8.9 ± 5.6, 13.6 ± 4, and 6.9 ± 5.3, respectively. Life support interventions were required by 29 (80.6%) MVC, 3 (12%) FOC, and 15 (88.2%) FI patients. In the prehospital setting, children evacuated from combat zones were more severely injured than children who were transported from the scene during peacetime. Copyright © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wyner, Yael
2010-01-01
This inquiry-based activity provides a real-world example that connects to students' everyday seafood choices. In fact, many students went home and insisted to their parents that they should only buy "green" seafood choices. It was also an effective activity because students were able to use what they learned about ocean ecosystems and…
When Schools Become Dead Zones of the Imagination: A Critical Pedagogy Manifesto
ERIC Educational Resources Information Center
Giroux, Henry A.
2014-01-01
This article examines the so-called new school reform movement led by a host of right-wing ideologues, billionaires, and foundations. It argues that instead of being reformers, the latter are part of a counter-revolution in American education to dismantle public schools not because they are failing but because they are public and make a claim,…
Jack Rabbit Pretest Data For TATB Based IHE Model Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest series consisted of 5 focused hydrodynamic experiments: 2021E PT3, PT4, PT5, PT6, and PT7. They were fired in March and April of 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory, Livermore, California. These experiments measured dead-zone formation and impulse gradients created during the detonation of TATB based insensitive high explosive. This document contains reference data tables for all 5 experiments. These data tables include: (1) measured laser velocimetry of the experiment diagnostic plate, (2) computed diagnostic plate profile contours through velocity integration, and (3) computed center-axis pressures through velocity differentiation. All times are in microseconds, referenced from detonator circuit current start. All dimensions are in millimeters. Schematic axi-symmetric cross sections are shown for each experiment. These schematics detail the materials used and the dimensions of the experiment and component parts. This should allow anyone to evaluate their TATB based insensitive high explosive detonation model against experiment. These data are particularly relevant in examining reactive flow detonation model predictions in computational simulation of dead-zone formation and the resulting impulse gradients produced by detonating TATB based explosive.
Synimpact-postimpact transition inside Chesapeake Bay crater
Poag, Claude (Wylie)
2002-01-01
The transition from synimpact to postimpact sedimentation inside Chesapeake Bay impact crater began with accumulation of fallout debris, the final synimpact deposit. Evidence of a synimpact fallout layer at this site comes from the presence of unusual, millimeter-scale, pyrite microstructures at the top of the Exmore crater-fill breccia. The porous geometry of the pyrite microstructures indicates that they originally were part of a more extensive pyrite lattice that encompassed a layer of millimeter-scale glass microspherules (fallout melt particles produced by the bolide impact). Above this microspherule layer is the initial postimpact deposit, a laminated clay-silt-sand unit, 19 cm thick. This laminated unit is a dead zone, which contains abundant stratigraphically mixed and diagenetically altered or impact-altered microfossils (foraminifera, calcareous nannofossils, dinoflagellates, ostracodes), but no evidence of indigenous biota. By extrapolation of sediment-accumulation rates, I estimate that conditions unfavorable to microbiota persisted for as little as <1 k.y. to 10 k.y. after the bolide impact. Subsequently, an abrupt improvement of the late Eocene paleoenvironment allowed species-rich assemblages of foraminifera, ostracodes, dinoflagellates, radiolarians, and calcareous nannoplankton to quickly reoccupy the crater basin, as documented in the first sample of the Chickahominy Formation above the dead zone.
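The duration estimate in the abstract comes from dividing the dead-zone thickness by an assumed sediment-accumulation rate; a minimal sketch (the two rates below are hypothetical illustrations chosen to bracket the reported 1 to 10 k.y. range, not the paper's values):

```python
def duration_ky(thickness_cm, rate_cm_per_ky):
    """Time represented by a sediment layer at a constant accumulation rate."""
    return thickness_cm / rate_cm_per_ky

# 19 cm laminated dead zone under two assumed accumulation rates
fast = duration_ky(19.0, 19.0)   # 1 k.y. at 19 cm/k.y.
slow = duration_ky(19.0, 1.9)    # 10 k.y. at 1.9 cm/k.y.
```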
Hohwy, Jakob
2017-01-01
I discuss top-down modulation of perception in terms of a variable Bayesian learning rate, revealing a wide range of prior hierarchical expectations that can modulate perception. I then switch to the prediction error minimization framework and seek to conceive cognitive penetration specifically as prediction error minimization deviations from a variable Bayesian learning rate. This approach retains cognitive penetration as a category somewhat distinct from other top-down effects, and carves a reasonable route between penetrability and impenetrability. It prevents rampant, relativistic cognitive penetration of perception and yet is consistent with the continuity of cognition and perception. Copyright © 2016 Elsevier Inc. All rights reserved.
Minimal change disease in a patient with myasthenia gravis: A case report.
Tsai, Jun-Li; Tsai, Shang-Feng
2016-09-01
Myasthenia gravis superimposed with proteinuria is a very rare disorder, with only 39 cases reported so far. Of these, the most commonly associated disorder is minimal change disease. Myasthenia gravis and minimal change disease are both related to T-lymphocyte dysfunction, and hence the 2 disorders may be connected. Here we report the first case of a patient diagnosed with myasthenia gravis and minimal change disease concurrently, in the absence of thymoma or thymic hyperplasia. Treatment for myasthenia gravis also lowered the proteinuria of minimal change disease. The patient initially achieved good control of both myasthenia gravis and minimal change disease; however, he developed pneumonia-related septic shock and ultimately died. Minimal change disease is generally considered to occur subsequent to the onset of myasthenia gravis, with a causal association. After an extensive literature review, we noted that only 47.8% of minimal change disease cases had occurred after the onset of myasthenia gravis. Minimal change disease mostly occurs in children; if it is diagnosed in adults, clinicians should search for a potential cause such as myasthenia gravis and other associated thymic disorders.
Uttman, L; Bitzén, U; De Robertis, E; Enoksson, J; Johansson, L; Jonson, B
2012-10-01
Low tidal volume (V(T)), PEEP, and low plateau pressure (P(PLAT)) are lung protective during acute respiratory distress syndrome (ARDS). This study tested the hypothesis that the aspiration of dead space (ASPIDS) together with computer simulation can help maintain gas exchange at these settings, thus promoting protection of the lungs. ARDS was induced in pigs using surfactant perturbation plus an injurious ventilation strategy. One group then underwent 24 h protective ventilation, while control groups were ventilated using a conventional ventilation strategy at either high or low pressure. Pressure-volume curves (P(el)/V), blood gases, and haemodynamics were studied at 0, 4, 8, 16, and 24 h after the induction of ARDS and lung histology was evaluated. The P(el)/V curves showed improvements in the protective strategy group and deterioration in both control groups. In the protective group, when respiratory rate (RR) was ≈ 60 bpm, better oxygenation and reduced shunt were found. Histological damage was significantly more severe in the high-pressure group. There were no differences in venous oxygen saturation and pulmonary vascular resistance between the groups. The protective ventilation strategy of adequate pH or PaCO2 with minimal V(T), and high/safe P(PLAT) resulting in high PEEP was based on the avoidance of known lung-damaging phenomena. The approach is based upon the optimization of V(T), RR, PEEP, I/E, and dead space. This study does not lend itself to conclusions about the independent role of each of these features. However, dead space reduction is fundamental for achieving minimal V(T) at high RR. Classical physiology is applicable at high RR. Computer simulation optimizes ventilation and limiting of dead space using ASPIDS. Inspiratory P(el)/V curves recorded from PEEP or, even better, expiratory P(el)/V curves allow monitoring in ARDS.
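The relation the abstract relies on, that reducing dead space is fundamental for achieving minimal V(T) at high RR, follows from the classical alveolar-ventilation equation VA = RR x (VT - VD). A minimal sketch with illustrative, hypothetical numbers (not values from the study):

```python
def minute_alveolar_ventilation(rr, vt_ml, vd_ml):
    """Classical alveolar ventilation (mL/min): RR x (VT - VD)."""
    return rr * (vt_ml - vd_ml)

def min_tidal_volume(target_va_ml_min, rr, vd_ml):
    """Smallest VT (mL) that still delivers the target alveolar ventilation."""
    return target_va_ml_min / rr + vd_ml

# Halving dead space (e.g., by dead-space aspiration) lowers the tidal
# volume needed at RR = 60 bpm for the same alveolar ventilation.
vt_high_vd = min_tidal_volume(4000.0, 60, 120.0)  # larger VT required
vt_low_vd = min_tidal_volume(4000.0, 60, 60.0)    # smaller, more protective VT
```

The same arithmetic shows why high RR alone is not enough: the dead-space term VD is paid on every breath, so only reducing VD lets VT shrink toward lung-protective values.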
NASA Astrophysics Data System (ADS)
Hegazy, Ahmad K.; Kabiel, Hanan F.
2007-05-01
Anastatica hierochuntica L. (Brassicaceae) is a desert monocarpic annual characterized by a topochory/ombrohydrochory type of seed dispersal. The hygrochastic nature of the dry skeletons (dead individuals) allows rain events to control seed dispersal: the amount of dispersed seeds is proportional to the intensity of rainfall. When light showers occur, seeds are released and remain in the site. Seeds dispersed in the vicinity of the mother or source plant (the primary type of seed dispersal) result in a clumped pattern and complicated interrelationships among size-classes of the population. Following heavy rainfall, most seeds are released and transported into small patches and shallow depressions which collect runoff water. The dead A. hierochuntica skeletons demonstrate site-dependent size-class structure, spatial pattern, and spatial interrelationships in different microhabitats. Four microhabitat types were sampled: runnels, patches, and simple and compound depressions, in two sites (gravel and sand). Ripley's K-function was used to analyze the spatial pattern in populations of A. hierochuntica skeletons in the study microhabitats. Clumped patterns were observed in nearly all of the study microhabitats. Populations of A. hierochuntica in the sand site were more productive than in the gravel site and usually had more individuals in the larger size-classes. In the compound-depression microhabitat, the degree of clumping decreased from the core zone to the intermediate zone and then shifted into an overdispersed pattern in the outer zone. At the within size-class level, the clumped pattern dominated in small size-classes but shifted into random and overdispersed patterns in the larger size-classes. Aggregation between small and large size-classes was not well defined, but large individuals were found closer to the smaller individuals than to those of their own class.
In relation to the phytomass and the size-class structure, the outer zone of the simple depression and the outer and intermediate zones of the compound depression microhabitats were the most productive sites.
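Ripley's K-function, as used above, compares the observed count of neighbours within distance r to the pi*r^2 expected under complete spatial randomness (CSR). A minimal sketch without the edge correction that a real analysis would apply:

```python
import math

def ripley_k(points, r, area):
    """Naive Ripley's K estimate: (area / (n*(n-1))) * # ordered pairs within r.
    No edge correction; real analyses (e.g., Ripley's isotropic correction)
    adjust for pairs near the study-region boundary."""
    n = len(points)
    pairs = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                pairs += 1
    return area * pairs / (n * (n - 1))

def csr_expectation(r):
    """Expected K(r) under complete spatial randomness."""
    return math.pi * r * r
```

K(r) above the CSR curve indicates clumping; below it, overdispersion, which is how the zone-by-zone pattern shifts reported above are read off.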
Molecular Insights into Plant-Microbial Processes and Carbon Storage in Mangrove Ecosystems
NASA Astrophysics Data System (ADS)
Romero, I. C.; Ziegler, S. E.; Fogel, M.; Jacobson, M.; Fuhrman, J. A.; Capone, D. G.
2009-12-01
Mangrove forests, in tropical and subtropical coastal zones, are among the most productive ecosystems, representing a significant global carbon sink. We report new molecular insights into the functional relationship among microorganisms, mangrove trees and sediment geochemistry. The interactions among these elements were studied in peat-based mangrove sediments (Twin Cays, Belize) subjected to a long-term fertilization experiment with N and P, providing an analog for eutrophication. The composition and δ13C of bacterial PLFA showed that bacteria and mangrove trees had similar nutrient limitation patterns (N in the fringe mangrove zone, P in the interior zone), and that fertilization with N or P can affect bacterial metabolic processes and bacterial carbon uptake (from diverse mangrove sources including leaf litter, live and dead roots). PCR amplified nifH genes showed a high diversity (26% nifH novel clones) and a remarkable spatial and temporal variability in N-fixing microbial populations in the rhizosphere, varying primarily with the abundance of dead roots, PO4-3 and H2S concentrations in natural and fertilized environments. Our results indicate that eutrophication of mangrove ecosystems has the potential to alter microbial organic matter remineralization and carbon release with important implications for the coastal carbon budget. In addition, we will present preliminary data from a new study exploring the modern calibration of carbon and hydrogen isotopes of plant leaf waxes as a proxy recorder of past environmental change in mangrove ecosystems.
Kim, Youngwon; Welk, Gregory J
2017-02-01
Sedentary behaviour (SB) has emerged as a modifiable risk factor, but little is known about measurement errors of SB. The purpose of this study was to determine the validity of the 24-h Physical Activity Recall (24PAR) relative to the SenseWear Armband (SWA) for assessing SB. Each participant (n = 1485) undertook a series of data collection procedures on two randomly selected days: wearing a SWA for a full 24 h, and then completing the telephone-administered 24PAR the following day to recall the past 24 h of activities. Estimates of total sedentary time (TST) were computed without the inclusion of reported or recorded sleep time. Equivalence testing was used to compare estimates of TST. Analyses from equivalence testing showed no significant equivalence of 24PAR for TST (90% CI: 443.0 to 457.6 min·day⁻¹) relative to SWA (equivalence zone: 580.7 to 709.8 min·day⁻¹). Bland-Altman plots indicated that individuals who were extremely or minimally sedentary provided relatively comparable sedentary time between 24PAR and SWA. Overweight/obese and/or older individuals were more likely to underestimate sedentary time than normal-weight and/or younger individuals. Measurement errors of 24PAR varied by the level of sedentary time and demographic indicators. This evidence informs future work to develop measurement error models to correct for errors of self-reports.
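The equivalence test reported above amounts to checking whether the 90% CI of the self-report mean lies wholly inside a zone around the criterion mean. A sketch under stated assumptions: the ±10% zone width and the normal-approximation CI are illustrative choices of ours, not necessarily the study's exact procedure.

```python
import math

def ninety_pct_ci(values):
    """Normal-approximation 90% confidence interval for the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    se = math.sqrt(var / n)
    z = 1.645  # a t-quantile would be used in practice for small n
    return mean - z * se, mean + z * se

def equivalent(estimates, criterion_mean, tolerance=0.10):
    """True only if the whole 90% CI falls inside the +/- tolerance zone."""
    lo, hi = ninety_pct_ci(estimates)
    return (criterion_mean * (1 - tolerance) <= lo
            and hi <= criterion_mean * (1 + tolerance))
```

With the reported numbers, a CI near 443 to 458 min·day⁻¹ falls entirely below a 580.7 to 709.8 zone, so `equivalent` returns False, matching the study's conclusion of no significant equivalence.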
Absolute vs. relative error characterization of electromagnetic tracking accuracy
NASA Astrophysics Data System (ADS)
Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet
2010-02-01
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. 
Based on error thresholds provided by the operator, the spatial distribution of localization errors are clustered and dynamically displayed as separate confidence zones within the operating region of the EM tracker space.
NASA Astrophysics Data System (ADS)
Hamiel, Yariv; Masson, Frederic; Piatibratova, Oksana; Mizrahi, Yaakov
2018-01-01
Detailed analysis of crustal deformation along the southern Arava Valley section of the Dead Sea Fault is presented. Using dense GPS measurements, we obtain the velocities of new near- and far-field campaign stations across the fault. We find that this section is locked, with a locking depth of 19.9 ± 7.7 km and a slip rate of 5.0 ± 0.8 mm/yr. The geodetically determined locking depth is found to be highly consistent with the thickness of the seismogenic zone in this region. Analysis of the instrumental seismic record suggests that only 1% of the total seismic moment accumulated since the last large event, which occurred about 800 years ago, was released by small to moderate earthquakes. Historical and paleo-seismic catalogs of this region, together with instrumental seismic data and calculations of Coulomb stress changes induced by the 1995 Mw 7.2 Nuweiba earthquake, suggest that the southern Arava Valley section of the Dead Sea Fault is in the late stage of the current interseismic period.
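Locking depths and slip rates of the kind quoted above are commonly estimated by fitting the elastic screw-dislocation (arctangent) model of interseismic strain to fault-perpendicular profiles of GPS velocity. The abstract reports only the fitted values, so the model choice here is our assumption; this sketch evaluates the standard forward model at the paper's best-fit parameters:

```python
import math

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity at distance x from a strike-slip fault
    locked down to depth D (screw-dislocation model): v = (s/pi) * atan(x/D)."""
    return (slip_rate_mm_yr / math.pi) * math.atan(x_km / locking_depth_km)

# Best-fit values reported for the southern Arava Valley section
v_at_d = interseismic_velocity(19.9, 5.0, 19.9)   # x = D gives s/4 = 1.25 mm/yr
v_far = interseismic_velocity(1e6, 5.0, 19.9)     # far field approaches s/2
```

The far-field velocity difference across the fault approaches the full slip rate s, while the gradient near the fault constrains the locking depth D, which is why dense near-field stations matter.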
Ultraviolet radiation properties as applied to photoclimatherapy at the Dead Sea.
Kudish, A I; Abels, D; Harari, M
2003-05-01
The Dead Sea basin, the lowest terrestrial point on earth, is recognized as a natural treatment center for patients with various cutaneous and rheumatic diseases. Psoriasis is the major skin disease treated at the Dead Sea with excellent improvement to complete clearance exceeding 85% after 4 weeks of treatment. These results were postulated to be associated with a unique spectrum of ultraviolet radiation present in the Dead Sea area. The UVB and UVA radiation at two sites is measured continuously by identical sets of broad-band Solar Light Co. Inc. meters (Philadelphia, PA). The spectral selectivity within the UVB and UVA spectrum was determined using a narrow-band spectroradiometer, UV-Optronics 742 (Orlando, FL). The optimum exposure time intervals for photoclimatherapy, defined as the minimum ratio of erythema to therapeutic radiation intensities, were also determined using a Solar Light Co. Inc. Microtops II, Ozone Monitor-Sunphotometer. The ultraviolet radiation at the Dead Sea is attenuated relative to Beer Sheva as a result of the increased optical path length and consequent enhanced scattering. The UVB radiation is attenuated to a greater extent than UVA and the shorter erythema UVB spectral range decreased significantly compared with the longer therapeutic UVB wavelengths. It was demonstrated that the relative attenuation within the UVB spectral range is greatest for the shorter erythema rays and less for the longer therapeutic UVB wavelengths, thus producing a greater proportion of the longer therapeutic UVB wavelengths in the ultraviolet spectrum. These measurements can be utilized to minimize the exposure to solar radiation by correlating the cumulative UVB radiation dose to treatment efficacy and by formulating a patient sun exposure treatment protocol for Dead Sea photoclimatherapy.
Stress tensor and focal mechanisms in the Dead Sea basin
NASA Astrophysics Data System (ADS)
Hofstetter, A.; Dorbath, C.; Dorbath, L.; Braeuer, B.; Weber, M. H.
2015-12-01
We use the seismicity recorded in the Dead Sea basin and along its boundaries by the Dead Sea Integrated Research (DESIRE) portable seismic network and the Israel and Jordan permanent seismic networks to study the mechanisms of earthquakes that occurred in the Dead Sea basin. The observed seismicity in the Dead Sea basin was divided into 9 regions according to the spatial distribution of the earthquakes and the known tectonic features. The large number of recording stations and the good station distribution allowed the reliable determination of 494 earthquake focal mechanisms. For each region, based on the inversion of the observed polarities of the earthquakes, we determine the focal mechanisms and the associated stress tensor. For 159 of the 494 mechanisms we could determine compatible fault planes. On the eastern side, the focal mechanisms are mainly strike-slip with nodal planes in the N-S and E-W directions. The azimuths of the stress axes are well constrained, showing minimal variability in the inversion of the data, which is in good agreement with the Arava fault on the eastern side of the Dead Sea basin and with what we expected from the regional geodynamics. However, larger variabilities of the azimuthal and dip angles are observed on the western side of the basin. Because of the wider range of azimuths of the fault planes, we observe the switching of sigma1 and sigma2, or of sigma2 and sigma3, as the major horizontal stress directions. This observed switching of stress axes permits dip-slip and normal mechanisms in a region that is dominated by strike-slip motion.
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
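The control law described above transforms an error vector e through a gain matrix G to get the correcting control variables u = -G·e. The abstract gives no dimensions, so as an illustrative special case we take a square, invertible 2x2 sensitivity matrix S (error response per unit actuator command), for which the wavefront-minimizing gain reduces to G = S⁻¹; all numerical values below are hypothetical:

```python
def gain_matrix_2x2(S):
    """Control gain for an invertible 2x2 sensitivity matrix: G = S^-1."""
    (a, b), (c, d) = S
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def control(G, e):
    """Control variables u = -G e that cancel the error state e."""
    return [-(G[0][0] * e[0] + G[0][1] * e[1]),
            -(G[1][0] * e[0] + G[1][1] * e[1])]

S = [[2.0, 0.0], [1.0, 3.0]]   # wavefront-error sensitivity to two actuators
e = [4.0, 5.0]                 # current error state of the optical surfaces
u = control(gain_matrix_2x2(S), e)
residual = [e[0] + S[0][0] * u[0] + S[0][1] * u[1],
            e[1] + S[1][0] * u[0] + S[1][1] * u[1]]   # driven to ~[0, 0]
```

In the general rectangular case the gain would instead be a (possibly weighted) least-squares pseudoinverse, which is the "mathematical condition for minimizing the total wavefront error" the abstract refers to.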
Minimizing Accidents and Risks in High Adventure Outdoor Pursuits.
ERIC Educational Resources Information Center
Meier, Joel
The fundamental dilemma in adventure programming is eliminating unreasonable risks to participants without also reducing levels of excitement, challenge, and stress. Most accidents are caused by a combination of unsafe conditions, unsafe acts, and errors in judgment. The best and only way to minimize critical human error in adventure programs is…
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
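The ratio computation at the heart of such methods can be sketched as: interpolate each amplicon's cycle time (Ct) where fluorescence crosses a threshold inside the exponential phase, then form efficiency^(Ct_ref - Ct_target). This simplified sketch assumes a known, shared efficiency rather than estimating it by iterative linear regression as Q-Anal does:

```python
import math

def ct_at_threshold(fluor, threshold):
    """Fractional cycle where log2 fluorescence first crosses the threshold,
    by linear interpolation between adjacent cycles."""
    logs = [math.log2(f) for f in fluor]
    target = math.log2(threshold)
    for n in range(1, len(logs)):
        if logs[n - 1] < target <= logs[n]:
            return (n - 1) + (target - logs[n - 1]) / (logs[n] - logs[n - 1])
    raise ValueError("threshold not crossed")

def expression_ratio(ct_target, ct_ref, efficiency=2.0):
    """Comparative ratio; efficiency 2.0 means perfect doubling per cycle."""
    return efficiency ** (ct_ref - ct_target)

# Synthetic amplicons doubling each cycle from different starting amounts:
# the reference starts at 4x the target, so the true ratio is 0.25.
target = [1e-6 * 2 ** n for n in range(20)]
ref = [4e-6 * 2 ** n for n in range(20)]
ratio = expression_ratio(ct_at_threshold(target, 1e-3),
                         ct_at_threshold(ref, 1e-3))
```

Q-Anal's contribution is choosing the threshold and per-amplicon efficiencies by minimizing regression error instead of assuming them, which is what shrinks the relative errors reported above.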
NASA Astrophysics Data System (ADS)
Al-Damegh, Khaled; Sandvol, Eric; Al-Lazki, Ali; Barazangi, Muawia
2004-05-01
Continuous recordings of 17 broadband and short-period digital seismic stations from a newly established seismological network in Saudi Arabia, along with digital recordings from the broadband stations of the GSN, MEDNET, GEOFON, a temporary array in Saudi Arabia, and temporary short period stations in Oman, were analysed to study the lithospheric structure of the Arabian Plate and surrounding regions. The Arabian Plate is surrounded by a variety of types of plate boundaries: continental collision (Zagros Belt and Bitlis Suture), continental transform (Dead Sea fault system), young seafloor spreading (Red Sea and the Gulf of Aden) and oceanic transform (Owen fracture zone). Also, there are many intraplate Cenozoic processes such as volcanic eruptions, faulting and folding that are taking place. We used this massive waveform database of more than 6200 regional seismograms to map zones of blockage, inefficient and efficient propagation of the Lg and Sn phases in the Middle East and East Africa. We observed Lg blockage across the Bitlis Suture and the Zagros fold and thrust belt, corresponding to the boundary between the Arabian and Eurasian plates. This is probably due to a major lateral change in the Lg crustal waveguide. We also observed inefficient Lg propagation along the Oman mountains. Blockage and inefficient Sn propagation is observed along and for a considerable distance to the east of the Dead Sea fault system and in the northern portion of the Arabian Plate (south of the Bitlis Suture). These mapped zones of high Sn attenuation, moreover, closely coincide with extensive Neogene and Quaternary volcanic activity. We have also carefully mapped the boundaries of the Sn blockage within the Turkish and Iranian plateaus. Furthermore, we observed Sn blockage across the Owen fracture zone and across some segments of the Red Sea. These regions of high Sn attenuation most probably have anomalously hot and possibly thin lithospheric mantle (i.e. mantle lid). 
A surprising result is the efficient propagation of Sn across a segment of the Red Sea, an indication that active seafloor spreading is not continuous along the axis of the Red Sea. We also investigated the attenuation of Pn phase (QPn) for 1-2 Hz along the Red Sea, the Dead Sea fault system, within the Arabian Shield and in the Arabian Platform. Consistent with the Sn attenuation, we observed low QPn values of 22 and 15 along the western coast of the Arabian Plate and along the Dead Sea fault system, respectively, for a frequency of 1.5 Hz. Higher QPn values of the order of 400 were observed within the Arabian Shield and Platform for the same frequency. Our results based on Sn and Pn observations along the western and northern portions of the Arabian Plate imply the presence of a major anomalously hot and thinned lithosphere in these regions that may be caused by the extensive upper mantle anomaly that appears to span most of East Africa and western Arabia.
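The Q values quoted above quantify anelastic attenuation: for a phase of frequency f travelling at velocity v, amplitude decays as A(x) = A0·exp(-pi·f·x/(Q·v)), so Q can be recovered from the amplitude drop over a path. A round-trip sketch; the 8 km/s Pn velocity and 300 km path are hypothetical illustrations, while Q = 400 and f = 1.5 Hz echo the shield values reported above:

```python
import math

def amplitude(a0, dist_km, freq_hz, q, vel_km_s):
    """Anelastic amplitude decay: A = A0 * exp(-pi f x / (Q v))."""
    return a0 * math.exp(-math.pi * freq_hz * dist_km / (q * vel_km_s))

def q_from_amplitudes(a1, a2, dist_km, freq_hz, vel_km_s):
    """Invert the decay law for Q given amplitudes at two points dist_km apart."""
    return -math.pi * freq_hz * dist_km / (vel_km_s * math.log(a2 / a1))

# Round-trip check: synthesize a decayed amplitude, then recover Q
a2 = amplitude(1.0, 300.0, 1.5, 400.0, 8.0)
q = q_from_amplitudes(1.0, a2, 300.0, 1.5, 8.0)
```

The contrast between Q near 20 along the Dead Sea fault system and near 400 in the shield corresponds to a dramatically faster exponential amplitude loss, which is why Sn and Lg are blocked or inefficient along the hot, thin-lithosphere corridors.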
Magnetocentrifugally Driven Flows from Young Stars and Disks. IV. The Accretion Funnel and Dead Zone
NASA Astrophysics Data System (ADS)
Ostriker, Eve C.; Shu, Frank H.
1995-07-01
We formulate the time-steady, axisymmetric problem of stellar magnetospheric inflow of gas from a surrounding accretion disk. The computational domain is bounded on the outside by a surface of given shape containing the open field lines associated with an induced disk wind. The mechanism for this wind has been investigated in previous publications in this journal. Our zeroth-order solution incorporates an acceptable accounting of the pressure balance between the magnetic field lines loaded with accreting gas (funnel flow) and those empty of matter (dead zone). In comparison with previous models, our funnel-flow/dead-zone solution has the following novel features: (1) Because of a natural tendency for the trapped stellar magnetic flux to pinch toward the corotation radius Rx (X-point of the effective potential), most of the interesting magnetohydrodynamics is initiated within a small neighborhood of Rx (X-region), where the Keplerian angular speed of rotation in the disk equals the spin rate of the star. (2) Unimpeded funnel flow from the inner portion of the X-region to the star can occur when the amount of trapped magnetic flux equals or exceeds 1.5 times the unperturbed dipole flux that would lie outside Rx in the absence of an accretion disk. (3) Near the equatorial plane, radial infall from the X-point is terminated at a "kink" point Rk = 0.74Rx that deflects the flow away from the midplane, mediating thereby between the field topology imposed by a magnetic fan of trapped flux at Rx and the geometry of a strong stellar dipole. (4) The excess angular momentum of accretion that would otherwise spin up the star rapidly is deposited by the magnetic torques of the funnel flow into the inner portion of the X-region of the disk. (5) An induced disk wind arises in the outer portion of the X-region, where the stellar field lines have been blown open, and removes whatever excess angular momentum that viscous torques do not transport to the outer disk.
(6) The interface between open field lines loaded with outflowing matter (connected to the disk) and those not loaded (connected to the star) forms a "helmet streamer," along which major mass-ejection and reconnection events may arise in response to changing boundary conditions (e.g., stellar magnetic cycles), much the way that such events occur in the active Sun. (7) Pressure balance across the dead-zone/wind interface will probably yield an asymptotically vertical (i.e., "jetlike") trajectory for the matter ejected along the helmet streamer, but mathematical demonstration of this fact is left for future studies. (8) In steady state the overall balance of angular momentum in the star/disk/magnetosphere system fixes the fractions, f and 1 - f, of the disk mass accretion rate into the X-region carried away, respectively, by the wind and funnel flows.
Managing Errors to Reduce Accidents in High Consequence Networked Information Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganter, J.H.
1999-02-01
Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.
Resonant Mode-hopping Micromixing
Jang, Ling-Sheng; Chao, Shih-Hui; Holl, Mark R.; Meldrum, Deirdre R.
2009-01-01
A common micromixer design strategy is to generate interleaved flow topologies to enhance diffusion. However, problems with these designs include complicated structures and dead volumes within the flow fields. We present an active micromixer using a resonating piezoceramic/silicon composite diaphragm to generate acoustic streaming flow topologies. Circulation patterns are observed experimentally and correlate to the resonant mode shapes of the diaphragm. The dead volumes in the flow field are eliminated by rapidly switching from one discrete resonant mode to another (i.e., resonant mode-hop). Mixer performance is characterized by mixing buffer with a fluorescence tracer containing fluorescein. Movies of the mixing process are analyzed by converting fluorescent images to two-dimensional fluorescein concentration distributions. The results demonstrate that mode-hopping operation rapidly homogenized chamber contents, circumventing diffusion-isolated zones. PMID:19551159
A three pulse phase response curve to three milligrams of melatonin in humans
Burgess, Helen J; Revell, Victoria L; Eastman, Charmane I
2008-01-01
Exogenous melatonin is increasingly used for its phase shifting and soporific effects. We generated a three pulse phase response curve (PRC) to exogenous melatonin (3 mg) by administering it to free-running subjects. Young healthy subjects (n = 27) participated in two 5 day laboratory sessions, each preceded by at least a week of habitual, but fixed sleep. Each 5 day laboratory session started and ended with a phase assessment to measure the circadian rhythm of endogenous melatonin in dim light using 30 min saliva samples. In between were three days in an ultradian dim light (< 150 lux)–dark cycle (LD 2.5 : 1.5) during which each subject took one pill per day at the same clock time (3 mg melatonin or placebo, double blind, counterbalanced). Each individual's phase shift to exogenous melatonin was corrected by subtracting their phase shift to placebo (a free-run). The resulting PRC has a phase advance portion peaking about 5 h before the dim light melatonin onset, in the afternoon. The phase delay portion peaks about 11 h after the dim light melatonin onset, shortly after the usual time of morning awakening. A dead zone of minimal phase shifts occurred around the first half of habitual sleep. The fitted maximum advance and delay shifts were 1.8 h and 1.3 h, respectively. This new PRC will aid in determining the optimal time to administer exogenous melatonin to achieve desired phase shifts and demonstrates that using exogenous melatonin as a sleep aid at night has minimal phase shifting effects. PMID:18006583
NASA Astrophysics Data System (ADS)
Sourav Rout, Smruti; Wörner, Gerhard
2017-04-01
Time-scales extracted from the detailed analysis of chemically zoned minerals provide insights into crystal ages, magma storage and compositional evolution, including mixing and unmixing events. This allows a better understanding of the pre-eruptive history of large and potentially dangerous magma chambers. We present a comprehensive study of chemical diffusion across zoning and exsolution patterns of alkali feldspars in carbonatite-bearing cognate syenites from the 6.3 km³ (D.R.E.) phonolitic Laacher See Tephra (LST) eruption 12.9 ka ago. The Laacher See volcano is located in the Quaternary East Eifel volcanic field of the Paleozoic Rhenish Massif in Western Germany and has produced a compositionally variable sequence in a single eruption from a magma chamber that was zoned from mafic phonolite at the base to highly evolved, actively degassing phonolite magma at the top. Diffusion chronometry is applied to major and trace element compositions obtained on alkali feldspars from carbonate-bearing syenitic cumulates. Methods used were laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) in combination with energy-dispersive and wavelength-dispersive electron microprobe analyses (EDS & WDS-EMPA). The grey scale values extracted from multiple accumulations of back-scattered electron images represent the K/Na ratio owing to the extremely low concentrations of Ba and Sr (<30 ppm). The numerical grey scale profiles and the quantitative compositional profiles are analyzed using three different fitting models in MATLAB®, Mathematica® and Origin® to estimate related time-scales with minimized error for a temperature range of 750 °C to 800 °C (on the basis of existing experimental data on phase transition and phase separation). A distinctive uphill-diffusion analysis is used specifically for the phase separation in the case of exsolution features (comprising albite- and orthoclase-rich phases) in sanidines.
The error values are aggregates of propagated error through calculations and the uncertainty in temperature values. Trace element compositional data of distinct feldspar compositions that are assumed to have grown before and after silicate-carbonate unmixing are used to estimate partition coefficients between carbonate and silicate melt. The resulting values correlate well with available experimental data from the literature. We will present a genetic model based on the compositional data on feldspar zonation for the process and timing of silicate-carbonate unmixing prior to eruption of the host phonolite magma.
Heading error in an alignment-based magnetometer
NASA Astrophysics Data System (ADS)
Hovde, Chris; Patton, Brian; Versolato, Oscar; Corsini, Eric; Rochester, Simon; Budker, Dmitry
2011-06-01
A prototype magnetometer for anti-submarine warfare applications is being developed based on nonlinear magneto-optical rotation (NMOR) in atomic vapors. NMOR is an atomic spectroscopy technique that exploits coherences among magnetic sublevels of atoms such as cesium or rubidium to measure magnetic fields with high precision. NMOR uses stroboscopic optical pumping via frequency or amplitude modulation of a linearly polarized laser beam to create the alignment. An anti-relaxation coating on the walls of the atomic vapor cell can result in a long lifetime of 1 s or more for the coherence and enables precise measurement of the precession frequency. With proper feedback, the magnetometer can self-oscillate, resulting in accurate tracking and fast time response. The NMOR magnetic resonance spectrum of 87Rb has been measured as a function of heading in Earth's field. Optical pumping of alignment within the F=2 hyperfine manifold generates three resonances separated by the nonlinear Zeeman splitting. The spectra show a high degree of symmetry, consisting of a central peak and two side peaks of nearly equal intensity. As the heading changes, the ratio of the central peak to the average of the two side peaks changes. The amplitudes of the side peaks remain nearly equal. An analysis of the forced oscillation spectra indicates that, away from dead zones, heading error in self-oscillating mode should be less than 1 nT. A broader background is also observed in the spectra. While this background can be removed when fitting resonance spectra, understanding it will be important to achieving the small heading error in self-oscillating mode that is implied by the spectral measurements. Progress in miniaturizing the magnetometer is also reported. The new design is less than 10 cm across and includes fiber coupling of light to and from the magnetometer head. Initial tests show that the prototype has achieved a narrow spectral width and a strong polarization rotation signal.
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao
2011-05-01
According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided and the unknown parameters in the equation of the surface are acquired through the least squares method. The principle of GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, the Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method is capable of correctly evaluating the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to process the measured profile-error data of complex surfaces obtained by three-coordinate measuring machines (CMMs).
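The minimum-zone-by-GA idea above can be sketched for the simpler case of a straightness profile (a toy stand-in for the helicoid; the data, GA settings, and seeding from the least-squares fit are all assumptions for the demo):

```python
import random

# Toy minimum-zone evaluation: the minimum zone reference line minimizes the
# peak-to-valley deviation, whereas least squares (used here only to seed the
# search) minimizes the sum of squared deviations and generally gives a wider zone.
random.seed(1)
xs = [0.1 * i for i in range(50)]
ys = [0.5 * x + 0.02 * ((-1) ** i) + 0.01 * random.random()
      for i, x in enumerate(xs)]

def zone_width(a, b):
    """Peak-to-valley deviation of the data from the line y = a*x + b."""
    d = [y - (a * x + b) for x, y in zip(xs, ys)]
    return max(d) - min(d)

# Closed-form least-squares line fit (starting parameters for the GA).
n, sx, sy = len(xs), sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a_ls = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b_ls = (sy - a_ls * sx) / n

# Minimal elitist GA: truncation selection with shrinking Gaussian mutation.
pop = [(a_ls, b_ls)] + [(a_ls + random.gauss(0, 0.05),
                         b_ls + random.gauss(0, 0.05)) for _ in range(39)]
for gen in range(60):
    pop.sort(key=lambda p: zone_width(*p))
    elite = pop[:10]
    sigma = 0.02 * 0.9 ** gen
    pop = elite + [(a + random.gauss(0, sigma), b + random.gauss(0, sigma))
                   for a, b in (random.choice(elite) for _ in range(30))]
a_mz, b_mz = min(pop, key=lambda p: zone_width(*p))

print("least-squares zone width:", zone_width(a_ls, b_ls))
print("minimum zone width      :", zone_width(a_mz, b_mz))
```

Because the least-squares solution is included in the initial population and elites are preserved, the GA's final zone width can never exceed the least-squares one.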
Analysis of single ion channel data incorporating time-interval omission and sampling
The, Yu-Kai; Timmer, Jens
2005-01-01
Hidden Markov models are widely used to describe single channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, so the standard hidden Markov model is no longer adequate. The notion of time-interval omission has been introduced, whereby brief events are not detected. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates and present the appropriate equations to describe sampled data. PMID:16849220
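The bias that time-interval omission introduces can be illustrated with a toy simulation (an assumed two-state channel with exponential dwell times, not the authors' formulation): shut periods shorter than the dead-time go undetected, so the flanking openings merge into one long apparent opening.

```python
import random

# Simulate alternating open/shut sojourns, then apply time-interval omission:
# any shut interval shorter than the dead-time is missed, concatenating the
# two neighbouring open intervals into a single apparent opening.
random.seed(0)
mean_open, mean_shut, dead_time = 1.0, 0.5, 0.3
opens = [random.expovariate(1 / mean_open) for _ in range(20000)]
shuts = [random.expovariate(1 / mean_shut) for _ in range(20000)]

apparent = []
current = opens[0]
for o, s in zip(opens[1:], shuts):
    if s < dead_time:          # shut period missed: openings merge
        current += s + o
    else:                      # shut period resolved: opening ends here
        apparent.append(current)
        current = o
apparent.append(current)

true_mean = sum(opens) / len(opens)
apparent_mean = sum(apparent) / len(apparent)
print(true_mean, apparent_mean)   # the apparent mean open time is inflated
```

With these assumed rates roughly 45% of shut intervals fall below the dead-time, so the apparent mean open time is nearly double the true one, which is exactly why the correction equations matter.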
Minarik, Marek; Franc, Martin; Minarik, Milan
2018-06-15
A new instrumental approach to recycling HPLC is described. The concept is based on fast reintroduction of incremental peak sections back onto the separation column. The re-circulation is performed within a closed loop containing only the column and two synchronized switching valves. By having HPLC pump out of the cycle, the method minimizes peak broadening due to dead volume. As a result the efficiency is dramatically increased allowing for the most demanding analytical applications. In addition, a parking loop is employed for temporary storage of analytes from the middle section of the separated mixture prior to their recycling. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Markwald, Sabine
1976-01-01
Describes a German course for archeologists and art historians, given in the Louvre by the Paris Goethe Institute. Reliance is placed on the students' visual memory, with schematic presentation of pronoun and article declension. This approach sometimes fosters errors and misunderstandings. The verb system is emphasized. (Text is in German.)…
Optimized System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Longman, Richard W.
1999-01-01
In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
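The distinction the abstract draws between equation error and output error can be sketched for a scalar system (all values, the noise level, and the grid search are assumptions for the demo, not the OKID/SQP machinery of the paper):

```python
import random

# For x[k+1] = a*x[k] + b*u[k] observed through measurement noise, regressing
# noisy y[k+1] on noisy y[k] (equation error) is biased toward zero, while
# simulating the model and comparing its output to the measurements
# (output error) is not.
random.seed(2)
a_true, b, N, sigma = 0.9, 1.0, 4000, 0.5
u = [random.choice((-1.0, 1.0)) for _ in range(N)]
x = [0.0]
for k in range(N - 1):
    x.append(a_true * x[-1] + b * u[k])
y = [xi + random.gauss(0, sigma) for xi in x]   # noisy output measurements

# Equation-error least squares: minimize sum (y[k+1] - a*y[k] - b*u[k])^2.
num = sum((y[k + 1] - b * u[k]) * y[k] for k in range(N - 1))
den = sum(y[k] ** 2 for k in range(N - 1))
a_ee = num / den            # attenuated by the noise variance in the denominator

# Output error: simulate the model for each candidate a, compare to y.
def cost(a):
    xhat, c = 0.0, 0.0
    for k in range(N):
        c += (y[k] - xhat) ** 2
        if k < N - 1:
            xhat = a * xhat + b * u[k]
    return c

grid = [0.80 + 0.001 * i for i in range(201)]   # candidates 0.800 .. 1.000
a_oe = min(grid, key=cost)
print(a_ee, a_oe)
```

The equation-error estimate lands noticeably below the true a = 0.9 (the classic errors-in-variables attenuation the abstract alludes to), while the output-error estimate recovers it; the paper's contribution is doing this minimization properly for MIMO systems with SQP from an OKID start.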
Lee, Min Su; Ju, Hojin; Song, Jin Woo; Park, Chan Gook
2015-11-06
In this paper, we present a method for finding the enhanced heading and position of pedestrians by fusing the Zero velocity UPdaTe (ZUPT)-based pedestrian dead reckoning (PDR) and the kinematic constraints of the lower human body. ZUPT is a well-known algorithm for PDR, and provides a sufficiently accurate position solution for short term periods, but it cannot guarantee a stable and reliable heading because it suffers from magnetic disturbance in determining heading angles, which degrades the overall position accuracy as time passes. The basic idea of the proposed algorithm is integrating the left and right foot positions obtained by ZUPTs with the heading and position information from an IMU mounted on the waist. To integrate this information, a kinematic model of the lower human body, which is calculated by using orientation sensors mounted on both thighs and calves, is adopted. We note that the positions of the left and right feet cannot be far apart because of the kinematic constraints of the body, so the kinematic model generates new measurements for the waist position. An Extended Kalman Filter (EKF) on the waist data estimates and corrects error states using these measurements together with magnetic heading measurements, which enhances the heading accuracy. The updated position information is fed into the foot mounted sensors, and reupdate processes are performed to correct the position error of each foot. The proposed update-reupdate technique consequently ensures improved observability of error states and position accuracy. Moreover, the proposed method provides all the information about the lower human body, so that it can be applied more effectively to motion tracking. The effectiveness of the proposed algorithm is verified via experimental results, which show that a 1.25% Return Position Error (RPE) with respect to walking distance is achieved.
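The core ZUPT idea can be sketched in one dimension (a toy gait profile and sensor bias of our own choosing, not the paper's full EKF with body-kinematic constraints): during detected stance phases the foot is known to be still, so the integrated velocity is reset, removing most of the accelerometer-bias drift.

```python
# 1-D toy: 100 Hz samples, alternating 0.5 s swing / 0.5 s stance phases.
dt, bias = 0.01, 0.05                       # assumed 0.05 m/s^2 accel bias
true_accel, stance = [], []
for _ in range(10):                         # ten gait cycles (10 s total)
    true_accel += [2.0] * 25 + [-2.0] * 25  # swing: accelerate, then brake
    stance += [False] * 50
    true_accel += [0.0] * 50                # stance: foot truly stationary
    stance += [True] * 50

def position_drift(use_zupt):
    v = p = v_true = p_true = 0.0
    for a, st in zip(true_accel, stance):
        v_true += a * dt
        p_true += v_true * dt
        v += (a + bias) * dt                # measured accel includes the bias
        if use_zupt and st:
            v = 0.0                         # zero-velocity update in stance
        p += v * dt
    return abs(p - p_true)

print("drift without ZUPT:", position_drift(False))
print("drift with ZUPT   :", position_drift(True))
```

Without the update the bias integrates quadratically into position; with it, the error can only accumulate over each half-second swing, which is why ZUPT is accurate over short distances while heading drift remains the long-term weak point the paper attacks.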
Internal defects associated with pruned and nonpruned branch stubs in black walnut
Alex L. Shigo; E. Allen, Jr. McGinnes; David T. Funk; Nelson Rogers
1979-01-01
Dissections of 50 branch stubs from seven black walnut trees revealed that some discolored wood was associated with all stubs, and that ring shakes and dark bands of discolored wood were associated with 14 of 17 stubs that were "flush cut" (branch collar removed) 13 years earlier while they were living or dead. Ring shakes formed along the barrier zone...
NASA Astrophysics Data System (ADS)
Landry, Russell; Dodson-Robinson, Sarah E.; Turner, Neal J.; Abram, Greg
2013-07-01
Magnetorotational instability (MRI) is the most promising mechanism behind accretion in low-mass protostellar disks. Here we present the first analysis of the global structure and evolution of non-ideal MRI-driven T-Tauri disks on million-year timescales. We accomplish this in a 1+1D simulation by calculating magnetic diffusivities and utilizing turbulence activity criteria to determine thermal structure and accretion rate without resorting to a three-dimensional magnetohydrodynamical (MHD) simulation. Our major findings are as follows. First, even for modest surface densities of just a few times the minimum-mass solar nebula, the dead zone encompasses the giant planet-forming region, preserving any compositional gradients. Second, the surface density of the active layer is nearly constant in time at roughly 10 g cm-2, which we use to derive a simple prescription for viscous heating in MRI-active disks for those who wish to avoid detailed MHD computations. Furthermore, unlike a standard disk with constant-α viscosity, the disk midplane does not cool off over time, though the surface cools as the star evolves along the Hayashi track. Instead, the MRI may pile material in the dead zone, causing it to heat up over time. The ice line is firmly in the terrestrial planet-forming region throughout disk evolution and can move either inward or outward with time, depending on whether pileups form near the star. Finally, steady-state mass transport is an extremely poor description of flow through an MRI-active disk, as we see both the turnaround in the accretion flow required by conservation of angular momentum and peaks in the accretion rate Ṁ(R) bracketing each side of the dead zone. We caution that MRI activity is sensitive to many parameters, including stellar X-ray flux, grain size, gas/small grain mass ratio and magnetic field strength, and we have not performed an exhaustive parameter study here.
Our 1+1D model also does not include azimuthal information, which prevents us from modeling the effects of Rossby waves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landry, Russell; Dodson-Robinson, Sarah E.; Turner, Neal J.
2013-07-10
Magnetorotational instability (MRI) is the most promising mechanism behind accretion in low-mass protostellar disks. Here we present the first analysis of the global structure and evolution of non-ideal MRI-driven T-Tauri disks on million-year timescales. We accomplish this in a 1+1D simulation by calculating magnetic diffusivities and utilizing turbulence activity criteria to determine thermal structure and accretion rate without resorting to a three-dimensional magnetohydrodynamical (MHD) simulation. Our major findings are as follows. First, even for modest surface densities of just a few times the minimum-mass solar nebula, the dead zone encompasses the giant planet-forming region, preserving any compositional gradients. Second, the surface density of the active layer is nearly constant in time at roughly 10 g cm⁻², which we use to derive a simple prescription for viscous heating in MRI-active disks for those who wish to avoid detailed MHD computations. Furthermore, unlike a standard disk with constant-α viscosity, the disk midplane does not cool off over time, though the surface cools as the star evolves along the Hayashi track. Instead, the MRI may pile material in the dead zone, causing it to heat up over time. The ice line is firmly in the terrestrial planet-forming region throughout disk evolution and can move either inward or outward with time, depending on whether pileups form near the star. Finally, steady-state mass transport is an extremely poor description of flow through an MRI-active disk, as we see both the turnaround in the accretion flow required by conservation of angular momentum and peaks in the accretion rate Ṁ(R) bracketing each side of the dead zone. We caution that MRI activity is sensitive to many parameters, including stellar X-ray flux, grain size, gas/small grain mass ratio and magnetic field strength, and we have not performed an exhaustive parameter study here.
Our 1+1D model also does not include azimuthal information, which prevents us from modeling the effects of Rossby waves.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
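The quantity such a tuner-selection routine minimizes, the theoretical steady-state estimation error, can be sketched for a scalar system (the numeric values are assumptions for the demo, not engine-model parameters):

```python
# Steady-state a-posteriori error variance of a scalar Kalman filter, obtained
# by iterating the discrete Riccati recursion to convergence.
def steady_state_variance(a, c, q, r, iters=500):
    """For the scalar system
    x[k+1] = a*x[k] + w   (process noise variance q)
    y[k]   = c*x[k] + v   (measurement noise variance r)."""
    p = 1.0
    for _ in range(iters):
        p_pred = a * a * p + q                       # time update
        k_gain = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        p = (1.0 - k_gain * c) * p_pred              # measurement update
    return p

p_ss = steady_state_variance(a=0.95, c=1.0, q=0.1, r=1.0)
print(p_ss)   # converged steady-state error variance
```

A selection loop of the kind the paper describes would evaluate such a theoretical error metric for each candidate tuning vector and keep the minimizer, rather than tuning against a single simulated data record.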
Clarke, William L; Anderson, Stacey; Farhy, Leon; Breton, Marc; Gonder-Frederick, Linda; Cox, Daniel; Kovatchev, Boris
2005-10-01
To compare the clinical accuracy of two different continuous glucose sensors (CGS) during euglycemia and hypoglycemia using continuous glucose-error grid analysis (CG-EGA). FreeStyle Navigator (Abbott Laboratories, Alameda, CA) and MiniMed CGMS (Medtronic, Northridge, CA) CGSs were applied to the abdomens of 16 type 1 diabetic subjects (age 42 ± 3 years) 12 h before the initiation of the study. Each system was calibrated according to the manufacturer's recommendations. Each subject underwent a hyperinsulinemic-euglycemic clamp (blood glucose goal 110 mg/dl) for 70-210 min followed by a 1 mg·dl⁻¹·min⁻¹ controlled reduction in blood glucose toward a nadir of 40 mg/dl. Arterialized blood glucose was determined every 5 min using a Beckman Glucose Analyzer (Fullerton, CA). CGS glucose recordings were matched to the reference blood glucose with 30-s precision, and rates of glucose change were calculated for 5-min intervals. CG-EGA was used to quantify the clinical accuracy of both systems by estimating combined point and rate accuracy of each system in the euglycemic (70-180 mg/dl) and hypoglycemic (<70 mg/dl) ranges. A total of 1,104 data pairs were recorded in the euglycemic range and 250 data pairs in the hypoglycemic range. Overall correlation between CGS and reference glucose was similar for both systems (Navigator, r = 0.84; CGMS, r = 0.79, NS). During euglycemia, both CGS systems had similar clinical accuracy (Navigator zones A + B, 88.8%; CGMS zones A + B, 89.3%, NS). However, during hypoglycemia, the Navigator was significantly more clinically accurate than the CGMS (zones A + B = 82.4 vs. 61.6%, Navigator and CGMS, respectively, P < 0.0005). CG-EGA is a helpful tool for evaluating and comparing the clinical accuracy of CGS systems in different blood glucose ranges. CG-EGA provides accuracy details beyond other methods of evaluation, including correlational analysis and the original EGA.
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement.
In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
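The orthogonal-component decomposition used in the abstract is a one-line variance subtraction; with the reported figures it reproduces the stated needle-tissue error to within rounding:

```python
import math

# If the overall error and the before-insertion (robot-only) error are
# statistically independent, the due-to-insertion (needle-tissue) component
# follows by subtracting in quadrature.
overall = 2.5   # mm, average overall system error
before = 1.3    # mm, robot-only error in the soft phantom
due_to_insertion = math.sqrt(overall ** 2 - before ** 2)
print(due_to_insertion)   # ~2.14 mm, in line with the reported 2.13 mm
```

The independence assumption is the authors' own stated premise ("assuming orthogonal error components"); if the two error sources were correlated, simple quadrature subtraction would over- or under-estimate the insertion component.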
Accuracy study of a robotic system for MRI-guided prostate needle placement.
Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian
2013-09-01
Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement.
In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
Park, Kyoung-Duck; Park, Doo Jae; Lee, Seung Gol; Choi, Geunchang; Kim, Dai-Sik; Byeon, Clare Chisu; Choi, Soo Bong; Jeong, Mun Seok
2014-02-21
A resonant shift and a decrease of resonance quality of a tuning fork attached to a conventional fiber optic probe in the vicinity of liquid is monitored systematically while varying the protrusion length and immersion depth of the probe. Stable zones where the resonance modification as a function of immersion depth is minimized are observed. A wet near-field scanning optical microscope (wet-NSOM) is operated for a sample within water by using such a stable zone.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
NASA Astrophysics Data System (ADS)
Kowalczyk, Marek; Martínez-Corral, Manuel; Cichocki, Tomasz; Andrés, Pedro
1995-02-01
Two novel algorithms for the binarization of continuous, rotationally symmetric, real and positive pupil filters are presented. Both algorithms are based on the one-dimensional error diffusion concept. In our numerical experiment an original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the filter with equal-width zones gives a Fraunhofer diffraction pattern more similar to that of the original gray-tone apodizer than the filter with equal-area zones does, assuming in both cases the same resolution limit of the device used to print both filters.
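One-dimensional error diffusion over radial zones can be sketched as follows (equal-width variant; the Gaussian apodizer and zone count are assumed stand-ins for the paper's filter):

```python
import math

# Each radial zone becomes fully transparent (1) or opaque (0); the
# quantization error of each zone is pushed into the next zone, so the
# average transmittance of the profile is preserved.
n_zones = 64
radii = [(i + 0.5) / n_zones for i in range(n_zones)]     # zone centers, r in (0, 1)
profile = [math.exp(-4.0 * r * r) for r in radii]         # gray-tone apodizer t(r)

binary, err = [], 0.0
for t in profile:
    val = t + err                     # transmittance plus carried-over error
    out = 1.0 if val >= 0.5 else 0.0  # threshold: transparent or opaque ring
    err = val - out                   # diffuse the residual to the next zone
    binary.append(out)

print(sum(profile), sum(binary))      # totals agree to within half a zone
```

Because the carried error always stays in (-0.5, 0.5], the total transmitted area of the binary mask tracks that of the gray-tone original, which is what keeps the low-frequency part of the Fraunhofer pattern close to the apodizer's.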
Tian, Qinglin; Salcic, Zoran; Wang, Kevin I-Kai; Pan, Yun
2015-12-05
Pedestrian dead reckoning is a common technique applied in indoor inertial navigation systems that is able to provide accurate tracking performance within short distances. Sensor drift is the main bottleneck in extending the system to long-distance and long-term tracking. In this paper, a hybrid system integrating traditional pedestrian dead reckoning based on the use of inertial measurement units, short-range radio frequency systems and particle filter map matching is proposed. The system is a drift-free pedestrian navigation system where position error and sensor drift is regularly corrected and is able to provide long-term accurate and reliable tracking. Moreover, the whole system is implemented on a commercial off-the-shelf smartphone and achieves real-time positioning and tracking performance with satisfactory accuracy.
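The particle-filter map-matching step that corrects PDR drift can be sketched with a toy map (the single wall at x = 5 and all noise levels are our assumptions, not the paper's building model):

```python
import math
import random

# Particles propagate with a noisy PDR step; any particle whose step crosses
# a wall is discarded, and the cloud is resampled from the survivors. Repeated
# over many steps, this bounds the accumulated dead-reckoning drift.
random.seed(3)
WALL_X = 5.0
particles = [(random.uniform(3.0, 4.9), random.uniform(0.0, 1.0))
             for _ in range(200)]

def pdr_step(particles, length, heading):
    survivors = []
    for x, y in particles:
        l = length + random.gauss(0, 0.05)       # step-length noise
        h = heading + random.gauss(0, 0.05)      # heading-drift noise
        nx, ny = x + l * math.cos(h), y + l * math.sin(h)
        if (x < WALL_X) == (nx < WALL_X):        # keep only wall-respecting moves
            survivors.append((nx, ny))
    # resample back to the original cloud size from the surviving particles
    return [random.choice(survivors) for _ in range(len(particles))]

particles = pdr_step(particles, length=0.7, heading=0.0)
print(all(x < WALL_X for x, _ in particles))     # True: no particle crossed
```

In the full system, the short-range radio fixes play the same corrective role as the wall constraint here, pulling the particle cloud back toward the true position.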
NASA Astrophysics Data System (ADS)
Blanks, J. K.; Hintz, C. J.; Chandler, G. T.; Shaw, T. J.; McCorkle, D. C.; Bernhard, J. M.
2007-12-01
Mg/Ca and Sr/Ca were analyzed from core-top individual Hoeglundina elegans aragonitic tests collected from three continental slope depths within the South Carolina and Little Bahama Bank continental slope environs (220 m to 1084 m). Our study utilized only individuals that labeled with the vital probe CellTracker Green, unlike bulk core-top material often stained with Rose Bengal, which has known inconsistencies in distinguishing live from dead foraminifera. DSr × 10 values were consistently 1.74 ± 0.23 across all sampling depths. The analytical error in DSr values (0.7%) determined by ICP-MS between repeated measurements on individual H. elegans tests across all depths was less than the analytical error on repeated measurements from standards. Variation in DSr values was not directly explained by a linear temperature relationship (p = 0.0003, R² = 0.44) over the temperature range of 4.9-11.4 °C with a sensitivity of 59.8 μmol/mol per °C. The standard error from regressing DSr across temperature yields ±3.4 °C, which is nearly 3× greater than that reported in previous studies. Sr/Ca was more sensitive for calibrating temperature than Mg/Ca in H. elegans. Observed scatter in DSr was too great across individuals of the same size and of different sizes to resolve ontogenetic effects. However, higher DSr values were associated with smaller individuals and warmer/shallower sampling depths. The highest DSr values were observed at the intermediate sampling depth (~600 m). No significant ontogenetic relationship was found across DSr values in different sized individuals due to tighter overall constrained variance; however, lower DSr values were observed from several smaller individuals. Several dead tests of H. elegans showed no significant differences in DSr values compared to live specimens cleaned by standard cleaning methods, unlike the higher dead than live DMg values observed for the same individuals.
There were no significant deviations in DSr across batches cleaned on separate days, unlike the observed sensitivity of DMg across batches. A subset of samples was reductively cleaned (hydrazine solution) and exhibited DMg values within the analytical precision of those observed for non-reductively cleaned samples. Therefore, deviations in DMg values resulting from the removal of the reductive cleaning step did not explain analytical errors greater than published values for Mg/Ca or the high variance across same-sized individuals. Variation in DMg values across the same cleaning methods and from dead individuals suggests the need for a careful look into how foraminiferal aragonite should be processed. These findings provide evidence that both Mg and Sr in benthic foraminiferal aragonite reflect factors in addition to temperature and pressure that may interfere with absolute temperature calibrations. Funded by NSF OCE 0351029, OCE 0437366, and OCE-0350794.
The increase in the starting torque of PMSM motor by applying of FOC method
NASA Astrophysics Data System (ADS)
Plachta, Kamil
2017-05-01
The article presents a field-oriented control method for a permanent magnet synchronous motor equipped with optical sensors. This method allows wide-range regulation of the torque and rotational speed of the electric motor. The paper presents a mathematical model of the electric motor and the vector control method. Optical sensors have a shorter response time than inductive sensors, which allows a faster response of the electronic control system to changes in motor load. The motor driver is based on a digital signal processor which performs advanced mathematical operations in real time. The application of the Clarke and Park transformations in the software determines the angle of the rotor position. The presented solution provides smooth adjustment of the rotational speed in the first operating zone and reduces the dead zone of the torque in the second and third operating zones.
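The Clarke and Park transformations at the heart of field-oriented control can be sketched as follows (amplitude-invariant form; the numeric check is ours, not taken from the paper):

```python
import math

# Clarke: three-phase currents (ia, ib, ic) -> stationary two-phase (alpha, beta).
# Park: (alpha, beta) rotated by the rotor angle theta -> rotor-frame (d, q),
# where id and iq are constant for balanced currents and directly set flux
# and torque in the FOC loop.
def clarke(ia, ib, ic):
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (ib - ic) / math.sqrt(3.0)
    return alpha, beta

def park(alpha, beta, theta):
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# Balanced three-phase currents aligned with the rotor angle give id = I, iq = 0.
I = 10.0
for theta in (0.0, 1.0, 2.5):
    ia = I * math.cos(theta)
    ib = I * math.cos(theta - 2.0 * math.pi / 3.0)
    ic = I * math.cos(theta + 2.0 * math.pi / 3.0)
    d, q = park(*clarke(ia, ib, ic), theta)
    print(round(d, 6), round(q, 6))   # 10.0 and 0.0 at every angle
```

The constant (d, q) pair is what lets the controller regulate torque with ordinary PI loops instead of tracking sinusoids, which is the basis of the torque and speed regulation the abstract describes.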
Low NOx heavy fuel combustor concept program
NASA Technical Reports Server (NTRS)
Russell, P.; Beal, G.; Hinton, B.
1981-01-01
A gas turbine technology program to improve and optimize the staged rich-lean low-NOx combustor concept is described. Subscale combustor tests were run to develop the design information for optimization of the fuel preparation, rich burn, quick air quench, and lean burn steps of the combustion process. The program provides information for the design of high pressure, full scale gas turbine combustors capable of providing environmentally clean combustion of minimally processed and synthetic fuels. It is concluded that liquid fuel atomization and mixing, rich zone stoichiometry, rich zone liner cooling, rich zone residence time, and quench zone stoichiometry are important considerations in the design and scale-up of the rich-lean combustor.
Cirrus Cloud Retrieval Using Infrared Sounding Data: Multilevel Cloud Errors.
NASA Astrophysics Data System (ADS)
Baum, Bryan A.; Wielicki, Bruce A.
1994-01-01
In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-µm CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.
Calibration method of microgrid polarimeters with image interpolation.
Chen, Zhenyue; Wang, Xia; Liang, Rongguang
2015-02-10
Microgrid polarimeters have significant advantages over conventional polarimeters because of their snapshot nature and lack of moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU.
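The IFOV problem arises because each polarization orientation is sampled at a different pixel of the 2x2 microgrid, so each channel must be interpolated back to full resolution. The paper uses spline interpolation; the sketch below substitutes a much simpler same-channel neighbor average purely to illustrate the channel-extraction-and-fill step, with an assumed mosaic layout.

```python
def interpolate_channel(mosaic, row_off, col_off):
    """Extract one polarization channel (native samples at pixels where
    r % 2 == row_off and c % 2 == col_off) from a 2x2 microgrid mosaic
    and fill the remaining pixels by averaging the nearest native
    samples of that channel, reducing the IFOV mismatch."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if r % 2 == row_off and c % 2 == col_off:
                out[r][c] = mosaic[r][c]  # native sample, keep as-is
            else:
                # average the native samples of this channel in the 3x3 window
                samples = []
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < h and 0 <= cc < w and
                                rr % 2 == row_off and cc % 2 == col_off):
                            samples.append(mosaic[rr][cc])
                out[r][c] = sum(samples) / len(samples)
    return out
```

A spline fit over the sparse native grid, as in the paper, replaces the averaging step with a smooth surface but follows the same per-channel structure.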
ERIC Educational Resources Information Center
Byars, Alvin Gregg
The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.
Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José
1999-01-01
Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between the E-test and disk diffusion tests showed that they are equivalent, with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
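The retained three-class breakpoint scheme is simple enough to state directly in code; the thresholds below are exactly those given in the abstract (function name is illustrative).

```python
def classify_metronidazole(zone_mm):
    """Interpret a 5-ug metronidazole disk diffusion zone diameter (mm)
    for H. pylori using the breakpoints defined in the study."""
    if zone_mm < 16:
        return "resistant"      # MIC > 8 ug/ml
    elif zone_mm < 21:
        return "intermediate"   # 4 ug/ml < MIC <= 8 ug/ml
    else:
        return "susceptible"    # MIC <= 4 ug/ml
```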
Treat - think - and be wary, for tomorrow they may die
Fish, F.F.
1938-01-01
For some very strange reason it is easy to minimize the villain's role, played by disease-producing organisms, in the theater of modern fish culture. Much concern is felt over the food bills footed each month by the hatcheries, but very little is thought about the dead fish which are picked from the hatchery troughs during the same period.
Zheng, Yue; Zhang, Chunxi; Li, Lijing; Song, Lailiang; Chen, Wen
2016-06-10
For a fiber-optic gyroscope (FOG) using electronic dithers to suppress the dead zone, without a fixed loop gain, the deterministic compensation for the dither signals in the control loop of the FOG cannot remain accurate, resulting in dither residuals in the FOG rotation rate output and navigation errors in the inertial navigation system. An all-digital automatic-gain-control method for stabilizing the loop gain of the FOG is proposed. By using a perturbation square wave to measure the loop gain of the FOG and adding an automatic gain control loop in the conventional control loop of the FOG, we successfully obtain the actual loop gain and make the loop gain converge to the reference value. The experimental results show that in the case of 20% variation in the loop gain, the dither residuals are successfully eliminated and the standard deviation of the FOG sampling outputs is decreased from 2.00 deg/h to 0.62 deg/h (sampling period 2.5 ms, 10-point smoothing). With this method, the loop gain of the FOG can be stabilized over the operating temperature range and in long-term applications, which provides a solid foundation for the engineering applications of the high-precision FOG.
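The core of such an automatic-gain-control loop can be sketched as a discrete integral regulator. This is a minimal sketch under the assumption that the loop gain has already been measured (the paper obtains it via a perturbation square wave, which is not modeled here); the step size mu is an illustrative tuning constant.

```python
def agc_step(measured_gain, reference_gain, correction, mu=0.1):
    """One iteration of a discrete automatic gain control loop: integrate
    the gain error to drive the effective loop gain toward the reference."""
    error = reference_gain - measured_gain
    return correction + mu * error

# Drive a 20% loop-gain offset back to the reference value of 1.0.
actual_gain = 1.2   # physical loop gain, 20% above the reference
correction = 1.0    # digital correction factor applied in the loop
for _ in range(200):
    measured = actual_gain * correction  # gain the perturbation would see
    correction = agc_step(measured, 1.0, correction)
# after convergence, actual_gain * correction sits at the reference value
```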
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
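The shape of an event-trigger test combining a state-dependent threshold with a dead-zone operator can be sketched as below. This is a generic illustration, not the paper's derived condition: the constants sigma and dead_band, and the use of simple norms rather than the NN-weight-dependent threshold, are assumptions.

```python
def should_trigger(error_norm, state_norm, sigma=0.1, dead_band=0.01):
    """Event-trigger test: transmit feedback only when the measurement
    error exceeds a state-dependent threshold, and never while the error
    sits inside the dead band (dead-zone operator)."""
    if error_norm <= dead_band:
        return False  # dead zone suppresses events near the origin
    return error_norm > sigma * state_norm
```

Between triggered instants the controller runs on the state estimator's model; larger sigma yields fewer events at the cost of a larger ultimate bound.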
Air Gaps, Size Effect, and Corner-Turning in Ambient LX-17
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souers, P C; Hernandez, A; Cabacungen, C
2007-05-30
Various ambient measurements are presented for LX-17. The size (diameter) effect has been measured with copper and Lucite confinement, where the failure radii are 4.0 and 6.5 mm, respectively. The air well corner-turn has been measured with an LX-07 booster, and the dead-zone results are comparable to the previous TATB-boosted work. Four double cylinders have been fired, and dead zones appear in all cases. The steel-backed samples are faster than the Lucite-backed samples by 0.6 µs. Bare LX-07 and LX-17 of 12.7 mm radius were fired with air gaps. Long acceptor regions were used to truly determine whether detonation occurred or not. The LX-07 crossed at 10 mm with a slight time delay. Steady-state LX-17 crossed at a 3.5 mm gap but failed to cross at 4.0 mm. LX-17 with a 12.7 mm run after the booster crossed a 1.5 mm gap but failed to cross 2.5 mm. Timing delays were measured where the detonation crossed the gaps. The Tarantula model is introduced as embedded in the Linked Cheetah V4.0 reactive flow code at 4 zones/mm. Tarantula has four pressure regions: off, initiation, failure and detonation. A report card of 25 tests run with the same settings on LX-17 is shown, possibly the most extensive simultaneous calibration yet tried with an explosive. The physical basis of some of the input parameters is considered.
Chaves, Maximiliano; Aguilera-Merlo, Claudia; Cruceño, Albana; Fogal, Teresa; Mohamed, Fabian
2015-11-01
The viscacha (Lagostomus maximus maximus) is a rodent with photoperiod-dependent seasonal reproduction. The aim of this work was to study the morphological variations of the prostate during periods of maximal (summer, long photoperiod) and minimal (winter, short photoperiod) reproductive activity. Prostates of adult male viscachas were studied by light and electron microscopy, immunohistochemistry for androgen receptor, and morphometric analysis. The prostate consisted of two regions: peripheral and central. The peripheral zone exhibited large adenomeres with a small number of folds and lined with a pseudostratified epithelium. The central zone had small adenomeres with pseudostratified epithelium and the mucosa showed numerous folds. The morphology of both zones showed variations during periods of maximal and minimal reproductive activity. The prostate weight, prostate-somatic index, luminal diameter of adenomeres, epithelial height and major nuclear diameter decreased during the period of minimal reproductive activity. Principal cells showed variations in their shape, size and ultrastructural characteristics during the period of minimal reproductive activity in comparison with the active period. The androgen receptor expression in epithelial and fibromuscular stromal cells was different between the studied periods. Our results suggest a reduced secretory activity of viscacha prostate during the period of minimal reproductive activity. Thus, the morphological variations observed in both the central and peripheral zones of the viscacha prostate agree with the results previously obtained in the gonads of this rodent of photoperiod-dependent reproduction. Additionally, the variations observed in the androgen receptors suggest a direct effect of the circulating testosterone on the gland. © 2015 Wiley Periodicals, Inc.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Ghali, Shadi; Turza, Kristin C; Baumann, Donald P; Butler, Charles E
2014-01-01
BACKGROUND Minimally invasive component separation (CS) with inlay bioprosthetic mesh (MICSIB) is a recently developed technique for abdominal wall reconstruction that preserves the rectus abdominis perforators and minimizes subcutaneous dead space using limited-access tunneled incisions. We hypothesized that MICSIB would result in better surgical outcomes than would conventional open CS. STUDY DESIGN All consecutive patients who underwent CS (open or minimally invasive) with inlay bioprosthetic mesh for ventral hernia repair from 2005 to 2010 were included in a retrospective analysis of prospectively collected data. Surgical outcomes including wound-healing complications, hernia recurrences, and abdominal bulge/laxity rates were compared between patient groups based on the type of CS repair: MICSIB or open. RESULTS Fifty-seven patients who underwent MICSIB and 50 who underwent open CS were included. The mean follow-ups were 15.2 ± 7.7 months and 20.7 ± 14.3 months, respectively. The mean fascial defect size was significantly larger in the MICSIB group (405.4 ± 193.6 cm² vs. 273.8 ± 186.8 cm²; p = 0.002). The incidences of skin dehiscence (11% vs. 28%; p = 0.011), all wound-healing complications (14% vs. 32%; p = 0.026), abdominal wall laxity/bulge (4% vs. 14%; p = 0.056), and hernia recurrence (4% vs. 8%; p = 0.3) were lower in the MICSIB group than in the open CS group. CONCLUSIONS MICSIB resulted in fewer wound-healing complications than did open CS used for complex abdominal wall reconstructions. These findings are likely attributable to the preservation of paramedian skin vascularity and reduction in subcutaneous dead space with MICSIB. MICSIB should be considered for complex abdominal wall reconstructions, particularly in patients at increased risk of wound-healing complications. PMID:22521439
Crustal structure of the southern Dead Sea basin derived from project DESIRE wide-angle seismic data
NASA Astrophysics Data System (ADS)
Mechie, J.; Abu-Ayyash, K.; Ben-Avraham, Z.; El-Kelani, R.; Qabbani, I.; Weber, M.
2009-07-01
As part of the DEad Sea Integrated REsearch project (DESIRE) a 235 km long seismic wide-angle reflection/refraction (WRR) profile was completed in spring 2006 across the Dead Sea Transform (DST) in the region of the southern Dead Sea basin (DSB). The DST with a total of about 107 km multi-stage left-lateral shear since about 18 Ma ago, accommodates the movement between the Arabian and African plates. It connects the spreading centre in the Red Sea with the Taurus collision zone in Turkey over a length of about 1100 km. With a sedimentary infill of about 10 km in places, the southern DSB is the largest pull-apart basin along the DST and one of the largest pull-apart basins on Earth. The WRR measurements comprised 11 shots recorded by 200 three-component and 400 one-component instruments spaced 300 m to 1.2 km apart along the whole length of the E-W trending profile. Models of the P-wave velocity structure derived from the WRR data show that the sedimentary infill associated with the formation of the southern DSB is about 8.5 km thick beneath the profile. With around an additional 2 km of older sediments, the depth to the seismic basement beneath the southern DSB is about 11 km below sea level beneath the profile. Seismic refraction data from an earlier experiment suggest that the seismic basement continues to deepen to a maximum depth of about 14 km, about 10 km south of the DESIRE profile. In contrast, the interfaces below about 20 km depth, including the top of the lower crust and the Moho, probably show less than 3 km variation in depth beneath the profile as it crosses the southern DSB. Thus the Dead Sea pull-apart basin may be essentially an upper crustal feature with upper crustal extension associated with the left-lateral motion along the DST. The boundary between the upper and lower crust at about 20 km depth might act as a decoupling zone. Below this boundary the two plates move past each other in what is essentially a shearing motion. 
Thermo-mechanical modelling of the DSB supports such a scenario. As the DESIRE seismic profile crosses the DST about 100 km north of where the DESERT seismic profile crosses the DST, it has been possible to construct a crustal cross-section of the region before the 107 km left-lateral shear on the DST occurred.
Salgado, Iván; Mera-Hernández, Manuel; Chairez, Isaac
2017-11-01
This study addresses the problem of designing an output-based controller to stabilize multi-input multi-output (MIMO) systems in the presence of parametric disturbances as well as uncertainties in the state model and output noise measurements. The controller design includes a linear state transformation which separates uncertainties matched to the control input and the unmatched ones. A differential neural network (DNN) observer produces a nonlinear approximation of the matched perturbation and the unknown states simultaneously in the transformed coordinates. This study proposes the use of the Attractive Ellipsoid Method (AEM) to optimize the gains of the controller and the gain observer in the DNN structure. As a consequence, the obtained control input minimizes the convergence zone for the estimation error. Moreover, the control design uses the estimated disturbance provided by the DNN to obtain a better performance in the stabilization task in comparison with a quasi-minimal output feedback controller based on a Luenberger observer and a sliding mode controller. Numerical results pointed out the advantages obtained by the nonlinear control based on the DNN observer. The first example deals with the stabilization of an academic linear MIMO perturbed system and the second example stabilizes the trajectories of a DC-motor into a predefined operation point. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Intra-specific competition (crowding) of giant sequoias (Sequoiadendron giganteum)
Stohlgren, Thomas J.
1993-01-01
Information on the size and location of 1916 giant sequoias (Sequoiadendron giganteum (Lindl.) Buchholz) in Muir Grove, Sequoia National Park, in the southern Sierra Nevada of California was used to assess intra-specific crowding. Study objectives were to: (1) determine which parameters associated with intra-specific competition (i.e. size and distance to nearest neighbor, crowding/root system area overlap, or number of neighbors) might be important in spatial pattern development, growth, and survivorship of established giant sequoias; (2) quantify the level of intra-specific crowding of different sized live sequoias based on a model of estimated overlapping root system areas (i.e. an index of relative crowding); (3) compare the level of intra-specific crowding of similarly sized live and dead giant sequoias (less than 30 cm diameter at breast height (dbh)) at the time of inventory (1969). Mean distances to the nearest live giant sequoia neighbor were not significantly different (at α = 0.05) for live and dead sequoias in similar size classes. A zone of influence competition model (i.e. index of crowding) based on horizontal overlap of estimated root system areas was developed for 1753 live sequoias. The model, based only on the spatial arrangement of live sequoias, was then tested on dead sequoias of less than 30 cm dbh (n = 163 trees; also recorded in 1969). The dead sequoias had a significantly higher crowding index than 561 live trees of similar diameter. Results showed that dead sequoias of less than 16.6 cm dbh had a significantly greater mean number of live neighbors and mean crowding index than live sequoias of similar size. Intra-specific crowding may be an important mechanism in determining the spatial distribution of sequoias in old-growth forests.
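A zone-of-influence crowding index of this kind reduces to summing the pairwise overlap areas of circular root-system zones. The sketch below is a generic geometric implementation, assuming each tree is represented by a center and an estimated root radius; the study's actual root-area estimates and index scaling are not specified here.

```python
import math

def circle_overlap_area(r1, r2, d):
    """Overlap (lens) area of two circles of radii r1, r2 at center distance d."""
    if d >= r1 + r2:
        return 0.0  # disjoint zones of influence
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2  # one zone inside the other
    # lens = sum of the two circular segments, each with half-angle a1, a2
    a1 = math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    return (r1 * r1 * (a1 - math.sin(2 * a1) / 2)
            + r2 * r2 * (a2 - math.sin(2 * a2) / 2))

def crowding_index(focal, neighbors):
    """Sum of root-area overlaps between a focal tree and its neighbors.
    Each tree is given as (x, y, root_radius)."""
    x0, y0, r0 = focal
    return sum(circle_overlap_area(r0, r, math.hypot(x - x0, y - y0))
               for x, y, r in neighbors)
```

With this index, a tree hemmed in by many close, large neighbors accumulates a large overlap total, matching the study's finding that dead small-diameter trees were more crowded than comparable survivors.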
NASA Astrophysics Data System (ADS)
Closson, Damien; Abou Karaki, Najib; Pasquali, Paolo; Riccardi, Paolo
2013-04-01
Since the 1980s, the Dead Sea coastal zone has been affected by sinkholes. The dynamic of the salt karst system is attested by a drastic increase of collapse events. The energy available for sub-surface erosion (or cavity genesis) is related to the head difference between the water table and the lake level, which is dropping at an accelerating rate of more than 1 m/yr. In the region of Ghor Al Haditha, Jordan, the size of the craters increased significantly during the last decade. Up to now, the greatest compound structure observed (association of metric subsidence, decametric sinkholes, and landslides) was about 150-200 m in diameter. At the end of December 2012, a single circular structure measuring 250-300 m in diameter was identified within a 10 km x 1.5 km saltpan of the Arab Potash Company. This finding raises questions regarding the origin of the underlying cavity and the predictive capability of all models developed up to now in Israel and Jordan regarding the Dead Sea sinkholes. The analysis of past satellite images shows that the appearance of this unique depression is very recent (probably less than 5 years). COSMO-SkyMed radar images have been processed to map the associated deformation field. Ground motions attest that the overall diameter could be around 600 m. Currently, this sinkhole is threatening the stability of more than one kilometer of a 12 km long dike holding 90 million m3 of Dead Sea brine. This case study underlines the great fragility of the Dead Sea salt karst and demonstrates the need for the setting up of an early warning system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souers, P C; Haylett, D; Vitello, P
2011-10-27
Using square zoning, the 2011 version of the kinetic package Tarantula matches cylinder data, cylinder dead zones, and cylinder failure with the same settings for the first time. The key is the use of maximum pressure rather than instantaneous pressure. Runs are at 40, 200 and 360 z/cm using JWL++ as the host model. The model also does run-to-detonation, thin-pulse initiation with a P-t curve and air gap crossing, all in cylindrical geometry. Two sizes of MSAD/LX-10/LX-17 snowballs work somewhat with these settings, but are too weak, so that divergent detonation is a challenge for the future. Butterfly meshes are considered but do not appear to solve the issue.
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
Energy Balance, Evapo-transpiration and Dew deposition in the Dead Sea Valley
NASA Astrophysics Data System (ADS)
Metzger, Jutta; Corsmeier, Ulrich
2016-04-01
The Dead Sea is a unique place on earth. It is a terminal hypersaline lake, located at the lowest point on earth with a lake level of currently -429 m above mean sea level (amsl). It is located in a transition zone of semiarid to arid climate conditions, which makes it highly sensitive to climate change (Alpert et al., 1997; Smiatek et al., 2011). The Virtual Institute DEad SEa Research Venue (DESERVE) is an international project funded by the German Helmholtz Association and was established to study coupled atmospheric, hydrological, and lithospheric processes in the changing environment of the Dead Sea. At the moment the most prominent environmental change is the lake level decline of approximately 1 m/year due to anthropogenic interferences (Gertman & Hecht, 2002). This leads to noticeable changes in the fractions of the existing terrestrial surfaces - water, bare soil and vegetated areas - in the valley. Thus, the partitioning of the net radiation in the valley changes as well. To thoroughly study the atmospheric and hydrological processes in the Dead Sea valley, which are driven by the energy balance components, sound data on the energy fluxes of the different surfaces are necessary. Before DESERVE no long-term monitoring network simultaneously measuring the energy balance components of the different surfaces in the Dead Sea valley was available. Therefore, three energy balance stations were installed at three characteristic sites at the coastline, over bare soil, and within vegetation, measuring all energy balance components by using the eddy covariance method. The results show that the partitioning of the energy into sensible and latent heat flux on a diurnal scale is totally different at the three sites. This results in gradients between the sites, which are e.g. responsible for the typical diurnal wind systems at the Dead Sea.
Furthermore, driving forces of evapo-transpiration at the sites were identified and a detailed analysis of the daily evaporation and dew deposition rates for a whole annual cycle will be presented. Alpert, P., Shafir, H., & Issahary, D. (1997). Recent changes in the climate at the Dead Sea-a preliminary study. Climatic Change, 37(3), 513-537. Gertman, I., & Hecht, A. (2002). The Dead Sea hydrography from 1992 to 2000. Journal of marine systems, 35(3), 169-181. Smiatek, G., Kunstmann, H., & Heckl, A. (2011). High-resolution climate change simulations for the Jordan River area. Journal of Geophysical Research: Atmospheres (1984-2012), 116(D16).
Comprehensive Measurements of Wind Systems at the Dead Sea
NASA Astrophysics Data System (ADS)
Metzger, Jutta; Corsmeier, Ulrich; Kalthoff, Norbert; Wieser, Andreas; Alpert, Pinhas; Lati, Joseph
2016-04-01
The Dead Sea is a unique place on earth. It is located at the lowest point of the Jordan Rift valley and its water level is currently at -429 m above mean sea level (amsl). To the West the Judean Mountains (up to 1000 m amsl) and to the East the Moab mountains (up to 1300 m amsl) confine the north-south oriented valley. The whole region is located in a transition zone of semi-arid to arid climate conditions and together with the steep orography, this forms a quite complex and unique environment. The Virtual Institute DEad SEa Research Venue (DESERVE) is an international project funded by the German Helmholtz Association and was established to study coupled atmospheric, hydrological, and lithospheric processes in the changing environment of the Dead Sea. Previous studies showed that the valley's atmosphere is often governed by periodic wind systems (Bitan, 1974), but most of the studies were limited to ground measurements and could therefore not resolve the three dimensional development and evolution of these wind systems. Performed airborne measurements found three distinct layers above the Dead Sea (Levin, 2005). Two layers are directly affected by the Dead Sea and the third is the commonly observed marine boundary layer over Israel. In the framework of DESERVE a field campaign with the mobile observatory KITcube was conducted to study the three dimensional structure of atmospheric processes at the Dead Sea in 2014. The combination of several in-situ and remote sensing instruments allows temporally and spatially high-resolution measurements in an atmospheric volume of about 10x10x10 km3. With this data set, the development and evolution of typical local wind systems, as well as the impact of regional scale wind conditions on the valley's atmosphere could be analyzed. 
The frequent development of a nocturnal drainage flow with wind velocities of over 10 m s-1, the typical lake breeze during the day, its onset and vertical extension as well as strong downslope winds in the afternoon, which are often intensified by regional scale wind systems like the Mediterranean Sea Breeze and the coupling of the synoptic flow, will be presented. Bitan, A. (1974). The wind regime in the north-west section of the Dead-Sea. Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B, 22(4), 313-335. Levin, Z., Gershon, H., & Ganor, E. (2005). Vertical distribution of physical and chemical properties of haze particles in the Dead Sea valley. Atmospheric Environment, 39(27), 4937-4945.
Sidick, Erkin
2013-09-10
An adaptive periodic-correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the subimages in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper, we assess the amount of that error and propose a method to minimize it.
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
An Adaptive Periodic-Correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the sub-images in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift-estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper we assess the amount of that error and propose a method to minimize it.
Zone model predictive control: a strategy to minimize hyper- and hypoglycemic events.
Grosman, Benyamin; Dassau, Eyal; Zisser, Howard C; Jovanovic, Lois; Doyle, Francis J
2010-07-01
Development of an artificial pancreas based on an automatic closed-loop algorithm that uses a subcutaneous insulin pump and continuous glucose sensor is a goal for biomedical engineering research. However, closing the loop for the artificial pancreas still presents many challenges, including model identification and design of a control algorithm that will keep the type 1 diabetes mellitus subject in normoglycemia for the longest duration and under maximal safety considerations. An artificial pancreatic beta-cell based on zone model predictive control (zone-MPC) that is tuned automatically has been evaluated on the University of Virginia/University of Padova Food and Drug Administration-accepted metabolic simulator. Zone-MPC is applied when a fixed set point is not defined and the control variable objective can be expressed as a zone. Because euglycemia is usually defined as a range, zone-MPC is a natural control strategy for the artificial pancreatic beta-cell. Clinical data usually include discrete information about insulin delivery and meals, which can be used to generate personalized models. It is argued that mapping clinical insulin administration and meal history through two different second-order transfer functions improves the identification accuracy of these models. Moreover, using mapped insulin as an additional state in zone-MPC enriches information about past control moves, thereby reducing the probability of overdosing. In this study, zone-MPC is tested in three different modes using unannounced and announced meals at their nominal value and with 40% uncertainty. Ten adult in silico subjects were evaluated following a scenario of mixed meals with 75, 75, and 50 grams of carbohydrates (CHOs) consumed at 7 am, 1 pm, and 8 pm, respectively. Zone-MPC results are compared to those of the "optimal" open-loop preadjusted treatment. 
Zone-MPC succeeds in maintaining glycemic responses closer to euglycemia compared to the "optimal" open-loop treatment in the three different modes with and without meal announcement. In the face of meal uncertainty, announced zone-MPC presented only marginally improved results over unannounced zone-MPC. When considering user error in CHO estimation and the need to interact with the system, unannounced zone-MPC is an appealing alternative. Zone-MPC reduces the variability of control moves over fixed set point control without the need to detune the controller. This strategy gives zone-MPC the ability to act quickly when needed and reduce unnecessary control moves in the euglycemic range. 2010 Diabetes Technology Society.
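The zone-cost idea underlying zone-MPC can be sketched in a few lines: glucose values inside the target zone incur no penalty, while excursions are penalized by their distance to the nearest zone edge. The snippet below is a minimal illustration only; the zone bounds (80-140 mg/dL) and the quadratic penalty are assumptions for illustration, not the published controller.

```python
# Sketch of the zone-cost idea behind zone-MPC (illustration only; the
# published controller solves a full MPC optimization, not this toy).
# Glucose values inside the target zone cost nothing; excursions are
# penalized by their squared distance to the nearest zone edge.

def zone_excursion(g, lo=80.0, hi=140.0):
    """Distance (mg/dL) from glucose value g to the target zone [lo, hi]."""
    if g < lo:
        return lo - g
    if g > hi:
        return g - hi
    return 0.0

def zone_cost(trajectory, lo=80.0, hi=140.0):
    """Sum of squared zone excursions over a predicted glucose trajectory."""
    return sum(zone_excursion(g, lo, hi) ** 2 for g in trajectory)

# A trajectory that stays in the zone costs nothing; one that dips
# hypoglycemic or spikes hyperglycemic accumulates cost.
in_zone = zone_cost([95, 110, 130])
excursion = zone_cost([70, 110, 160])   # (80-70)^2 + 0 + (160-140)^2
```

This zero-cost band is why zone-MPC can suppress unnecessary control moves in the euglycemic range while still acting quickly on excursions.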
Odeh, M.; Schrock, R.M.; Gannam, A.
2003-01-01
Hydraulic characteristics inside two research circular tanks (1.5-m and 1.2-m diameter) with the same volume of water were studied to understand how they might affect experimental bias by influencing the behavior and development of juvenile fish. Water velocities inside each tank were documented extensively and flow behavior studied. Surface inflow to the 1.5-m tank created a highly turbulent and aerated surface, and produced unevenly distributed velocities within the tank. A low-flow velocity, or "dead" zone, persisted just upstream of the surface inflow. A single submerged nozzle in the 1.2-m tank created uniform flow and did not cause undue turbulence or introduce air. Flow behavior in the 1.5-m tank is believed to have negatively affected the feeding behavior and physiological development of a group of juvenile fall chinook salmon, Oncorhynchus tshawytscha. A new inflow nozzle design provided comparable flow behavior regardless of tank size and water depth. Maintaining similar hydraulic conditions inside tanks used for various biological purposes, including fish research, would minimize experimental bias caused by differences in flow behavior. Other sources of experimental bias are discussed and recommendations given for reporting and control of experimental conditions in fishery research tank experiments.
Zone edge effects with variable rate irrigation
USDA-ARS?s Scientific Manuscript database
Variable rate irrigation (VRI) systems may offer solutions to enhance water use efficiency by addressing variability within a field. However, the design of VRI systems should be considered to maximize application uniformity within sprinkler zones, while minimizing edge effects between such zones alo...
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 x 10^-12 is reasonable.
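The two procedures described above can be sketched as follows. The tolerance step snaps near-zero values to exact zero; for the reinversion step, a standard Newton-Schulz refinement X <- X(2I - AX) is used here as a stand-in, since the report's exact routine is not reproduced in this abstract.

```python
# Two round-off-control ideas from the report, sketched for 2x2 matrices:
# (1) snap values below a tolerance to exact zero, and
# (2) improve an approximate inverse X of A with the refinement step
#     X <- X (2I - A X)  (a standard Newton-Schulz iteration, assumed
#     here as a stand-in for the report's reinversion procedure).

TOL = 1e-13

def snap(x, tol=TOL):
    """Round values that should be zero (|x| < tol) to exact zero."""
    return 0.0 if abs(x) < tol else x

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def refine_inverse(a, x):
    """One Newton-Schulz step: X <- X (2I - A X)."""
    n = len(a)
    ax = matmul(a, x)
    correction = [[(2.0 if i == j else 0.0) - ax[i][j]
                   for j in range(n)] for i in range(n)]
    return matmul(x, correction)

A = [[4.0, 1.0], [2.0, 3.0]]
X = [[0.25, -0.08], [-0.18, 0.35]]   # rough approximation of A^-1
for _ in range(6):
    X = refine_inverse(A, X)

# A X should now be the identity to within round-off; snapping the
# residual with the tolerance yields an exact zero.
residual = max(abs(matmul(A, X)[i][j] - (1.0 if i == j else 0.0))
               for i in range(2) for j in range(2))
clean_residual = snap(residual)
```

The iteration converges quadratically whenever the initial approximation is close enough (spectral radius of I - AX below one), so a handful of steps drive the residual to machine level.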
Sulcal set optimization for cortical surface registration.
Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M
2010-04-15
Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
Karunaratne, Nicholas
2013-12-01
To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master, Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or SRKT formula (P = 0.47). The lowest mean absolute refraction error for Holladay 2 equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for SRKT equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of IOL Master and Pentacam equivalent keratometry reading, best agreement was with Holladay 2 and equivalent keratometry reading 4.5 mm, with mean of the difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements.
However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
Keeping patients safe: Institute of Medicine looks at transforming nurses' work environment.
2004-01-01
In November 1999, the Institute of Medicine (IOM) released To Err Is Human: Building a Safer Health System, which brought to the public's attention the serious--and sometimes deadly--dangers posed by medical errors occurring in healthcare organizations. Exactly 4 years later, an IOM committee released a new report that focuses on the need to reinforce patient safety defenses in the nurses' working environments.
An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Prasanna, Viktor K.
Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility’s expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error, which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection; however, their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real world data obtained from the University of Southern California’s SmartGrid and show that our algorithm achieves near exact curtailment values with errors in the range of 10^-7 to 10^-5, which are within the range of numerical errors. We compare our results against the state-of-the-art heuristic being deployed in practice in the USC SmartGrid. We show that for the same set of available customer-strategy pairs our algorithm performs 10^3 to 10^7 times better in terms of the curtailment errors incurred.
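The abstract does not give the ILP itself, but the selection objective can be illustrated with a brute-force search that exactly minimizes the curtailment error on a toy instance. The customer names and kW values below are invented, and a real deployment would use an ILP solver rather than enumeration, which is feasible only for small instances.

```python
# Illustration of the DR selection objective: pick one curtailment
# strategy per customer so the total curtailment is as close as possible
# to the target. The paper formulates this as an ILP; here we simply
# brute-force a small invented instance.
from itertools import product

# strategies[c] = list of (strategy_name, curtailment_kW); 0.0 = opt out
strategies = {
    "bldg_A": [("none", 0.0), ("dim_lights", 12.0), ("hvac_setback", 30.0)],
    "bldg_B": [("none", 0.0), ("dim_lights", 8.0), ("hvac_setback", 25.0)],
    "bldg_C": [("none", 0.0), ("precool", 18.0)],
}

def best_selection(strategies, target_kw):
    """Exhaustively minimize |achieved curtailment - target|."""
    customers = sorted(strategies)
    best = None
    for choice in product(*(strategies[c] for c in customers)):
        total = sum(kw for _, kw in choice)
        err = abs(total - target_kw)
        if best is None or err < best[0]:
            best = (err, dict(zip(customers, (name for name, _ in choice))))
    return best

err, plan = best_selection(strategies, target_kw=50.0)
```

For this instance the best achievable curtailment is 48 kW (hvac_setback at bldg_A plus precool at bldg_C), so the minimal curtailment error is 2 kW.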
2006-04-01
spring that would have the potential to create wildfires. 3.11 Grazing Management: All alternatives would have minimal impact to grazing... 3.12 Invasive Plant Management: All alternatives would have minimal impact to management. 3.13 Timber Management: All alternatives would have... food and fuel within the local communities. 3.18 Coastal Zone Management: The alternatives would be consistent with the Florida Coastal Zone
The Effects of Antifoam Agent on Dead End Filtration Process
NASA Astrophysics Data System (ADS)
Mohamad Pauzi, S.; Ahmad, N.; Yahya, M. F.; Arifin, M. A.
2018-05-01
The formation of foam resulting from the introduction of gases during the cell culture process in the bioprocess industry has indirectly affected the throughput of the product of interest. Consequently, antifoams were developed and established as one of the means to minimize the formation of foam in cell culture. There are many types of antifoams, but silicone-type antifoams are the most widely used in the bioprocess industry. Although the establishment of antifoam has aided the cell culture process, the impact of its presence in the cell culture on the downstream process, especially dead end filtration, is not widely discussed. The findings in this study focus on dead end filtration performance, including the flux rate profile and the resulting filtration capacity. In this study, the concentrations of antifoam injected into the solution were varied from 0.2% v/v to 1.0% v/v and the solutions were filtered using the constant flow method. The resulting maximum pressure readings and final flux rates indicated that the resistance exerted on the feed flow increased as the concentration of antifoam loaded in the solution increased. This in turn led to a decline in flux rates, with percentage reductions between 32 and 68%. The calculated filter capacity for a flux rate of 1000 LMH ranged from 53 to 63 L/m2, while it was in the range of 40 to 43 L/m2 for a flux rate of 2000 LMH. The presence of antifoam agents in the feed load was determined to have negative effects on dead end filtration performance and may reduce the efficiency of the dead end filtration process.
Wolffs, Petra; Norling, Börje; Rådström, Peter
2005-03-01
Real-time PCR technology is increasingly used for detection and quantification of pathogens in food samples. A main disadvantage of nucleic acid detection is the inability to distinguish between signals originating from viable cells and from DNA released from dead cells. In order to gain knowledge concerning the risk of false-positive results due to detection of DNA originating from dead cells, quantitative PCR (qPCR) was used to investigate the degradation kinetics of free DNA in four types of meat samples. Results showed that the fastest degradation rate was observed in chicken homogenate (1 log unit per 0.5 h), whereas the slowest rate was observed in pork rinse (1 log unit per 120.5 h). Overall results indicated that degradation occurred faster in chicken samples than in pork samples and faster at higher temperatures. Based on these results, it was concluded that, especially in pork samples, there is a risk of false-positive PCR results. This was confirmed in a quantitative study on cell death and signal persistence over a period of 28 days, employing three different methods, i.e. viable counts, direct qPCR, and finally floatation, a recently developed discontinuous density centrifugation method, followed by qPCR. Results showed that direct qPCR overestimated the number of cells in the samples by up to 10 times compared to viable counts, due to detection of DNA from dead cells. However, after using floatation prior to qPCR, results resembled the viable count data. This indicates that by using floatation as a sample treatment step prior to qPCR, the risk of false-positive PCR results due to detection of dead cells can be minimized.
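The reported log-linear degradation rates translate directly into persistence times. A minimal sketch, assuming a starting DNA signal 6 log units above the qPCR detection floor (an invented figure used only for illustration):

```python
# First-order (log-linear) decay of free DNA, using the degradation rates
# reported in the abstract: 1 log unit per 0.5 h (chicken homogenate)
# vs. 1 log unit per 120.5 h (pork rinse). The 6-log starting abundance
# is an assumed illustration value, not a figure from the study.

def hours_to_undetectable(start_logs, hours_per_log):
    """Time for the signal to fall start_logs log units to the detection floor."""
    return start_logs * hours_per_log

chicken_h = hours_to_undetectable(6.0, 0.5)     # hours in chicken homogenate
pork_h = hours_to_undetectable(6.0, 120.5)      # hours in pork rinse
pork_days = pork_h / 24.0
```

Under these assumptions free DNA vanishes within hours in chicken homogenate but persists for roughly a month in pork rinse, consistent with the 28-day false-positive risk the study reports for pork samples.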
The formation of graben morphology in the Dead Sea Fault, and its implications
NASA Astrophysics Data System (ADS)
Ben-Avraham, Zvi; Katsman, Regina
2015-09-01
The Dead Sea Fault (DSF) is a 1000 km long continental transform. It forms a narrow and elongated valley with uplifted shoulders showing an east-west asymmetry, which is not common in other continental transforms. This topography may have strongly affected the course of human history. Several papers addressed the geomorphology of the DSF, but there is still no consensus with respect to the dominant mechanism of its formation. Our thermomechanical modeling demonstrates that the existence of a transform prior to rifting predefined high strain softening on the faults in the strong upper crust and created a precursor weak zone localizing deformations in the subsequent transtensional period. Together with a slow rate of extension over the Arabian plate, they controlled a narrow asymmetric morphology of the fault. This rift pattern was enhanced by a fast deposition of evaporites from the Sedom Lagoon, which occupied the rift depression for a short time period.
The life of a dead ant: the expression of an adaptive extended phenotype.
Andersen, Sandra B; Gerritsma, Sylvia; Yusah, Kalsum M; Mayntz, David; Hywel-Jones, Nigel L; Billen, Johan; Boomsma, Jacobus J; Hughes, David P
2009-09-01
Specialized parasites are expected to express complex adaptations to their hosts. Manipulation of host behavior is such an adaptation. We studied the fungus Ophiocordyceps unilateralis, a locally specialized parasite of arboreal Camponotus leonardi ants. Ant-infecting Ophiocordyceps are known to make hosts bite onto vegetation before killing them. We show that this represents a fine-tuned fungal adaptation: an extended phenotype. Dead ants were found under leaves, attached by their mandibles, on the northern side of saplings approximately 25 cm above the soil, where temperature and humidity conditions were optimal for fungal growth. Experimental relocation confirmed that parasite fitness was lower outside this manipulative zone. Host resources were rapidly colonized and further secured by extensive internal structuring. Nutritional composition analysis indicated that such structuring allows the parasite to produce a large fruiting body for spore production. Our findings suggest that the osmotrophic lifestyle of fungi may have facilitated novel exploitation strategies.
Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J
2016-01-01
A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimation error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
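The network-selection idea can be sketched as a greedy loop: repeatedly pick the well whose measurement most reduces the average estimate variance over the aquifer. In the toy below, a 1-D aquifer with an exponential covariance model stands in for the real geostatistics/Kalman machinery, and all numbers are invented.

```python
import math

# Greedy monitoring-network sketch: at each step, choose the grid point
# whose (noiseless) measurement gives the largest total variance
# reduction, then condition the covariance on that measurement.
# Toy 1-D aquifer with an exponential covariance model; invented numbers.

def exp_cov(xi, xj, sill=1.0, length=2.0):
    return sill * math.exp(-abs(xi - xj) / length)

def greedy_network(points, n_wells):
    n = len(points)
    C = [[exp_cov(a, b) for b in points] for a in points]  # prior covariance
    chosen = []
    for _ in range(n_wells):
        best = None
        for k in range(n):
            if k in chosen:
                continue
            # Measuring point k reduces the variance at i by C[i][k]^2 / C[k][k]
            # (scalar Kalman/kriging update with a noiseless observation).
            gain = sum(C[i][k] ** 2 / C[k][k] for i in range(n))
            if best is None or gain > best[0]:
                best = (gain, k)
        k = best[1]
        chosen.append(k)
        ckk = C[k][k]
        C = [[C[i][j] - C[i][k] * C[k][j] / ckk for j in range(n)]
             for i in range(n)]
    return chosen

grid = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
wells = greedy_network(grid, 2)
```

On this symmetric grid the first well lands at the center, and the second lands partway toward one flank, mirroring how an optimal design spreads wells to avoid redundant information.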
Evaluation of technology-enhanced flagger devices : focus group and survey studies in Kansas.
DOT National Transportation Integrated Search
2009-04-01
Flagger-controlled work zones, by their very nature tend to utilize fewer traffic control measures than other work zones. Often these work zones are in place for only a short duration of time, so adding signing or positive protection beyond the minim...
Driver speed limit compliance in school zones : assessing the impact of sign saturation.
DOT National Transportation Integrated Search
2013-10-01
School zones are often viewed as an effective way to reduce driving speeds and thereby improve safety near our nation's schools. The effect of school zones on reducing driving speeds, however, is minimal at best. Studies have shown that over 90...
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2017-04-01
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
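The error-boosting-and-extrapolation step can be illustrated with the simplest (linear) case: measure an observable at error rate r and at an artificially boosted rate 2r, then extrapolate to r = 0. The linear error model below is an assumption made for illustration; the paper's protocol is more general.

```python
# Minimal zero-noise extrapolation sketch (assumed linear error model,
# not the paper's full protocol): if an expectation value degrades
# roughly linearly with the error rate r, measuring at r and at a
# boosted rate 2r lets us extrapolate to r = 0.

def extrapolate_to_zero(e_r, e_2r):
    """Richardson-style linear extrapolation: E(0) ~ 2 E(r) - E(2r)."""
    return 2.0 * e_r - e_2r

# Toy noisy observable: true value 1.0, degraded linearly with rate r
# (the slope is an arbitrary illustration value).
def noisy_expectation(r, true_value=1.0, slope=-0.8):
    return true_value + slope * r

r = 0.05
raw = noisy_expectation(r)
estimate = extrapolate_to_zero(raw, noisy_expectation(2 * r))
```

For an exactly linear error model the extrapolation recovers the true value, while the raw measurement at rate r remains biased; in practice the error model is only approximately linear, so the cancellation is partial.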
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by the instrumental errors in the measured concentrations and model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and followed in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, called renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieval of the source estimate after minimizing the representativity errors.
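Step two of the procedure, the linear regression between measured and predicted concentrations, reduces to an ordinary least-squares fit. A minimal sketch with invented data; the subsequent adjoint modification is not shown.

```python
# Step 2 of the procedure: fit measured = a * predicted + b by ordinary
# least squares. The (a, b) pair is then used to modify the adjoint
# functions (that step is not shown here). Data values are invented.

def ols_fit(x, y):
    """Return slope a and intercept b minimizing sum (y - (a*x + b))^2."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

predicted = [1.0, 2.0, 3.0, 4.0]
measured = [2.1, 4.0, 6.1, 8.0]    # roughly measured = 2 * predicted
a, b = ols_fit(predicted, measured)
```

A slope far from one or a large intercept signals systematic model representativity error, which is exactly what the modified adjoint functions are meant to absorb.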
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-19
...-AA00 Safety Zone: Gilmerton Bridge Center Span Float-in, Elizabeth River; Norfolk, Portsmouth, and... final rule establishing a safety zone around the Gilmerton Bridge center span barge. Inadvertently, this... Gilmerton Bridge center span barge (77 FR 73541). Inadvertently, this rule included an error in the...
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
Methods for minimizing plastic flow of oil shale during in situ retorting
Lewis, Arthur E.; Mallon, Richard G.
1978-01-01
In an in situ oil shale retorting process, plastic flow of hot rubblized oil shale is minimized by injecting carbon dioxide and water into spent shale above the retorting zone. These gases react chemically with the mineral constituents of the spent shale to form a cement-like material which binds the individual shale particles together and bonds the consolidated mass to the wall of the retort. This relieves the weight burden borne by the hot shale below the retorting zone and thereby minimizes plastic flow in the hot shale. At least a portion of the required carbon dioxide and water can be supplied by recycled product gases.
Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
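The core weighted least-squares step can be sketched for a small dense system: the minimizer of the weighted residual norm solves the weighted normal equations (A^T W A) x = A^T W b. The numbers below are a toy 3x2 example; the paper's setting is parameterized stochastic systems, not this deterministic sketch.

```python
# Weighted least-squares sketch of the idea behind LSPG: choose x to
# minimize || b - A x ||_W^2 with W diagonal. For a small dense example
# the minimizer solves the weighted normal equations (A^T W A) x = A^T W b.
# Toy numbers; changing W redirects which residual components are minimized.

def weighted_lsq(A, b, w):
    """Minimize sum_i w[i] * (b[i] - (A x)_i)^2 for 2 unknowns (Cramer's rule)."""
    m = [[sum(w[i] * A[i][r] * A[i][c] for i in range(len(A)))
          for c in range(2)] for r in range(2)]
    rhs = [sum(w[i] * A[i][r] * b[i] for i in range(len(A))) for r in range(2)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(rhs[0] * m[1][1] - rhs[1] * m[0][1]) / det,
            (m[0][0] * rhs[1] - m[1][0] * rhs[0]) / det]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 4.0]
x_uniform = weighted_lsq(A, b, [1.0, 1.0, 1.0])
x_weighted = weighted_lsq(A, b, [1.0, 1.0, 100.0])  # trust the third equation
```

With uniform weights the residual is spread across all three equations; up-weighting the third equation pulls x0 + x1 nearly onto 4, illustrating how the weighting function redirects which error measure is minimized.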
NASA Astrophysics Data System (ADS)
Simley, Eric; Y Pao, Lucy; Gebraad, Pieter; Churchfield, Matthew
2014-06-01
Several sources of error exist in lidar measurements for feedforward control of wind turbines including the ability to detect only radial velocities, spatial averaging, and wind evolution. This paper investigates another potential source of error: the upstream induction zone. The induction zone can directly affect lidar measurements and presents an opportunity for further decorrelation between upstream wind and the wind that interacts with the rotor. The impact of the induction zone is investigated using the combined CFD and aeroelastic code SOWFA. Lidar measurements are simulated upstream of a 5 MW turbine rotor and the true wind disturbances are found using a wind speed estimator and turbine outputs. Lidar performance in the absence of an induction zone is determined by simulating lidar measurements and the turbine response using the aeroelastic code FAST with wind inputs taken far upstream of the original turbine location in the SOWFA wind field. Results indicate that while measurement quality strongly depends on the amount of wind evolution, the induction zone has little effect. However, the optimal lidar preview distance and circular scan radius change slightly due to the presence of the induction zone.
NASA Astrophysics Data System (ADS)
Wilson, B.; Paradise, T. R.
2016-12-01
The influx of millions of Syrian refugees into Turkey has rapidly changed the population distribution along the Dead Sea Rift and East Anatolian Fault zones. In contrast to other countries in the Middle East where refugees are accommodated in camp environments, the majority of displaced individuals in Turkey are integrated into cities, towns, and villages—placing stress on urban settings and increasing potential exposure to strong shaking. Yet, displaced populations are not traditionally captured in data sources used in earthquake risk analysis or loss estimations. Accordingly, we present a district-level analysis assessing the spatial overlap of earthquake hazards and refugee locations in southeastern Turkey to determine how migration patterns are altering seismic risk in the region. Using migration estimates from the U.S. Humanitarian Information Unit, we create three district-level population scenarios that combine official population statistics, refugee camp populations, and low, median, and high bounds for integrated refugee populations. We perform probabilistic seismic hazard analysis alongside these population scenarios to map spatial variations in seismic risk between 2011 and late 2015. Our results show a significant relative southward increase of seismic risk for this period due to refugee migration. Additionally, we calculate earthquake fatalities for simulated earthquakes using a semi-empirical loss estimation technique to determine degree of under-estimation resulting from forgoing migration data in loss modeling. We find that including refugee populations increased casualties by 11-12% using median population estimates, and upwards of 20% using high population estimates. These results communicate the ongoing importance of placing environmental hazards in their appropriate regional and temporal context which unites physical, political, cultural, and socio-economic landscapes. 
Keywords: Earthquakes, Hazards, Loss-Estimation, Syrian Crisis, Migration, Refugees
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
ten Brink, Uri S.; Flores, C.H.
2012-01-01
Pull-apart basins are narrow zones of crustal extension bounded by strike-slip faults that can serve as analogs to the early stages of crustal rifting. We use seismic tomography, 2-D ray tracing, gravity modeling, and subsidence analysis to study crustal extension of the Dead Sea basin (DSB), a large and long-lived pull-apart basin along the Dead Sea transform (DST). The basin gradually shallows southward for 50 km from the only significant transverse normal fault. Stratigraphic relationships there indicate basin elongation with time. The basin is deepest (8-8.5 km) and widest (~15 km) under the Lisan about 40 km north of the transverse fault. Farther north, basin depth is ambiguous, but is 3 km deep immediately north of the lake. The underlying pre-basin sedimentary layer thickens gradually from 2 to 3 km under the southern edge of the DSB to 3-4 km under the northern end of the lake and 5-6 km farther north. Crystalline basement is ~11 km deep under the deepest part of the basin. The upper crust under the basin has lower P wave velocity than in the surrounding regions, which is interpreted to reflect elevated pore fluids there. Within data resolution, the lower crust below ~18 km and the Moho are not affected by basin development. The subsidence rate was several hundreds of m/m.y. since the development of the DST ~17 Ma, similar to other basins along the DST, but subsidence rate has accelerated by an order of magnitude during the Pleistocene, which allowed the accumulation of 4 km of sediment. We propose that the rapid subsidence and perhaps elongation of the DSB are due to the development of inter-connected mid-crustal ductile shear zones caused by alteration of feldspar to muscovite in the presence of pore fluids. This alteration resulted in a significant strength decrease and viscous creep. We propose a similar cause for the enigmatic rapid subsidence of the North Sea at the onset of the North Atlantic mantle plume.
Thus, we propose that aqueous fluid flux into a slowly extending continental crust can cause rapid basin subsidence that may be erroneously interpreted as an increased rate of tectonic activity. Copyright 2012 by the American Geophysical Union.
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low-cost consumer-grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors due to instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research targets reducing the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.
Aryeetey, Genevieve Cecilia; Nonvignon, Justice; Amissah, Caroline; Buckle, Gilbert; Aikins, Moses
2016-06-07
In 2004, Ghana began implementation of a National Health Insurance Scheme (NHIS) to minimize out-of-pocket expenditure at the point of use of service. The implementation of the scheme was accompanied by increased access to and use of health care services. Evidence suggests most health facilities are faced with management challenges in the delivery of services. The study aimed to assess the effect of the introduction of the NHIS on health service delivery in mission health facilities in Ghana. We conceptualised the effect of the NHIS on facilities using service delivery indicators such as outpatient and inpatient turnout, estimation of general service readiness, revenue and expenditure, claims processing, and availability of essential medicines. We collected data from 38 mission facilities, grouped into three ecological zones: southern, middle and northern. Structured questionnaires and exit interviews were used to collect data for the periods 2003 and 2010. The data were analysed in SPSS and MS Excel. The facilities displayed high readiness to deliver services. There were significant increases in outpatient and inpatient attendance, revenue, expenditure and access to medicines. Generally, facilities reported increased readiness to deliver services. However, challenges were reported around high rates of non-reimbursement of NHIS claims due to errors in claims processing, lack of feedback regarding errors, and lack of clarity on claims reporting procedures. The implementation of the NHIS saw improvement and expansion of services, resulting in benefits to the facilities as well as constraints. The constraints could be minimized if claims processing were improved at the facility level and delays in reimbursement reduced.
Taboo search algorithm for item assignment in synchronized zone automated order picking system
NASA Astrophysics Data System (ADS)
Wu, Yingying; Wu, Yaohua
2014-07-01
The idle time, which is part of the order fulfillment time, is determined by the number of items in each zone; the item assignment method therefore affects picking efficiency. Previous studies focus only on balancing the number of kinds of items between zones, not the number of items or the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective can be simplified from minimizing idle time to minimizing the idle factor. On this basis, a model of the item assignment problem in a synchronized zone automated order picking system is built. The model is a relaxed form of the parallel machine scheduling problem, which has been proven NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor among zones with the 2-exchange algorithm. Finally, a simulation using data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The results verify the model and show that the algorithm reliably reduces idle time, by 45.63% on average. This research proposes an approach to measure idle time in synchronized zone automated order picking systems; it can improve picking efficiency significantly and can serve as a theoretical basis for optimizing such systems.
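The balancing idea in the abstract, repeatedly moving items out of the most heavily loaded zone to reduce the greatest idle factor, can be sketched as a simplified exchange heuristic. In this sketch the per-item workloads and the use of total zone load as a stand-in for the idle factor are our assumptions, not the paper's exact formulation or its tabu search:

```python
def balance_zones(items, n_zones, iters=1000):
    """Greedy exchange heuristic: move items from the heaviest zone to the
    lightest one whenever doing so narrows the load gap between them.
    Illustrative only; the paper uses a tabu search with a 2-exchange move."""
    # Start with a simple round-robin assignment of items to zones.
    zones = [items[i::n_zones] for i in range(n_zones)]
    for _ in range(iters):
        loads = [sum(z) for z in zones]
        hi = loads.index(max(loads))
        lo = loads.index(min(loads))
        if hi == lo:
            break
        # Pick the single move that most reduces the heaviest-lightest gap.
        best_gap = max(loads) - min(loads)
        best_item = None
        for it in zones[hi]:
            new_gap = abs((loads[hi] - it) - (loads[lo] + it))
            if new_gap < best_gap:
                best_gap, best_item = new_gap, it
        if best_item is None:
            break  # no single move improves: local optimum
        zones[hi].remove(best_item)
        zones[lo].append(best_item)
    return zones
```

A real tabu search would additionally keep a short memory of recent moves to escape such local optima, which is the feature the paper relies on.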
Design principles in telescope development: invariance, innocence, and the costs
NASA Astrophysics Data System (ADS)
Steinbach, Manfred
1997-03-01
Instrument design is, for the most part, a battle against errors and costs. Passive methods of error damping are in many cases effective and inexpensive. This paper shows examples of error minimization in our design of telescopes, instrumentation and evaluation instruments.
A post audit of a model-designed ground water extraction system.
Andersen, Peter F; Lu, Silong
2003-01-01
Model post audits test the predictive capabilities of ground water models and shed light on their practical limitations. In the work presented here, ground water model predictions were used to design an extraction/treatment/injection system at a military ammunition facility and then were re-evaluated using site-specific water-level data collected approximately one year after system startup. The water-level data indicated that performance specifications for the design, i.e., containment, had been achieved over the required area, but that predicted water-level changes were greater than observed, particularly in the deeper zones of the aquifer. Probable model error was investigated by determining the changes that were required to obtain an improved match to observed water-level changes. This analysis suggests that the originally estimated hydraulic properties were in error by a factor of two to five. These errors may have resulted from attributing less importance to data from deeper zones of the aquifer and from applying pumping test results to a volume of material that was larger than the volume affected by the pumping test. To determine the importance of these errors to the predictions of interest, the models were used to simulate the capture zones resulting from the originally estimated and updated parameter values. The study suggests that, despite the model error, the ground water model contributed positively to the design of the remediation system.
Custom map projections for regional groundwater models
Kuniansky, Eve L.
2017-01-01
For regional groundwater flow models (areas greater than 100,000 km2), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by application of a rate using cell area from model discretization) and length (rivers simulated with head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone) without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections are developed for different purposes as all four properties cannot be preserved simultaneously. Preservation of area and length are most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the length north to south by 6 and selecting standard parallels 1/6th above or below the southern and northern extent, preserves both area and length for continental areas in mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. Additionally, one must also use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km2) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of improper map projection is one model construction problem easily avoided.
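The one-sixth rule for custom standard parallels described above is simple arithmetic and can be written directly (the function name is ours; latitudes in decimal degrees):

```python
def albers_standard_parallels(lat_south, lat_north):
    """One-sixth rule for custom Albers equal-area standard parallels:
    place each parallel one sixth of the latitude span inside the
    southern and northern extents of the model area."""
    span = lat_north - lat_south
    return lat_south + span / 6.0, lat_north - span / 6.0
```

For a hypothetical extent from 24N to 36N, the rule places the standard parallels at 26N and 34N.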
Genetically Engineered Autologous Cells for Antiangiogenic Therapy of Breast Cancer
2004-07-01
consisted of a large, fragmented avascular center surrounded by a thin band of vascularized matrix material, itself covered by a capsule of connective tissue...contained dead cells that showed features of coagulation necrosis . The minimal inflammatory response consisted of neutrophils scattered within the...vascularize most likely contributed to the death (coagulation necrosis ) of implanted MSCs localized in the implant core and to the fragmentation of the
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
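The contrast with the traditional LS-SVM can be illustrated with a toy criterion that penalizes both the mean and the variance of the residuals rather than constraining the mean to zero. The quadratic form and the weight `alpha` below are our assumptions for illustration, not the paper's exact objective:

```python
def robust_objective(residuals, alpha=1.0):
    """Toy stand-in for the robust LS-SVM criterion: drive the error mean
    down and its spread down, without forcing the mean to be exactly zero.
    A single large-error sample inflates the variance term, so a fit that
    effectively down-weights it scores better, mirroring the abstract's
    observation about sample weighting."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / n
    return alpha * mean ** 2 + var
```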
NASA Astrophysics Data System (ADS)
Yagasaki, Kazuhiro; Ashi, Juichiro; Yokoyama, Yusuke; Miyairi, Yosuke; Kuramoto, Shin'ichi
2016-04-01
Fault activity around subduction zones has been widely studied and monitored through drilling of oceanic plates, studying piston cores, use of monitoring equipment, or visual analysis using submersible vehicles. Yet the understanding of how small-scale faults near shallow regions of the seabed behave in relation to cold seep vent activity is still vague, especially as to when they were active in the past. In tectonically active margins such as the Nankai and Tokai regions off Japan, dense methane hydrate reservoirs have been identified. Cold seeps releasing methane-rich hydrocarbon fluids are common here, supporting a wide variety of biological species that hold a symbiotic relationship with chemosynthetic bacteria. In 1998 a large dead Calyptogena spp. bivalve colony (over 400 m2 in size) was discovered off Tokai, Japan. It is unusual for a bivalve colony of this size to be mostly dead, raising questions as to what caused the death of the bivalves. In this study we document the radiocarbon (14C) ages of these bivalve shells to analyse possible methane seep behaviour in the past. The measured 14C ages fell into three groups: 1396±36-1448±34, 1912±31-1938±35 and 5975±34. The 14C ages of shells that were alive upon collection and of the dissolved inorganic carbon (DIC) in seawater show little difference (~100 14C yr), indicating that the shells are not heavily affected by the dead carbon effect from cold seeps of biogenic or thermogenic origin, which can make ages considerably older than the actual age. Thus the novel calibration model used was based on the seawater DIC collected above the Calyptogena spp. colony site (1133±31), which placed the dead shells in a cluster around 1900 Cal AD. This is interesting because the predicted epicenter of the 1854 Ansei-Tokai earthquake (M 8.4) is extremely close to the bivalve colony site.
Geological data obtained through visual analysis and sub-seafloor structural analysis, which show multiple shallow faults and chaotic sediment structure below the colony site, suggest that the Calyptogena spp. shells have a strong connection to coseismic faulting activity, and demonstrate the potential for radiocarbon dating to be applied to marine samples provided the necessary calibration tools are available.
TED: A Tolerant Edit Distance for segmentation evaluation.
Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew
2017-02-15
In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.
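The split/merge counting idea behind the TED can be illustrated with a toy 1-D version, where boundary tolerance is implemented by discarding pixels within `tol` of a ground-truth boundary. This is a caricature of the concept, not the authors' weighted edit distance:

```python
from collections import defaultdict

def split_merge_count(gt, pred):
    """Count splits (a ground-truth segment covered by several predicted
    labels) and merges (a predicted label spanning several ground-truth
    segments) between two label sequences."""
    gt2pred = defaultdict(set)
    pred2gt = defaultdict(set)
    for g, p in zip(gt, pred):
        gt2pred[g].add(p)
        pred2gt[p].add(g)
    splits = sum(len(s) - 1 for s in gt2pred.values())
    merges = sum(len(s) - 1 for s in pred2gt.values())
    return splits, merges

def tolerant_split_merge(gt, pred, tol=0):
    """Ignore pixels within `tol` of a ground-truth boundary, so small
    boundary shifts are not counted as errors."""
    n = len(gt)
    keep = [all(gt[j] == gt[i] for j in range(max(0, i - tol), min(n, i + tol + 1)))
            for i in range(n)]
    return split_merge_count([g for g, k in zip(gt, keep) if k],
                             [p for p, k in zip(pred, keep) if k])
```

With a one-pixel boundary shift, the intolerant count reports one split and one merge, while a tolerance of one pixel reports no error at all, which is exactly the behaviour the TED is designed to express.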
Physical Properties of Cometary Nucleus Candidates
NASA Technical Reports Server (NTRS)
Jewitt, David; Hillman, John (Technical Monitor)
2003-01-01
In this proposal we aim to study the physical properties of the Centaurs and the dead comets, these being the precursors to, and the remnants from, the active cometary nuclei. The nuclei themselves are very difficult to study, because of the contaminating effects of near-nucleus coma. Systematic investigation of the nuclei both before they enter the zone of strong sublimation and after they have depleted their near-surface volatiles should neatly bracket the properties of these objects, revealing evolutionary effects.
Analysis on regulation strategies for extending service life of hydropower turbines
NASA Astrophysics Data System (ADS)
Yang, W.; Norrlund, P.; Yang, J.
2016-11-01
In recent years, hydropower turbines have tended to experience fatigue to a greater extent, due to increasingly frequent regulation movements of governor actuators. The aim of this paper is to extend the service life of hydropower turbines by reasonably decreasing guide vane (GV) movements with appropriate regulation strategies, e.g. settings of PI (proportional-integral) governor parameters and controller filters. The accumulated distance and the number of GV movements are the two main indicators of this study. The core method is to simulate the long-term GV opening of Francis turbines with MATLAB/Simulink, based on a sequence of one-month measurements of the Nordic grid frequency. Basic theoretical formulas are also discussed and compared to the simulation results, showing reasonable correspondence. Firstly, a model of a turbine governor is discussed and verified, based on on-site measurements of a Swedish hydropower plant. Then the influence of governor parameters is discussed. Effects of different settings of controller filters (e.g. dead zone, floating dead zone and linear filter) are also examined. Moreover, a change in GV movement might affect the quality of the frequency control; this is monitored via frequency deviation characteristics determined by elementary simulations of the Nordic power system. The results show how the regulation settings affect the GV movements and frequency quality, supplying suggestions for optimizing hydropower turbine operation to decrease wear and tear.
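The two dead-band filters mentioned above respond very differently to noisy grid frequency, which is why they matter for GV wear. A minimal sketch of each (the band widths are illustrative, not the Nordic grid settings):

```python
def dead_zone(dev, width):
    """Fixed dead zone: deviations inside +/-width are ignored;
    only the excess beyond the band is passed to the governor."""
    if abs(dev) <= width:
        return 0.0
    return dev - width if dev > 0 else dev + width

class FloatingDeadZone:
    """Floating dead zone: the band re-centres on the signal, so the
    output only moves when the input drifts outside the band around
    the last output. Small oscillations produce no GV movement."""
    def __init__(self, width):
        self.width = width
        self.out = 0.0

    def step(self, dev):
        if dev > self.out + self.width:
            self.out = dev - self.width
        elif dev < self.out - self.width:
            self.out = dev + self.width
        return self.out
```

The floating variant lets the governor track slow frequency drift while filtering out rapid small oscillations, trading a little frequency quality for far fewer actuator movements.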
NASA Astrophysics Data System (ADS)
McNally, Colin P.; Nelson, Richard P.; Paardekooper, Sijme-Jan
2018-04-01
We examine the migration of low mass planets in laminar protoplanetary discs, threaded by large scale magnetic fields in the dead zone that drive radial gas flows. As shown in Paper I, a dynamical corotation torque arises due to the flow-induced asymmetric distortion of the corotation region and the evolving vortensity contrast between the librating horseshoe material and background disc flow. Using simulations of laminar torqued discs containing migrating planets, we demonstrate the existence of the four distinct migration regimes predicted in Paper I. In two regimes, the migration is approximately locked to the inward or outward radial gas flow, and in the other regimes the planet undergoes outward runaway migration that eventually settles to fast steady migration. In addition, we demonstrate torque and migration reversals induced by midplane magnetic stresses, with a bifurcation dependent on the disc surface density. We develop a model for fast migration, and show why the outward runaway saturates to a steady speed, and examine phenomenologically its termination due to changing local disc conditions. We also develop an analytical model for the corotation torque at late times that includes viscosity, for application to discs that sustain modest turbulence. Finally, we use the simulation results to develop torque prescriptions for inclusion in population synthesis models of planet formation.
NASA Astrophysics Data System (ADS)
McNally, Colin P.; Nelson, Richard P.; Paardekooper, Sijme-Jan
2018-07-01
We examine the migration of low-mass planets in laminar protoplanetary discs, threaded by large-scale magnetic fields in the dead zone that drive radial gas flows. As shown in Paper I, a dynamical corotation torque arises due to the flow-induced asymmetric distortion of the corotation region and the evolving vortensity contrast between the librating horseshoe material and background disc flow. Using simulations of laminar torqued discs containing migrating planets, we demonstrate the existence of the four distinct migration regimes predicted in Paper I. In two regimes, the migration is approximately locked to the inward or outward radial gas flow, and in the other regimes the planet undergoes outward runaway migration that eventually settles to fast steady migration. In addition, we demonstrate torque and migration reversals induced by mid-plane magnetic stresses, with a bifurcation dependent on the disc surface density. We develop a model for fast migration, and show why the outward runaway saturates to a steady speed, and examine phenomenologically its termination due to changing local disc conditions. We also develop an analytical model for the corotation torque at late times that includes viscosity, for application to discs that sustain modest turbulence. Finally, we use the simulation results to develop torque prescriptions for inclusion in population synthesis models of planet formation.
NASA Astrophysics Data System (ADS)
Sekiya, Minoru; Onishi, Isamu K.
2018-06-01
The streaming instability and Kelvin–Helmholtz instability are considered the two major sources of dust particle clumping and turbulence in the dust layer of a protoplanetary disk, as long as we consider the dead zone where the magnetorotational instability does not grow. Extensive numerical simulations have been carried out to elucidate the conditions for the development of particle clumping caused by the streaming instability. In this paper, a set of two parameters suitable for classifying the numerical results is proposed. One is the Stokes number, which has been employed in previous works, and the other is the dust particle column density nondimensionalized using the gas density in the midplane, the Keplerian angular velocity, and the difference between the Keplerian and gaseous orbital velocities. The magnitude of dust clumping is a measure of the behavior of the dust layer. Using three-dimensional numerical simulations of dust particles and gas based on Athena code v. 4.2, it is confirmed that the magnitude of dust clumping for two disk models is similar if the corresponding sets of values of the two parameters are identical, even if the values of the metallicity (i.e., the ratio of the column density of the dust particles to that of the gas) are different.
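For illustration, one dimensionless combination of the quantities named above (dust column density, midplane gas density, Keplerian angular velocity, and Kepler-gas velocity difference) can be formed by dimensional analysis. We stress that this pairing is our reconstruction, not necessarily the paper's exact definition:

```python
def dimensionless_dust_column(sigma_d, rho_g, omega, delta_v):
    """Candidate nondimensionalization of the dust column density:
    (g cm^-2)(s^-1) / ((g cm^-3)(cm s^-1)) is dimensionless.
    This is a dimensional-analysis reconstruction from the quantities
    listed in the abstract, not the paper's verified formula."""
    return sigma_d * omega / (rho_g * delta_v)
```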
Isayeva, A M; Zibaryov, E V
2015-01-01
The article covers data on major errors in the specification of sanitary protection zones for civil airports, revealed through sanitary-epidemiologic examination. The authors focus attention on the need to develop a unified methodological approach to evaluating aviation noise effects when justifying the sanitary protection zone of an airport and examining sanitary-epidemiologic project documents.
NASA Technical Reports Server (NTRS)
Jekeli, C.
1980-01-01
Errors in the outer zone contribution to oceanic undulation differences computed from a finite set of potential coefficients based on satellite measurements of gravity anomalies and gravity disturbances are analyzed. Equations are derived for the truncation errors resulting from the lack of high-degree coefficients and the commission errors arising from errors in the available lower-degree coefficients, and it is assumed that the inner zone (spherical cap) is sufficiently covered by surface gravity measurements in conjunction with altimetry or by gravity anomaly data. Numerical computations of error for various observational conditions reveal undulation difference errors ranging from 13 to 15 cm and from 6 to 36 cm in the cases of gravity anomaly and gravity disturbance data, respectively for a cap radius of 10 deg and mean anomalies accurate to 10 mgal, with a reduction of errors in both cases to less than 10 cm as mean anomaly accuracy is increased to 1 mgal. In the absence of a spherical cap, both cases yield error estimates of 68 cm for an accuracy of 1 mgal and between 93 and 160 cm for the lesser accuracy, which can be reduced to about 110 cm by the introduction of a perfect 30-deg reference field.
Cameron, Katherine; Murray, Alan
2008-05-01
This paper investigates whether spike-timing-dependent plasticity (STDP) can minimize the effect of mismatch within the context of a depth-from-motion algorithm. To improve noise rejection, this algorithm contains a spike prediction element, whose performance is degraded by analog very large scale integration (VLSI) mismatch. The error between the actual spike arrival time and the prediction is used as the input to an STDP circuit, to improve future predictions. Before STDP adaptation, the error reflects the degree of mismatch within the prediction circuitry. After STDP adaptation, the error indicates to what extent the adaptive circuitry can minimize the effect of transistor mismatch. The circuitry is tested with static and varying prediction times and chip results are presented. The effect of noisy spikes is also investigated. Under all conditions the STDP adaptation is shown to improve performance.
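The adaptation loop described above, in which the error between predicted and actual spike arrival times improves future predictions, can be caricatured in software as a delta-rule update. The chip does this in analog VLSI; the learning rate and the update form here are illustrative assumptions:

```python
def adapt(pred_time, actual_time, lr=0.2):
    """STDP-flavoured correction: shift the prediction toward the actual
    spike arrival time by a fraction of the timing error. Before
    adaptation the error reflects transistor mismatch; repeated updates
    cancel the fixed offset."""
    return pred_time + lr * (actual_time - pred_time)

# A fixed mismatch offset between prediction (0.0) and reality (1.0)
# is driven toward zero over repeated spike presentations.
pred = 0.0
for _ in range(50):
    pred = adapt(pred, 1.0)
```

The residual error after convergence is the analogue of the post-adaptation error reported in the chip results.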
47 CFR 25.104 - Preemption of local zoning of earth stations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false Preemption of local zoning of earth stations... SERVICES SATELLITE COMMUNICATIONS General § 25.104 Preemption of local zoning of earth stations. (a) Any... reception by satellite earth station antennas, or imposes more than minimal costs on users of such antennas...
47 CFR 25.104 - Preemption of local zoning of earth stations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 2 2014-10-01 2014-10-01 false Preemption of local zoning of earth stations... SERVICES SATELLITE COMMUNICATIONS General § 25.104 Preemption of local zoning of earth stations. (a) Any... reception by satellite earth station antennas, or imposes more than minimal costs on users of such antennas...
47 CFR 25.104 - Preemption of local zoning of earth stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Preemption of local zoning of earth stations... SERVICES SATELLITE COMMUNICATIONS General § 25.104 Preemption of local zoning of earth stations. (a) Any... reception by satellite earth station antennas, or imposes more than minimal costs on users of such antennas...
47 CFR 25.104 - Preemption of local zoning of earth stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Preemption of local zoning of earth stations... SERVICES SATELLITE COMMUNICATIONS General § 25.104 Preemption of local zoning of earth stations. (a) Any... reception by satellite earth station antennas, or imposes more than minimal costs on users of such antennas...
47 CFR 25.104 - Preemption of local zoning of earth stations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false Preemption of local zoning of earth stations... SERVICES SATELLITE COMMUNICATIONS General § 25.104 Preemption of local zoning of earth stations. (a) Any... reception by satellite earth station antennas, or imposes more than minimal costs on users of such antennas...
NASA Astrophysics Data System (ADS)
Laske, G.; Weber, M.
2008-05-01
The interdisciplinary Dead Sea Rift Transect (DESERT) project that was conducted in Israel, the Palestine Territories and Jordan has provided a rich palette of data sets to examine the crust and uppermost mantle beneath one of Earth's most prominent fault systems, the Dead Sea Transform (DST). As part of the passive seismic component, thirty broad-band sensors were deployed in 2000 across the DST for roughly one year. During this deployment, we recorded 115 teleseismic earthquakes that are suitable for a fundamental mode Rayleigh wave analysis at intermediate periods (35-150s). Our initial analysis reveals overall shear velocities that are reduced by up to 4 per cent with respect to reference Earth model PREM. To the west of the DST, we find a seismically relatively fast but thin lid that is about 80 km thick. Towards the east, shallow seismic velocities are low while a deeper low velocity zone is not detected. This contradicts the currently favoured thermomechanical model for the DST that predicts lithospheric thinning through mechanical erosion by an intruding plume from the Red Sea. On the other hand, our current results are somewhat inconclusive regarding asthenosphere velocities east of the DST due to the band limitation of the recording equipment in Jordan.
Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui
2015-07-27
This paper presents an approach to optimize the electrical performance of a dense-array concentrator photovoltaic system comprising a non-imaging dish concentrator, considering circumsolar radiation and slope error effects. Based on the simulated flux distribution, a systematic methodology is proposed to optimize the layout configuration of the solar cell interconnection circuit in the dense-array concentrator photovoltaic module by minimizing the current mismatch caused by the non-uniformity of concentrated sunlight. An optimized layout of the interconnected solar cell circuit with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error.
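The current-mismatch penalty that drives this layout optimization follows from series strings being limited by their worst-illuminated cell. A simplified model (cell currents in arbitrary units; this ignores bypass diodes and the full I-V curve, so it is an illustration of the effect, not the paper's method):

```python
def string_power_loss(strings):
    """Fractional power lost to current mismatch when cells are wired in
    series strings: each string carries only the current of its weakest
    (least-illuminated) cell."""
    ideal = sum(sum(cells) for cells in strings)            # every cell at its own current
    actual = sum(min(cells) * len(cells) for cells in strings)
    return 1.0 - actual / ideal
```

Grouping cells of similar illumination into the same string removes the mismatch, which is the intuition behind optimizing the interconnection layout against a simulated flux map.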
A simplified satellite navigation system for an autonomous Mars roving vehicle.
NASA Technical Reports Server (NTRS)
Janosko, R. E.; Shen, C. N.
1972-01-01
The use of a retroreflecting satellite and a laser rangefinder to navigate a Martian roving vehicle is considered in this paper. It is shown that a simple system can be employed to perform this task. An error analysis is performed on the navigation equations, and it is shown that the error inherent in the proposed scheme can be minimized by proper choice of measurement geometry. A nonlinear programming approach is used to minimize the navigation error subject to constraints due to geometric and laser requirements. The problem is solved for a particular set of laser parameters and the optimal solution is presented.
Two dimensional numerical analysis of snow avalanche interaction with structures
NASA Astrophysics Data System (ADS)
Bovet, Eloïse; Chiaia, Bernardino; Preziosi, Luigi
2010-05-01
The purpose of this work, within the project "DynAval - Dynamique des avalanches: départ et interactions écoulement/obstacles" (European Territorial Cooperation objective Italy-France (Alps)), is to analyse the interaction between snow avalanches and structures through numerical analysis. The avalanche, treated as an incompressible fluid, is described by the two-dimensional Navier-Stokes equations in the plane of the avalanche slope, coupled with an advection equation that accounts for the variation of the avalanche shape. The model describes the velocity and pressure at every point, quantities that are important for structural design. The simulations are carried out with FEM multiphysics software. Several analyses can be carried out for such a problem. First, by varying the obstacle shape (circle, square, triangle) and its size relative to the avalanche, the drag coefficient Cd is evaluated; the results are compared with the values given by the avalanche-related procedures available in the literature, and the study is repeated for different Froude numbers. Second, the pressure acting on the different parts of the obstacle (upstream, downstream, lateral) is studied: the Cp coefficient is evaluated and compared with wind effects, and integration of the pressure yields the total load exerted by the avalanche on the obstacle. A practical building-design example based on the simulation results is presented. Third, the study focuses on characterizing the two dead zones created upstream and downstream of the obstacle, analysing how they depend on the obstacle characteristics, such as size and shape, and on the avalanche features, such as density and velocity.
The results are compared with literature data on the interaction of snow or granular materials with obstacles. The dead zone is also studied with a two-dimensional model in the avalanche cross-section, so that the jet length created on impact, for instance against a dam, can be measured and compared with the laws proposed in the literature. Fourth, the time evolution of the pressure during impact is investigated, showing a peak in the first time steps of the interaction; the timing and intensity of this maximum are related to the flow and obstacle characteristics. In conclusion, the range of analyses carried out covers several important features that form the starting point for reliable design of structures in avalanche-risk zones; it also shows the capabilities and deficiencies of the proposed model and identifies aspects that should be further studied and validated experimentally.
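The two similarity quantities the study evaluates, the drag coefficient Cd and the Froude number, have standard definitions that can be sketched directly. The numbers in the example are hypothetical, not results from the paper.

```python
def drag_coefficient(force, density, velocity, area):
    """Cd = F / (0.5 * rho * v**2 * A): the total load F integrated over
    the obstacle face, normalized by the dynamic pressure of the flow."""
    return force / (0.5 * density * velocity ** 2 * area)

def froude_number(velocity, gravity, flow_depth):
    """Fr = v / sqrt(g * h), the similarity parameter varied in the study."""
    return velocity / (gravity * flow_depth) ** 0.5

# Hypothetical values: a 300 kg/m^3 flow at 20 m/s exerting a 150 kN
# total load on a 1 m^2 obstacle face.
cd = drag_coefficient(150e3, 300.0, 20.0, 1.0)  # -> 2.5
fr = froude_number(20.0, 9.81, 1.0)             # supercritical flow, Fr > 1
```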
Damage identification on spatial Timoshenko arches by means of genetic algorithms
NASA Astrophysics Data System (ADS)
Greco, A.; D'Urso, D.; Cannizzaro, F.; Pluchino, A.
2018-05-01
In this paper a procedure for the dynamic identification of damage in spatial Timoshenko arches is presented. The proposed approach is based on the calculation of an arbitrary number of exact eigen-properties of a damaged spatial arch by means of the Wittrick and Williams algorithm. The proposed damage model considers a reduction of the volume in a part of the arch, and is therefore suitable, differently than what is commonly proposed in the main part of the dedicated literature, not only for concentrated cracks but also for diffused damaged zones which may involve a loss of mass. Different damage scenarios can be taken into account with variable location, intensity and extension of the damage as well as number of damaged segments. An optimization procedure, aiming at identifying which damage configuration minimizes the difference between its eigen-properties and a set of measured modal quantities for the structure, is implemented making use of genetic algorithms. In this context, an initial random population of chromosomes, representing different damage distributions along the arch, is forced to evolve towards the fittest solution. Several applications with different, single or multiple, damaged zones and boundary conditions confirm the validity and the applicability of the proposed procedure even in presence of instrumental errors on the measured data.
DOT National Transportation Integrated Search
2011-03-01
To minimize the severity of run-off-road collisions of vehicles with trees, departments of transportation (DOTs) commonly establish clear zones for trees and other fixed objects. Caltrans' clear zone on freeways is 30 feet minimum (40 feet pref...
Williams, Camille K.; Tremblay, Luc; Carnahan, Heather
2016-01-01
Researchers in the domain of haptic training are now entering the long-standing debate over whether it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error-minimizing, error-augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance, the speed-accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group's performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance), coupled with the error augmentation group's frequent off-target experience and rich experience of error correction, promoted information processing related to error detection and error correction that is essential for motor learning. PMID:28082937
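The abstract combines movement time and tracing error into a single speed-accuracy cost function but does not give its exact form; the sketch below uses a simple illustrative product (an assumption, lower cost meaning better overall performance) to show why a single scalar is useful when one group is fast but sloppy and another slow but precise.

```python
def speed_accuracy_cost(movement_time_s, tracing_error):
    """Toy cost (lower is better): time multiplied by error, so a gain
    in speed bought entirely with accuracy leaves the cost unchanged."""
    return movement_time_s * tracing_error

fast_sloppy = speed_accuracy_cost(2.0, 8.0)   # 16.0
slow_precise = speed_accuracy_cost(8.0, 2.0)  # 16.0, same overall cost
balanced = speed_accuracy_cost(3.0, 3.0)      # 9.0, best overall
```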
Accommodative Behavior of Young Eyes Wearing Multifocal Contact Lenses.
Altoaimi, Basal H; Almutairi, Meznah S; Kollbaum, Pete S; Bradley, Arthur
2018-05-01
The effectiveness of multifocal contact lenses (MFCLs) at slowing myopia progression may hinge on the accommodative behavior of young eyes fit with these presbyopic style lenses. Can they remove hyperopic defocus? Convergence accommodation as well as pupil size and the zonal geometry are likely to contribute to the final accommodative responses. The aim of this study was to examine the accommodation behavior of young adult eyes wearing MFCLs and the effectiveness of these MFCLs at removing foveal hyperopic defocus when viewing near targets binocularly. Using a high-resolution Shack-Hartmann aberrometer, accommodation and pupil behavior of eight young adults (27.25 ± 2.05 years) were measured while subjects fixated a 20/40 character positioned between 2 m and 20 cm (0.50 to 5.00 diopters [D]) in 0.25-D steps. Refractive states were measured while viewing binocularly and monocularly with single-vision and both center-distance and center-near +2.00 D add MFCLs. Refractive state was defined using three criteria: the dioptric power that would (1) minimize the root mean square wavefront error, (2) focus the pupil center, and (3) provide the peak image quality. Refractive state pupil maps reveal the complex optics that exist in eyes wearing MFCLs. Reduced accommodative gain beyond the far point of the near add revealed that young subjects used the added plus power to help focus near targets. During accommodation to stimuli closer than the far point generated by the add power, a midperipheral region of the pupil was approximately focused, resulting in the smallest accommodative errors for the minimum root mean square-defined measures of refractive state. Paraxial images were always hyperopically or myopically defocused in eyes viewing binocularly with center-distance or center-near MFCLs, respectively. 
Because of zone geometry in the concentric MFCLs tested, the highly aberrated transition zone between the distance and near optics contributed a significant proportion and sometimes the majority of light to the resulting images. Young eyes fit with MFCLs containing significant transition zones accommodated to focus pupil regions between the near and distance optics, which resulted in less than optimal retinal image quality and myopic or hyperopic defocus in either the pupil center or pupil margins.
Selecting a restoration technique to minimize OCR error.
Cannon, M; Fugate, M; Hush, D R; Scovel, C
2003-01-01
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
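The nearest-neighbor idea in the paper can be sketched in a few lines. The feature representation and technique names below are hypothetical; the point is only the mapping from a document's features to the restoration technique that minimized OCR error on the most similar training document.

```python
def nearest_neighbor_restoration(features, training_set):
    """Pick the restoration technique of the closest training document.
    training_set: list of (feature_vector, best_technique) pairs, where
    best_technique minimized OCR error for that training document."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda ex: sq_dist(features, ex[0]))[1]

# Hypothetical two-feature documents (e.g. skew estimate, noise level):
train = [((0.9, 0.1), "deskew"), ((0.2, 0.8), "despeckle")]
choice = nearest_neighbor_restoration((0.85, 0.2), train)  # -> "deskew"
```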
Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike
2016-07-01
High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10^6 copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
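The core idea behind error-aware demultiplexing can be sketched with a distance-based toy (this is not the Error-Aware Demultiplexer pipeline itself, which is probability-based; the barcodes and thresholds below are invented): assign a read to the sample whose index barcode is closest, but refuse assignments that are too distant or ambiguous, so index sequencing errors do not move reads between patients.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length indexes."""
    return sum(x != y for x, y in zip(a, b))

def demultiplex(read_index, barcodes, max_dist=1):
    """Assign a read to the sample with the closest barcode; return None
    when no barcode is close enough or the best match is ambiguous."""
    dists = sorted((hamming(read_index, bc), sample)
                   for sample, bc in barcodes.items())
    best_d, best_sample = dists[0]
    if best_d > max_dist:
        return None                               # too many index errors
    if len(dists) > 1 and dists[1][0] == best_d:
        return None                               # tie between two samples
    return best_sample

samples = {"A": "ACGTAC", "B": "TGCATG"}  # hypothetical index tags
assert demultiplex("ACGTAG", samples) == "A"   # one error tolerated
assert demultiplex("ACCAAG", samples) is None  # unassignable read
```

Rejecting ambiguous or distant reads trades a small data loss for the sample-assignment accuracy that MRD quantification requires.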
NASA Technical Reports Server (NTRS)
Lokos, William A.; Miller, Eric J.; Hudson, Larry D.; Holguin, Andrew C.; Neufeld, David C.; Haraguchi, Ronnie
2015-01-01
This paper describes the design and conduct of the strain-gage load calibration ground test of the SubsoniC Research Aircraft Testbed, Gulfstream III aircraft, and the subsequent data analysis and results. The goal of this effort was to create and validate multi-gage load equations for shear force, bending moment, and torque for two wing measurement stations. For some of the testing the aircraft was supported by three airbags in order to isolate the wing structure from extraneous load inputs through the main landing gear. Thirty-two strain gage bridges were installed on the left wing. Hydraulic loads were applied to the wing lower surface through a total of 16 load zones. Some dead-weight load cases were applied to the upper wing surface using shot bags. Maximum applied loads reached 54,000 lb. Twenty-six load cases were applied with the aircraft resting on its landing gear, and 16 load cases were performed with the aircraft supported by the nose gear and three airbags around the center of gravity. Maximum wing tip deflection reached 17 inches. An assortment of 2, 3, 4, and 5 strain-gage load equations were derived and evaluated against independent check cases. The better load equations had root mean square errors less than 1 percent. Test techniques and lessons learned are discussed.
Zhou, Jian; Lv, Xiaofeng; Mu, Yiming; Wang, Xianling; Li, Jing; Zhang, Xingguang; Wu, Jinxiao; Bao, Yuqian; Jia, Weiping
2012-08-01
The purpose of this multicenter study was to investigate the accuracy of a real-time continuous glucose monitoring sensor in Chinese diabetes patients. In total, 48 patients with type 1 or 2 diabetes from three centers in China were included in the study. The MiniMed Paradigm® 722 insulin pump (Medtronic, Northridge, CA) was used to monitor the real-time continuous changes of blood glucose levels for three successive days. Venous blood of the subjects was randomly collected every 15 min for seven consecutive hours on the day when the subjects were wearing the sensor. Reference values were provided by the YSI® 2300 STAT PLUS™ glucose and lactate analyzer (YSI Life Sciences, Yellow Springs, OH). In total, 1,317 paired YSI-sensor values were collected from the 48 patients. Of the sensor readings, 88.3% (95% confidence interval, 0.84-0.92) were within ±20% of the YSI values, and 95.7% were within ±30% of the YSI values. Clarke and consensus error grid analyses showed that the ratios of the YSI-sensor values in Zone A to the values in Zone B were 99.1% and 99.9%, respectively. Continuous error grid analysis showed that the ratios of the YSI-sensor values in the regions of accurate reading, benign errors, and erroneous reading were 96.4%, 1.8%, and 1.8%, respectively. The mean absolute relative difference (ARD) for all subjects was 10.4%, and the median ARD was 7.8%. Bland-Altman analysis showed a mean difference in blood glucose level of 3.84 mg/dL. Trend analysis revealed that 86.1% of the differences in the rates of change between the YSI values and the sensor readings occurred within the range of 1 mg/dL/min. The Paradigm insulin pump has high accuracy in both monitoring the real-time continuous changes and predicting the trend of changes in blood glucose level. However, actual clinical manifestations should be taken into account for diagnosis of hypoglycemia.
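The headline accuracy metric in this study, the absolute relative difference (ARD) of each sensor reading against its YSI reference, has a standard definition that is easy to sketch. The paired values below are hypothetical, not data from the study.

```python
import statistics

def ard(sensor, reference):
    """Absolute relative difference of one paired reading, in percent
    of the reference value."""
    return abs(sensor - reference) / reference * 100.0

def summarize(pairs):
    """Mean and median ARD over paired (sensor, reference) readings."""
    ards = [ard(s, r) for s, r in pairs]
    return statistics.mean(ards), statistics.median(ards)

# Hypothetical mg/dL pairs; the individual ARDs are 10%, 5% and 2.5%.
pairs = [(110, 100), (95, 100), (205, 200)]
mean_ard, median_ard = summarize(pairs)
```

The median ARD sits below the mean when a few large discrepancies skew the distribution, which is why the study reports both.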
Dead simple OWL design patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osumi-Sutherland, David; Courtot, Melanie; Balhoff, James P.
Bio-ontologies typically require multiple axes of classification to support the needs of their users. Development of such ontologies can only be made scalable and sustainable by the use of inference to automate classification via consistent patterns of axiomatization. Many bio-ontologies originating in OBO or OWL follow this approach. These patterns need to be documented in a form that requires minimal expertise to understand and edit and that can be validated and applied using any of the various programmatic approaches to working with OWL ontologies. We describe a system, Dead Simple OWL Design Patterns (DOS-DPs), which fulfills these requirements, illustrating the system with examples from the Gene Ontology. In conclusion, the rapid adoption of DOS-DPs by multiple ontology development projects illustrates both the ease of use and the pressing need for the simple design pattern system we have developed.
Dead simple OWL design patterns
Osumi-Sutherland, David; Courtot, Melanie; Balhoff, James P.; ...
2017-06-05
Bio-ontologies typically require multiple axes of classification to support the needs of their users. Development of such ontologies can only be made scalable and sustainable by the use of inference to automate classification via consistent patterns of axiomatization. Many bio-ontologies originating in OBO or OWL follow this approach. These patterns need to be documented in a form that requires minimal expertise to understand and edit and that can be validated and applied using any of the various programmatic approaches to working with OWL ontologies. We describe a system, Dead Simple OWL Design Patterns (DOS-DPs), which fulfills these requirements, illustrating the system with examples from the Gene Ontology. In conclusion, the rapid adoption of DOS-DPs by multiple ontology development projects illustrates both the ease of use and the pressing need for the simple design pattern system we have developed.
Simulating the fate of water in field soil crop environment
NASA Astrophysics Data System (ADS)
Cameira, M. R.; Fernando, R. M.; Ahuja, L.; Pereira, L.
2005-12-01
This paper presents an evaluation of the Root Zone Water Quality Model (RZWQM) for assessing the fate of water in the soil-crop environment at the field scale under the particular conditions of a Mediterranean region. The RZWQM is a one-dimensional dual-porosity model that allows flow in macropores. It integrates the physical, biological and chemical processes occurring in the root zone, allowing the simulation of a wide spectrum of agricultural management practices. This study involved the evaluation of the soil, hydrologic and crop development sub-models within the RZWQM for two distinct agricultural systems, one consisting of grain corn planted in a silty loam soil, irrigated by level basins, and the other of forage corn planted in a sandy soil, irrigated by sprinklers. Evaluation was performed at two distinct levels. At the first level the model's capability to fit the measured data was analyzed (calibration). At the second level the model's capability to extrapolate and predict the system behavior for conditions different from those used when fitting the model was assessed (validation). In a subsequent paper the same type of evaluation is presented for the nitrogen transformation and transport model. At the first level a change in the crop evapotranspiration (ETc) formulation was introduced, based upon the definition of the effective leaf area, resulting in a 51% decrease in the root mean square error of the ETc simulations. As a result the simulation of root water uptake was greatly improved. A new bottom boundary condition was implemented to account for the presence of a shallow water table. This improved the simulation of the water table depths and consequently of the soil water evolution within the root zone. The soil hydraulic parameters and the crop variety-specific parameters were calibrated in order to minimize the simulation errors of soil water and crop development.
At the second level crop yield was predicted with an error of 1.1 and 2.8% for grain and forage corn, respectively. Soil water was predicted with an efficiency ranging from 50 to 95% for the silty loam soil and between 56 and 72% for the sandy soil. The proposed calibration procedure allowed the model to predict crop development, yield and the water balance terms with accuracy that is acceptable in practical applications for complex and spatially variable field conditions. An iterative method was required to account for the strong interaction between the different model components, based upon detailed experimental data on soils and crops.
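The abstract reports prediction "efficiency" without naming the metric; a common choice for this in hydrological model evaluation is the Nash-Sutcliffe model efficiency, sketched here as an assumption alongside the root mean square error used during calibration. The soil-water values are hypothetical.

```python
def rmse(simulated, observed):
    """Root mean square error, the quantity minimized in calibration."""
    n = len(observed)
    return (sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n) ** 0.5

def nash_sutcliffe(simulated, observed):
    """Model efficiency: 1 is a perfect fit, 0 is no better than
    predicting the observed mean, negative is worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# Hypothetical volumetric soil water contents (m^3/m^3):
obs = [0.20, 0.25, 0.30, 0.28]
sim = [0.21, 0.24, 0.29, 0.29]
eff = nash_sutcliffe(sim, obs)  # close to 1 for a good fit
```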
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of failures attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably brings schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.
Restoration of Emergent Sandbar Habitat Complexes in the Missouri River, Nebraska and South Dakota
2013-04-01
scouring effect on Missouri River sandbars additional encroachment has occurred. The U.S. Fish and Wildlife Service (FWS) recognized that declines in...data, EPA has determined that the effects of glyphosate on birds, mammals, fish and invertebrates are minimal. Under certain use conditions...rinsate. Treatment of aquatic weeds can result in oxygen loss from decomposition of dead plants. This loss can cause fish kills. • Worker
Balachandran, Ramya; Labadie, Robert F.
2015-01-01
Purpose A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. Methods An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. Results The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. Conclusion The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure. PMID:26183149
Dillon, Neal P; Balachandran, Ramya; Labadie, Robert F
2016-03-01
A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure.
Minimizing driver errors: examining factors leading to failed target tracking and detection.
DOT National Transportation Integrated Search
2013-06-01
Driving a motor vehicle is a common practice for many individuals. Although driving becomes : repetitive and a very habitual task, errors can occur that lead to accidents. One factor that can be a : cause for such errors is a lapse in attention or a ...
Human Factors Evaluation of Conflict Detection Tool for Terminal Area
NASA Technical Reports Server (NTRS)
Verma, Savita Arora; Tang, Huabin; Ballinger, Deborah; Chinn, Fay Cherie; Kozon, Thomas E.
2013-01-01
A conflict detection and resolution tool, Terminal-area Tactical Separation-Assured Flight Environment (T-TSAFE), is being developed to improve the timeliness and accuracy of alerts and reduce the false alert rate observed with the currently deployed technology. The legacy system in use today, Conflict Alert, relies primarily on a dead-reckoning algorithm, whereas T-TSAFE uses intent information to augment dead reckoning. In previous experiments, T-TSAFE was found to reduce the rate of false alerts and increase the time between the alert to the controller and a loss of separation relative to the legacy system. In the present study, T-TSAFE was tested under two meteorological conditions: 1) all aircraft operated under instrument flight rules, and 2) some aircraft operated under mixed operating conditions. The tool was used to visually alert controllers to predicted losses of separation throughout the terminal airspace and to show compression errors on final approach. The performance of T-TSAFE on final approach was compared with Automated Terminal Proximity Alert (ATPA), a tool recently deployed by the FAA. Results show that controllers did not report differences in workload or situational awareness between the T-TSAFE and ATPA cones but did prefer T-TSAFE features over ATPA functionality. T-TSAFE will provide one tool that shows alerts in the data blocks and compression errors via cones on the final approach, implementing all tactical conflict detection and alerting via one tool in TRACON airspace.
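The dead-reckoning prediction that both Conflict Alert and T-TSAFE start from can be sketched minimally: extrapolate each track along its current velocity and flag a predicted loss of separation. The thresholds, look-ahead horizon, and 2-D geometry below are illustrative assumptions, not the actual TRACON separation criteria, and the sketch omits the intent information that distinguishes T-TSAFE.

```python
import math

def predict(pos, vel, t):
    """Straight-line (dead-reckoning) extrapolation of an (x, y) track."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def conflict(track_a, track_b, horizon_s=60, step_s=5, min_sep_nm=3.0):
    """Return the first look-ahead time (s) at which the predicted
    separation drops below the threshold, or None if no conflict."""
    for t in range(0, horizon_s + 1, step_s):
        ax, ay = predict(track_a[0], track_a[1], t)
        bx, by = predict(track_b[0], track_b[1], t)
        if math.hypot(ax - bx, ay - by) < min_sep_nm:
            return t
    return None

# Two head-on tracks 10 nm apart, closing at 0.2 nm/s: an alert fires
# well before the tracks meet. Parallel same-speed tracks never alert.
t = conflict(((0, 0), (0.1, 0)), ((10, 0), (-0.1, 0)))
```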
Oil shale retorting and combustion system
Pitrolo, Augustine A.; Mei, Joseph S.; Shang, Jerry Y.
1983-01-01
The present invention is directed to the extraction of energy values from oil shale containing considerable concentrations of calcium carbonate in an efficient manner. The volatiles are separated from the oil shale in a retorting zone of a fluidized bed where the temperature and the concentration of oxygen are maintained at sufficiently low levels so that the volatiles are extracted from the oil shale with minimal combustion of the volatiles and with minimal calcination of the calcium carbonate. These gaseous volatiles and the calcium carbonate flow from the retorting zone into a freeboard combustion zone where the volatiles are burned in the presence of excess air. In this zone the calcination of the calcium carbonate occurs, but at the expense of fewer BTUs than the calcination reaction would require if the retorting and combustion steps took place simultaneously. The heat values in the products of combustion are satisfactorily recovered in a suitable heat exchange system.
2016-05-11
the phases of the system load and ground, so to size the voltage divider appropriately Vsys is set equal to the maximum phase-to-ground voltage. The...civilian and military systems is increasing due to technological improvements in power conversion and changing requirements in system loads. The development...of high-power pulsed loads on naval platforms, such as the Laser Weapon System (LaWS) and the electromagnetic railgun, calls for the ability to
Research on Advanced NDE Methods for Aerospace Structures
1989-09-01
Karpur, M.J. Ruddell, J.A. Fox, E.L. Klosterman and M.L. Pa...
...spaced planes in advanced composite materials. That technique was used to simultaneously generate C-scan images of: (1) material defects in the "dead zones
Verification Test of the SURF and SURFplus Models in xRage: Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
2016-06-20
The previous study used an underdriven detonation wave (steady ZND reaction zone profile followed by a scale-invariant rarefaction wave) for PBX 9502 as a validation test of the implementation of the SURF and SURFplus models in the xRage code. Even with a fairly fine uniform mesh (12,800 cells for 100 mm) the detonation wave profile had limited resolution due to the thin reaction zone width (0.18 mm) for the fast SURF burn rate. Here we study the effect of finer resolution by comparing results of simulations with cell sizes of 8, 2 and 1 μm, which corresponds to 25, 100 and 200 points within the reaction zone. With finer resolution the lead shock pressure is closer to the von Neumann spike pressure, and there is less noise in the rarefaction wave due to fluctuations within the reaction zone. As a result the average error decreases. The pointwise error is still dominated by the smearing of the pressure kink in the vicinity of the sonic point, which occurs at the end of the reaction zone.
Computation of Acoustic Waves Through Sliding-Zone Interfaces Using an Euler/Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
1996-01-01
The effect of a patched sliding-zone interface on the transmission of acoustic waves is examined for two- and three-dimensional model problems. A simple but general interpolation scheme at the patched boundary passes acoustic waves without distortion, provided that a sufficiently small time step is taken. A guideline is provided for the maximum permissible time step or zone speed that gives an acceptable error introduced by the sliding-zone interface.
Soil, water, and vegetation conditions in south Texas. [Hildago County, Texas
NASA Technical Reports Server (NTRS)
Wiegand, C. L.; Gausman, H. W.; Leamer, R. W.; Richardson, A. J. (Principal Investigator)
1975-01-01
The author has identified the following significant results. To distinguish dead from live vegetation, spectrophotometrically measured infinite reflectances of dead and live corn (Zea mays L.) leaves were compared over the 0.5 to 2.5 micron waveband. Dead leaf infinite reflectance was reached over the entire 0.5 to 2.5 micron waveband by stacking only two to three leaves. Live leaf infinite reflectance was attained by stacking two leaves for the 0.5 to 0.75 micron waveband (chlorophyll absorption region), eight leaves for the 0.75 to 1.35 micron waveband (near-infrared region), and three leaves for the 1.35 to 2.5 micron waveband (water absorption region). LANDSAT-1 MSS digital data for the 11 December 1973 overpass were used to estimate the sugar cane acreage in Hidalgo County. The computer-aided estimate was 22,100 acres, compared with the Texas Crop and Livestock Reporting Service estimate of 20,500 acres for the 1973-'74 crop year. Although there were errors of omission from harvested fields that were identified as bare soil, and some citrus and native vegetation was mistakenly identified as sugar cane, the mapped location of sugar cane fields in the county compared favorably with their location on the computer-generated thematic map.
"Zones of Tolerance" in Perceptions of Library Service Quality: A LibQUAL+[TM] Study.
ERIC Educational Resources Information Center
Cook, Colleen; Heath, Fred M.; Thompson, Bruce
2003-01-01
One of the two major ways of interpreting LibQUAL+[TM] data involves placing perceived service quality ratings within "zones of tolerance" defined as the distances between minimally-acceptable and desired service quality levels. This study compared zones of tolerance on the 25 LibQUAL+[TM] items across undergraduate, graduate student and…
SMAP Level 4 Surface and Root Zone Soil Moisture
NASA Technical Reports Server (NTRS)
Reichle, R.; De Lannoy, G.; Liu, Q.; Ardizzone, J.; Kimball, J.; Koster, R.
2017-01-01
The SMAP Level 4 soil moisture (L4_SM) product provides global estimates of surface and root zone soil moisture, along with other land surface variables and their error estimates. These estimates are obtained through assimilation of SMAP brightness temperature observations into the Goddard Earth Observing System (GEOS-5) land surface model. The L4_SM product is provided at 9 km spatial and 3-hourly temporal resolution and with about 2.5 day latency. The soil moisture and temperature estimates in the L4_SM product are validated against in situ observations. The L4_SM product meets the required target uncertainty of 0.04 m^3 m^-3, measured in terms of unbiased root-mean-square error, for both surface and root zone soil moisture.
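The unbiased RMSE used as the L4_SM accuracy requirement removes any systematic bias before averaging the squared errors. A minimal sketch of the metric (the data values are illustrative, not SMAP observations):

```python
import numpy as np

def ubrmse(estimate, truth):
    """Unbiased root-mean-square error: RMSE after removing the mean bias."""
    e = np.asarray(estimate) - np.asarray(truth)
    return float(np.sqrt(np.mean((e - e.mean()) ** 2)))

# A constant offset does not count against ubRMSE:
truth = np.array([0.20, 0.25, 0.30, 0.35])   # "true" soil moisture, m^3 m^-3
biased = truth + 0.05                        # estimate with a +0.05 systematic bias
print(round(ubrmse(biased, truth), 12))      # 0.0 (the bias is removed)
```

This is why ubRMSE is preferred for validating soil moisture retrievals: a retrieval with a known wet or dry bias can still meet the 0.04 m^3 m^-3 target if its anomalies track the in situ record.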
A field technique for estimating aquifer parameters using flow log data
Paillet, Frederick L.
2000-01-01
A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions, such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.
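For a single producing zone, the two-condition inversion described above has a closed-form solution if inflow is taken as proportional to the head difference between the zone and the open borehole. A hypothetical sketch with synthetic numbers (the paper's actual model couples multiple zones along the borehole and is more involved):

```python
def invert_zone(q_ambient, q_pumped, hw_ambient, hw_pumped):
    """Solve Q = a * (h_zone - h_well) under two flow conditions for one zone.

    a is proportional to zone transmissivity (units depend on the flow model);
    h_zone is the zone's hydraulic head. Both are recovered from the change in
    inflow between ambient and pumped conditions.
    """
    a = (q_pumped - q_ambient) / (hw_ambient - hw_pumped)  # drawdown raises inflow
    h_zone = hw_ambient + q_ambient / a
    return a, h_zone

# Synthetic check: a zone with a = 2.0 and hydraulic head 10.0 m
a_true, h_true = 2.0, 10.0
hw1, hw2 = 9.5, 8.0                 # ambient and pumped open-borehole water levels
q1 = a_true * (h_true - hw1)        # inflow under ambient conditions: 1.0
q2 = a_true * (h_true - hw2)        # inflow under pumping: 4.0
print(invert_zone(q1, q2, hw1, hw2))  # (2.0, 10.0)
```

The sketch also illustrates the paper's caution: a single flow condition (q1 alone) cannot separate a from h_zone, so inflow under one condition need not indicate relative transmissivity.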
NASA Technical Reports Server (NTRS)
Russell, P. L.; Beal, G. W.; Sederquist, R. A.; Shultz, D.
1981-01-01
Rich-lean combustor concepts designed to enhance rich combustion chemistry and increase combustor flexibility for NO(x) reduction with minimally processed fuels are examined. Processes such as rich product recirculation in the rich chamber, rich-lean annihilation, and graduated air addition or staged rich combustion to release bound nitrogen in steps of reduced equivalence ratio are discussed. Variations to the baseline rapid quench section are considered, and the effect of residence time in the rich zone is investigated. The feasibility of using uncooled non-metallic materials for rich zone combustor construction is also addressed. The preliminary results indicate that rich primary zone staged combustion provides environmentally acceptable operation with residual and/or synthetic coal-derived liquid fuels.
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Roohipoor, Ramak; Karkhaneh, Reza; Riazi Esfahani, Mohammad; Alipour, Fateme; Haghighat, Mahtab; Ebrahimiadib, Nazanin; Zarei, Mohammad; Mehrdad, Ramin
2016-01-01
To compare refractive error changes in retinopathy of prematurity (ROP) patients treated with diode and red lasers. A randomized double-masked clinical trial was performed, and infants with threshold or prethreshold type 1 ROP were assigned to red or diode laser groups. Gestational age, birth weight, pretreatment cycloplegic refraction, time of treatment, disease stage, zone and disease severity were recorded. Patients received either red or diode laser treatment and were regularly followed up for retina assessment and refraction. The information at month 12 of corrected age was considered for comparison. One hundred and fifty eyes of 75 infants were enrolled in the study. Seventy-four eyes received diode and 76 red laser therapy. The mean gestational age and birth weight of the infants were 28.6 ± 3.2 weeks and 1,441 ± 491 g, respectively. The mean baseline refractive error was +2.3 ± 1.7 dpt. Posttreatment refraction showed a significant myopic shift (mean 2.6 ± 2.0 dpt) with significant difference between the two groups (p < 0.001). There was a greater myopic shift among children with zone I and diode laser treatment (mean 6.00 dpt) and a lesser shift among children with zone II and red laser treatment (mean 1.12 dpt). The linear regression model, using the generalized estimating equation method, showed that the type of laser used has a significant effect on myopic shift even after adjustment for other variables. Myopic shift in laser-treated ROP patients is related to the type of laser used and the involved zone. Red laser seems to cause less myopic shift than diode laser, and those with zone I involvement have a greater myopic shift than those with ROP in zone II. © 2016 S. Karger AG, Basel.
Assessing the impact of Syrian refugees on earthquake fatality estimations in southeast Turkey
NASA Astrophysics Data System (ADS)
Wilson, Bradley; Paradise, Thomas
2018-01-01
The influx of millions of Syrian refugees into Turkey has rapidly changed the population distribution along the Dead Sea Rift and East Anatolian fault zones. In contrast to other countries in the Middle East where refugees are accommodated in camp environments, the majority of displaced individuals in Turkey are integrated into local cities, towns, and villages - placing stress on urban settings and increasing potential exposure to strong earthquake shaking. Yet displaced populations are often unaccounted for in the census-based population models used in earthquake fatality estimations. This study creates a minimally modeled refugee gridded population model and analyzes its impact on semi-empirical fatality estimations across southeast Turkey. Daytime and nighttime fatality estimates were produced for five fault segments at earthquake magnitudes 5.8, 6.4, and 7.0. Baseline fatality estimates calculated from census-based population estimates for the study area varied in scale from tens to thousands of fatalities, with higher death totals in nighttime scenarios. Refugee fatality estimations were analyzed across 500 semi-random building occupancy distributions. Median fatality estimates for refugee populations added non-negligible contributions to earthquake fatalities at four of five fault locations, increasing total fatality estimates by 7-27 %. These findings communicate the necessity of incorporating refugee statistics into earthquake fatality estimations in southeast Turkey and the ongoing importance of placing environmental hazards in their appropriate regional and temporal context.
Medical Errors Reduction Initiative
2005-05-01
Award Number: W81XWH-04-1-0536. Principal Investigator: Michael L. Mutter. Contracting Organization: The Valley Hospital, Ridgewood, NJ 07450. Report Date: May 2005. Type of Report: Annual. Prepared for: U.S. Army Medical Research and Materiel Command. Subject Terms: Medical Error, Patient Safety, Personal Data Terminal, Barcodes. Abstract fragment: "...working with great success to minimize error."
Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22
From a series of independent tests designed to minimize error, it was concluded that the AN/TMQ-22 yielded a more accurate dew point reading than the ML-224 Sling Psychrometer. The average relative humidity error using the sling psychrometer was +9%, while the AN/TMQ-22 had a plus or minus 2% error. Even with cautious measurement the sling yielded a +4% error.
Highway work zone capacity estimation using field data from Kansas.
DOT National Transportation Integrated Search
2015-02-01
Although extensive research has been conducted on urban freeway capacity estimation methods, minimal research has been carried out for rural highway sections, especially sections within work zones. This study attempted to fill that void for rural ...
Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters
Park, Chan Gook
2018-01-01
An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
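The ZUPT idea underlying the second-stage INS-EKF-ZUPT filter can be illustrated without the full Kalman machinery: detect the stance phase from the inertial signals and suppress velocity error growth while the foot is on the ground. A simplified sketch with made-up thresholds, where a hard velocity reset stands in for the zero-velocity measurement update (and the accelerometer output is assumed to be already in the navigation frame, to keep the sketch short):

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def is_stance(accel, gyro, acc_tol=0.4, gyro_tol=0.2):
    """Crude stance detector: specific force near gravity, low angular rate."""
    return (abs(np.linalg.norm(accel) - G) < acc_tol
            and np.linalg.norm(gyro) < gyro_tol)

def integrate(samples, dt=0.01):
    """Dead-reckon velocity from accel samples, resetting it during stance.

    A full INS-EKF-ZUPT feeds a zero-velocity *measurement* to a Kalman
    filter; this sketch hard-resets velocity to show where ZUPT acts.
    """
    v = np.zeros(3)
    for accel, gyro in samples:
        v = v + (np.asarray(accel) - np.array([0.0, 0.0, G])) * dt
        if is_stance(accel, gyro):
            v = np.zeros(3)   # ZUPT: velocity error cannot grow while standing
    return v

# A foot at rest with a small accelerometer bias: the drift is cancelled by ZUPT
biased_rest = [([0.1, 0.0, 9.81], [0.0, 0.0, 0.0])] * 100
print(integrate(biased_rest))  # [0. 0. 0.]
```

Without the stance resets, the same 0.1 m/s^2 bias would integrate into a 0.1 m/s velocity error over this one-second window, which is exactly the error growth the cascaded filters are designed to bound.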
NASA Astrophysics Data System (ADS)
Sulaiman, M.; El-Shafie, A.; Karim, O.; Basri, H.
2011-10-01
Flood forecasting models are a necessity, as they help in planning for flood events and thus help prevent loss of lives and minimize damage. At present, artificial neural networks (ANN) have been successfully applied in river flow and water level forecasting studies. An ANN requires historical data to develop a forecasting model. However, long-term historical water level data, such as hourly data, pose two crucial problems for training. First, the high volume of data slows the computation process. Second, training reaches its optimal performance within a few cycles because normal water level records dominate the training set, while forecasting performance for high water level events remains poor. In this study, the zoning matching approach (ZMA) is used in ANN to accurately monitor flood events in real time by focusing the development of the forecasting model on high water level zones. ZMA is a trial-and-error approach in which several training datasets built from high water level data are tested to find the best training dataset for forecasting high water level events. The advantage of ZMA is that relevant knowledge of water level patterns in historical records is used. Importantly, the forecasting model developed based on ZMA achieves highly accurate forecasts at 1 to 3 h ahead and satisfactory performance at 6 h. Seven performance measures are adopted in this study to describe the accuracy and reliability of the forecasting model developed.
NASA Astrophysics Data System (ADS)
Abbou, S.; Dillet, J.; Maranzana, G.; Didierjean, S.; Lottin, O.
2017-02-01
Proton exchange membrane (PEM) fuel cells can operate with a dead-ended anode in order to reduce system cost and complexity compared with hydrogen re-circulation systems. In the first part of this work, we showed that localized fuel starvation events may occur because of water and nitrogen accumulation on the anode side, which can be particularly damaging to cell performance. To prevent these degradations, the anode compartment must be purged, which may lower overall system efficiency because of significant hydrogen waste. In the second part, we present several purge strategies intended to minimize both hydrogen waste and membrane-electrode assembly (MEA) degradation during dead-ended anode operation. A linear segmented cell with reference electrodes was used to monitor simultaneously the current density distribution along the gas channel and the time evolution of local anode and cathode potentials. To assess MEA damage, the platinum electrochemical surface area (ECSA) and cell performance were periodically measured. The results showed that dead-end operation with the anode plate maintained at a temperature 5 °C hotter than the cathode plate limits water accumulation on the anode side, significantly reducing purge frequency (and thus hydrogen losses) as well as MEA damage. As the nitrogen contribution to hydrogen starvation is predominant in this thermal configuration, we also tested a microleakage solution to continuously discharge most of the nitrogen accumulating on the anode side while ensuring low hydrogen losses and minimal ECSA losses, provided the right microleakage flow rate is chosen.
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature, as well as, eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay have been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
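The way these delay parameters combine into a pulse positioning error is simple kinematics: the eye keeps rotating during the total tracker-to-laser delay. A back-of-the-envelope sketch with illustrative numbers (the conversion factor from eye rotation to corneal-plane displacement is an assumption for illustration, not a value from the paper):

```python
def positioning_error_mm(eye_speed_deg_s, delay_s, mm_per_deg=0.3):
    """Worst-case pulse decentration: eye rotation accumulated over the delay,
    converted to millimetres at the corneal plane.

    mm_per_deg is an assumed conversion factor; peak saccadic speeds of
    several hundred deg/s are typical.
    """
    return eye_speed_deg_s * delay_s * mm_per_deg

# Total delay = tracker latency + one acquisition period (1/rate) + scanner move time
latency, rate_hz, scanner_s = 0.005, 200.0, 0.002
delay = latency + 1.0 / rate_hz + scanner_s          # 0.012 s
print(positioning_error_mm(500.0, delay))            # 1.8 (mm, for a 500 deg/s saccade)
```

This makes the paper's conclusion concrete: no single parameter dominates, because latency, acquisition period, and scanner time all enter the same sum, and only reducing the total delay shrinks the error.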
Schmidt, Carl R; Shires, Peter; Mootoo, Mary
2012-02-01
Irreversible electroporation (IRE) is a largely non-thermal method for the ablation of solid tumours. The ability of ultrasound (US) to measure the size of the IRE ablation zone was studied in a porcine liver model. Three normal pig livers were treated in vivo with a total of 22 ablations using IRE. Ultrasound was used within minutes after ablation and just prior to liver harvest at either 6 h or 24 h after the procedure. The area of cellular necrosis was measured after staining with nitroblue tetrazolium and the percentage of cell death determined by histomorphometry. Visible changes in the hepatic parenchyma were apparent by US after all 22 ablations using IRE. The mean maximum diameter of the ablation zone measured by US during the procedure was 20.1 ± 2.7 mm. This compared with a mean cellular necrosis zone maximum diameter of 20.3 ± 2.9 mm as measured histologically. The mean percentage of dead cells within the ablation zone was 77% at 6 h and 98% at 24 h after ablation. Ultrasound is a useful modality for measuring the ablation zone within minutes of applying IRE to normal liver tissue. The area of parenchymal change measured by US correlates with the area of cellular necrosis. © 2011 International Hepato-Pancreato-Biliary Association.
NASA Astrophysics Data System (ADS)
Sampson, Danuta M.; Gong, Peijun; An, Di; Menghini, Moreno; Hansen, Alex; Mackey, David A.; Sampson, David D.; Chen, Fred K.
2017-04-01
We examined the impact of axial length on superficial retinal vessel density (SRVD) and foveal avascular zone area (FAZA) measurement using optical coherence tomography angiography. The SRVD and FAZA were quantified before and after correction for the magnification error associated with axial length variation. Although SRVD did not differ before and after correction for magnification error in the parafoveal region, changes in foveal SRVD and FAZA were significant. This has implications for clinical trial outcomes in diseased eyes where significant capillary dropout may occur in the parafovea.
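A common way to correct such magnification error in fundus imaging is Bennett's approximation, in which linear dimensions scale with the ratio of ocular magnification factors for the subject's eye versus the instrument's assumed eye. A sketch assuming a built-in default axial length of 23.82 mm (an assumption about the device, and not necessarily the exact correction used in the paper):

```python
def bennett_q(axial_length_mm):
    """Bennett's approximation of the ocular magnification factor."""
    return 0.01306 * (axial_length_mm - 1.82)

def correct_faza(measured_area_mm2, axial_length_mm, device_al_mm=23.82):
    """Rescale a measured FAZ area for the subject's axial length.

    Linear dimensions scale by q(subject) / q(device default), so areas scale
    by the square of that ratio. The 23.82 mm default is an assumed
    instrument setting, used here only for illustration.
    """
    scale = bennett_q(axial_length_mm) / bennett_q(device_al_mm)
    return measured_area_mm2 * scale ** 2

# A long (myopic) eye: the uncorrected area understates the true FAZ area
print(round(correct_faza(0.30, 26.0), 4))  # 0.3624 (mm^2, from a measured 0.30)
```

The foveal sensitivity reported above follows from this scaling: the FAZ is small, so a few percent of linear magnification error moves its measured area appreciably, whereas parafoveal vessel density is a ratio and is less affected.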
Intrinsic errors in transporting a single-spin qubit through a double quantum dot
NASA Astrophysics Data System (ADS)
Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.
2017-07-01
Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunneling, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.
Explosives remain preferred methods for platform abandonment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulsipher, A.; Daniel, W. IV; Kiesler, J.E.
1996-05-06
Economics and safety concerns indicate that methods involving explosives remain the most practical and cost-effective means for abandoning oil and gas structures in the Gulf of Mexico. A decade has passed since 51 dead sea turtles, many endangered Kemp's ridleys, washed ashore on the Texas coast shortly after explosives helped remove several offshore platforms. Although no relationship between the explosions and the dead turtles was ever established, in response to widespread public concern the US Minerals Management Service (MMS) and National Marine Fisheries Service (NMFS) implemented regulations limiting the size and timing of explosive charges. Also, more importantly, they required that operators pay for observers to survey the waters surrounding platforms scheduled for removal for 48 hr before any detonations. If observers spot sea turtles or marine mammals within the danger zone, the platform abandonment is delayed until the turtles leave or are removed. However, concern about the effects of explosives on marine life remains.
NASA Astrophysics Data System (ADS)
Negri, Mauro Pietro; Sanfilippo, Rossana; Basso, Daniela; Rosso, Antonietta
2015-12-01
Dead and live molluscan assemblages from the coastal area of Phetchaburi (NW Gulf of Thailand) were compared by means of multivariate analysis. Seven thanatofacies were recognized, reflecting assemblages that thrived in the area after the 1960s. Five of them, scattered along the tidal flat, represent oligotypic intertidal biotopes linked to a variety of environmental factors; the remaining two mirror high-diversity infralittoral associations. Conversely, only two poor, ill-defined biofacies thrive at present between the intertidal and the shallow infralittoral zones, somewhat resembling two of the thanatofacies. Diversity indexes reveal that a dramatic biodiversity decline occurred from the 1960s onwards, far beyond the effects of time-averaging and accumulation. This reduction is largely attributable to the high impact of human activities, such as intensive sea bottom trawling, wastewaters from aquaculture (shrimp and fish ponds) and dense coastal villages, and, to a lesser extent, the digging of edible molluscs from the tidal flat.
A field investigation of concrete patches containing pyrament blended concrete.
DOT National Transportation Integrated Search
1994-01-01
During roadway repairs, state highway officials try to minimize lane closure times. This reduces inconvenience to travelers, reduces traffic control needs, and helps minimize work zone accidents. For rapid repairs, materials that provide high early s...
How to minimize perceptual error and maximize expertise in medical imaging
NASA Astrophysics Data System (ADS)
Kundel, Harold L.
2007-03-01
Visual perception is such an intimate part of human experience that we assume that it is entirely accurate. Yet perception accounts for about half of the errors made by radiologists using adequate imaging technology. The true incidence of errors that directly affect patient well-being is not known, but it is probably at the lower end of the reported values of 3 to 25%. Errors in screening for lung and breast cancer are somewhat better characterized than errors in routine diagnosis. About 25% of cancers actually recorded on the images are missed, and cancer is falsely reported in about 5% of normal people. Radiologists must strive to decrease error not only because of the potential impact on patient care but also because substantial variation among observers undermines confidence in the reliability of imaging diagnosis. Observer variation also has a major impact on technology evaluation, because the variation between observers is frequently greater than the difference in the technologies being evaluated. This has become particularly important in the evaluation of computer aided diagnosis (CAD). Understanding the basic principles that govern the perception of medical images can provide a rational basis for making recommendations for minimizing perceptual error. It is convenient to organize thinking about perceptual error into five steps: 1) the initial acquisition of the image by the eye-brain (contrast and detail perception); 2) the organization of the retinal image into logical components to produce a literal perception (bottom-up, global, holistic); 3) conversion of the literal perception into a preferred perception by resolving ambiguities in the literal perception (top-down, simulation, synthesis); 4) selective visual scanning to acquire details that update the preferred perception; and 5) application of decision criteria to the preferred perception. The five steps are illustrated with examples from radiology, with suggestions for minimizing error. The role of perceptual learning in the development of expertise is also considered.
Krabcova, Ivana; Studeny, Pavel; Jirsova, Katerina
2013-06-01
To assess the quantitative and qualitative parameters of pre-cut posterior corneal lamellae for Descemet membrane endothelial keratoplasty with a stromal rim (DMEK-S) prepared manually in the Ocular Tissue Bank Prague. All 65 successfully prepared pre-cut posterior corneal lamellae provided for grafting during a 2-year period were analyzed retrospectively. The lamellae, consisting of a central zone of endothelium-Descemet membrane surrounded by a supporting peripheral stromal rim, were prepared manually from corneoscleral buttons having an endothelial cell density higher than 2,500 cells/mm^2. The live endothelial cell density, the percentage of dead cells, the hexagonality and the coefficient of variation were assessed before and immediately after preparation, as well as after 2 days of organ culture storage at 31 °C. Altogether, the endothelium of 57 lamellae was assessed. Immediately after preparation, the mean live endothelial cell density was 2,835 cells/mm^2 and, on average, 1.8% of cells were dead. After 2 days of storage, the cell density decreased significantly to 2,757 cells/mm^2 and the percentage of dead cells to 1.0%. There was a significant change in the mean hexagonality and the coefficient of variation after lamellar preparation and subsequent storage. The amount of tissue wasted during preparation was 23%. The endothelial cell density of posterior corneal lamellae sent for DMEK-S was higher than 2,700 cells/mm^2 on average, with a low percentage of dead cells; 65 pre-cut tissues were used for grafting during a 2-year period.
Wind systems the driving force of evaporation at the Dead Sea
NASA Astrophysics Data System (ADS)
Metzger, Jutta; Corsmeier, Ulrich; Alpert, Pinhas
2017-04-01
The Dead Sea is a unique place on earth. It is located in the Eastern Mediterranean at the lowest point of the Jordan Rift valley, and its water level is currently at 429 m below mean sea level. The region lies in a transition zone of semi-arid to arid climate conditions and is endangered by severe environmental problems, especially the rapid lake level decline (>1 m/year), which causes the shifting of fresh/saline groundwater interfaces and the drying up of the lake. Two key features are relevant for these environmental changes: the evaporation from the water surface and its driving mechanisms. The main driver of evaporation at the Dead Sea is the wind velocity, and hence the governing wind systems with different scales in space and time. In the framework of the Virtual Institute DEad SEa Research Venue (DESERVE), an extensive field campaign was conducted to study the governing wind systems in the valley and the energy balance of the water and land surfaces simultaneously. The combination of several in-situ and remote sensing instruments allowed temporally and spatially high-resolution measurements to investigate the frequency of occurrence of the wind systems, their three-dimensional structure, the associated wind velocities, and their impact on evaporation. The characteristics of the three local wind systems governing the valley's wind field, as well as their impact on evaporation, will be presented. Mostly decoupled from the large-scale flow, a local lake breeze determines conditions during the day. Strong downslope winds drive the evaporation in the afternoon, and down-valley flows with wind velocities of over 10 m s-1 dominate during the night, causing unusually high evaporation rates after sunset.
Calibration of stereo rigs based on the backward projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin
2016-08-01
High-accuracy 3D measurement based on a binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortions is carried out. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa
2018-01-01
This work was aimed at determining whether artificial neural networks (ANN) implementing backpropagation algorithms with default settings can generate better predictive models than multiple linear regression (MLR) analysis. The study was carried out on timolol-loaded liposomes. Causal factors were used as training data and fed into the ANN. The number of training cycles was tuned in order to optimize the performance of the ANN, by minimizing the error between the predicted and real response values in the training step. Training was stopped at 10 000 cycles with 80% of the pattern values, because at this point the ANN generalized best. The minimum validation error was achieved with 12 hidden neurons in a single layer. The ANN showed great prediction ability, with errors between predicted and real values lower than 1% for some of the parameters evaluated. Its performance was then compared to that of MLR using a factorial design. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters and estimating the prediction errors. Results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of combining ANN with design of experiments, compared to conventional MLR modeling techniques.
Definition Of Touch-Sensitive Zones For Graphical Displays
NASA Technical Reports Server (NTRS)
Monroe, Burt L., III; Jones, Denise R.
1988-01-01
Touch zones are defined simply by touching, while editing is done automatically. Development of a touch-screen interactive computing system is a tedious task. The Interactive Editor for Definition of Touch-Sensitive Zones computer program increases the efficiency of human/machine communication by enabling the user to define each zone interactively, minimizing redundancy in programming and eliminating the need for manual computation of the boundaries of touch areas. Information produced during the editing process is written to a data file, which the application program accesses when needed.
Multiscale Analyses of the Bone-implant Interface
Cha, J.Y.; Pereira, M.D.; Smith, A.A.; Houschyar, K.S.; Yin, X.; Mouraret, S.; Brunski, J.B.
2015-01-01
Implants placed with high insertion torque (IT) typically exhibit primary stability, which enables early loading. Whether high IT has a negative impact on peri-implant bone health, however, remains to be determined. The purpose of this study was to ascertain how peri-implant bone responds to strains and stresses created when implants are placed with low and high IT. Titanium micro-implants were inserted into murine femurs with low and high IT using torque values that were scaled to approximate those used to place clinically sized implants. Insertion torque created in the peri-implant tissues strains whose distribution and magnitude were calculated through finite element modeling. Stiffness tests quantified primary and secondary implant stability. At multiple time points, molecular, cellular, and histomorphometric analyses were performed to quantitatively determine the effect of high and low strains on apoptosis, mineralization, resorption, and collagen matrix deposition in peri-implant bone. Preparation of an osteotomy results in a narrow zone of dead and dying osteocytes in peri-implant bone that is not significantly enlarged in response to implants placed with low IT. Placing implants with high IT more than doubles this zone of dead and dying osteocytes. As a result, peri-implant bone develops micro-fractures, bone resorption is increased, and bone formation is decreased. Using high IT to place an implant creates high interfacial stress and strain that are associated with damage to peri-implant bone and therefore should be avoided to best preserve the viability of this tissue. PMID:25628271
Evaluation of Imaging Parameters of Ultrasound Scanners: Baseline for Future Testing
Pasicz, Katarzyna; Grabska, Iwona; Skrzyński, Witold; Ślusarczyk-Kacprzyk, Wioletta; Bulski, Wojciech
2017-01-01
Summary Background Regular quality control is required in Poland only for those methods of medical imaging which involve the use of ionizing radiation but not for ultrasonography. It is known that the quality of ultrasound images may be affected by the wearing down or malfunctioning of equipment. Material/Methods An evaluation of image quality was carried out for 22 ultrasound scanners equipped with 46 transducers. The CIRS Phantom model 040GSE was used. A set of tests was established which could be carried out with the phantom, including: depth of penetration, dead zone, distance measurement accuracy, resolution, uniformity, and visibility of structures. Results While the dead zone was 0 mm for 89% of transducers, it was 3 mm for the oldest transducer. The distances measured agreed with the actual distances by 1 mm or less in most cases, with the largest difference of 2.6 mm. The resolution in the axial direction for linear transducers did not exceed 1 mm, but it reached even 5 mm for some of the convex and sector transducers, especially at higher depths and in the lateral direction. For 29% of transducers, some distortions of anechoic structures were observed. Artifacts were detected for several transducers. Conclusions The results will serve as a baseline for future testing. Several cases of suboptimal image quality were identified along with differences in performance between similar transducers. The results could be used to decide on the applicability of a given scanner or transducer for a particular kind of examination. PMID:29657644
Freckmann, Guido; Jendrike, Nina; Baumstark, Annette; Pleus, Stefan; Liebing, Christina; Haug, Cornelia
2018-04-01
The international standard ISO 15197:2013 requires a user performance evaluation to assess if intended users are able to obtain accurate blood glucose measurement results with a self-monitoring of blood glucose (SMBG) system. In this study, user performance was evaluated for four SMBG systems on the basis of ISO 15197:2013, and possibly related insulin dosing errors were calculated. Additionally, accuracy was assessed in the hands of study personnel. Accu-Chek ® Performa Connect (A), Contour ® plus ONE (B), FreeStyle Optium Neo (C), and OneTouch Select ® Plus (D) were evaluated with one test strip lot. After familiarization with the systems, subjects collected a capillary blood sample and performed an SMBG measurement. Study personnel observed the subjects' measurement technique. Then, study personnel performed SMBG measurements and comparison measurements. Number and percentage of SMBG measurements within ± 15 mg/dl and ± 15% of the comparison measurements at glucose concentrations < 100 and ≥ 100 mg/dl, respectively, were calculated. In addition, insulin dosing errors were modelled. In the hands of lay-users three systems fulfilled ISO 15197:2013 accuracy criteria with the investigated test strip lot showing 96% (A), 100% (B), and 98% (C) of results within the defined limits. All systems fulfilled minimum accuracy criteria in the hands of study personnel [99% (A), 100% (B), 99.5% (C), 96% (D)]. Measurements with all four systems were within zones of the consensus error grid and surveillance error grid associated with no or minimal risk. Regarding calculated insulin dosing errors, all 99% ranges were between dosing errors of - 2.7 and + 1.4 units for measurements in the hands of lay-users and between - 2.5 and + 1.4 units for study personnel. Frequent lay-user errors were not checking the test strips' expiry date and applying blood incorrectly. 
Data obtained in this study show that not all available SMBG systems complied with ISO 15197:2013 accuracy criteria when measurements were performed by lay-users. The study was registered at ClinicalTrials.gov (NCT02916576). Ascensia Diabetes Care Deutschland GmbH.
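The ±15 mg/dl / ±15% accuracy criterion applied in the study above is mechanical enough to sketch in code. A minimal illustration (the function name and data are invented; ISO 15197:2013 additionally requires at least 95% of results within these limits and imposes further conditions not modeled here):

```python
def iso15197_within_limits(meter, reference):
    """Percentage of meter readings within the ISO 15197:2013 system-accuracy
    limits: +/-15 mg/dl below 100 mg/dl, +/-15% at or above 100 mg/dl."""
    ok = sum(abs(m - r) <= 15 if r < 100 else abs(m - r) <= 0.15 * r
             for m, r in zip(meter, reference))
    return 100.0 * ok / len(meter)
```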
AUV Underwater Positioning Algorithm Based on Interactive Assistance of SINS and LBL.
Zhang, Tao; Chen, Liping; Li, Yao
2015-12-30
This paper studies an underwater positioning algorithm based on the interactive assistance of a strapdown inertial navigation system (SINS) and LBL; this algorithm mainly includes an optimal correlation algorithm with aided tracking of an SINS/Doppler velocity log (DVL)/magnetic compass pilot (MCP), a three-dimensional TDOA positioning algorithm based on Taylor series expansion and a multi-sensor information fusion algorithm. The final simulation results show that, compared to traditional underwater positioning algorithms, this scheme not only directly corrects the accumulated errors caused by the dead reckoning algorithm, but also solves the problem of ambiguous correlation peaks caused by multipath transmission of underwater acoustic signals. The proposed method can calibrate the accumulated error of the AUV position more directly and effectively, which prolongs the underwater operating duration of the AUV.
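The Taylor-series TDOA positioning step can be illustrated with a generic Gauss-Newton sketch; this is an assumption-laden stand-in, not the paper's implementation, and the conversion from time differences to range differences via the sound speed is omitted. All names and the beacon geometry are invented:

```python
import numpy as np

def tdoa_taylor(beacons, range_diffs, x0, iters=20):
    """Gauss-Newton (Taylor-series) TDOA solver.
    beacons: (N, 3) LBL beacon positions; range_diffs[i-1] = d_i - d_0,
    the range difference of beacon i relative to beacon 0;
    x0: initial guess, e.g. from SINS/DVL dead reckoning."""
    beacons = np.asarray(beacons, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(beacons - x, axis=1)          # ranges to all beacons
        h = (d[1:] - d[0]) - range_diffs                 # TDOA residuals
        # Jacobian of (d_i - d_0) with respect to the position x
        J = (x - beacons[1:]) / d[1:, None] - (x - beacons[0]) / d[0]
        dx, *_ = np.linalg.lstsq(J, -h, rcond=None)
        x = x + dx
    return x
```

In an interactive scheme of this kind, the dead-reckoned position would seed `x0`, and the acoustic fix would in turn bound the dead-reckoning drift.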
Lessons from Crew Resource Management for Cardiac Surgeons.
Marvil, Patrick; Tribble, Curt
2017-04-30
Crew resource management (CRM) describes a system developed in the late 1970s in response to a series of deadly commercial aviation crashes. This system has been universally adopted in commercial and military aviation and is now an integral part of aviation culture. CRM is an error mitigation strategy developed to reduce human error in situations in which teams operate in complex, high-stakes environments. Over time, the principles of this system have been applied and utilized in other environments, particularly in medical areas dealing with high-stakes outcomes requiring optimal teamwork and communication. While formal studies on the effectiveness of CRM training in medical environments have reported mixed results, it seems clear that some of these principles should have value in the practice of cardiovascular surgery.
Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2018-06-11
We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and save cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In a numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field could sufficiently achieve minimal boundary losses and phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.
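The spherical-mean-value idea behind VSHARP-type methods rests on the mean value property of harmonic functions: a harmonic background field equals its local spherical mean, so subtracting that mean suppresses the background while leaving the local tissue field. A deliberately simplified 1-D toy analogue (in 1-D a harmonic function is linear); this is not the paper's 3-D variable-kernel implementation:

```python
import numpy as np

def smv_residual_1d(phase, radius):
    """Toy 1-D spherical-mean-value filter: a harmonic background
    (in 1-D, a linear function) equals its symmetric local mean,
    so phase - mean(phase) removes it at interior points."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return phase - np.convolve(phase, kernel, mode="same")
```

The boundary losses that NR-VSHARP works to minimize show up even here: the residual is only valid at points whose averaging window lies fully inside the data.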
Minimal entropy reconstructions of thermal images for emissivity correction
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.
1999-03-01
Low emissivity with corresponding low thermal emission is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy, which increases as the emissivity decreases, thus reducing the net signal-to-noise ratio and degrading the resulting temperature reconstructions. Additional errors are introduced by the traditional emissivity-correction approaches, wherein one attempts to correct for emissivity either using thermocouples or using one or more baseline images collected at known temperatures. These corrections are numerically equivalent to image differencing. Errors in the baseline images are therefore additive, causing the resulting measurement error to double or triple. The practical application of thermal imagery usually entails coating the objective surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which corrects not only for emissivity variations but also for variations in sensor response, using the baseline images at known temperatures. The minimal entropy reconstruction is actually based on a modified Hopfield neural network which finds the image that best explains the observed data and baseline data while having minimal entropy change between adjacent pixels. Autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.
Yoon, Je Moon; Shin, Dong Hoon; Kim, Sang Jin; Ham, Don-Il; Kang, Se Woong; Chang, Yun Sil; Park, Won Soon
2017-01-01
To investigate the anatomical and refractive outcomes in patients with Type 1 retinopathy of prematurity in Zone I. The medical records of 101 eyes of 51 consecutive infants with Type 1 retinopathy of prematurity in Zone I were analyzed. Infants were treated by conventional laser photocoagulation (Group I), combined intravitreal bevacizumab injection and Zone I sparing laser (Group II), or intravitreal bevacizumab with deferred laser treatment (Group III). The proportion of unfavorable anatomical outcomes including retinal fold, disc dragging, retrolental tissue obscuring the view of the posterior pole, retinal detachment, and early refractive errors were compared among the three groups. The mean gestational age at birth and the birth weight of all 51 infants were 24.3 ± 1.1 weeks and 646 ± 143 g, respectively. In Group I, an unfavorable anatomical outcome was observed in 10 of 44 eyes (22.7%). In contrast, in Groups II and III, all eyes showed favorable anatomical outcomes without reactivation or retreatment. The refractive error was less myopic in Group III than in Groups I and II (spherical equivalent of -4.62 ± 4.00 D in Group I, -5.53 ± 2.21 D in Group II, and -1.40 ± 2.19 D in Group III; P < 0.001). In Type 1 retinopathy of prematurity in Zone I, intravitreal bevacizumab with concomitant or deferred laser therapy yielded a better anatomical outcome than conventional laser therapy alone. Moreover, intravitreal bevacizumab with deferred laser treatment resulted in less myopic refractive error.
2007-10-01
[Report excerpt; table-of-contents fragment. Recoverable topics: study of the surf zone environment; high-priority research needs, including detection of smaller munitions items, improved navigation error analysis, and development of cooperative cued platforms. Surveys employ towbodies, AUVs, ROVs, HOVs, and divers; participants stressed that surveys in high-energy surf zones present unique difficulties.]
Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.
Chen, Jing; Zhang, Yi; Xue, Wei
2018-04-28
In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, compared with the fingerprint-based method, the UILoc system can build a fingerprint database automatically without any site survey and the database will be applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc will provide the basic location estimation through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module, a weighted fusion algorithm combined with a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that the UILoc can provide accurate positioning, the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
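The fingerprint-matching step combined with KNN can be sketched as a weighted k-nearest-neighbours estimate in signal space. This is a generic illustration under assumed data shapes, not UILoc's actual algorithm (whose fusion with PDR and least squares is more involved), and all names are invented:

```python
import math

def wknn_locate(fingerprints, observed, k=3):
    """Weighted KNN over an RSS fingerprint database.
    fingerprints: list of ((x, y), {ap: rss}) entries;
    observed: {ap: rss} measured at the unknown position."""
    scored = []
    for pos, fp in fingerprints:
        common = set(fp) & set(observed)
        dist = math.sqrt(sum((fp[ap] - observed[ap]) ** 2 for ap in common))
        scored.append((dist, pos))
    scored.sort(key=lambda t: t[0])
    top = scored[:k]
    w = [1.0 / (d + 1e-6) for d, _ in top]   # closer fingerprints weigh more
    x = sum(wi * p[0] for wi, (_, p) in zip(w, top)) / sum(w)
    y = sum(wi * p[1] for wi, (_, p) in zip(w, top)) / sum(w)
    return x, y
```

In an unsupervised scheme like UILoc, the fingerprint database itself would be built automatically from PDR-tagged measurements rather than a site survey.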
DOT National Transportation Integrated Search
2008-12-26
Dilemma zone at signalized intersections has been recognized as a major potential cause of rear-end crashes, and has been widely studied by researchers since it was initially proposed as the GHM model in 1960. However, concepts conventionally defined...
Characterize dynamic dilemma zone and minimize its effect at signalized intersections.
DOT National Transportation Integrated Search
2008-12-26
Dilemma zone at signalized intersections has been recognized as a major potential cause of rear-end and right-angle crashes, and has been widely studied by researchers since it was initially proposed as the GHM model in 1960. However, concepts conven...
NASA Astrophysics Data System (ADS)
Goineau, Aurélie; Gooday, Andrew J.
2017-04-01
The benthic biota of the Clarion-Clipperton Zone (CCZ, abyssal eastern equatorial Pacific) is the focus of a major research effort linked to possible future mining of polymetallic nodules. Within the framework of ABYSSLINE, a biological baseline study conducted on behalf of Seabed Resources Development Ltd. in the UK-1 exploration contract area (eastern CCZ, ~4,080 m water depth), we analysed foraminifera (testate protists), including ‘live’ (Rose Bengal stained) and dead tests, in 5 cores (0-1 cm layer, >150-μm fraction) recovered during separate megacorer deployments inside a 30 by 30 km seafloor area. In both categories (live and dead) we distinguished between complete and fragmented specimens. The outstanding feature of these assemblages is the overwhelming predominance of monothalamids, a group often ignored in foraminiferal studies. These single-chambered foraminifera, which include agglutinated tubes, spheres and komokiaceans, represented 79% of 3,607 complete tests, 98% of 1,798 fragments and 76% of the 416 morphospecies (live and dead combined) in our samples. Only 3.1% of monothalamid species and 9.8% of all species in the UK-1 assemblages are scientifically described and many are rare (29% singletons). Our results emphasise how little is known about foraminifera in abyssal areas that may experience major impacts from future mining activities.
Dubovskaia, O P; Gladyshev, M I; Makhutova, O N
2004-01-01
The vertical distribution of net zooplankton in the headwater of the Krasnoyarsk hydroelectric power station and its horizontal distribution in the tailwater were studied over two years in the winter and summer seasons. Special staining was used to distinguish living from dead individuals. It was revealed that, on average, 77% of living plankton pass through the high-head dam's deep-water intake into the tailwater. While passing through the dam turbines, some individuals of the reservoir plankton are injured and die, which somewhat increases the proportion of dead individuals in the tailwater near the dam (from 3 to 6%). Live zooplankton that has passed through the turbines is eliminated under the highly turbulent conditions of the Upper Yenisei: at 32 km from the dam only about 10% remains, compared with the biomass in the 20-40 m layer of the reservoir, while the proportion of dead individuals increases to 11%. The biomass of zooplankton suspended in the water column of the tailwater sometimes increases (to > 1 g/m3) due to the large copepod Heterocope borealis, which inhabits near-bottom and near-shore river zones and can be found in the central part of the river during the reproductive period. Limnetic zooplankton from the reservoir cannot be considered an important food source for planktivores in the tailwater.
Benthic foraminifera from the Arabian Sea oxygen minimum zone: towards a paleo-oxygenation proxy.
NASA Astrophysics Data System (ADS)
Clemence, Caulle; Meryem, Mojtahid; Karoliina, Koho; Andy, Gooday; Gert-Jan, Reichart; Gerhard, Schmiedl; Frans, Jorissen
2014-05-01
The thermohaline circulation oxygenates deep ocean sediments and thereby enables aerobic life on the sea-floor. In the past, interruptions of this deep water formation occurred several times, causing hypoxic to anoxic conditions on the sea-floor and leading to major ecological turnover. A better understanding of the interaction between climate and bottom-water oxygenation is therefore essential in order to predict future oceanic responses. Presently, permanent (stable over decadal timescales) low-oxygen conditions occur naturally at mid-water depths in the northern Indian Ocean (Arabian Sea). Oxygen Minimum Zones (OMZ) are key areas for understanding hypoxic-anoxic events and their impact on the benthic ecosystem. In this context, a good knowledge of the ecology and life cycle adaptations of the benthic foraminiferal assemblages living in these low-oxygen areas is essential. A series of multicores were recovered from three transects showing an oxygen gradient across the OMZ: the Murray Ridge, the Oman margin and the Indian margin. The stations located at the same depths showed slightly different oxygen concentrations and large differences in organic matter content. These differences are mainly related to the geographic location in the Arabian Sea.
We investigated at these stations live and dead benthic foraminiferal faunas. At each location, faunal diversity seems to be controlled by bottom-water oxygen content; limited diversity corresponding to low oxygen content. Foraminiferal abundances reflect organic matter quantity and quality; higher organic matter quality and quantity are related to higher foraminiferal abundances. When comparing the three study areas, similar foraminiferal species (live and dead) are observed suggesting that benthic foraminifera from the Arabian Sea predominantly respond to bottom-water oxygenation. Based on these observations, we aim to develop a paleo-oxygenation proxy based on live, dead and fossil faunas resulting from both our study and previous studies in the Arabian Sea.
Zhao, C.Y.; Zhang, Q.; Ding, X.-L.; Lu, Z.; Yang, C.S.; Qi, X.M.
2009-01-01
The City of Xian, China, has been experiencing significant land subsidence and ground fissure activity since the 1960s, which has brought various severe geohazards, including damage to buildings, bridges and other facilities. Monitoring of land subsidence and ground fissure activity can provide useful information for assessing the extent of, and mitigating, such geohazards. In order to achieve robust Synthetic Aperture Radar Interferometry (InSAR) results, six interferometric pairs of Envisat ASAR data covering 2005–2006 are first collected to analyze InSAR processing errors, such as temporal and spatial decorrelation error, external DEM error, atmospheric error and unwrapping error. Then the annual subsidence rate during 2005–2006 is calculated by weighted averaging of two pairs of D-InSAR results with similar time spans. Lastly, GPS measurements are applied to calibrate the InSAR results and centimeter precision is achieved. For ground fissure monitoring, five InSAR cross-sections are designed to demonstrate the relative subsidence difference across ground fissures. In conclusion, the final InSAR subsidence map for 2005–2006 shows four large subsidence zones, located in hi-tech development areas in the western, eastern and southern suburbs of Xian City, among which two subsidence cones are newly detected; two ground fissures are deduced to have extended westward in the Yuhuazhai subsidence cone. This study shows that the land subsidence and ground fissures are highly correlated spatially and temporally, and both are correlated with hi-tech zone construction in Xian during 2005–2006.
Vavilov, A Iu; Viter, V I
2007-01-01
Mathematical aspects of the measurement errors of modern thermometric models of postmortem cooling of the human body are considered. The main diagnostic body areas used for thermometry are analyzed with a view to minimizing these errors. The authors propose practical recommendations for reducing errors in estimating the time since death.
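Thermometric estimation of the time since death typically inverts a body-cooling model. As a deliberately simplified illustration only: real forensic models (e.g. Henssge's) use a double exponential and correction factors, and the rate constant below is invented, not a validated value:

```python
import math

def time_since_death(t_body, t_ambient, t0=37.0, k=0.06):
    """Invert single-exponential (Newtonian) cooling,
    T(t) = Ta + (T0 - Ta) * exp(-k * t), for the elapsed time t in hours.
    k is an illustrative cooling constant per hour."""
    return -math.log((t_body - t_ambient) / (t0 - t_ambient)) / k
```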
Beyond the Mechanics of Spreadsheets: Using Design Instruction to Address Spreadsheet Errors
ERIC Educational Resources Information Center
Schneider, Kent N.; Becker, Lana L.; Berg, Gary G.
2017-01-01
Given that the usage and complexity of spreadsheets in the accounting profession are expected to increase, it is more important than ever to ensure that accounting graduates are aware of the dangers of spreadsheet errors and are equipped with design skills to minimize those errors. Although spreadsheet mechanics are prevalent in accounting…
DOT National Transportation Integrated Search
2006-01-01
Problem: Work zones on heavily traveled divided highways present problems to motorists in the form of traffic delays and increased accident risks due to sometimes reduced motorist guidance, dense traffic, and other driving difficulties. To minimize t...
Faisal, Ayad A H; Abd Ali, Ziad T
2017-10-01
COMSOL Multiphysics 3.5a software was used for simulating the one-dimensional equilibrium transport of the lead-phenol binary system, including the sorption process, through saturated sandy soil as the aquifer and granular dead anaerobic sludge (GDAS) as the permeable reactive barrier. Fourier-transform infrared spectroscopy analysis proved that carboxylic and alcohol groups are responsible for the bio-sorption of lead onto GDAS, while phosphine, aromatic and alkane groups are responsible for the bio-sorption of phenol. Batch tests were performed to characterize the equilibrium sorption properties of the GDAS and sandy soil in lead- and/or phenol-containing aqueous solutions. Numerical and experimental results proved that the barrier plays a potential role in restricting the migration of the contaminant plume and that there is a linear relationship between the longevity and the thickness of the barrier. A good agreement between these results was recognized, with a root mean squared error not exceeding 0.04.
Advanced Integration of WiFi and Inertial Navigation Systems for Indoor Mobile Positioning
NASA Astrophysics Data System (ADS)
Evennou, Frédéric; Marx, François
2006-12-01
This paper presents an aided dead-reckoning navigation structure and signal processing algorithms for self-localization of an autonomous mobile device by fusing pedestrian dead reckoning and WiFi signal strength measurements. WiFi and inertial navigation systems (INS) are used for positioning and attitude determination in a wide range of applications. Over the last few years, a number of low-cost inertial sensors have become available. Although they exhibit large errors, WiFi measurements can be used to correct the drift that weakens navigation based on this technology. On the other hand, INS sensors can interact with the WiFi positioning system as they provide high-accuracy real-time navigation. A structure based on a Kalman filter and a particle filter is proposed. It fuses the heterogeneous information coming from those two independent technologies. Finally, the benefits of the proposed architecture are evaluated and compared with the pure WiFi and INS positioning systems.
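The complementary roles described above (inertial dead reckoning gives smooth short-term motion, WiFi bounds the drift) can be sketched with a scalar Kalman filter. The paper's actual structure couples a Kalman filter with a particle filter; the one-dimensional model and noise values below are arbitrary assumptions for illustration:

```python
def kalman_fuse(pdr_steps, wifi_fixes, q=0.5, r=4.0):
    """Scalar Kalman filter along one axis: predict with the dead-reckoned
    displacement, correct with a WiFi position fix when one is available
    (None otherwise). q: process noise, r: WiFi measurement noise."""
    x, p = 0.0, 1.0
    track = []
    for u, z in zip(pdr_steps, wifi_fixes):
        x, p = x + u, p + q               # predict: integrate the PDR step
        if z is not None:                 # correct: WiFi fix
            gain = p / (p + r)
            x, p = x + gain * (z - x), (1.0 - gain) * p
        track.append(x)
    return track
```

With a 10% step-length bias, the occasional WiFi corrections keep the position error bounded where pure dead reckoning would drift without limit.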
NASA Astrophysics Data System (ADS)
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to produce a sustained oscillatory output around a non-zero setpoint. Thereafter, the obtained limit cycle information is substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped and critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included for the validation of the proposed identification scheme through computer simulations. Subsequently, comparisons between the estimated model and the true system are drawn using the integral absolute error criterion and frequency response plots. Finally, the obtained output responses are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 setup.
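For context, the classic use of a relay limit cycle (Åström-Hägglund autotuning) extracts the ultimate gain and frequency from the oscillation via a describing-function approximation. The paper instead derives exact expressions for the transfer function parameters, which are not reproduced here; this sketch shows only the classic approximation:

```python
import math

def relay_ultimate(h, a, pu):
    """Describing-function estimates from a sustained relay oscillation:
    h: relay amplitude, a: limit-cycle amplitude, pu: oscillation period."""
    ku = 4.0 * h / (math.pi * a)   # ultimate gain
    wu = 2.0 * math.pi / pu        # ultimate frequency, rad/s
    return ku, wu
```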
Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.
ERIC Educational Resources Information Center
Marcoulides, George A.
1995-01-01
A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)
Minimizing user delay and crash potential through highway work zone planning.
DOT National Transportation Integrated Search
2014-05-01
Lane closures due to highway work zones introduce many challenges to ensuring smooth traffic operations and a safe environment for drivers and workers. In addition, merging has been found to be one of the most stressful aspects of driving and a m...
Multiple-objective optimization in precision laser cutting of different thermoplastics
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.
2015-04-01
Thermoplastics are increasingly being used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because the process is localized and non-contact, laser cutting can produce precise cuts with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce errors, and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The optimized set of processing parameters is determined based on the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). The result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of process parameters on the cutting characteristics. It was found that laser power has the dominant effect on HAZ for all thermoplastics.
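Grey relational analysis itself follows a fixed recipe: normalize each response, compute grey relational coefficients against the ideal sequence, and average them into a grade. A minimal sketch on invented HAZ data (not the paper's measurements), with the usual distinguishing coefficient zeta = 0.5:

```python
import numpy as np

# Hypothetical HAZ measurements (mm) for 4 parameter settings x 3 thermoplastics;
# the values are illustrative, not the paper's data.
haz = np.array([[0.30, 0.25, 0.40],
                [0.22, 0.20, 0.35],
                [0.45, 0.38, 0.50],
                [0.28, 0.24, 0.42]])

# Smaller-is-better normalization to [0, 1]
norm = (haz.max(axis=0) - haz) / (haz.max(axis=0) - haz.min(axis=0))

# Grey relational coefficients with distinguishing coefficient zeta = 0.5
delta = 1.0 - norm                      # deviation from the ideal sequence
zeta = 0.5
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = coeff.mean(axis=1)              # grey relational grade per setting
best = int(np.argmax(grade))            # setting with the highest grade wins
print(grade.round(3), best)
```

Here the second setting dominates every response, so it attains the maximum possible grade of 1.0, mirroring how the paper selects the parameter set with the highest relational grade.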
Mazze, Roger S; Strock, Ellie; Borgman, Sarah; Wesley, David; Stout, Philip; Racchini, Joel
2009-01-01
This study was designed to assess the accuracy, reliability, and contribution to clinical decision-making of two commercially available continuous glucose monitoring (CGM) devices using a novel analytical approach. Eleven individuals with type 1 diabetes and five with type 2 diabetes wore a Guardian RT (GRT) (Medtronic Minimed, Northridge, CA) or DexCom STS Continuous Monitoring System (DEX) (San Diego, CA) device for 200 h followed by an 8-h laboratory study. A subset of these subjects wore both devices simultaneously. Subjects produced 1,902 +/- 269 readings during the ambulatory phase. During the laboratory study we found: lag time of 21 +/- 5 min for GRT and 7 +/- 7 min for DEX (P < 0.005); mean absolute relative difference of 19.9% and 16.7%, respectively, for GRT and DEX; and glucose exposure (the ratio of study device/laboratory reference device [YSI Instruments, Inc., Yellow Springs, OH] area under the curve) of 95 +/- 6% for GRT and 101 +/- 13% for DEX. Reliability measured during laboratory study showed 82% for DEX and 99% for GRT. Clarke Error Grid analysis (YSI reference) showed for GRT 59% of values in zone A, 34% in zone B, and 7% in zone D and for DEX 70% in zone A, 28% in zone B, 1% in zone C, and 1% in zone D. Bland-Altman plots (YSI standard) yielded for DEX 3 mg/dL (95% confidence interval, -78 to 84 mg/dL) and for GRT -21 mg/dL (95% confidence interval, -124 to 82 mg/dL). Six of eight subjects completed both home and laboratory simultaneous use of DEX and GRT. Lag times were inconsistent between devices, ranging from 0 to 32 min; area under the curve revealed a tendency for DEX to report higher total glucose exposure than GRT for the same patient. CGM detects abnormalities in glycemic control in a manner heretofore impossible to obtain. 
However, our studies revealed sufficient incongruence between simultaneous laboratory blood glucose levels and interstitial fluid glucose (after calibrations) to question the fundamental assumption that interstitial fluid glucose and blood glucose could be made identical by resorting to algorithms based on concurrent blood glucose levels alone.
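The mean absolute relative difference (MARD) reported above is straightforward to compute. A minimal sketch with hypothetical paired sensor/reference readings (not data from this study):

```python
# Mean absolute relative difference (MARD) between sensor and reference
# glucose values; the paired readings below are hypothetical.
sensor    = [110, 95, 150, 200, 80, 125]   # mg/dL, CGM device
reference = [100, 90, 165, 210, 92, 120]   # mg/dL, laboratory (YSI) values

ard = [abs(s - r) / r * 100.0 for s, r in zip(sensor, reference)]
mard = sum(ard) / len(ard)
print(round(mard, 1))   # → 7.8
```

Each absolute difference is expressed relative to the reference value, so MARD weights a 10 mg/dL miss more heavily at low glucose than at high glucose.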
Acconcia, G; Labanca, I; Rech, I; Gulinatti, A; Ghioni, M
2017-02-01
Minimizing the dead time of Single Photon Avalanche Diodes (SPADs) is a key factor in speeding up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom-technology SPAD detectors. The AQC can also operate the new red-enhanced SPAD and provide timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.
Multiple channel programmable coincidence counter
Arnone, Gaetano J.
1990-01-01
A programmable digital coincidence counter having multiple channels and featuring minimal dead time. Neutron detectors supply electrical pulses to a synchronizing circuit, which in turn inputs derandomized pulses to an adding circuit. A random access memory circuit connected as a programmable-length shift register receives and shifts the sum of the pulses, and outputs to a serializer. A counter is incremented by the adding circuit and downcounted by the serializer, one pulse at a time. The decoded contents of the counter after each decrement are output to scalers.
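The shift-register coincidence idea can be modeled in software: each incoming (derandomized) pulse is paired with the pulses still inside a programmable-length gate. This is an illustrative sketch of the counting logic, not the patented circuit:

```python
from collections import deque

def shift_register_counts(pulse_train, gate_len):
    """Software sketch of a shift-register coincidence gate: for each
    incoming pulse, count how many earlier pulses are still inside the
    programmable-length gate (the last gate_len clock ticks)."""
    gate = deque([0] * gate_len, maxlen=gate_len)  # the shift register
    totals = 0        # singles count
    coincidences = 0
    in_gate = 0       # running sum of the register contents
    for pulses in pulse_train:          # pulses per clock tick (derandomized)
        if pulses:
            totals += pulses
            coincidences += pulses * in_gate   # pair with pulses in the gate
        # shift: the oldest tick leaves the gate, the newest enters
        in_gate += pulses - gate[0]
        gate.append(pulses)
    return totals, coincidences

# A short hypothetical pulse train: a close pair, then isolated pulses
train = [1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
print(shift_register_counts(train, gate_len=4))   # → (4, 1)
```

Lengthening the programmable gate admits more of the pulse history, so the coincidence count grows while the singles count is unchanged.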
NASA Technical Reports Server (NTRS)
Smith, David D.
2015-01-01
Next-generation space missions are currently constrained by existing spacecraft navigation systems which are not fully autonomous. These systems suffer from accumulated dead-reckoning errors and must therefore rely on periodic corrections provided by supplementary technologies that depend on line-of-sight signals from Earth, satellites, or other celestial bodies for absolute attitude and position determination, which can be spoofed, incorrectly identified, occluded, obscured, attenuated, or insufficiently available. These dead-reckoning errors originate in the ring laser gyros themselves, which constitute inertial measurement units. Increasing the time for standalone spacecraft navigation therefore requires fundamental improvements in gyroscope technologies. One promising solution to enhance gyro sensitivity is to place an anomalous dispersion or fast light material inside the gyro cavity. The fast light essentially provides a positive feedback to the gyro response, resulting in a larger measured beat frequency for a given rotation rate as shown in figure 1. Game Changing Development has been investing in this idea through the Fast Light Optical Gyros (FLOG) project, a collaborative effort which began in FY 2013 between NASA Marshall Space Flight Center (MSFC), the U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC), and Northwestern University. MSFC and AMRDEC are working on the development of a passive FLOG (PFLOG), while Northwestern is developing an active FLOG (AFLOG). The project has demonstrated new benchmarks in the state of the art for scale factor sensitivity enhancement. Recent results show cavity scale factor enhancements of approximately 100 for passive cavities.
NASA Astrophysics Data System (ADS)
Wilson, A.; Jackson, R. B.; Tumber-Davila, S. J.
2017-12-01
An increase in the frequency and severity of droughts has been associated with the changing climate. These events have the potential to alter the composition and biogeography of forests, as well as increase tree mortality related to climate-induced stress. Already, an increase in tree mortality has been observed throughout the US. The recent drought in California led to millions of tree deaths in the southern Sierra Nevada alone. In order to assess the potential impacts of these events on forest systems, it is imperative to understand what factors contribute to tree mortality. As plants become water-stressed, they may invest carbon more heavily belowground to reach a bigger pool of water, but their ability to adapt may be limited by the characteristics of the soil. In the Southern Sierra Critical Zone Observatory, a high tree mortality zone, we have selected both dead and living trees to examine the factors that contribute to root zone variability and belowground biomass investment by individual plants. A series of 15 cores surrounding each tree were taken to collect root and soil samples. These were then used to compare belowground rooting distributions with soil characteristics (texture, water holding capacity, pH, electric conductivity). Abies concolor is heavily affected by drought-induced mortality; therefore, the rooting systems of dead Abies concolor trees were examined to determine the relationship between their rooting systems and environmental conditions. Examining the relationship between soil characteristics and rooting systems of trees may shed light on the plasticity of rooting systems and how trees adapt based on the characteristics of their environment. A better understanding of the factors that contribute to tree mortality can improve our ability to predict how forest systems may be impacted by climate-induced stress. Key words: Root systems, soil characteristics, drought, adaptation, terrestrial carbon, forest ecology
Factors limiting the health of semi-scavenging ducks in Bangladesh.
Hoque, M A; Skerratt, L F; Cook, A J C; Khan, S A; Grace, D; Alam, M R; Vidal-Diez, A; Debnath, N C
2011-02-01
Duck rearing is well suited to coastal and lowland areas in Bangladesh. It is an important component of sustainable livelihood strategies for poor rural communities as an additional source of household income. An epidemiological study was conducted during January 2005-June 2006 on 379 households in Chatkhil of the Noakhali District, Bangladesh which were using the recently devised "Bangladesh duck model". The overall objective of the study was to identify factors that significantly contributed to mortality and constrained productivity and to generate sufficient knowledge to enable establishment of a disease surveillance system for household ducks. The overall mortality was 15.0% in Chatkhil, with predation causing a significantly higher mortality compared with diseases (p < 0.001). Common diseases were duck plague and duck cholera. Morbid ducks frequently displayed signs associated with diseases affecting the nervous and digestive systems. Haemorrhagic lesions in various organs and white multiple foci on the liver were frequently observed in dead ducks. Epidemiological analysis with a shared frailty model that accounted for clustering of data by farm was used to estimate the association between survival time and risk factors. The overall mortality rate due to disease was significantly lower in vaccinated than in non-vaccinated ducks in all zones except zone 2 (p < 0.001). Only vaccinated ducks survived in zone 1. In conclusion, duck mortality and untimely sale of ducks appeared to be important constraints for household duck production in Chatkhil. Vaccination against duck plague appears to be an effective preventive strategy in reducing the level of associated duck mortality. A successful network was established amongst farmers and the surveillance team through which dead ducks, with accompanying information, were readily obtained for analysis. 
Therefore, there is an opportunity for establishing a long-term disease surveillance programme for rural ducks in Chatkhil of the Noakhali District of Bangladesh.
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
2009-01-01
A gear bearing having a first gear and a second gear, each having a plurality of teeth. Each gear operates on two non-parallel surfaces of the opposing gear teeth to perform both gear and bearing functions simultaneously. The gears are moving at substantially the same speed at their contact points. The gears may be roller gear bearings or phase-shifted gear bearings, and may be arranged in a planet/sun system or used as a transmission. One preferred embodiment discloses and describes an anti-backlash feature to counter "dead zones" in the gear bearing movement.
Absolute Stability Analysis of a Phase Plane Controlled Spacecraft
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Plummer, Michael; Bedrossian, Nazareth; Hall, Charles; Jackson, Mark; Spanos, Pol
2010-01-01
Many aerospace attitude control systems utilize phase plane control schemes that include nonlinear elements such as dead zone and ideal relay. To evaluate phase plane control robustness, stability margin prediction methods must be developed. Absolute stability is extended to predict stability margins and to define an abort condition. A constrained optimization approach is also used to design flex filters for roll control. The design goal is to optimize vehicle tracking performance while maintaining adequate stability margins. Absolute stability is shown to provide satisfactory stability constraints for the optimization.
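As a hedged illustration of the nonlinear elements named above (not the paper's stability analysis), the describing functions of an ideal relay and a unit-slope dead zone are standard results that can be evaluated directly; the relay level M, dead-zone half-width delta, and sample amplitudes below are arbitrary choices:

```python
import math

def df_relay(A, M=1.0):
    """Describing function of an ideal relay with output levels +/-M."""
    return 4.0 * M / (math.pi * A)

def df_dead_zone(A, delta=0.5, k=1.0):
    """Describing function of a slope-k dead zone of half-width delta."""
    if A <= delta:
        return 0.0                      # no output inside the dead zone
    r = delta / A
    return k * (1.0 - 2.0 / math.pi * (math.asin(r) + r * math.sqrt(1 - r * r)))

# The dead-zone gain approaches the linear gain k as the amplitude grows,
# while the relay gain falls off as 1/A
for A in (0.6, 1.0, 5.0, 50.0):
    print(round(df_dead_zone(A), 3), round(df_relay(A), 3))
```

These amplitude-dependent equivalent gains are what make limit-cycle and absolute-stability arguments tractable for phase plane controllers containing such elements.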
Creep of Sylramic-iBN Fiber Tows at Elevated Temperature in Air and in Silicic Acid-Saturated Steam
2015-06-01
Tests were performed in a furnace with R-type control thermocouples and a 90-mm (3.5-in.) hot zone, using an alumina susceptor (setup reproduced from Armani [15]). For testing at T = 500 °C in steam, the effective gauge length was Leff(500) = 39.9 mm. Tensile creep tests were performed using a dead-weight loading arrangement, and the strain and strain rate of the specimen in the hot test section were determined by methods briefly recapitulated in the source.
Pressure model of a four-way spool valve for simulating electrohydraulic control systems
NASA Technical Reports Server (NTRS)
Gebben, V. D.
1976-01-01
An equation that relates the pressure-flow characteristics of hydraulic spool valves was developed. The dependent variable is valve output pressure, and the independent variables are spool position and flow. This causal form of equation is preferred in applications that simulate the effects of hydraulic line dynamics. Results from this equation are compared with those from the conventional valve equation, whose dependent variable is flow. A computer program of the valve equations includes spool stops, leakage through spool clearances, and dead-zone characteristics of overlapped spools.
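The dead-zone characteristic of an overlapped spool can be sketched with a generic turbulent-orifice model written in pressure form; this is an illustrative stand-in, not the report's equation, and the constants (discharge coefficient, area gradient, oil density, supply pressure, overlap) are invented for the example:

```python
import math

# Illustrative constants (not from the report)
Cd, w, rho, Ps = 0.61, 0.02, 850.0, 10e6   # discharge coeff, area gradient (m),
                                           # oil density (kg/m^3), supply (Pa)
overlap = 1e-4                             # spool overlap -> dead zone (m)

def dead_zone(x, dz):
    """Effective spool opening: zero inside +/-dz, shifted linear outside."""
    if abs(x) <= dz:
        return 0.0
    return x - dz if x > 0 else x + dz

def output_pressure(x, Q):
    """Pressure form of the orifice equation: given spool position x (m)
    and load flow Q (m^3/s), return valve output pressure (Pa)."""
    xe = dead_zone(x, overlap)
    if xe == 0.0:
        # Inside the dead zone the metering area is zero; with zero flow we
        # take half supply pressure (a modeling choice), and nonzero flow
        # through a closed valve is inconsistent, flagged as NaN.
        return Ps / 2.0 if Q == 0.0 else float('nan')
    A = Cd * w * abs(xe)                    # metering area
    dP = rho / 2.0 * (Q / A) ** 2           # turbulent orifice pressure drop
    return Ps - dP if xe > 0 else dP        # supply or return side

p1 = output_pressure(5e-5, 0.0)             # inside the dead zone
p2 = output_pressure(5e-4, 1e-4)            # open valve carrying load flow
print(p1, round(p2))
```

Writing pressure as the output, as the report advocates, lets the valve model feed directly into hydraulic line dynamics that take pressure as a boundary condition.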
Pirnstill, Casey W; Malik, Bilal H; Gresham, Vincent C; Coté, Gerard L
2012-09-01
Over the past 35 years considerable research has been performed toward the investigation of noninvasive and minimally invasive glucose monitoring techniques. Optical polarimetry is one noninvasive technique that has shown promise as a means to ascertain blood glucose levels through measuring the glucose concentrations in the anterior chamber of the eye. However, one of the key limitations to the use of optical polarimetry as a means to noninvasively measure glucose levels is the presence of sample noise caused by motion-induced time-varying corneal birefringence. In this article our group presents, for the first time, results that show dual-wavelength polarimetry can be used to accurately detect glucose concentrations in the presence of motion-induced birefringence in vivo using New Zealand White rabbits. In total, nine animal studies (three New Zealand White rabbits across three separate days) were conducted. Using the dual-wavelength optical polarimetric approach, in vivo, an overall mean average relative difference of 4.49% (11.66 mg/dL) was achieved with 100% Zone A+B hits on a Clarke error grid, including 100% falling in Zone A. The results indicate that dual-wavelength polarimetry can effectively be used to significantly reduce the noise due to time-varying corneal birefringence in vivo, allowing the accurate measurement of glucose concentration in the aqueous humor of the eye and correlating that with blood glucose.
NASA Astrophysics Data System (ADS)
Chang, Chueh-Hsin; Yu, Ching-Hao; Sheu, Tony Wen-Hann
2016-10-01
In this article, we numerically revisit the long-time solution behavior of the Camassa-Holm equation u_t - u_{xxt} + 2u_x + 3uu_x = 2u_x u_{xx} + uu_{xxx}. The finite difference solution of this integrable equation is sought subject to the newly derived initial condition with Delta-function potential. Our underlying strategy for deriving a numerically phase-accurate finite difference scheme in the time domain is to reduce the numerical dispersion error through minimization of the derived discrepancy between the numerical and exact modified wavenumbers. Additionally, to achieve the goal of conserving Hamiltonians in the completely integrable equation of current interest, a symplecticity-preserving time-stepping scheme is developed. Based on the solutions computed from the temporally symplecticity-preserving and the spatially wavenumber-preserving schemes, the long-time asymptotic CH solution characters can be accurately depicted in distinct regions of the space-time domain, each featuring quantitatively very different solution behaviors. We also aim to numerically confirm that in the two transition zones their long-time asymptotics can indeed be described in terms of the theoretically derived Painlevé transcendents. Another attempt of this study is to numerically exhibit a close connection between the presently predicted finite-difference solution and the solution of the Painlevé ordinary differential equation of type II in two different transition zones.
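The idea of comparing numerical and exact modified wavenumbers can be illustrated with standard central-difference schemes (a generic sketch, not the optimized scheme derived in the article):

```python
import numpy as np

# Modified wavenumber of the second-order central difference
#   du/dx ~ (u_{j+1} - u_{j-1}) / (2 dx)   ->   k' dx = sin(k dx)
# and of the fourth-order central difference
#   k' dx = (8 sin(k dx) - sin(2 k dx)) / 6
k_dx = np.linspace(0.01, np.pi, 200)        # scaled wavenumber k*dx
k2_mod = np.sin(k_dx)
k4_mod = (8 * np.sin(k_dx) - np.sin(2 * k_dx)) / 6.0

# Dispersion error = deviation from the exact spectral answer k' dx = k dx
err2 = np.abs(k2_mod - k_dx)
err4 = np.abs(k4_mod - k_dx)

# Error grows toward the grid cutoff, and the wider stencil is uniformly
# at least as accurate over the whole resolved range
print(bool(err2[10] < err2[-1]), bool(np.all(err4 <= err2 + 1e-12)))  # → True True
```

A wavenumber-optimized scheme of the kind used in the article goes one step further: instead of matching Taylor orders, the stencil coefficients are chosen to minimize this discrepancy over a target band of wavenumbers.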
Inverse and forward modeling under uncertainty using MRE-based Bayesian approach
NASA Astrophysics Data System (ADS)
Hou, Z.; Rubin, Y.
2004-12-01
A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on or statistical moments of the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested in field sites for flow parameter identification and soil moisture estimation in the vadose zone and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.
Rolland, Jannick; Ha, Yonggang; Fidopiastis, Cali
2004-06-01
A theoretical investigation of rendered depth and angular errors, or Albertian errors, linked to natural eye movements in binocular head-mounted displays (HMDs) is presented for three possible eye-point locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. A numerical quantification was conducted for both the pupil and the center of rotation of the eye under the assumption that the user will operate solely in either the near field under an associated instrumentation setting or the far field under a different setting. Under these conditions, the eyes are taken to gaze in the plane of the stereoscopic images. Across conditions, results show that the center of the entrance pupil minimizes rendered angular errors, while the center of rotation minimizes rendered position errors. Significantly, this investigation quantifies that under proper setting of the HMD and correct choice of the eye points, rendered depth and angular errors can be brought to be either negligible or within specification of even the most stringent applications in performance of tasks in either the near field or the far field.
Theoretical Bounds of Direct Binary Search Halftoning.
Liao, Jan-Ray
2015-11-01
Direct binary search (DBS) produces the best-quality images among halftoning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations try to find the binary state for each pixel that minimizes the total squared perceived error. This error energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence had not yet been found. In this paper, we present a proof that the bound exists as conjectured, under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel farther from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, with the swap operations considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce halftoned images of the same quality as the previous algorithm.
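A toggle-only sweep of DBS is easy to sketch. The snippet below minimizes the total squared perceived error on a small hypothetical mid-gray patch, with a 3x3 Gaussian standing in for the HVS filter; the paper's algorithm also uses swaps and an efficient incremental update via the autocorrelation filter, both omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2(img, ker):
    """'Same' 2-D convolution with zero padding (small, direct)."""
    kh, kw = ker.shape
    ph, pw = kh // 2, kw // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * ker[::-1, ::-1])
    return out

def perceived_error(b, x, h):
    e = conv2(b - x, h)          # HVS-filtered error image
    return np.sum(e * e)         # total squared perceived error

# 8x8 mid-gray target and a random initial halftone (illustrative sizes)
x = np.full((8, 8), 0.5)
b = (rng.random((8, 8)) > 0.5).astype(float)
h = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0  # toy Gaussian HVS model

E0 = perceived_error(b, x, h)
E = E0
improved = True
while improved:                  # toggle-only DBS sweeps until convergence
    improved = False
    for i in range(8):
        for j in range(8):
            b[i, j] = 1.0 - b[i, j]            # trial toggle
            E_new = perceived_error(b, x, h)
            if E_new < E:
                E = E_new; improved = True     # keep the toggle
            else:
                b[i, j] = 1.0 - b[i, j]        # revert
print(round(E0, 3), round(E, 3))
```

Each accepted toggle strictly lowers the error energy, so the sweep terminates; for a mid-gray patch the result drifts toward a checkerboard-like pattern, the configuration that minimizes low-frequency filtered error.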
An optimization-based framework for anisotropic simplex mesh adaptation
NASA Astrophysics Data System (ADS)
Yano, Masayuki; Darmofal, David L.
2012-09-01
We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
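The cell-center versus cell-integration discretizations can be contrasted in one dimension. A minimal sketch on an assumed unit grid with a small σ, where cell-center sampling visibly misallocates probability mass while erf-based cell integration does not:

```python
import math

def cell_center(sigma, half_width):
    """Sample the Gaussian density at cell centers on a unit grid."""
    cells = range(-half_width, half_width + 1)
    return [math.exp(-0.5 * (c / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
            for c in cells]

def cell_integrated(sigma, half_width):
    """Integrate the Gaussian density over each unit cell using erf."""
    def cdf(z):
        return 0.5 * (1.0 + math.erf(z / (sigma * math.sqrt(2))))
    cells = range(-half_width, half_width + 1)
    return [cdf(c + 0.5) - cdf(c - 0.5) for c in cells]

sigma = 0.4                      # kernel small relative to the grid cell
kc = cell_center(sigma, 3)
ki = cell_integrated(sigma, 3)
# Cell-center sampling over-counts mass for small sigma; integration sums to 1
print(round(sum(kc), 3), round(sum(ki), 3))   # → 1.085 1.0
```

An 8.5% mass surplus per dispersal step compounds under repeated convolution, which is consistent with the order-of-magnitude invasion-time errors reported for small kernels.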
Net carbon flux of dead wood in forests of the Eastern US.
Woodall, C W; Russell, M B; Walters, B F; D'Amato, A W; Fraver, S; Domke, G M
2015-03-01
Downed dead wood (DDW) in forest ecosystems is a C pool whose net flux is governed by a complex of natural and anthropogenic processes and is critical to the management of the entire forest C pool. As empirical examination of DDW C net flux has rarely been conducted across large scales, the goal of this study was to use a remeasured inventory of DDW C and ancillary forest attributes to assess C net flux across forests of the Eastern US. Stocks associated with large fine woody debris (diameter 2.6-7.6 cm) decreased over time (-0.11 Mg ha(-1) year(-1)), while stocks of larger-sized coarse DDW increased (0.02 Mg ha(-1) year(-1)). Stocks of total DDW C decreased (-0.14 Mg ha(-1) year(-1)), while standing dead and live tree stocks both increased, 0.01 and 0.44 Mg ha(-1) year(-1), respectively. The spatial distribution of DDW C stock change was highly heterogeneous with random forests model results indicating that management history, live tree stocking, natural disturbance, and growing degree days only partially explain stock change. Natural disturbances drove substantial C transfers from the live tree pool (≈-4 Mg ha(-1) year(-1)) to the standing dead tree pool (≈3 Mg ha(-1) year(-1)) with only a minimal increase in DDW C stocks (≈1 Mg ha(-1) year(-1)) in lower decay classes, suggesting a delayed transfer of C to the DDW pool. The assessment and management of DDW C flux is complicated by the diversity of natural and anthropogenic forces that drive their dynamics with the scale and timing of flux among forest C pools remaining a large knowledge gap.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of the linear and non-linear regression methods for selecting the optimum isotherm was made using experimental equilibrium data for basic red 9 sorption by activated carbon. The r^2 value was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r^2), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r^2 was found to be the best error function for minimizing the error distribution between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K^2, is explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
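The error functions named above have standard definitions that can be written down directly. The sketch below evaluates them for a Langmuir isotherm on hypothetical equilibrium data (the concentrations, uptakes, and Langmuir constants are invented for illustration; p is the number of isotherm parameters):

```python
import math

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g)
Ce = [5.0, 10.0, 25.0, 50.0, 100.0]
qe_exp = [22.0, 35.0, 55.0, 68.0, 78.0]
qm, KL = 90.0, 0.045          # illustrative Langmuir constants

def langmuir(c):
    return qm * KL * c / (1.0 + KL * c)

n, p = len(Ce), 2             # data points, isotherm parameters
resid = [qe - langmuir(c) for qe, c in zip(qe_exp, Ce)]

ERRSQ = sum(r * r for r in resid)                       # sum of squared errors
EABS = sum(abs(r) for r in resid)                       # sum of absolute errors
HYBRID = 100.0 / (n - p) * sum(r * r / qe
                               for r, qe in zip(resid, qe_exp))
MPSD = 100.0 * math.sqrt(1.0 / (n - p) *
                         sum((r / qe) ** 2 for r, qe in zip(resid, qe_exp)))
ARE = 100.0 / n * sum(abs(r / qe) for r, qe in zip(resid, qe_exp))
print(round(ERRSQ, 2), round(MPSD, 2))
```

Because ERRSQ weights large absolute deviations while MPSD and ARE weight relative ones, the parameter set that minimizes each function generally differs, which is precisely why the study compares them before declaring an optimum isotherm.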
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered throughout our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phase and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, the number of erroneous test reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures.
This is the first such data published from Arabic countries evaluating encountered laboratory errors, and it highlights the great need for universal standardization and benchmarking measures to control laboratory work.
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
DOT National Transportation Integrated Search
2012-10-01
Construction work zones are among the most dangerous places to work in any industry in the world. This is because many factors in construction, such as constant change in working environments and driver errors, contribute to a workplace with a higher...
Error in telemetry studies: Effects of animal movement on triangulation
Schmutz, Joel A.; White, Gary C.
1990-01-01
We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
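The effect of movement between sequential bearings can be reproduced with a minimal Monte Carlo sketch. The station layout, 2° bearing noise, eastward move, and the choice of the move's midpoint as the "true" location are all hypothetical conventions for this example, not the study's design:

```python
import math, random

random.seed(1)

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing lines from observer points p1 and p2
    (bearings in radians, measured from the +x axis)."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

obs1, obs2 = (0.0, 0.0), (800.0, 0.0)    # observer stations (m)
start = (400.0, 600.0)                    # true position at the first bearing
sd = math.radians(2.0)                    # bearing error SD (illustrative)

def mean_error(move):
    """Average location error when the animal moves `move` metres east
    between the two sequentially taken bearings."""
    errs = []
    for _ in range(2000):
        end = (start[0] + move, start[1])
        b1 = math.atan2(start[1], start[0]) + random.gauss(0, sd)
        b2 = math.atan2(end[1] - obs2[1], end[0] - obs2[0]) + random.gauss(0, sd)
        est = triangulate(obs1, b1, obs2, b2)
        mid = ((start[0] + end[0]) / 2, start[1])  # reference: move midpoint
        errs.append(math.hypot(est[0] - mid[0], est[1] - mid[1]))
    return sum(errs) / len(errs)

e0, e500 = mean_error(0.0), mean_error(500.0)
print(round(e0, 1), round(e500, 1), e500 > e0)
```

With no movement the error is driven by bearing noise alone, while a 500 m move makes the two bearings point at different places, inflating the mean location error by well over an order of magnitude, qualitatively matching the study's finding that movement dominates triangulation error.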
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.; Borden, R. C.
2014-12-01
In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxygen demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, time to begin performance monitoring, groundwater and/or soil contaminant concentrations to trigger reinjection or terminate ISCO, number of monitoring wells or geoprobe locations per treatment zone, number of samples per sampling event and location, and monitoring frequency. Design variables for each treatment zone may be optimized to minimize expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. 
Monitoring soil concentration changes during ISCO was found to be important to avoid decision errors associated with slow rebound of groundwater concentrations.
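The core idea of the SCOToolkit approach, optimizing a design variable to minimize expected cost over Monte Carlo realizations of uncertain site parameters, can be sketched as follows. This is a toy stand-in, not the paper's model: `campaign_cost`, the first-order decay parameter, the residual-mass penalty, and the candidate trigger concentrations are all hypothetical assumptions for illustration.

```python
import random

random.seed(0)

# Hypothetical cost model: the cost of one ISCO campaign depends on how long
# treatment and monitoring run before the monitored concentration falls below
# the termination trigger. 'decay' is an uncertain site parameter sampled
# per Monte Carlo realization.
def campaign_cost(trigger, decay, c0=100.0, unit_cost=1.0, monitor_cost=0.5):
    cost, c, t = 0.0, c0, 0
    while c > trigger and t < 200:
        c *= decay                      # first-order decline of concentration
        cost += unit_cost + monitor_cost
        t += 1
    cost += 10.0 * c                    # penalty for residual left behind
    return cost

def expected_cost(trigger, n=500):
    # average cost over Monte Carlo draws of the uncertain decay parameter
    return sum(campaign_cost(trigger, random.uniform(0.90, 0.99))
               for _ in range(n)) / n

# grid search over one design variable: the trigger concentration
best = min((expected_cost(tr), tr) for tr in [1, 2, 5, 10, 20])
```

A stopping trigger that is too high leaves a costly residual, while one that is too low pays for unnecessary monitoring rounds; the expected-cost minimum balances the two under parameter uncertainty.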
Medication administration error: magnitude and associated factors among nurses in Ethiopia.
Feleke, Senafikish Amsalu; Mulatu, Muluadam Abebe; Yesmaw, Yeshaneh Seyoum
2015-01-01
The significant impact of medication administration errors affects patients in terms of morbidity, mortality, adverse drug events, and increased length of hospital stay. It also increases costs for clinicians and healthcare systems. Because of this, assessing the magnitude and associated factors of medication administration error can contribute significantly to improving the quality of patient care. The aim of this study was to assess the magnitude and associated factors of medication administration errors among nurses at the Felege Hiwot Referral Hospital inpatient department. A prospective, observation-based, cross-sectional study was conducted from March 24-April 7, 2014 at the Felege Hiwot Referral Hospital inpatient department. A total of 82 nurses were interviewed using a pre-tested structured questionnaire and observed while administering 360 medications, using a checklist supplemented with a review of medication charts. Data were analyzed using the SPSS version 20 software package, and logistic regression was performed to identify factors associated with medication administration error. The incidence of medication administration error was 199 (56.4 %). The majority (87.5 %) of the medications had documentation errors, followed by technique errors (263; 73.1 %) and time errors (193; 53.6 %).
Variables significantly associated with medication administration error included nurse age of 18-25 years [Adjusted Odds Ratio (AOR) = 2.9, 95 % CI (1.65, 6.38)], 26-30 years [AOR = 2.3, 95 % CI (1.55, 7.26)], and 31-40 years [AOR = 2.1, 95 % CI (1.07, 4.12)]; work experience of 10 years or less [AOR = 1.7, 95 % CI (1.33, 4.99)]; a nurse-to-patient ratio of 7-10 [AOR = 1.6, 95 % CI (1.44, 3.19)] or greater than 10 [AOR = 1.5, 95 % CI (1.38, 3.89)]; interruption of the respondent during medication administration [AOR = 1.5, 95 % CI (1.14, 3.21)]; night-shift medication administration [AOR = 3.1, 95 % CI (1.38, 9.66)]; and patient age under 18 years [AOR = 2.3, 95 % CI (1.17, 4.62)]. In general, medication errors at the administration phase were highly prevalent at Felege Hiwot Referral Hospital, and documentation error was the most dominant type observed. Increasing nurse staffing levels and minimizing distractions and interruptions during medication administration, for example with no-interruption zones and "No-Talk" signage, are recommended to reduce medication administration errors. Retaining experienced nurses so that they can train and supervise inexperienced nurses, with a focus on medication safety, and providing adequate sleep hours for nurses would also help reduce the frequency of medication errors observed in this study.
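The adjusted odds ratios above come from multivariable logistic regression; the simpler crude (unadjusted) odds ratio with a Wald 95 % confidence interval can be computed from a 2x2 table as sketched below. The counts in the usage line are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR) under the Wald approximation
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 30/10 errors among interrupted administrations,
# 20/40 among uninterrupted ones
or_, lo, hi = odds_ratio_ci(30, 10, 20, 40)
```

A CI that excludes 1.0, as in all the associations reported above, indicates a statistically significant association at the 5 % level.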
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous-tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information, and use the total squared error in a luminance/chrominance-based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement-based color printer dot interaction model to prevent artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in the resulting halftones. We present color halftones that demonstrate the efficacy of our method.
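The DBS iteration described above can be illustrated with a 1-D grayscale toy: toggle each pixel and keep the toggle only if the low-pass-filtered (perceived) squared error decreases. This is a minimal sketch; the 3-tap blur standing in for the human visual system model, and the restriction to toggles without swaps, color channels, or a printer dot model, are simplifying assumptions.

```python
import random

random.seed(1)

def blur(x):
    # crude 3-tap stand-in for the human visual system's low-pass response
    n = len(x)
    return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
            for i in range(n)]

def sq_err(half, target):
    # total squared error between the filtered halftone and filtered original
    return sum((a - b) ** 2 for a, b in zip(blur(half), blur(target)))

def dbs_1d(cont, iters=20):
    """Toy 1-D direct binary search: greedily toggle pixels while the
    perceived (filtered) squared error keeps decreasing."""
    half = [1.0 if random.random() < c else 0.0 for c in cont]  # initial halftone
    err = sq_err(half, cont)
    for _ in range(iters):
        improved = False
        for i in range(len(half)):
            half[i] = 1.0 - half[i]      # trial toggle
            e = sq_err(half, cont)
            if e < err:
                err, improved = e, True  # accept the toggle
            else:
                half[i] = 1.0 - half[i]  # revert
        if not improved:
            break                        # converged: no toggle helps
    return half, err

halftone, final_err = dbs_1d([0.5] * 16)
```

Each accepted toggle strictly lowers the error, so the search converges; in the full algorithm the same loop runs in 2-D per colorant plane with pairwise swaps as well as toggles.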
Minimizing pulling geometry errors in atomic force microscope single molecule force spectroscopy.
Rivera, Monica; Lee, Whasil; Ke, Changhong; Marszalek, Piotr E; Cole, Daniel G; Clark, Robert L
2008-10-01
In atomic force microscopy-based single molecule force spectroscopy (AFM-SMFS), it is assumed that the pulling angle is negligible and that the force applied to the molecule is equivalent to the force measured by the instrument. Recent studies, however, have indicated that the pulling geometry errors can drastically alter the measured force-extension relationship of molecules. Here we describe a software-based alignment method that repositions the cantilever such that it is located directly above the molecule's substrate attachment site. By aligning the applied force with the measurement axis, the molecule is no longer undergoing combined loading, and the full force can be measured by the cantilever. Simulations and experimental results verify the ability of the alignment program to minimize pulling geometry errors in AFM-SMFS studies.
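The combined-loading error described above has a simple geometric core: the cantilever measures only the vertical force component, so when the tip sits laterally offset from the attachment site by angle theta, the force along the molecule is larger by a factor of 1/cos(theta). The sketch below shows that correction factor only; it is a simplified geometric model, not the paper's software alignment algorithm, and the function name and parameters are illustrative.

```python
import math

def true_molecular_force(f_measured, lateral_offset, height):
    """Estimate the force along the molecular tether when the cantilever tip
    is laterally offset from the substrate attachment point. The instrument
    reports only the vertical (z) component, so the tether force exceeds it
    by 1/cos(theta), where theta is the pulling angle from vertical."""
    theta = math.atan2(lateral_offset, height)
    return f_measured / math.cos(theta)
```

For example, with zero offset the measured and molecular forces coincide, while an offset equal to the tip-substrate height (a 45-degree pull) understates the molecular force by about 29 %, which is why repositioning the cantilever directly above the attachment site removes the error.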
Avoiding common pitfalls in qualitative data collection and transcription.
Easton, K L; McComish, J F; Greenberg, R
2000-09-01
The subjective nature of qualitative research necessitates scrupulous scientific methods to ensure valid results. Although qualitative methods such as grounded theory, phenomenology, and ethnography yield rich data, consumers of research need to be able to trust the findings reported in such studies. Researchers are responsible for establishing the trustworthiness of qualitative research in a variety of ways. Specific challenges faced in the field can seriously threaten the dependability of the data. However, by minimizing potential errors that can occur when doing fieldwork, researchers can increase the trustworthiness of the study. The purpose of this article is to present three of the pitfalls that can occur in qualitative research during data collection and transcription: equipment failure, environmental hazards, and transcription errors. Specific strategies to minimize the risk of avoidable errors are discussed.
[Ichthyofauna associated with a shallow reef in Morrocoy National Park, Venezuela].
López-Ordaz, A; Rodríguez-Quintal, J G
2010-10-01
Ichthyofauna associated with a shallow reef in Morrocoy National Park, Venezuela. Morrocoy National Park is one of the most studied coastal marine environments in Venezuela; however, efforts have been concentrated in the south zone. In this study we selected a shallow reef located in the north zone, characterized the benthic community, and studied the structure of the fish community using visual censuses. The benthic community was dominated by dead coral covered by algae (31%), and live coral coverage was 12%. A total of 65 fish species belonging to 24 families were recorded, with Pomacentridae (43%), Scaridae (19%) and Haemulidae (15%) being the most abundant families. Significant differences in fish species abundances were found along the depth gradient, which could be related to habitat characteristics; nevertheless, dominance of herbivorous species was evident at all depth strata. There seems to be a trend toward greater richness and density in the south zone reefs; these differences may be related to the presence of extensive seagrass meadows and mangrove forests in that area, or to differences in recruitment patterns.