Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10⁰ to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
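As a hedged illustration of the regression approach described in this abstract, the sketch below fits a log-log multiple regression of mean annual flow on a few catchment descriptors; the data, predictor set, and coefficients are synthetic stand-ins, not the study's calibration dataset.

```python
# Minimal sketch of a log-log regression of mean annual flow (AF) on catchment
# descriptors. All data and coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
area = 10 ** rng.uniform(0, 6, n)          # catchment area, km^2
precip = rng.uniform(200, 3000, n)         # mean annual precipitation, mm
slope = rng.uniform(0.1, 30, n)            # catchment-averaged slope, degrees

# Synthetic "observed" mean AF with log-normal noise (for demonstration only)
af_obs = 1e-3 * area**0.95 * (precip / 1000) ** 1.5 * np.exp(rng.normal(0, 0.3, n))

# Design matrix for log(AF) ~ log(area) + log(precip) + log(slope)
X = np.column_stack([np.ones(n), np.log(area), np.log(precip), np.log(slope)])
y = np.log(af_obs)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print("coefficients:", np.round(beta, 3), " variance explained:", round(r2, 3))
```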
Chappell, Nick A; Jones, Timothy D; Tych, Wlodek
2017-10-15
Insufficient temporal monitoring of water quality in streams or engineered drains alters the apparent shape of storm chemographs, resulting in shifted model parameterisations and changed interpretations of solute sources that have produced episodes of poor water quality. This so-called 'aliasing' phenomenon is poorly recognised in water research. Using advances in in-situ sensor technology, it is now possible to monitor sufficiently frequently to avoid the onset of aliasing. A systems modelling procedure is presented allowing objective identification of sampling rates needed to avoid aliasing within strongly rainfall-driven chemical dynamics. In this study aliasing of storm chemograph shapes was quantified by changes in the time constant parameter (TC) of transfer functions. As a proportion of the original TC, the onset of aliasing varied between watersheds, ranging from 3.9-7.7 to 54-79 %TC (or 110-160 to 300-600 min). However, a minimum monitoring rate could be identified for all datasets if the modelling results were presented in the form of a new statistic, ΔTC. For the eight H⁺, DOC and NO₃-N datasets examined from a range of watershed settings, an empirically-derived threshold of 1.3(ΔTC) could be used to quantify minimum monitoring rates within sampling protocols to avoid artefacts in subsequent data analysis. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
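The sketch below illustrates the aliasing mechanism described above under simplified assumptions: a first-order transfer function is fitted to a synthetic rainfall-driven series at full resolution and again after subsampling, and the shift in the fitted time constant is reported. It is a toy example, not the authors' transfer-function identification procedure, and all parameter values are illustrative.

```python
# Toy illustration of aliasing: refit y[k] = a*y[k-1] + b*u[k-1] at coarser
# sampling intervals and watch the fitted time constant TC = -dt/ln(a) drift.
import numpy as np

rng = np.random.default_rng(1)
dt = 5.0                                                 # minutes between samples
n = 2000
u = (rng.random(n) < 0.05) * rng.exponential(5.0, n)     # sporadic rainfall input

a_true, b_true = np.exp(-dt / 120.0), 0.4                # true TC = 120 min
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

def fit_tc(series, forcing, step):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1]; return TC in minutes."""
    A = np.column_stack([series[:-1], forcing[:-1]])
    a, b = np.linalg.lstsq(A, series[1:], rcond=None)[0]
    return -step / np.log(a)

tc_full = fit_tc(y, u, dt)
for m in (2, 6, 12):                                     # subsample every m-th point
    tc_sub = fit_tc(y[::m], u[::m], dt * m)
    print(f"sampling every {dt*m:.0f} min: TC = {tc_sub:6.1f} min "
          f"(shift from full-resolution TC = {abs(tc_sub - tc_full):5.1f} min)")
```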
NASA Astrophysics Data System (ADS)
Hunt, M. J.; Nuttle, W. K.; Cosby, B. J.; Marshall, F. E.
2005-05-01
Establishing minimum flow requirements in aquatic ecosystems is one way to stipulate controls on water withdrawals in a watershed. The basis of the determination is to identify the amount of flow needed to sustain a threshold ecological function. To develop minimum flow criteria, an understanding of ecological response in relation to flow is essential. Several steps are needed, including: (1) identification of important resources and ecological functions, (2) compilation of available information, (3) determination of historical conditions, (4) establishment of technical relationships between inflow and resources, and (5) identification of numeric criteria that reflect the threshold at which resources are harmed. The process is interdisciplinary, requiring the integration of hydrologic and ecologic principles with quantitative assessments. The tools used to quantify the ecological response, and key questions related to how the quantity of flow influences the ecosystem, are examined by comparing minimum flow determination in two different aquatic systems in South Florida. Each system is characterized by substantial hydrologic alteration. The first, the Caloosahatchee River, is a riverine system located on the southwest coast of Florida. The second, the Everglades-Florida Bay ecotone, is a wetland mangrove ecosystem located on the southern tip of the Florida peninsula. In both cases freshwater submerged aquatic vegetation (Vallisneria americana or Ruppia maritima), located in areas of the saltwater-freshwater interface, has been identified as a basis for minimum flow criteria. The integration of field studies, laboratory studies, and literature review was required. From this information we developed ecological modeling tools to quantify and predict plant growth in response to varying environmental variables. Coupled with hydrologic modeling tools, questions relating to the quantity and timing of flow and the ecological consequences in relation to normal variability are addressed.
From brittle to ductile fracture in disordered materials.
Picallo, Clara B; López, Juan M; Zapperi, Stefano; Alava, Mikko J
2010-10-08
We introduce a lattice model able to describe damage and yielding in heterogeneous materials ranging from brittle to ductile ones. Ductile fracture surfaces, obtained when the system breaks once the strain is completely localized, are shown to correspond to minimum energy surfaces. The similarity of the resulting fracture paths to the limits of brittle fracture or minimum energy surfaces is quantified. The model exhibits a smooth transition from brittleness to ductility. The dynamics of yielding exhibits avalanches with a power-law distribution.
USDA-ARS?s Scientific Manuscript database
Quantifying magnitudes and frequencies of rainless times between storms (TBS), or storm occurrence, is required for generating continuous sequences of precipitation for modeling inputs to small watershed models for conservation studies. Two parameters characterize TBS, minimum TBS (MTBS) and averag...
Ho, Kai-Yu; Keyak, Joyce H; Powers, Christopher M
2014-01-03
Elevated bone principal strain (an indicator of potential bone injury) resulting from reduced cartilage thickness has been suggested to contribute to patellofemoral symptoms. However, research linking patella bone strain, articular cartilage thickness, and patellofemoral pain (PFP) remains limited. The primary purpose was to determine whether females with PFP exhibit elevated patella bone strain when compared to pain-free controls. A secondary objective was to determine the influence of patella cartilage thickness on patella bone strain. Ten females with PFP and 10 gender, age, and activity-matched pain-free controls participated. Patella bone strain fields were quantified utilizing subject-specific finite element (FE) models of the patellofemoral joint (PFJ). Input parameters for the FE model included (1) PFJ geometry, (2) elastic moduli of the patella bone, (3) weight-bearing PFJ kinematics, and (4) quadriceps muscle forces. Using quasi-static simulations, peak and average minimum principal strains as well as peak and average maximum principal strains were quantified. Cartilage thickness was quantified by computing the perpendicular distance between opposing voxels defining the cartilage edges on axial plane magnetic resonance images. Compared to the pain-free controls, individuals with PFP exhibited increased peak and average minimum and maximum principal strain magnitudes in the patella. Additionally, patella cartilage thickness was negatively associated with peak minimum principal patella strain and peak maximum principal patella strain. The elevated bone strain magnitudes resulting from reduced cartilage thickness may contribute to patellofemoral symptoms and bone injury in persons with PFP. © 2013 Published by Elsevier Ltd.
Three-dimensional modeling and animation of two carpal bones: a technique.
Green, Jason K; Werner, Frederick W; Wang, Haoyu; Weiner, Marsha M; Sacks, Jonathan M; Short, Walter H
2004-05-01
The objectives of this study were to (a) create 3D reconstructions of two carpal bones from single CT data sets and animate these bones with experimental in vitro motion data collected during dynamic loading of the wrist joint, (b) develop a technique to calculate the minimum interbone distance between the two carpal bones, and (c) validate the interbone distance calculation process. This method utilized commercial software to create the animations and an in-house program to interface with three-dimensional CAD software to calculate the minimum distance between the irregular geometries of the bones. This interbone minimum distance provides quantitative information regarding the motion of the bones studied and may help to understand and quantify the effects of ligamentous injury.
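A minimal sketch of the minimum interbone distance step described above, assuming the two bone surfaces are available as vertex clouds; the random vertices stand in for CT-derived meshes, and a k-d tree query is one straightforward way to find the closest pair.

```python
# Closest vertex-to-vertex distance between two bone surfaces at one frame.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
scaphoid = rng.normal([0, 0, 0], 3.0, size=(5000, 3))   # vertices of bone 1 (mm)
lunate   = rng.normal([8, 0, 0], 3.0, size=(5000, 3))   # vertices of bone 2 (mm)

def minimum_interbone_distance(verts_a, verts_b):
    """Minimum vertex-to-vertex distance between two bone surfaces (mm)."""
    tree = cKDTree(verts_b)
    d, _ = tree.query(verts_a)      # nearest neighbour in B for every vertex of A
    return d.min()

print(f"minimum interbone distance: {minimum_interbone_distance(scaphoid, lunate):.2f} mm")
```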
Energy Efficient Operation of Ammonia Refrigeration Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Abdul Qayyum; Wenning, Thomas J; Sever, Franc
Ammonia refrigeration systems typically offer many energy efficiency opportunities because of their size and complexity. This paper develops a model for simulating single-stage ammonia refrigeration systems, describes common energy saving opportunities, and uses the model to quantify those opportunities. The simulation model uses data that are typically available during site visits to ammonia refrigeration plants and can be calibrated to actual consumption and performance data if available. Annual electricity consumption for a base-case ammonia refrigeration system is simulated. The model is then used to quantify energy savings for six specific energy efficiency opportunities: reduce refrigeration load, increase suction pressure, employ dual suction, decrease minimum head pressure set-point, increase evaporative condenser capacity, and reclaim heat. Methods and considerations for achieving each saving opportunity are discussed. The model captures synergistic effects that result when more than one component or parameter is changed. This methodology represents an effective method to model and quantify common energy saving opportunities in ammonia refrigeration systems. The results indicate the range of savings that might be expected from common energy efficiency opportunities.
Wisneski, Kimberly J; Johnson, Michelle J
2007-03-23
Robotic therapy is at the forefront of stroke rehabilitation. The Activities of Daily Living Exercise Robot (ADLER) was developed to improve carryover of gains after training by combining the benefits of Activities of Daily Living (ADL) training (motivation and functional task practice with real objects), with the benefits of robot mediated therapy (repeatability and reliability). In combining these two therapy techniques, we seek to develop a new model for trajectory generation that will support functional movements to real objects during robot training. We studied natural movements to real objects and report on how initial reaching movements are affected by real objects and how these movements deviate from the straight line paths predicted by the minimum jerk model, typically used to generate trajectories in robot training environments. We highlight key issues that need to be considered in modelling natural trajectories. Movement data were collected as eight normal subjects completed ADLs such as drinking and eating. Three conditions were considered: object absent, imagined, and present. These data were compared to predicted trajectories generated from implementing the minimum jerk model. The deviations in both the plane of the table (XY) and the sagittal plane of torso (XZ) were examined for both reaches to a cup and to a spoon. Velocity profiles and curvature were also quantified for all trajectories. We hypothesized that movements performed with functional task constraints and objects would deviate from the minimum jerk trajectory model more than those performed under imaginary or object absent conditions. Trajectory deviations from the predicted minimum jerk model for these reaches were shown to depend on three variables: object presence, object orientation, and plane of movement. When subjects completed the cup reach their movements were more curved than for the spoon reach. The object present condition for the cup reach showed more curvature than in the object imagined and absent conditions. Curvature in the XZ plane of movement was greater than curvature in the XY plane for all movements. The implemented minimum jerk trajectory model was not adequate for generating functional trajectories for these ADLs. The deviations caused by object affordance and functional task constraints must be accounted for in order to allow subjects to perform functional task training in robotic therapy environments. The major differences that we have highlighted include trajectory dependence on: object presence, object orientation, and the plane of movement. With the ability to practice ADLs on the ADLER environment we hope to provide patients with a therapy paradigm that will produce optimal results and recovery.
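For reference, the straight-line minimum jerk model that the observed reaches were compared against can be sketched with the standard fifth-order polynomial time profile; the start and goal points below are arbitrary examples, not data from the study.

```python
# Standard straight-line minimum jerk trajectory between two 3-D points.
import numpy as np

def minimum_jerk(start, goal, duration, n_samples=100):
    """Minimum jerk trajectory: straight path, fifth-order smooth time profile."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    tau = np.linspace(0.0, 1.0, n_samples)[:, None]           # normalised time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5                # smooth 0 -> 1 profile
    return start + (goal - start) * s

# Example: reach from the table edge toward a cup 30 cm away in 1.2 s
traj = minimum_jerk([0.0, 0.0, 0.0], [0.30, 0.10, 0.05], 1.2)
speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) / (1.2 / (len(traj) - 1))
print("peak speed (m/s):", round(speed.max(), 3))   # bell-shaped velocity profile
```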
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds, observed in situ by the spacecraft Wind, at the Lagrangian point L1, from 22 August, 1995, to 7 November, 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-null impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
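A hedged sketch of the minimum variance analysis step mentioned above: the eigenvectors of the magnetic-field covariance matrix give the minimum, intermediate, and maximum variance directions, with the intermediate direction often used as a proxy for the cloud axis for an axis-crossing trajectory. The field samples below are synthetic, not Wind data.

```python
# Minimum variance analysis (MVA) on a synthetic flux-rope-like field series.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(-1, 1, 500)
B = np.column_stack([np.cos(0.5 * np.pi * t),        # axial-like component
                     np.sin(0.5 * np.pi * t),        # rotating transverse component
                     0.2 * t]) + rng.normal(0, 0.02, (500, 3))

M = np.cov(B, rowvar=False)                  # 3x3 magnetic variance matrix
eigvals, eigvecs = np.linalg.eigh(M)         # eigenvalues in ascending order
minimum_variance_dir = eigvecs[:, 0]         # minimum variance (normal-like) direction
intermediate_dir = eigvecs[:, 1]             # often taken as a cloud-axis proxy
print("eigenvalues:", np.round(eigvals, 3))
print("minimum variance direction:", np.round(minimum_variance_dir, 3))
```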
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
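A hedged sketch of the ARIMA-with-regressors workflow described above, using statsmodels' SARIMAX as one possible implementation; the synthetic monthly series, the single covariate, and the model order are illustrative, not the study's data or its selected model.

```python
# Toy ARIMA-with-exogenous-regressors fit and MAPE evaluation on a hold-out set.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
months = pd.date_range("2005-01-01", periods=57, freq="MS")
max_temp = 25 + 8 * np.sin(2 * np.pi * np.arange(57) / 12) + rng.normal(0, 1, 57)
revenue = 1000 + 20 * max_temp + rng.normal(0, 30, 57)   # toy positive association

endog = pd.Series(revenue, index=months)
exog = pd.DataFrame({"max_temp": max_temp}, index=months)

train, test = slice(0, 48), slice(48, 57)
model = SARIMAX(endog.iloc[train], exog=exog.iloc[train], order=(1, 0, 1))
result = model.fit(disp=False)
forecast = result.forecast(steps=9, exog=exog.iloc[test])

# Mean absolute percentage error on the hold-out months
mape = np.mean(np.abs((endog.iloc[test] - forecast.values) / endog.iloc[test])) * 100
print(f"MAPE on hold-out months: {mape:.1f}%")
```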
A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.
Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang
2014-07-31
In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure contactless absolute angle. There are various error factors influencing the measuring accuracy, which are difficult to locate after the assembly of the encoder. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational burden. The simulation and experimental results show that this diagnosis method is feasible to quantify the causes of the error and to reduce the number of iterations significantly.
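A minimal particle swarm optimization sketch in the spirit of the diagnosis method above: a swarm searches the space of hypothesised error factors for the combination minimising a residual angle error. The objective function below is a stand-in, not the paper's Hall-plate error model.

```python
# Minimal PSO minimising a toy residual angle error over two "error factors".
import numpy as np

def residual_angle_error(params):
    """Toy objective: RMS angle error as a function of an offset and a gain factor."""
    offset, gain = params
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    measured = angles + 0.02 * np.sin(angles) + 0.01          # "true" distortion
    modelled = angles + gain * np.sin(angles) + offset
    return np.sqrt(np.mean((measured - modelled) ** 2))

def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(5)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, err = pso(residual_angle_error, bounds=[(-0.05, 0.05), (-0.05, 0.05)])
print("estimated error factors:", np.round(best, 4), " residual:", round(err, 6))
```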
Structure for identifying, locating and quantifying physical phenomena
Richardson, John G.
2006-10-24
A method and system for detecting, locating and quantifying a physical phenomena such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.
Modeling a simple coronal streamer during whole sun month
NASA Technical Reports Server (NTRS)
Gibson, S. E.; Bagenal, F.; Biesecker, D.; Guhathakurta, M.; Hoeksema, J. T.; Thompson, B. J.
1997-01-01
The solar minimum streamer structure observed during the Whole Sun Month was modeled. The Van de Hulst inversion was used in order to determine the coronal electron density profiles and scale-height temperature profiles. The axisymmetric magnetostatic model of Gibson, Bagenal and Low was also used. The density, temperature, and magnetic field distribution were quantified using both coronal white light data and photospheric magnetic field data from the Wilcox Solar Observatory. The densities and temperatures obtained by the Van de Hulst and magnetostatic models are compared, and the magnetic field predicted by the magnetostatic model is compared to a potential field extrapolated from the photosphere.
A Mueller matrix model of Haidinger's brushes.
Misson, Gary P
2003-09-01
Stokes vectors and Mueller matrices are used to model the polarisation properties (birefringence, dichroism and depolarisation) of any optical system, in particular the human eye. An explanation of the form and behaviour of the entoptic phenomenon of Haidinger's brushes is derived that complements and expands upon a previous study. The relationship between the appearance of Haidinger's brushes and intrinsic ocular retardation is quantified and the model allows prediction of the effect of any retarder of any orientation placed between a source of polarised light and the eye. The simple relationship of minimum contrast of Haidinger's brushes to the cosine of total retardation is derived.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kok, H. Petra, E-mail: H.P.Kok@amc.uva.nl; Crezee, Johannes; Franken, Nicolaas A.P.
2014-03-01
Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy⁻¹) and β (Gy⁻²) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from different treatment modalities.
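A hedged sketch of the voxel-wise linear-quadratic conversion described above: for a given per-fraction dose and planned temperature, solve for the dose without hyperthermia that yields the same biological effect. The parameter values and the assumed linear α(T) dependence are placeholders, not the study's literature-derived estimates.

```python
# Toy per-voxel LQ equivalence: equieffective dose without hyperthermia.
import numpy as np

ALPHA_37, BETA = 0.15, 0.05          # Gy^-1, Gy^-2 at normothermia (illustrative)

def alpha_at_temperature(temp_c, slope=0.02):
    """Assumed linear increase of alpha with temperature above 37 degrees C."""
    return ALPHA_37 + slope * np.maximum(temp_c - 37.0, 0.0)

def equivalent_dose(dose_per_fraction, n_fractions, temp_c):
    """Equieffective total dose without hyperthermia, per voxel."""
    alpha_t = alpha_at_temperature(temp_c)
    effect = alpha_t * dose_per_fraction + BETA * dose_per_fraction ** 2
    # Solve ALPHA_37 * d_eq + BETA * d_eq^2 = effect for the positive root
    d_eq = (-ALPHA_37 + np.sqrt(ALPHA_37 ** 2 + 4 * BETA * effect)) / (2 * BETA)
    return n_fractions * d_eq

# Example voxel: 2 Gy x 35 fractions at a planned temperature of 41.6 C
print(f"equivalent dose: {equivalent_dose(2.0, 35, 41.6):.1f} Gy")
```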
An Air Revitalization Model (ARM) for Regenerative Life Support Systems (RLSS)
NASA Technical Reports Server (NTRS)
Hart, Maxwell M.
1990-01-01
The primary objective of the air revitalization model (ARM) is to determine the minimum buffer capacities that would be necessary for long duration space missions. Several observations are supported by the current configuration sizes: the baseline values for each gas and the day-to-day or month-to-month fluctuations that are allowed. The baseline values depend on the minimum safety tolerances and the quantities of life support consumables necessary to survive the worst case scenarios within those tolerances. Most, if not all, of these quantities can easily be determined by ARM once these tolerances are set. The day-to-day fluctuations also require a command decision. It is already apparent from the current configuration of ARM that the tighter these fluctuations are controlled, the more energy used, the more nonregenerable hydrazine consumed, and the larger the required capacities for the various gas generators. All of these relationships could clearly be quantified by one operational ARM.
Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.
2009-01-01
With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values for depth levels whose layer thickness increased geometrically with depth, which resulted in mean-resistivity values biased towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units. The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
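The contrast between the mean and minimum-based methods can be sketched for a single vertical column of inverted resistivity values; the resistivity-to-leakage-potential scaling below is illustrative, not the report's geostatistical procedure.

```python
# Mean vs. minimum-resistivity leakage potential for one inverted column.
import numpy as np

# Inverted resistivity (ohm-m), shallow to deep; a low-resistivity (clay-like)
# confining unit sits beneath more permeable, higher-resistivity layers.
resistivity = np.array([120.0, 150.0, 90.0, 15.0, 18.0, 200.0])

mean_based = resistivity.mean()        # biased upward by the permeable layers
minimum_based = resistivity.min()      # lets the confining unit control leakage

def relative_leakage_potential(rho, rho_clay=10.0, rho_sand=300.0):
    """Scale resistivity to a 0-1 relative leakage potential (higher = leakier)."""
    return np.clip((rho - rho_clay) / (rho_sand - rho_clay), 0.0, 1.0)

print("mean method:   ", round(relative_leakage_potential(mean_based), 2))
print("minimum method:", round(relative_leakage_potential(minimum_based), 2))
```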
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul L; Brinkman, Gregory L; Mai, Trieu T
One of the significant limitations of solar and wind deployment is declining value caused by the limited correlation of renewable energy supply and electricity demand as well as limited flexibility of the power system. Limited flexibility can result from thermal and hydro plants that cannot turn off or reduce output due to technical or economic limits. These limits include the operating range of conventional thermal power plants, the need for process heat from combined heat and power plants, and restrictions on hydro unit operation. To appropriately analyze regional and national energy policies related to renewable deployment, these limits must be accurately captured in grid planning models. In this work, we summarize data sources and methods for U.S. power plants that can be used to capture minimum generation levels in grid planning tools, such as production cost models. We also provide case studies for two locations in the U.S. (California and Texas) that demonstrate the sensitivity of variable generation (VG) curtailment to grid flexibility assumptions, which shows the importance of analyzing (and documenting) minimum generation levels in studies of increased VG penetration.
The Surface Density Distribution in the Solar Nebula
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
2004-01-01
The commonly used minimum mass power law representation of the pre-solar nebula is reanalyzed using a new cumulative-mass model. This model predicts a smoother surface density approximation compared with methods based on direct computation of surface density. The density is quantified using two independent analytical formulations. First, a best-fit transcendental function is applied directly to the basic planetary data. Next, a solution to the time-dependent disk evolution equation is parametrically adapted to the solar nebula data. The latter model is shown to be a good approximation to the finite-size early Solar Nebula, and by extension to other extra-solar protoplanetary disks.
Input relegation control for gross motion of a kinematically redundant manipulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.
1992-10-01
This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
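A minimal sketch of the minimum Euclidean norm resolution mentioned above, using the Moore-Penrose pseudoinverse of the Jacobian, which gives the minimum-norm joint velocities satisfying the underspecified kinematic velocity model; the planar three-joint arm is an illustrative stand-in for the manipulator considered in the report.

```python
# Minimum-norm redundancy resolution for a planar 3R arm via the pseudoinverse.
import numpy as np

def planar_jacobian(q, link_lengths=(0.4, 0.3, 0.2)):
    """2x3 Jacobian of a planar 3R arm (x, y velocity of the tip)."""
    l1, l2, l3 = link_lengths
    s = np.cumsum(q)                        # absolute link angles
    jx = -np.array([l1 * np.sin(s[0]) + l2 * np.sin(s[1]) + l3 * np.sin(s[2]),
                    l2 * np.sin(s[1]) + l3 * np.sin(s[2]),
                    l3 * np.sin(s[2])])
    jy = np.array([l1 * np.cos(s[0]) + l2 * np.cos(s[1]) + l3 * np.cos(s[2]),
                   l2 * np.cos(s[1]) + l3 * np.cos(s[2]),
                   l3 * np.cos(s[2])])
    return np.vstack([jx, jy])

q = np.array([0.3, 0.5, -0.2])              # joint angles (rad)
v_desired = np.array([0.1, -0.05])          # desired tip velocity (m/s)

J = planar_jacobian(q)
q_dot = np.linalg.pinv(J) @ v_desired       # minimum Euclidean norm joint velocities
print("joint velocities:", np.round(q_dot, 4))
print("tip velocity check:", np.round(J @ q_dot, 4))
```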
NASA Technical Reports Server (NTRS)
Swei, Sean
2014-01-01
We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constrained-based energy-minimum trajectory optimization for guidance development operating in complex environments.
Minimum depth of investigation for grounded-wire TEM due to self-transients
NASA Astrophysics Data System (ADS)
Zhou, Nannan; Xue, Guoqiang
2018-05-01
The grounded-wire transient electromagnetic method (TEM) has been widely used for near-surface metalliferous prospecting, oil and gas exploration, and hydrogeological surveying in the subsurface. However, it is commonly observed that the TEM signal is contaminated by the self-transient process that occurs at the early stage of data acquisition. Correspondingly, there exists a minimum depth of investigation, above which the observed signal is not applicable for reliable data processing and interpretation. Therefore, for achieving a more comprehensive understanding of the TEM method, it is necessary to perform research on the self-transient process and moreover develop an approach for quantifying the minimum detection depth. In this paper, we first analyze the temporal behavior of the equivalent circuit of the TEM method and present a theoretical equation for estimating the self-induction voltage based on the inductance of the transmitting wire. Then, numerical modeling is applied to build the relationship between the minimum depth of investigation and various properties, including resistivity of the earth, offset, and source length. This provides a guide for the design of survey parameters when grounded-wire TEM is applied to shallow detection. Finally, it is verified through applications to a coal field in China.
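A hedged back-of-the-envelope sketch of the reasoning above: the transmitter behaves like an RL circuit whose self-transient decays exponentially, and the TEM diffusion depth at the earliest usable time gate gives one estimate of the minimum depth of investigation. Circuit and earth parameters below are illustrative, not the paper's values.

```python
# Minimum depth of investigation set by the self-transient decay (toy estimate).
import numpy as np

MU0 = 4e-7 * np.pi

def minimum_depth(inductance=2e-3, resistance=10.0, v0=100.0, noise=1e-4,
                  resistivity=100.0):
    """Minimum investigation depth (m) from the RL self-transient decay time."""
    t_min = (inductance / resistance) * np.log(v0 / noise)   # decay to noise floor
    return np.sqrt(2.0 * t_min * resistivity / MU0)          # TEM diffusion depth

for rho in (10.0, 100.0, 1000.0):
    print(f"earth resistivity {rho:6.0f} ohm-m -> "
          f"minimum depth ~ {minimum_depth(resistivity=rho):6.0f} m")
```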
Gaussian intrinsic entanglement for states with partial minimum uncertainty
NASA Astrophysics Data System (ADS)
Mišta, Ladislav; Baksová, Klára
2018-01-01
We develop a recently proposed theory of a quantifier of bipartite Gaussian entanglement called Gaussian intrinsic entanglement (GIE) [L. Mišta, Jr. and R. Tatham, Phys. Rev. Lett. 117, 240505 (2016), 10.1103/PhysRevLett.117.240505]. Gaussian intrinsic entanglement provides a compromise between computable and physically meaningful entanglement quantifiers and so far it has been calculated for two-mode Gaussian states including all symmetric partial minimum-uncertainty states, weakly mixed asymmetric squeezed thermal states with partial minimum uncertainty, and weakly mixed symmetric squeezed thermal states. We improve the method of derivation of GIE and show that all previously derived formulas for GIE of weakly mixed states in fact hold for states with higher mixedness. In addition, we derive analytical formulas for GIE for several other classes of two-mode Gaussian states with partial minimum uncertainty. Finally, we show that, like for all previously known states, also for all currently considered states the GIE is equal to Gaussian Rényi-2 entanglement of formation. This finding strengthens a conjecture about the equivalence of GIE and Gaussian Rényi-2 entanglement of formation for all bipartite Gaussian states.
Meteorological variables and bacillary dysentery cases in Changsha City, China.
Gao, Lu; Zhang, Ying; Ding, Guoyong; Liu, Qiyong; Zhou, Maigeng; Li, Xiujun; Jiang, Baofa
2014-04-01
This study aimed to investigate the association between meteorological-related risk factors and bacillary dysentery in a subtropical inland Chinese area: Changsha City. The cross-correlation analysis and the Autoregressive Integrated Moving Average with Exogenous Variables (ARIMAX) model were used to quantify the relationship between meteorological factors and the incidence of bacillary dysentery. Monthly mean temperature, mean relative humidity, mean air pressure, mean maximum temperature, and mean minimum temperature were significantly correlated with the number of bacillary dysentery cases with a 1-month lagged effect. The ARIMAX models suggested that a 1°C rise in mean temperature, mean maximum temperature, and mean minimum temperature might lead to 14.8%, 12.9%, and 15.5% increases in the incidence of bacillary dysentery disease, respectively. Temperature could be used as a forecast factor for the increase of bacillary dysentery in Changsha. More public health actions should be taken to prevent the increase of bacillary dysentery disease with consideration of local climate conditions, especially temperature.
NASA Astrophysics Data System (ADS)
Ahmadalipour, Ali; Moradkhani, Hamid; Rana, Arun
2017-04-01
Uncertainty is an inevitable feature of climate change impact assessments. Understanding and quantifying different sources of uncertainty is of high importance, which can help modeling agencies improve the current models and scenarios. In this study, we have assessed the future changes in three climate variables (i.e. precipitation, maximum temperature, and minimum temperature) over 10 sub-basins across the Pacific Northwest US. To conduct the study, 10 statistically downscaled CMIP5 GCMs from two downscaling methods (i.e. BCSD and MACA) were utilized at 1/16 degree spatial resolution for the historical period of 1970-2000 and future period of 2010-2099. For the future projections, two future scenarios of RCP4.5 and RCP8.5 were used. Furthermore, Bayesian Model Averaging (BMA) was employed to develop a probabilistic future projection for each climate variable. Results indicate superiority of BMA simulations compared to individual models. Increasing temperature and precipitation are projected at annual timescale. However, the changes are not uniform among different seasons. Model uncertainty shows to be the major source of uncertainty, while downscaling uncertainty significantly contributes to the total uncertainty, especially in summer.
Computational fluid dynamics endpoints to characterize obstructive sleep apnea syndrome in children
Luo, Haiyan; Persak, Steven C.; Sin, Sanghun; McDonough, Joseph M.; Isasi, Carmen R.; Arens, Raanan
2013-01-01
Computational fluid dynamics (CFD) analysis may quantify the severity of anatomical airway restriction in obstructive sleep apnea syndrome (OSAS) better than anatomical measurements alone. However, optimal CFD model endpoints to characterize or assess OSAS have not been determined. To model upper airway fluid dynamics using CFD and investigate the strength of correlation between various CFD endpoints, anatomical endpoints, and OSAS severity, in obese children with OSAS and controls. CFD models derived from magnetic resonance images were solved at subject-specific peak tidal inspiratory flow; pressure at the choanae was set by nasal resistance. Model endpoints included airway wall minimum pressure (Pmin), flow resistance in the pharynx (Rpharynx), and pressure drop from choanae to a minimum cross section where tonsils and adenoids constrict the pharynx (dPTAmax). Significance of endpoints was analyzed using paired comparisons (t-test or Wilcoxon signed rank test) and Spearman correlation. Fifteen subject pairs were analyzed. Rpharynx and dPTAmax were higher in OSAS than control and most significantly correlated to obstructive apnea-hypopnea index (oAHI), r = 0.48 and r = 0.49, respectively (P < 0.01). The correlation of airway minimum cross-sectional area to oAHI was weaker (r = −0.39); Pmin was not significantly correlated. CFD model endpoints based on pressure drops in the pharynx were more closely associated with the presence and severity of OSAS than pressures including nasal resistance, or anatomical endpoints. This study supports the usefulness of CFD to characterize anatomical restriction of the pharynx and as an additional tool to evaluate subjects with OSAS. PMID:24265282
NASA Astrophysics Data System (ADS)
Abaurrea, J.; Asín, J.; Cebrián, A. C.
2018-02-01
The occurrence of extreme heat events in maximum and minimum daily temperatures is modelled using a non-homogeneous common Poisson shock process. It is applied to five Spanish locations, representative of the most common climates over the Iberian Peninsula. The model is based on an excess over threshold approach and distinguishes three types of extreme events: only in maximum temperature, only in minimum temperature and in both of them (simultaneous events). It takes into account the dependence between the occurrence of extreme events in both temperatures and its parameters are expressed as functions of time and temperature related covariates. The fitted models allow us to characterize the occurrence of extreme heat events and to compare their evolution in the different climates during the observed period. This model is also a useful tool for obtaining local projections of the occurrence rate of extreme heat events under climate change conditions, using the future downscaled temperature trajectories generated by Earth System Models. The projections for 2031-60 under scenarios RCP4.5, RCP6.0 and RCP8.5 are obtained and analysed using the trajectories from four earth system models which have successfully passed a preliminary control analysis. Different graphical tools and summary measures of the projected daily intensities are used to quantify the climate change on a local scale. A high increase in the occurrence of extreme heat events, mainly in July and August, is projected in all the locations, all types of event and in the three scenarios, although in 2051-60 the increase is higher under RCP8.5. However, relevant differences are found between the evolution in the different climates and the types of event, with an especially high increase in the simultaneous ones.
Doses of Nearby Nature Simultaneously Associated with Multiple Health Benefits
Cox, Daniel T. C.; Shanahan, Danielle F.; Hudson, Hannah L.; Fuller, Richard A.; Anderson, Karen; Hancock, Steven; Gaston, Kevin J.
2017-01-01
Exposure to nature provides a wide range of health benefits. A significant proportion of these are delivered close to home, because this offers an immediate and easily accessible opportunity for people to experience nature. However, there is limited information to guide recommendations on its management and appropriate use. We apply a nature dose-response framework to quantify the simultaneous association between exposure to nearby nature and multiple health benefits. We surveyed ca. 1000 respondents in Southern England, UK, to determine relationships between (a) nature dose type, that is the frequency and duration (time spent in private green space) and intensity (quantity of neighbourhood vegetation cover) of nature exposure and (b) health outcomes, including mental, physical and social health, physical behaviour and nature orientation. We then modelled dose-response relationships between dose type and self-reported depression. We demonstrate positive relationships between nature dose and mental and social health, increased physical activity and nature orientation. Dose-response analysis showed that lower levels of depression were associated with minimum thresholds of weekly nature dose. Nearby nature is associated with quantifiable health benefits, with potential for lowering the human and financial costs of ill health. Dose-response analysis has the potential to guide minimum and optimum recommendations on the management and use of nearby nature for preventative healthcare. PMID:28208789
Quantitative Assessment of the CCMC's Experimental Real-time SWMF-Geospace Results
NASA Astrophysics Data System (ADS)
Liemohn, Michael; Ganushkina, Natalia; De Zeeuw, Darren; Welling, Daniel; Toth, Gabor; Ilie, Raluca; Gombosi, Tamas; van der Holst, Bart; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz
2016-04-01
Experimental real-time simulations of the Space Weather Modeling Framework (SWMF) are conducted at the Community Coordinated Modeling Center (CCMC), with results available there (http://ccmc.gsfc.nasa.gov/realtime.php), through the CCMC Integrated Space Weather Analysis (iSWA) site (http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/), and the Michigan SWMF site (http://csem.engin.umich.edu/realtime). Presently, two configurations of the SWMF are running in real time at CCMC, both focusing on the geospace modules, using the BATS-R-US magnetohydrodynamic model, the Ridley Ionosphere Model, and with and without the Rice Convection Model for inner magnetospheric drift physics. While both have been running for several years, nearly continuous results are available since July 2015. Dst from the model output is compared against the Kyoto real-time Dst, in particular the daily minimum value of Dst to quantify the ability of the model to capture storms. Contingency tables are presented, showing that the run with the inner magnetosphere model is much better at reproducing storm-time values. For disturbances with a minimum Dst lower than -50 nT, this version yields a probability of event detection of 0.86 and a Heidke Skill Score of 0.60. In the other version of the SWMF, without the inner magnetospheric module included, the modeled Dst never dropped below -50 nT during the examined epoch.
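The contingency-table verification used above can be sketched as follows; the probability of detection and Heidke Skill Score formulas are standard, while the counts are illustrative rather than the study's actual tallies.

```python
# 2x2 contingency-table verification for "minimum Dst < -50 nT" events:
# probability of detection (POD) and Heidke Skill Score (HSS).
def pod_and_hss(hits, misses, false_alarms, correct_negatives):
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)
    # Expected number of correct forecasts by chance, then the standard HSS form
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, hss

pod, hss = pod_and_hss(hits=18, misses=3, false_alarms=9, correct_negatives=150)
print(f"POD = {pod:.2f}, HSS = {hss:.2f}")
```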
Galactic cosmic ray transport methods and radiation quality issues
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.
1992-01-01
An overview of galactic cosmic ray (GCR) interaction and transport methods, as implemented in the Langley Research Center GCR transport code, is presented. Representative results for solar minimum, exo-magnetospheric GCR dose equivalents in water are presented on a component by component basis for various thicknesses of aluminum shielding. The impact of proposed changes to the currently used quality factors on exposure estimates and shielding requirements are quantified. Using the cellular track model of Katz, estimates of relative biological effectiveness (RBE) for the mixed GCR radiation fields are also made.
GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.
2014-01-01
This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic rays (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurements uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 and the Matthia GCR models are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. The BON2010 and BON2011 also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.
Analyzing a bioterror attack on the food supply: the case of botulinum toxin in milk.
Wein, Lawrence M; Liu, Yifan
2005-07-12
We developed a mathematical model of a cows-to-consumers supply chain associated with a single milk-processing facility that is the victim of a deliberate release of botulinum toxin. Because centralized storage and processing lead to substantial dilution of the toxin, a minimum amount of toxin is required for the release to do damage. Irreducible uncertainties regarding the dose-response curve prevent us from quantifying the minimum effective release. However, if terrorists can obtain enough toxin, and this may well be possible, then rapid distribution and consumption result in several hundred thousand poisoned individuals if detection from early symptomatics is not timely. Timely and specific in-process testing has the potential to eliminate the threat of this scenario at a cost of <1 cent per gallon and should be pursued aggressively. Investigation of improving the toxin inactivation rate of heat pasteurization without sacrificing taste or nutrition is warranted.
NASA Astrophysics Data System (ADS)
Feng, Bo; Ribeiro, Artur Lopes; Ramos, Helena Geirinhas
2018-04-01
This paper presents a study of the characteristics of Lamb wave (S0 mode) testing signals in carbon fiber composite laminates containing delaminations. The study was implemented using the commercial finite element simulation software ANSYS. The delamination signal is shown to be the superposition of the two waves travelling from the upper and lower sub-laminates. Dispersion curves for the two sub-laminates were calculated to show the difference between the phase velocities of the waves in the sub-laminates. Two models are specifically designed to obtain the phase difference between the waves that travel in each of the two sub-laminates. From the simulation results, it was found that the phase difference increases with the delamination length. Furthermore, the amplitude of the delamination signal first decreases and then starts to increase after reaching a minimum value. The minimum is reached when the waves from the two sub-laminates are 180° out of phase.
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
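A minimal sketch of the SSE-minimization workflow described above: a gradient-based optimiser fits a two-parameter model to quantified data, with random restarts as a simple guard against local minima. The decay model and data are illustrative.

```python
# Fit a two-parameter model by minimising the sum of squared errors (SSE),
# using L-BFGS-B with random multistart to avoid getting trapped in local minima.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 50)
data = 2.0 * np.exp(-0.5 * t) + rng.normal(0, 0.05, t.size)   # synthetic measurements

def sse(params):
    amplitude, rate = params
    prediction = amplitude * np.exp(-rate * t)
    return np.sum((data - prediction) ** 2)

best = None
for _ in range(20):                                   # random multistart
    x0 = rng.uniform([0.1, 0.01], [5.0, 2.0])
    result = minimize(sse, x0, method="L-BFGS-B", bounds=[(0.01, 10.0), (0.001, 5.0)])
    if best is None or result.fun < best.fun:
        best = result

print("estimated parameters:", np.round(best.x, 3), " SSE:", round(best.fun, 4))
```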
Information-theoretic measures of hydrogen-like ions in weakly coupled Debye plasmas
NASA Astrophysics Data System (ADS)
Zan, Li Rong; Jiao, Li Guang; Ma, Jia; Ho, Yew Kam
2017-12-01
Recent development of information theory provides researchers an alternative and useful tool to quantitatively investigate the variation of the electronic structure when atoms interact with the external environment. In this work, we make systematic studies on the information-theoretic measures for hydrogen-like ions immersed in weakly coupled plasmas modeled by Debye-Hückel potential. Shannon entropy, Fisher information, and Fisher-Shannon complexity in both position and momentum spaces are quantified in high accuracy for the hydrogen atom in a large number of stationary states. The plasma screening effect on embedded atoms can significantly affect the electronic density distributions, in both conjugate spaces, and it is quantified by the variation of information quantities. It is shown that the composite quantities (the Shannon entropy sum and the Fisher information product in combined spaces and Fisher-Shannon complexity in individual space) give a more comprehensive description of the atomic structure information than single ones. The nodes of wave functions play a significant role in the changes of composite information quantities caused by plasmas. With the continuously increasing screening strength, all composite quantities in circular states increase monotonously, while in higher-lying excited states where nodal structures exist, they first decrease to a minimum and then increase rapidly before the bound state approaches the continuum limit. The minimum represents the most reduction of uncertainty properties of the atom in plasmas. The lower bounds for the uncertainty product of the system based on composite information quantities are discussed. Our research presents a comprehensive survey in the investigation of information-theoretic measures for simple atoms embedded in Debye model plasmas.
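A minimal sketch of the position-space measures discussed above for a density sampled on a grid: Shannon entropy S = -∫ρ ln ρ dx and Fisher information I = ∫(ρ')²/ρ dx, evaluated here for a Gaussian stand-in rather than a hydrogenic density (the Gaussian has convenient closed-form checks).

```python
# Shannon entropy and Fisher information of a 1-D density on a grid.
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
sigma = 1.5
rho = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

shannon = -np.trapz(rho * np.log(rho), x)      # S = -integral of rho*ln(rho)
drho = np.gradient(rho, dx)
fisher = np.trapz(drho**2 / rho, x)            # I = integral of (rho')^2 / rho

# Gaussian closed forms used as checks: S = 0.5*ln(2*pi*e*sigma^2), I = 1/sigma^2
print(f"Shannon entropy: {shannon:.4f}  (exact {0.5*np.log(2*np.pi*np.e*sigma**2):.4f})")
print(f"Fisher information: {fisher:.4f}  (exact {1/sigma**2:.4f})")
```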
Faust, Matthew D.; Hansen, Michael J.
2016-01-01
To determine whether a consumption-oriented fishery was compatible with a trophy-oriented fishery for Muskellunge Esox masquinongy, we modeled effects of a spearing fishery and recreational angling fishery on population size structure (i.e., numbers of fish ≥ 102, 114, and 127 cm) in northern Wisconsin. An individual-based simulation model was used to quantify the effect of harvest mortality at currently observed levels of recreational angling and tribal spearing fishery exploitation, along with simulated increases in exploitation, for three typical growth potentials (i.e., low, moderate, and high) of Muskellunge in northern Wisconsin across a variety of minimum length limits (i.e., 71, 102, 114, and 127 cm). Populations with moderate to high growth potential and minimum length limits ≥ 114 cm were predicted to have lower declines in numbers of trophy Muskellunge when subjected to angling-only and mixed fisheries at observed and increased levels of exploitation, which suggested that fisheries with disparate motivations may be able to coexist under certain conditions such as restrictive length limits and low levels of exploitation. However, for most Muskellunge populations in northern Wisconsin regulated by a 102-cm minimum length limit, both angling and spearing fisheries may reduce numbers of trophy Muskellunge as larger declines were predicted across all growth potentials. Our results may be useful if Muskellunge management options in northern Wisconsin are re-examined in the future.
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand the Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. We computed the Kp error for each forecast (average, minimum, maximum) and each synoptic period. To quantify forecast performance, we then computed the mean error, mean absolute error, root-mean-square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast and skill scores were computed; the results are compared to the perfect score and to the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within about one Kp unit, even though persistence beats it.
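The verification statistics listed above are standard and can be reproduced generically. The following sketch (not CCMC's code) computes the error measures and a simple contingency-table summary for a forecast/observation pair; the Kp ≥ 5 event threshold and the toy data are illustrative assumptions only.

```python
# Generic forecast-verification sketch: error statistics and a 2x2
# contingency table for events exceeding an illustrative Kp >= 5 threshold.
import numpy as np

def verify(forecast, observed, threshold=5.0):
    f, o = np.asarray(forecast, float), np.asarray(observed, float)
    err = f - o
    stats = {
        "mean_error": err.mean(),
        "mean_abs_error": np.abs(err).mean(),
        "rmse": np.sqrt((err**2).mean()),
        "multiplicative_bias": f.mean() / o.mean(),
        "correlation": np.corrcoef(f, o)[0, 1],
    }
    hits = np.sum((f >= threshold) & (o >= threshold))
    misses = np.sum((f < threshold) & (o >= threshold))
    false_alarms = np.sum((f >= threshold) & (o < threshold))
    stats["prob_of_detection"] = hits / max(hits + misses, 1)
    stats["false_alarm_ratio"] = false_alarms / max(hits + false_alarms, 1)
    return stats

# toy 3-hour synoptic Kp values (illustrative only)
obs = [2, 3, 5, 6, 4, 2, 1, 3, 7, 5]
fcst = [2, 4, 4, 6, 5, 2, 2, 3, 6, 4]
print(verify(fcst, obs))
```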
Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia
2016-04-01
Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
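The uncertainty-propagation step can be sketched independently of the CFD model. In the example below, a hypothetical surrogate function stands in for the multiphase reactive flow model, and samples standing in for the calibrated parameter distributions are propagated to find the smallest flow rate whose 5th-percentile capture efficiency reaches 90%, i.e. 90% capture with 95% confidence; all numbers are placeholders.

```python
# Illustrative sketch only: a placeholder response surface stands in for the
# multiphase reactive flow model; the statistical step (propagating calibrated
# parameter samples and applying a one-sided 95% criterion) is the point.
import numpy as np

rng = np.random.default_rng(1)

# stand-in for posterior samples of calibrated physical parameters
k_samples = rng.normal(loc=1.0, scale=0.15, size=2000)

def efficiency(flow_rate, k):
    # hypothetical surrogate, NOT the reactive flow model
    return 1.0 - np.exp(-k * flow_rate / 10.0)

candidate_flows = np.linspace(5.0, 60.0, 56)
for q in candidate_flows:
    eff = efficiency(q, k_samples)
    if np.percentile(eff, 5) >= 0.90:          # 95% confident of >= 90% capture
        print(f"minimum flow rate meeting the target: {q:.1f} (arbitrary units)")
        break
```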
THE SHEFFIELD ALCOHOL POLICY MODEL - A MATHEMATICAL DESCRIPTION.
Brennan, Alan; Meier, Petra; Purshouse, Robin; Rafia, Rachid; Meng, Yang; Hill-Macmanus, Daniel; Angus, Colin; Holmes, John
2014-09-30
This methodology paper sets out a mathematical description of the Sheffield Alcohol Policy Model version 2.0, a model to evaluate public health strategies for alcohol harm reduction in the UK. Policies that can be appraised include a minimum price per unit of alcohol, restrictions on price discounting, and broader public health measures. The model estimates the impact on consumers, health services, crime, employers, retailers and government tax revenues. The synthesis of public and commercial data sources to inform the model structure is described. A detailed algebraic description of the model is provided. This involves quantifying baseline levels of alcohol purchasing and consumption by age and gender subgroups, estimating the impact of policies on consumption, for example, using evidence on price elasticities of demand for alcohol, quantification of risk functions relating alcohol consumption to harms including 47 health conditions, crimes, absenteeism and unemployment, and finally monetary valuation of the consequences. The results framework, shown for a minimum price per unit of alcohol, has been used to provide policy appraisals for the UK government policy-makers. In discussion and online appendix, we explore issues around valuation and scope, limitations of evidence/data, how the framework can be adapted to other countries and decisions, and ongoing plans for further development. © 2014 The Authors. Health Economics published by John Wiley & Sons Ltd.
Poon, Anna K; Meyer, Michelle L; Reaven, Gerald; Knowles, Joshua W; Selvin, Elizabeth; Pankow, James S; Couper, David; Loehr, Laura; Heiss, Gerardo
2018-06-01
The homeostatic model assessment of insulin resistance (HOMA-IR) and triglyceride (TG)/high-density lipoprotein cholesterol (HDL-C) ratio (TG/HDL-C) are insulin resistance indexes routinely used in clinical and population-based studies; however, their short-term repeatability is not well characterized. To quantify the short-term repeatability of insulin resistance indexes and their analytes, consisting of fasting glucose and insulin for HOMA-IR and TG and HDL-C for TG/HDL-C. Prospective cohort study. A total of 102 adults 68 to 88 years old without diabetes attended an initial examination and repeated examination (mean, 46 days; range, 28 to 102 days). Blood samples were collected, processed, shipped, and assayed following a standardized protocol. Repeatability was quantified using the intraclass correlation coefficient (ICC) and within-person coefficient of variation (CV). Minimum detectable change (MDC95) and minimum detectable difference with 95% confidence (MDD95) were quantified. For HOMA-IR, insulin, and fasting glucose, the ICCs were 0.70, 0.68, and 0.70, respectively; their respective within-person CVs were 30.4%, 28.8%, and 5.6%. For TG/HDL-C, TG, and HDL-C, the ICCs were 0.80, 0.68, and 0.91, respectively; their respective within-person CVs were 23.0%, 20.6%, and 8.2%. The MDC95 was 2.3 for HOMA-IR and 1.4 for TG/HDL-C. The MDD95 for a sample of n = 100 was 0.8 for HOMA-IR and 0.6 for TG/HDL-C. Short-term repeatability was fair to good for HOMA-IR and excellent for TG/HDL-C according to suggested benchmarks, reflecting the short-term variability of their analytes. These measurement properties can inform the use of these indexes in clinical and population-based studies.
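The repeatability statistics named above follow from a standard one-way random-effects analysis of paired visits. The sketch below uses simulated data (not the study cohort) to show how the ICC, within-person CV, and MDC95 might be computed from two examinations per participant.

```python
# Minimal sketch with simulated paired measurements (not the study data):
# one-way random-effects ICC, within-person CV, and MDC95.
import numpy as np

rng = np.random.default_rng(2)
n = 102
true = rng.lognormal(mean=1.0, sigma=0.3, size=n)    # subject-level "true" index value
visit1 = true + rng.normal(0.0, 0.5, n)
visit2 = true + rng.normal(0.0, 0.5, n)
x = np.column_stack([visit1, visit2])

k = x.shape[1]
subject_means = x.mean(axis=1)
grand_mean = x.mean()
msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)     # between-subject mean square
msw = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))   # within-subject mean square

icc = (msb - msw) / (msb + (k - 1) * msw)        # one-way random-effects ICC(1,1)
sem = np.sqrt(msw)                               # standard error of measurement
mdc95 = 1.96 * np.sqrt(2.0) * sem                # minimum detectable change
cv_within = np.sqrt(np.mean((x.std(axis=1, ddof=1) / subject_means) ** 2)) * 100

print(f"ICC = {icc:.2f}, within-person CV = {cv_within:.1f}%, MDC95 = {mdc95:.2f}")
```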
NASA Astrophysics Data System (ADS)
Schubert, Brian A.; Jahren, A. Hope
2015-10-01
Modern and ancient wood is a valuable terrestrial record of carbon ultimately derived from the atmosphere and oxygen inherited from local meteoric water. Many modern and fossil wood specimens display rings sufficiently thick for intra-annual sampling, and analytical techniques are rapidly improving to allow for precise carbon and oxygen isotope measurements on very small samples, yielding unprecedented resolution of seasonal isotope records. However, the interpretation of these records across diverse environments has been problematic because a unifying model for the quantitative interpretation of seasonal climate parameters from oxygen isotopes in wood is lacking. Towards such a model, we compiled a dataset of intra-ring oxygen isotope measurements on modern wood cellulose (δ18Ocell) from 33 globally distributed sites. Five of these sites represent original data produced for this study, while the data for the other 28 sites were taken from the literature. We defined the intra-annual change in oxygen isotope value of wood cellulose [Δ(δ18Ocell)] as the difference between the maximum and minimum δ18Ocell values determined within the ring. Then, using the monthly-resolved dataset of the oxygen isotope composition of meteoric water (δ18OMW) provided by the Global Network of Isotopes in Precipitation database, we quantified the empirical relationship between the intra-annual change in meteoric water [Δ(δ18OMW)] and Δ(δ18Ocell). We then used monthly-resolved datasets of temperature and precipitation to develop a global relationship between Δ(δ18OMW) and maximum/minimum monthly temperatures and winter/summer precipitation amounts. By combining these relationships we produced a single equation that explains much of the variability in the intra-ring δ18Ocell signal through only changes in seasonal temperature and precipitation amount (R2 = 0.82). We show how our recent model that quantifies seasonal precipitation from intra-ring carbon isotope profiles can be incorporated into the oxygen model above in order to separately quantify both seasonal temperature and seasonal precipitation. Determination of seasonal climate variation using high-resolution isotopes in tree-ring records makes possible a new understanding of the seasonal fluctuations that control the environmental conditions to which organisms are subject, both during recent history and in the geologic past.
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
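The bias-variance bookkeeping described above can be illustrated with a toy example. The sketch below (a schematic, not an NIR reconstruction) estimates per-pixel bias and variance from repeated noisy "reconstructions" of a known test image and combines them into the image MSE, the quantity against which the regularization is evaluated.

```python
# Schematic of the evaluation step only: image bias, variance, and MSE from
# repeated reconstructions of a known test image.
import numpy as np

rng = np.random.default_rng(3)

truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0                       # hypothetical absorption target

# stand-ins for 100 reconstructions; a real study would rerun the regularized
# inverse solver on 100 noisy measurement sets
recons = np.stack([
    0.85 * truth + rng.normal(0.0, 0.05, truth.shape)   # some bias + noise
    for _ in range(100)
])

mean_image = recons.mean(axis=0)
bias2 = np.mean((mean_image - truth) ** 2)      # squared image bias
variance = np.mean(recons.var(axis=0))          # mean per-pixel variance
mse = np.mean((recons - truth) ** 2)            # total image MSE

print(f"bias^2 = {bias2:.4f}, variance = {variance:.4f}, MSE = {mse:.4f}")
# note: MSE = bias^2 + variance, the trade-off tuned by the regularization parameter
```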
Effect of load introduction on graphite epoxy compression specimens
NASA Technical Reports Server (NTRS)
Reiss, R.; Yao, T. M.
1981-01-01
Compression testing of modern composite materials is affected by the manner in which the compressive load is introduced. Two such effects are investigated: (1) the constrained edge effect, which prevents transverse expansion and is common to all compression testing in which the specimen is gripped in the fixture; and (2) nonuniform gripping, which induces bending into the specimen. An analytical model capable of quantifying the foregoing effects was developed, based upon the principle of minimum complementary energy. For pure compression, the stresses are approximated by Fourier series. For pure bending, the stresses are approximated by Legendre polynomials.
Projected changes in rainfall and temperature over homogeneous regions of India
NASA Astrophysics Data System (ADS)
Patwardhan, Savita; Kulkarni, Ashwini; Rao, K. Koteswara
2018-01-01
The impact of climate change on the characteristics of seasonal maximum and minimum temperature and seasonal summer monsoon rainfall is assessed over five homogeneous regions of India using a high-resolution regional climate model, Providing REgional Climate for Climate Studies (PRECIS), developed at the Hadley Centre for Climate Prediction and Research, UK. The model simulations are carried out over the South Asian domain for the continuous period 1961-2098 at 50-km horizontal resolution. Here, three simulations from a 17-member perturbed physics ensemble (PPE) produced using HadCM3 under the Quantifying Model Uncertainties in Model Predictions (QUMP) project of the Hadley Centre, Met Office, UK, have been used as lateral boundary conditions (LBCs) for the 138-year simulations of the regional climate model under the Intergovernmental Panel on Climate Change (IPCC) A1B scenario. The projections indicate an increase in summer monsoon (June through September) rainfall over all the homogeneous regions (15 to 19%) except peninsular India (around 5%). There may be a marginal change in the frequency of medium and heavy rainfall events (>20 mm) towards the end of the present century. The analysis over the five homogeneous regions indicates that the mean maximum surface air temperature for the pre-monsoon season (March-April-May) as well as the mean minimum surface air temperature for the winter season (January-February) may be warmer by around 4 °C towards the end of the twenty-first century.
Kunkel, Amber; McLay, Laura A
2013-03-01
Emergency medical services (EMS) provide life-saving care and hospital transport to patients with severe trauma or medical conditions. Severe weather events, such as snow events, may lead to adverse patient outcomes by increasing call volumes and service times. Adequate staffing levels during such weather events are critical for ensuring that patients receive timely care. To determine staffing levels that depend on weather, we propose a model that uses a discrete event simulation of a reliability model to identify minimum staffing levels that provide timely patient care, with regression used to provide the input parameters. The system is said to be reliable if there is a high degree of confidence that ambulances can immediately respond to a given proportion of patients (e.g., 99 %). Four weather scenarios capture varying levels of snow falling and snow on the ground. An innovative feature of our approach is that we evaluate the mitigating effects of different extrinsic response policies and intrinsic system adaptation. The models use data from Hanover County, Virginia to quantify how snow reduces EMS system reliability and necessitates increasing staffing levels. The model and its analysis can assist in EMS preparedness by providing a methodology to adjust staffing levels during weather events. A key observation is that when it is snowing, intrinsic system adaptation has similar effects on system reliability as one additional ambulance.
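A much-simplified version of the staffing calculation can be written as a Monte Carlo loss-system simulation. In the sketch below, calls arrive at a Poisson rate and occupy an ambulance for an exponentially distributed service time; the arrival and service rates are assumed values, not the Hanover County estimates, and in the paper's framework they would come from the weather-dependent regressions.

```python
# Simplified sketch: minimum number of ambulances such that at least 99% of
# calls find a unit immediately available (loss-system Monte Carlo).
import heapq
import random

def immediate_coverage(n_ambulances, arrival_rate, mean_service_hr, horizon_hr=10_000, seed=0):
    rng = random.Random(seed)
    busy_until = []            # min-heap of times at which busy units free up
    t, served, total = 0.0, 0, 0
    while t < horizon_hr:
        t += rng.expovariate(arrival_rate)                 # next call
        while busy_until and busy_until[0] <= t:
            heapq.heappop(busy_until)                      # release finished units
        total += 1
        if len(busy_until) < n_ambulances:                 # a unit is free now
            served += 1
            heapq.heappush(busy_until, t + rng.expovariate(1.0 / mean_service_hr))
    return served / total

arrival_rate = 2.0      # calls per hour (assumed; higher during snow events)
mean_service = 1.0      # hours per call (assumed; longer during snow events)
for c in range(1, 15):
    if immediate_coverage(c, arrival_rate, mean_service) >= 0.99:
        print(f"minimum staffing: {c} ambulances")
        break
```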
Chained Bell Inequality Experiment with High-Efficiency Measurements
NASA Astrophysics Data System (ADS)
Tan, T. R.; Wan, Y.; Erickson, S.; Bierhorst, P.; Kienzler, D.; Glancy, S.; Knill, E.; Leibfried, D.; Wineland, D. J.
2017-03-01
We report correlation measurements on two 9Be+ ions that violate a chained Bell inequality obeyed by any local-realistic theory. The correlations can be modeled as derived from a mixture of a local-realistic probabilistic distribution and a distribution that violates the inequality. A statistical framework is formulated to quantify the local-realistic fraction allowable in the observed distribution without the fair-sampling or independent-and-identical-distributions assumptions. We exclude models of our experiment whose local-realistic fraction is above 0.327 at the 95% confidence level. This bound is significantly lower than 0.586, the minimum fraction derived from a perfect Clauser-Horne-Shimony-Holt inequality experiment. Furthermore, our data provide a device-independent certification of the deterministically created Bell states.
NASA Astrophysics Data System (ADS)
Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun
2017-06-01
Representative elementary volume (REV) is important for determining the properties of porous media and the migration of contaminants, especially dense nonaqueous phase liquids (DNAPLs), in the subsurface environment. In this study, an experiment on the long-term migration of the commonly used DNAPL perchloroethylene (PCE) is performed in a two-dimensional (2D) sandbox, where several system variables including porosity, PCE saturation (Soil) and PCE-water interfacial area (AOW) are accurately quantified by light transmission techniques over the entire PCE migration process. Moreover, the REVs for these system variables are estimated by a criterion of relative gradient error (εgi), and the results indicate that the frequency of the minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm to 8.0 mm. As the experiment proceeds through the PCE infiltration process, the frequency and cumulative frequency of both the minimum Soil-REV and minimum AOW-REV sizes change their shapes from irregular and random to regular and smooth. When the experiment enters the redistribution process, the cumulative frequency of the minimum Soil-REV size shows a linear positive correlation, while the frequency of the minimum AOW-REV size tends to a Gaussian distribution in the range of 2.0 mm to 7.0 mm, with an additional peak at 13.0 mm to 14.0 mm. This study facilitates the quantification of REVs for material and fluid properties in a rapid, handy and economical manner, which helps enhance our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.
Future changes over the Himalayas: Maximum and minimum temperature
NASA Astrophysics Data System (ADS)
Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.
2018-03-01
An assessment of the projected minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum climatology and its long-term trend under different RCPs, along with the elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trend and probability distribution function, are carried out to detect the signals of changes in climate. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space, time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing under a range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreements in the magnitude of the trend between different models describe the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. The combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The diurnal temperature range (DTR) shows an increasing trend across the entire area, with the highest magnitude under RCP8.5; this increase is driven by the predominantly larger rise of Tmax compared with Tmin.
Coupling between absorption and scattering in disordered colloids
NASA Astrophysics Data System (ADS)
Stephenson, Anna; Hwang, Victoria; Park, Jin-Gyu; Manoharan, Vinothan N.
We aim to understand how scattering and absorption are coupled in disordered colloidal suspensions containing absorbing molecules (dyes). When the absorption length is shorter than the transport length, absorption dominates, and absorption and scattering can be seen as two additive effects. However, when the transport length is shorter than the absorption length, the scattering and absorption become coupled, as multiple scattering increases the path length of the light in the sample, leading to a higher probability of absorption. To quantify this synergistic effect, we measure the diffuse reflectance spectra of colloidal samples of varying dye concentrations, thicknesses, and particle concentrations, and we calculate the transport length and absorption length from our measurements, using a radiative transfer model. At particle concentrations so high that the particles form disordered packings, we find a minimum in the transport length. We show that selecting a dye where the absorption peak matches the location of the minimum in the transport length allows for enhanced absorption.
Molecular electrostatics for probing lone pair-π interactions.
Mohan, Neetha; Suresh, Cherumuttathu H; Kumar, Anmol; Gadre, Shridhar R
2013-11-14
An electrostatics-based approach has been proposed for probing the weak interactions between lone pair containing molecules and π deficient molecular systems. For electron-rich molecules, the negative minima in molecular electrostatic potential (MESP) topography give the location of electron localization and the MESP value at the minimum (Vmin) quantifies the electron-rich character of that region. Interactive behavior of a lone pair bearing molecule with electron deficient π-systems, such as hexafluorobenzene, 1,3,5-trinitrobenzene, 2,4,6-trifluoro-1,3,5-triazine and 1,2,4,5-tetracyanobenzene explored within DFT brings out good correlation of the lone pair-π interaction energy (E(int)) with the Vmin value of the electron-rich system. Such interaction is found to be portrayed well with the Electrostatic Potential for Intermolecular Complexation (EPIC) model. On the basis of the precise location of MESP minimum, a prediction for the orientation of a lone pair bearing molecule with an electron deficient π-system is possible in the majority of the cases studied.
Influence of the geomembrane on time-lapse ERT measurements for leachate injection monitoring.
Audebert, M; Clément, R; Grossin-Debattista, J; Günther, T; Touze-Foltz, N; Moreau, S
2014-04-01
Leachate recirculation is a key process in the operation of municipal waste landfills as bioreactors. To quantify the water content and to evaluate the leachate injection system, in situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). However, this method can present false variations in the observations due to several parameters. This study investigates the impact of the geomembrane on ERT measurements; indeed, the geomembrane tends to be ignored in the inversion process in most previously conducted studies. The presence of the geomembrane can change the boundary conditions of the inversion models, which classically assume infinite boundary conditions. Using a numerical modelling approach, the authors demonstrate that a minimum distance is required between the electrode line and the geomembrane to satisfy the assumptions of the classical inversion tools. This distance is a function of the electrode line length (i.e. of the unit electrode spacing) used, the array type and the orientation of the electrode line. Moreover, this study shows that if this criterion on the minimum distance is not satisfied, it is possible to significantly improve the inversion process by introducing the complex geometry and the geomembrane location into the inversion tools. These results are finally validated on a field data set gathered on a small municipal solid waste landfill cell where this minimum distance criterion cannot be satisfied. Copyright © 2014 Elsevier Ltd. All rights reserved.
Power limits for microbial life.
LaRowe, Douglas E; Amend, Jan P
2015-01-01
To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here, we apply a bioenergetic model to a well characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm(-3)) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ∼10(-12) and 10(-16) W cm(-3). The rates of POC degradation are calculated using a continuum model, while Gibbs energies have been computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells per cm(3) can be well captured using a maintenance power of 190 zW cell(-1), two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell(-1), with most values under ∼300 zW cell(-1). Furthermore, our analysis indicates that the absolute minimum power requirement for a single cell to remain viable is on the order of 1 zW cell(-1).
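The link between volumetric power supply and sustainable biomass is simple arithmetic, as the back-of-envelope sketch below shows using the power-supply range and the 190 zW per cell maintenance power quoted in the abstract.

```python
# Back-of-envelope check: cells per cm^3 sustainable at a fixed per-cell
# maintenance power, for the power-supply range quoted above (1 zW = 1e-21 W).
power_supply_w_per_cm3 = [1e-16, 1e-14, 1e-12]      # range reported for Site U1370
maintenance_power_w = 190e-21                        # 190 zW per cell

for p in power_supply_w_per_cm3:
    cells = p / maintenance_power_w
    print(f"supply {p:.0e} W/cm^3 -> ~{cells:.2e} cells/cm^3")
# e.g. 1e-14 W/cm^3 at 190 zW per cell supports roughly 5e4 cells per cm^3
```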
Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas
Bedford, Tim; Daneshkhah, Alireza
2015-01-01
Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets.
Temperature fine-tunes Mediterranean Arabidopsis thaliana life-cycle phenology geographically.
Marcer, A; Vidigal, D S; James, P M A; Fortin, M-J; Méndez-Vigo, B; Hilhorst, H W M; Bentsink, L; Alonso-Blanco, C; Picó, F X
2018-01-01
To understand how adaptive evolution in life-cycle phenology operates in plants, we need to unravel the effects of geographic variation in putative agents of natural selection on life-cycle phenology by considering all key developmental transitions and their co-variation patterns. We address this goal by quantifying the temperature-driven and geographically varying relationship between seed dormancy and flowering time in the annual Arabidopsis thaliana across the Iberian Peninsula. We used data on genetic variation in two major life-cycle traits, seed dormancy (DSDS50) and flowering time (FT), in a collection of 300 A. thaliana accessions from the Iberian Peninsula. The geographically varying relationship between life-cycle traits and minimum temperature, a major driver of variation in DSDS50 and FT, was explored with geographically weighted regressions (GWR). The environmentally varying correlation between DSDS50 and FT was analysed by means of sliding window analysis across a minimum temperature gradient. Maximum local adjustments between minimum temperature and life-cycle traits were obtained in the southwest Iberian Peninsula, an area with the highest minimum temperatures. In contrast, in off-southwest locations, the effects of minimum temperature on DSDS50 were rather constant across the region, whereas those of minimum temperature on FT were more variable, with peaks of strong local adjustments of GWR models in central and northwest Spain. Sliding window analysis identified a minimum temperature turning point in the relationship between DSDS50 and FT around a minimum temperature of 7.2 °C. Above this minimum temperature turning point, the variation in the FT/DSDS50 ratio became rapidly constrained and the negative correlation between FT and DSDS50 did not increase any further with increasing minimum temperatures. The southwest Iberian Peninsula emerges as an area where variation in life-cycle phenology appears to be restricted by the duration and severity of the hot summer drought. The temperature-driven varying relationship between DSDS50 and FT detected environmental boundaries for the co-evolution between FT and DSDS50 in A. thaliana. In the context of global warming, we conclude that A. thaliana phenology from the southwest Iberian Peninsula, determined by early flowering and deep seed dormancy, might become the most common life-cycle phenotype for this annual plant in the region. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
Quantifying the tibiofemoral joint space using x-ray tomosynthesis.
Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly
2011-12-01
Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected components algorithm was performed to determine candidate tibial and femoral edges. A 2D joint space width map (JSW) was constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets. For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all trials. A linear fit estimated a slope of 0.887 (R² = 0.962) and a mean error across all trials of 0.34 mm for the PA phantom data. The estimated minimum JSW values for the lateral adjustable phantom acquisitions were found to have low correlation to the measured values (R² = 0.377), with a mean error of 2.13 mm. The error in the lateral adjustable-phantom datasets appeared to be caused by artifacts due to unrealistic features in the phantom bones. JSW maps generated by DTS and CT varied by a mean of 0.6 mm and 0.8 mm across the knee joint, for PA and lateral scans. The tibial and femoral edges were successfully segmented and JSW maps determined for PA and lateral clinical DTS datasets. A semiautomated method is presented for quantifying the 3D joint space in a 2D JSW map using tomosynthesis images. The proposed algorithm quantified the JSW across the knee joint to sub-millimeter accuracy for PA tomosynthesis acquisitions. Overall, the results suggest that x-ray tomosynthesis may be beneficial for diagnosing and monitoring disease progression or treatment of osteoarthritis by providing quantitative images of JSW in the load-bearing knee.
Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi
2014-07-01
The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building statistical shape models for different inner organs efficiently. This scheme combines traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfactory "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
What Is a Current Equivalent to Unemployment Rates of the Past?
ERIC Educational Resources Information Center
Antos, Joseph; And Others
1979-01-01
The results of various attempts to quantify how much changes in the labor force, unemployment insurance, and minimum wages have affected unemployment rates are reasonably close; but no total effect on jobless rates can be determined. (BM)
Kusano, Kristofer D; Chen, Rong; Montgomery, Jade; Gabler, Hampton C
2015-09-01
Forward collision warning (FCW) systems are designed to mitigate the effects of rear-end collisions. Driver acceptance of these systems is crucial to their success, as perceived "nuisance" alarms may cause drivers to disable the systems. In order to make customizable FCW thresholds, system designers need to quantify the variation in braking behavior in the driving population. The objective of this study was to quantify the time to collision (TTC) at which drivers applied the brakes during car following scenarios from a large scale naturalistic driving study (NDS). Because of the large amount of data generated by NDS, an automated algorithm was developed to identify lead vehicles using radar data recorded as part of the study. Using the search algorithm, all trips from 64 drivers from the 100-Car NDS were analyzed. A comparison of the algorithm to 7135 brake applications where the presence of a lead vehicle was manually identified found that the algorithm agreed with the human review 90.6% of the time. This study examined 72,123 trips that resulted in 2.6 million brake applications. Population distributions of the minimum, 1st, and 10th percentiles were computed for each driver in speed ranges between 3 and 60 mph in 10 mph increments. As speed increased, so did the minimum TTC experienced by drivers, as well as the variance in TTC. Younger drivers (18-30) had lower TTC at brake application compared to older drivers (30-51+), especially at speeds between 40 mph and 60 mph. This is one of the first studies to use large scale NDS data to quantify braking behavior during car following. The results of this study can be used to design and evaluate FCW systems and calibrate traffic simulation models. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
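The percentile summary described above is straightforward to reproduce once brake events with TTC and speed are tabulated. The sketch below uses a tiny, hypothetical event table (column names are assumptions) to show the per-driver, per-speed-bin aggregation; the real pipeline of course starts from the radar-based lead-vehicle identification.

```python
# Minimal sketch of the summary step: minimum, 1st and 10th percentile TTC at
# brake application per driver and 10-mph speed bin (hypothetical data).
import numpy as np
import pandas as pd

events = pd.DataFrame({
    "driver_id": [1, 1, 1, 2, 2, 2, 2, 1],
    "speed_mph": [12.0, 35.0, 44.0, 18.0, 52.0, 47.0, 33.0, 55.0],
    "ttc_s":     [3.1,  2.4,  4.0,  1.9,  5.2,  3.3,  2.8,  4.6],
})

bins = np.arange(3, 70, 10)                       # 3-13, 13-23, ... mph
events["speed_bin"] = pd.cut(events["speed_mph"], bins)

summary = (events.groupby(["driver_id", "speed_bin"], observed=True)["ttc_s"]
                 .agg(minimum="min",
                      p01=lambda s: s.quantile(0.01),
                      p10=lambda s: s.quantile(0.10)))
print(summary)
```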
Spark Ignition of Monodisperse Fuel Sprays. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Danis, Allen M.; Cernansky, Nicholas P.; Namer, Izak
1987-01-01
A study of spark ignition energy requirements was conducted with a monodisperse spray system allowing independent control of droplet size, equivalence ratio, and fuel type. Minimum ignition energies were measured for n-heptane and methanol sprays characterized at the spark gap in terms of droplet diameter, equivalence ratio (number density) and extent of prevaporization. In addition to sprays, minimum ignition energies were measured for completely prevaporized mixtures of the same fuels over a range of equivalence ratios to provide data at the lower limit of droplet size. Results showed that spray ignition was enhanced with decreasing droplet size and increasing equivalence ratio over the ranges of the parameters studied. By comparing spray and prevaporized ignition results, the existence of an optimum droplet size for ignition was indicated for both fuels. Fuel volatility was seen to be a critical factor in spray ignition. The spray ignition results were analyzed using two different empirical ignition models for quiescent mixtures. Both models accurately predicted the experimental ignition energies for the majority of the spray conditions. Spray ignition was observed to be probabilistic in nature, and ignition was quantified in terms of an ignition frequency for a given spark energy. A model was developed to predict ignition frequencies based on the variation in spark energy and equivalence ratio in the spark gap. The resulting ignition frequency simulations were nearly identical to the experimentally observed values.
NASA Astrophysics Data System (ADS)
Papoutsis-Kiachagias, E. M.; Zymaris, A. S.; Kavvadias, I. S.; Papadimitriou, D. I.; Giannakoglou, K. C.
2015-03-01
The continuous adjoint to the incompressible Reynolds-averaged Navier-Stokes equations coupled with the low Reynolds number Launder-Sharma k-ε turbulence model is presented. Both shape and active flow control optimization problems in fluid mechanics are considered, aiming at minimum viscous losses. In contrast to the frequently used assumption of frozen turbulence, the adjoint to the turbulence model equations together with appropriate boundary conditions are derived, discretized and solved. This is the first time that the adjoint equations to the Launder-Sharma k-ε model have been derived. Compared to the formulation that neglects turbulence variations, the impact of additional terms and equations is evaluated. Sensitivities computed using direct differentiation and/or finite differences are used for comparative purposes. To demonstrate the need for formulating and solving the adjoint to the turbulence model equations, instead of merely relying upon the 'frozen turbulence assumption', the gain in the optimization turnaround time offered by the proposed method is quantified.
Atkins, Penny R; Fiorentino, Niccolo M; Aoki, Stephen K; Peters, Christopher L; Maak, Travis G; Anderson, Andrew E
2017-10-01
Ischiofemoral impingement (IFI) is a dynamic process, but its diagnosis is often based on static, supine images. To couple 3-dimensional (3D) computed tomography (CT) models with dual fluoroscopy (DF) images to quantify in vivo hip motion and the ischiofemoral space (IFS) in asymptomatic participants during weightbearing activities and evaluate the relationship of dynamic measurements with sex, hip kinematics, and the IFS measured from axial magnetic resonance imaging (MRI). Cross-sectional study; Level of evidence, 3. Eleven young, asymptomatic adults (5 female) were recruited. 3D reconstructions of the femur and pelvis were generated from MRI and CT. The axial and 3D IFS were measured from supine MRI. In vivo hip motion during weightbearing activities was quantified using DF. The bone-to-bone distance between the lesser trochanter and ischium was measured dynamically. The minimum and maximum IFS were determined and evaluated against hip joint angles using a linear mixed-effects model. The minimum IFS occurred during external rotation for 10 of 11 participants. The IFS measured from axial MRI (mean, 23.7 mm [95% CI, 19.9-27.9]) was significantly greater than the minimum IFS observed during external rotation (mean, 10.8 mm [95% CI, 8.3-13.7]; P < .001), level walking (mean, 15.5 mm [95% CI, 11.4-19.7]; P = .007), and incline walking (mean, 15.8 mm [95% CI, 11.6-20.1]; P = .004) but not for standing. The IFS was reduced with extension (β = 0.66), adduction (β = 0.22), and external rotation (β = 0.21) ( P < .001 for all) during the dynamic activities observed. The IFS was smaller in female than male participants for standing (mean, 20.9 mm [95% CI, 19.3-22.3] vs 30.4 mm [95% CI, 27.2-33.8], respectively; P = .034), level walking (mean, 8.8 mm [95% CI, 7.5-9.9] vs 21.1 mm [95% CI, 18.7-23.6], respectively; P = .001), and incline walking (mean, 9.1 mm [95% CI, 7.4-10.8] vs 21.3 mm [95% CI, 18.8-24.1], respectively; P = .003). Joint angles between the sexes were not significantly different for any of the dynamic positions of interest. The minimum IFS during dynamic activities was smaller than axial MRI measurements. Compared with male participants, the IFS in female participants was reduced during standing and walking, despite a lack of kinematic differences between the sexes. The relationship between the IFS and hip joint angles suggests that the hip should be placed into greater extension, adduction, and external rotation in clinical examinations and imaging, as the IFS measured from static images, especially in a neutral orientation, may not accurately represent the minimum IFS during dynamic motion. Nevertheless, this statement must be interpreted with caution, as only asymptomatic participants were analyzed herein.
Sul, Bora; Oppito, Zachary; Jayasekera, Shehan; Vanger, Brian; Zeller, Amy; Morris, Michael; Ruppert, Kai; Altes, Talissa; Rakesh, Vineet; Day, Steven; Robinson, Risa; Reifman, Jaques; Wallqvist, Anders
2018-05-01
Computational models are useful for understanding respiratory physiology. Crucial to such models are the boundary conditions specifying the flow conditions at truncated airway branches (terminal flow rates). However, most studies make assumptions about these values, which are difficult to obtain in vivo. We developed a computational fluid dynamics (CFD) model of airflows for steady expiration to investigate how terminal flows affect airflow patterns in respiratory airways. First, we measured in vitro airflow patterns in a physical airway model, using particle image velocimetry (PIV). The measured and computed airflow patterns agreed well, validating our CFD model. Next, we used the lobar flow fractions from a healthy or chronic obstructive pulmonary disease (COPD) subject as constraints to derive different terminal flow rates (i.e., three healthy and one COPD) and computed the corresponding airflow patterns in the same geometry. To assess airflow sensitivity to the boundary conditions, we used the correlation coefficient of the shape similarity (R) and the root-mean-square of the velocity magnitude difference (Drms) between two velocity contours. Airflow patterns in the central airways were similar across healthy conditions (minimum R, 0.80) despite variations in terminal flow rates but markedly different for COPD (minimum R, 0.26; maximum Drms, ten times that of healthy cases). In contrast, those in the upper airway were similar for all cases. Our findings quantify how variability in terminal and lobar flows contributes to airflow patterns in respiratory airways. They highlight the importance of using lobar flow fractions to examine physiologically relevant airflow characteristics.
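The two sensitivity metrics, the shape-similarity correlation R and the RMS velocity-magnitude difference Drms, can be computed directly from co-located velocity samples, as in the sketch below; the toy fields are placeholders for CFD output, and the study may normalize Drms differently.

```python
# Sketch of the comparison metrics only (the CFD solutions themselves are assumed):
# shape-similarity R and RMS velocity-magnitude difference Drms between two
# velocity fields sampled on the same cross-section.
import numpy as np

def compare_fields(v1, v2):
    """v1, v2: arrays of velocity magnitude on matching grid points."""
    a, b = np.ravel(v1), np.ravel(v2)
    r = np.corrcoef(a, b)[0, 1]                  # shape similarity
    d_rms = np.sqrt(np.mean((a - b) ** 2))       # magnitude difference
    return r, d_rms

# toy fields standing in for two boundary-condition cases
rng = np.random.default_rng(4)
base = np.abs(rng.normal(1.0, 0.3, (40, 40)))
healthy_variant = base + rng.normal(0.0, 0.05, base.shape)
copd_variant = 0.5 * base[::-1, :] + rng.normal(0.0, 0.05, base.shape)

print("healthy vs healthy:", compare_fields(base, healthy_variant))
print("healthy vs COPD   :", compare_fields(base, copd_variant))
```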
NASA Astrophysics Data System (ADS)
Chappell, N. A.; Jones, T.; Young, P.; Krishnaswamy, J.
2015-12-01
There is increasing awareness that under-sampling may have resulted in the omission of important physicochemical information present in water quality signatures of surface waters - thereby affecting interpretation of biogeochemical processes. For dissolved organic carbon (DOC) and nitrogen this under-sampling can now be avoided using UV-visible spectroscopy measured in-situ and continuously at a fine-resolution e.g. 15 minutes ("real time"). Few methods are available to extract biogeochemical process information directly from such high-frequency data. Jones, Chappell & Tych (2014 Environ Sci Technol: 13289-97) developed one such method using optically-derived DOC data based upon a sophisticated time-series modelling tool. Within this presentation we extend the methodology to quantify the minimum sampling interval required to avoid distortion of model structures and parameters that describe fundamental biogeochemical processes. This shifting of parameters which results from under-sampling is called "aliasing". We demonstrate that storm dynamics at a variety of sites dominate over diurnal and seasonal changes and that these must be characterised by sampling that may be sub-hourly to avoid aliasing. This is considerably shorter than that used by other water quality studies examining aliasing (e.g. Kirchner 2005 Phys Rev: 069902). The modelling approach presented is being developed into a generic tool to calculate the minimum sampling for water quality monitoring in systems driven primarily by hydrology. This is illustrated with fine-resolution, optical data from watersheds in temperate Europe through to the humid tropics.
Thermal transport in tantalum oxide films for memristive applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landon, Colin D.; Wilke, Rudeger H. T.; Brumbach, Michael T.
2015-07-13
The thermal conductivity of amorphous TaOx memristive films having variable oxygen content is measured using time domain thermoreflectance. Thermal transport is described by a two-part model where the electrical contribution is quantified via the Wiedemann-Franz relation and the vibrational contribution by the minimum thermal conductivity limit for amorphous solids. The vibrational contribution remains constant near 0.9 W/mK regardless of oxygen concentration, while the electrical contribution varies from 0 to 3.3 W/mK. Thus, the dominant thermal carrier in TaOx switches between vibrations and charge carriers and is controllable either by oxygen content during deposition, or dynamically by field-induced charge state migration.
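The electrical part of the two-part model follows directly from the Wiedemann-Franz relation, kappa_e = L0·T/rho_el. The sketch below shows the arithmetic for a few illustrative resistivity values (not the measured TaOx data), added to the ~0.9 W/mK amorphous-limit vibrational term.

```python
# Wiedemann-Franz estimate of the electronic thermal conductivity for a few
# illustrative electrical resistivities (not the measured film values).
L0 = 2.44e-8          # Sommerfeld value of the Lorenz number, W*Ohm/K^2
T = 300.0             # temperature, K
kappa_vib = 0.9       # vibrational (amorphous-limit) contribution, W/m/K

for resistivity_ohm_m in [2e-6, 1e-5, 1e-4]:          # more oxygen -> higher resistivity
    kappa_e = L0 * T / resistivity_ohm_m              # electronic contribution
    print(f"rho = {resistivity_ohm_m:.0e} Ohm*m -> kappa_e = {kappa_e:.2f}, "
          f"total ~ {kappa_vib + kappa_e:.2f} W/m/K")
```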
Pore geometry as a control on rock strength
NASA Astrophysics Data System (ADS)
Bubeck, A.; Walker, R. J.; Healy, D.; Dobbs, M.; Holwell, D. A.
2017-01-01
The strength of rocks in the subsurface is critically important across the geosciences, with implications for fluid flow, mineralisation, seismicity, and the deep biosphere. Most studies of porous rock strength consider the scalar quantity of porosity, in which strength shows a broadly inverse relationship with total porosity, but pore shape is not explicitly defined. Here we use a combination of uniaxial compressive strength measurements of isotropic and anisotropic porous lava samples, and numerical modelling to consider the influence of pore shape on rock strength. Micro computed tomography (CT) shows that pores range from sub-spherical to elongate and flat ellipsoids. Samples that contain flat pores are weaker if compression is applied parallel to the short axis (i.e. across the minimum curvature), compared to compression applied parallel to the long axis (i.e. across the maximum curvature). Numerical models for elliptical pores show that compression applied across the minimum curvature results in relatively broad amplification of stress, compared to compression applied across the maximum curvature. Certain pore shapes may be relatively stable and remain open in the upper crust under a given remote stress field, while others are inherently weak. Quantifying the shape, orientations, and statistical distributions of pores is therefore a critical step in strength testing of rocks.
Quantifying the Impact of Additional Laboratory Tests on the Quality of a Geomechanical Model
NASA Astrophysics Data System (ADS)
Fillion, Marie-Hélène; Hadjigeorgiou, John
2017-05-01
In an open-pit mine operation, the design of safe and economically viable slopes can be significantly influenced by the quality and quantity of collected geomechanical data. In several mining jurisdictions, codes and standards are available for reporting exploration data, but similar codes or guidelines are not formally available or enforced for geotechnical design. Current recommendations suggest a target level of confidence in the rock mass properties used for slope design. As these guidelines are qualitative and somewhat subjective, questions arise regarding the minimum number of tests to perform in order to reach the proposed level of confidence. This paper investigates the impact of defining a priori the required number of laboratory tests to conduct on rock core samples based on the geomechanical database of an operating open-pit mine in South Africa. In this review, to illustrate the process, the focus is on uniaxial compressive strength properties. Available strength data for 2 project stages were analysed using the small-sampling theory and the confidence interval approach. The results showed that the number of specimens was too low to obtain a reliable strength value for some geotechnical domains even if more specimens than the minimum proposed by the ISRM suggested methods were tested. Furthermore, the testing sequence used has an impact on the minimum number of specimens required. Current best practice cannot capture all possibilities regarding the geomechanical property distributions, and there is a demonstrated need for a method to determine the minimum number of specimens required while minimising the influence of the testing sequence.
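One common way to turn a target confidence level into a minimum specimen count is the t-based confidence-interval calculation sketched below; the ±10% precision target and the coefficient-of-variation values are illustrative assumptions, not the values used for the South African mine.

```python
# Confidence-interval sketch: smallest number of UCS tests for which the 95% CI
# on the mean strength is within +/-10% of the mean, given a pilot COV estimate.
from scipy import stats

def min_specimens(cov, rel_precision=0.10, confidence=0.95, n_max=200):
    alpha = 1.0 - confidence
    for n in range(3, n_max + 1):
        t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
        half_width_rel = t * cov / n**0.5        # (t * s / sqrt(n)) / mean
        if half_width_rel <= rel_precision:
            return n
    return None

for cov in [0.15, 0.25, 0.40]:                   # illustrative UCS variability per domain
    print(f"COV = {cov:.2f}: minimum of {min_specimens(cov)} specimens")
```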
Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin
2018-01-01
Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
Response of winter and spring wheat grain yields to meteorological variation
NASA Technical Reports Server (NTRS)
Feyerherm, A. M.; Kanemasu, E. T.; Paulsen, G. M.
1977-01-01
Mathematical models which quantify the relation of wheat yield to selected weather-related variables are presented. Other sources of variation (amount of applied nitrogen, improved varieties, cultural practices) have been incorporated in the models to explain yield variation both singly and in combination with weather-related variables. Separate models were developed for fall-planted (winter) and spring-planted (spring) wheats. Meteorological variation is observed, basically, by daily measurements of minimum and maximum temperatures, precipitation, and tabled values of solar radiation at the edge of the atmosphere and daylength. Two different soil moisture budgets are suggested to compute simulated values of evapotranspiration; one uses the above-mentioned inputs, the other uses the measured temperatures and precipitation but replaces the tabled values (solar radiation and daylength) by measured solar radiation and satellite-derived multispectral scanner data to estimate leaf area index. Weather-related variables are defined by phenological stages, rather than calendar periods, to make the models more universally applicable.
Frost risk for overwintering crops in a changing climate
NASA Astrophysics Data System (ADS)
Vico, Giulia; Weih, Martin
2013-04-01
Climate change scenarios predict a general increase in daily temperatures and a decline in snow cover duration. On the one hand, higher temperatures in fall and spring may facilitate the development of overwintering crops and allow the expansion of winter cropping in locations where the growing season is currently too short. On the other hand, higher temperatures prior to winter crop dormancy slow down frost hardening, enhancing crop vulnerability to temperature fluctuations. Such vulnerability may be exacerbated by reduced snow cover, with potentially further negative impacts on yields during extreme cold spells. We propose a parsimonious probabilistic model to quantify the winter frost damage risk for overwintering crops, based on a coupled model of air temperature, snow cover, and crop minimum tolerable temperature. The latter is determined by crop features, previous history of temperature, and snow cover. The temperature-snow cover model is tested against meteorological data collected over 50 years in Sweden and applied to winter wheat varieties differing in their ability to acquire frost resistance. By exploiting experimental results on crop frost damage obtained under a limited set of temperature and snow cover realizations, this probabilistic framework allows frost risk to be quantified for different crop varieties under the full unpredictability of temperature and precipitation. Climate change scenarios are explored to quantify the effects of changes in temperature mean and variance and precipitation regime on crops differing in winter frost resistance and response to temperature.
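A minimal Monte Carlo sketch of the kind of coupled temperature, snow cover and hardiness reasoning described above: frost damage is counted whenever the crop-level temperature, moderated by snow insulation, falls below the variety's minimum tolerable temperature. All parameter values are illustrative assumptions rather than the calibrated Swedish model.

```python
# Monte Carlo sketch of winter frost-damage risk: damage occurs when the
# temperature at crop level (air temperature moderated by snow insulation) drops
# below the crop's minimum tolerable temperature. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_days = 5000, 120              # winters simulated, days per winter

def frost_risk(hardiness_c, snow_prob, insulation=0.7):
    damaged = 0
    for _ in range(n_years):
        t_air = rng.normal(-5, 6, n_days)          # daily minimum air temperature (deg C)
        snow = rng.random(n_days) < snow_prob      # days with protective snow cover
        t_crop = np.where(snow, insulation * t_air, t_air)
        if t_crop.min() < hardiness_c:             # crop minimum tolerable temperature
            damaged += 1
    return damaged / n_years

for cultivar, hardiness in [("frost-hardy", -22.0), ("less hardy", -16.0)]:
    for snow_prob in (0.8, 0.4):                   # e.g. present vs reduced snow cover
        print(cultivar, f"snow_prob={snow_prob}", "risk =", frost_risk(hardiness, snow_prob))
```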
Strain rate dependency of bovine trabecular bone under impact loading at sideways fall velocity.
Enns-Bray, William S; Ferguson, Stephen J; Helgason, Benedikt
2018-05-03
There is currently a knowledge gap in the scientific literature concerning the strain rate dependent properties of trabecular bone at intermediate strain rates. Meanwhile, strain rates between 10 and 200/s have been observed in previous dynamic finite element models of the proximal femur loaded at realistic sideways fall speeds. This study aimed to quantify the effect of strain rate (ε̇) on modulus of elasticity (E), ultimate stress (σu), failure energy (Uf), and minimum stress (σm) of trabecular bone in order to improve the biofidelity of material properties used in dynamic simulations of sideways fall loading on the hip. Cylindrical cores of trabecular bone (D = 8 mm, Lgauge = 16 mm, n = 34) from bovine proximal tibiae and distal femurs were scanned in µCT (10 µm), quantifying apparent density (ρapp) and degree of anisotropy (DA), and subsequently impacted within a miniature drop tower. Force of impact was measured using a piezoelectric load cell (400 kHz), while displacement during compression was measured from high speed video (50,000 frames/s). Four groups, with similar density distributions, were loaded at different impact velocities (0.84, 1.33, 1.75, and 2.16 m/s) with constant kinetic energy (0.4 J) by adjusting the impact mass. The mean strain rates of each group were significantly different (p < 0.05) except for the two fastest impact speeds (p = 0.09). Non-linear regression models correlated strain rate, DA, and ρapp with ultimate stress (R² = 0.76), elastic modulus (R² = 0.63), failure energy (R² = 0.38), and minimum stress (R² = 0.57). These results indicate that previous estimates of σu could be underpredicting the mechanical properties at strain rates above 10/s. Copyright © 2018 Elsevier Ltd. All rights reserved.
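The sketch below illustrates one way such a non-linear regression can be set up, fitting a power-law dependence of ultimate stress on apparent density and strain rate with scipy; the data are synthetic placeholders, not the bovine measurements, and the functional form is an assumption.

```python
# Sketch of a nonlinear regression of ultimate stress on apparent density and
# strain rate, of the form sigma_u = a * rho^b * (strain_rate)^c. Data are
# synthetic placeholders standing in for the drop-tower measurements.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
rho = rng.uniform(0.3, 0.9, 60)            # apparent density, g/cm^3
rate = rng.uniform(10, 200, 60)            # strain rate, 1/s
sigma_u = 60 * rho**1.8 * rate**0.06 * rng.lognormal(0, 0.08, 60)  # MPa

def model(X, a, b, c):
    rho, rate = X
    return a * rho**b * rate**c

popt, pcov = curve_fit(model, (rho, rate), sigma_u, p0=(50, 2.0, 0.05))
pred = model((rho, rate), *popt)
r2 = 1 - np.sum((sigma_u - pred)**2) / np.sum((sigma_u - sigma_u.mean())**2)
print("a, b, c =", np.round(popt, 3), " R^2 =", round(r2, 2))
```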
Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets
NASA Astrophysics Data System (ADS)
Gold, P. O.; Cowgill, E.; Kreylos, O.
2009-12-01
Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
Liu, Junyan; Liu, Yang; Gao, Mingxia; Zhang, Xiangmin
2012-08-01
A facile proteomic quantification method, fluorescent labeling absolute quantification (FLAQ), was developed. Instead of using MS for quantification, the FLAQ method is a chromatography-based quantification in combination with MS for identification. Multidimensional liquid chromatography (MDLC) with high-accuracy laser-induced fluorescence (LIF) detection and a tandem MS system were employed for FLAQ. Several requirements should be met for fluorescent labeling in MS identification: labeling completeness, minimal side-reactions, simple MS spectra, and no extra tandem MS fragmentations for structure elucidation. A fluorescent dye, 5-iodoacetamidofluorescein, was finally chosen to label proteins on all cysteine residues. The dye was compatible with the process of trypsin digestion and MALDI MS identification. Quantitative labeling was achieved with optimization of reaction conditions. A synthesized peptide and model proteins, BSA (35 cysteines) and OVA (five cysteines), were used for verifying the completeness of labeling. Proteins were separated through MDLC and quantified based on fluorescence intensities, followed by MS identification. High accuracy (RSD < 1.58%) and a wide linear range of quantification (1-10⁵) were achieved by LIF detection. The limit of quantitation for the model protein was as low as 0.34 amol. A subset of proteins in the human liver proteome was quantified using FLAQ as a demonstration. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multispectral imaging of aircraft exhaust
NASA Astrophysics Data System (ADS)
Berkson, Emily E.; Messinger, David W.
2016-05-01
Aircraft pollutants emitted during the landing-takeoff (LTO) cycle have significant effects on the local air quality surrounding airports. There are currently no inexpensive, portable, and unobtrusive sensors to quantify the amount of pollutants emitted from aircraft engines throughout the LTO cycle or to monitor the spatial-temporal extent of the exhaust plume. We seek to thoroughly characterize the unburned hydrocarbon (UHC) emissions from jet engine plumes and to design a portable imaging system to remotely quantify the emitted UHCs and temporally track the distribution of the plume. This paper shows results from the radiometric modeling of a jet engine exhaust plume and describes a prototype long-wave infrared imaging system capable of meeting the above requirements. The plume was modeled with vegetation and sky backgrounds, and filters were selected to maximize the detectivity of the plume. Initial calculations yield a look-up chart, which relates the minimum amount of emitted UHCs required to detect the presence of a plume to the noise-equivalent radiance of a system. Future work will aim to deploy the prototype imaging system at the Greater Rochester International Airport to assess the applicability of the system on a national scale. This project will help monitor the local pollution surrounding airports and allow better-informed decision-making regarding emission caps and pollution bylaws.
Faris, A M; Wang, H-H; Tarone, A M; Grant, W E
2016-05-31
Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are an abstract, mathematical representation of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation, for the stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
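As a simplified illustration of how temperature variability propagates into PMI summary statistics, the sketch below accumulates degree hours for many simulated individuals and reports the mean, minimum, maximum, range, and standard deviation; the development threshold, base temperature, and temperature series are invented, not the C. macellaria data set.

```python
# Sketch of an individual-based propagation of temperature variability into PMI
# estimates: each simulated individual accumulates degree hours until the
# requirement for its collected stadium is met, and ensemble statistics follow.
import numpy as np

rng = np.random.default_rng(7)
ADH_REQUIRED = 2500.0     # accumulated degree hours (base 10 deg C), illustrative
BASE_TEMP = 10.0
n_individuals = 1000

pmi_hours = []
for _ in range(n_individuals):
    # hourly temperatures with a diurnal cycle plus local variability
    hours = np.arange(24 * 30)
    temp = 24 + 6 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
    adh = np.cumsum(np.clip(temp - BASE_TEMP, 0, None))
    pmi_hours.append(np.searchsorted(adh, ADH_REQUIRED) + 1)

pmi = np.array(pmi_hours) / 24.0   # days
print(f"PMI estimate: mean={pmi.mean():.1f} d, min={pmi.min():.1f} d, "
      f"max={pmi.max():.1f} d, range={np.ptp(pmi):.1f} d, sd={pmi.std(ddof=1):.2f} d")
```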
On the Complexity of Item Response Theory Models.
Bonifay, Wes; Cai, Li
2017-01-01
Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models-exploratory factor analytic, bifactor, DINA, and DINO-with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had 1 more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form-and not goodness-of-fit alone-when selecting an IRT model.
NASA Astrophysics Data System (ADS)
Frau, J.; Price, S. L.
1996-04-01
Electrostatic and structural properties of a set of β-lactam, γ-lactam and nonlactam compounds have been analyzed and compared with those of a model of the natural substrate D-alanyl-D-alanine for the carboxy- and transpeptidase enzymes. This first comparison of the electrostatic properties has been based on a distributed multipole analysis of high-quality ab initio wave functions of the substrate and potential antibiotics. The electrostatic similarity of the substrate and active compounds is apparent, and contrasts with the electrostatic properties of the noninhibitors. This similarity has been quantified to give a reasonable correlation with the MIC (minimum inhibitory concentration) and with kinetic data (k2/K), in accordance with the model for interaction of the lactam compounds with DD-peptidase. These correlations provide a better prediction of antibacterial activity than purely structural criteria.
The properties of the anti-tumor model with coupling non-Gaussian noise and Gaussian colored noise
NASA Astrophysics Data System (ADS)
Guo, Qin; Sun, Zhongkui; Xu, Wei
2016-05-01
The anti-tumor model with correlation between multiplicative non-Gaussian noise and additive Gaussian-colored noise has been investigated in this paper. The behaviors of the stationary probability distribution demonstrate that the multiplicative non-Gaussian noise plays a dual role in the development of tumor and an appropriate additive Gaussian colored noise can lead to a minimum of the mean value of tumor cell population. The mean first passage time is calculated to quantify the effects of noises on the transition time of tumors between the stable states. An increase in both the non-Gaussian noise intensity and the departure from the Gaussian noise can accelerate the transition from the disease state to the healthy state. On the contrary, an increase in cross-correlated degree will slow down the transition. Moreover, the correlation time can enhance the stability of the disease state.
Evolution of Multiscale Multifractal Turbulence in the Heliosphere
NASA Astrophysics Data System (ADS)
Macek, W. M.; Wawrzaszek, A.
2009-04-01
The aim of this study is to examine the question of scaling properties of intermittent turbulence in the space environment. We analyze time series of velocities of the slow and fast speed streams of the solar wind measured in situ by Helios 2, Advanced Composition Explorer and Voyager 2 spacecraft in the inner and outer heliosphere during solar minimum and maximum at various distances from the Sun. To quantify asymmetric scaling of solar wind turbulence, we consider a generalized two-scale weighted Cantor set with two different scales describing nonuniform distribution of the kinetic energy flux between cascading eddies of various sizes. We investigate the resulting spectrum of generalized dimensions and the corresponding multifractal singularity spectrum depending on one probability measure parameter and two rescaling parameters, demonstrating that the multifractal scaling is often rather asymmetric. In particular, we show that the degree of multifractality for the solar wind during solar minimum is greater for fast streams velocity fluctuations than that for the slow streams; the fast wind during solar minimum may exhibit strong asymmetric scaling. Moreover, we observe the evolution of multifractal scaling of the solar wind in the outer heliosphere. It is worth noting that for the model with two different scaling parameters a much better agreement with the solar wind data is obtained, especially for the negative index of the generalized dimensions. Therefore we argue that there is a need to use a two-scale cascade model. Hence we propose this new more general model as a useful tool for analysis of intermittent turbulence in various environments. References [1] W. M. Macek and A. Szczepaniak, Generalized two-scale weighted Cantor set model for solar wind turbulence, Geophys. Res. Lett., 35, L02108, doi:10.1029/2007GL032263 (2008). [2] A. Szczepaniak and W. M. Macek, Asymmetric multifractal model for solar wind intermittent turbulence, Nonlin. Processes Geophys., 15, 615-620 (2008), http://www.nonlin-processes-geophys.net/15/615/2008/. [3] W. M. Macek and A. Wawrzaszek, Evolution of asymmetric multifractal scaling of solar wind turbulence in the outer heliosphere, J. Geophys. Res., A013795, doi:10.1029/2008JA013795, in press.
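For reference, the generalized dimensions of a two-scale weighted Cantor set follow from the partition relation p^q l1^(-tau) + (1-p)^q l2^(-tau) = 1 with tau = (q-1) D_q; the sketch below solves this numerically for a few q values, using illustrative parameters rather than values fitted to the solar wind data.

```python
# Sketch of the generalized dimensions D_q for a two-scale weighted Cantor set:
# two pieces with rescaling parameters l1, l2 and measures p, 1-p, where D_q
# follows from  p^q * l1^(-tau) + (1-p)^q * l2^(-tau) = 1  with tau = (q-1)*D_q.
# Parameter values below are illustrative, not fitted to solar wind fluctuations.
import numpy as np
from scipy.optimize import brentq

p, l1, l2 = 0.7, 0.30, 0.45   # probability measure parameter and the two scales

def D_q(q):
    if abs(q - 1.0) < 1e-9:   # information dimension as the q -> 1 limit
        return ((p * np.log(p) + (1 - p) * np.log(1 - p))
                / (p * np.log(l1) + (1 - p) * np.log(l2)))
    f = lambda tau: p**q * l1**(-tau) + (1 - p)**q * l2**(-tau) - 1.0
    return brentq(f, -50, 50) / (q - 1.0)   # f is monotone in tau, single root

for q in (-5, -1, 0, 1, 2, 5):
    print(f"D_{q} = {D_q(q):.3f}")
```

The spread between D_q at large negative and large positive q is one simple measure of the degree (and asymmetry) of multifractality discussed above.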
Quantifying hyporheic exchange dynamics in a highly regulated large river reach
NASA Astrophysics Data System (ADS)
Zhou, T.; Bao, J.; Huang, M.; Hou, Z.; Arntzen, E.; Mackley, R.; Harding, S.; Crump, A.; Xu, Y.; Song, X.; Chen, X.; Stegen, J.; Hammond, G. E.; Thorne, P. D.; Zachara, J. M.
2016-12-01
Hyporheic exchange is an important mechanism taking place in riverbanks and riverbed sediments, where river water and shallow groundwater mix and interact with each other. The direction and magnitude of the hyporheic flux that penetrates the river bed, and the residence time of river water in the hyporheic zone, are critical for biogeochemical processes such as carbon and nitrogen cycling and the biodegradation of organic contaminants. Hyporheic flux can be quantified using many direct and indirect measurements as well as analytical and numerical modeling tools. However, in a relatively large river, these methods can be limited by accessibility, spatial constraints, the complexity of geomorphologic features and subsurface properties, and computational power. In rivers regulated by hydroelectric dams, quantifying hyporheic fluxes becomes more challenging due to frequent hydropeaking events created by dam operations. In this study, we developed and validated methods that combined field measurements and numerical modeling for estimating hyporheic fluxes across the river bed in a 7-km long reach of the highly regulated Columbia River. The reach has a minimum width of about 800 meters, and variations in river stage within a day can be up to two meters due to upstream dam operations. In shallow water along the shoreline, vertical thermal profiles measured by self-recording thermistors were combined with time series of hydraulic gradient derived from river stage and water levels at in-land wells to estimate the hyporheic flux rate. For the deep section, a high resolution computational fluid dynamics (CFD) modeling framework was developed to characterize the spatial distribution of flux rates at the river bed and the residence time of hyporheic flow at different river flow conditions. Our modeling results show that the rates of hyporheic exchange and residence time are controlled by (1) hydrostatic pressure induced by river stage fluctuations, and (2) hydrodynamic drivers associated with flow velocity variations, which are themselves to a certain extent dependent on flow conditions.
Effects of regulated river flows on habitat suitability for the robust redhorse
Fisk, J. M.; Kwak, Thomas J.; Heise, R. J.
2015-01-01
The Robust Redhorse Moxostoma robustum is a rare and imperiled fish, with wild populations occurring in three drainages from North Carolina to Georgia. Hydroelectric dams have altered the species’ habitat and restricted its range. An augmented minimum-flow regime that will affect Robust Redhorse habitat was recently prescribed for Blewett Falls Dam, a hydroelectric facility on the Pee Dee River, North Carolina. Our objective was to quantify suitable spawning and nonspawning habitat under current and proposed minimum-flow regimes. We implanted radio transmitters into 27 adult Robust Redhorses and relocated the fish from spring 2008 to summer 2009, and we described habitat at 15 spawning capture locations. Nonspawning habitat consisted of deep, slow-moving pools (mean depth = 2.3 m; mean velocity = 0.23 m/s), bedrock and sand substrates, and boulders or coarse woody debris as cover. Spawning habitat was characterized as shallower, faster-moving water (mean depth = 0.84 m; mean velocity = 0.61 m/s) with gravel and cobble as substrates and boulders as cover associated with shoals. Telemetry relocations revealed two behavioral subgroups: a resident subgroup (linear range [mean ± SE] = 7.9 ± 3.7 river kilometers [rkm]) that remained near spawning areas in the Piedmont region throughout the year; and a migratory subgroup (linear range = 64.3 ± 8.4 rkm) that migrated extensively downstream into the Coastal Plain region. Spawning and nonspawning habitat suitability indices were developed based on field microhabitat measurements and were applied to model suitable available habitat (weighted usable area) for current and proposed augmented minimum flows. Suitable habitat (both spawning and nonspawning) increased for each proposed seasonal minimum flow relative to former minimum flows, with substantial increases for spawning sites. Our results contribute to an understanding of how regulated flows affect available habitats for imperiled species. Flow managers can use these findings to regulate discharge more effectively and to create and maintain important habitats during critical periods for priority species.
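A minimal sketch of a weighted usable area calculation of the kind referred to above: each cell's area is weighted by a composite suitability index built from depth, velocity, and substrate curves; the curves and cell values are placeholders, not the Pee Dee River habitat suitability indices.

```python
# Sketch of a weighted usable area (WUA) calculation: each modeled cell's area is
# weighted by a composite suitability index from depth, velocity, and substrate.
# The suitability breakpoints and cell data are illustrative placeholders.
import numpy as np

def suitability(value, breakpoints, scores):
    """Piecewise-linear habitat suitability index in [0, 1]."""
    return np.interp(value, breakpoints, scores)

# Spawning suitability curves (illustrative): shallow, fast water over coarse substrate
depth_si     = lambda d: suitability(d, [0.2, 0.8, 1.5, 2.5], [0.0, 1.0, 0.5, 0.0])
velocity_si  = lambda v: suitability(v, [0.1, 0.6, 1.0, 1.5], [0.0, 1.0, 0.7, 0.0])
substrate_si = {"gravel": 1.0, "cobble": 0.8, "sand": 0.2, "bedrock": 0.1}

# Hydraulic model output for one discharge: per-cell depth (m), velocity (m/s), substrate, area (m^2)
cells = [
    (0.9, 0.65, "gravel", 25.0),
    (0.7, 0.55, "cobble", 25.0),
    (2.1, 0.20, "sand",   25.0),
    (1.2, 0.80, "gravel", 25.0),
]

wua = sum(area * depth_si(d) * velocity_si(v) * substrate_si[sub]
          for d, v, sub, area in cells)
print(f"weighted usable area at this discharge: {wua:.1f} m^2")
```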
Master, Hiral; Thoma, Louise M; Christiansen, Meredith B; Polakowski, Emily; Schmitt, Laura A; White, Daniel K
2018-07-01
Evidence of physical function difficulties, such as difficulty rising from a chair, may limit daily walking for people with knee osteoarthritis (OA). The purpose of this study was to identify minimum performance thresholds on clinical tests of physical function predictive of walking ≥6,000 steps/day. This benchmark is known to discriminate people with knee OA who develop functional limitation over time from those who do not. Using data from the Osteoarthritis Initiative, we quantified daily walking as average steps/day from an accelerometer (Actigraph GT1M) worn for ≥10 hours/day over 1 week. Physical function was quantified using 3 performance-based clinical tests: the 5 times sit-to-stand test, walking speed (tested over 20 meters), and the 400-meter walk test. To identify minimum performance thresholds for daily walking, we calculated physical function values corresponding to high specificity (80-95%) for predicting walking ≥6,000 steps/day. Among 1,925 participants (mean ± SD age 65.1 ± 9.1 years, mean ± SD body mass index 28.4 ± 4.8 kg/m², and 55% female) with valid accelerometer data, 54.9% walked ≥6,000 steps/day. High-specificity thresholds of physical function for walking ≥6,000 steps/day ranged from 11.4 to 14.0 seconds on the 5 times sit-to-stand test, 1.13 to 1.26 meters/second for walking speed, and 315 to 349 seconds on the 400-meter walk test. Not meeting these minimum performance thresholds on clinical tests of physical function may indicate inadequate physical ability to walk ≥6,000 steps/day for people with knee OA. Rehabilitation may be indicated to address underlying impairments limiting physical function. © 2017, American College of Rheumatology.
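The sketch below shows one way a high-specificity threshold can be derived from such data: among participants who do not reach ≥6,000 steps/day, the threshold is set so that 90% of them fall below it; the walking-speed values are synthetic stand-ins for the Osteoarthritis Initiative sample.

```python
# Sketch of deriving a minimum performance threshold at a fixed specificity:
# among people who do NOT walk >= 6,000 steps/day, find the test value that 90%
# of them fail to reach, so exceeding it is a high-specificity predictor.
import numpy as np

rng = np.random.default_rng(11)
walk_speed = np.concatenate([rng.normal(1.35, 0.15, 1000),   # meets >= 6,000 steps/day
                             rng.normal(1.10, 0.15, 800)])    # does not
meets_6000 = np.concatenate([np.ones(1000, bool), np.zeros(800, bool)])

def threshold_at_specificity(values, positives, spec=0.90):
    """Threshold such that `spec` of the negative group falls below it."""
    return np.quantile(values[~positives], spec)

thr = threshold_at_specificity(walk_speed, meets_6000, spec=0.90)
sens = np.mean(walk_speed[meets_6000] >= thr)
print(f"walking-speed threshold at 90% specificity: {thr:.2f} m/s (sensitivity {sens:.2f})")
```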
Tafazal, Suhayl I; Sell, Philip J
2006-11-01
Outcome scores are very useful tools in the field of spinal surgery, as they allow us to assess a patient's progress and the effect of various treatments. The clinical importance of a score change is not so clear. Although previous studies have looked at the minimum clinically important score change, the reported degree of score change varies considerably. Our study is a prospective cohort study of 193 patients undergoing discectomy, decompression and fusion procedures with a minimum 2-year follow-up. We used three standard outcome measures in common usage: the Oswestry Disability Index (ODI), the Low Back Outcome Score (LBOS) and the visual analogue score (VAS). We defined each of these scores according to a global measure of outcome graded by the patient as excellent, good, fair or poor. We also graded patient perception, classifying excellent and good as success and fair and poor as failure. Our results suggest that a median 24-point change in the ODI equates with a good outcome, or is the minimum change needed for success. We also found that different surgical disorders have very different minimal clinically important differences as perceived by patients. We found that for a discectomy a minimum 27-point change in the ODI would be classed as a success, for a decompression the change in ODI needed to class it as a success would be 16 points, whereas for a fusion the change in the ODI would be only 13 points. We believe that patient-rated global measures of outcome are of value, and we have quantified them in terms of the standard outcome measures used in spinal surgery.
A mixing-model approach to quantifying sources of organic matter to salt marsh sediments
NASA Astrophysics Data System (ADS)
Bowles, K. M.; Meile, C. D.
2010-12-01
Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production by vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantify organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data that is utilized in such mixing models, the uncertainties in endmember compositions and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics - including a yearlong degradation experiment - we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e. endmember characteristics such as δ13C of organic carbon, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes can alter endmember characteristics over time, we investigate the effect of early diagenesis on the chosen parameters, an analysis that entails an assessment of the organic matter age distribution. Thus, estimates of the relative contributions of phytoplankton, C3 and C4 plants to bulk sediment organic matter depend not only on environmental characteristics that impact reactivity, but also on sediment mixing processes.
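A minimal sketch of the constrained linear least-squares formulation mentioned above, assuming illustrative endmember δ13C and C/N values (not the measured salt-marsh data) and enforcing non-negative fractions that sum to one via a heavily weighted auxiliary equation.

```python
# Sketch of a constrained least-squares mixing model: endmember characteristics
# (here d13C and C/N, values illustrative) are combined to reproduce the sediment
# mixture, with fractions non-negative and summing to one. The sum-to-one
# constraint is imposed by appending a heavily weighted extra equation.
import numpy as np
from scipy.optimize import lsq_linear

# columns: phytoplankton, C3 plants (allochthonous), C4 Spartina
endmembers = np.array([
    [-21.0, -28.0, -13.0],   # d13C of organic carbon (per mil)
    [  7.0,  25.0,  40.0],   # C/N (molar)
])
sediment = np.array([-17.5, 22.0])          # measured bulk sediment values

w = 1e3                                      # weight enforcing sum(fractions) = 1
A = np.vstack([endmembers, w * np.ones((1, 3))])
b = np.concatenate([sediment, [w]])

res = lsq_linear(A, b, bounds=(0.0, 1.0))
print("estimated source fractions (phyto, C3, C4):", np.round(res.x, 3))
```

Monte Carlo uncertainty propagation, as described above, would simply repeat this solve with endmember values resampled from their assumed error distributions.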
Perona, Paolo; Dürrenmatt, David J; Characklis, Gregory W
2013-03-30
We propose a theoretical river modeling framework for generating variable flow patterns in diverted streams (i.e., with no reservoir). Using a simple economic model and the principle of equal marginal utility in an inverse fashion, we first quantify the benefit of the water that goes to the environment relative to that of the anthropic activity. Then, we obtain exact expressions for optimal water allocation rules between the two competing uses, as well as the related statistical distributions. These rules are applied using both synthetic and observed streamflow data to demonstrate that this approach may be useful in 1) generating more natural flow patterns in the river reach downstream of the diversion, thus reducing the ecodeficit; 2) obtaining a more enlightened economic interpretation of Minimum Flow Release (MFR) strategies; and 3) comparing the long-term costs and benefits of variable versus MFR policies and showing the greater ecological sustainability of this new approach. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pacella, Stephen R.; Brown, Cheryl A.; Waldbusser, George G.; Labiosa, Rochelle G.; Hales, Burke
2018-04-01
The role of rising atmospheric CO2 in modulating estuarine carbonate system dynamics remains poorly characterized, likely due to myriad processes driving the complex chemistry in these habitats. We reconstructed the full carbonate system of an estuarine seagrass habitat for a summer period of 2.5 months utilizing a combination of time-series observations and mechanistic modeling, and quantified the roles of aerobic metabolism, mixing, and gas exchange in the observed dynamics. The anthropogenic CO2 burden in the habitat was estimated for the years 1765–2100 to quantify changes in observed high-frequency carbonate chemistry dynamics. The addition of anthropogenic CO2 alters the thermodynamic buffer factors (e.g., the Revelle factor) of the carbonate system, decreasing the seagrass habitat’s ability to buffer natural carbonate system fluctuations. As a result, the most harmful carbonate system indices for many estuarine organisms [minimum pHT, minimum Ωarag, and maximum pCO2(s.w.)] change up to 1.8×, 2.3×, and 1.5× more rapidly than the medians for each parameter, respectively. In this system, the relative benefits of the seagrass habitat in locally mitigating ocean acidification increase with the higher atmospheric CO2 levels predicted toward 2100. Presently, however, these mitigating effects are mixed due to intense diel cycling of CO2 driven by aerobic metabolism. This study provides estimates of how high-frequency pHT, Ωarag, and pCO2(s.w.) dynamics are altered by rising atmospheric CO2 in an estuarine habitat, and highlights nonlinear responses of coastal carbonate parameters to ocean acidification relevant for water quality management.
Santos, Hadassa C; Horimoto, Andréa V R; Tarazona-Santos, Eduardo; Rodrigues-Soares, Fernanda; Barreto, Mauricio L; Horta, Bernardo L; Lima-Costa, Maria F; Gouveia, Mateus H; Machado, Moara; Silva, Thiago M; Sanches, José M; Esteban, Nubia; Magalhaes, Wagner CS; Rodrigues, Maíra R; Kehdy, Fernanda S G; Pereira, Alexandre C
2016-01-01
The Brazilian population is considered to be highly admixed. The main contributing ancestral populations were European and African, with Amerindians contributing to a lesser extent. The aims of this study were to provide a resource for determining and quantifying individual continental ancestry using the smallest number of SNPs possible, thus allowing for a cost- and time-efficient strategy for genomic ancestry determination. We identified and validated a minimum set of 192 ancestry informative markers (AIMs) for the genetic ancestry determination of Brazilian populations. These markers were selected on the basis of their distribution throughout the human genome and their capacity to be genotyped on widely available commercial platforms. We analyzed genotyping data from 6487 individuals belonging to three Brazilian cohorts. Estimates of individual admixture using this 192-AIM panel were highly correlated with estimates using ~370 000 genome-wide SNPs: 91%, 92%, and 74% for the African, European, and Native American ancestry components, respectively. In addition, the 192 AIMs are well distributed among populations from these ancestral continents, allowing greater freedom in the choice of reference populations for future studies with this panel. We also observed that genetic ancestry inferred by the AIMs provides association results similar to those obtained using ancestry inferred from genomic data (370 K SNPs) in a simple regression model with genotypes of rs1426654, a variant related to skin pigmentation, as the dependent variable. In conclusion, these markers can be used to identify and accurately quantify the ancestry of Latin Americans or US Hispanic/Latino individuals, in particular in the context of fine-mapping strategies that require the quantification of continental ancestry in thousands of individuals. PMID:26395555
Developing a phenological model for grapevine to assess future frost risk in Luxembourg
NASA Astrophysics Data System (ADS)
Caffarra, A.; Molitor, D.; Pertot, I.; Sinigoy, P.; Junk, J.
2012-04-01
Late frost damage represents a significant hazard to grape production in cool climate viticulture regions such as Luxembourg. The main aim of our study is to analyze the frequency of these events for Luxembourg's winegrowing region in the future. Spring frost injuries on grape may occur when young green parts are exposed to air temperatures below 0°C. The potential risk is determined by (i) minimum air temperature conditions and (ii) the timing of budburst. Therefore, we developed and validated a model for budburst of the grapevine (Vitis vinifera) cultivar Rivaner, the most widely grown local variety, based on multi-annual data from 7 different sites across Europe and the US. An advantage of this approach is that it can be applied to a wide range of climate conditions. Higher spring temperatures are projected for the future and could lead to earlier dates of budburst as well as earlier dates of the last frost events in the season. However, so far it is unknown whether this will increase or decrease the risk of severe late frost damage for Luxembourg's winegrowing region. To address this question, results of 10 regional climate change projections from the FP6 ENSEMBLES project (spatial resolution = 25 km; A1B emission scenario) were combined with the new budburst model. The use of a multi-model ensemble of climate change projections allows for a better quantification of the uncertainties. A bias correction scheme, based on local observations, was applied to the model output. Projected daily minimum air temperatures, up to 2098, were compared to the projected date of budburst in order to quantify the future frost risk for Luxembourg.
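As a schematic of how a thermal-time budburst model can be confronted with projected minimum temperatures, the sketch below triggers budburst at an assumed growing-degree-day sum and counts subsequent days below a damage threshold; the thresholds and weather series are invented, not the calibrated Rivaner model or the ENSEMBLES projections.

```python
# Sketch of combining a growing-degree-day budburst model with daily minimum
# temperatures to flag late-frost risk. Parameters and weather are illustrative.
import numpy as np

rng = np.random.default_rng(42)
doy = np.arange(1, 181)                                  # 1 Jan to end of June
t_mean = 2 + 16 * np.sin(np.pi * (doy - 30) / 330) + rng.normal(0, 3, doy.size)
t_min = t_mean - 5 + rng.normal(0, 2, doy.size)

BASE_T, GDD_BUDBURST, DAMAGE_T = 5.0, 120.0, -1.0        # assumed thresholds

gdd = np.cumsum(np.clip(t_mean - BASE_T, 0, None))        # thermal time above base
budburst_doy = int(doy[np.argmax(gdd >= GDD_BUDBURST)])   # first day the sum is reached

frost_days = doy[(doy > budburst_doy) & (t_min < DAMAGE_T)]
print(f"budburst on day {budburst_doy}; post-budburst frost days: {list(frost_days)}")
```

Repeating this over many projected years (and climate model members) gives the frequency of damaging post-budburst frosts, which is the risk metric discussed above.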
Performance seeking control program overview
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The Performance Seeking Control (PSC) program evolved from a series of integrated propulsion-flight control research programs flown at NASA Dryden Flight Research Center (DFRC) on an F-15. The first of these, the Digital Electronic Engine Control (DEEC) program, provided digital engine controls suitable for integration. The DEEC and the digital electronic flight control system of the NASA F-15 were ideally suited for integrated controls research. The Advanced Engine Control System (ADECS) program proved that integrated engine and aircraft control could improve overall system performance. The objective of the PSC program was to advance the technology for a fully integrated propulsion-flight control system. Whereas ADECS provided single-variable control for an average engine, PSC controlled multiple propulsion system variables while adapting to the measured engine performance. PSC was developed as a model-based, adaptive control algorithm and included four optimization modes: minimum fuel flow at constant thrust, minimum turbine temperature at constant thrust, maximum thrust, and minimum thrust. Subsonic and supersonic flight testing of the PSC algorithm was conducted at NASA Dryden over the full throttle range, covering all four optimization modes, in a series of five flight test phases comprising 72 research flights over a three-year period. The primary objective of flight testing was to exercise each PSC optimization mode and quantify the resulting performance improvements.
On the Spatial Spread of Rabies among Foxes
NASA Astrophysics Data System (ADS)
Murray, J. D.; Stanley, E. A.; Brown, D. L.
1986-11-01
We present a simple model for the spatial spread of rabies among foxes and use it to quantify its progress in England if rabies were introduced. The model is based on the known ecology of fox behaviour and on the assumption that the main vector for the spread of the disease is the rabid fox. Known data and facts are used to determine realistic parameter values for the model. We calculate the speed of propagation of the epizootic front, the threshold for the existence of an epidemic, the period and distance apart of the subsequent cyclical epidemics which follow the main front, and finally we quantify a means for control of the spatial spread of the disease. By way of illustration we use the model to determine the progress of rabies up through the southern part of England if it were introduced near Southampton. Estimates for the current fox density in England were used in the simulations. These suggest that the disease would reach Manchester within about 3.5 years, moving at speeds as high as 100 km per year in the central region. The model further indicates that, although it might seem that the disease had disappeared after the wave had passed, it would reappear in the south of England after just over 6 years and at periodic times after that. We consider the possibility of stopping the spread of the disease by creating a rabies 'break' ahead of the front through vaccination, reducing the population to a level below the threshold for an epidemic to exist. Based on parameter values relevant to England, we estimate its minimum width to be about 15 km. The model suggests that vaccination has considerable advantages over severe culling.
A potato model intercomparison across varying climates and productivity levels.
Fleisher, David H; Condori, Bruno; Quiroz, Roberto; Alva, Ashok; Asseng, Senthold; Barreda, Carolina; Bindi, Marco; Boote, Kenneth J; Ferrise, Roberto; Franke, Angelinus C; Govindakrishnan, Panamanna M; Harahagazwe, Dieudonne; Hoogenboom, Gerrit; Naresh Kumar, Soora; Merante, Paolo; Nendel, Claas; Olesen, Jorgen E; Parker, Phillip S; Raes, Dirk; Raymundo, Rubi; Ruane, Alex C; Stockle, Claudio; Supit, Iwan; Vanuytrecht, Eline; Wolf, Joost; Woli, Prem
2017-03-01
A potato crop multimodel assessment was conducted to quantify variation among models and evaluate responses to climate change. Nine modeling groups simulated agronomic and climatic responses at low-input (Chinoli, Bolivia and Gisozi, Burundi)- and high-input (Jyndevad, Denmark and Washington, United States) management sites. Two calibration stages were explored, partial (P1), where experimental dry matter data were not provided, and full (P2). The median model ensemble response outperformed any single model in terms of replicating observed yield across all locations. Uncertainty in simulated yield decreased from 38% to 20% between P1 and P2. Model uncertainty increased with interannual variability, and predictions for all agronomic variables were significantly different from one model to another (P < 0.001). Uncertainty averaged 15% higher for low- vs. high-input sites, with larger differences observed for evapotranspiration (ET), nitrogen uptake, and water use efficiency as compared to dry matter. A minimum of five partial, or three full, calibrated models was required for an ensemble approach to keep variability below that of common field variation. Model variation was not influenced by change in carbon dioxide (C), but increased as much as 41% and 23% for yield and ET, respectively, as temperature (T) or rainfall (W) moved away from historical levels. Increases in T accounted for the highest amount of uncertainty, suggesting that methods and parameters for T sensitivity represent a considerable unknown among models. Using median model ensemble values, yield increased on average 6% per 100-ppm C, declined 4.6% per °C, and declined 2% for every 10% decrease in rainfall (for nonirrigated sites). Differences in predictions due to model representation of light utilization were significant (P < 0.01). These are the first reported results quantifying uncertainty for tuber/root crops and suggest modeling assessments of climate change impact on potato may be improved using an ensemble approach. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Remo, Jonathan W. F.; Ickes, Brian S.; Ryherd, Julia K.; Guida, Ross J.; Therrell, Matthew D.
2018-07-01
The impacts of dams and levees on the long-term (>130 years) discharge record were assessed along a 1200 km segment of the Mississippi River between St. Louis, Missouri, and Vicksburg, Mississippi. To aid in our evaluation of dam impacts, we used data from the U.S. National Inventory of Dams to calculate the rate of reservoir expansion at five long-term hydrologic monitoring stations along the study segment. We divided the hydrologic record at each station into three periods: (1) a pre-rapid reservoir expansion period; (2) a rapid reservoir expansion period; and (3) a post-rapid reservoir expansion period. We then used three approaches to assess changes in the hydrologic record at each station. Indicators of hydrologic alteration (IHA) and flow duration hydrographs were used to quantify changes in flow conditions between the pre- and post-rapid reservoir expansion periods. Auto-regressive interrupted time series analysis (ARITS) was used to assess trends in maximum annual discharge, mean annual discharge, minimum annual discharge, and the standard deviation of daily discharges within a given water year. A one-dimensional HEC-RAS hydraulic model was used to assess the impact of levees on flood flows. Our results revealed that minimum annual discharges and low-flow IHA parameters showed the most significant changes. Additionally, increasing trends in minimum annual discharge during the rapid reservoir expansion period were found at three of the five hydrologic monitoring stations. These IHA and ARITS results support previous findings consistent with the observation that reservoirs generally have the greatest impacts on low-flow conditions. River-segment-scale hydraulic modeling revealed that levees can modestly increase peak flood discharges, while basin-scale hydrologic modeling assessments by the U.S. Army Corps of Engineers showed that tributary reservoirs reduced peak discharges by a similar magnitude (2 to 30%). This finding suggests that the effects of dams and levees on peak flood discharges are in part offsetting one another along the modeled river segments and likely other substantially leveed segments of the Mississippi River.
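The sketch below illustrates an IHA-style low-flow calculation of the kind used above: annual minimum and minimum 7-day mean discharges are extracted per water year and a simple linear trend is fitted; the daily series is synthetic, not the Mississippi River gauge record, and the trend fit is a simplified stand-in for ARITS.

```python
# Sketch of an IHA-style low-flow analysis: annual minimum and minimum 7-day mean
# discharge per water year, followed by a simple linear trend. Synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
dates = pd.date_range("1940-10-01", "2015-09-30", freq="D")
trend = np.linspace(0, 400, dates.size)                   # gradual low-flow augmentation
q = (3000 + 2500 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
     + trend + rng.gamma(2.0, 400, dates.size))           # daily discharge, m^3/s
flow = pd.Series(q, index=dates)

water_year = flow.index.year + (flow.index.month >= 10)   # Oct-Sep water years
annual_min = flow.groupby(water_year).min()
min_7day = flow.rolling(7).mean().groupby(water_year).min()

slope = np.polyfit(annual_min.index, annual_min.values, 1)[0]
print(f"annual minimum flow trend: {slope:.1f} m^3/s per year")
print(min_7day.describe().round(1))
```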
Deducing protein structures using logic programming: exploiting minimum data of diverse types.
Sibbald, P R
1995-04-21
The extent to which a protein can be modeled from constraint data depends on the amount and quality of the data. This report quantifies a relationship between the amount of data and the achievable model resolution. In an information-theoretic framework the number of bits of information per residue needed to constrain a solution was calculated. The number of bits provided by different kinds of constraints was estimated from a tetrahedral lattice where all unique molecules of 6, 9, ..., 21 atoms were enumerated. Subsets of these molecules consistent with different constraint sets were then chosen, counted, and the root-mean-square distance between them calculated. This provided the desired relations. In a discrete system the number of possible models can be severely limited with relatively few constraints. An expert system that can model a protein from data of different types was built to illustrate the principle and was tested using known proteins as examples. C-alpha resolutions of 5 A are obtainable from 5 bits of information per amino acid and, in principle, from data that could be rapidly collected using standard biophysical techniques.
Gunawardena, Harsha P; O'Brien, Jonathon; Wrobel, John A; Xie, Ling; Davies, Sherri R; Li, Shunqiang; Ellis, Matthew J; Qaqish, Bahjat F; Chen, Xian
2016-02-01
Single quantitative platforms such as label-based or label-free quantitation (LFQ) present compromises in accuracy, precision, protein sequence coverage, and speed of quantifiable proteomic measurements. To maximize the quantitative precision and the number of quantifiable proteins, or the quantifiable coverage of tissue proteomes, we have developed a unified approach, termed QuantFusion, that combines the quantitative ratios of all peptides measured by both LFQ and label-based methodologies. Here, we demonstrate the use of QuantFusion in determining the proteins differentially expressed in a pair of patient-derived tumor xenografts (PDXs) representing two major breast cancer (BC) subtypes, basal and luminal. Label-based in-spectra quantitative peptides derived from amino acid-coded tagging (AACT, also known as SILAC) of a non-malignant mammary cell line were uniformly added to each xenograft with a constant predefined ratio, from which Ratio-of-Ratio estimates were obtained for the label-free peptides paired with AACT peptides in each PDX tumor. A mixed-model statistical analysis was used to determine global differential protein expression by combining complementary quantifiable peptide ratios measured by LFQ and Ratio-of-Ratios, respectively. With the minimum number of replicates required to obtain statistically significant ratios, QuantFusion uses distinct mechanisms to "rescue" the missing data inherent to both LFQ and label-based quantitation. Combining quantifiable peptide data from both quantitative schemes increased the overall number of peptide-level measurements and protein-level estimates. In our analysis of the PDX tumor proteomes, QuantFusion increased the number of distinct peptide ratios by 65%, representing differentially expressed proteins between the BC subtypes. This improvement in quantifiable coverage, in turn, not only increased the number of measurable protein fold-changes by 8% but also increased the average precision of quantitative estimates by 181%, so that some BC subtype-specific proteins were rescued by QuantFusion. Thus, incorporating data from multiple quantitative approaches while accounting for measurement variability at both the peptide and global protein levels makes QuantFusion unique for obtaining increased coverage and quantitative precision for tissue proteomes. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Shelf-life of a 2.5% sodium hypochlorite solution as determined by Arrhenius equation.
Nicoletti, Maria Aparecida; Siqueira, Evandro Luiz; Bombana, Antonio Carlos; Oliveira, Gabriella Guimarães de
2009-01-01
Accelerated stability tests are indicated to assess, within a short time, the degree of chemical degradation that may affect an active substance, either alone or in a formula, under normal storage conditions. This method is based on increased stress conditions to accelerate the rate of chemical degradation. Based on the equation of the straight line obtained as a function of the reaction order (at 50 and 70 degrees C) and using Arrhenius equation, the speed of the reaction was calculated for the temperature of 20 degrees C (normal storage conditions). This model of accelerated stability test makes it possible to predict the chemical stability of any active substance at any given moment, as long as the method to quantify the chemical substance is available. As an example of the applicability of Arrhenius equation in accelerated stability tests, a 2.5% sodium hypochlorite solution was analyzed due to its chemical instability. Iodometric titration was used to quantify free residual chlorine in the solutions. Based on data obtained keeping this solution at 50 and 70 degrees C, using Arrhenius equation and considering 2.0% of free residual chlorine as the minimum acceptable threshold, the shelf-life was equal to 166 days at 20 degrees C. This model, however, makes it possible to calculate shelf-life at any other given temperature.
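A minimal sketch of the Arrhenius extrapolation described above, assuming first-order decay and two placeholder rate constants at 50 and 70 degrees C (not the paper's measured values): the activation energy is derived from the two points, the rate constant at 20 degrees C is extrapolated, and the shelf-life is the time to decay from 2.5% to the 2.0% threshold.

```python
# Sketch of the Arrhenius extrapolation: rate constants at two elevated
# temperatures give the activation energy, the 20 deg C rate is extrapolated, and
# shelf-life is the first-order decay time from 2.5% to 2.0% available chlorine.
# The two rate constants below are assumed placeholders, not measured values.
import numpy as np

R = 8.314                                     # J/(mol K)
T1, T2, T_store = 323.15, 343.15, 293.15      # 50 deg C, 70 deg C, 20 deg C
k1, k2 = 0.010, 0.060                         # assumed first-order constants, 1/day

# Arrhenius: ln k = ln A - Ea/(R T)  ->  Ea from the two measured points
Ea = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
lnA = np.log(k1) + Ea / (R * T1)
k_store = np.exp(lnA - Ea / (R * T_store))

shelf_life = np.log(2.5 / 2.0) / k_store      # days to reach the 2.0% limit
print(f"Ea = {Ea/1000:.1f} kJ/mol, k(20 C) = {k_store:.4f} 1/day, shelf-life = {shelf_life:.0f} days")
```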
Noninvasive measurement of dynamic correlation functions
NASA Astrophysics Data System (ADS)
Uhrich, Philipp; Castrignano, Salvatore; Uys, Hermann; Kastner, Michael
2017-08-01
The measurement of dynamic correlation functions of quantum systems is complicated by measurement backaction. To facilitate such measurements we introduce a protocol, based on weak ancilla-system couplings, that is applicable to arbitrary (pseudo)spin systems and arbitrary equilibrium or nonequilibrium initial states. Different choices of the coupling operator give access to the real and imaginary parts of the dynamic correlation function. This protocol reduces disturbances due to the early-time measurements to a minimum, and we quantify the deviation of the measured correlation functions from the theoretical, unitarily evolved ones. Implementations of the protocol in trapped ions and other experimental platforms are discussed. For spin-1/2 models and single-site observables we prove that measurement backaction can be avoided altogether, allowing for the use of ancilla-free protocols.
NASA Astrophysics Data System (ADS)
Alhamwi, Alaa; Kleinhans, David; Weitemeyer, Stefan; Vogt, Thomas
2014-12-01
Renewable energy sources are gaining importance in the Middle East and North Africa (MENA) region. The purpose of this study is to quantify the optimal mix of renewable power generation in the MENA region, taking Morocco as a case study. Based on hourly meteorological data and load data, a 100% solar-plus-wind scenario for Morocco is investigated. For the optimal mix analyses, a mismatch energy modelling approach is adopted with the objective of minimising the required storage capacities. For a hypothetical Moroccan energy supply system which is entirely based on renewable energy sources, our results show that the minimum storage capacity is achieved at a share of 63% solar and 37% wind power generation.
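A minimal sketch of the mismatch-energy idea, assuming synthetic normalised hourly solar, wind, and load series rather than the Moroccan data: for each solar share, the storage requirement is approximated by the range of the cumulative mismatch, and the share that minimises it is reported.

```python
# Sketch of the mismatch-energy approach: for each solar share a, the hourly
# mismatch a*S + (1-a)*W - L (series normalised to the same mean) is integrated,
# and the storage requirement is taken as the range of the cumulative mismatch.
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24 * 365)
solar = np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24)
                + 0.1 * rng.standard_normal(hours.size), 0, None)
wind = np.clip(0.5 + 0.3 * rng.standard_normal(hours.size), 0, None)
load = 1.0 + 0.2 * np.sin(2 * np.pi * (hours % 24 - 19) / 24)

# normalise generation so each source alone would meet the average load
solar *= load.mean() / solar.mean()
wind *= load.mean() / wind.mean()

def storage_needed(a):
    mismatch = a * solar + (1 - a) * wind - load
    filling = np.cumsum(mismatch)
    return filling.max() - filling.min()     # capacity spanning surplus and deficit

shares = np.linspace(0, 1, 101)
needs = [storage_needed(a) for a in shares]
best = shares[int(np.argmin(needs))]
print(f"storage-minimising mix: {best:.0%} solar, {1 - best:.0%} wind")
```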
Recent Studies of the Behavior of the Sun's White-Light Corona Over Time
NASA Technical Reports Server (NTRS)
SaintCyr, O. C.; Young, D. E.; Pesnell, W. D.; Lecinski, A.; Eddy, J.
2008-01-01
Predictions of upcoming solar cycles are often related to the nature and dynamics of the Sun's polar magnetic field and its influence on the corona. For the past 30 years we have a more-or-less continuous record of the Sun's white-light corona from groundbased and spacebased coronagraphs. Over that interval, the large scale features of the corona have varied in what we now consider a 'predictable' fashion--complex, showing multiple streamers at all latitudes during solar activity maximum; and a simple dipolar shape aligned with the rotational pole during solar minimum. Over the past three decades the white-light corona appears to be a better indicator of 'true' solar minimum than sunspot number since sunspots disappear for months (even years) at solar minimum. Since almost all predictions of the timing of the next solar maximum depend on the timing of solar minimum, the white-light corona is a potentially important observational discriminator for future predictors. In this contribution we describe recent work quantifying the large-scale appearance of the Sun's corona to correlate it with the sunspot record, especially around solar minimum. These three decades can be expanded with the HAO archive of eclipse photographs which, although sparse compared to the coronagraphic coverage, extends back to 1869. A more extensive understanding of this proxy would give researchers confidence in using the white-light corona as an indicator of solar minimum conditions.
USDA-ARS's Scientific Manuscript database
As the channel x blue hybrid catfish is stocked by an increasing number of catfish farmers, it is important to quantify the production response of this fish to dissolved oxygen management strategies. The purpose of this study was to compare the production and water quality responses of the channel x...
A microhistological technique for analysis of food habits of mycophagous rodents.
Patrick W. McIntire; Andrew B. Carey
1989-01-01
We present a technique, based on microhistological analysis of fecal pellets, for quantifying the diets of forest rodents. This technique provides for the simultaneous recording of fungal spores and vascular plant material. Fecal samples should be freeze dried, weighed, and rehydrated with distilled water. We recommend a minimum sampling intensity of 50 fields of view...
Substrate-induced respiration in Puerto Rican soils: minimum glucose amendment
Marcela Zalamea; Grizelle Gonzalez
2007-01-01
Soil microbiota, usually quantified as microbial biomass, is a key component of terrestrial ecosystems, regulating nutrient cycling and organic matter turnover. Among the several methods developed for estimating soil microbial biomass, Substrate-Induced Respiration (SIR) is considered reliable and easy to implement; once the maximum respiratory response is determined...
Analysis of near-surface biases in ERA-Interim over the Canadian Prairies
NASA Astrophysics Data System (ADS)
Betts, Alan K.; Beljaars, Anton C. M.
2017-09-01
We quantify the biases in the diurnal cycle of temperature in ERA-Interim for both the warm and cold seasons using hourly climate station data for four stations in Saskatchewan from 1979 to 2006. The warm season biases increase as opaque cloud cover decreases, and change substantially from April to October. The bias in mean temperature increases almost monotonically from small negative values in April to small positive values in the fall. Under clear skies, the bias in maximum temperature is of the order of -1°C in June and July and -2°C in spring and fall, while the bias in minimum temperature increases almost monotonically from +1°C in spring to +2.5°C in October. Under clear skies, the bias in the diurnal temperature range falls from -2.5°C in spring to -5°C in fall. The cold season biases with surface snow have a different structure. The biases in maximum, mean and minimum temperature with a stable boundary layer (BL) reach +1°C, +2.6°C and +3°C respectively in January under clear skies. The cold season bias in diurnal range increases from about -1.8°C in the fall to positive values in March. These diurnal biases in 2 m temperature and their seasonal trends are consistent with a high bias in both the diurnal and seasonal amplitude of the model ground heat flux, and a warm season daytime bias resulting from the model's fixed leaf area index. Our results can be used as bias corrections in agricultural modeling that uses these reanalysis data, and also as a framework for understanding model biases.
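A minimal sketch of how such daily maximum/minimum temperature biases could be computed from hourly station and reanalysis series is given below; the column names, resampling choices, and data sources are assumptions for illustration, not the authors' processing chain.

```python
import pandas as pd

# Hypothetical inputs: hourly 2 m temperature from a station and from the
# reanalysis at the same grid point, each a pd.Series indexed by timestamp.
def daily_temperature_biases(station: pd.Series, reanalysis: pd.Series) -> pd.DataFrame:
    daily = pd.DataFrame({
        "obs_tmax": station.resample("D").max(),
        "obs_tmin": station.resample("D").min(),
        "mod_tmax": reanalysis.resample("D").max(),
        "mod_tmin": reanalysis.resample("D").min(),
    })
    daily["bias_tmax"] = daily["mod_tmax"] - daily["obs_tmax"]
    daily["bias_tmin"] = daily["mod_tmin"] - daily["obs_tmin"]
    # bias in the diurnal temperature range (model DTR minus observed DTR)
    daily["bias_dtr"] = (daily["mod_tmax"] - daily["mod_tmin"]) - \
                        (daily["obs_tmax"] - daily["obs_tmin"])
    return daily

# Example (hypothetical series 'obs' and 'era'):
# monthly_bias = daily_temperature_biases(obs, era).groupby(lambda d: d.month).mean()
```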
NASA Astrophysics Data System (ADS)
Niemeyer, Daniela; Kemena, Tronje P.; Meissner, Katrin J.; Oschlies, Andreas
2017-05-01
Observations indicate an expansion of oxygen minimum zones (OMZs) over the past 50 years, likely related to ongoing deoxygenation caused by reduced oxygen solubility, changes in stratification and circulation, and a potential acceleration of organic matter turnover in a warming climate. The overall area of ocean sediments that are in direct contact with low-oxygen bottom waters also increases with expanding OMZs. This leads to a release of phosphorus from ocean sediments. If anthropogenic carbon dioxide emissions continue unabated, higher temperatures will cause enhanced weathering on land, which, in turn, will increase the phosphorus and alkalinity fluxes into the ocean and therefore raise the ocean's phosphorus inventory even further. A higher availability of phosphorus enhances biological production, remineralisation and oxygen consumption, and might therefore lead to further expansions of OMZs, representing a positive feedback. A negative feedback arises from the enhanced productivity-induced drawdown of carbon and also increased uptake of CO2 due to weathering-induced alkalinity input. This feedback leads to a decrease in atmospheric CO2 and weathering rates. Here, we quantify these two competing feedbacks on millennial timescales for a high CO2 emission scenario. Using the University of Victoria (UVic) Earth System Climate Model of intermediate complexity, our model results suggest that the positive benthic phosphorus release feedback has only a minor impact on the size of OMZs in the next 1000 years. The increase in the marine phosphorus inventory under assumed business-as-usual global warming conditions originates, on millennial timescales, almost exclusively (> 80 %) from the input via terrestrial weathering and causes a 4- to 5-fold expansion of the suboxic water volume in the model.
The impact of reforestation in the northeast United States on precipitation and surface temperature
NASA Astrophysics Data System (ADS)
Clark, Allyson
Since the 1920s, forest coverage in the northeastern United States has recovered from disease, clearing for agricultural and urban development, and the demands of the timber industry. Such a dramatic change in ground cover can influence heat and moisture fluxes to the atmosphere, as measured in altered landscapes in Australia, Israel, and the Amazon. In this study, the impacts of recent reforestation in the northeastern United States on summertime precipitation and surface temperature were quantified by comparing average modern values to 1950s values. Weak positive (negative) relationships between reforestation and average monthly precipitation and daily minimum temperatures (average daily maximum surface temperature) were found. There was no relationship between reforestation and average surface temperature. Results of the observational analysis were compared with results obtained from reforestation scenarios simulated with the BUGS5 global climate model. The single difference between the model runs was the amount of forest coverage in the northeast United States; three levels of forest were defined: a grassland state with 0% forest coverage, a completely forested state with approximately 100% forest coverage, and a control state with forest coverage closely resembling modern forest coverage. The three simulations were compared with the observations and showed larger-magnitude average changes in precipitation and in all temperature variables. The difference in magnitudes between the model simulations and the observations was much larger than the difference in the amount of reforestation in each case. Additionally, unlike in observations, a negative relationship was found between average daily minimum temperature and amount of forest coverage, implying that additional factors influence temperature and precipitation in the real world that are not accounted for in the model.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are estimated by the maximum-likelihood method, together with the standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostics (the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and normality diagnostics (kernel and normal density curves overlaid on the histogram and a Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the selected model.
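A hedged sketch of fitting this model class with statsmodels is shown below; the synthetic temperature series and the back-transform are placeholders, but the order and seasonal order match the SARIMA (1, 0, 0) × (0, 1, 1)12 specification described above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly mean maximum temperature series (1981-2015);
# in practice this would be loaded from observations.
idx = pd.date_range("1981-01", "2015-12", freq="MS")
rng = np.random.default_rng(1)
tmax = 30 + 5 * np.sin(2 * np.pi * (idx.month - 4) / 12) + rng.normal(0, 0.5, len(idx))
series = pd.Series(np.log(tmax), index=idx)      # log transform, as in the paper

# SARIMA(1,0,0)x(0,1,1)_12 fitted by maximum likelihood
model = SARIMAX(series, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)
print(result.summary().tables[1])

# 3-year (36-month) forecast, back-transformed to degrees Celsius
forecast = np.exp(result.forecast(steps=36))
print(forecast.head())
```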
NASA Technical Reports Server (NTRS)
Ghatas, Rania W.; Jack, Devin P.; Tsakpinis, Dimitrios; Vincent, Michael J.; Sturdy, James L.; Munoz, Cesar A.; Hoffler, Keith D.; Dutle, Aaron M.; Myer, Robert R.; Dehaven, Anna M.;
2017-01-01
As Unmanned Aircraft Systems (UAS) make their way to mainstream aviation operations within the National Airspace System (NAS), research efforts are underway to develop a safe and effective environment for their integration into the NAS. Detect and Avoid (DAA) systems are required to account for the lack of "eyes in the sky" due to having no human on-board the aircraft. The current NAS relies on pilot's vigilance and judgement to remain Well Clear (CFR 14 91.113) of other aircraft. RTCA SC-228 has defined DAA Well Clear (DAAWC) to provide a quantified Well Clear volume to allow systems to be designed and measured against. Extended research efforts have been conducted to understand and quantify system requirements needed to support a UAS pilot's ability to remain well clear of other aircraft. The efforts have included developing and testing sensor, algorithm, alerting, and display requirements. More recently, sensor uncertainty and uncertainty mitigation strategies have been evaluated. This paper discusses results and lessons learned from an End-to-End Verification and Validation (E2-V2) simulation study of a DAA system representative of RTCA SC-228's proposed Phase I DAA Minimum Operational Performance Standards (MOPS). NASA Langley Research Center (LaRC) was called upon to develop a system that evaluates a specific set of encounters, in a variety of geometries, with end-to-end DAA functionality including the use of sensor and tracker models, a sensor uncertainty mitigation model, DAA algorithmic guidance in both vertical and horizontal maneuvering, and a pilot model which maneuvers the ownship aircraft to remain well clear from intruder aircraft, having received collective input from the previous modules of the system. LaRC developed a functioning batch simulation and added a sensor/tracker model from the Federal Aviation Administration (FAA) William J. Hughes Technical Center, an in-house developed sensor uncertainty mitigation strategy, and implemented a pilot model similar to one from the Massachusetts Institute of Technology's Lincoln Laboratory (MIT/LL). The resulting simulation provides the following key parameters, among others, to evaluate the effectiveness of the MOPS DAA system: severity of loss of well clear (SLoWC), alert scoring, and number of increasing alerts (alert jitter). The technique, results, and lessons learned from a detailed examination of DAA system performance over specific test vectors and encounter cases during the simulation experiment will be presented in this paper.
NASA Astrophysics Data System (ADS)
Cambaliza, M. O. L.; Shepson, P. B.; Caulton, D. R.; Stirm, B.; Samarov, D.; Gurney, K. R.; Turnbull, J.; Davis, K. J.; Possolo, A.; Karion, A.; Sweeney, C.; Moser, B.; Hendricks, A.; Lauvaux, T.; Mays, K.; Whetstone, J.; Huang, J.; Razlivanov, I.; Miles, N. L.; Richardson, S. J.
2014-09-01
Urban environments are the primary contributors to global anthropogenic carbon emissions. Because much of the growth in CO2 emissions will originate from cities, there is a need to develop, assess, and improve measurement and modeling strategies for quantifying and monitoring greenhouse gas emissions from large urban centers. In this study the uncertainties in an aircraft-based mass balance approach for quantifying carbon dioxide and methane emissions from an urban environment, focusing on Indianapolis, IN, USA, are described. The relatively level terrain of Indianapolis facilitated the application of mean wind fields in the mass balance approach. We investigate the uncertainties in our aircraft-based mass balance approach by (1) assessing the sensitivity of the measured flux to important measurement and analysis parameters including wind speed, background CO2 and CH4, boundary layer depth, and interpolation technique, and (2) determining the flux at two or more downwind distances from a point or area source (with relatively large source strengths such as solid waste facilities and a power generating station) in rapid succession, assuming that the emission flux is constant. When we quantify the precision in the approach by comparing the estimated emissions derived from measurements at two or more downwind distances from an area or point source, we find that the minimum and maximum repeatability were 12 and 52%, with an average of 31%. We suggest that improvements in the experimental design can be achieved by careful determination of the background concentration, monitoring the evolution of the boundary layer through the measurement period, and increasing the number of downwind horizontal transect measurements at multiple altitudes within the boundary layer.
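A highly simplified, single-transect version of such a mass-balance estimate is sketched below; the well-mixed boundary layer assumption, the perpendicular-wind assumption, and all numerical values are illustrative only and not taken from the study.

```python
import numpy as np

def mass_balance_flux(x_km, conc_ppm, background_ppm, wind_ms, pbl_depth_m,
                      molar_mass_g=44.01, air_density_mol_m3=41.6):
    """Estimate an area-source emission rate (g/s) from a single downwind transect.

    Assumes the enhancement is well mixed through the boundary layer and that the
    mean wind is perpendicular to the flight track (simplifying assumptions)."""
    enhancement = np.asarray(conc_ppm) - background_ppm      # ppm above background
    x_m = np.asarray(x_km) * 1000.0
    column = np.trapz(enhancement, x_m)                      # integrated enhancement (ppm * m)
    # ppm -> mole fraction, x air density -> mol/m^3, x wind and PBL depth -> mol/s
    flux_mol_s = column * 1e-6 * air_density_mol_m3 * wind_ms * pbl_depth_m
    return flux_mol_s * molar_mass_g

# Hypothetical transect: 40 km crosswind leg through a 2 ppm CO2 plume enhancement
x = np.linspace(0, 40, 200)
conc = 405 + 2.0 * np.exp(-((x - 20) / 5.0) ** 2)
print(f"estimated emission rate: {mass_balance_flux(x, conc, 405.0, 5.0, 1200.0):.2e} g/s")
```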
Litter and dead wood dynamics in ponderosa pine forests along a 160-year chronosequence.
Hall, S A; Burke, I C; Hobbs, N T
2006-12-01
Disturbances such as fire play a key role in controlling ecosystem structure. In fire-prone forests, organic detritus comprises a large pool of carbon and can control the frequency and intensity of fire. The ponderosa pine forests of the Colorado Front Range, USA, where fire has been suppressed for a century, provide an ideal system for studying the long-term dynamics of detrital pools. Our objectives were (1) to quantify the long-term temporal dynamics of detrital pools; and (2) to determine to what extent present stand structure, topography, and soils constrain these dynamics. We collected data on downed dead wood, litter, duff (partially decomposed litter on the forest floor), stand structure, topographic position, and soils for 31 sites along a 160-year chronosequence. We developed a compartment model and parameterized it to describe the temporal trends in the detrital pools. We then developed four sets of statistical models, quantifying the hypothesized relationship between pool size and (1) stand structure, (2) topography, (3) soils variables, and (4) time since fire. We contrasted how much support each hypothesis had in the data using Akaike's Information Criterion (AIC). Time since fire explained 39-80% of the variability in dead wood of different size classes. Pool size increased to a peak as material killed by the fire fell, then decomposed rapidly to a minimum (61-85 years after fire for the different pools). It then increased, presumably as new detritus was produced by the regenerating stand. Litter was most strongly related to canopy cover (r2 = 77%), suggesting that litter fall, rather than decomposition, controls its dynamics. The temporal dynamics of duff were the hardest to predict. Detrital pool sizes were more strongly related to time since fire than to environmental variables. Woody debris peak-to-minimum time was 46-67 years, overlapping the range of historical fire return intervals (1 to > 100 years). Fires may therefore have burned under a wide range of fuel conditions, supporting the hypothesis that this region's fire regime was mixed severity.
Responses of rubber leaf phenology to climatic variations in Southwest China
NASA Astrophysics Data System (ADS)
Zhai, De-Li; Yu, Haiying; Chen, Si-Chong; Ranjitkar, Sailesh; Xu, Jianchu
2017-11-01
The phenology of rubber trees (Hevea brasiliensis) can be influenced by meteorological factors and exhibits significant changes under different geoclimates. In the sub-optimal environment of Xishuangbanna, rubber trees undergo lengthy periods of defoliation and refoliation. The timing of refoliation, from budburst to leaf aging, can be affected by powdery mildew disease (Oidium heveae), which negatively impacts seed and latex production. Rubber trees are most susceptible to powdery mildew disease at the copper and leaf-changing stages. Understanding and predicting the leaf phenology of rubber trees is therefore helpful for developing effective means of controlling the disease. This research investigated the effect of several meteorological factors on different leaf phenological stages in a sub-optimal environment for rubber cultivation in Jinghong, Yunnan, in Southwest China. Partial least squares regression was used to quantify the relationship between meteorological factors and rubber phenologies recorded from 2003 to 2011. Minimum temperature in December was found to be the critical factor for the leaf phenology development of rubber trees. In contrast to the delaying effect of minimum temperature, the maximum temperature, diurnal temperature range, and sunshine hours were found to advance leaf phenologies. A comparatively lower minimum temperature in December would advance the leaf phenologies of rubber trees. Higher levels of precipitation in February delayed the light-green stage and the entire process of leaf aging. Delayed leaf phenology was found to be related to severe rubber powdery mildew disease. These results were used to build predictive models that could be applied to early warning systems for rubber powdery mildew disease.
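The sketch below shows, under assumed inputs, how a partial least squares regression of this kind can be set up with scikit-learn; the predictor layout and the toy response are placeholders, not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical predictor matrix: monthly Tmin, Tmax, DTR, sunshine hours and
# precipitation for the months preceding refoliation (columns), years as rows.
rng = np.random.default_rng(2)
X = rng.normal(size=(9, 15))                 # 9 years (2003-2011), 15 monthly predictors
y = 60 + 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 1, 9)   # budburst day of year (toy)

pls = PLSRegression(n_components=2)
pls.fit(X, y)

# Coefficients indicate which meteorological factors advance (negative) or
# delay (positive) the phenological stage in this toy example.
print(pls.coef_.ravel())
print("R^2 on training data:", pls.score(X, y))
```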
How can my research paper be useful for future meta-analyses on forest restoration practices?
Enrique Andivia; Pedro Villar‑Salvador; Juan A. Oliet; Jaime Puertolas; R. Kasten Dumroese
2018-01-01
Statistical meta-analysis is a powerful and useful tool to quantitatively synthesize the information conveyed in published studies on a particular topic. It allows identifying and quantifying overall patterns and exploring causes of variation. The inclusion of published works in meta-analyses requires, however, a minimum quality standard of the reported data and...
Optimal nodal flyby with near-Earth asteroids using electric sail
NASA Astrophysics Data System (ADS)
Mengali, Giovanni; Quarta, Alessandro A.
2014-11-01
The aim of this paper is to quantify the performance of an Electric Solar Wind Sail for accomplishing flyby missions toward one of the two orbital nodes of a near-Earth asteroid. Assuming a simplified, two-dimensional mission scenario, a preliminary mission analysis has been conducted for the whole population of such asteroids known at the beginning of 2013. The analysis of each mission scenario has been performed within an optimal framework, by calculating the minimum-time trajectory required to reach each orbital node of the target asteroid. A considerable amount of simulation data has been collected, using the spacecraft characteristic acceleration as a parameter to quantify the Electric Solar Wind Sail propulsive performance. The minimum-time trajectory exhibits a different structure, which may or may not include a solar wind assist maneuver, depending both on the Sun-node distance and on the value of the spacecraft characteristic acceleration. Simulations show that over 60% of near-Earth asteroids can be reached with a total mission time of less than 100 days, whereas the entire population can be reached in less than 10 months with a spacecraft characteristic acceleration of 1 mm/s².
Stress and efficiency studies in EFG
NASA Technical Reports Server (NTRS)
1986-01-01
The goals of this program were: (1) to define minimum-stress configurations for silicon sheet growth at high speeds; (2) to quantify dislocation electrical activity and the limits it places on minority carrier diffusion length in deformed silicon; and (3) to study the reasons for degradation of lifetime with increasing doping level in edge-defined film-fed growth (EFG) materials. A finite element model was developed for calculating residual stress with plastic deformation. A finite element model relating EFG control variables to the temperature field of the sheet was verified, permitting prediction of the profiles and stresses encountered in EFG systems. A residual stress measurement technique based on shadow Moiré interferometry was developed for finite-size EFG material blanks. The transient creep response of silicon was investigated in the temperature range between 800 and 1400 °C, in the strain and strain-rate regimes of interest for stress analysis of sheet growth. Quantitative relationships were established between minority carrier diffusion length and dislocation density using Electron Beam Induced Current (EBIC) measurements in FZ silicon deformed in four-point bending tests.
Yan, Renhua; Li, Lingling; Gao, Junfeng
2018-05-08
Exploring the hydrological regulation of a lowland polder is essential for increasing knowledge regarding the role of polders associated with pumping stations in lowlands. In this study, the Lowland Polder Hydrology and Phosphorus modelling System (PHPS) was applied to the Jianwei polder as a case study for quantifying the regulation effects of a lowland polder with pumping on discharge and phosphorus loads. The results indicate that the polder significantly affected the temporal distribution and annual amount of catchment discharge. Compared with a no-pumping scenario, an agricultural polder with pumping stations generated a sharper discharge hydrograph with higher peak values and lower minimum values, as well as an 8.6% reduction in average annual discharge. It also decreased the phosphorus export to downstream water bodies by 5.33 kg/hm²/yr because of widespread ditches and ponds, a lower hydraulic gradient, and increased retention times of surface water in ponds. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Heiber, Michael C.; Nguyen, Thuc-Quyen; Deibel, Carsten
2016-05-01
Understanding how the complex intermolecular configurations and nanostructure present in organic semiconductor donor-acceptor blends impacts charge carrier motion, interactions, and recombination behavior is a critical fundamental issue with a particularly major impact on organic photovoltaic applications. In this study, kinetic Monte Carlo (KMC) simulations are used to numerically quantify the complex bimolecular charge carrier recombination behavior in idealized phase-separated blends. Recent KMC simulations have identified how the encounter-limited bimolecular recombination rate in these blends deviates from the often used Langevin model and have been used to construct the new power mean mobility model. Here, we make a challenging but crucial expansion to this work by determining the charge carrier concentration dependence of the encounter-limited bimolecular recombination coefficient. In doing so, we find that an accurate treatment of the long-range electrostatic interactions between charge carriers is critical, and we further argue that many previous KMC simulation studies have used a Coulomb cutoff radius that is too small, which causes a significant overestimation of the recombination rate. To shed more light on this issue, we determine the minimum cutoff radius required to reach an accuracy of less than ±10 % as a function of the domain size and the charge carrier concentration and then use this knowledge to accurately quantify the charge carrier concentration dependence of the recombination rate. Using these rigorous methods, we finally show that the parameters of the power mean mobility model are determined by a newly identified dimensionless ratio of the domain size to the average charge carrier separation distance.
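For context, the baseline Langevin expression and one commonly quoted power-mean generalization are written out below; the exact exponent definition used in the power mean mobility model should be checked against the original work, so treat this as assumed notation rather than the authors' equation.

```latex
% Langevin (encounter-limited) bimolecular recombination coefficient
k_{\mathrm{L}} = \frac{q}{\varepsilon_0 \varepsilon_r}\,\left(\mu_e + \mu_h\right)

% Power-mean generalization for phase-separated blends (form assumed here);
% the exponent g is reported to depend on domain size, approaching the
% minimum-mobility limit for large, pure domains.
k_{\mathrm{enc}} = \frac{q}{\varepsilon_0 \varepsilon_r}
                   \left(\frac{\mu_e^{\,g} + \mu_h^{\,g}}{2}\right)^{1/g}
```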
Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing the runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems, representing a mock complex PV impact study. We are able to reduce the induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
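A generic sketch of the temporal-decomposition idea is given below (not the authors' OpenDSS implementation): the simulation horizon is split into chunks, each chunk is preceded by a short warm-up so controls can initialize, and the chunks run in parallel; simulate_one_hour and the chunk sizes are hypothetical stand-ins.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_one_hour(hour):
    """Placeholder for one quasi-static power-flow solution (e.g., an OpenDSS solve)."""
    return {"hour": hour}

def run_chunk(start_hour, end_hour, warmup_hours=24):
    """Run one chunk of the time series, preceded by a short warm-up so that
    regulator/capacitor controls can settle (controls initialization)."""
    results = []
    for hour in range(max(0, start_hour - warmup_hours), end_hour):
        state = simulate_one_hour(hour)
        if hour >= start_hour:                  # discard the warm-up slices
            results.append(state)
    return results

def run_parallel(total_hours=8760, n_workers=8):
    chunk = -(-total_hours // n_workers)        # ceiling division
    starts = list(range(0, total_hours, chunk))
    ends = [min(s + chunk, total_hours) for s in starts]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        pieces = list(pool.map(run_chunk, starts, ends))
    return [state for piece in pieces for state in piece]

if __name__ == "__main__":
    print(len(run_parallel()))                  # 8760 time slices reassembled in order
```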
Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation.
Gao, Shan; Ye, Qixiang; Xing, Junliang; Kuijper, Arjan; Han, Zhenjun; Jiao, Jianbin; Ji, Xiangyang
2017-12-01
Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to make group division more accurate in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of the targets' spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy variance in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for a strong minimum in the energy variation, we design discrete group-tracklet jump moves embedded in the gradient descent method, which ensure that the moves reduce the energy variation of groups and trajectories alternately in the varying topology dimension. Experimental results on both RGB and RGB-D data sets show the superiority of our proposed model for multiple person tracking in crowd scenes.
Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.
2013-01-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
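For orientation, the beam-theory side of such a comparison reduces to the standard combined axial-plus-bending stress formula; the sketch below evaluates it for made-up section properties and loads, and is not the study's analysis pipeline.

```python
def beam_theory_stress(axial_force_n, bending_moment_nm, area_mm2,
                       second_moment_mm4, c_mm):
    """Normal stress (MPa) at distance c from the neutral axis for a slender
    beam under combined axial load and bending: sigma = F/A + M*c/I."""
    axial = axial_force_n / area_mm2                          # N/mm^2 == MPa
    bending = bending_moment_nm * 1000.0 * c_mm / second_moment_mm4
    return axial + bending

# Hypothetical midshaft cross-section (all values illustrative only)
area = 250.0            # mm^2
I = 1.2e4               # mm^4
c = 9.0                 # mm, distance to the periosteal surface
print(f"compressive stress: {beam_theory_stress(-1500.0, 0.0, area, I, c):.1f} MPa")
print(f"bending stress:     {beam_theory_stress(0.0, 25.0, area, I, c):.1f} MPa")
```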
Achieving cost-neutrality with long-acting reversible contraceptive methods.
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2015-01-01
This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it aimed to also quantify minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking into consideration discontinuation. A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. The two least expensive methods were copper IUD ($304 per women, per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch), per women, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
Quantifying environmental limiting factors on tree cover using geospatial data.
Greenberg, Jonathan A; Santos, Maria J; Dobrowski, Solomon Z; Vanderbilt, Vern C; Ustin, Susan L
2015-01-01
Environmental limiting factors (ELFs) are the thresholds that determine the maximum or minimum biological response for a given suite of environmental conditions. We asked the following questions: 1) Can we detect ELFs on percent tree cover across the eastern slopes of the Lake Tahoe Basin, NV? 2) How are the ELFs distributed spatially? 3) To what extent are unmeasured environmental factors limiting tree cover? ELFs are difficult to quantify as they require significant sample sizes. We addressed this by using geospatial data over a relatively large spatial extent, where the wall-to-wall sampling ensures the inclusion of rare data points which define the minimum or maximum response to environmental factors. We tested mean temperature, minimum temperature, potential evapotranspiration (PET) and PET minus precipitation (PET-P) as potential limiting factors on percent tree cover. We found that the study area showed system-wide limitations on tree cover, and each of the factors showed evidence of being limiting on tree cover. However, only 1.2% of the total area appeared to be limited by the four (4) environmental factors, suggesting other unmeasured factors are limiting much of the tree cover in the study area. Where sites were near their theoretical maximum, non-forest sites (tree cover < 25%) were primarily limited by cold mean temperatures, open-canopy forest sites (tree cover between 25% and 60%) were primarily limited by evaporative demand, and closed-canopy forests were not limited by any particular environmental factor. The detection of ELFs is necessary in order to fully understand the width of limitations that species experience within their geographic range.
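One simple way to estimate an upper-bound (limiting) response of this kind is quantile regression near a high percentile; the sketch below illustrates the idea on synthetic data and is only an assumed stand-in for the ELF estimation actually used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: percent tree cover responding to minimum temperature,
# with the upper envelope set by the environmental factor.
rng = np.random.default_rng(3)
tmin = rng.uniform(-10, 10, 2000)
ceiling = np.clip(5 * (tmin + 5), 0, 80)              # limiting envelope
cover = ceiling * rng.uniform(0, 1, tmin.size)        # realized cover below the ceiling
df = pd.DataFrame({"cover": cover, "tmin": tmin})

# A 99th-percentile quantile regression approximates the limiting response
model = smf.quantreg("cover ~ tmin", df).fit(q=0.99)
print(model.params)
```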
Preisler, Haiganoush K; Hicke, Jeffrey A; Ager, Alan A; Hayes, Jane L
2012-11-01
Widespread outbreaks of mountain pine beetle in North America have drawn the attention of scientists, forest managers, and the public. There is strong evidence that climate change has contributed to the extent and severity of recent outbreaks. Scientists are interested in quantifying relationships between bark beetle population dynamics and trends in climate. Process models that simulate climate suitability for mountain pine beetle outbreaks have advanced our understanding of beetle population dynamics; however, there are few studies that have assessed their accuracy across multiple outbreaks or at larger spatial scales. This study used the observed number of trees killed by mountain pine beetles per square kilometer in Oregon and Washington, USA, over the past three decades to quantify and assess the influence of climate and weather variables on beetle activity over longer time periods and larger scales than previously studied. Influences of temperature and precipitation in addition to process model output variables were assessed at annual and climatological time scales. The statistical analysis showed that new attacks are more likely to occur at locations with climatological mean August temperatures >15 degrees C. After controlling for beetle pressure, the variables with the largest effect on the odds of an outbreak exceeding a certain size were minimum winter temperature (positive relationship) and drought conditions in current and previous years. Precipitation levels in the year prior to the outbreak had a positive effect, possibly an indication of the influence of this driver on brood size. Two-year cumulative precipitation had a negative effect, a possible indication of the influence of drought on tree stress. Among the process model variables, cold tolerance was the strongest indicator of an outbreak increasing to epidemic size. A weather suitability index developed from the regression analysis indicated a 2.5x increase in the odds of outbreak at locations with highly suitable weather vs. locations with low suitability. The models were useful for estimating expected amounts of damage (total area with outbreaks) and for quantifying the contribution of climate to total damage. Overall, the results confirm the importance of climate and weather on the spatial expansion of bark beetle outbreaks over time.
Global distortion of GPS networks associated with satellite antenna model errors
NASA Astrophysics Data System (ADS)
Cardellach, E.; Elósegui, P.; Davis, J. L.
2007-07-01
Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ~1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned leads to time-varying effects of PCO errors that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm yr⁻¹ level, which will impact high-precision crustal deformation studies.
NASA Astrophysics Data System (ADS)
von Hillebrandt-Andrade, C.; Huerfano Moreno, V. A.; McNamara, D. E.; Saurel, J. M.
2014-12-01
The magnitude-9.3 Sumatra-Andaman Islands earthquake of December 26, 2004, increased global awareness to the destructive hazard of earthquakes and tsunamis. Post event assessments of global coastline vulnerability highlighted the Caribbean as a region of high hazard and risk and that it was poorly monitored. Nearly 100 tsunamis have been reported for the Caribbean region and Adjacent Regions in the past 500 years and continue to pose a threat for its nations, coastal areas along the Gulf of Mexico, and the Atlantic seaboard of North and South America. Significant efforts to improve monitoring capabilities have been undertaken since this time including an expansion of the United States Geological Survey (USGS) Global Seismographic Network (GSN) (McNamara et al., 2006) and establishment of the United Nations Educational, Scientific and Cultural Organization (UNESCO) Intergovernmental Coordination Group (ICG) for the Tsunami and other Coastal Hazards Warning System for the Caribbean and Adjacent Regions (CARIBE EWS). The minimum performance standards it recommended for initial earthquake locations include: 1) Earthquake detection within 1 minute, 2) Minimum magnitude threshold = M4.5, and 3) Initial hypocenter error of <30 km. In this study, we assess current compliance with performance standards and model improvements in earthquake and tsunami monitoring capabilities in the Caribbean region since the first meeting of the UNESCO ICG-Caribe EWS in 2006. The three measures of network capability modeled in this study are: 1) minimum Mw detection threshold; 2) P-wave detection time of an automatic processing system and; 3) theoretical earthquake location uncertainty. By modeling three measures of seismic network capability, we can optimize the distribution of ICG-Caribe EWS seismic stations and select an international network that will be contributed from existing real-time broadband national networks in the region. Sea level monitoring improvements both offshore and along the coast will also be addressed. With the support of Member States and other countries and organizations it has been possible to significantly expand the sea level network thus reducing the amount of time it now takes to verify tsunamis.
Kucharik, Christopher J.; VanLoocke, Andy; Lenters, John D.; Motew, Melissa M.
2013-01-01
Miscanthus is an intriguing cellulosic bioenergy feedstock because its aboveground productivity is high for low amounts of agrochemical inputs, but soil temperatures below −3.5°C could threaten successful cultivation in temperate regions. We used a combination of observed soil temperatures and the Agro-IBIS model to investigate how strategic residue management could reduce the risk of rhizome threatening soil temperatures. This objective was addressed using a historical (1978–2007) reconstruction of extreme minimum 10 cm soil temperatures experienced across the Midwest US and model sensitivity studies that quantified the impact of crop residue on soil temperatures. At observation sites and for simulations that had bare soil, two critical soil temperature thresholds (50% rhizome winterkill at −3.5°C and −6.0°C for different Miscanthus genotypes) were reached at rhizome planting depth (10 cm) over large geographic areas. The coldest average annual extreme 10 cm soil temperatures were between −8°C to −11°C across North Dakota, South Dakota, and Minnesota. Large portions of the region experienced 10 cm soil temperatures below −3.5°C in 75% or greater for all years, and portions of North and South Dakota, Minnesota, and Wisconsin experienced soil temperatures below −6.0°C in 50–60% of all years. For simulated management options that established varied thicknesses (1–5 cm) of miscanthus straw following harvest, extreme minimum soil temperatures increased by 2.5°C to 6°C compared to bare soil, with the greatest warming associated with thicker residue layers. While the likelihood of 10 cm soil temperatures reaching −3.5°C was greatly reduced with 2–5 cm of surface residue, portions of the Dakotas, Nebraska, Minnesota, and Wisconsin still experienced temperatures colder than −3.5°C in 50–80% of all years. Nonetheless, strategic residue management could help increase the likelihood of overwintering of miscanthus rhizomes in the first few years after establishment, although low productivity and biomass availability during these early stages could hamper such efforts. PMID:23844244
NASA Technical Reports Server (NTRS)
Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard
2013-01-01
Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias-corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For the future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
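As a hedged illustration of one common flavour of statistical bias correction, the sketch below implements empirical quantile mapping on synthetic series; the SDBC method used in the study may differ in its details.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_quantiles=100):
    """Empirical quantile mapping: correct model output so that its historical
    distribution matches the observed one, then apply the same mapping to the
    future (or downscaled) series. A minimal sketch, not the SDBC code."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # position of each future value within the historical model distribution
    ranks = np.interp(model_future, model_q, q)
    # map those positions onto the observed distribution
    return np.interp(ranks, q, obs_q)

# Hypothetical daily Tmax series (degrees C)
rng = np.random.default_rng(4)
obs = rng.normal(25.0, 6.0, 10000)
mod_hist = rng.normal(22.0, 8.0, 10000)        # cold, over-dispersed model
mod_future = rng.normal(24.0, 8.0, 10000)
corrected = quantile_map(mod_hist, obs, mod_future)
print(round(corrected.mean(), 2), round(corrected.std(), 2))
```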
Convective Transport of Very-short-lived Bromocarbons to the Stratosphere
NASA Technical Reports Server (NTRS)
Liang, Qing; Atlas, Elliot Leonard; Blake, Donald Ray; Dorf, Marcel; Pfeilsticker, Klaus August; Schauffler, Sue Myhre
2014-01-01
We use the NASA GEOS Chemistry Climate Model (GEOSCCM) to quantify the contribution of the two most important brominated very short-lived substances (VSLS), bromoform (CHBr3) and dibromomethane (CH2Br2), to stratospheric bromine and its sensitivity to convection strength. Model simulations suggest that the most active transport of VSLS from the marine boundary layer through the tropopause occurs over the tropical Indian Ocean, the Western Pacific warm pool, and off the Pacific coast of Mexico. Together, convective lofting of CHBr3 and CH2Br2 and their degradation products supplies 8 ppt total bromine to the base of the Tropical Tropopause Layer (TTL, 150 hPa), similar to the amount of VSLS organic bromine available in the marine boundary layer (7.8-8.4 ppt) in the above active convective lofting regions. Of the total 8 ppt VSLS-originated bromine that enters the base of the TTL at 150 hPa, half is in the form of source gas injection (SGI) and half as product gas injection (PGI). Only a small portion (< 10%) of the VSLS-originated bromine is removed via wet scavenging in the TTL before reaching the lower stratosphere. On a global and annual average, CHBr3 and CH2Br2 together contribute 7.7 pptv to the present-day inorganic bromine in the stratosphere. However, varying model deep convection strength between maximum and minimum convection conditions can introduce a 2.6 pptv uncertainty in the contribution of VSLS to inorganic bromine in the stratosphere (BryVSLS). Contrary to conventional wisdom, the minimum convection condition leads to a larger BryVSLS because the reduced scavenging of soluble product gases, and thus a significant increase in PGI (2-3 ppt), greatly exceeds the relatively minor decrease in SGI (a few tenths of a ppt).
Growth of left ventricular mass with military basic training in army recruits.
Batterham, Alan M; George, Keith P; Birch, Karen M; Pennell, Dudley J; Myerson, Saul G
2011-07-01
Exercise-induced left ventricular hypertrophy is well documented, but whether this occurs merely in line with concomitant increases in lean body mass is unclear. Our aim was to model the extent of left ventricular hypertrophy associated with the increase in lean body mass attributable to an exercise training program. Cardiac and whole-body magnetic resonance imaging was performed before and after a 10-wk intensive British Army basic training program in a sample of 116 healthy Caucasian males (aged 17-28 yr). The within-subjects repeated-measures allometric relationship between lean body mass and left ventricular mass was modeled to allow the proper normalization of changes in left ventricular mass for attendant changes in lean body mass. To linearize the general allometric model (Y = aX^b), data were log-transformed before analysis; the resulting effects were therefore expressed as percent changes. We quantified the probability that the true population increase in normalized left ventricular mass was greater than a predefined minimum important difference of 0.2 SD, assigning a probabilistic descriptive anchor for magnitude-based inference. The absolute increase in left ventricular mass was 4.8% (90% confidence interval=3.5%-6%), whereas lean body mass increased by 2.6% (2.1%-3.0%). The change in left ventricular mass adjusted for the change in lean body mass was 3.5% (1.9%-5.1%), equivalent to an increase of 0.25 SD (0.14-0.37). The probability that this effect size was greater than or equal to our predefined minimum important change of 0.2 SD was 0.78, i.e., likely to be important. After correction for allometric growth rates, left ventricular hypertrophy and lean body mass changes do not occur at the same magnitude in response to chronic exercise.
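The allometric adjustment implied by this design can be written compactly as follows; the symbols and the log-difference form are assumed here for illustration and are not necessarily the authors' notation.

```latex
% General allometric model and its log-linear form
Y = a X^{b} \quad\Longrightarrow\quad \ln Y = \ln a + b \ln X

% Repeated-measures form: change in left ventricular mass (LVM) adjusted for the
% concomitant change in lean body mass (LBM); for small changes, differences in
% ln(.) are approximately percent changes (symbols assumed).
\Delta \ln(\mathrm{LVM})_{\mathrm{adj}}
  \;=\; \Delta \ln(\mathrm{LVM}) \;-\; b\,\Delta \ln(\mathrm{LBM})
```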
NASA Astrophysics Data System (ADS)
Taylor, M. J.; Fornusek, C.; de Chazal, P.; Ruys, A. J.
2017-10-01
Functional Electrical Stimulation (FES) activates nerves and muscles that have been ravaged and rendered paralysed by disease. As such, it is advantageous to study the joint torques that arise due to electrical stimulation of muscle, to measure fatigue in an indirect, minimally-invasive way. Dynamometry is one way in which this can be achieved. In this paper, torque data are presented from an FES experiment on the quadriceps, using isometric dynamometry to measure torque. A library of fatigue metrics to quantify these data is put forward. These metrics include start and end torque peaks, percentage changes in torque over time, and maximum and minimum torque period algorithms (MTPA 1 and 2), with associated torque-time plots. It is illustrated, by example, how this novel library of metrics can model fatigue over time. Furthermore, these methods are critiqued by a qualitative assessment and compared against one another for their utility in modelling fatigue. Linear trendlines with coefficients of correlation (R²) and qualitative descriptions of the data are used to achieve this. We find that, although arduous, individual peak plots yield the most relevant values upon which fatigue can be assessed. Methods to calculate peaks in data have less utility, giving peak counts offset by roughly an order of magnitude (~10×) from theoretically expected peak numbers. In light of this, we suggest that future work would do well to investigate optimized forms of peak analysis.
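A minimal sketch of extracting per-contraction torque peaks, from which start/end peak values and a percentage-decline fatigue index can be computed, is given below; the synthetic trace, sampling rate, and thresholds are assumptions, not the experimental settings.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical torque trace: stimulation bursts with slowly declining peak torque
fs = 100.0                                     # sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)
envelope = 40.0 * np.exp(-t / 180.0)           # fatigue-related decline in peak torque
bursts = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)
torque = envelope * bursts + np.random.default_rng(5).normal(0, 0.3, t.size)

# One peak per contraction: enforce a minimum height and inter-peak spacing
peaks, _ = find_peaks(torque, height=5.0, distance=int(1.5 * fs))
peak_torques = torque[peaks]

start_peak = peak_torques[:5].mean()           # average of the first few contractions
end_peak = peak_torques[-5:].mean()            # average of the last few contractions
print(f"fatigue index: {100 * (start_peak - end_peak) / start_peak:.1f}% decline")
```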
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.
New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.
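A generic sketch of the simulated-annealing step of such a fit is shown below; the toy objective is a stand-in for the weighted energy-and-force error function, and none of this is the actual DFTB parametrization code.

```python
import math
import random

def objective(params):
    """Stand-in for the DFTB fitting objective: weighted squared errors in
    atomization energies and interatomic forces vs. ab initio reference data."""
    return sum((p - 0.5) ** 2 for p in params)          # toy surrogate

def simulated_annealing(x0, steps=5000, t_start=1.0, t_end=1e-3, step_size=0.05):
    random.seed(6)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    for k in range(steps):
        temp = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling schedule
        trial = [xi + random.uniform(-step_size, step_size) for xi in x]
        ft = objective(trial)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if ft < fx or random.random() < math.exp(-(ft - fx) / temp):
            x, fx = trial, ft
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

params, err = simulated_annealing([0.0] * 8)
print(f"minimized objective: {err:.2e}")
```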
Perry, Bonnie E; Evans, Emily K; Stokic, Dobrivoje S
2017-02-17
Armeo®Spring exoskeleton is widely used for upper extremity rehabilitation; however, weight compensation provided by the device appears insufficiently characterized to fully utilize it in clinical and research settings. Weight compensation was quantified by measuring static force in the sagittal plane with a load cell attached to the elbow joint of Armeo®Spring. All upper spring settings were examined in 5° increments at the minimum, maximum, and two intermediate upper and lower module length settings, while keeping the lower spring at minimum. The same measurements were made for minimum upper spring setting and maximum lower spring setting at minimum and maximum module lengths. Weight compensation was plotted against upper module angles, and slope was analyzed for each condition. The Armeo®Spring design prompted defining the slack angle and exoskeleton balance angle, which, depending on spring and length settings, divide the operating range into different unloading and loading regions. Higher spring tensions and shorter module lengths provided greater unloading (≤6.32 kg of support). Weight compensation slope decreased faster with shorter length settings (minimum length = -0.082 ± 0.002 kg/°; maximum length = -0.046 ± 0.001 kg/°) independent of spring settings. Understanding the impact of different settings on the Armeo®Spring weight compensation should help define best clinical practice and improve fidelity of research.
Shapiro, Lillian L M; Whitehead, Shelley A; Thomas, Matthew B
2017-10-01
Malaria transmission is known to be strongly impacted by temperature. The current understanding of how temperature affects mosquito and parasite life history traits derives from a limited number of empirical studies. These studies, some dating back to the early part of the last century, are often poorly controlled, have limited replication, explore a narrow range of temperatures, and use a mixture of parasite and mosquito species. Here, we use a single pairing of the Asian mosquito vector, An. stephensi, and the human malaria parasite, P. falciparum, to conduct a comprehensive evaluation of the thermal performance curves of a range of mosquito and parasite traits relevant to transmission. We show that biting rate, adult mortality rate, parasite development rate, and vector competence are temperature sensitive. Importantly, we find qualitative and quantitative differences from the assumed temperature-dependent relationships. To explore the overall implications of temperature for transmission, we first use a standard model of relative vectorial capacity. This approach suggests a temperature optimum for transmission of 29°C, with minimum and maximum temperatures of 12°C and 38°C, respectively. However, the robustness of the vectorial capacity approach is challenged by the fact that the empirical data violate several of the model's simplifying assumptions. Accordingly, we present an alternative model of relative force of infection that better captures the observed biology of the vector-parasite interaction. This model suggests a temperature optimum for transmission of 26°C, with a minimum and maximum of 17°C and 35°C, respectively. The differences between the models lead to potentially divergent predictions for the potential impacts of current and future climate change on malaria transmission. The study provides a framework for more detailed, system-specific studies that are essential to develop an improved understanding of the effects of temperature on malaria transmission.
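For reference, the relative vectorial capacity referred to above is conventionally written as below; this is the standard textbook form with assumed symbols, not necessarily the exact expression used in the paper.

```latex
% Relative vectorial capacity (per-mosquito form, constant mosquito density assumed):
%   a   = biting rate, bc = vector competence,
%   mu  = adult mosquito mortality rate, PDR = parasite development rate
% Each trait is a function of temperature T.
\mathrm{rVC}(T) \;=\; \frac{a(T)^{2}\, b c(T)\; e^{-\mu(T)/\mathrm{PDR}(T)}}{\mu(T)}
```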
Alarcon Falconi, Tania M; Kulinkina, Alexandra V; Mohan, Venkata Raghava; Francis, Mark R; Kattula, Deepthi; Sarkar, Rajiv; Ward, Honorine; Kang, Gagandeep; Balraj, Vinohar; Naumova, Elena N
2017-01-01
Municipal water sources in India have been found to be highly contaminated, with further water quality deterioration occurring during household storage. Quantifying water quality deterioration requires knowledge about the exact source tap and length of water storage at the household, which is not usually known. This study presents a methodology to link source and household stored water, and explores the effects of spatial assumptions on the association between tap-to-household water quality deterioration and enteric infections in two semi-urban slums of Vellore, India. To determine a possible water source for each household sample, we paired household and tap samples collected on the same day using three spatial approaches implemented in GIS: minimum Euclidean distance; minimum network distance; and inverse network-distance weighted average. Logistic and Poisson regression models were used to determine associations between water quality deterioration and household-level characteristics, and between diarrheal cases and water quality deterioration. On average, 60% of households had higher fecal coliform concentrations in household samples than at source taps. Only the weighted average approach detected a higher risk of water quality deterioration for households that do not purify water and that have animals in the home (RR=1.50 [1.03, 2.18], p=0.033); and showed that households with water quality deterioration were more likely to report diarrheal cases (OR=3.08 [1.21, 8.18], p=0.02). Studies to assess contamination between source and household are rare due to methodological challenges and high costs associated with collecting paired samples. Our study demonstrated it is possible to derive useful spatial links between samples post hoc; and that the pairing approach affects the conclusions related to associations between enteric infections and water quality deterioration. Copyright © 2016 Elsevier GmbH. All rights reserved.
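The spatial pairing strategies described above can be illustrated in a few lines. The sketch below implements the minimum-Euclidean-distance and inverse-distance-weighted pairings with invented coordinates and fecal coliform counts; the study's network-distance variants rely on GIS road/pipe networks that this toy example does not reproduce.

```python
import numpy as np

# Hypothetical coordinates (metres) and fecal coliform counts (CFU/100 mL).
taps = np.array([[0.0, 0.0], [120.0, 30.0], [60.0, 90.0]])
tap_fc = np.array([40.0, 250.0, 90.0])
households = np.array([[15.0, 10.0], [100.0, 70.0]])
hh_fc = np.array([300.0, 180.0])

dist = np.linalg.norm(households[:, None, :] - taps[None, :, :], axis=2)

# Approach 1: pair each household sample with its nearest tap (minimum Euclidean distance).
nearest = dist.argmin(axis=1)
source_fc_nearest = tap_fc[nearest]

# Approach 3: inverse-distance-weighted average over all candidate taps.
w = 1.0 / np.maximum(dist, 1.0)              # guard against zero distance
source_fc_weighted = (w * tap_fc).sum(axis=1) / w.sum(axis=1)

deterioration = hh_fc > source_fc_weighted   # household sample worse than inferred source
print(nearest, source_fc_nearest, source_fc_weighted.round(1), deterioration)
```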
NASA Astrophysics Data System (ADS)
Candra, S.; Batan, I. M. L.; Berata, W.; Pramono, A. S.
2017-11-01
This paper presents a mathematical approach for determining the minimum blank holder force needed to prevent wrinkling in the deep drawing of a cylindrical cup. Based on the maximum minor-major strain ratio, the slab method was applied to derive a model of the minimum variable blank holder force (VBHF), which was compared with an FE simulation. Tin steel sheet of grade T4-CA, with a thickness of 0.2 mm, was used in this study. The minimum-VBHF model can be used as a simple reference for preventing wrinkling in deep drawing.
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
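A minimal sketch of the prediction step is shown below: regress quality ratings on the five display parameters with both linear regression and an SVM (SVR), then compare RMSE and Spearman correlation. The synthetic ratings and parameter ranges are placeholders, and the perceptual transformation through vision-system models applied in the study is not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Columns: max luminance, min luminance, gamut area, bit depth, local contrast (hypothetical units).
X = rng.uniform([100, 0.001, 0.3, 8, 100], [4000, 0.5, 0.8, 12, 10000], size=(120, 5))
y = 2 + 1.5 * np.log10(X[:, 0]) - 0.8 * np.log10(X[:, 1]) + 3 * X[:, 2] + rng.normal(0, 0.3, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("svr", make_pipeline(StandardScaler(), SVR(C=10.0)))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    rho = spearmanr(y_te, pred).correlation
    print(f"{name}: RMSE={rmse:.2f}, Spearman={rho:.2f}")
```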
Agatonovic-Kustrin, S; Loescher, Christine M
2013-10-10
Calendula officinalis, commonly known as Marigold, has been traditionally used for its anti-inflammatory effects. The aim of this study was to investigate the capacity of an artificial neural network (ANN) to analyse thin layer chromatography (TLC) chromatograms as fingerprint patterns for quantitative estimation of chlorogenic acid, caffeic acid and rutin in Calendula plant extracts. By applying samples with different weight ratios of marker compounds to the system, a database of chromatograms was constructed. A hundred and one signal intensities in each of the HPTLC chromatograms were correlated to the amounts of applied chlorogenic acid, caffeic acid, and rutin using an ANN. The developed ANN correlation was used to quantify the amounts of the 3 marker compounds in calendula plant extracts. The minimum quantifiable level (MQL) of 610, 190 and 940 ng and the limit of detection (LD) of 183, 57 and 282 ng were established for chlorogenic acid, caffeic acid and rutin, respectively. A novel method for quality control of herbal products, based on HPTLC separation, high resolution digital plate imaging and ANN data analysis has been developed. The proposed method can be adopted for routine evaluation of the phytochemical variability in calendula extracts. Copyright © 2013 Elsevier B.V. All rights reserved.
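The calibration idea, mapping the 101 signal intensities of each chromatogram track to the applied amounts of the three markers, can be sketched as a small multilayer-perceptron regression. The synthetic "tracks" below are Gaussian peaks whose heights scale with the applied amounts; they merely stand in for real densitometric data, and the network size is an arbitrary choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
rf = np.linspace(0, 1, 101)                 # 101 positions along each HPTLC track

def synthetic_track(amounts):
    """Three Gaussian peaks (chlorogenic acid, caffeic acid, rutin) plus noise."""
    centres, widths = [0.25, 0.50, 0.75], [0.03, 0.03, 0.03]
    signal = sum(a * np.exp(-(rf - c) ** 2 / (2 * w ** 2))
                 for a, c, w in zip(amounts, centres, widths))
    return signal + rng.normal(0, 0.02, rf.size)

amounts = rng.uniform(0.2, 5.0, size=(200, 3))      # applied amounts (hypothetical units)
tracks = np.array([synthetic_track(a) for a in amounts])

X_tr, X_te, y_tr, y_te = train_test_split(tracks, amounts, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out tracks:", round(ann.score(X_te, y_te), 3))
```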
Quantifying the sources of uncertainty in an ensemble of hydrological climate-impact projections
NASA Astrophysics Data System (ADS)
Aryal, Anil; Shrestha, Sangam; Babel, Mukand S.
2018-01-01
The objective of this paper is to quantify the various sources of uncertainty in the assessment of climate change impact on hydrology in the Tamakoshi River Basin, located in the north-eastern part of Nepal. Multiple climate and hydrological models were used to simulate future climate conditions and discharge in the basin. The simulated results of future climate and river discharge were analysed for the quantification of sources of uncertainty using two-way and three-way ANOVA. The results showed that temperature and precipitation in the study area are projected to change in near- (2010-2039), mid- (2040-2069) and far-future (2070-2099) periods. Maximum temperature is likely to rise by 1.75 °C under Representative Concentration Pathway (RCP) 4.5 and by 3.52 °C under RCP 8.5. Similarly, the minimum temperature is expected to rise by 2.10 °C under RCP 4.5 and by 3.73 °C under RCP 8.5 by the end of the twenty-first century. The precipitation in the study area is expected to change by -2.15% under RCP 4.5 and -2.44% under RCP 8.5. The future discharge in the study area was projected using two hydrological models, viz. the Soil and Water Assessment Tool (SWAT) and the Hydrologic Engineering Center's Hydrologic Modelling System (HEC-HMS). Discharge projected by the SWAT model is expected to change only by a small amount, whereas the HEC-HMS model projected considerably lower discharge in the future compared with the baseline period. The results also show that future climate variables and river hydrology contain uncertainty due to the choice of climate models, RCP scenarios, bias correction methods and hydrological models. During wet days, more uncertainty is observed due to the use of different climate models, whereas during dry days, the use of different hydrological models has a greater effect on uncertainty. Inter-comparison of the impacts of different climate models reveals that the REMO climate model shows higher uncertainty in the prediction of precipitation and, consequently, in the prediction of future discharge and maximum probable flood.
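The ANOVA-based attribution can be sketched as partitioning the sum of squares of projected changes among factors such as climate model and RCP scenario. The two-way example below uses synthetic projected discharge changes and invented factor effects; the study's three-way analysis additionally includes bias correction method and hydrological model as factors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
gcms = ["GCM_A", "GCM_B", "GCM_C"]
rcps = ["RCP4.5", "RCP8.5"]
# dq: synthetic projected change in discharge for each GCM/RCP combination and period.
rows = [{"gcm": g, "rcp": r,
         "dq": rng.normal(5 if r == "RCP8.5" else 2, 1)
               + {"GCM_A": 0, "GCM_B": 3, "GCM_C": -2}[g]}
        for g in gcms for r in rcps for _ in range(5)]   # 5 replicate periods per combination
df = pd.DataFrame(rows)

fit = smf.ols("dq ~ C(gcm) * C(rcp)", data=df).fit()
table = anova_lm(fit, typ=2)
fractions = table["sum_sq"] / table["sum_sq"].sum()      # share of total variance per source
print(fractions.round(2))
```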
NASA Astrophysics Data System (ADS)
Crimp, Steven; Jin, Huidong; Kokic, Philip; Bakar, Shuvo; Nicholls, Neville
2018-04-01
Anthropogenic climate change has already been shown to affect the frequency, intensity, spatial extent, duration and seasonality of extreme climate events. Understanding these changes is an important step in determining exposure, vulnerability and focus for adaptation. In an attempt to support adaptation decision-making we have examined statistical modelling techniques to improve the representation of global climate model (GCM) derived projections of minimum temperature extremes (frosts) in Australia. We examine the spatial changes in minimum temperature extreme metrics (e.g., monthly and seasonal frost frequency) for a region exhibiting the strongest station trends in Australia, and compare these changes with minimum temperature extreme metrics derived from 10 GCMs, from the Coupled Model Inter-comparison Project Phase 5 (CMIP 5) datasets, and via statistical downscaling. We compare the observed trends with those derived from the "raw" GCM minimum temperature data as well as examine whether quantile matching (QM) or spatio-temporal (spTimerQM) modelling with Quantile Matching can be used to improve the correlation between observed and simulated extreme minimum temperatures. We demonstrate that the spTimerQM modelling approach provides correlations with observed daily minimum temperatures for the period August to November of 0.22. This represents an almost fourfold improvement over either the "raw" GCM or QM results. The spTimerQM modelling approach also improves correlations with observed monthly frost frequency statistics to 0.84 as opposed to 0.37 and 0.81 for the "raw" GCM and QM results, respectively. We apply the spatio-temporal model to examine future extreme minimum temperature projections for the period 2016 to 2048. The spTimerQM modelling results suggest the persistence of current levels of frost risk out to 2030, with evidence of continuing decadal variation.
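Of the two correction schemes compared above, empirical quantile matching is the simpler: each raw GCM value is mapped through the quantile it occupies in the GCM climatology onto the corresponding quantile of the observations. A minimal sketch with synthetic minimum temperatures follows; the spatio-temporal (spTimer) extension with quantile matching is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
obs_hist = rng.normal(2.0, 3.0, 3000)     # observed daily Tmin, historical period (degC)
gcm_hist = rng.normal(4.0, 2.0, 3000)     # raw GCM Tmin, same period (biased)
gcm_fut = rng.normal(5.0, 2.0, 3000)      # raw GCM Tmin, future period

def quantile_match(x, model_clim, obs_clim, n_q=100):
    """Map x through the empirical quantiles of the model onto those of the observations."""
    q = np.linspace(0, 1, n_q)
    model_q = np.quantile(model_clim, q)
    obs_q = np.quantile(obs_clim, q)
    return np.interp(x, model_q, obs_q)

corrected_fut = quantile_match(gcm_fut, gcm_hist, obs_hist)
print("frost-day fraction, raw vs corrected:",
      (gcm_fut < 0).mean().round(3), (corrected_fut < 0).mean().round(3))
```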
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
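The quasi-static idea, injecting observations gradually and warm-starting each minimization from the previous solution, can be demonstrated on a toy nonlinear least-squares problem with many local minima. The sinusoidal "model", the parameter values, and the batch size below are all illustrative choices, not the IEnKS configuration of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
a_true = 4.0
t = np.linspace(0.0, 6.0, 120)
y_obs = np.sin(a_true * t) + rng.normal(0, 0.1, t.size)   # noisy observations of sin(a*t)

def cost(a, n_obs):
    """Quadratic observation misfit using only the first n_obs observations."""
    r = y_obs[:n_obs] - np.sin(a[0] * t[:n_obs])
    return 0.5 * float(r @ r)

# Direct minimization with all observations: prone to stopping at a local minimum.
a_direct = minimize(cost, x0=[1.0], args=(t.size,)).x[0]

# Quasi-static minimization: inject observations in batches, warm-starting each solve.
a_qs = np.array([1.0])
for n_obs in range(10, t.size + 1, 10):
    a_qs = minimize(cost, x0=a_qs, args=(n_obs,)).x

print(f"truth {a_true}, direct {a_direct:.2f}, quasi-static {a_qs[0]:.2f}")
```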
Morignat, Eric; Gay, Emilie; Vinard, Jean-Luc; Calavas, Didier; Hénaux, Viviane
2015-07-01
In the context of climate change, the frequency and severity of extreme weather events are expected to increase in temperate regions, and potentially have a severe impact on farmed cattle through production losses or deaths. In this study, we used distributed lag non-linear models to describe and quantify the relationship between a temperature-humidity index (THI) and cattle mortality in 12 areas in France. THI incorporates the effects of both temperature and relative humidity and has already been used to quantify the degree of heat stress on dairy cattle, because it reflects the physical stress arising from extreme conditions better than air temperature alone. Relationships between daily THI and mortality were modeled separately for dairy and beef cattle during the 2003-2006 period. Our general approach was to first determine the shape of the THI-mortality relationship in each area by modeling THI with natural cubic splines. We then modeled each relationship assuming a three-piecewise linear function, to estimate the critical cold and heat THI thresholds for each area, delimiting the thermoneutral zone (i.e. where the risk of death is at its minimum), and the cold and heat effects below and above these thresholds, respectively. Area-specific estimates of the cold or heat effects were then combined in a hierarchical Bayesian model to compute the pooled effects of THI increase or decrease on dairy and beef cattle mortality. A U-shaped relationship, indicating a mortality increase below the cold threshold and above the heat threshold, was found in most of the study areas for dairy and beef cattle. The pooled estimate of the mortality risk associated with a 1°C decrease in THI below the cold threshold was 5.0% for dairy cattle [95% posterior interval: 4.4, 5.5] and 4.4% for beef cattle [2.0, 6.5]. The pooled mortality risk associated with a 1°C increase above the heat threshold was estimated to be 5.6% [5.0, 6.2] for dairy and 4.6% [0.9, 8.7] for beef cattle. Knowing the thermoneutral zone and temperature effects outside this zone is of primary interest for farmers because it can help determine when to implement appropriate preventive and mitigation measures. Copyright © 2015 Elsevier Inc. All rights reserved.
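The three-piecewise linear model referred to above can be written as a curve with two breakpoints (the cold and heat thresholds), a flat thermoneutral zone between them, and linear cold and heat effects outside it. The sketch below fits such a curve to synthetic log relative risk data with nonlinear least squares; the study estimated these effects inside distributed lag non-linear models rather than by this simplified direct fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_piece(thi, t_cold, t_heat, slope_cold, slope_heat, base):
    """Flat thermoneutral zone between t_cold and t_heat, linear rise outside it."""
    y = np.full_like(thi, base, dtype=float)
    y += np.where(thi < t_cold, slope_cold * (t_cold - thi), 0.0)
    y += np.where(thi > t_heat, slope_heat * (thi - t_heat), 0.0)
    return y

rng = np.random.default_rng(4)
thi = rng.uniform(-10, 35, 400)
log_rr = three_piece(thi, 0.0, 22.0, 0.05, 0.055, 0.0) + rng.normal(0, 0.02, thi.size)

p0 = [5.0, 20.0, 0.02, 0.02, 0.0]
params, _ = curve_fit(three_piece, thi, log_rr, p0=p0)
t_cold, t_heat, s_cold, s_heat, _ = params
print(f"cold threshold {t_cold:.1f}, heat threshold {t_heat:.1f}, "
      f"risk per unit THI: cold {np.expm1(s_cold):.1%}, heat {np.expm1(s_heat):.1%}")
```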
Oesophageal bioadhesion of sodium alginate suspensions: particle swelling and mucosal retention.
Richardson, J Craig; Dettmar, Peter W; Hampson, Frank C; Melia, Colin D
2004-09-01
This paper describes a prospective bioadhesive liquid dosage form designed to specifically adhere to the oesophageal mucosa. It contains a swelling polymer, sodium alginate, suspended in a water-miscible vehicle and is activated by dilution with saliva to form an adherent layer of polymer on the mucosal surface. The swelling of alginate particles and the bioadhesion of 40% (w/w) sodium alginate suspensions were investigated in a range of vehicles: glycerol, propylene glycol, PEG 200 and PEG 400. Swelling of particles as a function of vehicle dilution with artificial saliva was quantified microscopically using 1,9-dimethyl methylene blue (DMMB) as a visualising agent. The minimum vehicle dilution to initiate swelling varied between vehicles: glycerol required 30% (w/w) dilution whereas PEG 400 required nearly 60% (w/w). Swelling commenced when the Hildebrand solubility parameter of the diluted vehicle was raised to 37 MPa(1/2). The bioadhesive properties of suspensions were examined by quantifying the amount of sodium alginate retained on oesophageal mucosa after washing in artificial saliva. Suspensions exhibited considerable mucoretention and strong correlations were obtained between mucosal retention, the minimum dilution to initiate swelling, and the vehicle Hildebrand solubility parameter. These relationships may allow predictive design of suspensions with specific mucoretentive properties, through judicious choice of vehicle characteristics.
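The dependence of swelling onset on the diluted vehicle's Hildebrand solubility parameter can be illustrated with a simple linear volume-fraction mixing rule, delta_mix = phi_water * delta_water + (1 - phi_water) * delta_vehicle, which is a common first approximation and an assumption here rather than the paper's calculation. The solubility parameter values below are nominal placeholders.

```python
# Estimate the saliva dilution at which a vehicle reaches the 37 MPa^0.5 swelling
# threshold, assuming a linear volume-fraction mixing rule and treating artificial
# saliva as water. Parameter values are nominal placeholders.
VEHICLE_DELTA = {                     # Hildebrand parameter of the neat vehicle, MPa^0.5
    "glycerol": 33.8,
    "propylene glycol": 30.2,
    "PEG 400": 23.0,
}
DELTA_WATER = 47.8                    # Hildebrand parameter of water, MPa^0.5
THRESHOLD = 37.0                      # swelling onset reported for the diluted vehicle

for vehicle, d_v in VEHICLE_DELTA.items():
    # phi * DELTA_WATER + (1 - phi) * d_v = THRESHOLD, solved for the water fraction phi
    phi = (THRESHOLD - d_v) / (DELTA_WATER - d_v)
    print(f"{vehicle}: ~{100 * phi:.0f}% dilution to reach {THRESHOLD} MPa^0.5")
```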
Analysis of on-orbit thermal characteristics of the 15-meter hoop/column antenna
NASA Technical Reports Server (NTRS)
Andersen, Gregory C.; Farmer, Jeffery T.; Garrison, James
1987-01-01
In recent years, interest in large deployable space antennae has led to the development of the 15-meter hoop/column antenna. The thermal environment the antenna is expected to experience during orbit is examined and the temperature distributions leading to reflector surface distortion errors are determined. Two flight orientations are examined, corresponding to (1) normal operation and (2) use in a Shuttle-attached flight experiment. A reduced element model was used to determine element temperatures at 16 orbit points for both flight orientations. The temperature ranged from a minimum of 188 K to a maximum of 326 K. Based on the element temperatures, orbit positions leading to possible worst-case surface distortions were determined, and the corresponding temperatures were used in a static finite element analysis to quantify surface control cord deflections. The predicted changes in the control cord lengths were in the submillimeter range.
Process Design and Techno-economic Analysis for Materials to Treat Produced Waters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heimer, Brandon Walter; Paap, Scott M; Sasan, Koroush
Significant quantities of water are produced during enhanced oil recovery making these “produced water” streams attractive candidates for treatment and reuse. However, high concentrations of dissolved silica raise the propensity for fouling. In this paper, we report the design and economic analysis for a new ion exchange process using calcined hydrotalcite (HTC) to remove silica from water. This process improves upon known technologies by minimizing sludge product, reducing process fouling, and lowering energy use. Process modeling outputs included raw material requirements, energy use, and the minimum water treatment price (MWTP). Monte Carlo simulations quantified the impact of uncertainty and variability in process inputs on MWTP. These analyses showed that cost can be significantly reduced if the HTC materials are optimized. Specifically, R&D improving HTC reusability, silica binding capacity, and raw material price can reduce MWTP by 40%, 13%, and 20%, respectively. Optimizing geographic deployment further improves cost competitiveness.
Structural optimization for joined-wing synthesis
NASA Technical Reports Server (NTRS)
Gallman, John W.; Kroo, Ilan M.
1992-01-01
The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.
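The fully stressed design method mentioned above resizes each member so that it just reaches its allowable stress under at least one load case, using the stress-ratio rule A_new = A_old * sigma / sigma_allow between analyses. The toy sketch below applies that rule to a few independent axial members with invented forces and allowables; in a real (statically indeterminate) structure the member forces change after each resizing, so the analysis step must be repeated, and buckling constraints such as those in the study are not included.

```python
# Fully stressed design by stress-ratio resizing, for a set of independent
# axially loaded members (illustrative forces in N, allowable stress in Pa).
forces = [1.2e5, -8.0e4, 4.5e4]        # axial force in each member, one load case
sigma_allow = 2.5e8                    # allowable stress
areas = [1.0e-3, 1.0e-3, 1.0e-3]       # initial cross-sectional areas, m^2
a_min = 1.0e-6                         # minimum gauge area

for _ in range(20):
    stresses = [abs(f) / a for f, a in zip(forces, areas)]
    ratios = [s / sigma_allow for s in stresses]
    # Resize: an overstressed member grows, an understressed member shrinks, but
    # never below minimum gauge. For indeterminate structures the member forces
    # would have to be recomputed here before the next pass.
    areas = [max(a * r, a_min) for a, r in zip(areas, ratios)]
    if all(abs(r - 1.0) < 1e-6 or a == a_min for r, a in zip(ratios, areas)):
        break

print([f"{a * 1e6:.1f} mm^2" for a in areas])
```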
Short pauses in thalamic deep brain stimulation promote tremor and neuronal bursting.
Swan, Brandon D; Brocker, David T; Hilliard, Justin D; Tatter, Stephen B; Gross, Robert E; Turner, Dennis A; Grill, Warren M
2016-02-01
We conducted intraoperative measurements of tremor during DBS containing short pauses (⩽50 ms) to determine if there is a minimum pause duration that preserves tremor suppression. Nine subjects with ET and thalamic DBS participated during IPG replacement surgery. Patterns of DBS included regular 130 Hz stimulation interrupted by 0, 15, 25 or 50 ms pauses. The same patterns were applied to a model of the thalamic network to quantify effects of pauses on activity of model neurons. All patterns of DBS decreased tremor relative to 'off'. Patterns with pauses generated less tremor reduction than regular high frequency DBS. The model revealed that rhythmic burst-driver inputs to thalamus were masked during DBS, but pauses in stimulation allowed propagation of bursting activity. The mean firing rate of bursting-type model neurons as well as the firing pattern entropy of model neurons were both strongly correlated with tremor power across stimulation conditions. The temporal pattern of stimulation influences the efficacy of thalamic DBS. Pauses in stimulation resulted in decreased tremor suppression indicating that masking of pathological bursting is a mechanism of thalamic DBS for tremor. Pauses in stimulation decreased the efficacy of open-loop DBS for suppression of tremor. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rios, Richard; Acosta, Oscar; Lafond, Caroline; Espinosa, Jairo; de Crevoisier, Renaud
2017-11-01
In radiotherapy for prostate cancer, the dose to the bladder at treatment planning may be a poor surrogate for the actual delivered dose, as the bladder presents the largest inter-fraction shape variations during treatment. This paper presents PCA models as a virtual tool to estimate dosimetric uncertainties for the bladder produced by motion and deformation between fractions. Our goal is to propose a methodology to determine the minimum number of modes required to quantify dose uncertainties of the bladder for motion/deformation models based on PCA. We trained individual PCA models using the bladder contours available from three patients with a planning computed tomography (CT) and on-treatment cone-beam CTs (CBCTs). Based on the above models and via deformable image registration (DIR), we estimated two accumulated doses: firstly, an accumulated dose obtained by integrating the planning dose over the Gaussian probability distribution of the PCA model; and secondly, an accumulated dose obtained by simulating treatment courses via a Monte Carlo approach. We also computed a reference accumulated dose for each patient using his available images via DIR. Finally, we compared the planning dose with the three accumulated doses, and we calculated local dose variability and dose-volume histogram uncertainties.
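The core of a PCA motion/deformation model of this kind can be sketched as follows: stack each fraction's contour points into one vector, fit PCA across fractions, and draw plausible new shapes from a Gaussian over the retained mode weights for Monte Carlo accumulation. The surrogate "contours" and the toy dose function below are invented for illustration; contour registration, deformable image registration and real dose grids are outside the scope of the sketch.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_fractions, n_points = 15, 200
# Surrogate organ surfaces: a mean shape plus two smooth deformation patterns.
t = np.linspace(0, 2 * np.pi, n_points)
mean_shape = np.c_[np.cos(t), np.sin(t), 0.2 * np.sin(2 * t)]
modes_true = np.stack([np.c_[np.cos(t), np.zeros_like(t), np.zeros_like(t)],
                       np.c_[np.zeros_like(t), np.sin(3 * t), np.zeros_like(t)]])
weights = rng.normal(0, [0.3, 0.1], size=(n_fractions, 2))
shapes = mean_shape + np.einsum("fk,kpd->fpd", weights, modes_true)

X = shapes.reshape(n_fractions, -1)             # one row per treatment fraction
pca = PCA(n_components=2).fit(X)                # keep the first modes

def sample_shape():
    """Draw a plausible organ shape from the Gaussian over PCA mode weights."""
    z = rng.normal(0.0, np.sqrt(pca.explained_variance_))
    return pca.inverse_transform(z[None, :]).reshape(n_points, 3)

def toy_dose(shape):
    """Stand-in for dose accumulation: a made-up dose model over the surface points (Gy)."""
    return float(60.0 - 20.0 * np.abs(shape[:, 2]).mean())

mc_doses = [toy_dose(sample_shape()) for _ in range(500)]
print("planning-geometry vs Monte Carlo mean dose (toy):",
      round(toy_dose(mean_shape), 2), round(float(np.mean(mc_doses)), 2))
```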
The Future of US Nuclear Deterrence and the Impact of the Global Zero Movement
2013-02-10
The most appealing GZC recommendation is for a de facto minimum deterrence model en route to GZ. To its detriment, however, the GZC proposal relies...difficult to dispute that this path leads, at least eventually, to a minimum deterrence model . It is likely that the US can continue to wield a...instead, a capable, credible deterrent is critical to countering these threats and is especially so in a minimum deterrence model . Two key
Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.; Ryan, Joseph N.
2013-01-01
A colloid transport model is introduced that is conceptually simple yet captures the essential features of colloid transport and retention in saturated porous media when colloid retention is dominated by the secondary minimum because an electrostatic barrier inhibits substantial deposition in the primary minimum. This model is based on conventional colloid filtration theory (CFT) but eliminates the empirical concept of attachment efficiency. The colloid deposition rate is computed directly from CFT by assuming all predicted interceptions of colloids by collectors result in at least temporary deposition in the secondary minimum. Also, a new paradigm for colloid re-entrainment based on colloid population heterogeneity is introduced. To accomplish this, the initial colloid population is divided into two fractions. One fraction, by virtue of physiochemical characteristics (e.g., size and charge), will always be re-entrained after capture in a secondary minimum. The remaining fraction of colloids, again as a result of physiochemical characteristics, will be retained “irreversibly” when captured by a secondary minimum. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of the initial colloid population that will be retained “irreversibly” upon interception by a secondary minimum, and (2) the rate at which reversibly retained colloids leave the secondary minimum. These two parameters were correlated to the depth of the Derjaguin-Landau-Verwey-Overbeek (DLVO) secondary energy minimum and pore-water velocity, two physical forces that influence colloid transport. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport.
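The two-parameter model described above can be reduced, for illustration, to first-order kinetics in a well-mixed column: colloids are intercepted at the CFT deposition rate, a fraction f_irr of interceptions is retained irreversibly, and the reversibly retained remainder re-enters the flow at a detachment rate. All rate values below are invented, and the zero-dispersion, well-mixed simplification deliberately omits the advection-dispersion transport of the full model.

```python
from scipy.integrate import solve_ivp

k_dep = 0.05   # CFT interception/deposition rate into the secondary minimum, 1/min (illustrative)
k_det = 0.01   # re-entrainment rate of reversibly retained colloids, 1/min (fitting parameter 2)
f_irr = 0.4    # fraction retained irreversibly upon interception (fitting parameter 1)

def rhs(t, y):
    c, s_rev, s_irr = y                  # mobile, reversibly retained, irreversibly retained
    dep = k_dep * c
    det = k_det * s_rev
    return [-dep + det, (1.0 - f_irr) * dep - det, f_irr * dep]

sol = solve_ivp(rhs, (0.0, 600.0), [1.0, 0.0, 0.0])
c_end, s_rev_end, s_irr_end = sol.y[:, -1]
print(f"after 600 min: mobile {c_end:.2f}, reversible {s_rev_end:.2f}, irreversible {s_irr_end:.2f}")
```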
Multiple-rule bias in the comparison of classification rules
Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.
2011-01-01
Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
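The multiple-rule bias itself is straightforward to reproduce by simulation: train several classification rules on one small dataset, report the rule with the smallest estimated (cross-validation) error, and compare that estimate against the chosen rule's true error on a large held-out set. The synthetic data generator and the particular rules below are illustrative choices, not the models or error estimators analyzed in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rules = {"3NN": KNeighborsClassifier(3),
         "LDA": LinearDiscriminantAnalysis(),
         "linear SVM": SVC(kernel="linear")}
gaps = []
for rep in range(50):
    X, y = make_classification(n_samples=5060, n_features=10, n_informative=4,
                               random_state=rep)
    X_small, X_big, y_small, y_big = train_test_split(X, y, train_size=60,
                                                      random_state=rep)
    # Estimated error of each rule on the small dataset (5-fold cross-validation).
    est = {name: 1.0 - cross_val_score(m, X_small, y_small, cv=5).mean()
           for name, m in rules.items()}
    best = min(est, key=est.get)                     # rule with minimum estimated error
    true_err = 1.0 - rules[best].fit(X_small, y_small).score(X_big, y_big)
    gaps.append(true_err - est[best])                # positive values indicate optimism

print("mean optimistic bias of the reported (winning) rule:",
      round(float(np.mean(gaps)), 3))
```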
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
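The screening step, ranking inputs by how strongly they drive Monte Carlo outcome variance so that only the influential ones need full probability distributions, can be sketched with rank correlations on a toy fate model. The three-input "model" and its lognormal distributions are invented; the actual multimedia model has far more inputs and a more formal influence classification.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 5000
# Hypothetical lognormally distributed inputs of a toy fate model.
inputs = {"emission_rate": rng.lognormal(0.0, 1.0, n),
          "half_life": rng.lognormal(0.0, 0.3, n),
          "partition_coeff": rng.lognormal(0.0, 0.05, n)}

# Toy outcome: exposure concentration proportional to emission * half_life / Kd.
outcome = inputs["emission_rate"] * inputs["half_life"] / inputs["partition_coeff"]

scores = {name: abs(spearmanr(x, outcome).correlation) for name, x in inputs.items()}
for name, rho in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: |rank correlation| = {rho:.2f}")
# Inputs with negligible correlation can be fixed at point values with little loss
# in the outcome distribution, shrinking the set that needs full distributions.
```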
Minimum separation distances for natural gas pipeline and boilers in the 300 area, Hanford Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daling, P.M.; Graham, T.M.
1997-08-01
The U.S. Department of Energy (DOE) is proposing actions to reduce energy expenditures and improve energy system reliability at the 300 Area of the Hanford Site. These actions include replacing the centralized heating system with heating units for individual buildings or groups of buildings, constructing a new natural gas distribution system to provide a fuel source for many of these units, and constructing a central control building to operate and maintain the system. The individual heating units will include steam boilers that are to be housed in individual annex buildings located at some distance away from nearby 300 Area nuclear facilities. This analysis develops the basis for siting the package boilers and natural gas distribution systems to be used to supply steam to 300 Area nuclear facilities. The effects of four potential fire and explosion scenarios involving the boiler and natural gas pipeline were quantified to determine minimum separation distances that would reduce the risks to nearby nuclear facilities. The resulting minimum separation distances are shown in Table ES.1.
Green, W. Reed; Galloway, Joel M.; Richards, Joseph M.; Wesolowski, Edwin A.
2003-01-01
Outflow from Table Rock Lake and other White River reservoirs supports a cold-water trout fishery of substantial economic yield in south-central Missouri and north-central Arkansas. The Missouri Department of Conservation has requested an increase in existing minimum flows through the Table Rock Lake Dam from the U.S. Army Corps of Engineers to increase the quality of fishable waters downstream in Lake Taneycomo. Information is needed to assess the effect of increased minimum flows on temperature and dissolved-oxygen concentrations of reservoir water and the outflow. A two-dimensional, laterally averaged, hydrodynamic, temperature, and dissolved-oxygen model, CE-QUAL-W2, was developed and calibrated for Table Rock Lake, located in Missouri, north of the Arkansas-Missouri State line. The model simulates water-surface elevation, heat transport, and dissolved-oxygen dynamics. The model was developed to assess the effects of proposed increases in minimum flow from about 4.4 cubic meters per second (the existing minimum flow) to 11.3 cubic meters per second (the increased minimum flow). Simulations included assessing the effect of (1) increased minimum flows and (2) increased minimum flows with increased water-surface elevations in Table Rock Lake, on outflow temperatures and dissolved-oxygen concentrations. In both minimum flow scenarios, water temperature appeared to stay the same or increase slightly (less than 0.37 °C) and dissolved oxygen appeared to decrease slightly (less than 0.78 mg/L) in the outflow during the thermal stratification season. However, differences between the minimum flow scenarios for water temperature and dissolved-oxygen concentration and the calibrated model were similar to the differences between measured and simulated water-column profile values.
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.
On the bandwidth of the plenoptic function.
Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin
2012-02-01
The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE
NASA Astrophysics Data System (ADS)
Yan, Tiezhu; Shen, Zhenyao; Heng, Lee; Dercon, Gerd
2016-04-01
Future climate change information is important to formulate adaptation and mitigation strategies for climate change. In this study, a statistical downscaling model (SDSM) was established using both NCEP reanalysis data and ground observations (daily maximum and minimum temperature) during the period 1971-2010, and the calibrated model was then applied to generate future maximum and minimum temperature projections using predictors from two CMIP5 models (MPI-ESM-LR and CNRM-CM5) under two Representative Concentration Pathways (RCP 2.6 and RCP 8.5) during the period 2011-2100 for the Haihe River Basin, China. Compared to the baseline period, future changes in annual and seasonal maximum and minimum temperature were computed after bias correction. The spatial distribution and trend change of annual maximum and minimum temperature were also analyzed using ensemble projections. The results show that: (1) The downscaling model had good applicability in reproducing daily and monthly mean maximum and minimum temperature over the whole basin. (2) Bias was observed when using historical predictors from the CMIP5 models, and the performance of CNRM-CM5 was a little worse than that of MPI-ESM-LR. (3) The annual mean maximum and minimum temperature under the two scenarios will increase in the 2020s, 2050s and 2070s, and the magnitude of the increase in maximum temperature will be higher than that in minimum temperature. (4) The increase in temperature in the mountains and along the coastline is remarkably higher than in the other parts of the study basin. (5) For annual maximum and minimum temperature, a significant upward trend is obtained under the RCP 8.5 scenario, with magnitudes of 0.37 and 0.39 °C per decade, respectively; under the RCP 2.6 scenario the trend is upward in the 2020s and then decreases in the 2050s and 2070s, with magnitudes of 0.01 and 0.01 °C per decade, respectively.
Van Dyke, Miriam E; Komro, Kelli A; Shah, Monica P; Livingston, Melvin D; Kramer, Michael R
2018-07-01
Despite substantial declines since the 1960's, heart disease remains the leading cause of death in the United States (US) and geographic disparities in heart disease mortality have grown. State-level socioeconomic factors might be important contributors to geographic differences in heart disease mortality. This study examined the association between state-level minimum wage increases above the federal minimum wage and heart disease death rates from 1980 to 2015 among 'working age' individuals aged 35-64 years in the US. Annual, inflation-adjusted state and federal minimum wage data were extracted from legal databases and annual state-level heart disease death rates were obtained from CDC Wonder. Although most minimum wage and health studies to date use conventional regression models, we employed marginal structural models to account for possible time-varying confounding. Quasi-experimental, marginal structural models accounting for state, year, and state × year fixed effects estimated the association between increases in the state-level minimum wage above the federal minimum wage and heart disease death rates. In models of 'working age' adults (35-64 years old), a $1 increase in the state-level minimum wage above the federal minimum wage was on average associated with ~6 fewer heart disease deaths per 100,000 (95% CI: -10.4, -1.99), or a state-level heart disease death rate that was 3.5% lower per year. In contrast, for older adults (65+ years old) a $1 increase was on average associated with a 1.1% lower state-level heart disease death rate per year (b = -28.9 per 100,000, 95% CI: -71.1, 13.3). State-level economic policies are important targets for population health research. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutton, Spencer M.; Fisk, William J.
For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% as the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than in the retail building models, resulting in significantly reduced demand for heating. The school heating energy use was correspondingly less sensitive to changes in the minimum VR. The modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor generated contaminant, with larger effects in schools. The shape of the curves relating contaminant concentrations with VRs illustrates the importance of avoiding particularly low VRs.
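At steady state, the link between minimum ventilation rate and the annual-average concentration of an indoor-generated contaminant follows the well-mixed zone mass balance C = C_out + S / (Q + k V). The single-zone numbers below are placeholders; EnergyPlus resolves the same balance dynamically with occupancy schedules, infiltration and HVAC operation.

```python
# Steady-state well-mixed zone: indoor concentration versus mechanical ventilation rate.
def indoor_concentration(source_ug_per_h, vent_m3_per_h, volume_m3,
                         outdoor_ug_per_m3=0.0, deposition_per_h=0.0):
    """C = C_out + S / (Q + k*V) for a single well-mixed zone."""
    return outdoor_ug_per_m3 + source_ug_per_h / (vent_m3_per_h + deposition_per_h * volume_m3)

volume = 3000.0                 # zone volume, m^3 (placeholder retail zone)
source = 50000.0                # indoor emission rate, ug/h (placeholder)
for ach in (0.0, 0.25, 0.5, 1.0):          # minimum ventilation expressed as air changes per hour
    q = ach * volume
    c = indoor_concentration(source, q, volume, deposition_per_h=0.1)
    print(f"{ach:.2f} ACH -> {c:.0f} ug/m^3")
```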
Quantifying In Situ Contaminant Mobility in Marine Sediments
2008-01-01
and rinsing the collection and sensor chambers and the circulation subsystem with prepared solutions is followed. For metals, a nitric acid soak/rinse...fluids beginning with tap water, then de-ionized water, then a special detergent (“RBS”), then de-ionized water, then nitric acid for metals or methanol...component parts are soaked, four hours minimum, in each fluid. A 25% concentration of ultra-pure nitric acid is used to soak Teflon™ parts (bottles, lids
Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan
2018-01-01
This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.
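One building block named above, the Luenberger observer used to recover the plant state for output feedback, is easy to illustrate in isolation. The sketch below runs such an observer on a toy two-state SISO plant with an arbitrarily placed gain; it shows only the state-estimation component and makes no attempt to reproduce the optimal control modification adaptive law itself.

```python
import numpy as np

# Toy 2-state SISO plant and a Luenberger observer: x_hat' = A x_hat + B u + L (y - C x_hat).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [20.0]])          # observer gain chosen so A - L C is stable

dt, n = 0.001, 5000
x = np.array([[1.0], [0.0]])           # true plant state (unknown to the controller)
x_hat = np.zeros((2, 1))               # observer estimate
for k in range(n):
    u = np.array([[np.sin(0.002 * k)]])            # arbitrary excitation input
    y = C @ x                                      # measured output
    x = x + dt * (A @ x + B @ u)                   # plant propagation (Euler step)
    x_hat = x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))

print("final estimation error:", np.abs(x - x_hat).ravel().round(4))
```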
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS were performed under different types of traffic sources.
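The two graph primitives combined by MSTMCF are both available in standard libraries: a minimum spanning tree for the topology and a minimum-cost flow for routing demand over it. The toy network below, built with networkx, only illustrates those two calls with invented nodes, link costs and demands; it does not reproduce the paper's delay model or the OWNS simulations.

```python
import networkx as nx

# Toy hybrid access network: edge weights stand in for link delay/cost.
G = nx.Graph()
G.add_weighted_edges_from([("OLT", "ONU1", 2), ("OLT", "ONU2", 3), ("ONU1", "ONU2", 2),
                           ("ONU1", "W1", 1), ("ONU2", "W2", 1), ("W1", "W2", 4)])
mst = nx.minimum_spanning_tree(G)                      # step 1: spanning-tree topology

# Step 2: route 5 units of demand from W1 to the OLT over the tree at minimum cost.
D = nx.DiGraph()
for u, v, d in mst.edges(data=True):
    D.add_edge(u, v, weight=int(d["weight"]), capacity=10)
    D.add_edge(v, u, weight=int(d["weight"]), capacity=10)
D.nodes["W1"]["demand"] = -5                           # source node
D.nodes["OLT"]["demand"] = 5                           # sink node
flow = nx.min_cost_flow(D)
print(sorted(mst.edges()), {u: v for u, v in flow.items() if any(v.values())})
```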
49 CFR 538.7 - Petitions for reduction of minimum driving range.
Code of Federal Regulations, 2013 CFR
2013-10-01
Title 49 Transportation; ALTERNATIVE FUEL VEHICLES; § 538.7 Petitions for reduction of minimum driving range. (a) A manufacturer of a... diesel fuel may petition for a reduced minimum driving range for that model type in accordance with...
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model to derive the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
An Observation-Driven Agent-Based Modeling and Analysis Framework for C. elegans Embryogenesis.
Wang, Zi; Ramsey, Benjamin J; Wang, Dali; Wong, Kwai; Li, Husheng; Wang, Eric; Bao, Zhirong
2016-01-01
With cutting-edge live microscopy and image analysis, biologists can now systematically track individual cells in complex tissues and quantify cellular behavior over extended time windows. Computational approaches that utilize the systematic and quantitative data are needed to understand how cells interact in vivo to give rise to the different cell types and 3D morphology of tissues. An agent-based, minimum descriptive modeling and analysis framework is presented in this paper to study C. elegans embryogenesis. The framework is designed to incorporate the large amounts of experimental observations on cellular behavior and reserve data structures/interfaces that allow regulatory mechanisms to be added as more insights are gained. Observed cellular behaviors are organized into lineage identity, timing and direction of cell division, and path of cell movement. The framework also includes global parameters such as the eggshell and a clock. Division and movement behaviors are driven by statistical models of the observations. Data structures/interfaces are reserved for gene list, cell-cell interaction, cell fate and landscape, and other global parameters until the descriptive model is replaced by a regulatory mechanism. This approach provides a framework to handle the ongoing experiments of single-cell analysis of complex tissues where mechanistic insights lag data collection and need to be validated on complex observations.
Thrust Direction Optimization: Satisfying Dawn's Attitude Agility Constraints
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2013-01-01
The science objective of NASA's Dawn Discovery mission is to explore the two largest members of the main asteroid belt, the giant asteroid Vesta and the dwarf planet Ceres. Dawn successfully completed its orbital mission at Vesta. The Dawn spacecraft has complex, difficult-to-quantify, and in some cases severe limitations on its attitude agility. The low-thrust transfers between science orbits at Vesta required very complex time-varying thrust directions due to the strong and complex gravity and various science objectives. Traditional thrust design objectives (like minimum ΔV or minimum transfer time) often result in thrust direction time evolutions that cannot be accommodated with the attitude control system available on Dawn. This paper presents several new optimal control objectives, collectively called thrust direction optimization, that were developed and were necessary to successfully navigate Dawn through all orbital transfers at Vesta.
Thermally activated switching at long time scales in exchange-coupled magnetic grains
NASA Astrophysics Data System (ADS)
Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.; Fal, T. J.
2015-10-01
Rate coefficients of the Arrhenius-Néel form are calculated for thermally activated magnetic moment reversal for dual layer exchange-coupled composite (ECC) media based on the Langer formalism and are applied to study the sweep rate dependence of M-H hysteresis loops as a function of the exchange coupling I between the layers. The individual grains are modeled as two exchange-coupled Stoner-Wohlfarth particles from which the minimum energy paths connecting the minimum energy states are calculated using a variant of the string method, and the energy barriers and attempt frequencies calculated as a function of the applied field. The resultant rate equations describing the evolution of an ensemble of noninteracting ECC grains are then integrated numerically in an applied field with constant sweep rate R = -dH/dt and the magnetization calculated as a function of the applied field H. M-H hysteresis loops are presented for a range of values I for sweep rates 10^5 Oe/s ≤ R ≤ 10^10 Oe/s and a figure of merit that quantifies the advantages of ECC media is proposed. M-H hysteresis loops are also calculated based on the stochastic Landau-Lifshitz-Gilbert equations for 10^8 Oe/s ≤ R ≤ 10^10 Oe/s and are shown to be in good agreement with those obtained from the direct integration of rate equations. The results are also used to examine the accuracy of certain approximate models that reduce the complexity associated with the Langer-based formalism and which provide some useful insight into the reversal process and its dependence on the coupling strength and sweep rate. Of particular interest is the clustering of minimum energy states that are separated by relatively low-energy barriers into "metastates." It is shown that while approximating the reversal process in terms of "metastates" results in little loss of accuracy, it can reduce the run time of a kinetic Monte Carlo (KMC) simulation of the magnetic decay of an ensemble of dual layer ECC media by 2-3 orders of magnitude. The essentially exact results presented in this work for two coupled grains are analogous to the Stoner-Wohlfarth model of a single grain and serve as an important precursor to KMC-based simulation studies on systems of interacting dual layer ECC media.
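The final step above, integrating rate equations while the applied field is swept at constant rate R, can be sketched for a single Stoner-Wohlfarth-like grain with the textbook barrier E(H) = KV (1 ± H/H_K)^2. All parameter values are arbitrary, and the single-layer two-state simplification replaces the coupled two-layer energy surface and string-method barriers of the paper; the sketch only shows how coercivity grows with sweep rate.

```python
import numpy as np

kB, T = 1.380649e-23, 300.0            # Boltzmann constant (J/K), temperature (K)
KV = 40.0 * kB * T                     # zero-field energy barrier (thermal stability factor 40)
H_K = 5000.0                           # anisotropy (switching) field, Oe
f0 = 1e10                              # attempt frequency, 1/s

def escape_rate(H, sign):
    """Arrhenius-Neel rate for leaving the well whose moment points along `sign`."""
    x = np.clip(1.0 + sign * H / H_K, 0.0, None)    # reduced barrier factor (1 +/- H/H_K)
    return f0 * np.exp(-KV * x ** 2 / (kB * T))

def mh_loop(R, H_max=8000.0, n=4000):
    """Relax the two-state populations while H is swept at constant rate R (Oe/s)."""
    down = np.linspace(H_max, -H_max, n)
    dt = (2.0 * H_max / (n - 1)) / R
    p_up, loop = 1.0, []
    for H in np.concatenate([down, down[::-1]]):
        k_out, k_in = escape_rate(H, +1.0), escape_rate(H, -1.0)
        k_tot = k_out + k_in
        p_eq = k_in / k_tot
        p_up = p_eq + (p_up - p_eq) * np.exp(-k_tot * dt)   # exact single-step relaxation
        loop.append((H, 2.0 * p_up - 1.0))                  # reduced magnetization M/Ms
    return np.array(loop)

for R in (1e5, 1e8, 1e10):
    loop = mh_loop(R)
    coercivity = abs(loop[np.argmin(np.abs(loop[:, 1])), 0])
    print(f"R = {R:.0e} Oe/s -> coercivity ~ {coercivity:.0f} Oe")
```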
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the combined impact of multi-source uncertainties. To evaluate this combined uncertainty, we use probability distributions to quantify uncertainty and propose a Bayesian framework for integrating uncertainties. Within this framework, we integrated data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the overall uncertainty of the geological model. Uncertainties propagate and accumulate during the modeling process, and the gradual integration of multi-source uncertainty serves as a simulation of this propagation. Bayesian inference accomplishes the uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtained a posterior distribution that evaluates the overall uncertainty of the geological model. This posterior distribution represents the combined impact of all the uncertain factors on the spatial structure of the model. The framework provides a solution for evaluating the combined impact of multi-source uncertainties on a geological model and a way to study the mechanism of uncertainty propagation in geological modeling.
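A minimal conjugate-Gaussian sketch of the integration step is given below: a maximum-entropy prior (Gaussian, given only a mean and variance constraint) on a horizon depth is updated with noisy borehole picks to give a posterior pooling cognitive information and data errors. The numbers are invented, and the full framework's spatial randomness component is not represented.

```python
import numpy as np

# Conjugate-Gaussian update of a horizon depth: max-entropy prior + noisy observations.
prior_mean, prior_var = 120.0, 15.0 ** 2   # prior depth (m) from cognitive information
obs = np.array([112.0, 118.0, 115.0])      # borehole picks (m), carrying data error
obs_var = 5.0 ** 2                         # data error variance

post_var = 1.0 / (1.0 / prior_var + obs.size / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
print(f"posterior depth: {post_mean:.1f} m +/- {np.sqrt(post_var):.1f} m")
```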
Development of real-time PCR methods to quantify patulin-producing molds in food products.
Rodríguez, Alicia; Luque, M Isabel; Andrade, María J; Rodríguez, Mar; Asensio, Miguel A; Córdoba, Juan J
2011-09-01
Patulin is a mycotoxin produced by different Penicillium and Aspergillus strains isolated from food products. To improve food safety, the presence of patulin-producing molds in foods should be quantified. In the present work, two real-time (RTi) PCR protocols based on SYBR Green and TaqMan were developed. Thirty-four patulin-producing and 28 non-producing strains belonging to different species usually reported in food products were used. Patulin production was tested by micellar electrokinetic capillary electrophoresis (MECE) and high-pressure liquid chromatography-mass spectrometry (HPLC-MS). A primer pair F-idhtrb/R-idhtrb and the probe IDHprobe were designed from the isoepoxydon dehydrogenase (idh) gene, involved in patulin biosynthesis. The functionality of the developed method was demonstrated by the strong linear relationship of the standard curves constructed from the idh gene copy number and Ct values for the different patulin producers tested. The developed SYBR Green and TaqMan assays successfully quantified patulin producers in artificially inoculated food samples, with a minimum threshold of 10 conidia g(-1) per reaction. The developed methods quantified fungal load in foods with high efficiency. These RTi-PCR protocols are proposed for quantifying patulin-producing molds in food products and for preventing patulin from entering the food chain. Copyright © 2011 Elsevier Ltd. All rights reserved.
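Quantification with assays of this type rests on the linear standard curve between Ct and the log of the starting copy number. The sketch below fits that curve, derives the amplification efficiency from its slope, and inverts it for an unknown sample; the Ct values are invented rather than taken from the assay's calibration data.

```python
import numpy as np

# Hypothetical calibration: serial dilutions of the idh target and measured Ct values.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])     # copies per reaction
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # 100% efficiency corresponds to slope ~ -3.32
print(f"slope {slope:.2f}, amplification efficiency {efficiency:.0%}")

def quantify(ct_unknown):
    """Invert the standard curve to estimate starting copies in an unknown sample."""
    return 10 ** ((ct_unknown - intercept) / slope)

print(f"Ct 27.5 -> ~{quantify(27.5):.0f} copies per reaction")
```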
Simulation of hydrodynamics, temperature, and dissolved oxygen in Beaver Lake, Arkansas, 1994-1995
Haggard, Brian; Green, W. Reed
2002-01-01
The tailwaters of Beaver Lake and other White River reservoirs support a cold-water trout fishery of significant economic yield in northwestern Arkansas. The Arkansas Game and Fish Commission has requested an increase in existing minimum flows through the Beaver Lake dam to increase the amount of fishable waters downstream. Information is needed to assess the impact of additional minimum flows on temperature and dissolved-oxygen qualities of reservoir water above the dam and the release water. A two-dimensional, laterally averaged hydrodynamic, thermal and dissolved-oxygen model was developed and calibrated for Beaver Lake, Arkansas. The model simulates surface-water elevation, currents, heat transport and dissolved-oxygen dynamics. The model was developed to assess the impacts of proposed increases in minimum flows from 1.76 cubic meters per second (the existing minimum flow) to 3.85 cubic meters per second (the additional minimum flow). Simulations included assessing (1) the impact of additional minimum flows on tailwater temperature and dissolved-oxygen quality and (2) increasing initial water-surface elevation 0.5 meter and assessing the impact of additional minimum flow on tailwater temperatures and dissolved-oxygen concentrations. The additional minimum flow simulation (without increasing initial pool elevation) appeared to increase the water temperature (<0.9 degrees Celsius) and decrease dissolved oxygen concentration (<2.2 milligrams per liter) in the outflow discharge. Conversely, the additional minimum flow plus initial increase in pool elevation (0.5 meter) simulation appeared to decrease outflow water temperature (0.5 degrees Celsius) and increase dissolved oxygen concentration (<1.2 milligrams per liter) through time. However, results from both minimum flow scenarios for both water temperature and dissolved oxygen concentration were within the boundaries or similar to the error between measured and simulated water column profile values.
NASA Astrophysics Data System (ADS)
Flanagan, S.; Hurtt, G. C.; Fisk, J. P.; Rourke, O.
2012-12-01
A robust understanding of the sensitivity of the pattern, structure, and dynamics of ecosystems to climate, climate variability, and climate change is needed to predict ecosystem responses to current and projected climate change. We present results of a study designed to first quantify the sensitivity of ecosystems to climate through the use of climate and ecosystem data, and then use the results to test the sensitivity of the climate data in a state-of-the-art ecosystem model. A database of available ecosystem characteristics such as mean canopy height, above-ground biomass, and basal area was constructed from sources such as the National Biomass and Carbon Dataset (NBCD). The ecosystem characteristics were then paired by latitude and longitude with the corresponding climate characteristics (temperature, precipitation, photosynthetically active radiation (PAR) and dew point) retrieved from the North American Regional Reanalysis (NARR). Yearly and seasonal means of the climate data, along with their associated maximum and minimum values over the 1979-2010 time frame provided by NARR, were computed and paired with the ecosystem data. The compiled results provide natural patterns of vegetation structure and distribution with regard to climate data. An advanced ecosystem model, the Ecosystem Demography model (ED), was then modified to allow yearly alterations to its mechanistic climate lookup table and used to predict the sensitivities of ecosystem pattern, structure, and dynamics to climate data. The combined ecosystem structure and climate data results were compared to ED's output to check the validity of the model. After verification, climate change scenarios such as those used in the last IPCC assessment were run and future forest structure changes due to climate sensitivities were identified. The results of this study can be used to both quantify and test key relationships for next-generation models. The sensitivity of ecosystem characteristics to climate data shown in the database construction and by the model reinforces the need for high-resolution datasets and stresses the importance of understanding and incorporating climate change scenarios into earth system models.
Bonebrake, Timothy C; Syphard, Alexandra D; Franklin, Janet; Anderson, Kurt E; Akçakaya, H Resit; Mizerek, Toni; Winchell, Clark; Regan, Helen M
2014-08-01
Most species face multiple anthropogenic disruptions. Few studies have quantified the cumulative influence of multiple threats on species of conservation concern, and far fewer have quantified the potential relative value of multiple conservation interventions in light of these threats. We linked spatial distribution and population viability models to explore conservation interventions under projected climate change, urbanization, and changes in fire regime on a long-lived obligate seeding plant species sensitive to high fire frequencies, a dominant plant functional type in many fire-prone ecosystems, including the biodiversity hotspots of Mediterranean-type ecosystems. First, we investigated the relative risk of population decline for plant populations in landscapes with and without land protection under an existing habitat conservation plan. Second, we modeled the effectiveness of relocating both seedlings and seeds from a large patch with predicted declines in habitat area to 2 unoccupied recipient patches with increasing habitat area under 2 projected climate change scenarios. Finally, we modeled 8 fire return intervals (FRIs) approximating the outcomes of different management strategies that effectively control fire frequency. Invariably, long-lived obligate seeding populations remained viable only when FRIs were maintained at or above a minimum level. Land conservation and seedling relocation efforts lessened the impact of climate change and land-use change on obligate seeding populations to differing degrees depending on the climate change scenario, but neither of these efforts was as generally effective as frequent translocation of seeds. While none of the modeled strategies fully compensated for the effects of land-use and climate change, an integrative approach managing multiple threats may diminish population declines for species in complex landscapes. Conservation plans designed to mitigate the impacts of a single threat are likely to fail if additional threats are ignored. © 2014 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Simon, Dirk; Marzocchi, Alice; Flecker, Rachel; Lunt, Daniel J.; Hilgen, Frits J.; Meijer, Paul Th.
2017-08-01
The cyclic sedimentary record of the late Miocene Mediterranean shows a clear transition from open marine to restricted conditions and finally to evaporitic environments associated with the Messinian Salinity Crisis. This evolution has been attributed to changes in Mediterranean-Atlantic connectivity and regional climate, which has a strong precessional pulse. Thirty-one coupled climate simulations with different orbital configurations have been combined in a regression model that estimates the evolution of the freshwater budget of the Mediterranean throughout the late Miocene. The study suggests that wetter conditions occur at precession minima and are enhanced at eccentricity maxima. We use the wetter peaks to predict synthetic sapropel records. Using these to retune two Mediterranean sediment successions indicates that the overall net freshwater budget is the most likely mechanism driving sapropel formation in the late Miocene. Our sapropel timing is offset from precession minima and boreal summer insolation maxima during low eccentricity if the present-day drainage configuration across North Africa is used. This phase offset is removed if at least 50% more water drained into the Mediterranean during the late Miocene, capturing additional North African monsoon precipitation, for example via the Chad-Eosahabi catchment in Libya. In contrast with the clear expression of precession and eccentricity in the model results, obliquity, which is visible in the sapropel record during minimum eccentricity, does not have a strong signal in our model. By exploring the freshwater evolution curve in a box model that also includes Mediterranean-Atlantic exchange, we are able, for the first time, to estimate the Mediterranean's salinity evolution, which is quantitatively consistent with precessional control. Additionally, we separate and quantify the distinct contributions that regional climate and tectonic restriction make to the lithological changes associated with the Messinian Salinity Crisis. The novel methodology and results of this study have numerous potential applications to other regions and geological scenarios, as well as to astronomical tuning.
NASA Astrophysics Data System (ADS)
Subha Anand, S.; Rengarajan, R.; Sarma, V. V. S. S.; Sudheer, A. K.; Bhushan, R.; Singh, S. K.
2017-05-01
The northern Indian Ocean is globally significant for its seasonally reversing winds, upwelled nutrients, high biological production, and expanding oxygen minimum zones. The region acts as both a sink and a source for atmospheric CO2. However, the efficiency of the biological carbon pump in sequestering atmospheric CO2 and exporting particulate organic carbon from the surface is not well known. To quantify the upper ocean carbon export flux and to estimate the efficiency of the biological carbon pump in the Bay of Bengal and the Indian Ocean, seawater profiles of total 234Th were measured from the surface to 300 m depth at 13 stations from 19.9°N to 25.3°S along a transect at 87°E during the spring intermonsoon period (March-April 2014). Results showed enhanced in situ primary production in the equatorial Indian Ocean and the central Bay of Bengal, with values ranging from 13.2 to 173.8 mmol C m⁻² d⁻¹. The POC export flux in this region varied from 0 to 7.7 mmol C m⁻² d⁻¹. Though high carbon export flux was found in the equatorial region, remineralization of organic carbon in the surface and subsurface waters considerably reduced organic carbon export in the Bay of Bengal. Annually recurring anticyclonic eddies enhanced organic carbon utilization and heterotrophy. The oxygen minimum zone, which developed due to stratification and poor ventilation, was intensified by subsurface remineralization. 234Th-based carbon export fluxes were not comparable with empirical statistical model estimates based on primary production and temperature. Region-specific refinement of model parameters is required to accurately predict POC export fluxes.
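For readers unfamiliar with the 234Th technique, the sketch below illustrates the standard steady-state 234Th-deficit calculation of export flux (ignoring upwelling and advection terms). All numerical values, including the POC/234Th ratio, are illustrative placeholders, not the cruise data.

```python
import numpy as np

lam_234 = np.log(2) / 24.1                    # 234Th decay constant, 1/day
depth = np.array([0.0, 25.0, 50.0, 75.0, 100.0])        # m
a_u238 = np.full(depth.size, 2.4)             # 238U activity from salinity, dpm/L
a_th234 = np.array([1.6, 1.8, 2.0, 2.3, 2.4]) # measured total 234Th, dpm/L

# Integrate the 234Th deficit over the upper water column (trapezoidal rule),
# then convert: 1 dpm/L = 1000 dpm/m^3, so deficit * 1000 has units dpm/m^2.
deficit = np.trapz(a_u238 - a_th234, depth)   # dpm L^-1 m
th_flux = lam_234 * deficit * 1000.0          # dpm m^-2 d^-1

# POC export from the particulate POC/234Th ratio at the export depth (illustrative).
poc_to_th = 5.0                               # umol C per dpm
poc_flux = th_flux * poc_to_th / 1000.0       # mmol C m^-2 d^-1
print(f"234Th flux: {th_flux:.0f} dpm m-2 d-1, POC export: {poc_flux:.1f} mmol C m-2 d-1")
```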
Shifting relative importance of climatic constraints on land surface phenology
NASA Astrophysics Data System (ADS)
Garonna, Irene; de Jong, Rogier; Stöckli, Reto; Schmid, Bernhard; Schenkel, David; Schimel, David; Schaepman, Michael E.
2018-02-01
Land surface phenology (LSP), the study of seasonal dynamics of vegetated land surfaces from remote sensing, is a key indicator of global change that both responds to and influences weather and climate. The effects of climatic changes on LSP depend on the relative importance of climatic constraints in specific regions, which are not well understood at the global scale. Understanding the climatic constraints that underlie LSP is crucial for explaining climate change effects on global vegetation phenology. We used a combination of modelled and remotely-sensed vegetation activity records to quantify the interplay of three climatic constraints on land surface phenology (namely minimum temperature, moisture availability, and photoperiod), as well as the dynamic nature of these constraints. Our study examined trends and the relative importance of the three constraints at the start and the end of the growing season over eight global environmental zones, for the past three decades. Our analysis revealed widespread shifts in the relative importance of climatic constraints in the temperate and boreal biomes during the 1982-2011 period. These changes in the relative importance of the three climatic constraints, which ranged up to 8% relative to 1982 levels, varied with latitude and between the start and end of the growing season. We found a reduced influence of minimum temperature on start and end of season in all environmental zones considered, with a biome-dependent effect on moisture and photoperiod constraints. For the end of season, we report that the influence of moisture has on average increased for both the temperate and boreal biomes over 8.99 million km². A shifting relative importance of climatic constraints on LSP has implications both for understanding changes and for improving how they may be modelled at large scales.
Minimum Wage Effects on Educational Enrollments in New Zealand
ERIC Educational Resources Information Center
Pacheco, Gail A.; Cruickshank, Amy A.
2007-01-01
This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…
Human equivalent power: towards an optimum energy level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafner, E.
1979-01-01
How much energy would be needed to support the average individual in an efficient technological culture? Present knowledge provides information about minimum dietary power needs, but so far we have not been able to find ways of analyzing other human needs which, in a civilized society, rise far above the power of metabolism. Thus we understand the level at its minimum but not at its optimum. This paper attempts to quantify an optimum power level for civilized society. The author describes a method he uses in seminars to quantify how many servants, in units of human equivalent power (HEP), are needed to supply a person in an upper-middle-class lifestyle. Typical seminar participants determine that a per-capita power budget of 15 HEPs (perfect servants) would be required. Each human being on earth today is, according to the author, the master of forty slaves; in the U.S., he says, the number is close to 200. He concludes that a highly civilized standard of living may be closely associated with an optimum per capita power budget of 1500 watts; and since the average individual in the U.S. participates in energy turnover at almost ten times the rate he knows intuitively to be reasonable, reformation of American power habits will require reconstruction that shakes the house from top to bottom.
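The central arithmetic of the seminar exercise can be reproduced in a few lines. The ~100 W figure for one "perfect servant" is an assumed basal metabolic power, not a number taken from the abstract.

```python
# Back-of-envelope arithmetic from the seminar exercise described above.
WATTS_PER_HEP = 100            # assumed continuous metabolic power of one "perfect servant"
optimum_heps = 15              # per-capita budget the seminar participants arrive at
optimum_watts = optimum_heps * WATTS_PER_HEP
print(optimum_watts)           # 1500 W, the quoted optimum per-capita power budget

us_turnover = 10 * optimum_watts          # "almost ten times the rate he knows to be reasonable"
print(us_turnover / WATTS_PER_HEP)        # ~150 HEPs, the same order as the "close to 200" figure
```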
Zhang, Liangmao; Wei, Caidi; Zhang, Hui; Song, Mingwei
2017-10-01
The typical environmental endocrine disruptor nonylphenol is becoming an increasingly common pollutant in both fresh and salt water; it compromises the growth and development of many aquatic organisms. As yet, water quality criteria with respect to nonylphenol pollution have not been established in China. Here, the predicted "no effect concentration" of nonylphenol was derived from an analysis of the species sensitivity distribution covering a range of species mainly native to China, as a means of quantifying the ecological risk of nonylphenol in surface fresh water. The resulting model, based on the log-logistic distribution, proved to be robust; the minimum sample sizes required for generating a stable estimate of HC5 were 12 for acute toxicity and 13 for chronic toxicity. The criteria maximum concentration and criteria continuous concentration were, respectively, 18.49 µg L⁻¹ and 1.85 µg L⁻¹. Among the 24 sites surveyed, two were associated with a high ecological risk (risk quotient >1) and 12 with a moderate ecological risk (risk quotient >0.1). The potentially affected fraction ranged from 0.008% to 24.600%. The analysis provides a theoretical basis for both short- and long-term risk assessments with respect to nonylphenol, and also a means to quantify the risk to aquatic ecosystems. Copyright © 2017 Elsevier Ltd. All rights reserved.
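A minimal sketch of the species-sensitivity-distribution calculation follows, assuming a log-logistic SSD fitted as a logistic distribution on log10-transformed toxicity values. The toxicity data and the site concentration are invented for illustration, and the criteria derivation in the paper involves additional steps beyond the HC5.

```python
import numpy as np
from scipy import stats

# Made-up acute toxicity values (e.g. LC50s, ug/L) for a set of species.
lc50 = np.array([45, 80, 120, 150, 210, 300, 420, 600, 850, 1200, 1800, 2500], dtype=float)

# Log-logistic SSD: a logistic distribution fitted to log10-transformed concentrations.
loc, scale = stats.logistic.fit(np.log10(lc50))

# HC5: the concentration expected to protect 95% of species (5th percentile of the SSD).
hc5 = 10 ** stats.logistic.ppf(0.05, loc, scale)

mec = 20.0                                           # ug/L, hypothetical measured concentration
rq = mec / hc5                                       # risk quotient
paf = stats.logistic.cdf(np.log10(mec), loc, scale)  # potentially affected fraction
print(f"HC5 = {hc5:.1f} ug/L, RQ = {rq:.2f}, PAF = {100 * paf:.2f}%")
```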
Satellite broadcasting system study
NASA Technical Reports Server (NTRS)
1972-01-01
The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum cost system are shown graphically. Topics discussed include: main line control program, ground segment model, space segment model, cost models, and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.
[Medical image segmentation based on the minimum variation snake model].
Zhou, Changxiong; Yu, Shenglin
2007-02-01
It is difficult for the traditional parametric active contour (snake) model to deal with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force that incorporates information from the foreground and background regions. It drives the curve to evolve under the criterion of minimum variation between the foreground and background regions. Experiments have shown that the proposed model is robust to the placement of the initial contour and can segment weak-edge medical images automatically. In addition, segmentation tests on noisy medical images filtered by a curvature-flow filter, which preserves edge features, show a significant effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Qili; Institute of Robotics and Automatic Information System, Nankai University, Tianjin 300071; Shirinzadeh, Bijan
2015-07-28
A novel weighing method for cells with spherical and other regular shapes is proposed in this paper. In this method, the relationship between the cell mass and the minimum aspiration pressure needed to immobilize the cell (referred to as the minimum immobilization pressure) is derived for the first time according to static theory. Based on this relationship, a robotic cell weighing process is established using a traditional micro-injection system. Experimental results on porcine oocytes demonstrate that the proposed method is able to weigh cells at an average speed of 16.3 s/cell and with a success rate of more than 90%. The derived cell mass and density are in accordance with those reported in other published results. The experimental results also demonstrate that this method is able to quantitatively detect less than 1% variation of the porcine oocyte mass. It can be conducted with a pair of traditional micropipettes and a commercial pneumatic micro-injection system, and is expected to perform robotic operations on batches of cells. At present, the minimum resolution of the proposed method for measuring the cell mass is 1.25 × 10⁻¹⁵ kg. These advantages make it very appropriate for quantifying the amount of material injected into or removed from cells in biological applications, such as nuclear enucleations and embryo microinjections.
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
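The paper's formulation uses fuzzy random returns aggregated with the drastic t-norm; as a simpler crisp point of reference only, the sketch below solves the classical minimum-variance portfolio in closed form with an illustrative covariance matrix. It is not the fuzzy-stochastic problem described above.

```python
import numpy as np

# Crisp minimum-variance benchmark: minimise w' C w subject to sum(w) = 1,
# solved in closed form as w = C^-1 1 / (1' C^-1 1). Covariance is illustrative.
C = np.array([[0.040, 0.006, 0.010],
              [0.006, 0.090, 0.012],
              [0.010, 0.012, 0.160]])
ones = np.ones(C.shape[0])
w = np.linalg.solve(C, ones)
w /= ones @ w                     # normalise so the weights sum to one
print("weights:", w, "portfolio variance:", float(w @ C @ w))
```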
The Potential Effects of Minimum Wage Changes on Naval Accessions
2017-03-01
The thesis explains how a price floor affects the market's demand for labor and uses the two-sector and search models to demonstrate how the minimum wage market correlates to military accessions. Finally, the report examines studies that show the… (Figure captions from the source: "…that Derives from a Price Floor" and "Price Floor below the Market"; source: "Price Floor" (n.d.).)
Flow convergence caused by a salinity minimum in a tidal channel
Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey
2006-01-01
Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. (3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.
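A minimal sketch of the kind of 1D advection calculation described in model (1) is given below; the grid, velocity, and salinity profile are illustrative placeholders, not the Mare Island Strait configuration used in the study.

```python
import numpy as np

# 1D first-order upwind advection of a salinity field with a local minimum near a
# channel junction, carried landward on the flood tide.
nx, dx, dt = 200, 50.0, 10.0                 # 10 km channel, 50 m cells, 10 s steps
x = np.arange(nx) * dx
s = 20.0 + 5.0 * np.tanh((x - 5000.0) / 1500.0)     # ambient along-channel salinity (psu)
s -= 4.0 * np.exp(-((x - 2000.0) / 500.0) ** 2)     # local salinity minimum at the junction
u = 0.5                                              # flood-tide velocity, m/s (landward, u > 0)

for _ in range(int(3600 / dt)):              # advect for one hour of flood tide
    s[1:] -= u * dt / dx * (s[1:] - s[:-1])  # upwind difference, stable for u*dt/dx < 1

print("salinity minimum after 1 h is near x =", x[np.argmin(s)], "m")
```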
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain) based on the successive application of two models. The first is a stochastic model, an autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperatures (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperatures within a month. Three ARIMA models were identified for the analyzed time series, with a seasonal period of one year. They share the same seasonal behavior (a differenced moving average model) and differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and a combined autoregressive and moving average model (Model 3). At the same time, the results indicate that the minimum daily temperature (tdmin) at the meteorological stations studied followed a normal distribution each month, with a standard deviation that was very similar across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. Applying Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentation (MAPA) is gratefully acknowledged.
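A minimal sketch of the two-step forecast follows: an ARIMA-type forecast is assumed to supply next month's mean of daily minima, and frost days are then counted from a normal distribution of daily minima around that mean. The temperature, spread, and month length are placeholders, not Madrid station values.

```python
from scipy import stats

# Step 1 (assumed done): an ARIMA model forecasts next month's mean of daily minima.
tminav_forecast = 1.8      # deg C, e.g. from a statsmodels ARIMA .forecast()
sigma_month = 3.2          # deg C, month-specific spread of daily minima at the station
days_in_month = 31

# Step 2: daily minima ~ Normal(tminav, sigma); a frost day occurs when tdmin < 0 deg C.
p_frost = stats.norm.cdf(0.0, loc=tminav_forecast, scale=sigma_month)
expected_fd = days_in_month * p_frost
print(f"P(frost on a given day) = {p_frost:.2f}, expected frost days = {expected_fd:.1f}")
```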
NASA Astrophysics Data System (ADS)
Venturini, M.
2016-06-01
Using a 1D steady-state free-space coherent synchrotron radiation (CSR) model, we identify a special design setting for a triple-bend isochronous achromat that yields vanishing emittance growth from CSR. When a more refined CSR model with transient effects is included in the analysis, numerical simulations show that the main effect of the transients is to shift the emittance growth minimum slightly, with the minimum changing only modestly.
NASA Astrophysics Data System (ADS)
Hyun, J. Y.; Yang, Y. C. E.; Tidwell, V. C.; Macknick, J.
2017-12-01
Modeling human behaviors and decisions in water resources management is a challenging issue due to its complexity and uncertain characteristics, which are affected by both internal factors (such as a stakeholder's beliefs about external information) and external factors (such as future policies and weather/climate forecasts). Stakeholders' decisions regarding how much water they need are usually not entirely rational in real-world cases, so it is not suitable to model their decisions with a centralized (top-down) approach that assumes everyone in a watershed follows the same order or pursues the same objective. Agent-based modeling (ABM) uses a decentralized (bottom-up) approach that allows each stakeholder to make his/her own decision based on his/her own objective and beliefs about the information acquired. In this study, we develop an ABM that incorporates the psychological human decision process through the theory of risk perception. The theory of risk perception quantifies uncertainties in human behaviors and decisions using two sequential methodologies: Bayesian inference and the cost-loss problem. The developed ABM is coupled with a regulation-based water system model, RiverWare (RW), to evaluate different human decision uncertainties in water resources management. The San Juan River Basin in New Mexico (Figure 1) is chosen as the case study area, and we define 19 major irrigation districts as water-use agents whose primary decision is the irrigated area on an annual basis. This decision is affected by three external factors: 1) the upstream precipitation forecast (potential amount of water availability), 2) violation of the downstream minimum flow (required to support ecosystems), and 3) enforcement of a shortage sharing plan (a policy currently undertaken in the region for drought years). Three beliefs (internal factors) that correspond to these three external factors are also considered in the modeling framework. The objective of this study is to use the two-way coupling between the ABM and RW to mimic how stakeholders' uncertain decisions, made through the theory of risk perception, affect local and basin-wide water use.
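The per-agent decision step can be sketched as a Bayesian belief update followed by a cost-loss threshold rule. The probabilities, costs, and losses below are illustrative assumptions, not calibrated San Juan Basin values, and the actual ABM couples many such agents to RiverWare.

```python
def bayes_update(prior, p_signal_given_shortage, p_signal_given_no_shortage):
    """Posterior P(shortage | signal), e.g. after seeing a dry upstream forecast."""
    num = p_signal_given_shortage * prior
    return num / (num + p_signal_given_no_shortage * (1.0 - prior))

belief = 0.3                               # agent's prior belief in a shortage year
belief = bayes_update(belief, 0.8, 0.25)   # the agent receives a dry precipitation forecast

cost = 120.0   # $/ha cost of protective action (e.g. reducing the irrigated area)
loss = 400.0   # $/ha loss if full acreage is planted and the shortage occurs
decision = "reduce irrigated area" if belief > cost / loss else "plant full acreage"
print(f"posterior belief = {belief:.2f}, decision: {decision}")
```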
46 CFR 160.047-4 - Construction.
Code of Federal Regulations, 2012 CFR
2012-10-01
... AF-1, CFM-1, and CFS-1. The buoyant pad inserts for Models AF-1, CFM-1, and CFS-1 buoyant vests shall...)—Distribution of Fibrous Glass in Buoyant Pad Inserts Model AF-1 (minimum) Model CFM-1 (minimum) Model CFS-1... and CFM-1 Each Models CKS-1 and CFS-1 Each Front pads 61/4 pounds ±1/4 pound 41/4 pounds ±1/4 pound 23...
46 CFR 160.047-4 - Construction.
Code of Federal Regulations, 2011 CFR
2011-10-01
... AF-1, CFM-1, and CFS-1. The buoyant pad inserts for Models AF-1, CFM-1, and CFS-1 buoyant vests shall...)—Distribution of Fibrous Glass in Buoyant Pad Inserts Model AF-1 (minimum) Model CFM-1 (minimum) Model CFS-1... and CFM-1 Each Models CKS-1 and CFS-1 Each Front pads 61/4 pounds ±1/4 pound 41/4 pounds ±1/4 pound 23...
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio
2007-10-01
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone that is equivalent to the entropy of entanglement and amenable to direct experimental measurement with linear optical schemes.
Appel, T; Bierhoff, E; Appel, K; von Lindern, J-J; Bergé, S; Niederhagen, B
2003-06-01
We did a morphometric analysis of 130 histological sections of basal cell carcinoma (BCC) of the face to find out whether morphometric variables in the structure of the nuclei of BCC cells could serve as predictors of the biological behaviour. We considered the following variables: maximum and minimum diameters, perimeter, nuclear area and five form factors that characterise and quantify the shape of a structure (axis ratio, shape factor, nuclear contour index, nuclear roundness and circumference ratio). We did a statistical analysis of primary and recurring tumours and four histology-based groups (multifocal superficial BCCs, nodular BCCs, sclerosing BCCs and miscellaneous forms) using a two-sided t test for independent samples. Multifocal superficial BCCs showed significantly smaller values for the directly measured variables (maximum and minimum diameters, perimeter and nuclear area). Morphometry could not distinguish between primary and recurring tumours.
Thrust Direction Optimization: Satisfying Dawn's Attitude Agility Constraints
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2013-01-01
The science objective of NASA's Dawn Discovery mission is to explore the giant asteroid Vesta and the dwarf planet Ceres, the two largest members of the main asteroid belt. Dawn successfully completed its orbital mission at Vesta. The Dawn spacecraft has complex, difficult to quantify, and in some cases severe limitations on its attitude agility. The low-thrust transfers between science orbits at Vesta required very complex time varying thrust directions due to the strong and complex gravity and various science objectives. Traditional low-thrust design objectives (like minimum change in velocity or minimum transfer time) often result in thrust direction time evolutions that cannot be accommodated with the attitude control system available on Dawn. This paper presents several new optimal control objectives, collectively called thrust direction optimization that were developed and turned out to be essential to the successful navigation of Dawn at Vesta.
Power optimal single-axis articulating strategies
NASA Technical Reports Server (NTRS)
Kumar, Renjith R.; Heck, Michael L.
1991-01-01
Power-optimal single-axis articulating PV array motion for Space Station Freedom is investigated. The motivation is to eliminate one of the articulating joints to reduce Station costs. Optimal (maximum power) Beta tracking is addressed for local vertical local horizontal (LVLH) and non-LVLH attitudes. Effects of intra-array shadowing are also presented. Maximum power availability during Beta tracking is compared to full sun tracking and optimal alpha tracking. The results are quantified in orbital and yearly minimum, maximum, and average values of power availability.
Quantifying Hydrogen Bond Cooperativity in Water: VRT Spectroscopy of the Water Tetramer
NASA Astrophysics Data System (ADS)
Cruzan, J. D.; Braly, L. B.; Liu, Kun; Brown, M. G.; Loeser, J. G.; Saykally, R. J.
1996-01-01
Measurement of the far-infrared vibration-rotation tunneling spectrum of the perdeuterated water tetramer is described. Precisely determined rotational constants and relative intensity measurements indicate a cyclic quasi-planar minimum energy structure, which is in agreement with recent ab initio calculations. The O-O separation deduced from the data indicates a rapid exponential convergence to the ordered bulk value with increasing cluster size. Observed quantum tunneling splittings are interpreted in terms of hydrogen bond rearrangements connecting two degenerate structures.
Minimum resolvable power contrast model
NASA Astrophysics Data System (ADS)
Qian, Shuai; Wang, Xia; Zhou, Jingjing
2018-01-01
Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, neither used alone nor assessed jointly can they intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect the comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human observer. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and accounts for attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF, the Minimum Resolvable Radiation Performance Contrast is obtained. Finally, the detection probability model of MRP is given.
Dagdeviren, Omur E
2018-08-03
The effect of surface disorder, load, and velocity on friction between a single asperity contact and a model surface is explored with one-dimensional and two-dimensional Prandtl-Tomlinson (PT) models. We show that there are fundamental physical differences between the predictions of the one-dimensional and two-dimensional models. The one-dimensional model predicts a monotonic increase in friction and energy dissipation with load, velocity, and surface disorder. However, a two-dimensional PT model, which is expected to approximate a tip-sample system more realistically, reveals a non-monotonic trend: friction is insensitive to surface disorder and roughness in the wearless friction regime. The two-dimensional model shows that surface disorder starts to dominate friction and energy dissipation when the tip and the sample interact predominantly deep into the repulsive regime. Our numerical calculations indicate that tracking the minimum energy path and slip-stick motion are two competing effects that determine the load, velocity, and surface disorder dependence of friction. In the two-dimensional model, the single asperity can follow the minimum energy path in the wearless regime; however, with increasing load and sliding velocity, slip-stick movement dominates the dynamic motion and increases friction by impeding tracking of the minimum energy path. In contrast, in the one-dimensional PT model the single asperity cannot escape to the energy minimum because its motion is constrained, and it therefore shows only a trivial dependence of friction on load, velocity, and surface disorder. Our computational analyses clarify the physical differences between the predictions of the one-dimensional and two-dimensional models and open new avenues for using disordered surfaces in low-energy-dissipation applications in the wearless friction regime.
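A minimal overdamped 1D Prandtl-Tomlinson sketch is given below to make the stick-slip picture concrete. The corrugation, stiffness, damping, and velocity values are generic choices, not the parameters used in the paper, and the 2D disorder effects discussed above are not represented.

```python
import numpy as np

# Overdamped 1D Prandtl-Tomlinson model: a tip dragged by a spring across a
# sinusoidal surface potential; the time-averaged spring force is the friction force.
a = 0.25e-9              # lattice period (m)
U0 = 0.5 * 1.602e-19     # corrugation amplitude (0.5 eV, in J)
k = 5.0                  # lateral spring stiffness (N/m)
gamma = 2.0e-6           # damping coefficient (kg/s)
v = 1.0e-6               # support velocity (m/s)
dt = 2.0e-9              # time step (s)
n_steps = 1_000_000

x, forces = 0.0, np.empty(n_steps)
for i in range(n_steps):
    x_support = v * i * dt
    f_surface = -(2.0 * np.pi * U0 / a) * np.sin(2.0 * np.pi * x / a)
    f_spring = k * (x_support - x)
    x += dt * (f_surface + f_spring) / gamma    # overdamped (no inertia) Euler update
    forces[i] = f_spring

print("mean friction force:", 1e9 * forces[n_steps // 2:].mean(), "nN")
```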
Variability of space climate and its extremes with successive solar cycles
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas
2016-04-01
Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of these observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series for each time interval over which the threshold is exceeded. The distribution of burst sizes is two-component, with a crossover in behaviour at thresholds ≈ 1000 nT. Above this threshold, we find [1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has a functional form which is exponential. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables is changing between and across different solar cycles. This methodology [2] estimates how different quantiles of the distribution, or equivalently, how the return times of events of a given size, are changing. [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett., 42, doi:10.1002/2015GL066277 [2] Chapman, S. C., D. A. Stainforth, N. W. Watkins (2013), On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287
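The burst construction described above can be sketched directly: threshold the series and sum the excess over each contiguous exceedance interval. The synthetic log-normal series below stands in for AE index data; only the 1000 nT threshold follows the crossover value quoted in the abstract.

```python
import numpy as np

# Burst construction: threshold the series and sum the excess over each contiguous
# exceedance interval. A synthetic log-normal series stands in for 1-min AE data.
rng = np.random.default_rng(0)
ae = rng.lognormal(mean=5.5, sigma=1.0, size=100_000)   # synthetic "AE" values, nT
threshold = 1000.0                                      # nT

above = ae > threshold
padded = np.concatenate(([False], above, [False]))
edges = np.flatnonzero(padded[1:] != padded[:-1])       # indices where runs start and end
starts, ends = edges[0::2], edges[1::2]

burst_sizes = np.array([np.sum(ae[s:e] - threshold) for s, e in zip(starts, ends)])
print(f"{burst_sizes.size} bursts, mean burst size {burst_sizes.mean():.0f} nT-samples")
```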
Bowden, Joseph D; Bauerle, William L
2008-11-01
We investigated which parameters required by the MAESTRA model were most important in predicting leaf-area-based transpiration in 5-year-old trees of five deciduous hardwood species: yoshino cherry (Prunus x yedoensis Matsum.), red maple (Acer rubrum L. 'Autumn Flame'), trident maple (Acer buergeranum Miq.), Japanese flowering cherry (Prunus serrulata Lindl. 'Kwanzan') and London plane-tree (Platanus x acerifolia (Ait.) Willd.). Transpiration estimated from sap flow measured by the heat balance method in branches and trunks was compared with estimates predicted by the three-dimensional transpiration, photosynthesis and absorbed radiation model, MAESTRA. MAESTRA predicted species-specific transpiration from the interactions of leaf-level physiology and spatially explicit micro-scale weather patterns in a mixed deciduous hardwood plantation on a 15-min time step. The monthly differences between modeled mean daily transpiration estimates and measured mean daily sap flow ranged from a 35% underestimation for Acer buergeranum in June to a 25% overestimation for A. rubrum in July. The sensitivity of the modeled transpiration estimates was examined across a 30% error range for seven physiological input parameters. The minimum value of stomatal conductance as incident solar radiation tends to zero was determined to be eight times more influential than all other physiological model input parameters. This work quantified the major factors that influence modeled species-specific transpiration and confirmed the ability to scale leaf-level physiological attributes to whole-crown transpiration on a species-specific basis.
A Comparison of the Fit of Empirical Data to Two Latent Trait Models. Report No. 92.
ERIC Educational Resources Information Center
Hutten, Leah R.
Goodness of fit of raw test score data was compared using two latent trait models: the Rasch model and the Birnbaum three-parameter logistic model. Data were taken from various achievement tests and the Scholastic Aptitude Test (Verbal). A minimum sample size of 1,000 was required, and the minimum test length was 40 items. Results indicated that…
Vos, Susan S; Sabus, Ashley; Seyfer, Jennifer; Umlah, Laura; Gross-Advani, Colleen; Thompson-Oster, Jackie
2018-05-01
Objective. To illustrate a method for integrating co-curricular activities, quantify co-curricular activities, and evaluate student perception of achievement of goals. Methods. Throughout a longitudinal course, students engaged in self-selected, co-curricular activities in three categories: professional service, leadership, and community engagement. Hours were documented online with minimum course requirements. Students reflected on experiences and assessed goal attainment. Assignments were reviewed by faculty and feedback was given to each student. Results. From 2010 to 2016, there were 29,341 co-curricular hours documented by 756 students. The most popular events were attending pharmacy organization meetings and participating in immunization clinics. More than half of the students agreed they were able to meet all of their professional goals (mix of career and course goals) while 70% indicated goals were challenging to meet. Conclusion. This method for integrating co-curricular activities using a continuing professional development model demonstrates a sustainable system for promoting professional development through experience and self-reflection.
NASA Astrophysics Data System (ADS)
Sugla, R.; Norris, R. D.; Lyakov, J.
2017-12-01
In his book The Nature of the Stratigraphical Record, Derek Ager made the remarkable observation that the geologic eras of the Phanerozoic could be identified by coloration patterns of carbonate sediments in outcrops. This observation, however, was never quantified nor explained by Ager. Here, we present a record of spectral reflectance of carbonate sediments collected from sections worldwide. While sediment color is governed by many factors, the global and abrupt shifts in sediment color across depositional environments observed here may represent a shift towards rising oxygen concentrations. Such a shift would explain changes in the redox state of iron or in organic matter concentrations, both factors that influence sediment color. This record is combined with a simple model of the physiological requirements of marine fauna in order to infer the minimum atmospheric pO2 needed to support life. Results indicate a strong threshold change in the Earth system near the Triassic-Jurassic boundary, potentially reflecting rising atmospheric oxygen concentrations not previously recorded.
Resonant photoacoustic detection of NO2 traces with a Q-switched green laser
NASA Astrophysics Data System (ADS)
Slezak, Verónica; Codnia, Jorge; Peuriot, Alejandro L.; Santiago, Guillermo
2003-01-01
Resonant photoacoustic detection of NO2 traces by means of a high-repetition-rate pulsed green laser is presented. The resonator is a cylindrical Pyrex glass cell with a measured Q factor of 380 for the first radial mode in air at atmospheric pressure. The system is calibrated with known mixtures in dry air, and a minimum detectable volume concentration of 50 parts in 10⁹ is obtained (S/N = 1). Its sensitivity allows one to detect and quantify NO2 traces in the exhaust gases of cars. Prior to this, gas adsorption and desorption on the walls and changes in the sample composition are analyzed in order to minimize errors in the determination of NO2 content upon application of the extractive method. The efficiency of catalytic converters of several models of automobiles is studied, and the NO2 concentrations in samples from exhausts of different types of engine (gasoline, diesel, and methane gas) at idling operation are measured.
Non-laser-based scanner for three-dimensional digitization of historical artifacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Daniel V.; Baldwin, Kevin C.; Duncan, Donald D
2007-05-20
A 3D scanner, based on incoherent illumination techniques, and associated data-processing algorithms are presented that can be used to scan objects at lateral resolutions ranging from 5 to 100 µm (or more) and depth resolutions of approximately 2 µm. The scanner was designed with the specific intent to scan cuneiform tablets but can be utilized for other applications. Photometric stereo techniques are used to obtain both a surface normal map and a parameterized model of the object's bidirectional reflectance distribution function. The normal map is combined with height information, gathered by structured light techniques, to form a consistent 3D surface. Data from Lambertian and specularly diffuse spherical objects are presented and used to quantify the accuracy of the techniques. Scans of a cuneiform tablet are also presented. All presented data are at a lateral resolution of 26.8 µm as this is approximately the minimum resolution deemed necessary to accurately represent cuneiform.
Jet-like correlations with direct-photon and neutral-pion triggers at √{sNN} = 200 GeV
NASA Astrophysics Data System (ADS)
Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; Aggarwal, M. M.; Ahammed, Z.; Alekseev, I.; Anderson, D. M.; Aparin, A.; Arkhipkin, D.; Aschenauer, E. C.; Ashraf, M. U.; Attri, A.; Averichev, G. S.; Bai, X.; Bairathi, V.; Bellwied, R.; Bhasin, A.; Bhati, A. K.; Bhattarai, P.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Bordyuzhin, I. G.; Bouchet, J.; Brandenburg, J. D.; Brandin, A. V.; Bunzarov, I.; Butterworth, J.; Caines, H.; Calderón de la Barca Sánchez, M.; Campbell, J. M.; Cebra, D.; Chakaberia, I.; Chaloupka, P.; Chang, Z.; Chatterjee, A.; Chattopadhyay, S.; Chen, X.; Chen, J. H.; Cheng, J.; Cherney, M.; Christie, W.; Contin, G.; Crawford, H. J.; Das, S.; De Silva, L. C.; Debbe, R. R.; Dedovich, T. G.; Deng, J.; Derevschikov, A. A.; di Ruzza, B.; Didenko, L.; Dilks, C.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Du, C. M.; Dunkelberger, L. E.; Dunlop, J. C.; Efimov, L. G.; Engelage, J.; Eppley, G.; Esha, R.; Evdokimov, O.; Eyser, O.; Fatemi, R.; Fazio, S.; Federic, P.; Fedorisin, J.; Feng, Z.; Filip, P.; Fisyak, Y.; Flores, C. E.; Fulek, L.; Gagliardi, C. A.; Garand, D.; Geurts, F.; Gibson, A.; Girard, M.; Greiner, L.; Grosnick, D.; Gunarathne, D. S.; Guo, Y.; Gupta, S.; Gupta, A.; Guryn, W.; Hamad, A. I.; Hamed, A.; Haque, R.; Harris, J. W.; He, L.; Heppelmann, S.; Heppelmann, S.; Hirsch, A.; Hoffmann, G. W.; Horvat, S.; Huang, T.; Huang, B.; Huang, X.; Huang, H. Z.; Huck, P.; Humanic, T. J.; Igo, G.; Jacobs, W. W.; Jang, H.; Jentsch, A.; Jia, J.; Jiang, K.; Judd, E. G.; Kabana, S.; Kalinkin, D.; Kang, K.; Kauder, K.; Ke, H. W.; Keane, D.; Kechechyan, A.; Khan, Z. H.; Kikoła, D. P.; Kisel, I.; Kisiel, A.; Kochenda, L.; Koetke, D. D.; Kosarzewski, L. K.; Kraishan, A. F.; Kravtsov, P.; Krueger, K.; Kumar, L.; Lamont, M. A. C.; Landgraf, J. M.; Landry, K. D.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, J. H.; Li, X.; Li, Y.; Li, C.; Li, W.; Li, X.; Lin, T.; Lisa, M. A.; Liu, F.; Liu, Y.; Ljubicic, T.; Llope, W. J.; Lomnitz, M.; Longacre, R. S.; Luo, X.; Luo, S.; Ma, G. L.; Ma, L.; Ma, Y. G.; Ma, R.; Magdy, N.; Majka, R.; Manion, A.; Margetis, S.; Markert, C.; Matis, H. S.; McDonald, D.; McKinzie, S.; Meehan, K.; Mei, J. C.; Miller, Z. W.; Minaev, N. G.; Mioduszewski, S.; Mishra, D.; Mohanty, B.; Mondal, M. M.; Morozov, D. A.; Mustafa, M. K.; Nandi, B. K.; Nasim, Md.; Nayak, T. K.; Nigmatkulov, G.; Niida, T.; Nogach, L. V.; Noh, S. Y.; Novak, J.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Oh, K.; Okorokov, V. A.; Olvitt, D.; Page, B. S.; Pak, R.; Pan, Y. X.; Pandit, Y.; Panebratsev, Y.; Pawlik, B.; Pei, H.; Perkins, C.; Pile, P.; Pluta, J.; Poniatowska, K.; Porter, J.; Posik, M.; Poskanzer, A. M.; Pruthi, N. K.; Przybycien, M.; Putschke, J.; Qiu, H.; Quintero, A.; Ramachandran, S.; Ray, R. L.; Reed, R.; Ritter, H. G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Ruan, L.; Rusnak, J.; Rusnakova, O.; Sahoo, N. R.; Sahu, P. K.; Sakrejda, I.; Salur, S.; Sandweiss, J.; Sarkar, A.; Schambach, J.; Scharenberg, R. P.; Schmah, A. M.; Schmidke, W. B.; Schmitz, N.; Seger, J.; Seyboth, P.; Shah, N.; Shahaliev, E.; Shanmuganathan, P. V.; Shao, M.; Sharma, A.; Sharma, B.; Sharma, M. K.; Shen, W. Q.; Shi, Z.; Shi, S. S.; Shou, Q. Y.; Sichtermann, E. P.; Sikora, R.; Simko, M.; Singha, S.; Skoby, M. J.; Smirnov, D.; Smirnov, N.; Solyst, W.; Song, L.; Sorensen, P.; Spinka, H. M.; Srivastava, B.; Stanislaus, T. D. S.; Stepanov, M.; Stock, R.; Strikhanov, M.; Stringfellow, B.; Sumbera, M.; Summa, B.; Sun, Y.; Sun, Z.; Sun, X. M.; Surrow, B.; Svirida, D. N.; Tang, Z.; Tang, A. 
H.; Tarnowsky, T.; Tawfik, A.; Thäder, J.; Thomas, J. H.; Timmins, A. R.; Tlusty, D.; Todoroki, T.; Tokarev, M.; Trentalange, S.; Tribble, R. E.; Tribedy, P.; Tripathy, S. K.; Tsai, O. D.; Ullrich, T.; Underwood, D. G.; Upsal, I.; Van Buren, G.; van Nieuwenhuizen, G.; Vandenbroucke, M.; Varma, R.; Vasiliev, A. N.; Vertesi, R.; Videbæk, F.; Vokal, S.; Voloshin, S. A.; Vossen, A.; Wang, H.; Wang, F.; Wang, Y.; Wang, J. S.; Wang, G.; Wang, Y.; Webb, J. C.; Webb, G.; Wen, L.; Westfall, G. D.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, Y.; Xiao, Z. G.; Xie, W.; Xie, G.; Xin, K.; Xu, N.; Xu, Q. H.; Xu, Z.; Xu, J.; Xu, H.; Xu, Y. F.; Yang, S.; Yang, Y.; Yang, C.; Yang, Y.; Yang, Y.; Yang, Q.; Ye, Z.; Ye, Z.; Yi, L.; Yip, K.; Yoo, I.-K.; Yu, N.; Zbroszczyk, H.; Zha, W.; Zhang, Z.; Zhang, J. B.; Zhang, S.; Zhang, S.; Zhang, X. P.; Zhang, Y.; Zhang, J.; Zhang, J.; Zhao, J.; Zhong, C.; Zhou, L.; Zhu, X.; Zoulkarneeva, Y.; Zyzak, M.; STAR Collaboration
2016-09-01
Azimuthal correlations of charged hadrons with direct-photon (γdir) and neutral-pion (π0) trigger particles are analyzed in central Au+Au and minimum-bias p + p collisions at √{sNN} = 200 GeV in the STAR experiment. The charged-hadron per-trigger yields at mid-rapidity from central Au+Au collisions are compared with p + p collisions to quantify the suppression in Au+Au collisions. The suppression of the away-side associated-particle yields per γdir trigger is independent of the transverse momentum of the trigger particle (pTtrig), whereas the suppression is smaller at low transverse momentum of the associated charged hadrons (pTassoc). Within uncertainty, similar levels of suppression are observed for γdir and π0 triggers as a function of zT (≡ pTassoc/pTtrig). The results are compared with energy-loss-inspired theoretical model predictions. Our studies support previous conclusions that the lost energy reappears predominantly at low transverse momentum, regardless of the trigger energy.
A phase transition in energy-filtered RNA secondary structures.
Han, Hillary S W; Reidys, Christian M
2012-10-01
In this article we study the effect of energy parameters on minimum free energy (mfe) RNA secondary structures. Employing a simplified combinatorial energy model that is only dependent on the diagram representation and is not sequence-specific, we prove the following dichotomy result. Mfe structures derived via the Turner energy parameters contain only finitely many complex irreducible substructures, and just minor parameter changes produce a class of mfe structures that contain a large number of small irreducibles. We localize the exact point at which the distribution of irreducibles experiences this phase transition from a discrete limit to a central limit distribution and, subsequently, put our result into the context of quantifying the effect of sparsification of the folding of these respective mfe structures. We show that the sparsification of realistic mfe structures leads to a constant time and space reduction, and that the sparsification of the folding of structures with modified parameters leads to a linear time and space reduction. We, furthermore, identify the limit distribution at the phase transition as a Rayleigh distribution.
NASA Technical Reports Server (NTRS)
Chan, William Machado; Pandya, Shishir Ashok; Rogers, Stuart E.
2013-01-01
Recent developments in the automation of the X-ray approach to hole-cutting in overset grids are further improved. A fast method to compute an auxiliary wall-distance function used in providing a first estimate of the hole boundary location is introduced. Subsequent iterations lead to automatically created hole boundaries with a spatially variable offset from the minimum hole. For each hole boundary location, an averaged cell-attribute measure over all fringe points is used to quantify the compatibility between the fringe points and their respective donor cells. The sensitivity of aerodynamic loads to different hole boundary locations and cell-attribute compatibilities is investigated using four test cases: an isolated re-entry capsule, a two-rocket configuration, the AIAA 4th Drag Prediction Workshop Common Research Model (CRM), and the D8 "Double Bubble" subsonic aircraft. When best practices in hole boundary treatment are followed, only small variations in integrated loads and convergence rates are observed for different hole boundary locations.
Detection of a dynamic topography signal in last interglacial sea-level records
Austermann, Jacqueline; Mitrovica, Jerry X.; Huybers, Peter; Rovere, Alessio
2017-01-01
Estimating minimum ice volume during the last interglacial based on local sea-level indicators requires that these indicators are corrected for processes that alter local sea level relative to the global average. Although glacial isostatic adjustment is generally accounted for, global scale dynamic changes in topography driven by convective mantle flow are generally not considered. We use numerical models of mantle flow to quantify vertical deflections caused by dynamic topography and compare predictions at passive margins to a globally distributed set of last interglacial sea-level markers. The deflections predicted as a result of dynamic topography are significantly correlated with marker elevations (>95% probability) and are consistent with construction and preservation attributes across marker types. We conclude that a dynamic topography signal is present in the elevation of last interglacial sea-level records and that the signal must be accounted for in any effort to determine peak global mean sea level during the last interglacial to within an accuracy of several meters. PMID:28695210
The evolving Planck mass in classically scale-invariant theories
NASA Astrophysics Data System (ADS)
Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.
2017-04-01
We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.
NASA Astrophysics Data System (ADS)
Pepiot, Perrine; Liang, Youwen; Newale, Ashish; Pope, Stephen
2016-11-01
A pre-partitioned adaptive chemistry (PPAC) approach recently developed and validated in the simplified framework of a partially-stirred reactor is applied to the simulation of turbulent flames using a LES/particle PDF framework. The PPAC approach was shown to simultaneously provide significant savings in CPU and memory requirements, two major limiting factors in LES/particle PDF. The savings are achieved by providing each particle in the PDF method with a specialized reduced representation and kinetic model adjusted to its changing composition. Both representation and model are identified efficiently from a pre-determined list using a low-dimensional binary-tree search algorithm, thereby keeping the run-time overhead associated with the adaptive strategy to a minimum. The Sandia D flame is used as benchmark to quantify the performance of the PPAC algorithm in a turbulent combustion setting. In particular, the CPU and memory benefits, the distribution of the various representations throughout the computational domain, and the relationship between the user-defined error tolerances used to derive the reduced representations and models and the actual errors observed in LES/PDF are characterized. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences under Award Number DE-FG02-90ER14128.
Hamlett, Christopher A E; Shirtcliffe, Neil J; McHale, Glen; Ahn, Sujung; Bryant, Robert; Doerr, Stefan H; Newton, Michael I
2011-11-15
The wettability of soil is of great importance for plants and soil biota, and in determining the risk of preferential flow, surface runoff, flooding, and soil erosion. The molarity of ethanol droplet (MED) test is widely used for quantifying the severity of water repellency in soils that show reduced wettability, and it is assumed to be independent of soil particle size. The minimum ethanol concentration at which droplet penetration occurs within a short time (≤ 10 s) provides an estimate of the initial advancing contact angle at which spontaneous wetting is expected. In this study, we test the assumption of particle size independence using a simple model of soil, represented by layers of small-diameter (~0.2-2 mm) beads, which predicts the effect of changing the bead radius in the top layer on capillary-driven imbibition. Experimental results using a three-layer bead system show broad agreement with the model and demonstrate a dependence of the MED test on particle size. The results show that the critical initial advancing contact angle for penetration can be considerably less than 90° and varies with particle size, demonstrating that a key assumption currently used in the MED testing of soil is not necessarily valid.
Babu, Dinesh; Crandall, Philip G; Johnson, Casey L; O'Bryan, Corliss A; Ricke, Steven C
2013-12-01
Growers and processors of USDA certified organic foods are in need of suitable organic antimicrobials. The purpose of the research reported here was to develop and test natural antimicrobials derived from an all-natural by-product, organic pecan shells. Unroasted and roasted organic pecan shells were subjected to solvent free extraction to produce antimicrobials that were tested against Listeria spp. and L. monocytogenes serotypes to determine the minimum inhibitory concentrations (MIC) of antimicrobials. The effectiveness of pecan shell extracts were further tested using a poultry skin model system and the growth inhibition of the Listeria cells adhered onto the skin model were quantified. The solvent free extracts of pecan shells inhibited Listeria strains at MICs as low as 0.38%. The antimicrobial effectiveness tests on a poultry skin model exhibited nearly a 2 log reduction of the inoculated cocktail mix of Listeria strains when extracts of pecan shell powder were used. The extracts also produced greater than a 4 log reduction of the indigenous spoilage bacteria on the chicken skin. Thus, the pecan shell extracts may prove to be very effective alternative antimicrobials against food pathogens and supplement the demand for effective natural antimicrobials for use in organic meat processing. © 2013 Institute of Food Technologists®
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems because of several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of the MDTD. We propose direct and equivalent methods for calculating the MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, whose results indicate that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) models effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.
Are There Long-Run Effects of the Minimum Wage?
Sorkin, Isaac
2014-01-01
An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
From complexity to reality: providing useful frameworks for defining systems of care.
Levison-Johnson, Jody; Wenz-Gross, Melodie
2010-02-01
Because systems of care are not uniform across communities, there is a need to better document the process of system development, define the complexity, and describe the development of the structures, processes, and relationships within communities engaged in system transformation. By doing so, we begin to identify the necessary and sufficient components that, at minimum, move us from usual care within a naturally occurring system to a true system of care. Further, by documenting and measuring the degree to which key components are operating, we may be able to identify the most successful strategies in creating system reform. The theory of change and logic model offer a useful framework for communities to begin the adaptive work necessary to effect true transformation. Using the experience of two system of care communities, this new definition and the utility of a theory of change and logic model framework for defining local system transformation efforts will be discussed. Implications for the field, including the need to further examine the natural progression of systems change and to create quantifiable measures of transformation, will be raised as new challenges for the evolving system of care movement.
Quantifying the dynamic wing morphing of hovering hummingbird
Nakata, Toshiyuki; Kitamura, Ikuo; Tanaka, Hiroto
2017-01-01
Animal wings are lightweight and flexible; hence, during flapping flight their shapes change. It has been known that such dynamic wing morphing reduces aerodynamic cost in insects, but the consequences in vertebrate flyers, particularly birds, are not well understood. We have developed a method to reconstruct a three-dimensional wing model of a bird from the wing outline and the feather shafts (rachides). The morphological and kinematic parameters can be obtained using the wing model, and the numerical or mechanical simulations may also be carried out. To test the effectiveness of the method, we recorded the hovering flight of a hummingbird (Amazilia amazilia) using high-speed cameras and reconstructed the right wing. The wing shape varied substantially within a stroke cycle. Specifically, the maximum and minimum wing areas differed by 18%, presumably due to feather sliding; the wing was bent near the wrist joint, towards the upward direction and opposite to the stroke direction; positive upward camber and the ‘washout’ twist (monotonic decrease in the angle of incidence from the proximal to distal wing) were observed during both half-strokes; the spanwise distribution of the twist was uniform during downstroke, but an abrupt increase near the wrist joint was found during upstroke. PMID:28989736
NASA Technical Reports Server (NTRS)
Chin, Mian; Thornton, Donald; Bandy, Alan; Huebert, Barry; Einaudi, Franco (Technical Monitor)
2000-01-01
The impact of anthropogenic activities on the SO2 and sulfate aerosol levels over the Pacific region is examined with the Georgia Tech/Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model. We focus on the analysis of data from the NASA Pacific Exploratory Missions (PEM) over the western North Pacific and the tropical Pacific. These missions include PEM-West A in September-October 1991, when the Asian outflow was at its minimum but the upper atmosphere was heavily influenced by the Pinatubo volcanic eruption; PEM-West B in March-April 1994, when the Asian outflow was at its maximum; and PEM-Tropics A in August-September, in a region relatively free of direct anthropogenic influences. Specifically, we will examine the relative importance of anthropogenic, volcanic and biogenic sources to the SO2 and sulfate concentrations over the Pacific, and quantify the processes controlling the distributions of SO2 and sulfate in both the boundary layer and the free troposphere. We will also assess the global impact of SO2 emission in Asia on the sulfate aerosol loading.
DIRECTIONAL CULTURAL CHANGE BY MODIFICATION AND REPLACEMENT OF MEMES
Cardoso, Gonçalo C.; Atwell, Jonathan W.
2017-01-01
Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations—modification of memes—to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. PMID:20722726
Primordial black holes from polynomial potentials in single field inflation
NASA Astrophysics Data System (ADS)
Hertzberg, Mark P.; Yamada, Masaki
2018-04-01
Within canonical single field inflation models, we provide a method to reverse engineer and reconstruct the inflaton potential from a given power spectrum. This is not only a useful tool to find a potential from observational constraints, but also gives insight into how to generate a large amplitude spike in density perturbations, especially those that may lead to primordial black holes (PBHs). In accord with other works, we find that the usual slow-roll conditions need to be violated in order to generate a significant spike in the spectrum. We find that a way to achieve a very large amplitude spike in single field models is for the classical roll of the inflaton to overshoot a local minimum during inflation. We provide an example of a quintic polynomial potential that implements this idea and leads to the observed spectral index, observed amplitude of fluctuations on large scales, significant PBH formation on small scales, and is compatible with other observational constraints. We quantify how much fine-tuning is required to achieve this in a family of random polynomial potentials, which may be useful to estimate the probability of PBH formation in the string landscape.
NASA Astrophysics Data System (ADS)
Jensen, Christian H.; Nerukh, Dmitry; Glen, Robert C.
2008-03-01
We investigate the sensitivity of a Markov model with states and transition probabilities obtained from clustering a molecular dynamics trajectory. We have examined a 500 ns molecular dynamics trajectory of the peptide valine-proline-alanine-leucine in explicit water. The sensitivity is quantified by varying the boundaries of the clusters and investigating the resulting variation in transition probabilities and the average transition time between states. In this way, we represent the effect of clustering using different clustering algorithms. It is found that in terms of the investigated quantities, the peptide dynamics described by the Markov model is sensitive to the clustering; in particular, the average transition times are found to vary up to 46%. Moreover, inclusion of nonphysical sparsely populated clusters can lead to serious errors of up to 814%. In the investigation, the time step used in the transition matrix is determined by the minimum time scale on which the system behaves approximately Markovian. This time step is found to be about 100 ps. It is concluded that the description of peptide dynamics with transition matrices should be performed with care, and that using standard clustering algorithms to obtain states and transition probabilities may not always produce reliable results.
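For readers unfamiliar with this workflow, the sketch below illustrates how a transition matrix and simple residence-time estimates can be obtained from a sequence of cluster labels. It is a minimal illustration, not the authors' implementation; the label sequence, number of states and lag are all hypothetical.

```python
import numpy as np

def transition_matrix(labels, lag):
    """Estimate a row-stochastic transition matrix from a sequence of
    cluster labels, counting transitions separated by `lag` frames."""
    n_states = labels.max() + 1
    counts = np.zeros((n_states, n_states))
    for i, j in zip(labels[:-lag], labels[lag:]):
        counts[i, j] += 1
    # Normalise each row; rows with no counts default to a uniform distribution
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, 1.0 / n_states),
                     where=row_sums > 0)

# Hypothetical clustered trajectory: one label per saved frame (e.g. 100 ps apart)
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=5000)

T = transition_matrix(labels, lag=1)
# Rough mean residence time of state k (in frames): 1 / (1 - T[k, k])
residence = 1.0 / (1.0 - np.diag(T))
print(T.round(3))
print(residence.round(2))
```

Re-running this with slightly shifted cluster boundaries (i.e. relabelled frames) and comparing the resulting matrices is the kind of sensitivity check the abstract describes.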
Practical implementation of channelized hotelling observers: effect of ROI size
NASA Astrophysics Data System (ADS)
Ferrero, Andrea; Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-03-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
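As a rough illustration of the CHO methodology discussed above, the following sketch builds a small bank of Gabor channels, computes channel outputs for signal-present and signal-absent ROIs, and evaluates the Hotelling detectability index. It is a generic CHO implementation on simulated white-noise ROIs, not the authors' code; the ROI size, channel frequencies and orientations, and the low-contrast disk signal are assumptions chosen only for illustration.

```python
import numpy as np

def gabor_channel(size, freq, theta, sigma):
    """One Gabor channel (cosine phase) centred in a square ROI of `size` pixels."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2] + 0.5
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def cho_detectability(signal_rois, noise_rois, channels):
    """Channelized Hotelling observer detectability index d' from two ROI sets."""
    U = np.stack([c.ravel() for c in channels], axis=1)   # pixels x channels
    vs = np.stack([r.ravel() @ U for r in signal_rois])   # channel outputs (signal present)
    vn = np.stack([r.ravel() @ U for r in noise_rois])    # channel outputs (signal absent)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))               # pooled channel covariance
    dmean = vs.mean(0) - vn.mean(0)
    w = np.linalg.solve(S, dmean)                          # Hotelling template
    return np.sqrt(dmean @ w)

# Illustrative example: 64x64 ROIs containing a low-contrast disk in white noise
rng = np.random.default_rng(1)
size = 64
yy, xx = np.mgrid[:size, :size]
disk = 0.4 * (((xx - size / 2) ** 2 + (yy - size / 2) ** 2) < 8 ** 2)
noise_rois = rng.normal(0, 1, (200, size, size))
signal_rois = rng.normal(0, 1, (200, size, size)) + disk

channels = [gabor_channel(size, f, t, sigma=size / 8)
            for f in (1 / 32, 1 / 16, 1 / 8)
            for t in np.linspace(0, np.pi, 4, endpoint=False)]
print("d' =", cho_detectability(signal_rois, noise_rois, channels))
```

Shrinking `size` while keeping the lowest channel frequency fixed reproduces, in spirit, the truncation effect on low-frequency filters that the abstract identifies as the driver of the minimum usable ROI size.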
Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J
2017-09-01
Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology could affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when the minimum duration was <0.5 s and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when the minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as the minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
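The effort-counting logic described above, a velocity (or acceleration) threshold combined with a minimum duration, can be sketched in a few lines. This is an illustrative reimplementation on a synthetic 10 Hz velocity trace, not the processing chain used in the study; the thresholds mirror those quoted in the abstract, while the trace itself is hypothetical and unfiltered.

```python
import numpy as np

def count_efforts(signal, threshold, min_duration_s, hz=10):
    """Count efforts where `signal` stays at or above `threshold` for at
    least `min_duration_s` seconds (data sampled at `hz` Hz)."""
    above = np.asarray(signal) >= threshold
    # Rising/falling edges of the boolean trace mark effort starts and ends
    edges = np.diff(above.astype(int), prepend=0, append=0)
    starts, ends = np.flatnonzero(edges == 1), np.flatnonzero(edges == -1)
    durations = (ends - starts) / hz
    return int(np.sum(durations >= min_duration_s))

# Hypothetical 10 Hz velocity trace (m/s) covering roughly 90 minutes of play
rng = np.random.default_rng(2)
velocity = np.clip(rng.normal(2.0, 2.0, size=10 * 60 * 90), 0, 9)

for dur in (0.1, 0.5, 0.9):
    print(f"min duration {dur:.1f} s:",
          "HSR", count_efforts(velocity, 4.17, dur),
          "sprint", count_efforts(velocity, 7.00, dur))
```

Sweeping `min_duration_s` and swapping in different pre-filters on `velocity` reproduces the kind of sensitivity the study quantifies.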
Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P
2011-01-01
Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496
The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight.
Komro, Kelli A; Livingston, Melvin D; Markowitz, Sara; Wagenaar, Alexander C
2016-08-01
To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28-364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year.
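To make the modelling approach concrete, the sketch below runs a two-way fixed-effects (difference-in-differences style) regression of a birth outcome on the gap between the state and federal minimum wage, with standard errors clustered by state. The panel is simulated and the variable names (wage_gap, lbw_rate) are hypothetical; it is a schematic of the estimation strategy, not a reproduction of the study's data or covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-month panel: outcome is a low-birth-weight rate and
# wage_gap is the state minimum wage minus the federal minimum (0 if below).
rng = np.random.default_rng(3)
states, months = 50, 120
df = pd.DataFrame({
    "state": np.repeat(np.arange(states), months),
    "month": np.tile(np.arange(months), states),
})
df["year"] = df["month"] // 12
df["wage_gap"] = rng.uniform(0, 3, len(df)).round(2)
df["lbw_rate"] = 8.0 - 0.12 * df["wage_gap"] + rng.normal(0, 0.5, len(df))

# Two-way fixed effects specification with state-clustered standard errors
model = smf.ols("lbw_rate ~ wage_gap + C(state) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print("effect of a $1 increase:", round(result.params["wage_gap"], 3),
      "+/-", round(result.bse["wage_gap"], 3))
```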
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
NASA Astrophysics Data System (ADS)
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In the study of the configuration of self-equipped tankers in a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading areas and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the model is solved using optimization software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical value for tanker management in the railway transportation of dangerous goods in a chemical logistics park.
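A minimum cost flow formulation of this kind can be prototyped directly with a graph library. The sketch below sets up a toy network of loading and unloading areas with arc capacities, per-tanker movement costs and node demands, and solves it with networkx; all node names, capacities and costs are hypothetical and stand in for the data the paper takes from the logistics park.

```python
import networkx as nx

# Supply nodes are loading areas holding empty tankers (negative demand),
# demand nodes are unloading areas that need tankers. Numbers are illustrative.
G = nx.DiGraph()
G.add_node("load_A", demand=-6)   # 6 tankers available
G.add_node("load_B", demand=-4)
G.add_node("unload_1", demand=5)  # 5 tankers required
G.add_node("unload_2", demand=5)

# capacity = handling capability of the arc, weight = cost per tanker moved
G.add_edge("load_A", "unload_1", capacity=4, weight=2)
G.add_edge("load_A", "unload_2", capacity=4, weight=5)
G.add_edge("load_B", "unload_1", capacity=3, weight=4)
G.add_edge("load_B", "unload_2", capacity=3, weight=1)

flow = nx.min_cost_flow(G)
print("tanker assignment:", flow)
print("total cost:", nx.cost_of_flow(G, flow))
```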
Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson
2008-01-01
We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
NASA Technical Reports Server (NTRS)
Harris, Franklin D.
1996-01-01
The helicopter industry is vigorously pursuing development of civil tiltrotors. One key to efficient high speed performance of this rotorcraft is prop-rotor performance. Of equal, if not greater, importance is assurance that the flight envelope is free of aeroelastic instabilities well beyond currently envisioned cruise speeds. This latter condition requires study at helical tip Mach numbers well in excess of 1.0. Two 1940's 'supersonic' propeller experiments conducted by NACA have provided an immensely valuable data bank with which to study prop-rotor behavior at transonic and supersonic helical tip Mach numbers. Very accurate 'blades alone' data were obtained by using a nearly infinite hub. Tabulated data were recreated from the many thrust and power figures and are included in two Appendices to this report. This data set is exceptionally well suited to re-evaluating classical blade element theories as well as evolving computational fluid dynamic (CFD) analyses. A limited comparison of one propeller's experimental results to a modern rotorcraft CFD code is made. This code, referred to as TURNS, gives very encouraging results. Detailed analysis of the performance data from both propellers is provided in Appendix A. This appendix quantifies the minimum power required to produce usable prop-rotor thrust. The dependence of minimum profile power on Reynolds number is quantified. First order compressibility power losses are quantified as well, and a first approximation to the design airfoil thickness ratio needed to avoid compressibility losses is provided. Appendix A's results are applied to study high speed civil tiltrotor cruise performance. Predicted tiltrotor performance is compared to two turboprop commercial transports. The comparison shows that there is no fundamental aerodynamic reason why the rotorcraft industry could not develop civil tiltrotor aircraft which have competitive cruise performance with today's regional turboprop airlines. Recommendations for future study that will ensure efficient prop-rotor performance to well beyond 400 knots are given.
A pore-network model for foam formation and propagation in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharabaf, H.; Yortsos, Y.C.
1996-12-31
We present a pore-network model, based on a pores-and-throats representation of the porous medium, to simulate the generation and mobilization of foams in porous media. The model allows for various parameters or processes, empirically treated in current models, to be quantified and interpreted. Contrary to previous works, we also consider a dynamic (invasion) in addition to a static process. We focus on the properties of the displacement, the onset of foam flow and mobilization, the foam texture and the sweep efficiencies obtained. The model simulates an invasion process, in which gas invades a porous medium occupied by a surfactant solution. The controlling parameter is the snap-off probability, which in turn determines the foam quality for various size distributions of pores and throats. For the front to advance, the applied pressure gradient needs to be sufficiently high to displace a series of lamellae along a minimum capillary resistance (threshold) path. We determine this path using a novel algorithm. The fraction of the flowing lamellae, X_f (and, consequently, the fraction of the trapped lamellae, X_t), which are currently empirical, are also calculated. The model allows the delineation of conditions under which high-quality (strong) or low-quality (weak) foams form. In either case, the sweep efficiencies in displacements in various media are calculated. In particular, the invasion by foam of low-permeability layers during injection in a heterogeneous system is demonstrated.
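The minimum capillary resistance (threshold) path concept can be illustrated with a standard shortest-path computation over a lattice pore network, where each throat carries a lamella-mobilization threshold and the path minimizing the summed thresholds sets the pressure drop needed for the foam front to advance. The sketch below is an assumption-laden stand-in (square lattice, lognormal thresholds, Dijkstra search), not the paper's novel algorithm.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
n = 20  # 20 x 20 lattice of pores

# Square-lattice pore network; each throat gets a random capillary
# (lamella mobilization) threshold drawn from a lognormal distribution.
G = nx.grid_2d_graph(n, n)
for u, v in G.edges:
    G.edges[u, v]["threshold"] = rng.lognormal(mean=0.0, sigma=0.5)

# Virtual inlet/outlet nodes connected to the left and right faces at zero cost
G.add_edges_from((("inlet", (0, j)) for j in range(n)), threshold=0.0)
G.add_edges_from((((n - 1, j), "outlet") for j in range(n)), threshold=0.0)

# Minimum total-resistance path: the smallest summed threshold that an applied
# pressure drop must overcome to mobilize a connected train of lamellae.
resistance, path = nx.single_source_dijkstra(G, "inlet", "outlet", weight="threshold")
print("threshold-path resistance:", round(resistance, 3), "| pores on path:", len(path))
```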
NASA Astrophysics Data System (ADS)
Lovell, Mark R.; Zavala, Jesús; Vogelsberger, Mark; Shen, Xuejian; Cyr-Racine, Francis-Yan; Pfrommer, Christoph; Sigurdson, Kris; Boylan-Kolchin, Michael; Pillepich, Annalisa
2018-07-01
We contrast predictions for the high-redshift galaxy population and reionization history between cold dark matter (CDM) and an alternative self-interacting dark matter model based on the recently developed ETHOS framework that alleviates the small-scale CDM challenges within the Local Group. We perform the highest resolution hydrodynamical cosmological simulations (a 36 Mpc³ volume with gas cell mass of ~10⁵ M_⊙ and minimum gas softening of ~180 pc) within ETHOS to date - plus a CDM counterpart - to quantify the abundance of galaxies at high redshift and their impact on reionization. We find that ETHOS predicts galaxies with higher ultraviolet (UV) luminosities than their CDM counterparts and a faster build-up of the faint end of the UV luminosity function. These effects, however, make the optical depth to reionization less sensitive to the power spectrum cut-off: the ETHOS model differs from the CDM τ value by only 10 per cent and is consistent with Planck limits if the effective escape fraction of UV photons is 0.1-0.5. We conclude that current observations of high-redshift luminosity functions cannot differentiate between ETHOS and CDM models, but deep James Webb Space Telescope surveys of strongly lensed, inherently faint galaxies have the potential to test non-CDM models that offer attractive solutions to CDM's Local Group problems.
Ground Reaction Forces of the Lead and Trail Limbs when Stepping Over an Obstacle
Bovonsunthonchai, Sunee; Khobkhun, Fuengfa; Vachalathiti, Roongtiwa
2015-01-01
Background: Precise force generation and absorption during stepping over different obstacles need to be quantified for task accomplishment. This study aimed to quantify how the lead limb (LL) and trail limb (TL) generate and absorb forces while stepping over obstacles of various heights. Material/Methods: Thirteen healthy young women participated in the study. Force data were collected from 2 force plates when participants stepped over obstacles. Two limbs (right LL and left TL) and 4 conditions of stepping (no obstacle, stepping over 5 cm, 20 cm, and 30 cm obstacle heights) were tested for main effect and interaction effect by 2-way ANOVA. Paired t-test and 1-way repeated-measure ANOVA were used to compare differences of variables between limbs and among stepping conditions, respectively. Results: Main effects of limb were found in first peak vertical force, minimum vertical force, propulsive peak force, and propulsive impulse. Significant main effects of condition were found in time to minimum force, time to the second peak force, time to propulsive peak force, first peak vertical force, braking peak force, propulsive peak force, vertical impulse, braking impulse, and propulsive impulse. Interaction effects of limb and condition were found in first peak vertical force, propulsive peak force, braking impulse, and propulsive impulse. Conclusions: Adaptations of force generation in the LL and TL were found to reflect adaptability to an altered external environment during stepping in healthy young adults. PMID:26169293
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi
2016-03-01
This paper describes a brand new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects two minimum bounding boxes, of the left and right breast regions respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach can measure the volume and quantify the distribution of the CT numbers of mammary gland regions successfully. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially on already acquired scans.
The Urban Forest Effects (UFORE) model: quantifying urban forest structure and functions
David J. Nowak; Daniel E. Crane
2000-01-01
The Urban Forest Effects (UFORE) computer model was developed to help managers and researchers quantify urban forest structure and functions. The model quantifies species composition and diversity, diameter distribution, tree density and health, leaf area, leaf biomass, and other structural characteristics; hourly volatile organic compound emissions (emissions that...
Quantifying the Role of Agriculture and Urbanization in the Nitrogen Cycle across Texas
NASA Astrophysics Data System (ADS)
Helper, L. C.; Yang, Z.
2011-12-01
Over-enrichment of nutrients throughout coastal areas has been a growing problem as population growth has enhanced agricultural and industrial processes. Enhanced nitrogen (N) fluxes from land to coast continue to be the result of over-fertilization and pollution deposition. This over-enrichment of nutrients has led to eutrophication and hypoxic conditions in coastal environments. Global estimates indicate rivers export 48 Tg N yr⁻¹ to coastal zones, and regionally North America exports 7.2 Tg N yr⁻¹. These exports are primarily from anthropogenic N inputs (Boyer et al. 2006). Currently the U.S. is home to the second largest hypoxic zone in the world, the Mississippi River Basin, and previous work by Howarth et al. (2002) suggests much of the over-enrichment of N is a result of agricultural practices. The aforementioned work has focused on global and large regional estimates; however, an inventory has not been conducted of the full scope of N sources along the Gulf of Mexico. This study was conducted along the Gulf, through the state of Texas, in order to quantify all sources of N in a region which contains a large precipitation gradient, three major metropolitan areas, and one of the top livestock industries in the United States. Nitrogen inputs from fertilizer, livestock, and crop fixation were accounted for and totaled 0.91 Tg N for the year 2007. Using estimates of leaching rates from Howarth et al. (2002), riverine export of N was a minimum of 0.18 Tg for that year. Atmospheric deposition inputs were also analyzed using the Weather Research and Forecasting model with online chemistry (WRF-Chem) and were found to be significantly smaller than those from agriculture. The developed regional high-resolution gridded N budget is now available to be used as N input to next-generation land surface models for nutrient leaching and riverine transport modeling. Ultimately, this comprehensive dataset will help better understand the full pathways of anthropogenic influences on coastal systems.
Ecological covariates based predictive model of malaria risk in the state of Chhattisgarh, India.
Kumar, Rajesh; Dash, Chinmaya; Rani, Khushbu
2017-09-01
Malaria being an endemic disease in the state of Chhattisgarh and an ecologically dependent mosquito-borne disease, this study was intended to identify the ecological covariates of malaria risk in districts of the state and to build a suitable predictive model based on those predictors, which could assist in developing a weather-based early warning system. This secondary-data-based analysis used one-month-lagged district-level malaria positive cases as the response variable and ecological covariates as independent variables, which were tested with fixed-effect panelled negative binomial regression models. Interactions among the covariates were explored using two-way factorial interactions in the model. Although malaria risk in the state possesses perennial characteristics, higher parasitic incidence was observed during the rainy and winter seasons. The univariate analysis indicated that malaria incidence risk was statistically significantly associated with rainfall, maximum humidity, minimum temperature, wind speed, and forest cover (p < 0.05). The most efficient predictive model included forest cover [IRR 1.033 (1.024-1.042)], maximum humidity [IRR 1.016 (1.013-1.018)], and a two-way factorial interaction between the district-specific average monthly minimum temperature and the monthly minimum temperature; the main effect of monthly minimum temperature was statistically significant [IRR 1.44 (1.231-1.695)], whereas the interaction term had a protective effect [IRR 0.982 (0.974-0.990)] against malaria infection. Forest cover, maximum humidity, minimum temperature and wind speed emerged as potential covariates to be used in predictive models of malaria risk in the state, which could be efficiently used for early warning systems in the state.
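The model family named above, a fixed-effect negative binomial (count) regression reported as incidence-rate ratios, can be sketched with statsmodels. The panel below is simulated and the covariate names and coefficients are hypothetical; the point is only to show how district fixed effects, ecological covariates and IRRs fit together, not to reproduce the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical district-month panel; `cases` stand in for one-month-lagged
# malaria counts and the covariates for the ecological predictors.
rng = np.random.default_rng(5)
n = 27 * 120  # 27 districts x 120 months
df = pd.DataFrame({
    "district": np.repeat(np.arange(27), 120),
    "forest_cover": np.repeat(rng.uniform(10, 60, 27), 120),
    "max_humidity": rng.uniform(40, 95, n),
    "min_temp": rng.uniform(8, 26, n),
})
mu = np.exp(0.5 + 0.03 * df.forest_cover + 0.015 * df.max_humidity)
df["cases"] = rng.poisson(mu)

# Negative binomial GLM with district fixed effects (a simple stand-in for
# the "fixed-effect panelled" specification described in the abstract)
model = smf.glm("cases ~ forest_cover + max_humidity + min_temp + C(district)",
                data=df, family=sm.families.NegativeBinomial(alpha=1.0))
result = model.fit()
irr = np.exp(result.params)   # incidence-rate ratios, as reported in the study
print(irr[["forest_cover", "max_humidity", "min_temp"]].round(3))
```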
Land motion due to 20th century mass balance of the Greenland Ice Sheet
NASA Astrophysics Data System (ADS)
Kjeldsen, K. K.; Khan, S. A.
2017-12-01
Quantifying the contribution from ice sheets and glaciers to past sea level change is of great value for understanding sea level projections into the 21st century. However, quantifying and understanding past changes are equally important, in particular understanding the impact in the near-field where the signal is highest. We assess the impact of 20th century mass balance of the Greenland Ice Sheet on land motion using results from Kjeldsen et al, 2015. These results suggest that the ice sheet on average lost a minimum of 75 Gt/yr, but also show that the mass balance was highly spatial- and temporal variable, and moreover that on a centennial time scale changes were driven by a decreasing surface mass balance. Based on preliminary results we discuss land motion during the 20th century due to mass balance changes and the driving components surface mass balance and ice dynamics.
The Minimum Wage, Restaurant Prices, and Labor Market Structure
ERIC Educational Resources Information Center
Aaronson, Daniel; French, Eric; MacDonald, James
2008-01-01
Using store-level and aggregated Consumer Price Index data, we show that restaurant prices rise in response to minimum wage increases under several sources of identifying variation. We introduce a general model of employment determination that implies minimum wage hikes cause prices to rise in competitive labor markets but potentially fall in…
Grappling with Weight Cutting. The Wisconsin Wrestling Minimum Weight Project.
ERIC Educational Resources Information Center
Oppliger, Robert A.; And Others
1995-01-01
In response to a new state rule, the Wisconsin Minimum Weight Project curtails weight cutting among high school wrestlers. The project uses skinfold testing to determine a minimum competitive weight and nutrition education to help the wrestler diet safely. It serves as a model for other states and other sports. (Author/SM)
NASA Astrophysics Data System (ADS)
Ji, Qixing; Widner, Brittany; Jayakumar, Amal; Ward, Bess; Mulholland, Margaret
2017-04-01
In coastal upwelling regions, high surface productivity leads to high export and intense remineralization consuming oxygen. This, in combination with slow ventilation, creates oxygen minimum zones (OMZ) in eastern boundary regions of the ocean, such as the one off the Peruvian coast in the Eastern Tropical South Pacific. The OMZ is characterized by a layer of high nitrite concentration coinciding with water column anoxia. Sharp oxygen gradients are located above and below the anoxic layer (upper and lower oxyclines). Thus, the OMZ harbors diverse microbial metabolisms, several of which involve the production and consumption of nitrite. The sources of nitrite are ammonium oxidation and nitrate reduction. The sinks of nitrite include anaerobic ammonium oxidation (anammox), canonical denitrification and nitrite oxidation to nitrate. To quantify the sources and sinks of nitrite in the Peruvian OMZ, incubation experiments with 15N-labeled substrates (ammonium, nitrite and nitrate) were conducted on a research cruise in January 2015. The direct measurements of instantaneous nitrite production and consumption rates were compared with ambient nitrite concentrations to evaluate the turnover rate of nitrite in the OMZ. The distribution of nitrite in the water column showed a two-peak structure. A primary nitrite maximum (up to 0.5 μM) was located in the upper oxycline. A secondary nitrite maximum (up to 10 μM) was found in the anoxic layer. A nitrite concentration minimum occurred at the oxic-anoxic interface just below the upper oxycline. For the sources of nitrite, highest rates of ammonium oxidation and nitrate reduction were detected in the upper oxycline, where both nitrite and oxygen concentrations were low. Lower rates of nitrite production were detected within the layer of secondary nitrite maximum. For the sinks of nitrite, the rates of anammox, denitrification and nitrite oxidation were the highest just below the oxic-anoxic interface. Low nitrite consumption rates were also detected within the layer of the secondary nitrite maximum. The imbalances between nitrite production and consumption rates help to explain the distribution of nitrite in the water column. The primary nitrite maximum in the upper oxycline is consistent with ammonium oxidation exceeding nitrite oxidation. Nitrite consumption rates exceeding rates of nitrite production result in the low nitrite concentration at the oxic-anoxic interface. Within the secondary nitrite maximum in the anoxic layer, production and consumption of nitrite are equivalent within measurement error. These low turnover rates suggest the stability of the nitrite pool in the secondary nitrite maximum over long time scales (decades to millennial). These data could be implemented into biogeochemical models to decipher the origin and the evolution of nitrite distribution in the OMZs.
ERIC Educational Resources Information Center
Katikireddi, Srinivasa Vittal; Hilton, Shona; Bond, Lyndal
2016-01-01
The minimum unit pricing (MUP) alcohol policy debate has been informed by the Sheffield model, a study which predicts impacts of different alcohol pricing policies. This paper explores the Sheffield model's influences on the policy debate by drawing on 36 semi-structured interviews with policy actors who were involved in the policy debate.…
High resolution urban morphology data for urban wind flow modeling
NASA Astrophysics Data System (ADS)
Cionco, Ronald M.; Ellefsen, Richard
The application of urban forestry methods and technologies to a number of practical problems can be further enhanced by the use and incorporation of localized, high resolution wind and temperature fields into their analysis methods. The numerical simulation of these micrometeorological fields will represent the interactions and influences of urban structures, vegetation elements, and variable terrain as an integral part of the dynamics of an urban domain. Detailed information of the natural and man-made components that make up the urban area is needed to more realistically model meteorological fields in urban domains. Simulating high resolution wind and temperatures over and through an urban domain utilizing detailed morphology data can also define and quantify local areas where urban forestry applications can contribute to better solutions. Applications such as the benefits of planting trees for shade purposes can be considered, planned, and evaluated for their impact on conserving energy and cooling costs as well as the possible reconfiguration or removal of trees and other barriers for improved airflow ventilation and similar processes. To generate these fields, a wind model must be provided, as a minimum, the location, type, height, structural silhouette, and surface roughness of these components, in order to account for the presence and effects of these land morphology features upon the ambient airflow. The morphology of Sacramento, CA has been characterized and quantified in considerable detail primarily for wind flow modeling, simulation, and analyses, but can also be used for improved meteorological analyses, urban forestry, urban planning, and other urban related activities. Morphology methods previously developed by Ellefsen are applied to the Sacramento scenario with a high resolution grid of 100 m × 100 m. The Urban Morphology Scheme defines Urban Terrain Zones (UTZ) according to how buildings and other urban elements are structured and placed with respect to each other. The urban elements within the 100 m × 100 m cells (one hectare) are further described and digitized as building height, building footprint (in percent), reflectivity of its roof, pitched roof or flat, building's long axis orientation, footprint of impervious surface and its reflectivity, footprint of canopy elements, footprint of woodlots, footprint of grass area, and footprint of water surface. A variety of maps, satellite images, low level aerial photographs, and street level photographs are the raw data used to quantify these urban properties. The final digitized morphology database resides in a spreadsheet ready for use on ordinary personal computers.
Agatonovic-Kustrin, S; Loescher, Christine M; Singh, Ragini
2013-01-01
Echinacea preparations are among the most popular herbal remedies worldwide. Although Echinacea is generally credited with immune-enhancing activity, its effectiveness is highly dependent on the Echinacea species, the part of the plant used, the age of the plant, its location and the method of extraction. The aim of this study was to investigate the capacity of an artificial neural network (ANN) to analyse thin-layer chromatography (TLC) chromatograms as fingerprint patterns for quantitative estimation of three phenylpropanoid markers (chicoric acid, chlorogenic acid and echinacoside) in commercial Echinacea products. By applying samples with different weight ratios of marker compounds to the system, a database of chromatograms was constructed. One hundred and one signal intensities in each of the TLC chromatograms were correlated to the amounts of applied echinacoside, chlorogenic acid and chicoric acid using an ANN. The developed ANN correlation was used to quantify the amounts of the three marker compounds in commercial Echinacea formulations. Minimum quantifiable levels of 63, 154 and 98 ng and limits of detection of 19, 46 and 29 ng were established for echinacoside, chlorogenic acid and chicoric acid, respectively. A novel method for quality control of herbal products, based on TLC separation, high-resolution digital plate imaging and ANN data analysis, has been developed. The proposed method can be adopted for routine evaluation of the phytochemical variability in Echinacea formulations available on the market. Copyright © 2012 John Wiley & Sons, Ltd.
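The core idea, mapping a fixed-length vector of track intensities to marker amounts with a small feed-forward network, can be sketched as follows. The chromatograms here are synthetic Gaussian bands and the network settings are arbitrary; this is an illustration of the regression setup (101 intensities in, three amounts out), not the calibration model built in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

def synthetic_chromatogram(amounts, n_points=101):
    """Toy densitometric track: three Gaussian bands whose heights scale with
    the applied amounts of the three marker compounds."""
    x = np.linspace(0, 1, n_points)
    centres, widths = (0.25, 0.50, 0.75), (0.04, 0.05, 0.04)
    signal = sum(a * np.exp(-(x - c) ** 2 / (2 * w ** 2))
                 for a, c, w in zip(amounts, centres, widths))
    return signal + rng.normal(0, 0.02, n_points)

# Calibration set: random weight ratios of the three markers
Y = rng.uniform(0.05, 1.0, size=(400, 3))
X = np.array([synthetic_chromatogram(y) for y in Y])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
ann.fit(X_tr, Y_tr)
print("R^2 on held-out tracks:", round(ann.score(X_te, Y_te), 3))
```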
Topography significantly influencing low flows in snow-dominated watersheds
NASA Astrophysics Data System (ADS)
Li, Qiang; Wei, Xiaohua; Yang, Xin; Giles-Hansen, Krysta; Zhang, Mingfang; Liu, Wenfei
2018-03-01
Watershed topography plays an important role in determining the spatial heterogeneity of ecological, geomorphological, and hydrological processes. Few studies have quantified the role of topography in various flow variables. In this study, 28 watersheds with snow-dominated hydrological regimes were selected with daily flow records from 1989 to 1996. These watersheds are located in the Southern Interior of British Columbia, Canada, and range in size from 2.6 to 1780 km². For each watershed, 22 topographic indices (TIs) were derived, including those commonly used in hydrology and other environmental fields. Flow variables include annual mean flow (Qmean), Q10%, Q25%, Q50%, Q75%, Q90%, and annual minimum flow (Qmin), where Qx% is defined as the daily flow that occurred each year at a given percentage (x). Factor analysis (FA) was first adopted to exclude some redundant or repetitive TIs. Then, multiple linear regression models were employed to quantify the relative contributions of TIs to each flow variable in each year. Our results show that topography plays a more important role in low flows (flow magnitudes ≤ Q75%) than high flows. However, the effects of TIs on different flow magnitudes are not consistent. Our analysis also determined five significant TIs: perimeter, slope length factor, surface area, openness, and terrain characterization index. These can be used to compare watersheds when low flow assessments are conducted, specifically in snow-dominated regions with the watershed size less than several thousand square kilometres.
NASA Astrophysics Data System (ADS)
Nicewonger, M. R.; Aydin, M.; Prather, M. J.; Saltzman, E. S.
2017-12-01
This study examines ethane (C2H6) and acetylene (C2H2) in polar ice cores in order to reconstruct variations in the atmospheric levels of these trace gases over the past 2,000 years. Both of these non-methane hydrocarbons are released from fossil fuel, biofuel, and biomass burning. Ethane, but not acetylene, is also emitted from natural geologic outgassing of hydrocarbons. In an earlier study, we reported ethane levels in Greenland and Antarctic ice cores showing roughly equal contributions from biomass burning and geologic emissions to preindustrial atmospheric ethane levels (Nicewonger et al., 2016). Here we introduce acetylene as an additional constraint to better quantify preindustrial variations in the emissions from these natural hydrocarbon sources. Here we present 30 new measurements of ethane and acetylene from the WDC-06A ice core from WAIS Divide and the newly drilled South Pole ice core (SPICECORE). Ethane results display a gradual decline from peak levels of 110 ppt at 1400 CE to a minimum of 60-80 ppt during 1700-1875 CE. Acetylene correlates with ethane (r2 > 0.4), dropping from peak levels of 35 ppt at 1400 CE to 15-20 ppt at 1875 CE. The covariance between the two trace gases implies that the observed changes are likely caused by decreasing emissions from low latitude biomass burning. We will discuss results from chemical transport modeling and sensitivity tests and the implications for the preindustrial ethane and acetylene budgets.
Inflight fuel tank temperature survey data
NASA Technical Reports Server (NTRS)
Pasion, A. J.
1979-01-01
Statistical summaries of the fuel and air temperature data for twelve different routes and for different aircraft models (B747, B707, DC-10 and DC-8) are given. The minimum fuel, total air and static air temperatures expected at a 0.3% probability are summarized in tabular form. Minimum fuel temperature extremes agreed with calculated predictions, and the minimum fuel temperature did not necessarily equal the minimum total air temperature, even for extreme-weather, long-range flights.
Secondary electric power generation with minimum engine bleed
NASA Technical Reports Server (NTRS)
Tagge, G. E.
1983-01-01
Secondary electric power generation with minimum engine bleed is discussed. Present and future jet engine systems are compared. The role of auxiliary power units is evaluated. Details of secondary electric power generation systems with and without auxiliary power units are given. Advanced bleed systems are compared with minimum bleed systems. A cost-of-ownership model is presented, along with the difference in the cost of ownership between a minimum bleed system and an advanced bleed system.
Li, Xiangyong; Rafaliya, N; Baki, M Fazle; Chaouch, Ben A
2017-03-01
Scheduling of surgeries in operating rooms under limited competing resources such as surgical and nursing staff, anesthesiologists, medical equipment, and recovery beds in surgical wards is a complicated process. A well-designed schedule should be concerned with the welfare of the entire system by allocating the available resources in an efficient and effective manner. In this paper, we develop an integer linear programming model, suited to multiple goals, for optimally scheduling elective surgeries based on the availability of surgeons and operating rooms over a time horizon. In particular, the model is concerned with the minimization of the following important goals: (1) the anticipated number of patients waiting for service; (2) the underutilization of operating room time; (3) the maximum expected number of patients in the recovery unit; and (4) the expected range (the difference between the maximum and minimum expected number) of patients in the recovery unit. We develop two goal programming (GP) models: a lexicographic GP model and a weighted GP model. The lexicographic GP model schedules operating rooms when various preemptive priority levels are given to these four goals. A numerical study is conducted to illustrate the optimal master-surgery schedule obtained from the models. The numerical results demonstrate that when the available number of surgeons and operating rooms is known without error over the planning horizon, the proposed models can produce good schedules, and that the priority levels and preference weights of the four goals affect the resulting schedules.
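A weighted goal-programming formulation of this type is straightforward to prototype with an open-source MIP solver. The sketch below covers only two of the four goals (patients left waiting and operating-room under-utilisation) over a small hypothetical instance; the surgical groups, durations, capacities and preference weights are all invented for illustration, and PuLP's bundled CBC solver stands in for whatever solver the authors used.

```python
import pulp

# Simplified weighted goal-programming sketch: three surgical groups and a
# 5-day horizon. All data are hypothetical.
groups = {"ortho": {"demand": 12, "duration": 2.0},
          "ent":   {"demand": 10, "duration": 1.5},
          "uro":   {"demand": 8,  "duration": 2.5}}
days = range(5)
or_hours_per_day = 16.0          # e.g. two rooms x 8 h
w_wait, w_under = 10.0, 1.0      # preference weights of the two goals

prob = pulp.LpProblem("surgery_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("cases", (groups, days), lowBound=0, cat="Integer")
under = pulp.LpVariable.dicts("under", days, lowBound=0)   # unused OR hours per day
wait = pulp.LpVariable.dicts("wait", groups, lowBound=0)   # patients left waiting

for d in days:
    used = pulp.lpSum(x[g][d] * groups[g]["duration"] for g in groups)
    prob += used <= or_hours_per_day                  # OR capacity
    prob += under[d] >= or_hours_per_day - used       # under-utilisation deviation
for g in groups:
    prob += wait[g] >= groups[g]["demand"] - pulp.lpSum(x[g][d] for d in days)

prob += (w_wait * pulp.lpSum(wait[g] for g in groups)
         + w_under * pulp.lpSum(under[d] for d in days))
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({g: [int(x[g][d].value()) for d in days] for g in groups})
```

A lexicographic variant would instead solve the goals in priority order, fixing each achieved goal value as a constraint before optimizing the next.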
The impact of a federal cigarette minimum pack price policy on cigarette use in the USA.
Doogan, Nathan J; Wewers, Mary Ellen; Berman, Micah
2018-03-01
Increasing cigarette prices reduces cigarette use. The US Food and Drug Administration has the authority to regulate the sale and promotion, and therefore the price, of tobacco products. This study examines the potential effect of federal minimum price regulation on cigarette sales in the USA. We used yearly state-level data from the Tax Burden on Tobacco and other sources to model per capita cigarette sales as a function of price. We used the fitted model to compare status quo sales with counterfactual scenarios in which a federal minimum price was set. The minimum price scenarios ranged from $0 to $12. The estimated price effect in our model was comparable with that found in the literature. Our counterfactual analyses suggested that the impact of a minimum price requirement could range from a minimal effect at the $4 level to a reduction of 5.7 billion packs sold per year and 10 million smokers at the $10 level. A federal minimum price policy has the potential to greatly benefit tobacco control and public health by uniformly increasing the price of cigarettes and by eliminating many price-reducing strategies currently available to both sellers and consumers. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
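The counterfactual step can be illustrated with a constant-elasticity demand fit: predict sales from price, then re-predict after imposing a price floor. The sketch below uses invented price and sales figures and a single-covariate model, whereas the study's model includes additional controls.

```python
import numpy as np

# Hypothetical state-level data: average pack price ($) and per-capita packs sold per year.
price = np.array([4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 8.0, 9.0])
sales = np.array([62.0, 58.0, 55.0, 51.0, 48.0, 45.0, 40.0, 36.0])

# Constant-elasticity demand: log(sales) = a + b * log(price); b is the price elasticity.
b, a = np.polyfit(np.log(price), np.log(sales), 1)

def predicted_sales(p):
    return np.exp(a + b * np.log(p))

# Counterfactual: impose a federal minimum price, e.g. $10 per pack.
floor = 10.0
counterfactual_price = np.maximum(price, floor)
baseline = predicted_sales(price)
with_floor = predicted_sales(counterfactual_price)

print(f"estimated price elasticity: {b:.2f}")
print(f"average per-capita reduction in packs sold: {np.mean(baseline - with_floor):.1f}")
```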
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists owing to their potential for accurate prediction of flood flows compared with conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still criticism that ANN point predictions lack reliability, since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with many degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. For a selected hydrograph in the validation data set, most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is taken as the forecast, peak flows are predicted with improved accuracy compared with traditional single-point-forecast ANNs.
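The two interval-quality criteria used in stage 2 (coverage of measured points and interval width) are simple to compute from an ensemble of ANN outputs. The sketch below shows that bookkeeping with placeholder arrays; the interval here is just the ensemble envelope, not the optimized interval described in the study.

```python
import numpy as np

def interval_metrics(ensemble, observed):
    """ensemble: (n_members, n_times) array of ANN predictions;
    observed: (n_times,) array of measured flows."""
    lower = ensemble.min(axis=0)          # lower bound of the prediction interval
    upper = ensemble.max(axis=0)          # upper bound of the prediction interval
    coverage = np.mean((observed >= lower) & (observed <= upper)) * 100.0
    avg_width = np.mean(upper - lower)    # e.g. 23.03 m3/s in the case study
    return coverage, avg_width

# Placeholder example with random numbers standing in for a calibrated ensemble.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, size=200)
ens = obs + rng.normal(0.0, 15.0, size=(50, 200))
print(interval_metrics(ens, obs))
```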
FUPSOL: Modelling the Future and Past Solar Influence on Earth Climate
NASA Astrophysics Data System (ADS)
Anet, J. G.; Rozanov, E.; Peter, T.
2012-04-01
Global warming is becoming one of the main threats to mankind. There is growing evidence that anthropogenic greenhouse gases have become the dominant factor since about 1970. At the same time, natural factors of climate change such as solar and volcanic forcings cannot be neglected on longer time scales. Despite growing scientific efforts over the last decades in both observations and simulations, the uncertainty of the solar contribution to past climate change has remained unacceptably high (IPCC, 2007), the reasons being, on the one hand, missing observations of solar irradiance prior to the satellite era and, on the other hand, the fact that most models so far have not included all processes relevant for solar-climate interactions. This project aims at elucidating the processes governing the effects of solar activity variations on Earth's climate. We use the state-of-the-art coupled atmosphere-ocean-chemistry-climate model (AOCCM) SOCOL (Schraner et al., 2008), developed in Switzerland by coupling the community Earth System Model (ESM) COSMOS distributed by the MPI for Meteorology (Hamburg, Germany) with a comprehensive atmospheric chemistry module. The model solves an extensive set of equations describing the dynamics of the atmosphere and ocean, radiative transfer, transport of species, their chemical transformations, cloud formation and the hydrological cycle. The intention is to show how past solar variations affected climate and how the decrease in solar forcing expected for the next decades will affect climate on global and regional scales. We will simulate the behavior of the global climate system during the Dalton minimum (1790-1830) and the first half of the 21st century with a series of multiyear ensemble experiments, using all known anthropogenic and natural climate forcings taken in different combinations to understand the effects of solar irradiance in different spectral regions and of particle precipitation variability. Furthermore, we will quantify the solar influence on global climate in the future by evaluating the simulations and using information from past analogs such as the Dalton minimum. In the end, the project aims at reducing the uncertainty of the solar contribution to past and future climate change, which has so far remained high despite many years of analyses of observational records and theoretical investigations with climate models of different complexity.
Practical implementation of Channelized Hotelling Observers: Effect of ROI size
Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-01-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO’s performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO’s performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies. PMID:28943699
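For readers unfamiliar with the underlying figure of merit, a minimal channelized-Hotelling computation is sketched below: channel outputs from signal-present and signal-absent ROIs are combined into a detectability index. The Gabor channelization step is omitted and the input arrays are placeholders, so this is only a schematic of the estimator, not the study's implementation.

```python
import numpy as np

def cho_detectability(v_signal, v_noise):
    """v_signal, v_noise: (n_images, n_channels) channel outputs
    from signal-present and signal-absent ROIs."""
    dv = v_signal.mean(axis=0) - v_noise.mean(axis=0)         # mean channel-output difference
    S = 0.5 * (np.cov(v_signal, rowvar=False) +               # average intra-class covariance
               np.cov(v_noise, rowvar=False))
    w = np.linalg.solve(S, dv)                                # Hotelling template in channel space
    return float(np.sqrt(dv @ w))                             # detectability index d'

# Placeholder channel outputs (e.g. 10 Gabor channels, 200 ROIs per class).
rng = np.random.default_rng(1)
v_absent = rng.normal(0.0, 1.0, size=(200, 10))
v_present = v_absent + rng.normal(0.3, 0.05, size=10)         # small per-channel mean shift
print(cho_detectability(v_present, v_absent))
```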
Inception and variability of the Antarctic ice sheet across the Eocene-Oligocene transition
NASA Astrophysics Data System (ADS)
Stocchi, Paolo; Galeotti, Simone; Ladant, Jan-Baptiste; DeConto, Robert; Vermeersen, Bert; Rugenstein, Maria
2014-05-01
Climate cooling throughout the middle to late Eocene (~48-34 million years ago, Ma) triggered the transition from hot-house to ice-house conditions. Based on deep-sea marine δ18O values, a continental-scale Antarctic Ice Sheet (AIS) rapidly developed across the Eocene-Oligocene transition (EOT) in two ~200 kyr-spaced phases between 34.0 and 33.5 Ma. Regardless of the geographical configuration of Southern Ocean gateways, geochemical data and ice-sheet modelling show that AIS glaciation initiated as atmospheric CO2 fell below ~2.5 times pre-industrial values. The AIS likely reached or even exceeded present-day dimensions. Quantifying the magnitude and timing of AIS volume variations by means of δ18O records is hampered by the fact that the latter reflect a coupled signal of temperature and ice-sheet volume. Moreover, bathymetric variations based on marine geologic sections are affected by large uncertainties and, most importantly, reflect the local response of relative sea level (rsl) to ice volume fluctuations rather than the global eustatic signal. AIS-proximal and Northern Hemisphere (NH) marine settings show opposite trends of rsl change across the EOT. In fact, consistently with central values based on δ18O records, a 60 ± 20 m rsl drop is estimated from NH low-latitude shallow marine sequences. Conversely, sedimentary facies from shallow shelfal areas in the proximity of the AIS record a 50-150 m rsl rise across the EOT. Accounting for ice-load-induced crustal and geoidal deformations and for the mutual gravitational attraction between the growing AIS and the ocean water is a necessary requirement to reconcile near- and far-field rsl sites, regardless of tectonics and of any other possible local contamination. In this work we investigate AIS inception and variability across the EOT by combining the observed rsl changes with predictions based on numerical modeling of Glacial Isostatic Adjustment (GIA). We solve the gravitationally self-consistent Sea Level Equation for two different and independent AIS models, both driven by atmospheric CO2 variations and evolving on different Antarctic topographies. In particular, minimum and maximum AIS volumes, of ~55 m and ~70 m equivalent sea level (esl) respectively, stem from a smaller and a larger Antarctic topography. The minimum and maximum GIA predictions at the NH rsl sites correspond respectively to the lower limit and the central value of the EOT rsl drop inferred from geological data. For both GIA models, the departures from the eustatic trend increase significantly southward toward Antarctica, where AIS growth is accompanied by a rsl rise. Accordingly, the cyclochronological record of sedimentary cycles retrieved from Cape Roberts Project drillcore CRP-3 (Victoria Land Basin) records a deepening across the EOT. Most importantly, the CRP-3 record shows that full glacial conditions consistent with the maximum AIS model dimensions were reached only at ~32.8 Ma, while ice-sheet volume fluctuations around the minimum AIS model volume persisted during the first million years of glaciation.
Míguez, A; Iftimi, A; Montes, F
2016-09-01
Epidemiologists agree that there is a prevailing seasonality in the presentation of epidemic waves of respiratory syncytial virus (RSV) infections and influenza. The aim of this study is to quantify the relationship between RSV activity and influenza virus activity, in order to use the RSV seasonal curve as a predictor of the evolution of an influenza epidemic wave. Two statistical tools, logistic regression and time series, are used for predicting the evolution of influenza. Both the logistic models and the influenza time series incorporate RSV information from previous weeks. Data consist of influenza and confirmed RSV cases reported in Comunitat Valenciana (Spain) during the period from week 40 (2010) to week 8 (2014). Binomial logistic regression models used to predict the two states of an influenza wave, basal or peak, achieve a rate of correct classification higher than 92% on the validation set. When a finer three-state categorization is established (basal, increasing peak and decreasing peak), the multinomial logistic model performs well in 88% of cases of the validation set. The ARMAX model fits the influenza waves well and shows good performance for short-term forecasts up to 3 weeks. The seasonal evolution of influenza virus can be predicted a minimum of 4 weeks in advance using logistic models based on RSV. It would be necessary to study more inter-pandemic seasons to establish a stronger relationship between the epidemic waves of both viruses.
Elemental GCR Observations during the 2009-2010 Solar Minimum Period
NASA Technical Reports Server (NTRS)
Lave, K. A.; Israel, M. H.; Binns, W. R.; Christian, E. R.; Cummings, A. C.; Davis, A. J.; deNolfo, G. A.; Leske, R. A.; Mewaldt, R. A.; Stone, E. C.;
2013-01-01
Using observations from the Cosmic Ray Isotope Spectrometer (CRIS) onboard the Advanced Composition Explorer (ACE), we present new measurements of the galactic cosmic ray (GCR) elemental composition and energy spectra for the species B through Ni in the energy range approx. 50-550 MeV/nucleon during the record setting 2009-2010 solar minimum period. These data are compared with our observations from the 1997-1998 solar minimum period, when solar modulation in the heliosphere was somewhat higher. For these species, we find that the intensities during the 2009-2010 solar minimum were approx. 20% higher than those in the previous solar minimum, and in fact were the highest GCR intensities recorded during the space age. Relative abundances for these species during the two solar minimum periods differed by small but statistically significant amounts, which are attributed to the combination of spectral shape differences between primary and secondary GCRs in the interstellar medium and differences between the levels of solar modulation in the two solar minima. We also present the secondary-to-primary ratios B/C and (Sc+Ti+V)/Fe for both solar minimum periods, and demonstrate that these ratios are reasonably well fit by a simple "leaky-box" galactic transport model that is combined with a spherically symmetric solar modulation model.
40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Construction On or Before November 30, 1999, Model Rule - Monitoring, § 60.2735: Is there a minimum amount of monitoring data I must obtain? (a) Except for monitoring malfunctions, associated repairs, and required...
75 FR 18256 - Petition for Exemption; Summary of Petition Received
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... an exemption from the specific dimensions of the passenger entry door of the Hawker Beechcraft Model 390-2. The door has basic dimensions greater than the minimum required by Sec. 23.783(f)(1). The total... than the minimum area required by Sec. 23.783(f)(1); however, the minimum width dimension cannot be met...
Reference respiratory waveforms by minimum jerk model analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka
Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk (the third derivative of position with respect to time, i.e., the derivative of acceleration, which is proportional to the rate of change of force), was introduced to model a patient-specific respiratory motion wave that provides smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion followed three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior-inferior directions from six volunteers, assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy, as indicated by the radial tracking discrepancy affected by respiratory phase, was improved in the minimum jerk theoretical model by 7.0% and 13% compared with the waveforms modeled by the cosine and free-breathing models, respectively. Conclusions: The minimum jerk theoretical respiratory wave can achieve smooth tracking by CyberKnife® and may provide patient-specific respiratory modeling, which may be useful for respiratory training and coaching, as well as quality assurance of the mechanical CyberKnife® robotic trajectory.
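For reference, the minimum-jerk displacement profile between two rest points has the standard closed form x(τ) = x0 + (xf − x0)(10τ³ − 15τ⁴ + 6τ⁵) with τ = t/T. The sketch below builds a simple reference respiratory wave by alternating inhale and exhale segments of this form; the amplitude, segment durations, and sampling rate are placeholder values rather than patient-specific parameters.

```python
import numpy as np

def minimum_jerk(x0, xf, n):
    """Minimum-jerk profile from rest point x0 to rest point xf, sampled at n points."""
    tau = np.linspace(0.0, 1.0, n)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def reference_respiratory_wave(amplitude=10.0, t_in=1.8, t_out=2.2, cycles=5, fs=25):
    """Concatenate inhale/exhale minimum-jerk segments into a smooth reference wave.
    amplitude in mm, durations in s, fs in samples per second (all placeholder values)."""
    inhale = minimum_jerk(0.0, amplitude, int(t_in * fs))
    exhale = minimum_jerk(amplitude, 0.0, int(t_out * fs))
    return np.tile(np.concatenate([inhale, exhale]), cycles)

wave = reference_respiratory_wave()
print(wave.shape, wave.min(), wave.max())
```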
Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin
2015-01-01
Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
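A minimal ARIMAX fit of this kind can be written with statsmodels, using lagged climate series as exogenous regressors; the model order, lag lengths, and synthetic monthly data below are placeholders rather than the values identified in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 120  # ten years of synthetic monthly data
temperature = 28 + 3 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 0.5, n)
min_rel_humidity = 70 + 10 * np.cos(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 2, n)
cases = (100 + 0.8 * np.roll(temperature, 4)            # synthetic "influenza cases"
         + 0.5 * np.roll(min_rel_humidity, 2) + rng.normal(0, 5, n))

df = pd.DataFrame({"cases": cases,
                   "temp_lag4": pd.Series(temperature).shift(4),
                   "rh_lag2": pd.Series(min_rel_humidity).shift(2)}).dropna()

# ARIMAX: ARMA errors plus lagged climate covariates as exogenous regressors.
model = SARIMAX(df["cases"], exog=df[["temp_lag4", "rh_lag2"]], order=(1, 0, 1))
result = model.fit(disp=False)
print(result.params)
```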
Bowman, Angela S; Owusu, Andrew; Trueblood, Amber B; Bosumtwi-Sam, Cynthia
2018-05-07
This study examines the prevalence, determinants, and impact of local school health management committees on the implementation of minimum-recommended school health services delivery among basic and secondary schools in Ghana. National-level cross-sectional data from the first-ever Ghana Global School Health Policies and Practices Survey were utilized. Complex sample analyses were used to quantify school-level implementation of the recommended minimum package for health services delivery. Of 307 schools, 98% were basic and government run, and 33% offered at least half of the recommended health service delivery areas measured. Schools with a school health management committee (53%) were 4.8 (95% CI = 3.23-5.18) times as likely to offer at least 50% of the minimum health services package as schools without one. There is a significant deficit in the delivery of school health services in schools across Ghana. However, school health management committees positively affect implementation of health service delivery; thus, it is recommended that policy makers and programmers place greater emphasis on the value of, and need for, these advisory boards in all Ghanaian schools. Copyright © 2018 John Wiley & Sons, Ltd.
An absolute method for determination of misalignment of an immersion ultrasonic transducer.
Narayanan, M M; Singh, Narender; Kumar, Anish; Babu Rao, C; Jayakumar, T
2014-12-01
An absolute methodology has been developed for quantification of the misalignment of an ultrasonic transducer using a corner-cube retroreflector. The amplitude-based and time-of-flight (TOF) based C-scans of the reflector are obtained for various misalignments of the transducer. At zero-degree orientation of the transducer, the vertical positions of the maximum amplitude and the minimum TOF in the C-scan coincide. At any other orientation of the transducer with respect to the horizontal plane, there is a vertical shift in the position of the maximum amplitude with respect to the minimum TOF. The position of the minimum TOF remains the same irrespective of the orientation of the transducer and hence is used as a reference for any misalignment of the transducer. With the measurement of the vertical shift and the horizontal distance between the transducer and the vertex of the reflector, the misalignment of the transducer is quantified. Based on the methodology developed in the present study, retroreflectors are placed in the Indian 500 MWe Prototype Fast Breeder Reactor for assessment of the orientation of the ultrasonic transducer prior to the under-sodium ultrasonic scanning for detection of any protrusion of the subassemblies. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Quackenbush, A.
2015-12-01
Urban land cover and associated impervious surface area is expected to increase by as much as 50% over the next few decades across substantial portions of the United States. In combination with urban expansion, increases in temperature and changes in precipitation are expected to impact ecosystems through changes in productivity, disturbance and hydrological properties. In this study, we use the NASA Terrestrial Observation and Prediction System Biogeochemical Cycle (TOPS-BGC) model to explore the combined impacts of urbanization and climate change on hydrologic dynamics (snowmelt, runoff, and evapotranspiration) and vegetation carbon uptake (gross productivity). The model is driven using land cover predictions from the Spatially Explicit Regional Growth Model (SERGoM) to quantify projected changes in impervious surface area, and climate projections from the 30 arc-second NASA Earth Exchange Downscaled Climate Projection (NEX-DCP30) dataset derived from the CMIP5 climate scenarios. We present the modeling approach and an analysis of the ecosystem impacts projected to occur in the US, with an emphasis on protected areas in the Great Northern and Appalachian Landscape Conservation Cooperatives (LCC). Under the ensemble average of the CMIP5 models and land cover change scenarios for both representative concentration pathways (RCPs) 4.5 and 8.5, both LCCs are predicted to experience increases in maximum and minimum temperatures as well as annual average precipitation. In the Great Northern LCC, this is projected to lead to increased annual runoff, especially under RCP 8.5. Earlier melt of the winter snow pack and increased evapotranspiration, however, reduces summer streamflow and soil water content, leading to a net reduction in vegetation productivity across much of the Great Northern LCC, with stronger trends occurring under RCP 8.5. Increased runoff is also projected to occur in the Appalachian LCC under both RCP 4.5 and 8.5. However, under RCP 4.5, the model predicts that the warmer wetter conditions will lead to increases in vegetation productivity across much of the Appalachian LCC, while under RCP 8.5, the effects of increased precipitation are not enough to keep up with increases in evapotranspiration, leading to projected reductions in vegetation productivity for this LCC by the end of this century.
Minimum probe length for unique identification of all open reading frames in a microbial genome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, B A; Ng, J; Fitch, J P
2000-03-05
In this paper, we determine the minimum hybridization probe length needed to uniquely identify at least 95% of the open reading frames (ORFs) in an organism. We analyze the whole-genome sequences of 17 species: 11 bacteria, 4 archaea, and 2 eukaryotes. We also present a mathematical model for minimum probe length based on the assumption that all ORFs are random, of constant length, and contain an equal distribution of bases. The model accurately predicts the minimum probe length for all species, but it incorrectly predicts that all ORFs may be uniquely identified. However, a probe length of just 9 bases is adequate to identify over 95% of the ORFs for all 15 prokaryotic species we studied. Using a minimum probe length, while accepting that some ORFs may not be identified and that data will be lost due to hybridization error, may result in significant savings in microarray and oligonucleotide probe design.
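The uniqueness criterion can be checked empirically by counting k-mers shared between ORFs, as sketched below on a toy random "genome"; the ORF count and length are illustrative only, and the sketch is a brute-force check rather than the authors' analytical model.

```python
import random

def fraction_uniquely_identifiable(orfs, k):
    """Fraction of ORFs containing at least one k-mer that occurs in no other ORF."""
    owner, shared = {}, set()
    for i, seq in enumerate(orfs):
        for j in range(len(seq) - k + 1):
            kmer = seq[j:j + k]
            if kmer in owner and owner[kmer] != i:
                shared.add(kmer)          # seen in a different ORF -> not unique
            owner[kmer] = i
    identifiable = 0
    for i, seq in enumerate(orfs):
        kmers = {seq[j:j + k] for j in range(len(seq) - k + 1)}
        if any(km not in shared for km in kmers):
            identifiable += 1
    return identifiable / len(orfs)

# Toy random "genome": 500 ORFs of 300 bases each (illustrative sizes only).
random.seed(0)
orfs = ["".join(random.choice("ACGT") for _ in range(300)) for _ in range(500)]
for k in range(6, 15):
    frac = fraction_uniquely_identifiable(orfs, k)
    print(f"probe length {k}: {frac:.1%} of ORFs uniquely identifiable")
    if frac >= 0.95:
        break
```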
Acoustic and aerodynamic testing of a scale model variable pitch fan
NASA Technical Reports Server (NTRS)
Jutras, R. R.; Kazin, S. B.
1974-01-01
A fully reversible-pitch scale model fan with variable-pitch rotor blades was tested to determine its aerodynamic and acoustic characteristics. The single-stage fan has a design tip speed of 1160 ft/sec (353.568 m/sec) at a bypass pressure ratio of 1.5. Three operating lines were investigated. Test results show that the blade pitch for minimum noise also resulted in the highest efficiency for all three operating lines at all thrust levels. The minimum perceived noise on a 200-ft (60.96 m) sideline was obtained with the nominal nozzle. At 44% of takeoff thrust, the PNL reduction obtained at the minimum-noise blade pitch is 1.8 PNdB for the nominal nozzle, and this reduction decreases with increasing thrust. The small nozzle (6% undersized) has the highest efficiency at all part-thrust conditions for the minimum-noise blade pitch setting, although the noise is about 1.0 PNdB higher for the small nozzle at the minimum-noise blade pitch position.
A novel method for rapid in vitro radiobioassay
NASA Astrophysics Data System (ADS)
Crawford, Evan Bogert
Rapid and accurate analysis of internal human exposure to radionuclides is essential to the effective triage and treatment of citizens who have possibly been exposed to radioactive materials in the environment. The two most likely scenarios in which a large number of citizens would be exposed are the detonation of a radiation dispersal device (RDD, "dirty bomb") or the accidental release of an isotope from an industrial source such as a radioisotopic thermal generator (RTG). In the event of the release and dispersion of radioactive materials into the environment in a large city, the entire population of the city -- including all commuting workers and tourists -- would have to be rapidly tested, both to satisfy the psychological needs of the citizens who were exposed to the mental trauma of a possible radiation dose, and to satisfy the immediate medical needs of those who received the highest doses and greatest levels of internal contamination -- those who would best benefit from rapid, intensive medical care. In this research a prototype rapid screening method to screen urine samples for the presence of up to five isotopes, both individually and in a mixture, has been developed. The isotopes used to develop this method are Co-60, Sr-90, Cs-137, Pu-238, and Am-241. This method avoids time-intensive chemical separations via the preparation and counting of a single sample on multiple detectors, and analyzing the spectra for isotope-specific markers. A rapid liquid-liquid separation using an organic extractive scintillator can be used to help quantify the activity of the alpha-emitting isotopes. The method provides quantifiable results in less than five minutes for the activity of beta/gamma-emitting isotopes when present in the sample at the intervention level as defined by the Centers for Disease Control and Prevention (CDC), and quantifiable results for the activity levels of alpha-emitting isotopes present at their respective intervention levels in approximately 30 minutes of sample preparation and counting time. Radiation detector spectra -- e.g. those from high-purity germanium (HPGe) gamma detectors and liquid scintillation detectors -- which contain decay signals from multiple isotopes often have overlapping signals: the counts from one isotope's decay can appear in energy channels associated with another isotope's decay, complicating the calculation of each isotope's activity. The uncertainties associated with analyzing these spectra have been traced in order to determine the effects of one isotope's count rate on the sensitivity and uncertainty associated with each other isotope. The method that was developed takes advantage of activated carbon filtration to eliminate quenching effects and to make the liquid scintillation spectra from different urine samples comparable. The method uses pulse-shape analysis to reduce the interference from beta emitters in the liquid scintillation spectrum and improve the minimum detectable activity (MDA) and minimum quantifiable activity (MQA) for alpha emitters. The method uses an HPGe detector to quantify the activity of gamma emitters, and subtract their isotopes' contributions to the liquid scintillation spectra via a calibration factor, such that the pure beta and pure alpha emitters can be identified and quantified from the resulting liquid scintillation spectra. 
Finally, the method optionally uses extractive scintillators to rapidly separate the alpha emitters from the beta emitters when the activity from the beta emitters is too great to detect or quantify the activity from the alpha emitters without such a separation. The method is able to detect and quantify all five isotopes, with uncertainties and biases usually in the 10-40% range, depending upon the isotopic mixtures and the activity ratios between each of the isotopes.
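The minimum detectable activity mentioned above is conventionally estimated with the Currie formulation, sketched below; the efficiency, background, count time, and chemical yield are placeholder inputs, and the dissertation's exact MDA/MQA treatment may differ from this textbook form.

```python
import math

def currie_mda(background_counts, count_time_s, efficiency, chemical_yield=1.0,
               sample_volume_l=1.0):
    """Currie-style minimum detectable activity (Bq/L).

    background_counts: counts in the background region during count_time_s
    efficiency: absolute counting efficiency (counts per decay)
    chemical_yield: recovery fraction of the radiochemical preparation
    """
    detection_limit_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return detection_limit_counts / (efficiency * chemical_yield *
                                     count_time_s * sample_volume_l)

# Placeholder values: 30 background counts in a 300 s count, 25% efficiency, 90% yield.
print(f"MDA ~ {currie_mda(30, 300, 0.25, 0.90):.3f} Bq/L")
```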
Empirical downscaling of daily minimum air temperature at very fine resolutions in complex terrain
Zachary A. Holden; John T. Abatzoglou; Charles H. Luce; L. Scott Baggett
2011-01-01
Available air temperature models do not adequately account for the influence of terrain on nocturnal air temperatures. An empirical model for night-time air temperatures was developed using a network of one hundred and forty inexpensive temperature sensors deployed across the Bitterroot National Forest, Montana. A principal component analysis (PCA) on minimum...
The Solar System Large Planets' Influence on a New Maunder Minimum
NASA Astrophysics Data System (ADS)
Yndestad, Harald; Solheim, Jan-Erik
2016-04-01
In the 1890s, G. Spörer and E. W. Maunder reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later, a reconstruction of solar activity confirmed the grand minima of Maunder (1640-1720), Spörer (1390-1550) and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three Maunder-type grand minima and two Dalton-type minima over a thousand-year period indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton-type period. The cause of these minimum periods is not well understood. An expectation of a new Maunder-type period must be based on the properties of solar variability. If the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum; a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611 and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods and their phase relations. The result shows that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton-to-Maunder-type minimum irradiation period from 2047 to 2068.
Teixeira, Andreia Sofia; Monteiro, Pedro T; Carriço, João A; Ramirez, Mário; Francisco, Alexandre P
2015-01-01
Trees, including minimum spanning trees (MSTs), are commonly used in phylogenetic studies. But, for the research community, it may be unclear that the presented tree is just a hypothesis, chosen from among many possible alternatives. In this scenario, it is important to quantify our confidence in both the trees and the branches/edges included in such trees. In this paper, we address this problem for MSTs by introducing a new edge betweenness metric for undirected and weighted graphs. This spanning edge betweenness metric is defined as the fraction of equivalent MSTs where a given edge is present. The metric provides a per edge statistic that is similar to that of the bootstrap approach frequently used in phylogenetics to support the grouping of taxa. We provide methods for the exact computation of this metric based on the well known Kirchhoff's matrix tree theorem. Moreover, we implement and make available a module for the PHYLOViZ software and evaluate the proposed metric concerning both effectiveness and computational performance. Analysis of trees generated using multilocus sequence typing data (MLST) and the goeBURST algorithm revealed that the space of possible MSTs in real data sets is extremely large. Selection of the edge to be represented using bootstrap could lead to unreliable results since alternative edges are present in the same fraction of equivalent MSTs. The choice of the MST to be presented, results from criteria implemented in the algorithm that must be based in biologically plausible models.
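For the special case of an unweighted connected graph, where every spanning tree is a minimum spanning tree, the fraction of spanning trees containing an edge equals the effective resistance between its endpoints, a consequence of Kirchhoff's matrix-tree theorem. The sketch below computes that quantity from the Laplacian pseudoinverse; the paper's method generalizes this idea to weighted graphs and MST equivalence classes, so this is a simplified illustration rather than the PHYLOViZ implementation.

```python
import numpy as np

def spanning_edge_betweenness_unweighted(adjacency):
    """Fraction of spanning trees containing each edge of an unweighted connected graph.
    Equals the effective resistance between the edge's endpoints (matrix-tree theorem)."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    Lp = np.linalg.pinv(L)                    # Moore-Penrose pseudoinverse
    fractions = {}
    n = A.shape[0]
    for u in range(n):
        for v in range(u + 1, n):
            if A[u, v]:
                fractions[(u, v)] = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
    return fractions

# Toy graph: a square with one diagonal; the diagonal edge lies on fewer spanning trees.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])
print(spanning_edge_betweenness_unweighted(A))
```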
Sources of tropospheric ozone along the Asian Pacific Rim: An analysis of ozonesonde observations
NASA Astrophysics Data System (ADS)
Liu, Hongyu; Jacob, Daniel J.; Chan, Lo Yin; Oltmans, Samuel J.; Bey, Isabelle; Yantosca, Robert M.; Harris, Joyce M.; Duncan, Bryan N.; Martin, Randall V.
2002-11-01
The sources contributing to tropospheric ozone over the Asian Pacific Rim in different seasons are quantified by analysis of Hong Kong and Japanese ozonesonde observations with a global three-dimensional (3-D) chemical transport model (GEOS-CHEM) driven by assimilated meteorological observations. Particular focus is placed on the extensive observations available from Hong Kong in 1996. In the middle-upper troposphere (MT-UT), maximum Asian pollution influence along the Pacific Rim occurs in summer, reflecting rapid convective transport of surface pollution. In the lower troposphere (LT) the season of maximum Asian pollution influence shifts to summer at midlatitudes from fall at low latitudes due to monsoonal influence. The UT ozone minimum and high variability observed over Hong Kong in winter reflects frequent tropical intrusions alternating with stratospheric intrusions. Asian biomass burning makes a major contribution to ozone at <32°N in spring. Maximum European pollution influence (<5 ppbv) occurs in spring in the LT. North American pollution influence exceeds European influence in the UT-MT, reflecting the uplift from convection and the warm conveyor belts over the eastern seaboard of North America. African outflow makes a major contribution to ozone in the low-latitude MT-UT over the Pacific Rim during November-April. Lightning influence over the Pacific Rim is minimum in summer due to westward UT transport at low latitudes associated with the Tibetan anticyclone. The Asian outflow flux of ozone to the Pacific is maximum in spring and fall and includes a major contribution from Asian anthropogenic sources year-round.
Dietary species richness as a measure of food biodiversity and nutritional quality of diets
Raneri, Jessica E.; Smith, Katherine Walker; Kolsteren, Patrick; Van Damme, Patrick; Verzelen, Kaat; Penafiel, Daniela; Vanhove, Wouter; Kennedy, Gina; Hunter, Danny; Odhiambo, Francis Oduor; Ntandou-Bouzitou, Gervais; De Baets, Bernard; Ratnasekera, Disna; Ky, Hoang The; Remans, Roseline; Termote, Céline
2018-01-01
Biodiversity is key for human and environmental health. Available dietary and ecological indicators are not designed to assess the intricate relationship between food biodiversity and diet quality. We applied biodiversity indicators to dietary intake data and assessed their associations with the diet quality of women and young children. Data from 24-hour diet recalls (55% in the wet season) of n = 6,226 participants (34% women) in rural areas from seven low- and middle-income countries were analyzed. Mean adequacies of vitamin A, vitamin C, folate, calcium, iron, and zinc and diet diversity score (DDS) were used to assess diet quality. Associations of biodiversity indicators with nutrient adequacy were quantified using multilevel models, receiver operating characteristic curves, and test sensitivity and specificity. A total of 234 different species were consumed, of which <30% were consumed in more than one country. Nine species were consumed in all countries and provided, on average, 61% of total energy intake and a significant contribution of micronutrients in the wet season. Compared with Simpson’s index of diversity and functional diversity, species richness (SR) showed stronger associations and better diagnostic properties with micronutrient adequacy. For every additional species consumed, dietary nutrient adequacy increased by 0.03 (P < 0.001). Diets with higher nutrient adequacy were mostly obtained when both SR and DDS were maximal. Adding SR to the minimum cutoff for minimum diet diversity improved the ability to detect diets with higher micronutrient adequacy in women but not in children. Dietary SR is recommended as the most appropriate measure of food biodiversity in diets. PMID:29255049
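Two of the food-biodiversity indicators compared above can be computed directly from a single recall's species-level intake list, as sketched below with a hypothetical recall (functional diversity requires trait data and is omitted).

```python
from collections import Counter

def species_richness(species_list):
    """Number of distinct species consumed in one 24-hour recall."""
    return len(set(species_list))

def simpsons_index_of_diversity(intake_by_species):
    """1 - sum(p_i^2), where p_i is the share of intake (by weight) from species i."""
    total = sum(intake_by_species.values())
    return 1.0 - sum((amount / total) ** 2 for amount in intake_by_species.values())

# Hypothetical recall: grams consumed per species (names and amounts are illustrative).
recall = Counter({"Oryza sativa": 350, "Phaseolus vulgaris": 120,
                  "Solanum lycopersicum": 80, "Mangifera indica": 150})
print(species_richness(recall.keys()), round(simpsons_index_of_diversity(recall), 3))
```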
Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz
2016-08-01
Minimum-dissipation models are a simple alternative to the Smagorinsky-type approaches to parametrize the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics. In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.
Climate Change and Health Risks from Extreme Heat and Air Pollution in the Eastern United States
NASA Astrophysics Data System (ADS)
Limaye, V.; Vargo, J.; Harkey, M.; Holloway, T.; Meier, P.; Patz, J.
2013-12-01
Climate change is expected to exacerbate health risks from exposure to extreme heat and air pollution through both direct and indirect mechanisms. Directly, warmer ambient temperatures promote biogenic emissions of ozone precursors and favor the formation of ground-level ozone, while an anticipated increase in the frequency of stagnant air masses will allow fine particulates to accumulate. Indirectly, warmer summertime temperatures stimulate energy demand and exacerbate polluting emissions from the electricity sector. Thus, while technological adaptations such as air conditioning can reduce risks from exposures to extreme heat, they can trigger downstream damage to air quality and public health. Through an interdisciplinary modeling effort, we quantify the impacts of climate change on ambient temperatures, summer energy demand, air quality, and public health. The first phase of this work explores how climate change will directly impact the burden of heat-related mortality. Climatic patterns, demographic trends, and epidemiologic risk models suggest that populations in the eastern United States are likely to experience an increasing heat stress mortality burden in response to rising summertime air temperatures. We use North American Regional Climate Change Assessment Program modeling data to estimate mid-century 2-meter air temperatures and humidity across the eastern US from June-August, and quantify how long-term changes in actual and apparent temperatures from present-day will affect the annual burden of heat-related mortality across this region. With the US Environmental Protection Agency's Environmental Benefits Mapping and Analysis Program, we estimate health risks using concentration-response functions, which relate temperature increases to changes in annual mortality rates. We compare mid-century summertime temperature data, downscaled using the Weather Research and Forecasting model, to 2007 baseline temperatures at a 12 km resolution in order to estimate the number of annual excess deaths attributable to increased summer temperatures. Warmer average temperatures are expected to cause 173 additional deaths due to cardiovascular stress, while higher minimum temperatures will cause 67 additional deaths. This work particularly improves on the spatial resolution of published analyses of heat-related mortality in the US.
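The concentration-response step in such analyses typically uses a log-linear health impact function of the form ΔM = y0 · pop · (1 − exp(−β·ΔT)). The sketch below applies that standard BenMAP-style form with invented inputs; the coefficient, population, and temperature change are placeholders, not values from this study.

```python
import numpy as np

def excess_deaths(baseline_daily_mortality_rate, population, delta_temperature, beta):
    """Log-linear health impact function: excess deaths attributable to a temperature increase.

    baseline_daily_mortality_rate: deaths per person per day
    delta_temperature: projected minus baseline summer temperature (deg C)
    beta: temperature-response coefficient per deg C
    """
    relative_risk_term = 1.0 - np.exp(-beta * delta_temperature)
    return baseline_daily_mortality_rate * population * relative_risk_term

# Placeholder inputs for one grid cell over a 92-day summer.
days = 92
daily_rate = 2.5e-5          # deaths per person-day (assumed)
pop = 150_000                # people in the grid cell (assumed)
dT = 2.3                     # deg C warming at mid-century (assumed)
beta = 0.004                 # per deg C (assumed)
print(f"excess summer deaths: {days * excess_deaths(daily_rate, pop, dT, beta):.1f}")
```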
Jepson, Alys K; Schwarz-Linek, Jana; Ryan, Lloyd; Ryadnov, Maxim G; Poon, Wilson C K
2016-01-01
We measured the minimum inhibitory concentration (MIC) of the antimicrobial peptide pexiganan acting on Escherichia coli, and found an intrinsic variability in such measurements. These results led to a detailed study of the effect of pexiganan on the growth curve of E. coli, using a plate reader and manual plating (i.e. time-kill curves). The measured growth curves, together with single-cell observations and peptide depletion assays, suggested that addition of a sub-MIC concentration of pexiganan to a population of this bacterium killed a fraction of the cells, reducing peptide activity during the process, while leaving the remaining cells unaffected. This pharmacodynamic hypothesis suggests a considerable inoculum effect, which we quantified. Our results cast doubt on the use of the MIC as 'a measure of the concentration needed for peptide action' and show how 'coarse-grained' studies at the population level give vital information for the correct planning and interpretation of MIC measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno
2007-10-15
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1xM bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Interannual Variation in Stand Transpiration is Dependent Upon Tree Species
NASA Astrophysics Data System (ADS)
Ewers, B. E.; Mackay, D. S.; Burrows, S. N.; Ahl, D. E.; Samanta, S.
2003-12-01
In order to successfully predict transpirational water fluxes from forested watersheds, interannual variability in transpiration must be quantified and understood. In a heterogeneous forested landscape in northern Wisconsin, we quantified stand transpiration across four forest cover types representing more than 80 percent of the land area in order to 1) quantify differences in stand transpiration and leaf area over two years and 2) determine the mechanisms governing the changes in transpiration over two years. We measured sap flux in eight trees of each tree species in the four cover types. We found that in northern hardwoods, the leaf area of sugar maple increased between the two measurement years, with transpiration per unit ground area increasing even more than could be explained by leaf area. In an aspen stand, tent caterpillars completely defoliated the stand for approximately a month until a new set of leaves flushed out. The new set of leaves resulted in a lower leaf area but the same transpiration per unit leaf area, indicating there was no physiological compensation for the lower leaf area. At the same time, balsam fir growing underneath the aspen increased its transpiration rate in response to greater light penetration through the dominant aspen canopy. Red pine had a thirty percent change in leaf area within a growing season due to multiple cohorts of leaves, and transpiration followed this leaf area dynamic. In a forested wetland, white cedar transpiration was proportional to surface water depth between the two years. Despite the specific tree species' effects on stand transpiration, all species displayed minimum water potential regulation, resulting in a saturating response of transpiration to vapor pressure deficit that did not vary across the two years. This physiological set point will allow future water flux models to explain mechanistically the interannual variability in transpiration of this and similar forests.
The Formation Environment of Jupiter's Moons
NASA Technical Reports Server (NTRS)
Turner, Neal; Lee, Man Hoi; Sano, Takayoshi
2012-01-01
Do circumjovian disk models have conductivities consistent with the assumed accretion stresses? Broadly, YES, for both minimum-mass and gas-starved models: magnetic stresses are weak in the MM models, as needed to keep the material in place. Stresses are stronger in the gas-starved models, as assumed in deriving the flow to the planet. However, future minimum-mass modeling may need to consider the loss of dust-depleted gas from the surface layers to the planet. The gas-starved models should have stress varying in radius. Dust evolution is a key process for further study, since the recombination occurs on the grains.
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
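A minimal version of the two-stage structure, i.e. a distribution of initial loads followed by first-order decay during depuration, is sketched below; the lognormal parameters, decay rate, and target level are placeholders, and the paper's 'worst case' variability assumption is only loosely mimicked by the wide distribution used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage (i): initial norovirus loads across a shellfish population (lognormal, placeholder params).
initial_loads = rng.lognormal(mean=5.0, sigma=1.2, size=10_000)   # genome copies per gram

# Stage (ii): first-order decay during depuration, C(t) = C0 * exp(-k t).
k = 0.01          # per hour (assumed depuration rate)
target = 200.0    # risk-management level, copies per gram (assumed)

def minimum_depuration_time(c0, k, target):
    """Hours needed for a load c0 to decay below the target level."""
    return np.maximum(0.0, np.log(c0 / target) / k)

times = minimum_depuration_time(initial_loads, k, target)
print(f"depuration time for the 95th-percentile load: {np.percentile(times, 95):.0f} h")
```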
Cyclic Evolution of Coronal Fields from a Coupled Dynamo Potential-Field Source-Surface Model.
Dikpati, Mausumi; Suresh, Akshaya; Burkepile, Joan
The structure of the Sun's corona varies with the solar-cycle phase, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. It is widely accepted that the large-scale coronal structure is governed by magnetic fields that are most likely generated by dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential-field source-surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation; these dynamo-generated fields are extended from the photosphere to the corona using a potential-field source-surface model. Assuming axisymmetry, we take linear combinations of associated Legendre polynomials that match the more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986 - 1991), we compute the coefficients of the associated Legendre polynomials up to degree eight and compare with observations. We show that at minimum the dipole term dominates, but it fades as the cycle progresses; higher-order multipolar terms begin to dominate. The amplitudes of these terms are not exactly the same for the two limbs, indicating that there is a longitude dependence. While both the 1986 and the 1996 minimum coronas were dipolar, the minimum in 2008 was unusual, since there was a substantial departure from a dipole. We investigate the physical cause of this departure by including a North-South asymmetry in the surface source of the magnetic fields in our flux-transport dynamo model, and find that this asymmetry could be one of the reasons for departure from the dipole in the 2008 minimum.
Sink fast and swim harder! Round-trip cost-of-transport for buoyant divers.
Miller, Patrick J O; Biuw, Martin; Watanabe, Yuuki Y; Thompson, Dave; Fedak, Mike A
2012-10-15
Efficient locomotion between prey resources at depth and oxygen at the surface is crucial for breath-hold divers to maximize time spent in the foraging layer, and thereby net energy intake rates. The body density of divers, which changes with body condition, determines the apparent weight (buoyancy) of divers, which may affect round-trip cost-of-transport (COT) between the surface and depth. We evaluated alternative predictions from external-work and actuator-disc theory of how non-neutral buoyancy affects round-trip COT to depth, and the minimum COT speed for steady-state vertical transit. Not surprisingly, the models predict that one-way COT decreases (increases) when buoyancy aids (hinders) one-way transit. At extreme deviations from neutral buoyancy, gliding at terminal velocity is the minimum COT strategy in the direction aided by buoyancy. In the transit direction hindered by buoyancy, the external-work model predicted that minimum COT speeds would not change at greater deviations from neutral buoyancy, but minimum COT speeds were predicted to increase under the actuator disc model. As previously documented for grey seals, we found that vertical transit rates of 36 elephant seals increased in both directions as body density deviated from neutral buoyancy, indicating that actuator disc theory may more closely predict the power requirements of divers affected by gravity than an external work model. For both models, minor deviations from neutral buoyancy did not affect minimum COT speed or round-trip COT itself. However, at body-density extremes, both models predict that savings in the aided direction do not fully offset the increased COT imposed by the greater thrusting required in the hindered direction.
Ishibashi, Hiroki; Miyamoto, Morikazu; Shinnmoto, Hiroshi; Murakami, Wakana; Soyama, Hiroaki; Nakatsuka, Masaya; Natsuyama, Takahiro; Yoshida, Masashi; Takano, Masashi; Furuya, Kenichi
2017-10-01
The aim of this study was to prenatally predict placenta accreta in posterior placenta previa using magnetic resonance imaging (MRI). This retrospective study was approved by the Institutional Review Board of our hospital. We identified 81 patients with singleton pregnancy who had undergone cesarean section due to posterior placenta previa at our hospital between January 2012 and December 2016. We calculated the sensitivity and specificity of several well-known findings, and of cervical varicosities quantified using magnetic resonance imaging, in predicting placenta accreta in posterior placenta previa. To quantify cervical varicosities, we calculated the A/B ratio, where "A" was the minimum distance from the most dorsal cervical varicosity to the decidual placenta, and "B" was the minimum distance from the most dorsal cervical varicosity to the amniotic placenta. The appropriate cut-off value of the A/B ratio was determined using a receiver operating characteristic (ROC) curve. Three patients (3.7%) were diagnosed as having placenta accreta. The sensitivity and specificity of the well-known findings were 0 and 97.4%, respectively. Furthermore, the A/B ratio ranged from 0.02 to 0.79. ROC curve analysis of the A/B ratio for predicting placenta accreta revealed an area under the curve of 0.96. When the cutoff value of the A/B ratio was set at 0.18, the sensitivity and specificity were 100 and 91%, respectively. It was difficult to diagnose placenta accreta in posterior placenta previa using the well-known findings. The quantification of cervical varicosities could effectively predict placenta accreta.
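As an illustration of the reported cut-off analysis, the sketch below computes sensitivity, specificity and the area under the ROC curve for a threshold on the A/B ratio; the data are invented, and the assumed direction of the association (a lower ratio flagging accreta) is an assumption made only for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# hypothetical data: 1 = placenta accreta, 0 = no accreta (values are invented)
rng = np.random.default_rng(0)
y = np.array([1, 1, 1] + [0] * 20)
a_b_ratio = np.r_[rng.uniform(0.02, 0.15, 3), rng.uniform(0.20, 0.79, 20)]

# assumption: a smaller A/B ratio indicates accreta, so score with the negated ratio
auc = roc_auc_score(y, -a_b_ratio)

cutoff = 0.18
pred = a_b_ratio <= cutoff
sensitivity = (pred & (y == 1)).sum() / (y == 1).sum()
specificity = (~pred & (y == 0)).sum() / (y == 0).sum()
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```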
NASA Astrophysics Data System (ADS)
Liu, H.; Richmond, A. D.
2013-12-01
In this study we quantify the contribution of individual large-scale waves to ionospheric electrodynamics, and examine the dependence of the ionospheric perturbations on solar activity. We focus on migrating diurnal tide (DW1) plus mean winds, migrating semidiurnal tide (SW2), quasi-stationary planetary wave 1 (QSPW1), and nonmigrating semidiurnal westward wave 1 (SW1) under northern winter conditions, when QSPW1 and SW1 are climatologically strong. From TIME-GCM simulations under solar minimum conditions, we calculate equatorial vertical ExB drifts due to mean winds and DW1, SW2, SW1 and QSPW1. In particular, wind components of both SW2 and SW1 become large at mid to high latitudes in the E-region, and kernel functions obtained from numerical experiments reveal that they can significantly affect the equatorial ion drift, likely through modulating the E-region wind dynamo. The most evident changes of total ionospheric vertical drift when solar activity is increased are seen around dawn and dusk, reflecting the more dominant role of large F-region Pedersen conductivity and of the F-region dynamo under high solar activity. Therefore, the lower atmosphere driving of the ionospheric variability is more evident under solar minimum conditions, not only because variability is more identifiable in a quieter background, but also because the E-region wind dynamo is more significant. These numerical experiments also demonstrate that the amplitudes, phases and latitudinal and vertical structures of large-scale waves are important in quantifying the ionospheric responses.
Oxygen Pathways and Budget for the Eastern South Pacific Oxygen Minimum Zone
NASA Astrophysics Data System (ADS)
Llanillo, P. J.; Pelegrí, J. L.; Talley, L. D.; Peña-Izquierdo, J.; Cordero, R. R.
2018-03-01
Ventilation of the eastern South Pacific Oxygen Minimum Zone (ESP-OMZ) is quantified using climatological Argo and dissolved oxygen data, combined with reanalysis wind stress data. We (1) estimate all oxygen fluxes (advection and turbulent diffusion) ventilating this OMZ, (2) quantify for the first time the oxygen contribution from the subtropical versus the traditionally studied tropical-equatorial pathway, and (3) derive a refined annual-mean oxygen budget for the ESP-OMZ. In the upper OMZ layer, net oxygen supply is dominated by tropical-equatorial advection, with more than one-third of this supply upwelling into the Ekman layer through previously unevaluated vertical advection, within the overturning component of the regional Subtropical Cell (STC). Below the STC, at the OMZ's core, advection is weak and turbulent diffusion (isoneutral and dianeutral) accounts for 89% of the net oxygen supply, most of it coming from the oxygen-rich subtropical gyre. In the deep OMZ layer, net oxygen supply occurs only through turbulent diffusion and is dominated by the tropical-equatorial pathway. Considering the entire OMZ, net oxygen supply (3.84 ± 0.42 µmol kg-1 yr-1) is dominated by isoneutral turbulent diffusion (56.5%, split into 32.3% of tropical-equatorial origin and 24.2% of subtropical origin), followed by isoneutral advection (32.0%, split into 27.6% of tropical-equatorial origin and 4.4% of subtropical origin) and dianeutral diffusion (11.5%). One-quarter (25.8%) of the net oxygen input escapes through dianeutral advection (most of it upwelling) and, assuming steady state, biological consumption is responsible for most of the oxygen loss (74.2%).
Harrington, Charles R; Storey, John M D; Clunas, Scott; Harrington, Kathleen A; Horsley, David; Ishaq, Ahtsham; Kemp, Steven J; Larch, Christopher P; Marshall, Colin; Nicoll, Sarah L; Rickard, Janet E; Simpson, Michael; Sinclair, James P; Storey, Lynda J; Wischik, Claude M
2015-04-24
Alzheimer disease (AD) is a degenerative tauopathy characterized by aggregation of Tau protein through the repeat domain to form intraneuronal paired helical filaments (PHFs). We report two cell models in which we control the inherent toxicity of the core Tau fragment. These models demonstrate the properties of prion-like recruitment of full-length Tau into an aggregation pathway in which template-directed, endogenous truncation propagates aggregation through the core Tau binding domain. We use these in combination with dissolution of native PHFs to quantify the activity of Tau aggregation inhibitors (TAIs). We report the synthesis of novel stable crystalline leucomethylthioninium salts (LMTX®), which overcome the pharmacokinetic limitations of methylthioninium chloride. LMTX®, as either a dihydromesylate or a dihydrobromide salt, retains TAI activity in vitro and disrupts PHFs isolated from AD brain tissues at 0.16 μM. The Ki value for intracellular TAI activity, which we have been able to determine for the first time, is 0.12 μM. These values are close to the steady state trough brain concentration of methylthioninium ion (0.18 μM) that is required to arrest progression of AD on clinical and imaging end points and the minimum brain concentration (0.13 μM) required to reverse behavioral deficits and pathology in Tau transgenic mice. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
The reliability of the Adelaide in-shoe foot model.
Bishop, Chris; Hillier, Susan; Thewlis, Dominic
2017-07-01
Understanding the biomechanics of the foot is essential for many areas of research and clinical practice such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, what largely remains unknown is how the foot moves inside the shoe. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions were the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which showed less than acceptable repeatability. Based on our results, the Adelaide in-shoe foot model can be used with confidence for 24 commonly measured biomechanical variables during shod walking. Copyright © 2017 Elsevier B.V. All rights reserved.
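A minimal sketch of one of the reliability statistics mentioned, the intra-class correlation coefficient, computed here as ICC(2,1) from a participants-by-sessions matrix; the data and the gait variable are hypothetical, and this is not the study's analysis code.

```python
import numpy as np

def icc_2_1(x):
    """Two-way random, absolute-agreement, single-measure ICC(2,1).

    x : (n_subjects, k_sessions) array of one scalar gait variable
    """
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# hypothetical example: 30 participants, 2 sessions, peak hindfoot angle (degrees)
rng = np.random.default_rng(42)
truth = rng.normal(10, 3, size=(30, 1))
sessions = truth + rng.normal(0, 1, size=(30, 2))
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```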
Universal inverse power-law distribution for temperature and rainfall in the UK region
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2014-06-01
Meteorological parameters, such as temperature, rainfall, pressure, etc., exhibit self-similar space-time fractal fluctuations generic to dynamical systems in nature such as fluid flows, spread of forest fires, earthquakes, etc. The power spectra of fractal fluctuations display an inverse power-law form signifying long-range correlations. A general systems theory model predicts a universal inverse power-law form incorporating the golden mean for the fractal fluctuations. The model-predicted distribution was compared with the observed distribution of fractal fluctuations of all size scales (small, large and extreme values) in the historic month-wise temperature (maximum and minimum) and total rainfall for the four stations Oxford, Armagh, Durham and Stornoway in the UK region, for data periods ranging from 92 years to 160 years. For each parameter, two cumulative probability distributions were used, namely cmax and cmin, starting from the maximum and minimum data values, respectively. The results of the study show that (i) temperature distributions (maximum and minimum) follow the model-predicted distribution, except for the Stornoway minimum-temperature cmin; (ii) rainfall distributions for cmin follow the model-predicted distribution for all four stations; (iii) rainfall distributions for cmax follow the model-predicted distribution for the two stations Armagh and Stornoway. The present study suggests that fractal fluctuations result from the superimposition of eddy continuum fluctuations.
Does the Current Minimum Validate (or Invalidate) Cycle Prediction Methods?
NASA Technical Reports Server (NTRS)
Hathaway, David H.
2010-01-01
This deep, extended solar minimum and the slow start to Cycle 24 strongly suggest that Cycle 24 will be a small cycle. A wide array of solar cycle prediction techniques have been applied to predicting the amplitude of Cycle 24 with widely different results. Current conditions and new observations indicate that some highly regarded techniques now appear to have doubtful utility. Geomagnetic precursors have been reliable in the past and can be tested with 12 cycles of data. Of the three primary geomagnetic precursors only one (the minimum level of geomagnetic activity) suggests a small cycle. The Sun's polar field strength has also been used to successfully predict the last three cycles. The current weak polar fields are indicative of a small cycle. For the first time, dynamo models have been used to predict the size of a solar cycle but with opposite predictions depending on the model and the data assimilation. However, new measurements of the surface meridional flow indicate that the flow was substantially faster on the approach to Cycle 24 minimum than at Cycle 23 minimum. In both dynamo predictions a faster meridional flow should have given a shorter cycle 23 with stronger polar fields. This suggests that these dynamo models are not yet ready for solar cycle prediction.
Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas
2003-06-01
The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
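The parameter-uncertainty step can be illustrated with a small Monte Carlo sketch; the impact model, input distributions and option values below are invented placeholders, not the case-study data or the full scenario/model resampling described in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000  # Monte Carlo draws

def impact_score(insulation_kg, ghg_factor, heat_saving_kwh, grid_factor):
    """Toy global-warming score: production burden minus avoided heating emissions."""
    return insulation_kg * ghg_factor - heat_saving_kwh * grid_factor

# hypothetical input distributions for two insulation options (A: thin, B: thick)
scores_a = impact_score(rng.lognormal(np.log(120), 0.1, N),
                        rng.normal(2.0, 0.2, N),
                        rng.normal(900, 90, N),
                        rng.normal(0.5, 0.05, N))
scores_b = impact_score(rng.lognormal(np.log(200), 0.1, N),
                        rng.normal(2.0, 0.2, N),
                        rng.normal(1400, 140, N),
                        rng.normal(0.5, 0.05, N))

diff = scores_a - scores_b
print(f"P(option A has a higher GW score than B) = {(diff > 0).mean():.2f}")
print("90% interval of the score difference:", np.percentile(diff, [5, 95]).round(1))
```

Propagating the draws through the score difference, rather than comparing the two options' intervals separately, is what allows a statement about whether the options differ significantly.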
Impact of air pollution and temperature on adverse birth outcomes: Madrid, 2001-2009.
Arroyo, Virginia; Díaz, Julio; Carmona, Rocío; Ortiz, Cristina; Linares, Cristina
2016-11-01
Low birth weight (<2500 g) (LBW), premature birth (<37 weeks of gestation) (PB), and late foetal death (<24 h of life) (LFD) are causes of perinatal morbi-mortality, with short- and long-term social and economic health impacts. This study sought to identify gestational windows of susceptibility during pregnancy and to analyse and quantify the impact of different air pollutants, noise and temperature on these adverse birth outcomes. A time-series study was conducted to assess the impact of mean daily PM2.5, NO2 and O3 (μg/m³), mean daily diurnal (Leqd) and nocturnal (Leqn) noise levels (dB(A)), and maximum and minimum daily temperatures (°C) on the number of births with LBW, PB or LFD in Madrid across the period 2001-2009. We controlled for linear trend, seasonality and autoregression. Poisson regression models were fitted for quantification of the results. The final models were expressed as relative risk (RR) and population attributable risk (PAR). Leqd was observed to have impacts on LBW at the onset of gestation, in the second trimester and in the week of birth itself. NO2 had an impact in the second trimester. For PB, impacts were observed for Leqd in the second trimester, Leqn in the week before birth and PM2.5 in the second trimester. For LFD, impacts were observed for PM2.5 in the third trimester and for minimum temperature. O3 proved significant in the first trimester for LBW and PB, and in the second trimester for LFD. Pollutant concentrations, noise and temperature influenced the weekly average of new-borns with LBW, PB and LFD in Madrid. Special note should be taken of the effect of diurnal noise on LBW across the entire pregnancy. The exposure of the pregnant population to the environmental factors analysed should therefore be controlled with a view to reducing perinatal morbi-mortality. Copyright © 2016 Elsevier Ltd. All rights reserved.
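The modelling strategy (Poisson regression with seasonal control, effects expressed as RR and PAR) can be sketched as follows; the data are simulated, the seasonal control is reduced to a single harmonic, and the trimester-specific lag structure of the exposures is omitted for brevity, so this is an illustration of the general approach rather than the study's models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical weekly series: low-birth-weight counts, diurnal noise and PM2.5 exposure
rng = np.random.default_rng(3)
n = 9 * 52
df = pd.DataFrame({
    "t": np.arange(n),
    "leqd": rng.normal(65, 3, n),        # diurnal noise, dB(A)
    "pm25": rng.normal(18, 5, n),        # µg/m3
})
lam = np.exp(1.5 + 0.01 * (df["leqd"] - 65) + 0.005 * (df["pm25"] - 18)
             + 0.1 * np.sin(2 * np.pi * df["t"] / 52))   # built-in seasonality
df["lbw"] = rng.poisson(lam)

X = sm.add_constant(df[["leqd", "pm25"]])
X["sin"] = np.sin(2 * np.pi * df["t"] / 52)   # crude seasonal control (one harmonic)
X["cos"] = np.cos(2 * np.pi * df["t"] / 52)
fit = sm.GLM(df["lbw"], X, family=sm.families.Poisson()).fit()

rr_per_db = np.exp(fit.params["leqd"])   # relative risk per 1 dB(A) increase
par = (rr_per_db - 1) / rr_per_db        # attributable fraction, assuming full exposure
print(f"RR per dB(A) = {rr_per_db:.3f}, PAR = {par:.3%}")
```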
Know your limits? Climate extremes impact the range of Scots pine in unexpected places
Julio Camarero, J.; Gazol, Antonio; Sancho-Benages, Santiago; Sangüesa-Barreda, Gabriel
2015-01-01
Background and Aims Although extreme climatic events such as drought are known to modify forest dynamics by triggering tree dieback, the impact of extreme cold events, especially at the low-latitude margin (‘rear edge’) of species distributional ranges, has received little attention. The aim of this study was to examine the impact of one such extreme cold event on a population of Scots pine (Pinus sylvestris) along the species’ European southern rear-edge range limit and to determine how such events can be incorporated into species distribution models (SDMs). Methods A combination of dendrochronology and field observation was used to quantify how an extreme cold event in 2001 in eastern Spain affected growth, needle loss and mortality of Scots pine. Long-term European climatic data sets were used to contextualize the severity of the 2001 event, and an SDM for Scots pine in Europe was used to predict climatic range limits. Key Results The 2001 winter reached record minimum temperatures (equivalent to the maximum European-wide diurnal ranges) and, for trees already stressed by a preceding dry summer and autumn, this caused dieback and large-scale mortality. Needle loss and mortality were particularly evident in south-facing sites, where post-event recovery was greatly reduced. The SDM predicted European Scots pine distribution mainly on the basis of responses to maximum and minimum monthly temperatures, but in comparison with this the observed effects of the 2001 cold event at the southerly edge of the range limit were unforeseen. Conclusions The results suggest that in order to better forecast how anthropogenic climate change might affect future forest distributions, distribution modelling techniques such as SDMs must incorporate climatic extremes. For Scots pine, this study shows that the effects of cold extremes should be included across the entire distribution margin, including the southern ‘rear edge’, in order to avoid biased predictions based solely on warmer climatic scenarios. PMID:26292992
The effect of the learner license Graduated Driver Licensing components on teen drivers' crashes.
Ehsani, Johnathon Pouya; Bingham, C Raymond; Shope, Jean T
2013-10-01
Most studies evaluating the effectiveness of Graduated Driver Licensing (GDL) have focused on the overall system. Studies examining individual components have rarely accounted for the confounding of multiple, simultaneously implemented components. The purpose of this paper is to quantify the effects of a required learner license duration and required hours of supervised driving on teen driver fatal crashes. States that introduced a single GDL component independent of any other during the period 1990-2009 were identified. Monthly and quarterly fatal crash rates per 100,000 population of 16- and 17-year-old drivers were analyzed using single-state time series analysis, adjusting for adult crash rates and gasoline prices. Using the parameter estimates from each state's time series model, the pooled effect of each GDL component on 16- and 17-year-old drivers' fatal crashes was estimated using a random effects meta-analytic model to combine findings across states. In three states, a six-month minimum learner license duration was associated with a significant decline in combined 16- and 17-year-old drivers' fatal crash rates. The pooled effect of the minimum learner license duration across all states in the sample was associated with a significant change in combined 16- and 17-year-old driver fatal crash rates of -.07 (95% Confidence Interval [CI] -.11, -.03). Following the introduction of 30 h of required supervised driving in one state, novice drivers' fatal crash rates increased 35%. The pooled effect across all states in the study sample of having a supervised driving hour requirement was not significantly different from zero (.04, 95% CI -.15, .22). These findings suggest that a learner license duration of at least six-months may be necessary to achieve a significant decline in teen drivers' fatal crash rates. Evidence of the effect of required hours of supervised driving on teen drivers' fatal crash rates was mixed. Copyright © 2013 Elsevier Ltd. All rights reserved.
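The pooling step can be sketched with a DerSimonian-Laird random-effects model; the per-state effect estimates and standard errors below are invented for illustration, and this generic estimator is an assumption about the form of the random-effects meta-analysis rather than the paper's exact code.

```python
import numpy as np

def dersimonian_laird(estimates, std_errors):
    """Pool per-state effect estimates with a random-effects model (DerSimonian-Laird)."""
    y = np.asarray(estimates, float)
    v = np.asarray(std_errors, float) ** 2
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)                  # heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                         # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# hypothetical per-state effects of a six-month learner period on fatal crash rates
effects = [-0.10, -0.05, -0.08, -0.02]
ses = [0.03, 0.04, 0.05, 0.04]
print("pooled effect and 95% CI:", dersimonian_laird(effects, ses))
```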
NASA Technical Reports Server (NTRS)
Ahmad, Rashid A.; McCool, Alex (Technical Monitor)
2001-01-01
An enhanced performance solid rocket booster concept for the space shuttle system has been proposed. The concept booster will have strong commonality with the existing, proven, reliable four-segment Space Shuttle Reusable Solid Rocket Motors (RSRM) with individual component design (nozzle, insulator, etc.) optimized for a five-segment configuration. Increased performance is desirable to further enhance safety/reliability and/or increase payload capability. Performance increase will be achieved by adding a fifth propellant segment to the current four-segment booster and opening the throat to accommodate the increased mass flow while maintaining current pressure levels. One development concept under consideration is the static test of a "standard" RSRM with a fifth propellant segment inserted and appropriate minimum motor modifications. Feasibility studies are being conducted to assess the potential for any significant departure in component performance/loading from the well-characterized RSRM. An area of concern is the aft motor (submerged nozzle inlet, aft dome, etc.) where the altered internal flow resulting from the performance enhancing features (25% increase in mass flow rate, higher Mach numbers, modified subsonic nozzle contour) may result in increased component erosion and char. To assess this issue and to define the minimum design changes required to successfully static test a fifth segment RSRM engineering test motor, internal flow studies have been initiated. Internal aero-thermal environments were quantified in terms of conventional convective heating and discrete phase alumina particle impact/concentration and accretion calculations via Computational Fluid Dynamics (CFD) simulation. Two sets of comparative CFD simulations of the RSRM and the five-segment motor (FSM) concept were conducted with the commercial CFD code FLUENT. The first simulation involved a two-dimensional axisymmetric model of the full motor, initial grain RSRM. The second set of analyses included three-dimensional models of the RSRM and FSM aft motors with four-degree vectored nozzles.
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Dalton, Angela C.; Dale, Crystal
2014-06-01
Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
NASA Astrophysics Data System (ADS)
Karlovits, G. S.; Villarini, G.; Bradley, A.; Vecchi, G. A.
2014-12-01
Forecasts of seasonal precipitation and temperature can provide information in advance of potentially costly disruptions caused by flood and drought conditions. The consequences of these adverse hydrometeorological conditions may be mitigated through informed planning and response, given useful and skillful forecasts of these conditions. However, the potential value and applicability of these forecasts is unavoidably linked to their forecast quality. In this work we evaluate the skill of four global circulation models (GCMs) that are part of the North American Multi-Model Ensemble (NMME) project in forecasting seasonal precipitation and temperature over the continental United States. The GCMs we consider are the Geophysical Fluid Dynamics Laboratory (GFDL)-CM2.1, NASA Global Modeling and Assimilation Office (NASA-GMAO)-GEOS-5, The Center for Ocean-Land-Atmosphere Studies - Rosenstiel School of Marine & Atmospheric Science (COLA-RSMAS)-CCSM3, and the Canadian Centre for Climate Modeling and Analysis (CCCma)-CanCM4. These models are available at 1-degree spatial and monthly temporal resolution, with forecast lead times of at least nine months and up to one year. These model ensembles are compared against gridded monthly temperature and precipitation data created by the PRISM Climate Group, which represents the reference observation dataset in this work. Aspects of forecast quality are quantified using a diagnostic skill score decomposition that allows the evaluation of the potential skill and conditional and unconditional biases associated with these forecasts. The evaluation of the decomposed GCM forecast skill over the continental United States, by season and by lead time, allows for a better understanding of the utility of these models for flood and drought predictions. Moreover, it also represents a diagnostic tool that could provide model developers feedback about strengths and weaknesses of their models.
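One common form of such a decomposition (following Murphy's MSE-based skill score) splits skill into potential skill minus conditional and unconditional biases; the sketch below applies it to simulated forecast-observation pairs and is an assumption about the general form of the decomposition, not the paper's exact formulation.

```python
import numpy as np

def skill_decomposition(forecast, observed):
    """Murphy-style MSE skill score decomposition (reference forecast: climatology).

    Returns (skill_score, potential_skill, conditional_bias, unconditional_bias)
    so that skill_score = potential_skill - conditional_bias - unconditional_bias.
    """
    f, o = np.asarray(forecast, float), np.asarray(observed, float)
    r = np.corrcoef(f, o)[0, 1]
    potential = r ** 2                                     # potential skill
    cond_bias = (r - f.std() / o.std()) ** 2               # conditional (amplitude) bias
    uncond_bias = ((f.mean() - o.mean()) / o.std()) ** 2   # unconditional (mean) bias
    return potential - cond_bias - uncond_bias, potential, cond_bias, uncond_bias

# hypothetical seasonal precipitation forecasts vs. gridded observations
rng = np.random.default_rng(0)
obs = rng.gamma(4, 25, 200)
fcst = 0.6 * obs + rng.normal(15, 20, 200)   # correlated but biased forecast
print(skill_decomposition(fcst, obs))
```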
The SME gauge sector with minimum length
NASA Astrophysics Data System (ADS)
Belich, H.; Louzada, H. L. C.
2017-12-01
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.
Chalmers, Jenny; Carragher, Natacha; Davoren, Sondra; O'Brien, Paula
2013-11-01
A burgeoning body of empirical evidence demonstrates that increases in the price of alcohol can reduce per capita alcohol consumption and harmful drinking. Taxes on alcohol can be raised to increase prices, but this strategy can be undermined if the industry absorbs the tax increase and cross-subsidises the price of one alcoholic beverage with other products. Such loss-leading strategies are not possible with minimum pricing. We argue that a minimum (or floor) price for alcohol should be used as a complement to alcohol taxation. Several jurisdictions have already introduced minimum pricing (e.g., Canada, Ukraine) and others are currently investigating pathways to introduce a floor price (e.g., Scotland). Tasked by the Australian government to examine the public interest case for a minimum price, Australia's peak preventative health agency recommended against setting one at the present time. The agency was concerned that there was insufficient Australian specific modelling evidence to make robust estimates of the net benefits. Nonetheless, its initial judgement was that it would be difficult for a minimum price to produce benefits for Australia at the national level. Whilst modelling evidence is certainly warranted to support the introduction of the policy, the development and uptake of policy is influenced by more than just empirical evidence. This article considers three potential impediments to minimum pricing: public opinion and misunderstandings or misgivings about the operation of a minimum price; the strength of alcohol industry objections and measures to undercut the minimum price through discounts and promotions; and legal obstacles including competition and trade law. The analysis of these factors is situated in an Australian context, but has salience internationally. Copyright © 2013 Elsevier B.V. All rights reserved.
Zeng, Xing; Chen, Cheng; Wang, Yuanyuan
2012-12-01
In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with the Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve the medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise power needed in calculating the Wiener postfilter are estimated respectively by the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and cyst phantom using both simulated data and experimental data and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformer. We use the full width at half maximum (FWHM) and the peak-side-lobe level (PSL) to quantify the performance of imaging resolution and the contrast ratio (CR) to quantify the performance of imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformer, while the PSL is 127.2 dB, 115 dB and 60 dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformer respectively. In addition, the effect of the sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay and the results show that the new beamformer provides better robustness against the sound speed errors. Therefore, the proposed beamformer offers a better performance than the DAS, MV and ESBMV beamformer, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
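A loose, single-pixel sketch of the main ingredients (minimum-variance weights, eigenspace projection, and a Wiener-type postfilter) is given below; the array geometry, toy data, and in particular the way the signal and noise powers are estimated from the eigenvalues are simplifying assumptions that differ from the paper's exact formulation.

```python
import numpy as np

def esbmv_wiener_output(snapshots, steering, signal_dim=1, diag_load=1e-3):
    """One-pixel output of an eigenspace-based MV beamformer with a Wiener-type postfilter.

    snapshots : (n_elements, n_samples) delay-aligned channel data for the imaging point
    steering  : (n_elements,) steering vector (all ones after delay alignment)
    signal_dim: number of eigenvectors kept as the signal subspace
    """
    n, m = snapshots.shape
    R = snapshots @ snapshots.conj().T / m
    R = R + diag_load * np.trace(R).real / n * np.eye(n)      # diagonal loading

    # minimum-variance (Capon) weights
    Ri_a = np.linalg.solve(R, steering)
    w_mv = Ri_a / (steering.conj() @ Ri_a)

    # eigenspace projection: keep the dominant (signal) subspace
    vals, vecs = np.linalg.eigh(R)                             # eigenvalues ascending
    Es = vecs[:, -signal_dim:]
    w_esbmv = Es @ (Es.conj().T @ w_mv)

    # Wiener-type postfilter: estimated signal power over total output power
    p_total = np.real(w_esbmv.conj() @ R @ w_esbmv)
    p_noise = np.real(vals[:-signal_dim].mean()) * np.real(w_esbmv.conj() @ w_esbmv)
    h_wiener = max(p_total - p_noise, 0.0) / p_total
    return h_wiener * (w_esbmv.conj() @ snapshots).mean()

# toy example: 64-element array, constant target amplitude plus noise
rng = np.random.default_rng(1)
a = np.ones(64)
data = 0.5 * np.outer(a, np.ones(32)) + 0.1 * rng.standard_normal((64, 32))
print(abs(esbmv_wiener_output(data, a)))
```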
Characterizing Physical Habitat of a Mixed-Land Use Stream of the Central U.S.
NASA Astrophysics Data System (ADS)
Hooper, L. W.; Hubbart, J. A.; Hosmer, G. W.; Hogan, M. L.
2014-12-01
Land use altered flow regime impacts on aquatic biological habitat can be quantified by means of a physical habitat assessment (PHA). PHA metrics include (but are not limited to) channel substrate, width and wetted width, bank slope, and bank height. Hinkson Creek, located in Boone County, Missouri, was placed on the Missouri Department of Natural Resources list of impaired waters (Section 303d) of the Clean Water Act in 1998. A physical habitat assessment of Hinkson Creek in 2014 provides quantitative data characterizing the current potential of Hinkson Creek to fully support aquatic life, specifically macroinvertebrates (a goal for delisting). The PHA was conducted every 100m of Hinkson Creek (56km). Results from the lower 87.9% (contiguous) of the drainage indicate channel width ranged from a maximum of 70m to a minimum of 4.6m, with a mean width of 17m and standard deviation (SD) of 7.4m. Bankfull width ranged from a maximum of 74m to a minimum of 8.8m (mean = 26.1m, SD = 8.2). Bank height ranged from a maximum of 5.8m to a minimum of 0.4m (mean = 2.9m, SD = 1m). Mean bank angle for the left and right banks was nearly equivalent (left = 33.8°, right = 34.6°). Bank height and bankfull width increased with increasing drainage distance. Trench pools were the dominant channel unit at 71.4% of the sample transects, while riffles were present at 16.6%. Analysis of stream channel bed composition was conducted using a modified Wolman pebble count survey at each site and Thalweg profile between sites. Size class results were quantified as follows: 56.1% fines (16mm or less), 36.2% intermediate (16mm to 1000mm, plus vegetation and wood), 8.7% large/bedrock (greater than 1000mm, riprap and bedrock). Study results provide science-based information to better equip land planners in Hinkson Creek watershed and similar multi-use watersheds of the central United States for future management decisions and development scenarios.
Seasonal regional forecast of the minimum sea ice extent in the Laptev Sea
NASA Astrophysics Data System (ADS)
Tremblay, B.; Brunette, C.; Newton, R.
2017-12-01
Late winter anomaly of sea ice export from the peripheral seas of the Arctic Ocean was found to be a useful predictor for the minimum sea ice extent (SIE) in the Arctic Ocean (Williams et al., 2017). In the following, we present a proof of concept for a regional seasonal forecast of the min SIE for the Laptev Sea based on late winter coastal divergence quantified using a Lagrangian Ice Tracking System (LITS) forced with satellite-derived sea-ice drifts from the Polar Pathfinder. Following Nikolaeva and Sesterikov (1970), we track an imaginary line just offshore of coastal polynyas in the Laptev Sea from December of the previous year to May 1 of the following year using LITS. Results show that coastal divergence in the Laptev Sea between February 1st and May 1st is best correlated (r = -0.61) with the following September minimum SIE, in accord with previous results from Krumpen et al. (2013, for the Laptev Sea) and Williams et al. (2017, for the pan-Arctic). This gives a maximum seasonal predictability of Laptev Sea min SIE anomalies from observations of approximately 40%. Coastal ice divergence leads to formation of thinner ice that melts earlier in early summer, hence creating areas of open water that have a lower albedo and trigger an ice-albedo feedback. In the Laptev Sea, we find that anomalies of coastal divergence in late winter are amplified threefold to result in the September SIE. We also find a correlation coefficient r = 0.49 between February-March-April (FMA) anomalies of coastal divergence and the FMA-averaged AO index. Interestingly, the correlation is stronger, r = 0.61, when comparing the FMA coastal divergence anomalies to the DJFMA-averaged AO index. It is hypothesized that the AO index at the beginning of the winter (and the associated anomalous sea ice export) also contains information that impacts the magnitude of coastal divergence opening later in the winter. Our approach differs from previous approaches (e.g., Krumpen et al. and Williams et al.) in that the coastal divergence is quantified directly by following the edge of the mobile pack ice in a Lagrangian manner.
Urban-rural migration: uncertainty and the effect of a change in the minimum wage.
Ingene, C A; Yu, E S
1989-01-01
"This paper extends the neoclassical, Harris-Todaro model of urban-rural migration to the case of production uncertainty in the agricultural sector. A unique feature of the Harris-Todaro model is an exogenously determined minimum wage in the urban sector that exceeds the rural wage. Migration occurs until the rural wage equals the expected urban wage ('expected' due to employment uncertainty). The effects of a change in the minimum wage upon regional outputs, resource allocation, factor rewards, expected profits, and expected national income are explored, and the influence of production uncertainty upon the obtained results are delineated." The geographical focus is on developing countries. excerpt
NASA Astrophysics Data System (ADS)
Kurosaki, Yuzuru; Artamonov, Maxim; Ho, Tak-San; Rabitz, Herschel
2009-07-01
Quantum wave packet optimal control simulations with intense laser pulses have been carried out for studying molecular isomerization dynamics of a one-dimensional (1D) reaction-path model involving a dominant competing dissociation channel. The 1D intrinsic reaction coordinate model mimics the ozone open→cyclic ring isomerization along the minimum energy path that successively connects the ozone cyclic ring minimum, the transition state (TS), the open (global) minimum, and the dissociative O2+O asymptote on the O3 ground-state A1' potential energy surface. Energetically, the cyclic ring isomer, the TS barrier, and the O2+O dissociation channel lie at ˜0.05, ˜0.086, and ˜0.037 hartree above the open isomer, respectively. The molecular orientation of the modeled ozone is held constant with respect to the laser-field polarization and several optimal fields are found that all produce nearly perfect isomerization. The optimal control fields are characterized by distinctive high temporal peaks as well as low frequency components, thereby enabling abrupt transfer of the time-dependent wave packet over the TS from the open minimum to the targeted ring minimum. The quick transition of the ozone wave packet avoids detrimental leakage into the competing O2+O channel. It is possible to obtain weaker optimal laser fields, resulting in slower transfer of the wave packets over the TS, when a reduced level of isomerization is satisfactory.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.
Cadena, Brian C
2014-03-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.
The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.
Bullinger, Lindsey Rose
2017-03-01
To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
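A stripped-down sketch of the two-way fixed-effects (difference-in-differences) regression is shown below with simulated panel data; the clustering choice and the omission of the state-specific nonlinear time trends and covariates used in the paper are simplifying assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical state-quarter panel: log adolescent birth rate vs. state minimum wage
rng = np.random.default_rng(11)
states = [f"s{i}" for i in range(20)]
quarters = pd.period_range("2003Q1", "2014Q4", freq="Q")
panel = pd.DataFrame([(s, str(q)) for s in states for q in quarters],
                     columns=["state", "qtr"])
panel["min_wage"] = 5.15 + rng.uniform(0, 3, len(panel))        # placeholder policy variable
panel["log_birth_rate"] = 3.0 - 0.02 * panel["min_wage"] + rng.normal(0, 0.05, len(panel))

# two-way fixed effects ("difference-in-differences") with state and quarter dummies
fit = smf.ols("log_birth_rate ~ min_wage + C(state) + C(qtr)", data=panel).fit(
    cov_type="cluster",
    cov_kwds={"groups": panel["state"].astype("category").cat.codes},
)
effect = np.expm1(fit.params["min_wage"])
print(f"estimated change in birth rate per $1 of minimum wage: {effect:.1%}")
```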
Perrakis, Konstantinos; Gryparis, Alexandros; Schwartz, Joel; Le Tertre, Alain; Katsouyanni, Klea; Forastiere, Francesco; Stafoggia, Massimo; Samoli, Evangelia
2014-12-10
An important topic when estimating the effect of air pollutants on human health is choosing the best method to control for seasonal patterns and time varying confounders, such as temperature and humidity. Semi-parametric Poisson time-series models include smooth functions of calendar time and weather effects to control for potential confounders. Case-crossover (CC) approaches are considered efficient alternatives that control seasonal confounding by design and allow inclusion of smooth functions of weather confounders through their equivalent Poisson representations. We evaluate both methodological designs with respect to seasonal control and compare spline-based approaches, using natural splines and penalized splines, and two time-stratified CC approaches. For the spline-based methods, we consider fixed degrees of freedom, minimization of the partial autocorrelation function, and general cross-validation as smoothing criteria. Issues of model misspecification with respect to weather confounding are investigated under simulation scenarios, which allow quantifying omitted, misspecified, and irrelevant-variable bias. The simulations are based on fully parametric mechanisms designed to replicate two datasets with different mortality and atmospheric patterns. Overall, minimum partial autocorrelation function approaches provide more stable results for high mortality counts and strong seasonal trends, whereas natural splines with fixed degrees of freedom perform better for low mortality counts and weak seasonal trends followed by the time-season-stratified CC model, which performs equally well in terms of bias but yields higher standard errors. Copyright © 2014 John Wiley & Sons, Ltd.
Lagrangian model of copepod dynamics: Clustering by escape jumps in turbulence
NASA Astrophysics Data System (ADS)
Ardeshiri, H.; Benkeddad, I.; Schmitt, F. G.; Souissi, S.; Toschi, F.; Calzavarini, E.
2016-04-01
Planktonic copepods are small crustaceans that have the ability to swim by quick powerful jumps. Such an aptness is used to escape from high shear regions, which may be caused either by flow perturbations, produced by a large predator (i.e., fish larvae), or by the inherent highly turbulent dynamics of the ocean. Through a combined experimental and numerical study, we investigate the impact of jumping behavior on the small-scale patchiness of copepods in a turbulent environment. Recorded velocity tracks of copepods displaying escape response jumps in still water are here used to define and tune a Lagrangian copepod (LC) model. The model is further employed to simulate the behavior of thousands of copepods in a fully developed hydrodynamic turbulent flow obtained by direct numerical simulation of the Navier-Stokes equations. First, we show that the LC velocity statistics is in qualitative agreement with available experimental observations of copepods in turbulence. Second, we quantify the clustering of LC, via the fractal dimension D2. We show that D2 can be as low as ˜2.3 and that it critically depends on the shear-rate sensitivity of the proposed LC model, in particular it exhibits a minimum in a narrow range of shear-rate values. We further investigate the effect of jump intensity, jump orientation, and geometrical aspect ratio of the copepods on the small-scale spatial distribution. At last, possible ecological implications of the observed clustering on encounter rates and mating success are discussed.
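The fractal (correlation) dimension D2 mentioned above is commonly estimated from the scaling of the pair-correlation sum, C(r) proportional to r^D2; the sketch below shows such an estimate on toy particle positions and is a generic illustration with arbitrarily chosen radii, not the study's code.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(points, r_min, r_max, n_r=12):
    """Estimate D2 from the scaling C(r) ~ r**D2 of the pair-correlation sum."""
    d = pdist(points)                                   # all pairwise distances
    radii = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    corr_sum = np.array([(d < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(corr_sum), 1)
    return slope

# toy example: particles concentrated on a thin sheet inside a unit box (D2 near 2)
rng = np.random.default_rng(5)
pts = np.c_[rng.uniform(0, 1, 2000),
            rng.uniform(0, 1, 2000),
            1e-3 * rng.standard_normal(2000)]
print(f"D2 estimate: {correlation_dimension(pts, 0.01, 0.2):.2f}")
```

A uniformly space-filling suspension would give D2 close to 3, so values around 2.3, as reported above, quantify how strongly the escape jumps concentrate the copepods onto lower-dimensional structures.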
HYDRAULICS AND MIXING EVALUATIONS FOR NT-21/41 TANKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.; Barnes, O.
2014-11-17
The hydraulic results demonstrate that a pump head pressure of 20 psi recirculates a flowrate of about 5.6 liters/min through the existing 0.131-inch orifice when a valve connected to NT-41 is closed. When the valve to NT-41 is open, the solution flowrates to the HB-Line tanks NT-21 and NT-41 are found to be about 0.5 lpm and 5.2 lpm, respectively. The modeling calculations for the mixing operations of miscible fluids contained in the HB-Line tank NT-21 were performed by taking a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against the literature results and the previous SRNL test results to validate the model. Final performance calculations were performed for the nominal case by using the validated model to quantify the mixing time for the HB-Line tank. The results demonstrate that when a pump recirculates a solution volume of 5.7 liters every minute out of the 72-liter tank contents containing two acid solutions of 2.7 M and 0 M concentrations (i.e., water), a minimum mixing time of 1.5 hours is adequate to get the tank contents adequately mixed. In addition, the sensitivity results for the tank contents of 8 M existing solution and 1.5 M incoming species show that the mixing time takes about 2 hours to get the solutions mixed.
Pakdel, Amir R; Whyne, Cari M; Fialkov, Jeffrey A
2017-06-01
The trend towards optimizing stabilization of the craniomaxillofacial skeleton (CMFS) with the minimum amount of fixation required to achieve union, and away from maximizing rigidity, requires a quantitative understanding of craniomaxillofacial biomechanics. This study uses computational modeling to quantify the structural biomechanics of the CMFS under maximal physiologic masticatory loading. Using an experimentally validated subject-specific finite element (FE) model of the CMFS, the patterns of stress and strain distribution as a result of physiological masticatory loading were calculated. The trajectories of the stresses were plotted to delineate compressive and tensile regimes over the entire CMFS volume. The lateral maxilla was found to be the primary vertical buttress under maximal bite force loading, with much smaller involvement of the naso-maxillary buttress. There was no evidence that the pterygo-maxillary region is a buttressing structure, counter to classical buttress theory. The stresses at the zygomatic sutures suggest that two-point fixation of zygomatic complex fractures may be sufficient for fixation under bite force loading. The current experimentally validated biomechanical FE model of the CMFS is a practical tool for in silico optimization of current practice techniques and may be used as a foundation for the development of design criteria for future technologies for the treatment of CMFS injury and disease. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Hemminger, Jessica; Kurt, Habibe; Lozanski, Gerard; Gurcan, Metin
2014-03-01
Vascularity represents an important element of tissue/tumor microenvironment and is implicated in tumor growth, metastatic potential and resistance to therapy. Small blood vessels can be visualized using immunohistochemical stains specific to vascular cells. However, currently used manual methods to assess vascular density are poorly reproducible and are at best semi-quantitative. Computer-based quantitative and objective methods to measure microvessel density are urgently needed to better understand and clinically utilize microvascular density information. We propose a new method to quantify vascularity from images of bone marrow biopsies stained for CD34 vascular lining cells protein as a model. The method starts by automatically segmenting the blood vessels by methods of maxlink thresholding and minimum graph cuts. The segmentation is followed by morphological post-processing to reduce blast and small spurious objects from the bone marrow images. To classify the images into one of the four grades, we extracted 20 features from the segmented blood vessel images. These features include the first four moments of the distribution of the area of blood vessels, and the first four moments of the distributions of 1) the edge weights in the minimum spanning tree of the blood vessels, 2) the shortest distance between blood vessels, 3) the homogeneity of the shortest distance (absolute difference in distance between consecutive blood vessels along the shortest path) between blood vessels and 4) blood vessel orientation. The method was tested on 26 bone marrow biopsy images stained with CD34 IHC stain, which were evaluated by three pathologists. The pathologists took part in this study by quantifying blood vessel density using gestalt assessment in hematopoietic bone marrow portions of bone marrow core biopsy images. To determine the intra-reader variability, each image was graded twice by each pathologist with a two-week interval in between their readings. For each image, the ground truth (grade) was acquired through consensus among the three pathologists at the end of the study. A ranking of the features reveals that the fourth moment of the distribution of the area of blood vessels along with the first moment of the distribution of the shortest distance between blood vessels can correctly grade 68.2% of the bone marrow biopsies, while the intra- and inter-reader variability among the pathologists are 66.9% and 40.0%, respectively.
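One of the feature families (moments of the edge weights of the blood vessels' minimum spanning tree) can be sketched as follows; the centroid coordinates are invented and this illustrates only the feature computation, not the segmentation or classification pipeline described above.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from scipy.stats import kurtosis, skew

def first_four_moments(values):
    return [np.mean(values), np.var(values), skew(values), kurtosis(values)]

def mst_edge_features(centroids):
    """First four moments of the edge weights of the vessels' minimum spanning tree.

    centroids : (n_vessels, 2) array of segmented blood-vessel centroid coordinates
    """
    dist = squareform(pdist(centroids))           # dense pairwise distance matrix
    mst = minimum_spanning_tree(dist).toarray()   # sparse MST as a dense array
    edges = mst[mst > 0]                          # the n_vessels - 1 edge weights
    return first_four_moments(edges)

# hypothetical centroids from a segmented CD34-stained biopsy image
rng = np.random.default_rng(2)
centroids = rng.uniform(0, 1000, size=(40, 2))
print(np.round(mst_edge_features(centroids), 2))
```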
Graham, Ryan B; Brown, Stephen H M
2012-06-01
Stability of the spinal column is critical to bear loads, allow movement, and at the same time avoid injury and pain. However, there has been a debate in recent years as to how best to define and quantify spine stability, with the outcome being that different methods are used without a clear understanding of how they relate to one another. Therefore, the goal of the present study was to directly compare lumbar spine rotational stiffness, calculated with an EMG-driven biomechanical model, to local dynamic spine stability calculated using Lyapunov analyses of kinematic data, during a series of continuous dynamic lifting challenges. Twelve healthy male subjects performed 30 repetitive lifts under three varying load and three varying rate conditions. With an increase in the load lifted (constant rate) there was a significant increase in mean, maximum, and minimum spine rotational stiffness (p<0.001) and a significant increase in local dynamic stability (p<0.05); both stability measures were moderately to strongly related to one another (r=-0.55 to -0.71). With an increase in lifting rate (constant load), there was also a significant increase in mean and maximum spine rotational stiffness (p<0.01); however, there was a non-significant decrease in the minimum rotational stiffness and a non-significant decrease in local dynamic stability (p>0.05). Weak linear relationships were found for the varying rate conditions (r=-0.02 to -0.27). The results suggest that spine rotational stiffness and local dynamic stability are closely related to one another, as they provided similar information when movement rate was controlled. However, based on the results from the changing lifting rate conditions, it is evident that both models provide unique information and that future research is required to completely understand the relationship between the two models. Using both techniques concurrently may provide the best information regarding the true effects of (in) stability under different loading and movement scenarios, and in comparing healthy and clinical populations. Copyright © 2012 Elsevier Ltd. All rights reserved.
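Local dynamic stability of this kind is typically quantified by the largest finite-time Lyapunov exponent of a delay-embedded kinematic signal (a Rosenstein-style estimate); the sketch below is a simplified illustration with arbitrary embedding parameters and toy data, not the study's processing pipeline.

```python
import numpy as np
from scipy.spatial.distance import cdist

def max_lyapunov(x, dim=5, tau=10, fit_range=(1, 50)):
    """Rosenstein-style estimate of the largest Lyapunov exponent of a scalar series x."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])  # delay embedding

    d = cdist(emb, emb)
    for i in range(n):                       # exclude temporally adjacent points as neighbours
        d[i, max(0, i - tau) : min(n, i + tau + 1)] = np.inf
    nn = d.argmin(axis=1)                    # nearest neighbour of each embedded point

    k0, k1 = fit_range
    div = np.empty(k1)
    for k in range(k1):                      # mean log divergence after k steps
        valid = (np.arange(n) + k < n) & (nn + k < n)
        sep = np.linalg.norm(emb[np.arange(n)[valid] + k] - emb[nn[valid] + k], axis=1)
        div[k] = np.log(sep[sep > 0]).mean()

    slope, _ = np.polyfit(np.arange(k0, k1), div[k0:k1], 1)
    return slope                             # per sample; larger values = less locally stable

# toy example: noisy periodic "lifting" kinematics (about 100 samples per cycle)
t = np.linspace(0, 15 * 2 * np.pi, 1500)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(f"lambda_max estimate: {max_lyapunov(signal):.4f} per sample")
```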
Entanglement witnesses in spin models
NASA Astrophysics Data System (ADS)
Tóth, Géza
2005-01-01
We construct entanglement witnesses using fundamental quantum operators of spin models which contain two-particle interactions and have a certain symmetry. By choosing the Hamiltonian as such an operator, our method can be used for detecting entanglement by energy measurement. We apply this method to the Heisenberg model in a cubic lattice with a magnetic field, the XY model, and other familiar spin systems. Our method provides a temperature bound for separable states for systems in thermal equilibrium. We also study the Bose-Hubbard model and relate its energy minimum for separable states to the minimum obtained from the Gutzwiller ansatz.
The influence of periodic wind turbine noise on infrasound array measurements
NASA Astrophysics Data System (ADS)
Pilger, Christoph; Ceranna, Lars
2017-02-01
Aerodynamic noise emissions from the continuously growing number of wind turbines in Germany are creating increasing problems for infrasound recording systems. These systems are equipped with highly sensitive micro pressure sensors accurately measuring acoustic signals in a frequency range inaudible to the human ear. Ten years of data (2006-2015) from the infrasound array IGADE in Northern Germany are analysed to quantify the influence of wind turbine noise on infrasound recordings. Furthermore, a theoretical model is derived and validated by a field experiment with mobile micro-barometer stations. Fieldwork was carried out in 2004 to measure the infrasonic pressure level of a single horizontal-axis wind turbine and to extrapolate the sound effect for a larger number of nearby wind turbines. The model estimates the generated sound pressure level of wind turbines and thus makes it possible to specify the minimum allowable distance between wind turbines and infrasound stations for undisturbed recording. This aspect is particularly important to guarantee the monitoring performance of the German infrasound stations I26DE in the Bavarian Forest and I27DE in Antarctica. These stations are part of the International Monitoring System (IMS) verifying compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), and thus have to meet stringent specifications with respect to infrasonic background noise.
CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan
Minimum-variance estimators for the parameter f_nl that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_nl from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound--which gives a lower limit to the variance of an estimator--by the bispectrum estimator implies that no further information on f_nl can be obtained from the trispectrum. Here, we elaborate the nature of the correlation between the bispectrum and trispectrum estimators for f_nl. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_nl beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.
Directional cultural change by modification and replacement of memes.
Cardoso, Gonçalo C; Atwell, Jonathan W
2011-01-01
Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations--modification of memes--to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.
Oil release from Macondo well MC252 following the Deepwater Horizon accident.
Griffiths, Stewart K
2012-05-15
Oil flow rates and cumulative discharge from the BP Macondo Prospect well in the Gulf of Mexico are calculated using a physically based model along with wellhead pressures measured at the blowout preventer (BOP) over the 86-day period following the Deepwater Horizon accident. Parameters appearing in the model are determined empirically from pressures measured during well shut-in and from pressures and flow rates measured the preceding day. This methodology rigorously accounts for ill-characterized evolution of the marine riser, installation and removal of collection caps, and any erosion at the wellhead. The calculated initial flow rate is 67,100 stock-tank barrels per day (stbd), which decays to 54,400 stbd just prior to installation of the capping stack and subsequent shut-in. The calculated cumulative discharge is 5.4 million stock-tank barrels, of which 4.6 million barrels entered the Gulf. Quantifiable uncertainties in these values are -9.3% and +7.5%, yielding a likely total discharge in the range from 4.9 to 5.8 million barrels. Minimum and maximum credible values of this discharge are 4.6 and 6.2 million barrels. Alternative calculations using the reservoir and sea-floor pressures indicate that any erosion within the BOP had little effect on cumulative discharge.
Ji, Lei; Peters, Albert J.
2004-01-01
The relationship between vegetation and climate in the grassland and cropland of the northern US Great Plains was investigated with Normalized Difference Vegetation Index (NDVI) (1989–1993) images derived from the Advanced Very High Resolution Radiometer (AVHRR), and climate data from automated weather stations. The relationship was quantified using a spatial regression technique that adjusts for spatial autocorrelation inherent in these data. Conventional regression techniques used frequently in previous studies are not adequate, because they are based on the assumption of independent observations. Six climate variables during the growing season (precipitation, potential evapotranspiration, daily maximum and minimum air temperature, soil temperature, and solar irradiation) were regressed on NDVI derived from a 10-km weather station buffer. The regression model identified precipitation and potential evapotranspiration as the most significant climatic variables, indicating that the water balance is the most important factor controlling vegetation condition at an annual timescale. The model indicates that 46% and 24% of variation in NDVI is accounted for by climate in grassland and cropland, respectively, indicating that grassland vegetation has a more pronounced response to climate variation than cropland. Other factors contributing to NDVI variation include environmental factors (soil, groundwater and terrain), human manipulation of crops, and sensor variation.
Speaking-rate-induced variability in F2 trajectories.
Tjaden, K; Weismer, G
1998-10-01
This study examined speaking-rate-induced spectral and temporal variability of F2 formant trajectories for target words produced in a carrier phrase at speaking rates ranging from fast to slow. F2 onset frequency measured at the first glottal pulse following the stop consonant release in target words was used to quantify the extent to which adjacent consonantal and vocalic gestures overlapped; F2 target frequency was operationally defined as the first occurrence of a frequency minimum or maximum following F2 onset frequency. Regression analyses indicated 70% of functions relating F2 onset and vowel duration were statistically significant. The strength of the effect was variable, however, and the direction of significant functions often differed from that predicted by a simple model of overlapping, sliding gestures. Results of a partial correlation analysis examining interrelationships among F2 onset, F2 target frequency, and vowel duration across the speaking rate range indicated that covariation of F2 target with vowel duration may obscure the relationship between F2 onset and vowel duration across rate. The results further suggested that a sliding-gesture model of acoustic variability associated with speaking rate change only partially accounts for the present data, and that such a view accounts for some speakers' data better than others.
Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A
2011-05-01
Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE
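A minimal sketch of the minimum-phase test described above, assuming purely illustrative filter coefficients rather than coefficients fitted to measured Wiener kernels: a discrete rational transfer function is minimum phase only if all of its zeros (and poles) lie inside the unit circle, which can be checked directly from the polynomial roots.

```python
# Sketch: checking a discrete filter H(z) = B(z)/A(z) for minimum-phase behavior
# by locating the zeros and poles of its rational transfer function. Zeros with
# |z| > 1 imply nonminimum-phase behavior. Coefficients below are hypothetical.
import numpy as np

b = [1.0, -2.5, 1.2]   # numerator (zero) coefficients
a = [1.0, -0.4, 0.08]  # denominator (pole) coefficients

zeros = np.roots(b)
poles = np.roots(a)

print("zeros:", zeros, "|z| =", np.abs(zeros))
print("poles:", poles, "|p| =", np.abs(poles))
is_min_phase = bool(np.all(np.abs(zeros) < 1) and np.all(np.abs(poles) < 1))
print("minimum phase:", is_min_phase)   # False here: one zero lies outside |z| = 1
```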
Controllability and observability analysis for vertex domination centrality in directed networks
NASA Astrophysics Data System (ADS)
Wang, Bingbo; Gao, Lin; Gao, Yong; Deng, Yue; Wang, Yu
2014-06-01
Topological centrality is a significant measure for characterising the relative importance of a node in a complex network. For directed networks that model dynamic processes, however, it is of more practical importance to quantify a vertex's ability to dominate (control or observe) the state of other vertices. In this paper, based on the determination of controllable and observable subspaces under the global minimum-cost condition, we introduce a novel direction-specific index, domination centrality, to assess the intervention capabilities of vertices in a directed network. Statistical studies demonstrate that the domination centrality is, to a great extent, encoded by the underlying network's degree distribution and that most network positions through which one can intervene in a system are vertices with high domination centrality rather than network hubs. To analyse the interaction and functional dependence between vertices when they are used to dominate a network, we define the domination similarity and detect significant functional modules in glossary and metabolic networks through clustering analysis. The experimental results provide strong evidence that our indices are effective and practical in accurately depicting the structure of directed networks.
Radiation exposure for manned Mars surface missions
NASA Technical Reports Server (NTRS)
Simonsen, Lisa C.; Nealy, John E.; Townsend, Lawrence W.; Wilson, John W.
1990-01-01
The Langley cosmic ray transport code and the Langley nucleon transport code (BRYNTRN) are used to quantify the transport and attenuation of galactic cosmic rays (GCR) and solar proton flares through the Martian atmosphere. Surface doses are estimated using both a low density and a high density carbon dioxide model of the atmosphere which, in the vertical direction, provides a total of 16 g/sq cm and 22 g/sq cm of protection, respectively. At the Mars surface during the solar minimum cycle, a blood-forming organ (BFO) dose equivalent of 10.5 to 12 rem/yr due to galactic cosmic ray transport and attenuation is calculated. Estimates of the BFO dose equivalents which would have been incurred from the three large solar flare events of August 1972, November 1960, and February 1956 are also calculated at the surface. Results indicate surface BFO dose equivalents of approximately 2 to 5, 5 to 7, and 8 to 10 rem per event, respectively. Doses are also estimated at altitudes up to 12 km above the Martian surface where the atmosphere will provide less total protection.
Space radiation dose estimates on the surface of Mars
NASA Technical Reports Server (NTRS)
Simonsen, Lisa C.; Nealy, John E.; Townsend, Lawrence W.; Wilson, John W.
1990-01-01
The Langley cosmic ray transport code and the Langley nucleon transport code (BRYNTRN) are used to quantify the transport and attenuation of galactic cosmic rays (GCR) and solar proton flares through the Martian atmosphere. Surface doses are estimated using both a low density and a high density carbon dioxide model of the atmosphere which, in the vertical direction, provides a total of 16 g/sq cm and 22 g/sq cm of protection, respectively. At the Mars surface during the solar minimum cycle, a blood-forming organ (BFO) dose equivalent of 10.5 to 12 rem/yr due to galactic cosmic ray transport and attenuation is calculated. Estimates of the BFO dose equivalents which would have been incurred from the three large solar flare events of August 1972, November 1960, and February 1956 are also calculated at the surface. Results indicate surface BFO dose equivalents of approximately 2 to 5, 5 to 7, and 8 to 10 rem per event, respectively. Doses are also estimated at altitudes up to 12 km above the Martian surface where the atmosphere will provide less total protection.
How Extreme is TRAPPIST-1? A look into the planetary system’s extreme-UV radiation environment
NASA Astrophysics Data System (ADS)
Peacock, Sarah; Barman, Travis; Shkolnik, Evgenya L.
2018-01-01
The ultracool dwarf star TRAPPIST-1 hosts three earth-sized planets at orbital distances where water has the potential to exist in liquid form on the planets’ surface. Close-in exoplanets, such as these, become vulnerable to water loss as stellar XUV radiation heats and expands their upper atmospheres. Currently, little is known about the high-energy radiation environment around TRAPPIST-1. Recent efforts to quantify the XUV radiation rely on empirical relationships based on X-ray or Lyman alpha line observations and yield very different results. The scaling relations used between the X-ray and EUV emission result in high-energy irradiation of the planets 10-1000x greater than present day Earth, stripping atmospheres and oceans in 1 Gyr, while EUV estimated from Lyman alpha flux is much lower. Here we present upper-atmosphere PHOENIX models representing the minimum and maximum potential EUV stellar flux from TRAPPIST-1. We use GALEX FUV and NUV photometry for similar aged M stars to determine the UV flux extrema in an effort to better constrain the high-energy radiation environment around TRAPPIST-1.
Do Energy Efficiency Standards Improve Quality? Evidence from a Revealed Preference Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houde, Sebastien; Spurlock, C. Anna
Minimum energy efficiency standards have occupied a central role in U.S. energy policy for more than three decades, but little is known about their welfare effects. In this paper, we employ a revealed preference approach to quantify the impact of past revisions in energy efficiency standards on product quality. The micro-foundation of our approach is a discrete choice model that allows us to compute a price-adjusted index of vertical quality. Focusing on the appliance market, we show that several standard revisions during the period 2001-2011 have led to an increase in quality. We also show that these standards have had a modest effect on prices, and in some cases they even led to decreases in prices. For revision events where overall quality increases and prices decrease, the consumer welfare effect of tightening the standards is unambiguously positive. Finally, we show that after controlling for the effect of improvement in energy efficiency, standards have induced an expansion of quality in the non-energy dimension. We discuss how imperfect competition can rationalize these results.
Guidelines to electrode positioning for human and animal electrical impedance myography research
NASA Astrophysics Data System (ADS)
Sanchez, Benjamin; Pacheck, Adam; Rutkove, Seward B.
2016-09-01
The positioning of electrodes in electrical impedance myography (EIM) is critical for accurately assessing disease progression and effectiveness of treatment. In human and animal trials for neuromuscular disorders, inconsistent electrode positioning adds errors to the muscle impedance. Despite its importance, how the reproducibility of resistance and reactance, the two parameters that define EIM, is affected by changes in electrode positioning remains unknown. In this paper, we present a novel approach founded on biophysical principles to study the reproducibility of resistance and reactance to electrode misplacements. The analytical framework presented allows the user to quantify a priori the effect on the muscle resistance and reactance using only one parameter: the uncertainty in placing the electrodes. We also provide quantitative data on the precision needed to position the electrodes and the minimum muscle length needed to achieve a pre-specified EIM reproducibility. The results reported here are confirmed with finite element model simulations and measurements on five healthy subjects. Ultimately, our data can serve as normative values to enhance the reliability of EIM as a biomarker and facilitate comparability of future human and animal studies.
A computational imaging target specific detectivity metric
NASA Astrophysics Data System (ADS)
Preece, Bradley L.; Nehmetallah, George
2017-05-01
Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to have a major role for next generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. Due to the diversity of CI system designs that are available today or proposed in the near future, there are significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we developed a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. Therefore, the detectivity metric is designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
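A minimal sketch of the underlying matched-filter (Hotelling-type) SNR, assuming a hypothetical target signature and noise covariance rather than any real CI system model: the optimal linear SNR for a known signature s in Gaussian noise with covariance C is sqrt(s^T C^-1 s).

```python
# Sketch: optimal linear matched-filter (Hotelling) SNR for a known target
# signature in correlated Gaussian noise. Signature and covariance are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 64
s = np.exp(-0.5 * ((np.arange(n) - 32) / 4.0) ** 2)   # hypothetical target signature
A = rng.normal(size=(n, n))
C = 0.1 * np.eye(n) + 0.01 * A @ A.T                  # positive-definite noise covariance

w = np.linalg.solve(C, s)     # matched-filter weights, C^-1 s
snr = np.sqrt(s @ w)          # Hotelling / matched-filter SNR
print(f"matched-filter SNR: {snr:.2f}")
```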
Jet-like correlations with direct-photon and neutral-pion triggers at √sNN = 200 GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamczyk, L.; Adkins, J. K.; Agakishiev, G.
2016-07-22
Azimuthal correlations of charged hadrons with direct-photon (γdir) and neutral-pion (π0) trigger particles are analyzed in central Au+Au and minimum-bias p+p collisions at √sNN = 200 GeV in the STAR experiment. The charged-hadron per-trigger yields at mid-rapidity from central Au+Au collisions are compared with p+p collisions to quantify the suppression in Au+Au collisions. The suppression of the away-side associated-particle yields per γdir trigger is independent of the transverse momentum of the trigger particle (pT^trig), whereas the suppression is smaller at low transverse momentum of the associated charged hadrons (pT^assoc). Within uncertainty, similar levels of suppression are observed for γdir and π0 triggers as a function of zT (≡ pT^assoc/pT^trig). The results are compared with energy-loss-inspired theoretical model predictions. In conclusion, our studies support previous conclusions that the lost energy reappears predominantly at low transverse momentum, regardless of the trigger energy.
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel
NASA Technical Reports Server (NTRS)
Tripp, J. S.
1981-01-01
An algorithm for minimum cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch and bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that consume no coolant. Results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.
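A minimal sketch of the ordering problem, assuming a small hypothetical transition-cost matrix: the report uses a branch and bound search, but for a handful of set points a brute-force enumeration of visiting orders finds the same minimum-cost sequence and illustrates the idea.

```python
# Sketch: cheapest ordering of wind-tunnel set points given a matrix of
# state-transition costs (entries are illustrative, not NTF data).
import itertools
import numpy as np

cost = np.array([[0.0, 4.0, 9.0, 7.0],
                 [3.0, 0.0, 6.0, 8.0],
                 [9.0, 5.0, 0.0, 2.0],
                 [7.0, 8.0, 2.0, 0.0]])   # hypothetical transition costs

n = cost.shape[0]
best_order, best_cost = None, np.inf
for order in itertools.permutations(range(1, n)):      # assume we start at set point 0
    path = (0,) + order
    total = sum(cost[path[i], path[i + 1]] for i in range(n - 1))
    if total < best_cost:
        best_order, best_cost = path, total

print("cheapest ordering:", best_order, "total cost:", best_cost)
```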
The Preventive Control of a Dengue Disease Using the Pontryagin Minimum Principle
NASA Astrophysics Data System (ADS)
Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi
2017-06-01
Behaviour analysis of the host-vector model of dengue disease without control is based on the value of the basic reproduction number obtained using next-generation matrices. Furthermore, the model is extended to include a preventive control that minimizes the contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality model is then solved numerically to investigate the control effort needed to reduce the infected class.
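A minimal sketch of the first step mentioned above, the basic reproduction number from a next-generation matrix: R0 is the spectral radius of F V^-1, where F collects new-infection terms and V collects transition terms at the disease-free equilibrium. The two-compartment host-vector rates below are hypothetical, and the Pontryagin optimal-control step is not shown.

```python
# Sketch: R0 from a next-generation matrix for a simple host-vector system.
import numpy as np

F = np.array([[0.0, 0.35],     # host infections caused by infectious vectors
              [0.28, 0.0]])    # vector infections caused by infectious hosts
V = np.array([[0.10, 0.0],     # host recovery/removal rate
              [0.0, 0.14]])    # vector mortality rate

K = F @ np.linalg.inv(V)                  # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))       # spectral radius
print(f"R0 = {R0:.2f}")
```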
Theoretical and Experimental Analysis of the Constant-Area, Supersonic-Supersonic Ejector
1976-10-01
[Front-matter fragments from the report's list of figures and tables: Optimum Chemical Laser System Data for Cases No. 1 and 2 (minimum Wp/Ws and minimum P60/P2 conditions), and photographs of the ejector model components, including front and rear views of the secondary stagnation chamber.]
On the diversity and statistical properties of protostellar discs
NASA Astrophysics Data System (ADS)
Bate, Matthew R.
2018-04-01
We present results from the first population synthesis study of protostellar discs. We analyse the evolution and properties of a large sample of protostellar discs formed in a radiation hydrodynamical simulation of star cluster formation. Due to the chaotic nature of the star formation process, we find an enormous diversity of young protostellar discs, including misaligned discs, and discs whose orientations vary with time. Star-disc interactions truncate discs and produce multiple systems. Discs may be destroyed in dynamical encounters and/or through ram-pressure stripping, but reform by later gas accretion. We quantify the distributions of disc mass and radii for protostellar ages up to ≈ 10^5 yr. For low-mass protostars, disc masses tend to increase with both age and protostellar mass. Disc radii range from of order 10 to a few hundred au, grow in size on time-scales ≲ 10^4 yr, and are smaller around lower mass protostars. The radial surface density profiles of isolated protostellar discs are flatter than the minimum mass solar nebula model, typically scaling as Σ ∝ r^-1. Disc to protostar mass ratios rarely exceed two, with a typical range of Md/M* = 0.1-1 to ages ≲ 10^4 yr and decreasing thereafter. We quantify the relative orientation angles of circumstellar discs and the orbit of bound pairs of protostars, finding a preference for alignment that strengthens with decreasing separation. We also investigate how the orientations of the outer parts of discs differ from the protostellar and inner disc spins for isolated protostars and pairs.
Dias, Luís G; Veloso, Ana C A; Sousa, Mara E B C; Estevinho, Letícia; Machado, Adélio A S C; Peres, António M
2015-11-05
Nowadays the main honey producing countries require accurate labeling of honey before commercialization, including floral classification. Traditionally, this classification is made by melissopalynology analysis, an accurate but time-consuming task requiring laborious sample pre-treatment and high-skilled technicians. In this work the potential use of a potentiometric electronic tongue for pollinic assessment is evaluated, using monofloral and polyfloral honeys. The results showed that after splitting honeys according to color (white, amber and dark), the novel methodology enabled quantifying the relative percentage of the main pollens (Castanea sp., Echium sp., Erica sp., Eucalyptus sp., Lavandula sp., Prunus sp., Rubus sp. and Trifolium sp.). Multiple linear regression models were established for each type of pollen, based on the best sensors' sub-sets selected using the simulated annealing algorithm. To minimize the overfitting risk, a repeated K-fold cross-validation procedure was implemented, ensuring that at least 10-20% of the honeys were used for internal validation. With this approach, a minimum average determination coefficient of 0.91 ± 0.15 was obtained. Also, the proposed technique enabled the correct classification of 92% and 100% of monofloral and polyfloral honeys, respectively. The quite satisfactory performance of the novel procedure for quantifying the relative pollen frequency suggests its applicability for honey labeling and geographical origin identification. Nevertheless, this approach is not a full alternative to the traditional melissopalynologic analysis; it may be seen as a practical complementary tool for preliminary honey floral classification, leaving only problematic cases for pollinic evaluation. Copyright © 2015 Elsevier B.V. All rights reserved.
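A minimal sketch of the validation strategy described above (repeated K-fold cross-validation of a multiple linear regression), assuming synthetic sensor and pollen data and omitting the simulated-annealing sensor selection; it is not the authors' pipeline, only an illustration of the internal-validation idea.

```python
# Sketch: multiple linear regression with repeated K-fold cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 8))                                 # 60 honeys x 8 sensor signals
y = X[:, :3] @ np.array([0.5, -0.2, 0.8]) + 0.1 * rng.normal(size=60)  # pollen fraction

cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)  # ~20% held out per fold
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
print(f"mean R2 over folds: {scores.mean():.2f} +/- {scores.std():.2f}")
```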
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent from each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the "unobserved heterogeneity", i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of "internal population time" is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
Reliability of a Parallel Pipe Network
NASA Technical Reports Server (NTRS)
Herrera, Edgar; Chamis, Christopher (Technical Monitor)
2001-01-01
The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
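A minimal sketch of the kind of probability estimate described above, assuming an illustrative head-loss law (h = kQ^2) and hypothetical uncertain parameters rather than the report's actual pipe network: Monte Carlo sampling of the uncertain design variables yields the probability that a line's flow falls below its specified minimum.

```python
# Sketch: Monte Carlo reliability estimate for two parallel lines fed by one pump.
import numpy as np

rng = np.random.default_rng(3)
trials = 100_000

head = rng.normal(30.0, 2.0, trials)    # pump head, m (uncertain)
k1 = rng.normal(0.020, 0.002, trials)   # line 1 loss coefficient (uncertain)
k2 = rng.normal(0.035, 0.003, trials)   # line 2 loss coefficient (uncertain)

q1 = np.sqrt(head / k1)                 # flow from the assumed law h = k * Q^2
q2 = np.sqrt(head / k2)

q_min = 27.0                            # specified minimum flow
p_fail = np.mean((q1 < q_min) | (q2 < q_min))
print(f"P(either line below minimum) = {p_fail:.3f}")
```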
Statistical indicators of collective behavior and functional clusters in gene networks of yeast
NASA Astrophysics Data System (ADS)
Živković, J.; Tadić, B.; Wick, N.; Thurner, S.
2006-03-01
We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
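A minimal sketch of the minimum-spanning-tree construction mentioned above, assuming synthetic expression profiles and taking 1 - |r| as the edge distance between genes; this is only an illustration of the network construction, not the yeast analysis itself.

```python
# Sketch: minimum spanning tree of a gene-expression correlation network.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(4)
expr = rng.normal(size=(20, 30))        # 20 genes x 30 time points (placeholder data)

corr = np.corrcoef(expr)                # gene-gene correlation matrix
dist = 1.0 - np.abs(corr)               # convert correlation to a distance
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)       # sparse matrix holding the tree edges
edges = np.transpose(mst.nonzero())
print(f"{len(edges)} MST edges for 20 genes")   # a spanning tree has n - 1 = 19 edges
```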
O'Connor, B.L.; Hondzo, Miki; Harvey, J.W.
2009-01-01
Traditionally, dissolved oxygen (DO) fluxes have been calculated using the thin-film theory with DO microstructure data in systems characterized by fine sediments and low velocities. However, recent experimental evidence of fluctuating DO concentrations near the sediment-water interface suggests that turbulence and coherent motions control the mass transfer, and the surface renewal theory gives a more mechanistic model for quantifying fluxes. Both models involve quantifying the mass transfer coefficient (k) and the relevant concentration difference (ΔC). This study compared several empirical models for quantifying k based on both thin-film and surface renewal theories, as well as presents a new method for quantifying ΔC (dynamic approach) that is consistent with the observed DO concentration fluctuations near the interface. Data were used from a series of flume experiments that includes both physical and kinetic uptake limitations of the flux. Results indicated that methods for quantifying k and ΔC using the surface renewal theory better estimated the DO flux across a range of fluid-flow conditions. © 2009 ASCE.
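A minimal sketch of one common surface-renewal formulation, assuming a hypothetical renewal rate and concentration difference (not the flume values): the Danckwerts model gives k = sqrt(D s), and the flux follows as J = k ΔC.

```python
# Sketch: surface-renewal estimate of the sediment-water oxygen flux, J = k * dC
# with k = sqrt(D * s). All parameter values are illustrative.
import math

D = 2.1e-9        # molecular diffusivity of O2 in water, m^2/s
s = 0.5           # surface renewal rate, 1/s (hypothetical, set by near-bed turbulence)
dC = 3.0          # DO concentration difference across the interface, g/m^3

k = math.sqrt(D * s)          # mass transfer coefficient, m/s
flux = k * dC                 # g O2 per m^2 per s
print(f"k = {k:.2e} m/s, flux = {flux * 86400:.2f} g O2 m^-2 d^-1")
```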
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harikrishnan, R.; Hareland, G.; Warpinski, N.R.
This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient for earth at rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction which is used to calculate the coefficient for earth at rest which gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress test were obtained from two wells, the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size correlation are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
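A minimal sketch of the final step described above (friction angle to coefficient of earth at rest to minimum stress), assuming Jaky's relation K0 = 1 - sin(phi) applied to effective stress; the depth, stress gradients and friction angle are hypothetical, not the SFE or MWX well data.

```python
# Sketch: minimum horizontal in situ stress from an internal friction angle.
import math

depth_m = 2500.0
phi_deg = 32.0                      # internal friction angle from triaxial tests
sigma_v = 0.0226 * depth_m          # vertical stress, MPa (assumed overburden gradient)
p_pore = 0.0098 * depth_m           # pore pressure, MPa (assumed hydrostatic gradient)

k0 = 1.0 - math.sin(math.radians(phi_deg))       # Jaky's coefficient of earth at rest
sigma_h_min = k0 * (sigma_v - p_pore) + p_pore   # total minimum horizontal stress
print(f"K0 = {k0:.2f}, sigma_h,min ~ {sigma_h_min:.1f} MPa")
```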
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slosman, D.; Susskind, H.; Bossuyt, A.
1986-03-01
Ventilation imaging can be improved by gating scintigraphic data with the respiratory cycle using temporal Fourier analysis (TFA) to quantify the temporal behavior of the ventilation. Sixteen consecutive images, representing equal-time increments of an average respiratory cycle, were produced by TFA in the posterior view on a pixel-by-pixel basis. An Efficiency Index (EFF), defined as the ratio of the summation of all the differences between maximum and minimum counts for each pixel to that for the entire lung during the respiratory cycle, was derived to describe the pattern of ventilation. The gated ventilation studies were carried out with Xe-127 in 12 subjects: normal lung function (4), small airway disease (2), COPD (5), and restrictive disease (1). EFF for the first three harmonics correlated linearly with FEV1 (r = 0.701, p < 0.01). This approach is suggested as a very sensitive method to quantify the extent and regional distribution of airway obstruction.
Isolated cell behavior drives the evolution of antibiotic resistance
Artemova, Tatiana; Gerardin, Ylaine; Dudley, Carmel; Vega, Nicole M; Gore, Jeff
2015-01-01
Bacterial antibiotic resistance is typically quantified by the minimum inhibitory concentration (MIC), which is defined as the minimal concentration of antibiotic that inhibits bacterial growth starting from a standard cell density. However, when antibiotic resistance is mediated by degradation, the collective inactivation of antibiotic by the bacterial population can cause the measured MIC to depend strongly on the initial cell density. In cases where this inoculum effect is strong, the relationship between MIC and bacterial fitness in the antibiotic is not well defined. Here, we demonstrate that the resistance of a single, isolated cell—which we call the single-cell MIC (scMIC)—provides a superior metric for quantifying antibiotic resistance. Unlike the MIC, we find that the scMIC predicts the direction of selection and also specifies the antibiotic concentration at which selection begins to favor new mutants. Understanding the cooperative nature of bacterial growth in antibiotics is therefore essential in predicting the evolution of antibiotic resistance. PMID:26227664
Atomistic modeling of dropwise condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikarwar, B. S., E-mail: bssikarwar@amity.edu; Singh, P. L.; Muralidhar, K.
The basic aim of the atomistic modeling of condensation of water is to determine the size of the stable cluster and connect phenomena occurring at atomic scale to the macroscale. In this paper, a population balance model is described in terms of the rate equations to obtain the number density distribution of the resulting clusters. The residence time is taken to be large enough so that sufficient time is available for all the adatoms existing in vapor-phase to lose their latent heat and get condensed. The simulation assumes clusters of a given size to be formed from clusters of smaller sizes, but not by the disintegration of the larger clusters. The largest stable cluster size in the number density distribution is taken to be representative of the minimum drop radius formed in a dropwise condensation process. A numerical confirmation of this result against predictions based on a thermodynamic model has been obtained. Results show that the number density distribution is sensitive to the surface diffusion coefficient and the rate of vapor flux impinging on the substrate. The minimum drop radius increases with the diffusion coefficient and the impinging vapor flux; however, the dependence is weak. The minimum drop radius predicted from thermodynamic considerations matches the prediction of the cluster model, though the former does not take into account the effect of the surface properties on the nucleation phenomena. For a chemically passive surface, the diffusion coefficient and the residence time are dependent on the surface texture via the coefficient of friction. Thus, physical texturing provides a means of changing, within limits, the minimum drop radius. The study reveals that surface texturing at the scale of the minimum drop radius does not provide controllability of the macro-scale dropwise condensation at large timescales when a dynamic steady-state is reached.
Improvement of solar-cycle prediction: Plateau of solar axial dipole moment
NASA Astrophysics Data System (ADS)
Iijima, H.; Hotta, H.; Imada, S.; Kusano, K.; Shiota, D.
2017-11-01
Aims: We report the small temporal variation of the axial dipole moment near the solar minimum and its application to the solar-cycle prediction by the surface flux transport (SFT) model. Methods: We measure the axial dipole moment using the photospheric synoptic magnetogram observed by the Wilcox Solar Observatory (WSO), the ESA/NASA Solar and Heliospheric Observatory Michelson Doppler Imager (MDI), and the NASA Solar Dynamics Observatory Helioseismic and Magnetic Imager (HMI). We also use the SFT model for the interpretation and prediction of the observed axial dipole moment. Results: We find that the observed axial dipole moment becomes approximately constant during the period of several years before each cycle minimum, which we call the axial dipole moment plateau. The cross-equatorial magnetic flux transport is found to be small during the period, although a significant number of sunspots are still emerging. The results indicate that the newly emerged magnetic flux does not contribute to the build up of the axial dipole moment near the end of each cycle. This is confirmed by showing that the time variation of the observed axial dipole moment agrees well with that predicted by the SFT model without introducing new emergence of magnetic flux. These results allow us to predict the axial dipole moment at the Cycle 24/25 minimum using the SFT model without introducing new flux emergence. The predicted axial dipole moment at the Cycle 24/25 minimum is 60-80 percent of Cycle 23/24 minimum, which suggests the amplitude of Cycle 25 is even weaker than the current Cycle 24. Conclusions: The plateau of the solar axial dipole moment is an important feature for the longer-term prediction of the solar cycle based on the SFT model.
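A minimal sketch of the axial dipole moment computed from a synoptic radial-field map, assuming a synthetic placeholder magnetogram rather than WSO, MDI or HMI data: with colatitude theta, the axial dipole moment is (3/4π) times the surface integral of Br cos(theta).

```python
# Sketch: axial dipole moment of a radial magnetic field map on a lat-lon grid.
import numpy as np

n_lat, n_lon = 180, 360
colat = np.deg2rad(np.linspace(0.5, 179.5, n_lat))     # cell-centre colatitude
d_theta, d_phi = np.pi / n_lat, 2 * np.pi / n_lon

rng = np.random.default_rng(5)
# Synthetic Br: a 2 G axial dipole plus noise (stands in for a synoptic magnetogram).
br = 2.0 * np.cos(colat)[:, None] + rng.normal(0.0, 1.0, (n_lat, n_lon))

d_omega = np.sin(colat)[:, None] * d_theta * d_phi      # surface element
dipole = (3.0 / (4.0 * np.pi)) * np.sum(br * np.cos(colat)[:, None] * d_omega)
print(f"axial dipole moment ~ {dipole:.2f} G")          # recovers ~2 G here
```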
NASA Astrophysics Data System (ADS)
García-Cueto, O. Rafael; Cavazos, M. Tereza; de Grau, Pamela; Santillán-Soto, Néstor
2014-04-01
The generalized extreme value distribution is applied in this article to model the statistical behavior of the maximum and minimum temperature distribution tails in four cities of Baja California in northwestern Mexico, using data from 1950-2010. The approach used block maxima over annual time blocks. Temporal trends were included as covariates in the location parameter (μ), which resulted in significant improvements to the proposed models, particularly for the extreme maximum temperature values in the cities of Mexicali, Tijuana, and Tecate, and the extreme minimum temperature values in Mexicali and Ensenada. These models were used to estimate future probabilities over the next 100 years (2015-2110) for different time periods, and they were compared with changes in the extreme (P90th and P10th) percentiles of maximum and minimum temperature scenarios for a set of six general circulation models under low (RCP4.5) and high (RCP8.5) radiative forcings. By the end of the twenty-first century, the scenarios of the changes in extreme maximum summer temperature are of the same order in both the statistical model and the high radiative scenario (increases of 4-5 °C). The low radiative scenario is more conservative (increases of 2-3 °C). The winter scenario shows that minimum temperatures could be less severe; the temperature increases suggested by the probabilistic model are greater than those projected for the end of the century by the set of global models under RCP4.5 and RCP8.5 scenarios. The likely impacts on the region are discussed.
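A minimal sketch of a stationary GEV fit to annual maxima, assuming synthetic temperature data; the time-trend covariate in the location parameter used in the study is omitted here, so this only illustrates the block-maxima fit and a return-level read-off.

```python
# Sketch: fit a GEV to annual maximum temperatures and estimate a 100-year level.
import numpy as np
from scipy.stats import genextreme

annual_max_T = 42.0 + genextreme.rvs(c=-0.1, scale=1.5, size=61, random_state=6)

shape, loc, scale = genextreme.fit(annual_max_T)
t_100yr = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, loc={loc:.1f} C, scale={scale:.2f} C")
print(f"100-year return level ~ {t_100yr:.1f} C")
```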
Magnetic Coupling in the Disks around Young Gas Giant Planets
NASA Astrophysics Data System (ADS)
Turner, N. J.; Lee, Man Hoi; Sano, T.
2014-03-01
We examine the conditions under which the disks of gas and dust orbiting young gas giant planets are sufficiently conducting to experience turbulence driven by the magneto-rotational instability. By modeling the ionization and conductivity in the disk around proto-Jupiter, we find that turbulence is possible if the X-rays emitted near the Sun reach the planet's vicinity and either (1) the gas surface densities are in the range of the minimum-mass models constructed by augmenting Jupiter's satellites to solar composition, while dust is depleted from the disk atmosphere, or (2) the surface densities are much less, and in the range of gas-starved models fed with material from the solar nebula, but not so low that ambipolar diffusion decouples the neutral gas from the plasma. The results lend support to both minimum-mass and gas-starved models of the protojovian disk. (1) The dusty minimum-mass models have internal conductivities low enough to prevent angular momentum transfer by magnetic forces, as required for the material to remain in place while the satellites form. (2) The gas-starved models have magnetically active surface layers and a decoupled interior "dead zone." Similar active layers in the solar nebula yield accretion stresses in the range assumed in constructing the circumjovian gas-starved models. Our results also point to aspects of both classes of models that can be further developed. Non-turbulent minimum-mass models will lose dust from their atmospheres by settling, enabling gas to accrete through a thin surface layer. For the gas-starved models it is crucial to learn whether enough stellar X-ray and ultraviolet photons reach the circumjovian disk. Additionally, the stress-to-pressure ratio ought to increase with distance from the planet, likely leading to episodic accretion outbursts.
Droplet squeezing through a narrow constriction: Minimum impulse and critical velocity
NASA Astrophysics Data System (ADS)
Zhang, Zhifeng; Drapaca, Corina; Chen, Xiaolin; Xu, Jie
2017-07-01
Models of a droplet passing through narrow constrictions have wide applications in science and engineering. In this paper, we report our findings on the minimum impulse (momentum change) of pushing a droplet through a narrow circular constriction. The existence of this minimum impulse is mathematically derived and numerically verified. The minimum impulse happens at a critical velocity when the time-averaged Young-Laplace pressure balances the total minor pressure loss in the constriction. Finally, numerical simulations are conducted to verify these concepts. These results could be relevant to problems of energy optimization and studies of chemical and biomedical systems.
NASA Astrophysics Data System (ADS)
Koenig, Theodore K.; Volkamer, Rainer; Baidar, Sunil; Dix, Barbara; Wang, Siyuan; Anderson, Daniel C.; Salawitch, Ross J.; Wales, Pamela A.; Cuevas, Carlos A.; Fernandez, Rafael P.; Saiz-Lopez, Alfonso; Evans, Mathew J.; Sherwen, Tomás; Jacob, Daniel J.; Schmidt, Johan; Kinnison, Douglas; Lamarque, Jean-François; Apel, Eric C.; Bresch, James C.; Campos, Teresa; Flocke, Frank M.; Hall, Samuel R.; Honomichl, Shawn B.; Hornbrook, Rebecca; Jensen, Jørgen B.; Lueb, Richard; Montzka, Denise D.; Pan, Laura L.; Reeves, J. Michael; Schauffler, Sue M.; Ullmann, Kirk; Weinheimer, Andrew J.; Atlas, Elliot L.; Donets, Valeria; Navarro, Maria A.; Riemer, Daniel; Blake, Nicola J.; Chen, Dexian; Huey, L. Gregory; Tanner, David J.; Hanisco, Thomas F.; Wolfe, Glenn M.
2017-12-01
We report measurements of bromine monoxide (BrO) and use an observationally constrained chemical box model to infer total gas-phase inorganic bromine (Bry) over the tropical western Pacific Ocean (tWPO) during the CONTRAST field campaign (January-February 2014). The observed BrO and inferred Bry profiles peak in the marine boundary layer (MBL), suggesting the need for a bromine source from sea-salt aerosol (SSA), in addition to organic bromine (CBry). Both profiles are found to be C-shaped with local maxima in the upper free troposphere (FT). The median tropospheric BrO vertical column density (VCD) was measured as 1.6×1013 molec cm-2, compared to model predictions of 0.9×1013 molec cm-2 in GEOS-Chem (CBry but no SSA source), 0.4×1013 molec cm-2 in CAM-Chem (CBry and SSA), and 2.1×1013 molec cm-2 in GEOS-Chem (CBry and SSA). Neither global model fully captures the C-shape of the Bry profile. A local Bry maximum of 3.6 ppt (2.9-4.4 ppt; 95 % confidence interval, CI) is inferred between 9.5 and 13.5 km in air masses influenced by recent convective outflow. Unlike BrO, which increases from the convective tropical tropopause layer (TTL) to the aged TTL, gas-phase Bry decreases from the convective TTL to the aged TTL. Analysis of gas-phase Bry against multiple tracers (CFC-11, H2O / O3 ratio, and potential temperature) reveals a Bry minimum of 2.7 ppt (2.3-3.1 ppt; 95 % CI) in the aged TTL, which agrees closely with a stratospheric injection of 2.6 ± 0.6 ppt of inorganic Bry (estimated from CFC-11 correlations), and is remarkably insensitive to assumptions about heterogeneous chemistry. Bry increases to 6.3 ppt (5.6-7.0 ppt; 95 % CI) in the stratospheric "middleworld" and 6.9 ppt (6.5-7.3 ppt; 95 % CI) in the stratospheric "overworld". The local Bry minimum in the aged TTL is qualitatively (but not quantitatively) captured by CAM-Chem, and suggests a more complex partitioning of gas-phase and aerosol Bry species than previously recognized. Our data provide corroborating evidence that inorganic bromine sources (e.g., SSA-derived gas-phase Bry) are needed to explain the gas-phase Bry budget in the upper free troposphere and TTL. They are also consistent with observations of significant bromide in Upper Troposphere-Lower Stratosphere aerosols. The total Bry budget in the TTL is currently not closed, because of the lack of concurrent quantitative measurements of gas-phase Bry species (i.e., BrO, HOBr, HBr, etc.) and aerosol bromide. Such simultaneous measurements are needed to (1) quantify SSA-derived Bry in the upper FT, (2) test Bry partitioning, and possibly explain the gas-phase Bry minimum in the aged TTL, (3) constrain heterogeneous reaction rates of bromine, and (4) account for all of the sources of Bry to the lower stratosphere.
NASA Astrophysics Data System (ADS)
Chaibou Begou, Jamilatou; Jomaa, Seifeddine; Benabdallah, Sihem; Rode, Michael
2015-04-01
Due to climate change, drier conditions have prevailed in West Africa since the seventies, with important consequences for water resources. In order to identify and implement management strategies for adaptation to climate change in the water sector, it is crucial to improve our physical understanding of the evolution of water resources in the region. To this end, hydrologic modelling is an appropriate tool for flow predictions under changing climate and land use conditions. In this study, the applicability and performance of the recent version of the Soil and Water Assessment Tool (SWAT2012) model were tested on the Bani catchment in West Africa under limited data conditions. Model parameter identification was also tested using one-site and multi-site calibration approaches. The Bani is located in the upper part of the Niger River and drains an area of about 101,000 km2 at the outlet of Douna. The climate is tropical, humid to semi-arid from the South to the North, with an average annual rainfall of 1050 mm (period 1981-2000). Global datasets were used for the model setup, such as the USGS hydrosheds DEM, USGS LCI GlobCov2009 and the FAO Digital Soil Map of the World. Daily measured rainfall from nine rain gauges and maximum and minimum temperature from five weather stations covering the period 1981-1997 were used for model setup. Sensitivity analysis, calibration and validation are performed within SWATCUP using the GLUE procedure at the Douna station first (one-site calibration), then at three additional internal stations, Bougouni, Pankourou and Kouoro1 (multi-site calibration). Model parameters were calibrated at a daily time step for the period 1983-1992, then validated for the period 1993-1997. A period of two years (1981-1982) was used for model warming up. Results of the one-site calibration showed a model performance of 0.76 and 0.79 for the Nash-Sutcliffe efficiency (NS) and the correlation coefficient (R2), respectively, while for the validation period the performance improved considerably, with NS and R2 equal to 0.84 and 0.87. The degree of total uncertainty is quantified by a minimum P-factor of 0.61 and a maximum R-factor of 0.59. These statistics suggest that the model performance can be judged as very good, especially considering the limited data conditions and the high climate, land use and soil variability in the studied basin. The most sensitive parameters (CN2, OVN and SLSUBBSN) are related to surface runoff, reflecting the dominance of this process in streamflow generation. In the next step, the multi-site calibration approach will be performed on the Bani basin to assess how much additional observations improve model parameter identification.
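A minimal sketch of the two goodness-of-fit statistics quoted above (Nash-Sutcliffe efficiency and the coefficient of determination), computed from synthetic observed and simulated discharge series rather than the Bani data.

```python
# Sketch: NS and R2 for a pair of observed/simulated daily flow series.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 150.0, size=365)                   # observed daily flow, m^3/s
sim = obs * rng.normal(1.0, 0.15, size=365) + 20.0      # imperfect simulation

ns = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
r2 = np.corrcoef(obs, sim)[0, 1] ** 2
print(f"NS = {ns:.2f}, R2 = {r2:.2f}")
```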
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
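A minimal sketch related to the asymptotic statement above: in the asymptotic limit both the maximum and minimum relative entropies of coherence reduce to the relative entropy of coherence, C(rho) = S(diag(rho)) - S(rho). The snippet evaluates that quantity for a single illustrative qubit state (log base 2); the max/min quantifiers themselves involve optimizations not shown here.

```python
# Sketch: relative entropy of coherence of a qubit state in the computational basis.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# A partially coherent qubit density matrix (trace 1, positive semidefinite).
rho = np.array([[0.6, 0.3],
                [0.3, 0.4]])

rho_diag = np.diag(np.diag(rho))                 # fully dephased (incoherent) state
coherence = von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)
print(f"relative entropy of coherence = {coherence:.3f} bits")
```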
[Impacts of forest and precipitation on runoff and sediment in Tianshui watershed and GM models].
Ouyang, H
2000-12-01
This paper analyzed the impacts of forest stand volume and precipitation on annual erosion modulus, mean sediment, maximum sediment, mean runoff, maximum runoff, minimum runoff, mean water level, maximum water level and minimum water level in Tianshui watershed, and also analyzed the effect of the variation of forest stand volume on monthly mean runoff, minimum runoff and mean water level. The dynamic models of grey system GM(1, N) were constructed to simulate the changes of these hydrological elements. The dynamic GM models on the impact of stand volumes of different forest types (Chinese fir, Masson pine and broad-leaved forests) with different age classes (young, middle-aged, mature and over-mature) and that of precipitation on the hydrological elements were also constructed, and their changes with time were analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.
A new colloid transport model is introduced that is conceptually simple but captures the essential features of complicated attachment and detachment behavior of colloids when conditions of secondary minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport such as the depth of the secondary minimum and pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary and secondary minimum deposition by simply increasing the fraction of colloids that attach irreversibly.
Vijayalakshmi, Subramanian; Nadanasabhapathi, Shanmugam; Kumar, Ranganathan; Sunny Kumar, S
2018-03-01
The presence of aflatoxin, a carcinogenic and toxigenic secondary metabolite produced by Aspergillus species, in food matrix has been a major worldwide problem for years now. Food processing methods such as roasting, extrusion, etc. have been employed for effective destruction of aflatoxins, which are known for their thermo-stable nature. The high temperature treatment adversely affects the nutritive and other quality attributes of the food, leading to the necessity of application of non-thermal processing techniques such as ultrasonication, gamma irradiation, high pressure processing, pulsed electric field (PEF), etc. The present study was focused on analysing the efficacy of the PEF process in the reduction of the toxin content, which was subsequently quantified using HPLC. The process parameters of different pH model systems (potato dextrose agar) artificially spiked with an aflatoxin mix standard were optimized using the response surface methodology. The optimization of PEF process effects on the responses, aflatoxin B1 reduction and total aflatoxin reduction (%), as functions of pH (4-10), pulse width (10-26 µs) and output voltage (20-65%), fitted a 2FI model and a quadratic model, respectively. The response surface plots obtained for the processes were of saddle point type, with the absence of minimum or maximum response at the centre point. The implemented numerical optimization showed that the predicted and actual values were similar, proving the adequacy of the fitted models, and also demonstrated the possible application of PEF in toxin reduction.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage*
Cadena, Brian C.
2014-01-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288
NASA Technical Reports Server (NTRS)
Haering, E. A., Jr.; Burcham, F. W., Jr.
1984-01-01
A simulation study was conducted to optimize minimum time and fuel consumption paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum time and fuel trajectory determined from the F-15 flight manual and previous experience. The minimum time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum fuel trajectory used 1 percent less fuel than the pilot's estimate for the minimum fuel trajectory. The F-15 airplane with EMD engines and uptrim was 23 percent faster than the pilot's estimate. The minimum fuel used was 5 percent less than the estimate.
Infrastructure Vulnerability Assessment Model (I-VAM).
Ezell, Barry Charles
2007-06-01
Quantifying vulnerability to critical infrastructure has not been adequately addressed in the literature. Thus, the purpose of this article is to present a model that quantifies vulnerability. Vulnerability is defined as a measure of system susceptibility to threat scenarios. This article asserts that vulnerability is a condition of the system and it can be quantified using the Infrastructure Vulnerability Assessment Model (I-VAM). The model is presented and then applied to a medium-sized clean water system. The model requires subject matter experts (SMEs) to establish value functions and weights, and to assess protection measures of the system. Simulation is used to account for uncertainty in measurement, aggregate expert assessment, and to yield a vulnerability (Omega) density function. Results demonstrate that I-VAM is useful to decisionmakers who prefer quantification to qualitative treatment of vulnerability. I-VAM can be used to quantify vulnerability to other infrastructures, supervisory control and data acquisition systems (SCADA), and distributed control systems (DCS).
Arem, Hannah; Moore, Steven C; Patel, Alpa; Hartge, Patricia; Berrington de Gonzalez, Amy; Visvanathan, Kala; Campbell, Peter T; Freedman, Michal; Weiderpass, Elisabete; Adami, Hans Olov; Linet, Martha S; Lee, I-Min; Matthews, Charles E
2015-06-01
The 2008 Physical Activity Guidelines for Americans recommended a minimum of 75 vigorous-intensity or 150 moderate-intensity minutes per week (7.5 metabolic-equivalent hours per week) of aerobic activity for substantial health benefit and suggested additional benefits by doing more than double this amount. However, the upper limit of longevity benefit or possible harm with more physical activity is unclear. To quantify the dose-response association between leisure time physical activity and mortality and define the upper limit of benefit or harm associated with increased levels of physical activity. We pooled data from 6 studies in the National Cancer Institute Cohort Consortium (baseline 1992-2003). Population-based prospective cohorts in the United States and Europe with self-reported physical activity were analyzed in 2014. A total of 661,137 men and women (median age, 62 years; range, 21-98 years) and 116,686 deaths were included. We used Cox proportional hazards regression with cohort stratification to generate multivariable-adjusted hazard ratios (HRs) and 95% CIs. Median follow-up time was 14.2 years. Leisure time moderate- to vigorous-intensity physical activity. The upper limit of mortality benefit from high levels of leisure time physical activity. Compared with individuals reporting no leisure time physical activity, we observed a 20% lower mortality risk among those performing less than the recommended minimum of 7.5 metabolic-equivalent hours per week (HR, 0.80 [95% CI, 0.78-0.82]), a 31% lower risk at 1 to 2 times the recommended minimum (HR, 0.69 [95% CI, 0.67-0.70]), and a 37% lower risk at 2 to 3 times the minimum (HR, 0.63 [95% CI, 0.62-0.65]). An upper threshold for mortality benefit occurred at 3 to 5 times the physical activity recommendation (HR, 0.61 [95% CI, 0.59-0.62]); however, compared with the recommended minimum, the additional benefit was modest (31% vs 39%). There was no evidence of harm at 10 or more times the recommended minimum (HR, 0.69 [95% CI, 0.59-0.78]). A similar dose-response relationship was observed for mortality due to cardiovascular disease and to cancer. Meeting the 2008 Physical Activity Guidelines for Americans minimum by either moderate- or vigorous-intensity activities was associated with nearly the maximum longevity benefit. We observed a benefit threshold at approximately 3 to 5 times the recommended leisure time physical activity minimum and no excess risk at 10 or more times the minimum. In regard to mortality, health care professionals should encourage inactive adults to perform leisure time physical activity and do not need to discourage adults who already participate in high-activity levels.
Quantifying uncertainties of climate signals related to the 11-year solar cycle
NASA Astrophysics Data System (ADS)
Kruschke, T.; Kunze, M.; Matthes, K. B.; Langematz, U.; Wahl, S.
2017-12-01
Although state-of-the-art reconstructions based on proxies and (semi-)empirical models converge in terms of total solar irradiance, they still significantly differ in terms of spectral solar irradiance (SSI) with respect to the mean spectral distribution of energy input and temporal variability. This study aims at quantifying uncertainties for the Earth's climate related to the 11-year solar cycle by forcing two chemistry-climate models (CCMs) - CESM1(WACCM) and EMAC - with five different SSI reconstructions (NRLSSI1, NRLSSI2, SATIRE-T, SATIRE-S, CMIP6-SSI) and the reference spectrum RSSV1-ATLAS3, derived from observations. We conduct a unique set of timeslice experiments. External forcings and boundary conditions are fixed and identical for all experiments, except for the solar forcing. The set of analyzed simulations consists of one solar minimum simulation, employing RSSV1-ATLAS3, and five solar maximum experiments. The latter are a result of adding the amplitude of solar cycle 22 according to the five reconstructions to RSSV1-ATLAS3. Our results show that the climate response to the 11-year solar cycle is generally robust across CCMs and SSI forcings. However, analyzing the variance of the solar maximum ensemble by means of ANOVA statistics reveals additional information on the uncertainties of the mean climate signals. The annual mean response agrees very well between the two CCMs for most parts of the lower and middle atmosphere. Only the upper mesosphere is subject to significant differences related to the choice of the model. However, the different SSI forcings lead to significant differences in ozone concentrations, shortwave heating rates, and temperature throughout large parts of the mesosphere and upper stratosphere. Regarding the seasonal evolution of the climate signals, our findings for shortwave heating rates and temperature are similar to the annual means with respect to the relative importance of the choice of the model or the SSI forcing for the respective atmospheric layer. On the other hand, the predominantly dynamically driven signal in zonal wind is quite dependent on the choice of CCM, mainly due to spatio-temporal shifts of similar responses. Within a given "model world", dynamical signals related to the different SSI forcings agree very well even under this monthly perspective.
Code of Federal Regulations, 2013 CFR
2013-04-01
... observations cannot be less than six months. Historical data sets must be updated at least every three months... quantitative aspects of the model which at a minimum must adhere to the criteria set forth in paragraph (e) of..., a description of how its own theoretical pricing model contains the minimum pricing factors set...
Code of Federal Regulations, 2012 CFR
2012-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2014 CFR
2014-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models
NASA Astrophysics Data System (ADS)
Dukkipati, Ambedkar; Manathara, Joel George
In this paper we study the representation of KL-divergence minimization, in the cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed to solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner bases method to compute an implicit representation of minimum KL-divergence models.
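As a hedged illustration of the implicitization step the abstract refers to — eliminating parameters from a polynomial parametrization via a Gröbner basis — the following sketch uses SymPy on a toy curve; the parametrization is illustrative and is not one of the statistical models studied in the paper.

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')

# Parametric description: x = t**2, y = t**3; eliminate t with a lex Gröbner basis.
G = groebner([x - t**2, y - t**3], t, x, y, order='lex')

# Basis elements free of t give an implicit description of the model variety.
implicit = [p for p in G.exprs if t not in p.free_symbols]
print(implicit)  # expected: [x**3 - y**2]
```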
Katikireddi, Srinivasa Vittal; Hilton, Shona; Bond, Lyndal
2016-11-01
The minimum unit pricing (MUP) alcohol policy debate has been informed by the Sheffield model, a study which predicts impacts of different alcohol pricing policies. This paper explores the Sheffield model's influences on the policy debate by drawing on 36 semi-structured interviews with policy actors who were involved in the policy debate. Although commissioned by policy makers, the model's influence has been far broader than suggested by views of 'rational' policy making. While findings from the Sheffield model have been used in instrumental ways, they have arguably been more important in helping debate competing values underpinning policy goals.
Quantifying parametric uncertainty in the Rothermel model
S. Goodrick
2008-01-01
The purpose of the present work is to quantify parametric uncertainty in the Rothermel wildland fire spread model, as implemented in fire spread modelling software used in the United States. This model consists of a non-linear system of equations that relates environmental variables (input parameter groups...
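A generic Monte Carlo propagation of input uncertainty through a non-linear spread-rate relation can be sketched as follows; the stand-in rate function and the input distributions are purely illustrative and are not Rothermel's equations or the paper's parameter groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Illustrative input uncertainty (not the actual Rothermel parameter groups).
wind_speed    = rng.normal(5.0, 1.0, n)     # m/s
fuel_moisture = rng.uniform(0.05, 0.15, n)  # fraction
fuel_load     = rng.normal(0.8, 0.1, n)     # kg/m^2

def spread_rate_proxy(u, m, w):
    """Simplified stand-in for a non-linear spread-rate relation (not Rothermel's model)."""
    return 0.3 * w * (1.0 + 0.6 * u**1.2) * np.exp(-8.0 * m)

ros = spread_rate_proxy(wind_speed, fuel_moisture, fuel_load)
print(f"mean ROS {ros.mean():.2f}, 5-95% range {np.percentile(ros, [5, 95])}")
```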
Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...
Pore-scale discretisation limits of multiphase lattice-Boltzmann methods
NASA Astrophysics Data System (ADS)
Li, Z.; Middleton, J.; Varslot, T.; Sheppard, A.
2015-12-01
Lattice-Boltzmann (LB) modeling is a popular method for the numerical solution of the Navier-Stokes equations, and several multi-component LB models are widely used to simulate immiscible two-phase fluid flow in porous media. However, there has been relatively little study of the models' ability to make optimal use of 3D imagery by considering the minimum number of grid points that are needed to represent geometric features such as pore throats. This is of critical importance since 3D images of geological samples are a compromise between resolution and field of view. In this work we explore the discretisation limits of LB models, their behavior near these limits, and the consequences of this behavior for simulations of drainage and imbibition. We quantify the performance of two commonly used multiphase LB models, the Shan-Chen (SC) and Rothman-Keller (RK) models, in a set of tests including simulations of bubbles in bulk fluid, on flat surfaces, confined in flat/tilted tubes, and fluid invasion into single tubes. Simple geometries like these allow better quantification of model behavior and better understanding of breakdown mechanisms. In bulk fluid, bubble radii of less than 2.5 grid units (image voxels) cause numerical instability in the SC model; the RK model is stable to a radius of 2.5 units and below, but with poor agreement with Laplace's law. When confined to a flat duct, the SC model can simulate radii similar to the RK model, but with higher interface spurious currents than the RK model and some risk of instability. In tilted ducts with 'staircase' voxel-level roughness, the SC model seems to average the roughness, whereas for the RK model only the 'peaks' of the surface are relevant. Overall, our results suggest that LB models can simulate fluid capillary pressure corresponding to interfacial radii of just 1.5 grid units, with the RK model exhibiting significantly better stability.
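The Laplace's-law check mentioned above compares the simulated pressure jump across a bubble with the Young-Laplace prediction; a minimal sketch, assuming an illustrative lattice surface tension, is:

```python
def laplace_pressure_3d(sigma, radius):
    """Young-Laplace pressure jump across a spherical interface: dP = 2*sigma/R."""
    return 2.0 * sigma / radius

# Illustrative lattice surface tension; the actual value depends on the LB model parameters.
sigma_lu = 0.1
for r in (1.5, 2.5, 5.0, 10.0):  # bubble radii in grid units
    print(f"R = {r:4.1f} lu -> expected dP = {laplace_pressure_3d(sigma_lu, r):.4f} lu")
    # A simulated bubble passes the check when its measured dP matches this prediction.
```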
Assessing the Responses of Streamflow to Pollution Release in South Carolina
NASA Astrophysics Data System (ADS)
Maze, G.; Chovancak, N. A.; Samadi, S. Z.
2017-12-01
The purpose of this investigation was to examine the effects of various stream flows on the transport of a pollutant downstream and to evaluate the uncertainty associated with using a single stream flow value when the true flow is unknown in the model. The area used for this study was Horse Creek in South Carolina, where a chlorine pollutant spill occurred in the past as the result of a train derailment in Graniteville, SC. In the example scenario used, the chlorine gas pollutant was released into the environment, where it killed plants, infected groundwater, and caused evacuation of the city. Tracking the movement and concentrations at various points downstream in the river system is crucial to understanding how a single accidental pollutant release can affect the surrounding areas. Because of the lack of real-time data available, this emergency response model uses historical monthly averages; however, these monthly averages do not reflect how widely the flow can vary within that month. Therefore, the assumption to use the historical monthly average flow data may not be accurate, and this investigation aims at quantifying the uncertainty associated with using a single stream flow value when the true stream flow may vary greatly. For the purpose of this investigation, the event in Graniteville was used as a case study to evaluate the emergency response model. This investigation was conducted by adjusting the STREAM II V7 program developed by Savannah River National Laboratory (SRNL) to model a confluence at the Horse Creek and Savannah River system. This adjusted program was utilized to track the progress of the chlorine pollutant release and examine how it was transported downstream. By adjusting this program, the concentrations and times taken to reach various points downstream of the release were obtained and can be used not only to analyze this particular pollutant release in Graniteville, but can continue to be adjusted and used as a technical tool for emergency responders in future accidents. Further, the program was run with monthly maximum, minimum, and average advective flows, and an uncertainty analysis was conducted to examine the error associated with the input data. These results underscore the profound influence that streamflow magnitudes (maximum, minimum, and average) have on shaping downstream water quality.
Gibson, Oliver R; Dennis, Alex; Parfitt, Tony; Taylor, Lee; Watt, Peter W; Maxwell, Neil S
2014-05-01
Extracellular heat shock protein 72 (eHsp72) concentration increases during exercise-heat stress when conditions elicit physiological strain. Differences in severity of environmental and exercise stimuli have elicited varied responses to stress. The present study aimed to quantify the extent of increased eHsp72 with increased exogenous heat stress, and to determine related endogenous markers of strain in an exercise-heat model. Ten males cycled for 90 min at 50 % [Formula: see text] in three conditions (TEMP, 20 °C/63% RH; HOT, 30.2 °C/51% RH; VHOT, 40.0 °C/37% RH). Plasma was analysed for eHsp72 pre, immediately post and 24-h post each trial utilising a commercially available ELISA. Increased eHsp72 concentration was observed post VHOT trial (+172.4 %) (p < 0.05), but not in TEMP (-1.9 %) or HOT (+25.7 %) conditions. eHsp72 returned to baseline values within 24 h in all conditions. Changes were observed in rectal temperature (Trec), rate of Trec increase, area under the curve for Trec of 38.5 and 39.0 °C, duration Trec ≥38.5 and ≥39.0 °C, and change in muscle temperature, between VHOT, and TEMP and HOT, but not between TEMP and HOT. Each condition also elicited significantly increasing physiological strain, described by sweat rate, heart rate, physiological strain index, rating of perceived exertion and thermal sensation. Stepwise multiple regression reported rate of Trec increase and change in Trec to be predictors of increased eHsp72 concentration. The data suggest that eHsp72 concentration increases once systemic temperature and sympathetic activity exceed a minimum endogenous criterion, elicited during VHOT conditions, and that this increase is likely to be modulated by large, rapid changes in core temperature.
Sadoul, Bastien C; Schuring, Ewoud A H; Mela, David J; Peters, Harry P F
2014-12-01
Several studies have assessed relationships of self-reported appetite (eating motivations, mainly by Visual Analogue Scales, VAS) with subsequent energy intake (EI), though usually in small data sets with limited power and variable designs. The objectives were therefore to better quantify the relationships of self-reports (incorporating subject characteristics) to subsequent EI, and to estimate the quantitative differences in VAS corresponding to consistent, significant differences in EI. Data were derived from an opportunity sample of 23 randomized controlled studies involving 549 subjects, testing the effects of various food ingredients in meal replacers or 100-150 ml mini-drinks. In all studies, scores on several VAS were recorded for 30 min to 5 h post-meal, when EI was assessed by ad libitum meal consumption. The relationships between pre-meal VAS scores and EI were examined using correlation, linear models (including subject characteristics) and a cross-validation procedure. VAS correlations with subsequent EI were statistically significant, but of low magnitude, up to r = 0.26. Hunger, age, gender, body weight and estimated basal metabolic rate explained 25% of the total variance in EI. Without hunger the prediction of EI was modestly but significantly lower (19%, P < 0.001). A change of ≥15-25 mm on a 100 mm VAS was the minimum effect consistently corresponding to a significant change in subsequent EI, depending on the starting VAS level. Eating motivations add in a small but consistently significant way to other known predictors of acute EI. Differences of about 15 mm on a 100 mm VAS appear to be the minimum effect expected to result in consistent, significant differences in subsequent EI. Copyright © 2014 Elsevier Ltd. All rights reserved.
Experimental Monitoring of Cr(VI) Bio-reduction Using Electrochemical Geophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birsen Canan; Gary R. Olhoeft; William A. Smith
2007-09-01
Many Department of Energy (DOE) sites are contaminated with highly carcinogenic hexavalent chromium (Cr(VI)). In this research, we explore the feasibility of applying complex resistivity to the detection and monitoring of microbially-induced reduction of hexavalent chromium (Cr(VI)) to a less toxic form (Cr(III)). We hope to measure the change in ionic concentration that occurs during this reduction reaction. This form of reduction promises to be an attractive alternative to more expensive remedial treatment methods. The specific goal of this research is to define the minimum and maximum concentrations of the chemical and biological compounds in contaminated samples for which the Cr(VI) - Cr(III) reduction processes could be detected via complex resistivity. There are three sets of experiments, each comprised of three sample columns. The first experiment compares three concentrations of Cr(VI) at the same bacterial cell concentration. The second experiment establishes background samples with, and without, Cr(VI) and bacterial cells. The third experiment examines the influence of three different bacterial cell counts on the same concentration of Cr(VI). A polarization relaxation mechanism was observed between 10 and 50 Hz. The polarization mechanism, unfortunately, was not unique to bio-chemically active samples. Spectral analysis of complex resistivity data, however, showed that the frequency where the phase minimum occurred was not constant for bio-chemically active samples throughout the experiment. Significant shifts in phase minima occurred between 10 and 20 Hz from the initiation to completion of Cr(VI) reduction. This phenomenon was quantified using the Cole-Cole model and the Marquardt-Levenberg nonlinear least-squares minimization method. The data suggest that the relaxation time and the time constant of this relaxation are the Cole-Cole parameters most sensitive to changes in biologically-induced reduction of Cr(VI).
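The Cole-Cole fitting step described above can be reproduced in outline with a Levenberg-Marquardt least-squares fit; the following is a minimal sketch on synthetic data, assuming the Pelton form of the Cole-Cole model, with parameter values that are illustrative rather than taken from the study's spectra.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole(f, rho0, m, tau, c):
    """Cole-Cole complex resistivity model (Pelton form)."""
    iwt = (1j * 2 * np.pi * f * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

def residuals(p, f, data):
    model = cole_cole(f, *p)
    # Stack real and imaginary misfits into one real residual vector.
    return np.concatenate([model.real - data.real, model.imag - data.imag])

# Synthetic spectrum (illustrative, not the experimental data from this study).
f = np.logspace(-1, 3, 40)
true = (100.0, 0.2, 0.02, 0.6)  # rho0 [ohm-m], chargeability, tau [s], exponent
rng = np.random.default_rng(1)
data = cole_cole(f, *true) + 0.2 * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))

fit = least_squares(residuals, x0=[80.0, 0.1, 0.01, 0.5], args=(f, data),
                    method='lm')  # Levenberg-Marquardt
print("rho0, m, tau, c =", fit.x)
```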
Membré, Jeanne-Marie; Bassett, John; Gorris, Leon G M
2007-09-01
The objective of this study was to investigate the practicality of designing a heat treatment process in a food manufacturing operation for a product governed by a Food Safety Objective (FSO). Salmonella in cooked poultry meat was taken as the working example. Although there is no FSO for this product in current legislation, this may change in the (near) future. Four different process design calculations were explored by means of deterministic and probabilistic approaches to mathematical data handling and modeling. It was found that the probabilistic approach was a more objective, transparent, and quantifiable approach to establish the stringency of food safety management systems. It also allowed the introduction of specific prevalence rates. The key input analyzed in this study was the minimum time required for the heat treatment at a fixed temperature to produce a product that complied with the criterion for product safety, i.e., the FSO. By means of the four alternative process design calculations, the minimum time requirement at 70 degrees C was established and ranged from 0.26 to 0.43 min. This is comparable to the U.S. regulation recommendations and significantly less than that of 2 min at 70 degrees C used, for instance, in the United Kingdom regulation concerning vegetative microorganisms in ready-to-eat foods. However, the objective of this study was not to challenge existing regulations but to provide an illustration of how an FSO established by a competent authority can guide decisions on safe product and process designs in practical operation; it hopefully contributes to the collaborative work between regulators, academia, and industries that need to continue learning and gaining experience from each other in order to translate risk-based concepts such as the FSO into everyday operational practice.
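The deterministic counterpart of the calculation described above — the minimum hold time at a fixed temperature needed to achieve the log reduction implied by an FSO — can be sketched as follows; the D-value and contamination levels are illustrative assumptions, not the study's inputs.

```python
# Illustrative parameters (assumed, not the values used in the study).
D_70    = 0.30   # D-value of Salmonella at 70 degrees C (min per log10 reduction), assumed
log_N0  = -1.0   # initial contamination, log10 cfu/g, assumed
log_FSO = -6.0   # Food Safety Objective at the moment of consumption, log10 cfu/g, assumed

required_reduction = log_N0 - log_FSO      # log10 cycles to achieve
t_min = D_70 * required_reduction          # minimum hold time at 70 degrees C, minutes
print(f"required reduction: {required_reduction:.1f} log10; minimum time: {t_min:.2f} min")
```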
George, Kelly A; Archer, Melanie S; Green, Lauren M; Conlan, Xavier A; Toop, Tes
2009-12-15
Insect specimens collected from decomposing bodies enable forensic entomologists to estimate the minimum post-mortem interval (PMI). Drugs and toxins within a corpse may affect the development rate of insects that feed on them and it is vital to quantify these effects to accurately calculate minimum PMI. This study investigated the effects of morphine on growth rates of the native Australian blowfly, Calliphora stygia (Fabricius) (Diptera: Calliphoridae). Several morphine concentrations were incorporated into pet mince to simulate post-mortem concentrations in morphine, codeine and/or heroin-dosed corpses. There were four treatments for feeding larvae; T 1: control (no morphine); T 2: 2 microg/g morphine; T 3: 10 microg/g morphine; and T 4: 20 microg/g morphine. Ten replicates of 50 larvae were grown at 22 degrees C for each treatment and their development was compared at four comparison intervals; CI 1: 4-day-old larvae; CI 2: 7-day-old larvae; CI 3: pupae; and CI 4: adults. Length and width were measured for larvae and pupae, and costae and tibiae were measured for adults. Additionally, day of pupariation, day of adult eclosion, and survivorship were calculated for each replicate. The continued presence of morphine in meat was qualitatively verified using high-performance liquid chromatography with acidic potassium permanganate chemiluminescence detection. Growth rates of C. stygia fed on morphine-spiked mince did not differ significantly from those fed on control mince for any comparison interval or parameter measured. This suggests that C. stygia is a reliable model to use to accurately age a corpse containing morphine at any of the concentrations investigated.
Effect of lungeing on head and pelvic movement asymmetry in horses with induced lameness.
Rhodin, M; Pfau, T; Roepstorff, L; Egenvall, A
2013-12-01
Lungeing is an important part of lameness examinations, since the circular path enforced during lungeing is thought to accentuate low grade lameness. However, during lungeing the movement of sound horses becomes naturally asymmetric, which may mimic lameness. Also, compensatory movements in the opposite half of the body may mimic lameness. The aim of this study was to objectively study the presence of circle-dependent and compensatory movement asymmetries in horses with induced lameness. Ten horses were trotted in a straight line and lunged in both directions on a hard surface. Lameness was induced (reversible hoof pressure) in each limb, one at a time, in random order. Vertical head and pelvic movements were measured with body-mounted, uni-axial accelerometers. Differences between maximum and minimum height observed during/after left and right stance phases for the head (HDmax, HDmin) and pelvis (PDmax, PDmin) were measured. Mixed models were constructed to study the effect of lungeing direction and induction, and to quantify secondary compensatory asymmetry mechanisms in the forelimbs and hind limbs. Head and pelvic movement symmetries were affected by lungeing. Minimum pelvic height difference (PDmin) changed markedly, increasing significantly during lungeing, giving the impression of inner hind limb lameness. Primary hind limb lameness induced compensatory head movement, which mimicked an ipsilateral forelimb lameness of almost equal magnitude to the primary hind limb lameness. This could contribute to difficulty in correctly detecting hind limb lameness. Induced forelimb lameness caused both a compensatory contralateral (change in PDmax) and an ipsilateral (change in PDmin) hind limb asymmetry, potentially mimicking hind limb lameness, but of smaller magnitude. Both circle-dependent and compensatory movement mechanisms must be taken into account when evaluating lameness. Copyright © 2013 Elsevier Ltd. All rights reserved.
Do Atmospheric Rivers explain the extreme precipitation events over East Asia?
NASA Astrophysics Data System (ADS)
Dairaku, K.; Nayak, S.
2017-12-01
Extreme precipitation events are now of serious concern due to their damaging societal impacts over the last few decades. Thus, climate indices are widely used to identify and quantify variability and changes in particular aspects of the climate system, especially when considering extremes. In our study, we focus on a few climate indices of annual precipitation extremes for the period 1979-2013 over East Asia to discuss some straightforward information and interpretation of certain aspects of extreme precipitation events that occur over the region. To do so, we first discuss different percentiles of precipitation and the maximum length of wet spell with different thresholds from a regional climate model (NRAMS) simulation at 20 km. Results indicate that the 99th percentile of precipitation events corresponds to about 80 mm/d over a few regions of East Asia during 1979-2013, and the maximum length of wet spell with a minimum of 20 mm precipitation corresponds to about 10 days (Figure 1). We then linked the extreme precipitation events with the intense moisture transport events associated with atmospheric rivers (ARs). The ARs are identified by computing the vertically integrated horizontal water vapor transport (IVT) between 1000 hPa and 300 hPa, with IVT ≥ 250 kg/m/s and a minimum length of 2000 km. With this threshold and condition (set by previous research), our results indicate that some extreme precipitation events are associated with ARs over East Asia, while some events are not associated with any ARs. Similarly, some ARs are associated with extreme precipitation events, while some ARs are not associated with any events. Since the ARs are sensitive to the threshold and condition depending on the region, we will analyze the characteristics of ARs (frequency, duration, and annual variability) with different thresholds and discuss their relationship with extreme precipitation events over East Asia.
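A minimal sketch of the IVT calculation used for AR identification is given below (trapezoidal integration of the moisture flux over the 1000-300 hPa column); the single-column values are purely illustrative.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def column_ivt(q, u, v, p):
    """Vertically integrated vapour transport (kg m^-1 s^-1) for one column.

    q: specific humidity (kg/kg), u, v: wind components (m/s), p: pressure (Pa),
    all 1-D arrays ordered from 1000 hPa down to 300 hPa.
    """
    dp = -np.diff(p)                              # layer thicknesses (positive, Pa)
    qu = 0.5 * (q[1:] * u[1:] + q[:-1] * u[:-1])  # layer-mean zonal moisture flux
    qv = 0.5 * (q[1:] * v[1:] + q[:-1] * v[:-1])  # layer-mean meridional moisture flux
    return np.hypot(np.sum(qu * dp), np.sum(qv * dp)) / G

# Toy single-column example with illustrative values.
p = np.array([100000., 85000., 70000., 50000., 30000.])
q = np.array([0.012, 0.009, 0.005, 0.002, 0.0005])
u = np.array([8.0, 12.0, 15.0, 20.0, 25.0])
v = np.array([5.0, 8.0, 10.0, 12.0, 15.0])
print(f"IVT = {column_ivt(q, u, v, p):.0f} kg m-1 s-1 (AR threshold used in the study: 250)")
```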
NASA Astrophysics Data System (ADS)
Liu, Saiyan; Huang, Shengzhi; Xie, Yangyang; Huang, Qiang; Leng, Guoyong; Hou, Beibei; Zhang, Ying; Wei, Xiu
2018-05-01
Due to the important role of temperature in the global climate system and energy cycles, it is important to investigate the spatial-temporal change patterns, causes and implications of annual maximum (Tmax) and minimum (Tmin) temperatures. In this study, the Cloud model was adopted to fully and accurately analyze the changing patterns of annual Tmax and Tmin from 1958 to 2008 by quantifying their mean, uniformity, and stability in the Wei River Basin (WRB), a typical arid and semi-arid region in China. Additionally, cross wavelet analysis was applied to explore the correlations among annual Tmax and Tmin and the yearly sunspot number, Arctic Oscillation, Pacific Decadal Oscillation, and soil moisture, with an aim to determine possible causes of annual Tmax and Tmin variations. Furthermore, temperature-related impacts on vegetation cover and precipitation extremes were also examined. Results indicated that: (1) the WRB is characterized by increasing trends in annual Tmax and Tmin, with a more evident increasing trend in annual Tmin, which has a higher dispersion degree and is less uniform and stable than annual Tmax; (2) the asymmetric variations of Tmax and Tmin can be generally explained by the stronger effects of solar activity (primarily), large-scale atmospheric circulation patterns, and soil moisture on annual Tmin than on annual Tmax; and (3) increasing annual Tmax and Tmin have exerted strong influences on local precipitation extremes, in terms of their duration, intensity, and frequency in the WRB. This study presents new analyses of Tmax and Tmin in the WRB, and the findings may help guide regional agricultural production and water resources management.
Publishing 13C metabolic flux analysis studies: A review and future perspectives
Crown, Scott B.; Antoniewicz, Maciek R.
2018-01-01
13C-Metabolic flux analysis (13C-MFA) is a powerful model-based analysis technique for determining intracellular metabolic fluxes in living cells. It has become a standard tool in many labs for quantifying cell physiology, e.g. in metabolic engineering, systems biology, biotechnology, and biomedical research. With the increasing number of 13C-MFA studies published each year, it is now ever more important to provide practical guidelines for performing and publishing 13C-MFA studies so that quality is not sacrificed as the number of publications increases. The main purpose of this paper is to provide an overview of good practices in 13C-MFA, which can eventually be used as minimum data standards for publishing 13C-MFA studies. The motivation for this work is two-fold: (1) currently, there is no general consensus among researchers and journal editors as to what minimum data standards should be required for publishing 13C-MFA studies; as a result, there are great discrepancies in terms of quality and consistency; and (2) there is a growing number of studies that cannot be reproduced or verified independently due to incomplete information provided in these publications. This creates confusion, e.g. when trying to reconcile conflicting results, and hinders progress in the field. Here, we review current status in the 13C-MFA field and highlight some of the shortcomings with regards to 13C-MFA publications. We then propose a checklist that encompasses good practices in 13C-MFA. We hope that these guidelines will be a valuable resource for the community and allow 13C-flux studies to be more easily reproduced and accessed by others in the future. PMID:24025367
NASA Astrophysics Data System (ADS)
Szeliga, Walter; Bilham, Roger; Schelling, Daniel; Kakar, Din Mohamed; Lodi, Sarosh
2009-10-01
Surface deformation associated with the 27 August 1931 earthquake near Mach in Baluchistan is quantified from spirit-leveling data and from detailed structural sections of the region interpreted from seismic reflection data constrained by numerous well logs. Mean slip on the west dipping Dezghat/Bannh fault system amounted to 1.2 m on a 42 km × 72 km thrust plane with slip locally attaining 3.2 m up dip of an inferred locking line at ˜9 km depth. Slip also occurred at depths below the interseismic locking line. In contrast, negligible slip occurred in the 4 km near the interseismic locking line. The absence of slip here in the 4 years following the earthquake suggests that elastic energy there must either dissipate slowly in the interseismic cycle, or that a slip deficit remains, pending its release in a large future earthquake. Elastic models of the earthquake cycle in this fold and thrust belt suggest that slip on the frontal thrust fault is reduced by a factor of 2 to 8 compared to that anticipated from convergence of the hinterland, a partitioning process that is presumably responsible for thickening of the fold and thrust belt at the expense of slip on the frontal thrust. Near the latitude of Quetta, GPS measurements indicate that convergence is ˜5 mm/yr. Hence the minimum renewal time between earthquakes with 1.2-m mean displacement should be as little as 240 years. However, when the partitioning of fold belt convergence to frontal thrust slip is taken into account the minimum renewal time may exceed 2000 years.
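The renewal-time estimate quoted above is simple arithmetic on the stated numbers (mean slip divided by the convergence rate, then scaled by the partitioning factor); a short sketch:

```python
# Back-of-the-envelope renewal-time arithmetic from the abstract's numbers.
mean_slip_m       = 1.2   # mean coseismic slip in the 1931 earthquake (m)
convergence_mm_yr = 5.0   # GPS convergence rate near the latitude of Quetta (mm/yr)

renewal_yr = mean_slip_m / (convergence_mm_yr / 1000.0)
print(f"minimum renewal time: {renewal_yr:.0f} years")  # 240 years

# If only 1/2 to 1/8 of the convergence reaches the frontal thrust (partitioning),
# the renewal time scales up accordingly, approaching the abstract's 2000-year figure.
for factor in (2, 8):
    print(f"partitioning factor {factor}: {renewal_yr * factor:.0f} years")
```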
Facile Thermal and Optical Ignition of Silicon Nanoparticles and Micron Particles.
Huang, Sidi; Parimi, Venkata Sharat; Deng, Sili; Lingamneni, Srilakshmi; Zheng, Xiaolin
2017-10-11
Silicon (Si) particles are widely utilized as high-capacity electrodes for Li-ion batteries, elements for thermoelectric devices, agents for bioimaging and therapy, and many other applications. However, Si particles can ignite and burn in air at elevated temperatures or under intense illumination. This poses potential safety hazards when handling, storing, and utilizing these particles for those applications. In order to avoid the problem of accidental ignition, it is critical to quantify the ignition properties of Si particles of different sizes and porosities. To do so, we first used differential scanning calorimetry to experimentally determine the reaction onset temperature of Si particles under slow heating rates (∼0.33 K/s). We found that the reaction onset temperature of Si particles increased with the particle diameter, from 805 °C at 20-30 nm to 935 °C at 1-5 μm. Then, we used a xenon (Xe) flash lamp to ignite Si particles under fast heating rates (∼10^3 to 10^6 K/s) and measured the minimum ignition radiant fluence (i.e., the radiant energy per unit surface area of Si particle beds required for ignition). We found that the measured minimum ignition radiant fluence decreased with decreasing Si particle size and was most sensitive to the porosity of the Si particle bed. These trends for the Xe flash ignition experiments were also confirmed by our one-dimensional unsteady simulation of the heat transfer process. The quantitative information on Si particle ignition included in this Letter will guide the safe handling, storage, and utilization of Si particles for diverse applications and prevent unwanted fire hazards.
Structure and anomalous solubility for hard spheres in an associating lattice gas model.
Szortyka, Marcia M; Girardi, Mauricio; Henriques, Vera B; Barbosa, Marcia C
2012-08-14
In this paper we investigate the solubility of a hard-sphere gas in a solvent modeled as an associating lattice gas. The solution phase diagram for solute at 5% is compared with the phase diagram of the original solute-free model. Model properties are investigated both through Monte Carlo simulations and a cluster approximation. The model solubility is computed via simulations and is shown to exhibit a minimum as a function of temperature. The line of minimum solubility (TmS) coincides with the line of maximum density (TMD) for different solvent chemical potentials, in accordance with the literature on continuous realistic models and on the "cavity" picture.
A statistical analysis of the effects of a uniform minimum drinking age
DOT National Transportation Integrated Search
1987-04-01
This report examines the relationship between minimum drinking age (MDA) and highway fatalities during the 1975-1985 period, when 35 states changed their MDAs. An econometric model of fatalities involving the 18-20 year-old driver normalized by...
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated from data used to estimate Missouri corn acreage.
Quantifier Comprehension in Corticobasal Degeneration
ERIC Educational Resources Information Center
McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray
2006-01-01
In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.…
The Minimum-Mass Surface Density of the Solar Nebula using the Disk Evolution Equation
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
2005-01-01
The Hayashi minimum-mass power law representation of the pre-solar nebula (Hayashi 1981, Prog. Theor. Phys. 70, 35) is revisited using analytic solutions of the disk evolution equation. A new cumulative-planetary-mass model (an integrated form of the surface density) is shown to predict a smoother surface density compared with methods based on direct estimates of surface density from planetary data. First, a best-fit transcendental function is applied directly to the cumulative planetary mass data, with the surface density obtained by direct differentiation. Next, a solution to the time-dependent disk evolution equation is parametrically adapted to the planetary data. The latter model indicates a decay rate of r^(-1/2) in the inner disk followed by a rapid decay, which results in a sharper outer boundary than predicted by the minimum-mass model. The model is shown to be a good approximation to the finite-size early Solar Nebula and, by extension, to extrasolar protoplanetary disks.
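For reference, the Hayashi (1981) minimum-mass gas surface density that the paper revisits is usually quoted as Σ ≈ 1700 (r/1 AU)^(-3/2) g/cm²; a short sketch evaluating it (the 1700 g/cm² normalization is the commonly quoted value, not a number stated in this abstract):

```python
def hayashi_surface_density(r_au):
    """Hayashi (1981) minimum-mass solar nebula gas surface density, g/cm^2."""
    return 1700.0 * r_au ** -1.5

for r in (0.35, 1.0, 5.2, 30.0):  # AU, spanning the inner disk to the outer planets
    print(f"r = {r:5.2f} AU -> sigma = {hayashi_surface_density(r):8.1f} g/cm^2")
```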
Reliability of sonographic assessment of tendinopathy in tennis elbow.
Poltawski, Leon; Ali, Syed; Jayaram, Vijay; Watson, Tim
2012-01-01
To assess the reliability and compute the minimum detectable change using sonographic scales to quantify the extent of pathology and hyperaemia in the common extensor tendon in people with tennis elbow. The lateral elbows of 19 people with tennis elbow were assessed sonographically twice, 1-2 weeks apart. Greyscale and power Doppler images were recorded for subsequent rating of abnormalities. Tendon thickening, hypoechogenicity, fibrillar disruption and calcification were each rated on four-point scales, and scores were summed to provide an overall rating of structural abnormality; hyperaemia was scored on a five point scale. Inter-rater reliability was established using the intraclass correlation coefficient (ICC) to compare scores assigned independently to the same set of images by a radiologist and a physiotherapist with training in musculoskeletal imaging. Test-retest reliability was assessed by comparing scores assigned by the physiotherapist to images recorded at the two sessions. The minimum detectable change (MDC) was calculated from the test-retest reliability data. ICC values for inter-rater reliability ranged from 0.35 (95% CI: 0.05, 0.60) for fibrillar disruption to 0.77 (0.55, 0.88) for overall greyscale score, and 0.89 (0.79, 0.95) for hyperaemia. Test-retest reliability ranged from 0.70 (0.48, 0.84) for tendon thickening to 0.82 (0.66, 0.90) for overall greyscale score and 0.86 (0.73, 0.93) for calcification. The MDC for the greyscale total score was 2.0/12 and for the hyperaemia score was 1.1/5. The sonographic scoring system used in this study may be used reliably to quantify tendon abnormalities and change over time. A relatively inexperienced imager can conduct the assessment and use the rating scales reliably.
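The minimum detectable change reported above follows from the test-retest statistics in the standard way (standard error of measurement from the ICC, then MDC at 95% confidence); a sketch with illustrative numbers, since the raw score SDs are not given in the abstract:

```python
import math

def minimum_detectable_change(sd, icc, z=1.96):
    """MDC from test-retest data: SEM = SD*sqrt(1-ICC); MDC95 = z*sqrt(2)*SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Illustrative SD for the total greyscale score (assumed, not reported in the abstract).
print(f"MDC95 = {minimum_detectable_change(sd=1.3, icc=0.82):.1f} scale points")
```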
MacDonald, Sharyn L S; Cowan, Ian A; Floyd, Richard A; Graham, Rob
2013-10-01
Accurate and transparent measurement and monitoring of radiologist workload is highly desirable for management of daily workflow in a radiology department, and for informing decisions on department staffing needs. It offers the potential for benchmarking between departments and assessing future national workforce and training requirements. We describe a technique for quantifying, with minimum subjectivity, all the work carried out by radiologists in a tertiary department. Six broad categories of clinical activities contributing to radiologist workload were identified: reporting, procedures, trainee supervision, clinical conferences and teaching, informal case discussions, and administration related to referral forms. Time required for reporting was measured using data from the radiology information system. Other activities were measured by observation and timing by observers, and based on these results and extensive consultation, the time requirements and frequency of each activity was agreed on. An activity list was created to record this information and to calculate the total clinical hours required to meet the demand for radiologist services. Diagnostic reporting accounted for approximately 35% of radiologist clinical time; procedures, 23%; trainee supervision, 15%; conferences and tutorials, 14%; informal case discussions, 10%; and referral-related administration, 3%. The derived data have been proven reliable for workload planning over the past 3 years. A transparent and robust method of measuring radiologists' workload has been developed, with subjective assessments kept to a minimum. The technique has value for daily workload and longer term planning. It could be adapted for widespread use. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
Kim, Junho; Lee, Kyung Soo; Kong, Sang Won; Kim, Taikon; Kim, Mi Jung; Park, Si-Bog
2014-01-01
Objective To evaluate the clinical utility of the electrically calculated quantitative pain degree (QPD) and to correlate it with subjective assessments of pain degree including a visual analogue scale (VAS) and the McGill Pain Questionnaire (MPQ). Methods We recruited 25 patients with low back pain. Of them, 21 patients had suffered from low back pain for more than 3 months. The QPD was calculated using the PainVision (PV, PS-2100; Nipro Co., Osaka, Japan). We applied electrodes to the medial forearm of the subjects and the electrical stimulus was amplified sequentially. Minimum perceived current (MPC) and pain equivalent current (PEC) were defined as the minimum electrical stimulation that could be sensed by the subject and the electrical stimulation that could trigger actual pain itself, respectively. To eliminate individual differences, we defined QPD as follows: QPD = (PEC - MPC)/MPC. We scored pre-treatment QPD three times at admission and post-treatment QPD once at discharge. The VAS, MPQ, and QPD were evaluated and correlations between the scales were analyzed. Results Results showed significant test-retest reliability (ICC=0.967, p<0.001) and the correlation between QPD and MPQ was significant (at admission SRCC=0.619, p=0.001; at discharge SRCC=0.628, p=0.001). However, the correlation between QPD and VAS was not significant (at admission SRCC=0.240, p=0.248; at discharge SRCC=0.289, p=0.161). Conclusion Numerical values measured with PV showed consistent results with repeated calculations. Electrically measured QPD showed an excellent correlation with MPQ but not with VAS. These results demonstrate that PV is a significantly reliable device for quantifying the intensity of low back pain. PMID:25379496
Kim, Junho; Lee, Kyung Soo; Kong, Sang Won; Kim, Taikon; Kim, Mi Jung; Park, Si-Bog; Lee, Kyu Hoon
2014-10-01
To evaluate the clinical utility of the electrically calculated quantitative pain degree (QPD) and to correlate it with subjective assessments of pain degree including a visual analogue scale (VAS) and the McGill Pain Questionnaire (MPQ). We recruited 25 patients with low back pain. Of them, 21 patients had suffered from low back pain for more than 3 months. The QPD was calculated using the PainVision (PV, PS-2100; Nipro Co., Osaka, Japan). We applied electrodes to the medial forearm of the subjects and the electrical stimulus was amplified sequentially. Minimum perceived current (MPC) and pain equivalent current (PEC) were defined as the minimum electrical stimulation that could be sensed by the subject and the electrical stimulation that could trigger actual pain itself, respectively. To eliminate individual differences, we defined QPD as follows: QPD = (PEC - MPC)/MPC. We scored pre-treatment QPD three times at admission and post-treatment QPD once at discharge. The VAS, MPQ, and QPD were evaluated and correlations between the scales were analyzed. Results showed significant test-retest reliability (ICC=0.967, p<0.001) and the correlation between QPD and MPQ was significant (at admission SRCC=0.619, p=0.001; at discharge SRCC=0.628, p=0.001). However, the correlation between QPD and VAS was not significant (at admission SRCC=0.240, p=0.248; at discharge SRCC=0.289, p=0.161). Numerical values measured with PV showed consistent results with repeated calculations. Electrically measured QPD showed an excellent correlation with MPQ but not with VAS. These results demonstrate that PV is a significantly reliable device for quantifying the intensity of low back pain.
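The QPD normalization above is simple arithmetic; a one-line sketch (the current values are purely illustrative, and the units depend on the device output):

```python
def quantitative_pain_degree(pec, mpc):
    """QPD = (PEC - MPC) / MPC: pain-equivalent current normalized by the perception threshold."""
    return (pec - mpc) / mpc

print(f"QPD = {quantitative_pain_degree(pec=38.0, mpc=12.0):.2f}")  # illustrative currents
```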
Empirical Model of the Location of the Main Ionospheric Trough
NASA Astrophysics Data System (ADS)
Deminov, M. G.; Shubin, V. N.
2018-05-01
The empirical model of the location of the main ionospheric trough (MIT) is developed based on an analysis of CHAMP satellite data measured at altitudes of 350-450 km during 2000-2007; the model is presented in the form of an analytical dependence of the invariant latitude of the trough minimum Φm on magnetic local time (MLT), geomagnetic activity, and geographical longitude for the Northern and Southern Hemispheres. The time-weighted average index Kp(τ), whose weighting coefficient τ = 0.6 is determined by requiring the minimum deviation of the model from the experimental data, is used as the indicator of geomagnetic activity. The model has no limitations, either in local time or in geomagnetic activity. However, the initial set of MIT minima mainly contains data from the interval 16-08 MLT for Kp(τ) < 6; therefore, the model is rather qualitative outside this interval. It is also established that (a) the use of solar local time (SLT) instead of MLT increases the model error by no more than 5-10%; and (b) the amplitude of the longitudinal effect at the latitude of the MIT minimum in geomagnetic (invariant) coordinates is ten times lower than that in geographical coordinates.
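The time-weighted index above is presumably the usual accumulation of the 3-hourly Kp values with exponentially decreasing weights; the sketch below uses that assumption, since the exact weighting form is not spelled out in the abstract.

```python
def kp_tau(kp_history, tau=0.6):
    """Time-weighted index Kp(tau) = (1 - tau) * sum_i tau**i * Kp_{-i}.

    kp_history: sequence of 3-hourly Kp values, most recent first.
    The Wrenn-style weighting here is an assumption about the form used in the paper.
    """
    return (1.0 - tau) * sum(tau**i * kp for i, kp in enumerate(kp_history))

# Example: quiet background with one recent disturbance.
print(f"Kp(0.6) = {kp_tau([5.0, 4.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0]):.2f}")
```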
Sun, Li; Li, Donghai; Gao, Zhiqiang; Yang, Zhao; Zhao, Shen
2016-09-01
Control of the non-minimum phase (NMP) system is challenging, especially in the presence of modelling uncertainties and external disturbances. To this end, this paper presents a combined feedforward and model-assisted Active Disturbance Rejection Control (MADRC) strategy. Based on the nominal model, the feedforward controller is used to produce a tracking performance that has minimum settling time subject to a prescribed undershoot constraint. On the other hand, the unknown disturbances and uncertain dynamics beyond the nominal model are compensated by MADRC. Since the conventional Extended State Observer (ESO) is not suitable for the NMP system, a model-assisted ESO (MESO) is proposed based on the nominal observable canonical form. The convergence of MESO is proved in time domain. The stability, steady-state characteristics and robustness of the closed-loop system are analyzed in frequency domain. The proposed strategy has only one tuning parameter, i.e., the bandwidth of MESO, which can be readily determined with a prescribed robustness level. Some comparative examples are given to show the efficacy of the proposed method. This paper depicts a promising prospect of the model-assisted ADRC in dealing with complex systems. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
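For readers unfamiliar with the ESO that the MESO extends, a minimal sketch of a conventional linear ESO for a first-order plant is given below; this is the textbook bandwidth-parameterized observer, not the model-assisted MESO proposed in the paper, and the plant and gains are illustrative.

```python
import numpy as np

def eso_step(z, y_meas, u, b0, omega_o, dt):
    """One Euler step of a linear second-order ESO for a plant y_dot = f + b0*u,
    where z = [y_hat, f_hat] and f is the lumped 'total disturbance'.
    Gains follow the usual bandwidth parameterization: L = [2*omega_o, omega_o**2]."""
    y_hat, f_hat = z
    e = y_meas - y_hat
    y_hat += dt * (f_hat + b0 * u + 2.0 * omega_o * e)
    f_hat += dt * (omega_o**2 * e)
    return np.array([y_hat, f_hat])

# Toy simulation: true plant y_dot = -0.5*y + u + d; the ESO lumps -0.5*y + d into f.
dt, b0, omega_o = 0.01, 1.0, 20.0
y, z = 0.0, np.zeros(2)
for k in range(500):
    d = 1.0 if k > 200 else 0.0       # step disturbance at t = 2 s
    u = 0.5                           # constant input, for illustration only
    y += dt * (-0.5 * y + b0 * u + d)  # integrate the true plant
    z = eso_step(z, y, u, b0, omega_o, dt)

print(f"true disturbance term ~ {-0.5 * y + d:.3f}, estimated f_hat = {z[1]:.3f}")
```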
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.
Canadian crop calendars in support of the early warning project
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Hodges, T. (Principal Investigator)
1980-01-01
The Canadian crop calendars for LACIE are presented. Long term monthly averages of daily maximum and daily minimum temperatures for subregions of provinces were used to simulate normal daily maximum and minimum temperatures. The Robertson (1968) spring wheat and Williams (1974) spring barley phenology models were run using the simulated daily temperatures and daylengths for appropriate latitudes. Simulated daily temperatures and phenology model outputs for spring wheat and spring barley are given.
Code of Federal Regulations, 2014 CFR
2014-04-01
... less than six months. Historical data sets must be updated at least every three months and reassessed... model which at a minimum must adhere to the criteria set forth in paragraph (e) of this Appendix F. The... theoretical pricing model contains the minimum pricing factors set forth in Appendix A (§ 240.15c3-1a). The...
NASA Astrophysics Data System (ADS)
Farhang, Nastaran; Safari, Hossein; Wheatland, Michael S.
2018-05-01
Solar flares are an abrupt release of magnetic energy in the Sun’s atmosphere due to reconnection of the coronal magnetic field. This occurs in response to turbulent flows at the photosphere that twist the coronal field. Similar to earthquakes, solar flares represent the behavior of a complex system and, as expected, their energy distribution follows a power law. We present a statistical model based on the principle of minimum energy in a coronal loop undergoing magnetic reconnection, which is described as an avalanche process. We show that the distribution of peaks for the flaring events in this self-organized critical system is scale-free. The obtained power-law index of 1.84 ± 0.02 for the peaks is in good agreement with satellite observations of soft X-ray flares. The principle of minimum energy can be applied in general avalanche models to describe many other phenomena.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Karpel, Mordechay
1989-01-01
Various control analysis, design, and simulation techniques for aeroelastic applications require the equations of motion to be cast in a linear time-invariant state-space form. Unsteady aerodynamic forces have to be approximated as rational functions of the Laplace variable in order to put them in this framework. For the minimum-state method, the number of augmenting aerodynamic states equals the number of denominator roots in the rational approximation. Results are shown of applying various approximation enhancements (including optimization, frequency-dependent weighting of the tabular data, and constraint selection) with the minimum-state formulation to the active flexible wing wind-tunnel model. The results demonstrate that good models can be developed which have an order of magnitude fewer augmenting aerodynamic equations than traditional approaches. This reduction facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena.
Reactive Power Compensation Method Considering Minimum Effective Reactive Power Reserve
NASA Astrophysics Data System (ADS)
Gong, Yiyu; Zhang, Kai; Pu, Zhang; Li, Xuenan; Zuo, Xianghong; Zhen, Jiao; Sudan, Teng
2017-05-01
Starting from a calculation model for the minimum generator reactive power reserve needed to guarantee power system voltage stability, generator reactive power management is combined with reactive power compensation to form a multi-objective optimization problem, and a reactive power compensation optimization method that takes the minimum generator reactive power reserve into account is proposed. Through improvement of the objective function and the constraint conditions, the method increases the reactive power reserve and solves the problem of minimum generator reactive power compensation at load nodes when, as the system load grows, relying solely on generator reactive power cannot meet the requirements of safe operation.
Optimal short-range trajectories for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, G.L.; Erzberger, H.
1982-12-01
An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.
Automated extraction and classification of RNA tertiary structure cyclic motifs
Lemieux, Sébastien; Major, François
2006-01-01
A minimum cycle basis of the tertiary structure of a large ribosomal subunit (LSU) X-ray crystal structure was analyzed. Most cycles are small, as they are composed of 3- to 5 nt, and repeated across the LSU tertiary structure. We used hierarchical clustering to quantify and classify the 4 nt cycles. One class is defined by the GNRA tetraloop motif. The inspection of the GNRA class revealed peculiar instances in sequence. First is the presence of UA, CA, UC and CC base pairs that substitute the usual sheared GA base pair. Second is the revelation of GNR(Xn)A tetraloops, where Xn is bulged out of the classical GNRA structure, and of GN/RA formed by the two strands of interior-loops. We were able to unambiguously characterize the cycle classes using base stacking and base pairing annotations. The cycles identified correspond to small and cyclic motifs that compose most of the LSU RNA tertiary structure and contribute to its thermodynamic stability. Consequently, the RNA minimum cycles could well be used as the basic elements of RNA tertiary structure prediction methods. PMID:16679452
Crop production and economic loss due to wind erosion in hot arid ecosystem of India
NASA Astrophysics Data System (ADS)
Santra, Priyabrata; Moharana, P. C.; Kumar, Mahesh; Soni, M. L.; Pandey, C. B.; Chaudhari, S. K.; Sikka, A. K.
2017-10-01
Wind erosion is a severe land degradation process in hot arid western India and affects the agricultural production system. It affects crop yield directly by damaging the crops through abrasion, burial, dust deposition etc., and indirectly by reducing soil fertility. In this study, an attempt was made to quantify the indirect impact of the wind erosion process on crop production loss and the associated economic loss in the hot arid ecosystem of India. It has been observed that soil loss due to wind erosion varies from a minimum of 1.3 t ha-1 to a maximum of 83.3 t ha-1 depending on severity. Yield loss due to wind erosion was found to be greatest for groundnut (Arachis hypogea) (5-331 kg ha-1 yr-1) and smallest for moth bean (Vigna aconitifolia) (1-93 kg ha-1 yr-1). For pearl millet (Pennisetum glaucum), which covers a major portion of arable lands in western Rajasthan, the yield loss was 3-195 kg ha-1 yr-1. Economic loss was found to be higher for groundnut and clusterbean (Cyamopsis tetragonoloba) than for the other crops, which are about
Comparison of Transport Codes, HZETRN, HETC and FLUKA, Using 1977 GCR Solar Minimum Spectra
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Slaba, Tony C.; Tripathi, Ram K.; Blattnig, Steve R.; Norbury, John W.; Badavi, Francis F.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.;
2009-01-01
The HZETRN deterministic radiation transport code is one of several tools developed to analyze the effects of harmful galactic cosmic rays (GCR) and solar particle events (SPE) on mission planning, astronaut shielding and instrumentation. This paper is a comparison study involving the two Monte Carlo transport codes, HETC-HEDS and FLUKA, and the deterministic transport code, HZETRN. Each code is used to transport ions from the 1977 solar minimum GCR spectrum impinging upon a 20 g/cm2 Aluminum slab followed by a 30 g/cm2 water slab. This research is part of a systematic effort of verification and validation to quantify the accuracy of HZETRN and determine areas where it can be improved. Comparisons of dose and dose equivalent values at various depths in the water slab are presented in this report. This is followed by a comparison of the proton fluxes, and the forward, backward and total neutron fluxes at various depths in the water slab. Comparisons of the secondary light ion 2H, 3H, 3He and 4He fluxes are also examined.
On a thermonuclear origin for the 1980-81 deep light minimum of the symbiotic nova PU Vul
NASA Technical Reports Server (NTRS)
Sion, Edward M.
1993-01-01
The puzzling 1980-81 deep light minimum of the symbiotic nova PU Vul is discussed in terms of a sequence of quasi-static evolutionary models of a hot, 0.5 solar mass white dwarf accreting H-rich matter at a rate of 1 x 10^-8 solar mass/yr. On the basis of the morphological behavior of the models, it is suggested that the deep light minimum of PU Vul could have been the result of two successive, closely spaced hydrogen shell flashes on an accreting white dwarf whose core thermal structure and accreted H-rich envelope were not in a long-term thermal 'cycle-averaged' steady state with the rate of accretion.
McCarrier, Kelly P; Martin, Diane P; Ralston, James D; Zimmerman, Frederick J
2010-05-01
Minimum wage policies have been advanced as mechanisms to improve the economic conditions of the working poor. Both positive and negative effects of such policies on health care access have been hypothesized, but associations have yet to be thoroughly tested. To examine whether the presence of minimum wage policies in excess of the federal standard of $5.15 per hour was associated with health care access indicators among low-skilled adults of working age, a cross-sectional analysis of 2004 Behavioral Risk Factor Surveillance System data was conducted. Self-reported health insurance status and experience with cost-related barriers to needed medical care were adjusted in multi-level logistic regression models to control for potential confounding at the state, county, and individual levels. State-level wage policy was not found to be associated with insurance status or unmet medical need in the models, providing early evidence that increased minimum wage rates may neither strengthen nor weaken access to care as previously predicted.
Barton, Gary J.; McDonald, Richard R.; Nelson, Jonathan M.
2009-01-01
During 2005, the U.S. Geological Survey (USGS) developed, calibrated, and validated a multidimensional flow model for simulating streamflow in the white sturgeon spawning habitat of the Kootenai River in Idaho. The model was developed as a tool to aid understanding of the physical factors affecting quality and quantity of spawning and rearing habitat used by the endangered white sturgeon (Acipenser transmontanus) and for assessing the feasibility of various habitat-enhancement scenarios to re-establish recruitment of white sturgeon. At the request of the Kootenai Tribe of Idaho, the USGS extended the two-dimensional flow model developed in 2005 into a braided reach upstream of the current white sturgeon spawning reach. Many scientists consider the braided reach a suitable substrate with adequate streamflow velocities for re-establishing recruitment of white sturgeon. The 2005 model was extended upstream to help assess the feasibility of various strategies to encourage white sturgeon to spawn in the reach. At the request of the Idaho Department of Fish and Game, the USGS also extended the two-dimensional flow model several kilometers downstream of the white sturgeon spawning reach. This modified model can quantify the physical characteristics of a reach that white sturgeon pass through as they swim upstream from Kootenay Lake to the spawning reach. The USGS Multi-Dimensional Surface-Water Modeling System was used for the 2005 modeling effort and for this subsequent modeling effort. This report describes the model applications and limitations, presents the results of a few simple simulations, and demonstrates how the model can be used to link physical characteristics of streamflow to the location of white sturgeon spawning events during 1994-2001. Model simulations also were used to report on the length and percentage of longitudinal profiles that met the minimum criteria during May and June 2006 and 2007 as stipulated in the U.S. Fish and Wildlife Biological Opinion.
Daviaud, Emmanuelle; Chopra, Mickey
2008-01-01
To quantify staff requirements in primary health care facilities in South Africa through an adaptation of the WHO workload indicator of staff needs tool. We use a model to estimate staffing requirements at primary health care facilities. The model integrates several empirically-based assumptions including time and type of health worker required for each type of consultation, amount of management time required, amount of clinical support required and minimum staff requirements per type of facility. We also calculate the number of HIV-related consultations per district. The model incorporates type of facility, monthly travelling time for mobile clinics, opening hours per week, yearly activity and current staffing and calculates the expected staffing per category of staff per facility and compares it to the actual staffing. Across all the districts there is either an absence of doctors visiting clinics or too few doctors to cover the opening times of community health centres. Overall the number of doctors is only 7% of the required amount. There is 94% of the required number of professional nurses but with wide variations between districts, with a few districts having excesses while most have shortages. The number of enrolled nurses is 60% of what it should be. There are 17% too few enrolled nurse assistants. Across all districts there is wide variation in staffing levels between facilities leading to inefficient use of professional staff. The application of an adapted WHO workload tool identified important human resource planning issues.
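The staffing calculation described above can be illustrated with a small sketch. The function below is a minimal workload-based estimate in the spirit of the adapted WISN tool; the consultation times, opening hours and minimum-staffing floor are purely illustrative assumptions, not the study's actual inputs.

```python
# Minimal sketch of a workload-based staffing calculation in the spirit of the
# WHO workload indicator of staff needs (WISN) approach described above.
# All numbers below (consultation times, workloads, hours) are illustrative
# assumptions, not the values used in the study.

def required_staff(annual_consultations, minutes_per_consultation,
                   weekly_opening_hours, management_fraction=0.15,
                   weeks_per_year=48):
    """Full-time-equivalent staff needed to cover the clinical workload."""
    available_minutes = weekly_opening_hours * 60 * weeks_per_year
    clinical_minutes = annual_consultations * minutes_per_consultation
    # Inflate clinical time to allow for management and clinical-support duties.
    total_minutes = clinical_minutes * (1.0 + management_fraction)
    return total_minutes / available_minutes

# Example facility: 30,000 nurse consultations/year at 15 min each,
# open 40 h/week, with a minimum of 2 professional nurses per clinic.
fte_needed = required_staff(30_000, 15, 40)
fte_needed = max(fte_needed, 2.0)          # enforce a minimum staffing floor
actual_fte = 4.0
print(f"required: {fte_needed:.1f} FTE, actual: {actual_fte:.1f} FTE, "
      f"shortfall: {max(fte_needed - actual_fte, 0):.1f}")
```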
NASA Astrophysics Data System (ADS)
Lee, H.; Kim, S.; Brioude, J.; Cooper, O. R.; Frost, G. J.; Trainer, M.; Kim, C.
2012-12-01
Nitrogen dioxide (NO2) columns observed from space have been useful in detecting the increase of NOx emissions over East Asia in accordance with rapid growth in its economy. In addition to emissions, transport can be an important factor to determine the observed satellite NO2 columns in this region. Satellite tropospheric NO2 columns showed maximum in winter and minimum in summer over the high emission areas in China, as lifetime of NO2 decreases with increase of sunlight. However, secondary peaks in the satellite NO2 columns were found in spring in both Korea and Japan, which may be influenced by transport of NOx within East Asia. Surface in-situ observations confirm the findings from the satellite measurements. The large-scale distribution of satellite NO2 columns over East Asia and the Pacific Ocean showed that the locations of NO2 column maxima coincided with wind convergence zones that change with seasons. In spring, the convergence zone is located over 30-40°N, leading to the most efficient transport of the emissions from southern China to downwind areas including Korea, Japan, and western coastal regions of the United States. We employed a Lagrangian particle dispersion model to identify the sources of the observed springtime maximum NO2. In order to understand chemical processing during the transport and quantify the roles of emissions and transport in local NOx budgets, we will also present the results from a regional chemical transport model.
Giezendanner-Thoben, Robert; Meier, Ulrich; Meier, Wolfgang; Heinze, Johannes; Aigner, Manfred
2005-11-01
Two-line OH planar laser-induced fluorescence (PLIF) thermometry was applied to a swirling CH4/air flame in a gas turbine (GT) model combustor at atmospheric pressure, which exhibited self-excited combustion instability. The potential and limitations of the method are discussed with respect to applications in GT-like flames. A major drawback of using OH as a temperature indicator is that no temperature information can be obtained from regions where OH radicals are missing or present in insufficient concentration. The resulting bias in the average temperature is addressed and quantified for one operating condition by a comparison with results from laser Raman measurements applied in the same flame. Care was taken to minimize saturation effects by decreasing the spectral laser power density to a minimum while keeping an acceptable spatial resolution and signal-to-noise ratio. In order to correct for the influence of laser light attenuation, absorption measurements were performed on a single-shot basis and a correction procedure was applied. The accuracy was determined to be 4%-7% depending on the location within the flame and on the temperature level. A GT model combustor with an optical combustion chamber is described, and phase-locked 2D temperature distributions from a pulsating flame are presented. The temperature variations during an oscillation cycle are specified, and the general flame behavior is described. Our main goals are the evaluation of the OH PLIF thermometry and the characterization of a pulsating GT-like flame.
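As a rough illustration of the two-line principle behind this thermometry, the sketch below converts a per-pixel ratio of the two OH fluorescence signals into a temperature through an assumed Boltzmann relation. The energy spacing DELTA_E and the calibration constant C_CAL are placeholders, not values from the study.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
DELTA_E = 3.0e-20       # assumed energy spacing of the two probed OH ground states, J
C_CAL = 2.5             # assumed calibration constant lumping line strengths,
                        # laser energies and detection efficiencies

def temperature_from_ratio(signal_ratio):
    """Temperature from the ratio of the two OH fluorescence signals,
    assuming a Boltzmann population distribution: R = C * exp(-dE / (kB * T))."""
    return DELTA_E / (K_B * np.log(C_CAL / signal_ratio))

# Example: per-pixel signal ratios from the two excitation lines
ratios = np.array([0.55, 0.80, 1.10])
print(temperature_from_ratio(ratios))   # K, one value per pixel
```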
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Risk of spacecraft on-orbit obsolescence: Novel framework, stochastic modeling, and implications
NASA Astrophysics Data System (ADS)
Dubos, Gregory F.; Saleh, Joseph H.
2010-07-01
The Government Accountability Office (GAO) has repeatedly noted the difficulties encountered by the Department of Defense (DOD) in keeping its acquisition of space systems on schedule and within budget. Among the recommendations provided by GAO, a minimum Technology Readiness Level (TRL) for technologies to be included in the development of a space system is advised. The DOD considers this recommendation impractical arguing that if space systems were designed with only mature technologies (high TRL), they would likely become obsolete on-orbit fairly quickly. The risk of on-orbit obsolescence is a key argument in the DOD's position for dipping into low technology maturity for space acquisition programs, but this policy unfortunately often results in the cost growth and schedule slippage criticized by the GAO. The concept of risk of on-orbit obsolescence has remained qualitative to date. In this paper, we formulate a theory of risk of on-orbit obsolescence by building on the traditional notion of obsolescence and adapting it to the specificities of space systems. We develop a stochastic model for quantifying and analyzing the risk of on-orbit obsolescence, and we assess, in its light, the appropriateness of DOD's rationale for maintaining low TRL technologies in its acquisition of space assets as a strategy for mitigating on-orbit obsolescence. Our model and results contribute one step towards the resolution of the conceptual stalemate on this matter between the DOD and the GAO, and we hope will inspire academics to further investigate the risk of on-orbit obsolescence.
Hydrophobic hydration and the anomalous partial molar volumes in ethanol-water mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Ming-Liang; Te, Jerez; Cendagorta, Joseph R.
2015-02-14
The anomalous behavior in the partial molar volumes of ethanol-water mixtures at low concentrations of ethanol is studied using molecular dynamics simulations. Previous work indicates that the striking minimum in the partial molar volume of ethanol V_E as a function of ethanol mole fraction X_E is determined mainly by water-water interactions. These results were based on simulations that used one water model for the solute-water interactions but two different water models for the water-water interactions. This is confirmed here by using two more water models for the water-water interactions. Furthermore, the previous work indicates that the initial decrease is caused by association of the hydration shells of the hydrocarbon tails, and the minimum occurs at the concentration where all of the hydration shells are touching each other. Thus, the characteristics of the hydration of the tail that cause the decrease and the features of the water models that reproduce this type of hydration are also examined here. The results show that a single-site multipole water model with a charge distribution that mimics the large quadrupole and the p-orbital type electron density out of the molecular plane has “brittle” hydration with hydrogen bonds that break as the tails touch, which reproduces the deep minimum. However, water models with more typical site representations with partial charges lead to flexible hydration that tends to stay intact, which produces a shallow minimum. Thus, brittle hydration may play an essential role in hydrophobic association in water.
Uncertainties in observations and climate projections for the North East India
NASA Astrophysics Data System (ADS)
Soraisam, Bidyabati; Karumuri, Ashok; D. S., Pai
2018-01-01
Northeast India has undergone many climate- and vegetation-related changes in the last few decades due to increased human activities. However, the lack of observations makes it difficult to ascertain the climate change signal. The study involves mean, seasonal-cycle, trend and extreme-month analyses for the summer-monsoon and winter seasons of observed climate data from the Indian Meteorological Department (1° × 1°) and the Aphrodite and CRU reanalyses (both 0.5° × 0.5°), and of five regional-climate-model simulations (LMDZ, MPI, GFDL, CNRM and ACCESS) from AR5/CORDEX-South-Asia (0.5° × 0.5°). Long-term (1970-2005) observed minimum and maximum monthly temperature and precipitation, and the corresponding CORDEX-South-Asia data for the historical period (1970-2005) and the RCP4.5 future projections (2011-2060), have been analyzed for long-term trends. A large spread is found across the models in the spatial distributions of the various mean maximum/minimum climate statistics, though the models qualitatively capture a similar trend in the corresponding area-averaged seasonal cycles. Our observational analysis broadly suggests that there is no significant trend in rainfall. Significant trends are observed in the area-averaged minimum temperature during winter. All the CORDEX-South-Asia simulations project an insignificant decreasing trend in seasonal precipitation but an increasing trend in both seasonal maximum and minimum temperature over northeast India. The frequency of extreme monthly maximum and minimum temperatures is projected to increase. It is not clear from the future projections how the extreme rainfall months during JJAS may change. The results show that uncertainty exists in the CORDEX-South-Asia model projections over the region in spite of their relatively high resolution.
Minimum risk wavelet shrinkage operator for Poisson image denoising.
Cheng, Wu; Hirakawa, Keigo
2015-05-01
The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients, with the modeling of the coefficients enabled by Skellam distribution analysis. We extend these results by solving for shrinkage operators for the Skellam distribution that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with the minimum attainable L2 error.
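The overall multiscale pipeline can be sketched as follows. Note that this is not the authors' Skellam-based minimum-risk operator; it substitutes the more common Anscombe variance stabilization with soft thresholding of one-level Haar detail coefficients, purely to illustrate where a shrinkage operator sits in the processing chain.

```python
import numpy as np

# Simple stand-in for multiscale Poisson denoising. The paper derives a
# minimum-risk shrinkage operator for Skellam-distributed Haar coefficients;
# here we sketch the variance-stabilizing alternative (Anscombe transform +
# soft thresholding of Haar detail coefficients), which is NOT the authors'
# operator but illustrates the overall pipeline.

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_poisson_1d(counts, threshold=1.0):
    y = anscombe(counts.astype(float))          # approx. unit-variance Gaussian noise
    approx = 0.5 * (y[0::2] + y[1::2])          # one-level Haar analysis
    detail = 0.5 * (y[0::2] - y[1::2])
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)  # soft shrinkage
    y_hat = np.empty_like(y)
    y_hat[0::2] = approx + detail               # Haar synthesis
    y_hat[1::2] = approx - detail
    return inverse_anscombe(y_hat)

rng = np.random.default_rng(0)
clean = 20.0 + 10.0 * np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = rng.poisson(clean)
# Denoised mean-squared error should be smaller than that of the raw counts.
print(np.mean((denoise_poisson_1d(noisy) - clean) ** 2) < np.mean((noisy - clean) ** 2))
```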
Advancements in dynamic kill calculations for blowout wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.
1993-09-01
This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.
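A minimal single-phase bracketing calculation of the kind described above might look like the sketch below: scan pump rates until the kill-fluid hydrostatic plus frictional pressure at bottomhole matches the reservoir pressure. The well geometry, friction factor and pressures are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal single-phase sketch of a bracketing calculation for the minimum
# dynamic kill rate: find the smallest pump rate at which kill-fluid
# hydrostatic plus frictional pressure at bottomhole balances the reservoir
# pressure. Geometry, friction factor and pressures are illustrative.

RHO = 1200.0          # kill-fluid density, kg/m^3
G = 9.81
DEPTH = 3000.0        # true vertical depth, m
LENGTH = 3000.0       # flow-path length, m
DIAM = 0.10           # hydraulic diameter of the flow path, m
FRICTION = 0.02       # assumed Darcy friction factor (fully rough turbulent flow)
P_RES = 45.0e6        # reservoir pressure, Pa

def bottomhole_pressure(q):
    """Hydrostatic plus frictional pressure at bottomhole for pump rate q (m^3/s)."""
    area = np.pi * DIAM**2 / 4.0
    v = q / area
    dp_fric = FRICTION * (LENGTH / DIAM) * 0.5 * RHO * v**2
    return RHO * G * DEPTH + dp_fric

rates = np.linspace(1e-4, 0.2, 2000)                 # candidate pump rates, m^3/s
feasible = rates[bottomhole_pressure(rates) >= P_RES]
print(f"minimum dynamic kill rate ~ {feasible[0]*1000:.1f} L/s"
      if feasible.size else "no feasible rate in the scanned range")
```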
Stockpile Model of Personal Protective Equipment in Taiwan.
Chen, Yu-Ju; Chiang, Po-Jung; Cheng, Yu-Hsin; Huang, Chun-Wei; Kao, Hui-Yun; Chang, Chih-Kai; Huang, Hsun-Miao; Liu, Pei-Yin; Wang, Jen-Hsin; Chih, Yi-Chien; Chou, Shu-Mei; Yang, Chin-Hui; Chen, Chang-Hsun
The Taiwan Centers for Disease Control (Taiwan CDC) has established a 3-tier personal protective equipment (PPE) stockpiling framework that could maintain a minimum stockpile for the surge demand of PPE in the early stage of a pandemic. However, PPE stockpiling efforts must contend with increasing storage fees and expiration problems. In 2011, the Taiwan CDC initiated a stockpile replacement model in order to optimize the PPE stockpiling efficiency, ensure a minimum stockpile, use the government's limited funds more effectively, and achieve the goal of sustainable management. This stockpile replacement model employs a first-in-first-out principle in which the oldest stock in the central government stockpile is regularly replaced and replenished with the same amount of new and qualified products, ensuring the availability and maintenance of the minimum stockpiles. In addition, a joint electronic procurement platform has been established for merchandising the replaced PPE to local health authorities and medical and other institutions for their routine or epidemic use. In this article, we describe the PPE stockpile model in Taiwan, including the 3-tier stockpiling framework, the operational model, the components of the replacement system, implementation outcomes, epidemic supports, and the challenges and prospects of this model.
Kim, Hyun Jung; Griffiths, Mansel W; Fazil, Aamir M; Lammerding, Anna M
2009-09-01
Foodborne illness contracted at food service operations is an important public health issue in Korea. In this study, the probabilities for growth of, and enterotoxin production by, Staphylococcus aureus in pork meat-based foods prepared in food service operations were estimated by Monte Carlo simulation. Data on the prevalence and concentration of S. aureus, as well as compliance with guidelines for time and temperature controls during food service operations, were collected. The growth of S. aureus was initially estimated by using the U.S. Department of Agriculture's Pathogen Modeling Program. A second model based on raw pork meat was derived to compare cell number predictions. The correlation between toxin level and cell number, as well as the minimum toxin dose obtained from published data, was adopted to quantify the probability of staphylococcal intoxication. When data gaps were found, assumptions were made based on guidelines for food service practices. Baseline risk-model and scenario analyses were performed to indicate possible outcomes of staphylococcal intoxication under the scenarios generated based on these data gaps. Staphylococcal growth was predicted during holding before and after cooking, and the highest estimated concentration (4.59 log CFU/g for the 99.9th percentile value) of S. aureus was observed in raw pork initially contaminated with S. aureus and held before cooking. The estimated probability of staphylococcal intoxication was very low, using currently available data. However, scenario analyses revealed an increased possibility of staphylococcal intoxication when higher levels of initial contamination in the raw meat and longer holding times both before and after cooking occurred.
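A stripped-down version of such a Monte Carlo screening is sketched below. The input distributions, the square-root growth-model parameters and the 5 log CFU/g intoxication threshold are assumptions chosen for illustration rather than the study's fitted inputs.

```python
import numpy as np

# Illustrative Monte Carlo sketch of the growth/intoxication screening described
# above. Input distributions, the square-root growth-model parameters and the
# ~5 log CFU/g threshold often associated with enterotoxin production are
# assumptions for illustration only.

rng = np.random.default_rng(42)
N = 100_000

initial_log = rng.normal(1.0, 0.8, N)                 # initial contamination, log CFU/g
hold_temp = rng.uniform(15.0, 30.0, N)                # holding temperature, deg C
hold_time = rng.triangular(0.5, 2.0, 6.0, N)          # holding time, h

# Ratkowsky-type square-root model for the specific growth rate (assumed parameters)
b, t_min = 0.04, 7.0
mu = (b * np.maximum(hold_temp - t_min, 0.0)) ** 2    # ln CFU per hour
growth_log10 = mu * hold_time / np.log(10.0)

final_log = initial_log + growth_log10
p_toxin_risk = np.mean(final_log >= 5.0)
print(f"99.9th percentile concentration: {np.percentile(final_log, 99.9):.2f} log CFU/g")
print(f"fraction of iterations exceeding 5 log CFU/g: {p_toxin_risk:.4%}")
```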
NASA Astrophysics Data System (ADS)
Wang, X.; Murtugudde, R. G.; Zhang, D.
2016-12-01
Photosynthesis and respiration are important processes in all ecosystems on the Earth, in which carbon and oxygen are the two main elements. However, the oxygen cycle has received much less attention (relative to the carbon cycle) despite its important role in the Earth system. Oxygen is a sensitive indicator of physical and biogeochemical processes in the ocean and thus a key parameter for understanding the ocean's ecosystem and biogeochemistry. The oxygen minimum zone (OMZ), often seen below 200 m, is a prominent feature of the world's oceans. There has been evidence of OMZ expansion over the past few decades in the tropical oceans. Climate models project a continued decline in dissolved oxygen (DO) and an expansion of the tropical OMZs under future warming conditions, which is of great concern because of the implications for marine organisms. We employ a validated three-dimensional model that simulates physical transport (circulation and vertical mixing), biological processes (O2 production and consumption) and ocean-atmosphere O2 exchange to quantify various sources and sinks of DO over 1980-2015. We show how we use observational data to improve our model simulation. Then we assess the spatial and temporal variability in simulated DO in the tropical Pacific Ocean, and explore the impacts of physical and biogeochemical processes on the DO dynamics, with a focus on the OMZ. Our analyses indicate that DO in the OMZ has a positive relationship with the 13ºC isotherm depth and a negative relationship with the concentration of dissolved organic material.
NASA Technical Reports Server (NTRS)
Harrison, Keith P.; Grimm, Robert E.
2002-01-01
Models of hydrothermal groundwater circulation can quantify limits to the role of hydrothermal activity in Martian crustal processes. We present here the results of numerical simulations of convection in a porous medium due to the presence of a hot intruded magma chamber. The parameter space includes magma chamber depth, volume, aspect ratio, and host rock permeability and porosity. A primary goal of the models is the computation of surface discharge. Discharge increases approximately linearly with chamber volume, decreases weakly with depth (at low geothermal gradients), and is maximized for equant-shaped chambers. Discharge increases linearly with permeability until limited by the energy available from the intrusion. Changes in the average porosity are balanced by changes in flow velocity and therefore have little effect. Water/rock ratios of approximately 0.1, obtained by other workers from models based on the mineralogy of the Shergotty meteorite, imply minimum permeabilities of 10^-16 m^2 during hydrothermal alteration. If substantial vapor volumes are required for soil alteration, the permeability must exceed 10^-15 m^2. The principal application of our model is to test the viability of hydrothermal circulation as the primary process responsible for the broad spatial correlation of Martian valley networks with magnetic anomalies. For host rock permeabilities as low as 10^-17 m^2 and intrusion volumes as low as 50 km^3, the total discharge due to intrusions building that part of the southern highlands crust associated with magnetic anomalies spans a range comparable to the inferred discharge from the overlying valley networks.
Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.
Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T
2017-07-01
Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
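The decision rule itself is compact, as the sketch below shows: average each action's loss over posterior draws of the safety signal and choose the action with the smallest posterior (Bayes) risk. The posterior draws and the three loss functions are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the decision rule described above: integrate each action's
# loss function over the posterior distribution of the safety signal and pick
# the action with the smallest posterior (Bayes) risk. Posterior draws and
# loss functions below are illustrative assumptions.

rng = np.random.default_rng(1)
# Posterior draws of the safety signal (e.g., excess AMDE rate vs. the expected rate)
signal = rng.normal(loc=0.8, scale=0.5, size=10_000)

losses = {
    "no_action":   lambda s: np.maximum(s, 0.0) * 10.0,     # cost of missed harm
    "investigate": lambda s: 2.0 + np.maximum(s, 0.0) * 3.0, # moderate cost, partial mitigation
    "recall":      lambda s: np.full_like(s, 6.0),           # fixed cost of acting
}

bayes_risk = {a: loss(signal).mean() for a, loss in losses.items()}
best = min(bayes_risk, key=bayes_risk.get)
print(bayes_risk, "->", best)
```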
NASA Astrophysics Data System (ADS)
Gaertner, B. A.; Zegre, N.
2015-12-01
Climate change is surfacing as one of the most important environmental and social issues of the 21st century. Over the last 100 years, observations show increasing trends in global temperatures and in the intensity and frequency of precipitation events such as flooding, drought, and extreme storms. Global circulation models (GCMs) show similar trends for historic and future climate indicators, albeit with geographic and topographic variability at regional and local scales. In order to assess the utility of GCM projections for hydrologic modeling, it is important to quantify how well GCM outputs agree with robust historical observations at finer spatial scales. Previous research in the United States has primarily focused on the Western and Northeastern regions because of the dominance of snowmelt in runoff and aquifer recharge, but the impact of climate warming in the mountainous central Appalachian region is poorly understood. In this research, we assess the performance of GCM-generated historical climate compared to historical observations, primarily in the context of forcing data for macro-scale hydrologic modeling. Our results show significant spatial heterogeneity of modeled climate indices when compared to observational trends at the watershed scale. Observational data show considerable variability in maximum temperature and precipitation trends, with consistent increases in minimum temperature. The geographic and complex topographic gradients throughout the central Appalachian region are likely the contributing factors in temperature and precipitation variability. Variable climate changes are leading to more severe and frequent climate events such as temperature extremes and storms, which can have significant impacts on drinking water supply, infrastructure, and the health of downstream communities.
Cross-scale modeling of surface temperature and tree seedling establishment in mountain landscapes
Dingman, John; Sweet, Lynn C.; McCullough, Ian M.; Davis, Frank W.; Flint, Alan L.; Franklin, Janet; Flint, Lorraine E.
2013-01-01
Abstract: Introduction: Estimating surface temperature from above-ground field measurements is important for understanding the complex landscape patterns of plant seedling survival and establishment, processes which occur at heights of only several centimeters. Currently, future climate models predict temperature at 2 m above ground, leaving ground-surface microclimate not well characterized. Methods: Using a network of field temperature sensors and climate models, a ground-surface temperature method was used to estimate microclimate variability of minimum and maximum temperature. Temperature lapse rates were derived from field temperature sensors and distributed across the landscape capturing differences in solar radiation and cold air drainages modeled at a 30-m spatial resolution. Results: The surface temperature estimation method used for this analysis successfully estimated minimum surface temperatures on north-facing, south-facing, valley, and ridgeline topographic settings, and when compared to measured temperatures yielded an R2 of 0.88, 0.80, 0.88, and 0.80, respectively. Maximum surface temperatures generally had slightly more spatial variability than minimum surface temperatures, resulting in R2 values of 0.86, 0.77, 0.72, and 0.79 for north-facing, south-facing, valley, and ridgeline topographic settings. Quasi-Poisson regressions predicting recruitment of Quercus kelloggii (black oak) seedlings from temperature variables were significantly improved using these estimates of surface temperature compared to air temperature modeled at 2 m. Conclusion: Predicting minimum and maximum ground-surface temperatures using a downscaled climate model coupled with temperature lapse rates estimated from field measurements provides a method for modeling temperature effects on plant recruitment. Such methods could be applied to improve projections of species’ range shifts under climate change. Areas of complex topography can provide intricate microclimates that may allow species to redistribute locally as climate changes.
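The core downscaling step, adjusting a coarse 2 m temperature with a locally derived lapse rate plus topographic modifiers, can be sketched as below. The lapse rate, radiation coefficient and cold-air-pooling term are placeholder values, not the study's fitted parameters.

```python
import numpy as np  # imported for consistency with the other sketches

# Minimal sketch of the surface-temperature downscaling idea: start from a
# coarse (e.g., 2 m air temperature) estimate and adjust it with a locally
# derived lapse rate and a topographic modifier. All coefficients below are
# illustrative assumptions.

LAPSE_RATE = -6.5e-3        # K per metre (assumed, e.g. from sensor-network regression)
RAD_COEF = 1.5              # K adjustment per unit of relative solar radiation index

def surface_tmin(t2m_coarse, elev_cell, elev_coarse, cold_air_pool=0.0):
    """Minimum surface temperature on a 30 m cell from the coarse 2 m temperature."""
    return t2m_coarse + LAPSE_RATE * (elev_cell - elev_coarse) - cold_air_pool

def surface_tmax(t2m_coarse, elev_cell, elev_coarse, rad_index=0.5):
    """Maximum surface temperature, boosted on sun-exposed (high rad_index) cells."""
    return (t2m_coarse + LAPSE_RATE * (elev_cell - elev_coarse)
            + RAD_COEF * (rad_index - 0.5))

# Example: a valley cell 150 m below the coarse-grid elevation, prone to cold pooling
print(surface_tmin(4.0, 850.0, 1000.0, cold_air_pool=2.0))   # 4.0 + 0.975 - 2.0
print(surface_tmax(18.0, 850.0, 1000.0, rad_index=0.8))
```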
This presentation, Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty, was given at the STAR Black Carbon 2016 Webinar Series: Changing Chemistry over Time held on Oct. 31, 2016.
Space Tug avionics definition study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1975-01-01
A top down approach was used to identify, compile, and develop avionics functional requirements for all flight and ground operational phases. Such requirements as safety mission critical functions and criteria, minimum redundancy levels, software memory sizing, power for tug and payload, data transfer between payload, tug, shuttle, and ground were established. Those functional requirements that related to avionics support of a particular function were compiled together under that support function heading. This unique approach provided both organizational efficiency and traceability back to the applicable operational phase and event. Each functional requirement was then allocated to the appropriate subsystems and its particular characteristics were quantified.
ATM Quality of Service Parameters at 45 Mbps Using a Satellite Emulator: Laboratory Measurements
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Bobinsky, Eric A.
1997-01-01
Results of 45-Mbps DS3 intermediate-frequency loopback measurements of asynchronous transfer mode (ATM) quality of service parameters (cell error ratio and cell loss ratio) are presented. These tests, which were conducted at the NASA Lewis Research Center in support of satellite-ATM interoperability research, represent initial efforts to quantify the minimum parameters for stringent ATM applications, such as MPEG-1 and MPEG-2 video transmission. Portions of these results were originally presented to the International Telecommunications Union's ITU-R Working Party 4B in February 1996 in support of their Draft Preliminary Recommendation on the Transmission of ATM Traffic via Satellite.
Development and evaluation of a technique for in vivo monitoring of 60Co in human liver
NASA Astrophysics Data System (ADS)
Gomes, GH; Silva, MC; Mello, JQ; Dantas, ALA; Dantas, BM
2018-03-01
60Co is an artificial radioactive metal produced by neutron activation of iron. It decays by emitting beta particles and gamma radiation and represents a risk of internal exposure for workers involved in the maintenance of nuclear power reactors. Intakes can be quantified through in vivo monitoring. This work describes the development of a technique for the quantification of 60Co in the human liver. The sensitivity of the method is evaluated in terms of the minimum detectable effective dose. The results show that the technique is suitable both for the monitoring of occupational exposures and for the evaluation of accidental intakes.
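The sensitivity evaluation mentioned above typically rests on a Currie-type minimum detectable activity. The sketch below shows that calculation with assumed counting efficiency, background, counting time and dose coefficient; in practice, converting a detectable body content into an effective dose also involves biokinetic retention, which is omitted here.

```python
import numpy as np

# Sketch of the minimum detectable activity (MDA) calculation that commonly
# underlies a "minimum detectable effective dose" for in vivo counting, using
# the Currie formulation. Efficiency, background, counting time and the dose
# coefficient are illustrative assumptions.

def mda_bq(background_counts, efficiency, gamma_yield, count_time_s):
    """Currie minimum detectable activity (Bq) for a single gamma line."""
    detection_limit_counts = 2.71 + 4.65 * np.sqrt(background_counts)
    return detection_limit_counts / (efficiency * gamma_yield * count_time_s)

background = 400.0        # counts in the 60Co region of interest during the measurement
eff = 0.012               # counting efficiency for the liver geometry (assumed)
yield_1332 = 1.0          # emission probability of the 1332 keV line
t = 1800.0                # counting time, s

mda = mda_bq(background, eff, yield_1332, t)
dose_per_bq = 1.7e-8      # assumed dose coefficient, Sv/Bq (simplified: no biokinetics)
print(f"MDA ~ {mda:.1f} Bq -> minimum detectable effective dose ~ "
      f"{mda * dose_per_bq * 1e6:.3f} microSv")
```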
Trevett, Andrew F.; Carter, Richard C.
2008-01-01
In developing countries, it has been observed that drinking-water frequently becomes recontaminated following its collection and during storage in the home. This paper proposes a semi-quantified 'disease risk index' (DRI) designed to identify communities or households that are 'most at risk' from consuming recontaminated drinking-water. A brief review of appropriate physical and educational intervention measures is presented, and their effective use is discussed. It is concluded that incorporating a simple appraisal tool, such as the proposed DRI, into a community water-supply programme would be useful in shaping the overall strategy, requiring only a minimum of organizational learning.
To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...
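The two-stage (nested) Monte Carlo structure can be sketched as follows: an outer loop over uncertain parameters and an inner loop over inter-individual variability, with exposures summed across routes. All distributions and route models below are illustrative assumptions.

```python
import numpy as np

# Sketch of a two-stage (nested) Monte Carlo for aggregate exposure: the outer
# loop samples uncertain parameters (uncertainty), the inner loop samples
# person-to-person variability, and exposures from multiple routes are summed.
# All distributions below are illustrative assumptions.

rng = np.random.default_rng(7)
N_OUTER, N_INNER = 200, 2000

percentiles_95 = []
for _ in range(N_OUTER):                                     # uncertainty loop
    # Uncertain inputs: mean residue level (mg/kg) and dermal transfer factor
    mean_residue = rng.lognormal(mean=np.log(0.05), sigma=0.3)
    dermal_tf = rng.uniform(0.02, 0.08)

    # Variability loop: individual intake rates and contact behaviour
    food_intake = rng.lognormal(np.log(0.3), 0.5, N_INNER)   # kg food / kg BW / day
    contact_rate = rng.lognormal(np.log(1.0), 0.6, N_INNER)  # surface contacts / day

    dietary = mean_residue * food_intake                      # mg / kg BW / day
    dermal = dermal_tf * 0.01 * contact_rate                  # mg / kg BW / day
    aggregate = dietary + dermal
    percentiles_95.append(np.percentile(aggregate, 95))

lo, hi = np.percentile(percentiles_95, [5, 95])
print(f"95th-percentile aggregate dose: {lo:.4f} - {hi:.4f} mg/kg-BW/day (90% uncertainty band)")
```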
Displacement Models for THUNDER Actuators having General Loads and Boundary Conditions
NASA Technical Reports Server (NTRS)
Wieman, Robert; Smith, Ralph C.; Kackley, Tyson; Ounaies, Zoubeida; Bernd, Jeff; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
This paper summarizes techniques for quantifying the displacements generated in THUNDER actuators in response to applied voltages for a variety of boundary conditions and exogenous loads. The PDE (partial differential equations) models for the actuators are constructed in two steps. In the first, previously developed theory quantifying thermal and electrostatic strains is employed to model the actuator shapes which result from the manufacturing process and subsequent repoling. Newtonian principles are then employed to develop PDE models which quantify displacements in the actuator due to voltage inputs to the piezoceramic patch. For this analysis, drive levels are assumed to be moderate so that linear piezoelectric relations can be employed. Finite element methods for discretizing the models are developed and the performance of the discretized models are illustrated through comparison with experimental data.
Using multilevel models to quantify heterogeneity in resource selection
Wagner, Tyler; Diefenbach, Duane R.; Christensen, Sonja; Norton, Andrew S.
2011-01-01
Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provides an objective, quantifiable approach to assess differences or changes in resource selection.
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
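The comparison of operating regimes can be reproduced with a short sweep over an endoreversible engine of the Curzon-Ahlborn type, as sketched below. Reservoir temperatures and conductances are arbitrary illustrative values; the sweep simply locates the maximum-power, maximum-efficiency and minimum-entropy-production points.

```python
import numpy as np

# Sketch of an endoreversible (Curzon-Ahlborn type) engine: finite conductances
# couple a reversible core to the reservoirs. Sweeping the hot-side working
# temperature locates the operating points of maximum power, maximum efficiency
# and minimum entropy production. Parameter values are illustrative.

T_H, T_C = 600.0, 300.0        # reservoir temperatures, K
K_H, K_C = 1.0, 1.0            # heat conductances, W/K

t_hw = np.linspace(T_C + 1.0, T_H - 1.0, 5000)     # hot-side working temperature
q_h = K_H * (T_H - t_hw)                            # heat drawn from the hot reservoir
# Internal reversibility (q_c / q_h = t_cw / t_hw) combined with q_c = K_C (t_cw - T_C)
t_cw = K_C * T_C / (K_C - q_h / t_hw)
valid = (t_cw > T_C) & (t_cw < t_hw)
t_hw, q_h, t_cw = t_hw[valid], q_h[valid], t_cw[valid]

q_c = K_C * (t_cw - T_C)
power = q_h - q_c
eta = power / q_h
sigma = q_c / T_C - q_h / T_H                       # total entropy production rate

for label, idx in [("max power", np.argmax(power)),
                   ("max efficiency", np.argmax(eta)),
                   ("min entropy production", np.argmin(sigma))]:
    print(f"{label:24s} eta = {eta[idx]:.3f}, P = {power[idx]:.2f} W, sigma = {sigma[idx]:.4f} W/K")
```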
40 CFR 600.010-08 - Vehicle test requirements and minimum data requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., US06, SC03 and Cold temperature FTP data from each subconfiguration included within the model type. (2... data requirements. 600.010-08 Section 600.010-08 Protection of Environment ENVIRONMENTAL PROTECTION... Provisions § 600.010-08 Vehicle test requirements and minimum data requirements. (a) Unless otherwise...
40 CFR 600.010-86 - Vehicle test requirements and minimum data requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... additional model types established under § 600.207(a)(2), data from each subconfiguration included within the... data requirements. 600.010-86 Section 600.010-86 Protection of Environment ENVIRONMENTAL PROTECTION... requirements and minimum data requirements. (a) For each certification vehicle defined in this part, and for...
DOT National Transportation Integrated Search
2007-10-01
This study was aimed at completing the research to develop and scrutinize minimum levels for pavement marking retroreflectivity to meet nighttime driving needs. A previous study carried out in the 1990s was based on the CARVE model developed at Ohio ...
NASA Astrophysics Data System (ADS)
Gómez, I.; Estrela, M.
2009-09-01
Extreme temperature events have a great impact on human society. Knowledge of minimum temperatures during winter is very useful for both the general public and organisations whose workers have to operate in the open, e.g. railways, roadways, tourism, etc. Moreover, winter minimum temperatures are considered a parameter of interest and concern since persistent cold-waves can affect areas as diverse as public health, energy consumption, etc. Thus, an accurate forecasting of these temperatures could help to predict cold-wave conditions and permit the implementation of strategies aimed at minimizing the negative effects that low temperatures have on human health. The aim of this work is to evaluate the skill of the RAMS model in determining daily minimum temperatures during winter over the Valencia Region. For this, we have used the real-time configuration of this model currently running at the CEAM Foundation. To carry out the model verification process, we have analysed not only the global behaviour of the model for the whole Valencia Region, but also its behaviour for the individual stations distributed within this area. The study has been performed for the winter forecast period from 1 December 2007 - 31 March 2008. The results obtained are encouraging and indicate a good agreement between the observed and simulated minimum temperatures. Moreover, the model captures quite well the temperatures in the extreme cold episodes. Acknowledgement. This work was supported by "GRACCIE" (CSD2007-00067, Programa Consolider-Ingenio 2010), by the Spanish Ministerio de Educación y Ciencia, contract number CGL2005-03386/CLI, and by the Regional Government of Valencia Conselleria de Sanitat, contract "Simulación de las olas de calor e invasiones de frío y su regionalización en la Comunidad Valenciana" ("Heat wave and cold invasion simulation and their regionalization at Valencia Region"). The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (Valencia, Spain).
'Dem DEMs: Comparing Methods of Digital Elevation Model Creation
NASA Astrophysics Data System (ADS)
Rezza, C.; Phillips, C. B.; Cable, M. L.
2017-12-01
Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, small scale details presented from this data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation and variations between them can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs, and our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skewed offsets, and thus had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero-point. The skewed data sets were corrected by rotating the plot so that it had a global slope of zero and then subtracting for a zero-point vertical offset. Once corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. SOCET SET has more topographic variability due to its decreased smoothing, which is borne out by preliminary offset calculations. In the future, we plan to expand upon this preliminary work with more regions of Europa, continue quantifying the height differences and relative accuracy of each method, and generate more DEMs to expand our available comparison regions.
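The profile corrections described above (a zero-point shift and removal of a spurious global slope before comparing DEMs) amount to a few lines of array manipulation, sketched below with made-up profile values.

```python
import numpy as np

# Sketch of the two profile corrections described above: a vertical (zero-point)
# offset removal and a de-skewing step that removes a spurious global slope
# before profiles from different DEM pipelines are compared. Values are
# illustrative.

def correct_profile(elev, remove_slope=False):
    """Return a profile shifted to a common zero point, optionally detrended."""
    z = np.asarray(elev, dtype=float)
    if remove_slope:
        x = np.arange(z.size)
        slope, intercept = np.polyfit(x, z, 1)     # fit and subtract a global ramp
        z = z - (slope * x + intercept)
    return z - z.min()                             # common zero point

asp    = np.array([12.0, 14.5, 20.0, 16.0, 13.0])            # e.g. an ASP profile, m
socet  = np.array([112.5, 115.5, 121.5, 117.0, 114.5])       # vertically offset profile
skewed = np.array([10.0, 13.5, 20.0, 17.0, 15.0]) + 0.8 * np.arange(5)  # tilted profile

for name, prof, deskew in [("ASP", asp, False), ("SOCET", socet, False), ("skewed", skewed, True)]:
    print(name, np.round(correct_profile(prof, deskew), 2))
```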
Influence of the ischaemic tourniquet in antibiotic prophylaxis in total knee replacement.
Prats, Laura; Valls, Joan; Ros, Joaquim; Jover, Alfredo; Pérez-Villar, Ferran; Fernández-Martínez, José Juan
2015-01-01
There is level IV evidence that the preoperative administration of antibiotics helps in the prevention of prosthetic infection. There is controversy as to whether the ischemia applied during surgery may affect the minimum inhibitory concentration of the antibiotic in the peri-prosthetic tissues. The aim of this study is to examine this phenomenon through the determination of antibiotic concentration in the synovial tissue. A prospective observational clinical study was conducted on 32 patients undergoing total knee replacement. Cefonicid 2 g was administered as prophylaxis, with a tourniquet used for all patients. The antibiotic concentration was quantified by high-performance liquid chromatography in samples of synovial tissue collected at the beginning and at the end of the intervention. The mean concentration of antibiotic was 23.16 μg/g (95% CI 19.19 to 27.13) in the samples at the beginning of the intervention and 15.45 μg/g (95% CI 13.20 to 17.69) in the final samples, in both cases higher than the minimum inhibitory concentration of cefonicid, set at 8 μg/g. These results were statistically significant for both concentrations (P<.00001). The antibiotic concentration during standard total knee replacement surgery performed with a tourniquet gradually decreases over the course of the intervention. The concentration determined at the end of the intervention was higher than the minimum inhibitory concentration required for the antibiotic studied. In conclusion, the use of a tourniquet does not increase the risk of infection. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.
Valladares, Fernando; Gianoli, Ernesto; Saldaña, Alfredo
2011-08-01
While the climbing habit allows vines to reach well-lit canopy areas with a minimum investment in support biomass, many of them have to survive under the dim understorey light during certain stages of their life cycle. But, if the growth/survival trade-off widely reported for trees holds for climbing plants, they cannot maximize both light-interception efficiency and shade avoidance (i.e. escaping from the understorey). The seven most important woody climbers occurring in a Chilean temperate evergreen rainforest were studied with the hypothesis that the light-capture efficiency of climbers would be positively associated with their abundance in the understorey. Species abundance in the understorey was quantified from their relative frequency and density in field plots, the light environment was quantified by hemispherical photography, the photosynthetic response to light was measured with a portable gas-exchange analyser, and the whole-shoot light-interception efficiency and carbon gain were estimated with the 3-D computer model Y-plant. Species differed in specific leaf area, leaf mass fraction, above-ground leaf area ratio, light-interception efficiency and potential carbon gain. Abundance of species in the understorey was related to whole-shoot features but not to leaf-level features such as specific leaf area. Potential carbon gain was inversely related to light-interception efficiency. Mutual shading among leaves within a shoot was very low (<20 %). The abundance of climbing plants in this southern rainforest understorey was directly related to their capacity to intercept light efficiently but not to their potential carbon gain. The most abundant climbers in this ecosystem match well with a shade-tolerance syndrome, in contrast to the pioneer-like nature of climbers observed in tropical studies. The climbers studied seem to sacrifice searching for high light in favour of coping with the dim understorey light.
Quantifying intervertebral disc mechanics: a new definition of the neutral zone
2011-01-01
Background The neutral zone (NZ) is the range over which a spinal motion segment (SMS) moves with minimal resistance. Clear as this may seem, the various methods to quantify NZ described in the literature depend on rather arbitrary criteria. Here we present a stricter, more objective definition. Methods To mathematically represent load-deflection of a SMS, the asymmetric curve was fitted by a summed sigmoid function. The first derivative of this curve represents the SMS compliance and the region with the highest compliance (minimal stiffness) is the NZ. To determine the boundaries of this region, the inflection points of compliance can be used as unique points. These are defined by the maximum and the minimum in the second derivative of the fitted curve, respectively. The merits of the model were investigated experimentally: eight porcine lumbar SMS's were bent in flexion-extension, before and after seven hours of axial compression. Results The summed sigmoid function provided an excellent fit to the measured data (r2 > 0.976). The NZ by the new definition was on average 2.4 (range 0.82-7.4) times the NZ as determined by the more commonly used angulation difference at zero loading. Interestingly, NZ consistently and significantly decreased after seven hours of axial compression when determined by the new definition. On the other hand, NZ increased when defined as angulation difference, probably reflecting the increase of hysteresis. The methods thus address different aspects of the load-deflection curve. Conclusions A strict mathematical definition of the NZ is proposed, based on the compliance of the SMS. This operational definition is objective, conceptually correct, and does not depend on arbitrarily chosen criteria.
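A sketch of the proposed procedure is given below: fit a summed-sigmoid curve to moment-angulation data, differentiate it to obtain the compliance, and bound the neutral zone by the extrema of the second derivative. The exact summed-sigmoid form used by the authors may differ; a sum of two logistic terms plus an offset is assumed here, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the proposed neutral-zone definition: fit a summed-sigmoid function
# to the moment-angulation curve, take its first derivative as the compliance,
# and bound the neutral zone by the extrema of the second derivative (the
# inflection points of compliance). The functional form is an assumption.

def summed_sigmoid(m, a1, b1, c1, a2, b2, c2, d):
    return (a1 / (1 + np.exp(-b1 * (m - c1)))
            + a2 / (1 + np.exp(-b2 * (m - c2))) + d)

# Synthetic load-deflection data: moment (Nm) vs angulation (deg)
moment = np.linspace(-4, 4, 161)
true = summed_sigmoid(moment, 5, 1.2, -0.3, 7, 0.9, 0.4, -6)
rng = np.random.default_rng(3)
angle = true + rng.normal(0, 0.05, moment.size)

popt, _ = curve_fit(summed_sigmoid, moment, angle,
                    p0=[5, 1.0, -0.5, 7, 1.0, 0.5, -6], maxfev=20000)

fit = summed_sigmoid(moment, *popt)
compliance = np.gradient(fit, moment)            # first derivative: deg / Nm
d2 = np.gradient(compliance, moment)             # second derivative

nz_lo = moment[np.argmax(d2)]                    # inflection points of compliance
nz_hi = moment[np.argmin(d2)]
print(f"neutral zone: {min(nz_lo, nz_hi):.2f} to {max(nz_lo, nz_hi):.2f} Nm, "
      f"width {abs(nz_hi - nz_lo):.2f} Nm")
```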
Das, Debobrato; Reed, Stephanie; Klokkevold, Perry R; Wu, Benjamin M
2013-02-01
3D digital microscopy was used to develop a rapid alternative approach to quantify the effects of specific laser parameters on soft tissue ablation and charring in vitro without the use of conventional tissue processing techniques. Two diode lasers operating at 810 and 980 nm wavelengths were used to ablate three tissue types (bovine liver, turkey breast, and bovine muscle) at varying laser power (0.3, 1.0, and 2.0 W) and velocities (1-50 mm/s). Spectrophotometric analyses were performed on each tissue to determine tissue-specific absorption coefficients and were considered in creating wavelength-dependent energy attenuation models to evaluate minimum heat of tissue ablations. 3D surface contour profiles characterizing tissue damage revealed that ablation depth and tissue charring increased with laser power and decreased with lateral velocity independent of wavelength and tissue type. While bovine liver ablation and charring were statistically higher at 810 than 980 nm (p < 0.05), turkey breast and bovine muscle ablated and charred more at 980 than 810 nm (p < 0.05). Spectrophotometric analysis revealed that bovine liver tissue had a greater tissue-specific absorption coefficient at 810 than 980 nm, while turkey breast and bovine muscle had a larger absorption coefficient at 980 nm (p < 0.05). This rapid 3D microscopic analysis of robot-driven laser ablation yielded highly reproducible data that supported well-defined trends related to laser-tissue interactions and enabled high throughput characterization of many laser-tissue permutations. Since 3D microscopy quantifies entire lesions without altering the tissue specimens, conventional and immunohistologic techniques can be used, if desired, to further interrogate specific sections of the digitized lesions.
Upper ocean O2 trends: 1958-2015
NASA Astrophysics Data System (ADS)
Ito, Takamitsu; Minobe, Shoshiro; Long, Matthew C.; Deutsch, Curtis
2017-05-01
Historic observations of dissolved oxygen (O2) in the ocean are analyzed to quantify multidecadal trends and variability from 1958 to 2015. Additional quality control is applied, and the resultant oxygen anomaly field is used to quantify upper ocean O2 trends at global and hemispheric scales. A widespread negative O2 trend is beginning to emerge from the envelope of interannual variability. Ocean reanalysis data are used to evaluate relationships with changes in ocean heat content (OHC) and oxygen solubility (O2,sat). Global O2 decline is evident after the 1980s, accompanied by an increase in global OHC. The global upper ocean O2 inventory (0-1000 m) changed at the rate of -243 ± 124 T mol O2 per decade. Further, the O2 inventory is negatively correlated with the OHC (r = -0.86; 0-1000 m) and the regression coefficient of O2 to OHC is approximately -8.2 ± 0.66 nmol O2 J-1, on the same order of magnitude as the simulated O2-heat relationship typically found in ocean climate models. Variability and trends in the observed upper ocean O2 concentration are dominated by the apparent oxygen utilization component with relatively small contributions from O2,sat. This indicates that changing ocean circulation, mixing, and/or biochemical processes, rather than the direct thermally induced solubility effects, are the primary drivers for the observed O2 changes. The spatial patterns of the multidecadal trend include regions of enhanced ocean deoxygenation including the subpolar North Pacific, eastern boundary upwelling systems, and tropical oxygen minimum zones. Further studies are warranted to understand and attribute the global O2 trends and their regional expressions.
Kinematic Optimization in Birds, Bats and Ornithopters
NASA Astrophysics Data System (ADS)
Reichert, Todd
Birds and bats employ a variety of advanced wing motions in the efficient production of thrust. The purpose of this thesis is to quantify the benefit of these advanced wing motions, to determine the optimal theoretical wing kinematics for a given flight condition, and to develop a methodology for applying the results in the optimal design of flapping-wing aircraft (ornithopters). To this end, a medium-fidelity, combined aero-structural model has been developed that is capable of simulating the advanced kinematics seen in bird flight, as well as the highly non-linear structural deformations typical of high-aspect-ratio wings. Five unique methods of thrust production observed in natural species have been isolated, quantified and thoroughly investigated for their dependence on Reynolds number, airfoil selection, frequency, amplitude and relative phasing. A gradient-based optimization algorithm has been employed to determine the wing kinematics that result in the minimum required power for a generalized aircraft or species in any given flight condition. In addition to the theoretical work, with the help of an extended team, the methodology was applied to the design and construction of the world's first successful human-powered ornithopter. The Snowbird Human-Powered Ornithopter is used as an example aircraft to show how additional design constraints can pose limits on the optimal kinematics. The results show significant trends that give insight into the kinematic operation of natural species. The general result is that additional complexity, whether it be larger twisting deformations or advanced wing-folding mechanisms, allows for the possibility of more efficient flight. At its theoretical optimum, the efficiency of flapping wings exceeds that of current rotors and propellers, although these efficiencies are quite difficult to achieve in practice.
NASA Astrophysics Data System (ADS)
Gilmore, Troy E.; Genereux, David P.; Solomon, D. Kip; Farrell, Kathleen M.; Mitasova, Helena
2016-11-01
Novel groundwater sampling (age, flux, and nitrate) carried out beneath a streambed and in wells was used to estimate (1) the current rate of change of nitrate storage, dSNO3/dt, in a contaminated unconfined aquifer, and (2) future [NO3-]FWM (the flow-weighted mean nitrate concentration in groundwater discharge) and fNO3 (the nitrate flux from aquifer to stream). Estimates of dSNO3/dt suggested that at the time of sampling (2013) the nitrate storage in the aquifer was decreasing at an annual rate (mean = -9 mmol/m2yr) equal to about one-tenth the rate of nitrate input by recharge. This is consistent with data showing a slow decrease in the [NO3-] of groundwater recharge in recent years. Regarding future [NO3-]FWM and fNO3, predictions based on well data show an immediate decrease that becomes more rapid after ˜5 years before leveling out in the early 2040s. Predictions based on streambed data generally show an increase in future [NO3-]FWM and fNO3 until the late 2020s, followed by a decrease before leveling out in the 2040s. Differences show the potential value of using information directly from the groundwater—surface water interface to quantify the future impact of groundwater nitrate on surface water quality. The choice of denitrification kinetics was similarly important; compared to zero-order kinetics, a first-order rate law levels out estimates of future [NO3-]FWM and fNO3 (lower peak, higher minimum) as legacy nitrate is flushed from the aquifer. Major fundamental questions about nonpoint-source aquifer contamination can be answered without a complex numerical model or long-term monitoring program.
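The sensitivity to the assumed denitrification rate law can be illustrated with a toy mixing calculation, sketched below: discharge is treated as a mixture of groundwater ages, and each parcel's recharge concentration is degraded in transit under either zero-order or first-order kinetics. The age distribution, rate constants and recharge history are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

# Toy illustration of why the choice of denitrification kinetics matters for the
# predicted flow-weighted mean nitrate concentration in discharge. Discharge is
# a mixture of ages (assumed exponential age distribution); recharge
# concentrations rise and then slowly decline, and each parcel is denitrified in
# transit under either a zero-order or a first-order rate law. All values are
# illustrative assumptions.

TAU = 20.0                         # mean groundwater age, years
ages = np.arange(0, 120)
age_weights = np.exp(-ages / TAU)
age_weights /= age_weights.sum()

def recharge_conc(year):
    """Recharge [NO3-] history (mg N/L): rises to ~2000, then slowly declines."""
    return (np.clip(1.0 + 0.25 * (year - 1960), 1.0, 11.0)
            - np.clip(0.05 * (year - 2000), 0.0, None))

K0 = 0.15                          # zero-order rate, mg N/L per year of travel
K1 = 0.02                          # first-order rate, 1/year

def fwm(year, order):
    c_in = recharge_conc(year - ages)
    if order == 0:
        c_out = np.maximum(c_in - K0 * ages, 0.0)
    else:
        c_out = c_in * np.exp(-K1 * ages)
    return np.sum(age_weights * c_out)

years = np.arange(2013, 2046)
for order in (0, 1):
    series = np.array([fwm(y, order) for y in years])
    print(f"order {order}: peak {series.max():.2f}, minimum {series.min():.2f} mg N/L")
```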
NASA Astrophysics Data System (ADS)
Cuthbert, M. O.; Acworth, I. R.; Halloran, L. J. S.; Rau, G. C.; Bernadi, T. L.
2017-12-01
It has long been recognised that hydraulic properties can be derived from the response of piezometric heads to tidal loadings. However, there is a degree of subjectivity in the graphical approaches most commonly used to calculate barometric efficiency, leading to uncertainties in derived values of compressible storage. Here we demonstrate a novel approach to remove these uncertainties by objectively deriving the barometric efficiency from groundwater hydraulic head responses using a frequency-domain method. We take advantage of the ubiquitous, worldwide atmospheric tide fluctuations that occur at 2 cycles per day (cpd). First we use a Fourier transform to calculate the amplitudes of the 2 cpd signals from co-located atmospheric pressure and hydraulic head time series measurements. Next we show how the Earth tide response at the same frequency can be quantified and removed so that this effect does not interfere with the calculation of the barometric efficiency. Finally, the ratio of the amplitude of the 2 cpd response of hydraulic head to that of atmospheric pressure is used to quantify the barometric efficiency. This new method allows an objective quantification using 'passive' in situ monitoring rather than resorting to aquifer pumping or laboratory tests. The minimum data requirements are 15 days of 6-hourly hydraulic head and atmospheric pressure measurements, together with modelled Earth tide records, which can readily be generated using freely available software. The new approach provides a rapid and cost-effective alternative to traditional methods of estimating aquifer compressible storage properties without the subjectivity of existing approaches, and will be of importance to improving the spatial coverage of subsurface characterisation for groundwater resource evaluation and land subsidence assessment.
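The frequency-domain estimate reduces to a ratio of Fourier amplitudes at 2 cpd, as in the sketch below. Synthetic hourly records are used here for simplicity (with 6-hourly sampling, 2 cpd sits exactly at the Nyquist frequency), the Earth-tide contribution is assumed to have been removed already, and the true barometric efficiency of 0.4 is an arbitrary test value.

```python
import numpy as np

# Sketch of the frequency-domain estimate described above: Fourier-transform
# co-located hydraulic head and barometric pressure records and form the
# amplitude ratio at the 2 cycle-per-day atmospheric tide. Synthetic records;
# the Earth-tide contribution at 2 cpd is assumed already removed.

DT_HOURS = 1.0
N_DAYS = 15
n = int(N_DAYS * 24 / DT_HOURS)                    # 360 samples
t_days = np.arange(n) * DT_HOURS / 24.0

BE_TRUE = 0.4                                      # synthetic barometric efficiency
baro = 1.5 * np.cos(2 * np.pi * 2.0 * t_days)      # 2 cpd atmospheric tide, in head units (m)
rng = np.random.default_rng(5)
head = 10.0 - BE_TRUE * baro + 0.02 * rng.normal(size=n)   # head responds inversely to loading

freqs = np.fft.rfftfreq(n, d=DT_HOURS / 24.0)      # cycles per day
k = np.argmin(np.abs(freqs - 2.0))                 # bin closest to 2 cpd
amp_head = np.abs(np.fft.rfft(head - head.mean()))[k]
amp_baro = np.abs(np.fft.rfft(baro - baro.mean()))[k]
print(f"estimated barometric efficiency: {amp_head / amp_baro:.3f}")
```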
Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano
2007-11-01
We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
NASA Astrophysics Data System (ADS)
Salis, Michele; Arca, Bachisio; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo; Santoni, Paul; Ager, Alan; Finney, Mark
2010-05-01
Characterizing the spatial pattern of large fire occurrence and severity is an important feature of fire management planning in the Mediterranean region. The spatial characterization of fire probabilities, fire behavior distributions and value changes are key components for quantitative risk assessment and for prioritizing fire suppression resources, fuel treatments and law enforcement. Because of the growing wildfire severity and frequency in recent years (e.g., Portugal, 2003 and 2005; Italy and Greece, 2007 and 2009), there is an increasing demand for models and tools that can aid in wildfire prediction and prevention. Newer wildfire simulation systems offer promise in this regard, and allow for fine scale modeling of wildfire severity and probability. Several new applications have resulted from the development of a minimum travel time (MTT) fire spread algorithm (Finney, 2002), which models fire growth by searching for the minimum time for fire to travel among nodes in a 2D network. The MTT approach makes it computationally feasible to simulate thousands of fires and generate burn probability and fire severity maps over large areas. The MTT algorithm is embedded in a number of research and fire modeling applications. High performance computers are typically used for MTT simulations, although the algorithm is also implemented in the FlamMap program (www.fire.org). In this work, we describe the application of the MTT algorithm to estimate spatial patterns of burn probability and to analyze wildfire severity in three fire prone areas of the Mediterranean Basin, specifically the islands of Sardinia (Italy), Sicily (Italy) and Corsica (France). We assembled fuels and topographic data for the simulations in 500 x 500 m grids for the study areas. The simulations were run using 100,000 ignitions under weather conditions that replicated severe and moderate weather conditions (97th and 70th percentile, July and August weather, 1995-2007). We used both random ignition locations and ignition probability grids (1000 x 1000 m) built from historical fire data (1995-2007). The simulation outputs were then examined to understand relationships between burn probability and specific vegetation types and ignition sources. Wildfire threats to specific values of human interest were quantified to map landscape patterns of wildfire risk. The simulation outputs also allowed us to differentiate between areas of the landscape that were progenitors of fires versus "victims" of large fires. The results provided spatially explicit data on wildfire likelihood and intensity that can be used in a variety of strategic and tactical planning forums to mitigate wildfire threats to human and other values in the Mediterranean Basin.
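At its core, a minimum-travel-time calculation of this kind is a shortest-path problem over a gridded landscape. The sketch below is a generic illustration of that idea (Dijkstra's algorithm over a small grid with made-up spread rates), not the MTT implementation used in FlamMap:

```python
import heapq

# Hedged sketch of the minimum-travel-time idea: fire arrival time at each
# cell is the least-cost path time from the ignition cell, where the cost of
# crossing between neighbouring cells is distance divided by spread rate.
# Spread rates are arbitrary here; real systems derive them from fuels,
# weather and topography, and use richer node networks than a 4-neighbour grid.
def min_travel_time(rate, ignition, cell_size=30.0):
    rows, cols = len(rate), len(rate[0])
    inf = float("inf")
    t = [[inf] * cols for _ in range(rows)]
    t[ignition[0]][ignition[1]] = 0.0
    pq = [(0.0, ignition)]
    while pq:
        tt, (r, c) = heapq.heappop(pq)
        if tt > t[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # time to cross one cell at the mean of the two spread rates
                cost = cell_size / (0.5 * (rate[r][c] + rate[nr][nc]))
                if tt + cost < t[nr][nc]:
                    t[nr][nc] = tt + cost
                    heapq.heappush(pq, (t[nr][nc], (nr, nc)))
    return t

rates = [[1.0, 1.0, 0.2], [1.0, 2.0, 0.2], [1.0, 2.0, 2.0]]  # m/min, assumed
arrival = min_travel_time(rates, (0, 0))
print("arrival time at far corner: %.0f min" % arrival[2][2])
```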
NASA Astrophysics Data System (ADS)
Popova, E.; Zharkova, V. V.; Shepherd, S. J.; Zharkov, S.
2016-12-01
Using the principal components of solar magnetic field variations derived from the synoptic maps for solar cycles 21-24 with Principal Component Analysis (PCA) (Zharkova et al., 2015), we confirm our previous prediction of the upcoming Maunder minimum to occur in cycles 25-27, or in 2020-2055. We also use a summary curve of the two eigen vectors of solar magnetic field oscillations (or two dynamo waves) to extrapolate solar activity backwards over three millennia and to compare it with relevant historic and Holocene data. Extrapolation of the summary curve confirms the eight grand cycles of 350-400 years superimposed on 22-year cycles caused by the beating effect of the two dynamo waves generated in the two (deep and shallow) layers of the solar interior. The grand cycles in different periods comprise a different number of individual 22-year cycles; the longer the grand cycle, the larger the number of 22-year cycles and the smaller their amplitudes. We also report the super-grand cycle of about 2000 years often found in solar activity with spectral analysis. Furthermore, the summary curve bears a remarkable resemblance to the sunspot and terrestrial activity reported in the past: the recent Maunder Minimum (1645-1715), the Dalton minimum (1790-1815), the Wolf minimum (1200), the Homeric minimum (800-900 BC), the Medieval Warm Period (900-1200), the Roman Warm Period (400-10 BC) and so on. Temporal variations of these dynamo waves are modelled with the two-layer mean dynamo model with meridional circulation, revealing a remarkable resemblance of the butterfly diagram to the one derived for the last Maunder minimum in the 17th century and predicting the one for the upcoming Maunder minimum in 2020-2055.
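The beating mechanism invoked here is easy to reproduce in a toy form. The sketch below sums two sine waves with slightly different periods (illustrative values only, not the fitted dynamo-wave parameters) and reports the resulting beat period, which falls in the 350-400 year range quoted for the grand cycles:

```python
import numpy as np

# Illustrative sketch of a "summary curve" as the sum of two dynamo-like waves:
# two oscillations with close periods beat, producing a slow envelope on top of
# the ~22-yr cycle. Periods and amplitudes are assumptions for illustration.
years = np.arange(1000, 3100)
wave1 = np.sin(2 * np.pi * years / 22.0)
wave2 = np.sin(2 * np.pi * years / 23.3)       # slightly longer period (assumed)
summary_curve = wave1 + wave2                   # beats with a long envelope

beat_period = 1.0 / (1.0 / 22.0 - 1.0 / 23.3)
print("beat (grand-cycle) period ~ %.0f years" % beat_period)
```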
NASA Astrophysics Data System (ADS)
Sun, Weijun; Qin, Xiang; Wang, Yetang; Chen, Jizu; Du, Wentao; Zhang, Tong; Huai, Baojuan
2017-08-01
To understand how a continental glacier responds to climate change, it is imperative to quantify the surface energy fluxes and identify factors controlling glacier mass balance using a surface energy balance (SEB) model. Light absorbing impurities (LAIs) at the glacial surface can greatly decrease surface albedo and increase glacial melt. An automatic weather station was set up and generated a unique 6-year meteorological dataset for the ablation zone of Laohugou Glacier No. 12. Based on these data, the surface energy budget was calculated and an experiment on the glacial melt process was carried out. The effect of reduced albedo on glacial melting was analyzed. Owing to continuous accumulation of LAIs, the ablation zone had been darkening since 2010. The mean surface albedo in the melt period (June through September) dropped from 0.52 to 0.43, and the minimum daily mean value was as low as 0.1. Over the 2010-2015 record, with the clean-ice albedo held fixed in the range 0.3-0.4, LAIs caused an increase of +7.1 to +16 W m-2 in net shortwave radiation and an additional loss of 1101-2663 mm water equivalent. Calculation with the SEB model showed that equivalent increases in glacial melt would be obtained by increasing air temperature by 1.3 and 3.2 K, respectively.
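The leverage of an albedo change on melt can be illustrated with back-of-envelope numbers. The sketch below uses assumed values for incoming shortwave and melt-season length (not the station data), converting the extra absorbed shortwave into a water-equivalent melt via the latent heat of fusion:

```python
# Back-of-envelope sketch of the albedo effect: extra net shortwave absorbed
# when melt-season albedo drops, and the extra melt it could drive if all of
# that energy went into melting ice. All inputs are assumed typical values.
SW_in = 250.0                  # W m-2, assumed mean melt-season incoming shortwave
albedo_clean, albedo_dark = 0.52, 0.43
extra_net_sw = SW_in * (albedo_clean - albedo_dark)      # W m-2

L_f = 3.34e5                   # J kg-1, latent heat of fusion
melt_seconds = 122 * 24 * 3600                            # June-September
extra_melt = extra_net_sw * melt_seconds / L_f            # kg m-2, i.e. mm w.e.
print("extra net shortwave ~ %.1f W m-2, extra melt ~ %.0f mm w.e. per season"
      % (extra_net_sw, extra_melt))
```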
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jager, Yetta; Smith, Brennan T
Hydroelectric power provides a cheap source of electricity with few carbon emissions. Yet, reservoirs are not operated sustainably, which we define as meeting societal needs for water and power while protecting long-term health of the river ecosystem. Reservoirs that generate hydropower are typically operated with the goal of maximizing energy revenue, while meeting other legal water requirements. Reservoir optimization schemes used in practice do not seek flow regimes that maximize aquatic ecosystem health. Here, we review optimization studies that considered environmental goals in one of three approaches. The first approach seeks flow regimes that maximize hydropower generation while satisfying legal requirements, including environmental (or minimum) flows. Solutions from this approach are often used in practice to operate hydropower projects. In the second approach, flow releases from a dam are timed to meet water quality constraints on dissolved oxygen (DO), temperature and nutrients. In the third approach, flow releases are timed to improve the health of fish populations. We conclude by suggesting three steps for bringing multi-objective reservoir operation closer to the goal of ecological sustainability: (1) conduct research to identify which features of flow variation are essential for river health and to quantify these relationships, (2) develop valuation methods to assess the total value of river health and (3) develop optimal control software that combines water balance modeling with models that predict ecosystem responses to flow.
The Influence of Blood Pressure on Fetal Aortic Distensibility: An Animal Validation Study.
Wohlmuth, Christoph; Moise, Kenneth J; Papanna, Ramesha; Gheorghe, Ciprian; Johnson, Anthony; Morales, Yisel; Gardiner, Helena M
2018-01-01
Aortic distension waveforms describe the change in diameter or cross-sectional area over the cardiac cycle. We aimed to validate the association of aortic fractional area change (AFAC) with blood pressure (BP) in a fetal lamb model. Four pregnant ewes underwent open fetal surgery under general anesthesia at 107-120 gestational days. A 4-Fr catheter was introduced into the fetal femoral artery and vein, or the carotid artery and jugular vein. The thoracic aorta was imaged using real-time ultrasound; AFAC was calculated using offline speckle tracking software. Measurements of invasive BP and AFAC were obtained simultaneously and averaged over 10 cardiac cycles. BP was increased by norepinephrine infusion and the association of aortic distensibility with BP was assessed. Baseline measurements were obtained from 4 lambs, and changes in aortic distensibility with increasing BP were recorded from 3 of them. A positive correlation was found between AFAC and systolic BP (r = 0.692, p = 0.001), diastolic BP (r = 0.647, p = 0.004), mean BP (r = 0.692, p = 0.001), and BP amplitude (r = 0.558, p = 0.016) controlled for heart rate. No association was found between BP and maximum or minimum aortic area. AFAC provides a quantifiable measure of aortic distensibility and correlates with systolic BP, diastolic BP, mean BP, and BP amplitude in a fetal lamb model. © 2017 S. Karger AG, Basel.
Jendrzej, Sandra; Gökce, Bilal; Amendola, Vincenzo; Barcikowski, Stephan
2016-02-01
Unintended post-synthesis growth of noble metal colloids caused by excess amounts of reactants or highly reactive atom clusters represents a fundamental problem in colloidal chemistry, affecting product stability or purity. Hence, quantified kinetics would allow nanoparticle size to be determined as a function of time. Here, we investigate in situ the growth kinetics of ps pulsed laser-fragmented platinum nanoparticles in the presence of naked atom clusters in water, without any influence of reducing agents or surfactants. The nanoparticle growth is investigated for platinum over a time scale of minutes to 50 days after nanoparticle generation, supplemented by results obtained for gold and palladium. Once a minimum atom cluster concentration is exceeded, significant growth is detected by time resolved UV/Vis spectroscopy, analytical disc centrifugation, zeta potential measurement and transmission electron microscopy. We suggest a decrease of atom cluster concentration over time, since nanoparticles grow at the expense of atom clusters. The growth mechanism during the early phase (<1 day) of the laser-synthesized colloid is kinetically modeled by rapid barrierless coalescence. The prolonged slow nanoparticle growth is kinetically modeled by a combination of coalescence and Lifshitz-Slyozov-Wagner kinetics for Ostwald ripening, validated experimentally by the temperature dependence of Pt nanoparticle size and growth quenching by iodide anions. Copyright © 2015. Published by Elsevier Inc.
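For the late-stage ripening regime, the Lifshitz-Slyozov-Wagner law predicts that the cube of the mean radius grows linearly in time. The sketch below fits that law to synthetic size-versus-time data (the radii are made up, not the measured Pt sizes) to recover a coarsening rate constant:

```python
import numpy as np

# Hedged sketch of an LSW (Ostwald ripening) check: fit r(t)^3 = r0^3 + K*t
# to mean-radius data and read off the coarsening rate constant K.
t_days = np.array([1.0, 5.0, 10.0, 20.0, 35.0, 50.0])
r_nm = np.array([8.0, 8.9, 9.8, 11.2, 12.7, 13.9])     # assumed mean radii, nm

K, r0_cubed = np.polyfit(t_days, r_nm ** 3, 1)          # slope and intercept
print("LSW rate constant K ~ %.0f nm^3/day, implied r0 ~ %.1f nm"
      % (K, r0_cubed ** (1.0 / 3.0)))
```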
Simulation of blue and green water resources in the Wei River basin, China
NASA Astrophysics Data System (ADS)
Xu, Z.; Zuo, D.
2014-09-01
The Wei River is the largest tributary of the Yellow River in China and it is suffering from water scarcity and water pollution. In order to quantify the amount of water resources in the study area, a hydrological modelling approach was applied using SWAT (Soil and Water Assessment Tool), calibrated and validated with SUFI-2 (Sequential Uncertainty Fitting program) based on river discharge in the Wei River basin (WRB). Sensitivity and uncertainty analyses were also performed to improve the model performance. Water resources components of blue water flow, green water flow and green water storage were estimated at the HRU (Hydrological Response Unit) scale. Water resources in HRUs were also aggregated to sub-basins, river catchments, and then city/region scales for further analysis. The results showed that most parts of the WRB experienced a decrease in blue water resources between the 1960s and 2000s, with a minimum value in the 1990s. The decrease is particularly significant in the southernmost part of the WRB (Guanzhong Plain), one of the most important grain production areas in China. Variations of green water flow and green water storage were relatively small in both space and time. This study provides strategic information for optimal utilization of water resources and planning of cultivating seasons in the Wei River basin.
Using Multispectral Analysis in GIS to Model the Potential for Urban Agriculture in Philadelphia
NASA Astrophysics Data System (ADS)
Dmochowski, J. E.; Cooper, W. P.
2010-12-01
In the context of growing concerns about the international food system's dependence on fossil fuels, soil degradation, climate change, and other diverse issues, a number of initiatives have arisen to develop and implement sustainable agricultural practices. Many seeking to reform the food system look to urban agriculture as a means to create localized, sustainable agricultural production, while simultaneously providing a locus for community building, encouraging better nutrition, and promoting the rebirth of depressed urban areas. The actual impact of such a system, however, is not well understood, and many critics of urban agriculture regard its implementation as impractical and unrealistic. This project uses multispectral imagery from the United States Department of Agriculture's National Agricultural Imagery Program with a one-meter resolution to quantify the potential for increasing urban agriculture in an effort to create a sustainable food system in Philadelphia. Color infrared images are classified with a minimum distance algorithm in ArcGIS to generate baseline data on vegetative cover in Philadelphia. These data, in addition to mapping on the ground, form the basis of a model of land suitable for conversion to agriculture in Philadelphia, which will help address questions related to potential yields, workforce, and energy requirements. This research will help city planners, entrepreneurs, community leaders, and citizens understand how urban agriculture can contribute to creating a sustainable food system in a major North American city.
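A minimum distance classifier of the kind mentioned above assigns each pixel to the class whose mean spectral vector is closest. The sketch below is a generic stand-alone illustration with invented band values and class means; the actual classification was performed inside ArcGIS:

```python
import numpy as np

# Hedged sketch of minimum-distance classification: each pixel goes to the
# class whose mean spectral vector is nearest in Euclidean distance.
# Band values (NIR, red, green) and class means are invented for illustration.
class_means = {
    "vegetation": np.array([90.0, 40.0, 30.0]),
    "impervious": np.array([40.0, 60.0, 55.0]),
    "water":      np.array([10.0, 20.0, 25.0]),
}

def classify(pixel):
    return min(class_means, key=lambda name: np.linalg.norm(pixel - class_means[name]))

pixels = np.array([[85.0, 45.0, 33.0], [35.0, 58.0, 50.0]])
print([classify(p) for p in pixels])   # expected: ['vegetation', 'impervious']
```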
NASA Astrophysics Data System (ADS)
Georgiou, Harris
2009-10-01
Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.
NASA Astrophysics Data System (ADS)
Vitelaru, Catalin; Aijaz, Asim; Constantina Parau, Anca; Kiss, Adrian Emil; Sobetkii, Arcadie; Kubart, Tomas
2018-04-01
Pressure and target voltage driven discharge runaway from low to high discharge current density regimes in high power impulse magnetron sputtering of carbon is investigated. The main purpose is to provide meaningful insight into the discharge dynamics, with the ultimate goal of establishing a correlation between discharge properties and process parameters to control the film growth. This is achieved by examining a wide range of pressures (2–20 mTorr) and target voltages (700–850 V) and measuring ion saturation current density at the substrate position. We show that the minimum plasma impedance is an important parameter for identifying the discharge transition as well as for establishing a stable operating condition. Using the formalism of the generalized recycling model, we introduce a new parameter, the 'recycling ratio', to quantify the process gas recycling for specific process conditions. The model takes into account the ion flux to the target, the amount of gas available, and the amount of gas required for sustaining the discharge. We show that this parameter describes the relation between the gas recycling and the discharge current density. As a test case, we discuss the pressure and voltage driven transitions by changing the gas composition when adding Ne into the discharge. We propose that standard Ar HiPIMS discharges operated with significant gas recycling do not require Ne to increase the carbon ionization.
Baumes, Laurent A
2006-01-01
One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few new papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous covering, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets for which efficient learning is difficult, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for the characterization of the search space structure, working independently of learning processes. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" space zones to "unsteady" ones, which require more experiments to be well modeled. The evaluation of new algorithms through benchmarks is essential because there is no prior evidence of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of sampling on subsequent machine learning performance is also quantified. The minimum sample size required by the algorithm to be statistically discriminated from simple random sampling is investigated.
Minimum required capture radius in a coplanar model of the aerial combat problem
NASA Technical Reports Server (NTRS)
Breakwell, J. V.; Merz, A. W.
1977-01-01
Coplanar aerial combat is modeled with constant speeds and specified turn rates. The minimum capture radius which will always permit capture, regardless of the initial conditions, is calculated. This 'critical' capture radius is also the maximum range which the evader can guarantee indefinitely if the initial range, for example, is large. A composite barrier is constructed which gives the boundary, at any heading, of relative positions for which the capture radius is less than critical.
On bound-states of the Gross Neveu model with massive fundamental fermions
NASA Astrophysics Data System (ADS)
Frishman, Yitzhak; Sonnenschein, Jacob
2018-01-01
In the search for QFTs that admit bound states, we reinvestigate the two-dimensional Gross-Neveu model, but with massive fermions. By computing the self-energy for the auxiliary bound-state field and the effective potential, we show that there are no bound states around the lowest minimum, but there is a meta-stable bound state around the other, local minimum. The latter decays by tunneling. We determine the dependence of its lifetime on the fermion mass and coupling constant.
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
2016-09-01
Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS), by JL Cogan. As expected, accuracy generally tended to decline as the large-scale input data aged. Table 7 of the report gives the minimum and maximum mean RMDs for each WRF time (or GFS data age) category.
Koftayan, Tamar; Milano, Jahiro; D'Armas, Haydelba; Salazar, Gabriel
2011-03-01
The mussel Perna viridis is a highly consumed species whose fast growth makes it an interesting aquaculture alternative for the Venezuelan and Trinidad coasts. To contribute information on its nutritional value, this study analyzed lipid and fatty acid contents from samples taken in five locations from Eastern Venezuela and three from the Trinidad West Coast. Total lipids were extracted and quantified, from a pooled sample of 100 organisms per location, by standard gravimetric methods, and their identification and quantification was done by TLC/FID (Iatroscan system). Furthermore, the esterified fatty acids of total lipids, phospholipids and triacylglycerols were identified and quantified by gas chromatography. Eastern Venezuela samples from Los Cedros, La Brea and Chaguaramas showed the highest total lipid values of 7.92, 7.74 and 7.53%, respectively, and the minimum value was obtained for La Restinga (6.08%). Regarding lipid composition, Chacopata samples showed the lowest phospholipid concentration (48.86%) and the maximum values for cholesterol (38.87%) and triacylglycerols (12.26%); in addition, La Esmeralda and Rio Caribe samples exhibited maximum phospholipid (88.71 and 84.93%, respectively) and minimum cholesterol (6.50 and 4.42%) concentrations. Saturated fatty acids represented between 15.04% and 65.55% of total lipid extracts, with maximum and minimum values for La Esmeralda and Chacopata, respectively. Polyunsaturated fatty acids ranged between 7.80 and 37.18%, with higher values in La Brea and lower values in La Esmeralda. For phospholipids, saturated fatty acid concentrations varied between 38.81 and 48.68% for Chaguaramas and Chacopata samples, respectively. Polyunsaturated fatty acids varied between not detected and 34.51%, with high concentrations in Los Cedros (27.97%) and Chaguaramas (34.51%) samples. For the triacylglycerols, the saturated fatty acid composition ranged between 14.27 and 53.80%, with low concentrations for Chacopata and high concentrations for La Restinga; the polyunsaturated fatty acids ranged between 4.66 and 35.55%, with lower values for Chacopata and higher values for Chaguaramas samples. P. viridis is recommended for human consumption, given the high content of unsaturated fatty acids found in this species.
Quantifying uncertainty in climate change science through empirical information theory.
Majda, Andrew J; Gershgorin, Boris
2010-08-24
Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here that incorporates both coarse-grained mean model errors and covariance ratios in a transformation-invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science, including the prototype behavior of tracer gases such as CO2. Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
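The linear-algebra step mentioned above (the most sensitive direction as the leading eigenvector of a quadratic form) is straightforward to sketch. The matrix below is a random symmetric stand-in, not the paper's information-theoretic form built from climate statistics:

```python
import numpy as np

# Hedged sketch: the most sensitive direction is the eigenvector belonging to
# the largest eigenvalue of a symmetric quadratic form. Q here is an arbitrary
# positive semi-definite stand-in for the form built from climate statistics.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A @ A.T                               # symmetric positive semi-definite

eigvals, eigvecs = np.linalg.eigh(Q)      # eigenvalues in ascending order
most_sensitive = eigvecs[:, -1]           # eigenvector of the largest eigenvalue
print("largest eigenvalue: %.2f" % eigvals[-1])
print("most sensitive direction:", np.round(most_sensitive, 3))
```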
Urióstegui, Stephanie H.; Bibby, Richard K.; Esser, Bradley K.; ...
2016-04-23
Identifying groundwater retention times near managed aquifer recharge (MAR) facilities is a high priority for managing water quality, especially for operations that incorporate recycled wastewater. In order to protect public health, California guidelines for Groundwater Replenishment Reuse Projects require a minimum 2–6 month subsurface retention time for recycled water depending on the level of disinfection, which highlights the importance of quantifying groundwater travel times on short time scales. This study developed and evaluated a new intrinsic tracer method using the naturally occurring radioisotope sulfur-35 (35S). The 87.5 day half-life of 35S is ideal for investigating groundwater travel times on the <1 year timescale of interest to MAR managers. Natural concentrations of 35S found in water as dissolved sulfate (35SO4) were measured in source waters and groundwater at the Rio Hondo Spreading Grounds in Los Angeles County, CA, and Orange County Groundwater Recharge Facilities in Orange County, CA. 35SO4 travel times are comparable to travel times determined by well-established deliberate tracer studies. The study also revealed that 35SO4 in MAR source water can vary seasonally and therefore careful characterization of 35SO4 is needed to accurately quantify groundwater travel time. More data are needed to fully assess whether or not this tracer could become a valuable tool for managers.
NASA Astrophysics Data System (ADS)
Urióstegui, Stephanie H.; Bibby, Richard K.; Esser, Bradley K.; Clark, Jordan F.
2016-12-01
Identifying groundwater retention times near managed aquifer recharge (MAR) facilities is a high priority for managing water quality, especially for operations that incorporate recycled wastewater. To protect public health, California guidelines for Groundwater Replenishment Reuse Projects require a minimum 2-6 month subsurface retention time for recycled water depending on the level of disinfection, which highlights the importance of quantifying groundwater travel times on short time scales. This study developed and evaluated a new intrinsic tracer method using the naturally occurring radioisotope sulfur-35 (35S). The 87.5 day half-life of 35S is ideal for investigating groundwater travel times on the <1 year timescale of interest to MAR managers. Natural concentrations of 35S found in water as dissolved sulfate (35SO4) were measured in source waters and groundwater at the Rio Hondo Spreading Grounds in Los Angeles County, CA, and Orange County Groundwater Recharge Facilities in Orange County, CA. 35SO4 travel times are comparable to travel times determined by well-established deliberate tracer studies. The study also revealed that 35SO4 in MAR source water can vary seasonally and therefore careful characterization of 35SO4 is needed to accurately quantify groundwater travel time. More data is needed to fully assess whether or not this tracer could become a valuable tool for managers.
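The decay-clock arithmetic behind a 35S travel-time estimate is simple to sketch. The activities below are hypothetical numbers, used only to show the calculation; real estimates must also account for the seasonal variability of 35SO4 in the source water noted above:

```python
import numpy as np

# Hedged sketch of a radioactive decay-clock travel time: with an 87.5-day
# half-life, the apparent age follows from the measured-to-initial 35SO4
# activity ratio. Activity values are hypothetical.
HALF_LIFE_DAYS = 87.5
decay_const = np.log(2.0) / HALF_LIFE_DAYS

activity_source = 12.0     # 35SO4 activity in MAR source water (assumed units)
activity_well = 5.5        # activity measured in a down-gradient well (assumed)

travel_time_days = np.log(activity_source / activity_well) / decay_const
print("apparent travel time ~ %.0f days" % travel_time_days)
```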
Multifractal Turbulence in the Heliosphere
NASA Astrophysics Data System (ADS)
Macek, Wieslaw M.; Wawrzaszek, Anna
2010-05-01
We consider a solar wind plasma with frozen-in interplanetary magnetic fields, which is a complex nonlinear system that may exhibit chaos and intermittency, resulting in a multifractal scaling of plasma characteristics. We analyze time series of plasma velocity and interplanetary magnetic field strengths measured during space missions onboard various spacecraft, such as Helios, Advanced Composition Explorer, Ulysses, and Voyager, exploring different regions of the heliosphere during solar minimum and maximum. To quantify the multifractality of solar wind turbulence, we use a generalized two-scale weighted Cantor set with two different rescaling parameters [1]. We investigate the resulting spectrum of generalized dimensions and the corresponding multifractal singularity spectrum depending on the parameters of this new cascade model [2]. We show that using the model with two different scaling parameters one can explain the multifractal singularity spectrum, which is often asymmetric. In particular, the multifractal scaling of magnetic fields is asymmetric in the outer heliosphere, in contrast to the symmetric spectrum observed in the heliosheath as described by the standard one-scale model [3]. We hope that the generalized multifractal model will be a useful tool for analysis of intermittent turbulence in the heliospheric plasma. We thus believe that multifractal analysis of various complex environments can shed light on the nature of turbulence. [1] W. M. Macek and A. Szczepaniak, Generalized two-scale weighted Cantor set model for solar wind turbulence, Geophys. Res. Lett., 35, L02108 (2008), doi:10.1029/2007GL032263. [2] W. M. Macek and A. Wawrzaszek, Evolution of asymmetric multifractal scaling of solar wind turbulence in the outer heliosphere, J. Geophys. Res., A013795 (2009), doi:10.1029/2008JA013795. [3] W. M. Macek and A. Wawrzaszek, Multifractal turbulence at the termination shock, in Solar Wind Twelve, edited by M. Maximovic et al., American Institute of Physics, 2010.
TASS Model Application for Testing the TDWAP Model
NASA Technical Reports Server (NTRS)
Switzer, George F.
2009-01-01
One of the operational modes of the Terminal Area Simulation System (TASS) model simulates the three-dimensional interaction of wake vortices within turbulent domains in the presence of thermal stratification. The model allows the investigation of turbulence and stratification on vortex transport and decay. The model simulations for this work all assumed fully-periodic boundary conditions to remove the effects from any surface interaction. During the Base Period of this contract, NWRA completed generation of these datasets but only presented analysis for the neutral stratification runs of that set (Task 3.4.1). Phase 1 work began with the analysis of the remaining stratification datasets, and in the analysis we discovered discrepancies with the vortex time-to-link predictions. This finding necessitated investigating the source of the anomaly, and we found a problem with the background turbulence. Using the most up-to-date version of TASS, which includes some important defect fixes, we regenerated a larger turbulence domain, and verified the vortex time-to-link with a few cases before proceeding to regenerate the entire 25-case set (Task 3.4.2). The effort of Phase 2 (Task 3.4.3) concentrated on analysis of several scenarios investigating the effects of closely spaced aircraft. The objective was to quantify the minimum aircraft separations necessary to avoid vortex interactions between neighboring aircraft. The results consist of spreadsheets of wake data and presentation figures prepared for NASA technical exchanges. For these formation cases, NASA carried out the actual TASS simulations and NWRA performed the analysis of the results by making animations, line plots, and other presentation figures. This report contains the description of the work performed during this final phase of the contract, the analysis procedures adopted, and sample plots of the results from the analysis performed.
Taylor, Anne E; Giguere, Andrew T; Zoebelein, Conor M; Myrold, David D; Bottomley, Peter J
2017-04-01
Soil nitrification potential (NP) activities of ammonia-oxidizing archaea and bacteria (AOA and AOB, respectively) were evaluated across a temperature gradient (4-42 °C) imposed upon eight soils from four different sites in Oregon and modeled with both the macromolecular rate theory and the square root growth models to quantify the thermodynamic responses. There were significant differences in response by the dominant AOA and AOB contributing to the NPs. The optimal temperatures (Topt) for AOA- and AOB-supported NPs were significantly different (P<0.001), with AOA having Topt >12 °C greater than AOB. The change in heat capacity associated with the temperature dependence of nitrification (ΔCP‡) was correlated with Topt across the eight soils, and the ΔCP‡ of AOB activity was significantly more negative than that of AOA activity (P<0.01). Model results predicted, and confirmatory experiments showed, a significantly lower minimum temperature (Tmin) and different, albeit very similar, maximum temperature (Tmax) values for AOB than for AOA activity. The results also suggested that there may be different forms of AOA AMO that are active over different temperature ranges with different Tmin, but no evidence of multiple Tmin values within the AOB. Fundamental differences in temperature-influenced properties of nitrification driven by AOA and AOB provide support for the idea that the biochemical processes associated with NH3 oxidation in AOA and AOB differ thermodynamically from each other, and that also might account for the difficulties encountered in attempting to model the response of nitrification to temperature change in soil environments.
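One of the two models named above, the square root (Ratkowsky-type) growth model, can be fitted with a few lines: below the optimum, sqrt(rate) is linear in temperature and Tmin is the x-intercept. The rates below are synthetic sub-optimal data, not the measured nitrification potentials:

```python
import numpy as np

# Hedged sketch of a square-root growth model fit: sqrt(rate) = b * (T - Tmin)
# for temperatures below the optimum, so Tmin is the x-intercept of a straight
# line fitted to sqrt(rate) versus T. Data are synthetic.
T = np.array([4.0, 10.0, 15.0, 20.0, 25.0])           # deg C, sub-optimal range
rate = np.array([0.05, 0.35, 0.80, 1.45, 2.30])        # arbitrary activity units

b, intercept = np.polyfit(T, np.sqrt(rate), 1)
T_min = -intercept / b
print("slope b = %.3f per deg C, estimated Tmin = %.1f deg C" % (b, T_min))
```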
NASA Astrophysics Data System (ADS)
Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.
2014-11-01
In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a 'valid' measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an 'outlier' measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U68.5 uncertainties are estimated at the 68.5% confidence level while U95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.
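One commonly used correlation SNR metric, the ratio of the primary peak to the next-tallest peak after subtracting the plane minimum, can be sketched as follows. The correlation plane here is synthetic, and the second peak is found simply as the second-largest value because the primary peak occupies a single pixel; a real implementation would exclude a neighbourhood around the primary peak:

```python
import numpy as np

# Hedged sketch of a peak-ratio SNR metric with minimum subtraction, so the
# background offset does not inflate the ratio. The correlation plane is
# synthetic: uniform background noise plus a primary and a spurious peak.
rng = np.random.default_rng(1)
corr = 0.2 + 0.05 * rng.random((32, 32))     # background + noise
corr[16, 16] = 1.00                          # primary (displacement) peak
corr[8, 20] = 0.45                           # tallest spurious peak

corr -= corr.min()                           # minimum subtraction
flat = np.sort(corr.ravel())
primary, secondary = flat[-1], flat[-2]      # crude: single-pixel peaks assumed
print("peak ratio after minimum subtraction ~ %.2f" % (primary / secondary))
```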
NASA Astrophysics Data System (ADS)
Roper, Marcus Leigh
This thesis describes the numerical and asymptotic analysis of symmetry breaking phenomena in three fluid dynamical systems. The first part concerns modeling of a micrometer sized swimming device, comprising a filament composed of superparamagnetic micron-sized beads and driven by an applied magnetic field. The swimming mechanics are deciphered in order to show how actuation by a spatially-homogeneous but temporally-varying torque leads to propagation of a bending wave along the filament and thence to propulsion. Absence of swimming unless the lateral symmetry of the filament is broken by tethering one end to a high drag body is explained. The model is used to determine whether, and to what extent, the micro-swimmer behaves like a flagellated eukaryotic cell. The second part concerns modeling of locomotion using a reversible stroke. Although forbidden at low Reynolds numbers, such symmetric gaits are favored by some microscopic planktonic swimmers. We analyze the constraints upon generation of propulsive force by such swimmers using a numerical model for a flapped limb. Effective locomotion is shown to be possible at arbitrarily low rates of energy expenditure, escaping a formerly postulated time-symmetry constraint, if the limb is shaped in order to exploit slow inertial-streaming eddies. Finally we consider the evolution of explosively launched ascomycete spores toward perfect projectile shapes (bodies designed to experience minimum drag in flight), using the variance of spore shapes between species in order to quantify the stiffness of the drag minimization constraint. A surprising observation about the persistent fore-aft symmetry of perfect projectiles, even up to Reynolds numbers great enough that the flow around the projectile is highly asymmetric, points both toward a model for spore ontogeny and to a novel linear approximation for moderate Reynolds flows.
Baral, Subhasish; Roy, Rahul; Dixit, Narendra M
2018-05-09
A fraction of chronic hepatitis C patients treated with direct-acting antivirals (DAAs) achieved sustained virological responses (SVR), or cure, despite having detectable viremia at the end of treatment (EOT). This observation, termed EOT+/SVR, remains puzzling and precludes rational optimization of treatment durations. One hypothesis to explain EOT+/SVR, the immunologic hypothesis, argues that the viral decline induced by DAAs during treatment reverses the exhaustion of cytotoxic T lymphocytes (CTLs), which then clear the infection after treatment. Whether the hypothesis is consistent with data of viral load changes in patients who experienced EOT+/SVR is unknown. Here, we constructed a mathematical model of viral kinetics incorporating the immunologic hypothesis and compared its predictions with patient data. We found the predictions to be in quantitative agreement with patient data. Using the model, we unraveled an underlying bistability that gives rise to EOT+/SVR and presents a new avenue to optimize treatment durations. Infected cells trigger both activation and exhaustion of CTLs. CTLs in turn kill infected cells. Due to these competing interactions, two stable steady states, chronic infection and viral clearance, emerge, separated by an unstable steady state with intermediate viremia. When treatment during chronic infection drives viremia sufficiently below the unstable state, spontaneous viral clearance results post-treatment, marking EOT+/SVR. The duration to achieve this desired reduction in viremia defines the minimum treatment duration required for ensuring SVR, which our model can quantify. Estimating parameters defining the CTL response of individuals to HCV infection would enable the application of our model to personalize treatment durations. © 2018 The Authors Immunology & Cell Biology published by John Wiley & Sons Australia, Ltd on behalf of Australasian Society for Immunology Inc.
Masina, Isabella; Notari, Alessio
2012-05-11
For a narrow band of values of the top quark and Higgs boson masses, the standard model Higgs potential develops a false minimum at energies of about 10^16 GeV, where primordial inflation could have started in a cold metastable state. A graceful exit to a radiation-dominated era is provided, e.g., by scalar-tensor gravity models. We pointed out that if inflation happened in this false minimum, the Higgs boson mass has to be in the range 126.0±3.5 GeV, where ATLAS and CMS subsequently reported excesses of events. Here we show that for these values of the Higgs boson mass, the inflationary gravitational wave background has to be discovered with a tensor-to-scalar ratio within reach of future experiments. We suggest that combining cosmological observations with measurements of the top quark and Higgs boson masses represents a further test of the hypothesis that the standard model false minimum was the source of inflation in the universe.
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
A systematic analysis of model performance during simulations based on observed landcover/use change is used to quantify errors associated with simulations of known "future" conditions. Calibrated and uncalibrated assessments of relative change over different lengths of...
Xu, Tianhua; Karanov, Boris; Shevchenko, Nikita A; Lavery, Domaniç; Liga, Gabriele; Killey, Robert I; Bayvel, Polina
2017-10-11
Nyquist-spaced transmission and digital signal processing have proved effective in maximising the spectral efficiency and reach of optical communication systems. In these systems, Kerr nonlinearity determines the performance limits, and leads to spectral broadening of the signals propagating in the fibre. Although digital nonlinearity compensation has been shown to be promising for mitigating Kerr nonlinearities, the impact of spectral broadening on nonlinearity compensation has never been quantified. In this paper, the performance of multi-channel digital back-propagation (MC-DBP) for compensating fibre nonlinearities in Nyquist-spaced optical communication systems is investigated, when the effect of signal spectral broadening is considered. It is found that accounting for the spectral broadening effect is crucial for achieving the best performance of DBP in both single-channel and multi-channel communication systems, independent of the modulation formats used. For multi-channel systems, the degradation of DBP performance due to neglecting the spectral broadening effect in the compensation is more significant for outer channels. Our work also quantifies the minimum bandwidths of optical receivers and signal processing devices required to ensure optimal compensation of deterministic nonlinear distortions.
Adsorption losses from urine-based cannabinoid calibrators during routine use.
Blanc, J A; Manneh, V A; Ernst, R; Berger, D E; de Keczer, S A; Chase, C; Centofanti, J M; DeLizza, A J
1993-08-01
The major metabolite of cannabis found in urine, 11-nor-delta 9-tetrahydrocannabinol-9-carboxylic acid (delta 9-THC), is the compound most often used to calibrate cannabinoid immunoassays. The hydrophobic delta 9-THC molecule is known to adsorb to solid surfaces. This loss of analyte from calibrator solutions can lead to inaccuracy in the analytical system. Because the calibrators remain stable when not used, analyte loss is most probably caused by handling techniques. In an effort to develop an effective means of overcoming adsorption losses, we quantified cannabinoid loss from calibrators during the testing process. In studying handling of these solutions, we found noticeable, significant losses attributable to both the kind of pipette used for transfer and the contact surface-to-volume ratio of calibrator solution in the analyzer cup. Losses were quantified by immunoassay and by radioactive tracer. We suggest handling techniques that can minimize adsorption of delta 9-THC to surfaces. Using the appropriate pipette and maintaining a minimum surface-to-volume ratio in the analyzer cup effectively reduces analyte loss.
NASA Technical Reports Server (NTRS)
Koenig, Theodore K.; Volkamer, Rainer; Baidar, Sunil; Dix, Barbara; Wang, Siyuan; Anderson, Daniel C.; Salawitch, Ross J.; Wales, Pamela A.; Cuevas, Carlos A.; Fernandez, Rafael P.;
2017-01-01
We report measurements of bromine monoxide (BrO) and use an observationally constrained chemical box model to infer total gas-phase inorganic bromine (Bry) over the tropical western Pacific Ocean (tWPO) during the CONTRAST field campaign (January-February 2014). The observed BrO and inferred Bry profiles peak in the marine boundary layer (MBL), suggesting the need for a bromine source from sea-salt aerosol (SSA), in addition to organic bromine (CBry). Both profiles are found to be C-shaped with local maxima in the upper free troposphere (FT). The median tropospheric BrO vertical column density (VCD) was measured as 1.6 x 10^13 molec cm^-2, compared to model predictions of 0.9 x 10^13 molec cm^-2 in GEOS-Chem (CBry but no SSA source), 0.4 x 10^13 molec cm^-2 in CAM-Chem (CBry and SSA), and 2.1 x 10^13 molec cm^-2 in GEOS-Chem (CBry and SSA). Neither global model fully captures the C-shape of the Bry profile. A local Bry maximum of 3.6 ppt (2.9-4.4 ppt; 95% confidence interval, CI) is inferred between 9.5 and 13.5 km in air masses influenced by recent convective outflow. Unlike BrO, which increases from the convective tropical tropopause layer (TTL) to the aged TTL, gas-phase Bry decreases from the convective TTL to the aged TTL. Analysis of gas-phase Bry against multiple tracers (CFC-11, H2O/O3 ratio, and potential temperature) reveals a Bry minimum of 2.7 ppt (2.3-3.1 ppt; 95% CI) in the aged TTL, which agrees closely with a stratospheric injection of 2.6 +/- 0.6 ppt of inorganic Bry (estimated from CFC-11 correlations), and is remarkably insensitive to assumptions about heterogeneous chemistry. Bry increases to 6.3 ppt (5.6-7.0 ppt; 95% CI) in the stratospheric "middleworld" and 6.9 ppt (6.5-7.3 ppt; 95% CI) in the stratospheric "overworld". The local Bry minimum in the aged TTL is qualitatively (but not quantitatively) captured by CAM-Chem, and suggests a more complex partitioning of gas-phase and aerosol Bry species than previously recognized. Our data provide corroborating evidence that inorganic bromine sources (e.g., SSA-derived gas-phase Bry) are needed to explain the gas-phase Bry budget in the upper free troposphere and TTL. They are also consistent with observations of significant bromide in Upper Troposphere-Lower Stratosphere aerosols. The total Bry budget in the TTL is currently not closed, because of the lack of concurrent quantitative measurements of gas-phase Bry species (i.e., BrO, HOBr, HBr, etc.) and aerosol bromide. Such simultaneous measurements are needed to (1) quantify SSA-derived Bry in the upper FT, (2) test Bry partitioning, and possibly explain the gas-phase Bry minimum in the aged TTL, (3) constrain heterogeneous reaction rates of bromine, and (4) account for all of the sources of Bry to the lower stratosphere.
Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A
2011-09-01
The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed. © Society for Mathematical Biology 2010
Current Status of Multidisciplinary Care in Psoriatic Arthritis in Spain: NEXUS 2.0 Project.
Queiro, Rubén; Coto, Pablo; Joven, Beatriz; Rivera, Raquel; Navío Marco, Teresa; de la Cueva, Pablo; Alvarez Vega, Jose Luis; Narváez Moreno, Basilio; Rodriguez Martínez, Fernando José; Pardo Sánchez, José; Feced Olmos, Carlos; Pujol, Conrad; Rodríguez, Jesús; Notario, Jaume; Pujol Busquets, Manel; García Font, Mercè; Galindez, Eva; Pérez Barrio, Silvia; Urruticoechea-Arana, Ana; Hergueta, Merce; López Montilla, M Dolores; Vélez García-Nieto, Antonio; Maceiras, Francisco; Rodríguez Pazos, Laura; Rubio Romero, Esteban; Rodríguez Fernandez Freire, Lourdes; Luelmo, Jesús; Gratacós, Jordi
2018-02-26
1) To analyze the implementation of multidisciplinary care models for psoriatic arthritis (PsA) patients, and 2) to define minimum and excellent standards of care. A survey was sent to clinicians who already provided multidisciplinary care or were in the process of establishing it, asking about: 1) the type of multidisciplinary care model implemented; and 2) the degree, priority and feasibility of implementing quality standards for the structure, process and results of care. The survey results were presented and discussed in 6 regional meetings, where the final priority of the quality standards for care was defined. In a nominal group meeting, 11 experts (rheumatologists and dermatologists) analyzed the results of the survey and the regional meetings. With this information, they defined which standards of care are currently considered minimum and which excellent. The simultaneous and parallel models of multidisciplinary care are the most widely implemented, but the implementation of quality standards is highly variable: it ranges from 22% to 74% for structure standards, from 17% to 54% for process standards, and from 2% to 28% for result standards. Of the 25 original quality standards for care, 9 were considered minimum only, 4 excellent, and 12 defined some criteria as minimum and others as excellent. The definition of minimum and excellent quality standards for care will help achieve the goal of multidisciplinary care for patients with PsA, which is the best possible healthcare. Copyright © 2018 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
Meier, Petra S; Holmes, John; Angus, Colin; Ally, Abdallah K; Meng, Yang; Brennan, Alan
2016-02-01
While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, -3.2%; value-based tax, -2.9%; strength-based tax, -6.1%; minimum unit pricing, -7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, -1.3%; value-based tax, -1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, -3.6% [95% uncertainty interval (UI) -6.1%, -0.6%]; value-based tax, -3.3% [UI -5.1%, -1.7%]; strength-based tax, -7.5% [UI -13.7%, -3.9%]; minimum unit pricing, -10.3% [UI -10.3%, -7.0%]) and professional/managerial occupation groups (current tax increase, -1.8% [UI -4.7%, +1.6%]; value-based tax, -1.9% [UI -3.6%, +0.4%]; strength-based tax, -0.8% [UI -6.9%, +4.0%]; minimum unit pricing, -0.7% [UI -5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. 
Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation.
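The core mechanism behind all four policies compared above is the propagation of a price change into a consumption change via a price elasticity. The sketch below illustrates that step only, with a hypothetical baseline price, consumption level and own-price elasticity; it is not the study's econometric model, which combines purchasing surveys, 43 disease categories and published elasticity matrices.

```python
# Illustrative sketch: a constant own-price elasticity links price changes to
# consumption changes. Baseline values and the elasticity are hypothetical.
baseline_price_per_unit = 0.40   # GBP per UK alcohol unit (hypothetical)
weekly_units = 60.0              # hypothetical heavy drinker's weekly consumption
elasticity = -0.5                # hypothetical own-price elasticity

def consumption_after(new_price):
    pct_price_change = (new_price - baseline_price_per_unit) / baseline_price_per_unit
    return weekly_units * (1 + elasticity * pct_price_change)

# A 13.4% across-the-board duty increase (assuming full pass-through to prices).
print("after duty increase:", consumption_after(baseline_price_per_unit * 1.134))
# A 0.50 GBP minimum unit price only raises the price of units sold below it.
print("after minimum unit price:", consumption_after(max(baseline_price_per_unit, 0.50)))
```

Because a minimum unit price raises the cheapest products the most, and the heaviest drinkers tend to buy the cheapest alcohol, this simple mechanism is consistent with the larger estimated mortality effects of minimum unit pricing among heavy drinkers reported above.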
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. This study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
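A random walk Metropolis sampler of the kind named above can be sketched in a few lines. The example below estimates a single hypothetical thermal-time parameter from synthetic "days to heading" observations and reports the full posterior sample; it is a generic illustration, not one of the study's nine phenology models or its data.

```python
# Minimal random walk Metropolis sketch on synthetic data (hypothetical
# thermal-time parameter, flat prior, Gaussian observation error).
import numpy as np

rng = np.random.default_rng(0)
true_thermal_time = 900.0                       # hypothetical degree-days to heading
mean_daily_temp = 15.0
obs = true_thermal_time / mean_daily_temp + rng.normal(0, 2.0, size=20)  # days

def log_posterior(theta):
    if not (500.0 < theta < 1500.0):            # flat prior on a plausible range
        return -np.inf
    pred = theta / mean_daily_temp
    return -0.5 * np.sum((obs - pred) ** 2) / 2.0 ** 2

samples, theta = [], 700.0
log_p = log_posterior(theta)
for _ in range(20000):
    proposal = theta + rng.normal(0, 10.0)       # symmetric random walk proposal
    log_p_new = log_posterior(proposal)
    if np.log(rng.uniform()) < log_p_new - log_p:
        theta, log_p = proposal, log_p_new
    samples.append(theta)

post = np.array(samples[5000:])                  # discard burn-in
print(f"posterior mean {post.mean():.0f} degree-days, 95% interval "
      f"[{np.percentile(post, 2.5):.0f}, {np.percentile(post, 97.5):.0f}]")
```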
The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation
NASA Technical Reports Server (NTRS)
Campbell, Joel
2007-01-01
A generalized single step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
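For the unbounded, symmetric single-step SMM the connection to modified Bessel functions of the first kind mentioned above can be written down directly: with total mutation rate mu, the net change k in repeat number after time t follows a Skellam distribution, P(k) = exp(-mu*t) * I_|k|(mu*t). The sketch below evaluates that distribution; it omits the reflecting boundary at the minimum microsatellite length that the full model in the abstract includes.

```python
# Sketch of the unbounded symmetric single-step SMM step distribution,
# expressed with modified Bessel functions of the first kind.
import numpy as np
from scipy.special import ive  # exponentially scaled I_v, numerically stable

def smm_step_probability(k, mu_t):
    """P(net repeat-number change == k) after accumulated mutation mu_t = mu * t."""
    return ive(abs(k), mu_t)   # ive(v, x) = iv(v, x) * exp(-x) for x > 0

mu_t = 2.0
ks = np.arange(-6, 7)
probs = smm_step_probability(ks, mu_t)
print(dict(zip(ks.tolist(), np.round(probs, 4).tolist())))
print("total probability ~", probs.sum())   # approaches 1 for a wide enough range
```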
NASA Astrophysics Data System (ADS)
Keeble, James; Brown, Hannah; Abraham, N. Luke; Harris, Neil R. P.; Pyle, John A.
2018-06-01
Total column ozone values from an ensemble of UM-UKCA model simulations are examined to investigate different definitions of progress on the road to ozone recovery. The impacts of modelled internal atmospheric variability are accounted for by applying a multiple linear regression model to modelled total column ozone values, and ozone trend analysis is performed on the resulting ozone residuals. Three definitions of recovery are investigated: (i) a slowed rate of decline and the date of minimum column ozone, (ii) the identification of significant positive trends and (iii) a return to historic values. A return to past thresholds is the last state to be achieved. Minimum column ozone values, averaged from 60° S to 60° N, occur between 1990 and 1995 for each ensemble member, driven in part by the solar minimum conditions during the 1990s. When natural cycles are accounted for, identification of the year of minimum ozone in the resulting ozone residuals is uncertain, with minimum values for each ensemble member occurring at different times between 1992 and 2000. As a result of this large variability, identification of the date of minimum ozone constitutes a poor measure of ozone recovery. Trends for the 2000-2017 period are positive at most latitudes and are statistically significant in the mid-latitudes in both hemispheres when natural cycles are accounted for. This significance results largely from the large sample size of the multi-member ensemble. Significant trends cannot be identified by 2017 at the highest latitudes, due to the large interannual variability in the data, nor in the tropics, due to the small trend magnitude, although it is projected that significant trends may be identified in these regions soon thereafter. While significant positive trends in total column ozone could be identified at all latitudes by ˜ 2030, column ozone values which are lower than the 1980 annual mean can occur in the mid-latitudes until ˜ 2050, and in the tropics and high latitudes deep into the second half of the 21st century.
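The residual-trend procedure described above (regress out natural variability, then fit a trend to what remains) can be illustrated on synthetic data. The proxies, coefficients and noise below are hypothetical stand-ins, not the regressors or ozone fields used in the UM-UKCA analysis.

```python
# Illustrative sketch: remove proxy-driven variability from a synthetic
# column-ozone series with multiple linear regression, then fit a linear
# trend to the residuals.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2018) + 0.5
solar = np.sin(2 * np.pi * (years - 2001) / 11.0)   # hypothetical solar proxy
qbo = np.sin(2 * np.pi * years / 2.3)                # hypothetical QBO proxy
ozone = 290 + 0.3 * (years - 2000) + 4 * solar + 2 * qbo + rng.normal(0, 1.5, years.size)

# Regress out the proxies (plus an intercept) and keep the residuals.
X = np.column_stack([np.ones_like(years), solar, qbo])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
residuals = ozone - X @ coef

# Least-squares trend on the residuals, quoted per decade.
slope = np.polyfit(years, residuals, 1)[0]
print(f"residual column-ozone trend ~ {10 * slope:.2f} DU per decade")
```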
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
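The maximum likelihood idea above can be demonstrated on a simpler relative of these models. For a non-leaky integrate-and-fire neuron driven by white noise, interspike intervals follow an inverse Gaussian (first-passage-time) distribution, so parameters can be recovered by minimizing the negative log-likelihood numerically. This is a hedged illustration of the fitting procedure only, not the variable-threshold Mihalas-Niebur model discussed in the abstract.

```python
# Minimal maximum-likelihood sketch: fit inverse-Gaussian interspike intervals
# (the exact ISI law of a noisy non-leaky integrate-and-fire neuron) by
# minimizing the negative log-likelihood on synthetic data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import invgauss

rng = np.random.default_rng(2)
true_mu, true_scale = 0.8, 2.0                      # hypothetical ISI parameters
isi = invgauss.rvs(true_mu, scale=true_scale, size=500, random_state=rng)

def neg_log_likelihood(params):
    mu, scale = params
    if mu <= 0 or scale <= 0:
        return np.inf
    return -np.sum(invgauss.logpdf(isi, mu, scale=scale))

fit = minimize(neg_log_likelihood, x0=[0.5, 1.0], method="Nelder-Mead")
print("estimated (mu, scale):", np.round(fit.x, 3))
```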
NASA Astrophysics Data System (ADS)
White, Ronald; Lipson, Jane
Free volume has a storied history in polymer physics. To introduce our own results, we consider how free volume has been defined in the past, e.g. in the works of Fox and Flory, Doolittle, and the equation of Williams, Landel, and Ferry. We contrast these perspectives with our own analysis using our Locally Correlated Lattice (LCL) model, where we have found a striking connection between polymer free volume (analyzed using PVT data) and the polymer's corresponding glass transition temperature, Tg. The pattern, covering over 50 different polymers, is robust enough to be reasonably predictive based on melt properties alone; when a melt hits this T-dependent boundary of critical minimum free volume it becomes glassy. We will present a broad selection of results from our thermodynamic analysis, and make connections with historical treatments. We will discuss patterns that have emerged across the polymers in the energy and entropy when quantified "per LCL theoretical segment". Finally we will relate the latter trend to the point of view popularized in the theory of Adam and Gibbs. The authors gratefully acknowledge support from NSF DMR-1403757.
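For reference, the Williams-Landel-Ferry relation cited above is usually quoted in the form below, relating the time-temperature shift factor to the distance from the glass transition; the "universal" constants are the approximate textbook values when Tg is taken as the reference temperature.

```latex
% WLF equation in its standard form, with the commonly quoted constants
% C_1 \approx 17.4 and C_2 \approx 51.6\,\mathrm{K} when T_\mathrm{ref} = T_g.
\log_{10} a_T \;=\; \frac{-C_1\,(T - T_g)}{C_2 + (T - T_g)}
```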
Quantitative Assessment of Antimicrobial Activity of PLGA Films Loaded with 4-Hexylresorcinol
Kemme, Michael; Heinzel-Wieland, Regina
2018-01-01
Profound screening and evaluation methods for biocide-releasing polymer films are crucial for predicting applicability and therapeutic outcome of these drug delivery systems. For this purpose, we developed an agar overlay assay embedding biopolymer composite films in a seeded microbial lawn. By combining this approach with model-dependent analysis for agar diffusion, antimicrobial potency of the entrapped drug can be calculated in terms of minimum inhibitory concentrations (MICs). Thus, the topical antiseptic 4-hexylresorcinol (4-HR) was incorporated into poly(lactic-co-glycolic acid) (PLGA) films at different loadings up to 3.7 mg/cm2 surface area through a solvent casting technique. The antimicrobial activity of 4-HR released from these composite films was assessed against a panel of Gram-negative and Gram-positive bacteria, yeasts and filamentous fungi by the proposed assay. All the microbial strains tested were susceptible to PLGA-4-HR films with MIC values down to 0.4% (w/w). The presented approach serves as a reliable method in screening and quantifying the antimicrobial activity of polymer composite films. Moreover, 4-HR-loaded PLGA films are a promising biomaterial that may find future application in the biomedical and packaging sector. PMID:29324696
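One common form of model-dependent agar-diffusion analysis treats the squared radius of the inhibition zone as linear in the logarithm of the applied dose, so extrapolating the fit to a zone of zero size estimates the critical (minimum inhibitory) concentration. The sketch below illustrates that generic calculation with hypothetical loadings and zone radii; it is not the authors' exact procedure or data.

```python
# Hedged sketch: estimate a critical (inhibitory) concentration by linear
# regression of squared inhibition-zone radius against log dose, then
# extrapolating to zero zone size. All numbers are hypothetical.
import numpy as np

dose_mg_cm2 = np.array([0.5, 1.0, 2.0, 3.7])          # hypothetical film loadings
zone_radius_mm = np.array([3.1, 4.6, 6.0, 7.2])        # hypothetical zone radii

slope, intercept = np.polyfit(np.log(dose_mg_cm2), zone_radius_mm ** 2, 1)
# The zone shrinks to zero where slope * ln(dose) + intercept = 0.
critical_dose = np.exp(-intercept / slope)
print(f"estimated critical concentration ~ {critical_dose:.2f} mg/cm2")
```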
Practical protocols for fast histopathology by Fourier transform infrared spectroscopic imaging
NASA Astrophysics Data System (ADS)
Keith, Frances N.; Reddy, Rohith K.; Bhargava, Rohit
2008-02-01
Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that combines the molecular selectivity of spectroscopy with the spatial specificity of optical microscopy. We demonstrate a new concept in obtaining high fidelity data using commercial array detectors coupled to a microscope and Michelson interferometer. Next, we apply the developed technique to rapidly provide automated histopathologic information for breast cancer. Traditionally, disease diagnoses are based on optical examinations of stained tissue and involve a skilled recognition of morphological patterns of specific cell types (histopathology). Consequently, histopathologic determinations are a time consuming, subjective process with innate intra- and inter-operator variability. Utilizing endogenous molecular contrast inherent in vibrational spectra, specially designed tissue microarrays and pattern recognition of specific biochemical features, we report an integrated algorithm for automated classifications. The developed protocol is objective, statistically significant and, being compatible with current tissue processing procedures, holds potential for routine clinical diagnoses. We first demonstrate that the classification of tissue type (histology) can be accomplished in a manner that is robust and rigorous. Since data quality and classifier performance are linked, we quantify the relationship through our analysis model. Last, we demonstrate the application of the minimum noise fraction (MNF) transform to improve tissue segmentation.
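The minimum noise fraction transform mentioned above can be viewed as noise-whitened PCA: estimate a noise covariance (for example from differences of neighbouring pixels), whiten the data with it, then apply PCA so that components are ordered by signal-to-noise ratio rather than raw variance. The sketch below implements that view on synthetic data and is not the authors' implementation.

```python
# Sketch of an MNF transform as noise-whitened PCA on synthetic spectral data.
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_bands = 500, 20
signal = np.outer(rng.normal(size=n_pixels), rng.normal(size=n_bands))
data = signal + rng.normal(scale=0.5, size=(n_pixels, n_bands))

# 1. Estimate the noise covariance from differences of adjacent pixels.
noise = np.diff(data, axis=0) / np.sqrt(2)
noise_cov = np.cov(noise, rowvar=False)

# 2. Whiten the mean-centred data with the noise covariance.
evals, evecs = np.linalg.eigh(noise_cov)
whitener = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
whitened = (data - data.mean(axis=0)) @ whitener

# 3. PCA on the whitened data; leading components carry the highest SNR.
_, s, vt = np.linalg.svd(whitened, full_matrices=False)
mnf_components = whitened @ vt.T
print("variance of first 3 components in noise-whitened units:",
      np.round((s[:3] ** 2) / n_pixels, 2))
```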
Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images
NASA Astrophysics Data System (ADS)
Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.
1994-05-01
An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.
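The iterated conditional modes step underlying the method above can be sketched on a tiny synthetic 3D volume: each voxel's label is repeatedly set to the class that minimises a Gaussian data term plus a Potts smoothness penalty over its six-connected neighbours. The sketch uses hard labels only, so it illustrates the ICM update rather than the partial-volume estimation described in the abstract; class means, noise level and smoothness weight are hypothetical.

```python
# Hedged ICM sketch on a synthetic 8x8x8 volume with three tissue classes.
import numpy as np

rng = np.random.default_rng(4)
means = np.array([0.0, 1.0, 2.0])                 # hypothetical class intensities
truth = rng.integers(0, 3, size=(8, 8, 8))
image = means[truth] + rng.normal(scale=0.4, size=truth.shape)

labels = np.abs(image[..., None] - means).argmin(-1)   # initial per-voxel ML labels
beta, sigma = 1.0, 0.4                                  # smoothness weight, noise sd
offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

for _ in range(5):                                      # a few ICM sweeps
    for x in range(8):
        for y in range(8):
            for z in range(8):
                best, best_energy = labels[x, y, z], np.inf
                for c in range(3):
                    data_term = (image[x, y, z] - means[c]) ** 2 / (2 * sigma ** 2)
                    disagree = 0
                    for dx, dy, dz in offsets:
                        nx, ny, nz = x + dx, y + dy, z + dz
                        if 0 <= nx < 8 and 0 <= ny < 8 and 0 <= nz < 8:
                            disagree += labels[nx, ny, nz] != c
                    energy = data_term + beta * disagree
                    if energy < best_energy:
                        best, best_energy = c, energy
                labels[x, y, z] = best

print("voxel agreement with ground truth:", (labels == truth).mean())
```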
Genetic toxicity of dillapiol and spinosad larvicides in somatic cells of Drosophila melanogaster.
Aciole, Eliezer H Pires; Guimarães, Nilza N; Silva, Andre S; Amorim, Erima M; Nunomura, Sergio M; Garcia, Ana Cristina L; Cunha, Kênya S; Rohde, Claudia
2014-04-01
Higher rates of diseases transmitted from insects to humans led to the increased use of organophosphate insecticides, proven to be harmful to human health and the environment. New, more effective chemical formulations with minimum genetic toxicity effects have become the object of intense research. These formulations include larvicides derived from plant extracts such as dillapiol, a phenylpropanoid extracted from Piper aduncum, and from microorganisms such as spinosad, formed by spinosyns A and D derived from the Saccharopolyspora spinosa fermentation process. This study investigated the genotoxicity of dillapiol and spinosad, characterising and quantifying mutation events and chromosomal and/or mitotic recombination using the somatic mutation and recombination test (SMART) in wings of Drosophila melanogaster. Standard cross larvae (72 h old) were treated with different dillapiol and spinosad concentrations. Both compounds showed positive genetic toxicity, mainly as mitotic recombination events. Distilled water and doxorubicin were used as negative and positive controls, respectively. Spinosad was 14 times more genotoxic than dillapiol, and its effect was found to be purely recombinogenic. However, more studies on the potential risks of insecticides such as spinosad and dillapiol are necessary, based on other experimental models and methodologies, to ensure safe use. © 2013 Society of Chemical Industry.