NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. Time series of the slamming pressure during wave impact were first obtained through statistical analyses of the experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that this distribution follows the three-parameter Weibull distribution. Furthermore, the range and relationships of the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions was comprehensively presented; the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models and were found to decrease with increased stiffness of the elastic support. A damage criterion for the structure model under wave impact was also discussed: the structure model was destroyed when the average slamming time exceeded a certain value during the wave impact. The conclusions of the experimental study are then summarized.
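The exceedance statistics described above can be sketched numerically. The snippet below, assuming illustrative Weibull parameter values and synthetic peaks (not the experimental data), fits a three-parameter Weibull with SciPy and checks the exceedance probability at the sample mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic slamming-pressure peaks: a three-parameter Weibull with shape c,
# location D, and scale L (values are illustrative, not the measured ones).
c_true, D_true, L_true = 1.5, 0.1, 0.9
peaks = stats.weibull_min.rvs(c_true, loc=D_true, scale=L_true,
                              size=5000, random_state=rng)

# Maximum-likelihood fit of the three-parameter Weibull.
c_hat, D_hat, L_hat = stats.weibull_min.fit(peaks)

# Empirical exceedance probability at the sample mean; for shape > 1 this
# exceeds e^-1 = 36.79%, the threshold quoted in the abstract.
exceed_at_mean = np.mean(peaks > peaks.mean())
```

For a Weibull with shape above one, P(X > mean) = exp(-Γ(1 + 1/c)^c) exceeds e⁻¹, which is why 36.79% appears as the reference value in the abstract.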
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (e.g., n = 500), the second approach was more sensitive to extreme values for small sample sizes (e.g., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
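As a sketch of the first (recommended) approach, the snippet below propagates parameter uncertainty by refitting the distribution on bootstrap resamples. The survival times and the Weibull time-to-event model are hypothetical stand-ins for the study's patient-level data and fitted distributions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical patient-level time-to-event sample (months).
times = stats.weibull_min.rvs(1.3, scale=20.0, size=200, random_state=rng)

# Approach 1: non-parametric bootstrap -- refit the distribution on resampled
# data; the spread of the refitted parameters reflects parameter uncertainty.
boot_params = []
for _ in range(200):
    resample = rng.choice(times, size=times.size, replace=True)
    c, _, scale = stats.weibull_min.fit(resample, floc=0.0)
    boot_params.append((c, scale))
boot_params = np.array(boot_params)

# Approach 2 (not shown) would instead draw correlated parameter sets from a
# multivariate Normal around the point estimates; the bootstrap avoids the
# normality assumption, which is why the paper recommends it when feasible.
shape_sd, scale_sd = boot_params.std(axis=0)
```

Each bootstrap parameter pair can then drive one run of the patient-level model, so parameter uncertainty flows through to the health economic outcomes.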
The Impact of Uncertain Physical Parameters on HVAC Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai
HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step in the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and the efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of DR output to the uncertainty in the distribution parameters. The significance ranking of the uncertainty sources is given for future guidance in the modeling of HVAC demand response.
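The quasi-Monte Carlo sampling plus linear-regression ranking can be sketched as follows. The four parameters, their bounds, and the toy response function are invented for illustration; only the workflow (Sobol' sampling, scaling, standardized regression coefficients) mirrors the abstract:

```python
import numpy as np
from scipy.stats import qmc

# Quasi-Monte Carlo sketch: draw a Sobol' sample over four hypothetical
# HVAC parameters (floor area, insulation efficiency, thermal mass,
# unit efficiency), run a toy aggregate-response model, and rank
# parameter importance via linear regression.
sampler = qmc.Sobol(d=4, scramble=True, seed=13)
u = sampler.random_base2(m=10)              # 1024 low-discrepancy points
lo = np.array([80.0, 0.5, 1000.0, 2.0])    # illustrative lower bounds
hi = np.array([250.0, 0.95, 4000.0, 4.5])  # illustrative upper bounds
x = qmc.scale(u, lo, hi)

# Toy response: DR power reduction (kW) as an invented function of the
# parameters, standing in for the building simulation.
y = 0.01 * x[:, 0] * (1.0 - x[:, 1]) + 1e-4 * x[:, 2] / x[:, 3]

# Standardized regression coefficients as a simple sensitivity ranking.
xs = (x - x.mean(0)) / x.std(0)
coef, *_ = np.linalg.lstsq(xs, y - y.mean(), rcond=None)
ranking = np.argsort(-np.abs(coef))
```

The Prony-analysis step for the transient phase is omitted; the ranking here corresponds only to the steady-state part of the analysis.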
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fits to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff diffusion model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
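The simulation logic can be sketched in a few lines. The ex-Gaussian parameters and the uniform timer-jitter noise model below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ex-Gaussian reaction times: Normal(mu, sigma) + Exponential(tau), in ms.
mu, sigma, tau = 400.0, 40.0, 100.0
n = 20000
rt_clean = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# Technical noise, modelled here (as an assumption) as uniform jitter of up
# to 16.7 ms -- one frame at a 60 Hz display refresh rate.
rt_noisy = rt_clean + rng.uniform(0.0, 16.7, n)

# Moment-based recovery of ex-Gaussian parameters (tau from the skewness:
# mean = mu + tau, var = sigma^2 + tau^2, skew = 2 tau^3 / var^1.5).
def exgauss_moments(x):
    m, v = x.mean(), x.var()
    skew = ((x - m) ** 3).mean() / v ** 1.5
    tau_hat = (skew / 2.0) ** (1.0 / 3.0) * np.sqrt(v)
    return m - tau_hat, np.sqrt(max(v - tau_hat ** 2, 0.0)), tau_hat

mu_c, sigma_c, tau_c = exgauss_moments(rt_clean)
mu_n, sigma_n, tau_n = exgauss_moments(rt_noisy)
```

Comparing the clean and noisy estimates shows the pattern the paper reports: uniform jitter of this size shifts the recovered parameters only slightly.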
NASA Astrophysics Data System (ADS)
Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby
2013-12-01
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
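A minimal sketch of the stochastic inversion idea follows, with a toy one-parameter forward model and a random-walk Metropolis sampler standing in for CLM4 and the full MCMC-Bayesian machinery (all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inversion: recover a single "runoff" parameter theta from noisy
# observations of an exponential recession curve.
def forward(theta, t):
    return np.exp(-theta * t)

t = np.linspace(0.1, 5.0, 50)
theta_true = 0.8
obs = forward(theta_true, t) + rng.normal(0.0, 0.02, t.size)

def log_post(theta):
    if not 0.0 < theta < 5.0:          # uniform prior bounds
        return -np.inf
    resid = obs - forward(theta, t)
    return -0.5 * (resid ** 2).sum() / 0.02 ** 2

# Random-walk Metropolis: propose, accept with probability ratio of posteriors.
chain, theta = [], 2.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[1000:])  # discard burn-in
```

The posterior sample `post` plays the role of the calibrated parameter distribution; as more observations are added, its predictive interval narrows, mirroring the behaviour described in the abstract.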
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2017-01-01
This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
Life cycle assessment of overhead and underground primary power distribution.
Bumby, Sarah; Druzhinina, Ekaterina; Feraldi, Rebe; Werthmann, Danae; Geyer, Roland; Sahl, Jack
2010-07-15
Electrical power can be distributed in overhead or underground systems, both of which generate a variety of environmental impacts at all stages of their life cycles. While there is considerable literature discussing the trade-offs between both systems in terms of aesthetics, safety, cost, and reliability, environmental assessments are relatively rare and limited to power cable production and end-of-life management. This paper assesses environmental impacts from overhead and underground medium voltage power distribution systems as they are currently built and managed by Southern California Edison (SCE). It uses process-based life cycle assessment (LCA) according to ISO 14044 (2006) and SCE-specific primary data to the extent possible. Potential environmental impacts have been calculated using a wide range of midpoint indicators, and robustness of the results has been investigated through sensitivity analysis of the most uncertain and potentially significant parameters. The studied underground system has higher environmental impacts in all indicators and for all parameter values, mostly due to its higher material intensity. For both systems and all indicators the majority of impact occurs during cable production. Promising strategies for impact reduction are thus cable failure rate reduction for overhead and cable lifetime extension for underground systems.
LDEF's map experiment foil perforations yield hypervelocity impact penetration parameters
NASA Technical Reports Server (NTRS)
Mcdonnell, J. A. M.
1992-01-01
The space exposure of LDEF for 5.75 years, forming a host target in low Earth orbit (LEO) to a wide distribution of hypervelocity particulates of varying dimensions and impact velocities, has yielded a multiplicity of impact features. Although the projectile parameters are generally unknown and, in fact, not identical for any two impacts on a target, the great number of impacts provides a statistically meaningful basis for valid comparison of the response of different targets. Given sufficient impacts, for example, a comparison of impact features (even without knowledge of the projectile parameters) is possible between: (1) differing material types (for the same incident projectile distribution); (2) differing target configurations (e.g., thick and thin targets of the same material, for the same projectiles); and (3) different velocities (using LDEF's different faces). A comparison between different materials is presented for infinite targets of aluminum, Teflon, and brass in the same pointing direction; the maximum finite-target penetration (ballistic limit) is also compared to the penetration of similar materials comprising a semi-infinite target. For comparison of impacts on similar materials at different velocities, use is made of the pointing direction relative to LDEF's orbital motion. First, however, care must be exercised to separate the effect of spatial flux anisotropies from those resulting from the spacecraft velocity through a geocentrically referenced dust distribution. Data comprising thick- and thin-target impacts, impacts on different materials, and impacts in different pointing directions are presented, and hypervelocity impact parameters are derived. Results are also shown for flux modeling codes developed to decode the relative fluxes of Earth-orbital and unbound interplanetary components intercepting LDEF.
Modeling shows that the west- and space-pointing faces are dominated by interplanetary particles and yields a mean velocity of 23.5 km/s at LDEF, corresponding to a V(infinity) Earth approach velocity of 20.9 km/s. Normally resolved average impact velocities on LDEF's cardinal point faces are shown. An 'excess' flux on the east, north, and south faces is observed, compatible with an Earth-orbital component below some 5 microns in particle diameter.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
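The mimicry the abstract describes can be reproduced in a few lines. Below, bout durations drawn from a two-component exponential mixture (hypothetical time constants) are fitted with a maximum-likelihood power law and compared via the Kolmogorov-Smirnov distance, following the same logic as the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bout durations from a two-component exponential mixture (illustrative
# time constants, in the general range of sleep/wake bout data).
n = 5000
comp = rng.random(n) < 0.7
x = np.where(comp, rng.exponential(0.5, n), rng.exponential(8.0, n))
x = x[x >= 0.1]  # apply an analysis threshold

# Maximum-likelihood power-law exponent above xmin (Clauset-style estimator).
xmin = 0.1
alpha = 1.0 + x.size / np.log(x / xmin).sum()

# Kolmogorov-Smirnov distance between the empirical CDF and the fitted
# power law: small values correspond to the paper's "zone of mimicry",
# where the wrong model is hard to reject.
xs = np.sort(x)
ecdf = np.arange(1, xs.size + 1) / xs.size
plaw_cdf = 1.0 - (xs / xmin) ** (1.0 - alpha)
ks = np.abs(ecdf - plaw_cdf).max()
```

Sweeping the mixture weights and time constants and recording where `ks` stays small maps out the parameter region in which multi-exponential data masquerade as a power law.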
Impact of dynamic distribution of floc particles on flocculation effect.
Nan, Jun; He, Weipeng; Song, Xinin; Li, Guibai
2009-01-01
Polyaluminum chloride (PAC) was used as the coagulant for kaolin suspensions in water. Online instruments, including a turbidimeter and a particle counter, were used to monitor the flocculation process. An evaluation model demonstrating the impact on the flocculation effect was established based on multiple linear regression analysis. The index weight of the channels quantitatively describes how variations of the floc particle population in different size ranges cause the decrease in turbidity. The study showed that floc particles in different size ranges contributed differently to the decrease in turbidity, and that the index weight of a channel is a good indicator of the degree to which the dynamic distribution of floc particles affects the flocculation effect. The parameter may therefore benefit the development of coagulation and sedimentation techniques as well as optimal coagulant selection.
Velocity distribution of fragments of catastrophic impacts
NASA Technical Reports Server (NTRS)
Takagi, Yasuhiko; Kato, Manabu; Mizutani, Hitoshi
1992-01-01
Three-dimensional velocities of fragments produced in laboratory impact experiments were measured for basalts and pyrophyllites. The velocity distribution of the fragments shows that the velocity range of the major fragments is rather narrow, within at most a factor of 3, and that no clear dependence of velocity on fragment mass is observed. The Non-Dimensional Impact Stress (NDIS) defined by Mizutani et al. (1990) is found to be an appropriate scaling parameter to describe the overall fragment velocity as well as the antipodal velocity.
Relating centrality to impact parameter in nucleus-nucleus collisions
NASA Astrophysics Data System (ADS)
Das, Sruthy Jyothi; Giacalone, Giuliano; Monard, Pierre-Amaury; Ollitrault, Jean-Yves
2018-01-01
In ultrarelativistic heavy-ion experiments, one estimates the centrality of a collision by using a single observable, say n, typically given by the transverse energy or the number of tracks observed in a dedicated detector. The correlation between n and the impact parameter b of the collision is then inferred by fitting a specific model of the collision dynamics, such as the Glauber model, to experimental data. The goal of this paper is to assess precisely which information about b can be extracted from data without any specific model of the collision. Under the sole assumption that the probability distribution of n for a fixed b is Gaussian, we show that the probability distribution of the impact parameter in a narrow centrality bin can be accurately reconstructed up to 5% centrality. We apply our methodology to data from the Relativistic Heavy Ion Collider and the Large Hadron Collider. We propose a simple measure of the precision of the centrality determination, which can be used to compare different experiments.
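The reconstruction idea can be illustrated with a toy Monte Carlo under the paper's sole assumption, that n at fixed b is Gaussian. The multiplicity parametrisation below is invented for illustration, not the fitted one:

```python
import numpy as np

rng = np.random.default_rng(3)

# Impact parameter with geometric weighting dP/db proportional to b.
n_ev = 200_000
b_max = 20.0  # fm, illustrative cutoff
b = b_max * np.sqrt(rng.random(n_ev))

# Gaussian n | b with a mean that falls steeply with b (toy parametrisation).
mean_n = 3000.0 * np.exp(-(b / 8.0) ** 2)
n_obs = rng.normal(mean_n, 0.1 * mean_n + 30.0)

# The 0-5% centrality bin is the top 5% of the multiplicity distribution;
# the impact parameters of those events give P(b | centrality bin).
cut = np.quantile(n_obs, 0.95)
b_central = b[n_obs >= cut]
```

Histogramming `b_central` recovers the impact-parameter distribution in the bin; the fluctuations of n at fixed b are what smear the b-to-centrality mapping that the paper quantifies.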
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Properties of CGM-Absorbing Galaxies
NASA Astrophysics Data System (ADS)
Hamill, Colin; Conway, Matthew; Apala, Elizabeth; Scott, Jennifer
2018-01-01
We extend the results of a study of the sightlines of 45 low-redshift quasars (0.06 < z < 0.85) observed by HST/COS that lie within the Sloan Digital Sky Survey. We have used photometric data from the SDSS DR12, along with the known absorption characteristics of the intergalactic medium and circumgalactic medium, to identify the most probable galaxy matches to absorbers in the spectroscopic dataset. Here, we use photometric data and measured galaxy parameters from SDSS DR12 to examine the distributions of galaxy properties such as virial radius, morphology, and position angle among those that match to absorbers within a specific range of impact parameters. We compare those distributions to galaxies within the same impact parameter range that are not matched to any absorber in the HST/COS spectrum in order to investigate global properties of the circumgalactic medium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi
2013-12-10
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-square fitting provides little improvement in the model simulations, whereas the sampling-based stochastic inversion approaches are consistent: as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions.
Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the sensitivity to the shape of the probability distribution function (empirical distribution functions versus fitted lognormal functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal input distributions suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. For the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, epistemic uncertainty is described using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation; therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output is much larger than that caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters.
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
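The Monte Carlo idea reads, in a modern sketch, as follows. All distributions and the load relation are invented for illustration; they are not the HD 220 program's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(9)

# Monte Carlo sketch: perturb several independent environmental and
# strength parameters simultaneously and read off the distributions of
# the dependent outputs, as the abstract describes.
n = 100_000
wave_h = rng.weibull(2.0, n) * 2.5                   # wave height, m
wind = np.clip(rng.normal(10.0, 3.0, n), 0.0, None)  # wind speed, m/s
strength = rng.normal(1.0e5, 2.0e4, n)               # component strength, Pa

load = 5.0e3 * wave_h ** 2 + 2.0e2 * wind ** 2  # toy water-impact load, Pa
p_fail = np.mean(load > strength)                # estimated damage probability
```

The array `load` approximates the probability distribution function of the dependent parameter, from which any required statistic (mean, percentiles, failure probability) can be read off.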
IBS FOR ION DISTRIBUTION UNDER ELECTRON COOLING.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FEDOTOV,A.V.; BEN-ZVI,I.; EIDELMAN, YU.
Standard models of intra-beam scattering (IBS) are based on the growth of the rms beam parameters for a Gaussian distribution. As a result of electron cooling, the core of the beam distribution is cooled much faster than the tails, producing a denser core. In this paper, we compare various approaches to IBS treatment for such a distribution. Its impact on the luminosity is also discussed.
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Fancois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually represented by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been proposed for simulating precipitation amounts, and spatiotemporal differences exist in the applicability of the different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto) are directly and indirectly evaluated on their ability to reproduce the observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of functions with more parameters is then not nearly as obvious, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
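The occurrence/amount structure described above can be sketched directly. Transition probabilities and mixed-exponential parameters below are illustrative, not fitted to the Quebec data:

```python
import numpy as np

rng = np.random.default_rng(11)

# First-order two-state Markov chain for wet/dry occurrence, with a mixed
# exponential for wet-day amounts (the best-performing model in the study).
p_wd, p_ww = 0.25, 0.60       # P(wet | dry), P(wet | wet), illustrative
w, mu1, mu2 = 0.7, 3.0, 15.0  # mixing weight and component means (mm)

n_days = 10_000
wet = np.zeros(n_days, dtype=bool)
amount = np.zeros(n_days)
for d in range(1, n_days):
    p = p_ww if wet[d - 1] else p_wd
    wet[d] = rng.random() < p
    if wet[d]:
        mu = mu1 if rng.random() < w else mu2
        amount[d] = rng.exponential(mu)

# Stationary wet-day frequency of the chain: p_wd / (1 + p_wd - p_ww).
pi_wet = p_wd / (1.0 + p_wd - p_ww)
```

Swapping the amount sampler (Gamma, Weibull, hybrid exponential/Pareto, ...) while keeping the occurrence chain fixed is exactly the comparison the study performs, both directly on precipitation statistics and indirectly through a hydrological model.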
NASA Astrophysics Data System (ADS)
Harris, Alan W.; Morbidelli, Alessandro; Granvik, Mikael
2016-10-01
Modeling the distribution of orbits with near-zero orbital parameters requires special attention to the dimensionality of the parameters in question. This is all the more true since orbits of near-zero MOID, (e, i), or q are especially interesting as sources or sinks of NEAs. An essentially zero value of the MOID (Minimum Orbital Intersection Distance) with respect to the Earth's orbit is a requirement for an impact trajectory, and initially also for ejecta from lunar impacts into heliocentric orbits. The collision cross section of the Earth increases greatly with decreasing relative encounter velocity, v_enc; thus the impact flux onto the Earth is enhanced for such low-v_enc objects, which correspond to near-zero (e, i) orbits. Lunar ejecta that escape from the Earth-moon system mostly do so at only barely greater than the minimum velocity for escape (Gladman et al., 1995, Icarus 118, 302-321), so the Earth-moon system is both a source and a sink of such low-v_enc orbits, and understanding the evolution of these populations requires accurately modeling the orbit distributions. Lastly, orbits of very low heliocentric perihelion distance, q, are particularly interesting as a "sink" in the NEA population as asteroids "fall into the sun" (Farinella et al., 1994, Nature 371, 314-317). Understanding this process, and especially the role of disintegration of small asteroids as they evolve into low-q orbits (Granvik et al., 2016, Nature 530, 303-306), requires accurate modeling of the q distribution that would exist in the absence of a "sink" in the distribution. In this paper, we derive analytical expressions for the expected steady-state distributions near zero of MOID, (e, i), and q in the absence of sources or sinks, compare those to numerical simulations of orbit distributions, and lastly evaluate the distributions of discovered NEAs to try to understand the sources and sinks of NEAs "near zero" in these orbital parameters.
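The cross-section enhancement at low encounter velocity mentioned above is the standard gravitational-focusing factor, 1 + (v_esc/v_enc)^2. A minimal numerical illustration with Earth values (added here for context; the paper's own derivations are not reproduced):

```python
# Gravitational focusing: the effective collision cross-section exceeds the
# geometric one by the factor 1 + (v_esc / v_enc)^2, so it grows sharply as
# the relative encounter speed drops.
v_esc = 11.2  # km/s, Earth escape speed at the surface

def enhancement(v_enc):
    """Cross-section enhancement factor for encounter speed v_enc (km/s)."""
    return 1.0 + (v_esc / v_enc) ** 2

slow = enhancement(2.0)   # slow, low-(e, i) encounter: large enhancement
fast = enhancement(20.0)  # fast encounter: nearly geometric cross-section
```

At 2 km/s the cross-section is enhanced by more than a factor of 30, while at 20 km/s the factor is close to 1, which is why the impact flux is so strongly weighted toward near-zero (e, i) orbits.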
Single particle momentum and angular distributions in hadron-hadron collisions at ultrahigh energies
NASA Technical Reports Server (NTRS)
Chou, T. T.; Chen, N. Y.
1985-01-01
The forward-backward charged multiplicity distribution P(n_F, n_B) of events in the 540 GeV antiproton-proton collider has been extensively studied by the UA5 Collaboration. It was pointed out that the distribution with respect to n = n_F + n_B satisfies approximate KNO scaling and that with respect to Z = n_F - n_B is binomial. The geometrical model of hadron-hadron collisions interprets the large multiplicity fluctuation as due to the widely different nature of collisions at different impact parameters b. For a single impact parameter b, the collision in the geometrical model should exhibit stochastic behavior. This separation of the stochastic and nonstochastic (KNO) aspects of multiparticle production processes gives conceptually a lucid and attractive picture of such collisions, leading to the concept of the partition temperature T_p and the single-particle momentum spectrum to be discussed in detail.
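As an aside, the stochastic (binomial) behavior of Z = n_F - n_B for a fixed total multiplicity is easy to illustrate numerically. The sketch below is a toy simulation, not the UA5 analysis: the fixed multiplicity n = 40, the seed, and the 50/50 forward-backward split are illustrative assumptions.

```python
import random

rng = random.Random(3)

def sample_Z(n):
    """Each of n charged particles goes forward with probability 1/2,
    so n_F ~ Binomial(n, 1/2) and Z = n_F - n_B = 2*n_F - n."""
    n_f = sum(rng.random() < 0.5 for _ in range(n))
    return 2 * n_f - n

N_TOTAL = 40  # fixed total multiplicity n (arbitrary illustrative value)
zs = [sample_Z(N_TOTAL) for _ in range(20000)]
mean_z = sum(zs) / len(zs)
var_z = sum((z - mean_z) ** 2 for z in zs) / len(zs)
```

For a symmetric collision the forward/backward assignment is a fair coin flip per particle, so the binomial form gives Var(Z) = n; the simulated variance recovers this.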
Effects of composition of grains of debris flow on its impact force
NASA Astrophysics Data System (ADS)
Tang, jinbo; Hu, Kaiheng; Cui, Peng
2017-04-01
Debris flows are composed of solid material with a broad size distribution, from fine sand to boulders. The impact force imposed by a debris flow is an important issue for protection engineering design and is strongly influenced by the grain composition; however, this issue has not been studied in depth, and the effects of grain composition have not been considered in impact-force calculations. In the present study, small-scale flume experiments with five grain compositions were carried out to study the effect of the grain composition of a debris flow on its impact force. The results show that the impact force of a debris flow increases with grain size, and the hydrodynamic pressure is calibrated using the normalization parameter dmax/d50, in which dmax is the maximum grain size and d50 is the median grain size. Furthermore, a log-logistic distribution can describe the distribution of the magnitude of the impact force, with both the mean and the variance of the distribution increasing with grain size. The proposed distribution can be used in the reliability analysis of structures impacted by debris flows.
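To make the log-logistic claim concrete, here is a minimal stdlib-only sketch of fitting that distribution to simulated impact-force peaks. The parameter values, sample size, and quantile-matching estimators are illustrative assumptions, not the authors' procedure or data.

```python
import math
import random

def loglogistic_cdf(x, alpha, beta):
    """CDF of the log-logistic distribution (alpha: scale = median, beta: shape)."""
    return 1.0 / (1.0 + (x / alpha) ** (-beta))

def loglogistic_sample(alpha, beta, rng):
    """Draw one sample by inverting the CDF: x = alpha * (u/(1-u))^(1/beta)."""
    u = rng.random()
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)

rng = random.Random(42)
alpha_true, beta_true = 5.0, 4.0  # hypothetical force scale and shape parameters
peaks = sorted(loglogistic_sample(alpha_true, beta_true, rng) for _ in range(5000))

# The scale parameter equals the median of the distribution.
alpha_hat = peaks[len(peaks) // 2]

# Shape from the interquartile ratio: F^-1(0.75)/F^-1(0.25) = 9^(1/beta).
q1, q3 = peaks[len(peaks) // 4], peaks[3 * len(peaks) // 4]
beta_hat = math.log(9.0) / math.log(q3 / q1)
```

Quantile matching is used here only because it is transparent; a maximum-likelihood fit would be the usual choice in a reliability analysis.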
Absorption, distribution, metabolism, and excretion (ADME) impact chemical concentration and activation of molecular initiating events of Adverse Outcome Pathways (AOPs) in cellular, tissue, and organ level targets. In order to better describe ADME parameters and how they modulat...
NASA Astrophysics Data System (ADS)
McKague, Darren Shawn
2001-12-01
The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel by pixel, which can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive and typically do not provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions, and it can retrieve joint distributions of parameters, which allows the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance-to-retrieval-parameter PDF transformation, given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation-parameter-to-radiance-space transformation. The vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields and vary considerably depending on the horizontal structure of the rain.
The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
A Model of Objective Weighting for EIA.
ERIC Educational Resources Information Center
Ying, Long Gen; Liu, You Ci
1995-01-01
In the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not yet been properly solved. Presents an approach of objective weighting by using a procedure of Pij principal component-factor analysis (Pij PCFA), which suits specifically those parameters measured directly by physical…
Herrmann, Frank; Baghdadi, Nicolas; Blaschek, Michael; Deidda, Roberto; Duttmann, Rainer; La Jeunesse, Isabelle; Sellami, Haykel; Vereecken, Harry; Wendland, Frank
2016-02-01
We used observed climate data, an ensemble of four GCM-RCM combinations (global and regional climate models) and the water balance model mGROWA to estimate present and future groundwater recharge for the intensively-used Thau lagoon catchment in southern France. In addition to a highly resolved soil map, soil moisture distributions obtained from SAR-images (Synthetic Aperture Radar) were used to derive the spatial distribution of soil parameters covering the full simulation domain. Doing so helped us to assess the impact of different soil parameter sources on the modelled groundwater recharge levels. Groundwater recharge was simulated in monthly time steps using the ensemble approach and analysed in its spatial and temporal variability. The soil parameters originating from both sources led to very similar groundwater recharge rates, proving that soil parameters derived from SAR images may replace traditionally used soil maps in regions where soil maps are sparse or missing. Additionally, we showed that the variance in different GCM-RCMs influences the projected magnitude of future groundwater recharge change significantly more than the variance in the soil parameter distributions derived from the two different sources. For the period between 1950 and 2100, climate change impacts based on the climate model ensemble indicated that overall groundwater recharge will possibly show a low to moderate decrease in the Thau catchment. However, as no clear trend resulted from the ensemble simulations, reliable recommendations for adapting the regional groundwater management to changed available groundwater volumes could not be derived. Copyright © 2015 Elsevier B.V. All rights reserved.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
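The random pulse model described above (Poisson arrivals, zero-mean Gaussian amplitudes, equiprobable locations) can be sketched in a few lines. This is a schematic illustration only; the rate, intensity, and location values are assumed, not taken from the SSME study.

```python
import random

rng = random.Random(0)

RATE = 2.0             # mean pulse arrival rate per unit time (assumed value)
SIGMA = 1.5            # std. dev. of the zero-mean Gaussian amplitude (assumed)
LOCATIONS = (0, 1, 2)  # three equiprobable impact points near the blade tip
T_END = 100.0          # duration of one simulated load history

def simulate_pulse_train():
    """One realization of the random loading: Poisson arrival times,
    normally distributed amplitudes, equiprobable locations."""
    t, pulses = 0.0, []
    while True:
        t += rng.expovariate(RATE)  # exponential inter-arrival times
        if t > T_END:
            return pulses
        pulses.append((t, rng.gauss(0.0, SIGMA), rng.choice(LOCATIONS)))

pulses = simulate_pulse_train()
n_pulses = len(pulses)
```

In a direct Monte Carlo study, many such load histories would be generated and each fed through a structural model to build the distribution functions for stress and displacement.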
A Probabilistic Asteroid Impact Risk Model
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2016-01-01
Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
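The Monte Carlo structure of such a risk model, sampling scenarios from uncertain input distributions and aggregating the outcomes into a distribution, can be sketched as follows. Every distribution and the consequence model here are toy placeholders for illustration only, not the PAIR model's actual inputs or damage physics.

```python
import random

rng = random.Random(7)

def sample_scenario():
    """One impact scenario drawn from assumed, purely illustrative distributions."""
    diameter = 20.0 * (1.0 - rng.random()) ** (-1.0 / 2.0)  # m; toy power-law size tail
    velocity = rng.uniform(11.0, 30.0)                      # km/s; toy entry-speed range
    pop_density = rng.expovariate(1.0 / 50.0)               # people/km^2; toy strike site
    affected_area = 1e-3 * diameter ** 2 * velocity         # km^2; toy consequence model
    return affected_area * pop_density                      # affected population (toy)

outcomes = sorted(sample_scenario() for _ in range(10000))
median = outcomes[len(outcomes) // 2]
p95 = outcomes[int(0.95 * len(outcomes))]
```

A risk tolerance posture would then be applied to percentiles of `outcomes` (e.g., the 95th) to answer threshold questions such as the minimum threatening size.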
USDA-ARS?s Scientific Manuscript database
This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...
USDA-ARS?s Scientific Manuscript database
The study of health impacts, emission estimation of particulate matter (PM), and development of new control technologies require knowledge of PM characteristics. Among these PM characteristics, the particle size distribution (PSD) is perhaps the most important physical parameter governing particle b...
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral
sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral
parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2015-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
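The propagation idea, sampling input uncertainties and reporting 5-95% bounds on an output, can be sketched with a toy planar joint-angle model. The model, noise magnitudes, and variable names are hypothetical stand-ins for the OpenSim stages, which are far more complex.

```python
import math
import random

rng = random.Random(11)

def joint_angle(marker_y, segment_len):
    """Toy planar model: a flexion angle from one marker height and a segment length."""
    s = max(-1.0, min(1.0, marker_y / segment_len))
    return math.degrees(math.asin(s))

N = 5000
angles = sorted(
    joint_angle(
        0.30 + rng.gauss(0.0, 0.005),  # marker height (m) with placement error
        0.40 + rng.gauss(0.0, 0.01),   # segment length (m) with estimation error
    )
    for _ in range(N)
)
lo, hi = angles[int(0.05 * N)], angles[int(0.95 * N)]  # 5-95% confidence bounds
```

Propagation between stages, as in the paper, amounts to feeding the sampled output distribution of one stage in as the input distribution of the next.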
Nucleon form factors in generalized parton distributions at high momentum transfers
NASA Astrophysics Data System (ADS)
Sattary Nikkhoo, Negin; Shojaei, Mohammad Reza
2018-05-01
This paper aims to calculate the elastic form factors of a nucleon by considering the extended Regge and modified Gaussian ansatzes based on generalized parton distributions. To this end, we have considered three different parton distribution functions (PDFs) and have compared the results among them at high momentum transfers. A minimum of free parameters has been applied in our parametrization. After obtaining the form factors, we calculate the electric radius and the transversely unpolarized and polarized densities of the nucleon. Furthermore, we obtain the impact-parameter-dependent PDFs. Finally, we compare our results with those of previous studies.
A probabilistic asteroid impact risk model: assessment of sub-300 m impacts
NASA Astrophysics Data System (ADS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2017-06-01
A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
NASA Astrophysics Data System (ADS)
Murga, Alicia; Sano, Yusuke; Kawamoto, Yoichi; Ito, Kazuhide
2017-10-01
Mechanical and passive ventilation strategies directly impact indoor air quality. Passive ventilation, such as natural or cross ventilation, has recently become widespread owing to its ability to reduce energy demand in buildings. To understand the effect of natural ventilation on indoor environmental quality, outdoor-indoor flow paths need to be analyzed as functions of urban atmospheric conditions, the topology of the built environment, and indoor conditions. Wind-driven natural ventilation (e.g., cross ventilation) can be calculated from the wind pressure coefficient distributions on the outdoor wall surfaces and openings of a building, allowing the study of indoor air parameters and airborne contaminant concentrations. Variations in outdoor parameters directly impact indoor air quality and residents' health. Numerical modeling can help in understanding these various parameters because it allows full control of boundary conditions and sampling points. In this study, numerical weather prediction modeling was used to calculate wind profiles/distributions at the atmospheric scale, and computational fluid dynamics was used to model detailed urban and indoor flows; these were then integrated into a dynamic downscaling analysis to predict specific urban wind parameters from the atmospheric to the built-environment scale. Wind velocity and contaminant concentration distributions inside a factory building were analyzed to assess the quality of the human working environment by using a computer-simulated person. The impact of cross-ventilation flows and their variations on the local average contaminant concentration around a factory worker, and on the inhaled contaminant dose, was then discussed.
Sanni, Steinar; Lyng, Emily; Pampanin, Daniela M; Smit, Mathijs G D
2017-06-01
The aim of this paper is to bridge gaps between biomarker and whole-organism responses related to oil-based offshore discharges. These biomarker bridges will facilitate acceptance criteria for biomarker data linked to environmental risk assessment and translate biomarker results to higher-order effects. Biomarker-based species sensitivity distributions (SSD_biomarkers) have been constructed for relevant groups of biomarkers based on laboratory data from oil exposures. SSD curves express the fraction of species responding to different types of biomarkers. They have been connected to SSDs for whole-organism responses (WORs), constructed in order to relate the SSD_biomarkers to the animal fitness parameters that are commonly used in environmental risk assessment. The resulting SSD curves show that biomarkers and WORs can be linked through their potentially affected fraction of species (PAF) distributions, enhancing the capability to monitor field parameters with better correlation to impact and risk assessment criteria and providing improved chemical/biological integration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Neuner, B; Berger, K
2010-11-01
Apart from individual resources and individual risk factors, environmental socioeconomic factors are determinants of individual health and illness. The aim of this investigation was to evaluate the association of small-area environmental socioeconomic parameters (proportion of 14-year-old and younger population, proportion of married citizens, proportion of unemployed, and the number of private cars per inhabitant) with individual socioeconomic parameters (education, income, unemployment, social class and the country of origin) in Dortmund, a major city in Germany. After splitting the small-area environmental socioeconomic parameters of 62 statistical administration units into quintiles, differences in the distribution of individual social parameters were evaluated using adjusted tests for trend. Overall, 1,312 study participants (mean age 53.6 years, 52.9% women) were included. Independently of age and gender, individual social parameters were unequally distributed across areas with different small-area environmental socioeconomic parameters. A place of birth abroad and social class were significantly associated with all small-area environmental socioeconomic parameters. If the impact of environmental socioeconomic parameters on individual health or illness is determined, the unequal small-area distribution of individual social parameters should be considered. © Georg Thieme Verlag KG Stuttgart · New York.
On the issues of probability distribution of GPS carrier phase observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In common practice the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS
NASA Astrophysics Data System (ADS)
Križan, Peter; Matúš, Miloš; Beniak, Juraj; Šooš, Ľubomír
2018-01-01
During biomass densification, various technological variables and material parameters can be identified that significantly influence the final quality of solid biofuels (pellets). In this paper, we present research findings concerning the relationships between technological and material variables during the densification of sunflower hulls. Sunflower hulls, an as-yet unused resource, are a typical product of the agricultural industry in Slovakia and belong to the group of herbaceous biomass. The main goal of the presented experimental research is to determine the impact of compression pressure, compression temperature, and material particle size distribution on final biofuel quality. The experimental research described in this paper was realized by single-axis densification on an experimental pressing stand. The impact of the investigated variables on the final briquette density and dilatation was determined. The mutual interactions of these variables with respect to final briquette quality show their importance during the densification process. The impact of raw-material particle size distribution on final biofuel quality was also confirmed by experiments on a semi-production pelleting plant.
Assessment of the Economic Potential of Distributed Wind in Colorado, Minnesota, and New York
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baring-Gould, Edward I; McCabe, Kevin; Sigrin, Benjamin O
Stakeholders in the small and distributed wind space require access to better tools and data for more informed decisions on high-impact topics, including project planning, policymaking, and funding allocation. A major challenge in obtaining improved information is the identification of favorable sites - namely, the intersection of sufficient wind resource with economic parameters such as retail rates, incentives, and other policies. This presentation, made at the AWEA WINDPOWER Conference and Exhibition in Chicago in 2018, explores the researchers' objective: to understand the spatial variance of key distributed wind parameters and identify where they intersect to form pockets of favorable areas in Colorado, Minnesota, and New York.
Spiridonov, S I; Teten'kin, V L; Mukusheva, M K; Solomatin, V M
2008-01-01
The advisability of using risks as indicators for estimating radiation impacts on environmental objects and humans has been justified. Results are presented from the identification of dose burden distributions for various cohorts of the population living within the Semipalatinsk Test Site (STS) and consuming contaminated farm products. Parameters of the dose burden distributions are estimated for areas of livestock grazing and the most contaminated sectors within these areas. Dose distributions to meadow plants in these areas have also been found. Regulatory radiation risks for the STS population and meadow ecosystem components have been calculated. Based on the estimated parameters, the levels of radiation exposure of the population and of herbaceous plants have been compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idilbi, Ahmad; Ji Xiangdong; Yuan Feng
The hadron-energy (Collins-Soper) evolution equation for all the leading-twist transverse-momentum- and spin-dependent parton distributions is derived in impact parameter space. Based on this equation, we present resummation formulas for the spin-dependent structure functions of semi-inclusive deep-inelastic scattering.
Wigner distributions for an electron
NASA Astrophysics Data System (ADS)
Kumar, Narinder; Mondal, Chandan
2018-06-01
We study the Wigner distributions for a physical electron, which reveal multidimensional images of the electron. The physical electron is considered as a composite system of a bare electron and a photon. The Wigner distributions for an unpolarized, longitudinally polarized, and transversely polarized electron are presented in the transverse-momentum plane as well as in the impact-parameter plane. The spin-spin correlations between the bare electron and the physical electron are discussed. We also evaluate all the leading-twist generalized transverse momentum distributions (GTMDs) for the electron.
Simulations of impacts on rubble-pile asteroids
NASA Astrophysics Data System (ADS)
Deller, J.; Snodgrass, C.; Lowry, S.; Price, M.; Sierks, H.
2014-07-01
Rubble-pile asteroids can contain a high level of macroporosity. For some asteroids, porosities of 40 % or even more have been measured [1]. While little is known about the exact distribution of the voids inside rubble-pile asteroids, assumptions have to be made for the modeling of impact events on these bodies. Most hydrocodes do not distinguish between micro- and macroporosity, instead describing brittle material by a constitutive model as homogeneous. We developed a method to model rubble-pile structures in hypervelocity impact events explicitly. The formation of the asteroid is modelled as a gravitational aggregation of spherical `pebbles', that form the building blocks of our target. This aggregate is then converted into a high-resolution Smoothed Particle Hydrodynamics (SPH) model, which also accounts for macroporosity inside the pebbles. We present results of a study that quantifies the influence of our model parameters on the outcome of a typical impact event of two small main-belt asteroids. The existence of void space in our model increases the resistance against collisional disruption, a behavior observed before [2]. We show that for our model no a priori knowledge of the rubble-pile constituents in the asteroid is needed, as the choice of the corresponding parameters does not directly correlate with the impact outcome. The size distribution of the pebbles used as building blocks in the formation of an asteroid is only poorly constrained. As a starting point, we use a power law N(>r) ∝ r^α to describe the distribution of radii of the pebbles. Reasonable values for the slope α range around α=-2.5, as found in the size distribution of main-belt objects [3,4]. The cut-off values for pebbles, r_{min} and r_{max} are given by practical considerations: In the SPH formalism, properties are represented by weighted averages of particles within their smoothing length h, preventing the resolution of structures below that scale. 
Using spheres with radii on the order of h results in a practically monolithic body, as does using spheres with radii comparable to that of the asteroid itself. We quantify the sensitivity of impact outcomes to the choice of parameters. Propagation of the shock front inside the asteroid depends on the pebble size distribution. While larger pebbles transmit the shock wave further into the structure, resulting in a steeper crater, small pebbles result in a more evenly distributed shock front and a wider crater. Because the shock wave is transmitted only at the small contact areas of the pebbles, it is focused at the contact points, and material can be compressed or damaged even at a distance from the impact zone. We create maps of the displacement of pebbles at the surface of the asteroid on the side opposite the impact. This can possibly be used to relate surface features on asteroids like Šteins or Itokawa to specific impact events.
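Sampling pebble radii from the truncated cumulative power law N(>r) ∝ r^α quoted above can be done by inverting the CDF. The sketch below takes α = -2.5 from the abstract; the cut-off radii are illustrative placeholders, not the values used in the simulations.

```python
import random

rng = random.Random(1)

ALPHA = -2.5              # cumulative power-law slope, N(>r) proportional to r^ALPHA
R_MIN, R_MAX = 0.05, 0.5  # pebble radius cut-offs (assumed, arbitrary units)

def sample_radius():
    """Inverse-CDF sampling of the truncated cumulative power law:
    F(r) = (R_MIN^a - r^a) / (R_MIN^a - R_MAX^a) with a = ALPHA."""
    u = rng.random()
    a_min, a_max = R_MIN ** ALPHA, R_MAX ** ALPHA
    return (a_min - u * (a_min - a_max)) ** (1.0 / ALPHA)

radii = [sample_radius() for _ in range(10000)]
```

With a slope this steep, most pebbles land near R_MIN, which is why the lower cut-off (set by the SPH smoothing length h) controls the effective resolution of the rubble-pile model.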
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. 
In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
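The power-law recession model referred to above relates discharge to its rate of decline, -dQ/dt = a Q^b, and the "fitting choices" under study include how dQ/dt and Q are discretized from daily data. A minimal sketch of one such combination (backward-difference dQ/dt against midpoint Q, with a log-log least-squares fit); this is an illustrative choice, not the authors' recommended method:

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit the power-law recession model -dQ/dt = a * Q**b to a single
    recession event by log-log least squares.

    q : array of daily streamflow values during one recession.
    Returns (a, b). Illustrative choices: backward-difference dQ/dt and
    midpoint Q; the abstract's point is that such choices matter.
    """
    dqdt = -np.diff(q) / dt            # finite-difference estimate of -dQ/dt
    qmid = 0.5 * (q[:-1] + q[1:])      # discharge at the interval midpoint
    mask = (dqdt > 0) & (qmid > 0)     # keep strictly receding, positive flows
    b, log_a = np.polyfit(np.log(qmid[mask]), np.log(dqdt[mask]), 1)
    return np.exp(log_a), b
```

On a synthetic recession generated from known (a, b) this estimator recovers the exponent closely; on real events, differing discretizations and event definitions shift the estimates, which is precisely the methodological sensitivity the study quantifies.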
The Effect of Clustering on Estimations of the UV Ionizing Background from the Proximity Effect
NASA Astrophysics Data System (ADS)
Pascarelle, S. M.; Lanzetta, K. M.; Chen, H. W.
1999-09-01
There have been several determinations of the ionizing background using the proximity effect observed in the distribution of Lyman-alpha absorption lines in the spectra of QSOs at high redshift. It is usually assumed that the distribution of lines should be the same at very small impact parameters to the QSO as it is at large impact parameters, and that any decrease in line density at small impact parameters is due to ionizing radiation from the QSO. However, if these Lyman-alpha absorption lines arise in galaxies (Lanzetta et al. 1995, Chen et al. 1998), then the strength of the proximity effect may have been underestimated in previous work, since galaxies are known to cluster around QSOs. Therefore, the UV background has likely been overestimated by the same factor.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stages, physically based distributed hydrological models were assumed to derive their parameters from terrain properties directly, so that no parameter calibration would be needed; unfortunately, the uncertainty associated with deriving parameters this way is very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. This study has two main purposes: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight strategy to change the inertia weight, and an arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
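The two PSO improvements named above (a linearly decreasing inertia weight and an arccosine schedule for the acceleration coefficients) can be sketched generically. The weight and coefficient ranges below are illustrative assumptions, not the calibrated Liuxihe settings:

```python
import numpy as np

def improved_pso(objective, bounds, n_particles=20, n_iter=30, seed=0):
    """Minimize `objective` with PSO using a linearly decreasing inertia
    weight and arccosine-based acceleration coefficients, in the spirit of
    the improvements described above. The ranges (w in [0.4, 0.9],
    c in [0.5, 2.5]) are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pval.argmin()].copy()
    w_max, w_min, c_max, c_min = 0.9, 0.4, 2.5, 0.5
    for k in range(n_iter):
        t = k / max(n_iter - 1, 1)
        w = w_max - (w_max - w_min) * t                          # linear decrease
        c1 = c_min + (c_max - c_min) * np.arccos(2 * t - 1) / np.pi  # arccosine schedule
        c2 = c_max + c_min - c1                                  # complementary coefficient
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()
```

The linear inertia schedule moves the swarm from exploration toward exploitation, while the arccosine schedule gradually shifts weight from the cognitive term (c1) to the social term (c2) over the run; note the defaults match the particle number (20) and evolution number (30) found appropriate in the study.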
3D glasma initial state for relativistic heavy ion collisions
Schenke, Björn; Schlichting, Sören
2016-10-13
We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
NASA Astrophysics Data System (ADS)
Ramzan, M.; Bilal, M.; Kanwal, Shamsa; Chung, Jae Dong
2017-06-01
The present analysis discusses the boundary layer flow of an Eyring-Powell nanofluid past a constantly moving surface under the influence of nonlinear thermal radiation. Heat and mass transfer mechanisms are examined under a physically suitable convective boundary condition. Effects of variable thermal conductivity and chemical reaction are also considered. Series solutions of all involved distributions are obtained using the Homotopy Analysis Method (HAM). Impacts of the dominant embedded flow parameters are discussed through graphical illustrations. It is observed that the thermal radiation parameter has an increasing effect on the temperature profile, whereas the chemical reaction parameter exhibits decreasing behavior versus the concentration distribution. Supported by the World Class 300 Project (No. S2367878) of the SMBA (Korea)
Low-Energy Impacts onto Lunar Regolith Simulant
NASA Astrophysics Data System (ADS)
Seward, Laura M.; Colwell, J.; Mellon, M.; Stemm, B.
2012-10-01
Low-Energy Impacts onto Lunar Regolith Simulant Laura M. Seward1, Joshua E. Colwell1, Michael T. Mellon2, and Bradley A. Stemm1, 1Department of Physics, University of Central Florida, Orlando, Florida, 2Southwest Research Institute, Boulder, Colorado. Impacts and cratering in space play important roles in the formation and evolution of planetary bodies. Low-velocity impacts and disturbances to planetary regolith are also a consequence of manned and robotic exploration of planetary bodies such as the Moon, Mars, and asteroids. We are conducting a program of laboratory experiments to study low-velocity impacts of 1 to 5 m/s into JSC-1 lunar regolith simulant, JSC-Mars-1 Martian regolith simulant, and silica targets under 1 g. We use direct measurement of ejecta mass and high-resolution video tracking of ejecta particle trajectories to derive ejecta mass velocity distributions. Additionally, we conduct similar experiments under microgravity conditions in a laboratory drop tower and on parabolic aircraft with velocities as low as 10 cm/s. We wish to characterize and understand the collision parameters that control the outcome of low-velocity impacts into regolith, including impact velocity, impactor mass, target shape and size distribution, regolith depth, target relative density, and crater depth, and to experimentally determine the functional dependencies of the outcomes of low-velocity collisions (ejecta mass and ejecta velocities) on the controlling parameters of the collision. We present results from our ongoing study showing the positive correlation between impact energy and ejecta mass. The total ejecta mass is also dependent on the packing density (porosity) of the regolith. We find that ejecta mass velocity fits a power-law or broken power-law distribution. Our goal is to understand the physics of ejecta production and regolith compaction in low-energy impacts and experimentally validate predictive models for dust flow and deposition. 
We will present our results from one-g and microgravity impact experiments.
Electron-impact Multiple-ionization Cross Sections for Atoms and Ions of Helium through Zinc
NASA Astrophysics Data System (ADS)
Hahn, M.; Müller, A.; Savin, D. W.
2017-12-01
We compiled a set of electron-impact multiple-ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature or if there is a non-thermal electron energy distribution, such as a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to available experimental EIMI cross-section data. Based on this comparison, we interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for 3466 EIMI cross sections and the associated Maxwellian plasma rate coefficients. We also highlight some outstanding issues that remain to be resolved.
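A Maxwellian rate coefficient is obtained from a cross section by averaging sigma*v over the electron energy distribution. A minimal numerical sketch of that step (the unit conventions and Riemann-sum quadrature here are my assumptions; the published tabulation integrates fitted semiempirical cross sections):

```python
import numpy as np

M_E = 9.109e-31   # electron mass [kg]
EV = 1.602e-19    # J per eV

def maxwellian_rate(sigma, kT_eV, e_max_eV, n=200_000):
    """Maxwellian plasma rate coefficient <sigma v> [m^3/s] from a cross
    section sigma(E) [m^2], with E in eV, via the standard integral
      <sigma v> = sqrt(8/(pi m_e)) * (kT)^(-3/2) * Int sigma(E) E exp(-E/kT) dE,
    evaluated with a simple Riemann sum up to e_max_eV. A generic sketch,
    not the paper's tabulation machinery.
    """
    e = np.linspace(0.0, e_max_eV, n) * EV      # energy grid [J]
    kT = kT_eV * EV
    integrand = sigma(e / EV) * e * np.exp(-e / kT)
    integral = integrand.sum() * (e[1] - e[0])  # Riemann-sum quadrature
    return np.sqrt(8.0 / (np.pi * M_E)) * kT ** -1.5 * integral
```

For a constant cross section the integral reduces to sigma0 times the Maxwellian mean speed, which provides a convenient sanity check of the quadrature.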
Finite-size analysis of continuous-variable measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Xueying; Zhang, Yichen; Zhao, Yijia; Wang, Xiangyu; Yu, Song; Guo, Hong
2017-10-01
We study the impact of the finite-size effect on the continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocol, mainly considering the finite-size effect on the parameter estimation procedure. The central-limit theorem and maximum likelihood estimation theorem are used to estimate the parameters. We also analyze the relationship between the number of exchanged signals and the optimal modulation variance in the protocol. It is proved that when Charlie's position is close to Bob, the CV-MDI QKD protocol has the farthest transmission distance in the finite-size scenario. Finally, we discuss the impact of finite-size effects related to the practical detection in the CV-MDI QKD protocol. The overall results indicate that the finite-size effect has a great influence on the secret-key rate of the CV-MDI QKD protocol and should not be ignored.
Statistics of initial density perturbations in heavy ion collisions and their fluid dynamic response
NASA Astrophysics Data System (ADS)
Floerchinger, Stefan; Wiedemann, Urs Achim
2014-08-01
An interesting opportunity to determine thermodynamic and transport properties in more detail is to identify generic statistical properties of initial density perturbations. Here we study event-by-event fluctuations in terms of correlation functions for two models that can be solved analytically. The first assumes Gaussian fluctuations around a distribution that is fixed by the collision geometry but leads to non-Gaussian features after averaging over the reaction plane orientation at non-zero impact parameter. In this context, we derive a three-parameter extension of the commonly used Bessel-Gaussian event-by-event distribution of harmonic flow coefficients. Secondly, we study a model of N independent point sources for which connected n-point correlation functions of initial perturbations scale like 1/N^(n-1). This scaling is violated for non-central collisions in a way that can be characterized by its impact parameter dependence. We discuss to what extent these are generic properties that can be expected to hold for any model of initial conditions, and how this can improve the fluid dynamical analysis of heavy ion collisions.
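The Bessel-Gaussian distribution mentioned above arises when a fixed reaction-plane flow contribution is smeared by isotropic Gaussian fluctuations, and sampling it makes that construction explicit. A minimal sketch of the baseline distribution only (the paper's three-parameter extension is not reproduced here):

```python
import numpy as np

def sample_bessel_gaussian(v0, sigma, n, seed=0):
    """Sample magnitudes |v| where v is a fixed reaction-plane contribution
    v0 plus an isotropic two-dimensional Gaussian fluctuation of width
    sigma per component; |v| then follows the Bessel-Gaussian density
      p(v) = (v / sigma^2) exp(-(v^2 + v0^2) / (2 sigma^2)) I0(v v0 / sigma^2).
    """
    rng = np.random.default_rng(seed)
    vx = v0 + sigma * rng.standard_normal(n)  # in-plane component around v0
    vy = sigma * rng.standard_normal(n)       # out-of-plane component
    return np.hypot(vx, vy)
```

A quick consistency check is the second moment, E[v^2] = v0^2 + 2 sigma^2, which follows directly from the two-component construction.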
Spatiotemporal distribution modeling of PET tracer uptake in solid tumors.
Soltani, Madjid; Sefidgar, Mostafa; Bazmara, Hossein; Casey, Michael E; Subramaniam, Rathan M; Wahl, Richard L; Rahmim, Arman
2017-02-01
Distribution of PET tracer uptake is elaborately modeled via a general equation used for solute transport modeling. This model can be used to incorporate various transport parameters of a solid tumor, such as hydraulic conductivity of the microvessel wall and transvascular permeability, as well as interstitial space parameters. This is especially significant because tracer delivery and drug delivery to solid tumors are determined by similar underlying tumor transport phenomena, and quantifying the former can enable enhanced prediction of the latter. We focused on the commonly utilized FDG PET tracer. First, based on a mathematical model of angiogenesis, the capillary network of a solid tumor and the normal tissue around it were generated. A coupled mathematical method, which simultaneously solves for blood flow in the capillary network as well as fluid flow in the interstitium, is used to calculate pressure and velocity distributions. Subsequently, a comprehensive spatiotemporal distribution model (SDM) is applied to accurately model the distribution of PET tracer uptake, specifically FDG in this work, within solid tumors. The different transport mechanisms, namely convection and diffusion from vessel to tissue and within tissue, are elaborately calculated across the domain of interest, and the effect of each parameter on tracer distribution is investigated. The results show the convection terms to have a negligible effect on tracer transport, so the SDM can be solved after eliminating these terms. The proposed framework of spatiotemporal modeling for PET tracers can be utilized to comprehensively assess the impact of various parameters on the spatiotemporal distribution of PET tracers.
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC) curve, a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterizes the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between the selection of suitable distributions for the uncertain parameters and Bayesian adaptive methods for inferring the parameters.
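The effect described above can be illustrated by building an ROC curve from Monte Carlo draws in which the mean signal and noise powers are themselves random across environmental states. The distribution choices and spreads below are illustrative assumptions for an energy detector, not the paper's scattering model:

```python
import numpy as np

def roc_with_uncertainty(n_env=64, n_samp=2000, seed=0):
    """Monte Carlo sketch of an ROC curve for an energy detector when the
    mean signal and noise powers are uncertain, here drawn log-normally
    across environmental states (illustrative assumptions).
    Returns (pfa, pd) over a sweep of 200 detection thresholds.
    """
    rng = np.random.default_rng(seed)
    noise_power = np.exp(rng.normal(0.0, 0.5, n_env))  # uncertain noise power per state
    sig_power = np.exp(rng.normal(1.0, 0.5, n_env))    # uncertain mean signal power
    # exponentially distributed energy samples under H0 (noise) and H1 (signal + noise)
    h0 = rng.exponential(noise_power[:, None], (n_env, n_samp)).ravel()
    h1 = rng.exponential((noise_power + sig_power)[:, None], (n_env, n_samp)).ravel()
    thr = np.quantile(np.concatenate([h0, h1]), np.linspace(0.0, 1.0, 200))
    pfa = (h0[None, :] > thr[:, None]).mean(axis=1)  # false-alarm probability
    pd = (h1[None, :] > thr[:, None]).mean(axis=1)   # detection probability
    return pfa, pd
```

Re-running with smaller `n_env` (the default echoes the abstract's figure of roughly 64 independent environmental samples) shows how a limited sample of states can give misleading estimates of the low-false-alarm tail.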
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Tan, Benjamin; Chen, Ziyang; Sege, Jon; Wang, Changhong; Rubin, Yoram
2018-01-01
Modeling of uncertainty associated with subsurface dynamics has long been a major research topic. Its significance is widely recognized for real-life applications. Despite the huge effort invested in the area, major obstacles still remain on the way from theory to applications. Particularly problematic here is the confusion between modeling uncertainty and modeling spatial variability, which translates into a (mis)conception, in fact an inconsistency, suggesting that modeling of uncertainty and modeling of spatial variability are equivalent and, as such, require a lot of data. This paper investigates this challenge against the backdrop of a 7 km deep underground tunnel in China, where environmental impacts are of major concern. We approach the data challenge by pursuing a new concept for Rapid Impact Modeling (RIM), which bypasses altogether the need to estimate posterior distributions of model parameters, focusing instead on detailed stochastic modeling of impacts, conditional on all available information, including prior, ex-situ information as well as in-situ measurements. A foundational element of RIM is the construction of informative priors for target parameters using ex-situ data, relying on ensembles of well-documented sites pre-screened for geological and hydrological similarity to the target site. The ensembles are built around two sets of similarity criteria: a physically based set of criteria and an additional set covering epistemic criteria. In another variation on common Bayesian practice, we update the priors to obtain conditional distributions of the target (environmental impact) dependent variables and not the hydrological variables. This recognizes that goal-oriented site characterization is in many cases more useful in applications than parameter-oriented characterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact.
The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
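The variance-based ranking described above (S = Var(E[Y|X_i]) / Var(Y)) can be estimated directly from the Monte Carlo runs by conditioning on one input at a time. A generic binned-estimator sketch of that scheme, not the authors' implementation:

```python
import numpy as np

def first_order_sensitivity(model, dists, n=100_000, bins=50, seed=0):
    """Variance-based first-order sensitivity indices by Monte Carlo:
    draw each input from its assigned distribution, evaluate the model,
    and estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning on X_i.
    `dists` are callables (rng, n) -> samples; n must be divisible by bins.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([d(rng, n) for d in dists])
    y = model(X)
    var_y = y.var()
    S = []
    for i in range(X.shape[1]):
        order = X[:, i].argsort()                             # sort runs by input i
        cond_means = y[order].reshape(bins, -1).mean(axis=1)  # E[Y|X_i] per bin
        S.append(cond_means.var() / var_y)                    # between-bin variance fraction
    return np.array(S)
```

Binning on one input at a time yields first-order indices only; the interplay between input uncertainties highlighted in the abstract would require total-effect or higher-order estimators.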
Garcia, A G; Godoy, W A C
2017-06-01
Studies of the influence of biological parameters on the spatial distribution of lepidopteran insects can provide useful information for managing agricultural pests, since the larvae of many species cause serious impacts on crops. Computational models to simulate the spatial dynamics of insect populations are increasingly used, because of their efficiency in representing insect movement. In this study, we used a cellular automata model to explore different patterns of population distribution of Spodoptera frugiperda (J.E. Smith) (Lepidoptera: Noctuidae), when the values of two biological parameters that are able to influence the spatial pattern (larval viability and adult longevity) are varied. We mapped the spatial patterns observed as the parameters varied. Additionally, by using population data for S. frugiperda obtained in different hosts under laboratory conditions, we were able to describe the expected spatial patterns occurring in corn, cotton, millet, and soybean crops based on the parameters varied. The results are discussed from the perspective of insect ecology and pest management. We concluded that computational approaches can be important tools to study the relationship between the biological parameters and spatial distributions of lepidopteran insect pests.
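The modelling approach described above can be reduced to a toy update rule on a boolean occupancy grid, with one knob for larval viability and one for adult persistence. This is illustrative of a cellular-automaton dispersal model only; the published model's rules and host-specific parameters differ:

```python
import numpy as np

def ca_step(grid, viability, persistence, rng):
    """One generation of a toy cellular-automaton dispersal model: every
    occupied cell seeds its four neighbours, offspring establish with
    probability `viability`, and residents persist with probability
    `persistence` (a stand-in for adult longevity).
    """
    neigh = np.zeros_like(grid)
    neigh[1:, :] |= grid[:-1, :]    # seed neighbour below
    neigh[:-1, :] |= grid[1:, :]    # seed neighbour above
    neigh[:, 1:] |= grid[:, :-1]    # seed neighbour to the right
    neigh[:, :-1] |= grid[:, 1:]    # seed neighbour to the left
    newly = neigh & (rng.random(grid.shape) < viability)
    stay = grid & (rng.random(grid.shape) < persistence)
    return newly | stay
```

Sweeping `viability` and `persistence` over grids of values, as the study does for larval viability and adult longevity across hosts, maps how the occupied pattern shifts between compact, patchy, and sparse distributions.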
Coupling SPH and thermochemical models of planets: Methodology and example of a Mars-sized body
NASA Astrophysics Data System (ADS)
Golabek, G. J.; Emsenhuber, A.; Jutzi, M.; Asphaug, E. I.; Gerya, T. V.
2018-02-01
Giant impacts have been suggested to explain various characteristics of terrestrial planets and their moons. However, so far in most models only the immediate effects of the collisions have been considered, while the long-term interior evolution of the impacted planets was not studied. Here we present a new approach, combining 3-D shock physics collision calculations with 3-D thermochemical interior evolution models. We apply the combined methods to a demonstration example of a giant impact on a Mars-sized body, using typical collisional parameters from previous studies. While the material parameters (equation of state, rheology model) used in the impact simulations can have some effect on the long-term evolution, we find that the impact angle is the most crucial parameter for the resulting spatial distribution of the newly formed crust. The results indicate that a dichotomous crustal pattern can form after a head-on collision, while this is not the case when considering a more likely grazing collision. Our results underline that end-to-end 3-D calculations of the entire process are required to study in the future the effects of large-scale impacts on the evolution of planetary interiors.
A discrete element modelling approach for block impacts on trees
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic
2015-04-01
In the past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments. Then, key parameters were assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities, and contacts between the block and the tree. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loading. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account using an additional mass distributed uniformly on the upper part of the tree. The block is represented by a sphere. The contact model between the block and the stem consists of an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. Wood stems were clamped on a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was conducted to identify the input parameters controlling the block kinematics after the impact on a tree. Twenty input parameters were considered in the DEM simulation model: 12 related to the tree and 8 to the block.
The results highlight that the impact velocity, the stem diameter, and the block volume are the three input parameters that control the block kinematics after impact.
Savic, Radovan; Ondrasek, Gabrijel; Blagojevic, Bosko; Bubalo Kovacic, Marina; Zemunac, Rados
2017-12-29
Waters are among the most vulnerable environmental resources, exposed to the impact of various point and non-point pollutants from rural and urban activities. Systematic and long-term monitoring of hydro-resources is therefore of crucial importance for sustainable water management, although such practice is lacking across many (agro-)hydro-ecosystems. In the presented study, for the first time, the spatial distribution (covering almost 9000 ha) and temporal variation (2006-2013) of certain quality parameters were characterized in the drainage watercourses Tatarnica and Subic, whose catchments are rural and suburban areas close to the city of Novi Sad, Republic of Serbia. Based on the majority of observed parameters, both watercourses belonged to water quality classes I and II, with occasional extreme values of certain parameters (e.g., suspended solids, total phosphorus, ammonium) relegating both watercourses to classes IV and V. The value of the synthetic pollution index (i.e., a combined effect of all considered parameters) showed a higher degree of water pollution in the watercourse Subic (on average 2.00) than in Tatarnica (on average 0.72). Also, cluster analysis for the watercourse Tatarnica detected two groups of parameters (mostly related to nutrients and organic matter), indicating more complex impacts on water quality during the observed period, for whose elucidation the established water quality monitoring program would be of great importance.
Global behavior of a vibro-impact system with asymmetric clearances
NASA Astrophysics Data System (ADS)
Li, Guofang; Ding, Wangcai
2018-06-01
A simple dynamic model of a vibro-impact system subjected to harmonic excitation with two asymmetric clearances is considered. A semi-analytical method for obtaining periodic solutions of the vibro-impact system is proposed. The diversity and evolution of the fundamental periodic impact motions are analyzed. The formation mechanism of the complete chattering-impact periodic motion with sticking motion under the influence of grazing bifurcation is analyzed. The transition law of periodic motions in the periodic-inclusion area is presented. The coexistence of periodic motions and the extreme sensitivity to initial values within the high-frequency region are studied. The global distribution of the periodic and chaotic motions of the system is obtained by a state-parameter space co-simulation method that very few studies have considered before. The distribution of the attractors and the corresponding attracting domains for different periodic motions are also studied.
Comparative study of anti-drift nozzles' wear.
Bolly, G; Huyghebaert, B; Mostade, O; Oger, R
2002-01-01
When spraying, drift is a restricting factor which reduces the efficiency of pesticide treatments and increases their impact on the environment. The use of anti-drift nozzles is the most common technique to reduce the drift effect. The basic principle of all anti-drift nozzles is to produce bigger droplets (Imag DLO, 1999), which are less sensitive to the wind. The increase in droplet size is achieved either by reducing the spraying pressure (anti-drift fan nozzles) or by injecting air into the nozzle (air injection nozzles). This study aims at comparing the performance of the main anti-drift nozzles available on the Belgian market (Teejet DG and AI, Albuz ADI and AVI, Hardi ISO LD and AI). The study made it possible to compare thirteen different nozzle sets according to their trademark, type, and material. The study is based on the analysis of macroscopic parameters (flow rate, transversal distribution, and individual distribution) as well as on the analysis of microscopic parameters (spray deposit on an artificial target). The evolution of these parameters is analysed according to the nozzle's wear. The wear is produced artificially according to the ISO 5682-1 standard (ISO 5682-1, 1996). The results confirmed the major influence of the manufacturing material on the nozzles' wear, ceramic being the most resistant material. Macroscopic as well as microscopic parameters varied with utilization time without any direct correlation; indeed, most parameters varied in an unpredictable way. It was however possible to establish a correlation between wear time and the coverage rate and flow rate parameters. The service life differs depending on the type of nozzle, air injection nozzles being more resistant. Finally, the analysis of microscopic parameters (spray deposit) (Degré A., 1999) shows that the number of impacts remains stable with wear, while the size of the impacts and the coverage rate increase.
2014-09-01
…has highlighted the need for physically consistent radiation pressure and Bidirectional Reflectance Distribution Function (BRDF) models. This paper seeks to evaluate the impact of BRDF-consistent radiation pressure models compared to changes in the other BRDF parameters. The differences in orbital position arising because of changes in the shape, attitude, angular rates, BRDF parameters, and radiation pressure model are plotted as a…
The geomagnetically trapped radiation environment: A radiological point of view
NASA Technical Reports Server (NTRS)
Holly, F. E.
1972-01-01
The regions of naturally occurring, geomagnetically trapped radiation are briefly reviewed in terms of physical parameters such as; particle types, fluxes, spectrums, and spatial distributions. The major emphasis is placed upon a description of this environment in terms of the radiobiologically relevant parameters of absorbed dose and dose-rate and a discussion of the radiological implications in terms of the possible impact on space vehicle design and mission planning.
An Evaluation of Compressed Work Schedules and Their Impact on Electricity Use
2010-03-01
…problems by introducing uncertainty to the known parameters of a given process (Sobol, 1975). The MCS output represents approximate values of the process within the observed parameters; the output is provided within a statistical distribution of likely outcomes (Sobol, 1975). The Monte Carlo method is appropriate for "any process whose development is affected by random factors" (Sobol, 1975:10). MCS introduces…
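The Monte Carlo simulation (MCS) procedure described in this excerpt reduces to a small generic loop: replace each known-but-uncertain parameter with a random draw, run the process repeatedly, and summarize the outputs as a distribution. The process and parameter distributions below are placeholders, not the thesis's electricity-use model:

```python
import numpy as np

def monte_carlo(process, param_dists, n=10_000, seed=0):
    """Generic Monte Carlo simulation in the sense of Sobol (1975):
    `param_dists` are callables rng -> one random parameter draw; the
    process is evaluated n times and its outputs summarized as a mean
    and a 5/50/95 percentile spread of likely outcomes.
    """
    rng = np.random.default_rng(seed)
    outputs = np.array([process(*(d(rng) for d in param_dists))
                        for _ in range(n)])
    return outputs.mean(), np.percentile(outputs, [5, 50, 95])
```

The percentile triple is one simple way to report the "statistical distribution of likely outcomes" the excerpt refers to; any other summary of `outputs` would serve equally.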
Strong-field ionization with twisted laser pulses
NASA Astrophysics Data System (ADS)
Paufler, Willi; Böning, Birger; Fritzsche, Stephan
2018-04-01
We apply quantum trajectory Monte Carlo computations to model strong-field ionization of atoms by twisted Bessel pulses and calculate photoelectron momentum distributions (PEMD). Since Bessel beams can be considered an infinite superposition of circularly polarized plane waves with the same helicity, whose wave vectors lie on a cone, we compare the PEMD of such Bessel pulses to those of a circularly polarized pulse. We focus on the momentum distributions in the propagation direction of the pulse and show how these distributions are affected by experimentally accessible parameters, such as the opening angle of the beam or the impact parameter of the atom with respect to the beam axis. In particular, we show that higher photoelectron momenta can be found when the opening angle is increased.
Hypervelocity Impact Test Facility: A gun for hire
NASA Technical Reports Server (NTRS)
Johnson, Calvin R.; Rose, M. F.; Hill, D. C.; Best, S.; Chaloupka, T.; Crawford, G.; Crumpler, M.; Stephens, B.
1994-01-01
An affordable technique has been developed to duplicate the types of impacts observed on spacecraft, including the Shuttle, by use of a certified Hypervelocity Impact Facility (HIF) which propels particulates using capacitor-driven electric gun techniques. The fully operational facility provides a flux of particles in the 10-100 micron diameter range with a velocity distribution covering the space debris and interplanetary dust particle environment. HIF measurements of particle size, composition, impact angle, and velocity distribution indicate that these parameters can be controlled in a specified, tailored test designed for or by the user. Unique diagnostics enable researchers to fully describe the impact when evaluating 'targets' under full power or load. Users regularly evaluate space hardware, including solar cells, coatings, and materials, exposing selected portions of space-qualified items to a wide range of impact events and environmental conditions. Benefits include corroboration of data obtained from impact events, flight simulation of designs, accelerated aging of systems, and development of manufacturing techniques.
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach, relative to the approach that neglects the uncertainty of the mean activity rate estimates, were studied using Monte Carlo simulations and two real data set examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto that function.
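For the Poisson occurrence model above, the two hazard functions named in the abstract have standard closed forms: the probability of at least one event with magnitude >= m in time t is 1 - exp(-lambda * t * (1 - F(m))), and the mean return period is 1 / (lambda * (1 - F(m))). A minimal sketch, assuming an unbounded Gutenberg-Richter magnitude CDF (the paper also covers nonparametric kernel estimates, which are not reproduced here):

```python
import math

def gr_cdf(m, m0, beta):
    """Unbounded Gutenberg-Richter magnitude CDF: F(m) = 1 - exp(-beta (m - m0))."""
    return 1.0 - math.exp(-beta * (m - m0))

def exceedance_probability(m, t, rate, m0, beta):
    """P(at least one event with magnitude >= m in time t) under a Poisson
    occurrence model with mean activity rate `rate`."""
    return 1.0 - math.exp(-rate * t * (1.0 - gr_cdf(m, m0, beta)))

def mean_return_period(m, rate, m0, beta):
    """Mean return period of events with magnitude >= m."""
    return 1.0 / (rate * (1.0 - gr_cdf(m, m0, beta)))
```

The product `rate * t` appearing in the exponent is the quantity whose value relative to 5.0, per the abstract, determines whether activity-rate uncertainty matters in the aggregated interval estimate.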
Dust particle radial confinement in a dc glow discharge.
Sukhinin, G I; Fedoseev, A V; Antipov, S N; Petrov, O F; Fortov, V E
2013-01-01
A self-consistent nonlocal model of the positive column of a dc glow discharge with dust particles is presented. Radial distributions of plasma parameters and the dust component in an axially homogeneous glow discharge are considered. The model is based on the solution of a nonlocal Boltzmann equation for the electron energy distribution function, drift-diffusion equations for ions, and the Poisson equation for the self-consistent electric field. The radial distribution of dust particle density in a dust cloud was either fixed as a given step-like function or chosen according to an equilibrium Boltzmann distribution. The balance of electron and ion production in argon ionization by electron impact and their losses on the dust particle surfaces and on the discharge tube walls is taken into account. The interrelation of the discharge plasma and the dust cloud is studied in a self-consistent way, and the radial distributions of the discharge plasma and dust particle parameters are obtained. It is shown that the influence of the dust cloud on the discharge plasma has a nonlocal character, e.g., the density and charge distributions in the dust cloud depend substantially on the plasma parameters outside the dust cloud. As a result of the self-consistent evolution of plasma parameters to equilibrium steady-state conditions, the ionization and recombination rates become equal to each other, the electron and ion radial fluxes become zero, and the radial component of the electric field is expelled from the dust cloud.
NASA Astrophysics Data System (ADS)
Zus, F.; Deng, Z.; Wickert, J.
2017-08-01
The impact of higher-order ionospheric effects on the estimated station coordinates and clocks in Global Navigation Satellite System (GNSS) Precise Point Positioning (PPP) is well documented in the literature. Simulation studies reveal that higher-order ionospheric effects have a significant impact on the estimated tropospheric parameters as well. In particular, the tropospheric north-gradient component is most affected for low-latitude and midlatitude stations around noon. In a practical example, we select a few hundred stations randomly distributed over the globe in March 2012 (medium solar activity) and apply or do not apply ionospheric corrections in PPP. We compare the two sets of tropospheric parameters (ionospheric corrections applied/not applied) and find an overall good agreement with the prediction from the simulation study. The comparison of the tropospheric parameters with those derived from the ERA-Interim global atmospheric reanalysis shows that ionospheric corrections must be applied consistently in PPP and in the orbit and clock generation. Inconsistent application results in an artificial station displacement accompanied by an artificial "tilting" of the troposphere. This finding is relevant in particular for those who consider advanced GNSS tropospheric products for meteorological studies.
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the most influential parameters facilitates establishing the best parameter values for models, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time against the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Using two species, we show that even when upward or downward parameter alterations produce a major change in the Ecoclimatic Index, the effect on the species depends on the selection of suitability categories and modelling regions. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species' distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users and to improve management by identifying the climatic variables to which the models are most sensitive.
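The one-parameter-at-a-time perturbation described above can be sketched generically. The toy model and the fixed 10% perturbation size below are assumptions for illustration; in the study the output would be CLIMEX's Ecoclimatic Index and the inputs its 17 fitting parameters.

```python
def one_at_a_time_sensitivity(model, best_fit, rel_change=0.1):
    """One-at-a-time sensitivity sketch: perturb each parameter up and down
    around its best-fit value and record the change in the model output,
    then rank parameters by the largest absolute output change."""
    baseline = model(best_fit)
    sensitivity = {}
    for name, value in best_fit.items():
        for direction in (+1, -1):
            perturbed = dict(best_fit)
            perturbed[name] = value * (1 + direction * rel_change)
            delta = model(perturbed) - baseline
            sensitivity.setdefault(name, []).append(delta)
    return sorted(sensitivity, key=lambda k: -max(abs(d) for d in sensitivity[k]))

# toy model: output depends strongly on 'a', weakly on 'b'
ranked = one_at_a_time_sensitivity(lambda p: 5 * p["a"] + 0.1 * p["b"],
                                   {"a": 2.0, "b": 3.0})
```

Here the most "sensitive" parameter is simply the one whose perturbation moves the output furthest, which is the notion of sensitivity the abstract uses.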
NASA Astrophysics Data System (ADS)
Xu, C.; Shyu, J. B. H.; Xu, X.
2014-07-01
The 12 January 2010 Port-au-Prince, Haiti, earthquake (Mw = 7.0) triggered tens of thousands of landslides. The purpose of this study is to investigate the correlations of the occurrence of landslides and the thicknesses of their erosion with topographic, geologic, and seismic parameters. A total of 30 828 landslides triggered by the earthquake covered a total area of 15.736 km2, distributed over an area of more than 3000 km2, and the volume of landslide accumulation materials is estimated to be about 29 700 000 m3. These landslides are of various types, mostly shallow disrupted landslides and rock falls, but also include coherent deep-seated landslides and rock slides. They were delineated using pre- and post-earthquake high-resolution satellite images. Spatial distribution maps and contour maps of landslide number density, landslide area percentage, and landslide erosion thickness were constructed in order to analyze the spatial distribution patterns of co-seismic landslides. Statistics of the size distribution and morphometric parameters of co-seismic landslides were compiled and compared with other earthquake events in the world. Four proxies of co-seismic landslide abundance, namely landslide centroid number density (LCND), landslide top number density (LTND), landslide area percentage (LAP), and landslide erosion thickness (LET), were used to correlate co-seismic landslides with various environmental parameters. These parameters include elevation, slope angle, slope aspect, slope curvature, topographic position, distance from drainages, lithology, distance from the epicenter, distance from the Enriquillo-Plantain Garden fault, distance along the fault, and peak ground acceleration (PGA). A comparison of these parameters shows that slope angle has the strongest impact on co-seismic landslide occurrence.
Our co-seismic landslide inventory is much more detailed than other inventories in several previous publications. Therefore, we carried out comparisons of inventories of landslides triggered by the Haiti earthquake with other published results and proposed possible reasons for any differences. We suggest that the empirical functions between earthquake magnitude and co-seismic landslides need to be updated on the basis of the abundant and more complete co-seismic landslide inventories recently available.
NASA Astrophysics Data System (ADS)
Xu, C.; Shyu, J. B. H.; Xu, X.-W.
2014-02-01
The 12 January 2010 Port-au-Prince, Haiti, earthquake (Mw 7.0) triggered tens of thousands of landslides. The purpose of this study is to investigate the correlations of the occurrence of landslides and their erosion thicknesses with topographic factors, seismic parameters, and their distance from roads. A total of 30 828 landslides triggered by the earthquake covered a total area of 15.736 km2, distributed over an area of more than 3000 km2, and the volume of landslide accumulation materials is estimated to be about 29 700 000 m3. These landslides are of various types, mostly shallow disrupted landslides and rock falls, but also include coherent deep-seated landslides and rock slides. They were delineated using pre- and post-earthquake high-resolution satellite images. Spatial distribution maps and contour maps of landslide number density, landslide area percentage, and landslide erosion thickness were constructed in order to analyze the spatial distribution patterns of co-seismic landslides. Statistics of the size distribution and morphometric parameters of co-seismic landslides were compiled and compared with other earthquake events in the world. Four proxies of co-seismic landslide abundance, namely landslide centroid number density (LCND), landslide top number density (LTND), landslide area percentage (LAP), and landslide erosion thickness (LET), were used to correlate co-seismic landslides with various landslide controlling parameters. These controlling parameters include elevation, slope angle, slope aspect, slope curvature, topographic position, distance from drainages, lithology, distance from the epicenter, distance from the Enriquillo-Plantain Garden fault, distance along the fault, and peak ground acceleration (PGA). A comparison of these parameters shows that slope angle has the strongest impact on co-seismic landslide occurrence.
Our co-seismic landslide inventory is much more detailed than the inventories in several previous publications. Therefore, we carried out comparisons of inventories of landslides triggered by the Haiti earthquake with other published results and proposed possible reasons for any differences. We suggest that the empirical functions between earthquake magnitude and co-seismic landslides need to be updated on the basis of the abundant and more complete co-seismic landslide inventories recently available.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. Early on, physically based distributed hydrological models were assumed to derive model parameters directly from terrain properties, so that there would be no need to calibrate them. Unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, and to test and improve its performance; the second is to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for parameter optimization of the Liuxihe model. The improvements include adoption of a linearly decreasing inertia weight strategy to change the inertia weight and an arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can substantially improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
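The two named improvements, a linearly decreasing inertia weight and an arccosine-scheduled pair of acceleration coefficients, can be sketched as follows. The coefficient ranges and exact schedules below are plausible forms chosen for illustration, not necessarily those of the Liuxihe study; only the particle number (20) and evolution number (30) follow the abstract.

```python
import math
import random

def pso(objective, bounds, n_particles=20, n_iter=30, seed=1):
    """PSO sketch with linearly decreasing inertia weight and arccosine-based
    acceleration coefficient schedules (assumed forms, for illustration)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for t in range(n_iter):
        w = 0.9 - (0.9 - 0.4) * t / max(1, n_iter - 1)           # inertia: 0.9 -> 0.4 linearly
        s = math.acos(2 * t / max(1, n_iter - 1) - 1) / math.pi  # arccosine schedule: 1 -> 0
        c1 = 0.5 + 2.0 * s        # cognitive coefficient shrinks over iterations
        c2 = 0.5 + 2.0 * (1 - s)  # social coefficient grows over iterations
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy objective: 2-D sphere function, minimum 0 at the origin
best, best_val = pso(lambda p: sum(x * x for x in p), [(-5, 5), (-5, 5)])
```

In a hydrological setting the objective would instead score simulated against observed flood hydrographs, with one dimension per model parameter.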
Procedure for assessing the performance of a rockfall fragmentation model
NASA Astrophysics Data System (ADS)
Matas, Gerard; Lantada, Nieves; Corominas, Jordi; Gili, Josep Antoni; Ruiz-Carulla, Roger; Prades, Albert
2017-04-01
Rockfall is a mass instability process frequently observed in road cuts, open-pit mines and quarries, steep slopes, and cliffs. The detached rock mass frequently becomes fragmented when it impacts the slope surface. Accounting for the fragmentation of the rockfall mass is critical for calculating the blocks' trajectories and their impact energies, in order to assess their potential to cause damage and to design adequate preventive structures. We present here the performance of the RockGIS model, a GIS-based tool that stochastically simulates the fragmentation of rockfalls based on a lumped-mass approach. In RockGIS, fragmentation initiates with the disaggregation of the detached rock mass through the pre-existing discontinuities just before impact with the ground. An energy threshold is defined to determine whether the impacting blocks break or not. The distribution of the initial mass among a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during their propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined.
The criteria include: the position of the centre of gravity of the whole block distribution, the histogram of block runout, the extent and boundaries of the young debris cover over the slope surface, the lateral dispersion of trajectories, the total number of blocks generated after fragmentation, the volume distribution of the generated fragments, the number of blocks and the volume passing a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the final parameters selected after the calibration process are a compromise that meets all the considered criteria. This methodology has been tested on several recent rockfalls in which high fragmentation was observed. The RockGIS tool and the fragmentation laws, using data collected from recent rockfalls, were developed within the RockRisk project (2014-2016, BIA2013-42582-P), funded by the Spanish Ministerio de Economía y Competitividad.
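The stochastic power-law partition of the detached mass can be illustrated with a simple inverse-transform sampler. The exponent and minimum fragment mass below are assumed values for illustration, not RockGIS's calibrated parameters.

```python
import random

def sample_fragments(total_mass, b=0.7, m_min=0.01, seed=7):
    """Fragmentation sketch: draw fragment masses from a power-law (Pareto)
    distribution by inverse-transform sampling until the detached mass is
    consumed; the last fragment takes whatever mass remains."""
    rng = random.Random(seed)
    fragments = []
    remaining = total_mass
    while remaining > m_min:
        u = rng.random()
        m = m_min * u ** (-1.0 / b)   # Pareto tail: P(M > m) = (m / m_min)^(-b)
        m = min(m, remaining)         # cap a large draw at the remaining mass
        fragments.append(m)
        remaining -= m
    if remaining > 0:
        fragments.append(remaining)
    return fragments

frags = sample_fragments(100.0)
```

Mass is conserved by construction, which mirrors the requirement that the generated fragments redistribute exactly the initial detached mass.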
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m and the number of bins M, which replaces the tolerance parameter r used by the existing approximate entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency, and performance of the measure using multiple case studies. In this study, we examined the impact of changing the input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (elderly from young) and pathological (healthy from arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions under various input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
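The roles of m and M are easiest to see in a minimal reference implementation of DistEn: embed the series with dimension m, take all inter-vector Chebyshev distances, bin them into an M-bin histogram, and return the normalized Shannon entropy of that histogram. This is a sketch for clarity (unoptimized, and not the authors' code).

```python
import math

def dist_en(x, m=2, M=64):
    """Distribution entropy sketch: normalized Shannon entropy of the
    empirical distribution of Chebyshev distances between embedded vectors."""
    n = len(x) - m + 1
    vectors = [x[i:i + m] for i in range(n)]
    dists = [max(abs(a - b) for a, b in zip(vectors[i], vectors[j]))
             for i in range(n) for j in range(i + 1, n)]
    d_max = max(dists) or 1.0            # guard against an all-constant series
    counts = [0] * M
    for d in dists:
        counts[min(int(M * d / d_max), M - 1)] += 1
    total = len(dists)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs) / math.log2(M)  # in [0, 1]
```

A strictly periodic series concentrates its distances in a few bins and yields a low DistEn, while an irregular series spreads them across many bins and yields a higher value.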
Yang, Ben; Zhang, Yaocun; Qian, Yun; ...
2014-03-26
Reasonably modeling the magnitude, south-north gradient, and seasonal propagation of precipitation associated with the East Asian Summer Monsoon (EASM) is a challenging task for the climate community. In this study we calibrate five key parameters in the Kain-Fritsch convection scheme in the WRF model using an efficient importance-sampling algorithm to improve the EASM simulation. We also examine the impacts of the improved EASM precipitation on other physical processes. Our results suggest similar model sensitivity and values of optimized parameters across years with different EASM intensities. By applying the optimal parameters, the simulated precipitation and surface energy features are generally improved. The parameters related to the downdraft and entrainment coefficients and the CAPE consumption time (CCT) most sensitively affect the precipitation and atmospheric features. A larger downdraft coefficient or CCT decreases the heavy-rainfall frequency, while a larger entrainment coefficient delays convection development but builds up more potential for heavy rainfall events, causing a possible northward shift of the rainfall distribution. The CCT is the most sensitive parameter over the wet region, while the downdraft parameter plays a more important role over the drier northern region. Long-term simulations confirm that with the optimized parameters the precipitation distributions are better simulated in both weak and strong EASM years. Owing to more reasonably simulated precipitation condensational heating, the monsoon circulations are also improved. Lastly, with the optimized parameters the biases in the retreat (onset) of Mei-yu (northern China rainfall) simulated by the standard WRF model are evidently reduced, and the seasonal and sub-seasonal variations of the monsoon precipitation are remarkably improved.
Modeling Source Water Threshold Exceedances with Extreme Value Theory
NASA Astrophysics Data System (ADS)
Rajagopalan, B.; Samson, C.; Summers, R. S.
2016-12-01
Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. They also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular levels of organic matter in surface waters, which are of concern. All of these effects will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. The threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as a function of climate and land surface variables, thus enabling the model to capture temporal nonstationarity. We apply these models to threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
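A GPD threshold-exceedance model can be sketched with a simple stationary fit. The method-of-moments estimator below is a stand-in for the study's covariate-dependent (nonstationary) approach, in which the shape and scale would instead be functions of climate and land surface variables.

```python
import math
import random

def fit_gpd_moments(exceedances):
    """Method-of-moments fit of the Generalized Pareto Distribution to a set
    of threshold exceedances: xi = (1 - m^2/s^2) / 2, sigma = m (1 + m^2/s^2) / 2."""
    n = len(exceedances)
    mean = sum(exceedances) / n
    var = sum((y - mean) ** 2 for y in exceedances) / (n - 1)
    ratio = mean * mean / var
    xi = 0.5 * (1.0 - ratio)         # shape parameter
    sigma = 0.5 * mean * (1.0 + ratio)  # scale parameter
    return xi, sigma

def gpd_exceedance_prob(y, xi, sigma):
    """P(exceedance > y) under the fitted GPD (exponential limit as xi -> 0)."""
    if abs(xi) < 1e-9:
        return math.exp(-y / sigma)
    return max(0.0, 1.0 + xi * y / sigma) ** (-1.0 / xi)

# illustrative data: exponential exceedances are the GPD with xi = 0, sigma = 1
rng = random.Random(3)
sample = [rng.expovariate(1.0) for _ in range(5000)]
xi_hat, sigma_hat = fit_gpd_moments(sample)
```

In the nonstationary version, xi and sigma would be re-expressed as regressions on the covariates rather than constants, but the exceedance-probability formula is unchanged.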
A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts
NASA Astrophysics Data System (ADS)
Huang, S.; Mohanty, B.; Xia, K.
2017-12-01
Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers, where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts at three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited to between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with the resulting fragment size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which in turn provides guidelines for the design of commercial crushers.
Immiscible impact dynamics of droplets onto millimetric films
NASA Astrophysics Data System (ADS)
Shaikh, S.; Toyofuku, G.; Hoang, R.; Marston, J. O.
2018-01-01
The impact of liquid droplets onto a film of an immiscible liquid is studied experimentally across a broad range of parameters [Re = O(101-103), We = O(102-103)] with the aid of high-speed photography and image analysis. Above a critical impact parameter, Re^{1/2}We^{1/4} ≈ 100, the droplet fragments into multiple satellite droplets, which typically occurs as the result of a fingering instability. Statistical analysis indicates that the satellite droplets are approximately log-normally distributed, in agreement with some previous studies and the theoretical predictions of Wu (Prob Eng Mech 18:241-249, 2003). However, in contrast to a recent study by Lhuissier et al. (Phys Rev Lett 110:264503, 2013), we find that it is the modal satellite diameter, not the mean diameter, that scales inversely with the impact speed (or Weber number) and that the dependence is d_{mod} ˜ We^{-1/4}.
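The distinction between the modal and mean satellite diameters matters here because a log-normal distribution is skewed: its mode, median, and mean all differ, so the two diameters can obey different scalings with impact speed. A small sketch of the closed-form location measures (the parameter values are illustrative, not fitted to the experiments):

```python
import math

def lognormal_stats(mu, sigma):
    """Mode, median, and mean of a log-normal distribution with underlying
    normal parameters mu and sigma; for sigma > 0 they satisfy mode < median < mean."""
    mode = math.exp(mu - sigma * sigma)
    median = math.exp(mu)
    mean = math.exp(mu + 0.5 * sigma * sigma)
    return mode, median, mean

mode, median, mean = lognormal_stats(0.0, 0.5)
```

Because the three measures separate as sigma grows, a scaling law established for the mode (here d_mod ~ We^(-1/4)) need not hold for the mean, which is the contrast the abstract draws with the earlier study.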
Impact of selected parameters on the development of boiling and flow resistance in the minichannel
NASA Astrophysics Data System (ADS)
Piasecka, Magdalena; Ziętala, Kinga
2015-05-01
The paper presents results of flow boiling in a rectangular minichannel 1 mm deep, 40 mm wide, and 360 mm long. The heating element for the FC-72 flowing in the minichannel was a thin alloy foil (Haynes-230). The side of the foil in contact with the fluid in the channel carried a microstructure. Two types of microstructured heating surfaces were used: one with evenly distributed micro-recesses and another with unevenly distributed mini-recesses. The paper compares the impact of the microstructured heating surface and the minichannel position on the development of boiling and the two-phase flow pressure drop. The local heat transfer coefficients and flow resistance obtained in experiments at three minichannel positions (0°, 90°, and 180°) were analyzed. The effects of selected thermal and flow parameters (mass flux density and inlet pressure), geometric parameters, and the type of cooling liquid on boiling heat transfer were also studied. The most important factor turned out to be channel orientation. Application of the enhanced heating surface increased the heat transfer coefficient by several to several tens of percent relative to the plain surface.
Radiative Impacts of Cloud Heterogeneity and Overlap in an Atmospheric General Circulation Model
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Lee, D.; Sud, Y. C.; Suarez, M. J.
2012-01-01
The radiative impacts of introducing horizontal heterogeneity of layer cloud condensate, and vertical overlap of condensate and cloud fraction, are examined with the aid of a new radiation package operating in the GEOS-5 Atmospheric General Circulation Model. The impacts are examined in terms of diagnostic top-of-the-atmosphere shortwave (SW) and longwave (LW) cloud radiative effect (CRE) calculations for a range of assumptions and parameter specifications about the overlap. The investigation is conducted for two distinct cloud schemes: the one that comes with the standard GEOS-5 distribution, and another which has recently been used experimentally for its enhanced cloud microphysical capabilities; both are coupled to a cloud generator allowing arbitrary cloud overlap specification. We find that cloud overlap radiative impacts are significantly stronger for the operational cloud scheme, for which a change of cloud fraction overlap from maximum-random to generalized results in global changes of SW and LW CRE of approximately 4 Watts per square meter, and zonal changes of up to approximately 10 Watts per square meter. This is because, compared to the other scheme, it produces fewer occurrences of large layer cloud fractions and of multi-layer situations with large numbers of atmospheric layers being simultaneously cloudy, conditions that make overlap details more important. The impact on CRE of the details of condensate distribution overlap is much weaker. Once generalized overlap is adopted, both cloud schemes are only modestly sensitive to the exact values of the overlap parameters. We also find that if one of the CRE components is overestimated and the other underestimated, both cannot be driven towards observed values by adjustments to cloud condensate heterogeneity and overlap alone.
A study of small impact parameter ion channeling effects in thin crystals
NASA Astrophysics Data System (ADS)
Motapothula, Mallikarjuna Rao; Breese, Mark B. H.
2018-03-01
We have recorded channeling patterns produced by 1-2 MeV protons aligned with ⟨1 1 1⟩ axes in 55 nm thick silicon crystals which exhibit characteristic angular structure for deflection angles up to and beyond the axial critical angle, ψa. Such large angular deflections are produced by ions incident on atomic strings with small impact parameters, resulting in trajectories which pass through several radial rings of atomic strings before exiting the thin crystal. Each ring may focus, steer or scatter the channeled ions in the transverse direction, and the resulting characteristic angular structure beyond 0.6ψa at different depths can be related to peaks and troughs in the nuclear encounter probability. Such "radial focusing" underlies other axial channeling phenomena in thin crystals, including planar channeling of small-impact-parameter trajectories, peaks around the azimuthal distribution at small tilts, and large shoulders in the nuclear encounter probability at tilts beyond ψa.
Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL with CONED Model
NASA Technical Reports Server (NTRS)
Emmons, D.; Acebal, A.; Pulkkinen, A.; Taktakishvili, A.; MacNeice, P.; Odstricil, D.
2013-01-01
The combination of the Wang-Sheeley-Arge (WSA) coronal model, ENLIL heliospherical model version 2.7, and CONED Model version 1.3 (WSA-ENLIL with CONED Model) was employed to form ensemble forecasts for 15 halo coronal mass ejections (halo CMEs). The input parameter distributions were formed from 100 sets of CME cone parameters derived from the CONED Model. The CONED Model used image processing along with the bootstrap approach to automatically calculate cone parameter distributions from SOHO/LASCO imagery based on techniques described by Pulkkinen et al. (2010). The input parameter distributions were used as input to WSA-ENLIL to calculate the temporal evolution of the CMEs, which were analyzed to determine the propagation times to the L1 Lagrangian point and the maximum Kp indices due to the impact of the CMEs on the Earth's magnetosphere. The Newell et al. (2007) Kp index formula was employed to calculate the maximum Kp indices based on the predicted solar wind parameters near Earth, assuming two magnetic field orientations: a completely southward magnetic field and a uniformly distributed clock angle. The forecasts for 5 of the 15 events had accuracy such that the actual propagation time was within the ensemble average plus or minus one standard deviation. Using the completely southward magnetic field assumption, 10 of the 15 events contained the actual maximum Kp index within the range of the ensemble forecast, compared to 9 of the 15 events when using a uniformly distributed clock angle.
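The Newell et al. (2007) coupling function at the heart of the Kp estimate can be sketched directly. The ensemble values below are illustrative placeholders, not WSA-ENLIL output, and the final mapping from coupling to Kp is omitted:

```python
import math
import random

def newell_coupling(v, by, bz):
    """Newell et al. (2007) solar wind-magnetosphere coupling function
    d(Phi_MP)/dt = v^(4/3) * B_T^(2/3) * sin^(8/3)(theta_c / 2),
    with B_T the transverse IMF magnitude and theta_c the clock angle."""
    bt = math.hypot(by, bz)
    theta = math.atan2(by, bz)  # clock angle: 0 = northward, pi = southward
    return v ** (4.0 / 3.0) * bt ** (2.0 / 3.0) * abs(math.sin(theta / 2.0)) ** (8.0 / 3.0)

# Ensemble of predicted solar-wind states near Earth (illustrative values).
random.seed(1)
ensemble = [(400.0 + random.gauss(0, 50), 5.0) for _ in range(100)]  # (v [km/s], B_T [nT])

# Orientation 1: completely southward field -> clock angle pi, sin factor = 1.
south = [newell_coupling(v, 0.0, -bt) for v, bt in ensemble]

# Orientation 2: uniformly distributed clock angle.
uniform = []
for v, bt in ensemble:
    a = random.uniform(0.0, 2.0 * math.pi)
    uniform.append(newell_coupling(v, bt * math.sin(a), bt * math.cos(a)))

# The southward orientation maximizes the sin^(8/3) factor member by member,
# so its ensemble-mean coupling (and hence predicted Kp) is an upper bound.
assert sum(south) / len(south) >= sum(uniform) / len(uniform)
```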
Electron Impact Multiple Ionization Cross Sections for Solar Physics
NASA Astrophysics Data System (ADS)
Hahn, M.; Savin, D. W.; Mueller, A.
2017-12-01
We have compiled a set of electron-impact multiple ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature, as in solar flares or in nanoflare coronal heating. EIMI is also likely to be significant when the electron energy distribution is non-thermal, such as if the electrons follow a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to the available experimental EIMI cross section data. Based on this comparison, we have interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for thousands of EIMI cross sections. We also highlight some outstanding issues that remain to be resolved.
Impact of Saw Dust Application on the Distribution of Potentially Toxic Metals in Contaminated Soil.
Awokunmi, Emmanuel E
2017-12-01
The need to develop an approach for the reclamation of contaminated sites using locally available agricultural waste has been considered. The present study investigated the application of sawdust as an effective amendment in the immobilization of potentially toxic metals (PTMs) by conducting a greenhouse experiment on soil collected from an automobile dumpsite. The amended and non-amended soil samples were analyzed for their physicochemical parameters and sequential extraction of PTMs. The results revealed that application of the amendment had a positive impact on the physicochemical parameters, as organic matter content and cation exchange capacity increased from 12.1% to 12.8% and from 16.4 to 16.8 meq/100 g, respectively. Moreover, the mobility and bioavailability of these metals were reduced, as they were found to be distributed mostly in the non-exchangeable phase of the soil. Therefore, application of sawdust successfully immobilized PTMs and could be applied in future studies of agricultural soil reclamation.
NASA Astrophysics Data System (ADS)
Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra
2004-08-01
In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed treating some parameters as deterministically distributed. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e. market request). The proposed methodology can determine the impact of stochastic events on the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the conditions under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
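The substitution of a probability distribution for a fixed model parameter can be illustrated with a minimal Monte Carlo sketch. The noise model below is a hypothetical stand-in, not the actual shock-associated noise prediction model:

```python
import random
import statistics

def shock_noise_level(strength, p):
    """Toy stand-in for a broadband shock-noise model: the spectral level
    grows with shock strength, scaled by an empirical parameter p.
    (Hypothetical form, for illustration only.)"""
    return 10.0 * p * strength ** 2

random.seed(0)
strength = 1.5
level_fixed = shock_noise_level(strength, 0.8)  # deterministic calibration

# Replace the fixed parameter with a probability distribution and propagate.
samples = [shock_noise_level(strength, random.gauss(0.8, 0.1)) for _ in range(5000)]
level_mean = statistics.mean(samples)
level_sd = statistics.stdev(samples)

# For a model linear in p, the mean prediction is essentially unchanged, but
# the run now carries an output uncertainty the fixed-parameter solution lacks.
assert abs(level_mean - level_fixed) < 0.2
assert level_sd > 0.0
```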
Statistical regularities in the rank-citation profile of scientists
Petersen, Alexander M.; Stanley, H. Eugene; Succi, Sauro
2011-01-01
Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile ci(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each ci(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different ci(r) profiles, our results demonstrate the utility of the βi scaling parameter in conjunction with hi for quantifying individual publication impact. We show that the total number of citations Ci tallied from a scientist's Ni papers scales as Ci ~ hi^(1+βi). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress. PMID:22355696
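A minimal sketch of the two quantities compared above, the rank-citation profile and the Hirsch h-index, using hypothetical citation records:

```python
def rank_citation_profile(citations):
    """Rank-citation profile c(r): per-paper citation counts sorted in
    decreasing order of rank r."""
    return sorted(citations, reverse=True)

def h_index(citations):
    """Hirsch h-index: the largest h such that h papers have >= h citations each."""
    c = rank_citation_profile(citations)
    return max((r for r in range(1, len(c) + 1) if c[r - 1] >= r), default=0)

a = [10, 8, 6, 5, 3, 1, 0]  # hypothetical citation records for two scientists
b = [4, 4, 4, 4]

# Equal h-index, very different profiles and citation totals (33 vs. 16):
# the motivation for pairing h with a profile-shape parameter such as beta.
assert h_index(a) == h_index(b) == 4
assert sum(a) != sum(b)
```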
Vulnerabilities to Rock-Slope Failure Impacts from Christchurch, NZ Case History Analysis
NASA Astrophysics Data System (ADS)
Grant, A.; Wartman, J.; Massey, C. I.; Olsen, M. J.; Motley, M. R.; Hanson, D.; Henderson, J.
2015-12-01
Rock-slope failures during the 2010/11 Canterbury (Christchurch), New Zealand Earthquake Sequence resulted in 5 fatalities and caused an estimated US$400 million of damage to buildings and infrastructure. Reducing losses from rock-slope failures requires consideration of both hazard (i.e. likelihood of occurrence) and risk (i.e. likelihood of losses given an occurrence). Risk assessment thus requires information on the vulnerability of structures to rock or boulder impacts. Here we present 32 case histories of structures impacted by boulders triggered during the 2010/11 Canterbury earthquake sequence, in the Port Hills region of Christchurch, New Zealand. The consequences of rock fall impacts on structures, taken as penetration distance into structures, are shown to follow a power-law relationship with impact energy. Detailed mapping of rock fall sources and paths from field mapping, aerial lidar digital elevation model (DEM) data, and high-resolution aerial imagery produced 32 well-constrained runout paths of boulders that impacted structures. Impact velocities used for structural analysis were developed using lumped mass 2-D rock fall runout models using 1-m resolution lidar elevation data. Model inputs were based on calibrated surface parameters from mapped runout paths of 198 additional boulder runouts. Terrestrial lidar scans and structure from motion (SfM) imagery generated 3-D point cloud data used to measure structural damage and impacting boulders. Combining velocity distributions from 2-D analysis and high-precision boulder dimensions, kinetic energy distributions were calculated for all impacts. Calculated impact energy versus penetration distance for all cases suggests a power-law relationship between damage and impact energy. These case histories and the resulting fragility curve should serve as a foundation for future risk analysis of rock fall hazards by linking vulnerability data to the predicted energy distributions from the hazard analysis.
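The log-log least-squares fit behind such a power-law relationship can be sketched as follows; the energies and penetrations below are synthetic, not the Christchurch case-history data:

```python
import math

def fit_power_law(energies, penetrations):
    """Least-squares fit of d = c * E^k in log-log space (a sketch; the
    study's actual fragility-curve fitting procedure may differ)."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(d) for d in penetrations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - k * mx)
    return c, k

# Synthetic impacts drawn from an exact power law d = 0.01 * E^0.7.
E = [50.0, 120.0, 300.0, 800.0, 2000.0]   # kinetic energy (illustrative units)
d = [0.01 * e ** 0.7 for e in E]          # penetration distance

c, k = fit_power_law(E, d)
assert abs(k - 0.7) < 1e-9 and abs(c - 0.01) < 1e-9  # exact recovery
```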
NASA Astrophysics Data System (ADS)
Kumar, Ashwani; Vijay Babu, P.; Murty, V. V. S. N.
2017-06-01
Rapidly increasing electricity demands and capacity shortage of transmission and distribution facilities are the main driving forces for the growth of distributed generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selection of effective DG characteristics and DG parameters is a significant concern of distribution system planners seeking to obtain maximum potential benefits from the DG unit. The objective of the paper is to reduce the power losses and improve the voltage profile of the radial distribution system through optimal allocation of multiple DGs in the system. The main contributions of this paper are (i) a combined power loss sensitivity (CPLS) based method for multiple DG locations, (ii) determination of optimal sizes for multiple DG units at unity and lagging power factor, (iii) the impact of DG installed at the optimal, that is, combined load power factor on system performance, (iv) the impact of load growth on optimal DG planning, (v) the impact of DG integration on the voltage stability index, and (vi) the economic and technical impact of DG integration in distribution systems. The load growth factor has been considered in the study, which is essential for planning and expansion of existing systems. The technical and economic aspects are investigated in terms of improvement in voltage profile, reduction in total power losses, cost of energy loss, cost of power obtained from DG, cost of power intake from the substation, and savings in cost of energy loss. The results are obtained on the IEEE 69-bus radial distribution system and are also compared with other existing methods.
Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters—the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance among the reported complexity measures in differentiating physiological and pathological conditions across input parameter choices. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
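A minimal sketch of the DistEn calculation described above (embedding dimension m, M-bin histogram of pairwise Chebyshev distances, normalized Shannon entropy); the parameter defaults are illustrative:

```python
import math
import random

def dist_en(x, m=2, M=64):
    """Distribution entropy (a sketch): the Shannon entropy, normalized by
    log2(M), of the M-bin histogram of all pairwise Chebyshev distances
    between m-dimensional embedding vectors of the series x."""
    vecs = [x[i:i + m] for i in range(len(x) - m + 1)]
    dists = [max(abs(a - b) for a, b in zip(u, v))
             for i, u in enumerate(vecs) for v in vecs[i + 1:]]
    lo, hi = min(dists), max(dists)
    width = (hi - lo) / M or 1.0  # degenerate case: all distances equal
    counts = [0] * M
    for d in dists:
        counts[min(int((d - lo) / width), M - 1)] += 1
    probs = [c / len(dists) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs) / math.log2(M)

# A constant signal puts every distance in one bin (zero entropy); an
# irregular signal spreads the histogram out, raising DistEn toward 1.
flat = [1.0] * 200
random.seed(3)
irregular = [random.random() for _ in range(200)]
assert dist_en(flat) == 0.0
assert 0.3 < dist_en(irregular) <= 1.0
```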
NASA Astrophysics Data System (ADS)
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which would necessarily introduce fiducial signals of fluctuations into the random samples, weakening the signals of BAO, if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the current precision of cosmological parameter measurements, such an improvement will be valuable for future measurements of galaxy clustering.
NASA Astrophysics Data System (ADS)
Waqas, M.; Hayat, T.; Shehzad, S. A.; Alsaedi, A.
2018-01-01
Impact of gyrotactic microorganisms on two-dimensional (2D) stratified flow of an Oldroyd-B nanomaterial is highlighted. Applied magnetic field along with mixed convection is considered in the formulation. Theory of microorganisms is utilized just to stabilize the suspended nanoparticles through bioconvection induced by combined effects of buoyancy forces and magnetic field. Convergent series solutions for the obtained nonlinear differential systems are derived. Impacts of different emerging parameters on velocity, temperature, concentration, motile microorganisms density, density number of motile microorganisms and local Nusselt and Sherwood numbers are graphically addressed. It is observed that thermal, concentration and motile density stratification parameters result in reduction of temperature, concentration and motile microorganisms density distributions respectively.
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
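The relative entropy between actual and assumed distributions has a closed form in the Gaussian case, which makes the dependence on parameter error easy to sketch (illustrative numbers, not SAR signature statistics):

```python
import math

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """Relative entropy D(P||Q) between two univariate Gaussians, in nats."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Actual signature distribution vs. two assumed models: one trained well,
# one with a biased mean estimate.
actual = (0.0, 1.0)
good_model = (0.05, 1.0)
bad_model = (0.8, 1.0)

d_good = kl_gauss(*actual, *good_model)
d_bad = kl_gauss(*actual, *bad_model)

assert kl_gauss(0.0, 1.0, 0.0, 1.0) == 0.0  # no mismatch, no divergence
assert d_bad > d_good                        # larger parameter error, larger divergence
```

Per the abstract, differences of such relative entropies quantify how far model-based error predictions drift from the actual error rates.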
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. 
The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
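The dimensionality-reduction step can be sketched with a hand-rolled single-level Haar DWT (a stand-in for the wavelet family actually chosen in the study):

```python
def haar_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s = 2 ** -0.5
    return ([s * (a + b) for a, b in zip(x[0::2], x[1::2])],
            [s * (a - b) for a, b in zip(x[0::2], x[1::2])])

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

# Hourly rainfall depths (illustrative); estimating only the approximation
# coefficients halves the number of unknowns in the inversion.
rain = [0.0, 0.0, 1.2, 3.4, 2.8, 0.6, 0.0, 0.0]
approx, detail = haar_step(rain)
assert len(approx) == len(rain) // 2

# Reconstructing with details zeroed yields a smoothed series that still
# conserves total rainfall depth.
smoothed = haar_inverse(approx, [0.0] * len(approx))
assert abs(sum(smoothed) - sum(rain)) < 1e-12

# Full reconstruction is exact.
full = haar_inverse(approx, detail)
assert all(abs(a - b) < 1e-12 for a, b in zip(full, rain))
```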
NASA Astrophysics Data System (ADS)
Dhakal, N.; Jain, S.
2013-12-01
Rare and unusually large events (such as hurricanes and floods) can create unusual and interesting trends in statistics. The Generalized Extreme Value (GEV) distribution is usually used to statistically describe extreme rainfall events. A number of recent studies have shown that the frequency of extreme rainfall events has increased over the last century and that, as a result, the parameters of the GEV distribution have changed with time (non-stationarity). But what impact does a single unusually large rainfall event (e.g., Hurricane Irene) have on the GEV parameters and consequently on the level of risk, or the return periods, used in designing civil infrastructure? In other words, if such a large event occurs today, how will it influence the level of risk (estimated from past rainfall records) for civil infrastructure? To answer these questions, we performed a sensitivity analysis of the GEV distribution parameters as well as the return periods to unusually large outlier events. Long-term precipitation records over the period 1981-2010 from 12 USHCN stations across the state of Maine were used for the analysis. For most of the stations, the addition of each outlier event caused an increase in the shape parameter with a large decrease in the corresponding return period. This is a key consideration for time-varying engineering design. These isolated extreme weather events should be considered alongside traditional statistical methodology for extreme events when designing civil infrastructure (such as dams, bridges, and culverts). Such analysis is also useful in understanding the statistical uncertainty of projecting extreme events into the future.
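The sensitivity of a return period to the GEV shape parameter can be sketched directly from the GEV CDF; the parameter values below are illustrative, not the Maine USHCN fits:

```python
import math

def gev_cdf(x, mu, sigma, xi):
    """GEV cumulative distribution function (xi != 0 branch):
    F(x) = exp(-(1 + xi*(x - mu)/sigma)^(-1/xi))."""
    t = 1.0 + xi * (x - mu) / sigma
    assert t > 0, "x lies outside the distribution's support"
    return math.exp(-t ** (-1.0 / xi))

def return_period(x, mu, sigma, xi):
    """Return period (in years, for annual maxima) of level x: 1/(1 - F(x))."""
    return 1.0 / (1.0 - gev_cdf(x, mu, sigma, xi))

# Illustrative annual-maximum rainfall parameters: location 80 mm, scale 20 mm,
# and two shape values, the larger one mimicking the effect of an added outlier.
design_event = 180.0
t_before = return_period(design_event, 80.0, 20.0, 0.10)
t_after = return_period(design_event, 80.0, 20.0, 0.25)

# A heavier tail (larger shape parameter) makes the same design event far more
# probable, i.e. its return period drops sharply.
assert t_after < t_before
```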
Differential cross sections for electron capture in p + H2 collisions
NASA Astrophysics Data System (ADS)
Igarashi, Akinori; Gulyás, Laszlo; Ohsaki, Akihiko
2017-11-01
Projectile angular distributions for electron capture in p + H2 collisions at 25 and 75 keV impact energies, measured by Sharma et al. [Phys. Rev. A 86, 022706 (2012)], are calculated using the CDW-EIS and eikonal approximations. Angular distributions evaluated in the CDW-EIS approximation are in good agreement with the experimental data measured for coherent projectile beams. Incoherent projectile scatterings are also considered by folding the coherent angular distributions over the transverse momentum distribution of the projectile wave-packet. Reasonable agreement with the measurements is obtained only with coherence parameters very different from those reported in the experiments.
Temperature based Restricted Boltzmann Machines
NASA Astrophysics Data System (ADS)
Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping
2016-01-01
Restricted Boltzmann machines (RBMs), which apply graphical models to learning a probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution that RBMs originate from. However, none of the existing schemes have considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can thus be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
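The sharpness adjustment described above amounts to a temperature-scaled logistic activation, sketched here:

```python
import math

def hidden_activation(x, T=1.0):
    """Logistic activation with a temperature parameter: sigma(x / T).
    Raising T flattens the function (less selective firing); T -> 0
    approaches a hard threshold."""
    return 1.0 / (1.0 + math.exp(-x / T))

# Same pre-activation, three temperatures.
x = 2.0
cold = hidden_activation(x, 0.5)
unit = hidden_activation(x, 1.0)
hot = hidden_activation(x, 4.0)

# Higher temperature pulls the activation toward the indifferent value 0.5,
# lowering the selectivity of the hidden unit.
assert cold > unit > hot > 0.5
assert abs(hidden_activation(0.0, 3.0) - 0.5) < 1e-12
```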
NASA Astrophysics Data System (ADS)
Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav
2004-08-01
Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only, in terms of equivalent uniform dose. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume gave the best estimates of the volume parameter, but there were distinct differences in the mean values obtained.
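The equivalent uniform dose used to compare the histograms can be sketched with the standard generalized-EUD formula; the dose histogram below is illustrative, and the study's probit NTCP fitting is more involved:

```python
def eud(dose_bins, volume_fractions, a):
    """Generalized equivalent uniform dose: EUD = (sum_i v_i * D_i^a)^(1/a),
    with v_i the fractional volume receiving dose D_i and a the volume
    parameter."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(v * d ** a for d, v in zip(dose_bins, volume_fractions)) ** (1.0 / a)

# Illustrative rectal-wall dose histogram: half the wall at 20 Gy, half at 60 Gy.
doses, volumes = [20.0, 60.0], [0.5, 0.5]

uniform_mean = eud(doses, volumes, 1.0)   # a = 1 reduces to the mean dose
serial_like = eud(doses, volumes, 8.0)    # large a weights the hot spot

assert abs(uniform_mean - 40.0) < 1e-9
assert uniform_mean < serial_like < 60.0  # EUD moves toward the maximum dose
```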
Constraining the Formation of Haumea using the Distribution of Haumea Family Members
NASA Astrophysics Data System (ADS)
Proudfoot, Benjamin; Ragozzine, Darin
2017-10-01
Collisions are a central component of the formation and evolution of the outer Solar System. The dwarf planet Haumea and its compact collisional family provide a unique empirical view into how collisions take place in the outer Solar System. Although there have been many publications dedicated to understanding Haumea, there have yet to be any fully self-consistent models for the formation of Haumea and its family. In particular, it is a challenge to explain why the relative velocities of family members ("Delta v") are several times smaller than would be expected. Using a much larger number of Haumea family members (see Maggard & Ragozzine, this meeting), we focus on finding the best empirical model for the three-dimensional "Delta v" distribution of Haumea family members. We consider an isotropic ejection from Haumea, a planar ejection resulting from a graze and merge type impact (e.g., Leinhardt et al. 2010), and an isotropic ejection from a satellite of Haumea (e.g., Schlichting & Sari 2009). These models create a large simulated family with tunable parameters that result in a unique distribution in a-e-i-Deltav-H space. Preliminary results indicated that the graze-and-merge impact is inconsistent with the observed distribution of family members (Ragozzine & Proudfoot, DDA 2017). We explore this more rigorously here by including tunable parameters, a Bayesian methodology, and the influence of background interlopers.
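The isotropic ejection model can be sketched by drawing velocity kicks of fixed magnitude with uniformly distributed directions; the 150 m/s magnitude below is an illustrative stand-in, not a fitted family Delta v:

```python
import math
import random

def isotropic_ejection(delta_v, rng):
    """Velocity kick of fixed magnitude delta_v in an isotropic direction
    (a sketch of one of the ejection geometries compared above)."""
    u = rng.uniform(-1.0, 1.0)             # cos(polar angle): uniform for isotropy
    phi = rng.uniform(0.0, 2.0 * math.pi)  # azimuth
    s = math.sqrt(1.0 - u * u)
    return (delta_v * s * math.cos(phi), delta_v * s * math.sin(phi), delta_v * u)

rng = random.Random(42)
kicks = [isotropic_ejection(150.0, rng) for _ in range(20000)]

# Every kick has the prescribed magnitude, and the mean vector tends to zero
# (no preferred direction), unlike a planar graze-and-merge ejection.
assert all(abs(math.hypot(*k) - 150.0) < 1e-9 for k in kicks)
mean = [sum(c) / len(kicks) for c in zip(*kicks)]
assert all(abs(m) < 5.0 for m in mean)
```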
Spatial variability of theaflavins and thearubigins fractions and their impact on black tea quality.
Bhuyan, Lakshi Prasad; Borah, Paban; Sabhapondit, Santanu; Gogoi, Ramen; Bhattacharyya, Pradip
2015-12-01
The spatial distribution of theaflavin and thearubigin fractions and their impact on black tea quality were investigated using multivariate and geostatistical techniques. Black tea samples were collected from tea gardens of six geographical regions of Assam and West Bengal, India. Total theaflavin (TF) and its four fractions in upper Assam, south bank and North Bank teas were higher than in the other regions. Simple theaflavin showed the highest significant correlation with tasters' quality scores. Low molecular weight thearubigins of the south bank and North Bank were significantly higher than in other regions. Total thearubigin (TR) and its fractions revealed significant positive correlations with tasters' organoleptic evaluations. Tea tasters' parameters were significantly and positively correlated with each other. The semivariograms for the quality parameters were best represented by Gaussian models. The nugget/sill ratio indicated a strong to moderate spatial dependence of the studied parameters. Spatial variation of tea quality parameters may be used for quality assessment in the tea-growing areas of India.
Measurement of the bottom hadron lifetime at the Z0 resonance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujino, Donald Hideo
1992-06-01
We have measured the bottom hadron lifetime from bb̄ events produced at the Z0 resonance. Using the precision vertex detectors of the Mark II detector at the Stanford Linear Collider, we developed an impact parameter tag to identify bottom hadrons. The vertex tracking system resolved impact parameters to 30 μm for high momentum tracks, and 70 μm for tracks with a momentum of 1 GeV. We selected B hadrons with an efficiency of 40% and a sample purity of 80%, by requiring there be at least two tracks in a single jet that significantly miss the Z0 decay vertex. From a total of 208 hadronic Z0 events collected by the Mark II detector in 1990, we tagged 53 jets, of which 22 came from 11 double-tagged events. The jets opposite the tagged ones, referred to as the "untagged" sample, are rich in B hadrons and unbiased in B decay times. The variable Σδ is the sum of impact parameters from tracks in the jet, and contains vital information on the B decay time. We measured the B lifetime from a one-parameter likelihood fit to the untagged Σδ distribution, obtaining τb = 1.53 +0.55/−0.45 ± 0.16 ps, which agrees with the current world average. The first error is statistical and the second is systematic. The systematic error was dominated by uncertainties in the track resolution function. As a check, we also obtained consistent results using the Σδ distribution from the tagged jets and from the entire hadronic sample without any bottom enrichment.
Impacts of a Stochastic Ice Mass-Size Relationship on Squall Line Ensemble Simulations
NASA Astrophysics Data System (ADS)
Stanford, M.; Varble, A.; Morrison, H.; Grabowski, W.; McFarquhar, G. M.; Wu, W.
2017-12-01
Cloud and precipitation structure, evolution, and cloud radiative forcing of simulated mesoscale convective systems (MCSs) are significantly impacted by ice microphysics parameterizations. Most microphysics schemes assume power law relationships with constant parameters for ice particle mass, area, and terminal fallspeed relationships as a function of size, despite observations showing that these relationships vary in both time and space. To account for such natural variability, a stochastic representation of ice microphysical parameters was developed using the Predicted Particle Properties (P3) microphysics scheme in the Weather Research and Forecasting model, guided by in situ aircraft measurements from a number of field campaigns. Here, the stochastic framework is applied to the "a" and "b" parameters of the unrimed ice mass-size (m-D) relationship (m=aDb) with co-varying "a" and "b" values constrained by observational distributions tested over a range of spatiotemporal autocorrelation scales. Diagnostically altering a-b pairs in three-dimensional (3D) simulations of the 20 May 2011 Midlatitude Continental Convective Clouds Experiment (MC3E) squall line suggests that these parameters impact many important characteristics of the simulated squall line, including reflectivity structure (particularly in the anvil region), surface rain rates, surface and top of atmosphere radiative fluxes, buoyancy and latent cooling distributions, and system propagation speed. The stochastic a-b P3 scheme is tested using two frameworks: (1) a large ensemble of two-dimensional idealized squall line simulations and (2) a smaller ensemble of 3D simulations of the 20 May 2011 squall line, for which simulations are evaluated using observed radar reflectivity and radial velocity at multiple wavelengths, surface meteorology, and surface and satellite measured longwave and shortwave radiative fluxes. 
Ensemble spreads are characterized and compared against initial condition ensemble spreads for a range of variables.
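The stochastic treatment of the mass-size power law can be sketched in a few lines. The means, variances, and correlation below are illustrative placeholders, not the observationally constrained values used in the study:

```python
import numpy as np

# Sketch of drawing co-varying (a, b) parameters for the unrimed ice
# mass-size power law m = a * D**b, then propagating them to particle mass.
# All distribution parameters here are invented for illustration.
rng = np.random.default_rng(0)

mean = np.array([0.005, 2.1])            # illustrative mean a (SI units), mean b
cov = np.array([[1e-6, 1e-4],
                [1e-4, 0.04]])           # illustrative covariance; a and b co-vary

def sample_mass(D, n=10_000):
    """Monte Carlo mass estimates (kg) for a particle of maximum dimension D (m)."""
    a, b = rng.multivariate_normal(mean, cov, size=n).T
    return a * D**b

masses = sample_mass(1e-3)               # a 1 mm particle
print(masses.mean(), masses.std())       # spread induced by parameter uncertainty
```

The spread of `masses` illustrates how uncertainty in the a-b pair alone translates into uncertainty in a derived microphysical quantity.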
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
Optimal sampling theory is evaluated in applications to studies of the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of optimal sampling was demonstrated in conjunction with the NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both single-dose and multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
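The idea behind optimal sampling can be illustrated with a hypothetical one-compartment IV-bolus model, C(t) = (Dose/V) exp(-CL/V · t): a small number of well-placed samples can recover the parameters nearly as well as a dense design. The parameter values and sampling times below are invented for illustration; this is not the ADAPT II SAMPLE module or the actual study design:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical one-compartment pharmacokinetic model with 5% multiplicative noise.
def conc(t, CL, V, dose=1000.0):
    return dose / V * np.exp(-CL / V * t)

rng = np.random.default_rng(1)
true_CL, true_V = 5.0, 20.0          # L/h and L, illustrative values

def fit(times):
    y = conc(times, true_CL, true_V) * (1 + 0.05 * rng.normal(size=times.size))
    (CL, V), _ = curve_fit(conc, times, y, p0=[1.0, 10.0])
    return CL, V

sparse_CL, sparse_V = fit(np.array([0.25, 1.0, 4.0, 12.0]))  # four well-placed samples
dense_CL, dense_V = fit(np.linspace(0.25, 12.0, 10))          # ten-sample design
print(sparse_CL, sparse_V, dense_CL, dense_V)
```

Both designs recover CL and V to within a few percent here, which is the qualitative point the study makes with formal optimal design theory.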
Study of metal transfer in CO2 laser+GMAW-P hybrid welding using argon-helium mixtures
NASA Astrophysics Data System (ADS)
Zhang, Wang; Hua, Xueming; Liao, Wei; Li, Fang; Wang, Min
2014-03-01
The metal transfer in CO2 laser+GMAW-P hybrid welding using argon-helium mixtures was investigated, and the effect of the laser on the metal transfer is discussed. A 650 nm laser, in conjunction with the shadowgraph technique, was used to observe the metal transfer process. To analyze the heat input to the droplet and the distribution of current lines within the droplet, an optical emission spectroscopy system was employed to estimate the plasma temperature and the distribution of electron number densities. The results indicate that the CO2 plasma plume has a significant impact on electrode melting, droplet formation, detachment, impingement onto the workpiece, and weld morphology. Since the direction of the current flow changes toward the keyhole, to obtain a metal transfer mode of one droplet per pulse, the welding parameters should be adjusted to a longer pulse time (TP) and a lower voltage.
Cilliers, Cornelius; Guo, Hans; Liao, Jianshan; Christodolu, Nikolas; Thurber, Greg M
2016-09-01
Antibody-drug conjugates exhibit complex pharmacokinetics due to their combination of macromolecular and small molecule properties. These issues range from systemic concerns, such as deconjugation of the small molecule drug during the long antibody circulation time or rapid clearance from nonspecific interactions, to local tumor tissue heterogeneity, cell bystander effects, and endosomal escape. Mathematical models can be used to study the impact of these processes on overall distribution in an efficient manner, and several types of models have been used to analyze varying aspects of antibody distribution including physiologically based pharmacokinetic (PBPK) models and tissue-level simulations. However, these processes are quantitative in nature and cannot be handled qualitatively in isolation. For example, free antibody from deconjugation of the small molecule will impact the distribution of conjugated antibodies within the tumor. To incorporate these effects into a unified framework, we have coupled the systemic and organ-level distribution of a PBPK model with the tissue-level detail of a distributed parameter tumor model. We used this mathematical model to analyze new experimental results on the distribution of the clinical antibody-drug conjugate Kadcyla in HER2-positive mouse xenografts. This model is able to capture the impact of the drug-antibody ratio (DAR) on tumor penetration, the net result of drug deconjugation, and the effect of using unconjugated antibody to drive ADC penetration deeper into the tumor tissue. This modeling approach will provide quantitative and mechanistic support to experimental studies trying to parse the impact of multiple mechanisms of action for these complex drugs.
NASA Astrophysics Data System (ADS)
Schenke, Björn; Tribedy, Prithwish; Venugopalan, Raju
2012-09-01
The event-by-event multiplicity distribution, the energy densities, and the energy density weighted eccentricity moments ε_n (up to n = 6) at early times in heavy-ion collisions at both the BNL Relativistic Heavy Ion Collider (RHIC) (√s_NN = 200 GeV) and the CERN Large Hadron Collider (LHC) (√s_NN = 2.76 TeV) are computed in the IP-Glasma model. This framework combines the impact parameter dependent saturation model (IP-Sat) for nucleon parton distributions (constrained by HERA deeply inelastic scattering data) with an event-by-event classical Yang-Mills description of early-time gluon fields in heavy-ion collisions. The model produces multiplicity distributions that are convolutions of negative binomial distributions without further assumptions or parameters. In the limit of large dense systems, the n-particle gluon distribution predicted by the Glasma-flux tube model is demonstrated to be nonperturbatively robust. In the general case, the effect of additional geometrical fluctuations is quantified. The eccentricity moments are compared to the MC-KLN model; a noteworthy feature is that fluctuation dominated odd moments are consistently larger than in the MC-KLN model.
NASA Astrophysics Data System (ADS)
Jacquin, A. P.
2012-04-01
This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product of the observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed.
In this study, first order and total effects of the group of precipitation factors FP1-FP4, and of the precipitation factor FP5, are calculated separately. First order and total effects of the group FP1-FP4 are much higher than those of the factor FP5, which are negligible. This is because the actual value taken by FP5 has little influence on the contribution of the glacier zone to the catchment's output discharge, which is mainly limited by incident solar radiation. In addition, first order effects indicate that, on average, nearly 25% of predictive uncertainty could be reduced if the true values of the precipitation factors FPi were known, even with no information available on the appropriate values for the remaining model parameters. Finally, the total effects of the precipitation factors FP1-FP4 are close to 41% on average, implying that even if the appropriate values for the remaining model parameters could be fixed, predictive uncertainty would still be quite high if the spatial distribution of precipitation remained unknown. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
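Sobol first-order and total effects of the kind computed above can be estimated with a pick-freeze (Saltelli-type) Monte Carlo scheme. The toy additive model below stands in for the snowmelt runoff model; its analytic indices are S1 = [0.8, 0.2]:

```python
import numpy as np

# Pick-freeze estimators of first-order (S1) and total (ST) Sobol indices.
# For the additive toy model y = x1 + 0.5*x2 with x ~ U(0,1), the exact
# indices are S1 = ST = [0.8, 0.2].
rng = np.random.default_rng(2)

def model(x):
    return x[:, 0] + 0.5 * x[:, 1]

def sobol_indices(f, d, n=200_000):
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # freeze all inputs except x_i
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var       # Saltelli first-order estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var # Jansen total-effect estimator
    return S1, ST

S1, ST = sobol_indices(model, 2)
print(S1, ST)
```

For this additive model the total effects equal the first-order effects; a gap between ST and S1, as reported for the precipitation factors, indicates interactions between parameters.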
Impact of viscosity variation and micro rotation on oblique transport of Cu-water fluid.
Tabassum, Rabil; Mehmood, R; Nadeem, S
2017-09-01
This study examines the influence of temperature-dependent viscosity on the oblique flow of a micropolar nanofluid. The fluid viscosity is modeled as an exponential function of temperature. The governing equations are converted into dimensionless form with the aid of suitable transformations. The outcomes of the study are shown in graphical form and discussed in detail. The results reveal that the viscosity parameter has pronounced effects on the velocity profiles, temperature distribution, micro-rotation, streamlines, shear stress, and heat flux. It is found that the viscosity parameter enhances the temperature distribution, the tangential velocity profile, the normal component of micro-rotation, and the shear stress at the wall, while it has a decreasing effect on the tangential component of micro-rotation and the local heat flux. Copyright © 2017 Elsevier Inc. All rights reserved.
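An exponential viscosity-temperature law of the kind used here is often written in Reynolds form, mu(theta) = mu0 · exp(-beta·theta), with theta the dimensionless temperature. The sketch below uses invented values of mu0 and the viscosity parameter beta, not those of the paper:

```python
import numpy as np

# Reynolds exponential viscosity model: viscosity decays as temperature
# rises. beta plays the role of the "viscosity parameter" discussed above.
def viscosity(theta, mu0=1.0, beta=0.5):
    return mu0 * np.exp(-beta * theta)

theta = np.linspace(0.0, 1.0, 5)   # free stream (theta=0) to heated wall (theta=1)
mu = viscosity(theta)
print(mu)                          # monotonically decreasing with temperature
```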
NASA Astrophysics Data System (ADS)
Eisner, Stephanie; Huang, Shaochun; Majasalmi, Titta; Bright, Ryan; Astrup, Rasmus; Beldring, Stein
2017-04-01
Forests are recognized for their decisive effect on landscape water balance with structural forest characteristics as stand density or species composition determining energy partitioning and dominant flow paths. However, spatial and temporal variability in forest structure is often poorly represented in hydrological modeling frameworks, in particular in regional to large scale hydrological modeling and impact analysis. As a common practice, prescribed land cover classes (including different generic forest types) are linked to parameter values derived from literature, or parameters are determined by calibration. While national forest inventory (NFI) data provide comprehensive, detailed information on hydrologically relevant forest characteristics, their potential to inform hydrological simulation over larger spatial domains is rarely exploited. In this study we present a modeling framework that couples the distributed hydrological model HBV with forest structural information derived from the Norwegian NFI and multi-source remote sensing data. The modeling framework, set up for the entire of continental Norway at 1 km spatial resolution, is explicitly designed to study the combined and isolated impacts of climate change, forest management and land use change on hydrological fluxes. We use a forest classification system based on forest structure rather than biomes which allows to implicitly account for impacts of forest management on forest structural attributes. In the hydrological model, different forest classes are represented by three parameters: leaf area index (LAI), mean tree height and surface albedo. Seasonal cycles of LAI and surface albedo are dynamically simulated to make the framework applicable under climate change conditions. Based on a hindcast for the pilot regions Nord-Trøndelag and Sør-Trøndelag, we show how forest management has affected regional hydrological fluxes during the second half of the 20th century as contrasted to climate variability.
Flow distribution in parallel microfluidic networks and its effect on concentration gradient
Guermonprez, Cyprien; Michelin, Sébastien; Baroud, Charles N.
2015-01-01
The architecture of a microfluidic network can significantly impact the flow distribution within its branches and thereby influence tracer transport within the network. In this paper, we study the flow rate distribution within a network of parallel microfluidic channels with a single input and single output, using a combination of theoretical modeling and microfluidic experiments. Within the ladder network, the flow rate distribution follows a U-shaped profile, with the highest flow rate occurring in the initial and final branches. The contrast with the central branches is controlled by a single dimensionless parameter, namely, the ratio of hydrodynamic resistance between the distribution channel and the side branches. This contrast in flow rates decreases when the resistance of the side branches increases relative to the resistance of the distribution channel. When the inlet flow is composed of two parallel streams, one of which transports a diffusing species, a concentration variation is produced within the side branches of the network. The shape of this concentration gradient is fully determined by two dimensionless parameters: the ratio of resistances, which determines the flow rate distribution, and the Péclet number, which characterizes the relative speed of diffusion and advection. Depending on the values of these two control parameters, different distribution profiles can be obtained ranging from a flat profile to a step distribution of solute, with well-distributed gradients between these two limits. Our experimental results are in agreement with our numerical model predictions, based on a simplified 2D advection-diffusion problem.
Finally, two possible applications of this work are presented: the first one combines the present design with self-digitization principle to encapsulate the controlled concentration in nanoliter chambers, while the second one extends the present design to create a continuous concentration gradient within an open flow chamber. PMID:26487905
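The U-shaped flow profile follows directly from a hydraulic-circuit view of the ladder network: side branches of resistance Rs connected by distribution and collection channel segments of resistance Rd, with pressures solving a Kirchhoff (Laplacian) system. The geometry and resistance values below are illustrative, not those of the experiments:

```python
import numpy as np

# Hydraulic-circuit sketch of a ladder network: flow enters the first top
# node and leaves the last bottom node; branch flows follow from the nodal
# pressure solution of the conductance (Laplacian) matrix.
def branch_flows(N=10, Rd=1.0, Rs=5.0, Q=1.0):
    n = 2 * N                                  # nodes: top 0..N-1, bottom N..2N-1
    G = np.zeros((n, n))
    def link(i, j, R):
        g = 1.0 / R
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g
    for i in range(N - 1):
        link(i, i + 1, Rd)                     # distribution channel segments
        link(N + i, N + i + 1, Rd)             # collection channel segments
    for i in range(N):
        link(i, N + i, Rs)                     # side branches
    rhs = np.zeros(n)
    rhs[0], rhs[2 * N - 1] = Q, -Q             # inlet and outlet flows
    p = np.linalg.lstsq(G, rhs, rcond=None)[0] # pressures (defined up to a constant)
    return (p[:N] - p[N:]) / Rs                # flow through each side branch

q = branch_flows()
print(q)   # highest in the first and last branches, lowest in the middle
```

Increasing Rs relative to Rd flattens the profile, which is the single-parameter control described in the abstract.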
Hijnen, W A M; Schurer, R; Bahlman, J A; Ketelaars, H A M; Italiaander, R; van der Wal, A; van der Wielen, P W J J
2018-02-01
It is possible to distribute drinking water without a disinfectant residual when the treated water is biologically stable. The objective of this study was to determine the impact of easily and slowly biodegradable compounds on the biostability of the drinking water at three full-scale production plants which use the same surface water, and on the regrowth conditions in the related distribution systems. Easily biodegradable compounds in the drinking water were determined with AOC-P17/Nox during 2012-2015. Slowly biodegradable organic compounds, measured as particulate and/or high-molecular organic carbon (PHMOC), were monitored at the inlet and after the different treatment stages of the three treatments during the same period. The results show that PHMOC (300-470 μg C L^-1) was approximately 10% of the TOC in the surface water and was reduced by treatment to 50-100 μg C L^-1. The PHMOC in the water consisted of 40-60% carbohydrates and 10% proteins. A significant and strong positive correlation was observed between PHMOC concentrations and two recently introduced bioassay methods for slowly biodegradable compounds (AOC-A3 and biomass production potential, BPC 14). Moreover, these three parameters in the effluent of the biologically active carbon filters (BACF) of the three plants showed a positive correlation with regrowth in the drinking water distribution system, which was assessed with Aeromonas, heterotrophic plate counts, coliforms and large invertebrates. In contrast, the AOC-P17/Nox concentrations did not correlate with these regrowth parameters. We therefore conclude that slowly biodegradable compounds in the treated water from these treatment plants seem to have a greater impact on regrowth in the distribution system than easily biodegradable compounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Jing; Edwards, Aurélie; Layton, Anita T
2009-08-01
We extended the region-based mathematical model of the urine-concentrating mechanism in the rat outer medulla (OM) developed by Layton and Layton (Am J Physiol Renal Physiol 289: F1346-F1366, 2005) to examine the impact of the complex structural organization of the OM on O2 transport and distribution. In the present study, we investigated the sensitivity of predicted PO2 profiles to several parameters that characterize the degree of OM regionalization, boundary conditions, structural dimensions, transmural transport properties, and relative positions and distributions of tubules and vessels. Our results suggest that the fraction of O2 supplied to descending vasa recta (DVR) that reaches the inner medulla, i.e., a measure of the axial PO2 gradient in the OM, is insensitive to parameter variations as a result of the sequestration of long DVR in the vascular bundles. In contrast, O2 distribution among the regions surrounding the vascular core strongly depends on the radial positions of medullary thick ascending limbs (mTALs) relative to the vascular core, the degree of regionalization, and the distribution of short DVR along the corticomedullary axis. Moreover, if it is assumed that the mTAL active Na+ transport rate decreases when mTAL PO2 falls below a critical level, O2 availability to mTALs has a significant impact on the concentrating capability of the model OM. The model also predicts that when the OM undergoes hypertrophy, its concentrating capability increases significantly only when anaerobic metabolism supports a substantial fraction of the mTAL active Na+ transport and is otherwise critically reduced by low interstitial and mTAL luminal PO2 in a hypertrophied OM.
Geometric Modeling of Inclusions as Ellipsoids
NASA Technical Reports Server (NTRS)
Bonacuse, Peter J.
2008-01-01
Nonmetallic inclusions in gas turbine disk alloys can have a significant detrimental impact on fatigue life. Because large inclusions that lead to anomalously low lives occur infrequently, probabilistic approaches can be utilized to avoid the excessively conservative assumption of lifing to a large inclusion in a high stress location. A prerequisite to modeling the impact of inclusions on the fatigue life distribution is a characterization of the inclusion occurrence rate and size distribution. To help facilitate this process, a geometric simulation of the inclusions was devised. To make the simulation problem tractable, the irregularly sized and shaped inclusions were modeled as arbitrarily oriented ellipsoids with three independently dimensioned axes. Random orientation of the ellipsoid is accomplished through a series of three orthogonal rotations of axes. In this report, a set of mathematical models for the following parameters are described: the intercepted area of a randomly sectioned ellipsoid, the dimensions and orientation of the intercepted ellipse, the area of a randomly oriented sectioned ellipse, the depth and width of a randomly oriented sectioned ellipse, and the projected area of a randomly oriented ellipsoid. These parameters are necessary to determine an inclusion's potential to develop a propagating fatigue crack. Without these mathematical models, computationally expensive search algorithms would be required to compute these parameters.
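The central case of a randomly sectioned ellipsoid has a compact closed form: for the ellipsoid {x : x^T Q x <= 1} with Q built from the three semi-axes and three orthogonal rotations, its section by the plane z = 0 through the center is the ellipse given by the top-left 2x2 block of Q, with area pi / sqrt(det Q2). The sketch below covers only this central section, not the off-center sections treated in the report:

```python
import numpy as np

# Arbitrarily oriented ellipsoid from three orthogonal axis rotations,
# sectioned by the central plane z = 0.
def rotation(alpha, beta, gamma):
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def central_section_area(axes, angles):
    R = rotation(*angles)
    Q = R @ np.diag(1.0 / np.asarray(axes, dtype=float) ** 2) @ R.T
    Q2 = Q[:2, :2]                       # quadratic form of the section ellipse
    return np.pi / np.sqrt(np.linalg.det(Q2))

# Sanity checks: any central section of a unit sphere has area pi, and the
# unrotated (2, 3, 5) ellipsoid cut at z = 0 gives an ellipse of area 6*pi.
print(central_section_area((1, 1, 1), (0.3, 1.1, 2.0)))
print(central_section_area((2, 3, 5), (0.0, 0.0, 0.0)))
```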
General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.
de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael
2016-11-01
Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of these quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
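The core computation is expressing a latent-scale GLMM quantity on the observed data scale by integrating the inverse link over the latent normal distribution. The paper's implementation is the R package QGglmm; the sketch below is an independent numerical illustration for a Poisson GLMM with log link, where the observed-scale mean has the known closed form exp(mu + sigma^2/2):

```python
import numpy as np

# Observed-scale mean of a Poisson-log GLMM via Gauss-Hermite quadrature
# over the latent normal distribution l ~ N(mu, sigma2).
def observed_mean_poisson_log(mu, sigma2, n_quad=50):
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)  # probabilists' Hermite nodes
    latent = mu + np.sqrt(sigma2) * z
    return np.sum(w * np.exp(latent)) / np.sum(w)      # E[exp(l)]

mu, sigma2 = 0.5, 0.8
numeric = observed_mean_poisson_log(mu, sigma2)
closed_form = np.exp(mu + sigma2 / 2)   # lognormal mean, exact for the log link
print(numeric, closed_form)
```

For other link functions no closed form exists, which is why the numerical integration is needed in general.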
Examining System-Wide Impacts of Solar PV Control Systems with a Power Hardware-in-the-Loop Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Fuller, Jason C.; Schneider, Kevin P.
2014-06-08
High penetration levels of distributed solar PV power generation can lead to adverse power quality impacts, such as excessive voltage rise, voltage flicker, and reactive power values that result in unacceptable voltage levels. Advanced inverter control schemes have been developed that have the potential to mitigate many power quality concerns. However, local closed-loop control may lead to unintended behavior in deployed systems as complex interactions can occur between numerous operating devices. To enable the study of the performance of advanced control schemes in a detailed distribution system environment, a test platform has been developed that integrates Power Hardware-in-the-Loop (PHIL) with concurrent time-series electric distribution system simulation. In the test platform, GridLAB-D, a distribution system simulation tool, runs a detailed simulation of a distribution feeder in real-time mode at the Pacific Northwest National Laboratory (PNNL) and supplies power system parameters at a point of common coupling. At the National Renewable Energy Laboratory (NREL), a hardware inverter interacts with grid and PV simulators emulating an operational distribution system. Power output from the inverters is measured and sent to PNNL to update the real-time distribution system simulation. The platform is described and initial test cases are presented. The platform is used to study the system-wide impacts and the interactions of inverter control modes (constant power factor and active Volt/VAr control) when integrated into a simulated IEEE 8500-node test feeder. We demonstrate that this platform is well-suited to the study of advanced inverter controls and their impacts on the power quality of a distribution feeder. Additionally, results are used to validate GridLAB-D simulations of advanced inverter controls.
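A Volt/VAr control mode of the kind studied on the platform is typically a piecewise-linear droop curve: the inverter injects reactive power when local voltage is low and absorbs it when voltage is high, with a deadband around nominal. The breakpoints below are illustrative per-unit values, not the settings used in the study:

```python
import numpy as np

# Illustrative piecewise-linear Volt/VAr droop curve. Positive q means
# injecting reactive power; the output saturates at +/- q_max outside
# the curve's end points.
def volt_var(v, q_max=0.44, v_lo=0.95, v_db_lo=0.98, v_db_hi=1.02, v_hi=1.05):
    return np.interp(v, [v_lo, v_db_lo, v_db_hi, v_hi],
                        [q_max, 0.0, 0.0, -q_max])

v = np.array([0.94, 0.97, 1.00, 1.03, 1.06])
print(volt_var(v))   # injecting at low voltage, zero in deadband, absorbing when high
```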
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before using them for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm that is inspired by the social behavior of bird flocking or fish schooling. The newly-developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
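A minimal inertia-weight "global best" PSO of the kind used for the automatic calibration can be sketched in a few lines. The objective below is a simple stand-in for the SWAT calibration error, and the coefficients are common textbook values, not those of the study:

```python
import numpy as np

# Minimal global-best particle swarm optimizer with inertia weight w and
# cognitive/social coefficients c1, c2.
def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = lo.size
    x = rng.uniform(lo, hi, (n_particles, d))            # particle positions
    v = np.zeros((n_particles, d))                       # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                 # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                       # respect parameter bounds
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in objective: squared error with optimum at (2, -1).
best, err = pso(lambda p: float(np.sum((p - np.array([2.0, -1.0])) ** 2)),
                bounds=[(-5, 5), (-5, 5)])
print(best, err)
```

In the actual calibration, the objective would run the hydrologic model and score simulated streamflow and sediment against observations.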
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3% and absolute bias decreasing with the shape parameter. These results were consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase.
Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in screening immunogenicity assays will fail to meet the minimum 5% false positive target proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
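The gamma-fitting cut-point method can be sketched directly with SciPy: fit a 3-parameter gamma (shape, location, scale) to drug-naïve screening responses and take its 95th percentile as the cut point targeting a 5% false positive rate. The simulated data below stand in for real assay responses:

```python
import numpy as np
from scipy import stats

# Simulated drug-naive screening responses from a positively skewed,
# unimodal distribution (true parameters are arbitrary illustrations).
rng = np.random.default_rng(4)
naive = stats.gamma.rvs(a=3.0, loc=0.2, scale=0.05, size=500, random_state=rng)

# 3-parameter gamma fit (shape, location, scale) and the 5%-FPR cut point.
shape, loc, scale = stats.gamma.fit(naive)
cut_point = stats.gamma.ppf(0.95, shape, loc=loc, scale=scale)

fpr = np.mean(naive > cut_point)
print(cut_point, fpr)   # empirical false positive rate should be near 5%
```

A normal-theory cut point (mean + 1.645 SD) applied to the same skewed data would tend to overshoot the 5% target, which is the bias the paper quantifies.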
NASA Astrophysics Data System (ADS)
Mantry, Sonny; Petriello, Frank
2010-05-01
We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h ≫ p_T ≫ Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.
Single quantum dot tracking reveals the impact of nanoparticle surface on intracellular state.
Zahid, Mohammad U; Ma, Liang; Lim, Sung Jun; Smith, Andrew M
2018-05-08
Inefficient delivery of macromolecules and nanoparticles to intracellular targets is a major bottleneck in drug delivery, genetic engineering, and molecular imaging. Here we apply live-cell single-quantum-dot imaging and tracking to analyze and classify nanoparticle states after intracellular delivery. By merging trajectory diffusion parameters with brightness measurements, multidimensional analysis reveals distinct and heterogeneous populations that are indistinguishable using single parameters alone. We derive new quantitative metrics of particle loading, cluster distribution, and vesicular release in single cells, and evaluate intracellular nanoparticles with diverse surfaces following osmotic delivery. Surface properties have a major impact on cell uptake, but little impact on the absolute cytoplasmic numbers. A key outcome is that stable zwitterionic surfaces yield uniform cytosolic behavior, ideal for imaging agents. We anticipate that this combination of quantum dots and single-particle tracking can be widely applied to design and optimize next-generation imaging probes, nanoparticle therapeutics, and biologics.
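One of the trajectory-level diffusion parameters used to classify particle states is the diffusion coefficient extracted from the mean squared displacement (MSD) of a track, MSD = 4Dt in 2D. The simulated Brownian track below stands in for a quantum-dot trajectory; D, the time step, and track length are invented for illustration:

```python
import numpy as np

# Simulate a 2D Brownian track and recover D from the short-lag MSD slope.
rng = np.random.default_rng(5)
D_true, dt, n = 0.05, 0.1, 2000          # um^2/s, s, number of steps
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n, 2))
track = np.cumsum(steps, axis=0)

def msd(track, max_lag):
    lags = np.arange(1, max_lag + 1)
    values = np.array([np.mean(np.sum((track[k:] - track[:-k]) ** 2, axis=1))
                       for k in lags])
    return lags, values

lags, m = msd(track, 4)
D_est = np.polyfit(lags * dt, m, 1)[0] / 4.0   # MSD slope / 4 for 2D diffusion
print(D_est)                                    # close to D_true
```

In the multidimensional analysis described above, metrics like D_est would be combined with brightness measurements to separate otherwise indistinguishable populations.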
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process of predicting the values of maintenance time-dependent variable parameters, such as mean time between failures (MTBF), over time must be one that does not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle costs, spares calculations, etc. A minor deviation in the values of maintenance time-dependent variable parameters such as MTBF over time has a significant impact on logistics resource demands, International Space Station availability, and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man-hour, are relatively easy to predict and forecast; they normally follow a linear path and do not change randomly. However, the variable parameters studied in this report, such as MTBF, do not follow a linear path and normally fall within the distribution curves discussed in this publication. The challenging task is then to use statistical techniques to accurately forecast future non-linear time-dependent variable arisings and events with a high confidence level. This, in turn, translates into tremendous cost savings and improved availability all around.
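As a hedged sketch of the point above, that variable parameters like MTBF follow distribution curves rather than a linear path, the following assumes (hypothetically) Weibull-distributed times between failures and compares the analytic mean with an empirical estimate; the shape and scale values are illustrative, not the report's:

```python
import math
import random

def weibull_mean(shape, scale):
    """Mean of a Weibull distribution: scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# Hypothetical failure model: times between failures follow a Weibull
# distribution rather than a fixed, linearly trending value.
random.seed(42)
shape, scale = 1.5, 1000.0  # illustrative parameters, in hours
samples = [random.weibullvariate(scale, shape) for _ in range(50000)]

mtbf_analytic = weibull_mean(shape, scale)        # ~902.7 h
mtbf_empirical = sum(samples) / len(samples)      # sample-mean forecast
```

A forecast built on the fitted distribution, rather than a linear extrapolation, is what keeps the downstream spares and cost calculations stable.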
Sensitivity of Asteroid Impact Risk to Uncertainty in Asteroid Properties and Entry Parameters
NASA Astrophysics Data System (ADS)
Wheeler, Lorien; Mathias, Donovan; Dotson, Jessie L.; NASA Asteroid Threat Assessment Project
2017-10-01
A central challenge in assessing the threat posed by asteroids striking Earth is the large amount of uncertainty inherent throughout all aspects of the problem. Many asteroid properties are not well characterized and can range widely from strong, dense, monolithic irons to loosely bound, highly porous rubble piles. Even for an object of known properties, the specific entry velocity, angle, and impact location can swing the potential consequence from no damage to causing millions of casualties. Due to the extreme rarity of large asteroid strikes, there are also large uncertainties in how different types of asteroids will interact with the atmosphere during entry, how readily they may break up or ablate, and how much surface damage will be caused by the resulting airbursts or impacts. In this work, we use our Probabilistic Asteroid Impact Risk (PAIR) model to investigate the sensitivity of asteroid impact damage to uncertainties in key asteroid properties, entry parameters, or modeling assumptions. The PAIR model combines physics-based analytic models of asteroid entry and damage in a probabilistic Monte Carlo framework to assess the risk posed by a wide range of potential impacts. The model samples from uncertainty distributions of asteroid properties and entry parameters to generate millions of specific impact cases, and models the atmospheric entry and damage for each case, including blast overpressure, thermal radiation, tsunami inundation, and global effects. To assess the risk sensitivity, we alternately fix and vary the different input parameters and compare the effect on the resulting range of damage produced. The goal of these studies is to help guide future efforts in asteroid characterization and model refinement by determining which properties most significantly affect the potential risk.
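The fix-and-vary sensitivity procedure can be sketched as follows, using a toy kinetic-energy proxy rather than the PAIR model's actual entry and damage physics; the distributions and nominal values are illustrative assumptions:

```python
import math
import random
import statistics

def impact_energy_mt(diameter_m, density, velocity_ms):
    """Kinetic energy of a spherical impactor, in megatons of TNT (toy proxy)."""
    mass = density * (math.pi / 6.0) * diameter_m ** 3
    joules = 0.5 * mass * velocity_ms ** 2
    return joules / 4.184e15  # joules per megaton of TNT

def sample_cases(n, fixed=None, seed=0):
    """Draw impact cases, optionally fixing one input at its nominal value."""
    rng = random.Random(seed)
    fixed = fixed or {}
    results = []
    for _ in range(n):
        d = fixed.get("diameter", rng.lognormvariate(math.log(100.0), 0.4))
        rho = fixed.get("density", rng.uniform(1500.0, 5000.0))
        v = fixed.get("velocity", rng.gauss(20000.0, 4000.0))
        results.append(impact_energy_mt(d, rho, v))
    return results

all_vary = sample_cases(20000)
fix_diameter = sample_cases(20000, fixed={"diameter": 100.0})
# Fixing the most sensitive input (diameter enters cubed) should
# shrink the spread of outcomes the most.
spread_all = statistics.stdev(all_vary)
spread_fixed = statistics.stdev(fix_diameter)
```

Comparing the output spread with each input alternately fixed ranks the inputs by how much of the risk uncertainty they drive.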
Number distribution of emitted electrons by MeV H+ impact on carbon
NASA Astrophysics Data System (ADS)
Ogawa, H.; Koyanagi, Y.; Hongo, N.; Ishii, K.; Kaneko, T.
2017-09-01
The statistical distributions of the number of forward- and backward-emitted secondary electrons (SEs) from a thin carbon foil have been measured in coincidence with foil-transmitted H+ ions of 0.5-3.0 MeV, in 0.5 MeV steps. The measured SE energy spectra were fitted by assuming a Pólya distribution for the simultaneous n-SE emission probabilities. A similar analysis was carried out for our previous data on two carbon foils of different thicknesses. As a result, the measured spectra could be reproduced as well as by an analysis that places no restriction on the emission probabilities, for both forward and backward SE emission. The obtained b parameter of the Pólya distribution, a measure of the deviation from a Poisson distribution due to cascade multiplication by high-energy internal SEs, increases monotonically with the incident proton energy. On the other hand, no clear foil-thickness dependence is observed for the b parameter. A theoretical model that reproduced the magnitude of the b parameter for SE energy spectra obtained with thick Au, Cu, and Al targets is found to significantly overestimate our values for thin carbon foils. Another model calculation is found to reproduce our b values very well.
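The Pólya distribution used to fit the n-SE emission probabilities is a negative binomial parameterized by the mean μ and the b parameter, and reduces to a Poisson distribution as b → 0. A minimal sketch of this parameterization (the values of μ and b are illustrative, not fitted to the data above):

```python
import math

def polya_pmf(n, mu, b):
    """Polya (negative binomial) probability of emitting n secondary
    electrons, parameterized by mean mu and shape parameter b.
    b -> 0 recovers the Poisson limit; larger b means larger
    deviation from Poisson statistics."""
    r = 1.0 / b                      # negative binomial "size"
    p = b * mu / (1.0 + b * mu)      # per-trial success probability
    log_pmf = (math.lgamma(n + r) - math.lgamma(r) - math.lgamma(n + 1)
               + n * math.log(p) + r * math.log(1.0 - p))
    return math.exp(log_pmf)

mu, b = 3.0, 0.4
probs = [polya_pmf(n, mu, b) for n in range(200)]
total = sum(probs)                                  # should be ~1
mean = sum(n * p for n, p in zip(range(200), probs))  # should be ~mu
```

The variance of this distribution is μ(1 + bμ), so the fitted b directly quantifies the excess spread over the Poisson case.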
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to more effectively eliminate the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum-variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability but small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because all results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
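The two quantities at the center of the comparison, the NSE and the Box-Cox transform, can be sketched as follows (toy data; not the SWAT-WB-VSA implementation):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    Maximizing NSE is equivalent to maximizing a Gaussian i.i.d.
    likelihood, which is the equivalence proved in step (1)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def box_cox(q, lam):
    """Box-Cox transform, used to reduce heteroscedasticity of residuals;
    lam = 0 degenerates to the log transform."""
    if abs(lam) < 1e-12:
        return math.log(q)
    return (q ** lam - 1.0) / lam

# Toy discharge series (arbitrary units)
obs = [1.0, 2.0, 4.0, 8.0, 5.0, 3.0]
sim = [1.2, 1.8, 4.5, 7.5, 5.5, 2.8]
score = nse(obs, sim)
```

Applying the Box-Cox transform to discharges before computing residuals is what lets the likelihood weight low-flow (baseflow) errors comparably to flood-peak errors.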
USDA-ARS?s Scientific Manuscript database
Thinopyrum intermedium, commonly known as intermediate wheatgrass (IWG) is a perennial crop shown to have both environmental and nutritional benefits. We have previously shown that in comparison to wheat controls, IWG lines had higher protein and dietary fiber contents. However, a deficiency in hi...
Projectile fragmentation of 40,48Ca and isotopic scaling in a transport approach
NASA Astrophysics Data System (ADS)
Mikhailova, T. I.; Erdemchimeg, B.; Artukh, A. G.; Di Toro, M.; Wolter, H. H.
2016-07-01
We investigate theoretically projectile fragmentation in reactions of 40,48Ca on 9Be and 181Ta targets using a Boltzmann-type transport approach, which is supplemented by a statistical decay code to describe the de-excitation of the hot primary fragments. We determine the thermodynamical properties of the primary fragments and calculate the isotope distributions of the cold final fragments. These describe the data reasonably well. For the pairs of projectiles with different isotopic content we analyze the isotopic scaling (or isoscaling) of the final fragment distributions, which has been used to extract the symmetry energy of the primary source. The calculation exhibits isoscaling behavior for the total yields as do the experiments. We also perform an impact-parameter-dependent isoscaling analysis in view of the fact that the primary systems at different impact parameters have very different properties. Then the isoscaling behavior is less stringent, which we can attribute to specific structure effects of the 40,48Ca pair. The symmetry energy determined in this way depends on these structure effects.
Beekman, Alice; Shan, Daxian; Ali, Alana; Dai, Weiguo; Ward-Smith, Stephen; Goldenberg, Merrill
2005-04-01
This study evaluated the effect of the imaginary component of the refractive index on laser diffraction particle size data for pharmaceutical samples. Excipient particles 1-5 microm in diameter (irregular morphology) were measured by laser diffraction. Optical parameters were obtained and verified based on comparison of calculated vs. actual particle volume fraction. Inappropriate imaginary components of the refractive index can lead to inaccurate results, including false peaks in the size distribution. For laser diffraction measurements, obtaining appropriate or "effective" imaginary components of the refractive index was not always straightforward. When the recommended criteria such as the concentration match and the fit of the scattering data gave similar results for very different calculated size distributions, a supplemental technique, microscopy with image analysis, was used to decide between the alternatives. Use of effective optical parameters produced a good match between laser diffraction data and microscopy/image analysis data. The imaginary component of the refractive index can have a major impact on particle size results calculated from laser diffraction data. When performed properly, laser diffraction and microscopy with image analysis can yield comparable results.
Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network
NASA Astrophysics Data System (ADS)
Xu, Xiao-Feng
2018-03-01
Wireless communication networks have limited bandwidth, changeable channels, and dynamic topology, so ad hoc networks face difficulties in access control, bandwidth distribution, resource assignment, and congestion control. Therefore, a wireless packet-distributing evolutionary algorithm based on PSO (DPSO) for ad hoc networks is proposed. First, the parameters impacting network performance are analyzed to obtain a network performance objective function. Second, an improved PSO evolutionary algorithm is used to solve the packet-distributing optimization problem from local to global. Simulation results show that the algorithm ensures fairness and timeliness of network transmission and improves the integrated utilization efficiency of ad hoc network resources.
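A minimal PSO sketch of the local-to-global search idea, minimizing a toy stand-in for the network performance objective function; the swarm parameters below are generic illustrative choices, not those of the DPSO algorithm:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle tracks its
    personal best (local information) while the swarm shares a global
    best, driving the local-to-global search."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "network cost" objective standing in for the real performance function
sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso_minimize(sphere, dim=3)
```

In the paper's setting, `f` would be the derived network performance function and a particle would encode a packet-distribution decision.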
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
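The Monte Carlo failure-probability procedure can be sketched as follows; the impact-velocity distribution and strength limits below are hypothetical placeholders, not the study's values:

```python
import random

def failure_probability(n_trials, strength_limit_ms, seed=0):
    """Estimate component failure probability by random sampling of the
    water-impact velocity distribution and comparing each draw to the
    component's strength capability (expressed as a velocity limit)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Hypothetical impact-velocity model: nominal 25 m/s descent with
        # dispersion from winds, parachute performance, and sea state.
        v_impact = rng.gauss(25.0, 3.0)
        if v_impact > strength_limit_ms:
            failures += 1
    return failures / n_trials

p_weak = failure_probability(100000, strength_limit_ms=26.0)
p_strong = failure_probability(100000, strength_limit_ms=32.0)
```

Repeating this for each booster component, as a function of its strength capability, yields the per-component failure-probability curves described above.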
Statistical regularities in the rank-citation profile of scientists.
Petersen, Alexander M; Stanley, H Eugene; Succi, Sauro
2011-01-01
Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c(i)(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c(i)(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c(i)(r) profiles, our results demonstrate the utility of the β(i) scaling parameter in conjunction with h(i) for quantifying individual publication impact. We show that the total number of citations C(i) tallied from a scientist's N(i) papers scales as [Formula: see text]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
Inference of R 0 and Transmission Heterogeneity from the Size Distribution of Stuttering Chains
Blumberg, Seth; Lloyd-Smith, James O.
2013-01-01
For many infectious disease processes, such as emerging zoonoses and vaccine-preventable diseases, R0 < 1 and infections occur as self-limited stuttering transmission chains. A mechanistic understanding of transmission is essential for characterizing the risk of emerging diseases and monitoring spatio-temporal dynamics. Thus methods for inferring R0 and the degree of heterogeneity in transmission from stuttering chain data have important applications in disease surveillance and management. Previous researchers have used chain size distributions to infer R0, but estimation of the degree of individual-level variation in infectiousness (as quantified by the dispersion parameter, k) has typically required contact tracing data. Utilizing branching process theory along with a negative binomial offspring distribution, we demonstrate how maximum likelihood estimation can be applied to chain size data to infer both R0 and the dispersion parameter k that characterizes heterogeneity. While the maximum likelihood value for R0 is a simple function of the average chain size, the associated confidence intervals are dependent on the inferred degree of transmission heterogeneity. As demonstrated for monkeypox data from the Democratic Republic of Congo, this impacts when a statistically significant change in R0 is detectable. In addition, by allowing for superspreading events, inference of k shifts the threshold above which a transmission chain should be considered anomalously large for a given value of R0 (thus reducing the probability of false alarms about pathogen adaptation). Our analysis of monkeypox also clarifies the various ways that imperfect observation can impact inference of transmission parameters, and highlights the need to quantitatively evaluate whether observation is likely to significantly bias results. PMID:23658504
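Under a negative binomial offspring distribution with mean R0 and dispersion parameter k, the chain-size distribution has a closed form (as derived via branching process theory in work of this kind); a sketch with illustrative parameter values:

```python
import math

def chain_size_pmf(j, r0, k):
    """Probability that a stuttering chain has total size j, for a
    negative binomial offspring distribution with mean r0 and
    dispersion parameter k."""
    log_p = (math.lgamma(k * j + j - 1)
             - math.lgamma(k * j) - math.lgamma(j + 1)
             + (j - 1) * math.log(r0 / k)
             - (k * j + j - 1) * math.log(1.0 + r0 / k))
    return math.exp(log_p)

r0, k = 0.5, 0.3  # illustrative subcritical values
sizes = range(1, 2000)
probs = [chain_size_pmf(j, r0, k) for j in sizes]
total = sum(probs)                         # ~1 for R0 < 1 (chains die out)
mean = sum(j * p for j, p in zip(sizes, probs))  # expected chain size 1/(1-R0)
```

The mean chain size 1/(1 - R0) is what makes the maximum likelihood value of R0 a simple function of the average observed chain size, while small k fattens the tail and widens the confidence intervals.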
Modeling High-Impact Weather and Climate: Lessons From a Tropical Cyclone Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Done, James; Holland, Greg; Bruyere, Cindy
2013-10-19
Although the societal impact of a weather event increases with the rarity of the event, our current ability to assess extreme events and their impacts is limited not only by rarity but also by current model fidelity and a lack of understanding of the underlying physical processes. This challenge is driving fresh approaches to assess high-impact weather and climate. Recent lessons learned in modeling high-impact weather and climate are presented using the case of tropical cyclones as an illustrative example. Through examples using the Nested Regional Climate Model to dynamically downscale large-scale climate data, the need to treat bias in the driving data is illustrated. Domain size, location, and resolution are also shown to be critical and should be guided by the need to: include relevant regional climate physical processes; resolve key impact parameters; and accurately simulate the response to changes in external forcing. The notion of sufficient model resolution is introduced, together with the added value of combining dynamical and statistical assessments to fill out the parent distribution of high-impact parameters. Finally, through the example of a tropical cyclone damage index, direct impact assessments are presented as powerful tools that distill complex datasets into concise statements on likely impact, and as highly effective communication devices.
Rojas, Kristians Diaz; Montero, Maria L.; Yao, Jorge; Messing, Edward; Fazili, Anees; Joseph, Jean; Ou, Yangming; Rubens, Deborah J.; Parker, Kevin J.; Davatzikos, Christos; Castaneda, Benjamin
2015-01-01
A methodology to study the relationship between clinical variables [e.g., prostate specific antigen (PSA) or Gleason score] and cancer spatial distribution is described. Three-dimensional (3-D) models of 216 glands are reconstructed from digital images of whole mount histopathological slices. The models are deformed into one prostate model selected as an atlas using a combination of rigid, affine, and B-spline deformable registration techniques. Spatial cancer distribution is assessed by counting the number of tumor occurrences among all glands in a given position of the 3-D registered atlas. Finally, a difference between proportions is used to compare different spatial distributions. As a proof of concept, we compare spatial distributions from patients with PSA greater and less than 5 ng/ml and from patients older and younger than 60 years. Results suggest that prostate cancer has a significant difference in the right zone of the prostate between populations with PSA greater and less than 5 ng/ml. Age does not have any impact on the spatial distribution of the disease. The proposed methodology can help in understanding prostate cancer through its spatial distribution and how that distribution changes with clinical parameters. Finally, this methodology can be easily adapted to other organs and pathologies. PMID:26236756
NASA Astrophysics Data System (ADS)
Chen, Y.
2017-12-01
Urbanization has been the global development trend for the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but it also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but quantifying this effect still faces great challenges: setting up an appropriate hydrological model that represents the changed flood responses, and determining accurate model parameters, are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta has seen the most rapid urbanization in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the hydrological processes of highly urbanized watershed floods in the Pearl River Delta. A virtual soil type is defined in the terrain-properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, providing insight into the hydrological processes and into parameter optimization. The model is then set up for the Songmushan watershed, where observed hydrological data are available. A model parameter optimization and updating strategy is proposed based on remotely sensed land-use/cover (LUC) types: parameters are optimized with the PSO algorithm and updated when the LUC types change. The model parameters calibrated in the Songmushan watershed are regionalized to other Pearl River Delta watersheds based on their LUC types.
A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta were studied for flood-response changes due to urbanization. The results show that urbanization has a major impact on watershed flood responses: peak flows increased severalfold after urbanization, a much larger change than previously reported.
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.
2015-12-01
Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on the radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). In case the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product. 
However, the resulting optimal Z-R and Z-k relations are considerably different from those obtained from disdrometer observations. As such, the best microphysical parameter set results in a minimization of the overall bias, which besides accounting for DSD variations also corrects for the impact of additional error sources.
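A Z-R power-law inversion of the kind discussed above can be sketched as follows; the Marshall-Palmer coefficients a = 200, b = 1.6 are a classical climatological choice used purely for illustration, since the point made here is that these coefficients should instead be tied to the DSD parameters:

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a power-law Z-R relation Z = a * R**b.
    Z is in mm^6 m^-3 (converted from dBZ), R in mm/h. The
    coefficients a and b vary with the drop size distribution;
    a=200, b=1.6 is the Marshall-Palmer climatological choice,
    used here only as an example."""
    z_linear = 10.0 ** (dbz / 10.0)  # dBZ -> linear reflectivity
    return (z_linear / a) ** (1.0 / b)

r = rain_rate_from_dbz(30.0)  # ~2.7 mm/h for Marshall-Palmer coefficients
```

Linking a and b to the normalized gamma DSD parameters, as proposed above, replaces this fixed climatological pair with values consistent with the observed microphysics.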
Spatial and temporal distribution of benthic macroinvertebrates in a Southeastern Brazilian river.
Silveira, M P; Buss, D F; Nessimian, J L; Baptista, D F
2006-05-01
Benthic macroinvertebrate assemblages are structured according to physical and chemical parameters that define microhabitats, including food supply, shelter from predators, and other biological parameters that influence reproductive success. The aim of this study is to investigate the spatial and temporal distribution of macroinvertebrate assemblages in the Macaé river basin, in Rio de Janeiro state, Southeastern Brazil. According to the "Habitat Assessment Field Data Sheet - High Gradient Streams" (Barbour et al., 1999), the five sampling sites are considered reference condition. Despite differences in hydrological parameters (mean width, depth, and discharge) among sites, the physicochemical parameters and the general structure of functional feeding groups were similar, except for the least impacted area, which had more shredders. A Detrended Correspondence Analysis based on substrates shows a clear distinction between pool and riffle assemblages. The riffle litter substrate had higher taxa richness and abundance, but the pool litter substrate had the greatest number of exclusive taxa. A Cluster Analysis based on sampling site data showed that temporal variation was the main factor structuring macroinvertebrate assemblages in the studied habitats.
NASA Astrophysics Data System (ADS)
Jawad, Enas A.
2018-05-01
In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of trifluoroiodomethane (CF3I), an environmentally friendly gas, with the noble gases argon, helium, krypton, neon, and xenon. The electron transport parameters are evaluated over a range of E/N (where E is the electric field and N is the number density of background gas molecules) from 100 to 2000 Td (1 Townsend = 10^-17 V cm^2) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL), and the density-normalized mobility (μN). The impact of CF3I in the noble-gas mixtures is strongly apparent in the values of the electron mean energy, the density-normalized longitudinal diffusion coefficient, and the density-normalized mobility. The calculated results agree well with experimental results.
Quantal diffusion description of multinucleon transfers in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Ayik, S.; Yilmaz, B.; Yilmaz, O.; Umar, A. S.
2018-05-01
Employing the stochastic mean-field (SMF) approach, we develop a quantal diffusion description of the multi-nucleon transfer in heavy-ion collisions at finite impact parameters. The quantal transport coefficients are determined by the occupied single-particle wave functions of the time-dependent Hartree-Fock equations. As a result, the primary fragment mass and charge distribution functions are determined entirely in terms of the mean-field properties. This powerful description does not involve any adjustable parameter, includes the effects of shell structure, and is consistent with the fluctuation-dissipation theorem of the nonequilibrium statistical mechanics. As a first application of the approach, we analyze the fragment mass distribution in 48Ca+ 238U collisions at the center-of-mass energy Ec.m.=193 MeV and compare the calculations with the experimental data.
Jeong, Jong In; Gu, Seonhye; Cho, Juhee; Hong, Sang Duk; Kim, Su Jin; Dhong, Hun-Jong; Chung, Seung-Kyu; Kim, Hyo Yeol
2017-05-01
Considering the mechanisms by which obesity affects obstructive sleep apnea syndrome (OSAS) and the differences of fat distribution depending on gender, associations between anthropometric parameters, and OSAS may differ depending on gender or sleep position. We analyzed the impact of gender and sleep position on the relationship between fat distribution and development of OSAS. One thousand thirty-two consecutive subjects were analyzed. Recorded anthropometric measurements and overnight polysomnographic data of the subjects were reviewed retrospectively. The presence of OSAS was defined by the respiratory disturbance index (RDI) ≥5 with documented symptoms of excessive daytime sleepiness. Eight hundred fifty-eight males and 174 females were included. Male subjects had significantly higher body mass index (BMI), larger waist circumference (WC), and lower percent of overall body fat (P < 0.0001, P < 0.0001, and P < 0.0001, respectively). The severity of OSAS was significantly higher in male subjects (RDI 26.9 ± 22.4 in males vs. 10.2 ± 13.8 in females, P < 0.0001). In male subjects, BMI, WC, and overall body fat were significantly associated with severity of OSAS and had larger impacts on supine RDI than lateral RDI. Overall body fat was not associated with severity of OSAS in female subjects, and there were no significant differences of the associations between all anthropometric parameters and RDIs depending on sleep position. Evaluation of the correlation of anthropometric data with severity of OSAS should consider sleep position as well as gender.
On the estimation of the reproduction number based on misreported epidemic data.
Azmon, Amin; Faes, Christel; Hens, Niel
2014-03-30
Epidemic data often suffer from underreporting and delay in reporting. In this paper, we investigated the impact of delays and underreporting on estimates of reproduction number. We used a thinned version of the epidemic renewal equation to describe the epidemic process while accounting for the underlying reporting system. Assuming a constant reporting parameter, we used different delay patterns to represent the delay structure in our model. Instead of assuming a fixed delay distribution, we estimated the delay parameters while assuming a smooth function for the reproduction number over time. In order to estimate the parameters, we used a Bayesian semiparametric approach with penalized splines, allowing both flexibility and exact inference provided by MCMC. To show the performance of our method, we performed different simulation studies. We conducted sensitivity analyses to investigate the impact of misspecification of the delay pattern and the impact of assuming nonconstant reporting parameters on the estimates of the reproduction numbers. We showed that, whenever available, additional information about time-dependent underreporting can be taken into account. As an application of our method, we analyzed confirmed daily A(H1N1) v2009 cases made publicly available by the World Health Organization for Mexico and the USA. Copyright © 2013 John Wiley & Sons, Ltd.
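The thinning argument, that a constant reporting probability cancels out of renewal-equation estimates of the reproduction number while time-varying reporting does not, can be sketched as follows (synthetic incidence and an illustrative serial interval; not the authors' Bayesian penalized-spline model):

```python
def estimate_r(cases, serial_interval):
    """Instantaneous reproduction number from the renewal equation:
    R_t = I_t / sum_s(w_s * I_{t-1-s})."""
    estimates = []
    for t in range(len(serial_interval), len(cases)):
        denom = sum(w * cases[t - 1 - s]
                    for s, w in enumerate(serial_interval))
        estimates.append(cases[t] / denom)
    return estimates

# Simulate incidence from the renewal equation with a known, constant R
w = [0.2, 0.5, 0.3]   # serial-interval distribution (sums to 1)
true_r = 1.2
cases = [10.0, 12.0, 14.0]
for t in range(3, 40):
    cases.append(true_r * sum(wi * cases[t - 1 - s]
                              for s, wi in enumerate(w)))

# A constant reporting probability (rho = 0.4) appears in both the
# numerator and the denominator of R_t, so it cancels exactly.
reported = [0.4 * c for c in cases]
r_full = estimate_r(cases, w)
r_thinned = estimate_r(reported, w)
```

When the reporting probability changes over time, the cancellation fails and R estimates are biased, which is why the delay and reporting parameters are estimated jointly with R in the approach described above.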
Leigh, S A; Branton, S L; Evans, J D; Collier, S D
2013-12-01
This study was conducted to determine the impact of vaccination with Vectormune FP MG on egg production and egg quality characteristics of Single Comb White Leghorn hens. Due to questions of the efficacy of this vaccine in preventing Mycoplasma gallisepticum-mediated pathology, the ability of this vaccine to protect against postproduction-peak egg losses associated with F-strain M. gallisepticum (FMG) vaccination was also investigated. Vaccination with Vectormune FP MG did not result in any significant change in egg production or egg quality parameters compared with control (unvaccinated) hens. Subsequent revaccination with FMG at 45 wk of age (woa) yielded no impact on egg production or egg quality parameters of Vectormune FP MG vaccinated hens, unlike prior results for postproduction-peak vaccination of M. gallisepticum-clean hens with FMG, which exhibited a drop in egg production of approximately 6%. No difference in egg size distribution was observed for any of the treatment groups before or after FMG revaccination. These results suggest that hens can be safely vaccinated with Vectormune FP MG as pullets and can be revaccinated with a live M. gallisepticum vaccine such as FMG at a later date with no deleterious effects on egg production or egg or eggshell quality parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poludniowski, Gavin G.; Evans, Philip M.
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of the modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd₂O₂S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler, novel geometrical-optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd₂O₂S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.).
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement with the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
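As a rough stand-in for the geometrical-optics models mentioned above (the paper's GODM/GODM+ formulas are not reproduced here), van de Hulst's classic anomalous-diffraction approximation illustrates how a non-Mie closed form behaves for micron-scale grains; the refractive index contrast and wavelength below are assumed values.

```python
import numpy as np

# Van de Hulst's anomalous-diffraction approximation for the extinction
# efficiency of a weakly refracting sphere, a geometrical-optics-flavoured
# stand-in for a full Mie calculation. a_um = grain radius, lam_um =
# wavelength (both in micrometres), m = relative refractive index (assumed).
def q_ext_adt(a_um, lam_um, m):
    x = 2 * np.pi * a_um / lam_um          # size parameter
    p = 2 * x * (m - 1)                    # phase-shift parameter
    return 2 - (4 / p) * np.sin(p) + (4 / p**2) * (1 - np.cos(p))

radii = np.array([0.5, 1.0, 2.0, 5.0])     # grain radii in micrometres
q = q_ext_adt(radii, 0.545, 1.1)           # ~545 nm Tb emission, assumed index
# For large phase shifts Q_ext oscillates about the geometric limit of 2,
# which is the "average trend" behaviour discussed in the abstract.
```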
Effect of agitation time on nutrient distribution in full-scale CSTR biogas digesters.
Kress, Philipp; Nägele, Hans-Joachim; Oechsner, Hans; Ruile, Stephan
2018-01-01
The aim of this work was to study the impact of reducing the mixing time in a full-scale CSTR biogas reactor from 10 to 5 and then to 2 min per half hour on the distribution of DM, acetic acid and FOS/TAC, as a measure to cut electricity consumption. The parameters in the digestate were unevenly distributed, with the highest concentrations measured at the point of feeding. By reducing the mixing time, the FOS/TAC value increased by 16.6%. A reduced mixing time of 2 min led to an accumulation of 15% biogas in the digestate. Copyright © 2017 Elsevier Ltd. All rights reserved.
Asteroid rotation rates - Distributions and statistics
NASA Technical Reports Server (NTRS)
Binzel, Richard P.; Farinella, Paolo; Zappala, Vincenzo; Cellino, Alberto
1989-01-01
An analysis of asteroid rotation rates and light-curve amplitudes disclosed many significant correlations between these rotation parameters and asteroid diameter, with distinct changes occurring near 125 km, a diameter above which self-gravity may become important. It is suggested that this size range may represent a division between surviving primordial asteroids and collisional fragments. A comparison of rotational parameters between family and nonfamily asteroids showed that the Koronis and Eos families exhibit noticeable differences, considered to be due to different impact conditions and/or to a relatively younger age for the Koronis family.
Yekpe, Ketsia; Abatzoglou, Nicolas; Bataille, Bernard; Gosselin, Ryan; Sharkawi, Tahmer; Simard, Jean-Sébastien; Cournoyer, Antoine
2018-07-01
This study applied the concept of Quality by Design (QbD) to tablet dissolution. Its goal was to propose a quality control strategy to model dissolution testing of solid oral dose products according to International Conference on Harmonization guidelines. The methodology involved the following three steps: (1) a risk analysis to identify the material- and process-related parameters impacting the critical quality attributes of dissolution testing, (2) an experimental design to evaluate the influence of design factors (attributes and parameters selected by risk analysis) on dissolution testing, and (3) an investigation of the relationship between design factors and dissolution profiles. Results show that (a) in the case studied, the two parameters impacting dissolution kinetics are active pharmaceutical ingredient particle size distributions and tablet hardness and (b) these two parameters could be monitored with PAT tools to predict dissolution profiles. Moreover, based on the results obtained, modeling dissolution is possible. The practicality and effectiveness of the QbD approach were demonstrated through this industrial case study. Implementing such an approach systematically in industrial pharmaceutical production would reduce the need for tablet dissolution testing.
Husak, Gregory J.; Michaelsen, Joel; Kyriakidis, P.; Verdin, James P.; Funk, Chris; Galu, Gideon
2011-01-01
Probabilistic forecasts are produced by a variety of outlets to help predict rainfall, and other meteorological events, for periods of 1 month or more. Such forecasts are expressed as probabilities of a rainfall event, e.g. being in the upper, middle, or lower third of the relevant distribution of rainfall in the region. The impact of these forecasts on the expectation for the event is not always clear or easily conveyed. This article proposes a technique based on Monte Carlo simulation for adjusting existing climatological statistical parameters to match forecast information, resulting in new parameters defining the probability of events for the forecast interval. The resulting parameters are shown to approximate the forecasts with reasonable accuracy. To show the value of the technique as an application for seasonal rainfall, it is applied to the consensus forecast developed for the Greater Horn of Africa for the 2009 March-April-May season. An alternative, analytical approach is also proposed and discussed in comparison with the simulation-based technique.
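One way to realise the Monte Carlo adjustment idea is importance resampling followed by a refit. The sketch below assumes a gamma climatology and illustrative tercile probabilities, not the Greater Horn of Africa values used in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Sketch: draw from the climatological distribution, reweight so each
# tercile carries the forecast probability, then refit the distribution
# to the adjusted sample. All parameter values are illustrative.
clim_shape, clim_scale = 4.0, 50.0                 # assumed gamma climatology
terciles = stats.gamma.ppf([1/3, 2/3], clim_shape, scale=clim_scale)
forecast = np.array([0.2, 0.3, 0.5])               # below/normal/above probs

n = 30000
draws = stats.gamma.rvs(clim_shape, scale=clim_scale, size=n, random_state=rng)
bins = np.digitize(draws, terciles)                # 0, 1, 2 = tercile index
weights = forecast[bins] / (1/3)                   # importance weights
resampled = rng.choice(draws, size=n, p=weights / weights.sum())

new_shape, _, new_scale = stats.gamma.fit(resampled, floc=0)
# The refit parameters approximately reproduce the forecast tercile
# probabilities for the forecast interval.
```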
NASA Astrophysics Data System (ADS)
Ramzan, M.; Bilal, M.; Chung, Jae Dong; Lu, Dian Chen; Farooq, Umer
2017-09-01
A mathematical model has been established to study the magnetohydrodynamic second grade nanofluid flow past a bidirectional stretched surface. The flow is induced by Cattaneo-Christov thermal and concentration diffusion fluxes. Novel characteristics of Brownian motion and thermophoresis are accompanied by temperature-dependent thermal conductivity and convective heat and mass boundary conditions. Apposite transformations are employed to convert a system of nonlinear partial differential equations into nonlinear ordinary differential equations. Analytic solutions of the resulting nonlinear system are obtained via a convergent method. Graphs are plotted to examine how velocity, temperature, and concentration distributions are affected by the various physical parameters involved. Effects of skin friction coefficients along the x- and y-directions versus various parameters are also shown through graphs and are well discussed. Our findings show that velocities along both the x and y axes exhibit a decreasing trend with the Hartmann number. Moreover, temperature and concentration distributions are decreasing functions of the thermal and concentration relaxation parameters.
Frigate Defense Effectiveness in Asymmetrical Green Water Engagements
2009-09-01
the model not employing a helicopter, a high overlap in the sets of factors determining loss is observed. Both factor weighting and the predicted... 4.1 Model Parameter Estimates Overview. 4.2 Distribution of loss... 4.7 The range at which a contact is deemed hostile has low impact on predicted loss.
Spatial variation of statistical properties of extreme water levels along the eastern Baltic Sea
NASA Astrophysics Data System (ADS)
Pindsoo, Katri; Soomere, Tarmo; Rocha, Eugénio
2016-04-01
Most existing projections of future extreme water levels rely on the use of classic generalised extreme value distributions. The choice to use a particular distribution is often made based on the absolute value of the shape parameter of the Generalised Extreme Value distribution. If this parameter is close to zero, the Gumbel distribution is most appropriate, while in the opposite case the Weibull or Frechet distribution could be used. We demonstrate that the alongshore variation in the statistical properties of numerically simulated high water levels along the eastern coast of the Baltic Sea is so large that the use of a single distribution for projections of extreme water levels is highly questionable. The analysis is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute. The output of the Rossby Centre Ocean model is sampled with a resolution of 6 h and the output of the circulation model NEMO with a resolution of 1 h. As the maxima of water levels of subsequent years may be correlated in the Baltic Sea, we also employ maxima for stormy seasons. We provide a detailed analysis of the spatial variation of the parameters of the family of extreme value distributions along an approximately 600 km long coastal section from the north-western shore of Latvia in the Baltic Proper to the eastern Gulf of Finland. The parameters are evaluated using the maximum likelihood method and the method of moments. The analysis also covers the entire Gulf of Riga. The core parameter of this family of distributions, the shape parameter of the Generalised Extreme Value distribution, exhibits extensive variation in the study area. Its values, evaluated using the Hydrognomon software and the maximum likelihood method, vary from about -0.1 near the north-western coast of Latvia in the Baltic Proper up to about 0.05 in the eastern Gulf of Finland. This parameter is very close to zero near Tallinn in the western Gulf of Finland.
Thus, it is natural that the Gumbel distribution gives adequate projections of extreme water levels for the vicinity of Tallinn. More importantly, this feature indicates that the use of a single distribution for the projections of extreme water levels and their return periods for the entire Baltic Sea coast is inappropriate. The physical reason is the interplay of the complex shape of large subbasins (such as the Gulf of Riga and Gulf of Finland) of the sea and highly anisotropic wind regime. The 'impact' of this anisotropy on the statistics of water level is amplified by the overall anisotropy of the distributions of the frequency of occurrence of high and low water levels. The most important conjecture is that long-term behaviour of water level extremes in different coastal sections of the Baltic Sea may be fundamentally different.
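A minimal version of the block-maxima fitting step can be sketched with synthetic maxima standing in for the model output. Note that scipy's `genextreme` uses the opposite sign convention for the shape parameter (c = -ξ relative to the common GEV form).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Fit a GEV to (synthetic) seasonal maxima and inspect the shape parameter,
# which guides the choice among Gumbel, Frechet and reversed-Weibull forms.
# c=0 generates Gumbel-type maxima; loc/scale values are illustrative.
maxima = stats.genextreme.rvs(c=0.0, loc=100.0, scale=15.0,
                              size=45, random_state=rng)   # ~45 seasons

c_hat, loc_hat, scale_hat = stats.genextreme.fit(maxima)   # maximum likelihood
xi_hat = -c_hat                                            # common convention

# 50-year return level from the fitted distribution
rl_50 = stats.genextreme.ppf(1 - 1/50, c_hat, loc=loc_hat, scale=scale_hat)
# A fitted xi_hat near zero supports using the Gumbel distribution locally.
```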
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
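The accept-reject ABC algorithm mentioned above can be illustrated with a deliberately simple toy problem (inferring a Gaussian mean, not the peak-count model): draw a parameter from the prior, forward-simulate a summary statistic, and accept the draw if the summary falls within a tolerance of the observation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal accept-reject ABC sketch. The "model" is a Gaussian whose mean
# we infer from a simulated summary statistic; all values are illustrative.
def simulate(theta, n=200, rng=rng):
    return rng.normal(theta, 1.0, size=n).mean()    # forward-model summary

obs_summary = 0.8                                    # "observed" summary
tol = 0.05                                           # acceptance tolerance

accepted = []
while len(accepted) < 500:
    theta = rng.uniform(-3, 3)                       # flat prior (assumed)
    if abs(simulate(theta) - obs_summary) < tol:
        accepted.append(theta)                       # accept-reject step

posterior = np.array(accepted)
# The accepted draws approximate the posterior, concentrated near 0.8.
```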
Modelling and analysis of solar cell efficiency distributions
NASA Astrophysics Data System (ADS)
Wasmer, Sven; Greulich, Johannes
2017-08-01
We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.
Zhao, Suping; Yu, Ye; Xia, Dunsheng; Yin, Daiying; He, Jianjun; Liu, Na; Li, Fang
2015-12-01
To understand the particle size distribution during two contrasting dust events originating from the Taklimakan and Gobi deserts, the dust origins of the two events were identified using the HYSPLIT trajectory model and MODIS and CALIPSO satellite data. The supermicron particles increased significantly during the dust events. The dust event from the Gobi desert significantly affected particles larger than 2.5 μm, while that from the Taklimakan desert mainly affected particles of 1.0-2.5 μm. The particle size distributions and their modal parameters, such as the VMD (volume median diameter), differ significantly for the different dust origins. The dust from the Taklimakan desert was finer than that from the Gobi desert, probably due in part to other influencing factors such as mixing between dust and urban emissions. Our findings illustrate the capacity of combining in situ measurements, satellite data and trajectory modelling to characterize large-scale dust plumes with a variety of aerosol parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Khandelwal, Govind S.; Khan, Ferdous
1989-01-01
An optical model description of energy and momentum transfer in relativistic heavy-ion collisions, based upon composite particle multiple scattering theory, is presented. Transverse and longitudinal momentum transfers to the projectile are shown to arise from the real and absorptive part of the optical potential, respectively. Comparisons of fragment momentum distribution observables with experiments are made and trends outlined based on our knowledge of the underlying nucleon-nucleon interaction. Corrections to the above calculations are discussed. Finally, use of the model as a tool for estimating collision impact parameters is indicated.
qT uncertainties for W and Z production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berge, S.; Nadolsky, P. M.; Olness, F. I.
Analysis of semi-inclusive DIS hadroproduction suggests broadening of transverse momentum distributions at small x, below 10^-3 to 10^-2, which can be modeled in the Collins-Soper-Sterman formalism by a modification of impact-parameter-dependent parton densities. We investigate these consequences for the production of electroweak bosons at the Tevatron and the LHC. If substantial small-x broadening is observed in forward Z0 boson production in the Tevatron Run-2, it will strongly affect the predicted qT distributions for W± and Z0 boson production at the LHC.
Experiments and scaling laws for catastrophic collisions. [of asteroids
NASA Technical Reports Server (NTRS)
Fujiwara, A.; Cerroni, P.; Davis, D.; Ryan, E.; Di Martino, M.
1989-01-01
The existing data on shattering impacts are reviewed using natural silicate, ice, and cement-mortar targets. A comprehensive data base containing the most important parameters describing these experiments was prepared. The collisional energy needed to shatter consolidated homogeneous targets and the ensuing fragment size distributions have been well studied experimentally. However, major gaps exist in the data on fragment velocity and rotational distributions, as well as collisional energy partitioning for these targets. Current scaling laws lead to predicted outcomes of asteroid collisions that are inconsistent with interpretations of astronomical data.
Impact parameter smearing effects on isospin sensitive observables in heavy ion collisions
NASA Astrophysics Data System (ADS)
Li, Li; Zhang, Yingxun; Li, Zhuxia; Wang, Nan; Cui, Ying; Winkelbauer, Jack
2018-04-01
The validity of impact parameter estimation from the multiplicity of charged particles at low-intermediate energies is checked within the framework of the improved quantum molecular dynamics model. The simulations show that the multiplicity of charged particles cannot estimate the impact parameter of heavy ion collisions very well, especially for central collisions at beam energies lower than ~70 MeV/u, due to the large fluctuations of the multiplicity of charged particles. The simulation results for central collisions defined by the charged particle multiplicity are compared to those obtained using an impact parameter b = 2 fm, and the charge distribution for 112Sn+112Sn at a beam energy of 50 MeV/u differs evidently between the two cases; the chosen isospin-sensitive observable, the coalescence-invariant single neutron-to-proton yield ratio, is reduced by less than 15% for the neutron-rich systems 132,124Sn+124Sn at Ebeam = 50 MeV/u, while the coalescence-invariant double neutron-to-proton yield ratio shows no obvious difference. The sensitivity of the chosen isospin-sensitive observables to effective mass splitting is studied for central collisions defined by the multiplicity of charged particles. Our results show that the sensitivity is enhanced for 132Sn+124Sn relative to that for 124Sn+124Sn, and this reaction system should be measured in future experiments to study the effective mass splitting by heavy ion collisions.
Quantifying the evolution of individual scientific impact.
Sinatra, Roberta; Wang, Dashun; Deville, Pierre; Song, Chaoming; Barabási, Albert-László
2016-11-04
Despite the frequent use of numerous quantitative indicators to gauge the professional impact of a scientist, little is known about how scientific impact emerges and evolves in time. Here, we quantify the changes in impact and productivity throughout a career in science, finding that impact, as measured by influential publications, is distributed randomly within a scientist's sequence of publications. This random-impact rule allows us to formulate a stochastic model that uncouples the effects of productivity, individual ability, and luck and unveils the existence of universal patterns governing the emergence of scientific success. The model assigns a unique individual parameter Q to each scientist, which is stable during a career, and it accurately predicts the evolution of a scientist's impact, from the h-index to cumulative citations, and independent recognitions, such as prizes. Copyright © 2016, American Association for the Advancement of Science.
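A toy rendering of the Q-model and the random-impact rule described above, with illustrative parameter values rather than those estimated from the publication data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Q-model: each paper's impact is the product of a stable per-scientist
# parameter Q and a random "luck" factor p drawn from a shared lognormal
# distribution. Q values and the lognormal parameters are illustrative.
def career(Q, n_papers, rng):
    luck = rng.lognormal(mean=0.0, sigma=1.0, size=n_papers)
    return Q * luck                                   # per-paper impact

low_Q = career(1.0, 100, rng)
high_Q = career(5.0, 100, rng)

# Random-impact rule: the highest-impact paper occurs at a random position
# in the publication sequence, so its index carries no information.
best_idx = int(np.argmax(high_Q))
```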
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000) with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations was in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles coincide in part with the rainy season (garlic, winter cereals and olive).
It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results within 1% of those obtained by providing spatially specific values for about 800 parcels.
Discrete epidemic models with arbitrary stage distributions and applications to disease control.
Hernandez-Ceron, Nancy; Feng, Zhilan; Castillo-Chavez, Carlos
2013-10-01
W.O. Kermack and A.G. McKendrick introduced in their fundamental paper, A Contribution to the Mathematical Theory of Epidemics, published in 1927, a deterministic model that captured the qualitative dynamic behavior of single infectious disease outbreaks. A Kermack–McKendrick discrete-time general framework, motivated by the emergence of a multitude of models used to forecast the dynamics of epidemics, is introduced in this manuscript. Results that allow us to measure quantitatively the role of classical and general distributions on disease dynamics are presented. The case of the geometric distribution is used to evaluate the impact of waiting-time distributions on epidemiological processes or public health interventions. In short, the geometric distribution is used to set up the baseline or null epidemiological model used to test the relevance of realistic stage-period distribution on the dynamics of single epidemic outbreaks. A final size relationship involving the control reproduction number, a function of transmission parameters and the means of distributions used to model disease or intervention control measures, is computed. Model results and simulations highlight the inconsistencies in forecasting that emerge from the use of specific parametric distributions. Examples, using the geometric, Poisson and binomial distributions, are used to highlight the impact of the choices made in quantifying the risk posed by single outbreaks and the relative importance of various control measures.
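A minimal discrete-time model with a geometrically distributed infectious period illustrates the baseline ("null") case discussed above; the transmission rate and stage-exit probability are illustrative values, not those of any fitted outbreak.

```python
import numpy as np

# Discrete-time Kermack-McKendrick-type SIR model in which infectives leave
# the infectious stage with probability (1 - p) each step, i.e. a geometric
# stage distribution. beta and p are illustrative; R0 = beta / (1 - p).
def discrete_sir(beta=0.5, p=0.8, N=10000, I0=10, steps=500):
    S, I, R = N - I0, float(I0), 0.0
    for _ in range(steps):
        new_inf = S * (1 - np.exp(-beta * I / N))  # new infections this step
        S -= new_inf
        recov = (1 - p) * I                        # geometric stage exit
        I += new_inf - recov
        R += recov
    return S, I, R

S_end, I_end, R_end = discrete_sir()
# With beta=0.5 and p=0.8, R0 = 2.5, so the final size relation predicts a
# large outbreak infecting most of the population.
```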
Shapiro effect as a possible cause of the low-frequency pulsar timing noise in globular clusters
NASA Astrophysics Data System (ADS)
Larchenkova, T. I.; Kopeikin, S. M.
2006-01-01
A prolonged timing of millisecond pulsars has revealed low-frequency uncorrelated (infrared) noise, presumably of astrophysical origin, in the pulse arrival time (PAT) residuals for some of them. Currently available pulsar timing methods allow the statistical parameters of this noise to be reliably measured by decomposing the PAT residual function into orthogonal Fourier harmonics. In most cases, pulsars in globular clusters show a low-frequency modulation of their rotational phase and spin rate. The relativistic time delay of the pulsar signal in the curved spacetime of randomly distributed and moving globular cluster stars (the Shapiro effect) is suggested as a possible cause of this modulation. Extremely important (from an astrophysical point of view) information about the structure of the globular cluster core, which is inaccessible to study by other observational methods, could be obtained by analyzing the spectral parameters of the low-frequency noise caused by the Shapiro effect and attributable to the random passages of stars near the line of sight to the pulsar. Given the smallness of the aberration corrections that arise from the nonstationarity of the gravitational field of the randomly distributed ensemble of stars under consideration, a formula is derived for the Shapiro effect for a pulsar in a globular cluster. The derived formula is used to calculate the autocorrelation function of the low-frequency pulsar noise, the slope of its power spectrum, and the behavior of the σz statistic that characterizes the spectral properties of this noise in the form of a time function. The Shapiro effect under discussion is shown to manifest itself for large impact parameters as a low-frequency noise of the pulsar spin rate with a spectral index of n = -1.8 that depends weakly on the specific model distribution of stars in the globular cluster. For small impact parameters, the spectral index of the noise is n = -1.5.
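For reference, the textbook single-lens Shapiro delay underlying this analysis can be written as follows (this is the standard static expression for a signal passing a mass M, not the paper's formula for an ensemble of moving stars with aberration corrections):

```latex
\Delta t_{\mathrm{S}} \;=\; -\,\frac{2GM}{c^{3}}\,\ln\!\bigl(1-\cos\theta\bigr),
\qquad 1-\cos\theta \;\simeq\; \frac{\theta^{2}}{2}\quad(\theta \ll 1),
```

where θ is the angular separation between the intervening star and the line of sight to the pulsar; for small separations the delay grows logarithmically as the impact parameter shrinks, which is why close stellar passages dominate the low-frequency noise.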
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mantry, Sonny; Petriello, Frank
We derive a factorization theorem for the Higgs boson transverse momentum (pT) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for mh >> pT >> Λ_QCD, where mh denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the pT scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the pT-scale physics simplifies the implementation of higher-order radiative corrections in α_s(pT). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in pT/mh and Λ_QCD/pT can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-pT resummation.
Examining System-Wide Impacts of Solar PV Control Systems with a Power Hardware-in-the-Loop Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Fuller, Jason C.; Schneider, Kevin P.
2014-10-11
High penetration levels of distributed solar PV power generation can lead to adverse power quality impacts such as excessive voltage rise, voltage flicker, and reactive power values that result in unacceptable voltage levels. Advanced inverter control schemes have been proposed that have the potential to mitigate many power quality concerns. However, closed-loop control may lead to unintended behavior in deployed systems as complex interactions can occur between numerous operating devices. In order to enable the study of the performance of advanced control schemes in a detailed distribution system environment, a Hardware-in-the-Loop (HIL) platform has been developed. In the HIL system, GridLAB-D, a distribution system simulation tool, runs in real-time mode at the Pacific Northwest National Laboratory (PNNL) and supplies power system parameters at a point of common coupling to hardware located at the National Renewable Energy Laboratory (NREL). Hardware inverters interact with grid and PV simulators emulating an operational distribution system, and power output from the inverters is measured and sent to PNNL to update the real-time distribution system simulation. The platform is described and initial test cases are presented. The platform is used to study the system-wide impacts and the interactions of controls applied to inverters that are integrated into a simulation of the IEEE 8500-node test feeder, with inverters in either constant power factor control or active volt/VAR control. We demonstrate that this HIL platform is well suited to the study of advanced inverter controls and their impacts on the power quality of a distribution feeder. Additionally, the results from HIL are used to validate GridLAB-D simulations of advanced inverter controls.
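A piecewise-linear volt/VAR droop of the general kind referred to above can be sketched as follows; the breakpoints and VAR limit are illustrative placeholders, not the settings used in the 8500-node study.

```python
# Piecewise-linear volt/VAR droop: inside the deadband the inverter provides
# no reactive support; above it, VARs are absorbed in proportion to the
# overvoltage; below it, VARs are injected. Voltages are in per-unit and the
# return value is the reactive-power command as a fraction of capability.
# All breakpoints and the q_lim value are illustrative assumptions.
def volt_var(v, v_dead=(0.98, 1.02), v_max=1.05, v_min=0.95, q_lim=0.44):
    lo, hi = v_dead
    if lo <= v <= hi:
        return 0.0                                         # deadband
    if v > hi:
        return -q_lim * min((v - hi) / (v_max - hi), 1.0)  # absorb VARs
    return q_lim * min((lo - v) / (lo - v_min), 1.0)       # inject VARs

# High voltage -> absorb reactive power; low voltage -> inject; the ramp
# saturates at the inverter's reactive capability beyond v_max / v_min.
```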
NASA Astrophysics Data System (ADS)
Rae, A. S. P.; Collins, G. S.; Grieve, R. A. F.; Osinski, G. R.; Morgan, J. V.
2017-07-01
Large impact structures have complex morphologies, with zones of structural uplift that can be expressed topographically as central peaks and/or peak rings internal to the crater rim. The formation of these structures requires transient strength reduction in the target material and one of the proposed mechanisms to explain this behavior is acoustic fluidization. Here, samples of shock-metamorphosed quartz-bearing lithologies at the West Clearwater Lake impact structure, Canada, are used to estimate the maximum recorded shock pressures in three dimensions across the crater. These measurements demonstrate that the currently observed distribution of shock metamorphism is strongly controlled by the formation of the structural uplift. The distribution of peak shock pressures, together with apparent crater morphology and geological observations, is compared with numerical impact simulations to constrain parameters used in the block-model implementation of acoustic fluidization. The numerical simulations produce craters that are consistent with morphological and geological observations. The results show that the regeneration of acoustic energy must be an important feature of acoustic fluidization in crater collapse, and should be included in future implementations. Based on the comparison between observational data and impact simulations, we conclude that the West Clearwater Lake structure had an original rim (final crater) diameter of 35-40 km and has since experienced up to 2 km of differential erosion.
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois, resulted in a plume in a confined aquifer that is more than 4 km long and has impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement of the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling was found to be inadequate for the uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria.
Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in this work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
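The robust M-estimators used above reduce the pull of gross observation errors relative to the L2-estimator. As an illustration of the family (the study itself used a modified Cauchy M-estimator, whose weighting differs), the Huber loss can be sketched as:

```python
import numpy as np

def huber_objective(residuals, delta=1.345):
    """Huber M-estimator objective: quadratic for small residuals,
    linear beyond the threshold delta, so outliers contribute less
    than they would under the L2 (least-squares) objective.

    delta = 1.345 is the conventional tuning constant for residuals
    scaled to unit variance (an assumption here, not from the paper).
    """
    r = np.abs(np.asarray(residuals, dtype=float))
    quadratic = 0.5 * r**2                    # inlier branch
    linear = delta * (r - 0.5 * delta)        # outlier branch
    return np.where(r <= delta, quadratic, linear).sum()
```

Minimizing this objective over the model parameters, instead of the sum of squared residuals, is what makes the calibration less sensitive to biased or erroneous well measurements.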
Gujba, Abdullahi K.; Medraj, Mamoun
2014-01-01
The laser shock peening (LSP) process using a Q-switched pulsed laser beam for surface modification has been reviewed. The development of the LSP technique and its numerous advantages over conventional shot peening (SP), such as better surface finish, greater depths of residual stress, and a more uniform intensity distribution, were discussed. Similar comparisons with ultrasonic impact peening (UIP)/ultrasonic shot peening (USP) were incorporated, when possible. The generation of shock waves, processing parameters, and characterization of LSP-treated specimens were described. Special attention was given to the influence of LSP process parameters on residual stress profiles, material properties, and structures. Based on the studies so far, more fundamental understanding is still needed when selecting optimized LSP processing parameters and substrate conditions. A summary of the parametric studies of LSP on different materials has been presented. Furthermore, enhancements in surface micro- and nanohardness, elastic modulus, tensile yield strength, and refinement of the microstructure, which translate to increased fatigue life, fretting fatigue life, and improved resistance to stress corrosion cracking (SCC) and corrosion, were addressed. However, research gaps related to inconsistencies in the literature were identified. The current status, developments, and challenges of the LSP technique were discussed. PMID:28788284
Influence of emphysema distribution on pulmonary function parameters in COPD patients
Bastos, Helder Novais e; Neves, Inês; Redondo, Margarida; Cunha, Rui; Pereira, José Miguel; Magalhães, Adriana; Fernandes, Gabriela
2015-01-01
ABSTRACT OBJECTIVE: To evaluate the impact that the distribution of emphysema has on clinical and functional severity in patients with COPD. METHODS: The distribution of the emphysema was analyzed in COPD patients, who were classified according to a 5-point visual classification system of lung CT findings. We assessed the influence of emphysema distribution type on the clinical and functional presentation of COPD. We also evaluated hypoxemia after the six-minute walk test (6MWT) and determined the six-minute walk distance (6MWD). RESULTS: Eighty-six patients were included. The mean age was 65.2 ± 12.2 years, 91.9% were male, and all but one were smokers (mean smoking history, 62.7 ± 38.4 pack-years). The emphysema distribution was categorized as obviously upper lung-predominant (type 1), in 36.0% of the patients; slightly upper lung-predominant (type 2), in 25.6%; homogeneous between the upper and lower lung (type 3), in 16.3%; and slightly lower lung-predominant (type 4), in 22.1%. Type 2 emphysema distribution was associated with lower FEV1, FVC, FEV1/FVC ratio, and DLCO. In comparison with the type 1 patients, the type 4 patients were more likely to have an FEV1 < 65% of the predicted value (OR = 6.91, 95% CI: 1.43-33.45; p = 0.016), a 6MWD < 350 m (OR = 6.36, 95% CI: 1.26-32.18; p = 0.025), and post-6MWT hypoxemia (OR = 32.66, 95% CI: 3.26-326.84; p = 0.003). The type 3 patients had a higher RV/TLC ratio, although the difference was not significant. CONCLUSIONS: The severity of COPD appears to be greater in type 4 patients, and type 3 patients tend to have greater hyperinflation. The distribution of emphysema could have a major impact on functional parameters and should be considered in the evaluation of COPD patients. PMID:26785956
Impact of ballistic body armour and load carriage on walking patterns and perceived comfort.
Park, Huiju; Branson, Donna; Petrova, Adriana; Peksoz, Semra; Jacobson, Bert; Warren, Aric; Goad, Carla; Kamenidis, Panagiotis
2013-01-01
This study investigated the impact of the weight magnitude and distribution of body armour and carried loads on military personnel's walking patterns and comfort perceptions. Spatio-temporal parameters of walking, plantar pressure and contact area were measured while seven healthy male right-handed military students wore seven different garments of varying weight (0.06, 9, 18 and 27 kg) and load distribution (balanced and unbalanced, on the front and back torso). Higher weight increased the foot contact time with the floor. In particular, weight placement on the non-dominant side of the front torso resulted in the greatest stance phase and double support. The increased plantar pressure and contact area observed with heavier loads entail increased impact forces, which can cause overuse injuries and foot blisters. Participants reported increasingly disagreeable pressure and strain in the shoulder, neck and lower back during heavier weight conditions, and unnatural walking while wearing unbalanced distributed loads. This study shows the potentially synergistic impact of wearing a body armour vest with differential loads on body movement and comfort perception. The findings indicate that soldiers should balance loads, avoiding load placement on the non-dominant side of the front torso, thus minimising mobility restriction and potential injury risk. Implications for armour vest design modifications can also be found in the results.
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
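The core idea above is to draw each imputation from a *distribution* of imputation models rather than a single model. A minimal sketch of that idea for one missing binary value: a nonignorability offset on the logit scale is drawn from a prior for each imputation draw. The logistic form, offset prior, and parameter values are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_binary(p_mar, n_draws=5, offset_sd=0.5):
    """Generate multiple imputations for a missing binary variable.

    p_mar is the success probability implied by a missing-at-random
    model. Each imputation shifts its logit by an offset drawn from a
    Normal(0, offset_sd) prior representing uncertainty about the
    missing data mechanism (nonignorability), then imputes a 0/1 value.
    Illustrative sketch only.
    """
    logit = np.log(p_mar / (1.0 - p_mar))
    offsets = rng.normal(0.0, offset_sd, size=n_draws)   # one model per draw
    probs = 1.0 / (1.0 + np.exp(-(logit + offsets)))
    return rng.random(n_draws) < probs                    # imputed values
```

Repeating this across missing cells yields nested multiple imputations whose between-model variance reflects mechanism uncertainty, which is what widens the post-imputation interval estimates.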
Yu, Manzhu; Yang, Chaowei
2016-01-01
Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~ 3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient, and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution for both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as to reproduce more accurate dust concentrations.
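The four evaluation metrics named above can be computed as follows. The formulas are the conventional definitions, which is an assumption on our part since the abstract does not spell them out.

```python
import numpy as np

def evaluation_metrics(model, obs):
    """Compute the four model-evaluation metrics from the abstract:
    mean bias error (MBE), root mean square error (RMSE), Pearson
    correlation coefficient, and fractional gross error (FGE).
    Conventional definitions assumed; obs and model must be positive
    for FGE to be well-defined.
    """
    m = np.asarray(model, dtype=float)
    o = np.asarray(obs, dtype=float)
    mbe = np.mean(m - o)                              # signed average error
    rmse = np.sqrt(np.mean((m - o) ** 2))             # magnitude of error
    corr = np.corrcoef(m, o)[0, 1]                    # linear association
    fge = 2.0 * np.mean(np.abs(m - o) / (m + o))      # bounded in [0, 2]
    return mbe, rmse, corr, fge
```

A positive MBE over the AOD field, for instance, is exactly the overestimation the default NMM-dust configuration exhibits.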
Lehmann, Sara; Gajek, Grzegorz; Chmiel, Stanisław; Polkowska, Żaneta
2016-12-01
The chemistry of glaciers is strongly determined by the long-distance transport of chemical substances and their wet and dry deposition on the glacier surface. This paper concerns the spatial distribution of metals, ions, and dissolved organic carbon, as well as the differentiation of physicochemical parameters (pH, electrical conductivity), determined in ice surface samples collected from four Arctic glaciers during the summer season in 2012. The studied glaciers represent three different morphological types: ground based (Blomlibreen and Scottbreen), tidewater which evolved to ground based (Renardbreen), and typical tidewater glacier (Recherchebreen). All of the glaciers function as a glacial system and hence are subject to the same physical processes (melting, freezing) and the process of ice flow resulting from the combined effects of gravity and topographic conditions. On this basis, the article discusses the correlation between morphometric parameters, changes in mass balance, geological characteristics of the glaciers, and the spatial distribution of analytes on the surface of the ice. A strong correlation (r = 0.63) is recorded between the aspect of glaciers and values of pH and ions, whereas dissolved organic carbon (DOC) depends on the minimum elevation of glaciers (r = 0.55) and most probably also on the development of the accumulation area. The obtained results suggest that although certain morphometric parameters largely determine the spatial distribution of analytes, the geology of the glacier bed also strongly affects the chemistry of the surface ice of glaciers in a phase of strong recession.
Optimizing the Hydrological and Biogeochemical Simulations on a Hillslope with Stony Soil
NASA Astrophysics Data System (ADS)
Zhu, Q.
2017-12-01
Stony soils are widely distributed in hilly areas. However, traditional pedotransfer functions are not reliable in predicting the soil hydraulic parameters for these soils due to the impacts of rock fragments. Therefore, large uncertainties and errors may exist in hillslope hydrological and biogeochemical simulations in stony soils due to poor estimations of soil hydraulic parameters. In addition, homogeneous soil hydraulic parameters are usually used in traditional hillslope simulations. However, soil hydraulic parameters are spatially heterogeneous on the hillslope. This may also cause unreliable simulations. In this study, we obtained soil hydraulic parameters using five different approaches on a tea hillslope in the Taihu Lake basin, China. These five approaches were (1) Rosetta-predicted and spatially homogeneous, (2) Rosetta-predicted and spatially heterogeneous, (3) Rosetta-predicted, rock-fragment corrected and spatially homogeneous, (4) Rosetta-predicted, rock-fragment corrected and spatially heterogeneous, and (5) extracted from observed soil-water retention curves fitted by a dual-pore function and spatially heterogeneous (observed). These five sets of soil hydraulic properties were then input into Hydrus-3D and DNDC to simulate the soil hydrological and biogeochemical processes. The aim of this study is to test two hypotheses. First, considering the spatial heterogeneity of soil hydraulic parameters will improve the simulations. Second, considering the impact of rock fragments on soil hydraulic parameters will improve the simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Huiping; Qian, Yun; Zhao, Chun
2015-09-09
In this study, we adopt a parametric sensitivity analysis framework that integrates the quasi-Monte Carlo parameter sampling approach and a surrogate model to examine aerosol effects on the East Asian Monsoon climate simulated in the Community Atmosphere Model (CAM5). A total of 256 CAM5 simulations are conducted to quantify the model responses to the uncertain parameters associated with cloud microphysics parameterizations and aerosol (e.g., sulfate, black carbon (BC), and dust) emission factors and their interactions. Results show that the interaction terms among parameters are important for quantifying the sensitivity of fields of interest, especially precipitation, to the parameters. The relative importance of cloud-microphysics parameters and emission factors (strength) depends on the evaluation metrics or the model fields we focus on, and the presence of uncertainty in cloud microphysics imposes an additional challenge in quantifying the impact of aerosols on cloud and climate. Due to their different optical and microphysical properties and spatial distributions, sulfate, BC, and dust aerosols have very different impacts on the East Asian Monsoon through aerosol-cloud-radiation interactions. The climatic effects of aerosols do not always have a monotonic response to changes in emission factors. The spatial patterns of both the sign and magnitude of aerosol-induced changes in radiative fluxes, cloud, and precipitation can differ depending on the aerosol type when parameters are sampled in different ranges of values. We also identify the cloud microphysical parameters that show the most significant impact on the climatic effects induced by sulfate, BC, and dust, respectively, in East Asia.
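The quasi-Monte Carlo sampling of the uncertain-parameter space can be sketched with a scrambled Sobol sequence, which fills the space more evenly than random sampling at the same budget. The three parameter names and ranges below are invented for illustration; only the ensemble size (2^8 = 256) matches the study.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical uncertain parameters: a cloud-microphysics scale factor
# plus BC and dust emission factors. Ranges are illustrative assumptions.
lower_bounds = [0.5, 0.1, 0.5]
upper_bounds = [2.0, 10.0, 2.0]

sampler = qmc.Sobol(d=3, scramble=True, seed=42)
unit_samples = sampler.random_base2(m=8)        # 2**8 = 256 points in [0, 1)^3
samples = qmc.scale(unit_samples, lower_bounds, upper_bounds)
```

Each row of `samples` would define one CAM5 run; the 256 (input, output) pairs then train the surrogate model on which the sensitivity and interaction analysis is performed.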
NASA Astrophysics Data System (ADS)
Liu, Q.; Chiu, L. S.; Hao, X.
2017-10-01
The abundance or lack of rainfall affects people's lives and activities. As a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representations at various spatial and temporal scales are crucial for many decision-making processes. Climate models show a warmer and wetter climate due to increases in Greenhouse Gases (GHG). However, the models' resolutions are often too coarse to be directly applicable to the local scales that are useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse-scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. The TRMM Multi-satellite Precipitation Analysis (TMPA) 0.25 degree, 3-hourly gridded rainfall data for a summer are aggregated to 0.5, 1.0, 2.0 and 2.5 degree and to 6-, 12- and 24-hourly, pentad (five days) and monthly resolutions. The probability distribution functions (PDF) and cumulative distribution functions (CDF) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov-Smirnov (KS) test, both for the mixed and the marginal distribution. These distributions are shown to be distinct. The marginal distributions are fitted with Lognormal and Gamma distributions, and it is found that the Gamma distributions fit much better than the Lognormal.
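Fitting the marginal (positive) rain-amount distribution with Gamma and Lognormal models and comparing them via the KS statistic can be sketched as below. Synthetic gamma-distributed amounts stand in for the TMPA data, which we do not have, so the fitted parameter values are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic positive rain amounts (stand-in for TMPA rain > 0 values)
rain = rng.gamma(shape=0.8, scale=5.0, size=2000)

# Fit both candidate marginal distributions with the location fixed at 0,
# since rain amounts are non-negative.
gamma_params = stats.gamma.fit(rain, floc=0)
lognorm_params = stats.lognorm.fit(rain, floc=0)

# Smaller KS statistic = better agreement between data and fitted CDF.
ks_gamma = stats.kstest(rain, 'gamma', args=gamma_params).statistic
ks_lognorm = stats.kstest(rain, 'lognorm', args=lognorm_params).statistic
```

Repeating this at each spatial/temporal aggregation level, and comparing fitted parameters across levels with two-sample KS tests, mirrors the analysis described in the abstract.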
NASA Technical Reports Server (NTRS)
Barlow, Nadine G.
1991-01-01
Many martian impact craters display ejecta morphologies suggestive of fluidization during ejecta emplacement. Impact into subsurface volatile reservoirs (i.e., water, ice, CO2, etc.) is the mechanism favored by many scientists, although acceptance of this mechanism is not unanimous. In recent years, a number of studies were undertaken to better understand possible relationships between ejecta morphology and latitude, longitude, crater diameter, and terrain. These results suggest that subsurface volatiles do influence the formation of specific ejecta morphologies and may provide clues to the vertical and horizontal distribution of volatiles in more localized regions of Mars. The location of these volatile reservoirs will be important to humans exploring and settling Mars in the future. Qualitative descriptions of ejecta morphology and quantitative analyses of ejecta sinuosity and ejecta lobe areal extent form the basis of the studies. Ejecta morphology studies indicate that morphology is correlated with crater diameter and latitude, and, using depth-diameter relationships, these correlations strongly suggest that changes in morphology are related to transitions among subsurface layers with varying amounts of volatiles. Ejecta sinuosity studies reveal correlations between degree of sinuosity (lobateness) and crater morphology, diameter, latitude, and terrain. Lobateness, together with variations in the areal extent of the lobate ejecta blanket with morphology and latitude, probably depends most directly on the ejecta emplacement process. The physical parameters measured here can be compared with those predicted by existing ejecta emplacement models. Some of these parameters are best reproduced by models requiring the incorporation of volatiles within the ejecta. 
However, inconsistencies between other parameters and the models indicate that more detailed modeling is necessary before the location of volatile reservoirs can be confidently predicted based on ejecta morphology studies alone.
Identifying Aquifer Heterogeneities using the Level Set Method
NASA Astrophysics Data System (ADS)
Lu, Z.; Vesselinov, V. V.; Lei, H.
2016-12-01
Material interfaces between hydrostratigraphic units (HSU) with contrasting aquifer parameters (e.g., strata and facies with different hydraulic conductivity) have a great impact on flow and contaminant transport in the subsurface. However, the identification of HSU shapes in the subsurface is challenging and typically relies on tomographic approaches where a series of steady-state/transient head measurements at spatially distributed observation locations are analyzed using inverse models. In this study, we developed a mathematically rigorous approach for identifying material interfaces among any arbitrary number of HSUs using the level set method. The approach was tested first with several synthetic cases, where the true spatial distribution of HSUs was assumed to be known and the head measurements were taken from the flow simulation with the true parameter fields. These synthetic inversion examples demonstrate that the level set method is capable of characterizing the spatial distribution of the heterogeneity. We then applied the methodology to a large-scale problem in which the spatial distribution of pumping wells and observation well screens is consistent with the actual aquifer contamination (chromium) site at the Los Alamos National Laboratory (LANL). In this way, we test the applicability of the methodology at an actual site, and we present preliminary results using actual LANL site data. We also investigated the impact of the number of pumping/observation wells and the drawdown observation frequencies/intervals on the quality of the inversion results, examined the uncertainties associated with the estimated HSU shapes, and assessed the accuracy of the results under different hydraulic-conductivity contrasts between the HSUs.
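The core of the level set representation is that the sign of a scalar function marks which unit occupies each point, so the material interface is the function's zero contour and can move freely during inversion without remeshing. A minimal two-material sketch follows; the geometry and conductivity values are illustrative assumptions, not site data.

```python
import numpy as np

def conductivity_field(phi, k_inside, k_outside):
    """Map a level-set function phi to a two-material hydraulic
    conductivity field: phi < 0 marks one hydrostratigraphic unit,
    phi >= 0 the other. The interface is the zero contour of phi.
    A sketch of the representation only, not the full inversion.
    """
    return np.where(phi < 0.0, k_inside, k_outside)

# Example: a circular low-conductivity inclusion in a 2-D domain.
x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
phi = np.hypot(x, y) - 0.5          # signed distance to a circle, radius 0.5
K = conductivity_field(phi, k_inside=1e-6, k_outside=1e-4)   # m/s, illustrative
```

In the inversion, phi itself (or its parameterization) is the unknown, updated so that heads simulated on the resulting K field match the tomographic drawdown observations; multiple level-set functions extend this to an arbitrary number of HSUs.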
Shape Distribution of Fragments from Microsatellite Impact Tests
NASA Technical Reports Server (NTRS)
Liou, J.C.; Hanada, T.
2009-01-01
Fragment shape is an important factor for conducting reliable orbital debris damage assessments for critical space assets, such as the International Space Station. To date, seven microsatellite impact tests have been completed as part of an ongoing collaboration between Kyushu University and the NASA Orbital Debris Program Office. The target satellites ranged in size from 15 cm × 15 cm × 15 cm to 20 cm × 20 cm × 20 cm. Each target satellite was equipped with fully functional electronics, including circuits, battery, and transmitter. Solar panels and multi-layer insulation (MLI) were added to the target satellites of the last two tests. The impact tests were carried out with projectiles of different sizes and impact speeds. All fragments down to about 2 mm in size were collected and analyzed based on their three orthogonal dimensions, x, y, and z, where x is the longest dimension, y is the longest dimension in the plane perpendicular to x, and z is the longest dimension perpendicular to both x and y. Each fragment was also photographed and classified by shape and material composition. This data set serves as the basis of our effort to develop a fragment shape distribution. Two distinct groups can be observed in the x/y versus y/z distribution of the fragments. Objects in the first group typically have large x/y values. Many of them are needle-like objects originating from the fragmentation of carbon fiber reinforced plastic materials used to construct the satellites. Objects in the second group tend to have small x/y values, and many of them are box-like or plate-like objects, depending on their y/z values. Each group forms the corresponding peak in the x/y distribution. However, only one peak can be observed in the y/z distribution. These distributions and how they vary with size, material type, and impact parameters will be described in detail within the paper.
NASA Astrophysics Data System (ADS)
Jumelet, Julien; David, Christine; Bekki, Slimane; Keckhut, Philippe
2009-01-01
The determination of stratospheric particle microphysical properties from multiwavelength lidar, including Rayleigh and/or Raman detection, has been widely investigated. However, most lidar systems are uniwavelength, operating at 532 nm. Although the information content of such lidar data is too limited to allow the retrieval of the full size distribution, the coupling of two or more uniwavelength lidar measurements probing the same moving air parcel may provide some meaningful size information. Within the ORACLE-O3 IPY project, the coordination of several ground-based lidars and the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) space-borne lidar is planned during measurement campaigns called MATCH-PSC (Polar Stratospheric Clouds). While probing the same moving air masses, the evolution of the measured backscatter coefficient (BC) should reflect the variation of particle microphysical properties. A sensitivity study of 532 nm lidar particle backscatter to variations in particle size distribution parameters is carried out. For simplicity, the particles are assumed to be spherical (liquid) and the size distribution is represented with a unimodal log-normal distribution. Each of the four microphysical parameters (i.e. the three log-normal size distribution parameters and the refractive index) is analysed separately, while the other three remain set to constant reference values. Overall, the BC behaviour is not affected by the initial values taken as references. The total concentration (N0) is the parameter to which BC is least sensitive, whereas it is most sensitive to the refractive index (m). A 2% variation of m induces a 15% variation of the lidar BC, while the uncertainty on the BC retrieval can also reach 15%. This result underlines the importance of having both an accurate lidar inversion method and a good knowledge of the temperature for size distribution retrieval techniques. 
The standard deviation (σ) is the second parameter to which BC is most sensitive. Yet, the impact of m and σ on BC variations is limited by the realistic range of their variations. The mean radius (rm) of the size distribution is thus the key parameter for BC, as it can vary several-fold. BC is most sensitive to the presence of large particles. The sensitivity of BC to rm and σ variations increases when the initial size distributions are characterized by low rm and large σ. This makes lidar more suitable for detecting particles growing on background aerosols than on volcanic aerosols.
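The strong sensitivity of BC to the mean radius can be conveyed with a Rayleigh-regime proxy, BC ∝ N0·⟨r⁶⟩, using the closed-form sixth moment of a log-normal size distribution. This is a deliberate simplification of the Mie computation actually required at 532 nm, shown only to illustrate the scaling argument, not to reproduce the study's numbers.

```python
import numpy as np

def backscatter_proxy(n0, rm, sigma_g):
    """Rayleigh-regime proxy for particle backscatter: BC ~ N0 * <r^6>,
    with the 6th raw moment of a unimodal log-normal distribution
    (median radius rm, geometric standard deviation sigma_g):
        <r^6> = rm**6 * exp(18 * ln(sigma_g)**2)
    A strong simplification of the full Mie calculation (assumption).
    """
    return n0 * rm**6 * np.exp(18.0 * np.log(sigma_g) ** 2)
```

Under this proxy, doubling rm multiplies BC by 2^6 = 64 while doubling N0 only doubles it, which is consistent with the abstract's ranking of rm as the key parameter and N0 as the least sensitive one.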
NASA Astrophysics Data System (ADS)
Pechlivanidis, Ilias; McIntyre, Neil; Wheater, Howard
2017-04-01
Rainfall, one of the main inputs in hydrological modeling, is a highly heterogeneous process over a wide range of spatial scales, and hence ignoring spatial rainfall information can affect the simulated streamflow. Calibration of hydrological model parameters is rarely a straightforward task due to parameter equifinality and the tendency of parameters to compensate for other uncertainties, i.e. structural and forcing-input errors. Here, we analyse the significance of the spatial variability of rainfall on streamflow as a function of catchment scale and type, and antecedent conditions, using the continuous-time, semi-distributed PDM hydrological model at the Upper Lee catchment, UK. The impact of catchment scale and type is assessed using 11 nested catchments ranging in scale from 25 to 1040 km2, and further assessed by artificially changing the catchment characteristics and translating these to model parameters with uncertainty using model regionalisation. Synthetic rainfall events are introduced to directly relate the change in simulated streamflow to the spatial variability of rainfall. Overall, we conclude that antecedent catchment wetness and catchment type play an important role in controlling the significance of the spatial distribution of rainfall on streamflow. Results show a relationship between hydrograph characteristics (streamflow peak and volume) and the degree of spatial variability of rainfall for the impermeable catchments under dry antecedent conditions, although this decreases at larger scales; however, this sensitivity is significantly undermined under wet antecedent conditions. Although there is indication that the impact of spatial rainfall on streamflow varies as a function of catchment scale, the variability of antecedent conditions between the synthetic catchments seems to mask this significance. 
Finally, hydrograph responses to different spatial patterns in rainfall depend on assumptions used for model parameter estimation and also the spatial variation in parameters indicating the need of an uncertainty framework in such investigation.
Modelling lifetime data with multivariate Tweedie distribution
NASA Astrophysics Data System (ADS)
Nor, Siti Rohani Mohd; Yusof, Fadhilah; Bahar, Arifah
2017-05-01
This study aims to measure the dependence between individual lifetimes by applying the multivariate Tweedie distribution to lifetime data. Dependence between lifetimes incorporated in the mortality model is a new idea that has a significant impact on the risk of the annuity portfolio, in contrast to standard actuarial methods, which assume independence between lifetimes. Hence, this paper applies the Tweedie family of distributions to the portfolio of lifetimes to induce dependence between lives. The Tweedie distribution is chosen since it contains symmetric and non-symmetric, as well as light-tailed and heavy-tailed, distributions. Parameter estimation is modified in order to fit the Tweedie distribution to the data. This procedure is developed using the method of moments. In addition, a comparison stage is made to check the adequacy of fit between observed and expected mortality. Finally, the importance of including systematic mortality risk in the model is justified by Pearson's chi-squared test.
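A simplified univariate sketch of the method-of-moments idea: a Tweedie distribution satisfies the variance relation Var(Y) = φ·μ^p, which ties the dispersion φ to the mean μ for a given power parameter p. The paper's modified estimation for the multivariate case is more involved; this only illustrates the moment-matching step, with p treated as known.

```python
import numpy as np

def tweedie_dispersion(sample, p):
    """Method-of-moments estimate of the Tweedie dispersion phi from
    the variance relation Var(Y) = phi * mu**p, for a given (assumed)
    power parameter p. p = 1 corresponds to (scaled) Poisson, p = 2 to
    Gamma, 1 < p < 2 to the compound Poisson-Gamma case.
    """
    y = np.asarray(sample, dtype=float)
    mu = y.mean()                    # first moment
    var = y.var(ddof=1)              # second central moment (unbiased)
    return var / mu**p
```

Matching sample moments to these model moments, jointly across lives in the multivariate setting, is what lets the fitted model carry dependence between lifetimes into the annuity-portfolio risk calculation.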
NASA Astrophysics Data System (ADS)
Wangerin, Kristen A.; Muzi, Mark; Peterson, Lanell M.; Linden, Hannah M.; Novakova, Alena; Mankoff, David A.; E Kinahan, Paul
2017-05-01
We developed a method to evaluate variations in the PET imaging process in order to characterize the relative ability of static and dynamic metrics to measure breast cancer response to therapy in a clinical trial setting. We performed a virtual clinical trial by generating 540 independent and identically distributed PET imaging study realizations for each of 22 original dynamic fluorodeoxyglucose (18F-FDG) breast cancer patient studies pre- and post-therapy. Each noise realization accounted for known sources of uncertainty in the imaging process, such as biological variability and SUV uptake time. Four definitions of SUV were analyzed: SUVmax, SUVmean, SUVpeak, and SUV50%. We performed a ROC analysis on the resulting SUV and kinetic parameter uncertainty distributions to assess the impact of the variability on the measurement capabilities of each metric. The kinetic macroparameter, Ki, showed more variability than SUV (mean CV: Ki = 17%, SUV = 13%), but the Ki pre- and post-therapy distributions also showed increased separation compared to the SUV pre- and post-therapy distributions (mean normalized difference: Ki = 0.54, SUV = 0.27). For the patients who did not show perfect separation between the pre- and post-therapy parameter uncertainty distributions (ROC AUC < 1), dynamic imaging outperformed SUV in distinguishing metabolic change in response to therapy, ranging from 12 to 14 of 16 patients over all SUV definitions and uptake time scenarios (p < 0.05). For the patient cohort in this study, which comprises non-high-grade ER+ tumors, Ki outperformed SUV in an ROC analysis of the parameter uncertainty distributions pre- and post-therapy. This methodology can be applied to different scenarios with the ability to inform the design of clinical trials using PET imaging.
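The ROC AUC separating a pre-therapy from a post-therapy parameter uncertainty distribution can be computed directly from the two sets of realizations via the Mann-Whitney relation, with no explicit threshold sweep. Here a decrease in the parameter is taken as the "response" direction, which is an assumption about orientation, not a detail from the abstract.

```python
import numpy as np

def roc_auc(pre, post):
    """ROC AUC for distinguishing two parameter uncertainty
    distributions, via the Mann-Whitney relation: the probability that
    a random post-therapy draw is lower than a random pre-therapy draw
    (ties count half). AUC = 1 means perfect separation, 0.5 = none.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    diff = pre[:, None] - post[None, :]           # all pairwise comparisons
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
```

Applied per patient to the 540 pre- and 540 post-therapy realizations of Ki (or of each SUV definition), this yields exactly the AUC < 1 cases in which the metrics can be compared, as done in the abstract.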
Scaling up experimental ocean acidification and warming research: from individuals to the ecosystem
NASA Astrophysics Data System (ADS)
Queiros, A. M.
2016-02-01
Understanding long-term, ecosystem-level impacts of climate change is challenging because experimental research frequently focuses on short-term, individual-level impacts in isolation. We address this shortcoming first through an inter-disciplinary ensemble of novel experimental techniques to investigate the impacts of 14-month exposure to ocean acidification and warming (OAW) on the physiology, activity, predatory behaviour and susceptibility to predation of an important marine gastropod (Nucella lapillus). We simultaneously estimated the potential impacts of these global drivers on N. lapillus population dynamics and dispersal parameters. We then used these data to parameterise a dynamic bioclimatic envelope model, to investigate the consequences of OAW for the distribution of the species in the wider NE Atlantic region by 2100. The model also accounts for changes in the distribution of resources, suitable habitat and environment simulated by finely resolved biogeochemical models, under three IPCC global emissions scenarios. The experiments showed that temperature had the greatest impact on individual-level responses, while acidification had a similarly important role in the mediation of predatory behaviour and susceptibility to predators. Changes in Nucella predatory behaviour appeared to serve as a strategy to mitigate individual-level impacts of acidification, but the development of this response may be limited in the presence of predators. The model projected significant large-scale changes in the distribution of Nucella by the year 2100 that were exacerbated by rising greenhouse gas emissions. These changes were spatially heterogeneous, as the degree of impact of OAW on the combination of responses considered by the model varied depending on local environmental conditions and resource availability.
Such changes in macro-scale distributions cannot be predicted by investigating individual level impacts in isolation, or by considering climate stressors separately. Scaling up the results of experimental climate change research requires approaches that account for long-term, multi-scale responses to multiple stressors, in an ecosystem context.
Scaling up experimental ocean acidification and warming research: from individuals to the ecosystem.
Queirós, Ana M; Fernandes, José A; Faulwetter, Sarah; Nunes, Joana; Rastrick, Samuel P S; Mieszkowska, Nova; Artioli, Yuri; Yool, Andrew; Calosi, Piero; Arvanitidis, Christos; Findlay, Helen S; Barange, Manuel; Cheung, William W L; Widdicombe, Stephen
2015-01-01
Understanding long-term, ecosystem-level impacts of climate change is challenging because experimental research frequently focuses on short-term, individual-level impacts in isolation. We address this shortcoming first through an interdisciplinary ensemble of novel experimental techniques to investigate the impacts of 14-month exposure to ocean acidification and warming (OAW) on the physiology, activity, predatory behaviour and susceptibility to predation of an important marine gastropod (Nucella lapillus). We simultaneously estimated the potential impacts of these global drivers on N. lapillus population dynamics and dispersal parameters. We then used these data to parameterize a dynamic bioclimatic envelope model, to investigate the consequences of OAW for the distribution of the species in the wider NE Atlantic region by 2100. The model also accounts for changes in the distribution of resources, suitable habitat and environment simulated by finely resolved biogeochemical models, under three IPCC global emissions scenarios. The experiments showed that temperature had the greatest impact on individual-level responses, while acidification had a similarly important role in the mediation of predatory behaviour and susceptibility to predators. Changes in Nucella predatory behaviour appeared to serve as a strategy to mitigate individual-level impacts of acidification, but the development of this response may be limited in the presence of predators. The model projected significant large-scale changes in the distribution of Nucella by the year 2100 that were exacerbated by rising greenhouse gas emissions. These changes were spatially heterogeneous, as the degree of impact of OAW on the combination of responses considered by the model varied depending on local environmental conditions and resource availability.
Such changes in macro-scale distributions cannot be predicted by investigating individual-level impacts in isolation, or by considering climate stressors separately. Scaling up the results of experimental climate change research requires approaches that account for long-term, multiscale responses to multiple stressors, in an ecosystem context. © 2014 John Wiley & Sons Ltd.
Characterization and Remediation of Contaminated Sites:Modeling, Measurement and Assessment
NASA Astrophysics Data System (ADS)
Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.
2008-05-01
The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter, at field sites contaminated with dense non-aqueous phase liquids (DNAPLs) in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and the flux architecture was investigated in three-dimensional, complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the standard deviation of the contaminant flux decreased proportionally with the mean (i.e., the coefficient of variation of the flux distribution is constant with time). A similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve a target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distributions.
The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
Bio-physical vs. Economic Uncertainty in the Analysis of Climate Change Impacts on World Agriculture
NASA Astrophysics Data System (ADS)
Hertel, T. W.; Lobell, D. B.
2010-12-01
Accumulating evidence suggests that agricultural production could be greatly affected by climate change, but there remains little quantitative understanding of how these agricultural impacts would affect economic livelihoods in poor countries. The recent paper by Hertel, Burke and Lobell (GEC, 2010) considers three scenarios of agricultural impacts of climate change, corresponding to the fifth, fiftieth, and ninety-fifth percentiles of projected yield distributions for the world's crops in 2030. They evaluate the resulting changes in global commodity prices, national economic welfare, and the incidence of poverty in a set of 15 developing countries. Although the small price changes under the medium scenario are consistent with previous findings, their low-productivity scenario reveals the potential for much larger food price changes than reported in recent studies, which have hitherto focused on the most likely outcomes. The poverty impacts of price changes under the extremely adverse scenario are quite heterogeneous and very significant in some population strata. They conclude that it is critical to look beyond central-case climate shocks and beyond a simple focus on yields and highly aggregated poverty impacts. In this paper, we conduct a more formal, systematic sensitivity analysis (SSA) with respect to uncertainty in the biophysical impacts of climate change on agriculture, by explicitly specifying joint distributions for global yield changes, this time focusing on 2050. This permits us to place confidence intervals on the resulting price impacts and poverty results that reflect the uncertainty inherited from the biophysical side of the analysis. We contrast this with the economic uncertainty inherited from the global general equilibrium model (GTAP) by undertaking SSA with respect to the behavioral parameters in that model. This permits us to assess which type of uncertainty is more important for regional price and poverty outcomes.
Finally, we undertake a combined SSA, wherein climate change-induced productivity shocks are permitted to interact with the uncertain economic parameters. This permits us to examine potential interactions between the two sources of uncertainty.
Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects
NASA Astrophysics Data System (ADS)
Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.
2013-08-01
In atmospheric modelling applications the aerosol particle size distribution is commonly represented by a modal approach, in which particles are described with log-normal modes confined to predetermined size ranges. Such a method involves the numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: the cloud droplet number concentration, the aerosol-cloud interaction (ACI) coefficient and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines and a high-resolution sectional model that served as a reference. We analysed the relative differences of the parameters in experiments designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes, and the subsequent numerical remapping of the distribution by reallocation, leads on average to an underestimation of the cloud droplet number concentration (by up to 100%) and an overestimation of light extinction (by up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, in the case of atmospheric new particle formation events followed by rapid particle growth, for example, reallocation can cause on average a 10% overestimation of the ACI parameter. It is thus shown that reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.
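A deliberately crude toy (an assumption of this sketch, not the models actually compared in the study) can illustrate the reallocation mechanism: each mode carries a number concentration and a mean diameter, and when growth pushes a mode's mean past its allowed upper bound, its particles are handed to the next mode, artificially reshaping the distribution.

```python
# Toy sketch of modal "reallocation": mode bounds, concentrations and the
# growth factor are invented for illustration. A real modal scheme tracks
# log-normal moments; here each mode is just (number conc., mean diameter).

MODE_BOUNDS = [(1e-9, 1e-7), (1e-7, 1e-6)]   # allowed diameter ranges [m]

def grow_and_reallocate(modes, growth_factor):
    """Grow all mode mean diameters, then reallocate out-of-range modes."""
    new = [[0.0, b[0]] for b in MODE_BOUNDS]
    for i, (n, d) in enumerate(modes):
        d *= growth_factor
        j = i
        while j + 1 < len(MODE_BOUNDS) and d > MODE_BOUNDS[j][1]:
            j += 1                            # particle number jumps modes
        new[j][0] += n
        new[j][1] = d
    return [tuple(m) for m in new]

modes = [(1000.0, 5e-8), (10.0, 2e-7)]        # (cm^-3, m): Aitken + accumulation
modes = grow_and_reallocate(modes, 3.0)       # strong condensational growth event
```

After the growth step the entire Aitken-mode population has been remapped into the accumulation mode, the kind of discrete jump that a high-resolution sectional model would represent as a smooth shift.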
NASA Astrophysics Data System (ADS)
Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.
2017-08-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and of model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates for poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study explores the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
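The role of the DWT as a data-reduction step can be sketched with a one-level Haar transform: a rainfall series is concentrated into a few large coefficients, so the inversion can estimate those instead of every daily value. The rainfall values and the wavelet choice here are illustrative assumptions, not the study's configuration.

```python
# Illustrative one-level Haar DWT and its exact inverse. Intermittent
# rainfall produces many near-zero coefficients, which is what makes
# the transform useful for reducing the dimension of the inverse problem.

def haar_dwt(series):
    """One-level Haar transform: (approximation, detail) coefficients."""
    approx = [(a + b) / 2 for a, b in zip(series[::2], series[1::2])]
    detail = [(a - b) / 2 for a, b in zip(series[::2], series[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: interleave sums and differences."""
    out = []
    for s, d in zip(approx, detail):
        out += [s + d, s - d]
    return out

rain = [0.0, 0.0, 12.5, 3.1, 0.0, 0.0, 7.4, 0.2]   # illustrative daily rainfall (mm)
approx, detail = haar_dwt(rain)
recovered = haar_idwt(approx, detail)               # reconstruction of the series
```

Deeper decomposition levels (transforming `approx` again) compress the series further; the abstract's observation that the wavelet choice matters corresponds to swapping the Haar filter for a longer one.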
Revised scaling laws for asteroid disruptions
NASA Astrophysics Data System (ADS)
Jutzi, M.
2014-07-01
Models for the evolution of small-body populations of the solar system (e.g., the asteroid main belt) compute the time-dependent size and velocity distributions of the objects as a result of both collisional and dynamical processes. A scaling parameter often used in such numerical models is the critical specific impact energy Q^*_D, which results in the escape of half of the target's mass in a collision; the parameter Q^*_D is called the catastrophic impact energy threshold. We present recent improvements of the Smooth Particle Hydrodynamics (SPH) technique (Benz and Asphaug 1995, Jutzi et al. 2008, Jutzi 2014) for the modeling of the disruption of small bodies. Using the improved models, we then systematically study the effects of various target properties (e.g., strength, porosity, and friction) on the outcome of disruptive collisions (Figure), and we compute the corresponding Q^*_D curves as a function of target size. For a given specific impact energy and impact angle, the outcome of a collision in terms of Q^*_D depends not only on the properties of the bodies involved, but also on the impact velocity and the target/impactor size ratio. Leinhardt and Stewart (2012) proposed scaling laws to predict the outcome of collisions over a wide range of impact velocities (m/s to km/s), target sizes and target/impactor mass ratios. These scaling laws are based on a "principal disruption curve" defined for collisions between equal-sized bodies: Q^*_{RD,γ=1} = c^* (4/5) π ρ G R_{C1}^2, where the parameter c^* is a measure of the dissipation of energy within the target, R_{C1} is the radius of a body with the combined mass of target and projectile at a density ρ = 1000 kg/m^3, and γ is the mass ratio. The dissipation parameter c^* is proposed to be 5±2 for bodies with strength and 1.9±0.3 for hydrodynamic bodies (Leinhardt and Stewart 2012). We will present values for c^* based on our SPH simulations using various target properties and impact conditions. 
We will also discuss the validity of the principal disruption curve (with a single parameter c^*) for a wide range of sizes and impact velocities. Our preliminary results indicate that for a given target, c^* can vary significantly (by a factor of ~10) as the impact velocity changes from subsonic to supersonic.
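For concreteness, the principal disruption curve quoted above can be evaluated numerically. The 50 km combined-mass radius is an arbitrary illustrative choice; c* = 1.9 is the hydrodynamic value cited in the abstract.

```python
# Worked evaluation of the principal disruption curve
#   Q*_RD(gamma=1) = c* * (4/5) * pi * rho * G * R_C1**2
# with rho = 1000 kg/m^3 as in the scaling law. Radius choice is illustrative.

import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
RHO = 1000.0         # reference density [kg m^-3] used by the scaling law
C_STAR = 1.9         # dissipation parameter for hydrodynamic bodies

def q_star_rd(radius_c1, c_star=C_STAR):
    """Catastrophic disruption threshold [J/kg] for equal-sized bodies."""
    return c_star * 0.8 * math.pi * RHO * G * radius_c1 ** 2

q = q_star_rd(50e3)   # ~8e2 J/kg for a 50 km combined-mass radius
```

The quadratic dependence on R_{C1} reflects the gravity-dominated regime; a factor-of-10 spread in c*, as reported in the preliminary results, shifts this threshold by the same factor.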
NASA Astrophysics Data System (ADS)
Xie, Cailang; Guo, Ying; Liao, Qin; Zhao, Wei; Huang, Duan; Zhang, Ling; Zeng, Guihua
2018-03-01
How to narrow the gap between theoretical and practical security has been a notoriously urgent problem in quantum cryptography. Here, we analyze and provide experimental evidence of the clock jitter effect on a practical continuous-variable quantum key distribution (CV-QKD) system. Clock jitter is a random noise that exists permanently in the clock synchronization of a practical CV-QKD system, and it may compromise the system's security through its impact on data sampling and parameter estimation. In particular, the practical security of CV-QKD with different levels of clock jitter against collective attacks is analyzed theoretically for different repetition frequencies; the numerical simulations indicate that clock jitter has a greater impact in high-speed scenarios. Furthermore, a simplified experiment is designed to investigate the influence of the clock jitter.
Scaling, Microstructure and Dynamic Fracture
NASA Astrophysics Data System (ADS)
Minich, Roger W.; Kumar, Mukul; Schwarz, Adam; Cazamias, James
2006-07-01
The relationship between pullback velocity and impact velocity is studied for different microstructures in Cu. A size distribution of potential nucleation sites is derived under the conditions of an applied stochastic stress field. The size distribution depends on the amplitude of the stress fluctuations, which may be proportional to the flow stress thereby providing a connection between plastic flow and microvoid nucleation rate. The pullback velocity in turn depends on the nucleation rate resulting in a prediction for the relationship between pullback velocity and flow stress. The theory is compared to results from Cu on Cu gas-gun experiments (10-50 GPa) with different microstructures. The scaling law relating pullback velocity and impact velocity is incorporated into a 1D finite difference code and is shown to reproduce the experimental data with one adjustable parameter, the nucleation exponent, Γ.
Some properties of a 5-parameter bivariate probability distribution
NASA Technical Reports Server (NTRS)
Tubbs, J. D.; Brewer, D. W.; Smith, O. E.
1983-01-01
A five-parameter bivariate gamma distribution having two shape parameters, two location parameters and a correlation parameter was developed. This more general bivariate gamma distribution reduces to the known four-parameter distribution. The five-parameter distribution gives a better fit to the gust data. The statistical properties of this general bivariate gamma distribution and a hypothesis test were investigated. Although these developments have come too late in the Shuttle program to be used directly as design criteria for ascent wind gust loads, the new wind gust model has helped to explain the wind profile conditions which cause large dynamic loads. Other potential applications of the newly developed five-parameter bivariate gamma distribution are in the areas of reliability theory, signal noise, and vibration mechanics.
Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schee, Jan; Stuchlík, Zdeněk, E-mail: jan.schee@fpf.slu.cz, E-mail: zdenek.stuchlik@fpf.slu.cz
We study the deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter as a function of the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low-impact-parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in the black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetime having a nearly flat central region.
NASA Astrophysics Data System (ADS)
Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel
2016-04-01
Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key requirement for environmental modeling is knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the one most commonly used for modeling the soil water retention curve (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages over the widely used global optimization techniques: the Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allows for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimate. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period at pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter, along with 95% confidence intervals (CI) of the volumetric water content and tension time series. The posterior distributions were then used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture.
The parameters of the nearby site were accepted when the predicted tension time series fell within the 95% CI derived from the calibration site using the DREAM scheme.
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimation of the area-average rain rate relies on determination of the fractional area where the rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by the assumption of lognormal distributions with different scale parameters and the same shape parameter.
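The threshold method itself can be sketched in a few lines: the area-average rain rate is estimated as S(tau) * F(tau), where F is the fraction of gauges exceeding the threshold tau and S is a climatological coefficient calibrated from a training distribution. The gauge values below are invented for illustration, not the Darwin data.

```python
# Minimal sketch of the threshold method with hypothetical gauge data.
# Calibrating and applying on the same sample recovers the mean exactly;
# the method's real skill depends on S being stable across regimes.

def threshold_coefficient(rates, tau):
    """S(tau) = mean rain rate / fraction of observations above tau."""
    frac = sum(r > tau for r in rates) / len(rates)
    mean = sum(rates) / len(rates)
    return mean / frac

def estimate_area_average(rates, tau, s):
    """Area-average estimate from the exceedance fraction alone."""
    frac = sum(r > tau for r in rates) / len(rates)
    return s * frac

climatology = [0.0, 0.0, 1.0, 2.0, 5.0, 12.0, 20.0, 40.0]  # mm/h, illustrative
s10 = threshold_coefficient(climatology, tau=10.0)
estimate = estimate_area_average(climatology, tau=10.0, s=s10)
```

The abstract's finding is, in these terms, that S(10 mm/h) calibrated on premonsoon data also works for the monsoon regime, i.e., the coefficient is insensitive to the change in the underlying RRD.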
Schürmann, Tim; Beckerle, Philipp; Preller, Julia; Vogt, Joachim; Christ, Oliver
2016-12-19
In product development for lower limb prosthetic devices, a set of special criteria needs to be met. Prosthetic devices have a direct impact on the rehabilitation process after an amputation, with both perceived technological and psychological aspects playing an important role. However, available psychometric questionnaires fail to consider the important links between these two dimensions. In this article a probabilistic latent trait model is proposed with seven technical and psychological factors that measure satisfaction with the prosthesis. The results of a first study are used to determine the basic parameters of the statistical model; these distributions represent hypotheses about factor loadings between manifest items and latent factors of the proposed psychometric questionnaire. A preliminary study was conducted and analyzed to form hypotheses for the prior distributions of the questionnaire's measurement model: an expert agreement study conducted with 22 experts was used to determine the prior distributions of item-factor loadings in the model. The model parameters that had to be specified as part of the measurement model were informed prior distributions on the item-factor loadings. For the current 70 items in the questionnaire, each factor loading was set to represent the certainty with which experts had assigned the items to their respective factors. Considering only the measurement model and not the structural model of the questionnaire, 70 out of 217 informed prior distributions on parameters were set. The use of preliminary studies to set prior distributions in latent trait models, while a relatively new approach in psychological research, provides helpful information towards the design of a seven-factor questionnaire that aims to identify relations between technical and psychological factors in prosthetic product design and rehabilitation medicine.
A model of objective weighting for EIA.
Ying, L G; Liu, Y C
1995-06-01
In spite of progress achieved in research on environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not, as yet, been properly solved. This paper presents an approach to objective weighting using a procedure of Pij principal component-factor analysis (Pij PCFA), which specifically suits those parameters measured directly on physical scales. The Pij PCFA weighting procedure reforms conventional weighting practice in two respects: first, expert subjective judgment is replaced by the standardized measure Pij as the original input of weight processing; secondly, principal component-factor analysis is introduced to assess the environmental parameters for their respective contributions to the totality of the regional ecosystem. Not only is Pij PCFA weighting logical in its theoretical reasoning, it also suits practically all levels of professional routine in natural environmental assessment and impact analysis. Having been assured of objectivity and accuracy in an EIA case study of Chuansha County in Shanghai, China, the Pij PCFA weighting procedure has the potential to be applied in other geographical fields that need to assign weights to parameters measured on physical scales.
On the effect of model parameters on forecast objects
NASA Astrophysics Data System (ADS)
Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott
2018-04-01
Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
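Ingredient (1), Latin hypercube sampling, can be sketched as follows; the two parameter ranges are hypothetical stand-ins for tunable model parameters, not parameters from the study.

```python
# Sketch of Latin hypercube sampling: each parameter's range is split into
# n equal-probability bins, one value is drawn per bin, and the bin orders
# are shuffled independently so the parameters are decorrelated.

import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Return n_samples tuples, one stratified draw per bin per parameter."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        bins = list(range(n_samples))
        rng.shuffle(bins)                      # decorrelate this parameter
        width = (hi - lo) / n_samples
        columns.append([lo + (b + rng.random()) * width for b in bins])
    return list(zip(*columns))                 # one tuple per model run

bounds = [(0.1, 1.0), (200.0, 400.0)]          # two hypothetical parameter ranges
runs = latin_hypercube(8, bounds)              # 8 parameter settings to run
```

Compared with plain random sampling, every marginal range is guaranteed to be covered: exactly one run falls in each of the 8 bins of each parameter.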
Form factors and generalized parton distributions in basis light-front quantization
NASA Astrophysics Data System (ADS)
Adhikari, Lekha; Li, Yang; Zhao, Xingbo; Maris, Pieter; Vary, James P.; El-Hady, Alaa Abd
2016-05-01
We calculate the elastic form factors and the generalized parton distributions (GPDs) for four low-lying bound states of a demonstration fermion-antifermion system, strong-coupling positronium (e ē), using basis light-front quantization (BLFQ). Using this approach, we also calculate the impact-parameter-dependent GPDs q(x, b⊥) to visualize the fermion density in the transverse plane (b⊥). We compare selected results with corresponding quantities in the nonrelativistic limit to reveal relativistic effects. Our results establish the foundation within BLFQ for investigating the form factors and the GPDs of hadronic systems.
NASA Astrophysics Data System (ADS)
Ma, Wen; Liu, Fushun
Voids are inevitable in the fabrication of fiber-reinforced composites and have a detrimental impact on the mechanical properties of composites. Different void contents were obtained by applying different vacuum bag pressures. Ultrasonic inspection and the ablation density method were adopted to measure the ultrasonic characteristic parameters and the average porosity, and the characterization of void distribution, shape and size was carried out through metallographic analysis. The effects of void content on the tensile, flexural and interlaminar shear properties and on the ultrasonic characteristic parameters were discussed. The results showed that, as the vacuum bag pressure went from -50 kPa to -98 kPa, the void content decreased from 4.36% to 0.34% and the ultrasonic attenuation coefficient decreased, while the mechanical strengths all increased.
Understanding asteroid collisional history through experimental and numerical studies
NASA Technical Reports Server (NTRS)
Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.
1991-01-01
Asteroids can lose angular momentum through the so-called "splash" effect, the analog of the drain effect for cratering impacts. A numerical code incorporating the splash effect was applied to study the simultaneous evolution of asteroid sizes and spins. Results are presented on the spin changes of asteroids due to the various physical effects incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used for both the size distribution and the spin distribution of asteroids, and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, and that the observed relative spindown of asteroids of approximately 100 km diameter is likely to be the result of the angular momentum splash effect.
Impact of Reactor Operating Parameters on Cask Reactivity in BWR Burnup Credit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilas, Germina; Betzler, Benjamin R; Ade, Brian J
This paper discusses the effect of reactor operating parameters used in fuel depletion calculations on spent fuel cask reactivity, with relevance for boiling-water reactor (BWR) burnup credit (BUC) applications. Assessments that used generic BWR fuel assembly and spent fuel cask configurations are presented. The considered operating parameters, which were independently varied in the depletion simulations for the assembly, included fuel temperature, bypass water density, specific power, and operating history. Different operating history scenarios were considered for the assembly depletion to determine the effect of the relative power distribution during the irradiation cycles, as well as the downtime between cycles. Depletion, decay, and criticality simulations were performed using computer codes and associated nuclear data within the SCALE code system. Results quantifying the dependence of cask reactivity on the assembly depletion parameters are presented herein.
Gabbott, Ian P; Al Husban, Farhan; Reynolds, Gavin K
2016-09-01
A pharmaceutical compound was used to study the effect of batch wet granulation process parameters in combination with the residual moisture content remaining after drying on granule and tablet quality attributes. The effect of three batch wet granulation process parameters was evaluated using a multivariate experimental design, with a novel constrained design space. Batches were characterised for moisture content, granule density, crushing strength, porosity, disintegration time and dissolution. Mechanisms of the effect of the process parameters on the granule and tablet quality attributes are proposed. Water quantity added during granulation showed a significant effect on granule density and tablet dissolution rate. Mixing time showed a significant effect on tablet crushing strength, and mixing speed showed a significant effect on the distribution of tablet crushing strengths obtained. The residual moisture content remaining after granule drying showed a significant effect on tablet crushing strength. The effect of moisture on tablet tensile strength has been reported before, but not in combination with granulation parameters and granule properties, and the impact on tablet dissolution was not assessed. Correlations between the energy input during granulation, the density of granules produced, and the quality attributes of the final tablets were also identified. Understanding the impact of the granulation and drying process parameters on granule and tablet properties provides a basis for process optimisation and scaling.
NASA Astrophysics Data System (ADS)
Kopacz, Michał
2017-09-01
The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risk of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, and the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of deposit parameters was analyzed in two respects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach was used, allowing the possible errors in the calculation of these measures to be quantified in numerical terms. Based on the study, it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, and their interpretation depends on the likelihood of implementation. Generalizing the obtained results on the basis of the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points.
The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations limit the range of variation of the geological parameters and the economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation to conditions similar to those prevailing in the deposit.
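The nonparametric bootstrap Monte Carlo described above can be sketched in a few lines. This is a minimal illustration, not the study's actual model: the borehole observations, the linear cash-flow model, and all numeric values are hypothetical, and joint resampling of parameter tuples stands in for the empirical copula as a simple way to preserve the observed correlations.

```python
import random

def npv(cash_flows, rate):
    # Discounted sum of yearly cash flows (years 1..n).
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def bootstrap_npv(observations, n_boot, rate, cash_flow_model, seed=0):
    """Nonparametric bootstrap: resample joint observations with replacement
    (preserving their empirical correlation), then propagate each resample
    through a cash-flow model to obtain one NPV sample per replicate."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_boot):
        resample = [rng.choice(observations) for _ in observations]
        # Average the deposit parameters over the resampled boreholes.
        mean_params = [sum(col) / len(col) for col in zip(*resample)]
        samples.append(npv(cash_flow_model(mean_params), rate))
    return samples

# Hypothetical borehole data: (seam thickness [m], calorific value [MJ/kg]).
obs = [(1.8, 22.0), (2.1, 24.5), (1.5, 20.3), (2.4, 25.1), (1.9, 23.2)]

def cash_flows(params):
    thickness, cv = params
    # Toy model: annual revenue proportional to thickness * calorific value,
    # minus a fixed annual cost, over a 10-year mine life.
    return [thickness * cv * 10.0 - 300.0 for _ in range(10)]

npv_samples = bootstrap_npv(obs, n_boot=2000, rate=0.08, cash_flow_model=cash_flows)
mean_npv = sum(npv_samples) / len(npv_samples)
```

The spread of `npv_samples` then plays the role of the NPV risk measure discussed in the abstract; dropping the joint resampling (resampling each parameter column independently) would reproduce the "neglected correlations" case.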
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2013-01-01
A classic topic in the fields of psychometrics and measurement has been the impact of the number of scale categories on test score reliability. This study builds on previous research by further articulating the relationship between item response theory (IRT) and classical test theory (CTT). Equations are presented for comparing the reliability and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Boyan; Ou, Longwen; Dang, Qi
This study evaluates the techno-economic uncertainty in cost estimates for two emerging biorefinery technologies for biofuel production: in situ and ex situ catalytic pyrolysis. Stochastic simulations based on process and economic parameter distributions are applied to calculate biorefinery performance and production costs. The probability distributions for the minimum fuel-selling price (MFSP) indicate that in situ catalytic pyrolysis has an expected MFSP of $4.20 per gallon with a standard deviation of 1.15, while ex situ catalytic pyrolysis has a similar MFSP with a smaller deviation ($4.27 per gallon and 0.79, respectively). These results suggest that a biorefinery based on ex situ catalytic pyrolysis could have a lower techno-economic risk than in situ pyrolysis despite a slightly higher MFSP cost estimate. Analysis of how each parameter affects the NPV indicates that internal rate of return, feedstock price, total project investment, electricity price, biochar yield and bio-oil yield are significant parameters with a substantial impact on the MFSP for both in situ and ex situ catalytic pyrolysis.
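The stochastic-simulation idea above — draw each techno-economic input from its distribution, push the draws through a cost model, and summarize the resulting MFSP distribution by its mean and standard deviation — can be sketched as follows. The parameter distributions and the linearized cost model are purely illustrative, not the study's actual techno-economic model.

```python
import random
import statistics

def mfsp_sample(rng):
    """Draw one minimum fuel-selling price from sampled techno-economic inputs.
    All distributions and the toy cost model below are illustrative."""
    feedstock = rng.gauss(80.0, 10.0)       # feedstock price, $/dry ton
    bio_oil_yield = rng.gauss(0.55, 0.05)   # fuel yield, toy units
    capital = rng.gauss(500.0, 60.0)        # total project investment, M$
    # Toy model: price must cover feedstock cost per gallon produced
    # plus an annualized capital charge per gallon.
    return feedstock / (bio_oil_yield * 100.0) + capital / 150.0

rng = random.Random(42)
samples = [mfsp_sample(rng) for _ in range(5000)]
mean_mfsp = statistics.mean(samples)
std_mfsp = statistics.stdev(samples)
```

Comparing `mean_mfsp` and `std_mfsp` between two such models (one per pyrolysis configuration) is the sense in which a technology with a slightly higher expected MFSP can still carry lower techno-economic risk.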
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
Angular dependence of spectral reflection for different materials
NASA Astrophysics Data System (ADS)
Kiefer, Pascal M.
2017-10-01
Parameters like the sun angle and the measurement angle are mostly not taken into account in simulations because their influence on the reflectivity is weak. Therefore, the impact of changing measurement and illumination angles on the reflectivity is investigated. Furthermore, the impact of humidity and chlorophyll in the scenery is studied by analyzing reflectance spectra of different vegetative background areas. It is shown that both the measurement angle and the illumination angle have an important influence on the absolute reflection values, which underlines the importance of measurements of the bidirectional reflectance distribution function (BRDF).
Bernstein, Diana N.; Neelin, J. David
2016-04-28
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
[The rabbit experimental study for toxicokinetics of chlorpyrifos impacted by hemoperfusion].
Guo, Xiang; Chen, Xiao; Zhang, Hongshun; Long, Xin; He, Qian; Sun, Chengye; Huang, Xianqing; He, Jian
2015-11-01
To investigate the toxicokinetic parameters affected by hemoperfusion after oral chlorpyrifos exposure and to evaluate the adsorption effect of hemoperfusion in chlorpyrifos poisoning, 12 rabbits were divided into two groups after oral exposure to chlorpyrifos at 300 mg/kg body weight. Control group: no hemoperfusion; hemoperfusion group: hemoperfusion started 0.5 h after chlorpyrifos exposure and lasted for 2 h. Blood samples were collected at different times, chlorpyrifos concentrations were determined by GC, and the toxicokinetic parameters were then calculated and analyzed with DAS 3.0. In the hemoperfusion group, the peak time was (7.19±3.74) h, the peak concentration was (1.37±0.56) mg/L, the clearance rate was (13.93±10.27) L/h/kg, and the apparent volume of distribution was (418.18±147.15) L/kg. The differences in these parameters were statistically significant compared with the control group (P<0.05). Hemoperfusion decreases the internal exposure and load dose in rabbits with chlorpyrifos poisoning.
NASA Astrophysics Data System (ADS)
Yang, G.; Maher, K.; Caers, J.
2015-12-01
Groundwater contamination associated with remediated uranium mill tailings is a challenging environmental problem, particularly within the Colorado River Basin. To examine the effectiveness of in-situ bioremediation of U(VI), acetate injection has been proposed and tested at the Rifle pilot site. There have been several geologic modeling and contaminant transport simulation investigations to evaluate the potential outcomes of the process and identify crucial factors for successful uranium reduction. Ultimately, findings from these studies would contribute to accurate predictions of the efficacy of uranium reduction. However, all these previous studies have considered limited model complexities, either because of the concern that data is too sparse to resolve such complex systems or because some parameters are assumed to be less important. Such simplified initial modeling, however, limits the predictive power of the model. Moreover, previous studies have not yet focused on the spatial heterogeneity of various modeling components and its impact on the spatial distribution of the immobilized uranium (U(IV)). In this study, we examine the impact of uncertainty in 21 parameters on model responses by means of the recently developed distance-based global sensitivity analysis (DGSA), assessing the main effects and interactions of parameters of various types. The 21 parameters include, for example, the spatial variability of the initial uranium concentration, the mean hydraulic conductivity, and the variogram structure of the hydraulic conductivity. DGSA allows for studying multivariate model responses based on spatial and non-spatial model parameters. When calculating the distances between model responses, in addition to the overall uranium reduction efficacy, we also considered the spatial profiles of the immobilized uranium concentration as a target response.
Results show that the mean hydraulic conductivity and the mineral reaction rate are the two most sensitive parameters with regard to the overall uranium reduction. In terms of the spatial distribution of immobilized uranium, however, the initial uranium concentration and the spatial uncertainty in hydraulic conductivity also become important. These analyses serve as a first step toward predictive modeling of the complex uranium transport and reaction system.
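The core idea of distance-based global sensitivity analysis — classify model runs by distances between their (possibly spatial) responses, then ask whether a parameter's distribution differs between classes — can be illustrated with a deliberately minimal sketch. This is not the published DGSA algorithm (which uses clustering of a distance matrix and resampling-based significance tests); the ranking-by-distance split, the shift score, and the synthetic ensemble are all simplifying assumptions for illustration.

```python
import random
import statistics

def dgsa_sensitivity(params, responses):
    """Minimal distance-based sensitivity sketch: rank model runs by the
    distance of their response to the ensemble-mean response, split the
    ranked runs into two classes, and score each parameter by how far its
    class-conditional mean and spread shift from their global values
    (normalized by the global standard deviation)."""
    mean_resp = [statistics.mean(col) for col in zip(*responses)]
    def dist(r):  # Euclidean distance of one response vector to the mean
        return sum((a - b) ** 2 for a, b in zip(r, mean_resp)) ** 0.5
    order = sorted(range(len(responses)), key=lambda i: dist(responses[i]))
    classes = [order[: len(order) // 2], order[len(order) // 2:]]
    scores = []
    for p in range(len(params[0])):
        col = [row[p] for row in params]
        mu, sd = statistics.mean(col), statistics.stdev(col)
        shift = max(
            (abs(statistics.mean([params[i][p] for i in c]) - mu)
             + abs(statistics.stdev([params[i][p] for i in c]) - sd)) / sd
            for c in classes)
        scores.append(shift)  # higher score = more sensitive parameter
    return scores

# Synthetic ensemble: parameter 0 drives the response, parameter 1 is inert.
rng = random.Random(1)
params = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(200)]
responses = [[10.0 * p[0] + rng.gauss(0, 0.1) for _ in range(5)] for p in params]
scores = dgsa_sensitivity(params, responses)
```

In the synthetic ensemble the driving parameter receives a clearly larger score than the inert one, which is the qualitative behavior the abstract relies on when ranking hydraulic conductivity and reaction rate above the other 19 parameters.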
Estimating the Grain Size Distribution of Mars based on Fragmentation Theory and Observations
NASA Astrophysics Data System (ADS)
Charalambous, C.; Pike, W. T.; Golombek, M.
2017-12-01
We present here a fundamental extension to fragmentation theory [1] which yields estimates of the distribution of particle sizes of a planetary surface. The model is valid within the size regimes of surfaces whose genesis is best reflected by the evolution of fragmentation phenomena governed either by the process of meteoritic impacts or by a mixture with aeolian transportation at the smaller sizes. The key parameter of the model, the regolith maturity index, can be estimated as an average of that observed at a local site using cratering size-frequency measurements, orbital and surface image-detected rock counts, and observations of sub-mm particles at landing sites. Validated against ground truth from previous landed missions, this approach has been used at the InSight landing ellipse on Mars to extrapolate rock size distributions in HiRISE images down to 5 cm rock size, both to determine the landing safety risk and the subsequent probability of obstruction of the deployed heat flow mole down to 3-5 m depth [2]. Here we focus on a continuous extrapolation down to 600 µm coarse sand particles, the upper size limit that may be present through aeolian processes [3]. The parameters of the model are first derived for the fragmentation process that has produced the observable rocks via meteorite impacts over time; extrapolation into a size regime that is affected by aeolian processes therefore has limited justification without further refinement. Incorporating thermal inertia estimates, size distributions observed by the Spirit and Opportunity Microscopic Imager [4], and Atomic Force and Optical Microscopy from the Phoenix Lander [5], the model's parameters are quantitatively refined further, in combination with synthesis methods, to allow the transition into the aeolian transportation size regime.
In addition, because the model is expressed in fractional mass abundance, the percentage of material by volume or mass that resides within the transported fraction on Mars can be estimated. The parameters of the model thus allow for a better understanding of the regolith's history, which has implications for the origin of sand on Mars. [1] Charalambous, PhD thesis, ICL, 2015 [2] Golombek et al., Space Science Reviews, 2016 [3] Kok et al., ROPP, 2012 [4] McGlynn et al., JGR, 2011 [5] Pike et al., GRL, 2011
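The downward extrapolation of an observed rock size-frequency distribution can be illustrated with a simple sketch. Note the assumption: a single power-law cumulative distribution fitted in log-log space stands in for the fragmentation model of [1], and the "observed" HiRISE rock counts below are invented numbers, not mission data.

```python
import math

def cumulative_count(d, k, alpha):
    """Illustrative power-law size-frequency model: cumulative number of
    fragments per unit area with diameter >= d."""
    return k * d ** (-alpha)

def extrapolate(counts, target_d):
    """Fit k and alpha by least squares in log-log space from observed
    (diameter, cumulative count) pairs, then evaluate at target_d."""
    xs = [math.log(d) for d, _ in counts]
    ys = [math.log(n) for _, n in counts]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = -slope
    k = math.exp(my + alpha * mx)
    return cumulative_count(target_d, k, alpha)

# Hypothetical rock counts: (diameter [m], cumulative number per m^2).
obs = [(1.5, 0.001), (1.0, 0.004), (0.5, 0.02), (0.3, 0.08)]
n_at_5cm = extrapolate(obs, 0.05)  # extrapolated down to 5 cm diameter
```

The fitted exponent plays a role loosely analogous to the maturity index: steeper slopes imply disproportionately more small fragments when the curve is continued below the imaging resolution limit.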
The impact of different dose response parameters on biologically optimized IMRT in breast cancer
NASA Astrophysics Data System (ADS)
Costa Ferreira, Brigida; Mavroidis, Panayiotis; Adamus-Górka, Magdalena; Svensson, Roger; Lind, Bengt K.
2008-05-01
The full potential of biologically optimized radiation therapy can only be realized with the prediction of individual patient radiosensitivity prior to treatment. Unfortunately, the available biological parameters, derived from clinical trials, reflect an average radiosensitivity of the examined populations. In the present study, a stage I-II breast cancer patient with positive lymph nodes was chosen in order to analyse the effect of the variation of individual radiosensitivity on the optimal dose distribution. Thus, deviations from the average biological parameters, describing tumour, heart and lung response, were introduced covering the range of patient radiosensitivity reported in the literature. Two treatment configurations of three and seven biologically optimized intensity-modulated beams were employed. The different dose distributions were analysed using biological and physical parameters such as the complication-free tumour control probability (P+), the biologically effective uniform dose ($\bar{\bar{D}}$), dose volume histograms, mean doses, standard deviations, maximum and minimum doses. In the three-beam plan, the difference in P+ between the optimal dose distribution (when the individual patient radiosensitivity is known) and the reference dose distribution, which is optimal for the average patient biology, ranges up to 13.9% when varying the radiosensitivity of the target volume, up to 0.9% when varying the radiosensitivity of the heart and up to 1.3% when varying the radiosensitivity of the lung. Similarly, in the seven-beam plan, the differences in P+ are up to 13.1% for the target, up to 1.6% for the heart and up to 0.9% for the left lung. When the radiosensitivity of the most important tissues in breast cancer radiation therapy was simultaneously changed, the maximum gain in outcome was as high as 7.7%.
The impact of the dose response uncertainties on the treatment outcome was clinically insignificant for the majority of the simulated patients. However, the jump from generalized to individualized radiation therapy may significantly increase the therapeutic window for patients with extreme radiosensitivity or radioresistance, provided that these are identified. Even for radiosensitive patients a simple treatment technique is sufficient to maximize the outcome, since no significant benefits were obtained with a more complex technique using seven intensity-modulated beam portals.
NASA Astrophysics Data System (ADS)
Patade, Sachin; Prabha, T. V.; Axisa, D.; Gayatri, K.; Heymsfield, A.
2015-10-01
A comprehensive analysis of particle size distributions measured in situ with airborne instrumentation during the Cloud Aerosol Interaction and Precipitation Enhancement Experiment (CAIPEEX) is presented. In situ airborne observations in the developing stage of continental convective clouds during premonsoon (PRE), transition, and monsoon (MON) period at temperatures from 25 to -22°C are used in the study. The PRE clouds have narrow drop size and particle size distributions compared to monsoon clouds and showed less development of size spectra with decrease in temperature. Overall, the PRE cases had much lower values of particle number concentrations and ice water content compared to MON cases, indicating large differences in the ice initiation and growth processes between these cloud regimes. This study provided compelling evidence that in addition to dynamics, aerosol and moisture are important for modulating ice microphysical processes in PRE and MON clouds through impacts on cloud drop size distribution. Significant differences are observed in the relationship of the slope and intercept parameters of the fitted particle size distributions (PSDs) with temperature in PRE and MON clouds. The intercept values are higher in MON clouds than PRE for exponential distribution which can be attributed to higher cloud particle number concentrations and ice water content in MON clouds. The PRE clouds tend to have larger values of dispersion of gamma size distributions than MON clouds, signifying narrower spectra. The relationships between PSDs parameters are presented and compared with previous observations.
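The slope and intercept parameters discussed above come from fitting an exponential form N(D) = N0 exp(-λD) to the measured particle size distribution; the fit is a straight line in (D, ln N) space. A minimal sketch follows, using synthetic data generated from a known exponential law so that the recovered parameters can be checked; the bin diameters and concentrations are illustrative, not CAIPEEX measurements.

```python
import math

def fit_exponential_psd(diameters, concentrations):
    """Fit N(D) = N0 * exp(-lam * D) by linear regression of ln N on D,
    returning (N0, lam): the intercept and slope parameters of the PSD."""
    ys = [math.log(n) for n in concentrations]
    m = len(diameters)
    mx, my = sum(diameters) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(diameters, ys))
             / sum((x - mx) ** 2 for x in diameters))
    return math.exp(my - slope * mx), -slope

# Synthetic binned PSD: mid-bin diameter vs concentration, generated from
# a known law (N0 = 2500, lam = 5) so the fit can be verified exactly.
d = [0.1, 0.3, 0.5, 0.8, 1.2]
n_obs = [2500.0 * math.exp(-5.0 * x) for x in d]
n0, lam = fit_exponential_psd(d, n_obs)
```

With real cloud-probe data the points scatter around the line, and comparing fitted (N0, λ) pairs across temperature bins yields the intercept-slope relationships reported for the premonsoon and monsoon cloud regimes.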
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution depends on the choice of prior distribution. Jeffreys' prior is a noninformative prior distribution, used when no information about the parameters is available. The noninformative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the noninformative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, an approximation is needed, generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
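The Gibbs sampling scheme described above can be sketched for the simplest case, a single-response regression y = b0 + b1·x + e under the Jeffreys prior p(b, σ²) ∝ 1/σ², where the conditionals are normal for the coefficients and inverse-gamma for σ² (the univariate analog of the multivariate normal / inverse-Wishart pair). The synthetic data and all numeric settings are illustrative.

```python
import math
import random

def gibbs_regression(x, y, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for y = b0 + b1*x + e under the noninformative Jeffreys
    prior p(b, sigma^2) ∝ 1/sigma^2. Conditionals: b | sigma^2 is bivariate
    normal around the least-squares fit; sigma^2 | b is inverse-gamma."""
    rng = random.Random(seed)
    n = len(x)
    # Sufficient statistics for the 2x2 normal equations X'X b = X'y.
    sx, sxx = sum(x), sum(xi * xi for xi in x)
    sy, sxy = sum(y), sum(xi * yi for xi, yi in zip(x, y))
    det = n * sxx - sx * sx
    b0_hat = (sxx * sy - sx * sxy) / det
    b1_hat = (n * sxy - sx * sy) / det
    # (X'X)^-1 and its Cholesky factor, used to draw b | sigma^2.
    a11, a12, a22 = sxx / det, -sx / det, n / det
    l11 = math.sqrt(a11)
    l21 = a12 / l11
    l22 = math.sqrt(a22 - l21 * l21)
    b0, b1, draws = b0_hat, b1_hat, []
    for it in range(n_iter):
        sse = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
        # sigma^2 | b ~ InvGamma(n/2, SSE/2), drawn via a gamma variate.
        sigma2 = 1.0 / rng.gammavariate(n / 2.0, 2.0 / sse)
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        s = math.sqrt(sigma2)
        b0 = b0_hat + s * l11 * z1
        b1 = b1_hat + s * (l21 * z1 + l22 * z2)
        if it >= burn:
            draws.append((b0, b1, sigma2))
    return draws

# Synthetic data with known coefficients (intercept 1.0, slope 2.0).
rng = random.Random(7)
xs = [i * 0.3 for i in range(30)]
ys = [1.0 + 2.0 * xi + rng.gauss(0, 0.2) for xi in xs]
draws = gibbs_regression(xs, ys)
post_slope = sum(d[1] for d in draws) / len(draws)
```

Averaging the retained draws approximates the posterior expected value that the integrals above make hard to obtain in closed form; the multivariate case replaces the inverse-gamma draw with an inverse-Wishart draw for Σ.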
Future global SLR network evolution and its impact on the terrestrial reference frame
NASA Astrophysics Data System (ADS)
Kehm, Alexander; Bloßfeld, Mathis; Pavlis, Erricos C.; Seitz, Florian
2018-06-01
Satellite laser ranging (SLR) is an important technique that contributes to the determination of terrestrial geodetic reference frames, especially to the realization of the origin and the scale of global networks. One of the major limiting factors of SLR-derived reference frame realizations is the datum accuracy which significantly suffers from the current global SLR station distribution. In this paper, the impact of a potential future development of the SLR network on the estimated datum parameters is investigated. The current status of the SLR network is compared to a simulated potential future network featuring additional stations improving the global network geometry. In addition, possible technical advancements resulting in a higher amount of observations are taken into account as well. As a result, we find that the network improvement causes a decrease in the scatter of the network translation parameters of up to 24%, and up to 20% for the scale, whereas the technological improvement causes a reduction in the scatter of up to 27% for the translations and up to 49% for the scale. The Earth orientation parameters benefit by up to 15% from both effects.
NASA Astrophysics Data System (ADS)
Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.
2018-05-01
Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the key requirements for an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered thermo-dependent, and the strain-rate dependence is introduced using Prony series. Tensile tests were carried out at different temperatures and strain rates, and the material parameters were then identified using an NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine. To compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.
Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study
NASA Astrophysics Data System (ADS)
Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2013-04-01
The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, we first identified independent rainfall periods, based on the daily precipitation and stream flow observations, using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For the assessment of parameter uncertainty, given the high number of parameters of the SWAT model, we first screened out the most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique.
Subsequently, we considered only the most sensitive parameters for parameter optimization and UA. To explicitly account for the stream flow uncertainty, we assumed that the stream flow measurement error increases linearly with the stream flow value. To assess the uncertainty and infer posterior distributions of the parameters, we used a Markov Chain Monte Carlo (MCMC) sampler, the DiffeRential Evolution Adaptive Metropolis (DREAM), which samples from an archive of past states to generate candidate points in each individual chain. It is shown that the marginal posterior distributions of the rainfall multipliers vary widely between individual events, as a consequence of rainfall measurement errors and the spatial variability of the rain. Only a few of the rainfall events are well defined. The marginal posterior distributions of the SWAT model parameter values are well defined and identified by DREAM, within their prior ranges. The posterior distributions of the output uncertainty parameter values also show that the stream flow data are highly uncertain. The approach of using rainfall multipliers to treat rainfall uncertainty for a complex model has an impact on the model parameter marginal posterior distributions and on the model results.
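The multiplicative input-error treatment described above can be sketched in a few lines. This is not the SWAT/DREAM code itself: the toy runoff model, the parameter values, and all names below are illustrative assumptions, showing only how per-event rainfall multipliers and a flow-proportional error standard deviation enter a log-likelihood.

```python
import numpy as np

def simulate_flow(rain, multipliers, event_ids, k=0.6):
    """Toy linear rainfall-runoff model (purely illustrative, not SWAT):
    flow is a fixed fraction k of the event-corrected rainfall."""
    corrected = rain * multipliers[event_ids]   # multiplicative input correction
    return k * corrected

def log_likelihood(obs, sim, a=0.1):
    """Gaussian log-likelihood with measurement error growing linearly
    with the flow value (sd = a * obs), as assumed in the study."""
    sd = a * np.maximum(obs, 1e-6)
    return -0.5 * np.sum(((obs - sim) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))

rng = np.random.default_rng(42)
n_events = 5
event_ids = np.repeat(np.arange(n_events), 10)     # 10 days per rainfall event
rain = rng.gamma(2.0, 5.0, size=event_ids.size)    # hypothetical gauge rainfall
true_mult = rng.uniform(0.7, 1.3, size=n_events)   # latent rainfall multipliers
obs = simulate_flow(rain, true_mult, event_ids)    # synthetic "observations"

# The true multipliers reproduce the observations and so maximize the likelihood;
# ignoring rainfall error (all multipliers = 1) scores strictly worse.
ll_true = log_likelihood(obs, simulate_flow(rain, true_mult, event_ids))
ll_off = log_likelihood(obs, simulate_flow(rain, np.ones(n_events), event_ids))
```

In the study, a sampler such as DREAM would explore the multipliers jointly with the model parameters instead of fixing them as done here.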
Luo, Wei; Katz, David A; Hamilton, Deven T; McKenney, Jennie; Jenness, Samuel M; Goodreau, Steven M; Stekler, Joanne D; Rosenberg, Eli S; Sullivan, Patrick S; Cassels, Susan
2018-06-29
In the United States HIV epidemic, men who have sex with men (MSM) remain the most profoundly affected group. Prevention science is increasingly being organized around HIV testing as a launch point into an HIV prevention continuum for MSM who are not living with HIV and into an HIV care continuum for MSM who are living with HIV. Increasing HIV testing frequency among MSM might decrease future HIV infections by linking men who are living with HIV to antiretroviral care, resulting in viral suppression. Distributing HIV self-test (HIVST) kits is a strategy aimed at increasing HIV testing. Our previous modeling work suggests that the impact of HIV self-tests on transmission dynamics will depend not only on the frequency of tests and testers' behaviors but also on the epidemiological and testing characteristics of the population. The objective of our study was to develop an agent-based model to inform public health strategies for promoting safe and effective HIV self-tests to decrease HIV incidence among MSM in Atlanta, GA, and Seattle, WA, cities representing profoundly different epidemiological settings. We adapted and extended a network- and agent-based stochastic simulation model of HIV transmission dynamics that was developed and parameterized to investigate racial disparities in HIV prevalence among MSM in Atlanta. The extension comprised several activities: adding a new set of model parameters for Seattle MSM; adding new parameters for tester types (i.e., regular, risk-based, opportunistic-only, or never testers); adding parameters for simplified pre-exposure prophylaxis uptake following negative HIV test results; and developing a conceptual framework for the ways in which the provision of HIV self-tests might change testing behaviors. We derived city-specific parameters from previous cohort and cross-sectional studies on MSM in Atlanta and Seattle.
Each simulated population comprised 10,000 MSM, with target HIV prevalences of 28% and 11% in Atlanta and Seattle, respectively. Previous studies provided sufficient data to estimate the model parameters representing nuanced HIV testing patterns and HIV self-test distribution. We calibrated the models to simulate the epidemics representing Atlanta and Seattle, including matching the expected stable HIV prevalence. The revised model facilitated the estimation of changes in 10-year HIV incidence based on counterfactual scenarios of HIV self-test distribution strategies and their impact on testing behaviors. We demonstrated that the extension of an existing agent-based HIV transmission model was sufficient to simulate the HIV epidemics among MSM in Atlanta and Seattle, to accommodate a more nuanced depiction of HIV testing behaviors than previous models, and to serve as a platform to investigate how HIV self-tests might impact testing and HIV transmission patterns among MSM in Atlanta and Seattle. In our future studies, we will use the model to test how different HIV self-test distribution strategies might affect HIV incidence among MSM. ©Wei Luo, David A Katz, Deven T Hamilton, Jennie McKenney, Samuel M Jenness, Steven M Goodreau, Joanne D Stekler, Eli S Rosenberg, Patrick S Sullivan, Susan Cassels. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 29.06.2018.
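The tester-type parameterisation can be sketched as a categorical assignment over the simulated population. The mixture proportions and per-type testing rates below are invented placeholders, not the calibrated Atlanta or Seattle values; only the population size (10,000 MSM) comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
types = np.array(["regular", "risk-based", "opportunistic", "never"])
probs = np.array([0.45, 0.25, 0.20, 0.10])   # hypothetical mixture over tester types
n = 10_000                                   # simulated MSM population size
assigned = rng.choice(types, size=n, p=probs)

# Expected annual HIV tests per tester type (placeholders); never-testers get zero.
tests_per_year = {"regular": 2.0, "risk-based": 1.2, "opportunistic": 0.5, "never": 0.0}
mean_tests = sum(tests_per_year[t] for t in assigned) / n
```

In the full model, each agent's type would then drive when it seeks a clinic test or uses a distributed self-test kit within the network simulation.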
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myagkov, N. N.
The problem of aluminum projectile fragmentation upon high-velocity impact on a thin aluminum shield is considered. A distinctive feature of this description is that the fragmentation has been numerically simulated using the complete system of equations of deformed solid mechanics by a method of smoothed particle hydrodynamics in a three-dimensional setting. The transition from damage to fragmentation is analyzed and scaling relations are derived in terms of the impact velocity (V), the ratio of shield thickness to projectile diameter (h/D), and the ultimate strength (σ_p) in the criterion of projectile and shield fracture. Analysis shows that the critical impact velocity V_c (separating the damage and fragmentation regions) is a power function of σ_p and h/D. In the supercritical region (V > V_c), the weight-average fragment mass asymptotically tends to a power function of the impact velocity with an exponent independent of h/D and σ_p. Mean cumulative fragment mass distributions at the critical point are scale-invariant with respect to the parameters h/D and σ_p. Average masses of the largest fragments are also scale-invariant at V > V_c, but only with respect to the variable parameter σ_p.
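The scaling relations summarized above can be written compactly. The exponents α, β, γ are left symbolic here, since the abstract reports only their existence and qualitative properties:

```latex
V_c \;\propto\; \sigma_p^{\,\alpha}\left(\frac{h}{D}\right)^{\beta},
\qquad
\langle m \rangle_{w} \;\sim\; V^{-\gamma}
\quad \text{for } V > V_c ,
```

where ⟨m⟩_w is the weight-average fragment mass and, per the analysis, γ is independent of h/D and σ_p.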
NASA Astrophysics Data System (ADS)
Matsui, H.; Koike, M.; Kondo, Y.; Fast, J. D.; Takigawa, M.
2014-09-01
Number concentrations, size distributions, and mixing states of aerosols are essential parameters for accurate estimations of aerosol direct and indirect effects. In this study, we develop an aerosol module, designated the Aerosol Two-dimensional bin module for foRmation and Aging Simulation (ATRAS), that can explicitly represent these parameters by considering new particle formation (NPF), black carbon (BC) aging, and secondary organic aerosol (SOA) processes. A two-dimensional bin representation is used for particles with dry diameters from 40 nm to 10 μm to resolve both aerosol sizes (12 bins) and BC mixing states (10 bins) for a total of 120 bins. Particles with diameters between 1 and 40 nm are resolved using eight additional size bins to calculate NPF. The ATRAS module is implemented in the WRF-Chem model and applied to examine the sensitivity of simulated mass, number, size distributions, and optical and radiative parameters of aerosols to NPF, BC aging, and SOA processes over East Asia during the spring of 2009. The BC absorption enhancement by coating materials is about 50% over East Asia during the spring, and the contribution of SOA processes to the absorption enhancement is estimated to be 10-20% over northern East Asia and 20-35% over southern East Asia. A clear north-south contrast is also found between the impacts of NPF and SOA processes on cloud condensation nuclei (CCN) concentrations: NPF increases CCN concentrations at higher supersaturations (smaller particles) over northern East Asia, whereas SOA increases CCN concentrations at lower supersaturations (larger particles) over southern East Asia. The application of ATRAS in East Asia also shows that the impact of each process on each optical and radiative parameter depends strongly on the process and the parameter in question.
The module can be used in the future as a benchmark model to evaluate the accuracy of simpler aerosol models and examine interactions between NPF, BC aging, and SOA processes under different meteorological conditions and emissions.
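The two-dimensional bin structure described above can be sketched directly. The bin counts (12 × 10 plus 8 NPF bins) come from the text; the exact bin edges used in ATRAS may differ, so the logarithmic and linear edges below are assumptions for illustration.

```python
import numpy as np

# 12 logarithmic dry-diameter bins from 40 nm to 10 um, each subdivided into
# 10 BC mass-fraction bins (mixing state), giving 120 two-dimensional bins,
# plus 8 additional size bins from 1 to 40 nm for new particle formation (NPF).
size_edges = np.logspace(np.log10(40e-9), np.log10(10e-6), 12 + 1)   # meters
bc_frac_edges = np.linspace(0.0, 1.0, 10 + 1)                        # BC mass fraction
npf_edges = np.logspace(np.log10(1e-9), np.log10(40e-9), 8 + 1)      # 1-40 nm

n_2d_bins = (size_edges.size - 1) * (bc_frac_edges.size - 1)  # size x mixing state
n_total = n_2d_bins + (npf_edges.size - 1)                    # plus NPF size bins
```

Indexing a particle by (size bin, BC-fraction bin) is what lets the module track how coatings thicken as BC ages, rather than carrying a single averaged mixing state per size.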
Enhanced CAH dechlorination in a low permeability, variably-saturated medium
Martin, J.P.; Sorenson, K.S.; Peterson, L.N.; Brennan, R.A.; Werth, C.J.; Sanford, R.A.; Bures, G.H.; Taylor, C.J.; ,
2002-01-01
An innovative pilot-scale field test was performed to enhance the anaerobic reductive dechlorination (ARD) of chlorinated aliphatic hydrocarbons (CAHs) in a low permeability, variably-saturated formation. The selected technology combines the use of a hydraulic fracturing (fracking) technique with enhanced bioremediation through the creation of highly-permeable sand- and electron donor-filled fractures in the low permeability matrix. Chitin was selected as the electron donor because of its unique properties as a polymeric organic material and based on the results of lab studies that indicated its ability to support ARD. The distribution and impact of chitin- and sand-filled fractures on the system were evaluated using hydrologic, geophysical, and geochemical parameters. The results indicate that, where distributed, chitin favorably impacted redox conditions and supported enhanced ARD of CAHs. These results suggest that this technology may be a viable and cost-effective approach for remediation of low-permeability, variably saturated systems.
Assessment of space sensors for ocean pollution monitoring
NASA Technical Reports Server (NTRS)
Alvarado, U. R.; Tomiyasu, K.; Gulatsi, R. L.
1980-01-01
Several passive and active microwave sensors, as well as passive optical remote sensors, applicable to the monitoring of oil spills and waste discharges at sea are considered. The discussed types of measurements relate to: (1) the spatial distribution and properties of the pollutant, and (2) the oceanic parameters needed to predict the movement of the pollutants and their impact upon land. The sensors, operating from satellite platforms at 700-900 km altitudes, are found to be useful in mapping the spread of oil in major oil spills and, in addition, can be effective in producing wind and ocean parameters as inputs to oil trajectory and dispersion models. These capabilities can be used in countermeasures.
NASA Astrophysics Data System (ADS)
Abdulhadi, Ahmed M.; Ahmed, Tamara S.
2018-05-01
In this paper, we study the impact of a radially varying magnetic field on the peristaltic transport of a Jeffrey fluid through a two-dimensional curved channel. The effect of a slip condition on the velocity and of no-slip conditions on the temperature and concentration is examined. Heat and mass transfer are considered under the influence of various parameters. The flow is investigated under the assumptions of long wavelength and low Reynolds number. The temperature and concentration distributions are discussed for the various parameters governing the flow, including the simultaneous effects of the Brinkman, Soret, and Schmidt numbers.
NASA Astrophysics Data System (ADS)
Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.
2016-11-01
Predictions of river flow dynamics provide vital information for many aspects of water management, including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make, including model and criteria selection, can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
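The analysis-of-variance decomposition can be illustrated on synthetic data. The additive model and the factor magnitudes below are invented for the sketch and are not the study's catchments or rainfall-runoff models; the point is only how total variance splits into forcing, parameter, and interaction terms.

```python
import numpy as np

# Simulate streamflow for every combination of rainfall ensemble member
# (factor A) and behavioural parameter set (factor B), then split the total
# sum of squares with a two-way ANOVA over the full factorial design.
rng = np.random.default_rng(0)
n_rain, n_par = 20, 30
rain_effect = rng.normal(0, 1.0, n_rain)   # forcing-data contribution (larger)
par_effect = rng.normal(0, 0.5, n_par)     # parameter-identification contribution
q = rain_effect[:, None] + par_effect[None, :] + rng.normal(0, 0.1, (n_rain, n_par))

grand = q.mean()
ss_rain = n_par * np.sum((q.mean(axis=1) - grand) ** 2)   # main effect A
ss_par = n_rain * np.sum((q.mean(axis=0) - grand) ** 2)   # main effect B
ss_tot = np.sum((q - grand) ** 2)
ss_inter = ss_tot - ss_rain - ss_par                      # interaction + residual
shares = np.array([ss_rain, ss_par, ss_inter]) / ss_tot   # fractional contributions
```

With one replicate per cell, the interaction term also absorbs the residual noise, which is a common simplification in this kind of decomposition.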
2013-09-30
…soundscape into frequency categories and sound level percentiles allowed for detailed examination of the acoustic environment that would not have been… patterns and trends across sound level parameters and frequency at a single location; it is recommended that the soundscape of any region be… joined to better understand the contribution and variation in distant shipping noise to local soundscapes (Ainslie & Miksis-Olds, 2013).
Biophysical and Economic Uncertainty in the Analysis of Poverty Impacts of Climate Change
NASA Astrophysics Data System (ADS)
Hertel, T. W.; Lobell, D. B.; Verma, M.
2011-12-01
This paper seeks to understand the main sources of uncertainty in assessing the impacts of climate change on agricultural output, international trade, and poverty. We incorporate biophysical uncertainty by sampling from a distribution of global climate model predictions for temperature and precipitation for 2050. The implications of these realizations for crop yields around the globe are estimated using the recently published statistical crop yield functions provided by Lobell, Schlenker and Costa-Roberts (2011). By comparing these yields to those predicted under the current climate, we obtain the likely change in crop yields owing to climate change. The economic uncertainty in our analysis relates to the response of the global economic system to these biophysical shocks. We use a modified version of the GTAP model to elicit the impact of the biophysical shocks on global patterns of production, consumption, trade and poverty. Uncertainty in these responses is reflected in the econometrically estimated parameters governing the responsiveness of international trade, consumption, production (and hence the intensive margin of supply response), and factor supplies (which govern the extensive margin of supply response). We sample from the distributions of these parameters as specified by Hertel et al. (2007) and Keeney and Hertel (2009). We find that, even though it is difficult to predict where in the world agricultural crops will be favorably affected by climate change, the responses of economic variables, including output and exports, can be far more robust (Table 1). This is due to the fact that supply and demand decisions depend on relative prices, and relative prices depend on productivity changes relative to other crops in a given region, or relative to similar crops in other parts of the world. We also find that uncertainty in the poverty impacts of climate change appears to be almost entirely driven by biophysical uncertainty.
Parameters of microbial respiration in soils of the impact zone of a mineral fertilizer factory
NASA Astrophysics Data System (ADS)
Zhukova, A. D.; Khomyakov, D. M.
2015-08-01
The carbon content in the microbial biomass and the microbial production of CO2 (the biological component of soil respiration) were determined in the upper layer (0-10 cm) of soils in the impact zone of the OJSC Voskresensk Mineral Fertilizers plant, one of the largest factories manufacturing mineral fertilizers in Russia. Statistical characteristics and the schematic distribution of the biological parameters in the soil cover of the impact zone were analyzed. The degree of disturbance of microbial communities in the studied objects varied from weak to medium; the maximum value (0.44) was observed on the sampling plot 4 km away from the factory and 0.5 km away from the place of waste (phosphogypsum) storage. A significantly lower carbon content in the microbial biomass and lower specific respiration were recorded in the agrosoddy-podzolic soil as compared with the alluvial soil sampled at the same distance from the plant. The effects of potential soil pollutants (fluorine, sulfur, cadmium, and stable strontium) on the characteristics of the soil microbial communities were described with reliable regression equations.
Precipitation-runoff modeling system; user's manual
Leavesley, G.H.; Lichty, R.W.; Troutman, B.M.; Saindon, L.G.
1983-01-01
The concepts, structure, theoretical development, and data requirements of the precipitation-runoff modeling system (PRMS) are described. The precipitation-runoff modeling system is a modular-design, deterministic, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow, sediment yields, and general basin hydrology. Basin response to normal and extreme rainfall and snowmelt can be simulated to evaluate changes in water balance relationships, flow regimes, flood peaks and volumes, soil-water relationships, sediment yields, and groundwater recharge. Parameter-optimization and sensitivity analysis capabilities are provided to fit selected model parameters and evaluate their individual and joint effects on model output. The modular design provides a flexible framework for continued model system enhancement and hydrologic modeling research and development. (Author's abstract)
C-parameter distribution at N3LL' including power corrections
Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; ...
2015-05-15
We compute the e⁺e⁻ C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to O(α_s³), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far tail regions. Additionally, we treat hadronization effects using a field-theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar scheme to a short-distance "Rgap" scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = m_Z.
NASA Technical Reports Server (NTRS)
Tesar, Delbert; Tosunoglu, Sabri; Lin, Shyng-Her
1990-01-01
Research results on general serial robotic manipulators modeled with structural compliances are presented. Two compliant manipulator modeling approaches, distributed and lumped parameter models, are used in this study. System dynamic equations for both compliant models are derived by using the first- and second-order influence coefficients. Also, the properties of compliant manipulator system dynamics are investigated. One of the properties, defined as the inaccessibility of vibratory modes, is shown to display a distinct character associated with compliant manipulators. This property indicates the impact of robot geometry on the control of structural oscillations. Example studies are provided to illustrate the physical interpretation of the inaccessibility of vibratory modes. Two types of controllers are designed for compliant manipulators modeled by either lumped or distributed parameter techniques. In order to maintain the generality of the results, no linearization is introduced. Example simulations are given to demonstrate the controller performance. The second type of controller is also built for general serial robot arms and is adaptive in nature; it can estimate uncertain payload parameters on-line while simultaneously maintaining trajectory tracking properties. The relation between manipulator motion tracking capability and convergence of the parameter estimation is discussed through example case studies. The effect of control input update delays on adaptive controller performance is also studied.
Spread of the dust temperature distribution in circumstellar disks
NASA Astrophysics Data System (ADS)
Heese, S.; Wolf, S.; Dutrey, A.; Guilloteau, S.
2017-07-01
Context. Accurate temperature calculations for circumstellar disks are particularly important for their chemical evolution. Their temperature distribution is determined by the optical properties of the dust grains, which, among other parameters, depend on their radius. However, in most disk studies, only average optical properties and thus an average temperature are assumed to account for an ensemble of grains with different radii. Aims: We investigate the impact of subdividing the grain radius distribution into multiple sub-intervals on the resulting dust temperature distribution and spectral energy distribution (SED). Methods: The temperature distribution, the relative grain surface below a certain temperature, the freeze-out radius, and the SED were computed for two different scenarios: (1) a radius distribution represented by 16 logarithmically distributed radius intervals, and (2) a radius distribution represented by a single grain species with averaged optical properties (reference). Results: Within the considered parameter range, i.e., grain radii between 5 nm and 1 mm and an optically thin and thick disk with a parameterized density distribution, we obtain the following results: in optically thin disk regions, the temperature spread can be as large as 63% and the relative grain surface below a certain temperature is lower than in the reference disk. With increasing optical depth, the difference in the midplane temperature and the relative grain surface below a certain temperature decreases. Furthermore, below 20 K, this fraction is higher for the reference disk than for the case of multiple grain radii, while it shows the opposite behavior for temperatures above this threshold. The thermal emission in the case of multiple grain radii at short wavelengths is stronger than for the reference disk. The freeze-out radius (snowline) is a function of grain radius, spanning a radial range of 30 AU between the coldest and warmest grain species.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of the several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is given by a truncated normal random variable distribution, represented by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity of each parameter to the overall variability is determined by varying each of the fault parameters while fixing the others.
However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
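A minimal sketch of the truncated-normal Monte Carlo sampling over fault-parameter branches follows. The fault values, truncation bounds, and the simple recurrence relation are hypothetical placeholders, not the Southern Apennines inputs; only the truncated-normal parameterisation and the 200-run setup come from the abstract.

```python
import numpy as np

def truncated_normal(rng, mean, sd, low, high, size):
    """Sample a normal truncated to [low, high] by rejection; adequate for
    illustration (dedicated inverse-CDF samplers are better in the tails)."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.normal(mean, sd, size)
        draw = draw[(draw >= low) & (draw <= high)]
        take = min(draw.size, size - filled)
        out[filled:filled + take] = draw[:take]
        filled += take
    return out

rng = np.random.default_rng(1)
n = 200  # one draw per run, mirroring the 200 simulations per parameter

# Hypothetical fault-parameter branches (mean, sd, truncation bounds).
slip_mm_yr = truncated_normal(rng, 1.0, 0.3, 0.4, 1.6, n)   # slip rate
m_max = truncated_normal(rng, 6.8, 0.2, 6.4, 7.2, n)        # maximum magnitude

# Illustrative characteristic recurrence: scales with seismic moment
# ~10^(1.5*Mmax) and inversely with slip rate; 2000 yr is an arbitrary baseline.
recurrence_yr = 2000.0 * 10 ** (1.5 * (m_max - 6.8)) / slip_mm_yr
cov = recurrence_yr.std() / recurrence_yr.mean()  # spread across the logic tree
```

Fixing one parameter at its mean while sampling the others, as described above, would give the per-branch sensitivity instead of the overall variability.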
NASA Astrophysics Data System (ADS)
Salthammer, Tunga; Schripp, Tobias
2015-04-01
In the indoor environment, the distribution and dynamics of an organic compound between the gas phase, the particle phase and settled dust must be known to estimate human exposure. This, however, requires a detailed understanding of the environmentally important compound parameters, their interrelation and the algorithms for calculating partitioning coefficients. The parameters of major concern are: (I) the saturation vapor pressure (PS) (of the subcooled liquid); (II) the Henry's law constant (H); (III) the octanol/water partition coefficient (KOW); (IV) the octanol/air partition coefficient (KOA); (V) the air/water partition coefficient (KAW); and (VI) settled dust properties such as density and organic content. For most of the relevant compounds, reliable experimental data are not available, and calculated gas/particle distributions can differ widely due to the uncertainty in predicted PS and KOA values. This is not a big problem if the target compound is of low (<10-6 Pa) or high (>10-2 Pa) volatility, but in the intermediate region even small changes in PS or KOA will have a strong impact on the result. Moreover, the related physical processes might bear large uncertainties. The KOA value can only be used for particle absorption from the gas phase if the organic portion of the particle or dust is high. The Junge and Pankow equations for calculating the gas/particle distribution coefficient KP do not consider the physical and chemical properties of the particle surface. It is demonstrated by error propagation theory and Monte-Carlo simulations that parameter uncertainties from estimation methods for molecular properties and variations in indoor conditions might strongly influence the calculated distribution behavior of compounds in the indoor environment.
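The sensitivity in the intermediate-volatility region can be made concrete with a small Monte-Carlo propagation. The absorptive-partitioning relation used below (log KP = log KOA + log fOM − 11.91, a Harner-Bidleman-type form) is one common choice; the KOA uncertainty, the organic matter fraction fOM, and the TSP level are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
log_koa = rng.normal(10.0, 0.5, n)   # estimated log KOA with 0.5 log-unit sd
f_om = 0.4                           # assumed organic matter fraction of particles
log_kp = log_koa + np.log10(f_om) - 11.91   # gas/particle partition coefficient

# Particle-bound fraction phi = KP*TSP / (1 + KP*TSP), with TSP in ug/m^3.
tsp = 20.0
kp = 10 ** log_kp
phi = kp * tsp / (1 + kp * tsp)
spread = np.percentile(phi, [5, 50, 95])   # uncertainty band on the bound fraction
```

Even a 0.5 log-unit uncertainty in KOA stretches the 5th-95th percentile range of the particle-bound fraction across more than an order of magnitude for this intermediate-volatility case, which is exactly the regime the text flags as problematic.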
Ejecta velocity distribution for impact cratering experiments on porous and low strength targets
NASA Astrophysics Data System (ADS)
Michikami, Tatsuhiro; Moriguchi, Kouichi; Hasegawa, Sunao; Fujiwara, Akira
2007-01-01
Impact cratering experiments on porous targets with various compressive strengths ranging from ˜0.5 to ˜250 MPa were carried out in order to investigate the relationship between the ejecta velocity and the material strength or porosity of the target. A spherical alumina projectile (diameter ˜1 mm) was shot perpendicularly into the target surface at velocities ranging from 1.2 to 4.5 km/s (nominally 4 km/s), using a two-stage light-gas gun. The ejecta velocity was estimated from the fall point distance of the ejecta. The results show that there is in fact a large fraction of ejecta with very low velocities when the material strength of the target is small and the porosity is high. As an example, in the case of one specific target (compressive strength ˜0.5 MPa and porosity 43%), the amount of ejecta with velocities lower than 1 m/s is about 40% of the total mass. The average velocity of the ejecta decreases with decreasing material strength or increasing porosity of the target. Moreover, in our experiments, the ejecta velocity distributions normalized to total ejecta mass seem to depend mainly on the material strength of the target, and not so greatly on the porosity. We also compare our experimental results for the ejecta velocity distribution, using Housen's nondimensional scaling parameter, with those of Gault et al. [1963. Spray ejected from the lunar surface by meteoroid impact. NASA Technical Note D-1767] and Housen [1992. Crater ejecta velocities for impacts on rocky bodies. LPSC XXIII, 555-556]. The ejecta velocity distributions of our experiments are lower than those of Gault et al. and Housen.
Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research
NASA Astrophysics Data System (ADS)
Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.
2010-12-01
Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize scanning, image reconstruction/segmentation and pore-space analysis capabilities of a new generation benchtop MCT scanner and associated software package. To demonstrate applicability to soil research the project was focused on determination of porosities and pore size distributions of two Brazilian Oxisols from segmented MCT-data. Effects of metal filters and various acquisition parameters (e.g. total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
Dealing with uncertainty in modeling intermittent water supply
NASA Astrophysics Data System (ADS)
Lieb, A. M.; Rycroft, C.; Wilkening, J.
2015-12-01
Intermittency in urban water supply affects hundreds of millions of people in cities around the world, impacting water quality and infrastructure. Building on previous work to dynamically model the transient flows in water distribution networks undergoing frequent filling and emptying, we now consider the hydraulic implications of uncertain input data. Water distribution networks undergoing intermittent supply are often poorly mapped, and household metering frequently ranges from patchy to nonexistent. In the face of uncertain pipe material, pipe slope, network connectivity, and outflow, we investigate how uncertainty affects dynamical modeling results. We furthermore identify which parameters exert the greatest influence on uncertainty, helping to prioritize data collection.
Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects
Baumann, Hendrik; Sandmann, Werner
2016-01-01
Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity. PMID:27010993
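The matrix-analytic machinery applies to multi-dimensional chains, but the core idea, an equilibrium distribution built level by level, can be shown on the one-dimensional special case: a birth-death SIS-type model in which external introductions keep the chain ergodic, so it never dies out forever. All rates below are illustrative, not from the paper.

```python
import numpy as np

def equilibrium(N, beta, gamma, nu):
    """Stationary distribution of a birth-death chain on 0..N infected.
    Infection rate lam(i) includes an import term nu so state 0 is not
    absorbing; recovery rate is mu(i) = gamma*i. Detailed balance gives
    pi_i proportional to prod_{k<=i} lam(k-1)/mu(k)."""
    lam = lambda i: beta * i * (N - i) / N + nu * (N - i) / N
    mu = lambda i: gamma * i
    w = np.ones(N + 1)
    for i in range(1, N + 1):
        w[i] = w[i - 1] * lam(i - 1) / mu(i)
    return w / w.sum()

pi = equilibrium(N=100, beta=0.3, gamma=0.1, nu=0.05)
mean_infected = np.dot(np.arange(101), pi)   # mass concentrates near N*(1 - gamma/beta)
```

For genuinely multi-dimensional level-dependent QBD processes, the scalar ratio lam/mu is replaced by matrix recursions over the level blocks of the generator, which is what the matrix-analytic methods in the paper compute.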
Miled, Rabeb Bennour; Guillier, Laurent; Neves, Sandra; Augustin, Jean-Christophe; Colin, Pierre; Besse, Nathalie Gnanou
2011-06-01
Cells of six strains of Cronobacter were subjected to dry stress and stored for 2.5 months at ambient temperature. The individual cell lag time distributions of recovered cells were characterized at 25 °C and 37 °C in non-selective broth. The individual cell lag times were deduced from the times taken by cultures from individual cells to reach an optical density threshold. In parallel, growth curves for each strain at high contamination levels were determined in the same growth conditions. In general, the extreme value type II distribution with a shape parameter fixed to 5 (EVIIb) was the most effective at describing the 12 observed distributions of individual cell lag times. Recently, a model for characterizing individual cell lag time distribution from population growth parameters was developed for other food-borne pathogenic bacteria such as Listeria monocytogenes. We confirmed this model's applicability to Cronobacter by comparing the mean and the standard deviation of individual cell lag times to populational lag times observed with high initial concentration experiments. We also validated the model in realistic conditions by studying growth in powdered infant formula decimally diluted in Buffered Peptone Water, which represents the first enrichment step of the standard detection method for Cronobacter. Individual lag times and the pooling of samples significantly affect detection performances. Copyright © 2010 Elsevier Ltd. All rights reserved.
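The extreme value type II (Fréchet) distribution with shape fixed at 5, reported above as the best description of individual cell lag times, is easy to sample by inverse CDF; the sketch below compares the sample mean of simulated single-cell lag times with the closed-form mean. The location and scale values are invented, not fitted to the Cronobacter data.

```python
import math
import random

random.seed(1)

# Inverse-CDF sampling from an extreme value type II (Frechet) distribution
# with shape fixed at 5 (the EVIIb form). Location (2.0 h) and scale (1.5 h)
# are invented, not fitted values.

K = 5.0  # fixed EVII shape

def sample_lags(loc, scale, n):
    """n individual-cell lag times: loc + scale * (-ln U)**(-1/K)."""
    return [loc + scale * (-math.log(random.random())) ** (-1.0 / K)
            for _ in range(n)]

lags = sample_lags(loc=2.0, scale=1.5, n=50_000)
sample_mean = sum(lags) / len(lags)
theory_mean = 2.0 + 1.5 * math.gamma(1.0 - 1.0 / K)  # closed-form mean
```

Comparing such simulated single-cell means against population lag times observed at high inoculum is the kind of check used to validate the model above.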
Kuhn, Thomas; Cunze, Sarah; Kochmann, Judith; Klimpel, Sven
2016-01-01
Marine nematodes of the genus Anisakis are common parasites of a wide range of aquatic organisms. Public interest is primarily based on their importance as zoonotic agents of the human Anisakiasis, a severe infection of the gastro-intestinal tract as a result of consuming live larvae in insufficiently cooked fish dishes. The diverse nature of external impacts unequally influencing larval and adult stages of marine endohelminth parasites requires the consideration of both abiotic and biotic factors. Whereas abiotic factors are generally more relevant for early life stages and might also be linked to intermediate hosts, definitive hosts are indispensable for a parasite’s reproduction. In order to better understand the uneven occurrence of parasites in fish species, we here use the maximum entropy approach (Maxent) to model the habitat suitability for nine Anisakis species accounting for abiotic parameters as well as biotic data (definitive hosts). The modelled habitat suitability reflects the observed distribution quite well for all Anisakis species; however, in some cases, habitat suitability exceeded the known geographical distribution, suggesting a wider distribution than presently recorded. We suggest that integrative modelling combining abiotic and biotic parameters is a valid approach for habitat suitability assessments of Anisakis, and potentially other marine parasite species. PMID:27507328
NASA Astrophysics Data System (ADS)
Orlov, M. Yu; Lukachev, S. V.; Anisimov, V. M.
2018-01-01
The position of the combustion chamber between the compressor and the turbine, and the combined action of these components, mean that the working processes of all three are interconnected. One of the main requirements for the combustion chamber is the formation of the desired temperature field at the turbine inlet, which determines the durability of the nozzle assembly and the blade wheel of the first high-pressure turbine stage. A method for integrated simulation of the combustion chamber and its neighboring components (compressor and turbine) was developed. In the first stage of the study, this method was used to investigate the influence of the non-uniform flow distribution downstream of the compressor blades on the combustion chamber workflow. The goal of the study is to assess the impact of this non-uniformity on the parameters upstream of the turbine. The calculation was carried out in a transient formulation for a selected engine operating mode. The simulation showed that including the compressor affects the combustion chamber workflow and allows the temperature field at the turbine inlet, and hence turbine durability, to be determined more accurately. In addition, the simulation with the turbine showed changes in the flow velocity and pressure distributions in the combustion chamber.
Origin of orbital debris impacts on LDEF's trailing surfaces
NASA Technical Reports Server (NTRS)
Kessler, Donald J.
1993-01-01
A model was developed to determine the origin of orbital debris impacts measured on the trailing surfaces of LDEF. The model calculates the expected debris impact crater distribution around LDEF as a function of debris orbital parameters. The results show that only highly elliptical, low-inclination orbits could be responsible for these impacts. The most common objects left in this type of orbit are orbital transfer stages used by the U.S. and ESA to place payloads into geosynchronous orbit. Objects in this type of orbit are difficult to catalog by the U.S. Space Command; consequently, there are independent reasons to believe that the catalog does not adequately represent this population. This analysis concludes that the relative number of cataloged objects with highly elliptical, low-inclination orbits must be increased by a factor of 20 to be consistent with the LDEF data.
A comparison of two methods for expert elicitation in health technology assessments.
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2016-07-26
When data needed to inform parameters in decision models are lacking, formal elicitation of expert judgement can be used to characterise parameter uncertainty. Although numerous methods for eliciting expert opinion as probability distributions exist, there is little research to suggest whether one method is more useful than any other. This study had three objectives: (i) to obtain subjective probability distributions characterising parameter uncertainty in the context of a health technology assessment; (ii) to compare two elicitation methods by eliciting the same parameters in different ways; (iii) to collect the experts' subjective preferences for the different elicitation methods used. Twenty-seven clinical experts were invited to participate in an elicitation exercise to inform a published model-based cost-effectiveness analysis of alternative treatments for prostate cancer. Participants were individually asked to express their judgements as probability distributions using two different methods - the histogram and hybrid elicitation methods - presented in a random order. Individual distributions were mathematically aggregated across experts, with and without weighting. The resulting combined distributions were used in the probabilistic analysis of the decision model, and mean incremental cost-effectiveness ratios (ICERs) and the expected value of perfect information (EVPI) were calculated for each method and compared with the original cost-effectiveness analysis. Scores on the ease of use of the two methods, and the extent to which the probability distributions obtained from each method accurately reflected the expert's opinion, were also recorded. Six experts completed the task. Mean ICERs from the probabilistic analysis ranged from £162,600 to £175,500 per quality-adjusted life year (QALY), depending on the elicitation and weighting methods used.
Compared to having no information, use of expert opinion decreased decision uncertainty: the EVPI value at the £30,000 per QALY threshold decreased by 74-86 % from the original cost-effectiveness analysis. Experts indicated that the histogram method was easier to use, but attributed a perception of more accuracy to the hybrid method. Inclusion of expert elicitation can decrease decision uncertainty. Here, choice of method did not affect the overall cost-effectiveness conclusions, but researchers intending to use expert elicitation need to be aware of the impact different methods could have.
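Mathematical aggregation of individually elicited distributions, as used above, is commonly done by linear opinion pooling; a minimal sketch with invented histogram probabilities:

```python
# Linear opinion pool: a weighted average, bin by bin, of the experts'
# elicited probability vectors over a common set of intervals. The three
# expert histograms below are invented for illustration.

def pool(expert_probs, weights=None):
    """Combine expert probability vectors; equal weights by default."""
    n = len(expert_probs)
    if weights is None:
        weights = [1.0 / n] * n
    k = len(expert_probs[0])
    pooled = [sum(w * p[i] for w, p in zip(weights, expert_probs))
              for i in range(k)]
    total = sum(pooled)          # renormalise in case weights don't sum to 1
    return [p / total for p in pooled]

experts = [[0.1, 0.6, 0.3],      # expert A's histogram over three bins
           [0.2, 0.5, 0.3],      # expert B
           [0.0, 0.4, 0.6]]      # expert C
combined = pool(experts)         # equal-weight pool
```

Passing explicit weights gives the performance-weighted variant; the pooled vector then feeds the probabilistic analysis in place of a single expert's distribution.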
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. 
Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides an incredibly powerful, yet easy-to-use tool, removing impediments for putting theory into practice. PMID:26046009
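The gap between a crude regression contrast and the causal parameter targeted by ltmle-style estimators can be illustrated with single-time-point g-computation, a far simpler relative of the longitudinal methods above; all data-generating numbers are invented.

```python
import random

random.seed(7)

# Single-time-point g-computation on simulated data: binary confounder L
# drives both exposure A and outcome Y. The causal mean under "set A = 1"
# is the L-standardised mean, not the crude mean among the exposed.
# All numbers are invented; the true effect of A is 0.5.

n = 200_000
data = []
for _ in range(n):
    L = random.random() < 0.4
    A = random.random() < (0.8 if L else 0.2)    # exposure depends on L
    y = 1.0 * L + 0.5 * A + random.gauss(0.0, 1.0)
    data.append((L, A, y))

def strat_mean(l, a):
    """Mean outcome within the (L, A) stratum."""
    ys = [y for (L_, A_, y) in data if L_ == l and A_ == a]
    return sum(ys) / len(ys)

p_l = sum(1 for (L_, _, _) in data if L_) / n
g_formula = strat_mean(True, True) * p_l + strat_mean(False, True) * (1 - p_l)
crude = (sum(y for (_, A_, y) in data if A_)
         / sum(1 for (_, A_, _) in data if A_))  # confounded estimate
```

Here g_formula recovers roughly 0.4·1 + 0.5 = 0.9, while the crude mean among the exposed is inflated because exposure concentrates in the high-L stratum, mirroring the inflation seen with multivariable regression above.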
NASA Astrophysics Data System (ADS)
Dumont, M.; Flin, F.; Malinka, A.; Brissaud, O.; Hagenmuller, P.; Dufour, A.; Lapalus, P.; Lesaffre, B.; Calonne, N.; Rolland du Roscoat, S.; Ando, E.
2017-12-01
Snow optical properties are unique among Earth surfaces and crucial for a wide range of applications. The bi-directional reflectance (hereafter BRDF) of snow is sensitive to snow microstructure. However, the complex interplay between different parameters of snow microstructure, namely size parameters and shape parameters, makes their effects on reflectance challenging to disentangle both theoretically and experimentally. An accurate understanding and modelling of snow BRDF is required to correctly process satellite data. BRDF measurements might also provide a means of characterizing snow morphology. This study presents one of the very few datasets that combine bi-directional reflectance measurements over 500-2500 nm with X-ray tomography of the snow microstructure, for three different snow samples and two snow types. The dataset is used to evaluate the approach of Malinka (2014), which relates snow optical properties to the chord length distribution of the snow microstructure. For low and medium absorption, the model accurately reproduces the measurements but tends to slightly overestimate the anisotropy of the reflectance. The model indicates that the deviation of the ice chord length distribution from an exponential distribution, which can be understood as a characterization of snow type, does not impact the reflectance at such absorptions. The simulations are also affected by uncertainties in the ice refractive index values. At high absorption and high viewing/incident zenith angles, the simulations and the measurements disagree, indicating that some of the assumptions made in the model no longer hold. The study also indicates that crystal habits might play a significant role in the reflectance under such geometries and wavelengths. However, establishing quantitative relationships between crystal habits and reflectance, along with potential optical methodologies to classify snow morphology, would require an extended dataset covering more snow types.
This extended dataset can likely be obtained thanks to the use of ray tracing models on tomography images of the snow microstructure.
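Malinka's approach takes the chord length distribution of the microstructure as its input, and extracting chords from a binary ice/pore profile is straightforward. The sketch below uses a synthetic two-state Markov profile, whose chords are geometric (a discretised exponential) by construction, in place of a real tomography slice; the persistence probability is an arbitrary choice.

```python
import random

random.seed(3)

# Chord extraction from a 1-D binary (ice = 1 / pore = 0) profile, the input
# to chord-length-based reflectance models. The profile is synthetic: a
# two-state Markov sequence with assumed persistence probability p_stay.

def chords(profile, phase=1):
    """Lengths of consecutive runs of `phase` along the profile."""
    out, run = [], 0
    for v in profile:
        if v == phase:
            run += 1
        elif run:
            out.append(run)
            run = 0
    if run:
        out.append(run)
    return out

p_stay = 0.9
profile, state = [], 1
for _ in range(500_000):
    profile.append(state)
    if random.random() > p_stay:
        state = 1 - state

ice_chords = chords(profile, phase=1)
mean_chord = sum(ice_chords) / len(ice_chords)  # ~ 1/(1 - p_stay) = 10
```

On tomography images the same run-length extraction, applied along many lines and directions, yields the empirical chord length distribution whose departure from an exponential the study uses to characterise snow type.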
Landslides triggered by the January 12, 2010 Port-au-Prince, Haiti Mw 7.0 earthquake
NASA Astrophysics Data System (ADS)
Xu, Chong
2014-05-01
The January 12, 2010 Port-au-Prince, Haiti earthquake (Mw 7.0) triggered tens of thousands of landslides. The purpose of this study is to investigate correlations of the occurrence of landslides and their erosion thickness with topographic factors, seismic parameters, and distance from roads. A total of 30,828 landslides triggered by the earthquake cover a total area of 15.736 km2, spread over an area of more than 3,000 km2, and the volume of landslide accumulation materials is estimated at about 30,000,000 m3. These landslides are of various types, mainly shallow disrupted landslides and rock falls, but also include coherent deep-seated landslides and rock slides. The landslides were delineated using pre- and post-earthquake high-resolution satellite images. Spatial distribution maps and contour maps of landslide number density, landslide area percentage, and landslide erosion thickness were constructed to reveal the spatial distribution patterns of the co-seismic landslides more intuitively. Statistics of the size distribution and morphometric parameters of the co-seismic landslides were compiled and compared with other earthquake events. Four proxies of co-seismic landslide abundance, namely landslide centroid number density (LCND), landslide top number density (LTND), landslide area percentage (LAP), and landslide erosion thickness (LET), were used to correlate the co-seismic landslides with various landslide controlling parameters. These controlling parameters include elevation, slope angle, slope aspect, slope curvature, topographic position, distance from drainages, stratum/lithology, distance from the epicenter, distance from the Enriquillo-Plantain Garden fault, distance along the fault, and peak ground acceleration (PGA).
Comparison of these controlling parameters shows that slope angle had the strongest influence on co-seismic landslide occurrence. In addition, it should be noted that our co-seismic landslide inventory is much more detailed than those in several previous publications. Therefore, inventories of landslides triggered by the Haiti earthquake were compared with other published results and the reasons for the differences are presented. We suggest that past empirical relations between earthquake magnitude and co-seismic landslides should not be relied upon uncritically, or that they should be updated using newer and more complete co-seismic landslide inventories. This research was supported by the National Science Foundation of China (41202235)
The harmonic impact of electric vehicle battery charging
NASA Astrophysics Data System (ADS)
Staats, Preston Trent
The potential widespread introduction of the electric vehicle (EV) presents both opportunities and challenges to the power systems engineers who will be required to supply power to EV batteries. One of the challenges associated with EV battery charging comes from the potentially high harmonic currents associated with the conversion of ac power system voltages to dc EV battery voltages. Harmonic currents lead to increased losses in distribution circuits and reduced life expectancy of such power distribution components as capacitors and transformers. Harmonic current injections also cause harmonic voltages on power distribution networks. These distorted voltages can affect power system loads and specific standards exist regulating acceptable voltage distortion. This dissertation develops and presents the theory required to evaluate the electric vehicle battery charger as a harmonic distorting load and its possible harmonic impact on various aspects of power distribution systems. The work begins by developing a method for evaluating the net harmonic current injection of a large collection of EV battery chargers which accounts for variation in the start-time and initial battery state-of-charge between individual chargers. Next, this method is analyzed to evaluate the effect of input parameter variation on the net harmonic currents predicted by the model. We then turn to an evaluation of the impact of EV charger harmonic currents on power distribution systems, first evaluating the impact of these currents on a substation transformer and then on power distribution system harmonic voltages. The method presented accounts for the uncertainty in EV harmonic current injections by modeling the start-time and initial battery state-of-charge (SOC) of an individual EV battery charger as random variables. Thus, the net harmonic current, and distribution system harmonic voltages are formulated in a stochastic framework. 
Results indicate that considering variation in start-time and SOC leads to reduced estimates of harmonic current injection compared with more traditional methods that do not account for such variation. Evaluation of power distribution system harmonic voltages suggests that for any power distribution network there is a definite threshold penetration of EVs, below which the total harmonic distortion of voltage exceeds 5% at only an insignificant number of buses. Thus, most existing distribution systems will probably be able to accommodate the early introduction of EV battery charging without widespread harmonic voltage problems.
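The partial cancellation that makes randomised start-times reduce net harmonic current estimates can be sketched with a Monte Carlo sum of harmonic phasors carrying uniformly random phase; the per-charger magnitude is held at an assumed constant, whereas the dissertation's model also varies magnitude with state-of-charge.

```python
import cmath
import math
import random

random.seed(11)

# Monte Carlo sketch of phase cancellation: each charger injects a harmonic
# phasor whose angle depends on its random start time. Magnitudes are an
# assumed constant 1.0 per-unit for every charger.

def net_harmonic(n_ev, i_h=1.0, trials=2000):
    """Mean magnitude of the sum of n_ev phasors with uniform random phase."""
    acc = 0.0
    for _ in range(trials):
        s = sum(cmath.rect(i_h, random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_ev))
        acc += abs(s)
    return acc / trials

worst_case = 100 * 1.0        # all 100 chargers exactly in phase
expected = net_harmonic(100)  # grows like sqrt(N), far below worst case
```

For 100 chargers the mean net magnitude is about sqrt(pi*N/4) ≈ 8.9 per-unit rather than 100, which is why deterministic in-phase models overestimate harmonic injection.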
Trend analysis for daily rainfall series of Barcelona
NASA Astrophysics Data System (ADS)
Ortego, M. I.; Gibergans-Báguena, J.; Tolosana-Delgado, R.; Egozcue, J. J.; Llasat, M. C.
2009-09-01
Frequency analysis of hydrological series is a key point in acquiring an in-depth understanding of the behaviour of hydrologic events. The occurrence of extreme hydrologic events in an area may imply great social and economic impacts. A good understanding of hazardous events improves the planning of human activities. A useful model for hazard assessment of extreme hydrologic events in an area is the peaks-over-threshold (POT) model. Time-occurrence of events is assumed to be Poisson distributed, and the magnitude X of each event is modeled as an arbitrary random variable, whose excesses over the threshold x0, Y = X - x0 given X > x0, have a Generalized Pareto Distribution (GPD), FY(y|β,ξ) = 1 - (1 + ξy/β)^(-1/ξ), 0 ≤ y < ysup, where ysup = +∞ if ξ ≥ 0 and ysup = -β/ξ if ξ < 0. The limiting distribution for ξ → 0 is an exponential one. Independence between this magnitude and occurrence in time is assumed, as well as independence from event to event. In order to account for uncertainty in the estimation of the GPD parameters, a Bayesian approach is chosen. This approach allows us to include necessary conditions on the parameters of the distribution for our particular phenomenon, and to propagate the uncertainty of the estimates adequately into hazard parameters such as return periods. A common concern is whether the magnitudes of hazardous events have changed over the last decades. Long data series are much appreciated for properly studying these issues. The series of daily rainfall in Barcelona (1854-2006) has been selected; this is one of the longest European daily rainfall series available. Daily rainfall is better described using a relative scale and is therefore suitably treated on a log scale. Accordingly, log-precipitation is identified with X. Excesses over a threshold are modeled by a GPD with a limited maximum value. An additional assumption is that the distribution of the excesses Y has a limited upper tail and, therefore, ξ < 0 and ysup = -β/ξ.
Such a long data series provides valuable information about the phenomenon at hand, and therefore a first step is to assess its reliability. The first part of the work focuses on the possible existence of abrupt changes in the parameters of the GPD. These abrupt changes may be due to changes in the location of the observatories and/or technological advances introduced in the measuring instruments. The second part of the work examines the possible existence of trends. The parameters of the model are considered as functions of time. A new parameterisation of the GPD is suggested in order to deal parsimoniously with this climate variation: the classical scale and shape parameters of the GPD (β, ξ) are reformulated as a location parameter μ linked to the upper limit of the distribution, μ = ln(-β/ξ), and a shape parameter ν = ln(-ξ). In this reparameterisation, the parsimonious choice is to consider shape as a linear function of time, ν(t) = ν0 + ν1 t, while keeping location fixed, μ(t) = μ0. Climate change is then assessed by checking the hypothesis ν1 = 0. Results show no significant abrupt changes in the distribution of excesses of the Barcelona daily rainfall series, but they suggest a significant change in the parameters and, therefore, the existence of a trend in daily rainfall for this period.
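A quick non-Bayesian sanity check of the POT/GPD machinery above: simulate excesses from a GPD with negative shape (bounded upper tail, as assumed for log-rainfall) and recover the parameters by the method of moments. The parameter values are illustrative, not the Barcelona estimates.

```python
import random

random.seed(5)

# Simulate GPD excesses with a bounded upper tail (xi < 0) and recover
# (beta, xi) by the method of moments. beta = 1, xi = -0.3 are invented
# values, not fitted Barcelona parameters.

def gpd_sample(beta, xi, n):
    """Inverse-CDF draws from F(y) = 1 - (1 + xi*y/beta)**(-1/xi)."""
    return [beta / xi * ((1.0 - random.random()) ** (-xi) - 1.0)
            for _ in range(n)]

y = gpd_sample(beta=1.0, xi=-0.3, n=100_000)
m = sum(y) / len(y)
s2 = sum((v - m) ** 2 for v in y) / (len(y) - 1)
xi_hat = 0.5 * (1.0 - m * m / s2)   # from mean**2/var = 1 - 2*xi
beta_hat = m * (1.0 - xi_hat)       # from mean = beta/(1 - xi)
y_sup_hat = -beta_hat / xi_hat      # estimated upper bound of excesses
```

With xi < 0 every simulated excess stays below ysup = -beta/xi, matching the limited-upper-tail assumption; a Bayesian fit, as in the paper, would additionally propagate parameter uncertainty into return periods.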
Low-energy Electrons in Gamma-Ray Burst Afterglow Models
NASA Astrophysics Data System (ADS)
Jóhannesson, Guđlaugur; Björnsson, Gunnlaugur
2018-05-01
Observations of gamma-ray burst (GRB) afterglows have long provided the most detailed information about the origin of this spectacular phenomenon. The model that is most commonly used to extract physical properties of the event from the observations is the relativistic fireball model, where ejected material moving at relativistic speeds creates a shock wave when it interacts with the surrounding medium. Electrons are accelerated in the shock wave, generating the observed synchrotron emission through interactions with the magnetic field in the downstream medium. It is usually assumed that the accelerated electrons follow a simple power-law distribution in energy between specific energy boundaries, and that no electron exists outside these boundaries. This Letter explores the consequences of adding a low-energy power-law segment to the electron distribution with energy that contributes insignificantly to the total energy budget of the distribution. The low-energy electrons have a significant impact on the radio emission, providing synchrotron absorption and emission at these long wavelengths. Shorter wavelengths are affected through the normalization of the distribution. The new model is used to analyze the light curves of GRB 990510, and the resulting parameters are compared to a model without the extra electrons. The quality of the fit and the best-fit parameters are significantly affected by the additional model component. The new component is in one case found to strongly affect the X-ray light curves, showing how changes to the model at radio frequencies can affect light curves at other frequencies through changes in best-fit model parameters.
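The number-versus-energy budget of an added low-energy segment can be checked analytically for a two-segment power law: with the assumed indices below, the low segment holds most of the electrons while contributing a minority of the energy, and flatter low-energy slopes shrink that energy share further. Indices and boundaries are invented for illustration, not taken from the GRB 990510 fits.

```python
import math

# Analytic number/energy budget of a two-segment power-law electron
# distribution: n(g) proportional to (g/gm)**(-p1) below the break gm and
# to (g/gm)**(-p) above it, matched continuously at gm. All values assumed.

def pow_int(k, lo, hi):
    """Analytic integral of g**k from lo to hi."""
    if k == -1:
        return math.log(hi / lo)
    return (hi ** (k + 1) - lo ** (k + 1)) / (k + 1)

p1, p = 1.0, 2.5
g0, gm, gmax = 1.0, 100.0, 1.0e6

n_low = gm ** p1 * pow_int(-p1, g0, gm)       # electron number, low segment
n_high = gm ** p * pow_int(-p, gm, gmax)
e_low = gm ** p1 * pow_int(1.0 - p1, g0, gm)  # energy ~ integral of g*n(g)
e_high = gm ** p * pow_int(1.0 - p, gm, gmax)

number_frac_low = n_low / (n_low + n_high)    # ~0.87 of the electrons
energy_frac_low = e_low / (e_low + e_high)    # ~1/3 of the energy
```

These extra electrons change the distribution's normalization, which is how, per the abstract, a modification at radio-emitting energies propagates into light curves at shorter wavelengths.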
NASA Astrophysics Data System (ADS)
Kotnig, C.; Tavian, L.; Brenn, G.
2017-12-01
The cooling of the superconducting magnet cold masses with superfluid helium (He II) is a well-established concept, successfully in operation for years in the LHC. Consequently, its application to the cooling of FCC magnets is an obvious option. The 12-kW heat loads distributed over 10-km-long sectors not only require an adaptation of the magnet bayonet heat exchangers but also present new challenges to the cryogenic plants, the distribution system, and the control strategy. This paper recalls the basic LHC cooling concept with superfluid helium and defines the main parameters for its adaptation to the FCC requirements. Pressure drop and hydrostatic head develop in the distribution and pumping systems; their impact on the magnet temperature profile and the corresponding cooling efficiency is presented and compared for different distribution and pumping schemes.
NASA Astrophysics Data System (ADS)
Egedal, J.; Le, A.; Daughton, W.; Wetherton, B.; Cassak, Pa; Chen, Lj; Lavraud, B.; Dorell, J.; Avanov, L.; Gershman, D.
2016-10-01
During asymmetric magnetic reconnection at the dayside magnetopause, in situ spacecraft measurements show that electrons from the high-density inflow penetrate some distance into the low-density inflow. Supported by a kinetic simulation, we present a general derivation of an exclusion energy parameter, which provides a lower kinetic-energy bound for an electron to jump across the reconnection region from one inflow region to the other. As with a Maxwell demon, only high-energy electrons are permitted to cross the inner reconnection region, strongly impacting the form of the electron distribution function observed along the low-density-side separatrix. The dynamics produce two distinct flavors of crescent-shaped electron distributions in a thin boundary layer along the separatrix between the magnetospheric inflow and the reconnection exhaust. The analytical model presented relates these salient details of the distribution function to the electron dynamics in the inner reconnection region.
Marang, Leonie; van Loosdrecht, Mark C M; Kleerebezem, Robbert
2015-12-01
Although the enrichment of specialized microbial cultures for the production of polyhydroxyalkanoates (PHA) is generally performed in sequencing batch reactors (SBRs), the required feast-famine conditions can also be established using two or more continuous stirred-tank reactors (CSTRs) in series with partial biomass recirculation. The use of CSTRs offers several advantages, but will result in distributed residence times and a less strict separation between feast and famine conditions. The aim of this study was to investigate the impact of the reactor configuration, and various process and biomass-specific parameters, on the enrichment of PHA-producing bacteria. A set of mathematical models was developed to predict the growth of Plasticicumulans acidivorans-as a model PHA producer-in competition with a non-storing heterotroph. A macroscopic model considering lumped biomass and an agent-based model considering individual cells were created to study the effect of residence time distribution and the resulting distributed bacterial states. The simulations showed that in the 2-stage CSTR system the selective pressure for PHA-producing bacteria is significantly lower than in the SBR, and strongly affected by the chosen feast-famine ratio. This is the result of substrate competition based on both the maximum specific substrate uptake rate and substrate affinity. Although the macroscopic model overestimates the selective pressure in the 2-stage CSTR system, it provides a quick and fairly good impression of the reactor performance and the impact of process and biomass-specific parameters. © 2015 Wiley Periodicals, Inc.
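The substrate competition underlying the selective pressure, uptake rate during feast and affinity during famine, can be sketched with a two-species Monod model integrated by Euler steps; all kinetic parameters are invented, and this toy omits PHA storage and the residence-time distributions treated by the study's models.

```python
# Euler-integrated sketch of substrate competition between a fast-uptake
# storing specialist (x1) and a high-affinity non-storing heterotroph (x2)
# under Monod kinetics during a feast phase. All parameters are invented.

def grow(x1, x2, s, dt, steps):
    """Integrate dxi/dt = mu_i(s)*xi, ds/dt = -sum(mu_i(s)*xi/Y_i)."""
    mu1, k1, y1 = 0.40, 0.05, 0.5   # specialist: high maximum uptake rate
    mu2, k2, y2 = 0.25, 0.01, 0.5   # competitor: better substrate affinity
    for _ in range(steps):
        r1 = mu1 * s / (k1 + s) * x1
        r2 = mu2 * s / (k2 + s) * x2
        x1 += dt * r1
        x2 += dt * r2
        s = max(0.0, s - dt * (r1 / y1 + r2 / y2))
    return x1, x2, s

x1, x2, s = grow(x1=0.01, x2=0.01, s=1.0, dt=0.01, steps=2000)
```

Most of the substrate is consumed while s is high, where the specialist's higher maximum rate dominates; only below roughly s ≈ 0.06 does the competitor's affinity advantage take over, mirroring the dual basis of competition noted above.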
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Boemer, Jens C.; Vittal, Eknath
The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters seems to become important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.
Adult lactose digestion status and effects on disease
Szilagyi, Andrew
2015-01-01
BACKGROUND: Adult assimilation of lactose divides humans into dominant lactase-persistent and recessive nonpersistent phenotypes. OBJECTIVES: To review three medical parameters of lactose digestion, namely: the changing concept of lactose intolerance; the possible impact on diseases of microbial adaptation in lactase-nonpersistent populations; and the possibility that the evolution of lactase has influenced some disease pattern distributions. METHODS: A PubMed, Google Scholar and manual review of articles were used to provide a narrative review of the topic. RESULTS: The concept of lactose intolerance is changing and merging with food intolerances. Microbial adaptation to regular lactose consumption in lactase-nonpersistent individuals is supported by limited evidence. There is evidence suggestive of a relationship among geographical distributions of latitude, sunshine exposure and lactase proportional distributions worldwide. DISCUSSION: The definition of lactose intolerance has shifted away from association with lactose maldigestion. Lactose sensitivity is described equally in lactose digesters and maldigesters. The important medical consequence of withholding dairy foods could have a detrimental impact on several diseases; in addition, microbial adaptation in lactase-nonpersistent populations may alter risk for some diseases. There is suggestive evidence that the emergence of lactase persistence, together with human migrations before and after the emergence of lactase persistence, have impacted modern-day diseases. CONCLUSIONS: Lactose maldigestion and lactose intolerance are not synonymous. Withholding dairy foods is a poor method to treat lactose intolerance. Further epidemiological work could shed light on the possible effects of microbial adaptation in lactose maldigesters. The evolutionary impact of lactase may be still ongoing. PMID:25855879
Laub, P; Budy, Phaedra
2015-01-01
A critical decision in species conservation is whether to target individual species or a complex of ecologically similar species. Management of multispecies complexes is likely to be most effective when species share similar distributions, threats, and response to threats. We used niche overlap analysis to assess ecological similarity of 3 sensitive desert fish species currently managed as an ecological complex. We measured the amount of shared distribution of multiple habitat and life history parameters between each pair of species. Habitat use and multiple life history parameters, including maximum body length, spawning temperature, and longevity, differed significantly among the 3 species. The differences in habitat use and life history parameters among the species suggest they are likely to respond differently to similar threats and that most management actions will not benefit all 3 species equally. Habitat restoration, frequency of stream dewatering, non-native species control, and management efforts in tributaries versus main stem rivers are all likely to impact each of the species differently. Our results demonstrate that niche overlap analysis provides a powerful tool for assessing the likely effectiveness of multispecies versus single-species conservation plans.
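The abstract does not state which overlap statistic was used; one common choice is the overlapping coefficient, the area shared by two probability densities (1 for identical niches, 0 for disjoint ones). A minimal sketch, assuming hypothetical normal spawning-temperature niches for two species (the means and spreads below are invented for illustration):

```python
import numpy as np
from scipy import stats

def overlap_coefficient(pdf_a, pdf_b, grid):
    """Overlapping coefficient: area under the pointwise minimum of two densities."""
    return np.trapz(np.minimum(pdf_a(grid), pdf_b(grid)), grid)

# Hypothetical spawning-temperature niches for two desert fish species.
grid = np.linspace(-10.0, 50.0, 4000)
sp1 = stats.norm(14.0, 2.0).pdf   # species 1: mean 14 C, sd 2 C (assumed)
sp2 = stats.norm(18.0, 2.5).pdf   # species 2: mean 18 C, sd 2.5 C (assumed)

ovl = overlap_coefficient(sp1, sp2, grid)   # partial overlap, between 0 and 1
same = overlap_coefficient(sp1, sp1, grid)  # identical niches give overlap 1
```

Pairwise overlap values like `ovl` could then feed a clustering or threshold rule for deciding whether species are similar enough to manage as one complex.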
Štrbová, Kristína; Raclavská, Helena; Bílek, Jiří
2017-12-01
The aim of the study was to characterize the vertical distribution of particulate matter in an area known for some of the highest air pollution levels in Europe. A helium-filled balloon carrying measuring instrumentation was used for vertical observation of air pollution over the fugitive sources in the Moravian-Silesian metropolitan area during spring and summer. Selected meteorological parameters were recorded synchronously to explore their relationship with particulate matter. Concentrations of particulate matter in the vertical profile were significantly higher in the spring than in the summer. A significant effect of fugitive sources was observed up to an altitude of ∼255 m (∼45 m above ground) in both seasons. The presence of an inversion layer was observed at an altitude of ∼350 m (120-135 m above ground) at locations where traffic was the major source. Both particulate matter concentrations and the number of particles for the selected particle sizes decreased with increasing height. A strong correlation of particulate matter with meteorological parameters was not observed. The study represents the first attempt to assess the vertical profile over fugitive emission sources (old environmental burdens) in an industrial region. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the problem of modeling for parameter identification in distributed structures
NASA Technical Reports Server (NTRS)
Norris, Mark A.; Meirovitch, Leonard
1988-01-01
Structures are often characterized by parameters, such as mass and stiffness, that are spatially distributed. Parameter identification of distributed structures is subject to many of the difficulties involved in the modeling problem, and the choice of the model can greatly affect the results of the parameter identification process. Analogously to control spillover in the control of distributed-parameter systems, identification spillover is shown to exist as well and its effect is to degrade the parameter estimates. Moreover, as in modeling by the Rayleigh-Ritz method, it is shown that, for a Rayleigh-Ritz type identification algorithm, an inclusion principle exists in the identification of distributed-parameter systems as well, so that the identified natural frequencies approach the actual natural frequencies monotonically from above.
NASA Astrophysics Data System (ADS)
Shah, S.; Hussain, S.; Sagheer, M.
2018-06-01
This article explores the problem of two-dimensional, laminar, steady boundary layer stagnation point slip flow over a Riga plate. The incompressible upper-convected Maxwell fluid has been considered as a rheological fluid model. The heat transfer characteristics are investigated with generalized Fourier's law. The fluid thermal conductivity is assumed to be temperature dependent in this study. A system of partial differential equations governing the flow of an upper-convected Maxwell fluid, heat and mass transfer using generalized Fourier's law is developed. The main objective of the article is to inspect the impacts of pertinent physical parameters such as the stretching ratio parameter (0 ⩽ A ⩽ 0.3) , Deborah number (0 ⩽ β ⩽ 0.6) , thermal relaxation parameter (0 ⩽ γ ⩽ 0.5) , wall thickness parameter (0.1 ⩽ α ⩽ 3.5) , slip parameter (0 ⩽ R ⩽ 1.5) , thermal conductivity parameter (0.1 ⩽ δ ⩽ 1.0) and modified Hartmann number (0 ⩽ Q ⩽ 3) on the velocity and temperature profiles. Suitable local similarity transformations have been used to get a system of non-linear ODEs from the governing PDEs. The numerical solutions for the dimensionless velocity and temperature distributions have been achieved by employing an effective numerical method called the shooting method. The velocity profile shows a reduction in velocity for higher values of the viscoelastic parameter and the thermal relaxation parameter. In addition, to verify the reliability of the numerical results obtained by the shooting method, the MATLAB built-in solver bvp4c has also been utilized.
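The shooting method used above can be illustrated on a toy boundary value problem rather than the Maxwell-fluid ODEs themselves (whose full form is not given here): guess the unknown initial slope, integrate the resulting initial value problem, and root-find on the mismatch at the far boundary.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP: y'' = -y with y(0) = 0, y(1) = 1 (analytic solution sin(x)/sin(1)).
def terminal_value(s):
    """Integrate the IVP with guessed slope s = y'(0); return the boundary mismatch y(1) - 1."""
    sol = solve_ivp(lambda x, u: [u[1], -u[0]], (0.0, 1.0), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# Bracket and solve for the slope that satisfies the far boundary condition.
s_star = brentq(terminal_value, 0.1, 5.0)
# The analytic solution gives y'(0) = 1/sin(1), so s_star should match it.
```

The same loop structure applies to the similarity ODEs of the paper, with the far-field conditions imposed at a suitably large truncation point instead of x = 1.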
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
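As a hedged illustration of the estimation approach (Markov-Chain Monte-Carlo on viral-load data), here is a minimal random-walk Metropolis sampler for a single decay-rate parameter of a log-linear toy model. The HIV model in the paper is a full ODE system with many parameters; everything below (data, noise level, priors) is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "viral decay" data: log V(t) = log V0 - delta * t + noise.
t = np.linspace(0.0, 10.0, 40)
true_delta, log_v0, sigma = 0.5, 12.0, 0.2
y = log_v0 - true_delta * t + rng.normal(0.0, sigma, t.size)

def log_post(delta):
    """Log-posterior for the decay rate, flat prior on (0, inf), known intercept/noise."""
    if delta <= 0:
        return -np.inf
    return -0.5 * np.sum((y - (log_v0 - delta * t)) ** 2) / sigma**2

# Random-walk Metropolis chain.
chain, cur = [], 1.0
lp_cur = log_post(cur)
for _ in range(20000):
    prop = cur + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:   # accept/reject step
        cur, lp_cur = prop, lp_prop
    chain.append(cur)
posterior = np.array(chain[5000:])                 # discard burn-in
```

In the paper's setting, `log_post` would instead integrate the ODE model and compare it to the measured viral loads, but the accept/reject loop is unchanged.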
Effects of a modulated vortex structure on the diffraction dynamics of ring Airy Gaussian beams.
Huang, Xianwei; Shi, Xiaohui; Deng, Zhixiang; Bai, Yanfeng; Fu, Xiquan
2017-09-01
The evolution of the ring Airy Gaussian beams with a modulated vortex in free space is numerically investigated. Compared with the unmodulated vortex, the unique property is that the beam spots first break up, and then gather. The evolution of the beams is influenced by the parameters of the vortex modulation, and the splitting phenomenon gets enhanced with multiple rings becoming light spots if the modulation depth increases. The symmetric branch pattern of the beam spots gets changed when the number of phase folds increases, and the initial modulation phase only impacts the angle of the beam spots. Moreover, a large distribution factor correlates to a hollow Gaussian vortex shape and weakens the splitting and gathering trend. By changing the initial parameters of the vortex modulation and the distribution factor, the peak intensity is greatly affected. In addition, the energy flow and the angular momentum are elucidated with the beam evolution features being confirmed.
Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.
Duarte, Belmiro P M; Wong, Weng Kee
2015-08-01
This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
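A minimal sketch of the quadrature idea, assuming a one-parameter logistic model, an equal-weight two-point design, and a normal prior (all illustrative choices, not the paper's SDP setup): the Bayesian D-criterion, the expected log of the Fisher information over the prior, is approximated with Gauss-Hermite nodes.

```python
import numpy as np

# Gauss-Hermite nodes/weights for expectations over a normal prior.
nodes, weights = np.polynomial.hermite.hermgauss(20)

def bayes_d_criterion(xs, mu=1.0, sd=0.5):
    """E_theta[log I(theta)] for a one-parameter logistic model p = 1/(1+exp(-theta*x)),
    equal-weight design points xs, prior theta ~ N(mu, sd^2)."""
    thetas = mu + np.sqrt(2.0) * sd * nodes    # change of variables for GH quadrature
    crit = 0.0
    for w, th in zip(weights, thetas):
        p = 1.0 / (1.0 + np.exp(-th * np.asarray(xs)))
        info = np.mean(np.asarray(xs) ** 2 * p * (1.0 - p))   # Fisher information
        crit += w * np.log(info)
    return crit / np.sqrt(np.pi)

# Design points away from the origin carry more information than points near zero,
# so an optimizer over xs would push the support points outward (up to saturation).
```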
BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertulani, C. A.; Fuqua, J.; Hussein, M. S.
The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of the variation of the non-extensive parameter q from unity is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundances of light elements calculated with the extensive and the non-extensive statistics. We found that the observations are consistent with a non-extensive parameter q = 1 (+0.05/-0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
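The non-extensive statistics replaces the Boltzmann factor exp(-E/kT) with a q-deformed power law that recovers Boltzmann-Gibbs as q → 1; for q < 1 the distribution has a sharp high-energy cutoff, which is what suppresses the reaction rates. A small numerical sketch (units and parameter values are illustrative):

```python
import numpy as np

def tsallis_factor(E, kT, q):
    """q-deformed generalization of the Boltzmann factor exp(-E/kT).
    For q < 1 the factor vanishes beyond the cutoff E = kT/(1-q)."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-E / kT)
    base = np.maximum(1.0 - (1.0 - q) * E / kT, 0.0)
    return base ** (1.0 / (1.0 - q))

E = np.linspace(0.0, 5.0, 200)           # energies in units of kT
boltz = tsallis_factor(E, 1.0, 1.0)      # Boltzmann-Gibbs limit
near1 = tsallis_factor(E, 1.0, 1.001)    # q close to 1: nearly indistinguishable
q_low = tsallis_factor(E, 1.0, 0.9)      # q < 1: suppressed high-energy tail
```

The suppressed tail for q < 1 is the Gamow-window population that drives thermonuclear reaction rates, which is why even modest deviations of q from unity change the predicted abundances strongly.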
NASA Astrophysics Data System (ADS)
Perdana, B. P.; Setiawan, Y.; Prasetyo, L. B.
2018-02-01
Recently, highway development has been required to link regions and support their economic development. Although the availability of highways brings positive impacts, it also has negative impacts, especially the loss of vegetated land. This study aims to determine the change of vegetation coverage in the Jagorawi corridor Jakarta-Bogor during 37 years, and to analyze landscape patterns in the corridor based on the distance factor from Jakarta to Bogor. In this study, we used a long series of Landsat images taken by Landsat 2 MSS (1978), Landsat 5 TM (1988, 1995, and 2005) and Landsat 8 OLI/TIRS (2015). Analysis of landscape metrics was conducted through a patch analysis approach to determine the change of landscape patterns in the Jagorawi corridor Jakarta-Bogor. The landscape metrics used were Number of Patches (NumP), Mean Patch Size (MPS), Mean Shape Index (MSI), and Edge Density (ED). These parameters provide information on the structural elements of the landscape and its composition and spatial distribution in the corridor. The results indicated that vegetation coverage in the Jagorawi corridor Jakarta-Bogor decreased by about 48% over 35 years. Moreover, NumP increased while MPS decreased, indicating a higher level of fragmentation as patch sizes become smaller. Meanwhile, the increase in ED indicates that vegetated land is damaged annually, and the decrease in MSI in every year indicates degradation of vegetated land. This indicates that the declining value of MSI will have an impact on land degradation.
Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian
2018-05-01
Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, 2 approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to prove that both processes exist, but that they reflect distractor processing at different levels of representation. Distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. For our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
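The ex-Gaussian decomposition above can be sketched with scipy's `exponnorm`, which parameterizes the exponentially modified Gaussian by the shape K = τ/σ together with loc = μ and scale = σ. The reaction-time values below are synthetic, not the study's data, and the split of τ as "cognitive" versus μ, σ as "motor" is the interpretive convention cited in the abstract, not a property of the fit itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated reaction times: Gaussian stage (mu, sigma) plus exponential stage (tau).
mu, sigma, tau = 0.45, 0.05, 0.15        # seconds (illustrative values)
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# scipy's exponnorm uses shape K = tau/sigma, loc = mu, scale = sigma.
# Rough starting values help the generic MLE converge.
K, loc, scale = stats.exponnorm.fit(rt, 3.0, loc=0.4, scale=0.05)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
```

Fitting each experimental condition separately and comparing the recovered τ versus μ, σ across conditions is the kind of contrast the study reports.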
Economic optimization of the energy transport component of a large distributed solar power plant
NASA Technical Reports Server (NTRS)
Turner, R. H.
1976-01-01
A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter and pipe thickness distribution and also the insulation thickness distribution associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of the diseconomy of scale.
Safety and compliance of prescription spectacles ordered by the public via the Internet.
Citek, Karl; Torgersen, Daniel L; Endres, Jeffrey D; Rosenberg, Robert R
2011-09-01
This study investigated prescription spectacles ordered from online vendors and delivered directly to the public for compliance with the optical tolerance and impact resistance requirements for eyewear dispensed in the United States. Ten individuals ordered 2 pairs of spectacles from each of 10 of the most visited Internet vendors, totaling 200 eyewear orders. Spectacles ordered consisted of ranges of lens and frame materials, lens styles, and refractive corrections reflecting current distributions in the United States. Evaluations included measurement of sphere power, cylinder power and axis, add power (if indicated), horizontal prism imbalance, and impact testing. We received and evaluated 154 pairs of spectacles, comprising 308 lenses. Several spectacles were provided incorrectly, such as single vision instead of multifocal and lens treatments added or omitted. In 28.6% of spectacles, at least 1 lens failed tolerance standards for at least 1 optical parameter, and in 22.7% of spectacles, at least 1 lens failed impact testing. Overall, 44.8% of spectacles failed at least 1 parameter of optical or impact testing. Nearly half of prescription spectacles delivered directly by online vendors did not meet either the optical requirements of the patient's visual needs or the physical requirements for the patient's safety. Copyright © 2011 American Optometric Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad
2017-10-01
The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, the regional analysis of this parameter is important. However, the ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the abilities of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters were similar. The results also showed that the distribution type of the parameters which affect ET0 values can affect the distribution of reference evapotranspiration.
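The PPCC test can be sketched as the correlation between the ordered sample and the candidate distribution's quantiles evaluated at plotting positions; a value near 1 indicates a good fit. Blom's plotting-position formula and the synthetic Pearson type III data below are illustrative assumptions (the abstract does not specify the plotting positions used).

```python
import numpy as np
from scipy import stats

def ppcc(sample, dist):
    """Probability-plot correlation coefficient of a sample against a frozen distribution."""
    x = np.sort(sample)
    n = x.size
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom plotting positions
    return np.corrcoef(x, dist.ppf(pp))[0, 1]

rng = np.random.default_rng(2)
# Synthetic skewed "annual ET0" sample from a Pearson type III distribution.
data = stats.pearson3.rvs(0.8, loc=1000.0, scale=150.0, size=200, random_state=rng)

r_p3 = ppcc(data, stats.pearson3(*stats.pearson3.fit(data)))          # correct family
r_uni = ppcc(data, stats.uniform(data.min(), data.max() - data.min()))  # wrong family
```

Ranking candidate families by their PPCC values (here, Pearson III beats a uniform) is essentially the selection step described in the abstract.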
Biehler, J; Wall, W A
2018-02-01
If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Riva, Fabio; Milanese, Lucio; Ricci, Paolo
2017-10-01
To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple to apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
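The core idea, expanding the solution's dependence on an uncertain input parameter in Chebyshev polynomials so that uncertainty propagation needs no repeated simulations, can be sketched on a toy model (a simple exponential stand-in for an expensive simulation, not the Braginskii equations):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy model output as a function of one uncertain input parameter p
# (a stand-in for, e.g., a loss coefficient): y(p) = solution value at t = 1.
def model(p):
    return np.exp(-p * 1.0)            # pretend this call is an expensive simulation

# Sample the model at 9 Chebyshev points on the parameter range [a, b]...
a, b = 0.5, 2.0
k = np.arange(9)
nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / 18)
coeffs = C.chebfit(2.0 * (nodes - a) / (b - a) - 1.0, model(nodes), 8)

# ...then propagate input uncertainty through the cheap surrogate instead.
p_samples = np.random.default_rng(3).uniform(a, b, 10000)
y_surrogate = C.chebval(2.0 * (p_samples - a) / (b - a) - 1.0, coeffs)
```

With the surrogate in hand, statistics of the output (mean, variance, full distribution) over the uncertain parameter cost essentially nothing, which is the "single reduced-cost simulation" advantage claimed in the abstract.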
NASA Astrophysics Data System (ADS)
Brusseau, Mark L.; Xie, Lily H.; Li, Li
1999-04-01
Interest in coupled biodegradation and transport of organic contaminants has expanded greatly in the past several years. In a system in which biodegradation is coupled with solute transport, the magnitude and rate of biodegradation are influenced not only by properties of the microbial population and the substrate, but also by hydrodynamic properties (e.g., residence time, dispersivity). By nondimensionalizing the coupled-process equations for transport and nonlinear biodegradation, we show that transport behavior is controlled by three characteristic parameters: the effective maximum specific growth rate, the relative half-saturation constant, and the relative substrate-utilization coefficient. The impact on biodegradation and transport of these parameters, which constitute various combinations of factors reflecting the influences of biotic and hydraulic properties of the system, is examined numerically. A type-curve diagram based on the three characteristic parameters is constructed to illustrate the conditions under which steady and non-steady transport is observed, and the conditions for which the linear, first-order approximation is valid for representing biodegradation. The influence of constraints to microbial growth and substrate utilization on contaminant transport is also briefly discussed. Additionally, the impact of biodegradation, with and without biomass growth, on spatial solute distribution and moments is examined.
Recovering Parameters of Johnson's SB Distribution
Bernard R. Parresol
2003-01-01
A new parameter recovery model for Johnson's SB distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...
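scipy exposes Johnson's SB directly, which allows a rough sketch of the fitting problem the paper addresses: estimating the distribution minimum from the sample and then recovering the remaining parameters. The crude minimum estimate and the fit below are illustrative, not Parresol's recovery model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic "diameter" data from a Johnson SB with known shapes (a, b)
# and range [loc, loc + scale]; can we get the parameters back?
a, b, loc, scale = 0.5, 1.2, 4.0, 30.0
dbh = stats.johnsonsb.rvs(a, b, loc=loc, scale=scale, size=2000, random_state=rng)

# Step 1: crude estimate of the distribution minimum from the sample values.
loc_hat = dbh.min() - 0.5

# Step 2: fit shapes and range with the minimum held fixed
# (starting guesses keep the generic MLE in a feasible region).
a_hat, b_hat, _, scale_hat = stats.johnsonsb.fit(dbh, 0.5, 1.0,
                                                 floc=loc_hat, scale=35.0)
```

A recovery model in the paper's sense would instead back the parameters out of stand-level summary statistics (e.g., moments or percentiles of diameters), but the parameterization being recovered is the same.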
Transmuted of Rayleigh Distribution with Estimation and Application on Noise Signal
NASA Astrophysics Data System (ADS)
Ahmed, Suhad; Qasim, Zainab
2018-05-01
This paper deals with transforming the one-parameter Rayleigh distribution into a transmuted probability distribution by introducing a new parameter (λ), since the resulting distribution is useful for representing signal data distributions and failure data models. The transmuted parameter satisfies |λ| ≤ 1 and is estimated, together with the original parameter (θ), by the methods of moments and maximum likelihood using different sample sizes (n = 25, 50, 75, 100); the estimation results are compared by a statistical measure (mean square error, MSE).
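The transmuted family is defined by G(x) = (1 + λ)F(x) − λF(x)², where F is the baseline Rayleigh CDF, F(x) = 1 − exp(−x²/(2θ²)). A minimal sketch of maximum-likelihood estimation on synthetic data; sampling inverts the quadratic in F, and the true parameter values below are chosen for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def trans_rayleigh_cdf(x, theta, lam):
    """Transmuted Rayleigh CDF: G(x) = (1+lam)*F(x) - lam*F(x)^2, |lam| <= 1."""
    F = 1.0 - np.exp(-x**2 / (2.0 * theta**2))
    return (1.0 + lam) * F - lam * F**2

def trans_rayleigh_pdf(x, theta, lam):
    F = 1.0 - np.exp(-x**2 / (2.0 * theta**2))
    f = (x / theta**2) * np.exp(-x**2 / (2.0 * theta**2))   # baseline Rayleigh density
    return f * (1.0 + lam - 2.0 * lam * F)

rng = np.random.default_rng(5)
theta0, lam0 = 2.0, 0.6
# Inverse-transform sampling: solve lam*F^2 - (1+lam)*F + u = 0 for F in [0, 1].
u = rng.uniform(size=1000)
F = ((1 + lam0) - np.sqrt((1 + lam0) ** 2 - 4 * lam0 * u)) / (2 * lam0)
x = theta0 * np.sqrt(-2.0 * np.log(1.0 - F))

# Maximum likelihood over (theta, lam), with lam constrained to [-1, 1].
nll = lambda p: -np.sum(np.log(trans_rayleigh_pdf(x, p[0], p[1])))
fit = minimize(nll, x0=[1.0, 0.0], bounds=[(1e-3, None), (-1.0, 1.0)])
theta_hat, lam_hat = fit.x
```

A method-of-moments variant would instead match the first two sample moments to their closed-form counterparts; comparing the two estimators' MSE over repeated samples of each size n mirrors the study design in the abstract.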
Modeling the ejecta cloud in the first seconds after Deep Impact
NASA Astrophysics Data System (ADS)
Nagdimunov, L.; Kolokolova, L.; Wolff, M.; A'Hearn, M.; Farnham, T.
2014-07-01
Although the Deep Impact experiment was performed nine years ago, analysis of its data continues to shed light on our understanding of cometary atmospheres, surfaces, and interiors. We analyze the images acquired by the Deep Impact spacecraft High Resolution Instrument (HRI) in the first seconds after impact. These early images reflect the development of the material excavation from the cometary nucleus, enabling a study of fresh, unprocessed nuclear material, and potentially allowing a peek into the interior. Simply studying the brightness of the ejecta plume and its distribution as a function of height and time after impact could provide some insight into the characteristics of the ejecta. However, the optical thickness of the ejecta offers an additional source of information through the resultant shadow on the surface of the nucleus and brightness variations within it. Our goal was to reproduce both the distribution of brightness in the plume and in its shadow, thus constraining the characteristics of the ejecta. To achieve this, we used a 3D radiative-transfer package HYPERION [1], which allows an arbitrary spatial distribution and multiple dust components, for simulations of multiple scattering with realistic scattering and observational geometries. The parameters of our dust modeling were composition, size distribution, and number density of particles at the base of the ejecta cone (the last varied with the height, h, as h^{-3}). Composition was created as a mixture of so called Halley-like dust (silicates, carbon, and organics, see [2]), ice, and voids to account for particle porosity. We performed a parameter survey, searching for dust/ice ratios and particle porosity that could reproduce a density of the individual particles equal to the bulk density of the nucleus, 0.4 g/(cm^3), or 1.75 g/(cm^3) used in [3] to model crater development. The size distribution was taken from [4] and the number density was varied to achieve the best fit. 
To further constrain the results, we compared them with those of crater modeling [3]. Based on the approach given in [3] and using the crater diameter from [5], the mass of the ejecta 1 sec. after impact was estimated as 9×10^3-2×10^4 kg. The best fit to Deep Impact data and excavated mass constraints was achieved with ˜10% Halley dust, ˜20% ice, and the rest voids by volume for density 0.4 g/(cm^3) and ˜65% Halley dust with 38-8 % ice, depending on porosity, for density 1.75 g/(cm^3). Both cases result in a number density of ˜(10^4) particles/(cm^3). The dust/ice mass ratio for each density is ≥1, which is consistent with [6]. To reproduce the correct position and geometry of the shadow, we had to modify the geometry of the ejecta cone originally prescribed in [3]. This was required, in part, by the use of a revised nuclear shape model [7]. Our estimate of cone tilt differs from the previous one by 13.2°. It appeared that the observed change in brightness of the plume and shadow during the first second cannot be reproduced by a hollow cone. This is consistent with lab simulations of oblique impacts [8] which showed that hollowness of the ejecta cone can develop somewhat later in the plume evolution. Variations of brightness within the plume and the shadow can reveal the structure of the upper layers of the nucleus.
Gómez Fernández, María Jimena; Boston, Emma S M; Gaggiotti, Oscar E; Kittlein, Marcelo J; Mirol, Patricia M
2016-12-01
In this study we combine information from landscape characteristics, demographic inference and species distribution modelling to identify environmental factors that shape the genetic distribution of the fossorial rodent Ctenomys. We sequenced the mtDNA control region and amplified 12 microsatellites from 27 populations distributed across the Iberá wetland ecosystem. Hierarchical Bayesian modelling was used to construct phylogenies and estimate divergence times. We developed species distribution models to determine what climatic variables and soil parameters predicted species presence by comparing the current to the historic and predicted future distribution of the species. Finally, we explore the impact of environmental variables on the genetic structure of Ctenomys based on current and past species distributions. The variables that consistently correlated with the predicted distribution of the species and explained the observed genetic differentiation among populations included the distribution of well-drained sandy soils and temperature seasonality. A core region of stable suitable habitat was identified from the Last Interglacial, which is projected to remain stable into the future. This region is also the most genetically diverse and is currently under strong anthropogenic pressure. Results reveal complex demographic dynamics, which have been in constant change in both time and space, and are likely linked to the evolution of the Paraná River. We suggest that any alteration of soil properties (climatic or anthropic) may significantly impact the availability of suitable habitat and consequently the ability of individuals to disperse. The protection of this core stable habitat is of prime importance given the increasing levels of human disturbance across this wetland system and the threat of climate change.
Future impacts of nitrogen deposition and climate change scenarios on forest crown defoliation.
De Marco, Alessandra; Proietti, Chiara; Cionni, Irene; Fischer, Richard; Screpanti, Augusto; Vitale, Marcello
2014-11-01
Defoliation is an indicator of forest health in response to several stressors, including air pollutants, and one of the most important parameters monitored in the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The study aims to estimate crown defoliation in 2030, under three climate scenarios and one nitrogen deposition scenario, based on an evaluation of the most important factors (meteorological, nitrogen deposition and chemical soil parameters) affecting defoliation of twelve European tree species. The combination of favourable climate and nitrogen fertilization in the more adaptive species induces a generalized decrease in defoliation. On the other hand, severe climate change and drought are the main causes of increased defoliation in Quercus ilex and Fagus sylvatica, especially in the Mediterranean area. Our results provide information on the regional distribution of future defoliation, important knowledge for identifying policies to counteract the negative impacts of climate change and air pollution.
Nanodosimetric track structure in homogeneous extended beams.
Conte, V; Moro, D; Colautti, P; Grosswendt, B
2015-09-01
Physical aspects of particle track structure are important in determining the induction of clustered damage in relevant subcellular structures such as the DNA and higher-order genomic structures. The direct measurement of track-structure properties of ionising radiation is feasible today by counting the number of ionisations produced inside a small gas volume. In particular, the so-called track-nanodosimeter, installed at the TANDEM-ALPI accelerator complex of LNL, measures ionisation cluster-size distributions in a simulated subcellular structure 20 nm in size, corresponding approximately to the diameter of the chromatin fibre. The target volume is irradiated by pencil beams of primary particles passing at a specified impact parameter. To relate these measured track-structure data directly to radiobiological measurements performed in broad homogeneous particle beams, the data can be integrated over the impact parameter. This procedure was successfully applied to 240 MeV carbon ions and compared with Monte Carlo simulations for extended fields.
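The integration over the impact parameter described above can be sketched numerically. The snippet below is a simplified illustration: the pencil-beam cluster-size distributions, the impact-parameter grid and the Poisson form are invented toy values, not the experimental data; only the 2πb weighting mirrors the procedure named in the abstract.

```python
import numpy as np
from math import exp, factorial

def broad_beam_distribution(cluster_probs, impact_params):
    """Weight pencil-beam cluster-size distributions P(nu | b) by 2*pi*b,
    sum over the impact parameter b, and renormalize, emulating a
    homogeneous extended beam (simplified version of the procedure)."""
    b = np.asarray(impact_params)
    weighted = (cluster_probs * (2.0 * np.pi * b)[:, None]).sum(axis=0)
    return weighted / weighted.sum()

# Toy pencil-beam data: Poisson cluster-size distributions whose mean
# ionisation number decreases with impact parameter (illustrative only).
b = np.linspace(0.0, 40.0, 41)                  # impact parameters, nm
means = 3.0 * np.exp(-b / 10.0)                 # mean cluster size vs b
nu = np.arange(15)                              # cluster sizes 0..14
P = np.array([[m ** k * exp(-m) / factorial(k) for k in nu] for m in means])
broad = broad_beam_distribution(P, b)           # extended-beam distribution
```

Because the annular weight 2πb favours large impact parameters, where few ionisations occur, the extended-beam distribution is dominated by small cluster sizes.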
Deflection of light to second order in conformal Weyl gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sultana, Joseph, E-mail: joseph.sultana@um.edu.mt
2013-04-01
We reexamine the deflection of light in conformal Weyl gravity obtained in Sultana and Kazanas (2010) by extending the calculation, based on the procedure of Rindler and Ishak for the bending angle by a centrally concentrated spherically symmetric matter distribution, to second order in M/R, where M is the mass of the source and R is the impact parameter. It has recently been reported in Bhattacharya et al. (JCAP 09 (2010) 004; JCAP 02 (2011) 028) that when this calculation is done to second order, the term γr in the Mannheim-Kazanas metric again yields the paradoxical contribution γR (where the bending angle is proportional to the impact parameter) obtained by standard formalisms appropriate to asymptotically flat spacetimes. We show that no such contribution is obtained for a second-order calculation and that the effects of the term γr in the metric are again insignificant, as reported in our earlier work.
Sarkozy, Clémentine; Camus, Vincent; Tilly, Hervé; Salles, Gilles; Jardin, Fabrice
2015-07-01
Diffuse large B-cell lymphoma (DLBCL) is the most common form of aggressive non-Hodgkin lymphoma, accounting for 30-40% of newly diagnosed cases. Obesity is a well-defined risk factor for DLBCL. However, the impact of body mass index (BMI) on DLBCL prognosis is controversial. Recent studies suggest that skeletal muscle wasting (sarcopenia) or loss of fat mass can be detected by computed tomography (CT) images and is useful for predicting the clinical outcome in several types of cancer including DLBCL. Several hypotheses have been proposed to explain the differences in DLBCL outcome according to BMI or weight that include tolerance to treatment, inflammatory background and chemotherapy or rituximab metabolism. In this review, we summarize the available literature, addressing the impact and physiopathological relevance of simple anthropometric tools including BMI and tissue distribution measurements. We also discuss their relationship with other nutritional parameters and their potential role in the management of patients with DLBCL.
Quantitative validation of carbon-fiber laminate low velocity impact simulations
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
2015-09-26
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described in conjunction with the validation. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
Downstream processing from hot-melt extrusion towards tablets: A quality by design approach.
Grymonpré, W; Bostijn, N; Herck, S Van; Verstraete, G; Vanhoorne, V; Nuhn, L; Rombouts, P; Beer, T De; Remon, J P; Vervaet, C
2017-10-05
Since the concept of continuous processing is gaining momentum in pharmaceutical manufacturing, a thorough understanding of how process and formulation parameters can impact the critical quality attributes (CQA) of the end product is more than ever required. This study was designed to screen the influence of process parameters and drug load during HME on both extrudate properties and tableting behaviour of an amorphous solid dispersion formulation using a quality-by-design (QbD) approach. A full factorial experimental design with 19 experiments was used to evaluate the effect of several process variables (barrel temperature: 160-200 °C, screw speed: 50-200 rpm, throughput: 0.2-0.5 kg/h) and drug load (0-20%) as a formulation parameter on the hot-melt extrusion (HME) process, extrudate and tablet quality of Soluplus®-Celecoxib amorphous solid dispersions. A prominent impact of the formulation parameter on the CQA of the extrudates (i.e. solid state properties, moisture content, particle size distribution) and tablets (i.e. tabletability, compactibility, fragmentary behaviour, elastic recovery) was discovered. The resistance of the polymer matrix to thermo-mechanical stress during HME was confirmed throughout the experimental design space. In addition, the suitability of Raman spectroscopy as a verification method for the active pharmaceutical ingredient (API) concentration in solid dispersions was evaluated. Incorporation of the Raman spectroscopy data in a PLS model enabled API quantification in the extrudate powders, with none of the DOE experiments resulting in extrudates with a CEL content deviating >3% from the label claim. This research paper emphasized that HME is a robust process throughout the experimental design space for obtaining amorphous glassy solutions and for tableting such formulations, since only minimal impact of the process parameters was detected on the extrudate and tablet properties. However, the quality of extrudates and tablets can be optimized by adjusting specific formulation parameters (e.g. drug load).
Courcoul, Aurélie; Monod, Hervé; Nielen, Mirjam; Klinkenberg, Don; Hogerwerf, Lenny; Beaudeau, François; Vergu, Elisabeta
2011-09-07
Coxiella burnetii is the bacterium responsible for Q fever, a worldwide zoonosis. Ruminants, especially cattle, are recognized as the most important source of human infections. Although a great heterogeneity between shedder cows has been described, no previous studies have determined which features such as shedding route and duration or the quantity of bacteria shed have the strongest impact on the environmental contamination and thus on the zoonotic risk. Our objective was to identify key parameters whose variation highly influences C. burnetii spread within a dairy cattle herd, especially those related to the heterogeneity of shedding. To compare the impact of epidemiological parameters on different dynamical aspects of C. burnetii infection, we performed a sensitivity analysis on an original stochastic model describing the bacterium spread and representing the individual variability of the shedding duration, routes and intensity as well as herd demography. This sensitivity analysis consisted of a principal component analysis followed by an ANOVA. Our findings show that the most influential parameters are the probability distribution governing the levels of shedding, especially in vaginal mucus and faeces, the characteristics of the bacterium in the environment (i.e. its survival and the fraction of bacteria shed reaching the environment), and some physiological parameters related to the intermittency of shedding (transition probability from a non-shedding infected state to a shedding state) or to the transition from one type of shedder to another one (transition probability from a seronegative shedding state to a seropositive shedding state). Our study is crucial for the understanding of the dynamics of C. burnetii infection and optimization of control measures. 
Indeed, as control measures should impact the parameters influencing the bacterium spread most, our model can now be used to assess the effectiveness of different control strategies of Q fever within dairy cattle herds.
Collision of Dual Aggregates (CODA): Experimental observations of low-velocity collisions
NASA Astrophysics Data System (ADS)
Jorges, Jeffery; Dove, Adrienne; Colwell, Josh E.
2016-10-01
Low-velocity collisions are one of the driving factors that determine the particle size distribution and particle size evolution in planetary ring systems and in the early stages of planet formation. Collisions of sub-micron to decimeter-sized objects may result in particle growth by accretion, rebounding, or erosive processes that result in the production of additional smaller particles. Numerical simulations of these systems are limited by a need to understand these collisional parameters over a range of conditions. We present the results of a sequence of laboratory experiments designed to explore collisions over a range of parameter space. We are able to observe low-velocity collisions by conducting experiments in vacuum chambers in our 0.8-second drop tower apparatus. Initial experiments utilize a variety of impacting spheres, including glass, Teflon, aluminum, stainless steel, and brass. These spheres are either used in their natural state or are "mantled" - coated with a few-mm-thick layer of a cohesive powder. A high-speed, high-resolution video camera is used to record the motion of the colliding bodies. We track the particles to determine impactor speeds before and after collision, the impact parameter, and the collisional outcome. In the case of the mantled impactors, we can assess how much rotation is generated by the collision and estimate how much powder is released (i.e. how much mass is lost) due to the collision. We also determine how the coefficient of restitution varies as a function of material type, morphology, and impact velocity. With impact velocities ranging from about 20-100 cm/s, we observe that mantling of particles significantly reduces their coefficients of restitution, but we see essentially no dependence of the coefficient of restitution on the impact velocity, impact parameter, or system mass.
The results of this study will contribute to a better empirical model of collisional outcomes that will be refined with numerical simulation of the experiment to improve our understanding of the collisional evolution of ring systems and early planet formation.
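As a sketch of how a coefficient of restitution can be extracted from tracked trajectories, the snippet below fits straight lines to position-time data on either side of the impact frame. The 1-D treatment and the synthetic bounce values are illustrative assumptions, not the experiment's actual 2-D tracking pipeline.

```python
import numpy as np

def coefficient_of_restitution(times, positions, impact_index):
    """Estimate the coefficient of restitution from tracked 1-D positions.

    Speeds before and after impact are taken from linear fits to the
    position-time data on either side of the impact frame."""
    t, x = np.asarray(times), np.asarray(positions)
    v_in = np.polyfit(t[:impact_index], x[:impact_index], 1)[0]
    v_out = np.polyfit(t[impact_index:], x[impact_index:], 1)[0]
    return abs(v_out) / abs(v_in)

# Synthetic bounce: approach at 50 cm/s, rebound at 30 cm/s -> e = 0.6
t_in = np.linspace(0.0, 0.1, 20)
t_out = np.linspace(0.1, 0.2, 20)
x_in = 5.0 - 50.0 * t_in                     # cm, before impact
x_out = x_in[-1] + 30.0 * (t_out - 0.1)      # cm, after impact
e = coefficient_of_restitution(np.concatenate([t_in, t_out]),
                               np.concatenate([x_in, x_out]), 20)
```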
An optical model description of momentum transfer in heavy ion collisions
NASA Technical Reports Server (NTRS)
Khan, F.; Khandelwal, G. S.; Townsend, Lawrence W.; Wilson, J. W.; Norbury, John W.
1989-01-01
An optical model description of momentum transfer in relativistic heavy ion collisions, based upon composite particle multiple scattering theory, is presented. The imaginary component of the complex momentum transfer, which comes from the absorptive part of the optical potential, is identified as the longitudinal momentum downshift of the projectile. Predictions of fragment momentum distribution observables are made and compared with experimental data. Use of the model as a tool for estimating collision impact parameters is discussed.
2012-10-01
Emission-related parameters such as emission strength, timing, and vertical distribution (the fraction of the plume penetrating into the free troposphere) proved influential. PB emissions and the effect of PB on ozone levels in the Columbus-Phenix City metropolitan area are also of concern; forest fires produce nitrogen oxides.
NASA Astrophysics Data System (ADS)
Huneeus, Nicolas; Boucher, Olivier; Alterskjær, Kari; Cole, Jason N. S.; Curry, Charles L.; Ji, Duoying; Jones, Andy; Kravitz, Ben; Kristjánsson, Jón Egill; Moore, John C.; Muri, Helene; Niemeier, Ulrike; Rasch, Phil; Robock, Alan; Singh, Balwinder; Schmidt, Hauke; Schulz, Michael; Tilmes, Simone; Watanabe, Shingo; Yoon, Jin-Ho
2014-05-01
The effective radiative forcings (including rapid adjustments) and feedbacks associated with an instantaneous quadrupling of the preindustrial CO2 concentration and a counterbalancing reduction of the solar constant are investigated in the context of the Geoengineering Model Intercomparison Project (GeoMIP). The forcing and feedback parameters of the net energy flux, as well as its different components at the top-of-atmosphere (TOA) and surface, were examined in 10 Earth System Models to better understand the impact of solar radiation management on the energy budget. In spite of their very different nature, the feedback parameter and its components at the TOA and surface are almost identical for the two forcing mechanisms, not only in the global mean but also in their geographical distributions. This conclusion holds for each of the individual models despite intermodel differences in how feedbacks affect the energy budget. This indicates that the climate sensitivity parameter is independent of the forcing (when measured as an effective radiative forcing). We also show the existence of a large contribution of the cloudy-sky component to the shortwave effective radiative forcing at the TOA suggesting rapid cloud adjustments to a change in solar irradiance. In addition, the models present significant diversity in the spatial distribution of the shortwave feedback parameter in cloudy regions, indicating persistent uncertainties in cloud feedback mechanisms.
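Effective radiative forcing and feedback parameters of the kind analyzed here are commonly estimated by regressing the TOA net flux against surface warming, N = F + λΔT (Gregory-style regression). The sketch below assumes that linear energy-budget model with invented values for F and λ; it is not the GeoMIP analysis itself.

```python
import numpy as np

def gregory_regression(delta_T, net_flux):
    """Estimate effective radiative forcing F and feedback parameter lam
    from surface warming dT and TOA net flux N, assuming the linear
    energy-budget model N = F + lam * dT."""
    lam, F = np.polyfit(delta_T, net_flux, 1)   # slope = lam, intercept = F
    return F, lam

# Synthetic abrupt-CO2-like data: F = 7.0 W m^-2, lam = -1.2 W m^-2 K^-1
rng = np.random.default_rng(0)
dT = np.linspace(0.5, 5.0, 150)                       # K
N = 7.0 - 1.2 * dT + rng.normal(0.0, 0.1, dT.size)    # W m^-2, with noise
F, lam = gregory_regression(dT, N)
```

The climate sensitivity parameter then follows as -1/λ times the forcing normalization, which is why the abstract's finding that λ is forcing-independent matters.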
NASA Technical Reports Server (NTRS)
Koda, M.; Seinfeld, J. H.
1982-01-01
The reconstruction of a concentration distribution from spatially averaged and noise-corrupted data is a central problem in processing atmospheric remote sensing data. Distributed parameter observer theory is used to develop reconstructibility conditions for distributed parameter systems having measurements typical of those in remote sensing. The relation of the reconstructibility condition to the stability of the distributed parameter observer is demonstrated. The theory is applied to a variety of remote sensing situations, and it is found that those in which concentrations are measured as a function of altitude satisfy the conditions of distributed state reconstructibility.
Beaudouin, Rémy; Micallef, Sandrine; Brochot, Céline
2010-06-01
Physiologically based pharmacokinetic (PBPK) models have proven to be successful in integrating and evaluating the influence of age- or gender-dependent changes with respect to the pharmacokinetics of xenobiotics throughout entire lifetimes. Nevertheless, for an effective application of toxicokinetic modelling to chemical risk assessment, a PBPK model has to be detailed enough to include all the multiple tissues that could be targeted by the various xenobiotics present in the environment. For this reason, we developed a PBPK model based on a detailed compartmentalization of the human body and parameterized with new relationships describing the time evolution of physiological and anatomical parameters. To take into account the impact of human variability on the predicted toxicokinetics, we defined probability distributions for key parameters related to xenobiotic absorption, distribution, metabolism and excretion. The model predictability was evaluated by a direct comparison between computational predictions and experimental data for the internal concentrations of two chemicals (1,3-butadiene and 2,3,7,8-tetrachlorodibenzo-p-dioxin). A good agreement between predictions and observed data was achieved for different scenarios of exposure (e.g., acute or chronic exposure and different populations). Our results support that the general stochastic PBPK model can be a valuable computational support in the area of chemical risk analysis.
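The idea of propagating parameter probability distributions through a kinetic model can be illustrated with a deliberately minimal one-compartment model. The paper's PBPK model is far more detailed; the dose, volume-of-distribution and clearance distributions below are invented for illustration.

```python
import numpy as np

def one_compartment_conc(dose, volume, clearance, t):
    """Blood concentration after a bolus dose in a one-compartment model:
    C(t) = (dose / V) * exp(-(CL / V) * t)."""
    return dose / volume * np.exp(-clearance / volume * t)

# Monte Carlo propagation of lognormal inter-individual variability
# (illustrative values, not the paper's PBPK parameters).
rng = np.random.default_rng(42)
n = 10_000
V = rng.lognormal(mean=np.log(40.0), sigma=0.2, size=n)    # L
CL = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)    # L/h
conc_6h = one_compartment_conc(100.0, V, CL, t=6.0)        # mg dose -> mg/L
lo, hi = np.percentile(conc_6h, [2.5, 97.5])               # population interval
```

The 95% interval (lo, hi) summarizes how parameter uncertainty translates into uncertainty on the predicted internal concentration, the same logic the stochastic PBPK model applies tissue by tissue.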
A planetary dust ring generated by impact-ejection from the Galilean satellites
NASA Astrophysics Data System (ADS)
Sachse, Manuel
2018-03-01
All outer planets in the Solar System are surrounded by a ring system. Many of these rings are dust rings or they contain at least a high proportion of dust. They are often formed by impacts of micro-meteoroids onto embedded bodies. The ejected material typically consists of micron-sized charged particles, which are susceptible to gravitational and non-gravitational forces. Generally, detailed information on the dynamics and distribution of the dust requires expensive numerical simulations of a large number of particles. Here we develop a relatively simple and fast, semi-analytical model for an impact-generated planetary dust ring governed by the planet's gravity and the relevant perturbation forces for the dynamics of small charged particles. The most important parameter of the model is the dust production rate, which is a linear factor in the calculation of the dust densities. We apply our model to dust ejected from the Galilean satellites using production rates obtained from flybys of the dust sources. The dust densities predicted by our model are in good agreement with numerical simulations and with in situ measurements by the Galileo spacecraft. The lifetimes of large particles are about two orders of magnitude greater than those of small ones, which implies a flattening of the size distribution in circumplanetary space. Information about the distribution of circumplanetary dust is also important for the risk assessment of spacecraft orbits in the respective regions.
Blocquet, M; Guo, F; Mendez, M; Ward, M; Coudert, S; Batut, S; Hecquet, C; Blond, N; Fittschen, C; Schoemaecker, C
2018-05-01
The characteristics of indoor light (intensity, spectral and spatial distribution) originating from outdoors have been studied using experimental and modeling tools. They are influenced by many parameters such as building location, meteorological conditions, and the type of window. They have a direct impact on indoor air quality through a change in chemical processes by varying the photolysis rates of indoor pollutants. Transmittances of different windows have been measured and exhibit different wavelength cutoffs, thus influencing the potential of different species to be photolysed. The spectral distribution of light entering indoors through the windows was measured under different conditions and was found to depend only weakly on the time of day under indirect cloudy, direct sunshine, and partly cloudy conditions, in contrast to the light intensity, in agreement with calculations of the transmittance as a function of the zenithal angle and the calculated outdoor spectral distribution. The same conclusion can be drawn concerning the position within the room. The impact of these light characteristics on indoor chemistry has been studied using the INCA-Indoor model by considering the variation in the photolysis rates of key indoor species. Depending on the conditions, photolysis processes can lead to a significant production of radicals and secondary species.
NASA Astrophysics Data System (ADS)
Zorec, J.; Frémat, Y.; Domiciano de Souza, A.; Royer, F.; Cidale, L.; Hubert, A.-M.; Semaan, T.; Martayan, C.; Cochetti, Y. R.; Arias, M. L.; Aidelman, Y.; Stee, P.
2017-06-01
Aims: We assume that stars may undergo surface differential rotation to study its impact on the interpretation of Vsini and on the observed distribution Φ(u) of ratios of true rotational velocities u = V/Vc (Vc is the equatorial critical velocity). We discuss some phenomena affecting the formation of spectral lines and their broadening, which can obliterate the information carried by Vsini concerning the actual stellar rotation. Methods: We studied the line broadening produced by several differential rotation laws, but adopted Maunder's expression Ω(θ) = Ω0(1 + αcos²θ) as an attempt to account for all of these laws with the lowest possible number of free parameters. We studied the effect of the differential rotation parameter α on the measured Vsini and on the distribution Φ(u) of ratios u = V/Vc. Results: We conclude that the inferred Vsini is smaller than implied by the actual equatorial linear rotation velocity Veq if the stars rotate with α < 0, but is larger if the stars have α > 0. For a given |α| the deviations of Vsini are larger when α < 0. If the studied Be stars have on average α < 0, the number of rotators with Veq ≃ 0.9Vc is larger than expected from the observed distribution Φ(u); if these stars have on average α > 0, this number is lower than expected. We discuss seven phenomena that act either to narrow or to broaden spectral lines; these blur the rotational information carried by Vsini and complicate, in particular, deciding whether the Be phenomenon relies mostly on critical rotation. We show that two-dimensional radiation transfer calculations are needed in rapid rotators to diagnose the stellar rotation more reliably.
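Maunder's law can be evaluated directly to see why α < 0 depresses the apparent Vsini: mid-latitude surface elements, which contribute strongly to the line profile, rotate more slowly than the equator. A minimal numerical check (units and the α value are arbitrary):

```python
import numpy as np

def surface_velocity(omega0, alpha, radius, colatitude):
    """Linear rotation velocity at a given colatitude theta for Maunder's law
    Omega(theta) = Omega0 * (1 + alpha * cos(theta)**2)."""
    return omega0 * (1.0 + alpha * np.cos(colatitude) ** 2) * radius * np.sin(colatitude)

omega0, R = 1.0, 1.0
# At the equator (theta = pi/2) the alpha term vanishes: v = Omega0 * R.
v_eq = surface_velocity(omega0, 0.0, R, np.pi / 2)
# At 45 degrees colatitude, alpha = -0.3 slows the surface relative to
# rigid rotation, so a line profile weighted toward mid-latitudes
# yields a Vsini below the true equatorial velocity.
v45_rigid = surface_velocity(omega0, 0.0, R, np.pi / 4)
v45_diff = surface_velocity(omega0, -0.3, R, np.pi / 4)
```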
NASA Astrophysics Data System (ADS)
Abramson, Adam; Adar, Eilon; Lazarovitch, Naftali
2014-06-01
Groundwater is often the most or only feasible safe drinking water source in remote, low-resource areas, yet the economics of its development have not been systematically outlined. We applied AWARE (Assessing Water Alternatives in Remote Economies), a recently developed Decision Support System, to investigate the costs and benefits of groundwater access and abstraction for non-networked, rural supplies. Synthetic profiles of community water services (n = 17,962), defined across 13 parameters' values and ranges relevant to remote areas, were applied to the decision framework, and the parameter effects on economic outcomes were investigated. Regressions and analysis of output distributions indicate that the most important factors determining the cost of water improvements include the technological approach, the water service target, hydrological parameters, and population density. New source construction is less cost-effective than the use or improvement of existing wells, but necessary for expanding access to isolated households. We also explored three financing approaches - willingness-to-pay, -borrow, and -work - and found that they significantly impact the prospects of achieving demand-driven cost recovery. The net benefit under willingness to work, in which water infrastructure is coupled to community irrigation and cash payments replaced by labor commitments, is impacted most strongly by groundwater yield and managerial factors. These findings suggest that the cost-benefit dynamics of groundwater-based water supply improvements vary considerably by many parameters, and that the relative strengths of different development strategies may be leveraged for achieving optimal outcomes.
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposed a Bayesian structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were elicited from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed priors are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and a lower mean squared error compared to traditional SEM.
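The Gibbs sampling step can be illustrated on a problem far smaller than the SEM: sampling the joint posterior of the mean and precision of normal data under conjugate priors. The priors and data below are invented for illustration; the full SEM alternates analogous conditional draws over many more parameters.

```python
import numpy as np

def gibbs_normal(y, n_iter=5000, seed=0):
    """Gibbs sampler for the mean mu and precision tau of normal data,
    with conjugate priors mu ~ N(0, 10^2) and tau ~ Gamma(1, 1).
    Each iteration draws mu | tau, y then tau | mu, y."""
    rng = np.random.default_rng(seed)
    n, ybar = len(y), float(np.mean(y))
    mu0, prec0, a0, b0 = 0.0, 1.0 / 100.0, 1.0, 1.0   # prior hyperparameters
    tau = 1.0
    samples = []
    for _ in range(n_iter):
        prec = prec0 + n * tau                         # posterior precision of mu
        mean = (prec0 * mu0 + n * tau * ybar) / prec
        mu = rng.normal(mean, 1.0 / np.sqrt(prec))
        a = a0 + 0.5 * n                               # posterior Gamma shape
        b = b0 + 0.5 * float(np.sum((y - mu) ** 2))    # posterior Gamma rate
        tau = rng.gamma(a, 1.0 / b)                    # numpy uses scale = 1/rate
        samples.append((mu, tau))
    return np.array(samples)

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=200)   # synthetic data with true mean 2.0
draws = gibbs_normal(y)[1000:]       # discard burn-in
mu_hat = float(draws[:, 0].mean())   # posterior mean estimate of mu
```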
Remote sensing of PM2.5 from ground-based optical measurements
NASA Astrophysics Data System (ADS)
Li, S.; Joseph, E.; Min, Q.
2014-12-01
Remote sensing of particulate matter concentration with aerodynamic diameter smaller than 2.5 μm (PM2.5) from ground-based optical measurements of aerosols is investigated based on 6 years of hourly average measurements of aerosol optical properties, PM2.5, ceilometer backscatter coefficients and meteorological factors from the Howard University Beltsville Campus facility (HUBC). The accuracy of quantitative retrieval of PM2.5 using aerosol optical depth (AOD) is limited by changes in aerosol size distribution and vertical distribution. In this study, ceilometer backscatter coefficients are used to provide vertical information on aerosols. It is found that the PM2.5-AOD ratio can vary widely for different aerosol vertical distributions. The ratio is also sensitive to the mode parameters of a bimodal lognormal aerosol size distribution when the geometric mean radius of the fine mode is small. Two Angstrom exponents calculated from the three wavelengths 415, 500, and 860 nm are found to represent aerosol size distributions better than a single Angstrom exponent. A regression model is proposed to assess the impacts of different factors on the retrieval of PM2.5. Compared to a simple linear regression model, the new model combining AOD and ceilometer backscatter markedly improves the fit of PM2.5, and further introducing the Angstrom coefficients makes an apparent contribution. Using combined measurements of AOD, ceilometer backscatter, Angstrom coefficients and meteorological parameters in the regression model yields a correlation coefficient of 0.79 between fitted and expected PM2.5.
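A multi-predictor regression of this kind can be sketched with ordinary least squares. The synthetic predictors below merely stand in for AOD, ceilometer backscatter and two Angstrom exponents; the coefficients and noise level are invented, not the HUBC values.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept column; returns the
    coefficient vector and the correlation between fitted and observed y."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fitted = A @ coef
    r = float(np.corrcoef(fitted, y)[0, 1])
    return coef, r

# Synthetic stand-ins: columns 0..3 ~ AOD, backscatter, two Angstrom exponents
rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 4))
y = 10.0 + 8.0 * X[:, 0] + 3.0 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0.0, 2.0, n)
coef, r = fit_linear(X, y)   # coef[0] = intercept, coef[1:] = slopes
```

Comparing r between nested predictor sets (AOD only, AOD + backscatter, all four) reproduces the kind of incremental-improvement assessment the abstract describes.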
Modeling and evaluating the performance of Brillouin distributed optical fiber sensors.
Soto, Marcelo A; Thévenaz, Luc
2013-12-16
A thorough analysis of the key factors impacting the performance of Brillouin distributed optical fiber sensors is presented. An analytical expression is derived to estimate the error in the determination of the Brillouin peak gain frequency, based for the first time on real experimental conditions. This expression is experimentally validated, and describes how the frequency uncertainty depends on measurement parameters such as Brillouin gain linewidth, frequency scanning step and signal-to-noise ratio. Based on the model leading to this expression, and considering the limitations imposed by nonlinear effects and pump depletion, a figure of merit is proposed to fairly compare the performance of Brillouin distributed sensing systems. This figure of merit offers the research community and potential users an objective metric for evaluating the real performance gain resulting from any proposed configuration.
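The dependence of the peak-frequency error on noise can be explored numerically with a simple parabolic-peak estimator applied to a scanned Lorentzian gain spectrum. This is an illustrative stand-in for, not a reproduction of, the paper's analytical expression; the linewidth, scan step and noise level are assumed values.

```python
import numpy as np

def peak_frequency(freqs, gain, n_top=5):
    """Estimate the peak gain frequency from a parabolic fit to the
    n_top highest points of the scanned spectrum (a simple common estimator)."""
    idx = np.sort(np.argsort(gain)[-n_top:])
    a, b_, _ = np.polyfit(freqs[idx], gain[idx], 2)
    return -b_ / (2.0 * a)          # vertex of the fitted parabola

# Assumed conditions: 30 MHz Brillouin linewidth, 10 MHz scanning step.
rng = np.random.default_rng(3)
nu = np.arange(10_800.0, 11_000.0, 10.0)        # scan grid, MHz
nu_B, linewidth = 10_893.0, 30.0                # true peak and FWHM, MHz
gain = 1.0 / (1.0 + ((nu - nu_B) / (linewidth / 2.0)) ** 2)  # Lorentzian
errors = [abs(peak_frequency(nu, gain + rng.normal(0.0, 0.01, nu.size)) - nu_B)
          for _ in range(200)]
mean_err = float(np.mean(errors))               # MHz
```

Rerunning with a larger noise standard deviation (lower SNR) or a coarser scan step increases mean_err, mirroring the qualitative dependencies the analytical expression quantifies.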
Mentzafou, A; Wagner, S; Dimitriou, E
2018-04-29
Identifying the historical hydrometeorological trends in a river basin is necessary for understanding the dominant interactions between climate, human activities and local hydromorphological conditions. Estimating the hydrological reference conditions of a river is also crucial for accurately estimating the impacts of human water-related activities and designing appropriate water management schemes. In this effort, the output of a regional past-climate model covering the period from 1660 to 1990 was used, in combination with a dynamic, spatially distributed hydrologic model, to estimate the past and recent trends in the main hydrologic parameters, such as overland flow, water storage and evapotranspiration, in a Mediterranean river basin. The simulated past hydrologic conditions (1660-1960) were compared with the current hydrologic regime (1960-1990) to assess the magnitude of human and natural impacts on the identified hydrologic trends. The hydrological components of the recent period 2008-2016 were also examined in relation to the impact of human activities. The estimated long-term trends of the hydrologic parameters were partially attributed to varying atmospheric forcing due to volcanic activity combined with spontaneous meteorological fluctuations.
NASA Astrophysics Data System (ADS)
Olurotimi, E. O.; Sokoya, O.; Ojo, J. S.; Owolawi, P. A.
2018-03-01
Rain height is one of the significant parameters for the prediction of rain attenuation on Earth-space telecommunication links, especially those operating at frequencies above 10 GHz. This study examines the three-parameter Dagum distribution of rain height over Durban, South Africa. Five years of data were used to study the monthly, seasonal, and annual variations using parameters estimated by maximum likelihood. The performance of the distribution was assessed using statistical goodness-of-fit measures. The three-parameter Dagum distribution proves appropriate for modeling rain height over Durban, with a root mean square error of 0.26, while the shape and scale parameters of the distribution show a wide variation. The exceedance probability at 0.01% of time indicates a high probability of rain attenuation at higher frequencies.
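For reference, the three-parameter Dagum CDF and an RMSE-style goodness-of-fit check can be sketched as follows. The parameter values and sample are invented for illustration and are not those fitted for Durban.

```python
import numpy as np

def dagum_cdf(x, a, b, p):
    """CDF of the three-parameter Dagum distribution:
    F(x) = (1 + (x / b)**(-a))**(-p) for x > 0, shapes a and p, scale b."""
    x = np.asarray(x, dtype=float)
    return (1.0 + (x / b) ** (-a)) ** (-p)

def rmse(model, empirical):
    """Root mean square error between modelled and empirical probabilities."""
    return float(np.sqrt(np.mean((np.asarray(model) - np.asarray(empirical)) ** 2)))

# Draw Dagum variates via the inverse CDF: x = b * (u**(-1/p) - 1)**(-1/a),
# then check the model CDF against the empirical CDF of the sample.
rng = np.random.default_rng(5)
a, b, p = 4.0, 4.5, 1.2                      # invented shape/scale values
u = rng.uniform(size=2000)
x = b * (u ** (-1.0 / p) - 1.0) ** (-1.0 / a)
xs = np.sort(x)
empirical = np.arange(1, xs.size + 1) / xs.size
err = rmse(dagum_cdf(xs, a, b, p), empirical)
```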
Analysis of the spatial distribution of prostate cancer obtained from histopathological images
NASA Astrophysics Data System (ADS)
Diaz, Kristians; Castaneda, Benjamin; Montero, Maria Luisa; Yao, Jorge; Joseph, Jean; Rubens, Deborah; Parker, Kevin J.
2013-03-01
Understanding the spatial distribution of prostate cancer and how it changes with prostate-specific antigen (PSA) values, Gleason score, and other clinical parameters may help in comprehending the disease and increase the overall success rate of biopsies. This work aims to build 3D spatial distributions of prostate cancer and examine the extent and location of cancer as a function of independent clinical parameters. The border of the gland and the cancerous regions from whole-mount histopathological images are used to reconstruct 3D models showing the localization of the tumor. This process utilizes color segmentation and interpolation based on mathematical morphological distance. Fifty-eight glands are deformed into one prostate atlas using a combination of rigid, affine, and B-spline deformable registration techniques. The spatial distribution is developed by counting the number of occurrences at a given position in 3D space for each registered prostate cancer. Finally, a difference-of-proportions test is used to compare spatial distributions. Results show that prostate cancer has a significant difference (SD) in the right zone of the prostate between populations with PSA greater and less than 5 ng/ml. Age does not have any impact on the spatial distribution of the disease. Positive and negative capsule-penetrated cases show an SD in the right posterior zone. There is an SD across almost all of the gland between cases with tumors larger and smaller than 10% of the whole prostate. A larger database is needed to improve the statistical validity of the test. Finally, information from whole-mount histopathological images may provide better insight into prostate cancer.
Uncertainty Analysis of Simulated Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.
2012-12-01
Artificial hydraulic fracturing is widely used to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as by the complexity of the hydrological and geomechanical processes that control the fracturing. Thus, a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first defined the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin hypercube method. The 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions: fracture density, opened fracture length, and area density. Using PSUADE, three (11-dimensional) response surfaces of the objective functions were developed, and a global sensitivity analysis was performed to identify the parameters most influential on the objective functions representing fracture connectivity, which are critical for the sweep efficiency of the recovery process. Second-stage high-resolution response surfaces were then constructed with the dimension reduced to the number of most sensitive parameters. An additional response surface, with respect to the fractal dimension of the fracture distributions, was constructed in this stage.
Based on these response surfaces, comprehensive uncertainty analyses were conducted among the input parameters and objective functions. In addition, the reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
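The Latin-hypercube sampling stage described above can be sketched with SciPy's quasi-Monte Carlo tools. The eleven parameter ranges below are hypothetical placeholders, not the study's actual bounds:

```python
import numpy as np
from scipy.stats import qmc

# Illustrative bounds for 11 uncertain parameters (values are invented stand-ins
# for quantities like fracture density, in situ stress, joint friction, pump rate).
lower = np.array([0.1, 20.0, 5e6, 0.5, 10.0, 0.2, 1.0, 0.01, 30.0, 0.1, 1e-3])
upper = np.array([0.9, 60.0, 5e7, 0.9, 45.0, 0.8, 5.0, 0.10, 90.0, 0.6, 1e-1])

sampler = qmc.LatinHypercube(d=11, seed=42)
unit = sampler.random(n=1000)            # 1,000 stratified points in the unit hypercube
samples = qmc.scale(unit, lower, upper)  # rescale to the physical parameter ranges
```

Each row of `samples` is one parameter set to feed to the fracture simulator; stratification guarantees each 1/1000 slice of every parameter range is sampled exactly once.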
NASA Astrophysics Data System (ADS)
Alshomrani, Ali Saleh; Gul, Taza
2017-11-01
This study analyzes the spray distribution of a nanofluid thin layer over the slippery, stretching surface of a cylinder with thermal radiation. The spray rate is expressed as a function of the nanolayer thickness. The temperature applied during the spray phenomenon is taken as a reference temperature, with the viscous dissipation term included. The diverse behavior of thermal radiation with magnetic and chemical-reaction effects has been carefully observed, which causes variations in the spray distribution and heat transmission. Water-based nanofluids, namely Al2O3-H2O and Cu-H2O, have been examined under momentum and thermal slip boundary conditions. The basic equations have been transformed into a set of nonlinear equations using suitable variables. Approximate solutions have been obtained using the optimal approach of the Homotopy Analysis Method (HAM) and validated against the numerical (ND-Solve) method; close agreement between the two methods is confirmed through graphs and tables. The rate of the spray pattern under the applied pressure term has also been obtained. Maximum cooling performance is obtained with Cu-water at small values of the magnetic parameter and with alumina at large values of the magnetic parameter. The outcomes for the Cu-H2O and Al2O3-H2O nanofluids have been compared with published results in the literature. The effects of physical parameters, such as the skin friction coefficient and the local Nusselt number, have also been observed and compared with published work. The effects of the momentum and thermal slip parameters, thermal radiation parameter, magnetic parameter, and heat generation/absorption parameter on the spray rate have been calculated and discussed.
NASA Astrophysics Data System (ADS)
Csáki, Péter; Kalicz, Péter; Gribovszki, Zoltán
2016-04-01
The water balance of the sand regions of Hungary was analysed over the 2000-2008 period using remote-sensing-based evapotranspiration (ET) maps (1 km × 1 km spatial resolution) produced by the CREMAP model. The mean annual (2000-2008) net groundwater recharge (R) was estimated as the difference between mean annual precipitation (P) and ET, taking advantage of the fact that surface runoff is commonly negligible in sand regions. For the examined nine-year period, ET and R were about 90 percent and 10 percent of P, respectively. The mean annual ET and R were analysed in the context of land cover types. A Budyko model was used in spatially distributed mode for the climate change impact analysis. The Budyko-model parameter (α) was calculated for pixels without surplus water; for pixels affected by extra water, a linear model with β parameters (actual evapotranspiration / pan evapotranspiration) was used. These parameter maps can be used to evaluate future ET and R in spatially distributed mode (1 km × 1 km resolution). Using the two parameter maps (α and β) and regional climate model data (mean annual temperature and precipitation), evapotranspiration and net groundwater recharge projections have been made for three future periods (2011-2040, 2041-2070, 2071-2100). The expected changes in ET and R were determined relative to a reference period (1981-2010). According to the projections, by the end of the 21st century ET may increase, while a strong decrease in R is expected for the sand regions of Hungary. This research has been supported by the Agroclimate.2 VKSZ_12-1-2013-0034 project. Keywords: evapotranspiration, net groundwater recharge, climate change, Budyko model
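The abstract does not give the exact α-formulation used; as an illustration of the per-pixel Budyko approach, Fu's common one-parameter form of the Budyko curve, ET/P = 1 + PET/P − (1 + (PET/P)^ω)^(1/ω), can be evaluated as follows (the parameter name ω and the numbers below are invented for the sketch):

```python
import numpy as np

def fu_budyko_et(P, PET, omega):
    """Fu's one-parameter form of the Budyko curve (one common choice;
    the study's exact alpha-parameterization may differ)."""
    phi = PET / P  # aridity index
    return P * (1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega))

P, PET = 600.0, 900.0              # mm/yr, illustrative values for one pixel
ET = fu_budyko_et(P, PET, omega=2.6)
R = P - ET                         # net groundwater recharge (surface runoff negligible)
```

Applied per pixel with a calibrated parameter map and projected P and PET, this yields the spatially distributed future ET and R estimates described above.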
NASA Astrophysics Data System (ADS)
Chen, Zhanbin
2018-05-01
Plasma-screening effects on the 1s1/2 → 2l (l = s, p) and 1s1/2 → 3d3/2 electron-impact excitation of highly charged ions are investigated, together with their subsequent radiative decay. The analysis is based on the multi-configuration Dirac-Fock method and the fully relativistic distorted-wave method incorporating the Debye-Hückel potential. To explore the nature of the effects, calculations are carried out with detailed analyses of the integrated total and magnetic-sublevel cross sections, the alignment parameters, the linear polarizations, and the angular distribution of the X-ray photoemission, together with corresponding data calculated for various Debye lengths/environments, taking the 2p3/2 → 1s1/2 and 3d3/2 → 1s1/2 characteristic lines of the H-like Fe25+ ion as an example. The present results are compared with experimental data and other theoretical predictions where available.
NASA Astrophysics Data System (ADS)
Rahmani, Kianoosh; Kavousifard, Farzaneh; Abbasi, Alireza
2017-09-01
This article proposes a novel probabilistic Distribution Feeder Reconfiguration (DFR) method that takes uncertainty impacts into account with high accuracy. To this end, different scenarios are generated to represent the degree of uncertainty in the investigated elements, namely the active and reactive load consumption and the active power generation of the wind power units. A normal Probability Density Function (PDF) is divided into several class intervals for each uncertain parameter, according to the desired accuracy. The Weibull PDF is utilised to model wind generators and capture the variation of their power production. The proposed problem is solved using Fuzzy Adaptive Modified Particle Swarm Optimisation to find the optimal switching scheme in the multi-objective DFR. Moreover, this paper proposes two new mutation methods that adjust the inertia weight of PSO through fuzzy rules to enhance its global searching ability within the entire search space.
NASA Technical Reports Server (NTRS)
Smith, T. M.; Kloesel, M. F.; Sudbrack, C. K.
2017-01-01
Powder-bed additive manufacturing processes use fine powders to build parts layer by layer. For selective laser melted (SLM) Alloy 718, the powders available off-the-shelf are in the 10-45 or 15-45 micron size range. A comprehensive investigation of sixteen powders from these typical ranges and two off-nominal-sized powders is underway to gain insight into the impact of feedstock on processing, durability, and performance of 718 SLM space-flight hardware. This talk emphasizes one aspect of this work: the impact of powder variability on the microstructure and defects observed in the as-fabricated and fully heat-treated material, where lab-scale components were built using vendor-recommended parameters. These typical powders exhibit variation in composition, percentage of fines, roughness, morphology, and particle size distribution. How these differences relate to melt-pool size, porosity, grain structure, precipitate distributions, and inclusion content will be presented and discussed in the context of build quality and powder acceptance.
Larrouy-Maestri, Pauline; Magis, David; Morsomme, Dominique
2014-05-01
The operatic singing technique is frequently used in classical music. Several acoustical parameters of this specific technique have been studied but how these parameters combine remains unclear. This study aims to further characterize the Western operatic singing technique by observing the effects of melody and technique on acoustical and musical parameters of the singing voice. Fifty professional singers performed two contrasting melodies (popular song and romantic melody) with two vocal techniques (with and without operatic singing technique). The common quality parameters (energy distribution, vibrato rate, and extent), perturbation parameters (standard deviation of the fundamental frequency, signal-to-noise ratio, jitter, and shimmer), and musical features (fundamental frequency of the starting note, average tempo, and sound pressure level) of the 200 sung performances were analyzed. The results regarding the effect of melody and technique on the acoustical and musical parameters show that the choice of melody had a limited impact on the parameters observed, whereas a particular vocal profile appeared depending on the vocal technique used. This study confirms that vocal technique affects most of the parameters examined. In addition, the observation of quality, perturbation, and musical parameters contributes to a better understanding of the Western operatic singing technique. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Mathematical Model to estimate the wind power using four-parameter Burr distribution
NASA Astrophysics Data System (ADS)
Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu
2018-03-01
When describing the actual probability distribution of wind speed at a given location, the four-parameter Burr distribution is more suitable than other commonly used distributions. This paper introduces its important properties and characteristics, discusses its application to wind speed prediction, and derives an expression for the probability distribution of the output power of a wind turbine.
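A minimal sketch of the wind-to-power step described above, assuming the Burr XII form with two shape parameters plus location and scale as the four-parameter model (the turbine figures and all parameter values below are invented for illustration):

```python
import numpy as np
from scipy.stats import burr12
from scipy.integrate import quad

# Illustrative four-parameter Burr XII wind-speed model: shapes c, d plus loc, scale.
speed = burr12(c=4.0, d=1.2, loc=0.0, scale=8.0)

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    # Simple cubic-ramp power curve (MW); cut-in, rated, and cut-out speeds in m/s.
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * ((v - v_in) / (v_rated - v_in)) ** 3

# Expected output power: integrate the power curve against the wind-speed density.
expected_mw, _ = quad(lambda v: turbine_power(v) * speed.pdf(v), 0.0, 30.0,
                      points=[3.0, 12.0, 25.0])
```

The same integral with the fitted four-parameter Burr density gives the mean output power; propagating the full density through the power curve gives the output-power distribution itself.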
Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks
NASA Astrophysics Data System (ADS)
Karpov, Kirill; Fedotova, Irina; Siemens, Eduard
2017-07-01
In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O-, and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of the hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a very good estimation of timing behavior, which is essential for real-time and performance-critical applications such as image processing or real-time control.
Contribution inequality in the spatial public goods game: Should the rich contribute more?
NASA Astrophysics Data System (ADS)
Tu, Jing
2018-04-01
Scale-free properties exist in resource distribution, and it is commonplace in reality that the rich pay more towards public goods. What happens if the rich are expected to contribute more in the spatial public goods game? This paper proposes a new contribution paradigm in which an individual's contribution is determined by his payoff in the last evolution step. A tunable parameter w characterizes the contribution rate of nodes whose payoff is larger than the average. Simulations reveal that the impact of w on cooperation is associated with the enhancement factor r. When r is low, the higher w is, the lower the cooperation rate. As r increases, the value of w that optimizes the cooperation rate increases with it. The relationship between cooperation rate and wealth on the network has also been investigated; interestingly, a higher cooperation rate does not always lead to higher wealth. Finally, the impact of w on the wealth distribution on the network is explored: a higher w reduces inequality in the wealth distribution by shrinking the lower class, an effect enhanced by a higher r.
NASA Astrophysics Data System (ADS)
Kaur, Anterpreet
2018-01-01
We present results on measurements of the characteristics of events with jets, including jet charge, and investigations of event shapes and jet mass distributions. The measurements are compared to theoretical predictions, including those matched to parton shower and hadronization. Multi-differential jet cross sections are also presented over a wide range in transverse momentum, from inclusive jets to multi-jet final states. These measurements have an impact on the determination of the strong coupling constant as well as on parton distribution functions (PDFs), and are helpful in the treatment of heavy flavours in QCD analyses. We also show angular correlations in multi-jet events at the highest center-of-mass energies and compare the measurements to theoretical predictions including higher-order parton radiation and coherence effects. Measurements of jet and top-quark pair production cross sections are particularly sensitive to the gluon distribution in the proton, while electroweak boson production - inclusive or associated with charm or beauty quarks - gives insight into the flavour separation of the proton sea and the treatment of heavy quarks in PDF-related studies.
NASA Astrophysics Data System (ADS)
Dai, Mingzhi; Khan, Karim; Zhang, Shengnan; Jiang, Kemin; Zhang, Xingye; Wang, Weiliang; Liang, Lingyan; Cao, Hongtao; Wang, Pengjun; Wang, Peng; Miao, Lijing; Qin, Haiming; Jiang, Jun; Xue, Lixin; Chu, Junhao
2016-06-01
Sub-gap density of states (DOS) is a key parameter impacting the electrical characteristics of transistors based on semiconductor materials in integrated circuits. Previous spectroscopic methodologies for DOS extraction include static methods, temperature-dependent spectroscopy, and photonic spectroscopy. However, these may introduce many assumptions and calculations, or temperature and optical effects, into the intrinsic distribution of DOS along the bandgap of the material. Here, a direct and simpler method is developed to extract the DOS distribution of amorphous oxide-based thin-film transistors (TFTs) using dual gate pulse spectroscopy (GPS), introducing fewer extrinsic factors, such as temperature and laborious numerical analysis, than conventional methods. From this direct measurement, the sub-gap DOS distribution shows a peak value at the band-gap edge, on the order of 10^17-10^21/(cm3·eV), consistent with previous results. The results can be described by a model involving both Gaussian and exponential components. This tool is useful as a diagnostic for the electrical properties of oxide materials, and this study will benefit their modeling, improve their electrical properties, and thus broaden their applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Bin; Li, Huiying; Du, Xiaoming
2016-02-01
During surfactant enhanced aquifer remediation (SEAR), free-phase dense non-aqueous phase liquid (DNAPL) may be mobilized and spread. The impact of DNAPL spreading on SEAR remediation is not sufficiently understood, and its positive effect is seldom mentioned. To evaluate the correlation between DNAPL spreading and remediation efficiency, a two-dimensional sandbox apparatus was used to simulate the migration and dissolution of 1,2-DCA (1,2-dichloroethane) DNAPL during SEAR. The distribution area of DNAPL in the sandbox was determined by digital image analysis and correlated with the effluent DNAPL concentration. The results showed that the effluent DNAPL concentration has a significant positive linear correlation with the DNAPL distribution area, indicating that mobilization of DNAPL can improve remediation efficiency by enlarging the total NAPL-water interfacial area available for mass transfer. Meanwhile, the vertical migration of 1,2-DCA was confined within the aquifer boundary in all experiments, implying that by manipulating injection parameters in SEAR, optimal remediation efficiency can be reached while the risk of vertical DNAPL migration is minimized. This study provides a convenient, visible, and quantitative method for optimizing the parameters of a SEAR project, and an approach for rapidly predicting the extent of DNAPL contaminant distribution from the dissolved DNAPL concentration in the extraction well.
Huang, Ting-Lin; Li, Yu-Xian; Zhang, Hui
2008-01-01
During the reconstruction of horizontal-flow tanks into inclined settling tanks in Chinese water plants, the uniformity of water distribution has not been addressed theoretically. Based on the concepts of hydraulics, a model of inclined tanks, including the ratio (L/B) of tank length (L) to width (B), the diameter of the inclined tubes (d), and the height of the water distribution area (h1), was established to simulate and analyze the effects of these parameters on the Non-Uniformity of Water Distribution (NUWD). The influence of NUWD on settling efficiency was also analyzed based on Yao's formula. Simulated results show that the ratio L/B has the greatest impact on NUWD, and the settling efficiency decreases with it. Under the conditions q = 10 or 20 m/h and L/B ≥ 5 or 3, the total forces imposed on down-sliding flocs tend to zero, which reduces the separating efficiency. Moreover, the critical settling velocity (CSV) of the first inclined tube decreases with increasing h1, and the optimal range of h1 is 1.2-1.6 m. The difference in CSV between the first tube and the tank average u0 (denoted Δ(uF0 − u0)) increases with d and the surface load (q). Copyright IWA Publishing 2008.
NASA Astrophysics Data System (ADS)
Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang
2017-05-01
Distributed generation (DG) integrated into the network causes harmonic pollution, which can damage electrical devices and affect the normal operation of the power system. Moreover, owing to the randomness of wind and solar irradiation, the output of DG is also random, which leads to uncertainty in the harmonics generated by the DG. Thus, probabilistic methods are needed to analyse the impacts of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in a distribution network after integration of a distributed photovoltaic (DPV) system under different weather conditions, namely sunny, cloudy, rainy, and snowy days. The probability distribution function of the DPV output power in each typical weather condition was obtained via maximum likelihood parameter estimation. The Monte Carlo simulation method was adopted to calculate the probabilistic distribution of harmonic voltage content at different frequency orders, as well as the total harmonic distortion (THD), in typical weather conditions. The case study was based on the IEEE 33-bus system, and the resulting probabilistic distributions of harmonic voltage content and THD in the typical weather conditions were compared.
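The Monte Carlo step can be sketched as follows; the harmonic magnitudes, the Beta-distributed stand-in for weather-driven PV output, and all numbers are illustrative, not the paper's data or network model:

```python
import numpy as np

rng = np.random.default_rng(1)

def thd(fundamental, harmonics):
    """Total harmonic distortion: RMS of the harmonic magnitudes over the fundamental."""
    return np.sqrt(np.sum(np.asarray(harmonics) ** 2)) / fundamental

# Monte Carlo draw: a random PV output level per trial scales hypothetical
# per-order harmonic voltage magnitudes (p.u.) at one bus.
n_trials = 10_000
pv_level = rng.beta(2.0, 2.0, size=n_trials)           # stand-in for weather-driven output
base_harmonics = np.array([0.03, 0.02, 0.012, 0.008])  # orders 5, 7, 11, 13 (illustrative)
thd_samples = np.array([thd(1.0, pv_level[i] * base_harmonics) for i in range(n_trials)])
p95 = np.percentile(thd_samples, 95)                   # e.g., a 95th-percentile THD estimate
```

In the full method, each trial would instead draw the DPV output from the weather-specific fitted PDF and run a harmonic power flow on the IEEE 33-bus network to obtain the per-order voltage content.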
Space station impact experiments
NASA Technical Reports Server (NTRS)
Schultz, P.; Ahrens, T.; Alexander, W. M.; Cintala, M.; Gault, D.; Greeley, R.; Hawke, B. R.; Housen, K.; Schmidt, R.
1986-01-01
Four processes serve to illustrate potential areas of study and their implications for general problems in planetary science. First, accretional processes reflect the success of collisional aggregation over collisional destruction during the early history of the solar system. Second, both catastrophic and less severe effects of impacts on planetary bodies surviving from the early solar system may be expressed in asteroid/planetary spin rates, spin orientations, asteroid size distributions, and perhaps the origin of the Moon. Third, the surfaces of planetary bodies directly record the effects of impacts in the form of craters; these records have wide-ranging implications. Fourth, regolith evolution of asteroidal surfaces is a consequence of cumulative impacts, but the absence of a significant gravity term may profoundly affect the retention of shocked fractions and agglutinate build-up, thereby biasing the interpretation of spectral reflectance data. An impact facility on the Space Station would provide the controlled conditions necessary to explore such processes through either direct simulation of conditions or indirect simulation of certain parameters.
Assessing sewage impact in a South-West Atlantic rocky shore intertidal algal community.
Becherucci, Maria Eugenia; Santiago, Lucerito; Benavides, Hugo Rodolfo; Vallarino, Eduardo Alberto
2016-05-15
The spatial and seasonal variation of the specific composition and community parameters (abundance, diversity, richness, and evenness) of intertidal algal assemblages was studied at four coastal sampling sites distributed along an environmental gradient from the sewage outfall of Mar del Plata, Buenos Aires, Argentina. Two sites were located close to the sewage outfall (<800 m) (impacted area), and the other two were 8 and 9 km distant (non-impacted area). Algal abundance was analyzed monthly from October 2008 to May 2009. The algal assemblages varied according to the pollution gradient in spring, summer, and autumn, with autumn showing the greatest difference. Ceramium uruguayense was recognized as an indicator species for the non-impacted areas, while Berkeleya sp. represented an indicator species for the sewage outfall impact. Ulva spp. did not reflect the typical pattern observed in other sewage pollution areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsui, H.; Koike, Makoto; Kondo, Yutaka
2014-09-30
Number concentrations, size distributions, and mixing states of aerosols are essential parameters for accurate estimation of aerosol direct and indirect effects. In this study, we developed an aerosol module, designated the Aerosol Two-dimensional bin module for foRmation and Aging Simulation (ATRAS), that can represent these parameters explicitly by considering new particle formation (NPF), black carbon (BC) aging, and secondary organic aerosol (SOA) processes. A two-dimensional bin representation is used for particles with dry diameters from 40 nm to 10 µm to resolve both aerosol size (12 bins) and BC mixing state (10 bins), for a total of 120 bins. Particles with diameters from 1 to 40 nm are resolved using an additional 8 size bins to calculate NPF. The ATRAS module was implemented in the WRF-Chem model and applied to examine the sensitivity of simulated mass, number, size distributions, and optical and radiative parameters of aerosols to NPF, BC aging, and SOA processes over East Asia during the spring of 2009. BC absorption enhancement by coating materials was about 50% over East Asia during the spring, and the contribution of SOA processes to the absorption enhancement was estimated to be 10-20% over northern East Asia and 20-35% over southern East Asia. A clear north-south contrast was also found between the impacts of NPF and SOA processes on cloud condensation nuclei (CCN) concentrations: NPF increased CCN concentrations at higher supersaturations (smaller particles) over northern East Asia, whereas SOA increased CCN concentrations at lower supersaturations (larger particles) over southern East Asia. Application of ATRAS to East Asia also showed that the impact of each process on each optical and radiative parameter depended strongly on the process and the parameter in question.
The module can be used in the future as a benchmark model to evaluate the accuracy of simpler aerosol models and to examine interactions between NPF, BC aging, and SOA processes under different meteorological conditions and emissions.
Mandolfo, S; Malberti, F; Imbasciati, E; Cogliati, P; Gauly, A
2003-02-01
Optimization of hemodialysis treatment parameters and dialyzer characteristics is crucial for the short- and long-term outcome of end-stage renal disease patients. The new high-flux Helixone membrane in the dialyzers of the FX series (Fresenius Medical Care, Germany) has interesting features, such as the relationship between membrane thickness and capillary diameter, which increases middle-molecule elimination by convection, as well as higher capillary packing and microondulation to improve dialysate flow and distribution. Blood flow, dialysate flow, and surface area are the main determinants of dialyzer performance; however, the impact of each parameter on small- and middle-molecule clearance in high-flux dialysis has not been well explored. To find the best treatment conditions for the new dialyzer series, we evaluated urea, creatinine, and phosphate clearances and the reduction rate of beta2-microglobulin in ten stable patients treated with different blood flows (effective Qb 280 and 360 ml/min), dialysate flows (Qd 300 or 500 ml/min), and dialyzer surfaces (1.4 and 2.2 m2, FX60 or FX100). KoA and Kt/V were also calculated. Blood flow, dialysate flow, and surface area demonstrated significant and independent effects on the clearance of urea, creatinine, and phosphate, as well as on Kt/V. Small-solute clearance was stable over the treatment. In contrast to small solutes, the reduction rate of beta2-microglobulin was related only to increasing dialyzer surface. The new dialyzer design of the FX series proves highly effective due to improved dialysate distribution and reduced diffusive resistance, as shown by the small-solute clearances. A high reduction rate of beta2-microglobulin is favored by the improved fiber geometry and pore-size distribution. These findings have potential long-term benefits for the patient.
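Kt/V is commonly estimated from pre- and post-dialysis urea with the second-generation Daugirdas single-pool formula; whether the study used exactly this variant is an assumption, and the session values below are illustrative:

```python
import math

def sp_ktv_daugirdas(pre_urea, post_urea, t_hours, uf_litres, post_weight_kg):
    """Second-generation Daugirdas single-pool Kt/V:
    Kt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W, with R = post/pre urea."""
    r = post_urea / pre_urea
    return -math.log(r - 0.008 * t_hours) + (4.0 - 3.5 * r) * uf_litres / post_weight_kg

# Illustrative session: urea 25 -> 10 mmol/L, 4 h, 2 L ultrafiltration, 70 kg post-weight
ktv = sp_ktv_daugirdas(25.0, 10.0, 4.0, 2.0, 70.0)
```

The convective term (4 − 3.5R)·UF/W captures the extra clearance from ultrafiltration, which is why higher fluid removal raises Kt/V at the same urea reduction ratio.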
Data-Conditioned Distributions of Groundwater Recharge Under Climate Change Scenarios
NASA Astrophysics Data System (ADS)
McLaughlin, D.; Ng, G. C.; Entekhabi, D.; Scanlon, B.
2008-12-01
Groundwater recharge is likely to be impacted by climate change, with changes in precipitation amounts altering moisture availability and changes in temperature affecting evaporative demand. This could have major implications for sustainable aquifer pumping rates and contaminant transport into groundwater reservoirs in the future, thus making predictions of recharge under climate change very important. Unfortunately, in dry environments where groundwater resources are often most critical, low recharge rates are difficult to resolve due to high sensitivity to modeling and input errors. Some recent studies on climate change and groundwater have considered recharge using a suite of general circulation model (GCM) weather predictions, an obvious and key source of uncertainty. This work extends beyond those efforts by also accounting for uncertainty in other land-surface model inputs in a probabilistic manner. Recharge predictions are made using a range of GCM projections for a rain-fed cotton site in the semi-arid Southern High Plains region of Texas. Results showed that model simulations using a range of unconstrained literature-based parameter values produce highly uncertain and often misleading recharge rates. Thus, distributional recharge predictions are found using soil and vegetation parameters conditioned on current unsaturated zone soil moisture and chloride concentration observations; assimilation of observations is carried out with an ensemble importance sampling method. Our findings show that the predicted distribution shapes can differ for the various GCM conditions considered, underscoring the importance of probabilistic analysis over deterministic simulations. The recharge predictions indicate that the temporal distribution (over seasons and rain events) of climate change will be particularly critical for groundwater impacts. 
Overall, changes in recharge amounts and intensity were often more pronounced than changes in annual precipitation and temperature, thus suggesting high susceptibility of groundwater systems to future climate change. Our approach provides a probabilistic sensitivity analysis of recharge under potential climate changes, which will be critical for future management of water resources.
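The conditioning step above can be sketched as plain importance sampling with Gaussian observation error; the lognormal prior, the identity "model", and all numbers are illustrative stand-ins for the actual land-surface setup and soil-moisture/chloride observations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior ensemble of one uncertain soil parameter (illustrative lognormal spread).
ensemble = rng.lognormal(mean=0.0, sigma=0.8, size=5000)
obs, obs_err = 1.2, 0.3          # hypothetical observation and its error std

# For the sketch, the "model prediction" is the parameter itself; in practice each
# member would be run through the unsaturated-zone model to predict the observables.
predicted = ensemble
log_w = -0.5 * ((predicted - obs) / obs_err) ** 2   # Gaussian log-likelihood
w = np.exp(log_w - log_w.max())
w /= w.sum()                                        # normalized importance weights

post_mean = np.sum(w * ensemble)                    # importance-weighted posterior mean
resampled = rng.choice(ensemble, size=5000, p=w)    # resampled posterior ensemble
```

Recharge predictions under each GCM scenario would then be run with the resampled (data-conditioned) parameter ensemble, yielding the distributional predictions described above.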
NASA Astrophysics Data System (ADS)
Aróztegui, Juan J.; Urcola, José J.; Fuentes, Manuel
1989-09-01
Commercial electric arc melted low-carbon steels, provided as I beams, were characterized both microstructurally and mechanically in the as-rolled, copper precipitation, and plastically pre-deformed conditions. Inclusion size distribution, ferrite grain size, pearlite volume fraction, precipitated volume fraction of copper, and size distribution of these precipitates were determined by conventional quantitative optical and electron metallographic techniques. From the tensile tests conducted at a strain rate of 10-3 s-1 and impact Charpy V-notched tests carried out, stress/strain curves, yield stress, and impact-transition temperature were obtained. The specific fractographic features of the fracture surfaces also were quantitatively characterized. The increases in yield stress and transition temperature experienced upon either aging or work hardening were related through empirical relationships. These dependences were analyzed semiquantitatively by combining microscopic and macroscopic fracture criteria based on measured fundamental properties (fracture stress and yield stress) and observed fractographic parameters (crack nucleation distance and nuclei size). The rationale developed from these fracture criteria allows the semiquantitative prediction of the temperature transition shifts produced upon aging and work hardening. The values obtained are of the right order of magnitude.
Burst nucleation by hot injection for size controlled synthesis of ε-cobalt nanoparticles.
Zacharaki, Eirini; Kalyva, Maria; Fjellvåg, Helmer; Sjåstad, Anja Olafsen
2016-01-01
Reproducible growth of narrowly size-distributed ε-Co nanoparticles of a specific size requires full understanding and identification of the role of the essential synthesis parameters for the applied synthesis method. For the hot injection methodology, significant discrepancies in obtained sizes and applied reaction conditions are reported. A systematic investigation of key synthesis parameters, such as injection temperature and time, metal to surfactant ratio, and reaction holding time, in terms of their impact on mean ([Formula: see text]mean) and median ([Formula: see text]median) particle diameter using dichlorobenzene (DCB), Co2(CO)8 and oleic acid (OA) as the reactant matrix is currently lacking. A series of solution-based ε-Co nanoparticles were synthesized using the hot injection method. Suspensions and obtained particles were analyzed by DLS, ICP-OES, (synchrotron) XRD and TEM. Rietveld refinements were used for structural analysis. Mean ([Formula: see text]mean) and median ([Formula: see text]median) particle diameters were calculated based on measurements of 250-500 particles for each synthesis. Bias-corrected 95% confidence intervals were calculated by bootstrapping for syntheses with three or four replicas. ε-Co NPs in the size range ~4-10 nm with a narrow size distribution are obtained via the hot injection method, using OA as the sole surfactant. The synthesis yield is typically ~75%, and the particles form stable colloidal solutions when redispersed in hexane. Reproducibility of the adopted synthesis procedure was confirmed on replicate syntheses. We describe in detail the effects of essential synthesis parameters, such as injection temperature and time, metal to surfactant ratio and reaction holding time, in terms of their impact on mean ([Formula: see text]mean) and median ([Formula: see text]median) particle diameter. 
The described synthesis procedure towards ε-Co nanoparticles (NPs) is concluded to be robust when key synthesis parameters are controlled, giving targeted particle diameters with a narrow size distribution. We have identified two major synthesis parameters which control particle size: the metal to surfactant molar ratio and the temperature of the hot OA-DCB solution into which the cobalt precursor is injected. By increasing the metal to surfactant molar ratio, the mean particle diameter of the ε-Co NPs has been found to increase. Conversely, an increase in the injection temperature results in a decrease in the mean particle diameter of the ε-Co NPs when the metal to surfactant molar ratio [Formula: see text] is fixed at ~12.9.
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses the maximum likelihood estimation method to obtain parameter estimates under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
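The core computation described in this abstract, fitting the linear-quadratic survival model by Poisson maximum likelihood, can be sketched as follows. This is not RAD-ADAPT itself: the dose levels, colony counts, and starting values are synthetic stand-ins, and the model S(D) = exp(-aD - bD^2) with expected colonies mu0 * S(D) is the textbook linear-quadratic form:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic clonogenic data: expected colonies mu(D) = mu0 * exp(-a*D - b*D^2),
# rounded to integer counts (illustrative values, not from the A549 example).
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # Gy
true_mu0, true_a, true_b = 200.0, 0.20, 0.03
counts = np.round(true_mu0 * np.exp(-true_a * dose - true_b * dose**2))

def neg_loglik(p):
    mu0, a, b = p
    if mu0 <= 0:
        return np.inf                                     # keep the mean positive
    mu = mu0 * np.exp(-a * dose - b * dose**2)            # expected colony counts
    return np.sum(mu - counts * np.log(mu))               # Poisson NLL (up to a constant)

fit = minimize(neg_loglik, x0=[150.0, 0.1, 0.01], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
mu0_hat, a_hat, b_hat = fit.x
```

Minimizing the Poisson negative log-likelihood rather than doing least-squares on log-survival is the point of the abstract's distributional assumption: low-count, high-dose plates are weighted according to their actual counting statistics.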
NASA Astrophysics Data System (ADS)
Pleban, J. R.; Mackay, D. S.; Ewers, B. E.; Weinig, C.; Aston, T.
2015-12-01
Challenges in terrestrial ecosystem modeling include characterizing the impact of stress on vegetation and the heterogeneous behavior of different species within the environment. In an effort to address these challenges, the impacts of drought and nutrient limitation on the CO2 assimilation of multiple genotypes of Brassica rapa were investigated using the Farquhar Model (FM) of photosynthesis following a Bayesian parameterization and updating scheme. Leaf gas exchange and chlorophyll fluorescence measurements from an unstressed group (well-watered/well-fertilized) and two stressed groups (drought/well-fertilized and well-watered/nutrient-limited) were used to estimate FM model parameters. Unstressed individuals were used to initialize Bayesian parameter estimation. Posterior mean estimates yielded a close fit with the data, as observed assimilation (An) closely matched predicted assimilation (Ap), with mean standard errors for all individuals ranging from 0.8 to 3.1 μmol CO2 m-2 s-1. Posterior parameter distributions of the unstressed individuals were combined and fit to distributions to establish species-level Bayesian priors of FM parameters for testing stress responses. Species-level distributions of the unstressed group identified the mean maximum rate of carboxylation standardized to 25 °C (Vcmax25) as 101.8 μmol m-2 s-1 (± 29.0) and the mean maximum rate of electron transport standardized to 25 °C (Jmax25) as 319.7 μmol m-2 s-1 (± 64.4). These updated priors were used to test the response of assimilation to drought and nutrient limitation. In the well-watered/nutrient-limited group, a decrease of 28.0 μmol m-2 s-1 was observed in the mean estimate of Vcmax25, along with a decrease of 27.9 μmol m-2 s-1 in Jmax25 and a decrease in quantum yield from 0.40 mol photon/mol e- in unstressed individuals to 0.14 in the nutrient-limited group. In the drought/well-fertilized group, decreases were also observed in Vcmax25 and Jmax25. 
The genotype specific unstressed and stressed responses were then used to parameterize an ecosystem process model with application at the field scale to investigate mechanisms of stress response in B. rapa by testing a variety of functional forms to limit assimilation in hydraulic or nutrient limited conditions.
Reflectivity retrieval in a networked radar environment
NASA Astrophysics Data System (ADS)
Lim, Sanghun
Monitoring of precipitation using a high-frequency radar system such as X-band is becoming increasingly popular due to its lower cost compared to its counterpart at S-band. Networks of meteorological radar systems at higher frequencies are being pursued for targeted applications such as coverage over a city or a small basin. However, at higher frequencies, the impact of attenuation due to precipitation needs to be resolved for successful implementation. In this research, new attenuation correction algorithms are introduced to compensate for the attenuation due to the rain medium. In order to design X-band radar systems as well as to evaluate algorithm development, it is useful to have simultaneous X-band observations with and without the impact of path attenuation. One way to obtain such a data set is through theoretical models. Methodologies for generating realistic range profiles of radar variables at attenuating frequencies such as X-band for the rain medium are presented here. Fundamental microphysical properties of precipitation, namely size and shape distribution information, are used to generate realistic profiles of X-band starting with S-band observations. Conditioning the simulation on S-band radar measurements maintains the natural distribution of microphysical parameters associated with rainfall. In this research, data taken by the CSU-CHILL radar and the National Center for Atmospheric Research S-POL radar are used to simulate X-band radar variables. Three procedures to simulate the radar variables at X-band and sample applications are presented. A new attenuation correction algorithm based on profiles of reflectivity, differential reflectivity, and differential propagation phase shift is presented. A solution for specific attenuation retrieval in the rain medium is proposed that solves the integral equations for reflectivity and differential reflectivity with a cumulative differential propagation phase shift constraint. 
The conventional rain profiling algorithms that connect reflectivity and specific attenuation can retrieve specific attenuation values along the radar path assuming a constant intercept parameter of the normalized drop size distribution. However, in convective storms, the drop size distribution parameters can have significant variation along the path. In this research, a dual-polarization rain profiling algorithm for horizontal-looking radars incorporating reflectivity as well as differential reflectivity profiles is developed. The dual-polarization rain profiling algorithm has been evaluated with X-band radar observations simulated from drop size distribution derived from high-resolution S-band measurements collected by the CSU-CHILL radar. The analysis shows that the dual-polarization rain profiling algorithm provides significant improvement over the current algorithms. A methodology for reflectivity and attenuation retrieval for rain medium in a networked radar environment is described. Electromagnetic waves backscattered from a common volume in networked radar systems are attenuated differently along the different paths. A solution for the specific attenuation distribution is proposed by solving the integral equation for reflectivity. The set of governing integral equations describing the backscatter and propagation of common resolution volume are solved simultaneously with constraints on total path attenuation. The proposed algorithm is evaluated based on simulated X-band radar observations synthesized from S-band measurements collected by the CSU-CHILL radar. Retrieved reflectivity and specific attenuation using the proposed method show good agreement with simulated reflectivity and specific attenuation.
Coaxial digital holography measures particulate matter in clouds and the ambient atmosphere
NASA Astrophysics Data System (ADS)
Li, Baosheng; Yu, Haonan; Jia, Yizhen; Tao, Xiaojie; Zhang, Yang
2018-02-01
In weather modification operations, the detection of cloud droplet particles provides an important reference for evaluating the effectiveness of artificial weather intervention. Digital holography has the unique advantages of being full-field, non-contact, non-destructive, real-time, and quantitative. In this paper, coaxial digital holography is used to record polyethylene standard particles and aluminum scrap, and important parameters, such as the three-dimensional spatial distribution and the particle size, are obtained by analyzing the digital hologram of the particles. The experimental results verify the feasibility of applying the coaxial digital holographic device to the measurement of cloud parameters, completing the construction of the coaxial digital holographic system and the measurement of the particles.
NASA Applications of Structural Health Monitoring Technology
NASA Technical Reports Server (NTRS)
Richards, W Lance; Madaras, Eric I.; Prosser, William H.; Studor, George
2013-01-01
This presentation provides examples of research and development that has recently or is currently being conducted at NASA, with a special emphasis on the application of structural health monitoring (SHM) of aerospace vehicles. SHM applications on several vehicle programs are highlighted, including Space Shuttle Orbiter, the International Space Station, Uninhabited Aerial Vehicles, and Expendable Launch Vehicles. Examples of current and previous work are presented in the following categories: acoustic emission impact detection, multi-parameter fiber optic strain-based sensing, wireless sensor system development, and distributed leak detection.
Kim, Jaeyoun; Soref, Richard; Buchwald, Walter R
2010-08-16
We investigate the electromagnetic response of the concentric multi-ring, or the bull's eye, structure as an extension of the dual-ring metamaterial which exhibits electromagnetically-induced transparency (EIT)-like transmission characteristics. Our results show that adding inner rings produces additional EIT-like peaks, and widens the metamaterial's spectral range of operation. Analyses of the dispersion characteristics and induced current distribution further confirmed the peak's EIT-like nature. Impacts of structural and dielectric parameters are also investigated.
Clustering and optimal arrangement of enzymes in reaction-diffusion systems.
Buchner, Alexander; Tostevin, Filipe; Gerland, Ulrich
2013-05-17
Enzymes within biochemical pathways are often colocalized, yet the consequences of specific spatial enzyme arrangements remain poorly understood. We study the impact of enzyme arrangement on reaction efficiency within a reaction-diffusion model. The optimal arrangement transitions from a cluster to a distributed profile as a single parameter, which controls the probability of reaction versus diffusive loss of pathway intermediates, is varied. We introduce the concept of enzyme exposure to explain how this transition arises from the stochastic nature of molecular reactions and diffusion.
Leading Twist GPDs and Transverse Spin Densities in a Proton
NASA Astrophysics Data System (ADS)
Mondal, Chandan; Maji, Tanmay; Chakrabarti, Dipankar; Zhao, Xingbo
2018-05-01
We present a study of both chirally even and odd generalized parton distributions (GPDs) in the leading twist for the quarks in a proton, using the light-front wavefunctions of a quark-diquark model predicted by holographic QCD. For a transversely polarized proton, both chiral-even and chiral-odd GPDs contribute to the spin densities, which are related to the GPDs in transverse impact parameter space. Here, we also present a study of the spin densities for a transversely polarized quark and proton.
NASA Astrophysics Data System (ADS)
Garcia Galiano, S. G.; Olmos, P.; Giraldo Osorio, J. D.
2015-12-01
In the Mediterranean area, significant changes in temperature and precipitation are expected throughout the century. These trends could exacerbate the existing conditions in regions already vulnerable to climatic variability, reducing water availability. Improving knowledge about the plausible impacts of climate change on water cycle processes at the basin scale is an important step towards building adaptive capacity in this region, where severe water shortages are expected in the coming decades. An ensemble of Regional Climate Models (RCMs) in combination with a distributed hydrological model with few parameters constitutes a valid and robust methodology for increasing the reliability of climate and hydrological projections. To this end, a novel methodology for building RCM ensembles of meteorological variables (rainfall and temperatures) was applied. The evaluation of RCM goodness-of-fit to build the ensemble is based on empirical probability density functions (PDF) extracted from both the RCM dataset and a high-resolution gridded observational dataset for the period 1961-1990. The applied method considers the seasonal and annual variability of rainfall and temperatures. The RCM ensembles constitute the input to a distributed hydrological model at the basin scale for assessing runoff projections. The selected hydrological model has few parameters in order to reduce the uncertainties involved. The study basin is a headwater basin of the Segura River Basin, located in south-eastern Spain. The impacts on runoff and its trend, from both the observational dataset and the climate projections, were assessed. Relative to the control period 1961-1990, plausible significant decreases in runoff for the period 2021-2050 were identified.
Asquith, William H.
2014-01-01
The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
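The MLM algorithms discussed above take sample L-moments as their input. As a hedged illustration of that input stage only (not of the AEP4 fitting algorithms themselves, and in Python rather than the study's R environment), the first three sample L-moments and the L-skewness ratio can be computed directly from order statistics:

```python
import numpy as np

def sample_lmoments(x):
    """Unbiased sample probability-weighted moments b0, b1, b2,
    converted to L-moments l1, l2, l3 and the L-skewness ratio t3."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    l1 = b0                     # L-location (mean)
    l2 = 2 * b1 - b0            # L-scale
    l3 = 6 * b2 - 6 * b1 + b0   # third L-moment
    return l1, l2, l3 / l2      # t3 = L-skewness

l1, l2, t3 = sample_lmoments([1, 2, 3, 4, 5])
# perfectly symmetric sample: l1 = 3, l2 = 1, t3 = 0
```

For a symmetric sample the L-skewness vanishes, which is the property the mixed AEP4/Kappa framework exploits when partitioning the joint L-skew and L-kurtosis domain.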
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
Finite element simulation is a very useful tool in the deep drawing industry. It is used in particular for the development and validation of new stamping tools, and it reduces the cost and time of tooling design and set-up. However, one of the main difficulties in obtaining good agreement between the simulation and the real process lies in the definition of the numerical conditions (mesh, punch travel speed, boundary conditions, …) and of the parameters which model the material behavior. Indeed, in the press shop, a variation in the geometry of the formed part is often observed when the sheet batch changes, owing to the variability of material properties between batches. This variability is probably one of the main sources of process deviation once the process is set up. It is therefore important to study the influence of material data variation on the geometry of a classical stamped part. The chosen geometry is an omega-shaped part, because of its simplicity and because it is representative of automotive parts (car body reinforcements); moreover, it exhibits significant springback deviations. An isotropic behavior law is assumed. The impact of the statistical deviation of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested. A Gaussian distribution is assumed, and the impact on the geometry variation is studied by FE simulation. Another approach is also envisaged, consisting of modeling the process variability by a mathematical model: as a function of the input parameter variability, an analytical model is defined which yields the variability of the part geometry around the nominal shape. These two approaches make it possible to predict the process capability as a function of the material parameter variability.
ERIC Educational Resources Information Center
Xu, Xueli; Jia, Yue
2011-01-01
Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…
NASA Technical Reports Server (NTRS)
Szalay, Alexander S.; Jain, Bhuvnesh; Matsubara, Takahiko; Scranton, Ryan; Vogeley, Michael S.; Connolly, Andrew; Dodelson, Scott; Eisenstein, Daniel; Frieman, Joshua A.; Gunn, James E.
2003-01-01
We present measurements of parameters of the three-dimensional power spectrum of galaxy clustering from 222 square degrees of early imaging data in the Sloan Digital Sky Survey (SDSS). The projected galaxy distribution on the sky is expanded over a set of Karhunen-Loeve (KL) eigenfunctions, which optimize the signal-to-noise ratio in our analysis. A maximum likelihood analysis is used to estimate parameters that set the shape and amplitude of the three-dimensional power spectrum of galaxies in the SDSS magnitude-limited sample with r* less than 21. Our best estimates are gamma = 0.188 +/- 0.04 and sigma(sub 8L) = 0.915 +/- 0.06 (statistical errors only), for a flat universe with a cosmological constant. We demonstrate that our measurements contain signal from scales at or beyond the peak of the three-dimensional power spectrum. We discuss how the results scale with systematic uncertainties, like the radial selection function. We find that the central values satisfy the analytically estimated scaling relation. We have also explored the effects of evolutionary corrections, various truncations of the KL basis, seeing, sample size, and limiting magnitude. We find that the impact of most of these uncertainties stay within the 2 sigma uncertainties of our fiducial result.
A micro-hydrology computation ordering algorithm
NASA Astrophysics Data System (ADS)
Croley, Thomas E.
1980-11-01
Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing microhydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
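The automatic computation ordering described above can be sketched with a standard dependency-ordering routine. The sketch below is not the paper's node numbering and coding scheme; it is a Kahn-style topological sort over a hypothetical drainage network, which captures the same guarantee that every upstream sub-area is computed before the sub-area it drains into:

```python
from collections import deque

def computation_order(downstream):
    """downstream: dict mapping each node to the node it drains into
    (None at the outlet). Returns an upstream-first computation order."""
    indegree = {n: 0 for n in downstream}
    for d in downstream.values():
        if d is not None:
            indegree[d] += 1                 # count upstream contributors
    ready = deque(n for n, k in indegree.items() if k == 0)  # headwater nodes
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)                      # all inputs to n are now available
        d = downstream[n]
        if d is not None:
            indegree[d] -= 1
            if indegree[d] == 0:             # last upstream contributor done
                ready.append(d)
    return order

# Hypothetical network: sub-areas A and B drain through C to outlet D.
order = computation_order({"A": "C", "B": "C", "C": "D", "D": None})
```

Because each node is released only when all of its upstream contributors have been processed, the resulting order is valid for any tree-shaped network, which is the property manual ordering must establish by hand.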
NASA Astrophysics Data System (ADS)
Shahariar, G. M. H.; Wardana, M. K. A.; Lim, O. T.
2018-04-01
The post-impingement effects of urea-water solution spray on the heated wall of automotive SCR systems were numerically investigated in a constant volume chamber using the STAR-CCM+ CFD code. The turbulent flow was modelled by the realizable k-ε two-layer model together with the standard wall function, and the all-y+ treatment was applied along with the two-layer approach. The Eulerian-Lagrangian approach was used for the modelling of the multiphase flow. Urea-water solution (UWS) was injected onto the heated wall at wall temperatures of 338, 413, 473, 503 and 573 K. Spray development after impinging on the heated wall was visualized and measured. Droplet size distributions and droplet evaporation rates were also measured; these are vital parameters for system performance but are still not well researched. Specially developed user-defined functions (UDF) were implemented to simulate the desired conditions and parameters. The investigation reveals that the wall temperature has a great impact on spray development after impingement, droplet size distribution, and evaporation. Increasing the wall temperature leads to a longer spray front projection length, smaller droplet sizes, and faster droplet evaporation, which are preconditions for reducing urea crystallization. The numerical model and parameters were validated by comparison with experimental data.
Complex dynamics in the distribution of players’ scoring performance in Rugby Union world cups
NASA Astrophysics Data System (ADS)
Seuront, Laurent
2013-09-01
The evolution of the scoring performance of Rugby Union players is investigated over the seven rugby world cups (RWC) that took place from 1987 to 2011, and specific attention is given to how they may have been impacted by the switch from amateurism to professionalism that occurred in 1995. The distribution of the points scored by individual players, Ps, ranked in order of performance was well described by the simplified canonical law Ps ∝ (r + ϕ)^(-α), where r is the rank, and ϕ and α are the parameters of the distribution. The parameter α did not significantly change from 1987 to 2007 (α=0.92±0.03), indicating a negligible effect of professionalism on players’ scoring performance. In contrast, the parameter ϕ significantly increased from ϕ=1.32 for the 1987 RWC to ϕ=2.30 for the 1999 to 2003 RWCs and ϕ=5.60 for the 2007 RWC, suggesting a progressive decrease in the relative performance of the best players. Finally, the sharp decreases observed in both α (α=0.38) and ϕ (ϕ=0.70) in the 2011 RWC indicate a more even distribution of the performance of individuals among scorers, compared to the more heterogeneous distributions observed from 1987 to 2007, and suggest a sharp increase in the level of competition leading to an increase in the average quality of players and a decrease in the relative skills of the top players. Note that neither α nor ϕ significantly correlates with traditional performance indicators such as the number of points scored by the best players, the number of games played by the best players, the number of points scored by the team of the best players, or the total number of points scored over each RWC. This indicates that the dynamics of the scoring performance of Rugby Union players is influenced by hidden processes hitherto inaccessible through standard performance metrics; this suggests that players’ scoring performance is connected to ubiquitous phenomena such as anomalous diffusion.
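Fitting the rank law Ps ∝ (r + ϕ)^(-α) to ranked scores can be sketched with a grid search over ϕ combined with a log-log regression for α. The scores below are synthetic, generated from the law itself with the 1987-like values α = 0.92 and ϕ = 1.32 rather than taken from any RWC dataset, and the simple least-squares-in-logs criterion is an illustrative choice, not necessarily the paper's estimator:

```python
import numpy as np

# Synthetic ranked scores following Ps = C * (r + phi)**(-alpha).
r = np.arange(1, 51)
true_alpha, true_phi = 0.92, 1.32
scores = 100.0 * (r + true_phi) ** (-true_alpha)

# Grid-search phi; for each candidate, log Ps is linear in log(r + phi),
# so a 1st-degree polyfit gives alpha as minus the slope.
best = None
for phi in np.arange(0.0, 5.01, 0.01):
    X = np.log(r + phi)
    slope, intercept = np.polyfit(X, np.log(scores), 1)
    resid = np.log(scores) - (slope * X + intercept)
    sse = np.sum(resid ** 2)
    if best is None or sse < best[0]:
        best = (sse, -slope, phi)

_, alpha_hat, phi_hat = best
# recovers alpha_hat ~ 0.92 and phi_hat ~ 1.32 on this noiseless sample
```

On noiseless data the fit is exact at the true ϕ; with real scoring data the same procedure would return the least-squares-optimal pair, and uncertainty on (α, ϕ) would need a resampling step.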
Wu, Fei; Sioshansi, Ramteen
2017-05-25
Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses, to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph
Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder, and overall for any interconnection location on the feeder.
NASA Astrophysics Data System (ADS)
Bodergat, Anne-Marie; Oki, Kimihiko; Ishizaki, Kunihiro; Rio, Michel
2002-11-01
The distribution of ostracod populations in Kagoshima Bay (Japan) is analysed with reference to different environmental parameters. The bay is an area of volcanic activity of the Sakurajima volcano, under the influence of the Kuroshio Current. Most of the Head environment is occupied by an acidic water mass. The numbers of individuals and species decrease from the Mouth of the bay towards the Basin and Head environments. In the latter, the acidic water mass has a drastic effect on ostracod populations, whereas volcanic ashes and domestic inputs are not hostile. Ostracod distribution is influenced by the quality and structure of the water masses. To cite this article: A.-M. Bodergat et al., C. R. Geoscience 334 (2002) 1053-1059.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
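The random walk Metropolis algorithm used here for posterior sampling can be sketched in a few lines. The target below (a standard normal log-density) is a placeholder for illustration, not one of the phenology-model posteriors.

```python
import math
import random


def metropolis(log_post, x0, n_steps, step=1.0, seed=1):
    """Random walk Metropolis: propose x' = x + N(0, step) and accept
    with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        delta = log_post(xp) - log_post(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = xp                    # accept the proposal
        samples.append(x)             # rejected proposals repeat x
    return samples
```

For a crop-model application, `log_post` would wrap the phenology model's likelihood against the trial data plus the parameter priors; the chain (after burn-in) approximates the full posterior whose spread drives the parameter-value uncertainty discussed above.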
The Shark Random Swim - (Lévy Flight with Memory)
NASA Astrophysics Data System (ADS)
Businger, Silvia
2018-05-01
The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy-tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
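A minimal simulation sketch of such a memory walk, assuming the ERW-style memory rule (with probability p, repeat a uniformly chosen past step; otherwise take a fresh independent step) and restricted to the Gaussian case α = 2; the paper's exact dynamics may differ, and α < 2 would require an α-stable sampler.

```python
import random


def shark_swim(n_steps, p, seed=0):
    """One-dimensional memory walk with Gaussian (alpha = 2) steps:
    with probability p repeat a uniformly chosen past step, otherwise
    draw a fresh independent step.  Returns the walk's positions."""
    rng = random.Random(seed)
    steps = [rng.gauss(0.0, 1.0)]
    for _ in range(n_steps - 1):
        if rng.random() < p:
            steps.append(rng.choice(steps))      # memory: reuse a past step
        else:
            steps.append(rng.gauss(0.0, 1.0))    # innovation step
    pos, path = 0.0, []
    for s in steps:
        pos += s
        path.append(pos)
    return path
```

Sweeping p across the critical value 1/α (here 1/2) and inspecting the growth of the walk's variance is one way to observe the phase transition numerically.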
Impact of baryonic physics on intrinsic alignments
Tenneti, Ananth; Gnedin, Nickolay Y.; Feng, Yu
2017-01-11
We explore the effects of specific assumptions in the subgrid models of star formation and stellar and AGN feedback on intrinsic alignments of galaxies in cosmological simulations of the "MassiveBlack-II" family. Using smaller volume simulations, we explored the parameter space of the subgrid star formation and feedback model and found remarkable robustness of the observable statistical measures to the details of subgrid physics. The one observational probe most sensitive to modeling details is the distribution of misalignment angles. We hypothesize that the amount of angular momentum carried away by the galactic wind is the primary physical quantity that controls the orientation of the stellar distribution. Finally, our results are also consistent with a similar study by the EAGLE simulation team.
Gravity influence on the clustering of charged particles in turbulence
NASA Astrophysics Data System (ADS)
Lu, Jiang; Nordsiek, Hansen; Shaw, Raymond
2010-11-01
We report results aimed at studying the interactions of bidisperse charged inertial particles in homogeneous, isotropic turbulence, under the influence of gravitational settling. We theoretically and experimentally investigate the impact of gravitational settling on particle clustering, which is quantified by the radial distribution function (RDF). The theory is based on a drift-diffusion (Fokker-Planck) model with gravitational settling appearing as a diffusive term depending on a dimensionless settling parameter. The experiments are carried out in a laboratory chamber with nearly homogeneous, isotropic turbulence in which the flow is seeded with charged particles, and digital holography is used to obtain 3D particle positions and velocities. The derived radial distribution function for bidisperse settling charged particles is compared to the experimental RDFs.
Heat and mass transport during a groundwater replenishment trial in a highly heterogeneous aquifer
NASA Astrophysics Data System (ADS)
Seibert, Simone; Prommer, Henning; Siade, Adam; Harris, Brett; Trefry, Mike; Martin, Michael
2014-12-01
Changes in subsurface temperature distribution resulting from the injection of fluids into aquifers may impact physicochemical and microbial processes as well as basin resource management strategies. We have completed a 2 year field trial in a hydrogeologically and geochemically heterogeneous aquifer below Perth, Western Australia in which highly treated wastewater was injected for large-scale groundwater replenishment. During the trial, chloride and temperature data were collected from conventional monitoring wells and by time-lapse temperature logging. We used a joint inversion of these solute tracer and temperature data to parameterize a numerical flow and multispecies transport model and to analyze the solute and heat propagation characteristics that prevailed during the trial. The simulation results illustrate that while solute transport is largely confined to the most permeable lithological units, heat transport was also affected by heat exchange with lithological units that have a much lower hydraulic conductivity. Heat transfer by heat conduction was found to significantly influence the complex temporal and spatial temperature distribution, especially with growing radial distance and in aquifer sequences with a heterogeneous hydraulic conductivity distribution. We attempted to estimate spatially varying thermal transport parameters during the data inversion to illustrate the anticipated correlations of these parameters with lithological heterogeneities, but estimates could not be uniquely determined on the basis of the collected data.
Impacts of relative permeability on CO2 phase behavior, phase distribution, and trapping mechanisms
NASA Astrophysics Data System (ADS)
Moodie, N.; McPherson, B. J. O. L.; Pan, F.
2015-12-01
A critical aspect of geologic carbon storage, a carbon-emissions reduction method under extensive review and testing, is effective multiphase CO2 flow and transport simulation. Relative permeability is a flow parameter particularly critical for accurate forecasting of multiphase behavior of CO2 in the subsurface. The relative permeability relationship assumed and especially the irreducible saturation of the gas phase greatly impacts predicted CO2 trapping mechanisms and long-term plume migration behavior. A primary goal of this study was to evaluate the impact of relative permeability on efficacy of regional-scale CO2 sequestration models. To accomplish this we built a 2-D vertical cross-section of the San Rafael Swell area of East-central Utah. This model simulated injection of CO2 into a brine aquifer for 30 years. The well was then shut-in and the CO2 plume behavior monitored for another 970 years. We evaluated five different relative permeability relationships to quantify their relative impacts on forecasted flow results of the model, with all other parameters maintained uniform and constant. Results of this analysis suggest that CO2 plume movement and behavior are significantly dependent on the specific relative permeability formulation assigned, including the assumed irreducible saturation values of CO2 and brine. More specifically, different relative permeability relationships translate to significant differences in CO2 plume behavior and corresponding trapping mechanisms.
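The study's point that the assumed relative permeability relationship, and especially the irreducible saturations, drives predicted plume behavior can be illustrated with a Corey-type sketch; the exponents and endpoints below are hypothetical, not the five relationships tested in the paper.

```python
def corey_relperm(sw, swr, sgr, nw=4.0, ng=2.0):
    """Corey-type relative permeability for a brine (wetting) / CO2
    (non-wetting) pair.

    sw        brine saturation
    swr, sgr  irreducible brine and gas saturations
    nw, ng    Corey exponents for the two phases
    """
    se = (sw - swr) / (1.0 - swr - sgr)   # effective (mobile) saturation
    se = min(max(se, 0.0), 1.0)
    krw = se ** nw                        # brine relative permeability
    krg = (1.0 - se) ** ng                # CO2 relative permeability
    return krw, krg
```

Raising sgr in a function like this immobilises more CO2 as residual (trapped) gas, which is exactly the mechanism by which the assumed irreducible saturation shifts the balance between residual trapping and long-term plume migration.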
Balancing energy development and conservation: A method utilizing species distribution models
Jarnevich, C.S.; Laubhan, M.K.
2011-01-01
Alternative energy development is increasing, potentially leading to negative impacts on wildlife populations already stressed by other factors. Resource managers require a scientifically based methodology to balance energy development and species conservation, so we investigated modeling habitat suitability using Maximum Entropy to develop maps that could be used with other information to help site energy developments. We selected one species of concern, the Lesser Prairie-Chicken (LPCH; Tympanuchus pallidicinctus) found on the southern Great Plains of North America, as our case study. LPCH populations have been declining and are potentially further impacted by energy development. We used LPCH lek locations in the state of Kansas along with several environmental and anthropogenic parameters to develop models that predict the probability of lek occurrence across the landscape. The models all performed well as indicated by the high test area under the curve (AUC) scores (all >0.9). The inclusion of anthropogenic parameters in models resulted in slightly better performance based on AUC values, indicating that anthropogenic features may impact LPCH lek habitat suitability. Given the positive model results, this methodology may provide additional guidance in designing future survey protocols, as well as siting of energy development in areas of marginal or unsuitable habitat for species of concern. This technique could help to standardize and quantify the impacts various developments have upon at-risk species. © 2011 Springer Science+Business Media, LLC (outside the USA).
Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Veronica M; Bartell, Scott M
2016-04-01
We recently utilized a suite of environmental fate and transport models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations, and also assessed the association of those concentrations with preeclampsia for participants in the C8 Health Project (a cross-sectional study of over 69,000 people who were environmentally exposed to PFOA near a major U.S. fluoropolymer production facility located in West Virginia). However, the exposure estimates from this integrated model relied on default values for key independent exposure parameters including water ingestion rates, the serum PFOA half-life, and the volume of distribution for PFOA. The aim of the present study is to assess the impact of inter-individual variability and epistemic uncertainty in these parameters on the exposure estimates and subsequently, the epidemiological association between PFOA exposure and preeclampsia. We used Monte Carlo simulation to propagate inter-individual variability/epistemic uncertainty in the exposure assessment and reanalyzed the epidemiological association. Inter-individual variability in these parameters mildly impacted the serum PFOA concentration predictions (the lowest mean rank correlation between the estimated serum concentrations in our study and the original predicted serum concentrations was 0.95) and there was a negligible impact on the epidemiological association with preeclampsia (no change in the mean adjusted odds ratio (AOR) and the contribution of exposure uncertainty to the total uncertainty including sampling variability was 7%). However, when epistemic uncertainty was added along with the inter-individual variability, serum PFOA concentration predictions and their association with preeclampsia were moderately impacted (the mean AOR of preeclampsia occurrence was reduced from 1.12 to 1.09, and the contribution of exposure uncertainty to the total uncertainty was increased up to 33%). 
In conclusion, our study shows that the change of the rank exposure among the study participants due to variability and epistemic uncertainty in the independent exposure parameters was large enough to cause a 25% bias towards the null. This suggests that the true AOR of the association between PFOA and preeclampsia in this population might be higher than the originally reported AOR and has more uncertainty than indicated by the originally reported confidence interval. Copyright © 2016 Elsevier Inc. All rights reserved.
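The variability-propagation step can be sketched with a one-compartment steady-state model, C_ss = dose rate / (Vd × ke), sampling the independent exposure parameters per simulated individual. Every distribution and numeric value below is hypothetical, chosen only to show the Monte Carlo mechanics, not the study's calibrated inputs.

```python
import math
import random


def simulate_serum(n, water_conc_ug_l, body_mass_kg=70.0, seed=0):
    """Monte Carlo propagation of inter-individual variability through a
    one-compartment steady-state model: C_ss = dose_rate / (Vd * ke).

    All parameter distributions are illustrative placeholders.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        intake = rng.lognormvariate(math.log(1.0), 0.5)        # L/day
        t_half = rng.lognormvariate(math.log(2.3 * 365), 0.3)  # days
        vd = rng.lognormvariate(math.log(0.17), 0.2)           # L/kg
        ke = math.log(2) / t_half                              # 1/day
        dose = water_conc_ug_l * intake / body_mass_kg         # ug/kg/day
        out.append(dose / (vd * ke))                           # ug/L serum
    return out
```

Repeating the epidemiological regression on each Monte Carlo replicate of predicted serum concentrations is what lets the exposure-uncertainty contribution be separated from sampling variability, as in the reanalysis above.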
Interaction of highly charged ions with carbon nano membranes
NASA Astrophysics Data System (ADS)
Gruber, Elisabeth; Wilhelm, Richard A.; Smejkal, Valerie; Heller, René; Facsko, Stefan; Aumayr, Friedrich
2015-09-01
Charge state and energy loss measurements of slow highly charged ions (HCIs) after transmission through nanometer and sub-nanometer thin membranes are presented. Direct transmission measurements through carbon nano membranes (CNMs) show an unexpected bimodal exit charge state distribution, accompanied by charge exchange dependent energy loss. The energy loss of ions in CNMs with large charge loss shows a quadratic dependency on the incident charge state, indicating charge state dependent stopping force values. The exit charge state distribution can also be probed by irradiating stacks of CNMs and investigating each layer of the stack independently with high resolution imaging techniques such as transmission electron microscopy (TEM) and helium ion microscopy (HIM). The observation of pores created in all of the layers confirms the assumption derived from the transmission measurements that the two separated charge state distributions reflect two different impact parameter regimes, i.e. close collisions with large charge exchange and distant collisions with weak ion-target interaction.
Multi-scale modularity and motif distributional effect in metabolic networks.
Gao, Shang; Chen, Alan; Rahmani, Ali; Zeng, Jia; Tan, Mehmet; Alhajj, Reda; Rokne, Jon; Demetrick, Douglas; Wei, Xiaohui
2016-01-01
Metabolism is a set of fundamental processes that play important roles in a plethora of biological and medical contexts. It is understood that the topological information of reconstructed metabolic networks, such as modular organization, has crucial implications on biological functions. Recent interpretations of modularity in network settings provide a view of multiple network partitions induced by different resolution parameters. Here we ask the question: How do multiple network partitions affect the organization of metabolic networks? Since network motifs are often interpreted as the super families of evolved units, we further investigate their impact under multiple network partitions and investigate how the distribution of network motifs influences the organization of metabolic networks. We studied Homo sapiens, Saccharomyces cerevisiae and Escherichia coli metabolic networks; we analyzed the relationship between different community structures and motif distribution patterns. Further, we quantified the degree to which motifs participate in the modular organization of metabolic networks.
Simple models of SL-9 impact plumes in flight
NASA Astrophysics Data System (ADS)
Harrington, J.; Deming, D.
1998-09-01
We have extended our ballistic Monte-Carlo model of the Shoemaker-Levy 9 impact plumes (J. Harrington and D. Deming 1996. Simple models of SL9 impact plumes, Bull. Am. Astron. Soc. 28 1150--1151) to calculate the appearance of the plumes in flight. We compare these synthetic images to the data taken by the Hubble Space Telescope of plumes on the limb of Jupiter during impacts A, E, G, and W. The model uses a parameterized version of the final power-law velocity distribution from the impact models of Zahnle and Mac Low. The observed plume heights, lightcurve features, and debris patterns fix the values of model parameters. The parameters that best reproduce the debris patterns dictate an approximately conic plume geometry, with the apex of the cone initially near the impact site, the cone's axis pointed in the direction from which the impactor came, and an opening angle >45° from the axis. Since material of a given velocity is, at any given time, a certain distance from the cone apex, the geometry spreads high-velocity material much thinner than low-velocity material. The power law exponent of -1.55 combines with this effect to make mass density fall off as the -3.55 power of the velocity (or distance from the plume base). However, the outer shell of highest-velocity material, corresponding to the atmospheric shock wave, carries considerably elevated mass density. We are currently studying the range of reasonable optical properties to determine whether the visible plume tops corresponded to the physical top of this shell, or to a lower density contour.
Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.
1979-01-01
Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. 
A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
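The NASA Glenn method itself is described in ref. 1; for comparison, the classic operations-research (PERT) variant of fitting a beta distribution from minimum, most likely, and maximum values can be sketched as follows. This is the widely used textbook fit, not necessarily the in-house method or the conventional simultaneous-equation method described above.

```python
def pert_beta(a, m, b):
    """Classic PERT beta fit from minimum a, most likely m, maximum b:
    shape1 = 1 + 4*(m - a)/(b - a),  shape2 = 1 + 4*(b - m)/(b - a).
    The implied mean is (a + 4m + b) / 6."""
    shape1 = 1.0 + 4.0 * (m - a) / (b - a)
    shape2 = 1.0 + 4.0 * (b - m) / (b - a)
    mean = a + (b - a) * shape1 / (shape1 + shape2)   # beta mean on [a, b]
    return shape1, shape2, mean
```

Note that shape1 + shape2 = 6 here by construction, which is the PERT analogue of fixing the standard deviation to one sixth of the range; the NASA Glenn approach instead fixes one shape parameter by analogy with the normal distribution and leaves the standard deviation unrestricted.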
Bowker, Matthew A.; Maestre, Fernando T.
2012-01-01
Dryland vegetation is inherently patchy. This patchiness goes on to impact ecology, hydrology, and biogeochemistry. Recently, researchers have proposed that dryland vegetation patch sizes follow a power law which is due to local plant facilitation. It is unknown what patch size distribution prevails when competition predominates over facilitation, or if such a pattern could be used to detect competition. We investigated this question in an alternative vegetation type, mosses and lichens of biological soil crusts, which exhibit a smaller scale patch-interpatch configuration. This micro-vegetation is characterized by competition for space. We proposed that multiplicative effects of genetics, environment and competition should result in a log-normal patch size distribution. When testing the prevalence of log-normal versus power law patch size distributions, we found that the log-normal was the better distribution in 53% of cases and a reasonable fit in 83%. In contrast, the power law was better in 39% of cases, and in 8% of instances both distributions fit equally well. We further hypothesized that the log-normal distribution parameters would be predictably influenced by competition strength. There was qualitative agreement between one of the distribution's parameters (μ) and a novel intransitive (lacking a 'best' competitor) competition index, suggesting that as intransitivity increases, patch sizes decrease. The correlation of μ with other competition indicators based on spatial segregation of species (the C-score) depended on aridity. In less arid sites, μ was negatively correlated with the C-score (suggesting smaller patches under stronger competition), while positive correlations (suggesting larger patches under stronger competition) were observed at more arid sites. We propose that this is due to an increasing prevalence of competition transitivity as aridity increases. 
These findings broaden the emerging theory surrounding dryland patch size distributions and, with refinement, may help us infer cryptic ecological processes from easily observed spatial patterns in the field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagne, MC; Archambault, L; CHU de Quebec, Quebec, Quebec
2014-06-15
Purpose: Intensity modulated radiation therapy always requires compromises between PTV coverage and organs at risk (OAR) sparing. We previously developed metrics that correlate doses to OAR to specific patients’ morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head and neck IMRT plans were used to establish a metric predicting the mean dose to parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible data of geometry and dose. Distributions comprising between 20 and 5000 organs were simulated and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 samples of organ data. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to have an impact on the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.
Debris Albedo from Laser Ablation in Low and High Vacuum: Comparisons to Hypervelocity Impact
NASA Astrophysics Data System (ADS)
Radhakrishnan, G.; Adams, P. M.; Alaan, D. R.; Panetta, C. J.
The albedo of orbital debris fragments in space is a critical parameter used in the derivation of their physical sizes from optical measurements. The change in albedo results from scattering due to micron and sub-micron particles on the surface. There are however no known hypervelocity collision ground tests that simulate the high-vacuum conditions on-orbit. While hypervelocity impact experiments at a gun range can offer a realistic representation of the energy of impact and fragmentation, and can aid the understanding of albedo, they are conducted in low-pressure air that is not representative of the very high vacuum of 10^-8 Torr or less that exists in the Low Earth Orbit environment. Laboratory simulation using laser ablation with a high power laser, on the same target materials as used in current satellite structures, is appealing because it allows for well-controlled investigations that can be coupled to optical albedo (reflectance) measurements of the resultant debris. This relatively low-cost laboratory approach can complement the significantly more elaborate and expensive field-testing of single-shot hypervelocity impact on representative satellite structures. Debris generated is optically characterized with UV-VIS-NIR reflectance, and particle size distributions can be measured. In-situ spectroscopic diagnostics (nanosecond time frame) provide an identification of atoms and ions in the plume, and plasma temperatures, allowing a correlation of the energetics of the ablated plume with resulting albedo and particle size distributions of ablated debris. Our laboratory experiments offer both a high-vacuum environment, and selection of any gaseous ambient, at any controlled pressure, thus allowing for comparison to the hypervelocity impact experiments in low-pressure air. 
Initial results from plume analysis, and size distribution and microstructure of debris collected on witness plates show that laser ablations in low-pressure air offer many similarities to the recent DebrisLV and DebriSat hypervelocity impact experiments, while ablations in high-vacuum provide critical distinctions.
Distributed activation energy model parameters of some Turkish coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunes, M.; Gunes, S.K.
2008-07-01
A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of the distributed activation energy model were calculated via a computer program developed for this purpose. It was observed that the values of the mean of the activation energy distribution vary between 218 and 248 kJ/mol, and the values of the standard deviation of the activation energy distribution vary between 32 and 70 kJ/mol. The correlations between kinetic parameters of the distributed activation energy model and certain properties of coal have been investigated.
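The model structure (parallel first-order reactions whose activation energies follow a Gaussian distribution) can be sketched numerically for the isothermal case; the pre-exponential factor and conditions below are illustrative placeholders, not values fitted for the coals studied.

```python
import math


def daem_conversion(t, temp_k, e_mean, e_std, k0=1e13, n_grid=400):
    """Isothermal distributed activation energy model: overall conversion
    X(t) = 1 - integral f(E) * exp(-k0 * exp(-E/(R*T)) * t) dE,
    with a Gaussian f(E) (mean e_mean, std e_std, both J/mol) integrated
    by the midpoint rule over +/- 5 standard deviations."""
    r_gas = 8.314
    lo, hi = e_mean - 5.0 * e_std, e_mean + 5.0 * e_std
    de = (hi - lo) / n_grid
    unreacted = 0.0
    for i in range(n_grid):
        e = lo + (i + 0.5) * de
        f = math.exp(-0.5 * ((e - e_mean) / e_std) ** 2) / (e_std * math.sqrt(2.0 * math.pi))
        k = k0 * math.exp(-e / (r_gas * temp_k))   # Arrhenius rate at energy e
        unreacted += f * math.exp(-k * t) * de     # surviving fraction at energy e
    return 1.0 - unreacted
```

Fitting e_mean and e_std of such a model to thermogravimetric data is what yields the 218-248 kJ/mol means and 32-70 kJ/mol standard deviations reported above.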
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas, the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of dynamic response are determined by random interval moment method and monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.
Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2012-12-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models, depending on macroscopic parameters. Since these parameters emerge from the structure at microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods of a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard model (JCA) is chosen as the objective model with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.
NASA Astrophysics Data System (ADS)
Coralli, Alberto; Villela de Miranda, Hugo; Espiúca Monteiro, Carlos Felipe; Resende da Silva, José Francisco; Valadão de Miranda, Paulo Emílio
2014-12-01
Solid oxide fuel cells are globally recognized as a very promising technology in the area of highly efficient electricity generation with a low environmental impact. This technology can be advantageously implemented in many situations in Brazil and it is well suited to the use of ethanol as a primary energy source, an important feature given the highly developed Brazilian ethanol industry. In this perspective, a simplified mathematical model is developed for a fuel cell and its balance of plant, in order to identify the optimal system structure and the most convenient values for the operational parameters, with the aim of maximizing the global electric efficiency. In this way, the best operational configuration for the desired application, distributed generation in the concession area of the electricity distribution company Elektro, is identified. The data regarding this configuration are required for the continuation of the research project, i.e. the development of a prototype, a cost analysis of the developed system and a detailed assessment of the market opportunities in Brazil.
A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
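The abstract attributes much of AET's efficiency to matrix scaling methods for the transport linear program. The entropic-regularized variant of that idea is the Sinkhorn algorithm, sketched below as a generic illustration (not the authors' implementation); it alternately rescales rows and columns of a Gibbs kernel until both marginals are matched.

```python
import numpy as np

def sinkhorn(cost, mu, nu, eps=0.05, iters=500):
    """Entropy-regularized optimal transport via Sinkhorn matrix scaling.
    Returns a coupling P with row sums ~mu and column sums ~nu.
    A generic sketch of the matrix-scaling idea, not the AET code."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)           # scale columns to match nu
        u = mu / (K @ v)             # scale rows to match mu
    return u[:, None] * K * v[None, :]
```

Row-normalizing the resulting coupling gives a (stochastic) transport map that pushes prior samples toward posterior samples, which is the role the transport maps play in the method described above.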
Impact of Ice Morphology on Design Space of Pharmaceutical Freeze-Drying.
Goshima, Hiroshika; Do, Gabsoo; Nakagawa, Kyuya
2016-06-01
It has been known that the sublimation kinetics of a freeze-drying product is affected by its internal ice crystal microstructures. This article demonstrates the impact of the ice morphologies of a frozen formulation in a vial on the design space for the primary drying of a pharmaceutical freeze-drying process. Cross-sectional images of frozen sucrose-bovine serum albumin aqueous solutions were optically observed and digital pictures were acquired. Binary images were obtained from the optical data to extract the geometrical parameters (i.e., ice crystal size and tortuosity) that relate to the mass-transfer resistance of water vapor during the primary drying step. A mathematical model was used to simulate the primary drying kinetics and provided the design space for the process. The simulation results predicted that the geometrical parameters of frozen solutions significantly affect the design space, with large and less tortuous ice morphologies resulting in wide design spaces and vice versa. The optimal applicable drying conditions are influenced by the ice morphologies. Therefore, owing to the spatial distributions of the geometrical parameters of a product, the boundary curves of the design space are variable and could be tuned by controlling the ice morphologies. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Significance of the impact of motion compensation on the variability of PET image features
NASA Astrophysics Data System (ADS)
Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.
2018-03-01
In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of the motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frames of 4D-PET. Thirty-one PET features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN), and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast-oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and the PET image derived after modifying the reconstruction of the 3D image to match the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients.
For our patient cohort, the compensation of tumor motion did not have a significant impact on the quantitative PET parameters. The variability of PET parameters due to voxel size in image reconstruction was more significant than variability due to voxel size in image post-resampling. In conclusion, most of the parameters (apart from the contrast of neighborhood matrix) were robust to the motion compensation implied by 4D-PET/CT. The impact on parameter variability due to the voxel size in image reconstruction and in image post-resampling could not be assumed to be equivalent.
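The two main SUV discretization schemes compared above can be sketched as follows: method RW assigns voxels to bins of fixed SUV width, while method RN spreads a fixed number of bins over each lesion's SUV range. The bin width and bin count below are illustrative defaults, not the values used in the study.

```python
import numpy as np

def discretize_rw(suv, width=0.5):
    """Method RW: bins of constant SUV width (0.5 SUV here is illustrative)."""
    return np.floor(suv / width).astype(int) + 1

def discretize_rn(suv, n_bins=64):
    """Method RN: a constant number of bins over the lesion's SUV range."""
    lo, hi = suv.min(), suv.max()
    b = np.floor(n_bins * (suv - lo) / (hi - lo + 1e-12)).astype(int) + 1
    return np.clip(b, 1, n_bins)
```

RW keeps the SUV meaning of a bin fixed across patients, while RN keeps the number of grey levels fixed; texture features computed on the resulting integer maps can differ substantially between the two, which is the sensitivity the abstract reports.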
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. 
For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
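The covariance analysis described above rests on the standard linearized estimator covariance for weighted least squares, in which observation weights and sensitivities jointly determine the parameter variances. A minimal generic sketch (not the study's code):

```python
import numpy as np

def estimation_covariance(J, weights):
    """Linearized covariance of weighted least-squares parameter estimates,
    Cov(p) ~= (J^T W J)^{-1}.  J is the (n_obs, n_param) sensitivity
    (Jacobian) matrix; `weights` holds the observation weights 1/sigma_i^2."""
    W = np.diag(weights)
    return np.linalg.inv(J.T @ W @ J)
```

The diagonal gives the estimation variances: adding an observation in a high-sensitivity region enlarges J^T W J and shrinks them, which is why the abstract's sampling-network recommendations follow directly from the sensitivity and weight structure.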
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. However, if the parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
Impact of space dependent eddy mixing on large ocean circulation
NASA Astrophysics Data System (ADS)
Pradal, M. A. S.; Gnanadesikan, A.; Abernathey, R. P.
2016-02-01
Throughout the ocean, mesoscale eddies stir tracers such as heat, oxygen, helium and dissolved CO2, affecting their spatial distribution. Recent work (Gnanadesikan et al., 2013) showed that changes in eddy stirring could result in changes in the volume of hypoxic and anoxic waters, leading to drastic consequences for ocean biogeochemical cycles. The parameterization of mesoscale eddies in global climate models (GCMs) has two parts, based on the formulations of Redi (1982) and Gent and McWilliams (1990), which are associated with the mixing parameters ARedi and AGM respectively. Numerous studies have looked at the sensitivity of ESMs to changing AGM, either alone or in combination with an ARedi parameter taken to be equivalent to the value of AGM. By contrast, the impact of the Redi parameterization in isolation remains unexplored. In a previous article, Pradal and Gnanadesikan (2014) described the sensitivity of the climate system to a sixfold increase in the Redi parameter. They found that increasing the isopycnal mixing coefficient tended to warm the climate of the planet overall, through an increase in heat absorption linked to a destabilization of the halocline in subpolar regions (particularly the Southern Ocean). This previous work varied a globally constant Redi parameter from 400 m²/s to 2400 m²/s. New estimates from altimetry (Abernathey and Marshall, 2013) better constrain the spatial patterns and range of the ARedi parameter. Does such spatial variation matter, and if so, where does it matter? Following Gnanadesikan et al. (2013) and Pradal and Gnanadesikan (2014), this study examines this question with a suite of Earth System Models.
NASA Astrophysics Data System (ADS)
Macedonio, Giovanni; Costa, Antonio; Scollo, Simona; Neri, Augusto
2015-04-01
Uncertainty in tephra fallout hazard assessment may depend on the different meteorological datasets and eruptive source parameters used in the modelling. We present a statistical study to analyze this uncertainty in the case of a sub-Plinian eruption of Vesuvius of VEI = 4, with a column height of 18 km and a total erupted mass of 5 × 10¹¹ kg. The hazard assessment for tephra fallout is performed using the advection-diffusion model Hazmap. First, we statistically analyze different meteorological datasets: i) from the daily atmospheric soundings of the stations located in Brindisi (Italy) between 1962 and 1976 and between 1996 and 2012, and in Pratica di Mare (Rome, Italy) between 1996 and 2012; ii) from numerical weather prediction models of the National Oceanic and Atmospheric Administration and of the European Centre for Medium-Range Weather Forecasts. Furthermore, we vary the total mass, the total grain-size distribution, the eruption column height, and the diffusion coefficient. We then quantify the impact that the different datasets and model input parameters have on the probability maps. Results show that the parameter that most affects the tephra fallout probability maps, keeping the total mass constant, is the particle terminal settling velocity, which is a function of the total grain-size distribution, particle density and shape. In contrast, the hazard assessment depends only weakly on the choice of meteorological dataset, column height and diffusion coefficient.
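Since the terminal settling velocity dominates the sensitivity, its leading-order dependence on grain size and density is worth recalling. The Stokes-regime sketch below is only valid for fine ash (particle Reynolds number well below one) and ignores the shape and Reynolds-number corrections that Hazmap-type models include.

```python
def stokes_settling_velocity(diameter, rho_particle,
                             rho_air=1.2, mu=1.8e-5, g=9.81):
    """Stokes-regime terminal settling velocity in m/s.
    diameter in m, densities in kg/m^3, air viscosity mu in Pa*s.
    A hedged sketch of the dependence noted above, not Hazmap's law."""
    return (rho_particle - rho_air) * g * diameter ** 2 / (18.0 * mu)
```

The quadratic dependence on diameter is why the total grain-size distribution, rather than the meteorological dataset, controls the shape of the probability maps.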
Maximum Entropy Principle for Transportation
NASA Astrophysics Data System (ADS)
Bilich, F.; DaSilva, R.
2008-11-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
On the star-forming ability of Molecular Clouds
NASA Astrophysics Data System (ADS)
Anathpindika, S.; Burkert, A.; Kuiper, R.
2018-02-01
The star-forming ability of a molecular cloud depends on the fraction of gas it can cycle into the dense phase. Consequently, one of the crucial questions in reconciling star formation in clouds is to understand the factors that control this process. While it is widely accepted that variation in ambient conditions can significantly alter the ability of a cloud to spawn stars, the observed variation in the star-formation rate in nearby clouds that experience similar ambient conditions presents an interesting question. In this work, we attempted to reconcile this variation within the paradigm of colliding flows. To this end, we developed self-gravitating, hydrodynamic realizations of identical flows that were allowed to collide off-centre. Typical observational diagnostics such as the gas-velocity dispersion, the fraction of dense gas, the column density distribution (N-PDF), the distribution of gas mass as a function of K-band extinction and the strength of compressional/solenoidal modes in the post-collision cloud were deduced for different choices of the impact parameter of the collision. We find that a strongly sheared cloud is terribly inefficient at cycling gas into the dense phase and that such a cloud could possibly reconcile the sluggish nature of star formation reported for some clouds. Within the paradigm of cloud formation via colliding flows, this is possible in the case of flows colliding with a relatively large impact parameter. We conclude that compressional modes, though probably essential, are insufficient to ensure a relatively higher star-formation efficiency in a cloud.
Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology
Murakami, Yohei
2014-01-01
Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed to run the simulations with a parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor. PMID:25089832
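The core idea of annealing a particle population through a decreasing ABC tolerance schedule can be sketched as follows. This is a deliberately simplified illustration, not the paper's algorithm: proper population annealing carries importance weights between levels (which also yield the marginal likelihood), whereas this sketch just resamples survivors and jitters them.

```python
import numpy as np

def abc_annealed_ensemble(prior_sample, simulate, distance, data,
                          tolerances, n_particles, rng):
    """Toy ABC with an annealed (decreasing) tolerance schedule.
    At each level, particles whose simulated data fall outside the new
    tolerance are replaced by resampling survivors and jittering them.
    Returns the final particle set (a 'posterior parameter ensemble')."""
    particles = np.array([prior_sample(rng) for _ in range(n_particles)])
    for eps in tolerances:
        d = np.array([distance(simulate(p, rng), data) for p in particles])
        alive = particles[d <= eps]
        if len(alive) == 0:          # schedule too aggressive; stop early
            break
        idx = rng.integers(0, len(alive), size=n_particles)
        scale = alive.std() + 1e-12
        particles = alive[idx] + 0.1 * scale * rng.normal(size=n_particles)
    return particles
```

Running downstream simulations over the whole returned ensemble, rather than over a single "best" parameter, is the posterior-parameter-ensemble idea the abstract advocates for weakly identified parameters.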
Electron impact ionization of O2 and the interference effect from forward-backward asymmetry
NASA Astrophysics Data System (ADS)
Chowdhury, Madhusree Roy; Tribedi, Lokesh C.
2017-08-01
Absolute double differential cross sections (DDCSs) of secondary electrons emitted from O2 under the impact of 7 keV electrons were measured for emission angles between 30° and 145° and energies from 1 to 600 eV. Forward-backward angular asymmetry was observed in the angular distribution of the DDCS of secondary electrons. The asymmetry parameter, obtained from the DDCSs of two complementary angles, showed a clear signature of interference oscillation. The Cohen-Fano model of Young-type electron interference at a molecular double slit is found to provide a good fit to the observed oscillatory structures. The present observation is in qualitative agreement with recent results obtained from photoionization.
Understanding the Physical Structure of the Comet Shoemaker-Levy 9 Fragments
NASA Astrophysics Data System (ADS)
Rettig, Terrence
2000-07-01
Images of the fragmented comet Shoemaker-Levy 9 (SL9) as it approached Jupiter in 1994 provided a unique opportunity to (1) probe the comae, (2) understand the structure of the 20 cometary objects, and (3) provide limits on the Jovian impact parameters. The primary cometary questions were: how were the fragments formed and what was their central structure? There still remains a diversity of opinion regarding the structure of the 21 comet-like fragments as well as the specifics of the disruption event itself. We have shown from Monte Carlo modeling of surface brightness profiles that SL9 fragments had unusual dust size distributions and outflow velocities. Further work of a preliminary nature showed that some of the central reflecting area excesses derived from surface brightness profile fitting (w/psf) appeared distributed rather than centrally concentrated, as would be expected for comet-like objects; some central excesses were negative, and the excesses could vary with time. With an improved coma subtraction technique we propose to model each coma surface brightness profile and extract central reflecting areas or central brightness excesses for the non-star-contaminated WFPC-2 SL9 images, to determine the behavior and characteristics of the central excesses as the fragments approached Jupiter. A second phase of the proposal will be to use numerical techniques (in conjunction with D. Richardson) to investigate the various fragment models. This is a difficult modeling process that will allow us to model the structure and physical characteristics of the fragments and thus constrain parameters for the Jovian impact events. The results will be used to constrain the structure of the central fragment cores of SL9 and how the observed dust comae were produced. The results will provide evidence to discriminate between the parent nucleus models (i.e., were the fragments solid objects or swarms of particles?) and provide better constraints on the atmospheric impact models.
The physical characteristics of cometary nuclei are not well understood, and the SL9 data provide an important opportunity to constrain these parameters.
L-moments and TL-moments of the generalized lambda distribution
Asquith, W.H.
2007-01-01
The 4-parameter generalized lambda distribution (GLD) is a flexible distribution capable of mimicking the shapes of many distributions and data samples, including those with heavy tails. The method of L-moments and the recently developed method of trimmed L-moments (TL-moments) are attractive techniques for parameter estimation for heavy-tailed distributions for which the L- and TL-moments have been defined. Analytical solutions for the first five L- and TL-moments in terms of GLD parameters are derived. Unfortunately, numerical methods are needed to compute the parameters from the L- or TL-moments. Algorithms are suggested for parameter estimation. Application of the GLD using both L- and TL-moment parameter estimates from example data is demonstrated, and a comparison with the L-moment fit of the 4-parameter kappa distribution is made. A small simulation study of the 98th percentile (far-right tail) is conducted for a heavy-tail GLD with high-outlier contamination. The simulations show, with respect to estimation of the 98th-percentile quantile, that TL-moments are less biased (more robust) in the presence of high-outlier contamination. However, the robustness comes at the expense of considerably more sampling variability. © 2006 Elsevier B.V. All rights reserved.
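The sample L-moments that get matched to the analytical GLD expressions are computed from probability-weighted moments. Below is a sketch of Hosking's unbiased estimators for the first three L-moments; it illustrates the estimation side only, not the GLD inversion, which (as the abstract notes) requires numerical methods.

```python
import numpy as np

def sample_lmoments(data):
    """First three sample L-moments via probability-weighted moments
    b0, b1, b2 (Hosking's unbiased estimators):
      l1 = b0,  l2 = 2*b1 - b0,  l3 = 6*b2 - 6*b1 + b0."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
```

TL-moments follow the same order-statistic logic but trim the smallest and largest observations from each conceptual sample, which is what buys the robustness to high outliers reported above.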
Photon statistics of a two-mode squeezed vacuum
NASA Technical Reports Server (NTRS)
Schrade, Guenter; Akulin, V. M.; Schleich, W. P.; Manko, Vladimir I.
1994-01-01
We investigate the general case of the photon distribution of a two-mode squeezed vacuum and show that the distribution of photons among the two modes depends on four parameters: two squeezing parameters, the relative phase between the two oscillators, and their spatial orientation. The distribution of the total number of photons depends only on the two squeezing parameters. We derive analytical expressions and present pictures for both distributions.
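For orientation, in the degenerate case of a single squeezing parameter r (equal squeezing of both modes, no mode rotation), the total-photon-number distribution reduces to the familiar pair-wise form; this is a standard textbook result, stated here as context rather than taken from the paper:

```latex
P(N = 2n) = \frac{\tanh^{2n} r}{\cosh^{2} r}, \qquad P(N = 2n + 1) = 0,
```

i.e., photons are created strictly in correlated pairs, and the even-N probabilities decay geometrically with ratio \tanh^{2} r. The general two-squeezing-parameter expressions derived in the paper interpolate away from this limit.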
NASA Astrophysics Data System (ADS)
Dang, Xinyue; Yang, Huan; Naafs, B. David A.; Pancost, Richard D.; Xie, Shucheng
2016-09-01
The distribution of bacterial branched glycerol dialkyl glycerol tetraethers (brGDGTs) is influenced by growth temperature and pH. This has led to the widespread application of the brGDGT-based MBT(′)/CBT proxy (MBT - methylation of branched tetraethers, CBT - cyclization of branched tetraethers) in terrestrial paleo-environmental reconstructions. Recently, it was shown that the amount of precipitation could also have an impact on CBT, as well as on the abundance of brGDGTs relative to that of archaeal isoprenoidal (iso)GDGTs (Ri/b) and the absolute abundance of brGDGTs, potentially complicating the use of MBT/CBT as a paleothermometer. However, the full influence of hydrology, and in particular soil water content (SWC), on GDGT distributions remains unclear. Here we investigated variations in the GDGT distribution across a SWC gradient (0-61%) around Qinghai Lake on the Tibetan Plateau, an arid to semiarid region in China. Our results demonstrate that SWC affects the brGDGT distribution. In particular, we show that SWC has a clear impact on the degree of methylation of C6-methylated brGDGTs, whereas C5-methylated brGDGTs are more impacted by temperature. This results in a combined SWC and temperature control on MBT′. In this context we propose a diagnostic parameter, the IR6ME (relative abundance of C6-methylated GDGTs) index, to evaluate the applicability of brGDGT-based paleotemperature reconstructions. Using the global dataset, expanded with our own data, MBT′ has a significant correlation with mean annual air temperature when IR6ME < 0.5, allowing the use of MBT′/CBT as a temperature proxy. However, MBT′ has a significant correlation with mean annual precipitation (i.e., a substantial reflection of the SWC impact) when IR6ME > 0.5, implying that MBT′ may respond to hydrological change in these regions and can be used as a proxy for MAP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viswanathan, Sandeep; Rothamer, David; Zelenyuk, Alla
The impact of inlet particle properties on the filtration performance of clean and particulate matter (PM) laden cordierite filter samples was evaluated using PM generated by a spark-ignition direct-injection (SIDI) engine fuelled with Tier II EEE certification gasoline. Prior to the filtration experiments, a scanning mobility particle spectrometer (SMPS) was used to measure the electrical-mobility-based particle size distribution (PSD) in the SIDI exhaust from distinct engine operating conditions. An advanced aerosol characterization system comprising a centrifugal particle mass analyser (CPMA), a differential mobility analyser (DMA), and a single particle mass spectrometer (SPLAT II) was used to obtain additional information on the SIDI particulate, including particle composition, mass, and dynamic shape factors (DSFs) in the transition and free-molecular flow regimes. During the filtration experiments, real-time measurements of PSDs upstream and downstream of the filter sample were used to estimate the filtration performance and the total trapped mass within the filter using an integrated particle size distribution method. The filter loading process was paused multiple times to evaluate the filtration performance in the partially loaded state. The change in the vacuum aerodynamic diameter distribution of mass-selected particles was examined for flow through the filter to identify whether preferential capture of particles of certain shapes occurred in the filter. The filter was also probed using different inlet PSDs to understand their impact on particle capture within the filter sample. Results from the filtration experiment suggest that pausing the filter loading process and subsequently performing the filter probing experiments did not impact the overall evolution of filtration performance.
Within the present distribution of particle sizes, filter efficiency was independent of particle shape, potentially due to the diffusion-dominant filtration process. Particle mobility diameter and trapped mass within the filter appeared to be the dominant parameters that impacted filter performance.
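The integrated-PSD bookkeeping used to get size-resolved efficiency and total trapped mass from the upstream/downstream distributions can be sketched as follows. Spherical particles of an assumed density are used here for the per-particle mass, whereas the study had measured particle mass and shape available.

```python
import numpy as np

def filtration_metrics(d_nm, n_up, n_down, rho=1000.0):
    """Size-resolved filtration efficiency and total trapped mass from
    upstream/downstream number concentrations per size bin.
    d_nm: bin diameters in nm; n_up/n_down: particle counts per bin;
    rho: assumed particle density in kg/m^3 (spheres assumed)."""
    eff = 1.0 - n_down / np.maximum(n_up, 1e-30)        # per-bin efficiency
    mass_per_particle = rho * (np.pi / 6.0) * (d_nm * 1e-9) ** 3  # kg
    trapped = np.sum((n_up - n_down) * mass_per_particle)
    return eff, trapped
```

Because the per-particle mass scales with diameter cubed, the trapped-mass estimate is dominated by the largest bins even when capture counts are highest at small sizes, which is why mobility diameter and trapped mass emerge as the controlling parameters.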
NASA Astrophysics Data System (ADS)
Wolf, Thomas; Lüddeke, Frauke; Thiange, Christophe
2015-04-01
According to the assessment criteria of the European Water Framework Directive, Lake Constance has good water quality. Nevertheless, upcoming criteria using environmental quality measures show that there are still problems with respect to micropollutants. In fact, we observe significantly enhanced concentrations of micropollutants close to river mouths and in the shallow water zones of Lake Constance compared to deep-water concentrations. These findings are caused by river water plumes, which can flow over distances of kilometers in the lake without being diluted, or mixing only weakly with the surrounding lake water body. In addition, large populations of submerged aquatic macrophytes (SAM) exist in the area of interest. There is only little knowledge of how these influence the distribution and transport processes of micropollutants. In order to assess the impact and distribution of river water plumes in different areas of the lake, we implemented a 3-dim hydrodynamic model using DELFT3D-FLOW on a locally refined numerical grid, which makes it possible to cover the different process scales of the distribution of river water bodies, ranging from a few meters up to basin-wide scales in the order of a few kilometers. We used numerical tracers (conservative and non-conservative) to quantify the impact of different abstract substance classes which are distinguished by their decay rates. In order to assess the influence of SAM populations on the current field and transport processes, we used a special simulation technique, the trachytope concept. The results of our 3-dim hydrodynamic model showed significantly changed current velocities, residence times and age-of-water parameters within the SAM areas compared to the control simulation without SAM.
By simulating the propagation of coliform bacteria using numerical tracers with spatially and temporally variable decay rates, we found complex impact patterns of the SAM on the distribution of these potentially harmful microorganisms.
NASA Astrophysics Data System (ADS)
Borfecchia, Flavio; Micheli, Carla; Belmonte, Alessandro; De Cecco, Luigi; Sannino, Gianmaria; Bracco, Giovanni; Mattiazzo, Giuliana; Vittoria Struglia, Maria
2016-04-01
Marine renewable energy extraction plays a key role both in the energy security of small islands and in the mitigation of climate change, but at the same time poses the important question of monitoring the effects of the interaction of such devices with the marine environment. In this work we present a new methodology, integrating satellite remote sensing techniques with in situ observations and biophysical parameter analysis, for the monitoring and mapping of Posidonia oceanica (PO) meadows in shallow coastal waters. This methodology has been applied to the coastal area offshore of Pantelleria Island (Southern Mediterranean), where the first Italian Inertial Sea Wave Energy Converter (ISWEC) prototype has recently been installed. The prototype, developed by the Polytechnic of Turin, consists of a platform 8 meters wide, 15 meters long and 4.5 meters high, moored at about 800 meters from the shore and at 31 m depth. It is characterized by high conversion efficiency, resulting from its adaptability to different wave conditions, and a limited environmental impact due to its innovative mooring method with no fixed anchors to the seabed. The island of Pantelleria is characterized by high transparency of its coastal waters and PO meadow ecosystems with still significant levels of biodiversity and specific adaptation to the accentuated hydrodynamics of these shores. Although ISWEC is a low-impact inertial mooring system able to ensure a reliable connection to the electric grid with minimal impact on seagrass growing on the seabed, the prototype installation and operation involve an interaction with local PO and seagrass meadows and a possible decrease in water transparency. In this view, monitoring of the local PO ecosystem is mandatory in order to allow the detection of potential stress and damage due to ISWEC-related activities and/or other factors.
However, monitoring and collecting accurate and repeated information on the necessary parameters over large areas by means of traditional methods (e.g., diving and plant counting) can be difficult and expensive. To overcome these limits, we present an integrated methodology for effective monitoring and mapping of PO meadows using satellite/airborne EO (Earth Observation) techniques calibrated by means of sea truth measurements and laboratory genetic analyses. During the last summer, a sea truth campaign over the areas of interest was performed, and point measurements of several biophysical parameters (biomass, shoot density, cover) related to PO phenology were acquired by means of an original sampling method at stations distributed along a bathymetric gradient starting from the ISWEC location at 31 m depth. Synchronous Landsat 8 OLI and Sentinel 2 MSI multispectral data (the latter recently made available within the Copernicus EU program), covering the entire coastal area of interest, were acquired and preprocessed with the objective of testing their improved capabilities for mapping PO distribution and related biophysical parameters, on the basis of previously developed operative methods and near-synchronous sea truth data. The processed point sample measurements were then exploited for multispectral data calibration, with the support of statistical and bio-optical modelling approaches, to obtain improved thematic maps of the local PO distributions.
NASA Astrophysics Data System (ADS)
Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo
2016-12-01
We investigated the statistical characteristics and probability distributions of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (i.e., 5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution forms of several important mechanical parameters, including deformational parameters, characteristic strengths, characteristic strains, and failure angle. The statistical proofs relating to the mechanical parameters of rock presented new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing the random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
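The distribution-selection step described in the abstract above can be sketched as a simple goodness-of-fit comparison. This is an illustrative sketch, not the authors' code: the 20-sample strength data below are synthetic, and the mean and spread are assumed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for 20 peak strengths (MPa) at one confining-stress level.
strengths = rng.lognormal(mean=np.log(150.0), sigma=0.08, size=20)

# Fit candidate distributions and compare via a Kolmogorov-Smirnov test.
for name, dist in [("normal", stats.norm), ("log-normal", stats.lognorm)]:
    params = dist.fit(strengths)
    ks_stat, p_value = stats.kstest(strengths, dist.name, args=params)
    print(f"{name}: KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")

# Coefficient of variation of the peak strengths (a quantity the study found
# to be independent of confining stress).
cv = strengths.std(ddof=1) / strengths.mean()
print(f"CV = {cv:.3f}")
```

With only 20 samples per stress level, the KS test mainly rules out grossly wrong forms; this is consistent with the study's point about deciding a minimum reliable sample size.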
Bordes, Julien; Mazzeo, Cecilia; Gourtobe, Philippe; Cungi, Pierre Julien; Antonini, Francois; Bourgoin, Stephane; Kaiser, Eric
2015-01-01
Background: Extraperitoneal laparoscopy has become a common technique for many surgical procedures, especially for inguinal hernia surgery. Investigations of the physiological changes occurring during extraperitoneal carbon dioxide (CO2) insufflation have mostly focused on blood gas changes. To date, the impact of extraperitoneal CO2 insufflation on respiratory mechanics remains unknown, whereas changes in respiratory mechanics have been extensively studied in intraperitoneal insufflation. Objectives: The aim of this study was to investigate the effects of extraperitoneal CO2 insufflation on respiratory mechanics. Patients and Methods: A prospective, observational study was performed on nine patients undergoing laparoscopic inguinal hernia repair. Anesthetic management and intraoperative care were standardized. All patients were mechanically ventilated with a tidal volume of 8 mL/kg using an Engström Carestation ventilator (GE Healthcare). Ventilation distribution was assessed by electrical impedance tomography (EIT). End-expiratory lung volume (EELV) was measured by a nitrogen wash-out/wash-in method. Ventilation distribution, EELV, ventilator pressures and hemodynamic parameters were assessed before extraperitoneal insufflation and during insufflation with a PEEP of 0, 5, and 10 cmH2O. Results: EELV and thoracopulmonary compliance were significantly decreased after extraperitoneal insufflation. Ventilation distribution was significantly higher in ventral lung regions during general anesthesia and was not modified after insufflation. Application of a 10 cmH2O PEEP resulted in a significant increase in EELV and a shift of ventilation toward the dorsal lung regions. Conclusions: Extraperitoneal insufflation decreased EELV and thoracopulmonary compliance. Application of a 10 cmH2O PEEP increased EELV and homogenized the ventilation distribution.
This preliminary clinical study showed that extraperitoneal insufflation worsened respiratory mechanics, which may justify further investigations to evaluate the clinical impact. PMID:25789238
Numerical details and SAS programs for parameter recovery of the SB distribution
Bernard R. Parresol; Teresa Fidalgo Fonseca; Carlos Pacheco Marques
2010-01-01
The four-parameter SB distribution has seen widespread use in growth-and-yield modeling because it covers a broad spectrum of shapes, fitting both positively and negatively skewed data and bimodal configurations. Two recent parameter recovery schemes, an approach whereby characteristics of a statistical distribution are equated with attributes of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zadora, A. S., E-mail: as.zadora@physics.msu.ru
The objective of the present study is to consider in more detail the exotic color-charge-glow effect discovered recently and to analyze its possible physical manifestations associated with the treatment of ensembles of color-charged particles at a classical level. The ways in which this effect may appear in arbitrary systems consisting of pointlike massive particles and admitting a partition into elementary configurations like color charges and color dipoles are studied. The possible influence of this effect on particle dynamics (in particular, on gluon distributions) is also examined. Particle collisions at given impact parameters are considered for a natural regularization of emerging expressions. It is shown that, in the case of reasonable impact-parameter values, collisions may proceed in the electrodynamic mode, in which case the charge-glow contribution to field strengths is suppressed in relation to what we have in the electrodynamic picture. From an analysis of the color-echo situation, it follows that the above conclusion remains valid for more complicated particle configurations as well, since hard gluon fields may arise only owing to a direct collision rather than owing to any echo-like effects.
Role of Relativistic Effects in the Ionization of Heavy Ions by Electron Impact
NASA Astrophysics Data System (ADS)
Saha, Bidhan C.; Basak, Arun K.
2005-05-01
Electron impact single ionization cross sections of a few heavy ions are evaluated using recently proposed modifications [1] of the widely used simplified version of the improved binary-encounter dipole (siBED) model [2]. This model contains two adjustable parameters, which are found to be related to the nature of the charge distribution in the bonding region of the target. For its effective use with ionic targets, the siBED model is further modified [3] in terms of ionic and relativistic effects. This study focuses on the relativistic energy domain, and the findings suggest the fate of those parameters. Details of our findings will be presented at the conference. [1] W. M. Huo, Phys. Rev. A 64, 042719 (2001). [2] M. A. Uddin, M. A. K. F. Haque, A. K. Basak and B. C. Saha, Phys. Rev. A 70, 032706 (2004). [3] M. A. Uddin, M. A. K. F. Haque, M. S. Mahbub, K. R. Karim, A. K. Basak and B. C. Saha, Phys. Rev. A (in press) 2005.
High Pressure Water Stripping Using Multi-Orifice Nozzles
NASA Technical Reports Server (NTRS)
Hoppe, David T.
1998-01-01
The use of multi-orifice rotary nozzles not only increases the speed and stripping effectiveness of high pressure water blasting systems, but also greatly increases the complexity of selecting and optimizing the operating parameters. The rotational speed of the nozzle must be coupled with the transverse velocity of the nozzle as it passes across the surface of the substrate being stripped. The radial and angular positions of each orifice must be included in the analysis of the nozzle configuration. Since orifices at the outer edge of the nozzle head move at a faster rate than orifices located near the center, the energy impact force of the water stream from an outer orifice is spread over a larger area than the water streams from the inner orifices. Utilizing a larger diameter orifice in the outer radial positions increases the energy impact to compensate for this wider force distribution. The total flow rate from the combination of orifices must be monitored and kept below the pump capacity while choosing an orifice to insert in each position. The energy distribution from the orifice pattern is further complicated because the rotary paths of all orifices in the nozzle head pass through the center section, contributing to the stripping in this area, while only the outermost orifice contributes to the stripping in the shell area at the extreme outside edge of the nozzle. From the outermost shell to the center section, more orifices contribute to the stripping in each progressively smaller diameter shell. With all these parameters to configure, and each parameter change affecting the others, a computer model was developed to track and coordinate them. The computer simulation responds by graphically indicating the cumulative effect of each parameter selected. The result of proper parameter choices is a well-designed, highly efficient stripping system.
A poorly chosen set of parameters will cause the nozzle to strip aggressively in some areas while leaving the coating untouched in adjacent sections. The high pressure water stripping system can be set to extremely aggressive conditions, allowing stripping of hard-to-remove adhesives, paint systems, cladding and chromate conversion coatings. The energy force can be reduced to strip coatings from thin aluminum substrates without causing damage or deterioration to the substrate's surface. High pressure water stripping of aerospace components has thus proven to be an efficient and cost-effective method for cleaning and removing coatings.
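The parameter bookkeeping this abstract describes (outer orifices sweep larger annular shells, and the total demand must stay below pump capacity) can be sketched as follows. All numeric values are hypothetical, not from the NASA model, and the flow estimate uses the standard sharp-orifice discharge approximation Q[gpm] = 29.84·Cd·d²·√p.

```python
import math

# Hypothetical layout: (orifice radius from spin axis [in], orifice diameter [in]).
orifices = [(0.5, 0.020), (1.5, 0.025), (2.5, 0.030)]
pressure_psi = 36000.0        # assumed operating pressure
pump_capacity_gpm = 12.0      # assumed pump limit

def orifice_flow_gpm(d_in, p_psi, cd=0.7):
    """Standard discharge approximation Q = 29.84 * Cd * d^2 * sqrt(p), Q in gpm."""
    return 29.84 * cd * d_in ** 2 * math.sqrt(p_psi)

def swept_annulus_area(r_in, d_in):
    """Area of the annular shell a spinning orifice covers per revolution (in^2)."""
    return math.pi * ((r_in + d_in / 2) ** 2 - (r_in - d_in / 2) ** 2)

# Total demand must stay below pump capacity.
total_flow = sum(orifice_flow_gpm(d, pressure_psi) for _, d in orifices)
assert total_flow < pump_capacity_gpm, "orifice set exceeds pump capacity"

# Outer orifices spread their stream over a larger shell, which is why the text
# assigns them larger diameters to keep flow per swept area comparable.
for r, d in orifices:
    q = orifice_flow_gpm(d, pressure_psi)
    print(f"r={r:.1f} in: {q:.2f} gpm over {swept_annulus_area(r, d):.2f} in^2")
```

A full model would also fold in rotational speed, transverse velocity, and the overlap of inner shells by every orifice's path, as the abstract notes.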
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described in conjunction with the simulations. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
NASA Astrophysics Data System (ADS)
Paris, Jean-Daniel; Belan, Boris D.; Ancellet, Gérard; Nédélec, Philippe; Arshinov, Mikhail Yu.; Pruvost, Arnaud; Berchet, Antoine; Arzoumanian, Emmanuel; Pison, Isabelle; Ciais, Philippe; Law, Kathy
2014-05-01
Despite the unique scientific value of better knowing the atmospheric composition over Siberia, regional observations of the tropospheric composition over this region are still lacking. Large local anthropogenic emissions, strong ecosystem gas exchange across the vast forest expanse, and processes feeding back to global climate, such as wetland CH4 emissions, seabed hydrate destabilization and degrading permafrost, make this region particularly crucial to investigate. We aim to address this need in the YAK-AEROSIB program by collecting high-precision in-situ measurements of the vertical distribution of CO2, CH4, CO, O3, black carbon and ultrafine particles in the Siberian troposphere, as well as other parameters including aerosol lidar profiles, on a pan-Siberian aircraft transect. Campaigns have been performed almost annually since 2006 on this regular route, while special campaigns are occasionally arranged to sample the troposphere elsewhere (e.g., the Russian Arctic coast). We show the background tropospheric composition obtained from these surveys, the variability and the impact of large-scale transport of anthropogenic emissions from Europe and Asia, as well as the impact of biomass burning plumes both from local wildfires (2012) and from remote sources elsewhere in Asia. Long range transport of anthropogenic emissions is shown to have a discernible impact on the O3 distribution, although its lower-tropospheric variability is largely driven by surface deposition. Regional sources and sinks drive the lower-tropospheric CO2 and CH4 concentrations. Recent efforts aim at better understanding the respective roles of CH4 emission processes (including methanogenesis in wetlands and emissions by wildfires) in driving its large-scale atmospheric variability over the region.
Generally, the YAK-AEROSIB campaigns provide unique observations over Siberia, documenting both the direct impact of regional sources and aged air masses experiencing long range transport toward the high Arctic.
Gao, Qingzhu; Guo, Yaqi; Xu, Hongmei; Ganjurjav, Hasbagen; Li, Yue; Wan, Yunfan; Qin, Xiaobo; Ma, Xin; Liu, Shuo
2016-06-01
Changes in climate have caused impacts on ecosystems on all continents, and climate change is also projected to be a stressor on most ecosystems even under low- to medium-range warming scenarios. The alpine ecosystem of the Qinghai-Tibetan Plateau is vulnerable to climate change. To quantify climate change impacts on alpine ecosystems, we simulated the vegetation distribution and net primary production (NPP) in the Qinghai-Tibetan Plateau for three future periods (2020s, 2050s and 2080s) using climate projections for the RCP (Representative Concentration Pathway) 4.5 and 8.5 scenarios. The modified Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ model) was parameterized and tested to make it applicable to the Qinghai-Tibetan Plateau. Climate projections applied to the LPJ model in the Qinghai-Tibetan Plateau showed trends toward warmer and wetter conditions. Results based on climate projections indicated changes from 1.3°C to 4.2°C in annual temperature and from 2% to 5% in annual precipitation. The main impacts on vegetation distribution were an increase in the area of forests and shrubs, a decrease in alpine meadows, which were mainly replaced by shrubs that dominated the eastern plateau, and an expansion of alpine steppes to the northwest, which dominated the western and northern plateau. NPP was projected to increase by 79% and 134% under RCP4.5 and RCP8.5, respectively. The projected NPP generally increased by about 200 gC·m(-2)·yr(-1) in most parts of the plateau, with a gradual increase from the eastern to the western region of the Qinghai-Tibetan Plateau by the end of this century. Copyright © 2016 Elsevier B.V. All rights reserved.
Impact of Inflow Conditions on Coherent Structures in an Aneurysm
NASA Astrophysics Data System (ADS)
Yu, Paulo; Durgesh, Vibhav; Johari, Hamid
2017-11-01
An aneurysm is an enlargement of a weakened arterial wall that can be debilitating or fatal on rupture. Studies have shown that hemodynamics is integral to developing an understanding of aneurysm formation, growth, and rupture. This investigation focuses on a comprehensive study of the impact of varying inflow conditions and aneurysm shapes on spatial and temporal behavior of flow parameters and structures in an aneurysm. Two different shapes of an idealized rigid aneurysm model were studied and the non-dimensional frequency and Reynolds number were varied between 2-5 and 50-250, respectively. A ViVitro Labs SuperPump system was used to precisely control inflow conditions. Particle Image Velocimetry (PIV) measurements were performed at three different locations inside the aneurysm sac to obtain detailed velocity flow field information. The results of this study showed that aneurysm morphology significantly impacts spatial and temporal behavior of large-scale flow structures as well as wall shear stress distribution. The flow behavior and structures showed a significant difference with change in inflow conditions. A primary fluctuating flow structure was observed for Reynolds number of 50, while for higher Reynolds numbers, primary and secondary flow structures were observed. Furthermore, the paths of these coherent structures were dependent on aneurysm shape and inflow parameters.
The statistical characteristics of rain-generated stalks on water surface
NASA Astrophysics Data System (ADS)
Liu, Xinan; Liu, Ren; Duncan, James H.
2017-11-01
Laboratory measurements of the stalks generated by the impact of raindrops are performed in a 1.22-m-by-1.22-m water pool with a water depth of 0.3 m. Simulated raindrops are generated by an array of 22-gauge hypodermic needles that are attached to the bottom of an open-surface rain tank. The raindrop diameter is about 2.6 mm and the height of the rain tank above the water surface of the pool is varied from 1 m to 4.5 m to provide different impact velocities. A number of parameters, including the diameter, height and initial upward velocity of the center jets (stalks), are measured with a cinematic laser-induced fluorescence technique. It is found that the maximum potential energy of the stalk and the joint distribution of stalk height and diameter are strongly correlated to the impact velocities of raindrops. Comparisons between the rain experiments and single drop impacts on a quiescent water surface are also shown.
Splash Dynamics of Falling Surfactant-Laden Droplets
NASA Astrophysics Data System (ADS)
Sulaiman, Nur; Buitrago, Lewis; Pereyra, Eduardo
2017-11-01
Splashing dynamics is a common issue in oil and gas separation technology. In this study, the impact of droplets with various surfactant concentrations onto solid and liquid surfaces is studied experimentally using high-speed imaging analysis. Although this area has been widely studied in the past, there is still not a good understanding of the role of surfactant in droplet impact and the characterization of the resulting splash dynamics. The experiments are conducted using tap water laden with an anionic surfactant. The effects of system parameters on single droplet impingement, such as surfactant concentration (no surfactant, and below, at, and above the critical micelle concentration), parent drop diameter (2-5 mm), impact velocity and type of impact surface (thin and deep pool), are investigated. Image analysis is shown to be an effective technique for identifying the coalescence-to-splashing transition. In addition, daughter droplet size distributions are analyzed qualitatively in the events of splashing. As expected, it is observed that the formation of secondary droplets is affected by the surfactant concentration. A summary of findings will be discussed.
Frequency distributions and correlations of solar X-ray flare parameters
NASA Technical Reports Server (NTRS)
Crosby, Norma B.; Aschwanden, Markus J.; Dennis, Brian R.
1993-01-01
Frequency distributions of flare parameters are determined from over 12,000 solar flares. The flare duration, the peak counting rate, the peak hard X-ray flux, the total energy in electrons, and the peak energy flux in electrons are among the parameters studied. Linear regression fits, as well as the slopes of the frequency distributions, are used to determine the correlations between these parameters. The relationship between the variations of the frequency distributions and the solar activity cycle is also investigated. Theoretical models for the frequency distribution of flare parameters are dependent on the probability of flaring and the temporal evolution of the flare energy build-up. The results of this study are consistent with stochastic flaring and exponential energy build-up. The average build-up time constant is found to be 0.5 times the mean time between flares.
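The slope estimation underlying flare frequency distributions of this kind can be sketched as a log-log fit to the empirical cumulative distribution. The sketch below uses synthetic power-law draws with an illustrative exponent; it is not the paper's data or its fitted value.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true = 1.8  # assumed differential power-law index, for illustration only

# Classical Pareto draws with survival P(X > x) = x^-(alpha_true - 1) for x >= 1.
peaks = rng.pareto(alpha_true - 1.0, size=5000) + 1.0

# Cumulative frequency distribution: number of events exceeding each value.
x = np.sort(peaks)
n_exceeding = np.arange(len(x), 0, -1)

# Linear regression in log-log space, as done for the flare parameter slopes.
slope, intercept = np.polyfit(np.log10(x), np.log10(n_exceeding), 1)
print(f"fitted cumulative slope ~ {slope:.2f}  (expected ~ {-(alpha_true - 1):.1f})")
```

The cumulative slope is one less in magnitude than the differential index, a distinction worth keeping straight when comparing published flare-distribution slopes.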
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, F.; Bohler, D.; Ding, Y.
2015-12-07
Photocathode RF guns have been widely used for the generation of high-brightness electron beams for many different applications. We found that the drive laser distributions in such RF guns play important roles in minimizing the electron beam emittance. Characterizing the laser distributions with measurable parameters and optimizing beam emittance versus the laser distribution parameters in both the spatial and temporal directions are highly desired for high-brightness electron beam operation. In this paper, we report systematic measurements and simulations of emittance dependence on the measurable parameters representing spatial and temporal laser distributions at the photocathode RF gun systems of the Linac Coherent Light Source. The tolerable parameter ranges for photocathode drive laser distributions in both directions are presented for ultra-low emittance beam operations.
Impact of the June 2013 Riau province Sumatera smoke haze event on regional air pollution
NASA Astrophysics Data System (ADS)
Dewi Ayu Kusumaningtyas, Sheila; Aldrian, Edvin
2016-07-01
Forest and land fires in the Riau province of Sumatera increase along with rapid deforestation and land clearing, and are induced by a dry climate. These fires, which occur routinely every year, cause trans-boundary air pollution reaching Singapore. Economic losses are felt by Indonesia and by Singapore as an affected country, which creates tensions among neighboring countries. High concentrations of aerosols are emitted from the fires, which degrade local air quality and reduce visibility. This study aimed to analyze the impact of the June 2013 smoke haze event on the environment and air quality both in Riau and Singapore, as well as to characterize the aerosol properties in Singapore during the fire period. Air quality parameters combined with aerosol data from the Aerosol Robotic Network (AERONET) and some environmental parameters, i.e., rainfall, visibility, and hotspot numbers, are investigated. There are significant relationships between aerosol and environmental parameters both in Riau and Singapore. From HYSPLIT modeling and a one-day-lag correlation, smoke haze in Singapore is traced back to fire locations in Riau province after propagating for one day. Aerosol characterization through aerosol optical depth (AOD), the Ångstrom parameter and particle size distribution indicates the presence of a great number of fine aerosols in Singapore, which is characteristic of biomass burning aerosols. Fire and smoke haze also impaired economic activity both in Riau and Singapore, leaving accounted economic losses as reported by some agencies.
Dielectric elastomer for stretchable sensors: influence of the design and material properties
NASA Astrophysics Data System (ADS)
Jean-Mistral, C.; Iglesias, S.; Pruvost, S.; Duchet-Rumeau, J.; Chesné, S.
2016-04-01
Dielectric elastomers exhibit extended capabilities as flexible sensors for the detection of load distributions, pressure or large deformations. Tracking human movements of the fingers or the arms could be useful for the reconstruction of sporting gestures, or to control a human-like robot. New measurement methods have been proposed in a number of publications, improving the sensitivity and accuracy of the sensing method. Generally, however, the associated modelling remains simple (RC or RC transmission line). The material parameters are considered constant or as having a negligible effect, which can lead to a serious reduction in accuracy. Comparisons between measurements and modelling require care and skill, and can be tricky. Thus, we propose here a comprehensive model, taking into account the influence of the material properties on the performance of the dielectric elastomer sensor (DES). Various parameters influencing the characteristics of the sensors have been identified: dielectric constant and hyper-elasticity. The variations of these parameters as a function of strain impact the linearity and sensitivity of the sensor by a few percent. The sensitivity of the DES is also evaluated by changing geometrical parameters (initial thickness) and its design (rectangular and dog-bone shapes). We discuss the impact of the shape with regard to stress. Finally, DESs consisting of a silicone elastomer sandwiched between two highly conductive stretchable electrodes were manufactured and investigated. Classic and reliable LCR measurements are detailed. Experimental results validate our numerical model of a large-strain sensor (>50%).
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained that aids these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis, a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues: specifically, the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
Detection and Distribution of Natural Gaps in Tropical Rainforest
NASA Astrophysics Data System (ADS)
Goulamoussène, Y.; Linguet, L.; Hérault, B.
2014-12-01
Forest management is important for assessing biodiversity and ecological processes. Requirements for disturbance information have also been expressed by the scientific community. Therefore, understanding and monitoring the frequency distribution of treefall gaps is relevant to better understanding and predicting the carbon budget in response to global change and land use change. In this work we characterize and quantify the frequency distribution of natural canopy gaps. We then examine the interaction between environmental variables and gap formation across the tropical rainforest of the French Guiana region using high resolution airborne Light Detection and Ranging (LiDAR). We mapped gaps from the canopy height model over 40,000 ha of forest. We used a Bayesian modelling framework to estimate and select useful covariate model parameters. Topographic variables are included in a model to predict the gap size distribution. We discuss results on the interaction between environment and gap size distribution, mainly topographic indexes. The use of both airborne and space-based techniques has improved our ability to supply needed disturbance information. This work is an approach at the plot scale. The use of satellite data will allow us to work at the forest scale. The inclusion of climate variables in our model will let us assess the impact of global change on tropical rainforest.
NASA Astrophysics Data System (ADS)
Wallace, M. G.; Iuzzolina, H.
2005-12-01
A probabilistic analysis was conducted to estimate ranges for the numbers of waste packages that could be damaged in a potential future igneous event through a repository at Yucca Mountain. The analysis includes disruption from an intrusive igneous event and from an extrusive volcanic event. This analysis supports the evaluation of the potential consequences of future igneous activity as part of the total system performance assessment for the license application for the Yucca Mountain Project (YMP). The first scenario, igneous intrusion, investigated the case where one or more igneous dikes intersect the repository. A swarm of dikes was characterized by distributions of length, width, azimuth, and number of dikes, and the spacings between them. Through the use, in part, of a Latin hypercube simulator and a modified video game engine, mathematical relationships were built between those parameters and the number of waste packages hit. Corresponding cumulative distribution function (CDF) curves for the number of waste packages hit under several different scenarios were calculated. Variations in dike thickness ranges, as well as in repository magma bulkhead positions, were examined through sensitivity studies. It was assumed that all waste packages in an emplacement drift would be impacted if that drift was intersected by a dike. Over 10,000 individual simulations were performed. Based on these calculations, out of a total of over 11,000 planned waste packages distributed over an area of approximately 5.5 km2, the median number of waste packages impacted was roughly 1/10 of the total. Individual cases ranged from 0 waste packages to the entire inventory being impacted. The igneous intrusion analysis involved an explicit characterization of dike-drift intersections, built upon various distributions that reflect the uncertainties associated with the inputs.
The second igneous scenario, volcanic eruption (eruptive conduits), considered the effects of conduits formed in association with a volcanic eruption through the repository. Mathematical relations were built between the resulting conduit areas and the fraction of the repository area occupied by waste packages. This relation was used in conjunction with a joint distribution incorporating variability in eruptive conduit diameters and in the number of eruptive conduits that could intersect the repository.
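The sampling approach described for the intrusion scenario can be illustrated with a toy model: Latin hypercube draws of dike-swarm descriptors, a count of intersected drifts per realization, and an empirical CDF of packages hit. The layout and parameter ranges below are invented for illustration; the real analysis used detailed dike-drift geometry.

```python
import numpy as np

rng = np.random.default_rng(42)
n_real = 10_000                       # realizations, matching the >10,000 runs
n_drifts, pkgs_per_drift = 100, 110   # hypothetical layout (~11,000 packages)

def lhs_1d(n, rng):
    """One-dimensional Latin hypercube sample on [0, 1): one draw per stratum."""
    return (rng.permutation(n) + rng.random(n)) / n

# Invented dike-swarm descriptors, each sampled by LHS.
n_dikes = np.floor(1 + 9 * lhs_1d(n_real, rng)).astype(int)              # 1..9
drifts_hit_per_dike = 1 + np.floor(4 * lhs_1d(n_real, rng)).astype(int)  # 1..4

# As in the analysis, every package in an intersected drift is counted as hit.
drifts_hit = np.minimum(n_dikes * drifts_hit_per_dike, n_drifts)
pkgs_hit = drifts_hit * pkgs_per_drift

# Empirical CDF of packages hit across realizations.
xs = np.sort(pkgs_hit)
cdf = np.arange(1, n_real + 1) / n_real
print(f"median packages hit: {np.median(pkgs_hit):.0f} of {n_drifts * pkgs_per_drift}")
```

Sensitivity studies then amount to re-running this loop with shifted input distributions (e.g., dike thickness ranges or bulkhead positions) and comparing the resulting CDF curves.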
Lunar and Planetary Science XXXV: Mars Geophysics
NASA Technical Reports Server (NTRS)
2004-01-01
The titles in this section include: 1) Distribution of Large Visible and Buried Impact Basins on Mars: Comparison with Free-Air Gravity, Crustal Thickness, and Magnetization Models; 2) The Early Thermal and Magnetic State of Terra Cimmeria, Southern Highlands of Mars; 3) Compatible Vector Components of the Magnetic Field of the Martian Crust; 4) Vertical Extrapolation of Mars Magnetic Potentials; 5) Rock Magnetic Fields Shield the Surface of Mars from Harmful Radiation; 6) Loading-induced Stresses near the Martian Hemispheric Dichotomy Boundary; 7) Growth of the Hemispheric Dichotomy and the Cessation of Plate Tectonics on Mars; 8) A Look at the Interior of Mars; 9) Uncertainties on Mars Interior Parameters Deduced from Orientation Parameters Using Different Radio-Links: Analytical Simulations; 10) Refinement of Phobos Ephemeris Using Mars Orbiter Laser Altimetry Radiometry.
Analysis of Fractional Flow for Transient Two-Phase Flow in Fractal Porous Medium
NASA Astrophysics Data System (ADS)
Lu, Ting; Duan, Yonggang; Fang, Quantang; Dai, Xiaolu; Wu, Jinsui
2016-03-01
Prediction of fractional flow in fractal porous media is important for reservoir engineering and chemical engineering, as well as for hydrology. A physical conceptual fractional flow model of transient two-phase flow is developed for fractal porous media, based on the fractal characteristics of the pore-size distribution and on the approximation that the porous medium consists of a bundle of tortuous capillaries. The analytical expression for the fractional flow of the wetting phase is presented; the proposed expression is a function of structural parameters (such as tortuosity fractal dimension, pore fractal dimension, and maximum and minimum capillary diameters) and fluid properties (such as contact angle, viscosity and interfacial tension) of the fractal porous medium. The sensitive parameters that influence fractional flow and its derivative are formulated, and their impacts on fractional flow are discussed.
Oxide vapor distribution from a high-frequency sweep e-beam system
NASA Astrophysics Data System (ADS)
Chow, R.; Tassano, P. L.; Tsujimoto, N.
1995-03-01
Oxide vapor distributions have been determined as a function of operating parameters of a high-frequency sweep e-beam source combined with a programmable sweep controller. We will show which parameters are significant, the parameters that yield the broadest oxide deposition distribution, and the procedure used to arrive at these conclusions. A design-of-experiments strategy was used with five operating parameters: evaporation rate, sweep speed, sweep pattern (pre-programmed), phase speed (azimuthal rotation of the pattern), and profile (dwell time as a function of radial position). A design was chosen that would show which of the parameters and parameter pairs have a statistically significant effect on the vapor distribution. Witness flats were placed symmetrically across a 25-inch-diameter platen. The stationary platen was centered 24 inches above the e-gun crucible. An oxide material was evaporated under 27 different conditions. Thickness measurements were made with a stylus profilometer. This information will enable users of high-frequency e-gun systems to optimally locate the source in a vacuum system and to understand which parameters have a major effect on the vapor distribution.
NASA Astrophysics Data System (ADS)
Ševecek, Pavel; Broz, Miroslav; Nesvorny, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.
2016-10-01
Detailed models of asteroid collisions can yield important constraints on the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities and impact angles. The rheology used in the physical model includes neither friction nor crushing; this allows for a direct comparison to the results of Durda et al. (2007). Resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, and they may be even more different for pre-shattered targets. We derive new parametric relations describing fragment distributions, suitable for Monte Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to SPH simulations.
Including operational data in QMRA model: development and impact of model inputs.
Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle
2009-03-01
A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
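The mixed occurrence model described above (log-normal above the detection limit, uniform below it) and the daily-to-annual risk roll-up can be sketched as a small Monte Carlo loop. All numeric values below (detection limit, detect fraction, log-normal parameters, log-removal credit, dose-response slope, consumption volume) are illustrative assumptions, not the study's fitted parameters.

```python
import math
import random

rng = random.Random(7)
DL = 0.1                          # detection limit, oocysts/L (illustrative)
P_DETECT = 0.4                    # fraction of samples above DL (assumed)
MU, SIGMA = math.log(1.0), 0.8    # log-normal parameters for detects (assumed)
LOG_REMOVAL = 3.0                 # treatment credit in log10 units (assumed)
R_DOSE = 0.004                    # exponential dose-response slope (illustrative)
VOLUME_L = 1.0                    # daily consumption volume, L (assumed)

def raw_concentration():
    """Mixed model: log-normal above DL, uniform below DL."""
    if rng.random() < P_DETECT:
        # truncate at DL so 'detects' stay above the detection limit
        return max(DL, rng.lognormvariate(MU, SIGMA))
    return rng.uniform(0.0, DL)

def daily_risk():
    dose = raw_concentration() * 10 ** (-LOG_REMOVAL) * VOLUME_L
    return 1.0 - math.exp(-R_DOSE * dose)   # exponential dose-response

def annual_risk():
    # annual risk = 1 - product of daily survival probabilities
    survival = 1.0
    for _ in range(365):
        survival *= 1.0 - daily_risk()
    return 1.0 - survival

risks = [annual_risk() for _ in range(1000)]
mean_annual_risk = sum(risks) / len(risks)
```

Swapping in different treatment-performance distributions for the fixed `LOG_REMOVAL` value is exactly the sensitivity the abstract reports as influential on the estimated risks.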
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
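State-vector augmentation treats the unknown network parameter as an extra state that the Kalman filter updates across measurement snapshots. A toy one-parameter sketch, assuming a single hypothetical line resistance recovered from noisy voltage-drop measurements at known (varying) currents; all values are invented for illustration, and a real distribution-system estimator would carry the full augmented state and network model.

```python
import random

rng = random.Random(1)
R_TRUE = 0.05       # hypothetical line resistance (ohms) to recover
MEAS_STD = 0.002    # measurement noise std (assumed)

# With only the parameter augmented, the filter reduces to a 1-state KF:
#   z_k = H_k * r + v,  H_k = line current for snapshot k (assumed known).
r_est, p_var = 0.1, 1.0            # prior mean and variance for r
for _ in range(200):               # 200 combined measurement snapshots
    current = rng.uniform(5.0, 15.0)
    z = current * R_TRUE + rng.gauss(0.0, MEAS_STD)
    h = current
    # Kalman update (no process noise: parameter assumed constant)
    s = h * p_var * h + MEAS_STD ** 2    # innovation variance
    k = p_var * h / s                    # Kalman gain
    r_est += k * (z - h * r_est)
    p_var *= 1.0 - k * h
```

Combining snapshots taken at different operating points is what makes the parameter observable here: a single snapshot would confound `r_est` with measurement noise, mirroring the redundancy argument in the abstract.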
Physics of debris clouds from hypervelocity impacts
NASA Technical Reports Server (NTRS)
Zee, Ralph
1993-01-01
The protection scheme developed for long duration space platforms relies primarily upon placing thin metal plates or 'bumpers' around flight critical components. The effectiveness of this system is highly dependent upon its ability to break up and redistribute the momentum of any particle which might otherwise strike the outer surface of the spacecraft. Therefore it is of critical importance to design the bumpers such that maximum dispersion of momentum is achieved. This report is devoted to an in-depth study into the design and development of a laboratory instrument which would permit the in-situ monitoring of the momentum distribution as the impact event occurs. A series of four designs were developed, constructed and tested, culminating in the working instrument which is currently in use. Each design was individually tested using the Space Environmental Effects Facility (SEEF) at the Marshall Space Flight Center in Huntsville, Alabama. Along with the development of the device, an experimental procedure was developed to assist in the investigation of various bumper materials and designs at the SEEF. Preliminary results were used to compute data which otherwise were not experimentally obtainable. These results were shown to be in relative agreement with previously obtained values derived through other methods. The results of this investigation indicated that momentum distribution could in fact be measured in-situ as the impact event occurred, thus giving a more accurate determination of the effects of experimental parameters on the momentum spread. Data produced by the instrument indicated a Gaussian-type momentum distribution. A second apparatus was developed; placed ahead of the shield in the particle's line of travel, it used a plate to collect impact debris scattered backward. This plate had a central passage hole to allow the particle to travel through it and impact the proposed shield material.
Applying the law of conservation of angular momentum, a backward momentum vector was determined from the angular velocity of the plate. The forward-scattered and backward-scattered momentum values were then analyzed to judge the distribution of debris. Loss of momentum was attributed to the inaccuracies of the means of measurement. Assumptions of symmetrical debris in the forward- and backward-scattered directions also contributed to this loss.
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking the impact probabilities into account introduced significant variation into the results; consequently, the impact probability distribution deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and thus represents a man-made component that is largely disconnected from natural processes.
It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
SU-G-IeP4-13: PET Image Noise Variability and Its Consequences for Quantifying Tumor Hypoxia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kueng, R; Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario; Manser, P
Purpose: The values in a PET image which represent activity concentrations of a radioactive tracer are influenced by a large number of parameters including patient conditions as well as image acquisition and reconstruction. This work investigates noise characteristics in PET images for various image acquisition and image reconstruction parameters. Methods: Different phantoms with homogeneous activity distributions were scanned using several acquisition parameters and reconstructed with numerous sets of reconstruction parameters. Images from six PET scanners from different vendors were analyzed and compared with respect to quantitative noise characteristics. Local noise metrics, which give rise to a threshold value defining the metric of hypoxic fraction, as well as global noise measures in terms of noise power spectra (NPS) were computed. In addition to variability due to different reconstruction parameters, spatial variability of activity distribution and its noise metrics were investigated. Patient data from clinical trials were mapped onto phantom scans to explore the impact of the scanner’s intrinsic noise variability on quantitative clinical analysis. Results: Local noise metrics showed substantial variability up to an order of magnitude for different reconstruction parameters. Investigations of corresponding NPS revealed reconstruction dependent structural noise characteristics. For the acquisition parameters, noise metrics were guided by Poisson statistics. Large spatial non-uniformity of the noise was observed in both axial and radial direction of a PET image. In addition, activity concentrations in PET images of homogeneous phantom scans showed intriguing spatial fluctuations for most scanners. The clinical metric of the hypoxic fraction was shown to be considerably influenced by the PET scanner’s spatial noise characteristics.
Conclusion: We showed that a hypoxic fraction metric based on noise characteristics requires careful consideration of the various dependencies in order to justify its quantitative validity. This work may result in recommendations for harmonizing QA of PET imaging for multi-institutional clinical trials.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM) which have made process-based simulation with a high spatiotemporal resolution applicable on a large spatial scale. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherently arising from model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, however, parametric uncertainty is often ignored, mainly due to practical limitations of methodologies for specifying modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated with DA. Lagged particle filtering is utilized to consider the response times and non-Gaussian characteristics of internal hydrologic processes.
Hindcasting experiments are implemented to evaluate the impact of the proposed DA method on streamflow predictions in multiple European river basins with different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach remains stable with limited ensemble members and is viable for practical use.
Bou Kheir, Rania; Greve, Mogens H; Bøcher, Peder K; Greve, Mette B; Larsen, René; McCloy, Keith
2010-05-01
Soil organic carbon (SOC) is one of the most important carbon stocks globally and has large potential to affect global climate. Distribution patterns of SOC in Denmark constitute a nation-wide baseline for studies on soil carbon changes (with respect to the Kyoto protocol). This paper predicts and maps the geographic distribution of SOC across Denmark using remote sensing (RS), geographic information systems (GISs) and decision-tree modeling (unpruned and pruned classification trees). Seventeen parameters, i.e. parent material, soil type, landscape type, elevation, slope gradient, slope aspect, mean curvature, plan curvature, profile curvature, flow accumulation, specific catchment area, tangent slope, tangent curvature, steady-state wetness index, Normalized Difference Vegetation Index (NDVI), Normalized Difference Wetness Index (NDWI) and Soil Color Index (SCI), were generated to statistically explain SOC field measurements in the area of interest (Denmark). A large number of tree-based classification models (588) were developed using (i) all of the parameters, (ii) all Digital Elevation Model (DEM) parameters only, (iii) the primary DEM parameters only, (iv) the remote sensing (RS) indices only, (v) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in built pruned trees. The three best classification-tree models, those with the lowest misclassification error (ME) and the fewest nodes (N), are: (i) the tree (T1) combining all of the parameters (ME=29.5%; N=54); (ii) the tree (T2) based on the parent material, soil type and landscape type (ME=31.5%; N=14); and (iii) the tree (T3) constructed using parent material, soil type, landscape type, elevation, tangent slope and SCI (ME=30%; N=39).
The SOC maps produced at 1:50,000 cartographic scale using these trees agree closely, with coincidence values of 90.5% (Map T1/Map T2), 95% (Map T1/Map T3) and 91% (Map T2/Map T3). The overall accuracies of these maps, when compared with field observations, were estimated to be 69.54% (Map T1), 68.87% (Map T2) and 69.41% (Map T3). The proposed tree models are relatively simple and may also be applied to other areas. Copyright 2010 Elsevier Ltd. All rights reserved.
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and on tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using a stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and eight approaches were then used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, when missing data induced by sacrificing the animal were not considered, parameter estimates were biased and led to false inferences in terms of compound potency: the threshold concentration for tumor eradication when ignoring censoring was 581 ng/ml, whereas the true value was 240 ng/ml.
NASA Astrophysics Data System (ADS)
Sarkar, Amit; Kundu, Prabir Kumar
2017-12-01
This article examines the efficacy of the Cattaneo-Christov heat flux on the heat and mass transport of Maxwell nanofluid flow over a stretched sheet with variable thickness. Homogeneous/heterogeneous reactions in the fluid are additionally considered. The Cattaneo-Christov heat flux model is introduced in the energy equation. Appropriate similarity transformations are taken up to form a system of nonlinear ODEs. The impact of the related parameters on the nanoparticle concentration and temperature is inspected through tables and diagrams. It is notable that the temperature distribution increases for lower values of the thermal relaxation parameter. The rate of mass transfer is enhanced with an increase in the heterogeneous reaction parameter, but the reverse tendency ensues for the homogeneous reaction parameter. On the other hand, the rate of heat transfer is enhanced for the Cattaneo-Christov model compared with the classical Fourier model for some flow factors. The current study thus extends the traditional Fourier's law to the nanoscale.
The evolution of Zipf's law indicative of city development
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2016-02-01
Zipf's law of city-size distributions can be expressed by three types of mathematical models: one-parameter form, two-parameter form, and three-parameter form. The one-parameter model and one of the two-parameter models are familiar to urban scientists. However, the three-parameter model and the other two-parameter model have not attracted attention. This paper is devoted to exploring the conditions and scopes of application of these Zipf models. By mathematical reasoning and empirical analysis, new discoveries are made as follows. First, if the size distribution of cities in a geographical region cannot be described with the one- or two-parameter model, it may be characterized by the three-parameter model with a scaling factor and a scale-translational factor. Second, all these Zipf models can be unified by hierarchical scaling laws based on cascade structure. Third, the patterns of city-size distributions seem to evolve from the three-parameter mode to the two-parameter mode, and then to the one-parameter mode. Four-year census data of Chinese cities are employed to verify the three-parameter Zipf's law and the corresponding hierarchical structure of rank-size distributions. This study is helpful for understanding the scientific laws of social systems and the nature of urban development.
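The three forms can be written as a single rank-size function S(r) = C/(r + α)^β, where setting α = 0 and β = 1 recovers the classic one-parameter law and α = 0 alone gives a two-parameter form. A small sketch, with a hypothetical largest-city population and illustrative parameter values:

```python
def zipf_size(rank, c=1.0, alpha=0.0, beta=1.0):
    """Three-parameter rank-size rule: S(r) = c / (rank + alpha)**beta.
    alpha (scale-translational factor) = 0 and beta (scaling factor) = 1
    recover the classic one-parameter Zipf law; alpha = 0 alone gives the
    familiar two-parameter form."""
    return c / (rank + alpha) ** beta

largest = 10_000_000  # hypothetical largest-city population
# classic one-parameter law: rank-r city has 1/r the largest city's size
one_param = [largest * zipf_size(r) for r in range(1, 11)]
# three-parameter form with illustrative alpha and beta values
three_param = [largest * zipf_size(r, alpha=0.5, beta=1.2) for r in range(1, 11)]
```

In the one-parameter case the second-ranked city is exactly half the largest; the extra parameters let the fitted curve bend at the top of the hierarchy (α) and steepen or flatten overall (β), which is what makes the three-parameter form usable where the simpler forms fail.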
The impact of non-Gaussianity upon cosmological forecasts
NASA Astrophysics Data System (ADS)
Repp, A.; Szapudi, I.; Carron, J.; Wolk, M.
2015-12-01
The primary science driver for 3D galaxy surveys is their potential to constrain cosmological parameters. Forecasts of these surveys' effectiveness typically assume Gaussian statistics for the underlying matter density, despite the fact that the actual distribution is decidedly non-Gaussian. To quantify the effect of this assumption, we employ an analytic expression for the power spectrum covariance matrix to calculate the Fisher information for Baryon Acoustic Oscillation (BAO)-type model surveys. We find that for typical number densities, at k_max = 0.5 h Mpc^-1, Gaussian assumptions significantly overestimate the information on all parameters considered, in some cases by up to an order of magnitude. However, after marginalizing over a six-parameter set, the form of the covariance matrix (dictated by N-body simulations) causes the majority of the effect to shift to the `amplitude-like' parameters, leaving the others virtually unaffected. We find that Gaussian assumptions at such wavenumbers can underestimate the dark energy parameter errors by well over 50 per cent, producing dark energy figures of merit almost three times too large. Thus, for 3D galaxy surveys probing the non-linear regime, proper consideration of non-Gaussian effects is essential.
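Under the Gaussian assumption, the Fisher information for band powers is F_ij = Σ_k (∂P_k/∂θ_i)(∂P_k/∂θ_j)/σ_k², and marginalized parameter errors follow from the inverse matrix. A toy two-parameter sketch with an invented power-law spectrum, assumed 5% Gaussian band-power errors, and numerical derivatives; the paper's non-Gaussian covariance correction is not reproduced here.

```python
import math

def model_power(k, amp, slope):
    """Toy band-power model P(k) = amp * k**slope (illustrative only)."""
    return amp * k ** slope

ks = [0.05 * i for i in range(1, 11)]                  # wavenumber bins
theta = (1000.0, -1.5)                                 # fiducial (amplitude, slope)
sigmas = [0.05 * model_power(k, *theta) for k in ks]   # 5% Gaussian errors

def dmodel(k, i, eps=1e-5):
    # central finite difference of the model w.r.t. parameter i
    up, dn = list(theta), list(theta)
    up[i] += eps
    dn[i] -= eps
    return (model_power(k, *up) - model_power(k, *dn)) / (2 * eps)

# Gaussian Fisher matrix: F_ij = sum_k dP/dtheta_i * dP/dtheta_j / sigma_k^2
F = [[sum(dmodel(k, i) * dmodel(k, j) / s ** 2 for k, s in zip(ks, sigmas))
      for j in range(2)] for i in range(2)]
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
# marginalized 1-sigma errors are sqrt of the inverse-matrix diagonal
err_amp = math.sqrt(F[1][1] / det)
err_slope = math.sqrt(F[0][0] / det)
```

Replacing the diagonal 1/σ_k² weighting with a full non-Gaussian covariance matrix (as the paper does) shrinks the information and inflates `err_amp` in particular, which is the "amplitude-like parameters absorb the effect" result quoted above.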
Influence of Powder Injection Parameters in High-Pressure Cold Spray
NASA Astrophysics Data System (ADS)
Ozdemir, Ozan C.; Widener, Christian A.
2017-10-01
High-pressure cold spray systems are becoming widely accepted for use in the structural repair of surface defects of expensive machinery parts used in industrial and military equipment. The deposition quality of cold spray repairs is typically validated using coupon testing and through destructive analysis of mock-ups or first articles for a defined set of parameters. In order to provide a reliable repair, it is important to not only maintain the same processing parameters, but also to have optimum fixed parameters, such as the particle injection location. This study is intended to provide insight into the sensitivity of the way that the powder is injected upstream of supersonic nozzles in high-pressure cold spray systems and the effects of variations in injection parameters on the nature of the powder particle kinetics. Experimentally validated three-dimensional computational fluid dynamics (3D CFD) models are implemented to study the particle impact conditions for varying powder feeder tube size, powder feeder tube axial misalignment, and radial powder feeder injection location on the particle velocity and the deposition shape of aluminum alloy 6061. Outputs of the models are statistically analyzed to explore the shape of the spray plume distribution and resulting coating buildup.
Climate change impact assessment on hydrology of a small watershed using semi-distributed model
NASA Astrophysics Data System (ADS)
Pandey, Brij Kishor; Gosain, A. K.; Paul, George; Khare, Deepak
2017-07-01
This study is an attempt to quantify the impact of climate change on the hydrology of the Armur watershed in the Godavari river basin, India. A GIS-based semi-distributed hydrological model, the soil and water assessment tool (SWAT), has been employed to estimate the water balance components on the basis of unique combinations of slope, soil and land cover classes for the baseline (1961-1990) and future climate scenarios (2071-2100). Sensitivity analysis of the model has been performed to identify the most critical parameters of the watershed. Average monthly calibration (1987-1994) and validation (1995-2000) have been performed using the observed discharge data. Coefficient of determination (R²), Nash-Sutcliffe efficiency (E_NS) and root mean square error (RMSE) were used to evaluate the model performance. The calibrated SWAT setup has been used to evaluate the changes in water balance components of the future projection over the study area. HadRM3 regional climate data have been used as input to the hydrological model for the climate change impact studies. The results show that, relative to the baseline scenario, the GHG scenario yields increases in average annual temperature (+3.25 °C), average annual rainfall (+28%), evapotranspiration (+28%) and water yield (+49%).
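The three performance criteria used above for calibration and validation are straightforward to compute from paired observed and simulated discharge series. A stdlib sketch with invented discharge values (not the Armur watershed data):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def r_squared(obs, sim):
    """Square of the Pearson correlation between observed and simulated."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

# illustrative monthly discharges (m³/s); values are invented
observed  = [12.0, 30.0, 55.0, 80.0, 60.0, 25.0]
simulated = [10.0, 33.0, 50.0, 85.0, 58.0, 28.0]
```

Note the different emphases: RMSE is in discharge units, while NSE and R² are dimensionless; NSE penalizes bias that a high R² can hide, which is why calibration studies typically report all three.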
Hartin, Samantha N; Hossain, Waheeda A; Manzardo, Ann M; Brown, Shaquanna; Fite, Paula J; Bortolato, Marco; Butler, Merlin G
2018-02-12
This is the first study of growth hormone receptor (GHR) genotypes in healthy young adults in the United States attending a Midwestern university and of their impact on selected growth parameters. The aims were to describe the frequency of GHR genotypes in this sample and to analyze the relationship between GHR genotypes and selected growth parameters. Saliva was collected from 459 healthy young adults (237 females, 222 males; age range = 18-25 y) and DNA isolated for genotyping of GHR alleles (fl/fl, fl/d3, or d3/d3). Selected growth parameters were collected, and GHR genotype data were examined for previously reported associations (e.g., height, weight or bone mass density) and novel findings (e.g., % body water and index finger length). We found 219 participants (48%) homozygous fl/fl, 203 (44%) heterozygous fl/d3, and 37 (8%) homozygous d3/d3. The distribution of GHR genotypes in our participants was consistent with previous reports of non-US populations. Several anthropometric measures differed by sex. The distribution of GHR genotypes did not significantly differ by sex, weight, or other anthropometric measures. However, the fl/d3 genotype was more common among African-Americans. Our study of growth and anthropometric parameters in relationship to GHR genotypes found no association with height, weight, right index finger length, BMI, bone mass density, % body fat or % body water in healthy young adults. We did identify sex differences, with increased body fat and decreased bone density, body water and index finger length in females. Copyright © 2018 Elsevier Ltd. All rights reserved.
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
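The core idea, treating a pdf as the UH shape and fitting its parameters by least squares, can be sketched with the two-parameter gamma distribution. A crude grid search stands in for the paper's nonlinear optimizer and genetic algorithm, and the "observed" ordinates are synthetic, generated from known parameters so the fit can be checked.

```python
import math

def gamma_pdf(t, shape, scale):
    """Two-parameter gamma pdf used as a unit-hydrograph shape."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1) * math.exp(-t / scale)
            / (math.gamma(shape) * scale ** shape))

# synthetic "observed" UH ordinates generated from known parameters
TRUE_SHAPE, TRUE_SCALE = 3.0, 2.0
times = [0.5 * i for i in range(1, 41)]          # hours
observed = [gamma_pdf(t, TRUE_SHAPE, TRUE_SCALE) for t in times]

def sse(shape, scale):
    """Sum of squared errors between candidate pdf and observed ordinates."""
    return sum((gamma_pdf(t, shape, scale) - o) ** 2
               for t, o in zip(times, observed))

# crude grid search standing in for nonlinear least squares / GA
best = min(((sse(k / 10, th / 10), k / 10, th / 10)
            for k in range(15, 61) for th in range(10, 41)),
           key=lambda x: x[0])
_, fit_shape, fit_scale = best
```

Because a pdf integrates to one by construction, the fitted UH automatically conserves unit runoff volume, which is the main appeal of the pdf-based approach over free-form UH ordinates.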
Statistical analysis of general aviation VG-VGH data
NASA Technical Reports Server (NTRS)
Clay, L. E.; Dickey, R. L.; Moran, M. S.; Payauys, K. W.; Severyn, T. P.
1974-01-01
To represent the loads spectra of general aviation aircraft operating in the Continental United States, VG and VGH data collected since 1963 in eight operational categories were processed and analyzed. The adequacy of the data sample and the current operational categories, and the parameter distributions required for valid data extrapolation, were studied, along with envelopes of equal probability of exceeding the normal load factor (n_z) versus airspeed for gust and maneuver loads, and the probability of exceeding current design maneuver, gust, and landing impact n_z limits. The significant findings are included.
NASA Technical Reports Server (NTRS)
Gemin, Paul; Kupiszewski, Tom; Radun, Arthur; Pan, Yan; Lai, Rixin; Zhang, Di; Wang, Ruxi; Wu, Xinhui; Jiang, Yan; Galioto, Steve;
2015-01-01
The purpose of this effort was to advance the selection, characterization, and modeling of a propulsion electric grid for a Turboelectric Distributed Propulsion (TeDP) system for transport aircraft. The TeDP aircraft would constitute a miniature electric grid with 50 MW or more of total power, two or more generators, redundant transmission lines, and multiple electric motors driving propulsion fans. The study proposed power system architectures, investigated electromechanical and solid state circuit breakers, estimated the impact of the system voltage on system mass, and recommended a DC bus voltage range. The study assumed an all-cryogenic power system. Detailed assumptions within the study include hybrid circuit breakers, a two-cryogen system, and supercritical cryogens. A dynamic model was developed to investigate control and parameter selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.
2015-08-15
Purpose: In clinical practice, a specific air kerma strength (S{sub K}) value is used in treatment planning system (TPS) permanent brachytherapy implant calculations with {sup 125}I and {sup 103}Pd sources; in fact, commercial TPS provide only one S{sub K} input value for all implanted sources, and the certified shipment average is typically used. However, the value of S{sub K} is dispersed: this dispersion is due not only to the manufacturing process and variation between different source batches but also to the classification of sources into different classes according to their S{sub K} values. The purpose of this work is to examine the impact of S{sub K} dispersion on typical implant parameters that are used to evaluate the dose volume histogram (DVH) for both the planning target volume (PTV) and organs at risk (OARs). Methods: The authors have developed a new algorithm to compute dose distributions with different S{sub K} values for each source. Three different prostate volumes (20, 30, and 40 cm{sup 3}) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for {sup 125}I sources; for the palladium, typical implants were simulated. To assess the many different possible S{sub K} values for each source belonging to a class, the authors assigned an S{sub K} value to each source in a randomized process 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions, compared with the dose distributions obtained using a uniform S{sub K} value for all the implanted sources. The authors analyzed several dose coverage (V{sub 100} and D{sub 90}) and overdosage parameters for the prostate and PTV, and also the limiting and overdosage parameters for the OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution for the entire set of computed dosimetries.
PTV and prostate V{sub 100} and D{sub 90} variations ranged between 0.2% and 1.78% for both sources. Variations for the overdosage parameters V{sub 150} and V{sub 200} compared to dose coverage parameters were observed and, in general, variations were larger for parameters related to {sup 125}I sources than {sup 103}Pd sources. For OAR dosimetry, variations with respect to the reference D{sub 0.1cm{sup 3}} were observed for rectum values, ranging from 2% to 3%, compared with urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for the prostate and PTV was practically unaffected by S{sub K} dispersion, as was the maximum dose deposited in the urethra, due to the implant technique geometry. However, the authors observed larger variations for the PTV V{sub 150}, rectum V{sub 100}, and rectum D{sub 0.1cm{sup 3}} values. The variations in rectum parameters were caused by the specific location of sources with S{sub K} values that differed from the average in the vicinity. Finally, on comparing the two sources, variations were larger for {sup 125}I than for {sup 103}Pd. This is because for {sup 103}Pd a greater number of sources were used to obtain a valid dose distribution than for {sup 125}I, so the variation in each source's S{sub K} value averages out statistically.
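The averaging effect in the conclusion can be illustrated with a toy Monte Carlo sketch (seed counts, S{sub K} values, dispersion range, and dose weights below are all hypothetical, not the authors' data or algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 60 implanted seeds, certified class average S_K = 0.5 U,
# per-source values dispersed uniformly within +/-5% of the average (assumption).
n_sources, n_trials = 60, 1000
sk_mean = 0.5
# Unit dose contribution of each source to one point of interest (hypothetical)
dose_per_unit_sk = rng.uniform(0.8, 1.2, n_sources)

# Reference calculation: one uniform S_K for all sources (what a commercial TPS does)
d_ref = np.sum(sk_mean * dose_per_unit_sk)

# Randomized assignment of an individual S_K to each source, repeated 1000 times
sk_samples = rng.uniform(0.95 * sk_mean, 1.05 * sk_mean, (n_trials, n_sources))
d_trials = sk_samples @ dose_per_unit_sk

rel_spread = d_trials.std() / d_ref * 100
print(f"relative spread of the point dose: {rel_spread:.2f}%")
```

Because the point dose sums many independent per-source deviations, its relative spread scales roughly as one over the square root of the number of sources, which is why the {sup 103}Pd implants (more seeds) showed smaller variations than {sup 125}I.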
Luensmann, Doerte; Yu, Mili; Yang, Jeffery; Srinivasan, Sruthi; Jones, Lyndon
2015-07-01
To evaluate the impact of cosmetics on silicone hydrogel (SiHy) contact lens shape, lens power, and optical performance. In this in vitro experiment, 7 SiHy materials were coated with 9 marketed brands of cosmetics, including hand creams (HCs) (3), eye makeup removers (MRs) (3), and mascaras (3). Diameter, sagittal depth, and base curve were determined using the Chiltern (Optimec Limited), whereas lens power and optical performance were assessed using the Contest Plus (Rotlex). Six replicates were used for each lens and cosmetic combination. Measurements were repeated after a cleaning cycle using a one-step hydrogen peroxide solution. Makeup removers had the greatest impact on diameter, sagittal depth, and base curve, resulting in changes of up to 0.5, 0.15, and 0.77 mm, respectively. The HCs and mascaras had little impact on these parameters; however, differences were observed between lens types. Optical performance was reduced with all mascaras, and a decrease of greater than 2 units on a 0 to 10 scale (10=uniform power distribution) was seen for 5 lens types exposed to waterproof mascara (P<0.01). Most HCs and MRs had minimal impact on image quality. Lens power did not change with any of the cosmetics (± 0.25 diopter; P>0.05). Lens cleaning resulted in some recovery of the lens parameters, and efficiency varied between cosmetics. Some eye MRs and waterproof mascaras changed the shape and optical performance of some SiHy lenses. Further research is needed to understand the clinical implications for SiHy lens wearers using cosmetics.
1981-12-01
Sequential Testing of Hypotheses Concerning the Reliability of a System Modeled by a Two-Parameter Weibull Distribution. Thesis AFIT/GOR/MA/81D-8, Philippe A. Lussier, 2nd Lt, USAF. Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology.
Comparison of z-known GRBs with the Main Groups of Bright BATSE Events
NASA Technical Reports Server (NTRS)
Mitrofanov, Igor G.; Sanin, Anton B.; Anfimov, Dmitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles A.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The small reference sample of six BATSE gamma-ray bursts with known redshifts from optical afterglows is compared with a comparison group of the 218 brightest BATSE bursts. These two groups are shown to be consistent both with respect to the distributions of the spectral peak parameter in the observer's frame and with respect to the distributions of the frame-independent cosmological invariant parameter (CIP). Using the known redshifts z of the reference sample, the rest-frame distribution of spectral parameters is built. The de-redshifted distribution of the spectral parameters of the reference sample is compared with the distribution of these parameters for the comparison group after de-redshifting by the factor 1/(1+z), with z a free parameter. Requiring consistency between these two distributions produces a collective estimate of the best-fitting redshifts for the comparison group, z = 1.8-3.6. These values can be considered as the average cosmological redshift of the sources of the brightest BATSE bursts. The most probable value of the peak energy of the spectrum in the rest frame is 920 keV, close to the rest mass of an electron-positron pair.
NASA Astrophysics Data System (ADS)
Ham, S. H.; Kato, S.; Rose, F. G.
2016-12-01
In the retrieval of ice clouds from radar and lidar measurements, mass-dimension (m-D) and area-dimension (A-D) relationships are often used to describe nonspherical ice particle shapes. This study analytically investigates how the assumption of m-D and A-D relationships affects the retrieval of ice effective radius. We use gamma and lognormal particle size distributions and integrate the optical parameters over the size distribution. The effective radius is expressed as a function of the radar reflectivity factor, the visible extinction coefficient, and the parameters describing the m-D and A-D relationships. The analytic expressions are used for converting the effective radius retrieved from one set of m-D and A-D relationships into that with another set, including plates, solid columns, bullets, and mixtures of different habits. The conversion method can be used for radiative transfer simulations that are consistent with cloud retrieval algorithms. In addition, when cloud effective radii retrieved with different m-D and A-D relationships are to be merged, the conversion method can efficiently remove undesired biases caused by the m-D and A-D assumptions. Furthermore, the sensitivity of the effective radius to the m-D and A-D relationships can be quantified by taking the first derivative of the effective radius with respect to the parameters expressing those relationships.
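The ingredients can be sketched numerically: power-law m-D and A-D relationships integrated over a gamma size distribution, combined into one common definition of ice effective radius. All coefficient values below are illustrative assumptions, not the paper's:

```python
import numpy as np

# m-D and A-D power laws: m = a*D**b, A = g*D**s (hypothetical coefficients),
# integrated over a gamma PSD n(D) ~ D**mu * exp(-lam*D).
a, b = 0.0257, 2.0    # mass-dimension relationship (hypothetical, SI units)
g, s = 0.55, 1.9      # area-dimension relationship (hypothetical)
mu, lam = 2.0, 2.0e3  # gamma PSD parameters (hypothetical)
rho_ice = 917.0       # bulk ice density, kg m^-3

D = np.linspace(1e-6, 5e-3, 20000)   # maximum particle dimension (m)
dD = D[1] - D[0]
nD = D**mu * np.exp(-lam * D)        # unnormalized gamma PSD (N0 cancels below)

mass = np.sum(a * D**b * nD) * dD    # ice water content per unit N0
area = np.sum(g * D**s * nD) * dD    # total projected area per unit N0

# One common definition of ice effective radius: r_e = 3*IWC / (4*rho_ice*A)
r_e = 3.0 * mass / (4.0 * rho_ice * area)
print(f"effective radius ~ {r_e * 1e6:.1f} um")
```

Changing the m-D or A-D coefficients shifts r_e directly, which is the sensitivity the paper quantifies analytically via first derivatives.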
[Modelling the impact of vaccination on the epidemiology of varicella zoster virus].
Bonmarin, I; Santa-Olalla, P; Lévy-Bruhl, D
2008-10-01
The forthcoming availability of a combined MMR-varicella vaccine has re-stimulated the debate around universal infant vaccination against varicella. In France, the incidence of varicella is estimated at about 700,000 cases per year, with approximately 3500 hospitalisations and 15-25 deaths, the latter mainly occurring in those over 15 years of age. Vaccination would certainly decrease the overall incidence of the disease, but concerns remain that vaccination could lead to a shift in the average age at infection, followed by an increase in the incidence of severe cases and congenital varicella. In order to provide support for decision-making, a dynamic mathematical model of varicella virus transmission was used to predict the effect of different vaccination strategies and coverages on the epidemiology of varicella and zoster. A deterministic realistic age-structured model was adapted to the French situation. Epidemiological parameters were estimated from the literature or surveillance data. Various vaccine coverages and vaccination strategies were investigated. A sensitivity analysis of varicella incidence predictions was performed to test the impact of changes in the vaccine parameters and age-specific mixing patterns. The model confirms that the overall incidence and morbidity of varicella would likely be reduced by mass vaccination of 12-month-old children. Whatever the coverage and the vaccine strategy, vaccination will cause a shift in age distribution with, for vaccination coverage up to at least 80% in the base-case analysis, an increased morbidity among adults and pregnant women. However, the total number of deaths and hospitalisations from varicella is predicted to remain below that expected without vaccination. The model is very sensitive to the matrix of contacts used and to the parameters describing vaccine effectiveness. Zoster incidence will increase over a number of decades, followed by a decline to below prevaccination levels.
Mass varicella vaccination in France will result in an overall reduction of varicella incidence but will cause a shift in age distribution, with an increase in adult cases. Due to the uncertainties in key parameter values, the exact magnitude of this shift is difficult to assess.
Novel methodology for pharmaceutical expenditure forecast.
Vataire, Anne-Lise; Cetinsoy, Laurent; Aballéa, Samuel; Rémuzat, Cécile; Urbinati, Duccio; Kornfeld, Åsa; Mzoughi, Olfa; Toumi, Mondher
2014-01-01
The way new drugs are valued across countries is undergoing a disruption that makes the historical data used for forecasting pharmaceutical expenditure poorly reliable. Forecasting methods have rarely addressed uncertainty. The objective of this project was to propose a methodology to perform pharmaceutical expenditure forecasting that integrates expected policy changes and uncertainty (developed for the European Commission as the 'EU Pharmaceutical expenditure forecast'; see http://ec.europa.eu/health/healthcare/key_documents/index_en.htm). 1) Identification of all pharmaceuticals going off-patent and new branded medicinal products over a 5-year forecasting period in seven European Union (EU) Member States. 2) Development of a model to estimate direct and indirect impacts (based on health policies and clinical experts) on savings from generics and biosimilars. Inputs were originator sales value, patent expiry date, time to launch after marketing authorization, price discount, penetration rate, time to peak sales, and impact on brand price. 3) Development of a model for new drugs, which estimated sales progression in a competitive environment. Expected clinical benefits as well as commercial potential were assessed for each product by clinical experts. Inputs were development phase, marketing authorization dates, orphan condition, market size, and competitors. 4) Separate analysis of the budget impact of products going off-patent and of new drugs according to several perspectives, distribution chains, and outcomes. 5) Addressing uncertainty surrounding estimations via deterministic and probabilistic sensitivity analyses.
This methodology has proven to be effective by 1) identifying the main parameters impacting the variations in pharmaceutical expenditure forecasting across countries: generics discounts and penetration, brand price after patent loss, reimbursement rate, the penetration of biosimilars and discount price, distribution chains, and the time to reach peak sales for new drugs; 2) estimating the statistical distribution of the budget impact; and 3) testing different pricing and reimbursement policy decisions on health expenditures. This methodology was independent of historical data and appeared to be highly flexible and adapted to test robustness and provide probabilistic analysis to support policy decision making.
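The probabilistic sensitivity analysis in step 5 can be sketched as a toy Monte Carlo over uncertain inputs for a single off-patent product (all figures, distributions, and the savings formula below are hypothetical placeholders, not the project's model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy savings model for one product going off-patent:
# savings = originator sales * generic penetration * generic price discount.
n = 10_000
sales = 100.0                                # originator sales, EUR million/year
discount = rng.triangular(0.2, 0.4, 0.6, n)  # generic price discount (uncertain)
penetration = rng.beta(4, 2, n)              # generic penetration at peak (uncertain)

savings = sales * penetration * discount
lo, hi = np.percentile(savings, [2.5, 97.5])
print(f"mean savings {savings.mean():.1f} M, 95% interval [{lo:.1f}, {hi:.1f}] M")
```

Propagating input distributions rather than point estimates is what yields the statistical distribution of the budget impact mentioned above.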
Deficiencies in the reporting of VD and t1/2 in the FDA approved chemotherapy drug inserts
D’Souza, Malcolm J.; Alabed, Ghada J.
2011-01-01
Since its release in 2006, the US Food and Drug Administration (FDA) final improved format for prescription drug labeling has revamped the comprehensiveness of drug inserts, including those of chemotherapy drugs. The chemotherapy drug "packets", retrieved via the FDA website and other accredited drug information reporting agencies such as the Physician Drug Reference (PDR), are practically the only available unbiased summary of information. One objective is to impartially evaluate the reporting of useful pharmacokinetic parameters, in particular volume of distribution (VD) and elimination half-life (t1/2), in randomly selected FDA-approved chemotherapy drug inserts. The web-accessible portable document format (PDF) files for 30 randomly selected chemotherapy drugs were subjected to a detailed search, and the two parameters of interest were tabulated. Knowledge of these two parameters is essential in directing patient care as well as for clinical research, and since the completeness of the core FDA recommendations was found deficient, a detailed explanation of the impact of such deficiencies is provided. PMID:21643531
Gariano, John; Neifeld, Mark; Djordjevic, Ivan
2017-01-20
Here, we present engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used, Eve performs the intercept-resend and photon-number-splitting attacks, prior knowledge of Eve's location is available, and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate at constant brightness. Additionally, we optimize the brightness of the source for each parameter to study the improvement in the secure key rate.
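For orientation, the textbook asymptotic BB84 secure fraction can be sketched in a few lines; this deliberately ignores the weak-coherent-source, decoy-state, and photon-number-splitting corrections that the trade study actually models, so it is only an idealized upper-bound illustration:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def secure_fraction(qber):
    """Idealized single-photon BB84 bound: 1 - 2*h2(QBER), floored at zero."""
    return max(0.0, 1.0 - 2.0 * h2(qber))

# Secure fraction collapses to zero near the ~11% QBER threshold
for qber in (0.01, 0.05, 0.11):
    print(f"QBER {qber:.2f}: secure fraction {secure_fraction(qber):.3f}")
```

In the full system model, the secure key rate is this kind of fraction multiplied by the sifted-key rate, which is where channel loss, receiver technology, and Eve's assumed capabilities enter.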
Optimization and application of blasting parameters based on the "pushing-wall" mechanism
NASA Astrophysics Data System (ADS)
Ren, Feng-yu; Sow, Thierno Amadou Mouctar; He, Rong-xing; Liu, Xin-rui
2012-10-01
A sublevel caving method with large structural parameters is used in the Beiminghe iron mine. The ores are generally below medium hardness and easy to drill and blast. However, the boulder yield, the "pushing-wall" accident rate, and the brow damage rate were not effectively controlled in practical blasting. A similar-material model test shows that the charge concentration of the bottom blastholes in the fan pattern is too high, and that the pushing wall is the fundamental reason for the poor blasting effect. One of the main ways to adjust the explosive distribution is to increase the uncharged length of the blastholes. Therefore, field tests on increasing the uncharged length of the blastholes were carried out in the No. 12 stope of the -95 sublevel and the No. 6 stope of the Beiminghe iron mine. This paper takes the test results of the No. 12 stope as an example to analyze the impact of the charge structure on the blasting effect and to design appropriate blasting parameters for stopes similar to No. 12.
Casassa, L Federico; Larsen, Richard C; Beaver, Christopher W; Mireles, Maria S; Keller, Markus; Riley, William R; Smithyman, Russell; Harbertson, James F
2013-07-03
The impact of extended maceration (EM) was studied in Cabernet Sauvignon grapes sourced from a vineyard subjected to four regulated deficit irrigation (RDI) treatments: (I) 100% replenishment of crop evapotranspiration (100% ETc), (II) 70% ETc, (III) 25% ETc until véraison, followed by 100% ETc until harvest, and (IV) 25% ETc. Each vineyard replicate was made into wine, with two replicates designated as controls (10-day skin contact) and two as extended maceration (EM, 30-day skin contact). The mean degree of polymerization (mDP), size distribution, concentration, and composition of wine proanthocyanidins (PAs) and monomeric flavan-3-ols of 90 fractions were characterized by preparative and analytical HPLC techniques. Maceration length had a larger effect than irrigation on most chemical parameters. The RDI treatment had no effect on the extraction patterns of anthocyanins and PAs or on the origin of the PAs extracted into the wines. Conversely, EM led to anthocyanin losses and increased PA extraction during maceration, with ~73% of the PAs being seed-derived. Accordingly, the concentration of monomeric flavan-3-ols, oligomeric PAs (2 ≤ mDP < 5), and polymeric PAs (mDP ≥ 5) was higher in EM wines. The size distribution of the wines' PAs revealed two major peaks as a function of concentration, at mDP 2 (22-27% of total PA mass) and at mDP 6-7 (12-17% of total PA mass), and was found to follow a non-normal Rayleigh-type distribution.
10 CFR 51.77 - Distribution of draft environmental impact statement.
Code of Federal Regulations, 2013 CFR
2013-01-01
Regulations Implementing Section 102(2), Draft Environmental Impact Statements, Production and Utilization Facilities. § 51.77 Distribution of draft environmental impact statement. (a) In addition to the distribution...
Dynamics of a distributed drill string system: Characteristic parameters and stability maps
NASA Astrophysics Data System (ADS)
Aarsnes, Ulf Jakob F.; van de Wouw, Nathan
2018-03-01
This paper presents a dynamic (stability) analysis of distributed drill-string systems. A minimal set of parameters characterizing the linearized, axial-torsional dynamics of a distributed drill string coupled through the bit-rock interaction is derived. This corresponds to five parameters for a simple drill string and eight parameters for a two-section drill string (e.g., corresponding to the pipe and collar sections of a drilling system). These dynamic characterizations are used to plot the inverse gain margin of the system, parametrized in the non-dimensional parameters, effectively creating a stability map covering the full range of realistic physical parameters. This analysis reveals a complex spectrum of dynamics not evident in stability analyses with lumped models, indicating the importance of analysis using distributed models. Moreover, it reveals trends in stability properties as a function of key system parameters, useful in the context of system and control design aimed at the mitigation of vibrations.
Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis
2016-07-01
A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. The toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10, was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit can calculate the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculation of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g., coronal, sagittal, and transverse). The different elements of the toolkit are presented and the important steps in the execution of its calculations are illustrated. The toolkit is applied to brain, head & neck, and prostate cancer patients who received primary and boost phases, in order to demonstrate its capability in calculating BED distributions, as well as to measure the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated.
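A voxel-wise multi-phase BED calculation of the kind the toolkit performs can be sketched with the standard linear-quadratic BED formula; the toolkit itself is MATLAB-based, so Python is used here purely for illustration, and the dose values and alpha/beta ratio are hypothetical:

```python
import numpy as np

# Standard linear-quadratic BED per phase: BED = D * (1 + d / (alpha/beta)),
# where D is the phase total dose in a voxel and d = D / n is the dose per
# fraction. For a multi-phase treatment the per-phase BEDs are summed voxel-wise.
alpha_beta = 3.0  # Gy, late-responding tissue (assumption)

def bed(total_dose, n_fractions, ab):
    d = total_dose / n_fractions  # per-voxel dose per fraction
    return total_dose * (1.0 + d / ab)

# Hypothetical 3D dose matrices (Gy) from a TPS for each phase
primary = np.full((4, 4, 4), 50.0)  # 50 Gy in 25 fractions
boost = np.full((4, 4, 4), 16.0)    # 16 Gy in 8 fractions

bed_total = bed(primary, 25, alpha_beta) + bed(boost, 8, alpha_beta)
print(f"BED in one voxel: {bed_total[0, 0, 0]:.1f} Gy")  # ~110 Gy here
```

The "approximate" formulations discussed in the paper differ in how the per-phase doses are combined before applying the formula, which is exactly what this voxel-wise sum makes explicit.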
Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties
NASA Astrophysics Data System (ADS)
Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.
2016-06-01
The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has previously been evaluated for Gaussian-distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite-volume effects are negligible (0.1 ≲ k [h Mpc-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ˜10 per cent at k > 0.3 h Mpc-1. Over the entire range, these reduce to about ˜5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics, we find that more than 5000 independent realizations are necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties to the subpercent level. We also show that restricting the analysis to large scales, k ≲ 0.2 h Mpc-1, results in a considerable loss of constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
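For the Gaussian baseline against which the paper measures non-Gaussian departures, the sample covariance and the standard debiasing of its inverse can be sketched as follows (toy dimensions stand in for power spectrum band powers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for band-power measurements: n_sims independent realizations
# of an n_bins-dimensional Gaussian data vector with a known covariance.
n_bins, n_sims = 20, 200
true_cov = np.diag(np.linspace(1.0, 2.0, n_bins))
data = rng.multivariate_normal(np.zeros(n_bins), true_cov, size=n_sims)

sample_cov = np.cov(data, rowvar=False)  # unbiased sample covariance estimate

# For Gaussian data, inverting the sample covariance gives a biased precision
# matrix; the standard (Hartlap) correction rescales it by (N_s - p - 2)/(N_s - 1).
hartlap = (n_sims - n_bins - 2) / (n_sims - 1)
precision = hartlap * np.linalg.inv(sample_cov)
print(precision.shape)
```

The Hartlap factor is exact only for Gaussian-distributed data; the point of the paper is that non-linear clustering makes the data non-Gaussian, so corrections of this kind, and the required number of realizations, must be re-examined numerically.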
2010-01-01
Background Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT), based on the observed scores, and models from Item Response Theory (IRT). However, whether IRT or CTT is the more appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT or CTT-based analysis. For IRT, different scenarios were investigated according to whether item or person parameters were assumed to be known (with item-parameter precision ranging from good to poor) or unknown and therefore estimated. The powers obtained with IRT or CTT were compared and the parameters having the strongest impact on them were identified. Results When person parameters were assumed to be unknown and item parameters to be either known or not, the powers achieved using IRT or CTT were similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula. PMID:20338031
NASA Astrophysics Data System (ADS)
Doummar, Joanna; Kassem, Assaad
2017-04-01
In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE-She 2016) based on high-resolution input data and detailed catchment characterization. Point-source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone, and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and to outline parameters that are not accounted for in standard methods but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration, and recharge signals from atmosphere to unsaturated zone to saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity, and soil properties, as well as point-source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters makes it possible to quantify their impact on recharge and, indirectly, on vulnerability. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each of them, based on a quantitative approach.
Long-term statistics of extreme tsunami height at Crescent City
NASA Astrophysics Data System (ADS)
Dong, Sheng; Zhai, Jinjin; Tao, Shanshan
2017-06-01
Historically, Crescent City is one of the communities along the west coast of the United States most vulnerable to tsunamis, largely owing to its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor, resulting in catastrophic damage, property loss, and loss of life. Determining the return values of tsunami height from relatively short-term observation data is of great significance for assessing tsunami hazards and improving engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probability distributions, namely the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution, and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all of the above distributions. Both the Kolmogorov-Smirnov test and the root mean square error method are utilized for goodness-of-fit testing, and the better-fitting distribution is selected. Assuming that the number of tsunami occurrences in each year follows the Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and the point and interval estimates of return tsunami heights are then calculated for structural design. The results show that the Poisson compound extreme value distribution fits tsunami heights very well and is suitable for determining return tsunami heights for coastal disaster prevention.
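The fit-and-test step can be sketched with SciPy; synthetic heights stand in for the Crescent City record, and only four of the six candidate distributions are shown (the maximum entropy distribution has no off-the-shelf SciPy implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic annual-maximum tsunami heights (m), standing in for the 1938-2015
# Crescent City observations; drawn here from a GEV purely for illustration.
heights = stats.genextreme.rvs(c=-0.1, loc=1.0, scale=0.5, size=78,
                               random_state=rng)

candidates = {
    "gumbel": stats.gumbel_r,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "GEV": stats.genextreme,
}

# Fit each candidate by maximum likelihood and score it with the K-S statistic
results = {}
for name, dist in candidates.items():
    params = dist.fit(heights)
    results[name] = stats.kstest(heights, dist.cdf, args=params).statistic
    print(f"{name:>10s}: KS D = {results[name]:.3f}")
```

The distribution with the smallest K-S statistic (cross-checked against root mean square error, as in the study) would then feed the Poisson compound extreme value model for return-height estimation.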
Log-Normal Distribution of Cosmic Voids in Simulations and Mocks
NASA Astrophysics Data System (ADS)
Russell, E.; Pycke, J.-R.
2017-01-01
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate for describing the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and environmental effects? In an attempt to answer these questions, we find that all void size distributions of these data samples are well described by the three-parameter log-normal distribution, whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution appear to be strongly affected by environment, particularly by existing substructures. We therefore derive two quantitative relations, given by linear equations, between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition, we find that the percentage of voids with nonzero central density in the data sets is of critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
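Fitting a three-parameter log-normal can be sketched with SciPy, whose `lognorm` parameterization exposes exactly the three quantities involved: shape (sigma), location (offset), and scale. The void radii below are synthetic, not catalog values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic void effective radii (Mpc/h): a shifted log-normal sample standing
# in for one of the simulation/mock catalogs (values are illustrative only).
radii = stats.lognorm.rvs(s=0.5, loc=2.0, scale=8.0, size=2000,
                          random_state=rng)

# Three-parameter log-normal fit: shape sigma, location (offset), scale exp(mu)
sigma, loc, scale = stats.lognorm.fit(radii)

# Skewness of the fitted distribution, one of the shape diagnostics that the
# study relates linearly to the maximum tree depth of substructure
skew = float(stats.lognorm.stats(sigma, loc=loc, scale=scale, moments="s"))
print(f"sigma = {sigma:.2f}, offset = {loc:.2f}, skewness = {skew:.2f}")
```

Repeating such fits per environment (DM, haloes, galaxies) and regressing the fitted skewness and variance against the maximum tree depth is the kind of analysis behind the two linear relations reported above.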
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
The aero-engine is a complex mechatronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimates more accurate, thereby greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
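A two-component mixed Weibull model of the kind described can be sketched by direct likelihood maximization. The failure times below are synthetic, the mixing weight `w` merely stands in for a weight coefficient, and this is not the authors' correlation-coefficient optimization method:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
# synthetic failure times from two hypothetical failure modes
t = np.concatenate([
    stats.weibull_min.rvs(1.2, scale=100.0, size=300, random_state=rng),
    stats.weibull_min.rvs(3.0, scale=400.0, size=300, random_state=rng),
])

def neg_log_lik(p):
    w, k1, s1, k2, s2 = p
    pdf = (w * stats.weibull_min.pdf(t, k1, scale=s1)
           + (1.0 - w) * stats.weibull_min.pdf(t, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))            # guard against log(0)

res = optimize.minimize(
    neg_log_lik,
    x0=[0.5, 1.0, 150.0, 2.5, 350.0],
    bounds=[(0.01, 0.99), (0.1, 10.0), (1.0, 1e4), (0.1, 10.0), (1.0, 1e4)],
)
w, k1, s1, k2, s2 = res.x
print(f"weight={w:.2f}  shapes=({k1:.2f}, {k2:.2f})  scales=({s1:.0f}, {s2:.0f})")
```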
Optimization of a Centrifugal Impeller Design Through CFD Analysis
NASA Technical Reports Server (NTRS)
Chen, W. C.; Eastland, A. H.; Chan, D. C.; Garcia, Roberto
1993-01-01
This paper discusses the procedure, approach and Rocketdyne CFD results for the optimization of the NASA consortium impeller design. Two different approaches have been investigated. The first is a tandem blade arrangement, in which the main impeller blade is split into two separate rows, with the second blade row offset circumferentially with respect to the first. The second is to control the high losses related to secondary flows within the impeller passage. Many key parameters have been identified, and each consortium team member involved will optimize a specific parameter using 3-D CFD analysis. Rocketdyne has provided a series of CFD grids for the consortium team members. SECA will complete the tandem blade study, SRA will study the effect of a change in splitter blade solidity, NASA LeRC will evaluate the effect of the circumferential position of the splitter blade, VPI will work on the hub-to-shroud blade loading distribution, NASA Ames will examine the impacts of impeller discharge leakage flow, and Rocketdyne will continue to work on the meridional contour and the leading-to-trailing-edge blade work distribution. This paper also presents Rocketdyne results from the tandem blade study and from the blade loading distribution study. It is the ultimate goal of this consortium team to integrate the available CFD analyses to design an advanced-technology impeller suitable for use in the NASA Space Transportation Main Engine (STME) fuel turbopump.
Local Burn-Up Effects in the NBSR Fuel Element
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown N. R.; Hanson A.; Diamond, D.
2013-01-31
This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including the neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when local burn-up effects are incorporated and has lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element, and the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel element average. A uniform burn-up distribution throughout a half-element also causes a bias in fuel element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.
NASA Astrophysics Data System (ADS)
Krauze, A.; Virbulis, J.; Kravtsov, A.
2018-05-01
An electron gun based on a beam glow discharge can be applied as a heater in silicon crystal growth systems in which silicon rods are pulled from the melt. Impacts of high-energy charged particles cause wear of the gun and create an additional source of silicon contamination. A steady-state model of electron beam formation has been developed to model the electron gun and optimize its design. A description of the model and first simulation results are presented. It has been shown that the model can simulate the dimensions of the particle impact areas on the cathode and anode, but further improvements of the model are needed to correctly simulate the electron trajectory distribution in the beam and the dependence of the beam current on the applied gas pressure.
Finite Element Simulation of Shot Peening: Prediction of Residual Stresses and Surface Roughness
NASA Astrophysics Data System (ADS)
Gariépy, Alexandre; Perron, Claude; Bocher, Philippe; Lévesque, Martin
Shot peening is a surface treatment that consists of bombarding a ductile surface with numerous small and hard particles. Each impact creates localized plastic strains that permanently stretch the surface. Since the underlying material constrains this stretching, compressive residual stresses are generated near the surface. This process is commonly used in the automotive and aerospace industries to improve fatigue life. Finite element analyses can be used to predict the residual stress profiles and surface roughness created by shot peening. This study further investigates the parameters and capabilities of a random impact model by evaluating the representative volume element and the calculated stress distribution. Using an isotropic-kinematic hardening constitutive law to describe the behaviour of AA2024-T351 aluminium alloy, promising results were achieved in terms of residual stresses.
Impact of observational incompleteness on the structural properties of protein interaction networks
NASA Astrophysics Data System (ADS)
Kuhnt, Mathias; Glauche, Ingmar; Greiner, Martin
2007-01-01
The observed structure of protein interaction networks is corrupted by many false positive/negative links. This observational incompleteness is abstracted as random link removal and a specific, experimentally motivated (spoke) link rearrangement. Their impact on the structural properties of gene-duplication-and-mutation network models is studied. For the degree distribution a curve collapse is found, showing no sensitive dependence on the link removal/rearrangement strengths and disallowing a quantitative extraction of model parameters. The spoke link rearrangement process moves other structural observables, like degree correlations, cluster coefficient and motif frequencies, closer to their counterparts extracted from the yeast data. This underlines the importance to take a precise modeling of the observational incompleteness into account when network structure models are to be quantitatively compared to data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van den Heuvel, F; Hackett, S; Fiorini, F
Purpose: Currently, planning systems allow robustness calculations to be performed, but a generalized assessment methodology is not yet available. We introduce and evaluate a methodology to quantify the robustness of a plan on an individual patient basis. Methods: We introduce the notion of characterizing a treatment instance (i.e., one single fraction delivery) by describing the dose distribution within an organ as an alpha-stable distribution. The parameters of the distribution (shape (α), scale (γ), position (δ), and symmetry (β)) vary continuously (in a mathematical sense) as the distributions change with the different positions. The rate of change of the parameters provides a measure of the robustness of the treatment. The methodology is tested in a planning study of 25 patients with known residual errors at each fraction. Each patient was planned using Eclipse with an IBA proton beam model. The residual error space for every patient was sampled 30 times, yielding 31 treatment plans per patient and dose distributions in 5 organs. The rate of change of the parameters as a function of Euclidean distance from the original plan was analyzed. Results: More than 1,000 dose distributions were analyzed. For 4 of the 25 patients, the rate of change of the scale parameter (γ) was considerably higher than the lowest change rate, indicating a lack of robustness. The sign of the rate of change of the shape parameter (α) also seemed indicative, but the experiment lacked the power to prove significance. Conclusion: There are indications that this robustness measure is a valuable tool for a more individualized approach to the determination of margins. In a further study we will also evaluate this robustness measure for photon treatments, and evaluate the impact of breath-hold techniques and of a Monte Carlo-based dose deposition calculation. A principal component analysis is also planned.
Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.
Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael
2018-04-01
A distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with shape parameter ν. If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model, with particular focus on deterministic identifiability of the model in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as the major markers of performance. A local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC count). The ν estimate was 1.46 (CV 16.1%), compared with ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall improvement was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and the elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, at the cost of a significantly longer running time.
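The stated equivalence between the gamma delay density with integer shape ν and a chain of ν transit compartments (an Erlang density) can be checked numerically; the rate constant used below is an arbitrary placeholder, not a fitted value:

```python
import math

import numpy as np
from scipy import stats

nu, ktr = 3, 0.5                      # placeholder shape and transfer rate (1/h)
tau = np.linspace(0.01, 20.0, 200)    # maturation delay times

# gamma maturation-delay density with integer shape nu
gamma_pdf = stats.gamma.pdf(tau, a=nu, scale=1.0 / ktr)
# density of a chain of nu transit compartments (Erlang density)
erlang_pdf = ktr**nu * tau**(nu - 1) * np.exp(-ktr * tau) / math.factorial(nu - 1)

print(f"max |gamma - Erlang| = {np.abs(gamma_pdf - erlang_pdf).max():.1e}")
```

The distributed delay model's extra flexibility comes from allowing non-integer ν (such as the fitted 1.46), for which no transit-compartment chain exists.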
NASA Astrophysics Data System (ADS)
Kallolimath, Sharan Chandrashekar
For the past several years, many researchers have been developing and improving board-level drop test procedures and specifications to quantify the solder joint reliability of consumer electronics products. Predictive finite element analysis (FEA) using simulation software has become a widely accepted verification method that can reduce the time and cost of the real-time test process. However, due to testing and metrological limitations, it is difficult to simulate the exact drop condition and capture critical measurement data, and tedious to calibrate the system to improve test methods. Moreover, ever-changing factors such as board flexural rigidity, damping, drop height, and drop orientation result in a non-uniform stress/strain distribution throughout the test board. One of the most challenging tasks is therefore to achieve a uniform stress and strain distribution throughout the test board and to identify the critical failure factors. The major contributions of this work lie in four aspects of drop testing in electronics, as follows. First, an analytical FEA model was developed to study the board natural frequencies and the responses of the system, taking into account dynamic stiffness, the damping behavior of the material and the effect of the impact loading condition. An approach to finding the key parameters that affect stress and strain distributions under the predominant mode responses was proposed and verified against theoretical solutions. The Input-G method was adopted to study board response behavior, and a cut-boundary interpolation method was used to analyze local-model solder joint stresses within a global/local FEA model developed in ANSYS. Second, the no-ring phenomenon during the drop test was identified theoretically when the test board was modeled both as a discrete system and as a continuous system. Numerical analysis was then conducted by the FEA method for the detailed geometry of attached chips with solder joints.
A no-ring test condition was proposed and verified for the currently widely used JEDEC standard. The significance of impact loading parameters such as pulse magnitude, pulse duration and pulse shape, and of board dynamic parameters such as linear hysteretic damping and dynamic stiffness, was discussed. Third, Kirchhoff's plate theory, via the principle of minimum potential energy, was adopted to develop the FEA formulation accounting for material hysteretic damping for the currently used JEDEC board test and the proposed no-ring response test condition. Fourth, a hexagonally symmetric board model was proposed to achieve a uniform stress and strain distribution throughout the test board and to identify the critical failure factors. Dynamic stresses and strains of the hexagonal board model were then compared with the standard JEDEC board for both the standard and the proposed no-ring test conditions. Overall, this line of research demonstrates that advanced FEA techniques can provide useful insights into the optimal design of drop tests in microelectronics.
NASA Astrophysics Data System (ADS)
Dafflon, B.; Tran, A. P.; Wainwright, H. M.; Hubbard, S. S.; Peterson, J.; Ulrich, C.; Williams, K. H.
2015-12-01
Quantifying water and heat fluxes in the subsurface is crucial for managing water resources and for understanding the terrestrial ecosystem, where hydrological properties drive a variety of biogeochemical processes across a large range of spatial and temporal scales. Here, we present the development of an advanced monitoring strategy in which hydro-thermal-geophysical datasets are continuously acquired and incorporated into a novel inverse modeling framework to estimate the hydraulic and thermal parameters that control heat and water dynamics in the subsurface and, in turn, influence surface processes such as evapotranspiration and vegetation growth. The measured and estimated soil properties are also used to investigate the co-interaction between subsurface and surface dynamics using above-ground aerial imaging. The value of this approach is demonstrated at two different sites: one in polygonal Arctic tundra, where water and heat dynamics have a strong impact on freeze-thaw processes, vegetation and biogeochemical processes, and one in a floodplain along the Colorado River, where hydrological fluxes between compartments of the system (surface, vadose zone and groundwater) drive biogeochemical transformations. Results show that the developed strategy, using geophysical, point-scale and aerial measurements, successfully delineates the spatial distribution of hydrostratigraphic units with distinct physicochemical properties; monitors and quantifies water and heat distributions at high resolution, together with their linkage to vegetation, geomorphology and weather conditions; and estimates hydraulic and thermal parameters for enhanced predictions of water and heat fluxes as well as evapotranspiration. Further, in the Colorado floodplain, the results document the potential role of periodic infiltration pulses as a key hot moment controlling soil hydrological and biogeochemical functioning.
In the Arctic, results show the strong linkage between soil water content, thermal parameters, thaw layer thickness and vegetation distribution. Overall, these efforts demonstrate the value of coupling various datasets at high spatial and temporal resolution to improve predictive understanding of subsurface and surface dynamics.
Systems and methods for optimal power flow on a radial network
Low, Steven H.; Peng, Qiuyu
2018-04-24
Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller is configured to: send node operating parameters to the nodes in the set; receive operating parameters from the nodes in the set; calculate a plurality of updated node operating parameters using an iterative process driven by the operating parameters of the node and of the set, where the iterative process involves evaluation of a closed-form solution; and adjust the node operating parameters.
Fayed, Mohamed H; Abdel-Rahman, Sayed I; Alanazi, Fars K; Ahmed, Mahrous O; Tawfeek, Hesham M; Al-Shedfat, Ramadan I
2017-01-01
Application of quality by design (QbD) to the high shear granulation process is critical and requires recognizing the correlation between granulation process parameters and the properties of the intermediate (granules) and the corresponding final product (tablets). The present work examined the influence of water amount (X1) and wet massing time (X2) as independent process variables on the critical quality attributes of granules and corresponding tablets using a design of experiments (DoE) technique. A two-factor, three-level (3²) full factorial design was performed; each variable was investigated at three levels to characterize its strength and interactions. The dried granules were analyzed for size distribution, density and flow pattern. Additionally, the produced tablets were investigated for weight uniformity, crushing strength, friability, percent capping, disintegration time and drug dissolution. A statistically significant impact (p < 0.05) of water amount was identified for granule growth, percent fines, distribution width and flow behavior. Granule density and compressibility were found to be significantly influenced (p < 0.05) by both operating conditions. Water amount also had a significant effect (p < 0.05) on tablet weight uniformity, friability and percent capping. Moreover, tablet disintegration time and drug dissolution appeared to be significantly influenced (p < 0.05) by both process variables. The relationships between the process parameters and the critical quality attributes of the granules and the final tablet product were thus identified and correlated. Ultimately, judicious selection of process parameters in the high shear granulation process allows products of the desired quality to be obtained.
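The 3² full factorial layout can be generated mechanically. The coded levels -1/0/+1 below are the conventional choice, an assumption on our part; the abstract states only that each factor had three levels:

```python
from itertools import product

# coded levels -1/0/+1 are assumed; X1 = water amount, X2 = wet massing time
levels = (-1, 0, 1)
design = list(product(levels, repeat=2))   # all 3^2 = 9 factor combinations

for run, (x1, x2) in enumerate(design, start=1):
    print(f"run {run}: X1={x1:+d}  X2={x2:+d}")
```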
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
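As a toy illustration of the reported effect of input variability (not the study's groundwater model or its DREAM-ZS sampler), consider a one-parameter linear model y = k·pump with Gaussian noise: propagating pumping-rate uncertainty inflates the effective noise and widens the flat-prior posterior of k.

```python
import numpy as np

# placeholder linear model: n observations of y = k * pump + noise
pump, sd_obs, n, k = 10.0, 0.5, 50, 2.0

post_sd = {}
for sd_pump in (0.0, 0.3):
    # pumping-rate variability propagated through the model inflates the noise
    sd_tot = np.hypot(sd_obs, k * sd_pump)
    # flat-prior Gaussian posterior sd of k given n observations
    post_sd[sd_pump] = sd_tot / (pump * np.sqrt(n))

print(post_sd)
```

The posterior standard deviation of k is larger when the pumping-rate variability is acknowledged, mirroring the paper's finding that input variability increases parametric and predictive uncertainty.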
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing initial values for the numerical procedures used to estimate the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Monterial, Mateusz; Marleau, Peter; Paff, Marc; ...
2017-01-20
Here, we present the results from the first measurements of the Time-Correlated Pulse-Height (TCPH) distributions from a 4.5 kg sphere of α-phase weapons-grade plutonium metal in five configurations: bare, reflected by 1.27 cm and 2.54 cm of tungsten, and by 2.54 cm and 7.62 cm of polyethylene. A new method for characterizing source multiplication and shielding configuration is also demonstrated. The method relies on solving for the underlying fission chain timing distribution that drives the spreading of the measured TCPH distribution. We found that a gamma distribution fits the fission chain timing distribution well and that the fit parameters correlate with both multiplication (rate parameter) and shielding material type (shape parameter). The source-to-detector distance was another free parameter that we were able to optimize, and it proved to be the best constrained parameter. MCNPX-PoliMi simulations were used to complement the measurements and help illustrate trends in these parameters and their relation to multiplication and the amount and type of material coupled to the subcritical assembly.
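Fitting a gamma distribution to a timing sample and reading off the shape and rate parameters can be sketched as follows; the timing data below are synthetic, not the TCPH-derived fission chain timing measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# synthetic fission-chain timing sample (ns); shape and rate are placeholders
times_ns = stats.gamma.rvs(a=2.0, scale=1.0 / 0.05, size=10000, random_state=rng)

# fix the location at zero and fit shape and scale by maximum likelihood
shape, loc, scale = stats.gamma.fit(times_ns, floc=0.0)
rate = 1.0 / scale
print(f"shape={shape:.2f}  rate={rate:.4f} per ns")
```

In the abstract's scheme, the fitted rate parameter would be compared against multiplication and the shape parameter against shielding material type.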
NASA Astrophysics Data System (ADS)
Monterial, Mateusz; Marleau, Peter; Paff, Marc; Clarke, Shaun; Pozzi, Sara
2017-04-01
We present the results from the first measurements of the Time-Correlated Pulse-Height (TCPH) distributions from a 4.5 kg sphere of α-phase weapons-grade plutonium metal in five configurations: bare, reflected by 1.27 cm and 2.54 cm of tungsten, and by 2.54 cm and 7.62 cm of polyethylene. A new method for characterizing source multiplication and shielding configuration is also demonstrated. The method relies on solving for the underlying fission chain timing distribution that drives the spreading of the measured TCPH distribution. We found that a gamma distribution fits the fission chain timing distribution well and that the fit parameters correlate with both multiplication (rate parameter) and shielding material type (shape parameter). The source-to-detector distance was another free parameter that we were able to optimize, and it proved to be the best constrained parameter. MCNPX-PoliMi simulations were used to complement the measurements and help illustrate trends in these parameters and their relation to multiplication and the amount and type of material coupled to the subcritical assembly.
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices
NASA Technical Reports Server (NTRS)
Scheick, Leif; Edmonds, Larry
2004-01-01
Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.
Maximum entropy principle for transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilich, F.; Da Silva, R.
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j, given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is used to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived based on conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as the allowance of supersonic flights inside the U.S. and user surcharges at noise-impacted airports) on air travel are performed.
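The standard constrained formulation mentioned above is conventionally solved by iterative balancing of row and column factors. The sketch below uses the classic doubly-constrained entropy-maximizing form with synthetic data, not the paper's dependence-coefficient formulation:

```python
import numpy as np

# synthetic trip productions, attractions, and travel-cost matrix
origins = np.array([100.0, 200.0, 150.0])   # trips produced at each origin
dests = np.array([180.0, 120.0, 150.0])     # trips attracted to each destination
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])
f = np.exp(-0.5 * cost)                     # exponential deterrence function

# iterative balancing: T_ij = A_i * O_i * B_j * D_j * f_ij
A = np.ones(3)
B = np.ones(3)
for _ in range(100):
    A = 1.0 / (f @ (B * dests))
    B = 1.0 / (f.T @ (A * origins))

T = (A * origins)[:, None] * (B * dests)[None, :] * f
print(np.round(T, 1))
```

At convergence the trip matrix reproduces the origin and destination totals exactly, which is what the dependence formulation replaces with regression-derived coefficients.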
Fingerprinting the type of line edge roughness
NASA Astrophysics Data System (ADS)
Fernández Herrero, A.; Pflüger, M.; Scholze, F.; Soltwisch, V.
2017-06-01
Lamellar gratings are widely used diffractive optical elements and are prototypes of structural elements in integrated electronic circuits. EUV scatterometry is very sensitive to structure details and imperfections, which makes it suitable for the characterization of nanostructured surfaces. Compared with X-ray methods, EUV scattering allows for steeper angles of incidence, which is highly preferable for the investigation of small measurement fields on semiconductor wafers. For control of the lithographic manufacturing process, rapid in-line characterization of nanostructures is indispensable. Numerous studies on the determination of regular geometry parameters of lamellar gratings from optical and extreme ultraviolet (EUV) scattering have also investigated the impact of roughness on the respective results. The challenge is to appropriately model the influence of structure roughness on the diffraction intensities used for the reconstruction of the surface profile. The impact of roughness has already been studied analytically, but only for gratings with periodic pseudo-roughness, because of practical restrictions on the computational domain. Our investigation aims at a better understanding of the scattering caused by line roughness. We designed a set of nine lamellar Si gratings to be studied by EUV scatterometry: one reference grating with no artificial roughness added, four gratings with a periodic roughness distribution (two with prevailing line edge roughness (LER) and two with line width roughness (LWR)), and four gratings with a stochastic roughness distribution (two with LER and two with LWR). We show that the type of line roughness has a strong impact on the angular distribution of the diffuse scatter. Our experimental results are not well described by the present modelling approach based on small, periodically repeated domains.
Charged-particle pseudorapidity distributions in Au+Au collisions at √s_NN = 62.4 GeV
NASA Astrophysics Data System (ADS)
Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Busza, W.; Carroll, A.; Chai, Z.; Decowski, M. P.; García, E.; Gburek, T.; George, N.; Gulbrandsen, K.; Halliwell, C.; Hamblen, J.; Hauer, M.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Khan, N.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Reed, C.; Roland, C.; Roland, G.; Sagerer, J.; Seals, H.; Sedykh, I.; Smith, C. E.; Stankiewicz, M. A.; Steinberg, P.; Stephans, G. S. F.; Sukhanov, A.; Tonjes, M. B.; Trzupek, A.; Vale, C.; Nieuwenhuizen, G. J. Van; Vaurynovich, S. S.; Verdier, R.; Veres, G. I.; Wenger, E.; Wolfs, F. L. H.; Wosiek, B.; Woźniak, K.; Wysłouch, B.
2006-08-01
The charged-particle pseudorapidity density for Au+Au collisions at √s_NN = 62.4 GeV has been measured over a wide range of impact parameters and compared to results obtained at other energies. As a function of collision energy, the pseudorapidity distribution grows systematically both in height and width. The midrapidity density is found to grow approximately logarithmically between BNL Alternating Gradient Synchrotron (AGS) energies and the top BNL Relativistic Heavy Ion Collider (RHIC) energy. There is also an approximate factorization of the centrality and energy dependence of the midrapidity yields. The new results at √s_NN = 62.4 GeV confirm the previously observed phenomenon of “extended longitudinal scaling” in the pseudorapidity distributions when viewed in the rest frame of one of the colliding nuclei. It is also found that the evolution of the shape of the distribution with centrality is energy independent, when viewed in this reference frame. As a function of centrality, the total charged particle multiplicity scales linearly with the number of participant pairs as it was observed at other energies.
Distributed intelligent monitoring and reporting facilities
NASA Astrophysics Data System (ADS)
Pavlou, George; Mykoniatis, George; Sanchez-P, Jorge-A.
1996-06-01
Distributed intelligent monitoring and reporting facilities are of paramount importance in both service and network management as they provide the capability to monitor quality of service and utilization parameters and to notify degradation so that corrective action can be taken. By intelligent, we refer to the capability of performing the monitoring tasks in a way that has the smallest possible impact on the managed network, facilitates the observation and summarization of information according to a number of criteria and, in its most advanced form, permits the specification of these criteria dynamically to suit the particular policy at hand. In addition, intelligent monitoring facilities should minimize the design and implementation effort involved in such activities. The ISO/ITU Metric, Summarization and Performance management functions provide models that only partially satisfy the above requirements. This paper describes our extensions to the proposed models to support further capabilities, with the intention of eventually leading to fully dynamically defined monitoring policies. The concept of distributing intelligence is also discussed, including the consideration of security issues and the applicability of the model in ODP-based distributed processing environments.
NASA Astrophysics Data System (ADS)
Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi
To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify breakdown probability distributions in a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage at which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage was increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even when the bias of the vacuum interrupters' voltage sharing is taken into account.
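As an illustration of the Weibull analysis described above, the following sketch (not the study's code; all parameter values are invented) fits a three-parameter Weibull to synthetic breakdown voltages with SciPy, where SciPy's `loc` plays the role of the location parameter, i.e. the voltage permitting zero breakdown probability:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Synthetic breakdown voltages (kV): shape c = 12 mimics the 10-14 range
# reported for the static gap; loc is the zero-probability voltage.
# These numbers are illustrative assumptions, not measured data.
true_c, true_loc, true_scale = 12.0, 40.0, 60.0
samples = weibull_min.rvs(true_c, loc=true_loc, scale=true_scale,
                          size=2000, random_state=rng)

# Fit the three-parameter Weibull by maximum likelihood.
c_hat, loc_hat, scale_hat = weibull_min.fit(samples)

# Breakdown probability at a hypothetical test voltage of 80 kV (CDF):
p_80 = weibull_min.cdf(80.0, c_hat, loc=loc_hat, scale=scale_hat)
print(c_hat, loc_hat, scale_hat, p_80)
```

Note that the location parameter of a high-shape Weibull is weakly identified, so fitted values can scatter even with many samples.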
Life cycle assessment of Mexican polymer and high-durability cotton paper banknotes.
Luján-Ornelas, Cristina; Mancebo Del C Sternenfels, Uriel; Güereca, Leonor Patricia
2018-07-15
This study compares the environmental performance of Mexican banknotes printed on high-durability cotton paper (HD paper) and thermoplastic polymer (polymer) through a life cycle assessment to appraise the environmental impacts from the extraction of raw materials to the final disposal of the banknotes. The functional unit was defined considering the following parameters: 1) the lifespan of the banknotes, established as 31.5 and 54 months for HD paper and polymer, respectively; 2) the denomination, selecting $200 peso banknotes; 3) a 5-year time frame; and 4) a defined amount of money, in this case established as the monthly cash supply of an average Mexican household, equaling $12,708 pesos. Accordingly, 121 pieces of the HD paper and 71 pieces of the polymer banknotes were analyzed. The results favor the banknotes printed on the polymer substrate, primarily because of the longer lifespan of this type of material; however, there is a considerable environmental impact in the distribution stage, followed by the extraction of the raw materials (crude oil) during manufacturing. Regarding the HD cotton paper, the major impact corresponds to the extraction of the raw materials, followed by the distribution of the banknotes. The inclusion of automatic teller machines (ATMs) in the life cycle assessment of banknotes shows that the electricity required by these devices becomes the largest contributor to the environmental impacts. Additionally, the sensitivity analysis shows that the average lifetime of the banknotes is a determining factor for the environmental impacts associated with the whole life cycle of this product. The life cycle stages that refer to the extraction of the raw materials, combined with the average lifetime of the banknotes and the electricity required during the usage stage, are determining factors in the total environmental impact associated with Mexican banknotes. Copyright © 2018 Elsevier B.V. All rights reserved.
Biotic variation in coastal water bodies in Sussex, England: Implications for saline lagoons
NASA Astrophysics Data System (ADS)
Joyce, Chris B.; Vina-Herbon, Cristina; Metcalfe, Daniel J.
2005-12-01
Coastal water bodies are a heterogeneous resource typified by high spatial and temporal variability and threatened by anthropogenic impacts. This includes saline lagoons, which support a specialist biota and are a priority habitat for nature conservation. This paper describes the biotic variation in coastal water bodies in Sussex, England, in order to characterise the distinctiveness of the saline lagoon community and elucidate environmental factors that determine its distribution. Twenty-eight coastal water bodies were surveyed for their aquatic flora and invertebrate fauna and a suite of exploratory environmental variables compiled. Ordination and cluster analyses were used to examine patterns in community composition and relate these to environmental parameters. Biotic variation in the coastal water body resource was high. Salinity was the main environmental parameter explaining the regional distribution of taxa; freshwater and saline assemblages were evident and related to sea water ingress. Freshwater sites were indicated by the plant Myriophyllum spicatum and gastropod mollusc Lymnaea peregra, while more saline communities supported marine and brackish water taxa, notably a range of chlorophytic algae and the bivalve mollusc Cerastoderma glaucum. Site community differences were also related to bank slope and parameters describing habitat heterogeneity. A saline lagoon community was discerned within the matrix of biotic variation consisting of specialist lagoonal species with associated typically euryhaline taxa. For fauna, the latter were the molluscs Abra tenuis and Hydrobia ulvae, and the crustaceans Corophium volutator and Palaemonetes varians, and for flora they were the algae Ulva lactuca, Chaetomorpha mediterranea, Cladophora spp. and Enteromorpha intestinalis. One non-native polychaete species, Ficopomatus enigmaticus, also strongly influenced community structure within the lagoonal resource. 
The community was not well defined as specialist and associated taxa were distributed throughout the spectrum of sites surveyed. Implications for the identification and conservation of saline lagoons are discussed.
Distribution of water quality parameters in Dhemaji district, Assam (India).
Buragohain, Mridul; Bhuyan, Bhabajit; Sarma, H P
2010-07-01
The primary objective of this study is to present a statistically significant water quality database for Dhemaji district, Assam (India), with special reference to pH, fluoride, nitrate, arsenic, iron, sodium and potassium. Twenty-five water samples collected from different locations in five development blocks of Dhemaji district were studied separately. The implications presented are based on statistical analyses of the raw data. Normal distribution statistics and reliability analysis (correlation and covariance matrices) were employed to find the distribution pattern, localisation of data, and other related information. Statistical observations show that all the parameters under investigation exhibit a non-uniform distribution with a long asymmetric tail on either the right or left side of the median. The width of the third quartile was consistently found to be greater than that of the second quartile for each parameter. Differences among the mean, mode and median, together with significant skewness and kurtosis values, indicate that the distribution of the various water quality parameters in the study area is far from normal. Thus, the intrinsic water quality is not encouraging owing to the unsymmetrical distribution of the various water quality parameters in the study area.
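The quartile-width, skewness and kurtosis diagnostics described above can be reproduced with standard tools. The sketch below uses invented concentration values for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical fluoride concentrations (mg/L) for 25 samples;
# the values are illustrative only.
x = np.array([0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.5, 0.6, 0.6, 0.7,
              0.7, 0.8, 0.9, 1.0, 1.1, 1.3, 1.5, 1.8, 2.2, 2.7,
              3.3, 4.0, 5.0, 6.5, 8.0])

q1, q2, q3 = np.percentile(x, [25, 50, 75])
second_quartile_width = q2 - q1   # width of the second quartile
third_quartile_width = q3 - q2    # width of the third quartile

summary = {
    "mean": x.mean(),
    "median": q2,
    "skewness": stats.skew(x),
    "excess_kurtosis": stats.kurtosis(x),
}
# A right-skewed parameter shows mean > median, positive skewness,
# and a wider third quartile, the pattern reported for the study area.
print(summary, second_quartile_width, third_quartile_width)
```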
On the Use of the Beta Distribution in Probabilistic Resource Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, Ricardo A., E-mail: olea@usgs.gov
2011-12-15
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distributions in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
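One common way to ease the specification of the two shape parameters, in the spirit of the suggestions mentioned above, is to derive them from the mode and a concentration kappa = alpha + beta. The sketch below (an assumed parameterization with invented bounds, not the paper's recipe) contrasts beta and triangular sampling over the same three-point inputs:

```python
import numpy as np

def beta_params_from_mode(mode, kappa):
    """Shape parameters of a beta distribution on [0, 1] with the given
    mode, where kappa = alpha + beta > 2 controls the spread."""
    alpha = mode * (kappa - 2.0) + 1.0
    beta = (1.0 - mode) * (kappa - 2.0) + 1.0
    return alpha, beta

rng = np.random.default_rng(42)
lo, mode, hi = 10.0, 30.0, 100.0   # same three points as a triangular
m = (mode - lo) / (hi - lo)        # mode rescaled to [0, 1]

a, b = beta_params_from_mode(m, kappa=6.0)
beta_draws = lo + (hi - lo) * rng.beta(a, b, size=100_000)
tri_draws = rng.triangular(lo, mode, hi, size=100_000)

# Both stay inside [lo, hi] and peak near the mode, but the beta's
# spread can be tuned through kappa, while the triangular's cannot.
print(beta_draws.mean(), tri_draws.mean())
```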
Statistical distributions of earthquake numbers: consequence of branching process
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.
2010-03-01
We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of the earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
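The Poisson-gamma mixture representation mentioned above yields the NBD: drawing a gamma-distributed intensity for each window and then Poisson counts given that intensity reproduces the direct negative binomial. A minimal sketch with illustrative parameters, not fitted to any catalogue:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# NBD as a Poisson-gamma mixture (illustrative parameter values).
shape, rate = 2.0, 0.5            # gamma law for the Poisson intensity
lam = rng.gamma(shape, 1.0 / rate, size=200_000)
counts = rng.poisson(lam)

# Equivalent direct NBD parameterization: n = shape, p = rate/(1 + rate).
n, p = shape, rate / (1.0 + rate)
direct = stats.nbinom.rvs(n, p, size=200_000, random_state=rng)

# Overdispersion: the variance exceeds the mean, unlike the Poisson.
print(counts.mean(), counts.var(), direct.mean(), direct.var())
```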
NASA Technical Reports Server (NTRS)
Dudkin, V. E.; Kovalev, E. E.; Nefedov, N. A.; Antonchik, V. A.; Bogdanov, S. D.; Kosmach, V. F.; Likhachev, A. YU.; Benton, E. V.; Crawford, H. J.
1995-01-01
A method is proposed for finding the dependence of mean multiplicities of secondaries on the nucleus-collision impact parameter from the data on the total interaction ensemble. The impact parameter has been shown to completely define the mean characteristics of an individual interaction event. A difference has been found between experimental results and the data calculated in terms of the cascade-evaporation model at impact-parameter values below 3 fm.
Factors Impacting Spatial Patterns of Snow Distribution in a Small Catchment near Nome, AK
NASA Astrophysics Data System (ADS)
Chen, M.; Wilson, C. J.; Charsley-Groffman, L.; Busey, R.; Bolton, W. R.
2017-12-01
Snow cover plays an important role in the climate, hydrology and ecological systems of the Arctic due to its influence on the water balance, thermal regimes, vegetation and carbon flux. Thus, snow depth and coverage are key components of earth system models but are often poorly represented for arctic regions, where fine-scale snow distribution data are sparse. The snow data currently used in the models are at coarse resolution, which in turn leads to high uncertainty in model predictions. Through the DOE Office of Science Next Generation Ecosystem Experiment (NGEE-Arctic), high-resolution snow distribution data are being developed and applied in catchment-scale models to ultimately improve the representation of snow and its interactions with other model components in earth system models. To improve these models, it is important to identify the key factors that control snow distribution and to quantify their impacts on snow distribution. In this study, two intensive snow depth surveys (1 to 10 meter scale) were conducted for a 2.3 km2 catchment on the Teller Road near Nome, AK, in the winters of 2016 and 2017. We used a statistical model to quantify the impacts of vegetation types, macro-topography, micro-topography, and meteorological parameters on measured snow depth. The results show that the snow spatial distribution was similar between 2016 and 2017, and that snow depth was spatially autocorrelated over small distances (2-5 meters) but not over larger distances. The coefficient of variation of snow depth was above 0.3 for all the snow survey transects (500-800 meters long). Variation of snow depth is governed by vegetation height, aspect, slope, surface curvature, elevation, and wind speed and direction.
We expect that this empirical statistical model can be used to estimate end of winter snow depth for the whole watershed and will further develop the model using data from other arctic regions to estimate seasonally dynamic snow coverage and properties for use in catchment scale to pan-Arctic models.
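The coefficient-of-variation and short-range autocorrelation statistics reported above can be sketched on a synthetic transect; the signal below (a smooth terrain-like component plus few-metre correlated noise) is invented for illustration and is not the survey data:

```python
import numpy as np

def lag_correlation(z, lag):
    """Pearson correlation between transect values `lag` steps apart."""
    a, b = z[:-lag], z[lag:]
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(7)
# Hypothetical 600 m transect at 1 m spacing.
x = np.arange(600.0)
depth = (0.8 + 0.3 * np.sin(2 * np.pi * x / 200.0)
         + np.convolve(rng.normal(0, 0.25, 604), np.ones(5) / 5, "valid"))

cv = depth.std() / depth.mean()          # coefficient of variation
r_short = lag_correlation(depth, 3)      # within 2-5 m: correlated
r_long = lag_correlation(depth, 50)      # beyond: much weaker
print(round(cv, 2), round(r_short, 2), round(r_long, 2))
```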
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that, it is important to identify the model parameters that can change spatial patterns before satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow does not reduce the spatial errors in AET and improves only the streamflow simulations. We will further examine the results of model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.
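A one-at-a-time screening of the kind named above can be sketched on a toy function; the stand-in model, parameter count and step size below are invented for illustration and are not mHM or the authors' screening setup:

```python
import numpy as np

def model(p):
    """Toy stand-in for a scalar model output: driven mostly by
    p[0] and p[2], barely by p[1] (illustrative assumption)."""
    return 3.0 * p[0] + 0.1 * p[1] + 2.0 * p[2] ** 2

rng = np.random.default_rng(5)
n_params, n_base = 3, 50
base = rng.random((n_base, n_params))   # base points in the unit cube
delta = 0.1

# One-at-a-time screening: perturb each parameter separately at every
# base point and average the absolute scaled response.
effects = np.zeros(n_params)
for p in base:
    y0 = model(p)
    for j in range(n_params):
        q = p.copy()
        q[j] = min(q[j] + delta, 1.0)
        effects[j] += abs(model(q) - y0) / max(q[j] - p[j], 1e-12)
effects /= n_base

ranking = np.argsort(effects)[::-1]     # most influential first
print(effects.round(2), ranking)
```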
r.randomwalk v1.0, a multi-functional conceptual tool for mass movement routing
NASA Astrophysics Data System (ADS)
Mergili, M.; Krenn, J.; Chu, H.-J.
2015-09-01
We introduce r.randomwalk, a flexible and multi-functional open source tool for backward- and forward-analyses of mass movement propagation. r.randomwalk builds on GRASS GIS, the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are: (i) multiple break criteria can be combined to compute an impact indicator score, (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter settings, resulting in an impact indicator index in the range 0-1, (iii) built-in functions for validation and visualization of the results are provided, (iv) observed landslides can be back-analyzed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk (i) for a single event, the Acheron Rock Avalanche in New Zealand, (ii) for landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) for lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.
r.randomwalk v1, a multi-functional conceptual tool for mass movement routing
NASA Astrophysics Data System (ADS)
Mergili, M.; Krenn, J.; Chu, H.-J.
2015-12-01
We introduce r.randomwalk, a flexible and multi-functional open-source tool for backward and forward analyses of mass movement propagation. r.randomwalk builds on GRASS GIS (Geographic Resources Analysis Support System - Geographic Information System), the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are (i) multiple break criteria can be combined to compute an impact indicator score; (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter sets, resulting in an impact indicator index in the range 0-1; (iii) built-in functions for validation and visualization of the results are provided; (iv) observed landslides can be back analysed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk for (i) a single event, the Acheron rock avalanche in New Zealand; (ii) landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.
Impact of Urbanization on Precipitation Distribution and Intensity over Lake Victoria Basin
NASA Astrophysics Data System (ADS)
Gudoshava, M.; Semazzi, F. H. M.
2014-12-01
In this study, sensitivity simulations of the impact of rapid urbanization over the Lake Victoria Basin in East Africa were carried out using a Regional Climate Model (RegCM4.4-rc29) with the Hostetler lake model activated. The simulations were done for the rainy seasons, that is, the long rains (March-April-May) and the short rains (October-November-December). Africa is projected to have a surge in urbanization, with an approximate rate of 590% in 2030 over 2000 levels. The northern part of the Lake Victoria Basin and some parts of Rwanda and Burundi are amongst the regions with high urbanization projections. Simulations were done with the land cover for 2000 and the projected 2030 urbanization levels. The results showed that increasing the urban fraction over the northern part of the basin modified physical parameters such as albedo, moisture and surface energy fluxes, aerodynamic roughness and surface emissivity, thereby altering the precipitation distribution, intensity and frequency in the region. The change in the physical parameters produced an average increase in temperature of approximately 2 °C over the urbanized region. A strong convergence zone was formed over the urbanized region, accelerating the lake-breeze front towards the center of the urbanized region. Precipitation in the urbanized region and the areas immediately adjacent to it increased by approximately 4 mm/day, while the southern (non-urbanized) side of the basin dried up. The drying of the southern side of the basin could be a result of divergent flow and subsidence that suppresses the vertical development of storms.
Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L
2015-12-30
Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
May, Katharina; Brügemann, Kerstin; König, Sven; Strube, Christina
2017-10-15
Infections with gastrointestinal nematodes (GIN) can lead to production losses and impacts on product quality in affected cows, which has mainly been demonstrated during deworming experiments or via herd-level measurements. Here, a field study was carried out to explore the association between GIN infection status and milk production as well as fertility parameters in individual dairy cows. Different selection lines of Black and White cows were included in the study, which were distributed among 17 small and medium-sized organic and conventional German grassland farms. Faecal samples of 1166 dairy cows were examined twice, in July and September 2015. Nematode eggs were found in the faeces of 473 (40.6%) cows. As expected, strongylid eggs (Trichostrongylidae or Oesophagostomum and Bunostomum spp., respectively) were the predominant morphotype, followed by Strongyloides papillosus and Capillaria spp. eggs. In July, cows kept under organic conditions had a significantly lower GIN prevalence in comparison to cows kept on conventional farms. Faecal egg counts were generally low, with the highest value in September and an arithmetic mean of 11.3 eggs per gram faeces (EPG) for all observations. The relationships between GIN infection status and milk yield (kg milk/cow/day), milk protein content (%) and milk fat content (%) for each first test-day record after parasitological assessment were estimated by using linear mixed models. Milk protein content was estimated 0.05% lower in GIN positive compared to GIN negative cows, whereas no significant effect on milk yield or milk fat content was observed. The impact of GIN infection status on success in first insemination (SFI) was estimated by using a threshold model. No significant association was demonstrated between GIN infection status and SFI. Unexpectedly, the fertility parameter days from calving-to-first-service (CTFS) showed a significantly shorter average interval in GIN positive cows. 
However, these data on reproductive performance need to be considered preliminary as long-term studies are needed to allow a firm prediction of the impact of GIN infection status on dairy cow fertility parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters
Werner, René
2017-01-01
Radiotherapy of lung and liver lesions has changed from normofractioned 3D-CRT to stereotactic treatment in a single or few fractions, often employing volumetric arc therapy (VMAT)-based techniques. Potential unintended interference of respiratory target motion and dynamically changing beam parameters during VMAT dose delivery motivates establishing 4D quality assurance (4D QA) procedures to assess appropriateness of generated VMAT treatment plans when taking into account patient-specific motion characteristics. Current approaches are motion phantom-based 4D QA and image-based 4D VMAT dose simulation. Whereas phantom-based 4D QA is usually restricted to a small number of measurements, the computational approaches allow simulating many motion scenarios. However, 4D VMAT dose simulation depends on various input parameters, influencing estimated doses along with mitigating simulation reliability. Thus, aiming at routine use of simulation-based 4D VMAT QA, the impact of such parameters as well as the overall accuracy of the 4D VMAT dose simulation has to be studied in detail, which is the topic of the present work. In detail, we introduce the principles of 4D VMAT dose simulation, identify influencing parameters and assess their impact on 4D dose simulation accuracy by comparison of simulated motion-affected dose distributions to corresponding dosimetric motion phantom measurements. Exploiting an ITV-based treatment planning approach, VMAT treatment plans were generated for a motion phantom and different motion scenarios (sinusoidal motion of different period/direction; regular/irregular motion). 4D VMAT dose simulation results and dose measurements were compared by local 3% / 3 mm γ-evaluation, with the measured dose distributions serving as ground truth.
Overall γ-passing rates of simulations and dynamic measurements ranged from 97% to 100% (mean across all motion scenarios: 98% ± 1%); corresponding values for comparison of different day repeat measurements were between 98% and 100%. Parameters of major influence on 4D VMAT dose simulation accuracy were the degree of temporal discretization of the dose delivery process (the higher, the better) and correct alignment of the assumed breathing phases at the beginning of the dose measurements and simulations. Given the high γ-passing rates between simulated motion-affected doses and dynamic measurements, we consider the simulations to provide a reliable basis for assessment of VMAT motion effects that, in the sense of 4D QA of VMAT treatment plans, allows verification of target coverage in hypofractioned VMAT-based radiotherapy of moving targets. Remaining differences between measurements and simulations motivate, however, further detailed studies. PMID:28231337
4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters.
Sothmann, Thilo; Gauer, Tobias; Werner, René
2017-01-01
Radiotherapy of lung and liver lesions has changed from normofractioned 3D-CRT to stereotactic treatment in a single or few fractions, often employing volumetric arc therapy (VMAT)-based techniques. Potential unintended interference of respiratory target motion and dynamically changing beam parameters during VMAT dose delivery motivates establishing 4D quality assurance (4D QA) procedures to assess appropriateness of generated VMAT treatment plans when taking into account patient-specific motion characteristics. Current approaches are motion phantom-based 4D QA and image-based 4D VMAT dose simulation. Whereas phantom-based 4D QA is usually restricted to a small number of measurements, the computational approaches allow simulating many motion scenarios. However, 4D VMAT dose simulation depends on various input parameters, influencing estimated doses along with mitigating simulation reliability. Thus, aiming at routine use of simulation-based 4D VMAT QA, the impact of such parameters as well as the overall accuracy of the 4D VMAT dose simulation has to be studied in detail, which is the topic of the present work. In detail, we introduce the principles of 4D VMAT dose simulation, identify influencing parameters and assess their impact on 4D dose simulation accuracy by comparison of simulated motion-affected dose distributions to corresponding dosimetric motion phantom measurements. Exploiting an ITV-based treatment planning approach, VMAT treatment plans were generated for a motion phantom and different motion scenarios (sinusoidal motion of different period/direction; regular/irregular motion). 4D VMAT dose simulation results and dose measurements were compared by local 3% / 3 mm γ-evaluation, with the measured dose distributions serving as ground truth.
Overall γ-passing rates of simulations and dynamic measurements ranged from 97% to 100% (mean across all motion scenarios: 98% ± 1%); corresponding values for comparison of different day repeat measurements were between 98% and 100%. Parameters of major influence on 4D VMAT dose simulation accuracy were the degree of temporal discretization of the dose delivery process (the higher, the better) and correct alignment of the assumed breathing phases at the beginning of the dose measurements and simulations. Given the high γ-passing rates between simulated motion-affected doses and dynamic measurements, we consider the simulations to provide a reliable basis for assessment of VMAT motion effects that, in the sense of 4D QA of VMAT treatment plans, allows verification of target coverage in hypofractioned VMAT-based radiotherapy of moving targets. Remaining differences between measurements and simulations motivate, however, further detailed studies.
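The γ-evaluation used in both records above can be illustrated in one dimension. The sketch below is a simplified variant on invented profiles, normalizing the dose criterion globally to the reference maximum (whereas the papers use a local criterion), and is not the authors' implementation:

```python
import numpy as np

def gamma_1d(ref, eval_, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Simplified global 1D gamma index: dose-difference criterion as a
    percentage of the reference maximum, distance-to-agreement in mm;
    returns the per-point gamma of the evaluated profile."""
    x = np.arange(len(ref)) * spacing_mm
    dd = dd_pct / 100.0 * ref.max()
    gammas = np.empty(len(eval_))
    for i, (xi, di) in enumerate(zip(x, eval_)):
        dist = (x - xi) / dta_mm
        dose = (ref - di) / dd
        gammas[i] = np.sqrt(dist**2 + dose**2).min()
    return gammas

# Illustrative profiles: the "measured" one slightly shifted and scaled.
x = np.arange(101) * 1.0                          # 1 mm grid
ref = np.exp(-((x - 50.0) / 15.0) ** 2)           # reference profile
eval_ = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)  # 1 mm shift, +1% dose

g = gamma_1d(ref, eval_, spacing_mm=1.0)
passing = (g <= 1.0).mean() * 100.0               # γ-passing rate in %
print(passing)
```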
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to enable an empirical mean-squared-error comparison of the Bayes estimators with the existing minimum-variance unbiased and maximum likelihood estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
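For the gamma-prior case, the Bayes estimators under squared-error loss have a closed form via conjugacy; the following sketch assumes a Gamma(a, b) shape/rate prior and unit exposure intervals (the hyperparameter values and function name are illustrative, not taken from the paper).

```python
import numpy as np

def bayes_poisson_estimates(x, a=1.0, b=1.0, t=1.0):
    """Bayes estimates under a Gamma(a, b) prior (shape a, rate b) on the
    Poisson intensity lam, given counts x over unit exposure intervals.
    The squared-error-loss Bayes estimator is the posterior mean."""
    a_post = a + np.sum(x)          # posterior shape
    b_post = b + len(x)             # posterior rate
    lam_hat = a_post / b_post       # Bayes estimate of the intensity
    # Reliability R(t) = P(no event in [0, t]) = exp(-lam * t); its posterior
    # mean is the Gamma moment generating function evaluated at -t:
    rel_hat = (b_post / (b_post + t)) ** a_post
    return lam_hat, rel_hat
```

The maximum likelihood counterpart is simply the sample mean of the counts, which is what a Monte Carlo mean-squared-error comparison like the paper's would pit these estimators against.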
Optimal Bayesian Adaptive Design for Test-Item Calibration.
van der Linden, Wim J; Ren, Hao
2015-06-01
An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
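The contrast between D- and A-optimality can be sketched for a single 2PL field-test item: assign the item to the examinee whose response most improves the chosen criterion on the item-parameter information matrix. This toy version uses point estimates of ability in place of the posterior draws the actual MCMC design uses; the function names are illustrative.

```python
import numpy as np

def item_info_2pl(theta, a, b):
    """Fisher information for the 2PL item parameters (a, b) contributed by
    one response from an examinee at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    g = np.array([theta - b, -a])        # gradient of the logit w.r.t. (a, b)
    return p * (1 - p) * np.outer(g, g)

def pick_examinee(thetas, a, b, info_so_far, criterion="D"):
    """Assign the field-test item to the examinee who most improves the
    criterion: D maximizes det(info), A minimizes trace(inverse info)."""
    best, best_score = None, -np.inf
    for i, th in enumerate(thetas):
        M = info_so_far + item_info_2pl(th, a, b)
        score = np.linalg.det(M) if criterion == "D" else -np.trace(np.linalg.inv(M))
        if score > best_score:
            best, best_score = i, score
    return best
```

Note how the D-criterion favors examinees whose abilities are informative about both discrimination and difficulty jointly, which is consistent with the faster calibration the simulation study reports for D-optimality.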
Double density dynamics: realizing a joint distribution of a physical system and a parameter system
NASA Astrophysics Data System (ADS)
Fukuda, Ikuo; Moritsugu, Kei
2015-11-01
To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed ‘double density dynamics’ (DDD), which simultaneously realizes an arbitrary distribution for both the physical variables and their associated parameters. Specifically, we constructed an ordinary differential equation that has an invariant density corresponding to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that develops in a nonequilibrium environment, describing superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to address the theoretical basis of our method, and a numerical example of a 1D harmonic oscillator is provided to validate the method as applied to the temperature parameters.
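The idea of a deterministic ODE whose trajectory samples a target distribution can be illustrated with the classical Nosé-Hoover thermostat for a 1-D harmonic oscillator. To be clear, these are not the DDD equations of the paper, only a well-known simpler dynamics in the same spirit: an auxiliary variable couples the physical system to a target temperature.

```python
import numpy as np

def nose_hoover_oscillator(T=1.0, Q=1.0, dt=1e-3, n_steps=100_000):
    """Deterministic thermostat for a 1-D harmonic oscillator (an
    illustrative stand-in for DDD-style dynamics, not the paper's method):
        x' = p,   p' = -x - xi * p,   xi' = (p**2 - T) / Q
    The auxiliary variable xi steers the kinetic energy toward T."""
    x, p, xi = 1.0, 0.0, 0.0
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):        # simple forward-Euler integration
        x += dt * p
        p += dt * (-x - xi * p)
        xi += dt * (p * p - T) / Q
        traj[i] = x, p
    return traj
```

In DDD the extended system is constructed so that the long-time average of the flow reproduces a chosen joint density over both the physical variables and the parameters, rather than only a canonical density at fixed temperature.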
Semertzidou, P; Piliposian, G T; Appleby, P G
2016-08-01
The residence time of ²¹⁰Pb created in the atmosphere by the decay of gaseous ²²²Rn is a key parameter controlling its distribution and fallout onto the landscape. These in turn are key parameters governing the use of this natural radionuclide for dating and interpreting environmental records stored in natural archives such as lake sediments. One of the principal methods for estimating the atmospheric residence time is through measurements of the activities of the daughter radionuclides ²¹⁰Bi and ²¹⁰Po, and in particular the ²¹⁰Bi/²¹⁰Pb and ²¹⁰Po/²¹⁰Pb activity ratios. Calculations used in early empirical studies assumed that these were governed by a simple series of equilibrium equations. This approach does, however, have two failings: it takes no account of the effect of global circulation on spatial variations in the activity ratios, and no allowance is made for the impact of transport processes across the tropopause. This paper presents a simple model for calculating the distributions of ²¹⁰Pb, ²¹⁰Bi and ²¹⁰Po at northern mid-latitudes (30°-65°N), a region containing almost all the available empirical data. By comparing modelled ²¹⁰Bi/²¹⁰Pb activity ratios with empirical data, a best estimate for the tropospheric residence time of around 10 days is obtained. This is significantly longer than earlier estimates of between 4 and 7 days. The process whereby ²¹⁰Pb is transported into the stratosphere when tropospheric concentrations are high, and returned from it when they are low, significantly increases the effective residence time in the atmosphere as a whole. The effect of this is to significantly enhance the long range transport of ²¹⁰Pb from its source locations. The impact is illustrated by calculations showing the distribution of ²¹⁰Pb fallout versus longitude at northern mid-latitudes.
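The simple equilibrium relation that the early studies relied on can be sketched directly. In a single well-mixed box with constant ²¹⁰Pb production and a first-order removal rate 1/τ, the steady-state activity ratio is r = λ_Bi / (λ_Bi + 1/τ), which inverts to give τ from a measured ratio. This is the naive model the paper improves upon, not the paper's circulation model; the function name is illustrative.

```python
import numpy as np

LAMBDA_BI = np.log(2) / 5.01   # 210Bi decay constant (per day, half-life 5.01 d)

def residence_time_from_ratio(r):
    """Tropospheric residence time tau (days) inferred from a steady-state
    210Bi/210Pb activity ratio r in the single-box equilibrium model:
        r = lambda_Bi / (lambda_Bi + 1/tau)  =>  tau = r / (lambda_Bi * (1 - r))
    """
    return r / (LAMBDA_BI * (1.0 - r))
```

Because stratospheric exchange raises the effective residence time, fitting this one-box formula to observed ratios understates τ, which is consistent with the paper's revision from 4-7 days up to about 10 days.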
A note about Gaussian statistics on a sphere
NASA Astrophysics Data System (ADS)
Chave, Alan D.
2015-11-01
The statistics of directional data on a sphere can be modelled either using the Fisher distribution that is conditioned on the magnitude being unity, in which case the sample space is confined to the unit sphere, or using the latitude-longitude marginal distribution derived from a trivariate Gaussian model that places no constraint on the magnitude. These two distributions are derived from first principles and compared. The Fisher distribution more closely approximates the uniform distribution on a sphere for a given small value of the concentration parameter, while the latitude-longitude marginal distribution is always slightly larger than the Fisher distribution at small off-axis angles for large values of the concentration parameter. Asymptotic analysis shows that the two distributions only become equivalent in the limit of large concentration parameter and very small off-axis angle.
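The Fisher distribution's behavior in the two limits discussed above can be checked numerically from its off-axis-angle density, f(θ) = κ/(2 sinh κ) · exp(κ cos θ) · sin θ, which integrates to 1 on [0, π] for any κ and tends to the uniform density sin θ / 2 as κ → 0. This is a standard form of the density, written here as a small sketch; the function name is illustrative.

```python
import numpy as np

def fisher_offaxis_pdf(theta, kappa):
    """Fisher density of the off-axis angle theta on the sphere:
    f(theta) = kappa / (2 sinh kappa) * exp(kappa cos theta) * sin(theta).
    Small kappa approaches the uniform density sin(theta) / 2; large kappa
    concentrates near theta = 0."""
    return (kappa / (2.0 * np.sinh(kappa))
            * np.exp(kappa * np.cos(theta)) * np.sin(theta))
```

Evaluating this alongside the latitude-longitude marginal of a trivariate Gaussian at matched concentration would reproduce the comparison in the abstract: agreement only at large κ and small off-axis angle.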
NASA Astrophysics Data System (ADS)
Harrison, T. W.; Polagye, B. L.
2016-02-01
Coastal ecosystems are characterized by spatially and temporally varying hydrodynamics. In marine renewable energy applications, these variations strongly influence project economics, and in oceanographic studies they impact the accuracy of biological transport and pollutant dispersion models. While stationary point or profile measurements are relatively straightforward, the spatial representativeness of point measurements can be poor due to strong gradients. Moving platforms, such as AUVs or surface vessels, offer better coverage, but suffer from energetic constraints (AUVs) and limited resolvable scales (vessels). A system of sub-surface, drifting sensor packages is being developed to provide spatially distributed, synoptic data sets of coastal hydrodynamics with meter-scale resolution over a regional extent of a kilometer. Computational investigation has informed system parameters such as drifter size and shape, necessary position accuracy, number of drifters, and deployment methods. A hydrodynamic domain with complex flow features was created using a computational fluid dynamics code. A simple model of drifter dynamics propagates the drifters through the domain in post-processing. System parameters are evaluated relative to their ability to accurately recreate the domain hydrodynamics. Implications of these results for an inexpensive, depth-controlled Lagrangian drifter system are presented.
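The post-processing step of propagating drifters through a precomputed flow field can be sketched with a purely kinematic model, in which drifters follow the local velocity exactly (no inertia or slip). The velocity field below is an analytic stand-in for the CFD output, and all names and parameter values are illustrative.

```python
import numpy as np

def advect_drifters(x0, velocity, dt=1.0, n_steps=100):
    """Propagate passive drifters through a 2-D velocity field.
    x0: initial positions, shape (n_drifters, 2), metres
    velocity: callable mapping positions to velocities (m/s)
    Returns the track array, shape (n_steps + 1, n_drifters, 2)."""
    x = np.array(x0, dtype=float)
    track = [x.copy()]
    for _ in range(n_steps):
        x = x + dt * velocity(x)       # forward-Euler step
        track.append(x.copy())
    return np.array(track)

# Illustrative flow: slow solid-body rotation about the origin.
def rotation(x, omega=1e-3):
    return omega * np.column_stack([-x[:, 1], x[:, 0]])
```

Evaluating how well a finite set of such tracks reconstructs the underlying field, as a function of drifter count and release pattern, is the kind of parameter study the abstract describes.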
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time required relative to that of a deterministic assessment. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
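The sampling-based propagation described above can be sketched generically: draw input samples from assumed distributions and push each sample through the analysis model. The surrogate "model" and the distribution parameters below are placeholders for illustration only, not the S4T aeroelastic analysis or its actual input statistics.

```python
import numpy as np

rng = np.random.default_rng(42)

def propagate_uncertainty(model, samplers, n_samples=5000):
    """Monte Carlo uncertainty propagation: draw input samples from the
    given distributions and evaluate the analysis model on each draw."""
    draws = {name: s(n_samples) for name, s in samplers.items()}
    return np.array([model(**{k: v[i] for k, v in draws.items()})
                     for i in range(n_samples)])

# Placeholder surrogate for an aeroelastic response (NOT the S4T model):
def flutter_speed(stiffness, density):
    return np.sqrt(stiffness / density)

samplers = {
    "stiffness": lambda n: rng.normal(1.0e6, 5.0e4, n),   # assumed 5% scatter
    "density":   lambda n: rng.normal(1.225, 0.05, n),    # assumed scatter
}
```

A reduced-order surrogate such as a Polynomial Chaos Expansion replaces the expensive `model` call here, which is precisely the remedy the abstract describes for the cost of many function evaluations.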