The use of impact force as a scale parameter for the impact response of composite laminates
NASA Technical Reports Server (NTRS)
Jackson, Wade C.; Poe, C. C., Jr.
1992-01-01
The building block approach is currently used to design composite structures. With this approach, the data from coupon tests are scaled up to determine the design of a structure. Current standard impact tests and methods of relating test data to other structures are not generally understood and are often used improperly. A methodology is outlined for using impact force as a scale parameter for delamination damage for impacts of simple plates. Dynamic analyses were used to define ranges of plate parameters and impact parameters where quasi-static analyses are valid. These ranges include most low-velocity impacts where the mass of the impacter is large and the size of the specimen is small. For large-mass impacts of moderately thick (0.35-0.70 cm) laminates, the maximum extent of delamination damage increased with increasing impact force and decreasing specimen thickness. For large-mass impact tests at a given kinetic energy, impact force and hence delamination size depend on specimen size, specimen thickness, boundary conditions, and indenter size and shape. If damage is reported in terms of impact force instead of kinetic energy, large-mass test results can be applied directly to other plates of the same thickness.
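The quasi-static, large-mass regime described in this abstract can be illustrated with a minimal energy-balance sketch: if the plate is idealized as a linear spring of effective stiffness k_eff, equating the impactor's kinetic energy with stored strain energy gives the peak contact force. The stiffness and energy values below are hypothetical, and the sketch ignores indentation, membrane stiffening, and damage, so it is only a rough illustration of how peak force (rather than kinetic energy alone) reflects the plate configuration.

```python
import math

def peak_quasi_static_force(kinetic_energy_J, plate_stiffness_N_per_m):
    """Peak contact force for a large-mass, quasi-static impact on a
    linearly elastic plate: (1/2) k x^2 = E_kin  =>  F = k x = sqrt(2 k E)."""
    return math.sqrt(2.0 * plate_stiffness_N_per_m * kinetic_energy_J)

# Hypothetical numbers: a 6 J impact on plates whose effective bending
# stiffnesses differ because of size, thickness, and boundary conditions.
for k_eff in (0.5e6, 1.0e6, 2.0e6):   # N/m (illustrative only)
    print(k_eff, peak_quasi_static_force(6.0, k_eff))
```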
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility, and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably in all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m²/MWh, which is more than two times larger than that of small-scale hydropower; the large land occupation for large hydropower is explained by the extent of the reservoirs. On all three other parameters, small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Impact of biology knowledge on the conservation and management of large pelagic sharks.
Yokoi, Hiroki; Ijima, Hirotaka; Ohshimo, Seiji; Yokawa, Kotaro
2017-09-06
Population growth rate, which depends on several biological parameters, is valuable information for the conservation and management of pelagic sharks, such as blue and shortfin mako sharks. However, reported biological parameters for estimating the population growth rates of these sharks differ by sex and display large variability. To estimate the appropriate population growth rate and clarify relationships between growth rate and relevant biological parameters, we developed a two-sex age-structured matrix population model and estimated the population growth rate using combinations of biological parameters. We also performed an elasticity analysis to clarify the sensitivity of the population growth rate. For the blue shark, the estimated median population growth rate was 0.384 with a range of 0.195-0.533, whereas the corresponding values for the shortfin mako shark were 0.102 and 0.007-0.318. The maturity age of male sharks had the largest impact for blue sharks, whereas that of female sharks had the largest impact for shortfin mako sharks. Hypotheses for the survival process of sharks also had a large impact on the population growth rate estimation. Both shark maturity age and survival rate were based on ageing validation data, indicating the importance of validating the quality of these data for the conservation and management of large pelagic sharks.
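As a minimal sketch of the matrix-population-model idea, the finite population growth rate can be computed as the dominant eigenvalue of an age-structured (Leslie) projection matrix. The sketch below is single-sex and uses hypothetical fecundity and survival values; it is not the two-sex structure or the shark life-history data of the study.

```python
import numpy as np

def growth_rate(fecundity, survival):
    """Dominant eigenvalue of a Leslie projection matrix.

    fecundity: per-capita offspring per age class (length n)
    survival:  probability of surviving from age i to i+1 (length n-1)
    """
    n = len(fecundity)
    L = np.zeros((n, n))
    L[0, :] = fecundity                               # top row: reproduction
    L[np.arange(1, n), np.arange(n - 1)] = survival   # sub-diagonal: survival
    lam = np.max(np.real(np.linalg.eigvals(L)))
    return lam  # finite rate lambda; r = ln(lambda) is the intrinsic rate

# Hypothetical values for illustration only (not shark life-history data).
lam = growth_rate(fecundity=[0, 0, 0, 0, 5, 5, 5, 5], survival=[0.8] * 7)
print(lam, np.log(lam))
```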
NASA Astrophysics Data System (ADS)
Dabiri, Arman; Butcher, Eric A.; Nazari, Morad
2017-02-01
Compliant impacts can be modeled using linear viscoelastic constitutive models. Impact models for realistic viscoelastic materials that use integer-order derivatives of force and displacement usually require a large number of parameters, whereas compliant impact models based on fractional calculus can be advantageous because they use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models, in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, and penetration and separation depths are also studied.
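A hedged illustration of how a coefficient of restitution emerges from a compliant viscoelastic contact: the sketch below integrates an integer-order Kelvin-Voigt contact law with a simple explicit scheme and a simplified separation criterion. It is a baseline, not the fractional-order models or the fractional Chebyshev collocation method of the paper, and all parameter values are hypothetical.

```python
def restitution_kelvin_voigt(m=1.0, k=1e4, c=5.0, v_in=1.0, dt=1e-5,
                             max_steps=10**7):
    """Coefficient of restitution for a linear Kelvin-Voigt contact
    m*x'' = -k*x - c*x', integrated from first contact (x = 0, v > 0,
    penetration positive) until x returns to zero with negative velocity."""
    x, v = 0.0, v_in
    for _ in range(max_steps):
        a = (-k * x - c * v) / m
        v += a * dt                 # semi-implicit Euler step
        x += v * dt
        if x <= 0.0 and v < 0.0:    # simplified separation criterion
            return -v / v_in        # e = |v_out| / |v_in|
    raise RuntimeError("no separation within max_steps")

for c in (0.0, 5.0, 20.0):          # increasing viscous damping
    print(c, restitution_kelvin_voigt(c=c))
```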
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
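As an illustration of variance-based global sensitivity analysis (as opposed to one-parameter-at-a-time local analysis), the sketch below estimates first-order Sobol indices for a cheap toy function using a Saltelli-style pick-and-freeze estimator. The toy model and sample sizes are placeholders; the study applies such ideas to the far more expensive ENISI ABM.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy stand-in for an expensive simulation: y = f(x1, x2, x3)."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

d, N = 3, 100_000
A = rng.uniform(0, 1, size=(N, d))        # two independent sample matrices
B = rng.uniform(0, 1, size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # replace column i of A with B's
    # Saltelli/Jansen-style first-order estimator
    S1 = np.mean(fB * (model(ABi) - fA)) / var
    print(f"parameter {i}: first-order Sobol index ~ {S1:.3f}")
```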
Sensitivity of Asteroid Impact Risk to Uncertainty in Asteroid Properties and Entry Parameters
NASA Astrophysics Data System (ADS)
Wheeler, Lorien; Mathias, Donovan; Dotson, Jessie L.; NASA Asteroid Threat Assessment Project
2017-10-01
A central challenge in assessing the threat posed by asteroids striking Earth is the large amount of uncertainty inherent throughout all aspects of the problem. Many asteroid properties are not well characterized and can range widely from strong, dense, monolithic irons to loosely bound, highly porous rubble piles. Even for an object of known properties, the specific entry velocity, angle, and impact location can swing the potential consequence from no damage to causing millions of casualties. Due to the extreme rarity of large asteroid strikes, there are also large uncertainties in how different types of asteroids will interact with the atmosphere during entry, how readily they may break up or ablate, and how much surface damage will be caused by the resulting airbursts or impacts. In this work, we use our Probabilistic Asteroid Impact Risk (PAIR) model to investigate the sensitivity of asteroid impact damage to uncertainties in key asteroid properties, entry parameters, or modeling assumptions. The PAIR model combines physics-based analytic models of asteroid entry and damage in a probabilistic Monte Carlo framework to assess the risk posed by a wide range of potential impacts. The model samples from uncertainty distributions of asteroid properties and entry parameters to generate millions of specific impact cases, and models the atmospheric entry and damage for each case, including blast overpressure, thermal radiation, tsunami inundation, and global effects. To assess the risk sensitivity, we alternately fix and vary the different input parameters and compare the effect on the resulting range of damage produced. The goal of these studies is to help guide future efforts in asteroid characterization and model refinement by determining which properties most significantly affect the potential risk.
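A minimal sketch of the probabilistic-sampling structure described here: draw asteroid properties and entry parameters from assumed uncertainty distributions, propagate each sample through a consequence calculation, and summarize the resulting distribution. The distributions and the cube-root damage-radius relation below are crude placeholders, not the PAIR model's physics-based entry and damage models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Sample uncertain asteroid properties and entry parameters (illustrative
# distributions only -- not the PAIR model's calibrated inputs).
diameter = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # m
density  = rng.uniform(1500.0, 3500.0, size=n)                   # kg/m^3
velocity = rng.uniform(11e3, 30e3, size=n)                       # m/s
angle    = np.degrees(np.arccos(np.sqrt(rng.uniform(0, 1, n))))  # entry angle (deg)

mass = density * np.pi / 6.0 * diameter ** 3                     # kg
E_mt = 0.5 * mass * velocity ** 2 / 4.184e15                     # megatons TNT

# Crude placeholder consequence metric: blast-damage radius ~ E^(1/3).
damage_radius_km = 2.0 * E_mt ** (1.0 / 3.0)

print("median energy [Mt]:", np.median(E_mt))
print("90th-percentile damage radius [km]:", np.percentile(damage_radius_km, 90))
```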
Size-Related Changes in Foot Impact Mechanics in Hoofed Mammals
Warner, Sharon Elaine; Pickering, Phillip; Panagiotopoulou, Olga; Pfau, Thilo; Ren, Lei; Hutchinson, John Richard
2013-01-01
Foot-ground impact is mechanically challenging for all animals, but how do large animals mitigate increased mass during foot impact? We hypothesized that impact force amplitude scales according to isometry in animals of increasing size through allometric scaling of related impact parameters. To test this, we measured limb kinetics and kinematics in 11 species of hoofed mammals ranging from 18 to 3157 kg in body mass. We found impact force amplitude to be maintained proportional to size in hoofed mammals, but that other features of foot impact exhibit differential scaling patterns depending on the limb; forelimb parameters typically exhibit higher intercepts with lower scaling exponents than hind limb parameters. Our explorations of the size-related consequences of foot impact advance understanding of how body size influences limb morphology and function, foot design and locomotor behaviour. PMID:23382967
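The allometric-scaling test amounts to fitting a power law F = a M^b on log-transformed data and comparing the exponent b with the isometric expectation. A hedged sketch with synthetic data (not the measured hoofed-mammal kinetics) is shown below.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data (illustrative only): peak impact force vs. body mass.
mass = np.exp(rng.uniform(np.log(18), np.log(3157), size=40))      # kg
true_exponent = 1.0                                                # isometry
force = 10.0 * mass ** true_exponent * rng.lognormal(0, 0.1, 40)   # N

# Allometric fit: log(F) = log(a) + b * log(M)
b, log_a = np.polyfit(np.log(mass), np.log(force), 1)
print(f"scaling exponent b = {b:.2f} (isometry predicts 1.0), a = {np.exp(log_a):.1f}")
```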
Vibro-Impact Type Triboelectric Energy Harvester for Large Amplitude and Wideband Applications
NASA Astrophysics Data System (ADS)
Chen, J. M.; Bu, L.; Xu, W. Y.; Xu, B. J.; Song, L.
2015-12-01
This paper reports the design, fabrication, and testing of a novel vibro-impact type triboelectric energy harvester. The dynamics of vibro-impact converts external vibration into a large contact force for triboelectric power generation. Strong nonlinearities are measured for this vibro-impact system, and the wideband frequency response under diverse structural parameters is analyzed. The proposed device is applied in two large-amplitude scenarios, generating a maximal peak-to-peak voltage of 18 V in a foot-swinging condition (2 Hz, 30 cm) and of 45 V in an arm-swinging condition during running (5 Hz, 40 cm).
Drop impact upon micro- and nanostructured superhydrophobic surfaces.
Tsai, Peichun; Pacheco, Sergio; Pirat, Christophe; Lefferts, Leon; Lohse, Detlef
2009-10-20
We experimentally investigate drop impact dynamics onto different superhydrophobic surfaces, consisting of regular polymeric micropatterns and rough carbon nanofibers, with similar static contact angles. The main control parameters are the Weber number We and the roughness of the surface. At small We, i.e., small impact velocity, the impact evolutions are similar for both types of substrates, exhibiting Fakir state, complete bouncing, partial rebouncing, trapping of an air bubble, jetting, and sticky vibrating water balls. At large We, splashing impacts emerge forming several satellite droplets, which are more pronounced for the multiscale rough carbon nanofiber jungles. The results imply that the multiscale surface roughness at nanoscale plays a minor role in the impact events for small We ≲ 120 but an important one for large We ≳ 120. Finally, we find the effect of ambient air pressure to be negligible in the explored parameter regime We ≲ 150.
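For reference, the Weber number used as the control parameter here compares inertial and capillary stresses, We = ρ v² D / σ. The short sketch below evaluates it for a hypothetical ~2 mm water drop to show the impact speeds at which the We ≈ 120 splashing regime mentioned above is reached.

```python
def weber_number(density, velocity, diameter, surface_tension):
    """We = rho * v^2 * D / sigma (inertial vs. capillary forces)."""
    return density * velocity ** 2 * diameter / surface_tension

# A ~2 mm water drop (rho = 1000 kg/m^3, sigma = 0.072 N/m) at a few
# impact velocities; We ~ 120 marks the splashing regime noted above.
for v in (0.5, 1.0, 2.0, 2.1):   # m/s
    print(v, weber_number(1000.0, v, 2e-3, 0.072))
```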
A study of small impact parameter ion channeling effects in thin crystals
NASA Astrophysics Data System (ADS)
Motapothula, Mallikarjuna Rao; Breese, Mark B. H.
2018-03-01
We have recorded channeling patterns produced by 1-2 MeV protons aligned with ⟨1 1 1⟩ axes in 55 nm thick silicon crystals which exhibit characteristic angular structure for deflection angles up to and beyond the axial critical angle, ψ_a. Such large angular deflections are produced by ions incident on atomic strings with small impact parameters, resulting in trajectories which pass through several radial rings of atomic strings before exiting the thin crystal. Each ring may focus, steer or scatter the channeled ions in the transverse direction and the resulting characteristic angular structure beyond 0.6 ψ_a at different depths can be related to peaks and troughs in the nuclear encounter probability. Such "radial focusing" underlies other axial channeling phenomena in thin crystals including planar channeling of small impact parameter trajectories, peaks around the azimuthal distribution at small tilts and large shoulders in the nuclear encounter probability at tilts beyond ψ_a.
Evaluation of the impact response of textile composites
NASA Technical Reports Server (NTRS)
Portanova, M. A.
1995-01-01
An evaluation of the impact damage resistance and impact damage tolerance of stitched and unstitched uniweaves, 2-D braids, and 3-D weaves was conducted. Uniweave laminates were tested at four thicknesses to determine the sensitivity of the tests to this parameter. Several braid and weave parameters were also varied to establish their effects. Specimens were subjected to low-velocity (large-mass) impacts and then loaded in tension or compression to measure residual strength. Experimental results indicate that stitching significantly improves the uniweaves' damage resistance. The 2-D braids and 3-D weaves offered less damage resistance than the stitched materials. Stitching also improved the compression after impact (CAI) and tension after impact (TAI) strengths of the uniweave materials.
FLORIDA LARGE BUILDING STUDY - POLK COUNTY ADMINISTRATION BUILDING
The report describes an extensive characterization and parameter assessment study of a single, large building in Bartow, FL, with the purpose of assessing the impact on radon entry of design, construction, and operating features of the building, particularly the mechanical subsys...
USDA-ARS's Scientific Manuscript database
Isothermal inactivation studies are commonly used to quantify thermal inactivation kinetics of bacteria. Meta-analyses and comparisons utilizing results from multiple sources have revealed large variations in reported inactivation parameters for Salmonella, even in similar food materials. Different ...
Photon orbits and thermodynamic phase transition of d-dimensional charged AdS black holes
NASA Astrophysics Data System (ADS)
Wei, Shao-Wen; Liu, Yu-Xiao
2018-05-01
We study the relationship between the null geodesics and thermodynamic phase transition for the charged AdS black hole. In the reduced parameter space, we find that there exist nonmonotonic behaviors of the photon sphere radius and the minimum impact parameter for pressures below the critical value. The study also shows that the changes of the photon sphere radius and the minimum impact parameter can serve as order parameters for the small-large black hole phase transition. In particular, these changes have a universal exponent of 1/2 near the critical point for any dimension d of spacetime. These results imply that there may exist universal critical behavior of gravity near the thermodynamic critical point of the black hole system.
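As a hedged companion calculation (assuming the standard 4D Reissner-Nordström-AdS metric function in geometric units, which may differ from the paper's d-dimensional conventions), the photon-sphere radius follows from the circular-photon-orbit condition r f'(r) = 2 f(r), and the minimum impact parameter is r_ps/√f(r_ps):

```python
import math

def photon_sphere_and_min_impact(M=1.0, Q=0.9, L=10.0):
    """4D Reissner-Nordstrom-AdS in geometric units,
    f(r) = 1 - 2M/r + Q^2/r^2 + r^2/L^2.
    Circular photon orbit: r f'(r) = 2 f(r)  =>  r^2 - 3 M r + 2 Q^2 = 0
    (the AdS term cancels from this condition)."""
    r_ps = 0.5 * (3.0 * M + math.sqrt(9.0 * M * M - 8.0 * Q * Q))
    f = 1.0 - 2.0 * M / r_ps + Q ** 2 / r_ps ** 2 + r_ps ** 2 / L ** 2
    u_ps = r_ps / math.sqrt(f)      # minimum impact parameter b = L_photon / E
    return r_ps, u_ps

print(photon_sphere_and_min_impact())
```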
Climate Impacts on Extreme Energy Consumption of Different Types of Buildings
Li, Mingcai; Shi, Jun; Guo, Jun; Cao, Jingfu; Niu, Jide; Xiong, Mingming
2015-01-01
Exploring changes in building energy consumption and its relationships with climate can provide a basis for energy saving and carbon emission reduction. Heating and cooling energy consumption of different types of buildings in Tianjin city during 1981-2010 was simulated using TRNSYS software. Daily or hourly extreme energy consumption was determined by percentile methods, and the climate impact on extreme energy consumption was analyzed. The results showed that the number of days of extreme heating consumption decreased appreciably over the recent 30 years for the residential and large venue buildings, whereas days of extreme cooling consumption increased for the large venue building. No significant variations were found in the days of extreme energy consumption for the commercial building, although there was a decreasing trend in extreme heating energy consumption. Daily extreme energy consumption for the large venue building had no relationship with climate parameters, whereas extreme energy consumption for the commercial and residential buildings was related to various climate parameters. Further multiple regression analysis suggested that heating energy consumption for the commercial building was affected by maximum temperature, dry bulb temperature, solar radiation, and minimum temperature, which together can explain 71.5% of the variation in daily extreme heating energy consumption. The daily extreme cooling energy consumption for the commercial building was only related to the wet bulb temperature (R² = 0.382). The daily extreme heating energy consumption for the residential building was affected by 4 climate parameters, but the dry bulb temperature had the main impact. The impacts of climate on hourly extreme heating energy consumption had a 1-3 hour delay in all three types of buildings, but no delay was found in the impacts of climate on hourly extreme cooling energy consumption for the selected buildings. PMID:25923205
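A minimal sketch of the two analysis steps described here, percentile-based identification of extreme-consumption days followed by a multiple linear regression on climate variables, is shown below with synthetic data; the variables, coefficients, and thresholds are placeholders, not the Tianjin simulation output.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 3000

# Synthetic daily climate drivers and heating energy use (illustrative only).
t_max  = rng.normal(10, 8, days)                     # deg C
solar  = rng.normal(12, 4, days)                     # MJ/m^2
energy = 50 - 2.0 * t_max - 0.5 * solar + rng.normal(0, 3, days)

# "Extreme" heating days: consumption above the 95th percentile.
threshold = np.percentile(energy, 95)
mask = energy >= threshold

# Multiple linear regression on the extreme days only.
X = np.column_stack([np.ones(mask.sum()), t_max[mask], solar[mask]])
coef, *_ = np.linalg.lstsq(X, energy[mask], rcond=None)
pred = X @ coef
r2 = 1 - np.sum((energy[mask] - pred) ** 2) / np.sum((energy[mask] - energy[mask].mean()) ** 2)
print("coefficients:", coef, " R^2 =", round(r2, 3))
```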
The Effect of Clustering on Estimations of the UV Ionizing Background from the Proximity Effect
NASA Astrophysics Data System (ADS)
Pascarelle, S. M.; Lanzetta, K. M.; Chen, H. W.
1999-09-01
There have been several determinations of the ionizing background using the proximity effect observed in the distribution of Lyman-alpha absorption lines in the spectra of QSOs at high redshift. It is usually assumed that the distribution of lines should be the same at very small impact parameters to the QSO as it is at large impact parameters, and any decrease in line density at small impact parameters is due to ionizing radiation from the QSO. However, if these Lyman-alpha absorption lines arise in galaxies (Lanzetta et al. 1995, Chen et al. 1998), then the strength of the proximity effect may have been underestimated in previous work, since galaxies are known to cluster around QSOs. Therefore, the UV background estimations have likely been overestimated by the same factor.
Reconciling Rigour and Impact by Collaborative Research Design: Study of Teacher Agency
ERIC Educational Resources Information Center
Pantic, Nataša
2017-01-01
This paper illustrates a new way of working collaboratively on the development of a methodology for studying teacher agency for social justice. Increasing emphasis of impact on change as a purpose of social research raises questions about appropriate research designs. Large-scale quantitative research framed within externally set parameters has…
Cripps, Jemma; Beveridge, Ian; Ploeg, Richard; Coulson, Graeme
2014-01-01
Large mammalian herbivores are commonly infected with gastrointestinal helminths. In many host species, these helminths cause clinical disease and may trigger conspicuous mortality events. However, they may also have subclinical impacts, reducing fitness as well as causing complex changes to host growth patterns and body condition. Theoretically, juveniles should experience significantly greater costs from parasites, being immunologically naive and undergoing a significant growth phase. The aims of our study were to quantify the subclinical effects of helminths in juvenile eastern grey kangaroos (Macropus giganteus), which commonly harbour large burdens of gastrointestinal nematodes and are susceptible to associated mass mortality during cold, wet conditions. We conducted a field experiment on a population of free-ranging kangaroos, removing nematodes from one group of juveniles using an anthelmintic treatment. We then compared growth parameters (body condition and growth rates) and haematological parameters of this group with an age-matched, parasitised (untreated) control group. Treated juvenile kangaroos had significantly higher levels of plasma protein (albumin) but, contrary to our predictions, showed negligible changes in all the other parameters measured. Our results suggest that juvenile kangaroos are largely unaffected by their gastrointestinal helminth burdens, and may be able to compensate for the costs of parasites. PMID:25161906
USDA-ARS's Scientific Manuscript database
Photosynthetic potential in C3 plants is largely limited by CO2 diffusion through stomata (Ls) and mesophyll (Lm) and photo-biochemical (Lb) processes. Accurate estimation of mesophyll conductance (gm) using gas exchange (GE) and chlorophyll fluorescence (CF) parameters of the photosynthetic proces...
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
A geotechnical centrifuge was used to investigate large body impacts onto planetary surfaces. At elevated gravity, it is possible to match various dimensionless similarity parameters which were shown to govern large scale impacts. Observations of crater growth and target flow fields have provided detailed and critical tests of a complete and unified scaling theory for impact cratering. Scaling estimates were determined for nonporous targets. Scaling estimates for large scale cratering in rock proposed previously by others have assumed that the crater radius is proportional to powers of the impactor energy and gravity, with no additional dependence on impact velocity. The size scaling laws determined from ongoing centrifuge experiments differ from earlier ones in three respects. First, a distinct dependence of impact velocity is recognized, even for constant impactor energy. Second, the present energy exponent for low porosity targets, like competent rock, is lower than earlier estimates. Third, the gravity exponent is recognized here as being related to both the energy and the velocity exponents.
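The scaling relationships referred to here are commonly expressed in non-dimensional pi-group form; the sketch below evaluates a gravity-regime power law of that form, with the coefficient and exponent left as illustrative placeholders rather than the values fitted from the centrifuge experiments.

```python
import math

def crater_radius(impactor_radius, velocity, target_density, impactor_density,
                  gravity, K=1.0, beta=0.22):
    """Gravity-regime, point-source (pi-group) crater scaling:
         pi_2 = g * a / U**2
         pi_R = R * (rho_target / m)**(1/3) = K * pi_2**(-beta)
       K and beta are illustrative placeholders; they depend on the target
       material and are what experiments of this kind constrain."""
    a = impactor_radius
    m = impactor_density * (4.0 / 3.0) * math.pi * a ** 3
    pi_2 = gravity * a / velocity ** 2
    pi_R = K * pi_2 ** (-beta)
    return pi_R * (m / target_density) ** (1.0 / 3.0)

# Illustrative case: a 100 m radius impactor at 15 km/s into rock at 1 g.
print(crater_radius(100.0, 15e3, 2700.0, 3000.0, 9.81))
```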
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, these affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way depending on an alignment between the tide, wave vector of small-scale modes and line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to large-scale tide. We then investigate the impact of large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of the statistical errors, and show that a degradation in the parameter is restored if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.
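For reference, the leading-order (Kaiser) linear-theory redshift-space galaxy power spectrum that such a tidal response modulates can be written as follows, with b the linear bias, f the growth rate, and μ the cosine of the angle between the wave vector and the line of sight (standard notation assumed here):

$$ P_s(k,\mu) = \left(b + f\mu^{2}\right)^{2} P_{\mathrm{lin}}(k), $$

and the response function derived in the paper describes how a long-wavelength tide adds further anisotropy on top of this μ dependence.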
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
This research is a continuation of an ongoing program whose objective is to perform experiments and to develop scaling relationships for large body impacts onto planetary surfaces. The development of the centrifuge technique has been pioneered by the present investigator and is used to provide experimental data for actual target materials of interest. With both powder and gas guns mounted on a rotor arm, it is possible to match various dimensionless similarity parameters, which have been shown to govern the behavior of large scale impacts. Current work is directed toward the determination of scaling estimates for nonporous targets. The results are presented in summary form.
NASA Astrophysics Data System (ADS)
Dhakal, N.; Jain, S.
2013-12-01
Rare and unusually large events (such as hurricanes and floods) can create unusual and interesting trends in statistics. The Generalized Extreme Value (GEV) distribution is usually used to statistically describe extreme rainfall events. A number of recent studies have shown that the frequency of extreme rainfall events has increased over the last century and that, as a result, the parameters of the GEV distribution have changed with time (non-stationarity). But what impact does a single unusually large rainfall event (e.g., Hurricane Irene) have on the GEV parameters and consequently on the level of risk, or the return periods, used in designing civil infrastructure? In other words, if such a large event occurs today, how will it influence the level of risk (estimated from past rainfall records) for civil infrastructure? To answer these questions, we performed a sensitivity analysis of the GEV distribution parameters and the return periods to unusually large outlier events. Long-term precipitation records over the period 1981-2010 from 12 USHCN stations across the state of Maine were used for the analysis. For most of the stations, the addition of each outlier event caused an increase in the shape parameter and a large decrease in the corresponding return period. This is a key consideration for time-varying engineering design. Such isolated extreme weather events should be considered alongside traditional statistical methodology for extreme events when designing civil infrastructure (such as dams, bridges, and culverts). Such analysis is also useful in understanding the statistical uncertainty of projecting extreme events into the future.
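A hedged sketch of the sensitivity test described here: fit a GEV distribution to an annual-maximum series with and without an appended outlier and compare the implied 100-year return levels. The synthetic record below stands in for the USHCN station data.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)

# Synthetic annual-maximum rainfall series (mm), illustrative only.
annual_max = genextreme.rvs(c=-0.1, loc=60, scale=15, size=30, random_state=rng)

def return_level(sample, T=100):
    """T-year return level from a maximum-likelihood GEV fit."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)

print("100-yr level, observed record      :", round(return_level(annual_max), 1))
print("100-yr level, record + one outlier :",
      round(return_level(np.append(annual_max, 250.0)), 1))
```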
A discrete element modelling approach for block impacts on trees
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic
2015-04-01
In the past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments. Then, key parameters were assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities, and contacts between the block and the tree. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loading. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account using an additional mass distributed uniformly on the upper part of the tree. The block is represented by a sphere. The contact model between the block and the stem consists of an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. Wood stems were clamped on a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was carried out to identify the input parameters controlling the block kinematics after the impact on a tree. 20 input parameters were considered in the DEM simulation model: 12 related to the tree and 8 to the block. The results highlight that the impact velocity, the stem diameter, and the block volume are the three input parameters that control the block kinematics after impact.
Parameters sensitivity on mooring loads of ship-shaped FPSOs
NASA Astrophysics Data System (ADS)
Hasan, Mohammad Saidee
2017-12-01
The work in this paper is focused on a special assessment and evaluation of the mooring system of a ship-shaped FPSO unit. In particular, the purpose of the study is to find the impact on mooring loads of variations in different parameters using the MIMOSA software. First, a base case was designed for an intact mooring system in a typical ultimate limit state (ULS) condition, and then the sensitivity of mooring loads to parameters such as the location of the turret, the analysis method (quasi-static vs. dynamic analysis), the low-frequency damping level in surge, pretension, and the drag coefficients of the chain and steel wire was assessed. It is found that mooring loads change with these parameters. In particular, pretension has a large impact on the maximum tension of mooring lines, and low-frequency damping can change the surge offset significantly.
Xie, Rong-Rong; Pang, Yong; Zhang, Qian; Chen, Ke; Sun, Ming-Yuan
2012-07-01
For the safety of the water environment in Jiashan county, Zhejiang Province, one-dimensional hydrodynamic and water quality models were established based on three large-scale monitoring campaigns of hydrology and water quality in Jiashan county. Three water-environment-sensitive spots, including Hongqitang dam, Chijia hydrological station, and Luxie pond, were selected to investigate the weight parameters of water quality impact and to determine risk grades. Results indicate the following: (1) The internal pollution impact in the Jiashan area was greater than the external; the average weight parameter of internal chemical oxygen demand (COD) pollution is 55.3%, of internal ammonia nitrogen (NH(4+)-N) 67.4%, and of internal total phosphorus (TP) 63.1%. The non-point pollution impact in the Jiashan area was greater than the point pollution impact; the average weight parameter of non-point COD pollution is 53.7%, of non-point NH(4+)-N 65.9%, and of non-point TP 57.8%. (2) Hongqitang dam and Chijia hydrological station are at middle risk. Luxie pond is also at middle risk in August, while in April and December its risk is low. Strategic decisions are suggested to guarantee water environment security and social and economic security in the study area.
Coupling SPH and thermochemical models of planets: Methodology and example of a Mars-sized body
NASA Astrophysics Data System (ADS)
Golabek, G. J.; Emsenhuber, A.; Jutzi, M.; Asphaug, E. I.; Gerya, T. V.
2018-02-01
Giant impacts have been suggested to explain various characteristics of terrestrial planets and their moons. However, so far in most models only the immediate effects of the collisions have been considered, while the long-term interior evolution of the impacted planets was not studied. Here we present a new approach, combining 3-D shock physics collision calculations with 3-D thermochemical interior evolution models. We apply the combined methods to a demonstration example of a giant impact on a Mars-sized body, using typical collisional parameters from previous studies. While the material parameters (equation of state, rheology model) used in the impact simulations can have some effect on the long-term evolution, we find that the impact angle is the most crucial parameter for the resulting spatial distribution of the newly formed crust. The results indicate that a dichotomous crustal pattern can form after a head-on collision, while this is not the case when considering a more likely grazing collision. Our results underline that end-to-end 3-D calculations of the entire process are required to study in the future the effects of large-scale impacts on the evolution of planetary interiors.
Farina, Marco; Pappadopulo, Duccio; Rompineve, Fabrizio; ...
2017-01-23
Here, we propose a framework in which the QCD axion has an exponentially large coupling to photons, relying on the “clockwork” mechanism. We discuss the impact of present and future axion experiments on the parameter space of the model. In addition to the axion, the model predicts a large number of pseudoscalars which can be light and observable at the LHC. In the most favorable scenario, axion Dark Matter will give a signal in multiple axion detection experiments and the pseudo-scalars will be discovered at the LHC, allowing us to determine most of the parameters of the model.
Radiation effects on type I fiber Bragg gratings: influence of recoating
NASA Astrophysics Data System (ADS)
Blanchet, T.; Laffont, G.; Cotillard, R.; Marin, E.; Morana, A.; Boukenter, A.; Ouerdane, Y.; Girard, S.
2017-04-01
We investigated the Bragg Wavelength Shift (BWS) induced by X-rays in a large set of conventional FBGs up to a dose of 100 kGy. The results give insight into the influence of irradiation parameters, such as dose and dose rate, as well as the impact of some writing-process parameters, such as thermal treatment or acrylate recoating, on the FBG radiation tolerance.
On the impact of large angle CMB polarization data on cosmological parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lattanzi, Massimiliano; Mandolesi, Nazzareno; Natoli, Paolo
We study the impact of the large-angle CMB polarization datasets publicly released by the WMAP and Planck satellites on the estimation of cosmological parameters of the ΛCDM model. To complement large-angle polarization, we consider the high resolution (or 'high-ℓ') CMB datasets from either WMAP or Planck as well as CMB lensing as traced by Planck's measured four point correlation function. In the case of WMAP, we compute the large-angle polarization likelihood starting over from low resolution frequency maps and their covariance matrices, and perform our own foreground mitigation technique, which includes as a possible alternative Planck 353 GHz data to trace polarized dust. We find that the latter choice induces a downward shift in the optical depth τ, roughly of order 2σ, robust to the choice of the complementary high resolution dataset. When the Planck 353 GHz is consistently used to minimize polarized dust emission, WMAP and Planck 70 GHz large-angle polarization data are in remarkable agreement: by combining them we find τ = 0.066 +0.012/−0.013, again very stable against the particular choice for high-ℓ data. We find that the amplitude of primordial fluctuations A_s, notoriously degenerate with τ, is the parameter second most affected by the assumptions on polarized dust removal, but the other parameters are also affected, typically between 0.5 and 1σ. In particular, cleaning dust with Planck's 353 GHz data imposes a 1σ downward shift in the value of the Hubble constant H_0, significantly contributing to the tension reported between CMB based and direct measurements of the present expansion rate. On the other hand, we find that the appearance of the so-called low-ℓ anomaly, a well-known tension between the high- and low-resolution CMB anisotropy amplitude, is not significantly affected by the details of large-angle polarization, or by the particular high-ℓ dataset employed.
Tungsten dust impact on ITER-like plasma edge
Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; ...
2015-01-12
The impact of tungsten dust originating from divertor plates on the performance of edge plasma in ITER-like discharge is evaluated using computer modeling with the coupled dust-plasma transport code DUSTT-UEDGE. Different dust injection parameters, including dust size and mass injection rates, are surveyed. It is found that tungsten dust injection with rates as low as a few mg/s can lead to dangerously high tungsten impurity concentrations in the plasma core. Dust injections with rates of a few tens of mg/s are shown to have a significant effect on edge plasma parameters and dynamics in ITER scale tokamaks. The large impact of certain phenomena, such as dust shielding by an ablation cloud and the thermal force on tungsten ions, on dust/impurity transport in edge plasma and consequently on core tungsten contamination level is demonstrated. Lastly, it is also found that high-Z impurities provided by dust can induce macroscopic self-sustained plasma oscillations in plasma edge leading to large temporal variations of edge plasma parameters and heat load to divertor target plates.
From LCAs to simplified models: a generic methodology applied to wind power electricity.
Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle
2013-02-05
This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performances have been plotted as a function of these key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them a robust but simple support tool for assessing the environmental performance of energy systems.
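The simplified-model idea can be illustrated with a back-of-the-envelope calculation in which GHG performance is a function of the two key parameters identified (load factor and lifetime); the embodied-emission total and turbine rating below are placeholders, not values from the study's LCA sample.

```python
def ghg_per_kwh(total_lifecycle_emissions_kg, rated_power_kw,
                load_factor, lifetime_years):
    """g CO2-eq per kWh = life-cycle emissions / lifetime electricity output."""
    kwh = rated_power_kw * load_factor * 8760.0 * lifetime_years
    return 1000.0 * total_lifecycle_emissions_kg / kwh

# Placeholder inputs: a 2 MW onshore turbine with 1,500 t CO2-eq embodied.
for lf in (0.20, 0.25, 0.30):
    for life in (20, 25, 30):
        print(lf, life, round(ghg_per_kwh(1.5e6, 2000.0, lf, life), 1))
```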
Constraints on a scale-dependent bias from galaxy clustering
NASA Astrophysics Data System (ADS)
Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.
2017-01-01
We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα-emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First of all, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor up to 2, depending on the bias model adopted. Second, we find that the accuracy in the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with a ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
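As a hedged toy version of such a Fisher forecast, the sketch below propagates Gaussian mode-counting errors on a band-averaged power spectrum into marginalized errors on a two-parameter scale-dependent bias model b(k) = b0 + b2 k²; the fiducial spectrum, survey volume, and bias values are placeholders, not the Euclid-like configuration of the paper.

```python
import numpy as np

k = np.linspace(0.02, 0.2, 50)                 # wavenumber bins [h/Mpc]
P_lin = 2.0e4 * (k / 0.05) ** -1.5             # placeholder linear spectrum
V = 1.0e10                                     # placeholder survey volume [(Mpc/h)^3]
dk = k[1] - k[0]
n_modes = V * 4 * np.pi * k ** 2 * dk / (2 * np.pi) ** 3

def model(theta):
    """Galaxy power spectrum with scale-dependent bias b(k) = b0 + b2*k^2."""
    b0, b2 = theta
    return (b0 + b2 * k ** 2) ** 2 * P_lin

theta0 = np.array([1.4, 0.5])                  # fiducial (b0, b2), illustrative
err = model(theta0) * np.sqrt(2.0 / n_modes)   # Gaussian mode-counting errors

def deriv(i, eps=1e-4):
    dt = np.zeros(2)
    dt[i] = eps
    return (model(theta0 + dt) - model(theta0 - dt)) / (2 * eps)

F = np.array([[np.sum(deriv(i) * deriv(j) / err ** 2) for j in range(2)]
              for i in range(2)])
marginalized = np.sqrt(np.diag(np.linalg.inv(F)))
print("1-sigma errors on (b0, b2):", marginalized)
```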
Improved Strength and Damage Modeling of Geologic Materials
NASA Astrophysics Data System (ADS)
Stewart, Sarah; Senft, Laurel
2007-06-01
Collisions and impact cratering events are important processes in the evolution of planetary bodies. The time and length scales of planetary collisions, however, are inaccessible in the laboratory and require the use of shock physics codes. We present the results from a new rheological model for geological materials implemented in the CTH code [1]. The 'ROCK' model includes pressure, temperature, and damage effects on strength, as well as acoustic fluidization during impact crater collapse. We demonstrate that the model accurately reproduces final crater shapes, tensile cracking, and damaged zones from laboratory to planetary scales. The strength model requires basic material properties; hence, the input parameters may be benchmarked to laboratory results and extended to planetary collision events. We show the effects of varying material strength parameters, which are dependent on both scale and strain rate, and discuss choosing appropriate parameters for laboratory and planetary situations. The results are a significant improvement in models of continuum rock deformation during large scale impact events. [1] Senft, L. E., Stewart, S. T., Modeling Impact Cratering in Layered Surfaces, J. Geophys. Res., submitted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim
2014-01-01
To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation including meteorologic forcings, soil, land class, vegetation, and elevation were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at refined 1/24° (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
NASA Technical Reports Server (NTRS)
Parsons, David S.; Ordway, David O.; Johnson, Kenneth L.
2013-01-01
This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
Impact of High PV Penetration on the Inter-Area Oscillations in the U.S. Eastern Interconnection
You, Shutang; Kou, Gefei; Liu, Yong; ...
2017-03-31
Our study explores the impact of high-photovoltaic (PV) penetration on the inter-area oscillation modes of large-scale power grids. A series of dynamic models with various PV penetration levels are developed based on a detailed model representing the U.S. Eastern Interconnection (EI). Transient simulations are performed to investigate the change of inter-area oscillation modes with PV penetration. The impact of PV control strategies and parameter settings on inter-area oscillations is studied. This paper finds that as PV increases, the damping of the dominant oscillation mode decreases monotonically. We also observed that the mode shape varies with the PV control strategy and new oscillation modes may emerge under inappropriate parameter settings in PV plant controls.
Extracting lunar dust parameters from image charge signals produced by the Lunar Dust Experiment
NASA Astrophysics Data System (ADS)
Stanley, J.; Kempf, S.; Horanyi, M.; Szalay, J.
2015-12-01
The Lunar Dust Experiment (LDEX) onboard the Lunar Atmosphere and Dust Environment Explorer (LADEE) is an impact ionization dust detector used to characterize the lunar dust exosphere generated by the impacts of large interplanetary particles and meteor streams (Horanyi et al., 2015). In addition to the mass and speed of these lofted particles, LDEX is sensitive to their charge. The resulting signatures of impact events therefore provide valuable information about not only the ambient plasma environment, but also the speed vectors of these dust grains. Here, impact events produced from LDEX's calibration at the Dust Accelerator Laboratory are analyzed using an image charge model derived from the electrostatic simulation program, Coulomb. We show that parameters such as dust grain speed, size, charge, and position of entry into LDEX can be recovered and applied to data collected during LADEE's seven-month mission.
Relating centrality to impact parameter in nucleus-nucleus collisions
NASA Astrophysics Data System (ADS)
Das, Sruthy Jyothi; Giacalone, Giuliano; Monard, Pierre-Amaury; Ollitrault, Jean-Yves
2018-01-01
In ultrarelativistic heavy-ion experiments, one estimates the centrality of a collision by using a single observable, say n, typically given by the transverse energy or the number of tracks observed in a dedicated detector. The correlation between n and the impact parameter b of the collision is then inferred by fitting a specific model of the collision dynamics, such as the Glauber model, to experimental data. The goal of this paper is to assess precisely which information about b can be extracted from data without any specific model of the collision. Under the sole assumption that the probability distribution of n for a fixed b is Gaussian, we show that the probability distribution of the impact parameter in a narrow centrality bin can be accurately reconstructed up to 5% centrality. We apply our methodology to data from the Relativistic Heavy Ion Collider and the Large Hadron Collider. We propose a simple measure of the precision of the centrality determination, which can be used to compare different experiments.
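A toy forward model helps illustrate the setup (not the paper's reconstruction method): assume a geometric distribution of b, a mean multiplicity that falls with b, and Gaussian fluctuations of n at fixed b, then define centrality by ranking events in n and inspect the b values that populate the most central class. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_events = 1_000_000
b_max = 20.0                                       # fm, illustrative

# Geometric impact-parameter distribution, P(b) db ~ b db up to b_max.
b = b_max * np.sqrt(rng.uniform(0, 1, n_events))

# Toy multiplicity model: mean falls with b, Gaussian fluctuations at fixed b.
mean_n = 3000.0 * np.exp(-(b / 8.0) ** 2)
n = rng.normal(mean_n, 0.1 * mean_n + 20.0)

# Centrality = fraction of events with a larger n; 0-5% = most central.
rank = np.argsort(np.argsort(n))
centrality = 1.0 - rank / n_events
central = b[centrality < 0.05]
print(f"0-5% class: mean b = {central.mean():.2f} fm, "
      f"95th percentile of b = {np.percentile(central, 95):.2f} fm")
```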
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
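The clustering step can be sketched as follows, assuming a pairwise tree-distance matrix (for example, Robinson-Foulds distances) has already been computed for the inferred trees; the small matrix below is hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric matrix of pairwise tree distances
# (e.g., Robinson-Foulds) for 6 inferred trees.
D = np.array([[0, 2, 2, 8, 8, 9],
              [2, 0, 1, 8, 9, 9],
              [2, 1, 0, 7, 8, 8],
              [8, 8, 7, 0, 2, 3],
              [8, 9, 8, 2, 0, 1],
              [9, 9, 8, 3, 1, 0]], dtype=float)

Z = linkage(squareform(D), method="average")   # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster label per tree:", labels)       # trees grouped by similar topology
```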
Determination of Acreage Thermal Protection Foam Loss From Ice and Foam Impacts
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Lawrence, Charles
2015-01-01
A parametric study was conducted to establish Thermal Protection System (TPS) loss from foam and ice impact conditions similar to what might occur on the Space Launch System. This study was based upon the large amount of testing and analysis that was conducted with both ice and foam debris impacts on TPS acreage foam for the Space Shuttle Project External Tank. Test verified material models and modeling techniques that resulted from Space Shuttle related testing were utilized for this parametric study. Parameters varied include projectile mass, impact velocity and impact angle (5 degree and 10 degree impacts). The amount of TPS acreage foam loss as a result of the various impact conditions is presented.
Online estimation of the wavefront outer scale profile from adaptive optics telemetry
NASA Astrophysics Data System (ADS)
Guesalaga, A.; Neichel, B.; Correia, C. M.; Butterley, T.; Osborn, J.; Masciadri, E.; Fusco, T.; Sauvage, J.-F.
2017-02-01
We describe an online method to estimate the wavefront outer scale profile, L0(h), for very large and future extremely large telescopes. The stratified information on this parameter impacts the estimation of the main turbulence parameters (turbulence strength, Cn2(h); Fried's parameter, r0; isoplanatic angle, θ0; and coherence time, τ0) and determines the performance of wide-field adaptive optics (AO) systems. This technique estimates L0(h) using data from the AO loop available at the facility instruments by constructing the cross-correlation functions of the slopes between two or more wavefront sensors, which are later fitted to a linear combination of the simulated theoretical layers having different altitudes and outer scale values. We analyse some limitations found in the estimation process: (I) its insensitivity to large values of L0(h) as the telescope becomes blind to outer scales larger than its diameter; (II) the maximum number of observable layers given the limited number of independent inputs that the cross-correlation functions provide; and (III) the minimum length of data required for a satisfactory convergence of the turbulence parameters without breaking the assumption of statistical stationarity of the turbulence. The method is applied to the Gemini South multiconjugate AO system that comprises five wavefront sensors and two deformable mirrors. Statistics of L0(h) at Cerro Pachón from data acquired during 3 yr of campaigns show interesting resemblance to other independent results in the literature. A final analysis suggests that the impact of error sources will be substantially reduced in instruments of the next generation of giant telescopes.
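The fitting step described above, expressing the measured slope cross-correlation function as a linear combination of simulated single-layer responses, can be sketched as a non-negative least-squares problem; the template matrix, noise level, and layer weights below are synthetic stand-ins, not Gemini South telemetry.

```python
import numpy as np
from scipy.optimize import nnls

# One column per candidate (altitude, outer scale) pair; rows are samples of the
# cross-correlation function flattened to 1-D. All values are synthetic.
n_samples, n_layers = 500, 12
rng = np.random.default_rng(1)
templates = rng.normal(size=(n_samples, n_layers))           # simulated layer responses
true_w = np.array([0.8, 0, 0, 1.5, 0, 0, 0, 0.3, 0, 0, 0, 0])
measured = templates @ true_w + 0.05 * rng.normal(size=n_samples)

weights, residual = nnls(templates, measured)   # layer strengths constrained >= 0
print(np.round(weights, 2), "residual:", round(float(residual), 3))
```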
Fedy, Bradley C.; O'Donnell, Michael; Bowen, Zachary H.
2015-01-01
Human impacts on wildlife populations are widespread and prolific and understanding wildlife responses to human impacts is a fundamental component of wildlife management. The first step to understanding wildlife responses is the documentation of changes in wildlife population parameters, such as population size. Meaningful assessment of population changes in potentially impacted sites requires the establishment of monitoring at similar, nonimpacted, control sites. However, it is often difficult to identify appropriate control sites in wildlife populations. We demonstrated use of Geographic Information System (GIS) data across large spatial scales to select biologically relevant control sites for population monitoring. Greater sage-grouse (Centrocercus urophasianus; hereafter, sage-grouse) are negatively affected by energy development, and monitoring of sage-grouse populations within energy development areas is necessary to detect population-level responses. We used population data (1995–2012) from an energy development area in Wyoming, USA, the Atlantic Rim Project Area (ARPA), and GIS data to identify control sites that were not impacted by energy development for population monitoring. Control sites were surrounded by similar habitat and were within similar climate areas to the ARPA. We developed nonlinear trend models for both the ARPA and control sites and compared long-term trends from the 2 areas. We found little difference between the ARPA and control sites trends over time. This research demonstrated an approach for control site selection across large landscapes and can be used as a template for similar impact-monitoring studies. It is important to note that identification of changes in population parameters between control and treatment sites is only the first step in understanding the mechanisms that underlie those changes. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Sarkozy, Clémentine; Camus, Vincent; Tilly, Hervé; Salles, Gilles; Jardin, Fabrice
2015-07-01
Diffuse large B-cell lymphoma (DLBCL) is the most common form of aggressive non-Hodgkin lymphoma, accounting for 30-40% of newly diagnosed cases. Obesity is a well-defined risk factor for DLBCL. However, the impact of body mass index (BMI) on DLBCL prognosis is controversial. Recent studies suggest that skeletal muscle wasting (sarcopenia) or loss of fat mass can be detected by computed tomography (CT) images and is useful for predicting the clinical outcome in several types of cancer including DLBCL. Several hypotheses have been proposed to explain the differences in DLBCL outcome according to BMI or weight that include tolerance to treatment, inflammatory background and chemotherapy or rituximab metabolism. In this review, we summarize the available literature, addressing the impact and physiopathological relevance of simple anthropometric tools including BMI and tissue distribution measurements. We also discuss their relationship with other nutritional parameters and their potential role in the management of patients with DLBCL.
Verhoest, Niko E.C; Lievens, Hans; Wagner, Wolfgang; Álvarez-Mozos, Jesús; Moran, M. Susan; Mattia, Francesco
2008-01-01
Synthetic Aperture Radar has shown its large potential for retrieving soil moisture maps at regional scales. However, since the backscattered signal is determined by several surface characteristics, the retrieval of soil moisture is an ill-posed problem when using single configuration imagery. Unless accurate surface roughness parameter values are available, retrieving soil moisture from radar backscatter usually provides inaccurate estimates. The characterization of soil roughness is not fully understood, and a large range of roughness parameter values can be obtained for the same surface when different measurement methodologies are used. In this paper, a literature review summarizes the problems encountered when parameterizing soil roughness, as well as the reported impact of these errors on the retrieved soil moisture. A number of suggestions are made for resolving issues in roughness parameterization and for studying the impact of these roughness problems on soil moisture retrieval accuracy and scale. PMID:27879932
BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertulani, C. A.; Fuqua, J.; Hussein, M. S.
The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of the variation of the non-extensive parameter q from the unity value is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundance of light elements calculated with the extensive and the non-extensive statistics. We found that the observations are consistent with a non-extensive parameter q = 1 (+0.05, -0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
Workflow for Criticality Assessment Applied in Biopharmaceutical Process Validation Stage 1.
Zahel, Thomas; Marschall, Lukas; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Mueller, Eric M; Murphy, Patrick; Natschläger, Thomas; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-12
Identification of critical process parameters that impact product quality is a central task during regulatory-requested process validation. Commonly, this is done via design of experiments and identification of parameters significantly impacting product quality (rejection of the null hypothesis that the effect equals 0). However, parameters that show a large uncertainty and might push product quality beyond a limit critical to the product may be missed. This might occur during the evaluation of experiments since residual/un-modelled variance in the experiments is larger than expected a priori. Estimation of such a risk is the task of the presented novel retrospective power analysis permutation test. This is evaluated using a data set for two unit operations established during characterization of a biopharmaceutical process in industry. The results show that, for one unit operation, the observed variance in the experiments is much larger than expected a priori, resulting in low power levels for all non-significant parameters. Moreover, we present a workflow of how to mitigate the risk associated with overlooked parameter effects. This enables a statistically sound identification of critical process parameters. The developed workflow will substantially support industry in delivering constant product quality, reduce process variance and increase patient safety.
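As a rough illustration of the retrospective power idea (not the authors' permutation test itself), the sketch below estimates by simulation the chance that an effect just large enough to reach a critical limit would have been declared significant, given the residual variance actually observed; run count, variance, effect size, and alpha are all illustrative.

```python
import numpy as np
from scipy import stats

# Simulation-based power check for a two-level factor in a small DoE.
rng = np.random.default_rng(2)
n_runs, sd_observed, critical_effect, alpha = 12, 2.5, 3.0, 0.05

def simulated_power(n_sim=20000):
    hits = 0
    for _ in range(n_sim):
        lo = rng.normal(0.0, sd_observed, n_runs // 2)               # factor at low level
        hi = rng.normal(critical_effect, sd_observed, n_runs // 2)   # factor at high level
        _, p = stats.ttest_ind(lo, hi)
        hits += (p < alpha)
    return hits / n_sim

print(f"power to detect a critical effect: {simulated_power():.2f}")
```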
NASA Astrophysics Data System (ADS)
Lee, Myoung-Jae; Jung, Young-Dae
2017-05-01
The influence of nonisothermal and quantum shielding on the electron-ion collision process is investigated in strongly coupled two-temperature plasmas. The eikonal method is employed to obtain the eikonal scattering phase shift and eikonal cross section as functions of the impact parameter, collision energy, electron temperature, ion temperature, Debye length, and de Broglie wavelength. The results show that the quantum effect suppresses the eikonal scattering phase shift for the electron-ion collision in two-temperature dense plasmas. It is also found that the differential eikonal cross section decreases for small impact parameters. However, it increases for large impact parameters with increasing de Broglie wavelength. It is also found that the maximum position of the differential eikonal cross section recedes from the collision center with an increase in the nonisothermal character of the plasma. In addition, it is found that the total eikonal cross sections in isothermal plasmas are always greater than those in two-temperature plasmas. The variations of the eikonal cross section due to the two-temperature and quantum shielding effects are also discussed.
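For orientation, a textbook eikonal phase for a Debye-screened (Yukawa-like) potential can be evaluated as a function of the impact parameter using chi(b) = -(2 Z e^2 / (hbar v)) K0(b / lambda_D); this is a simplified illustration of an impact-parameter-dependent eikonal phase, not the paper's two-temperature quantum-shielded model, and all numerical values are placeholders.

```python
import numpy as np
from scipy.special import k0

# Straight-line eikonal phase for V(r) = Z e^2 exp(-r/lambda_D) / r, using the
# identity integral of exp(-r/lambda)/r along the path = 2 K0(b/lambda).
e2 = 1.44                 # e^2/(4 pi eps0) in eV*nm
Z, v_over_c = 1, 0.05     # illustrative ion charge and electron speed
hbar_c = 197.327          # eV*nm
lambda_D = 1.0            # Debye length [nm], illustrative

b = np.linspace(0.05, 5.0, 100)                               # impact parameter [nm]
chi = -(2.0 * Z * e2 / (hbar_c * v_over_c)) * k0(b / lambda_D)
print("phase at b = lambda_D:", round(float(np.interp(1.0, b, chi)), 3))
```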
The unitary convolution approximation for heavy ions
NASA Astrophysics Data System (ADS)
Grande, P. L.; Schiwietz, G.
2002-10-01
The convolution approximation for the impact-parameter dependent energy loss is reviewed with emphasis on the determination of the stopping force for heavy projectiles. In this method, the energy loss in different impact-parameter regions is well determined and interpolated smoothly. The physical inputs of the model are the projectile-screening function (in the case of dressed ions), the electron density and oscillator strengths of the target atoms. Moreover, the convolution approximation, in the perturbative mode (called PCA), yields remarkable agreement with full semi-classical-approximation (SCA) results for bare as well as for screened ions at all impact parameters. In the unitary mode (called UCA), the method contains some higher-order effects (yielding in some cases rather good agreement with full coupled-channel calculations) and approaches the classical regime, similarly to the Bohr model, for large perturbations (Z/v ≫ 1). The results are then used to compare with experimental values of the non-equilibrium stopping force as a function of the projectile charge as well as with the equilibrium energy loss under non-aligned and channeling conditions.
Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2017-05-26
In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ considering the level of accuracy aimed by future S4 surveys.
Rapid impact testing for quantitative assessment of large populations of bridges
NASA Astrophysics Data System (ADS)
Zhou, Yun; Prader, John; DeVitis, John; Deal, Adrienne; Zhang, Jian; Moon, Franklin; Aktan, A. Emin
2011-04-01
Although the widely acknowledged shortcomings of visual inspection have fueled significant advances in the areas of non-destructive evaluation and structural health monitoring (SHM) over the last several decades, the actual practice of bridge assessment has remained largely unchanged. The authors believe the lack of adoption, especially of SHM technologies, is related to the 'single structure' scenarios that drive most research. To overcome this, the authors have developed a concept for a rapid single-input, multiple-output (SIMO) impact testing device that will be capable of capturing modal parameters and estimating flexibility/deflection basins of common highway bridges during routine inspections. The device is composed of a trailer-mounted impact source (capable of delivering a 50 kip impact) and retractable sensor arms, and will be controlled by automated data acquisition, processing, and modal parameter estimation software. The research presented in this paper covers (a) the theoretical basis for SISO, SIMO and MIMO impact testing to estimate flexibility, (b) proof of concept numerical studies using a finite element model, and (c) a pilot implementation on an operating highway bridge. Results indicate that the proposed approach can estimate modal flexibility within a few percent of static flexibility; however, the estimated modal flexibility matrix is only reliable for the substructures associated with the various SIMO tests. To overcome this shortcoming, a modal 'stitching' approach for substructure integration to estimate the full eigenvector matrix is developed, and preliminary results of these methods are also presented.
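The quantity these SIMO tests target, the modal flexibility, follows directly from the identified modal parameters; the sketch below assembles it from mass-normalized mode shapes and natural frequencies and evaluates a deflection basin for a load pattern. The mode shapes, frequencies, and load are placeholders, not measured bridge data.

```python
import numpy as np

# Modal flexibility from identified modal parameters:
#   F = sum_i phi_i phi_i^T / omega_i^2   (mass-normalized shapes assumed)
rng = np.random.default_rng(3)
n_dof, n_modes = 8, 3
phi = rng.normal(size=(n_dof, n_modes))            # placeholder identified mode shapes
omega = np.array([12.0, 31.0, 55.0]) * 2 * np.pi   # placeholder natural frequencies [rad/s]

F = phi @ np.diag(1.0 / omega**2) @ phi.T          # modal flexibility matrix
unit_load = np.ones(n_dof)                         # e.g. a uniform load pattern
deflection_basin = F @ unit_load
print(np.round(deflection_basin, 6))
```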
NASA Astrophysics Data System (ADS)
Eisner, Stephanie; Huang, Shaochun; Majasalmi, Titta; Bright, Ryan; Astrup, Rasmus; Beldring, Stein
2017-04-01
Forests are recognized for their decisive effect on landscape water balance, with structural forest characteristics such as stand density or species composition determining energy partitioning and dominant flow paths. However, spatial and temporal variability in forest structure is often poorly represented in hydrological modeling frameworks, in particular in regional to large scale hydrological modeling and impact analysis. As a common practice, prescribed land cover classes (including different generic forest types) are linked to parameter values derived from literature, or parameters are determined by calibration. While national forest inventory (NFI) data provide comprehensive, detailed information on hydrologically relevant forest characteristics, their potential to inform hydrological simulation over larger spatial domains is rarely exploited. In this study we present a modeling framework that couples the distributed hydrological model HBV with forest structural information derived from the Norwegian NFI and multi-source remote sensing data. The modeling framework, set up for the whole of continental Norway at 1 km spatial resolution, is explicitly designed to study the combined and isolated impacts of climate change, forest management and land use change on hydrological fluxes. We use a forest classification system based on forest structure rather than biomes, which allows us to implicitly account for impacts of forest management on forest structural attributes. In the hydrological model, different forest classes are represented by three parameters: leaf area index (LAI), mean tree height and surface albedo. Seasonal cycles of LAI and surface albedo are dynamically simulated to make the framework applicable under climate change conditions. Based on a hindcast for the pilot regions Nord-Trøndelag and Sør-Trøndelag, we show how forest management has affected regional hydrological fluxes during the second half of the 20th century as contrasted with climate variability.
NASA Technical Reports Server (NTRS)
Kearsley, A. T.; Ball, A. D.; Wozniakiewicz, P. A.; Graham, G. A.; Burchell, M. J.; Cole, M. J.; Horz, F.; See, T. H.
2007-01-01
The Stardust spacecraft returned the first undoubted samples of cometary dust, with many grains embedded in the silica aerogel collector. Although many tracks contain one or more large terminal particles of a wide range of mineral compositions, there is also abundant material along the track walls. To help interpret the full particle size, structure and mass, both experimental simulation of impact by shots and numerical modeling of the impact process have been attempted. However, all approaches require accurate and precise measurement of impact track size parameters such as length, width and volume of specific portions. To make such measurements is not easy, especially if extensive aerogel fracturing and discoloration has occurred. In this paper we describe the application and limitations of laser confocal imagery for determination of aerogel track parameters, and for the location of particle remains.
The Impact of Missing Background Data on Subpopulation Estimation
ERIC Educational Resources Information Center
Rutkowski, Leslie
2011-01-01
Although population modeling methods are well established, a paucity of literature appears to exist regarding the effect of missing background data on subpopulation achievement estimates. Using simulated data that follows typical large-scale assessment designs with known parameters and a number of missing conditions, this paper examines the extent…
Modeling High-Impact Weather and Climate: Lessons From a Tropical Cyclone Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Done, James; Holland, Greg; Bruyere, Cindy
2013-10-19
Although the societal impact of a weather event increases with the rarity of the event, our current ability to assess extreme events and their impacts is limited not only by rarity but also by current model fidelity and a lack of understanding of the underlying physical processes. This challenge is driving fresh approaches to assess high-impact weather and climate. Recent lessons learned in modeling high-impact weather and climate are presented using the case of tropical cyclones as an illustrative example. Through examples using the Nested Regional Climate Model to dynamically downscale large-scale climate data, the need to treat bias in the driving data is illustrated. Domain size, location, and resolution are also shown to be critical and should be guided by the need to: include relevant regional climate physical processes; resolve key impact parameters; and accurately simulate the response to changes in external forcing. The notion of sufficient model resolution is introduced together with the added value in combining dynamical and statistical assessments to fill out the parent distribution of high-impact parameters. Finally, through the example of a tropical cyclone damage index, direct impact assessments are presented as powerful tools that distill complex datasets into concise statements on likely impact, and as highly effective communication devices.
Effects of 3D Earth structure on W-phase CMT parameters
NASA Astrophysics Data System (ADS)
Morales, Catalina; Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo
2017-04-01
The source inversion of the W-phase has demonstrated a great potential to provide fast and reliable estimates of the centroid moment tensor (CMT) for moderate to large earthquakes. It has since been implemented in different operational environments (NEIC-USGS, PTWC, etc.) with the aim of providing rapid CMT solutions. These solutions are particularly useful for tsunami warning purposes. Computationally, W-phase waveforms are usually synthesized by summation of normal modes at long periods (100-1000 s) for a spherical Earth model (e.g., PREM). Although the energy of these modes mainly stays in the mantle where lateral structural variations are relatively small, the impact of 3D heterogeneities on W-phase solutions has not yet been quantified. In this study, we investigate possible bias in W-phase source parameters due to unmodeled lateral structural heterogeneities. We generate a simulated dataset consisting of synthetic seismograms of large past earthquakes that accounts for the Earth's 3D structure. The W-phase algorithm is then used to invert the synthetic dataset for earthquake CMT parameters with and without added noise. Results show that the impact of 3D heterogeneities is generally larger for surface-waves than for W-phase waveforms. However, some discrepancies are noted between inverted W-phase parameters and target values. Particular attention is paid to the possible bias induced by the unmodeled 3D structure into the location of the W-phase centroid. Preliminary results indicate that the parameter that is most susceptible to 3D Earth structure seems to be the centroid depth.
Relative importance of local- and large-scale drivers of alpine soil microarthropod communities.
Mitchell, Ruth J; Urpeth, Hannah M; Britton, Andrea J; Black, Helaina; Taylor, Astrid R
2016-11-01
Nitrogen (N) deposition and climate are acknowledged drivers of change in biodiversity and ecosystem function at large scales. However, at a local scale, their impact on functions and community structure of organisms is filtered by drivers like habitat quality and food quality/availability. This study assesses the relative impact of large-scale factors, N deposition and climate (rainfall and temperature), versus local-scale factors of habitat quality and food quality/availability on soil fauna communities at 15 alpine moss-sedge heaths along an N deposition gradient in the UK. Habitat quality and food quality/availability were the primary drivers of microarthropod communities. No direct impacts of N deposition on the microarthropod community were observed, but induced changes in habitat quality (decline in moss cover and depth) and food quality (decreased vegetation C:N) associated with increased N deposition strongly suggest an indirect impact of N. Habitat quality and climate explained variation in the composition of the Oribatida, Mesostigmata, and Collembola communities, while only habitat quality significantly impacted the Prostigmata. Food quality and prey availability were important in explaining the composition of the oribatid and mesostigmatid mite communities, respectively. This study shows that, in alpine habitats, soil microarthropod community structure responds most strongly to local-scale variation in habitat quality and food availability rather than large-scale variation in climate and pollution. However, given the strong links between N deposition and the key habitat quality parameters, we conclude that N deposition indirectly drives changes in the soil microarthropod community, suggesting a mechanism by which large-scale drivers indirectly impact these functionally important groups.
Low energy peripheral scaling in nucleon-nucleon scattering and uncertainty quantification
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Amaro, J. E.; Ruiz Arriola, E.; Navarro Pérez, R.
2018-03-01
We analyze the peripheral structure of the nucleon-nucleon interaction for LAB energies below 350 MeV. To this end we transform the scattering matrix into the impact parameter representation by analyzing the scaled phase shifts (L + 1/2)δ_JLS(p) and the scaled mixing parameters (L + 1/2)ε_JLS(p) in terms of the impact parameter b = (L + 1/2)/p. According to the eikonal approximation, at large angular momentum L these functions should become a universal function of b, independent of L. This allows us to discuss in a rather transparent way the role of statistical and systematic uncertainties in the different long range components of the two-body potential. Implications for peripheral waves obtained in chiral perturbation theory interactions to fifth order (N5LO) or from the large body of NN data considered in the SAID partial wave analysis are also drawn from comparing them with other phenomenological high-quality interactions, constructed to fit scattering data as well. We find that both N5LO and SAID peripheral waves disagree by more than 5σ with the Granada-2013 statistical analysis, by more than 2σ with the 6 statistically equivalent potentials fitting the Granada-2013 database, and by about 1σ with the historical set of 13 high-quality potentials developed since the 1993 Nijmegen analysis.
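The impact-parameter scaling itself is simple to reproduce: each partial wave is mapped to b = (L + 1/2)/p and its phase shift is rescaled by (L + 1/2). The sketch below applies the mapping to a set of hypothetical phase shifts; in practice these would come from a partial-wave analysis or a potential model.

```python
import numpy as np

# Map partial-wave phase shifts delta_L(p) onto the impact-parameter variable
# b = (L + 1/2) / p (hbar*c restores units of fm). Phase shifts are placeholders.
hbar_c = 197.327  # MeV fm
p = 150.0         # c.m. momentum [MeV/c]
L_values = np.arange(3, 9)
delta_L = np.array([4.0, 2.1, 1.1, 0.55, 0.28, 0.14])  # degrees, illustrative

b = (L_values + 0.5) * hbar_c / p                      # impact parameter [fm]
scaled = (L_values + 0.5) * np.deg2rad(delta_L)        # scaled phase shift [rad]
for bi, si in zip(b, scaled):
    print(f"b = {bi:5.2f} fm   (L+1/2)*delta = {si:6.3f} rad")
```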
Geometric Modeling of Inclusions as Ellipsoids
NASA Technical Reports Server (NTRS)
Bonacuse, Peter J.
2008-01-01
Nonmetallic inclusions in gas turbine disk alloys can have a significant detrimental impact on fatigue life. Because large inclusions that lead to anomalously low lives occur infrequently, probabilistic approaches can be utilized to avoid the excessively conservative assumption of lifing to a large inclusion in a high stress location. A prerequisite to modeling the impact of inclusions on the fatigue life distribution is a characterization of the inclusion occurrence rate and size distribution. To help facilitate this process, a geometric simulation of the inclusions was devised. To make the simulation problem tractable, the irregularly sized and shaped inclusions were modeled as arbitrarily oriented ellipsoids with three independently dimensioned axes. Random orientation of the ellipsoid is accomplished through a series of three orthogonal rotations of axes. In this report, a set of mathematical models for the following parameters is described: the intercepted area of a randomly sectioned ellipsoid, the dimensions and orientation of the intercepted ellipse, the area of a randomly oriented sectioned ellipse, the depth and width of a randomly oriented sectioned ellipse, and the projected area of a randomly oriented ellipsoid. These parameters are necessary to determine an inclusion's potential to develop a propagating fatigue crack. Without these mathematical models, computationally expensive search algorithms would be required to compute these parameters.
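One of the listed quantities, the projected area of a randomly oriented ellipsoid, has a closed-form shadow area for a given viewing direction, which makes a quick Monte Carlo check straightforward; the semi-axis values below are illustrative, not measured inclusion dimensions.

```python
import numpy as np

# Projected (shadow) area of an ellipsoid with semi-axes (a, b, c) along a unit
# direction u in the body frame: A(u) = pi * sqrt(b^2 c^2 u1^2 + a^2 c^2 u2^2 + a^2 b^2 u3^2).
# Random orientation is emulated by drawing u uniformly on the sphere.
rng = np.random.default_rng(8)
a, b, c = 30.0, 12.0, 5.0                       # semi-axes [micrometers], illustrative
u = rng.normal(size=(100000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)    # uniform directions on the sphere

proj_area = np.pi * np.sqrt((b*c*u[:, 0])**2 + (a*c*u[:, 1])**2 + (a*b*u[:, 2])**2)
print(f"mean projected area: {proj_area.mean():.1f} um^2, "
      f"maximum possible: {np.pi*a*b:.1f} um^2")
```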
NASA Astrophysics Data System (ADS)
Coll, Marta; Navarro, Joan; Olson, Robert J.; Christensen, Villy
2013-10-01
We synthesized available information from ecological models at local and regional scales to obtain a global picture of the trophic position and ecological role of squids in marine ecosystems. First, static food-web models were used to analyze basic ecological parameters and indicators of squids: biomass, production, consumption, trophic level, omnivory index, predation mortality, diet, and the ecological role. In addition, we developed various dynamic temporal simulations using two food-web models that included squids in their parameterization, and we investigated potential impacts of fishing pressure and environmental conditions for squid populations and, consequently, for marine food webs. Our results showed that squids occupy a large range of trophic levels in marine food webs and show a large trophic width, reflecting the versatility in their feeding behaviors and dietary habits. Models illustrated that squids are abundant organisms in marine ecosystems, and have high growth and consumption rates, but these parameters are highly variable because squids are adapted to a large variety of environmental conditions. Results also show that squids can have a large trophic impact on other elements of the food web, and top-down control from squids to their prey can be high. In addition, some squid species are important prey of apical predators and may be keystone species in marine food webs. In fact, we found strong interrelationships between neritic squids and the populations of their prey and predators in coastal and shelf areas, while the role of squids in open ocean and upwelling ecosystems appeared more constrained to a bottom-up impact on their predators. Therefore, large removals of squids will likely have large-scale effects on marine ecosystems. In addition, simulations confirm that squids are able to benefit from a general increase in fishing pressure, mainly due to predation release, and quickly respond to changes triggered by the environment. Squids may thus be very sensitive to the effects of fishing and climate change.
High-mass diffraction in the QCD dipole picture
NASA Astrophysics Data System (ADS)
Bialas, A.; Navelet, H.; Peschanski, R.
1998-05-01
Using the QCD dipole picture of the BFKL pomeron, the cross-section of single diffractive dissociation of virtual photons at high energy and large diffractively excited masses is calculated. The calculation takes into account the full impact-parameter phase-space and thus allows one to obtain an exact value of the triple BFKL Pomeron vertex. It appears large enough to compensate for the perturbative 6-gluon coupling factor (α/π)^3, thus suggesting a rather appreciable diffractive cross-section.
Crater size estimates for large-body terrestrial impact
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.; Housen, Kevin R.
1988-01-01
Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980 and Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws which were among the first to include gravity dependence associated with increasing event size. At that time, using the results of experiments in dry sand and in water to provide bounds on crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments conducted in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy scaling exponent is not a universal constant but depends upon the target media. More recent work by Holsapple and Schmidt (1987) includes results for non-porous materials and provides a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest is presented. These include results for various target media and include the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
Impacts of Different Types of Measurements on Estimating Unsaturatedflow Parameters
NASA Astrophysics Data System (ADS)
Shi, L.
2015-12-01
This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
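A minimal sketch of a stochastic (perturbed-observation) EnKF update of the kind used in these studies is given below, with the soil hydraulic parameters carried in an augmented state vector; the ensemble size, observation operator, error levels, and observed values are illustrative assumptions, not the study's configuration.

```python
import numpy as np

# One EnKF analysis step: X holds state variables and hydraulic parameters.
rng = np.random.default_rng(4)
n_ens, n_state, n_obs = 100, 6, 2             # ensemble size, state dim, obs dim
X = rng.normal(size=(n_state, n_ens))          # forecast ensemble (states + parameters)
H = np.zeros((n_obs, n_state)); H[0, 0] = 1; H[1, 1] = 1   # observation operator
R = 0.05**2 * np.eye(n_obs)                    # observation error covariance
y = np.array([0.32, 0.28])                     # observed values (illustrative)

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                     # ensemble anomalies
P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
X_analysis = X + K @ (Y - H @ X)               # updated states and parameters
print(X_analysis.mean(axis=1))
```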
NASA Astrophysics Data System (ADS)
Du, Erhu; Cai, Ximing; Brozović, Nicholas; Minsker, Barbara
2017-05-01
Agricultural water markets are considered effective instruments to mitigate the impacts of water scarcity and to increase crop production. However, previous studies have limited understanding of how farmers' behaviors affect the performance of water markets. This study develops an agent-based model to explicitly incorporate farmers' behaviors, namely irrigation behavior (represented by farmers' sensitivity to soil water deficit λ) and bidding behavior (represented by farmers' rent seeking μ and learning rate β), in a hypothetical water market based on a double auction. The model is applied to the Guadalupe River Basin in Texas to simulate a hypothetical agricultural water market under various hydrological conditions. It is found that the joint impacts of the behavioral parameters on the water market are strong and complex. In particular, among the three behavioral parameters, λ affects the water market potential and its impacts on the performance of the water market are significant under most scenarios. The impacts of μ or β on the performance of the water market depend on the other two parameters. The water market could significantly increase crop production only when the following conditions are satisfied: (1) λ is small and (2) μ is small and/or β is large. The first condition requires efficient irrigation scheduling, and the second requires well-developed water market institutions that provide incentives to bid true valuation of water permits.
Imprint of non-linear effects on HI intensity mapping on large scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna, E-mail: umeobinna@gmail.com
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive for the first time the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order due to nonlinear bias parameters and redshift space distortion terms modulate the power spectrum on large scales. The large scale modulation may be understood to be due to the effective bias parameter and effective shot noise.
Imprint of non-linear effects on HI intensity mapping on large scales
NASA Astrophysics Data System (ADS)
Umeh, Obinna
2017-06-01
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive for the first time the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order due to nonlinear bias parameters and redshift space distortion terms modulate the power spectrum on large scales. The large scale modulation may be understood to be due to the effective bias parameter and effective shot noise.
Guarner, Jeannette; Atuan, Maria Ana; Nix, Barbara; Mishak, Christopher; Vejjajiva, Connie; Curtis, Cheri; Park, Sunita; Mullins, Richard
2010-01-01
Each institution sets specific parameters obtained by automated hematology analyzers to trigger manual counts. We designed a process to decrease the number of manual differential cell counts without impacting patient care. We selected new criteria that prompt manual counts and studied the impact these changes had over 2 days of work and in samples of patients with newly diagnosed leukemia, sickle cell disease, and presence of left shift. By using fewer parameters and expanding our ranges, we decreased the number of manual counts by 20%. The parameters that prompted manual counts most frequently were the presence of blast flags and nucleated red blood cells, 2 parameters that were not changed. The parameters that accounted for a decrease in the number of manual counts were the white blood cell count and large unstained cells. Eight of 32 patients with newly diagnosed leukemia did not show blast flags; however, other parameters triggered manual counts. In 47 patients with sickle cell disease, nucleated red cells and red cell variability prompted manual review. Bands were observed in 18% of the specimens, and 4% would not have been counted manually with the new criteria; for the latter, the mean band count was 2.6%. The process we followed to evaluate hematological parameters that reflex to manual differential cell counts increased efficiency without compromising patient care in our hospital system.
Garg, Abhishek D.; De Ruysscher, Dirk; Agostinis, Patrizia
2016-01-01
The emerging role of the cancer cell-immune cell interface in shaping tumorigenesis/anticancer immunotherapy has increased the need to identify prognostic biomarkers. Hence, our primary aim was to identify the immunogenic cell death (ICD)-derived metagene signatures in breast, lung and ovarian cancer that associate with improved patient survival. To this end, we analyzed the prognostic impact of differential gene-expression of 33 pre-clinically-validated ICD-parameters through a large-scale meta-analysis involving 3,983 patients (‘discovery’ dataset) across lung (1,432), breast (1,115) and ovarian (1,436) malignancies. The main results were also substantiated in ‘validation’ datasets consisting of 818 patients of same cancer-types (i.e. 285 breast/274 lung/259 ovarian). The ICD-associated parameters exhibited a highly-clustered and largely cancer type-specific prognostic impact. Interestingly, we delineated ICD-derived consensus-metagene signatures that exhibited a positive prognostic impact that was either cancer type-independent or specific. Importantly, most of these ICD-derived consensus-metagenes acted as attractor-metagenes and thereby ‘attracted’ highly co-expressing sets of genes, or convergent-metagenes. These convergent-metagenes also exhibited positive prognostic impact in respective cancer types. Remarkably, we found that the cancer type-independent consensus-metagene acted as an ‘attractor’ for cancer-specific convergent-metagenes. This reaffirms that the immunological prognostic landscape of cancer tends to segregate between cancer-independent and cancer-type specific gene signatures. Moreover, this prognostic landscape was largely dominated by the classical T cell activity/infiltration/function-related biomarkers. Interestingly, each cancer type tended to associate with biomarkers representing a specific T cell activity or function rather than pan-T cell biomarkers. Thus, our analysis confirms that ICD can serve as a platform for discovery of novel prognostic metagenes. PMID:27057433
Impact parameter smearing effects on isospin sensitive observables in heavy ion collisions
NASA Astrophysics Data System (ADS)
Li, Li; Zhang, Yingxun; Li, Zhuxia; Wang, Nan; Cui, Ying; Winkelbauer, Jack
2018-04-01
The validity of impact parameter estimation from the multiplicity of charged particles at low-intermediate energies is checked within the framework of the improved quantum molecular dynamics model. The simulations show that the multiplicity of charged particles cannot estimate the impact parameter of heavy ion collisions very well, especially for central collisions at beam energies lower than ˜70 MeV/u, due to the large fluctuations of the multiplicity of charged particles. The simulation results for central collisions defined by the charged-particle multiplicity are compared to those obtained using an impact parameter b = 2 fm: the charge distribution for 112Sn+112Sn at a beam energy of 50 MeV/u differs evidently between the two cases, and the chosen isospin-sensitive observable, the coalescence-invariant single neutron-to-proton yield ratio, is reduced by less than 15% for the neutron-rich systems 124,132Sn+124Sn at Ebeam = 50 MeV/u, while the coalescence-invariant double neutron-to-proton yield ratio shows no obvious difference. The sensitivity of the chosen isospin-sensitive observables to effective mass splitting is studied for central collisions defined by the multiplicity of charged particles. Our results show that the sensitivity is enhanced for 132Sn+124Sn relative to that for 124Sn+124Sn, and this reaction system should be measured in future experiments to study the effective mass splitting by heavy ion collisions.
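The smearing effect can be illustrated with a toy event sample: draw b geometrically, assign a fluctuating charged-particle multiplicity, and select "central" events by multiplicity; the resulting b distribution is far broader than a sharp b = 2 fm selection. The multiplicity parametrization and fluctuation width below are illustrative, not transport-model output.

```python
import numpy as np

# Toy illustration of impact-parameter smearing in a multiplicity-based centrality cut.
rng = np.random.default_rng(5)
n_events, b_max = 200000, 10.0
b = b_max * np.sqrt(rng.random(n_events))             # geometric b distribution, dP/db ~ b
mult_mean = 200.0 * (1.0 - (b / b_max) ** 2)           # assumed <N_ch>(b)
mult = rng.poisson(mult_mean) + rng.normal(0, 25, n_events)  # large fluctuations added

cut = np.quantile(mult, 0.90)                          # "most central 10%" by multiplicity
b_central = b[mult >= cut]
print(f"selected events: <b> = {b_central.mean():.2f} fm, "
      f"spread = {b_central.std():.2f} fm")
```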
Lumped Parameter Models for Predicting Nitrogen Transport in Lower Coastal Plain Watersheds
Devendra M. Amatya; George M. Chescheir; Glen P. Fernandez; R. Wayne Skaggs; F. Birgand; J.W. Gilliam
2003-01-01
In recent years, physically based, comprehensive, distributed watershed-scale hydrologic/water quality models have been developed and applied to evaluate cumulative effects of land and water management practices on receiving waters. Although these complex physically based models are capable of simulating the impacts of these changes in large watersheds, they are often...
Impact of water and feed deprivation on physiological parameters in steers
USDA-ARS?s Scientific Manuscript database
A report in rats demonstrated that dehydration as the result of 8 d of water deprivation increased leakage of endotoxin from the intestine (Zurovsky and Barbiro, 2000 Experimental and toxicologic pathology 52:37-42). Given the large number of gram negative bacteria in the rumen of cattle, a much sho...
Study on electromagnetic radiation and mechanical characteristics of coal during an SHPB test
NASA Astrophysics Data System (ADS)
Chengwu, Li; Qifei, Wang; Pingyang, Lyu
2016-06-01
Dynamic loads provided by a Split Hopkinson pressure bar are applied in the impact failure experiment on coal with an impact velocity of 4.174-17.652 m s-1. The mechanical property characteristics of coal and an electromagnetic radiation signal can be detected and measured during the experiment. The variation of coal stress, strain, incident energy, dissipated energy and other mechanical parameters is analyzed by the one-dimensional stress wave theory. The results suggest that, with increasing impact velocity, the mechanical parameters and electromagnetic radiation increase significantly, and the dissipated energy of the coal sample shows a strongly scattered growing trend during the impact failure process. Combined with the received energy of the electromagnetic radiation signal, the relationship between these mechanical parameters and electromagnetic radiation during coal burst failure can be analyzed with the grey correlation model. The results show that the descending order of the grey correlation degree between the mechanical characteristics and electromagnetic radiation energy is impact velocity, maximum stress, average stress, incident energy, average strain, maximum strain, average strain rate and dissipation energy. Because the correlation degrees for impact velocity and incident energy are relatively large, the main factor affecting the electromagnetic radiation energy of coal is the energy magnitude. The relationship between extreme stress and the radiation energy trend is also close, so the stress state of the coal has a greater impact on electromagnetic radiation than the strain and destruction. These findings can deepen research on the electromagnetic monitoring technique for coal-rock dynamic disasters.
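A hedged sketch of a grey relational degree calculation of the kind used to rank the mechanical parameters against the electromagnetic radiation energy is shown below; the normalization choice, resolution coefficient, and series values are illustrative, not the measured SHPB data.

```python
import numpy as np

def grey_relational_degree(reference, series, rho=0.5):
    """Grey relational degree of one comparison series against a reference series."""
    def norm(x):
        x = np.asarray(x, float)
        return (x - x.min()) / (x.max() - x.min())   # min-max normalization
    delta = np.abs(norm(reference) - norm(series))
    # grey relational coefficients with resolution coefficient rho, then their mean
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean()

emr_energy = [1.0, 1.8, 2.9, 4.2, 6.0]            # reference sequence (illustrative)
impact_velocity = [4.2, 7.5, 10.1, 13.8, 17.6]     # comparison sequence (illustrative)
print(round(grey_relational_degree(emr_energy, impact_velocity), 3))
```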
Pavier, Julien; Langlet, André; Eches, Nicolas; Jacquet, Jean-François
2015-01-01
The development and safety certification of less lethal projectiles require an understanding of the influence of projectile parameters on projectile-chest interaction and on the resulting terminal effect. Several energy-based criteria have been developed for chest injury assessment. Many studies consider kinetic energy (KE) or energy density as the only projectile parameter influencing terminal effect. In a common KE range (100-160 J), an analysis of firing tests of two 40 mm projectiles of different masses on animal surrogates was made to investigate the severity of the injuries in the thoracic region. Experimental results have shown that KE and calibre are not sufficient to discriminate between the two projectiles as regards their injury potential. Parameters, such as momentum, shape and impedance, influence the projectile-chest interaction and terminal effect. A simplified finite element model of projectile-structure interaction confirms the experimental tendencies. Within the range of ballistic parameters used, it has been demonstrated that maximum thoracic deflection is a useful parameter to predict the skeletal level of injury, and it largely depends on the projectile pre-impact momentum. However, numerical simulations show that these results are only valid for the experimental conditions used and cannot be generalised. Nevertheless, the transmitted impulse seems to be a more general factor governing the thorax deflection.
A model for estimating the impact of changes in children's vaccines.
Simpson, K N; Biddle, A K; Rabinovich, N R
1995-12-01
To assist in strategic planning for the improvement of vaccines and vaccine programs, an economic model was developed and tested that estimates the potential impact of vaccine innovations on health outcomes and costs associated with vaccination and illness. A multistep, iterative process of data extraction/integration was used to develop the model and the scenarios. Parameter replication, sensitivity analysis, and expert review were used to validate the model. The greatest impact on the improvement of health is expected to result from the production of less reactogenic vaccines that require fewer inoculations for immunity. The greatest economic impact is predicted from improvements that decrease the number of inoculations required. Scenario analysis may be useful for integrating health outcomes and economic data into decision making. For childhood infections, this analysis indicates that large cost savings can be achieved in the future if we can improve vaccine efficacy so that the number of required inoculations is reduced. Such an improvement represents a large potential "payback" for the United States and might benefit other countries.
Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties
NASA Astrophysics Data System (ADS)
Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.
2016-06-01
The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian-distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ˜10 per cent at k > 0.3 h Mpc-1. Over the entire range these reduce to about ˜5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties at the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc-1 results in a considerable loss in constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
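The sampling problem discussed here starts from the sample covariance of the realizations and the de-biased precision matrix; a minimal sketch, using Gaussian mock spectra as placeholders for the N-body P(k) realizations and the standard Hartlap correction factor, is given below.

```python
import numpy as np

# Sample covariance of P(k) from N realizations, and the Hartlap-corrected
# precision matrix used to propagate it into parameter constraints.
rng = np.random.default_rng(6)
n_real, n_bins = 5000, 40                    # number of realizations, number of k-bins
mock = rng.normal(size=(n_real, n_bins))      # placeholder P(k) realizations

mean_pk = mock.mean(axis=0)
C = np.cov(mock, rowvar=False)                # sample covariance (n_bins x n_bins)
hartlap = (n_real - n_bins - 2) / (n_real - 1)
precision = hartlap * np.linalg.inv(C)        # de-biased precision matrix
print("Hartlap factor:", round(hartlap, 4))
```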
Optimizing the Hydrological and Biogeochemical Simulations on a Hillslope with Stony Soil
NASA Astrophysics Data System (ADS)
Zhu, Q.
2017-12-01
Stony soils are widely distributed in the hilly area. However, traditional pedotransfer functions are not reliable in predicting the soil hydraulic parameters for these soils due to the impacts of rock fragments. Therefore, large uncertainties and errors may exist in the hillslope hydrological and biogeochemical simulations in stony soils due to poor estimations of soil hydraulic parameters. In addition, homogenous soil hydraulic parameters are usually used in traditional hillslope simulations. However, soil hydraulic parameters are spatially heterogeneous on the hillslope. This may also cause unreliable simulations. In this study, we obtained soil hydraulic parameters using five different approaches on a tea hillslope in Taihu Lake basin, China. These five approaches included (1) Rosetta predicted and spatially homogenous, (2) Rosetta predicted and spatially heterogeneous, (3) Rosetta predicted, rock fragment corrected and spatially homogenous, (4) Rosetta predicted, rock fragment corrected and spatially heterogeneous, and (5) extracted from observed soil-water retention curves fitted by a dual-pore function and spatially heterogeneous (observed). These five sets of soil hydraulic properties were then input into Hydrus-3D and DNDC to simulate the soil hydrological and biogeochemical processes. The aim of this study is to test two hypotheses. First, considering the spatial heterogeneity of soil hydraulic parameters will improve the simulations. Second, considering the impact of rock fragments on soil hydraulic parameters will improve the simulations.
Kepler-447b: a hot-Jupiter with an extremely grazing transit
NASA Astrophysics Data System (ADS)
Lillo-Box, J.; Barrado, D.; Santos, N. C.; Mancini, L.; Figueira, P.; Ciceri, S.; Henning, Th.
2015-05-01
We present the radial velocity confirmation of the extrasolar planet Kepler-447b, initially detected as a candidate by the Kepler mission. In this work, we analyze its transit signal and the radial velocity data obtained with the Calar Alto Fiber-fed Echelle spectrograph (CAFE). By simultaneously modeling both datasets, we obtain the orbital and physical properties of the system. According to our results, Kepler-447b is a Jupiter-mass planet (Mp = 1.37 (+0.48/-0.46) MJup), with an estimated radius of Rp = 1.65 (+0.59/-0.56) RJup (uncertainties provided in this work are 3σ unless specified). This translates into a sub-Jupiter density. The planet revolves every ~7.8 days in a slightly eccentric orbit (e = 0.123 (+0.037/-0.036)) around a G8V star with detected activity in the Kepler light curve. Kepler-447b transits its host with a large impact parameter (b = 1.076 (+0.112/-0.086)), which is one of the few planetary grazing transits confirmed so far and the first in the Kepler large crop of exoplanets. We estimate that only around 20% of the projected planet disk occults the stellar disk. The relatively large uncertainties in the planet radius are due to the large impact parameter and short duration of the transit. Planetary transits with large impact parameters (and in particular grazing transits) can be used to detect and analyze interesting configurations, such as additional perturbing bodies, stellar pulsations, rotation of a non-spherical planet, or polar spot-crossing events. All these scenarios will periodically modify the transit properties (depth, duration, and time of mid-transit), which could be detectable with sufficiently accurate photometry. Short-cadence photometric data (at the 1-min level) would help in the search for these exotic configurations in grazing planetary transits like that of Kepler-447b. This system could then be an excellent target for the forthcoming missions TESS and CHEOPS, which will provide the required photometric precision and cadence to study this type of transit. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (Heidelberg) and the Instituto de Astrofísica de Andalucía (IAA-CSIC, Granada).
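The "only around 20% of the projected planet disk occults the stellar disk" statement is a circle-circle overlap calculation: star of unit radius, planet of radius k = Rp/Rs, centers separated by the impact parameter b in stellar radii. The sketch below implements that geometry; the k value used is a placeholder, not the fitted radius ratio.

```python
import numpy as np

def overlap_fraction(b, k):
    """Fraction of the planet disk (radius k) overlapping a unit-radius stellar disk
    when the centers are separated by b (all lengths in stellar radii)."""
    if b >= 1.0 + k:
        return 0.0                       # no overlap
    if b <= 1.0 - k:
        return 1.0                       # planet disk fully on the stellar disk
    a1 = k**2 * np.arccos((b**2 + k**2 - 1.0) / (2 * b * k))
    a2 = np.arccos((b**2 + 1.0 - k**2) / (2 * b))
    a3 = 0.5 * np.sqrt((-b + k + 1) * (b + k - 1) * (b - k + 1) * (b + k + 1))
    return (a1 + a2 - a3) / (np.pi * k**2)

# b from the abstract; k is an assumed radius ratio for illustration only.
print(f"occulted fraction of planet disk: {overlap_fraction(1.076, 0.12):.2f}")
```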
Effects of Hurricane Katrina on the fish fauna of the Pascagoula River Drainage
J. Schaefer; P. Mickle; J. Spaeth; B.R. Kreiser; S.B. Adams; W. Matamoros; B. Zuber; P. Vigueira
2006-01-01
Large tropical storms can have dramatic effects on coastal, estuarine and terrestrial ecosystems. However, it is not as well understood how these types of disturbances might impact freshwater communities further inland. Storm surges can change critical water quality parameters for kilometers upstream, potentially causing subtle shifts in community structure or more...
Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schee, Jan; Stuchlík, Zdeněk, E-mail: jan.schee@fpf.slu.cz, E-mail: zdenek.stuchlik@fpf.slu.cz
We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter in dependence on the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetimes having a nearly flat central region.
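As an illustration of how such a critical impact parameter can be located, the sketch below numerically integrates the photon deflection angle in a static Bardeen metric with lapse f(r) = 1 - 2mr^2/(r^2 + g^2)^(3/2) (units G = c = m = 1) and scans the impact parameter for the maximum of the deflection. The specific charge g = 0.9 and the scan range are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Numerical sketch (not the paper's code): deflection angle of a photon in a
# static Bardeen spacetime, and a scan over the impact parameter b to locate
# the value giving maximal deflection. g is chosen in the no-horizon regime
# (g > 4*sqrt(3)/9 ~ 0.77 in units of m) purely for illustration.

def h(u, g):
    """u^2 * f(1/u) with u = 1/r; the photon turning point solves h(u) = 1/b^2."""
    return u**2 - 2.0 * u**3 / (1.0 + (g * u)**2)**1.5

def turning_point(b, g, u_max=50.0, n=20000):
    """Smallest u > 0 with h(u) = 1/b^2 (closest approach r0 = 1/u0)."""
    target = 1.0 / b**2
    us = np.linspace(1e-9, u_max, n)
    hs = h(us, g)
    idx = int(np.argmax(hs >= target))
    if hs[idx] < target:
        raise ValueError("no turning point found for this b")
    lo = us[idx - 1] if idx > 0 else 1e-12
    return brentq(lambda u: h(u, g) - target, lo, us[idx])

def deflection(b, g):
    """Deflection angle alpha(b); the substitution u = u0*(1 - t^2) removes the
    integrable endpoint singularity of the turning-point integral."""
    u0 = turning_point(b, g)
    def integrand(t):
        u = u0 * (1.0 - t**2)
        return 2.0 * u0 * t / np.sqrt(max(1.0 / b**2 - h(u, g), 1e-30))
    val, _ = quad(integrand, 0.0, 1.0, limit=200)
    return 2.0 * val - np.pi

g = 0.9                                # illustrative no-horizon specific charge
bs = np.linspace(2.0, 8.0, 120)
alphas = [deflection(b, g) for b in bs]
b_crit = bs[int(np.argmax(alphas))]
print(f"critical impact parameter ~ {b_crit:.2f} "
      f"(max deflection {max(alphas):.3f} rad)")
```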
NASA Astrophysics Data System (ADS)
Kopacz, Michał
2017-09-01
The paper attempts to assess the impact of variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, while the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty about the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of volatility and correlation of deposit parameters was analyzed in two respects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach, allowing the determination of the possible errors in the calculation of these measures in numerical terms, has been used. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results based on the average values, the maximum risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations result in limiting the range of variation of the geological parameters and economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation to conditions similar to those prevailing in the deposit.
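A minimal sketch of the simulation idea described above: whole records of deposit parameters are resampled by nonparametric bootstrap (resampling records jointly preserves their empirical correlation, in the spirit of an empirical copula) and mapped through a cash-flow model to NPV and IRR. The cash-flow mapping, parameter values and discount rate below are hypothetical stand-ins, not the author's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "historical" records: seam thickness [m] and calorific value [MJ/kg].
records = np.column_stack([
    rng.normal(2.0, 0.3, 300),      # thickness
    rng.normal(22.0, 1.5, 300),     # calorific value
])

def cash_flows(thickness, calorific, years=10):
    """Toy mapping from deposit parameters to annual cash flows [M$]."""
    revenue = 55.0 * thickness * (calorific / 22.0)        # illustrative only
    cost = 60.0
    return np.array([-300.0] + [revenue - cost] * years)   # capex, then operations

def npv(cf, rate=0.08):
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(cf))

def irr(cf, lo=-0.9, hi=1.0, tol=1e-6):
    """Simple bisection IRR (assumes a sign change of NPV(r) on [lo, hi])."""
    f = lambda r: npv(cf, r)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

npvs, irrs = [], []
for _ in range(2000):                        # bootstrap replicates
    idx = rng.integers(0, len(records), len(records))
    th, cv = records[idx].mean(axis=0)       # resampled mean deposit parameters
    cf = cash_flows(th, cv)
    npvs.append(npv(cf))
    irrs.append(irr(cf))

print(f"NPV mean {np.mean(npvs):.1f} M$, 5-95% range "
      f"[{np.percentile(npvs, 5):.1f}, {np.percentile(npvs, 95):.1f}]")
print(f"IRR mean {100 * np.mean(irrs):.2f} %")
```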
An Impact Ejecta Behavior Model for Small, Irregular Bodies
NASA Technical Reports Server (NTRS)
Richardson, J. E.; Melosh, H. J.; Greenberg, R.
2003-01-01
In recent years, spacecraft observations of asteroids 951 Gaspra, 243 Ida, 253 Mathilde, and 433 Eros have shown the overriding dominance of impact processes with regard to the structure and surface morphology of these small, irregular bodies. In particular, impact ejecta play an important role in regolith formation, ranging from small particles to large blocks, as well as surface feature modification and obscuration. To investigate these processes, a numerical model has been developed based upon the impact ejecta scaling laws provided by Housen, Schmidt, and Holsapple, and modified to more properly simulate the late-stage ejection velocities and ejecta plume shape changes (ejection angle variations) shown in impact cratering experiments. A target strength parameter has also been added to allow the simulation of strength-dominated cratering events in addition to the more familiar gravity-dominated cratering events. The result is a dynamical simulation which models -- via tracer particles -- the ejecta plume behavior, ejecta blanket placement, and impact crater area resulting from a specified impact on an irregularly shaped target body, which is modeled in 3-dimensional polygon fashion. This target body can be placed in a simple rotation state about one of its principal axes, with the impact site and projectile/target parameters selected by the user. The gravitational force from the irregular target body (on each tracer particle) is determined using the polygonized surface (polyhedron) gravity technique developed by Werner.
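The ejecta scaling the model builds on can be illustrated with the gravity-regime power law for ejection speed versus launch position, including a late-stage cutoff near the crater rim. The sketch below uses the generic Housen/Schmidt/Holsapple-type form; the constants C1, mu, p and n2 are illustrative placeholders rather than values from the paper.

```python
import numpy as np

# Hedged sketch of the kind of ejecta-velocity scaling used in such models:
# a power law in launch position with a late-stage cutoff near the crater rim.
# The constants below are illustrative, not the paper's calibrated values.

def ejection_speed(x, R, g, C1=0.55, mu=0.41, p=0.3, n2=1.0):
    """Ejection speed at launch position x (distance from the impact point)
    for a gravity-regime crater of radius R under surface gravity g."""
    x = np.asarray(x, dtype=float)
    core = C1 * np.sqrt(g * R) * (x / R) ** (-1.0 / mu)
    cutoff = np.clip(1.0 - x / (n2 * R), 0.0, None) ** p   # speed -> 0 at the rim
    return core * cutoff

# Example: 100 m radius crater on a small body with g = 0.005 m/s^2.
x = np.linspace(20.0, 99.0, 5)
print(ejection_speed(x, R=100.0, g=0.005))
```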
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction, in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with a statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of previous twofold drag errors.
Strengthening of surface layer of material by wave deformation multi-contact loading
NASA Astrophysics Data System (ADS)
Kirichek, A. V.; Barinov, S. V.; Aborkin, A. V.; Yashin, A. V.; Zaicev, A. A.
2018-03-01
It has been experimentally established that multi-contact shock systems can transmit a large total energy of the impact pulse to the deformation center. Thus, an increase in the number of tools in a shock system from two to four, with constant shock pulse energy, made it possible to increase the depth and the degree of hardening in the surface layer. The performance of multi-contact impact systems can be increased by 50% without degrading the hardening parameters by increasing the distance between the tools.
NASA Astrophysics Data System (ADS)
Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di
2016-09-01
Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded, high-resolution crop simulation has emerged as a useful tool for building this understanding. Large uncertainty exists in this application, obstructing its capacity as a tool to devise adaptation strategies. Increasing focus has been given to sources of uncertainty from climate scenarios, input data, and model structure, but uncertainties due to model parameters or calibration remain largely unknown. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) for simulating global-gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five strategies, shifting one specific parameter in each simulation, to calibrate the model and understand the effects of calibration. Regionalizing crop phenology or the harvest index appears effective for calibrating the model globally, but using various values of phenology generates pronounced differences in the estimated climate impact. However, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different model parameter values result in a modest uncertainty at the global level, with differences in the global yield change of less than 30% by the 2080s. The uncertainty decreases if model calibration or input data quality control is applied. Calibration has a larger effect at local scales, implying the possible types and locations for adaptation.
2014-01-01
Background Large colon impactions are a common cause of colic in the horse. There are no scientific reports on the clinical presentation, diagnostic tests and treatments used in first opinion practice for large colon impaction cases. The aim of this study was to describe the presentation, diagnostic approach and treatment at the primary assessment of horses with large colon impactions. Methods Data were collected prospectively from veterinary practitioners on the primary assessment of equine colic cases over a 12 month period. Inclusion criteria were a diagnosis of primary large colon impaction and positive findings on rectal examination. Data recorded for each case included history, signalment, clinical and diagnostic findings, treatment on primary assessment and final case outcome. Case outcomes were categorised into three groups: simple medical (resolved with single treatment), complicated medical (resolved with multiple medical treatments) and critical (required surgery, were euthanased or died). Univariable analysis using one-way ANOVA with Tukey's post-hoc test, Kruskal-Wallis with Dunn's post-hoc test and chi-squared analysis was used to compare between different outcome categories. Results 1032 colic cases were submitted by veterinary practitioners; 120 cases met the inclusion criteria for large colon impaction. Fifty-three percent of cases were categorised as simple medical, 36.6% as complicated medical, and 9.2% as critical. The largest proportion of cases (42.1%) occurred during the winter. Fifty-nine percent of horses had had a recent change in management, 43% of horses were not ridden, and 12.5% had a recent or current musculoskeletal injury. Mean heart rate was 43 bpm (range 26-88), and most cases showed mild signs of pain (67.5%) and reduced gut sounds (76%). Heart rate was significantly increased and gut sounds significantly decreased in critical compared to simple medical cases (p<0.05). Fifty different treatment combinations were used, with NSAIDs (93%) and oral fluids (71%) being administered most often. Conclusions Large colon impactions typically presented with mild signs of colic; heart rate and gut sounds were the most useful parameters to distinguish between simple and critical cases at the primary assessment. The findings of seasonal incidence and associated management factors are consistent with other studies. Veterinary practitioners currently use a wide range of different treatment combinations for large colon impactions. PMID:25238179
Effects of the seasonal cycle on superrotation in planetary atmospheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Jonathan L.; Vallis, Geoffrey K.; Potter, Samuel F.
2014-05-20
The dynamics of dry atmospheric general circulation model simulations forced by seasonally varying Newtonian relaxation are explored over a wide range of two control parameters and are compared with the large-scale circulation of Earth, Mars, and Titan in their relevant parameter regimes. Of the parameters that govern the behavior of the system, the thermal Rossby number (Ro) has previously been found to be important in governing the spontaneous transition from an Earth-like climatology of winds to a superrotating one with prograde equatorial winds, in the absence of a seasonal cycle. This case is somewhat unrealistic as it applies only if the planet has zero obliquity or if surface thermal inertia is very large. While Venus has nearly vanishing obliquity, Earth, Mars, and Titan (Saturn) all have obliquities of ∼25° and varying degrees of seasonality due to their differing thermal inertias and orbital periods. Motivated by this, we introduce a time-dependent Newtonian cooling to drive a seasonal cycle using idealized model forcing, and we define a second control parameter that mimics the non-dimensional thermal inertia of planetary surfaces. We then perform and analyze simulations across the parameter range bracketed by Earth-like and Titan-like regimes, assess the impact on the spontaneous transition to superrotation, and compare Earth, Mars, and Titan to the model simulations in the relevant parameter regime. We find that a large seasonal cycle (small thermal inertia) prevents model atmospheres with large thermal Rossby numbers from developing superrotation by the influences of (1) cross-equatorial momentum advection by the Hadley circulation and (2) hemispherically asymmetric zonal-mean zonal winds that suppress instabilities leading to equatorial momentum convergence. We also demonstrate that baroclinic instabilities must be sufficiently weak to allow superrotation to develop. In the relevant parameter regimes, our seasonal model simulations compare favorably to large-scale, seasonal phenomena observed on Earth and Mars. In the Titan-like regime the seasonal cycle in our model acts to prevent superrotation from developing, and it is necessary to increase the value of a third parameter, the atmospheric Newtonian cooling time, to achieve a superrotating climatology.
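For orientation, the thermal Rossby number referred to above is commonly defined as Ro = R ΔT_h / (Ω a)^2, with R the specific gas constant, ΔT_h the horizontal (equator-pole) temperature contrast, Ω the rotation rate and a the planetary radius. The quick sketch below evaluates it for rough Earth-like and Titan-like numbers; the temperature contrasts are illustrative, so the printed values only indicate the order of magnitude.

```python
# Back-of-envelope evaluation of the thermal Rossby number Ro = R*dT/(Omega*a)**2
# for Earth-like and Titan-like values. The equator-pole temperature contrasts dT
# are rough illustrative numbers, not model inputs from the paper.

def thermal_rossby(R_gas, dT, omega, radius):
    return R_gas * dT / (omega * radius) ** 2

earth = thermal_rossby(R_gas=287.0, dT=60.0, omega=7.29e-5, radius=6.37e6)
titan = thermal_rossby(R_gas=297.0, dT=4.0, omega=4.56e-6, radius=2.58e6)
print(f"Ro_T Earth ~ {earth:.2f}, Titan ~ {titan:.1f}")  # slow rotation -> large Ro
```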
The Impact of Mission Duration on a Mars Orbital Mission
NASA Technical Reports Server (NTRS)
Arney, Dale; Earle, Kevin; Cirillo, Bill; Jones, Christopher; Klovstad, Jordan; Grande, Melanie; Stromgren, Chel
2017-01-01
Performance alone is insufficient to assess the total impact of changing mission parameters on a space mission concept, architecture, or campaign; the benefit, cost, and risk must also be understood. This paper examines the impact on benefit, cost, and risk of changing the total mission duration of a human Mars orbital mission. The changes in the sizing of the crew habitat, including consumables and spares, were assessed as a function of duration, including trades of different life support strategies; this was used to assess the impact on transportation system requirements. The impact on benefit is minimal, while the impact on cost is dominated by the increases in transportation costs to achieve shorter total durations. The risk is expected to be reduced by decreasing total mission duration; however, large uncertainty exists around the magnitude of that reduction.
Impact of predator dormancy on prey-predator dynamics
NASA Astrophysics Data System (ADS)
Freire, Joana G.; Gallas, Marcia R.; Gallas, Jason A. C.
2018-05-01
The impact of predator dormancy on the population dynamics of phytoplankton-zooplankton in freshwater ecosystems is investigated using a simple model including dormancy, a strategy to avoid extinction. In addition to recently reported chaos-mediated mixed-mode oscillations, as the carrying capacity grows, we find surprisingly wide phases of nonchaos-mediated mixed-mode oscillations to be present well before the onset of chaos in the system. Nonchaos-mediated cascades display spike-adding sequences, while chaos-mediated cascades show spike-doubling. A host of braided periodic phases with exotic shapes is found embedded in a region of control parameters dominated by chaotic oscillations. We describe the organization of these complicated phases and show how they are interconnected and how their complexity unfolds as control parameters change. The novel nonchaos-mediated phases are found to be large and stable, even for low carrying capacity.
NASA Astrophysics Data System (ADS)
Montopoli, Mario; Cimini, Domenico; Marzano, Frank
2016-04-01
Volcanic eruptions inject both gas and solid particles into the atmosphere. Solid particles are made of mineral fragments of different sizes (from a few microns to meters), generally referred to as tephra. Tephra from volcanic eruptions has enormous impacts on social and economic activities through its effects on the environment, climate, public health, and air traffic. The size, density and shape of a particle determine its fall velocity and thus its residence time in the atmosphere. Larger particles tend to fall quickly in the proximity of the volcano, while smaller particles may remain suspended for several days and thus may be transported by winds for thousands of km. Thus, the impact of such hazards involves local as well as large-scale effects. Local effects involve mostly the large-sized particles, while large-scale effects are caused by the transport of the finest ejected tephra (ash) through the atmosphere. Forecasts of ash paths in the atmosphere are routinely run after eruptions using dispersion models. These models make use of meteorological and volcanic source parameters. The former are usually available as output of numerical weather prediction models or large-scale reanalyses. Source parameters characterize the volcanic eruption near the vent; these are mainly the ash mass concentration along the vertical column and the top altitude of the volcanic plume, which is strictly related to the flux of the mass ejected at the emission source. These parameters should be known accurately and continuously; otherwise, strong hypotheses are usually needed, leading to large uncertainty in the dispersion forecasts. However, direct observations during an eruption are typically dangerous and impractical. Thus, satellite remote sensing is often exploited to monitor volcanic emissions, using visible (VIS) and infrared (IR) channels available on both Low Earth Orbit (LEO) and Geostationary Earth Orbit (GEO) satellites. VIS and IR satellite imagery are very useful to monitor the dispersal of the fine-ash cloud, but tend to saturate near the source due to the strong optical extinction of ash cloud top layers. Conversely, observations at microwave (MW) channels from LEO satellites have been demonstrated to carry additional information near the volcano source due to their relatively lower opacity. This feature makes satellite MW radiometry complementary to IR radiometry for estimating source parameters close to the volcano emission, at the cost of coarser spatial resolution. The presentation shows the value of passive MW observations for the detection and quantitative retrieval of volcanic emission source parameters through the investigation of notable case studies, such as the eruptions of Grímsvötn (Iceland, May 2011) and Calbuco (Chile, April 2015), observed by the Special Sensor Microwave Imager/Sounder and the Advanced Technology Microwave Sounder.
Application of Climate Impact Metrics to Rotorcraft Design
NASA Technical Reports Server (NTRS)
Russell, Carl; Johnson, Wayne
2013-01-01
Multiple metrics are applied to the design of large civil rotorcraft, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.
Application of Climate Impact Metrics to Civil Tiltrotor Design
NASA Technical Reports Server (NTRS)
Russell, Carl R.; Johnson, Wayne
2013-01-01
Multiple metrics are applied to the design of a large civil tiltrotor, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marion, William F; Deline, Christopher A; Asgharzadeh, Amir
In this paper, we present the effect of installation parameters (tilt angle, height above ground, and albedo) on the bifacial gain and energy yield of three south-facing photovoltaic (PV) system configurations: a single module, a row of five modules, and five rows of five modules, utilizing a RADIANCE-based ray-tracing model. We show that height and albedo have a direct impact on the performance of bifacial systems. However, the impact of the tilt angle is more complicated. Seasonal optimum tilt angles are dependent on parameters such as height, albedo, size of the system, weather conditions, and time of the year. For a single bifacial module installed in Albuquerque, NM, USA (35 degrees N) with a reasonable clearance (~1 m) from the ground, the seasonal optimum tilt angle is lowest (~5 degrees) for the summer solstice and highest (~65 degrees) for the winter solstice. For larger systems, seasonal optimum tilt angles are usually higher and can be up to 20 degrees greater than that for a single module system. Annual simulations also indicate that for larger fixed-tilt systems installed on a highly reflective ground (such as snow or a white roofing material with an albedo of ~81%), the optimum tilt angle is higher than the optimum angle of the smaller size systems. We also show that modules in larger scale systems generate lower energy due to horizon blocking and the large shadowed area cast by the modules on the ground. For an albedo of 21%, the center module in a large array generates up to 7% less energy than a single bifacial module. To validate our model, we utilize measured data from Sandia National Laboratories' fixed-tilt bifacial PV testbed and compare it with our simulations.
Formation of the Orientale lunar multiring basin.
Johnson, Brandon C; Blair, David M; Collins, Gareth S; Melosh, H Jay; Freed, Andrew M; Taylor, G Jeffrey; Head, James W; Wieczorek, Mark A; Andrews-Hanna, Jeffrey C; Nimmo, Francis; Keane, James T; Miljković, Katarina; Soderblom, Jason M; Zuber, Maria T
2016-10-28
Multiring basins, large impact craters characterized by multiple concentric topographic rings, dominate the stratigraphy, tectonics, and crustal structure of the Moon. Using a hydrocode, we simulated the formation of the Orientale multiring basin, producing a subsurface structure consistent with high-resolution gravity data from the Gravity Recovery and Interior Laboratory (GRAIL) spacecraft. The simulated impact produced a transient crater, ~390 kilometers in diameter, that was not maintained because of subsequent gravitational collapse. Our simulations indicate that the flow of warm weak material at depth was crucial to the formation of the basin's outer rings, which are large normal faults that formed at different times during the collapse stage. The key parameters controlling ring location and spacing are impactor diameter and lunar thermal gradients. Copyright © 2016, American Association for the Advancement of Science.
Radiative Impacts of Cloud Heterogeneity and Overlap in an Atmospheric General Circulation Model
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Lee, D.; Sud, Y. C.; Suarez, M. J.
2012-01-01
The radiative impacts of introducing horizontal heterogeneity of layer cloud condensate, and vertical overlap of condensate and cloud fraction, are examined with the aid of a new radiation package operating in the GEOS-5 Atmospheric General Circulation Model. The impacts are examined in terms of diagnostic top-of-the-atmosphere shortwave (SW) and longwave (LW) cloud radiative effect (CRE) calculations for a range of assumptions and parameter specifications about the overlap. The investigation is conducted for two distinct cloud schemes, the one that comes with the standard GEOS-5 distribution, and another which has been recently used experimentally for its enhanced cloud microphysical capabilities; both are coupled to a cloud generator allowing arbitrary cloud overlap specification. We find that cloud overlap radiative impacts are significantly stronger for the operational cloud scheme, for which a change of cloud fraction overlap from maximum-random to generalized results in global changes of SW and LW CRE of approximately 4 Watts per square meter, and zonal changes of up to approximately 10 Watts per square meter. This is because large layer cloud fractions, and multi-layer situations with large numbers of atmospheric layers being simultaneously cloudy, occur less frequently than in the other scheme, conditions that make overlap details more important. The impact on CRE of the details of condensate distribution overlap is much weaker. Once generalized overlap is adopted, both cloud schemes are only modestly sensitive to the exact values of the overlap parameters. We also find that if one of the CRE components is overestimated and the other underestimated, both cannot be driven towards observed values by adjustments to cloud condensate heterogeneity and overlap alone.
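A rough sketch of what the overlap assumptions mean in practice, using a simple subcolumn cloud generator (not the GEOS-5 code): adjacent layers share the same random rank with probability alpha, where alpha = exp(-dz/L_decorr) corresponds to generalized (exponential-random) overlap, alpha = 1 to maximum overlap and alpha = 0 to random overlap. The cloud fractions and decorrelation length are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_cloud_cover(cf, alpha, n_subcol=20000):
    """Monte Carlo total cloud cover for layer cloud fractions cf (top->bottom),
    with rank correlation alpha[k] between layer k and the layer above."""
    n_lay = len(cf)
    ranks = rng.random((n_subcol, n_lay))
    for k in range(1, n_lay):
        keep = rng.random(n_subcol) < alpha[k]     # correlated with layer above
        ranks[keep, k] = ranks[keep, k - 1]
    cloudy = ranks < np.asarray(cf)                # rank below fraction -> cloudy
    return np.mean(cloudy.any(axis=1))

cf = [0.2, 0.4, 0.3, 0.1]                          # illustrative layer cloud fractions
dz, L = 1.0, 2.0                                   # layer thickness and decorrelation length [km]
alpha_gen = [0.0] + [np.exp(-dz / L)] * 3

print("generalized:", total_cloud_cover(cf, alpha_gen))
print("maximum    :", total_cloud_cover(cf, [0.0, 1.0, 1.0, 1.0]))
print("random     :", total_cloud_cover(cf, [0.0, 0.0, 0.0, 0.0]))
```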
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
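A minimal illustration of the Morris-style screening mentioned above, applied to a toy yield function rather than APSIM-Wheat (which is not reproduced here). It uses a radial one-at-a-time design: each elementary effect is the scaled change in output from perturbing one normalized parameter by a step delta, mu* (the mean absolute elementary effect) ranks overall influence, and sigma flags non-linearity or interactions. The parameter names and the toy function are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_yield(x):
    """Hypothetical stand-in for the crop model; x lies in [0, 1]^3."""
    phenology, rue, allocation = x
    return 4.0 * rue + 2.0 * phenology * allocation + 0.5 * allocation**2

def morris(model, n_params, n_traj=200, delta=0.25):
    """Radial one-at-a-time variant of Morris elementary-effects screening."""
    ee = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.random(n_params) * (1.0 - delta)    # keep x + delta inside [0, 1]
        y0 = model(x)
        for i in rng.permutation(n_params):         # one-at-a-time perturbations
            x2 = x.copy()
            x2[i] += delta
            ee[i].append((model(x2) - y0) / delta)
    mu_star = [np.mean(np.abs(e)) for e in ee]
    sigma = [np.std(e) for e in ee]
    return mu_star, sigma

mu_star, sigma = morris(toy_yield, n_params=3)
for name, m, s in zip(["phenology", "RUE", "allocation"], mu_star, sigma):
    print(f"{name:10s}  mu* = {m:.2f}  sigma = {s:.2f}")
```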
Fabrication and Performance of Large Format Transition Edge Sensor Microcalorimeter Arrays
NASA Technical Reports Server (NTRS)
Chervenak, James A.; Adams, James S.; Bandler, Simon R.; Busch, Sara E.; Eckart, M. E.; Ewin, A. E.; Finkbeiner, F. M.; Kilbourne, C. A.; Kelley, R. L.; Porst, Jan-Patrick;
2012-01-01
We have produced a variety of superconducting transition edge sensor array designs for microcalorimetric detection of x-rays. Designs include kilopixel scale arrays of relatively small sensors (75 micron pitch) atop a thick metal heatsinking layer as well as arrays of membrane-isolated devices on 250 micron pitch and smaller arrays of devices up to 600 micron pitch. We discuss the fabrication techniques used for each type of array focusing on unique aspects where processes vary to achieve the particular designs and required device parameters. For example, we evaluate various material combinations in the production of the thick metal heatsinking, including superconducting and normal metal adhesion layers. We also evaluate the impact of added heatsinking on the membrane isolated devices as it relates to basic device parameters. Arrays can be characterized with a time division SQUID multiplexer such that greater than 10 devices from an array can be measured in the same cooldown. Device parameters can be measured simultaneously so that environmental events such as thermal drifts or changes in magnetic fields can be controlled. For some designs, we will evaluate the uniformity of parameters impacting the intrinsic performance of the microcalorimeters under bias in these arrays and assess the level of thermal crosstalk.
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2017-12-01
Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered powerful tools for predicting future hydrological conditions in watershed systems over the past two decades. Streamflow and evapotranspiration are the two important components in watershed water balance estimation, as the former is the most commonly used indicator of the overall water budget and the latter is the second biggest component of the water budget (the biggest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model predictions, which can arise from errors in model parameters and input meteorological data, or from errors in the model representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital for water resources managers to make proper decisions based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically calibrated, physically based, semi-distributed hydrological model. The results of this study could provide valuable insights into applying hydrological models in large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on the hydrological process variables of interest under different climate change scenarios.
Probabilistic failure assessment with application to solid rocket motors
NASA Technical Reports Server (NTRS)
Jan, Darrell L.; Davidson, Barry D.; Moore, Nicholas R.
1990-01-01
A quantitative methodology is being developed for assessment of risk of failure of solid rocket motors. This probabilistic methodology employs best available engineering models and available information in a stochastic framework. The framework accounts for incomplete knowledge of governing parameters, intrinsic variability, and failure model specification error. Earlier case studies have been conducted on several failure modes of the Space Shuttle Main Engine. Work in progress on application of this probabilistic approach to large solid rocket boosters such as the Advanced Solid Rocket Motor for the Space Shuttle is described. Failure due to debonding has been selected as the first case study for large solid rocket motors (SRMs) since it accounts for a significant number of historical SRM failures. Impact of incomplete knowledge of governing parameters and failure model specification errors is expected to be important.
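The core computation in such a probabilistic framework can be sketched as Monte Carlo propagation of uncertain governing parameters through a limit-state function g(X), with failure when g < 0. The bond strength, applied stress and model-error variables below are purely illustrative stand-ins, not the actual SRM debond model.

```python
import numpy as np

# Generic Monte Carlo failure-probability sketch: sample the uncertain governing
# parameters, evaluate a limit-state function, and count the failing fraction.

rng = np.random.default_rng(11)
N = 1_000_000

bond_strength = rng.lognormal(mean=np.log(2.0), sigma=0.15, size=N)   # MPa (illustrative)
applied_stress = rng.normal(1.2, 0.25, N)                              # MPa (illustrative)
model_error = rng.normal(1.0, 0.1, N)                                  # specification-error factor

g = bond_strength - model_error * applied_stress   # failure when g < 0
p_fail = np.mean(g < 0.0)
print(f"estimated P(failure) ~ {p_fail:.2e}")
```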
USDA-ARS?s Scientific Manuscript database
Crop loss of onion bulbs during storage carries an exceptionally high economic impact since a large portion of the production expenses have been expended before storage occurs. Because of this, it is important to define practices that can reduce onion bulb losses caused by storage rots. This study...
The impact of the condenser on cytogenetic image quality in digital microscope system.
Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong
2013-01-01
Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high-quality cytogenetic images and facilitating the process of karyotyping, so that the efficiency and accuracy of diagnosis can be improved. This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Both theoretical analysis and experimental validation, through objectively evaluating a resolution test chart and subjectively observing large numbers of specimens, were conducted. The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%-70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed to achieve high-throughput continuous image scanning. Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice.
Impact induced depolarization of ferroelectric materials
NASA Astrophysics Data System (ADS)
Agrawal, Vinamra; Bhattacharya, Kaushik
2018-06-01
We study the large deformation dynamic behavior and the associated nonlinear electro-thermo-mechanical coupling exhibited by ferroelectric materials in adiabatic environments. This is motivated by a ferroelectric generator which involves pulsed power generation by loading the ferroelectric material with a shock, either by impact or a blast. Upon impact, a shock wave travels through the material inducing a ferroelectric to nonpolar phase transition giving rise to a large voltage difference in an open circuit situation or a large current in a closed circuit situation. In the first part of this paper, we provide a general continuum mechanical treatment of the situation assuming a sharp phase boundary that is possibly charged. We derive the governing laws, as well as the driving force acting on the phase boundary. In the second part, we use the derived equations and a particular constitutive relation that describes the ferroelectric to nonpolar phase transition to study a uniaxial plate impact problem. We develop a numerical method where the phase boundary is tracked but other discontinuities are captured using a finite volume method. We compare our results with experimental observations to find good agreement. Specifically, our model reproduces the observed exponential rise of charge as well as the resistance dependent Hugoniot. We conclude with a parameter study that provides detailed insight into various aspects of the problem.
Curvature constraints from large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dio, Enea Di; Montanari, Francesco; Raccanelli, Alvise
We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter Ω_K with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model-independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
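The two quantities at play, the Fisher forecast error and the best-fit shift caused by a neglected signal component, can be sketched generically as follows. The derivative vectors, data variances and the "magnification-like" term below are hypothetical toy numbers, not the survey specification used in the paper.

```python
import numpy as np

# Toy Fisher-matrix sketch: sigma(theta_i) = sqrt((F^-1)_ii) with
# F_ij = sum_l dC_l/dtheta_i * dC_l/dtheta_j / var_l, plus the standard linear
# estimate of the parameter bias induced by a neglected contribution Delta_C.

rng = np.random.default_rng(3)
n_data = 50                                   # e.g. band powers of angular spectra

# Hypothetical derivatives of the observable w.r.t. (Omega_K, second parameter)
dC = np.column_stack([rng.normal(1.0, 0.3, n_data),
                      rng.normal(0.5, 0.2, n_data)])
var = np.full(n_data, 0.5**2)                 # data variances

F = dC.T @ (dC / var[:, None])                # Fisher matrix
cov = np.linalg.inv(F)
print("sigma(Omega_K) =", np.sqrt(cov[0, 0]))

# Best-fit shift if a signal component (stand-in for magnification) is omitted:
delta_C = rng.normal(0.2, 0.05, n_data)
bias = cov @ (dC.T @ (delta_C / var))
print("bias on (Omega_K, p2):", bias)
```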
A Systematic Evaluation of Blood Serum and Plasma Pre-Analytics for Metabolomics Cohort Studies
Jobard, Elodie; Trédan, Olivier; Postoly, Déborah; André, Fabrice; Martin, Anne-Laure; Elena-Herrmann, Bénédicte; Boyault, Sandrine
2016-01-01
The recent thriving development of biobanks and associated high-throughput phenotyping studies requires the elaboration of large-scale approaches for monitoring biological sample quality and compliance with standard protocols. We present a metabolomic investigation of human blood samples that delineates pitfalls and guidelines for the collection, storage and handling procedures for serum and plasma. A series of eight pre-processing technical parameters is systematically investigated along variable ranges commonly encountered across clinical studies. While metabolic fingerprints, as assessed by nuclear magnetic resonance, are not significantly affected by altered centrifugation parameters or delays between sample pre-processing (blood centrifugation) and storage, our metabolomic investigation highlights that both the delay and storage temperature between blood draw and centrifugation are the primary parameters impacting serum and plasma metabolic profiles. Storing the blood drawn at 4 °C is shown to be a reliable routine to confine variability associated with idle time prior to sample pre-processing. Based on their fine sensitivity to pre-analytical parameters and protocol variations, metabolic fingerprints could be exploited as valuable ways to determine compliance with standard procedures and quality assessment of blood samples within large multi-omic clinical and translational cohort studies. PMID:27929400
Field Data on Head Injuries in Side Airbag Vehicles in Lateral Impact
Yoganandan, Narayan; Pintar, Frank A.; Gennarelli, Thomas A.
2005-01-01
Field data on side airbag deployments in lateral crashes and head injuries have largely remained anecdotal. Consequently, the purpose of this research was to report head injuries in lateral motor vehicle impacts. Data from the National Automotive Sampling System files were extracted from side impacts associated with side airbag deployments. Matched pairs with similar vehicle characteristics but without side airbags were also extracted. All data were limited to the United States Federal Motor vehicle Safety Standards FMVSS 214 compliant vehicles so that the information may be more effectively used in the future. In this study, some fundamental analyses are presented regarding occupant- and vehicle-related parameters. PMID:16179147
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependence on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
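A small sketch of the idea under the linear-quadratic model: the RBE at ion dose d follows from the isoeffect condition α_x·D_x + β_x·D_x² = α_ion·d + β_ion·d², and first-order sensitivity indices S_i = Var(E[Y|X_i])/Var(Y) are estimated here by a simple binning estimator on Monte Carlo samples. The nominal (α, β) values, their spreads and the dose per fraction are illustrative, not clinical data or the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
d = 2.0                                           # ion dose per fraction [Gy] (illustrative)

X = {
    "alpha_x": rng.normal(0.10, 0.02, N),         # photon alpha [1/Gy]
    "beta_x":  rng.normal(0.05, 0.01, N),         # photon beta  [1/Gy^2]
    "alpha_i": rng.normal(0.45, 0.15, N),         # ion alpha    [1/Gy]
    "beta_i":  rng.normal(0.05, 0.02, N),         # ion beta     [1/Gy^2]
}

def rbe(alpha_x, beta_x, alpha_i, beta_i, dose):
    """Isoeffective photon dose over ion dose, from the linear-quadratic model."""
    effect = alpha_i * dose + beta_i * dose**2
    d_x = (np.sqrt(alpha_x**2 + 4.0 * beta_x * effect) - alpha_x) / (2.0 * beta_x)
    return d_x / dose

Y = rbe(X["alpha_x"], X["beta_x"], X["alpha_i"], X["beta_i"], d)

def first_order_index(xi, y, bins=50):
    """Var of the conditional mean of y given xi, divided by Var(y)."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for name, xi in X.items():
    print(f"S({name}) ~ {first_order_index(xi, Y):.2f}")
```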
The Superconducting Supercollider and US Science Policy
NASA Astrophysics Data System (ADS)
Marburger, John H.
2014-06-01
Reasons for the Superconducting Supercollider's (SSC's) termination include significant changes in the attitude of the government towards large scientific projects originating with management reforms introduced decades earlier. In the 1980s, the government insisted on inclusion of elements of these reforms in the SSC's management contract, including increased demands for accountability, additional liability for contractors, and sanctions for infractions. The SSC's planners could not have opted out of the reforms, which were by then becoming part of all large publicly funded projects. Once these reforms were in place, management mistakes in the SSC's planning and construction became highly visible, leading to termination of the machine. This episode contains two key lessons about science policy. One is that the momentum of the government's management reforms was unstoppable, and its impact on large scientific facilities and projects could not be reversed. The other is that specific measures such as cost and schedule-tracking systems to provide measures of program performance and impact were also inevitable; large scientific projects needed new parameters of accountability and transparency in what can be called the Principle of Assurance.
Influence of solidification on the impact of supercooled water drops onto cold surfaces
NASA Astrophysics Data System (ADS)
Li, Hai; Roisman, Ilia V.; Tropea, Cameron
2015-06-01
This study presents an experimental investigation of the impact of a supercooled drop onto hydrophilic and superhydrophobic substrates. The aim is to better understand the process of airframe icing caused by supercooled large droplets, which has recently been identified as a severe hazard in aviation. The Weber number and Reynolds number of the impinging drop ranged from 200 to 300 and from 2600 to 5800, respectively. Drop impact, spreading, and rebound were observed using a high-speed video system. The maximum spreading diameter of an impacting drop on hydrophilic surfaces was measured. The temperature effect on this parameter was only minor over a wide range of drop and substrate temperatures. However, ice/water mixtures emerged when both the drop and substrate temperatures were below 0 °C. Similarly, drop rebound on superhydrophobic substrates was significantly hindered by solidification when supercooled drops impacted substrates below the freezing point. The minimum receding diameter and the speed of ice accretion on the substrate were measured for various wall temperatures. Both parameters increased almost linearly with decreasing wall temperature, but eventually leveled off beyond a certain substrate temperature. The rate of ice formation on the substrate was significantly higher than the growth rate of free ice dendrites, implying that multiple nucleation sites were present.
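For reference, the two similarity parameters quoted above are We = ρV²D/σ (inertia versus surface tension) and Re = ρVD/μ (inertia versus viscosity). The quick check below uses approximate properties of supercooled water near -10 °C and a hypothetical drop size and speed chosen to land in the quoted ranges.

```python
# Quick check of the quoted impact parameters (We ~ 200-300, Re ~ 2600-5800).
# Fluid properties are approximate values for supercooled water near -10 C;
# the drop diameter and impact speed are hypothetical.

rho, sigma, mu = 998.0, 0.077, 2.6e-3   # kg/m^3, N/m, Pa*s
D, V = 2.4e-3, 3.0                      # drop diameter [m] and impact speed [m/s]

We = rho * V**2 * D / sigma             # inertia vs. surface tension
Re = rho * V * D / mu                   # inertia vs. viscosity
print(f"We = {We:.0f}, Re = {Re:.0f}")
```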
Single particle momentum and angular distributions in hadron-hadron collisions at ultrahigh energies
NASA Technical Reports Server (NTRS)
Chou, T. T.; Chen, N. Y.
1985-01-01
The forward-backward charged multiplicity distribution P(n_F, n_B) of events in the 540 GeV antiproton-proton collider has been extensively studied by the UA5 Collaboration. It was pointed out that the distribution with respect to n = n_F + n_B satisfies approximate KNO scaling and that with respect to Z = n_F - n_B is binomial. The geometrical model of hadron-hadron collision interprets the large multiplicity fluctuation as due to the widely different nature of collisions at different impact parameters b. For a single impact parameter b, the collision in the geometrical model should exhibit stochastic behavior. This separation of the stochastic and nonstochastic (KNO) aspects of multiparticle production processes gives conceptually a lucid and attractive picture of such collisions, leading to the concept of the partition temperature T_p and the single particle momentum spectrum to be discussed in detail.
Tidal capture of stars by a massive black hole
NASA Technical Reports Server (NTRS)
Novikov, I. D.; Pethick, C. J.; Polnarev, A. G.
1992-01-01
The processes leading to tidal capture of stars by a massive black hole and the consequences of these processes in a dense stellar cluster are discussed in detail. When the amplitude of a tide and the subsequent oscillations are sufficiently large, the energy deposited in a star after periastron passage and formation of a bound orbit cannot be estimated directly using the linear theory of oscillations of a spherical star, but rather numerical estimates must be used. The evolution of a star after tidal capture is discussed. The maximum ratio R of the cross-section for tidal capture to that for tidal disruption is about 3 for real systems. For the case of a stellar system with an empty capture loss cone, even in the case when the impact parameter for tidal capture only slightly exceeds the impact parameter for direct tidal disruption, tidal capture would be much more important than tidal disruption.
NASA Astrophysics Data System (ADS)
Gillmann, Cedric; Golabek, Gregor; Tackley, Paul; Raymond, Sean
2017-04-01
During the end of accretion, the so-called Late Veneer phase, while the bulk of the mass of the terrestrial planets is already in place, a substantial number of large collisions can still occur. Those impacts are thought to be responsible for the distribution of the highly siderophile elements. They are also likely to have a strong effect on volatile distribution and mantle convection. We study how Late Veneer impacts modify the evolution of Venus and its atmosphere, using a coupled numerical simulation. We focus on volatile exchanges and their effects on surface conditions. Mantle dynamics, volcanism and degassing processes lead to an input of gases into the atmosphere and are modeled using the StagYY mantle convection code. Volatile losses are estimated through atmospheric escape modeling. It involves two different aspects: hydrodynamic escape (0-500 Myr) and non-thermal escape. Hydrodynamic escape is massive but occurs only when the solar energy input is strong. Post-4 Ga escape from non-thermal processes is comparatively low but long-lived. The resulting state of the atmosphere is used to calculate the greenhouse effect and surface temperature, through a one-dimensional gray radiative-convective model. Large impacts are capable of contributing to (i) atmospheric escape, (ii) volatile replenishment and (iii) energy transfer to the mantle. We test various impactor compositions, impact parameters (velocity, location, size, and timing) and eroding powers. The scenarios we tested are adapted from stochastic numerical simulations (Raymond et al., 2013). Impactor sizes are dominated by large bodies (R>500 km). Erosion of the atmosphere by a few large impacts appears limited; swarms of smaller, more mass-effective impactors seem required for this effect to be significant. Large impactors have two main effects on the atmosphere. They can (i) create a large input of volatiles from the melting they cause during the impact and through the volatiles they carry. This leads to an increase in atmospheric density and surface temperatures. However, early impacts can also (ii) deplete the mantle of Venus and (assuming strong early escape) ultimately remove volatiles from the system, leading to lower late degassing and lower surface temperatures. The competition between those effects depends on the time of the impact, which directly governs the strength of atmospheric losses.
Off-Center Collisions between Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Ricker, P. M.
1998-03-01
We present numerical simulations of off-center collisions between galaxy clusters made using a new hydrodynamical code based on the piecewise-parabolic method (PPM) and an isolated multigrid potential solver. The current simulations follow only the intracluster gas. We have performed three high-resolution (256 × 128^2) simulations of collisions between equal-mass clusters using a nonuniform grid with different values of the impact parameter (0, 5, and 10 times the cluster core radius). Using these simulations, we have studied the variation in equilibration time, luminosity enhancement during the collision, and structure of the merger remnant with varying impact parameter. We find that in off-center collisions the cluster cores (the inner regions where the pressure exceeds the ram pressure) behave quite differently from the clusters' outer regions. A strong, roughly ellipsoidal shock front, similar to that noted in previous simulations of head-on collisions, enables the cores to become bound to each other by dissipating their kinetic energy as heat in the surrounding gas. These cores survive well into the collision, dissipating their orbital angular momentum via spiral bow shocks. After the ellipsoidal shock has passed well outside the interaction region, the material left in its wake falls back onto the merger remnant formed through the inspiral of the cluster cores, creating a roughly spherical accretion shock. For less than one-half of a sound crossing time after the cores first interact, the total X-ray luminosity increases by a large factor; the magnitude of this increase depends sensitively on the size of the impact parameter. Observational evidence of the ongoing collision, in the form of bimodality and distortion in projected X-ray surface brightness and temperature maps, is present for one to two sound crossing times after the collision but only for special viewing angles. The remnant actually requires at least five crossing times to reach virial equilibrium. Since the sound crossing time can be as large as 1-2 Gyr, the equilibration time can thus be a substantial fraction of the age of the universe. The final merger remnant is very similar for impact parameters of 0 and 5 core radii. It possesses a roughly isothermal core with central density and temperature twice the initial values for the colliding clusters. Outside the core, the temperature drops as r^-1, and the density roughly as r^-3.8. The core radius shows a small increase due to shock heating during the merger. For an impact parameter of 10 core radii, the core of the remnant possesses a more flattened density profile with a steeper drop-off outside the core. In both off-center cases, the merger remnant rotates, but only for the 10 core-radius case does this appear to have an effect on the structure of the remnant.
Rotating Rig Development for Droplet Deformation/Breakup and Impact Induced by Aerodynamic Surfaces
NASA Technical Reports Server (NTRS)
Feo, A.; Vargas, M.; Sor, A.
2012-01-01
This work presents the development of a Rotating Rig Facility by the Instituto Nacional de Tecnica Aeroespacial (INTA) in cooperation with the NASA Glenn Research Center. The facility is located at the INTA installations near Madrid, Spain. It has been designed to study the deformation, breakup and impact of large droplets induced by aerodynamic bodies. These physical phenomena are important because Supercooled Large Droplets in icing clouds can change the impingement efficiency of the droplets on the body if the phenomena are not taken into account. The important variables and the similarity parameters that enter this problem are presented. The facility's components are described and some possible set-ups are explained. Application examples from past experiments are presented in order to indicate the capabilities of the new facility.
The generation of gravitational waves. III - Derivation of bremsstrahlung formulae
NASA Technical Reports Server (NTRS)
Kovacs, S. J.; Thorne, K. S.
1977-01-01
Formulas are derived describing the gravitational waves produced by a stellar encounter of the following type. The two stars have stationary (i.e., nonpulsating) nearly Newtonian structures with arbitrary relative masses; they fly past each other with an arbitrary relative velocity; and their impact parameter is sufficiently large that they gravitationally deflect each other through an angle that is small as compared with 90 deg.
Evaluating the safety impact of adaptive cruise control in traffic oscillations on freeways.
Li, Ye; Li, Zhibin; Wang, Hao; Wang, Wei; Xing, Lu
2017-07-01
Adaptive cruise control (ACC) has been considered one of the critical components of automated driving. ACC adjusts vehicle speed automatically based on measurements of the ego-vehicle and the leading vehicle. Current commercial ACCs are designed as comfort and convenience systems; little attention has been paid to the safety impacts of ACC, especially in traffic oscillations, when crash risks are highest. The primary objective of this study was to evaluate the impacts of ACC parameter settings on rear-end collision risk on freeways. First, the occurrence of a rear-end collision in a stop-and-go wave was analyzed. A car-following model with integrated ACC was developed for simulation analysis. Time-to-collision-based factors were calculated as surrogate safety measures of collision risk. We also evaluated different market penetration rates, considering that the adoption of ACC will be a gradual process. The results showed that the safety impacts of ACC were largely affected by the parameter settings. Smaller time delays and larger time gaps improved safety performance, but inappropriate parameter settings increased collision risks and caused traffic disturbances. A greater reduction of collision risk was achieved as the ACC vehicle penetration rate increased, especially in the initial stage with penetration rates of less than 30%. This study also showed that in the initial stage, the combination of ACC and a variable speed limit achieved better safety improvements on congested freeways than either technique alone. Copyright © 2017 Elsevier Ltd. All rights reserved.
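A minimal sketch of the time-to-collision (TTC) surrogate measure referred to above is shown below. The threshold, time step and synthetic trajectory are assumptions for demonstration only, not the parameter settings or car-following model studied in the paper.

```python
# Illustrative computation of time-to-collision (TTC), a common surrogate
# safety measure. Threshold and trajectory values are placeholders.

def time_to_collision(gap_m, v_follow_ms, v_lead_ms):
    """TTC in seconds; returns None when the follower is not closing the gap."""
    closing_speed = v_follow_ms - v_lead_ms
    if closing_speed <= 0.0:
        return None
    return gap_m / closing_speed

def exposure_below_threshold(samples, threshold_s=3.0, dt_s=0.1):
    """Total time (s) spent with TTC below a risk threshold along a trajectory."""
    risky = 0.0
    for gap, v_f, v_l in samples:
        ttc = time_to_collision(gap, v_f, v_l)
        if ttc is not None and ttc < threshold_s:
            risky += dt_s
    return risky

# A short synthetic stop-and-go episode: (gap, follower speed, leader speed).
trace = [(20.0, 25.0, 20.0), (15.0, 24.0, 15.0), (10.0, 20.0, 12.0), (8.0, 15.0, 14.0)]
print(f"time exposed to TTC < 3 s: {exposure_below_threshold(trace):.1f} s")
```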
NASA Astrophysics Data System (ADS)
Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.
2015-12-01
Land Surface Models (LSM) coupled with River Routing Models (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs, as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. In particular, the parameters used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, because of the lack of a global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. Satellite observations are a complementary alternative to in situ data. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observations of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to its time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while discharge is more affected by parameters from the whole upstream drainage area. Understanding the behavior of the model output variance will have a direct impact on the design and performance of the ensemble-based data assimilation platform, in which uncertainties are also represented by variances. It will help to select more objectively the RRM parameters to correct.
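The variance decomposition mentioned above can be illustrated with a pick-freeze (Saltelli-style) estimator of first-order sensitivity indices. The toy discharge model below (width, depth and a friction coefficient feeding a Manning-like relation) is a stand-in invented for this sketch; it is not the ISBA-TRIP code or its actual parameterization.

```python
import numpy as np

# Pick-freeze estimate of first-order Sobol indices for a toy routing model.
# The model and parameter ranges are illustrative assumptions only.

rng = np.random.default_rng(0)
names = ["river_width", "river_depth", "friction_coef"]

def toy_discharge(x):
    width, depth, friction = x[:, 0], x[:, 1], x[:, 2]
    # Manning-like relation with a fixed slope of 1e-4, purely illustrative.
    return width * depth ** (5.0 / 3.0) / friction * np.sqrt(1e-4)

n = 20000
a = rng.uniform([50, 1, 0.02], [500, 10, 0.08], size=(n, 3))
b = rng.uniform([50, 1, 0.02], [500, 10, 0.08], size=(n, 3))
y_a, y_b = toy_discharge(a), toy_discharge(b)
var_y = np.var(np.concatenate([y_a, y_b]))

for i, name in enumerate(names):
    ab_i = a.copy()
    ab_i[:, i] = b[:, i]                       # vary only parameter i
    y_ab = toy_discharge(ab_i)
    s_i = np.mean(y_b * (y_ab - y_a)) / var_y  # Saltelli (2010) estimator
    print(f"first-order index S_{name}: {s_i:.2f}")
```

Repeating such an estimate at each time step of a hydrological year gives the time-dependent sensitivity picture described in the abstract.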
Schmitt, Neal; Golubovich, Juliya; Leong, Frederick T L
2011-12-01
The impact of measurement invariance, and of the provision for partial invariance in confirmatory factor analytic models, on factor intercorrelations, latent mean differences, and estimates of relations with external variables is investigated for measures of two widely assessed sets of constructs: the Big Five personality factors and the six Holland interests (RIASEC). In comparing models that include provisions for partial invariance with models that do not, the results indicate quite small differences in the parameter estimates for the relations between factors, one relatively large standardized mean difference in factors between the subgroups compared, and relatively small differences in the regression coefficients when the factors are used to predict external variables. The results provide support for the use of partially invariant models, but there does not seem to be a great deal of difference between structural coefficients whether or not the measurement model includes separate estimates of subgroup parameters that differ across subgroups. Future research should include simulations in which the impact of various factors related to invariance is estimated.
NASA Astrophysics Data System (ADS)
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. Falsely matched pairs are eliminated by a novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world miniature tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel accuracy in cases with small or large translation, deformation, or rotation.
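The sketch below shows the basic feature-based seeding idea using OpenCV's ORB detector and a brute-force matcher: matched keypoint displacements provide a crude initial guess for the subpixel search. The file names are placeholders, and the FG-GMM registration and RKHS deformation field of the paper are not reproduced here.

```python
import cv2
import numpy as np

# Feature-based initial guess for DIC (simplified): ORB keypoints are matched
# between the reference and deformed images, and the median displacement of
# the matches seeds the subpixel refinement. "reference.png"/"deformed.png"
# are placeholder file names.

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("deformed.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_cur, des_cur = orb.detectAndCompute(cur, None)

# Hamming distance with cross-checking removes many false pairs up front.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)[:200]

disp = np.array([np.subtract(kp_cur[m.trainIdx].pt, kp_ref[m.queryIdx].pt)
                 for m in matches])
u0, v0 = np.median(disp, axis=0)   # robust rigid-translation initial guess
print(f"initial guess for the POIs: u ~ {u0:.2f} px, v ~ {v0:.2f} px")
```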
Key Performance Parameter Driven Technology Goals for Electric Machines and Power Systems
NASA Technical Reports Server (NTRS)
Bowman, Cheryl; Jansen, Ralph; Brown, Gerald; Duffy, Kirsten; Trudell, Jeffrey
2015-01-01
Transitioning aviation to low-carbon propulsion is one of the crucial strategic research thrusts and is a driver in the search for alternative propulsion systems for advanced aircraft configurations. This work requires multidisciplinary skills from multiple entities. The feasibility of scaling up various electric drive system technologies to meet the requirements of a large commercial transport is discussed in terms of key parameters. Functional requirements are identified that impact the power system design. A breakeven analysis is presented to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
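The spirit of such a breakeven check can be illustrated with two simple constraints: the drive train may not weigh more than the mass budget it frees up, and its losses may not exceed the available heat rejection. All numbers below are invented placeholders, and the functions are simplifications, not the study's actual breakeven analysis.

```python
# Illustrative breakeven bounds for an electric drive (placeholder numbers).

def min_specific_power(shaft_power_kw, mass_budget_kg):
    """Smallest drive specific power (kW/kg) that fits within the mass budget."""
    return shaft_power_kw / mass_budget_kg

def min_efficiency(shaft_power_kw, max_heat_load_kw):
    """Smallest efficiency keeping waste heat P*(1-eta)/eta within the cooling limit."""
    return shaft_power_kw / (shaft_power_kw + max_heat_load_kw)

p_shaft = 2_000.0      # kW per propulsor (placeholder)
budget = 250.0         # kg available for the electric drive (placeholder)
heat_limit = 120.0     # kW of rejectable waste heat (placeholder)

print(f"breakeven specific power >= {min_specific_power(p_shaft, budget):.1f} kW/kg")
print(f"breakeven efficiency     >= {min_efficiency(p_shaft, heat_limit):.3f}")
```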
Beylot, Antoine; Villeneuve, Jacques
2013-12-01
Incineration is the main option for residual Municipal Solid Waste treatment in France. This study compares the environmental performances of 110 French incinerators (i.e., 85% of the plants currently in operation in France) from a Life Cycle Assessment perspective, considering 5 non-toxic impact categories: climate change, photochemical oxidant formation, particulate matter formation, terrestrial acidification and marine eutrophication. Mean, median and lower/upper impact potentials are determined for the incineration of 1 tonne of French residual Municipal Solid Waste. The results highlight the relatively large variability of the impact potentials as a function of plant technical performance. In particular, the climate change impact potential of the incineration of 1 tonne of waste ranges from a benefit of -58 kg CO2-eq to a relatively large burden of 408 kg CO2-eq, with 294 kg CO2-eq as the average impact. Two main plant-specific parameters drive the impact potentials for the 5 non-toxic impact categories under study: the energy recovery and delivery rate and the NOx process-specific emissions. The variability of the impact potentials as a function of incinerator characteristics therefore calls for the use of site-specific data when required by the LCA goal and scope definition phase, in particular when the study focuses on a specific incinerator or on a local waste management plan, and when these data are available. Copyright © 2013 Elsevier Ltd. All rights reserved.
Schultz, Elise V; Schultz, Christopher J; Carey, Lawrence D; Cecil, Daniel J; Bateman, Monte
2016-01-01
This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
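The verification statistics named above reduce to simple contingency-table counts. The helper below shows the standard definitions, with made-up counts for illustration; the actual event matching and thresholds of the lightning jump system are not reproduced.

```python
# Probability of detection (POD) and false alarm ratio (FAR) from 2x2 counts.
# The counts below are placeholders for illustration only.

def pod(hits, misses):
    """Probability of detection: fraction of observed severe events warned on."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of issued jumps not matched by severe weather."""
    return false_alarms / (hits + false_alarms)

hits, misses, false_alarms = 42, 13, 19   # placeholder counts
print(f"POD = {pod(hits, misses):.2f}, FAR = {far(hits, false_alarms):.2f}")
```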
NASA Technical Reports Server (NTRS)
Schultz, Elise; Schultz, Christopher Joseph; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte
2016-01-01
This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
SCHULTZ, ELISE V.; SCHULTZ, CHRISTOPHER J.; CAREY, LAWRENCE D.; CECIL, DANIEL J.; BATEMAN, MONTE
2017-01-01
This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system’s performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system’s performance is evaluated with adjustments to parameter sensitivity. The system’s performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system’s performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system. PMID:29303164
Estimating Consequences of MMOD Penetrations on ISS
NASA Technical Reports Server (NTRS)
Evans, H.; Hyde, James; Christiansen, E.; Lear, D.
2017-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration-broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
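The Monte Carlo bookkeeping described above can be illustrated with a toy outcome tally: each simulated penetration draws an outcome class from conditional probabilities that depend on damage size. The damage-size model, probability weights and counts below are invented for illustration; they are not MSCSurv's models or results.

```python
import random

# Toy Monte-Carlo outcome tally in the spirit of MSCSurv (all numbers invented).

OUTCOMES = ("NEOM", "Evac", "LEV", "LOC")

def classify(hole_diameter_cm, rng):
    """Draw an outcome for one penetration from assumed conditional odds."""
    if hole_diameter_cm < 0.5:
        weights = (0.97, 0.02, 0.005, 0.005)
    elif hole_diameter_cm < 2.0:
        weights = (0.80, 0.15, 0.02, 0.03)
    else:
        weights = (0.40, 0.35, 0.10, 0.15)
    return rng.choices(OUTCOMES, weights=weights, k=1)[0]

rng = random.Random(1)
n = 1_000_000
tally = dict.fromkeys(OUTCOMES, 0)
for _ in range(n):
    hole = rng.lognormvariate(-1.0, 0.8)     # placeholder damage-size draw, cm
    tally[classify(hole, rng)] += 1

for outcome in OUTCOMES:
    print(f"P({outcome}) ~ {tally[outcome] / n:.4f}")
```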
Predicting the Consequences of MMOD Penetrations on the International Space Station
NASA Technical Reports Server (NTRS)
Hyde, James; Christiansen, E.; Lear, D.; Evans
2018-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration-broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong
2013-09-16
Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources, and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.
NASA Astrophysics Data System (ADS)
Kourtidis, Konstantinos; Georgoulias, Aristeidis
2017-04-01
We studied the impact of anthropogenic aerosols, fine-mode natural aerosols, Saharan dust, atmospheric water vapor, cloud fraction, cloud optical depth and cloud top height on the magnitude of the fair-weather potential gradient (PG) at the rural station of Xanthi. Fair-weather PG was measured in situ, while the other parameters were obtained from the MODIS instrument onboard the Terra and Aqua satellites. All of the above parameters were found to affect the fair-weather PG magnitude. Among the aerosol types, the impact was larger for Saharan dust and fine-mode natural aerosols; among the cloud parameters, the impact was largest for cloud fraction, though smaller than that of aerosols. Water vapour and ice precipitable water were also found to influence the fair-weather PG. Since aerosols and water are ubiquitous in the atmosphere and exhibit large spatial and temporal variability, we postulate that our understanding of the Carnegie curve might need revision.
Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.
Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A
2015-01-27
Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
Bayesian modeling of the mass and density of asteroids
NASA Astrophysics Data System (ADS)
Dotson, Jessie L.; Mathias, Donovan
2017-10-01
Mass and density are two of the fundamental properties of any object. In the case of near-Earth asteroids, knowledge of an asteroid's mass is essential for estimating the risk due to (potential) impact and for planning possible mitigation options. The density of an asteroid can illuminate its structure: a low density can be indicative of a rubble-pile structure, whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter for understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass/density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass-radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main-sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the masses and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed against a synthetic asteroid population. In addition, a preliminary Bayesian model for probabilistically forecasting the masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyperparameter uncertainties and intrinsic scatter.
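A minimal forward-propagation sketch in the spirit of such probabilistic forecasting is shown below: a diameter estimate with uncertainty and a density prior are drawn by Monte Carlo and propagated to a mass distribution. The distributions and their parameters are illustrative assumptions, not the paper's fitted hierarchical model.

```python
import numpy as np

# Monte-Carlo propagation of assumed diameter and density distributions to a
# mass distribution (illustrative only; not the paper's Bayesian model).

rng = np.random.default_rng(42)
n = 100_000

diameter_m = rng.lognormal(mean=np.log(140.0), sigma=0.25, size=n)      # ~140 m object
density_kgm3 = rng.lognormal(mean=np.log(2200.0), sigma=0.35, size=n)   # rubble-pile-like prior

mass_kg = density_kgm3 * np.pi / 6.0 * diameter_m ** 3   # mass of a sphere

lo, med, hi = np.percentile(mass_kg, [16, 50, 84])
print(f"mass ~ {med:.2e} kg  (68% interval {lo:.2e} to {hi:.2e} kg)")
```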
van der Velde-Koerts, Trijntje; Breysse, Nicolas; Pattingre, Lauriane; Hamey, Paul Y; Lutze, Jason; Mahieu, Karin; Margerison, Sam; Ossendorp, Bernadette C; Reich, Hermine; Rietveld, Anton; Sarda, Xavier; Vial, Gaelle; Sieke, Christian
2018-06-03
In 2015 a scientific workshop was held in Geneva, where updating the International Estimate of Short-Term Intake (IESTI) equations was suggested. This paper studies the effects of the proposed changes in residue inputs, large portions, variability factors and unit weights on the overall short-term dietary exposure estimate. Depending on the IESTI case equation, a median increase in estimated overall exposure by a factor of 1.0-6.8 was observed when the current IESTI equations are replaced by the proposed IESTI equations. The largest increase in estimated exposure arises from the replacement of the median residue (STMR) by the maximum residue limit (MRL) for bulked and blended commodities (case 3 equations). The change in the large-portion parameter does not have a significant impact on the estimated exposure. The use of large portions derived from the general population covering all age groups and body weights should be avoided when large portions are not expressed on an individual body-weight basis. Replacement of the highest residue (HR) by the MRL and removal of the unit weight each increase the estimated exposure for small-, medium- and large-sized commodities (case 1, case 2a or case 2b equations). However, within the EU framework, lowering the variability factor from 7 or 5 to 3 counterbalances the effect of changes in other parameters, resulting in an estimated overall exposure change for the EU situation of a factor of 0.87-1.7 for IESTI case 2a equations and 0.6-1.4 for case 2b equations.
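To make the roles of the parameters concrete, a heavily simplified sketch of the case 1 and case 3 style calculations is shown below. The residue values, portion size and body weight are placeholders, and the functions omit unit-weight and variability-factor terms; consult the JMPR/EFSA guidance for the authoritative equations.

```python
# Simplified IESTI-style intake calculation (illustrative only; not the
# official equations). All inputs are placeholder values.

def iesti_case1(large_portion_kg, highest_residue_mgkg, body_weight_kg):
    """Unblended commodity with unit weight below the large portion."""
    return large_portion_kg * highest_residue_mgkg / body_weight_kg

def iesti_case3(large_portion_kg, median_residue_mgkg, body_weight_kg):
    """Bulked/blended commodity: median residue (or MRL under the proposal)."""
    return large_portion_kg * median_residue_mgkg / body_weight_kg

# Placeholder inputs: 0.3 kg portion, 60 kg adult, residues in mg/kg.
print(f"case 1 intake: {iesti_case1(0.3, 2.0, 60.0):.4f} mg/kg bw/day")
print(f"case 3 intake: {iesti_case3(0.3, 0.5, 60.0):.4f} mg/kg bw/day")
```

Substituting the MRL for the STMR in the case 3 form directly scales the estimate, which is why that change dominates the increases reported above.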
Calus, Mario PL; Bijma, Piter; Veerkamp, Roel F
2004-01-01
Covariance functions have been proposed to predict breeding values and genetic (co)variances as a function of phenotypic within-herd-year averages (environmental parameters) in order to include genotype-by-environment interaction. The objective of this paper was to investigate the influence of the definition of environmental parameters and of non-random use of sires on expected breeding values and estimated genetic variances across environments. Breeding values were simulated as a linear function of simulated herd effects. The definition of environmental parameters hardly influenced the results. In situations with random use of sires, estimated genetic correlations between the trait expressed in different environments were 0.93, 0.93 and 0.97, while simulated at 0.89, and estimated genetic variances deviated by up to 30% from the simulated values. Non-random use of sires, poor genetic connectedness and small herd size had a large impact on the estimated covariance functions, expected breeding values and calculated environmental parameters. Estimated genetic correlations between a trait expressed in different environments were biased upwards, and breeding values were more biased when genetic connectedness became poorer and herd composition more diverse. The best possible solution at this stage is to use environmental parameters combining large numbers of animals per herd, while losing some information on genotype-by-environment interaction in the data. PMID:15339629
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large, complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
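A bare-bones particle swarm optimizer of the kind used for such calibration is sketched below, minimizing a placeholder objective. The coupling to SWAT, its parameter files, and the streamflow/sediment error metric are not shown; the inertia and acceleration coefficients are common textbook defaults, not the study's settings.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) for parameter estimation.
# The objective here is a placeholder standing in for a model error metric.

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Placeholder objective standing in for the calibration error metric.
target = np.array([0.3, 1.2, -0.5])
best, best_f = pso(lambda p: float(np.sum((p - target) ** 2)), bounds=[(-2, 2)] * 3)
print("calibrated parameters:", np.round(best, 3), " objective:", round(best_f, 6))
```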
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is inherent uncertainty in the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on the resulting estimates of unknown quantities of interest. One approach that is increasingly used to address this issue is emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, owing to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model-reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
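The basic emulation idea can be illustrated with a Gaussian-process surrogate trained on a modest number of forward runs of an (assumed, analytic) simulator and then queried in its place. NAME itself, and the reduced-form approach of the abstract, are not reproduced; the toy simulator and sample sizes below are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Gaussian-process emulation of a placeholder "expensive" simulator.

rng = np.random.default_rng(3)

def simulator(theta):
    """Placeholder model: two uncertain parameters -> one scalar output."""
    return np.sin(3.0 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

theta_train = rng.uniform(-1.0, 1.0, size=(40, 2))     # only 40 training runs
y_train = simulator(theta_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3, 0.3]),
                              normalize_y=True)
gp.fit(theta_train, y_train)

theta_new = rng.uniform(-1.0, 1.0, size=(5, 2))
mean, std = gp.predict(theta_new, return_std=True)
for m, s, truth in zip(mean, std, simulator(theta_new)):
    print(f"emulator: {m:+.3f} +/- {s:.3f}   simulator: {truth:+.3f}")
```

In an inversion, the cheap surrogate (with its predictive uncertainty) replaces the simulator inside the Bayesian sampler, which is what makes parameter uncertainty tractable.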
Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution
NASA Astrophysics Data System (ADS)
Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.
2009-05-01
Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most currently used data sets: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints are widened when the equation of state parameter w and the curvature parameter Ω_k are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free, we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1σ uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter Ω_Λ, the baryon density Ω_b, the optical depth τ, and the index of the power spectrum of primordial fluctuations n_s, with most 1σ uncertainties better than 5%. More significant changes appear in other parameters: while the preferred values are almost unchanged, the uncertainties on the physical dark matter density Ω_c h^2, the Hubble constant H_0 and σ_8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches to large-scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained. Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar to or larger than the statistical ones.
Impact during equine locomotion: techniques for measurement and analysis.
Burn, J F; Wilson, A; Nason, G P
1997-05-01
Impact is implicated in the development of several types of musculoskeletal injury in the horse. Characterisation of the impact experienced during strenuous exercise is an important first step towards understanding the mechanism of injury. Measurement and analysis of large, short-duration impacts is difficult: the measurement system must be able to record transient peaks and high frequencies accurately, and the analysis technique must be able to characterise the impact signal in both time and frequency. This paper presents a measurement system and analysis technique for the characterisation of large impacts. A piezo-electric accelerometer was securely mounted on the dorsal surface of the horse's hoof. Saddle-mounted charge amplifiers and a 20 m coaxial cable transferred these data to a PC-based logging system. Data were downloaded onto a UNIX workstation and analysed using a proprietary statistics package. The values of parameters calculated from the time series data were comparable to those of other authors. A wavelet decomposition showed that the frequency profile of the signal changed with time. While most spectral energy was seen at impact, a significant amount of energy was contained in the signal immediately following impact. Over 99% of this energy was contained in frequencies below 1250 Hz. The sampling rate and frequency response of a measurement system for recording impact should be chosen carefully to prevent loss or corruption of data. Time-scale analysis using a wavelet decomposition is a powerful technique for characterising impact data. The use of contour plots provides a highly visual representation of the time and frequency localisation of power during impact.
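A short sketch of such a time-scale analysis is shown below: a discrete wavelet decomposition of a synthetic impact-like burst and the fraction of signal energy in each level. The synthetic signal, the 'db4' wavelet and the sampling rate are assumptions for illustration, not the paper's data or processing chain.

```python
import numpy as np
import pywt

# Discrete wavelet decomposition of a synthetic impact-like signal and the
# energy fraction per decomposition level (illustrative assumptions only).

fs = 10_000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1.0 / fs)
signal = np.exp(-t / 0.005) * np.sin(2 * np.pi * 600 * t)   # decaying burst
signal += 0.05 * np.random.default_rng(0).standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)   # [cA5, cD5, cD4, ..., cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
fractions = energies / energies.sum()

labels = ["approx (cA5)"] + [f"detail cD{5 - i}" for i in range(5)]
for label, frac in zip(labels, fractions):
    print(f"{label:>14s}: {100 * frac:5.1f}% of signal energy")
```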
Impact of large field angles on the requirements for deformable mirror in imaging satellites
NASA Astrophysics Data System (ADS)
Kim, Jae Jun; Mueller, Mark; Martinez, Ty; Agrawal, Brij
2018-04-01
For certain imaging satellite missions, a large aperture with a wide field of view is needed. In order to achieve diffraction-limited performance, the mirror surface Root Mean Square (RMS) error has to be less than 0.05 waves; in the case of visible light, this corresponds to less than 30 nm. This requirement is difficult to meet because the large aperture will need to be segmented in order to fit inside a launch vehicle shroud. To relax this requirement and to compensate for the residual wavefront error, Micro-Electro-Mechanical System (MEMS) deformable mirrors can be considered in the aft optics of the optical system. MEMS deformable mirrors are affordable and consume little power, but they are small. Because of the major reduction in pupil size at the deformable mirror, the effective field angle is magnified by the diameter ratio of the primary and deformable mirror. For wide-field-of-view imaging, the required deformable mirror correction is therefore field-angle dependent, which impacts the required parameters of a deformable mirror such as size, number of actuators, and actuator stroke. In this paper, a representative telescope and deformable mirror system model is developed and the deformable mirror correction is simulated to study the impact of large field angles on correcting wavefront error with a deformable mirror in the aft optics.
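The field-angle magnification mentioned above follows from pupil demagnification: shrinking the pupil from the primary diameter to the DM diameter scales field angles by the same ratio. The aperture sizes below are placeholders, not the paper's telescope or MEMS device.

```python
# Effective field angle at the deformable mirror after pupil reimaging.
# Aperture diameters are assumed placeholder values.

def effective_field_angle_deg(field_angle_deg, d_primary_m, d_dm_m):
    """Field angle seen at the DM, magnified by the pupil diameter ratio."""
    return field_angle_deg * (d_primary_m / d_dm_m)

d_primary = 3.0          # m, assumed primary aperture
d_dm = 0.01              # m, assumed MEMS DM clear aperture
for theta in (0.05, 0.1, 0.2):                     # degrees on the sky
    theta_dm = effective_field_angle_deg(theta, d_primary, d_dm)
    print(f"{theta:.2f} deg on sky -> {theta_dm:.0f} deg at the DM")
```

Even small sky angles become tens of degrees at a centimetre-class DM, which is why the required correction, actuator count and stroke become field-angle dependent.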
Dynamic Experiments and Constitutive Model Performance for Polycarbonate
2014-07-01
[Abstract not indexed; the retrieved text is figure-list residue from the report.] Recoverable content: parameter sensitivity results (Figure 23) show numerical contours of axial and radial stress with the alpha and beta phases disabled (positive stress tensile, negative compressive), and the traditional Taylor cylinder impact experiment is noted to achieve large-strain, high-strain-rate deformation, but under hydrostatic compression.
Network approach to patterns in stratocumulus clouds
NASA Astrophysics Data System (ADS)
Glassmeier, Franziska; Feingold, Graham
2017-10-01
Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth’s climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis’s Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav-Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes.
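The cellular-network statistics named above (neighbor-number variance and an Aboav-Weaire-type trend) can be computed from any planar tessellation. The sketch below does so for a Voronoi tessellation of random points rather than for simulated cloud fields, and it treats domain boundaries only crudely, so the numbers are indicative only.

```python
import numpy as np
from scipy.spatial import Voronoi

# Neighbor statistics of a Voronoi cellular network (random points, not clouds).

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(4000, 2))
vor = Voronoi(points)

# Build the adjacency list from Voronoi ridges (each ridge separates two cells).
neighbors = {i: set() for i in range(len(points))}
for p, q in vor.ridge_points:
    neighbors[p].add(q)
    neighbors[q].add(p)

n_sides = np.array([len(neighbors[i]) for i in range(len(points))])
interior = (points[:, 0] > 0.1) & (points[:, 0] < 0.9) & \
           (points[:, 1] > 0.1) & (points[:, 1] < 0.9)     # crude edge cut

print(f"mean neighbor number: {n_sides[interior].mean():.2f}")
print(f"neighbor-number variance: {n_sides[interior].var():.2f}")

# Aboav-Weaire trend: mean neighbor count m(n) of cells adjacent to n-sided cells.
for n in (5, 6, 7):
    sel = np.where(interior & (n_sides == n))[0]
    m_n = np.mean([n_sides[list(neighbors[i])].mean() for i in sel])
    print(f"m(n={n}) = {m_n:.2f}")
```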
The Impact of the Condenser on Cytogenetic Image Quality in Digital Microscope System
Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong
2013-01-01
Background: Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high-quality cytogenetic images and facilitating the process of karyotyping, so that the efficiency and accuracy of diagnosis can be improved. Objective: This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Methods: Both theoretical analysis and experimental validation, through objective evaluation of a resolution test chart and subjective observation of a large number of specimens, were conducted. Results: The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%–70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and more diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed to achieve high-throughput continuous image scanning. Conclusions: Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice. PMID:23676284
Network approach to patterns in stratocumulus clouds.
Glassmeier, Franziska; Feingold, Graham
2017-10-03
Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth's climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis's Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav-Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes.
Network approach to patterns in stratocumulus clouds
Feingold, Graham
2017-01-01
Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth’s climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis’s Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav–Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes. PMID:28904097
Environmental Impacts of a Multi-Borehole Geothermal System: Model Sensitivity Study
NASA Astrophysics Data System (ADS)
Krol, M.; Daemi, N.
2017-12-01
Problems associated with fossil fuel consumption have increased worldwide interest in discovering and developing sustainable energy systems. One such system is geothermal heating, which uses the near-constant temperature of the ground to heat or cool buildings. Since geothermal heating offers low maintenance, high heating/cooling comfort, and a low carbon footprint compared to conventional systems, there has been an increasing trend toward equipping large buildings with geothermal heating. However, little is known about the potential environmental impact geothermal heating can have on the subsurface, such as the creation of subsurface thermal plumes or changes in groundwater flow dynamics. In the present study, the environmental impacts of a closed-loop, ground-source heat pump (GSHP) system were examined with respect to different system parameters. To do this, a three-dimensional model, developed using FEFLOW, was used to examine the thermal plumes resulting from ten years of operation of a vertical closed-loop GSHP system with multiple boreholes. A required thermal load typical of an office building located in Canada was calculated, and groundwater flow and heat transport in the geological formation were simulated. The resulting thermal plumes were studied, and a sensitivity analysis was conducted to determine the effect of different parameters, such as groundwater flow and soil type, on the development and movement of thermal plumes. Since thermal plumes can affect the efficiency of a GSHP system, this study provides insight into important system parameters.
NASA Astrophysics Data System (ADS)
Knox, H. A.; Draelos, T.; Young, C. J.; Lawry, B.; Chael, E. P.; Faust, A.; Peterson, M. G.
2015-12-01
The quality of automatic detections from seismic sensor networks depends on a large number of data processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. Yet achieving superior automatic detection of seismic events is closely tied to these parameters. We present an automated sensor tuning (AST) system that learns near-optimal parameter settings for each event type using neuro-dynamic programming (reinforcement learning) trained with historical data. AST learns to test the raw signal against all event settings and automatically self-tunes to an emerging event in real time. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections; reducing false alarms early in the seismic pipeline processing will have a significant impact on this goal. Applicable both to boosting the performance of existing sensors and to new sensor deployments, this system provides an important new method for automatically tuning complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. With ground truth on detections in seismic waveforms from a network of stations, we show that AST increases the probability of detection while decreasing false alarms.
Impact of Reservoir Operation to the Inflow Flood - a Case Study of Xinfengjiang Reservoir
NASA Astrophysics Data System (ADS)
Chen, L.
2017-12-01
The construction of a reservoir affects runoff production and routing characteristics and changes how floods form. This impact, called the reservoir flood effect, can be divided into three parts (routing effect, volume effect and peak flow effect) and must be evaluated as a whole using a hydrological model. After analyzing reservoir flood formation, the Liuxihe Model for reservoir flood forecasting is proposed. The Xinfengjiang Reservoir is studied as a case. Results show that the routing effect makes the peak flow appear 4 to 6 hours earlier; the volume effect is larger for large floods than for small ones and, when rainfall is concentrated over the reservoir area, it also increases the peak flow considerably; and the peak flow effect increases the peak flow by 6.63% to 8.95%. The reservoir flood effect is therefore substantial and has a significant impact on reservoir floods. If this effect is not considered in the forecasting model, floods cannot be forecast accurately, particularly the peak flow. The Liuxihe Model proposed for Xinfengjiang Reservoir flood forecasting performs well and can be used for real-time flood forecasting of the Xinfengjiang Reservoir. Keywords: reservoir flood effect, reservoir flood forecasting, physically based distributed hydrological model, Liuxihe Model, parameter optimization
NASA Astrophysics Data System (ADS)
Dobson, B.; Pianosi, F.; Reed, P. M.; Wagener, T.
2017-12-01
In previous work, we have found that water supply companies are typically hesitant to use reservoir operation tools to inform their release decisions. We believe that this is due, in part, to a lack of faith in the fidelity of the optimization exercise with regard to its ability to represent the real world. In an attempt to quantify this, recent literature has studied the impact on performance of uncertainty arising in: forcing (e.g. reservoir inflows), parameters (e.g. parameters for the estimation of evaporation rate) and objectives (e.g. worst first percentile or worst case). We suggest that there is also epistemic uncertainty in the choices made during model creation, for example in the formulation of an evaporation model or the aggregation of regional storages. We create `rival framings' (a methodology originally developed to demonstrate the impact of uncertainty arising from alternative objective formulations), each with different modelling choices, and determine their performance impacts. We identify the Pareto-approximate set of policies for several candidate formulations and then make them compete with one another in a large ensemble re-evaluation in each other's modelled spaces. This enables us to distinguish the impacts of different structural changes in the model used to evaluate system performance, in an effort to generalize the validity of the optimized performance expectations.
Evaluating the impact of the HIV pandemic on measles control and elimination.
Helfand, Rita F; Moss, William J; Harpaz, Rafael; Scott, Susana; Cutts, Felicity
2005-05-01
The objective was to estimate the impact of the HIV pandemic on vaccine-acquired population immunity to measles virus, because high levels of population immunity are required to eliminate transmission of measles virus in large geographical areas and HIV infection can reduce the efficacy of measles vaccination. A literature review was conducted to estimate key parameters relating to the potential impact of HIV infection on the epidemiology of measles in sub-Saharan Africa; the parameters included the prevalence of HIV, child mortality, perinatal HIV transmission rates and protective immune responses to measles vaccination. These parameter estimates were incorporated into a simple model, applicable to regions with a high prevalence of HIV, to estimate the potential impact of HIV infection on population immunity against measles. The model suggests that the HIV pandemic should not introduce an insurmountable barrier to measles control and elimination, in part because higher rates of primary and secondary vaccine failure among HIV-infected children are counteracted by their high mortality rate. The HIV pandemic could result in a 2-3% increase in the proportion of the birth cohort susceptible to measles, and more frequent supplemental immunization activities (SIAs) may be necessary to control or eliminate measles. In the model, the optimal interval between SIAs was most influenced by the coverage rate for routine measles vaccination. The absence of a second opportunity for vaccination resulted in the greatest increase in the number of susceptible children. These results help explain the initial success of measles elimination efforts in southern Africa, where measles control has been achieved in a setting of high HIV prevalence.
Marcus, Alan D; Higgins, Damien P; Gray, Rachael
2015-06-01
Evaluation of the health status of free-ranging populations is important for understanding the impact of disease on individuals and on population demography and viability. In this study, haematological reference intervals were developed for free-ranging endangered Australian sea lion (Neophoca cinerea) pups within the context of endemic hookworm (Uncinaria sanguinis) infection and the effects of pathogen, host, and environment factors on the variability of haematological parameters were investigated. Uncinaria sanguinis was identified as an important agent of disease, with infection causing regenerative anaemia, hypoproteinaemia, and a predominantly lymphocytic-eosinophilic systemic inflammatory response. Conversely, the effects of sucking lice (Antarctophthirus microchir) were less apparent and infestation in pups appears unlikely to cause clinical impact. Overall, the effects of U. sanguinis, A. microchir, host factors (standard length, body condition, pup sex, moult status, and presence of lesions), and environment factors (capture-type and year of sampling) accounted for 26-65% of the total variance observed in haematological parameters. Importantly, this study demonstrated that anaemia in neonatal Australian sea lion pups is not solely a benign physiological response to host-environment changes, but largely reflects a significant pathological process. This impact of hookworm infection on pup health has potential implications for the development of foraging and diving behaviour, which would subsequently influence the independent survival of juveniles following weaning. The haematological reference intervals developed in this study can facilitate long-term health surveillance, which is critical for the early recognition of changes in disease impact and to inform conservation management. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Roddy, D. J.
1977-01-01
A tabular outline of comparative data is presented for 340 basic dimensional, morphological, and structural parameters and related aspects for three craters of the flat-floored, central-uplift type, two of which are natural terrestrial impact craters and one of which is a large-scale experimental explosion crater. In their morphology and structural deformation, the three craters belong to a general class that is represented on each of the terrestrial planets, including the Moon. The first crater considered, the Flynn Creek Crater, was formed by a hypervelocity impact event approximately 360 m.y. ago in what is now north-central Tennessee; the impacting body appears to have been a carbonaceous chondrite or a cometary mass. The second crater, the Steinheim Crater, was formed by an impact event approximately 14.7 m.y. ago in what is now southwestern Germany. The Snowball Crater was formed by the detonation of a 500-ton TNT hemisphere on flat-lying, unconsolidated alluvium in Alberta, Canada.
Shape Effect Analysis of Aluminum Projectile Impact on Whipple Shields
NASA Technical Reports Server (NTRS)
Carrasquilla, Maria J.; Miller, Joshua E.
2017-01-01
Informed design with respect to hypervelocity collisions involving micrometeoroid and orbital debris (MMOD) is influential to the success of space missions. For an orbit comparable to that of the International Space Station, MMOD velocities can range from 1 to 15 km/s, with an average velocity around 10 km/s. The high energy released during collisions at these speeds can result in damage to a spacecraft or, in the worst case, loss of the spacecraft, underlining the importance of methods to predict the likelihood and extent of damage due to an impact. Through experimental testing and numerical simulations, substantial work has been conducted to better understand the effects of hypervelocity impacts (HVI) on spacecraft systems and shields; however, much of this work has focused on spherical impacting particles. To improve environment models for the analysis of MMOD, a large-scale satellite break-up test was performed at the Arnold Engineering and Development Complex to better understand the varied impactor geometries that could be generated by a large impact. As part of the post-experiment analysis, an undertaking to characterize the irregular fragments generated is currently being performed by the University of Florida under the management of NASA's Orbital Debris Program Office at Johnson Space Center (JSC). DebriSat was a representative, modern LEO satellite that was catastrophically broken up in an HVI test. The test chamber was lined with a soft-catch system of foam panels that captured the fragments after impact. Initial predictions put the number of fragments larger than 2 mm generated from the HVI at roughly 85,000. The number of fragments thus far extracted from the foam panels has exceeded 100,000, and that number is continuously increasing. The shapes of the fragments vary depending on the material; carbon-fiber reinforced polymer pieces, for instance, are abundantly found as thin, flat slivers. The characterization of these fragments with respect to their mass, size, and material composition needs to be summarized in a form that can be used in MMOD analysis. The mechanism that brings these fragment traits into MMOD analysis is the ballistic limit equation (BLE), which has been developed largely for a few types of materials [1]. As a BLE provides the failure threshold for a shield or spacecraft component based on parameters such as the projectile impact velocity and size and the target's materials, thickness, and configuration, it is used to design protective shields for spacecraft, such as Whipple shields (WS), to an acceptable risk level. The majority of experiments and simulations to test shields and validate BLEs have heretofore largely used spheres as the impactor, which does not properly reflect the irregular shapes of MMOD. This shortfall has motivated a numerical impact analysis study of HVI involving non-spherical geometries to identify key parameters that environment models should provide.
Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B
2008-06-01
To assess reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about the overall reproducibility of core laboratories, in contradistinction to the performance of individual technicians. Also, qualitative parameters are increasingly being adjudicated as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion required substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can be shown to provide reproducibility performance that is comparable to the performance commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess, and conclusions based on these parameters should arise only from very large trials.
SAChES: Scalable Adaptive Chain-Ensemble Sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah
We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors, which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
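As a minimal illustration of the Differential Evolution Monte Carlo building block described above, the Python sketch below runs a small ensemble of loosely coupled chains against an assumed toy Gaussian posterior. It is a generic textbook-style DE-MC step (in the spirit of ter Braak), not the SAChES implementation, and the target density, chain count, and jump scale are all assumptions.

import numpy as np

def log_target(x):
    return -0.5 * np.dot(x, x)          # assumed toy posterior: standard normal

def demc_step(chains, rng, gamma=None):
    """One Differential Evolution MCMC sweep over an ensemble (n_chains x d)."""
    n, d = chains.shape
    gamma = gamma or 2.38 / np.sqrt(2 * d)   # standard DE-MC jump scale
    new = chains.copy()
    for i in range(n):
        # Propose by adding a scaled difference of two other chains plus jitter.
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) + 1e-4 * rng.standard_normal(d)
        if np.log(rng.uniform()) < log_target(prop) - log_target(chains[i]):
            new[i] = prop                    # Metropolis accept
    return new

rng = np.random.default_rng(0)
ensemble = rng.standard_normal((20, 5))      # 20 chains, 5 parameters
for _ in range(1000):
    ensemble = demc_step(ensemble, rng)
print(ensemble.mean(axis=0))                 # should sit near the true mean of zero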
Impact of Regularization on Spectral Clustering
2014-07-22
[Figure caption fragments only: plots of eigenvector values under regularization (regularization parameter on the order of n, with behaviour becoming insensitive for large τ); shaded blue and pink regions mark the nodes of the two strong clusters and, in the blog network, the liberal and conservative blogs respectively.]
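The figures appear to refer to regularized spectral clustering in its usual form, where a constant τ/n is added to every entry of the adjacency matrix before the normalized Laplacian is formed. The Python sketch below shows that standard construction; it is the editor's generic illustration of the technique, not code from the report, and the choice of τ, the use of scikit-learn's KMeans, and the row normalization are assumptions.

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def regularized_spectral_clustering(A, k, tau):
    """Cluster the n nodes of adjacency matrix A into k groups after adding tau/n."""
    n = A.shape[0]
    A_tau = A + tau / n * np.ones((n, n))               # regularization step
    d = A_tau.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(n) - D_inv_sqrt @ A_tau @ D_inv_sqrt  # normalized Laplacian
    _, vecs = eigh(L_sym, subset_by_index=[0, k - 1])    # k smallest eigenvectors
    rows = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(rows)

# Toy usage: two noisy blocks of 20 nodes each, tau on the order of the mean degree.
rng = np.random.default_rng(0)
blocks = (rng.uniform(size=(40, 40)) < 0.05).astype(float)
blocks[:20, :20] = (rng.uniform(size=(20, 20)) < 0.4)
blocks[20:, 20:] = (rng.uniform(size=(20, 20)) < 0.4)
A = np.triu(blocks, 1)
A = A + A.T
print(regularized_spectral_clustering(A, k=2, tau=A.sum(axis=1).mean()))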
NASA Astrophysics Data System (ADS)
Kaba, M.; Zhou, F. C.; Lim, A.; Decoster, D.; Huignard, J.-P.; Tonda, S.; Dolfi, D.; Chazelas, J.
2007-11-01
The applications of microwave optoelectronics are extremely broad, extending from Radio-over-Fibre to homeland security and defence systems. The improved maturity of optoelectronic components operating up to 40 GHz makes it possible to consider new optical processing functions (filtering, beamforming, ...) that can operate over very wideband microwave analogue signals. The required performance implies optical delay lines able to exhibit large time-bandwidth product values. We propose to evaluate a slow-light approach using highly dispersive structures based on either uniform or chirped Bragg gratings. We highlight the impact of the major parameters of such structures (index modulation depth, grating length, grating period, and chirp coefficient) and demonstrate the high potential of Bragg gratings for processing large-bandwidth RF signals under slow-light propagation.
Fabrication of aluminum-carbon composites
NASA Technical Reports Server (NTRS)
Novak, R. C.
1973-01-01
A screening, optimization, and evaluation program for unidirectional carbon-aluminum composites is reported. During the screening phase, both large-diameter monofilament and small-diameter multifilament reinforcements were utilized to determine optimum precursor tape-making and consolidation techniques. Difficulty was encountered in impregnating and consolidating the multifiber reinforcements. Large-diameter monofilament reinforcement was found easier to fabricate into composites and was selected for the optimization phase, in which the hot pressing parameters were refined and the size of the fabricated panels was scaled up. After process optimization, the mechanical properties of the carbon-aluminum composites were characterized in tension, stress-rupture and creep, mechanical fatigue, thermal fatigue, thermal aging, thermal expansion, and impact.
Calculating the momentum enhancement factor for asteroid deflection studies
Heberling, Tamra; Gisler, Galen; Plesko, Catherine; ...
2017-10-17
The possibility of kinetic-impact deflection of threatening near-Earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving NASA's DART (Double Asteroid Redirection Test). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos at a speed of 5 to 7 km/s is expected to alter the mutual orbit by an observable amount. The velocity transferred to the secondary depends largely on the momentum enhancement factor, typically referred to as beta. Here, we use two hydrocodes developed at Los Alamos, RAGE and PAGOSA, to calculate an approximate value for beta in laboratory-scale benchmark experiments. Convergence studies comparing the two codes show the importance of mesh size in estimating this crucial parameter.
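For orientation, the sketch below shows how a momentum enhancement factor is typically applied once estimated: the momentum delivered to the target is beta times the impactor momentum, the excess (beta - 1) coming from ejecta, so the along-track speed change scales as beta times impactor momentum divided by target mass. All numbers are illustrative assumptions, not DART or Didymos mission values.

def deflection_delta_v(m_impactor_kg, v_impact_m_s, m_target_kg, beta):
    """Along-track speed change of the target for a head-on kinetic impact."""
    return beta * m_impactor_kg * v_impact_m_s / m_target_kg

dv = deflection_delta_v(m_impactor_kg=500.0,   # assumed spacecraft mass
                        v_impact_m_s=6.0e3,    # 6 km/s, within the quoted range
                        m_target_kg=5.0e9,     # assumed secondary mass
                        beta=2.0)              # assumed enhancement factor
print(f"delta-v ~ {dv * 1e3:.3f} mm/s")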
Sub-luminal pulses from cosmic ray air showers
NASA Technical Reports Server (NTRS)
Linsley, J.
1985-01-01
Some of the signals produced by air showers in scintillators possess a distinctive feature, a sub-luminal pulse (SLP) following the normal one with a time delay of approximately 1.5 r/c. The average amplitude of the SLP corresponds to an energy deposit of about 50 MeV, three times as much as is deposited in a typical scintillator by vertical minimum-ionizing muons. The SLPs account for approximately 5% of the energy deposited in the atmosphere by air showers with energy of 10^10 GeV at impact parameters of 1 km. Assuming that these pulses are due to neutrons travelling with a speed slightly less than c, they provide a unique means of estimating E_h, the energy deposited by slow hadrons, in showers of this very high energy. On the other hand, if not allowed for properly, these pulses are liable to cause errors in estimating the impact parameters of large showers from pulse-width observations.
Effects of human running cadence and experimental validation of the bouncing ball model
NASA Astrophysics Data System (ADS)
Bencsik, László; Zelei, Ambrus
2017-05-01
The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by some fundamental parameters, such as step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity. Furthermore, it shows that higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded-phase ratio in different running speed ranges.
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; ...
2015-02-13
A search for new long-lived particles decaying to leptons is presented using proton-proton collisions produced by the LHC at √s=8 TeV. Data used for the analysis were collected by the CMS detector and correspond to an integrated luminosity of 19.7 fb^-1. Events are selected with an electron and muon with opposite charges that both have transverse impact parameter values between 0.02 and 2 cm. The search has been designed to be sensitive to a wide range of models with nonprompt e-μ final states. Limits are set on the "displaced supersymmetry" model, with pair production of top squarks decaying into an e-μ final state via R-parity-violating interactions. The results are the most restrictive to date on this model, with the most stringent limit being obtained for a top squark lifetime corresponding to cτ=2 cm, excluding masses below 790 GeV at 95% confidence level.
NASA Astrophysics Data System (ADS)
Rastorguev, A. S.; Utkin, N. D.; Chumak, O. V.
2017-08-01
Agekyan's λ-factor, which allows for the effect of multiplicity of stellar encounters with large impact parameters, has been used for the first time to directly calculate the diffusion coefficients in the phase space of a stellar system. Simple estimates show that the cumulative effect, i.e., the total contribution of distant encounters to the change in the velocity of a test star, given the multiplicity of stellar encounters, is finite, and the logarithmic divergence inherent in the classical description of diffusion is removed, as was shown previously by Kandrup using a different, more complex approach. In this case, the expressions for the diffusion coefficients, as in the classical description, contain the logarithm of the ratio of two independent quantities: the mean interparticle distance and the impact parameter of a close encounter. However, the physical meaning of this logarithmic factor changes radically: it reflects not the divergence but the presence of two characteristic length scales inherent in the stellar medium.
Martins, V; Miranda, A I; Carvalho, A; Schaap, M; Borrego, C; Sá, E
2012-01-01
The main purpose of this work is to estimate the impact of forest fires on air pollution by applying the LOTOS-EUROS air quality modeling system in Portugal for three consecutive years, 2003-2005. Forest fire emissions have been included in the modeling system through the development of a numerical module, which takes into account the parameters most suitable for Portuguese forest fire characteristics and the area burnt by large forest fires. To better evaluate the influence of forest fires on air quality, the LOTOS-EUROS system has been applied with and without forest fire emissions. Hourly concentration results have been compared to measured data at several monitoring locations, with better model quality parameters obtained when forest fire emissions were considered. Moreover, hourly estimates with and without fire emissions can differ by on the order of 20%, showing the importance and influence of this type of emissions on air quality. Copyright © 2011 Elsevier B.V. All rights reserved.
Fine-scale patterns of population stratification confound rare variant association tests.
O'Connor, Timothy D; Kiezun, Adam; Bamshad, Michael; Rich, Stephen S; Smith, Joshua D; Turner, Emily; Leal, Suzanne M; Akey, Joshua M
2013-01-01
Advances in next-generation sequencing technology have enabled systematic exploration of the contribution of rare variation to Mendelian and complex diseases. Although it is well known that population stratification can generate spurious associations with common alleles, its impact on rare variant association methods remains poorly understood. Here, we performed exhaustive coalescent simulations with demographic parameters calibrated from exome sequence data to evaluate the performance of nine rare variant association methods in the presence of fine-scale population structure. We find that all methods have an inflated spurious association rate for parameter values that are consistent with levels of differentiation typical of European populations. For example, at a nominal significance level of 5%, some test statistics have a spurious association rate as high as 40%. Finally, we empirically assess the impact of population stratification in a large data set of 4,298 European American exomes. Our results have important implications for the design, analysis, and interpretation of rare variant genome-wide association studies.
NASA Astrophysics Data System (ADS)
Driche, Khaled; Umezawa, Hitoshi; Rouger, Nicolas; Chicot, Gauthier; Gheeraert, Etienne
2017-04-01
Diamond has the advantage of having an exceptionally high critical electric field owing to its large band gap, which implies a high ability to withstand high voltages. At this maximum electric field, the operation of Schottky barrier diodes (SBDs), as well as FETs, may be limited by impact ionization, leading to avalanche multiplication, and hence the devices may break down. In this study, three of the reported impact ionization coefficients for electrons, αn, and holes, αp, in diamond at room temperature (300 K) are analyzed. Experimental data on reverse operation characteristics obtained from two different diamond SBDs are compared with those obtained from their corresponding simulated structures. Owing to the crucial role played by the impact ionization rate in determining carrier transport, the three implemented sets of reported avalanche parameters affect not only the breakdown voltage but also the leakage current for the same structure.
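The role of the ionization coefficients in setting the breakdown voltage can be illustrated by numerically evaluating the standard electron-initiated ionization integral with Chynoweth-law coefficients α(E) = a·exp(-b/E). The sketch below is generic: the a and b values, the triangular field profile, and the depletion width are placeholders chosen by the editor, not the diamond parameter sets analyzed in the paper.

import numpy as np

def chynoweth(E, a, b):
    """Chynoweth-law ionization coefficient alpha(E) = a * exp(-b / E)."""
    return a * np.exp(-b / E)

def ionization_integral(E_profile, x, an_coeff, ap_coeff):
    """Returns 1 - 1/M_n; avalanche breakdown is reached as this approaches 1."""
    alpha_n = chynoweth(E_profile, *an_coeff)
    alpha_p = chynoweth(E_profile, *ap_coeff)
    diff_ab = alpha_n - alpha_p
    # Cumulative trapezoid of (alpha_n - alpha_p) from 0 to each x.
    inner = np.concatenate(([0.0],
                            np.cumsum(0.5 * (diff_ab[1:] + diff_ab[:-1]) * np.diff(x))))
    integrand = alpha_n * np.exp(-inner)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

x = np.linspace(0.0, 5e-4, 2001)              # assumed 5 um depletion width [cm]
E = 3e6 * (1.0 - x / x[-1]) + 1.0             # assumed triangular field [V/cm]
val = ionization_integral(E, x, an_coeff=(1e6, 1e7), ap_coeff=(2e6, 1.2e7))
print("ionization integral =", val)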
A STUDY OF DUST AND GAS AT MARS FROM COMET C/2013 A1 (SIDING SPRING)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, Michael S. P.; Farnham, Tony L.; Bodewits, Dennis
Although the nucleus of comet C/2013 A1 (Siding Spring) will safely pass Mars in 2014 October, the dust in the coma and tail will more closely approach the planet. Using a dynamical model of comet dust, we estimate the impact fluence. Based on our nominal model, no impacts are expected at Mars. Relaxing our nominal model's parameters, the fluence is no greater than ~10^-7 grains m^-2 for grain radii larger than 10 μm. Mars-orbiting spacecraft are unlikely to be impacted by large dust grains, but Mars may receive as many as ~10^7 grains, or ~100 kg of total dust. We also estimate the flux of impacting gas molecules commonly observed in comet comae.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beshr, M.; Aute, V.; Sharma, V.
Supermarket refrigeration systems have high environmental impact due to their large refrigerant charge and high leak rates. Consequently, the interest in using low-GWP refrigerants such as carbon dioxide (CO2) and new refrigerant blends is increasing. In this study, an open-source Life Cycle Climate Performance (LCCP) framework is presented and used to compare the environmental impact of four supermarket refrigeration systems: a transcritical CO2 booster system, a cascade CO2/N-40 system, a combined secondary circuit with central DX N-40/L-40 system, and a baseline multiplex direct expansion system utilizing R-404A and N-40. The study is performed for different climates within the USA using EnergyPlus to simulate the systems' hourly performance. Further analyses, such as parametric, sensitivity, and uncertainty analyses, are presented to study the impact of different system parameters on the LCCP.
Voluntary strategy suppresses the positive impact of preferential selection in prisoner’s dilemma
NASA Astrophysics Data System (ADS)
Sun, Lei; Lin, Pei-jie; Chen, Ya-shan
2014-11-01
Impact of aspiration is ubiquitous in social and biological disciplines. In this work, we explore the impact of such a trait on the voluntary prisoner's dilemma game via a selection parameter w. w=0 recovers the traditional version with random selection. For positive w, the opponent of high payoff will be selected, while negative w means that the partner of low payoff will be chosen. We find that for positive w, cooperation is greatly promoted for small b, whereas cooperation is inhibited for large b. For negative w, cooperation is fully restrained, irrespective of the value of b. It is found that the positive impact of preferential selection is suppressed by the voluntary strategy in the prisoner's dilemma. These observations are supported by the spatial patterns. Our work may shed light on the emergence and persistence of cooperation with voluntary participation in social dilemmas.
NASA Astrophysics Data System (ADS)
Eggl, Siegfried
2014-05-01
Mankind believes it has the capability to avert potentially disastrous asteroid impacts. Yet only the realization of a mitigation demonstration mission can confirm such a claim. The NEOShield project, an international collaboration under European leadership, aims to draw a comprehensive picture of the scientific as well as technical requirements for such an endeavor. One of the top priorities of such a demonstration mission is, of course, that a previously harmless target asteroid shall not be turned into a potentially hazardous object. Given the inherently large uncertainties in an asteroid's physical parameters, as well as the additional uncertainties introduced during the deflection attempt, an in-depth analysis of the change in asteroid impact probabilities after a deflection event becomes necessary. We present a post-mitigation impact risk analysis of a list of potential deflection test missions and discuss the influence of orbital, physical and mitigation-induced uncertainties.
Influence of a breakwater on nearby rocky intertidal community structure.
Martins, Gustavo M; Amaral, André F; Wallenstein, Francisco M; Neto, Ana I
2009-01-01
It is widely recognised that coastal-defence structures generally affect the structure of the assemblages they support, yet their impact on adjacent systems has been largely ignored. Breakwaters modify the nearby physical environment (e.g. wave action) suggesting a local impact on biological parameters. In the present study, an ACI (After-Control-Impact) design was used to test the general hypothesis that the artificial sheltering of an exposed coast has a strong effect on the structure and functioning of adjacent systems. The effects of a reduction in hydrodynamics were clear for a number of taxa and included the replacement of barnacles, limpets and frondose algae by an increasing cover of ephemeral algae. These effects were evident both at early and late successional stages. Results suggest that the artificial sheltering of naturally exposed coasts can have a strong impact promoting a shift from consumer- to producer-dominated communities, which has important ecological and energetic consequences for the ecosystem.
Asymmetric bubble collapse and jetting in generalized Newtonian fluids
NASA Astrophysics Data System (ADS)
Shukla, Ratnesh K.; Freund, Jonathan B.
2017-11-01
The jetting dynamics of a gas bubble near a rigid wall in a non-Newtonian fluid are investigated using an axisymmetric simulation model. The bubble gas is assumed to be homogeneous, with density and pressure related through a polytropic equation of state. An Eulerian numerical description, based on a sharp interface capturing method for the shear-free bubble-liquid interface and an incompressible Navier-Stokes flow solver for generalized fluids, is developed specifically for this problem. Detailed simulations for a range of rheological parameters in the Carreau model show both the stabilizing and destabilizing non-Newtonian effects on the jet formation and impact. In general, for fixed driving pressure ratio, stand-off distance and reference zero-shear-rate viscosity, shear-thinning and shear-thickening promote and suppress jet formation and impact, respectively. For a sufficiently large high-shear-rate limit viscosity, the jet impact is completely suppressed. Thresholds are also determined for the Carreau power-index and material time constant. The dependence of these threshold rheological parameters on the non-dimensional driving pressure ratio and wall stand-off distance is similarly established. Implications for tissue injury in therapeutic ultrasound will be discussed.
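For reference, the Carreau law mentioned above relates viscosity to shear rate as μ(γ̇) = μ∞ + (μ0 − μ∞)[1 + (λγ̇)²]^((n−1)/2), so n < 1 gives shear-thinning and n > 1 shear-thickening behaviour. The short sketch below evaluates this relation; the parameter values are illustrative and are not those used in the simulations.

import numpy as np

def carreau_viscosity(shear_rate, mu0, mu_inf, lam, n):
    """Carreau model: mu = mu_inf + (mu0 - mu_inf) * [1 + (lam*shear_rate)^2]^((n-1)/2)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

rates = np.logspace(-2, 6, 9)                 # shear rates [1/s]
# n < 1: shear-thinning, n > 1: shear-thickening, n = 1 recovers a Newtonian fluid.
print(carreau_viscosity(rates, mu0=1.0e-1, mu_inf=1.0e-3, lam=0.01, n=0.5))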
NASA Astrophysics Data System (ADS)
Li, Yue; Yang, Hui; Wang, Tao; MacBean, Natasha; Bacour, Cédric; Ciais, Philippe; Zhang, Yiping; Zhou, Guangsheng; Piao, Shilong
2017-08-01
Reducing parameter uncertainty in process-based terrestrial ecosystem models (TEMs) is one of the primary targets for accurately estimating carbon budgets and predicting ecosystem responses to climate change. However, parameters in TEMs are rarely constrained by observations from Chinese forest ecosystems, which are an important carbon sink over northern hemispheric land. In this study, eddy covariance data from six forest sites in China are used to optimize parameters of the ORganizing Carbon and Hydrology In Dynamic EcosystEms (ORCHIDEE) TEM. The model-data assimilation through parameter optimization largely reduces the prior model errors and improves the simulated seasonal cycle and summer diurnal cycle of net ecosystem exchange, latent heat fluxes, gross primary production and ecosystem respiration. Climate change experiments based on the optimized model indicate that forest net primary production (NPP) is suppressed in response to warming in southern China but stimulated in northeastern China. Altered precipitation has an asymmetric impact on forest NPP at sites in water-limited regions, with the optimization-induced reduction in the response of NPP to precipitation decline being as large as 61% at a deciduous broadleaf forest site. We find that seasonal optimization alters forest carbon cycle responses to environmental change, with the parameter optimization consistently reducing the simulated positive response of heterotrophic respiration to warming. Evaluations against independent observations suggest that improving model structure still matters most for long-term carbon stock and its changes, in particular nutrient- and age-related changes of photosynthetic rates, carbon allocation, and tree mortality.
Doutres, O; Ouisse, M; Atalla, N; Ichchou, M
2014-10-01
This paper deals with the prediction of the macroscopic sound absorption behavior of highly porous polyurethane foams using two unit-cell microstructure-based models recently developed by Doutres, Atalla, and Dong [J. Appl. Phys. 110, 064901 (2011); J. Appl. Phys. 113, 054901 (2013)]. In these models, the porous material is idealized as a packing of a tetrakaidecahedra unit-cell representative of the disordered network that constitutes the porous frame. The non-acoustic parameters involved in the classical Johnson-Champoux-Allard model (i.e., porosity, airflow resistivity, tortuosity, etc.) are derived from characteristic properties of the unit-cell and semi-empirical relationships. A global sensitivity analysis is performed on these two models in order to investigate how the variability associated with the measured unit-cell characteristics affects the models outputs. This allows identification of the possible limitations of a unit-cell micro-macro approach due to microstructure irregularity. The sensitivity analysis mainly shows that for moderately and highly reticulated polyurethane foams, the strut length parameter is the key parameter since it greatly impacts three important non-acoustic parameters and causes large uncertainty on the sound absorption coefficient even if its measurement variability is moderate. For foams with a slight inhomogeneity and anisotropy, a micro-macro model associated to cell size measurements should be preferred.
NASA Astrophysics Data System (ADS)
Cooper, Elizabeth; Dance, Sarah; Garcia-Pintado, Javier; Nichols, Nancy; Smith, Polly
2017-04-01
Timely and accurate inundation forecasting provides vital information about the behaviour of fluvial flood water, enabling mitigating actions to be taken by residents and emergency services. Data assimilation is a powerful mathematical technique for combining forecasts from hydrodynamic models with observations to produce a more accurate forecast. We discuss the effect of both domain size and channel friction parameter estimation on observation impact in data assimilation for inundation forecasting. Numerical shallow water simulations are carried out in a simple, idealized river channel topography. Data assimilation is performed using an Ensemble Transform Kalman Filter (ETKF) and synthetic observations of water depth in identical twin experiments. We show that reinitialising the numerical inundation model with corrected water levels after an assimilation can cause an initialisation shock if a hydrostatic assumption is made, leading to significant degradation of the forecast for several hours immediately following an assimilation. We demonstrate an effective and novel method for dealing with this. We find that using data assimilation to combine observations of water depth with forecasts from a hydrodynamic model corrects the forecast very effectively at time of the observations. In agreement with other authors we find that the corrected forecast then moves quickly back to the open loop forecast which does not take the observations into account. Our investigations show that the time taken for the forecast to decay back to the open loop case depends on the length of the domain of interest when only water levels are corrected. This is because the assimilation corrects water depths in all parts of the domain, even when observations are only available in one area. Error growth in the forecast step then starts at the upstream part of the domain and propagates downstream. The impact of the observations is therefore longer-lived in a longer domain. We have found that the upstream-downstream pattern of error growth can be due to incorrect friction parameter specification, rather than errors in inflow as shown elsewhere. Our results show that joint state-parameter estimation can recover accurate values for the parameter controlling channel friction processes in the model, even when observations of water level are only available on part of the flood plain. Correcting water levels and the channel friction parameter together leads to a large improvement in the forecast water levels at all simulation times. The impact of the observations is therefore much greater when the channel friction parameter is corrected along with water levels. We find that domain length effects disappear for joint state-parameter estimation.
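A compact way to see what the ETKF analysis step does is the Python sketch below, which follows the standard ensemble-transform formulation (a mean update plus a deterministic transform of the ensemble perturbations). It is the editor's toy illustration with an assumed linear observation operator and a synthetic water-depth ensemble, not the authors' assimilation code.

import numpy as np

def etkf_analysis(X, y_obs, H, R):
    """One ETKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y_obs: (n_obs,);
    H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs) obs-error cov."""
    n_ens = X.shape[1]
    x_mean = X.mean(axis=1)
    Xp = X - x_mean[:, None]                      # state perturbations
    Y = H @ X
    y_mean = Y.mean(axis=1)
    Yp = Y - y_mean[:, None]                      # observation-space perturbations
    Rinv = np.linalg.inv(R)
    A = (n_ens - 1) * np.eye(n_ens) + Yp.T @ Rinv @ Yp
    evals, evecs = np.linalg.eigh(A)
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T   # ensemble-space analysis covariance
    w_mean = Pa @ Yp.T @ Rinv @ (y_obs - y_mean)  # weights for the mean update
    W = evecs @ np.diag(np.sqrt((n_ens - 1) / evals)) @ evecs.T  # transform matrix
    return x_mean[:, None] + Xp @ (W + w_mean[:, None])

# Toy usage: 50 water-depth grid points, 20 members, 5 observed locations.
rng = np.random.default_rng(1)
X = 1.0 + 0.1 * rng.standard_normal((50, 20))
H = np.zeros((5, 50))
H[np.arange(5), [5, 15, 25, 35, 45]] = 1.0
y = 1.05 + 0.02 * rng.standard_normal(5)
Xa = etkf_analysis(X, y, H, R=0.02**2 * np.eye(5))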
Impact of relativistic effects on cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.
2018-01-01
Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification on number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5-10% level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.
Jafari, Naghmeh; Falahatkar, Bahram; Sajjadi, Mir Masoud
2018-06-16
The effect of various feeding strategies on growth performance and biochemical parameters was evaluated in two sizes of Siberian sturgeon (465.75 ± 11.18 and 250.40 ± 12 g) over 45 days. Fish were distributed into six experimental treatments: large fish with satiation feeding (LA), small fish with satiation feeding (SA), large fish with 50% satiation feeding (LR), small fish with 50% satiation feeding (SR), large starved fish (LS), and small starved fish (SS). Differences in final weight between the LA and LR treatments were not noticeable, whereas the SA and SR treatments showed significant differences. Growth parameters were more affected in small fish. Condition factor and weight gain in the starved treatments showed a considerable reduction driven by the interaction between feeding strategy and fish size, with the lowest values obtained in the SS treatment. Glucose levels significantly decreased in small fish during starvation. The interaction between feeding strategy and fish size yielded the highest and lowest albumin levels in the SA and SS treatments, respectively. Cholesterol, triglyceride, total protein, and globulin showed no significant differences. It can be deduced that small fish are more sensitive to starvation than large fish. Since glucose and albumin showed significant decreases in starved small fish, these parameters can help to monitor nutritional status and feeding practices. In both sizes of Siberian sturgeon, feeding to 50% satiation reduced the feed cost without a negative impact on physiological condition, and it can be considered an appropriate strategy for facing unfavorable circumstances.
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
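As one concrete example of the fitting choices discussed above, the sketch below fits the power-law recession model -dQ/dt = aQ^b to a single synthetic event by ordinary least squares in log-log space, using a one-day finite difference for -dQ/dt. This is only one of the many method combinations the paper compares, and the synthetic event parameters are assumptions.

import numpy as np

def fit_recession(q):
    """Fit -dQ/dt = a * Q^b to one recession event of daily streamflow q (decreasing)."""
    dqdt = -np.diff(q)                          # -dQ/dt with dt = 1 day
    q_mid = 0.5 * (q[1:] + q[:-1])              # Q evaluated between time steps
    b, log_a = np.polyfit(np.log(q_mid), np.log(dqdt), 1)
    return np.exp(log_a), b

# Synthetic event generated from the analytic solution with a = 0.05, b = 1.5.
t = np.arange(30.0)
a_true, b_true, q0 = 0.05, 1.5, 10.0
q = (q0 ** (1 - b_true) + (b_true - 1) * a_true * t) ** (1.0 / (1 - b_true))
print(fit_recession(q))                         # should be close to (0.05, 1.5)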
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state of the art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
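A minimal sketch of the predictive idea, as the editor reads it: each candidate parameter value keeps a window of recent performance scores, the next score is extrapolated by linear regression, and selection probabilities are set proportional to the forecasts. The window length, the scoring, and the probability rule are assumptions, not the authors' exact procedure.

import numpy as np

def selection_probabilities(history, window=10):
    """history: dict mapping a candidate parameter value -> list of past quality scores."""
    forecasts = {}
    for value, scores in history.items():
        s = np.asarray(scores[-window:], dtype=float)
        t = np.arange(len(s))
        # Linear-regression forecast of the next score (fall back to last score).
        slope, intercept = np.polyfit(t, s, 1) if len(s) > 1 else (0.0, s[-1])
        forecasts[value] = max(slope * len(s) + intercept, 1e-9)
    total = sum(forecasts.values())
    return {v: f / total for v, f in forecasts.items()}

# Toy usage: three candidate mutation rates with assumed recent quality scores.
history = {0.1: [0.20, 0.25, 0.30, 0.40],
           0.5: [0.50, 0.45, 0.40, 0.35],
           0.9: [0.10, 0.10, 0.12, 0.11]}
print(selection_probabilities(history))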
Impacts of a Stochastic Ice Mass-Size Relationship on Squall Line Ensemble Simulations
NASA Astrophysics Data System (ADS)
Stanford, M.; Varble, A.; Morrison, H.; Grabowski, W.; McFarquhar, G. M.; Wu, W.
2017-12-01
Cloud and precipitation structure, evolution, and cloud radiative forcing of simulated mesoscale convective systems (MCSs) are significantly impacted by ice microphysics parameterizations. Most microphysics schemes assume power law relationships with constant parameters for ice particle mass, area, and terminal fallspeed relationships as a function of size, despite observations showing that these relationships vary in both time and space. To account for such natural variability, a stochastic representation of ice microphysical parameters was developed using the Predicted Particle Properties (P3) microphysics scheme in the Weather Research and Forecasting model, guided by in situ aircraft measurements from a number of field campaigns. Here, the stochastic framework is applied to the "a" and "b" parameters of the unrimed ice mass-size (m-D) relationship (m=aDb) with co-varying "a" and "b" values constrained by observational distributions tested over a range of spatiotemporal autocorrelation scales. Diagnostically altering a-b pairs in three-dimensional (3D) simulations of the 20 May 2011 Midlatitude Continental Convective Clouds Experiment (MC3E) squall line suggests that these parameters impact many important characteristics of the simulated squall line, including reflectivity structure (particularly in the anvil region), surface rain rates, surface and top of atmosphere radiative fluxes, buoyancy and latent cooling distributions, and system propagation speed. The stochastic a-b P3 scheme is tested using two frameworks: (1) a large ensemble of two-dimensional idealized squall line simulations and (2) a smaller ensemble of 3D simulations of the 20 May 2011 squall line, for which simulations are evaluated using observed radar reflectivity and radial velocity at multiple wavelengths, surface meteorology, and surface and satellite measured longwave and shortwave radiative fluxes. Ensemble spreads are characterized and compared against initial condition ensemble spreads for a range of variables.
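The core ingredient of such a stochastic framework can be sketched as drawing co-varying (a, b) pairs for the mass-size law m = aD^b from a correlated distribution. The Python fragment below does this with a bivariate normal in (ln a, b); the means, spreads, correlation, and units are illustrative assumptions, not the observationally constrained values used in the study.

import numpy as np

rng = np.random.default_rng(42)
mean = np.array([np.log(0.0061), 2.05])            # assumed (ln a, b) for a CGS m-D law
cov = np.array([[0.30**2, -0.8 * 0.30 * 0.15],
                [-0.8 * 0.30 * 0.15, 0.15**2]])    # assumed anticorrelated a and b
ln_a, b = rng.multivariate_normal(mean, cov, size=1000).T
a = np.exp(ln_a)

D = 0.1                                            # particle maximum dimension [cm]
mass = a * D ** b                                  # resulting spread of particle mass [g]
print(mass.mean(), mass.std())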
Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Veronica M; Bartell, Scott M
2016-04-01
We recently utilized a suite of environmental fate and transport models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations, and also assessed the association of those concentrations with preeclampsia for participants in the C8 Health Project (a cross-sectional study of over 69,000 people who were environmentally exposed to PFOA near a major U.S. fluoropolymer production facility located in West Virginia). However, the exposure estimates from this integrated model relied on default values for key independent exposure parameters including water ingestion rates, the serum PFOA half-life, and the volume of distribution for PFOA. The aim of the present study is to assess the impact of inter-individual variability and epistemic uncertainty in these parameters on the exposure estimates and subsequently, the epidemiological association between PFOA exposure and preeclampsia. We used Monte Carlo simulation to propagate inter-individual variability/epistemic uncertainty in the exposure assessment and reanalyzed the epidemiological association. Inter-individual variability in these parameters mildly impacted the serum PFOA concentration predictions (the lowest mean rank correlation between the estimated serum concentrations in our study and the original predicted serum concentrations was 0.95) and there was a negligible impact on the epidemiological association with preeclampsia (no change in the mean adjusted odds ratio (AOR) and the contribution of exposure uncertainty to the total uncertainty including sampling variability was 7%). However, when epistemic uncertainty was added along with the inter-individual variability, serum PFOA concentration predictions and their association with preeclampsia were moderately impacted (the mean AOR of preeclampsia occurrence was reduced from 1.12 to 1.09, and the contribution of exposure uncertainty to the total uncertainty was increased up to 33%). In conclusion, our study shows that the change of the rank exposure among the study participants due to variability and epistemic uncertainty in the independent exposure parameters was large enough to cause a 25% bias towards the null. This suggests that the true AOR of the association between PFOA and preeclampsia in this population might be higher than the originally reported AOR and has more uncertainty than indicated by the originally reported confidence interval. Copyright © 2016 Elsevier Inc. All rights reserved.
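The kind of variability propagation described above can be sketched with a one-compartment steady-state model, in which serum concentration equals the ingested dose rate divided by clearance (elimination rate times distribution volume). The distributions, water concentration, and body weight below are illustrative assumptions, not the C8 Health Project inputs.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
water_ng_per_L = 400.0                                   # assumed drinking-water PFOA level
ingestion_L_day = rng.lognormal(np.log(1.2), 0.5, n)     # water intake [L/day]
half_life_yr = rng.lognormal(np.log(3.5), 0.3, n)        # serum half-life [years]
vd_L_per_kg = rng.lognormal(np.log(0.17), 0.2, n)        # volume of distribution [L/kg]
body_weight_kg = 75.0

ke_per_day = np.log(2) / (half_life_yr * 365.25)         # first-order elimination rate
c_serum_ng_per_mL = (water_ng_per_L * ingestion_L_day /
                     (ke_per_day * vd_L_per_kg * body_weight_kg)) / 1000.0
print(np.percentile(c_serum_ng_per_mL, [5, 50, 95]))     # spread across individuals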
Ocean acidification impacts bacteria-phytoplankton coupling at low-nutrient conditions
NASA Astrophysics Data System (ADS)
Hornick, Thomas; Bach, Lennart T.; Crawfurd, Katharine J.; Spilling, Kristian; Achterberg, Eric P.; Woodhouse, Jason N.; Schulz, Kai G.; Brussaard, Corina P. D.; Riebesell, Ulf; Grossart, Hans-Peter
2017-01-01
The oceans absorb about a quarter of the annually produced anthropogenic atmospheric carbon dioxide (CO2), resulting in a decrease in surface water pH, a process termed ocean acidification (OA). Surprisingly little is known about how OA affects the physiology of heterotrophic bacteria or the coupling of heterotrophic bacteria to phytoplankton when nutrients are limited. Previous experiments were, for the most part, undertaken during productive phases or following nutrient additions designed to stimulate algal blooms. Therefore, we performed an in situ large-volume mesocosm ( ˜ 55 m3) experiment in the Baltic Sea by simulating different fugacities of CO2 (fCO2) extending from present to future conditions. The study was conducted in July-August after the nominal spring bloom, in order to maintain low-nutrient conditions throughout the experiment. This resulted in phytoplankton communities dominated by small-sized functional groups (picophytoplankton). There was no consistent fCO2-induced effect on bacterial protein production (BPP), cell-specific BPP (csBPP) or biovolumes (BVs) of either free-living (FL) or particle-associated (PA) heterotrophic bacteria, when considered as individual components (univariate analyses). Permutational Multivariate Analysis of Variance (PERMANOVA) revealed a significant effect of the fCO2 treatment on entire assemblages of dissolved and particulate nutrients, metabolic parameters and the bacteria-phytoplankton community. However, distance-based linear modelling only identified fCO2 as a factor explaining the variability observed amongst the microbial community composition, but not for explaining variability within the metabolic parameters. This suggests that fCO2 impacts on microbial metabolic parameters occurred indirectly through varying physicochemical parameters and microbial species composition. Cluster analyses examining the co-occurrence of different functional groups of bacteria and phytoplankton further revealed a separation of the four fCO2-treated mesocosms from both control mesocosms, indicating that complex trophic interactions might be altered in a future acidified ocean. Possible consequences for nutrient cycling and carbon export are still largely unknown, in particular in a nutrient-limited ocean.
NASA Astrophysics Data System (ADS)
Paris, Jean-Daniel; Belan, Boris D.; Ancellet, Gérard; Nédélec, Philippe; Arshinov, Mikhail Yu.; Pruvost, Arnaud; Berchet, Antoine; Arzoumanian, Emmanuel; Pison, Isabelle; Ciais, Philippe; Law, Kathy
2014-05-01
Despite the unique scientific value of better knowing atmospheric composition over Siberia, regional observations of the tropospheric composition over this region are still lacking. Large local anthropogenic emissions, strong ecosystem gas exchange across the vast forest expanse, and processes feeding back to global climate, such as wetland CH4 emissions, seabed hydrate destabilization and degrading permafrost, make this region particularly crucial to investigate. We aim to address this need in the YAK-AEROSIB program by collecting high-precision in-situ measurements of the vertical distribution of CO2, CH4, CO, O3, black carbon and ultrafine particles in the Siberian troposphere, as well as other parameters including aerosol lidar profiles, on a pan-Siberian aircraft transect. Campaigns have been performed almost annually since 2006 on this regular route, while special campaigns are occasionally arranged to sample the troposphere elsewhere (e.g., the Russian Arctic coast). We show the background tropospheric composition obtained from these surveys, the variability and the impact of large-scale transport of anthropogenic emissions from Europe and Asia, as well as the impact of biomass burning plumes both from local wildfires (2012) and from remote sources elsewhere in Asia. Long-range transport of anthropogenic emissions is shown to have a discernible impact on the O3 distribution, although its lower-tropospheric variability is largely driven by surface deposition. Regional sources and sinks drive the lower-troposphere CO2 and CH4 concentrations. Recent efforts aim at better understanding the respective roles of CH4 emission processes (including methanogenesis in wetlands and emissions from wildfires) in driving its large-scale atmospheric variability over the region. Overall, the YAK-AEROSIB campaigns provide unique observations over Siberia, documenting both the direct impact of regional sources and aged air masses experiencing long-range transport toward the high Arctic.
NASA Technical Reports Server (NTRS)
Roberts, E. G.; Johnson, C. M.
1982-01-01
The economics and sensitivities of slicing large-diameter silicon ingots for photovoltaic applications were examined. Current economics and slicing add-on cost sensitivities are calculated using variable parameters for blade life, slicing yield, and slice cutting speed. It is indicated that cutting speed has the biggest impact on slicing add-on cost, followed by slicing yield and then by blade life as blade life increases.
NASA Technical Reports Server (NTRS)
Dudkin, V. E.; Kovalev, E. E.; Nefedov, N. A.; Antonchik, V. A.; Bogdanov, S. D.; Kosmach, V. F.; Likhachev, A. YU.; Benton, E. V.; Crawford, H. J.
1995-01-01
A method is proposed for finding the dependence of mean multiplicities of secondaries on the nucleus-collision impact parameter from the data on the total interaction ensemble. The impact parameter has been shown to completely define the mean characteristics of an individual interaction event. A difference has been found between experimental results and the data calculated in terms of the cascade-evaporation model at impact-parameter values below 3 fm.
Challenges for MSSM Higgs searches at hadron colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carena, Marcela S.; /Fermilab; Menon, A.
2007-04-01
In this article we analyze the impact of B-physics and Higgs physics at LEP on standard and non-standard Higgs boson searches at the Tevatron and the LHC, within the framework of minimal flavor violating supersymmetric models. The B-physics constraints we consider come from the experimental measurements of the rare B decays b → sγ and B_u → τν and the experimental limit on the B_s → μ+μ- branching ratio. We show that these constraints are severe for large values of the trilinear soft breaking parameter A_t, rendering the non-standard Higgs searches at hadron colliders less promising. On the contrary, these bounds are relaxed for small values of A_t and large values of the Higgsino mass parameter μ, enhancing the prospects for the direct detection of non-standard Higgs bosons at both colliders. We also consider the available ATLAS and CMS projected sensitivities in the standard model Higgs search channels, and we discuss the LHC's ability to probe the whole MSSM parameter space. In addition, we consider the expected Tevatron collider sensitivities in the standard model Higgs h → bb̄ channel to show that it may be able to find 3σ evidence in the B-physics allowed regions for small or moderate values of the stop mixing parameter.
Artíñano, B; Gómez-Moreno, F J; Díaz, E; Amato, F; Pandolfi, M; Alonso-Blanco, E; Coz, E; García-Alonso, S; Becerril-Valle, M; Querol, X; Alastuey, A; van Drooge, B L
2017-09-01
A large and uncontrolled fire at a tire landfill started in Seseña (Toledo, Spain) on May 13, 2016. An experimental deployment was immediately launched in the area to measure regulated and non-standard air quality parameters and assess the potential impact of the plume at local and regional levels. Outdoor and indoor measurements of different parameters were carried out at a nearby school, approximately 700 m downwind of the burning tires. Real-time measurements of ambient black carbon (BC) and total particle number concentrations were identified as good tracers of the smoke plume. Simultaneous peaks allowed us to characterize situations of plume impact on the site. Outdoor total particle number concentrations on these occasions reached 3.8×10^5 particles cm^-3 (at 10-min resolution), whereas the indoor concentration was one order of magnitude lower. BC mass concentrations in ambient air were in the range of 2 to 7 μg m^-3, whereas concentrations below 2 μg m^-3 were measured indoors. Indoor and outdoor deposited inhalable dust was sampled and chemically characterized. Both indoor and outdoor dust were enriched in tire components (Zn, sulfate) and PAHs associated with the tire combustion process. Infiltration processes were documented for BC and particle number concentrations, causing increases in indoor concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.
Entrainment and scattering in microswimmer-colloid interactions
NASA Astrophysics Data System (ADS)
Shum, Henry; Yeomans, Julia M.
2017-11-01
We use boundary element simulations to study the interaction of model microswimmers with a neutrally buoyant spherical particle. The ratio of the size of the particle to that of the swimmer is varied from RP/RS≪1 , corresponding to swimmer-tracer scattering, to RP/RS≫1 , approximately equivalent to the swimmer interacting with a fixed, flat surface. We find that details of the swimmer and particle trajectories vary for different swimmers. However, the overall characteristics of the scattering event fall into two regimes, depending on the relative magnitudes of the impact parameter, ρ , and the collision radius, Rcoll=RP+RS . The range of particle motion, defined as the maximum distance between two points on the trajectory, has only a weak dependence on the impact parameter when ρ
The prognostic impact of clinical and CT parameters in patients with pontine hemorrhage.
Dziewas, Rainer; Kremer, Marion; Lüdemann, Peter; Nabavi, Darius G; Dräger, Bianca; Ringelstein, Bernd
2003-01-01
In patients with pontine hemorrhage (PH), an accurate prognostic assessment is critical for establishing a reasonable therapeutic approach. The initial clinical symptoms and computed tomography (CT) features were analyzed with multivariate regression analysis in 39 consecutive patients with PH. PHs were classified into three types: (1) large paramedian, (2) basal or basotegmental and (3) lateral tegmental, and the hematomas' diameters were measured. The patients' outcome was evaluated. Twenty-seven patients (69%) died and 12 (31%) survived for more than 1 year after PH. The symptom most predictive of death was coma on admission. The large paramedian type of PH predicted a poor prognosis, whereas the lateral tegmental type was associated with a favorable outcome. The transverse hematoma diameter was also related to outcome, with the threshold value found to be 20 mm. We conclude that PH outcome can be estimated best by combining the CT parameters 'large paramedian PH' and 'transverse diameter ≥20 mm' with the clinical variable 'coma on admission'. Survival is unlikely if all 3 features are present, whereas survival may be expected if only 1 or none of these features is found. Copyright 2003 S. Karger AG, Basel
Analytic Ballistic Performance Model of Whipple Shields
NASA Technical Reports Server (NTRS)
Miller, J. E.; Bjorkman, M. D.; Christiansen, E. L.; Ryan, S. J.
2015-01-01
The dual-wall Whipple shield is the shield of choice for lightweight, long-duration flight. The shield uses an initial sacrificial wall to initiate fragmentation and melt of an impacting threat, which expands over a void before hitting a subsequent shield wall of a critical component. The key parameters for this type of shield are the rear wall and its mass, which stops the debris, as well as the minimum shock wave strength generated by the threat particle impact on the sacrificial wall and the amount of room that is available for expansion. Ensuring the shock wave strength is sufficiently high to achieve large-scale fragmentation/melt of the threat particle enables the expansion of the threat and reduces the momentum flux of the debris on the rear wall. Three key factors in the shock wave strength achieved are the thickness of the sacrificial wall relative to the characteristic dimension of the impacting particle, the density and material cohesion contrast of the sacrificial wall relative to the threat particle, and the impact speed. The masses of the rear wall and the sacrificial wall are desirable to minimize for launch costs, making it important to understand the effects of density contrast and impact speed. An analytic model is developed here to describe the influence of these three key factors. In addition, this paper develops a description of a fourth key parameter related to fragmentation and its role in establishing the onset of projectile expansion.
Tse, Kwong Ming; Tan, Long Bin; Lee, Shu Jin; Lim, Siak Piang; Lee, Heow Pueh
2015-06-01
In spite of the anatomic proximity of the facial skeleton and cranium, there is a lack of information in the literature regarding the relationship between facial and brain injuries. This study aims to correlate brain injuries with facial injuries using the finite element method (FEM). Nine common impact scenarios of facial injuries are simulated with their individual stress wave propagation paths in the facial skeleton and the intracranial brain. Fractures of cranio-facial bones and intracranial injuries are evaluated based on the tolerance limits of the biomechanical parameters. The general trend of maximum intracranial biomechanical parameters found in nasal bone and zygomaticomaxillary impacts indicates that the severity of brain injury is highly associated with the proximity of the impact location to the brain. It is hypothesized that the midface is capable of absorbing considerable energy and protecting the brain from impact. The nasal cartilages dissipate the impact energy in the form of large-scale deformation and fracture, with the vomer-ethmoid diverting stress to the "crumpling zone" of the air-filled sphenoid and ethmoidal sinuses; in its most natural manner, the face protects the brain. This numerical study hopes to provide surgeons some insight into the brain injuries to be expected in various scenarios of facial trauma and to help in better diagnosis of unsuspected brain injury, thereby decreasing the morbidity and mortality associated with facial trauma. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
Efficient extraction strategies of tea (Camellia sinensis) biomolecules.
Banerjee, Satarupa; Chatterjee, Jyotirmoy
2015-06-01
Tea is a popular daily beverage worldwide. Modulation and modification of its basic components, such as catechins, alkaloids, proteins, and carbohydrates, during fermentation or extraction changes the organoleptic, gustatory, and medicinal properties of tea. Through these processes, increases or decreases in the yield of desired components are evident. Considering the varied impacts of parameters in tea production, storage, and processing that affect yield, extraction of tea biomolecules under optimized conditions is challenging. Implementation of technological advancements in green chemistry approaches can minimize this deviation while retaining maximum qualitative properties in an environmentally friendly way. Existing extraction processes and optimization parameters for tea are discussed in this paper, including their prospects and limitations. This exhaustive review of extraction parameters, tea decaffeination processes, and large-scale, cost-effective isolation of tea components with the aid of modern technology can help readers choose extraction conditions for tea according to need.
Aerodynamic configuration design using response surface methodology analysis
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit
1993-01-01
An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
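To make the response surface methodology step concrete, here is a minimal numpy sketch that fits a full second-order (quadratic plus interaction) surface to a set of coded design points by least squares; the design points, coefficients, and response values are placeholders, not the study's vehicle data.

```python
import numpy as np
from itertools import combinations_with_replacement

# Hypothetical coded design points (e.g., from a central composite design);
# columns stand in for variables such as fineness ratio, nose/body length
# ratio, nose camber, wing area scale factor, and wing location.
X = np.random.uniform(-1, 1, size=(30, 5))
y = 1.0 + X @ np.array([0.5, -0.3, 0.2, 0.8, -0.1]) + 0.4 * X[:, 0] * X[:, 3]

def quadratic_terms(X):
    """Build the [1, x_i, x_i*x_j] columns of a full second-order model."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = quadratic_terms(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares response surface
y_hat = quadratic_terms(X) @ coef
print("max abs residual:", np.abs(y_hat - y).max())
```

The fitted surface can then be minimized (here, for dry weight) subject to the aerodynamic constraints, which is the role the RSM surrogate plays in the design process.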
Massive Statistics of VLF-Induced Ionospheric Disturbances
NASA Astrophysics Data System (ADS)
Pailoor, N.; Cohen, M.; Golkowski, M.
2017-12-01
The impact of lightning on the D-region of the ionosphere has been measured by Very Low Frequency (VLF) remote sensing and can be seen through the observation of Early-Fast events. Previous research has indicated that several factors control the behavior and occurrence of these events, including the transmitter-receiver geometry, as well as the peak current and polarity of the strike. Unfortunately, since each event is unique due to the wide variety of impacting factors, it is difficult to make broad inferences about the interactions between the lightning and the ionosphere. By investigating a large database of lightning-induced disturbances over a span of several years and over a continental-scale region, we seek to quantify the relationship between geometry, lightning parameters, and the apparent disturbance of the ionosphere as measured with VLF transmitters. We began with a set of 860,000 cases where an intense lightning stroke above 150 kA occurred within 300 km of a transmitter-receiver path. To then detect ionospheric disturbances from the large volume of VLF data and lightning incidents, we applied a number of classification methods to the actual VLF amplitude data, and find that the most accurate is a convolutional neural network, which yielded a detection efficiency of 95-98% and a false positive rate of less than 25%. Using this model, we were able to assemble a database of more than 97,000 events, with each event stored with its corresponding time, date, receiver, transmitter, and lightning parameters. Estimates for the peak and slope of each disruption were also calculated. From these data, we were able to chart the relationships between geometry and lightning parameters (peak current and polarity) and the occurrence probability, perturbation intensity, and recovery time of the VLF perturbation. The results of this analysis are presented here.
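The abstract does not describe the network architecture, so the following is only a generic 1-D convolutional classifier sketch in Keras for windows of VLF amplitude data; the input length, layer sizes, and synthetic labels are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical data: VLF amplitude windows (n_samples, n_timesteps, 1) with
# binary labels (1 = ionospheric disturbance present, 0 = no disturbance).
n, t = 2000, 512
x = np.random.randn(n, t, 1).astype("float32")
y = np.random.randint(0, 2, size=n)

model = tf.keras.Sequential([
    layers.Conv1D(16, 9, activation="relu", input_shape=(t, 1)),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability of a disturbance
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64, validation_split=0.2)
```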
Impact of spurious shear on cosmological parameter estimates from weak lensing observables
Petri, Andrea; May, Morgan; Haiman, Zoltán; ...
2014-12-30
Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ωm, w, σ8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys^2 ≈ 10^-7, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈100 deg^2, non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ωm, w, σ8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
NASA Astrophysics Data System (ADS)
Mughal, Maqsood Ali
Clean and environmentally friendly technologies are centralizing industry focus towards obtaining long-term solutions to many large-scale problems such as energy demand, pollution, and environmental safety. Thin film solar cell (TFSC) technology has emerged as an impressive photovoltaic (PV) technology to create clean energy from fast production lines with capabilities to reduce material usage and the energy required to manufacture large-area panels, hence lowering the costs. Today, cost ($/kWh) and toxicity are the primary challenges for all PV technologies. In that respect, electrodeposited indium sulfide (In2S3) films are proposed as an alternative to hazardous cadmium sulfide (CdS) films, commonly used as buffer layers in solar cells. This dissertation focuses upon the optimization of electrodeposition parameters to synthesize In2S3 films of PV quality. The work described herein has the potential to reduce the hazardous impact of cadmium (Cd) upon the environment, while reducing the manufacturing cost of TFSCs through efficient utilization of materials. Optimization was performed through use of a statistical approach to study the effect of varying electrodeposition parameters upon the properties of the films. A robust design method referred to as the "Taguchi Method" helped in engineering the properties of the films, and improved the PV characteristics including optical bandgap, absorption coefficient, stoichiometry, morphology, crystalline structure, thickness, etc. Current density (also a function of deposition voltage) had the most significant impact upon the stoichiometry and morphology of In2S3 films, whereas deposition temperature and composition of the solution had the least significant impact. The dissertation discusses the film growth mechanism and provides understanding of the regions of low quality (for example, cracks) in films. In2S3 films were systematically and quantitatively investigated by varying electrodeposition parameters including bath composition, current density, deposition time and temperature, stir rate, and electrode potential. These parameters individually and collectively exhibited significant correlation with the properties of the films. Digital imaging analysis (using fracture and buckling analysis software) of scanning electron microscope (SEM) images helped to quantify the cracks and study the defects in films. In addition, the effects of different annealing treatments (200 °C, 300 °C, and 400 °C in air) and coated-glass substrates (Mo, ITO, FTO) upon the properties of the In2S3 films were analyzed.
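As a sketch of the Taguchi-style ranking step, the snippet below computes the standard "larger-is-better" signal-to-noise ratio for replicated responses at different factor levels; the factor levels and response values are hypothetical placeholders, not the dissertation's measurements.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for responses where larger is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicated responses (e.g., a normalized film-quality metric)
# for three current-density levels of an orthogonal-array experiment.
levels = {
    "J = 5 mA/cm^2":  [0.61, 0.58, 0.63],
    "J = 10 mA/cm^2": [0.74, 0.71, 0.76],
    "J = 15 mA/cm^2": [0.69, 0.66, 0.70],
}
for name, y in levels.items():
    print(f"{name}: S/N = {sn_larger_is_better(y):.2f} dB")
```

Comparing mean S/N across levels of each factor is how the method identifies the most influential parameter (here, current density).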
Iqbal, Muhammad; Rehan, Muhammad; Khaliq, Abdul; Saeed-ur-Rehman; Hong, Keum-Shik
2014-01-01
This paper investigates the chaotic behavior and synchronization of two different coupled chaotic FitzHugh-Nagumo (FHN) neurons with unknown parameters under external electrical stimulation (EES). The coupled FHN neurons of different parameters admit unidirectional and bidirectional gap junctions in the medium between them. Dynamical properties, such as the increase in synchronization error as a consequence of the deviation of neuronal parameters for unlike neurons, the effect of difference in coupling strengths caused by the unidirectional gap junctions, and the impact of large time-delay due to separation of neurons, are studied in exploring the behavior of the coupled system. A novel integral-based nonlinear adaptive control scheme, to cope with the infeasibility of the recovery variable, for synchronization of two coupled delayed chaotic FHN neurons of different and unknown parameters under uncertain EES is derived. Further, to guarantee robust synchronization of different neurons against disturbances, the proposed control methodology is modified to achieve the uniformly ultimately bounded synchronization. The parametric estimation errors can be reduced by selecting suitable control parameters. The effectiveness of the proposed control scheme is illustrated via numerical simulations.
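For orientation, here is a sketch of two gap-junction-coupled FitzHugh-Nagumo neurons under sinusoidal stimulation, using a textbook FHN form and simple asymmetric coupling rather than the exact delayed, adaptive-control model of the paper; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_fhn(t, s, a=0.7, b=0.8, eps=0.08, g12=0.1, g21=0.05,
                amp=0.5, omega=0.5):
    """Two FHN neurons with asymmetric gap-junction coupling and EES."""
    v1, w1, v2, w2 = s
    stim = amp * np.cos(omega * t)          # external electrical stimulation
    dv1 = v1 - v1**3 / 3 - w1 + stim + g12 * (v2 - v1)
    dw1 = eps * (v1 + a - b * w1)
    dv2 = v2 - v2**3 / 3 - w2 + stim + g21 * (v1 - v2)
    dw2 = eps * (v2 + a - b * w2)
    return [dv1, dw1, dv2, dw2]

sol = solve_ivp(coupled_fhn, (0, 400), [0.1, 0.0, -0.1, 0.0], max_step=0.1)
sync_error = np.abs(sol.y[0] - sol.y[2])    # membrane-potential mismatch
print("final synchronization error:", float(sync_error[-1]))
```

Mismatched neuron parameters and unequal coupling strengths (g12 versus g21) are what drive the synchronization error that the proposed adaptive controller is designed to suppress.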
Zeng, Jun; Shen, Ju-Pei; Wang, Jun-Tao; Hu, Hang-Wei; Zhang, Cui-Jing; Bai, Ren; Zhang, Li-Mei; He, Ji-Zheng
2018-05-01
Climate change is projected to have impacts on precipitation and temperature regimes in drylands of high elevation regions, with especially large effects in the Qinghai-Tibetan Plateau. However, there is limited information about how the projected climate change will impact the soil microbial community and its activity in the region. Here, we present results from a study conducted across 72 soil samples from 24 different sites along a temperature and precipitation gradient (substituted by an aridity index ranging from 0.079 to 0.89) of the Plateau, to assess how changes in aridity affect the abundance, community composition, and diversity of bacteria, ammonia-oxidizers, and denitrifiers (nirK/S and nosZ gene-containing communities) as well as nitrogen (N) turnover enzyme activities. We found V-shaped or inverted V-shaped relationships between the aridity index (AI) and soil microbial parameters (gene abundance, community structures, microbial diversity, and N turnover enzyme activities) with a threshold at AI = 0.27. The increasing or decreasing rates of the microbial parameters were higher in areas with AI < 0.27 (alpine steppes) than in mesic areas with 0.27 < AI < 0.89 (alpine meadow and swamp meadow). The results indicated that the projected warming and wetting have a strong impact on soil microbial communities in the alpine steppes.
NASA Astrophysics Data System (ADS)
Paul, M.; Negahban-Azar, M.
2017-12-01
Hydrologic models usually need to be carefully calibrated against observed streamflow at the outlet of a particular drainage area. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable. It is therefore difficult to calibrate the model for a large number of potentially uncertain model parameters. This becomes even more challenging if the model covers a large watershed with multiple land uses and various geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters which affect the calibrated model performance. There are many different calibration and uncertainty analysis algorithms which can be performed with different objective functions. By incorporating sensitive parameters in the streamflow simulation, the effect of a suitable algorithm in improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied to the San Joaquin Watershed in California, covering 19,704 km2, to calibrate daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and depleting groundwater for agricultural irrigation. Given the uncertainties inherent in hydrologic modeling, it is therefore important to perform a proper uncertainty analysis when predicting the spatial and temporal variation of hydrologic processes and evaluating the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination, r2; Nash-Sutcliffe efficiency, NSE; percent bias, PBIAS; and Kling-Gupta efficiency, KGE). The preliminary results showed that using the SUFI-2 algorithm with the NSE and KGE objective functions significantly improved the calibration (e.g., R2 and NSE of 0.52 and 0.47, respectively, for daily streamflow calibration).
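The objective functions named here are standard; a minimal sketch of NSE and KGE follows, with synthetic observed and simulated flows used only for illustration.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation, variability, and bias terms."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0])   # observed daily flow, m3/s
sim = np.array([10.0, 16.0, 27.0, 25.0, 17.0])   # simulated daily flow, m3/s
print(f"NSE = {nse(obs, sim):.2f}, KGE = {kge(obs, sim):.2f}")
```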
NASA Technical Reports Server (NTRS)
Jackson, Wade C.; Portanova, Marc A.
1995-01-01
This paper summarizes three areas of research which were performed to characterize out-of-plane properties of composite materials. In the first investigation, a series of tests was run to characterize the through-the-thickness tensile strength for a variety of composites that included 2D braids, 2D and 3D weaves, and prepreg tapes. A new test method based on a curved beam was evaluated. Failures were significantly different between the 2D materials and the 3D weaves. The 2D materials delaminated between layers due to out-of-plane tensile stresses while the 3D weaves failed due to the formation of radial cracks between the surface plies caused by high circumferential stresses along the inner radius. The strength of the 2D textile composites did not increase relative to the tapes. Final failure in the 3D weaves was caused by a circumferential crack similar to the 2D materials and occurred at a lower bending moment than in other materials. The early failures in the 3D weaves were caused by radial crack formation rather than a low through-the-thickness strength. The second investigation focused on the development of a standard impact test method to measure impact damage resistance. The only impact tests that currently exist are compression after impact (CAI) tests which incorporate elements of both damage resistance and damage tolerance. A new impact test method is under development which uses a quasi-static indentation (QSI) test to directly measure damage resistance. Damage resistance is quantified in terms of the contact force to produce a unit of damage, where a metric for damage may be area in C-scan, depth of residual dent, penetration, damage growth, etc. A final draft of an impact standard that uses a QSI test method will be presented to the ASTM Impact Task Group on impact. In the third investigation, the impact damage resistance behavior of a variety of textile materials was studied using the QSI test method. In this study, the force where large damage initiates was measured and the delamination size as a function of force was determined. The force to initiate large damage was significantly lower in braids and weaves. The delamination diameter-impact force relationship was quantified using a damage resistance parameter, Q(*), which related delamination diameter to impact force over a range of delamination sizes. Using this Q(*) parameter to rate the materials, the stitched uniweaves, toughened epoxy tapes, and through-the-thickness orthogonal interlock weave were the most damage resistant.
Constructing optimal ensemble projections for predictive environmental modelling in Northern Eurasia
NASA Astrophysics Data System (ADS)
Anisimov, Oleg; Kokorev, Vasily
2013-04-01
Large uncertainties in climate impact modelling are associated with the forcing climate data. This study is targeted at the evaluation of the quality of GCM-based climatic projections in the specific context of predictive environmental modelling in Northern Eurasia. To accomplish this task, we used the output from 36 CMIP5 GCMs from the IPCC AR-5 database for the control period 1975-2005 and calculated several climatic characteristics and indexes that are most often used in the impact models, i.e. the summer warmth index, duration of the vegetation growth period, precipitation sums, dryness index, thawing degree-day sums, and the annual temperature amplitude. We used data from 744 weather stations in Russia and neighbouring countries to analyze the spatial patterns of modern climatic change and to delineate 17 large regions with coherent temperature changes in the past few decades. GCM results and observational data were averaged over the coherent regions and compared with each other. Ultimately, we evaluated the skills of individual models, ranked them in the context of regional impact modelling and identified top-end GCMs that "better than average" reproduce modern regional changes of the selected meteorological parameters and climatic indexes. Selected top-end GCMs were used to compose several ensembles, each combining results from a different number of models. Ensembles were ranked using the same algorithm and outliers eliminated. We then used data from top-end ensembles for the 2000-2100 period to construct the climatic projections that are likely to be "better than average" in predicting climatic parameters that govern the state of environment in Northern Eurasia. The ultimate conclusions of our study are the following. • High-end GCMs that demonstrate excellent skills in conventional atmospheric model intercomparison experiments are not necessarily the best in replicating climatic characteristics that govern the state of environment in Northern Eurasia, and independent model evaluation on the regional level is necessary to identify "better than average" GCMs. • Each of the ensembles combining results from several "better than average" models replicates selected meteorological parameters and climatic indexes better than any single GCM. The ensemble skills are parameter-specific and depend on the models it consists of. The best results are not necessarily those based on the ensemble comprised of all "better than average" models. • Comprehensive evaluation of climatic scenarios using specific criteria narrows the range of uncertainties in environmental projections.
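The ranking-and-ensemble idea can be sketched schematically: score each model by a region-averaged error against observations, keep the "better than average" ones, and compare ensemble means built from the best k members. The scoring metric (RMSE) and the synthetic data below are placeholders for the paper's regional indexes and station observations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_regions = 36, 17
obs = rng.normal(0.0, 1.0, n_regions)                       # observed regional index
models = obs + rng.normal(0.0, 0.8, (n_models, n_regions))  # GCM-derived index

rmse = np.sqrt(np.mean((models - obs)**2, axis=1))    # one score per GCM
better = np.where(rmse < rmse.mean())[0]              # "better than average" models
order = better[np.argsort(rmse[better])]              # ranked top-end GCMs

# Ensembles built from the best k models; the best ensemble is not
# necessarily the one using all "better than average" members.
for k in (3, 5, len(order)):
    ens = models[order[:k]].mean(axis=0)
    print(k, "members -> ensemble RMSE:",
          round(float(np.sqrt(np.mean((ens - obs)**2))), 3))
```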
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.I.; Tsai, J.J.; Wu, K.H.
2005-07-01
The impacts of the aeration and the agitation on the composting process of synthetic food wastes made of dog food were studied in a laboratory-scale reactor. Two major peaks of CO2 evolution rate were observed. Each peak represented an independent stage of composting associated with the activities of thermophilic bacteria. CO2 evolutions known to correlate well with microbial activities and reactor temperatures were fitted successfully to a modified Gompertz equation, which incorporated three biokinetic parameters, namely, CO2 evolution potential, specific CO2 evolution rate, and lag phase time. No parameters that describe the impact of operating variables are involved. The model is only valid for the specified experimental conditions and may look different with others. The effects of operating parameters such as aeration and agitation were studied statistically with multivariate regression technique. Contour plots were constructed using regression equations for the examination of the dependence of CO2 evolution potentials on aeration and agitation. In the first stage, a maximum CO2 evolution potential was found when the aeration rate and the agitation parameter were set at 1.75 l/kg solids-min and 0.35, respectively. In the second stage, a maximum existed when the aeration rate and the agitation parameter were set at 1.8 l/kg solids-min and 0.5, respectively. The methods presented here can also be applied for the optimization of large-scale composting facilities that are operated differently and take longer time.
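The modified Gompertz equation referred to here is commonly written as y(t) = A exp(-exp(μm e/A (λ - t) + 1)), with A the evolution potential, μm the maximum specific rate, and λ the lag time; a minimal fitting sketch with synthetic data follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_m, lam):
    """Modified Gompertz: A = CO2 potential, mu_m = max rate, lam = lag time."""
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 10, 40)                          # composting time, days
y_true = gompertz(t, A=120.0, mu_m=35.0, lam=1.5)   # cumulative CO2 (toy units)
y_obs = y_true + np.random.normal(0, 2.0, t.size)   # noisy "measurements"

popt, pcov = curve_fit(gompertz, t, y_obs, p0=[100.0, 20.0, 1.0])
A, mu_m, lam = popt
print(f"A = {A:.1f}, mu_m = {mu_m:.1f}, lag = {lam:.2f} d")
```

Fitting each of the two composting stages separately, as described above, yields one (A, μm, λ) triplet per stage that can then be regressed against aeration and agitation.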
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate social and economic damage from water-related disasters such as flood and drought. Sequential data assimilation (DA) may facilitate improved streamflow prediction using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) could effectively represent and control uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach could be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will be focused on gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
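A minimal bootstrap particle filter sketch with a multi-parametric ensemble (each particle carries its own parameter set) is shown below; it uses a toy linear reservoir in place of mHM and omits the lag handling, so it illustrates only the weighting and resampling steps.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles = 100

# Each particle carries a model state (storage) and its own parameter
# (a recession coefficient k), i.e. a multi-parametric ensemble.
states = rng.uniform(5.0, 15.0, n_particles)
params = rng.uniform(0.05, 0.30, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

def model_step(state, k, forcing):
    """Toy linear-reservoir step standing in for the hydrologic model."""
    runoff = k * state
    return state + forcing - runoff, runoff

obs, obs_err = 2.0, 0.3                            # observed streamflow (toy units)
states, flows = model_step(states, params, forcing=1.0)

# Weight update from a Gaussian observation likelihood.
weights *= np.exp(-0.5 * ((flows - obs) / obs_err) ** 2)
weights /= weights.sum()
print("analysis-mean flow:", round(float(np.sum(weights * flows)), 3))

# Resample states *and* parameters when the effective sample size drops.
if 1.0 / np.sum(weights**2) < n_particles / 2:
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    states, params = states[idx], params[idx]
    weights[:] = 1.0 / n_particles
```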
Impact of space dependent eddy mixing on large ocean circulation
NASA Astrophysics Data System (ADS)
Pradal, M. A. S.; Gnanadesikan, A.; Abernathey, R. P.
2016-02-01
Throughout the ocean, mesoscale eddies stir tracers such as heat, oxygen, helium, and dissolved CO2, affecting their spatial distribution. Recent work (Gnanadesikan et al., 2013) showed that changes in eddy stirring could result in changes of the volume of hypoxic and anoxic waters, leading to drastic consequences for ocean biogeochemical cycles. The parameterization of mesoscale eddies in global climate models (GCMs) consists of two parts, based on the formulations of Redi (1982) and Gent and McWilliams (1990), which are associated with the mixing parameters ARedi and AGM, respectively. Numerous studies have looked at the sensitivity of ESMs to changing AGM, either alone or in combination with an ARedi parameter taken to be equivalent to the value of AGM. By contrast, the impact of the Redi parameterization in isolation remains unexplored. A previous article (Pradal and Gnanadesikan, 2014) described the sensitivity of the climate system to a six-fold increase in the Redi parameter. They found that increasing the isopycnal mixing coefficient tended to warm the climate of the planet overall, through an increase of heat absorption linked to a destabilization of the halocline in subpolar regions (particularly the Southern Ocean). This previous work varied a globally constant Redi parameter from 400 m2/s to 2400 m2/s. New estimates from altimetry (Abernathey and Marshall, 2013) better constrain the spatial patterns and range of the ARedi parameter. Does such spatial variation matter, and if so, where does it matter? Following Gnanadesikan et al. (2013) and Pradal and Gnanadesikan (2014), this study examines this question with a suite of Earth System Models.
Revised coordinates of the Mars Orbiter Laser Altimeter (MOLA) footprints
NASA Astrophysics Data System (ADS)
Annibali, S.; Stark, A.; Gwinner, K.; Hussmann, H.; Oberst, J.
2017-09-01
We revised the Mars Orbiter Laser Altimeter (MOLA) footprint locations (i.e. areocentric body-fixed latitude and longitude), using updated trajectory models for the Mars Global Surveyor and updated rotation parameters of Mars, including precession, nutation and length-of-day variation. We assess the impact of these updates on the gridded MOLA maps. A first comparison reveals that even slight corrections to the rotational state of Mars can lead to height differences up to 100 m (in particular in regions with high slopes, where large interpolation effects are expected). Ultimately, we aim at independent measurements of the rotation parameters of Mars. We co-register MOLA profiles to digital terrain models from stereo images (stereo DTMs) and measure offsets of the two data sets.
NASA Astrophysics Data System (ADS)
Michel, Patrick; Jutzi, M.; Richardson, D. C.; Benz, W.
2010-10-01
Asteroids of dark (e.g. C, D) taxonomic classes as well as Kuiper Belt objects and comets are believed to have high porosity, not only in the form of large voids but also in the form of micro-pores. The presence of such microscale porosity introduces additional physics in the impact process. We have enhanced our 3D SPH hydrocode, used to simulate catastrophic breakups, with a model of porosity [1] and validated it at small scale by comparison with impact experiments on pumice targets [2]. Our model is now ready to be applied to a large range of problems. In particular, accounting for the gravitational phase of an impact, we can study the formation of dark-type asteroid families, such as Veritas, and Kuiper-Belt families, such as Haumea. Recently we characterized for the first time the catastrophic impact energy threshold, usually called Q*D, as a function of the target's diameter, porosity, material strength and impact speed [3]. Regarding the mentioned families, our preliminary results show that accounting for porosity leads to different outcomes that may better represent their properties and constrain their definition. In particular, for Veritas, we find that its membership may need some revision [4]. The parameter space is still large, many interesting families need to be investigated and our model will be applied to a large range of cases. PM, MJ and DCR acknowledge financial support from the French Programme National de Planétologie, NASA PG&G "Small Bodies and Planetary Collisions" and NASA under Grant No. NNX08AM39G issued through the Office of Space Science, respectively. [1] Jutzi et al. 2008. Icarus 198, 242-255; [2] Jutzi et al. 2009. Icarus 201, 802-813; [3] Jutzi et al. 2010. Fragment properties at the catastrophic disruption threshold: The effect of the parent body's internal structure, Icarus 207, 54-65; [4] Michel et al. 2010. Icarus, submitted.
High-speed collision of copper nanoparticle with aluminum surface: Molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Pogorelko, Victor V.; Mayer, Alexander E.; Krasnikov, Vasiliy S.
2016-12-01
We investigate the effect of the high-speed collision of copper nanoparticles with an aluminum surface by means of molecular dynamics simulations. The studied nanoparticle diameter is varied within the range 7.2-22 nm, and the impact velocity is equal to 500 or 1000 m/s. Dislocation analysis shows that a large quantity of dislocations is formed within the impact area. The overall length of dislocations is determined, first of all, by the impact velocity and by the size of the incident copper nanoparticle, in other words, by the kinetic energy of the nanoparticle. Dislocations occupy the total volume of the impacted aluminum single crystal layer (40.5 nm in thickness) in the form of an intertwined structure in the case of large kinetic energy of the incident nanoparticle. A decrease in the initial kinetic energy or an increase in the layer thickness leads to restriction of the penetration depth of the dislocation net; formation of separate dislocation loops is observed in this case. An increase in the initial system temperature slightly raises the dislocation density inside the bombarded layer and considerably decreases the dislocation density inside the nanoparticle. The temperature increase also leads to a deeper penetration of the copper atoms inside the aluminum. Additional molecular dynamics simulations show that the deposited particles demonstrate very good adhesion even in the case of the considered relatively large nanoparticles. A medium energy of the nanoparticles corresponding to a velocity of about 500 m/s and an elevated system temperature of about 700-900 K are optimal parameters for production of high-quality layers of copper on the aluminum surface. These conditions provide both good adhesion and a lesser degree of plastic deformation. At the same time, higher impact velocities can be used for combined treatment consisting of both plastic deformation and coating.
Transient Finite Element Analyses Developed to Model Fan Containment Impact Events
NASA Technical Reports Server (NTRS)
Pereira, J. Michael
1997-01-01
Research is underway to establish an increased level of confidence in existing numerical techniques for predicting transient behavior when a fan blade of a jet engine is released and impacts the fan containment system. To evaluate the predictive accuracy that can currently be obtained, researchers at the NASA Lewis Research Center used the DYNA 3D computer code to simulate large-scale subcomponent impact tests that were conducted at the University of Dayton Research Institute (UDRI) Impact Physics Lab. In these tests, 20- by 40-in. flat metal panels, contoured to the shape of a typical fan case, were impacted by the root section of a fan blade. The panels were oriented at an angle to the path of the projectile that would simulate the conditions in an actual blade-out event. The metal panels were modeled in DYNA 3D using a kinematic hardening model with the strain rate dependence of the yield stress governed by the Cowper-Symonds rule. Failure was governed by the effective plastic strain criterion. The model of the fan blade and case just after impact is shown. By varying the maximum effective plastic strain, we obtained good qualitative agreement between the model and the experiments. Both the velocity required to penetrate the case and the deflection during impact compared well. This indicates that the failure criterion and constitutive model may be appropriate, but for DYNA 3D to be useful as a predictive tool, methods to determine accurate model parameters must be established. Simple methods for measuring model parameters are currently being developed. In addition, alternative constitutive models and failure criteria are being investigated.
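The strain-rate scaling named here is conventionally written σ_d = σ_y [1 + (ε̇/D)^(1/p)]; a short sketch follows, where the D and p values are illustrative textbook constants (often quoted for mild steel) rather than the calibrated parameters of the fan-case panels.

```python
def cowper_symonds(sigma_y, strain_rate, D=40.4, p=5.0):
    """Dynamic yield stress scaled by the Cowper-Symonds strain-rate law.

    sigma_y      static yield stress (Pa)
    strain_rate  effective plastic strain rate (1/s)
    D, p         material constants (illustrative values, not the test alloy)
    """
    return sigma_y * (1.0 + (strain_rate / D) ** (1.0 / p))

for rate in (1.0, 100.0, 10000.0):
    print(rate, "1/s ->", round(cowper_symonds(250e6, rate) / 1e6, 1), "MPa")
```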
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fediai, Artem, E-mail: artem.fediai@nano.tu-dresden.de; Ryndyk, Dmitry A.; Center for Advancing Electronics Dresden, TU Dresden, 01062 Dresden
2016-09-05
Using a dedicated combination of the non-equilibrium Green function formalism and large-scale density functional theory calculations, we investigated how incomplete metal coverage influences two of the most important electrical properties of carbon nanotube (CNT)-based transistors: contact resistance and its scaling with contact length, and maximum current. These quantities have been derived from parameter-free simulations of atomic systems that are as close as possible to experimental geometries. Physical mechanisms that govern these dependences have been identified for various metals, representing different CNT-metal interaction strengths from chemisorption to physisorption. Our results pave the way for an application-oriented design of CNT-metal contacts.
U.S. pharmaceutical policy in a global marketplace
Lakdawalla, Darius; Goldman, Dana P.; Michaud, Pierre-Carl; Sood, Neeraj; Lempert, Robert; Cong, Ze; de Vries, Han; Gutierrez, Italo
2013-01-01
Markets for innovative goods involve significant spillovers in a global economy. When US consumers pay higher prices for drugs, this stimulates innovation that benefits consumers all over the world. Conversely, when large European markets restrict prices and profits, foreign consumers bear some of the long-run cost in the form of less innovation. The result is a free-riding problem at a global level. These incentives are particularly strong for smaller markets, whose policies have relatively little impact on global innovation, but can have relatively large impacts on national pharmaceutical budgets. The result is a system in which the largest countries bear disproportionate burdens for stimulating innovation. Using a microsimulation approach, we estimate the impact of these incentive effects. The model's baseline estimates demonstrate that the US adoption of European-style price controls would harm consumers in the US and Europe; over a 50-year period, it would cost $8 trillion in the US, and $5 trillion in Europe. Similarly, repealing European price controls would add $10 trillion to the wealth of US society, and $6 trillion to wealth in Europe. Even under the most conservative assumptions, adopting price controls generates at best a small benefit, but risks a large cost. On the other hand, reducing pharmaceutical copayments would increase wealth in both societies, a result which is robust to a wide variety of parameter values. PMID:19088101
Impact of spot charge inaccuracies in IMPT treatments.
Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M
2017-08-01
Spot charge is one parameter of the pencil-beam scanning dose delivery system whose accuracy is typically high but whose required accuracy has not been investigated. In this work we quantify the dose impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines, as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5×10^6 protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms and dose differences. The allowable absolute charge error for small spot plans was about 2×10^6 protons. Larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error size was about 13%, 8% and 6% for small, medium and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is clinically not strictly needed. © 2017 American Association of Physicists in Medicine.
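Schematically, the perturbation study can be sketched as adding absolute and relative errors to the spot charges and re-accumulating dose through per-spot dose kernels; the kernels, error magnitudes, and dose metric below are hypothetical placeholders, not the clinical dose engine or the Γ analysis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_spots, n_voxels = 500, 1000

charges = rng.uniform(1e6, 5e7, n_spots)          # protons per spot
kernels = rng.random((n_spots, n_voxels)) * 1e-8  # hypothetical dose per proton

def dose(q):
    return q @ kernels                             # dose per voxel (toy units)

d_ref = dose(charges)

# Absolute perturbation: up to +/- 2e6 protons per spot.
d_abs = dose(charges + rng.uniform(-2e6, 2e6, n_spots))
# Relative perturbation: up to +/- 8% of each spot's charge.
d_rel = dose(charges * (1.0 + rng.uniform(-0.08, 0.08, n_spots)))

for label, d in [("absolute", d_abs), ("relative", d_rel)]:
    print(label, "max dose difference [% of max dose]:",
          round(float(np.max(np.abs(d - d_ref)) / d_ref.max() * 100), 2))
```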
epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.
Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa
2016-12-01
Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
Population control methods in stochastic extinction and outbreak scenarios.
Segura, Juan; Hilker, Frank M; Franco, Daniel
2017-01-01
Adaptive limiter control (ALC) and adaptive threshold harvesting (ATH) are two related control methods that have been shown to stabilize fluctuating populations. Large variations in population abundance can threaten the constancy and the persistence stability of ecological populations, which may impede the success and efficiency of managing natural resources. Here, we consider population models that include biological mechanisms characteristic for causing extinctions on the one hand and pest outbreaks on the other hand. These models include Allee effects and the impact of natural enemies (as is typical of forest defoliating insects). We study the impacts of noise and different levels of biological parameters in three extinction and two outbreak scenarios. Our results show that ALC and ATH have an effect on extinction and outbreak risks only for sufficiently large control intensities. Moreover, there is a clear disparity between the two control methods: in the extinction scenarios, ALC can be effective and ATH can be counterproductive, whereas in the outbreak scenarios the situation is reversed, with ATH being effective and ALC being potentially counterproductive.
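A toy sketch on a noisy Ricker map is given below, assuming ALC restocks the population up to a floor set by a fraction c of the previous abundance and ATH harvests it down to a ceiling set by (1 + c) times the previous abundance; these operational definitions are paraphrased assumptions and may differ in detail from the paper's formulation, and the map and parameter values are illustrative only.

```python
import numpy as np

def ricker(x, r=2.8, K=100.0):
    """Ricker map standing in for the uncontrolled population dynamics."""
    return x * np.exp(r * (1.0 - x / K))

def alc(x_new, x_prev, c):
    """Adaptive limiter control (assumed form): restock up to c * x_prev."""
    return max(x_new, c * x_prev)

def ath(x_new, x_prev, c):
    """Adaptive threshold harvesting (assumed form): harvest down to (1+c)*x_prev."""
    return min(x_new, (1.0 + c) * x_prev)

def simulate(control, c, steps=200, x0=50.0, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x, series = x0, []
    for _ in range(steps):
        x_next = ricker(x) * np.exp(noise * rng.normal())  # environmental noise
        x = control(x_next, x, c)
        series.append(x)
    return np.array(series)

for name, ctrl in [("ALC", alc), ("ATH", ath)]:
    s = simulate(ctrl, c=0.6)
    print(name, "fluctuation (CV):", round(float(s.std() / s.mean()), 2))
```

Adding an Allee threshold or a natural-enemy term to the map, as in the extinction and outbreak scenarios of the paper, changes which of the two interventions is effective.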
Impacts of tropical deforestation. Part II: The role of large-scale dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H.; Henderson-Sellers, A.; McGuffie, K.
1996-10-01
This is the second in a pair of papers in which the possible impacts of tropical deforestation are examined using a version of the NCAR CCM1. The emphasis in this paper is on the influence of tropical deforestation on the large-scale climate system. This influence is explored through the examination of the regional moisture budget and through an analysis of the Hadley and Walker circulations. Modification of the model surface parameters to simulate tropical deforestation produces significant modifications of both Hadley and Walker circulations, which result in changes distant from the region of deforestation. A mechanism for propagation to middle and high latitudes of disturbances arising from tropical deforestation is proposed based on Rossby wave propagation mechanisms. These mechanisms, which have also been associated with the extratropical influences of ENSO events, provide a pathway for the dispersion of the tropical disturbances to high latitudes. 27 refs., 20 figs., 1 tab.
Evolutionary response when selection and genetic variation covary across environments.
Wood, Corlett W; Brodie, Edmund D
2016-10-01
Although models of evolution usually assume that the strength of selection on a trait and the expression of genetic variation in that trait are independent, whenever the same ecological factor impacts both parameters, a correlation between the two may arise that accelerates trait evolution in some environments and slows it in others. Here, we address the evolutionary consequences and ecological causes of a correlation between selection and expressed genetic variation. Using a simple analytical model, we show that the correlation has a modest effect on the mean evolutionary response and a large effect on its variance, increasing among-population or among-generation variation in the response when positive, and diminishing variation when negative. We performed a literature review to identify the ecological factors that influence selection and expressed genetic variation across traits. We found that some factors - temperature and competition - are unlikely to generate the correlation because they affected one parameter more than the other, and identified others - most notably, environmental novelty - that merit further investigation because little is known about their impact on one of the two parameters. We argue that the correlation between selection and genetic variation deserves attention alongside other factors that promote or constrain evolution in heterogeneous landscapes. © 2016 John Wiley & Sons Ltd/CNRS.
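The breeder's-equation argument can be sketched quickly: the per-environment response is R = Gβ, so a covariance between G and β across environments shifts the mean response modestly (by cov(G, β)) but changes its among-environment variance strongly. The parameter values in this numpy sketch are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n_env = 10000
mean = [1.0, 0.5]       # mean genetic variance G and mean selection gradient beta

def response_stats(rho):
    """Mean and SD of R = G * beta across environments for a given correlation."""
    cov = [[0.2, rho * np.sqrt(0.2 * 0.05)],
           [rho * np.sqrt(0.2 * 0.05), 0.05]]
    G, beta = rng.multivariate_normal(mean, cov, n_env).T
    R = G * beta
    return R.mean(), R.std()

for rho in (-0.8, 0.0, 0.8):
    m, s = response_stats(rho)
    print(f"corr(G, beta) = {rho:+.1f}: mean response = {m:.3f}, SD = {s:.3f}")
```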
a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
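The core transport step can be illustrated with a small discrete optimal transport problem solved as a linear program: prior particles carrying likelihood-derived importance weights are mapped to an equally weighted posterior ensemble. This is a generic ensemble-transform sketch under a toy Gaussian likelihood, not the AET algorithm or its matrix-scaling solver.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

rng = np.random.default_rng(3)
N = 30
prior = rng.normal(0.0, 2.0, N)                    # prior ensemble of a scalar parameter

# Posterior importance weights from a (hypothetical) Gaussian likelihood.
obs, obs_err = 1.0, 0.5
w = norm.pdf(obs, loc=prior, scale=obs_err)
w /= w.sum()

# Discrete optimal transport: move each particle's posterior mass w_i onto N
# equally weighted output particles, minimizing a squared-distance cost.
cost = (prior[:, None] - prior[None, :]) ** 2
A_eq, b_eq = [], []
for i in range(N):                                 # row sums = posterior weights
    row = np.zeros((N, N)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(w[i])
for j in range(N):                                 # column sums = 1/N per output particle
    col = np.zeros((N, N)); col[:, j] = 1.0
    A_eq.append(col.ravel()); b_eq.append(1.0 / N)

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
T = res.x.reshape(N, N)
posterior = N * (T.T @ prior)                      # transported posterior particles

print("weighted mean:", float(np.sum(w * prior)))
print("transformed-ensemble mean:", float(posterior.mean()))
```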
Impact of Ice Morphology on Design Space of Pharmaceutical Freeze-Drying.
Goshima, Hiroshika; Do, Gabsoo; Nakagawa, Kyuya
2016-06-01
It has been known that the sublimation kinetics of a freeze-drying product is affected by its internal ice crystal microstructures. This article demonstrates the impact of the ice morphologies of a frozen formulation in a vial on the design space for the primary drying of a pharmaceutical freeze-drying process. Cross-sectional images of frozen sucrose-bovine serum albumin aqueous solutions were optically observed and digital pictures were acquired. Binary images were obtained from the optical data to extract the geometrical parameters (i.e., ice crystal size and tortuosity) that relate to the mass-transfer resistance of water vapor during the primary drying step. A mathematical model was used to simulate the primary drying kinetics and provided the design space for the process. The simulation results predicted that the geometrical parameters of frozen solutions significantly affect the design space, with large and less tortuous ice morphologies resulting in wide design spaces and vice versa. The optimal applicable drying conditions are influenced by the ice morphologies. Therefore, owing to the spatial distributions of the geometrical parameters of a product, the boundary curves of the design space are variable and could be tuned by controlling the ice morphologies. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherical symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as a MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
NASA Astrophysics Data System (ADS)
Pistolesi, Marco; Cioni, Raffaello; Rosi, Mauro; Aguilera, Eduardo
2014-02-01
The ice-capped Cotopaxi volcano is known worldwide for the large-scale, catastrophic lahars that have occurred in connection with historical explosive eruptions. The most recent large-scale lahar event occurred in 1877, when scoria flows partially melted ice and snow of the summit glacier, generating debris flows that severely impacted all the river valleys originating from the volcano. The 1877 lahars have been considered in recent years as a maximum expected event to define the hazard associated with lahar generation at Cotopaxi. Conversely, recent field-based studies have shown that such debris flows have occurred several times during the last 800 years of activity at Cotopaxi, and that the scale of lahars has been variable, including events much larger than that of 1877. Despite a rapid retreat of the summit ice cap over the past century, in fact, there are no data clearly suggesting that future events will be smaller than those observed in the deposits of the last 800 years of activity. In addition, geological field data prove that the lahar triggering mechanism also has to be considered as a key input parameter and, under appropriate eruptive mechanisms, a hazard scenario of a lahar with a volume 3 times larger than the 1877 event is likely. In order to analyze the impact scenarios in the southern drainage system of the volcano, simulations of inundation areas were performed with a semi-empirical model (LAHARZ), using input parameters including variable water volume. Results indicate that a lahar 3 times larger than the 1877 event would invade much wider areas than those flooded by the 1877 lahars along the southern valley system, eventually impacting highly urbanized areas such as the city of Latacunga.
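LAHARZ relates inundated areas to lahar volume through the semi-empirical scaling A = 0.05 V^(2/3) (maximum cross-sectional area) and B = 200 V^(2/3) (planimetric area) of Iverson et al. (1998); the short sketch below compares an 1877-scale event with one three times larger, using an illustrative placeholder volume rather than the values adopted in the study.

```python
def laharz_areas(volume_m3):
    """Semi-empirical LAHARZ scaling (Iverson et al., 1998).

    Returns the maximum inundated cross-sectional area A and the total
    planimetric (valley-floor) area B, both in m^2, for a lahar volume in m^3.
    """
    A = 0.05 * volume_m3 ** (2.0 / 3.0)
    B = 200.0 * volume_m3 ** (2.0 / 3.0)
    return A, B

v_1877 = 6.0e7                      # illustrative volume for an 1877-scale lahar
for label, v in [("1877-scale", v_1877), ("3x 1877", 3 * v_1877)]:
    A, B = laharz_areas(v)
    print(f"{label}: cross-section A = {A:.2e} m^2, "
          f"planimetric B = {B:.2e} m^2 ({B / 1e6:.0f} km^2)")
```

Because both areas scale with V^(2/3), tripling the volume roughly doubles the inundated footprint, which is consistent with the wider flooded areas reported for the larger scenario.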
Impact of Energy Gain and Subsystem Characteristics on Fusion Propulsion Performance
NASA Technical Reports Server (NTRS)
Chakrabarti, S.; Schmidt, G. R.
2001-01-01
Rapid transport of large payloads and human crews throughout the solar system requires propulsion systems having very high specific impulse (Isp > 10^4 to 10^5 s). It also calls for systems with extremely low mass-power ratios (alpha < 10^-1 kg/kW). Such low alpha are beyond the reach of conventional power-limited propulsion, but may be attainable with fusion and other nuclear concepts that produce energy within the propellant. The magnitude of energy gain must be large enough to sustain the nuclear process while still providing a high jet power relative to the massive energy-intensive subsystems associated with these concepts. This paper evaluates the impact of energy gain and subsystem characteristics on alpha. Central to the analysis are general parameters that embody the essential features of any 'gain-limited' propulsion power balance. Results show that the gains required to achieve alpha = 10^-1 kg/kW with foreseeable technology range from approximately 100 to over 2000, which is three to five orders of magnitude greater than the current fusion state of the art. Sensitivity analyses point to the parameters exerting the most influence for either: (1) lowering alpha and improving mission performance or (2) relaxing gain requirements and reducing demands on the fusion process. The greatest impact comes from reducing mass and increasing efficiency of the thruster and subsystems downstream of the fusion process. High relative gain, through enhanced fusion processes or more efficient drivers and processors, is also desirable. There is a benefit in improving driver and subsystem characteristics upstream of the fusion process, but it diminishes at relative gains > 100.
Subsurface failure in spherical bodies. A formation scenario for linear troughs on Vesta’s surface
Stickle, Angela M.; Schultz, P. H.; Crawford, D. A.
2014-10-13
Many asteroids in the Solar System exhibit unusual, linear features on their surface. The Dawn mission recently observed two sets of linear features on the surface of the asteroid 4 Vesta. Geologic observations indicate that these features are related to the two large impact basins at the south pole of Vesta, though no specific mechanism of origin has been determined. Furthermore, the orientation of the features is offset from the center of the basins. Experimental and numerical results reveal that the offset angle is a natural consequence of oblique impacts into a spherical target. We demonstrate that a set of shear planes develops in the subsurface of the body opposite to the point of first contact. Moreover, these subsurface failure zones then propagate to the surface under combined tensile-shear stress fields after the impact to create sets of approximately linear faults on the surface. Comparison between the orientation of damage structures in the laboratory and failure regions within Vesta can be used to constrain impact parameters (e.g., the approximate impact point and likely impact trajectory).
The Age of the Surface of Venus
NASA Technical Reports Server (NTRS)
Zahnle, K. J.; McKinnon, William B.; Young, Richard E. (Technical Monitor)
1997-01-01
Impact craters on Venus appear to be uniformly and randomly scattered over a once, but no longer, geologically active planet. To first approximation, the planet shows a single surface of a single age. Here we use Monte Carlo cratering simulations to estimate the age of the surface of Venus. The simulations are based on the present populations of Earth-approaching asteroids, Jupiter-family, Halley-family, and long period comets; they use standard Schmidt-Housen crater scalings in the gravity regime; and they describe interaction with the atmosphere using a semi-analytic 'pancake' model that is calibrated to detailed numerical simulations of impactors striking Venus. The lunar and terrestrial cratering records are also simulated. Both of these records suffer from poor statistics. The Moon has few young large craters and fewer still whose ages are known, and the record is biased because small craters tend to look old and large craters tend to look young. The craters of the Earth provide the only reliable ages, but these craters are few, eroded, of uncertain diameter, and statistically incomplete. Together the three cratering records can be inverted to constrain the flux of impacting bodies, crater diameters given impact parameters, and the calibration of atmospheric interactions. The surface age of Venus that results is relatively young. Alternatively, we can use our best estimates for these three input parameters to derive a best estimate for the age of the surface of Venus. Our tentative conclusions are that comets are unimportant, that the lunar and terrestrial crater records are both subject to strong biases, that there is no strong evidence for an increasing cratering flux in recent years, and that the nominal age of the surface of Venus is about 600 Ma, although the uncertainty is about a factor of two. The chief difference between our estimate and earlier, somewhat younger estimates is that we find that the venusian atmosphere is less permeable to impacting bodies than supposed by earlier studies. An older surface increases the likelihood that Venus is dead.
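The basic crater-count age logic can be illustrated with a Poisson estimate: for an assumed crater production rate per unit area per unit time, the age follows from the observed count, with confidence limits from the chi-squared distribution. The production rate below is a placeholder chosen only so that the example reproduces an age of roughly 600 Ma; it is not the calibrated flux of the paper.

```python
from scipy.stats import chi2

def crater_age(n_craters, area_km2, rate_per_km2_per_gyr, conf=0.95):
    """Surface age (Gyr) and Poisson confidence interval from a crater count."""
    expected_per_gyr = rate_per_km2_per_gyr * area_km2
    age = n_craters / expected_per_gyr
    lo = 0.5 * chi2.ppf((1 - conf) / 2, 2 * n_craters) / expected_per_gyr
    hi = 0.5 * chi2.ppf(1 - (1 - conf) / 2, 2 * (n_craters + 1)) / expected_per_gyr
    return age, lo, hi

# ~940 craters over Venus' ~4.6e8 km^2 surface, with a placeholder rate.
age, lo, hi = crater_age(n_craters=940, area_km2=4.6e8,
                         rate_per_km2_per_gyr=3.4e-6)
print(f"nominal age = {age:.2f} Gyr (95% CI {lo:.2f}-{hi:.2f} Gyr)")
```

The dominant uncertainty in practice is the production rate itself, which is why the paper's factor-of-two uncertainty comes mostly from the impactor flux and atmospheric-filtering calibration rather than from counting statistics.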
Laser Simulations of the Destructive Impact of Nuclear Explosions on Hazardous Asteroids
NASA Astrophysics Data System (ADS)
Aristova, E. Yu.; Aushev, A. A.; Baranov, V. K.; Belov, I. A.; Bel'kov, S. A.; Voronin, A. Yu.; Voronich, I. N.; Garanin, R. V.; Garanin, S. G.; Gainullin, K. G.; Golubinskii, A. G.; Gorodnichev, A. V.; Denisova, V. A.; Derkach, V. N.; Drozhzhin, V. S.; Ericheva, I. A.; Zhidkov, N. V.; Il'kaev, R. I.; Krayukhin, A. A.; Leonov, A. G.; Litvin, D. N.; Makarov, K. N.; Martynenko, A. S.; Malinov, V. I.; Mis'ko, V. V.; Rogachev, V. G.; Rukavishnikov, A. N.; Salatov, E. A.; Skorochkin, Yu. V.; Smorchkov, G. Yu.; Stadnik, A. L.; Starodubtsev, V. A.; Starodubtsev, P. V.; Sungatullin, R. R.; Suslov, N. A.; Sysoeva, T. I.; Khatunkin, V. Yu.; Tsoi, E. S.; Shubin, O. N.; Yufa, V. N.
2018-01-01
We present the results of preliminary experiments at laser facilities in which the processes of the undeniable destruction of stony asteroids (chondrites) in space by nuclear explosions on the asteroid surface are simulated based on the principle of physical similarity. We present the results of comparative gasdynamic computations of a model nuclear explosion on the surface of a large asteroid and computations of the impact of a laser pulse on a miniature asteroid simulator confirming the similarity of the key processes in the full-scale and model cases. The technology of fabricating miniature mockups with mechanical properties close to those of stony asteroids is described. For mini-mockups 4-10 mm in size, differing in shape and impact conditions, we have made an experimental estimate of the energy threshold for the undeniable destruction of a mockup and investigated the parameters of its fragmentation at a laser energy of up to 500 J. The results obtained confirm the possibility of an experimental determination of the criteria for the destruction of asteroids of various types by a nuclear explosion in laser experiments. We show that the undeniable destruction of a large asteroid is possible at attainable nuclear explosion energies on its surface.
Karami, Manoochehr; Khazaei, Salman
2017-12-06
Clinical decision making based on study results requires valid and correct data collection and analysis. However, there are some common methodological and statistical issues that may be ignored by authors. In an individually matched case-control design, bias arises from using an unconditional analysis instead of a conditional analysis. Using unconditional logistic regression for matched data imposes a large number of nuisance parameters, which may result in seriously biased estimates.
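The point about matching is easy to illustrate: for 1:1 individually matched pairs, the conditional likelihood depends only on within-pair covariate differences, so the pair-specific nuisance intercepts never have to be estimated. The sketch below uses simulated data and illustrative effect sizes (it is not the authors' analysis) to fit that conditional likelihood directly.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pairs = 200
beta_true = np.array([0.8, -0.5])

# Simulate individually matched pairs; any pair-specific (matching) effect
# cancels out of the within-pair comparison, so it is not needed here.
x1 = rng.normal(size=(n_pairs, 2))
x2 = rng.normal(size=(n_pairs, 2))
p1_is_case = 1.0 / (1.0 + np.exp(-(x1 - x2) @ beta_true))
is1 = rng.random(n_pairs) < p1_is_case
x_case = np.where(is1[:, None], x1, x2)
x_ctrl = np.where(is1[:, None], x2, x1)

def neg_loglik(beta):
    # Conditional likelihood for 1:1 matching: P(case | pair) = sigmoid(d @ beta)
    d = (x_case - x_ctrl) @ beta
    return np.sum(np.log1p(np.exp(-d)))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("conditional-analysis estimate of beta:", np.round(fit.x, 2))
```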
Sensitivity to Uncertainty in Asteroid Impact Risk Assessment
NASA Astrophysics Data System (ADS)
Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.
2015-12-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
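As a rough illustration of the probabilistic sampling described above (this is not the ERA model; the size distribution, the cube-root blast scaling and the population density are placeholder assumptions), asteroid properties can be sampled and propagated to damage and casualty estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
diameter = 10.0 * (1.0 - rng.random(n)) ** (-1.0 / 2.0)    # m, toy power-law sizes
density  = rng.uniform(1500.0, 3500.0, n)                   # kg/m^3
velocity = rng.normal(20e3, 4e3, n).clip(11e3, 72e3)        # m/s

mass  = density * np.pi / 6.0 * diameter**3                 # kg
E_mt  = 0.5 * mass * velocity**2 / 4.184e15                 # impact energy, megatons TNT
r_dmg = 2.0 * E_mt ** (1.0 / 3.0)                           # km, toy cube-root blast scaling
casualties = np.pi * r_dmg**2 * 50.0                        # assumed 50 people per km^2

print("median energy [Mt]:", np.median(E_mt))
print("99th percentile damage radius [km]:", np.percentile(r_dmg, 99))
print("expected casualties per impact:", casualties.mean())
```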
Dielectric elastomer for stretchable sensors: influence of the design and material properties
NASA Astrophysics Data System (ADS)
Jean-Mistral, C.; Iglesias, S.; Pruvost, S.; Duchet-Rumeau, J.; Chesné, S.
2016-04-01
Dielectric elastomers exhibit extended capabilities as flexible sensors for the detection of load distributions, pressure or large deformations. Tracking human movements of the fingers or arms could be useful for the reconstruction of sporting gestures, or to control a human-like robot. New measurement methods have been proposed in a number of publications, improving the sensitivity and accuracy of the sensing method. Generally, the associated modelling remains simple (RC or RC transmission line). The material parameters are considered constant or assumed to have a negligible effect, which can lead to a serious reduction in accuracy. Comparisons between measurements and modelling require care and skill, and can be tricky. Thus, we propose here a comprehensive model, taking into account the influence of the material properties on the performance of the dielectric elastomer sensor (DES). Various parameters influencing the characteristics of the sensors have been identified: dielectric constant, hyper-elasticity. The variations of these parameters as a function of strain affect the linearity and sensitivity of the sensor by a few percent. The sensitivity of the DES is also evaluated by changing geometrical parameters (initial thickness) and its design (rectangular and dog-bone shapes). We discuss the impact of the shape on stress. Finally, DESs consisting of a silicone elastomer sandwiched between two highly conductive stretchable electrodes were manufactured and investigated. Classic and reliable LCR measurements are detailed. Experimental results validate our numerical model of a large-strain sensor (>50%).
Lou, Yuting; Chen, Yu
2016-09-07
The purpose of the study is to investigate multicellular homeostasis in epithelial tissues over very large timescales. Inspired by the receptor dynamics of the IBCell model proposed by Rejniak et al., an on-grid agent-based model for a multicellular system is constructed. Instead of observing the multicellular architectural morphologies, the diversity of homeostatic states is quantitatively analyzed through a substantial number of simulations by measuring three new order parameters: the phenotypic population structure, the average proliferation age and the relaxation time to stable homeostasis. Near the interfaces of distinct homeostatic phases in 3D phase diagrams of the three order parameters, intermediate quasi-stable phases of slow dynamics, featuring a large spectrum of relaxation timescales, are found. A further exploration of the static and dynamic correlations among the three order parameters reveals that the quasi-stable phases evolve towards two terminations, tumorigenesis and degeneration, which are respectively accompanied by rejuvenation and aging. With the exclusion of the environmental impact and the mutational strategies, the results imply that cancer and aging may share a non-mutational origin in the intrinsic slow dynamics of multicellular systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Statistical Approach to Identify Superluminous Supernovae and Probe Their Diversity
NASA Astrophysics Data System (ADS)
Inserra, C.; Prajs, S.; Gutierrez, C. P.; Angus, C.; Smith, M.; Sullivan, M.
2018-02-01
We investigate the identification of hydrogen-poor superluminous supernovae (SLSNe I) using a photometric analysis, without including an arbitrary magnitude threshold. We assemble a homogeneous sample of previously classified SLSNe I from the literature, and fit their light curves using Gaussian processes. From the fits, we identify four photometric parameters that have a high statistical significance when correlated, and combine them in a parameter space that conveys information on their luminosity and color evolution. This parameter space presents a new definition for SLSNe I, which can be used to analyze existing and future transient data sets. We find that 90% of previously classified SLSNe I meet our new definition. We also examine the evidence for two subclasses of SLSNe I, combining their photometric evolution with spectroscopic information, namely the photospheric velocity and its gradient. A cluster analysis reveals the presence of two distinct groups. “Fast” SLSNe show fast light curves and color evolution, large velocities, and a large velocity gradient. “Slow” SLSNe show slow light curve and color evolution, small expansion velocities, and an almost non-existent velocity gradient. Finally, we discuss the impact of our analyses in the understanding of the powering engine of SLSNe, and their implementation as cosmological probes in current and future surveys.
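A minimal sketch of the light-curve fitting step follows, with a toy light curve and an assumed kernel; the four photometric parameters used in the paper are not named in the abstract, so the quantities read off below (peak epoch, decline over 30 days) are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Toy light curve: absolute magnitude vs. rest-frame days (brighter = more negative).
t = np.array([-20., -10., -3., 0., 5., 12., 25., 40., 60.])[:, None]
mag = np.array([-19.0, -20.5, -21.3, -21.5, -21.3, -20.8, -20.0, -19.2, -18.3])

kernel = ConstantKernel(1.0) * RBF(length_scale=15.0) + WhiteKernel(0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, mag)

grid = np.linspace(-20, 60, 801)[:, None]
pred, sigma = gp.predict(grid, return_std=True)

t_peak = grid[np.argmin(pred), 0]                      # epoch of maximum light
peak = pred.min()
decline_30 = np.interp(t_peak + 30.0, grid[:, 0], pred) - peak
print("peak at %+.1f d (sigma %.2f mag), M = %.2f, dm(30 d) = %.2f"
      % (t_peak, sigma[np.argmin(pred)], peak, decline_30))
```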
Impact of TRMM and SSM/I-derived Precipitation and Moisture Data on the GEOS Global Analysis
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.
1999-01-01
Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. The Data Assimilation Office at NASA's Goddard Space Flight Center has been exploring the use of space-based rainfall and total precipitable water (TPW) estimates to constrain these hydrological parameters in the Goddard Earth Observing System (GEOS) global data assimilation system. We present results showing that assimilating the 6-hour averaged rain rates and TPW estimates from the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) instruments improves not only the precipitation and moisture estimates but also reduce state-dependent systematic errors in key climate parameters directly linked to convection such as the outgoing longwave radiation, clouds, and the large-scale circulation. The improved analysis also improves short-range forecasts beyond 1 day, but the impact is relatively modest compared with improvements in the time-averaged analysis. The study shows that, in the presence of biases and other errors of the forecast model, improving the short-range forecast is not necessarily prerequisite for improving the assimilation as a climate data set. The full impact of a given type of observation on the assimilated data set should not be measured solely in terms of forecast skills.
Improved cognitive functioning in obese adolescents after a 30-week inpatient weight loss program.
Vantieghem, Stijn; Bautmans, Ivan; Guchtenaere, Ann De; Tanghe, Ann; Provyn, Steven
2018-06-15
Studies linked obesity with a large number of medical conditions including decreased cognitive functioning. The relation between BMI and cognition was proven in adults, but in adolescents the results are conflicting. Further, limited data are available on the impact of weight loss on cognition. This study analyzed the impact of a 30-week lasting weight loss program on cognition and determined the impact of changes in body composition and self-perceived fatigue on changes in cognition. Sixty-two obese adolescents were evaluated at baseline and after 30 weeks. Stroop test (ST; selective attention), Continuous Performance Test (CPT; sustained attention) and Ray Auditory verbal learning test (RAVLT; short-term memory) were assessed. Additionally, body composition parameters and fatigue (MFI-20) were evaluated. Improved reaction times were found for ST and CPT after the intervention, but were independent for reductions in BMI, fat mass, fat%, and fatigue. Short memory also improved with decreased fatigue as an influencing parameter. Accuracy of ST and CPT showed no significant changes. A 30-week lasting inpatient weight loss program improved selective attention, sustained attention, and short-term memory. Changes in body composition did not explain the improvements in cognitive functioning. Decreased fatigue resulted in improved aspects of cognition.
Impact of Inflow Conditions on Coherent Structures in an Aneurysm
NASA Astrophysics Data System (ADS)
Yu, Paulo; Durgesh, Vibhav; Johari, Hamid
2017-11-01
An aneurysm is an enlargement of a weakened arterial wall that can be debilitating or fatal on rupture. Studies have shown that hemodynamics is integral to developing an understanding of aneurysm formation, growth, and rupture. This investigation focuses on a comprehensive study of the impact of varying inflow conditions and aneurysm shapes on spatial and temporal behavior of flow parameters and structures in an aneurysm. Two different shapes of an idealized rigid aneurysm model were studied and the non-dimensional frequency and Reynolds number were varied between 2-5 and 50-250, respectively. A ViVitro Labs SuperPump system was used to precisely control inflow conditions. Particle Image Velocimetry (PIV) measurements were performed at three different locations inside the aneurysm sac to obtain detailed velocity flow field information. The results of this study showed that aneurysm morphology significantly impacts spatial and temporal behavior of large-scale flow structures as well as wall shear stress distribution. The flow behavior and structures showed a significant difference with change in inflow conditions. A primary fluctuating flow structure was observed for Reynolds number of 50, while for higher Reynolds numbers, primary and secondary flow structures were observed. Furthermore, the paths of these coherent structures were dependent on aneurysm shape and inflow parameters.
Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.
2015-01-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.
Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E
2015-10-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.
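A hedged sketch of the variability bookkeeping described above (not the authors' code): generate noisy realizations of a uniform lesion region, compute a simple histogram-entropy heterogeneity metric, and summarize its stochastic variability as a coefficient of variation and a two-arm sample-size estimate. The noise level, effect size and choice of metric are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def histogram_entropy(roi, bins=32):
    # Simple gray-level-histogram heterogeneity metric.
    p, _ = np.histogram(roi, bins=bins, density=True)
    p = p[p > 0]
    p = p / p.sum()
    return -(p * np.log2(p)).sum()

values = []
for _ in range(50):                                    # 50 independent realizations
    roi = 10.0 + rng.normal(0.0, 1.5, size=(16, 16))   # mean uptake 10, noise sd 1.5
    values.append(histogram_entropy(roi))
values = np.asarray(values)

cv = values.std(ddof=1) / values.mean()
print("coefficient of variation: %.1f%%" % (100 * cv))

# Patients per arm to detect a 10% difference (alpha = 0.05, power = 0.80).
z_a, z_b = 1.96, 0.84
delta = 0.10 * values.mean()
n = 2 * ((z_a + z_b) * values.std(ddof=1) / delta) ** 2
print("patients per arm:", int(np.ceil(n)))
```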
Mathaes, Roman; Mahler, Hanns-Christian; Roggo, Yves; Huwyler, Joerg; Eder, Juergen; Fritsch, Kamila; Posset, Tobias; Mohl, Silke; Streubel, Alexander
2016-01-01
Capping equipment used in good manufacturing practice manufacturing features different designs and a variety of adjustable process parameters. The overall capping result is a complex interplay of the different capping process parameters and is insufficiently described in the literature. It remains poorly studied how the different capping equipment designs and capping equipment process parameters (e.g., pre-compression force, capping plate height, turntable rotating speed) contribute to the final residual seal force of a sealed container closure system and its relation to container closure integrity and other drug product quality parameters. Stopper compression measured by computed tomography correlated with residual seal force measurements. In our studies, we used different container closure system configurations from different good manufacturing practice drug product fill & finish facilities to investigate the influence of differences in primary packaging, that is, vial size and rubber stopper design, on the capping process and the capped drug product. In addition, we compared two examples of large-scale good manufacturing practice capping equipment and different capping equipment settings and their impact on product quality and integrity, as determined by residual seal force. The capping plate to plunger distance had a major influence on the obtained residual seal force values of a sealed vial, whereas the capping pre-compression force and the turntable rotation speed showed only a minor influence on the residual seal force of a sealed vial. Capping process parameters could not easily be transferred from capping equipment of different manufacturers. However, the residual seal force tester did provide a valuable tool to compare the capping performance of different capping equipment. No vial showed any leakage greater than 10(-8) mbar L/s as measured by a helium mass spectrometry system, suggesting that container closure integrity was warranted in the residual seal force range tested for the tested container closure systems. Capping equipment used in good manufacturing practice manufacturing features different designs and a variety of adjustable process parameters. The overall capping result is a complex interplay of the different capping process parameters and is insufficiently described in the literature. It remains poorly studied how the different capping equipment designs and capping equipment process parameters contribute to the final capping result. In this study, we used different container closure system configurations from different good manufacturing practice drug product fill & finish facilities to investigate the influence of the vial size and the rubber stopper design on the capping process. In addition, we compared two examples of large-scale good manufacturing practice capping equipment and different capping equipment settings and their impact on product quality and integrity, as determined by residual seal force. © PDA, Inc. 2016.
Machine cataloging of impact craters on Mars
NASA Astrophysics Data System (ADS)
Stepinski, Tomasz F.; Mendenhall, Michael P.; Bue, Brian D.
2009-09-01
This study presents an automated system for cataloging impact craters using the MOLA 128 pixels/degree digital elevation model of Mars. Craters are detected by a two-step algorithm that first identifies round and symmetric topographic depressions as crater candidates and then selects craters using a machine-learning technique. The system is robust with respect to surface types; craters are identified with similar accuracy from all different types of martian surfaces without adjusting input parameters. By using a large training set in its final selection step, the system produces virtually no false detections. Finally, the system provides a seamless integration of crater detection with its characterization. Of particular interest is the ability of our algorithm to calculate crater depths. The system is described and its application is demonstrated on eight large sites representing all major types of martian surfaces. An evaluation of its performance and prospects for its utilization for global surveys is given by means of a detailed comparison of the obtained results to the manually-derived Catalog of Large Martian Impact Craters. We use the results from the test sites to construct local depth-diameter relationships based on a large number of craters. In general, the obtained relationships are in agreement with what was inferred on the basis of manual measurements. However, we have found that, in Terra Cimmeria, the depth/diameter ratio has an abrupt decrease at ˜38°S regardless of crater size. If shallowing of craters is attributed to the presence of sub-surface ice, a sudden change in its spatial distribution is suggested by our findings.
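The local depth-diameter relationships mentioned above are typically power laws fit in log-log space; a minimal sketch with illustrative values (not the MOLA catalog) follows.

```python
import numpy as np

D = np.array([6.0, 8.5, 12.0, 17.0, 24.0, 35.0, 50.0])   # crater diameter, km (illustrative)
d = np.array([0.9, 1.1, 1.4, 1.7, 2.1, 2.6, 3.1])        # crater depth, km (illustrative)

# Fit d = a * D^b by ordinary least squares on the logarithms.
b, log_a = np.polyfit(np.log10(D), np.log10(d), 1)
a = 10.0 ** log_a
print("d = %.3f * D^%.3f" % (a, b))
print("predicted depth of a 20 km crater: %.2f km" % (a * 20.0 ** b))
```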
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
Chemical Characterization of Submicron Aerosol Particles in São Paulo, Brazil
NASA Astrophysics Data System (ADS)
Ferreira De Brito, J.; Rizzo, L. V.; Godoy, J.; Godoy, M. L.; de Assunção, J. V.; Alves, N. D.; Artaxo, P.
2013-12-01
Megacities, large urban conglomerates with a population of 10 million or more inhabitants, are increasingly receiving attention as strong pollution hotspots with significant global impact. The emissions from such large centers in both the developed and developing parts of the world are strongly impacted by the transportation sector. The São Paulo Metropolitan Area (SPMA), located in the Southeast of Brazil, is a megacity with a population of 18 million people and 7 million vehicles, many of which are fuelled by a considerable amount of anhydrous ethanol. Such a fleet is considered a unique case of large-scale biofuel usage worldwide. Despite the large impact on human health and atmospheric chemistry/dynamics, many uncertainties are found in terms of gas and particulate matter emissions from vehicles and their atmospheric reactivity, e.g. secondary organic aerosol formation. In order to better understand the aerosol life cycle in such an environment, a suite of instruments for gas and particulate matter characterization has been deployed at two sampling sites within the SPMA, including an Aerosol Chemical Speciation Monitor (ACSM). The instrumentation was deployed at the rooftop of a 45 m high building in the University of São Paulo during winter/spring 2012. The site is located roughly 6 km downwind of the city center with little influence from local sources. The second site is located in a downtown area, sampling at the top floor of the Public Health Faculty, approximately 10 m above ground. The instrumentation was deployed at the Downtown site during summer/fall 2013. The average non-refractory submicron aerosol concentration at the University site was 6.7 μg m^-3, with organics being the most abundant species (70%), followed by NO3 (12%), NH4 (8%), SO4 (8%) and Chl (2%). At the Downtown site, the average aerosol concentration was 15.1 μg m^-3, with organics composing 65% of the mass, followed by NH4 (12%), NO3 (11%), SO4 (11%) and Chl (1%). The analysis of specific fragmentation patterns allows characterization of organic aerosol processing, e.g., m/z 43 (C2H3O+ and/or C3H7+, depending on the source and level of processing) and m/z 44 (mostly CO2+). The parameters f43 and f44, defined as the signal at the corresponding m/z relative to total organics, provide a metric for aerosol processing. As such, the organic aerosol sampled at the University site has been shown to be considerably more processed than at the Downtown site, with the parameter f44=0.19 in the former and 0.14 in the latter. Interestingly, little difference has been observed in the f43 parameter, being 0.035 at the University site and 0.036 at the Downtown site. PMF analysis indicates a large dominance of SOA relative to POA at both sites. This study shall provide an overview of the atmospheric dynamics of this megacity and its unique fleet, never characterized in such detail before.
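The f43/f44 bookkeeping is simple to state in code; a small sketch with placeholder numbers (not the São Paulo spectra), where each parameter is the organic signal at the given m/z divided by the total organic signal:

```python
import numpy as np

mz = np.arange(12, 101)                       # organic mass spectrum, m/z 12-100
signal = np.ones_like(mz, dtype=float)        # placeholder organic signal at each m/z
signal[mz == 43] = 3.5
signal[mz == 44] = 19.0

total_org = signal.sum()
f43 = signal[mz == 43].item() / total_org
f44 = signal[mz == 44].item() / total_org
print("f43 = %.3f, f44 = %.3f" % (f43, f44))  # larger f44 -> more oxidized/processed OA
```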
Zyvoloski, G.; Kwicklis, E.; Eddebbarh, A.-A.; Arnold, B.; Faunt, C.; Robinson, B.A.
2003-01-01
This paper presents several different conceptual models of the Large Hydraulic Gradient (LHG) region north of Yucca Mountain and describes the impact of those models on groundwater flow near the potential high-level repository site. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain. This model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The numerical model is calibrated by matching available water level measurements using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (hydrologic simulation code FEHM and parameter estimation software PEST) and model setup allow for efficient calibration of multiple conceptual models. Until now, the Large Hydraulic Gradient has been simulated using a low-permeability, east-west oriented feature, even though direct evidence for this feature is lacking. In addition to this model, we investigate and calibrate three additional conceptual models of the Large Hydraulic Gradient, all of which are based on a presumed zone of hydrothermal chemical alteration north of Yucca Mountain. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the potential repository that record differences in the predicted groundwater flow regime. The results show that the Large Hydraulic Gradient can be represented with the alternate conceptual models that include the hydrothermally altered zone. The predicted pathways are mildly sensitive to the choice of the conceptual model and more sensitive to the quality of calibration in the vicinity of the repository. These differences are most likely due to different degrees of fit of the model to the data, and do not represent important differences in hydrologic conditions for the different conceptual models. © 2002 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Timmermans, J.; Gokmen, M.; Eden, U.; Abou Ali, M.; Vekerdy, Z.; Su, Z.
2012-04-01
The need for good drought monitoring and management in the Horn of Africa has never been greater. This ongoing drought is the largest in the past sixty years and is affecting the lives of around 10 million people, according to the United Nations. The impact of drought is most apparent in food security and health. In addition, secondary problems related to the drought arise, such as large-scale migration; more than 15,000 Somalis have fled to neighboring countries to escape the problems caused by the drought. These problems will only grow and spread to larger areas in the future as extreme weather patterns increase with global climate change. Monitoring drought impact and managing the drought effects are therefore of critical importance. The impact of a drought is hard to characterize as drought depends on several parameters, such as precipitation, land use and irrigation. Consequently, the effects of the drought vary spatially and range from short-term to long-term. For this reason a drought event can be characterized into four categories: meteorological, agricultural, hydrological and socio-economic. In terms of food production the agricultural drought, or short-term dryness near the surface layer, is most important. This drought is usually characterized by low soil moisture content in the root zone, decreased evapotranspiration, and changes in vegetation vigor. All of these parameters can be detected with good accuracy from space. The advantage of remote sensing in drought monitoring is evident. Drought monitoring is usually performed using drought indices, such as the Palmer Drought Severity Index (PDSI), the Crop Moisture Index (CMI) and the Standardized Precipitation Index (SPI). With the introduction of remote sensing, several of these indices have shown great potential for large-scale application. These indices, however, all incorporate precipitation as the main surface parameter, neglecting the response of the surface to the dryness. More recently, two agricultural drought indices, the EvapoTranspiration Deficit Index (ETDI) and the Soil Moisture Deficit Index (SMDI), have been proposed to investigate this. The ETDI considers the stress ratio caused by the difference between potential and actual evapotranspiration, while the SMDI considers the variation in soil moisture availability to the plant. As there is not a single unique accepted definition of drought, investigation into the impact of drought should not be confined to a single drought index; instead, several indices need to be used for this purpose. The objective of this research is to investigate the drought in the Horn of Africa using several remote sensing drought indices and vegetation parameters. In this research the drought will be investigated using SPI, ETDI, SMDI and NDVI. For this purpose, ETDI and SMDI will be estimated from remote sensing products for the period from 2002 to 2011 that were created in the framework of the WACMOS project. The research involves the comparison of the different drought indices and an investigation into possible synergies to enhance drought monitoring.
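A simplified stand-in for the ETDI idea (not the published formulation): compute the evaporative stress ratio from potential and actual evapotranspiration and express it as a standardized anomaly so that unusually dry months stand out. The input series below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
months = 120
pet = 120.0 + 40.0 * np.sin(2 * np.pi * np.arange(months) / 12.0)   # potential ET, mm/month
aet = pet * rng.uniform(0.3, 1.0, months)                            # actual ET, mm/month

stress = (pet - aet) / pet                  # 0 = no stress, 1 = full evaporative stress
anomaly = (stress - stress.mean()) / stress.std(ddof=1)

print("most stressed month index:", int(anomaly.argmax()))
print("stress anomaly that month: %.2f sigma" % anomaly.max())
```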
The identification of fungi collected from the ceca of commercial poultry.
Byrd, J A; Caldwell, D Y; Nisbet, D J
2017-07-01
Under normal conditions, fungi are ignored unless clinical signs of a disease or syndrome are reported. The scientific community is largely unaware of the roles fungi play in normal production parameters. Numerous preharvest interventions have demonstrated that beneficial bacteria can play a role in improving production parameters; however, most researchers have ignored the impact that fungi may have on production. The goal of the present study was to record fungi recovered from commercial broiler and layer houses during production. Over 3,000 cecal samples were isolated using conventional culture methodology and over 890 samples were further characterized using an automated repetitive sequence-based PCR (rep-PCR) methodology. Eighty-eight different fungal and yeast species were identified, including Aspergillus spp., Penicillium spp., and Sporidiobolus spp., and 18 unknown genera were separated using rep-PCR. The results from the present study will provide a baseline of fungal genera normally present under commercial conditions and will be a stepping stone for investigating the impact of fungi on the gastrointestinal tract and on the health of poultry. Published by Oxford University Press on behalf of Poultry Science Association 2017.
NASA Astrophysics Data System (ADS)
Steiner, J. F.; Siegfried, T.; Yakovlev, A.
2014-12-01
In the Amu Darya River Basin in Central Asia, the Vakhsh catchment in Tajikistan is a major source of hydropower energy for the country. With a number of large dams already constructed, upstream Tajikistan is interested in the construction of one more large dam and a number of smaller storage facilities with the prospect of supplying its neighboring states with hydropower through a newly planned power grid. The impact of new storage facilities along the river is difficult to estimate and causes considerable concern and consternation among the downstream users. Today, it is one of the vexing poster child studies in international water conflict that awaits resolution. With a lack of meteorological data and a complex topography that makes the application of remotely sensed data difficult, it is a challenge to model runoff correctly. Large parts of the catchment are glacierized, and elevations range from just 500 m asl to peaks above 7000 m asl. Based on in-situ time series of temperature and precipitation, we find local correction factors for remotely sensed products. Using these data, we employ a model based on the Budyko framework with an extension for snow and ice in the higher altitude bands. The model furthermore accounts for groundwater and soil storage. Runoff data from a number of stations are used for the calibration of the model parameters. With an accurate representation of the existing and planned reservoirs in the Vakhsh cascade, we study the potential impacts from the construction of the new large reservoir in the river. Impacts are measured in terms of a) the timing and availability of new hydropower energy, also in light of its potential for export to South Asia, b) shifting challenges with regard to river sediment loads and siltation of reservoirs and c) impacts on downstream runoff and the timely availability of irrigation water there. With our coupled hydro-climatological approach, the challenges of optimal cascade management can be addressed so as to minimize detrimental impacts on all sides if runoff forecast information at seasonal scales is taken into account for optimal operational multi-storage management.
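A minimal sketch of a Budyko-type annual water balance, here using Fu's parameterization with an assumed catchment parameter omega; the snow/ice extension, storage terms and calibration of the actual Vakhsh model are omitted, and the climatology is a placeholder.

```python
import numpy as np

def fu_evaporation_ratio(aridity, omega=2.6):
    """E/P as a function of the aridity index PET/P (Fu's equation)."""
    return 1.0 + aridity - (1.0 + aridity ** omega) ** (1.0 / omega)

P, PET = 800.0, 600.0                 # mm/yr, placeholder annual climatology
aridity = PET / P
E = fu_evaporation_ratio(aridity) * P
Q = P - E                             # long-term mean runoff, mm/yr
print("aridity index %.2f: E = %.0f mm/yr, runoff = %.0f mm/yr" % (aridity, E, Q))
```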
Removing the Impact of Baluns from Measurements of a Novel Antenna for Cosmological HI Measurements
NASA Astrophysics Data System (ADS)
Trung, Vincent; Ewall-Wice, Aaron Michael; Li, Jianshu; Hewitt, Jacqueline; Riley, Daniel; Bradley, Richard F.; Makhija, Krishna; Garza, Sierra; HERA Collaboration
2018-01-01
The Hydrogen Epoch of Reionization Array (HERA) is a low-frequency radio interferometer aiming to detect redshifted 21 cm emission from neutral hydrogen during the Epoch of Reionization at frequencies between 100 and 200 MHz. Extending HERA’s performance to lower frequencies will enable detection of radio waves at higher redshifts, when models predict that gas between galaxies was heated by X-rays from the first stellar-mass black holes. The isolation of foregrounds that are four orders of magnitude brighter than the faint cosmological signal presents an unprecedented set of design specifications for our antennas, including sensitivity and spectral smoothness over a large bandwidth. We are developing a broadband sinuous antenna feed for HERA, extending the bandwidth from 50 to 220 MHz, and we are verifying antenna performance with field measurements and simulations. Electromagnetic simulations compute the differential S-parameters of the antenna. We measure these S-parameters through a lossy balun attached to an unbalanced vector network analyzer. Removing the impact of this balun is critical in obtaining an accurate comparison between our simulations and measurements. I describe measurements to characterize the baluns and how they are used to remove the balun’s impact on the antenna S-parameter measurements. Field measurements of the broadband sinuous antenna dish at MIT and Green Bank Observatory are used to verify our electromagnetic simulations of the broadband sinuous antenna design. After applying our balun corrections, we find that our field measurements are in good agreement with the simulation, giving us confidence that our feeds will perform as designed.
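A generic two-port de-embedding sketch (not the HERA pipeline, and single-ended rather than mixed-mode): if the balun has been characterized as a two-port, its effect can be removed by cascading ABCD matrices, ABCD_meas = ABCD_balun · ABCD_dut, so ABCD_dut = inv(ABCD_balun) · ABCD_meas. All S-parameters below are placeholders.

```python
import numpy as np

Z0 = 50.0  # reference impedance, ohms

def s_to_abcd(S):
    # Standard two-port S-parameter to ABCD conversion (requires S21 != 0).
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    A = ((1 + S11) * (1 - S22) + S12 * S21) / (2 * S21)
    B = Z0 * ((1 + S11) * (1 + S22) - S12 * S21) / (2 * S21)
    C = ((1 - S11) * (1 - S22) - S12 * S21) / (2 * S21) / Z0
    D = ((1 - S11) * (1 + S22) + S12 * S21) / (2 * S21)
    return np.array([[A, B], [C, D]])

def abcd_to_s(M):
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    den = A + B / Z0 + C * Z0 + D
    return np.array([[(A + B / Z0 - C * Z0 - D) / den, 2 * (A * D - B * C) / den],
                     [2 / den, (-A + B / Z0 - C * Z0 + D) / den]])

# Placeholder single-frequency S-parameters for the balun (from its own
# characterization) and for the antenna port we actually want.
S_balun = np.array([[0.05 + 0.02j, 0.89 * np.exp(-0.6j)],
                    [0.89 * np.exp(-0.6j), 0.05 + 0.02j]])
S_true = np.array([[0.30 * np.exp(1.0j), 0.95 + 0.0j],
                   [0.95 + 0.0j, 0.30 * np.exp(1.0j)]])

# What the unbalanced VNA sees is the balun cascaded with the antenna ...
S_meas = abcd_to_s(s_to_abcd(S_balun) @ s_to_abcd(S_true))
# ... so the balun is removed by multiplying with its inverse ABCD matrix.
S_dut = abcd_to_s(np.linalg.inv(s_to_abcd(S_balun)) @ s_to_abcd(S_meas))
print("recovered S11:", np.round(S_dut[0, 0], 4), " true S11:", np.round(S_true[0, 0], 4))
```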
Geometry-based across wafer process control in a dual damascene scenario
NASA Astrophysics Data System (ADS)
Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Thrun, Xaver
2018-03-01
Dual damascene is an established patterning process for back-end-of-line to generate copper interconnects and lines. One of the critical output parameters is the electrical resistance of the metal lines. In our 200 mm line, this is currently being controlled by a feed-forward control from the etch process to the final step in the CMP process. In this paper, we investigate the impact of alternative feed-forward control using a calibrated physical model that estimates the impact on the electrical resistance of the metal lines. This is done by simulation on a large set of wafers. Three different approaches are evaluated, one of which uses different feed-forward settings for different radial zones in the CMP process.
Three-body Coulomb problem probed by mapping the Bethe surface in ionizing ion-atom collisions.
Moshammer, R; Perumal, A; Schulz, M; Rodríguez, V D; Kollmus, H; Mann, R; Hagmann, S; Ullrich, J
2001-11-26
The three-body Coulomb problem has been explored in kinematically complete experiments on single ionization of helium by 100 MeV/u C(6+) and 3.6 MeV/u Au(53+) impact. Low-energy electron emission (E(e) < 150 eV) as a function of the projectile deflection theta(p) (momentum transfer), i.e., the Bethe surface [15], has been mapped with a resolution of Delta theta(p) = +/-25 nanoradian at extremely large perturbations (3.6 MeV/u Au(53+)), where single ionization occurs at impact parameters of typically 10 times the He K-shell radius. The experimental data are not in agreement with state-of-the-art continuum distorted wave-eikonal initial state theory.
The influence of rail surface irregularities on contact forces and local stresses
NASA Astrophysics Data System (ADS)
Andersson, Robin; Torstensson, Peter T.; Kabo, Elena; Larsson, Fredrik
2015-01-01
The effect of initial rail surface irregularities on promoting further surface degradation is investigated. The study concerns rolling contact fatigue formation, in particular in the form of the so-called squats. The impact of surface irregularities in the form of dimples is quantified by peak magnitudes of dynamic contact stresses and contact forces. To this end, simulations of two-dimensional (later extended to three-dimensional) vertical dynamic vehicle-track interaction are employed. The most influential parameters are identified. It is shown that even very shallow dimples might have a large impact on local contact stresses. Peak magnitudes of contact forces and stresses due to the influence of rail dimples are shown to exceed those due to rail corrugation.
Norris, Scott A; Samela, Juha; Bukonte, Laura; Backman, Marie; Djurabekova, Flyura; Nordlund, Kai; Madi, Charbel S; Brenner, Michael P; Aziz, Michael J
2011-01-01
Energetic particle irradiation can cause surface ultra-smoothening, self-organized nanoscale pattern formation or degradation of the structural integrity of nuclear reactor components. A fundamental understanding of the mechanisms governing the selection among these outcomes has been elusive. Here we predict the mechanism governing the transition from pattern formation to flatness using only parameter-free molecular dynamics simulations of single-ion impacts as input into a multiscale analysis, obtaining good agreement with experiment. Our results overturn the paradigm attributing these phenomena to the removal of target atoms via sputter erosion: the mechanism dominating both stability and instability is the impact-induced redistribution of target atoms that are not sputtered away, with erosive effects being essentially irrelevant. We discuss the potential implications for the formation of a mysterious nanoscale topography, leading to surface degradation, of tungsten plasma-facing fusion reactor walls. Consideration of impact-induced redistribution processes may lead to a new design criterion for stability under irradiation.
Bunte Breccia of the Ries - Continuous deposits of large impact craters
NASA Technical Reports Server (NTRS)
Horz, F.; Ostertag, R.; Rainey, D. A.
1983-01-01
The 26-km-diameter Ries impact crater in south Germany and the mechanism of ejection and emplacement associated with its formation about 15 Myr ago are discussed in detail, and the implications of the findings for models of crater formation on earth, moon, and planets are considered. Field observations and laboratory tests on 560-m core materials from nine locations are reported. The continuous deposits (Bunte Breccia) are found to be a chaotic mixture resulting from deposition at ambient temperatures in a highly turbulent environment, probably in the ballistic scenario proposed by Oberbeck et al. (1975), with an emplacement time of only about 5 min. Further impact parameters are estimated using the 'Z model' of Maxwell (1977): initial radius = 6.5 km, excavation depth = 1650 m, excavation volume = 136 cu km, and transient cavity volume = 230 cu km. The interpretation of lunar and planetary remote-sensing and in situ evidence from impact craters is reviewed in the light of the Ries findings. Numerous photographs, maps, diagrams, and tables illustrate the investigation.
Meteoroid-bumper interactions program
NASA Technical Reports Server (NTRS)
Gough, P. S.
1970-01-01
An investigation has been made of the interaction of meteoroids with shielded structures. The interaction has been simulated by the impact of Lexan cylinders onto lead shields in order to provide the vaporous debris believed to be created by meteoroid impact on a space vehicle. Shock compression data for Lexan was determined. This, in combination with the known shock compression data for the lead shield, has permitted the definition of the initial high pressure states in the impacted projectile and shield. The debris from such impact events has been permitted to interact with aluminum main walls. The walls were chosen to be sufficiently large to be effectively infinite in diameter compared to the loaded area. The thickness of the wall and the spacing from the shield were varied to determine the effect of these parameters. In addition, the effect of having a body of water behind the wall has been assessed. Measurements of the stagnation pressure in the debris cloud have been made and correlated with the response of the main wall.
Impact of Different Economic Performance Metrics on the Perceived Value of Solar Photovoltaics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, E.; Denholm, P.; Margolis, R.
2011-10-01
Photovoltaic (PV) systems are installed by several types of market participants, ranging from residential customers to large-scale project developers and utilities. Each type of market participant frequently uses a different economic performance metric to characterize PV value because they are looking for different types of returns from a PV investment. This report finds that different economic performance metrics frequently show different price thresholds for when a PV investment becomes profitable or attractive. Several project parameters, such as financing terms, can have a significant impact on some metrics [e.g., internal rate of return (IRR), net present value (NPV), and benefit-to-cost (B/C) ratio] while having a minimal impact on other metrics (e.g., simple payback time). As such, the choice of economic performance metric by different customer types can significantly shape each customer's perception of PV investment value and ultimately their adoption decision.
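The disagreement between metrics is easy to reproduce with toy numbers (these are illustrative, not the report's assumptions): the same cash flows can look unattractive by simple payback while showing a positive NPV and a healthy IRR.

```python
import numpy as np

cost = 20_000.0                     # installed cost, $ (assumed)
annual_savings = 1_800.0            # bill savings, $/yr (assumed constant)
life, discount = 25, 0.06           # system life in years, discount rate

cash = np.full(life, annual_savings)
disc = cash / (1.0 + discount) ** np.arange(1, life + 1)

payback = cost / annual_savings                 # simple payback, years
npv = disc.sum() - cost                         # net present value, $
bc_ratio = disc.sum() / cost                    # benefit-to-cost ratio

def irr(cost, cash, lo=-0.99, hi=1.0, tol=1e-6):
    # Internal rate of return by bisection on NPV(r) = 0 (NPV decreases with r).
    f = lambda r: np.sum(cash / (1 + r) ** np.arange(1, len(cash) + 1)) - cost
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print("payback %.1f yr, NPV $%.0f, B/C %.2f, IRR %.1f%%"
      % (payback, npv, bc_ratio, 100 * irr(cost, cash)))
```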
Delafont, Vincent; Bouchon, Didier; Héchard, Yann; Moulin, Laurent
2016-09-01
Free-living amoebae (FLA) constitute an important part of eukaryotic populations colonising drinking water networks. However, little is known about the factors influencing their ecology in such environments. Because of their status as a reservoir of potentially pathogenic bacteria, understanding the environmental factors impacting FLA populations and their associated bacterial community is crucial. Through sampling of a large drinking water network, the diversity of cultivable FLA and their bacterial community were investigated by an amplicon sequencing approach, and their correlation with physicochemical parameters was studied. While FLA ubiquitously colonised the water network all year long, significant changes in population composition were observed. These changes were partially explained by several environmental parameters, namely water origin, temperature, pH and chlorine concentration. The characterisation of the FLA-associated bacterial community reflected a diverse but rather stable consortium composed of nearly 1400 OTUs. The definition of a core community highlighted the predominance of only a few genera, largely dominated by Pseudomonas and Stenotrophomonas. Co-occurrence analysis also showed significant patterns of FLA-bacteria association, and allowed the uncovering of potentially new FLA-bacteria interactions. To our knowledge, this study is the first to combine a large sampling scheme with high-throughput identification of FLA and their associated bacteria, along with the influencing environmental parameters. Our results demonstrate the importance of physicochemical parameters in the ecology of FLA and their bacterial community in water networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Farhat, I. A. H.; Gale, E.; Alpha, C.; Isakovic, A. F.
2017-07-01
Optimizing the energy performance of Magnetic Tunnel Junctions (MTJs) is the key to embedding Spin Transfer Torque Random Access Memory (STT-RAM) in low-power circuits. Due to the complex interdependencies of the parameters and variables of the device operating energy, it is important to identify the parameters that most effectively control MTJ power. The impact of the threshold current density, Jc0, on the energy and the impact of HK on Jc0 are studied analytically, following the expressions that stem from the Landau-Lifshitz-Gilbert-Slonczewski (LLGS-STT) model. In addition, the impact of other magnetic material parameters, such as Ms, and geometric parameters, such as tfree and λ, is discussed. A device modelling study was conducted to analyse the impact at the circuit level. A nano-magnetism simulation based on the NMAG™ package was conducted to analyse the impact of controlling HK on the switching dynamics of the film.
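A textbook-style estimate (not the authors' model) of how the in-plane threshold current density scales with the quantities named above, using one common form of the LLGS result, Jc0 ~ (2e/hbar)(alpha/eta) mu0 Ms t_free (HK + Ms/2); prefactors vary between papers and all material numbers below are illustrative.

```python
import numpy as np

e, hbar, mu0 = 1.602e-19, 1.055e-34, 4e-7 * np.pi

alpha, eta = 0.01, 0.6          # Gilbert damping, spin-transfer efficiency (assumed)
Ms = 8.0e5                      # saturation magnetization, A/m (assumed)
t_free = 2.0e-9                 # free-layer thickness, m (assumed)
HK = 4.0e4                      # in-plane anisotropy field, A/m (assumed)

Jc0 = (2 * e / hbar) * (alpha / eta) * mu0 * Ms * t_free * (HK + Ms / 2)
print("Jc0 ~ %.2e A/m^2 (%.2e A/cm^2)" % (Jc0, Jc0 / 1e4))
# Lowering HK or Ms, or thinning the free layer, reduces Jc0 and hence the
# switching energy -- the dependences explored in the paper above.
```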
Exploring cosmic origins with CORE: Cosmological parameters
NASA Astrophysics Data System (ADS)
Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed parameter space with figures of merit for various models increasing by as much as ~10^7 as compared to Planck 2015, and 10^5 with respect to Planck 2015 + future BAO measurements.
Run-up of Tsunamis in the Gulf of Mexico caused by the Chicxulub Impact Event
NASA Astrophysics Data System (ADS)
Weisz, R.; Wünnenmann, K.; Bahlburg, H.
2003-04-01
The Chicxulub impact event can be investigated on (1) local, (2) regional and (3) global scales. Our investigations focus on the regional scale, especially on the run-up of tsunami waves on the coast around the Gulf of Mexico caused by the impact. An impact produces two types of tsunami waves: (1) the rim wave and (2) the collapse wave. Both waves propagate over long distances and reach coastal areas. Depending on the tsunami wave characteristics, they have a potentially large influence on the coastal areas. Run-up distance and run-up height can be used as parameters for assessing this influence. To calculate these parameters, we are using a multi-material hydrocode (SALE) to simulate the generation of the tsunami wave and a non-linear shallow water approach for the propagation, and we implemented a special open boundary for considering the run-up of tsunami waves. With the help of the one-dimensional shallow water approach, we will give run-up heights and distances for the coastal area around the Gulf of Mexico. The calculations are done along several sections from the impact site towards the coast. These are a first approximation to run-up calculations for the entire coast of the Gulf of Mexico. The bathymetric data along the sections, used in the wave propagation and run-up, correspond to a linearized bathymetry of the recent Gulf of Mexico. Additionally, we will present preliminary results from our first two-dimensional experiments of propagation and run-up. These results will be compared with the one-dimensional approach.
Large impacts and the evolution of Venus; an atmosphere/mantle coupled model.
NASA Astrophysics Data System (ADS)
Gillmann, Cedric; Tackley, Paul; Golabek, Gregor
2014-05-01
We investigate the evolution of atmosphere and surface conditions on Venus through a coupled model of mantle/atmosphere evolution by including meteoritic impact mechanisms. Our main focus is on the mechanisms that deplete or replenish the atmosphere: volcanic degassing, atmospheric escape and impacts. The coupling is obtained using feedback of the atmosphere on the mantle evolution. Atmospheric escape modeling involves two different aspects: hydrodynamic escape (dominant during the first few hundred million years) and non-thermal escape mechanisms as observed by the ASPERA instrument. Post 4 Ga escape is low. The atmosphere is replenished by volcanic degassing, using an adapted version of the StagYY mantle dynamics model (Armann and Tackley, 2012) and including episodic lithospheric overturn. Volatile fluxes are estimated for different mantle compositions and partitioning ratios. The evolving surface temperature is calculated from CO2 and water in the atmosphere with a gray radiative-convective atmosphere model. This surface temperature in turn acts as a boundary condition for the mantle dynamics model and has an influence on the convection, volcanism and subsequent degassing. We take into account the effects of meteorites in our simulations by adapting each relevant part of the model. They can bring volatiles as well as erode the atmosphere. Mantle dynamics are modified since the impact itself can also bring large amounts of energy to the mantle. A 2D distribution of the thermal anomaly due to the impact is used and can lead to melting. Volatile evolution due to impacts (especially the large ones) is heavily debated, so we test a broad range of impactor parameters (size, velocity, timing) and different assumptions related to impact erosion, ranging from a large eroding power (Ahrens 1993) to recent parameterizations (Shuvalov, 2009, 2010). We are able to produce models leading to present-day-like conditions through episodic volcanic activity consistent with Venus observations. Without any impact, CO2 pressure only slightly increases due to degassing. On the other hand, water pressure varies rapidly, leading to variations in surface temperature of up to 200 K, which have been identified as having an effect on volcanic activity. We observe a clear correlation between low temperature and the mobile lid regime. We observe short-term and long-term effects of the impacts on planetary evolution. While small (less than kilometer scale) meteorites have a negligible effect, large ones (up to around 100 km) are able to bring volatiles to the planet and generate melt both at the impact and later on, through volcanic events triggered by the changes they make to mantle dynamics. A significant amount of volatiles can be released on a short timescale. Depending on the timing of the impact, this can have significant long-term effects on the surface condition evolution. Atmospheric erosion caused by impacts, on the other hand, seems according to recent studies to have a marginal effect on the simulations, although the effect of the largest impactors is still debatable.
The impact of non-Gaussianity upon cosmological forecasts
NASA Astrophysics Data System (ADS)
Repp, A.; Szapudi, I.; Carron, J.; Wolk, M.
2015-12-01
The primary science driver for 3D galaxy surveys is their potential to constrain cosmological parameters. Forecasts of these surveys' effectiveness typically assume Gaussian statistics for the underlying matter density, despite the fact that the actual distribution is decidedly non-Gaussian. To quantify the effect of this assumption, we employ an analytic expression for the power spectrum covariance matrix to calculate the Fisher information for Baryon Acoustic Oscillation (BAO)-type model surveys. We find that for typical number densities, at kmax = 0.5 h Mpc^-1, Gaussian assumptions significantly overestimate the information on all parameters considered, in some cases by up to an order of magnitude. However, after marginalizing over a six-parameter set, the form of the covariance matrix (dictated by N-body simulations) causes the majority of the effect to shift to the 'amplitude-like' parameters, leaving the others virtually unaffected. We find that Gaussian assumptions at such wavenumbers can underestimate the dark energy parameter errors by well over 50 per cent, producing dark energy figures of merit almost three times too large. Thus, for 3D galaxy surveys probing the non-linear regime, proper consideration of non-Gaussian effects is essential.
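The mechanism can be sketched schematically (toy spectrum and covariance, not the paper's N-body-calibrated model): the Fisher information on an amplitude-like parameter, F = dP/dtheta^T C^-1 dP/dtheta, drops once a term coupling the band powers is added to the Gaussian (diagonal) covariance.

```python
import numpy as np

k = np.linspace(0.02, 0.5, 25)            # band-power wavenumbers, h/Mpc
P = 1.0e4 * (k / 0.1) ** -1.5             # toy power spectrum
dP_dA = P.copy()                          # amplitude-like parameter: dP/dlnA = P

n_modes = 200.0 * k**2                    # toy mode counts per band
C_gauss = np.diag(2.0 * P**2 / n_modes)   # Gaussian (diagonal) covariance
C_coupled = C_gauss + 0.5 * np.outer(P, P) / n_modes.max()   # added band coupling

for name, C in [("Gaussian", C_gauss), ("with coupling", C_coupled)]:
    F = dP_dA @ np.linalg.solve(C, dP_dA)        # Fisher information on lnA
    print("%-14s sigma(lnA) = %.4f" % (name, 1.0 / np.sqrt(F)))
```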
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi
ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.
Experimental analysis of synchronization and dynamics in an automobile as a complex system
NASA Astrophysics Data System (ADS)
González-Cruz, C. A.; Jáuregui-Correa, J. C.; López-Cajún, C.; Sen, M.; Domínguez-González, A.
2015-08-01
A complex system is composed of many interacting elements, and its behavior, as a whole, can be quite different from that of the individual elements. An automobile is an example of a common mechanical system composed of a large number of individual elements. These elements are connected through soft and hard linkages that transmit motion to each other. This paper proposes a variety of analytical tools to study experimental data from complex systems using two elements of an automobile as an example. Accelerometer measurements were taken from two elements within an automobile: the door and the dashboard. Two types of data were collected: response to impact loading, and response to road excitation of the automobile driven at different speeds. The signals were processed via Fourier and wavelet transforms, cross-correlation coefficients, Hilbert transform, and Kuramoto order parameters. A new parameter, called the order-deficit parameter, is introduced. Considerable, but not complete, synchronization can be found between the accelerations measured at these two locations in the automobile, and the degree of synchronization is quantified using the order-deficit parameter.
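For readers unfamiliar with the synchronization measure used here, the sketch below computes a Kuramoto order parameter from two measured signals via Hilbert-transform phases. The "order deficit" printed at the end is only a plausible stand-in (one minus the mean order parameter), not necessarily the order-deficit parameter defined in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def kuramoto_order_parameter(signals):
    """Kuramoto order parameter r(t) for a set of measured signals.

    signals : array (n_channels, n_samples); e.g. band-pass-filtered
              accelerations from the door and the dashboard.
    Returns r(t) in [0, 1]; r = 1 means the instantaneous phases coincide.
    """
    phases = np.angle(hilbert(signals, axis=1))      # instantaneous phases
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Toy example with two partially synchronized oscillators.
t = np.linspace(0.0, 10.0, 5000)
door = np.sin(2 * np.pi * 12.0 * t)
dash = np.sin(2 * np.pi * 12.0 * t + 0.4) + 0.3 * np.random.randn(t.size)
r = kuramoto_order_parameter(np.vstack([door, dash]))
print("mean order parameter:", r.mean())
print("order deficit (hypothetical definition):", 1.0 - r.mean())
```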
Alteration of metabolomic markers of amino-acid metabolism in piglets with in-feed antibiotics.
Mu, Chunlong; Yang, Yuxiang; Yu, Kaifan; Yu, Miao; Zhang, Chuanjian; Su, Yong; Zhu, Weiyun
2017-04-01
In-feed antibiotics have been used to promote growth in piglets, but their impact on the metabolomic profiles associated with host metabolism is largely unknown. In this study, to test the hypothesis that antibiotic treatment may affect metabolite composition both in the gut and in host biofluids, metabolomic profiles were analyzed in antibiotic-treated piglets. Piglets were fed a corn-soy basal diet with or without in-feed antibiotics from postnatal day 7 to day 42. The serum biochemical parameters, metabolomic profiles of the serum, urine, and jejunal digesta, and indicators of microbial metabolism (short-chain fatty acids and biogenic amines) were analyzed. Compared to the control group, antibiotics treatment did not have significant effects on serum biochemical parameters except that it increased (P < 0.05) the concentration of urea. Antibiotics treatment increased the relative concentrations of metabolites involved in amino-acid metabolism in the serum, while decreasing the relative concentrations of most amino acids in the jejunal content. Antibiotics reduced urinary 2-ketoisocaproate and hippurate. Furthermore, antibiotics decreased (P < 0.05) the concentrations of propionate and butyrate in the feces. Antibiotics significantly affected the concentrations of biogenic amines, which are derived from microbial amino-acid metabolism. The three major amines, putrescine, cadaverine, and spermidine, were all increased (P < 0.05) in the large intestine of antibiotics-treated piglets. These results indicate that in-feed antibiotics may have a significant impact on the metabolomic markers of amino-acid metabolism in piglets.
Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5
Wang, Yong; Zhang, Guang J.
2016-09-29
In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m2 in the standard CAM5 to -48.86 W/m2, close to -47.16 W/m2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.
Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yong; Zhang, Guang J.
In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m2 in the standard CAM5 to -48.86 W/m2, close to -47.16 W/m2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.
NASA Astrophysics Data System (ADS)
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2014-05-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations belong to the largest sums ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since measurements are affected by multiple sources of error. For the current event the operational weather radar rainfall product only estimated about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we will try to identify what gave rise to such large underestimations. In general, weather radar measurement errors can be subdivided into two different groups: 1) errors affecting the volumetric reflectivity measurements taken, and 2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for 1) clutter and anomalous propagation, 2) radar calibration, 3) wet radome attenuation, 4) signal attenuation and 5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. Results show a large improvement in the quality of the precipitation data; however, still only ~65% of the observed accumulations was estimated. To further improve the quality of the precipitation estimates, the second group of errors is corrected for by making use of disdrometer measurements taken in close vicinity of the radar. Based on these data the parameters of a normalized drop size distribution are estimated for the total event as well as for each precipitation type separately (convective, stratiform and undefined). These are then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of errors, the quality of the rainfall product improves further, leading to >80% of the observed accumulations. However, differentiating between precipitation types gives no better results than using the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? In order to tackle this question a Monte Carlo approach was used to generate >10000 sets of the normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. Results show that a large number of parameter sets result in improved radar precipitation estimates that closely resemble the observations. However, these optimal sets vary considerably compared with those obtained from the local disdrometer measurements.
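As a toy illustration of the second error group, the sketch below inverts a Z = aR^b power law and applies a simple gate-by-gate attenuation correction with an assumed k = cZ^d law. The coefficients are textbook Marshall-Palmer values and placeholder attenuation constants, not the event-specific parameters derived from the disdrometer data in this presentation.

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert a Z = a * R**b power law (a, b are Marshall-Palmer defaults;
    the study derives event-specific values from the disdrometer DSDs)."""
    z_lin = 10.0 ** (dbz / 10.0)          # reflectivity in mm^6 m^-3
    return (z_lin / a) ** (1.0 / b)       # rain rate in mm h^-1

def correct_attenuation(dbz, gate_km=1.0, c=3.0e-5, d=0.78):
    """Simple gate-by-gate (Hitschfeld-Bordan type) correction using an
    assumed specific-attenuation law k = c * Z**d [dB/km]; c and d are
    placeholders, not the coefficients used in the presentation."""
    corrected = np.array(dbz, dtype=float)
    path_atten = 0.0                       # two-way path-integrated attenuation [dB]
    for i in range(corrected.size):
        corrected[i] += path_atten
        k = c * 10.0 ** (d * corrected[i] / 10.0)
        path_atten += 2.0 * k * gate_km
    return corrected

ray_dbz = np.array([35.0, 42.0, 48.0, 45.0, 38.0])   # one radar ray, toy values
print(rain_rate(correct_attenuation(ray_dbz)))
```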
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beylot, Antoine, E-mail: a.beylot@brgm.fr; Villeneuve, Jacques
Highlights: • 110 French incinerators are compared with LCA based on plant-specific data. • Environmental impacts vary as a function of plants' energy recovery and NOx emissions. • E.g. climate change impact ranges from −58 to 408 kg CO2-eq/tonne of residual MSW. • Implications for LCA of waste management in a decision-making process are detailed. - Abstract: Incineration is the main option for residual Municipal Solid Waste treatment in France. This study compares the environmental performances of 110 French incinerators (i.e. 85% of the total number of plants currently in activity in France) in a Life Cycle Assessment perspective, considering 5 non-toxic impact categories: climate change, photochemical oxidant formation, particulate matter formation, terrestrial acidification and marine eutrophication. Mean, median and lower/upper impact potentials are determined considering the incineration of 1 tonne of French residual Municipal Solid Waste. The results highlight the relatively large variability of the impact potentials as a function of the plant technical performances. In particular, the climate change impact potential of the incineration of 1 tonne of waste ranges from a benefit of −58 kg CO2-eq to a relatively large burden of 408 kg CO2-eq, with 294 kg CO2-eq as the average impact. Two main plant-specific parameters drive the impact potentials regarding the 5 non-toxic impact categories under study: the energy recovery and delivery rate and the NOx process-specific emissions. The variability of the impact potentials as a function of incinerator characteristics therefore calls for the use of site-specific data when required by the LCA goal and scope definition phase, in particular when the study focuses on a specific incinerator or on a local waste management plan, and when these data are available.
Lunar and Planetary Science XXXV: Mars Geophysics
NASA Technical Reports Server (NTRS)
2004-01-01
The titles in this section include: 1) Distribution of Large Visible and Buried Impact Basins on Mars: Comparison with Free-Air Gravity, Crustal Thickness, and Magnetization Models; 2) The Early Thermal and Magnetic State of Terra Cimmeria, Southern Highlands of Mars; 3) Compatible Vector Components of the Magnetic Field of the Martian Crust; 4) Vertical Extrapolation of Mars Magnetic Potentials; 5) Rock Magnetic Fields Shield the Surface of Mars from Harmful Radiation; 6) Loading-induced Stresses near the Martian Hemispheric Dichotomy Boundary; 7) Growth of the Hemispheric Dichotomy and the Cessation of Plate Tectonics on Mars; 8) A Look at the Interior of Mars; 9) Uncertainties on Mars Interior Parameters Deduced from Orientation Parameters Using Different Radio-Links: Analytical Simulations; 10) Refinement of Phobos Ephemeris Using Mars Orbiter Laser Altimetry Radiometry.
NASA Astrophysics Data System (ADS)
Schenke, Björn; Tribedy, Prithwish; Venugopalan, Raju
2012-09-01
The event-by-event multiplicity distribution, the energy densities and energy density weighted eccentricity moments ɛn (up to n=6) at early times in heavy-ion collisions at both the BNL Relativistic Heavy Ion Collider (RHIC) (√s_NN = 200 GeV) and the CERN Large Hadron Collider (LHC) (√s_NN = 2.76 TeV) are computed in the IP-Glasma model. This framework combines the impact parameter dependent saturation model (IP-Sat) for nucleon parton distributions (constrained by HERA deeply inelastic scattering data) with an event-by-event classical Yang-Mills description of early-time gluon fields in heavy-ion collisions. The model produces multiplicity distributions that are convolutions of negative binomial distributions without further assumptions or parameters. In the limit of large dense systems, the n-particle gluon distribution predicted by the Glasma-flux tube model is demonstrated to be nonperturbatively robust. In the general case, the effect of additional geometrical fluctuations is quantified. The eccentricity moments are compared to the MC-KLN model; a noteworthy feature is that fluctuation dominated odd moments are consistently larger than in the MC-KLN model.
Scale-dependent temporal variations in stream water geochemistry.
Nagorski, Sonia A; Moore, Johnnie N; McKinnon, Temple E; Smith, David B
2003-03-01
A year-long study of four western Montana streams (two impacted by mining and two "pristine") evaluated surface water geochemical dynamics on various time scales (monthly, daily, and bi-hourly). Monthly changes were dominated by snowmelt and precipitation dynamics. On the daily scale, post-rain surges in some solute and particulate concentrations were similar to those of early spring runoff flushing characteristics on the monthly scale. On the bi-hourly scale, we observed diel (diurnal-nocturnal) cycling for pH, dissolved oxygen, water temperature, dissolved inorganic carbon, total suspended sediment, and some total recoverable metals at some or all sites. A comparison of the cumulative geochemical variability within each of the temporal groups reveals that for many water quality parameters there were large overlaps of concentration ranges among groups. We found that short-term (daily and bi-hourly) variations of some geochemical parameters covered large proportions of the variations found on a much longer term (monthly) time scale. These results show the importance of nesting short-term studies within long-term geochemical study designs to separate signals of environmental change from natural variability.
Issues Related to Large Flight Hardware Acoustic Qualification Testing
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Perry, Douglas C.; Kern, Dennis L.
2011-01-01
The characteristics of acoustical testing volumes generated by reverberant chambers or a circle of loudspeakers with and without large flight hardware within the testing volume are significantly different. The parameters attributing to these differences are normally not accounted for through analysis or acoustic tests prior to the qualification testing without the test hardware present. In most cases the control microphones are kept at least 2-ft away from hardware surfaces, chamber walls, and speaker surfaces to minimize the impact of the hardware in controlling the sound field. However, the acoustic absorption and radiation of sound by hardware surfaces may significantly alter the sound pressure field controlled within the chamber/speaker volume to a given specification. These parameters often result in an acoustic field that may provide under/over testing scenarios for flight hardware. In this paper the acoustic absorption by hardware surfaces will be discussed in some detail. A simple model is provided to account for some of the observations made from Mars Science Laboratory spacecraft that recently underwent acoustic qualification tests in a reverberant chamber.
Scale-dependent temporal variations in stream water geochemistry
Nagorski, S.A.; Moore, J.N.; McKinnon, Temple E.; Smith, D.B.
2003-01-01
A year-long study of four western Montana streams (two impacted by mining and two "pristine") evaluated surface water geochemical dynamics on various time scales (monthly, daily, and bi-hourly). Monthly changes were dominated by snowmelt and precipitation dynamics. On the daily scale, post-rain surges in some solute and particulate concentrations were similar to those of early spring runoff flushing characteristics on the monthly scale. On the bi-hourly scale, we observed diel (diurnal-nocturnal) cycling for pH, dissolved oxygen, water temperature, dissolved inorganic carbon, total suspended sediment, and some total recoverable metals at some or all sites. A comparison of the cumulative geochemical variability within each of the temporal groups reveals that for many water quality parameters there were large overlaps of concentration ranges among groups. We found that short-term (daily and bi-hourly) variations of some geochemical parameters covered large proportions of the variations found on a much longer term (monthly) time scale. These results show the importance of nesting short-term studies within long-term geochemical study designs to separate signals of environmental change from natural variability.
Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.
2013-01-01
Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430
Impact of Periodic Unsteadiness on Performance and Heat Load in Axial Flow Turbomachines
NASA Technical Reports Server (NTRS)
Sharma, Om P.; Stetson, Gary M.; Daniels, William A.; Greitzer, Edward M.; Blair, Michael F.; Dring, Robert P.
1997-01-01
Results of an analytical and experimental investigation, directed at the understanding of the impact of periodic unsteadiness on the time-averaged flows in axial flow turbomachines, are presented. Analysis of available experimental data from a large-scale rotating rig (LSRR) (low speed rig) shows that in the time-averaged axisymmetric equations the magnitude of the terms representing the effect of periodic unsteadiness (deterministic stresses) is as large as or larger than that of the terms due to random unsteadiness (turbulence). Numerical experiments, conducted to highlight physical mechanisms associated with the migration of combustor-generated hot streaks in turbine rotors, indicated that the effect can be simulated by accounting for deterministic-stress-like terms in the time-averaged mass and energy conservation equations. The experimental portion of this program shows that the aerodynamic loss for the second stator in a 1-1/2 stage turbine is influenced by the axial spacing between the second stator leading edge and the rotor trailing edge. However, the axial spacing has little impact on the heat transfer coefficient. These performance changes are believed to be associated with the change in deterministic stress at the inlet to the second stator. Data were also acquired to quantify the impact of indexing the first stator relative to the second stator. For the range of parameters examined, this effect was found to be of the same order as the effect of axial spacing.
Jaciw, Andrew P; Lin, Li; Ma, Boya
2016-10-18
Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts. This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on effects of covariates using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Student outcomes from six CRTs are analyzed: achievement in math, science, reading, and writing. The ratio of between-cluster variation in the slope of the moderator to total variance (the "moderator gap variance ratio") is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, in many cases estimates of differential impact were larger than those of average impact, and after conditioning on effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Measuring differential impacts is important for addressing questions of equity, generalizability, and guiding interpretation of subgroup impact findings. Adequate power for doing this is in some cases reachable with CRTs designed to measure average impacts. Continuing collection of parameters for assessing differential impacts is the next step. © The Author(s) 2016.
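For orientation, a minimal sketch of the standard two-level cluster-RCT minimum-detectable-effect-size calculation is given below; per the abstract, substituting the moderator gap variance ratio for the intraclass correlation gives a rough analogue for differential impacts. The multiplier, covariate R-squared terms and sample sizes are illustrative assumptions, not values taken from the article.

```python
import math

def mdes_cluster_rct(J, n, icc, P=0.5, R2_2=0.0, R2_1=0.0, M=2.8):
    """Minimum detectable effect size for a two-level cluster RCT
    (Bloom-style approximation; M ~ 2.8 gives ~80% power at alpha = .05).

    J, n : number of clusters and students per cluster.
    icc  : intraclass correlation for the average impact; per the article,
           substituting the "moderator gap variance ratio" here gives a
           rough analogue for differential (subgroup) impacts.
    P    : proportion of clusters assigned to treatment.
    R2_2, R2_1 : variance explained by covariates at levels 2 and 1.
    """
    var = (icc * (1 - R2_2) / (P * (1 - P) * J)
           + (1 - icc) * (1 - R2_1) / (P * (1 - P) * J * n))
    return M * math.sqrt(var)

# Typical ratios reported in the abstract: ~.02 (gender) and ~.04 (SES).
print(mdes_cluster_rct(J=40, n=60, icc=0.20))   # average impact
print(mdes_cluster_rct(J=40, n=60, icc=0.04))   # differential impact (SES)
```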
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, He; Lv, Hongliang; Guo, Hui, E-mail: hguan@stu.xidian.edu.cn
2015-11-21
Impact ionization affects the radio-frequency (RF) behavior of high-electron-mobility transistors (HEMTs), which have narrow-bandgap semiconductor channels, and this necessitates complex parameter extraction procedures for HEMT modeling. In this paper, an enhanced small-signal equivalent circuit model is developed to investigate the impact ionization, and an improved method is presented in detail for direct extraction of intrinsic parameters using two-step measurements in low-frequency and high-frequency regimes. The practicability of the enhanced model and the proposed direct parameter extraction method are verified by comparing the simulated S-parameters with published experimental data from an InAs/AlSb HEMT operating over a wide frequency range. The results demonstrate that the enhanced model with optimal intrinsic parameter values that were obtained by the direct extraction approach can effectively characterize the effects of impact ionization on the RF performance of HEMTs.
Higher impact of female than male migration on population structure in large mammals.
Tiedemann, R; Hardy, O; Vekemans, X; Milinkovitch, M C
2000-08-01
We simulated large mammal populations using an individual-based stochastic model under various sex-specific migration schemes and life history parameters from the blue whale and the Asian elephant. Our model predicts that genetic structure at nuclear loci is significantly more influenced by female than by male migration. We identified requisite comigration of mother and offspring during gravidity and lactation as the primary cause of this phenomenon. In addition, our model predicts that the common assumption that geographical patterns of mitochondrial DNA (mtDNA) could be translated into female migration rates (Nmf) will cause biased estimates of maternal gene flow when extensive male migration occurs and male mtDNA haplotypes are included in the analysis.
Fragment distribution in 78,86Kr+181Ta reactions
NASA Astrophysics Data System (ADS)
Zhang, Dong-Hong; Zhang, Feng-Shou
2018-05-01
Within the framework of the isospin-dependent quantum molecular dynamics model, along with the GEMINI model, the 86Kr+181Ta reaction at 80, 120 and 160 MeV/nucleon and the 78Kr+181Ta reaction at 160 MeV/nucleon are studied, and the production cross sections of the generated fragments are calculated. More intermediate- and large-mass fragments can be produced in reactions covering a large range of impact parameters. The production cross sections of nuclei such as the isotopes of Si and P generally decrease with increasing incident energy. Isotopes near the neutron drip line are produced more abundantly in the neutron-rich 86Kr+181Ta system. Supported by Youth Research Foundation of Shanxi Datong University (2016Q10)
Constitutive Behavior Modelling of AA1100-O at Large Strain and High Strain Rates
NASA Astrophysics Data System (ADS)
Testa, Gabriel; Iannitti, Gianluca; Ruggiero, Andrew; Gentile, Domenico; Bonora, Nicola
2017-06-01
The constitutive behavior of AA1100-O, provided as extruded bar, was investigated. Microscopic observation showed that the cross-section has a peculiar microstructure consisting of an inner core with a large grain size surrounded by an external annulus with finer grains. Low and high strain rate tensile tests were carried out at different temperatures ranging from -190 °C to 100 °C. The constitutive behavior was modelled using a modified version of the Rusinek-Klepaczko model. Parameters were calibrated on the tensile test results. Tests and numerical simulations of symmetric Taylor (RoR) and dynamic tensile extrusion (DTE) tests at different impact velocities were carried out in order to validate the model under complex deformation paths.
SU-G-IeP4-13: PET Image Noise Variability and Its Consequences for Quantifying Tumor Hypoxia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kueng, R; Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario; Manser, P
Purpose: The values in a PET image which represent activity concentrations of a radioactive tracer are influenced by a large number of parameters including patient conditions as well as image acquisition and reconstruction. This work investigates noise characteristics in PET images for various image acquisition and image reconstruction parameters. Methods: Different phantoms with homogeneous activity distributions were scanned using several acquisition parameters and reconstructed with numerous sets of reconstruction parameters. Images from six PET scanners from different vendors were analyzed and compared with respect to quantitative noise characteristics. Local noise metrics, which give rise to a threshold value defining the metric of hypoxic fraction, as well as global noise measures in terms of noise power spectra (NPS) were computed. In addition to variability due to different reconstruction parameters, spatial variability of activity distribution and its noise metrics were investigated. Patient data from clinical trials were mapped onto phantom scans to explore the impact of the scanner’s intrinsic noise variability on quantitative clinical analysis. Results: Local noise metrics showed substantial variability up to an order of magnitude for different reconstruction parameters. Investigations of corresponding NPS revealed reconstruction dependent structural noise characteristics. For the acquisition parameters, noise metrics were guided by Poisson statistics. Large spatial non-uniformity of the noise was observed in both axial and radial direction of a PET image. In addition, activity concentrations in PET images of homogeneous phantom scans showed intriguing spatial fluctuations for most scanners. The clinical metric of the hypoxic fraction was shown to be considerably influenced by the PET scanner’s spatial noise characteristics. Conclusion: We showed that a hypoxic fraction metric based on noise characteristics requires careful consideration of the various dependencies in order to justify its quantitative validity. This work may result in recommendations for harmonizing QA of PET imaging for multi-institutional clinical trials.
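As one concrete realization of the global noise measure mentioned above, the sketch below computes a radially averaged 2D noise power spectrum from a stack of uniform-phantom ROIs. The normalization convention and bin count are common choices, not necessarily those used in this work.

```python
import numpy as np

def noise_power_spectrum(rois, pixel_mm=2.0):
    """Radially averaged 2D noise power spectrum from a stack of ROIs taken
    in a uniform-activity region of a PET phantom.

    rois     : array (n_rois, ny, nx), each ROI nominally uniform.
    pixel_mm : pixel spacing in mm.
    Returns (spatial frequencies [1/mm], radially averaged NPS).
    """
    n, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove ROI means
    # 2D NPS: ensemble average of |FFT|^2, scaled by pixel area / N pixels.
    nps2d = np.mean(np.abs(np.fft.fft2(detrended)) ** 2, axis=0)
    nps2d *= (pixel_mm ** 2) / (nx * ny)
    nps2d = np.fft.fftshift(nps2d)
    # Radial average up to the (smaller) Nyquist frequency.
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_mm))
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    bins = np.linspace(0.0, min(fx.max(), fy.max()), 20)
    idx = np.digitize(fr.ravel(), bins)
    radial = []
    for i in range(1, bins.size):
        sel = nps2d.ravel()[idx == i]
        radial.append(sel.mean() if sel.size else np.nan)
    return 0.5 * (bins[1:] + bins[:-1]), np.array(radial)

# Toy usage: white-noise ROIs should give an approximately flat NPS.
rois = np.random.normal(0.0, 1.0, size=(50, 64, 64))
freqs, nps = noise_power_spectrum(rois)
print(nps[:5])
```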
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keshner, M. S.; Arya, R.
2004-10-01
Hewlett Packard has created a design for a "Solar City" factory that will process 30 million sq. meters of glass panels per year and produce 2.1-3.6 GW of solar panels per year, 100x the volume of a typical thin-film solar panel manufacturer in 2004. We have shown that with a reasonable selection of materials and conservative assumptions, this "Solar City" can produce solar panels and hit the price target of $1.00 per peak watt (6.5x-8.5x lower than prices in 2004) as the total price for a complete and installed rooftop (or ground-mounted) solar energy system. This breakthrough in the price of solar energy comes without the need for any significant new invention. It comes entirely from the manufacturing scale of a large plant and the cost savings inherent in operating at such a large manufacturing scale. We expect that further optimizations of these simple designs will lead to further improvements in cost. The manufacturing process and cost depend on the choice of the active layer that converts sunlight into electricity. The efficiency by which sunlight is converted into electricity can range from 7% to 15%. This parameter has a large effect on the overall price per watt. There are other impacts as well, and we have attempted to capture them without creating undue distractions. Our primary purpose is to demonstrate the impact of large-scale manufacturing. This impact is largely independent of the choice of active layer. It is not our purpose to compare the pros and cons of various types of active layers. Significant improvements in cost per watt can also come from scientific advances in active layers that lead to higher efficiency. But, again, our focus is on manufacturing gains and not on the potential advances in the basic technology.
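The headline capacity figure follows directly from the panel area and module efficiency; the short check below reproduces it (the 2.1-3.6 GWp range corresponds to roughly 7-12% efficiency at the standard 1000 W/m^2 rating irradiance).

```python
# Back-of-envelope check of the factory sizing quoted in the abstract:
# 30 million m^2 of panels per year, rated at the standard 1000 W/m^2.
area_m2 = 30e6
rating_w_per_m2 = 1000.0          # standard test condition irradiance
for eff in (0.07, 0.12):
    peak_gw = area_m2 * rating_w_per_m2 * eff / 1e9
    print(f"efficiency {eff:.0%}: {peak_gw:.1f} GWp per year")
# At the $1.00/Wp installed-system target this single plant's annual output
# is worth roughly $2.1-3.6 billion.
```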
Microgravity Impact Experiments: The Prime Campaign on the NASA KC-135
NASA Astrophysics Data System (ADS)
Colwell, Joshua E.; Sture, Stein; Lemos, Andreas R.
2002-11-01
Low velocity collisions (v less than 100 m/s) occur in a number of astrophysical contexts, including planetary rings, protoplanetary disks, the Kuiper belt of comets, and in secondary cratering events on asteroids and planetary satellites. In most of these situations the surface gravity of the target is less than a few per cent of 1 g. Asteroids and planetary satellites are observed to have a regolith consisting of loose, unconsolidated material. Planetary ring particles are likely also coated with dust, based on observations of dust within ring systems. The formation of planetesimals in protoplanetary disks begins with the accretion of dust particles. Knowledge of the response of the surface dust layer to collisions in the near absence of gravity is necessary for understanding the evolution of these systems. The Collisions Into Dust Experiment (COLLIDE) performs six impact experiments into simulated regolith in microgravity conditions on the space shuttle. The parameter space to be explored is quite large, including effects such as impactor mass and velocity, impact angle, target porosity, size distribution, and particle shape. We have developed an experiment, the Physics of Regolith Impacts in Microgravity Experiment (PRIME), that is analogous to COLLIDE but optimized for flight on the NASA KC-135 reduced gravity aircraft. The KC-135 environment provides the advantage of more rapid turnover between experiments, allowing a broader range of parameters to be studied quickly, and more room for the experiment so that more impact experiments can be performed each flight. The acceleration environment of the KC-135 is not as stable and minimal as on the space shuttle, and this requires impact velocities to be higher than the minimum achievable with COLLIDE. The experiment consists of an evacuated PRIME Impact Chamber (PIC) with an aluminum base plate and acrylic sides and top. A target tray, launcher, and mirror mount to the base plate. The launcher may be positioned to allow for impacts at angles of 30, 45, 60, and 90 degrees with respect to the target surface. The target material is contained in a 10 cm by 10 cm by 2 cm tray with a rotating door that is opened via a mechanical feed-through on the base plate. A spring-loaded inner door provides uniform compression on the target material prior to operation of the experiment to keep the material from settling or locking up during vibrations prior to the experiment. Data are recorded with the NASA high speed video camera. Frame rates are selected according to the impact parameters. The direct camera view is orthogonal to the projectile line of motion, and the mirrors within the PIC provide a view normal to the target surface. The spring-loaded launchers allow for projectile speeds between 10 cm/s and 500 cm/s with a variety of impactor sizes and densities. On each flight 8 PICs will be used, each one with a different set of impact parameters. Additional information is included in the original extended abstract.
NASA Astrophysics Data System (ADS)
Borrello, M. C.; Scribner, M.; Chessin, K.
2013-12-01
A growing body of research draws attention to the negative environmental impacts on surface water from large livestock facilities. These impacts are mostly in the form of excessive nutrient loading resulting in significantly decreased oxygen levels. Over-application of animal waste on fields as well as direct discharge into surface water from the facilities themselves have been identified as the main contributors to the development of hypoxic zones in Lake Erie, Chesapeake Bay and the Gulf of Mexico. Some regulators claim enforcement of water quality laws is problematic because of the nature and pervasiveness of non-point source impacts. Any direct discharge by a facility is a violation of permits governed by the Clean Water Act, unless the facility has special dispensation for discharge. Previous research by the principal author and others has shown runoff and underdrain transport are the main mechanisms by which nutrients enter surface water. This study utilized previous work to determine if the effects of non-point source discharge can be distinguished from direct (point-source) discharge using simple nutrient analysis and dissolved oxygen (DO) parameters. Nutrient and DO parameters were measured from three sites: 1. A stream adjacent to a field receiving manure, upstream of a large livestock facility with a history of direct discharge, 2. The same stream downstream of the facility and 3. A stream in an area relatively unimpacted by large-scale agriculture (control site). Results show that calculating simple Pearson correlation coefficients (r) over time for soluble reactive phosphorus (SRP) versus ammonia, and for temperature versus DO, distinguishes non-point-source from point-source discharge into surface water. The r value for SRP and ammonia for the upstream site was 0.01 while the r value for the downstream site was 0.92. The control site had an r value of 0.20. Likewise, r values were calculated on temperature and DO for each site. High negative correlations between temperature and DO are indicative of a relatively unimpacted stream. The temperature-DO results are consistent with the nutrient correlations: r = -0.97 for the upstream site, r = -0.21 for the downstream site and r = -0.89 for the control site. Results from every site tested were statistically significant (p ≤ 0.05). These results support previous studies and demonstrate that the simple analytical techniques mentioned provide an effective means for regulatory agencies and community groups to monitor and identify point source discharge from large livestock facilities.
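The correlation screening described above is easy to reproduce; the sketch below computes the two Pearson coefficients with scipy. The monitoring values are invented for illustration and are not the Montana or Michigan data.

```python
import numpy as np
from scipy.stats import pearsonr

def source_signature(srp, ammonia, temperature, dissolved_oxygen):
    """Correlation-based screening in the spirit of the study: a strong
    positive SRP-ammonia correlation together with a weak temperature-DO
    correlation is read as a signature of point-source discharge."""
    r_nutrients, _ = pearsonr(srp, ammonia)
    r_temp_do, _ = pearsonr(temperature, dissolved_oxygen)
    return r_nutrients, r_temp_do

# Toy series standing in for the monitoring data (values are illustrative).
rng = np.random.default_rng(0)
srp = rng.uniform(0.01, 0.30, 24)                       # mg/L
ammonia_down = 2.5 * srp + rng.normal(0.0, 0.02, 24)    # co-varies, as downstream
temp = np.linspace(8.0, 24.0, 24)                       # deg C
do_unimpacted = 12.0 - 0.25 * temp + rng.normal(0.0, 0.3, 24)   # mg/L
print(source_signature(srp, ammonia_down, temp, do_unimpacted))
```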
Valera, Alexandra; López-Guillermo, Armando; Cardesa-Salzmann, Teresa; Climent, Fina; González-Barca, Eva; Mercadal, Santiago; Espinosa, Íñigo; Novelli, Silvana; Briones, Javier; Mate, José L.; Salamero, Olga; Sancho, Juan M.; Arenillas, Leonor; Serrano, Sergi; Erill, Nadina; Martínez, Daniel; Castillo, Paola; Rovira, Jordina; Martínez, Antonio; Campo, Elias; Colomo, Luis
2013-01-01
MYC alterations influence the survival of patients with diffuse large B-cell lymphoma. Most studies have focused on MYC translocations but there is little information regarding the impact of numerical alterations and protein expression. We analyzed the genetic alterations and protein expression of MYC, BCL2, BCL6, and MALT1 in 219 cases of diffuse large B-cell lymphoma. MYC rearrangement occurred as the sole abnormality (MYC single-hit) in 3% of cases, MYC and concurrent BCL2 and/or BCL6 rearrangements (MYC double/triple-hit) in 4%, MYC amplifications in 2% and MYC gains in 19%. MYC single-hit, MYC double/triple-hit and MYC amplifications, but not MYC gains or other gene rearrangements, were associated with unfavorable progression-free survival and overall survival. MYC protein expression, evaluated using computerized image analysis, captured the unfavorable prognosis of MYC translocations/amplifications and identified an additional subset of patients without gene alterations but with similar poor prognosis. Patients with tumors expressing both MYC/BCL2 had the worst prognosis, whereas those with double-negative tumors had the best outcome. High MYC expression was associated with shorter overall survival irrespectively of the International Prognostic Index and BCL2 expression. In conclusion, MYC protein expression identifies a subset of diffuse large B-cell lymphoma with very poor prognosis independently of gene alterations and other prognostic parameters. PMID:23716551
Valera, Alexandra; López-Guillermo, Armando; Cardesa-Salzmann, Teresa; Climent, Fina; González-Barca, Eva; Mercadal, Santiago; Espinosa, Iñigo; Novelli, Silvana; Briones, Javier; Mate, José L; Salamero, Olga; Sancho, Juan M; Arenillas, Leonor; Serrano, Sergi; Erill, Nadina; Martínez, Daniel; Castillo, Paola; Rovira, Jordina; Martínez, Antonio; Campo, Elias; Colomo, Luis
2013-10-01
MYC alterations influence the survival of patients with diffuse large B-cell lymphoma. Most studies have focused on MYC translocations but there is little information regarding the impact of numerical alterations and protein expression. We analyzed the genetic alterations and protein expression of MYC, BCL2, BCL6, and MALT1 in 219 cases of diffuse large B-cell lymphoma. MYC rearrangement occurred as the sole abnormality (MYC single-hit) in 3% of cases, MYC and concurrent BCL2 and/or BCL6 rearrangements (MYC double/triple-hit) in 4%, MYC amplifications in 2% and MYC gains in 19%. MYC single-hit, MYC double/triple-hit and MYC amplifications, but not MYC gains or other gene rearrangements, were associated with unfavorable progression-free survival and overall survival. MYC protein expression, evaluated using computerized image analysis, captured the unfavorable prognosis of MYC translocations/amplifications and identified an additional subset of patients without gene alterations but with similar poor prognosis. Patients with tumors expressing both MYC/BCL2 had the worst prognosis, whereas those with double-negative tumors had the best outcome. High MYC expression was associated with shorter overall survival irrespectively of the International Prognostic Index and BCL2 expression. In conclusion, MYC protein expression identifies a subset of diffuse large B-cell lymphoma with very poor prognosis independently of gene alterations and other prognostic parameters.
Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction
NASA Astrophysics Data System (ADS)
Aarts, Fides; Jonsson, Bengt; Uijen, Johan
In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.
Optimization and application of blasting parameters based on the "pushing-wall" mechanism
NASA Astrophysics Data System (ADS)
Ren, Feng-yu; Sow, Thierno Amadou Mouctar; He, Rong-xing; Liu, Xin-rui
2012-10-01
The large-structure-parameter variant of the sublevel caving method is used in the Beiminghe iron mine. The ore is generally below medium hardness and easy to drill and blast. However, the boulder yield, the "pushing-wall" accident rate, and the brow damage rate were not effectively controlled in practical blasting. A model test with similar material shows that the charge concentration of the bottom blastholes in the sector is too high and that the pushing wall is the fundamental reason for the poor blasting effect. One of the main ways to adjust the explosive distribution is to increase the uncharged length of the blastholes. Therefore, field tests involving increased uncharged blasthole length were carried out in the 12# stope of the -95 subsection and the 6# stope of the Beiminghe iron mine. This paper takes the test result of the 12# stope as an example to analyze the impact of charge structure on the blasting effect and to design appropriate blasting parameters for stopes similar to No. 12.
Taccheo, Stefano; Gebavi, Hrvoje; Monteville, Achille; Le Goffic, Olivier; Landais, David; Mechin, David; Tregoat, Denis; Cadier, Benoit; Robin, Thierry; Milanese, Daniel; Durrant, Tim
2011-09-26
We report on an extensive investigation of photodarkening in Yb-doped silica fibers. A set of similar fibers, covering a large Yb concentration range, was made so as to compare the photodarkening-induced losses. Careful measurements were made to ensure equal and uniform inversion for all the tested fibers. The results show that, with the specific set-up, the stretching parameter obtained through fitting has a very limited variation. This gives more meaning to the fitting parameters. The results tend to indicate a square-law dependence of the final saturated loss on the concentration of excited ions. We also demonstrate self-similarity of the loss evolution when the experimental curves are simply normalized to the fitting parameters. This evidence of self-similarity also supports the possibility of introducing a preliminary figure of merit for Yb-doped fiber, which will allow the impact of photodarkening on laser/amplifier devices to be evaluated. © 2011 Optical Society of America
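The stretching parameter referred to above usually enters through a saturating stretched-exponential fit of the induced loss; the sketch below fits that form to a synthetic curve. The functional form, units and starting values are standard assumptions, not the exact fit used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, alpha_sat, tau, beta):
    """Photodarkening-induced excess loss vs. time: a saturating stretched
    exponential, with beta playing the role of the stretching parameter."""
    return alpha_sat * (1.0 - np.exp(-(t / tau) ** beta))

# Synthetic loss curve standing in for a measured one (dB/m vs. minutes).
t = np.linspace(1.0, 600.0, 120)
measured = stretched_exp(t, 40.0, 80.0, 0.55) + np.random.normal(0.0, 0.5, t.size)

popt, _ = curve_fit(stretched_exp, t, measured, p0=[30.0, 60.0, 0.6])
alpha_sat, tau, beta = popt
print(f"saturated loss {alpha_sat:.1f} dB/m, tau {tau:.0f} min, beta {beta:.2f}")
# Self-similarity check in the spirit of the paper: normalizing measured
# curves by (alpha_sat, tau) should collapse them onto a single shape.
```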
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
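A minimal sketch of the kind of large-scale coefficient such a procedure yields is given below. It assumes (as our reading of ecological-diffusion homogenization, not a quotation from the paper) that the effective motility is the harmonic average of the fine-scale motility, with local density scaling like its inverse.

```python
import numpy as np

def effective_motility(mu):
    """Assumed homogenized (large-scale) motility for ecological diffusion
    u_t = Laplacian(mu(x) * u): the harmonic average of the small-scale
    motility mu, with animals accumulating where mu is low.

    mu : array of motility values on a fine habitat grid [km^2/day].
    """
    return 1.0 / np.mean(1.0 / mu)

# Toy landscape: fast movement through open habitat, slow through forest.
grid = np.where(np.random.default_rng(1).random((100, 100)) < 0.3, 0.1, 2.0)
print("arithmetic mean motility:", grid.mean())
print("harmonic (homogenized) motility:", effective_motility(grid))
# The harmonic mean is pulled toward the slow habitat, reflecting longer
# residence times where motility is low.
```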
Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar
2012-01-01
Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
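The interval-based notion of a behavioural model described here boils down to a membership test like the sketch below; the signature names and limits are hypothetical, and in the actual workflow each interval would become an objective for the Borg MOEA search rather than a simple pass/fail check.

```python
import numpy as np

def is_behavioural(simulated_signatures, limits):
    """Check whether one model run falls inside the behavioural hyper-volume.

    simulated_signatures : dict name -> simulated value (e.g. runoff ratio,
                           baseflow index, relative groundwater level).
    limits : dict name -> (lower, upper) acceptability interval derived from
             data and perceptual knowledge, including their uncertainty.
    """
    return all(lo <= simulated_signatures[k] <= hi
               for k, (lo, hi) in limits.items())

# Hypothetical intervals (illustrative only, not the Plynlimon values).
limits = {"runoff_ratio": (0.55, 0.80),
          "baseflow_index": (0.25, 0.45),
          "gw_level_site3_minus_site1_m": (0.0, np.inf)}  # semi-quantitative ordering

run = {"runoff_ratio": 0.62, "baseflow_index": 0.31,
       "gw_level_site3_minus_site1_m": 0.8}
print(is_behavioural(run, limits))
```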
Formulation, General Features and Global Calibration of a Bioenergetically-Constrained Fishery Model
Bianchi, Daniele; Galbraith, Eric D.
2017-01-01
Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change. PMID:28103280
Carozza, David A; Bianchi, Daniele; Galbraith, Eric D
2017-01-01
Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change.
NASA Astrophysics Data System (ADS)
Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa
2018-05-01
The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessment are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data such as small farm and large reservoirs and irrigation were included. For the Shashe sub-basin incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.
NASA Astrophysics Data System (ADS)
Banerjee, Polash; Ghose, Mrinal Kanti; Pradhan, Ratika
2018-05-01
Spatial analysis of water quality impact assessment of highway projects in mountainous areas remains largely unexplored. A methodology is presented here for Spatial Water Quality Impact Assessment (SWQIA) due to highway-broadening-induced vehicular traffic change in the East district of Sikkim. The pollution load of the highway runoff was estimated using an Average Annual Daily Traffic-based empirical model in combination with a mass balance model to predict pollution in the rivers within the study area. Spatial interpolation and overlay analysis were used for impact mapping. An Analytic Hierarchy Process-based Water Quality Status Index was used to prepare a composite impact map. Model validation criteria, cross-validation criteria, and spatially explicit sensitivity analysis show that the SWQIA model is robust. The study shows that vehicular traffic is a significant contributor to water pollution in the study area. The model caters specifically to impact analysis of the project concerned. It can serve as an aid to a decision support system for the project stakeholders. The applicability of the SWQIA model needs to be explored and validated in the context of a larger set of water quality parameters and project scenarios at a greater spatial scale.
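As a rough illustration of the traffic-based loading and mass-balance steps mentioned above, a minimal Python sketch follows; the AADT scaling coefficient, flows and concentrations are invented for illustration and are not values from the SWQIA study.

```python
# Sketch of the mass-balance step described in the abstract: a traffic-derived
# pollutant load in highway runoff is mixed into a receiving river reach.
# The empirical loading coefficient and all flows/concentrations below are
# illustrative assumptions, not values from the SWQIA study.

def runoff_load(aadt, emc_per_1000_aadt, runoff_m3):
    """Pollutant load (g) from an event-mean concentration scaled by traffic."""
    emc = emc_per_1000_aadt * (aadt / 1000.0)    # mg/L, simple AADT-based scaling
    return emc * runoff_m3                       # mg/L * m3 = g

def mixed_concentration(c_river, q_river, load_g, q_runoff):
    """Fully mixed downstream concentration (mg/L) by conservation of mass."""
    c_runoff = load_g / q_runoff                 # g / m3 = mg/L
    return (c_river * q_river + c_runoff * q_runoff) / (q_river + q_runoff)

load = runoff_load(aadt=12000, emc_per_1000_aadt=0.05, runoff_m3=500.0)
print(f"downstream concentration: "
      f"{mixed_concentration(0.8, 8000.0, load, 500.0):.3f} mg/L")
```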
NASA Astrophysics Data System (ADS)
Gillmann, Cedric; Golabek, Gregor; Tackley, Paul
2015-04-01
We investigate the influence of impacts on the history of terrestrial planets from the point of view of internal dynamics and surface conditions. Our work makes use of our previous studies on Venus' long-term evolution through a coupled atmosphere/mantle numerical code. The solid part of the planet is simulated using the StagYY code (Armann and Tackley, 2012) and releases volatiles into the atmosphere through degassing. Coupling with the atmosphere is obtained by using surface temperature as a boundary condition. The evolution of surface temperature is calculated from CO2 and water concentrations in the atmosphere with a gray radiative-convective atmosphere model. These concentrations vary due to degassing and escape mechanisms. We take into account hydrodynamic escape, which is dominant during the first hundred million years, and non-thermal processes as observed by the ASPERA instrument and modeled in various works. Impacts can have different effects: they can (i) bring volatiles to the planet, (ii) erode its atmosphere, and (iii) modify mantle dynamics due to the large amount of energy they release. A 2D distribution of the impact-induced thermal anomaly is used; it can lead to melting and is transported by mantle convection. Volatile evolution is still strongly debated. We therefore test a wide range of impactor parameters (size, velocity, timing) and different assumptions related to impact erosion, from large eroding power to more moderate ones (Shuvalov, 2010). Atmospheric erosion appears to have significant effects only for massive impacts and to be mitigated by volatiles brought by the impactor. While small (0-10 km) meteorites have a negligible effect on the global scale, medium ones (50-150 km) are able to bring volatiles to the planet and generate melt, leading to strong short-term influence. However, only larger impacts (300+ km) have lasting effects. They can cause volcanic events both immediately after the impact and later on. Additionally, the amount of volatiles released is large enough to modify normal evolution and surface temperatures (tens of Kelvins). This is enough to modify mantle convection patterns. Depending on when such an impact occurs, the history of surface conditions can appear radically different. A key factor is thus the timing of the impact and how it interacts with other processes.
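The surface-temperature coupling can be illustrated with a much simpler calculation than the radiative-convective model used in the study: a gray radiative-equilibrium estimate with an assumed linear mapping from CO2 and H2O inventories to infrared optical depth. All constants below are illustrative, not values from the coupled code.

```python
# Schematic gray radiative-equilibrium estimate of a Venus-like surface
# temperature, far simpler than the coupled radiative-convective model the
# abstract refers to. The mapping from CO2/H2O partial pressures to a gray
# infrared optical depth is an assumed illustrative scaling.

def surface_temperature(t_eff, tau_ir):
    """Ground temperature for a gray atmosphere in radiative equilibrium."""
    return t_eff * (1.0 + 0.75 * tau_ir) ** 0.25

def gray_optical_depth(p_co2_bar, p_h2o_bar, k_co2=0.5, k_h2o=300.0):
    """Assumed linear scaling of optical depth with volatile inventories."""
    return k_co2 * p_co2_bar + k_h2o * p_h2o_bar

t_eff = 230.0                                  # effective temperature, K
for p_h2o in (0.0, 0.01, 0.03):                # bars of water vapour (illustrative)
    tau = gray_optical_depth(90.0, p_h2o)
    print(f"pH2O={p_h2o:5.3f} bar -> Ts ~ {surface_temperature(t_eff, tau):.0f} K")
```

Even this crude scaling shows surface-temperature swings of tens of Kelvins for modest changes in the water inventory, the order of magnitude discussed in the abstract.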
Forward scattering in two-beam laser interferometry
NASA Astrophysics Data System (ADS)
Mana, G.; Massa, E.; Sasso, C. P.
2018-04-01
A fractional error as large as 25 pm/mm at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.
MAMS: High resolution atmospheric moisture/surface properties
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Guillory, Anthony R.; Suggs, Ron; Atkinson, Robert J.; Carlson, Grant S.
1991-01-01
Multispectral Atmospheric Mapping Sensor (MAMS) data collected from a number of U2/ER2 aircraft flights were used to investigate atmospheric and surface (land) components of the hydrologic cycle. Algorithms were developed to retrieve surface and atmospheric geophysical parameters which describe the variability of atmospheric moisture, its role in cloud and storm development, and the influence of surface moisture and heat sources on convective activity. Techniques derived with MAMS data are being applied to existing satellite measurements to show their applicability to regional and large process studies and their impact on operational forecasting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenske, George
2016-11-28
Our primary task for this project was to perform FMEP calculations for a broad range of parameters including engine type [spark ignition (SI) or compression ignition (CI)], engine size, engine mode (speed and load), lubricant viscosity, asperity friction, surface finish, oil type (mineral or synthetic), and additive (friction modifier), as discussed previously [1–3]. The actual analysis was limited to a large diesel engine and it included both load and speed dependencies as well as lubricant viscosity and speed.
Active-sterile neutrino conversion: consequences for the r-process and supernova neutrino detection
NASA Astrophysics Data System (ADS)
Fetter, J.; McLaughlin, G. C.; Balantekin, A. B.; Fuller, G. M.
2003-02-01
We examine active-sterile neutrino conversion in the late time post-core-bounce supernova environment. By including the effect of feedback on the Mikheyev-Smirnov-Wolfenstein (MSW) conversion potential, we obtain a large range of neutrino mixing parameters which produce a favorable environment for the r-process. We look at the signature of this effect in the current generation of neutrino detectors now coming on line. We also investigate the impact of the neutrino-neutrino forward-scattering-induced potential on the MSW conversion.
Shock-induced damage in rocks: Application to impact cratering
NASA Astrophysics Data System (ADS)
Ai, Huirong
Shock-induced damage beneath impact craters is studied in this work. Two representative terrestrial rocks, San Marcos granite and Bedford limestone, are chosen as test targets. Impacts into the rock targets with different combinations of projectile material, size, impact angle, and impact velocity are carried out at cm scale in the laboratory. Shock-induced damage and fracturing cause a large-scale reduction in compressional wave velocity in the recovered target beneath the impact crater. The shock-induced damage is measured by mapping the compressional wave velocity reduction in the recovered target. A cm-scale nondestructive tomography technique is developed for this purpose. This technique proved effective in mapping the damage in San Marcos granite, and the inverted velocity profile is in very good agreement with results obtained by dicing and directly cutting open the target. Both compressional velocity and attenuation are measured in three orthogonal directions on cubes prepared from one granite target impacted by a lead bullet at 1200 m/s. Anisotropy is observed in both results, but attenuation seems to be a more useful parameter than acoustic velocity in studying the orientation of cracks. Our experiments indicate that the shock-induced damage is a function of impact conditions including projectile type and size, impact velocity, and target properties. Combined with other crater phenomena such as crater diameter, depth, ejecta, etc., shock-induced damage can be used as an important yet not well-recognized constraint on impact history. The shock-induced damage is also calculated numerically and compared with the experiments for a few representative shots. The Johnson-Holmquist strength and failure model, initially developed for ceramics, is applied to geological materials. Strength is a complicated function of pressure, strain, strain rate, and damage. The JH model, coupled with a crack softening model, is used to describe both the inelastic response of rocks in the compressive field near the impact source and the tensile failure in the far field. The model parameters are determined either from direct static measurements or from indirect numerical adjustment. The agreement between the simulation and experiment is very encouraging.
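For orientation, the damage-dependent strength interpolation at the core of a Johnson-Holmquist (JH-2 style) model can be sketched as follows; the constants are generic illustrative values, not the calibrated granite or limestone parameters used in the work above.

```python
import math

# Minimal sketch of the damage-dependent strength interpolation in a
# Johnson-Holmquist (JH-2 style) model: intact and fully-fractured strength
# surfaces blended by the damage variable D. Constants are illustrative,
# not calibrated values for San Marcos granite or Bedford limestone.
A, N, B, M, C = 0.93, 0.6, 0.31, 0.6, 0.005      # assumed model constants
P_HEL, SIGMA_HEL, T_STAR = 2.0e9, 4.5e9, 0.005   # Pa, Pa, normalized tensile limit

def equivalent_strength(pressure, damage, strain_rate=1.0):
    """Equivalent flow strength (Pa) at a given pressure, with damage in [0, 1]."""
    p_star = pressure / P_HEL
    rate = 1.0 + C * math.log(max(strain_rate, 1e-12))
    sigma_intact = A * max(p_star + T_STAR, 0.0) ** N * rate
    sigma_fract = B * max(p_star, 0.0) ** M * rate
    sigma_star = sigma_intact - damage * (sigma_intact - sigma_fract)
    return sigma_star * SIGMA_HEL

for D in (0.0, 0.5, 1.0):
    print(f"D={D:.1f}: strength at 1 GPa = {equivalent_strength(1.0e9, D)/1e9:.2f} GPa")
```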
Miner, Grace L; Bauerle, William L
2017-09-01
The Ball-Berry (BB) model of stomatal conductance (g_s) is frequently coupled with a model of assimilation to estimate water and carbon exchanges in plant canopies. The empirical slope (m) and 'residual' g_s (g_0) parameters of the BB model influence transpiration estimates, but the time-intensive nature of measurement limits species-specific data on seasonal and stress responses. We measured m and g_0 seasonally and under different water availability for maize and sunflower. The statistical method used to estimate parameters impacted values nominally when inter-plant variability was low, but had substantial impact with larger inter-plant variability. Values for maize (m = 4.53 ± 0.65; g_0 = 0.017 ± 0.016 mol m^-2 s^-1) were 40% higher than other published values. In maize, we found no seasonal changes in m or g_0, supporting the use of constant seasonal values, but water stress reduced both parameters. In sunflower, inter-plant variability of m and g_0 was large (m = 8.84 ± 3.77; g_0 = 0.354 ± 0.226 mol m^-2 s^-1), presenting a challenge to clear interpretation of seasonal and water stress responses: m values were stable seasonally, even as g_0 values trended downward, and m values trended downward with water stress while g_0 values declined substantially. © 2017 John Wiley & Sons Ltd.
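For reference, the standard Ball-Berry relation g_s = g_0 + m·A·h_s/C_s can be evaluated directly; the sketch below uses the maize parameters reported above, while the assimilation, humidity and CO2 inputs are assumed illustrative values.

```python
# The Ball-Berry model referenced above, in its standard form
#   g_s = g_0 + m * A * h_s / C_s
# using the maize parameters reported in the abstract (m = 4.53,
# g_0 = 0.017 mol m^-2 s^-1); the assimilation rate, humidity and CO2
# values below are assumed inputs for illustration.
def ball_berry(a_net, h_s, c_s, m=4.53, g0=0.017):
    """Stomatal conductance (mol m^-2 s^-1) from net assimilation a_net
    (umol m^-2 s^-1), leaf-surface relative humidity h_s (0-1) and
    leaf-surface CO2 mole fraction c_s (umol mol^-1)."""
    return g0 + m * a_net * h_s / c_s

print(f"well-watered maize leaf : {ball_berry(30.0, 0.65, 380.0):.3f} mol m^-2 s^-1")
print(f"water-stressed (lower A): {ball_berry(12.0, 0.45, 380.0):.3f} mol m^-2 s^-1")
```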
Influence parameters of impact grinding mills
NASA Technical Reports Server (NTRS)
Hoeffl, K.; Husemann, K.; Goldacker, H.
1984-01-01
Significant parameters for impact grinding mills were investigated. Final particle size was used to evaluate grinding results. Adjustment of the parameters toward increased charge load results in improved efficiency; however, it was not possible to define a single, unified set of optimum grinding conditions.
Microwave NDE of impact damaged fiberglass and elastomer layered composites
NASA Astrophysics Data System (ADS)
Greenawald, E. C.; Levenberry, L. J.; Qaddoumi, N.; McHardy, A.; Zoughi, R.; Poranski, C. F.
2000-05-01
Layered composites have been proposed as advanced materials for future use in large naval sonar domes. Unlike today's steel/rubber composite domes, such materials promise engineered acoustic properties and less costly resin-transfer fabrication methods. The development and deployment of these large and complex composite structures will result in challenging NDE requirements for both manufacturing quality assurance and in-service needs. Among the anticipated in-service requirements is the detection and characterization of the impact damage associated with striking a submerged object at sea. A one-sided inspection method is desired, preferably applicable in the underwater environment. In this paper, we present preliminary microwave NDE results from impact test coupons of a proposed thick FRP/elastomer/FRP "sandwich" composite. The coupons were scanned using a near-field microwave probe that responds to the composite's dielectric properties. The unprocessed scan data was displayed in an image format to reveal damaged areas. Results are compared with those from x-ray backscatter imaging and ultrasonic testing, and are verified by destructive analysis of the coupons. The difficulties posed by the application are discussed, as are the operating principles and advantages of the microwave methods. The importance of optimizing inspection parameters such as frequency and standoff distance is emphasized for future work.
Late Coupled Evolution of Venus' Atmosphere and the Effects of Meteoritic Impacts
NASA Astrophysics Data System (ADS)
Gillmann, C.; Tackley, P. J.; Golabek, G.
2013-12-01
We investigate what mechanisms and events could have led to the divergent evolution of Venus and Earth. We extend our previous investigation of the post-magma-ocean history of the atmosphere and surface conditions on Venus, based on a coupled model of mantle/atmosphere evolution, by including meteoritic impacts. Our main focuses are mechanisms that deplete or replenish the atmosphere: volcanic degassing, atmospheric escape and impacts. Atmospheric escape modeling involves two different aspects. During the first few hundreds of millions of years, hydrodynamic escape is dominant. A significant portion of the early atmosphere can thus be removed. For later evolution, on the other hand, non-thermal escape becomes the main process, as observed by the ASPERA instrument and modeled in various recent numerical studies. The atmosphere is replenished by volcanic degassing, using an adapted version of the StagYY mantle dynamics model (Armann and Tackley, 2012) and including episodic lithospheric overturn. The evolving surface temperature is calculated from CO2 and water in the atmosphere with a gray radiative-convective atmosphere model. This surface temperature in turn acts as a boundary condition for the mantle dynamics model and has an influence on the convection, volcanism and subsequent degassing. We take into account the effects of meteorites in our simulations by adapting each relevant part of the model. They can bring volatiles as well as erode the atmosphere. Mantle dynamics are modified since the impact itself can also bring large amounts of energy to the mantle. A 2D distribution of the thermal anomaly due to the impact is used and can lead to melting. Volatile evolution due to impacts (especially the large ones) is heavily debated, so we test a broad range of impactor parameters (size, velocity, timing) and different assumptions related to impact erosion, ranging from large eroding power (Ahrens 1993) to more recent parameterizations (Shuvalov, 2009, 2010). We obtain a Venus-like behavior for the solid planet and atmospheric evolution leading to present-day conditions. Without any impact, CO2 pressure seems unlikely to vary much over the history of the planet, only slightly increasing due to degassing. A late build-up of the atmosphere with several resurfacing events seems unlikely. On the other hand, water pressure is strongly sensitive to volcanic activity and varies rapidly, leading to variations in surface temperatures of up to 200 K, which have been identified to have an effect on volcanic activity. We observe a clear correlation between low temperature and mobile-lid regime. Impacts can strongly change this picture. While small (less than kilometer-scale) meteorites have a negligible effect, medium ones are able to bring volatiles to the planet and generate melt both at the impact and later on, owing to volcanic events triggered by the changes they make to mantle dynamics. A significant amount of volatiles (compared to the present-day atmosphere) can be released on a short timescale, which can increase the surface temperature by tens of Kelvin. Larger impactors (~100 km) have even stronger effects as they can blow upwards of 10% of the atmosphere away, depending on the parameters. Removing more than 80% of the atmosphere on impact is clearly feasible. In these cases, later degassing is also massive, which mitigates the volatile sink.
Evaluation of GCMs in the context of regional predictive climate impact studies.
NASA Astrophysics Data System (ADS)
Kokorev, Vasily; Anisimov, Oleg
2016-04-01
Significant improvements in the structure, complexity, and general performance of earth system models (ESMs) have been made in the recent decade. Despite these efforts, the range of uncertainty in predicting regional climate impacts remains large. The problem is two-fold. Firstly, there is an intrinsic conflict between the local and regional scales of climate impacts and adaptation strategies, on one hand, and the larger scales at which ESMs demonstrate better performance, on the other. Secondly, there is a growing understanding that the majority of the impacts involve thresholds, and are thus driven by extreme climate events, whereas the emphasis in climate projections is conventionally placed on gradual changes in means. In this study we assess the uncertainty in projecting extreme climatic events within a region-specific and process-oriented context by examining the skills and ranking of ESMs. We developed a synthetic regionalization of Northern Eurasia that accounts for the spatial features of modern climatic changes and major environmental and socio-economic impacts. Elements of this regionalization could be considered natural focus regions that bridge the gap between the spatial scales adopted in climate-impact studies and the patterns of climate change simulated by ESMs. In each focus region we selected several target meteorological variables that govern the key regional impacts, and examined the ability of the models to replicate their seasonal and annual means and trends by testing them against observations. We performed a similar evaluation with regard to extremes and statistics of the target variables. Lastly, we used the results of these analyses to select sets of models that demonstrate the best performance in selected focus regions with regard to selected sets of target meteorological parameters. Ultimately, we ranked the models according to their skills, identified top-end models that reproduce the behavior of climatic parameters better than average, and eliminated the outliers. Since the criteria for selecting the "best" models are somewhat loose, we constructed several regional ensembles consisting of different numbers of high-ranked models and compared results from these optimized ensembles with observations and with the ensemble of all models. We tested our approach in a specific regional application to the terrestrial Russian Arctic, considering permafrost and Arctic biomes as key regional climate-dependent systems, and the temperature and precipitation characteristics governing their state as target meteorological parameters. Results of this case study are deposited on the web portal www.permafrost.su/gcms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawton, Craig R.; Welch, Kimberly M.; Kerper, Jessica
2010-06-01
The Department of Defense's (DoD) Energy Posture identified dependence of the US Military on fossil fuel energy as a key issue facing the military. Inefficient energy consumption leads to increased costs and affects operational performance and warfighter protection through large and vulnerable logistics support infrastructures. The military's use of energy is a critical national security problem. DoD's proposed metrics, the Fully Burdened Cost of Fuel and the Energy Efficiency Key Performance Parameter (FBCF and Energy KPP), are a positive step to force energy use accountability onto military programs. The ability to measure the impacts of sustainment is required to fully measure the Energy KPP. Sandia's work with the Army demonstrates the capability to measure performance, including energy constraints.
A Structural Evaluation of a Large-Scale Quasi-Experimental Microfinance Initiative
Kaboski, Joseph P.; Townsend, Robert M.
2010-01-01
This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high-yield indivisible investment opportunities. After estimation of parameters using pre-program data, we evaluate the model’s ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than credit, which is interpreted as evidence of credit constraints. A cost-benefit analysis using the model indicates that some households value the program much more than its per household cost, but overall the program costs 20 percent more than the sum of these benefits. PMID:22162594
Smoking and diabetes. Epigenetics involvement in osseointegration.
Razzouk, Sleiman; Sarkis, Rami
2013-03-01
Bone quality is a poorly defined parameter for successful implant placement, which largely depends upon many environmental and genetic factors unique to every individual. Smoking and diabetes are among the environmental factors that most impact osseointegration. However, there is an inter-individual variability of bone response in smokers and diabetic patients. Recent data on gene-environment interactions highlight the major role of epigenetic changes to induce a specific phenotype. Histone acetylation and DNA methylation are the main events that occur and modulate the gene expression. In this paper, we emphasize the impact of epigenetics on diabetes and smoking and describe their significance in bone healing. Also, we underscore the importance of adopting a new approach in clinical management for implant placement by customizing the treatment according to the patient's specific characteristics.
A Structural Evaluation of a Large-Scale Quasi-Experimental Microfinance Initiative.
Kaboski, Joseph P; Townsend, Robert M
2011-09-01
This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high-yield indivisible investment opportunities. After estimation of parameters using pre-program data, we evaluate the model's ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than credit, which is interpreted as evidence of credit constraints. A cost-benefit analysis using the model indicates that some households value the program much more than its per household cost, but overall the program costs 20 percent more than the sum of these benefits.
Mars Microprobe Entry Analysis
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Mitcheltree, Robert A.; Cheatwood, F. McNeil
1998-01-01
The Mars Microprobe mission will provide the first opportunity for subsurface measurements, including water detection, near the south pole of Mars. In this paper, performance of the Microprobe aeroshell design is evaluated through development of a six-degree-of-freedom (6-DOF) aerodynamic database and flight dynamics simulation. Numerous mission uncertainties are quantified and a Monte-Carlo analysis is performed to statistically assess mission performance. Results from this 6-DOF Monte-Carlo simulation demonstrate that, in a majority of the cases (approximately 2-sigma), the penetrator impact conditions are within current design tolerances. Several trajectories are identified in which the current set of impact requirements are not satisfied. From these cases, critical design parameters are highlighted and additional system requirements are suggested. In particular, a relatively large angle-of-attack range near peak heating is identified.
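The Monte Carlo dispersion idea can be illustrated with a much simpler planar point-mass entry model than the 6-DOF simulation described above; the vehicle, atmosphere and dispersion values below are assumed for illustration only.

```python
import math, random

# Illustrative Monte Carlo dispersion analysis: a planar point-mass ballistic
# entry into an exponential Mars-like atmosphere, not the 6-DOF Microprobe
# simulation. Vehicle and dispersion values are assumed for illustration only.
RHO0, H_SCALE, G = 0.017, 11000.0, 3.71       # kg/m3, m, m/s2
BETA = 36.0                                    # ballistic coefficient m/(Cd*A), kg/m2

def impact_state(v0, gamma0_deg, rho_scale=1.0, h0=125e3, dt=0.05):
    v, gamma, h = v0, math.radians(gamma0_deg), h0
    while h > 0.0:
        rho = rho_scale * RHO0 * math.exp(-h / H_SCALE)
        drag = 0.5 * rho * v * v / BETA
        v += (-drag - G * math.sin(gamma)) * dt
        gamma += (-G * math.cos(gamma) / v) * dt     # gravity turn of the path
        h += v * math.sin(gamma) * dt
    return v, math.degrees(gamma)                    # impact speed, flight-path angle

random.seed(1)
samples = [impact_state(random.gauss(6900.0, 20.0),      # entry speed dispersion
                        random.gauss(-13.0, 0.5),        # flight-path angle dispersion
                        rho_scale=random.gauss(1.0, 0.08))  # density uncertainty
           for _ in range(400)]
speeds = [s[0] for s in samples]
mean = sum(speeds) / len(speeds)
sigma = (sum((s - mean) ** 2 for s in speeds) / len(speeds)) ** 0.5
print(f"impact speed: {mean:.0f} +/- {2 * sigma:.0f} m/s (2-sigma)")
```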
Quantifying rates of evolutionary adaptation in response to ocean acidification.
Sunday, Jennifer M; Crim, Ryan N; Harley, Christopher D G; Hart, Michael W
2011-01-01
The global acidification of the earth's oceans is predicted to impact biodiversity via physiological effects impacting growth, survival, reproduction, and immunology, leading to changes in species abundances and global distributions. However, the degree to which these changes will play out critically depends on the evolutionary rate at which populations will respond to natural selection imposed by ocean acidification, which remains largely unquantified. Here we measure the potential for an evolutionary response to ocean acidification in larval development rate in two coastal invertebrates using a full-factorial breeding design. We show that the sea urchin species Strongylocentrotus franciscanus has vastly greater levels of phenotypic and genetic variation for larval size in future CO(2) conditions compared to the mussel species Mytilus trossulus. Using these measures we demonstrate that S. franciscanus may have faster evolutionary responses within 50 years of the onset of predicted year-2100 CO(2) conditions despite having lower population turnover rates. Our comparisons suggest that information on genetic variation, phenotypic variation, and key demographic parameters, may lend valuable insight into relative evolutionary potentials across a large number of species.
NASA Astrophysics Data System (ADS)
Chobaut, Nicolas; Carron, Denis; Saelzle, Peter; Drezet, Jean-Marie
2016-11-01
Solutionizing and quenching are the key steps in the fabrication of heat-treatable aluminum parts such as AA2618 compressor impellers for turbochargers as they highly impact the mechanical characteristics of the product. In particular, quenching induces residual stresses that can cause unacceptable distortions during machining and unfavorable stresses in service. Predicting and controlling stress generation during quenching of large AA2618 forgings are therefore of particular interest. Since possible precipitation during quenching may affect the local yield strength of the material and thus impact the level of macroscale residual stresses, consideration of this phenomenon is required. A material model accounting for precipitation in a simple but realistic way is presented. Instead of modeling precipitation that occurs during quenching, the model parameters are identified using a limited number of tensile tests achieved after representative interrupted cooling paths in a Gleeble machine. This material model is presented, calibrated, and validated against constrained coolings in a Gleeble blocked-jaws configuration. Applications of this model are FE computations of stress generation during quenching of large AA2618 forgings for compressor impellers.
NASA Astrophysics Data System (ADS)
Ida, K.; Nagaoka, K.; Inagaki, S.; Kasahara, H.; Evans, T.; Yoshinuma, M.; Kamiya, K.; Ohdach, S.; Osakabe, M.; Kobayashi, M.; Sudo, S.; Itoh, K.; Akiyama, T.; Emoto, M.; Dinklage, A.; Du, X.; Fujii, K.; Goto, M.; Goto, T.; Hasuo, M.; Hidalgo, C.; Ichiguchi, K.; Ishizawa, A.; Jakubowski, M.; Kawamura, G.; Kato, D.; Morita, S.; Mukai, K.; Murakami, I.; Murakami, S.; Narushima, Y.; Nunami, M.; Ohno, N.; Pablant, N.; Sakakibara, S.; Seki, T.; Shimozuma, T.; Shoji, M.; Tanaka, K.; Tokuzawa, T.; Todo, Y.; Wang, H.; Yokoyama, M.; Yamada, H.; Takeiri, Y.; Mutoh, T.; Imagawa, S.; Mito, T.; Nagayama, Y.; Watanabe, K. Y.; Ashikawa, N.; Chikaraishi, H.; Ejiri, A.; Furukawa, M.; Fujita, T.; Hamaguchi, S.; Igami, H.; Isobe, M.; Masuzaki, S.; Morisaki, T.; Motojima, G.; Nagasaki, K.; Nakano, H.; Oya, Y.; Suzuki, C.; Suzuki, Y.; Sakamoto, R.; Sakamoto, M.; Sanpei, A.; Takahashi, H.; Tsuchiya, H.; Tokitani, M.; Ueda, Y.; Yoshimura, Y.; Yamamoto, S.; Nishimura, K.; Sugama, H.; Yamamoto, T.; Idei, H.; Isayama, A.; Kitajima, S.; Masamune, S.; Shinohara, K.; Bawankar, P. S.; Bernard, E.; von Berkel, M.; Funaba, H.; Huang, X. L.; T., Ii; Ido, T.; Ikeda, K.; Kamio, S.; Kumazawa, R.; Kobayashi, T.; Moon, C.; Muto, S.; Miyazawa, J.; Ming, T.; Nakamura, Y.; Nishimura, S.; Ogawa, K.; Ozaki, T.; Oishi, T.; Ohno, M.; Pandya, S.; Shimizu, A.; Seki, R.; Sano, R.; Saito, K.; Sakaue, H.; Takemura, Y.; Tsumori, K.; Tamura, N.; Tanaka, H.; Toi, K.; Wieland, B.; Yamada, I.; Yasuhara, R.; Zhang, H.; Kaneko, O.; Komori, A.; Collaborators
2015-10-01
The progress in the understanding of the physics and the concurrent parameter extension in the Large Helical Device since the last IAEA-FEC, in 2012 (Kaneko O et al 2013 Nucl. Fusion 53 095024), is reviewed. Plasma with high ion and electron temperatures (Ti(0) ~ Te(0) ~ 6 keV) with simultaneous ion and electron internal transport barriers is obtained by controlling recycling and heating deposition. A sign flip of the nondiffusive term of impurity/momentum transport (residual stress and convection flow) is observed, which is associated with the formation of a transport barrier. The impact of the topology of three-dimensional magnetic fields (stochastic magnetic fields and magnetic islands) on heat, momentum and particle/impurity transport and on magnetohydrodynamic stability is also discussed. In steady-state operation, a 48 min discharge with a line-averaged electron density of 1 × 10^19 m^-3 and with high electron and ion temperatures (Ti(0) ~ Te(0) ~ 2 keV), resulting in 3.36 GJ of input energy, is achieved.
Dangerous Near-Earth Asteroids and Meteorites
NASA Astrophysics Data System (ADS)
Mickaelian, A. M.; Grigoryan, A. E.
2015-07-01
The problem of Near-Earth Objects (NEOs; Asteroids and Meteorites) is discussed. To understand the probability of encounters with such objects, one may use two different approaches: 1) historical, based on the statistics of existing large meteorite craters on the Earth, estimation of the source meteorite sizes and the age of these craters to derive the frequency of encounters with a given size of meteorites, and 2) astronomical, based on the study and cataloging of all medium-size and large bodies in the Earth's neighbourhood and their orbits to estimate the probability, angles and other parameters of encounters. Therefore, we discuss both aspects and give our present knowledge on both phenomena. Though dangerous NEOs are one of the main sources of cosmic catastrophes, we also focus on other possible dangers, such as even slight changes of Solar irradiance or Earth's orbit, change of the Moon's impact on Earth, Solar flares or other manifestations of Solar activity, transit of comets (with impact on Earth's atmosphere), global climate change, dilution of Earth's atmosphere, damage to the ozone layer, explosion of nearby Supernovae, and even an attack by extraterrestrial intelligence.
VLBI-derived troposphere parameters during CONT08
NASA Astrophysics Data System (ADS)
Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.
2011-07-01
Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays ( ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm [variance component (VC) estimation], and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor-matrix, considers the reapplication of observations yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors. Thus, for the combination of troposphere parameters a combination of the two extensions of the stochastic model is recommended.
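The difference between the combination strategies can be illustrated with a toy single-epoch combination of per-AC ZTD estimates; the values and formal errors below are invented, and the sketch ignores the inter-AC correlations that the OSI and variance-component approaches are designed to handle.

```python
import numpy as np

# Sketch of combining zenith total delay (ZTD) estimates from several analysis
# centers at one epoch: unweighted mean versus inverse-variance ("diagonal")
# weighting. The per-AC values and formal errors are illustrative, not CONT08 data.
ztd = np.array([2441.3, 2444.1, 2442.8, 2439.9, 2443.5])   # mm
sigma = np.array([3.0, 5.0, 2.5, 6.0, 4.0])                # per-AC formal errors, mm

unweighted = ztd.mean()
w = 1.0 / sigma**2
weighted = np.sum(w * ztd) / np.sum(w)
weighted_formal_error = np.sqrt(1.0 / np.sum(w))           # ignores AC correlations

print(f"unweighted mean : {unweighted:.2f} mm")
print(f"weighted mean   : {weighted:.2f} mm +/- {weighted_formal_error:.2f} mm")
# Because all ACs reuse the same observations, a full combination would add
# off-diagonal covariance terms (the OSI model) or estimate variance components,
# which is why the formal errors quoted in the abstract differ so strongly.
```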
Veremchuk, Lyudmila V; Tsarouhas, Konstantinos; Vitkina, Tatyana I; Mineeva, Elena E; Gvozdenko, Tatyana A; Antonyuk, Marina V; Rakitskii, Valeri N; Sidletskaya, Karolina A; Tsatsakis, Aristidis M; Golokhvast, Kirill S
2018-04-01
Environmental pollution, local climatic conditions and their association with the prevalence and exacerbation of asthma are topics of intense current medical investigation. Air pollution in the area of Vladivostok was estimated both by the index of emission volumes of "air gaseous components" (nitrogen oxide and nitrogen dioxide, formaldehyde, hydrogen sulfide, carbon monoxide) in the urban atmosphere and by mass spectrometric analysis of precipitates in snow samples. A total of 172 local asthma patients (101 controlled-asthma patients, CAP, and 71 non-controlled asthma patients, nCAP) were evaluated with the use of spirometry and body plethysmography. Airway obstruction reversibility was evaluated with the use of an inhaled bronchodilator. Using discriminant analysis, the association of environmental parameters with clinical indices of asthma patients is explored and thresholds of impact are established. CAP presented high sensitivity to large-size suspended air particles and to several of the studied climatic parameters. Discriminant analysis showed high values of Wilks' lambda index (α = 0.69-0.81), which implies limited influence of environmental factors on the respiratory parameters of CAP. nCAP were more sensitive and susceptible to the majority of the environmental factors studied, including air-suspended toxic metal particles (Cr, Zn and Ni). Air-suspended particles showed a higher tendency for pathogenicity in the nCAP population than in the CAP group, with a wider range of particle sizes being involved. Dust fractions ranging from 0 to 1 μm and from 50 to 100 μm were additionally implicated compared to the CAP group. Considerably lower threshold levels of impact were calculated for nCAP. Copyright © 2017. Published by Elsevier Ltd.
Impact of Submarine Groundwater Discharge on Marine Water Quality and Reef Biota of Maui
Bishop, James M.
2016-01-01
Generally unseen and infrequently measured, submarine groundwater discharge (SGD) can transport potentially large loads of nutrients and other land-based contaminants to coastal ecosystems. To examine this linkage we employed algal bioassays, benthic community analysis, and geochemical methods to examine water quality and community parameters of nearshore reefs adjacent to a variety of potential, land-based nutrient sources on Maui. Three common reef algae, Acanthophora spicifera, Hypnea musciformis, and Ulva spp. were collected and/or deployed at six locations with SGD. Algal tissue nitrogen (N) parameters (δ15N, N %, and C:N) were compared with nutrient and δ15N-nitrate values of coastal groundwater and nearshore surface water at all locations. Benthic community composition was estimated for ten 10-m transects per location. Reefs adjacent to sugarcane farms had the greatest abundance of macroalgae, low species diversity, and the highest concentrations of N in algal tissues, coastal groundwater, and marine surface waters compared to locations with low anthropogenic impact. Based on δ15N values of algal tissues, we estimate ca. 0.31 km2 of Kahului Bay is impacted by effluent injected underground at the Kahului Wastewater Reclamation Facility (WRF); this region is barren of corals and almost entirely dominated by colonial zoanthids. Significant correlations among parameters of algal tissue N with adjacent surface and coastal groundwater N indicate that these bioassays provided a useful measure of nutrient source and loading. A conceptual model that uses Ulva spp. tissue δ15N and N % to identify potential N source(s) and relative N loading is proposed for Hawaiʻi. These results indicate that SGD can be a significant transport pathway for land-based nutrients with important biogeochemical and ecological implications in tropical, oceanic islands. PMID:27812171
Impact of Submarine Groundwater Discharge on Marine Water Quality and Reef Biota of Maui.
Amato, Daniel W; Bishop, James M; Glenn, Craig R; Dulai, Henrietta; Smith, Celia M
2016-01-01
Generally unseen and infrequently measured, submarine groundwater discharge (SGD) can transport potentially large loads of nutrients and other land-based contaminants to coastal ecosystems. To examine this linkage we employed algal bioassays, benthic community analysis, and geochemical methods to examine water quality and community parameters of nearshore reefs adjacent to a variety of potential, land-based nutrient sources on Maui. Three common reef algae, Acanthophora spicifera, Hypnea musciformis, and Ulva spp. were collected and/or deployed at six locations with SGD. Algal tissue nitrogen (N) parameters (δ15N, N %, and C:N) were compared with nutrient and δ15N-nitrate values of coastal groundwater and nearshore surface water at all locations. Benthic community composition was estimated for ten 10-m transects per location. Reefs adjacent to sugarcane farms had the greatest abundance of macroalgae, low species diversity, and the highest concentrations of N in algal tissues, coastal groundwater, and marine surface waters compared to locations with low anthropogenic impact. Based on δ15N values of algal tissues, we estimate ca. 0.31 km2 of Kahului Bay is impacted by effluent injected underground at the Kahului Wastewater Reclamation Facility (WRF); this region is barren of corals and almost entirely dominated by colonial zoanthids. Significant correlations among parameters of algal tissue N with adjacent surface and coastal groundwater N indicate that these bioassays provided a useful measure of nutrient source and loading. A conceptual model that uses Ulva spp. tissue δ15N and N % to identify potential N source(s) and relative N loading is proposed for Hawai'i. These results indicate that SGD can be a significant transport pathway for land-based nutrients with important biogeochemical and ecological implications in tropical, oceanic islands.
Melchardt, Thomas; Troppan, Katharina; Weiss, Lukas; Hufnagl, Clemens; Neureiter, Daniel; Tränkenschuh, Wolfgang; Schlick, Konstantin; Huemer, Florian; Deutsch, Alexander; Neumeister, Peter; Greil, Richard; Pichler, Martin; Egle, Alexander
2015-12-01
Several serum parameters have been evaluated for adding prognostic value to clinical scoring systems in diffuse large B-cell lymphoma (DLBCL), but none of the reports used multivariate testing of more than one parameter at a time. The goal of this study was to validate widely available serum parameters for their independent prognostic impact in the era of the National Comprehensive Cancer Network-International Prognostic Index (NCCN-IPI) score to determine which were the most useful. This retrospective bicenter analysis includes 515 unselected patients with DLBCL who were treated with rituximab and anthracycline-based chemoimmunotherapy between 2004 and January 2014. Anemia, high C-reactive protein, and high bilirubin levels had an independent prognostic value for survival in multivariate analyses in addition to the NCCN-IPI, whereas neutrophil-to-lymphocyte ratio, high gamma-glutamyl transferase levels, and platelets-to-lymphocyte ratio did not. In our cohort, we describe the most promising markers to improve the NCCN-IPI. Anemia and high C-reactive protein levels retain their power in multivariate testing even in the era of the NCCN-IPI. The negative role of high bilirubin levels may be associated as a marker of liver function. Further studies are warranted to incorporate these markers into prognostic models and define their role opposite novel molecular markers. Copyright © 2015 by the National Comprehensive Cancer Network.
Fire Detection Tradeoffs as a Function of Vehicle Parameters
NASA Technical Reports Server (NTRS)
Urban, David L.; Dietrich, Daniel L.; Brooker, John E.; Meyer, Marit E.; Ruff, Gary A.
2016-01-01
Fire survivability depends on the detection of and response to a fire before it has produced an unacceptable environment in the vehicle. This detection time is the result of interplay between the fire burning and growth rates; the vehicle size; the detection system design; the transport time to the detector (controlled by the level of mixing in the vehicle); and the rate at which the life support system filters the atmosphere, potentially removing the detected species or particles. Given the large differences in critical vehicle parameters (volume, mixing rate and filtration rate) the detection approach that works for a large vehicle (e.g. the ISS) may not be the best choice for a smaller crew capsule. This paper examines the impact of vehicle size and environmental control and life support system parameters on the detectability of fires in comparison to the hazard they present. A lumped element model was developed that considers smoke, heat, and toxic product release rates in comparison to mixing and filtration rates in the vehicle. Recent work has quantified the production rate of smoke and several hazardous species from overheated spacecraft polymers. These results are used as the input data set in the lumped element model in combination with the transport behavior of major toxic products released by overheating spacecraft materials to evaluate the necessary alarm thresholds to enable appropriate response to the fire hazard.
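A minimal version of such a lumped-element balance, assuming a single well-mixed volume and an ideal filter, is sketched below; the source rate, volumes, filtration rates and alarm threshold are illustrative assumptions.

```python
# Minimal sketch of the kind of lumped-element (well-mixed cabin) balance the
# abstract describes: a fire source releases smoke into volume V while the life
# support system filters it out; detection occurs when the concentration crosses
# an alarm threshold. Source rate, volumes and thresholds are illustrative.
def time_to_alarm(source_mg_s, volume_m3, filter_m3_s, threshold_mg_m3,
                  dt=1.0, t_max=3600.0):
    c, t = 0.0, 0.0
    while c < threshold_mg_m3 and t < t_max:
        # dC/dt = S/V - (Q_filter/V) * C   (well-mixed volume, ideal filter)
        c += (source_mg_s / volume_m3 - (filter_m3_s / volume_m3) * c) * dt
        t += dt
    return t if c >= threshold_mg_m3 else float("inf")

for name, volume, q_filter in (("large ISS-like module", 100.0, 0.20),
                               ("small crew capsule", 10.0, 0.05)):
    t = time_to_alarm(source_mg_s=0.5, volume_m3=volume,
                      filter_m3_s=q_filter, threshold_mg_m3=2.0)
    print(f"{name}: alarm after ~{t:.0f} s")
```

Even this toy balance shows why the same detection threshold behaves very differently in a large module and a small capsule: the smaller volume reaches the alarm level an order of magnitude sooner for the same source.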
B^0 → K^{*0} μ^+ μ^- decay in the aligned two-Higgs-doublet model
NASA Astrophysics Data System (ADS)
Hu, Quan-Yi; Li, Xin-Qiang; Yang, Ya-Dong
2017-03-01
In the aligned two-Higgs-doublet model, we perform a complete one-loop computation of the short-distance Wilson coefficients C_{7,9,10}^{(′)}, which are the most relevant ones for b → s ℓ^+ℓ^- transitions. It is found that, when the model parameter |σ_u| is much smaller than |σ_d|, the charged scalar contributes mainly to the chirality-flipped C_{9,10}^{′}, with the corresponding effects being proportional to |σ_d|^2. Numerically, the charged-scalar effects fall into two categories: (A) C_{7,9,10}^{H±} are sizable, but C_{9,10}^{′H±} ≃ 0, corresponding to the (large |σ_u|, small |σ_d|) region; (B) C_7^{H±} and C_{9,10}^{′H±} are sizable, but C_{9,10}^{H±} ≃ 0, corresponding to the (small |σ_u|, large |σ_d|) region. Taking into account phenomenological constraints from the inclusive radiative decay B → X_s γ, as well as the latest model-independent global analysis of b → s ℓ^+ℓ^- data, we obtain a much restricted parameter space of the model. We then study the impact of the allowed model parameters on the angular observables P_2 and P_5^{′} of the B^0 → K^{*0} μ^+ μ^- decay, and we find that P_5^{′} could be increased significantly to be consistent with the experimental data in case B.
Wennberg, Christian L; Murtola, Teemu; Hess, Berk; Lindahl, Erik
2013-08-13
The accuracy of electrostatic interactions in molecular dynamics advanced tremendously with the introduction of particle-mesh Ewald (PME) summation almost 20 years ago. Lattice summation electrostatics is now the de facto standard for most types of biomolecular simulations, and in particular, for lipid bilayers, it has been a critical improvement due to the large charges typically present in zwitterionic lipid headgroups. In contrast, Lennard-Jones interactions have continued to be handled with increasingly longer cutoffs, partly because few alternatives have been available despite significant difficulties in tuning cutoffs and parameters to reproduce lipid properties. Here, we present a new Lennard-Jones PME implementation applied to lipid bilayers. We confirm that long-range contributions are well approximated by dispersion corrections in simple systems such as pentadecane (which makes parameters transferable), but for inhomogeneous and anisotropic systems such as lipid bilayers there are large effects on surface tension, resulting in up to 5.5% deviations in area per lipid and order parameters-far larger than many differences for which reparameterization has been attempted. We further propose an approximation for combination rules in reciprocal space that significantly reduces the computational cost of Lennard-Jones PME and makes accurate treatment of all nonbonded interactions competitive with simulations employing long cutoffs. These results could potentially have broad impact on important applications such as membrane proteins and free energy calculations.
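For contrast with LJ-PME, the standard isotropic dispersion (tail) correction that works for homogeneous liquids can be written down directly; the density and united-atom parameters below are generic illustrative values.

```python
import math

# The standard isotropic dispersion ("tail") correction that the abstract says
# works well for homogeneous liquids such as pentadecane but fails for
# anisotropic bilayers; parameter values below are generic illustrations.
def lj_energy_tail_per_particle(rho, epsilon, sigma, r_cut):
    """Analytic long-range LJ energy correction per particle (homogeneous fluid)."""
    sr3 = (sigma / r_cut) ** 3
    return (8.0 / 3.0) * math.pi * rho * epsilon * sigma**3 * (sr3**3 / 3.0 - sr3)

rho = 0.0094                   # number density, 1/A^3 (roughly liquid-alkane CH2 sites)
epsilon, sigma = 0.091, 3.95   # kcal/mol and Angstrom, illustrative united-atom values
for r_cut in (9.0, 12.0, 14.0):
    du = lj_energy_tail_per_particle(rho, epsilon, sigma, r_cut)
    print(f"r_cut = {r_cut:4.1f} A -> tail correction {du:.4f} kcal/mol per site")
# In a bilayer the density is neither homogeneous nor isotropic, so this
# correction misses interfacial contributions; that is the gap LJ-PME closes.
```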
Consequences of high-x proton size fluctuations in small collision systems at √s_NN = 200 GeV
NASA Astrophysics Data System (ADS)
McGlinchey, D.; Nagle, J. L.; Perepelitsa, D. V.
2016-08-01
Recent measurements of jet production rates at large transverse momentum (p_T) in the collisions of small projectiles with large nuclei at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider indicate that they have an unexpected relationship with estimates of the collision centrality. One compelling interpretation of the data is that they capture an x_p-dependent decrease in the average interaction strength of the nucleon in the projectile undergoing a hard scattering. A weakly interacting or "shrinking" nucleon in the projectile strikes fewer nucleons in the nucleus, resulting in a particular pattern of centrality-dependent modifications to high-p_T processes. We describe a simple one-parameter geometric implementation of this picture within a modified Monte Carlo Glauber model tuned to d+Au jet data, and explore two of its major consequences. First, the model predicts a particular projectile-species effect on the centrality dependence at high x_p, opposite to that expected from a final state energy loss effect. Second, we find that some of the large centrality dependence observed for forward dihadron production in d+Au collisions at RHIC may arise from the physics of the "shrinking" projectile nucleon, in addition to impact parameter dependent shadowing or saturation effects at low nuclear x. We conclude that analogous measurements in recently collected p+Au and 3He+Au collision data at RHIC can provide a unique test of these predictions.
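A toy version of the one-parameter "shrinking nucleon" modification to a Monte Carlo Glauber calculation is sketched below; the Woods-Saxon geometry is standard, but the linear shrinkage law and the parameter ALPHA are illustrative assumptions rather than the tuned values of the paper.

```python
import math, random

# One-parameter toy version of the "shrinking projectile nucleon" picture in a
# Monte Carlo Glauber calculation: the nucleon-nucleon cross section used for
# the projectile nucleon carrying a hard parton is reduced as x_p grows.
# The nuclear geometry is generic; the shrinkage law and ALPHA are illustrative.
SIGMA_NN = 4.2                        # fm^2 (~42 mb at 200 GeV)
R_AU, A_AU, SKIN = 6.38, 197, 0.54    # Woods-Saxon radius (fm), mass number, diffuseness
ALPHA = 0.9                           # assumed shrinkage parameter

def sample_au_nucleons():
    """Rejection-sample A_AU nucleon positions from a Woods-Saxon density."""
    nucleons = []
    while len(nucleons) < A_AU:
        r = 12.0 * random.random()
        if random.random() < r * r / (1.0 + math.exp((r - R_AU) / SKIN)) / 40.0:
            phi = 2.0 * math.pi * random.random()
            cos_t = 2.0 * random.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t**2)
            nucleons.append((r * sin_t * math.cos(phi), r * sin_t * math.sin(phi)))
    return nucleons                   # transverse (x, y) positions only

def n_coll(b, x_p):
    sigma = SIGMA_NN * (1.0 - ALPHA * x_p)        # shrinking interaction strength
    d_max2 = sigma / math.pi
    return sum((x - b) ** 2 + y ** 2 < d_max2 for x, y in sample_au_nucleons())

random.seed(2)
for x_p in (0.1, 0.5, 0.9):
    mean_ncoll = sum(n_coll(b=3.0, x_p=x_p) for _ in range(200)) / 200.0
    print(f"x_p = {x_p:.1f}: <Ncoll> at b = 3 fm ~ {mean_ncoll:.1f}")
```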
Impact parameter sensitive study of inner-shell atomic processes in the experimental storage ring
NASA Astrophysics Data System (ADS)
Gumberidze, A.; Kozhuharov, C.; Zhang, R. T.; Trotsenko, S.; Kozhedub, Y. S.; DuBois, R. D.; Beyer, H. F.; Blumenhagen, K.-H.; Brandau, C.; Bräuning-Demian, A.; Chen, W.; Forstner, O.; Gao, B.; Gassner, T.; Grisenti, R. E.; Hagmann, S.; Hillenbrand, P.-M.; Indelicato, P.; Kumar, A.; Lestinsky, M.; Litvinov, Yu. A.; Petridis, N.; Schury, D.; Spillmann, U.; Trageser, C.; Trassinelli, M.; Tu, X.; Stöhlker, Th.
2017-10-01
In this work, we present a pilot experiment in the experimental storage ring (ESR) at GSI devoted to impact parameter sensitive studies of inner shell atomic processes for low-energy (heavy-) ion-atom collisions. The experiment was performed with bare and He-like xenon ions (Xe54+, Xe52+) colliding with neutral xenon gas atoms, resulting in a symmetric collision system. This choice of the projectile charge states was made in order to compare the effect of a filled K-shell with the empty one. The projectile and target X-rays have been measured at different observation angles for all impact parameters as well as for the impact parameter range of ∼35-70 fm.
Sideways fall-induced impact force and its effect on hip fracture risk: a review.
Nasiri Sarvi, M; Luo, Y
2017-10-01
Osteoporotic hip fracture, mostly induced in falls among the elderly, is a major health burden worldwide. The impact force applied to the hip is an important factor in determining the risk of hip fracture. However, biomechanical research has yielded conflicting conclusions about whether the fall-induced impact force can be accurately predicted by the available models. It has also been debated whether or not the effect of impact force has been considered appropriately in hip fracture risk assessment tools. This study aimed to provide a state-of-the-art review of the available methods for predicting the impact force, investigate their strengths/limitations, and suggest further improvements in modeling of human body falling. We divided the parameters affecting impact force into two categories: (1) the parameters that can be determined subject-specifically and (2) the parameters that may significantly vary from fall to fall for an individual and cannot be considered subject-specifically. The parameters in the first category can be investigated in human body fall experiments. Video capture of real-life falls was reported as a valuable method to investigate the parameters in the second category that significantly affect the impact force and cannot be determined in human body fall experiments. The analysis of the gathered data revealed that there is a need to develop modified biomechanical models for more accurate prediction of the impact force and to appropriately adopt them in hip fracture risk assessment tools in order to achieve better precision in identifying high-risk patients. Graphical abstract: Impact force to the hip induced in sideways falls is affected by many parameters and may vary remarkably from subject to subject.
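The simplest class of model reviewed here, a single-degree-of-freedom spring-mass representation of the falling body, is sketched below; the stiffness, effective-mass fraction and fall height are assumed values, not subject-specific estimates.

```python
import math

# A minimal single-degree-of-freedom sketch of the kind of biomechanical model
# reviewed in the abstract: the pelvis/soft tissue as a spring of stiffness k,
# an effective mass m falling onto it, with peak force F = v * sqrt(k * m).
# The stiffness, effective-mass fraction and fall height are assumed values.
def peak_impact_force(body_mass_kg, fall_height_m, k_n_per_m=50e3,
                      effective_mass_fraction=0.35, g=9.81):
    v_impact = math.sqrt(2.0 * g * fall_height_m)      # hip speed at contact
    m_eff = effective_mass_fraction * body_mass_kg
    return v_impact * math.sqrt(k_n_per_m * m_eff)     # undamped SDOF peak force

for mass, height in ((60.0, 0.7), (90.0, 0.7)):
    force_kn = peak_impact_force(mass, height) / 1e3
    print(f"{mass:.0f} kg subject: peak force ~ {force_kn:.1f} kN")
```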
An open source GIS-based tool to integrate the fragmentation mechanism in rockfall propagation
NASA Astrophysics Data System (ADS)
Matas, Gerard; Lantada, Nieves; Gili, Josep A.; Corominas, Jordi
2015-04-01
Rockfalls are frequent instability processes in road cuts, open pit mines and quarries, steep slopes and cliffs. Even though the stability of rock slopes can be determined using analytical approaches, the assessment of large rock cliffs requires simplifying assumptions due to the difficulty of working with a large number of joints and the scatter in both their orientations and strength parameters. The attitude and persistency of joints within the rock mass define the size of kinematically unstable rock volumes. Furthermore, the rock block will eventually split into several fragments during its propagation downhill due to its impact with the ground surface. Knowledge of the size, energy, trajectory… of each block resulting from fragmentation is critical in determining the vulnerability of buildings and protection structures. The objective of this contribution is to present a simple and open source tool to simulate the fragmentation mechanism in rockfall propagation models and in the calculation of impact energies. This tool includes common modes of motion for falling boulders based on the previous literature. The final tool is being implemented in a GIS (Geographic Information System) using open source Python programming. The tool under development will be simple, modular, compatible with any GIS environment, open source, and able to model rockfall phenomena correctly. It could be used in any area susceptible to rockfalls after a prior adjustment of the parameters. After the adjustment of the model parameters to a given area, a simulation could be performed to obtain maps of kinetic energy, frequency, stopping density and passing heights. This GIS-based tool and the analysis of the fragmentation laws using data collected from recent rockfalls are being developed within the RockRisk Project (2014-2016). This project is funded by the Spanish Ministerio de Economía y Competitividad and entitled "Rockfalls in cliffs: risk quantification and its prevention" (BIA2013-42582-P).
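A highly simplified sketch of the energy bookkeeping such a tool must perform at an impact point is given below, in Python as used by the tool itself; the restitution coefficient, fragmentation threshold and mass split are illustrative choices, not the RockRisk fragmentation laws.

```python
import math

# Simplified bookkeeping at a single ground impact: a falling block loses energy
# on impact and may split into fragments whose kinetic energies are tracked
# separately. Restitution, threshold and mass split are illustrative choices.
RHO_ROCK = 2700.0            # kg/m3
E_FRAG_THRESHOLD = 50e3      # J, assumed energy above which the block fragments

def kinetic_energy(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s**2

def impact(mass_kg, speed_m_s, restitution=0.35, split=(0.6, 0.3, 0.1)):
    """Return a list of (mass, speed) fragments after one ground impact."""
    e_in = kinetic_energy(mass_kg, speed_m_s)
    e_retained = restitution * e_in
    if e_in < E_FRAG_THRESHOLD:
        return [(mass_kg, math.sqrt(2.0 * e_retained / mass_kg))]
    fragments = []
    for f in split:                       # retained energy shared in proportion to mass
        m = f * mass_kg
        fragments.append((m, math.sqrt(2.0 * f * e_retained / m)))
    return fragments

block_volume = 0.8                        # m3
mass = RHO_ROCK * block_volume
for m, v in impact(mass, speed_m_s=12.0):
    print(f"fragment {m:7.1f} kg at {v:4.1f} m/s -> {kinetic_energy(m, v)/1e3:6.1f} kJ")
```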
Newman, A; Mann, S; Nydam, D V; Overton, T R; Behling-Kelly, E
2016-02-01
The high energy demands of dairy cows during the transition period from late gestation into early lactation can place them at an increased risk for the development of metabolic and infectious diseases. Modification of the dry period diet has been investigated as a preventive means to minimize the detrimental aspects of metabolic shifts during the transition period. Studies investigating the impact of dry period diet on lipid parameters during the transition period have largely focused on markers of lipolysis and ketogenesis. Total cholesterol declines during the periparturient period and increases in early lactation. The impact total energy in the dry period diet has on the ability of the cow to maintain total serum cholesterol, as well as its natural high-density lipoprotein-rich status, during this metabolically challenging window is not clear. The impact of lipoproteins on inflammation and immune function may have a clinical impact on the cow's ability to ward off production-related diseases. In this study, we hypothesized that the provision of adequate, but not excessive, total metabolizable energy, would better allow the cow to maintain total cholesterol and a higher relative proportion of HDL throughout the transition period. Cows were allocated to one of three dry period dietary treatment groups following a randomized block design. Total serum triglycerides, cholesterol and lipoprotein fractions were measured on a weekly basis from approximately 7 weeks pre-calving to 6 weeks post-calving. The cows on the high energy diet maintained total serum cholesterol as compared to the cows provided a lower energy diet, but there was no significant increase in the LDL fraction of lipoproteins between diet treatment groups. Journal of Animal Physiology and Animal Nutrition © 2015 Blackwell Verlag GmbH.
Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.
Etemad, S Gh; Thibault, J; Hashemabadi, S H
2003-10-01
This paper presents the numerical investigation performed to calculate the correction factor for Pitot tubes. The purely viscous non-Newtonian fluids with the power-law model constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation could lead to large errors, which depend on the magnitude of the kinetic energy and energy friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds number.
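The correction enters the velocity calculation as a multiplicative factor on the ideal Bernoulli result; a minimal sketch follows, with the pressure reading, fluid density and the non-unity correction value chosen purely for illustration.

```python
import math

# Velocity from a Pitot tube reading with a correction factor C of the kind the
# paper correlates against power-law index, Reynolds number and tube spacing:
#   v = C * sqrt(2 * dp / rho)
# C = 1 recovers the ideal Bernoulli result; the non-unity value below is an
# assumed illustration of a low-Reynolds-number, shear-thinning case.
def pitot_velocity(delta_p_pa, density_kg_m3, correction=1.0):
    return correction * math.sqrt(2.0 * delta_p_pa / density_kg_m3)

dp, rho = 120.0, 1100.0          # Pa, kg/m3 (illustrative polymer solution)
print(f"ideal Bernoulli      : {pitot_velocity(dp, rho):.3f} m/s")
print(f"with C = 0.85 applied: {pitot_velocity(dp, rho, correction=0.85):.3f} m/s")
```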
Crustal Thickness and Magnetization beneath Crisium and Moscoviense Lunar Impact Basins
NASA Astrophysics Data System (ADS)
Quesnel, Y.
2016-12-01
The recent NASA GRAIL mission made it possible to derive a high-resolution model of the Moon's crustal thickness. It revealed that the Mare Crisium and Moscoviense large impact basins have the thinnest (< 7-8 km) crust of the Moon. On the other hand, significant magnetic field anomalies were measured over these basins by the Lunar Prospector and Kaguya magnetometers. The Crisium lunar impact basin shows two localized intense (~10 nT at 30 km altitude) magnetic field anomalies located near its northern and southern borders, while Moscoviense shows a relatively intense (~4-5 nT at 30 km) central magnetic field anomaly. In detail, these two anomalies are located exactly where the thinnest (<1-3 km) crust within the basins is predicted by the crustal thickness models. In this study we investigate this apparent anti-correlation by modeling the sources of these potential field data using several forward approaches in 2D and 3D. The parameters of the crustal source models are constrained by density and magnetization measurements on APOLLO samples, and by standard values for the lunar mantle and crust. Several possible models will be shown for the two basins. Preliminary results suggest that, beneath the thin Mare basalt layer seen at the floor of both basins, a magnetized layer with laterally varying thickness is required. This layer may correspond to an impact melt sheet. We here exclude the hypothesis that a part of the lunar upper mantle could be magnetized beneath these basins (perhaps due to post-impact processes?), largely reducing the range of possible depths for the magnetic sources.
NASA Astrophysics Data System (ADS)
Thorhaug, A.
1980-03-01
The principles of the dynamics and interrelationships within the dominant subtropical and tropical Caribbean seagrass community have been studied before, during, and after impact. From these and scores of observations of damage and recovery patterns in Thalassia ecosystems, a sense of management recovery strategy has emerged. Artificial restoration of Thalassia testudinum seeds into areas cut off from stock (fruit, seeds) appeared feasible on a large scale after the Turkey Point (Biscayne Bay, Miami, Florida) restoration and test sampling throughout North Biscayne Bay. Two large-scale seeding attempts were made; after 11 months they compared favorably with Turkey Point specimens with regard to growth parameters, despite the turbidity and other persistent pollution. Thus, the possible areas in which Thalassia seed restoration can be used have increased to include estuaries of multiple impact still in various stages of recovery after physical and sewage pollution. This technique should be especially useful to “developing” nations where important nearshore fisheries nurseries based on Thalassia ecosystems have been heavily damaged and now lie barren. Man's impact on the estuary where seed restoration was attempted includes the following: 50% of the bay bottom directly dredged or filled (leaving much unconsolidated sediment); 50 million gallons of domestic waste dumped directly into a low-flushing part of the bay for 20 years; seven major causeways transecting the bay, restricting circulation and flushing; two artificial inlets made into navigational channels; freshwater sheet flow drastically changed due to channelization by flood-control canals; and urban runoff from a million people entering the bay. Most of these impacts have now abated; however, their long-term effects remain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barron, Robert W.; McJeon, Haewon C.
2015-05-01
This paper considers the effect of several key parameters of low carbon energy technologies on the cost of abatement. A methodology for determining the minimum level of performance required for a parameter to have a statistically significant impact on CO2 abatement cost is developed and used to evaluate the impact of eight key parameters of low carbon energy supply technologies on the cost of CO2 abatement. The capital cost of nuclear technology is found to have the greatest impact of the parameters studied. The costs of biomass and CCS technologies also have an impact, while their efficiencies have little, if any. Sensitivity analysis of the results with respect to population, GDP, and CO2 emission constraint shows that the minimum performance level and impact of nuclear technologies are consistent across the socioeconomic scenarios studied, while the other technology parameters show different performance under higher-population, lower-GDP scenarios. Solar technology was found to have a small impact, and then only at very low costs. These results indicate that the cost of nuclear is the single most important driver of abatement cost, and that trading efficiency for cost may make biomass and CCS technologies more competitive.
Asteroid Impact and Deflection Assessment (AIDA) mission - Properties of Impact Ejecta
NASA Astrophysics Data System (ADS)
Hamilton, Douglas P.; Fahnestock, Eugene G.; Schwartz, Stephen R.; Murdoch, Naomi; Asphaug, Erik; Cheng, Andrew F.; Housen, Kevin R.; Michel, Patrick; Miller, Paul L.; Stickle, Angela; Tancredi, Gonzalo; Vincent, Jean-Baptiste; Wuennemann, Kai; Yu, Yang; AIDA Impact Simulation Working Group
2016-10-01
The Asteroid Impact and Deflection Assessment (AIDA) mission is composed of NASA's Double Asteroid Redirection Test (DART) mission and ESA's Asteroid Impact Mission (AIM) rendezvous mission. The DART spacecraft is designed to impact the small satellite of near-Earth asteroid 65803 Didymos in October 2022, while the in-situ AIM spacecraft observes. AIDA's Modeling and Simulation of Impact Outcomes Working Group is tasked with investigating properties of the debris ejected from the impact. The orbital evolution of this ejecta has important implications for observations that the AIM spacecraft will take as well as for the safety of the spacecraft itself. Ejecta properties including particle sizes, bulk densities, and velocities all depend on the poorly known physical properties of Didymos' moon. The moon's density, internal strength, and especially its porosity have a strong effect on all ejecta properties. Making a range of assumptions, we perform a suite of numerical simulations to determine the fate of the ejected material; we will use simulation predictions to optimize AIM observations and safety. Ultimately, combining AIM's observations of the ejecta with detailed numerical simulations will help constrain key satellite parameters. We use distinct types of numerical tools to explore ejecta properties based on additional target parameters (different forms of friction, cohesion), e.g., the shock physics code iSALE, smoothed particle hydrodynamics codes, and the granular code PKDGRAV. Given the large discrepancy between the 6 km/s impact speed of DART and the moon's 6 cm/s escape speed, a great challenge will be to determine the properties of the low-speed ejecta. Very low-speed material relevant to the safety of the AIM spacecraft and its ability to conduct its observations may loft from the crater at late stages of the impact process, or from other locations far from the impact site due to seismic energy propagation. The manner in which seismic waves manifest in asteroid regolith is extremely speculative at present. Through experiment, simulation, and observational strategies, we are working to gain insight into this and related phenomena and will present the ongoing progress of our working group.
Knopman, Debra S.; Voss, Clifford I.
1987-01-01
The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
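For readers who want the equations behind the discussion, the standard one-dimensional advection-dispersion equation and the sensitivity coefficients referred to above can be written as follows (this is the conventional form; the boundary and initial conditions of the original analysis are not reproduced).

```latex
\[
\frac{\partial C}{\partial t} \;=\; D\,\frac{\partial^{2} C}{\partial x^{2}} \;-\; v\,\frac{\partial C}{\partial x},
\qquad
S_{v}(x,t) \;=\; \frac{\partial C(x,t;\,v,D)}{\partial v},
\qquad
S_{D}(x,t) \;=\; \frac{\partial C(x,t;\,v,D)}{\partial D}.
\]
```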
Study of microforging of metallic nanoflakes in relation to electronic applications
NASA Astrophysics Data System (ADS)
Kang, Wooseung
This dissertation reports the first systematic study of cold microforging: the conversion of micron-scale metal powders to thin flakes by a series of plastically deforming impacts in a ball mill at low temperature. The research focused on processing Fe and Cu flakes with submicron thicknesses (nanoflakes), which are expected to find significant applications in electronics. The principal objectives were to develop a detailed understanding of the underlying materials science of the process, and to characterize the material and processing parameters that maximize the rate at which nanoflakes with a specific aspect ratio (diameter/thickness) can be microforged. A model for microforging was developed using Hertzian impact theory to establish the compressive impact energy (Emf) imparted to a spherical powder particle in a ball-powder-ball impact, and the Coffin-Manson relation for cyclical fatigue to determine the number of plastically deforming impacts it could sustain before fracturing. The rate of microforging in the ball mill was obtained from the product of the impact frequency (f) and the statistical probability of impact (p). Both f and p depend on the number of balls and powder particles, the collision velocity (v), and the milling vial volume (V). The parameters Emf, p, v and V are specific to the mill and were used to develop scaling laws for transferring the process from small vibratory research mills to large commercial equipment. The empirical parameters required by these models were determined by microforging a few grams of powder in small research mills. The validity of the model was assessed by comparing the time required to microforge several hundred grams of a particular powder in a much larger mill with that determined by scaling the model equations to account for the change in mill parameters. The good agreement obtained provided strong support for the microforging model. SEM micrographs and sieving fractions were used to show that the minimum thicknesses and maximum aspect ratios of the Fe and Cu nanoflakes that could be produced before fracture are in the ~0.3-0.5 μm range, and agreed well with those calculated from volume-conserving sphere-flake transformations. X-ray diffraction measurements showed that the grain sizes of these powders were ~0.1x their thicknesses and were little changed by microforging. The magnetic hysteresis and permeabilities of the Fe nanoflakes were in good agreement with those computed from the nanoflake geometries. The results indicate that the model of microforging as a statistical random sequence of plastic deformations can be used to develop a commercial process and to support the development of the nanoflakes' application potential in electronics.
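The two ingredients of the rate model described above can be sketched in a few lines. The Coffin-Manson constants, impact frequency, impact probability, and per-impact strain below are hypothetical placeholders, not values from the dissertation.

```python
import math

def impacts_to_fracture(strain_amplitude, eps_f=0.3, c=-0.6):
    """Coffin-Manson estimate with assumed constants:
    strain_amplitude = eps_f * (2*N_f)**c  ->  N_f = 0.5 * (strain_amplitude/eps_f)**(1/c)."""
    return 0.5 * (strain_amplitude / eps_f) ** (1.0 / c)

def microforging_rate(impact_frequency, impact_probability):
    """Rate of plastically deforming impacts experienced by a particle: f * p."""
    return impact_frequency * impact_probability

# Hypothetical mill and material parameters
f, p, strain = 25.0, 1.0e-3, 0.05
n_f = impacts_to_fracture(strain)
print(n_f, n_f / microforging_rate(f, p))   # impacts sustained, and seconds of milling to reach them
```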
On extreme events for non-spatial and spatial branching Brownian motions
NASA Astrophysics Data System (ADS)
Avan, Jean; Grosjean, Nicolas; Huillet, Thierry
2015-04-01
We study the impact of having a non-spatial branching mechanism with infinite variance on some parameters (height, width and first hitting time) of an underlying Bienaymé-Galton-Watson branching process. Aiming to provide a comparative study of the spread of an epidemic whose dynamics are given by the modulus of a branching Brownian motion (BBM), we then consider spatial branching processes in dimension d, not necessarily integer. The underlying branching mechanism is either a binary branching model or one presenting infinite variance. In particular we evaluate the chance p(x) of being hit if the epidemic started at a distance x away. We compute the large-x tail probabilities of this event, both when the branching mechanism is regular and when it exhibits very large fluctuations.
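A toy simulation in the spirit of the non-spatial part of the study is sketched below: a Bienaymé-Galton-Watson process with a heavy-tailed offspring law (finite mean, effectively infinite variance), recording the height (generation of extinction) and width (largest generation size). The offspring exponent and truncation are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def offspring(n, alpha=2.5, kmax=10_000):
    """Draw n offspring counts with P(k) ~ k**(-alpha) on {1,...,kmax} plus an atom at 0;
    for 2 < alpha <= 3 the mean is finite but the (untruncated) variance diverges."""
    k = np.arange(kmax + 1)
    w = np.where(k == 0, 1.0, k.astype(float) ** (-alpha))
    return rng.choice(k, size=n, p=w / w.sum())

def run_bgw(max_gen=200):
    """Return (height, width) = (extinction generation, largest generation size)."""
    size, width = 1, 1
    for gen in range(1, max_gen + 1):
        size = int(offspring(size).sum())
        width = max(width, size)
        if size == 0:
            return gen, width
    return max_gen, width   # censored if still alive at max_gen

heights, widths = zip(*(run_bgw() for _ in range(500)))
print(np.mean(heights), max(widths))
```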
NASA Astrophysics Data System (ADS)
Matveev, O. P.; Shvaika, A. M.; Devereaux, T. P.; Freericks, J. K.
2016-01-01
Using the Kadanoff-Baym-Keldysh formalism, we employ nonequilibrium dynamical mean-field theory to exactly solve for the nonlinear response of an electron-mediated charge-density-wave-ordered material. We examine both the dc current and the order parameter of the conduction electrons as the ordered system is driven by the electric field. Although the formalism we develop applies to all models, for concreteness, we examine the charge-density-wave phase of the Falicov-Kimball model, which displays a number of anomalous behaviors including the appearance of subgap density of states as the temperature increases. These subgap states should have a significant impact on transport properties, particularly the nonlinear response of the system to a large dc electric field.
The Scientific Return of VLT Programmes
NASA Astrophysics Data System (ADS)
Sterzik, M.; Dumas, C.; Grothkopf, U.; Kaufer, A.; Leibundgut, B.; Marteau, S.; Meakins, S.; Patat, F.; Primas, F.; Rejkuba, M.; Romaniello, M.; Stoehr, F.; Tacconi-Garman, L.; Vera, I.
2015-12-01
An in-depth analysis of the publications from 8414 distinct scheduled VLT observing programmes between April 1999 and March 2015 (Periods 63 to 94) is presented. The productivity by mode (Visitor or Service Mode) and type (Normal and Large, Guaranteed Time, Target of Opportunity, Director's Discretionary Time) is examined through their publication records. We investigate how Service Mode rank classes impact the scientific return. Several results derive from this study: Large Programmes result in the highest productivity, whereas only about half of all scheduled observing programmes produce a refereed publication. Programmes that result in a publication yield on average two refereed papers. B rank class Service Mode Programmes appear to be slightly less productive. Follow-up studies will investigate in more detail the parameters that influence the productivity of the Observatory.
Droplet impact on deep liquid pools: Rayleigh jet to formation of secondary droplets
NASA Astrophysics Data System (ADS)
Castillo-Orozco, Eduardo; Davanlou, Ashkan; Choudhury, Pretam K.; Kumar, Ranganathan
2015-11-01
The impact of droplets on a deep pool has applications in cleaning up oil spills, spray cooling, painting, inkjet printing, and forensic analysis, all of which rely on changes in properties such as viscosity, interfacial tension, and density. Despite extensive research on different aspects of droplet impact, it is not clear how liquid properties affect the instabilities leading to Rayleigh jet breakup and the number of daughter drops formed after its pinch-off. In this article, through systematic experiments, we investigate droplet impact phenomena by varying the viscosity and surface tension of the liquids as well as the impact speeds. Further, using numerical simulations, we show that the Rayleigh-Plateau instability is influenced by these parameters, and that the capillary time scale is the appropriate scale with which to normalize the breakup time. Based on the Ohnesorge number (Oh) and impact Weber number (We), a regime map for no breakup, Rayleigh jet breakup, and crown splash is suggested. Interestingly, crown splash is observed to occur at all Ohnesorge numbers; however, at high Oh, a large portion of the kinetic energy is dissipated, and thus the Rayleigh jet is suppressed regardless of the high impact velocity. The normalized time required for the Rayleigh jet to reach its peak varies linearly with the critical height of the jet.
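The dimensionless groups used for the regime map, and the capillary time scale used to normalize the breakup time, are easy to compute; a sketch follows. The fluid values are hypothetical, the drop diameter is used as the length scale by one common convention, and the regime boundaries reported in the article are not reproduced.

```python
import math

def weber(rho, v, d, sigma):
    """Impact Weber number We = rho * v**2 * d / sigma."""
    return rho * v * v * d / sigma

def ohnesorge(mu, rho, d, sigma):
    """Ohnesorge number Oh = mu / sqrt(rho * sigma * d)."""
    return mu / math.sqrt(rho * sigma * d)

def capillary_time(rho, d, sigma):
    """Capillary time scale t_c = sqrt(rho * d**3 / sigma) (drop diameter as length scale)."""
    return math.sqrt(rho * d ** 3 / sigma)

# Hypothetical 2.5 mm water-like droplet impacting at 2 m/s
rho, mu, sigma, d, v = 998.0, 1.0e-3, 0.072, 2.5e-3, 2.0
print(weber(rho, v, d, sigma), ohnesorge(mu, rho, d, sigma), capillary_time(rho, d, sigma))
```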
Universality, maximum radiation, and absorption in high-energy collisions of black holes with spin.
Sperhake, Ulrich; Berti, Emanuele; Cardoso, Vitor; Pretorius, Frans
2013-07-26
We explore the impact of black hole spins on the dynamics of high-energy black hole collisions. We report results from numerical simulations with γ factors up to 2.49 and dimensionless spin parameter χ=+0.85, +0.6, 0, -0.6, -0.85. We find that the scattering threshold becomes independent of spin at large center-of-mass energies, confirming previous conjectures that structure does not matter in ultrarelativistic collisions. It has further been argued that in this limit all of the kinetic energy of the system may be radiated by fine tuning the impact parameter to threshold. On the contrary, we find that only about 60% of the kinetic energy is radiated for γ=2.49. By monitoring apparent horizons before and after scattering events we show that the "missing energy" is absorbed by the individual black holes in the encounter, and moreover the individual black-hole spins change significantly. We support this conclusion with perturbative calculations. An extrapolation of our results to the limit γ→∞ suggests that about half of the center-of-mass energy of the system can be emitted in gravitational radiation, while the rest must be converted into rest-mass and spin energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, A.L.; Spigarelli, J.A.; Thommes, M.M.
1982-01-01
Two conventional fishery stock assessment models, the surplus-production model and the dynamic-pool model, were applied to assess the impacts of water withdrawals by electricity-generating plants, industries, and municipalities on the standing stocks and yields of alewife Alosa pseudoharengus, rainbow smelt Osmerus mordax, and yellow perch Perca flavescens in Lake Michigan. Impingement and entrainment estimates were based on data collected at 15 power plants. The surplus-production model was fitted to the three populations with catch and effort data from the commercial fisheries. Dynamic-pool model parameters were estimated from published data. The numbers entrained and impinged are large, but the proportions of the standing stocks impinged and the proportions of the eggs and larvae entrained are small. The reductions in biomass of the stocks and in maximum sustainable yields are larger than the proportions impinged. The reductions in biomass, based on 1975 data and an assumed full water withdrawal, are 2.86% for alewife, 0.76% for rainbow smelt, and 0.28% for yellow perch. Fishery models are an economical means of impact assessment in situations where catch and effort data are available for estimation of model parameters.
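For readers unfamiliar with the first of the two models, a minimal sketch of Schaefer-type surplus-production dynamics is given below, with a hypothetical extra mortality term standing in for impingement/entrainment losses. All parameter values are illustrative; they are not the Lake Michigan estimates.

```python
import numpy as np

def schaefer(b0, r, K, q, effort, extra_mortality=0.0, years=50):
    """Surplus-production (Schaefer) dynamics:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - q*E*B[t] - m*B[t],
    where m is a hypothetical extra mortality representing withdrawal losses."""
    B = np.empty(years + 1)
    B[0] = b0
    for t in range(years):
        B[t + 1] = max(B[t] + r * B[t] * (1.0 - B[t] / K)
                       - q * effort * B[t] - extra_mortality * B[t], 0.0)
    return B

r, K, q, E = 0.4, 1.0e5, 1.0e-4, 1.5e3          # illustrative values only
baseline = schaefer(5.0e4, r, K, q, E)
impacted = schaefer(5.0e4, r, K, q, E, extra_mortality=0.005)
print(1.0 - impacted[-1] / baseline[-1])        # fractional reduction in equilibrium biomass
```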
Evaluation of the Hanford 200 West Groundwater Treatment System: Fluidized Bed Bioreactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Looney, Brian B.; Jackson, Dennis G.; Dickson, John O.
A fluidized bed reactor (FBR) in the 200W water treatment facility at Hanford is removing nitrate from groundwater as part of the overall pump-treat-reinject process. Control of the FBR bed solids has proven challenging, impacting equipment, increasing operations and maintenance (O&M), and limiting the throughput of the facility. In response to the operational challenges, the Department of Energy Richland Office (DOE-RL) commissioned a technical assistance team to facilitate a system engineering evaluation and provide focused support recommendations to the Hanford Team. The DOE Environmental Management (EM) technical assistance process is structured to identify and triage technologies and strategies that address the target problem(s). The process encourages brainstorming and dialog and allows rapid identification and prioritization of possible options. Recognizing that continuous operation of a large-scale FBR is complex, requiring careful attention to system monitoring data and changing conditions, the technical assistance process focused on explicit identification of the available control parameters (“knobs”), how these parameters interact and impact the FBR system, and how these can be adjusted under different scenarios to achieve operational goals. The technical assistance triage process was performed in collaboration with the Hanford team.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
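A compact illustration of the regression-based GSA idea described above is sketched below on a toy two-stage matrix model: uncertain inputs are sampled simultaneously together with an impact indicator, and standardized regression coefficients rank their influence on the modeled growth rate. The model structure, parameter ranges, and impact effect are hypothetical and are not the Snowy Plover analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                                    # number of replicate models

# Sample uncertain inputs simultaneously (uniform ranges are assumptions)
juv_survival   = rng.uniform(0.2, 0.5, n)
adult_survival = rng.uniform(0.6, 0.9, n)
fecundity      = rng.uniform(0.5, 1.5, n)
impact         = rng.integers(0, 2, n)     # 0 = baseline, 1 = simulated impact scenario

growth = np.empty(n)
for i in range(n):
    f = fecundity[i] * (0.8 if impact[i] else 1.0)    # assumed 20% fecundity loss under impact
    A = np.array([[0.0,             f],
                  [juv_survival[i], adult_survival[i]]])
    growth[i] = np.max(np.abs(np.linalg.eigvals(A)))  # dominant eigenvalue = growth rate

# Standardized regression coefficients as importance measures
X = np.column_stack([juv_survival, adult_survival, fecundity, impact])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (growth - growth.mean()) / growth.std()
coefs, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)
print(dict(zip(["juv_survival", "adult_survival", "fecundity", "impact"], coefs[1:].round(2))))
```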
A review of hemorheology: Measuring techniques and recent advances
NASA Astrophysics Data System (ADS)
Sousa, Patrícia C.; Pinho, Fernando T.; Alves, Manuel A.; Oliveira, Mónica S. N.
2016-02-01
Significant progress has been made over the years on the topic of hemorheology, not only in terms of the development of more accurate and sophisticated techniques, but also in terms of understanding the phenomena associated with blood components, their interactions and impact upon blood properties. The rheological properties of blood are strongly dependent on the interactions and mechanical properties of red blood cells, and a variation of these properties can bring further insight into the human health state and can be an important parameter in clinical diagnosis. In this article, we provide both a reference for hemorheological research and a resource regarding the fundamental concepts in hemorheology. This review is aimed at those starting in the field of hemodynamics, where blood rheology plays a significant role, but also at those in search of the most up-to-date findings (both qualitative and quantitative) in hemorheological measurements and novel techniques used in this context, including technical advances under more extreme conditions such as in large amplitude oscillatory shear flow or under extensional flow, which impose large deformations comparable to those found in the microcirculatory system and in diseased vessels. Given the impressive rate of increase in the available knowledge on blood flow, this review is also intended to identify areas where current knowledge is still incomplete, and which have the potential for new, exciting and useful research. We also discuss the most important parameters that can lead to an alteration of blood rheology, and which as a consequence can have a significant impact on the normal physiological behavior of blood.
[HYGIENIC ASSESSMENT OF THE NOISE FACTOR IN A LARGE CITY].
Chubirko, M L; Stepkin, Yu I; Seredenko, O V
2015-01-01
The article is devoted to the problem of the negative impact of traffic noise on the health and living conditions of the population of a large city. Every day more and more vehicles appear on the streets, and by now almost the entire transportation network has reached its traffic capacity. The increase in traffic noise certainly has an impact on the human body. The most common and intense noise is caused by urban automobile and electric transport. This is explained by the heavy traffic (2-3 thousand vehicles/h) on almost all main roads in the historic parts of the city. In addition, sources of external noise in the city can include a railway running through a residential zone, access roads, industrial enterprises located in close proximity to residential areas and on the borders of residential zones, and military and civil aircraft. For the evaluation of the different noise sources, sound levels were measured with sound level meters. The most common parameter for assessing the noise generated by motor vehicles in residential areas, and the one used to characterize the noise of traffic flows, is the equivalent sound level L(A)eq in dBA. This parameter is used in the majority of normative-technical documentation as the hygienic noise standard. To assess noise exposure, 122 control points were selected at intersections of roads with different traffic volumes, where instrumental measurements of the equivalent sound level were made and compared with permissible levels.
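The parameter named above, the equivalent continuous sound level, is the energy average of the time-varying A-weighted level; a minimal sketch of its computation from sampled levels is given below (the sample values are hypothetical, not measurements from the 122 control points).

```python
import numpy as np

def equivalent_sound_level(levels_dba):
    """Equivalent continuous sound level for equally spaced samples L_i in dB(A):
    L_Aeq = 10 * log10( mean( 10**(L_i / 10) ) )."""
    levels = np.asarray(levels_dba, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

# Hypothetical short record of A-weighted levels at one control point
samples = [62, 65, 71, 78, 74, 69, 66, 80, 75, 68]
print(round(equivalent_sound_level(samples), 1))   # dominated by the loudest intervals
```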
Exploring ammonium tolerance in a large panel of Arabidopsis thaliana natural accessions
Sarasketa, Asier; González-Moro, María Begoña; González-Murua, Carmen; Marino, Daniel
2014-01-01
Plants are dependent on exogenous nitrogen (N) supply. Ammonium (NH4+), together with nitrate (NO3−), is one of the main nitrogenous compounds available in the soil. Paradoxically, although NH4+ assimilation requires less energy than that of NO3−, many plants display toxicity symptoms when grown with NH4+ as the sole N source. However, in addition to species-specific ammonium toxicity, intraspecific variability has also been shown. Thus, the aim of this work was to study the intraspecific ammonium tolerance in a large panel of Arabidopsis thaliana natural accessions. Plants were grown with either 1 mM NO3− or NH4+ as the N source, and several parameters related to ammonium tolerance and assimilation were determined. Overall, high variability was observed in A. thaliana shoot growth under both forms of N nutrition. From the parameters determined, tissue ammonium content was the one with the highest impact on shoot biomass, and interestingly this was also the case when N was supplied as NO3−. Enzymes of nitrogen assimilation did not have an impact on A. thaliana biomass variation, but the N source affected their activity. Glutamate dehydrogenase (GDH) aminating activity was, in general, higher in NH4+-fed plants. In contrast, GDH deaminating activity was higher in NO3−-fed plants, suggesting a differential role for this enzyme as a function of the N form supplied. Overall, NH4+ accumulation seems to be an important player in Arabidopsis natural variability in ammonium tolerance rather than the cell NH4+ assimilation capacity. PMID:25205573
On a fast calculation of structure factors at a subatomic resolution.
Afonine, P V; Urzhumtsev, A
2004-01-01
In the last decade, the progress of protein crystallography has allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes computationally very expensive when applied to large molecules. These requirements of high accuracy and computational efficiency call for a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
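The trade-off discussed above can be seen in a one-dimensional toy calculation: structure factors obtained by direct summation versus an FFT of a Gaussian-smeared electron density sampled on a grid, where the grid step and the assumed atom width control the accuracy of the FFT route. The atoms, scattering factors, and widths below are hypothetical, and real crystallographic codes handle symmetry, three dimensions, and resolution-dependent form factors that this sketch ignores.

```python
import numpy as np

x_atoms = np.array([0.13, 0.37, 0.62])   # fractional coordinates (hypothetical)
f_atoms = np.array([6.0, 8.0, 7.0])      # constant scattering factors (hypothetical)
n_grid = 512                             # grid points per unit cell edge
sigma = 0.008                            # Gaussian "atom width" in fractional units
h = np.arange(0, 30)                     # Miller indices

# (a) direct summation: F(h) = sum_j f_j * exp(2*pi*i*h*x_j)
F_direct = (f_atoms * np.exp(2j * np.pi * np.outer(h, x_atoms))).sum(axis=1)

# (b) FFT route: sample a Gaussian-smeared density, transform, then undo the smearing
x_grid = np.arange(n_grid) / n_grid
rho = np.zeros(n_grid)
for xa, fa in zip(x_atoms, f_atoms):
    d = (x_grid - xa + 0.5) % 1.0 - 0.5                  # periodic (minimum-image) distance
    rho += fa * np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

F_grid = np.fft.ifft(rho)                                # (1/N) * sum_n rho_n * exp(+2*pi*i*h*n/N)
deblur = np.exp(2.0 * (np.pi * h * sigma) ** 2)          # inverse of the Gaussian form factor
F_fft = F_grid[h] * deblur

print(np.max(np.abs(F_fft - F_direct)))                  # small when grid step and sigma are chosen well
```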
A Comparison of Quasi-Static Indentation Testing to Low Velocity Impact Testing
NASA Technical Reports Server (NTRS)
Nettles, Alan T.; Douglas, Michael J.
2001-01-01
A static test method for modeling low-velocity foreign object impact events to composites would prove to be very beneficial to researchers since much more data can be obtained from a static test than from an impact test. In order to examine if this is feasible, a series of static indentation and low-velocity impact tests were carried out and compared. Square specimens of many sizes and thicknesses were utilized to cover the array of types of low velocity impact events. Laminates with a π/4 stacking sequence were employed since this is by far the most common type of engineering laminate. Three distinct flexural rigidities under two different boundary conditions were tested in order to obtain damage due to large deflections, contact stresses, and both, to examine whether the static indentation-impact comparisons are valid under the spectrum of damage modes that can be experienced. Comparisons between static indentation and low velocity impact tests were based on the maximum applied transverse load. The dependent parameters examined included dent depth, back surface crack length, delamination area and, to a limited extent, load-deflection behavior. Results showed that no distinct differences could be seen between the static indentation tests and the low velocity impact tests, indicating that static indentation can be used to represent a low velocity impact event.
A Comparison of Quasi-Static Indentation to Low-Velocity Impact
NASA Technical Reports Server (NTRS)
Nettles, A. T.; Douglas, M. J.
2000-01-01
A static test method for modeling low-velocity foreign object impact events to composites would prove to be very beneficial to researchers since much more data can be obtained from a static test than from an impact test. In order to examine if this is feasible, a series of static indentation and low-velocity impact tests were carried out and compared. Square specimens of many sizes and thicknesses were utilized to cover the array of types of low velocity impact events. Laminates with a pi/4 stacking sequence were employed since this is by far the most common type of engineering laminate. Three distinct flexural rigidities under two different boundary conditions were tested in order to obtain damage ranging from that due to large deflection to contact stresses and levels in-between to examine if the static indentation-impact comparisons are valid under the spectrum of damage modes that can be experienced. Comparisons between static indentation and low-velocity impact tests were based on the maximum applied transverse load. The dependent parameters examined included dent depth, back surface crack length, delamination area, and to a limited extent, load-deflection behavior. Results showed that no distinct differences could be seen between the static indentation tests and the low-velocity impact tests, indicating that static indentation can be used to represent a low-velocity impact event.
Medical smart textiles based on fiber optic technology: an overview.
Massaroni, Carlo; Saccomandi, Paola; Schena, Emiliano
2015-04-13
The growing interest in the development of smart textiles for medical applications is driven by the aim of increasing the mobility of patients who need continuous monitoring of physiological parameters. At the same time, the use of fiber optic sensors (FOSs) is gaining wide acceptance as an alternative to traditional electrical and mechanical sensors for the monitoring of thermal and mechanical parameters. The potential impact of FOSs is related to their good metrological properties, their small size and flexibility, as well as their immunity to electromagnetic fields. Their main advantage is the possibility of using textiles based on fiber optics in a magnetic resonance imaging environment, where standard electronic sensors cannot be employed. This last feature makes FOSs suitable for monitoring biological parameters (e.g., respiratory and heartbeat monitoring) during magnetic resonance procedures. Research interest in combining FOSs and textiles into a single structure to develop wearable sensors is rapidly growing. In this review we provide an overview of the state of the art of textiles that use FOSs for monitoring mechanical parameters of physiological interest. In particular, we briefly describe the working principle of the FOSs employed in this field and their relevant advantages and disadvantages. Also reviewed are their applications for the monitoring of mechanical parameters of physiological interest.
NASA Astrophysics Data System (ADS)
Kim, Youngseok; Philip, Timothy M.; Park, Moon Jip; Gilbert, Matthew J.; University of Illinois at Urbana; Champaign Team
As a promising candidate system to realize topological superconductivity (SC), 3D time-reversal-invariant topological insulators (TIs) proximity-coupled to s-wave superconductors have been intensively studied. Recent experiments on proximity-coupled TIs have shown that superconductivity may be induced in ultrathin TI. One proposal to observe topological SC in a proximity-coupled ultrathin TI system is to add magnetic dopants to the TI. However, detailed studies of the impact of the experimental parameters on the possible topological phase are sparse. In this work, we investigate ultrathin, magnetically doped, proximity-coupled TI in order to determine the experimentally relevant parameters needed to observe topological SC. We find that, due to the spin-momentum-locked nature of the surface states in TI, the induced s-wave order parameter within the surface states persists even at large magnitudes of the Zeeman energy, allowing us to explore the system in parameter space. We elucidate the phase diagram as a function of the hybridization gap, the Zeeman energy, and the chemical potential of the TI system. Our findings provide a useful guide in choosing relevant parameters to facilitate the observation of topological SC in thin-film TI-superconductor hybrid systems. This work was supported by the National Science Foundation (NSF) under Grant CAREER ECCS-1351871.
What can the CMB tell about the microphysics of cosmic reheating?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drewes, Marco, E-mail: marcodrewes@googlemail.com
In inflationary cosmology, cosmic reheating after inflation sets the initial conditions for the hot big bang. We investigate how CMB data can be used to study the effective potential and couplings of the inflaton during reheating to constrain the underlying microphysics. If there is a phase of preheating that is driven by a parametric resonance or other instability, then the thermal history and expansion history during the reheating era depend on a large number of microphysical parameters in a complicated way. In this case the connection between CMB observables and microphysical parameters can only be established with intense numerical studies. Such studies can help to improve CMB constraints on the effective inflaton potential in specific models, but parameter degeneracies usually make it impossible to extract meaningful best-fit values for individual microphysical parameters. If, on the other hand, reheating is driven by perturbative processes, then it can be possible to constrain the inflaton couplings and the reheating temperature from CMB data. This provides an indirect probe of fundamental microphysical parameters that most likely can never be measured directly in the laboratory, but have an immense impact on the evolution of the cosmos by setting the stage for the hot big bang.
Medical Smart Textiles Based on Fiber Optic Technology: An Overview
Massaroni, Carlo; Saccomandi, Paola; Schena, Emiliano
2015-01-01
The growing interest in the development of smart textiles for medical applications is driven by the aim of increasing the mobility of patients who need continuous monitoring of physiological parameters. At the same time, the use of fiber optic sensors (FOSs) is gaining wide acceptance as an alternative to traditional electrical and mechanical sensors for the monitoring of thermal and mechanical parameters. The potential impact of FOSs is related to their good metrological properties, their small size and flexibility, as well as their immunity to electromagnetic fields. Their main advantage is the possibility of using textiles based on fiber optics in a magnetic resonance imaging environment, where standard electronic sensors cannot be employed. This last feature makes FOSs suitable for monitoring biological parameters (e.g., respiratory and heartbeat monitoring) during magnetic resonance procedures. Research interest in combining FOSs and textiles into a single structure to develop wearable sensors is rapidly growing. In this review we provide an overview of the state of the art of textiles that use FOSs for monitoring mechanical parameters of physiological interest. In particular, we briefly describe the working principle of the FOSs employed in this field and their relevant advantages and disadvantages. Also reviewed are their applications for the monitoring of mechanical parameters of physiological interest. PMID:25871010
NASA Astrophysics Data System (ADS)
Laverick, M.; Lobel, A.; Merle, T.; Royer, P.; Martayan, C.; David, M.; Hensberge, H.; Thienpont, E.
2018-04-01
Context. Fundamental atomic parameters, such as oscillator strengths, play a key role in modelling and understanding the chemical composition of stars in the Universe. Despite the significant work underway to produce these parameters for many astrophysically important ions, uncertainties in these parameters remain large and can propagate throughout the entire field of astronomy. Aims: The Belgian repository of fundamental atomic data and stellar spectra (BRASS) aims to provide the largest systematic and homogeneous quality assessment of atomic data to date in terms of wavelength, atomic and stellar parameter coverage. To prepare for it, we first compiled multiple literature occurrences of many individual atomic transitions, from several atomic databases of astrophysical interest, and assessed their agreement. In a second step synthetic spectra will be compared against extremely high-quality observed spectra, for a large number of BAFGK spectral type stars, in order to critically evaluate the atomic data of a large number of important stellar lines. Methods: Several atomic repositories were searched and their data retrieved and formatted in a consistent manner. Data entries from all repositories were cross-matched against our initial BRASS atomic line list to find multiple occurrences of the same transition. Where possible we used a new non-parametric cross-match depending only on electronic configurations and total angular momentum values. We also checked for duplicate entries of the same physical transition, within each retrieved repository, using the non-parametric cross-match. Results: We report on the number of cross-matched transitions for each repository and compare their fundamental atomic parameters. We find differences in log(gf) values of up to 2 dex or more. We also find and report that 2% of our line list and Vienna atomic line database retrievals are composed of duplicate transitions. Finally we provide a number of examples of atomic spectral lines with different retrieved literature log(gf) values, and discuss the impact of these uncertain log(gf) values on quantitative spectroscopy. All cross-matched atomic data and duplicate transition pairs are available to download at http://brass.sdf.org
Low velocity impact of 6082-T6 aluminum plates
NASA Astrophysics Data System (ADS)
Mocian, Oana Alexandra; Constantinescu, Dan Mihai; Sandu, Marin; Sorohan, Ştefan
2018-02-01
The low velocity domain covers vehicle impacts, ship collisions and even accidental tool drops. Even though more and more research is needed in these fields, most of the papers concerning impact problems focus on impact at medium and high velocities. Understanding the behavior of structures subjected to low velocity impact is of major importance when referring to impact resistance and damage tolerance. The paper presents an experimental and numerical investigation of the low velocity behavior of 6082-T6 aluminum plates. Impact tests were performed using an Instron Ceast 9340 drop-weight testing machine. In the experimental procedure, square plates were mounted on a circular support, fixed with a pneumatic clamping system and impacted with a hemispherical steel projectile. Specimens were impacted at constant weight and different impact velocities. The effect of different impact energies was investigated. The impact event was then simulated using the nonlinear finite element code LS-DYNA in order to determine the effect of strain rate on the mechanical behavior of the aluminum plates. Moreover, in order to capture the exact behavior of the material, special attention has been given to the selection of the correct material model and its parameters, which, to a large extent, depend on the observed behavior of the aluminum plate during the test and the actual response of the plate in the simulation. The numerical predictions are compared with the experimental observations and the applicability of the numerical model for further research is analyzed.
Oblique hypervelocity impact response of dual-sheet structures
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Taylor, Roy A.
1989-01-01
The results of a continuing investigation of the phenomena associated with the oblique hypervelocity impact of spherical projectiles onto multi-sheet aluminum structures are given. A series of equations that quantitatively describes these phenomena is obtained through a regression of experimental data. These equations characterize observed ricochet and penetration damage phenomena in a multi-sheet structure as functions of geometric parameters of the structure and the diameter, obliquity, and velocity of the impacting projectile. Crater damage observed on the ricochet witness plates is used to determine the sizes and speeds of the ricochet debris particles that caused the damage. It is observed that the most damaging ricochet debris particle can have a diameter as large as 40 percent of the original particle diameter and can travel at speeds between 24 percent and 36 percent of the original projectile impact velocity. The equations necessary for the design of shielding panels that will protect external systems from such ricochet debris damage are also developed. The dimensions of these shielding panels are shown to be strongly dependent on their inclination and on their circumferential distribution around the spacecraft.
NASA Astrophysics Data System (ADS)
Sorantin, Max E.; Dorda, Antonius; Held, Karsten; Arrigoni, Enrico
2018-03-01
We study a simple model of photovoltaic energy harvesting across a Mott-insulating gap consisting of a correlated layer connected to two metallic leads held at different chemical potentials. We address, in particular, the issue of impact ionization, whereby a particle photoexcited to the high-energy part of the upper Hubbard band uses its extra energy to produce a second particle-hole excitation. We find a drastic increase of the photocurrent upon entering the frequency regime where impact ionization is possible. At large values of the Mott gap, where impact ionization is energetically not allowed, we observe a suppression of the current and a piling up of charge in the high-energy part of the upper Hubbard band. Our study is based on a Floquet dynamical mean-field theory treatment of the steady state with the so-called auxiliary master equation approach as impurity solver. We verify that an additional approximation, taking the self-energy diagonal in the Floquet indices, is appropriate for the parameter range we are considering.
Impacts of feral horses on a desert environment
2009-01-01
Background: Free-ranging horses (Equus caballus) in North America are considered to be feral animals since they are descendants of non-native domestic horses introduced to the continent. We conducted a study in a southern California desert to understand how feral horse movements and horse feces impacted this arid ecosystem. We evaluated five parameters susceptible to horse trampling: soil strength, vegetation cover, percent of nonnative vegetation, plant species diversity, and macroinvertebrate abundance. We also tested whether or not plant cover and species diversity were affected by the presence of horse feces. Results: Horse trailing resulted in reduced vegetation cover, compacted soils, and in cases of intermediate intensity disturbance, increased plant species diversity. The presence of horse feces did not affect plant cover, but it did increase native plant diversity. Conclusion: Adverse impacts, such as soil compaction and increased erosion potential, were limited to established horse trails. In contrast, increased native plant diversity near trails and feces could be viewed as positive outcomes. Extensive trailing can result in a surprisingly large impact area: we estimate that < 30 horses used > 25 km2 of trails in our study area. PMID:19903355
Impacts of feral horses on a desert environment.
Ostermann-Kelm, Stacey D; Atwill, Edward A; Rubin, Esther S; Hendrickson, Larry E; Boyce, Walter M
2009-11-10
Free-ranging horses (Equus caballus) in North America are considered to be feral animals since they are descendants of non-native domestic horses introduced to the continent. We conducted a study in a southern California desert to understand how feral horse movements and horse feces impacted this arid ecosystem. We evaluated five parameters susceptible to horse trampling: soil strength, vegetation cover, percent of nonnative vegetation, plant species diversity, and macroinvertebrate abundance. We also tested whether or not plant cover and species diversity were affected by the presence of horse feces. Horse trailing resulted in reduced vegetation cover, compacted soils, and in cases of intermediate intensity disturbance, increased plant species diversity. The presence of horse feces did not affect plant cover, but it did increase native plant diversity. Adverse impacts, such as soil compaction and increased erosion potential, were limited to established horse trails. In contrast, increased native plant diversity near trails and feces could be viewed as positive outcomes. Extensive trailing can result in a surprisingly large impact area: we estimate that < 30 horses used > 25 km2 of trails in our study area.
Mirocha, Jeffrey D.; Rajewski, Daniel A.; Marjanovic, Nikola; ...
2015-08-27
In this study, wind turbine impacts on the atmospheric flow are investigated using data from the Crop Wind Energy Experiment (CWEX-11) and large-eddy simulations (LESs) utilizing a generalized actuator disk (GAD) wind turbine model. CWEX-11 employed velocity-azimuth display (VAD) data from two Doppler lidar systems to sample vertical profiles of flow parameters across the rotor depth both upstream and in the wake of an operating 1.5 MW wind turbine. Lidar and surface observations obtained during four days of July 2011 are analyzed to characterize the turbine impacts on wind speed and flow variability, and to examine the sensitivity of these changes to atmospheric stability. Significant velocity deficits (VD) are observed at the downstream location during both convective and stable portions of four diurnal cycles, with large, sustained deficits occurring during stable conditions. Variances of the streamwise velocity component, σu, likewise show large increases downstream during both stable and unstable conditions, with stable conditions supporting sustained small increases of σu, while convective conditions featured both larger magnitudes and increased variability, due to the large coherent structures in the background flow. Two representative case studies, one stable and one convective, are simulated using LES with a GAD model at 6 m resolution to evaluate the compatibility of the simulation framework with validation using vertically profiling lidar data in the near wake region. Virtual lidars were employed to sample the simulated flow field in a manner consistent with the VAD technique. Simulations reasonably reproduced aggregated wake VD characteristics, albeit with smaller magnitudes than observed, while σu values in the wake are more significantly underestimated. The results illuminate the limitations of using a GAD in combination with coarse model resolution in the simulation of near wake physics, and validation thereof using VAD data.
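The VAD retrieval mentioned above amounts to fitting a sinusoid in azimuth to the lidar radial velocities at each range gate; a minimal sketch is given below. The scan geometry, wind components, and noise level are invented for illustration and do not correspond to the CWEX-11 lidars.

```python
import numpy as np

rng = np.random.default_rng(2)

elevation = np.deg2rad(60.0)                       # hypothetical cone elevation angle
u_true, v_true, w_true = 6.0, -2.0, 0.1            # hypothetical wind at one range gate
azimuth = np.deg2rad(np.arange(0.0, 360.0, 10.0))

# Radial velocity model: Vr = w*sin(el) + cos(el)*(u*sin(az) + v*cos(az)), plus noise
vr = (w_true * np.sin(elevation)
      + np.cos(elevation) * (u_true * np.sin(azimuth) + v_true * np.cos(azimuth))
      + rng.normal(0.0, 0.2, azimuth.size))

# Least-squares fit of a0 + a1*sin(az) + a2*cos(az); then u = a1/cos(el), v = a2/cos(el)
G = np.column_stack([np.ones_like(azimuth), np.sin(azimuth), np.cos(azimuth)])
a0, a1, a2 = np.linalg.lstsq(G, vr, rcond=None)[0]
print(a1 / np.cos(elevation), a2 / np.cos(elevation))   # retrieved u, v
```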
Sacco, Rosaria; Bussman, Rita; Oesch, Peter; Kesselring, Jürg; Beer, Serafin
2011-05-01
Gait impairment and fatigue are common and disabling problems in multiple sclerosis (MS). Characterisation of abnormal gait in MS patients has been done mainly using observational studies and simple walking tests providing only limited quantitative and no qualitative data, or using intricate and time-consuming assessment procedures. In addition, the correlation of gait impairments with fatigue is largely unknown. The aim of this study was to characterise spatio-temporal gait parameters by a simple and easy-to-use gait analysis system (GAITRite®) in MS patients compared with healthy controls, and to analyse changes and correlation with fatigue during inpatient rehabilitation. Twenty-four MS patients (EDSS <6.5) admitted for inpatient rehabilitation and 19 healthy subjects were evaluated using the GAITRite® Functional Ambulation System. Between-group differences and changes of gait parameters during inpatient rehabilitation were analysed, and correlation with fatigue, using the Wurzburg Fatigue Inventory for Multiple Sclerosis (WEIMuS), was determined. Compared to healthy controls MS patients showed significant impairments in different spatio-temporal gait parameters, which showed a significant improvement during inpatient rehabilitation. Different gait parameters were correlated with fatigue physical score, and change of gait parameters was correlated with improvement of fatigue. Spatio-temporal gait analysis is helpful to assess specific walking impairments in MS patients and subtle changes during rehabilitation. Correlation with fatigue may indicate a possible negative impact of fatigue on rehabilitation outcome.
Gonzalez, L. M.; Fogle, C. A.; Baker, W. T.; Hughes, F. E.; Law, J. M.; Motsinger-Reif, A. A.; Blikslager, A. T.
2014-01-01
Summary. Reasons for performing the study: There is an important need for objective parameters that accurately predict the outcome of horses with large colon volvulus. Objectives: To evaluate the predictive value of a series of histomorphometric parameters on short-term outcome, as well as the impact of colonic resection on horses with large colon volvulus. Study design: Retrospective cohort study. Methods: Adult horses admitted to the Equine and Farm Animal Veterinary Center at North Carolina State University, Peterson & Smith and Chino Valley Equine Hospitals between 2006-2013 undergoing an exploratory celiotomy, diagnosed with large colon volvulus of ≥360 degrees, where a pelvic flexure biopsy was obtained, and that recovered from general anaesthesia, were selected for inclusion in the study. Logistic regression was used to determine associations between signalment, histomorphometric measurements of interstitial:crypt ratio, degree of haemorrhage, percentage loss of luminal and glandular epithelium, as well as colonic resection with short-term outcome (discharge from the hospital). Results: Pelvic flexure biopsies from 47 horses with large colon volvulus were evaluated. Factors that were significantly associated with short-term outcome on univariate logistic regression were Thoroughbred breed (P = 0.04), interstitial:crypt ratio >1 (P = 0.02) and haemorrhage score ≥3 (P = 0.005). Resection (P = 0.92) was not found to be significantly associated with short-term outcome. No combined factors increased the likelihood of death in forward stepwise logistic regression modelling. A digitally quantified haemorrhage area measurement strengthened the association of haemorrhage with non-survival in cases of large colon volvulus. Conclusions: Histomorphometric measurements of interstitial:crypt ratio and degree of haemorrhage predict short-term outcome in cases of large colon volvulus. Resection was not associated with short-term outcome in horses selected for this study. Accurate quantification of mucosal haemorrhage at the time of surgery may improve veterinary surgeons' prognostic capabilities in horses with large colon volvulus. PMID:24735170
Gonzalez, L M; Fogle, C A; Baker, W T; Hughes, F E; Law, J M; Motsinger-Reif, A A; Blikslager, A T
2015-05-01
There is an important need for objective parameters that accurately predict the outcome of horses with large colon volvulus. To evaluate the predictive value of a series of histomorphometric parameters on short-term outcome, as well as the impact of colonic resection on horses with large colon volvulus. Retrospective cohort study. Adult horses admitted to the Equine and Farm Animal Veterinary Center at North Carolina State University, Peterson and Smith and Chino Valley Equine Hospitals between 2006 and 2013 that underwent an exploratory coeliotomy, diagnosed with large colon volvulus of ≥360 degrees, where a pelvic flexure biopsy was obtained, and that recovered from general anaesthesia, were selected for inclusion in the study. Logistic regression was used to determine associations between signalment, histomorphometric measurements of interstitium-to-crypt ratio, degree of haemorrhage, percentage loss of luminal and glandular epithelium, as well as colonic resection with short-term outcome (discharge from the hospital). Pelvic flexure biopsies from 47 horses with large colon volvulus were evaluated. Factors that were significantly associated with short-term outcome on univariate logistic regression were Thoroughbred breed (P = 0.04), interstitium-to-crypt ratio >1 (P = 0.02) and haemorrhage score ≥3 (P = 0.005). Resection (P = 0.92) was not found to be associated significantly with short-term outcome. No combined factors increased the likelihood of death in forward stepwise logistic regression modelling. A digitally quantified measurement of haemorrhage area strengthened the association of haemorrhage with nonsurvival in cases of large colon volvulus. Histomorphometric measurements of interstitium-to-crypt ratio and degree of haemorrhage predict short-term outcome in cases of large colon volvulus. Resection was not associated with short-term outcome in horses selected for this study. Accurate quantification of mucosal haemorrhage at the time of surgery may improve veterinary surgeons' prognostic capabilities in horses with large colon volvulus. © 2014 EVJ Ltd.
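To illustrate the kind of univariable logistic regression reported above, a sketch on synthetic data follows. The predictor thresholds mirror those named in the abstract (interstitium-to-crypt ratio >1, haemorrhage score ≥3), but the data, effect sizes, and fitted odds ratios are fabricated for illustration and are not the study's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic cohort of 47 "horses" with two dichotomized histomorphometric predictors
n = 47
ic_ratio = rng.uniform(0.5, 1.8, n)
haem_score = rng.integers(0, 5, n)
logit = 2.0 - 1.5 * (ic_ratio > 1.0) - 1.2 * (haem_score >= 3)   # assumed effects on survival
survived = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([(ic_ratio > 1.0).astype(float), (haem_score >= 3).astype(float)])
model = LogisticRegression().fit(X, survived)
print(np.exp(model.coef_[0]))   # odds ratios for IC ratio > 1 and haemorrhage score >= 3
```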
NASA Astrophysics Data System (ADS)
Gnaneswara Reddy, M.
2017-09-01
This communication presents the transport of a third-order hydromagnetic fluid with thermal radiation by peristalsis through an irregular channel configuration filled with a porous medium, under the low-Reynolds-number and large-wavelength approximations. Joule heating, Hall current and homogeneous-heterogeneous reaction effects are considered in the energy and species equations. Second-order velocity and energy slip conditions are invoked. The final dimensionless governing transport equations along with the boundary conditions are solved numerically with the help of NDSolve in the Mathematica package. The impact of the involved parameters on the non-dimensional axial velocity, fluid temperature and concentration characteristics has been analyzed via plots and tables. It is manifest that an increasing porosity parameter leads to maximum velocity in the core part of the channel. The fluid velocity increases near the walls of the channel, whereas the reverse effect occurs in the central part of the channel for higher values of the first-order slip. Larger values of the thermal radiation parameter R reduce the fluid temperature field. Also, an increase in the heterogeneous reaction parameter Ks magnifies the concentration profile. The present study has a crucial application to thermal therapy in biomedical engineering.
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
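As a toy illustration of why stochastic collocation can outperform Monte Carlo for a single uncertain input, the sketch below propagates one Gaussian parameter through a strongly nonlinear map and compares moment estimates. The exponential map is a stand-in for the radiation-diffusion model, and all numerical values are assumptions for illustration only.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def model(x):
    # Stand-in nonlinearity that amplifies input variability
    return np.exp(2.0 * x)

mu, sigma = 0.0, 0.3  # uncertain input ~ N(mu, sigma^2), hypothetical values

# Stochastic collocation: probabilists' Gauss-Hermite quadrature in the standard normal
nodes, weights = hermegauss(8)                 # 8 collocation points
vals = model(mu + sigma * nodes)
mean_sc = np.sum(weights * vals) / np.sqrt(2 * np.pi)
var_sc = np.sum(weights * vals**2) / np.sqrt(2 * np.pi) - mean_sc**2

# Plain Monte Carlo needs many more model evaluations for comparable accuracy
rng = np.random.default_rng(1)
samples = model(rng.normal(mu, sigma, 100_000))

print("SC    mean, CV:", mean_sc, np.sqrt(var_sc) / mean_sc)
print("MC    mean, CV:", samples.mean(), samples.std() / samples.mean())
print("exact mean:", np.exp(2 * mu + 2 * sigma**2))   # lognormal reference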
Information spreading dynamics in hypernetworks
NASA Astrophysics Data System (ADS)
Suo, Qi; Guo, Jin-Li; Shen, Ai-Zhong
2018-04-01
Contact pattern and spreading strategy fundamentally influence the spread of information. Current mathematical methods largely assume that contacts between individuals are fixed by networks. In fact, each individual is affected by all of his or her neighbors across different social relationships. Here, we develop a mathematical approach to describe the information spreading process in hypernetworks. Each individual is viewed as a node, and each social relationship containing the individual is viewed as a hyperedge. Based on the SIS epidemic model, we construct two spreading models. One is based on global transmission, corresponding to the RP strategy; the other is based on local transmission, corresponding to the CP strategy. These models degenerate into complex network models for a special choice of parameter, so hypernetwork models extend the traditional models and are more realistic. Further, we discuss the impact of parameters on the models, including the structural parameters of the hypernetwork, the spreading rate, the recovery rate, and the information seed. Propagation time and the density of informed nodes reveal the overall trend of information dissemination. Comparing the two models, we find that there is no spreading threshold in RP, while a spreading threshold exists in CP. The RP strategy induces a broader and faster information spreading process under the same parameters.
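A minimal discrete-time SIS toy on a hypergraph makes the node/hyperedge picture concrete. The RP-like and CP-like update rules below are simplified stand-ins (infect one randomly chosen member of a random hyperedge versus attempt to infect every member of that hyperedge); the paper's actual model equations differ, so this is a sketch under those assumptions.

import random

# Toy hypergraph: each hyperedge is a social circle containing several nodes
hyperedges = [{0, 1, 2}, {2, 3, 4, 5}, {5, 6}, {1, 6, 7, 8}]
nodes = set().union(*hyperedges)
member_of = {v: [e for e in hyperedges if v in e] for v in nodes}

def step(infected, beta, gamma, strategy):
    """One synchronous SIS update; strategy is 'RP' or 'CP' (simplified rules)."""
    new_inf = set(infected)
    for v in infected:
        e = random.choice(member_of[v])            # pick one social relationship of v
        others = list(e - {v})
        targets = [random.choice(others)] if strategy == "RP" else others
        for u in targets:
            if random.random() < beta:             # infection attempt
                new_inf.add(u)
    for v in infected:                             # recovery back to susceptible
        if random.random() < gamma:
            new_inf.discard(v)
    return new_inf

infected = {0}                                     # information seed
for t in range(50):
    infected = step(infected, beta=0.3, gamma=0.1, strategy="CP")
print("final density of informed nodes:", len(infected) / len(nodes))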
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ji-Young; Hong, Song-You; Sunny Lim, Kyo-Sun
The sensitivity of a cumulus parameterization scheme (CPS) to the representation of precipitation production is examined. To do this, the parameter that determines the fraction of cloud condensate converted to precipitation in the simplified Arakawa–Schubert (SAS) convection scheme is modified following the results from a cloud-resolving simulation. While the original conversion parameter is assumed to be constant, the revised parameter includes a temperature dependency above the freezing level, which leads to less production of frozen precipitating condensate with height. The revised CPS has been evaluated for a heavy rainfall event over Korea as well as medium-range forecasts using the Global/Regional Integrated Model system (GRIMs). The inefficient conversion of cloud condensate to convective precipitation at colder temperatures generally leads to a decrease in precipitation, especially in the category of heavy rainfall. The resultant increase of detrained moisture induces moistening and cooling at the top of clouds. A statistical evaluation of the medium-range forecasts with the revised precipitation conversion parameter shows an overall improvement of the forecast skill in precipitation and large-scale fields, indicating the importance of a more realistic representation of microphysical processes in CPSs.
NASA Astrophysics Data System (ADS)
Ye, Xuchun; Xu, Chong-Yu; Li, Xianghu; Zhang, Qi
2018-05-01
The frequency of floods and droughts is highly correlated with the temporal fluctuations of streamflow series; understanding these fluctuations is essential for improved modeling and statistical prediction of extreme changes in river basins. In this study, the complexity of daily streamflow fluctuations was investigated by using multifractal detrended fluctuation analysis (MF-DFA) in a large heterogeneous lake basin, the Poyang Lake basin in China, and the potential impacts of human activities were also explored. Major results indicate that the multifractality of streamflow fluctuations shows significant regional characteristics. In the study catchment, all the daily streamflow series present a strong long-range correlation, with Hurst exponents larger than 0.8. The q-order Hurst exponent h(q) of all the hydrostations can be characterized well by only two parameters: a (0.354 ≤ a ≤ 0.384) and b (0.627 ≤ b ≤ 0.677), with no pronounced differences. Singularity spectrum analysis pointed out that small fluctuations play a dominant role in all daily streamflow series. Our research also revealed that both the correlation properties and the broad probability density function (PDF) of hydrological series can be responsible for the multifractality of streamflow series, which depends on watershed area. In addition, we emphasized the relationship between watershed area and the estimated multifractal parameters, such as the Hurst exponent and the fitted parameters a and b from the q-order Hurst exponent h(q). However, the relationship between the width of the singularity spectrum (Δα) and watershed area is not clear. Further investigation revealed that increasing forest coverage and reservoir storage can effectively enhance the persistence of daily streamflow, decrease the hydrological complexity of large fluctuations, and increase the small fluctuations.
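The two-parameter description of the generalized Hurst exponent mentioned above is commonly written in the MF-DFA runoff literature as h(q) = 1/q − ln(a^q + b^q)/(q ln 2). Assuming that this is the form intended, the fit can be reproduced as in the sketch below; the sample h(q) values are invented for illustration, not the Poyang Lake estimates.

import numpy as np
from scipy.optimize import curve_fit

def h_model(q, a, b):
    # Two-parameter form h(q) = 1/q - ln(a^q + b^q) / (q ln 2); q = 0 is excluded
    return 1.0 / q - np.log(a**q + b**q) / (q * np.log(2.0))

# Hypothetical generalized Hurst exponents estimated by MF-DFA at several q orders
q   = np.array([-10, -6, -4, -2, -1, 1, 2, 4, 6, 10], dtype=float)
h_q = np.array([1.38, 1.32, 1.26, 1.13, 1.05, 0.97, 0.92, 0.82, 0.76, 0.70])

(a_fit, b_fit), _ = curve_fit(h_model, q, h_q, p0=(0.4, 0.6),
                              bounds=([0.01, 0.01], [1.0, 1.0]))
print("a =", round(a_fit, 3), "b =", round(b_fit, 3))
print("Hurst exponent h(2) =", round(h_model(2.0, a_fit, b_fit), 3))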
Sea Extremes: Integrated impact assessment in coastal climate adaptation
NASA Astrophysics Data System (ADS)
Sorensen, Carlo; Knudsen, Per; Broge, Niels; Molgaard, Mads; Andersen, Ole
2016-04-01
We investigate effects of sea level rise and a change in precipitation pattern on coastal flooding hazards. Historic and present in situ and satellite data of water and groundwater levels, precipitation, vertical ground motion, geology, and geotechnical soil properties are combined with flood protection measures, topography, and infrastructure to provide a more complete picture of the water-related impact from climate change at an exposed coastal location. Results show that future sea extremes evaluated from extreme value statistics may, indeed, have a large impact. However, the integrated effects from future storm surges and other geo- and hydro-parameters need to be considered in order to provide for the best protection and mitigation efforts. Based on the results we present and discuss a simple conceptual model setup that can, e.g., be used for the 'translation' of regional sea-level-rise evidence and projections into concrete impact measures. This may be used by potentially affected stakeholders - often working in different sectors and across levels of governance - in a common appraisal of the challenges ahead. The model may also enter dynamic tools to evaluate local impact as sea level research advances and projections for the future are updated.
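The extreme value statistics referred to above can be illustrated with a block-maxima fit; one common choice is a GEV distribution for annual maximum water levels, from which a return level is read off and a sea-level-rise offset added as a what-if. The data below are synthetic and the offset is an arbitrary assumption.

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# Hypothetical annual maximum sea levels (m above datum), e.g. 60 years of observations
annual_max = 1.2 + 0.25 * rng.gumbel(size=60)

c, loc, scale = genextreme.fit(annual_max)                 # GEV shape, location, scale
rl_100 = genextreme.isf(1.0 / 100.0, c, loc=loc, scale=scale)   # 100-year return level
print("100-year water level: %.2f m" % rl_100)

# Simple what-if: add a projected mean sea-level rise to the return level
print("with +0.5 m sea-level rise: %.2f m" % (rl_100 + 0.5))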
Computational Modeling of Pathophysiologic Responses to Exercise in Fontan Patients
Kung, Ethan; Perry, James C.; Davis, Christopher; Migliavacca, Francesco; Pennati, Giancarlo; Giardini, Alessandro; Hsia, Tain-Yen; Marsden, Alison
2014-01-01
Reduced exercise capacity is nearly universal among Fontan patients. Although many factors have emerged as possible contributors, the degree to which each impacts the overall hemodynamics is largely unknown. Computational modeling provides a means to test hypotheses of causes of exercise intolerance via precisely controlled virtual experiments and measurements. We quantified the physiological impacts of commonly encountered, clinically relevant dysfunctions introduced to the exercising Fontan system via a previously developed lumped-parameter model of Fontan exercise. Elevated pulmonary arterial pressure was observed in all cases of dysfunction, correlated with lowered cardiac output, and often mediated by elevated atrial pressure. Pulmonary vascular resistance was not the most significant factor affecting exercise performance as measured by cardiac output. In the absence of other dysfunctions, atrioventricular valve insufficiency alone had significant physiological impact, especially under exercise demands. The impact of isolated dysfunctions can be linearly summed to approximate the combined impact of several dysfunctions occurring in the same system. A single dominant cause of exercise intolerance was not identified, though several hypothesized dysfunctions each led to variable decreases in performance. Computational predictions of performance improvement associated with various interventions should be weighed against procedural risks and potential complications, contributing to improvements in routine patient management protocol. PMID:25260878
Calculation of Organ Doses for a Large Number of Patients Undergoing CT Examinations.
Bahadori, Amir; Miglioretti, Diana; Kruger, Randell; Flynn, Michael; Weinmann, Sheila; Smith-Bindman, Rebecca; Lee, Choonsik
2015-10-01
The objective of our study was to develop an automated calculation method to provide organ dose assessment for a large cohort of pediatric and adult patients undergoing CT examinations. We adopted two dose libraries that were previously published: the volume CT dose index-normalized organ dose library and the tube current-exposure time product (100 mAs)-normalized weighted CT dose index library. We developed an algorithm to calculate organ doses using the two dose libraries and the CT parameters available from DICOM data. We calculated organ doses for pediatric (n = 2499) and adult (n = 2043) CT examinations randomly selected from four health care systems in the United States and compared the adult organ doses with the values calculated from the ImPACT calculator. The median brain dose was 20 mGy (pediatric) and 24 mGy (adult), and the brain dose was greater than 40 mGy for 11% (pediatric) and 18% (adult) of the head CT studies. Both the National Cancer Institute (NCI) and ImPACT methods provided similar organ doses (median discrepancy < 20%) for all organs except the organs located close to the scanning boundaries. The visual comparisons of scanning coverage and phantom anatomies revealed that the NCI method, which is based on realistic computational phantoms, provides more accurate organ doses than the ImPACT method. The automated organ dose calculation method developed in this study reduces the time needed to calculate doses for a large number of patients. We have successfully used this method for a variety of CT-related studies including retrospective epidemiologic studies and CT dose trend analysis studies.
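The two-library approach described above amounts, per scan, to a lookup-and-multiply step: a 100 mAs-normalized weighted CTDI turns the scan's technique factors into CTDIvol, and a CTDIvol-normalized organ dose coefficient converts that into an organ dose. The sketch below is schematic, with invented coefficients; the real libraries are phantom-, scanner-, kVp- and age-specific, and the paper's exact pipeline may differ.

# Hypothetical 100 mAs-normalized weighted CTDI (mGy per 100 mAs) by tube voltage
nCTDIw_100mAs = {80: 4.0, 100: 7.5, 120: 11.0, 140: 15.0}

# Hypothetical CTDIvol-normalized organ dose coefficients (mGy per mGy CTDIvol), head protocol
organ_coeff = {"brain": 0.9, "eye_lens": 1.1, "thyroid": 0.05}

def organ_dose(kvp, mAs, pitch, organ):
    """Organ dose estimate (mGy) from DICOM technique factors; illustrative values only."""
    ctdi_vol = nCTDIw_100mAs[kvp] * (mAs / 100.0) / pitch
    return organ_coeff[organ] * ctdi_vol

# Example: a head CT at 120 kVp, 250 mAs, pitch 1.0
print("brain dose ~ %.1f mGy" % organ_dose(120, 250, 1.0, "brain"))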
Progress with lossy compression of data from the Community Earth System Model
NASA Astrophysics Data System (ADS)
Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.
2017-12-01
Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
Impact of intermittent fasting on glucose homeostasis.
Varady, Krista A
2016-07-01
This article provides an overview of the most recent human trials that have examined the impact of intermittent fasting on glucose homeostasis. Our literature search retrieved one human trial of alternate day fasting, and three trials of Ramadan fasting published in the past 12 months. Current evidence suggests that 8 weeks of alternate day fasting that produces mild weight loss (4% from baseline) has no effect on glucose homeostasis. As for Ramadan fasting, decreases in fasting glucose, insulin, and insulin resistance have been noted after 4 weeks in healthy normal weight individuals with mild weight loss (1-2% from baseline). However, Ramadan fasting may have little impact on glucoregulatory parameters in women with polycystic ovarian syndrome who failed to observe weight loss. Whether intermittent fasting is an effective means of regulating glucose homeostasis remains unclear because of the scarcity of studies in this area. Large-scale, longer-term randomized controlled trials will be required before the use of fasting can be recommended for the prevention and treatment of metabolic diseases.
Dynamic Open-Rotor Composite Shield Impact Test Report
NASA Technical Reports Server (NTRS)
Seng, Silvia; Frankenberger, Charles; Ruggeri, Charles R.; Revilock, Duane M.; Pereira, J. Michael; Carney, Kelly S.; Emmerling, William C.
2015-01-01
The Federal Aviation Administration (FAA) is working with the European Aviation Safety Agency to determine the certification base for proposed new engines that would not have a containment structure on large commercial aircraft. Equivalent safety to the current fleet is desired by the regulators, which means that loss of a single fan blade will not cause hazard to the aircraft. NASA Glenn and Naval Air Warfare Center (NAWC) China Lake collaborated with the FAA Aircraft Catastrophic Failure Prevention Program to design and test a shield that would protect the aircraft passengers and critical systems from a released blade that could impact the fuselage. This report documents the live-fire test from a full-scale rig at NAWC China Lake. NASA provided manpower and photogrammetry expertise to document the impact and damage to the shields. The test was successful: the blade was stopped from penetrating the shield, which validates the design analysis method and the parameters used in the analysis. Additional work is required to implement the shielding into the aircraft.
Effect of Occupant and Impact Factors on Forces within Neck: II. Analysis of Specific Subsets
NASA Astrophysics Data System (ADS)
Shaibani, Saami J.
2000-03-01
The forces generated in the cervical spine were evaluated for a substantial number of motor-vehicle occupants in an associated study.[1] Correlation between these forces and various occupant- and impact-related parameters was generally not high for the broad groupings of the population considered at that time. In this research, smaller subsets with more elements in common were extracted from the data to try to detect any underlying relationships that might exist for the neck force. Although correlation coefficients for these subsets were higher than those for the previous groupings in more than three-quarters of the matches undertaken, the values still did not indicate consistently good fits. This suggests that there is no simple relationship for the force within the cervical spine and this, in turn, means that the potential for neck injury has to be evaluated on a case-by-case basis. 1. Effect of Occupant and Impact Factors on Forces within Neck: I. Overview of Large Population, Bull. Am. Phys. Soc. in press (2000).
Three-dimensional derailment analysis of a crashed city tram
NASA Astrophysics Data System (ADS)
Zhou, Hechao; Wang, Wenbin; Hecht, Markus
2013-08-01
City tram collisions are simulated using multi-body dynamics. The aim of this paper is to investigate the collision-induced derailment. Simulation results demonstrate that the corner obstacle collision scenario defined in EN 15227 is mainly focused on the energy absorption process. Due to the large impact angle (45°), it is unlikely for a city tram to comply with this scenario without derailment. In order to avoid derailment, the maximum impact angle between city tram and oblique obstacle should be reduced to 25°. Moreover, some influence factors are analysed, such as mass of loaded passengers, friction coefficient, impact angle, etc. Derailment phenomenon is shown to be significantly dependent on these parameters. Two measures are proposed to prevent the collided city tram from derailment. One is using secondary lateral dampers to absorb collision energy. Another is increasing the lateral stiffness of secondary springs as well as the lateral clearance, so that more collision energy can be stored in the suspension. With these measures, the safety against derailment can be improved.
Impact parameter determination in experimental analysis using a neural network
NASA Astrophysics Data System (ADS)
Haddad, F.; Hagel, K.; Li, J.; Mdeiwayeh, N.; Natowitz, J. B.; Wada, R.; Xiao, B.; David, C.; Freslier, M.; Aichelin, J.
1997-03-01
A neural network is used to determine the impact parameter in 40Ca+40Ca reactions. The effect of the detection efficiency as well as the model dependence of the training procedure has been studied carefully. An overall improvement of the impact parameter determination of 25% is obtained using this technique. The analysis of Amphora 40Ca+40Ca data at 35 MeV per nucleon using a neural network shows two well-separated classes of events among the selected "complete" events.
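In the same spirit, a small feed-forward network can be trained on model-generated events to map global observables onto impact parameter. The sketch below uses scikit-learn with fully synthetic "events" (multiplicity and transverse energy invented here) standing in for the filtered simulation data used to train the network in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_events = 5000
b_true = 10.0 * np.sqrt(rng.uniform(size=n_events))   # impact parameter (fm), dN/db ~ b

# Synthetic global observables loosely anti-correlated with b
multiplicity = 40.0 * (1.0 - b_true / 12.0) + rng.normal(0, 3, n_events)
e_transverse = 500.0 * (1.0 - b_true / 11.0) + rng.normal(0, 60, n_events)
X = np.column_stack([multiplicity, e_transverse])

X_tr, X_te, b_tr, b_te = train_test_split(X, b_true, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
net.fit(X_tr, b_tr)
resid = net.predict(X_te) - b_te
print("rms error on b: %.2f fm" % np.sqrt(np.mean(resid**2)))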
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Jerel G.; Kruzic, Michael; Castillo, Carlos
2013-07-01
Chalk River Laboratory (CRL), located in Ontario, Canada, has a large number of remediation projects currently in the Nuclear Legacy Liabilities Program (NLLP), including hundreds of facility decommissioning projects and over one hundred environmental remediation projects, all to be executed over the next 70 years. Atomic Energy of Canada Limited (AECL) utilized WorleyParsons to prioritize the NLLP projects at the CRL through a risk-based prioritization and ranking process, using the WorleyParsons Sequencing Unit Prioritization and Estimating Risk Model (SUPERmodel). The prioritization project made use of the SUPERmodel, which has previously been used for other large-scale site prioritization and sequencing of facilities at nuclear laboratories in the United States. The process included development and vetting of risk parameter matrices as well as confirmation/validation of project risks. Detailed sensitivity studies were also conducted to understand the impacts that risk parameter weighting and scoring had on prioritization. The repeatable prioritization process yielded an objective, risk-based and technically defendable process for prioritization that gained concurrence from all stakeholders, including Natural Resources Canada (NRCan), which is responsible for the oversight of the NLLP. (authors)
NASA Astrophysics Data System (ADS)
Khan, Afed U.; Jiang, Jiping; Wang, Peng; Zheng, Yi
2017-10-01
Surface waters exhibit regionalization due to various climatic conditions and anthropogenic activities. Here we assess the impact of topographic and socio-economic factors on the climate sensitivity of surface water quality, estimated using an elasticity approach (climate elasticity of water quality, CEWQ), and identify potential risks of instability in different regions and climatic conditions. Large global datasets were used for 12 main water quality parameters from 43 water quality monitoring stations located at large major rivers. The results demonstrated that precipitation elasticity shows higher sensitivity to topographic and socio-economic determinants than temperature elasticity. In the tropical climate class (A), gross domestic product (GDP) played an important role in stabilizing the CEWQ. In the temperate climate class (C), GDP played the same stabilizing role, while the runoff coefficient, slope, and population density fuelled the risk of instability. The results implied that watersheds with a lower runoff coefficient, high population density, over-fertilization and manure application face a higher risk of instability. We discuss the socio-economic and topographic factors that cause instability of CEWQ parameters and conclude with some suggestions for watershed managers to improve the sustainability of freshwater bodies.
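The elasticity approach referred to above relates fractional changes in a water quality parameter to fractional changes in a climate driver. A minimal nonparametric version is sketched below, analogous to the median estimator often used for streamflow elasticity; the paper's exact estimator may differ, and the station data here are invented.

import numpy as np

def climate_elasticity(conc, driver):
    """Median of (dC/C) / (dX/X) over consecutive time steps (nonparametric elasticity)."""
    conc, driver = np.asarray(conc, float), np.asarray(driver, float)
    dC = np.diff(conc) / conc[:-1]
    dX = np.diff(driver) / driver[:-1]
    ok = dX != 0
    return np.median(dC[ok] / dX[ok])

# Hypothetical annual mean nitrate concentration (mg/L) and precipitation (mm) at one station
nitrate = [2.1, 2.4, 2.0, 2.6, 2.3, 2.8, 2.5]
precip  = [820, 910, 760, 980, 870, 1020, 930]
print("precipitation elasticity of nitrate:", round(climate_elasticity(nitrate, precip), 2))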
Eruptive Source Parameters from Near-Source Gravity Waves Induced by Large Vulcanian eruptions
NASA Astrophysics Data System (ADS)
Barfucci, Giulia; Ripepe, Maurizio; De Angelis, Silvio; Lacanna, Giorgio; Marchetti, Emanuele
2016-04-01
The sudden ejection of hot material from a volcanic vent perturbs the atmosphere, generating a broad spectrum of pressure oscillations from acoustic infrasound (<10 Hz) to gravity waves (<0.03 Hz). However, observations of gravity waves excited by volcanic eruptions are still rare, mostly limited to large sub-plinian eruptions and frequently made at large distance from the source (>100 km). Atmospheric gravity waves are induced by perturbations of the hydrostatic equilibrium of the atmosphere and propagate within a medium with internal density stratification. They are initiated by mechanisms that cause the atmosphere to be displaced, such as the injection of a volcanic ash plume during an eruption. We use gravity waves to infer eruptive source parameters, such as mass eruption rate (MER) and duration of the eruption, which may be used as inputs in volcanic ash transport and dispersion models. We present the analysis of near-field observations (<7 km) of atmospheric gravity waves, with frequencies of 0.97 and 1.15 mHz, recorded by a pressure sensor network during two explosions in July and December 2008 at Soufrière Hills Volcano, Montserrat. We show that gravity waves at Soufrière Hills Volcano originate above the volcanic dome and propagate with apparent horizontal velocities of 8-10 m/s. Assuming a single mass injection point source model, we constrain the source location at ~3.5 km a.s.l., above the vent, the duration of the gas thrust at <140 s, and MERs of 2.6 × 10^7 and 5.4 × 10^7 kg/s for the two eruptive events. Source duration and MER derived by modeling gravity waves are fully compatible with other independent estimates from field observations. Our work strongly supports the use of gravity waves to model eruption source parameters; it can have a strong impact on our ability to monitor volcanic eruptions at a large distance and may have future application in assessing the relative magnitude of volcanic explosions.
NASA Astrophysics Data System (ADS)
Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.
2009-04-01
Extreme weather events such as thunderstorms, hail and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about present-day climatological risk and its economic effects, and about its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning and prediction systems to cope with such events. Such an attempt is made for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for the classification of the causative synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) has proved to be most suitable for correct classification of the large-scale flow conditions. Certain CWTs turned out to be prone to heavy precipitation or, conversely, to have a very low risk of such events. Other large-scale parameters are tested in connection with CWTs to find a combination that has the highest skill in identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results but assumes knowledge of regional orographic particularities. Therefore, ongoing work is focused on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady growth rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM in simulating the frequency and intensity of the extreme weather events. Data from the A1B scenario (2000-2050) will be examined for a possible climate change signal.
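The percentile-based event definition used above is straightforward to state in code: flag days whose precipitation exceeds the station's 90th percentile (here computed over wet days, an assumption), then tabulate how often each circulation weather type coincides with such a day. The daily data and CWT labels below are placeholders.

import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n_days = 3000
df = pd.DataFrame({
    "precip_mm": rng.gamma(shape=0.6, scale=6.0, size=n_days),    # synthetic daily precipitation
    "cwt": rng.choice(["W", "NW", "SW", "C", "A"], size=n_days),  # placeholder circulation types
})

wet = df["precip_mm"] > 1.0                         # wet-day definition (assumption)
threshold = df.loc[wet, "precip_mm"].quantile(0.9)  # 90th percentile of wet days
df["extreme"] = df["precip_mm"] > threshold

# Conditional probability of an extreme day given each circulation weather type
print(df.groupby("cwt")["extreme"].mean().sort_values(ascending=False))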
NASA Astrophysics Data System (ADS)
McKague, Darren Shawn
2001-12-01
The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel-by-pixel, which can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive and typically do not provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions. The method can retrieve joint distributions of parameters, which allows for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance to retrieval parameter PDF transformation, given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter to radiance space transformation. Vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields; they vary considerably depending upon the horizontal structure of the rain. The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, hindering model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
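A Sobol' analysis of that size is conceptually the same as the small example below, written with the SALib package (a choice made here for illustration, not necessarily the authors' tool) and a toy function standing in for Noah-MP runs; parameter names, ranges and the model are placeholders.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Placeholder problem definition; Noah-MP would have ~117 analysed parameters and real ranges
problem = {
    "num_vars": 3,
    "names": ["soil_resistance_coeff", "snow_albedo_param", "root_depth"],
    "bounds": [[0.1, 10.0], [0.3, 0.9], [0.2, 3.0]],
}

X = saltelli.sample(problem, 1024)          # Saltelli sampling scheme

def toy_model(x):                           # stand-in for a latent-heat output of the model
    return np.sin(x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

Y = toy_model(X)
Si = sobol.analyze(problem, Y)              # first-order (S1) and total (ST) Sobol' indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print("%-24s S1=%.2f  ST=%.2f" % (name, s1, st))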
Employer model of workplace impacts of anti-TNF therapy for rheumatoid arthritis.
Birnbaum, Howard; Pike, Crystal; Kaufman, Rebecca; Cifaldi, Mary
2009-10-01
Rheumatoid arthritis (RA) greatly affects patients' abilities to perform work, which can translate into substantial employer costs. We developed a customizable model that allows employers to calculate workplace impacts of RA therapies in employees with RA. Costs of medical leave (absenteeism)/disability, reduced productivity, job turnover, and work-equipment adaptations for employees with RA were calculated. Costs of the tumor necrosis factor antagonist adalimumab were compared with those of other RA treatments. Default parameters were based on literature, clinical trials, government sources, and employers' data. Annual per-employee workplace cost was $9071 for adalimumab versus $16,335 for other RA therapies. Costs included reduced productivity (57%), absenteeism/disability (21%), and job turnover (21%). RA imposes a large financial burden on employers, predominantly owing to lost productivity. When compared with other RA therapies, adalimumab substantially reduced employers' costs.
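The cost components listed above combine additively in a model of this kind. A schematic of the per-employee calculation is sketched below with made-up inputs; these are not the model's calibrated defaults or the published dollar figures.

def annual_workplace_cost(wage, days_absent, productivity_loss_frac,
                          turnover_prob, replacement_cost, adaptation_cost):
    """Per-employee annual employer cost of RA (illustrative structure, hypothetical defaults)."""
    absenteeism = wage / 250.0 * days_absent          # ~250 working days/year assumed
    presenteeism = wage * productivity_loss_frac      # reduced on-the-job productivity
    turnover = turnover_prob * replacement_cost       # expected cost of replacing the employee
    return absenteeism + presenteeism + turnover + adaptation_cost

# Hypothetical comparison of two therapies
therapy_a = annual_workplace_cost(55000, 6, 0.08, 0.04, 25000, 500)
therapy_b = annual_workplace_cost(55000, 14, 0.15, 0.09, 25000, 500)
print("therapy A: $%.0f   therapy B: $%.0f" % (therapy_a, therapy_b))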
Transfer, loss and physical processing of water in hit-and-run collisions of planetary embryos
NASA Astrophysics Data System (ADS)
Burger, C.; Maindl, T. I.; Schäfer, C. M.
2018-01-01
Collisions between large, similar-sized bodies are believed to shape the final characteristics and composition of terrestrial planets. Their inventories of volatiles such as water are either delivered or at least significantly modified by such events. Besides the transition from accretion to erosion with increasing impact velocity, similar-sized collisions can also result in hit-and-run outcomes for sufficiently oblique impact angles and large enough projectile-to-target mass ratios. We study volatile transfer and loss focusing on hit-and-run encounters by means of smooth particle hydrodynamics simulations, including all main parameters: impact velocity, impact angle, mass ratio and also the total colliding mass. We find a broad range of overall water losses, up to 75% in the most energetic hit-and-run events, and confirm the much more severe consequences for the smaller body also for stripping of volatile layers. Transfer of water between projectile and target inventories is found to be mostly rather inefficient, and final water contents are dominated by pre-collision inventories reduced by impact losses, for similar pre-collision water mass fractions. Comparison with our numerical results shows that current collision outcome models are not accurate enough to reliably predict these composition changes in hit-and-run events. To also account for non-mechanical losses, we estimate the amount of collisionally vaporized water over a broad range of masses and find that these contributions are particularly important in collisions of ˜ Mars-sized bodies, with sufficiently high impact energies, but still relatively low gravity. Our results clearly indicate that the cumulative effect of several (hit-and-run) collisions can efficiently strip protoplanets of their volatile layers, especially the smaller body, as it might be common, e.g., for Earth-mass planets in systems with Super-Earths. An accurate model for stripping of volatiles that can be included in future planet formation simulations has to account for the peculiarities of hit-and-run events and track compositional changes in both large post-collision fragments.
NASA Astrophysics Data System (ADS)
Kirshen, P. H.; Knott, J. F.; Ray, P.; Elshaer, M.; Daniel, J.; Jacobs, J. M.
2016-12-01
Transportation climate change vulnerability and adaptation studies have primarily focused on surface-water flooding from sea-level rise (SLR); little attention has been given to the effects of climate change and SLR on groundwater and the subsequent impacts on the unbound foundation layers of coastal-road infrastructure. The magnitude of service-life reduction depends on the height of the groundwater in the unbound pavement materials, the pavement structure itself, and the loading. Using a steady-state groundwater model and a multi-layer elastic pavement evaluation model, the strain changes in the layers can be determined as a function of parameter values and translated into failure, measured as the number of loading cycles to failure. For a section of a major coastal road in New Hampshire, future changes in sea level, precipitation, temperature, land use, and groundwater pumping are characterized by deep uncertainty. Parameters that describe the groundwater system, such as hydraulic conductivity, can be described probabilistically, while road characteristics are assumed to be deterministic. To understand the vulnerability of this road section, a bottom-up planning approach was employed in which the combinations of parameter values that cause failure over time were determined and the plausibility of their occurrence was analyzed. To design a robust adaptation strategy that will function reasonably well in the present and the future, given the large number of uncertain parameter values, the performance of adaptation options was investigated. Adaptation strategies that were considered include raising the road, load restrictions, increasing pavement layer thicknesses, replacing moisture-sensitive materials with materials that are not moisture sensitive, improving drainage systems, and treatment of the underlying materials.
Deng, Nina; Anatchkova, Milena D; Waring, Molly E; Han, Kyung T; Ware, John E
2015-08-01
The Quality-of-life (QOL) Disease Impact Scale (QDIS®) standardizes the content and scoring of QOL impact attributed to different diseases using item response theory (IRT). This study examined the IRT invariance of the QDIS-standardized IRT parameters in an independent sample. The differential functioning of items and test (DFIT) of a static short-form (QDIS-7) was examined across two independent sources: patients hospitalized for acute coronary syndrome (ACS) in the TRACE-CORE study (N = 1,544) and chronically ill US adults in the QDIS standardization sample. "ACS-specific" IRT item parameters were calibrated and linearly transformed to compare to "standardized" IRT item parameters. Differences in IRT model-expected item, scale and theta scores were examined. The DFIT results were also compared in a standard logistic regression differential item functioning analysis. Item parameters estimated in the ACS sample showed lower discrimination parameters than the standardized discrimination parameters, but only small differences were found for threshold parameters. In DFIT, results on the non-compensatory differential item functioning index (range 0.005-0.074) were all below the threshold of 0.096. Item differences were further canceled out at the scale level. IRT-based theta scores for ACS patients using standardized and ACS-specific item parameters were highly correlated (r = 0.995, root-mean-square difference = 0.09). Using standardized item parameters, ACS patients scored one-half standard deviation higher (indicating greater QOL impact) compared to chronically ill adults in the standardization sample. The study showed sufficient IRT invariance to warrant the use of standardized IRT scoring of QDIS-7 for studies comparing the QOL impact attributed to acute coronary disease and other chronic conditions.
The state of the art of the impact of sampling uncertainty on measurement uncertainty
NASA Astrophysics Data System (ADS)
Leite, V. J.; Oliveira, E. C.
2018-03-01
Measurement uncertainty is a parameter that characterizes the reliability of a measurement result; its sources can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not hold for sampling uncertainty, which has been neglected because it faces several obstacles and there is no clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper aims at describing the state of the art of sampling uncertainty and at assessing its relevance to measurement uncertainty.
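When both contributions are quantified as standard uncertainties, one standard practice (in the spirit of the GUM/Eurachem guidance) is to combine them in quadrature and expand with a coverage factor. A minimal sketch, with hypothetical values:

import math

def combined_uncertainty(u_sampling, u_analytical, k=2.0):
    """Combined and expanded measurement uncertainty from sampling and analytical components."""
    u_c = math.sqrt(u_sampling**2 + u_analytical**2)   # combination in quadrature
    return u_c, k * u_c                                # expanded uncertainty U = k * u_c

u_c, U = combined_uncertainty(u_sampling=0.8, u_analytical=0.3)  # hypothetical values, same units
print("combined u_c = %.2f, expanded U (k=2) = %.2f" % (u_c, U))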
Quantum Computing Architectural Design
NASA Astrophysics Data System (ADS)
West, Jacob; Simms, Geoffrey; Gyure, Mark
2006-03-01
Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.
NASA Astrophysics Data System (ADS)
Faizan-Ur-Rab, M.; Zahiri, S. H.; King, P. C.; Busch, C.; Masood, S. H.; Jahedi, M.; Nagarajah, R.; Gulizia, S.
2017-12-01
Cold spray is a solid-state rapid deposition technology in which metal powder is accelerated to supersonic speeds within a de Laval nozzle and then impacts onto the surface of a substrate. It is possible for cold spray to build thick structures, thus providing an opportunity for melt-less additive manufacturing. Image analysis of particle impact location and focused ion beam dissection of individual particles were utilized to validate a 3D multicomponent model of cold spray. Impact locations obtained using the 3D model were found to be in close agreement with the empirical data. Moreover, the 3D model revealed the particles' velocity and temperature just before impact—parameters which are paramount for developing a full understanding of the deposition process. Further, it was found that the temperature and velocity variations in large-size particles before impact were far less than for the small-size particles. Therefore, an optimal particle temperature and velocity were identified, which gave the highest deformation after impact. The trajectory of the particles from the injection point to the moment of deposition in relation to propellant gas is visualized. This detailed information is expected to assist with the optimization of the deposition process, contributing to improved mechanical properties for additively manufactured cold spray titanium parts.
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will introduce a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
Effects of spot parameters in pencil beam scanning treatment planning.
Kraan, Aafke Christine; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M
2018-01-01
Spot size σ (in air at isocenter), interspot spacing d, and spot charge q influence dose delivery efficiency and plan quality in Intensity Modulated Proton Therapy (IMPT) treatment planning. The choice and range of parameters varies among different manufacturers. The goal of this work is to demonstrate the influence of the spot parameters on dose quality and delivery in IMPT treatment plans, to show their interdependence, and to make practitioners aware of the spot parameter values for a certain facility. Our study could serve as a guideline for making the trade-off between treatment quality and time in existing PBS centers and in future systems. We created plans for seven patients and a phantom, with different tumor sites and volumes, and compared the effect of small-, medium-, and large-spot widths (σ = 2.5, 5, and 10 mm) and interspot distances (1σ, 1.5σ, and 1.75σ) on dose, spot charge, and treatment time. Moreover, we quantified how postplanning charge threshold cuts affect plan quality and the total number of spots to deliver, for different spot widths and interspot distances. We show the effect of a minimum charge (or MU) cutoff value for a given proton delivery system. Spot size had a strong influence on dose: larger spots resulted in more protons delivered outside the target region. We observed dose differences of 2-13 Gy (RBE) between 2.5 mm and 10 mm spots, where the amount of extra dose was due to the dose penumbra around the target region. Interspot distance had little influence on dose quality for our patient group. Both parameters strongly influence spot charge in the plans and thus the possible impact of postplanning charge threshold cuts. If such charge thresholds are not included in the treatment planning system (TPS), it is important that the practitioner validates that a given combination of lower charge threshold, interspot spacing, and spot size does not result in a plan degradation. Low average spot charge occurs for small spots, small interspot distances, many beam directions, and low fractional dose values. The choice of spot parameter values is a trade-off between accelerator and beam line design, plan quality, and treatment efficiency. We recommend the use of small spot sizes for better organ-at-risk sparing and lateral interspot distances of 1.5σ to avoid long treatment times. We note that plan quality is influenced by the charge cutoff. Our results show that the charge cutoff can be sufficiently large (i.e., 10^6 protons) to accommodate limitations on beam delivery systems. It is, therefore, not necessary per se to include the charge cutoff in the treatment planning optimization, such that Pareto navigation (e.g., as practiced at our institution) is not excluded and optimal plans can be obtained without, perhaps, a bias from the charge cutoff. We recommend that the impact of a minimum charge cutoff is carefully verified for the spot sizes and spot distances applied, or that it is accommodated in the TPS. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Belqorchi, Abdelghafour
Forty years after Watson and Manchur conducted the Stand-Still Frequency Response (SSFR) test on a large turbogenerator, the applicability of this technique to a powerful salient pole synchronous generator has yet to be confirmed. The scientific literature on the subject is scarce, and very few have attempted to compare SSFR parameter results with those deduced from classical tests. The validity of SSFR on large salient pole machines has still to be proven. The present work aims to help fill this knowledge gap. It can be used to build a database of measurements, which is much needed to establish the validity of the technique. The author also hopes to demonstrate the potential of the SSFR model to represent the machine, not only in cases of weak disturbances but also strong ones such as instantaneous three-phase short-circuit faults. The difficulties raised by previous researchers are: the lack of accuracy in very low frequency measurements; the difficulty of rotor positioning along the d and q axes in the case of salient pole machines; the influence of the measurement current level on the magnetizing inductances in the d and q axes; and the impact of rotation on the damper circuits for some rotor designs. Aware of the above difficulties, the author conducted an SSFR test on a large salient pole machine (285 MVA). The generator under test has a laminated, non-insulated rotor and an integral slot number. The damper windings in adjacent poles are connected together via the polar core and the rotor rim. Finally, the damping circuit is unaffected by rotation. To improve the measurement accuracy at very low frequencies, the most precise frequency response analyser available on the market was used. In addition, the frequency responses of the signal conditioning modules (i.e., isolation, amplification...) were accounted for to correct the four measured SSFR transfer functions. Immunization against noise and use of the instrumentation in its optimum range were other techniques rigorously applied. Since the magnetizing inductances are influenced by the measurement current magnitude, the latter was maintained constant over the range 1 mHz-20 Hz. Other problems, such as the impact of rotation on damper circuits or the difficulty of rotor positioning, are eliminated or attenuated by the intrinsic characteristics of the machine. Regarding the data analysis, the Maximum Likelihood Estimation (MLE) method was used to determine the third- and second-order equivalent circuits from the SSFR measurements. In the d-axis, the approaches of adjustment to two and three transfer functions (Ld(s), sG(s) and Lafo(s)) were explored. The second-order model, derived from Ld(s) and G(s), was used to deduce the machine standard parameters. The latter were compared with the values given by the manufacturer and by conventional on-site tests: instantaneous three-phase short-circuit, Dalton-Cameron, and the d-axis transient time constant at open stator (T'do). The comparison showed the good accuracy of the SSFR values. Subsequently, a machine model was built in EMTP-RV based on the SSFR standard parameters. The model was able to reproduce the stator and rotor currents measured during the instantaneous three-phase short-circuit test. Some adjustments to the SSFR parameters were needed to reproduce the stator voltage and rotor current acquired during the load rejection d-axis test. It is worth noting that the load rejection d-axis test, recently added to the IEEE 115-2009 annex, must be modified to take into account the impact of saturation and excitation impedance on the deduced parameters.
Regarding this issue, some suggestions are proposed by the author. The SSFR results obtained contribute to raising confidence in the application of SSFR to large salient pole machines. In addition, they show the aptitude of the SSFR model to represent the machine in both cases of weak and strong disturbances, at least for machines similar to the one studied. Index Terms: Salient pole, frequency response, SSFR, equivalent circuit, operational inductance.
Simplified Models of Vector Control Impact upon Malaria Transmission by Zoophagic Mosquitoes
Kiware, Samson S.; Chitnis, Nakul; Moore, Sarah J.; Devine, Gregor J.; Majambere, Silas; Merrill, Stephen; Killeen, Gerry F.
2012-01-01
Background High coverage of personal protection measures that kill mosquitoes dramatically reduces malaria transmission where vector populations depend upon human blood. However, most primary malaria vectors outside of sub-Saharan Africa can be classified as "very zoophagic," meaning they feed occasionally (<10% of blood meals) upon humans, so personal protection interventions have negligible impact upon their survival. Methods and Findings We extended a published malaria transmission model to examine the relationship between transmission, control, and the baseline proportion of bloodmeals obtained from humans (human blood index). The lower limit of the human blood index enables derivation of simplified models for zoophagic vectors that (1) rely on only three field-measurable parameters, (2) predict immediate and delayed (with and without assuming reduced human infectivity, respectively) impacts of personal protection measures upon transmission, (3) illustrate how appreciable indirect communal-level protection for non-users can be accrued through direct personal protection of users, and (4) suggest the coverage and efficacy thresholds required to attain epidemiological impact. The findings suggest that immediate, indirect, community-wide protection of users and non-users alike may linearly relate to the efficacy of a user's direct personal protection, regardless of whether that is achieved by killing or repelling mosquitoes. High protective coverage and efficacy (≥80%) are important to achieve epidemiologically meaningful impact. Non-users are indirectly protected because the two most common species of human malaria are strict anthroponoses. Therefore, the small proportion of mosquitoes that are killed or diverted while attacking humans can represent a large proportion of those actually transmitting malaria. Conclusions Simplified models of malaria transmission by very zoophagic vectors may be used by control practitioners to predict intervention impact using three field-measurable parameters: the proportion of human exposure to mosquitoes occurring when an intervention can be practically used, its protective efficacy when used, and the proportion of people using it. PMID:22701527
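For very zoophagic vectors the three field-measurable parameters enter essentially multiplicatively in this reading of the abstract; the sketch below encodes that interpretation as a rough stand-in for the published formulae, which should be consulted for the exact expressions and the delayed-impact variant.

def community_protection(coverage, efficacy, exposure_when_usable):
    """Approximate immediate communal reduction in transmission for very zoophagic vectors.

    coverage             -- proportion of people using the intervention
    efficacy             -- personal protective efficacy while it is in use
    exposure_when_usable -- proportion of human exposure occurring when it can be used

    The multiplicative form is an assumption made here for illustration only.
    """
    return coverage * efficacy * exposure_when_usable

# Example: 80% coverage, 80% efficacy, 90% of exposure occurring while the measure is usable
print("approx. communal transmission reduction:", community_protection(0.8, 0.8, 0.9))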
Identifying critical road geometry parameters affecting crash rate and crash type.
Othman, Sarbaz; Thomson, Robert; Lannér, Gunnar
2009-10-01
The objective of this traffic safety investigation was to find critical road parameters affecting crash rate (CR). The study was based on crash and road maintenance data from Western Sweden. More than 3000 crashes, reported from 2000 to 2005 on median-separated roads, were collected and combined with road geometric and surface data. The statistical analysis showed variations in CR when road elements changed confirming that road characteristics affect CR. The findings indicated that large radii right-turn curves were more dangerous than left curves, in particular, during lane changing manoeuvres. However sharper curves are more dangerous in both left and right curves. Moreover, motorway carriageways with no or limited shoulders have the highest CR when compared to other carriageway widths, while one lane carriageway sections on 2+1 roads were the safest. Road surface results showed that both wheel rut depth and road roughness have negative impacts on traffic safety.
Shock enhancement of cellular materials subjected to intensive pulse loading
NASA Astrophysics Data System (ADS)
Zhang, J.; Fan, J.; Wang, Z.; Zhao, L.; Li, Z.
2018-03-01
Cellular materials can dissipate a large amount of energy due to their considerable stress plateau, which contributes to their extensive applications in structural design for crashworthiness. However, in some experiments with specimens subjected to intense impact loads, transmitted stress enhancement has been observed, leading to severe damage to the objects protected. Transmitted stress through two-dimensional Voronoi cellular materials as a protective device is qualitatively studied in this paper. Dimensionless parameters of material properties and loading parameters are defined to give critical conditions for shock enhancement and clarify the correlation between the deformations and stress enhancement. The effect of relative density on this amplifying phenomenon is investigated as well. In addition, local strain fields are calculated by using the optimal local deformation gradient, which gives a clear presentation of deformations and possible local non-uniformity in the crushing process. This research provides valuable insight into the reliability of cellular materials as protective structures.
Identifying Critical Road Geometry Parameters Affecting Crash Rate and Crash Type
Othman, Sarbaz; Thomson, Robert; Lannér, Gunnar
2009-01-01
The objective of this traffic safety investigation was to find critical road parameters affecting crash rate (CR). The study was based on crash and road maintenance data from Western Sweden. More than 3000 crashes, reported from 2000 to 2005 on median-separated roads, were collected and combined with road geometric and surface data. The statistical analysis showed variations in CR when road elements changed confirming that road characteristics affect CR. The findings indicated that large radii right-turn curves were more dangerous than left curves, in particular, during lane changing manoeuvres. However sharper curves are more dangerous in both left and right curves. Moreover, motorway carriageways with no or limited shoulders have the highest CR when compared to other carriageway widths, while one lane carriageway sections on 2+1 roads were the safest. Road surface results showed that both wheel rut depth and road roughness have negative impacts on traffic safety. PMID:20184841
Sun, Rubao; An, Daizhi; Lu, Wei; Shi, Yun; Wang, Lili; Zhang, Can; Zhang, Ping; Qi, Hongjuan; Wang, Qiang
2016-02-01
In this study, we present a method for identifying sources of water pollution and their relative contributions in pollution disasters. The method uses a combination of principal component analysis and factor analysis. We carried out a case study in three rural villages close to Beijing after torrential rain on July 21, 2012. Nine water samples were analyzed for eight parameters, namely turbidity, total hardness, total dissolved solids, sulfates, chlorides, nitrates, total bacterial count, and total coliform groups. All of the samples showed different degrees of pollution, and most were unsuitable for drinking water as concentrations of various parameters exceeded recommended thresholds. Principal component analysis and factor analysis showed that two factors, the degree of mineralization and agricultural runoff, and flood entrainment, explained 82.50% of the total variance. The case study demonstrates that this method is useful for evaluating and interpreting large, complex water-quality data sets.
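The PCA/factor-analysis step described above reduces the eight standardized parameters to a small number of components and reads source types off the loadings. A compact sketch follows, with random placeholder data in place of the nine field samples; the column names mirror the parameters listed in the abstract.

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

params = ["turbidity", "total_hardness", "TDS", "sulfate",
          "chloride", "nitrate", "total_bacteria", "total_coliform"]
rng = np.random.default_rng(5)
data = pd.DataFrame(rng.normal(size=(9, len(params))), columns=params)  # placeholder samples

Z = StandardScaler().fit_transform(data)      # standardize before PCA
pca = PCA(n_components=2).fit(Z)
print("variance explained by two components: %.1f%%"
      % (100 * pca.explained_variance_ratio_.sum()))
loadings = pd.DataFrame(pca.components_.T, index=params, columns=["PC1", "PC2"])
print(loadings.round(2))   # large loadings indicate which parameters drive each factor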
Cui, Shihai; Li, Haiyan; Li, Xiangnan; Ruan, Jesse
2015-01-01
Brain tissue mechanical properties are important for investigating child head injury with the finite element (FE) method. However, the properties used in child head FE models vary over a large range in the published literature because of the scarcity of child cadaver experiments. In this work, a head FE model with detailed anatomical structures is developed from the computed tomography (CT) data of a 6-year-old healthy child head. The effects of brain tissue mechanical properties on traumatic brain response are also analyzed by reconstructing a head impact on an engine hood according to the Euro-NCAP testing regulation using the FE method. The results showed that variations of the brain tissue mechanical parameters in the linear viscoelastic constitutive model had different influences on the intracranial response. Furthermore, the predicted shear stress and shear strain of the brain tissue showed opposite trends when these parameters were varied.
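For context, a single-term linear viscoelastic shear relaxation law of the form G(t) = G_inf + (G_0 - G_inf) e^(-beta t) is commonly used for brain tissue in head FE models; the sketch below evaluates it with placeholder values, not the parameters of the cited model.

```python
# Illustrative evaluation of a single-term linear viscoelastic shear relaxation modulus;
# G0, G_inf, and beta are placeholders, not the cited model's fitted values.
import numpy as np

def shear_relaxation_modulus(t, G0, G_inf, beta):
    return G_inf + (G0 - G_inf) * np.exp(-beta * t)

t = np.linspace(0.0, 0.05, 6)                                    # time, s
print(shear_relaxation_modulus(t, G0=10e3, G_inf=2e3, beta=80.0))  # Pa, with beta in 1/s
```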
Parameters for assessing the aquatic environmental impact of cosmetic products.
Vita, N A; Brohem, C A; Canavez, A D P M; Oliveira, C F S; Kruger, O; Lorencini, M; Carvalho, C M
2018-05-01
The cosmetic industry's growing concern about the impact of its supply chain on the environment, the sustainability of raw materials, and biodiversity increases the need to ensure that the final product has a lower environmental impact. The objective of this review is to summarize and compare the information available from international organizations and legislation regarding the main criteria used to assess raw materials for aquatic toxicity, as well as the most suitable alternative methods for obtaining assessment parameters. Drawing on the scientific literature and international legislation available in databases, this work discusses and compares the parameters established by international organizations such as the Environmental Protection Agency (EPA) and Cradle to Cradle (C2C), as well as European legislation, namely European Regulation 1272/2008, for assessing environmental impact. Defining the ecotoxicity parameters of the main classes of raw materials in rinse-off cosmetic products can enable the development of products that are more environmentally sustainable, prioritizing substances with less environmental impact. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Verbeke, C.; Asvestari, E.; Scolini, C.; Pomoell, J.; Poedts, S.; Kilpua, E.
2017-12-01
Coronal mass ejections (CMEs) are among the main drivers of coronal and interplanetary dynamics. Understanding their origin and evolution from the Sun to the Earth is crucial in order to determine their impact on our Earth and society. One of the key parameters that determine the geo-effectiveness of a coronal mass ejection is its internal magnetic configuration. We present a detailed parameter study of the Gibson-Low flux rope model. We focus on changes in the input parameters and how these changes affect the characteristics of the CME at Earth. Recently, the Gibson-Low flux rope model has been implemented into the inner heliosphere model EUHFORIA, a magnetohydrodynamics forecasting model of large-scale dynamics from 0.1 AU up to 2 AU. Coronagraph observations can be used to constrain the kinematics and morphology of the flux rope. One of the key parameters, the magnetic field, is difficult to determine directly from observations. In this work, we approach the problem by conducting a parameter study in which flux ropes with varying magnetic configurations are simulated. We then use the obtained dataset to look for signatures in imaging observations and in-situ observations in order to find an empirical way of constraining the parameters related to the magnetic field of the flux rope. In particular, we focus on events observed by at least two spacecraft (STEREO + L1) in order to discuss the merits of using observations from multiple viewpoints in constraining the parameters.
Iurciuc, Stela; Avram, Claudiu; Turi, Vladiana; Militaru, Anda; Avram, Adina; Cimpean, Anca Maria; Iurciuc, Mircea
2016-01-01
To evaluate the impact of physical training on central hemodynamic parameters and elasticity of large arteries in hypertensive patients. A total of 129 hypertensive patients were divided into two groups: group A followed lifestyle changes and physical training; and group B acted as a control group; seven parameters were recorded: Pulse wave velocity (PWVao), systolic blood pressure (SBP), diastolic blood pressure (DBP), pulse pressure (PP), central aortic systolic blood pressure (SBPao), aortic diastolic blood pressure (DBPao), and central aortic pulse pressure (PPao). The difference between values at 4 months and baseline (Δ) were as follows: ΔPWVao was -1.02 m/s (p<0.001) versus 0.17 m/s (p=0.035), ΔSBPao was -9.6 mmHg (p=0.009) versus 1.6 mmHg (p=0.064), and ΔPPao was -6.8 mmHg (p<0.001) versus 3.2 mmHg, (p=0.029) in group A versus B, respectively. Exercise training improves SBP, PP, SBPao, PPao and may delay arterial ageing. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B
2011-09-01
Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, as well as conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
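A much simplified stand-in for the idea (not the authors' merge-tree framework): the same scalar field yields different feature populations depending on an arbitrary threshold, so feature statistics should be swept over that parameter. The field, thresholds, and connected-component definition below are invented for illustration.

```python
# Sweep a feature-defining threshold over a synthetic 2-D scalar field and report
# how the feature count and mean size change with that arbitrary parameter.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)
field = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=6)  # synthetic scalar field

for thr in np.linspace(0.02, 0.10, 5):
    labels, n = ndimage.label(field > thr)                     # "cells" at this threshold
    sizes = ndimage.sum(field > thr, labels, index=range(1, n + 1)) if n else np.array([0.0])
    print(f"threshold {thr:.2f}: {n:3d} features, mean size {np.mean(sizes):6.1f} px")
```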
A planetary dust ring generated by impact-ejection from the Galilean satellites
NASA Astrophysics Data System (ADS)
Sachse, Manuel
2018-03-01
All outer planets in the Solar System are surrounded by a ring system. Many of these rings are dust rings or they contain at least a high proportion of dust. They are often formed by impacts of micro-meteoroids onto embedded bodies. The ejected material typically consists of micron-sized charged particles, which are susceptible to gravitational and non-gravitational forces. Generally, detailed information on the dynamics and distribution of the dust requires expensive numerical simulations of a large number of particles. Here we develop a relatively simple and fast, semi-analytical model for an impact-generated planetary dust ring governed by the planet's gravity and the relevant perturbation forces for the dynamics of small charged particles. The most important parameter of the model is the dust production rate, which is a linear factor in the calculation of the dust densities. We apply our model to dust ejected from the Galilean satellites using production rates obtained from flybys of the dust sources. The dust densities predicted by our model are in good agreement with numerical simulations and with in situ measurements by the Galileo spacecraft. The lifetimes of large particles are about two orders of magnitude greater than those of small ones, which implies a flattening of the size distribution in circumplanetary space. Information about the distribution of circumplanetary dust is also important for the risk assessment of spacecraft orbits in the respective regions.
NASA Technical Reports Server (NTRS)
Burton, S. P.; Ferrare, R. A.; Hostetler, C. A.; Hair, J. W.; Rogers, R. R.; Obland, M. D.; Butler, C. F.; Cook, A. L.; Harper, D. B.; Froyd, K. D.;
2012-01-01
Knowledge of the vertical profile, composition, concentration, and size of aerosols is required for assessing the direct impact of aerosols on radiation, the indirect effects of aerosols on clouds and precipitation, and attributing these effects to natural and anthropogenic aerosols. Because anthropogenic aerosols are predominantly submicrometer, fine mode fraction (FMF) retrievals from satellite have been used as a tool for deriving anthropogenic aerosols. Although column and profile satellite retrievals of FMF have been performed over the ocean, such retrievals have not yet been done over land. Consequently, uncertainty in satellite estimates of the anthropogenic component of the aerosol direct radiative forcing is greatest over land, due in large part to uncertainties in the FMF. Satellite measurements have been used to detect and evaluate aerosol impacts on clouds; however, such efforts have been hampered by the difficulty in retrieving vertically resolved cloud condensation nuclei (CCN) concentration, which is the most direct parameter linking aerosols and clouds. Recent studies have shown correlations between average satellite-derived column aerosol optical thickness (AOT) and in situ measured CCN. However, these same studies, as well as others that use detailed airborne in situ measurements, have noted that vertical variability of the aerosol distribution, impacts of relative humidity, and the presence of coarse mode aerosols such as dust introduce large uncertainties in such relations.
Jiang, Jiping; Sharma, Ashish; Sivakumar, Bellie; Wang, Peng
2014-01-15
To uncover climate-water quality relationships in large rivers on a global scale, the present study investigates the climate elasticity of river water quality (CEWQ) using long-term monthly records observed at 14 large rivers. Temperature and precipitation elasticities of 12 water quality parameters, highlighted by N- and P-nutrients, are assessed. General observations on elasticity values show the usefulness of this approach in describing the magnitude of stream water quality responses to climate change, improving on simple statistical correlation. The sensitivity type, intensity, and variability rank of CEWQ are reported, and specific characteristics and mechanisms of the elasticity of nutrient parameters are also revealed. Among them, the ammonia and total phosphorus versus air temperature models and the nitrite and orthophosphorus versus precipitation models perform best. Spatial and temporal assessment shows that precipitation elasticity is more variable in space than temperature elasticity and that seasonal variation is more evident for precipitation elasticity than for temperature elasticity. Moreover, both anthropogenic activities and environmental factors are found to impact CEWQ for select variables. The major relationships that can be inferred include: (1) human population has a strong linear correlation with the temperature elasticity of turbidity and total phosphorus; and (2) latitude has a strong linear correlation with the precipitation elasticity of turbidity and N nutrients. As this work improves our understanding of the relation between climate factors and surface water quality, it is potentially helpful for investigating the effect of climate change on water quality in large rivers, such as the long-term change of nutrient concentrations. © 2013.
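Climate elasticity is often estimated as the slope of a log-log regression of a water-quality variable on a climate variable; the sketch below illustrates that definition with synthetic monthly series (the exact estimator used in the study may differ).

```python
# Hedged sketch: precipitation elasticity of a water-quality parameter estimated as the
# slope of a log-log regression; the monthly series are synthetic, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
precip = rng.gamma(shape=4.0, scale=25.0, size=240)             # monthly precipitation, mm
conc = 2.0 * precip**0.35 * rng.lognormal(0.0, 0.2, size=240)   # e.g. a nutrient, mg/L

slope, intercept = np.polyfit(np.log(precip), np.log(conc), 1)
print(f"precipitation elasticity ~ {slope:.2f}")                # ~0.35 by construction
```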
Parameters estimation of sandwich beam model with rigid polyurethane foam core
NASA Astrophysics Data System (ADS)
Barbieri, Nilson; Barbieri, Renato; Winikes, Luiz Carlos
2010-02-01
In this work, the physical parameters of sandwich beams made with the association of hot-rolled steel, Polyurethane rigid foam and High Impact Polystyrene, used for the assembly of household refrigerators and food freezers are estimated using measured and numeric frequency response functions (FRFs). The mathematical models are obtained using the finite element method (FEM) and the Timoshenko beam theory. The physical parameters are estimated using the amplitude correlation coefficient and genetic algorithm (GA). The experimental data are obtained using the impact hammer and four accelerometers displaced along the sample (cantilevered beam). The parameters estimated are Young's modulus and the loss factor of the Polyurethane rigid foam and the High Impact Polystyrene.
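The estimation idea, searching for material parameters whose model FRF best matches the measured one, can be sketched with a toy single-degree-of-freedom FRF and a stochastic optimizer standing in for the FEM model and the genetic algorithm; all values below are invented.

```python
# Toy stand-in for FRF-based parameter estimation: recover a Young's modulus and loss
# factor by minimizing the mismatch between a "measured" and a model FRF magnitude.
import numpy as np
from scipy.optimize import differential_evolution

freq = np.linspace(5, 200, 400)                       # Hz
omega = 2 * np.pi * freq

def frf(E, eta, m=1.0, geom=1e-4):
    k = geom * E * (1 + 1j * eta)                     # stiffness with hysteretic loss factor
    return 1.0 / (k - m * omega**2)

measured = np.abs(frf(E=2.1e9, eta=0.08))             # pretend "measurement"

def cost(x):
    E, eta = x
    return np.sum((np.abs(frf(E, eta)) - measured) ** 2)

res = differential_evolution(cost, bounds=[(1e8, 1e10), (0.01, 0.5)], seed=0, tol=1e-10)
print(res.x)                                          # recovered modulus and loss factor
```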
Space Shuttle Solid Rocket Booster decelerator subsystem - Air drop test vehicle/B-52 design
NASA Technical Reports Server (NTRS)
Runkle, R. E.; Drobnik, R. F.
1979-01-01
The air drop development test program for the Space Shuttle Solid Rocket Booster Recovery System required the design of a large drop test vehicle that would meet all the stringent requirements placed on it by structural loads, safety considerations, flight recovery system interfaces, and sequence. The drop test vehicle had to be capable of testing the drogue and the three main parachutes both separately and in the total flight deployment sequence, and still be inexpensive enough to fit in a low-budget development program. The design to test large ribbon parachutes to loads of 300,000 pounds required the detailed investigation and integration of several parameters such as carrier aircraft mechanical interface, drop test vehicle ground transportability, impact point ground penetration, salvageability, drop test vehicle intelligence, flight design hardware interfaces, and packaging fidelity.
Escape probability of the super-Penrose process
NASA Astrophysics Data System (ADS)
Ogasawara, Kota; Harada, Tomohiro; Miyamoto, Umpei; Igata, Takahisa
2017-06-01
We consider a head-on collision of two massive particles that move in the equatorial plane of an extremal Kerr black hole, which results in the production of two massless particles. Focusing on a typical case, where both of the colliding particles have zero angular momenta, we show that a massless particle produced in such a collision can escape to infinity with arbitrarily large energy in the near-horizon limit of the collision point. Furthermore, if we assume that the emission of the produced massless particles is isotropic in the center-of-mass frame but confined to the equatorial plane, the escape probability of the produced massless particle approaches 5/12, and almost all escaping massless particles have arbitrarily large energy at infinity and an impact parameter approaching 2GM/c^2, where M is the mass of the black hole.
How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?
Hagos, Samson; Ruby Leung, L.; Zhao, Chun; ...
2018-02-10
Convection-permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.
How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson; Ruby Leung, L.; Zhao, Chun
Convection-permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.
Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li
2016-01-01
Purpose: We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F-FDG PET texture parameters. Materials and Methods: A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results: SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions: Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211
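As a small illustration of one first-order parameter named above, SUV entropy can be computed from a discretized histogram of voxel SUVs inside the segmented tumour; the sketch below uses synthetic SUVs and assumes a 64-bin discretization, which may differ from the study's settings.

```python
# Hedged sketch of SUV entropy from a discretized SUV histogram; SUV values are synthetic.
import numpy as np

def suv_entropy(suv_values, n_bins=64):
    hist, _ = np.histogram(suv_values, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(2)
tumour_suv = rng.gamma(shape=3.0, scale=2.0, size=5000)   # voxel SUVs inside the ROI
print(round(suv_entropy(tumour_suv), 2))
```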
Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.
Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark
2008-04-01
To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.
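The "relative assessment" idea can be sketched as a Monte Carlo over the uncertain inputs: recompute a habitat index for two hypothetical plans under each draw and report how often their rank order flips. The index model and distributions below are invented placeholders, not the Everglades models.

```python
# Rank-robustness sketch: perturb uncertain inputs and count rank reversals between plans.
import numpy as np

rng = np.random.default_rng(3)

def habitat_index(plan_depth_cm, sensitivity):
    # toy index: habitat potential peaks at an optimal water depth of ~30 cm
    return np.exp(-sensitivity * (plan_depth_cm - 30.0) ** 2)

flips, n = 0, 10000
for _ in range(n):
    sens = rng.uniform(0.001, 0.01)                # uncertain species parameter
    shift = rng.normal(0.0, 10.0)                  # uncertain future water level, cm
    a = habitat_index(25.0 + shift, sens)          # plan A
    b = habitat_index(40.0 + shift, sens)          # plan B
    flips += (b > a)
print(f"plan ranking reversed in {100 * flips / n:.1f}% of projections")
```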
Park, Y.; Krause, E.; Dodelson, S.; ...
2016-09-30
The joint analysis of galaxy-galaxy lensing and galaxy clustering is a promising method for inferring the growth function of large scale structure. Our analysis will be carried out on data from the Dark Energy Survey (DES), with its measurements of both the distribution of galaxies and the tangential shears of background galaxies induced by these foreground lenses. We develop a practical approach to modeling the assumptions and systematic effects affecting small scale lensing, which provides halo masses, and large scale galaxy clustering. Introducing parameters that characterize the halo occupation distribution (HOD), photometric redshift uncertainties, and shear measurement errors, we study how external priors on different subsets of these parameters affect our growth constraints. Degeneracies within the HOD model, as well as between the HOD and the growth function, are identified as the dominant source of complication, with other systematic effects sub-dominant. The impact of HOD parameters and their degeneracies necessitates the detailed joint modeling of the galaxy sample that we employ. Finally, we conclude that DES data will provide powerful constraints on the evolution of structure growth in the universe, conservatively/optimistically constraining the growth function to 7.9%/4.8% with its first-year data that covered over 1000 square degrees, and to 3.9%/2.3% with its full five-year data that will survey 5000 square degrees, including both statistical and systematic uncertainties.
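A toy Fisher-matrix calculation illustrates the abstract's point that priors on nuisance (HOD-like) parameters change the marginalized constraint on a growth parameter; the information matrix below is invented and not derived from DES data.

```python
# Toy Fisher forecast: tightening the prior on a nuisance parameter shrinks the
# marginalized error on the growth parameter. All numbers are invented.
import numpy as np

F = np.array([[200.0, 80.0],      # (growth, nuisance) information matrix
              [ 80.0, 60.0]])

def growth_error(prior_sigma_nuisance):
    Fp = F.copy()
    Fp[1, 1] += 1.0 / prior_sigma_nuisance**2      # add a Gaussian prior on the nuisance
    cov = np.linalg.inv(Fp)
    return np.sqrt(cov[0, 0])

for s in (np.inf, 0.5, 0.1):
    print(s, round(growth_error(s), 4))
```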
Development of a GNSS-Enhanced Tsunami Early Warning System
NASA Astrophysics Data System (ADS)
Bawden, G. W.; Melbourne, T. I.; Bock, Y.; Song, Y. T.; Komjathy, A.
2015-12-01
The past decade has witnessed a terrible loss of life and economic disruption caused by large earthquakes and resultant tsunamis impacting coastal communities and infrastructure across the Indo-Pacific region. NASA has funded the early development of a prototype real-time Global Navigation Satellite System (RT-GNSS) based rapid earthquake and tsunami early warning (GNSS-TEW) system that may be used to enhance seismic tsunami early warning systems for large earthquakes. This prototype GNSS-TEW system geodetically estimates fault parameters (earthquake magnitude, location, strike, dip, and slip magnitude/direction on a gridded fault plane both along strike and at depth) and tsunami source parameters (seafloor displacement, tsunami energy scale, and 3D tsunami initials) within minutes after the mainshock based on dynamic numerical inversions/regressions of the real-time measured displacements within a spatially distributed real-time GNSS network(s) spanning the epicentral region. It is also possible to measure fluctuations in the ionosphere's total electron content (TEC) in the RT-GNSS data caused by the pressure wave from the tsunami. This TEC approach can detect if a tsunami has been triggered by an earthquake, track its waves as they propagate through the oceanic basins, and provide upwards of 45 minutes early warning. These combined real-time geodetic approaches will very quickly address a number of important questions in the immediate minutes following a major earthquake: How big was the earthquake and what are its fault parameters? Could the earthquake have produced a tsunami and was a tsunami generated?
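The geodetic core of such a system is a linear inversion of observed GNSS displacements for slip on a gridded fault, d = G m, followed by a moment-magnitude estimate. The sketch below uses a random matrix in place of elastic half-space Green's functions, with invented patch dimensions and rigidity.

```python
# Sketch of rapid fault-parameter estimation: least-squares slip inversion plus Mw estimate.
import numpy as np

rng = np.random.default_rng(4)
n_stations, n_patches = 60, 20
G = rng.normal(size=(3 * n_stations, n_patches))        # stand-in for Green's functions
true_slip = np.clip(rng.normal(1.0, 0.5, n_patches), 0, None)    # metres
d = G @ true_slip + rng.normal(0.0, 0.01, 3 * n_stations)        # noisy 3-component offsets

slip_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
moment = 30e9 * (10e3 * 10e3) * slip_hat.sum()          # mu * patch area * total slip, N m
Mw = (2.0 / 3.0) * (np.log10(moment) - 9.1)             # moment magnitude
print(round(Mw, 2))
```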
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.; Krause, E.; Dodelson, S.
The joint analysis of galaxy-galaxy lensing and galaxy clustering is a promising method for inferring the growth function of large scale structure. Our analysis will be carried out on data from the Dark Energy Survey (DES), with its measurements of both the distribution of galaxies and the tangential shears of background galaxies induced by these foreground lenses. We develop a practical approach to modeling the assumptions and systematic effects affecting small scale lensing, which provides halo masses, and large scale galaxy clustering. Introducing parameters that characterize the halo occupation distribution (HOD), photometric redshift uncertainties, and shear measurement errors, we studymore » how external priors on different subsets of these parameters affect our growth constraints. Degeneracies within the HOD model, as well as between the HOD and the growth function, are identified as the dominant source of complication, with other systematic effects sub-dominant. The impact of HOD parameters and their degeneracies necessitate the detailed joint modeling of the galaxy sample that we employ. Finally, we conclude that DES data will provide powerful constraints on the evolution of structure growth in the universe, conservatively/optimistically constraining the growth function to 7.9%/4.8% with its first-year data that covered over 1000 square degrees, and to 3.9%/2.3% with its full five-year data that will survey 5000 square degrees, including both statistical and systematic uncertainties.« less
NASA Astrophysics Data System (ADS)
Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.
2017-12-01
There is an increasing role for high resolution, CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote-sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications on their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics, beyond just their geographic location. Physical parameters of the emission sources such as number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences can be important in characterizing emissions. Emissions from large point sources can behave much differently than emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and help to highlight potential aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.
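One way stack parameters enter such models is through plume rise. The sketch below evaluates the Briggs buoyant plume-rise formula, a standard parameterization (quoted from memory; treat the exact form and constants as an assumption to verify), showing how exit velocity, stack radius, exhaust temperature, and wind speed change the effective release height.

```python
# Hedged sketch of Briggs buoyant plume rise from stack parameters; values are illustrative.
import numpy as np

def buoyancy_flux(g, v_s, r_s, T_s, T_a):
    """F_b = g * v_s * r_s^2 * (T_s - T_a) / T_s   [m^4/s^3]"""
    return g * v_s * r_s**2 * (T_s - T_a) / T_s

def briggs_rise(F_b, u, x):
    """Transitional plume rise dh = 1.6 * F_b^(1/3) * x^(2/3) / u   [m]"""
    return 1.6 * F_b ** (1 / 3) * x ** (2 / 3) / u

F_b = buoyancy_flux(9.81, v_s=20.0, r_s=2.5, T_s=420.0, T_a=290.0)
for u in (2.0, 5.0, 10.0):              # wind speed strongly modulates the effective height
    print(u, round(briggs_rise(F_b, u, x=1000.0), 1))
```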
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Ben; Zhang, Yaocun; Qian, Yun
In this study, we apply an efficient sampling approach and conduct a large number of simulations to explore the sensitivity of the simulated Asian summer monsoon (ASM) precipitation, including the climatological state and interannual variability, to eight parameters related to the cloud and precipitation processes in the Beijing Climate Center AGCM version 2.1 (BCC_AGCM2.1). Our results show that BCC_AGCM2.1 has large biases in simulating the ASM precipitation. The precipitation efficiency and evaporation coefficient for deep convection are the most sensitive parameters in simulating the ASM precipitation. With optimal parameter values, the simulated precipitation climatology could be remarkably improved, e.g. increased precipitation over the equatorial Indian Ocean, suppressed precipitation over the Philippine Sea, and a more realistic Meiyu distribution over Eastern China. The ASM precipitation interannual variability is further analyzed, with a focus on the ENSO impacts. It shows that the simulations with better ASM precipitation climatology can also produce more realistic precipitation anomalies during El Niño decaying summer. In the low-skill experiments for precipitation climatology, the ENSO-induced precipitation anomalies are most significant over continents (vs. over ocean in observation) in the South Asian monsoon region. More realistic results are derived from the higher-skill experiments, with stronger anomalies over the Indian Ocean and weaker anomalies over India and the western Pacific, favoring more evident easterly anomalies forced by the tropical Indian Ocean warming and a stronger Indian Ocean-western Pacific tele-connection as observed. Our model results reveal a strong connection between the simulated ASM precipitation climatological state and interannual variability in BCC_AGCM2.1 when key parameters are perturbed.
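An "efficient sampling approach" for perturbing a handful of parameters is often a Latin hypercube design; the sketch below builds one for eight cloud/precipitation-like parameters with invented names and ranges, standing in for the study's actual sampling scheme.

```python
# Latin hypercube design for eight perturbed parameters; names and ranges are invented.
import numpy as np
from scipy.stats import qmc

names = ["precip_efficiency", "evap_coeff_deep", "autoconv_threshold", "ice_fall_speed",
         "entrainment_rate", "cloud_frac_crit", "rain_evap_coeff", "snow_to_rain_T"]
lower = np.array([0.5, 0.2, 1e-4, 0.5, 0.5e-4, 0.6, 0.5, 268.0])
upper = np.array([1.0, 2.0, 5e-4, 2.0, 2.0e-4, 0.9, 2.0, 274.0])

sampler = qmc.LatinHypercube(d=len(names), seed=0)
design = qmc.scale(sampler.random(n=64), lower, upper)    # 64 model runs
print(design.shape, design[0].round(4))
```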
Aftermath of early Hit-and-Run collisions in the Inner Solar System
NASA Astrophysics Data System (ADS)
Sarid, Gal; Stewart, Sarah T.; Leinhardt, Zoe M.
2015-08-01
The planet formation epoch in the terrestrial planet region and the asteroid belt was characterized by a vigorous dynamical environment conducive to giant impacts among planetary embryos and asteroidal parent bodies, leading to diverse outcomes. Among these, the greatest potential for producing diverse end-members lies in the erosive hit-and-run regime (small mass ratios, off-axis oblique impacts, and non-negligible ejected mass), which is also more probable given the early dynamical encounter configurations in the inner solar system. This collision regime has been invoked to explain outstanding issues such as planetary volatile loss records, the origin of the Moon, and mantle stripping from Mercury and some of the larger asteroids (Vesta, Psyche). We performed and analyzed a set of simulations of hit-and-run events covering a large range of mass ratios (1-20), impact parameters (0.25-0.96, from near head-on to barely grazing), and impact velocities (~1.5-5 times the mutual escape velocity, depending on the mass ratio). We used an SPH code with tabulated equations of state and a nominal simulated time of >1 day to track the collisional shock processing and the provenance of material components of the collision debris. Prior to the impact runs, all bodies were allowed to settle to negligible particle velocities in isolation within ~20 simulated hours. The total number of particles in each collision simulation was between 1 and 3 x 10^5. Resulting configurations include stripped mantles, melting/vaporization of rock and/or iron cores, and strong departures of asteroid parent bodies from canonical chondritic composition. In the context of large planet formation simulations, velocity and impact angle distributions are necessary to assess impact probabilities. The mass distribution and interactions within planetary embryo and asteroid swarms depend on both gravitational dynamics and the applied fragmentation mechanism. We will present results pertaining to general projectile remnant scaling relations, the constitution of ejected unbound material, and the composition of the varied collision remnants that become available to seed the asteroid belt.
Impact of aerosols on ice crystal size
NASA Astrophysics Data System (ADS)
Zhao, Bin; Liou, Kuo-Nan; Gu, Yu; Jiang, Jonathan H.; Li, Qinbin; Fu, Rong; Huang, Lei; Liu, Xiaohong; Shi, Xiangjun; Su, Hui; He, Cenlin
2018-01-01
The interactions between aerosols and ice clouds represent one of the largest uncertainties in global radiative forcing from pre-industrial time to the present. In particular, the impact of aerosols on ice crystal effective radius (Rei), which is a key parameter determining ice clouds' net radiative effect, is highly uncertain due to limited and conflicting observational evidence. Here we investigate the effects of aerosols on Rei under different meteorological conditions using 9-year satellite observations. We find that the responses of Rei to aerosol loadings are modulated by water vapor amount in conjunction with several other meteorological parameters. While there is a significant negative correlation between Rei and aerosol loading in moist conditions, consistent with the "Twomey effect" for liquid clouds, a strong positive correlation between the two occurs in dry conditions. Simulations based on a cloud parcel model suggest that water vapor modulates the relative importance of different ice nucleation modes, leading to the opposite aerosol impacts between moist and dry conditions. When ice clouds are decomposed into those generated from deep convection and formed in situ, the water vapor modulation remains in effect for both ice cloud types, although the sensitivities of Rei to aerosols differ noticeably between them due to distinct formation mechanisms. The water vapor modulation can largely explain the difference in the responses of Rei to aerosol loadings in various seasons. A proper representation of the water vapor modulation is essential for an accurate estimate of aerosol-cloud radiative forcing produced by ice clouds.
Analytic Ballistic Performance Model of Whipple Shields
NASA Technical Reports Server (NTRS)
Miller, J. E.; Bjorkman, M. D.; Christiansen, E. L.; Ryan, S. J.
2014-01-01
The dual-wall Whipple shield is the shield of choice for lightweight, long-duration flight. The shield uses an initial sacrificial wall to initiate fragmentation and melt an impacting threat that expands over a void before hitting a subsequent shield wall of a critical component. The key parameters of this type of shield are the rear wall and its mass, which stops the debris, the minimum pressure generated when the threat particle impacts the sacrificial wall, and the amount of void available for expansion. Ensuring the minimum pressure is sufficiently high to achieve large-scale fragmentation/melt of the threat particle enables the expansion of the threat and reduces the momentum flux of the debris on the rear wall. Three key factors in the minimum pressure achieved are the thickness of the sacrificial wall relative to the characteristic dimension of the impacting particle, the density and material cohesion contrast of the sacrificial wall relative to the threat particle, and the impact speed. Minimizing the mass of the rear wall and the sacrificial wall is desirable for launch costs and dynamic concerns, making it important to understand the effects of density contrast and impact speed. In this paper a fourth key parameter is identified related to fragmentation, which corresponds to the ratio of the size of the projectile relative to the transition from brittle to ductile hole growth in the projectile. Ballistic limit equations have been developed to define the failure limits of an MMOD shield, generally in terms of projectile diameter (or mass), impact velocity, and angle. Within the range of impact velocities relevant for Earth-orbiting spacecraft, three distinct regions of penetration phenomenology have been identified for Whipple shields:
- Low velocity: the projectile is eroded (and possibly deformed) during its passage through the bumper plate, but is not fragmented. Thus, perforation of the rear wall is by a fragment with a mass and speed equal to or less than the original impactor.
- Intermediate (shatter) velocity: impact velocities are sufficient to induce projectile fragmentation upon impact with the bumper plate, resulting in a coarse debris cloud with large solid fragments. Increasing velocity within the shatter regime results in increased fragmentation, and eventually melting, of the projectile and bumper fragments, generating a finer and more evenly dispersed debris cloud. Failure of the rear wall is a complicated combination of the modes observed at low velocity and hypervelocity.
- Hypervelocity: the projectile and holed-out bumper material is completely, or nearly completely, melted and/or vaporized by the initial impact. The resultant debris cloud impacts over a dispersed area of the rear wall, loading it impulsively and inducing failure through rupture or petalling.
While each of these regimes is well observed, with extensive empirical methods to describe them, differences in impactor materials, shield configurations, and questions about the limits of attainable impact speeds leave questions that are difficult to answer by purely empirical methods.
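For the hypervelocity regime described above, a commonly quoted form of the Whipple-shield ballistic limit equation gives the critical projectile diameter as a function of rear-wall thickness, densities, normal impact speed, rear-wall yield stress, and standoff. The coefficient and exponents below are reproduced from memory and should be checked against the published NASA BLE documents before use.

```python
# Hedged sketch of a hypervelocity-regime Whipple ballistic limit equation; verify the
# constant and exponents against the source documents before relying on the numbers.
import numpy as np

def critical_diameter_hv(t_w, rho_p, rho_b, V, theta_deg, sigma_ksi, S):
    """Critical projectile diameter [cm] for rear-wall failure, hypervelocity regime."""
    Vn = V * np.cos(np.radians(theta_deg))           # normal component of velocity, km/s
    return (3.918 * t_w ** (2 / 3) * rho_p ** (-1 / 3) * rho_b ** (-1 / 9)
            * Vn ** (-2 / 3) * (sigma_ksi / 70.0) ** (1 / 3) * S ** (1 / 3))

# aluminium-on-aluminium example: 0.48 cm rear wall, 10 cm standoff, 10 km/s normal impact
print(round(critical_diameter_hv(0.48, 2.8, 2.8, 10.0, 0.0, 57.0, 10.0), 3))
```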
ERIC Educational Resources Information Center
Garber, Mel; Adams, Katherine R.
2017-01-01
Collective impact is a model for achieving tangible change and improvement in communities through a series of well-defined parameters of collaboration. This article provides a 10-year reflection on the University of Georgia Archway Partnership, a university-community collaboration, in the context of the parameters of collective impact. Emphasis is…
NASA Astrophysics Data System (ADS)
Dioguardi, Fabio; Mele, Daniela
2018-03-01
This paper presents PYFLOW_2.0, a hazard tool for the calculation of the impact parameters of dilute pyroclastic density currents (DPDCs). DPDCs represent the dilute turbulent type of gravity flows that occur during explosive volcanic eruptions; their hazard is the result of their mobility and the capability to laterally impact buildings and infrastructures and to transport variable amounts of volcanic ash along the path. Starting from data coming from the analysis of deposits formed by DPDCs, PYFLOW_2.0 calculates the flow properties (e.g., velocity, bulk density, thickness) and impact parameters (dynamic pressure, deposition time) at the location of the sampled outcrop. Given the inherent uncertainties related to sampling, laboratory analyses, and modeling assumptions, the program provides ranges of variations and probability density functions of the impact parameters rather than single specific values; from these functions, the user can interrogate the program to obtain the value of the computed impact parameter at any specified exceedance probability. In this paper, the sedimentological models implemented in PYFLOW_2.0 are presented, program functionalities are briefly introduced, and two application examples are discussed so as to show the capabilities of the software in quantifying the impact of the analyzed DPDCs in terms of dynamic pressure, volcanic ash concentration, and residence time in the atmosphere. The software and user's manual are made available as a downloadable electronic supplement.
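The headline impact parameter for a dilute PDC is the dynamic pressure, P_dyn = 0.5 ρ u²; the sketch below propagates assumed lognormal spreads in bulk density and velocity to an exceedance-probability table, loosely mimicking how the tool reports ranges rather than single values (distributions and numbers are invented).

```python
# Dynamic pressure with exceedance probabilities from assumed input distributions.
import numpy as np

rng = np.random.default_rng(5)
rho = rng.lognormal(np.log(2.0), 0.30, 100000)    # flow bulk density, kg/m^3 (assumed)
u = rng.lognormal(np.log(30.0), 0.25, 100000)     # flow velocity, m/s (assumed)
P_dyn = 0.5 * rho * u**2 / 1000.0                 # kPa

for p in (0.5, 0.1, 0.05):
    print(f"P_dyn exceeded with probability {p}: {np.quantile(P_dyn, 1 - p):.1f} kPa")
```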
Genovart, Meritxell; Sanz-Aguilar, Ana; Fernández-Chacón, Albert; Igual, Jose M; Pradel, Roger; Forero, Manuela G; Oro, Daniel
2013-01-01
Large-scale seasonal climatic indices, such as the North Atlantic Oscillation (NAO) index or the Southern Oscillation Index (SOI), account for major variations in weather and climate around the world and may influence population dynamics in many organisms. However, assessing the extent of climate impacts on species and their life-history traits requires reliable quantitative statistical approaches. We used a new analytical tool in mark-recapture, multi-event modelling, to simultaneously assess the influence of climatic variation on multiple demographic parameters (i.e. adult survival, transient probability, reproductive skipping and nest dispersal) at two Mediterranean colonies of the Cory's shearwater Calonectris diomedea, a trans-equatorial migratory long-lived seabird. We also analysed the impact of climate on breeding success at the two colonies. We found a clear temporal variation of survival for Cory's shearwaters, strongly associated with the large-scale SOI, especially in one of the colonies (up to 66% of variance explained). The Atlantic hurricane season is modulated by the SOI and coincides with shearwater migration to their wintering areas, directly affecting survival probabilities. However, the SOI was a better predictor of survival probabilities than the frequency of hurricanes; thus, we cannot discard an indirect additive effect of SOI via food availability. Accordingly, the proportion of transients was also correlated with SOI values, indicating higher costs of first reproduction (resulting in either mortality or permanent dispersal) when bad environmental conditions occurred during the winter before reproduction. Breeding success was also affected by climatic factors, the NAO explaining c. 41% of variance, probably as a result of its effect on the timing of peak abundance of squid and small pelagics, the main prey for shearwaters. No climatic effect was found on either reproductive skipping or nest dispersal. Contrary to what we would expect for a long-lived organism, large-scale climatic indices had a more pronounced effect on survival and transient probabilities than on less sensitive fitness parameters such as reproductive skipping or nest dispersal probabilities. The potential increase in hurricane frequency because of global warming may interact with other global change agents (such as incidental bycatch and predation by alien species) now impacting shearwaters, affecting the future viability of populations. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
Geological implications of impacts of large asteroids and comets on the earth
NASA Technical Reports Server (NTRS)
Silver, L. T. (Editor); Schultz, P. H. (Editor)
1982-01-01
The present conference discusses such topics as large object fluxes in near-earth space and the probabilities of terrestrial impacts, the geological record of impacts, dynamics modeling for large body impacts on continents and oceans, physical, chemical, and biological models of large impacts' atmospheric effects, dispersed impact ejecta and their signatures, general considerations concerning mass biological extinctions, the Cretaceous/Tertiary boundary event, geochemical signatures in the stratigraphic record, and other phanerozoic events. Attention is given to terrestrial impact rates for long- and short-period comets, estimates of crater size for large body impact, a first-order estimate of shock heating and vaporization in oceanic impacts, atmospheric effects in the first few minutes after an impact, a feasibility test for biogeographic extinction, and the planktonic and dinosaur extinctions.
NASA Astrophysics Data System (ADS)
Aydemir, Birsen; Kiziler, Ali Riza; Onaran, Ilhan; Alici, Bülent; Özkara, Hamdi; Akyolcu, Mehmet Can
2007-04-01
To investigate the impact of testosterone, zinc, calcium and magnesium concentrations in serum and seminal plasma on sperm parameters. There were significant decreases in sperm parameters and in serum and seminal plasma zinc levels in subfertile males. This indicates that zinc has an essential role in male infertility; determining the zinc level during infertility investigations is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Mi-Young; Yoon, Jung-Sik; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr
2015-04-15
The renormalization shielding effects on the electron-impact ionization of hydrogen atom are investigated in dense partially ionized plasmas. The effective projectile-target interaction Hamiltonian and the semiclassical trajectory method are employed to obtain the transition amplitude as well as the ionization probability as functions of the impact parameter, the collision energy, and the renormalization parameter. It is found that the renormalization shielding effect suppresses the transition amplitude for the electron-impact ionization process in dense partially ionized plasmas. It is also found that the renormalization effect suppresses the differential ionization cross section in the peak impact parameter region. In addition, it is found that the influence of renormalization shielding on the ionization cross section decreases with an increase of the relative collision energy. The variations of the renormalization shielding effects on the electron-impact ionization cross section are also discussed.
A holistic approach towards defined product attributes by Maillard-type food processing.
Davidek, Tomas; Illmann, Silke; Rytz, Andreas; Blank, Imre
2013-07-01
A fractional factorial experimental design was used to quantify the impact of process and recipe parameters on selected product attributes of extruded products (colour, viscosity, acrylamide, and the flavour marker 4-hydroxy-2,5-dimethyl-3(2H)-furanone, HDMF). The study has shown that recipe parameters (lysine, phosphate) can be used to modulate the HDMF level without changing the specific mechanical energy (SME) and consequently the texture of the product, while processing parameters (temperature, moisture) impact both HDMF and SME in parallel. Similarly, several parameters, including phosphate level, temperature and moisture, simultaneously impact both HDMF and acrylamide formation, while pH and addition of lysine showed different trends. Therefore, the latter two options can be used to mitigate acrylamide without a negative impact on flavour. Such a holistic approach has been shown as a powerful tool to optimize various product attributes upon food processing.
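A minimal sketch of a two-level fractional factorial screen like the one described: build a 2^(4-1) design with the generator D = ABC, simulate a response, and estimate main effects from the design contrasts. Factor names and the response model are invented.

```python
# Two-level fractional factorial design and main-effect estimation; response is simulated.
import itertools
import numpy as np

factors = ["temperature", "moisture", "lysine", "phosphate"]
full = np.array(list(itertools.product([-1, 1], repeat=3)))             # 2^3 base runs
design = np.column_stack([full, full[:, 0] * full[:, 1] * full[:, 2]])  # 2^(4-1), D = ABC

rng = np.random.default_rng(6)
# pretend response (e.g. a flavour marker level): temperature and phosphate dominate
y = 10 + 3 * design[:, 0] + 0.5 * design[:, 1] + 1.5 * design[:, 3] + rng.normal(0, 0.2, 8)

effects = 2 * design.T @ y / len(y)          # main effect = mean(+1 runs) - mean(-1 runs)
for name, eff in zip(factors, effects):
    print(f"{name:12s} {eff:+.2f}")
```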
Optical properties of the Einstein-de Sitter-Kasner universe
NASA Astrophysics Data System (ADS)
Landry, Sylvie; Dyer, Charles C.
1997-09-01
Most studies of gravitational lensing and their impact on observations concentrate on lensing structures which are bounded, that is, of some finite size in an otherwise reasonably smooth background universe. In this paper, we consider a model of the universe, the "cheese slice" universe, where the lensing is caused by very large scale structures: large slabs of alternating pure vacuum and Friedmann-Lemaître-Robertson-Walker (FLRW) dust. The ray tracing problem is solved and shows that only the Kasner regions will introduce a bending in the beam as it propagates. The Kasner slices also introduce anisotropic redshift effects. The optical scalar equations are used as a tool to obtain the cross-sectional area and shape of the beam. All physical properties of a bundle of rays traveling through the cheese slice model are obtained analytically. The only nonanalytical result is the evaluation, in Kasner regions, of the time variable along the beam as a function of the affine parameter. Practical model results are obtained from a computer code. Multislice models are studied and the resulting impact on astronomical observations, which includes the introduction of shear and amplification, is demonstrated.
Körzendörfer, Adrian; Nöbel, Stefan; Hinrichs, Jörg
2017-07-01
Two major quality defects of yogurt are syneresis and the presence of large particles, and several causes have been discussed extensively. Vibrations during fermentation, particularly those generated by pumps, must be considered as a further cause, as recent research showed that both ultrasound and low frequencies induced visible particles. The aim of this study was to investigate the impact of sonication during fermentation with starter cultures differing in exopolysaccharide (EPS) synthesis on the physical properties of set (syneresis, firmness) and stirred yogurt (large particles, laser diffraction, rheology). Skim milk was fermented with starter cultures YC-471 (low EPS) or YF-L 901 (high EPS) (Chr. Hansen) and sonicated for 5 min at pH 5.2. Sonicated set gels exhibited syneresis and were softer than the respective controls. The mechanical treatment was adjusted to quantify visible particles (d ≥ 0.9 mm) in stirred yogurts properly. Sonication significantly increased particle numbers; however, the effect was less pronounced when YF-L 901 was used, indicating that EPS can serve as a tool to reduce syneresis and particle formation caused by vibrations. Rheological parameters and the size of microgel particles were influenced more by the starter cultures than by sonication. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tian, Jiting; Zhou, Wei; Feng, Qijie; Zheng, Jian
2018-03-01
An unsolved problem in research on sputtering from metals induced by energetic large cluster ions is that molecular dynamics (MD) simulations often produce sputtering yields much higher than experimental results. Different from previous simulations considering only elastic atomic interactions (nuclear stopping), here we incorporate inelastic electron-atom interactions (electronic stopping, ES) into MD simulations using a friction model. In this way we have simulated continuous 45° impacts of 10-20 keV C60 on a Ag(111) surface, and found that the calculated sputtering yields can be very close to the experimental results when the model parameter is appropriately assigned. Conversely, when we ignore the effect of ES, the yields are much higher, just as in the previous studies. We further extend our study to the sputtering of Au induced by continuous keV C60 or Ar100 bombardments, and obtain quite similar results. Our study indicates that the gap between the experimental and the simulated sputtering yields is probably caused by the neglect of ES in the simulations, and that a careful treatment of this issue is important for simulations of cluster-ion-induced sputtering, especially for those aiming to compare with experiments.
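The friction treatment of electronic stopping can be sketched as a drag force F = -γv applied, inside the integrator, only to atoms whose kinetic energy exceeds a cutoff; γ and the cutoff below are placeholders, not the paper's fitted parameter.

```python
# Sketch of electronic stopping as a friction force inside a velocity-Verlet half step;
# gamma and e_cut are placeholder values.
import numpy as np

def verlet_step_with_es(x, v, f, mass, dt, gamma, e_cut):
    ke = 0.5 * mass * np.sum(v**2, axis=1)
    drag = -(gamma * v) * (ke > e_cut)[:, None]      # drag applied only to fast atoms
    a = (f + drag) / mass
    v_half = v + 0.5 * dt * a
    x_new = x + dt * v_half
    return x_new, v_half                             # forces are then recomputed and v completed

rng = np.random.default_rng(7)
x = rng.random((100, 3)); v = rng.normal(0, 5e2, (100, 3)); f = np.zeros((100, 3))
x, v = verlet_step_with_es(x, v, f, mass=108 * 1.66e-27, dt=1e-15, gamma=1e-14, e_cut=1.6e-18)
print(x.shape, v.shape)
```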
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, Diana Holford; Locke II, Randall A.; Keating, Elizabeth
The National Risk Assessment Partnership (NRAP) has developed a suite of tools to assess and manage risk at CO2 sequestration sites (1). The NRAP tool suite includes the Aquifer Impact Model (AIM), based on reduced order models developed using site-specific data from two aquifers (alluvium and carbonate). The models accept aquifer parameters as a range of variable inputs so they may have broader applicability. Guidelines have been developed for determining the aquifer types for which the ROMs should be applicable. This paper considers the applicability of the aquifer models in AIM to predicting the impact of CO2 or brine leakage, were it to occur, at the Illinois Basin Decatur Project (IBDP). Based on the results of the sensitivity analysis, the hydraulic parameters and leakage source term magnitude are more sensitive than clay fraction or cation exchange capacity. Sand permeability was the only hydraulic parameter measured at the IBDP site. More information on the other hydraulic parameters, such as sand fraction and sand/clay correlation lengths, could reduce uncertainty in risk estimates. Some non-adjustable parameters, such as the initial pH and TDS and the pH no-impact threshold, are significantly different for the ROM than for the observations at the IBDP site. The reduced order model could be made more useful to a wider range of sites if the initial conditions and no-impact threshold values were adjustable parameters.
Effects of cosmic acceleration on black hole thermodynamics
NASA Astrophysics Data System (ADS)
Mandal, Abhijit
2016-07-01
The direct local impact of cosmic acceleration on a black hole is a matter of interest. Babichev et al. showed earlier that the Friedmann equations governing the fluid that dominates the other constituents of the universe and drives the present-day accelerating phase (that is, a fluid that violates the strong and, eventually, the weak energy condition) imply that accretion of such an exotic fluid shrinks the mass of the central black hole. That, however, is a global effect. The local changes in the spacetime geometry next to the black hole can be analysed from a modified metric governing the black hole's surroundings. A charged de Sitter black hole solution surrounded by a quintessence field is chosen for this purpose. Different thermodynamic parameters are analysed for different values of the quintessence equation-of-state parameter, ω_q. Distinct jumps in the behaviour of the thermodynamic quantities near the quintessence and phantom barriers are noted and interpreted physically as far as possible. The nature of the phase transitions and the conditions under which they occur are also explored. It is found that before quintessence takes effect (ω_q = -0.33 > -1/3) a small unstable black hole followed by a large stable one is preferred, whereas in the quintessence regime (-1/3 > ω_q > -1) black holes end up as large unstable ones preceded by stable or unstable small and intermediate-mass black holes.
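As a heavily hedged numerical sketch, one common way to model a charged black hole surrounded by quintessence is a Kiselev-type lapse function f(r) = 1 - 2M/r + Q^2/r^2 - a/r^(3ω_q+1) (the paper's exact metric, which also includes a de Sitter term, may differ); the horizon follows from f(r_h) = 0 and the temperature from T = f'(r_h)/(4π).

```python
# Hedged sketch: horizon radius and Hawking temperature for a Kiselev-type lapse function;
# the functional form and parameter values are assumptions, not the paper's metric.
import numpy as np
from scipy.optimize import brentq

def f(r, M=1.0, Q=0.5, a=0.01, w=-2/3):
    return 1 - 2 * M / r + Q**2 / r**2 - a / r**(3 * w + 1)

def temperature(r_h):
    eps = 1e-6
    return (f(r_h + eps) - f(r_h - eps)) / (2 * eps) / (4 * np.pi)   # T = f'(r_h) / (4 pi)

r_h = brentq(f, 0.3, 10.0)          # outer event horizon within the bracketing radii
print(round(r_h, 4), round(temperature(r_h), 5))
```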
Impact of Multiple Factors on the Degree of Tinnitus Distress.
Brüggemann, Petra; Szczepek, Agnieszka J; Rose, Matthias; McKenna, Laurence; Olze, Heidi; Mazurek, Birgit
2016-01-01
The primary cause of subjective tinnitus is a dysfunction of the auditory system; however, the degree of distress tinnitus causes depends largely on the psychological status of the patient. Our goal was to attempt to associate the grade of tinnitus-related distress with the psychological distress, physical, or psychological discomfort patients experienced, as well as potentially relevant social parameters, through a simultaneous analysis of these factors. We determined the level of tinnitus-related distress in 531 tinnitus patients using the German version of the tinnitus questionnaire (TQ). In addition, we used the Perceived Stress Questionnaire (PSQ); General Depression Scale Allgemeine Depression Skala (ADS), Berlin Mood Questionnaire (BSF); somatic symptoms inventory (BI), and SF-8 health survey as well as general information collected through a medical history. The TQ score significantly correlated with a score obtained using PSQ, ADS, BSF, BI, and SF-8 alongside psychosocial factors such as age, gender, and marital status. The level of hearing loss and the auditory properties of the specific tinnitus combined with perceived stress and the degree of depressive mood and somatic discomfort of a patient were identified as medium-strong predictors of chronic tinnitus. Social factors such as gender, age, or marital status also had an impact on the degree of tinnitus distress. The results that were obtained were implemented in a specific cortical distress network model. Using a large representative sample of patients with chronic tinnitus permitted a simultaneous statistical measurement of psychometric and audiological parameters in predicting tinnitus distress. We demonstrate that single factors can be distinguished in a manner that explains their causative association and influence on the induction of tinnitus-related distress.
Impact of Multiple Factors on the Degree of Tinnitus Distress
Brüggemann, Petra; Szczepek, Agnieszka J.; Rose, Matthias; McKenna, Laurence; Olze, Heidi; Mazurek, Birgit
2016-01-01
Objective: The primary cause of subjective tinnitus is a dysfunction of the auditory system; however, the degree of distress tinnitus causes depends largely on the psychological status of the patient. Our goal was to attempt to associate the grade of tinnitus-related distress with the psychological distress, physical, or psychological discomfort patients experienced, as well as potentially relevant social parameters, through a simultaneous analysis of these factors. Methods: We determined the level of tinnitus-related distress in 531 tinnitus patients using the German version of the tinnitus questionnaire (TQ). In addition, we used the Perceived Stress Questionnaire (PSQ); General Depression Scale Allgemeine Depression Skala (ADS), Berlin Mood Questionnaire (BSF); somatic symptoms inventory (BI), and SF-8 health survey as well as general information collected through a medical history. Results: The TQ score significantly correlated with a score obtained using PSQ, ADS, BSF, BI, and SF-8 alongside psychosocial factors such as age, gender, and marital status. The level of hearing loss and the auditory properties of the specific tinnitus combined with perceived stress and the degree of depressive mood and somatic discomfort of a patient were identified as medium-strong predictors of chronic tinnitus. Social factors such as gender, age, or marital status also had an impact on the degree of tinnitus distress. The results that were obtained were implemented in a specific cortical distress network model. Conclusions: Using a large representative sample of patients with chronic tinnitus permitted a simultaneous statistical measurement of psychometric and audiological parameters in predicting tinnitus distress. We demonstrate that single factors can be distinguished in a manner that explains their causative association and influence on the induction of tinnitus-related distress. PMID:27445776
Electroweak production of the top quark in the Run II of the D0 experiment (in French)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clement, Benoit
The work presented in this thesis deals with the search for electroweak production of the top quark (single top) in proton-antiproton collisions at √s = 1.96 TeV. This production mode has not yet been observed. The analyzed data were collected during Run II of the D0 experiment at the Fermilab Tevatron collider and correspond to an integrated luminosity of 370 pb^-1. In the Standard Model, the decay of a top quark always produces a high-momentum bottom quark. Therefore bottom quark jet identification plays a major role in this analysis. The large lifetime of b hadrons and the resulting large impact parameters, relative to the interaction vertex, of charged particle tracks are used to tag bottom quark jets. Impact parameters of tracks attached to a jet are converted into the probability for the jet to originate from the primary vertex. This algorithm has a 45% tagging efficiency for a 0.5% mistag rate. Two processes (s and t channels) dominate single top production with slightly different final states. The searched signature consists of 2 to 4 jets with at least one bottom quark jet, one charged lepton (electron or muon) and missing energy accounting for a neutrino. This final state is background dominated and multivariate techniques are needed to separate the signal from the two main backgrounds: associated production of a W boson and jets, and top quark pair production. The achieved sensitivity is not sufficient for observation, and we computed upper limits at the 95% confidence level of 5 pb (s-channel) and 4.3 pb (t-channel) on the single top production cross-sections.
Aircraft type influence on contrail properties
NASA Astrophysics Data System (ADS)
Jeßberger, P.; Voigt, C.; Schumann, U.; Sölch, I.; Schlager, H.; Kaufmann, S.; Petzold, A.; Schäuble, D.; Gayet, J.-F.
2013-05-01
The investigation of the impact of aircraft parameters on contrail properties helps to better understand the climate impact from aviation. Yet, in observations, it is a challenge to separate aircraft and meteorological influences on contrail formation. During the CONCERT campaign in November 2008, contrails from 3 Airbus passenger aircraft of type A319-111, A340-311 and A380-841 were probed at cruise under similar meteorological conditions with in-situ instruments on board the DLR research aircraft Falcon. Within the 2 min old contrails detected near ice saturation, we find similar effective diameters Deff (5.2-5.9 μm), but differences in particle number densities nice (162-235 cm-3) and in vertical contrail extensions (120-290 m), resulting in large differences in contrail optical depths τ (0.25-0.94). Hence larger aircraft produce optically thicker contrails. Based on the observations, we apply the EULAG-LCM model with explicit ice microphysics and in addition the Contrail and Cirrus Prediction model CoCiP to calculate the aircraft type impact on young contrails under identical meteorological conditions. The observed increase in τ for heavier aircraft is confirmed by the models, yet for generally smaller τ. An aircraft dependence of climate relevant contrail properties persists during contrail lifetime, adding importance to aircraft dependent model initialization. We finally derive an analytical relationship between contrail, aircraft and meteorological parameters. Near ice saturation, contrail width × τ scales linearly with fuel flow rate as confirmed by observations. For higher saturation ratios approximations from theory suggest a non-linear increase in the form (RHI-1)2/3. Summarized our combined results could help to more accurately assess the climate impact from aviation using an aircraft dependent contrail parameterization.
Aircraft type influence on contrail properties
NASA Astrophysics Data System (ADS)
Jeßberger, P.; Voigt, C.; Schumann, U.; Sölch, I.; Schlager, H.; Kaufmann, S.; Petzold, A.; Schäuble, D.; Gayet, J.-F.
2013-12-01
The investigation of the impact of aircraft parameters on contrail properties helps to better understand the climate impact from aviation. Yet, in observations, it is a challenge to separate aircraft and meteorological influences on contrail formation. During the CONCERT campaign in November 2008, contrails from 3 Airbus passenger aircraft of types A319-111, A340-311 and A380-841 were probed at cruise under similar meteorological conditions with in situ instruments on board the DLR research aircraft Falcon. Within the 2 min-old contrails detected near ice saturation, we find similar effective diameters Deff (5.2-5.9 μm), but differences in particle number densities nice (162-235 cm-3) and in vertical contrail extensions (120-290 m), resulting in large differences in contrail optical depths τ at 550 nm (0.25-0.94). Hence larger aircraft produce optically thicker contrails. Based on the observations, we apply the EULAG-LCM model with explicit ice microphysics and, in addition, the Contrail and Cirrus Prediction (CoCiP) model to calculate the aircraft type impact on young contrails under identical meteorological conditions. The observed increase in τ for heavier aircraft is confirmed by the models, yet for generally smaller τ. CoCiP model results suggest that the aircraft dependence of climate-relevant contrail properties persists during contrail lifetime, adding importance to aircraft-dependent model initialization. We finally derive an analytical relationship between contrail, aircraft and meteorological parameters. Near ice saturation, contrail width × τ scales linearly with the fuel flow rate, as confirmed by observations. For higher relative humidity with respect to ice (RHI), the analytical relationship suggests a non-linear increase in the form (RHI-1)2/3. Summarized, our combined results could help to more accurately assess the climate impact from aviation using an aircraft-dependent contrail parameterization.
Beer tapping: dynamics of bubbles after impact
NASA Astrophysics Data System (ADS)
Mantič-Lugo, V.; Cayron, A.; Brun, P.-T.; Gallaire, F.
2015-12-01
Beer tapping is a well known prank where a bottle of beer is impacted from the top by a solid object, usually another bottle, leading to a sudden foam overflow. A description of the shock-driven bubble dynamics leading to foaming is presented based on an experimental and numerical study evoking the following physical picture. First, the solid impact produces a sudden downwards acceleration of the bottle creating a strong depression in the liquid bulk. The existing bubbles undergo a strong expansion and a sudden contraction ending in their collapse and fragmentation into a large amount of small bubbles. Second, the bubble clouds present a large surface area to volume ratio, enhancing the CO2 diffusion from the supersaturated liquid, hence growing rapidly and depleting the CO2. The clouds of bubbles migrate upwards in the form of plumes pulling the surrounding liquid with them and eventually resulting in the foam overflow. The sudden pressure drop that triggers the bubble dynamics with a collapse and oscillations is modelled by the Rayleigh-Plesset equation. The bubble dynamics from impact to collapse occurs over a time (tb ≃ 800 μs) much larger than the acoustic time scale of the liquid bulk (tac = 2H/c ≃ 80 μs), for the experimental container of height H = 6 cm and a speed of sound around c ≃ 1500 m/s. This scale separation, together with the comparison of numerical and experimental results, suggests that the pressure drop is controlled by two parameters: the acceleration of the container and the distance from the bubble to the free surface.
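The bubble response to a sudden pressure drop described above can be illustrated with a minimal numerical integration of the Rayleigh-Plesset equation. This is a hedged sketch, not the authors' model: the bubble radius, polytropic gas law, and the shape and magnitude of the far-field pressure depression are illustrative assumptions.

```python
# Hedged sketch: Rayleigh-Plesset dynamics of a single bubble responding to a
# short, strong pressure depression. All numerical values are illustrative
# assumptions, not the parameters used in the study.
import numpy as np
from scipy.integrate import solve_ivp

rho, sigma, mu = 1000.0, 0.072, 1.0e-3   # water density (kg/m^3), surface tension (N/m), viscosity (Pa s)
p0, R0 = 1.0e5, 50e-6                    # ambient pressure (Pa), initial bubble radius (m)
pg0 = p0 + 2 * sigma / R0                # initial gas pressure for a bubble in equilibrium
kappa = 1.4                              # polytropic exponent for the gas

def p_inf(t):
    """Far-field pressure: a short, strong depression after the impact (assumed shape)."""
    return p0 - 0.8e5 if t < 200e-6 else p0

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = pg0 * (R0 / R) ** (3 * kappa)
    accel = ((p_gas - p_inf(t) - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, accel]

sol = solve_ivp(rayleigh_plesset, (0.0, 800e-6), [R0, 0.0], max_step=1e-7, rtol=1e-8)
print(f"max radius {sol.y[0].max()*1e6:.1f} um, min radius {sol.y[0].min()*1e6:.1f} um")
```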
Impact of Simulated 1/f Noise for HI Intensity Mapping Experiments
NASA Astrophysics Data System (ADS)
Harper, S.; Dickinson, C.; Battye, R. A.; Roychowdhury, S.; Browne, I. W. A.; Ma, Y.-Z.; Olivari, L. C.; Chen, T.
2018-05-01
Cosmology has entered an era where the experimental limitations are not due to instrumental sensitivity but instead due to inherent systematic uncertainties in the instrumentation and data analysis methods. The field of HI intensity mapping (IM) is still maturing, yet early attempts are already systematics limited. One such systematic limitation is 1/f noise, which largely originates within the instrumentation and manifests as multiplicative gain fluctuations. To date there has been little discussion about the possible impact of 1/f noise on upcoming single-dish HI IM experiments such as BINGO, FAST or SKA. Presented in this work are Monte Carlo end-to-end simulations of a 30 day HI IM survey using the SKA-MID array, covering the band between 950 and 1410 MHz. These simulations extend 1/f noise models to include not just temporal fluctuations but also correlated gain fluctuations across the receiver bandpass. The power spectral density of the spectral gain fluctuations is modelled as a power law and characterised by a parameter β. It is found that the degree of 1/f noise frequency correlation will be critical to the success of HI IM experiments. Small values of β (β < 0.25), i.e. high correlation, are preferred, as such fluctuations are more easily removed using current component separation techniques. The spectral index of the temporal fluctuations (α) is also found to have a large impact on the signal-to-noise ratio. Telescope slew speed has a smaller impact, and a scan speed of 1 deg s-1 should be sufficient for an HI IM survey with the SKA.
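As a complement, the sketch below generates a single-channel gain time stream with a 1/f^α power spectral density of the general kind discussed above, by shaping white noise in Fourier space. The knee frequency, α and sampling cadence are illustrative assumptions, and the bandpass correlation studied in the paper (the β parameter) is not modelled here.

```python
# Hedged sketch: white + 1/f^alpha gain fluctuations for one receiver channel.
import numpy as np

def one_over_f_timestream(n_samples, dt, f_knee, alpha, rng):
    """Generate a real time stream whose PSD is 1 + (f_knee/f)^alpha (arbitrary units)."""
    freqs = np.fft.rfftfreq(n_samples, dt)
    psd = np.ones_like(freqs)
    psd[1:] += (f_knee / freqs[1:]) ** alpha          # 1/f part; skip f = 0
    spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spectrum *= np.sqrt(psd / 2.0)                    # shape white noise by the PSD
    return np.fft.irfft(spectrum, n=n_samples)

rng = np.random.default_rng(0)
gain = one_over_f_timestream(n_samples=2**16, dt=0.1, f_knee=1.0, alpha=1.0, rng=rng)
print(f"rms gain fluctuation: {gain.std():.3f} (arbitrary units)")
```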
Erosion resistance of arc-sprayed coatings to iron ore at 25 and 315 °C
NASA Astrophysics Data System (ADS)
Dallaire, S.; Levert, H.; Legoux, J.-G.
2001-06-01
Iron ore pellets are sintered and reduced in large continuous industrial oil-fired furnaces. From the furnace, powerful fans extract large volumes of hot gas. Being exposed to gas-borne iron ore particles and temperatures ranging between 125 and 328 °C, fan components are rapidly eroded. Extensive part repair or replacement is required for maintaining a profitable operation. The arc spraying technique has been suggested for repair provided it could produce erosion-resistant coatings. Conventional and cored wires (1.6 mm diameter) were arc sprayed using various spray parameters to produce 250 to 300 µm thick coatings. Arc-sprayed coatings and reference specimens were erosion tested at 25 and 315 °C and impact angles of 25 and 90° in a laboratory gas-blast erosion rig. This device was designed to impact materials with coarse (32 to 300 µm) iron ore particles at a speed of 100 m/s. The coating volume loss due to erosion was measured with a laser profilometer built by National Research Council Canada several years ago. Few arc-sprayed coatings exhibited erosion resistance comparable with structural steel at low impact angles. Erosion of arc-sprayed coatings and reference specimens dramatically increases at 315 °C for both 25° and 90° impact angles. Erosion-enhanced oxidation was found to be responsible for the increase in volume loss above room temperature. Though arc spraying can be appropriate for on-site repair, the development of more erosion-resistant coatings is required for intermediate temperatures.
NASA Astrophysics Data System (ADS)
Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.
2016-12-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities get, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
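The Sobol' workflow described above can be sketched with the SALib package on a stand-in model. The parameter names, bounds and the toy "latent heat" response below are illustrative assumptions and are not Noah-MP or its actual parameters.

```python
# Hedged sketch: Sobol' global sensitivity analysis with SALib on a toy model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["soil_resistance", "stomatal_resistance", "snow_albedo"],  # assumed names
    "bounds": [[10.0, 200.0], [40.0, 300.0], [0.4, 0.9]],                # assumed ranges
}

X = saltelli.sample(problem, 1024)           # N*(2D+2) parameter sets

def toy_model(x):
    """Stand-in scalar output, e.g. a mean latent heat flux (assumed functional form)."""
    rs, rc, alb = x
    return 100.0 / (1.0 + rs / 50.0) + 50.0 / (1.0 + rc / 100.0) - 20.0 * alb

Y = np.apply_along_axis(toy_model, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:20s}  first-order S1={s1:5.2f}  total-order ST={st:5.2f}")
```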
The quality estimation of exterior wall’s and window filling’s construction design
NASA Astrophysics Data System (ADS)
Saltykov, Ivan; Bovsunovskaya, Maria
2017-10-01
The article introduces the term "artificial envelope" in residential building. The authors propose a complex multifactorial approach to estimating the design quality of external enclosing structures, based on the impact of several parameters: functional, operational, cost, and environmental indices. A design quality index Qk is introduced as a combined characteristic of these parameters, and its mathematical dependence on them serves as the target function for the design quality estimation. As an example, the article presents the search for optimal wall and window designs in small, medium, and large residential rooms of economy-class buildings. Graphs of the individual parameters of the target function are given for the three room sizes. From this example, window opening dimensions are chosen that make the wall and window constructions properly meet the specified complex requirements. The authors compare the window area recommended by the building standards with the area obtained by finding the optimal value of the design quality index. The multifactorial approach to searching for an optimal design described here can be applied to various structural elements of residential buildings, taking into account the climatic, social, and economic features of the construction area.
Wu, Fangli; Xie, Zhe; Lan, Yawen; Dupont, Sam; Sun, Meng; Cui, Shuaikang; Huang, Xizhi; Huang, Wei; Liu, Liping; Hu, Menghong; Lu, Weiqun; Wang, Youji
2018-01-01
With the release of large amounts of CO2, ocean acidification is intensifying and affecting aquatic organisms. In addition, salinity also plays an important role for marine organisms and fluctuates greatly in estuarine and coastal ecosystems, where ocean acidification frequently occurs. In the present study, flow cytometry was used to investigate immune parameters of haemocytes in the thick-shell mussel Mytilus coruscus exposed to different salinities (15, 25, and 35‰) and two pH levels (7.3 and 8.1). A 7-day in vivo experiment and a 5-h in vitro experiment were performed. In both experiments, low pH had significant effects on all tested immune parameters. When exposed to decreased pH, total haemocyte count (THC), phagocytosis (Pha), esterase (Est), and lysosomal content (Lyso) were significantly decreased, whereas haemocyte mortality (HM) and reactive oxygen species (ROS) were increased. High salinity had no significant effects on the immune parameters of haemocytes as compared with low salinity. However, an interaction between pH and salinity was observed in both experiments for most tested haemocyte parameters. This study showed that high salinity, low salinity and low pH have negative and interactive effects on haemocytes of mussels. As a consequence, it can be expected that the combined effect of low pH and changed salinity will have more severe effects on mussel health than predicted by single exposure.
NASA Astrophysics Data System (ADS)
Ballinas, R.; Versini, P.-A.; Sempere, D.; Escaler, I.
2009-09-01
Any long-term change in the patterns of average weather on a global or regional scale is called climate change. It may cause a progressive increase of atmospheric temperature and consequently may change the amount, frequency and intensity of precipitation. All these changes in meteorological parameters may modify the water cycle: run-off, infiltration, aquifer recharge, etc. Recent studies in Catalonia foresee changes in hydrological systems caused by climate change. This will lead to alterations in the hydrological cycle that could affect land use, the regime of water extractions and the hydrological characteristics of the territory, and reduce groundwater recharge and river flows, in addition to possible increases in the frequency of extreme rainfall that would make it necessary to modify the design of infrastructure. For this reason, this work focuses on studying the impacts of climate change in one of the most important basins in Catalonia, the Llobregat River basin. The basin is the hub of the province of Barcelona: a highly populated and urbanized catchment, where water resources are used for different purposes such as drinking water production, agricultural irrigation, industry and hydro-electric energy production, so many companies and communities depend on these resources. To study the impact of climate change in the Llobregat basin, mainly on storms (frequency, intensity), regional climate change information is needed. A regional climate is determined by interactions at large, regional and local scales. General circulation models (GCMs) are run at too coarse a resolution to permit an accurate description of these regional and local interactions, and so far they have been unable to provide consistent estimates of climate change on a local scale. Several regionalization techniques have been developed to bridge the gap between the large-scale information provided by GCMs and the fine spatial scales required for regional and environmental impact studies, and downscaling methods have been developed to assess the effect of large-scale circulations on local parameters. Statistical downscaling methods are based on the view that regional climate is conditioned by two factors: the large-scale climatic state and regional/local features. Local climate information is derived by first developing a statistical model which relates large-scale variables or "predictors", for which GCMs are reliable, to regional or local surface "predictands", for which models are less skilful. The main advantage of these methods is that they are computationally inexpensive and can be applied to outputs from different GCM experiments. Three statistical downscaling methods are applied: the analogue method, delta change and direct forcing. These methods have been used to determine daily precipitation projections at rain gauge locations in order to study the intensity, frequency and variability of storms in a context of climate change in the Llobregat River basin in Catalonia, Spain. This work is part of the European project "Water Change" (included in the LIFE+ Environment Policy and Governance programme), which deals with medium- and long-term water resources modelling as a tool for planning and global change adaptation. Two stakeholders involved in the project provided the historical time series: the Catalan Water Agency (ACA) and the State Meteorological Agency (AEMET).
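Of the three methods named above, the delta change approach is the simplest to illustrate: a monthly change factor derived from the GCM (future mean over historical mean) perturbs the observed station series. The sketch below is a hedged, generic version of that idea; the synthetic series and multiplicative monthly factors are illustrative assumptions, not the project's data or exact implementation.

```python
# Hedged sketch: multiplicative delta-change perturbation of daily precipitation.
import numpy as np
import pandas as pd

def delta_change_precip(obs, gcm_hist, gcm_fut):
    """obs, gcm_hist, gcm_fut: pandas Series of daily precipitation (mm), date-indexed."""
    hist_monthly = gcm_hist.groupby(gcm_hist.index.month).mean()
    fut_monthly = gcm_fut.groupby(gcm_fut.index.month).mean()
    factors = (fut_monthly / hist_monthly).replace([np.inf, np.nan], 1.0)
    scale = factors.reindex(obs.index.month).to_numpy()   # one factor per day, by month
    return obs * scale

# Toy example with synthetic gamma-distributed daily precipitation
idx = pd.date_range("1971-01-01", "1980-12-31", freq="D")
rng = np.random.default_rng(1)
obs = pd.Series(rng.gamma(0.4, 5.0, idx.size), index=idx)
gcm_hist = pd.Series(rng.gamma(0.4, 5.0, idx.size), index=idx)
gcm_fut = pd.Series(rng.gamma(0.4, 4.0, idx.size), index=idx)   # drier future (assumed)
print(delta_change_precip(obs, gcm_hist, gcm_fut).mean(), obs.mean())
```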
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Xin; Ou, Xiaomin; Xu, Tingting
Purpose: To determine dosimetric risk factors for the occurrence of temporal lobe necrosis (TLN) among nasopharyngeal carcinoma (NPC) patients treated with intensity modulated radiation therapy (IMRT) and to investigate the impact of dose-volume histogram (DVH) parameters on the volume of TLN lesions (V-N). Methods and Materials: Forty-three NPC patients who had developed TLN following IMRT and 43 control subjects free of TLN were retrospectively assessed. DVH parameters included maximum dose (Dmax), minimum dose (Dmin), mean dose (Dmean), absolute volumes receiving specific dose (Vds) from 20 to 76 Gy (V20-V76), and doses covering certain volumes (Dvs) from 0.25 to 6.0 cm³ (D0.25-D6.0). V-Ns were quantified with axial magnetic resonance images. Results: DVH parameters were ubiquitously higher in temporal lobes with necrosis than in healthy temporal lobes. Increased Vds and Dvs were significantly associated with higher risk of TLN occurrence (P<.05). In particular, Vds at a dose of ≥70 Gy were found with the highest odds ratios. A common increasing trend was detected between V-N and DVH parameters through trend tests (P for trend of <.05). Linear regression analysis showed that V45 had the strongest predictive power for V-N (adjusted R² = 0.305, P<.0001). V45 of <15.1 cm³ was relatively safe as the dose constraint for preventing large TLN lesions with V-N of >5 cm³. Conclusions: Dosimetric parameters are significantly associated with TLN occurrence and the extent of temporal lobe injury. To better manage TLN, it would be important to avoid both focal high dose and moderate dose delivered to a large area in TLs.
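For readers unfamiliar with the Vd/Dv notation used above, the sketch below computes these DVH metrics from a per-voxel dose array (Vd: absolute volume receiving at least d Gy; Dv: minimum dose to the hottest v cm³). The synthetic dose grid and voxel volume are illustrative assumptions, not the study's data.

```python
# Hedged sketch: Vd and Dv metrics from a flattened per-voxel dose array.
import numpy as np

def v_d(dose_gy, voxel_cc, d):
    """Absolute volume (cm^3) receiving at least d Gy."""
    return np.count_nonzero(dose_gy >= d) * voxel_cc

def d_v(dose_gy, voxel_cc, v_cc):
    """Minimum dose (Gy) delivered to the hottest v_cc cm^3."""
    n = max(1, int(round(v_cc / voxel_cc)))
    return np.sort(dose_gy.ravel())[::-1][:n].min()

rng = np.random.default_rng(0)
dose = rng.gamma(shape=8.0, scale=5.0, size=20000)   # toy temporal-lobe dose voxels (Gy)
voxel_cc = 0.008                                     # 2 mm x 2 mm x 2 mm voxel
print("V45  =", round(v_d(dose, voxel_cc, 45.0), 1), "cm^3")
print("D1.0 =", round(d_v(dose, voxel_cc, 1.0), 1), "Gy")
```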
Kim, Kwang Hyun; Yoon, Hyun Suk; Song, Wan; Choo, Hee Jung; Yoon, Hana; Chung, Woo Sik; Sim, Bong Suk; Lee, Dong Hyeon
2017-01-01
To classify patients with orthotopic neobladder based on urodynamic parameters using cluster analysis and to characterize the voiding function of each group. From January 2012 to November 2015, 142 patients with bladder cancer underwent radical cystectomy and Studer neobladder reconstruction at our institute. Of the 142 patients, 103 with complete urodynamic data and information on urinary functional outcomes were included in this study. K-means clustering was performed with urodynamic parameters which included maximal cystometric capacity, residual volume, maximal flow rate, compliance, and detrusor pressure at maximum flow rate. Three groups emerged from the cluster analysis. Urodynamic parameters and urinary function outcomes were compared between the three groups. Group 1 (n = 44) had ideal urodynamic parameters with a mean maximal bladder capacity of 513.3 ml and mean residual urine volume of 33.1 ml. Group 2 (n = 42) was characterized by small bladder capacity with low compliance. Patients in group 2 had higher rates of daytime incontinence and nighttime incontinence than patients in group 1. Group 3 (n = 17) was characterized by large residual urine volume with high compliance. When we examined gender differences in urodynamics and functional outcomes, residual urine volume and the rate of daytime incontinence were only marginally significant. However, females were significantly more likely to belong to group 2 or 3 (P = 0.003). In multivariate analysis to identify factors associated with group 1, which had the most ideal urodynamic pattern, age (OR 0.95, P = 0.017) and male gender (OR 7.57, P = 0.003) were identified as significant factors. While patients with an ileal neobladder present with various voiding symptoms, three urodynamic patterns were identified by cluster analysis. Approximately half of the patients had ideal urodynamic parameters. The other two groups were characterized by large residual urine and small capacity bladder with low compliance. Young age and male gender appear to have a favorable impact on urodynamic and voiding outcomes in patients undergoing orthotopic neobladder reconstruction.
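The clustering step described above can be sketched as follows. The feature names follow the abstract, but the synthetic data, the scaling choice and the fixed k = 3 are illustrative assumptions, not the study's dataset or exact procedure.

```python
# Hedged sketch: K-means clustering of urodynamic parameters (synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(450, 120, 103),   # maximal cystometric capacity (ml)
    rng.gamma(2.0, 40.0, 103),   # residual volume (ml)
    rng.normal(15, 5, 103),      # maximal flow rate (ml/s)
    rng.gamma(3.0, 15.0, 103),   # compliance (ml/cmH2O)
    rng.normal(30, 10, 103),     # detrusor pressure at maximum flow (cmH2O)
])

X_scaled = StandardScaler().fit_transform(X)     # put parameters on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
for k in range(3):
    print(f"group {k + 1}: n={np.sum(labels == k)}, "
          f"mean capacity={X[labels == k, 0].mean():.0f} ml")
```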
Analysis of flash flood parameters and human impacts in the US from 2006 to 2012
NASA Astrophysics Data System (ADS)
Špitalar, Maruša; Gourley, Jonathan J.; Lutoff, Celine; Kirstetter, Pierre-Emmanuel; Brilly, Mitja; Carr, Nicholas
2014-11-01
Several different factors external to the natural hazard of flash flooding can contribute to the type and magnitude of the resulting damages. Human exposure, vulnerability, fatality and injury rates can be minimized by identifying and then mitigating the causative factors for human impacts. A database of flash flooding was used for statistical analysis of human impacts across the U.S.; 21,549 flash flood events were analyzed over a 6-year period from October 2006 to 2012. Based on the information available in the database, physical parameters were introduced and then correlated to the reported human impacts. Probability density functions (PDFs) of the frequency of flash flood events, and PDFs of occurrences weighted by the number of injuries and fatalities, were used to describe the influence of each parameter. The factors that emerged as the most influential on human impacts are short flood durations, small catchment sizes in rural areas, vehicles, and nocturnal events with low visibility. Analyzing and correlating a diverse range of parameters to human impacts gives us important insights into what contributes to fatalities and injuries and raises further questions on how to manage them.
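The comparison of an event-frequency PDF with a casualty-weighted PDF, as used above, can be sketched with weighted histograms. The synthetic event table below (durations and casualty counts) is an illustrative assumption, not the actual database.

```python
# Hedged sketch: event-frequency PDF vs. casualty-weighted PDF of flood duration.
import numpy as np

rng = np.random.default_rng(0)
duration_h = rng.gamma(shape=2.0, scale=2.0, size=5000)                 # event durations (h)
casualties = rng.poisson(lam=np.where(duration_h < 2.0, 0.3, 0.05))     # short events assumed more harmful

bins = np.linspace(0, 15, 31)
pdf_events, _ = np.histogram(duration_h, bins=bins, density=True)
pdf_casualty, _ = np.histogram(duration_h, bins=bins, weights=casualties, density=True)

centers = 0.5 * (bins[:-1] + bins[1:])
for c, pe, pc in zip(centers[:6], pdf_events[:6], pdf_casualty[:6]):
    print(f"{c:4.1f} h  events: {pe:.3f}  casualty-weighted: {pc:.3f}")
```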
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Akrami, Y.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Benabed, K.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Challinor, A.; Chiang, H. C.; Colombo, L. P. L.; Combet, C.; Crill, B. P.; Curto, A.; Cuttaia, F.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Ducout, A.; Dupac, X.; Dusini, S.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Herranz, D.; Hivon, E.; Huang, Z.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kim, J.; Kisner, T. S.; Knox, L.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Lilley, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Meinhold, P. R.; Mennella, A.; Migliaccio, M.; Millea, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Narimani, A.; Natoli, P.; Oxborrow, C. A.; Pagano, L.; Paoletti, D.; Partridge, B.; Patanchon, G.; Patrizii, L.; Pettorino, V.; Piacentini, F.; Polastri, L.; Polenta, G.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Ruiz-Granados, B.; Salvati, L.; Sandri, M.; Savelainen, M.; Scott, D.; Sirignano, C.; Sirri, G.; Stanco, L.; Suur-Uski, A.-S.; Tauber, J. A.; Tavagnacco, D.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Vittorio, N.; Wandelt, B. D.; Wehus, I. K.; White, M.; Zacchei, A.; Zonca, A.
2017-11-01
The six parameters of the standard ΛCDM model have best-fit values derived from the Planck temperature power spectrum that are shifted somewhat from the best-fit values derived from WMAP data. These shifts are driven by features in the Planck temperature power spectrum at angular scales that had never before been measured to cosmic-variance level precision. We have investigated these shifts to determine whether they are within the range of expectation and to understand their origin in the data. Taking our parameter set to be the optical depth of the reionized intergalactic medium τ, the baryon density ωb, the matter density ωm, the angular size of the sound horizon θ∗, the spectral index of the primordial power spectrum ns, and A_s e^(-2τ) (where A_s is the amplitude of the primordial power spectrum), we have examined the change in best-fit values between a WMAP-like large angular-scale data set (with multipole moment ℓ < 800 in the Planck temperature power spectrum) and an all angular-scale data set (ℓ < 2500 in the Planck temperature power spectrum), each with a prior on τ of 0.07 ± 0.02. We find that the shifts, in units of the 1σ expected dispersion for each parameter, are {Δτ, ΔA_s e^(-2τ), Δns, Δωm, Δωb, Δθ∗} = {-1.7, -2.2, 1.2, -2.0, 1.1, 0.9}, with a χ2 value of 8.0. We find that this χ2 value is exceeded in 15% of our simulated data sets, and that a parameter deviates by more than 2.2σ in 9% of simulated data sets, meaning that the shifts are not unusually large. Comparing ℓ < 800 instead to ℓ > 800, or splitting at a different multipole, yields similar results. We examined the ℓ < 800 model residuals in the ℓ > 800 power spectrum data and find that the features there that drive these shifts are a set of oscillations across a broad range of angular scales. Although they partly appear similar to the effects of enhanced gravitational lensing, the shifts in ΛCDM parameters that arise in response to these features correspond to model spectrum changes that are predominantly due to non-lensing effects; the only exception is τ, which, at fixed A_s e^(-2τ), affects the ℓ > 800 temperature power spectrum solely through the associated change in A_s and the impact of that on the lensing potential power spectrum. We also ask, "what is it about the power spectrum at ℓ < 800 that leads to somewhat different best-fit parameters than come from the full ℓ range?" We find that if we discard the data at ℓ < 30, where there is a roughly 2σ downward fluctuation in power relative to the model that best fits the full ℓ range, the ℓ < 800 best-fit parameters shift significantly towards the ℓ < 2500 best-fit parameters. In contrast, including ℓ < 30, this previously noted "low-ℓ deficit" drives ns up and impacts parameters correlated with ns, such as ωm and H0. As expected, the ℓ < 30 data have a much greater impact on the ℓ < 800 best fit than on the ℓ < 2500 best fit. So although the shifts are not very significant, we find that they can be understood through the combined effects of an oscillatory-like set of high-ℓ residuals and the deficit in low-ℓ power, excursions consistent with sample variance that happen to map onto changes in cosmological parameters. Finally, we examine agreement between Planck TT data and two other CMB data sets, namely the Planck lensing reconstruction and the TT power spectrum measured by the South Pole Telescope, again finding a lack of convincing evidence of any significant deviations in parameters, suggesting that current CMB data sets give an internally consistent picture of the ΛCDM model.
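The consistency test described above boils down to expressing the shift of each parameter in units of its expected dispersion and forming a χ² that accounts for correlations between the shifts. The sketch below uses the shift values quoted in the abstract but an assumed (identity) correlation matrix, so it does not reproduce the published χ² = 8.0, which requires the actual parameter covariance.

```python
# Hedged sketch: chi^2 of correlated parameter shifts, d^T C^-1 d.
import numpy as np

names = ["tau", "As*exp(-2tau)", "ns", "omega_m", "omega_b", "theta_*"]
shifts_sigma = np.array([-1.7, -2.2, 1.2, -2.0, 1.1, 0.9])   # from the abstract, in units of 1 sigma

corr = np.eye(len(names))   # assumed correlation matrix; the real one is non-diagonal

chi2 = shifts_sigma @ np.linalg.solve(corr, shifts_sigma)
print(f"chi^2 = {chi2:.1f} for {len(names)} parameters")
# With the identity matrix this gives ~15.2; the quoted 8.0 reflects the true
# correlations between the parameter shifts.
```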
NASA Astrophysics Data System (ADS)
Arfeuille, F.; Rozanov, E.; Peter, T.; Weisenstein, D.; Hadorn, G.; Bodenmann, T.; Brönnimann, S.
2010-09-01
One famous example of an extreme climatic event is the cold summer of 1816 in Europe and North America. This specific year, which was later called the "Year without summer 1816", had profound social and environmental effects. The cataclysmic eruption of Mt Tambora is now commonly known to have largely contributed to the negative temperature anomalies of the summer of 1816, but some uncertainties remain. The eruption, which occurred in April 1815, is the largest within the last 500 years, and this extreme climatic forcing provides a real test for climate models. A crucial parameter to assess in order to simulate this eruption is the aerosol size distribution, which strongly influences the radiative impact of the aerosols (through changes in albedo and residence time in the stratosphere, among others) and the impacts on dynamics and chemistry. The representation of this major forcing is done using the AER-2D aerosol model, which calculates the size distribution of the aerosols formed after the eruption. The modeling of the climatic impacts is then done by the state-of-the-art Chemistry-Climate model (CCM) SOCOL. The characteristics of the Tambora eruption and results from simulations made using the aerosol model/CCM, with an emphasis on the radiative and chemical implications of the large aerosols, will be shown. For instance, the specific absorption/scattering ratio of Mt. Tambora aerosols induced a large stratospheric warming, which will be analyzed. The climatic impacts will also be discussed with regard to the high sedimentation rate of Mt. Tambora aerosols, leading to a fast decrease of the atmospheric optical depth in the first two years after the eruption. The link will be made between the modeling results and proxy reconstructions as well as with available historical daily data from Geneva, Switzerland. Finally, insights into the contemporary response to this climatic extreme will be shown.
NASA Technical Reports Server (NTRS)
Kearsley, A. T.; Burchell, M. J.; Horz, F.; Cole, M. J.; Schwandt, C. S.
2006-01-01
Metallic aluminium alloy foils exposed on the forward, comet-facing surface of the aerogel tray on the Stardust spacecraft are likely to have been impacted by the same cometary particle population as the dedicated impact sensors and the aerogel collector. The ability of soft aluminium alloy to record hypervelocity impacts as bowl-shaped craters offers an opportunistic substrate for recognition of impacts by particles of a wide potential size range. In contrast to impact surveys conducted on samples from low Earth orbit, the simple encounter geometry for Stardust and Wild 2, with a known and constant spacecraft-particle relative velocity and effective surface-perpendicular impact trajectories, permits closely comparable simulation in laboratory experiments. For a detailed calibration programme we have selected a suite of spherical glass projectiles of uniform density and hardness characteristics, with well-documented particle size range from 10 microns to nearly 100 microns. Light gas gun buckshot firings of these particles at approximately 6 km s^-1 onto samples of the same foil as employed on Stardust have yielded large numbers of craters. Scanning electron microscopy of both projectiles and impact features has allowed construction of a calibration plot, showing a linear relationship between impacting particle size and impact crater diameter. The close match between our experimental conditions and the Stardust mission encounter parameters should provide another opportunity to measure particle size distributions and fluxes close to the nucleus of Wild 2, independent of the active impact detector instruments aboard the Stardust spacecraft.
Uncertainty Quantification in Tsunami Early Warning Calculations
NASA Astrophysics Data System (ADS)
Anunziato, Alessandro
2016-04-01
The objective of tsunami calculations is the estimation of the impact on the coasts of waves caused by large seismic events and the determination of potential inundation areas. In the case of early warning systems, i.e. systems that should make it possible to anticipate the possible effects and to react accordingly (i.e. order the evacuation of areas at risk), this must be done in a very short time (minutes) to be effective. In reality, this estimation involves several uncertainty factors which make the prediction extremely difficult. The quality of the very first estimates of the seismic parameters is not very precise: the uncertainty in the determination of the seismic components (location, magnitude and depth) decreases with time, because as time passes more and more seismic signals can be used and the event characterization becomes more precise. On the other hand, other parameters that must be established to perform a calculation (e.g. the fault mechanism) are difficult to estimate accurately even after hours (and in some cases remain unknown), and this uncertainty therefore remains in the estimated impact evaluations; when a quick tsunami calculation is necessary (early warning systems), the possibility of including any possible future variation of the conditions, in order to establish the "worst case scenario", is particularly important. The consequence is that the number of uncertain parameters is so large that it is not easy to assess the relative importance of each of them and their effect on the predicted results. In general, the complexity of system computer codes arises from the multitude of different models which are assembled into a single program to give the global response for a particular phenomenon. Each of these models has an associated uncertainty coming from the application of that model to single cases and/or separate-effect test cases. The difficulty in predicting the response of a tsunami calculation is further increased by imperfect knowledge of the initial and boundary conditions, so that the response can change even with small variations of the input. The paper analyses a number of potential events in the Mediterranean Sea and in the Atlantic Ocean, and for each of them a large number of calculations is performed (Monte Carlo simulation) in order to identify the relative importance of each of the uncertain parameters adopted. It is shown that even though the variation in the estimates is reduced after several hours, it still remains and in some cases can lead to different conclusions if this information is used for alerting. The cases considered are: a mild event in the Hellenic arc (Mag. 6.9), a medium event in Algeria (Mag. 7.2) and a quite relevant event in the Gulf of Cadiz (Mag. 8.2).
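The Monte Carlo propagation of source-parameter uncertainty described above can be sketched as follows. The "impact" model here is a toy scaling law, and the parameter distributions are illustrative assumptions, not the paper's scenarios or tsunami solver.

```python
# Hedged sketch: Monte Carlo propagation of uncertain source parameters into a
# toy coastal-impact metric.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

magnitude = rng.normal(7.2, 0.2, n)      # early magnitude estimate and its spread (assumed)
depth_km = rng.uniform(5.0, 40.0, n)     # poorly constrained focal depth (assumed)
rake_deg = rng.uniform(60.0, 120.0, n)   # uncertain fault mechanism (assumed)

def toy_coastal_amplitude(mw, depth, rake):
    """Toy proxy for maximum coastal wave amplitude (m): grows with magnitude,
    shrinks with depth, strongest for pure thrust faulting (rake = 90 deg)."""
    return 0.02 * 10 ** (0.5 * (mw - 6.5)) * np.exp(-depth / 30.0) * np.sin(np.radians(rake))

amp = toy_coastal_amplitude(magnitude, depth_km, rake_deg)
print(f"median {np.median(amp):.2f} m, 5-95% range "
      f"{np.percentile(amp, 5):.2f}-{np.percentile(amp, 95):.2f} m")
```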
Dependence of elastic hadron collisions on impact parameter
NASA Astrophysics Data System (ADS)
Procházka, Jiří; Lokajíček, Miloš V.; Kundrát, Vojtěch
2016-05-01
Elastic proton-proton collisions represent probably the greatest ensemble of available measured data, the analysis of which may provide a large amount of new physical results concerning fundamental particles. It is, however, necessary to analyze first some conclusions concerning pp collisions whose interpretations differ fundamentally from our common macroscopic experience. It has been argued, e.g., that elastic hadron collisions are more central than inelastic ones, even though no explanation has yet been given of how such different processes, i.e., elastic and inelastic (with hundreds of secondary particles) collisions, can exist under the same conditions. This conclusion has been based on a number of simplifying mathematical assumptions (already made in earlier calculations), without their influence on the physical interpretation being analyzed and justified; the corresponding influence has started to be studied in an approach based on the eikonal model. The possibility of a peripheral interpretation of elastic collisions will be demonstrated and the corresponding results summarized. Arguments will be given as to why no preference may be given to the mentioned centrality over the standard peripheral behaviour. The corresponding discussion of the contemporary description of elastic hadronic collisions in dependence on the impact parameter will be summarized and the justification of some important assumptions considered.
NASA Technical Reports Server (NTRS)
Hansman, R. J., Jr.
1982-01-01
The feasibility of computerized simulation of the physics of advanced microwave anti-icing systems, which preheat impinging supercooled water droplets prior to impact, was investigated. Theoretical and experimental work performed to create a physically realistic simulation is described. The behavior of the absorption cross section for melting ice particles was measured by a resonant cavity technique and found to agree with theoretical predictions. Values of the dielectric parameters of supercooled water were measured by a similar technique at lambda = 2.82 cm down to -17 °C. The hydrodynamic behavior of accelerated water droplets was studied photographically in a wind tunnel. Droplets were found to initially deform as oblate spheroids and to eventually become unstable and break up in Bessel function modes for large values of acceleration or droplet size. This confirms the theory as to the maximum stable droplet size in the atmosphere. A computer code which predicts droplet trajectories in an arbitrary flow field was written and confirmed experimentally. The results were consolidated into a simulation to study the heating by electromagnetic fields of droplets impinging onto an object such as an airfoil. It was determined that there is sufficient time to heat droplets prior to impact for typical parameter values. Design curves for such a system are presented.
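A droplet-trajectory calculation of the kind mentioned above can be sketched by integrating the equation of motion of a small droplet in a prescribed flow field with a drag law. The flow field, droplet size and the Schiller-Naumann drag correction used below are illustrative assumptions, not the original code; gravity is neglected.

```python
# Hedged sketch: droplet trajectory in an assumed 2-D decelerating flow.
import numpy as np
from scipy.integrate import solve_ivp

rho_w, rho_a, mu_a = 1000.0, 1.29, 1.8e-5   # water density, air density, air viscosity (SI)
d = 20e-6                                    # droplet diameter (m)
m = rho_w * np.pi * d**3 / 6.0               # droplet mass (kg)

def air_velocity(x, y):
    """Assumed 2-D flow decelerating toward a stagnation line at x = 0."""
    return np.array([-90.0 * np.tanh(x / 0.05), 20.0 * y])

def rhs(t, state):
    x, y, vx, vy = state
    v_rel = air_velocity(x, y) - np.array([vx, vy])
    re = rho_a * np.linalg.norm(v_rel) * d / mu_a
    drag = 3.0 * np.pi * mu_a * d * (1.0 + 0.15 * re**0.687)   # Schiller-Naumann correction
    ax, ay = drag * v_rel / m
    return [vx, vy, ax, ay]

# Droplet released 0.5 m upstream, carried by the flow at ~90 m/s
sol = solve_ivp(rhs, (0.0, 0.01), [-0.5, 0.01, 90.0, 0.0], max_step=1e-5)
print(f"final position: x={sol.y[0, -1]:.3f} m, y={sol.y[1, -1]:.3f} m")
```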
A Comparative Analysis of Life-Cycle Assessment Tools for ...
We identified and evaluated five life-cycle assessment tools that community decision makers can use to assess the environmental and economic impacts of end-of-life (EOL) materials management options. The tools evaluated in this report are waste reduction model (WARM), municipal solid waste-decision support tool (MSW-DST), solid waste optimization life-cycle framework (SWOLF), environmental assessment system for environmental technologies (EASETECH), and waste and resources assessment for the environment (WRATE). WARM, MSW-DST, and SWOLF were developed for US-specific materials management strategies, while WRATE and EASETECH were developed for European-specific conditions. All of the tools (with the exception of WARM) allow specification of a wide variety of parameters (e.g., materials composition and energy mix) to a varying degree, thus allowing users to model specific EOL materials management methods even outside the geographical domain they are originally intended for. The flexibility to accept user-specified input for a large number of parameters increases the level of complexity and the skill set needed for using these tools. The tools were evaluated and compared based on a series of criteria, including general tool features, the scope of the analysis (e.g., materials and processes included), and the impact categories analyzed (e.g., climate change, acidification). A series of scenarios representing materials management problems currently relevant to c
Defining Uncertainty and Error in Planktic Foraminiferal Oxygen Isotope Measurements
NASA Astrophysics Data System (ADS)
Fraass, A. J.; Lowery, C.
2016-12-01
Foraminifera are the backbone of paleoceanography, and planktic foraminifera are one of the leading tools for reconstructing water column structure. Currently, there are unconstrained variables when dealing with the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate the precision and accuracy of oxygen isotope measurements. FIRM produces synthetic isotope data using parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects. Reproducibility is then tested using Monte Carlo simulations. The results from a series of experiments show that reproducibility is largely controlled by the number of individuals in each measurement, but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. Currently FIRM is a tool to estimate isotopic error values best employed in the Holocene. It is also a tool to explore the impact of myriad factors on the fidelity of paleoceanographic records. FIRM was constructed in the open-source computing environment R and is freely available via GitHub. We invite modification and expansion, and have planned inclusions for benthic foram reproducibility and stratigraphic uncertainty.
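The core idea behind the model described above, that reproducibility is largely controlled by the number of individuals pooled per measurement, can be illustrated with a small Monte Carlo experiment. The seasonal scatter, analytical error and trial counts below are illustrative assumptions, not FIRM itself (which is written in R and includes many more factors).

```python
# Hedged sketch: pooled d18O reproducibility versus number of individuals per measurement.
import numpy as np

rng = np.random.default_rng(0)

def pooled_d18o(n_individuals, n_trials=5000, seasonal_sd=0.6, analytical_sd=0.08):
    """Simulate repeated measurements, each averaging n_individuals shells whose
    d18O scatters with season/depth habitat, plus one analytical error per run."""
    shells = rng.normal(0.0, seasonal_sd, size=(n_trials, n_individuals))
    return shells.mean(axis=1) + rng.normal(0.0, analytical_sd, size=n_trials)

for n in (1, 5, 10, 30):
    sd = pooled_d18o(n).std()
    print(f"{n:2d} individuals per measurement -> 1-sigma reproducibility {sd:.2f} per mil")
```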
Sheth, Poonam; Grimes, Matthew R; Stein, Stephen W; Myrdal, Paul B
2017-08-07
Pressurized metered dose inhalers (pMDIs) are widely used for the treatment of pulmonary diseases. The overall efficiency of pMDI drug delivery may be defined by in vitro parameters such as the amount of drug that deposits on the model throat and the proportion of the emitted dose that has particles that are sufficiently small to deposit in the lung (i.e., fine particle fraction, FPF). The study presented examines product performance of ten solution pMDI formulations containing a variety of cosolvents with diverse chemical characteristics by cascade impaction with three inlets (USP induction port, Alberta Idealized Throat, and a large volume chamber). Through the data generated in support of this study, it was demonstrated that throat deposition, cascade impactor deposition, FPF, and mass median aerodynamic diameter of solution pMDIs depend on the concentration and vapor pressure of the cosolvent, and the selection of model throat. Theoretical droplet lifetimes were calculated for each formulation using a discrete two-stage evaporation process model and it was determined that the droplet lifetime is highly correlated to throat deposition and FPF indicating that evaporation kinetics significantly influences pMDI drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.
Resilience scales of a dammed tropical river
NASA Astrophysics Data System (ADS)
Calamita, Elisa; Schmid, Martin; Wehrli, Bernhard
2017-04-01
Artificial river impoundments disrupt the seasonality and dynamics of thermal, chemical, morphological and ecological regimes in river systems. These alterations affect the aquatic ecosystems in space and time and specifically modify the seasonality and the longitudinal gradients of important biogeochemical processes. Resilience of river systems to anthropogenic stressors enables their recovery along the flow path; however, little is known about the longitudinal distance that rivers need to partially restore their physical, chemical and biological integrity. In this study, the concept of a "resilience scale" will be explored for different water quality parameters downstream of Kariba Dam, which impounds the largest artificial lake in the Zambezi basin (South-East Africa). The goal of this project is to develop a modelling framework to investigate and quantify the impact of large dams on downstream water quality in a tropical context. In particular, we aim to assess the degree of reversibility of the main downstream alterations (temperature, oxygen, nutrients) and consequently to quantify their longitudinal extent. Coupling in-situ measurements with hydraulic and hydrological parameters, such as travel times, will allow us to define a physically-based parametrization of the different resilience scales for tropical rivers. The results will be used for improving future dam management at the local scale and assessing the ecological impact of planned dams at the catchment scale.
NASA Astrophysics Data System (ADS)
Xu, Y. C.; Jing, H. Y.; Han, Y. D.; Xu, L. Y.
2017-08-01
This paper presents a novel in situ repair technique named friction tapered stud overlap welding (FTSOW), developed to repair a through crack in structures and components in extremely harsh environments. Furthermore, this paper presents variations in process data, including rotational speed, stud displacement, welding force, and torque for a typical FTSOW weld. In the present study, the effects of welding parameters on the microstructures and mechanical properties of the welded joints were investigated. Inappropriate welding parameters, i.e. low rotational speeds and welding forces, increased the occurrence of lack-of-bonding and unfilled defects within the weld. The microstructures of the welding zone and heat-affected zone mainly consisted of upper bainite. The hardness value was highest in the welding zone and lowest in the base material. During the pull-out tests, all the welds failed in the stud. Moreover, the defect-free welds broke at the interface of the lap plate and substrate during the cruciform uniaxial tensile test. The best tensile strengths at different depths and in shear tests were 721.6 MPa and 581.9 MPa, respectively. The Charpy impact-absorbed energy was a favorable 68.64 J at 0 °C. The Charpy impact tests revealed a brittle fracture characteristic with a large area of cleavage.
Magaril, Elena
2016-04-01
The environmental and operational characteristics of motor transport, one of the main consumers of motor fuel and a source of toxic emissions, soot, and greenhouse gases, are determined to a large extent by fuel quality, which is characterized by many parameters. Fuel density is one of these parameters and can serve as an indicator of fuel quality. It has been theoretically substantiated that an increased density of motor fuel has a negative impact on both the environmental and operational characteristics of motor transport. The use of fuels with a high density leads to increased carbonization within the engine, adversely affecting vehicle performance and increasing environmental pollution. A program of technological measures targeted at reducing the density of the fuel used is offered. It addresses the changes required in the ratio of refining capacities and in the boiling temperature ranges of gasoline and diesel fuel, the introduction of fuel additives, and the addition of butanes to the gasoline. An environmental tax has also been developed which allows a direct impact on oil refineries' production of fuels with improved environmental performance, taking into account the need to minimize fuel density within a given quality category.
Nelson, Peter T.; Abner, Erin L.; Schmitt, Frederick A.; Kryscio, Richard J.; Jicha, Gregory A.; Smith, Charles D.; Davis, Daron G.; Poduska, John W.; Patel, Ela; Mendiondo, Marta S.; Markesbery, William R.
2009-01-01
We evaluated the association between mini-mental status examination (MMSE) scores proximal to death and the values of 43 different clinical and pathological parameters. Studies were performed using data from 334 elderly, longitudinally evaluated research subjects who had undergone autopsy and satisfied inclusion criteria from an initial study group of 501. Interindividual variance in MMSE scores was used as a surrogate for the severity of cognitive impairment linked to aging (CILA). A statistical linear regression-based model provided a framework for assessing the parameters with significant, direct impact on CILA severity. Strong association between CILA and Alzheimer’s disease (AD) pathology, especially isocortical neurofibrillary tangles, was evident. The pattern of association between AD lesion densities with cognitive impairment severity was biologically informative, with neuritic plaques having more impact in relatively high-functioning individuals. Abundant isocortical Lewy bodies tended to be an additive pathology correlating with final MMSE scores approximately 10 points lower. In a subset of cases we found evidence for association between TDP-43-related pathology and CILA severity, independent of AD or hippocampal sclerosis. There was no support for independent association between CILA severity and most evaluated indices including diffuse plaques, argyrophilic grains, heart disease, education level, apolipoprotein E alleles or diabetes. PMID:19021630
New functional pavements for pedestrians and cyclists.
Wallqvist, V; Kjell, G; Cupina, E; Kraft, L; Deck, C; Willinger, R
2017-08-01
While many aspects of pedestrian and cyclist safety have been extensively studied, the surfacing has long been left unquestioned, despite being developed for another mode of transport and being one of the main causes of falls and fall injuries. In this project, new surfacing materials for pedestrian and cyclist safety have been produced, focusing on improving previously largely disregarded parameters such as impact absorption, comfort and visibility, while avoiding deterioration of crucial parameters such as friction and wear resistance. Rubber content, binder type, and pigment addition have been varied and evaluated. The results demonstrate that by increasing the rubber content of the mixtures, the head injury criterion (HIC) value and injury risk can be decreased while maintaining frictional properties according to existing criteria. Assembly of test lanes demonstrates that some developed materials experience lower flow and component separation than standard materials due to rubber addition, calling for further optimisation of the construction procedure linked to content development. Initial trials on the test lanes indicate that a polyurethane (PU) based material has high cycling comfort and visibility and can be modified with phosphorescent properties. For standard asphalt, impact absorption can be influenced by modification of the bitumen alone but is mostly improved by rubber addition. The results also indicate that rubber content can decrease ice formation on the materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
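The head injury criterion (HIC) cited above is computed from a resultant head acceleration trace: HIC is the maximum over time windows [t1, t2] of (t2-t1) times the mean acceleration (in g) raised to the power 2.5. The sketch below evaluates it for a synthetic half-sine impact pulse; the 15 ms window and pulse shape are illustrative assumptions.

```python
# Hedged sketch: HIC15 from an acceleration trace (time in s, acceleration in g).
import numpy as np

def hic(t, a_g, max_window=0.015):
    """HIC = max over [t1, t2] of (t2 - t1) * (mean acceleration over the window)^2.5."""
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            seg_a, seg_t = a_g[i:j + 1], t[i:j + 1]
            integral = np.sum(0.5 * (seg_a[1:] + seg_a[:-1]) * np.diff(seg_t))  # trapezoid rule
            best = max(best, dt * (integral / dt) ** 2.5)
    return best

t = np.linspace(0.0, 0.02, 201)                       # 20 ms at 0.1 ms resolution
a = 150.0 * np.sin(np.pi * t / 0.01) * (t <= 0.01)    # 10 ms half-sine pulse, 150 g peak (assumed)
print(f"HIC15 = {hic(t, a):.0f}")
```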
NASA Astrophysics Data System (ADS)
Mantry, Sonny; Petriello, Frank
2010-05-01
We derive a factorization theorem for the Higgs boson transverse momentum (pT) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for mh≫pT≫ΛQCD, where mh denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the pT scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the pT-scale physics simplifies the implementation of higher order radiative corrections in αs(pT). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in pT/mh and ΛQCD/pT can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-pT resummation.
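Schematically, a factorization theorem of the type described takes the following form. This is a hedged, generic SCET-style structure with convolutions, indices and scale arguments suppressed; it is not the paper's exact expression.

```latex
\frac{d\sigma}{dY\, dp_T^2} \;\simeq\;
H(m_h,\mu)\,
\Big[\mathcal{B}_{1}\otimes\mathcal{B}_{2}\otimes\mathcal{S}^{-1}\Big](p_T,Y;\mu)
\left[1+\mathcal{O}\!\left(\frac{p_T}{m_h}\right)
      +\mathcal{O}\!\left(\frac{\Lambda_{\rm QCD}}{p_T}\right)\right]
```

Here H is the hard function, B_1 and B_2 stand for the two impact-parameter beam functions and S^{-1} for the inverse soft function named in the abstract, with large logarithms between the scales resummed by renormalization-group evolution in the effective theories.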
NASA Astrophysics Data System (ADS)
Gebhart, T. E.; Martinez-Rodriguez, R. A.; Baylor, L. R.; Rapp, J.; Winfrey, A. L.
2017-08-01
To produce a realistic tokamak-like plasma environment in a linear plasma device, a transient source is needed to deliver heat and particle fluxes similar to those seen in an edge localized mode (ELM). ELMs in future large tokamaks will deliver heat fluxes of ~1 GW/m2 to the divertor plasma-facing components at a few Hz. An electrothermal plasma source can deliver heat fluxes of this magnitude. These sources operate in an ablative arc regime driven by a DC capacitive discharge. An electrothermal source was configured with two pulse lengths and tested under a solenoidal magnetic field to determine the resulting impact on liner ablation, plasma parameters, and delivered heat flux. The arc travels through and ablates a boron nitride liner and strikes a tungsten target plate. The tungsten target plate is analyzed for surface damage using a scanning electron microscope.
Erosion of soil organic carbon: implications for carbon sequestration
Van Oost, Kristof; Van Hemelryck, Hendrik; Harden, Jennifer W.; McPherson, B.J.; Sundquist, E.T.
2009-01-01
Agricultural activities have substantially increased rates of soil erosion and deposition, and these processes have a significant impact on carbon (C) mineralization and burial. Here, we present a synthesis of erosion effects on carbon dynamics and discuss the implications of soil erosion for carbon sequestration strategies. We demonstrate that for a range of data-based parameters from the literature, soil erosion results in increased C storage on land, an effect that is heterogeneous across the landscape and variable over a range of timescales. We argue that the magnitude of the erosion term and the soil carbon residence time, both strongly influenced by soil management, largely control the strength of the erosion-induced sink. In order to fully evaluate the effects of soil management strategies that promote carbon sequestration, a full carbon account must be made that considers the impact of erosion-enhanced disequilibrium between carbon inputs and decomposition, including effects on net primary productivity and decomposition rates.
A Method of Effective Quarry Water Purifying Using Artificial Filtering Arrays
NASA Astrophysics Data System (ADS)
Tyulenev, M.; Garina, E.; Khoreshok, A.; Litvin, O.; Litvin, Y.; Maliukhina, E.
2017-01-01
The development of open pit mining in the large coal basins of Russia and other countries increases its negative impact on the environment. Along with land damage and air pollution by dust and blasting combustion gases, coal pits have a significant negative impact on water resources. Polluted quarry water worsens the ecological situation over a much larger area than that affected by air pollution and land damage. This significantly worsens the living conditions of people in cities and towns located near the coal pits, complicates the subsequent restoration of the environment, and irreversibly damages nature. Therefore, research on quarry wastewater purification is becoming an important matter for scholars at technical colleges and universities in regions with developing open-pit mining. This paper describes a method for determining the basic parameters of the artificial filtering arrays formed on coal pits of Kuzbass (Western Siberia, Russia), and gives recommendations on its application.
High sensitivity of Indian summer monsoon to Middle East dust absorptive properties.
Jin, Qinjian; Yang, Zong-Liang; Wei, Jiangfeng
2016-07-28
The absorptive properties of dust aerosols largely determine the magnitude of their radiative impacts on the climate system. Currently, climate models use globally constant values of dust imaginary refractive index (IRI), a parameter describing the dust absorption efficiency of solar radiation, although it is highly variable. Here we show with model experiments that the dust-induced Indian summer monsoon (ISM) rainfall differences (with dust minus without dust) change from -9% to 23% of long-term climatology as the dust IRI is changed from zero to the highest values used in the current literature. A comparison of the model results with surface observations, satellite retrievals, and reanalysis data sets indicates that the dust IRI values used in most current climate models are too low, tending to significantly underestimate dust radiative impacts on the ISM system. This study highlights the necessity for developing a parameterization of dust IRI for climate studies.
Impact vaporization: Late time phenomena from experiments
NASA Technical Reports Server (NTRS)
Schultz, P. H.; Gault, D. E.
1987-01-01
While simple airflow produced by the outward movement of the ejecta curtain can be scaled to large dimensions, the interaction between an impact-vaporized component and the ejecta curtain is more complicated. The goal of these experiments was to examine such interaction in a real system involving crater growth, ejection of material, two-phase mixtures of gas and dust, and strong pressure gradients. The results will be complemented by theoretical studies at laboratory scales in order to separate the various parameters for planetary-scale processes. These experiments prompt, however, the following conclusions that may have relevance at broader scales. First, under near-vacuum or low atmospheric pressures, an expanding vapor cloud scours the surrounding surface in advance of arriving ejecta. Second, the effect of early-time vaporization is relatively unimportant at late times. Third, the overpressure created within the crater cavity by significant vaporization results in increased cratering efficiency and larger aspect ratios.
Tidal disruption of inviscid protoplanets
NASA Technical Reports Server (NTRS)
Boss, Alan P.; Cameron, A. G. W.; Benz, W.
1991-01-01
Roche showed that equilibrium is impossible for a small fluid body synchronously orbiting a primary within a critical radius now termed the Roche limit. Tidal disruption of orbitally unbound bodies is a potentially important process for planetary formation through collisional accumulation, because the area within the Roche limit is considerably larger than the physical cross section of a protoplanet. Several previous studies were made of dynamical tidal disruption and different models of disruption were proposed. Because of the limitations of these analytical models, we have used a smoothed particle hydrodynamics (SPH) code to model the tidal disruption process. The code is basically the same as the one used to model giant impacts; we simply choose impact parameters large enough to avoid collisions. The primary and secondary both have iron cores and silicate mantles, and are initially isothermal at a molten temperature. The conclusions based on the analytical and numerical models are summarized.
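For context, the classical Roche limit referred to here has a simple closed form. The sketch below evaluates the standard fluid and rigid-body coefficients; the example radius and densities are illustrative placeholders, not values from the SPH runs.

```python
def roche_limit(R_primary_km, rho_primary, rho_secondary, fluid=True):
    """Classical Roche limit for a secondary orbiting a primary of radius R_primary_km.

    fluid=True uses Roche's fluid coefficient (~2.44); fluid=False the rigid-body
    coefficient (~1.26). Densities must share the same units (e.g. kg/m^3).
    """
    k = 2.44 if fluid else 1.26
    return k * R_primary_km * (rho_primary / rho_secondary) ** (1.0 / 3.0)

# Example: a rocky protoplanet (3000 kg/m^3) near an Earth-like primary
# (radius 6371 km, bulk density 5510 kg/m^3) -- illustrative numbers only.
print(f"fluid Roche limit ~ {roche_limit(6371, 5510, 3000):.0f} km")
```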
Environmental conditions regulate the impact of plants on cloud formation
Zhao, D. F.; Buchholz, A.; Tillmann, R.; Kleist, E.; Wu, C.; Rubach, F.; Kiendler-Scharr, A.; Rudich, Y.; Wildt, J.; Mentel, Th. F.
2017-01-01
The terrestrial vegetation emits large amounts of volatile organic compounds (VOC) into the atmosphere, which on oxidation produce secondary organic aerosol (SOA). By acting as cloud condensation nuclei (CCN), SOA influences cloud formation and climate. In a warming climate, changes in environmental factors can cause stresses to plants, inducing changes in the emitted VOC. These can modify particle size and composition. Here we report how induced emissions eventually affect the CCN activity of SOA, a key parameter in cloud formation. For boreal forest tree species, insect infestation by aphids causes additional VOC emissions, which modifies SOA composition and thus hygroscopicity and CCN activity. Moderate heat increases the total amount of constitutive VOC, which has a minor effect on hygroscopicity but affects CCN activity by increasing the particles' size. The coupling of plant stresses, VOC composition and CCN activity points to an important impact of induced plant emissions on cloud formation and climate. PMID:28218253
Environmental conditions regulate the impact of plants on cloud formation.
Zhao, D F; Buchholz, A; Tillmann, R; Kleist, E; Wu, C; Rubach, F; Kiendler-Scharr, A; Rudich, Y; Wildt, J; Mentel, Th F
2017-02-20
The terrestrial vegetation emits large amounts of volatile organic compounds (VOC) into the atmosphere, which on oxidation produce secondary organic aerosol (SOA). By acting as cloud condensation nuclei (CCN), SOA influences cloud formation and climate. In a warming climate, changes in environmental factors can cause stresses to plants, inducing changes in the emitted VOC. These can modify particle size and composition. Here we report how induced emissions eventually affect the CCN activity of SOA, a key parameter in cloud formation. For boreal forest tree species, insect infestation by aphids causes additional VOC emissions, which modifies SOA composition and thus hygroscopicity and CCN activity. Moderate heat increases the total amount of constitutive VOC, which has a minor effect on hygroscopicity but affects CCN activity by increasing the particles' size. The coupling of plant stresses, VOC composition and CCN activity points to an important impact of induced plant emissions on cloud formation and climate.
Spatial sensitivity of inorganic carbon to model setup: North Sea and Baltic Sea with ECOSMO
NASA Astrophysics Data System (ADS)
Castano Primo, Rocio; Schrum, Corinna; Daewel, Ute
2015-04-01
In ocean biogeochemical models it is critical to capture the key processes adequately, so that the models not only reproduce the observations but reproduce them for the right reasons. One key issue is the choice of parameters, which in most cases are estimates with large uncertainties. This can result from an actual lack of detailed knowledge of the process, or from the way the processes are implemented, whether more or less complex. In addition, model sensitivity is not necessarily homogeneous across the modelled spatial domain, which adds another layer of complexity to biogeochemical modelling. In the particular case of the inorganic carbon cycle, there are several sets of carbonate constants that can be chosen, and the calculated air-sea CO2 flux depends strongly on the parametrization chosen. In addition, different parametrizations of the underlying processes that impact the carbon cycle beyond carbonate dissociation and air-sea fluxes, such as phytoplankton growth rates or remineralization rates, can give significantly different results. Despite their geographical proximity, the North and Baltic Seas exhibit very different dynamics. The North Sea receives important inflows of Atlantic water, while the Baltic Sea is an almost enclosed system with very little exchange with the North Sea. Wind, tides, and freshwater supply act very differently, but dominantly structure the ecosystem dynamics on spatial and temporal scales. The biological communities also differ: cyanobacteria, which are important due to their ability to fix atmospheric nitrogen, are only present in the Baltic Sea. These differentiating features have a strong impact on the biogeochemical cycles and ultimately shape the variations in the carbonate chemistry. Here the ECOSMO model was employed on the North Sea and Baltic Sea. The model is set up so that both are modelled at the same time, instead of being run separately. ECOSMO is a 3-D coupled physical-biogeochemical model which resolves the cycles of nitrogen, phosphorus and silicate. It includes 3 functional groups of phytoplankton and 2 groups of zooplankton. In addition, an inorganic carbon module has been incorporated and coupled. Alkalinity and DIC are chosen as prognostic variables, from which pH, pCO2 and air-sea CO2 flux are calculated. The model is run with different sets of carbonate dissociation parameters, air-sea flux parametrizations, phytoplankton growth rates and remineralization rates. The sensitivity of the inorganic carbon variables will be assessed, both for the whole model domain and for the North Sea and Baltic Sea independently. We search for the critical parameters that have the largest impact, whether such impact is spatially dependent, and the effect on the validation of the carbonate module.
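To illustrate the kind of sensitivity being probed, the sketch below solves a stripped-down carbonate system (carbonate alkalinity only, no borate or water terms) for pCO2 from prescribed DIC and alkalinity, using two different sets of dissociation constants. The constants are rough placeholder values for cool surface seawater, not the parametrizations tested with ECOSMO.

```python
import numpy as np
from scipy.optimize import brentq

def pco2_from_dic_ta(dic, ta, K0, K1, K2):
    """Solve the carbonate-only alkalinity balance for [H+] and return pCO2.

    dic, ta in mol/kg; K0 in mol/(kg atm); K1, K2 on a consistent pH scale.
    Alkalinity is approximated by carbonate alkalinity (no borate/water terms).
    """
    def carb_alk_residual(h):
        denom = h * h + K1 * h + K1 * K2
        hco3 = dic * K1 * h / denom
        co3 = dic * K1 * K2 / denom
        return hco3 + 2.0 * co3 - ta

    h = brentq(carb_alk_residual, 1e-10, 1e-6)     # [H+] between pH 6 and 10
    co2_star = dic * h * h / (h * h + K1 * h + K1 * K2)
    return co2_star / K0 * 1e6                     # micro-atm

# Illustrative constant sets (placeholders, not the sets used in the model runs):
set_a = dict(K0=4.4e-2, K1=1.0e-6, K2=7.0e-10)
set_b = dict(K0=4.4e-2, K1=1.2e-6, K2=9.0e-10)
for name, K in [("set A", set_a), ("set B", set_b)]:
    print(name, f"pCO2 ~ {pco2_from_dic_ta(2.1e-3, 2.3e-3, **K):.0f} uatm")
```

Even this toy calculation shifts pCO2 by a few tens of micro-atmospheres between constant sets, which is the order of the air-sea gradient itself.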
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, Erlend K.F.; Hole, Knut Hakon; Lund, Kjersti V.
Purpose: To systematically screen the tumor contrast enhancement of locally advanced cervical cancers to assess the prognostic value of two descriptive parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Methods and Materials: This study included a prospectively collected cohort of 81 patients who underwent DCE-MRI with gadopentetate dimeglumine before chemoradiotherapy. The following descriptive DCE-MRI parameters were extracted voxel by voxel and presented as histograms for each time point in the dynamic series: normalized relative signal increase (nRSI) and normalized area under the curve (nAUC). The first to 100th percentiles of the histograms were included in a log-rank survival test, resulting in p value and relative risk maps of all percentile-time intervals for each DCE-MRI parameter. The maps were used to evaluate the robustness of the individual percentile-time pairs and to construct prognostic parameters. Clinical endpoints were locoregional control and progression-free survival. The study was approved by the institutional ethics committee. Results: The p value maps of nRSI and nAUC showed a large continuous region of percentile-time pairs that were significantly associated with locoregional control (p < 0.05). These parameters had prognostic impact independent of tumor stage, volume, and lymph node status on multivariate analysis. Only a small percentile-time interval of nRSI was associated with progression-free survival. Conclusions: The percentile-time screening identified DCE-MRI parameters that predict long-term locoregional control after chemoradiotherapy of cervical cancer.
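A minimal sketch of the screening idea follows: at every (percentile, time-point) pair the cohort is dichotomized and a log-rank test is run, building a p-value map. It assumes the `lifelines` package and a hypothetical array layout for the per-patient percentile histograms; it is not the authors' analysis code.

```python
import numpy as np
from lifelines.statistics import logrank_test

def pvalue_map(histograms, time_to_event, event_observed):
    """Screen every (percentile, time-point) pair of a DCE-MRI histogram series.

    histograms     : array (n_patients, n_timepoints, 100) of per-patient percentile
                     values of a DCE-MRI parameter (e.g. nRSI) -- hypothetical layout.
    time_to_event  : follow-up time per patient (e.g. to locoregional failure).
    event_observed : 1 if the event occurred, 0 if censored.
    Returns an array (n_timepoints, 100) of log-rank p values.
    """
    n_pat, n_time, n_pct = histograms.shape
    pmap = np.ones((n_time, n_pct))
    for t in range(n_time):
        for p in range(n_pct):
            values = histograms[:, t, p]
            low = values <= np.median(values)          # dichotomize at the cohort median
            if low.all() or not low.any():
                continue
            res = logrank_test(time_to_event[low], time_to_event[~low],
                               event_observed_A=event_observed[low],
                               event_observed_B=event_observed[~low])
            pmap[t, p] = res.p_value
    return pmap
```

Large contiguous low-p regions in such a map correspond to the robust percentile-time intervals the authors describe.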
NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses of the experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak follows the three-parameter Weibull distribution. Furthermore, the ranges of and relationships between the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and the slamming pressure under different model conditions is presented comprehensively, and the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models. The parameter values were found to decrease with increased stiffness of the elastic support. A damage criterion for the structure model under wave impact is also discussed: the structure model was destroyed when the average slamming time exceeded a certain value during the duration of the wave impact. The conclusions of the experimental study are then described.
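The numbers quoted are consistent with the standard three-parameter Weibull exceedance form, P(X > x) = exp(-((x - D)/L)^k) for x >= D, which gives exactly e^-1 (36.79%) at x = D + L and larger values at the distribution mean when the shape parameter exceeds one. The sketch below evaluates this form; the parameter values are placeholders, not the fitted values from the tests.

```python
import numpy as np
from scipy.special import gamma

def weibull_exceedance(x, loc_D, scale_L, shape_k):
    """Exceedance probability P(X > x) of a three-parameter Weibull distribution."""
    z = np.clip((np.asarray(x, dtype=float) - loc_D) / scale_L, 0.0, None)
    return np.exp(-z ** shape_k)

# Illustrative parameters (hypothetical, not fitted values from the paper):
D, L, k = 0.4, 0.6, 1.3                        # note D + L ~ 1.0, as reported
mean_peak = D + L * gamma(1.0 + 1.0 / k)       # mean of the three-parameter Weibull
print(weibull_exceedance(mean_peak, D, L, k))  # ~0.41, i.e. above exp(-1) = 0.3679
```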
Impact of dyeing industry effluent on germination and growth of pea (Pisum sativum).
Malaviya, Piyush; Hali, Rajesh; Sharma, Neeru
2012-11-01
Dye industry effluent was analyzed for physico-chemical characteristics and its impact on the germination and growth behaviour of pea (Pisum sativum). The 100% effluent showed high pH (10.3) and TDS (1088 mg l(-1)). The germination parameters included percent germination, delay index, speed of germination, peak value and germination period, while the growth parameters comprised root and shoot length, root and shoot weight, root-shoot ratio and number of stipules. The study showed the maximum values of the positive germination parameters, viz. speed of germination (7.85), peak value (3.28) and germination index (123.87), and of all growth parameters at 20% effluent concentration, while the values of the negative germination parameters, viz. delay index (-0.14) and percent inhibition (-8.34), were minimum at 20% effluent concentration. The study demonstrated that at lower concentrations the dyeing industry effluent had a positive impact on the germination and growth of Pisum sativum.
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that greatly reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated, considering the key related processes, using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time of this fast simulation method is 10⁴ times shorter than that of the full GEANT4 simulation.
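For reference, the purely geometric part of such a calculation is the distance of closest approach of a (locally straight) track to the primary vertex in the transverse plane. The sketch below implements only that definition; it is not the paper's Bethe-Heitler/Tsai machinery, and the example numbers are invented.

```python
import numpy as np

def transverse_impact_parameter(vertex_xy, point_xy, p_xy):
    """Signed transverse impact parameter of a locally straight track.

    vertex_xy : primary-vertex position in the transverse plane
    point_xy  : a point on the reconstructed track (e.g. its production point)
    p_xy      : transverse-momentum direction of the track
    Returns the perpendicular distance of the track line from the vertex,
    signed by the z-component of the cross product d x u.
    """
    d = np.asarray(point_xy, float) - np.asarray(vertex_xy, float)
    u = np.asarray(p_xy, float)
    u = u / np.linalg.norm(u)
    return d[0] * u[1] - d[1] * u[0]   # z-component of d x u

# Electron produced ~2 cm from the vertex, flying nearly back toward it (toy numbers):
print(transverse_impact_parameter([0, 0], [1.8, 0.9], [-0.95, -0.40]))
```

Conversion electrons from pair production acquire large values of this quantity because their production point sits in the detector material, far from the primary vertex.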
A Study on Planetary Atmospheric Circulations using THOR
NASA Astrophysics Data System (ADS)
Mendonça, João; Grosheintz, Luc; Lukas Grimm, Simon; Heng, Kevin
2015-12-01
The large variety of planetary parameters observed leads us to think that exoplanets may show a large range of possible climates. It is therefore of the utmost importance to investigate the influence of astronomical and planetary bulk parameters in driving atmospheric circulations. In the solar system, results from planetary spacecraft missions have demonstrated how different planetary climates and atmospheric circulations can be. The study of exoplanets will require probing a far wider range of physical and orbital parameters than those of our neighboring planets. For this reason, such a study will involve exploring an even larger diversity of circulation and climate regimes. Our new atmospheric model, THOR, is intended to be extremely flexible and to explore the large diversity of planetary atmospheres. THOR is part of the Exoclimes Simulation Platform and is a project of the Exoplanet and Exoclimes Group (see www.exoclime.org). THOR solves the complex atmospheric fluid equations on a rotating sphere (a fully compressible, nonhydrostatic system) using an icosahedral grid. The main advantages of our new platform over other recent exoplanet models are that 1) the atmospheric fluid equations are completely represented and no approximations are used that could compromise the physics of the problem; 2) the model uses, for the first time in exoplanet studies, a specific icosahedral grid that solves the pole problem; 3) the interface is user-friendly and can be easily adapted to a multitude of atmospheric conditions; and 4) by using GPU computation, our code greatly improves the typical running time. We will present and discuss the first detailed results of our simulations, specifically two benchmark tests that are a representative sample of the large range of exoplanetary parameters: Earth-like conditions (the Held-Suarez test) and a tidally locked hot Jupiter. THOR has successfully passed these tests and is able to identify the main mechanisms driving the circulation in the simulated planets. From the 3D numerical simulations we found that some hot-Jupiter atmospheres can sustain multiple dynamical steady states. The results also suggest the presence of a new mechanism that transports heat from the upper to the lower atmosphere. The presence of this mechanism and its impact on the global temperature will be discussed in this presentation.
Scenario-Based Case Study Analysis of Asteroid Mitigation in the Short Response Time Regime
NASA Astrophysics Data System (ADS)
Seery, B.; Greenaugh, K. C.
2017-12-01
Asteroid impact on Earth is a rare but inevitable occurrence, with potentially cataclysmic consequences. If a pending impact is discovered, mitigation options include civil-defense preparations as well as missions to deflect the asteroid and/or robustly disrupt and disperse it to an extent that only a negligible fraction remains on a threatening path (National Research Council's "Defending the Planet," 2010). If discovered with sufficient warning time, a kinetic impactor can deflect smaller objects, but response delays can rule out this option. If a body is too large to deflect by kinetic impactor, or the time for response is insufficient, deflection or disruption can be achieved with a nuclear device. The use of nuclear ablation is considered within the context of current capabilities, requiring no need for nuclear testing. Existing, well-understood devices are sufficient for the largest known Potentially Hazardous Objects (PHOs). The National Aeronautics and Space Administration/Goddard Space Flight Center and the Department of Energy/National Nuclear Security Administration are collaborating to determine the critical characterization issues that define the boundaries for the asteroid-deflection options. Drawing from such work, we examine the timeline for a deflection mission and how to provide the best opportunity for an impactor to suffice by minimizing the response time. This integrated problem considers the physical process of the deflection method (impact or ablation), along with the spacecraft, launch capability, risk analysis, and the available intercept flight trajectories. Our joint DOE/NASA team has conducted case study analyses of three distinctly different PHOs, each on a hypothetical Earth-impacting trajectory. The sizes of the design reference bodies range from 100 to 500 meters in diameter, with varying physical parameters such as composition, spin state, and metallicity, to name a few. We assemble the design reference of the small body in question using known values for key parameters and expert elicitation to make educated guesses on the unknown parameters, including an estimate of the overall uncertainties in those values. Our scenario-based systems approach includes 2-D and 3-D physics-based modeling and simulations.
NASA Astrophysics Data System (ADS)
Vaithiyanathan, Thanapal; Sundaramoorthy, Perumal
2017-12-01
The sugar industry is a very important agro-based industry in India, and it discharges large amounts of effluent into water bodies, creating high levels of pollution that affect plants and other living organisms. In the present investigation, physico-chemical analyses of N. P. K. R. Ramaswamy co-operative sugar mill effluent were carried out, and the impact of different concentrations (control, 10, 25, 50, 75 and 100%) of the effluent on the seed germination behavior of African marigold (Tagetes erecta L.) was studied. Morphological parameters such as germination percentage, shoot length, root length, fresh weight and dry weight of seedlings, seed vigour index, tolerance index and percentage of phytotoxicity were calculated. The analyses of the sugar mill effluent indicated that some parameters, such as pH, EC, acidity, TDS, TS, BOD, COD, sulphate, magnesium, nitrogen, zinc, iron, copper, lead, manganese, and oil and grease, exceeded the permissible limits of the Tamil Nadu Pollution Control Board (TNPCB). Germination and growth parameters increased at the lower (10%) effluent concentration, and these morphological parameters gradually decreased with increasing effluent concentration. The lower (10%) concentration of sugar mill effluent may therefore be used for irrigation purposes.
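The growth indices named here are usually computed with simple textbook formulas. The sketch below uses the common definitions of seed vigour index, tolerance index and percent phytotoxicity; the exact definitions in the paper may differ slightly, and the example numbers are placeholders, not the reported data.

```python
def seedling_indices(germ_pct, root_cm, shoot_cm, control_root_cm):
    """Common germination/growth indices used in effluent phytotoxicity studies.

    These are the widely used textbook definitions; the paper may use variants.
    """
    vigour_index = germ_pct * (root_cm + shoot_cm)      # germination % x seedling length
    tolerance_index = root_cm / control_root_cm         # treated vs control root growth
    phytotoxicity_pct = 100.0 * (control_root_cm - root_cm) / control_root_cm
    return vigour_index, tolerance_index, phytotoxicity_pct

# Hypothetical 10% effluent vs control (placeholder numbers):
print(seedling_indices(germ_pct=92, root_cm=6.4, shoot_cm=9.1, control_root_cm=5.8))
```

With a treated root longer than the control, the phytotoxicity percentage comes out negative, which is how the stimulatory effect of dilute effluent shows up in such indices.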
Filtration, haze and foam characteristics of fermented wort mediated by yeast strain.
Douglas, P; Meneses, F J; Jiranek, V
2006-01-01
To investigate the influence of the choice of yeast strain on the haze, shelf life, filterability and foam quality characteristics of fermented products. Twelve strains were used to ferment a chemically defined wort and hopped ale or stout wort. Fermented products were assessed for foam using the Rudin apparatus, and for filterability and haze characteristics using the European Brewing Convention methods, to reveal differences in these parameters as a consequence of the choice of yeast strain and growth medium. Under the conditions used, the choice of the strain of Saccharomyces cerevisiae effecting the primary fermentation has an impact on all of the parameters investigated, most notably when the fermentation medium is devoid of macromolecular material. The filtration of fermented products has a large cost implication for many brewers and wine makers, and the haze of the resulting filtrate is a key quality criterion. Also of importance to the quality of beer and some wines is the foaming and head retention of these beverages. The foam characteristics, filterability and potential for haze formation in a fermented product have long been known to be dependent on the raw materials used, as well as other production parameters. The choice of Saccharomyces cerevisiae strain used to ferment has itself been shown here to influence these parameters.
The effect of seasonal birth pulses on pathogen persistence in wild mammal populations.
Peel, A J; Pulliam, J R C; Luis, A D; Plowright, R K; O'Shea, T J; Hayman, D T S; Wood, J L N; Webb, C T; Restif, O
2014-07-07
The notion of a critical community size (CCS), or population size that is likely to result in long-term persistence of a communicable disease, has been developed based on the empirical observations of acute immunizing infections in human populations, and extended for use in wildlife populations. Seasonal birth pulses are frequently observed in wildlife and are expected to impact infection dynamics, yet their effects on pathogen persistence and the CCS have not been considered. To investigate this issue theoretically, we use stochastic epidemiological models to ask how host life-history traits and infection parameters interact to determine pathogen persistence within a closed population. We fit seasonal birth pulse models to data from diverse mammalian species in order to identify realistic parameter ranges. When varying the synchrony of the birth pulse with all other parameters held constant, our model predicted that the CCS can vary by more than two orders of magnitude. Tighter birth pulses tended to drive pathogen extinction by creating large-amplitude oscillations in prevalence, especially with high demographic turnover and short infectious periods. Parameters affecting the relative timing of the epidemic and birth pulse peaks determined the intensity and direction of the effect of pre-existing immunity in the population on the pathogen's ability to persist beyond the initial epidemic following its introduction.
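A minimal sketch of the kind of model described follows: a stochastic SIR with demographic turnover and a seasonal, synchrony-controlled birth pulse, simulated with a simple tau-leaping step. The Gaussian-like pulse shape is one commonly used periodic form and may differ from the paper's; all parameter values are placeholders chosen only to make the example run.

```python
import numpy as np

rng = np.random.default_rng(1)

def birth_rate(t_years, k, s):
    """Seasonal birth pulse; larger s gives a tighter (more synchronous) pulse."""
    return k * np.exp(-s * np.cos(np.pi * t_years) ** 2)

def persists(N0=5000, s=10.0, beta=80.0, gamma=26.0, mu=0.5, years=20, dt=1.0 / 365.0):
    """Tau-leaping stochastic SIR with pulsed births (all rates per year).

    Returns True if infection is still present after `years`.
    """
    S, I, R = N0 - 10, 10, 0
    # scale k so that mean annual births balance deaths and N stays roughly constant
    k = mu * N0 / birth_rate(np.linspace(0.0, 1.0, 365), 1.0, s).mean()
    for step in range(int(years / dt)):
        t = step * dt
        N = S + I + R
        births  = rng.poisson(birth_rate(t, k, s) * dt)
        new_inf = min(rng.poisson(beta * S * I / max(N, 1) * dt), S)
        recov   = min(rng.poisson(gamma * I * dt), I)
        S = max(S + births - new_inf - rng.poisson(mu * S * dt), 0)
        I = max(I + new_inf - recov - rng.poisson(mu * I * dt), 0)
        R = max(R + recov - rng.poisson(mu * R * dt), 0)
        if I == 0:
            return False
    return True

# Crude persistence probability for one population size and one pulse synchrony:
print(np.mean([persists(s=10.0) for _ in range(20)]))
```

Repeating this over a grid of population sizes and synchrony values is the brute-force way to trace how the CCS shifts with birth-pulse tightness.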
A New Parameter for Cardiac Efficiency Analysis
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Rajan, Navaneetha Krishnan; Song, Zeying; Hoffmann, Kenneth; MacMahon, Eileen; Belohlavek, Marek
2014-11-01
Detecting and evaluating a heart with suboptimal pumping efficiency is a significant clinical goal. However, routine parameters such as ejection fraction, quantified with current non-invasive techniques, are not predictive of heart disease prognosis. Furthermore, they only represent left-ventricular (LV) ejection function and not efficiency, which might be affected before apparent changes in function. We propose a new parameter, called the hemodynamic efficiency (H-efficiency) and defined as the ratio of useful to total power, for cardiac efficiency analysis. Our results indicate that a change in the shape/motion of the LV will change its pumping efficiency even if the ejection fraction is kept constant at 55% (a normal value), i.e., H-efficiency can be used to diagnose suboptimal cardiac performance. To apply H-efficiency on a patient-specific basis, we are developing a system that combines echocardiography (echo) and computational fluid dynamics (CFD) to provide the 3D pressure and velocity fields needed to calculate the H-efficiency parameter directly. Because the method is based on clinically used 2D echo, which has faster acquisition time and lower cost relative to other imaging techniques, it can have a significant impact on a large number of patients. This work is partly supported by the American Heart Association.
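As a rough illustration of the proposed ratio, the sketch below computes useful-to-total power from CFD surface data under the simplifying assumption that useful power is the pressure-work flux through the outlet and total power is the work rate done by the moving ventricular wall. These operational definitions and the function interface are assumptions for illustration only, not the authors' formulation.

```python
import numpy as np

def hemodynamic_efficiency(p_out, v_out, n_out, dA_out, p_wall, v_wall, n_wall, dA_wall):
    """Ratio of useful to total hydraulic power from CFD surface data (a sketch).

    p_*  : pressure on each surface element [Pa]
    v_*  : velocity vectors on each element, shape (N, 3) [m/s]
    n_*  : outward unit normals, shape (N, 3)
    dA_* : element areas [m^2]
    """
    useful = np.sum(p_out * np.einsum("ij,ij->i", v_out, n_out) * dA_out)   # outlet work flux
    total = np.sum(p_wall * np.einsum("ij,ij->i", v_wall, n_wall) * dA_wall)  # wall work rate
    return useful / total
```

In practice both terms would be evaluated from the echo-driven CFD mesh and averaged over the ejection phase of the cardiac cycle before taking the ratio.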
The effect of seasonal birth pulses on pathogen persistence in wild mammal populations
Peel, A. J.; Pulliam, J. R. C.; Luis, A. D.; Plowright, R. K.; O'Shea, T. J.; Hayman, D. T. S.; Wood, J. L. N.; Webb, C. T.; Restif, O.
2014-01-01
The notion of a critical community size (CCS), or population size that is likely to result in long-term persistence of a communicable disease, has been developed based on the empirical observations of acute immunizing infections in human populations, and extended for use in wildlife populations. Seasonal birth pulses are frequently observed in wildlife and are expected to impact infection dynamics, yet their effects on pathogen persistence and the CCS have not been considered. To investigate this issue theoretically, we use stochastic epidemiological models to ask how host life-history traits and infection parameters interact to determine pathogen persistence within a closed population. We fit seasonal birth pulse models to data from diverse mammalian species in order to identify realistic parameter ranges. When varying the synchrony of the birth pulse with all other parameters held constant, our model predicted that the CCS can vary by more than two orders of magnitude. Tighter birth pulses tended to drive pathogen extinction by creating large-amplitude oscillations in prevalence, especially with high demographic turnover and short infectious periods. Parameters affecting the relative timing of the epidemic and birth pulse peaks determined the intensity and direction of the effect of pre-existing immunity in the population on the pathogen's ability to persist beyond the initial epidemic following its introduction. PMID:24827436
Where do golf driver swings go wrong? Factors influencing driver swing consistency.
Zhang, X; Shan, G
2014-10-01
One of the most challenging skills in golf is the driver swing. A large number of studies have characterized golf swings, yielding insightful instructions on how to swing well. As a result, achieving a sub-18 handicap is no longer the top problem for golfers. Instead, players are now most troubled by a lack of consistency during swing execution. The goal of this study was to determine how to consistently execute good golf swings. Using 3D motion capture and full-body biomechanical modeling, 22 experienced golfers were analysed. For characterizing both successful and failed swings, 19 selected parameters (13 angles, 4 time parameters, and 2 distances) were used. The results showed that 14 parameters are highly sensitive and/or prone to motor control variations. These parameters identified five distinct areas of the swing that are sensitive to variation: (a) ball positioning, (b) transverse club angle, (c) transition, (d) wrist control, and (e) posture migration between takeaway and impact. Suggestions were provided for how to address these five problem areas. We hope our findings on how to achieve consistency in golf swings will benefit all levels of golf pedagogy and help maintain and develop interest in golf and physical activity for a healthy lifestyle. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Employing recently proposed metamodeling for the nucleonic matter equation of state, we analyze neutron star global properties such as masses, radii, moment of inertia, and others. The impact of the uncertainties in the empirical parameters on these global properties is analyzed in a Bayesian statistical approach. Physical constraints, such as causality and stability, are imposed on the equation of state, and different hypotheses for the direct Urca (dUrca) process are investigated. In addition, only metamodels with maximum masses above 2 M⊙ are selected. Our main results are the following: the equation of state exhibits a universal behavior against the dUrca hypothesis under the condition of charge neutrality and β equilibrium; neutron stars, if composed exclusively of nucleons and leptons, have a radius of 12.7 ±0.4 km for masses ranging from 1 up to 2 M⊙; a small radius lower than 11 km is only very marginally compatible with our present knowledge of the nuclear empirical parameters; and finally, the most important empirical parameters which are still affected by large uncertainties and play an important role in determining the radius of neutron stars are the slope and curvature of the symmetry energy (Lsym and Ksym) and, to a lesser extent, the skewness parameters (Qsat/sym).
Impact factor: Universalism and reliability of assessment.
Grzybowski, Andrzej; Patryn, Rafał
In 1955, Eugene Garfield (1925-2017) published a paper in Science where for the first time he advocated the necessity of introducing parameters to assess the quality of scientific journals. Underlying this necessity was an observation of a trend whereby the whole area of influence in academic publishing was dominated by a narrow group of large interdisciplinary research journals. For this reason, he and Irving H. Sher created the impact factor (IF), also called the Garfield impact factor, journal citation rate, journal influence, and journal impact factor. The concept of the IF belongs to a research discipline called bibliometrics, which uses mathematical and statistical methods to analyze scientific publications. The Science Citation Index, established by Garfield in 1963 as a record of scientific publications and the citations therein, contributed directly to the increased importance of this method. Since the 1960s, the register of scientific publications has expanded and their evaluation by the IF has become a fundamental and universal measure of a journal's value. Contrary to the authors' intentions in creating the index, the IF is often used to assess the quality of individual contributions, and thereby the authors' achievements, their academic careers, and the funding possibilities of academic institutions. Copyright © 2016 Elsevier Inc. All rights reserved.
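The two-year impact factor that Garfield and Sher defined is a simple ratio; the sketch below computes it, with invented example numbers.

```python
def impact_factor(citations_this_year_to_prev2, citable_items_prev2):
    """Two-year journal impact factor in the Garfield/Sher sense.

    citations_this_year_to_prev2 : citations received in year Y to items the
                                   journal published in years Y-1 and Y-2
    citable_items_prev2          : number of citable items (articles, reviews)
                                   published in years Y-1 and Y-2
    """
    return citations_this_year_to_prev2 / citable_items_prev2

# Example: 1200 citations in year Y to the 400 citable items of the two preceding years
print(impact_factor(1200, 400))   # IF = 3.0
```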
Zhang, Jiayi; Shao, Xiongjun; Townsend, Oliver V; Lynd, Lee R
2009-12-01
A kinetic model was developed to predict batch simultaneous saccharification and co-fermentation (SSCF) of paper sludge by the xylose-utilizing yeast Saccharomyces cerevisiae RWB222 and the commercial cellulase preparation Spezyme CP. The model accounts for cellulose and xylan enzymatic hydrolysis and competitive uptake of glucose and xylose. Experimental results show that glucan and xylan enzymatic hydrolysis are highly correlated, and that the low concentrations of xylose encountered during SSCF do not have a significant inhibitory effect on enzymatic hydrolysis. Ethanol is found to not only inhibit the specific growth rate, but also to accelerate cell death. Glucose and xylose uptake rates were found to be competitively inhibitory, but this did not have a large impact during SSCF because the sugar concentrations are low. The model was used to evaluate which constants had the greatest impact on ethanol titer for a fixed substrate loading, enzyme loading, and fermentation time. The cellulose adsorption capacity and cellulose hydrolysis rate constants were found to have the greatest impact among enzymatic hydrolysis related constants, and ethanol yield and maximum ethanol tolerance had the greatest impact among fermentation related constants.
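A minimal ODE sketch of the interactions described in this abstract follows: first-order enzymatic release of glucose and xylose, Monod growth with glucose competitively repressing xylose uptake, linear ethanol inhibition of growth, and ethanol-accelerated cell death. All rate constants and yields are hypothetical placeholders, not the fitted model parameters.

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters (not the fitted values from the paper)
k_hyd = 0.05              # 1/h   first-order cellulose/xylan hydrolysis
mu_max = 0.30             # 1/h   maximum specific growth rate
Kg, Kx = 0.5, 1.0         # g/L   Monod constants for glucose and xylose
Kig = 0.8                 # g/L   competitive inhibition of xylose uptake by glucose
P_max = 90.0              # g/L   ethanol tolerance limit
kd0, kde = 0.005, 0.0005  # 1/h   basal death rate and its ethanol enhancement
Yxs, Yps = 0.1, 0.45      # g/g   biomass and ethanol yields on sugar

def sscf(t, y):
    C, Xyl, G, Xs, X, P = y     # cellulose, xylan, glucose, xylose, cells, ethanol
    inhib = max(1.0 - P / P_max, 0.0)                        # linear ethanol inhibition
    qg = mu_max * G / (Kg + G) * inhib
    qx = mu_max * Xs / (Kx * (1.0 + G / Kig) + Xs) * inhib   # glucose represses xylose uptake
    kd = kd0 + kde * P                                       # ethanol accelerates death
    dC, dXyl = -k_hyd * C, -k_hyd * Xyl
    dG = 1.11 * k_hyd * C - qg / Yxs * X                     # 1.11 / 1.14: hydration factors
    dXs = 1.14 * k_hyd * Xyl - qx / Yxs * X
    dX = (qg + qx - kd) * X
    dP = Yps * (qg + qx) / Yxs * X
    return [dC, dXyl, dG, dXs, dX, dP]

sol = solve_ivp(sscf, [0, 120], [40.0, 10.0, 0.0, 0.0, 1.0, 0.0], max_step=0.5)
print(f"ethanol after 120 h ~ {sol.y[5, -1]:.1f} g/L")
```

Sweeping individual constants of such a system (e.g. the hydrolysis rate or the tolerance limit) and recording the final ethanol titer mimics the sensitivity exercise described in the abstract.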
Influence of mass transfer on bubble plume hydrodynamics.
Lima Neto, Iran E; Parente, Priscila A B
2016-03-01
This paper presents an integral model to evaluate the impact of gas transfer on the hydrodynamics of bubble plumes. The model is based on the Gaussian type self-similarity and functional relationships for the entrainment coefficient and factor of momentum amplification due to turbulence. The impact of mass transfer on bubble plume hydrodynamics is investigated considering different bubble sizes, gas flow rates and water depths. The results revealed a relevant impact when fine bubbles are considered, even for moderate water depths. Additionally, model simulations indicate that for weak bubble plumes (i.e., with relatively low flow rates and large depths and slip velocities), both dissolution and turbulence can affect plume hydrodynamics, which demonstrates the importance of taking the momentum amplification factor relationship into account. For deeper water conditions, simulations of bubble dissolution/decompression using the present model and classical models available in the literature resulted in a very good agreement for both aeration and oxygenation processes. Sensitivity analysis showed that the water depth, followed by the bubble size and the flow rate are the most important parameters that affect plume hydrodynamics. Lastly, dimensionless correlations are proposed to assess the impact of mass transfer on plume hydrodynamics, including both the aeration and oxygenation modes.
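For orientation, the sketch below integrates the classical single-phase entrainment-closure plume equations (top-hat profiles) with a first-order buoyancy loss standing in for gas dissolution. It is a much-simplified relative of the paper's Gaussian two-phase model, and the entrainment coefficient, loss rate and source values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.083   # entrainment coefficient (a typical top-hat value, illustrative)
lam = 0.02      # 1/m  first-order buoyancy loss standing in for gas dissolution

def plume(z, y):
    Q, M, F = y                         # volume, momentum and buoyancy fluxes
    b = Q / np.sqrt(np.pi * M)          # top-hat plume radius
    w = M / Q                           # top-hat plume velocity
    dQ = 2.0 * np.pi * alpha * b * w    # entrainment closure
    dM = F * Q / M                      # buoyancy force per unit height
    dF = -lam * F                       # dissolution removes buoyancy
    return [dQ, dM, dF]

# Small source at the diffuser (illustrative values): Q0 [m^3/s], w0 [m/s], g0' [m/s^2]
Q0, w0, g0 = 0.01, 0.3, 0.15
sol = solve_ivp(plume, [0.0, 30.0], [Q0, Q0 * w0, Q0 * g0], max_step=0.1)
print(f"plume velocity after 30 m rise ~ {sol.y[1, -1] / sol.y[0, -1]:.2f} m/s")
```

Setting the loss rate to zero recovers the conservative plume, so the difference between the two runs isolates the hydrodynamic effect of gas transfer, which is the comparison the paper formalizes with its more complete closures.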
A GLOBAL GALACTIC DYNAMO WITH A CORONA CONSTRAINED BY RELATIVE HELICITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, A.; Mangalam, A., E-mail: avijeet@iiap.res.in, E-mail: mangalam@iiap.res.in
We present a model for a global axisymmetric turbulent dynamo operating in a galaxy with a corona that treats the parameters of turbulence driven by supernovae and by magneto-rotational instability under a common formalism. The nonlinear quenching of the dynamo is alleviated by the inclusion of small-scale advective and diffusive magnetic helicity fluxes, which allow the gauge-invariant magnetic helicity to be transferred outside the disk and consequently to build up a corona during the course of dynamo action. The time-dependent dynamo equations are expressed in a separable form and solved through an eigenvector expansion constructed using the steady-state solutions of the dynamo equation. The parametric evolution of the dynamo solution allows us to estimate the final structure of the global magnetic field and the saturated value of the turbulence parameter α_m, even before solving the dynamical equations for evolution of magnetic fields in the disk and the corona, along with α-quenching. We then solve these equations simultaneously to study the saturation of the large-scale magnetic field, its dependence on the small-scale magnetic helicity fluxes, and the corresponding evolution of the force-free field in the corona. The quadrupolar large-scale magnetic field in the disk is found to reach equipartition strength within a timescale of 1 Gyr. The large-scale magnetic field in the corona obtained is much weaker than the field inside the disk and has only a weak impact on the dynamo operation.
Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii
2015-01-01
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B.; Liu, Shih-Chii
2015-01-01
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time. PMID:26217169
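As a toy illustration of the bit-precision constraint studied in these two entries, the sketch below uniformly quantizes a trained weight matrix to a given number of bits and reports the resulting weight error. It is not SpiNNaker- or VLSI-specific code, and the matrix size and bit widths are arbitrary.

```python
import numpy as np

def quantize(weights, n_bits):
    """Uniformly quantize a weight array to n_bits over its own dynamic range.

    A minimal stand-in for fixed-point weight storage on neuromorphic hardware;
    real platforms also constrain the representable range per layer or core.
    """
    w = np.asarray(weights, dtype=float)
    levels = 2 ** n_bits - 1
    step = (w.max() - w.min()) / levels
    return np.round((w - w.min()) / step) * step + w.min()

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(784, 500))   # a DBN-layer-sized weight matrix (toy)
for bits in (8, 4, 2):
    err = np.abs(quantize(w, bits) - w).mean()
    print(f"{bits}-bit weights: mean absolute quantization error = {err:.4f}")
```

The paper's stronger result is that retraining with the target precision in the loop recovers much of the accuracy that naive post-hoc quantization like this loses.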
On the impact of reducing global geophysical fluid model deformations in SLR data processing
NASA Astrophysics Data System (ADS)
Weigelt, Matthias; Thaller, Daniela
2016-04-01
Mass redistributions in the atmosphere, oceans and continental hydrology cause elastic loading deformations of the Earth's crust and thus systematically influence Earth-bound observation systems such as VLBI, GNSS or SLR. By causing non-linear station variations, these loading deformations have a direct impact on the estimated station coordinates and an indirect impact on other parameters of global space-geodetic solutions, e.g. Earth orientation parameters, geocenter coordinates, satellite orbits or troposphere parameters. Generally, the impact can be mitigated by co-parameterisation or by reducing deformations derived from global geophysical fluid models. Here, we focus on the latter approach. A number of data sets modelling the (non-tidal) loading deformations are generated by various groups. They show regionally and locally significant differences, and consequently the impact on the space-geodetic solutions depends heavily on the available network geometry. We present and discuss the differences between these models and choose SLR as the space-geodetic technique of interest in order to discuss the impact of atmospheric, oceanic and hydrological loading on the parameters of space-geodetic solutions when correcting for the global geophysical fluid models at the observation level. Special emphasis is given to a consistent usage of models for geometric and gravimetric corrections during the data processing. We quantify the impact of the different deformation models on the station coordinates and discuss the improvement in the Earth orientation parameters and the geocenter motion. We also show that a significant reduction in the RMS of the station coordinates can be achieved depending on the model of choice.
Implosion of Cylindrical Cavities via Short Duration Impulsive Loading
NASA Astrophysics Data System (ADS)
Huneault, Justin; Higgins, Andrew
2014-11-01
An apparatus has been developed to study the collapse of a cylindrical cavity in gelatin subjected to a symmetric impact-driven impulsive loading. A gas-driven annular projectile is accelerated to approximately 50 m/s, at which point it impacts a gelatin casting confined by curved steel surfaces that allow a transition from an annular geometry to a cylindrically imploding motion. The implosion is visualized by a high-speed camera through a window which forms the top confining wall of the implosion cavity. The initial size of the cavity is such that the gelatin wall is two to five times thicker than the impacting projectile. Thus, during impact the compression wave which travels towards the cavity is closely followed by a rarefaction resulting from the free surface reflection of the compression wave in the projectile. As the compression wave in the gelatin reaches the inner surface, it will also reflect as a rarefaction wave. The interaction between the rarefaction waves from the gelatin and projectile free surfaces leads to large tensile stresses resulting in the spallation of a relatively thin shell. The study focuses on the effect of impact parameters on the thickness and uniformity of the imploding shell formed by the cavitation in the imploding gelatin cylinder.
NASA Technical Reports Server (NTRS)
Zirin, R. M.; Witmer, E. A.
1972-01-01
An approximate collision analysis, termed the collision-force method, was developed for studying impact-interaction of an engine rotor blade fragment with an initially circular containment ring. This collision analysis utilizes basic mass, material property, geometry, and pre-impact velocity information for the fragment, together with any one of three postulated patterns of blade deformation behavior: (1) the elastic straight blade model, (2) the elastic-plastic straight shortening blade model, and (3) the elastic-plastic curling blade model. The collision-induced forces are used to predict the resulting motions of both the blade fragment and the containment ring. Containment ring transient responses are predicted by a finite element computer code which accommodates the large deformation, elastic-plastic planar deformation behavior of simple structures such as beams and/or rings. The effects of varying the values of certain parameters in each blade-behavior model were studied. Comparisons of predictions with experimental data indicate that of the three postulated blade-behavior models, the elastic-plastic curling blade model appears to be the most plausible and satisfactory for predicting the impact-induced motions of a ductile engine rotor blade and a containment ring against which the blade impacts.
Galileo dust data from the jovian system: 2000 to 2003
NASA Astrophysics Data System (ADS)
Krüger, H.; Bindschadler, D.; Dermott, S. F.; Graps, A. L.; Grün, E.; Gustafson, B. A.; Hamilton, D. P.; Hanner, M. S.; Horányi, M.; Kissel, J.; Linkert, D.; Linkert, G.; Mann, I.; McDonnell, J. A. M.; Moissl, R.; Morfill, G. E.; Polanskey, C.; Roy, M.; Schwehm, G.; Srama, R.
2010-06-01
The Galileo spacecraft was the first man-made satellite of Jupiter, orbiting the planet between December 1995 and September 2003. The spacecraft was equipped with a highly sensitive dust detector that monitored the jovian dust environment between approximately 2 and 370 RJ (jovian radius RJ = 71 492 km). The Galileo dust detector was a twin of the one flying on board the Ulysses spacecraft. This is the tenth in a series of papers dedicated to presenting Galileo and Ulysses dust data. Here we present data from the Galileo dust instrument for the period January 2000 to September 2003, until Galileo was destroyed in a planned impact with Jupiter. The previous Galileo dust data set contains data of 2883 particles detected during Galileo's interplanetary cruise and 12 978 particles detected in the jovian system between 1996 and 1999. In this paper we report on the data of an additional 5389 particles measured between 2000 and the end of the mission in 2003. The majority of the 21 250 particles for which the full set of measured impact parameters (impact time, impact direction, charge rise times, charge amplitudes, etc.) was transmitted to Earth were tiny grains (about 10 nm in radius), most of them originating from Jupiter's innermost Galilean moon Io. They were detected throughout the jovian system and the impact rates frequently exceeded 10 min⁻¹. Surprisingly large impact rates up to 100 min⁻¹ occurred in August/September 2000 when Galileo was far away (≈280 RJ) from Jupiter, implying dust ejection rates in excess of 100 kg s⁻¹. This peak in dust emission appears to coincide with strong changes in the release of neutral gas from the Io torus. Strong variability in the Io dust flux was measured on timescales of days to weeks, indicating large variations in the dust release from Io or the Io torus or both on such short timescales. Galileo has detected a large number of bigger micron-sized particles mostly in the region between the Galilean moons. A surprisingly large number of such bigger grains was measured in March 2003 within a four-day interval when Galileo was outside Jupiter's magnetosphere at approximately 350 RJ jovicentric distance. Two passages of Jupiter's gossamer rings in 2002 and 2003 provided the first actual comparison of in-situ dust data from a planetary ring with the results inferred from inverting optical images. Strong electronics degradation of the dust instrument due to the harsh radiation environment of Jupiter led to increased calibration uncertainties of the dust data.
Modeling the Economic Impacts of Large Deployments on Local Communities
2008-12-01
Thesis presented to the Faculty, Department of Systems Engineering, Air Force Institute of Technology (AFIT/GCA/ENV/08-D01). Approved for public release; distribution unlimited.
Scaling law deduced from impact-cratering experiments on basalt targets
NASA Astrophysics Data System (ADS)
Takagi, Y.; Hasegawa, S.; Suzuki, A.
2014-07-01
Since impact-cratering phenomena on planetary bodies were the key process which modified the surface topography and formed regolith layers, many experiments on non-cohesive materials (sand, glass beads) have been performed. On the other hand, experiments on natural rocks are limited. In particular, experiments on basalt targets are rare, although basalt is the most common rocky material on planetary surfaces. The reason may be the difficulty of obtaining basalt samples suitable for cratering experiments. Recently, we obtained homogeneous and crack-free large basalt blocks, and we performed systematic cratering experiments using these basalt targets. Experimental Procedure: Impact experiments were performed using a two-stage light-gas (hydrogen) gun on the JAXA Sagamihara campus. Spherical projectiles of nylon, aluminum, stainless steel, and tungsten carbide were launched at velocities between 2400 and 6100 m/s. The projectiles were 1.0 to 7.1 mm in diameter and 0.004 to 0.22 g in mass. The incidence angle was fixed at 90 degrees. The targets were rectangular blocks of Ukrainian basalt. The impact plane was a square with 20-cm sides, and the thickness was 9 cm. Samples were cut from a columnar block so that the impact plane was perpendicular to the axis of the columnar joint. The mass was about 10.5 kg and the density was 2920 ± 10 kg/m^3. Twenty-eight shots were performed. Three-dimensional shapes of the craters were measured with an X-Y stage carrying a laser displacement sensor (Keyence LK-H150); the interval between measurement points was 200 micrometers. The volume, depth, and aperture area of each crater were calculated from the 3-D data using analytical software. Since the shapes of the formed craters are markedly asymmetrical, the diameter of the circle whose area equals the aperture area was taken as the crater diameter. Results: The diameter, depth, and volume of the formed craters are normalized by the π parameters, and the experimental conditions are also expressed by the π parameters. The figure shows the relation between the normalized volume and the π_3 parameter; a clear dependence on the projectile density is evident. Multiple regression analyses yield the relation π_V ∝ π_3^{-1.04 ± 0.14} π_4^{0.45 ± 0.18}. Other results and comparisons with those of previous studies are presented in the paper.
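The fitted relation can be turned into a relative prediction once the π-groups are written out. The sketch below assumes the usual crater-scaling conventions, π_3 = Y/(ρ_t U²) for the strength group and π_4 = ρ_t/ρ_p for the density ratio; the proportionality constant is not quoted in the abstract, so only ratios of predictions are meaningful, and the example strength and densities are placeholders.

```python
def predicted_pi_v(target_density, target_strength, projectile_density, velocity,
                   prefactor=1.0):
    """Normalized crater volume pi_V from the fitted strength-scaling exponents.

    Assumes pi_3 = Y / (rho_t * U**2) and pi_4 = rho_t / rho_p; `prefactor` is a
    placeholder for the unquoted proportionality constant.
    """
    pi_3 = target_strength / (target_density * velocity ** 2)
    pi_4 = target_density / projectile_density
    return prefactor * pi_3 ** -1.04 * pi_4 ** 0.45

# Relative effect of doubling impact velocity for a steel sphere on basalt (toy numbers):
low = predicted_pi_v(2920, 1.0e8, 7800, 3000.0)
high = predicted_pi_v(2920, 1.0e8, 7800, 6000.0)
print(f"pi_V increases by a factor of {high / low:.2f}")   # ~2**2.08 ~ 4.2
```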
Long term measurements of the estimated hygroscopic enhancement of aerosol optical properties
NASA Astrophysics Data System (ADS)
Hervo, Maxime; Sellegri, Karine; Pichon, Jean Marc; Roger, Jean Claude; Laj, Paolo
2015-04-01
Water vapour has a major impact on aerosol optical properties and thus on the radiative forcing from aerosol-radiation interactions (RFari). However, few studies have measured this impact over a long period. Optical properties of aerosols were measured at the GAW Puy de Dôme station (1465 m) over a seven-year period (2006-2012). The impact of hygroscopicity on aerosol optical properties was calculated over a two-year period (2010-2011). The analysis of the spatial and temporal variability of the dry optical properties showed that, while no long-term trend was found, a clear seasonal and diurnal variation was observed in the extensive parameters (scattering, absorption). Scattering and absorption coefficients were highest during the warm season and during daytime, in agreement with the seasonality and diurnal variation of the planetary boundary layer height reaching the site. Intensive parameters (single scattering albedo, asymmetry factor, refractive index) did not show such a strong diurnal variability, but still indicated different values depending on the season. Both extensive and intensive optical parameters were sensitive to the air mass origin. A strong impact of hygroscopicity on aerosol optical properties was calculated, mainly on aerosol scattering, with a dependence on the aerosol type and the season. At 90% relative humidity, the scattering enhancement factor (fsca) was more than 4.4 for oceanic aerosol that had mixed with a pollution plume. Consequently, the aerosol radiative forcing was estimated to be 2.8 times higher at RH = 90% and 1.75 times higher at ambient RH when hygroscopic growth of the aerosol was considered. The hygroscopicity enhancement factor of the scattering coefficient was parameterized as a function of humidity and air mass type. To our knowledge, these results are among the first to present the impact of water vapour on aerosol optical properties over a long period, and the first for a site at the border between the planetary boundary layer and the free troposphere. Acknowledgements. The authors would like to acknowledge the OPGC and its staff and INSU/CNRS for their contribution to establishing and maintaining the PdD measurement site. This work was performed in the frame of the European EUSAAR (R113-CT-2006-026140) and EUCAARI (0136833-2) projects and the French ORAURE SOERE.
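A common way to parameterize the scattering enhancement factor as a function of humidity and air-mass type is the single-parameter gamma law, f(RH) = ((100 - RH)/(100 - RH_ref))^(-γ). The sketch below evaluates it; the γ values are placeholders per air-mass type, not the fits derived at Puy de Dôme.

```python
def f_rh(rh, gamma, rh_ref=40.0):
    """Scattering enhancement factor using the common single-parameter gamma law.

    rh and rh_ref in percent; gamma depends on aerosol type (the values used
    below are placeholders, not the Puy de Dome fits).
    """
    return ((100.0 - rh) / (100.0 - rh_ref)) ** (-gamma)

for air_mass, g in [("continental", 0.5), ("marine/pollution mix", 0.9)]:
    print(f"{air_mass}: f(90%) = {f_rh(90.0, g):.1f}")
```

With γ near 0.9 the enhancement at 90% RH comes out around 5, which is the order of magnitude reported for the oceanic aerosol mixed with a pollution plume.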
Impact phenomena as factors in the evolution of the Earth
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Parmentier, E. M.
1984-01-01
It is estimated that 30 to 200 large impact basins could have been formed on the early Earth. These large impacts may have resulted in extensive volcanism and enhanced endogenic geologic activity over large areas. Initial modelling of the thermal and subsidence history of large terrestrial basins indicates that they created geologic and thermal anomalies which lasted for geologically significant times. The role of large-scale impact in the biological evolution of the Earth has been highlighted by the discovery of siderophile anomalies at the Cretaceous-Tertiary boundary and associated with North American microtektites. Although in neither case has an associated crater been identified, the observations are consistent with the deposition of projectile-contaminated high-speed ejecta from major impact events. Consideration of impact processes reveals a number of mechanisms by which large-scale impact may induce extinctions.
Estimation of Graded Response Model Parameters Using MULTILOG.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Describes an idiosyncrasy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…
NASA Astrophysics Data System (ADS)
Baniasadi, Neda; Wang, Mengyu; Wang, Hui; Jin, Qingying; Mahd, Mufeed; Elze, Tobias
2017-02-01
Purpose: To evaluate the effects of four anatomical parameters (angle between superior and inferior temporal retinal arteries [inter-artery angle, IAA], optic disc [OD] rotation, retinal curvature, and central retinal vessel trunk entry point location [CRVTL]) on retinal nerve fiber layer thickness (RNFLT) abnormality marks by OCT machines. Methods: Cirrus OCT circumpapillary RNFLT measurements and Humphrey visual fields (HVF 24-2) of 421 patients from a large glaucoma clinic were included. Ellipses were fitted to the OD borders. Ellipse rotation relative to the vertical axis defined OD rotation. CRVTL was manually marked on the horizontal axis of the ellipse on the OCT fundus image. IAA was calculated between manually marked retinal artery locations at the 1.73mm radius around OD. Retinal curvature was determined by the inner limiting membrane on the horizontal B-scan closest to the OD center. For each location on the circumpapillary scanning area, logistic regression was used to determine if each of the four parameters had a significant impact on RNFLT abnormality marks independent of disease severity. The results are presented on spatial maps of the entire scanning area. Results: Variations in IAA significantly influenced abnormality marks on 38.8% of the total scanning area, followed by CRVTL (19.2%) and retinal curvature (18.7%). The effect of OD rotation was negligible (<1%). Conclusions: A natural variation in IAA, retinal curvature, and CRVTL can affect OCT abnormality ratings, which may bias clinical diagnosis. Our spatial maps may help OCT manufacturers to introduce location specific norms to ensure that abnormality marks indicate ocular disease instead of variations in eye anatomy.
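A minimal sketch of the per-location analysis described is given below: at each scanning location, the 0/1 abnormality mark is regressed on the anatomical parameter of interest plus a disease-severity covariate, and the anatomical effect is judged by its p value. The array layout, the choice of severity covariate and the use of statsmodels are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import statsmodels.api as sm

def anatomy_effect_map(abnormal, anatomy, severity, alpha=0.05):
    """Map of scan locations where an anatomical parameter affects abnormality flags.

    abnormal : (n_eyes, n_locations) array of 0/1 OCT abnormality marks
    anatomy  : (n_eyes,) anatomical parameter (e.g. inter-artery angle)
    severity : (n_eyes,) disease-severity covariate (e.g. visual-field mean deviation)
    Returns a boolean array marking locations with p < alpha for the anatomy term.
    """
    n_eyes, n_loc = abnormal.shape
    X = sm.add_constant(np.column_stack([anatomy, severity]))
    significant = np.zeros(n_loc, dtype=bool)
    for j in range(n_loc):
        y = abnormal[:, j]
        if y.min() == y.max():          # skip locations that are never or always abnormal
            continue
        fit = sm.Logit(y, X).fit(disp=0)
        significant[j] = fit.pvalues[1] < alpha   # p value of the anatomy coefficient
    return significant
```

The fraction of the scanning area covered by such significant locations is the quantity the abstract reports per anatomical parameter (e.g. 38.8% for the inter-artery angle).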