Sample records for realistic parameter values

  1. Calibration of infiltration parameters on hydrological tank model using runoff coefficient of rational method

    NASA Astrophysics Data System (ADS)

    Suryoputro, Nugroho; Suhardjono, Soetopo, Widandi; Suhartanto, Ery

    2017-09-01

In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of natural components, and 2) entering initial parameter values which are then refined, by trial and error or automatically, to obtain optimal values. Determining a realistic initial value requires experience and user knowledge of the model, which is a problem for beginner model users. This paper presents another approach to estimating the infiltration parameters in the tank model: the parameters are approximated by the runoff coefficient of the rational method. The approximate value of the infiltration parameter is simply the difference between the percentage of total rainfall and the percentage of runoff. It is expected that the results of this research will accelerate the calibration of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km². Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analyzed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang; temperature, evaporation, relative humidity, and wind speed data were obtained from the BMKG station at Karang Ploso, Malang. The results showed that a good initial value for the infiltration coefficient at the top tank outlet can be determined using the runoff coefficient of the rational method.
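The difference-based estimate described above can be sketched in a few lines; the function name and the example runoff coefficient below are illustrative assumptions, not values from the study:

```python
def initial_infiltration_coefficient(runoff_coefficient):
    """Initial guess for a tank-model infiltration coefficient: the
    fraction of rainfall that does not become direct runoff, i.e.
    100% of rainfall minus the rational-method runoff percentage."""
    if not 0.0 <= runoff_coefficient <= 1.0:
        raise ValueError("runoff coefficient C must lie in [0, 1]")
    return 1.0 - runoff_coefficient

# a rational-method C of 0.35 would suggest ~65% of rainfall infiltrates
print(initial_infiltration_coefficient(0.35))
```

This only supplies a starting value; the trial-and-error or automatic optimisation stage described in the abstract would still refine it.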

  2. Brownian motion model with stochastic parameters for asset prices

    NASA Astrophysics Data System (ADS)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Here we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) a short time Δt ahead of the present time t depends on the asset price at time t + Δt, on the present parameter value x(t), and on m-1 earlier parameter values via a conditional distribution. Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters against that of the model with stochastic parameters.
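A minimal sketch of such a model, assuming (since the abstract does not give the paper's conditional distribution) a simple random-walk update for x(t) = (μ, σ):

```python
import math
import random

def simulate_stochastic_gbm(s0, mu0, sigma0, dt, n_steps, seed=0):
    """Simulate asset prices under a Brownian-motion model whose drift
    and volatility themselves evolve step by step.  The parameter update
    here is an illustrative random walk, standing in for the paper's
    conditional distribution."""
    rng = random.Random(seed)
    s, mu, sigma = s0, mu0, sigma0
    path = [s]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        # standard geometric-Brownian step with the *current* (mu, sigma)
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        # x(t + dt) depends on x(t) plus small noise; sigma kept positive
        mu += 0.01 * rng.gauss(0.0, 1.0) * math.sqrt(dt)
        sigma = max(1e-4, sigma + 0.01 * rng.gauss(0.0, 1.0) * math.sqrt(dt))
        path.append(s)
    return path
```

With fixed parameters the update lines would simply be removed, recovering ordinary geometric Brownian motion for comparison.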

  3. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions about the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and unrealistically constrain their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
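The perceptual-range parameters can be illustrated with a simple visibility test; the geometry below (2-D positions, a symmetric visual field centred on the heading, an acuity-limited detection distance) is an assumed simplification, not the models used in the study:

```python
import math

def visible_neighbors(focal_pos, focal_heading, others, max_dist, coverage_deg):
    """Return the neighbours inside a focal individual's perceptual range:
    closer than an acuity-limited detection distance and inside its
    visual coverage (total field of view, in degrees, centred on the
    heading angle in radians)."""
    half = math.radians(coverage_deg) / 2.0
    seen = []
    for (x, y) in others:
        dx, dy = x - focal_pos[0], y - focal_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_dist:
            continue  # self or beyond detection distance
        # smallest angular offset from the heading, wrapped to [0, pi]
        angle = abs((math.atan2(dy, dx) - focal_heading + math.pi)
                    % (2 * math.pi) - math.pi)
        if angle <= half:
            seen.append((x, y))
    return seen
```

Widening the coverage or lengthening the detection distance, as the realistic parameters do, admits more neighbours into the interaction set, which is what drives the larger, fewer groups reported above.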

  4. System parameters for erythropoiesis control model: Comparison of normal values in human and mouse model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.

  5. Exemplifying the Effects of Parameterization Shortcomings in the Numerical Simulation of Geological Energy and Mass Storage

    NASA Astrophysics Data System (ADS)

    Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk

    2016-04-01

Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations are commenced by applying a parameter set that is as realistic as possible. Then, a base scenario is calibrated against field observations. Finally, scenario simulations can be performed, for instance to forecast system behavior after varying the input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the initially selected parameter set, and uncertainties arising from a lack of parameter values or of process understanding may not be perceivable, let alone quantifiable. Therefore, conducting THMC simulations in the context of energy and mass storage deserves a particular review of the model parameterization and its input data, and such a review hardly exists so far to the required extent. Variability or aleatory uncertainty exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, can be described statistically, thereby exhibiting statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation that results from varying the parameter. For other parameters, the lack of data quantity and quality implies a fundamental change in the ongoing processes when such a parameter value is varied in numerical scenario simulations. As an example of such a scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas. As a final example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded by the authors of this study as recognized ignorance, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties, which describe the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or recognized ignorance must even be attested to a parameter or process in question, the outcomes of the simulations depend mainly on the modeler's decisions in choosing parameter values or interpreting the occurrence of processes. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer-term basis, so that the effects of subsurface use may be predicted realistically. This discussion, supplemented by a compilation of available geoscientific data for parameterizing such simulations, is presented in this study.

  6. Parameter interdependence and uncertainty induced by lumping in a hydrologic model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark R.; Doherty, John

    2007-05-01

    Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.

  7. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or inversion of large matrices.

  8. Non-minimal quartic inflation in supersymmetric SO(10)

    DOE PAGES

    Leontaris, George K.; Okada, Nobuchika; Shafi, Qaisar

    2016-12-16

Here, we describe how quartic (λφ⁴) inflation with non-minimal coupling to gravity is realized in realistic supersymmetric SO(10) models. In a well-motivated example the 16 and $\overline{16}$ Higgs multiplets, which break SO(10) to SU(5) and yield masses for the right-handed neutrinos, provide the inflaton field φ. Thus, leptogenesis is a natural outcome in this class of SO(10) models. Moreover, the adjoint (45-plet) Higgs also acquires a GUT-scale value during inflation, so that the monopole problem is evaded. The scalar spectral index n_s is in good agreement with the observations, and r, the tensor-to-scalar ratio, is predicted for realistic values of the GUT parameters to be of order 10⁻³ to 10⁻².

  9. A statistical survey of heat input parameters into the cusp thermosphere

    NASA Astrophysics Data System (ADS)

    Moen, J. I.; Skjaeveland, A.; Carlson, H. C.

    2017-12-01

Based on three winters of observational data, we present the ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with a statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000x500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of the previous twofold drag errors.

  10. A Model-Based Investigation of Charge-Generation According to the Relative Diffusional Growth Rate Theory

    NASA Astrophysics Data System (ADS)

    Glassmeier, F.; Arnold, L.; Lohmann, U.; Dietlicher, R.; Paukert, M.

    2016-12-01

Our current understanding of charge generation in thunderclouds is based on collisional charge transfer between graupel and ice crystals in the presence of liquid water droplets as the dominant mechanism. The physical process of charge transfer and the sign of the net charge generated on graupel and ice crystals under different cloud conditions are not yet understood. The Relative-Diffusional-Growth-Rate (RDGR) theory (Baker et al. 1987) suggests that the particle with the faster diffusional radius growth is charged positively. In this contribution, we use simulations of idealized thunderclouds with two-moment warm and cold cloud microphysics to generate realistic combinations of RDGR parameters. We find that these realistic parameter combinations result in a relationship between sign of charge, cloud temperature and effective water content that deviates from previous theoretical and laboratory studies. This deviation indicates that the RDGR theory is sensitive to correlations between parameters that occur in clouds but are not captured in studies that vary temperature and water content while keeping other parameters at fixed values. In addition, our results suggest that diffusional growth from the riming-related local water vapor field, a key component of the RDGR theory, is negligible for realistic parameter combinations. Nevertheless, we confirm that the RDGR theory results in positive or negative charging of particles under different cloud conditions. Under specific conditions, charge generation via the RDGR theory alone might thus be sufficient to explain tripolar charge structures in thunderclouds. In general, however, additional charge generation mechanisms and adaptations to the RDGR theory that consider riming other than via local vapor deposition seem necessary.

  11. NLC Luminosity as a Function of Beam Parameters

    NASA Astrophysics Data System (ADS)

    Nosochkov, Y.

    2002-06-01

    Realistic calculation of NLC luminosity has been performed using particle tracking in DIMAD and beam-beam simulations in GUINEA-PIG code for various values of beam emittance, energy and beta functions at the Interaction Point (IP). Results of the simulations are compared with analytic luminosity calculations. The optimum range of IP beta functions for high luminosity was identified.
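The analytic calculation the tracking results are compared against is, for Gaussian beams, the standard geometric luminosity formula with IP spot sizes σ = √(εβ); the parameter names below are illustrative, and the beam-beam enhancement computed by GUINEA-PIG is deliberately omitted:

```python
import math

def geometric_luminosity(n_particles, n_bunches, f_rep,
                         eps_x, eps_y, beta_x, beta_y):
    """Geometric collider luminosity for Gaussian beams:
        L = N^2 * n_b * f / (4 * pi * sigma_x * sigma_y),
    with IP spot sizes sigma = sqrt(emittance * beta).
    All inputs in consistent units (e.g. metres, Hz)."""
    sigma_x = math.sqrt(eps_x * beta_x)
    sigma_y = math.sqrt(eps_y * beta_y)
    return n_particles**2 * n_bunches * f_rep / (4.0 * math.pi * sigma_x * sigma_y)
```

The formula makes the optimisation trade-off visible: L scales as 1/√β at the IP, so halving β_y raises the luminosity by √2, until hourglass and beam-beam effects (captured only in the full simulation) cut in.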

  12. Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia

    NASA Astrophysics Data System (ADS)

    Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica

    2017-01-01

We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic, and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish, using a sensitivity analysis of the model parameters, which parameters are the most important in the success or failure of leukemia remission under treatment. For the most significant parameters affecting the evolution of CML during Imatinib treatment, we estimate realistic values from experimental data. For these parameters, steady states are calculated and their stability is analyzed and biologically interpreted.

  13. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher-order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher-order approximations: the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In both systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and, similarly to previous studies, the optimal α parameter in the SCA was approximately 0.31. Importantly, a poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth-order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth-order terms, with the deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.

  14. Low-frequency fluctuations in vertical cavity lasers: Experiments versus Lang-Kobayashi dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Torcini, Alessandro (Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, via Sansone 1, 50019 Sesto Fiorentino); Barland, Stephane

    2006-12-15

The limits of applicability of the Lang-Kobayashi (LK) model for a semiconductor laser with optical feedback are analyzed. The model equations, equipped with realistic parameter values, are investigated below the solitary laser threshold, where low-frequency fluctuations (LFFs) are usually observed. The numerical findings are compared with experimental data obtained for the selected polarization mode from a vertical cavity surface emitting laser (VCSEL) subject to polarization-selective external feedback. The comparison reveals the bounds within which the dynamics of the LK model can be considered realistic. In particular, it clearly demonstrates that the deterministic LK model, for realistic values of the linewidth enhancement factor α, reproduces the LFFs only as a transient dynamics towards one of the stationary modes with maximal gain. A reasonable reproduction of real data from VCSELs can be obtained only by considering the noisy LK model or, alternatively, the deterministic LK model for extremely high α values.

  15. An extensive study of Bose-Einstein condensation in liquid helium using Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Guha, Atanu; Das, Prasanta Kumar

    2018-05-01

Realistic scenarios can be represented far better by the general canonical ensemble than by the ideal one, provided proper parameter sets are involved. We study the Bose-Einstein condensation phenomenon of liquid helium within the framework of Tsallis statistics. With a comparatively high value of the deformation parameter q (∼1.4), the theoretically calculated critical temperature (Tc) of the phase transition of liquid helium is found to agree with the experimentally determined value (Tc = 2.17 K), although the two differ for q = 1 (the undeformed scenario). This sheds light on the understanding of the phenomenon and qualitatively connects temperature fluctuations (non-equilibrium conditions) with the interactions between atoms: more interaction between atoms gives rise to more non-equilibrium conditions, as expected.

  16. Estimation of bare soil evaporation using multifrequency airborne SAR

    NASA Technical Reports Server (NTRS)

    Soares, Joao V.; Shi, Jiancheng; Van Zyl, Jakob; Engman, E. T.

    1992-01-01

It is shown that for homogeneous areas soil moisture can be derived from synthetic aperture radar (SAR) measurements, so that microwave remote sensing can give realistic estimates of energy fluxes if coupled to a simple two-layer model representing the soil. The model simulates volumetric water content (Wg) using classical meteorological data, provided that some of the soil thermal and hydraulic properties are known. Only four parameters are necessary: mean water content, thermal conductivity and diffusivity, and soil resistance to evaporation. They may be derived if a minimal number of measured values of Wg and surface-layer temperature (Tg) are available, together with independent measurements of energy flux to compare with the estimated values. The estimated evaporation is shown to be realistic and in good agreement with drying-stage theory, in which water transfer in the soil occurs in vapor form.

  17. Get Real!--Physically Reasonable Values for Teaching Electrostatics

    ERIC Educational Resources Information Center

    Morse, Robert A.

    2016-01-01

    Students get a sense of realistic values for physical situations from texts, but more importantly from solving problems. Therefore, problems should use realistic values for quantities to provide needed practice. Unfortunately, some problems on tests and in textbooks do not use realistic values. Physical situations in electrostatics seem to be…

  18. Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model

    NASA Astrophysics Data System (ADS)

    Washington, M. H.; Kumar, S.

    2017-12-01

The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's predictions of water levels during a storm surge are inaccurate compared to the observed data. To address this, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to compare the accuracy of the model's water-level predictions using the realistic wind size parameter against the default constant wind size parameter for Hurricane Matthew, with the water-level data observed from October 4, 2016 to October 9, 2016 by the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water-level output for the realistic wind size parameter matches the NOAA reference water-level data more accurately than that for the default constant parameter.
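Accuracy comparisons of this kind are typically quantified with an error metric such as the root-mean-square error against the reference series; this generic sketch uses hypothetical series, not the study's NOAA data:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between a modelled series and a reference
    series (e.g. simulated vs. observed water levels); the run with the
    smaller RMSE matches the baseline more accurately."""
    if len(predicted) != len(observed) or not predicted:
        raise ValueError("series must be non-empty and of equal length")
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))
```

Computing this once for the realistic-wind-size run and once for the default-constant run, against the same observed series, gives a single-number comparison of the two configurations.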

  19. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    NASA Astrophysics Data System (ADS)

Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

Recently we demonstrated a novel, simplified model enabling calculation of the voltage-dependent retardance provided by parallel-aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach that still shows predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether its parameters are physically meaningful. Since the PA-LCoS device is a black box for which we have no information about the physical parameters, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for the typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when its parameters are fitted. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe the internal characteristics of the PA-LCoS device.

  20. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    PubMed Central

    Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for predicting oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions of oxygen consumption, with clear differences between age groups (P < .001) and between methods (P < .001). Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows rapid generation of replicate parameter estimates, without errors due to exhaustive manual calculations. PMID:19641642
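The matrix idea (replicate indirect-Fick estimates from multiple predicted VO2 values, reported as a likely range) can be sketched as follows; the helper name and the example numbers are hypothetical, and only pulmonary blood flow is shown:

```python
def pulmonary_flow_range(vo2_predictions_ml_min, cpv_o2_ml_dl, cpa_o2_ml_dl):
    """Indirect Fick principle, one estimate per predicted VO2:
        Qp (L/min) = VO2 (mL/min) / [(CpvO2 - CpaO2) (mL O2/dL) * 10],
    where CpvO2 and CpaO2 are pulmonary venous and arterial O2 contents.
    Returns the lower and upper limits of the resulting range."""
    avdo2 = cpv_o2_ml_dl - cpa_o2_ml_dl
    if avdo2 <= 0:
        raise ValueError("pulmonary venous O2 content must exceed arterial")
    flows = [vo2 / (avdo2 * 10.0) for vo2 in vo2_predictions_ml_min]
    return min(flows), max(flows)
```

Feeding in one predicted VO2 per prediction model reproduces the single-value criticism in reverse: the spread of the returned range shows how much the choice of prediction equation alone moves the estimate.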

  1. Clairvoyant fusion: a new methodology for designing robust detection algorithms

    NASA Astrophysics Data System (ADS)

    Schaum, Alan

    2016-10-01

Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete-parameter problems.

  2. A theoretical investigation of chirp insonification of ultrasound contrast agents.

    PubMed

    Barlow, Euan; Mulholland, Anthony J; Gachagan, Anthony; Nordon, Alison

    2011-08-01

A theoretical investigation of second harmonic imaging of an ultrasound contrast agent (UCA) under chirp insonification is considered. By solving the UCA's dynamical equation analytically, the effects that the chirp signal parameters and the UCA shell parameters have on the amplitude of the second harmonic frequency are examined. This allows optimal parameter values to be identified which maximise the UCA's second harmonic response. A relationship is found for the chirp parameters which ensures that a signal can be designed to resonate a UCA for a given set of shell parameters. It is also shown that the shell thickness, shell viscosity and shell elasticity parameter should be as small as realistically possible in order to maximise the second harmonic amplitude. Keywords: Keller-Herring, second harmonic, chirp, ultrasound contrast agent.

  3. The Rényi entropy H2 as a rigorous, measurable lower bound for the entropy of the interaction region in multi-particle production processes

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-10-01

    A model-independent lower bound on the entropy S of the multi-particle system produced in high energy collisions, provided by the measurable Rényi entropy H2, is shown to be very effective. Estimates show that the ratio H2/S remains close to one half for all realistic values of the parameters.
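The bound rests on the general inequality H2 ≤ S between the order-2 Rényi entropy and the Shannon entropy, which holds for any probability distribution; a quick numerical check with an arbitrary example distribution:

```python
import math

def shannon_entropy(p):
    """Shannon entropy S = -sum_i p_i ln p_i (natural units)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_h2(p):
    """Renyi entropy of order 2: H2 = -ln(sum_i p_i^2).
    H2 never exceeds S, so a measured H2 bounds S from below."""
    return -math.log(sum(pi * pi for pi in p))

p = [0.5, 0.25, 0.125, 0.125]          # example multiplicity distribution
print(renyi_h2(p), shannon_entropy(p))  # H2 is the smaller of the two
```

The abstract's claim that H2/S stays near one half for realistic parameters is a statement about the specific multi-particle distributions involved, not about this inequality, which is saturated only for uniform distributions.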

  4. Radon decay products in realistic living rooms and their activity distributions in human respiratory system.

    PubMed

    Mohery, M; Abdallah, A M; Baz, S S; Al-Amoudi, Z M

    2014-12-01

    In this study, the individual activity concentrations of attached short-lived radon decay products ((218)Po, (214)Pb and (214)Po) in aerosol particles were measured in ten poorly ventilated, realistic living rooms. Following standard methodologies, the samples were collected with a filter-holder technique connected to an alpha spectrometer. The mean air activity concentrations of these radionuclides were found to be 5.3±0.8, 4.5±0.5 and 3.9±0.4 Bq m(-3), respectively. Based on the physical properties of the attached decay products and the physiological parameters of light work activity for an adult human male recommended by ICRP 66, and considering the activity size distribution parameters (AMD = 0.25 μm and σ(g) = 2.5) given by the NRC, the total and regional deposition fractions in each airway generation could be evaluated. Moreover, the total and regional equivalent doses in the human respiratory tract could be estimated. In addition, the surface activity distribution per generation is calculated for the bronchial region (BB) and the bronchiolar region (bb) of the respiratory system. The maximum values of these activities were found in the upper bronchial airway generations. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Parametric Sensitivity Analysis for the Asian Summer Monsoon Precipitation Simulation in the Beijing Climate Center AGCM Version 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ben; Zhang, Yaocun; Qian, Yun

    In this study, we apply an efficient sampling approach and conduct a large number of simulations to explore the sensitivity of the simulated Asian summer monsoon (ASM) precipitation, including its climatological state and interannual variability, to eight parameters related to cloud and precipitation processes in the Beijing Climate Center AGCM version 2.1 (BCC_AGCM2.1). Our results show that BCC_AGCM2.1 has large biases in simulating ASM precipitation. The precipitation efficiency and the evaporation coefficient for deep convection are the most sensitive parameters for the simulated ASM precipitation. With optimal parameter values, the simulated precipitation climatology is remarkably improved, e.g. increased precipitation over the equatorial Indian Ocean, suppressed precipitation over the Philippine Sea, and a more realistic Meiyu distribution over eastern China. The interannual variability of ASM precipitation is further analyzed, with a focus on ENSO impacts. Simulations with a better ASM precipitation climatology also produce more realistic precipitation anomalies during El Niño decaying summers. In the low-skill experiments for precipitation climatology, the ENSO-induced precipitation anomalies are most significant over the continents (versus over the ocean in observations) in the South Asian monsoon region. More realistic results are obtained from the higher-skill experiments, with stronger anomalies over the Indian Ocean and weaker anomalies over India and the western Pacific, favoring the more evident easterly anomalies forced by tropical Indian Ocean warming and the stronger Indian Ocean-western Pacific teleconnection seen in observations. Our model results reveal a strong connection between the simulated ASM precipitation climatological state and its interannual variability in BCC_AGCM2.1 when key parameters are perturbed.

  6. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross-section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.

  7. Three-dimensional skyrmions in spin-2 Bose–Einstein condensates

    NASA Astrophysics Data System (ADS)

    Tiurev, Konstantin; Ollikainen, Tuomas; Kuopanportti, Pekko; Nakahara, Mikio; Hall, David S.; Möttönen, Mikko

    2018-05-01

    We introduce topologically stable three-dimensional skyrmions in the cyclic and biaxial nematic phases of a spin-2 Bose–Einstein condensate. These skyrmions exhibit exceptionally high mapping degrees resulting from the versatile symmetries of the corresponding order parameters. We show how these structures can be created in existing experimental setups and study their temporal evolution and lifetime by numerically solving the three-dimensional Gross–Pitaevskii equations for realistic parameter values. Although the biaxial nematic and cyclic phases are observed to be unstable against transition towards the ferromagnetic phase, their lifetimes are long enough for the skyrmions to be imprinted and detected experimentally.

  8. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application to realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetric domains that resemble realistic near-shore features and investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and near-shore bathymetric features. To do this, we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use these initial conditions to run a numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Varying the dip angle of the fault plane shows that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in simple bathymetric domains and might be implemented in hazard mapping and early warning.

  9. Forecasts of health care utilization related to pandemic A(H1N1)2009 influenza in the Nord-Pas-de-Calais region, France.

    PubMed

    Giovannelli, J; Loury, P; Lainé, M; Spaccaferri, G; Hubert, B; Chaud, P

    2015-05-01

    To describe and evaluate forecasts of the load that pandemic A(H1N1)2009 influenza would place on general practitioners (GPs) and the hospital care system, especially during its peak, in the Nord-Pas-de-Calais (NPDC) region, France. Modelling study. The epidemic curve was modelled under an assumption of normally distributed cases. The values of the forecast parameters were estimated from a literature review of observed data from the Southern hemisphere and the French Overseas Territories, where the pandemic had already occurred. Two scenarios were considered, one realistic and the other pessimistic, enabling the authors to evaluate the 'reasonable worst case'. Forecasts were then assessed by comparing them with observed data in the NPDC region (population 4 million). The realistic scenario's forecasts estimated 300,000 cases, 1500 hospitalizations and 225 intensive care unit (ICU) admissions for the pandemic wave; 115 hospital beds and 45 ICU beds would be required per day during the peak. The pessimistic scenario's forecasts were 2-3 times higher than the realistic scenario's. Observed data were 235,000 cases, 1585 hospitalizations, 58 ICU admissions, and a maximum of 11.6 ICU beds per day. The realistic scenario correctly estimated the temporal distribution of GP and hospitalized cases but overestimated the number of cases admitted to the ICU. Obtaining more robust data for parameter estimation, particularly the rate of ICU admission in the population, which the authors recommend using, may provide better forecasts. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
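
    The normal-curve assumption can be sketched in a few lines (the function name and the numbers below are illustrative assumptions, not the study's actual parameters): the wave's total case count is spread across weeks in proportion to a Gaussian profile centred on the peak week.

```python
import math

def weekly_cases(total, peak_week, sd_weeks, n_weeks):
    """Distribute a pandemic wave's total case count over n_weeks weeks,
    assuming a normally distributed epidemic curve."""
    profile = [math.exp(-0.5 * ((w - peak_week) / sd_weeks) ** 2)
               for w in range(n_weeks)]
    norm = sum(profile)
    return [total * p / norm for p in profile]

# E.g. 300,000 cases over a 13-week wave peaking in week 6.
curve = weekly_cases(300000, peak_week=6, sd_weeks=2, n_weeks=13)
```

    Normalising by the discrete profile sum (rather than the continuous pdf) guarantees the weekly counts add up exactly to the forecast total.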

  10. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in the parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding the relative merits of the parameter values back into the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
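
    The two-step loop (i)-(ii) can be sketched as follows. This toy version is an assumption-laden illustration, not the operational EPPES method: it uses a scalar parameter, a fixed proposal spread, and a simple likelihood-weighted mean update in place of the full proposal-distribution update.

```python
import math
import random

def eppes_sketch(likelihood, mean, sd, n_members=20, n_windows=50, seed=1):
    """Toy EPPES-style loop: each ensemble member's parameter is drawn from
    the proposal N(mean, sd^2); the proposal mean is then re-centred on the
    likelihood-weighted average of the sampled values."""
    rng = random.Random(seed)
    for _ in range(n_windows):
        samples = [rng.gauss(mean, sd) for _ in range(n_members)]
        weights = [likelihood(s) for s in samples]
        mean = sum(w * s for w, s in zip(weights, samples)) / sum(weights)
    return mean

# Toy "forecast skill": a likelihood sharply peaked at an unknown true value 0.7.
skill = lambda p: math.exp(-((p - 0.7) ** 2) / (2 * 0.05 ** 2))
estimate = eppes_sketch(skill, mean=0.0, sd=0.3)
```

    Over successive assimilation windows the proposal mean migrates toward the parameter value that verifies best, which is the essential mechanism the record describes.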

  11. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in the parameterization schemes of sub-grid-scale physical processes. Current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding the relative merits of the parameter values back into the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator based on the ECHAM5 atmospheric GCM show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.

  12. Precise measurement of renal filtration and vascular parameters using a two-compartment model for dynamic contrast-enhanced MRI of the kidney gives realistic normal values.

    PubMed

    Tofts, Paul S; Cutajar, Marica; Mendichovszky, Iosif A; Peters, A Michael; Gordon, Isky

    2012-06-01

    To model the uptake phase of T(1)-weighted DCE-MRI data in normal kidneys and to demonstrate that the fitted physiological parameters correlate with published normal values. The model incorporates delay and broadening of the arterial vascular peak as it appears in the capillary bed, two distinct compartments for renal intravascular and extravascular Gd tracer, and uses a small-vessel haematocrit value of 24%. Four physiological parameters can be estimated: regional filtration K(trans) (ml min(-1) [ml tissue](-1)), perfusion F (ml min(-1) [100 ml tissue](-1)), blood volume v(b) (%) and mean residence time MRT (s). From these are found the filtration fraction (FF; %) and total GFR (ml min(-1)). Fifteen healthy volunteers were imaged twice using oblique coronal slices every 2.5 s to determine the reproducibility. Using parenchymal ROIs, group mean values for the renal biomarkers all agreed with published values: K(trans): 0.25; F: 219; v(b): 34; MRT: 5.5; FF: 15; GFR: 115. Nominally cortical ROIs consistently underestimated total filtration (by ~50%). Reproducibility was 7-18%. Sensitivity analysis showed that these fitted parameters are most vulnerable to errors in the fixed parameters kidney T(1), flip angle, haematocrit and relaxivity. These renal biomarkers can potentially measure renal physiology in diagnosis and treatment. • Dynamic contrast-enhanced magnetic resonance imaging can measure renal function. • Filtration and perfusion values in healthy volunteers agree with published normal values. • Precision measured in healthy volunteers is between 7 and 15%.
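
    The reported group means are mutually consistent under the standard relation FF = filtration / renal plasma flow, with plasma flow taken as F·(1 − Hct) using the small-vessel haematocrit of 24%. The check below is ours, not the paper's code; only the unit conversion and the standard definition are assumed.

```python
def filtration_fraction(ktrans_per_ml, f_per_100ml, hct):
    """Filtration fraction (%) = filtration rate / renal plasma flow.
    ktrans_per_ml: ml min^-1 per ml tissue; f_per_100ml: ml min^-1 per 100 ml tissue."""
    plasma_flow_per_ml = (f_per_100ml / 100.0) * (1.0 - hct)  # blood flow -> plasma flow
    return 100.0 * ktrans_per_ml / plasma_flow_per_ml

# Group means from the record: K(trans) = 0.25, F = 219, small-vessel Hct = 0.24.
ff = filtration_fraction(0.25, 219.0, 0.24)  # ≈ 15, matching the reported FF of 15%
```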

  13. The scattering of low energy positrons by helium

    NASA Technical Reports Server (NTRS)

    Humberston, J. W.

    1973-01-01

    Kohn's variational method is used to calculate the positron-helium scattering length and low energy S-wave phase shifts for a quite realistic Hylleraas type of helium function containing an electron-electron correlation term. The zero energy wavefunction is used to calculate the value of the annihilation rate parameter Z sub eff. All the results are significantly different from those for Drachman's helium model B, but are in better agreement with the available experimental data.

  14. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
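
    Schematically (in generic notation, not the paper's specific plankton model), the three formulations differ only in what is assumed about the dataset-specific parameters $\theta_d$ behind each dataset $y_d$:

```latex
\begin{aligned}
&\text{global:}       && \theta_d = \theta \text{ for all } d, \quad \theta \sim p(\theta),\\
&\text{separate:}     && \theta_d \sim p(\theta_d) \text{ independently for each } d,\\
&\text{hierarchical:} && \theta_d \sim p(\theta \mid \varphi), \quad \varphi \sim p(\varphi).
\end{aligned}
```

    The hyperparameters $\varphi$ are what let the hierarchical analysis share information across datasets while still allowing $\theta_d$ to vary, and their prior $p(\varphi)$ is precisely the sensitivity the authors flag as a caveat.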

  15. Bivalves: From individual to population modelling

    NASA Astrophysics Data System (ADS)

    Saraiva, S.; van der Meer, J.; Kooijman, S. A. L. M.; Ruardij, P.

    2014-11-01

    An individual-based population model for bivalves was designed, built and tested in a 0D approach to simulate the population dynamics of a mussel bed located in an intertidal area. The processes at the individual level were simulated following dynamic energy budget theory, whereas initial egg mortality, background mortality, food competition, and predation (including cannibalism) were additional population processes. Model properties were studied through the analysis of theoretical scenarios and by simulating different mortality parameter combinations in a realistic setup, imposing environmental measurements. Realistic criteria were applied to narrow down the possible combinations of parameter values. Field observations obtained in a long-term, multi-station monitoring program were compared with the model scenarios. The realistically selected modelling scenarios reproduced reasonably well the timing of some peaks in individual abundance in the mussel bed and its size distribution, but the number of individuals was not well predicted. The results suggest that mortality in the early life stages (egg and larva) plays an important role in population dynamics, whether through initial egg mortality, larval dispersion, settlement failure or shrimp predation. Future steps include coupling the population model with a hydrodynamic and biogeochemical model to improve the simulation of egg/larva dispersion, settlement probability and food transport, and to simulate the feedback of the organisms' activity on water-column properties, which will improve the characterization of food quantity and quality.

  16. The Role of Economic Uncertainty on the Block Economic Value - a New Valuation Approach / Rola Czynnika Niepewności Przy Obliczaniu Wskaźnika Rentowności - Nowe Podejście

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Ataee-Pour, M.

    2012-12-01

    The block economic value (EV) is one of the most important parameters in mine evaluation. It affects significant factors such as the mining sequence, the final pit limit and the net present value. Nowadays, the aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximize the pit value under technical and operational constraints. It is therefore necessary to calculate the block economic value correctly at the first stage of the mine planning process. Unrealistic block economic value estimation may cause mining project managers to make wrong decisions and thus impose irreparable losses on the project. Effective parameters such as metal price, operating cost and grade are always assumed certain in conventional methods of EV calculation, whereas these parameters are in fact uncertain; consequently, the results of conventional methods are usually far from reality. To solve this problem, a new technique based on a binomial tree developed in this research is used. This method can calculate the EV and the project present value (PV) under economic uncertainty. In this paper, the EV and project PV were first determined using the Whittle formula based on certain economic parameters, and then using a multivariate binomial tree based on economic uncertainties such as metal price and cost uncertainty; finally, the results were compared. It is concluded that accounting for metal price and cost uncertainty makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
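
    The idea can be illustrated with a deliberately simplified single-variable lattice (assumed function and parameter names; the paper's method is a multivariate tree combined with the Whittle formula, not this): the block EV becomes an expectation of revenue minus cost over a recombining binomial price tree.

```python
from math import comb

def expected_block_ev(price0, up, down, p_up, periods,
                      grade, recovery, tonnes, cost_per_tonne):
    """Expected block economic value when the metal price follows a
    recombining binomial tree: after `periods` steps the price is
    price0 * up^k * down^(periods - k) with binomial probability."""
    exp_price = sum(
        comb(periods, k) * p_up**k * (1 - p_up)**(periods - k)
        * price0 * up**k * down**(periods - k)
        for k in range(periods + 1)
    )
    revenue = exp_price * grade * recovery * tonnes   # value of recovered metal
    return revenue - cost_per_tonne * tonnes          # minus operating cost

# Illustrative block: 1% grade, 90% recovery, 1000 t, symmetric price moves.
ev = expected_block_ev(100.0, 1.1, 0.9, 0.5, 2, 0.01, 0.9, 1000.0, 0.5)
```

    Extending the same lattice to costs as well (one tree per uncertain variable) is what makes the tree "multivariate" in the paper's sense.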

  17. Effect of Critical Displacement Parameter on Slip Regime at Subduction Fault

    NASA Astrophysics Data System (ADS)

    Muldashev, Iskander; Sobolev, Stephan

    2016-04-01

    It is widely accepted that, for simple fault models, the value of the critical displacement parameter (Dc) in the Ruina-Dietrich rate-and-state friction law is responsible for the transition from a stick-slip regime at low Dc to a non-seismic creep regime at large Dc. However, neither the value of the "transition" Dc nor the character of the transition is known for a realistic subduction zone setting. Here we investigate the effect of Dc on the slip regime at subduction faults for two setups: a generic model similar to a simple-shear elastic slider under quasistatic loading, and a full subduction model with appropriate geometry and stress and temperature distributions similar to the setting of the Great Chile Earthquake of 1960. In our modeling we use a finite element numerical technique that employs non-linear elasto-visco-plastic rheology in the entire model domain, with rate-and-state plasticity within the fault zone. The model generates spontaneous earthquake sequences. An adaptive time-step integration procedure varies the time step from 40 seconds during instability (earthquake) and gradually increases it to 5 years during postseismic relaxation. The technique allows us to observe the effect of Dc on the period and magnitude of earthquakes through the cycles. We demonstrate that our results for the generic model are consistent with previous theoretical and numerical modeling results. For the full subduction model we obtain a transition from non-seismic creep to stick-slip at Dc of about 20 cm. We will demonstrate and discuss the features of the transition regimes in both the generic and the realistic subduction models.
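
    For reference (the standard aging-law form of rate-and-state friction, not an equation reproduced from this abstract), Dc enters through the state-evolution equation:

```latex
\mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},
```

    so Dc sets the slip distance over which the state variable $\theta$, and hence the friction coefficient, evolves toward steady state. Large Dc smears the frictional weakening over a long slip distance and favours stable creep; small Dc concentrates it and favours stick-slip.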

  18. Extensional channel flow revisited: a dynamical systems perspective

    PubMed Central

    Meseguer, Alvaro; Mellibovsky, Fernando; Weidman, Patrick D.

    2017-01-01

    Extensional self-similar flows in a channel are explored numerically for arbitrary stretching–shrinking rates of the confining parallel walls. The present analysis embraces time integrations and continuations of steady and periodic solutions unfolded in the parameter space. Previous studies focused on the analysis of branches of steady solutions for particular stretching–shrinking rates, although recent studies have also addressed the dynamical aspects of these problems. We have adopted a dynamical systems perspective, analysing the instabilities and bifurcations the base state undergoes as the Reynolds number increases. It is found that the base state becomes unstable at small Reynolds numbers, and a transitional region including complex dynamics takes place at intermediate Reynolds numbers, depending on the wall acceleration values. The base flow instabilities are constitutive parts of different codimension-two bifurcations that control the dynamics in parameter space. For large Reynolds numbers, the restriction to self-similarity results in simple flows with no realistic behaviour, but the flows obtained in the transition region can be a valuable tool for understanding the dynamics of realistic Navier–Stokes solutions. PMID:28690413

  19. Application of the precipitation-runoff model in the Warrior coal field, Alabama

    USGS Publications Warehouse

    Kidd, Robert E.; Bossong, C.R.

    1987-01-01

    A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.

  20. Dynamic 99mTc-MAG3 renography: images for quality control obtained by combining pharmacokinetic modelling, an anthropomorphic computer phantom and Monte Carlo simulated scintillation camera imaging

    NASA Astrophysics Data System (ADS)

    Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael

    2013-05-01

    In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
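
    A minimal sketch of the compartmental idea (assumed rate constants and a plasma → kidney → bladder chain; the actual MAG3 PK model has more compartments and realistic kinetics): first-order transfer between compartments yields time-activity curves while conserving the total amount of tracer, the property the record emphasises.

```python
def simulate_tacs(k_pk=0.05, k_kb=0.02, a_plasma=100.0, dt=1.0, n_steps=600):
    """Forward-Euler solution of a plasma -> kidney -> bladder chain with
    first-order rate constants. Returns per-compartment time-activity curves;
    total activity is conserved at every step by construction."""
    plasma, kidney, bladder = a_plasma, 0.0, 0.0
    curves = {"plasma": [], "kidney": [], "bladder": []}
    for _ in range(n_steps):
        flow_pk = k_pk * plasma * dt   # plasma clearance into the kidney
        flow_kb = k_kb * kidney * dt   # renal excretion into the bladder
        plasma -= flow_pk
        kidney += flow_pk - flow_kb
        bladder += flow_kb
        curves["plasma"].append(plasma)
        curves["kidney"].append(kidney)
        curves["bladder"].append(bladder)
    return curves
```

    With these illustrative constants the kidney curve rises to a peak and then drains, the qualitative renogram shape; altering k_pk or k_kb mimics the controlled changes in relative uptake and transit time described in the record.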

  1. A preliminary study of head-up display assessment techniques. 2: HUD symbology and panel information search time

    NASA Technical Reports Server (NTRS)

    Guercio, J. G.; Haines, R. F.

    1978-01-01

    Twelve commercial pilots were shown 50 high-fidelity slides of a standard aircraft instrument panel with the airspeed, altitude, ADI, VSI, and RMI needles in various realistic orientations. Fifty slides showing an integrated head-up display (HUD) symbology containing an equivalent number of flight parameters as above (with flight path replacing VSI) were also shown. Each subject was told what flight parameter to search for just before each slide was exposed and was given as long as needed (12 sec maximum) to respond by verbalizing the parameter's displayed value. The results for the 100-percent correct data indicated that: there was no significant difference in mean reaction time (averaged across all five flight parameters) between the instrument panel and HUD slides; and a statistically significant difference in mean reaction time was found in responding to different flight parameters.

  2. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and runtime-efficient 2D crack/fracture simulation system that applies the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides an efficient neighbor-retrieval mechanism used for mesh element splitting and merging, with the minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are taken into account to produce the criteria of crack initialization, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive: machine learning mechanisms learn the optimal values of the variables/parameters from prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted for various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system can realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation), with diverse potential in military and civil M&S applications such as training and mission planning.

  3. Two-dimensional advective transport in ground-water flow parameter estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
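The first step of the analysis procedure above rests on parameter correlations computed from sensitivities at the initial parameter values. A minimal sketch of that computation (not the MODFLOWP implementation; the example matrix is hypothetical) is:

```python
import numpy as np

def parameter_correlations(J):
    """Correlation coefficient matrix of parameters from a sensitivity
    (Jacobian) matrix J, rows = observations, cols = parameters,
    using the least-squares covariance approximation (J^T J)^-1."""
    cov = np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# Two nearly redundant parameters (near-collinear columns):
# their correlation approaches +/-1, so they cannot be estimated uniquely
J = np.array([[1.0, 1.01],
              [2.0, 2.02],
              [3.0, 2.99]])
R = parameter_correlations(J)
```

A correlation magnitude near 1.0 signals that only a combination of the two parameters is constrained by the observations, which is exactly what adding advective-transport observations is meant to break.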

  4. Convective dynamics and chemical disequilibrium in the atmospheres of substellar objects

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.

    2017-11-01

    The thousands of substellar objects now known provide a unique opportunity to test our understanding of atmospheric dynamics across a range of environments. The chemical timescales of certain species transition from being much shorter than the dynamical timescales to being much longer than them at a point in the atmosphere known as the quench point. This transition leads to a state of dynamical disequilibrium, the effects of which can be used to probe the atmospheric dynamics of these objects. Unfortunately, due to computational constraints, models that inform the interpretation of these observations are run at dynamical parameters which are far from realistic values. In this study, we explore the behavior of a disequilibrium chemical process with increasingly realistic planetary conditions, to quantify the effects of the approximations used in current models. We simulate convection in 2-D, plane-parallel, polytropically-stratified atmospheres, into which we add reactive passive tracers that explore disequilibrium behavior. We find that as we increase the Rayleigh number, and thus achieve more realistic planetary conditions, the behavior of these tracers does not conform to the classical predictions of disequilibrium chemistry.
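The quench point described above is simply the level where the chemical timescale crosses the dynamical timescale. A toy sketch of locating it (the timescale profiles are hypothetical, not from the paper's simulations):

```python
import numpy as np

def quench_index(t_chem, t_dyn):
    """Index of the first level where the chemical timescale exceeds the
    dynamical timescale (the quench point); None if chemistry stays fast."""
    longer = t_chem > t_dyn
    return int(np.argmax(longer)) if longer.any() else None

# Toy vertical profiles: chemistry slows (timescale grows) with altitude,
# while the convective mixing timescale is roughly constant
z = np.linspace(0.0, 1.0, 101)
t_chem = 1e-3 * np.exp(8.0 * z)   # hypothetical chemical timescale (s)
t_dyn = np.full_like(z, 0.1)      # hypothetical mixing timescale (s)
iq = quench_index(t_chem, t_dyn)
```

Above the quench level, abundances are frozen at their quench-point values rather than tracking local equilibrium, which is the disequilibrium signature the study probes.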

  5. CONVECTION THEORY AND SUB-PHOTOSPHERIC STRATIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnett, David; Meakin, Casey; Young, Patrick A., E-mail: darnett@as.arizona.ed, E-mail: casey.meakin@gmail.co, E-mail: patrick.young.1@asu.ed

    2010-02-20

As a preliminary step toward a complete theoretical integration of three-dimensional compressible hydrodynamic simulations into stellar evolution, convection at the surface and sub-surface layers of the Sun is re-examined, from a restricted point of view, in the language of mixing-length theory (MLT). Requiring that MLT use a hydrodynamically realistic dissipation length gives a new constraint on solar models. While the stellar structure which results is similar to that obtained by the Yale Rotational Evolution Code (Guenther et al.; Bahcall and Pinsonneault) and Garching models (Schlattl et al.), the theoretical picture differs. A new quantitative connection is made between macro-turbulence, micro-turbulence, and the convective velocity scale at the photosphere, which has finite values. The 'geometric parameter' in MLT is found to correspond more reasonably with the thickness of the superadiabatic region (SAR), as it must for consistency in MLT, and its integrated effect may correspond to that of the strong downward plumes which drive convection (Stein and Nordlund); it thus has a physical interpretation even in MLT. If we crudely require the thickness of the SAR to be consistent with the 'geometric factor' used in MLT, there is no longer a free parameter, at least in principle. Use of three-dimensional simulations of both adiabatic convection and stellar atmospheres will allow the determination of the dissipation length and the geometric parameter (i.e., the entropy jump) more realistically, and with no astronomical calibration. A physically realistic treatment of convection in stellar evolution will require substantial additional modifications beyond MLT, including nonlocal effects of kinetic energy flux, entrainment (the most dramatic difference from MLT found by Meakin and Arnett), rotation, and magnetic fields.

  6. Development of numerical phantoms by MRI for RF electromagnetic dosimetry: a female model.

    PubMed

    Mazzurana, M; Sandrini, L; Vaccari, A; Malacarne, C; Cristoforetti, L; Pontalti, R

    2004-01-01

Numerical human models for electromagnetic dosimetry are commonly obtained by segmentation of CT or MRI images, and complex permittivity values are ascribed to each tissue according to literature values. The aim of this study is to provide an alternative semi-automatic method by which non-segmented images, obtained by an MRI tomograph, can be automatically related to complex permittivity values through two frequency-dependent transfer functions. In this way permittivity and conductivity vary with continuity--even in the same tissue--reflecting the intrinsic realistic spatial dispersion of such parameters. A female human model impinged by a plane wave is tested using a finite-difference time-domain algorithm, and the results for the total-body and layer-averaged specific absorption rate are reported.

  7. Should the patent system for pharmaceuticals be replaced? A theoretical approach.

    PubMed

    Antoñanzas, Fernando; Rodríguez-Ibeas, Roberto; Juárez-Castelló, Carmelo A

    2014-10-01

This paper acknowledges the difficulties of providing access to innovative drugs in some jurisdictions under the patent system and contributes to the current debate on mechanisms aimed at facilitating such access. We employ a highly stylized static model of two markets (North and South) to analyse the conditions under which a new system based on royalty payments would be preferred to a patent system for pharmaceuticals. In the welfare calculations we explicitly consider the influence of marketing activities by the patent owner as well as the shadow price of the public funds needed to finance the royalties. The bargaining power of the firm in obtaining higher compensation is also considered. The results are not unambiguously conclusive, being heavily dependent on the relevant values of the parameters. Nevertheless, it seems that for realistic parameter values the new system could be preferred by all the parties involved.

  8. Electromagnetic absorption in the head of adults and children due to mobile phone operation close to the head.

    PubMed

    de Salles, Alvaro A; Bulla, Giovani; Rodriguez, Claudio E Fernández

    2006-01-01

The Specific Absorption Rate (SAR) produced by mobile phones in the head of adults and children is simulated using an algorithm based on the Finite Difference Time Domain (FDTD) method. Realistic models of the child and adult head are used, with electromagnetic parameters fitted to each model. Comparisons are also made with the SAR calculated in the child model when adult electromagnetic parameter values are used. Microstrip (or patch) antennas and quarter-wavelength monopole antennas are used in the simulations, fed at 1850 MHz and 850 MHz. The SAR results are compared with the available international recommendations. It is shown that, under similar conditions, the 1g-SAR calculated for children is higher than that for adults. When the 10-year-old child model is used, SAR values more than 60% higher than those for adults are obtained.
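The quantity being compared is built from the local SAR definition, SAR = sigma |E|^2 / rho, which FDTD solvers evaluate cell by cell before averaging over 1 g or 10 g of tissue for regulatory comparison. A minimal point-SAR sketch (the tissue values are hypothetical, merely of the order used for head tissue at these frequencies):

```python
def point_sar(sigma, e_rms, rho):
    """Local specific absorption rate, SAR = sigma * E_rms**2 / rho (W/kg).
    sigma: tissue conductivity (S/m), e_rms: RMS electric field (V/m),
    rho: tissue mass density (kg/m^3)."""
    return sigma * e_rms**2 / rho

# Hypothetical illustrative values for soft tissue near 1850 MHz
sar = point_sar(sigma=1.4, e_rms=10.0, rho=1040.0)
```

The 1g-SAR reported in studies like this one is the maximum of this local quantity averaged over any contiguous 1 g of tissue.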

  9. The Aggregate Representation of Terrestrial Land Covers Within Global Climate Models (GCM)

    NASA Technical Reports Server (NTRS)

    Shuttleworth, W. James; Sorooshian, Soroosh

    1996-01-01

This project had four initial objectives: (1) to create a realistic coupled surface-atmosphere model to investigate the aggregate description of heterogeneous surfaces; (2) to develop a simple heuristic model of surface-atmosphere interactions; (3) using the above models, to test aggregation rules for a variety of realistic cover and meteorological conditions; and (4) to reconcile biosphere-atmosphere transfer scheme (BATS) land covers with those that can be recognized from space. Our progress in meeting these objectives can be summarized as follows. Objective 1: The first objective was achieved in the first year of the project by coupling the Biosphere-Atmosphere Transfer Scheme (BATS) with a proven two-dimensional model of the atmospheric boundary layer. The resulting model, BATS-ABL, is described in detail in a Masters thesis and reported in a paper in the Journal of Hydrology. Objective 2: The potential value of the heuristic model was re-evaluated early in the project and a decision was made to focus subsequent research around modeling studies with the BATS-ABL model. The value of using such coupled surface-atmosphere models in this research area was further confirmed by the success of the Tucson Aggregation Workshop. Objective 3: There was excellent progress in using the BATS-ABL model to test aggregation rules for a variety of realistic covers. The foci of attention have been the site of the First International Satellite Land Surface Climatology Project Field Experiment (FIFE) in Kansas and one of the study sites of the Anglo-Brazilian Amazonian Climate Observational Study (ABRACOS) near the city of Manaus, Amazonas, Brazil. These two sites were selected because of the ready availability of relevant field data to validate and initiate the BATS-ABL model. The results of these tests are given in a Masters thesis, and reported in two papers.
Objective 4: Progress far exceeded original expectations not only in reconciling BATS land covers with those that can be recognized from space, but also in then applying remotely-sensed land cover data to map aggregate values of BATS parameters for heterogeneous covers and interpreting these parameters in terms of surface-atmosphere exchanges.

  10. A DTI-based model for TMS using the independent impedance method with frequency-dependent tissue parameters

    NASA Astrophysics Data System (ADS)

    De Geeter, N.; Crevecoeur, G.; Dupré, L.; Van Hecke, W.; Leemans, A.

    2012-04-01

    Accurate simulations on detailed realistic head models are necessary to gain a better understanding of the response to transcranial magnetic stimulation (TMS). Hitherto, head models with simplified geometries and constant isotropic material properties are often used, whereas some biological tissues have anisotropic characteristics which vary naturally with frequency. Moreover, most computational methods do not take the tissue permittivity into account. Therefore, we calculate the electromagnetic behaviour due to TMS in a head model with realistic geometry and where realistic dispersive anisotropic tissue properties are incorporated, based on T1-weighted and diffusion-weighted magnetic resonance images. This paper studies the impact of tissue anisotropy, permittivity and frequency dependence, using the anisotropic independent impedance method. The results show that anisotropy yields differences up to 32% and 19% of the maximum induced currents and electric field, respectively. Neglecting the permittivity values leads to a decrease of about 72% and 24% of the maximum currents and field, respectively. Implementing the dispersive effects of biological tissues results in a difference of 6% of the maximum currents. The cerebral voxels show limited sensitivity of the induced electric field to changes in conductivity and permittivity, whereas the field varies approximately linearly with frequency. These findings illustrate the importance of including each of the above parameters in the model and confirm the need for accuracy in the applied patient-specific method, which can be used in computer-assisted TMS.

  11. The viscosity to entropy ratio: From string theory motivated bounds to warm dense matter

    DOE PAGES

    Faussurier, G.; Libby, S. B.; Silvestrelli, P. L.

    2014-07-04

Here, we study the ratio of viscosity to entropy density in Yukawa one-component plasmas as a function of coupling parameter at fixed screening, and in realistic warm dense matter models as a function of temperature at fixed density. In these two situations, the ratio is minimized for values of the coupling parameter that depend on screening, and for temperatures that in turn depend on density and material. In this context, we also examine Rosenfeld's arguments relating transport coefficients to excess reduced entropy for Yukawa one-component plasmas. For these cases we show that the ratio is always above the lower-bound conjecture derived from string theory ideas.
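The string-theory-motivated lower bound referred to here is the KSS conjecture, eta/s >= hbar/(4 pi k_B) per unit volume. A quick sketch of checking a material against it (the sample viscosity and entropy density are rough order-of-magnitude values for water, used only for illustration):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J/K

def above_kss_bound(eta, s):
    """Check eta/s against the KSS conjectured lower bound hbar/(4 pi kB).
    eta: shear viscosity (Pa s); s: entropy per unit volume (J/(K m^3)).
    Both eta/s and the bound carry units of K s."""
    bound = HBAR / (4.0 * math.pi * KB)
    return eta / s >= bound

# Water near room temperature (order-of-magnitude values): far above the bound
ok = above_kss_bound(eta=1.0e-3, s=2.8e6)
```

Ordinary fluids sit orders of magnitude above the bound; the interest of plasmas and warm dense matter is how closely their minimum approaches it.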

  12. Study of the effect of static/dynamic Coulomb friction variation at the tape-head interface of a spacecraft tape recorder by non-linear time response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1978-01-01

A description is presented of six simulation cases investigating the effect of the variation of static/dynamic Coulomb friction on servo system stability and performance. The upper and lower levels of dynamic Coulomb friction that allowed operation within requirements were determined to be roughly three times and 50%, respectively, of the nominal values considered. A useful application of the nonlinear time response simulation is the sensitivity analysis of a final hardware design with respect to system parameters that cannot be varied realistically or easily in the actual hardware; the parameters of static/dynamic Coulomb friction fall in this category.

  13. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

Estimation of ion channel parameters is crucial to spike initiation in neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters governing adaptation in the Ermentrout neuron model to be estimated. The traditional particle swarm optimization (PSO) algorithm, however, easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weights according to the fitness value, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the model rebuilt with the estimated parameters proves that estimating only a few important ion channel parameters can establish the model well, and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the algorithm proposed in this paper avoids local optima and quickly converges to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
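To make the idea concrete, here is a minimal PSO sketch in which the inertia weight is driven by a logistic chaotic map. This is in the spirit of the paper's improvement, not its exact algorithm (the coefficients, weight range, and toy objective are all hypothetical):

```python
import numpy as np

def pso_minimize(f, lo, hi, n=20, iters=100, seed=0):
    """Minimal particle swarm optimizer whose inertia weight is perturbed
    each iteration by a logistic chaotic map (a loose sketch of the
    paper's dynamic-inertia idea)."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    z = 0.37                      # logistic-map state (chaotic at r = 4)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)   # logistic chaotic map update
        w = 0.4 + 0.5 * z         # dynamic inertia weight in [0.4, 0.9]
        r1, r2 = rng.random((2, n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Toy objective: 2-D sphere function, minimum 0 at the origin
best, best_val = pso_minimize(lambda p: float(np.sum(p * p)),
                              lo=np.array([-5.0, -5.0]),
                              hi=np.array([5.0, 5.0]))
```

In a parameter-estimation setting, `f` would instead measure the mismatch between the model's simulated firing pattern and the target firing pattern.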

  14. Simulations of water nano-confined between corrugated planes

    NASA Astrophysics Data System (ADS)

    Zubeltzu, Jon; Artacho, Emilio

    2017-11-01

    Water confined to nanoscale widths in two dimensions between ideal planar walls has been the subject of ample study, aiming at understanding the intrinsic response of water to confinement, avoiding the consideration of the chemistry of actual confining materials. In this work, we study the response of such nanoconfined water to the imposition of a periodicity in the confinement by means of computer simulations, both using empirical potentials and from first-principles. For that we propose a periodic confining potential emulating the atomistic oscillation of the confining walls, which allows varying the lattice parameter and amplitude of the oscillation. We do it for a triangular lattice, with several values of the lattice parameter: one which is ideal for commensuration with layers of Ih ice and other values that would correspond to more realistic substrates. For the former, the phase diagram shows an overall rise of the melting temperature. The liquid maintains a bi-layer triangular structure, however, despite the fact that it is not favoured by the external periodicity. The first-principles liquid is significantly affected by the modulation in its layering and stacking even at relatively small amplitudes of the confinement modulation. Beyond some critical modulation amplitude, the hexatic phase present in flat confinement is replaced by a trilayer crystalline phase unlike any of the phases encountered for flat confinement. For more realistic lattice parameters, the liquid does not display higher tendency to freeze, but it clearly shows inhomogeneous behaviour as the strength of the rugosity increases. In spite of this expected inhomogeneity, the structural and dynamical response of the liquid is surprisingly insensitive to the external modulation. Although the first-principles calculations give a more triangular liquid than the one observed with empirical potentials (TIP4P/2005), both agree remarkably well for the main conclusions of the study.

  15. An adaptive drug delivery design using neural networks for effective treatment of infectious diseases: a simulation study.

    PubMed

    Padhi, Radhakant; Bhardhwaj, Jayender R

    2009-06-01

An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of the concentration of pathogens, plasma cells, and antibodies, together with a numerical value indicating the relative condition of an organ damaged by the disease, under the influence of external drugs. From a system-theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control-theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in general in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs). Hence, to make the drug delivery dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise in killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are minimal, and all associated computations (including the training of the neural networks) can be carried out online. The approach assumes, however, that the required diagnosis process can be carried out at a sufficiently fast rate that all the states are available for control computation.

  16. Economic design of control charts considering process shift distributions

    NASA Astrophysics Data System (ADS)

    Vommi, Vijayababu; Kasarapu, Rukmini V.

    2014-09-01

Process shift is an important input parameter in the economic design of control charts. Earlier control chart designs assumed that a given assignable cause produces a constant shift in the process mean. This assumption has been criticized by many researchers, since it may not be realistic for an assignable cause to produce the same shift every time it occurs. To overcome this difficulty, in the present work a distribution for the shift parameter is considered instead of a single value for a given assignable cause. Duncan's economic design model for the control chart has been extended to incorporate the distribution of the process shift parameter, and the control chart parameters are obtained by minimizing the total expected loss-cost. Further, three types of process shift distributions, namely positively skewed, uniform, and negatively skewed, are considered, and the situations where it is appropriate to use the suggested methodology are recommended.
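Replacing the single shift with a distribution means the design criterion becomes an expectation of the loss-cost over that distribution. A minimal sketch (the loss-cost curve and shift distributions below are hypothetical, not Duncan's actual cost function):

```python
import numpy as np

def expected_loss_cost(loss, shifts, weights):
    """Expected loss-cost when the process shift delta follows a discrete
    distribution rather than taking a single value: E[L] = sum w_i L(d_i)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize to a probability mass
    return float(sum(wi * loss(d) for wi, d in zip(w, shifts)))

# Hypothetical loss-cost curve: small shifts are expensive (hard to detect),
# large shifts add scrap cost
loss = lambda d: 10.0 / d + 2.0 * d
shifts = [0.5, 1.0, 2.0]

# Positively vs. negatively skewed shift distributions over the same support
el_pos = expected_loss_cost(loss, shifts, [0.6, 0.3, 0.1])
el_neg = expected_loss_cost(loss, shifts, [0.1, 0.3, 0.6])
```

The chart parameters (sample size, sampling interval, control limits) would then be chosen to minimize this expectation, and as the example shows, the skew of the shift distribution changes the objective being minimized.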

  17. Magnetic resonance fingerprinting based on realistic vasculature in mice

    PubMed Central

    Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K.; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K.; Thorin, E.; Sakadzic, Sava; Boas, David A.; Lesage, Frédéric

    2017-01-01

    Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, that extracted maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild type and 9 old atherosclerotic mice. Both the pre injection signal and the ratio of post-to-pre injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. PMID:28043909
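The Monte Carlo idea described above, tracking the accumulated phase of many diffusing protons, can be illustrated in a drastically simplified 1-D setting with a linear off-resonance field (nothing like the paper's angiogram-based substrate; all values are hypothetical):

```python
import numpy as np

def mc_signal(n_spins=20000, n_steps=200, dt=1e-4, D=1e-9,
              gamma_g=1e7, seed=1):
    """Monte Carlo MR signal magnitude: spins random-walk in 1-D with
    diffusion coefficient D (m^2/s) through a linear off-resonance field
    gamma_g * x (rad/s per m of displacement); each spin accumulates
    phase, and the signal is |mean(exp(i * phase))|."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_spins)
    phase = np.zeros(n_spins)
    step = np.sqrt(2.0 * D * dt)            # 1-D diffusion step size
    for _ in range(n_steps):
        x += step * rng.standard_normal(n_spins)
        phase += gamma_g * x * dt           # local off-resonance precession
    return float(np.abs(np.mean(np.exp(1j * phase))))

sig = mc_signal()
```

Diffusion through field inhomogeneities dephases the spins, so the signal magnitude falls below 1; in vMRF the same bookkeeping is done with the spatially complex field produced by a real vascular network.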

  18. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
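For context, the 4PM extends the three-parameter logistic model with an upper asymptote d below 1 (allowing for "slips" by high-ability examinees). A sketch of its item response function (the parameter values are illustrative only):

```python
import math

def irf_4pm(theta, a, b, c, d):
    """Four-parameter model item response function:
    P(theta) = c + (d - c) / (1 + exp(-a * (theta - b))),
    with discrimination a, difficulty b, lower asymptote c (guessing),
    and upper asymptote d (slipping)."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta = b the probability is midway between the two asymptotes
p_mid = irf_4pm(theta=0.0, a=1.5, b=0.0, c=0.2, d=0.95)
```

Estimating c and d along with a and b is what drives the large-sample requirements reported above: the asymptotes are informed only by examinees far from the item's difficulty.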

  19. Information content of slug tests for estimating hydraulic properties in realistic, high-conductivity aquifer scenarios

    NASA Astrophysics Data System (ADS)

    Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya

    2011-06-01

A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori.
Secondly, through analysis of field data - consisting of over 2500 records from partially-penetrating slug tests in a heterogeneous, highly conductive aquifer, we present some general findings that have applicability to slug testing. In particular, we find that aquifer hydraulic conductivity estimates obtained from larger slug heights tend to be lower on average (presumably due to non-linear wellbore losses) and tend to be less variable (presumably due to averaging over larger support volumes), supporting the notion that using the smallest slug heights possible to produce measurable water level changes is an important strategy when mapping aquifer heterogeneity. Finally, we present results specific to characterization of the aquifer at the Boise Hydrogeophysical Research Site. Specifically, we note that (1) K estimates obtained using a range of different slug heights give similar results, generally within ±20%; (2) correlations between estimated K profiles with depth at closely-spaced wells suggest that K values obtained from slug tests are representative of actual aquifer heterogeneity and not overly affected by near-well media disturbance (i.e., "skin"); (3) geostatistical analysis of K values obtained indicates reasonable correlation lengths for sediments of this type; and (4) overall, K values obtained do not appear to correlate well with porosity data from previous studies.

  20. ON THE MAGNETIC FIELD OF PULSARS WITH REALISTIC NEUTRON STAR CONFIGURATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvedere, R.; Rueda, Jorge A.; Ruffini, R., E-mail: riccardo.belvedere@icra.it, E-mail: jorge.rueda@icra.it, E-mail: ruffini@icra.it

    2015-01-20

We have recently developed a neutron star model fulfilling global rather than local charge neutrality, in both the static and the uniformly rotating cases. The model is described by the coupled Einstein-Maxwell-Thomas-Fermi equations, in which all fundamental interactions are accounted for in the framework of general relativity and relativistic mean field theory. Uniform rotation is introduced following Hartle's formalism. We show that the use of realistic parameters of rotating neutron stars, obtained from numerical integration of the self-consistent axisymmetric general relativistic equations of equilibrium, leads to values of the magnetic field and radiation efficiency of pulsars that are very different from estimates based on fiducial parameters that assume a neutron star mass M = 1.4 M☉, radius R = 10 km, and moment of inertia I = 10^45 g cm^2. In addition, we compare and contrast the magnetic field inferred from the traditional Newtonian rotating magnetic dipole model with the one obtained from its general relativistic analog, which takes into account the effect of the finite size of the source. We apply these considerations to the specific class of high-magnetic-field pulsars and show that all of these sources can be described as canonical pulsars driven by the rotational energy of the neutron star, with magnetic fields lower than the quantum critical field for any value of the neutron star mass.
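The fiducial Newtonian estimate the paper contrasts with is the standard rotating-dipole formula, which for the fiducial parameters above reduces to B ≈ 3.2×10^19 sqrt(P Ṗ) gauss (P in seconds). A quick sketch, with an approximate Crab-like input as the example:

```python
import math

def b_dipole_fiducial(p, pdot):
    """Fiducial Newtonian rotating-dipole surface field estimate,
    B ~ 3.2e19 * sqrt(P * Pdot) gauss, valid for the fiducial parameters
    M = 1.4 Msun, R = 10 km, I = 1e45 g cm^2 (P in seconds)."""
    return 3.2e19 * math.sqrt(p * pdot)

# Approximate Crab-pulsar period and spin-down rate
b = b_dipole_fiducial(p=0.033, pdot=4.2e-13)
```

The paper's point is that replacing these fiducial values with self-consistent general relativistic configurations, mass-dependent radii and moments of inertia, changes the inferred B substantially.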

  1. Shells, orbit bifurcations, and symmetry restorations in Fermi systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magner, A. G., E-mail: magner@kinr.kiev.ua; Koliesnik, M. V.; Arita, K.

The periodic-orbit theory based on the improved stationary-phase method within the phase-space path integral approach is presented for the semiclassical description of the nuclear shell structure, concerning the main topics of the fruitful activity of V. G. Soloviev. We apply this theory to study bifurcations and symmetry-breaking phenomena in a radial power-law potential which is close to the realistic Woods–Saxon one up to about the Fermi energy. Using the realistic parametrization of nuclear shapes, we explain the origin of the double-humped fission barrier and the asymmetry in the fission isomer shapes by the bifurcations of periodic orbits. The semiclassical origin of the oblate–prolate shape asymmetry and of tetrahedral shapes is also suggested within the improved periodic-orbit approach. The enhancement of shell structures at certain surface diffuseness and deformation parameters of such shapes is explained by the existence of simple local bifurcations and new non-local bridge-orbit bifurcations in integrable and partially integrable Fermi systems. We obtain good agreement between the semiclassical and quantum shell-structure components of the level density and energy for several surface diffuseness and deformation parameters of the potentials, including their symmetry-breaking and bifurcation values.

  2. Linking the Weather Generator with Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Farda, Ales; Skalak, Petr; Huth, Radan

    2013-04-01

    One of the downscaling approaches, which transform the raw outputs from the climate models (GCMs or RCMs) into data with more realistic structure, is based on linking the stochastic weather generator with the climate model output. The present contribution, in which the parametric daily surface weather generator (WG) M&Rfi is linked to the RCM output, follows two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate Regional Climate Model at 25 km resolution. The WG parameters are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed in 125 Czech meteorological stations. The set of WG parameters will include statistics of the surface temperature and precipitation series (including probability of wet day occurrence). (2) Presenting a methodology for linking the WG with RCM output. This methodology, which is based on merging information from observations and RCM, may be interpreted as a downscaling procedure, whose product is a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations. In this procedure, WG is calibrated with RCM-simulated multi-variate weather series in the first step, and the grid specific WG parameters are then de-biased by spatially interpolated correction factors based on comparison of WG parameters calibrated with gridded RCM weather series and spatially scarcer observations. The quality of the weather series produced by the resultant gridded WG will be assessed in terms of selected climatic characteristics (focusing on characteristics related to variability and extremes of surface temperature and precipitation). 
Acknowledgements: The present experiment is made within the frame of projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
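The de-biasing step described above can be sketched as follows. This is a hypothetical illustration, not the M&Rfi implementation: a single mean obs/RCM correction factor stands in for the spatially interpolated factors, and all numbers are invented.

```python
import numpy as np

def debias(wg_param_grid, wg_param_rcm_at_stations, wg_param_obs_at_stations):
    """Correct RCM-calibrated WG parameters on the grid by the mean
    observation/RCM ratio found at the station locations."""
    correction = np.mean(wg_param_obs_at_stations / wg_param_rcm_at_stations)
    return wg_param_grid * correction

grid_rcm = np.array([4.1, 3.8, 5.0, 4.4])   # e.g. mean wet-day precip. (mm), RCM-calibrated
sta_rcm = np.array([4.0, 4.6])              # RCM-calibrated parameter at station locations
sta_obs = np.array([5.0, 5.75])             # observation-calibrated parameter at stations
grid_corrected = debias(grid_rcm, sta_rcm, sta_obs)   # here the RCM is scaled up by 1.25
```

In the full procedure the correction factors would vary in space (interpolated between stations) and be derived per parameter and season.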

  3. A Critical Approach to School Mathematical Knowledge: The Case of "Realistic" Problems in Greek Primary School Textbooks for Seven-Year-Old Pupils

    ERIC Educational Resources Information Center

    Zacharos, Konstantinos; Koustourakis, Gerassimos

    2011-01-01

    The reference contexts that accompany the "realistic" problems chosen for teaching mathematical concepts in the first school grades play a major educational role. However, choosing "realistic" problems in teaching is a complex process that must take into account various pedagogical, sociological and psychological parameters.…

  4. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
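The conditioning functionality rests on the standard conditional-Gaussian identity, applied per mixture component. A minimal sketch (a hypothetical helper, not the actual XDGMM API):

```python
import numpy as np

def condition_gaussian(mu, cov, known_idx, known_vals):
    """Conditional mean/covariance of a multivariate Gaussian, given fixed
    values for a subset of its components (indices in known_idx)."""
    unknown_idx = [i for i in range(len(mu)) if i not in known_idx]
    mu_k, mu_u = mu[known_idx], mu[unknown_idx]
    s_kk = cov[np.ix_(known_idx, known_idx)]
    s_uk = cov[np.ix_(unknown_idx, known_idx)]
    s_uu = cov[np.ix_(unknown_idx, unknown_idx)]
    gain = s_uk @ np.linalg.inv(s_kk)        # regression of unknown on known
    cond_mu = mu_u + gain @ (known_vals - mu_k)
    cond_cov = s_uu - gain @ s_uk.T
    return cond_mu, cond_cov

# e.g. predict a "supernova" parameter from an observed "host" property
mu = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.8], [0.8, 2.0]])
m, c = condition_gaussian(mu, cov, [0], np.array([1.0]))
# m[0] = 1.0 + 0.8 * (1.0 - 0.0) / 1.0 = 1.8;  c[0,0] = 2.0 - 0.64 = 1.36
```

For a full mixture, each component is conditioned this way and the component weights are additionally reweighted by the likelihood of the known values under each component.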

  5. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  6. Systematic comparison of jet energy-loss schemes in a realistic hydrodynamic medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, Steffen A.; Majumder, Abhijit; Gale, Charles

    2009-02-15

    We perform a systematic comparison of three different jet energy-loss approaches. These include the Armesto-Salgado-Wiedemann scheme based on the approach of Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov (BDMPS-Z/ASW), the higher twist (HT) approach and a scheme based on the Arnold-Moore-Yaffe (AMY) approach. In this comparison, an identical medium evolution will be utilized for all three approaches: this entails not only the use of the same realistic three-dimensional relativistic fluid dynamics (RFD) simulation, but also the use of identical initial parton-distribution functions and final fragmentation functions. We are, thus, in a unique position not only to isolate fundamental differences between the various approaches but also to make rigorous calculations for different experimental measurements using state-of-the-art components. All three approaches are reduced to versions containing only one free tunable parameter, which is then related to the well-known transport parameter q̂. We find that the parameters of all three calculations can be adjusted to provide a good description of inclusive data on R_AA versus transverse momentum. However, we do observe slight differences in their predictions for the centrality and azimuthal angular dependence of R_AA versus p_T. We also note that the values of the transport coefficient q̂ required by the three approaches to describe the data differ significantly.

  7. Hemodynamic Changes Caused by Flow Diverters in Rabbit Aneurysm Models: Comparison of Virtual and Realistic FD Deployments Based on Micro-CT Reconstruction

    PubMed Central

    Fang, Yibin; Yu, Ying; Cheng, Jiyong; Wang, Shengzhang; Wang, Kuizhong; Liu, Jian-Min; Huang, Qinghai

    2013-01-01

    Adjusting hemodynamics via flow diverter (FD) implantation is emerging as a novel method of treating cerebral aneurysms. However, most previous FD-related hemodynamic studies were based on virtual FD deployment, which may produce different hemodynamic outcomes than realistic (in vivo) FD deployment. We compared hemodynamics between virtual FD and realistic FD deployments in rabbit aneurysm models using computational fluid dynamics (CFD) simulations. FDs were implanted for aneurysms in 14 rabbits. Vascular models based on rabbit-specific angiograms were reconstructed for CFD studies. Real FD configurations were reconstructed based on micro-CT scans after sacrifice, while virtual FD configurations were constructed with SolidWorks software. Hemodynamic parameters before and after FD deployment were analyzed. According to the metal coverage (MC) of implanted FDs calculated based on micro-CT reconstruction, 14 rabbits were divided into two groups (A, MC >35%; B, MC <35%). Normalized mean wall shear stress (WSS), relative residence time (RRT), inflow velocity, and inflow volume in Group A were significantly different (P<0.05) from virtual FD deployment, but pressure was not (P>0.05). The normalized mean WSS in Group A after realistic FD implantation was significantly lower than that of Group B. All parameters in Group B exhibited no significant difference between realistic and virtual FDs. This study confirmed MC-correlated differences in hemodynamic parameters between realistic and virtual FD deployment. PMID:23823503

  8. Noise effects on entanglement distribution by separable state

    NASA Astrophysics Data System (ADS)

    Bordbar, Najmeh Tabe; Memarzadeh, Laleh

    2018-02-01

    We investigate noise effects on the performance of entanglement distribution by separable state. We consider a realistic situation in which the mediating particle between two distant nodes of the network goes through a noisy channel. For a large class of noise models, we show that the average value of distributed entanglement between two parties is equal to entanglement between particular bipartite partitions of target qubits and exchange qubit in intermediate steps of the protocol. This result is valid for distributing two-qubit/qudit and three-qubit entangled states. In explicit examples of the noise family, we show that there exists a critical value of noise parameter beyond which distribution of distillable entanglement is not possible. Furthermore, we determine how this critical value increases in terms of Hilbert space dimension, when distributing d-dimensional Bell states.

  9. The Population Biology of Bacterial Plasmids: A PRIORI Conditions for the Existence of Conjugationally Transmitted Factors

    PubMed Central

    Stewart, Frank M.; Levin, Bruce R.

    1977-01-01

    A mathematical model for the population dynamics of conjugationally transmitted plasmids in bacterial populations is presented and its properties analyzed. Consideration is given to nonbacteriocinogenic factors that are incapable of incorporation into the chromosome of their host cells, and to bacterial populations maintained in either continuous (chemostat) or discrete (serial transfer) culture. The conditions for the establishment and maintenance of these infectious extrachromosomal elements and equilibrium frequencies of cells carrying them are presented for different values of the biological parameters: population growth functions, conjugational transfer and segregation rate constants. With these parameters in a biologically realistic range, the theory predicts a broad set of physical conditions, resource concentrations and dilution rates, where conjugationally transmitted plasmids can become established and where cells carrying them will maintain high frequencies in bacterial populations. This can occur even when plasmid-bearing cells are much less fit (i.e., have substantially lower growth rates) than cells free of these factors. The implications of these results and the reality and limitations of the model are discussed and the values of its parameters in natural populations speculated upon. PMID:17248761
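The existence condition has the flavor of a threshold: conjugational transfer must outpace the fitness cost of carriage plus segregational loss. A toy reduction of such a model (illustrative functional form and parameter values, not those of the paper), tracking only the plasmid-bearing fraction p at fixed total cell density:

```python
def plasmid_fraction(p0=0.01, s=0.05, gamma_n=0.10, tau=0.001,
                     dt=0.05, steps=100_000):
    """Euler integration of dp/dt = (gamma_n - s) p (1 - p) - tau p.
    p: fraction of plasmid-bearing cells at fixed total density;
    s: growth-rate cost of carriage; gamma_n: transfer rate constant
    times cell density; tau: segregational loss rate (all hypothetical)."""
    p = p0
    for _ in range(steps):
        dp = (gamma_n - s) * p * (1.0 - p) - tau * p
        p += dt * dp
    return p

# Transfer outpaces cost + segregation: the plasmid persists near p = 0.98
p_persist = plasmid_fraction()
# Transfer too weak relative to the cost: the plasmid is driven out
p_lost = plasmid_fraction(gamma_n=0.01)
```

The equilibrium of the persisting case, p* = 1 - tau/(gamma_n - s), illustrates how plasmid-bearing cells can dominate even when carriage is costly, provided the infectious transfer term is large enough.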

  10. Single neuron modeling and data assimilation in BNST neurons

    NASA Astrophysics Data System (ADS)

    Farsian, Reza

    Neurons, although tiny in size, are vastly complicated systems, responsible for the most basic yet essential functions of any nervous system. Even the simplest models of single neurons are usually high dimensional, nonlinear, and contain many parameters and states that are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential for identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameter and state estimation of biological neurons are demonstrated: dynamical parameter estimation (DPE) and a Markov chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach which uses a path-integral formulation to evaluate a mean and an error bound for these unobserved parameters and states. These methods have been applied to biological neurons of the bed nucleus of the stria terminalis (BNST) of rats. The states and parameters of the neurons were estimated, and their values were used to recreate a realistic model and successfully predict the behavior of the neurons. Knowledge of the biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron, in order to build robust models of neuron networks.
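As an illustration of the MCMC route, here is a random-walk Metropolis sampler recovering a single decay-time parameter from a noisy exponential trace, a drastically simplified stand-in for a neuron model (all names and values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recording": noisy exponential relaxation of a membrane-like
# variable; tau_true (ms) plays the role of the unobservable parameter.
tau_true = 20.0
t = np.linspace(0.0, 100.0, 200)
v_obs = np.exp(-t / tau_true) + rng.normal(0.0, 0.02, t.size)

def log_post(tau, sigma=0.02):
    """Gaussian log-likelihood with a flat prior on a plausible range."""
    if not 1.0 < tau < 100.0:
        return -np.inf
    resid = v_obs - np.exp(-t / tau)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis sampling of the posterior over tau
tau, lp = 5.0, log_post(5.0)
chain = []
for _ in range(5000):
    prop = tau + rng.normal(0.0, 0.5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        tau, lp = prop, lp_prop
    chain.append(tau)
post_mean = float(np.mean(chain[1000:]))       # discard burn-in
```

Real neuron models replace the one-line forward model with a high-dimensional ODE system, which is exactly what makes the problem hard.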

  11. The statistical fluctuation study of quantum key distribution in means of uncertainty principle

    NASA Astrophysics Data System (ADS)

    Liu, Dunwei; An, Huiyao; Zhang, Xiaoyu; Shi, Xuemei

    2018-03-01

    Imperfect single-photon emission by lasers, photon signal attenuation and propagation of error have long caused serious difficulties in practical long-distance quantum key distribution (QKD) experiments. In this paper, we study the uncertainty principle in metrology and use this tool to analyze the statistical fluctuation of the number of received single photons, the yield of single photons and the quantum bit error rate (QBER). We then calculate the error between the measured and true value of every parameter, and account for the propagation of error among all the measured values. We paraphrase the Gottesman-Lo-Lutkenhaus-Preskill (GLLP) formula in consideration of those parameters and generate the QKD simulation result. In this study, the safe distribution distance increases with the coding photon length: when the coding photon length is N = 10^{11}, the safe distribution distance reaches almost 118 km, a lower bound on the safe transmission distance compared with the 127 km obtained without the uncertainty principle. Our study is thus in line with established theory, but more realistic.
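The flavor of such finite-size analyses can be illustrated with a generic Hoeffding-type concentration bound (not the specific estimator of the paper): the fluctuation of an observed rate around its true value shrinks as the number of coding photons N grows, so key-rate bounds tighten with N.

```python
import math

def hoeffding_interval(n_events, n_trials, eps=1e-10):
    """Two-sided fluctuation interval for an observed rate p_hat = k/n:
    |p_hat - p| <= sqrt(ln(2/eps) / (2n)) with probability >= 1 - eps."""
    p_hat = n_events / n_trials
    delta = math.sqrt(math.log(2.0 / eps) / (2.0 * n_trials))
    return max(p_hat - delta, 0.0), min(p_hat + delta, 1.0)

# Same observed rate (5e-4), two different numbers of coding photons:
lo_small, hi_small = hoeffding_interval(5_000, 10**7)
lo_large, hi_large = hoeffding_interval(5_000_000, 10**10)
```

With 10^7 trials the interval is dominated by statistical fluctuation; with 10^10 it is roughly thirty times tighter, which is why longer coding photon lengths extend the provably safe distance.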

  12. Nucleon decay in non-minimal supersymmetric SO(10)

    NASA Astrophysics Data System (ADS)

    Macpherson, Alick L.

    1996-02-01

    Evaluation of nucleon decay modes and branching ratios in a non-minimal supersymmetric SO(10) grand unified theory is presented. The non-minimal GUT considered is the supersymmetrised version of the 'realistic' SO(10) model originally proposed by Harvey, Reiss and Ramond, which is realistic in that it gives acceptable charged fermion and neutrino masses within the context of a phenomenological fit to the low-energy standard model inputs. Despite a complicated Higgs sector, the SO(10) 10 Higgs superfield mass insertion is found to be the sole contribution to the tree-level F-term governing nucleon decay. The resulting dimension-5 operators that mediate nucleon decay give branching ratio predictions parameterised by a single parameter, the ratio of the Yukawa couplings of the 10 to the fermion generations. For parameter values corresponding to a lack of dominance of the third family self-coupling, the dominant nucleon decay modes are p → K⁺ + ν̄μ and n → K⁰ + ν̄μ, as expected. Further, the charged muon decay modes are enhanced by two orders of magnitude over the standard minimal SUSY SU(5) predictions, thus predicting a distinct spectrum of 'visible' modes. These charged muon decay modes, along with p → π⁺ + ν̄μ and n → π⁰ + ν̄μ, which are moderately enhanced over the SUSY SU(5) prediction, suggest a distinguishing fingerprint of this particular GUT model, and if nucleon decay is observed at Super-KAMIOKANDE the predicted branching ratio spectrum can be used to determine the validity of this 'realistic' SO(10) SUSY GUT model.

  13. Calculation of Expectation Values of Operators in the Complex Scaling Method

    DOE PAGES

    Papadimitriou, G.

    2016-06-14

    The complex scaling method (CSM) provides a way to obtain resonance parameters of particle-unstable states by rotating the coordinates and momenta of the original Hamiltonian. It is convenient to use an L² integrable basis to resolve the complex rotated or complex scaled Hamiltonian H_θ, with θ being the angle of rotation in the complex energy plane. Within the CSM, resonance and scattering solutions have fall-off asymptotics. One consequence is that expectation values of operators in a resonance or scattering complex scaled solution are calculated by complex rotating the operators. In this work we explore applications of the CSM to calculations of expectation values of quantum mechanical operators by using the regularized back-rotation technique, hence calculating the expectation value with the unrotated operator. The test cases involve a schematic two-body Gaussian model and also applications using realistic interactions.

  14. A parametric model for the changes in the complex valued conductivity of a lung during tidal breathing

    NASA Astrophysics Data System (ADS)

    Nordebo, Sven; Dalarsson, Mariana; Khodadad, Davood; Müller, Beat; Waldmann, Andreas D.; Becher, Tobias; Frerichs, Inez; Sophocleous, Louiza; Sjöberg, Daniel; Seifnaraghi, Nima; Bayford, Richard

    2018-05-01

    Classical homogenization theory based on the Hashin–Shtrikman coated ellipsoids is used to model the changes in the complex valued conductivity (or admittivity) of a lung during tidal breathing. Here, the lung is modeled as a two-phase composite material where the alveolar air-filling corresponds to the inclusion phase. The theory predicts a linear relationship between the real and the imaginary parts of the change in the complex valued conductivity of a lung during tidal breathing, and where the loss cotangent of the change is approximately the same as of the effective background conductivity and hence easy to estimate. The theory is illustrated with numerical examples based on realistic parameter values and frequency ranges used with electrical impedance tomography (EIT). The theory may be potentially useful for imaging and clinical evaluations in connection with lung EIT for respiratory management and control.

  15. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file of a chosen isotope in ENDF-6 format, produces an arbitrary number of new files in ENDF-6 format containing random samples of resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
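A minimal sketch of the core sampling step, assuming a multivariate-normal perturbation (ENDSAM's actual distributions and ENDF-6 handling are more involved). An eigen-decomposition, unlike a plain Cholesky factorization, also tolerates the rank-deficient (positive semi-definite) covariance matrices that occur in evaluated data:

```python
import numpy as np

def sample_correlated(mean, cov, n_samples, rng):
    """Draw multivariate-normal samples of correlated parameters,
    accepting positive SEMI-definite covariance matrices."""
    vals, vecs = np.linalg.eigh(cov)
    if vals.min() < -1e-10 * max(vals.max(), 1e-300):
        raise ValueError("covariance matrix is not positive semi-definite")
    root = vecs * np.sqrt(np.clip(vals, 0.0, None))   # root @ root.T == cov
    z = rng.standard_normal((n_samples, len(mean)))
    return mean + z @ root.T

rng = np.random.default_rng(1)
mean = np.array([2.5, 0.10])      # e.g. a resonance energy and a width (hypothetical)
cov = np.array([[0.04, 0.01],
                [0.01, 0.01]])
samples = sample_correlated(mean, cov, 200_000, rng)
```

For inherently positive parameters such as widths, the same machinery can be applied in log-space (lognormal sampling) so that no negative values are produced.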

  16. Magnetic resonance fingerprinting based on realistic vasculature in mice.

    PubMed

    Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K; Thorin, Eric; Sakadzic, Sava; Boas, David A; Lesage, Frédéric

    2017-04-01

    Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, which extracted maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild-type and 9 old atherosclerotic mice. Both the pre-injection signal and the ratio of post-to-pre-injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application.
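MRF-style estimation reduces to dictionary matching: simulate signal evolutions over a grid of parameter values, then pick the entry most correlated with the measurement. A toy one-parameter version (synthetic decay curves, not actual vMRF physics; all values hypothetical):

```python
import numpy as np

def match_fingerprint(signal, dictionary, params):
    """Return the parameter whose simulated signal evolution has the
    highest normalized inner product with the measured signal."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return params[int(np.argmax(d @ s))]

# Toy dictionary: decay curves indexed by one "SatO2-like" parameter.
params = np.linspace(0.4, 0.9, 26)
t = np.arange(32)
dictionary = np.exp(-np.outer(1.0 - params, t) * 0.1)

rng = np.random.default_rng(2)
signal = np.exp(-(1.0 - 0.7) * t * 0.1) + rng.normal(0.0, 0.005, t.size)
best = match_fingerprint(signal, dictionary, params)   # close to 0.7
```

In vMRF the dictionary axes are SatO2, vessel radius and CBV (a 5-dimensional dictionary in the paper), and each entry comes from the Monte Carlo proton-diffusion simulation rather than a closed-form curve.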

  17. SMSynth: An Imagery Synthesis System for Soil Moisture Retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Xu, L.; Peng, J.

    2018-04-01

    Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, empirical models need large numbers of measurements of SM and soil roughness parameters as training samples, and these are very difficult to acquire. As such, it is difficult to develop empirical models using real SAR imagery, and it is necessary to develop methods to synthesize SAR imagery. To tackle this issue, a SAR imagery synthesis system based on SM, named SMSynth, is presented, which simulates radar signals that are as realistic as possible with respect to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under a Bayesian framework, where the spatial correlation is modeled by a Markov random field (MRF) model. The backscattering coefficients, simulated from the designed soil and sensor parameters, enter the Bayesian framework through the data likelihood, with the soil and sensor parameters set as realistically as possible for conditions on the ground and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need for large numbers of training samples for empirical models.

  18. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate nutrient, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. 
PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.

  19. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  20. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    NASA Astrophysics Data System (ADS)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.

  1. First-Order Model of Thermal Lensing in a Virtual Eye

    DTIC Science & Technology

    2009-03-01

    …beam would have an m² value of 1.0, but realistically the value is greater than 1.0. To solve for ∂²T/∂r², the eye can be treated as a cylinder… The complex beam parameter is described by the real and imaginary portions of a wavefront as 1/q(z) = 1/R(z) − i m²λ/(πw²)… Methods for solving this integral have been reported in Swofford and Morrell [27]. [Remaining OCR fragment, including a figure, is not recoverable.]

  2. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
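Exceedances in such a framework follow the generalized Pareto distribution, which can be sampled by inverting its CDF. A sketch with hypothetical shape/scale values (the actual framework ties these parameters to space via the generalized additive models):

```python
import numpy as np

def sample_gpd(xi, sigma, n, rng):
    """Inverse-CDF sampling from the generalized Pareto distribution:
    F(x) = 1 - (1 + xi*x/sigma)^(-1/xi), x >= 0 (case xi != 0)."""
    u = rng.uniform(size=n)
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

rng = np.random.default_rng(3)
gusts = sample_gpd(xi=0.1, sigma=5.0, n=1_000_000, rng=rng)
# For xi < 1 the theoretical mean is sigma / (1 - xi) = 5 / 0.9.
```

Cheap sampling of this kind is what makes simulating the equivalent of 10 000+ years of hazard events computationally feasible; the Student's t-process then supplies the spatial dependence between sites.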

  3. Parameter optimisation for a better representation of drought by LSMs: inverse modelling vs. sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe

    2017-09-01

    Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, being difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. The optimal value is then selected using (1) a simple inverse modelling technique, comparing simulated and observed LAI, and (2) a more complex method consisting of integrating observed LAI into ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The MaxAWC estimates from both methods are evaluated using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Significant correlations (p value < 0.01) between Bag and GY are found for up to 36 and 53 % of the administrative units for the inverse modelling and LDAS tuning methods, respectively. It is found that the LDAS tuning experiment gives more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.
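The inverse-modelling option can be caricatured as a one-dimensional search: run the model for candidate MaxAWC values and keep the one that minimizes the misfit to observed LAI. A toy stand-in for ISBA (the functional form and all numbers are hypothetical):

```python
import numpy as np

def toy_lai(max_awc, rain):
    """Stand-in for the LSM: simulated annual max LAI saturates as the
    soil can hold and supply more water (purely illustrative form)."""
    return 6.0 * (1.0 - np.exp(-rain * max_awc / 500.0))

rng = np.random.default_rng(4)
rain = rng.uniform(0.5, 2.0, 15)            # 15 "years" of annual forcing
lai_obs = toy_lai(150.0, rain) + rng.normal(0.0, 0.05, rain.size)

# Inverse modelling: keep the MaxAWC candidate minimizing the LAI misfit.
candidates = np.arange(50.0, 301.0, 1.0)
rmse = [np.sqrt(np.mean((toy_lai(m, rain) - lai_obs) ** 2)) for m in candidates]
best_awc = candidates[int(np.argmin(rmse))]
```

The LDAS alternative replaces the global misfit with sequentially computed analysis increments, which the paper finds yields the more realistic MaxAWC values.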

  4. Realistic tissue visualization using photoacoustic image

    NASA Astrophysics Data System (ADS)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. Because it probes living systems, biomedical imaging has the unique advantage of providing highly intuitive information in image form, and this advantage can be greatly enhanced by choosing an appropriate visualization method. The situation is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information; unfortunately, the data themselves cannot be displayed directly. Because images are always displayed in 2D space, visualization is the key step that creates the real value of volume data. However, rendering 3D data requires complicated visualization algorithms and a high computational burden, so specialized algorithms and computing optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information that is close to real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray-casting method and processed color while computing the shader parameters. In the rendering results, we obtained realistic texture from the photoacoustic data: rays reflected at the surface were visualized in white, and light returned from deep tissue was visualized in red, like skin tissue. We also implemented the algorithm in CUDA in an OpenGL environment for real-time interactive imaging.

  5. An isothermal equation of state for solids

    NASA Astrophysics Data System (ADS)

    Bose Roy, Papiya; Bose Roy, Sushil

    2004-07-01

    An isothermal equation of state (EOS) for solids, recently suggested by the authors in the realistic form V/V0 = f(P), with relative volume as the dependent and pressure as the independent variable, was shown to have an advantage for some close-packed materials in that it allows B′∞ = (∂BS/∂P)S (P→∞) to be fitted, which is where the usual standard equations fail. In the present study, our EOS is applied to a number of inorganic as well as organic solids, including alloys, glasses, rubbers and plastics, varying widely in their bonding and structural characteristics as well as in their bulk modulus values. Very good agreement is observed between the data and the fits. The results obtained are compared with those from two well-known equations expressible in the realistic form, proposed by Murnaghan and Luban. Further, the results are also compared with those from the widely used two- and three-parameter EOSs of Birch, expressible only in the unrealistic form P = f(V/V0), and with those from the EOS model of Keane, in which B′∞ is explicitly expressed as an equation-of-state parameter. The results obtained from our model compare well with these EOSs; our EOS, in general, yields the smallest mean-squared deviations between data and fits. The values of B′∞ calculated from our EOS are compared with those from Keane's model. Further, we have studied the variation of B′∞ with temperature using the experimental isotherms of Mo and W at 10 temperatures ranging from 100 to 1000 K, and observed that the values of B′∞ yielded by our model and by Keane's vary, as expected, within a narrow range. Furthermore, our EOS is applied to study the stability of the fit parameters with variation in the pressure range, with reference to the isothermal compression data on Mo and W, and to study the variation of isothermal bulk modulus with pressure, with reference to the ultrasonic data on NaCl; very good agreement with experiment is noted. In addition, our model is applied, with B0 and B′0 constrained to theoretical values, to the five theoretical isotherms of MgO at 300, 500, 1000, 1500 and 2000 K obtained from a first-principles approach; good agreement with the predictions is observed, and the values of B′∞ inferred at the different temperatures tend to converge to a constant value.

  6. Using realist synthesis to understand the mechanisms of interprofessional teamwork in health and social care.

    PubMed

    Hewitt, Gillian; Sims, Sarah; Harris, Ruth

    2014-11-01

    Realist synthesis offers a novel and innovative way to interrogate the large literature on interprofessional teamwork in health and social care teams. This article introduces realist synthesis and its approach to identifying and testing the underpinning processes (or "mechanisms") that make an intervention work, the contexts that trigger those mechanisms and their subsequent outcomes. A realist synthesis of the evidence on interprofessional teamwork is described. Thirteen mechanisms were identified in the synthesis, and findings for one mechanism, called "support and value", are presented in this paper. The evidence for the other twelve mechanisms ("collaboration and coordination", "pooling of resources", "individual learning", "role blurring", "efficient, open and equitable communication", "tactical communication", "shared responsibility and influence", "team behavioural norms", "critically reviewing performance and decisions", "generating and implementing new ideas" and "leadership") is reported in a further three papers in this series. The "support and value" mechanism referred to the ways in which team members supported one another, respected each other's skills and abilities and valued each other's contributions. "Support and value" was present in some, but far from all, teams, and a number of contexts that explained this variation were identified. The article concludes with a discussion of the challenges and benefits of undertaking this realist synthesis.

  7. Fire, ice, water, and dirt: A simple climate model

    NASA Astrophysics Data System (ADS)

    Kroll, John

    2017-07-01

    A simple paleoclimate model was developed as a modeling exercise. The model is a lumped parameter system consisting of an ocean (water), land (dirt), glacier, and sea ice (ice), driven by the sun (fire). In comparison with other such models, its uniqueness lies in its relative simplicity while still yielding good results. For nominal parameter values, the system is very sensitive to small changes in the parameters, yielding equilibrium, steady oscillations, and catastrophes such as freezing or boiling oceans. However, stable solutions can be found, especially naturally oscillating solutions. For nominally realistic conditions, natural periods of order 100 kyr are obtained, and chaos ensues if the Milankovitch orbital forcing is applied. An analysis of a truncated system shows that the naturally oscillating solution is a limit cycle with the characteristics of a relaxation oscillation in the two major dependent variables, the ocean temperature and the glacier ice extent. The key to obtaining oscillations is having the effective emissivity decrease with temperature and, at the same time, the effective ocean albedo decrease with increasing glacier extent. Results of the original model compare favorably to the proxy data for ice mass variation, but not for temperature variation. However, modifications to the effective emissivity and albedo can be made to yield much more realistic results. The primary conclusion is that the opinion of Saltzman [Clim. Dyn. 5, 67-78 (1990)], that the external Milankovitch orbital forcing is not sufficient to explain the dominant 100 kyr period in the data, is plausible.

  9. The rotation-powered nature of some soft gamma-ray repeaters and anomalous X-ray pulsars

    NASA Astrophysics Data System (ADS)

    Coelho, Jaziel G.; Cáceres, D. L.; de Lima, R. C. R.; Malheiro, M.; Rueda, J. A.; Ruffini, R.

    2017-03-01

    Context. Soft gamma-ray repeaters (SGRs) and anomalous X-ray pulsars (AXPs) are slowly rotating isolated pulsars whose energy reservoir is still a matter of debate. Adopting fiducial neutron star (NS) parameters, mass M = 1.4 M⊙, radius R = 10 km, and moment of inertia I = 10^45 g cm², the rotational energy loss, Ėrot, is lower than the observed luminosity (dominated by the X-rays), LX, for many of the sources. Aims: We investigate the possibility that some members of this family could be canonical rotation-powered pulsars, using realistic NS structure parameters instead of fiducial values. Methods: We compute the NS mass, radius, moment of inertia and angular momentum from numerical integration of the axisymmetric general relativistic equations of equilibrium. We then compute the entire range of allowed values of the rotational energy loss, Ėrot, for the observed values of the rotation period P and spin-down rate Ṗ. We also estimate the surface magnetic field using a general relativistic model of a rotating magnetic dipole. Results: We show that realistic NS parameters lower the estimated values of the magnetic field and of the radiation efficiency, LX/Ėrot, with respect to estimates based on fiducial NS parameters. We show that nine SGRs/AXPs can be described as canonical pulsars driven by the NS rotational energy, for LX computed in the soft (2-10 keV) X-ray band, and we compute the range of NS masses for which LX/Ėrot < 1. We discuss the observed hard X-ray emission in three sources of the group of nine potentially rotation-powered NSs; this additional hard X-ray component dominates over the soft one, leading to LX/Ėrot > 1 in two of them. Conclusions: We show that nine SGRs/AXPs can be rotation-powered NSs if we analyze their X-ray luminosity in the soft 2-10 keV band. Interestingly, four of them show radio emission and six have been associated with supernova remnants (including Swift J1834.9-0846, the first SGR observed with a surrounding wind nebula). These observations give additional support to our interpretation of these sources as ordinary pulsars. When the hard X-ray emission observed in three sources of the group is included, the number of sources with LX/Ėrot < 1 becomes seven. The accuracy of the estimated distances and the possible contribution of the associated supernova remnants to the hard X-ray emission remain to be verified.

  10. Determination of Earth rotation by the combination of data from different space geodetic systems

    NASA Technical Reports Server (NTRS)

    Archinal, Brent Allen

    1987-01-01

    Formerly, Earth Rotation Parameters (ERP), i.e., polar motion and UT1-UTC values, have been determined using data from only one observational system at a time, or by the combination of parameters previously obtained in such determinations. The question arises as to whether a simultaneous solution using data from several sources would provide an improved determination of such parameters. To pursue this question, fifteen days of observations have been simulated using realistic networks of Lunar Laser Ranging (LLR), Satellite Laser Ranging (SLR) to Lageos, and Very Long Baseline Interferometry (VLBI) stations. A comparison has been made of the accuracy and precision of the ERP obtained from: (1) the individual system solutions, (2) the weighted means of those values, (3) all of the data by means of the combination of the normal equations obtained in (1), and (4) a grand solution with all the data. These simulations show that solutions done by the normal-equation combination and grand solution methods provide the best or nearly the best ERP for all the periods considered, but that weighted mean solutions provide nearly the same accuracy and precision. VLBI solutions also provide similar accuracies.
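
    Method (3), combining normal equations, amounts to summing each system's normal matrix and right-hand side and solving once for the shared parameters. A minimal sketch with two hypothetical observation systems and two parameters (standing in for the actual ERP estimation):

    ```python
    import numpy as np

    def normal_equations(A, y):
        """Return (N, b) with N = A^T A and b = A^T y for one observation system."""
        return A.T @ A, A.T @ y

    def combined_solution(systems):
        """Stack the normal equations of several systems (e.g. LLR, SLR,
        VLBI) and solve once for the shared parameters."""
        N = sum(n for n, _ in systems)
        b = sum(v for _, v in systems)
        return np.linalg.solve(N, b)

    rng = np.random.default_rng(1)
    x_true = np.array([0.3, -0.1])          # e.g. two polar-motion components
    A1, A2 = rng.normal(size=(20, 2)), rng.normal(size=(30, 2))
    y1 = A1 @ x_true + rng.normal(scale=0.01, size=20)
    y2 = A2 @ x_true + rng.normal(scale=0.01, size=30)
    x = combined_solution([normal_equations(A1, y1), normal_equations(A2, y2)])
    print(x)
    ```

    Because the normal matrices add, each system's full design matrix never needs to be held at once, which is why this combination is equivalent to the "grand solution" on all the data.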

  11. Controlled recovery of phylogenetic communities from an evolutionary model using a network approach

    NASA Astrophysics Data System (ADS)

    Sousa, Arthur M. Y. R.; Vieira, André P.; Prado, Carmen P. C.; Andrade, Roberto F. S.

    2016-04-01

    This work reports the use of a complex network approach to produce a phylogenetic classification tree of a simple evolutionary model. This approach has already been used to treat proteomic data of actual extant organisms, but an investigation of its reliability in retrieving a traceable evolutionary history has been missing. The evolutionary model used here includes key ingredients for the emergence of groups of related organisms by differentiation through random mutations and population growth, but purposefully omits other realistic ingredients that are not strictly necessary to originate an evolutionary history. This choice makes the model depend on only a small set of parameters, controlling the mutation probability and the population of different species. Our results indicate that, for a set of parameter values, the phylogenetic classification produced by this framework reproduces the actual evolutionary history with a very high average degree of accuracy. This includes parameter values for which the species originated by the evolutionary dynamics have modular structures. In the more general context of community identification in complex networks, our model offers a simple setting for evaluating the effects, on the efficiency of community formation and identification, of the underlying dynamics generating the network itself.

  12. Refraction tomography mapping of near-surface dipping layers using landstreamer data at East Canyon Dam, Utah

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.

    2008-01-01

    We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, i.e., those best matching the dipping-layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters; in comparison, conventional regularization parameters did not provide as realistic results. Thus, we consider that even when only qualitative (i.e., visual) a priori information about a site is available, as in the case of East Canyon Dam, Utah, it may be possible to minimize refraction nonuniqueness by estimating the most appropriate regularization parameters.

  13. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons.

    PubMed

    Rubinov, Mikail; Sporns, Olaf; Thivierge, Jean-Philippe; Breakspear, Michael

    2011-06-01

    Self-organized criticality refers to the spontaneous emergence of self-similar dynamics in complex systems poised between order and randomness. The presence of self-organized critical dynamics in the brain is theoretically appealing and is supported by recent neurophysiological studies. Despite this, the neurobiological determinants of these dynamics have not been previously sought. Here, we systematically examined the influence of such determinants in hierarchically modular networks of leaky integrate-and-fire neurons with spike-timing-dependent synaptic plasticity and axonal conduction delays. We characterized emergent dynamics in our networks by distributions of active neuronal ensemble modules (neuronal avalanches) and rigorously assessed these distributions for power-law scaling. We found that spike-timing-dependent synaptic plasticity enabled a rapid phase transition from random subcritical dynamics to ordered supercritical dynamics. Importantly, modular connectivity and low wiring cost broadened this transition, and enabled a regime indicative of self-organized criticality. The regime only occurred when modular connectivity, low wiring cost and synaptic plasticity were simultaneously present, and the regime was most evident when between-module connection density scaled as a power-law. The regime was robust to variations in other neurobiologically relevant parameters and favored systems with low external drive and strong internal interactions. Increases in system size and connectivity facilitated internal interactions, permitting reductions in external drive and facilitating convergence of postsynaptic-response magnitude and synaptic-plasticity learning rate parameter values towards neurobiologically realistic levels. We hence infer a novel association between self-organized critical neuronal dynamics and several neurobiologically realistic features of structural connectivity. 
The central role of these features in our model may reflect their importance for neuronal information processing.

  14. Chemically Realistic Tetrahedral Lattice Models for Polymer Chains: Application to Polyethylene Oxide.

    PubMed

    Dietschreit, Johannes C B; Diestler, Dennis J; Knapp, Ernst W

    2016-05-10

    To speed up the generation of an ensemble of poly(ethylene oxide) (PEO) polymer chains in solution, a tetrahedral lattice model possessing the appropriate bond angles is used. Distances between noncovalently bonded atoms are maintained at realistic values by generating chains with an enhanced degree of self-avoidance using a very efficient Monte Carlo (MC) algorithm. Potential energy parameters characterizing this lattice model are adjusted so as to mimic realistic PEO polymer chains in water simulated by molecular dynamics (MD), which serves as a benchmark. The MD data show that PEO chains have a fractal dimension of about two, in contrast to self-avoiding walk lattice models, which exhibit a fractal dimension of 1.7. The potential energy accounts for a mild hydrophobic effect (HYEF) of PEO and for a proper setting of the distribution between trans and gauche conformers. The potential energy parameters are determined by matching the Flory radius, the radius of gyration, and the fraction of trans torsion angles in the chain. A gratifying result is the excellent agreement of the pair distribution function and the angular correlation of the lattice model with the benchmark distributions. The lattice model allows for the precise computation of the torsional entropy of the chain. The generation of polymer conformations with the adjusted lattice model is at least 2 orders of magnitude more efficient than MD simulations of the PEO chain in explicit water. This method of generating chain conformations on a tetrahedral lattice can also be applied to other types of polymers with appropriate adjustment of the potential energy function. The efficient MC algorithm for generating chain conformations on a tetrahedral lattice is available for download at https://github.com/Roulattice/Roulattice.
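
    The lattice geometry underlying such a model can be illustrated by growing a self-avoiding chain on the tetrahedral (diamond) lattice. The naive restart scheme below is far less efficient than the paper's enhanced self-avoidance MC algorithm and omits all potential energy terms; it only demonstrates the bond geometry and the self-avoidance constraint:

    ```python
    import random

    # Bond vectors of the tetrahedral (diamond) lattice; one sublattice uses
    # these, the other their negatives, giving the 109.47° bond angle.
    BONDS = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

    def grow_chain(n_bonds, rng=random.Random(42)):
        """Grow one self-avoiding chain of n_bonds steps on the tetrahedral
        lattice; restart from scratch whenever a site would be revisited
        (an immediate back-step also revisits a site and triggers a restart)."""
        while True:
            pos = (0, 0, 0)
            visited = {pos}
            chain = [pos]
            sign = 1                       # alternates between the two sublattices
            ok = True
            for _ in range(n_bonds):
                dx, dy, dz = rng.choice(BONDS)
                nxt = (pos[0] + sign * dx, pos[1] + sign * dy, pos[2] + sign * dz)
                if nxt in visited:
                    ok = False
                    break
                visited.add(nxt)
                chain.append(nxt)
                pos, sign = nxt, -sign
            if ok:
                return chain

    chain = grow_chain(30)
    print(len(chain))  # → 31 sites for 30 bonds
    ```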

  15. Statistical mechanics of neocortical interactions. Derivation of short-term-memory capacity

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1984-06-01

    A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearest-neighbor columnar interactions are all consistent with well-established empirical rules of human short-term memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions.

  16. Analysis of System-Wide Investment in the National Airspace System: A Portfolio Analytical Framework and an Example

    NASA Technical Reports Server (NTRS)

    Bhadra, Dipasis; Morser, Frederick R.

    2006-01-01

    In this paper, the authors review the FAA's current program investments and lay out a preliminary analytical framework for undertaking projects that may address some of the noted deficiencies. Drawing upon well-developed theories from corporate finance, the authors offer an analytical framework that can be used for choosing FAA investments while taking into account risk, expected returns and inherent dependencies across NAS programs. The framework can be expanded to take in multiple assets and realistic parameter values when drawing an efficient risk-return frontier for the entire set of FAA investment programs.

  17. Hippocampal effective synchronization values are not pre-seizure indicator without considering the state of the onset channels

    PubMed Central

    Shayegh, Farzaneh; Sadri, Saeed; Amirfattahi, Rassoul; Ansari-Asl, Karim; Bellanger, Jean-Jacques; Senhadji, Lotfi

    2014-01-01

    In this paper, a model-based approach is presented to quantify the effective synchrony between hippocampal areas from depth-EEG signals. This approach is based on the parameter identification procedure of a realistic Multi-Source/Multi-Channel (MSMC) hippocampal model that simulates the function of different areas of the hippocampus. In the model it is supposed that the observed signals recorded using intracranial electrodes are generated by some hidden neuronal sources, according to some parameters. An algorithm is proposed to extract the intrinsic (relating solely to one hippocampal area) and extrinsic (coupling coefficients between two areas) model parameters simultaneously, by a Maximum Likelihood (ML) method. Coupling coefficients are considered as the measure of effective synchronization. This work can be considered as an application of Dynamic Causal Modeling (DCM) that enables us to understand effective synchronization changes during the transition from the inter-ictal to the pre-ictal state. The algorithm is first validated using synthetic datasets. Then, by extracting the coupling coefficients of real depth-EEG signals with the proposed approach, it is observed that the coupling values show no significant difference between ictal, pre-ictal and inter-ictal states, i.e., both increases and decreases of coupling coefficients were observed in all states. However, taking the values of the intrinsic parameters into account, the pre-seizure state can be distinguished from the inter-ictal state. It is claimed that seizures start to appear when there are seizure-related physiological parameters on the onset channel and its coupling coefficient toward other channels increases simultaneously. As a result of considering both intrinsic and extrinsic parameters as the feature vector, inter-ictal, pre-ictal and ictal activities are discriminated from each other with an accuracy of 91.33%. PMID:25061815

  18. Kinetic analysis of single molecule FRET transitions without trajectories

    NASA Astrophysics Data System (ADS)

    Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.

    2018-03-01

    Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well-chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrate the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
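
    The essence of the approach, scoring each simulated parameter setting by a statistical test against the experimental data, can be sketched with a two-sample Kolmogorov-Smirnov statistic and a permutation p-value. The Gaussian "FRET efficiency" samples below are illustrative, not the authors' photon-level Monte Carlo:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ks_stat(a, b):
        """Two-sample Kolmogorov-Smirnov statistic, evaluated at all data points."""
        a, b = np.sort(a), np.sort(b)
        grid = np.concatenate([a, b])
        cdf_a = np.searchsorted(a, grid, side="right") / a.size
        cdf_b = np.searchsorted(b, grid, side="right") / b.size
        return np.max(np.abs(cdf_a - cdf_b))

    def perm_pvalue(exp, sim, n_perm=200):
        """Permutation p-value for the hypothesis that the experimental and
        simulated samples come from the same distribution."""
        observed = ks_stat(exp, sim)
        pooled = np.concatenate([exp, sim])
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            if ks_stat(pooled[:exp.size], pooled[exp.size:]) >= observed:
                count += 1
        return (count + 1) / (n_perm + 1)

    # "Experimental" FRET efficiencies and two candidate parameter settings
    exp = rng.normal(0.5, 0.1, 500)
    good = rng.normal(0.5, 0.1, 500)     # matching lifetime parameters
    bad = rng.normal(0.7, 0.1, 500)      # mismatched parameters
    print(perm_pvalue(exp, good), perm_pvalue(exp, bad))
    ```

    Scanning the parameter grid and keeping the settings with large p-values then delimits the region consistent with the experiment, mirroring the method's output.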

  19. Rye Canyon X-ray noise test: One-third octave-band data

    NASA Technical Reports Server (NTRS)

    Willshire, W. L., Jr.

    1983-01-01

    Acoustic data were obtained for the 25 ft. diameter X-wing rotor model during performance testing of the rotor system in hover. Data collected at the outdoor whirl tower test facility with a twelve-microphone array were taken for approximately 150 test conditions comprised of various combinations of RPM, blade pressure ratio (BPR), and blade angle of attack (collective). The three test parameters had four values of RPM from 404 to 497, twelve values of BPR from 1.0 to 2.1, and six values of collective from 0.0 deg to 8.5 deg. Fifteen to twenty seconds of acoustic data were reduced to obtain an average 1/3 octave band spectrum for each microphone for each test condition. The complete, as-measured, 1/3 octave band results for all the acoustic data are listed. Another part of the X-wing noise test was the acoustic calibration of the Rye Canyon whirl tower bowl. Corrections were computed which, when applied to as-measured data, yield estimates of the free-field X-wing noise. The free-field estimates provide a more realistic measure of the rotor system noise levels. Trend analyses of the effects of the three test parameters on noise level were performed.

  20. Single Neuron Optimization as a Basis for Accurate Biophysical Modeling: The Case of Cerebellar Granule Cells.

    PubMed

    Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio

    2017-01-01

    In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (Gi-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to efficiently and automatically tune these parameters. Nonetheless, since similar firing patterns can be achieved through different combinations of Gi-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of Gi-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental Gi-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, which were not used as features. Thus, parameter optimization using genetic algorithms provided an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent reconstruction of large-scale neuronal network models.
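
    A selection/mutation genetic algorithm of the kind referred to can be sketched on a toy objective. The closed-form "firing rate" and the parameter ranges below are illustrative stand-ins for an actual biophysical simulation and its Gi-max bounds:

    ```python
    import random

    random.seed(3)

    def firing_rate(g_na, g_k):
        """Toy stand-in for a simulated GrC firing rate (Hz) as a function
        of two maximum conductances; a real workflow would run the full
        compartmental model here."""
        return 80.0 * g_na / (1.0 + g_k)

    TARGET = 40.0  # desired repetitive-firing rate, Hz

    def fitness(ind):
        return -abs(firing_rate(*ind) - TARGET)

    def evolve(pop_size=30, generations=60):
        pop = [(random.uniform(0.1, 2.0), random.uniform(0.1, 2.0))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 3]                  # selection (elitist)
            pop = parents + [
                (max(0.01, p[0] + random.gauss(0, 0.05)),   # mutation
                 max(0.01, p[1] + random.gauss(0, 0.05)))
                for p in random.choices(parents, k=pop_size - len(parents))
            ]
        return max(pop, key=fitness)

    best = evolve()
    print(round(firing_rate(*best), 1))
    ```

    Note that many (g_na, g_k) pairs yield the same rate here, which is exactly the degeneracy of Gi-max combinations the abstract raises.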

  1. Calculating broad neutron resonances in a cut-off Woods-Saxon potential

    NASA Astrophysics Data System (ADS)

    Baran, Á.; Noszály, Cs.; Salamon, P.; Vertse, T.

    2015-07-01

    In a cut-off Woods-Saxon (CWS) potential with realistic depth, S-matrix poles lying far from the imaginary wave-number axis form a sequence in which the distances between consecutive resonances are inversely proportional to the cut-off radius, an unphysical parameter. Other poles lying closer to the imaginary wave-number axis may have trajectories with irregular shapes as the depth of the potential increases. Poles that lie close together repel each other, and their repulsion is responsible for the changes in the directions of the corresponding trajectories. The repulsion may cause certain resonances to become antibound and then resonances again when they collide on the imaginary axis. The interaction is extremely sensitive to the cut-off radius, which is an apparent handicap of the CWS potential.

  2. Fast rotating neutron stars with realistic nuclear matter equation of state

    NASA Astrophysics Data System (ADS)

    Cipolletta, F.; Cherubini, C.; Filippi, S.; Rueda, J. A.; Ruffini, R.

    2015-07-01

    We construct equilibrium configurations of uniformly rotating neutron stars for selected relativistic mean-field nuclear matter equations of state (EOS). We compute, in particular, the gravitational mass (M), equatorial (Req) and polar (Rpol) radii, eccentricity, angular momentum (J), moment of inertia (I) and quadrupole moment (M2) of neutron stars stable against mass shedding and secular axisymmetric instability. By constructing the constant-frequency sequence f = 716 Hz of the fastest observed pulsar, PSR J1748-2446ad, and constraining it to be within the stability region, we obtain a lower mass bound for the pulsar, Mmin = [1.2-1.4] M⊙, for the EOS employed. Moreover, we give a fitting formula relating the baryonic mass (Mb) and gravitational mass of nonrotating neutron stars, Mb/M⊙ = M/M⊙ + (13/200)(M/M⊙)^2 [or M/M⊙ = Mb/M⊙ - (1/20)(Mb/M⊙)^2], which is independent of the EOS. We also obtain a fitting formula, although not EOS independent, relating the gravitational mass and the angular momentum of neutron stars along the secular axisymmetric instability line for each EOS. We compute the maximum value of the dimensionless angular momentum, a/M ≡ cJ/(GM^2) (or "Kerr parameter"), (a/M)max ≈ 0.7, found to be also independent of the EOS. We then compare and contrast the quadrupole moment of rotating neutron stars with the one predicted by the Kerr exterior solution for the same values of mass and angular momentum. Finally, we show that, although the mass quadrupole moment of realistic neutron stars never reaches the Kerr value, the latter is closely approached from above at the maximum mass value, as physically expected from the no-hair theorem. In particular, the stiffer the EOS, the more closely the mass quadrupole moment approaches the value of the Kerr solution.
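
    The quoted EOS-independent fit and its inverse are simple quadratics and easy to evaluate; note that, as given, the two expressions are approximate rather than exact inverses of each other:

    ```python
    def baryonic_mass(m):
        """EOS-independent fit from the paper: Mb/M⊙ = M/M⊙ + (13/200)(M/M⊙)^2,
        with m the gravitational mass in solar masses."""
        return m + (13.0 / 200.0) * m ** 2

    def gravitational_mass(mb):
        """Inverse fit quoted in the paper: M/M⊙ = Mb/M⊙ - (1/20)(Mb/M⊙)^2."""
        return mb - (1.0 / 20.0) * mb ** 2

    m = 1.4                         # canonical neutron star, solar masses
    mb = baryonic_mass(m)
    print(round(mb, 3), round(gravitational_mass(mb), 3))  # → 1.527 1.411
    ```

    Round-tripping a 1.4 M⊙ star returns ≈ 1.411 M⊙ rather than exactly 1.4, illustrating the quoted fits' few-per-mille level of mutual consistency.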

  3. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. 
Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a parameter subset containing, for example, only soil parameters thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
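    The Sobol' analysis described above can be sketched with a numpy-only pick-freeze (Saltelli-style) estimator of first-order indices; the linear toy function below is a hypothetical stand-in for a Noah-MP run, not the actual land surface model:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # toy stand-in for one scalar model output (e.g., latent heat) per sample
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

def sobol_first_order(model, d, n, rng):
    """Pick-freeze estimate of first-order Sobol' indices S1_i = V_i / V."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    s1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # vary only parameter i between fA and f(ABi)
        s1[i] = np.mean(fB * (model(ABi) - fA)) / var
    return s1

s1 = sobol_first_order(model, d=3, n=20000, rng=rng)
```

    For the linear toy model the analytic indices are c_i^2 / sum(c_j^2), i.e. about (0.79, 0.20, 0.01), so the estimator's ranking directly identifies the dominant parameter.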

  4. Clinical tooth preparations and associated measuring methods: a systematic review.

    PubMed

    Tiu, Janine; Al-Amleh, Basil; Waddell, J Neil; Duncan, Warwick J

    2015-03-01

    The geometries of tooth preparations are important features that aid in the retention and resistance of cemented complete crowns. The clinically relevant values and the methods used to measure these are not clear. The purpose of this systematic review was to retrieve, organize, and critically appraise studies measuring clinical tooth preparation parameters, specifically the methodology used to measure the preparation geometry. A database search was performed in Scopus, PubMed, and ScienceDirect with an additional hand search on December 5, 2013. The articles were screened for inclusion and exclusion criteria, and information regarding the total occlusal convergence (TOC) angle, margin design, and associated measuring methods was extracted. The values and associated measuring methods were tabulated. A total of 1006 publications were initially retrieved. After removing duplicates and filtering by using exclusion and inclusion criteria, 983 articles were excluded. Twenty-three articles reported clinical tooth preparation values. Twenty articles reported the TOC, 4 articles reported margin designs, 4 articles reported margin angles, and 3 articles reported the abutment height of preparations. A variety of methods were used to measure these parameters. TOC values seem to be the most important preparation parameter. Recommended TOC values have increased over the past 4 decades from an unachievable 2- to 5-degree taper to a more realistic 10 to 22 degrees. Recommended values are more likely to be achieved under experimental conditions if crown preparations are performed outside of the mouth. We recommend that a standardized measurement method based on the cross sections of crown preparations and standardized reporting be developed for future studies analyzing preparation geometry. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  5. Effects of including surface depressions in the application of the Precipitation-Runoff Modeling System in the Upper Flint River Basin, Georgia

    USGS Publications Warehouse

    Viger, Roland J.; Hay, Lauren E.; Jones, John W.; Buell, Gary R.

    2010-01-01

    This report documents an extension of the Precipitation Runoff Modeling System that accounts for the effect of a large number of water-holding depressions in the land surface on the hydrologic response of a basin. Several techniques for developing the inputs needed by this extension also are presented. These techniques include the delineation of the surface depressions, the generation of volume estimates for the surface depressions, and the derivation of model parameters required to describe these surface depressions. This extension is valuable for applications in basins where surface depressions are too small or numerous to conveniently model as discrete spatial units, but where the aggregated storage capacity of these units is large enough to have a substantial effect on streamflow. In addition, this report documents several new model concepts that were evaluated in conjunction with the depression storage functionality, including "hydrologically effective" imperviousness, rates of hydraulic conductivity, and daily streamflow routing. All of these techniques are demonstrated as part of an application in the Upper Flint River Basin, Georgia. Simulated solar radiation, potential evapotranspiration, and water balances match observations well, with small errors in the first two quantities during June and August because of temperature differences between the calibration and evaluation periods for those months. Daily runoff simulations show increasing accuracy with streamflow and a good fit overall. Including surface depression storage in the model has the effect of decreasing daily streamflow for all but the lowest flow values. The report discusses the choices and resultant effects involved in delineating and parameterizing these features. The remaining enhancements to the model and its application provide a more realistic description of basin geography and hydrology that serve to constrain the calibration process to more physically realistic parameter values.

  6. Spiral arms and disc stability in the Andromeda galaxy

    NASA Astrophysics Data System (ADS)

    Tenjes, P.; Tuvikene, T.; Tamm, A.; Kipper, R.; Tempel, E.

    2017-04-01

    Aims: Density waves are often considered as the triggering mechanism of star formation in spiral galaxies. Our aim is to study relations between different star formation tracers (stellar UV and near-IR radiation and emission from H I, CO, and cold dust) in the spiral arms of M 31, to calculate stability conditions in the galaxy disc, and to draw conclusions about possible star formation triggering mechanisms. Methods: We selected fourteen spiral arm segments from the de-projected data maps and compared emission distributions along the cross sections of the segments in different datasets to each other, in order to detect spatial offsets between young stellar populations and the star-forming medium. By using the disc stability condition as a function of perturbation wavelength and distance from the galaxy centre, we calculated the effective disc stability parameters and the least stable wavelengths at different distances. For this we used a mass distribution model of M 31 with four disc components (old and young stellar discs, cold and warm gaseous discs) embedded within the external potential of the bulge, the stellar halo, and the dark matter halo. Each component is considered to have a realistic finite thickness. Results: No systematic offsets between the observed UV and CO/far-IR emission across the spiral segments are detected. The calculated effective stability parameter has a lowest value of Qeff ≃ 1.8 at galactocentric distances of 12-13 kpc. The least stable wavelengths are rather long, with the lowest values starting from ≃ 3 kpc at distances R > 11 kpc. Conclusions: The classical density wave theory is not a realistic explanation for the spiral structure of M 31. Instead, external causes should be considered, such as interactions with massive gas clouds or dwarf companions of M 31.
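    The classical single-component, razor-thin Toomre criterion behind the effective stability parameter can be sketched as follows (the paper's Qeff additionally accounts for finite thickness and multiple disc components; the outer-disc numbers below are illustrative assumptions, not values from the paper):

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def toomre_q(sigma_v, kappa, surf_density):
    """Classical Toomre stability parameter Q = sigma * kappa / (pi * G * Sigma)
    for a razor-thin one-component disc; Q > 1 means stable to axisymmetric
    perturbations."""
    return sigma_v * kappa / (np.pi * G * surf_density)

# Illustrative outer-disc values: flat rotation curve Vc = 250 km/s at R = 12 kpc,
# gas dispersion 7 km/s, gas surface density 8 Msun/pc^2.
R = 12000.0                          # pc
kappa = np.sqrt(2.0) * 250.0 / R     # epicyclic frequency [km/s/pc]
q = toomre_q(sigma_v=7.0, kappa=kappa, surf_density=8.0)   # ~1.9, near Qeff ~ 1.8
```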

  7. Evaluation of Shiraz wastewater treatment plant effluent quality for agricultural irrigation by Canadian Water Quality Index (CWQI)

    PubMed Central

    2013-01-01

    Background Using treated wastewater in agriculture irrigation could be a realistic solution for the shortage of fresh water in Iran; however, it is associated with environmental and health threats; therefore, effluent quality assessment is quite necessary before use. The present study aimed to evaluate the physicochemical and microbial quality of Shiraz wastewater treatment plant effluent for being used in agricultural irrigation. In this study, 20 physicochemical and 3 microbial parameters were measured during warm (April to September) and cold months (October to March). Using the measured parameters and the Canadian Water Quality Index, the quality of the effluent was determined in both warm and cold seasons and in all the seasons together. Results The calculated index for the physicochemical parameters in the effluent was the same (87) in warm and cold months, and 85 for all the seasons together. When the microbial parameters were used in order to calculate the index, it declined to 67 in the warm and cold seasons and 64 in all the seasons together. Also, it was found that three physicochemical parameters (TDS, EC, and NO3) and three microbial parameters (Fecal coliform, Helminthes egg, and Total coliform) had the largest contribution to the reduction of the index value. Conclusions The results showed that the physicochemical quality of Shiraz Wastewater Treatment Plant Effluent was good for irrigation in the warm and cold seasons and overall. However, when the microbial parameters were applied, the index value declined dramatically and the quality of the effluent was marginal. PMID:23566673
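    The Canadian (CCME) Water Quality Index combines three factors: F1 (scope, percent of failed variables), F2 (frequency, percent of failed tests), and F3 (amplitude, from normalized excursions). A minimal sketch for maximum-limit guidelines follows; the variable names and measured values are made-up illustrations, not the study's data (minimum-limit guidelines are handled symmetrically in the full formulation and are omitted here):

```python
import math

def ccme_wqi(tests, objectives):
    """CCME Water Quality Index for max-limit objectives.
    tests: dict variable -> list of measured values
    objectives: dict variable -> maximum permissible value
    Returns an index on 0-100 (higher is better)."""
    n_vars = len(tests)
    n_tests = sum(len(v) for v in tests.values())
    failed_vars, failed_tests, excursions = 0, 0, 0.0
    for var, values in tests.items():
        obj = objectives[var]
        fails = [v for v in values if v > obj]
        if fails:
            failed_vars += 1
            failed_tests += len(fails)
            excursions += sum(v / obj - 1.0 for v in fails)
    f1 = 100.0 * failed_vars / n_vars          # scope
    f2 = 100.0 * failed_tests / n_tests        # frequency
    nse = excursions / n_tests                 # normalized sum of excursions
    f3 = nse / (0.01 * nse + 0.01)             # amplitude
    return 100.0 - math.sqrt(f1**2 + f2**2 + f3**2) / 1.732

# Illustrative (hypothetical) data: three variables, two tests each.
tests = {"TDS": [8.0, 12.0], "NO3": [4.0, 4.0], "FC": [150.0, 90.0]}
objectives = {"TDS": 10.0, "NO3": 5.0, "FC": 100.0}
wqi = ccme_wqi(tests, objectives)    # ~56.5, in the "marginal" band (45-64)
```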

  8. Non-standard interactions and neutrinos from dark matter annihilation in the Sun

    NASA Astrophysics Data System (ADS)

    Demidov, S. V.

    2018-02-01

    We perform an analysis of the influence of non-standard neutrino interactions (NSI) on the neutrino signal from dark matter annihilations in the Sun. Taking experimentally allowed benchmark values for the matter NSI parameters, we show that the evolution of such neutrinos with energies at the GeV scale can be considerably modified. We simulate propagation of neutrinos from the Sun to the Earth for realistic dark matter annihilation channels and find that the matter NSI can result in at most a 30% correction to the signal rate of muon track events at neutrino telescopes. Still, present experimental bounds on dark matter from these searches remain robust in the presence of NSI within a considerable part of their allowed parameter space. At the same time, the electron neutrino flux from dark matter annihilation in the Sun can be changed by a factor of a few.

  9. Energy harvesting from sea waves with consideration of Airy and JONSWAP theory and optimization of energy harvester parameters

    NASA Astrophysics Data System (ADS)

    Mirab, Hadi; Fathi, Reza; Jahangiri, Vahid; Ettefagh, Mir Mohammad; Hassannejad, Reza

    2015-12-01

    One of the new methods for powering low-power electronic devices at sea is a wave energy harvesting system. In this method, piezoelectric material is employed to convert the mechanical energy of sea waves into electrical energy. The advantage of this method is that it avoids a battery charging system. Energy harvesting from sea waves has been studied before; what is new here is modeling the harvester under the random JONSWAP wave theory and then determining the optimum values of the harvested energy. This paper does so by implementing the JONSWAP wave model, calculating the produced power, and showing realistically that the output power is decreased in comparison with the simpler Airy wave model. In addition, the parameters of the energy harvester system are optimized using a simulated annealing algorithm, yielding increased produced power.
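    The JONSWAP spectrum that distinguishes this random-sea model from the Airy description has a standard closed form; a minimal sketch is below (the Phillips constant and peak-enhancement factor are textbook defaults, site-specific in practice, and are not taken from the paper):

```python
import numpy as np

def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
    """One-sided JONSWAP wave spectrum S(f) [m^2 s].
    f: frequencies [Hz]; fp: peak frequency [Hz];
    alpha: Phillips constant; gamma: peak-enhancement factor."""
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    pm = alpha * g**2 * (2.0 * np.pi) ** -4 * f ** -5.0 * np.exp(-1.25 * (fp / f) ** 4)
    return pm * gamma**r

f = np.linspace(0.03, 1.0, 2000)     # uniform frequency grid [Hz]
S = jonswap(f, fp=0.1)
m0 = np.sum(S) * (f[1] - f[0])       # zeroth spectral moment (Riemann sum)
hs = 4.0 * np.sqrt(m0)               # significant wave height [m]
```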

  10. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. 
For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
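    The Latin hypercube sampling plus reweighting scheme can be sketched with numpy alone; the two-parameter "prevalence" response surface below is a hypothetical toy standing in for the stochastic herd model, and the observed prevalence datum and its spread are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d, rng):
    """n samples in [0,1]^d with exactly one point per stratum per dimension."""
    samples = np.empty((n, d))
    for j in range(d):
        samples[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return samples

def simulated_prevalence(p):
    """Toy response surface mapping two rate-like parameters to a prevalence."""
    beta, mu = p[..., 0], p[..., 1]
    return beta / (beta + mu)

params = latin_hypercube(5000, 2, rng)       # parameters already on [0,1] here
prev = simulated_prevalence(params)

# Reweight each sampled parameter set by its ability to reproduce an
# (assumed) observed prevalence of 0.3 with spread 0.05.
observed, sd = 0.3, 0.05
w = np.exp(-0.5 * ((prev - observed) / sd) ** 2)
w /= w.sum()

posterior_mean_prev = np.sum(w * prev)       # weighted summary, ~0.3
```

    Once the weights exist, any other model output (e.g., the effect of a control strategy) can be summarized with the same weights, which is the reuse the abstract describes.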

  11. Realistic simplified gaugino-higgsino models in the MSSM

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn

    2018-03-01

    We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing, that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
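    The key step the abstract argues for, obtaining masses and mixings from a proper matrix diagonalisation rather than setting them by hand, can be sketched with the standard tree-level neutralino mass matrix (conventions as in common MSSM references; the input values are illustrative, not the paper's benchmarks):

```python
import numpy as np

def neutralino_masses(M1, M2, mu, tan_beta, mZ=91.19, sw2=0.231):
    """Tree-level neutralino masses [GeV] and mixing matrix from the standard
    symmetric mass matrix in the (bino, wino, higgsino_d, higgsino_u) basis.
    Physical masses are absolute values of the (possibly negative) eigenvalues;
    the sign can be absorbed into field phases."""
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    b = np.arctan(tan_beta)
    sb, cb = np.sin(b), np.cos(b)
    M = np.array([
        [M1, 0.0, -mZ * sw * cb,  mZ * sw * sb],
        [0.0, M2,  mZ * cw * cb, -mZ * cw * sb],
        [-mZ * sw * cb,  mZ * cw * cb, 0.0, -mu],
        [ mZ * sw * sb, -mZ * cw * sb, -mu, 0.0],
    ])
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(np.abs(vals))
    return np.abs(vals)[order], vecs[:, order]

# Higgsino-like scenario: mu well below M1, M2 gives two ~|mu| states.
masses, mixing = neutralino_masses(M1=1000.0, M2=1100.0, mu=200.0, tan_beta=10.0)
higgsino_fraction = mixing[2, 0] ** 2 + mixing[3, 0] ** 2   # content of lightest state
```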

  12. Modeling and analyzing cascading dynamics of the Internet based on local congestion information

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Nie, Jianlong; Zhu, Zhiliang; Yu, Hai; Xue, Yang

    2018-06-01

    Cascading failure has become one of the vital issues in network science. By considering realistic network operational settings, we propose a congestion function to represent the congested extent of a node and construct a local congestion-aware routing strategy with a tunable parameter. We investigate cascading failures on the Internet triggered by deliberate attacks. Simulation results show that the tunable parameter has an optimal value that makes the network achieve a maximum level of robustness. The robustness of the network has a positive correlation with the tolerance parameter, but a negative correlation with the packet generation rate. In addition, there exists a threshold of the attacked proportion of nodes at which the network reaches its lowest robustness. Moreover, by introducing the concept of time delay for information transmission on the Internet, we find that an increase in the time delay rapidly decreases the robustness of the network. The findings of the paper will be useful for enhancing the robustness of the Internet in the future.

  13. Structural kinetic modeling of metabolic networks.

    PubMed

    Steuer, Ralf; Gross, Thilo; Selbig, Joachim; Blasius, Bernd

    2006-08-08

    To develop and investigate detailed mathematical models of metabolic processes is one of the primary challenges in systems biology. However, despite considerable advance in the topological analysis of metabolic networks, kinetic modeling is still often severely hampered by inadequate knowledge of the enzyme-kinetic rate laws and their associated parameter values. Here we propose a method that aims to give a quantitative account of the dynamical capabilities of a metabolic system, without requiring any explicit information about the functional form of the rate equations. Our approach is based on constructing a local linear model at each point in parameter space, such that each element of the model is either directly experimentally accessible or amenable to a straightforward biochemical interpretation. This ensemble of local linear models, encompassing all possible explicit kinetic models, then allows for a statistical exploration of the comprehensive parameter space. The method is exemplified on two paradigmatic metabolic systems: the glycolytic pathway of yeast and a realistic-scale representation of the photosynthetic Calvin cycle.
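    The local linear model at a steady state can be sketched for a toy pathway: the Jacobian is assembled from the stoichiometry, the steady-state fluxes and concentrations, and normalized saturation parameters in (0, 1] that are sampled rather than derived from explicit rate laws (the pathway and parameter values below are illustrative, not from the paper):

```python
import numpy as np

# Toy linear pathway  ->(v1) A ->(v2) B ->(v3)  at a steady state with all
# fluxes v_i = 1 and concentrations x_j = 1 (normalized units).
N = np.array([[1, -1, 0],
              [0, 1, -1]])       # stoichiometric matrix (metabolites x reactions)
v = np.ones(3)
x = np.ones(2)

# Normalized elasticities theta[i, j] = (x_j / v_i) * dv_i/dx_j; values in
# (0, 1] interpolate between saturated and linear (mass-action) kinetics.
theta = np.array([[0.0, 0.0],    # v1: constant input
                  [0.5, 0.0],    # v2: half-saturated in A
                  [0.0, 0.8]])   # v3: nearly linear in B

# Jacobian of the system at the sampled state: J = N diag(v) Theta diag(1/x)
J = N @ np.diag(v) @ theta @ np.diag(1.0 / x)
stable = bool(np.all(np.linalg.eigvals(J).real < 0))
```

    Sampling `theta` over its admissible range and recording the eigenvalues of `J` gives the statistical exploration of dynamical capabilities the abstract describes.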

  14. A computational model of oxygen delivery by hemoglobin-based oxygen carriers in three-dimensional microvascular networks.

    PubMed

    Tsoukias, Nikolaos M; Goldman, Daniel; Vadapalli, Arjun; Pittman, Roland N; Popel, Aleksander S

    2007-10-21

    A detailed computational model is developed to simulate oxygen transport from a three-dimensional (3D) microvascular network to the surrounding tissue in the presence of hemoglobin-based oxygen carriers. The model accounts for nonlinear O(2) consumption, myoglobin-facilitated diffusion and nonlinear oxyhemoglobin dissociation in the RBCs and plasma. It also includes a detailed description of intravascular resistance to O(2) transport and is capable of incorporating realistic 3D microvascular network geometries. Simulations in this study were performed using a computer-generated microvascular architecture that mimics morphometric parameters for the hamster cheek pouch retractor muscle. Theoretical results are presented next to corresponding experimental data. Phosphorescence quenching microscopy provided PO(2) measurements at the arteriolar and venular ends of capillaries in the hamster retractor muscle before and after isovolemic hemodilution with three different hemodilutents: a non-oxygen-carrying plasma expander and two hemoglobin solutions with different oxygen affinities. Sample results in a microvascular network show an enhancement of diffusive shunting between arterioles, venules and capillaries and a decrease in hemoglobin's effectiveness for tissue oxygenation when its affinity for O(2) is decreased. Model simulations suggest that microvascular network anatomy can affect the optimal hemoglobin affinity for reducing tissue hypoxia. O(2) transport simulations in realistic representations of microvascular networks should provide a theoretical framework for choosing optimal parameter values in the development of hemoglobin-based blood substitutes.

  15. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    Seismic moment tensor is one of the most important source parameters defining the earthquake dimension and the style of the activated fault. Moment tensor catalogues are ordinarily used by geoscientists; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties upon their own analysis. The 2012 May 20 Emilia mainshock is a representative event since it is defined in the literature with a moment magnitude value (Mw) spanning between 5.63 and 6.12. An uncertainty of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution in seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, the epicentral distance and the azimuth of the stations used. We emphasize that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and must be accompanied by their related uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.
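    For scale, the standard moment magnitude definition (M0 in N m) shows what the quoted 0.5-unit spread in Mw means for the seismic moment itself:

```python
import math

def moment_magnitude(m0):
    """Moment magnitude from seismic moment M0 [N m]: Mw = (2/3)(log10 M0 - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def seismic_moment(mw):
    """Inverse relation: M0 [N m] from Mw."""
    return 10.0 ** (1.5 * mw + 9.1)

# The Mw 5.63-6.12 spread quoted for the Emilia mainshock corresponds to a
# factor of ~5.4 in seismic moment.
ratio = seismic_moment(6.12) / seismic_moment(5.63)
```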

  16. Generation of uniformly distributed dose points for anatomy-based three-dimensional dose optimization methods in brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N

    2000-05-01

    We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume) and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or variants of them, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV, or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than do D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory, and consideration of catheter volumes results in a realistic description of anatomical dose distributions.
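    The statistical point above, that D(mean) is stable under resampling while the extremes are not, can be illustrated with a toy Monte Carlo experiment; the log-normal "dose" distribution is an assumed stand-in for a clinical dose distribution with a long high-dose tail near the sources, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)

def dose_stats(n_points, n_repeats=200):
    """Run-to-run scatter of D_mean and D_max over repeated random samplings
    of n_points 'dose values' from a long-tailed toy distribution."""
    doses = rng.lognormal(mean=0.0, sigma=0.5, size=(n_repeats, n_points))
    return (doses.mean(axis=1).std(),   # scatter of the sample mean
            doses.max(axis=1).std())    # scatter of the sample maximum

scatter_mean, scatter_max = dose_stats(1000)
```

    The scatter of the mean shrinks like 1/sqrt(n), while the maximum keeps fluctuating with the tail, which is why extreme dose values make poor optimization targets.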

  17. Influence of simulation parameters on the speed and accuracy of Monte Carlo calculations using PENEPMA

    NASA Astrophysics Data System (ADS)

    Llovet, X.; Salvat, F.

    2018-01-01

    The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.

  18. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  19. Dependence of tropical cyclone development on Coriolis parameter: A theoretical model

    NASA Astrophysics Data System (ADS)

    Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda

    2018-03-01

    A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5° without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.
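    The Coriolis parameter varied across these f-plane experiments follows directly from latitude:

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate [rad/s]

def coriolis_parameter(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude) [s^-1]."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

f5 = coriolis_parameter(5.0)     # ~1.27e-5 s^-1, the model's preferred latitude
f45 = coriolis_parameter(45.0)   # ~1.03e-4 s^-1, mid-latitude value for comparison
```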

  20. Neuromodulation impact on nonlinear firing behavior of a reduced model motoneuron with the active dendrite

    PubMed Central

    Kim, Hojeong; Heckman, C. J.

    2014-01-01

    Neuromodulatory inputs from brainstem systems modulate the normal function of spinal motoneurons by altering the activation properties of persistent inward currents (PICs) in their dendrites. However, the effect of the PIC on firing outputs also depends on its location in the dendritic tree. To investigate the interaction between PIC neuromodulation and PIC location dependence, we used a two-compartment model that was biologically realistic in that it retains directional and frequency-dependent electrical coupling between the soma and the dendrites, as seen in multi-compartment models based on full anatomical reconstructions of motoneurons. Our two-compartment approach allowed us to systematically vary the coupling parameters between the soma and the dendrite to accurately reproduce the effect of location of the dendritic PIC on the generation of nonlinear (hysteretic) motoneuron firing patterns. Our results show that as a single parameter value for PIC activation was either increased or decreased by 20% from its default value, the solution space of the coupling parameter values for nonlinear firing outputs was drastically reduced by approximately 80%. As a result, the model tended to fire only in a linear mode at the majority of dendritic PIC sites. The same results were obtained when all parameters for the PIC activation simultaneously changed only by approximately ±10%. Our results suggest the democratization effect of neuromodulation: the neuromodulation by the brainstem systems may play a role in switching the motoneurons with PICs at different dendritic locations to a similar mode of firing by reducing the effect of the dendritic location of PICs on the firing behavior. PMID:25309410

  1. Effects of viscosity and constraints on the dispersion and dissipation of waves in large blood vessels. II.

    NASA Technical Reports Server (NTRS)

    Jones, E.; Anliker, M.; Chang, I.

    1971-01-01

    Comparison of previously described theoretical predictions with in vivo data from anesthetized dogs. It is shown that the observed attenuation of the pressure and axial waves cannot be accounted for by fluid viscosity alone. For large values of the frequency parameter alpha, the previous analysis is extended to include the effects of viscoelasticity of the vessel wall. The results indicate that the speeds of both types of waves are essentially unaffected by a realistic viscoelasticity model while the attenuation per wavelength is significantly increased and becomes frequency independent. There is fair agreement between theory and experiment.

  2. Chiral solitons in spinor polariton rings

    NASA Astrophysics Data System (ADS)

    Zezyulin, D. A.; Gulevich, D. R.; Skryabin, D. V.; Shelykh, I. A.

    2018-04-01

We theoretically consider a one-dimensional polariton ring, accounting for both longitudinal-transverse (TE-TM) and Zeeman splittings of spinor polariton states and for spin-dependent polariton-polariton interactions. We present a class of solutions in the form of localized defects rotating with constant angular velocity and analyze their properties for realistic values of the parameters of the system. We show that the effects of the geometric phase arising from the interplay between the external magnetic field and the TE-TM splitting introduce chirality into the system and make solitons propagating in the clockwise and anticlockwise directions nonequivalent. This can be interpreted as a solitonic analog of the Aharonov-Bohm effect.

  3. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.

  4. Earth Global Reference Atmospheric Model (GRAM99): Short Course

    NASA Technical Reports Server (NTRS)

    Leslie, Fred W.; Justus, C. G.

    2007-01-01

Earth-GRAM is a FORTRAN software package that can run on a variety of platforms, including PCs. For any time and location in the Earth's atmosphere, Earth-GRAM provides values of atmospheric quantities such as temperature, pressure, density, winds, and constituents. Dispersions (perturbations) of these parameters are also provided and have realistic correlations, means, and variances, which is useful for Monte Carlo analysis. Earth-GRAM is driven by observations, including a tropospheric database available from the National Climatic Data Center. Although Earth-GRAM can be run in a "stand-alone" mode, many users incorporate it into their trajectory codes. The source code is distributed free of charge to eligible recipients.

  5. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidating the interplay between plankton dynamics and atmospheric CO2 levels, and examining alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, and discrepancy between the model and the natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions; then water quality data are used to update the distributions and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six-state-variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment: the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations, a convenient way to obtain representative samples of parameter values. 
The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
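    The MCMC step described in this record can be illustrated with a minimal random-walk Metropolis sampler. The one-parameter decay "model," the noise level, and the prior below are invented stand-ins for illustration, not the paper's six-state-variable plankton model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy forward model: a single decay-rate parameter theta -> predicted series.
    def model(theta, t):
        return 2.0 * np.exp(-theta * t)   # hypothetical response curve

    t_obs = np.linspace(0.0, 5.0, 20)
    theta_true = 0.7
    y_obs = model(theta_true, t_obs) + rng.normal(0.0, 0.05, t_obs.size)

    def log_posterior(theta):
        if theta <= 0.0:                  # prior support: theta > 0
            return -np.inf
        resid = y_obs - model(theta, t_obs)
        # Gaussian likelihood (sigma = 0.05) + weak log-normal prior on theta
        return -0.5 * np.sum(resid**2) / 0.05**2 - 0.5 * np.log(theta) ** 2

    # Random-walk Metropolis: the generic MCMC move used when the posterior
    # has no closed form, as in the calibration described above.
    theta, samples = 1.0, []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.1)
        if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
            theta = prop
        samples.append(theta)

    posterior = np.array(samples[1000:])  # discard burn-in
    print(posterior.mean(), posterior.std())
    ```

    The retained samples approximate the posterior of theta, from which point estimates and predictive uncertainty bounds follow.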

  6. Applicability of mode-coupling theory to polyisobutylene: a molecular dynamics simulation study.

    PubMed

    Khairy, Y; Alvarez, F; Arbe, A; Colmenero, J

    2013-10-01

The applicability of Mode Coupling Theory (MCT) to the glass-forming polymer polyisobutylene (PIB) has been explored by using fully atomistic molecular dynamics simulations. MCT predictions for the so-called asymptotic regime have been successfully tested on the dynamic structure factor and the self-correlation function of PIB main-chain carbons calculated from the simulated cell. The factorization theorem and the time-temperature superposition principle are satisfied. A consistent fitting procedure of the simulation data to the MCT asymptotic power-laws predicted for the α-relaxation regime has delivered the dynamic exponents of the theory—in particular, the exponent parameter λ—the critical non-ergodicity parameters, and the critical temperature T(c). The obtained values of λ and T(c) agree, within the uncertainties involved in both studies, with those deduced from depolarized light scattering experiments [A. Kisliuk et al., J. Polym. Sci. Part B: Polym. Phys. 38, 2785 (2000)]. Both λ and T(c)/T(g) values found for PIB are unusually large with respect to those commonly obtained in low-molecular-weight systems. Moreover, the high T(c)/T(g) value is compatible with a certain correlation of this parameter with the fragility in Angell's classification. Conversely, the value of λ is close to that reported for real polymers, simulated "realistic" polymers, and simple polymer models with intramolecular barriers. In the framework of the MCT, this finding should be the signature of two different mechanisms for the glass transition in real polymers: intermolecular packing and intramolecular barriers combined with chain connectivity.

  7. SU-E-CAMPUS-I-05: Internal Dosimetric Calculations for Several Imaging Radiopharmaceuticals in Preclinical Studies and Quantitative Assessment of the Mouse Size Impact On Them. Realistic Monte Carlo Simulations Based On the 4D-MOBY Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostou, T; Papadimitroulas, P; Kagadis, GC

    2014-06-15

Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied in two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. Organ mass influence was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed in GATE to determine dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models introduced a difference of 0.69% in their brain masses, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target-organs were calculated for each isotope. The source-organ was selected to be the whole mouse body. Differences in the S-factors were observed in the 6.0-30.0% range. Tables with all the calculations were developed as reference dosimetric data. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies. The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in total mass), and thus accurate definition of the organ mass is a crucial parameter for self-absorbed S-value calculation. Our goal is to extend the study for accurate estimations in small animal imaging, where it is known that there is a large variety in the anatomy of the organs.

  8. All Sky Cloud Coverage Monitoring for SONG-China Project

    NASA Astrophysics Data System (ADS)

    Tian, J. F.; Deng, L. C.; Yan, Z. Z.; Wang, K.; Wu, Y.

    2016-05-01

In order to monitor cloud distributions at Qinghai station, a site selected for the SONG (Stellar Observations Network Group)-China node, the design of the prototype all sky camera (ASC) deployed at Xinglong station is adopted. Both hardware and software improvements have been made in order to be more precise and to deliver quantitative measurements. An ARM (Advanced Reduced Instruction Set Computer Machine) MCU (Microcontroller Unit), instead of a PC, is used to control the upgraded version of the ASC, and a much higher reliability has been realized in the current scheme. Because weather conditions are constantly changing, independent of the positions of the Sun and Moon, it is difficult to choose proper exposure parameters using only the temporal information of the major light sources. Realistic exposure parameters for the ASC can instead be defined using a real-time sky brightness monitor that is also installed at the same site. The night sky brightness value is a very sensitive function of the cloud coverage and can be accurately measured by the sky quality monitor. We study the correlation between the exposure parameter and the night sky brightness value, and give the mathematical relation. The images of the all sky camera are inserted into the database directly. All sky quality images are archived in FITS format, which can be used for further analysis.

  9. Validity of strong lensing statistics for constraints on the galaxy evolution model

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akiko; Futamase, Toshifumi

    2008-02-01

We examine the usefulness of strong lensing statistics to constrain the evolution of the number density of lensing galaxies by adopting the values of the cosmological parameters determined by recent Wilkinson Microwave Anisotropy Probe observations. For this purpose, we employ the lens-redshift test proposed by Kochanek and constrain the parameters in two evolution models: a simple power-law model characterized by the power-law indexes νn and νv, and the evolution model by Mitchell et al. based on the cold dark matter structure formation scenario. We use the well-defined lens sample from the Sloan Digital Sky Survey (SDSS), which is similar in size to the samples used in previous studies. Furthermore, we adopt the velocity dispersion function of early-type galaxies based on SDSS DR1 and DR5. It turns out that the indexes of the power-law model are consistent with previous studies; thus our results indicate mild evolution in the number and velocity dispersion of early-type galaxies out to z = 1. However, we found that the values for p and q used by Mitchell et al. are inconsistent with the presently available observational data. A more complete sample is necessary to draw a more realistic determination of these parameters.

  10. Delineation and Analysis of Uncertainty of Contributing Areas to Wells at the Southbury Training School, Southbury, Connecticut

    USGS Publications Warehouse

    Starn, J. Jeffrey; Stone, Janet Radway; Mullaney, John R.

    2000-01-01

    Contributing areas to public-supply wells at the Southbury Training School in Southbury, Connecticut, were mapped by simulating ground-water flow in stratified glacial deposits in the lower Transylvania Brook watershed. The simulation used nonlinear regression methods and informational statistics to estimate parameters of a ground-water flow model using drawdown data from an aquifer test. The goodness of fit of the model and the uncertainty associated with model predictions were statistically measured. A watershed-scale model, depicting large-scale ground-water flow in the Transylvania Brook watershed, was used to estimate the distribution of groundwater recharge. Estimates of recharge from 10 small basins in the watershed differed on the basis of the drainage characteristics of each basin. Small basins having well-defined stream channels contributed less ground-water recharge than basins having no defined channels because potential ground-water recharge was carried away in the stream channel. Estimates of ground-water recharge were used in an aquifer-scale parameter-estimation model. Seven variations of the ground-water-flow system were posed, each representing the ground-water-flow system in slightly different but realistic ways. The model that most closely reproduced measured hydraulic heads and flows with realistic parameter values was selected as the most representative of the ground-water-flow system and was used to delineate boundaries of the contributing areas. The model fit revealed no systematic model error, which indicates that the model is likely to represent the major characteristics of the actual system. 
Parameter values estimated during the simulation are as follows: horizontal hydraulic conductivity of coarse-grained deposits, 154 feet per day; vertical hydraulic conductivity of coarse-grained deposits, 0.83 feet per day; horizontal hydraulic conductivity of fine-grained deposits, 29 feet per day; specific yield, 0.007; specific storage, 1.6E-05. Average annual recharge was estimated using the watershed-scale model with no parameter estimation and was determined to be 24 inches per year in the valley areas and 9 inches per year in the upland areas. The parameter estimates produced in the model are similar to expected values, with two exceptions. The estimated specific yield of the stratified glacial deposits is lower than expected, which could be caused by the layered nature of the deposits. The recharge estimate produced by the model was also lower, about 32 percent of the average annual rate. This could be caused by the timing of the aquifer test with respect to the annual cycle of ground-water recharge, and by some of the expected recharge going to parts of the flow system that were not simulated. The data used in the calibration were collected during an aquifer test from October 30 to November 4, 1996. The model fit was very good, as indicated by the correlation coefficient (0.999) between the weighted simulated values and weighted observed values. The model also reproduced the general rise in ground-water levels caused by ground-water recharge and the cyclic fluctuations caused by pumping prior to the aquifer test. Contributing areas were delineated using a particle-tracking procedure. Hypothetical particles of water were introduced at each model cell in the top layer and were tracked to determine whether or not they reached the pumped well. A deterministic contributing area was calculated using the calibrated model, and a probabilistic contributing area was calculated using a Monte Carlo approach along with the calibrated model. The Monte Carlo simulation was done, using the parameter variance/covariance matrix generated by the regression model, to estimate probabilities associated with the contributing area to the wells. The probabilities arise from uncertainty in the estimated parameter values, which in turn arises from the adequacy of the data available to comprehensively describe the ground-water-flow system.
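    The Monte Carlo idea in this record — propagate parameter uncertainty through the flow model and report a capture probability per location — can be sketched with the textbook capture-zone formula for a single well in uniform regional flow, whose asymptotic half-width is y_max = Q/(2BKi). The pumping rate, thickness, gradient, and the spread of ln K are assumptions for illustration; only the mean K echoes the study's 154 ft/d estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Analytic half-width of the capture zone of a single well in uniform
    # regional flow (far upstream): y_max = Q / (2 * B * K * i).
    Q, B, grad = 500.0, 30.0, 0.005   # pumping rate (ft^3/d), thickness (ft), gradient

    def captured(y, K):
        """Is a point far upstream, at transverse distance y, inside the zone?"""
        return abs(y) < Q / (2.0 * B * K * grad)

    # Monte Carlo over parameter uncertainty: ln K ~ Normal (assumed spread).
    K_draws = rng.lognormal(np.log(154.0), 0.3, 10_000)
    prob = np.mean([captured(10.0, K) for K in K_draws])
    print(f"probability the point lies in the contributing area: {prob:.2f}")
    ```

    Repeating this for every cell of a grid yields a probabilistic contributing-area map instead of a single deterministic outline.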

  11. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
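    The sampling-plus-regression workflow in this abstract (Latin hypercube samples of the inputs, least-squares fit of a response surface) can be sketched as follows; the two-input "simulator" and the parameter ranges are toy stand-ins, not the SCTEG thermodynamic model:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def latin_hypercube(n, d):
        """n samples in d dimensions: one point per equal-probability stratum."""
        u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
        for j in range(d):
            rng.shuffle(u[:, j])          # decorrelate the columns
        return u

    # Hypothetical stand-in for the thermodynamic model: efficiency vs. inputs.
    def simulate(flux, load):
        return 0.05 * flux * load * (1.0 - load)   # toy response

    X = latin_hypercube(200, 2)
    flux = 200.0 + 800.0 * X[:, 0]        # solar flux range (assumed)
    load = X[:, 1]                        # normalized electrical load

    y = simulate(flux, load)

    # Least-squares fit of a response surface to the sampled simulator runs.
    A = np.column_stack([np.ones_like(flux), flux, load,
                         flux * load, flux * load**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"R^2 = {r2:.3f}")
    ```

    The fitted coefficients then stand in for the simulator, and their relative sizes (elasticities) support the kind of sensitivity analysis the paper reports.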

  12. Model of a fluxtube with a twisted magnetic field in the stratified solar atmosphere

    NASA Astrophysics Data System (ADS)

    Sen, S.; Mangalam, A.

    2018-01-01

We build a single vertical straight magnetic fluxtube spanning the solar photosphere and the transition region which does not expand with height. We assume that the fluxtube containing twisted magnetic fields is in magnetohydrostatic equilibrium within a realistic stratified atmosphere subject to solar gravity. Incorporating specific forms of the current density and gas pressure in the Grad-Shafranov equation, we solve for the magnetic flux function and find it to be separable, with a Coulomb wave function in the radial direction, while the vertical part of the solution decreases exponentially. We employ improved fluxtube boundary conditions and take a realistic ambient external pressure from the photosphere to the transition region to derive a family of solutions for reasonable values of the fluxtube radius and magnetic field strength at the base of the axis, which are the free parameters in our model. We find that our model estimates are consistent with the magnetic field strength and the radii of magnetic bright points (MBPs) as estimated from observations. We also derive thermodynamic quantities inside the fluxtube.

  13. A portfolio-based approach to optimize proof-of-concept clinical trials.

    PubMed

    Mallinckrodt, Craig; Molenberghs, Geert; Persinger, Charles; Ruberg, Stephen; Sashegyi, Andreas; Lindborg, Stacy

    2012-01-01

Improving proof-of-concept (PoC) studies is a primary lever for improving drug development. Since drug development is often done by institutions that work on multiple drugs simultaneously, the present work focused on optimum choices for the rates of false positive (α) and false negative (β) results across a portfolio of PoC studies. Simple examples and a newly derived equation provided conceptual understanding of basic principles regarding optimum choices of α and β in PoC trials. In examples that incorporated realistic development costs and constraints, the levels of α and β that maximized the number of approved drugs and portfolio value varied by scenario. Optimum choices were sensitive to the probability that the drug was effective and to the proportion of total investment cost incurred prior to establishing PoC. Results of the present investigation agree with previous research in that it is important to assess optimum levels of α and β. However, the present work also highlights the need to consider cost structure using realistic input parameters relevant to the question of interest.
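    The trade-off described here can be sketched as a toy grid search over (α, β): tighter error rates inflate per-study cost (through the required sample size), which shrinks the number of PoC studies a fixed budget can fund. Every number below — budget, prior probability of efficacy, downstream values, effect size, and the cost model — is invented for illustration, not taken from the paper:

    ```python
    from statistics import NormalDist

    z = NormalDist().inv_cdf   # standard-normal quantile function

    # Hypothetical portfolio economics (illustrative assumptions only):
    BUDGET = 100.0   # total PoC budget, arbitrary units
    P_EFF = 0.2      # prior probability a candidate drug is effective
    V_TRUE = 30.0    # value of advancing a truly effective drug
    C_FALSE = 5.0    # cost of advancing an ineffective one
    DELTA = 0.5      # standardized effect size the trial must detect

    def portfolio_value(alpha, beta):
        # Per-study cost grows with sample size ~ ((z_a + z_b) / delta)^2
        cost = 1.0 + 0.05 * ((z(1 - alpha) + z(1 - beta)) / DELTA) ** 2
        n_studies = BUDGET / cost
        per_study = P_EFF * (1 - beta) * V_TRUE - (1 - P_EFF) * alpha * C_FALSE
        return n_studies * per_study

    # Grid search for the (alpha, beta) pair that maximizes portfolio value.
    grid = [i / 100 for i in range(1, 50)]
    best = max(((a, b) for a in grid for b in grid),
               key=lambda ab: portfolio_value(*ab))
    print("optimal (alpha, beta):", best)
    ```

    Under these assumptions the optimum generally differs from the conventional (0.05, 0.20), which is the paper's central point.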

  14. Editor’s message: Groundwater modeling fantasies - Part 1, adrift in the details

    USGS Publications Warehouse

    Voss, Clifford I.

    2011-01-01

    Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it. …Simplicity does not precede complexity, but follows it. (Epigrams in Programming by Alan Perlis, a computer scientist; Perlis 1982).A doctoral student creating a groundwater model of a regional aquifer put individual circular regions around data points where he had hydraulic head measurements, so that each region’s parameter values could be adjusted to get perfect fit with the measurement at that point. Nearly every measurement point had its own parameter-value region. After calibration, the student was satisfied because his model correctly reproduced all of his data. Did he really get the true field values of parameters in this manner? Did this approach result in a realistic, meaningful and useful groundwater model?—truly doubtful. Is this story a sign of a common style of educating hydrogeology students these days? Where this is the case, major changes are needed to add back ‘common-sense hydrogeology’ to the curriculum. Worse, this type of modeling approach has become an industry trend in application of groundwater models to real systems, encouraged by the advent of automatic model calibration software that has no problem providing numbers for as many parameter value estimates as desired. Just because a computer program can easily create such values does not mean that they are in any sense useful—but unquestioning practitioners are happy to follow such software developments, perhaps because of an implied promise that highly parameterized models, here referred to as ‘complex’, are somehow superior. This and other fallacies are implicit in groundwater modeling studies, most usually not acknowledged when presenting results. This two-part Editor’s Message deals with the state of groundwater modeling: part 1 (here) focuses on problems and part 2 (Voss 2011) on prospects.

  15. A comprehensive study of g-factors, elastic, structural and electronic properties of III-V semiconductors using hybrid-density functional theory

    NASA Astrophysics Data System (ADS)

    Bastos, Carlos M. O.; Sabino, Fernando P.; Sipahi, Guilherme M.; Da Silva, Juarez L. F.

    2018-02-01

Despite the large number of theoretical III-V semiconductor studies reported every year, our atomistic understanding is still limited. The limitations of the theoretical approaches in yielding accurate structural and electronic properties on an equal footing are due to the unphysical self-interaction problem, which mainly affects the band gap and spin-orbit splitting (SOC) in semiconductors and, in particular, III-V systems with similar magnitudes of the band gap and SOC. In this work, we report a consistent study of the structural and electronic properties of the III-V semiconductors using the screened hybrid density-functional theory framework, fitting the α parameter for 12 different III-V compounds, namely AlN, AlP, AlAs, AlSb, GaN, GaP, GaAs, GaSb, InN, InP, InAs, and InSb, to minimize the deviation between the theoretical and experimental values of the band gap and SOC. Structural relaxation effects were also included. Except for AlP, for which α = 0.127, we obtained α values that ranged from 0.209 to 0.343, which deviate by less than 0.1 from the universal value of 0.25. Our results for the lattice parameter and elastic constants indicate that the fitting of α does not affect those structural parameters when compared with the HSE06 functional, where α = 0.25. Our analysis of the band structure based on the k·p method shows that the effective masses are in agreement with the experimental values, which can be attributed to the simultaneous fitting of the band gap and SOC. We also estimate the values of g-factors, extracted directly from the band structure, which are close to experimental results and indicate that the obtained band structure produced a realistic set of k·p parameters.

  16. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial, and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as the season of the year, the day of the week, or the time of day. In deterministic radial distribution load flow studies the load is taken as constant, but load varies continually with a high degree of uncertainty, so there is a need to model a probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: (1) finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; (2) finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and (3) comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
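    The probabilistic procedure in this record — sample loads from their mean and standard deviation, rerun a deterministic radial load flow, and collect the voltage statistics — can be sketched on a tiny feeder. The 3-bus topology, impedances, load means, and power factor are assumed values, and the backward/forward sweep is the standard solver for radial feeders:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # A 3-bus radial feeder: slack -> bus 1 -> bus 2 (per-unit impedances, assumed).
    Z = np.array([0.02 + 0.04j, 0.03 + 0.06j])

    def radial_load_flow(S_load, v0=1.0, iters=20):
        """Backward/forward sweep for a chain feeder; S_load: complex bus powers."""
        V = np.full(2, complex(v0))
        for _ in range(iters):
            I_load = np.conj(S_load / V)                 # backward: load currents
            I_branch = np.array([I_load[0] + I_load[1], I_load[1]])
            V = np.array([v0 - Z[0] * I_branch[0],       # forward: voltage update
                          v0 - Z[0] * I_branch[0] - Z[1] * I_branch[1]])
        return V

    # Monte Carlo: sample P (and Q via a fixed power factor) from mean/std.
    P_mu, P_sd = np.array([0.8, 0.6]), np.array([0.08, 0.06])
    v_end = []
    for _ in range(2000):
        P = rng.normal(P_mu, P_sd)
        S = P + 0.4j * P                                 # constant Q/P assumed
        v_end.append(abs(radial_load_flow(S)[-1]))
    v_end = np.array(v_end)
    print(f"bus-2 voltage: mean {v_end.mean():.4f}, std {v_end.std():.4f}")
    ```

    The mean and spread of the end-of-feeder voltage are exactly the kind of probabilistic profile the paper compares against the deterministic solution.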

  17. Tunneling of heat: Beyond linear response regime

    NASA Astrophysics Data System (ADS)

    Walczak, Kamil; Saroka, David

    2018-02-01

We examine nanoscale processes of heat (energy) transfer carried by electrons tunneling via potential barriers and molecular interconnects between two heat reservoirs (thermal baths). For that purpose, we use Landauer-type formulas to calculate the thermal conductance and the quadratic correction to the heat flux flowing via quantum systems. As input, we implement analytical expressions for transmission functions related to simple potential barriers and atomic bridges. Our results are discussed with respect to the energy of the tunneling electrons, temperature, the presence of resonant states, and specific parameters characterizing the potential barriers as well as the heat carriers. The simplicity of the semi-analytical models developed here allows us to fit experimental data and extract crucial information about the values of the model parameters. Further investigations are expected for more realistic transmission functions, while time-dependent aspects of nanoscale heat transfer may be addressed by using the concept of wave packets scattered on potential barriers and point-like defects within regular (periodic) nanostructures.
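    A Landauer-type calculation of the kind named above can be sketched numerically: in linear response, the electronic thermal conductance of a junction is kappa(T) = (1/hT) ∫ T(E) (E − μ)² (−∂f/∂E) dE per spin channel. The rectangular-barrier height and width, and the WKB form of the transmission, are illustrative assumptions rather than the paper's specific models:

    ```python
    import numpy as np

    # Energies in eV throughout.
    kB = 8.617e-5      # Boltzmann constant, eV/K
    h = 4.136e-15      # Planck constant, eV*s

    def transmission(E, V0=0.3, width=1.0e-9):
        """WKB transmission through a rectangular barrier (free-electron mass)."""
        m, hbar, eV = 9.109e-31, 1.055e-34, 1.602e-19
        under = V0 - E                                    # barrier above E?
        k = np.sqrt(np.clip(2.0 * m * under * eV, 0.0, None)) / hbar
        return np.where(under > 0.0, np.exp(-2.0 * k * width), 1.0)

    def thermal_conductance(T_kelvin, mu=0.0):
        E = np.linspace(mu - 0.5, mu + 0.5, 4001)
        x = (E - mu) / (kB * T_kelvin)
        neg_dfdE = np.exp(x) / (kB * T_kelvin * (1.0 + np.exp(x)) ** 2)
        f = transmission(E) * (E - mu) ** 2 * neg_dfdE
        integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))  # trapezoid rule
        return integral / (h * T_kelvin)                  # eV / (s*K)

    kappa = thermal_conductance(300.0)
    print(f"kappa(300 K) = {kappa:.3e} eV/(s*K)")
    ```

    With T(E) = 1 the same integral reproduces the thermal conductance quantum π²kB²T/(3h), a useful sanity check on the numerics.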

  18. Design of landfill daily cells.

    PubMed

    Panagiotakopoulos, D; Dokas, I

    2001-08-01

The objective of this paper is to study the behaviour of the landfill soil-to-refuse (S/R) ratio when the size, geometry, and operating parameters of the daily cell vary over realistic ranges. A simple procedure is presented (1) for calculating the cell parameter values which minimise the S/R ratio and (2) for studying the sensitivity of this minimum S/R ratio to variations in cell size, final refuse density, working face length, lift height, and cover thickness. In countries where daily soil cover is required, savings in landfill space could be realised by following this procedure. The sensitivity of the minimum S/R to variations in cell dimensions decreases with cell size. Working face length and lift height affect the S/R ratio significantly. This procedure also offers the engineer an additional tool for comparing one large daily cell with two or more smaller ones, at two different working faces within the same landfill.
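    The minimization the paper describes can be sketched as a grid search over working face length and lift height for a box-shaped daily cell. The cover thickness, daily refuse volume, which faces receive cover, and the parameter ranges are all simplifying assumptions, not the paper's procedure:

    ```python
    import itertools

    COVER_T = 0.15         # daily cover thickness (m), assumed
    DAILY_REFUSE = 400.0   # compacted refuse volume per day (m^3), assumed

    def soil_to_refuse(face_len, lift_h):
        """S/R for a box-shaped daily cell under simplified geometry."""
        advance = DAILY_REFUSE / (face_len * lift_h)  # daily cell advance
        # Cover soil on the top surface, the working face, and one side (assumed)
        soil = COVER_T * (face_len * advance + face_len * lift_h
                          + lift_h * advance)
        return soil / DAILY_REFUSE

    lengths = [10 + 2 * i for i in range(21)]     # 10-50 m working face
    lifts = [1.5 + 0.25 * i for i in range(11)]   # 1.5-4.0 m lift height
    best = min(itertools.product(lengths, lifts),
               key=lambda p: soil_to_refuse(*p))
    print("min-S/R cell (face length, lift height):", best,
          round(soil_to_refuse(*best), 4))
    ```

    Sweeping one dimension while holding the others at the optimum reproduces the sensitivity study described in the abstract.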

  19. An Open Singularity-Free Cosmological Model with Inflation

    NASA Astrophysics Data System (ADS)

    Karaca, Koray; Bayin, Selçuk

In the light of recent observations which point to an open universe (Ω0 < 1), we construct an open singularity-free cosmological model by reconsidering a model originally constructed for a closed universe. Our model starts from a nonsingular state called prematter, governed by an inflationary equation of state P = (γp - 1)ρ, where γp (≈ 10^-3) is a small positive parameter representing the initial vacuum dominance of the universe. Unlike in the closed models, the universe cannot be initially static and hence starts with an initial expansion rate represented by the initial value of the Hubble constant H(0). Therefore, our model is a two-parameter universe model (γp, H(0)). Comparing the predictions of this model for the present properties of the universe with recent observational results, we argue that the model constructed in this work could be used as a realistic universe model.

  20. The interaction of radio frequency electromagnetic fields with atmospheric water droplets and applications to aircraft ice prevention. Thesis

    NASA Technical Reports Server (NTRS)

    Hansman, R. J., Jr.

    1982-01-01

The feasibility of computerized simulation of the physics of advanced microwave anti-icing systems, which preheat impinging supercooled water droplets prior to impact, was investigated. Theoretical and experimental work performed to create a physically realistic simulation is described. The behavior of the absorption cross section for melting ice particles was measured by a resonant cavity technique and found to agree with theoretical predictions. Values of the dielectric parameters of supercooled water were measured by a similar technique at lambda = 2.82 cm down to -17 C. The hydrodynamic behavior of accelerated water droplets was studied photographically in a wind tunnel. Droplets were found to initially deform as oblate spheroids and to eventually become unstable and break up in Bessel function modes for large values of acceleration or droplet size. This confirms the theory as to the maximum stable droplet size in the atmosphere. A computer code which predicts droplet trajectories in an arbitrary flow field was written and confirmed experimentally. The results were consolidated into a simulation to study the heating by electromagnetic fields of droplets impinging onto an object such as an airfoil. It was determined that there is sufficient time to heat droplets prior to impact for typical parameter values. Design curves for such a system are presented.
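    A droplet-trajectory code of the kind mentioned above can be sketched as a drag-driven time integration: Stokes drag relaxes the droplet velocity toward the local air velocity with time constant τ = ρw d²/(18 μ). The flow field, droplet diameter, and time step below are hypothetical choices for illustration, not values from the thesis:

    ```python
    import numpy as np

    RHO_W, MU_A = 1000.0, 1.8e-5   # water density (kg/m^3), air viscosity (Pa*s)

    def air_velocity(x):
        # Hypothetical stand-in flow: uniform stream decelerating near x = 0
        return np.array([-60.0 * (1.0 - np.exp(-abs(x[0]) / 0.05)), 0.0])

    def trajectory(d, x0, v0, dt=1e-4, steps=2000):
        """Forward-Euler integration of a droplet of diameter d (Stokes drag)."""
        x, v = np.array(x0, float), np.array(v0, float)
        tau = RHO_W * d ** 2 / (18.0 * MU_A)   # drag relaxation time
        for _ in range(steps):
            v = v + dt * (air_velocity(x) - v) / tau
            x = x + dt * v
        return x, v

    # A 20-micron droplet released 1 m upstream, moving with the free stream.
    x, v = trajectory(d=20e-6, x0=[1.0, 0.0], v0=[-60.0, 0.0])
    print("final position:", x, "final velocity:", v)
    ```

    The residence time of the droplet in a heated region of such a flow is what determines whether there is sufficient time to warm it before impact.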

  1. Transition to Turbulent Dynamo Saturation

    NASA Astrophysics Data System (ADS)

    Seshasayanan, Kannabiran; Gallet, Basile; Alexakis, Alexandros

    2017-11-01

While the saturated magnetic energy is independent of viscosity in dynamo experiments, it remains viscosity dependent in state-of-the-art 3D direct numerical simulations (DNS). Extrapolating such viscous scaling laws to realistic parameter values leads to an underestimation of the magnetic energy by several orders of magnitude. The origin of this discrepancy is that fully 3D DNS cannot reach low enough values of the magnetic Prandtl number Pm. To bypass this limitation and investigate dynamo saturation at very low Pm, we focus on the vicinity of the dynamo threshold in a rapidly rotating flow: the velocity field then depends on two spatial coordinates only, while the magnetic field consists of a single Fourier mode in the third direction. We perform numerical simulations of the resulting set of reduced equations for Pm down to 2×10^-5. This parameter regime is currently out of reach to fully 3D DNS. We show that the magnetic energy transitions from a high-Pm viscous scaling regime to a low-Pm turbulent scaling regime, the latter being independent of viscosity. The transition to the turbulent saturation regime occurs at a low value of the magnetic Prandtl number, Pm ≃ 10^-3, which explains why it has been overlooked by numerical studies so far.

  2. Transition to Turbulent Dynamo Saturation.

    PubMed

    Seshasayanan, Kannabiran; Gallet, Basile; Alexakis, Alexandros

    2017-11-17

    While the saturated magnetic energy is independent of viscosity in dynamo experiments, it remains viscosity dependent in state-of-the-art 3D direct numerical simulations (DNS). Extrapolating such viscous scaling laws to realistic parameter values leads to an underestimation of the magnetic energy by several orders of magnitude. The origin of this discrepancy is that fully 3D DNS cannot reach low enough values of the magnetic Prandtl number Pm. To bypass this limitation and investigate dynamo saturation at very low Pm, we focus on the vicinity of the dynamo threshold in a rapidly rotating flow: the velocity field then depends on two spatial coordinates only, while the magnetic field consists of a single Fourier mode in the third direction. We perform numerical simulations of the resulting set of reduced equations for Pm down to 2×10^{-5}. This parameter regime is currently out of reach of fully 3D DNS. We show that the magnetic energy transitions from a high-Pm viscous scaling regime to a low-Pm turbulent scaling regime, the latter being independent of viscosity. The transition to the turbulent saturation regime occurs at a low value of the magnetic Prandtl number, Pm≃10^{-3}, which explains why it has been overlooked by numerical studies so far.

  3. Modeling individual movement decisions of brown hare (Lepus europaeus) as a key concept for realistic spatial behavior and exposure: A population model for landscape-level risk assessment.

    PubMed

    Kleinmann, Joachim U; Wang, Magnus

    2017-09-01

    Spatial behavior is of crucial importance for the risk assessment of pesticides and for the assessment of effects of agricultural practice or multiple stressors, because it determines field use, exposure, and recovery. Recently, population models have increasingly been used to understand the mechanisms driving risk and recovery or to conduct landscape-level risk assessments. To include spatial behavior appropriately in population models for use in risk assessments, a new method, "probabilistic walk," was developed, which simulates the detailed daily movement of individuals by taking into account food resources, vegetation cover, and the presence of conspecifics. At each movement step, animals decide where to move next based on probabilities determined from this information. The model was parameterized to simulate populations of brown hares (Lepus europaeus). A detailed validation of the model demonstrated that it can realistically reproduce various natural patterns of brown hare ecology and behavior. Simulated proportions of time animals spent in fields (PT values) were also comparable to field observations. It is shown that these important parameters for the risk assessment may, however, vary in different landscapes. The results demonstrate the value of using population models to reduce uncertainties in risk assessment and to better understand which factors determine risk in a landscape context. Environ Toxicol Chem 2017;36:2299-2307. © 2017 SETAC.
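    The per-step decision rule described in this record amounts to a weighted random choice over candidate cells. The sketch below illustrates that idea only; the scoring weights and cell attributes are hypothetical, not the calibrated hare model.

```python
import random

def step(neighbours, w_food=1.0, w_cover=0.5, w_conspecific=-0.8):
    """One 'probabilistic walk' move: score each neighbouring cell by its
    food, cover and conspecific presence, turn the scores into selection
    probabilities, and draw the next cell. Weights are illustrative."""
    weights = []
    for cell in neighbours:
        score = (w_food * cell["food"]
                 + w_cover * cell["cover"]
                 + w_conspecific * cell["conspecifics"])
        weights.append(max(score, 0.0) + 1e-9)  # keep every option possible
    return random.choices(neighbours, weights=weights, k=1)[0]
```

    Repeating such steps over a landscape grid yields the daily movement tracks from which quantities like PT values can be accumulated.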

  4. Two-Capacitor Problem: A More Realistic View.

    ERIC Educational Resources Information Center

    Powell, R. A.

    1979-01-01

    Discusses the two-capacitor problem by considering the self-inductance of the circuit used and by determining how well the usual series RC circuit approximates the two-capacitor problem when realistic values of L, C, and R are chosen. (GA)

  5. Parameter Balancing in Kinetic Models of Cell Metabolism†

    PubMed Central

    2010-01-01

    Kinetic modeling of metabolic pathways has become a major field of systems biology. It combines structural information about metabolic pathways with quantitative enzymatic rate laws. Some of the kinetic constants needed for a model could be collected from the ever-growing literature and public web resources, but they are often incomplete, incompatible, or simply not available. We address this lack of information by parameter balancing, a method to complete given sets of kinetic constants. Based on Bayesian parameter estimation, it exploits the thermodynamic dependencies among different biochemical quantities to guess realistic model parameters from available kinetic data. Our algorithm accounts for varying measurement conditions in the input data (pH value and temperature). It can process kinetic constants and state-dependent quantities such as metabolite concentrations or chemical potentials, and uses prior distributions and data augmentation to keep the estimated quantities within plausible ranges. An online service and free software for parameter balancing with models provided in SBML format (Systems Biology Markup Language) are accessible at www.semanticsbml.org. We demonstrate its practical use with a small model of the phosphofructokinase reaction and discuss its possible applications and limitations. In the future, parameter balancing could become an important routine step in the kinetic modeling of large metabolic networks. PMID:21038890
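    The core of such a balancing step is a Gaussian posterior over basic (log-scale) quantities x, with measured quantities related to them linearly, y ≈ Qx. The snippet below is a generic Bayesian linear-model sketch of that idea; the matrix, priors, and function name are illustrative, not the published algorithm.

```python
import numpy as np

def balance(Q, y, y_var, prior_mean, prior_var):
    """Posterior mean/covariance for basic quantities x given noisy
    measurements y ~ N(Q x, diag(y_var)) and a Gaussian prior.
    A generic sketch of the balancing idea, not the SBML tool itself."""
    P = np.diag(1.0 / prior_var)   # prior precision
    R = np.diag(1.0 / y_var)       # measurement precision
    post_cov = np.linalg.inv(P + Q.T @ R @ Q)
    post_mean = post_cov @ (P @ prior_mean + Q.T @ R @ y)
    return post_mean, post_cov
```

    The thermodynamic dependencies enter through Q: each measured constant is a fixed linear combination of the basic quantities, so balancing all of them jointly keeps the completed set self-consistent.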

  6. Managing geological uncertainty in CO2-EOR reservoir assessments

    NASA Astrophysics Data System (ADS)

    Welkenhuysen, Kris; Piessens, Kris

    2014-05-01

    Recently the European Parliament has agreed that an atlas of the storage potential for CO2 is of high importance for a successful commercial introduction of CCS (CO2 capture and geological storage) technology in Europe. CO2-enhanced oil recovery (CO2-EOR) is often proposed as a promising business case for CCS, and likely has a high potential in the North Sea region. Traditional economic assessments for CO2-EOR largely neglect the geological reality of reservoir uncertainties because these are difficult to introduce realistically into such calculations. There is indeed a gap between the outcome of a reservoir simulation and the input values for e.g. cost-benefit evaluations, especially where it concerns uncertainty. The approach outlined here is to turn the procedure around, and to start from which geological data is typically (or minimally) requested for an economic assessment. Thereafter it is evaluated how this data can realistically be provided by geologists and reservoir engineers. For the storage of CO2 these parameters are total and yearly CO2 injection capacity, and containment, or the potential for leakage. Specifically for the EOR operation, two additional parameters can be defined: the EOR ratio, or the ratio of recovered oil over injected CO2, and the CO2 recycling ratio, the fraction of injected CO2 that is reproduced after breakthrough at the production well. A critical but typically estimated parameter for CO2-EOR projects is the EOR ratio, taken in this brief outline as an example. The EOR ratio depends mainly on local geology (e.g. injection per well), field design (e.g. number of wells), and time. Costs related to engineering can be estimated fairly well, within some uncertainty range. The problem is usually to reliably estimate the geological parameters that define the EOR ratio. Reliable data is only available from (onshore) CO2-EOR projects in the US. Published studies for the North Sea generally refer to these data in a simplified form, without uncertainty ranges, and are therefore not suited for cost-benefit analysis. They likely yield overly optimistic results because onshore configurations are cheaper and differ from offshore ones. We propose to translate the detailed US data to the North Sea, retaining their uncertainty ranges. In a first step, a general cost correction can be applied to account for costs specific to the EU and the offshore setting. In a second step site-specific data, including laboratory tests and reservoir modelling, are used to further adapt the EOR ratio values taking into account all available geological reservoir-specific knowledge. And lastly, an evaluation of the field configuration will have an influence on both the cost and local geology dimensions, because e.g. horizontal drilling is needed (cost) to improve injectivity (geology). As such, a dataset of the EOR field is obtained which contains all aspects and their uncertainty ranges. With these, a geologically realistic basis is obtained for further cost-benefit analysis of a specific field, where the uncertainties are accounted for using a stochastic evaluation. Such ad-hoc evaluation of geological parameters will provide a better assessment of the CO2-EOR potential of the North Sea oil fields.
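    The "stochastic evaluation" this record calls for can be sketched as a Monte Carlo pass that carries the EOR-ratio uncertainty range through to an economic outcome. Every number below (prices, injected mass, EOR-ratio range) is a placeholder assumption, not North Sea field data.

```python
import random

def stochastic_eor_npv(n=10_000, seed=42):
    """Monte Carlo sketch of a cost-benefit evaluation under an uncertain
    EOR ratio. Returns the median net outcome over n samples.
    All economic and geological figures are illustrative placeholders."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        eor_ratio = rng.triangular(0.1, 0.5, 0.3)  # bbl oil per t CO2 (assumed range)
        co2_injected = 1_000_000                   # t CO2 over project life (assumed)
        oil_price, co2_cost = 80.0, 20.0           # $/bbl and $/t CO2 (assumed)
        revenue = eor_ratio * co2_injected * oil_price
        cost = co2_injected * co2_cost
        results.append(revenue - cost)
    results.sort()
    return results[len(results) // 2]              # median outcome
```

    Replacing the triangular range with field-specific distributions from reservoir modelling is exactly the adaptation step the record proposes.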

  7. Unveiling hidden properties of young star clusters: differential reddening, star-formation spread, and binary fraction

    NASA Astrophysics Data System (ADS)

    Bonatto, C.; Lima, E. F.; Bica, E.

    2012-04-01

    Context. Usually, important parameters of young, low-mass star clusters are very difficult to obtain by means of photometry, especially when differential reddening and/or binaries occur in large amounts. Aims: We present a semi-analytical approach (ASAmin) that, when applied to the Hess diagram of a young star cluster, is able to retrieve the values of mass, age, star-formation spread, distance modulus, foreground and differential reddening, and binary fraction. Methods: The global optimisation method known as adaptive simulated annealing (ASA) is used to minimise the residuals between the observed and simulated Hess diagrams of a star cluster. The simulations are realistic and take the most relevant parameters of young clusters into account. Important features of the simulations are a normal (Gaussian) differential reddening distribution, a time-decreasing star-formation rate, the unresolved binaries, and the smearing effect produced by photometric uncertainties on Hess diagrams. Free parameters are cluster mass, age, distance modulus, star-formation spread, foreground and differential reddening, and binary fraction. Results: Tests with model clusters built with parameters spanning a broad range of values show that ASAmin retrieves the input values with high precision for cluster mass, distance modulus, and foreground reddening, but with somewhat lower precision for the remaining parameters. Given the statistical nature of the simulations, several runs should be performed to obtain significant convergence patterns. Specifically, we find that the retrieved (absolute minimum) parameters converge to mean values with a low dispersion as the Hess residuals decrease. When applied to actual young clusters, the retrieved parameters follow convergence patterns similar to the models. We show how the stochasticity associated with the early phases may affect the results, especially in low-mass clusters. This effect can be minimised by averaging out several twin clusters in the simulated Hess diagrams. Conclusions: Even for low-mass star clusters, ASAmin is sensitive to the values of cluster mass, age, distance modulus, star-formation spread, foreground and differential reddening, and to a lesser degree, binary fraction. Compared with simpler approaches, including binaries, a decaying star-formation rate, and normally distributed differential reddening in the simulations appears to yield more constrained parameters, especially the mass, age, and distance from the Sun. A robust determination of cluster parameters may have a positive impact on many fields. For instance, age, mass, and binary fraction are important for establishing the dynamical state of a cluster or for deriving a more precise star-formation rate in the Galaxy.

  8. Double β-decay nuclear matrix elements for the A=48 and A=58 systems

    NASA Astrophysics Data System (ADS)

    Skouras, L. D.; Vergados, J. D.

    1983-11-01

    The nuclear matrix elements entering the double β decays of the 48Ca-48Ti and 58Ni-58Fe systems have been calculated using a realistic two nucleon interaction and realistic shell model spaces. Effective transition operators corresponding to a variety of gauge theory models have been considered. The stability of such matrix elements against variations of the nuclear parameters is examined. Appropriate lepton violating parameters are extracted from the A=48 data and predictions are made for the lifetimes of the positron decays of the A=58 system. Keywords: radioactivity; double β decay; gauge theories; lepton nonconservation; neutrino mass; shell model calculations.

  9. A scheme for synchronizing clocks connected by a packet communication network

    NASA Astrophysics Data System (ADS)

    dos Santos, R. V.; Monteiro, L. H. A.

    2012-07-01

    Consider a communication system in which a transmitter equipment sends fixed-size packets of data at a uniform rate to a receiver equipment. Consider also that these equipments are connected by a packet-switched network, which introduces a random delay to each packet. Here we propose an adaptive clock recovery scheme capable of synchronizing the frequencies and the phases of these devices, within specified limits of precision. This scheme for achieving frequency and phase synchronization is based on measurements of the packet arrival times at the receiver, which are used to control the dynamics of a digital phase-locked loop. The scheme's performance is evaluated via numerical simulations performed using realistic parameter values.
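    A digital phase-locked loop driven by packet arrival times can be sketched as a second-order (proportional-integral) tracking loop. The structure and gains below are illustrative assumptions, not the published scheme.

```python
def clock_recovery(arrivals, period_guess, kp=0.1, ki=0.01):
    """Second-order digital PLL sketch: estimate the sender's packet
    period from observed arrival times. Gains kp/ki are illustrative."""
    period = period_guess        # current estimate of the packet period
    phase = arrivals[0]          # predicted arrival time of the next packet
    for t in arrivals[1:]:
        err = t - phase          # timing error against the actual arrival
        period += ki * err       # slow frequency (period) correction
        phase += period + kp * err  # advance prediction, nudge the phase
    return period
```

    With random network delay added to each arrival, the same loop averages out the jitter; the gains then trade tracking speed against residual wander.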

  10. Design and Validation of 3D Printed Complex Bone Models with Internal Anatomic Fidelity for Surgical Training and Rehearsal.

    PubMed

    Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan

    2014-01-01

    Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.

  11. Scale-Invariant Forms of Conservation Equations in Reactive Fields and a Modified Hydro-Thermo-Diffusive Theory of Laminar Flames

    NASA Technical Reports Server (NTRS)

    Sohrab, Siavash H.; Piltch, Nancy (Technical Monitor)

    2000-01-01

    A scale-invariant model of statistical mechanics is applied to present invariant forms of mass, energy, linear, and angular momentum conservation equations in reactive fields. The resulting conservation equations at molecular-dynamic scale are solved by the method of large activation energy asymptotics to describe the hydro-thermo-diffusive structure of laminar premixed flames. The predicted temperature and velocity profiles are in agreement with the observations. Also, with realistic physico-chemical properties and chemical-kinetic parameters for a single-step overall combustion of stoichiometric methane-air premixed flame, the laminar flame propagation velocity of 42.1 cm/s is calculated in agreement with the experimental value.

  12. Meridionally propagating interannual-to-interdecadal variability in a linear ocean-atmosphere model

    NASA Technical Reports Server (NTRS)

    Mehta, Vikram M.

    1992-01-01

    Meridional oscillation modes in a global, primitive-equation coupled ocean-atmosphere model have been analyzed to determine whether the model contains meridionally propagating modes, such as surface-pressure perturbations with oscillation periods of years to decades. A two-layer global ocean model and a two-level global atmosphere model were formulated. For realistic parameter values and basic states, meridional modes oscillating at periods of several years to several decades are found to be present in the coupled ocean-atmosphere model; the oscillation periods, travel times, and meridional structures of surface pressure perturbations in one of the modes are found to be comparable to the corresponding characteristics of observed sea-level pressure perturbations.

  13. Analysis of Hydrogen Generation through Thermochemical Gasification of Coconut Shell Using Thermodynamic Equilibrium Model Considering Char and Tar

    PubMed Central

    Rupesh, Shanmughom; Muraleedharan, Chandrasekharan; Arun, Palatel

    2014-01-01

    This work investigates the potential of coconut shell for air-steam gasification using a thermodynamic equilibrium model. A thermodynamic equilibrium model considering tar and realistic char conversion was developed using MATLAB software to predict the product gas composition. After comparing it with experimental results, the prediction capability of the model is enhanced by multiplying equilibrium constants with suitable coefficients. The modified model is used to study the effect of key process parameters like temperature, steam to biomass ratio, and equivalence ratio on product gas yield, composition, and heating value of syngas along with gasification efficiency. For a steam to biomass ratio of unity, the maximum mole fraction of hydrogen in the product gas is found to be 36.14% with a lower heating value of 7.49 MJ/Nm³ at a gasification temperature of 1500 K and equivalence ratio of 0.15. PMID:27433487

  14. Analysis of Hydrogen Generation through Thermochemical Gasification of Coconut Shell Using Thermodynamic Equilibrium Model Considering Char and Tar.

    PubMed

    Rupesh, Shanmughom; Muraleedharan, Chandrasekharan; Arun, Palatel

    2014-01-01

    This work investigates the potential of coconut shell for air-steam gasification using a thermodynamic equilibrium model. A thermodynamic equilibrium model considering tar and realistic char conversion was developed using MATLAB software to predict the product gas composition. After comparing it with experimental results, the prediction capability of the model is enhanced by multiplying equilibrium constants with suitable coefficients. The modified model is used to study the effect of key process parameters like temperature, steam to biomass ratio, and equivalence ratio on product gas yield, composition, and heating value of syngas along with gasification efficiency. For a steam to biomass ratio of unity, the maximum mole fraction of hydrogen in the product gas is found to be 36.14% with a lower heating value of 7.49 MJ/Nm³ at a gasification temperature of 1500 K and equivalence ratio of 0.15.
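    The lower heating value quoted in this record is conventionally computed as a mole-fraction-weighted sum over the combustible species. The helper below uses approximate standard literature LHVs; it is a generic sketch, not the paper's MATLAB model.

```python
def syngas_lhv(mole_fractions):
    """Lower heating value of a product gas (MJ/Nm^3) as the mole-fraction
    weighted sum of component LHVs (approximate literature values).
    Inert species (N2, CO2, H2O) contribute zero."""
    component_lhv = {"H2": 10.79, "CO": 12.63, "CH4": 35.81}  # MJ/Nm^3
    return sum(x * component_lhv.get(species, 0.0)
               for species, x in mole_fractions.items())
```

    Given an equilibrium composition from the model, this weighted sum is the step that turns mole fractions into the reported MJ/Nm³ figure.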

  15. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    PubMed

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject-specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements, the strength curve was reasonably well recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
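    The non-uniqueness problem this record describes is easy to demonstrate: two quite different per-muscle parameter sets can sum to nearly identical strength curves. The curve shape and both parameter sets below are constructed for illustration and are not taken from the study.

```python
import math

def muscle_torque(angle, peak, opt_angle, width):
    """Gaussian torque-angle curve for a single muscle (illustrative form)."""
    return peak * math.exp(-((angle - opt_angle) / width) ** 2)

def total_strength(angle, muscles):
    """Summed isometric strength over all muscles at a given joint angle."""
    return sum(muscle_torque(angle, *m) for m in muscles)

# Two deliberately different (peak N*m, optimal angle deg, width deg) sets
# whose summed strength curves are nevertheless very similar:
set_a = [(120.0, 70.0, 40.0), (80.0, 100.0, 40.0)]
set_b = [(60.0, 65.0, 35.0), (140.0, 90.0, 45.0)]
```

    Because the summed curves differ by less than typical measurement noise while the individual muscle curves differ by a factor of two, an optimizer matching only the total strength cannot distinguish the two sets.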

  16. Spherically symmetric Einstein-aether perfect fluid models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coley, Alan A.; Latta, Joey; Leon, Genly

    We investigate spherically symmetric cosmological models in Einstein-aether theory with a tilted (non-comoving) perfect fluid source. We use a 1+3 frame formalism and adopt the comoving aether gauge to derive the evolution equations, which form a well-posed system of first order partial differential equations in two variables. We then introduce normalized variables. The formalism is particularly well-suited for numerical computations and the study of the qualitative properties of the models, which are also solutions of Horava gravity. We study the local stability of the equilibrium points of the resulting dynamical system corresponding to physically realistic inhomogeneous cosmological models and astrophysical objects with values for the parameters which are consistent with current constraints. In particular, we consider dust models in (β−) normalized variables and derive a reduced (closed) evolution system and we obtain the general evolution equations for the spatially homogeneous Kantowski-Sachs models using appropriate bounded normalized variables. We then analyse these models, with special emphasis on the future asymptotic behaviour for different values of the parameters. Finally, we investigate static models for a mixture of a (necessarily non-tilted) perfect fluid with a barotropic equation of state and a scalar field.

  17. Determination of uncertainties of PWR spent fuel radionuclide inventory based on real operational history data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fast, Ivan; Bosbach, Dirk; Aksyutina, Yuliya

    A prerequisite for the official approval of the safe final disposal of spent nuclear fuel (SNF) is a comprehensive specification and declaration of the nuclear inventory in SNF by the waste supplier. In the verification process both the values of the radionuclide (RN) activities and their uncertainties are required. Burn-up (BU) calculations based on typical and generic reactor operational parameters do not encompass any possible uncertainties observed in real reactor operations. At the same time, the details of the irradiation history are often not well known, which complicates the assessment of declared RN inventories. Here, we have compiled a set of burnup calculations accounting for the operational history of 339 published or anonymized real PWR fuel assemblies (FA). These histories were used as a basis for an 'SRP analysis', to provide information about the range of the values of the associated secondary reactor parameters (SRPs). Hence, we can calculate the realistic variation or spectrum of RN inventories. SCALE 6.1 has been employed for the burn-up calculations. The results have been validated using experimental data from the online databases SFCOMPO-1 and -2.

  18. Gene flow from domesticated species to wild relatives: migration load in a model of multivariate selection.

    PubMed

    Tufto, Jarle

    2010-01-01

    Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance by which immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.

  19. Blade Tip Rubbing Stress Prediction

    NASA Technical Reports Server (NTRS)

    Davis, Gary A.; Clough, Ray C.

    1991-01-01

    An analytical model was constructed to predict the magnitude of stresses produced by rubbing a turbine blade against its tip seal. This model used a linearized approach, after a parametric study found that the nonlinear effects were of insignificant magnitude. The important input parameters to the model were: the arc through which rubbing occurs, the turbine rotor speed, the normal force exerted on the blade, and the rubbing coefficient of friction. Since it is not possible to specify some of these parameters exactly, values were entered into the model that bracket the likely values. The form of the forcing function was another variable that was impossible to specify precisely, but a half-sine wave with a period equal to the duration of the rub was taken as a realistic assumption. The analytical model predicted resonances between harmonics of the forcing-function decomposition and known harmonics of the blade. Thus, it seemed probable that blade tip rubbing could be at least a contributor to the blade-cracking phenomenon. A full-scale, full-speed test was conducted on the space shuttle main engine high pressure fuel turbopump Whirligig tester at speeds between 28,000 and 33,000 RPM to confirm the analytical predictions.

  20. Comparison of Achievable Magnetic Fields with Superconducting and Cryogenic Permanent Magnet Undulators – A Comprehensive Study of Computed and Measured Values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moog, E. R.; Dejus, R. J.; Sasaki, S.

    2017-01-01

    Magnetic modeling was performed to estimate achievable magnetic field strengths of superconducting undulators (SCUs) and to compare them with those of cryogenically cooled permanent magnet undulators (CPMUs). Starting with vacuum (beam stay-clear) gaps of 4.0 and 6.0 mm, realistic allowances for beam chambers (in the SCU case) and beam liners (in the CPMU case) were added. (A 6.0-mm vacuum gap is planned for the upgraded APS.) The CPMU magnetic models consider both CPMUs that use NdFeB magnets at ~150 K and PrFeB magnets at 77 K. Parameters of the magnetic models are presented along with fitted coefficients of a Halbach-type expression for the field dependence on the gap-to-period ratio. Field strengths for SCUs are estimated using a scaling law for planar SCUs; an equation for that is given. The SCUs provide higher magnetic fields than the highest-field CPMUs (those using PrFeB at 77 K) for period lengths longer than ~14 mm for NbTi-based SCUs and ~10 mm for Nb3Sn-based SCUs. To show that the model calculations and scaling law results are realistic, they are compared to CPMUs and NbTi-based SCUs that have been built. Brightness tuning curves of CPMUs (PrFeB) and SCUs (NbTi) for the upgraded APS lattice are also provided for realistic period lengths.

  1. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

  2. Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.

    PubMed

    Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong

    2006-04-01

    This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis was used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring involving both real and rendered bronchoscope images was conducted.

  3. q-deformed Einstein's model to describe specific heat of solid

    NASA Astrophysics Data System (ADS)

    Guha, Atanu; Das, Prasanta Kumar

    2018-04-01

    Realistic phenomena can be described more appropriately using a generalized canonical ensemble, with proper parameter sets involved. We have generalized Einstein's theory for the specific heat of solids in Tsallis statistics, where temperature fluctuation is introduced into the theory via the fluctuation parameter q. At low temperature the Einstein curve of the specific heat in the nonextensive Tsallis scenario lies exactly on the experimental data points. Consequently this q-modified Einstein curve is found to overlap with the one predicted by Debye. Considering only the temperature fluctuation effect (even without considering the triggering of more than one vibrational mode), we found that the CV vs. T curve is as good as that obtained by considering the different modes of vibration suggested by Debye. Generalizing Einstein's theory in Tsallis statistics, we found that a unique value of the Einstein temperature θE, along with a temperature-dependent deformation parameter q(T), can well describe the specific heat of solids; i.e., the theory is equivalent to Debye's theory with a temperature-dependent θD.
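    The two building blocks of such a generalization are the standard Einstein specific-heat formula and the Tsallis q-exponential that replaces the Boltzmann factor. The snippet below implements both; substituting exp_q into the partition function is one common Tsallis construction, and the paper's exact expression may differ.

```python
import math

def c_v_einstein(T, theta_E):
    """Einstein specific heat in units of 3R (standard q=1 formula):
    C_V/3R = x^2 e^x / (e^x - 1)^2 with x = theta_E / T."""
    x = theta_E / T
    return x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

def exp_q(x, q):
    """Tsallis q-exponential [1 + (1-q)x]^(1/(1-q)); reduces to exp(x)
    as q -> 1. Cut off at zero where the base turns negative."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
```

    The q -> 1 limit recovers ordinary statistics, which is why a temperature-dependent q(T) can interpolate between Einstein-like and Debye-like behaviour.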

  4. Modeling the Hyperdistribution of Item Parameters To Improve the Accuracy of Recovery in Estimation Procedures.

    ERIC Educational Resources Information Center

    Matthews-Lopez, Joy L.; Hombo, Catherine M.

    The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…

  5. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.

  6. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons

    PubMed Central

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013

  7. Simulation of low clouds in the Southeast Pacific by the NCEP GFS: sensitivity to vertical mixing

    NASA Astrophysics Data System (ADS)

    Sun, R.; Moorthi, S.; Xiao, H.; Mechoso, C. R.

    2010-12-01

The NCEP Global Forecast System (GFS) model has an important systematic error shared by many other models: stratocumuli are missed over the subtropical eastern oceans. It is shown that this error can be alleviated in the GFS by introducing a consideration of the low-level inversion and making two modifications in the model's representation of vertical mixing. The modifications consist of (a) the elimination of background vertical diffusion above the inversion and (b) the incorporation of a stability parameter based on the cloud-top entrainment instability (CTEI) criterion, which limits the strength of shallow convective mixing across the inversion. A control simulation and three experiments are performed in order to examine both the individual and combined effects of the modifications on the generation of the stratocumulus clouds. Individually, both modifications result in enhanced cloudiness in the Southeast Pacific (SEP) region, although the cloudiness is still low compared to the ISCCP climatology. If the modifications are applied together, however, the total cloudiness produced in the southeast Pacific has realistic values. This nonlinearity arises as the effects of both modifications reinforce each other in reducing the leakage of moisture across the inversion. More moisture trapped below the inversion than in the control run without modifications leads to an increase in cloud amount and cloud-top radiative cooling. A positive feedback, in which cloud-top radiative cooling enhances turbulent mixing in the planetary boundary layer, then establishes and maintains the stratocumulus cover. Although the amount of total cloudiness obtained with both modifications has realistic values, the relative contributions of low, middle, and high layers tend to differ from the observations. 
These results demonstrate that it is possible to simulate realistic marine boundary clouds in large-scale models by implementing direct and physically based improvements in the model parameterizations.
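The CTEI-based stability parameter mentioned above can be illustrated with the commonly quoted entrainment-instability criterion Δθe < k·(L/cp)·Δqt across the inversion, with k ≈ 0.23; the exact form and threshold used in the GFS modification are assumptions here:

```python
L_V = 2.5e6   # latent heat of vaporization, J/kg
C_P = 1004.0  # specific heat of dry air, J/(kg K)

def ctei_unstable(d_theta_e, d_q_t, k=0.23):
    """Cloud-top entrainment instability check. Inputs are the jumps
    (above minus below the inversion) in equivalent potential
    temperature [K] and total water mixing ratio [kg/kg]; k ~ 0.23 is
    the commonly quoted empirical threshold."""
    return d_theta_e < k * (L_V / C_P) * d_q_t
```

A scheme like the one described would limit shallow convective mixing across the inversion unless a condition of this kind is met.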

  8. Simulation of low clouds in the Southeast Pacific by the NCEP GFS: sensitivity to vertical mixing

    NASA Astrophysics Data System (ADS)

    Sun, R.; Moorthi, S.; Xiao, H.; Mechoso, C.-R.

    2010-08-01

The NCEP Global Forecast System (GFS) model has an important systematic error shared by many other models: stratocumuli are missed over the subtropical eastern oceans. It is shown that this error can be alleviated in the GFS by introducing a consideration of the low-level inversion and making two modifications in the model's representation of vertical mixing. The modifications consist of (a) the elimination of background vertical diffusion above the inversion and (b) the incorporation of a stability parameter based on the cloud-top entrainment instability (CTEI) criterion, which limits the strength of shallow convective mixing across the inversion. A control simulation and three experiments are performed in order to examine both the individual and combined effects of the modifications on the generation of the stratocumulus clouds. Individually, both modifications result in enhanced cloudiness in the Southeast Pacific (SEP) region, although the cloudiness is still low compared to the ISCCP climatology. If the modifications are applied together, however, the total cloudiness produced in the southeast Pacific has realistic values. This nonlinearity arises as the effects of both modifications reinforce each other in reducing the leakage of moisture across the inversion. More moisture trapped below the inversion than in the control run without modifications leads to an increase in cloud amount and cloud-top radiative cooling. A positive feedback, in which cloud-top radiative cooling enhances turbulent mixing in the planetary boundary layer, then establishes and maintains the stratocumulus cover. Although the amount of total cloudiness obtained with both modifications has realistic values, the relative contributions of low, middle, and high layers tend to differ from the observations. 
These results demonstrate that it is possible to simulate realistic marine boundary clouds in large-scale models by implementing direct and physically based improvements in the model parameterizations.

  9. Effect of minimal length uncertainty on the mass-radius relation of white dwarfs

    NASA Astrophysics Data System (ADS)

    Mathew, Arun; Nandy, Malay K.

    2018-06-01

The generalized uncertainty relation, which carries the imprint of quantum gravity, introduces a minimal length scale into the description of space-time. It effectively changes the invariant measure of the phase space through a factor (1 + βp²)^(−3), so that the equation of state for an electron gas undergoes a significant modification from the ideal case. It has been shown in the literature (Rashidi 2016) that the ideal Chandrasekhar limit ceases to exist when the modified equation of state due to the generalized uncertainty is taken into account. To assess the situation in a more complete fashion, we analyze in detail the mass-radius relation of Newtonian white dwarfs whose hydrostatic equilibria are governed by the equation of state of the degenerate relativistic electron gas subjected to the generalized uncertainty principle. As the constraint of minimal length imposes a severe restriction on the availability of high momentum states, it is speculated that the central Fermi momentum cannot have values arbitrarily higher than pmax ∼ β^(−1/2). When this restriction is imposed, it is found that the system approaches limiting mass values higher than the Chandrasekhar mass upon decreasing the parameter β to a value given by a legitimate upper bound. Instead, when the more realistic restriction due to inverse β-decay is considered, it is found that the mass and radius approach the values 1.4518 M⊙ and 601.18 km near the legitimate upper bound for the parameter β.

  10. Modeling the Normal and Neoplastic Cell Cycle with 'Realistic Boolean Genetic Networks': Their Application for Understanding Carcinogenesis and Assessing Therapeutic Strategies

    NASA Technical Reports Server (NTRS)

    Szallasi, Zoltan; Liang, Shoudan

    2000-01-01

    In this paper we show how Boolean genetic networks could be used to address complex problems in cancer biology. First, we describe a general strategy to generate Boolean genetic networks that incorporate all relevant biochemical and physiological parameters and cover all of their regulatory interactions in a deterministic manner. Second, we introduce 'realistic Boolean genetic networks' that produce time series measurements very similar to those detected in actual biological systems. Third, we outline a series of essential questions related to cancer biology and cancer therapy that could be addressed by the use of 'realistic Boolean genetic network' modeling.
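A deterministic synchronous Boolean network of the kind described can be sketched in a few lines; the three-gene wiring below is a hypothetical toy, not one of the paper's networks:

```python
def step(state, rules):
    """One synchronous update: each gene's next Boolean value is a
    deterministic function of the whole current state."""
    return {gene: rule(state) for gene, rule in rules.items()}

def find_attractor(state, rules, max_steps=1000):
    """Iterate until a previously seen state recurs; return the cycle."""
    seen = {}
    trajectory = []
    for t in range(max_steps):
        key = tuple(sorted(state.items()))
        if key in seen:
            return trajectory[seen[key]:]  # the attractor cycle
        seen[key] = t
        trajectory.append(state)
        state = step(state, rules)
    raise RuntimeError("no attractor within max_steps")

# A toy three-gene negative-feedback wiring (purely illustrative):
rules = {
    "a": lambda s: not s["c"],
    "b": lambda s: not s["a"],
    "c": lambda s: not s["b"],
}
```

From the initial state {a: True, b: False, c: False}, this toy loop settles into a period-6 attractor cycle.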

  11. Effects of damping on mode shapes, volume 1

    NASA Technical Reports Server (NTRS)

    Gates, R. M.

    1977-01-01

Displacement, velocity, and acceleration admittances were calculated for a realistic NASTRAN structural model of the space shuttle for three conditions: liftoff, maximum dynamic pressure, and end of solid rocket booster burn. The realistic model of the orbiter, external tank, and solid rocket motors included the representation of structural joint transmissibilities by finite stiffness and damping elements. Methods were developed to incorporate structural joints and their damping characteristics into a finite element model of the space shuttle, to determine the point damping parameters required to produce realistic damping in the primary modes, and to calculate the effect of distributed damping on structural resonances through the calculation of admittances.

  12. Precision Modeling Of Targets Using The VALUE Computer Program

    NASA Astrophysics Data System (ADS)

    Hoffman, George A.; Patton, Ronald; Akerman, Alexander

    1989-08-01

The 1976-vintage LASERX computer code has been augmented to produce realistic electro-optical images of targets. Capabilities lacking in LASERX but recently incorporated into its VALUE successor include: shadows cast onto the ground; shadows cast onto parts of the target; see-through transparencies (e.g., canopies); apparent images due both to atmospheric scattering and turbulence; and surfaces characterized by multiple bi-directional reflectance functions. VALUE provides not only realistic target modeling through its precise and comprehensive representation of all target attributes; it is also very user friendly. Specifically, runs are set up through screen-prompting menus in a sequence of queries that is logical to the user. VALUE also incorporates the Optical Encounter (OPEC) software developed by Tricor Systems, Inc., Elgin, IL.

  13. Realist Evaluation: An Emerging Theory in Support of Practice.

    ERIC Educational Resources Information Center

    Henry, Gary T., Ed.; Julnes, George, Ed.; Mark, Melvin M., Ed.

    1998-01-01

    The five articles of this sourcebook, organized around the five-component framework for evaluation described by W. Shadish, T. Cook, and L. Leviton (1991), present a new theory of realist evaluation that captures the sensemaking contributions of postpositivism and the sensitivity to values from the constructivist traditions. (SLD)

  14. The temporal patterns of disease severity and prevalence in schistosomiasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciddio, Manuela; Gatto, Marino, E-mail: marino.gatto@polimi.it; Casagrandi, Renato, E-mail: renato.casagrandi@polimi.it

    2015-03-15

Schistosomiasis is one of the most widespread public health problems in the world. In this work, we introduce an eco-epidemiological model for its transmission and dynamics with the purpose of explaining both intra- and inter-annual fluctuations of disease severity and prevalence. The model takes the form of a system of nonlinear differential equations that incorporate biological complexity associated with schistosome's life cycle, including a prepatent period in snails (i.e., the time between initial infection and onset of infectiousness). Nonlinear analysis is used to explore the parametric conditions that produce different temporal patterns (stationary, endemic, periodic, and chaotic). For the time-invariant model, we identify a transcritical and a Hopf bifurcation in the space of the human and snail infection parameters. The first corresponds to the occurrence of an endemic equilibrium, while the latter marks the transition to interannual periodic oscillations. We then investigate a more realistic time-varying model in which fertility of the intermediate host population is assumed to seasonally vary. We show that seasonality can give rise to a cascade of period-doubling bifurcations leading to chaos for larger, though realistic, values of the amplitude of the seasonal variation of fertility.

  15. Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions

    PubMed Central

    Kanjirathumkal, Cibile K.; Mohammed, Sameer S.

    2014-01-01

Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using the Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely, the coefficient of variation and the amount of fade, are derived using the computed moments. These metrics quantify the possible variations in the channel gain and signal to noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound channel models for scattering density variations of the environment, experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels, having a distinct distribution in each hop, are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the accuracy of the approaches. PMID:24701175
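For independent hops, the Mellin-transform result reduces to the fact that moments of a product multiply: per hop, E[Xⁿ] = λⁿ·Γ(1 + n/k) for a Weibull(k, λ) gain. A sketch of the moment-based metrics (illustrative; the paper's exact estimator constructions are not reproduced here):

```python
import math

def weibull_moment(n, k, lam=1.0):
    """E[X^n] for a Weibull(shape k, scale lam) channel gain."""
    return lam ** n * math.gamma(1.0 + n / k)

def cascaded_moment(n, shapes, scales=None):
    """nth moment of the product of independent Weibull gains:
    moments of independent factors multiply."""
    scales = scales if scales is not None else [1.0] * len(shapes)
    m = 1.0
    for k, lam in zip(shapes, scales):
        m *= weibull_moment(n, k, lam)
    return m

def coefficient_of_variation(shapes, scales=None):
    """CV = std/mean of the compound channel gain."""
    m1 = cascaded_moment(1, shapes, scales)
    m2 = cascaded_moment(2, shapes, scales)
    return math.sqrt(m2 / m1 ** 2 - 1.0)
```

Each extra hop multiplies the moments, so the CV (and hence the severity of fading) grows with hop count, which is what makes such metrics useful when choosing the optimal number of hops.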

  16. Realistic absorption coefficient of each individual film in a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Cesaria, M.; Caricato, A. P.; Martino, M.

    2015-02-01

A spectrophotometric strategy, termed the multilayer-method (ML-method), is presented and discussed to realistically calculate the absorption coefficient of each individual layer embedded in multilayer architectures without reverse engineering, numerical refinements or assumptions about layer homogeneity and thickness. The strategy extends in a non-straightforward way a consolidated route, already published by the authors and here termed the basic-method, able to accurately characterize an absorbing film covering a transparent substrate. The ML-method inherently accounts for the non-measurable contribution of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and the deposition approach and parameters used, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests are performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness-tuning and an underlying supporting-film. Results are found consistent with information obtained by standard (optical and structural) analysis, the basic-method and band gap values reported in the literature. The discussed example-applications demonstrate the ability of the ML-method to overcome the drawbacks commonly limiting an accurate description of multilayer architectures.

  17. Broadband light sources based on InAs/InGaAs metamorphic quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seravalli, L.; Trevisi, G.; Frigeri, P.

We propose a design for a semiconductor structure emitting broadband light in the infrared, based on InAs quantum dots (QDs) embedded into a metamorphic step-graded In_xGa_{1-x}As buffer. We developed a model to calculate the metamorphic QD energy levels based on the realistic QD parameters and on the strain-dependent material properties; we validated the results of simulations by comparison with the experimental values. On this basis, we designed a p-i-n heterostructure with a graded index profile toward the realization of an electrically pumped guided wave device. This has been done by adding layers where QDs are embedded in In_xAl_yGa_{1-x-y}As layers, to obtain a symmetric structure from a band profile point of view. To assess the room temperature electro-luminescence emission spectrum under realistic electrical injection conditions, we performed device-level simulations based on a coupled drift-diffusion and QD rate equation model. On the basis of the device simulation results, we conclude that the present proposal is a viable option to realize broadband light-emitting devices.

  18. The temporal patterns of disease severity and prevalence in schistosomiasis

    NASA Astrophysics Data System (ADS)

    Ciddio, Manuela; Mari, Lorenzo; Gatto, Marino; Rinaldo, Andrea; Casagrandi, Renato

    2015-03-01

    Schistosomiasis is one of the most widespread public health problems in the world. In this work, we introduce an eco-epidemiological model for its transmission and dynamics with the purpose of explaining both intra- and inter-annual fluctuations of disease severity and prevalence. The model takes the form of a system of nonlinear differential equations that incorporate biological complexity associated with schistosome's life cycle, including a prepatent period in snails (i.e., the time between initial infection and onset of infectiousness). Nonlinear analysis is used to explore the parametric conditions that produce different temporal patterns (stationary, endemic, periodic, and chaotic). For the time-invariant model, we identify a transcritical and a Hopf bifurcation in the space of the human and snail infection parameters. The first corresponds to the occurrence of an endemic equilibrium, while the latter marks the transition to interannual periodic oscillations. We then investigate a more realistic time-varying model in which fertility of the intermediate host population is assumed to seasonally vary. We show that seasonality can give rise to a cascade of period-doubling bifurcations leading to chaos for larger, though realistic, values of the amplitude of the seasonal variation of fertility.

  19. Computational Difficulties in the Identification and Optimization of Control Systems.

    DTIC Science & Technology

    1980-01-01

As more realistic models for resource management are developed, the need for efficient computational techniques for parameter... optimization (optimal control) in "state" models... This research was supported in part by the National Science Foundation under grant NSF-MCS 79-05774

  20. Constraining cosmologies with fundamental constants - I. Quintessence and K-essence

    NASA Astrophysics Data System (ADS)

    Thompson, Rodger I.; Martins, C. J. A. P.; Vielzeuf, P. E.

    2013-01-01

Many cosmological models invoke rolling scalar fields to account for the observed acceleration of the expansion of the Universe. These theories generally include a potential V(φ) which is a function of the scalar field φ. Although V(φ) can be represented by a very diverse set of functions, recent work has shown that under some conditions, such as the slow-roll conditions, the equation of state parameter w is either independent of the form of V(φ) or part of a family of solutions with only a few parameters. In realistic models of this type the scalar field couples to other sectors of the model, leading to possibly observable changes in the fundamental constants such as the fine structure constant α and the proton to electron mass ratio μ. Although the current situation on a possible variance of α is complicated, there are firm limitations on the variance of μ in the early universe. This paper explores the limits this puts on the validity of various cosmologies that invoke rolling scalar fields. We find that the limit on the variation of μ puts significant constraints on the product of a cosmological parameter w + 1 and a new physics parameter ζμ², the coupling constant between μ and the rolling scalar field. Even when the cosmologies are restricted to very slow roll conditions, either the value of ζμ must be at the lower end of or less than its expected values, or the value of w + 1 must be restricted to values vanishingly close to 0. This implies that either the rolling scalar field is very weakly coupled to the electromagnetic field (small ζμ), very weakly coupled to gravity ((w + 1) ≈ 0), or both. These results stress that adherence to the measured invariance in μ is a very significant test of the validity of any proposed cosmology and any new physics it requires. The limits on the variation of μ also produce a significant tension with the reported changes in the value of α.

  1. The expression of the skeletal muscle force-length relationship in vivo: a simulation study.

    PubMed

    Winter, Samantha L; Challis, John H

    2010-02-21

    The force-length relationship is one of the most important mechanical characteristics of skeletal muscle in humans and animals. For a physiologically realistic joint range of motion and therefore range of muscle fibre lengths only part of the force-length curve may be used in vivo, i.e. only a section of the force-length curve is expressed. A generalised model of a mono-articular muscle-tendon complex was used to examine the effect of various muscle architecture parameters on the expressed section of the force-length relationship for a 90 degrees joint range of motion. The parameters investigated were: the ratio of tendon resting length to muscle fibre optimum length (L(TR):L(F.OPT)) (varied from 0.5 to 11.5), the ratio of muscle fibre optimum length to average moment arm (L(F.OPT):r) (varied from 0.5 to 5), the normalised tendon strain at maximum isometric force (c) (varied from 0 to 0.08), the muscle fibre pennation angle (theta) (varied from 0 degrees to 45 degrees) and the joint angle at which the optimum muscle fibre length occurred (phi). The range of values chosen for each parameter was based on values reported in the literature for five human mono-articular muscles with different functional roles. The ratios L(TR):L(F.OPT) and L(F.OPT):r were important in determining the amount of variability in the expressed section of the force-length relationship. The modelled muscle operated over only one limb at intermediate values of these two ratios (L(TR):L(F.OPT)=5; L(F.OPT):r=3), whether this was the ascending or descending limb was determined by the precise values of the other parameters. It was concluded that inter-individual variability in the expressed section of the force-length relationship is possible, particularly for muscles with intermediate values of L(TR):L(F.OPT) and L(F.OPT):r such as the brachialis and vastus lateralis. 
Understanding the potential for inter-individual variability in the expressed section is important when using muscle models to simulate movement.
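The role of the L(F.OPT):r ratio can be seen with a back-of-envelope calculation: assuming a rigid tendon and a constant moment arm (so fibre excursion ≈ r·Δφ, ignoring c, pennation, and L(TR):L(F.OPT)), the fraction of the force-length curve traversed over a joint range Δφ is Δφ/(L(F.OPT):r):

```python
import math

def expressed_fraction(lopt_over_r, joint_range_deg=90.0):
    """Fibre excursion over the joint range as a fraction of optimum
    fibre length, under the rigid-tendon/constant-moment-arm
    simplification stated above."""
    return math.radians(joint_range_deg) / lopt_over_r
```

For the intermediate ratio L(F.OPT):r = 3 this gives about 0.52 of optimum fibre length, i.e. roughly one limb's worth of the force-length curve, consistent with the model operating over only one limb at intermediate ratios.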

  2. The Electrostatic Instability for Realistic Pair Distributions in Blazar/EBL Cascades

    NASA Astrophysics Data System (ADS)

    Vafin, S.; Rafighi, I.; Pohl, M.; Niemiec, J.

    2018-04-01

This work revisits the electrostatic instability for blazar-induced pair beams propagating through the intergalactic medium (IGM) using linear analysis and PIC simulations. We study the impact of the realistic distribution function of pairs resulting from the interaction of high-energy gamma-rays with the extragalactic background light. We present analytical and numerical calculations of the linear growth rate of the instability for the arbitrary orientation of wave vectors. Our results explicitly demonstrate that the finite angular spread of the beam dramatically affects the growth rate of the waves, leading to the fastest growth for wave vectors quasi-parallel to the beam direction and a growth rate at oblique directions that is only a factor of 2–4 smaller compared to the maximum. To study the nonlinear beam relaxation, we performed PIC simulations that take into account a realistic wide-energy distribution of beam particles. The parameters of the simulated beam-plasma system provide an adequate physical picture that can be extrapolated to realistic blazar-induced pairs. In our simulations, the beam loses only 1% of its energy, and we analytically estimate that the beam would lose its total energy over about 100 simulation times. An analytical scaling is then used to extrapolate the parameters of realistic blazar-induced pair beams. We find that they can dissipate their energy slightly faster by the electrostatic instability than through inverse-Compton scattering. The uncertainties arising from, e.g., details of the primary gamma-ray spectrum are too large to make firm statements for individual blazars, and an analysis based on their specific properties is required.

  3. Electric currents induced by twisted light in Quantum Rings.

    PubMed

    Quinteiro, G F; Berakdar, J

    2009-10-26

We theoretically investigate the generation of electric currents in quantum rings resulting from optical excitation with twisted light. Our model describes the kinetics of electrons in a two-band model of a semiconductor-based mesoscopic quantum ring coupled to light carrying orbital angular momentum (twisted light). We find the analytical solution, which exhibits a "circular" photon-drag effect and an induced magnetization, suggesting that this system is the circular analog of a bulk semiconductor excited by plane waves. For realistic values of the electric field and material parameters, the computed electric current can be as large as a microampere (μA); from an applied perspective, this opens new possibilities for the optical control of magnetization in semiconductors.

  4. Stress intensity factors in a reinforced thick-walled cylinder

    NASA Technical Reports Server (NTRS)

    Tang, R.; Erdogan, F.

    1984-01-01

    An elastic thick-walled cylinder containing a radial crack is considered. It is assumed that the cylinder is reinforced by an elastic membrane on its inner surface. The model is intended to simulate pressure vessels with cladding. The formulation of the problem is reduced to a singular integral equation. Various special cases including that of a crack terminating at the cylinder-reinforcement interface are investigated and numerical examples are given. Results indicate that in the case of the crack touching the interface the crack surface displacement derivative is finite and consequently the stress state around the corresponding crack tip is bounded; and generally, for realistic values of the stiffness parameter, the effect of the reinforcement is not very significant.

  5. Modeling the electrophoretic separation of short biological molecules in nanofluidic devices

    NASA Astrophysics Data System (ADS)

    Fayad, Ghassan; Hadjiconstantinou, Nicolas

    2010-11-01

    Via comparisons with Brownian Dynamics simulations of the worm-like-chain and rigid-rod models, and the experimental results of Fu et al. [Phys. Rev. Lett., 97, 018103 (2006)], we demonstrate that, for the purposes of low-to-medium field electrophoretic separation in periodic nanofilter arrays, sufficiently short biomolecules can be modeled as point particles, with their orientational degrees of freedom accounted for using partition coefficients. This observation is used in the present work to build a particularly simple and efficient Brownian Dynamics simulation method. Particular attention is paid to the model's ability to quantitatively capture experimental results using realistic values of all physical parameters. A variance-reduction method is developed for efficiently simulating arbitrarily small forcing electric fields.
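The point-particle picture described above amounts to an overdamped Euler-Maruyama update with an electrophoretic drift term; a minimal sketch with hypothetical parameter values (the partition-coefficient bookkeeping at the nanofilter interfaces is omitted):

```python
import math
import random

def bd_step(x, drift, D, dt, rng):
    """One Euler-Maruyama step of overdamped Brownian dynamics:
    x -> x + drift*dt + sqrt(2 D dt) * N(0, 1), where drift = mu*E is
    the electrophoretic drift velocity and D the diffusivity."""
    return x + drift * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)

def simulate(n_particles, n_steps, drift, D, dt, seed=0):
    """Mean displacement of an ensemble of point particles."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [bd_step(x, drift, D, dt, rng) for x in xs]
    return sum(xs) / n_particles
```

The ensemble mean follows drift·t, while the diffusive spread sets the band broadening that separation devices trade off against.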

  6. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
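One of the energy-maneuverability parameters such a code maps in the Mach-altitude plane is specific excess power; a one-line illustration (NSEG's exact formulation is not given in the record):

```python
def specific_excess_power(thrust, drag, velocity, weight):
    """P_s = (T - D) * V / W, in m/s when forces and weight are in
    newtons and velocity in m/s; positive P_s means the aircraft can
    climb or accelerate at that flight condition."""
    return (thrust - drag) * velocity / weight
```

Contours of P_s = 0 in the Mach-altitude plane bound the sustainable flight envelope, which is what makes this quantity useful for envelope mapping.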

  7. Realistic versus Schematic Interactive Visualizations for Learning Surveying Practices: A Comparative Study

    ERIC Educational Resources Information Center

    Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen

    2014-01-01

    Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams or more realistic, such as virtual reality. The study reported in the paper…

  8. Cognitive Workload and Psychophysiological Parameters During Multitask Activity in Helicopter Pilots.

    PubMed

    Gaetan, Sophie; Dousset, Erick; Marqueste, Tanguy; Bringoux, Lionel; Bourdin, Christophe; Vercher, Jean-Louis; Besson, Patricia

    2015-12-01

    Helicopter pilots are involved in a complex multitask activity that implies overuse of cognitive resources, which may result in impairment of the piloting task or in decision-making failure. Studies usually investigate this phenomenon in well-controlled but ecologically limited situations by focusing on the correlation between physiological values and either cognitive workload or emotional state. This study aimed at jointly exploring the workload induced by a realistic simulated helicopter flight mission, emotional state, and physiological markers. The experiment took place in a full-flight dynamic helicopter simulator. Six participants flew two missions. Workload level, skin conductance, RMS-EMG, and emotional state were assessed. Joint analysis of psychological and physiological parameters associated with workload estimation revealed particular dynamics in each of three profiles. 1) Expert pilots showed a slight increase of measured physiological parameters associated with the increase in difficulty level; workload estimates never reached the highest level, and the emotional state for this profile referred only to positive emotions with low emotional intensity. 2) Non-Expert pilots showed increasing physiological values as the perceived workload increased; however, their emotional state referred to either positive or negative emotions, with greater variability in emotional intensity. 3) Intermediate pilots were similar to Expert pilots regarding emotional states and similar to Non-Expert pilots regarding physiological patterns. Overall, the high interindividual variability of these results highlights the complex link between physiological and psychological parameters and workload, and questions whether physiology alone could predict a pilot's inability to make the right decision at the right time.

  9. 3-D transient hydraulic tomography in unconfined aquifers with fast drainage response

    NASA Astrophysics Data System (ADS)

    Cardiff, M.; Barrash, W.

    2011-12-01

    We investigate, through numerical experiments, the viability of three-dimensional transient hydraulic tomography (3DTHT) for identifying the spatial distribution of groundwater flow parameters (primarily, hydraulic conductivity K) in permeable, unconfined aquifers. To invert the large amount of transient data collected from 3DTHT surveys, we utilize an iterative geostatistical inversion strategy in which outer iterations progressively increase the number of data points fitted and inner iterations solve the quasi-linear geostatistical formulas of Kitanidis. In order to base our numerical experiments around realistic scenarios, we utilize pumping rates, geometries, and test lengths similar to those attainable during 3DTHT field campaigns performed at the Boise Hydrogeophysical Research Site (BHRS). We also utilize hydrologic parameters that are similar to those observed at the BHRS and in other unconsolidated, unconfined fluvial aquifers. In addition to estimating K, we test the ability of 3DTHT to estimate both average storage values (specific storage Ss and specific yield Sy) as well as spatial variability in storage coefficients. The effects of model conceptualization errors during unconfined 3DTHT are investigated including: (1) assuming constant storage coefficients during inversion and (2) assuming stationary geostatistical parameter variability. Overall, our findings indicate that estimation of K is slightly degraded if storage parameters must be jointly estimated, but that this effect is quite small compared with the degradation of estimates due to violation of "structural" geostatistical assumptions. Practically, we find for our scenarios that assuming constant storage values during inversion does not appear to have a significant effect on K estimates or uncertainty bounds.

  10. Multidrug efflux transporter activity in sea urchin embryos:Does localization provide a diffusive advantage?

    NASA Astrophysics Data System (ADS)

    Song, Xianfeng; Setayeshgar, Sima; Cole, Bryan; Hamdoun, Amro; Epel, David

    2008-03-01

    Experiments have shown upregulation of multidrug efflux transporter activity approximately 30 min after fertilization in the sea urchin embryo [1]. These ATP-hydrolyzing transporter proteins pump moderately hydrophobic molecules out of the cell and represent the cell's first line of defense against exogenous toxins. It has also been shown that transporters are moved in vesicles along microfilaments and localized to the tips of microvilli prior to activation. We have constructed a geometrically realistic model of the embryo, including microvilli, to explore the functional role of this localization in the efficient elimination of toxins from the standpoint of diffusion. We compute diffusion of toxins in extracellular, membrane and intracellular spaces coupled with transporter activity, using experimentally derived values for physical parameters. For transporters uniformly distributed along microvilli and for tip-localized transporters, we compare the regions of parameter space where each distribution provides a diffusive advantage, and comment on the physically expected conditions. [1] A. M. Hamdoun, G. N. Cherr, T. A. Roepke and D. Epel, Developmental Biology 276 452 (2004).

  11. Omnibus experiment: CPT and CP violation with sterile neutrinos

    NASA Astrophysics Data System (ADS)

    Loo, K. K.; Novikov, Yu N.; Smirnov, M. V.; Trzaska, W. H.; Wurm, M.

    2017-09-01

    The verification of the sterile neutrino hypothesis and, if confirmed, the determination of the relevant oscillation parameters is one of the goals of neutrino physics in the near future. We propose to search for sterile neutrinos with a high-statistics measurement utilizing radioactive sources and an oscillometric approach with a large liquid scintillator detector such as LENA, JUNO, or RENO-50. Our calculations indicate that such an experiment is realistic and could be performed in parallel to the main research plan for JUNO, LENA, or RENO-50. Assuming as the starting point the values of the oscillation parameters indicated by the current global fit (in the 3 + 1 scenario) and requiring at least 5σ confidence level, we estimate that we would be able to detect differences in the mass squared differences Δm41^2 of electron neutrinos and electron antineutrinos of the order of 1% or larger. That would allow probing of CPT symmetry with neutrinos with unprecedented accuracy.
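    The oscillometric sensitivity discussed above rests on the standard two-flavor short-baseline survival probability P = 1 - sin^2(2θ14) sin^2(1.27 Δm41^2 L/E), with Δm41^2 in eV^2, L in metres and E in MeV. A minimal sketch follows; the parameter values are illustrative assumptions, not the global-fit numbers from the paper:

```python
import numpy as np

def survival_probability(L_m, E_MeV, dm2_eV2, sin2_2theta):
    """Two-flavor electron (anti)neutrino survival probability.

    L_m: baseline in metres, E_MeV: neutrino energy in MeV,
    dm2_eV2: mass-squared difference in eV^2.
    """
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Illustrative 3+1 parameters (assumed for demonstration)
dm2, s22 = 1.7, 0.1
L = np.linspace(1.0, 20.0, 200)               # baselines inside the detector (m)
P = survival_probability(L, 0.75, dm2, s22)   # hypothetical 0.75 MeV source
print(P.min(), P.max())
```

    Scanning the oscillation pattern of P along the baseline inside a large detector is what allows a percent-level comparison of Δm41^2 between neutrinos and antineutrinos.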

  12. Open-source LCA tool for estimating greenhouse gas emissions from crude oil production using field characteristics.

    PubMed

    El-Houjeiri, Hassan M; Brandt, Adam R; Duffy, James E

    2013-06-04

    Existing transportation fuel cycle emissions models are either general and calculate nonspecific values of greenhouse gas (GHG) emissions from crude oil production, or are not available for public review and auditing. We have developed the Oil Production Greenhouse Gas Emissions Estimator (OPGEE) to provide open-source, transparent, rigorous GHG assessments for use in scientific assessment, regulatory processes, and analysis of GHG mitigation options by producers. OPGEE uses petroleum engineering fundamentals to model emissions from oil and gas production operations. We introduce OPGEE and explain the methods and assumptions used in its construction. We run OPGEE on a small set of fictional oil fields and explore model sensitivity to selected input parameters. Results show that upstream emissions from petroleum production operations can vary from 3 gCO2/MJ to over 30 gCO2/MJ using realistic ranges of input parameters. Significant drivers of emissions variation are steam injection rates, water handling requirements, and rates of flaring of associated gas.

  13. Early Thermal History of Rhea: The Role of Serpentinization and Liquid State Convection

    NASA Astrophysics Data System (ADS)

    Czechowski, Leszek; Łosiak, Anna

    2016-12-01

    The early thermal history of Rhea is investigated, focusing on the role of the following model parameters: the time of the beginning of accretion, tini; the duration of accretion, tac; the viscosity of ice close to the melting point, η0; the activation energy in the formula for viscosity, E; the thermal conductivity of the silicate component, ksil; the ammonia content, XNH3; and the energy of serpentinization, cserp. We found that tini and tac are crucial for the evolution. All other parameters are also important, but no dramatic differences are found for realistic values. The process of differentiation is also investigated. It is found that liquid state convection could delay the differentiation by hundreds of My. The results are confronted with observational data from the Cassini spacecraft. It is possible that differentiation is fully completed but the density of the formed core is close to the mean density. If this interpretation is correct, then Rhea could have accreted any time before 3-4 My after the formation of CAIs.

  14. Analysis of a decision model in the context of equilibrium pricing and order book pricing

    NASA Astrophysics Data System (ADS)

    Wagner, D. C.; Schmitt, T. A.; Schäfer, R.; Guhr, T.; Wolf, D. E.

    2014-12-01

    An agent-based model for financial markets has to incorporate two aspects: decision making and price formation. We introduce a simple decision model and consider its implications in two different pricing schemes. First, we study its parameter dependence within a supply-demand balance setting, finding realistic behavior in a wide parameter range. Second, we embed our decision model in an order book setting. Here, we observe interesting features which are not present in the equilibrium pricing scheme. In particular, we find a nontrivial behavior of the order book volumes which is reminiscent of a trend-switching phenomenon. Thus, the decision-making model alone does not realistically reproduce the trading and the stylized facts; the order book mechanism is crucial.

  15. The Multiscale Robin Coupled Method for flows in porous media

    NASA Astrophysics Data System (ADS)

    Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.

    2018-02-01

    A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established, and its connection with the method of Douglas et al. (1993) [12] is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.

  16. RECONSTRUCTING THE SOLAR WIND FROM ITS EARLY HISTORY TO CURRENT EPOCH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Airapetian, Vladimir S.; Usmanov, Arcadi V., E-mail: vladimir.airapetian@nasa.gov, E-mail: avusmanov@gmail.com

    Stellar winds from active solar-type stars can play a crucial role in the removal of stellar angular momentum and the erosion of planetary atmospheres. However, major wind properties except for mass-loss rates cannot be directly derived from observations. We employed a three-dimensional magnetohydrodynamic Alfvén wave driven solar wind model, ALF3D, to reconstruct the solar wind parameters including the mass-loss rate, terminal velocity, and wind temperature at 0.7, 2, and 4.65 Gyr. Our model treats the wind thermal electrons, protons, and pickup protons as separate fluids and incorporates turbulence transport, eddy viscosity, turbulent resistivity, and turbulent heating to properly describe proton and electron temperatures of the solar wind. To study the evolution of the solar wind, we specified three input model parameters, the plasma density, Alfvén wave amplitude, and the strength of the dipole magnetic field at the wind base, for each of three solar wind evolution models that are consistent with observational constraints. Our model results show that at 0.7 Gyr, in the Sun's early history, the paleo solar wind at 1 AU was twice as fast, ∼50 times denser, and twice as hot as today's wind. The theoretical calculations of mass-loss rate appear to be in agreement with the empirically derived values for stars of various ages. These results can provide realistic constraints for wind dynamic pressures on magnetospheres of (exo)planets around the young Sun and other active stars, which is crucial in realistic assessment of the Joule heating of their ionospheres and the corresponding effects of atmospheric erosion.

  17. Radiation transmission data for radionuclides and materials relevant to brachytherapy facility shielding.

    PubMed

    Papagiannis, P; Baltas, D; Granero, D; Pérez-Calatayud, J; Gimeno, J; Ballester, F; Venselaar, J L M

    2008-11-01

    To address the limited availability of radiation shielding data for brachytherapy as well as some disparity in existing data, Monte Carlo simulation was used to generate radiation transmission data for 60Co, 137Cs, 198Au, 192Ir, 169Yb, 170Tm, 131Cs, 125I, and 103Pd photons through concrete, stainless steel, and lead, as well as lead glass and baryte concrete. Results accounting for the oblique incidence of radiation on the barrier, spectral variation with barrier thickness, and broad-beam conditions in a realistic geometry are compared to corresponding data in the literature in terms of the half value layer (HVL) and tenth value layer (TVL) indices. It is also shown that radiation shielding calculations using HVL or TVL values can overestimate or underestimate the barrier thickness required to achieve a certain reduction in radiation transmission. This questions the use of HVL or TVL indices instead of the actual transmission data. Therefore, a three-parameter model is fitted to the results of this work to facilitate accurate and simple radiation shielding calculations.
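    Three-parameter transmission fits in shielding work are commonly of the Archer form B(x) = [(1 + β/α) e^(αγx) - β/α]^(-1/γ). A minimal sketch with hypothetical parameters (not the fitted values from this paper), also inverting the model numerically for HVL- and TVL-like thicknesses:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Broad-beam transmission through thickness x (Archer-type model).
    alpha, beta have units of inverse thickness; gamma is dimensionless."""
    return ((1.0 + beta / alpha) * math.exp(alpha * gamma * x)
            - beta / alpha) ** (-1.0 / gamma)

def thickness_for_transmission(target, alpha, beta, gamma, hi=100.0):
    """Invert the monotone model by bisection: find x with B(x) = target."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if archer_transmission(mid, alpha, beta, gamma) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical fit parameters for one source/barrier combination
a, b, g = 2.0, 3.0, 0.8
hvl = thickness_for_transmission(0.5, a, b, g)    # half-value thickness
tvl = thickness_for_transmission(0.1, a, b, g)    # tenth-value thickness
print(hvl, tvl)
```

    Because the fitted curve is not a pure exponential, successive half-value thicknesses grow with depth, which is exactly why the abstract cautions against shielding calculations based on fixed HVL or TVL indices.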

  18. Polarization of the Radiation Reflected and Transmitted by the Earth's Atmosphere.

    PubMed

    Plass, G N; Kattawar, G W

    1970-05-01

    The polarization of the reflected and transmitted radiation is calculated for a realistic model of the earth's atmosphere at five wavelengths ranging from 0.27 μm to 1.67 μm. The single scattering matrix is calculated from the Mie theory for an aerosol size distribution appropriate for our atmosphere. The solar photons are followed through multiple collisions with the aerosols and the Rayleigh scattering centers in the atmosphere by a Monte Carlo method. The aerosol number density as well as the ratio of aerosol to Rayleigh scattering varies with height. The proportion of aerosol to Rayleigh scattering is adjusted for each wavelength; ozone absorption is included where appropriate. The polarization is presented as a function of the zenith and azimuthal angle for six values of the earth's albedo, two values of the solar zenith angle, and four values of the total aerosol concentration. In general the polarization decreases as the wavelength increases and as the total aerosol concentration increases (because of the increasing importance of aerosol scattering). In most situations the polarization is much more sensitive than the radiance to changes in the parameters which specify the atmosphere.

  19. [Investigation into the formation of proportions of "realistic thinking vs magical thinking" in paranoid schizophrenia].

    PubMed

    Jarosz, M; Pankiewicz, Z; Buczek, I; Poprawska, I; Rojek, J; Zaborowski, A

    1993-01-01

    Magical thinking among healthy persons, and magical and symbolic thinking in schizophrenia, are discussed. The investigation covered 100 paranoid schizophrenics, who were also examined with respect to the formation of the remaining three proportions. Scales for both realistic thinking and magical thinking were used. An ability to think realistically was preserved, to a varying degree, in all patients, with 50% of those examined having shown an explicit or very explicit ability to follow realistic thinking. These findings deviate from a simplified cognitive model within the discussed range. It was further confirmed that realistic thinking may coexist with magical thinking and, in some cases, concerns the same events. This type of thought-content disorder is referred to as magical-realistic interpenetration. The results, and particularly the high coefficient of negative correlation within the scales of the examined proportions, confirm the correctness of the assumption that the investigated modes of thinking form an antithetic bipolarity of proportions, aggregating antithetic values and therefore also being complementary.

  20. Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.

    PubMed

    Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G

    2016-07-26

    The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings that are shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and to assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true value of model parameters by considering these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context, as the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we called "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods.
Our code for performing inference in these ion channel models is publicly available.

  1. Assessment of spatial distribution of porosity and aquifer geohydraulic parameters in parts of the Tertiary - Quaternary hydrogeoresource of south-eastern Nigeria

    NASA Astrophysics Data System (ADS)

    George, N. J.; Akpan, A. E.; Akpan, F. S.

    2017-12-01

    An integrated study combining information from extensive surface resistivity surveys in three Local Government Areas of Akwa Ibom State, Nigeria, with hydrogeological data from water boreholes was used to economically estimate porosity and hydraulic conductivity in parts of the clastic Tertiary - Quaternary sediments of the Niger Delta region. These parameters are conventionally estimated from laboratory analysis of core samples and from pumping test data generated from boreholes; such analysis is not only costly and time consuming but also limited in areal coverage. The chosen technique combines surface resistivity data, core samples, and pumping test data to estimate porosity and aquifer hydraulic parameters (transverse resistance, hydraulic conductivity, and transmissivity). In correlating the two sets of results, porosity and hydraulic conductivity were observed to be elevated near the riverbanks. Empirical models based on the Archie, Waxman-Smits, and Kozeny-Carman-Bear relations were employed to characterise the formation parameters, with good fits. The effect of surface conduction caused by clay, usually ignored in Archie's model, was estimated to be 2.58 × 10^-5 Siemens; this conductance can be used as a corrective factor to the conduction values obtained from Archie's equation. Interpretation aids such as graphs, mathematical models, and maps geared towards realistic conclusions and the interrelationship between porosity and the other aquifer parameters were generated. The hydraulic conductivity estimated from the Waxman-Smits model was approximately 9.6 × 10^-5 m/s everywhere, indicating no pronounced change in the quality of the saturating fluid or in the geological formations that serve as aquifers, even though the porosities varied.
The deciphered parameter relations can be used to estimate geohydraulic parameters in other locations with little or no borehole data.
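    As a rough sketch of the empirical chain described above, porosity can be backed out of Archie's law F = ρ_bulk/ρ_water = a·φ^(-m) and fed into a Kozeny-Carman estimate of hydraulic conductivity. All numerical values below are illustrative assumptions, not the paper's field data:

```python
def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Invert Archie's law F = rho_bulk / rho_water = a * phi**(-m)
    for porosity phi (clean-sand case: surface conduction neglected)."""
    F = rho_bulk / rho_water
    return (a / F) ** (1.0 / m)

def kozeny_carman_K(phi, d50, rho=1000.0, g=9.81, mu=1.0e-3):
    """Kozeny-Carman hydraulic conductivity (m/s) for mean grain size
    d50 (m); 180 is the usual packed-bed constant."""
    return (rho * g / mu) * (d50 ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2

# Hypothetical resistivities (ohm-m) and grain size for a sandy aquifer
phi = archie_porosity(rho_bulk=120.0, rho_water=20.0, a=1.0, m=1.8)
K = kozeny_carman_K(phi, d50=2.0e-4)
print(phi, K)
```

    A clay correction of the kind the abstract quantifies would add the estimated surface conductance to the Archie-derived conductivity before inverting for porosity.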

  2. TraPy-MAC: Traffic Priority Aware Medium Access Control Protocol for Wireless Body Area Network.

    PubMed

    Ullah, Fasee; Abdullah, Abdul Hanan; Kaiwartya, Omprakash; Cao, Yue

    2017-06-01

    Recently, Wireless Body Area Network (WBAN) has witnessed significant attention in research and product development due to the growing number of sensor-based applications in the healthcare domain. Design of an efficient and effective Medium Access Control (MAC) protocol is one of the fundamental research themes in WBAN. Static on-demand slot allocation to patient data is the main approach adopted in the design of MAC protocols in the literature, without considering the type of patient data, specifically the level of severity of patient data. This leads to degraded performance of MAC protocols in terms of effectiveness and traffic adjustability in realistic medical environments. In this context, this paper proposes a Traffic Priority-Aware MAC (TraPy-MAC) protocol for WBAN. It classifies patient data into emergency and non-emergency categories based on the severity of the patient data. The threshold-value-aided classification considers a number of parameters, including type of sensor, body placement location, and data transmission time, for allocating dedicated slots to patient data. Emergency data are not required to carry out contention, and slots are allocated by giving due importance to the threshold value of vital sign data. The contention for slots is made efficient in the case of non-emergency data by considering the threshold value in slot allocation. Moreover, slot allocation to emergency and non-emergency data is performed in parallel, resulting in a performance gain in channel assignment. Two algorithms, namely Detection of Severity on Vital Sign data (DSVS) and ETS Slots allocation based on the Severity on Vital Sign (ETS-SVS), are developed for calculating threshold values and resolving the conflicts of channel assignment, respectively. Simulations are performed in ns2 and the results are compared with state-of-the-art MAC techniques.
Analysis of the results attests to the benefit of TraPy-MAC over state-of-the-art MAC protocols in channel assignment in realistic medical environments.

  3. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
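    The multiplier idea can be sketched with a toy forward model and a plain random-walk Metropolis sampler. DREAM differs in using multiple chains with differential-evolution proposals, but the Bayesian acceptance rule is the same; the observation noise and "true" recharge multiplier below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: simulated heads scale linearly with a recharge multiplier.
def heads(recharge_mult):
    return recharge_mult * np.ones(20)

# Synthetic 'observed' heads generated with a true multiplier of 1.3
obs = heads(1.3) + rng.normal(0.0, 0.05, 20)

def log_posterior(mult, sigma=0.05):
    if mult <= 0.0:                      # flat prior on positive multipliers
        return -np.inf
    resid = obs - heads(mult)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis over the uncertain input multiplier
mult, lp = 1.0, log_posterior(1.0)
chain = []
for _ in range(5000):
    prop = mult + rng.normal(0.0, 0.02)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
        mult, lp = prop, lp_prop
    chain.append(mult)
posterior = np.array(chain[1000:])             # discard burn-in
print(posterior.mean(), posterior.std())
```

    In the actual study the parameter vector also contains the regular model parameters, so the posterior captures the joint input-parameter uncertainty rather than a single multiplier.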

  4. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms remain unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
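    The core claim, that an asymmetric error cost shifts the optimal estimate away from the maximum-likelihood value, can be checked numerically. The Gaussian posterior and the 4:1 cost ratio below are illustrative assumptions, not the model's actual cost function:

```python
import numpy as np

rng = np.random.default_rng(1)

# Belief about an environmental parameter: samples around its
# maximum-likelihood value of 0 (standard normal, for illustration).
posterior = rng.normal(0.0, 1.0, 20_000)

def expected_cost(estimate, over_cost=1.0, under_cost=4.0):
    """Asymmetric linear error cost: underestimating the parameter
    is assumed four times as costly as overestimating it."""
    err = estimate - posterior
    return np.mean(np.where(err > 0, over_cost * err, under_cost * -err))

grid = np.linspace(-2.0, 2.0, 401)
costs = [expected_cost(g) for g in grid]
optimal = grid[int(np.argmin(costs))]
print(optimal)   # shifted above the maximum-likelihood value of 0
```

    For this piecewise-linear cost the minimizer is a quantile of the belief distribution (here the 0.8 quantile, near 0.84), so the cost-optimal estimate deviates systematically from the maximum-likelihood value, exactly the bias the model attributes to perception.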

  5. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

    It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  6. Developing Skills: Realistic Work Environments in Further Education. FEDA Reports.

    ERIC Educational Resources Information Center

    Armstrong, Paul; Hughes, Maria

    To establish the prevalence and perceived value of realistic work environments (RWEs) in colleges and their use as learning resources, all further education (FE) sector colleges in Great Britain were surveyed in the summer of 1998. Of 175 colleges that responded to 2 questionnaires for senior college managers and RWE managers, 127 had at least 1…

  7. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) model within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. High degrees of fitness were achieved, in addition to more realistic and consistent parameter sets such as those typically obtained during manual calibration procedures.

  8. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

    We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10⁵ total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
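    The basis-construction and fitting steps described above can be sketched with NumPy. The frame times, rate range, truncation tolerance, and input function below are illustrative placeholders, not the study's values:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 120)            # frame times (min), illustrative
dt = t[1] - t[0]
rates = np.logspace(-3, 0, 200)            # anticipated range of kinetic rates
modes = np.exp(-np.outer(rates, t))        # all candidate exponential modes

# SVD of the mode family: a handful of right singular vectors span it
U, s, Vt = np.linalg.svd(modes, full_matrices=False)
k = int(np.sum(s > 1e-4 * s[0]))           # keep components above a tolerance
basis = Vt[:k]                             # dimension-reduced temporal basis

# Convolve the basis with a (here synthetic) measured input function
Cp = t * np.exp(-t / 5.0)
conv_basis = np.array([np.convolve(b, Cp)[: t.size] * dt for b in basis])

# Fit a tissue response via the Moore-Penrose pseudoinverse
tissue = np.convolve(np.exp(-0.1 * t), Cp)[: t.size] * dt
coef = np.linalg.pinv(conv_basis.T) @ tissue
recon = conv_basis.T @ coef
```

    The point of the reduction is that k is far smaller than the number of candidate rates, so the direct fit involves only a few coefficients per region.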

  9. Electro-osmotic flow of couple stress fluids in a micro-channel propagated by peristalsis

    NASA Astrophysics Data System (ADS)

    Tripathi, Dharmendra; Yadav, Ashu; Anwar Bég, O.

    2017-04-01

    A mathematical model is developed for electro-osmotic peristaltic pumping of a non-Newtonian liquid in a deformable micro-channel. Stokes' couple stress fluid model is employed to represent realistic working liquids. The Poisson-Boltzmann equation for the electric potential distribution is implemented owing to the presence of an electrical double layer (EDL) in the micro-channel. Using the long wavelength, lubrication theory and Debye-Huckel approximations, the linearized, transformed, dimensionless boundary value problem is solved analytically. The influence of the electro-osmotic parameter (inversely proportional to Debye length), maximum electro-osmotic velocity (a function of the external applied electrical field) and couple stress parameter on axial velocity, volumetric flow rate, pressure gradient, local wall shear stress and stream function distributions is evaluated in detail with the aid of graphs. The Newtonian fluid case is retrieved as a special case with vanishing couple stress effects. Increasing the couple stress parameter significantly increases the axial pressure gradient while reducing the core axial velocity. Increasing the electro-osmotic parameter both accelerates the flow in the core region (around the channel centreline) and substantially enhances the axial pressure gradient. The study is relevant to the simulation of novel smart bio-inspired space pumps, chromatography and medical micro-scale devices.
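    For intuition, the shape of the electro-osmotic velocity profile can be sketched in the Newtonian limit (vanishing couple stresses) under the Debye-Huckel approximation; this is a textbook sketch, not the paper's couple-stress solution, and the parameter values are arbitrary:

```python
import numpy as np

kappa = 10.0    # electro-osmotic parameter: half-height / Debye length
U_hs = 1.0      # maximum electro-osmotic (Helmholtz-Smoluchowski) velocity
y = np.linspace(-1.0, 1.0, 201)            # transverse coordinate, scaled

psi = np.cosh(kappa * y) / np.cosh(kappa)  # Debye-Huckel potential profile
u = U_hs * (1.0 - psi)                     # plug-like axial velocity profile
```

    Larger values of the electro-osmotic parameter (a thinner Debye layer) confine the shear to the walls and flatten the core into a plug, which is the core-acceleration trend described in the abstract.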

  10. Improving Land-Surface Model Hydrology: Is an Explicit Aquifer Model Better than a Deeper Soil Profile?

    NASA Technical Reports Server (NTRS)

    Gulden, L. E.; Rosero, E.; Yang, Z.-L.; Rodell, Matthew; Jackson, C. S.; Niu, G.-Y.; Yeh, P. J.-F.; Famiglietti, J. S.

    2007-01-01

    Land surface models (LSMs) are computer programs, similar to weather and climate prediction models, which simulate the storage and movement of water (including soil moisture, snow, evaporation, and runoff) after it falls to the ground as precipitation. It is not currently possible to measure all of the variables of interest everywhere on Earth with sufficient accuracy. Hence LSMs have been developed to integrate the available information, including satellite observations, using powerful computers, in order to track water storage and redistribution. The resulting maps are used to improve weather forecasts, support water resources and agricultural applications, and study the Earth's water cycle and climate variability. Recently, the models have begun to simulate groundwater storage. In this paper, we compare several possible approaches, and examine the pitfalls associated with trying to estimate aquifer parameters (such as porosity) that are required by the models. We find that explicit representation of groundwater, as opposed to the addition of deeper soil layers, considerably decreases the sensitivity of modeled terrestrial water storage to aquifer parameter choices. We also show that approximate knowledge of parameter values is not sufficient to guarantee realistic model performance: because interaction among parameters is significant, they must be prescribed as a harmonious set.

  11. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. 
We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
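    A toy Monte Carlo illustrates why partitioned measurements help with common-mode noise; the Gaussian noise model and the numbers below are illustrative and far simpler than the correlated-noise model analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 0.1                  # trials; small parameter to estimate

offset = rng.normal(0.0, 1.0, n)   # correlated (common-mode) noise
signal = d + offset + rng.normal(0.0, 0.05, n)   # "signal" partition
backgr = offset + rng.normal(0.0, 0.05, n)       # "background" partition

est_naive = signal.mean()              # ignores the correlation entirely
est_sub = (signal - backgr).mean()     # background subtraction cancels it

# Per-trial variance: ~1.0 for the naive estimate, ~0.005 after subtraction
```

    The subtracted estimator exploits the correlation between the two partitions, which is the mechanism the paper identifies behind both background subtraction and the optimal partitioning analysis.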

  12. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probabilities of the slope (the density difference between adjacent cells) and its fluctuations are also computed from the two-cell joint PDF; these also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids, as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  13. Evaluating the Uncertainties in the Electron Temperature and Radial Speed Measurements Using White Light Corona Eclipse Observations

    NASA Technical Reports Server (NTRS)

    Reginald, Nelson L.; Davilla, Joseph M.; St. Cyr, O. C.; Rastaetter, Lutz

    2014-01-01

    We examine how two plasma parameters deviate from their true values in a simulated asymmetric corona. We use the Corona Heliosphere (CORHEL) and Magnetohydrodynamics Around the Sphere (MAS) models in the Community Coordinated Modeling Center (CCMC) to investigate the differences between an assumed symmetric corona and a more realistic, asymmetric one. We were able to predict the electron temperatures and electron bulk flow speeds to within ±0.5 MK and ±100 km s⁻¹, respectively, over coronal heights up to 5.0 R from Sun center. We believe that this technique could be incorporated in next-generation white-light coronagraphs to determine these electron plasma parameters in the low solar corona. We have conducted experiments in the past during total solar eclipses to measure the thermal electron temperature and the electron bulk flow speed in the radial direction in the low solar corona. These measurements were made at different altitudes and latitudes in the low solar corona by measuring the shape of the K-coronal spectrum between 350 nm and 450 nm and two brightness ratios through filters centered at 385.0 nm/410.0 nm and 398.7 nm/423.3 nm with a bandwidth of approximately 4 nm. Based on the symmetric coronal models used for these measurements, the two measured plasma parameters were expected to represent the values at the points where the lines of sight intersected the plane of the solar limb.

  14. Adaptive Particle Swarm Optimizer with Varying Acceleration Coefficients for Finding the Most Stable Conformer of Small Molecules.

    PubMed

    Agrawal, Shikha; Silakari, Sanjay; Agrawal, Jitendra

    2015-11-01

    A novel parameter automation strategy for Particle Swarm Optimization, called APSO (Adaptive PSO), is proposed. The algorithm is designed to efficiently control the local search and convergence to the global optimum solution. Parameter c1 controls the impact of the cognitive component on the particle trajectory and c2 controls the impact of the social component. Instead of fixing the values of c1 and c2, this paper updates these acceleration coefficients by considering the time variation of the evaluation function along with a varying inertia weight factor in PSO. Here the maximum and minimum values of the evaluation function are used to gradually decrease and increase the values of c1 and c2, respectively. Molecular energy minimization is one of the most challenging unsolved problems and it can be formulated as a global optimization problem. The aim of the present paper is to investigate the effect of the newly developed APSO on a highly complex molecular potential energy function and to check the efficiency of the proposed algorithm in finding the global minimum of the function under consideration. The proposed APSO algorithm is therefore applied in two cases: first, for the minimization of the potential energy of small molecules with up to 100 degrees of freedom, and finally for finding the global minimum energy conformation of the 1,2,3-trichloro-1-fluoro-propane molecule based on a realistic potential energy function. The computational results in all cases show that the proposed method performs significantly better than the other algorithms. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
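    A minimal PSO sketch with time-varying acceleration coefficients is given below. One hedge is important: the linear schedules used here (c1 decreasing, c2 increasing, inertia weight decreasing) are a common time-varying variant; the APSO of this paper instead adapts c1 and c2 from the maximum and minimum of the evaluation function, which is not reproduced here.

```python
import numpy as np

def pso_varying_coeffs(f, dim=5, n_particles=30, iters=300, seed=1):
    """Minimal PSO with a decreasing inertia weight and time-varying
    acceleration coefficients: cognitive c1 decays, social c2 grows."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters       # inertia weight: 0.9 -> 0.4
        c1 = 2.5 - 1.5 * t / iters      # cognitive coefficient: 2.5 -> 1.0
        c2 = 1.0 + 1.5 * t / iters      # social coefficient:    1.0 -> 2.5
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -2.0, 2.0)       # velocity clamp for stability
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better] = x[better]
        pval[better] = fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Simple test function standing in for a molecular potential energy surface
sphere = lambda z: float(np.sum(np.asarray(z) ** 2))
```

    Early iterations emphasize exploration (large c1), later ones emphasize convergence toward the swarm best (large c2), mirroring the rationale behind the adaptive scheme.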

  15. The supernova-regulated ISM. III. Generation of vorticity, helicity, and mean flows

    NASA Astrophysics Data System (ADS)

    Käpylä, M. J.; Gent, F. A.; Väisälä, M. S.; Sarson, G. R.

    2018-03-01

    Context: The forcing of interstellar turbulence, driven mainly by supernova (SN) explosions, is irrotational in nature, but the development of significant amounts of vorticity and helicity, accompanied by large-scale dynamo action, has been reported. Aims: Several earlier investigations examined vorticity production in simpler systems; here all the relevant processes can be considered simultaneously. We also investigate the mechanisms for the generation of net helicity and large-scale flow in the system. Methods: We use a three-dimensional, stratified, rotating and shearing local simulation domain of size 1 × 1 × 2 kpc³, forced with SN explosions occurring at a rate typical of the solar neighbourhood in the Milky Way. In addition to the nominal simulation run with realistic Milky Way parameters, we vary the rotation and shear rates, but keep the absolute value of their ratio fixed. Reversing the sign of shear vs. rotation allows us to separate the rotation- and shear-generated contributions. Results: As in earlier studies, we find the generation of significant amounts of vorticity, the rotational flow comprising on average 65% of the total flow. The vorticity production can be related to the baroclinicity of the flow, especially in the regions of hot, dilute clustered supernova bubbles. In these regions, the vortex stretching acts as a sink of vorticity. In denser, compressed regions, the vortex stretching amplifies vorticity, but remains sub-dominant to baroclinicity. The net helicities produced by rotation and shear are of opposite signs for physically motivated rotation laws, with the solar neighbourhood parameters resulting in the near cancellation of the total net helicity. We also find the excitation of oscillatory mean flows, the strength and oscillation period of which depend on the Coriolis and shear parameters; we interpret these as signatures of the anisotropic-kinetic-α (AKA) effect. We use the method of moments to fit for the turbulent transport coefficients, and find α_AKA values of the order 3-5 km s⁻¹. Conclusions: Even in a weakly rotationally and shear-influenced system, small-scale anisotropies can lead to significant effects at large scales. Here we report on two consequences of such effects, namely on the generation of net helicity and on the emergence of large-scale flows by the AKA effect, the latter detected for the first time in a direct numerical simulation of a realistic astrophysical system.

  16. The Role of the Cooling Prescription for Disk Fragmentation: Numerical Convergence and Critical Cooling Parameter in Self-gravitating Disks

    NASA Astrophysics Data System (ADS)

    Baehr, Hans; Klahr, Hubert

    2015-12-01

    Protoplanetary disks fragment due to gravitational instability when there is enough mass for self-gravitation, described by the Toomre parameter, and when heat can be lost at a rate comparable to the local dynamical timescale, described by the cooling timescale t_c = β Ω⁻¹. Simulations of self-gravitating disks show that the cooling parameter has a rough critical value at β_crit = 3. Below β_crit, gas overdensities contract under their own gravity and fragment into bound objects, while above it the disk maintains a steady state of gravitoturbulence. However, previous studies of the critical cooling parameter have found dependences on simulation resolution, indicating that the simulation of self-gravitating protoplanetary disks is not so straightforward. In particular, the simplicity of the cooling timescale t_c prevents fragments from being disrupted by pressure support as temperatures rise. We alter the cooling law so that the cooling timescale depends on local surface density fluctuations, which is a means of incorporating optical depth effects into the local cooling of an object. For lower resolution simulations, this results in a lower critical cooling parameter and a disk that is more stable to gravitational stresses, suggesting that the formation of large gas giant planets in large, cool disks is generally suppressed by more realistic cooling. At our highest resolution, however, the model becomes unstable to fragmentation for cooling timescales up to β = 10.

  17. Comparison of SWAT Hydrological Model Results from TRMM 3B42, NEXRAD Stage III, and Oklahoma Mesonet Data

    NASA Astrophysics Data System (ADS)

    Tobin, K. J.; Bennett, M. E.

    2008-05-01

    The Cimarron River Basin (3110 sq km) between Dodge and Guthrie, Oklahoma is located in northern Oklahoma and was used as a test bed to compare hydrological model performance under different methods of precipitation quantification. The Soil and Water Assessment Tool (SWAT), a comprehensive model that, besides quantifying watershed hydrology, can simulate water quality as well as nutrient and sediment loading within stream reaches, was selected for this project. An advantage of this location is the extensive monitoring of meteorological parameters (precipitation, temperature, relative humidity, wind speed, solar radiation) afforded by the Oklahoma Mesonet, which has been documented to improve the performance of SWAT. The utility of TRMM 3B42 and NEXRAD Stage III data in supporting the hydrologic modeling of the Cimarron River Basin is demonstrated. Minor adjustments to selected model parameters were made to bring parameter values closer to realistic values reported in previous studies and to more realistically simulate base flow. Significantly, no ad hoc adjustments to major parameters such as Curve Number or Available Soil Water were made, and robust simulations were obtained. TRMM and NEXRAD data were aggregated into an average daily estimate of precipitation for each TRMM grid cell (0.25 degree × 0.25 degree). Preliminary simulation of stream flow (years 2004 to 2006) in the Cimarron River Basin yields acceptable monthly results with very little adjustment of model parameters using TRMM 3B42 precipitation data (mass balance error = 3 percent; monthly Nash-Sutcliffe efficiency coefficient (NS) = 0.77). However, both Oklahoma Mesonet rain gauge data (mass balance error = 13 percent; monthly NS = 0.91; daily NS = 0.64) and NEXRAD Stage III data (mass balance error = -5 percent; monthly NS = 0.95; daily NS = 0.69) produce superior simulations even at a sub-monthly time scale; daily results are time averaged over a three day period.
    Note that all types of precipitation data perform better than a synthetic precipitation dataset generated using a weather simulator (mass balance error = 12 percent; monthly NS = 0.40). Our study again documents that merged satellite precipitation products, such as TRMM 3B42, can support semi-distributed hydrologic modeling at the watershed scale. However, additional work is apparently required to improve TRMM precipitation retrievals over land to generate a product that yields more robust hydrological simulations, especially at finer time scales. Additionally, ongoing work in this basin will compare TRMM results with stream flow model results generated using CMORPH precipitation estimates. Finally, in the future we plan to use simulated, semi-distributed soil moisture values determined by SWAT for comparison with gridded soil moisture estimates from TRMM-TMI, which should provide further validation of our modeling efforts.
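    The two skill scores quoted above (Nash-Sutcliffe efficiency and mass balance error) have standard definitions, made explicit here as a generic formulation rather than code from the study:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mass_balance_error(sim, obs):
    """Percent bias of total simulated volume relative to observed."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)
```

    Applied to monthly flows, these reproduce the kind of summary statistics reported in the abstract (e.g. NS = 0.95 with a -5 percent mass balance error for NEXRAD Stage III).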

  18. Evaluating the biochemical methane potential (BMP) of low-organic waste at Danish landfills.

    PubMed

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2014-11-01

    The biochemical methane potential (BMP) is an essential parameter when using first order decay (FOD) landfill gas (LFG) generation models to estimate methane (CH4) generation from landfills. Different categories of waste (mixed, shredder and sludge waste) with a low organic content, along with temporarily stored combustible waste, were sampled from four Danish landfills. The waste was characterized in terms of physical properties (TS, VS, TC and TOC) and the BMP was analyzed in batch tests. The experiment was set up in triplicate, including blank and control tests. Waste samples were incubated at 55°C for more than 60 days, with continuous monitoring of cumulative CH4 generation. Results showed that samples of mixed waste and shredder waste had similar BMPs, in the range of 5.4-9.1 kg CH4/ton waste (wet weight) on average. Correspondingly, their calculated degradable organic carbon content (DOCC) was in the range of 0.44-0.70% of total weight (wet waste). Both parameters were much lower than typical values for traditional municipal solid waste (MSW), as well as the default values in current FOD models. The sludge waste and temporarily stored combustible waste showed average BMP values of 51.8-69.6 and 106.6-117.3 kg CH4/ton waste, respectively, and DOCC values of 3.84-5.12% and 7.96-8.74% of total weight. The same category of waste from different Danish landfills did not show significant variation. This research studied the BMP of Danish low-organic waste for the first time, which is important for using current FOD LFG generation models to estimate realistic CH4 emissions from modern landfills receiving low-organic waste. Copyright © 2014 Elsevier Ltd. All rights reserved.
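    To show how a BMP value feeds a first-order decay LFG model, here is a generic FOD sketch; the decay constant k and the masses are hypothetical, and the BMP is simply used as the CH4 generation potential L0:

```python
import math

def fod_ch4(mass_tons, l0_kg_per_ton, k=0.05, years=50):
    """First-order decay CH4 generation for a mass landfilled at t = 0:
    Q(t) = k * L0 * M * exp(-k * t)  [kg CH4 / yr],
    with L0 the CH4 generation potential (here taken from the BMP)."""
    return [k * l0_kg_per_ton * mass_tons * math.exp(-k * t)
            for t in range(years)]
```

    With L0 drawn from the mixed/shredder-waste range above (5.4-9.1 kg CH4/ton) rather than traditional MSW defaults, the predicted generation curve is proportionally lower, which is the practical point of measuring BMP for low-organic waste.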

  19. Effects of binary stellar populations on direct collapse black hole formation

    NASA Astrophysics Data System (ADS)

    Agarwal, Bhaskar; Cullen, Fergus; Khochfar, Sadegh; Klessen, Ralf S.; Glover, Simon C. O.; Johnson, Jarrett

    2017-06-01

    The critical Lyman-Werner (LW) flux required for direct collapse black hole (DCBH) formation, or Jcrit, depends on the shape of the irradiating spectral energy distribution (SED). The SEDs employed thus far have been representative of realistic single stellar populations. We study the effect of binary stellar populations on the formation of DCBHs, as a result of their contribution to the LW radiation field. Although binary populations with ages > 10 Myr yield a larger LW photon output, we find that the corresponding values of Jcrit can be up to 100 times higher than for single stellar populations. We attribute this to the shape of the binary SEDs, as they produce a sub-critical rate of H⁻-photodetaching 0.76 eV photons compared to single stellar populations, reaffirming the role that H⁻ plays in DCBH formation. This further corroborates the idea that DCBH formation is better understood in terms of a critical region in the H₂-H⁻ photodestruction rate parameter space, rather than a single value of the LW flux.

  20. Spin current and spin transfer torque in ferromagnet/superconductor spin valves

    NASA Astrophysics Data System (ADS)

    Moen, Evan; Valls, Oriol T.

    2018-05-01

    Using fully self-consistent methods, we study spin transport in fabricable spin valve systems consisting of two magnetic layers, a superconducting layer, and a spacer normal layer between the ferromagnets. Our methods ensure that the proper relations between spin current gradients and spin transfer torques are satisfied. We present results as a function of geometrical parameters, interfacial barrier values, misalignment angle between the ferromagnets, and bias voltage. Our main results are for the spin current and spin accumulation as functions of position within the spin valve structure. We see precession of the spin current about the exchange fields within the ferromagnets, and penetration of the spin current into the superconductor for biases greater than the critical bias, defined in the text. The spin accumulation exhibits oscillating behavior in the normal metal, with a strong dependence on the physical parameters both as to the structure and formation of the peaks. We also study the bias dependence of the spatially averaged spin transfer torque and spin accumulation. We examine the critical-bias effect of these quantities, and their dependence on the physical parameters. Our results are predictive of the outcome of future experiments, as they take into account imperfect interfaces and a realistic geometry.

  1. Prospect theory based estimation of drivers' risk attitudes in route choice behaviors.

    PubMed

    Zhou, Lizhen; Zhong, Shiquan; Ma, Shoufeng; Jia, Ning

    2014-12-01

    This paper applied prospect theory (PT) to describe drivers' route choice behavior under Variable Message Signs (VMS), which present visual traffic information to assist route choice decisions. A rich set of empirical data from questionnaires and field observations was used to estimate the PT parameters. To make the parameters better reflect drivers' attitudes, drivers were classified into different types according to the significant factors influencing their behavior. Based on the travel time distributions of the alternative routes and the route choices reported in the questionnaire, the parameterized value function of each category was estimated, representing drivers' risk attitudes and choice characteristics. Empirical verification showed that the estimates were acceptable and effective. The results show that drivers' risk attitudes and route choice characteristics can be captured by PT under real-time information shown on VMS. For practical application, once drivers' route choice characteristics and parameters are identified, their route choice behavior under different road conditions can be predicted accurately, which is the basis for formulating and implementing targeted traffic guidance measures. Moreover, the heterogeneous risk attitudes among drivers should be considered when releasing traffic information and regulating traffic flow. Copyright © 2014 Elsevier Ltd. All rights reserved.
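    The PT value function being estimated has a standard parametric form. The sketch below uses Tversky and Kahneman's median parameter estimates (alpha = beta = 0.88, lambda = 2.25) purely as placeholders; the paper fits its own values per driver category:

```python
import numpy as np

def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains, convex and
    more steeply weighted over losses (lam > 1 encodes loss aversion)."""
    x = np.asarray(x, dtype=float)
    v = np.empty_like(x)
    gains = x >= 0
    v[gains] = x[gains] ** alpha
    v[~gains] = -lam * (-x[~gains]) ** beta
    return v
```

    Fitting alpha, beta, and lam for each driver category to observed route choices is what yields the category-specific risk attitudes described in the abstract.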

  2. Consistency relations for sharp features in the primordial spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris

    We study the generation of sharp features in the primordial spectra within the framework of effective field theory of inflation, wherein curvature perturbations are the consequence of the dynamics of a single scalar degree of freedom. We identify two sources in the generation of features: rapid variations of the sound speed c_s (at which curvature fluctuations propagate) and rapid variations of the expansion rate H during inflation. With this in mind, we propose a non-trivial relation linking these two quantities that allows us to study the generation of sharp features in realistic scenarios where features are the result of the simultaneous occurrence of these two sources. This relation depends on a single parameter with a value determined by the particular model (and its numerical input) responsible for the rapidly varying background. As a consequence, we find a one-parameter consistency relation between the shape and size of features in the bispectrum and features in the power spectrum. To substantiate this result, we discuss several examples of models for which this one-parameter relation (between c_s and H) holds, including models in which features in the spectra are both sudden and resonant.

  3. Investigation of statistical iterative reconstruction for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Glick, Stephen J.

    2013-01-01

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of an uncompressed breast in the pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which results in low-noise images with high-contrast microcalcifications preserved.
In terms of numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance for various radiation doses. In fact, using PML with certain parameter values results in better images, acquired using 2 mGy dose, than FBP-reconstructed images acquired using 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task, in dedicated breast CT. The reported values can be used as starting values of the free parameters, when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with lower radiation dose to the patient, than using FBP with higher dose. PMID:23927318
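    The hyperbolic roughness penalty referred to above has a standard form; the parameterization below is a common one and may differ in detail from the authors' implementation. It behaves quadratically for voxel differences well below the edge-preservation threshold delta (smoothing noise) and nearly linearly above it (preserving edges):

```python
import numpy as np

def hyperbolic_penalty(t, delta):
    """Edge-preserving hyperbolic potential:
    psi(t) = delta^2 * (sqrt(1 + (t/delta)^2) - 1).
    ~ t^2 / 2 for |t| << delta, ~ delta * |t| for |t| >> delta."""
    t = np.asarray(t, dtype=float)
    return delta ** 2 * (np.sqrt(1.0 + (t / delta) ** 2) - 1.0)
```

    In a PML objective, this potential is summed over neighboring-voxel differences and scaled by the roughness penalty weight, the two free parameters whose optimal ranges the study reports.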

  4. SU-F-I-14: 3D Breast Digital Phantom for XACT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Laaroussi, R; Chen, J

    Purpose: The X-ray induced acoustic computed tomography (XACT) is a new imaging modality which combines X-ray contrast and high ultrasonic resolution in a single modality. Using XACT in breast imaging, a 3D breast volume can be imaged with only one pulsed X-ray radiation, which could dramatically reduce the imaging dose for patients undergoing breast cancer screening and diagnosis. A 3D digital phantom that contains both the X-ray properties and the acoustic properties of different tissue types is needed for developing and optimizing the XACT system. The purpose of this study is to offer a realistic breast digital phantom as a valuable tool for improving breast XACT imaging techniques, potentially leading to better diagnostic outcomes. Methods: A series of breast CT images along the coronal plane from a patient with breast calcifications is used as the source images. An HU-value-based segmentation algorithm is employed to classify breast tissue into five categories: skin tissue, fat tissue, glandular tissue, chest bone and calcifications. For each pixel, the dose-related parameters, such as material components and density, and the acoustic parameters, such as the frequency-dependent acoustic attenuation coefficient and bandwidth, are assigned based on tissue type. Meanwhile, other parameters used in sound propagation, including the sound speed, thermal expansion coefficient, and heat capacity, are also assigned to each tissue. Results: A series of 2D tissue-type images is acquired first and the 3D digital breast phantom is obtained by using commercial 3D reconstruction software. Given specific settings, including dose depositions and ultrasound center frequency, the X-ray induced initial pressure rise can be calculated accordingly. Conclusion: The proposed 3D breast digital phantom represents a realistic breast anatomic structure and provides a valuable tool for developing and evaluating the system performance for XACT.
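    The HU-based segmentation and property assignment can be sketched as a threshold-plus-lookup step. The HU windows and tissue properties below are hypothetical placeholders; real values must come from measured tissue data, not from this sketch:

```python
import numpy as np

# Hypothetical HU windows (lower-inclusive) and example acoustic properties
TISSUES = {
    "fat":           {"hu": (-200, -20),  "sound_speed": 1450.0},
    "glandular":     {"hu": (-20, 80),    "sound_speed": 1510.0},
    "skin":          {"hu": (80, 200),    "sound_speed": 1610.0},
    "bone":          {"hu": (200, 1200),  "sound_speed": 3500.0},
    "calcification": {"hu": (1200, 4000), "sound_speed": 4000.0},
}

def segment(hu_image):
    """Label each voxel by its HU window; 0 marks unclassified background."""
    labels = np.zeros(hu_image.shape, dtype=np.int32)
    for i, props in enumerate(TISSUES.values(), start=1):
        lo, hi = props["hu"]
        labels[(hu_image >= lo) & (hu_image < hi)] = i
    return labels

def property_map(labels, key):
    """Turn a label map into a per-voxel physical property via a lookup table."""
    lut = np.zeros(len(TISSUES) + 1)
    for i, props in enumerate(TISSUES.values(), start=1):
        lut[i] = props[key]
    return lut[labels]
```

    The same lookup pattern extends to density, attenuation, thermal expansion coefficient, and heat capacity, giving every voxel the full set of parameters needed to compute the X-ray induced initial pressure rise.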

  5. Evaluating Force-Field London Dispersion Coefficients Using the Exchange-Hole Dipole Moment Model.

    PubMed

    Mohebifar, Mohamad; Johnson, Erin R; Rowley, Christopher N

    2017-12-12

    London dispersion interactions play an integral role in materials science and biophysics. Force fields for atomistic molecular simulations typically represent dispersion interactions by the 12-6 Lennard-Jones potential using empirically determined parameters. These parameters are generally underdetermined, and there is no straightforward way to test if they are physically realistic. Alternatively, the exchange-hole dipole moment (XDM) model from density-functional theory predicts atomic and molecular London dispersion coefficients from first principles, providing an innovative strategy to validate the dispersion terms of molecular-mechanical force fields. In this work, the XDM model was used to obtain the London dispersion coefficients of 88 organic molecules relevant to biochemistry and pharmaceutical chemistry and the values compared with those derived from the Lennard-Jones parameters of the CGenFF, GAFF, OPLS, and Drude polarizable force fields. The molecular dispersion coefficients for the CGenFF, GAFF, and OPLS models are systematically higher than the XDM-calculated values by a factor of roughly 1.5, likely due to neglect of higher order dispersion terms and premature truncation of the dispersion-energy summation. The XDM dispersion coefficients span a large range for some molecular-mechanical atom types, suggesting an unrecognized source of error in force-field models, which assume that atoms of the same type have the same dispersion interactions. Agreement with the XDM dispersion coefficients is even poorer for the Drude polarizable force field. Popular water models were also examined, and TIP3P was found to have dispersion coefficients similar to the experimental and XDM references, although other models employ anomalously high values. 
Finally, XDM-derived dispersion coefficients were used to parametrize molecular-mechanical force fields for five liquids (benzene, toluene, cyclohexane, n-pentane, and n-hexane), which resulted in improved accuracy in the computed enthalpies of vaporization despite requiring evaluation of only a much smaller section of the parameter space.
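For reference, the comparison above rests on the fact that a 12-6 Lennard-Jones pair potential, U(r) = 4ε[(σ/r)¹² − (σ/r)⁶], implies a long-range dispersion term −C6/r⁶ with C6 = 4εσ⁶. A minimal sketch with made-up parameter values (not taken from the CGenFF/GAFF/OPLS tables):

```python
# For a 12-6 Lennard-Jones pair, U(r) = 4*eps*((sig/r)**12 - (sig/r)**6),
# the long-range dispersion term is -C6/r**6 with C6 = 4*eps*sig**6.
def c6_from_lj(epsilon, sigma):
    return 4.0 * epsilon * sigma**6

# Illustrative numbers only, e.g. epsilon in kJ/mol and sigma in Angstrom
eps, sig = 0.45, 3.4
c6 = c6_from_lj(eps, sig)
print(round(c6, 1))
```

Summing such pairwise atomic C6 values (with combination rules) is how molecular dispersion coefficients are extracted from force-field parameters for comparison against XDM.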

  6. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    PubMed

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. 
The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of the homogeneous mixing assumption.
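The recalibration idea above can be illustrated with a bare-bones homogeneous-mixing SIR model: adjusting the transmission rate shifts the epidemic peak time and height, which is what the fit to the contact-network prevalence curve exploits. A sketch with illustrative parameters, not the paper's fitted values:

```python
import numpy as np

def sir_prevalence(beta, gamma=0.1, n=1000, i0=1, days=200, dt=0.1):
    """Deterministic SIR under homogeneous mixing; returns (times, I)."""
    s, i = n - i0, i0
    ts, infected = [], []
    for step in range(int(days / dt)):
        ts.append(step * dt)
        infected.append(i)
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s, i = s - new_inf, i + new_inf - new_rec
    return np.array(ts), np.array(infected)

def peak(ts, curve):
    k = int(np.argmax(curve))
    return ts[k], curve[k]

# A higher transmission rate gives an earlier, taller epidemic peak;
# recalibration tunes beta (and gamma) so the homogeneous-mixing curve
# matches the peak of the contact-network simulation.
t1, v1 = peak(*sir_prevalence(beta=0.3))
t2, v2 = peak(*sir_prevalence(beta=0.4))
print(t2 < t1 and v2 > v1)
```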

  7. Model-based analysis of the torsional loss modulus in human hair and of the effects of cosmetic processing.

    PubMed

    Wortmann, Franz J; Wortmann, Gabriele; Haake, Hans-Martin; Eisfeld, Wolf

Torsional analysis of single human hairs is especially suited to determine the properties of the cuticle and its changes through cosmetic processing. The two primary parameters obtained by free torsional oscillation using the torsional pendulum method are the storage (G′) and loss (G″) moduli. Based on previous work on G′, the current investigation focuses on G″. The results show an increase of G″ with a drop of G′ and vice versa, as is expected for a viscoelastic material well below its glass transition. The overall power of G″ to discriminate between samples is quite low. This is attributed to the systematic decrease of the parameter values with increasing fiber diameter, with a pronounced correlation between G″ and G′. Analyzing this effect on the basis of a core/shell model for the cortex/cuticle structure of hair by nonlinear regression leads to estimates for the loss moduli of the cortex (G″co) and cuticle (G″cu). Although the values for G″co turn out to be physically implausible, due to limitations of the applied model, those for G″cu are considered generally realistic against relevant literature values. Significant differences between the loss moduli of the cuticle for the different samples provide insight into changes of the torsional energy loss due to the cosmetic processes and products, contributing toward a consistent view of torsional energy storage and loss, namely in the cuticle of hair.
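The core/shell analysis described above can be sketched by noting that, in torsion, the polar rigidities of a cylindrical cortex core and a cuticle shell add. The moduli and cuticle thickness below are illustrative, not the paper's fitted values:

```python
# Effective torsional modulus of a core/shell fiber (polar moments add):
# G_eff * R**4 = G_co * a**4 + G_cu * (R**4 - a**4), with core radius
# a = R - t for a cuticle of thickness t. Values are illustrative only.
def g_eff(radius_um, g_core, g_cuticle, t_cuticle_um=3.0):
    a = radius_um - t_cuticle_um
    return (g_core * a**4 + g_cuticle * (radius_um**4 - a**4)) / radius_um**4

thin, thick = g_eff(30, 1.0, 2.0), g_eff(45, 1.0, 2.0)
# With a stiffer cuticle, thinner fibers (larger cuticle volume fraction)
# show a higher apparent modulus - the diameter trend noted above.
print(thin > thick)
```

Fitting measured fiber-level moduli against radius with this relation is what yields separate core and cuticle estimates.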

  8. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan

    2014-11-01

We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameters being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to structural errors in RANS.

  9. Improving the Fitness of High-Dimensional Biomechanical Models via Data-Driven Stochastic Exploration

    PubMed Central

    Bustamante, Carlos D.; Valero-Cuevas, Francisco J.

    2010-01-01

The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and what is to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a "truth model" of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian "measurement noise." Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
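A minimal Metropolis–Hastings sampler illustrating the search strategy above, reduced to one dimension with a hypothetical Gaussian posterior (the actual study used 3- to 36-D anatomical parameter spaces and multiple chains):

```python
import math, random

random.seed(0)

def log_posterior(theta):
    # Hypothetical 1-D stand-in for the study's parameter landscapes:
    # a Gaussian likelihood around a "truth" parameter of 2.0, sd 0.5.
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis_hastings(n_steps=20000, step=0.4, theta0=-3.0):
    theta, lp = theta0, log_posterior(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)   # symmetric proposal
        lp_prop = log_posterior(prop)
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

chain = metropolis_hastings()
burned = chain[5000:]          # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 2))
```

The retained samples approximate the posterior distribution of the parameter, which is what broadens and fragments as dimensionality grows.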

  10. Technical Note: Artificial coral reef mesocosms for ocean acidification investigations

    NASA Astrophysics Data System (ADS)

    Leblud, J.; Moulin, L.; Batigny, A.; Dubois, P.; Grosjean, P.

    2014-11-01

The design and evaluation of replicated artificial mesocosms are presented in the context of a thirteen-month experiment on the effects of ocean acidification on tropical coral reefs. They are defined here as (semi-)closed (i.e., with or without water exchange with the reef) mesocosms in the laboratory offering a more realistic physico-chemical environment than microcosms. Important physico-chemical parameters (i.e., pH, pO2, pCO2, total alkalinity, temperature, salinity, total alkaline earth metals and nutrient availability) were successfully monitored and controlled. Daily variations of irradiance and pH were applied to approach field conditions. Results highlighted that it was possible to maintain realistic physico-chemical parameters, including daily changes, in artificial mesocosms. On the other hand, the two identical artificial mesocosms evolved differently in terms of global community oxygen budgets, although the initial biological communities and physico-chemical parameters were comparable. Artificial reef mesocosms seem to leave enough degrees of freedom to the enclosed community of living organisms to organize and change along possibly diverging pathways.

  11. A baroclinic quasigeostrophic open ocean model

    NASA Technical Reports Server (NTRS)

    Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.

    1983-01-01

A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model are tabulated as a function of the computational parameters and stability limits set; typically, errors were controlled between 1 percent and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasiturbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.

  12. Effect of Anatomically Realistic Full-Head Model on Activation of Cortical Neurons in Subdural Cortical Stimulation—A Computational Study

    NASA Astrophysics Data System (ADS)

    Seo, Hyeon; Kim, Donghyeon; Jun, Sung Chan

    2016-06-01

    Electrical brain stimulation (EBS) is an emerging therapy for the treatment of neurological disorders, and computational modeling studies of EBS have been used to determine the optimal parameters for highly cost-effective electrotherapy. Recent notable growth in computing capability has enabled researchers to consider an anatomically realistic head model that represents the full head and complex geometry of the brain rather than the previous simplified partial head model (extruded slab) that represents only the precentral gyrus. In this work, subdural cortical stimulation (SuCS) was found to offer a better understanding of the differential activation of cortical neurons in the anatomically realistic full-head model than in the simplified partial-head models. We observed that layer 3 pyramidal neurons had comparable stimulation thresholds in both head models, while layer 5 pyramidal neurons showed a notable discrepancy between the models; in particular, layer 5 pyramidal neurons demonstrated asymmetry in the thresholds and action potential initiation sites in the anatomically realistic full-head model. Overall, the anatomically realistic full-head model may offer a better understanding of layer 5 pyramidal neuronal responses. Accordingly, the effects of using the realistic full-head model in SuCS are compelling in computational modeling studies, even though this modeling requires substantially more effort.

  13. A Multi-Parameter Approach for Calculating Crack Instability

    NASA Technical Reports Server (NTRS)

    Zanganeh, M.; Forman, R. G.

    2014-01-01

An accurate fracture control analysis of spacecraft pressure systems, boosters, rocket hardware and other critical low-cycle fatigue cases, where the fracture toughness highly impacts cycles to failure, requires accurate knowledge of the material fracture toughness. However, the applicability of fracture toughness values measured using standard specimens, and the transferability of those values to crack instability analysis of realistically complex structures, is questionable. The commonly used single-parameter Linear Elastic Fracture Mechanics (LEFM) approach, which relies on the key assumption that the fracture toughness is a material property, can result in inaccurate crack instability predictions. In past years extensive studies have been conducted to improve the single-parameter (K-controlled) LEFM by introducing parameters accounting for geometry or in-plane constraint effects. Despite the importance of thickness (out-of-plane constraint) effects in fracture control problems, the literature is mainly limited to empirical equations for scaling fracture toughness data, and only a few theoretically based developments can be found. In aerospace hardware, where the structure might have only one life cycle and weight reduction is crucial, reducing the design margin of safety by decreasing the uncertainty involved in fracture toughness evaluations would result in lighter hardware. In such conditions LEFM would not suffice and an elastic-plastic analysis would be vital. Multi-parameter elastic-plastic crack-tip field quantifying developments combined with statistical methods have been shown to have the potential to be used as a powerful tool for tackling such problems. However, these approaches have not been comprehensively scrutinized using experimental tests.
Therefore, in this paper a multi-parameter elastic-plastic approach has been used to study the crack instability problem and the transferability issue by considering the effects of geometrical constraints as well as the thickness. The feasibility of the approach has been examined using a wide range of specimen geometries and thicknesses manufactured from 7075-T7351 aluminum alloy.
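The single-parameter LEFM baseline that the multi-parameter approach refines can be sketched as the familiar instability check of K = Yσ√(πa) against K_Ic; the numbers below are illustrative, not measured 7075-T7351 values:

```python
import math

def stress_intensity(sigma_mpa, a_m, Y=1.0):
    """K = Y * sigma * sqrt(pi * a): single-parameter LEFM, in MPa*sqrt(m)."""
    return Y * sigma_mpa * math.sqrt(math.pi * a_m)

# Illustrative numbers for a 7075-type alloy (K_Ic ~ 25 MPa*sqrt(m));
# this single-parameter check is what multi-parameter methods refine by
# adding in-plane constraint and thickness effects.
K_IC = 25.0
k = stress_intensity(sigma_mpa=200.0, a_m=0.004)
print(k < K_IC)  # crack is predicted stable if K < K_Ic
```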

  14. Mapping Curie temperature depth in the western United States with a fractal model for crustal magnetization

    USGS Publications Warehouse

    Bouligand, C.; Glen, J.M.G.; Blakely, R.J.

    2009-01-01

    We have revisited the problem of mapping depth to the Curie temperature isotherm from magnetic anomalies in an attempt to provide a measure of crustal temperatures in the western United States. Such methods are based on the estimation of the depth to the bottom of magnetic sources, which is assumed to correspond to the temperature at which rocks lose their spontaneous magnetization. In this study, we test and apply a method based on the spectral analysis of magnetic anomalies. Early spectral analysis methods assumed that crustal magnetization is a completely uncorrelated function of position. Our method incorporates a more realistic representation where magnetization has a fractal distribution defined by three independent parameters: the depths to the top and bottom of magnetic sources and a fractal parameter related to the geology. The predictions of this model are compatible with radial power spectra obtained from aeromagnetic data in the western United States. Model parameters are mapped by estimating their value within a sliding window swept over the study area. The method works well on synthetic data sets when one of the three parameters is specified in advance. The application of this method to western United States magnetic compilations, assuming a constant fractal parameter, allowed us to detect robust long-wavelength variations in the depth to the bottom of magnetic sources. Depending on the geologic and geophysical context, these features may result from variations in depth to the Curie temperature isotherm, depth to the mantle, depth to the base of volcanic rocks, or geologic settings that affect the value of the fractal parameter. Depth to the bottom of magnetic sources shows several features correlated with prominent heat flow anomalies. It also shows some features absent in the map of heat flow. 
Independent geophysical and geologic data sets are examined to determine their origin, thereby providing new insights on the thermal and geologic crustal structure of the western United States.
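The spectral analysis underlying the method above starts from the azimuthally averaged (radial) power spectrum of a gridded magnetic anomaly map, to which the three-parameter fractal model is then fit within each sliding window. A sketch of the spectrum computation only (the model fitting is omitted):

```python
import numpy as np

def radial_power_spectrum(grid, n_bins=20):
    """Azimuthally averaged power spectrum of a 2-D anomaly grid."""
    f = np.fft.fftshift(np.fft.fft2(grid))
    power = np.abs(f) ** 2
    ny, nx = grid.shape
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    k = np.hypot(kx, ky)                       # radial wavenumber
    bins = np.linspace(0, k.max(), n_bins + 1)
    idx = np.digitize(k.ravel(), bins) - 1
    spec = np.array([power.ravel()[idx == b].mean() if np.any(idx == b)
                     else 0.0 for b in range(n_bins)])
    return bins[:-1], spec

# Synthetic stand-in for an aeromagnetic window (white noise here)
rng = np.random.default_rng(1)
k_vals, spec = radial_power_spectrum(rng.standard_normal((64, 64)))
print(len(spec))
```

Depths to the top and bottom of magnetic sources and the fractal parameter would then be estimated by fitting the model's predicted spectrum to `spec`.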

  15. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. 
In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
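The noise-to-slope ratio used as the figure of merit above can be illustrated directly when a simulated truth is available; in the actual NGS setting the slope and noise terms are instead estimated without truth. A sketch with synthetic data:

```python
import numpy as np

# Figure of merit used to rank methods by precision: for a linear
# measurement model  measured = slope * true + bias + noise,
# NSR = noise_sd / slope (smaller means more precise).
def noise_to_slope_ratio(true_vals, measured_vals):
    slope, bias = np.polyfit(true_vals, measured_vals, 1)
    resid = measured_vals - (slope * true_vals + bias)
    return resid.std(ddof=2) / slope

# Simulated truth, used here only to illustrate the metric itself
rng = np.random.default_rng(0)
truth = rng.uniform(1.0, 10.0, 200)
precise = 1.2 * truth + 0.5 + rng.normal(0, 0.2, 200)
noisy   = 1.2 * truth + 0.5 + rng.normal(0, 1.0, 200)
print(noise_to_slope_ratio(truth, precise) < noise_to_slope_ratio(truth, noisy))
```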

  16. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. 
In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.

  17. Accelerated tumor invasion under non-isotropic cell dispersal in glioblastomas

    NASA Astrophysics Data System (ADS)

    Fort, Joaquim; Solé, Ricard V.

    2013-05-01

    Glioblastomas are highly diffuse, malignant tumors that have so far evaded clinical treatment. The strongly invasive behavior of cells in these tumors makes them very resistant to treatment, and for this reason both experimental and theoretical efforts have been directed toward understanding the spatiotemporal pattern of tumor spreading. Although usual models assume a standard diffusion behavior, recent experiments with cell cultures indicate that cells tend to move in directions close to that of glioblastoma invasion, thus indicating that a biased random walk model may be much more appropriate. Here we show analytically that, for realistic parameter values, the speeds predicted by biased dispersal are consistent with experimentally measured data. We also find that models beyond reaction-diffusion-advection equations are necessary to capture this substantial effect of biased dispersal on glioblastoma spread.
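The effect of biased dispersal can be illustrated with a one-dimensional biased random walk: a directional bias produces a net drift that pure diffusion lacks. Parameter values below are illustrative, not fitted to glioblastoma data:

```python
import random

random.seed(42)

def mean_front_position(bias, n_walkers=2000, n_steps=500):
    """1-D random walk; each step is +1 with prob (1+bias)/2, else -1."""
    p_forward = (1.0 + bias) / 2.0
    total = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += 1 if random.random() < p_forward else -1
        total += x
    return total / n_walkers

# A directional bias gives a mean displacement ~ bias * n_steps, i.e. a
# net invasion drift, whereas unbiased diffusion stays centered at zero.
print(mean_front_position(0.2) > mean_front_position(0.0) + 50)
```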

  18. The effect of recruitment rate and other demographic parameters on the transmission of dengue disease

    NASA Astrophysics Data System (ADS)

    Supriatna, A. K.; Anggriani, N.

    2015-03-01

One of the important factors that appears in most dengue transmission mathematical models is the number of new susceptibles recruited into the susceptible compartment. In this paper we discuss the effect of different recruitment rates on the transmission of dengue disease. We choose a dengue transmission model with the most realistic form of recruitment rate and analyze the effect of environmental change on the transmission of dengue based on the selected model. We model the effect of environmental change by considering that it can alter the value of the mosquito's carrying capacity and the mosquito's death rate. We found that the most prevalent effect of environmental change on the transmission of dengue is when it alters the death rate of the mosquitoes.

  19. Redefining the Axion Window

    NASA Astrophysics Data System (ADS)

    Di Luzio, Luca; Mescia, Federico; Nardi, Enrico

    2017-01-01

A major goal of axion searches is to reach inside the parameter space region of realistic axion models. Currently, the boundaries of this region depend on somewhat arbitrary criteria, and it would be desirable to specify them in terms of precise phenomenological requirements. We consider hadronic axion models and classify the representations RQ of the new heavy quarks Q. By requiring that (i) the Q's are sufficiently short-lived to avoid issues with long-lived strongly interacting relics, and (ii) no Landau poles are induced below the Planck scale, 15 cases are selected which define a phenomenologically preferred axion window bounded by a maximum (minimum) value of the axion-photon coupling about 2 times (4 times) larger than is commonly assumed. Allowing for more than one RQ, larger couplings, as well as complete axion-photon decoupling, become possible.

  20. Shaking stack model of ion conduction through the Ca(2+)-activated K+ channel.

    PubMed Central

    Schumaker, M F

    1992-01-01

    Motivated by the results of Neyton and Miller (1988. J. Gen. Physiol. 92:549-586), suggesting that the Ca(2+)-activated K+ channel has four high affinity ion binding sites, we propose a physically attractive variant of the single-vacancy conduction mechanism for this channel. Simple analytical expressions for conductance, current, flux ratio exponent, and reversal potential under bi-ionic conditions are found. A set of conductance data are analyzed to determine a realistic range of parameter values. Using these, we find qualitative agreement with a variety of experimental results previously reported in the literature. The exquisite selectivity of the Ca(2+)-activated K+ channel may be explained as a consequence of the concerted motion of the "stack" in the proposed mechanism. PMID:1420923

  1. Constraints on brane-world inflation from the CMB power spectrum: revisited

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, Mayukh R.; Mathews, Grant J.

    2018-03-01

We analyze the Randall-Sundrum brane-world inflation scenario in the context of the latest CMB constraints from Planck. We summarize constraints on the most popular classes of models and explore some more realistic inflaton effective potentials. The constraints on standard inflationary parameters change in the brane-world scenario. We confirm that in general the brane-world scenario increases the tensor-to-scalar ratio, thus making this paradigm less consistent with the Planck constraints. Indeed, when BICEP2/Keck constraints are included, all monomial potentials in the brane-world scenario become disfavored compared to the standard scenario. However, for natural inflation the brane-world scenario could fit the constraints better due to larger allowed values of e-foldings N before the end of inflation in the brane-world.

  2. A CDMA system implementation with dimming control for visible light communication

    NASA Astrophysics Data System (ADS)

    Chen, Danyang; Wang, Jianping; Jin, Jianli; Lu, Huimin; Feng, Lifang

    2018-04-01

Visible light communication (VLC), which uses solid-state lighting to transmit information, has become a complementary technology to wireless radio communication. As a realistic multiple-access scheme for VLC systems, code division multiple access (CDMA) has attracted increasing attention in recent years. In this paper, we address and implement an improved CDMA scheme for a VLC system. The simulation results reveal that the improved CDMA scheme not only supports multi-user transmission but also maintains the dimming value at about 50% and enhances system efficiency. It can also realize flexible dimming control by adjusting some parameters of the system structure, which barely affects the system BER performance. A real-time experimental VLC system with the improved CDMA scheme was implemented on a field-programmable gate array (FPGA), achieving good BER performance.
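One way dimming can stay near 50% under CDMA spreading is to use balanced codes, sending the code for bit 1 and its complement for bit 0, so the fraction of "on" chips is data-independent. This sketch illustrates that idea only and is not the paper's exact scheme:

```python
# Balanced Walsh rows (equal on/off chips); any data stream spread this
# way keeps the dimming level at exactly 50%. Illustrative only.
WALSH4 = [[1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 1]]

def spread(bits, code):
    chips = []
    for b in bits:
        chips += code if b else [1 - c for c in code]  # complement for 0
    return chips

def despread(chips, code):
    bipolar = [2 * c - 1 for c in code]
    bits = []
    for i in range(0, len(chips), len(code)):
        # Correlate each chip group against the bipolar code
        corr = sum((2 * ch - 1) * bc for ch, bc in zip(chips[i:], bipolar))
        bits.append(1 if corr > 0 else 0)
    return bits

tx = spread([1, 0, 1, 1], WALSH4[0])
dimming = sum(tx) / len(tx)
print(despread(tx, WALSH4[0]), dimming)
```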

  3. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
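For context, the LKB NTCP model being fitted maps a dose-volume histogram to a complication probability via the generalized EUD and a probit function. A minimal sketch with illustrative parameter values (TD50, m, and n are what the MLE fit returns):

```python
import math

def geud(dose_bins, vol_fracs, n):
    """Generalized EUD of a dose-volume histogram (LKB volume parameter n)."""
    return sum(v * d ** (1.0 / n) for d, v in zip(dose_bins, vol_fracs)) ** n

def lkb_ntcp(dose_bins, vol_fracs, td50, m, n):
    """Lyman-Kutcher-Burman NTCP: probit of t = (gEUD - TD50)/(m*TD50)."""
    t = (geud(dose_bins, vol_fracs, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Uniform dose equal to TD50 gives NTCP = 0.5 by construction
p = lkb_ntcp([60.0], [1.0], td50=60.0, m=0.2, n=0.1)
print(round(p, 2))
```

An MLE fit maximizes the product of p (for patients with toxicity) and 1 − p (without) over (TD50, m, n).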

  4. Nuclear effects in (anti)neutrino charge-current quasielastic scattering at MINER νA kinematics

    NASA Astrophysics Data System (ADS)

    Ivanov, M. V.; Antonov, A. N.; Megias, G. D.; González-Jiménez, R.; Barbaro, M. B.; Caballero, J. A.; Donnelly, T. W.; Udías, J. M.

    2018-05-01

We compare the characteristics of charged-current quasielastic (anti)neutrino scattering obtained in two different nuclear models, the phenomenological SuperScaling Approximation and a model using a realistic spectral function S(p, ɛ) that gives a scaling function in accordance with (e, e′) scattering data, with the recent data published by the MiniBooNE, MINERνA, and NOMAD collaborations. The spectral function accounts for nucleon-nucleon (NN) correlations by using natural orbitals from the Jastrow correlation method and has a realistic energy dependence. Both models provide a good description of the MINERνA and NOMAD data without the need for an ad hoc increase of the value of the mass parameter in the axial-vector dipole form factor. The models considered in this work, based on the impulse approximation (IA), underpredict the MiniBooNE data for the flux-averaged charged-current quasielastic νμ (ν̄μ) + ¹²C differential cross section per nucleon and the total cross sections, although they reproduce the shape of the cross sections. The discrepancy is most likely due to effects missing beyond the IA, e.g., those of the 2p-2h meson exchange currents that contribute to the transverse responses.

  5. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    NASA Technical Reports Server (NTRS)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the lidar analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is needed is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, are due to the large number of scattering events that must be taken into account in the presence of large optical depth and/or strong experimental noise.

  6. Sulfates as chromophores for multiwavelength photoacoustic imaging phantoms

    NASA Astrophysics Data System (ADS)

    Fonseca, Martina; An, Lu; Beard, Paul; Cox, Ben

    2017-12-01

    As multiwavelength photoacoustic imaging becomes increasingly widely used to obtain quantitative estimates, the need for validation studies conducted on well-characterized experimental phantoms becomes ever more pressing. One challenge that such studies face is the design of stable, well-characterized phantoms and absorbers with properties in a physiologically realistic range. This paper performs a full experimental characterization of aqueous solutions of copper and nickel sulfate, whose properties make them close to ideal as chromophores in multiwavelength photoacoustic imaging phantoms. Their absorption varies linearly with concentration, and they mix linearly. The concentrations needed to yield absorption values within the physiological range are below the saturation limit. The shape of their absorption spectra makes them useful analogs for oxy- and deoxyhemoglobin. They display long-term photostability (no indication of bleaching) as well as resistance to transient effects (no saturable absorption phenomena), and are therefore suitable for exposure to typical pulsed photoacoustic light sources, even when exposed to the high number of pulses required in scanning photoacoustic imaging systems. In addition, solutions with tissue-realistic, predictable, and stable scattering can be prepared by mixing sulfates and Intralipid, as long as an appropriate emulsifier is used. Finally, the Grüneisen parameter of the sulfates was found to be larger than that of water and increased linearly with concentration.
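    The linear concentration dependence and linear mixing described above amount to Beer-Lambert additivity of the chromophores. A minimal sketch of that additivity follows; the specific-absorption values in it are purely illustrative placeholders, not the measured CuSO4/NiSO4 spectra from this work.

```python
# Beer-Lambert additivity for a two-chromophore phantom solution.
# The specific absorption values below are placeholders for illustration,
# NOT the measured CuSO4/NiSO4 values from the paper.

def mixture_absorption(concentrations, specific_absorptions):
    """Absorption coefficient (per cm) of a mixture that obeys
    Beer-Lambert linearity: mu_a = sum_i c_i * eps_i."""
    return sum(c * eps for c, eps in zip(concentrations, specific_absorptions))

# Illustrative specific absorption (cm^-1 per mol/L) at two wavelengths
eps_cu = {750: 1.2, 900: 4.0}   # hypothetical "CuSO4-like" spectrum
eps_ni = {750: 3.5, 900: 0.8}   # hypothetical "NiSO4-like" spectrum

c_cu, c_ni = 0.10, 0.25          # mol/L

for wl in (750, 900):
    mu_a = mixture_absorption((c_cu, c_ni), (eps_cu[wl], eps_ni[wl]))
    print(f"{wl} nm: mu_a = {mu_a:.3f} cm^-1")
```

    Because the mixing is linear, a target absorption spectrum in the physiological range can be matched by solving a small linear system for the two concentrations.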

  7. Linking the Weather Generator with Regional Climate Model: Effect of Higher Resolution

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Huth, Radan; Farda, Ales; Skalak, Petr

    2014-05-01

    This contribution builds on our last year's EGU contribution, which had two aims: (i) validation of simulations of the present climate made by the ALADIN-Climate Regional Climate Model (RCM) at 25 km resolution, and (ii) presenting a methodology for linking a parametric weather generator (WG) with RCM output (aiming to calibrate a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations). New higher-resolution (6.25 km) simulations with the same RCM are now available. The main topic of this contribution is the answer to the following question: what is the effect of using a higher spatial resolution on the quality of simulation of surface weather characteristics? In the first part, the high-resolution RCM simulation of the present climate will be validated in terms of selected WG parameters, which are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed at 125 Czech meteorological stations. The set of WG parameters includes statistics of the surface temperature and precipitation series. When comparing the WG parameters from the two sources (RCM vs. observations), we interpolate the RCM-based parameters to the station locations while accounting for the effect of altitude. In the second part, we will discuss the effect of using the higher resolution: the results of the validation tests will be compared with those obtained with the lower-resolution RCM. Acknowledgements: The present experiment is made within the frame of the projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).

  8. Magnetic and velocity fields in a dynamo operating at extremely small Ekman and magnetic Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Šimkanin, Ján; Kyselica, Juraj

    2017-12-01

    Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic the geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner-core rotation rate is lower than in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core, due to the chosen parameter range.

  9. Modeling the Performance Limitations and Prospects of Perovskite/Si Tandem Solar Cells under Realistic Operating Conditions

    PubMed Central

    2017-01-01

    Perovskite/Si tandem solar cells have the potential to considerably outperform conventional solar cells. Under standard test conditions, perovskite/Si tandem solar cells already outperform the Si single junction. Under realistic conditions, however, as we show, tandem solar cells made from current record cells are hardly more efficient than the Si cell alone. We model the performance of realistic perovskite/Si tandem solar cells under real-world climate conditions, by incorporating parasitic cell resistances, nonradiative recombination, and optical losses into the detailed-balance limit. We show quantitatively that when optimizing these parameters in the perovskite top cell, perovskite/Si tandem solar cells could reach efficiencies above 38% under realistic conditions, even while leaving the Si cell untouched. Despite the rapid efficiency increase of perovskite solar cells, our results emphasize the need for further material development, careful device design, and light management strategies, all necessary for highly efficient perovskite/Si tandem solar cells. PMID:28920081

  10. Modeling the Performance Limitations and Prospects of Perovskite/Si Tandem Solar Cells under Realistic Operating Conditions.

    PubMed

    Futscher, Moritz H; Ehrler, Bruno

    2017-09-08

    Perovskite/Si tandem solar cells have the potential to considerably outperform conventional solar cells. Under standard test conditions, perovskite/Si tandem solar cells already outperform the Si single junction. Under realistic conditions, however, as we show, tandem solar cells made from current record cells are hardly more efficient than the Si cell alone. We model the performance of realistic perovskite/Si tandem solar cells under real-world climate conditions, by incorporating parasitic cell resistances, nonradiative recombination, and optical losses into the detailed-balance limit. We show quantitatively that when optimizing these parameters in the perovskite top cell, perovskite/Si tandem solar cells could reach efficiencies above 38% under realistic conditions, even while leaving the Si cell untouched. Despite the rapid efficiency increase of perovskite solar cells, our results emphasize the need for further material development, careful device design, and light management strategies, all necessary for highly efficient perovskite/Si tandem solar cells.
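    The parasitic resistances and nonideal recombination mentioned in the abstract are commonly folded into a single-diode extension of the detailed-balance picture. The sketch below solves that implicit equation for one toy cell; all parameter values are illustrative assumptions, not the fitted record-cell parameters from the paper.

```python
import math

# Single-diode cell model with parasitic resistances (illustrative values,
# NOT the fitted record-cell parameters from the paper):
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh

def cell_current(V, Iph=0.020, I0=1e-12, n=1.3, Rs=2.0, Rsh=5000.0, T=298.15):
    Vt = 1.380649e-23 * T / 1.602176634e-19   # thermal voltage kT/q
    def f(I):  # residual of the implicit single-diode equation
        return Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1) \
               - (V + I * Rs) / Rsh - I
    lo, hi = -Iph, 2 * Iph                     # bracket containing the root
    for _ in range(100):                       # bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Sweep voltage to locate the maximum power point of this toy cell
best = max(((V, V * cell_current(V)) for V in
            [i * 0.005 for i in range(160)]), key=lambda t: t[1])
print(f"MPP near V = {best[0]:.3f} V, P = {best[1]*1e3:.3f} mW")
```

    Increasing Rs or decreasing Rsh in this sketch flattens the I-V knee and pulls the maximum power point down, which is the mechanism by which the parasitic losses erode the tandem advantage under realistic operating conditions.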

  11. Comparison between two photovoltaic module models based on transistors

    NASA Astrophysics Data System (ADS)

    Saint-Eve, Frédéric; Sawicki, Jean-Paul; Petit, Pierre; Maufay, Fabrice; Aillerie, Michel

    2018-05-01

    The main objective of this paper is to verify the possibility of reducing the behavior simulation of an unshaded photovoltaic (PV) module to a simple electronic circuit with very few components. In particular, two models based on well-tried elementary structures are analyzed: the Darlington structure in the first model, and voltage regulation with a programmable Zener diode in the second. Specifications extracted from the behavior of a real I-V characteristic of a panel are considered and the principal electrical variables are deduced. The two models are expected to match the open-circuit voltage, the maximum power point (MPP) and the short-circuit current, together with realistic current slopes on both sides of the MPP. Robustness under varying irradiance is considered an additional fundamental property. For both models, two simulations are performed to identify the influence of some parameters. In the first model, a parameter allowing adjustment of the current slope on the left side of the MPP proves to be important for the calculation of the open-circuit voltage as well. Moreover, this model does not allow a complete adjustment of the I-V characteristic, and the MPP moves significantly away from the real value when irradiance increases. The second model, on the contrary, appears to have only advantages: the open-circuit voltage is easy to calculate, the current slopes are realistic, and there is likely good robustness when irradiance variations are simulated by adjusting the short-circuit current of the PV module. We have shown that these two simplified models can enable reliable and easier simulations of complex PV architectures integrating many different devices, such as PV modules or other renewable energy sources and storage capacities coupled in parallel.

  12. Nonlinear vibration analysis of bladed disks with dry friction dampers

    NASA Astrophysics Data System (ADS)

    Ciğeroğlu, Ender; Özgüven, H. Nevzat

    2006-08-01

    In this work, a new model is proposed for the vibration analysis of turbine blades with dry friction dampers. The aim of the study is to develop a multiblade model that is accurate and yet easy to analyze, so that it can be used efficiently in the design of friction dampers. The suggested nonlinear model for a bladed disk assembly includes all the blades, with blade-to-blade and/or blade-to-cover-plate dry friction dampers. An important feature of the model is that both macro-slip and micro-slip models are used to represent the dry friction dampers. The model is as simple to analyze as a macro-slip model, yet it includes the features of the more realistic micro-slip model. The nonlinear multi-degree-of-freedom (mdof) model of the bladed disk system is analyzed in the frequency domain by applying a quasi-linearization technique, which transforms the nonlinear differential equations into a set of nonlinear algebraic equations. The solution method employed reduces the computational effort drastically compared to time-domain solution methods for nonlinear systems, which makes it possible to obtain a more realistic model by including all blades around the disk, the disk itself, and all friction dampers, since in general the system parameters are not identical throughout the geometry. The validation of the method is demonstrated by comparing the results obtained in this study with those given in the literature and also with results obtained by time-domain analysis. In the case studies presented, the effect of friction damper parameters on the vibration characteristics of tuned and mistuned bladed disk systems is studied using a 20-blade system. It is shown that the method presented can be used to find the optimum friction damper values in a bladed disk assembly.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archambault, L; Papaconstadopoulos, P; Seuntjens, J

    Purpose: To study Cherenkov light emission in plastic scintillation detectors (PSDs) from a theoretical point of view, to identify situations where the calibration coefficient obtained in one condition is not applicable to another. By identifying problematic situations, we hope to provide guidance on how to use PSDs confidently. Methods: Cherenkov light emission in a PSD was modelled using basic physical principles. In particular, changes in refractive index as a function of wavelength were accounted for using the Sellmeier empirical equation. Both electron and photon beams were considered. For photons, realistic distributions of secondary charged particles were calculated using the Klein-Nishina formula. Cherenkov production and collection in PSDs were studied for a range of parameters including beam energy, charged-particle momentum distribution, detector orientation and material composition. Finally, experimental validation was performed using a commercial plastic scintillation detector. Results: In specific situations, results show that the Cherenkov spectrum coupled into the PSD can deviate from its expected behaviour (i.e., one over the square of the wavelength). In these cases, where the model is realistic, it is possible to see a peak wavelength instead of a monotonically decreasing function. The consequences of this phenomenon are negligible when the momentum of the charged particles is distributed randomly, but in some clinically relevant cases, such as an electron beam at a depth close to R50 or a photon beam with a minimal scatter component, the value of the calibration coefficient can be altered. Experimental tests with electron beams showed changes in the Cherenkov light ratio, the parameter used in the calibration of PSDs, of up to 2-3% depending on the PSD orientation. Conclusion: This work is the first to provide a physical explanation for the apparent change in PSD calibration coefficient. With this new information at hand, it will be possible to better guide the clinical use of PSDs.
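    The wavelength dependence described in the Methods can be sketched by combining a Sellmeier dispersion relation with the Frank-Tamm 1/λ² spectrum. The sketch below uses the standard published fused-silica Sellmeier coefficients as a stand-in for the actual PSD materials; that substitution is an assumption, not the paper's data.

```python
import math

# Sellmeier dispersion n(lambda); the coefficients below are the standard
# fused-silica values, used here only as a stand-in for the PSD materials.
B = (0.6961663, 0.4079426, 0.8974794)
C = (0.0684043, 0.1162414, 9.896161)   # micrometres

def refractive_index(lam_um):
    l2 = lam_um ** 2
    return math.sqrt(1.0 + sum(b * l2 / (l2 - c ** 2) for b, c in zip(B, C)))

def cherenkov_spectrum(lam_um, beta):
    """Relative Frank-Tamm photon yield dN/dlambda ~ (1/lambda^2) *
    (1 - 1/(beta^2 n^2)); zero below the Cherenkov threshold beta*n < 1."""
    n = refractive_index(lam_um)
    if beta * n <= 1.0:
        return 0.0
    return (1.0 - 1.0 / (beta ** 2 * n ** 2)) / lam_um ** 2

# With a fixed beta, dispersion makes the spectrum deviate slightly from a
# pure 1/lambda^2 law; near the threshold the deviation becomes strong.
for lam in (0.40, 0.50, 0.60):
    print(f"{lam*1000:.0f} nm: n = {refractive_index(lam):.4f}, "
          f"dN/dlam ~ {cherenkov_spectrum(lam, beta=0.9):.3f}")
```

    Because the threshold factor depends on n(λ), a nearly monoenergetic charged-particle population close to threshold can reshape the coupled spectrum away from 1/λ², which is the qualitative mechanism behind the peaked spectra reported above.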

  14. Analysis of the right-handed Majorana neutrino mass in an SU(4)×SU(2)L×SU(2)R Pati-Salam model with democratic texture

    NASA Astrophysics Data System (ADS)

    Yang, Masaki J. S.

    2017-03-01

    In this paper, we attempt to build a unified model with the democratic texture that has some unification between the up-type Yukawa interactions Yν and Yu. Since the S3L×S3R flavor symmetry is chiral, the unified gauge group is assumed to be of Pati-Salam type, SU(4)c×SU(2)L×SU(2)R. The breaking scheme of the flavor symmetry is taken to be S3L×S3R → S2L×S2R → 0. In this picture, the four-zero texture is desirable for realistic masses and mixings. This texture is realized by a specific representation for the second breaking of the S3L×S3R flavor symmetry. Assuming only renormalizable Yukawa interactions and the type-I seesaw mechanism, and neglecting CP phases for simplicity, the right-handed neutrino mass matrix MR can be reconstructed from low-energy input values. Numerical analysis shows that the texture of MR basically behaves like the "waterfall texture." Since MR tends to be the "cascade texture" in the democratic texture approach, a model with type-I seesaw and up-type Yukawa unification Yν ≃ Yu basically requires fine-tuning between parameters. Therefore, it seems more realistic to consider universal waterfall textures for both Yf and MR, e.g., by radiative mass generation or the Froggatt-Nielsen mechanism. Moreover, analysis of the eigenvalues shows that the lightest mass eigenvalue MR1 is too light to achieve successful thermal leptogenesis. Although resonant leptogenesis might be possible, it also requires fine-tuning of parameters.

  15. Predictive uncertainty analysis of plume distribution for geological carbon sequestration using sparse-grid Bayesian method

    NASA Astrophysics Data System (ADS)

    Shi, X.; Zhang, G.

    2013-12-01

    Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for CO2 plume migration in realistic GCS models is due not only to the spatial distribution of the caprock and reservoir (i.e., heterogeneous model parameters), but also to the fact that the GCS optimization estimation problem has multiple local minima arising from the complex nonlinear multi-phase (gas and aqueous), multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model takes about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. The surrogate response-surface global optimization algorithm is first used to calibrate the model parameters; the prediction uncertainty of the CO2 plume position is then quantified as parametric uncertainty propagates through the numerical experiments, and the result is compared to the actual plume from the 'true' model. Results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification with computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.

  16. Characteristics of sub-daily precipitation extremes in observed data and regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Beranová, Romana; Kyselý, Jan; Hanel, Martin

    2018-04-01

    The study compares characteristics of observed sub-daily precipitation extremes in the Czech Republic with those simulated by Hadley Centre Regional Model version 3 (HadRM3) and Rossby Centre Regional Atmospheric Model version 4 (RCA4) regional climate models (RCMs) driven by reanalyses and examines diurnal cycles of hourly precipitation and their dependence on intensity and surface temperature. The observed warm-season (May-September) maxima of short-duration (1, 2 and 3 h) amounts show one diurnal peak in the afternoon, which is simulated reasonably well by RCA4, although the peak occurs too early in the model. HadRM3 provides an unrealistic diurnal cycle with a nighttime peak and an afternoon minimum coinciding with the observed maximum for all three ensemble members, which suggests that convection is not captured realistically. Distorted relationships of the diurnal cycles of hourly precipitation to daily maximum temperature in HadRM3 further evidence that underlying physical mechanisms are misrepresented in this RCM. Goodness-of-fit tests indicate that generalised extreme value distribution is an applicable model for both observed and RCM-simulated precipitation maxima. However, the RCMs are not able to capture the range of the shape parameter estimates of distributions of short-duration precipitation maxima realistically, leading to either too many (nearly all for HadRM3) or too few (RCA4) grid boxes in which the shape parameter corresponds to a heavy tail. This means that the distributions of maxima of sub-daily amounts are distorted in the RCM-simulated data and do not match reality well. Therefore, projected changes of sub-daily precipitation extremes in climate change scenarios based on RCMs not resolving convection need to be interpreted with caution.
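    The role of the GEV shape parameter discussed above can be illustrated with the GEV quantile (return-level) function. The sketch below uses illustrative location and scale values, not parameters fitted to the Czech data; it shows how a heavy tail (shape ξ > 0) inflates high quantiles relative to the Gumbel case.

```python
import math

def gev_quantile(p, mu, sigma, xi):
    """Quantile (return level) of the GEV distribution at probability p.
    xi > 0: heavy (Frechet-type) tail; xi = 0: Gumbel; xi < 0: bounded tail."""
    if abs(xi) < 1e-12:                        # Gumbel limit as xi -> 0
        return mu - sigma * math.log(-math.log(p))
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi

# Illustrative 1-h precipitation maxima parameters (mm): mu=12, sigma=5
for xi in (-0.1, 0.0, 0.2):
    q99 = gev_quantile(0.99, 12.0, 5.0, xi)    # roughly a 100-year event
    print(f"xi = {xi:+.1f}: 0.99 quantile = {q99:.1f} mm")
```

    The spread between these three quantiles at a fixed probability is why an RCM that misestimates the shape parameter, as reported above, distorts projected sub-daily extremes even if the bulk of the distribution looks reasonable.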

  17. Scaling of flow and transport behavior in heterogeneous groundwater systems

    NASA Astrophysics Data System (ADS)

    Scheibe, Timothy; Yabusaki, Steven

    1998-11-01

    Three-dimensional numerical simulations using a detailed synthetic hydraulic conductivity field developed from geological considerations provide insight into the scaling of subsurface flow and transport processes. Flow and advective transport in the highly resolved heterogeneous field were modeled using massively parallel computers, providing a realistic baseline for evaluation of the impacts of parameter scaling. Upscaling of hydraulic conductivity was performed at a variety of scales using a flexible power law averaging technique. A series of tests were performed to determine the effects of varying the scaling exponent on a number of metrics of flow and transport behavior. Flow and transport simulation on high-performance computers and three-dimensional scientific visualization combine to form a powerful tool for gaining insight into the behavior of complex heterogeneous systems. Many quantitative groundwater models utilize upscaled hydraulic conductivity parameters, either implicitly or explicitly. These parameters are designed to reproduce the bulk flow characteristics at the grid or field scale while not requiring detailed quantification of local-scale conductivity variations. An example from applied groundwater modeling is the common practice of calibrating grid-scale model hydraulic conductivity or transmissivity parameters so as to approximate observed hydraulic head and boundary flux values. Such parameterizations, perhaps with a bulk dispersivity imposed, are then sometimes used to predict transport of reactive or non-reactive solutes. However, this work demonstrates that those parameters that lead to the best upscaling for hydraulic conductivity and head do not necessarily correspond to the best upscaling for prediction of a variety of transport behaviors. 
This result reflects the fact that transport is strongly impacted by the existence and connectedness of extreme-valued hydraulic conductivities, in contrast to bulk flow, which depends more strongly on mean values. It provides motivation for continued research into upscaling methods for transport that directly address advection in heterogeneous porous media. An electronic version of this article is available online at the journal's homepage at http://www.elsevier.nl/locate/advwatres or http://www.elsevier.com/locate/advwatres (see "Special section on visualization"). The online version contains additional supporting information, graphics, and a 3D animation of simulated particle movement.
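    The flexible power-law averaging mentioned above is the power mean K_up = (⟨K^p⟩)^(1/p), which recovers the harmonic, geometric, and arithmetic means at p = -1, 0, and +1. A minimal sketch, with a hypothetical set of local conductivities:

```python
import math

def power_mean(values, p):
    """Power-law average (<K^p>)^(1/p) used for upscaling hydraulic
    conductivity; p -> 0 gives the geometric mean as the limiting case."""
    n = len(values)
    if abs(p) < 1e-12:
        return math.exp(sum(math.log(v) for v in values) / n)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)

# Local-scale conductivities of a hypothetical grid block (m/day)
K = [0.1, 1.0, 10.0, 100.0]

for p, label in [(-1.0, "harmonic"), (0.0, "geometric"), (1.0, "arithmetic")]:
    print(f"p = {p:+.1f} ({label}): K_up = {power_mean(K, p):.3f}")
```

    Varying the exponent p between these limits is exactly the knob the study turns: the p that best reproduces heads and bulk fluxes need not be the p that best reproduces transport, because transport is dominated by the connected extreme values rather than the mean.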

  18. Effects of damping on mode shapes, volume 2

    NASA Technical Reports Server (NTRS)

    Gates, R. M.; Merchant, D. H.; Arnquist, J. L.

    1977-01-01

    Displacement, velocity, and acceleration admittances were calculated for a realistic NASTRAN structural model of space shuttle for three conditions: liftoff, maximum dynamic pressure and end of solid rocket booster burn. The realistic model of the orbiter, external tank, and solid rocket motors included the representation of structural joint transmissibilities by finite stiffness and damping elements. Data values for the finite damping elements were assigned to duplicate overall low-frequency modal damping values taken from tests of similar vehicles. For comparison with the calculated admittances, position and rate gains were computed for a conventional shuttle model for the liftoff condition. Dynamic characteristics and admittances for the space shuttle model are presented.

  19. Family of columns isospectral to gravity-loaded columns with tip force: A discrete approach

    NASA Astrophysics Data System (ADS)

    Ramachandran, Nirmal; Ganguli, Ranjan

    2018-06-01

    A discrete model is introduced to analyze transverse vibration of straight, clamped-free (CF) columns of variable cross-sectional geometry under the influence of gravity and a constant axial force at the tip. The discrete model is used to determine critical combinations of loading parameters - a gravity parameter and a tip force parameter - that cause onset of dynamic instability in the CF column. A methodology, based on matrix-factorization, is described to transform the discrete model into a family of models corresponding to weightless and unloaded clamped-free (WUCF) columns, each with a transverse vibration spectrum isospectral to the original model. Characteristics of models in this isospectral family are dependent on three transformation parameters. A procedure is discussed to convert the isospectral discrete model description into geometric description of realistic columns i.e. from the discrete model, we construct isospectral WUCF columns with rectangular cross-sections varying in width and depth. As part of numerical studies to demonstrate efficacy of techniques presented, frequency parameters of a uniform column and three types of tapered CF columns under different combinations of loading parameters are obtained from the discrete model. Critical combinations of these parameters for a typical tapered column are derived. These results match with published results. Example CF columns, under arbitrarily-chosen combinations of loading parameters are considered and for each combination, isospectral WUCF columns are constructed. Role of transformation parameters in determining characteristics of isospectral columns is discussed and optimum values are deduced. Natural frequencies of these WUCF columns computed using Finite Element Method (FEM) match well with those of the given gravity-loaded CF column with tip force, hence confirming isospectrality.

  20. The Thermal Regulation of Gravitational Instabilities in Protoplanetary Disks. III. Simulations with Radiative Cooling and Realistic Opacities

    NASA Astrophysics Data System (ADS)

    Boley, Aaron C.; Mejía, Annie C.; Durisen, Richard H.; Cai, Kai; Pickett, Megan K.; D'Alessio, Paola

    2006-11-01

    This paper presents a fully three-dimensional radiative hydrodynamics simulation with realistic opacities for a gravitationally unstable 0.07 Msolar disk around a 0.5 Msolar star. We address the following aspects of disk evolution: the strength of gravitational instabilities (GIs) under realistic cooling, mass transport in the disk that arises from GIs, comparisons between the gravitational and Reynolds stresses measured in the disk and those expected in an α-disk, and comparisons between the SED derived for the disk and SEDs derived from observationally determined parameters. The mass transport in this disk is dominated by global modes, and the cooling times are too long to permit fragmentation at all radii. Moreover, our results suggest a plausible explanation for the FU Ori outburst phenomenon.

  1. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting

    NASA Astrophysics Data System (ADS)

    Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang

    In this paper we studied the geometry of a three-dimensional tableau from a single realist painting, Scott Fraser's Three way vanitas (2006). The tableau contains a carefully chosen, complex arrangement of objects including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau, from a virtual camera behind each mirror and symmetric to the artist's viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and for the images in the three plane mirrors depicted within the painting.

  2. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan

    2016-09-01

    Land surface models incorporate a large number of process descriptions containing a multitude of parameters. These parameters are typically read from tabulated input files. Some parameters, however, are fixed numbers in the computer code, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures from the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
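    The Sobol' analysis used above attributes output variance to individual parameters. A minimal pick-freeze (Saltelli-style) estimator for first-order indices might look like the sketch below; the two-parameter linear model in it is a toy stand-in with known analytic indices, not the Noah-MP code.

```python
import random

# First-order Sobol' index S_i by the pick-freeze estimator:
#   S_i = Cov(f(A), f(AB_i)) / Var(f(A)),
# where sample AB_i takes column i from matrix A and the rest from B.
# The model below is a toy stand-in, NOT Noah-MP.

def model(x1, x2):
    return x1 + 2.0 * x2          # analytic indices: S1 = 0.2, S2 = 0.8

def sobol_first_order(n=50_000, seed=42):
    rng = random.Random(seed)
    A = [(rng.random(), rng.random()) for _ in range(n)]
    B = [(rng.random(), rng.random()) for _ in range(n)]
    fA = [model(*a) for a in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(2):            # freeze parameter i from A, rest from B
        fAB = [model(*(a[j] if j == i else b[j] for j in range(2)))
               for a, b in zip(A, B)]
        cov = sum(ya * yab for ya, yab in zip(fA, fAB)) / n - mean ** 2
        indices.append(cov / var)
    return indices

S1, S2 = sobol_first_order()
print(f"S1 ~ {S1:.2f} (analytic 0.2), S2 ~ {S2:.2f} (analytic 0.8)")
```

    For an expensive model like Noah-MP the same estimator applies, but each entry of fA and fAB is a full model run, which is why the study restricts the analysis to one set of process options and a manageable parameter subset.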

  3. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  4. Fluid Physics in a Fluctuating Acceleration Environment

    NASA Technical Reports Server (NTRS)

    Thomson, J. Ross; Drolet, Francois; Vinals, Jorge

    1996-01-01

    We summarize several aspects of an ongoing investigation of the effects that stochastic residual accelerations (g-jitter) onboard spacecraft can have on experiments conducted in a microgravity environment. The residual acceleration field is modeled as narrow-band noise characterized by three independent parameters: its intensity ⟨g²⟩, dominant angular frequency Ω, and characteristic correlation time τ. Realistic values for these parameters are obtained from an analysis of acceleration data corresponding to the SL-J mission, as recorded by the SAMS instruments. We then use the model to address the random motion of a solid particle suspended in an incompressible fluid subjected to such random accelerations. As an extension, the effect of jitter on coarsening of a solid-liquid mixture is briefly discussed, and corrections to diffusion-controlled coarsening are evaluated. We conclude that jitter will not be significant in the experiment 'Coarsening of solid-liquid mixtures' to be conducted in microgravity. Finally, modifications to the location of the onset of instability in systems driven by a random force are discussed by extending the standard reduction to the center manifold to the stochastic case. Results pertaining to time-modulated oscillatory convection are briefly discussed.

  5. A Robust and Fast Method to Compute Shallow States without Adjustable Parameters: Simulations for a Silicon-Based Qubit

    NASA Astrophysics Data System (ADS)

    Debernardi, Alberto; Fanciulli, Marco

    Within the framework of the envelope function approximation we have computed - without adjustable parameters and with a reduced computational effort due to analytical expression of relevant Hamiltonian terms - the energy levels of the shallow P impurity in silicon and the hyperfine and superhyperfine splitting of the ground state. We have studied the dependence of these quantities on the applied external electric field along the [001] direction. Our results reproduce correctly the experimental splitting of the impurity ground states detected at zero electric field and provide reliable predictions for values of the field where experimental data are lacking. Further, we have studied the effect of confinement of a shallow state of a P atom at the center of a spherical Si-nanocrystal embedded in a SiO2 matrix. In our simulations the valley-orbit interaction of a realistically screened Coulomb potential and of the core potential are included exactly, within the numerical accuracy due to the use of a finite basis set, while band-anisotropy effects are taken into account within the effective-mass approximation.

  6. On the chaotic diffusion in multidimensional Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Cincotta, P. M.; Giordano, C. M.; Martí, J. G.; Beaugé, C.

    2018-01-01

    We present numerical evidence that diffusion in the multidimensional near-integrable Hamiltonian systems studied herein departs from a normal process, at least on realistic timescales. Therefore, the derivation of a diffusion coefficient from a linear fit to the variance evolution of the unperturbed integrals fails. We review some topics on diffusion in the Arnold Hamiltonian and give numerical and theoretical arguments to show that, in the examples we considered, a standard coefficient would not provide a good estimate of the speed of diffusion. Numerical experiments concerning diffusion can, however, provide reliable information about the stability of the motion within chaotic regions of phase space. In this direction, we present an extension of previous results concerning the dynamical structure of the Laplace resonance in the Gliese-876 planetary system, considering variations of the orbital parameters according to the error introduced by the radial velocity determination. We found that a slight variation of the eccentricity of planet c would destabilize the inner region of the resonance, which, though chaotic, appears stable when the best-fit values of the parameters are adopted.

  7. Taking error into account when fitting models using Approximate Bayesian Computation.

    PubMed

    van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M

    2018-03-01

    Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
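
    The idea of handling repeated measures with normally distributed, independent error inside an ABC scheme can be conveyed with a minimal rejection-ABC sketch. This is a generic toy example, not the authors' error-calibrated algorithm; the simulator, prior bounds, and error level are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy setting: repeated measures of one quantity with normal,
# independent measurement error (the paper's stated assumption)
true_theta, sigma_err, n_obs = 3.0, 0.5, 20
obs = true_theta + rng.normal(0.0, sigma_err, n_obs)

n_sim = 20000
prior = rng.uniform(0.0, 6.0, n_sim)                  # flat prior on theta
# simulator output plus an explicit measurement-error term
sims = prior[:, None] + rng.normal(0.0, sigma_err, (n_sim, n_obs))
dist = np.abs(sims.mean(axis=1) - obs.mean())         # summary: sample mean
posterior = prior[dist < np.quantile(dist, 0.01)]     # keep the closest 1%
```

    The accepted sample approximates the posterior for theta; the error-calibrated variant in the paper replaces this hard accept/reject step with acceptance probabilities derived from the error model.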

  8. Impact of time delay on the dynamics of SEIR epidemic model using cellular automata

    NASA Astrophysics Data System (ADS)

    Sharma, Natasha; Gupta, Arvind Kumar

    2017-04-01

    The delay of an infectious disease is significant when aiming to predict its strength and spreading patterns. In this paper the SEIR (susceptible-exposed-infected-recovered) epidemic spread with time delay is analyzed through a two-dimensional cellular automata model. The time delay, corresponding to the infectious span, predominantly includes death during the latency period in the course of infection. The evolution of the whole system is described by an SEIR transition function complemented with crucial factors such as inhomogeneous population distribution, birth, and disease-independent mortality. Moreover, to reflect more realistic population dynamics, some stochastic parameters such as population movement and connections at the local level are also considered. The existence and stability of the disease-free equilibrium is investigated. Two prime behavioral patterns of the disease dynamics are found, depending on delay. The critical value of delay, beyond which there are notable variations in spread patterns, is computed. The influence of important parameters affecting the disease dynamics on the basic reproduction number is also examined. The results obtained show that delay plays an affirmative role in controlling disease progression in an infected host.
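
    A latency delay of the kind described can be sketched in a minimal two-dimensional cellular automaton: exposed cells only become infectious after a fixed number of steps. This toy update is illustrative only; it omits the paper's birth, mortality, and movement factors, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# states: 0=S, 1=E, 2=I, 3=R; all parameter values are illustrative
L, T_LAT, T_INF, P_INF = 50, 3, 5, 0.3
state = np.zeros((L, L), dtype=int)
timer = np.zeros((L, L), dtype=int)   # steps since exposure
state[L // 2, L // 2] = 2             # seed a single infected cell

def step(state, timer):
    inf = (state == 2).astype(float)
    # number of infected von Neumann neighbours (periodic boundary)
    nbrs = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0) +
            np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
    # each infected neighbour independently transmits with prob. P_INF
    expose = (state == 0) & (rng.random((L, L)) < 1 - (1 - P_INF) ** nbrs)
    timer[state > 0] += 1
    state[(state == 2) & (timer >= T_LAT + T_INF)] = 3  # I -> R
    state[(state == 1) & (timer >= T_LAT)] = 2          # E -> I after the delay
    state[expose] = 1
    timer[expose] = 0
    return state, timer

for _ in range(60):
    state, timer = step(state, timer)
```

    Varying T_LAT in such a sketch is the discrete analogue of the delay parameter whose critical value the paper computes.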

  9. Parameterization of Keeling's network generation algorithm.

    PubMed

    Badham, Jennifer; Abbass, Hussein; Stocker, Rob

    2008-09-01

    Simulation is increasingly being used to examine epidemic behaviour and assess potential management options. The utility of such simulations relies on the ability to replicate those aspects of the social structure that are relevant to epidemic transmission. One approach is to generate networks with desired social properties. Recent research by Keeling and his colleagues has generated simulated networks with a range of properties, and examined the impact of these properties on epidemic processes occurring over the network. However, published work has included only limited analysis of the algorithm itself and the way in which the network properties are related to the algorithm parameters. This paper identifies some relationships between the algorithm parameters and selected network properties (mean degree, degree variation, clustering coefficient and assortativity). Our approach enables users of the algorithm to efficiently generate a network with given properties, thereby allowing realistic social networks to be used as the basis of epidemic simulations. Alternatively, the algorithm could be used to generate social networks with a range of property values, enabling analysis of the impact of these properties on epidemic behaviour.

  10. Effect of spin-orbit and on-site Coulomb interactions on the electronic structure and lattice dynamics of uranium monocarbide

    NASA Astrophysics Data System (ADS)

    Wdowik, U. D.; Piekarz, P.; Legut, D.; Jagło, G.

    2016-08-01

    Uranium monocarbide, a potential fuel material for generation IV reactors, is investigated within density functional theory. Its electronic, magnetic, elastic, and phonon properties are analyzed and discussed in terms of the spin-orbit interaction and the localized versus itinerant behavior of the 5f electrons. The localization of the 5f states is tuned by varying the local Coulomb repulsion interaction parameter. We demonstrate that the theoretical electronic structure, elastic constants, phonon dispersions, and their densities of states accurately reproduce the results of x-ray photoemission and bremsstrahlung isochromat measurements, as well as inelastic neutron scattering experiments, only when the 5f states experience the spin-orbit interaction and simultaneously remain partially localized. This partial localization of the 5f electrons can be represented by a moderate value of the on-site Coulomb interaction parameter of about 2 eV. The results of the present studies indicate that both strong electron correlations and spin-orbit effects are crucial for a realistic theoretical description of the ground-state properties of uranium carbide.

  11. Discrete simulations of spatio-temporal dynamics of small water bodies under varied stream flow discharges

    NASA Astrophysics Data System (ADS)

    Daya Sagar, B. S.

    2005-01-01

    Spatio-temporal patterns of small water bodies (SWBs) under the influence of temporally varied stream flow discharge are simulated in discrete space by employing geomorphologically realistic expansion and contraction transformations. Cascades of expansion-contraction are systematically performed by synchronizing them with stream flow discharge simulated via the logistic map. Templates with definite characteristic information are defined from the stream flow discharge pattern as the basis to model the spatio-temporal organization of randomly situated surface water bodies of various sizes and shapes. The resulting spatio-temporal patterns under various values λ of the nonlinear parameter controlling the discharge pattern are characterized by estimating their fractal dimensions. At each λ, we show the union of the boundaries of the water bodies, which traverses the water-body and non-water-body spaces, as a geomorphic attractor. The computed fractal dimensions of these attractors are 1.58, 1.53, 1.78, 1.76, 1.84, and 1.90 at λ values of 1, 2, 3, 3.46, 3.57, and 3.99, respectively. These values are in line with general visual observations.
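
    The discharge driver here is the logistic map x_{t+1} = λ x_t (1 − x_t), whose behavior ranges from a fixed point to chaos over the λ values cited in the abstract. A minimal sketch:

```python
# logistic map x_{t+1} = lam * x_t * (1 - x_t) as a discharge surrogate
def logistic_series(lam, x0=0.4, n=200):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(lam * xs[-1] * (1.0 - xs[-1]))
    return xs

# the lambda values used in the abstract; at 1 and 2 the series settles,
# near 3 it oscillates, and at 3.99 it is chaotic
for lam in (1.0, 2.0, 3.0, 3.46, 3.57, 3.99):
    tail = logistic_series(lam)[-3:]
    print(lam, [round(x, 3) for x in tail])
```

    Since the map keeps x in [0, 1] for λ ≤ 4, the series can be rescaled to any physical discharge range before driving the expansion-contraction cascades.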

  12. A TCP model for external beam treatment of intermediate-risk prostate cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Sean; Putten, Wil van der

    2013-03-15

    Purpose: Biological models offer the ability to predict clinical outcomes. The authors describe a model to predict the clinical response of intermediate-risk prostate cancer to external beam radiotherapy for a variety of fractionation regimes. Methods: A fully heterogeneous population-averaged tumor control probability model was fit to clinical outcome data for hyper-, standard, and hypofractionated treatments. The tumor control probability model was then employed to predict the clinical outcome of extreme hypofractionation regimes, as utilized in stereotactic body radiotherapy. Results: The tumor control probability model achieves an excellent level of fit (R² value of 0.93 and a root mean squared error of 1.31%) to the clinical outcome data for hyper-, standard, and hypofractionated treatments using realistic values for the biological input parameters. Residuals ≤ 1.0% are produced by the tumor control probability model when compared to clinical outcome data for stereotactic body radiotherapy. Conclusions: The authors conclude that this tumor control probability model, used with the optimized radiosensitivity values obtained from the fit, is an appropriate mechanistic model for the analysis and evaluation of external beam RT plans with regard to tumor control under these clinical conditions.
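
    The basic shape of such models can be conveyed by a generic Poissonian tumor control probability with linear-quadratic cell kill. This sketch is not the authors' fully heterogeneous population-averaged model; the radiosensitivity values alpha and beta and the clonogen number N0 are illustrative assumptions only.

```python
import numpy as np

# generic Poisson TCP with linear-quadratic cell survival:
# surviving clonogens = N0 * exp(-n * (alpha*d + beta*d^2)),  TCP = exp(-surviving)
# alpha, beta, N0 are placeholder values, not the paper's fitted ones
def tcp(n_frac, d_per_frac, alpha=0.15, beta=0.05, N0=1e6):
    surviving = N0 * np.exp(-n_frac * (alpha * d_per_frac + beta * d_per_frac ** 2))
    return np.exp(-surviving)

tcp_standard = tcp(39, 2.0)   # a conventional 39 x 2 Gy schedule
tcp_sbrt = tcp(5, 7.25)       # an SBRT-like 5 x 7.25 Gy schedule
```

    Comparing such schedules at fixed alpha/beta is what allows a fitted model to extrapolate from standard fractionation to extreme hypofractionation.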

  13. Influence of Resting Venous Blood Volume Fraction on Dynamic Causal Modeling and System Identifiability.

    PubMed

    Hu, Zhenghui; Ni, Pengyu; Wan, Qun; Zhang, Yan; Shi, Pengcheng; Lin, Qiang

    2016-07-08

    Changes in BOLD signals are sensitive to the regional blood content associated with the vasculature, known as V0 in hemodynamic models. In previous studies involving dynamic causal modeling (DCM), which embodies the hemodynamic model to invert functional magnetic resonance imaging signals into neuronal activity, V0 was arbitrarily set to a physiologically plausible value to overcome the ill-posedness of the inverse problem. It is therefore interesting to investigate how the V0 value influences DCM. In this study we addressed this issue using both synthetic and real experiments. The results show that the ability of DCM analysis to reveal information about brain causality depends critically on the V0 value assumed in the analysis procedure. The choice of V0 value not only directly affects the strength of system connections but, more importantly, also affects the inferences about the network architecture. Our analyses speak to a possible refinement of how the hemodynamic process is parameterized (i.e., by making V0 a free parameter); however, the conditional dependencies induced by a more complex model may create more problems than they solve. Obtaining more realistic V0 information in DCM can improve the identifiability of the system and would provide more reliable inferences about the properties of brain connectivity.

  14. Electromechanical coupling factor of capacitive micromachined ultrasonic transducers.

    PubMed

    Caronti, Alessandro; Carotenuto, Riccardo; Pappalardo, Massimo

    2003-01-01

    Recently, a linear, analytical distributed model for capacitive micromachined ultrasonic transducers (CMUTs) was presented, and an electromechanical equivalent circuit based on the theory reported was used to describe the behavior of the transducer [IEEE Trans. Ultrason. Ferroelectr. Freq. Control 49, 159-168 (2002)]. The distributed model is applied here to calculate the dynamic coupling factor k(w) of a lossless CMUT, based on a definition that involves the energies stored in a dynamic vibration cycle, and the results are compared with those obtained with a lumped model. A strong discrepancy is found between the two models as the bias voltage increases. The lumped model predicts an increasing dynamic k factor up to unity, whereas the distributed model predicts a more realistic saturation of this parameter to values substantially lower. It is demonstrated that the maximum value of k(w), corresponding to an operating point close to the diaphragm collapse, is 0.4 for a CMUT single cell with a circular membrane diaphragm and no parasitic capacitance (0.36 for a cell with a circular plate diaphragm). This means that the dynamic coupling factor of a CMUT is comparable to that of a piezoceramic plate oscillating in the thickness mode. Parasitic capacitance decreases the value of k(w), because it does not contribute to the energy conversion. The effective coupling factor k(eff) is also investigated, showing that this parameter coincides with k(w) within the lumped model approximation, but a quite different result is obtained if a computation is made with the more accurate distributed model. As a consequence, k(eff), which can be measured from the transducer electrical impedance, does not give a reliable value of the actual dynamic coupling factor.

  16. An operational retrieval algorithm for determining aerosol optical properties in the ultraviolet

    NASA Astrophysics Data System (ADS)

    Taylor, Thomas E.; L'Ecuyer, Tristan S.; Slusser, James R.; Stephens, Graeme L.; Goering, Christian D.

    2008-02-01

    This paper describes a number of practical considerations concerning the optimization and operational implementation of an algorithm used to characterize the optical properties of aerosols across part of the ultraviolet (UV) spectrum. The algorithm estimates values of aerosol optical depth (AOD) and aerosol single scattering albedo (SSA) at seven wavelengths in the UV, as well as total column ozone (TOC) and wavelength-independent asymmetry factor (g) using direct and diffuse irradiances measured with a UV multifilter rotating shadowband radiometer (UV-MFRSR). A novel method for cloud screening the irradiance data set is introduced, as well as several improvements and optimizations to the retrieval scheme which yield a more realistic physical model for the inversion and increase the efficiency of the algorithm. Introduction of a wavelength-dependent retrieval error budget generated from rigorous forward model analysis as well as broadened covariances on the a priori values of AOD, SSA and g and tightened covariances of TOC allows sufficient retrieval sensitivity and resolution to obtain unique solutions of aerosol optical properties as demonstrated by synthetic retrievals. Analysis of a cloud screened data set (May 2003) from Panther Junction, Texas, demonstrates that the algorithm produces realistic values of the optical properties that compare favorably with pseudo-independent methods for AOD, TOC and calculated Ångstrom exponents. Retrieval errors of all parameters (except TOC) are shown to be negatively correlated to AOD, while the Shannon information content is positively correlated, indicating that retrieval skill improves with increasing atmospheric turbidity. 
When implemented operationally on more than thirty instruments in the Ultraviolet Monitoring and Research Program's (UVMRP) network, this retrieval algorithm will provide a comprehensive and internally consistent climatology of ground-based aerosol properties in the UV spectral range that can be used for both validation of satellite measurements as well as regional aerosol and ultraviolet transmission studies.

  17. The mass spectra, hierarchy and cosmology of B-L MSSM heterotic compactifications

    DOE PAGES

    Ambroso, Michael; Ovrut, Burt A.

    2011-04-10

    The matter spectrum of the MSSM, including three right-handed neutrino supermultiplets and one pair of Higgs-Higgs conjugate superfields, can be obtained by compactifying the E₈ × E₈ heterotic string and M-theory on Calabi-Yau manifolds with specific SU(4) vector bundles. These theories have the standard model gauge group augmented by an additional gauged U(1) B-L. Their minimal content requires that the B-L gauge symmetry be spontaneously broken by a vacuum expectation value of at least one right-handed neutrino. In previous papers, we presented the results of a quasi-analytic renormalization group analysis showing that B-L gauge symmetry is indeed radiatively broken with an appropriate B-L/electroweak hierarchy. In this paper, we extend these results by 1) enlarging the initial parameter space and 2) explicitly calculating all renormalization group equations numerically. The regions of the initial parameter space leading to realistic vacua are presented and the B-L/electroweak hierarchy is computed over these regimes. At representative points, the mass spectrum for all particles and Higgs fields is calculated and shown to be consistent with present experimental bounds. Some fundamental phenomenological signatures of a non-zero right-handed neutrino expectation value are discussed, particularly the cosmology and proton lifetime arising from induced lepton- and baryon-number-violating interactions.

  18. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  19. Analysis of Self-Associating Proteins by Singular Value Decomposition of Solution Scattering Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Tim E.; Craig, Bruce A.; Kondrashkina, Elena

    2008-07-08

    We describe a method by which a single experiment can reveal both the association model (pathway and constants) and low-resolution structures of a self-associating system. Small-angle scattering data are collected from solutions at a range of concentrations. These scattering curves are mass-weighted linear combinations of the scattering from each oligomer. Singular value decomposition of the data yields a set of basis vectors from which the scattering curve for each oligomer is reconstructed using coefficients that depend on the association model. A search identifies the association pathway and constants that provide the best agreement between reconstructed and observed data. Using simulated data with realistic noise, our method finds the correct pathway and association constants. Depending on the simulation parameters, reconstructed curves for each oligomer differ from the ideal by 0.05-0.99% in median absolute relative deviation. The reconstructed scattering curves are fundamental to further analysis, including interatomic distance distribution calculation and low-resolution ab initio shape reconstruction of each oligomer in solution. This method can be applied to x-ray or neutron scattering data from small angles to moderate (or higher) resolution. Data can be taken under physiological conditions, or particular conditions (e.g., temperature) can be varied to extract fundamental association parameters (ΔH_ass, S_ass).
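
    The decomposition step can be sketched on synthetic data: a few "concentrations", each a mass-weighted mixture of two toy oligomer curves, are stacked into a matrix and factorized by SVD. The number of significant singular values then estimates the number of independent scattering species. The curve shapes, weights, and noise level below are assumptions for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "scattering curves": Guinier-like toy profiles for a
# monomer and a dimer (shapes and scales are illustrative only)
q = np.linspace(0.01, 0.3, 120)
monomer = np.exp(-(q * 12) ** 2 / 3)
dimer = 2.0 * np.exp(-(q * 18) ** 2 / 3)

# each row mimics one concentration: a mass-weighted mixture
weights = np.array([[0.9, 0.1], [0.7, 0.3], [0.5, 0.5], [0.2, 0.8]])
data = weights @ np.vstack([monomer, dimer])
data += rng.normal(0.0, 1e-4, data.shape)   # small measurement noise

# two singular values dominate -> two independent species in the series
U, s, Vt = np.linalg.svd(data, full_matrices=False)
```

    In the full method the basis vectors U, s, Vt are recombined with coefficients dictated by a candidate association model, and the model whose reconstruction best matches the data is selected.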

  20. Constraining the surface properties of effective Skyrme interactions

    NASA Astrophysics Data System (ADS)

    Jodon, R.; Bender, M.; Bennaceur, K.; Meyer, J.

    2016-08-01

    Background: Deformation energy surfaces map how the total binding energy of a nuclear system depends on the geometrical properties of intrinsic configurations, thereby providing a powerful tool to interpret nuclear spectroscopy and large-amplitude collective-motion phenomena such as fission. The global behavior of the deformation energy is known to be directly connected to the surface properties of the effective interaction used for its calculation. Purpose: The precise control of surface properties during the parameter adjustment of an effective interaction is key to obtain a reliable and predictive description of nuclear properties. The most relevant indicator is the surface-energy coefficient asurf. There are several possibilities for its definition and estimation, which are not fully equivalent and require a computational effort that can differ by orders of magnitude. The purpose of this study is threefold: first, to identify a scheme for the determination of asurf that offers the best compromise between robustness, precision, and numerical efficiency; second, to analyze the correlation between values for asurf and the characteristic energies of the fission barrier of 240Pu; and third, to lay out an efficient and robust procedure for how the deformation properties of the Skyrme energy density functional (EDF) can be constrained during the parameter fit. Methods: There are several frequently used possibilities to define and calculate the surface energy coefficient asurf of effective interactions built for the purpose of self-consistent mean-field calculations. The most direct access is provided by the model system of semi-infinite nuclear matter, but asurf can also be extracted from the systematics of binding energies of finite nuclei. Calculations can be carried out either self-consistently [Hartree-Fock (HF)], which incorporates quantal shell effects, or in one of the semiclassical extended Thomas-Fermi (ETF) or modified Thomas-Fermi (MTF) approximations. 
The latter is of particular interest because it provides asurf as a numerical integral without the need to solve self-consistent equations. Results for semi-infinite nuclear matter obtained with the HF, ETF, and MTF methods will be compared with one another and with asurf as deduced from ETF calculations of very heavy fictitious nuclei. Results: The surface energy coefficient of 76 parametrizations of the Skyrme EDF has been calculated. Values obtained with the HF, ETF, and MTF methods are not identical, but differ by fairly constant systematic offsets. By contrast, extracting asurf from the binding energy of semi-infinite matter or of very large nuclei within the same method gives the same result within the numerical uncertainties. Conclusions: Despite having some drawbacks compared to the other methods studied here, the MTF approach provides sufficiently precise values for asurf that it can be used as a very robust constraint on surface properties during a parameter fit at negligible additional cost. While the excitation energy of superdeformed states and the height of fission barriers are clearly strongly correlated with asurf, the presence of shell effects prevents a one-to-one correspondence between them. Since, in addition, the value of asurf that yields realistic fission barriers depends on the choices made for corrections for spurious motion, its "best value" (within a given scheme to calculate it) depends on the fit protocol. Through the construction of a series of eight parametrizations SLy5s1-SLy5s8 of the standard Skyrme EDF with systematically varied asurf values, it is shown how to arrive at a fit with realistic deformation properties.

  1. The Cost-Income Component of Program Evaluation.

    ERIC Educational Resources Information Center

    Miner, Norris

    Cost-income studies are designed to serve two functions in instructional program evaluation. First, they act as an indicator of the economic value of a program. This economic value, in conjunction with the other educational values needed in program evaluation, allows for the most realistic appraisal of program worth. Second, if the studies show a…

  2. Vocational Interests and Basic Values.

    ERIC Educational Resources Information Center

    Sagiv, Lilach

    2002-01-01

    Study 1 (n=97) provided evidence of the correlation of Holland's model of vocational interests with Schwartz' theory of basic values. Realistic career interests did not correlate with values. Study 2 (n=545) replicated these findings, showing a better match for individuals who had reached a career decision in counseling than for the undecided.…

  3. Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier

    DTIC Science & Technology

    2010-01-01

    proposed by Pasion and Oldenburg [25]: Q(t) = k t^(−β) e^(−γt). (10) Various combinations of these fitting parameters can be used as inputs to classifier… Pasion-Oldenburg parameters k, β, and γ for each anomaly by a direct nonlinear least-squares fit of (10) and by linear (pseudo)inversion of its… combinations of the Pasion-Oldenburg parameters. Combining k and γ yields results similar to those of k and R, as Figure 7 and Table 2 show. Figure 8 and
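
    The decay law Q(t) = k t^(−β) e^(−γt) quoted in the snippet can alternatively be fit by log-linearization, since ln Q = ln k − β ln t − γ t is linear in the unknowns. A minimal sketch on noiseless synthetic data (all parameter values are illustrative, not from the report):

```python
import numpy as np

# synthetic decay following Q(t) = k * t**(-beta) * exp(-gamma * t)
k_true, beta_true, gamma_true = 2.0, 0.8, 5.0
t = np.linspace(0.05, 2.0, 60)
Q = k_true * t ** (-beta_true) * np.exp(-gamma_true * t)

# log-linearization: ln Q = ln k - beta*ln t - gamma*t  ->  linear least squares
A = np.column_stack([np.ones_like(t), -np.log(t), -t])
coef, *_ = np.linalg.lstsq(A, np.log(Q), rcond=None)
k_fit, beta_fit, gamma_fit = np.exp(coef[0]), coef[1], coef[2]
```

    With real, noisy decay data the log transform reweights the errors, so a direct nonlinear fit (as the report uses) is generally preferred; the linear solve then serves as a cheap initial guess.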

  4. On energy harvesting from a vibro-impact oscillator with dielectric membranes

    NASA Astrophysics Data System (ADS)

    Lai, Z. H.; Thomson, G.; Yurchenko, D.; Val, D. V.; Rodgers, E.

    2018-07-01

    A vibro-impact mechanical system comprising a ball moving freely between two dielectric membranes located a certain distance apart is studied. The system generates electricity when the ball, moving due to ambient vibrations, impacts one of the membranes. The energy-harvesting principle of the proposed system is explained and then used to formulate a numerical model for estimating the system's output voltage. The dynamic behavior and output performance of the system are thoroughly studied under harmonic excitation, as well as for different initial conditions and various values of the restitution coefficient of the membranes. The results are useful for selecting the system parameters to achieve optimal output performance in a realistic vibrational environment. A potential application of the proposed system to energy harvesting from car engine vibrations is presented.

  5. Fluid-structure interaction in straight pipelines with different anchoring conditions

    NASA Astrophysics Data System (ADS)

    Ferras, David; Manso, Pedro A.; Schleiss, Anton J.; Covas, Dídia I. C.

    2017-04-01

    This investigation aims at assessing the fluid-structure interaction (FSI) occurring during hydraulic transients in straight pipeline systems fixed to anchor blocks. A two-mode 4-equation model is implemented, incorporating the main interacting mechanisms: Poisson, friction and junction coupling. The resistance to movement due to inertia and dry friction of the anchor blocks is treated as junction coupling, and unsteady skin friction is taken into account in friction coupling. Experimental waterhammer tests collected from a straight copper pipe-rig are used for model validation in terms of wave shape, timing and damping. Numerical results successfully reproduce laboratory measurements for realistic values of the calibration parameters. The novelty of this paper is the presentation of a 1D FSI solver capable of describing the resistance to movement of anchor blocks and its effect on transient pressure wave propagation in straight pipelines.

  6. Strain manipulation of Majorana fermions in graphene armchair nanoribbons

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-Hua; Castro, Eduardo V.; Lin, Hai-Qing

    2018-01-01

    Graphene nanoribbons with armchair edges are studied for externally enhanced but realistic parameter values: enhanced Rashba spin-orbit coupling due to proximity to a transition-metal dichalcogenide, such as WS2, and enhanced Zeeman field due to exchange coupling with a magnetic insulator, such as EuS under an applied magnetic field. The presence of s -wave superconductivity, induced either by proximity or by decoration with alkali-metal atoms, such as Ca or Li, leads to a topological superconducting phase with Majorana end modes. The topological phase is highly sensitive to the application of uniaxial strain with a transition to the trivial state above a critical strain well below 0.1%. This sensitivity allows for real-space manipulation of Majorana fermions by applying nonuniform strain profiles. Similar manipulation is also possible by applying an inhomogeneous Zeeman field or chemical potential.

  7. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    PubMed

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation is based on computer graphics to generate a realistic 3D structure scene of vegetation and to simulate the canopy radiation regime using the radiosity method. In the present paper, the authors expand the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build up a realistic structure scene for the forest, and it is difficult for the radiosity method to compute so many facets. To enable the radiosity method to simulate the forest scene at the pixel scale, the authors proposed simplifying the structure of forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the characteristics of internal photon energy transmission in a real crown, the authors assigned the optical characteristics of the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometrical optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at the pixel scale. Comparing the computer simulation results with the GOMS model and Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results agree with the GOMS simulation results and the MISR BRF, although some problems remain to be solved. The authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.

  8. Toxicokinetics of perfluorooctane sulfonate in birds under environmentally realistic exposure conditions and development of a kinetic predictive model.

    PubMed

    Tarazona, J V; Rodríguez, C; Alonso, E; Sáez, M; González, F; San Andrés, M D; Jiménez, B; San Andrés, M I

    2015-01-22

    This article describes the toxicokinetics of perfluorooctane sulfonate (PFOS) in birds under low repeated dosing, equivalent to 0.085 μg/kg per day, representing environmentally realistic exposure conditions. The best fit was provided by a simple pseudo monocompartmental first-order kinetics model, regulated by two rates, with a pseudo first-order dissipation half-life of 230 days, accounting for real elimination as well as binding of PFOS to non-exchangeable structures. The calculated assimilation efficiency was 0.66, with confidence intervals of 0.64 and 0.68. The model calculations confirmed that the measured maximum concentrations were still far from the steady-state situation, which, for this dose regime, was estimated at a value of about 65 μg PFOS/L serum achieved after a theoretical 210 weeks of continuous exposure. The results confirm very different kinetics from those observed in single-dose experiments, with clear dose-related differences in apparent elimination rates in birds, as described for humans and monkeys, suggesting that a capacity-limited saturable process should also be considered in the kinetic behavior of PFOS in birds. Pseudo first-order kinetic models are highly convenient and frequently used for predicting bioaccumulation of chemicals in livestock and wildlife; the study suggests that previous bioaccumulation models using half-lives obtained at high doses are expected to underestimate the biomagnification potential of PFOS. The toxicokinetic parameters presented here can be used for higher-tier bioaccumulation estimations of PFOS in chickens and as surrogate values for modeling PFOS kinetics in wild bird species. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
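    The reported kinetics (a 230-day pseudo first-order dissipation half-life and a slow approach to steady state) follow from a one-compartment model with constant input. The sketch below uses the paper's half-life and assimilation efficiency, but the intake-to-serum input rate R is a hypothetical placeholder, so the plateau value is illustrative.

```python
import math

def serum_concentration(input_rate, half_life_days, days, c0=0.0):
    """Pseudo first-order one-compartment accumulation under constant intake.

    dC/dt = R - kel*C, with kel = ln 2 / half-life.  Analytic solution:
    C(t) = R/kel + (c0 - R/kel) * exp(-kel*t), plateauing at C_ss = R/kel.
    """
    kel = math.log(2) / half_life_days
    c_ss = input_rate / kel
    return c_ss + (c0 - c_ss) * math.exp(-kel * days)

# Hypothetical net input rate: assimilation efficiency 0.66 times an assumed
# intake expressed per serum volume (ug/L per day)
R = 0.66 * 0.30
c_1yr = serum_concentration(R, 230.0, 365.0)
c_ss = R * 230.0 / math.log(2)   # steady-state plateau
```

    After one year the concentration is still only about two-thirds of C_ss, consistent with the observation that measured maxima were far from steady state.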

  9. A modified social force model for crowd dynamics

    NASA Astrophysics Data System (ADS)

    Hassan, Ummi Nurmasyitah; Zainuddin, Zarita; Abu-Sulyman, Ibtesam M.

    2017-08-01

    The Social Force Model (SFM) is one of the most successful models in microscopic pedestrian studies used to describe the movement of pedestrians. Earlier researchers have made many modifications to improve the SFM, such as incorporating a constant respect factor into the self-stopping mechanism. Before that mechanism was introduced, researchers found that a pedestrian would immediately come to a halt if other pedestrians were near him, which is unrealistic behavior. Therefore, a self-slowing mechanism was introduced to gradually stop a pedestrian as he approaches other pedestrians. Subsequently, a dynamic respect factor, based on the density of the pedestrians, was introduced into the self-slowing mechanism to make the model more realistic: in real-life situations the respect factor should take dynamic values rather than a constant value. However, when we reproduced the simulation with the dynamic respect factor, we found the resulting pedestrian movement unrealistic, because each pedestrian lacks perception of the pedestrians in front of him. In this paper, we adopt both a dynamic respect factor and a dynamic angular parameter, together called the modified dynamic respect factor, which depends on the density of the pedestrians. Simulations are performed in a normal unidirectional walkway to compare the simulated pedestrians' movements produced by both models. The results show that the modified dynamic respect factor produces more realistic pedestrian movement that conforms to real situations. Moreover, the simulations endow the pedestrian with both a self-slowing mechanism and a perception of other pedestrians in front of him.
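    The pairing of a density-dependent respect factor with an angular perception cone can be sketched as follows; all functional forms, thresholds, and constants here are illustrative assumptions, not the published model.

```python
import math

def dynamic_respect_factor(local_density, rho_max=5.4):
    """Illustrative density-dependent respect factor in [0, 1]:
    sparse crowds cause little slowing, dense crowds strong slowing."""
    return min(local_density / rho_max, 1.0)

def self_slowing_speed(v0, neighbors, position, heading,
                       half_angle=math.radians(100)):
    """Scale the desired speed v0 by the density of neighbors perceived
    inside an angular cone of +/- half_angle around `heading`.

    `neighbors` are (x, y) points; pedestrians behind the walker are
    ignored, which is the angular-parameter idea in the abstract.
    """
    px, py = position
    in_cone = 0
    for (nx, ny) in neighbors:
        ang = math.atan2(ny - py, nx - px)
        diff = (ang - heading + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= half_angle:
            in_cone += 1
    # crude local density: pedestrians per unit area of a 1 m perception disc
    density = in_cone / (math.pi * 1.0**2)
    r = dynamic_respect_factor(density)
    return v0 * (1.0 - r)   # gradual slow-down instead of an abrupt halt

empty = self_slowing_speed(1.34, [], (0, 0), 0.0)
crowded = self_slowing_speed(
    1.34, [(0.5, 0.0), (0.7, 0.1), (0.6, -0.1)], (0, 0), 0.0)
```

    A pedestrian with an empty cone keeps the free walking speed, while pedestrians ahead reduce it smoothly with density.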

  10. Investigation of statistical iterative reconstruction for dedicated breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makeev, Andrey; Glick, Stephen J.

    2013-08-15

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of the uncompressed breast in a pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved.
In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance at various radiation doses. In fact, using PML with certain parameter values results in better images acquired using a 2 mGy dose than FBP-reconstructed images acquired using a 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with a lower radiation dose to the patient than using FBP with a higher dose.
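The hyperbolic potential used as an edge-preserving prior is commonly defined as ψ_δ(t) = δ²(√(1 + (t/δ)²) − 1), with δ the edge-preservation threshold. A minimal sketch of its two regimes (the δ value below is illustrative, not the study's optimum):

```python
import math

def hyperbolic_potential(t, delta):
    """Edge-preserving roughness penalty psi(t) = delta^2*(sqrt(1+(t/delta)^2)-1).

    Behaves like t**2 / 2 for |t| << delta (quadratic smoothing of small,
    noise-like differences) and like delta*|t| for |t| >> delta (TV-like
    linear growth that does not over-penalize true edges).
    """
    return delta**2 * (math.sqrt(1.0 + (t / delta)**2) - 1.0)

delta = 1e-3                                 # illustrative threshold
small = hyperbolic_potential(1e-6, delta)    # quadratic (t**2 / 2) regime
large = hyperbolic_potential(1.0, delta)     # linear (delta * |t|) regime
```

    The threshold δ thus trades off noise smoothing against preservation of high-contrast details such as microcalcifications, which is why it appears as one of the tuned free parameters.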

  11. An efficient framework for optimization and parameter sensitivity analysis in arterial growth and remodeling computations

    PubMed Central

    Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.

    2013-01-01

    Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. 
We observe that the prestretch of elastin and collagen is most critical to maintaining homeostasis, while the values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
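The robustness claim (intramural stress within 5% of its mean under realistic material variability) corresponds to a generic Monte Carlo check of output spread under parameter uncertainty. The model function, nominal values, and variability range below are hypothetical stand-ins, not the paper's G&R solver.

```python
import numpy as np

def toy_output(a, b, c):
    """Hypothetical stand-in for a G&R output (e.g. homeostatic intramural
    stress) as a smooth function of three material parameters."""
    return a * np.sqrt(b) / (1.0 + 0.1 * c)

rng = np.random.default_rng(42)
nominal = np.array([2.0, 1.5, 0.8])
n = 20000
# +/-5% uniform variability around the nominal material parameters
samples = nominal * (1.0 + 0.05 * rng.uniform(-1.0, 1.0, size=(n, 3)))
out = toy_output(samples[:, 0], samples[:, 1], samples[:, 2])

rel_spread = (out.max() - out.min()) / out.mean()
```

    Sparse-grid collocation targets the same statistics with far fewer model evaluations than this brute-force sampling, which is the efficiency argument made in the abstract.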

  12. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is a procedure for determining those parameters of a model that are not known well enough: input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated algorithms for automatic calibration that leave the modeler little opportunity to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool widely used in groundwater modeling that can also be applied to surface waters. A calibration process managed directly by the expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than if the procedure had been left entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeler. The second step is to set initial parameter values at their preferred values based on expert knowledge. In this step we also define all parameter and observation groups; peak data are essential in the calibration process if we are mainly interested in flood events.
Each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that the contribution made to the total objective function by each observation group is the same; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, J., 2013). In adding regularization, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. The case study, with results of calibration and validation of the model, will be presented.
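The stabilizing role of the SVD and Tikhonov steps can be illustrated on a toy linear inverse problem. The sketch below implements only the preferred-value Tikhonov term that ADDREG1 adds, applied to a hypothetical near-rank-deficient Jacobian; it is not PEST itself.

```python
import numpy as np

def tikhonov_solve(J, d, p0, alpha):
    """Minimize ||J p - d||^2 + alpha * ||p - p0||^2.

    The second term is 'preferred value' regularization: each parameter is
    pulled toward its initial value p0, which is what the prior-information
    equations written by ADDREG1 express.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ d + alpha * p0)

# Hypothetical near-rank-deficient Jacobian: two almost identical columns,
# i.e. two parameters the observations can barely tell apart (ill-posed).
J = np.array([[1.0, 1.0001], [1.0, 0.9999], [2.0, 2.0]])
p_true = np.array([1.0, 2.0])
d = J @ p_true                 # noise-free synthetic observations
p0 = np.array([1.2, 1.8])      # expert-chosen preferred (initial) values

p_reg = tikhonov_solve(J, d, p0, alpha=1e-3)
p_unreg = tikhonov_solve(J, d, p0, alpha=0.0)
```

    The regularized solution fits the observations while staying near p0 in the direction the data cannot resolve; PEST's truncated SVD achieves a similar stabilization by discarding that weakly determined direction.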

  13. Studies on the use of helicopters for oil spill clearance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinelli, F.N.

    A program of work was undertaken to assess the use of a commercially available underslung crop-spraying bucket for spraying oil spill dispersants. The study consisted of land-based trials to measure relevant parameters of the spray and the effect on these parameters of spray height and dispersant viscosity. A sea trial was undertaken to observe the system under realistic conditions. (Copyright (c) Crown Copyright.)

  14. Robust characterization of small grating boxes using rotating stage Mueller matrix polarimeter

    NASA Astrophysics Data System (ADS)

    Foldyna, M.; De Martino, A.; Licitra, C.; Foucher, J.

    2010-03-01

    In this paper we demonstrate the robustness of Mueller matrix polarimetry used in a multiple-azimuth configuration. We first demonstrate the efficiency of the method for the characterization of small-pitch gratings filling 250 μm wide square boxes. We used a Mueller matrix polarimeter installed directly in the clean room and equipped with a motorized rotating stage allowing access to arbitrary conical grating configurations. The projected beam spot size could be reduced to 60x25 μm, but for the measurements reported here it was 100x100 μm. The optimal values of the parameters of a trapezoidal profile model, obtained for each azimuthal angle separately using a non-linear least-squares minimization algorithm, are shown for a typical grating. Further statistical analysis of the azimuth-dependent dimensional parameters provided realistic estimates of the confidence intervals, giving direct information about the accuracy of the results. The mean values and standard deviations were calculated for 21 different grating boxes, comprising in total 399 measured spectra and fits. The results for all boxes are summarized in a table that compares the optical method to 3D-AFM. The essential conclusion of our work is that the 3D-AFM values always fall within the confidence intervals provided by the optical method, which means that we have successfully estimated the accuracy of our results without a direct comparison with another, non-optical, method. Moreover, this approach may provide a way to improve the accuracy of grating profile modeling by minimizing the standard deviations evaluated from multiple-azimuth results.
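    The multiple-azimuth accuracy estimate amounts to a mean and standard deviation over repeated per-azimuth fits. A minimal sketch with hypothetical critical-dimension values (the z = 2 interval half-width is an assumption, not the paper's choice):

```python
import math

def confidence_interval(values, z=2.0):
    """Mean +/- z sample standard deviations across repeated azimuthal fits,
    used as a realistic accuracy estimate for a dimensional parameter."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean)**2 for v in values) / (n - 1)
    return mean - z * math.sqrt(var), mean + z * math.sqrt(var)

# Hypothetical mid-CD values (nm) fitted at several azimuthal angles
cd_fits = [48.2, 47.9, 48.5, 48.1, 47.8, 48.4]
lo, hi = confidence_interval(cd_fits)

afm_value = 48.0                     # hypothetical 3D-AFM reference
in_interval = lo <= afm_value <= hi  # the paper's consistency criterion
```

    The paper's conclusion corresponds to `in_interval` holding for every box and parameter: the reference value always lies inside the optically derived interval.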

  15. Supernova Driving. IV. The Star-formation Rate of Molecular Clouds

    NASA Astrophysics Data System (ADS)

    Padoan, Paolo; Haugbølle, Troels; Nordlund, Åke; Frimann, Søren

    2017-05-01

    We compute the star-formation rate (SFR) in molecular clouds (MCs) that originate ab initio in a new, higher-resolution simulation of supernova-driven turbulence. Because of the large number of well-resolved clouds with self-consistent boundary and initial conditions, we obtain a large range of cloud physical parameters with realistic statistical distributions, an unprecedented sample of star-forming regions to test SFR models and to interpret observational surveys. We confirm the dependence of the SFR per free-fall time, SFR_ff, on the virial parameter, α_vir, found in previous simulations, and compare a revised version of our turbulent fragmentation model with the numerical results. The dependences on Mach number, M, gas to magnetic pressure ratio, β, and compressive to solenoidal power ratio, χ, at fixed α_vir are not well constrained, because of random scatter due to time and cloud-to-cloud variations in SFR_ff. We find that SFR_ff in MCs can take any value in the range 0 ≤ SFR_ff ≲ 0.2, and its probability distribution peaks at a value of SFR_ff ≈ 0.025, consistent with observations. The values of SFR_ff and the scatter in the SFR_ff-α_vir relation are consistent with recent measurements in nearby MCs and in clouds near the Galactic center. Although not explicitly modeled by the theory, the scatter is consistent with the physical assumptions of our revised model and may also result in part from a lack of statistical equilibrium of the turbulence, due to the transient nature of MCs.

  16. Pulsating Hydrodynamic Instability in a Dynamic Model of Liquid-Propellant Combustion

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    1999-01-01

    Hydrodynamic (Landau) instability in combustion is typically associated with the onset of wrinkling of a flame surface, corresponding to the formation of steady cellular structures as the stability threshold is crossed. In the context of liquid-propellant combustion, such instability has recently been shown to occur for critical values of the pressure sensitivity of the burning rate and the disturbance wavenumber, significantly generalizing previous classical results for this problem that assumed a constant normal burning rate. Additionally, however, a pulsating form of hydrodynamic instability has been shown to occur as well, corresponding to the onset of temporal oscillations in the location of the liquid/gas interface. In the present work, we consider the realistic influence of a nonzero temperature sensitivity in the local burning rate on both types of stability thresholds. It is found that for sufficiently small values of this parameter, there exists a stable range of pressure sensitivities for steady, planar burning such that the classical cellular form of hydrodynamic instability and the more recent pulsating form of hydrodynamic instability can each occur as the corresponding stability threshold is crossed. For larger thermal sensitivities, however, the pulsating stability boundary evolves into a C-shaped curve in the disturbance-wavenumber/pressure-sensitivity plane, indicating loss of stability to pulsating perturbations for all sufficiently large disturbance wavelengths. It is thus concluded, based on characteristic parameter values, that an equally likely form of hydrodynamic instability in liquid-propellant combustion is of a nonsteady, long-wave nature, distinct from the steady, cellular form originally predicted by Landau.

  17. Pulsating Hydrodynamic Instability and Thermal Coupling in an Extended Landau/Levich Model of Liquid-Propellant Combustion. 1; Inviscid Analysis

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    1999-01-01

    Hydrodynamic (Landau) instability in combustion is typically associated with the onset of wrinkling of a flame surface, corresponding to the formation of steady cellular structures as the stability threshold is crossed. In the context of liquid-propellant combustion, such instability has recently been shown to occur for critical values of the pressure sensitivity of the burning rate and the disturbance wavenumber, significantly generalizing previous classical results for this problem that assumed a constant normal burning rate. Additionally, however, a pulsating form of hydrodynamic instability has been shown to occur as well, corresponding to the onset of temporal oscillations in the location of the liquid/gas interface. In the present work, we consider the realistic influence of a non-zero temperature sensitivity in the local burning rate on both types of stability thresholds. It is found that for sufficiently small values of this parameter, there exists a stable range of pressure sensitivities for steady, planar burning such that the classical cellular form of hydrodynamic instability and the more recent pulsating form of hydrodynamic instability can each occur as the corresponding stability threshold is crossed. For larger thermal sensitivities, however, the pulsating stability boundary evolves into a C-shaped curve in the (disturbance-wavenumber, pressure-sensitivity) plane, indicating loss of stability to pulsating perturbations for all sufficiently large disturbance wavelengths. It is thus concluded, based on characteristic parameter values, that an equally likely form of hydrodynamic instability in liquid-propellant combustion is of a non-steady, long-wave nature, distinct from the steady, cellular form originally predicted by Landau.

  18. Modeling of soil water retention from saturation to oven dryness

    USGS Publications Warehouse

    Rossi, Cinzia; Nimmo, John R.

    1994-01-01

    Most analytical formulas used to model moisture retention in unsaturated porous media have been developed for the wet range and are unsuitable for applications in which low water contents are important. We have developed two models that fit the entire range from saturation to oven dryness in a practical and physically realistic way with smooth, continuous functions that have few parameters. Both models incorporate a power law and a logarithmic dependence of water content on suction, differing in how these two components are combined. In one model, functions are added together (model “sum”); in the other they are joined smoothly together at a discrete point (model “junction”). Both models also incorporate recent developments that assure a continuous derivative and force the function to reach zero water content at a finite value of suction that corresponds to oven dryness. The models have been tested with seven sets of water retention data that each cover nearly the entire range. The three-parameter sum model fits all data well and is useful for extrapolation into the dry range when data for it are unavailable. The two-parameter junction model fits most data sets almost as well as the sum model and has the advantage of being analytically integrable for convenient use with capillary-bundle models to obtain the unsaturated hydraulic conductivity.
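    A sum-type curve in this spirit, combining a power-law and a logarithmic term that both vanish at the oven-dryness suction ψ_d, can be sketched as below. The functional form and parameter values are illustrative assumptions, not the paper's exact parameterization.

```python
import math

def water_content_sum(psi, theta_s, psi_0, psi_d, lam, b):
    """Illustrative 'sum'-type retention curve (not the published form):

        theta(psi) = theta_s * [ a*((psi_0/psi)**lam - (psi_0/psi_d)**lam)
                                 + b*ln(psi_d/psi) ]

    Both terms vanish at psi = psi_d, so water content reaches exactly zero
    at the finite suction corresponding to oven dryness; 'a' is chosen so
    that theta(psi_0) = theta_s (saturation).
    """
    log_at_sat = b * math.log(psi_d / psi_0)
    a = (1.0 - log_at_sat) / (1.0 - (psi_0 / psi_d)**lam)
    power = a * ((psi_0 / psi)**lam - (psi_0 / psi_d)**lam)
    return theta_s * (power + b * math.log(psi_d / psi))

# Illustrative values: saturation 0.45, air-entry-like suction 0.1 m,
# oven-dryness suction 1e5 m
theta_s, psi_0, psi_d = 0.45, 0.1, 1.0e5
theta_sat = water_content_sum(psi_0, theta_s, psi_0, psi_d, lam=0.3, b=0.02)
theta_dry = water_content_sum(psi_d, theta_s, psi_0, psi_d, lam=0.3, b=0.02)
```

    The power-law term dominates in the wet range and the logarithmic term in the dry range, which is the qualitative division of labor the abstract describes.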

  19. Evaluation of SimpleTreat 4.0: Simulations of pharmaceutical removal in wastewater treatment plant facilities.

    PubMed

    Lautz, L S; Struijs, J; Nolte, T M; Breure, A M; van der Grinten, E; van de Meent, D; van Zelm, R

    2017-02-01

    In this study, the removal of pharmaceuticals from wastewater as predicted by SimpleTreat 4.0 was evaluated. Field data obtained from literature of 43 pharmaceuticals, measured in 51 different activated sludge WWTPs were used. Based on reported influent concentrations, the effluent concentrations were calculated with SimpleTreat 4.0 and compared to measured effluent concentrations. The model predicts effluent concentrations mostly within a factor of 10, using the specific WWTP parameters as well as SimpleTreat default parameters, while it systematically underestimates concentrations in secondary sludge. This may be caused by unexpected sorption, resulting from variability in WWTP operating conditions, and/or QSAR applicability domain mismatch and background concentrations prior to measurements. Moreover, variability in detection techniques and sampling methods can cause uncertainty in measured concentration levels. To find possible structural improvements, we also evaluated SimpleTreat 4.0 using several specific datasets with different degrees of uncertainty and variability. This evaluation verified that the most influencing parameters for water effluent predictions were biodegradation and the hydraulic retention time. Results showed that model performance is highly dependent on the nature and quality, i.e. degree of uncertainty, of the data. The default values for reactor settings in SimpleTreat result in realistic predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
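    The two most influential inputs identified (biodegradation rate and hydraulic retention time) combine in the standard steady-state mass balance for a completely mixed aeration tank. The sketch below isolates that first-order relationship; it is not the full SimpleTreat 4.0 model, which also accounts for sorption and other removal routes, and the values are illustrative.

```python
def effluent_concentration(c_in, k_bio_per_h, hrt_h):
    """Steady-state first-order removal in a completely mixed aeration tank:

        V dC/dt = Q*C_in - Q*C - k*V*C = 0   =>   C_eff = C_in / (1 + k*HRT)

    with HRT = V/Q the hydraulic retention time.
    """
    return c_in / (1.0 + k_bio_per_h * hrt_h)

# Illustrative values: 100 ng/L influent, k = 0.5 1/h, HRT = 10 h
c_eff = effluent_concentration(100.0, 0.5, 10.0)
```

    Because C_eff depends on the product k·HRT, uncertainty in either parameter propagates directly into the predicted effluent concentration, consistent with the sensitivity reported in the evaluation.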

  20. Can we settle with single-band radiometric temperature monitoring during hyperthermia treatment of chestwall recurrence of breast cancer using a dual-mode transceiving applicator?

    PubMed

    Jacobsen, Svein; Stauffer, Paul R

    2007-02-21

    The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.
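    Radiometric sensing of a volume-averaged subsurface temperature can be sketched as a weighting-function average over depth. The exponential weighting function and its penetration depth below are hypothetical, not the paper's 915 MHz antenna model.

```python
import numpy as np

def brightness_temperature(T, w):
    """Volume-averaged brightness temperature T_B = sum(w*T)/sum(w),
    a discretized weighting-function average over depth."""
    return float(np.sum(w * T) / np.sum(w))

depth = np.linspace(0.0, 0.05, 500)   # 5 cm of tissue, in meters
w = np.exp(-depth / 0.01)             # hypothetical 1 cm e-folding weighting

# Heated tissue: Gaussian temperature rise over a 37 C baseline
T_tissue = 37.0 + 6.0 * np.exp(-((depth - 0.015) / 0.008) ** 2)

T_b = brightness_temperature(T_tissue, w)
T_max = float(T_tissue.max())
```

    A retrieval scheme like the one described inverts this forward map: given a measured T_B and an assumed heating-profile shape, solve for the profile amplitude and hence the maximum tissue temperature.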

  1. Can we settle with single-band radiometric temperature monitoring during hyperthermia treatment of chestwall recurrence of breast cancer using a dual-mode transceiving applicator?

    NASA Astrophysics Data System (ADS)

    Jacobsen, Svein; Stauffer, Paul R.

    2007-02-01

    The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.

  2. Generation of anatomically realistic numerical phantoms for photoacoustic and ultrasonic breast imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Zhou, Weimin; Matthews, Thomas P.; Appleton, Catherine M.; Anastasio, Mark A.

    2017-04-01

    Photoacoustic computed tomography (PACT) and ultrasound computed tomography (USCT) are emerging modalities for breast imaging. As in all emerging imaging technologies, computer-simulation studies play a critically important role in developing and optimizing the designs of hardware and image reconstruction methods for PACT and USCT. Using computer-simulations, the parameters of an imaging system can be systematically and comprehensively explored in a way that is generally not possible through experimentation. When conducting such studies, numerical phantoms are employed to represent the physical properties of the patient or object to be imaged that influence the measured image data. It is highly desirable to utilize numerical phantoms that are realistic, especially when task-based measures of image quality are to be utilized to guide system design. However, most reported computer-simulation studies of PACT and USCT breast imaging employ simple numerical phantoms that oversimplify the complex anatomical structures in the human female breast. We develop and implement a methodology for generating anatomically realistic numerical breast phantoms from clinical contrast-enhanced magnetic resonance imaging data. The phantoms will depict vascular structures and the volumetric distribution of different tissue types in the breast. By assigning optical and acoustic parameters to different tissue structures, both optical and acoustic breast phantoms will be established for use in PACT and USCT studies.
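    The final step described, assigning optical and acoustic parameters to segmented tissue structures, amounts to a lookup from tissue labels to parameter volumes. A minimal sketch with a synthetic label volume and made-up parameter values (not the paper's measured ones):

```python
import numpy as np

# Hypothetical tissue labels from a segmented MRI volume:
# 0 = background, 1 = fatty tissue, 2 = fibroglandular tissue, 3 = vessel
labels = np.zeros((32, 32, 32), dtype=np.int8)
labels[8:24, 8:24, 8:24] = 1
labels[12:20, 12:20, 12:20] = 2
labels[15:17, 8:24, 15:17] = 3

# Illustrative (not measured) acoustic and optical parameters per tissue:
sound_speed = {0: 1480.0, 1: 1440.0, 2: 1560.0, 3: 1580.0}   # m/s
absorption  = {0: 0.01,   1: 0.05,   2: 0.10,   3: 1.50}     # mm^-1

def assign(volume_labels, table):
    """Map integer tissue labels to a volume of parameter values."""
    lut = np.array([table[k] for k in sorted(table)])
    return lut[volume_labels]

c_map  = assign(labels, sound_speed)   # acoustic phantom for USCT
mu_map = assign(labels, absorption)    # optical phantom for PACT
```

    The same label volume thus yields both an acoustic and an optical phantom, which is what allows a single anatomical segmentation to serve both PACT and USCT studies.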

  3. An evaluation of differences due to changing source directivity in room acoustic computer modeling

    NASA Astrophysics Data System (ADS)

    Vigeant, Michelle C.; Wang, Lily M.

    2004-05-01

    This project examines the effects of changing source directivity in room acoustic computer models on objective parameters and subjective perception. Acoustic parameters and auralizations calculated from omnidirectional versus directional sources were compared. Three realistic directional sources were used, measured in a limited number of octave bands from a piano, singing voice, and violin. A highly directional source that beams only within one-sixteenth of a sphere was also tested. Objectively, there were differences of 5% or more in reverberation time (RT) between the realistic directional and omnidirectional sources. Between the beaming directional and omnidirectional sources, differences in clarity were close to the just-noticeable-difference (jnd) criterion of 1 dB. Subjectively, participants had great difficulty distinguishing between the realistic and omnidirectional sources; very few could discern the differences in RTs. However, a larger percentage (32% vs 20%) could differentiate between the beaming and omnidirectional sources, as well as the respective differences in clarity. Further studies of the objective results from different beaming sources have been pursued. The direction of the beaming source in the room is changed, as well as the beamwidth. The objective results are analyzed to determine if differences fall within the jnd of sound-pressure level, RT, and clarity.

  4. Keno-21: Fundamental Issues in the Design of Geophysical Simulation Experiments and Resource Allocation in Climate Modelling

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2001-05-01

    Many sources of uncertainty come into play when modelling geophysical systems by simulation. These include uncertainty in the initial condition, uncertainty in model parameter values (and in the parameterisations themselves), and error in the model class from which the model(s) was selected. In recent decades, climate simulations have focused resources on reducing the last of these by including more and more details in the model. One can question when this ``kitchen sink'' approach should be complemented with realistic estimates of the impact of the other uncertainties noted above. Indeed, while the impact of model error can never be fully quantified, as all simulation experiments are interpreted under the rosy scenario which assumes a priori that nothing crucial is missing, the impact of the other uncertainties can be quantified at the mere cost of computational power, as illustrated, for example, in ensemble climate modelling experiments like Casino-21. This talk illustrates the interplay of these uncertainties in the context of a trivial nonlinear system and an ensemble of models. The simple systems considered in this small-scale experiment, Keno-21, are meant to illustrate issues of experimental design; they are not intended to provide true climate simulations. The use of simulation models with huge numbers of parameters given limited data is usually justified by an appeal to the Laws of Physics: the number of free degrees of freedom is much smaller than the number of variables; variables, parameterisations, and parameter values are all constrained by ``the physics''; and the resulting simulation yields a realistic reproduction of the entire planet's climate system to within reasonable bounds. But what bounds, exactly? In a single model run under a transient forcing scenario, there are good statistical grounds for considering only large space and time averages; most of these reasons vanish if an ensemble of runs is made. Ensemble runs can quantify the (in)ability of a model to provide insight on regional changes: if a model cannot capture regional variations in the data on which the model was constructed (that is, in-sample), claims that out-of-sample predictions of those same regional averages should be used in policy making are vacuous. While motivated by climate modelling and illustrated on a trivial nonlinear system, these issues have implications across the range of geophysical modelling, including implications for appropriate resource allocation, for the making of science policy, and for the public understanding of science and the role of uncertainty in decision making.
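    The kind of small-scale ensemble experiment on a trivial nonlinear system described above can be sketched with the logistic map, propagating both initial-condition and parameter uncertainty. The system, spreads, and numbers are illustrative stand-ins, not the actual Keno-21 setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_run(x0, r, n=50):
    """Iterate the logistic map x -> r*x*(1-x), a trivial nonlinear system."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

# Ensemble over initial-condition AND parameter uncertainty:
n_members = 200
x0s = 0.3 + 0.01 * rng.standard_normal(n_members)   # initial-condition spread
rs  = 3.7 + 0.01 * rng.standard_normal(n_members)   # parameter spread
ensemble = np.array([logistic_run(x0, r) for x0, r in zip(x0s, rs)])

spread_start = ensemble[:, 0].std()
spread_end   = ensemble[:, -1].std()   # chaos amplifies both uncertainties
```

    A single run would hide how quickly the combined uncertainties grow; the ensemble spread makes it quantifiable at only the cost of computational power, which is the point the abstract makes.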

  5. Benefits of detailed models of muscle activation and mechanics

    NASA Technical Reports Server (NTRS)

    Lehman, S. L.; Stark, L.

    1981-01-01

    Recent biophysical and physiological studies identified some of the detailed mechanisms involved in excitation-contraction coupling, muscle contraction, and deactivation. Mathematical models incorporating these mechanisms allow independent estimates of key parameters, direct interplay between basic muscle research and the study of motor control, and realistic model behaviors, some of which are not accessible to previous, simpler models. The existence of previously unmodeled behaviors has important implications for strategies of motor control and identification of neural signals. New developments in the analysis of differential equations make the more detailed models feasible for simulation in realistic experimental situations.

  6. Model parameters for representative wetland plant functional groups

    USDA-ARS?s Scientific Manuscript database

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and...

  7. Applying Dynamic Energy Budget (DEB) theory to simulate growth and bio-energetics of blue mussels under low seston conditions

    NASA Astrophysics Data System (ADS)

    Rosland, R.; Strand, Ø.; Alunno-Bruscia, M.; Bacher, C.; Strohmeier, T.

    2009-08-01

    A Dynamic Energy Budget (DEB) model for simulation of growth and bioenergetics of blue mussels (Mytilus edulis) has been tested at three low-seston sites in southern Norway. The observations comprise four datasets from laboratory experiments (physiological and biometrical mussel data) and three datasets from in situ growth experiments (biometrical mussel data). Additional in situ data from commercial farms in southern Norway were used for estimation of biometrical relationships in the mussels. Three DEB parameters (shape coefficient, half-saturation coefficient, and somatic maintenance rate coefficient) were estimated from the experimental data, and the estimated parameters were complemented with parameter values from the literature to establish a basic parameter set. Model simulations based on the basic parameter set and site-specific environmental forcing matched observations fairly well, but the model was not successful in simulating growth under the extremely low seston regimes of the laboratory experiments, in which the long period of negative growth caused negative reproductive mass. Sensitivity analysis indicated that the model was moderately sensitive to changes in the parameters and initial conditions. The results show the robust properties of the DEB model, as it manages to simulate mussel growth in several independent datasets from a common basic parameter set. However, the results also demonstrate the limitations of Chl a as a food proxy for blue mussels and of the DEB model in simulating long-term starvation. Future work should aim at establishing better food proxies and improving the model formulations of the processes involved in food ingestion and assimilation. The current DEB model should also be extended to allow shrinking of the structural tissue in order to produce more realistic growth simulations during long periods of starvation.
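    Two of the estimated parameters named above enter standard DEB theory through simple closed forms: the half-saturation coefficient via the scaled functional response, and the shape coefficient via the length-volume relation. A minimal sketch with illustrative numbers (not the paper's estimates):

```python
def functional_response(X, XK):
    """Scaled (Holling type-II) functional response used in standard DEB
    models: f = X / (X + XK), with X a food proxy (e.g. Chl a
    concentration) and XK the half-saturation coefficient."""
    return X / (X + XK)

def shell_length(V, delta_M):
    """Physical length from structural volume via the shape coefficient:
    L = V**(1/3) / delta_M."""
    return V ** (1.0 / 3.0) / delta_M

# Illustrative numbers only:
f_low  = functional_response(X=0.5, XK=1.0)   # poor (low-seston) feeding
f_high = functional_response(X=10.0, XK=1.0)  # near-saturated feeding
```

    The saturating form of `f` is why Chl a struggles as a food proxy at very low seston levels: small absolute errors in X translate into large relative errors in ingestion exactly where the mussels in the laboratory experiments were starving.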

  8. Defining and Delivering Measurable Value: A Mega Thinking and Planning Primer

    ERIC Educational Resources Information Center

    Kaufman, Roger

    2005-01-01

    Mega planning has a primary focus on adding value for all stakeholders. It is realistic, practical, and ethical. Defining and then achieving sustained organizational success is possible. It relies on three basic elements: (1) "A societal value-added "frame of mind" or paradigm": your perspective about your organization, people, and our world. It…

  9. The effect of collagen fibril orientation on the biphasic mechanics of articular cartilage.

    PubMed

    Meng, Qingen; An, Shuqiang; Damion, Robin A; Jin, Zhongmin; Wilcox, Ruth; Fisher, John; Jones, Alison

    2017-01-01

    The highly inhomogeneous distribution of collagen fibrils may have important effects on the biphasic mechanics of articular cartilage. However, the effect of the inhomogeneity of collagen fibrils has mainly been investigated using simplified three-layered models, which may have underestimated the effect of collagen fibrils by neglecting their realistic orientation. The aim of this study was to investigate the effect of the realistic orientation of collagen fibrils on the biphasic mechanics of articular cartilage. Five biphasic material models, each of which included a different level of complexity of fibril reinforcement, were solved using two different finite element software packages (Abaqus and FEBio). Model 1 considered the realistic orientation of fibrils, which was derived from diffusion tensor magnetic resonance images. The simplified three-layered orientation was used for Model 2. Models 3-5 were three control models. The realistic collagen orientations obtained in this study were consistent with the literature. Results from the two finite element implementations were in agreement for each of the conditions modelled. The comparison between the control models confirmed some functions of collagen fibrils. The comparison between Models 1 and 2 showed that the widely-used three-layered inhomogeneous model can produce similar fluid load support to the model including the realistic fibril orientation; however, an accurate prediction of the other mechanical parameters requires the inclusion of the realistic orientation of collagen fibrils. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Evolutionary algorithm optimization of biological learning parameters in a biomimetic neuroprosthesis

    PubMed Central

    Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.

    2017-01-01

    Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
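    The optimization layer described above, an evolutionary search over learning metaparameters scored by an expensive closed-loop simulation, can be sketched with a toy fitness function standing in for the spiking-arm simulation. Parameter names, ranges, and the fitness surface are all hypothetical:

```python
import random

random.seed(1)

def fitness(params):
    """Stand-in for the expensive simulation score; here simply closeness
    of a (learning-rate, plasticity-window) pair to a fake optimum."""
    lr, tau = params
    return -((lr - 0.01) ** 2 + (tau - 20.0) ** 2)

def evolve(pop_size=20, generations=40, sigma=(0.005, 2.0)):
    """Minimal elitist evolutionary search over two metaparameters:
    keep the best half, mutate them with Gaussian noise, repeat."""
    pop = [(random.uniform(0, 0.05), random.uniform(0, 50))
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = [
            (max(0.0, p[0] + random.gauss(0, sigma[0])),
             max(0.0, p[1] + random.gauss(0, sigma[1])))
            for p in parents
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    In the paper each fitness evaluation is a full simulation, which is why the authors parallelize with an island model; the serial loop above only illustrates the selection-mutation structure.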

  11. Building Better Planet Populations for EXOSIMS

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2018-01-01

    The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
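    The contrast drawn above, independent parameter draws versus sampling a joint distribution of semi-major axis and planetary radius, can be sketched as follows. The distributions, bounds, and the dependence of radius on semi-major axis are toy assumptions, not actual EXOSIMS PlanetPopulation modules:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_independent(n):
    """Independent draws: log-uniform semi-major axis a (AU) and planet
    radius r (Earth radii), as in the earliest population modules."""
    a = np.exp(rng.uniform(np.log(0.1), np.log(30.0), n))
    r = np.exp(rng.uniform(np.log(0.5), np.log(15.0), n))
    return a, r

def sample_joint(n):
    """Toy joint model: the radius distribution depends on semi-major
    axis (close-in planets drawn smaller on average), mimicking an
    occurrence-rate grid. Functional form and numbers are illustrative."""
    a = np.exp(rng.uniform(np.log(0.1), np.log(30.0), n))
    log_r_mean = np.where(a < 1.0, np.log(2.0), np.log(6.0))
    r = np.exp(rng.normal(log_r_mean, 0.4))
    return a, r

a_j, r_j = sample_joint(10_000)
inner_mean = r_j[a_j < 1.0].mean()   # close-in planets: smaller on average
outer_mean = r_j[a_j >= 1.0].mean()
```

    Any yield heuristic computed from the independent sampler would miss exactly this kind of structure, which is the motivation for the new modules.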

  12. Inter-Individual Variability in High-Throughput Risk ...

    EPA Pesticide Factsheets

    We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA’s ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion
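    A Monte Carlo draw that preserves the correlation structure among physiological parameters is commonly done by sampling correlated normals via a Cholesky factor of the covariance matrix. The two parameters, their means, SDs, and correlation below are made-up illustrations, not NHANES-derived values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative means/SDs for two correlated physiological parameters,
# e.g. body weight (kg) and liver blood flow (L/h); values are made up.
mean = np.array([80.0, 90.0])
sd   = np.array([15.0, 20.0])
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Build the covariance, factor it, and transform i.i.d. normals:
cov = np.outer(sd, sd) * corr
L = np.linalg.cholesky(cov)
z = rng.standard_normal((100_000, 2))
samples = mean + z @ L.T            # each row: one virtual individual

sample_corr = np.corrcoef(samples.T)[0, 1]
```

    Sampling the parameters independently instead would misstate the spread of derived TK quantities (e.g. clearance per body weight), which is why the correlation structure matters for identifying the most sensitive portion of the population.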

  13. Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity

    NASA Astrophysics Data System (ADS)

    Li, Dunzhu; Gurnis, Michael; Stadler, Georg

    2017-04-01

    We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.
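    The objective functional described above has the generic PDE-constrained form sketched below; the notation is ours and schematic, not the paper's exact discretization:

```latex
J(T_0, m) \;=\; \frac{1}{2}\int_0^{t_f}\!\big\|u(t; T_0, m) - u^{\mathrm{obs}}(t)\big\|^2\,\mathrm{d}t
\;+\; \frac{\beta}{2}\,\big\|T(t_f; T_0, m) - T^{\mathrm{obs}}\big\|^2
\;+\; \frac{\alpha}{2}\,\mathcal{R}(T_0)
```

    Here u is the surface velocity (plate motion) produced by the forward mantle-convection solve, T the mantle temperature field, m the uncertain viscosity parameters, and R(T_0) the regularization on the initial condition; one adjoint solve per evaluation yields the gradient with respect to both T_0 and m for the quasi-Newton iteration.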

  14. Mechanical Degradation of Graphite/PVDF Composite Electrodes: A Model-Experimental Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Kenji; Higa, Kenneth; Mair, Sunil

    2015-12-11

    Mechanical failure modes of a graphite/polyvinylidene difluoride (PVDF) composite electrode for lithium-ion batteries were investigated by combining realistic stress-strain tests and mathematical model predictions. Samples of PVDF mixed with conductive additive were prepared in a similar way to graphite electrodes and tested while submerged in electrolyte solution. Young's modulus and tensile strength values of wet samples were found to be approximately one-fifth and one-half of those measured for dry samples. Simulations of graphite particles surrounded by binder layers, given the measured material property values, suggest that the particles are unlikely to experience mechanical damage during cycling, but that the fate of the surrounding composite of PVDF and conductive additive depends completely upon the conditions under which its mechanical properties were obtained. Simulations using realistic property values produced results that were consistent with earlier experimental observations.

  15. Systematic Construction of Kinetic Models from Genome-Scale Metabolic Networks

    PubMed Central

    Smallbone, Kieran; Klipp, Edda; Mendes, Pedro; Liebermeister, Wolfram

    2013-01-01

    The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, a stable dynamic, and realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments. PMID:24324546

  16. Dynamics of entanglement and uncertainty relation in coupled harmonic oscillator system: exact results

    NASA Astrophysics Data System (ADS)

    Park, DaeKil

    2018-06-01

    The dynamics of entanglement and the uncertainty relation are explored by analytically solving the time-dependent Schrödinger equation for a coupled harmonic oscillator system whose angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple considerations in the toy models, the dynamics in the realistic quenched model differs somewhat from that in the toy models. In particular, in the realistic quenched model the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter.

  17. More physics in the laundromat

    NASA Astrophysics Data System (ADS)

    Denny, Mark

    2010-12-01

    The physics of a washing machine spin cycle is extended to include the spin-up and spin-down phases. We show that, for realistic parameters, an adiabatic approximation applies, and thus the familiar forced, damped harmonic oscillator analysis can be applied to these phases.
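    The adiabatic approximation mentioned above means the drum speed changes slowly enough that the machine's vibration amplitude simply tracks the steady-state response of a forced, damped harmonic oscillator as the drive frequency sweeps through resonance. A sketch with illustrative (not the paper's) parameter values:

```python
import math

def steady_amplitude(omega, omega0=30.0, gamma=2.0, f_per_mass=1.0):
    """Steady-state amplitude of a forced, damped harmonic oscillator,
    A(w) = (F/m) / sqrt((w0^2 - w^2)^2 + (gamma*w)^2).
    Under the adiabatic approximation, the vibration amplitude during
    spin-up/spin-down follows this curve as the drum speed w(t) varies."""
    return f_per_mass / math.sqrt(
        (omega0 ** 2 - omega ** 2) ** 2 + (gamma * omega) ** 2)

# Drum speed ramping through the frame resonance at omega0 (rad/s):
speeds = [10.0, 25.0, 30.0, 45.0, 80.0]
amps = [steady_amplitude(w) for w in speeds]
peak_speed = speeds[amps.index(max(amps))]   # largest shaking at resonance
```

    The familiar lurch of a washer partway through spin-up is this resonance crossing; well above resonance the amplitude falls off again, as the final entry shows.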

  18. Airship stresses due to vertical velocity gradients and atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sheldon, D.

    1975-01-01

    Munk's potential flow method is used to calculate the resultant moment experienced by an ellipsoidal airship. This method is first used to calculate the moment arising from basic maneuvers considered by early designers, and then extended to calculate the moment arising from vertical velocity gradients and atmospheric turbulence. This resultant moment must be neutralized by the transverse force of the fins. The results show that vertical velocity gradients at a height of 6000 feet in thunderstorms produce a resultant moment approximately three to four times greater than the moment produced in still air by realistic values of pitch angle or steady turning. Realistic values of atmospheric turbulence produce a moment which is significantly less than the moment produced by maneuvers in still air.

  19. Bayesian imperfect information analysis for clinical recurrent data

    PubMed Central

    Chang, Chih-Kuang; Chang, Chi-Chang

    2015-01-01

    In medical research, clinical practice must often be undertaken with imperfect information from limited resources. This study applied Bayesian imperfect-information value analysis to a clinical decision-making problem for recurrent events, producing likelihood functions and posterior distributions for realistic situations. Three kinds of failure models are considered, and our methods are illustrated with an analysis of imperfect information from a trial of immunotherapy in the treatment of chronic granulomatous disease. In addition, we present evidence toward a better understanding of the differing behaviors along with concomitant variables. Based on the results of simulations, the imperfect-information value of the concomitant variables was evaluated, and different realistic situations were compared to see which could yield more accurate results for medical decision-making. PMID:25565853

  20. How much expert knowledge is it worth to put in conceptual hydrological models?

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Zappa, Massimiliano

    2017-04-01

    Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations in ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and put most of their knowledge into constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed, qualitative knowledge about processes to obtain as realistic as possible a spatial distribution of areas with different dominant runoff generation processes (DRPs), and to define plausible, narrow value ranges for each model parameter. Since the modelling goal is usually just to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfactory results, owing to the equifinality of hydrological models, overfitting problems, and the numerous sources of uncertainty affecting runoff simulations. Therefore, to test the extent to which expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' framework, relying on parameter and process constraints defined from expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.

  1. Like-sign dimuon charge asymmetry at the Tevatron: Corrections from B meson fragmentation

    NASA Astrophysics Data System (ADS)

    Mitov, Alexander

    2011-07-01

    The existing predictions for the like-sign dimuon charge asymmetry at the Tevatron are expressed in terms of parameters related to B mesons’ mixing and inclusive production fractions. We show that in the realistic case when phase-space cuts are applied, the asymmetry depends also on the details of the production mechanism for the B mesons. In particular, it is sensitive to the difference in the fragmentation functions of Bd0 and Bs0 mesons. We estimate these fragmentation effects and find that they shift the theory prediction for this observable by approximately 10%. We also point out the approximately 20% sensitivity of the asymmetry depending on which set of values for the B meson production fractions is used: as measured at the Z pole or at the Tevatron. The impact of these effects on the extraction of the semileptonic asymmetry A_SL^s from the D0 measurement is presented.

  2. Shaping effects on toroidal magnetohydrodynamic modes in the presence of plasma and wall resistivity

    NASA Astrophysics Data System (ADS)

    Rhodes, Dov J.; Cole, A. J.; Brennan, D. P.; Finn, J. M.; Li, M.; Fitzpatrick, R.; Mauel, M. E.; Navratil, G. A.

    2018-01-01

    This study explores the effects of plasma shaping on magnetohydrodynamic mode stability and rotational stabilization in a tokamak, including both plasma and wall resistivity. Depending upon the plasma shape, safety factor, and distance from the wall, the β-limit for rotational stabilization is given by either the resistive-plasma ideal-wall (tearing mode) limit or the ideal-plasma resistive-wall (resistive wall mode) limit. In order to explore this broad parameter space, a sharp-boundary model is developed with a realistic geometry, resonant tearing surfaces, and a resistive wall. The β-limit achievable in the presence of stabilization by rigid plasma rotation, or by an equivalent feedback control with imaginary normal-field gain, is shown to peak at specific values of elongation and triangularity. It is shown that the optimal shaping with rotation typically coincides with transitions between tearing-dominated and wall-dominated mode behavior.

  3. Magnetism and thermal evolution of the terrestrial planets

    NASA Technical Reports Server (NTRS)

    Stevenson, D. J.; Spohn, T.; Schubert, G.

    1983-01-01

    The absence in the cases of Venus and Mars of the substantial intrinsic magnetic fields of the earth and Mercury is considered, in light of thermal history calculations which suggest that, while the cores of Mercury and the earth are continuing to freeze, the cores of Venus and Mars may still be completely liquid. It is noted that completely fluid cores, lacking intrinsic heat sources, are not likely to sustain thermal convection for the age of the solar system, but cool to a subadiabatic, conductive state that cannot maintain a dynamo because of the gravitational energy release and the chemically driven convection that accompany inner core growth. The models presented include realistic pressure- and composition-dependent freezing curves for the core, and material parameters are chosen so that correct present-day values of heat outflow, upper mantle temperature and viscosity, and inner core radius, are obtained for the earth.

  4. Thermospheric temperature measurement technique.

    NASA Technical Reports Server (NTRS)

    Hueser, J. E.; Fowler, P.

    1972-01-01

    A method for measurement of temperature in the earth's lower thermosphere from a high-velocity probe is described. An undisturbed atmospheric sample is admitted to the instrument by means of a free molecular flow inlet system of skimmers which avoids surface collisions of the molecules prior to detection. Measurement of the time-of-flight distribution of an initially well-localized group of nitrogen metastable molecular states produced in an open, crossed electron-molecular beam source yields information on the atmospheric temperature. It is shown that for high vehicle velocities, the time-of-flight distribution of the metastable flux is a sensitive indicator of atmospheric temperature. The temperature measurement precision should be greater than 94% at the 99% confidence level over the range of altitudes from 120-170 km. These precision and altitude range estimates are based on the statistical consideration of the counting rates achieved with a multichannel analyzer using realistic values for system parameters.

  5. Research and implementation of group animation based on normal cloud model

    NASA Astrophysics Data System (ADS)

    Li, Min; Wei, Bin; Peng, Bao

    2011-12-01

    Group animation is a long-standing unsolved problem in computer animation, and all current methods have limitations. This paper puts forward a method based on the normal cloud model: the motion coordinates and motion speeds of a real fish school are collected as sample data, and a backward (reverse) cloud generator is designed and run to obtain the expectation, entropy and hyper-entropy, which are the quantitative representation of the qualitative concept. Using these parameters, a forward cloud generator is then designed and run to produce the motion coordinates and motion speeds of a two-dimensional animated fish school. Two internal state variables of the school, hunger and fear, are also designed. An experiment simulating the motion of the school under these internal and external influences shows that group animation produced by this method is highly realistic.
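
The forward normal cloud generator mentioned above has a standard form: each "cloud drop" first draws its own entropy from N(En, He²), then a value from N(Ex, En'²). The sketch below uses hypothetical parameter values for a fish-speed concept; the paper's actual expectation/entropy/hyper-entropy come from its backward cloud generator applied to measured fish data.

```python
import numpy as np

def forward_cloud(Ex, En, He, n, seed=0):
    """Forward normal cloud generator: n 'cloud drops' (x, mu).
    Each drop draws its own entropy En' ~ N(En, He^2), then a value
    x ~ N(Ex, En'^2); mu is the drop's membership degree in the concept."""
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, size=n)
    x = rng.normal(Ex, np.abs(En_prime))
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))
    return x, mu

# Hypothetical 'swimming speed' concept for a fish school:
# expectation 2.0, entropy 0.4, hyper-entropy 0.05 (illustrative values).
x, mu = forward_cloud(2.0, 0.4, 0.05, n=1000)
```

The hyper-entropy He controls the "thickness" of the cloud: with He = 0 every drop uses the same entropy and the membership curve collapses onto a single Gaussian.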

  6. Droplet size in flow: Theoretical model and application to polymer blends

    NASA Astrophysics Data System (ADS)

    Fortelný, Ivan; Jůza, Josef

    2017-05-01

    The paper is focused on prediction of the average droplet radius, R, in flowing polymer blends, where the droplet size is determined by dynamic equilibrium between droplet breakup and coalescence. Expressions for the droplet breakup frequency in systems with low and high contents of the dispersed phase are derived using available theoretical and experimental results for model blends. Dependences of the coalescence probability, Pc, on system parameters, following from recent theories, are considered, and an approximate equation for Pc in a system with low polydispersity in the droplet size is proposed. Equations for R in systems with low and high contents of the dispersed phase are derived. Combination of these equations predicts a realistic dependence of R on the volume fraction of dispersed droplets, φ. Theoretical prediction of the ratio of R to the critical droplet radius at breakup agrees fairly well with experimental values for steadily mixed polymer blends.

  7. Bottomonium suppression using a lattice QCD vetted potential

    NASA Astrophysics Data System (ADS)

    Krouppa, Brandon; Rothkopf, Alexander; Strickland, Michael

    2018-01-01

    We estimate bottomonium yields in relativistic heavy-ion collisions using a lattice QCD vetted, complex-valued, heavy-quark potential embedded in a realistic, hydrodynamically evolving medium background. We find that the lattice-vetted functional form and temperature dependence of the proper heavy-quark potential dramatically reduce the dependence of the yields on parameters other than the temperature evolution, strengthening the picture of bottomonium as a QGP thermometer. Our results also show improved agreement between computed yields and experimental data produced in RHIC 200 GeV/nucleon collisions. For LHC 2.76 TeV/nucleon collisions, the excited states, whose suppression has been used as a vital sign for quark-gluon-plasma production in a heavy-ion collision, are reproduced better than with previous perturbatively-motivated potential models; however, at the highest LHC energies our estimates for bottomonium suppression begin to underestimate the data. Possible paths to remedy this situation are discussed.

  8. New imaging algorithm in diffusion tomography

    NASA Astrophysics Data System (ADS)

    Klibanov, Michael V.; Lucas, Thomas R.; Frank, Robert M.

    1997-08-01

    A novel imaging algorithm for diffusion/optical tomography is presented for the case of the time dependent diffusion equation. Numerical tests are conducted for ranges of parameters realistic for applications to early breast cancer diagnosis using ultrafast laser pulses. This is a perturbation-like method which works for both homogeneous and heterogeneous background media. Its main innovation lies in a new approach to a novel linearized problem (LP). Such an LP is derived and reduced to a boundary value problem for a coupled system of elliptic partial differential equations. As is well known, the solution of such a system amounts to the factorization of well conditioned, sparse matrices with few non-zero entries clustered along the diagonal, which can be done very rapidly. Thus, the main advantages of this technique are that it is fast and accurate. The authors call this approach the elliptic systems method (ESM). The ESM can be extended to other data collection schemes.

  9. Motion of kinesin in a viscoelastic medium

    NASA Astrophysics Data System (ADS)

    Knoops, Gert; Vanderzande, Carlo

    2018-05-01

    Kinesin is a molecular motor that transports cargo along microtubules. The results of many in vitro experiments on kinesin-1 are described by kinetic models in which one transition corresponds to the forward motion and subsequent binding of the tethered motor head. We argue that in a viscoelastic medium like the cytosol of a cell this step is not Markovian and has to be described by a nonexponential waiting time distribution. We introduce a semi-Markov kinetic model for kinesin that takes this effect into account. We calculate, for arbitrary waiting time distributions, the moment generating function of the number of steps made, and determine from this the average velocity and the diffusion constant of the motor. We illustrate our results for the case of a Weibull waiting time distribution. We find that for realistic parameter values, viscoelasticity decreases the velocity and the diffusion constant, but increases the randomness (or Fano factor).
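
A minimal renewal-process sketch of the idea: if each mechanochemical step waits a Weibull-distributed time (shape < 1 mimics the broad tail a viscoelastic medium can produce), then counting steps over a fixed time window yields both the velocity and the Fano factor. The shape, scale and step size below are assumed illustrative values, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
step = 8e-9                  # kinesin step size (m); ~8 nm per step
shape, scale = 0.6, 0.012    # hypothetical Weibull shape (<1: broad tail) and scale (s)

def count_steps(T, trials=2000):
    """Number of steps completed in a time window T for each trial,
    with independent Weibull-distributed waiting times between steps."""
    counts = np.empty(trials, dtype=int)
    for i in range(trials):
        t, n = 0.0, 0
        while True:
            t += scale * rng.weibull(shape)   # waiting time for the next step
            if t > T:
                break
            n += 1
        counts[i] = n
    return counts

T = 2.0
N = count_steps(T)
velocity = step * N.mean() / T        # average motor velocity (m/s)
fano = N.var() / N.mean()             # randomness parameter (Fano factor)
```

For a renewal process the long-time Fano factor approaches the squared coefficient of variation of the waiting time, so a broad (shape < 1) distribution pushes the randomness above the Poisson value of 1.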

  10. Aircraft Simulators and Pilot Training.

    ERIC Educational Resources Information Center

    Caro, Paul W.

    Flight simulators are built as realistically as possible, presumably to enhance their training value. Yet, their training value is determined by the way they are used. Traditionally, simulators have been less important for training than have aircraft, but they are currently emerging as primary pilot training vehicles. This new emphasis is an…

  11. Studying flow close to an interface by total internal reflection fluorescence cross-correlation spectroscopy: Quantitative data analysis

    NASA Astrophysics Data System (ADS)

    Schmitz, R.; Yordanov, S.; Butt, H. J.; Koynov, K.; Dünweg, B.

    2011-12-01

    Total internal reflection fluorescence cross-correlation spectroscopy (TIR-FCCS) has recently [S. Yordanov et al., Opt. Express 17, 21149 (2009)] been established as an experimental method to probe hydrodynamic flows near surfaces, on length scales of tens of nanometers. Its main advantage is that fluorescence occurs only for tracer particles close to the surface, thus resulting in high sensitivity. However, the measured correlation functions provide only rather indirect information about the flow parameters of interest, such as the shear rate and the slip length. In the present paper, we show how to combine detailed and fairly realistic theoretical modeling of the phenomena by Brownian dynamics simulations with accurate measurements of the correlation functions, in order to establish a quantitative method to retrieve the flow properties from the experiments. First, Brownian dynamics is used to sample highly accurate correlation functions for a fixed set of model parameters. Second, these parameters are varied systematically by means of an importance-sampling Monte Carlo procedure in order to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for massively parallel computers, which allows us to do the data analysis within moderate computing times. The method is applied to flow near a hydrophilic surface, where the slip length is observed to be smaller than 10 nm and, within the limitations of the experiments and the model, indistinguishable from zero.
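
The first stage, Brownian dynamics of tracers in a near-wall shear flow weighted by the evanescent excitation, can be sketched as follows. All parameter values (diffusion coefficient, shear rate, slip length, penetration depth) are illustrative assumptions; the paper's actual model also includes hindered diffusion near the wall and the full optical detection geometry, omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
D, dt = 2.0e-12, 1.0e-6        # tracer diffusion coefficient (m^2/s) and time step (s) -- assumed
shear, b = 1.0e3, 5.0e-9       # shear rate (1/s) and slip length (m) -- assumed
penetration = 100e-9           # evanescent-field penetration depth (m) -- assumed

def bd_trajectory(n_steps, z0=50e-9):
    """Overdamped Brownian dynamics of one tracer in the shear flow
    u_x(z) = shear * (z + b), with a reflecting wall at z = 0.
    Returns positions and the TIR excitation weight exp(-z/penetration)."""
    x = np.zeros(n_steps)
    z = np.full(n_steps, z0)
    for i in range(1, n_steps):
        z[i] = abs(z[i-1] + np.sqrt(2*D*dt) * rng.standard_normal())  # reflect at the wall
        x[i] = x[i-1] + shear * (z[i] + b) * dt + np.sqrt(2*D*dt) * rng.standard_normal()
    return x, z, np.exp(-z / penetration)

x, z, w = bd_trajectory(2000)
```

Correlating the fluorescence weights of two laterally displaced detection volumes along such trajectories is what links the simulated parameters to the measured cross-correlation functions.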

  12. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  13. Influence of Resting Venous Blood Volume Fraction on Dynamic Causal Modeling and System Identifiability

    PubMed Central

    Hu, Zhenghui; Ni, Pengyu; Wan, Qun; Zhang, Yan; Shi, Pengcheng; Lin, Qiang

    2016-01-01

    Changes in BOLD signals are sensitive to the regional blood content associated with the vasculature, which is known as V0 in hemodynamic models. In previous studies involving dynamic causal modeling (DCM), which embodies the hemodynamic model to invert the functional magnetic resonance imaging signals into neuronal activity, V0 was arbitrarily set to a physiologically plausible value to overcome the ill-posedness of the inverse problem. It is interesting to investigate how the V0 value influences DCM. In this study we addressed this issue by using both synthetic and real experiments. The results show that the ability of DCM analysis to reveal information about brain causality depends critically on the assumed V0 value used in the analysis procedure. The choice of V0 value not only directly affects the strength of system connections, but more importantly also affects the inferences about the network architecture. Our analyses speak to a possible refinement of how the hemodynamic process is parameterized (i.e., by making V0 a free parameter); however, the conditional dependencies induced by a more complex model may create more problems than they solve. Obtaining more realistic V0 information in DCM can improve the identifiability of the system and would provide more reliable inferences about the properties of brain connectivity. PMID:27389074

  14. Empirical and numerical investigation of mass movements - data fusion and analysis

    NASA Astrophysics Data System (ADS)

    Schmalz, Thilo; Eichhorn, Andreas; Buhl, Volker; Tinkhof, Kurt Mair Am; Preh, Alexander; Tentschert, Ewald-Hans; Zangerl, Christian

    2010-05-01

    Increasing settlement activities of people in mountainous regions and the appearance of extreme climatic conditions motivate the investigation of landslides. Within the last few years a significant rise in disastrous slides has been registered, generating broad public interest and requests for security measures. The FWF (Austrian Science Fund) funded project 'KASIP' (Knowledge-based Alarm System with Identified Deformation Predictor) deals with the development of a new type of alarm system based on calibrated numerical slope models for the realistic calculation of failure scenarios. In KASIP, calibration is the optimal adaptation of a numerical model to available monitoring data by least-squares techniques (e.g. adaptive Kalman-filtering). Adaptation means the determination of a priori uncertain physical parameters, such as the strength of the geological structure. The object of our studies in KASIP is the landslide 'Steinlehnen' near Innsbruck (Northern Tyrol, Austria). The first part of the presentation is focussed on the determination of geometrical surface information. This also includes the description of the monitoring system for the collection of the displacement data and filter approaches for the estimation of the slope's kinematic behaviour. The necessity of continuous monitoring and the effect of data gaps on reliable filter results and the prediction of the future state is discussed. The second part of the presentation is focussed on the numerical modelling of the slope by FD- (Finite Difference-) methods and the development of the adaptive Kalman-filter. The numerical slope model is realised with FLAC3D (software company HCItasca Ltd.). The model contains different geomechanical approaches (like Mohr-Coulomb) and enables the calculation of large deformations and the failure of the slope. Stability parameters (like the factor-of-safety FS) allow the evaluation of the current state of the slope. Until now, the adaptation of relevant material parameters has often been performed by trial and error. This common method shall be improved by adaptive Kalman-filtering, which, in contrast to trial and error, also considers the stochastic information of the input data. Especially the estimation of strength parameters (cohesion c, angle of internal friction phi) in a dynamic consideration of the slope is discussed. Problems with conditioning and numerical stability of the filter matrices, memory overflow and computing time are outlined. It is shown that the Kalman-filter is in principle suitable for a semi-automated adaptation process and obtains realistic values for the unknown material parameters.
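
The augmented-state idea behind such parameter estimation can be sketched with a minimal linear Kalman filter: the unknown parameter is appended to the state vector and estimated alongside the observed quantity. The toy model below (a constant creep rate estimated from noisy displacement data) illustrates the filtering machinery only, not the FLAC3D slope model; all numbers are assumed.

```python
import numpy as np

# Minimal augmented-state Kalman filter: estimate slope displacement s and an
# unknown creep-rate parameter v from noisy displacement monitoring data.
dt, n = 1.0, 200
true_v = 0.05                                # hypothetical creep rate (m/day)
rng = np.random.default_rng(3)
meas = true_v * dt * np.arange(n) + rng.normal(0, 0.02, n)

F = np.array([[1.0, dt], [0.0, 1.0]])        # s_{k+1} = s_k + v*dt; v nearly constant
H = np.array([[1.0, 0.0]])                   # only the displacement is measured
Q = np.diag([1e-6, 1e-8])                    # small process noise keeps the filter adaptive
R = np.array([[0.02**2]])                    # measurement noise variance

x = np.zeros((2, 1)); P = np.eye(2)
for zk in meas:
    x = F @ x; P = F @ P @ F.T + Q           # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (np.array([[zk]]) - H @ x)   # update with the new measurement
    P = (np.eye(2) - K @ H) @ P

v_est = float(x[1, 0])                       # filtered estimate of the creep rate
```

Unlike trial-and-error fitting, the filter also carries the covariance P, i.e. a stochastic quality measure for the estimated parameter.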

  15. Combined loading criterial influence on structural performance

    NASA Technical Reports Server (NTRS)

    Kuchta, B. J.; Sealey, D. M.; Howell, L. J.

    1972-01-01

    An investigation was conducted to determine the influence of combined loading criteria on the space shuttle structural performance. The study consisted of four primary phases: Phase (1) The determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle. Phase (2) The determination of the sensitivity of structural weight to various levels of loading parameter variability and probability. Phase (3) The determination of shuttle mission loading parameters variability and probability as a function of design evolution and the identification of those loading parameters where inadequate data exists. Phase (4) The determination of rational methods of combining both deterministic time varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.

  16. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To more realistically model thermal loading of a half-space surface, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement and stress.

  17. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis

    PubMed Central

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of ±20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
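
The Morris method used above screens parameters via "elementary effects": each trajectory through parameter space perturbs one normalized parameter at a time, and the mean and spread of the resulting output changes separate influential from inert (and linear from interacting) parameters. The sketch below uses a hypothetical four-parameter toy yield function, not APSIM-Wheat.

```python
import numpy as np

def morris_sample(k, r, delta=0.5, seed=4):
    """r Morris trajectories in the unit hypercube [0,1]^k: each trajectory
    starts at a random point and perturbs one parameter at a time by delta."""
    rng = np.random.default_rng(seed)
    trajs = []
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        path = [x.copy()]
        for j in rng.permutation(k):
            x = x.copy()
            x[j] += delta
            path.append(x)
        trajs.append(np.array(path))
    return trajs

def elementary_effects(f, trajs, delta=0.5):
    """Mean (mu), mean-absolute (mu*) and spread (sigma) of the elementary
    effects per parameter; large sigma flags non-linearity or interaction."""
    k = trajs[0].shape[1]
    ee = [[] for _ in range(k)]
    for path in trajs:
        for a, b in zip(path[:-1], path[1:]):
            j = int(np.argmax(np.abs(b - a)))          # the parameter that moved
            ee[j].append((f(b) - f(a)) / delta)
    ee = np.array(ee)
    return ee.mean(axis=1), np.abs(ee).mean(axis=1), ee.std(axis=1)

# Hypothetical toy 'yield' model: x0 strong and linear, x1*x2 interacting, x3 inert.
f = lambda x: 3.0 * x[0] + 2.0 * x[1] * x[2]
mu, mu_star, sigma = elementary_effects(f, morris_sample(k=4, r=50))
```

With k parameters and r trajectories the cost is r(k+1) model runs, which is what makes the method affordable for a 90-trait crop model.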

  18. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis.

    PubMed

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of ±20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement.

  19. Assessing the performance of community-available global MHD models using key system parameters and empirical relationships

    NASA Astrophysics Data System (ADS)

    Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.

    2015-12-01

    Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows the users to simulate the magnetospheric response to different solar wind conditions including extraordinary events, like geomagnetic storms. Systematic validation of GMHD models against observations still continues to be a challenge, as well as comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) specially designed set of computer runs to simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which the MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after the north-south interplanetary magnetic field turning. According to validation results, none of the models emerges as an absolute leader. 
The new approach suggested for evaluating model performance against reality may be used by model users when planning their investigations, as well as by model developers and those interested in quantitatively evaluating progress in magnetospheric modeling.

  20. Upper Limit of the Viscosity Parameter in Accretion Flows around a Black Hole with Shock Waves

    NASA Astrophysics Data System (ADS)

    Nagarkoti, Shreeram; Chakrabarti, Sandip K.

    2016-01-01

    Black hole accretion is necessarily transonic; thus, flows must become supersonic and, therefore, sub-Keplerian before they enter into the black hole. The viscous timescale is much longer than the infall timescale close to a black hole. Hence, the angular momentum remains almost constant and the centrifugal force ~l²/r³ becomes increasingly dominant over the gravitational force ~1/r². The slowed-down matter piles up, creating an accretion shock. The flow between the shock and the inner sonic point is puffed up and behaves like a boundary layer. This so-called Comptonizing cloud/corona produces hard X-rays and jets/outflows and, therefore, is an important component of black hole astrophysics. In this paper, we study steady state, viscous, axisymmetric, transonic accretion flows around a Schwarzschild black hole. We adopt a viscosity parameter α and compute the highest possible value of α (namely, α_cr) for each pair of inner boundary parameters (the specific angular momentum carried to the horizon, l_in, and the specific energy at the inner sonic point, E(x_in)) which is still capable of producing a standing or oscillating shock. We find that while such possibilities exist for α as high as α_cr = 0.3 in very small regions of the flow parameter space, typical α_cr appears to be about ~0.05-0.1. Coincidentally, this also happens to be the typical viscosity parameter achieved by simulations of magnetorotational instabilities in accretion flows. We therefore believe that all realistic accretion flows are likely to have centrifugal pressure supported shocks unless the viscosity parameter everywhere is higher than α_cr.

  1. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    PubMed

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications system and high speed downlink packet access (UMTS-HSDPA) signals is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France the values are the highest, because of the higher population density. The maximal realistic extrapolation factors on weekdays are similar to those on weekend days.

  2. Must Metaethical Realism Make a Semantic Claim?

    PubMed

    Kahane, Guy

    2013-02-01

    Mackie drew attention to the distinct semantic and metaphysical claims made by metaethical realists, arguing that although our evaluative discourse is cognitive and objective, there are no objective evaluative facts. This distinction, however, also opens up a reverse possibility: that our evaluative discourse is antirealist, yet objective values do exist. I suggest that this seemingly far-fetched possibility merits serious attention; realism seems committed to its intelligibility, and, despite appearances, it isn't incoherent, ineffable, inherently implausible or impossible to defend. I argue that reflection on this possibility should lead us to revise our understanding of the debate between realists and antirealists. It is not only that the realist's semantic claim is insufficient for realism to be true, as Mackie argued; it's not even necessary. Robust metaethical realism is best understood as making a purely metaphysical claim. It is thus not enough for antirealists to show that our discourse is antirealist. They must directly attack the realist's metaphysical claim.

  3. Material and shape optimization for multi-layered vocal fold models using transient loadings.

    PubMed

    Schmidt, Bastian; Leugering, Günter; Stingl, Michael; Hüttner, Björn; Agaimy, Abbas; Döllinger, Michael

    2013-08-01

    Commonly applied models to study vocal fold vibrations in combination with air flow distributions are self-sustained physical models of the larynx consisting of artificial silicone vocal folds. Choosing appropriate mechanical parameters and layer geometries for these vocal fold models while considering simplifications due to manufacturing restrictions is difficult but crucial for achieving realistic behavior. In earlier work by Schmidt et al. [J. Acoust. Soc. Am. 129, 2168-2180 (2011)], the authors presented an approach in which material parameters of a static numerical vocal fold model were optimized to achieve an agreement of the displacement field with data retrieved from hemilarynx experiments. This method is now generalized to a fully transient setting. Moreover in addition to the material parameters, the extended approach is capable of finding optimized layer geometries. Depending on chosen material restriction, significant modifications of the reference geometry are predicted. The additional flexibility in the design space leads to a significantly more realistic deformation behavior. At the same time, the predicted biomechanical and geometrical results are still feasible for manufacturing physical vocal fold models consisting of several silicone layers. As a consequence, the proposed combined experimental and numerical method is suited to guide the construction of physical vocal fold models.

  4. Testing anthropic reasoning for the cosmological constant with a realistic galaxy formation model

    NASA Astrophysics Data System (ADS)

    Sudoh, Takahiro; Totani, Tomonori; Makiya, Ryu; Nagashima, Masahiro

    2017-01-01

    The anthropic principle is one of the possible explanations for the cosmological constant (Λ) problem. In previous studies, a dark halo mass threshold comparable with our Galaxy must be assumed in galaxy formation to get a reasonably large probability of finding the observed small value, P(<Λobs), though stars are found in much smaller galaxies as well. Here we examine the anthropic argument by using a semi-analytic model of cosmological galaxy formation, which can reproduce many observations such as galaxy luminosity functions. We calculate the probability distribution of Λ by running the model code for a wide range of Λ, while other cosmological parameters and model parameters for baryonic processes of galaxy formation are kept constant. Assuming that the prior probability distribution is flat per unit Λ, and that the number of observers is proportional to stellar mass, we find P(<Λobs) = 6.7 per cent without introducing any galaxy mass threshold. We also investigate the effect of metallicity; we find P(<Λobs) = 9.0 per cent if observers exist only in galaxies whose metallicity is higher than the solar abundance. If the number of observers is proportional to metallicity, we find P(<Λobs) = 9.7 per cent. Since these probabilities are not extremely small, we conclude that the anthropic argument is a viable explanation, if the value of Λ observed in our Universe is determined by a probability distribution.

  5. An efficient spectral method for the simulation of dynamos in Cartesian geometry and its implementation on massively parallel computers

    NASA Astrophysics Data System (ADS)

    Stellmach, Stephan; Hansen, Ulrich

    2008-05-01

    Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linearly with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models, in which the process of convection and magnetic field generation is simulated for only a small part of a planetary core in Cartesian geometry, can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(10^4) processors. This allows for numerical simulations at rather extreme parameter values.

  6. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions where only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions capture additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
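The competitive-occupancy prediction described above can be sketched in a few lines. The function below is an illustrative simplification, not the authors' fitted model: `K` stands in for per-odorant half-maximal concentrations and `e` for efficacies, both hypothetical parameters that would come from single-odorant dose-response fits.

```python
def mixture_response(concs, K, e):
    """Predicted response of one receptor to an odorant mixture under
    simple competitive binding: only one molecule can occupy the
    receptor at a time, so all odorants share the denominator.

    concs: odorant concentrations
    K:     half-maximal concentrations (hypothetical fit parameters)
    e:     efficacies per odorant (hypothetical fit parameters)
    """
    load = sum(c / k for c, k in zip(concs, K))
    return sum(eff * (c / k) / (1.0 + load)
               for c, k, eff in zip(concs, K, e))
```

Because every odorant contributes to the shared denominator, the mixture response is sub-additive, which is the nonlinearity the abstract attributes to competition for occupancy.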

  7. Ecological risk assessment in a large river-reservoir. 5: Aerial insectivorous wildlife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baron, L.A.; Sample, B.E.; Suter, G.W. II

Risks to aerial insectivores (e.g., rough-winged swallows, little brown bats, and endangered gray bats) were assessed for the remedial investigation of the Clinch River/Poplar Creek (CR/PC) system. Adult mayflies and sediment were collected from three locations and analyzed for contaminants. Sediment-to-mayfly contaminant uptake factors were generated from these data and used to estimate contaminant concentrations in mayflies from 13 additional locations. Contaminants of potential ecological concern (COPECs) were identified by comparing exposure estimates generated using point estimates of parameter values to NOAELs. To incorporate the variation in exposure parameters and to provide a better estimate of the potential exposure, the exposure model was recalculated using Monte Carlo methods. The potential for adverse effects was estimated by comparing the exposure distribution with the LOAEL. The results of this assessment suggest that population-level effects on rough-winged swallows and little brown bats are unlikely. However, because gray bats are endangered, effects on individuals foraging in limited subreaches of the CR/PC system may be significant. This assessment illustrates the advantage of an iterative approach to ecological risk assessments, using fewer conservative assumptions and more realistic modeling of exposure.

  8. A Stochastic Differential Equation Model for the Spread of HIV amongst People Who Inject Drugs.

    PubMed

    Liang, Yanfeng; Greenhalgh, David; Mao, Xuerong

    2016-01-01

We introduce stochasticity into the deterministic differential equation model for the spread of HIV amongst people who inject drugs (PWIDs) studied by Greenhalgh and Hay (1997), which was based on the original model constructed by Kaplan (1989) to analyse the behaviour of HIV/AIDS amongst a population of PWIDs. We derive a stochastic differential equation (SDE) for the fraction of PWIDs who are infected with HIV at time t. The stochasticity is introduced using the well-known standard technique of parameter perturbation. We first prove that the resulting SDE for the fraction of infected PWIDs has a unique solution in (0, 1) provided that some infected PWIDs are initially present, and then establish the conditions required for extinction and persistence. Furthermore, we show that there exists a stationary distribution for the persistence case. Simulations using realistic parameter values are then presented to illustrate and support our theoretical results. Our results provide new insight into the spread of HIV amongst PWIDs: introducing stochastic noise into such a model can cause the disease to die out in scenarios where deterministic models predict disease persistence.
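A parameter-perturbed SDE of this kind can be explored numerically with the Euler-Maruyama scheme. The drift and diffusion terms below are an illustrative stand-in for the Greenhalgh-Mao PWID model, not its exact equations; `beta`, `mu`, and `sigma` are hypothetical transmission, removal, and noise-intensity parameters.

```python
import numpy as np

def euler_maruyama(x0, beta, mu, sigma, T=50.0, n=5000, seed=0):
    """Euler-Maruyama simulation of an illustrative epidemic SDE

        dx = [beta*x*(1-x) - mu*x] dt + sigma*x*(1-x) dW,

    a stand-in for a parameter-perturbed PWID model. x is the
    infected fraction; the multiplicative noise vanishes at x = 0
    and x = 1, and the path is clamped to [0, 1] for safety.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
        drift = beta * x[i] * (1.0 - x[i]) - mu * x[i]
        diff = sigma * x[i] * (1.0 - x[i])
        x[i + 1] = min(max(x[i] + drift * dt + diff * dW, 0.0), 1.0)
    return x
```

Running the scheme with increasing `sigma` while holding `beta` and `mu` fixed is how one would observe the noise-induced extinction effect the abstract describes.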

  9. Dynamics of aspherical dust grains in a cometary atmosphere: I. axially symmetric grains in a spherically symmetric atmosphere

    NASA Astrophysics Data System (ADS)

    Ivanovski, S. L.; Zakharov, V. V.; Della Corte, V.; Crifo, J.-F.; Rotundi, A.; Fulle, M.

    2017-01-01

In-situ measurements of individual dust grain parameters in the immediate vicinity of a cometary nucleus are being carried out by the Rosetta spacecraft at comet 67P/Churyumov-Gerasimenko. Interpreting these observational data requires a model of dust grain motion that is as realistic as possible. In particular, the results of the Stardust mission and analysis of samples of interplanetary dust have shown that these particles are highly aspherical, which should be taken into account in any credible model. The aim of the present work is to study the dynamics of ellipsoidal particles with various aspect ratios introduced into a spherically symmetric expanding gas flow, and to reveal the possible differences in dynamics between spherical and aspherical particles. Their translational and rotational motion under the influence of gravity and of the aerodynamic force and torque is numerically integrated over a wide range of physical parameter values, including those of comet 67P/Churyumov-Gerasimenko. The main distinctions between the dynamics of spherical and ellipsoidal particles are discussed, and the aerodynamic characteristics of the ellipsoidal particles, together with examples of their translational and rotational motion in the postulated gas flow, are presented.

  10. X-ray detectability of accreting isolated black holes in our Galaxy

    NASA Astrophysics Data System (ADS)

    Tsuna, Daichi; Kawanaka, Norita; Totani, Tomonori

    2018-06-01

    Detectability of isolated black holes (IBHs) without a companion star but emitting X-rays by accretion from dense interstellar medium (ISM) or molecular cloud gas is investigated. We calculate orbits of IBHs in the Galaxy to derive a realistic spatial distribution of IBHs for various mean values of kick velocity at their birth υavg. X-ray luminosities of these IBHs are then calculated considering various phases of ISM and molecular clouds for a wide range of the accretion efficiency λ (a ratio of the actual accretion rate to the Bondi rate) that is rather uncertain. It is found that detectable IBHs mostly reside near the Galactic Centre (GC), and hence taking the Galactic structure into account is essential. In the hard X-ray band, where identification of IBHs from other contaminating X-ray sources may be easier, the expected number of IBHs detectable by the past survey by NuSTAR towards GC is at most order unity. However, 30-100 IBHs may be detected by the future survey by FORCE with an optimistic parameter set of υavg = 50 km s-1 and λ = 0.1, implying that it may be possible to detect IBHs or constrain the model parameters.
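The Bondi rate referenced above (against which the efficiency λ is defined) can be written down directly. The sketch below uses cgs units and the standard Bondi-Hoyle-Lyttleton formula; it omits λ and the radiative efficiency, and any numerical inputs one feeds it (ISM density, velocities) are illustrative, not the paper's values.

```python
import math

G = 6.674e-8          # gravitational constant, cgs
M_SUN = 1.989e33      # solar mass, g

def bondi_rate(m_bh_msun, rho, v, cs):
    """Bondi-Hoyle-Lyttleton accretion rate (g/s),

        Mdot = 4*pi*G^2*M^2*rho / (v^2 + cs^2)^(3/2),

    for a black hole of mass m_bh_msun (solar masses) moving at
    speed v (cm/s) through gas of density rho (g/cm^3) and sound
    speed cs (cm/s). The actual accretion rate in the paper is
    lambda times this value.
    """
    m = m_bh_msun * M_SUN
    return 4.0 * math.pi * G**2 * m**2 * rho / (v**2 + cs**2) ** 1.5
```

The quadratic dependence on mass and the steep inverse dependence on velocity are why slow IBHs embedded in dense molecular gas near the Galactic Centre dominate the detectable population.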

  11. Modelling of seasonal influenza and estimation of the burden in Tunisia.

    PubMed

    Chlif, S; Aissi, W; Bettaieb, J; Kharroubi, G; Nouira, M; Yazidi, R; El Moussi, A; Maazaoui, L; Slim, A; Salah, A Ben

    2016-10-02

The burden of influenza in Tunisia was estimated from surveillance data using epidemiological parameters of transmission, WHO classical tools and mathematical modelling. The incidence rates of influenza-associated influenza-like illness (ILI) per 100 000 were 18 735 in the 2012/13 season, 5536 in 2013/14 and 12 602 in 2014/15. The estimated proportions of influenza-associated ILI in the total outpatient load were 3.16%, 0.86% and 1.98% in the three seasons, respectively. The distribution of influenza viruses among positive patients in the 2014/15 season was: A(H3N2) 15.5%, A(H1N1)pdm2009 39.2% and B virus 45.3%. From the estimated numbers of symptomatic cases, we estimated that the critical proportions of the population that should be vaccinated were 15%, 4% and 10%, respectively. Running the model for different values of R0, we quantified the number of symptomatic clinical cases, the clinical attack rates, the symptomatic clinical attack rates and the number of deaths. More realistic versions of this model and improved estimates of parameters from surveillance data will strengthen the estimation of the burden of influenza.
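Critical vaccination proportions of this kind usually follow from the classical herd-immunity threshold p_c = 1 − 1/R0 (zero when R0 ≤ 1); assuming that relation is the one used here, a minimal sketch:

```python
def critical_vaccination_fraction(r0):
    """Classical herd-immunity threshold p_c = 1 - 1/R0.
    No vaccination is needed when R0 <= 1."""
    return max(0.0, 1.0 - 1.0 / r0)
```

Inverting the formula, a 15% threshold would correspond to R0 of roughly 1.18, though the paper's own R0 estimates are not given in the abstract.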

  12. Spatio-Temporal Fluctuations of the Earthquake Magnitude Distribution: Robust Estimation and Predictive Power

    NASA Astrophysics Data System (ADS)

    Olsen, S.; Zaliapin, I.

    2008-12-01

    We establish positive correlation between the local spatio-temporal fluctuations of the earthquake magnitude distribution and the occurrence of regional earthquakes. In order to accomplish this goal, we develop a sequential Bayesian statistical estimation framework for the b-value (slope of the Gutenberg-Richter's exponential approximation to the observed magnitude distribution) and for the ratio a(t) between the earthquake intensities in two non-overlapping magnitude intervals. The time-dependent dynamics of these parameters is analyzed using Markov Chain Models (MCM). The main advantage of this approach over the traditional window-based estimation is its "soft" parameterization, which allows one to obtain stable results with realistically small samples. We furthermore discuss a statistical methodology for establishing lagged correlations between continuous and point processes. The developed methods are applied to the observed seismicity of California, Nevada, and Japan on different temporal and spatial scales. We report an oscillatory dynamics of the estimated parameters, and find that the detected oscillations are positively correlated with the occurrence of large regional earthquakes, as well as with small events with magnitudes as low as 2.5. The reported results have important implications for further development of earthquake prediction and seismic hazard assessment methods.
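For contrast, the traditional window-based estimate that the "soft" Bayesian parameterization improves on is typically the Aki-Utsu maximum-likelihood b-value; a minimal sketch, with the usual magnitude-of-completeness cutoff `m_c` and Utsu's correction for magnitude binning `dm`:

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value of the Gutenberg-Richter
    law for magnitudes >= m_c:

        b = log10(e) / (mean(M) - (m_c - dm/2)),

    where dm is the magnitude bin width (Utsu's correction)."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

The instability of this estimator on small windows is precisely the motivation the abstract gives for a sequential Bayesian treatment that remains stable with realistically small samples.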

  13. Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.

    PubMed

    Davidich, Maria; Köster, Gerta

    2013-01-01

    Building a reliable predictive model of pedestrian motion is very challenging: Ideally, such models should be based on observations made in both controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: For instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are: the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.

  14. Tibiofemoral wear in standard and non-standard squat: implication for total knee arthroplasty.

    PubMed

    Fekete, Gusztáv; Sun, Dong; Gu, Yaodong; Neis, Patric Daniel; Ferreira, Ney Francisco; Innocenti, Bernardo; Csizmadia, Béla M

    2017-01-01

Due to more resilient biomaterials, problems related to wear in total knee replacements (TKRs) have decreased but not disappeared. Among design-related factors, wear is still the second most important mechanical factor limiting the lifetime of TKRs, and it is highly influenced by the local kinematics of the knee. In wear experiments, a constant load and a constant slide-roll ratio are frequently applied in tribo-tests, besides other important parameters. Nevertheless, numerous studies have demonstrated that a constant slide-roll ratio is not an accurate approach to modelling TKR wear, and that, instead of a constant load, a flexion-angle-dependent tibiofemoral force should be included in the wear model to obtain realistic results. A new analytical wear model, based upon Archard's law, is introduced, which can determine the effect of the tibiofemoral force and of varying slide-roll on wear in the tibiofemoral connection under standard and non-standard squat movements. The calculated total wear with constant slide-roll during standard squat was 5.5 times higher than the reference value, while total wear including varying slide-roll during standard squat was approximately 6.25 times higher. With regard to non-standard squat, total wear with constant slide-roll was 4.16 times higher than the reference value, and total wear including varying slide-roll was approximately 4.75 times higher. The augmented force parameter alone caused a 65% increase in wear volume, while the slide-roll ratio itself increased wear volume by 15% compared to the reference value. These results indicate that the force component has the major effect on wear propagation, and suggest that non-standard squat should be proposed for TKR patients as a rehabilitation exercise.
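Archard's law, on which the model is based, states that worn volume is proportional to normal load times sliding distance divided by hardness. A minimal discretized sketch, where a flexion-angle-dependent force profile and per-step sliding distances (with the slide-roll ratio folded into the latter) replace the constant-load, constant-ratio assumption; the wear coefficient `k`, hardness `H`, and profiles are hypothetical placeholders, not the paper's values:

```python
def archard_wear(k, H, forces, slides):
    """Discretized Archard wear volume

        V = (k / H) * sum_i F_i * ds_i,

    where F_i is the (flexion-angle-dependent) tibiofemoral force at
    step i and ds_i the sliding increment, which already reflects the
    varying slide-roll ratio."""
    return k / H * sum(F * ds for F, ds in zip(forces, slides))
```

With `forces` and `slides` held constant this reduces to the classical constant-load formula, which is how the "reference value" comparisons above can be reproduced in spirit.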

  15. Tibiofemoral wear in standard and non-standard squat: implication for total knee arthroplasty

    PubMed Central

    Sun, Dong; Gu, Yaodong; Neis, Patric Daniel; Ferreira, Ney Francisco; Innocenti, Bernardo; Csizmadia, Béla M.

    2017-01-01

Summary Introduction Due to more resilient biomaterials, problems related to wear in total knee replacements (TKRs) have decreased but not disappeared. Among design-related factors, wear is still the second most important mechanical factor limiting the lifetime of TKRs, and it is highly influenced by the local kinematics of the knee. In wear experiments, a constant load and a constant slide-roll ratio are frequently applied in tribo-tests, besides other important parameters. Nevertheless, numerous studies have demonstrated that a constant slide-roll ratio is not an accurate approach to modelling TKR wear, and that, instead of a constant load, a flexion-angle-dependent tibiofemoral force should be included in the wear model to obtain realistic results. Methods A new analytical wear model, based upon Archard's law, is introduced, which can determine the effect of the tibiofemoral force and of varying slide-roll on wear in the tibiofemoral connection under standard and non-standard squat movements. Results The calculated total wear with constant slide-roll during standard squat was 5.5 times higher than the reference value, while total wear including varying slide-roll during standard squat was approximately 6.25 times higher. With regard to non-standard squat, total wear with constant slide-roll was 4.16 times higher than the reference value, and total wear including varying slide-roll was approximately 4.75 times higher. Conclusions The augmented force parameter alone caused a 65% increase in wear volume, while the slide-roll ratio itself increased wear volume by 15% compared to the reference value. These results indicate that the force component has the major effect on wear propagation, and suggest that non-standard squat should be proposed for TKR patients as a rehabilitation exercise. PMID:29721453

  16. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  17. Poster — Thur Eve — 45: Comparison of different Monte Carlo methods of scoring linear energy transfer in modulated proton therapy beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granville, DA; Sawakuchi, GO

    2014-08-15

In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored ‘on-the-fly’ during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
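For a scored LET spectrum, the two averages compared above have the standard definitions Φ-LET = Σφ·L / Σφ and D-LET = Σφ·L² / Σφ·L (the dose in each bin being proportional to φ·L). A minimal sketch of the spectral calculation:

```python
def let_averages(fluence, let):
    """Fluence-weighted and dose-weighted average LET from a binned
    LET spectrum. The dose weight of each bin is fluence * LET, so

        phi_let = sum(phi*L) / sum(phi)
        d_let   = sum(phi*L^2) / sum(phi*L)
    """
    phi_let = (sum(f * L for f, L in zip(fluence, let))
               / sum(fluence))
    d_let = (sum(f * L * L for f, L in zip(fluence, let))
             / sum(f * L for f, L in zip(fluence, let)))
    return phi_let, d_let
```

Because D-LET weights each bin by an extra factor of L, it is dominated by the high-LET tail of the spectrum, which is consistent with its greater sensitivity to the low-energy cutoff reported above.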

  18. Relating Data and Models to Characterize Parameter and Prediction Uncertainty

    EPA Science Inventory

    Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...

  19. A Gaia DR2 Mock Stellar Catalog

    NASA Astrophysics Data System (ADS)

    Rybizki, Jan; Demleitner, Markus; Fouesneau, Morgan; Bailer-Jones, Coryn; Rix, Hans-Walter; Andrae, René

    2018-07-01

    We present a mock catalog of Milky Way stars, matching in volume and depth the content of the Gaia data release 2 (GDR2). We generated our catalog using Galaxia, a tool to sample stars from a Besançon Galactic model, together with a realistic 3D dust extinction map. The catalog mimics the complete GDR2 data model and contains most of the entries in the Gaia source catalog: five-parameter astrometry, three-band photometry, radial velocities, stellar parameters, and associated scaled nominal uncertainty estimates. In addition, we supplemented the catalog with extinctions and photometry for non-Gaia bands. This catalog can be used to prepare GDR2 queries in a realistic runtime environment, and it can serve as a Galactic model against which to compare the actual GDR2 data in the space of observables. The catalog is hosted through the virtual observatory GAVO’s Heidelberg data center (http://dc.g-vo.org/tableinfo/gdr2mock.main) service, and thus can be queried using ADQL as for GDR2 data.

  20. Environmentally realistic concentrations of the antibiotic Trimethoprim affect haemocyte parameters but not antioxidant enzyme activities in the clam Ruditapes philippinarum.

    PubMed

    Matozzo, Valerio; De Notaris, Chiara; Finos, Livio; Filippini, Raffaella; Piovan, Anna

    2015-11-01

    Several biomarkers were measured to evaluate the effects of Trimethoprim (TMP; 300, 600 and 900 ng/L) in the clam Ruditapes philippinarum after exposure for 1, 3 and 7 days. The actual TMP concentrations were also measured in the experimental tanks. The total haemocyte count significantly increased in 7 day-exposed clams, whereas alterations in haemocyte volume were observed after 1 and 3 days of exposure. Haemocyte proliferation was increased significantly in animals exposed for 1 and 7 days, whereas haemocyte lysate lysozyme activity decreased significantly after 1 and 3 days. In addition, TMP significantly increased haemolymph lactate dehydrogenase activity after 3 and 7 days. Regarding antioxidant enzymes, only a significant time-dependent effect on CAT activity was recorded. This study demonstrated that environmentally realistic concentrations of TMP affect haemocyte parameters in clams, suggesting that haemocytes are a useful cellular model for the assessment of the impact of TMP on bivalves. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Protocol for fermionic positive-operator-valued measures

    NASA Astrophysics Data System (ADS)

    Arvidsson-Shukur, D. R. M.; Lepage, H. V.; Owen, E. T.; Ferrus, T.; Barnes, C. H. W.

    2017-11-01

In this paper we present a protocol for the implementation of a positive-operator-valued measure (POVM) on massive fermionic qubits. We present methods for implementing nondispersive qubit transport, spin rotations, and spin polarizing beam-splitter operations. Our scheme attains linear-optics-like control of the spatial extent of the qubits by considering ground-state electrons trapped in the minima of surface acoustic waves in semiconductor heterostructures. Furthermore, we numerically simulate a high-fidelity POVM that carries out Procrustean entanglement distillation in the framework of our scheme, using experimentally realistic potentials. Our protocol can be applied not only to pure ensembles with particle pairs of known identical entanglement, but also to realistic ensembles of particle pairs with a distribution of entanglement entropies. This paper provides an experimentally realizable design for future quantum technologies.

  2. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors

    PubMed Central

    2017-01-01

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest. PMID:28745040

  3. A new parameter to simultaneously assess antioxidant activity for multiple phenolic compounds present in food products.

    PubMed

    Yang, Hong; Xue, Xuejia; Li, Huan; Tay-Chan, Su Chin; Ong, Seng Poon; Tian, Edmund Feng

    2017-08-15

In this work, we established a new methodology to simultaneously assess the relative reaction rates of multiple antioxidant compounds in one experimental set-up. The methodology hypothesizes that competition among antioxidant compounds for a limiting amount of free radical (here, DPPH) reflects their relative reaction rates. In contrast with the conventional detection of DPPH decrease at 515 nm on a spectrophotometer, the depletion of antioxidant compounds treated with a series of DPPH concentrations was monitored using liquid chromatography coupled with quadrupole time-of-flight (LC-QTOF). A new parameter, the relative antioxidant activity (RAA), is proposed to rank these antioxidants according to their reaction rate constants. We investigated the applicability of RAA using pre-mixed standard phenolic compounds, and extended the application to two food products, red wine and green tea. RAA was found to correlate well with reported k values. This new parameter provides a new perspective for evaluating antioxidant compounds present in food and herbal matrices: it not only realistically reflects the antioxidant activity of compounds co-existing with competing constituents, but could also accelerate the search for potent yet rare antioxidants from herbs of food or medicinal origin. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Towards a quantitative understanding of oxygen tension and cell density evolution in fibrin hydrogels.

    PubMed

    Demol, Jan; Lambrechts, Dennis; Geris, Liesbet; Schrooten, Jan; Van Oosterwyck, Hans

    2011-01-01

    The in vitro culture of hydrogel-based constructs above a critical size is accompanied by problems of unequal cell distribution when diffusion is the primary mode of oxygen transfer. In this study, an experimentally-informed mathematical model was developed to relate cell proliferation and death inside fibrin hydrogels to the local oxygen tension in a quantitative manner. The predictive capacity of the resulting model was tested by comparing its outcomes to the density, distribution and viability of human periosteum derived cells (hPDCs) that were cultured inside fibrin hydrogels in vitro. The model was able to reproduce important experimental findings, such as the formation of a multilayered cell sheet at the hydrogel periphery and the occurrence of a cell density gradient throughout the hydrogel. In addition, the model demonstrated that cell culture in fibrin hydrogels can lead to complete anoxia in the centre of the hydrogel for realistic values of oxygen diffusion and consumption. A sensitivity analysis also identified these two parameters, together with the proliferation parameters of the encapsulated cells, as the governing parameters for the occurrence of anoxia. In conclusion, this study indicates that mathematical models can help to better understand oxygen transport limitations and its influence on cell behaviour during the in vitro culture of cell-seeded hydrogels. Copyright © 2010 Elsevier Ltd. All rights reserved.
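The anoxia mechanism described above can be illustrated with the textbook steady-state slab solution of diffusion with zero-order consumption, D·c″ = q with c(±L) = c0, clipped at zero where the gel goes anoxic. This is an illustrative simplification of the paper's experimentally informed model, with hypothetical parameter values:

```python
import numpy as np

def oxygen_profile(D, q, c0, L, n=201):
    """Steady 1-D oxygen profile in a hydrogel slab of half-thickness
    L (boundary tension c0), with diffusivity D and zero-order
    consumption rate q. The analytic solution of D c'' = q is

        c(x) = c0 - (q / 2D) * (L^2 - x^2),

    clipped at zero: an anoxic core appears whenever q L^2 / (2 D)
    exceeds c0."""
    x = np.linspace(-L, L, n)
    c = c0 - q / (2.0 * D) * (L**2 - x**2)
    return x, np.clip(c, 0.0, None)
```

The quadratic dip shows directly why D and q (together with the cell-dependent consumption they set) are the governing parameters for anoxia: doubling the slab thickness quadruples the central oxygen deficit.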

  5. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors.

    PubMed

    Torabi, Solmaz; Cherry, Megan; Duijnstee, Elisabeth A; Le Corre, Vincent M; Qiu, Li; Hummelen, Jan C; Palasantzas, George; Koster, L Jan Anton

    2017-08-16

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest.
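The baseline against which the excess capacitance is defined is the ideal smooth-electrode equation C = ε0·εr·A/d; a minimal sketch (the paper's roughness-corrected extension depends on the three roughness parameters of the self-affine surface and is not reproduced here):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_c(eps_r, area, d):
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d for
    flat, smooth electrodes of area A (m^2) separated by d (m).
    The paper shows that a rough electrode yields a capacitance
    exceeding this value."""
    return EPS0 * eps_r * area / d
```

Comparing a measured thin-film capacitance against this value is the practical check the abstract recommends before trusting the smooth-electrode assumption.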

  6. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    PubMed Central

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

Advances in computed tomography imaging technology and inexpensive high-performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609

  7. Modeling of Long-Term Fate of Mobilized Fines due to Dam-Embankment Interfacial Dislocations

    NASA Astrophysics Data System (ADS)

    Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I.; Antoun, T. H.

    2011-12-01

    Transverse cracks in embankment dams can develop as a result of post-construction settlements, earthquake deformations, or anthropogenic loads such as emplaced explosives. During these dislocations, fine particles are released from the damaged zones and can create unwanted inertial erosion and piping through the transverse cracks. These processes are equally critical to the overall stability of the dam. We present numerical results related to the problem of the fluid flow, transport, and filtration of particulates from damaged zones between the concrete sections of a gravity dam and the embankment wraparound sections. The model solves simultaneously the flow, attachment, and washout of fine particles within a wraparound heterogeneous porous media. We used a state-of-the-art finite element method with adaptive mesh refinement to capture 1) the interface between water dense with fines and clear water, and 2) the non-linearity of the free surface itself. A few scenarios of sediment entrapment in the filter layers of a gravity dam were considered. Several parameterizations of the filtration model and constitutive laws of soil behavior were also investigated. Through these analyses, we concluded that the attachment kinetic isotherm is the key function of the model. More parametric studies need to be conducted to assess the sensitivity of the kinetic isotherm parameters on the overall stability of the embankment. These kinetic parameters can be obtained, for example, through numerical micro- and meso-scale studies. It is worth mentioning that the current model, for the more realistic non-linear kinetic isotherms, has predicted a self-rehabilitation of the breached core with retention of 50% of the mobilized fines using a very conservative filtration length. A more realistic value should exceed the assumed one, resulting in a retention exceeding 50%. This work was performed under the auspices of the U.S. 
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the U.S. Department of Homeland Security, Science and Technology Directorate.

  8. Heuristic modelling of laser written mid-infrared LiNbO3 stressed-cladding waveguides.

    PubMed

    Nguyen, Huu-Dat; Ródenas, Airán; Vázquez de Aldana, Javier R; Martínez, Javier; Chen, Feng; Aguiló, Magdalena; Pujol, Maria Cinta; Díaz, Francesc

    2016-04-04

    Mid-infrared lithium niobate cladding waveguides have great potential in low-loss on-chip non-linear optical instruments such as mid-infrared spectrometers and frequency converters, but their three-dimensional femtosecond-laser fabrication is currently not well understood due to the complex interplay between achievable depressed index values and the stress-optic refractive index changes arising as a function of both laser fabrication parameters and cladding arrangement. Moreover, both the stress-field anisotropy and the asymmetric shape of low-index tracks yield highly birefringent waveguides that are not useful for most applications in which controlling and manipulating the polarization state of a light beam is crucial. To achieve truly high-performance devices, a fundamental understanding of how these waveguides behave and how they can ultimately be optimized is required. In this work we employ a heuristic modelling approach based on the use of standard optical characterization data along with standard computational numerical methods to obtain a satisfactory approximate solution to the problem of designing realistic laser-written circuit building-blocks, such as straight waveguides, bends and evanescent splitters. We infer basic waveguide design parameters such as the complex index of refraction of laser-written tracks at 3.68 µm mid-infrared wavelengths, as well as the cross-sectional stress-optic index maps, obtaining an overall waveguide simulation that closely matches the measured mid-infrared waveguide properties in terms of anisotropy, mode field distributions and propagation losses. We then explore experimentally feasible waveguide designs in search of single-mode low-loss behaviour for both ordinary and extraordinary polarizations. We evaluate the overall losses of s-bend components, unveiling the expected radiation bend losses of this type of waveguide, and finally showcase a prototype design of a low-loss evanescent splitter. 
A realistic waveguide model from which robust waveguide designs can be developed will be key to exploiting the potential of this technology.

  9. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    NASA Astrophysics Data System (ADS)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. 
Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. Reliability is incorporated as the percentage of surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that constraint violation increases as the reliability level is reduced. Thus, ensemble surrogate model based simulation-optimization was found to be useful for deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
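
    The reliability measure described above, the fraction of ensemble surrogates whose predictions satisfy the imposed constraints for a candidate pumping solution, can be sketched directly; the salinity values and limit below are hypothetical.

```python
def ensemble_reliability(predictions, limit):
    """Reliability of a candidate solution: the fraction of ensemble
    surrogate predictions that satisfy the constraint value <= limit,
    as described in the abstract. Inputs are hypothetical."""
    if not predictions:
        raise ValueError("empty ensemble")
    satisfied = sum(1 for p in predictions if p <= limit)
    return satisfied / len(predictions)

# hypothetical salinity predictions from 10 surrogates, limit 0.5:
# 8 of the 10 satisfy the limit, so reliability is 0.8
r = ensemble_reliability(
    [0.41, 0.48, 0.52, 0.39, 0.45, 0.47, 0.55, 0.44, 0.42, 0.49], 0.5)
```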

  10. Experimental violation of a Bell's inequality with efficient detection.

    PubMed

    Rowe, M A; Kielpinski, D; Meyer, V; Sackett, C A; Itano, W M; Monroe, C; Wineland, D J

    2001-02-15

    Local realism is the idea that objects have definite properties whether or not they are measured, and that measurements of these properties are not affected by events taking place sufficiently far away. Einstein, Podolsky and Rosen used these reasonable assumptions to conclude that quantum mechanics is incomplete. Starting in 1965, Bell and others constructed mathematical inequalities whereby experimental tests could distinguish between quantum mechanics and local realistic theories. Many experiments have since been done that are consistent with quantum mechanics and inconsistent with local realism. But these conclusions remain the subject of considerable interest and debate, and experiments are still being refined to overcome 'loopholes' that might allow a local realistic interpretation. Here we have measured correlations in the classical properties of massive entangled particles (9Be+ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's 'signal' is 2.25 +/- 0.03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called 'detection' loophole.
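
    The quoted Bell "signal" is the CHSH combination of four correlation measurements. A short sketch showing the local-realistic bound of 2 and the quantum maximum of 2√2 ≈ 2.83; the analyzer angles below are the standard CHSH choices, assumed for illustration rather than taken from the ion-trap experiment.

```python
import math

def chsh_signal(E_ab, E_abp, E_apb, E_apbp):
    """CHSH Bell 'signal' S = |E(a,b) - E(a,b')| + |E(a',b) + E(a',b')|.
    Any local realistic theory obeys S <= 2; quantum mechanics allows up
    to 2*sqrt(2) ~ 2.83. Rowe et al. report a measured S = 2.25 +/- 0.03."""
    return abs(E_ab - E_abp) + abs(E_apb + E_apbp)

# ideal quantum correlations E(x, y) = -cos(x - y) at the standard
# CHSH analyzer angles (an illustration, not the ion-trap settings)
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
E = lambda x, y: -math.cos(x - y)
s = chsh_signal(E(a, b), E(a, bp), E(ap, b), E(ap, bp))  # ~ 2.828
```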

  11. Quantum information transfer and entanglement with SQUID qubits in cavity QED: a dark-state scheme with tolerance for nonuniform device parameters.

    PubMed

    Yang, Chui-Ping; Chu, Shih-I; Han, Siyuan

    2004-03-19

    We investigate the experimental feasibility of realizing quantum information transfer (QIT) and entanglement with SQUID qubits in a microwave cavity via dark states. Realistic system parameters are presented. Our results show that QIT and entanglement with two SQUID qubits can be achieved with high fidelity. The present scheme is tolerant to device parameter nonuniformity. We also show that the strong-coupling limit can be achieved with SQUID qubits in a microwave cavity. Thus, cavity-SQUID systems provide a new way to produce nonclassical microwave sources and to realize quantum communication.

  12. Relating Vegetation Aerodynamic Roughness Length to Interferometric SAR Measurements

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan; Rodriquez, Ernesto

    1998-01-01

    In this paper, we investigate the feasibility of estimating the aerodynamic roughness parameter from interferometric SAR (INSAR) measurements. The relation between the interferometric correlation and the rms height of the surface is presented analytically. Model simulations performed over realistic canopy parameters obtained from field measurements in a boreal forest environment demonstrate the capability of INSAR measurements for estimating and mapping surface roughness lengths over forests and/or other vegetation types. The procedure for estimating this parameter over boreal forests using the INSAR data is discussed, and the possibility of extending the methodology to tropical forests is examined.

  13. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. 
For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
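
    The parameterization contrast described above, an exponential distribution with a single median-age parameter versus a Weibull whose extra shape parameter controls the slope, can be sketched as follows. This is a hedged illustration; the paper's weighted four-parameter form is not reproduced.

```python
import math

def exponential_cdf(t, median):
    """One-parameter exponential age distribution, written in terms of
    its median age (lambda = ln 2 / median)."""
    lam = math.log(2) / median
    return 1.0 - math.exp(-lam * t)

def weibull_cdf(t, median, shape):
    """Two-parameter Weibull age distribution: the extra 'shape'
    parameter controls the slope of the curve, and shape = 1 recovers
    the exponential. Parameterized by median age for comparability."""
    scale = median / (math.log(2) ** (1.0 / shape))
    return 1.0 - math.exp(-((t / scale) ** shape))

# both curves pass through 0.5 at the median age (say, 20 years),
# but differ in slope away from it
f1 = exponential_cdf(20.0, 20.0)
f2 = weibull_cdf(20.0, 20.0, 1.5)
```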

  14. Weak value controversy

    NASA Astrophysics Data System (ADS)

    Vaidman, L.

    2017-10-01

    Recent controversy regarding the meaning and usefulness of weak values is reviewed. It is argued that in spite of recent statistical arguments by Ferrie and Combes, experiments with anomalous weak values provide useful amplification techniques for precision measurements of small effects in many realistic situations. The statistical nature of weak values is questioned. Although measuring weak values requires an ensemble, it is argued that the weak value, similarly to an eigenvalue, is a property of a single pre- and post-selected quantum system. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  15. Simulating the value of electric-vehicle-grid integration using a behaviourally realistic model

    NASA Astrophysics Data System (ADS)

    Wolinetz, Michael; Axsen, Jonn; Peters, Jotham; Crawford, Curran

    2018-02-01

    Vehicle-grid integration (VGI) uses the interaction between electric vehicles and the electrical grid to provide benefits that may include reducing the cost of using intermittent renewable electricity or providing a financial incentive for electric vehicle ownership. However, studies that estimate the value of VGI benefits have largely ignored how consumer behaviour will affect the magnitude of the impact. Here, we simulate the long-term impact of VGI using behaviourally realistic and empirically derived models of vehicle adoption and charging combined with an electricity system model. We focus on the case where a central entity manages the charging rate and timing for participating electric vehicles. VGI is found not to increase the adoption of electric vehicles, but it does have a small beneficial impact on electricity prices. By 2050, VGI reduces wholesale electricity prices by 0.6-0.7% (0.7 MWh-1, 2010 CAD) relative to an equivalent scenario without VGI. Excluding consumer behaviour from the analysis inflates the value of VGI.

  16. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics.

    PubMed

    Ly, Cheng

    2013-10-01

    The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.

  17. Local turbulence simulations for the multiphase ISM

    NASA Astrophysics Data System (ADS)

    Kissmann, R.; Kleimann, J.; Fichtner, H.; Grauer, R.

    2008-12-01

    In this paper, we show results of numerical simulations for the turbulence in the interstellar medium (ISM). These results were obtained using a Riemann solver-free numerical scheme for high-Mach number hyperbolic equations. Here, we especially concentrate on the physical properties of the ISM. That is, we do not present turbulence simulations trimmed to be applicable to the ISM. The simulations are rather based on physical estimates for the relevant parameters of the interstellar gas. Applying our code to simulate the turbulent plasma motion within a typical interstellar molecular cloud, we investigate the influence of different equations of state (isothermal and adiabatic) on the statistical properties of the resulting turbulent structures. We find slightly different density power spectra and dispersion maps, while both cases yield qualitatively similar dissipative structures, and exhibit a departure from the classical Kolmogorov case towards a scaling described by the She-Leveque model. Solving the full energy equation with realistic heating/cooling terms appropriate for the diffuse interstellar gas (DIG), we are able to reproduce a realistic two-phase distribution of cold and warm plasma. When extracting maps of polarized intensity from our simulation data, we find encouraging similarity to actual observations. Finally, we compare the actual magnetic field strength of our simulations to its value inferred from the rotation measure. We find these to be systematically different by a factor of about 1.15, thus highlighting the often-underestimated influence of varying line-of-sight particle densities on the magnetic field strength derived from observed rotation measures.
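
    The field strength inferred from a rotation measure usually comes from the standard relation RM = 0.81 ∫ nₑ B∥ dl (RM in rad m⁻², nₑ in cm⁻³, B∥ in μG, path length in pc). A sketch of that inference under the uniform-density assumption whose failure the abstract highlights; the formula is the standard convention, not taken from the paper.

```python
def los_average_b_parallel(rm_rad_m2, n_e_cm3, path_pc):
    """Line-of-sight average parallel magnetic field (microgauss)
    inferred from a rotation measure under a uniform electron density:
    RM = 0.81 * n_e * B_par * L (standard convention, not from the
    paper). The abstract's point is that density fluctuations along the
    line of sight systematically bias this estimate relative to the
    true field strength."""
    return rm_rad_m2 / (0.81 * n_e_cm3 * path_pc)

# hypothetical diffuse-ISM values: RM = 40.5 rad/m^2, n_e = 0.05 cm^-3,
# 1 kpc path -> 1 microgauss
b = los_average_b_parallel(40.5, 0.05, 1000.0)
```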

  18. Passive simulation of the nonlinear port-Hamiltonian modeling of a Rhodes Piano

    NASA Astrophysics Data System (ADS)

    Falaize, Antoine; Hélie, Thomas

    2017-03-01

    This paper deals with the time-domain simulation of an electro-mechanical piano: the Fender Rhodes. A simplified description of this multi-physical system is considered. It is composed of a hammer (nonlinear mechanical component), a cantilever beam (linear damped vibrating component) and a pickup (nonlinear magneto-electronic transducer). The approach is to propose a power-balanced formulation of the complete system, from which a guaranteed-passive simulation is derived to generate physically-based realistic sound synthesis. These issues are addressed in four steps. First, a class of Port-Hamiltonian Systems is introduced: these input-to-output systems fulfill a power balance that can be decomposed into conservative, dissipative and source parts. Second, physical models are proposed for each component and are recast in the port-Hamiltonian formulation. In particular, a finite-dimensional model of the cantilever beam is derived, based on a standard modal decomposition applied to the Euler-Bernoulli model. Third, these systems are interconnected, providing a nonlinear finite-dimensional Port-Hamiltonian System of the piano. Fourth, a passive-guaranteed numerical method is proposed. This method is built to preserve the power balance in the discrete-time domain, and more precisely, its decomposition structured into conservative, dissipative and source parts. Finally, simulations are performed for a set of physical parameters, based on empirical but realistic values. They provide a variety of audio signals which are perceptively relevant and qualitatively similar to some signals measured on a real instrument.
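
    The guaranteed-passive idea can be illustrated on the smallest possible example: for a linear system dx/dt = (J - R)∇H with quadratic energy H, the implicit-midpoint average is an exact discrete gradient, so the discrete energy balance inherits the continuous decomposition and dissipation never spuriously injects energy. This is a sketch of the principle under those assumptions, not the Rhodes piano model.

```python
def midpoint_step(x, dt, A):
    """Implicit-midpoint (discrete-gradient) step for dx/dt = A x with
    A = J - R and quadratic energy H(x) = (x1^2 + x2^2)/2. Because the
    midpoint average is the discrete gradient of H, the scheme satisfies
    H(x_new) - H(x_old) = dt * m.(A m) with m the midpoint: the power
    balance holds exactly and energy never increases when R only
    dissipates. A sketch of the guaranteed-passive principle only."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    h = dt / 2.0
    # solve (I - h*A) x_new = (I + h*A) x_old for the 2x2 case
    m11, m12, m21, m22 = 1 - h * a, -h * b, -h * c, 1 - h * d
    det = m11 * m22 - m12 * m21
    r1 = (1 + h * a) * x[0] + h * b * x[1]
    r2 = h * c * x[0] + (1 + h * d) * x[1]
    return ((m22 * r1 - m12 * r2) / det, (m11 * r2 - m21 * r1) / det)

# damped oscillator: J = [[0,1],[-1,0]] (conservative part),
# R = [[0,0],[0,0.1]] (dissipative part), so A = J - R
A = [[0.0, 1.0], [-1.0, -0.1]]
H = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)
x = (1.0, 0.0)
for _ in range(1000):
    x_new = midpoint_step(x, 0.01, A)
    assert H(x_new) <= H(x) + 1e-12  # passivity: energy never increases
    x = x_new
```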

  19. Thermal conductivity of a single polymer chain

    NASA Astrophysics Data System (ADS)

    Freeman, J. J.; Morgan, G. J.; Cullen, C. A.

    1987-05-01

    Numerical experiments have been performed with use of a fairly realistic model for polyethylene which has enabled the effects of anharmonicity, temperature, and positional disorder on the thermal conductivity to be investigated. It has been shown that the classical conductivity may be substantially increased by both increasing the strength of the anharmonic forces and by decreasing the chain temperature. Although the conductivity of individual chains is found to be high, realistic values for the conductivity of a bulk material may be understood provided that due account is taken of the polymer conformation and interchain coupling.

  20. Optimal control of hydroelectric facilities

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi

    This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. 
In our final chapter we discuss the challenging problem of optimizing a sequence of two hydro dams sharing the same river system. The complexity of this problem is magnified and we just scratch its surface here. The thesis concludes with suggestions for future work in this fertile area. Keywords: dynamic programming, hydroelectric facility, optimization, optimal control, switching cost, turbine efficiency.
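
    The dynamic programming approach described in the thesis can be sketched on a toy single-reservoir plant: backward induction over discrete reservoir levels, choosing at each stage how much water to release given a deterministic price path. All names, units, and the efficiency constant below are hypothetical illustrations, not the thesis model.

```python
def optimal_dispatch(prices, n_levels, inflow, release_options, efficiency=0.9):
    """Backward-induction dynamic program for a toy single-reservoir
    hydro plant (a sketch of the approach, not the thesis model).
    State: discrete reservoir level 0..n_levels-1; action: units of
    water released in a stage; stage revenue: efficiency * release *
    price. Returns the value function at the first stage and the
    optimal release policy per stage and level."""
    value = [0.0] * n_levels  # terminal value of stored water
    policy = []
    for price in reversed(prices):
        new_value, stage_policy = [], []
        for i in range(n_levels):
            best, best_a = float("-inf"), None
            for a in release_options:
                if a > i + inflow:
                    continue  # cannot release more water than is available
                j = min(i + inflow - a, n_levels - 1)  # next reservoir level
                v = efficiency * a * price + value[j]
                if v > best:
                    best, best_a = v, a
            new_value.append(best)
            stage_policy.append(best_a)
        value, policy = new_value, [stage_policy] + policy
    return value, policy

# two stages with rising prices: starting full (level 2), the optimal
# plan releases one unit per stage, earning 0.9*1*1 + 0.9*1*2 = 2.7,
# while a half-full reservoir holds its water for the higher price
value, policy = optimal_dispatch([1.0, 2.0], 3, 0, [0, 1])
```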

  1. Goal-Directed Decision Making with Spiking Neurons.

    PubMed

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors 0270-6474/16/361529-18$15.00/0.

  2. Goal-Directed Decision Making with Spiking Neurons

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636

  3. A spectral element method with adaptive segmentation for accurately simulating extracellular electrical stimulation of neurons.

    PubMed

    Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J

    2017-05-01

    The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
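
    The classical activating-function formalism that the paper extends evaluates, on a uniformly segmented straight fiber, the second spatial difference of the extracellular potential along the compartments; sites where it is positive are depolarized by the stimulus. A minimal sketch of that baseline (uniform segment length and membrane properties assumed, constants folded out):

```python
def activating_function(v_extracellular):
    """Discrete activating function: second spatial difference of the
    extracellular potential along a uniformly segmented fiber. Positive
    entries mark compartments the stimulus depolarizes. A sketch of the
    classical formalism; the paper generalizes it to arbitrary neuron
    geometries and nonuniform segmentation."""
    v = v_extracellular
    return [v[n - 1] - 2 * v[n] + v[n + 1] for n in range(1, len(v) - 1)]

# a potential peaking over the middle compartment hyperpolarizes the
# peak and depolarizes its flanks
f = activating_function([1.0, 2.0, 4.0, 2.0, 1.0])
```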

  4. Comparisons of subsonic drag estimates derived from Pioneer Venus probes flight data with wind-tunnel results

    NASA Technical Reports Server (NTRS)

    Blanchard, R. C.; Phillips, W. P.; Kelly, G. M.; Findlay, J. T.

    1980-01-01

    Subsonic drag coefficients have been obtained from flight data for the Pioneer Venus multiprobes. The technique used to extract the information from the data consisted of utilizing in situ pressure and temperature measurements. Analysis of the major model parameter error sources indicates overall error levels of five percent or less in the flight values of the drag coefficient. Comparisons of the flight coefficients with preflight wind-tunnel test data showed generally good agreement except for the Sounder descent probe configuration. To preclude atmospheric phenomena as a possible explanation of this difference, additional wind-tunnel tests were performed on the Sounder descent probe. Special attempts were made to duplicate the probe geometry for tests in a high Reynolds number environment in order to make model and flight conditions as realistic as practical. Preliminary results from this testing in the NASA LaRC Low Turbulence Pressure Tunnel produced a drag coefficient of 0.68 at 0 deg angle of attack, which is within the expected accuracy limits of the flight-derived drag coefficient value of 0.72 ± 0.04, thus eliminating atmospheric phenomena as the explanation for the initial difference.

  5. Complex dynamics in the Leslie-Gower type of the food chain system with multiple delays

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Song, Zi-Gen; Xu, Jian

    2014-08-01

    In this paper, we present a Leslie-Gower type of food chain system composed of three species: resource, consumer, and predator. The digestion time delays corresponding to consumer-eat-resource and predator-eat-consumer interactions are introduced for more realistic consideration; they are called the resource digestion delay (RDD) and consumer digestion delay (CDD) for simplicity. Analyzing the corresponding characteristic equation, the stabilities of the boundary and interior equilibrium points are studied. The food chain system exhibits species coexistence for small values of the digestion delays. A large RDD/CDD may destabilize this coexistence and drive the system dynamics into recurrent blooms or system collapse. Further, the presence of multiple delays can steer the species populations back into stable coexistence. To investigate the effect of time delays on the recurrent blooms of species populations, the Hopf bifurcation and periodic solutions are investigated in detail by means of center manifold reduction and the normal form method. Finally, numerical simulations are performed to display some complex dynamics, including multiple periodic solutions and chaotic motion for different values of the system parameters. The system dynamics evolve into chaotic motion through a period-doubling bifurcation.
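The destabilizing role of a digestion-type delay can be illustrated with the simplest delayed population model, Hutchinson's delayed logistic equation, integrated with a history buffer. This is a toy stand-in for the paper's three-species Leslie-Gower system, not its actual equations; r, K, and tau are illustrative values chosen so the delay exceeds the Hopf threshold (r·tau > π/2) and sustained oscillations appear.

```python
import numpy as np

def delayed_logistic(r=1.0, K=1.0, tau=2.0, dt=0.01, T=200.0, x0=0.5):
    """Explicit-Euler integration of x'(t) = r x(t) (1 - x(t - tau)/K)."""
    n = int(T / dt)
    lag = int(tau / dt)
    x = np.empty(n)
    x[: lag + 1] = x0                 # constant history on [-tau, 0]
    for t in range(lag, n - 1):       # delayed term read from the stored trajectory
        x[t + 1] = x[t] + dt * r * x[t] * (1.0 - x[t - lag] / K)
    return x

x = delayed_logistic()
tail = x[-5000:]   # discard the transient; r*tau = 2 > pi/2, so oscillations persist
```

Without the delay (tau = 0) the same scheme relaxes monotonically to K; with the delay the population repeatedly overshoots and crashes, the "recurrent bloom" behavior described above.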

  6. Modeling the seasonal cycle of CO2 on Mars: A fit to the Viking lander pressure curves

    NASA Technical Reports Server (NTRS)

    Wood, S. E.; Paige, D. A.

    1992-01-01

    We have constructed a more accurate Mars thermal model, similar to the one used by Leighton and Murray in 1966, which solves radiative, conductive, and latent heat balance at the surface as well as the one-dimensional heat conduction equation for 40 layers to a depth of 15 meters every 1/36 of a Martian day. The planet is divided into 42 latitude bands with a resolution of two degrees near the poles and five degrees at lower latitudes, with elevations relative to the 6.1 mbar reference areoid. This estimate of the Martian zonally averaged topography was derived primarily from radio occultations. We show that a realistic one-dimensional thermal model is able to reproduce the VL1 pressure curve reasonably well without having to invoke complicated atmospheric effects such as dust storms and polar hoods. Although these factors may cause our deduced values for each model parameter to differ from its true value, we believe that this simple model can be used as a platform to study many aspects of the Martian CO2 cycle over seasonal, interannual, and long-term climate timescales.
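The subsurface half of such a thermal model is a one-dimensional heat-conduction solve, which can be sketched with an explicit (FTCS) update. The values below are nondimensional toy numbers, not the paper's 40-layer, 15-m Mars grid, and fixed-temperature boundaries stand in for the full surface energy-balance condition.

```python
import numpy as np

def conduct(T, alpha, dz, dt, nsteps, T_top, T_bottom):
    """March the 1-D heat equation dT/dt = alpha d2T/dz2 with an explicit scheme."""
    T = T.copy()
    assert alpha * dt / dz**2 <= 0.5, "explicit-scheme stability limit"
    for _ in range(nsteps):
        T[0], T[-1] = T_top, T_bottom                          # boundary temperatures
        T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# Illustrative run: 41 nodes relax from a uniform state to the linear steady profile
T = conduct(np.full(41, 200.0), alpha=1.0, dz=1.0, dt=0.4, nsteps=20000,
            T_top=150.0, T_bottom=250.0)
```

In the real model the surface node would instead be updated from radiative, conductive, and latent (CO2 frost) heat balance each time step, which is where the coupling to the seasonal pressure curve enters.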

  7. Acoustic radiation damping of flat rectangular plates subjected to subsonic flows

    NASA Technical Reports Server (NTRS)

    Lyle, Karen Heitman

    1993-01-01

    The acoustic radiation damping for various isotropic and laminated composite plates and semi-infinite strips subjected to a uniform, subsonic and steady flow has been predicted. The predictions are based on the linear vibration of a flat plate. The fluid loading is characterized as the perturbation pressure derived from the linearized Bernoulli and continuity equations. Parameters varied in the analysis include Mach number, mode number and plate size, aspect ratio and mass. The predictions are compared with existing theoretical results and experimental data. The analytical results show that the fluid loading can significantly affect realistic plate responses. Generally, graphite/epoxy and carbon/carbon plates have higher acoustic radiation damping values than similar aluminum plates, except near plate divergence conditions resulting from aeroelastic instability. Universal curves are presented where the acoustic radiation damping normalized by the mass ratio is a linear function of the reduced frequency. A separate curve is required for each Mach number and plate aspect ratio. In addition, acoustic radiation damping values can be greater than or equal to the structural component of the modal critical damping ratio (assumed as 0.01) for the higher subsonic Mach numbers. New experimental data were acquired for comparison with the analytical results.

  8. Monolithic multigrid method for the coupled Stokes flow and deformable porous medium system

    NASA Astrophysics Data System (ADS)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2018-01-01

    The interaction between fluid flow and a deformable porous medium is a complicated multi-physics problem, which can be described by a coupled model based on the Stokes and poroelastic equations. A monolithic multigrid method together with either a coupled Vanka smoother or a decoupled Uzawa smoother is employed as an efficient numerical technique for the linear discrete system obtained by finite volumes on staggered grids. A special feature of our modeling approach is that at the interface of the fluid and the poroelastic medium, two unknowns from the different subsystems are defined at the same grid point. We propose a special discretization at and near the points on the interface, which combines the approximation of the governing equations and the considered interface conditions. In the decoupled Uzawa smoother, local Fourier analysis (LFA) helps us to select optimal values of the relaxation parameter that appears in it. To implement the monolithic multigrid method, grid partitioning is used to deal with the interface updates when communication is required between two subdomains. Numerical experiments show that the proposed numerical method has an excellent convergence rate. The efficiency and robustness of the method are confirmed in numerical experiments with typically small realistic values of the physical coefficients.

  9. A Three-Dimensional Coupled Internal/External Simulation of a Film-Cooled Turbine Vane

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Rigby, David L.; Ameri, Ali A.

    1999-01-01

    A three-dimensional Navier-Stokes simulation has been performed for a realistic film-cooled turbine vane using the LeRC-HT code. The simulation includes the flow regions inside the coolant plena and film cooling holes in addition to the external flow. The vane is the subject of an upcoming NASA Glenn Research Center experiment and has both circular cross-section and shaped film cooling holes. This complex geometry is modeled using a multi-block grid which accurately discretizes the actual vane geometry including shaped holes. The simulation matches operating conditions for the planned experiment and assumes periodicity in the spanwise direction on the scale of one pitch of the film cooling hole pattern. Two computations were performed for different isothermal wall temperatures, allowing independent determination of heat transfer coefficients and film effectiveness values. The results indicate separate localized regions of high heat transfer coefficient values, while the shaped holes provide a reduction in heat flux through both parameters. Hole exit data indicate rather simple skewed profiles for the round holes, but complex profiles for the shaped holes with mass fluxes skewed strongly toward their leading edges.

  10. Interactive Web-based Floodplain Simulation System for Realistic Experiments of Flooding and Flood Damage

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2013-12-01

    Recent developments in web technologies make it easy to manage large data sets and visualize them for the general public. Novel visualization techniques and dynamic user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The floodplain simulation system is a web-based 3D interactive flood simulation environment for creating real-world flooding scenarios. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create and modify predefined scenarios, control environmental parameters, and evaluate flood mitigation techniques. The web-based simulation system provides an environment in which children and adults can learn about flooding, flood damage, and the effects of development and human activity in the floodplain. The system provides various scenarios customized to fit the age and education level of the users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various flooding and land use scenarios.

  11. End-to-end simulation and verification of GNC and robotic systems considering both space segment and ground segment

    NASA Astrophysics Data System (ADS)

    Benninghoff, Heike; Rems, Florian; Risse, Eicke; Brunner, Bernhard; Stelzer, Martin; Krenn, Rainer; Reiner, Matthias; Stangl, Christian; Gnat, Marcin

    2018-01-01

    In the framework of a project called on-orbit servicing end-to-end simulation, the final approach and capture of a tumbling client satellite in an on-orbit servicing mission are simulated. The necessary components are developed and the entire end-to-end chain is tested and verified. This involves both on-board and on-ground systems. The space segment comprises a passive client satellite, and an active service satellite with its rendezvous and berthing payload. The space segment is simulated using a software satellite simulator and two robotic, hardware-in-the-loop test beds, the European Proximity Operations Simulator (EPOS) 2.0 and the OOS-Sim. The ground segment is established as for a real servicing mission, such that realistic operations can be performed from the different consoles in the control room. During the simulation of the telerobotic operation, it is important to provide a realistic communication environment with parameters like those encountered in the real world (realistic delay and jitter, for example).

  12. User's instructions for the 41-node thermoregulatory model (steady state version)

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A user's guide for the steady-state thermoregulatory model is presented. The model was modified to provide conversational interaction on a remote terminal, greater flexibility for parameter estimation, increased efficiency of convergence, greater choice of output variable and more realistic equations for respiratory and skin diffusion water losses.

  13. All-optical nanomechanical heat engine.

    PubMed

    Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric

    2015-05-08

    We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.
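For orientation, the ideal-cycle bookkeeping behind a Stirling engine is short: heat in during the hot isothermal expansion, heat out during the cold compression, and (with perfect regeneration) Carnot efficiency. This is a back-of-the-envelope check, not the paper's underdamped stochastic model; the temperatures and compression ratio are illustrative.

```python
import math

def stirling(T_hot, T_cold, ratio, n=1.0, R=8.314):
    """Ideal Stirling cycle with perfect regeneration (n mol of ideal gas)."""
    q_hot = n * R * T_hot * math.log(ratio)    # heat absorbed, hot isothermal expansion
    q_cold = n * R * T_cold * math.log(ratio)  # heat rejected, cold isothermal compression
    work = q_hot - q_cold                      # net work per cycle
    return work, q_hot, work / q_hot           # efficiency reduces to 1 - T_cold/T_hot

w, qh, eta = stirling(T_hot=600.0, T_cold=300.0, ratio=2.0)
```

The optimization problem the abstract describes is precisely about trading some of this ideal efficiency for finite power by shaping the driving protocol.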

  14. All-Optical Nanomechanical Heat Engine

    NASA Astrophysics Data System (ADS)

    Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric

    2015-05-01

    We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.

  15. Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance

    NASA Astrophysics Data System (ADS)

    Kornfeld, Gertrude H.

    1987-09-01

    Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. Realism in the thermal domain was achieved within acceptable computer time and random access memory (RAM) requirements because a shift-variant recursive convolution algorithm that describes thermal properties well was devised, and because each picture element (pixel) carries a radiative temperature, a materials parameter, and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.

  16. Alpha effect of Alfvén waves and current drive in reversed-field pinches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litwin, C.; Prager, S.C.

    Circularly polarized Alfvén waves give rise to an α-dynamo effect that can be exploited to drive parallel current. In a "laminar" magnetic field the effect is weak and does not give rise to significant currents for realistic parameters (e.g., in tokamaks). However, in reversed-field pinches (RFPs), in which the magnetic field in the plasma core is stochastic, a significant enhancement of the α effect occurs. Estimates of this effect show that it may be a realistic method of current generation in present-day RFP experiments and possibly also in future RFP-based fusion reactors. © 1998 American Institute of Physics.

  17. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. 
This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
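The autoregressive building block of such data-error models can be sketched with an AR(1) process: estimate the serial-correlation coefficient from the residuals, then whiten them so the remaining errors are approximately independent. The coefficient 0.7 and the series length below are illustrative values, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic serially correlated "residuals": r_t = phi * r_{t-1} + e_t
phi_true, n = 0.7, 20000
e = rng.standard_normal(n)
r = np.empty(n)
r[0] = e[0]
for t in range(1, n):
    r[t] = phi_true * r[t - 1] + e[t]

# Conditional least-squares estimate of the AR(1) coefficient
phi_hat = float(r[1:] @ r[:-1]) / float(r[:-1] @ r[:-1])

# Whitened residuals: subtracting the predicted correlated part leaves
# (approximately) independent errors, with no covariance-matrix inverse needed
white = r[1:] - phi_hat * r[:-1]
rho1 = float(np.corrcoef(white[1:], white[:-1])[0, 1])   # lag-1 autocorrelation, near 0
```

This is the sense in which the hierarchical AR error model avoids computing the inverse or determinant of a data-error covariance matrix: the whitening is a cheap recursive filter rather than a dense linear solve.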

  18. Creating Realistic Data Sets with Specified Properties via Simulation

    ERIC Educational Resources Information Center

    Goldman, Robert N.; McKenzie, John D. Jr.

    2009-01-01

    We explain how to simulate both univariate and bivariate raw data sets having specified values for common summary statistics. The first example illustrates how to "construct" a data set having prescribed values for the mean and the standard deviation--for a one-sample t test with a specified outcome. The second shows how to create a bivariate data…
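The first example's construction (raw data with an exactly prescribed mean and standard deviation) amounts to standardizing random draws and rescaling. A minimal sketch, with illustrative sample size and target statistics:

```python
import numpy as np

rng = np.random.default_rng(42)

def dataset_with_stats(n, mean, sd):
    """Return n values whose sample mean and sample SD (ddof=1) match exactly."""
    x = rng.standard_normal(n)
    z = (x - x.mean()) / x.std(ddof=1)   # standardize: sample mean 0, sample SD 1
    return mean + sd * z                 # rescale to the prescribed statistics

data = dataset_with_stats(30, mean=100.0, sd=15.0)
```

Because the transform is affine, the shape of the simulated data is preserved while the two summary statistics hit their targets exactly, which is what makes the trick useful for engineering a one-sample t test with a specified outcome.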

  19. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  20. Boole and Bell inequality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michielsen, K.; De Raedt, H.; Hess, K.

    2011-03-28

    We discuss the relation between Bell's and Boole's inequality. We apply both to the analysis of measurement results in idealized Einstein-Podolsky-Rosen-Bohm experiments. We present a local realist model that violates Bell's and Boole's inequality due to the absence of Boole's one-to-one correspondence between the two-valued variables of the mathematical description and the two-valued measurement results.

  1. Galaxy clusters in the context of superfluid dark matter

    NASA Astrophysics Data System (ADS)

    Hodson, Alistair O.; Zhao, Hongsheng; Khoury, Justin; Famaey, Benoit

    2017-11-01

    Context. The mass discrepancy in the Universe has not been solved by the cold dark matter (CDM) or the modified Newtonian dynamics (MOND) paradigms so far. The problems and solutions of either scenario are mutually exclusive on large and small scales. It has recently been proposed, by assuming that dark matter is a superfluid, that MOND-like effects can be achieved on small scales whilst preserving the success of ΛCDM on large scales. Detailed models within this "superfluid dark matter" (SfDM) paradigm are yet to be constructed. Aims: Here, we aim to provide the first set of spherical models of galaxy clusters in the context of SfDM. We aim to determine whether the superfluid formulation is indeed sufficient to explain the mass discrepancy in galaxy clusters. Methods: The SfDM model is defined by two parameters. Λ can be thought of as a mass scale in the Lagrangian of the scalar field that effectively describes the phonons, and it acts as a coupling constant between the phonons and baryons. m is the mass of the DM particles. Based on these parameters, we outline the theoretical structure of the superfluid core and the surrounding "normal-phase" dark halo of quasi-particles. The latter are thought to encompass the largest part of galaxy clusters. Here, we set the SfDM transition at the radius where the density and pressure of the superfluid and normal phase coincide, neglecting the effect of phonons in the superfluid core. We then apply the formalism to a sample of galaxy clusters, and directly compare the SfDM predicted mass profiles to data. Results: We find that the superfluid formulation can reproduce the X-ray dynamical mass profile of clusters reasonably well, but with a slight under-prediction of the gravity in the central regions. This might be partly related to our neglect of the effect of phonons in these regions. 
Two normal-phase halo profiles are tested, and it is found that clusters are better described by a normal-phase halo resembling a Navarro-Frenk-White-like structure than by an isothermal profile. Conclusions: In this first exploratory work on the topic, we conclude that depending on the amount of baryons present in the central galaxy and on the actual effect of phonons in the inner regions, this superfluid formulation could be successful in describing galaxy clusters. In the future, our model could be made more realistic by exploring non-sphericity and a more realistic SfDM-to-normal-phase transition. The main result of this study is an estimate of the order of magnitude of the theory parameters for the superfluid formalism to be reasonably consistent with clusters. These values will have to be compared to the true values needed in galaxies.
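The Navarro-Frenk-White (NFW) halo shape mentioned above has a closed-form enclosed mass, which is what gets compared against X-ray dynamical mass profiles. A sketch with placeholder scale values (rho_s and r_s are not fitted cluster parameters):

```python
import math

def nfw_enclosed_mass(r, rho_s, r_s):
    """Mass enclosed within radius r for an NFW density profile
    rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s**3 * (math.log(1.0 + x) - x / (1.0 + x))

# Illustrative evaluation in units where rho_s = r_s = 1
m_inner = nfw_enclosed_mass(0.01, 1.0, 1.0)   # small-r limit ~ 2*pi*rho_s*r_s^3*(r/r_s)^2
m_one = nfw_enclosed_mass(1.0, 1.0, 1.0)
m_two = nfw_enclosed_mass(2.0, 1.0, 1.0)
```

The small-radius behavior (M ∝ r², i.e. density ∝ 1/r) is exactly the cuspier inner structure that distinguishes an NFW-like halo from an isothermal one in fits like these.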

  2. Optical filter highlighting spectral features part II: quantitative measurements of cosmetic foundation and assessment of their spatial distributions under realistic facial conditions.

    PubMed

    Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki

    2011-03-28

    We previously proposed a filter that could detect cosmetic foundations with high discrimination accuracy [Opt. Express 19, 6020 (2011)]. This study extends the filter's functionality to quantifying the amount of foundation and applies the filter to the assessment of spatial distributions of foundation under realistic facial conditions. Human faces to which quantitatively controlled amounts of cosmetic foundation had been applied were measured using the filter. A calibration curve relating pixel values of the image to the amount of foundation was created. The optical filter was applied to visualize spatial foundation distributions under realistic facial conditions, clearly indicating areas on the face where foundation remained even after cleansing. The results confirm that the proposed filter can visualize and nondestructively inspect foundation distributions.

  3. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  4. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
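The scatter-search idea behind this metaheuristic can be sketched as a loop over a small, diverse reference set: combine pairs of reference solutions, apply a crude improvement step, and keep the best solutions as the new reference set. This is a bare-bones toy on a quadratic objective, not the authors' actual implementation; the objective, reference-set size, and perturbation scale are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy objective standing in for the sum-of-squares misfit of a dynamic model;
    # the true optimum is the origin with value 0.
    return float(x @ x)

def scatter_search(f, dim=4, ref_size=6, iters=60, lo=-5.0, hi=5.0):
    """Bare-bones scatter search: diverse reference set, pairwise convex
    combinations plus a small perturbation, elitist reference-set update."""
    ref = rng.uniform(lo, hi, (ref_size, dim))      # diversification step
    for _ in range(iters):
        children = []
        for i in range(ref_size):
            for j in range(i + 1, ref_size):
                lam = rng.uniform(0.2, 0.8)
                child = lam * ref[i] + (1.0 - lam) * ref[j]   # solution combination
                child += rng.normal(scale=0.1, size=dim)      # crude "improvement" step
                children.append(child)
        pool = np.vstack([ref, np.array(children)])
        scores = np.array([f(x) for x in pool])
        ref = pool[np.argsort(scores)[:ref_size]]   # keep the best as new reference set
    return ref[0], f(ref[0])

best_x, best_f = scatter_search(sphere)
```

In the real method the "improvement" step is a local solver run on the dynamic model, which is where the hybrid stochastic-deterministic speedup comes from.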

  5. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    When designing a system employing thermal cameras, one always faces the problem of choosing the camera types best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor tried and tested solutions they are used to. They do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive in terms of the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operation were applied to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to provide help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  6. Monte Carlo simulation of the operational quantities at the realistic mixed neutron-photon radiation fields CANEL and SIGMA.

    PubMed

    Lacoste, V; Gressier, V

    2007-01-01

    The Institute for Radiological Protection and Nuclear Safety owns two facilities producing realistic mixed neutron-photon radiation fields, CANEL, an accelerator driven moderator modular device, and SIGMA, a graphite moderated americium-beryllium assembly. These fields are representative of some of those encountered at nuclear workplaces, and the corresponding facilities are designed and used for calibration of various instruments, such as survey meters, personal dosimeters or spectrometric devices. In the framework of the European project EVIDOS, irradiations of personal dosimeters were performed at CANEL and SIGMA. Monte Carlo calculations were performed to estimate the reference values of the personal dose equivalent at both facilities. The Hp(10) values were calculated for three different angular positions, 0 degrees, 45 degrees and 75 degrees, of an ICRU phantom located at the position of irradiation.

  7. Realistic diversity loss and variation in soil depth independently affect community-level plant nitrogen use.

    PubMed

    Selmants, Paul C; Zavaleta, Erika S; Wolf, Amelia A

    2014-01-01

    Numerous experiments have demonstrated that diverse plant communities use nitrogen (N) more completely and efficiently, with implications for how species conservation efforts might influence N cycling and retention in terrestrial ecosystems. However, most such experiments have randomly manipulated species richness and minimized environmental heterogeneity, two design aspects that may reduce applicability to real ecosystems. Here we present results from an experiment directly comparing how realistic and randomized plant species losses affect plant N use across a gradient of soil depth in a native-dominated serpentine grassland in California. We found that the strength of the species richness effect on plant N use did not increase with soil depth in either the realistic or randomized species loss scenarios, indicating that the increased vertical heterogeneity conferred by deeper soils did not lead to greater complementarity among species in this ecosystem. Realistic species losses significantly reduced plant N uptake and altered N-use efficiency, while randomized species losses had no effect on plant N use. Increasing soil depth positively affected plant N uptake in both loss order scenarios but had a weaker effect on plant N use than did realistic species losses. Our results illustrate that realistic species losses can have functional consequences that differ distinctly from randomized losses, and that species diversity effects can be independent of and outweigh those of environmental heterogeneity on ecosystem functioning. Our findings also support the value of conservation efforts aimed at maintaining biodiversity to help buffer ecosystems against increasing anthropogenic N loading.

  8. On Pulsating and Cellular Forms of Hydrodynamic Instability in Liquid-Propellant Combustion

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    1998-01-01

    An extended Landau-Levich model of liquid-propellant combustion, one that allows for a local dependence of the burning rate on the (gas) pressure at the liquid-gas interface, exhibits not only the classical hydrodynamic cellular instability attributed to Landau but also a pulsating hydrodynamic instability associated with sufficiently negative pressure sensitivities. Exploiting the realistic limit of small values of the gas-to-liquid density ratio p, analytical formulas for both neutral stability boundaries may be obtained by expanding all quantities in appropriate powers of p in each of three distinguished wave-number regimes. In particular, composite analytical expressions are derived for the neutral stability boundaries A(sub p)(k), where A(sub p) is the pressure sensitivity of the burning rate and k is the wave number of the disturbance. For the cellular boundary, the results demonstrate explicitly the stabilizing effect of gravity on long-wave disturbances, the stabilizing effect of viscosity (both liquid and gas) and surface tension on short-wave perturbations, and the instability associated with intermediate wave numbers for negative values of A(sub p), which is characteristic of many hydroxylammonium nitrate-based liquid propellants over certain pressure ranges. In contrast, the pulsating hydrodynamic stability boundary is insensitive to gravitational and surface-tension effects but is more sensitive to the effects of liquid viscosity because, for typical nonzero values of the latter, the pulsating boundary decreases to larger negative values of A(sub p) as k increases through O(1) values. Thus, liquid-propellant combustion is predicted to be stable (that is, steady and planar) only for a range of negative pressure sensitivities that lie below the cellular boundary that exists for sufficiently small negative values of A(sub p) and above the pulsating boundary that exists for larger negative values of this parameter.

  9. Identifying Crucial Parameter Correlations Maintaining Bursting Activity

    PubMed Central

    Doloc-Mihu, Anca; Calabrese, Ronald L.

    2014-01-01

    Recent experimental and computational studies suggest that linearly correlated sets of parameters (intrinsic and synaptic properties of neurons) allow central pattern-generating networks to produce and maintain their rhythmic activity regardless of changing internal and external conditions. To determine the role of correlated conductances in the robust maintenance of functional bursting activity, we used our existing database of half-center oscillator (HCO) model instances of the leech heartbeat CPG. From the database, we identified functional activity groups of burster (isolated neuron) and half-center oscillator model instances and realistic subgroups of each that showed burst characteristics (principally period and spike frequency) similar to the animal. To find linear correlations among the conductance parameters maintaining functional leech bursting activity, we applied Principal Component Analysis (PCA) to each of these four groups. PCA identified a set of three maximal conductances (leak current, Leak; a persistent K current, K2; and a persistent Na+ current, P) that correlate linearly for the two groups of burster instances but not for the HCO groups. Visualizations of HCO instances in a reduced space suggested that there might be non-linear relationships between these parameters for these instances. Experimental studies have shown that period is a key attribute influenced by modulatory inputs and temperature variations in heart interneurons. Thus, we explored the sensitivity of period to changes in maximal conductances of Leak, K2, and P, and we found that for our realistic bursters the effect of these parameters on period could not be assessed because, when these parameters were varied individually, bursting activity was not maintained. PMID:24945358
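
    The PCA step can be sketched with synthetic data. Everything below (number of instances, conductance values and units, and the single shared axis of variation) is a hypothetical stand-in for the model database, not the authors' instances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "functional group": 500 instances whose Leak, K2 and P maximal
# conductances co-vary along one shared direction, plus independent noise.
n = 500
t = rng.uniform(-1.0, 1.0, n)
g_leak = 8.0 + 2.0 * t + rng.normal(0.0, 0.2, n)    # hypothetical units
g_k2 = 80.0 + 15.0 * t + rng.normal(0.0, 1.5, n)
g_p = 6.0 + 1.0 * t + rng.normal(0.0, 0.1, n)
X = np.column_stack([g_leak, g_k2, g_p])

# PCA: center the data, then take the SVD; squared singular values give
# the variance captured by each principal component.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained)
```

A first component carrying most of the variance signals a linear correlation among the three conductances, as reported for the burster groups; for the HCO groups no such dominant linear structure was found.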

  10. Imperfection sensitivity of pressured buckling of biopolymer spherical shells

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Ru, C. Q.

    2016-06-01

    Imperfection sensitivity is essential for mechanical behavior of biopolymer shells [such as ultrasound contrast agents (UCAs) and spherical viruses] characterized by high geometric heterogeneity. In this work, an imperfection sensitivity analysis is conducted based on a refined shell model recently developed for spherical biopolymer shells of high structural heterogeneity and thickness nonuniformity. The influence of related parameters (including the ratio of radius to average shell thickness, the ratio of transverse shear modulus to in-plane shear modulus, and the ratio of effective bending thickness to average shell thickness) on imperfection sensitivity is examined for pressured buckling. Our results show that the ratio of effective bending thickness to average shell thickness has a major effect on the imperfection sensitivity, while the effect of the ratio of transverse shear modulus to in-plane shear modulus is usually negligible. For example, with physically realistic parameters for typical imperfect spherical biopolymer shells, the present model predicts that actual maximum external pressure could be reduced to as low as 60% of that of a perfect UCA spherical shell or 55%-65% of that of a perfect spherical virus shell, respectively. The moderate imperfection sensitivity of spherical biopolymer shells with physically realistic imperfection is largely attributed to the fact that biopolymer shells are relatively thicker (defined by smaller radius-to-thickness ratio) and therefore practically realistic imperfection amplitude normalized by thickness is very small as compared to that of classical elastic thin shells which have much larger radius-to-thickness ratio.

  11. WE-D-18A-05: Construction of Realistic Liver Phantoms From Patient Images and a Commercial 3D Printer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, S; Vrieze, T; Kuhlmann, J

    2014-06-15

    Purpose: To assess image quality and radiation dose reduction in abdominal CT imaging, physical phantoms having realistic background textures and lesions are highly desirable. The purpose of this work was to construct a liver phantom with realistic background and lesions using patient CT images and a 3D printer. Methods: Patient CT images containing liver lesions were segmented into liver tissue, contrast-enhanced vessels, and liver lesions using commercial software (Mimics, Materialise, Belgium). Stereolithography (STL) files of each segmented object were created and imported to a 3D printer (Object350 Connex, Stratasys, MN). After test scans were performed to map the eight available printing materials into CT numbers, printing materials were assigned to each object and a physical liver phantom printed. The printed phantom was scanned on a clinical CT scanner and resulting images were compared with the original patient CT images. Results: The eight available materials used to print the liver phantom had CT numbers ranging from 62 to 117 HU. In scans of the liver phantom, the liver lesions and veins represented in the STL files were all visible. Although the absolute value of the CT number in the background liver material (approx. 85 HU) was higher than in patients (approx. 40 HU), the differences in CT numbers between lesions and background were representative of the low contrast values needed for optimization tasks. Future work will investigate materials with contrast sufficient to emulate contrast-enhanced arteries. Conclusion: Realistic liver phantoms can be constructed from patient CT images using a commercial 3D printer. This technique may provide phantoms able to determine the effect of radiation dose reduction and noise reduction techniques on the ability to detect subtle liver lesions in the context of realistic background textures.

  12. Overflow Simulations using MPAS-Ocean in Idealized and Realistic Domains

    NASA Astrophysics Data System (ADS)

    Reckinger, S.; Petersen, M. R.; Reckinger, S. J.

    2016-02-01

    MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model. A full parameter study is presented that looks at how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Also, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that travel further down the slope. Sigma coordinates are less sensitive to changes in resolution but just as sensitive to vertical viscosity as z-coordinates. Additionally, preliminary measurements of overflow diagnostics on global simulations using a realistic oceanic domain are presented.

  13. Monitoring of deep brain temperature in infants using multi-frequency microwave radiometry and thermal modelling.

    PubMed

    Han, J W; Van Leeuwen, G M; Mizushina, S; Van de Kamer, J B; Maruyama, K; Sugiura, T; Azzopardi, D V; Edwards, A D

    2001-07-01

    In this study we present a design for a multi-frequency microwave radiometer aimed at prolonged monitoring of deep brain temperature in newborn infants and suitable for use during hypothermic neural rescue therapy. We identify appropriate hardware to measure brightness temperature and evaluate the accuracy of the measurements. We describe a method to estimate the tissue temperature distribution from measured brightness temperatures which uses the results of numerical simulations of the tissue temperature as well as the propagation of the microwaves in a realistic detailed three-dimensional infant head model. The temperature retrieval method is then used to evaluate how the statistical fluctuations in the measured brightness temperatures limit the confidence interval for the estimated temperature: for an 18 degrees C temperature differential between cooled surface and deep brain we found a standard error in the estimated central brain temperature of 0.75 degrees C. Evaluation of the systematic errors arising from inaccuracies in model parameters showed that realistic deviations in tissue parameters have little impact compared to uncertainty in the thickness of the bolus between the receiving antenna and the infant's head or in the skull thickness. This highlights the need to pay particular attention to these latter parameters in future practical implementation of the technique.

  14. Variability of pulsed energy outputs from three dermatology lasers during multiple simulated treatments.

    PubMed

    Britton, Jason

    2018-01-20

    Dermatology laser treatments are undertaken at regional departments using lasers of different powers and wavelengths. In order to achieve good outcomes, there needs to be good consistency of laser output across different weeks as it is custom and practice to break down the treatments into individual fractions. Departments will also collect information from test patches to help decide on the most appropriate treatment parameters for individual patients. The objective of these experiments is to assess the variability of the energy outputs from a small number of lasers across multiple weeks at realistic parameters. The energy outputs from 3 lasers were measured at realistic treatment parameters using a thermopile detector across a period of 6 weeks. All lasers fired in single-pulse mode demonstrated good repeatability of energy output. In spite of one of the lasers being scheduled for a dye canister change in the next 2 weeks, there was good energy matching between the two devices with only a 4%-5% variation in measured energies. Based on the results presented, clinical outcomes should not be influenced by variability in the energy outputs of the dermatology lasers used as part of the treatment procedure. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
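
    The repeatability check described above reduces to a coefficient-of-variation calculation over the weekly thermopile readings. The energy values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import numpy as np

# Hypothetical weekly energy readings (J) for one laser at fixed treatment
# settings, measured with a thermopile detector.
weekly = np.array([6.02, 5.98, 6.05, 5.95, 6.01, 6.03])

# Percent variation (coefficient of variation) across the measurement weeks.
cv = 100.0 * weekly.std(ddof=1) / weekly.mean()
print(round(cv, 2))
```

A coefficient of variation well under the 4%-5% matching difference quoted above would indicate good week-to-week repeatability.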

  15. Effect of progressive wear on the contact mechanics of hip replacements--does the realistic surface profile matter?

    PubMed

    Wang, Ling; Yang, Wenjian; Peng, Xifeng; Li, Dichen; Dong, Shuangpeng; Zhang, Shu; Zhu, Jinyu; Jin, Zhongmin

    2015-04-13

    The contact mechanics of artificial metal-on-polyethylene hip joints are believed to affect the lubrication, wear and friction of the articulating surfaces and may lead to joint loosening. Finite element analysis has been widely used for contact mechanics studies and good agreement has been achieved with current experimental data; however, most studies were carried out with idealized spherical geometries of the hip prostheses rather than the realistic worn surfaces, either for simplicity or for lack of worn surface profiles. In this study, the worn surfaces of samples from various stages of hip simulator testing (0 to 5 million cycles) were reconstructed as solid models and applied in the contact mechanics study. The simulator testing results suggested that the center of the head departs from that of the cup by an amount that varies with progressively increased wear. This finding was adopted into the finite element study for better evaluation accuracy. Results indicated that the realistic model provided a different evaluation from that of the ideal spherical model. Moreover, with progressively increased wear, a large increase in contact pressure (from 12 to 31 MPa) was predicted on the articulating surface, and the predicted maximum von Mises stress increased from 7.47 to 13.26 MPa, indicating the marked effect of worn surface profiles on the contact mechanics of the joint. This study seeks to emphasize the importance of the realistic worn surface profile of the acetabular cup, especially following large wear volumes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. The Charging of Dust Grains in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Avinash, K.; Slavin, J.; Zank, G. P.; Frisch, P.

    2008-12-01

    Equilibrium electric charge and surface potential on a dust grain in the heliosheath are calculated. The grain is charged by the heliosheath plasma flux, the photoelectron flux, the secondary electron emission flux and the transmission flux. Realistically, the heliosheath plasma consists of solar electrons, solar wind ions (SWI) and pickup ions (PUI). These species interact differently with the termination shock (TS) and thus have different characteristics downstream in the heliosheath. The PUI suffer multiple reflections at the TS and are accelerated to high energies in the range of ~10^6 K. The solar electrons, on the other hand, are heated adiabatically through the TS and have temperatures in the range of ~5x10^5 K. The SWI may have a smaller temperature, typically in the range 1-5x10^4 K. The density of electrons could be in the range of ~5x10^-4 cm^-3, while the ratio of PUI to SWI density could range from 0.1 to 0.5. Taking these parameters into account, grain charging due to the different plasma species and the other fluxes mentioned earlier is calculated. Our results show that (a) the surface potential is very sensitive to the electron temperature: it goes through a maximum, and for realistic values close to or less than 5x10^5 K it can be as large as 26 V, which is twice the value calculated by Kimura and Mann (1). This may have implications for electrostatic disruption and the size distribution of dust particles in the heliosheath. With increasing PUI density the surface potential increases by about 10 to 20%. Though the temperature of the PUI is significantly larger than that of the electrons, it is not large enough to make up for the mass ratio of electrons to protons. On account of the small temperature and the electron/proton mass ratio, the effect of the SWI on the dust charge is very weak. (1) H. Kimura and I. Mann, Ap.J. 499, 454 (1998).
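
    The equilibrium described above is a current balance: the grain floats at the potential where collected and emitted charge fluxes cancel. The toy calculation below illustrates the idea for a positively charged grain; the normalized fluxes, effective temperatures, and orbit-motion-limited-style factors are illustrative assumptions, not the paper's heliosheath model:

```python
import math

# Normalized fluxes at zero potential: collected electrons, collected ions,
# and escaping emitted (photo/secondary) electrons. Hypothetical values.
J_e0, J_i0, J_em0 = 1.0, 0.05, 5.0
T_e, T_i, T_em = 43.0, 4.0, 2.0  # effective temperatures in eV

def net_current(phi):
    """Net charging current on a grain at positive potential phi (volts):
    ion collection and escaping emitted electrons charge it up, enhanced
    electron collection charges it down (OML-style factors)."""
    return (J_i0 * math.exp(-phi / T_i)
            + J_em0 * math.exp(-phi / T_em)
            - J_e0 * (1.0 + phi / T_e))

# Bisection for the floating potential where the net current vanishes.
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if net_current(mid) > 0.0:
        lo = mid       # still charging positive: potential must rise
    else:
        hi = mid
phi_eq = 0.5 * (lo + hi)
print(phi_eq)  # equilibrium surface potential, volts
```

With these illustrative inputs the balance settles at a few volts positive; in the paper the detailed species fluxes set both the size of this value and its sensitivity to electron temperature.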

  17. Simulation of electron-proton coupling with a Monte Carlo method: application to cytochrome c3 using continuum electrostatics.

    PubMed Central

    Baptista, A M; Martel, P J; Soares, C M

    1999-01-01

    A new method is presented for simulating the simultaneous binding equilibrium of electrons and protons on protein molecules, which makes it possible to study the full equilibrium thermodynamics of redox and protonation processes, including electron-proton coupling. The simulations using this method reflect directly the pH and electrostatic potential of the environment, thus providing a much closer and more realistic connection with experimental parameters than do usual methods. By ignoring the full binding equilibrium, calculations usually overlook the twofold effect that binding fluctuations have on the behavior of redox proteins: first, they affect the energy of the system by creating partially occupied sites; second, they affect its entropy by introducing an additional empty/occupied site disorder (here named occupational entropy). The proposed method is applied to cytochrome c3 of Desulfovibrio vulgaris Hildenborough to study its redox properties and electron-proton coupling (redox-Bohr effect), using a continuum electrostatic method based on the linear Poisson-Boltzmann equation. Unlike previous studies using other methods, the full reduction order of the four hemes at physiological pH is successfully predicted. The sites more strongly involved in the redox-Bohr effect are identified by analysis of their titration curves/surfaces and the shifts of their midpoint redox potentials and pKa values. Site-site couplings are analyzed using statistical correlations, a method much more realistic than the usual analysis based on direct interactions. The site found to be more strongly involved in the redox-Bohr effect is propionate D of heme I, in agreement with previous studies; other likely candidates are His67, the N-terminus, and propionate D of heme IV. Even though the present study is limited to equilibrium conditions, the possible role of binding fluctuations in the concerted transfer of protons and electrons under nonequilibrium conditions is also discussed.
The occupational entropy contributions to midpoint redox potentials and pKa values are computed and shown to be significant. PMID:10354425
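
    A simultaneous binding equilibrium of this kind can be illustrated with a toy Metropolis Monte Carlo simulation over binary site occupancies. The three sites, their intrinsic free energies, and the pairwise couplings below are hypothetical stand-ins for the Poisson-Boltzmann energies of the real method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two protonatable sites plus one redox site; all energies in units of kT.
# mu[i]: free-energy change for occupying site i alone (in the real method
# this depends on the pH and redox potential of the environment).
mu = np.array([-1.0, 0.5, -0.5])
# W[i, j]: pairwise electrostatic coupling between occupied sites.
W = np.array([[0.0, 0.8, -1.2],
              [0.8, 0.0, 0.3],
              [-1.2, 0.3, 0.0]])

def energy(x):
    # 0.5 * x W x sums each coupling W[i, j] once over occupied pairs.
    return float(mu @ x + 0.5 * x @ W @ x)

x = rng.integers(0, 2, size=3).astype(float)
occ = np.zeros(3)
n_steps, n_burn = 100_000, 10_000
for step in range(n_steps):
    i = rng.integers(0, 3)
    x_new = x.copy()
    x_new[i] = 1.0 - x_new[i]          # propose flipping one site
    dE = energy(x_new) - energy(x)
    if dE <= 0 or rng.random() < np.exp(-dE):  # Metropolis acceptance
        x = x_new
    if step >= n_burn:
        occ += x
occ /= (n_steps - n_burn)
print(occ)  # mean occupancy of each site
```

Partial mean occupancies like these are exactly the "binding fluctuations" the abstract refers to: sites 0 and 2 favor each other through their negative coupling, so their occupancies rise together.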

  18. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques to fit NTCP models to dose response data for establishing dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the methods, employing the covariance matrix, the jackknife method and directly from the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters within the one-standard-deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. 
It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
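
    The core comparison, covariance-matrix confidence limits versus the "real" spread from Monte Carlo regenerated datasets, can be sketched with a simple logistic dose-response model standing in for the CV model. All data and parameter values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_logistic(d, y, n_iter=25):
    """Maximum-likelihood fit of P = 1/(1+exp(-(a + b*d))) by Newton-Raphson.
    Returns the parameters and the covariance matrix (inverse observed
    Fisher information)."""
    X = np.column_stack([np.ones_like(d), d])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    cov = np.linalg.inv(X.T @ (X * (p * (1.0 - p))[:, None]))
    return beta, cov

# Primary dataset generated from "true" parameters a=-6, b=0.1 (hypothetical).
d = rng.uniform(20.0, 100.0, 400)
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-(-6.0 + 0.1 * d)))).astype(float)
beta, cov = fit_logistic(d, y)
se_cov = np.sqrt(np.diag(cov))              # covariance-matrix errors

# Monte Carlo: regenerate many secondary datasets from the fit and refit.
betas = []
for _ in range(200):
    p_fit = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * d)))
    y_sim = (rng.random(400) < p_fit).astype(float)
    betas.append(fit_logistic(d, y_sim)[0])
se_mc = np.std(np.array(betas), axis=0)     # "real" spread of the parameters
print(se_cov, se_mc)
```

For this well-behaved toy model the two estimates agree closely; the paper's point is that for realistic dose-volume data and NTCP models they need not, so only a full likelihood analysis is reliable.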

  19. Impact of Variable-Density Flow on the Value-of-Information from Pressure and Concentration Data for Saline Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    Yoon, S.; Williams, J. R.; Juanes, R.; Kang, P. K.

    2017-12-01

    Managed aquifer recharge (MAR) is becoming an important solution for ensuring sustainable water resources and mitigating saline water intrusion in coastal aquifers. Accurate estimates of hydrogeological parameters in subsurface flow and solute transport models are critical for making predictions and managing aquifer systems. In the presence of a density difference between the injected freshwater and ambient saline groundwater, the pressure field is coupled to the spatial distribution of salinity, and therefore experiences transient changes. The variable-density effects can be quantified by a mixed convection ratio between two characteristic types of convection: free convection due to density contrast, and forced convection due to a hydraulic gradient. We analyze the variable-density effects on the value-of-information of pressure and concentration data for saline aquifer characterization. An ensemble Kalman filter is used to estimate permeability fields by assimilating the data, and the performance of the estimation is analyzed in terms of the accuracy and the uncertainty of estimated permeability fields and the predictability of arrival times of breakthrough curves in a realistic push-pull setting. This study demonstrates that: 1. Injecting fluids with the velocity that balances the two characteristic convections maximizes the value of data for saline aquifer characterization; 2. The variable-density effects on the value of data for the inverse estimation decrease as the permeability heterogeneity increases; 3. The advantage of joint inversion of pressure and concentration data decreases as the coupling effects between flow and transport increase.
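
    A single ensemble Kalman filter analysis step can be sketched in a few lines. The "forward model" below is a random linear map and the log-permeability field is reduced to five numbers; both are hypothetical stand-ins for the study's variable-density flow and transport model:

```python
import numpy as np

rng = np.random.default_rng(3)

n_ens, n_par, n_obs = 100, 5, 3
truth = np.array([1.0, -0.5, 0.3, 0.8, -1.2])  # hypothetical log-permeability
H = rng.normal(size=(n_obs, n_par))            # stand-in linear forward model
obs_err = 0.1
d_obs = H @ truth + rng.normal(0.0, obs_err, n_obs)

X = rng.normal(0.0, 1.0, (n_ens, n_par))       # prior ensemble, one member per row
Y = X @ H.T                                    # predicted observations

# Kalman gain from ensemble sample covariances.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
C_xy = Xc.T @ Yc / (n_ens - 1)
C_yy = Yc.T @ Yc / (n_ens - 1) + obs_err**2 * np.eye(n_obs)
K = C_xy @ np.linalg.inv(C_yy)

# Stochastic ("perturbed observation") analysis step.
D = d_obs + rng.normal(0.0, obs_err, (n_ens, n_obs))
X_post = X + (D - Y) @ K.T

err_prior = np.linalg.norm(X.mean(axis=0) - truth)
err_post = np.linalg.norm(X_post.mean(axis=0) - truth)
print(err_prior, err_post)  # assimilation should pull the mean toward the truth
```

In the study the same update is applied to spatially distributed permeability fields, with pressure and concentration measurements as the assimilated data.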

  20. Evaluating the Relationships Between NTNU/SINTEF Drillability Indices with Index Properties and Petrographic Data of Hard Igneous Rocks

    NASA Astrophysics Data System (ADS)

    Aligholi, Saeed; Lashkaripour, Gholam Reza; Ghafoori, Mohammad; Azali, Sadegh Tarigh

    2017-11-01

    Thorough and realistic performance predictions are among the main requisites for estimating excavation costs and time of the tunneling projects. Also, NTNU/SINTEF rock drillability indices, including the Drilling Rate Index™ (DRI), Bit Wear Index™ (BWI), and Cutter Life Index™ (CLI), are among the most effective indices for determining rock drillability. In this study, brittleness value (S20), Sievers' J-Value (SJ), abrasion value (AV), and Abrasion Value Cutter Steel (AVS) tests are conducted to determine these indices for a wide range of Iranian hard igneous rocks. In addition, relationships between such drillability parameters with petrographic features and index properties of the tested rocks are investigated. The results from multiple regression analysis revealed that the multiple regression models prepared using petrographic features provide a better estimation of drillability compared to those prepared using index properties. Also, it was found that the semiautomatic petrography and multiple regression analyses provide a suitable complement to determine drillability properties of igneous rocks. Based on the results of this study, AV has higher correlations with studied mineralogical indices than AVS. The results imply that, in general, rock surface hardness of hard igneous rocks is very high, and the acidic igneous rocks have a lower strength and density and higher S20 than those of basic rocks. Moreover, DRI is higher, while BWI is lower in acidic igneous rocks, suggesting that drill and blast tunneling is more convenient in these rocks than basic rocks.

  1. Truncated disc surface brightness profiles produced by flares

    NASA Astrophysics Data System (ADS)

    Borlaff, Alejandro; Eliche-Moral, M. Carmen; Beckman, John; Font, Joan

    2017-03-01

    Previous studies have discarded the possibility that flares in galactic discs may explain the truncations that are frequently observed in highly inclined galaxies (Kregel et al. 2002). However, no study has systematically analysed this hypothesis using realistic models for the disc, the flare and the bulge. We derive edge-on and face-on surface brightness profiles for a series of realistic galaxy models with flared discs that sample a wide range of structural and photometric parameters across the Hubble Sequence, in accordance with observations. The surface brightness profile for each galaxy model has been simulated for edge-on and face-on views to find out whether the flared disc produces a significant truncation of the disc in the edge-on view compared to the face-on view. In order to simulate realistic images of disc galaxies, we have considered the observational distribution of the photometric parameters as a function of the morphological type for three mass bins (10 < log10(M/M⊙) < 10.7, 10.7 < log10(M/M⊙) < 11 and log10(M/M⊙) > 11) and four morphological type bins (S0-Sa, Sb-Sbc, Sc-Scd and Sd-Sdm). For each mass bin, we have restricted the photometric and structural parameters of each modelled galaxy to their characteristic observational ranges (μ0, disc, μeff, bulge, B/T, M abs, r eff, n bulge, h R, disc) and the flare in the disc (h z, disc/h R, disc, ∂h z, disc/∂R; see de Grijs & Peletier 1997, Graham 2001, López-Corredoira et al. 2002, Yoachim & Dalcanton 2006, Bizyaev et al. 2014, Mosenkov et al. 2015). Contrary to previous claims, the simulations show that realistic flared discs can be responsible for the truncations observed in many edge-on systems, preserving the profile of the non-flared analogous model in face-on view. 
These breaks reproduce the properties of the weak-to-intermediate breaks observed in many real Type-II galaxies in the diagram relating the radial location of the break (R brkII) in units of the inner disc scale-length with the break strength S (Laine et al. 2014). Radial variation of the scale-height of the disc (flaring) can explain the existence of many breaks in edge-on galaxies, especially of those with low break strengths, S = log10(ho/hi) ~ [-0.3, -0.1].

  2. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
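
    The skew-recovery idea can be illustrated with a simpler stand-in: estimating the relative time skew between two channels from the peak of their cross-correlation. The paper's actual method combines data reconstruction, nonlinear optimization, and frequency-domain equation-error estimation; the signals, sample rate, and skew below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 0.01                 # 100 Hz sampling, hypothetical
t = np.arange(0.0, 10.0, dt)
skew = 0.07               # 70 ms relative skew to recover

# Two channels recording the same low-frequency motion, one delayed by the
# skew, both with small measurement noise.
x = np.sin(2 * np.pi * 0.8 * t) + 0.3 * np.sin(2 * np.pi * 2.1 * t)
y = np.sin(2 * np.pi * 0.8 * (t - skew)) + 0.3 * np.sin(2 * np.pi * 2.1 * (t - skew))
x += rng.normal(0.0, 0.02, t.size)
y += rng.normal(0.0, 0.02, t.size)

# Full cross-correlation; the lag of its maximum estimates the skew.
xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)
skew_est = lags[np.argmax(xc)] * dt
print(skew_est)  # close to 0.07 s
```

Cross-correlation only resolves skews to the nearest sample; the frequency-domain formulation used in the paper treats the skews as continuous parameters estimated jointly with the dynamic model.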

  3. Comment on ``Symmetry and structure of quantized vortices in superfluid 3He-B''

    NASA Astrophysics Data System (ADS)

    Sauls, J. A.; Serene, J. W.

    1985-10-01

    Recent theoretical attempts to explain the observed vortex-core phase transition in superfluid 3He-B yield conflicting results. Variational calculations by Fetter and Theodorakis, based on realistic strong-coupling parameters, yield a phase transition in the Ginzburg-Landau region that is in qualitative agreement with the phase diagram. Numerically precise calculations by Salomaa and Volovik (SV), based on the Brinkman-Serene-Anderson (BSA) parameters, do not yield a phase transition between axially symmetric vortices. The ambiguity of these results is in part due to the large differences between the β parameters, which are inputs to the vortex free-energy functional. We comment on the relative merits of the β parameters based on recent improvements in the quasiparticle scattering amplitude and the BSA parameters used by SV.

  4. The application of the sinusoidal model to lung cancer patient respiratory motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, R.; Vedam, S.S.; Chung, T.D.

    2005-09-15

    Accurate modeling of the respiratory cycle is important to account for the effect of organ motion on dose calculation for lung cancer patients. The aim of this study is to evaluate the accuracy of a respiratory model for lung cancer patients. Lujan et al. [Med. Phys. 26(5), 715-720 (1999)] proposed a model, which became widely used, to describe organ motion due to respiration. This model assumes that the parameters do not vary between and within breathing cycles. In this study, first, the correlation of respiratory motion traces with the model f(t) as a function of the parameter n (n=1,2,3) was undertaken for each breathing cycle from 331 four-minute respiratory traces acquired from 24 lung cancer patients using three breathing types: free breathing, audio instruction, and audio-visual biofeedback. Because cos^2 and cos^4 had similar correlation coefficients, and cos^2 and cos^1 have a trigonometric relationship, for simplicity the cos^1 form was consequently used for further analysis, in which the variations in mean position (z_0), amplitude of motion (b) and period (τ) with and without biofeedback or instructions were investigated. For all breathing types, the parameter values, mean position (z_0), amplitude of motion (b), and period (τ), exhibited significant cycle-to-cycle variations. Audio-visual biofeedback showed the least variations for all three parameters (z_0, b, and τ). It was found that mean position (z_0) could be approximated with a normal distribution, and the amplitude of motion (b) and period (τ) could be approximated with log-normal distributions. The overall probability density function (pdf) of f(t) for each of the three breathing types was fitted with three models: normal, bimodal, and the pdf of a simple harmonic oscillator. It was found that the normal and the bimodal models represented the overall respiratory motion pdfs with correlation values from 0.95 to 0.99, whereas the range of the simple harmonic oscillator pdf correlation values was 0.71 to 0.81. This study demonstrates that the pdfs of mean position (z_0), amplitude of motion (b), and period (τ) can be used for sampling to obtain more realistic respiratory traces. The overall standard deviations of respiratory motion were 0.48, 0.57, and 0.55 cm for free breathing, audio instruction, and audio-visual biofeedback, respectively.
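
    The sampling idea in the conclusion can be sketched directly: draw the Lujan-type model parameters afresh for each breathing cycle from the distribution families the study identifies. The model form cos^2 and every numeric value below are illustrative assumptions, not the paper's fitted distributions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic respiratory trace from z(t) = z0 - b*cos^2(pi*t/tau), with
# cycle-to-cycle parameters drawn from a normal distribution (z0) and
# log-normal distributions (b and tau), as the study suggests.
dt = 0.05                                     # s, sampling interval
cycles = []
for _ in range(50):                           # 50 breathing cycles
    z0 = rng.normal(0.0, 0.1)                 # cm, mean position
    b = rng.lognormal(np.log(1.0), 0.2)       # cm, amplitude of motion
    tau = rng.lognormal(np.log(4.0), 0.15)    # s, period
    t = np.arange(0.0, tau, dt)
    cycles.append(z0 - b * np.cos(np.pi * t / tau) ** 2)
trace = np.concatenate(cycles)
print(trace.std())  # overall SD of the synthetic trace, cm
```

Because the parameters vary between cycles, the resulting trace exhibits the cycle-to-cycle irregularity that a fixed-parameter model cannot reproduce.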

  5. Some remarks on the early evolution of Enceladus

    NASA Astrophysics Data System (ADS)

    Czechowski, Leszek

    2014-12-01

    The thermal history of Enceladus is investigated from the beginning of accretion to the formation of its core (~400 My). We consider a model with solid-state convection (in a solid layer) as well as liquid-state convection (in molten parts of the satellite). The numerical model of convection uses a fully conservative finite-difference method. The roles of the two modes of convection are considered using the parameterized theory of convection. The following heat sources are included: short-lived and long-lived radioactive isotopes, accretion, serpentinization, and phase changes. The heat transfer processes are conduction, solid-state convection, and liquid-state convection. It is found that core formation was completed only when liquid-state convection had slowed down. Eventually, a porous core with pores filled with water was formed. Recent data concerning the gravity field of Enceladus confirm the low density of the core. We also investigated the thermal history for different values of the following parameters: time of beginning of accretion tini, duration of accretion tacr, viscosity of ice close to the melting point ηm, activation energy in the viscosity formula E, thermal conductivity of the silicate component ksil, ammonia content XNH3, and energy of serpentinization cserp. All these parameters are important for the evolution, but no dramatic differences are found for realistic values. Moreover, the hypothesis of proto-Enceladus (stating that Enceladus was initially substantially larger) is considered and the thermal history of such a body is calculated. The last subject is the Mimas-Enceladus paradox. Comparison of thermal models of Mimas and Enceladus indicates that the period favorable for an 'excited path of evolution' was significantly shorter for Mimas than for Enceladus.

  6. Salinity stratification of the Mediterranean Sea during the Messinian crisis: A first model analysis

    NASA Astrophysics Data System (ADS)

    Simon, Dirk; Meijer, Paul Th.

    2017-12-01

    In the late Miocene, a thick and complex sequence of evaporites was deposited in the Mediterranean Sea during an interruption of normal marine sedimentation known as the Messinian Salinity Crisis. Because the related deposits are mostly hidden from scrutiny in the deep basin, correlation between onshore and offshore sediments is difficult, hampering the development of a comprehensive stratigraphic model. Since the various facies correspond to different salinities of the basin waters, it would help to have physics-based understanding of the spatial distribution of salt concentration. Here, we focus on modelling salinity as a function of depth, i.e., on the stratification of the water column. A box model is set up that includes a simple representation of a haline overturning circulation and of mixing. It is forced by Atlantic exchange and evaporative loss and is used to systematically explore the degree of stratification that results under a wide range of combinations of parameter values. The model demonstrates counterintuitive behaviour close to the saturation of halite. For parameter values that may well be realistic for the Messinian, we show that a significantly stratified Mediterranean water column can be established. In this case, Atlantic connectivity is limited but may be closer to modern magnitudes than previously thought. In addition, a slowing of Mediterranean overturning and a larger deep-water formation region (both in comparison to the present day) are required. Under these conditions, we would expect a longer duration of halite deposition than currently considered in the MSC stratigraphic consensus model.
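    The forcing balance described in this record can be illustrated in the single-box limit: with an Atlantic inflow q_in, an outflow q_in − E, and a net evaporative loss E, salt conservation at steady state fixes the basin salinity. The flux values below are hypothetical, chosen only to show how restricting the exchange drives salinity upward; they are not the model's calibrated parameters.

```python
def steady_salinity(q_in, evap, s_atl=36.0):
    """Steady-state basin salinity (g/kg) from salt conservation:
    q_in * s_atl = (q_in - evap) * s_med  ->  s_med."""
    if q_in <= evap:
        raise ValueError("inflow must exceed net evaporative loss")
    return s_atl * q_in / (q_in - evap)

# Hypothetical fluxes in Sv: with net evaporation fixed, reducing the
# Atlantic exchange concentrates the basin far above ocean salinity.
for q in (1.0, 0.2, 0.1):
    print(q, round(steady_salinity(q, evap=0.07), 1))
```

    The full box model adds vertical structure (overturning and mixing between boxes), which is what allows it to represent a stratified, rather than well-mixed, water column.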

  7. A Facial Control Method Using Emotional Parameters in Sensibility Robot

    NASA Astrophysics Data System (ADS)

    Shibata, Hiroshi; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori

    The “Ifbot” robot communicates with people by taking its own “emotions” into account. Ifbot has many facial expressions for enjoyable communication. These are used to express its internal emotions, purposes, and reactions caused by external stimuli, and for entertainment such as singing songs. All of these facial expressions are designed manually. Under this approach, every facial motion we want Ifbot to express must be designed by hand, which is not realistic. We have therefore developed a system which converts Ifbot's emotions to facial expressions automatically. In this paper, we propose a method for creating Ifbot's facial expressions from emotional parameters, which represent its internal emotions computationally.

  8. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example, in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only for the low-frequency range. In addition to slip, parameters such as rupture-onset time, rise time, and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model, physically consistent with dynamic ruptures on rough faults.

  9. Utility usage forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosking, Jonathan R. M.; Natarajan, Ramesh

    The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model which correlates the received and the determined utility parameter values with the corresponding weather parameter values.
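    The steps in this record (detect a sparse weather range, estimate the missing utility values, then fit a correlating model) can be sketched as follows. This is a generic illustration, not the patented implementation; the data, the 30-40 degree gap, and the interpolation choice are all hypothetical.

```python
import numpy as np

# Hypothetical observations: utility demand at various temperatures,
# with no observations between 25 and 45 degrees.
temps = np.array([0, 5, 10, 15, 20, 25, 45, 50], dtype=float)
demand = np.array([120, 110, 95, 85, 80, 78, 90, 100], dtype=float)

def fill_sparse_range(temps, demand, lo, hi, step=5.0, min_count=1):
    """Find weather bins in [lo, hi] lacking observations, estimate the
    corresponding utility values by interpolation, and return the
    augmented dataset."""
    candidates = np.arange(lo, hi + step, step)
    sparse = np.array([((temps >= t - step / 2) & (temps < t + step / 2)).sum()
                       < min_count for t in candidates])
    est = np.interp(candidates[sparse], temps, demand)
    return (np.concatenate([temps, candidates[sparse]]),
            np.concatenate([demand, est]))

t_all, d_all = fill_sparse_range(temps, demand, 30.0, 40.0)
coeffs = np.polyfit(t_all, d_all, deg=2)  # simple correlating model
```
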

  10. Engaging GPs in commissioning: realist evaluation of the early experiences of Clinical Commissioning Groups in the English NHS.

    PubMed

    McDermott, Imelda; Checkland, Kath; Coleman, Anna; Osipovič, Dorota; Petsoulas, Christina; Perkins, Neil

    2017-01-01

    Objectives To explore the 'added value' that general practitioners (GPs) bring to commissioning in the English NHS. We describe the experience of Clinical Commissioning Groups (CCGs) in the context of previous clinically led commissioning policy initiatives. Methods Realist evaluation. We identified the programme theories underlying the claims made about GP 'added value' in commissioning from interviews with key informants. We tested these theories against observational data from four case study sites to explore whether and how these claims were borne out in practice. Results The complexity of CCG structures means CCGs are quite different from one another with different distributions of responsibilities between the various committees. This makes it difficult to compare CCGs with one another. Greater GP involvement was important but it was not clear where and how GPs could add most value. We identified some of the mechanisms and conditions which enable CCGs to maximize the 'added value' that GPs bring to commissioning. Conclusion To maximize the value of clinical input, CCGs need to invest time and effort in preparing those involved, ensuring that they systematically gather evidence about service gaps and problems from their members, and engaging members in debate about the future shape of services.

  11. Quantum energy teleportation in a quantum Hall system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusa, Go; Izumida, Wataru; Hotta, Masahiro

    2011-09-15

    We propose an experimental method for a quantum protocol termed quantum energy teleportation (QET), which allows energy transportation to a remote location without physical carriers. Using a quantum Hall system as a realistic model, we discuss the physical significance of QET and estimate the order of energy gain using reasonable experimental parameters.

  12. Interdisciplinary Modeling and Dynamics of Archipelago Straits

    DTIC Science & Technology

    2009-01-01

    modeling, tidal modeling and multi-dynamics nested domains and non-hydrostatic modeling WORK COMPLETED Realistic Multiscale Simulations, Real-time...six state variables (chlorophyll, nitrate, ammonium, detritus, phytoplankton, and zooplankton) were needed to initialize simulations. Using biological...parameters from literature, climatology from World Ocean Atlas data for nitrate and chlorophyll profiles extracted from satellite data, a first

  13. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

    Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows performing realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, a full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D. Very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NIs for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  14. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on estimating only the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xₜ | xₜ₋₁). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  15. Modern Perspectives on Numerical Modeling of Cardiac Pacemaker Cell

    PubMed Central

    Maltsev, Victor A.; Yaniv, Yael; Maltsev, Anna V.; Stern, Michael D.; Lakatta, Edward G.

    2015-01-01

    Cardiac pacemaking is a complex phenomenon that is still not completely understood. Together with experimental studies, numerical modeling has traditionally been used to acquire mechanistic insights in this research area. This review summarizes the present state of numerical modeling of the cardiac pacemaker, including approaches to resolve present paradoxes and controversies. Specifically, we discuss the requirement for realistic modeling to consider the symmetrical importance of both intracellular and cell membrane processes (within a recent “coupled-clock” theory). Promising future developments of the complex pacemaker system models include the introduction of local calcium control, mitochondria function, and biochemical regulation of protein phosphorylation and cAMP production. Modern numerical and theoretical methods, such as multi-parameter sensitivity analyses within extended populations of models and bifurcation analyses, are also important for the definition of the most realistic parameters that describe a robust, yet simultaneously flexible, operation of the coupled-clock pacemaker cell system. The systems approach to exploring cardiac pacemaker function will guide the development of new therapies, such as biological pacemakers for treating insufficient cardiac pacemaker function, which becomes especially prevalent with advancing age. PMID:24748434

  16. Kinetics of devolatilization and oxidation of a pulverized biomass in an entrained flow reactor under realistic combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, Santiago; Remacha, Pilar; Ballester, Javier

    2008-03-15

    In this paper, the results of a complete set of devolatilization and combustion experiments performed with pulverized (~500 μm) biomass in an entrained flow reactor under realistic combustion conditions are presented. The data obtained are used to derive the kinetic parameters that best fit the observed behaviors, according to a simple model of particle combustion (one-step devolatilization, apparent oxidation kinetics, thermally thin particles). The model is found to adequately reproduce the experimental trends regarding both volatile release and char oxidation rates for the range of particle sizes and combustion conditions explored. The experimental and numerical procedures, similar to those recently proposed for the combustion of pulverized coal [J. Ballester, S. Jimenez, Combust. Flame 142 (2005) 210-222], have been designed to derive the parameters required for the analysis of biomass combustion in practical pulverized fuel configurations and allow a reliable characterization of any finely pulverized biomass. Additionally, the results of a limited study on the release rate of nitrogen from the biomass particle during combustion are shown.

  17. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso William

    This study was undertaken to develop a nonsurgical breast biopsy method based on Gd-DTPA Contrast Enhanced Magnetic Resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartment Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: the rate of Gd exiting the compartment representing the extracellular space of a lesion; the rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. Curve fitting for both the PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of the parameter values to differentiate between tissue types. This evaluation was applied to registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
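    The per-voxel fitting step can be sketched with SciPy, whose `curve_fit` uses the Levenberg-Marquardt algorithm when `method='lm'` is selected. The bi-exponential form below is a generic two-compartment enhancement curve with hypothetical parameters, not the exact Patlak or Brix parameterization used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(t, A, k_in, k_out):
    """Generic two-compartment enhancement curve: signal rises as tracer
    enters the lesion compartment (rate k_in) and decays as it washes
    out (rate k_out)."""
    return A * (np.exp(-k_out * t) - np.exp(-k_in * t))

# Synthetic "voxel" time course with a little noise
t = np.linspace(0.1, 10.0, 40)
true_params = (2.0, 1.5, 0.2)
rng = np.random.default_rng(1)
signal = two_compartment(t, *true_params) + rng.normal(0.0, 0.01, t.size)

# method='lm' selects Levenberg-Marquardt nonlinear regression
popt, _ = curve_fit(two_compartment, t, signal,
                    p0=(1.0, 1.0, 0.1), method='lm')
```

    In the study this fit is repeated for every voxel and the best-fit parameters are assembled into 3D parametric images.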

  18. Profile of a leader. Mary Agnes Snively: realistic optimist.

    PubMed

    Mansell, D

    1999-01-01

    This paper examines the leadership Mary Agnes Snively gave to Canadian nursing during the late nineteenth and early twentieth centuries, with a particular focus on her practical views regarding nursing education. Although surrounded by the Victorian values of her day, Snively developed a vision of nursing education that was both optimistic and realistic. This investigation of Snively's ideas, as they were articulated in papers she presented to the American Society of Superintendents of Training Schools for Nurses in 1895 and 1898, is further testament to the validity of the accolade, "Mother of Nurses in Canada," given her in 1924 by her biographer.

  19. A universal surface complexation framework for modeling proton binding onto bacterial surfaces in geologic settings

    USGS Publications Warehouse

    Borrok, D.; Turner, B.F.; Fein, J.B.

    2005-01-01

    Adsorption onto bacterial cell walls can significantly affect the speciation and mobility of aqueous metal cations in many geologic settings. However, a unified thermodynamic framework for describing bacterial adsorption reactions does not exist. This problem originates from the numerous approaches that have been chosen for modeling bacterial surface protonation reactions. In this study, we compile all currently available potentiometric titration datasets for individual bacterial species, bacterial consortia, and bacterial cell wall components. Using a consistent, four discrete site, non-electrostatic surface complexation model, we determine total functional group site densities for all suitable datasets, and present an averaged set of 'universal' thermodynamic proton binding and site density parameters for modeling bacterial adsorption reactions in geologic systems. Modeling results demonstrate that the total concentrations of proton-active functional group sites for the 36 bacterial species and consortia tested are remarkably similar, averaging 3.2 ± 1.0 (1σ) × 10⁻⁴ moles/wet gram. Examination of the uncertainties involved in the development of proton-binding modeling parameters suggests that ignoring factors such as bacterial species, ionic strength, temperature, and growth conditions introduces relatively small error compared to the unavoidable uncertainty associated with the determination of cell abundances in realistic geologic systems. Hence, we propose that reasonable estimates of the extent of bacterial cell wall deprotonation can be made using averaged thermodynamic modeling parameters from all of the experiments that are considered in this study, regardless of the bacterial species used, ionic strength, temperature, or growth condition of the experiment. The average site densities for the four discrete sites are 1.1 ± 0.7 × 10⁻⁴, 9.1 ± 3.8 × 10⁻⁵, 5.3 ± 2.1 × 10⁻⁵, and 6.6 ± 3.0 × 10⁻⁵ moles/wet gram bacteria for the sites with pKa values of 3.1, 4.7, 6.6, and 9.0, respectively. It is our hope that this thermodynamic framework for modeling bacteria-proton binding reactions will also provide the basis for the development of an internally consistent set of bacteria-metal binding constants. 'Universal' constants for bacteria-metal binding reactions can then be used in conjunction with equilibrium constants for other important metal adsorption and complexation reactions to calculate the overall distribution of metals in realistic geologic systems.
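    With the averaged 'universal' site densities and pKa values reported in this record, the extent of cell-wall deprotonation at a given pH follows directly from the non-electrostatic model, treating each discrete site as a monoprotic acid. A minimal sketch:

```python
import numpy as np

# Averaged 'universal' parameters from the record: site densities
# (moles per wet gram of bacteria) and the corresponding pKa values.
SITE_DENSITIES = np.array([1.1e-4, 9.1e-5, 5.3e-5, 6.6e-5])
PKA = np.array([3.1, 4.7, 6.6, 9.0])

def deprotonated_sites(pH):
    """Total concentration of deprotonated functional-group sites
    (mol/wet g) at a given pH; each site follows a monoprotic
    Henderson-Hasselbalch fraction 1 / (1 + 10**(pKa - pH))."""
    frac = 1.0 / (1.0 + 10.0 ** (PKA - pH))
    return float(np.sum(SITE_DENSITIES * frac))

# At near-neutral pH the low-pKa (carboxyl-type) sites dominate
print(deprotonated_sites(7.0))
```

    This simple calculation is what makes the averaged parameter set useful in practice: the extent of deprotonation, and hence the sites available for metal binding, can be estimated without re-fitting titration data for each bacterial species.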

  20. Monetary Shocks in Models with Inattentive Producers.

    PubMed

    Alvarez, Fernando E; Lippi, Francesco; Paciello, Luigi

    2016-04-01

    We study models where prices respond slowly to shocks because firms are rationally inattentive. Producers must pay a cost to observe the determinants of the current profit maximizing price, and hence observe them infrequently. To generate large real effects of monetary shocks in such a model the time between observations must be long and/or highly volatile. Previous work on rational inattentiveness has allowed for observation intervals that are either constant-but-long (e.g. Caballero, 1989 or Reis, 2006) or volatile-but-short (e.g. Reis's (2006) example where observation costs are negligible), but not both. In these models, the real effects of monetary policy are small for realistic values of the duration between observations. We show that non-negligible observation costs produce both of these effects: intervals between observations are infrequent and volatile. This generates large real effects of monetary policy for realistic values of the average time between observations.

  1. Monetary Shocks in Models with Inattentive Producers

    PubMed Central

    Alvarez, Fernando E.; Lippi, Francesco; Paciello, Luigi

    2016-01-01

    We study models where prices respond slowly to shocks because firms are rationally inattentive. Producers must pay a cost to observe the determinants of the current profit maximizing price, and hence observe them infrequently. To generate large real effects of monetary shocks in such a model the time between observations must be long and/or highly volatile. Previous work on rational inattentiveness has allowed for observation intervals that are either constant-but-long (e.g. Caballero, 1989 or Reis, 2006) or volatile-but-short (e.g. Reis's (2006) example where observation costs are negligible), but not both. In these models, the real effects of monetary policy are small for realistic values of the duration between observations. We show that non-negligible observation costs produce both of these effects: intervals between observations are infrequent and volatile. This generates large real effects of monetary policy for realistic values of the average time between observations. PMID:27516627

  2. Stability analysis of a controlled mechanical system with parametric uncertainties in LuGre friction model

    NASA Astrophysics Data System (ADS)

    Sun, Yun-Hsiang; Sun, Yuming; Wu, Christine Qiong; Sepehri, Nariman

    2018-04-01

    The parameters of a friction model identified for a specific control system are not constant. They vary over time and have a significant effect on the control system's stability. Although much research has been devoted to stability analysis under parametric uncertainty, less attention has been paid to incorporating a realistic friction model into the analysis. After reviewing common friction models for controller design, a modified LuGre friction model is selected for the stability analysis in this study. Two parameters of the LuGre model, namely σ0 and σ1, are critical to the demonstration of dynamic friction features, yet their identification is difficult to carry out, resulting in a high level of uncertainty in their values. Aiming at uncovering the effect of σ0 and σ1 variations on control system stability, a servomechanism with a modified LuGre friction model is investigated. Two set-point position controllers are synthesised based on the servomechanism model to form two case studies. Through Lyapunov exponents, it is clear that the variation of σ0 and σ1 has an obvious effect on the stability of the studied systems and should not be overlooked in the design phase.
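    For reference, the standard LuGre model couples an internal bristle state z to the sliding velocity, with σ0 as bristle stiffness and σ1 as bristle damping. A minimal constant-velocity simulation is sketched below; all parameter values are illustrative, not those identified in the paper.

```python
import numpy as np

# Illustrative LuGre parameters (hypothetical, not from the paper)
SIGMA0 = 1e5    # bristle stiffness (N/m)
SIGMA1 = 300.0  # bristle damping (N s/m)
SIGMA2 = 0.4    # viscous friction coefficient (N s/m)
FC, FS = 1.0, 1.5  # Coulomb and stiction force levels (N)
VS = 0.01       # Stribeck velocity (m/s)

def simulate_lugre(v, t_end=0.5, dt=1e-5):
    """Forward-Euler integration of the LuGre state z at constant
    sliding velocity v; returns the friction force history."""
    z, forces = 0.0, []
    g = FC + (FS - FC) * np.exp(-(v / VS) ** 2)  # Stribeck curve
    for _ in range(int(t_end / dt)):
        zdot = v - SIGMA0 * abs(v) * z / g       # bristle dynamics
        z += zdot * dt
        forces.append(SIGMA0 * z + SIGMA1 * zdot + SIGMA2 * v)
    return np.array(forces)

f = simulate_lugre(v=0.05)
```

    At steady sliding the force settles to g(v) + σ2·v, but the transient, and hence the closed-loop behavior, depends directly on σ0 and σ1, which is why their uncertainty matters for the stability analysis.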

  3. Fluid-structure interaction of patient-specific Circle of Willis with aneurysm: Investigation of hemodynamic parameters.

    PubMed

    Jahed, Mahsa; Ghalichi, Farzan; Farhoudi, Mehdi

    2018-01-01

    The Circle of Willis (COW) is a network of cerebral arteries which continually supplies the brain with blood. Any disturbance in this supply can result in trauma or even death. One such condition is a brain aneurysm. Clinical methods for diagnosing an aneurysm can only measure blood velocity; however, in order to understand the causes of these events it is necessary to have information about pressure and wall shear stress, which is possible through computational models. The purpose of this study is to obtain accurate information on hemodynamic blood flow in a COW with an aneurysm and to investigate the factors affecting aneurysm growth and rupture. Here, realistic three-dimensional models have been produced from angiography images and simulated with the ANSYS CFX software, taking fluid-structure interaction into account. The hemodynamic study of the COW and the intra-aneurysmal flow showed that the WSS and wall tension at the aneurysm neck are 129.5 Pa and 12.2 kPa for case A and 53.3 Pa and 56.2 kPa for case B, higher than at the fundus; thus the neck of the aneurysm is prone to rupture. This study showed that the distribution of these parameters depends on the geometry of the COW, and maximum values are seen in areas prone to aneurysm formation.

  4. A Verification-Driven Approach to Control Analysis and Tuning

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and in the union of these two spaces. In regard to control analysis, we propose strategies that enable bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for the improvement of the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.

  5. Theoretical and Field Experimental Investigation of an Arrayed Solar Thermoelectric Flat-Plate Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed ur; Siddiqui, Mubashir Ali

    2018-05-01

    This work theoretically and experimentally investigated the performance of an arrayed solar flat-plate thermoelectric generator (ASFTEG). An analytical model, based on energy balances, was established for determining load voltage, power output and overall efficiency of ASFTEGs. An array consists of TEG devices (or modules) connected electrically in series and operating in closed-circuit mode with a load. The model takes into account the distinct temperature difference across each module, which is a major feature of this model. Parasitic losses have also been included in the model for realistic results. With the given set of simulation parameters, an ASFTEG consisting of four commercially available Bi2Te3 modules had a predicted load voltage of 200 mV and generated 3546 μW of electric power output. Predictions from the model were in good agreement with field experimental outcomes from a prototype ASFTEG, which was developed for validation purposes. Later, the model was simulated to maximize the performance of the ASFTEG by adjusting the thermal and electrical design of the system. Optimum values of design parameters were evaluated and discussed in detail. Beyond the current limitations associated with improvements in thermoelectric materials, this study will eventually lead to the successful development of portable roof-top renewable TEGs.

  6. Multistationarity in mass action networks with applications to ERK activation.

    PubMed

    Conradi, Carsten; Flockerzi, Dietrich

    2012-07-01

    Ordinary Differential Equations (ODEs) are an important tool in many areas of Quantitative Biology. For many ODE systems multistationarity (i.e. the existence of at least two positive steady states) is a desired feature. In general establishing multistationarity is a difficult task as realistic biological models are large in terms of states and (unknown) parameters and in most cases poorly parameterized (because of noisy measurement data of few components, a very small number of data points and only a limited number of repetitions). For mass action networks establishing multistationarity hence is equivalent to establishing the existence of at least two positive solutions of a large polynomial system with unknown coefficients. For mass action networks with certain structural properties, expressed in terms of the stoichiometric matrix and the reaction rate-exponent matrix, we present necessary and sufficient conditions for multistationarity that take the form of linear inequality systems. Solutions of these inequality systems define pairs of steady states and parameter values. We also present a sufficient condition to identify networks where the aforementioned conditions hold. To show the applicability of our results we analyse an ODE system that is defined by the mass action network describing the extracellular signal-regulated kinase (ERK) cascade (i.e. ERK-activation).

  7. Impact of Humidity on In Vitro Human Skin Permeation Experiments for Predicting In Vivo Permeability.

    PubMed

    Ishida, Masahiro; Takeuchi, Hiroyuki; Endo, Hiromi; Yamaguchi, Jun-Ichi

    2015-12-01

    In vitro skin permeation studies have been commonly conducted to predict in vivo permeability for the development of transdermal therapeutic systems (TTSs). We clarified the impact of humidity on the in vitro human skin permeation of two TTSs having different breathability and then elucidated the predictability of in vivo permeability based on in vitro experimental data. Nicotinell(®) TTS(®) 20 and Frandol(®) tape 40mg were used as model TTSs in this study. The in vitro human skin permeation experiments were conducted under humidity levels similar to those used in clinical trials (approximately 50%) as well as under higher humidity levels (approximately 95%). The skin permeability values of the drugs at 95% humidity were higher than those at 50% humidity. The time profiles of the human plasma concentrations after TTS application fitted well with the clinical data when predicted based on the in vitro permeation parameters at 50% humidity. On the other hand, the profiles predicted based on the parameters at 95% humidity were overestimated. The impact of humidity was higher for the more breathable TTS, Frandol(®) tape 40mg. These results show that in vitro human skin permeation experiments should be conducted under realistic clinical humidity levels, especially for breathable TTSs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
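
    As a hedged sketch of how an in vitro permeation parameter feeds an in vivo prediction: with a constant transdermal flux J (taken from steady-state in vitro permeation) into a one-compartment disposition model, the plasma concentration follows C(t) = J·A/CL · (1 − e^(−ke·t)). All numbers below are hypothetical, not the paper's measured values.

```python
import math

# Hypothetical inputs (illustrative only, not the paper's values):
J_ss = 15.0    # in vitro steady-state flux, ug/cm^2/h (humidity-dependent)
area = 20.0    # patch area, cm^2
CL = 70.0      # systemic clearance, L/h
ke = 0.35      # first-order elimination rate constant, 1/h

def plasma_conc(t_h, flux):
    """Plasma concentration (ug/L) for a constant transdermal input rate."""
    return flux * area / CL * (1.0 - math.exp(-ke * t_h))

# A flux measured at 95% humidity that is, say, 1.5x the 50%-humidity flux
# scales the whole predicted curve by 1.5 -> overestimation vs. the clinic.
for t in (1, 4, 12, 24):
    print(t, "h:", round(plasma_conc(t, J_ss), 2), "ug/L")
```

    This makes the abstract's point mechanical: any humidity-driven inflation of the in vitro flux propagates proportionally into the predicted plasma curve.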

  8. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S. as well as realistic grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
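
    The abstract does not give the formulation itself. A common diagnostic of this general kind computes τ as cloud depth divided by a mean updraft speed, clipped to model-typical bounds; the sketch below is an assumption in that spirit, not the paper's formula, and the bounds are illustrative.

```python
# Assumed diagnostic: tau = cloud depth / mean updraft speed, clipped to
# typical model bounds (here 10 min .. 1 h, an illustrative choice).
def convective_tau(cloud_depth_m, w_updraft_ms, tau_min=600.0, tau_max=3600.0):
    """Convective adjustment timescale in seconds."""
    tau = cloud_depth_m / max(w_updraft_ms, 1e-6)  # guard against w -> 0
    return min(max(tau, tau_min), tau_max)

print(convective_tau(8000.0, 5.0))   # 8 km deep cloud, 5 m/s updraft
```

    A formulation like this lets τ respond to the resolved cloud state instead of being a fixed tuning constant.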

  9. Emergence of liquid crystalline order in the lowest Landau level of a quantum Hall system with internal anisotropy

    NASA Astrophysics Data System (ADS)

    Ciftja, Orion

    2018-05-01

    It has now become evident that interplay between internal anisotropy parameters (such as electron mass anisotropy and/or anisotropic coupling of electrons to the substrate) and electron-electron correlation effects can create a rich variety of possibilities especially in quantum Hall systems. The electron mass anisotropy or material substrate effects (for example, the piezoelectric effect in GaAs) can lead to an effective anisotropic interaction potential between electrons. For lack of knowledge of realistic ab-initio potentials that may describe such effects, we adopt a phenomenological approach and assume that an anisotropic Coulomb interaction potential mimics the internal anisotropy of the system. In this work we investigate the emergence of liquid crystalline order at filling factor ν = 1/6 of the lowest Landau level, a state very close to the point where a transition from the liquid to the Wigner solid happens. We consider small finite systems of electrons interacting with an anisotropic Coulomb interaction potential and study the energy stability of an anisotropic liquid crystalline state relative to its isotropic Fermi-liquid counterpart. Quantum Monte Carlo simulation results in disk geometry show stabilization of liquid crystalline order driven by an anisotropic Coulomb interaction potential at all values of the interaction anisotropy parameter studied.

  10. Selection of optimum median-filter-based ambiguity removal algorithm parameters for NSCAT. [NASA scatterometer

    NASA Technical Reports Server (NTRS)

    Shaffer, Scott; Dunbar, R. Scott; Hsiao, S. Vincent; Long, David G.

    1989-01-01

    The NASA Scatterometer, NSCAT, is an active spaceborne radar designed to measure the normalized radar backscatter coefficient (sigma0) of the ocean surface. These measurements can, in turn, be used to infer the surface vector wind over the ocean using a geophysical model function. Several ambiguous wind vectors result because of the nature of the model function. A median-filter-based ambiguity removal algorithm will be used by the NSCAT ground data processor to select the best wind vector from the set of ambiguous wind vectors. This process is commonly known as dealiasing or ambiguity removal. The baseline NSCAT ambiguity removal algorithm and the method used to select the set of optimum parameter values are described. An extensive simulation of the NSCAT instrument and ground data processor provides a means of testing the resulting tuned algorithm. This simulation generates the ambiguous wind-field vectors expected from the instrument as it orbits over a set of realistic mesoscale wind fields. The ambiguous wind field is then dealiased using the median-based ambiguity removal algorithm. Performance is measured by comparison of the unambiguous wind fields with the true wind fields. Results have shown that the median-filter-based ambiguity removal algorithm satisfies NSCAT mission requirements.
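
    A toy version of median-filter ambiguity removal (none of the NSCAT parameter values or ranking weights are reproduced here): each cell keeps the ambiguity closest to the component-wise median of its 3x3 neighbourhood, iterated to convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy wind field: index 0 of each cell's ambiguity list is the true vector,
# the remaining entries are decoy "aliases" offset by noise.
ny, nx, namb = 12, 12, 4
u, v = np.meshgrid(np.linspace(2, 6, nx), np.linspace(-3, 3, ny))
true_uv = np.stack([u, v], axis=-1)                      # (ny, nx, 2)
amb = np.repeat(true_uv[:, :, None, :], namb, axis=2)    # (ny, nx, namb, 2)
amb[:, :, 1:, :] += rng.normal(0, 4, (ny, nx, namb - 1, 2))

sel = rng.integers(0, namb, (ny, nx))     # arbitrary initial selection

for _ in range(20):                       # iterate the median filter
    chosen = np.take_along_axis(amb, sel[..., None, None], axis=2)[:, :, 0, :]
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(j - 1, 0), min(j + 2, ny)
            i0, i1 = max(i - 1, 0), min(i + 2, nx)
            med = np.median(chosen[j0:j1, i0:i1].reshape(-1, 2), axis=0)
            sel[j, i] = np.argmin(np.linalg.norm(amb[j, i] - med, axis=1))

print("fraction dealiased correctly:", (sel == 0).mean())
```

    Because the decoys scatter around the true field, the local median tracks the true wind and the iteration locks most cells onto the correct alias.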

  11. Inversion of ocean-bottom seismometer (OBS) waveforms for oceanic crust structure: a synthetic study

    NASA Astrophysics Data System (ADS)

    Li, Xueyan; Wang, Yanbin; Chen, Yongshun John

    2016-08-01

    The waveform inversion method is applied—using synthetic ocean-bottom seismometer (OBS) data—to study oceanic crust structure. A niching genetic algorithm (NGA) is used to implement the inversion for the thickness and P-wave velocity of each layer, and to update the model by minimizing the objective function, which consists of the misfit and cross-correlation of observed and synthetic waveforms. The influence of specific NGA method parameters is discussed, and suitable values are presented. The NGA method works well for various observation systems, such as those with irregular and sparse distribution of receivers as well as single receiver systems. A strategy is proposed to accelerate the convergence rate by a factor of five with no increase in computational complexity; this is achieved using a first inversion with several generations to impose a restriction on the preset range of each parameter and then conducting a second inversion with the new range. Despite the successes of this method, its usage is limited. A shallow water layer is not favored because the direct wave in water will suppress the useful reflection signals from the crust. A more precise calculation of the air-gun source signal should be considered in order to better simulate waveforms generated in realistic situations; further studies are required to investigate this issue.

  12. Neutron Star Models in Alternative Theories of Gravity

    NASA Astrophysics Data System (ADS)

    Manolidis, Dimitrios

    We study the structure of neutron stars in a broad class of alternative theories of gravity. In particular, we focus on Scalar-Tensor theories and f(R) theories of gravity. We construct static and slowly rotating numerical star models for a set of equations of state, including a polytropic model and more realistic equations of state motivated by nuclear physics. Observable quantities such as masses, radii, etc. are calculated for a set of parameters of the theories. Specifically for Scalar-Tensor theories, we also calculate the sensitivities of the mass and moment of inertia of the models to variations in the asymptotic value of the scalar field at infinity. These quantities enter post-Newtonian equations of motion and gravitational waveforms of two-body systems that are used for gravitational-wave parameter estimation, in order to test these theories against observations. The construction of numerical models of neutron stars in f(R) theories of gravity has been difficult in the past. Using a new formalism by Jaime, Patino and Salgado we were able to construct models with high interior pressure, namely p_c > ρ_c/3, both for constant density models and models with a polytropic equation of state. Thus, we have shown that earlier objections to f(R) theories on the basis of the inability to construct viable neutron star models are unfounded.

  13. A delay differential model of ENSO variability: parametric instability and the distribution of extremes

    NASA Astrophysics Data System (ADS)

    Zaliapin, I.; Ghil, M.; Thompson, S.

    2007-12-01

    We consider a Delay Differential Equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in the ENSO dynamics: delayed negative feedback and seasonal forcing. Descriptive and metric stability analyses of the model are performed in a complete 3D space of its physically relevant parameters. Existence of two regimes --- stable and unstable --- is reported. The domains of the regimes are separated by a sharp neutral curve in the parameter space. The detailed structure of the neutral curve becomes very complicated (possibly fractal), and individual trajectories within the unstable region become highly complex (possibly chaotic) as the atmosphere-ocean coupling increases. In the unstable regime, spontaneous transitions in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values occur, for purely periodic, seasonal forcing. This indicates (via the continuous dependence theorem) the existence of numerous unstable solutions responsible for the complex dynamics of the system. In the stable regime, only periodic solutions are found. Our results illustrate the role of the distinct parameters of ENSO variability, such as the strength of seasonal forcing vs. atmosphere-ocean coupling and the propagation period of oceanic waves across the Tropical Pacific. The model reproduces, among other phenomena, the Devil's bleachers (caused by period locking) documented in other ENSO models, such as nonlinear PDEs and GCMs, as well as in certain observations. We expect such behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.
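
    A minimal sketch of a delayed-negative-feedback oscillator with seasonal forcing, in the spirit of this model class (the functional form and parameter values are assumptions, not the paper's): integrate dh/dt = −tanh(κ h(t−τ)) + b cos(2πt) with an explicit Euler scheme and a history buffer.

```python
import math

# Euler integration of a delayed-feedback ENSO-type oscillator with
# seasonal forcing (illustrative parameters, not the paper's):
#   dh/dt = -tanh(kappa * h(t - tau)) + b * cos(2*pi*t)
kappa, tau, b = 2.0, 0.65, 1.0        # coupling, delay (years), forcing
dt, T = 0.001, 40.0
nlag = int(round(tau / dt))

h = [0.1] * (nlag + 1)                # constant initial history on [-tau, 0]
traj = []
t = 0.0
while t < T:
    dh = -math.tanh(kappa * h[-1 - nlag]) + b * math.cos(2 * math.pi * t)
    h.append(h[-1] + dt * dh)         # Euler step using the delayed state
    traj.append(h[-1])
    t += dt

print("range of h:", min(traj), max(traj))   # bounded, oscillatory
```

    Sweeping κ, τ and b in such a sketch is how the stable/unstable regime structure described above can be explored numerically.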

  14. Towards quantifying uncertainty in Greenland's contribution to 21st century sea-level rise

    NASA Astrophysics Data System (ADS)

    Perego, M.; Tezaur, I.; Price, S. F.; Jakeman, J.; Eldred, M.; Salinger, A.; Hoffman, M. J.

    2015-12-01

    We present recent work towards developing a methodology for quantifying uncertainty in Greenland's 21st century contribution to sea-level rise. While we focus on uncertainties associated with the optimization and calibration of the basal sliding parameter field, the methodology is largely generic and could be applied to other (or multiple) sets of uncertain model parameter fields. The first step in the workflow is the solution of a large-scale, deterministic inverse problem, which minimizes the mismatch between observed and computed surface velocities by optimizing the two-dimensional coefficient field in a linear-friction sliding law. We then expand the deviation in this coefficient field from its estimated "mean" state using a reduced basis of Karhunen-Loeve Expansion (KLE) vectors. A Bayesian calibration is used to determine the optimal coefficient values for this expansion. The prior for the Bayesian calibration can be computed using the Hessian of the deterministic inversion or using an exponential covariance kernel. The posterior distribution is then obtained using Markov Chain Monte Carlo run on an emulator of the forward model. Finally, the uncertainty in the modeled sea-level rise is obtained by performing an ensemble of forward propagation runs. We present and discuss preliminary results obtained using a moderate-resolution model of the Greenland Ice Sheet. As demonstrated in previous work, the primary difficulty in applying the complete workflow to realistic, high-resolution problems is that the effective dimension of the parameter space is very large.
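
    A sketch of the reduced-basis step: eigendecompose an exponential covariance kernel (one of the two prior options mentioned above) and keep the leading Karhunen-Loeve modes. A 1-D grid stands in for the 2-D friction coefficient field; the grid size, variance and correlation length are assumptions.

```python
import numpy as np

# Exponential covariance kernel on a 1-D grid (a stand-in for the 2-D
# basal-friction field; grid size, variance, correlation length assumed).
n, corr_len, sigma2 = 200, 0.2, 1.0
x = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigval, eigvec = np.linalg.eigh(C)              # ascending eigenvalues
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # reorder to descending

# Truncate the KLE basis at 95% of the total variance:
k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95)) + 1
print(k, "KLE modes capture 95% of the variance")

# One random realization of the field from k KLE coefficients:
rng = np.random.default_rng(1)
field = eigvec[:, :k] @ (np.sqrt(eigval[:k]) * rng.standard_normal(k))
```

    The Bayesian calibration then works in the k-dimensional coefficient space rather than on the full n-dimensional field, which is what keeps the MCMC tractable.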

  15. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    NASA Astrophysics Data System (ADS)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy-efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.
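
    The pseudo-data regression step can be sketched as ordinary least squares on translog terms, ln Q = a0 + Σ aᵢ ln xᵢ + ½ Σ bᵢⱼ ln xᵢ ln xⱼ. The "true" coefficients and noise level below are invented for illustration, not CIMS output.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pseudo-data: energy (E) and capital (K) inputs varied over a wide range,
# output generated from a known two-input translog plus small noise.
n = 500
lnE, lnK = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
true = dict(a0=1.0, aE=0.4, aK=0.5, bEE=-0.05, bKK=-0.04, bEK=0.08)
lnQ = (true["a0"] + true["aE"] * lnE + true["aK"] * lnK
       + 0.5 * true["bEE"] * lnE**2 + 0.5 * true["bKK"] * lnK**2
       + true["bEK"] * lnE * lnK + rng.normal(0, 0.01, n))

# Regress lnQ on the translog terms by least squares:
X = np.column_stack([np.ones(n), lnE, lnK, 0.5 * lnE**2, 0.5 * lnK**2, lnE * lnK])
coef, *_ = np.linalg.lstsq(X, lnQ, rcond=None)
print(dict(zip(["a0", "aE", "aK", "bEE", "bKK", "bEK"], coef.round(3))))
```

    With the fitted coefficients in hand, Allen or Morishima elasticities of substitution between input pairs follow from standard translog formulas.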

  16. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    NASA Astrophysics Data System (ADS)

    Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka; Belehaki, Anna; Tsagouri, Ioanna

    2012-12-01

    Validation results on the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs) with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height, despite the inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  17. Monte Carlo simulation of ferroelectric domain growth

    NASA Astrophysics Data System (ADS)

    Li, B. L.; Liu, X. P.; Fang, F.; Zhu, J. L.; Liu, J.-M.

    2006-01-01

    The kinetics of two-dimensional isothermal domain growth in a quenched ferroelectric system is investigated using Monte Carlo simulation based on a realistic Ginzburg-Landau ferroelectric model with cubic-tetragonal (square-rectangle) phase transitions. The evolution of the domain pattern and domain size with annealing time is simulated, and the stability of trijunctions and tetrajunctions of domain walls is analyzed. It is found that in this more realistic model with strong dipole alignment anisotropy and long-range Coulomb interaction, the power law for normal domain growth still applies. Towards the late stage of domain growth, both the average domain area and the reciprocal density of domain wall junctions increase linearly with time, and the one-parameter dynamic scaling of the domain growth is demonstrated.

  18. Realistic Solar Surface Convection Simulations

    NASA Technical Reports Server (NTRS)

    Stein, Robert F.; Nordlund, Ake

    2000-01-01

    We perform essentially parameter free simulations with realistic physics of convection near the solar surface. We summarize the physics that is included and compare the simulation results with observations. Excellent agreement is obtained for the depth of the convection zone, the p-mode frequencies, the p-mode excitation rate, the distribution of the emergent continuum intensity, and the profiles of weak photospheric lines. We describe how solar convection is nonlocal. It is driven from a thin surface thermal boundary layer where radiative cooling produces low entropy gas which forms the cores of the downdrafts in which most of the buoyancy work occurs. We show that turbulence and vorticity are mostly confined to the intergranular lanes and underlying downdrafts. Finally, we illustrate our current work on magneto-convection.

  19. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect a character's internal emotional state or its responses in social communication. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic due to human beings' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  20. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. 
    If accurate volcanic flow parameters are known, this technique could be applied broadly to enable near real-time calculation of eruption mass flow rates and total masses, critical input parameters for volcanic eruption modeling and monitoring that are not currently available.

  1. A statistical approach to quasi-extinction forecasting.

    PubMed

    Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric

    2007-12-01

    Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters.
Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
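
    The diffusion approximation behind this argument can be sketched directly: estimate the drift and variance of log abundance from census differences, then evaluate the classic probability that Brownian motion with drift ever declines to the quasi-extinction threshold. The simulated 30-year time series and the threshold below are illustrative.

```python
import math
import random

random.seed(3)

# Toy census data: stochastic exponential growth (random walk in log
# abundance), standing in for a 20-30 year population time series.
mu_true, sigma_true, years = -0.02, 0.1, 30
logN = [math.log(1000.0)]
for _ in range(years):
    logN.append(logN[-1] + mu_true + random.gauss(0.0, sigma_true))

# Estimate drift and variance from log-count differences:
diffs = [b - a for a, b in zip(logN, logN[1:])]
mu = sum(diffs) / len(diffs)
s2 = sum((d - mu) ** 2 for d in diffs) / (len(diffs) - 1)

# Probability of ever hitting threshold Nq (Brownian motion with drift):
# certain if the drift is non-positive, exp(-2*mu*d0/s2) otherwise.
Nq = 100.0
d0 = logN[-1] - math.log(Nq)
p_quasi = 1.0 if mu <= 0 else math.exp(-2.0 * mu * d0 / s2)
print(round(mu, 4), round(p_quasi, 3))
```

    Everything here is estimated from the time series alone, which is the paper's point: no mechanistic demographic model is required.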

  2. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  3. Computer Modeling of Non-Isothermal Crystallization

    NASA Technical Reports Server (NTRS)

    Kelton, K. F.; Narayan, K. Lakshmi; Levine, L. E.; Cull, T. C.; Ray, C. S.

    1996-01-01

    A realistic computer model for simulating isothermal and non-isothermal phase transformations proceeding by homogeneous and heterogeneous nucleation and interface-limited growth is presented. A new treatment for particle size effects on the crystallization kinetics is developed and is incorporated into the numerical model. Time-dependent nucleation rates, size-dependent growth rates, and surface crystallization are also included. Model predictions are compared with experimental measurements of DSC/DTA peak parameters for the crystallization of lithium disilicate glass as a function of particle size, Pt doping levels, and water content. The quantitative agreement that is demonstrated indicates that the numerical model can be used to extract key kinetic data from easily obtained calorimetric data. The model can also be used to probe nucleation and growth behavior in regimes that are otherwise inaccessible. Based on a fit to data, an earlier prediction that the time-dependent nucleation rate in a DSC/DTA scan can rise above the steady-state value at a temperature higher than the peak in the steady-state rate is demonstrated.
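
    As a minimal, hedged illustration of the interface-limited nucleation-and-growth kinetics this model class builds on, the isothermal KJMA (Avrami) limit gives the crystallized fraction in closed form; the rate constant and exponent below are illustrative, not values fitted to lithium disilicate data.

```python
import math

# Isothermal KJMA (Avrami) kinetics: X(t) = 1 - exp(-(k*t)**n), with
# n ~ 3-4 typical of interface-limited 3-D growth. k, n are illustrative.
def crystallized_fraction(t, k=0.02, n=3.0):
    """Fraction transformed at time t (arbitrary time units)."""
    return 1.0 - math.exp(-((k * t) ** n))

# Locate the transformation-rate peak (the analogue of a DSC/DTA peak)
# numerically from a centered difference of X(t):
ts = [float(i) for i in range(1, 200)]
rates = [crystallized_fraction(t + 0.5) - crystallized_fraction(t - 0.5) for t in ts]
t_peak = ts[rates.index(max(rates))]
print("time of maximum transformation rate:", t_peak)
```

    In the full numerical model, particle-size effects, time-dependent nucleation and surface crystallization shift this peak, which is exactly what the DSC/DTA comparison above exploits.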

  4. A stochastic method for stand-alone photovoltaic system sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabral, Claudia Valeria Tavora; Filho, Delly Oliveira; Martins, Jose Helvecio

    Photovoltaic systems utilize solar energy to generate electrical energy to meet load demands. Optimal sizing of these systems includes the characterization of solar radiation. Solar radiation at the Earth's surface has random characteristics and has been the focus of various academic studies. The objective of this study was to stochastically analyze parameters involved in the sizing of photovoltaic generators and develop a methodology for sizing of stand-alone photovoltaic systems. Energy storage for isolated systems and solar radiation were analyzed stochastically due to their random behavior. For the development of the proposed methodology, stochastic analyses were performed, including the Markov chain and the beta probability density function. The results obtained were compared with those from the deterministic Sandia sizing method; the stochastic model produced more reliable values. Both models present advantages and disadvantages; however, the stochastic one is more complex and provides more reliable and realistic results. (author)
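
    A hedged sketch of the two stochastic ingredients named above: a two-state Markov chain for the day type and a beta density for the daily clearness index. The transition probabilities and beta shape parameters are illustrative assumptions, not fitted values.

```python
import random

random.seed(4)

# Two-state (clear/cloudy) Markov chain for daily solar radiation, with
# the daily clearness index drawn from a state-dependent beta density.
P_clear = {"clear": 0.8, "cloudy": 0.4}          # P(next day clear | state)
beta_params = {"clear": (8.0, 2.0), "cloudy": (2.0, 5.0)}  # (alpha, beta)

state, series = "clear", []
for _ in range(365):
    a, b = beta_params[state]
    series.append(random.betavariate(a, b))      # clearness index in (0, 1)
    state = "clear" if random.random() < P_clear[state] else "cloudy"

print("mean clearness index:", sum(series) / len(series))
```

    Feeding many such synthetic years into an energy-balance simulation of the battery bank is what yields a probabilistic (rather than worst-case) loss-of-load estimate for sizing.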

  5. Streaming sausage, kink and tearing instabilities in a current sheet with applications to the earth's magnetotail

    NASA Technical Reports Server (NTRS)

    Lee, L. C.; Wang, S.; Wei, C. Q.; Tsurutani, B. T.

    1988-01-01

    This paper investigates the growth rates and eigenmode structures of the streaming sausage, kink, and tearing instabilities in a current sheet with a super-Alfvenic flow. The growth rates and eigenmode structures are first considered in the ideal incompressible limit by using a four-layer model, and then in a more realistic case in which all plasma parameters and the magnetic field vary continuously along the direction perpendicular to the magnetic field and plasma flow. An initial-value method is applied to obtain the growth rate and eigenmode profiles of the fastest growing mode, which is either the sausage mode or the kink mode. It is shown that, in the earth's magnetotail, where super-Alfvenic plasma flows are observed in the plasma sheet and the ratio between the plasma and magnetic pressures far away from the current layer is about 0.1-0.3 in the lobes, the streaming sausage and streaming tearing instabilities, but not kink modes, are likely to occur.

  6. Noise induced stabilization of chaotic free-running laser diode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virte, Martin, E-mail: mvirte@b-phot.org

    In this paper, we investigate theoretically the stabilization of a free-running vertical-cavity surface-emitting laser exhibiting polarization chaos dynamics. We report the existence of a boundary isolating the chaotic attractor on one side and a steady-state on the other side and identify the unstable periodic orbit playing the role of separatrix. In addition, we highlight a small range of parameters where the chaotic attractor passes through this boundary, and therefore where chaos only appears as a transient behaviour. Then, including the effect of spontaneous emission noise in the laser, we demonstrate that, for realistic levels of noise, the system is systematically pushed over the separating solution. As a result, we show that the chaotic dynamics cannot be sustained unless the steady-state on the other side of the separatrix becomes unstable. Finally, we link the stability of this steady-state to a small value of the birefringence in the laser cavity and discuss the significance of this result on future experimental work.

  7. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities; mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occur for realistic vehicles and tracks is assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values for the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach but also FASTSIM give considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.

  8. Evolutionary dynamics on networks of selectively neutral genotypes: effects of topology and sequence stability.

    PubMed

    Aguirre, Jacobo; Buldú, Javier M; Manrubia, Susanna C

    2009-12-01

    Networks of selectively neutral genotypes underlie the evolution of populations of replicators in constant environments. Previous theoretical analysis predicted that such populations will evolve toward highly connected regions of the genome space. We first study the evolution of populations of replicators on simple networks and quantify how the transient time to equilibrium depends on the initial distribution of sequences on the neutral network, on the topological properties of the latter, and on the mutation rate. Second, network neutrality is broken through the introduction of an energy for each sequence. This allows us to study the competition between two features (neutrality and energetic stability) relevant for survival and subject to different selective pressures. In cases where the two features are negatively correlated, the population experiences sudden migrations in the genome space for values of the relevant parameters that we calculate. The numerical study of larger networks indicates that the qualitative behavior to be expected in more realistic cases is already seen in representative examples of small networks.

  9. Zoonotic Transmission of Waterborne Disease: A Mathematical Model.

    PubMed

    Waters, Edward K; Hamilton, Andrew J; Sidhu, Harvinder S; Sidhu, Leesa A; Dunbar, Michelle

    2016-01-01

    Waterborne parasites that infect both humans and animals are common causes of diarrhoeal illness, but the relative importance of transmission between humans and animals and vice versa remains poorly understood. Transmission of infection from animals to humans via environmental reservoirs, such as water sources, has attracted attention as a potential source of endemic and epidemic infections, but existing mathematical models of waterborne disease transmission have limitations for studying this phenomenon, as they only consider contamination of environmental reservoirs by humans. This paper develops a mathematical model that represents the transmission of waterborne parasites within and between both animal and human populations. It also improves upon existing models by including animal contamination of water sources explicitly. Linear stability analysis and simulation results, using realistic parameter values to describe Giardia transmission in rural Australia, show that endemic infection of an animal host with zoonotic protozoa can result in endemic infection in human hosts, even in the absence of person-to-person transmission. These results imply that zoonotic transmission via environmental reservoirs is important.
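
The within-and-between-host structure described above can be sketched as a toy compartmental model. The sketch below is not the authors' model: the equations, parameter names and values are illustrative assumptions, with both host types shedding into a shared water reservoir and infection acquired only from water (person-to-person transmission omitted, matching the scenario tested).

```python
# Toy compartmental sketch (NOT the paper's model): SIS-type dynamics for
# human and animal hosts coupled only through a shared water reservoir W.
# All parameter names and values are illustrative assumptions.

def simulate(beta_hw=0.4, beta_aw=0.6, shed_h=0.0, shed_a=1.0,
             gamma_h=0.1, gamma_a=0.05, decay=0.2,
             dt=0.01, steps=200_000):
    """Forward-Euler integration of infected fractions I_h, I_a and water
    contamination W. With shed_h = 0 there is no human-to-water (and no
    person-to-person) transmission, so human infection is driven purely
    by the animal reservoir."""
    I_h, I_a, W = 0.0, 0.01, 0.0
    for _ in range(steps):
        dI_h = beta_hw * W * (1 - I_h) - gamma_h * I_h  # water -> humans
        dI_a = beta_aw * W * (1 - I_a) - gamma_a * I_a  # water -> animals
        dW = shed_h * I_h + shed_a * I_a - decay * W    # shedding vs. die-off
        I_h, I_a, W = I_h + dI_h * dt, I_a + dI_a * dt, W + dW * dt
    return I_h, I_a, W

# Endemic animal infection produces endemic human infection via water alone
I_h, I_a, W = simulate()
```

Setting `shed_a` to zero removes the animal contribution and the infection dies out in both hosts, mirroring the paper's point that the environmental reservoir contaminated by animals can by itself sustain endemic human infection.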

  10. Radiative breaking of the minimal supersymmetric left–right model

    DOE PAGES

    Okada, Nobuchika; Papapietro, Nathan

    2016-03-03

    We study a variation of the SUSY Left-Right symmetric model based on the gauge group SU(3)_C × SU(2)_L × SU(2)_R × U(1)_{B-L}. Beyond the quark and lepton superfields we only introduce a second Higgs bidoublet to produce realistic fermion mass matrices. This model does not include any SU(2)_R triplets. We also calculate renormalization group evolutions of the soft SUSY parameters at the one-loop level down to low energy. We find that an SU(2)_R slepton doublet acquires a negative mass squared at low energies, so that the breaking of SU(2)_R × U(1)_{B-L} → U(1)_Y is realized by a non-zero vacuum expectation value of a right-handed sneutrino. Small neutrino masses are produced through neutrino mixings with gauginos. We obtain mass limits on the SU(2)_R × U(1)_{B-L} sector from direct search results at the LHC as well as from lepton-gaugino mixing bounds from the LEP precision data.

  11. Aphid vector population density determines the emergence of necrogenic satellite RNAs in populations of cucumber mosaic virus.

    PubMed

    Betancourt, Mónica; Fraile, Aurora; Milgroom, Michael G; García-Arenal, Fernando

    2016-06-01

    The satellite RNAs of cucumber mosaic virus (CMV) that induce systemic necrosis in tomato plants (N-satRNA) multiply to high levels in the infected host while severely depressing CMV accumulation and, hence, its aphid transmission efficiency. As N-satRNAs are transmitted in CMV particles, the conditions for N-satRNA emergence are not obvious. Model analyses with realistic parameter values have predicted that N-satRNAs would invade CMV populations only when transmission rates are high. Here, we tested this hypothesis experimentally by passaging CMV or CMV+N-satRNAs at low or high aphid densities (2 or 8 aphids/plant). As predicted, high aphid densities were required for N-satRNA emergence. The results showed that at low aphid densities, random effects due to population bottlenecks during transmission dominate the epidemiological dynamics of CMV/CMV+N-satRNA. The results suggest that maintaining aphid populations at low density will prevent the emergence of highly virulent CMV+N-satRNA isolates.

  12. Evolutionary dynamics on networks of selectively neutral genotypes: Effects of topology and sequence stability

    NASA Astrophysics Data System (ADS)

    Aguirre, Jacobo; Buldú, Javier M.; Manrubia, Susanna C.

    2009-12-01

    Networks of selectively neutral genotypes underlie the evolution of populations of replicators in constant environments. Previous theoretical analysis predicted that such populations will evolve toward highly connected regions of the genome space. We first study the evolution of populations of replicators on simple networks and quantify how the transient time to equilibrium depends on the initial distribution of sequences on the neutral network, on the topological properties of the latter, and on the mutation rate. Second, network neutrality is broken through the introduction of an energy for each sequence. This allows us to study the competition between two features (neutrality and energetic stability) relevant for survival and subject to different selective pressures. In cases where the two features are negatively correlated, the population experiences sudden migrations in the genome space for values of the relevant parameters that we calculate. The numerical study of larger networks indicates that the qualitative behavior to be expected in more realistic cases is already seen in representative examples of small networks.

  13. Constraints upon the spectral indices of relic gravitational waves by LIGO S5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y.; Tong, M. L.; Fu, Z. W.

    With LIGO having achieved its design sensitivity and the LIGO S5 strain data being available, constraints on the relic gravitational waves (RGWs) become realistic. The analytical spectrum of RGWs generated during inflation depends sensitively on the initial condition, which is generically described by the index β, the running index α_t, and the tensor-to-scalar ratio r. From the LIGO S5 data of the two cross-correlated detectors, we obtain constraints on the parameters (β, α_t, r). As a main result, we have computed the theoretical signal-to-noise ratio of RGWs for various values of (β, α_t, r), using the cross-correlation for the given pair of LIGO detectors. The constraints from the indirect bound on the energy density of RGWs set by big bang nucleosynthesis and the cosmic microwave background have also been obtained, which turn out to be still more stringent than LIGO S5.

  14. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  15. Biomedical and Human Factors Requirements for a Manned Earth Orbiting Station

    NASA Technical Reports Server (NTRS)

    Benjamin, F.; Helvey, W. M.; Martell, C.; Peters, J.; Rosenthal, G.

    1964-01-01

    This report is the result of a study conducted by Republic Aviation Corporation in conjunction with Spacelabs, Inc., in a team effort in which Republic Aviation Corporation was prime contractor. In order to determine the realistic engineering design requirements associated with the medical and human factors problems of a manned space station, an interdisciplinary team of personnel from the Research and Space Divisions was organized. This team included engineers, physicians, physiologists, psychologists, and physicists. Recognizing that the value of the study is dependent upon medical judgments as well as more quantifiable factors (such as design parameters), a group of highly qualified medical consultants participated in working sessions to determine which medical measurements are required to meet the objectives of the study. In addition, various Life Sciences personnel from NASA (Headquarters, Langley, MSC) participated in monthly review sessions. The organization, team members, consultants, and some of the part-time contributors are shown in Figure 1. This final report embodies contributions from all of these participants.

  16. The Langley thermal protection system test facility: A description including design operating boundaries

    NASA Technical Reports Server (NTRS)

    Klich, G. F.

    1976-01-01

    A description of the Langley thermal protection system test facility is presented. This facility was designed to provide realistic environments and times for testing thermal protection systems proposed for use on high speed vehicles such as the space shuttle. Products from the combustion of methane-air-oxygen mixtures, having a maximum total enthalpy of 10.3 MJ/kg, are used as a test medium. Test panels with maximum dimensions of 61 cm x 91.4 cm are mounted in the side wall of the test region. Static pressures in the test region can range from 0.005 to 0.1 atm and calculated equilibrium temperatures of test panels range from 700 K to 1700 K. Test times can be as long as 1800 sec. Some experimental data obtained while using combustion products of methane-air mixtures are compared with theory, and calibration of the facility is being continued to verify calculated values of parameters which are within the design operating boundaries.

  17. Calculating LOAEL/NOAEL uncertainty factors for wildlife species in ecological risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suedel, B.C.; Clifford, P.A.; Ludwig, D.F.

    1995-12-31

    Terrestrial ecological risk assessments frequently require derivation of NOAELs or toxicity reference values (TRVs) against which to compare exposure estimates. However, much of the available information in the literature consists of LOAELs, not NOAELs. Lacking specific guidance, arbitrary factors of ten are sometimes employed for extrapolating NOAELs from LOAELs. In this study, the scientific literature was searched to obtain chronic and subchronic studies reporting NOAEL and LOAEL data for wildlife and laboratory species. Results to date indicate a mean conversion factor of 4.0 (± 2.61 S.D.), with a minimum of 1.6 and a maximum of 10 for 106 studies across several classes of compounds (i.e., metals, pesticides, volatiles, etc.). These data suggest that an arbitrary conversion factor of 10 is unnecessarily restrictive for extrapolating NOAELs from LOAELs and that a factor of 4-5 would be more realistic for deriving toxicity reference values for wildlife species. Applying less arbitrary and more realistic conversion factors in ecological risk assessments will allow for a more accurate estimate of NOAEL values for assessing risk to wildlife populations.
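
The conversion factor in the study is simply the mean LOAEL/NOAEL ratio over the collected studies. A minimal sketch of that calculation, using made-up (LOAEL, NOAEL) pairs rather than the study's 106 records:

```python
# Illustrative calculation with made-up (LOAEL, NOAEL) pairs in mg/kg/day;
# the study pooled 106 such ratios from the literature.
from statistics import mean

studies = [(10.0, 2.5), (50.0, 10.0), (8.0, 4.0), (30.0, 3.0), (12.0, 6.0)]

# Per-study conversion factor = LOAEL / NOAEL
ratios = [loael / noael for loael, noael in studies]
factor = mean(ratios)           # plays the role of the reported mean of 4.0

# Extrapolating a NOAEL from a newly reported LOAEL with the derived factor
loael_new = 20.0
noael_est = loael_new / factor
```

Dividing by a data-derived factor of ~4-5 instead of an arbitrary 10 yields a less conservative, more realistic TRV, which is the study's point.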

  18. Deriving realistic source boundary conditions for a CFD simulation of concentrations in workroom air.

    PubMed

    Feigley, Charles E; Do, Thanh H; Khan, Jamil; Lee, Emily; Schnaufer, Nicholas D; Salzberg, Deborah C

    2011-05-01

    Computational fluid dynamics (CFD) is used increasingly to simulate the distribution of airborne contaminants in enclosed spaces for exposure assessment and control, but the importance of realistic boundary conditions is often not fully appreciated. In a workroom for manufacturing capacitors, full-shift samples for isoamyl acetate (IAA) were collected for 3 days at 16 locations, and velocities were measured at supply grills and at various points near the source. Then, velocity and concentration fields were simulated by 3-dimensional steady-state CFD using 295K tetrahedral cells, the k-ε turbulence model, standard wall function, and convergence criteria of 10^(-6) for all scalars. Here, we demonstrate the need to represent boundary conditions accurately, especially emission characteristics at the contaminant source, to obtain good agreement between observations and CFD results. Emission rates for each day were determined from six concentrations measured in the near field and one upwind using an IAA mass balance. The emission was initially represented as undiluted IAA vapor, but the concentrations estimated using CFD differed greatly from the measured concentrations. A second set of simulations was performed using the same IAA emission rates but a more realistic representation of the source. This yielded good agreement with measured values. Paying particular attention to the region with the highest worker exposure potential (within 1.3 m of the source center), the air speed and IAA concentrations estimated by CFD were not significantly different from the measured values (P = 0.92 and P = 0.67, respectively). Thus, careful consideration of source boundary conditions greatly improved agreement with the measured values.
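
The emission-rate step can be illustrated with a well-mixed mass balance. This is a hedged sketch, not the study's actual procedure: the single-box steady-state simplification, the airflow value, and the concentrations are all illustrative assumptions.

```python
# Hedged sketch of a steady-state, well-mixed mass balance for backing an
# emission rate out of measured concentrations. The single-box treatment and
# all numbers are illustrative; the study used six near-field concentrations
# and one upwind concentration.

def emission_rate(c_near, c_upwind, airflow):
    """G [mg/min] = Q [m^3/min] * (C_near - C_upwind) [mg/m^3]."""
    return airflow * (c_near - c_upwind)

near_field = [42.0, 38.5, 40.2, 45.1, 39.7, 41.5]  # mg/m^3, hypothetical
c_near = sum(near_field) / len(near_field)          # average the six samples
G = emission_rate(c_near, c_upwind=2.0, airflow=15.0)
```

The resulting G is the source term one would then impose, suitably diluted over a realistic source geometry, as the CFD boundary condition.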

  19. Global sensitivity analysis of a local water balance model predicting evaporation, water yield and drought

    NASA Astrophysics Data System (ADS)

    Speich, Matthias; Zappa, Massimiliano; Lischke, Heike

    2017-04-01

    Evaporation and transpiration affect both catchment water yield and the growing conditions for vegetation. They are driven by climate, but also depend on vegetation, soil and land surface properties. In hydrological and land surface models, these properties may be included as constant parameters, or as state variables. Often, little is known about the effect of these variables on model outputs. In the present study, the effect of surface properties on evaporation was assessed in a global sensitivity analysis. To this end, we developed a simple local water balance model combining state-of-the-art process formulations for evaporation, transpiration and soil water balance. The model is vertically one-dimensional, and the relative simplicity of its process formulations makes it suitable for integration in a spatially distributed model at regional scale. The main model outputs are annual total evaporation (TE, i.e. the sum of transpiration, soil evaporation and interception), and a drought index (DI), which is based on the ratio of actual and potential transpiration. This index represents the growing conditions for forest trees. The sensitivity analysis was conducted in two steps. First, a screening analysis was applied to identify unimportant parameters out of an initial set of 19 parameters. In a second step, a statistical meta-model was applied to a sample of 800 model runs, in which the values of the important parameters were varied. Parameter effects and interactions were analyzed with effects plots. The model was driven with forcing data from ten meteorological stations in Switzerland, representing a wide range of precipitation regimes across a strong temperature gradient. Of the 19 original parameters, eight were identified as important in the screening analysis. Both steps highlighted the importance of Plant Available Water Capacity (AWC) and Leaf Area Index (LAI). However, their effect varies greatly across stations.
For example, while a transition from a sparse to a closed forest canopy has almost no effect on annual TE at warm and dry sites, it increases TE by up to 100 mm/year at cold-humid and warm-humid sites. Further parameters of importance describe infiltration, as well as canopy resistance and its response to environmental variables. This study offers insights for future development of hydrological and ecohydrological models. First, it shows that although local water balance is primarily controlled by climate, the vegetation and soil parameters may have a large impact on the outputs. Second, it indicates that modeling studies should prioritize a realistic parameterization of LAI and AWC, while other parameters may be set to fixed values. Third, it illustrates to what extent parameter effects and interactions depend on local climate.

  20. Towards a Universal Calving Law: Modeling Ice Shelves Using Damage Mechanics

    NASA Astrophysics Data System (ADS)

    Whitcomb, M.; Bassis, J. N.; Price, S. F.; Lipscomb, W. H.

    2017-12-01

    Modeling iceberg calving from ice shelves and ice tongues is a particularly difficult problem in glaciology because of the wide range of observed calving rates. Ice shelves naturally calve large tabular icebergs at infrequent intervals, but may instead calve smaller bergs regularly or disintegrate due to hydrofracturing in warmer conditions. Any complete theory of iceberg calving in ice shelves must be able to generate realistic calving rate values depending on the magnitudes of the external forcings. Here we show that a simple damage evolution law, which represents crevasse distributions as a continuum field, produces reasonable estimates of ice shelf calving rates when added to the Community Ice Sheet Model (CISM). Our damage formulation is based on a linear stability analysis and depends upon the bulk stress and strain rate in the ice shelf, as well as the surface and basal melt rates. The basal melt parameter in our model enhances crevasse growth near the ice shelf terminus, leading to an increased iceberg production rate. This implies that increasing ocean temperatures underneath ice shelves will drive ice shelf retreat, as has been observed in the Amundsen and Bellingshausen Seas. We show that our model predicts broadly correct calving rates for ice tongues ranging in length from 10 km (Erebus) to over 100 km (Drygalski), by matching the computed steady state lengths to observations. In addition, we apply the model to idealized Antarctic ice shelves and show that we can also predict realistic ice shelf extents. Our damage mechanics model provides a promising, computationally efficient way to compute calving fluxes and links ice shelf stability to climate forcing.

  1. Reassessing The Fundamentals New Constraints on the Evolution, Ages and Masses of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Kızıltan, Bülent

    2011-09-01

    The ages and masses of neutron stars (NSs) are two fundamental threads that make pulsars accessible to other sub-disciplines of astronomy and physics. A realistic and accurate determination of these two derived parameters plays an important role in understanding advanced stages of stellar evolution and the physics that governs the relevant processes. Here I summarize new constraints on the ages and masses of NSs from an evolutionary perspective. I show that the observed P-Ṗ demographics are more diverse than what is theoretically predicted for the standard evolutionary channel. In particular, standard recycling followed by dipole spin-down fails to reproduce the population of millisecond pulsars with higher magnetic fields (B > 4 × 10^8 G) at rates deduced from observations. A proper inclusion of constraints arising from binary evolution and mass accretion offers a more realistic insight into the age distribution. By analytically implementing these constraints, I propose a ``modified'' spin-down age (τ̃) for millisecond pulsars that gives estimates closer to the true age. Finally, I independently analyze the peak, skewness and cutoff values of the underlying mass distribution from a comprehensive list of radio pulsars for which secure mass measurements are available. The inferred mass distribution shows clear peaks at 1.35 M_⊙ and 1.50 M_⊙ for NSs in double neutron star (DNS) and neutron star-white dwarf (NS-WD) systems, respectively. I find a mass cutoff at 2 M_⊙ for NSs with WD companions, which establishes a firm lower bound for the maximum mass of NSs.

  2. Dosimetry applications in GATE Monte Carlo toolkit.

    PubMed

    Papadimitroulas, Panagiotis

    2017-09-01

    Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described including: molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, Brachytherapy parameters, and has been compared against various MC codes which are considered as standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE, and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  3. Comparative kinetic analysis on thermal degradation of some cephalosporins using TG and DSC data

    PubMed Central

    2013-01-01

    Background The thermal decomposition of cephalexine, cefadroxil and cefoperazone under non-isothermal conditions was studied using the TG and DSC methods, respectively. In the case of TG, a hyphenated technique including EGA was used. Results The kinetic analysis was performed using the TG and DSC data in air for the first step of the cephalosporins' decomposition at four heating rates. Both the TG and DSC data were processed according to an appropriate strategy with the following kinetic methods: Kissinger-Akahira-Sunose, Friedman, and NPK, in order to obtain realistic kinetic parameters, even if the decomposition process is a complex one. The EGA data offer some valuable indications about a possible decomposition mechanism. The obtained data indicate a rather good agreement between the activation energy values obtained by the different methods, whereas the EGA data and the chemical structures give a possible explanation of the observed differences in thermal stability. A complete kinetic analysis needs a data processing strategy using two or more methods, but the kinetic methods must also be applied to the different types of experimental data (TG and DSC). Conclusion The simultaneous use of DSC and TG data for the kinetic analysis, coupled with evolved gas analysis (EGA), provided a more complete picture of the degradation of the three cephalosporins. It was possible to estimate kinetic parameters by using three different kinetic methods, and this allowed us to compare the Ea values obtained from different experimental data, TG and DSC. The thermal degradation being a complex process, both the differential and integral methods based on the single-step hypothesis are inadequate for obtaining believable kinetic parameters. Only the modified NPK method allowed an objective separation of the influence of temperature and conversion, respectively, on the reaction rate, and at the same time made it possible to ascertain the existence of two simultaneous steps. PMID:23594763

  4. Continental-scale river flow in climate models

    NASA Technical Reports Server (NTRS)

    Miller, James R.; Russell, Gary L.; Caliri, Guilherme

    1994-01-01

    The hydrologic cycle is a major part of the global climate system. There is an atmospheric flux of water from the ocean surface to the continents. The cycle is closed by return flow in rivers. In this paper a river routing model is developed to use with grid box climate models for the whole earth. The routing model needs an algorithm for the river mass flow and a river direction file, which has been compiled for 4 deg x 5 deg and 2 deg x 2.5 deg resolutions. River basins are defined by the direction files. The river flow leaving each grid box depends on river and lake mass, downstream distance, and an effective flow speed that depends on topography. As input the routing model uses monthly land source runoff from a 5-yr simulation of the NASA/GISS atmospheric climate model (Hansen et al.). The land source runoff from the 4 deg x 5 deg resolution model is quartered onto a 2 deg x 2.5 deg grid, and the effect of grid resolution is examined. Monthly flow at the mouth of the world's major rivers is compared with observations, and a global error function for river flow is used to evaluate the routing model and its sensitivity to physical parameters. Three basinwide parameters are introduced: the river length weighted by source runoff, the turnover rate, and the basinwide speed. Although the values of these parameters depend on the resolution at which the rivers are defined, the values should converge as the grid resolution becomes finer. When the routing scheme described here is coupled with a climate model's source runoff, it provides the basis for closing the hydrologic cycle in coupled atmosphere-ocean models by realistically allowing water to return to the ocean at the correct location and with the proper magnitude and timing.
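
The routing rule described above (outflow proportional to river mass and effective speed, inversely proportional to downstream distance) can be sketched on a one-dimensional chain of grid boxes standing in for the direction file. All numbers below are illustrative, not the model's calibrated values.

```python
# Sketch of the routing rule on a 1-D chain of grid boxes (a stand-in for
# the river direction file). Outflow from box i is mass * speed / distance;
# the last box discharges at the river mouth. All numbers are illustrative.

def route(mass, speed, distance, runoff, dt):
    """Advance one time step; returns updated masses and the mouth flow."""
    outflow = [m * s / d for m, s, d in zip(mass, speed, distance)]
    new_mass = list(mass)
    for i in range(len(mass)):
        new_mass[i] += (runoff[i] - outflow[i]) * dt
        if i + 1 < len(mass):              # pass water to the downstream box
            new_mass[i + 1] += outflow[i] * dt
    return new_mass, outflow[-1]

mass = [0.0, 0.0, 0.0]
speed = [0.3, 0.3, 0.5]                    # effective speed from topography
distance = [1.0, 1.0, 1.0]
runoff = [1.0, 0.5, 0.2]                   # land source runoff into each box
for _ in range(2000):
    mass, mouth_flow = route(mass, speed, distance, runoff, dt=0.1)
```

At steady state the flow at the mouth equals the total source runoff over the basin, which is the mass-conservation property that lets such a scheme close the hydrologic cycle in a coupled model.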

  5. A fast Bayesian approach to discrete object detection in astronomical data sets - PowellSnakes I

    NASA Astrophysics Data System (ADS)

    Carvalho, Pedro; Rocha, Graça; Hobson, M. P.

    2009-03-01

    A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction set algorithm to locate rapidly the local maxima in the posterior and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed-up in performance by a factor of 100 as compared to existing Bayesian source extraction methods that use Markov chain Monte Carlo to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of the frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.

  6. Electron percolation in realistic models of carbon nanotube networks

    NASA Astrophysics Data System (ADS)

    Simoneau, Louis-Philippe; Villeneuve, Jérémie; Rochefort, Alain

    2015-09-01

    The influence of penetrable and curved carbon nanotubes (CNTs) on charge percolation in three-dimensional disordered CNT networks has been studied with Monte Carlo simulations. By considering carbon nanotubes as solid objects whose electron-cloud overlap can be controlled, we observed that the structural characteristics of networks containing lower-aspect-ratio CNTs are highly sensitive to the degree of penetration between crossed nanotubes. Following our efficient strategy of displacing CNTs to different positions to create more realistic statistical models, we conclude that the connectivity between objects increases with the hard-core/soft-shell radii ratio. In contrast, the presence of curved CNTs in the random networks leads to an increasing percolation threshold and to a decreasing electrical conductivity at saturation. The waviness of CNTs decreases the effective distance between the nanotube extremities, hence reducing their connectivity and degrading their electrical properties. We present the results of our simulations in terms of the thickness of the CNT network, from which simple structural parameters such as the volume fraction or the carbon nanotube density can be accurately evaluated with our more realistic models.

  7. An Investigation of the Impact of Guessing on Coefficient α and Reliability

    PubMed Central

    2014-01-01

    Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
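
The mechanism can be illustrated by simulating responses under a three-parameter logistic (3PL) IRT model and computing coefficient α with and without a guessing parameter. The item parameters below are invented for this sketch and do not reproduce the study's simulation design.

```python
# Simulate 3PL item responses and compute coefficient alpha with and without
# guessing. Item parameters (a, b), the guessing level, and sample sizes are
# invented for this sketch.
import math, random

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def simulate_alpha(guess, n_people=3000, n_items=20, seed=1):
    rng = random.Random(seed)
    a = [0.8 + 0.06 * j for j in range(n_items)]                  # discrimination
    b = [-1.5 + 3.0 * j / (n_items - 1) for j in range(n_items)]  # difficulty
    data = []
    for _ in range(n_people):
        theta = rng.gauss(0.0, 1.0)
        data.append([1 if rng.random() <
                     guess + (1 - guess) / (1 + math.exp(-a[j] * (theta - b[j])))
                     else 0 for j in range(n_items)])
    # Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance)
    k = n_items
    totals = [sum(row) for row in data]
    item_vars = sum(var([row[j] for row in data]) for j in range(k))
    return k / (k - 1) * (1 - item_vars / var(totals))

alpha_no_guess = simulate_alpha(guess=0.0)
alpha_guess = simulate_alpha(guess=0.25)   # guessing attenuates reliability
```

Because guessing injects trait-independent correct responses, the total-score variance attributable to ability shrinks and α drops, the general negative effect the study quantifies under varying item parameters and test lengths.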

  8. Alternative Approaches to Land Initialization for Seasonal Precipitation and Temperature Forecasts

    NASA Technical Reports Server (NTRS)

    Koster, Randal; Suarez, Max; Liu, Ping; Jambor, Urszula

    2004-01-01

    The seasonal prediction system of the NASA Global Modeling and Assimilation Office is used to generate ensembles of summer forecasts utilizing realistic soil moisture initialization. To derive the realistic land states, we drive offline the system's land model with realistic meteorological forcing over the period 1979-1993 (in cooperation with the Global Land Data Assimilation System project at GSFC) and then extract the state variables' values on the chosen forecast start dates. A parallel series of forecast ensembles is performed with a random (though climatologically consistent) set of land initial conditions; by comparing the two sets of ensembles, we can isolate the impact of land initialization on forecast skill from that of the imposed SSTs. The base initialization experiment is supplemented with several forecast ensembles that use alternative initialization techniques. One ensemble addresses the impact of minimizing climate drift in the system through the scaling of the initial conditions, and another is designed to isolate the importance of the precipitation signal from that of all other signals in the antecedent offline forcing. A third ensemble includes a more realistic initialization of the atmosphere along with the land initialization. The impact of each variation on forecast skill is quantified.

  9. Dispersal of Volcanic Ash on Mars: Ash Grain Shape Analysis

    NASA Astrophysics Data System (ADS)

    Langdalen, Z.; Fagents, S. A.; Fitch, E. P.

    2017-12-01

    Many ash dispersal models use spheres as ash-grain analogs in drag calculations. These simplifications introduce inaccuracies in the treatment of drag coefficients, leading to inaccurate settling velocities and dispersal predictions. Therefore, we are investigating the use of a range of shape parameters, calculated using grain dimensions, to derive a better representation of grain shape and effective grain cross-sectional area. Specifically, our goal is to apply our results to the modeling of ash deposition to investigate the proposed volcanic origin of certain fine-grained deposits on Mars. Therefore, we are documenting the dimensions and shapes of ash grains from terrestrial subplinian to plinian deposits, in eight size divisions from 2 mm to 16 μm, employing a high resolution optical microscope. The optical image capture protocol provides an accurate ash grain outline by taking multiple images at different focus heights prior to combining them into a composite image. Image composite mosaics are then processed through ImageJ, a robust scientific measurement software package, to calculate a range of dimensionless shape parameters. Since ash grains rotate as they fall, drag forces act on a changing cross-sectional area. Therefore, we capture images and calculate shape parameters of each grain positioned in three orthogonal orientations. We find that the difference between maximum and minimum aspect ratios of the three orientations of a given grain best quantifies the degree of elongation of that grain. However, the average aspect ratio calculated for each grain provides a good representation of relative differences among grains. We also find that convexity provides the best representation of surface irregularity. For both shape parameters, natural ash grains display notably different shape parameter values than sphere analogs. 
Therefore, Mars ash dispersal modeling that incorporates shape parameters will provide more realistic predictions of deposit extents because volcanic ash-grain morphologies differ substantially from simplified geometric shapes.
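    The elongation and convexity measures described above can be sketched numerically. The triaxial-ellipsoid idealisation and the sample outlines below are our own simplifications for illustration, not the study's ImageJ pipeline:

```python
import numpy as np

def orientation_aspect_ratios(axes):
    """Aspect ratios (minor/major, <= 1) of the three orthogonal silhouettes
    of a grain idealised as a triaxial ellipsoid with semi-axes a >= b >= c
    (viewed down the c, b, and a axes respectively)."""
    a, b, c = sorted(axes, reverse=True)
    return np.array([b / a, c / a, c / b])

def elongation_index(axes):
    """Max-minus-min orientation aspect ratio, the measure the study found
    best quantifies grain elongation."""
    r = orientation_aspect_ratios(axes)
    return float(r.max() - r.min())

def _convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D outline points."""
    pts = sorted(map(tuple, points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2:
                ax, ay = h[-1][0] - h[-2][0], h[-1][1] - h[-2][1]
                bx, by = p[0] - h[-2][0], p[1] - h[-2][1]
                if ax * by - ay * bx <= 0:
                    h.pop()
                else:
                    break
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return np.array(lower[:-1] + upper[:-1])

def convexity(points):
    """Convexity = hull perimeter / outline perimeter (1 for a convex
    outline, < 1 for an irregular one)."""
    def perim(pts):
        pts = np.asarray(pts, float)
        return float(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum())
    return perim(_convex_hull(points)) / perim(points)
```

    A sphere analog gives an elongation index of 0 and convexity of 1, so any natural grain plots away from the sphere in this parameter space.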

  10. Modeling metapopulation dynamics for single species of seabirds

    USGS Publications Warehouse

    Buckley, P.A.; Downer, R.; McCullough, D.R.; Barrett, R.H.

    1992-01-01

    Seabirds share many characteristics setting them apart from other birds. Importantly, they breed more or less obligatorily in local clusters of colonies that can move regularly from site to site, and they routinely exchange breeders. The properties of such metapopulations have only recently begun to be examined, often with models that are occupancy-based (using only colony presence or absence data) and deterministic (using single, empirically determined values for each of several population biology parameters). Some recent models are now frequency-based (using actual population sizes at each site), as well as stochastic (randomly varying critical parameters between biologically realistic limits), yielding better estimates of the behavior of future populations. Using two such models designed to quantify relative risks of population changes under different future scenarios (RAMAS/stage and RAMAS/space), we have examined probable future population dynamics for three hypothetical seabirds -- an albatross, a cormorant, and a tern. With real parameters and ranges of values we alternatively modelled each species with and without density dependence, as well as with their numbers in a single, large colony, or in many smaller ones, distributed evenly or lognormally. We produced a series of species-typical lines for different population risks over the 50 years we simulated. We call these curves Instantaneous Threat Assessments (ITAs), and their shapes mirror the varying life history characteristics of our three species. We also demonstrated (by a process known as sensitivity analysis) that the most important parameters determining future population fates of all three species were correlation of mean growth rate among colonies; dispersal rate of present and future breeders; subadult survivorship; and the number of subpopulations (=colonies) - in roughly that descending order of importance.
In addition, density dependence was found to markedly alter ITA line shape and position, dramatically in the tern. Finally, we show that for each of our three seabirds, a substantial reduction in the risk of the entire population's going to extinction was provided by a metapopulation (i.e. colonial) breeding structure -- thus comfortably confirming what avian ecologists have long known but about which population modellers are sometimes still unsure.
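    A minimal, hypothetical version of a frequency-based, stochastic metapopulation simulation of the kind described (not RAMAS itself) might look as follows; all demographic parameters are invented for illustration:

```python
import numpy as np

def extinction_risk(rng, n_colonies, n0_total, years=50, reps=200,
                    sigma=0.3, rho=0.5, disperse=0.05, threshold=10.0):
    """Fraction of replicate runs in which total population size falls below
    a quasi-extinction threshold within `years`. Annual growth is lognormal
    (zero-mean log growth rate, sd sigma), correlated rho between colonies,
    and a fixed fraction of breeders disperses evenly among colonies."""
    cov = sigma ** 2 * ((1 - rho) * np.eye(n_colonies)
                        + rho * np.ones((n_colonies, n_colonies)))
    L = np.linalg.cholesky(cov)  # draws correlated colony growth rates
    extinct = 0
    for _ in range(reps):
        n = np.full(n_colonies, n0_total / n_colonies)
        for _ in range(years):
            n = n * np.exp(L @ rng.normal(size=n_colonies))
            pool = disperse * n.sum()
            n = (1 - disperse) * n + pool / n_colonies
            if n.sum() < threshold:
                extinct += 1
                break
    return extinct / reps
```

    Sweeping rho, disperse, or n_colonies in such a sketch is the same kind of sensitivity analysis the abstract describes, with among-colony growth correlation and dispersal as the levers.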

  11. Effects of model definitions and parameter values in finite element modeling of human middle ear mechanics.

    PubMed

    De Greef, Daniel; Pires, Felipe; Dirckx, Joris J J

    2017-02-01

    Despite continuing advances in finite element software, the realistic simulation of middle ear response under acoustic stimulation continues to be challenging. One reason for this is the wide range of possible choices that can be made during the definition of a model. Therefore, an explorative study of the relative influences of some of these choices is potentially very helpful. Three finite element models of the human middle ear were constructed, based on high-resolution micro-computed tomography scans from three different human temporal bones. Interesting variations in modeling definitions and parameter values were selected and their influences on middle ear transmission were evaluated. The models were compared against different experimental validation criteria, both from the literature and from our own measurements. Simulation conditions were restricted to the frequency range 0.1-10 kHz. Modeling the three geometries with the same modeling definitions and parameters produces stapes footplate response curves that exhibit similar shapes, but quantitative differences of 4 dB in the lower frequencies and up to 6 dB around the resonance peaks. The model properties with the largest influences on our model outcomes are the tympanic membrane (TM) damping and stiffness and the cochlear load. Model changes with a small to negligible influence include the isotropy or orthotropy of the TM, the geometry of the connection between the TM and the malleus, the microstructure of the incudostapedial joint, and the length of the tensor tympani tendon. The presented results provide insights into the importance of different features in middle ear finite element modeling. The application of three different individual middle ear geometries in a single study reduces the possibility that the conclusions are strongly affected by geometrical abnormalities. Some modeling variations that were hypothesized to be influential turned out to be of minor importance. 
Furthermore, it could be confirmed that different geometries, simulated using the same parameters and definitions, can produce significantly different responses. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Exploratory modeling of forest disturbance scenarios in central Oregon using computational experiments in GIS

    Treesearch

    Deana D. Pennington

    2007-01-01

    Exploratory modeling is an approach used when process and/or parameter uncertainties are such that modeling attempts at realistic prediction are not appropriate. Exploratory modeling makes use of computational experimentation to test how varying model scenarios drive model outcome. The goal of exploratory modeling is to better understand the system of interest through...

  13. [Mathematical models and epidemiological analysis].

    PubMed

    Gerasimov, A N

    2010-01-01

    The limited use of mathematical simulation in epidemiology is due not only to the difficulty of monitoring the epidemic process and identifying its parameters but also to the application of oversimplified models. It is shown that realistic reproduction of actual morbidity dynamics requires taking into account heterogeneity and finiteness of the population and seasonal character of pathogen transmission mechanism.

  14. An Eight-Parameter Function for Simulating Model Rocket Engine Thrust Curves

    ERIC Educational Resources Information Center

    Dooling, Thomas A.

    2007-01-01

    The toy model rocket is used extensively as an example of a realistic physical system. Teachers from grade school to the university level use them. Many teachers and students write computer programs to investigate rocket physics since the problem involves nonlinear functions related to air resistance and mass loss. This paper describes a nonlinear…

  15. Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia

    NASA Astrophysics Data System (ADS)

    Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G.; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S. H.; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne

    2015-06-01

    Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies.

  16. Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia

    PubMed Central

    Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G.; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S. H.; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne

    2015-01-01

    Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies. PMID:26119831

  17. Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia.

    PubMed

    Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S H; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne

    2015-06-29

    Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies.

  18. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source code. On-chip performance counters effectively resolve this problem by monitoring run-time behavior at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottlenecks for each application. The technique also provides predictive insight into future architectural enhancements and their effect on current codes. The authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI0 (CPI without memory effects) and quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. The results show promise for code characterization and empirical/analytical modeling.
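    The idea of deriving CPI0 from counter-based instruction mixes can be sketched as a weighted average over instruction classes; the classes and per-class costs below are invented for illustration and are not the authors' derived formulas:

```python
def cpi_no_memory(counts, class_cpi):
    """Estimate CPI0 (CPI with memory-stall effects excluded) as the
    count-weighted average of per-class issue costs, using instruction
    counts as read from on-chip performance counters."""
    total = sum(counts.values())
    return sum(counts[k] * class_cpi[k] for k in counts) / total

# Hypothetical counter readings and per-class costs (illustrative only).
counts = {"int": 600_000, "fp": 150_000, "branch": 150_000, "load_store": 100_000}
class_cpi = {"int": 1.0, "fp": 2.0, "branch": 1.5, "load_store": 1.0}
cpi0 = cpi_no_memory(counts, class_cpi)  # 1.225 for this mix
```

    Comparing such an estimate against measured CPI isolates the memory contribution, which is the diagnostic use the paper describes.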

  19. Evolution of a mini-scale biphasic dissolution model: Impact of model parameters on partitioning of dissolved API and modelling of in vivo-relevant kinetics.

    PubMed

    Locher, Kathrin; Borghardt, Jens M; Frank, Kerstin J; Kloft, Charlotte; Wagner, Karl G

    2016-08-01

    Biphasic dissolution models are proposed to have good predictive power for in vivo absorption. The aim of this study was to improve our previously introduced mini-scale dissolution model to mimic in vivo situations more realistically and to increase the robustness of the experimental model. Six dissolved APIs (BCS II) were tested applying the improved mini-scale biphasic dissolution model (miBIdi-pH-II). The influence of experimental model parameters, including various excipients, API concentrations, and the dual paddle and its rotation speed, was investigated. The kinetics in the biphasic model were described applying one- and four-compartment pharmacokinetic (PK) models. The improved biphasic dissolution model was robust with respect to differing APIs and excipient concentrations. The dual paddle guaranteed homogeneous mixing in both phases; the optimal rotation speeds were 25 and 75 rpm for the aqueous and octanol phases, respectively. A one-compartment PK model adequately characterised the data of fully dissolved APIs. A four-compartment PK model best quantified dissolution, precipitation, and partitioning, including undissolved amounts, under realistic pH profiles. The improved dissolution model is a powerful tool for investigating the interplay between dissolution, precipitation and partitioning of various poorly soluble APIs (BCS II). In vivo-relevant PK parameters could be estimated applying the respective PK models. Copyright © 2016 Elsevier B.V. All rights reserved.
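    As a sketch of the one-compartment PK description of partitioning (our simplification, not the paper's fitted model), first-order aqueous-to-octanol transfer of a fully dissolved API gives a closed-form octanol fraction, and the rate constant can be recovered by linearised least squares:

```python
import numpy as np

def octanol_fraction(t, k_p):
    """Fraction of dissolved API partitioned into the octanol phase after
    time t under first-order aqueous-to-octanol transfer at rate k_p."""
    return 1.0 - np.exp(-k_p * np.asarray(t, dtype=float))

def fit_kp(t, frac):
    """Recover k_p by least squares through the origin on the linearised
    form -ln(1 - frac) = k_p * t."""
    t = np.asarray(t, dtype=float)
    y = -np.log(1.0 - np.asarray(frac, dtype=float))
    return float((t @ y) / (t @ t))
```

    With a hypothetical k_p of 0.05 per minute, sampling the octanol phase at a handful of time points and refitting returns the same rate, which is the sense in which in vivo-relevant PK parameters are estimated from the biphasic profile.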

  20. Degrees of reality: airway anatomy of high-fidelity human patient simulators and airway trainers.

    PubMed

    Schebesta, Karl; Hüpfl, Michael; Rössler, Bernhard; Ringl, Helmut; Müller, Michael P; Kimberger, Oliver

    2012-06-01

    Human patient simulators and airway training manikins are widely used to train airway management skills to medical professionals. Furthermore, these patient simulators are employed as standardized "patients" to evaluate airway devices. However, little is known about how realistic these patient simulators and airway-training manikins really are. This trial aimed to evaluate the upper airway anatomy of four high-fidelity patient simulators and two airway trainers in comparison with actual patients by means of radiographic measurements. The volume of the pharyngeal airspace was the primary outcome parameter. Computed tomography scans of 20 adult trauma patients without head or neck injuries were compared with computed tomography scans of four high-fidelity patient simulators and two airway trainers. By using 14 predefined distances, two cross-sectional areas and three volume parameters of the upper airway, the manikins' similarity to a human patient was assessed. The pharyngeal airspace of all manikins differed significantly from the patients' pharyngeal airspace. The HPS Human Patient Simulator (METI®, Sarasota, FL) was the most realistic high-fidelity patient simulator (6/19 [32%] of all parameters were within the 95% CI of human airway measurements). The airway anatomy of four high-fidelity patient simulators and two airway trainers does not reflect the upper airway anatomy of actual patients. This finding may impact airway training and confound comparative airway device studies.
