Sample records for key parameter values

  1. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

    Mobile payment is becoming increasingly popular; however, traditional public-key encryption algorithms place high demands on hardware and are therefore poorly suited to mobile terminals with limited computing resources. In addition, these algorithms are not resistant to quantum computing. This paper investigates the public-key algorithm NTRU with respect to quantum computation by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increasing the value of the parameter q; second, adding an authentication condition during the signature phase that verifies the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of private-key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids the propagation of invalid signatures in the network, although the scheme imposes certain restrictions on parameter selection.

  2. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

    This paper presents a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which consists of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided LIB failures into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, using actual in-orbit telemetry data for the key parameters of LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of the LIBs from the MSET state estimation, and from these residual values we detected anomalous states with SPRT-based anomaly detection. Lastly, we conducted an example application of AMM to LIBs and validated its feasibility and effectiveness by comparing its results with those of a threshold detection method (TDM). PMID:24587703
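
    The SPRT stage described above can be illustrated with a minimal sketch of Wald's sequential probability ratio test applied to Gaussian residuals; the means, noise level, and error rates below are hypothetical placeholders, not values from the paper.

```python
import math

def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT on Gaussian residuals: decide H0 (healthy, mean mu0)
    versus H1 (anomalous, mean mu1) as samples arrive sequentially."""
    upper = math.log((1 - beta) / alpha)   # accept H1 (anomaly) above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (healthy) below this
    llr = 0.0
    for i, r in enumerate(residuals, 1):
        # log-likelihood-ratio increment for one Gaussian sample
        llr += (mu1 - mu0) * (r - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return "anomaly", i
        if llr <= lower:
            return "healthy", i
    return "undecided", len(residuals)
```

    A residual stream near mu0 is declared healthy after a few samples, while a stream near mu1 triggers an anomaly decision, with the number of samples needed controlled by alpha and beta.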

  3. Intra-rater repeatability of gait parameters in healthy adults during self-paced treadmill-based virtual reality walking.

    PubMed

    Al-Amri, Mohammad; Al Balushi, Hilal; Mashabi, Abdulrhman

    2017-12-01

    Self-paced treadmill walking is becoming increasingly popular for gait assessment and re-education in both research and clinical settings. Its day-to-day repeatability is yet to be established. This study scrutinised the test-retest repeatability of key gait parameters obtained from the Gait Real-time Analysis Interactive Lab (GRAIL) system. Twenty-three able-bodied male adults (age: 34.56 ± 5.12 years) completed two separate gait assessments on the GRAIL system, separated by 5 ± 3 days. Key gait kinematic, kinetic, and spatial-temporal parameters were analysed. Intraclass Correlation Coefficients (ICC), the Standard Error of Measurement (SEM), the Minimum Detectable Change (MDC), and the 95% limits of agreement were calculated to evaluate the repeatability of these gait parameters. Day-to-day agreement was excellent (ICCs > 0.87) for spatial-temporal parameters, with low MDC and SEM values (<0.153 and <0.055, respectively). Repeatability was higher for joint kinetic than kinematic parameters, as reflected in small values of SEM (<0.13 Nm/kg and <3.4°) and MDC (<0.335 Nm/kg and <9.44°). The obtained values of all parameters fell within the 95% limits of agreement. Our findings demonstrate the repeatability of the GRAIL system available in our laboratory. The SEM and MDC values can assist researchers and clinicians in distinguishing 'real' changes in gait performance over time.
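
    The SEM and MDC statistics quoted above follow from the ICC by the standard reliability formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM, which can be sketched as:

```python
import math

def sem(sd, icc):
    # Standard error of measurement from the between-subject SD and the ICC
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    # Minimum detectable change at the 95% confidence level
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)
```

    With an ICC of 0.91 and an SD of 1.0, for example, SEM is 0.30 and MDC95 about 0.83, in the same units as the measurement.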

  4. State and Parameter Estimation for a Coupled Ocean-Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El Niño-Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
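
    The augmented-state EKF used for joint state and parameter estimation can be sketched on a toy scalar system; this illustrates the technique only, not the coupled ENSO model. The unknown dynamics coefficient a is appended to the state vector as a random walk and estimated alongside the state x.

```python
import numpy as np

def ekf_joint(ys, q=1e-3, r=1e-4):
    """Joint state/parameter EKF for the toy system
    x[k+1] = a * x[k],  y[k] = x[k] + noise,
    with the unknown coefficient a appended to the state vector."""
    z = np.array([ys[0], 0.5])               # initial guess for [x, a]
    P = np.eye(2)                            # state covariance
    Q = np.diag([q, q])                      # process noise (lets a adapt)
    H = np.array([[1.0, 0.0]])               # we observe x only
    for y in ys[1:]:
        x, a = z
        z = np.array([a * x, a])             # predict (a follows a random walk)
        F = np.array([[a, x], [0.0, 1.0]])   # Jacobian of the augmented dynamics
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                  # innovation covariance
        K = P @ H.T / S                      # Kalman gain
        z = z + (K * (y - z[0])).ravel()     # update with the innovation
        P = (np.eye(2) - K @ H) @ P
    return z                                 # final [state, parameter] estimate
```

    Running the filter on observations generated with a = 0.9 recovers the parameter from an initial guess of 0.5, the same idea as recovering μ and δs from SST data above.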

  5. User's design handbook for a Standardized Control Module (SCM) for DC to DC Converters, volume 2

    NASA Technical Reports Server (NTRS)

    Lee, F. C.

    1980-01-01

    A unified design procedure is presented for selecting the key SCM control parameters for an arbitrarily given power stage configuration and set of parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt. All key results and performance indices for buck, boost, and buck/boost switching regulators that are relevant to SCM design considerations are included to facilitate frequent reference.

  6. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling.

    PubMed

    Sumner, T; Shephard, E; Bogle, I D L

    2012-09-07

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters, and their interactions, that drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes, and a number of key features of the system are identified.

  7. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.

  8. Calculations of key magnetospheric parameters using the isotropic and anisotropic SPSU global MHD code

    NASA Astrophysics Data System (ADS)

    Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor

    2017-04-01

    As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe, the plasma sheet thermal pressure, the cross-polar-cap electric potential drop, and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code can solve either the isotropic or the anisotropic MHD equations. In the anisotropic version, we use modified double-adiabatic equations in which T⊥/T∥ (the ratio of perpendicular to parallel thermal pressure) is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The validation results for the SPSU-16 code agree well with previously published results from other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, while others differ.

  9. The contribution of NOAA/CMDL ground-based measurements to understanding long-term stratospheric changes

    NASA Astrophysics Data System (ADS)

    Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.

    2005-05-01


  10. Evaluation of hydrogen embrittlement and temper embrittlement by key curve method in instrumented Charpy test

    NASA Astrophysics Data System (ADS)

    Ohtsuka, N.; Shindo, Y.; Makita, A.

    2010-06-01

    Instrumented Charpy tests were conducted on small-sized specimens of 2 1/4Cr-1Mo steel. In the tests, the single-specimen key curve method was applied to determine the fracture toughness for the initiation of crack extension in the hydrogen-free condition, KIC, and for hydrogen embrittlement cracking, KIH. The tearing modulus, a parameter for resistance to crack extension, was also determined. The role of these parameters is discussed at an upper-shelf temperature and at a transition temperature. The key curve method combined with the instrumented Charpy test was thus shown to be usable for evaluating not only temper embrittlement but also hydrogen embrittlement.

  11. Engineering trade studies for a quantum key distribution system over a 30 km free-space maritime channel.

    PubMed

    Gariano, John; Neifeld, Mark; Djordjevic, Ivan

    2017-01-20

    Here, we present engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used; Eve performs the intercept-resend and photon-number-splitting attacks; Eve's location is known in advance; and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate at constant brightness. Additionally, we optimize the brightness of the source for each parameter to study the improvement in the secure key rate.

  12. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to spike initiation in neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring adaptation in the Ermentrout neuron model for estimation. However, the traditional particle swarm optimization (PSO) algorithm easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weight, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the model rebuilt with the estimated parameters shows that estimating only a few important ion channel parameters can establish the model well, and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the proposed algorithm avoids local optima and quickly converges to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
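
    The chaotic inertia-weight idea can be sketched as follows; the concave schedule, the 80/20 mixing rule, and the sphere test function are illustrative assumptions, since the abstract does not give the paper's exact formula.

```python
import random

def chaotic_pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """PSO whose inertia weight mixes a concave decreasing schedule with
    a logistic chaotic map z <- 4z(1-z), helping escape local optima."""
    rng = random.Random(seed)
    vmax = 0.2 * (hi - lo)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    z = 0.37                                     # logistic-map state
    for t in range(iters):
        z = 4.0 * z * (1.0 - z)                  # chaotic perturbation
        w = 0.4 + 0.5 * (1.0 - t / iters) ** 2   # concave schedule, 0.9 -> 0.4
        w = 0.8 * w + 0.2 * z                    # mix schedule with chaos
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = (w * vel[i][d]
                     + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                     + 1.5 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))          # velocity clamp
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            fv = f(pos[i])
            if fv < pval[i]:
                pbest[i], pval[i] = pos[i][:], fv
                if fv < gval:
                    gbest, gval = pos[i][:], fv
    return gbest, gval
```

    On a two-dimensional sphere function the swarm converges to near the origin; in the paper's setting f would instead score the mismatch between simulated and target firing trajectories.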

  13. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with an in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools developed for reliability and risk analysis may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  14. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    This work demonstrates the relevance of optimizing working-process parameters in air conditioning systems. The research uses the simulation modeling method. The article considers the optimization criteria, analyses the target functions, and discusses the key factors of technical and economic optimization. In the multi-objective optimization of the system, the optimal solution is found by minimizing a two-objective vector, constructed by the Pareto method of linear weighted compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems in regions near the optimal solutions deviate considerably from the minimum values, and these deviations grow significantly as the technical parameters move away from the values that are optimal for both capital investment and operating costs. Producing and operating conditioners with parameters that deviate considerably from the optimal values will increase material and power costs. The research makes it possible to establish the borders of the region of optimal values for the technical and economic parameters in air conditioning system design.

  15. A Microwave Radiometric Method to Obtain the Average Path Profile of Atmospheric Temperature and Humidity Structure Parameters and Its Application to Optical Propagation System Assessment

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.; Vyhnalek, Brian E.

    2015-01-01

    The values of the key atmospheric propagation parameters Ct2, Cq2, and Ctq are highly dependent upon the vertical height within the atmosphere, thus making it necessary to specify profiles of these values along the atmospheric propagation path. The remote sensing method suggested and described in this work makes use of a rapidly integrating microwave profiling radiometer to capture profiles of temperature and humidity through the atmosphere. The integration times of currently available profiling radiometers are approaching the temporal intervals over which one can make meaningful assessments of these key atmospheric parameters. Since these parameters are fundamental to all propagation conditions, they can be used to obtain Cn2 profiles for any frequency, including those for an optical propagation path. In this case the important performance parameters of the prevailing isoplanatic angle and Greenwood frequency can be obtained. The integration times are such that Kolmogorov turbulence theory and the Taylor frozen-flow hypothesis must be transcended. Appropriate modifications to these classical approaches are derived from first principles, and an expression for the structure functions is obtained. The theory is then applied to an experimental scenario and shows very good results.

  16. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

    This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, material, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are correlated with model predictions. Results show significant improvements in model predictions after the parameters are updated; however, computed probability values indicate low confidence in the updated values and/or model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.

  17. Optimization of process parameters for RF sputter deposition of tin-nitride thin-films

    NASA Astrophysics Data System (ADS)

    Jangid, Teena; Rao, G. Mohan

    2018-05-01

    The radio-frequency magnetron sputtering technique was employed to deposit tin-nitride thin films on Si and glass substrates under different process parameters. The influence of varying parameters such as substrate temperature, target-substrate distance, and RF power is studied in detail. X-ray diffraction is used as the key technique for analyzing changes in the stoichiometric and structural properties of the deposited films. Depending on the combination of deposition parameters, crystalline as well as amorphous films were obtained. Pure tin-nitride thin films were deposited at 15 W RF power and 600 °C substrate temperature, with the target-substrate distance fixed at 10 cm. The band gap of 1.6 eV calculated for the film deposited at the optimum process conditions matches well with reported values.

  18. A numerical testbed for hypotheses of extraterrestrial life and intelligence

    NASA Astrophysics Data System (ADS)

    Forgan, D. H.

    2009-04-01

    The search for extraterrestrial intelligence (SETI) has been heavily influenced by solutions to the Drake Equation, which returns an integer value for the number of communicating civilizations resident in the Milky Way, and by the Fermi Paradox, glibly stated as: ‘If they are there, where are they?’. Both rely on using average values of key parameters, such as the mean signal lifetime of a communicating civilization. A more accurate answer must take into account the distribution of stellar, planetary and biological attributes in the galaxy, as well as the stochastic nature of evolution itself. This paper outlines a method of Monte Carlo realization that does this, and hence allows an estimation of the distribution of key parameters in SETI, as well as allowing a quantification of their errors (and the level of ignorance therein). Furthermore, it provides a means for competing theories of life and intelligence to be compared quantitatively.
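
    The Monte Carlo realization approach described above can be sketched as follows: instead of plugging single average values into the Drake equation, each factor is drawn from a distribution, yielding a distribution for N. All of the distributions below are illustrative placeholders, not the paper's choices.

```python
import random

def drake_mc(trials=10000, seed=42):
    """Monte Carlo Drake equation: sample each factor from a broad
    (purely illustrative) distribution, so the output is a distribution
    of N rather than a single integer."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        r_star = rng.uniform(1.0, 3.0)       # star formation rate per year
        f_p = rng.uniform(0.2, 1.0)          # fraction of stars with planets
        n_e = rng.uniform(0.5, 2.0)          # habitable planets per system
        f_l = 10 ** rng.uniform(-3.0, 0.0)   # life arises (log-uniform)
        f_i = 10 ** rng.uniform(-3.0, 0.0)   # intelligence (log-uniform)
        f_c = rng.uniform(0.1, 0.5)          # fraction that communicate
        big_l = 10 ** rng.uniform(2.0, 4.0)  # signal lifetime in years
        samples.append(r_star * f_p * n_e * f_l * f_i * f_c * big_l)
    return samples
```

    The spread of the resulting samples, often several orders of magnitude, is exactly the quantification of parameter ignorance that the abstract argues a single-valued Drake estimate conceals.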

  19. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is one of the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designs a vehicle-routing optimization model based on ACO. A vehicle-routing optimization simulation system was then developed in C++, and sensitivity analyses, estimation, and improvement of the three key parameters of ACO were carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve the planning and optimization of the VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
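
    A minimal ACO sketch, reduced to a symmetric travelling-salesman tour for brevity rather than a full VRP with capacities, exposes the three parameters usually treated as key in such sensitivity studies: the pheromone weight alpha, the heuristic weight beta, and the evaporation rate rho. That reading of "three key parameters" is an assumption, as the abstract does not name them.

```python
import math
import random

def aco_tsp(cities, alpha=1.0, beta=3.0, rho=0.5, ants=20, iters=100, seed=0):
    """Basic ant colony optimisation on a symmetric TSP over 2D points."""
    rng = random.Random(seed)
    n = len(cities)
    dist = [[math.dist(a, b) for b in cities] for a in cities]
    tau = [[1.0] * n for _ in range(n)]              # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            start = rng.randrange(n)
            tour = [start]
            unvisited = set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # transition rule: pheromone^alpha * (1/distance)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        for row in tau:                              # evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        for length, tour in tours:                   # pheromone deposit
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

    On a unit square the colony quickly settles on the perimeter tour of length 4; varying alpha, beta, and rho shifts the exploration/exploitation balance, which is what such sensitivity analyses measure.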

  20. Assessment of chronic kidney disease using skin texture as a key parameter: for South Indian population.

    PubMed

    Udhayarasu, Madhanlal; Ramakrishnan, Kalpana; Periasamy, Soundararajan

    2017-12-01

    Periodic monitoring of renal function, specifically for subjects with a history of diabetes or hypertension, would prevent them from entering a chronic kidney disease (CKD) condition. The recent increase in CKD cases, possibly due to food habits or lack of physical exercise, necessitates a rapid kidney-function monitoring system. At present, renal function is determined by evaluating the glomerular filtration rate (GFR), which depends mainly on the serum creatinine value, demographic parameters, and an ethnicity value. The work here attempts to develop an ethnicity parameter based on skin texture for each individual. When this value is used in GFR computation, the results agree well with GFR obtained through the standard Modification of Diet in Renal Disease (MDRD) and CKD Epidemiology Collaboration (CKD-EPI) equations. Once the correlation between CKD and skin texture is established, a classification tool using an artificial neural network is built to categorise CKD level based on demographic values and the parameter obtained from skin texture (without using creatinine). When tested, this network gives results almost on par with a network trained with demographic and creatinine values. The results of this Letter demonstrate the possibility of non-invasively determining kidney function, and hence of making a device that could readily assess kidney function even at home.
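
    For reference, the four-variable MDRD study equation mentioned above can be sketched as below; the ethnic_factor argument marks the multiplicative slot (1.212 for African American subjects in the original equation) that a skin-texture-derived parameter such as the abstract's would individualise.

```python
def egfr_mdrd(scr_mg_dl, age, female=False, ethnic_factor=1.0):
    """Four-variable MDRD study equation (IDMS-traceable '175' form).
    scr_mg_dl: serum creatinine in mg/dL; age in years.
    ethnic_factor: multiplicative ethnicity coefficient (1.0 by default)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742          # standard female coefficient
    return gfr * ethnic_factor
```

    For a 40-year-old male with serum creatinine 1.0 mg/dL the equation gives roughly 83 mL/min/1.73 m²; swapping ethnic_factor for an individualized, skin-texture-derived value is the substitution the abstract proposes.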

  21. A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.

    PubMed

    Faya, Paul; Stamey, James D; Seaman, John W

    2017-01-01

    For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
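
    The quantities in this record relate through standard sterilization formulas: the physical lethality F0 integrates 10^((T − 121.1 °C)/z) over the cycle with the conventional z = 10 °C, and dividing a delivered lethality by a microorganism's D value gives the spore log reductions. A minimal sketch:

```python
def f0(temps_c, dt_min, t_ref=121.1, z=10.0):
    """Physical F0 of a cycle: equivalent minutes at 121.1 degrees C,
    computed from a temperature profile sampled every dt_min minutes."""
    return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)

def log_reductions(f, d_value):
    """Spore log reductions delivered by lethality f (minutes) for a
    microorganism with the given D value (minutes per log reduction)."""
    return f / d_value
```

    Holding 121.1 °C for 15 minutes yields F0 = 15, and with a D value of 1.5 minutes that lethality corresponds to 8 log reductions; the paper's contribution is replacing such point-valued D and z inputs with probability distributions.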

  22. The statistical fluctuation study of quantum key distribution in means of uncertainty principle

    NASA Astrophysics Data System (ADS)

    Liu, Dunwei; An, Huiyao; Zhang, Xiaoyu; Shi, Xuemei

    2018-03-01

    Laser defects in emitting single photons, photon signal attenuation, and error propagation have long caused serious difficulties in practical long-distance quantum key distribution (QKD) experiments. In this paper, we study the uncertainty principle in metrology and use this tool to analyze the statistical fluctuation of the number of received single photons, the yield of single photons, and the quantum bit error rate (QBER). We then calculate the error between the measured and true value of each parameter and account for the propagation of error among all the measured values. We rephrase the Gottesman-Lo-Lutkenhaus-Preskill (GLLP) formula in consideration of those parameters and generate the QKD simulation result. In this study, the secure distribution distance increases with the coding photon length. When the coding photon length is N = 10^{11}, the secure distribution distance reaches almost 118 km, a more conservative lower bound on the secure transmission distance than the 127 km obtained without the uncertainty principle. Our study is thus in line with established theory while being more realistic.

  23. Normative values for the spine shape parameters using 3D standing analysis from a database of 268 asymptomatic Caucasian and Japanese subjects.

    PubMed

    Le Huec, Jean Charles; Hasegawa, Kazuhiro

    2016-11-01

    Sagittal balance analysis has gained importance, and measurement of the radiographic spinopelvic parameters is now a routine part of many spine surgery interventions. Indeed, surgical correction of lumbar lordosis must be proportional to the pelvic incidence (PI). The compensatory mechanisms [pelvic retroversion with increased pelvic tilt (PT) and decreased thoracic kyphosis] spontaneously reverse after successful surgery. This study is the first to provide 3D standing spinopelvic reference values from a large database of Caucasian (n = 137) and Japanese (n = 131) asymptomatic subjects. The key spinopelvic parameters [e.g., PI, PT, sacral slope (SS)] were comparable in the Japanese and Caucasian populations. Three equations, predicting lumbar lordosis, PT, and SS from PI, were calculated by linear regression modeling and were comparable in both populations: lumbar lordosis (L1-S1) = 0.54*PI + 27.6, PT = 0.44*PI - 11.4, and SS = 0.54*PI + 11.9. We showed that the key spinopelvic parameters obtained from a large database of healthy subjects were comparable for Caucasian and Japanese populations. The normative values provided in this study and the equations obtained from linear regression modeling could help pre-operatively estimate the lumbar lordosis restoration and could also be used as guidelines for spinopelvic sagittal balance.
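
    The three regression equations reported in this record can be wrapped as simple calculators; note that the fitted PT and SS lines are nearly consistent with the anatomical identity PI = PT + SS.

```python
def lumbar_lordosis(pi):
    # L1-S1 lumbar lordosis predicted from pelvic incidence (degrees)
    return 0.54 * pi + 27.6

def pelvic_tilt(pi):
    # Pelvic tilt predicted from pelvic incidence (degrees)
    return 0.44 * pi - 11.4

def sacral_slope(pi):
    # Sacral slope predicted from pelvic incidence (degrees)
    return 0.54 * pi + 11.9
```

    For PI = 50°, the predicted PT (10.6°) and SS (38.9°) sum to 49.5°, within half a degree of PI, a useful sanity check on the fitted coefficients.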

  4. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.

  5. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements with the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in the estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
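
    The baseline the paper builds on, plain (unweighted) Wiener estimation, is a linear estimator W = C_xy C_yy^{-1} mapping color measurements to tissue parameters, learned from training pairs. The sketch below uses an entirely synthetic linear forward model; the paper's weighting scheme and sequential ordering are not reproduced, and all dimensions and data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 4 tissue parameters, 6 color channels.
n_train, n_params, n_channels = 500, 4, 6
X = rng.normal(size=(n_train, n_params))        # tissue-parameter vectors
A = rng.normal(size=(n_channels, n_params))     # illustrative forward model
Y = X @ A.T + 0.01 * rng.normal(size=(n_train, n_channels))  # measurements

# Wiener matrix from sample cross- and auto-covariances: W = C_xy C_yy^{-1}
C_xy = X.T @ Y / n_train
C_yy = Y.T @ Y / n_train
W = C_xy @ np.linalg.inv(C_yy)

# Estimate parameters for a new (noiseless) measurement.
x_true = rng.normal(size=n_params)
y_new = A @ x_true
x_hat = W @ y_new
```

With a well-conditioned forward model and low noise, `x_hat` closely recovers `x_true`; the weighted variant in the paper modifies which channels dominate this estimate.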

  6. Errors in Air Permeability Rationing as Key Sources of Construction Quality Risk Assessment

    NASA Astrophysics Data System (ADS)

    Popov, A. A.; Nitievski, A. A.; Ivanov, R. N.

    2018-04-01

    The article considers different approaches to specifying normative values for the air permeability parameters n50 and q50. Examples of erroneous conclusions about the state of a building are presented, together with ways to obtain reliable results. Comparative air permeability data are given for example buildings of different configurations and with different values of the compactness factor.

  7. Unifying mechanical and thermodynamic descriptions across the thioredoxin protein family.

    PubMed

    Mottonen, James M; Xu, Minli; Jacobs, Donald J; Livesay, Dennis R

    2009-05-15

    We compare various predicted mechanical and thermodynamic properties of nine oxidized thioredoxins (TRX) using a Distance Constraint Model (DCM). The DCM is based on a nonadditive free energy decomposition scheme, where entropic contributions are determined from rigidity and flexibility of structure based on distance constraints. We perform averages over an ensemble of constraint topologies to calculate several thermodynamic and mechanical response functions that together yield quantitative stability/flexibility relationships (QSFR). Applied to the TRX protein family, QSFR metrics display a rich variety of similarities and differences. In particular, backbone flexibility is well conserved across the family, whereas cooperativity correlation describing mechanical and thermodynamic couplings between residue pairs exhibits distinctive features that readily stand out. The diversity in predicted QSFR metrics that describe cooperativity correlation between pairs of residues is largely explained by a global flexibility order parameter describing the amount of intrinsic flexibility within the protein. A free energy landscape is calculated as a function of the flexibility order parameter, and key values are determined where the native, transition, and unfolded states are located. Another key value identifies a mechanical transition where the global nature of the protein changes from flexible to rigid. The key values of the flexibility order parameter help characterize how mechanical and thermodynamic response is linked. Variation in QSFR metrics and key characteristics of global flexibility are related to the native state X-ray crystal structure primarily through the hydrogen bond network. Furthermore, comparison of three TRX redox pairs reveals differences in thermodynamic response (i.e., relative melting point) and mechanical properties (i.e., backbone flexibility and cooperativity correlation) that are consistent with experimental data on thermal stabilities and NMR dynamical profiles. The results taken together demonstrate that small-scale structural variations are amplified into discernible global differences by propagating mechanical couplings through the H-bond network.

  8. The selection criteria elements of X-ray optics system

    NASA Astrophysics Data System (ADS)

    Plotnikova, I. V.; Chicherina, N. V.; Bays, S. S.; Bildanov, R. G.; Stary, O.

    2018-01-01

    When designing new modifications of X-ray tomography systems, it is difficult to choose the elements of the X-ray optical system correctly. At present this problem is solved empirically, by selecting values of the corresponding parameters, such as the X-ray tube voltage, taking into account the thickness and type of the material under study. To reduce design time and effort, it is necessary to formulate selection criteria and to determine the key parameters and characteristics of the elements. The article considers the two main elements of an X-ray optical system: the X-ray tube and the X-ray detector. Selection criteria for the elements, their key characteristics, the main parameter dependences, quality indicators, and recommendations on choosing the elements of X-ray systems are obtained.

  9. Models based on value and probability in health improve shared decision making.

    PubMed

    Ortendahl, Monica

    2008-10-01

    Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated values in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, which usually pertain in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially for every step in the decision-making process. Introducing decision-analytic modelling gives a more complete picture of variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
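
    The subjective-expected-utility rule the abstract refers to weights each outcome's estimated value by its estimated probability. The sketch below is a minimal illustration; the clinical scenario, probabilities, and values are purely invented numbers, not data from the article.

```python
def subjective_expected_utility(outcomes):
    """outcomes: list of (estimated_probability, estimated_value) pairs."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in outcomes)

# Hypothetical choice: treat vs. watchful waiting, with values on a 0-100
# scale elicited jointly from doctor and patient.
treat = [(0.7, 90.0), (0.3, 40.0)]   # likely benefit vs. side effects
wait  = [(0.5, 80.0), (0.5, 50.0)]   # spontaneous recovery vs. progression

seu_treat = subjective_expected_utility(treat)  # 0.7*90 + 0.3*40 = 75
seu_wait  = subjective_expected_utility(wait)   # 0.5*80 + 0.5*50 = 65
```

Revising either the probabilities or the values, as the article recommends when new data arrive, can flip which option has the higher expected utility.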

  10. Scheduling on the basis of the research of dependences among the construction process parameters

    NASA Astrophysics Data System (ADS)

    Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga

    2017-10-01

    The article investigates the dependences among construction process parameters: the average integrated qualification level of the shift, the number of workers per shift, and the average daily amount of completed work, analyzed on the basis of correlation coefficients. The basic data for this research were collected during the construction of two standard objects A and B (monolithic houses) over four months of construction (October, November, December, January). A Cobb-Douglas production function yielded correlation coefficients close to 1; the function is simple to use and well suited to describing the considered dependences. A development function describing the relationship among the considered construction process parameters is derived. It makes it possible to select the optimal quantitative and qualification structure of the brigade link for the next period of work, given a preset amount of work. A function of the optimized amounts of work, reflecting the interrelation of the key construction process parameters, is also developed; its values should be used as the average standard for scheduling the storming (peak) periods of construction.
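
    A Cobb-Douglas function Q = A * x1^a * x2^b is conventionally calibrated by ordinary least squares in log space. The sketch below shows that standard procedure on synthetic data standing in for the article's shift qualification and crew size; the variable names, coefficients, and data are assumptions, not the authors' fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
qual = rng.uniform(2.0, 5.0, n)        # average shift qualification (synthetic)
crew = rng.uniform(5.0, 20.0, n)       # workers per shift (synthetic)
A_true, a_true, b_true = 3.0, 0.4, 0.7
output = (A_true * qual**a_true * crew**b_true
          * np.exp(0.02 * rng.normal(size=n)))   # daily completed work + noise

# log Q = log A + a*log(qual) + b*log(crew)  ->  ordinary least squares
X = np.column_stack([np.ones(n), np.log(qual), np.log(crew)])
coef, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
A_hat, a_hat, b_hat = np.exp(coef[0]), coef[1], coef[2]
```

The fitted exponents recover the generating values closely, which is why the log-linear form is attractive for the scheduling use the article describes.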

  11. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-07-15

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10^-7).

  12. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2008-07-01

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10^-7).

  13. Enhancing performance of next generation FSO communication systems using soft computing-based predictions.

    PubMed

    Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori

    2006-06-12

    The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce the link availability and may introduce burst errors, thus degrading the performance of the system. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the accuracy with which the antenna tracks the data beams, which is particularly important during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.

  14. Approaches to highly parameterized inversion: Pilot-point theory, guidelines, and research directions

    USGS Publications Warehouse

    Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.

    2011-01-01

    Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.
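
    The core pilot-point mechanism, estimating parameters only at a few locations and interpolating them onto every model cell, can be sketched compactly. Inverse-distance weighting is used below for brevity; PEST practice usually favors kriging, and this simple scheme is not orthogonal in the sense the report recommends. All coordinates and values are illustrative.

```python
import numpy as np

def interpolate_to_cells(pilot_xy, pilot_values, cell_xy, power=2.0):
    """Spread pilot-point values onto cell centers by inverse-distance weights."""
    d = np.linalg.norm(cell_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero at pilots
    w = 1.0 / d**power
    w /= w.sum(axis=1, keepdims=True)        # normalize weights per cell
    return w @ pilot_values

pilots = np.array([[0.0, 0.0], [10.0, 0.0]])
values = np.array([1.0, 3.0])                # e.g. log-hydraulic conductivity
cells = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
field = interpolate_to_cells(pilots, values, cells)
```

A cell sitting on a pilot point reproduces that pilot's value, and the midpoint cell averages the two, so two estimated numbers parameterize the whole field at far lower cost than one parameter per cell.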

  15. Simulation tests of the optimization method of Hopfield and Tank using neural networks

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    1988-01-01

    The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.

  16. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance the security. The algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.

  17. A novel chaos-based image encryption algorithm using DNA sequence operations

    NASA Astrophysics Data System (ADS)

    Chai, Xiuli; Chen, Yiran; Broyde, Lucie

    2017-01-01

    An image encryption algorithm based on a chaotic system and deoxyribonucleic acid (DNA) sequence operations is proposed in this paper. First, the plain image is encoded into a DNA matrix, and then a new wave-based permutation scheme is performed on it. The chaotic sequences produced by the 2D Logistic chaotic map are employed for row circular permutation (RCP) and column circular permutation (CCP). Initial values and parameters of the chaotic system are calculated from the SHA-256 hash of the plain image and the given values. Then, a row-by-row image diffusion method at the DNA level is applied. A key matrix generated from the chaotic map is used to fuse the confused DNA matrix; the initial values and system parameters of the chaotic system are also renewed by the Hamming distance of the plain image. Finally, after decoding the diffused DNA matrix, we obtain the cipher image. The DNA encoding/decoding rules of the plain image and the key matrix are determined by the plain image. Experimental results and security analyses both confirm that the proposed algorithm not only achieves an excellent encryption result but also resists various typical attacks.
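
    The DNA-encoding step common to such schemes maps each pair of bits to one of the four bases under one of eight complementary rules. The sketch below uses one standard rule choice as an assumption; the paper's wave-based permutation and diffusion stages are not reproduced.

```python
# One of the eight standard complementary DNA encoding rules (assumed here).
RULE0 = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
DECODE0 = {base: bits for bits, base in RULE0.items()}

def dna_encode(byte: int) -> str:
    """Encode one 8-bit pixel value as a 4-base DNA string."""
    bits = f"{byte:08b}"
    return ''.join(RULE0[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_decode(seq: str) -> int:
    """Invert dna_encode under the same rule."""
    return int(''.join(DECODE0[base] for base in seq), 2)

pixel = 0b11000110            # one 8-bit pixel value (198)
strand = dna_encode(pixel)    # bit pairs 11 00 01 10 -> 'TACG'
```

Because encode and decode use the same rule table, the mapping is exactly invertible, which is what lets the cipher image be decoded back after diffusion.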

  18. New best estimates for radionuclide solid-liquid distribution coefficients in soils. Part 2: naturally occurring radionuclides.

    PubMed

    Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M

    2009-09-01

    Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates were presented per soil groups defined by their texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class seemed not to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
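
    How a K(d) value feeds into transport prediction can be shown with the standard linear-sorption retardation factor, R = 1 + (rho_b / theta) * Kd. The relationship is textbook; the numerical values below are illustrative, not values from the review.

```python
def retardation_factor(kd_L_per_kg, bulk_density_kg_per_L, porosity):
    """R = 1 + (rho_b / theta) * Kd under linear, reversible sorption."""
    return 1.0 + (bulk_density_kg_per_L / porosity) * kd_L_per_kg

# Illustrative soil: Kd = 50 L/kg, bulk density 1.5 kg/L, porosity 0.3.
R = retardation_factor(50.0, 1.5, 0.3)   # 1 + 5 * 50 = 251
velocity_water = 0.1                      # pore-water velocity, m/day
velocity_solute = velocity_water / R      # radionuclide front moves ~250x slower
```

This is why an order-of-magnitude uncertainty in K(d), common across the soil groups reviewed, translates directly into an order-of-magnitude uncertainty in predicted travel time.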

  19. Absolute Isotopic Abundance Ratios and the Accuracy of Δ47 Measurements

    NASA Astrophysics Data System (ADS)

    Daeron, M.; Blamart, D.; Peral, M.; Affek, H. P.

    2016-12-01

    Conversion from raw IRMS data to clumped isotope anomalies in CO2 (Δ47) relies on four external parameters: the (13C/12C) ratio of VPDB, the (17O/16O) and (18O/16O) ratios of VSMOW (or VPDB-CO2), and the slope of the triple oxygen isotope line (λ). Here we investigate the influence that these isotopic parameters exert on measured Δ47 values, using real-world data corresponding to 7 months of measurements; simulations based on randomly generated data; precise comparisons between water-equilibrated CO2 samples and between carbonate standards believed to share quasi-identical Δ47 values; and reprocessing of two carbonate calibration data sets with different slopes of Δ47 versus T. Using different sets of isotopic parameters generally produces systematic offsets as large as 0.04 ‰ in final Δ47 values. Moreover, even using a single set of isotopic parameters can produce intra- and inter-laboratory discrepancies in final Δ47 values, if some of these parameters are inaccurate. Depending on the isotopic compositions of the standards used for conversion to "absolute" values, these errors should correlate strongly with either δ13C or δ18O, or more weakly with both. Based on measurements of samples expected to display identical Δ47 values, such as 25°C water-equilibrated CO2 with different carbon and oxygen isotope compositions, or high-temperature standards ETH-1 and ETH-2, we conclude that the isotopic parameters used so far in most clumped isotope studies produce large, systematic errors controlled by the relative bulk isotopic compositions of samples and standards, which should be one of the key factors responsible for current inter-laboratory discrepancies. By contrast, the isotopic parameters of Brand et al. [2010] appear to yield accurate Δ47 values regardless of bulk isotopic composition. Reference: Brand, Assonov and Coplen [2010], http://dx.doi.org/10.1351/PAC-REP-09-01-05

  20. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    USGS Publications Warehouse

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
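
    The Monte-Carlo idea behind the tool, sampling parameters from prior ranges, scoring each sample against observations, and reading off how well the data constrain each parameter, can be sketched with a toy model. The exponential decay below is a trivial stand-in for an OTIS/TSM simulation; the parameter range, noise level, and behavioral threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(t, k):
    """Toy stand-in for a transient-storage simulation."""
    return np.exp(-k * t)

# Synthetic "observed" solute-tracer data with a known true parameter.
t_obs = np.linspace(0.0, 5.0, 20)
obs = model(t_obs, 0.8) + 0.01 * rng.normal(size=t_obs.size)

# Monte Carlo sampling over the parameter's prior range.
k_samples = rng.uniform(0.1, 2.0, 5000)
rmse = np.array([np.sqrt(np.mean((model(t_obs, k) - obs) ** 2))
                 for k in k_samples])

# "Behavioral" parameter sets: within a tolerance of the best fit.
behavioral = k_samples[rmse < 2.0 * rmse.min()]
k_low, k_high = behavioral.min(), behavioral.max()  # crude uncertainty bounds
```

A narrow behavioral range means the tracer data constrain the parameter well; a range spanning most of the prior is exactly the high parameter uncertainty the authors warn makes reach-scale interpretation unreliable.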

  1. The water retention curve and relative permeability for gas production from hydrate-bearing sediments: pore-network model simulation

    NASA Astrophysics Data System (ADS)

    Mahabadi, Nariman; Dai, Sheng; Seol, Yongkoo; Sup Yun, Tae; Jang, Jaewon

    2016-08-01

    The water retention curve and relative permeability are critical to predict gas and water production from hydrate-bearing sediments. However, values for key parameters that characterize gas and water flows during hydrate dissociation have not been identified due to experimental challenges. This study utilizes the combined techniques of micro-focus X-ray computed tomography (CT) and pore-network model simulation to identify proper values for those key parameters, such as gas entry pressure, residual water saturation, and curve fitting values. Hydrates with various saturation and morphology are realized in the pore-network that was extracted from micron-resolution CT images of sediments recovered from the hydrate deposit at the Mallik site, and then the processes of gas invasion, hydrate dissociation, gas expansion, and gas and water permeability are simulated. Results show that greater hydrate saturation in sediments leads to higher gas entry pressure, higher residual water saturation, and a steeper water retention curve. An increase in hydrate saturation decreases gas permeability but has marginal effects on water permeability in sediments with uniformly distributed hydrate. Hydrate morphology has more significant impacts than hydrate saturation on relative permeability. Sediments with heterogeneously distributed hydrate tend to result in lower residual water saturation and higher gas and water permeability. In this sense, the Brooks-Corey model that uses two fitting parameters individually for gas and water permeability properly captures the effect of hydrate saturation and morphology on gas and water flows in hydrate-bearing sediments.
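
    The Brooks-Corey relative-permeability form the study fits, with separate exponents for water and gas, can be written out directly. The residual saturation and exponent values below are illustrative placeholders, not the paper's fitted results.

```python
def effective_saturation(s_w, s_wr):
    """Water saturation normalized by the residual water saturation s_wr."""
    return (s_w - s_wr) / (1.0 - s_wr)

def krw_brooks_corey(s_w, s_wr, n_w):
    """Relative permeability to water, kr_w = Se^n_w."""
    se = effective_saturation(s_w, s_wr)
    return max(0.0, se) ** n_w

def krg_brooks_corey(s_w, s_wr, n_g):
    """Relative permeability to gas, kr_g = (1 - Se)^n_g."""
    se = effective_saturation(s_w, s_wr)
    return max(0.0, 1.0 - se) ** n_g

# Example: residual water saturation 0.2, exponents 4 (water) and 2 (gas).
krw = krw_brooks_corey(0.6, 0.2, 4.0)   # ((0.6-0.2)/0.8)^4 = 0.0625
krg = krg_brooks_corey(0.6, 0.2, 2.0)   # (1 - 0.5)^2 = 0.25
```

Raising the residual water saturation, as higher hydrate saturation does in the study, shifts both curves and steepens the water retention behavior.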

  2. Optimization of Terrestrial Ecosystem Model Parameters Using Atmospheric CO2 Concentration Data With the Global Carbon Assimilation System (GCAS)

    NASA Astrophysics Data System (ADS)

    Chen, Zhuoqi; Chen, Jing M.; Zhang, Shupeng; Zheng, Xiaogu; Ju, Weiming; Mo, Gang; Lu, Xiaoliang

    2017-12-01

    The Global Carbon Assimilation System (GCAS), which assimilates ground-based atmospheric CO2 data, is used to estimate several key parameters in a terrestrial ecosystem model for the purpose of improving carbon cycle simulation. The optimized parameters are the leaf maximum carboxylation rate at 25°C (Vmax25), the temperature sensitivity of ecosystem respiration (Q10), and the soil carbon pool size. The optimization is performed at the global scale at 1° resolution for the period from 2002 to 2008. The results indicate that vegetation from tropical zones has lower Vmax25 values than vegetation in temperate regions. Relatively high values of Q10 are derived over high/midlatitude regions. Both Vmax25 and Q10 exhibit pronounced seasonal variations at middle-high latitudes. The maxima in Vmax25 occur during growing seasons, while the minima appear during nongrowing seasons. Q10 values decrease with increasing temperature. The seasonal variabilities of Vmax25 and Q10 are larger at higher latitudes. Optimized Vmax25 and Q10 show little seasonal variability in tropical regions. The seasonal variabilities of Vmax25 are consistent with the variabilities of LAI for evergreen conifers and broadleaf evergreen forests. Variations in leaf nitrogen and leaf chlorophyll contents may partly explain the variations in Vmax25. The spatial distribution of the total soil carbon pool size after optimization compares favorably with the gridded Global Soil Data Set for Earth System. The results also suggest that atmospheric CO2 data are a source of information that can be tapped to gain spatially and temporally meaningful information for key ecosystem parameters that are representative at the regional and global scales.
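
    The Q10 formulation implied by the abstract scales ecosystem respiration exponentially with temperature: R = R_ref * Q10^((T - T_ref)/10). The sketch below shows this standard form; the reference rate and Q10 value are illustrative, not the optimized fields from GCAS.

```python
def respiration(t_celsius, r_ref, q10, t_ref=10.0):
    """Ecosystem respiration with Q10 temperature sensitivity."""
    return r_ref * q10 ** ((t_celsius - t_ref) / 10.0)

# With Q10 = 2, respiration doubles for every 10 degrees C of warming.
r10 = respiration(10.0, r_ref=2.0, q10=2.0)   # at the reference temperature
r20 = respiration(20.0, r_ref=2.0, q10=2.0)   # one decade warmer
```

Because R depends on Q10 most strongly away from T_ref, assimilated CO2 data carry the most information about Q10 in seasons and regions with large temperature excursions, consistent with the stronger seasonal Q10 signal the study finds at high latitudes.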

  3. Bayesian Inference for Time Trends in Parameter Values using Weighted Evidence Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. L. Kelly; A. Malkhasyan

    2010-09-01

    There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates an approach to incorporating multiple sources of data via applicability weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
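
    The masking effect the paper warns about can be shown with a toy conjugate calculation: yearly Gamma-Poisson posterior updates versus a single pooled constant-rate estimate. The Jeffreys-style prior, counts, and exposure below are synthetic assumptions, not Ageing PSA Network data, and the paper's actual MCMC trend models are far richer than this.

```python
def posterior_mean(events, exposure_hours, a=0.5, b=0.0):
    """Posterior mean rate under a Gamma(a, b) prior and Poisson likelihood."""
    return (a + events) / (b + exposure_hours)

counts = [1, 2, 2, 4, 5, 7, 9, 12]   # failures per year (synthetic, rising)
exposure = 1000.0                     # component-hours per year, say

yearly = [posterior_mean(c, exposure) for c in counts]
pooled = posterior_mean(sum(counts), exposure * len(counts))
```

The pooled estimate sits near the middle of the record: it understates the most recent year's rate by roughly a factor of two and overstates the earliest year's, which is exactly the distortion a time-dependent framework avoids.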

  4. Estimation of Filling and Afterload Conditions by Pump Intrinsic Parameters in a Pulsatile Total Artificial Heart.

    PubMed

    Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich

    2016-07-01

    A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
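
    The estimation-model idea, a linear regression correlating a hemodynamic target with predictors derived from motor current and piston position, can be sketched with synthetic data. The generating coefficients, noise level, and units below are invented for illustration; the abstract names the two signals but not the actual predictors or coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
motor_current = rng.uniform(0.5, 2.0, n)      # A (synthetic)
piston_position = rng.uniform(0.0, 10.0, n)   # mm (synthetic)

# Synthetic "true" relation for chamber filling plus measurement noise.
filling = (20.0 + 8.0 * motor_current + 1.5 * piston_position
           + 0.2 * rng.normal(size=n))

# Ordinary least squares: filling ~ intercept + current + position.
X = np.column_stack([np.ones(n), motor_current, piston_position])
beta, *_ = np.linalg.lstsq(X, filling, rcond=None)

predicted = X @ beta
mean_rel_error = np.mean(np.abs(predicted - filling) / filling)
```

A sensorless estimate of this kind, computed from signals the pump already measures, is what makes the approach an alternative to implanted pressure or volume sensors.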

  5. Bayesian Inference for Time Trends in Parameter Values: Case Study for the Ageing PSA Network of the European Commission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana L. Kelly; Albert Malkhasyan

    2010-06-01

    There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.

  6. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources.

    PubMed

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-10-16

    In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without any finite-key analysis are not suitable for practice. The finite-key security of passive decoy-state (PDS) QKD protocols with two different unstable sources, type-II parametric down-conversion (PDC) and phase-randomized weak coherent pulses (WCPs), is analyzed in our paper. For each PDS QKD protocol, we establish an optimization program and obtain the lower bound of the finite-key rate. Under reasonable values of the quantum setup parameters, the lower bounds of the finite-key rates are simulated. The simulation results show that, at different transmission distances, different fluctuations affect the key rates differently. Moreover, the PDS QKD protocol with an unstable PDC source can resist larger intensity fluctuations and larger statistical fluctuations.

  7. Multi-party Measurement-Device-Independent Quantum Key Distribution Based on Cluster States

    NASA Astrophysics Data System (ADS)

    Liu, Chuanqi; Zhu, Changhua; Ma, Shuquan; Pei, Changxing

    2018-03-01

    We propose a novel multi-party measurement-device-independent quantum key distribution (MDI-QKD) protocol based on cluster states. A four-photon analyzer which can distinguish all 16 cluster states serves as the measurement device for four-party MDI-QKD. Any two out of four participants can build secure keys after the analyzer obtains successful outputs and the two participants perform post-processing. We derive a security analysis for the protocol, and analyze the key rates under different values of polarization misalignment. The results show that four-party MDI-QKD is feasible over 280 km in the optical fiber channel when the key rate is about 10^-6 with a polarization misalignment parameter of 0.015. Moreover, our work takes an important step toward a quantum communication network.

  8. Systems Analysis of the Hydrogen Transition with HyTrans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leiby, Paul Newsome; Greene, David L; Bowman, David Charles

    2007-01-01

    The U.S. Federal government is carefully considering the merits and long-term prospects of hydrogen-fueled vehicles. NAS (1) has called for the careful application of systems analysis tools to structure the complex assessment required. Others, raising cautionary notes, question whether a consistent and plausible transition to hydrogen light-duty vehicles can be identified (2) and whether that transition would, on balance, be environmentally preferred. Modeling the market transition to hydrogen-powered vehicles is an inherently complex process, encompassing hydrogen production, delivery and retailing, vehicle manufacturing, and vehicle choice and use. We describe the integration of key technological and market factors in a dynamic transition model, HyTrans. The usefulness of HyTrans and its predictions depends on three key factors: (1) the validity of the economic theories that underpin the model, (2) the authenticity with which the key processes are represented, and (3) the accuracy of specific parameter values used in the process representations. This paper summarizes the theoretical basis of HyTrans, and highlights the implications of key parameter specifications with sensitivity analysis.

  9. Rules of parameter variation in homotype series of birdsong can indicate a 'sollwert' significance.

    PubMed

    Hultsch, H; Todt, D

    1996-11-01

    Various bird species produce songs which include homotype pattern series, i.e. segments composed of a number of repeated vocal units. We compared such units and analyzed the variation of their parameters, especially in the time and the frequency domain. In addition, we examined whether and how serial changes of both the range and the trend of variation were related to song constituents following the repetitions. Data evaluation showed that the variation of specific serial parameters (e.g., unit pitch or unit duration) occurring in the whistle song-types of nightingales (Luscinia megarhynchos) converged towards a distinct terminal value. Although song-types differed in this terminal value, it was found to play the role of a key cue ('sollwert'). The continuation of a song depended on a preceding attainment of its specific 'sollwert'. Our results suggest that the study of signal parameters and the rules of their variation makes a useful tool for behavioral access to the properties of the control systems mediating serial signal performances.

  10. Assessment of key transport parameters in a karst system under different dynamic conditions based on tracer experiments: the Jeita karst system, Lebanon

    NASA Astrophysics Data System (ADS)

    Doummar, Joanna; Margane, Armin; Geyer, Tobias; Sauter, Martin

    2018-03-01

    Artificial tracer experiments were conducted in the mature karst system of Jeita (Lebanon) under various flow conditions using surface and subsurface tracer injection points, to determine the variation of transport parameters (attenuation of peak concentration, velocity, transit times, dispersivity, and proportion of immobile and mobile regions) along fast and slow flow pathways. Tracer breakthrough curves (TBCs) observed at the karst spring were interpreted using a two-region nonequilibrium approach (2RNEM) to account for the skewness in the TBCs' long tailings. The conduit test results revealed a discharge threshold in the system dynamics, beyond which the transport parameters vary significantly. The polynomial relationship between transport velocity and discharge can be related to the variation of the conduit's cross-sectional area. Longitudinal dispersivity in the conduit system is not a constant value (α = 7-10 m) and decreases linearly with increasing flow rate because of dilution effects. Additionally, the proportion of immobile regions (arising from conduit irregularities) increases with decreasing water level in the conduit system. From tracer tests with injection at the surface, longitudinal dispersivity values are found to be large (8-27 m). The tailing observed in some TBCs is generated in the unsaturated zone before the tracer actually arrives at the major subsurface conduit draining the system. This work allows the estimation and prediction of the key transport parameters in karst aquifers. It shows that these parameters vary with time and flow dynamics, and they reflect the geometry of the flow pathway and the origin of infiltrating (potentially contaminated) recharge.

  11. Happiness Inequality: How Much Is Reasonable?

    ERIC Educational Resources Information Center

    Gandelman, Nestor; Porzecanski, Rafael

    2013-01-01

    We compute the Gini indexes for income, happiness and various simulated utility levels. Due to decreasing marginal utility of income, happiness inequality should be lower than income inequality. We find that happiness inequality is about half that of income inequality. To compute the utility levels we need to assume values for a key parameter that…
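The central computation above can be illustrated with a toy example (the income figures and the logarithmic utility function are invented assumptions, not the paper's data): because a concave utility such as log compresses high incomes, the Gini index of utility comes out well below the income Gini.

```python
import math

def gini(values):
    # Gini coefficient via the mean-absolute-difference formulation:
    # G = sum_i (2i - n - 1) * x_(i) / (n * sum(x)), with x sorted ascending
    values = sorted(values)
    n = len(values)
    cum = 0.0
    for i, v in enumerate(values, start=1):
        cum += (2 * i - n - 1) * v
    return cum / (n * sum(values))

incomes = [500, 1000, 2000, 4000, 8000]       # hypothetical incomes
utilities = [math.log(x) for x in incomes]     # decreasing marginal utility
print(round(gini(incomes), 3), round(gini(utilities), 3))
```

With these invented numbers the utility Gini is a small fraction of the income Gini, which is the qualitative effect the abstract describes.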

  12. An experimental study on pseudoelasticity of a NiTi-based damper for civil applications

    NASA Astrophysics Data System (ADS)

    Nespoli, Adelaide; Bassani, Enrico; Della Torre, Davide; Donnini, Riccardo; Villa, Elena; Passaretti, Francesca

    2017-10-01

    In this work, a pseudoelastic damper composed of NiTi wires is tested at 0.5, 1 and 2 Hz for 1000 mechanical cycles. The damping performance was evaluated by three key parameters: the damping capacity, the dissipated energy per cycle and the maximum force. During testing, the temperature of the pseudoelastic elements was recorded as well. Results show that the damper assures a bi-directional motion throughout the 1000 cycles together with the maintenance of the recentering. A stabilization process was observed in the first 50 mechanical cycles, where the key parameters reach stable values; in particular, it was found that the damping capacity and the dissipated energy both decrease with frequency. Besides, the mean temperature of the pseudoelastic elements reaches a stable value during the tests and confirms the different response of the pseudoelastic wires according to the specific length and strain. Finally, interesting thermal effects were observed at 1 and 2 Hz: at these frequencies and at high strains, the maximum force increases but the temperature of the NiTi wire decreases, in contradiction with the Clausius-Clapeyron law.

  13. Diurnal variations in blood gases and metabolites for draught Zebu and Simmental oxen.

    PubMed

    Zanzinger, J; Hoffmann, I; Becker, K

    1994-01-01

    In previous articles it has been shown that blood parameters may be useful to assess physical fitness in draught cattle. The aim of the present study was to detect possible variations in baseline values for the key metabolites: lactate and free fatty acids (FFA), and for blood gases in samples drawn from a catheterized jugular vein. Sampling took place immediately after venipuncture at intervals of 3 min for 1 hr in Simmental oxen (N = 6) and during a period of 24 hr at intervals of 60 min for Zebu (N = 4) and Simmental (N = 6) oxen. After puncture of the vein, plasma FFA and oxygen (pvO2) were elevated for approximately 15 min. All parameters returned to baseline values within 1 hr of the catheter being inserted. Twenty-four-hour mean baseline values for all measured parameters were significantly different (P < or = 0.001) between Zebu and Simmental. All parameters elicited diurnal variations which were mainly related to feed intake. The magnitude of these variations is comparable to the responses to light draught work. It is concluded that a strict standardization of blood sampling, at least in respect of time after feeding, is required for a reliable interpretation of endurance-indicating blood parameters measured under field conditions.

  14. Variability of non-Gaussian diffusion MRI and intravoxel incoherent motion (IVIM) measurements in the breast.

    PubMed

    Iima, Mami; Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori

    2018-01-01

    We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0-2500 s/mm2 with one number of excitations [NEX]) and five b-values (0-2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at b value of 0 sec/mm2 [ADC0] and kurtosis [K]) parameters were estimated from IVIM and Kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions.
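The synthetic ADC (sADC) from two key b-values mentioned above can be sketched as follows, using the standard kurtosis signal representation S(b) = S0·exp(−b·ADC0 + K·(b·ADC0)²/6); the tissue parameters and the two b-values below are hypothetical, chosen only to illustrate the computation:

```python
import math

def kurtosis_signal(b, S0, adc0, K):
    # Non-Gaussian (kurtosis) diffusion signal model:
    # S(b) = S0 * exp(-b*ADC0 + K*(b*ADC0)^2 / 6)
    return S0 * math.exp(-b * adc0 + K * (b * adc0) ** 2 / 6.0)

def sadc(s1, s2, b1, b2):
    # Synthetic ADC from the signals at two key b-values
    return math.log(s1 / s2) / (b2 - b1)

# Hypothetical tissue: ADC0 in mm^2/s, dimensionless kurtosis K
S0, adc0, K = 100.0, 1.5e-3, 0.8
b1, b2 = 200.0, 1500.0   # two illustrative key b-values, s/mm^2
s1 = kurtosis_signal(b1, S0, adc0, K)
s2 = kurtosis_signal(b2, S0, adc0, K)
s_adc = sadc(s1, s2, b1, b2)
print(round(s_adc * 1e3, 3))  # below ADC0: kurtosis lifts the high-b signal
```

Because the kurtosis term raises the signal at high b, the two-point sADC systematically underestimates ADC0 (here by K·ADC0²·(b1+b2)/6, analytically).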

  15. Variability of non-Gaussian diffusion MRI and intravoxel incoherent motion (IVIM) measurements in the breast

    PubMed Central

    Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori

    2018-01-01

    We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0–2500 s/mm2 with one number of excitations [NEX]) and five b-values (0–2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at b value of 0 sec/mm2 [ADC0] and kurtosis [K]) parameters were estimated from IVIM and Kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions. PMID:29494639

  16. Security of a single-state semi-quantum key distribution protocol

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Qiu, Daowen; Mateus, Paulo

    2018-06-01

    Semi-quantum key distribution protocols allow two users to set up a secure secret key. Compared with their full quantum counterparts, one of the two users is restricted to performing some "classical" or "semi-quantum" operations, which potentially makes such protocols easier to realize using fewer quantum resources. However, semi-quantum key distribution protocols mainly rely on a two-way quantum channel, so the eavesdropper has two opportunities to intercept the quantum states transmitted in the quantum communication stage. This may allow the eavesdropper to gain more information and makes the security analysis more complicated. In the past ten years, many semi-quantum key distribution protocols have been proposed and proved to be robust. However, there are few works concerning their unconditional security. It remains open how secure the semi-quantum protocols are and how much noise they can tolerate while still establishing a secure secret key. In this paper, we prove the unconditional security of a single-state semi-quantum key distribution protocol proposed by Zou et al. (Phys Rev A 79:052312, 2009). We present a complete proof from the information-theoretic perspective by deriving a lower bound on the protocol's key rate in the asymptotic scenario. Using this bound, we find an error threshold value such that for all error rates below this threshold, a secure secret key can definitely be established between the legitimate users; otherwise, the users should abort the protocol. We illustrate the protocol under the circumstance that the reverse quantum channel is a depolarizing channel with parameter q. Additionally, we compare the error threshold value with those of some full quantum protocols and of several existing semi-quantum protocols whose unconditional security proofs have been provided recently.

  17. Particle swarm optimization algorithm based parameters estimation and control of epileptiform spikes in a neural mass model

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan

    2016-07-01

    This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as the biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm shows capabilities of accurate estimation of the time evolution of key model parameters and practical detection of all the epileptic spikes. The estimation of unmeasurable parameters is improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting a proportional-integral controller. Besides, numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.
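A generic particle swarm optimizer (a textbook sketch, not the paper's specific neural-mass estimation scheme) can be written in a few lines; here it recovers two parameters of a toy quadratic error surface:

```python
import random

def pso(f, bounds, n_particles=30, n_iter=200, seed=0):
    # Minimal PSO minimizing f; velocity update:
    # v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # common default coefficients
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:             # update personal / global bests
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

# Toy "parameter estimation": recover (3, -1) by minimizing squared error
target = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
best, err = pso(target, [(-10, 10), (-10, 10)])
print([round(p, 2) for p in best])
```

In the paper's setting, `f` would instead be the mismatch between model output and the observed signal as a function of the unmeasurable model parameters.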

  18. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features as these are more robust and not prone to scene changes over time, which constitutes a first approach to the automation of processes using mapping applications such as geometric correction, creation of orthophotos and 3D models generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored in different images and parameter values, finding optimization values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.

  19. Statistical Bayesian method for reliability evaluation based on ADT data

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product’s reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the most popular. However, some limitations remain, such as an imprecise solution process and imprecise estimation of the degradation ratio, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the existing Bayesian solution to this problem loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimated values. Third, the lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
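The Wiener process used in such degradation models has independent Gaussian increments, Δx ~ N(μΔt, σ²Δt), so drift and diffusion can be estimated directly from observed increments; a minimal sketch with simulated (not experimental) data and simple moment/MLE estimators:

```python
import math
import random
import statistics

def simulate_wiener(mu, sigma, dt, n, seed=0):
    # Wiener degradation path: increments ~ Normal(mu*dt, sigma^2*dt)
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        path.append(x)
    return path

def estimate(path, dt):
    # Estimators for drift and diffusion from the observed increments
    inc = [b - a for a, b in zip(path, path[1:])]
    mu_hat = statistics.mean(inc) / dt
    sigma_hat = math.sqrt(statistics.pvariance(inc) / dt)
    return mu_hat, sigma_hat

path = simulate_wiener(mu=0.5, sigma=0.2, dt=1.0, n=2000)
mu_hat, sigma_hat = estimate(path, 1.0)
print(round(mu_hat, 2), round(sigma_hat, 2))  # close to the true (0.5, 0.2)
```

In an ADT setting the drift μ would additionally be linked to the stress level through the chosen acceleration model before extrapolating to normal conditions.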

  20. Threat evaluation for impact assessment in situation analysis systems

    NASA Astrophysics Data System (ADS)

    Roy, Jean; Paradis, Stephane; Allouche, Mohamad

    2002-07-01

    Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness, for the decision maker. Data fusion is a key enabler to meeting the demanding requirements of military situation analysis support systems. According to the data fusion model maintained by the Joint Directors of Laboratories' Data Fusion Group, impact assessment estimates the effects on situations of planned or estimated/predicted actions by the participants, including interactions between action plans of multiple players. In this framework, the appraisal of actual or potential threats is a necessary capability for impact assessment. This paper reviews and discusses in detail the fundamental concepts of threat analysis. In particular, threat analysis generally attempts to compute some threat value, for the individual tracks, that estimates the degree of severity with which engagement events will potentially occur. Presenting relevant tracks to the decision maker in some threat list, sorted from the most threatening to the least, is clearly in line with the cognitive demands associated with threat evaluation. A key parameter in many threat value evaluation techniques is the Closest Point of Approach (CPA). Along this line of thought, threatening tracks are often prioritized based upon which ones will reach their CPA first. Hence, the Time-to-CPA (TCPA), i.e., the time it will take for a track to reach its CPA, is also a key factor. Unfortunately, a typical assumption for the computation of the CPA/TCPA parameters is that the track velocity will remain constant. When a track is maneuvering, the CPA/TCPA values will change accordingly. These changes will in turn impact the threat value computations and, ultimately, the resulting threat list. This is clearly undesirable from a command decision-making perspective. In this regard, the paper briefly discusses threat value stabilization approaches based on neural networks and other mathematical techniques.
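A minimal sketch of the CPA/TCPA computation under the constant-velocity assumption discussed above (the coordinates and speeds are hypothetical):

```python
def cpa_tcpa(rx, ry, vx, vy):
    """Closest Point of Approach for a constant-velocity track, relative to
    own position at the origin; (rx, ry) is relative position, (vx, vy)
    relative velocity. Returns (cpa_distance, time_to_cpa)."""
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                       # non-moving track: CPA is right now
        return ((rx * rx + ry * ry) ** 0.5, 0.0)
    t = -(rx * vx + ry * vy) / v2       # time minimizing |r + v*t|
    t = max(t, 0.0)                     # CPA already passed -> use now
    cx, cy = rx + vx * t, ry + vy * t
    return ((cx * cx + cy * cy) ** 0.5, t)

# Track 10 km due east, heading due west at 0.25 km/s: collision course
print(cpa_tcpa(10.0, 0.0, -0.25, 0.0))  # -> (0.0, 40.0)
```

Re-running this on every track update is precisely what makes the threat list jitter for maneuvering tracks, motivating the stabilization approaches the paper discusses.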

  1. An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate

    NASA Astrophysics Data System (ADS)

    Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud

    2018-03-01

    In this article, Ti-6Al-4V powder alloy was directly deposited on a Ti-6Al-4V substrate using the laser cladding process. In this process, key parameters such as laser power (P), laser scanning rate (V) and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops empirical-statistical relations between these key parameters, expressed as a combined parameter (P^α V^β F^γ), and the geometrical characteristics of single clad tracks (i.e. clad height, clad width, penetration depth, wetting angle, and dilution). The results indicated that the clad width depended linearly on P·V^(-1/3), with the powder feeding rate having no effect on it. The dilution was controlled by the combined parameter V·F^(-1/2), with laser power a dispensable factor. However, laser power was the dominant factor for the clad height, penetration depth, and wetting angle, which were proportional to P·V^(-1)·F^(1/4), P·V·F^(-1/8), and P^(3/4)·V^(-1)·F^(-1/4), respectively. Based on the correlation coefficients (R > 0.9) and an analysis of residuals, it was confirmed that these empirical-statistical relations were in good agreement with the measured values of the single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of single clad tracks based on the key parameters.
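The log-linear regression behind such combined parameters can be sketched as follows: taking logarithms of y = c·P^α·V^β·F^γ turns the fit into ordinary linear regression on (ln P, ln V, ln F). The data below are synthetic, generated from an assumed clad-width law, not the paper's measurements:

```python
import math
import random

def fit_power_law(P, V, F, y):
    # Fit y = c * P^alpha * V^beta * F^gamma via least squares in log space:
    # ln y = ln c + alpha*ln P + beta*ln V + gamma*ln F
    X = [[1.0, math.log(p), math.log(v), math.log(f)]
         for p, v, f in zip(P, V, F)]
    Y = [math.log(t) for t in y]
    n = 4
    # Normal equations (X^T X) w = X^T Y, solved by Gaussian elimination
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * Y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))  # partial pivot
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w  # [ln c, alpha, beta, gamma]

rng = random.Random(0)
P = [rng.uniform(200, 600) for _ in range(50)]  # laser power (hypothetical)
V = [rng.uniform(2, 10) for _ in range(50)]     # scan rate (hypothetical)
F = [rng.uniform(5, 20) for _ in range(50)]     # feed rate (hypothetical)
width = [0.9 * p * v ** (-1 / 3) for p, v in zip(P, V)]  # assumed W ~ P*V^(-1/3)
lnc, alpha, beta, gamma = fit_power_law(P, V, F, width)
print(round(alpha, 2), round(beta, 2))  # alpha ≈ 1, beta ≈ -1/3, gamma ≈ 0
```

On noise-free synthetic data the exponents are recovered exactly; with real measurements the residual analysis (as in the paper) decides whether the power-law form is adequate.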

  2. Laser Trimming of CuAlMo Thin-Film Resistors: Effect of Laser Processing Parameters

    NASA Astrophysics Data System (ADS)

    Birkett, Martin; Penlington, Roger

    2012-08-01

    This paper reports the effect of varying laser trimming process parameters on the electrical performance of a novel CuAlMo thin-film resistor material. The films were prepared on Al2O3 substrates by direct-current (DC) magnetron sputtering, before being laser trimmed to target resistance value. The effect of varying key laser parameters of power, Q-rate, and bite size on the resistor stability and tolerance accuracy were systematically investigated. By reducing laser power and bite size and balancing this with Q-rate setting, significant improvements in resistor stability and resistor tolerance accuracies of less than ±0.5% were achieved.

  3. Critical laboratory values in hemostasis: toward consensus.

    PubMed

    Lippi, Giuseppe; Adcock, Dorothy; Simundic, Ana-Maria; Tripodi, Armando; Favaloro, Emmanuel J

    2017-09-01

    The term "critical values" can be defined to entail laboratory test results that significantly lie outside the normal (reference) range and necessitate immediate reporting to safeguard patient health, as well as those displaying a highly and clinically significant variation compared to previous data. The identification and effective communication of "highly pathological" values has engaged the minds of many clinicians, health care and laboratory professionals for decades, since these activities are vital to good laboratory practice. This is especially true in hemostasis, where a timely and efficient communication of critical values strongly impacts patient management. Due to the heterogeneity of available data, this paper is hence aimed to analyze the state of the art and provide an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis, thus providing a basic document for future consultation that assists laboratory professionals and clinicians alike. KEY MESSAGES Critical values are laboratory test results significantly lying outside the normal (reference) range and necessitating immediate reporting to safeguard patient health. A broad heterogeneity exists about critical values in hemostasis worldwide. We provide here an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis.

  4. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources

    PubMed Central

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-01-01

    In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without any finite-key analysis are not suitable for practice. The finite-key security of passive decoy-state (PDS) QKD protocols with two different unstable sources, type-II parametric down-conversion (PDC) and phase-randomized weak coherent pulses (WCPs), is analyzed in our paper. For each PDS QKD protocol, we establish an optimization program and obtain the lower bound of the finite-key rate. Under reasonable values of the quantum setup parameters, the lower bounds of the finite-key rates are simulated. The simulation results show that, at different transmission distances, different fluctuations affect the key rates differently. Moreover, the PDS QKD protocol with an unstable PDC source can resist larger intensity fluctuations and larger statistical fluctuations. PMID:26471947

  5. Scientific guidelines for preservation of samples collected from Mars

    NASA Technical Reports Server (NTRS)

    Gooding, James L. (Editor)

    1990-01-01

    The maximum scientific value of Martian geologic and atmospheric samples is retained when the samples are preserved in the conditions that applied prior to their collection. Any sample degradation equates to loss of information. Based on a detailed review of the pertinent scientific literature, and advice from experts in planetary sample analysis, numerical values are recommended for key parameters in the environmental control of collected samples with respect to material contamination, temperature, head-space gas pressure, ionizing radiation, magnetic fields, and acceleration/shock. The parametric values recommended for the most sensitive geologic samples should also be adequate to preserve any biogenic compounds or exobiological relics.

  6. Validation of systems biology derived molecular markers of renal donor organ status associated with long term allograft function.

    PubMed

    Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert

    2018-05-03

    Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.

  7. Optimization of terrestrial ecosystem model parameters using atmospheric CO2 concentration data with a global carbon assimilation system (GCAS)

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zhang, S.; Zheng, X.; Shangguan, W.

    2016-12-01

    A global carbon assimilation system (GCAS) that assimilates ground-based atmospheric CO2 data is used to estimate several key parameters in a terrestrial ecosystem model for the purpose of improving carbon cycle simulation. The optimized parameters are the leaf maximum carboxylation rate at 25 °C (Vmax25), the temperature sensitivity of ecosystem respiration (Q10), and the soil carbon pool size. The optimization is performed at the global scale at 1° resolution for the period from 2002 to 2008. Optimized multi-year average Vmax25 values range from 49 to 51 μmol m^-2 s^-1 over most regions of the world. Vegetation in tropical zones has relatively lower values than vegetation in temperate regions. Optimized multi-year average Q10 values vary from 1.95 to 2.05 over most regions of the world. Relatively high values of Q10 are derived over high- and mid-latitude regions. Both Vmax25 and Q10 exhibit pronounced seasonal variations at mid-high latitudes. The maximum in Vmax25 occurs during the growing season, while the minima appear during non-growing seasons. Q10 values decrease with increasing temperature. The seasonal variabilities of Vmax25 and Q10 are larger at higher latitudes, with tropical and low-latitude regions showing little seasonal variability.
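The Q10 parameter referred to above enters through the standard exponential temperature-sensitivity model for respiration, R(T) = R_ref·Q10^((T − T_ref)/10); a minimal sketch with hypothetical values:

```python
def respiration(T, R_ref, Q10, T_ref=25.0):
    # Ecosystem respiration with Q10 temperature sensitivity:
    # R(T) = R_ref * Q10 ** ((T - T_ref) / 10)
    return R_ref * Q10 ** ((T - T_ref) / 10.0)

# Hypothetical reference respiration of 2.0 umol m^-2 s^-1 at 25 C, Q10 = 2.0
print(respiration(35.0, 2.0, 2.0))  # one 10-degree step warmer -> 4.0
print(respiration(15.0, 2.0, 2.0))  # one 10-degree step cooler -> 1.0
```

So Q10 is simply the factor by which respiration changes per 10 °C of warming, which is the quantity the assimilation system constrains regionally.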

  8. Method for extracting relevant electrical parameters from graphene field-effect transistors using a physical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boscá, A., E-mail: alberto.bosca@upm.es; Dpto. de Ingeniería Electrónica, E.T.S.I. de Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040; Pedrós, J.

    2015-01-28

    Due to its intrinsic high mobility, graphene has proved to be a suitable material for high-speed electronics, where the graphene field-effect transistor (GFET) has shown excellent properties. In this work, we present a method for extracting relevant electrical parameters from GFET devices using a simple electrical characterization and a model fitting. With experimental data from the device output characteristics, the method allows parameters such as the mobility, the contact resistance, and the fixed charge to be calculated. Differentiated electron and hole mobilities and a direct connection with intrinsic material properties are some of the key aspects of this method. Moreover, the method output values can be correlated with several issues arising during key fabrication steps, such as the graphene growth and transfer, the lithographic steps, or the metallization processes, providing a flexible tool for quality control in GFET fabrication, as well as valuable feedback for improving the material-growth process.
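    The paper's exact model is not reproduced in the abstract; the sketch below fits a commonly used diffusive GFET transfer-curve model (constant oxide capacitance, L/W taken as 1, all numerical values hypothetical) to synthetic data, to show how mobility, contact resistance, and the Dirac voltage can be extracted by model fitting:

```python
import numpy as np
from scipy.optimize import curve_fit

Q_E = 1.602e-19  # elementary charge, C

def gfet_resistance(vg, r_c, mu, n0, v_dirac):
    """Total device resistance vs. gate voltage for a simple diffusive
    GFET model: contact resistance plus a channel term whose carrier
    density combines the residual charge n0 with the gate-induced charge."""
    c_ox = 1.15e-8  # F/cm^2, hypothetical back-gate oxide capacitance
    n = np.sqrt(n0 ** 2 + (c_ox * (vg - v_dirac) / Q_E) ** 2)
    return r_c + 1.0 / (Q_E * mu * n)

# Synthetic "measured" transfer curve with small multiplicative noise
vg = np.linspace(-10.0, 10.0, 81)
rng = np.random.default_rng(1)
r_meas = gfet_resistance(vg, 150.0, 2500.0, 5e11, 1.0)
r_meas = r_meas * (1.0 + 0.002 * rng.normal(size=vg.size))

popt, _ = curve_fit(gfet_resistance, vg, r_meas, p0=[100.0, 2000.0, 1e12, 0.0])
r_c_fit, mu_fit, n0_fit, v_dirac_fit = popt
```

    In practice electron and hole branches would be fitted separately to obtain the differentiated mobilities the abstract mentions.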

  9. The IMS Software Integration Platform

    DTIC Science & Technology

    1993-04-12

    products to incorporate all data shared by the IMS applications. Some entities (time-series, images, algorithm-specific parameters) must be managed...dbwhoami, dbcancel Transaction Management: dbcommit, dbrollback Key Counter Assignment: dbgetcounter String Handling: cstr_to_pad, pad_to_cstr Error...increment *value; String Manipulation: int cstr_to_pad (array, string, array_length) char *array, *string; int array_length; int pad_to_cstr (string

  10. Accounting for ethnicity in recreation demand: a flexible count data approach

    Treesearch

    J. Michael Bowker; V.R. Leeworthy

    1998-01-01

    The authors examine ethnicity and individual trip-taking behavior associated with natural resource based recreation in the Florida Keys. Bowker and Leeworthy estimate trip demand using the travel cost method. They then extend this model with a varying parameter adaptation to test the congruency of demand and economic value across white and Hispanic user subgroups...

  11. Rehydration of freeze-dried and convective dried boletus edulis mushrooms: effect on some quality parameters.

    PubMed

    Hernando, I; Sanjuán, N; Pérez-Munuera, I; Mulet, A

    2008-10-01

    Quality of rehydrated products is a key aspect linked to rehydration conditions. To assess the effect of rehydration temperature on some quality parameters, experiments at 20 and 70 degrees C were performed with convective dried and freeze-dried Boletus edulis mushrooms. Rehydration characteristics (through Peleg's parameter, k(1), and equilibrium moisture, W(e)), texture (Kramer), and microstructure (Cryo-Scanning Electron Microscopy) were evaluated. Freeze-dried samples absorbed water more quickly and attained higher W(e) values than convective dried ones. Convective dehydrated samples rehydrated at 20 degrees C showed significantly lower textural values (11.9 +/- 3.3 N/g) than those rehydrated at 70 degrees C (15.7 +/- 1.2 N/g). For the freeze-dried Boletus edulis, the textural values also exhibited significant differences, being 8.2 +/- 1.3 and 10.5 +/- 2.3 N/g for 20 and 70 degrees C, respectively. Freeze-dried samples showed a porous structure that allows rehydration to take place mainly at the extracellular level. This explains the fact that, regardless of temperature, freeze-dried mushrooms absorbed water more quickly and reached higher W(e) values than convective dried ones. Whatever the dehydration technique used, rehydration at 70 degrees C produced a structural damage that hindered water absorption; consequently lower W(e) values and higher textural values were attained than when rehydrating at 20 degrees C.
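    Peleg's model used above, M(t) = M0 + t/(k1 + k2·t), linearizes to a straight line, so k1 and the equilibrium moisture We = M0 + 1/k2 can be recovered with a simple fit (all numbers below are illustrative, not the paper's measurements):

```python
import numpy as np

def peleg_moisture(t, m0, k1, k2):
    """Peleg rehydration model: M(t) = M0 + t / (k1 + k2*t)."""
    return m0 + t / (k1 + k2 * t)

# Synthetic rehydration curve (illustrative values, not the paper's data)
m0_true, k1_true, k2_true = 0.10, 4.0, 0.35
t = np.linspace(1.0, 120.0, 30)              # minutes
rng = np.random.default_rng(2)
m = peleg_moisture(t, m0_true, k1_true, k2_true) + rng.normal(0.0, 0.01, t.size)

# Peleg's equation linearizes as t / (M - M0) = k1 + k2 * t,
# so both constants come from a straight-line fit.
y = t / (m - m0_true)
k2_fit, k1_fit = np.polyfit(t, y, 1)
w_e = m0_true + 1.0 / k2_fit                 # equilibrium moisture content We
```

    A smaller fitted k1 corresponds to faster initial water uptake, which is how freeze-dried and convective dried samples are compared above.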

  12. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the TWT cathode is a common transmitter fault. In this paper, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), has a certain reference value for the overall health judgment of TWT transmitters.

  13. Determination of key parameters of vector multifractal vector fields

    NASA Astrophysics Data System (ADS)

    Schertzer, D. J. M.; Tchiguirinskaia, I.

    2017-12-01

    For too long, multifractal analyses and simulations have been restricted to scalar-valued fields (Schertzer and Tchiguirinskaia, 2017a,b). For instance, wind velocity multifractality has mostly been analysed in terms of scalar structure functions and the scalar energy flux. This restriction has had the unfortunate consequence that multifractals were not applied to their full extent in geophysics, even though geophysics inspired them. Indeed, a key question in geophysics is the complexity of the interactions between various fields or their components. Nevertheless, sophisticated methods have been developed to determine the key parameters of scalar-valued fields. In this communication, we first present the vector extensions of the universal multifractal analysis techniques to multifractals whose generator belongs to a Levy-Clifford algebra (Schertzer and Tchiguirinskaia, 2015). We point out further extensions, noting the increased complexity. For instance, the (scalar) index of multifractality becomes a matrix. Schertzer, D. and Tchiguirinskaia, I. (2015) `Multifractal vector fields and stochastic Clifford algebra', Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p. 123127. doi: 10.1063/1.4937364. Schertzer, D. and Tchiguirinskaia, I. (2017a) `An Introduction to Multifractals and Scale Symmetry Groups', in Ghanbarian, B. and Hunt, A. (eds) Fractals: Concepts and Applications in Geosciences. CRC Press, p. (in press). Schertzer, D. and Tchiguirinskaia, I. (2017b) `Pandora Box of Multifractals: Barely Open?', in Tsonis, A. A. (ed.) 30 Years of Nonlinear Dynamics in Geophysics. Berlin: Springer, p. (in press).

  14. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
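    The "wildly different values for different initialisations" failure mode is easy to reproduce: in the toy model below only the product a·b is identifiable, so two optimizer starts return different parameter pairs that fit the data equally well (model and numbers are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# A structurally non-identifiable model: y = (a * b) * x.
# The data constrain only the product a*b, not a and b separately.
x = np.linspace(0.0, 1.0, 20)
y = 6.0 * x                       # generated with a*b = 6

def sse(params):
    a, b = params
    return np.sum((y - a * b * x) ** 2)

fit1 = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead").x
fit2 = minimize(sse, x0=[10.0, 0.1], method="Nelder-Mead").x

# Both fits reproduce the data, yet land on different points of the
# hyperbola a*b = 6, depending only on where the optimizer started.
```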

  15. Analysis and design of a standardized control module for switching regulators

    NASA Astrophysics Data System (ADS)

    Lee, F. C.; Mahmoud, M. F.; Yu, Y.; Kolecki, J. C.

    1982-07-01

    Three basic switching regulators: buck, boost, and buck/boost, employing a multiloop standardized control module (SCM) were characterized by a common small signal block diagram. Employing the unified model, regulator performances such as stability, audiosusceptibility, output impedance, and step load transient are analyzed and key performance indexes are expressed in simple analytical forms. More importantly, the performance characteristics of all three regulators are shown to enjoy common properties due to the unique SCM control scheme which nullifies the positive zero and provides adaptive compensation to the moving poles of the boost and buck/boost converters. This allows a simple unified design procedure to be devised for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt.

  16. A Bayesian Framework for Coupled Estimation of Key Unknown Parameters of Land Water and Energy Balance Equations

    NASA Astrophysics Data System (ADS)

    Farhadi, L.; Abdolghafoorian, A.

    2015-12-01

    The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology, and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e., moisture and temperature) with respect to observations and on the parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function is an approximation of the posterior uncertainty of the parameter estimates. Uncertainty of the estimated fluxes is obtained by propagating the uncertainty of the key parameters through linear and nonlinear functions using the method of First Order Second Moment (FOSM). Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. Accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions at different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
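    The FOSM step described above propagates parameter uncertainty through a first-order Taylor expansion, Var[f] ≈ J C Jᵀ, with J the gradient of f at the parameter mean and C the parameter covariance. A minimal sketch with a stand-in function (not the paper's flux model; all numbers assumed):

```python
import numpy as np

def f(theta):
    """Illustrative nonlinear function of two uncertain parameters."""
    a, b = theta
    return a ** 2 * np.exp(0.1 * b)

theta_mean = np.array([2.0, 1.0])
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])        # assumed parameter covariance

# Numerical (central-difference) gradient of f at the mean
eps = 1e-6
J = np.array([(f(theta_mean + eps * e) - f(theta_mean - eps * e)) / (2 * eps)
              for e in np.eye(2)])

# First Order Second Moment approximation of the output variance
var_f = J @ C @ J
```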

  17. Decreasing Kd uncertainties through the application of thermodynamic sorption models.

    PubMed

    Domènech, Cristina; García, David; Pękala, Marek

    2015-09-15

    Radionuclide retardation processes during transport are expected to play an important role in the safety assessment of subsurface disposal facilities for radioactive waste. The linear distribution coefficient (Kd) is often used to represent radionuclide retention, because analytical solutions to the classic advection-diffusion-retardation equation under simple boundary conditions are readily obtainable, and because numerical implementation of this approach is relatively straightforward. For these reasons, the Kd approach lends itself to the probabilistic calculations required by Performance Assessment (PA). However, it is widely recognised that Kd values derived from laboratory experiments generally have a narrow field of validity, and that the uncertainty of the Kd outside this field increases significantly. Mechanistic multicomponent geochemical simulators can be used to calculate Kd values under a wide range of conditions. This approach is powerful and flexible, but requires expert knowledge on the part of the user. The work presented in this paper aims to develop a simplified approach to estimating Kd values with a level of accuracy comparable to that obtained by fully fledged geochemical simulators. The proposed approach consists of deriving simplified algebraic expressions by combining the relevant mass action equations. This approach was applied to three distinct geochemical systems involving surface complexation and ion-exchange processes. Within bounds imposed by the model simplifications, the presented approach allows radionuclide Kd values to be estimated as a function of key system-controlling parameters, such as the pH and mineralogy. This approach could be used by PA professionals to assess the impact of key geochemical parameters on the variability of radionuclide Kd values. Moreover, the presented approach could be relatively easily implemented in existing codes to represent the influence of temporal and spatial changes in geochemistry on Kd values. Copyright © 2015 Elsevier B.V. All rights reserved.
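    As an illustration of the kind of simplified algebraic expression proposed (a hypothetical single-site system, not one of the paper's three): for a trace metal sorbing by the surface complexation >SOH + M2+ ⇌ >SOM+ + H+, combining the mass action equation with a site balance (sites mostly unoccupied) gives Kd ≈ K·S_tot/[H+], i.e. roughly a tenfold rise per pH unit in that regime:

```python
# Simplified mass-action Kd estimate for a hypothetical single-site
# surface complexation system; units are normalized for illustration.
def kd_estimate(pH, log_k=-2.0, site_density=1e-3):
    """Kd ~= K * S_tot / [H+]: valid for trace metal, mostly free sites."""
    h_conc = 10.0 ** (-pH)
    return 10.0 ** log_k * site_density / h_conc

kd_ph5 = kd_estimate(5.0)
kd_ph6 = kd_estimate(6.0)   # one pH unit higher -> tenfold larger Kd
```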

  18. CAN A NANOFLARE MODEL OF EXTREME-ULTRAVIOLET IRRADIANCES DESCRIBE THE HEATING OF THE SOLAR CORONA?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajfirouze, E.; Safari, H.

    2012-01-10

    Nanoflares, the basic units of impulsive energy release, may produce much of the solar background emission. Extrapolation of the energy frequency distribution of observed microflares, which follows a power law, to lower energies can give an estimation of the importance of nanoflares for heating the solar corona. If the power-law index is greater than 2, then the nanoflare contribution is dominant. We model a time series of extreme-ultraviolet emission radiance as random flares with a power-law exponent of the flare event distribution. The model is based on three key parameters: the flare rate, the flare duration, and the power-law exponent of the flare intensity frequency distribution. We use this model to simulate emission line radiance detected in 171 Å, observed by the Solar Terrestrial Relations Observatory/Extreme-Ultraviolet Imager and the Solar Dynamics Observatory/Atmospheric Imaging Assembly. The observed light curves are matched with simulated light curves using an artificial neural network, and the parameter values are determined across the active region, quiet Sun, and coronal hole. The damping rate of nanoflares is compared with the radiative-losses cooling time. The effect of background emission, data cadence, and network sensitivity on the key parameters of the model is studied. Most of the observed light curves have a power-law exponent, α, greater than the critical value 2. At these sites, nanoflare heating could be significant.
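    The model's three key parameters (flare rate, flare duration, and power-law exponent of the intensity distribution) map directly onto a simulated light curve. The sketch below draws flare peaks from a truncated power law by inverse-transform sampling and superposes exponential decays (all parameter values are illustrative, not fitted ones):

```python
import numpy as np

rng = np.random.default_rng(3)

def nanoflare_light_curve(duration_s=3600.0, cadence_s=12.0,
                          flare_rate=0.02, tau=60.0, alpha=2.3,
                          i_min=1.0, i_max=1e3):
    """Simulate an EUV light curve as random flares governed by the three
    key parameters: rate (flares/s), decay time tau, and exponent alpha.

    Peak intensities follow p(I) ~ I**-alpha on [i_min, i_max], sampled
    by inverse-transform sampling; each flare decays exponentially."""
    t = np.arange(0.0, duration_s, cadence_s)
    flux = np.zeros_like(t)
    n_flares = rng.poisson(flare_rate * duration_s)
    t0 = rng.uniform(0.0, duration_s, n_flares)
    u = rng.uniform(size=n_flares)
    a = 1.0 - alpha   # inverse CDF of the truncated power law (alpha != 1)
    peaks = (i_min ** a + u * (i_max ** a - i_min ** a)) ** (1.0 / a)
    for ti, pi in zip(t0, peaks):
        mask = t >= ti
        flux[mask] += pi * np.exp(-(t[mask] - ti) / tau)
    return t, flux

t, flux = nanoflare_light_curve()
```

    In the paper, simulated curves of this kind are compared against observed ones to recover the three parameters; here the simulation alone is sketched.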

  19. Diagnosing ΛHDE model with statefinder hierarchy and fractional growth parameter

    NASA Astrophysics Data System (ADS)

    Zhou, LanJun; Wang, Shuang

    2016-07-01

    Recently, a new dark energy model called ΛHDE was proposed. In this model, dark energy consists of two parts: the cosmological constant Λ and holographic dark energy (HDE). Two key parameters of this model are the fractional density of the cosmological constant ΩΛ0 and the dimensionless HDE parameter c. Since these two parameters determine the dynamical properties of DE and the destiny of the universe, it is important to study the impacts of different values of ΩΛ0 and c on the ΛHDE model. In this paper, we apply various DE diagnostic tools to diagnose ΛHDE models with different values of ΩΛ0 and c; these tools include the statefinder hierarchy {S3^(1), S4^(1)}, the fractional growth parameter ɛ, and the composite null diagnostic (CND), which is a combination of {S3^(1), S4^(1)} and ɛ. We find that: (1) adopting different values of ΩΛ0 only has quantitative impacts on the evolution of the ΛHDE model, while adopting different c has qualitative impacts; (2) compared with S3^(1), S4^(1) can give larger differences among the cosmic evolutions of the ΛHDE model associated with different ΩΛ0 or different c; (3) compared with the case of using a single diagnostic, adopting a CND pair has a much stronger ability to diagnose the ΛHDE model.

  20. An estimate of the error caused by the elongation of the wavelength in a focused beam in free-space electromagnetic parameters measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yunpeng; Li, En, E-mail: lien@uestc.edu.cn; Guo, Gaofeng

    2014-09-15

    A pair of spot-focusing horn lens antennas is the key component in a free-space measurement system. The electromagnetic constitutive parameters of a planar sample are determined using transmitted and reflected electromagnetic beams. These parameters are obtained from the scattering parameters measured by a microwave network analyzer, the thickness of the sample, and the wavelength of the focused beam at the sample. Free-space techniques introduced in most papers take the focused wavelength to be the free-space wavelength. In fact, however, the incident wave projected by a lens onto the sample approximates a Gaussian beam; thus, there is an elongation of the wavelength in the focused beam, and this elongation should be taken into consideration in dielectric and magnetic measurements. In this paper, the elongation of the wavelength has been analyzed and measured. Measurement results show that the focused wavelength in the vicinity of the focus has an elongation of 1%–5% relative to the free-space wavelength. The elongation's influence on the measured values of the permittivity and permeability has been investigated. Numerical analyses show that the elongation of the focused wavelength increases the measured value of the permeability relative to the traditionally measured value; the permittivity, however, is affected by several parameters and may increase or decrease relative to the traditionally measured value.

  1. Performance evaluation of algebraic reconstruction technique (ART) for prototype chest digital tomosynthesis (CDT) system

    NASA Astrophysics Data System (ADS)

    Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung

    2017-03-01

    Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) method under variations in key imaging parameters, quality metrics were evaluated using a LUNGMAN phantom containing a ground-glass opacity (GGO) tumor. Reconstructed images were acquired from a total of 41 projection images over a total angular range of +/-20°. We evaluated the contrast-to-noise ratio (CNR) and the artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter, and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections. In this study, the proper values of the relaxation parameter for zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were obtained in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, a BP initial guess for the ART method could provide better image quality than a ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range in CDT systems.
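    The ART update with a relaxation parameter is the Kaczmarz iteration: each projection equation nudges the current image toward its hyperplane by a relaxed step. A toy sketch on a small dense system (the real CDT projection geometry is far larger and sparser; the 0.4/0.6 relaxation values echo the ones reported above, and the "back-projection" start here is only a crude stand-in):

```python
import numpy as np

def art_reconstruct(A, b, n_iters=20, relaxation=0.4, x0=None):
    """Algebraic reconstruction technique (Kaczmarz): cycle through the
    projection equations A[i] @ x = b[i], moving x toward each hyperplane
    by a step scaled with the relaxation parameter."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iters):
        for ai, bi in zip(A, b):
            x += relaxation * (bi - ai @ x) / (ai @ ai) * ai
    return x

# Toy consistent system standing in for the projection geometry
rng = np.random.default_rng(4)
A = rng.normal(size=(40, 10))
x_true = rng.normal(size=10)
b = A @ x_true

x_zi = art_reconstruct(A, b, relaxation=0.4)                    # zero-image start
x_bp = art_reconstruct(A, b, relaxation=0.6, x0=A.T @ b / 40)   # crude BP-like start
```

    For a consistent system both starts converge to the same solution; with noisy, limited-angle data the initial guess and relaxation value shape the reconstruction, which is what the study quantifies via CNR and ASF.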

  2. Influences on cocaine tolerance assessed under a multiple conjunctive schedule of reinforcement.

    PubMed

    Yoon, Jin Ho; Branch, Marc N

    2009-11-01

    Under multiple schedules of reinforcement, previous research has generally observed tolerance to the rate-decreasing effects of cocaine that is dependent on schedule-parameter size in the context of fixed-ratio (FR) schedules, but not in the context of fixed-interval (FI) schedules of reinforcement. The current experiment examined the effects of cocaine on the key-pecking responses of White Carneau pigeons maintained under a three-component multiple conjunctive FI (10 s, 30 s, and 120 s) FR (5 responses) schedule of food presentation. Dose-effect curves representing the effects of presession cocaine on responding were assessed in the context of (1) acute administration of cocaine, (2) chronic administration of cocaine, and (3) daily administration of saline. Chronic administration of cocaine generally resulted in tolerance to the response-rate-decreasing effects of cocaine, and that tolerance was generally independent of relative FI value, as measured by changes in ED50 values. Daily administration of saline decreased ED50 values to those observed when cocaine was administered acutely. The results show that adding an FR requirement to FI schedules is not sufficient to produce schedule-parameter-specific tolerance. Tolerance to cocaine was generally independent of the FI parameter under the present conjunctive schedules, indicating that a ratio requirement, per se, is not sufficient for tolerance to be dependent on the FI parameter.

  3. A parametric comparative study of electrocoagulation and coagulation using ultrafine quartz suspensions.

    PubMed

    Kiliç, Mehtap Gülsün; Hoşten, Cetin; Demirci, Sahinde

    2009-11-15

    This paper attempts to compare electrocoagulation using aluminum anodes and stainless steel cathodes with conventional coagulation by aluminum sulfate dosing on aqueous suspensions of ultrafine quartz. Several key parameters affecting the efficiency of electrocoagulation and coagulation were investigated with laboratory scale experiments in search of optimal parameter values. Optimal values of the parameters were determined on the basis of the efficiency of turbidity removal from ultrafine quartz suspensions. The parameters investigated in the study were suspension pH, electrical potential, current density, electrocoagulation time, and aluminum dosage. A comparison between electrocoagulation and coagulation was made on the basis of total dissolved aluminum, revealing that electrocoagulation and coagulation were equally effective at the same aluminum dosage for the removal of quartz particles from suspensions. Coagulation, however, was more effective in a wider pH range (pH 6-9) than electrocoagulation which yielded optimum effectiveness in a relatively narrower pH range around 9, where, in both methods, these pH values corresponded to near-zero zeta potentials of quartz particles. Furthermore, experimental results confirmed that electrocoagulation could display some pH buffering capacity. The kinetics of electrocoagulation was very fast (<10 min) in approaching a residual turbidity, which could be modeled with a second-order rate equation.
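    The second-order rate equation mentioned for the kinetics, dN/dt = -k(N - Nr)^2 with Nr the residual turbidity, integrates to 1/(N - Nr) = 1/(N0 - Nr) + k·t, so the rate constant follows from a straight-line fit (the numbers below are illustrative, not the paper's measurements):

```python
import numpy as np

# Second-order approach of turbidity N(t) to a residual value Nr:
#   dN/dt = -k * (N - Nr)**2   =>   1/(N - Nr) = 1/(N0 - Nr) + k*t
n0, nr, k = 200.0, 8.0, 0.05      # NTU, NTU, 1/(NTU*min): assumed values
t = np.linspace(0.5, 10.0, 20)    # minutes (the paper reports <10 min kinetics)
n = nr + (n0 - nr) / (1.0 + k * (n0 - nr) * t)

# The rate constant k and initial turbidity drop out of a linear fit
# of 1/(N - Nr) against t.
k_fit, intercept = np.polyfit(t, 1.0 / (n - nr), 1)
```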

  4. Intrinsic physical conditions and structure of relativistic jets in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Nokhrina, E. E.; Beskin, V. S.; Kovalev, Y. Y.; Zheltoukhov, A. A.

    2015-03-01

    The analysis of the frequency dependence of the observed shift of the cores of relativistic jets in active galactic nuclei (AGNs) allows us to evaluate the number density of the outflowing plasma ne and, hence, the multiplicity parameter λ = ne/nGJ, where nGJ is the Goldreich-Julian number density. We have obtained the median value λmed = 3 × 10^13 and the median value of the Michel magnetization parameter σM, med = 8 from an analysis of 97 sources. Since the magnetization parameter can be interpreted as the maximum possible Lorentz factor Γ of the bulk motion which can be obtained for a relativistic magnetohydrodynamic (MHD) flow, this estimate is in agreement with the observed superluminal motion of bright features in AGN jets. Moreover, knowing these key parameters, one can determine the transverse structure of the flow. We show that the poloidal magnetic field and particle number density are much larger in the centre of the jet than near the jet boundary. The MHD model can also explain the typical observed level of jet acceleration. Finally, the causal connectivity of strongly collimated jets is discussed.

  5. Determination of remodeling parameters for a strain-adaptive finite element model of the distal ulna.

    PubMed

    Neuert, Mark A C; Dunning, Cynthia E

    2013-09-01

    Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
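    A strain-energy adaptive rule of the kind governed by K and s can be sketched as a dead-zone update: no remodeling while the stimulus U/ρ stays within [(1-s)K, (1+s)K], and density change proportional to the deviation outside it (the values and the specific update form below are illustrative, not the paper's calibrated model):

```python
import numpy as np

def remodel_step(rho, strain_energy, K=0.004, s=0.35, B=1.0, dt=1.0,
                 rho_min=0.01, rho_max=1.74):
    """One explicit update of a strain-energy-based adaptive material model.

    The stimulus is strain energy density per unit bone mass, U/rho.
    Inside the dead zone [(1-s)K, (1+s)K] nothing happens; outside it,
    density changes in proportion to the deviation. K and s are the two
    key parameters discussed above (all numbers here are assumed)."""
    stimulus = strain_energy / rho
    drho = np.where(stimulus > (1 + s) * K, B * (stimulus - (1 + s) * K),
           np.where(stimulus < (1 - s) * K, B * (stimulus - (1 - s) * K), 0.0))
    return np.clip(rho + dt * drho, rho_min, rho_max)

rho = np.array([0.8, 0.8, 0.8])
U = np.array([0.008, 0.0032, 0.0008])   # high, in-zone, and low stimulus elements
rho_new = remodel_step(rho, U)          # densifies, holds, and resorbs respectively
```

    Iterating such a step over a finite element mesh from a homogeneous start is what drives the model toward the steady-state density distributions compared against µCT above.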

  6. Semiconductive 3-D haloplumbate framework hybrids with high color rendering index white-light emission.

    PubMed

    Wang, Guan-E; Xu, Gang; Wang, Ming-Sheng; Cai, Li-Zhen; Li, Wen-Hua; Guo, Guo-Cong

    2015-12-01

    Single-component white light materials may create great opportunities for novel conventional lighting applications and display systems; however, their reported color rendering index (CRI) values, one of the key parameters for lighting, are less than 90, which does not satisfy the demand of color-critical upmarket applications, such as photography, cinematography, and art galleries. In this work, two semiconductive chloroplumbate (chloride anion of lead(ii)) hybrids, obtained using a new inorganic-organic hybrid strategy, show unprecedented 3-D inorganic framework structures and white-light-emitting properties with high CRI values around 90, one of which shows the highest value to date.

  7. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    NASA Astrophysics Data System (ADS)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-06-01

    Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of the ductile iron and the thermal cycle) on key features of the heat treatment (such as the minimum required time for austenitization and austempering and the microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for a coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of the microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots. The sensitivity indices are determined by using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help select values of the appropriate process parameters to obtain parts with a required microstructural characteristic.
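    A variance-based first-order sensitivity index of the kind used here is S_i = Var(E[Y|X_i])/Var(Y). A minimal binning estimator on a toy model, in which one input dominates the output variance (the model and all values are illustrative, not the paper's thermo-metallurgical simulation):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: x1 (think "austempering temperature") matters far more than x2.
n = 100_000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
y = 4.0 * x1 + 0.5 * x2 + rng.normal(0.0, 0.1, n)

def first_order_index(x, y, bins=50):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning x into quantiles
    and taking the variance of the per-bin conditional means."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

s1 = first_order_index(x1, y)   # close to 1: x1 drives the output
s2 = first_order_index(x2, y)   # close to 0: x2 barely matters
```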

  8. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base-year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for the different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources, and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  9. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrial countries the number of such contaminated sites is tremendously high, to the point that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities, and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution, and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes. 
Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters while neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.

  10. Joint cosmic microwave background and weak lensing analysis: constraints on cosmological parameters.

    PubMed

    Contaldi, Carlo R; Hoekstra, Henk; Lewis, Antony

    2003-06-06

    We use cosmic microwave background (CMB) observations together with the Red-Sequence Cluster Survey weak lensing results to derive constraints on a range of cosmological parameters. This particular choice of observations is motivated by their robust physical interpretation and complementarity. Our combined analysis, including a weak nucleosynthesis constraint, yields accurate determinations of a number of parameters, including the amplitude of fluctuations sigma_8 = 0.89 +/- 0.05 and the matter density Omega_m = 0.30 +/- 0.03. We also find a value for the Hubble parameter of H_0 = 70 +/- 3 km s^-1 Mpc^-1, in good agreement with the Hubble Space Telescope Key Project result. We conclude that the combination of CMB and weak lensing data provides some of the most powerful constraints available in cosmology today.

  11. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in general circulation models (GCMs) contain many uncertain parameters that greatly affect model performance and model climate sensitivity. Traditional manual, empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Unlike traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve tuning performance. Atmospheric GCM simulations show that the optimum parameter combination determined by this method improves the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially the unavoidable comprehensive parameter tuning during the development stage.
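    A minimal sketch of the three-step idea, using a hypothetical skill metric in place of the paper's comprehensive evaluation metric and a toy pure-Python simplex rather than the authors' framework:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    """Minimal downhill-simplex (Nelder-Mead) minimizer; illustration only."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        v = list(x0)
        v[i] += step
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * c - w for c, w in zip(centroid, worst)]            # reflection
        if f(refl) < f(best):
            expd = [3 * c - 2 * w for c, w in zip(centroid, worst)]    # expansion
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (c + w) for c, w in zip(centroid, worst)]   # contraction
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink the whole simplex toward the best vertex
                simplex = [[0.5 * (b + x) for b, x in zip(best, p)] for p in simplex]
    return min(simplex, key=f)

# Hypothetical skill metric (lower is better); the true optimum is (2.0, 0.5).
def metric(params):
    a, b = params
    return (a - 2.0) ** 2 + 10.0 * (b - 0.5) ** 2

# Step 1: one-at-a-time sensitivity screening around the default values.
defaults = [1.0, 1.0]
base = metric(defaults)
sensitivity = []
for i in range(len(defaults)):
    perturbed = list(defaults)
    perturbed[i] *= 1.1  # +10% perturbation
    sensitivity.append(abs(metric(perturbed) - base))

# Step 2: pick the best starting point from a coarse grid of candidates.
candidates = [[a, b] for a in (0.0, 1.0, 2.0, 3.0) for b in (0.0, 0.5, 1.0)]
start = min(candidates, key=metric)

# Step 3: refine the sensitive parameters with the downhill simplex method.
optimum = nelder_mead(metric, start)
```

    The screening step keeps expensive simplex iterations focused on the parameters that actually move the metric, which is the source of the computational savings the abstract describes.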

  12. Crop Damage by Primates: Quantifying the Key Parameters of Crop-Raiding Events

    PubMed Central

    Wallace, Graham E.; Hill, Catherine M.

    2012-01-01

    Human-wildlife conflict often arises from crop-raiding, and insights regarding which aspects of raiding events determine crop loss are essential when developing and evaluating deterrents. However, because accounts of crop-raiding behaviour are frequently indirect, these parameters are rarely quantified or explicitly linked to crop damage. Using systematic observations of the behaviour of non-human primates on farms in western Uganda, this research identifies the number of individuals raiding and the duration of the raid as the primary parameters determining crop loss. Secondary factors include the distance travelled onto the farm, the age composition of the raiding group, and whether raids occur in series. Regression models accounted for greater proportions of variation in crop loss when increasingly crop- and species-specific. Parameter values varied across primate species, probably reflecting differences in raiding tactics or perceptions of risk, thereby providing indices of how comfortable primates are on-farm. Median raiding-group sizes were markedly smaller than the typical sizes of social groups. The research suggests that key parameters of raiding events can be used to measure the behavioural impacts of deterrents to raiding. Furthermore, farmers will benefit most from methods that discourage raiding by multiple individuals, reduce the size of raiding groups, or decrease the amount of time primates are on-farm. This study demonstrates the importance of directly relating crop loss to the parameters of raiding events, using systematic observations of the behaviour of multiple primate species. PMID:23056378

  13. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A; Cole, Wesley J; Sun, Yinong

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high load and net-load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailment enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailment by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of the challenges associated with integration of variable generation resources.
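    The hour-by-hour idea behind the capacity-value and curtailment calculations can be sketched as follows; the six-hour toy profile, the top-hour count, and the must-run level are illustrative stand-ins for a full 8760-hour year, not ReEDS inputs:

```python
# Hypothetical hourly data standing in for a full 8760-hour year.
load = [50, 60, 80, 100, 90, 70]   # MW system load
vg   = [10, 25, 5, 20, 0, 30]      # MW variable generation output
must_run = 40                      # MW inflexible thermal minimum

# Capacity value: mean VG output during the top-k net-load (load - vg) hours,
# i.e. the hours when the rest of the system is most stressed.
k = 2
net_load = [l - g for l, g in zip(load, vg)]
top_hours = sorted(range(len(load)), key=lambda h: net_load[h], reverse=True)[:k]
capacity_value = sum(vg[h] for h in top_hours) / k

# Curtailment: VG that cannot be absorbed once load net of must-run generation
# has been served in each hour.
curtailed = sum(max(0, g - (l - must_run)) for l, g in zip(load, vg))
```

    Because every hour is evaluated chronologically, tail events (here, the zero-output hour coinciding with the highest net load) are captured rather than averaged away, which is the advantage claimed over representative-hour methods.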

  14. Composable security proof for continuous-variable quantum key distribution with coherent states.

    PubMed

    Leverrier, Anthony

    2015-02-20

    We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.

  15. Metal contamination disturbs biochemical and microbial properties of calcareous agricultural soils of the Mediterranean area.

    PubMed

    de Santiago-Martín, Ana; Cheviron, Natalie; Quintana, Jose R; González, Concepción; Lafuente, Antonio L; Mougin, Christian

    2013-04-01

    Mediterranean climate characteristics and carbonate content are key factors governing soil heavy-metal accumulation, and low organic matter (OM) content could limit the ability of microbial populations to cope with the resulting stress. We studied the effects of metal contamination on a combination of biological parameters in soils having these characteristics. With this aim, soils were spiked with a mixture of cadmium, copper, lead, and zinc at the two limit values proposed by current European legislation and incubated for ≤12 months. We then measured biochemical (phosphatase, urease, β-galactosidase, arylsulfatase, and dehydrogenase activities) and microbial (fungal and bacterial DNA concentration by quantitative polymerase chain reaction) parameters. All of the enzyme activities were strongly affected by metal contamination and showed the following inhibition sequence: phosphatase (30-64 %) < arylsulfatase (38-97 %) ≤ urease (1-100 %) ≤ β-galactosidase (30-100 %) < dehydrogenase (69-100 %). The high variability among soils was attributed to the differing proportions of fine mineral fraction, OM, crystalline iron oxides, and divalent cations in the soil solution. The decrease of fungal DNA concentration in metal-spiked soils was negligible, whereas the decrease of bacterial DNA was ~1-54 % at the lowest and 2-69 % at the highest level of contamination. The lowest bacterial DNA decrease occurred in soils with the highest OM, clay, and carbonate contents. Finally, given the strong inhibition of the biological parameters measured and the alteration of the fungal/bacterial DNA ratio, we provide strong evidence that disturbance of the system, even within the limit values of contamination proposed by the current European Directive, could alter key soil processes. These limit values should be established according to soil characteristics and/or revised when contamination is produced by a mixture of heavy metals.

  16. Quantitative Studies of the Optical and UV Spectra of Galactic Early B Supergiants

    NASA Technical Reports Server (NTRS)

    Searle, S. C.; Prinja, R. K.; Massa, D.; Ryans, R.

    2008-01-01

    We undertake an optical and ultraviolet spectroscopic analysis of a sample of 20 Galactic B0-B5 supergiants of luminosity classes Ia, Ib, Iab, and II. Fundamental stellar parameters are obtained from optical diagnostics, and a critical comparison of the model predictions to observed UV spectral features is made. Methods. Fundamental parameters (e.g., T_eff, log L_*, mass-loss rates, and CNO abundances) are derived for individual stars using CMFGEN, a non-LTE, line-blanketed model atmosphere code. The impact of these newly derived parameters on the Galactic B supergiant T_eff scale, mass discrepancy, and wind-momentum luminosity relation (WLR) is examined. Results. The B supergiant temperature scale derived here shows a reduction of about 1000-3000 K compared to previous results using unblanketed codes. Mass-loss rate estimates are in good agreement with predicted theoretical values, and all of the 20 B0-B5 supergiants analysed show evidence of CNO processing. A mass discrepancy still exists between spectroscopic and evolutionary masses, with the largest discrepancy occurring at log(L/L_sun) ≈ 5.4. The observed WLR values calculated for B0-B0.7 supergiants are higher than predicted values, whereas the reverse is true for B1-B5 supergiants. This means that the discrepancy between observed and theoretical values cannot be resolved by adopting clumped (i.e., lower) mass-loss rates as for O stars. The most surprising result is that, although CMFGEN succeeds in reproducing the optical stellar spectrum accurately, it fails to precisely reproduce key UV diagnostics, such as the N V and C IV P Cygni profiles. This problem arises because the models are not ionised enough and fail to reproduce the full extent of the observed absorption trough of the P Cygni profiles. Conclusions. Newly derived fundamental parameters for early B supergiants are in good agreement with similar work in the field. The most significant discovery, however, is the failure of CMFGEN to predict the correct ionisation fraction for some ions. Such findings add further support to revising the current standard model of massive star winds, as our understanding of these winds is incomplete without a precise knowledge of the ionisation structure and distribution of clumping in the wind. Key words. techniques: spectroscopic - stars: mass-loss - stars: supergiants - stars: abundances - stars: atmospheres - stars: fundamental parameters

  17. Information filtering via a scaling-based function.

    PubMed

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, we introduce, for the first time, a scaling-based algorithm (SCL) that is independent of recommendation list length, built on a hybrid of heat conduction and mass diffusion, by deriving the scaling function that relates the tunable parameter to the object average degree. The optimal value of the tunable parameter can be read off from the scaling function and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens, and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other aspects: resolving the accuracy-diversity dilemma, presenting high novelty, and addressing the key challenge of the cold-start problem.

  18. Broadcasting satellite feeder links - Characteristics and planning

    NASA Technical Reports Server (NTRS)

    Kiebler, J. W.

    1982-01-01

    The paper presents the results of recent studies by the Feeder Link Sub-Working Group of the FCC Advisory Committee for the 1983 Regional Administrative Radio Conference (RARC). These studies conclude that specification of a few key parameters will make feeder link planning relatively straightforward. Feeder links can be located anywhere within a country if satellite orbit locations are separated by 10 deg for adjacent service areas and key parameter values presented in the paper are adopted. Colocated satellites serving a common service area need special attention to attain sufficient isolation between a desired channel and its adjacent cross-polarized channels and alternate co-polarized channels. In addition to presenting planning conclusions by the Advisory Committee, the paper presents and analyzes actions of the International Radio Consultative Committee's Conference Planning Meeting (CPM) concerning feeder links. The CPM reached conclusions similar to, and compatible with, those of the Advisory Committee.

  19. User manual of the CATSS system (version 1.0) communication analysis tool for space station

    NASA Technical Reports Server (NTRS)

    Tsang, C. S.; Su, Y. T.; Lindsey, W. C.

    1983-01-01

    The Communication Analysis Tool for the Space Station (CATSS) is a FORTRAN-language software package capable of predicting the communications link performance for the Space Station (SS) communication and tracking (C & T) system. The interactive software package currently runs on DEC/VAX computers. CATSS models and evaluates the various C & T links of the SS, covering modulation schemes such as binary phase-shift keying (BPSK), BPSK with direct-sequence spread spectrum (PN/BPSK), and M-ary frequency-shift keying with frequency hopping (FH/MFSK). An optical space communication link is also included. CATSS is a C & T system engineering tool used to predict and analyze system performance in different link environments. Identification of system weaknesses is achieved through evaluation of performance with varying system parameters. System tradeoffs for different values of system parameters are made based on the performance prediction.

  20. The drift velocity monitoring system of the CMS barrel muon chambers

    NASA Astrophysics Data System (ADS)

    Altenhöfer, Georg; Hebbeker, Thomas; Heidemann, Carsten; Reithler, Hans; Sonnenschein, Lars; Teyssier, Daniel

    2018-04-01

    The drift velocity is a key parameter of drift chambers. Its value depends on several parameters: electric field, pressure, temperature, gas mixture, and contamination, for example, by ambient air. A dedicated Velocity Drift Chamber (VDC) with 1-L volume has been built at the III. Phys. Institute A, RWTH Aachen, in order to monitor the drift velocity of all CMS barrel muon Drift Tube chambers. A system of six VDCs was installed at CMS and has been running since January 2011. We present the VDC monitoring system, its principle of operation, and measurements performed.

  1. Key parameters of the sediment surface morphodynamics in an estuary - An assessment of model solutions

    NASA Astrophysics Data System (ADS)

    Sampath, D. M. R.; Boski, T.

    2018-05-01

    Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), the current velocity power of the erosion rate function (N), and the sea-level rise rate. An assessment of the sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion, erosion being especially marked in the mudflats between mean sea level and the low-tide level, while accretion occurred mainly in the subtidal channel. The average SSE change depended jointly on the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the intertidal and subtidal compartments of the estuarine system evolve differently according to the dominant process (erosion or accretion). As the Guadiana estuarine system shows dominantly erosional behaviour in the context of sea-level rise and the reduction of sediment supply after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of simulation cells in the studied area. Such a limit of accuracy can be acceptable for an idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation.
Therefore, the idealized but cost-effective HESM model will be suitable for estimating the morphological impacts of sea-level rise on estuarine systems on a decadal timescale.

  2. Perceiving while producing: Modeling the dynamics of phonological planning

    PubMed Central

    Roon, Kevin D.; Gafos, Adamantios I.

    2016-01-01

    We offer a dynamical model of phonological planning that provides a formal instantiation of how the speech production and perception systems interact during online processing. The model is developed on the basis of evidence from an experimental task that requires concurrent use of both systems, the so-called response-distractor task in which speakers hear distractor syllables while they are preparing to produce required responses. The model formalizes how ongoing response planning is affected by perception and accounts for a range of results reported across previous studies. It does so by explicitly addressing the setting of parameter values in representations. The key unit of the model is that of the dynamic field, a distribution of activation over the range of values associated with each representational parameter. The setting of parameter values takes place by the attainment of a stable distribution of activation over the entire field, stable in the sense that it persists even after the response cue in the above experiments has been removed. This and other properties of representations that have been taken as axiomatic in previous work are derived by the dynamics of the proposed model. PMID:27440947

  3. A theoretical and experimental study on the pulsed laser dressing of bronze-bonded diamond grinding wheels

    NASA Astrophysics Data System (ADS)

    Deng, H.; Chen, G. Y.; Zhou, C.; Zhou, X. C.; He, J.; Zhang, Y.

    2014-09-01

    A series of theoretical analyses and experimental investigations were performed to examine a pulsed fiber-laser tangential profiling and radial sharpening technique for bronze-bonded diamond grinding wheels. The mechanisms for the pulsed laser tangential profiling and radial sharpening of grinding wheels were theoretically analyzed, and the four key processing parameters that determine the quality, accuracy, and efficiency of pulsed laser dressing, namely, the laser power density, laser spot overlap ratio, laser scanning track line overlap ratio, and number of laser scanning cycles, were proposed. Further, by utilizing cylindrical bronze wheels (without diamond grains) and bronze-bonded diamond grinding wheels as the experimental subjects, the effects of these four processing parameters on the removal efficiency and the surface smoothness of the bond material after pulsed laser ablation, as well as the effects on the contour accuracy of the grinding wheels, the protrusion height of the diamond grains, the sharpness of the grain cutting edges, and the graphitization degree of the diamond grains after pulsed laser dressing, were explored. The optimal values of the four key processing parameters were identified.

  4. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills.

    PubMed

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-06-01

    Methane (CH₄) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the waste categories required by the IPCC and Afvalzorg models. In general, the single-phase model, LandGEM, significantly overestimated CH₄ generation because its default values for the key parameters are too high for low-organic waste scenarios. The key parameters were the biochemical CH₄ potential (BMP) and the CH₄ generation rate constant (k-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH₄ generation at Danish landfills because it defines more appropriate waste categories rather than traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model better captures the influence not only of the total disposed waste amount but also of the various waste categories. By using laboratory-determined BMPs and k-values for shredder, sludge, mixed bulky waste, and street-cleaning waste, the Afvalzorg model was revised. The revised model estimated smaller cumulative CH₄ generation at the four Danish landfills (from the start of disposal until 2020 and until 2100). Through a CH₄ mass balance approach, fugitive CH₄ emissions from whole sites and from a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field measurements, indicating that the revised Afvalzorg model provides practical and accurate estimates of Danish LFG emissions. This study is valuable for both researchers and engineers aiming to predict, control, and mitigate fugitive CH₄ emissions from landfills receiving low-organic waste. Landfill operators use first-order decay (FOD) models to estimate methane (CH₄) generation. A single-phase model (LandGEM) and a traditional model (IPCC) can overestimate generation in low-organic waste scenarios. Site-specific data were important for calibrating key parameter values in the FOD models. The comparison of the revised Afvalzorg model outcomes with field measurements at four Danish landfills provides a guideline for revising the Pollutants Release and Transfer Registers (PRTR) model, as well as highlighting waste fractions that can emit notable amounts of CH₄ at modern landfills.
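    A generic first-order-decay generation calculation of the kind these models share can be sketched as follows; the parameter values and deposit history are illustrative, not the site-specific Danish values:

```python
import math

def fod_ch4(deposits, bmp, k, year):
    """First-order-decay CH4 generation rate (m3/yr) in a given year.

    deposits: {deposit_year: waste_mass_tonnes}
    bmp: biochemical methane potential (m3 CH4 per tonne of waste)
    k: first-order decay rate constant (1/yr)
    """
    total = 0.0
    for t0, mass in deposits.items():
        age = year - t0
        if age >= 0:
            # Each year's deposit decays exponentially from its disposal year.
            total += bmp * mass * k * math.exp(-k * age)
    return total

# Illustrative (not site-specific) inputs: 10 000 t/yr disposed for 5 years,
# BMP = 20 m3 CH4/t, k = 0.1 /yr.
deposits = {y: 10_000 for y in range(2000, 2005)}
rate_2010 = fod_ch4(deposits, bmp=20, k=0.1, year=2010)
```

    The overestimation issue the abstract describes corresponds to feeding this formula default BMP and k values derived from organic-rich MSW when the actual waste (shredder, sludge, bulky waste) has much lower values.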

  5. Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.

    PubMed

    Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N

    2016-07-01

    There is unmet need among patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed the cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and the choice of time horizon were key model drivers. Our results suggest that structural model design and data assumptions may have driven previous cost-effectiveness results and, ultimately, decisions based on economic value. We therefore conclude that future economic models in chronic pain must be designed to be fully transparent, and we hope our open-source code proves useful in moving toward a common approach to modelling pain that includes robust sensitivity analyses of structural and parameter uncertainty.

  6. Key-value store with internal key-value storage interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Ting, Dennis P. J.

    A key-value store is provided having one or more key-value storage interfaces. A key-value store on at least one compute node comprises a memory for storing a plurality of key-value pairs; and an abstract storage interface comprising a software interface module that communicates with at least one persistent storage device providing a key-value interface for persistent storage of one or more of the plurality of key-value pairs, wherein the software interface module provides the one or more key-value pairs to the at least one persistent storage device in a key-value format. The abstract storage interface optionally processes one or more batch operations on the plurality of key-value pairs. A distributed embodiment for a partitioned key-value store is also provided.
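    The abstract-storage-interface pattern described here can be sketched as follows; class and method names are ours, standing in for the described design rather than reproducing the actual implementation:

```python
from abc import ABC, abstractmethod

class KeyValueBackend(ABC):
    """Abstract storage interface: any persistent device exposing a
    key-value API can sit behind the in-memory store."""

    @abstractmethod
    def put(self, key, value): ...

    @abstractmethod
    def get(self, key): ...

    @abstractmethod
    def batch_put(self, pairs): ...  # optional batch operation on many pairs

class DictBackend(KeyValueBackend):
    """Toy 'persistent' backend standing in for a real key-value device."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)
    def batch_put(self, pairs):
        self._store.update(pairs)

class ComputeNodeStore:
    """Compute-node store: fast in-memory dict, persisted via the backend."""
    def __init__(self, backend):
        self._mem = {}
        self._backend = backend

    def put(self, key, value):
        self._mem[key] = value
        self._backend.put(key, value)  # hand off in key-value format

    def get(self, key):
        if key in self._mem:
            return self._mem[key]
        return self._backend.get(key)  # fall through to persistent storage
```

    Because the backend is addressed through the abstract interface, swapping the toy dict for a real persistent key-value device changes only the backend class, not the compute-node store.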

  7. Optimizing Photosynthetic and Respiratory Parameters Based on the Seasonal Variation Pattern in Regional Net Ecosystem Productivity Obtained from Atmospheric Inversion

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.

    2014-12-01

    In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of BEPS with the default parameter values. These results suggest that atmospheric CO2 data can be used to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.

  8. The Differentiation of Response Numerosities in the Pigeon

    PubMed Central

    Machado, Armando; Rodrigues, Paulo

    2007-01-01

    Two experiments examined how pigeons differentiate response patterns along the dimension of number. In Experiment 1, 5 pigeons received food after pecking the left key at least N times and then switching to the right key (Mechner’s Fixed Consecutive Number schedule). Parameter N varied across conditions from 4 to 32. Results showed that run length on the left key followed a normal distribution whose mean and standard deviation increased linearly with N; the coefficient of variation approached a constant value (the scalar property). In Experiment 2, 4 pigeons received food with probability p for pecking the left key exactly four times and then switching. If that did not happen, the pigeons still could receive food by returning to the left key and pecking it for a total of at least 16 times and then switching. Parameter p varied across conditions from 1.0 to .25. Results showed that when p = 1.0 or p = .5, pigeons learned two response numerosities within the same condition. When p = .25, each pigeon adapted to the schedule differently. Two of them emitted first runs well described by a mixture of two normal distributions, one with mean close to 4 and the other with mean close to 16 pecks. A mathematical model for the differentiation of response numerosity in Fixed Consecutive Number schedules is proposed. PMID:17970413
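    The scalar property reported in Experiment 1 can be illustrated with a small simulation; the proportionality constants below are illustrative, not values fitted to the pigeon data:

```python
import random
random.seed(1)

# Scalar property: run lengths are roughly Normal with both mean and
# standard deviation proportional to the required count N, so the
# coefficient of variation (sd / mean) is approximately constant across N.
def simulate_cv(n_required, cv=0.15, trials=10_000):
    mu = 1.1 * n_required   # mean run length slightly above N (illustrative)
    sd = cv * mu            # sd grows linearly with the mean
    runs = [random.gauss(mu, sd) for _ in range(trials)]
    mean = sum(runs) / trials
    var = sum((r - mean) ** 2 for r in runs) / trials
    return (var ** 0.5) / mean

# The estimated CV stays near 0.15 for every required count, mirroring the
# constant coefficient of variation observed as N varied from 4 to 32.
cvs = {n: simulate_cv(n) for n in (4, 8, 16, 32)}
```

    A non-scalar timing or counting process would instead show the CV shrinking as N grows (e.g. sd growing only with the square root of the mean).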

  9. Characteristics and Impact Factors of Parameter Alpha in the Nonlinear Advection-Aridity Method for Estimating Evapotranspiration at Interannual Scale in the Loess Plateau

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Liu, W.; Ning, T.

    2017-12-01

    Land surface actual evapotranspiration plays a key role in the global water and energy cycles. Accurate estimation of evapotranspiration is crucial for understanding the interactions between the land surface and the atmosphere, as well as for managing water resources. The nonlinear advection-aridity approach was formulated by Brutsaert to estimate actual evapotranspiration in 2015. Subsequently, this approach has been verified, applied and developed by many scholars. The estimation, impact factors and correlation analysis of the parameter alpha (αe) of this approach has become important aspects of the research. According to the principle of this approach, the potential evapotranspiration (ETpo) (taking αe as 1) and the apparent potential evapotranspiration (ETpm) were calculated using the meteorological data of 123 sites of the Loess Plateau and its surrounding areas. Then the mean spatial values of precipitation (P), ETpm and ETpo for 13 catchments were obtained by a CoKriging interpolation algorithm. Based on the runoff data of the 13 catchments, actual evapotranspiration was calculated using the catchment water balance equation at the hydrological year scale (May to April of the following year) by ignoring the change of catchment water storage. Thus, the parameter was estimated, and its relationships with P, ETpm and aridity index (ETpm/P) were further analyzed. The results showed that the general range of annual parameter value was 0.385-1.085, with an average value of 0.751 and a standard deviation of 0.113. The mean annual parameter αe value showed different spatial characteristics, with lower values in northern and higher values in southern. The annual scale parameter linearly related with annual P (R2=0.89) and ETpm (R2=0.49), while it exhibited a power function relationship with the aridity index (R2=0.83). 
Considering that ETpm is a variable in the nonlinear advection-aridity approach, in which its effect has already been incorporated, a relationship between precipitation and the parameter (αe=1.0×10-3*P+0.301) was developed. The value of αe in this study is lower than those reported in the published literature; the reason is unclear at this point and needs further investigation. The preliminary application of the nonlinear advection-aridity approach in the Loess Plateau has shown promising results.
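
The water-balance step and the linear fit reported above can be sketched as follows. The precipitation and runoff numbers here are synthetic and purely illustrative (not the Loess Plateau data), and the αe series is generated from the reported linear form just to show how such a fit is recovered:

```python
import numpy as np

# Synthetic annual precipitation P and runoff Q (mm) for several
# hydrological years -- illustrative values only.
P = np.array([420.0, 510.0, 380.0, 610.0, 470.0])
Q = np.array([40.0, 65.0, 30.0, 95.0, 55.0])

# Water balance at the hydrological-year scale, neglecting storage change:
ET = P - Q

# Suppose alpha_e had been estimated for each year; here it is generated
# from the linear form reported in the abstract (alpha_e = 1.0e-3*P + 0.301)
# purely so the fit below has something exact to recover.
alpha_e = 1.0e-3 * P + 0.301

# Least-squares linear fit of alpha_e against P.
slope, intercept = np.polyfit(P, alpha_e, 1)
```

On noiseless synthetic data the fit recovers the assumed slope and intercept exactly; with real catchment data the scatter around the line is what the reported R2 values measure.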

  10. Quantification of tidal parameters from Solar System data

    NASA Astrophysics Data System (ADS)

    Lainey, Valéry

    2016-11-01

    Tidal dissipation is the main driver of the orbital evolution of natural satellites and a key to understanding exoplanetary system configurations. Despite its importance, its quantification from observations still remains difficult for most objects of our own Solar System. In this work, we review the method that has been used to determine the tidal parameters directly from observations, with emphasis on the Love number k_2 and the tidal quality factor Q. Up-to-date values of these tidal parameters are summarized. Finally, we assess the possible determination of the tidal ratio k_2/Q of Uranus and Neptune. This may be particularly relevant for coming astrometric campaigns and future space missions focused on these systems.

  11. An analytic formula for the supercluster mass function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seunghwan; Lee, Jounghun, E-mail: slim@astro.umass.edu, E-mail: jounghun@astro.snu.ac.kr

    2014-03-01

    We present an analytic formula for the supercluster mass function, which is constructed by modifying the extended Zel'dovich model for the halo mass function. The formula has two characteristic parameters whose best-fit values are determined by fitting to the numerical results from N-body simulations for the standard ΛCDM cosmology. The parameters are found to be independent of redshifts and robust against variation of the key cosmological parameters. Under the assumption that the same formula for the supercluster mass function is valid for non-standard cosmological models, we show that the relative abundance of the rich superclusters should be a powerful indicator of any deviation of the real universe from the prediction of the standard ΛCDM model.

  12. Optimizing chaos time-delay signature in two mutually-coupled semiconductor lasers through controlling internal parameters

    NASA Astrophysics Data System (ADS)

    Mu, Penghua; Pan, Wei; Yan, Lianshan; Luo, Bin; Zou, Xihua

    2017-04-01

    In this contribution, the effects of two key internal parameters, i.e. the linewidth-enhancement factor (α) and gain nonlinearity (𝜀), on time-delay signature (TDS) concealment in two mutually-coupled semiconductor lasers (MCSLs) are numerically investigated. In particular, the influences of α and 𝜀 on the TDS concealment are compared and discussed systematically by setting different values of frequency detuning (Δf) and injection strength (η). The results show that the TDS can be better suppressed with a higher α or a lower 𝜀 in the MCSLs. Two sets of desired optical chaos with the TDS strongly suppressed can be generated simultaneously in a wide injection parameter plane provided that α and 𝜀 are properly chosen, indicating that optimizing TDS suppression through controlling internal parameters can be generalized to any delayed-coupled laser system.

  13. Handwriting: Feature Correlation Analysis for Biometric Hashes

    NASA Astrophysics Data System (ADS)

    Vielhauer, Claus; Steinmetz, Ralf

    2004-12-01

    In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.
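
The three components of the methodology (intrapersonal scatter, interpersonal entropy, and the correlation between them) can be sketched on toy data. The feature matrix, the noise model, and the five-bin discretisation below are all invented for illustration and are not the paper's evaluation protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_samples, n_features = 10, 20, 5

# Synthetic feature matrix: each user has a stable per-feature mean
# (interpersonal variation) plus writing-to-writing noise (intrapersonal).
means = rng.uniform(0, 10, size=(n_users, 1, n_features))
data = means + rng.normal(0, 0.5, size=(n_users, n_samples, n_features))

# Intrapersonal scatter: average within-user standard deviation per feature.
scatter = data.std(axis=1).mean(axis=0)

# Interpersonal entropy: Shannon entropy of the user means, discretised
# into bins, per feature (higher = larger usable value space).
entropy = np.empty(n_features)
for j in range(n_features):
    hist, _ = np.histogram(means[:, 0, j], bins=5)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy[j] = -(p * np.log2(p)).sum()

# Correlation between the two measures across features.
corr = np.corrcoef(scatter, entropy)[0, 1]
```

A feature is attractive for the biometric hash when its scatter is low (stable per person) while its entropy is high (discriminative across persons).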

  14. Analyses of microstructural and elastic properties of porous SOFC cathodes based on focused ion beam tomography

    NASA Astrophysics Data System (ADS)

    Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan

    2015-01-01

    Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling of digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of the raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated. Analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying extents of effect on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold ranges resulted in different degrees of sensitivity for a given parameter. The estimated porosity and tortuosity were more sensitive than the surface-area-to-volume ratio. Pore and neck size were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
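
The threshold-sensitivity idea can be illustrated on a synthetic grayscale volume; the volume, the intensity distribution, and the threshold range below are invented and stand in for the FIB/SEM image stack:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 8-bit-like grayscale volume: dark voxels ~ pores, bright ~ solid.
volume = rng.normal(120, 40, size=(50, 50, 50)).clip(0, 255)

def porosity(vol, threshold):
    """Fraction of voxels classified as pore (grayscale below the threshold)."""
    return float((vol < threshold).mean())

# Sweep the segmentation threshold and record the estimated porosity.
thresholds = np.arange(80, 161, 20)
porosities = [porosity(volume, t) for t in thresholds]
```

Because every voxel below the threshold counts as pore, the estimated porosity grows monotonically with the chosen threshold, which is exactly why parameters derived from the pore phase inherit a strong threshold sensitivity.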

  15. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
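
The three steps can be sketched with a cheap toy objective standing in for the expensive GCM evaluation metric. The objective, the parameter ranges, and the compact Nelder-Mead implementation below are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Stand-in "model skill" objective: in a real GCM this would be the
# comprehensive evaluation metric; here it is a synthetic function of
# three candidate parameters, only two of which actually matter.
def objective(p):
    a, b, c = p
    return (a - 0.4) ** 2 + 2.0 * (b - 1.5) ** 2 + 0.0 * c

default = np.array([1.0, 1.0, 1.0])

# Step 1: one-at-a-time sensitivity screen around the default values.
sens = []
for i in range(3):
    lo, hi = default.copy(), default.copy()
    lo[i] *= 0.5
    hi[i] *= 1.5
    sens.append(abs(objective(hi) - objective(lo)))
sensitive = [i for i, s in enumerate(sens) if s > 1e-6]

# Step 2: coarse grid search over the sensitive parameters only,
# to pick a good starting point for the local optimiser.
best = default.copy()
for a in np.linspace(0.0, 2.0, 5):
    for b in np.linspace(0.0, 2.0, 5):
        cand = default.copy()
        cand[sensitive[0]], cand[sensitive[1]] = a, b
        if objective(cand) < objective(best):
            best = cand

# Step 3: downhill simplex (Nelder-Mead) refinement, run only in the
# reduced space of the sensitive parameters.
def nelder_mead(f, x0, step=0.2, iters=300):
    n = len(x0)
    pts = [np.array(x0, float)]
    for i in range(n):
        p = np.array(x0, float)
        p[i] += step
        pts.append(p)
    vals = [f(p) for p in pts]
    for _ in range(iters):
        order = np.argsort(vals)
        pts = [pts[k] for k in order]
        vals = [vals[k] for k in order]
        cen = np.mean(pts[:-1], axis=0)
        refl = cen + (cen - pts[-1])            # reflection
        fr = f(refl)
        if fr < vals[0]:
            exp = cen + 2.0 * (cen - pts[-1])   # expansion
            fe = f(exp)
            pts[-1], vals[-1] = (exp, fe) if fe < fr else (refl, fr)
        elif fr < vals[-2]:
            pts[-1], vals[-1] = refl, fr
        else:
            con = cen + 0.5 * (pts[-1] - cen)   # contraction
            fc = f(con)
            if fc < vals[-1]:
                pts[-1], vals[-1] = con, fc
            else:                               # shrink toward the best point
                for k in range(1, n + 1):
                    pts[k] = pts[0] + 0.5 * (pts[k] - pts[0])
                    vals[k] = f(pts[k])
    order = np.argsort(vals)
    return pts[order[0]]

def reduced(q):
    # Insensitive parameters stay at their defaults.
    p = default.copy()
    p[sensitive] = q
    return objective(p)

optimum = nelder_mead(reduced, best[sensitive])
```

The screening step drops the insensitive parameter before the simplex runs, which is the source of the reported acceleration: the simplex operates in a smaller space and starts from a grid-selected point rather than an arbitrary default.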

  16. Conversion and matched filter approximations for serial minimum-shift keyed modulation

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.

    1982-01-01

    Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using series filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel in-phase and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10⁻⁶.

  17. Integration of quantum key distribution and private classical communication through continuous variable

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping

    2017-12-01

    In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum communication and classical communication.
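
The sign/value encoding idea can be sketched numerically. This is a purely classical toy of the encoding step (one quadrature, no channel noise, no quantum-optical model), with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Encrypted classical bits to embed, and Gaussian modulation magnitudes.
bits = rng.integers(0, 2, size=n)
gauss = np.abs(rng.normal(0.0, 2.0, size=n))

# Encoding: the sign of the quadrature carries the classical bit,
# while the (Gaussian) magnitude carries the QKD modulation.
quadrature = np.where(bits == 1, gauss, -gauss)

# Noiseless decoding: recover the bit from the sign and the Gaussian
# value from the magnitude.
decoded_bits = (quadrature > 0).astype(int)
decoded_gauss = np.abs(quadrature)
```

Because sign and magnitude are independent degrees of freedom of the same quadrature, the two data streams share one physical signal without interfering, which is the point of the integration.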

  18. Composite multi-parameter ranking of real and virtual compounds for design of MC4R agonists: renaissance of the Free-Wilson methodology.

    PubMed

    Nilsson, Ingemar; Polla, Magnus O

    2012-10-01

    Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.
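
The Free-Wilson idea (activity as a baseline plus additive substituent contributions, fitted by linear regression on indicator variables) can be sketched as follows. The sites, substituents, and contribution values are invented; the paper's actual models cover 11 experimental parameters:

```python
import numpy as np

# Free-Wilson sketch: activity = baseline + additive substituent
# contributions.  Two substitution sites, invented contribution values.
site1 = ["A", "B"]          # reference level: "A"
site2 = ["X", "Y", "Z"]     # reference level: "X"
true = {"B": 0.8, "Y": -0.4, "Z": 1.1}
baseline = 5.0

compounds = [(s1, s2) for s1 in site1 for s2 in site2]

def design_row(s1, s2):
    # Indicator (dummy) variables, dropping one reference level per site.
    return [1.0, float(s1 == "B"), float(s2 == "Y"), float(s2 == "Z")]

X = np.array([design_row(s1, s2) for s1, s2 in compounds])
y = np.array([baseline + true.get(s1, 0.0) + true.get(s2, 0.0)
              for s1, s2 in compounds])

# Ordinary least squares fit of the baseline and contributions.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ coef
```

Once fitted, the same coefficient table scores virtual compounds (any untested substituent combination) by simple addition, which is what makes the virtual ranking in the abstract cheap to compute.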

  19. ESR paper on structured reporting in radiology.

    PubMed

    2018-02-01

    Structured reporting is emerging as a key element of optimising radiology's contribution to patient outcomes and ensuring the value of radiologists' work. It is being developed and supported by many national and international radiology societies, based on the recognised need to use uniform language and structure to accurately describe radiology findings. Standardisation of report structures ensures that all relevant areas are addressed. Standardisation of terminology prevents ambiguity in reports and facilitates comparability of reports. The use of key data elements and quantified parameters in structured reports ("radiomics") permits automatic functions (e.g. TNM staging), potential integration with other clinical parameters (e.g. laboratory results), data sharing (e.g. registries, biobanks) and data mining for research, teaching and other purposes. This article outlines the requirements for a successful structured reporting strategy (definition of content and structure, standard terminologies, tools and protocols). A potential implementation strategy is outlined. Moving from conventional prose reports to structured reporting is endorsed as a positive development, and must be an international effort, with international design and adoption of structured reporting templates that can be translated and adapted in local environments as needed. Industry involvement is key to success, based on international data standards and guidelines. • Standardisation of radiology report structure ensures completeness and comparability of reports. • Use of standardised language in reports minimises ambiguity. • Structured reporting facilitates automatic functions, integration with other clinical parameters and data sharing. • International and inter-society cooperation is key to developing successful structured report templates. • Integration with industry providers of radiology-reporting software is also crucial.

  20. AmapSim: A Structural Whole-plant Simulator Based on Botanical Knowledge and Designed to Host External Functional Models

    PubMed Central

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-01-01

    Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310

  1. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green's function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, uncertainties are treated by sensitivity analyses for key modelling parameters. Constraining the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. 
Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is a factor of 2, intrinsic to the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.

  2. Parameter optimisation for a better representation of drought by LSMs: inverse modelling vs. sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe

    2017-09-01

    Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, being difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. The optimal value is then selected by using (1) a simple inverse modelling technique, comparing simulated and observed LAI and (2) a more complex method consisting of integrating observed LAI in ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The evaluation of the MaxAWC estimates from both methods is done using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Significant correlations (p value < 0.01) between Bag and GY are found for up to 36 and 53 % of the administrative units for the inverse modelling and LDAS tuning methods, respectively. It is found that the LDAS tuning experiment gives more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.
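
The simple inverse modelling technique (run the model for candidate MaxAWC values and keep the one whose simulated LAI best matches the observations) can be sketched with a toy stand-in for the LSM. The saturating LAI function and all numbers below are invented; ISBA is far more complex:

```python
import numpy as np

# Toy stand-in for the LSM: annual maximum LAI as a saturating function
# of MaxAWC (mm).  Purely illustrative.
def simulate_lai(max_awc):
    return 6.0 * max_awc / (max_awc + 100.0)

true_max_awc = 140.0
observed_lai = simulate_lai(true_max_awc)  # pretend satellite observation

# Inverse modelling: score candidate MaxAWC values against the observed
# LAI and keep the one with the smallest misfit.
candidates = np.arange(50.0, 251.0, 10.0)
misfits = [abs(simulate_lai(c) - observed_lai) for c in candidates]
best_max_awc = candidates[int(np.argmin(misfits))]
```

With real data the misfit would be an RMSE over the 15-year LAI time series rather than a single value, but the selection principle is the same.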

  3. Stacking faults density driven collapse of magnetic energy in hcp-cobalt nano-magnets

    NASA Astrophysics Data System (ADS)

    Nong, H. T. T.; Mrad, K.; Schoenstein, F.; Piquemal, J.-Y.; Jouini, N.; Leridon, B.; Mercone, S.

    2017-06-01

    Cobalt nanowires with different shape parameters were synthesized via the polyol process. By calculating the magnetic energy product (BH max) both for dried nano-powder and for nanowires in their synthesis solution, we observed that the BH max values were unexpectedly independent of the nanowire shape. A good alignment of the nanowires leads to a higher BH max value. Our results show that the key parameter driving the magnetic energy product of the cobalt nanowires is the stacking fault density. An exponential collapse of the magnetic energy is observed at very low percentages of structural faults. Cobalt nanowires with almost perfect hcp crystalline structures should present high magnetic energy, which is promising for application in rare earth-free permanent magnets. Oral talk at the 8th International Workshop on Advanced Materials Science and Nanotechnology (IWAMSN2016), 8-12 November 2016, Ha Long City, Vietnam.

  4. Comment on "High resolution coherence analysis between planetary and climate oscillations"

    NASA Astrophysics Data System (ADS)

    Holm, Sverre

    2018-07-01

    The paper by Scafetta entitled "High resolution coherence analysis between planetary and climate oscillations", May 2016 claims coherence between planetary movements and the global temperature anomaly. The claim is based on data analysis using the canonical covariance analysis (CCA) estimator for the magnitude squared coherence (MSC). It assumes a model with a predetermined number of sinusoids for the climate data. The results are highly dependent on this prior assumption, and may therefore be criticized for being based on the opposite of a null hypothesis. More importantly, since values of key parameters in the CCA method are not given, some experiments have been performed using the software of the original authors of the CCA estimator. The purpose was to replicate the results of Scafetta using what was perceived to be the most probable parameter values. Despite best efforts, this was not possible.

  5. Determination of the Critical Micelle Concentration of Neutral and Ionic Surfactants with Fluorometry, Conductometry, and Surface Tension-A Method Comparison.

    PubMed

    Scholz, Norman; Behnke, Thomas; Resch-Genger, Ute

    2018-01-01

    Micelles are of increasing importance as versatile carriers for hydrophobic substances and nanoprobes for a wide range of pharmaceutical, diagnostic, medical, and therapeutic applications. A key parameter indicating the formation and stability of micelles is the critical micelle concentration (CMC). In this respect, we determined the CMC of common anionic, cationic, and non-ionic surfactants fluorometrically using different fluorescent probes and fluorescence parameters for signal detection and compared the results with conductometric and surface tension measurements. Based upon these results, the requirements, advantages, and pitfalls of each method are discussed. Our study underlines the versatility of fluorometric methods that do not impose specific requirements on surfactants and are especially suited for the quantification of very low CMC values. Conductivity and surface tension measurements yield smaller uncertainties particularly for high CMC values, yet are more time- and substance-consuming and not suitable for every surfactant.
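
All three methods locate the CMC as a breakpoint where a measured response changes slope as a function of concentration. A minimal sketch of that breakpoint detection on synthetic data (a surface-tension-like response; the data, slopes, and grid are invented):

```python
import numpy as np

# Synthetic response (e.g. surface tension) vs log10(concentration):
# linear decrease below the CMC, flat above it.  Values are invented.
log_c = np.linspace(-5.0, -1.0, 41)
true_log_cmc = -3.0
response = np.where(log_c < true_log_cmc,
                    70.0 + 15.0 * (log_c - true_log_cmc), 70.0)

def two_segment_sse(breakpoint):
    """Sum of squared errors of separate line fits left/right of a breakpoint."""
    sse = 0.0
    for mask in (log_c <= breakpoint, log_c > breakpoint):
        if mask.sum() < 2:
            return np.inf
        p = np.polyfit(log_c[mask], response[mask], 1)
        sse += float(((np.polyval(p, log_c[mask]) - response[mask]) ** 2).sum())
    return sse

# Grid of candidate breakpoints; the CMC estimate minimises the SSE.
grid = log_c[2:-2]
est_log_cmc = grid[int(np.argmin([two_segment_sse(b) for b in grid]))]
```

For a fluorometric determination the response would be a probe intensity or intensity ratio instead of surface tension, but the two-segment fit is the same.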

  6. Modeling a Material's Instantaneous Velocity during Acceleration Driven by a Detonation's Gas-Push Process

    NASA Astrophysics Data System (ADS)

    Backofen, Joseph E.

    2005-07-01

    This paper will describe both the scientific findings and the model developed in order to quantify a material's instantaneous velocity versus position, time, or the expansion ratio of an explosive's gaseous products while its gas pressure is accelerating the material. The formula derived to represent this gas-push process for the 2nd stage of the BRIGS Two-Step Detonation Propulsion Model was found to fit the published experimental data available for twenty explosives very well. When the formula's two key parameters (the ratio Vinitial / Vfinal and ExpansionRatioFinal) were adjusted slightly from the average values that closely describe many explosives to values representing measured data for a particular explosive, the formula's representation of that explosive's gas-push process was improved. The time derivative of the velocity formula representing acceleration and/or pressure compares favorably to Jones-Wilkins-Lee equation-of-state model calculations performed using published JWL parameters.

  7. Effect of the medium's density on the hydrocyclonic separation of waste plastics with different densities.

    PubMed

    Fu, Shuangcheng; Fang, Yong; Yuan, Huixin; Tan, Wanjiang; Dong, Yiwen

    2017-09-01

    Hydrocyclones can be applied to recycle waste plastics with different densities through separating plastics based on their differences in densities. In the process, the medium's density is one of the key parameters, and its optimum value is not simply the average of the densities of the two kinds of plastics being separated. Based on a force analysis and the equation of motion of particles in the hydrocyclone, a formula to calculate the optimum separation medium density has been deduced. This value of the medium's density is a function of various parameters including the diameter, density, radial position and tangential velocity of the particles, and the viscosity of the medium. Tests on the separation performance of the hydrocyclone have been conducted with PET and PVC particles. The theoretical result appeared to be in good agreement with experimental results. Copyright © 2017 Elsevier Ltd. All rights reserved.
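
The underlying force balance can be sketched with a textbook Stokes-regime expression (centrifugal force, centrifugal buoyancy of the displaced medium, and Stokes drag); this is not the paper's deduced formula, and the densities and flow values are illustrative only:

```python
# Stokes-regime radial migration velocity of a particle in a hydrocyclone:
# positive = outward (toward the wall, reporting to the underflow),
# negative = inward (toward the axis, reporting to the overflow).
def radial_velocity(d, rho_p, rho_m, v_t, r, mu):
    """d: particle diameter (m); rho_p/rho_m: particle/medium density (kg/m3);
    v_t: tangential velocity (m/s); r: radial position (m); mu: viscosity (Pa*s)."""
    return d ** 2 * (rho_p - rho_m) * v_t ** 2 / (18.0 * mu * r)

mu = 1.0e-3          # water-like medium viscosity, Pa*s
v_t, r = 5.0, 0.05   # tangential velocity (m/s) and radial position (m)
d = 5.0e-4           # particle diameter, m

rho_light, rho_heavy = 1300.0, 1450.0  # two plastics, kg/m3 (illustrative)
rho_medium = 1375.0                    # medium density chosen between the two

u_light = radial_velocity(d, rho_light, rho_medium, v_t, r, mu)
u_heavy = radial_velocity(d, rho_heavy, rho_medium, v_t, r, mu)
```

With the medium density between the two plastic densities, the two particle types migrate in opposite radial directions, which is the separation condition; the paper's formula refines the choice of medium density beyond this simple picture.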

  8. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. 
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
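
The conventional SA loop described above (random start, temperature-controlled acceptance of worse moves, shrinking search region) can be sketched as follows; the objective function, cooling schedule, and bounds are all invented for illustration, and this is the single-trajectory baseline rather than the RBSA variant:

```python
import math
import random

random.seed(42)

def objective(x, y):
    # Simple test objective; global minimum at (3, -1).
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

# Random starting configuration within the parameter space [-10, 10]^2.
x, y = random.uniform(-10, 10), random.uniform(-10, 10)
best_xy, best_f = (x, y), objective(x, y)
f_cur = best_f

n_steps = 5000
for k in range(n_steps):
    frac = k / n_steps
    temp = 10.0 * (1.0 - frac) + 1e-3   # annealing temperature, lowered over time
    scale = 1.0 * (1.0 - frac) + 0.01   # shrinking region for new configurations
    nx, ny = x + random.gauss(0, scale), y + random.gauss(0, scale)
    f_new = objective(nx, ny)
    # Accept improvements always; accept worse moves with a
    # temperature-dependent probability (Metropolis criterion).
    if f_new < f_cur or random.random() < math.exp(-(f_new - f_cur) / temp):
        x, y, f_cur = nx, ny, f_new
        if f_new < best_f:
            best_xy, best_f = (nx, ny), f_new
```

RBSA replaces this single trajectory with a tree of trajectories, each restricted to its own trust region, which is where the reported order-of-magnitude speedup comes from.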

  9. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

    This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides the basic a priori information for 3-D studies. Keywords: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
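
The GA part of the approach can be sketched on a deliberately simplified forward model: straight rays in a uniform velocity medium stand in for the layered model and two-point ray tracing, and the station layout, velocity, and GA settings are all invented:

```python
import random

random.seed(3)

# Surface station coordinates (km) and a uniform velocity (km/s).
stations = [(0, 0), (40, 5), (10, 35), (35, 30), (20, 15)]
v = 6.0
true_src = (22.0, 18.0, 9.0, 1.5)   # x, y, depth (km), origin time (s)

def travel_time(src, st):
    x, y, z, t0 = src
    sx, sy = st
    return t0 + ((x - sx) ** 2 + (y - sy) ** 2 + z ** 2) ** 0.5 / v

obs = [travel_time(true_src, st) for st in stations]  # synthetic arrivals

def misfit(src):
    return sum((travel_time(src, st) - t) ** 2 for st, t in zip(stations, obs))

def random_src():
    return (random.uniform(0, 50), random.uniform(0, 50),
            random.uniform(0, 30), random.uniform(0, 5))

# Plain generational GA: elitist selection, arithmetic crossover, mutation.
pop = [random_src() for _ in range(60)]
initial_best = min(misfit(p) for p in pop)
for gen in range(80):
    pop.sort(key=misfit)
    elite = pop[:20]
    children = []
    while len(children) < 40:
        a, b = random.sample(elite, 2)
        w = random.random()
        child = tuple(w * ai + (1 - w) * bi for ai, bi in zip(a, b))
        child = tuple(c + random.gauss(0, 0.5) for c in child)  # mutation
        children.append(child)
    pop = elite + children
pop.sort(key=misfit)
best = pop[0]
```

Because the GA never needs a starting hypocenter, it avoids the dependence on initial guesses that makes linearized location algorithms sensitive to the assumed velocity model.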

  10. Analysis and design of a genetic circuit for dynamic metabolic engineering.

    PubMed

    Anesiadis, Nikolaos; Kobayashi, Hideki; Cluett, William R; Mahadevan, Radhakrishnan

    2013-08-16

    Recent advances in synthetic biology have equipped us with new tools for bioprocess optimization at the genetic level. Previously, we have presented an integrated in silico design for the dynamic control of gene expression based on a density-sensing unit and a genetic toggle switch. In the present paper, analysis of a serine-producing Escherichia coli mutant shows that an instantaneous ON-OFF switch leads to a maximum theoretical productivity improvement of 29.6% compared to the mutant. To further the design, global sensitivity analysis is applied here to a mathematical model of serine production in E. coli coupled with a genetic circuit. The model of the quorum sensing and the toggle switch involves 13 parameters of which 3 are identified as having a significant effect on serine concentration. Simulations conducted in this reduced parameter space further identified the optimal ranges for these 3 key parameters to achieve productivity values close to the maximum theoretical values. This analysis can now be used to guide the experimental implementation of a dynamic metabolic engineering strategy and reduce the time required to design the genetic circuit components.
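
The parameter-screening step can be illustrated with a one-at-a-time variance screen. The 13-parameter stand-in function and its weights are invented (the real analysis was a global sensitivity analysis of the serine-production model, not this toy):

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the kinetic model: a function of 13 parameters in which
# only parameters 0, 4 and 9 have a strong effect (by construction).
weights = np.full(13, 0.01)
weights[[0, 4, 9]] = (10.0, 5.0, 2.0)

def model_output(p):
    return float(weights @ p)

# One-at-a-time screen: sample each parameter over its range while
# holding the others at nominal values, and record the output variance.
nominal = np.ones(13)
variances = []
for i in range(13):
    outs = []
    for val in rng.uniform(0.5, 1.5, size=50):
        p = nominal.copy()
        p[i] = val
        outs.append(model_output(p))
    variances.append(np.var(outs))

# The parameters producing the largest output variance are the key ones.
key_params = sorted(np.argsort(variances)[-3:].tolist())
```

Restricting the subsequent optimisation to these few influential parameters is what shrinks the 13-dimensional design problem to the reduced space explored in the paper.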

  11. Nanoscale content-addressable memory

    NASA Technical Reports Server (NTRS)

    Davis, Bryan (Inventor); Principe, Jose C. (Inventor); Fortes, Jose (Inventor)

    2009-01-01

    A combined content addressable memory device and memory interface is provided. The combined device and interface includes one or more molecular wire crossbar memories having spaced-apart key nanowires, spaced-apart value nanowires adjacent to the key nanowires, and configurable switches between the key nanowires and the value nanowires. The combination further includes a key microwire-nanowire grid (key MNG) electrically connected to the spaced-apart key nanowires, and a value microwire-nanowire grid (value MNG) electrically connected to the spaced-apart value nanowires. A key or value MNG selects multiple nanowires for a given key or value.
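    In software terms, the device behaves like an associative memory that can be driven from either the key side or the value side. A minimal Python analogue (the class and method names are ours, not the patent's):

```python
class ContentAddressableMemory:
    """Software analogue of the crossbar CAM: entries can be looked up from
    either side, as with the key and value nanowire grids in the patent."""

    def __init__(self):
        self._by_key = {}     # ordinary addressed read
        self._by_value = {}   # content-driven read: value -> matching keys

    def store(self, key, value):
        self._by_key[key] = value
        self._by_value.setdefault(value, set()).add(key)

    def lookup_by_key(self, key):
        return self._by_key.get(key)

    def lookup_by_value(self, value):
        # A CAM search: present content, get back every matching location.
        return self._by_value.get(value, set())

cam = ContentAddressableMemory()
cam.store("addr1", "cat")
cam.store("addr2", "dog")
cam.store("addr3", "cat")
```

    A value-side lookup may return several nanowire locations at once, which is the defining difference between a CAM and an ordinary address-indexed memory.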

  12. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion has been widely applied in groundwater simulation. Compared to traditional forward modeling, inverse modeling offers more scope for study. Zonation and cell-by-cell inversion are the conventional methods; the pilot-point method lies between them. Traditional inverse modeling often relies on software that divides the model into a few zones with a small number of parameters to be inverted, but the resulting parameter distribution is usually too simple and biases the simulation. Cell-by-cell inversion would in theory recover the most realistic parameter distribution, but it requires great computational effort and a large quantity of survey data for geostatistical simulation of the area. In contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by kriging, so that the heterogeneity of the parameters within geological units is preserved. This reduces the geostatistical data requirements of the simulated area and bridges the gap between the two methods above. Pilot points can save calculation time, improve the goodness of fit, and reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we applied pilot points to a field whose structural heterogeneity and hydraulic parameters were unknown, compared the inversion results of the zonation and pilot-point methods, and explored the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Second, kriging is used to obtain the values of the field function (hydraulic conductivity) over the model domain from its values at measurement and pilot-point locations; pilot points are then assigned to the interpolated field, which has been divided into 4 zones, and a range of disturbance values is added to the inversion targets to calculate hydraulic conductivity. Third, through inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. After the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot-point method are more realistic: the parameters fit better and the numerical simulation is more stable (stable residual distribution). Compared with zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on other parameters, guaranteeing the relative independence and authenticity of the parameter estimates. However, it costs more time to calculate than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
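    The interpolation step, assigning pilot-point parameter values to every model cell, can be sketched as follows. For brevity this uses inverse-distance weighting as a stand-in for the kriging described in the paper; the grid size, pilot-point locations, and conductivity values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 20 x 20 model grid and 9 pilot points carrying log10(K).
nx = ny = 20
gx, gy = np.meshgrid(np.arange(nx), np.arange(ny))
cells = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)

pilot_xy = rng.uniform(0, nx - 1, size=(9, 2))
pilot_logK = rng.normal(-4.0, 0.5, size=9)   # adjustable parameters for PEST

def interpolate(points, values, targets, power=2.0):
    """Inverse-distance weighting: a simple stand-in for the kriging step
    that spreads pilot-point values over all model cells."""
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                  # avoid division by zero
    w = 1.0 / d**power
    return (w * values).sum(axis=1) / w.sum(axis=1)

logK_field = interpolate(pilot_xy, pilot_logK, cells).reshape(ny, nx)
```

    In the calibration loop, PEST perturbs only the 9 pilot values, re-interpolates the field, and reruns the flow model, which is why the method stays cheaper than cell-by-cell inversion while remaining heterogeneous.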

  13. Nutrient control of phytoplankton photosynthesis in the western North Atlantic

    NASA Technical Reports Server (NTRS)

    Platt, Trevor; Sathyendranath, Shubha; Ulloa, Osvaldo; Harrison, William G.; Hoepffner, Nicolas; Goes, Joaquim

    1992-01-01

    Results from several years of oceanographic cruises are reported which show that the parameters of the photosynthesis-light curve of the flora of the North Sargasso Sea are remarkably constant in magnitude, except during the spring phytoplankton bloom when their magnitudes are noticeably higher. These results are interpreted as providing direct evidence for nutrient control of photosynthesis in the open ocean. The findings also reinforce the plausibility of using biogeochemical provinces to partition the ocean into manageable units for basin- or global-scale analysis. They show that seasonal changes in critical parameters should not be overlooked if robust carbon budgets are to be constructed, and illustrate the value of attacking the parameters that control the key fluxes, rather than the fluxes themselves, when investigating the ocean carbon cycle.
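    The photosynthesis-light curve whose parameters are discussed here is commonly written in the form of Platt, Gallegos and Harrison (1980), P(I) = Ps (1 - exp(-alpha I / Ps)) exp(-beta I / Ps). A small sketch evaluating it for two hypothetical seasonal parameter sets (the values below are illustrative, not the cruise data):

```python
import numpy as np

def pi_curve(I, Ps, alpha, beta):
    """Photosynthesis-irradiance curve with photoinhibition:
    alpha sets the initial slope, beta the inhibition, Ps the scale."""
    return Ps * (1.0 - np.exp(-alpha * I / Ps)) * np.exp(-beta * I / Ps)

I = np.linspace(0.0, 2000.0, 400)   # irradiance grid (illustrative units)
P_spring = pi_curve(I, Ps=6.0, alpha=0.08, beta=0.001)   # hypothetical bloom
P_summer = pi_curve(I, Ps=3.0, alpha=0.04, beta=0.001)   # hypothetical rest of year
```

    The abstract's finding corresponds to alpha and the maximum rate being nearly constant outside the bloom and elevated during it, as the two hypothetical curves illustrate.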

  14. A Stokes drift approximation based on the Phillips spectrum

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2016-04-01

    A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.
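    A sketch of the two profiles under discussion. The monochromatic profile is the classical v0 exp(2kz); for the Phillips-spectrum-based profile we use the erfc form reported in the literature, which, like all parameter values here, should be treated as an assumption rather than the paper's exact expression:

```python
import numpy as np
from scipy.special import erfc

def stokes_monochromatic(z, v0, k):
    """Classical monochromatic Stokes drift profile v0 * exp(2 k z), z <= 0."""
    return v0 * np.exp(2.0 * k * z)

def stokes_phillips(z, v0, k, beta=1.0):
    """Phillips-spectrum-based profile (assumed form):
    v(z) = v0 * [exp(2kz) - beta * sqrt(-2 pi k z) * erfc(sqrt(-2 k z))]."""
    kz = k * z
    tail = beta * np.sqrt(-2.0 * np.pi * kz) * erfc(np.sqrt(-2.0 * kz))
    return v0 * (np.exp(2.0 * kz) - tail)

z = np.linspace(-30.0, 0.0, 301)    # depth in metres, surface at z = 0
v_mono = stokes_monochromatic(z, v0=0.1, k=0.1)
v_phil = stokes_phillips(z, v0=0.1, k=0.1)
```

    Both profiles share the surface value v0; the Phillips-based one falls away much faster just below the surface, which is the stronger near-surface gradient noted in the abstract.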

  15. Earthquake ground motion: Chapter 3

    USGS Publications Warehouse

    Luco, Nicolas; Kircher, Charles A.; Crouse, C. B.; Charney, Finley; Haselton, Curt B.; Baker, Jack W.; Zimmerman, Reid; Hooper, John D.; McVitty, William; Taylor, Andy

    2016-01-01

    Most of the effort in seismic design of buildings and other structures is focused on structural design. This chapter addresses another key aspect of the design process—characterization of earthquake ground motion into parameters for use in design. Section 3.1 describes the basis of the earthquake ground motion maps in the Provisions and in ASCE 7 (the Standard). Section 3.2 has examples for the determination of ground motion parameters and spectra for use in design. Section 3.3 describes site-specific ground motion requirements and provides example site-specific design and MCER response spectra and example values of site-specific ground motion parameters. Section 3.4 discusses and provides an example for the selection and scaling of ground motion records for use in various types of response history analysis permitted in the Standard.

  16. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where a full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models with this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism model assumption by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
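    The core comparison, simulating datasets under candidate MNAR sensitivity-parameter values and scoring them against the observed data with K-nearest-neighbour distances, can be sketched in one dimension as follows (the selection model, parameter values, and scoring details are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def knn_distance(a, b, k=5):
    """Mean distance from each point of `a` to its k-th nearest point of `b`
    (a crude 1-D stand-in for the paper's multivariate KNN comparison)."""
    d = np.abs(a[:, None] - b[None, :])
    return float(np.mean(np.sort(d, axis=1)[:, k - 1]))

def simulate(delta, n=2000):
    """Draw a dataset under an assumed MNAR mechanism: a value goes missing
    with probability sigmoid(delta * y), so large values drop out more."""
    y = rng.normal(0.0, 1.0, n)
    keep = rng.random(n) > 1.0 / (1.0 + np.exp(-delta * y))
    return y[keep]

observed = simulate(delta=1.0)   # pretend this is the observed data

# Score candidate sensitivity-parameter values by closeness to observed data,
# keeping plausible values and rejecting unlikely ones.
candidates = [0.0, 0.5, 1.0, 2.0]
scores = {d: knn_distance(observed, simulate(d)) for d in candidates}
plausible = min(scores, key=scores.get)
```

    In the paper the scores feed a formal plausibility evaluation backed by asymptotic theory; the sketch only shows the simulate-and-compare skeleton.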

  17. Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Alsafi, Huseen; Peninngton, Gray

    Models for the atomic force microscopy (AFM) tip and sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics, Towson University.
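    A Monte Carlo extraction of this kind reduces to sampling the parameter space and keeping the draw whose simulated observable best matches the measurement. The sketch below uses an invented two-parameter forward model as a stand-in for a VEDA tip-sample simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented forward model standing in for a VEDA tip-sample simulation:
# observable as a function of modulus E (GPa) and adhesion force F (nN).
def forward(E, F, drive):
    return 0.01 * E * np.sqrt(drive) + 0.1 * F / drive

drive = np.linspace(1.0, 10.0, 25)      # drive amplitudes (arbitrary units)
true_E, true_F = 70.0, 2.0              # "accepted" values for the test sample
measured = forward(true_E, true_F, drive) + rng.normal(0.0, 1e-3, drive.size)

# Monte Carlo extraction: sample the parameter space, keep the best fit.
n_draws = 20000
E_s = rng.uniform(1.0, 300.0, n_draws)
F_s = rng.uniform(0.0, 10.0, n_draws)
sse = np.array([np.sum((forward(e, f, drive) - measured)**2)
                for e, f in zip(E_s, F_s)])
best_E, best_F = E_s[np.argmin(sse)], F_s[np.argmin(sse)]
```

    With a real VEDA run in place of `forward`, the same loop recovers the material parameters, at the cost of one simulation per draw.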

  18. Aspects of metallic low-temperature transport in Mott-insulator/band-insulator superlattices: Optical conductivity and thermoelectricity

    NASA Astrophysics Data System (ADS)

    Rüegg, Andreas; Pilgram, Sebastian; Sigrist, Manfred

    2008-06-01

    We investigate the low-temperature electrical and thermal transport properties in atomically precise metallic heterostructures involving strongly correlated electron systems. The model of the Mott-insulator/band-insulator superlattice was discussed in the framework of the slave-boson mean-field approximation and transport quantities were derived by use of the Boltzmann transport equation in the relaxation-time approximation. The results for the optical conductivity are in good agreement with recently published experimental data on (LaTiO3)N/(SrTiO3)M superlattices and allow us to estimate the values of key parameters of the model. Furthermore, predictions for the thermoelectric response were made and the dependence of the Seebeck coefficient on model parameters was studied in detail. The width of the Mott-insulating material was identified as the most relevant parameter, in particular, this parameter provides a way to optimize the thermoelectric power factor at low temperatures.

  19. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    Simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameters' setting is a key factor for its performance, but it is also a tedious work. To simplify parameters setting, we present a list-based simulated annealing (LBSA) algorithm to solve traveling salesman problem (TSP). LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in list is used by Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust on a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
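    The algorithm as described can be sketched as follows. The list initialisation and adaptation here follow the general idea (accept uphill moves via Metropolis using the list maximum, then replace that maximum with the temperature that would have made the accepted move marginal) and may differ from the paper's recipe in detail:

```python
import math
import random

random.seed(42)

# Random Euclidean instance standing in for a benchmark TSP problem.
n = 30
cities = [(random.random(), random.random()) for _ in range(n)]
dist = [[math.dist(a, b) for b in cities] for a in cities]

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def neighbour(tour):
    """2-opt style move: reverse a random segment."""
    i, j = sorted(random.sample(range(n), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# Create the initial temperature list from uphill move sizes seen on a walk.
tour = list(range(n))
temps = []
for _ in range(120):
    cand = neighbour(tour)
    temps.append(abs(tour_length(cand) - tour_length(tour)) + 1e-9)
    tour = cand
temps = sorted(temps, reverse=True)[:60]

best = tour[:]
for _ in range(20000):
    t_max = max(temps[0], 1e-12)        # always use the list maximum
    cand = neighbour(tour)
    delta = tour_length(cand) - tour_length(tour)
    if delta < 0:
        tour = cand
    else:
        r = random.random()
        if 0.0 < r < math.exp(-delta / t_max):
            # Metropolis-accepted uphill move: replace the list maximum with
            # the temperature that would have made this acceptance marginal.
            temps[0] = -delta / math.log(r)
            temps.sort(reverse=True)
            tour = cand
    if tour_length(tour) < tour_length(best):
        best = tour[:]
```

    Because each replacement temperature is below the current list maximum by construction, the maximum is non-increasing, which provides the implicit, problem-adapted cooling schedule.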

  20. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. 
For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
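    The sampling-plus-reweighting idea can be sketched with a toy endemic-prevalence model standing in for the paratuberculosis herd simulation (the parameter ranges, the "observed" prevalence summary, and the Gaussian weighting kernel are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n, d):
    """Basic Latin hypercube sample on [0, 1]^d: one point per stratum."""
    cut = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        cut[:, j] = cut[rng.permutation(n), j]
    return cut

# Toy stand-in for the herd model: endemic SIS prevalence 1 - gamma/beta.
def prevalence(beta, gamma):
    return np.maximum(0.0, 1.0 - gamma / beta)

n = 5000
u = latin_hypercube(n, 2)
beta = 0.5 + 2.5 * u[:, 0]      # assumed prior range for transmission rate
gamma = 0.2 + 1.0 * u[:, 1]     # assumed prior range for removal rate

prev = prevalence(beta, gamma)

# Reweight each sampled parameter set by how well it reproduces "observed"
# prevalence data (here summarised as mean 0.30, sd 0.05).
obs_mean, obs_sd = 0.30, 0.05
w = np.exp(-0.5 * ((prev - obs_mean) / obs_sd)**2)
w /= w.sum()

post_mean_prev = float(np.sum(w * prev))  # weighted outputs carry uncertainty
```

    Any other model output (for example the effect of a control option) can then be summarised with the same weights, so parameter uncertainty propagates into the comparison of strategies.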

  1. Grey water characterization and treatment for reuse in an arid environment.

    PubMed

    Smith, E; Bani-Melhem, K

    2012-01-01

    Grey water from a university facilities building in Cairo, Egypt was analysed for basic wastewater parameters. Mean concentrations were calculated based on grab samples over a 16-month period. Values for chemical oxygen demand (COD) and nutrients exceeded values reported in a number of other studies of grey water, while coliform counts were also high. A submerged membrane bioreactor (SMBR) system using a hollow fibre ultrafiltration membrane was used to treat the grey water with the aim of producing effluent that meets reuse guidelines for agriculture. A test run for 50 days at constant transmembrane pressure resulted in very good removal for key parameters including COD, total suspended solids (TSS), colour, turbidity, ammonia nitrogen, anionic surfactants, and coliform bacteria. High standard deviations were observed for COD and coliform concentrations for both monthly grab samples and influent values from the 50-day SMBR experiment. SMBR effluent meets international and local guidelines for at least restricted irrigation, particularly as pertains to COD, TSS, and faecal coliforms which were reduced to mean treated values of 50 mg/L, 0 mg/L (i.e., not detected), and <50 cfu/100 mL, respectively.

  2. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is transformed empirical ROC curves at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration. PMID:28469385
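    The flavour of such least squares ROC estimation can be sketched with the classical binormal model, where probit-transformed empirical sensitivities and specificities fall on a line Phi^-1(TPR) = a + b Phi^-1(FPR). This is a generic binormal fit on simulated data, not the authors' exact estimator:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Simulated continuous biomarker: controls ~ N(0,1), cases ~ N(1.2, 1),
# so the true binormal ROC has a = 1.2, b = 1.0 (hypothetical example).
controls = rng.normal(0.0, 1.0, 500)
cases = rng.normal(1.2, 1.0, 500)

# Empirical sensitivities and specificities at a grid of thresholds: "data".
pooled = np.concatenate([controls, cases])
thresholds = np.quantile(pooled, np.linspace(0.05, 0.95, 19))
fpr = np.array([(controls > t).mean() for t in thresholds])
tpr = np.array([(cases > t).mean() for t in thresholds])

# Least squares on the probit scale: Phi^-1(TPR) = a + b * Phi^-1(FPR).
keep = (fpr > 0) & (fpr < 1) & (tpr > 0) & (tpr < 1)
x, y = norm.ppf(fpr[keep]), norm.ppf(tpr[keep])
A = np.column_stack([np.ones(x.size), x])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)

auc = norm.cdf(a_hat / np.sqrt(1.0 + b_hat**2))   # binormal AUC formula
```

    For ordinal tests the empirical (FPR, TPR) pairs at the few category boundaries play the role of the response data, which is precisely where the choice of response variable discussed in the abstract matters.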

  3. Information Filtering via a Scaling-Based Function

    PubMed Central

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL) independent of recommendation list length based on a hybrid algorithm of heat conduction and mass diffusion, by finding out the scaling function for the tunable parameter and object average degree. The optimal value of the tunable parameter can be abstracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes the personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting a high novelty, and solving the key challenge of cold start problem. PMID:23696829
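    The underlying hybrid of heat conduction and mass diffusion (Zhou et al., 2010) can be sketched as below; the SCL contribution is then to choose the tunable parameter lambda from a scaling function of the object's average degree rather than using one global value. The tiny rating matrix is invented:

```python
import numpy as np

# Tiny user-object bipartite adjacency (4 hypothetical users x 5 objects).
A = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)

def hybrid_scores(A, user, lam):
    """Hybrid heat-conduction/mass-diffusion operator:
    W[a, b] = k_a^-(1-lam) * k_b^-lam * sum_j A[j, a] * A[j, b] / k_j,
    with k_a, k_b object degrees and k_j user degrees.
    lam = 1 recovers mass diffusion, lam = 0 heat conduction."""
    k_obj = A.sum(axis=0)
    k_user = A.sum(axis=1)
    M = (A.T / k_user) @ A                       # sum_j A[j,a] A[j,b] / k_j
    W = M / (k_obj[:, None]**(1.0 - lam) * k_obj[None, :]**lam)
    return W @ A[user]                           # resource vector for user

scores = hybrid_scores(A, user=0, lam=0.5)       # SCL would set lam per object
f_md = hybrid_scores(A, user=0, lam=1.0)         # pure mass diffusion
ranked = np.argsort(-scores)                     # recommendation order
```

    Mass diffusion conserves the total resource of the target user, a handy sanity check; the scaling function in the SCL paper effectively makes lam heterogeneous across objects of different degree.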

  4. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p, a critical value of the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.

  5. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, James C.

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  6. MARA (Multimode Airborne Radar Altimeter) system documentation. Volume 1: MARA system requirements document

    NASA Technical Reports Server (NTRS)

    Parsons, C. L. (Editor)

    1989-01-01

    The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.

  7. Mutation rates among RNA viruses

    PubMed Central

    Drake, John W.; Holland, John J.

    1999-01-01

    The rate of spontaneous mutation is a key parameter in modeling the genetic structure and evolution of populations. The impact of the accumulated load of mutations and the consequences of increasing the mutation rate are important in assessing the genetic health of populations. Mutation frequencies are among the more directly measurable population parameters, although the information needed to convert them into mutation rates is often lacking. A previous analysis of mutation rates in RNA viruses (specifically in riboviruses rather than retroviruses) was constrained by the quality and quantity of available measurements and by the lack of a specific theoretical framework for converting mutation frequencies into mutation rates in this group of organisms. Here, we describe a simple relation between ribovirus mutation frequencies and mutation rates, apply it to the best (albeit far from satisfactory) available data, and observe a central value for the mutation rate per genome per replication of μg ≈ 0.76. (The rate per round of cell infection is twice this value or about 1.5.) This value is so large, and ribovirus genomes are so informationally dense, that even a modest increase extinguishes the population. PMID:10570172

  8. Quantum Dense Coding About a Two-Qubit Heisenberg XYZ Model

    NASA Astrophysics Data System (ADS)

    Xu, Hui-Yun; Yang, Guo-Hui

    2017-09-01

    By taking into account a nonuniform magnetic field, quantum dense coding with thermal entangled states of a two-qubit anisotropic Heisenberg XYZ chain is investigated in detail. We mainly show how the dense coding capacity (χ) behaves as different parameters change. It is found that the dense coding capacity χ can be enhanced by decreasing the magnetic field B, the degree of inhomogeneity b and the temperature T, or by increasing the coupling constant along the z-axis, Jz. In addition, we find that χ remains stable as the anisotropy of the XY plane, Δ, changes under certain temperature conditions. By studying the effect of the different parameters on χ, we show that the values of B, b, Jz and Δ can be tuned, or the temperature T adjusted, to obtain a valid dense coding capacity (χ satisfies χ > 1). Moreover, the temperature plays a key role in adjusting the value of the dense coding capacity χ; a valid dense coding capacity can always be obtained in the low-temperature limit.
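    A numerical sketch of the quantity under study: build a two-qubit XYZ Hamiltonian in a nonuniform field, form the thermal state, and evaluate the dense coding capacity as χ = 1 + S(ρ_B) - S(ρ). Both the Hamiltonian convention and the capacity formula are common choices and may differ from the paper's in signs and factors:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def H_xyz(Jx, Jy, Jz, B, b):
    """Two-qubit XYZ chain in a nonuniform field (assumed convention):
    H = Jx XX + Jy YY + Jz ZZ + (B + b) Z1 + (B - b) Z2."""
    return (Jx * np.kron(sx, sx) + Jy * np.kron(sy, sy) + Jz * np.kron(sz, sz)
            + (B + b) * np.kron(sz, I2) + (B - b) * np.kron(I2, sz))

def entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log2(vals)).sum())

def dense_coding_capacity(H, T):
    """chi = 1 + S(rho_B) - S(rho) for the two-qubit thermal state."""
    rho = expm(-H / T)
    rho = rho / np.trace(rho).real
    rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # trace out qubit 1
    return 1.0 + entropy(rho_B) - entropy(rho)

chi_cold = dense_coding_capacity(H_xyz(1.0, 1.0, 1.0, 0.0, 0.0), T=0.1)
chi_hot = dense_coding_capacity(H_xyz(1.0, 1.0, 1.0, 0.0, 0.0), T=10.0)
```

    At low temperature the thermal state approaches the entangled ground state and χ approaches 2, while heating drives χ below the classical threshold of 1, matching the abstract's emphasis on the temperature's role.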

  9. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
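    The active learning loop can be sketched as follows: label a small random batch of parameter combinations by "simulation", fit a cheap classifier, and repeatedly evaluate only the combinations predicted most likely to match observed data. The pool, the acceptance region, and the quadratic-feature logistic model are invented stand-ins (the paper uses artificial neural networks):

```python
import numpy as np

rng = np.random.default_rng(7)

# Pool of candidate parameter combinations (stand-in for the 378,000).
pool = rng.uniform(-1.0, 1.0, size=(4000, 2))

def run_simulation(theta):
    """Expensive model run (invented): a combination is 'acceptable' when it
    lies inside an ellipse, mimicking a close match to observed data."""
    return (theta[0] / 0.3)**2 + (theta[1] / 0.2)**2 < 1.0

def features(X):
    return np.column_stack([np.ones(len(X)), X, X**2])  # quadratic features

def fit_logistic(X, y, iters=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

evaluated = {int(i): run_simulation(pool[i])
             for i in rng.choice(len(pool), 200, replace=False)}
for _ in range(20):
    idx = np.array(sorted(evaluated))
    y = np.array([evaluated[int(i)] for i in idx], dtype=float)
    w = fit_logistic(features(pool[idx]), y)
    scores = 1.0 / (1.0 + np.exp(-features(pool) @ w))
    ranked = [int(i) for i in np.argsort(-scores) if int(i) not in evaluated]
    for i in ranked[:100]:              # evaluate only the most promising
        evaluated[i] = run_simulation(pool[i])

found = sum(evaluated.values())
total_good = sum(run_simulation(p) for p in pool)
```

    The loop typically locates nearly all acceptable combinations while simulating only a fraction of the pool, which is the effect reported for the UWBCS calibration.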

  10. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  11. Microwave spectrum and structural parameters for the formamide-formic acid dimer.

    PubMed

    Daly, Adam M; Sargus, Bryan A; Kukolich, Stephen G

    2010-11-07

    The rotational spectra for six isotopologues of the complex formed between formamide and formic acid have been measured using a pulsed-beam Fourier transform microwave spectrometer and analyzed to obtain rotational constants and quadrupole coupling parameters. The rotational constants and quadrupole coupling strengths obtained for H(12)COOH-H(2)(14)NCOH are A = 5889.465(2), B = 2148.7409(7), C = 1575.1234(6), eQq(aa) = 1.014(5), eQq(bb) = 1.99(1), and eQq(cc) = -3.00(1) MHz. Using the 15 rotational constants obtained for the H(13)COOH, HCOOD, DCOOH, and H(2)(15)NCHO isotopologues, key structural parameters were obtained from a least-squares structure fit. Hydrogen bond distances of 1.78 Å for R(O3⋯H1) and 1.79 Å for R(H4⋯O1) were obtained. The "best fit" value for the angle(C-O-H) of formic acid, 121.7(3)°, is significantly larger than the monomer value of 106.9°. The complex is nearly planar, with inertial defect Δ = -0.158 amu Å(2). The formamide proton is moved out of the molecular plane by 15(3)° for the best fit structure. Density functional theory calculations using B3PW91, HCTH407, and TPSS, as well as MP2 and CCSD calculations, were performed using the 6-311++G(d,p) basis set, and the results were compared to the experimentally determined parameters.

  12. Temperature-dependent thermal properties of ex vivo liver undergoing thermal ablation.

    PubMed

    Guntur, Sitaramanjaneya Reddy; Lee, Kang Il; Paeng, Dong-Guk; Coleman, Andrew John; Choi, Min Joo

    2013-10-01

    Thermotherapy uses a heat source that raises temperatures in the target tissue, and the temperature rise depends on the thermal properties of the tissue. Little is known about the temperature dependence of these properties, which prevents accurate prediction of the temperature distribution in target tissue undergoing thermotherapy. The present study reports the key thermal parameters (specific heat capacity, thermal conductivity and thermal diffusivity) measured in ex vivo porcine liver while being heated from 20 °C to 90 °C and then naturally cooled down to 20 °C. The study indicates that as the tissue was heated, all the thermal parameters traced asymmetric quasi-parabolic curves with temperature, convex downward with their minima at a turning temperature of 35-40 °C. The largest change was observed for thermal conductivity, which decreased by 9.6% from its initial value (at 20 °C) to the turning temperature (35 °C) and rose by 45% at 90 °C from its minimum (at 35 °C). The minima were 3.567 MJ/(m³·K) for specific heat capacity, 0.520 W/(m·K) for thermal conductivity and 0.141 mm²/s for thermal diffusivity. The minimum at the turning temperature was unique, and it is suggested that it be taken as a characteristic value of the thermal parameter of the tissue. On the other hand, the thermal parameters were insensitive to temperature and remained almost unchanged as the tissue cooled down, indicating that their variations with temperature were irreversible. The rate of the irreversible rise at 35 °C was 18% in specific heat capacity, 40% in thermal conductivity and 38.3% in thermal diffusivity. The study indicates that the key thermal parameters of ex vivo porcine liver vary strongly with temperature when heated, following asymmetric quasi-parabolic curves, and a substantial influence on the temperature distribution of tissue undergoing thermotherapy is therefore expected. 2013. Published by Elsevier Inc.

  13. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
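
    The architecture described in this record can be sketched roughly as follows; the class and method names are invented for illustration (the abstract does not specify the actual interface), and an in-memory dictionary stands in for the low-latency persistent store:

```python
# Hypothetical sketch: each metadata server talks to a shared key-value store
# through a thin abstract storage interface, so the backend can be swapped.
from abc import ABC, abstractmethod

class KeyValueMetadataStore(ABC):
    """Key-value interface for persistent storage of metadata."""
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(KeyValueMetadataStore):
    """Stand-in backend; a real deployment would use a persistent store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class MetadataServer:
    """Serves metadata requests independently of other servers via the shared store."""
    def __init__(self, store: KeyValueMetadataStore):
        self.store = store
    def set_attr(self, path: str, attrs: bytes) -> None:
        self.store.put("meta:" + path, attrs)
    def get_attr(self, path: str) -> bytes:
        return self.store.get("meta:" + path)

# Two servers sharing one store see each other's updates.
shared = InMemoryStore()
s1, s2 = MetadataServer(shared), MetadataServer(shared)
s1.set_attr("/dir/file", b"mode=0644")
print(s2.get_attr("/dir/file"))  # b'mode=0644'
```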

  14. Analytical approach to the multi-state lasing phenomenon in quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2013-03-01

    We introduce an analytical approach to describe the multi-state lasing phenomenon in quantum dot lasers. We show that the key parameter is the hole-to-electron capture rate ratio. If it is lower than a certain critical value, complete quenching of ground-state lasing takes place at high injection levels. At higher values of the ratio, the model predicts saturation of the ground-state power. This explains the diversity of experimental results and their apparent contradiction with the conventional rate-equation model. The recently found enhancement of ground-state lasing in p-doped samples and the temperature dependence of the ground-state power are also discussed.

  15. Utilizing a one-dimensional multispecies model to simulate the nutrient reduction and biomass structure in two types of H2-based membrane-aeration biofilm reactors (H2-MBfR): model development and parametric analysis.

    PubMed

    Wang, Zuowei; Xia, Siqing; Xu, Xiaoyin; Wang, Chenhui

    2016-02-01

    In this study, a one-dimensional multispecies model (ODMSM) was used to simulate NO3⁻-N and ClO4⁻ reduction in two kinds of H2-based membrane-aeration biofilm reactors (H2-MBfR) under different operating conditions (e.g., NO3⁻-N/ClO4⁻ loading rates, H2 partial pressure). Before the simulation, we conducted a sensitivity analysis of key parameters that fluctuate under different environmental conditions, and then used the experimental data to calibrate the two most sensitive parameters, μ1 and μ2 (the maximum specific growth rates of denitrifying bacteria and perchlorate-reducing bacteria), in the two H2-MBfRs; the different values of these two key parameters in the two reactor types may result from the different carbon sources fed to the reactors. The simulation results for six operating conditions (four in H2-MBfR 1 and two in H2-MBfR 2) confirmed the applicability of the model, and the variation of removal performance under different operating conditions was well reproduced. In addition, the suitability of operating parameters (H2 partial pressure, etc.) could be judged, especially under high nutrient loading rates. To a certain degree, the model can provide theoretical guidance for determining operating parameters under specific conditions in practical applications.

  16. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    NASA Astrophysics Data System (ADS)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly(methyl methacrylate)). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and acceptable surface finish on microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are a few analytical models available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. There are a number of variants of transparent PMMA available on the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of laser beam parameters are readily available, extensive experiments are required to determine the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
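
    The Monte Carlo propagation of property uncertainty described above can be sketched as follows; the depth model, parameter values, and uncertainty magnitude are placeholders for illustration, not the paper's actual analytical model:

```python
# Sketch of MCM uncertainty propagation: sample the uncertain thermophysical
# property, push each sample through the depth model, then summarize.
import random, statistics

def channel_depth(power, speed, k_th):
    """Hypothetical stand-in for an analytical depth model (not the paper's):
    depth grows with laser power and falls with scan speed and conductivity."""
    return power / (speed * k_th)

random.seed(0)
P, v = 30.0, 10.0                       # fixed beam parameters (W, mm/s)
samples = [channel_depth(P, v, random.gauss(0.19, 0.02))  # uncertain property
           for _ in range(20_000)]
mean, sd = statistics.mean(samples), statistics.stdev(samples)
print(f"depth = {mean:.2f} ± {sd:.2f}")  # uncertainty propagated by MCM
```

    The same loop extends naturally to several uncertain properties at once; freezing all but one reproduces the sensitivity analysis mentioned in the abstract.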

  17. Thermal inflation with a thermal waterfall scalar field coupled to a light spectator scalar field

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Lyth, David H.; Rumsey, Arron

    2017-05-01

    A new model of thermal inflation is introduced, in which the mass of the thermal waterfall field depends on a light spectator scalar field. Using the δN formalism, the "end of inflation" scenario is investigated in order to ascertain whether this model is able to produce the dominant contribution to the primordial curvature perturbation. A multitude of constraints are considered so as to explore the parameter space, with particular emphasis on key observational signatures. For natural values of the parameters, the model is found to yield a sharp prediction for the scalar spectral index and its running, well within the current observational bounds.

  18. Parameterization of a mesoscopic model for the self-assembly of linear sodium alkyl sulfates

    NASA Astrophysics Data System (ADS)

    Mai, Zhaohuan; Couallier, Estelle; Rakib, Mohammed; Rousseau, Bernard

    2014-05-01

    A systematic approach to develop mesoscopic models for a series of linear anionic surfactants (CH3(CH2)n−1OSO3Na, n = 6, 9, 12, 15) by dissipative particle dynamics (DPD) simulations is presented in this work. The four surfactants are represented by coarse-grained models composed of the same head group and different numbers of identical tail beads. The transferability of the DPD model over different surfactant systems is carefully checked by adjusting the repulsive interaction parameters and the rigidity of surfactant molecules, in order to reproduce key equilibrium properties of the aqueous micellar solutions observed experimentally, including the critical micelle concentration (CMC) and the average micelle aggregation number (Nag). We find that the chain length is a good index to optimize the parameters and evaluate the transferability of the DPD model. Our models qualitatively reproduce the essential properties of these surfactant analogues with a set of best-fit parameters. It is observed that the logarithm of the CMC value decreases linearly with the surfactant chain length, in agreement with Klevens' rule. With the best-fit and transferable set of parameters, we have been able to calculate a free energy contribution to micelle formation of −1.7 kJ/mol per methylene unit, very close to the experimentally reported value.
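
    Klevens' rule, cited above, says log10(CMC) falls linearly with alkyl chain length n; the per-CH2 free-energy contribution then follows from the slope. A simple least-squares illustration (the CMC values below are hypothetical stand-ins, not the paper's simulated data):

```python
# Illustrative Klevens'-rule fit: log10(CMC) = A - B*n for the n = 6..15 homologues.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope          # intercept A, slope -B

n_tail  = [6, 9, 12, 15]                   # tail lengths from the abstract
log_cmc = [-0.6, -1.5, -2.4, -3.3]         # hypothetical, perfectly linear data
A, slope = linear_fit(n_tail, log_cmc)
print(A, slope)                            # slope of -0.3 per CH2 group here
```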

  19. Utility usage forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosking, Jonathan R. M.; Natarajan, Ramesh

    The computer creates a utility demand forecast model over weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value. It determines that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values, determines one or more utility parameter values that correspond to that range of weather parameter values, and creates a model which correlates the received and the determined utility parameter values with the corresponding weather parameter values.

  20. Generalized Smooth Transition Map Between Tent and Logistic Maps

    NASA Astrophysics Data System (ADS)

    Sayed, Wafaa S.; Fahmy, Hossam A. H.; Rezk, Ahmed A.; Radwan, Ahmed G.

    There is a continuous demand for novel chaotic generators to be employed in various modeling and pseudo-random number generation applications. This paper proposes a new chaotic map which is a general form for one-dimensional discrete-time maps employing the power function, with the tent and logistic maps as special cases. The proposed map uses extra parameters to provide responses that fit multiple applications for which conventional maps were not sufficient. The proposed generalization also covers maps whose iterative relations are not based on polynomials, i.e. with fractional powers. We introduce a framework for analyzing the proposed map mathematically and predicting its behavior for various combinations of its parameters. In addition, we present and explain the transition map, which yields intermediate responses as the parameters vary from the values corresponding to the tent map to those corresponding to the logistic map. We study the properties of the proposed map, including the graph of the map equation, the general bifurcation diagram and its key points, output sequences, and the maximum Lyapunov exponent. We present further explorations such as the effects of scaling, the system response with respect to the new parameters, and operating ranges other than the transition region. Finally, a stream cipher system based on the generalized transition map validates its utility for image encryption applications. The system allows the construction of more efficient encryption keys, which enhances its sensitivity and other cryptographic properties.
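
    The abstract does not give the map equation, but one well-known family with exactly this property (a power-function map containing the tent and logistic maps as special cases, with fractional powers in between) is x → 1 − |1 − 2x|^p; the paper's generalization may differ, so treat this as an illustration of the idea:

```python
def transition_map(x, p):
    """Family 1 - |1 - 2x|**p on [0, 1]: p = 1 gives the tent map, p = 2 the
    logistic map x -> 4x(1-x); fractional p yields intermediate smooth maps.
    (Illustrative family only; the paper's exact generalization may differ.)"""
    return 1.0 - abs(1.0 - 2.0 * x) ** p

# Check the two special cases at x = 0.3
assert abs(transition_map(0.3, 2) - 4 * 0.3 * 0.7) < 1e-12   # logistic
assert abs(transition_map(0.3, 1) - 2 * 0.3) < 1e-12         # tent: 2x for x < 1/2

# Iterating at an intermediate power produces an intermediate chaotic response.
x = 0.1234
for _ in range(10):
    x = transition_map(x, 1.5)
print(x)
```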

  1. The logic of comparative life history studies for estimating key parameters, with a focus on natural mortality rate

    USGS Publications Warehouse

    Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.

    2016-01-01

    There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico-physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.

  2. Key comparison SIM.EM.RF-K5b.CL: scattering coefficients by broad-band methods, 2 GHz-18 GHz — type N connector

    NASA Astrophysics Data System (ADS)

    Silva, H.; Monasterios, G.

    2016-01-01

    The first key comparison in microwave frequencies within the SIM (Sistema Interamericano de Metrología) region has been carried out. The measurands were the S-parameters of 50 ohm coaxial devices with Type-N connectors, measured at 2 GHz, 9 GHz and 18 GHz. SIM.EM.RF-K5b.CL was the identification assigned, and it was based on a parent CCEM key comparison named CCEM.RF-K5b.CL. For this reason, the measurement standards and their nominal values were selected accordingly, i.e. two one-port devices (a matched and a mismatched load) to cover low and high reflection coefficients and two attenuators (3 dB and 20 dB) to cover low and high transmission coefficients. This key comparison has met the need for ensuring traceability in high-frequency measurements across America by linking SIM's results to the CCEM. Six NMIs participated in this comparison, which was piloted by the Instituto Nacional de Tecnología Industrial (Argentina). A linking method of multivariate values was proposed and implemented in order to allow the linking of 2-dimensional results. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  3. Image encryption with chaotic map and Arnold transform in the gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Sang, Jun; Luo, Hongling; Zhao, Jun; Alam, Mohammad S.; Cai, Bin

    2017-05-01

    An image encryption method combining a chaotic map and the Arnold transform in the gyrator transform domains is proposed. Firstly, the original secret image is XOR-ed with a random binary sequence generated by a logistic map. Then, the gyrator transform is performed. Finally, the amplitude and phase of the gyrator transform are permuted by the Arnold transform. The decryption procedure is the inverse operation of encryption. The secret keys used in the proposed method include the control parameter and the initial value of the logistic map, the rotation angle of the gyrator transform, and the transform number of the Arnold transform. Therefore, the key space is large, while the key data volume is small. A numerical simulation was conducted to demonstrate the effectiveness of the proposed method, and the security analysis was performed in terms of the histogram of the encrypted image, the sensitivity to the secret keys, decryption upon ciphertext loss, and resistance to the chosen-plaintext attack.
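
    The first (XOR) stage of such a scheme can be sketched as follows, with the logistic-map control parameter and initial value acting as the secret key; the gyrator and Arnold transform stages are omitted, and the bit-thresholding keystream construction is an illustrative choice rather than the paper's exact one:

```python
# Sketch of the XOR stage: image bytes XOR-ed with a binary keystream
# generated by the logistic map x -> mu * x * (1 - x).
def logistic_keystream(x0, mu, nbytes):
    x, out = x0, bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):                      # threshold each iterate into one bit
            x = mu * x * (1.0 - x)
            byte = (byte << 1) | (1 if x > 0.5 else 0)
        out.append(byte)
    return bytes(out)

def xor_cipher(data, x0=0.3671, mu=3.99):       # (x0, mu) act as the secret key
    ks = logistic_keystream(x0, mu, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plain = b"secret image rows"
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain              # the XOR stage is its own inverse
```

    Because the logistic map is chaotic, even a tiny change in x0 or mu yields a completely different keystream, which is what gives the scheme its key sensitivity.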

  4. Towards Improving our Understanding on the Retrievals of Key Parameters Characterising Land Surface Interactions from Space: Introduction & First Results from the PREMIER-EO Project

    NASA Astrophysics Data System (ADS)

    Ireland, Gareth; North, Matthew R.; Petropoulos, George P.; Srivastava, Prashant K.; Hodges, Crona

    2015-04-01

    Acquiring accurate information on the spatio-temporal variability of soil moisture content (SM) and evapotranspiration (ET) is of key importance to extend our understanding of the Earth system's physical processes, and is also required in a wide range of multi-disciplinary research studies and applications. Earth Observation (EO) technology provides an economically feasible solution to derive continuous spatio-temporal estimates of key parameters characterising land surface interactions, including ET as well as SM. Such information is of key value to practitioners, decision makers and scientists alike. The PREMIER-EO project, recently funded by High Performance Computing Wales (HPCW), is a research initiative directed towards developing a better understanding of EO technology's present ability to derive operational estimates of surface fluxes and SM. Moreover, the project aims to address knowledge gaps related to the operational estimation of such parameters, and thus contributes to ongoing global efforts to enhance the accuracy of those products. In this presentation we introduce the PREMIER-EO project, providing a detailed overview of the research aims and objectives for the 1-year duration of the project's implementation. Subsequently, we present the initial results of the work carried out herein, in particular an all-inclusive and robust evaluation of the accuracy of existing operational products of ET and SM from different ecosystems globally. The research outcomes of this project, once completed, will provide an important contribution towards addressing the knowledge gaps related to the operational estimation of ET and SM. The project's results will also support ongoing global efforts towards the operational development of related products using technologically advanced EO instruments which were launched recently or are planned to be launched in the next 1-2 years.
Key Words: PREMIER-EO, HPC Wales, Soil Moisture, Evapotranspiration, Earth Observation

  5. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to be able to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers on hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, …), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ for hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM).
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
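
    Treating a rainfall multiplier as just another factor in a variance-based sensitivity analysis can be illustrated with a toy model; the "hydrological model", parameter ranges, and brute-force double-loop estimator below are stand-ins for the far more efficient Sobol' sampling schemes used in practice:

```python
# Toy variance-based sensitivity analysis: a rainfall multiplier competes
# with a regular model parameter for first-order Sobol' importance.
import random
random.seed(1)

RAIN_MM = 50.0                          # one storm event's recorded depth (mm)

def model(multiplier, runoff_coeff):
    """Stand-in hydrological model: runoff from one storm event."""
    return multiplier * RAIN_MM * runoff_coeff

def draw():
    """Rainfall multiplier (input uncertainty) and a regular model parameter."""
    return random.uniform(0.8, 1.2), random.uniform(0.2, 0.6)

def first_order_sobol(which, n_outer=300, n_inner=300):
    """S_i = Var(E[Y | X_i]) / Var(Y), by brute-force double-loop Monte Carlo."""
    ys = [model(*draw()) for _ in range(20_000)]
    my = sum(ys) / len(ys)
    var_y = sum((y - my) ** 2 for y in ys) / len(ys)
    cond = []
    for _ in range(n_outer):            # outer loop: fix factor `which`
        fixed = draw()[which]
        acc = 0.0
        for _ in range(n_inner):        # inner loop: vary the other factor
            m, k = draw()
            acc += model(fixed, k) if which == 0 else model(m, fixed)
        cond.append(acc / n_inner)
    mc = sum(cond) / len(cond)
    return sum((c - mc) ** 2 for c in cond) / len(cond) / var_y

s_mult = first_order_sobol(0)
s_coeff = first_order_sobol(1)
print(s_mult, s_coeff)                  # the runoff coefficient dominates here
```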

  6. Simulating the Refractive Index Structure Constant (Cn²) in the Surface Layer at Antarctica with a Mesoscale Model

    NASA Astrophysics Data System (ADS)

    Qing, Chun; Wu, Xiaoqing; Li, Xuebin; Tian, Qiguo; Liu, Dong; Rao, Ruizhong; Zhu, Wenyue

    2018-01-01

    In this paper, we introduce an approach wherein the Weather Research and Forecasting (WRF) model is coupled with the bulk aerodynamic method to estimate the surface-layer refractive index structure constant (Cn²) above Taishan Station in Antarctica. First, we use the measured meteorological parameters to estimate Cn² using the bulk aerodynamic method, and second, we use the WRF model output parameters to estimate Cn² using the bulk aerodynamic method. Finally, the corresponding Cn² values from the micro-thermometer are compared with the Cn² values estimated using the WRF model coupled with the bulk aerodynamic method. We analyzed the statistical operators (bias, root mean square error (RMSE), bias-corrected RMSE (σ), and correlation coefficient (Rxy)) in a 20-day data set to assess how this approach performs. In addition, we employ contingency tables to investigate the estimation quality of this approach, which provides complementary key information with respect to the bias, RMSE, σ, and Rxy. The quantitative results are encouraging and permit us to confirm the fine performance of this approach. The main conclusion of this study is that this approach has a positive impact on optimizing the observing time in astronomical applications and provides complementary key information for potential astronomical sites.
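
    The four statistical operators used in this record are standard and satisfy the identity σ² = RMSE² − bias². A compact implementation (the Cn² series below are hypothetical log10 values, used only to exercise the functions):

```python
# Bias, RMSE, bias-corrected RMSE (sigma) and correlation coefficient (Rxy)
# for a pair of observed vs. estimated series.
import math

def stats(obs, est):
    n = len(obs)
    bias = sum(e - o for o, e in zip(obs, est)) / n
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
    sigma = math.sqrt(max(rmse ** 2 - bias ** 2, 0.0))   # bias-corrected RMSE
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    rxy = cov / math.sqrt(sum((o - mo) ** 2 for o in obs) *
                          sum((e - me) ** 2 for e in est))
    return bias, rmse, sigma, rxy

obs = [-16.2, -15.8, -16.5, -15.9, -16.1]   # hypothetical log10(Cn^2), measured
est = [-16.0, -15.7, -16.4, -16.0, -16.2]   # hypothetical log10(Cn^2), WRF-based
print(stats(obs, est))
```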

  7. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks.

    PubMed

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-03-24

    Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of the devices through their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that the resultant polynomial value is either storage intensive or infeasible to compute when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and storage. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
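
    The "symmetric polynomial for group members" is in the spirit of Blundo-style key predistribution: a trusted node picks f(x, y) = f(y, x) over a prime field and gives each member i the univariate share g_i(y) = f(i, y), so any two members derive the same pairwise key g_i(j) = g_j(i). A minimal sketch with an illustrative field size and polynomial degree (not the EKM protocol itself):

```python
# Blundo-style symmetric-polynomial key establishment (illustrative only).
P = 2_147_483_647                    # prime modulus; real deployments use larger
COEFF = [[5, 17, 3],                 # symmetric coefficient matrix: c[a][b] == c[b][a]
         [17, 8, 11],
         [3, 11, 2]]

def f(x, y):
    """The master bivariate polynomial f(x, y) = sum c[a][b] x^a y^b mod P."""
    return sum(COEFF[a][b] * pow(x, a, P) * pow(y, b, P)
               for a in range(3) for b in range(3)) % P

def share(i):
    """Univariate polynomial handed to member i: coefficients of f(i, y)."""
    return [sum(COEFF[a][b] * pow(i, a, P) for a in range(3)) % P
            for b in range(3)]

def pairwise_key(my_share, other_id):
    """Evaluate my share at the peer's id to obtain the shared key."""
    return sum(c * pow(other_id, b, P) for b, c in enumerate(my_share)) % P

alice, bob = share(1001), share(2002)
assert pairwise_key(alice, 2002) == pairwise_key(bob, 1001)  # same key both ways
```

    Because each member stores only a degree-2 univariate polynomial rather than the full bivariate one, storage stays small, which is the trade-off the abstract's "storage intensive" remark refers to.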

  8. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks

    PubMed Central

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-01-01

    Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of the devices through their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that the resultant polynomial value is either storage intensive or infeasible to compute when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and storage. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure. PMID:28338632

  9. ESTIMATION OF CONSTANT AND TIME-VARYING DYNAMIC PARAMETERS OF HIV INFECTION IN A NONLINEAR DIFFERENTIAL EQUATION MODEL.

    PubMed

    Liang, Hua; Miao, Hongyu; Wu, Hulin

    2010-03-01

    Modeling viral dynamics in HIV/AIDS studies has resulted in a deep understanding of the pathogenesis of HIV infection, from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only the parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we combine two newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate, which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decisions based on viral dynamic models are possible.
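
    The general setup (simulate the ODE, compare with observed viral load, adjust parameters) can be sketched with a basic target-cell-limited model; the rate values, the Euler integrator, and the one-parameter grid search below are deliberate simplifications standing in for the MSSB/SNLS machinery, and none of it is the paper's actual model:

```python
# Toy ODE fit: recover the infection rate beta from synthetic viral-load data.
def simulate(beta, days=30, dt=0.01):
    """Basic target-cell-limited model, forward-Euler integration.
    Fixed rates are illustrative literature-style values, not the paper's."""
    lam, d, delta, p, c = 1e4, 0.01, 0.7, 100.0, 3.0
    T, I, V = 1e6, 0.0, 10.0            # uninfected cells, infected cells, virus
    out, t = [], 0.0
    while t < days:
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        t += dt
        if abs(t - round(t)) < dt / 2:  # record one viral-load sample per day
            out.append(V)
    return out

true_beta = 2e-7
data = simulate(true_beta)              # synthetic "clinical" viral-load series

# Grid-search nonlinear least squares over beta alone.
best = min((sum((a - b) ** 2 for a, b in zip(simulate(bb), data)), bb)
           for bb in [1e-7, 1.5e-7, 2e-7, 2.5e-7, 3e-7])
print("estimated beta:", best[1])
```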

  10. Practical quantum key distribution protocol without monitoring signal disturbance.

    PubMed

    Sasaki, Toshihiko; Yamamoto, Yoshihisa; Koashi, Masato

    2014-05-22

    Quantum cryptography exploits the fundamental laws of quantum mechanics to provide a secure way to exchange private information. Such an exchange requires a common random bit sequence, called a key, to be shared secretly between the sender and the receiver. The basic idea behind quantum key distribution (QKD) has widely been understood as the property that any attempt to distinguish encoded quantum states causes a disturbance in the signal. As a result, implementation of a QKD protocol involves an estimation of the experimental parameters influenced by the eavesdropper's intervention, which is achieved by randomly sampling the signal. If the estimation of many parameters with high precision is required, the portion of the signal that is sacrificed increases, thus decreasing the efficiency of the protocol. Here we propose a QKD protocol based on an entirely different principle. The sender encodes a bit sequence onto non-orthogonal quantum states and the receiver randomly dictates how a single bit should be calculated from the sequence. The eavesdropper, who is unable to learn the whole of the sequence, cannot guess the bit value correctly. An achievable rate of secure key distribution is calculated by considering complementary choices between quantum measurements of two conjugate observables. We found that a practical implementation using a laser pulse train achieves a key rate comparable to a decoy-state QKD protocol, an often-used technique for lasers. It also has a better tolerance of bit errors and of finite-sized-key effects. We anticipate that this finding will give new insight into how the probabilistic nature of quantum mechanics can be related to secure communication, and will facilitate the simple and efficient use of conventional lasers for QKD.

  11. Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong

    2017-08-01

    We propose a method to improve the performance of the two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction, implemented via non-Gaussian post-selection, not only enhances the entanglement of the two-mode squeezed vacuum state but also has advantages in simplifying the physical operation and promoting efficiency. In the two-way protocol, virtual photon subtraction can be applied to the two sources independently. Numerical simulations show that the optimal performance of the improved two-way protocol is obtained with photon subtraction used only by Alice. The transmission distance and tolerable excess noise are improved by using virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise remains high as the distance increases, so that the robustness of the two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distances.

  12. Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2018-01-01

    Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. A key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameters had previously been estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σNEE) across ensemble members, from ~2-3 g C m-2 yr-1 (with uncertain parameters only) to ~45 g C m-2 yr-1 (C3 grass) and ~75 g C m-2 yr-1 (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and to enhanced or reduced drought stress caused by the perturbation of precipitation.
The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σNEE ~4.0-13.5 g C m-2 yr-1 with perturbed parameters, meteorological forcings and initial states). We conclude that LAI and NEE uncertainty with CLM is clearly underestimated if uncertain meteorological forcings and initial states are not taken into account.

  13. Stirling Engine Dynamic System Modeling

    NASA Technical Reports Server (NTRS)

    Nakis, Christopher G.

    2004-01-01

    The Thermo-Mechanical Systems Branch at the Glenn Research Center devotes a large amount of time to Stirling engines. These engines will be used on missions where solar power is inefficient, especially in deep space. I work with Tim Regan and Ed Lewandowski, who are currently developing and validating a mathematical model for the Stirling engines. This model incorporates all aspects of the system, including mechanical, electrical and thermodynamic components. Modeling is done through Simplorer, a program capable of running simulations of the model. Once created and proven to be accurate, a model is used for developing new ideas for engine design. My largest specific project involves varying key parameters in the model and quantifying the results. This can all be done relatively trouble-free with the help of Simplorer. Once the model is complete, Simplorer will do all the necessary calculations. The more complicated part of this project is determining which parameters to vary. Finding key parameters depends on the potential for a value to be independently altered in the design. For example, a change in one dimension may lead to a proportional change in the rest of the model, and no real progress is made. Also important is the ability of a changed value to have a substantial impact on the outputs of the system. Results will be condensed into graphs and tables for better communication and understanding of the data. By changing these parameters, a more optimal design can be created without having to purchase or build any models. Also, hours and hours of results can be simulated in minutes. In the long run, using mathematical models can save time and money. Along with this project, I have many other smaller assignments throughout the summer. My main goal is to assist in the processes of model development, validation and testing.

  14. Concomitant semi-quantitative and visual analysis improves the predictive value on treatment outcome of interim 18F-fluorodeoxyglucose / Positron Emission Tomography in advanced Hodgkin lymphoma.

    PubMed

    Biggi, Alberto; Bergesio, Fabrizio; Chauvie, Stephane; Bianchi, Andrea; Menga, Massimo; Fallanca, Federico; Hutchings, Martin; Gregianin, Michele; Meignan, Michel; Gallamini, Andrea

    2017-07-27

    Qualitative assessment using the Deauville five-point scale (DS) is the gold standard for interim and end-of-treatment PET interpretation in lymphoma. In the present study we assessed the reliability and the prognostic value of different semi-quantitative (SQ) parameters in comparison with DS for interim PET (iPET) interpretation in Hodgkin lymphoma (HL). A cohort of 82 out of 260 patients with advanced-stage HL enrolled in the International Validation Study (IVS), scored as 3 to 5 by the expert panel, was included in the present report. Two nuclear medicine physicians, blinded to patient history, clinical data and treatment outcome, independently reviewed the iPET using the following parameters: DS, SUVMax, SUVPeak of the most active lesion, QMax (ratio of lesion SUVMax to liver SUVMax) and QRes (ratio of lesion SUVPeak to liver SUVMean). The optimal sensitivity, specificity, and positive and negative predictive values for predicting treatment outcome were calculated for all the above parameters with receiver operating characteristic (ROC) analysis. The prognostic value of all parameters was similar, the best cut-off value being 4 for DS (area under the curve, AUC, 0.81, 95% CI: 0.72-0.90), 3.81 for SUVMax (AUC 0.82, 95% CI: 0.73-0.91), 3.20 for SUVPeak (AUC 0.86, 95% CI: 0.77-0.94), 1.07 for QMax (AUC 0.84, 95% CI: 0.75-0.93) and 1.38 for QRes (AUC 0.84, 95% CI: 0.75-0.93). The reproducibility of the different parameters was also similar: the inter-observer agreement measured with Cohen's kappa was 0.93 (95% CI 0.84-1.01) for DS, 0.88 (0.77-0.98) for SUVMax, 0.82 (0.70-0.95) for SUVPeak, 0.85 (0.74-0.97) for QRes and 0.78 (0.65-0.92) for QMax. Owing to the high specificity of SUVPeak (0.87) and the good sensitivity of DS (0.86), using both parameters increased the positive predictive value from 0.65 for DS alone to 0.79.
When both parameters were positive at iPET, 3-year failure-free survival (FFS) was significantly lower than in patients whose iPET was interpreted with the qualitative parameter only (DS 4 or 5): 21% vs 35%. On the other hand, the FFS of patients with negative results was not significantly different (88% vs 86%). In this study we demonstrated that combining the semi-quantitative parameter SUVPeak with a purely qualitative DS interpretation increases the positive predictive value of iPET and identifies with higher precision the subset of patients with a very dismal prognosis. However, these retrospective findings should be confirmed prospectively in a larger patient cohort.
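
The cutoff-selection step behind results like these can be sketched with a small ROC routine. This is an illustrative reconstruction, not the study's analysis code: the scores and labels are made up, and the Youden index is assumed as the optimality criterion.

```python
import numpy as np

def roc_analysis(scores, labels):
    """Sweep a cutoff over the observed scores to build the ROC curve,
    then return the trapezoidal AUC and the Youden-optimal cutoff
    (the threshold maximizing sensitivity + specificity - 1)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.sort(np.unique(scores))[::-1]
    pos, neg = labels.sum(), len(labels) - labels.sum()
    tpr = np.array([(scores[labels == 1] >= t).sum() / pos for t in thresholds])
    fpr = np.array([(scores[labels == 0] >= t).sum() / neg for t in thresholds])
    # Close the curve at (0, 0) and (1, 1) and integrate by trapezoids.
    fpr_full = np.r_[0.0, fpr, 1.0]
    tpr_full = np.r_[0.0, tpr, 1.0]
    auc = np.sum((fpr_full[1:] - fpr_full[:-1]) *
                 (tpr_full[1:] + tpr_full[:-1]) / 2.0)
    best_cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden's J statistic
    return auc, best_cutoff
```

For perfectly separated scores the routine returns an AUC of 1.0 and the lowest score of the positive group as the cutoff.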

  15. Effect of Simultaneous Inoculation with Yeast and Bacteria on Fermentation Kinetics and Key Wine Parameters of Cool-Climate Chardonnay

    PubMed Central

    Jussier, Delphine; Dubé Morneau, Amélie; Mira de Orduña, Ramón

    2006-01-01

    Inoculating grape musts with wine yeast and lactic acid bacteria (LAB) concurrently in order to induce simultaneous alcoholic fermentation (AF) and malolactic fermentation (MLF) can be an efficient alternative to overcome potential inhibition of LAB in wines because of high ethanol concentrations and reduced nutrient content. In this study, the simultaneous inoculation of yeast and LAB into must was compared with a traditional vinification protocol, where MLF was induced after completion of AF. For this, two suitable commercial yeast-bacterium combinations were tested in cool-climate Chardonnay must. The time courses of glucose and fructose, acetaldehyde, several organic acids, and nitrogenous compounds were measured along with the final values of other key wine parameters. Sensory evaluation was done after 12 months of storage. The current study could not confirm a negative impact of simultaneous AF/MLF on fermentation success and kinetics or on final wine parameters. While acetic acid concentrations were slightly increased in wines after simultaneous AF/MLF, the differences were of neither practical nor legal significance. No statistically significant differences were found with regard to the final values of pH or total acidity and the concentrations of ethanol, acetaldehyde, glycerol, citric and lactic acids, and the nitrogen compounds arginine, ammonia, urea, citrulline, and ornithine. Sensory evaluation by a semiexpert panel confirmed the similarity of the wines. However, simultaneous inoculation led to considerable reductions in overall fermentation durations. Furthermore, differences of physiological and microbiological relevance were found. Specifically, we report the vinification of “super-dry” wines devoid of glucose and fructose after simultaneous inoculation of yeast and bacteria. PMID:16391046

  16. Numerical study on injection parameters optimization of thin wall and biodegradable polymers parts

    NASA Astrophysics Data System (ADS)

    Santos, C.; Mendes, A.; Carreira, P.; Mateus, A.; Malça, C.

    2017-07-01

    Nowadays, the molds industry is searching for new markets with diversified, added-value products. The concept of producing thin-walled, biodegradable parts, mostly manufactured by the injection process, has assumed a relevant importance due to environmental and economic factors. Growing global awareness of the harmful effects of conventional polymers on quality of life, together with the legislation imposed, has become a key factor in the consumer's choice of a particular product. The target of this work is to provide an integrated solution for the injection of thin-walled parts manufactured from biodegradable materials. This integrated solution includes the design and manufacture of the mold as well as finding the optimum values of the injection parameters in order to make the process effective and competitive. For this, the Moldflow software was used. It was demonstrated that this computational tool responds effectively and can constitute an important tool in supporting the injection molding of thin-walled and biodegradable parts.

  17. Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.

    PubMed

    Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik

    2016-12-01

    We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Mad cows and computer models: the U.S. response to BSE.

    PubMed

    Ackerman, Frank; Johnecheck, Wendy A

    2008-01-01

    The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.

  19. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes existing traffic flow prediction algorithms and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model shows good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, making it suitable for real-time traffic flow prediction. PMID:27872637
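
A minimal reading of the transfer-probability idea can be sketched as follows. This is a hedged illustration: plain least squares (clipped to [0, 1]) stands in for the paper's constrained Newton interior-point solver, and the data are synthetic.

```python
import numpy as np

def fit_transfer_probs(upstream_hist, target_hist):
    """Estimate transfer probabilities from historical flows by least
    squares (a stand-in for the paper's Newton interior-point solver),
    clipped into the valid probability range [0, 1]."""
    p, *_ = np.linalg.lstsq(upstream_hist, target_hist, rcond=None)
    return np.clip(p, 0.0, 1.0)

def predict_flow(upstream_flows, transfer_probs):
    """Flow equation (minimal reading): predicted flow on the target
    road = sum over upstream roads of flow_i * transfer_prob_i."""
    return float(np.dot(upstream_flows, transfer_probs))
```

With noise-free synthetic history the true transfer probabilities are recovered and the one-step-ahead prediction follows directly.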

  20. Triphenylamine-based fluorescent NLO phores with ICT characteristics: Solvatochromic and theoretical study

    NASA Astrophysics Data System (ADS)

    Katariya, Santosh B.; Patil, Dinesh; Rhyman, Lydia; Alswaidan, Ibrahim A.; Ramasami, Ponnadurai; Sekar, Nagaiyan

    2017-12-01

    The static first and second hyperpolarizabilities and their related properties were calculated for triphenylamine-based "push-pull" dyes using the B3LYP, CAM-B3LYP and BHHLYP functionals in conjunction with the 6-311+G(d,p) basis set. The electronic coupling for the electron transfer reaction of the dyes was calculated with the generalized Mulliken-Hush method. The results obtained were correlated with the polarizability parameter αCT, the first hyperpolarizability parameter βCT, and the solvatochromic descriptor ⟨γ⟩SD obtained by the solvatochromic method. The dyes studied show a high total first-order hyperpolarizability (70-238 times that of urea) and second-order hyperpolarizability (412-778 times that of urea). Among the three functionals, CAM-B3LYP and BHHLYP give hyperpolarizability values closer to the experimental ones. Experimental absorption and emission wavelengths measured for all the synthesized dyes are in good agreement with those predicted using time-dependent density functional theory. The theoretical examination of the non-linear optical properties focused on the key parameters of polarizability and hyperpolarizability. A remarkable increase in non-linear optical response is observed on insertion of a benzothiazole unit compared to a benzimidazole unit.

  1. Controlled recovery of phylogenetic communities from an evolutionary model using a network approach

    NASA Astrophysics Data System (ADS)

    Sousa, Arthur M. Y. R.; Vieira, André P.; Prado, Carmen P. C.; Andrade, Roberto F. S.

    2016-04-01

    This work reports the use of a complex network approach to produce a phylogenetic classification tree for a simple evolutionary model. This approach has already been used to treat proteomic data of actual extant organisms, but an investigation of its reliability in retrieving a traceable evolutionary history has been missing. The evolutionary model includes key ingredients for the emergence of groups of related organisms, namely differentiation through random mutations and population growth, but purposely omits other realistic ingredients that are not strictly necessary to originate an evolutionary history. This choice causes the model to depend only on a small set of parameters controlling the mutation probability and the population of different species. Our results indicate that, for a set of parameter values, the phylogenetic classification produced by this framework reproduces the actual evolutionary history with a very high average degree of accuracy. This includes parameter values where the species originated by the evolutionary dynamics have modular structures. In the more general context of community identification in complex networks, our model offers a simple setting for evaluating how the underlying dynamics generating a network affect the efficiency of community formation and identification.

  2. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability of targeting the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around −1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which has not been observed for the original sandpile. For this critical range of probability values, the model statistics show remarkable agreement with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while adding minimal complexity to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
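
The targeted-triggering rule can be sketched as a minimal modification of the standard sandpile. This is illustrative only: the grid size, toppling threshold of 4, and open boundaries are the usual BTW conventions, not values from the paper.

```python
import numpy as np

def sandpile_avalanche(L=16, n_grains=3000, p_target=0.005, seed=1):
    """BTW-style sandpile with targeted triggering: with probability
    p_target the next grain is dropped on the currently most loaded
    ('most susceptible') site, otherwise on a random site. Returns the
    final grid and the avalanche size of every grain drop."""
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(n_grains):
        if rng.random() < p_target:
            i, j = np.unravel_index(np.argmax(z), z.shape)
        else:
            i, j = rng.integers(L), rng.integers(L)
        z[i, j] += 1
        size = 0
        while True:
            over = np.argwhere(z >= 4)       # sites at the threshold
            if len(over) == 0:
                break
            for a, b in over:                # parallel toppling sweep
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:
                        z[na, nb] += 1       # grains leave at open edges
        sizes.append(size)
    return z, np.array(sizes)
```

The avalanche sizes collected this way are the quantity whose distribution is compared against the Gutenberg-Richter expectation.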

  3. Research on key factors and their interaction effects of electromagnetic force of high-speed solenoid valve.

    PubMed

    Liu, Peng; Fan, Liyun; Hayat, Qaisar; Xu, De; Ma, Xiuzhen; Song, Enzhe

    2014-01-01

    An analysis of the interaction effects of key parameters on the electromagnetic force, combining numerical simulations with lab experiments and based on response surface methodology (RSM), has been proposed to optimize the design of the high-speed solenoid valve (HSV) and improve its performance. A numerical simulation model of the HSV has been developed in the Ansoft Maxwell environment and its accuracy has been validated through lab experiments. The effects of changes in core structure, coil structure, armature structure, working air gap, and drive current on the electromagnetic force of the HSV have been analyzed through the simulation model, and the influence rules of the various parameters on the electromagnetic force have been established. The response surface model of the electromagnetic force has been utilized to analyze the interaction effects between major parameters. It has been concluded that six interaction factors, namely working air gap with armature radius, drive current with armature thickness, coil turns with side pole radius, armature thickness with armature radius, armature thickness with side pole radius, and armature radius with side pole radius, have a significant influence on the electromagnetic force. Optimal match values between coil turns and side pole radius, armature thickness and side pole radius, and armature radius and side pole radius have also been determined.
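
The response-surface step can be sketched generically: fit a quadratic model with two-factor interaction terms by least squares, so that the interaction coefficients quantify the joint effects the abstract describes. The data below are synthetic, not the HSV simulation results.

```python
import numpy as np
from itertools import combinations

def rsm_design_matrix(X):
    """Full quadratic response-surface model: intercept, linear terms,
    all two-factor interactions, and pure quadratic terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Least-squares fit of the response surface; the magnitude of an
    interaction coefficient indicates how strongly two factors jointly
    influence the response."""
    beta, *_ = np.linalg.lstsq(rsm_design_matrix(X), y, rcond=None)
    return beta
```

On noise-free synthetic data the fitted coefficients recover the true linear, interaction, and quadratic effects exactly.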

  5. Parallel sort with a ranged, partitioned key-value store in a high performance computing environment

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron; Poole, Stephen W.

    2016-01-26

    Improved sorting techniques are provided that perform a parallel sort using a ranged, partitioned key-value store in a high performance computing (HPC) environment. A plurality of input data files comprising unsorted key-value data in a partitioned key-value store are sorted. The partitioned key-value store comprises a range server for each of a plurality of ranges. Each input data file has an associated reader thread. Each reader thread reads the unsorted key-value data in the corresponding input data file and performs a local sort of the unsorted key-value data to generate sorted key-value data. A plurality of sorted, ranged subsets of each of the sorted key-value data are generated based on the plurality of ranges. Each sorted, ranged subset corresponds to a given one of the ranges and is provided to one of the range servers corresponding to the range of the sorted, ranged subset. Each range server sorts the received sorted, ranged subsets and provides a sorted range. A plurality of the sorted ranges are concatenated to obtain a globally sorted result.
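
The scheme described in this record can be sketched as a toy in-memory version. This is a hedged sketch, not the patented implementation: file I/O, the key-value store itself, and communication with actual range servers are elided, and all names are hypothetical.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_ranged_sort(input_files, boundaries):
    """Each 'reader' sorts its input locally, splits the result into
    key ranges, and each 'range server' merges the sorted subsets it
    receives; concatenating the per-range outputs yields a globally
    sorted result. `boundaries` are the inclusive upper bounds of each
    range (the last should be +inf)."""
    def read_and_partition(records):
        local = sorted(records)              # reader thread: local sort
        parts = [[] for _ in boundaries]
        for key in local:
            for r, bound in enumerate(boundaries):
                if key <= bound:
                    parts[r].append(key)     # sorted, ranged subset
                    break
        return parts

    with ThreadPoolExecutor() as pool:
        partitioned = list(pool.map(read_and_partition, input_files))

    # Each range server merges its already-sorted subsets.
    ranges = [list(heapq.merge(*(p[r] for p in partitioned)))
              for r in range(len(boundaries))]
    return [k for rng in ranges for k in rng]  # concatenate the ranges
```

Because every subset handed to a range server is already sorted, the servers only merge rather than re-sort, which is the source of the scheme's efficiency.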

  6. Understanding which parameters control shallow ascent of silicic effusive magma

    NASA Astrophysics Data System (ADS)

    Thomas, Mark E.; Neuberg, Jurgen W.

    2014-11-01

    The estimation of the magma ascent rate is key to predicting volcanic activity and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. Linking potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models for the Soufrière Hills volcano that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance for changes in ascent rate. We show that variability in the rate of low-frequency seismicity, assumed to correlate directly with the rate of magma movement, can be used as an indicator of changes in ascent rate and, therefore, eruptive activity. The results indicate that conduit diameter and excess pressure in the magma chamber are among the dominant controlling variables, but the single most important parameter is the volatile content (assumed to be water only). Varying this parameter across the range of reported values changes the calculated ascent velocities by up to 800%.
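
Why water content dominates can be illustrated with a deliberately simple ascent model. This is not the paper's conduit flow model: Poiseuille pipe flow and the toy log-linear viscosity-water relation below are assumptions chosen only to show how a one-unit change in a parameter can swing the velocity by an order of magnitude.

```python
def ascent_velocity(radius_m, dpdz_pa_per_m, water_wt_pct):
    """Mean Poiseuille velocity in a cylindrical conduit,
    v = R^2 * (dP/dz) / (8 * mu), with a toy melt-viscosity law
    log10(mu / Pa s) = 11 - water_wt_pct standing in for a calibrated
    viscosity model (illustrative numbers only)."""
    mu = 10.0 ** (11.0 - water_wt_pct)   # melt viscosity, Pa s
    return radius_m ** 2 * dpdz_pa_per_m / (8.0 * mu)
```

Because viscosity enters exponentially through the dissolved water, changing the water content from 4 to 5 wt% multiplies the toy ascent velocity tenfold, while the same relative change in conduit radius only scales it quadratically.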

  7. Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.

    PubMed

    Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis

    2008-10-01

    We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum-cost vaccination policy under a chance constraint. The chance constraint requires that the probability that the post-vaccination reproduction number R* remains below one meet or exceed a prescribed reliability level.
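
A chance constraint of this kind can be illustrated with a sample-approximation sketch. This is not the paper's formulation: the scalar reproduction-number model, the efficacy value, and the grid search over coverage are all assumptions made for illustration.

```python
import numpy as np

def min_coverage(r0_samples, efficacy=0.9, reliability=0.95):
    """Sample-approximation of the chance constraint: the smallest
    vaccination coverage v on a grid such that the effective
    reproduction number r0 * (1 - efficacy * v) falls below 1 in at
    least `reliability` of the sampled parameter scenarios."""
    for v in np.linspace(0.0, 1.0, 101):
        if np.mean(r0_samples * (1.0 - efficacy * v) < 1.0) >= reliability:
            return float(v)
    return None   # infeasible even at full coverage
```

Drawing the uncertain basic reproduction number from a distribution and enforcing the constraint on the sampled scenarios mimics how stochastic programming turns a probabilistic requirement into an optimization over policies.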

  8. Sustainment and Net-ready Key Performance Parameters (KPP) in an Enterprise Information System (EIS) Value Assurance Framework (VAF)

    DTIC Science & Technology

    2014-04-01

    adapts the concept of "Communities of Interest" (COI) identified in DoD GIG policy for this purpose. In the VAF construct, COIs become hands-on beta...developers. This approach both leverages COTS economy of scale and nudges COTS development in directions useful to the government. Programs can write...schedule; exploit new GIG acquisition policies; extend and expand pure COTS competition; issue simple use cases in lieu of traditional RFI/RFP.

  9. Tool Efficiency Analysis model research in SEMI industry

    NASA Astrophysics Data System (ADS)

    Lei, Ma; Nana, Zhang; Zhongqiu, Zhang

    2018-06-01

    One of the key goals in the SEMI industry is to improve equipment throughput and to maximize equipment production efficiency. This paper, based on SEMI standards for semiconductor equipment control, defines the transition rules between different tool states and presents a TEA system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified; the parameter values used to measure equipment performance were obtained, along with suggestions for improvement.
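
The finite-state-machine idea can be sketched as a tool-state tracker that enforces allowed transitions and accumulates time per state. The three states and transition rules below are illustrative stand-ins, not the SEMI standard's state set or the paper's rules.

```python
# Illustrative tool states and allowed transitions (hypothetical).
ALLOWED = {
    "IDLE": {"RUN", "DOWN"},
    "RUN": {"IDLE", "DOWN"},
    "DOWN": {"IDLE"},
}

class ToolFSM:
    def __init__(self):
        self.state = "IDLE"
        self.time_in = {s: 0.0 for s in ALLOWED}

    def step(self, new_state, dwell):
        """Accumulate `dwell` time units in the current state, then
        transition; invalid transitions raise ValueError."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.time_in[self.state] += dwell
        self.state = new_state

    def utilization(self):
        """Fraction of tracked time spent productive (in RUN)."""
        total = sum(self.time_in.values())
        return self.time_in["RUN"] / total if total else 0.0
```

Replaying a tool's event log through such a machine yields per-state time totals, from which throughput-style performance metrics like utilization fall out directly.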

  10. Dopamine does double duty in motivating cognitive effort

    PubMed Central

    Westbrook, Andrew; Braver, Todd S.

    2015-01-01

    Cognitive control is subjectively costly, suggesting that engagement is modulated in relationship to incentive state. Dopamine appears to play key roles. In particular, dopamine may mediate cognitive effort by two broad classes of functions: 1) modulating the functional parameters of working memory circuits subserving effortful cognition, and 2) mediating value-learning and decision-making about effortful cognitive action. Here we tie together these two lines of research, proposing how dopamine serves “double duty”, translating incentive information into cognitive motivation. PMID:26889810

  11. A novel procedure for detecting and focusing moving objects with SAR based on the Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Barbarossa, S.; Farina, A.

    A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) once the WVD is available, detection is efficiently performed on the samples in the time-frequency domain without resorting to a bank of filters, each matched to possible values of the unknown target motion parameters; (ii) the target motion parameters can be estimated in the same time-frequency domain by locating the line along which the maximum energy of the WVD is concentrated. The approach is validated by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
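
The time-frequency idea can be illustrated with a direct discrete Wigner-Ville computation on a linear-FM chirp, the signal model a moving target induces in SAR. This is a generic textbook sketch (signal parameters and sizes are made up), not the paper's processing chain.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal:
    W[n, k] = FFT over the lag m of x[n+m] * conj(x[n-m])."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        M = min(n, N - 1 - n)                  # largest symmetric lag
        kernel = np.zeros(N, dtype=complex)
        for m in range(-M, M + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real         # conjugate symmetry -> real
    return W
```

For a chirp, the WVD energy concentrates along a straight ridge whose slope is the chirp rate; locating that maximum-energy line is exactly the estimation step the abstract exploits for focusing. Note the ridge appears at twice the instantaneous frequency because of the symmetric lag.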

  12. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria, and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative sensitivity of the near-field temperature distribution in a single-level salt repository to the modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.
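
The direct-recalculation idea can be sketched as one-at-a-time relative sensitivity coefficients: perturb each parameter slightly, rerun the model, and normalize. The toy response model in the test is hypothetical, standing in for the repository temperature model.

```python
def sensitivity_coefficients(model, params, rel_step=0.01):
    """One-at-a-time relative sensitivity S_i = (dT/T) / (dp_i/p_i),
    estimated by direct recalculation with a small relative
    perturbation of each parameter in turn."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        bumped = dict(params, **{name: value * (1.0 + rel_step)})
        sens[name] = (model(bumped) - base) / base / rel_step
    return sens
```

A coefficient near 1 means the output scales linearly with that parameter, near 2 quadratically, and a negative value means the output falls as the parameter grows; ranking the magnitudes reproduces the screening step described above.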

  13. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    PubMed Central

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
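
The list-based cooling schedule can be sketched as follows. This follows the abstract's description loosely: a 2-opt neighborhood and the specific list-update rule (replacing the maximum temperature with the mean of temperatures implied by accepted uphill moves) are assumptions filling in unstated details.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[a], pts[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def lbsa_tsp(pts, list_len=20, n_outer=200, seed=3):
    """List-based SA: the acceptance temperature is always the current
    maximum of a temperature list; temperatures implied by accepted
    uphill moves (t = -delta / ln r) replace that maximum, so the list
    cools adaptively instead of following a hand-tuned schedule."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    cur = tour_length(tour, pts)

    def neighbor(t):
        i, j = sorted(rng.sample(range(n), 2))
        return t[:i] + t[i:j + 1][::-1] + t[j + 1:]      # 2-opt move

    # Seed the temperature list with costs of random 2-opt moves.
    temps = sorted(abs(tour_length(neighbor(tour), pts) - cur) + 1e-9
                   for _ in range(list_len))
    best_tour, best = tour[:], cur
    for _ in range(n_outer):
        t_max, new_temps = temps[-1], []
        for _ in range(n):
            cand = neighbor(tour)
            delta = tour_length(cand, pts) - cur
            r = rng.random()
            if delta <= 0 or (r > 0.0 and r < math.exp(-delta / t_max)):
                if delta > 0:
                    new_temps.append(-delta / math.log(r))
                tour, cur = cand, cur + delta
                if cur < best:
                    best_tour, best = tour[:], cur
        if new_temps:                   # adapt: replace the max temperature
            temps[-1] = sum(new_temps) / len(new_temps)
            temps.sort()
    return best_tour, best
```

On points placed on a circle, where the hull-order tour is provably optimal, the sketch converges close to the optimal perimeter without any manually chosen temperature schedule.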

  14. Assessing the performance of community-available global MHD models using key system parameters and empirical relationships

    NASA Astrophysics Data System (ADS)

    Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.

    2015-12-01

    Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions, including extraordinary events like geomagnetic storms. Systematic validation of GMHD models against observations continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) a specially designed set of computer runs that simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) that are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after a north-south interplanetary magnetic field turning. According to the validation results, none of the models emerges as an absolute leader. The new approach suggested for evaluating the models' performance against reality may be used by model users when planning their investigations, as well as by model developers and those interested in quantitatively evaluating progress in magnetospheric modeling.

  15. The effects of dynamical synapses on firing rate activity: a spiking neural network model.

    PubMed

    Khalil, Radwa; Moftah, Marie Z; Moustafa, Ahmed A

    2017-11-01

    Accumulating evidence relates the fine-tuning of synaptic maturation and regulation of neural network activity to several key factors, including GABA-A signaling and the lateral spread length between neighboring neurons (i.e., local connectivity). Furthermore, a number of studies consider short-term synaptic plasticity (STP) an essential element in the instant modification of synaptic efficacy in the neuronal network and in modulating responses to sustained ranges of external Poisson input frequency (IF). Nevertheless, evaluating the firing activity in response to the dynamical interaction between STP (triggered by ranges of IF) and these key parameters in vitro remains elusive. Therefore, we designed a spiking neural network (SNN) model in which we incorporated the following parameters: local density of arbor essences and the lateral spread length between neighboring neurons. We also created several network scenarios based on these key parameters. Then, we implemented two classes of STP: (1) short-term synaptic depression (STD) and (2) short-term synaptic facilitation (STF). Each class has two differential forms based on the parametric value of its synaptic time constant (for either depressing or facilitating synapses). Lastly, we compared the neural firing responses before and after the treatment with STP. We found that dynamical synapses (STP) play a critical differential role in evaluating and modulating the firing rate activity in each network scenario. Moreover, we investigated the impact of changing the balance between excitation (E) and inhibition (I) on stabilizing this firing activity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Printability of calcium phosphate powders for three-dimensional printing of tissue engineering scaffolds.

    PubMed

    Butscher, Andre; Bohner, Marc; Roth, Christian; Ernstberger, Annika; Heuberger, Roman; Doebelin, Nicola; von Rohr, Philipp Rudolf; Müller, Ralph

    2012-01-01

    Three-dimensional printing (3DP) is a versatile method to produce scaffolds for tissue engineering. In 3DP the solid is created by the reaction of a liquid selectively sprayed onto a powder bed. Despite the importance of the powder properties, there has to date been a relatively poor understanding of the relation between the powder properties and the printing outcome. This article aims at improving this understanding by looking at the link between key powder parameters (particle size, flowability, roughness, wettability) and printing accuracy. These powder parameters are determined as key factors with a predictive value for the final 3DP outcome. Promising results can be expected for mean particle size in the range of 20-35 μm, compaction rate in the range of 1.3-1.4, flowability in the range of 5-7 and powder bed surface roughness of 10-25 μm. Finally, possible steps and strategies in pushing the physical limits concerning improved quality in 3DP are addressed and discussed. Copyright © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  17. An investigation of the key parameters for predicting PV soiling losses

    DOE PAGES

    Micheli, Leonardo; Muller, Matthew

    2017-01-25

    One hundred and two environmental and meteorological parameters have been investigated and compared with the performance of 20 soiling stations installed in the USA, in order to determine their ability to predict the soiling losses occurring on PV systems. The results of this investigation showed that the annual average of the daily mean particulate matter values recorded by monitoring stations deployed near the PV systems is the best soiling predictor, with coefficients of determination (R²) as high as 0.82. The precipitation pattern was also found to be relevant: among the different meteorological parameters, the average length of dry periods had the best correlation with the soiling ratio. Lastly, a preliminary investigation of two-variable regressions was attempted and resulted in an adjusted R² of 0.90 when a combination of PM2.5 and a binary classification for the average length of the dry period was introduced.

  18. Investigation of the current yaw engineering models for simulation of wind turbines in BEM and comparison with CFD and experiment

    NASA Astrophysics Data System (ADS)

    Rahimi, H.; Hartvelt, M.; Peinke, J.; Schepers, J. G.

    2016-09-01

    The aim of this work is to investigate the capabilities of current engineering tools based on Blade Element Momentum (BEM) theory and free vortex wake codes for the prediction of key aerodynamic parameters of wind turbines in yawed flow. The axial induction factor and aerodynamic loads of three wind turbines (NREL VI, AVATAR and INNWIND.EU) were investigated using wind tunnel measurements and numerical simulations for 0 and 30 degrees of yaw. Results indicated that for axial conditions there is good agreement between all codes in terms of mean values of aerodynamic parameters; in yawed flow, however, significant deviations were observed. This was due to unsteady phenomena such as the advancing and retreating blade effect and the skewed wake effect. These deviations were most visible in the variation of aerodynamic parameters with rotor azimuth angle for sections at the root and tip, where the skewed wake effect plays a major role.

  19. Evaluation of performance of select fusion experiments and projected reactors

    NASA Technical Reports Server (NTRS)

    Miley, G. H.

    1978-01-01

    The performance of NASA Lewis fusion experiments (SUMMA and Bumpy Torus) is compared with other experiments and that necessary for a power reactor. Key parameters cited are gain (fusion power/input power) and the time average fusion power, both of which may be more significant for real fusion reactors than the commonly used Lawson parameter. The NASA devices are over 10 orders of magnitude below the required powerplant values in both gain and time average power. The best experiments elsewhere are also as much as 4 to 5 orders of magnitude low. However, the NASA experiments compare favorably with other alternate approaches that have received less funding than the mainline experiments. The steady-state character and efficiency of plasma heating are strong advantages of the NASA approach. The problem, though, is to move ahead to experiments of sufficient size to advance in gain and average power parameters.

  20. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). 
Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141

  1. Target-in-the-loop remote sensing of laser beam and atmospheric turbulence characteristics.

    PubMed

    Vorontsov, Mikhail A; Lachinova, Svetlana L; Majumdar, Arun K

    2016-07-01

    A new target-in-the-loop (TIL) atmospheric sensing concept for in situ remote measurements of major laser beam characteristics and atmospheric turbulence parameters is proposed and analyzed numerically. The technique is based on utilization of an integral relationship between complex amplitudes of the counterpropagating optical waves known as overlapping integral or interference metric, whose value is preserved along the propagation path. It is shown that the interference metric can be directly measured using the proposed TIL sensing system composed of a single-mode fiber-based optical transceiver and a remotely located retro-target. The measured signal allows retrieval of key beam and atmospheric turbulence characteristics including scintillation index and the path-integrated refractive index structure parameter.

  2. Strength and deformability of light-toned layered deposits observed by MER Opportunity: Eagle to Erebus craters, Mars

    NASA Astrophysics Data System (ADS)

    Okubo, Chris H.

    2007-10-01

    Quantifying host rock deformation is vital to understanding the geologic evolution and productivity of subsurface fluid reservoirs. In support of on-going characterization of fracture controlled fluid flow through the light-toned layered deposits on Mars, key parameters of strength and deformability are derived from Microscopic Imager and Rock Abrasion Tool data collected by the Mars Exploration Rover Opportunity in Meridiani Planum. Analysis of 21 targets of light-toned layered deposits yields a median apparent porosity of 0.25. Additional physical parameters for each target are derived from these porosity measurements. The median value of unconfined compressive strength is 11.23 MPa, Young's modulus is 1.86 GPa, and the brittle-ductile transition pressure is 8.77 MPa.

  3. The Abundances of Methane and Ortho/Para Hydrogen in Uranus and Neptune: Implications of New Laboratory 4-0 H(sub 2) Quadrupole Line Parameters

    NASA Technical Reports Server (NTRS)

    Baines, K.; Mickelson, M.; Larson, L.; Ferguson, D.

    1994-01-01

    The tropospheric methane molar fraction (f(sub ch4,t)) and the ortho/para hydrogen ratio are derived for Uranus and Neptune based on new determinations of spectroscopic parameters for key hydrogen features, as reported by Ferguson et al. (1993, J. Mol. Spec. 160, 315-325). For each planet, the relatively weak laboratory linestrengths (approximately 30% and 15% less than the theoretical 4-0 S(0) and S(1) linestrengths, respectively) result, when compared to analyses adopting theoretical values, in a 30% decrease in the tropospheric methane ratio and a comparable increase in the pressure level of the optically thick cloudtop marking the bottom of the visible atmosphere (P(sub cld)).

  4. Modelling audiovisual integration of affect from videos and music.

    PubMed

    Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V

    2018-05-01

    Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five-parameter differential-weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three-parameter constant-weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
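
    The averaging-model structure referred to here can be written down compactly. The weights and scale values below are invented for illustration (they are not the fitted parameters from these experiments); they show how a weighted average with a neutral initial state produces the congruency effect, with two congruent negative cues yielding a more extreme rating than either cue alone.

```python
def averaging_response(components, s0=0.0, w0=1.0):
    """Weighted averaging: rating = (w0*s0 + sum(w_i*s_i)) / (w0 + sum(w_i)).
    components is a list of (scale_value, weight) pairs, one per modality."""
    num = w0 * s0 + sum(w * s for s, w in components)
    den = w0 + sum(w for _, w in components)
    return num / den

def valence_weight(s, base, slope=0.3):
    # Differential weighting (hypothetical numbers): weight rises as valence falls.
    return base + slope * max(0.0, -s)

video, music = -2.0, -2.0                    # congruent negative valence (made up)
w_video = valence_weight(video, base=2.0)    # visual modality weighted more
w_music = valence_weight(music, base=1.0)
combined = averaging_response([(video, w_video), (music, w_music)])
video_alone = averaging_response([(video, w_video)])
music_alone = averaging_response([(music, w_music)])
```

    Because the neutral initial state (s0) dilutes each single cue, adding a second congruent cue pulls the average further from neutral, and the larger visual weight makes the video dominate the combined rating.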

  5. Meteorology and GNSS? What is the benefit?

    NASA Astrophysics Data System (ADS)

    Drummond, P.; Grünig, S.

    2010-12-01

    Due to the strong correlation between water vapor in the atmosphere and GNSS tropospheric propagation delays, we can estimate the Integrated Precipitable Water Vapor (IPWV) in the atmosphere through GNSS measurements. This parameter is crucial for meteorologists as the water content in the atmosphere is a key parameter in the weather models. The Total Electron Content (TEC) in the ionosphere has a huge impact on the ionospheric propagation delay in GNSS signals. By computing the ionospheric delay from GNSS measurements it is possible to predict the TEC which is an excellent indicator for ionospheric activity. The benefit is that we can estimate the influence on the RTK performance from TEC values. The atmospheric feature in the Trimble Atmosphere App (as well as in VRSNet software) allows computing both IPWV and TEC values from a CORS network. IPWV is computed using surface meteorological data such as temperature and pressure as well as radiosonde data. The results are shown in a table like form as well as in numerous graphical forms such as contour and surface plots, station and condition charts. The computed values can be animated in a movie over the last 24 hours.

  6. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimation of its key parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. The real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also offers favorable computational efficiency. PMID:29194393
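
    The history-matching loop in this abstract can be miniaturised to show the flow of the method. In the sketch below a noisy linear function stands in for the ABM, ordinary least squares stands in for the dimension-reduced GAM emulator, and a grid search stands in for PSO; all numbers and names are illustrative, not taken from the paper.

```python
import random

def simulator(x, rng):
    # Toy stand-in for the ABM: an "expensive" stochastic model.
    return 2.0 * x + 1.0 + rng.gauss(0.0, 0.1)

def fit_linear(xs, ys):
    # Ordinary least squares line: stand-in for the GAM emulator.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def history_match(observed, lo, hi, n_design=20, threshold=3.0, seed=1):
    rng = random.Random(seed)
    xs = [lo + (hi - lo) * i / (n_design - 1) for i in range(n_design)]
    ys = [simulator(x, rng) for x in xs]           # design runs of the simulator
    a, b = fit_linear(xs, ys)                      # train the emulator
    var = 0.1 ** 2 + 0.05 ** 2                     # simulator noise + assumed emulator error
    # Implausibility measure: discard inputs whose emulated output lies too
    # many standard deviations from the observation.
    candidates = [lo + (hi - lo) * i / 199 for i in range(200)]
    plausible = [x for x in candidates
                 if abs(a + b * x - observed) / var ** 0.5 <= threshold]
    # Final fit over the surviving inputs (grid search standing in for PSO).
    return min(plausible, key=lambda x: abs(a + b * x - observed))

best = history_match(observed=5.0, lo=0.0, hi=4.0)
```

    The point of the two-stage structure is that the cheap emulator prunes the parameter space before the expensive final optimisation runs only over non-implausible values.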

  8. A statistical survey of heat input parameters into the cusp thermosphere

    NASA Astrophysics Data System (ADS)

    Moen, J. I.; Skjaeveland, A.; Carlson, H. C.

    2017-12-01

    Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with a statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wipe scans of 1000x500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to input data driving thermosphere models, enabling removal of previous twofold drag errors.

  9. Statistical Analyses of Femur Parameters for Designing Anatomical Plates.

    PubMed

    Wang, Lin; He, Kunjin; Chen, Zhengming

    2016-01-01

    Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class, and the average anatomical plate suitable for that femur selected from the three available sizes. Experimental results showed that the classification of femurs was quite reasonable from the anatomical perspective. As an example, three sizes of condylar buttress plates were designed, and 20 new femurs were assigned to their proper classes, after which suitable condylar buttress plates were selected for them.
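
    The three-step pipeline (cluster femurs into classes, design one plate per class, assign a new femur by discriminant analysis) can be sketched as follows. Plain k-means stands in for the factor plus Q-type cluster analysis, and a Gaussian discriminant with equal priors stands in for the Bayes discriminant; the two "femur parameters" and all numbers below are hypothetical, not the study's eight-parameter data.

```python
import math
import random

def kmeans(points, k=3, n_iter=50, init=None, seed=0):
    """Plain k-means: a simplified stand-in for the paper's factor analysis
    plus Q-type cluster analysis."""
    rng = random.Random(seed)
    centers = list(init) if init else rng.sample(points, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def assign_class(x, centers, clusters):
    """Gaussian discriminant with per-class spherical variance and equal
    priors: a simplified stand-in for Bayes discriminant analysis."""
    def log_likelihood(c):
        mu, cl = centers[c], clusters[c]
        d = len(mu)
        var = sum(math.dist(p, mu) ** 2 for p in cl) / (d * max(len(cl), 1)) + 1e-9
        return -0.5 * d * math.log(var) - math.dist(x, mu) ** 2 / (2.0 * var)
    return max(range(len(centers)), key=log_likelihood)

# Synthetic "femur" samples with hypothetical class means (length, head
# diameter, in mm); the real study used eight anatomical parameters.
rng = random.Random(42)
means = [(400.0, 42.0), (440.0, 46.0), (480.0, 50.0)]
samples = [(rng.gauss(mx, 5.0), rng.gauss(my, 1.0))
           for mx, my in means for _ in range(30)]
# One deterministic initial center per known group keeps the sketch stable.
centers, clusters = kmeans(samples, init=[samples[0], samples[30], samples[60]])
new_femur = (442.0, 45.5)
plate_class = assign_class(new_femur, centers, clusters)
```

    Each cluster mean would then parameterise one "average plate" size, and a new femur is simply routed to the plate of the class the discriminant selects.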

  10. Rheological constraints on ridge formation on Icy Satellites

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Manga, M.

    2010-12-01

    The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for the damage variable D, dD/dt = B⟨σ⟩^r (1-D)^(-k) - αD(p/μ), and in the equation relating damage accumulation to volumetric changes, Jρ_0 = δ(1-D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.

  11. Multi-response calibration of a conceptual hydrological model in the semiarid catchment of Wadi al Arab, Jordan

    NASA Astrophysics Data System (ADS)

    Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.

    2014-02-01

    A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common tools for such estimations and widely used. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available different data sources have to be screened for information content of processes, e.g. if data sources contain information on mean values, spatial or temporal variability etc. for the entire catchment or only sub-catchments. In a second step, the information content has to be mapped to relevant model components, which represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters remain unchanged. This mapping is repeated for other available data sources. In that study the gauged spring discharge (GSD) method, flash flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Results from modelling using a priori parameter values from literature as a benchmark are compared. The estimated recharge rates of the calibrated model deviate less than ±10% from the estimates derived from WTF method. Larger differences are visible in the years with high uncertainties in rainfall input data. 
    The performance of the calibrated model during validation is better than that of the model using only a priori parameter values. The model with a priori parameter values from literature tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. An overestimation of groundwater recharge, and hence of available water resources, clearly endangers reliable water resource management in water-scarce regions. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.
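
    The nested idea (calibrate one parameter subset against one data source, freeze it, then calibrate the next subset against another source) can be shown with a toy recharge model. The model form, data values, and parameter names below are all invented for illustration; they are not the J2000g model or the Wadi al Arab data.

```python
def calibrate(param_grid, loss):
    # Brute-force calibration of a single parameter subset.
    return min(param_grid, key=loss)

def model(rain, infiltration_coeff, evap_loss):
    # Hypothetical recharge model: a coefficient on rain minus a loss term.
    return max(infiltration_coeff * rain - evap_loss, 0.0)

rains = [100.0, 200.0, 300.0]

# Step 1: spring-discharge-type observations constrain the infiltration
# coefficient, with the loss term held at an a priori "literature" value.
gsd_obs = [25.0, 75.0, 125.0]                    # synthetic observations
evap_prior = 25.0
grid = [i / 100.0 for i in range(1, 101)]
infil = calibrate(grid, lambda c: sum((model(r, c, evap_prior) - o) ** 2
                                      for r, o in zip(rains, gsd_obs)))

# Step 2: chloride-mass-balance-type observations constrain the loss term,
# with the previously calibrated coefficient now held fixed.
cmb_obs = [30.0, 80.0, 130.0]
losses = [i / 2.0 for i in range(0, 101)]
evap = calibrate(losses, lambda e: sum((model(r, infil, e) - o) ** 2
                                       for r, o in zip(rains, cmb_obs)))
```

    Each data source only ever updates the parameters whose processes it actually informs, which is what makes the strategy workable when no single data set covers the whole model.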

  12. Partitioned key-value store with atomic memory operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    A partitioned key-value store is provided that supports atomic memory operations. A server performs a memory operation in a partitioned key-value store by receiving a request from an application for at least one atomic memory operation, the atomic memory operation comprising a memory address identifier; and, in response to the atomic memory operation, performing one or more of (i) reading a client-side memory location identified by the memory address identifier and storing one or more key-value pairs from the client-side memory location in a local key-value store of the server; and (ii) obtaining one or more key-value pairs from the local key-value store of the server and writing the obtained one or more key-value pairs into the client-side memory location identified by the memory address identifier. The server can perform functions obtained from a client-side memory location and return a result to the client using one or more of the atomic memory operations.
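
    As a rough illustration of the mechanism described in this record, the sketch below models client-side memory as a dict mapping an address to key-value pairs and guards each memory operation with a lock. All class and function names are invented for the sketch; the record describes a patented design, not this code.

```python
import threading

class KVServer:
    """One partition server (hypothetical sketch). Client-side memory is
    modelled as a dict mapping an address to a list of (key, value) pairs;
    a lock makes each memory operation atomic with respect to others."""

    def __init__(self):
        self.store = {}                 # local key-value store
        self.lock = threading.Lock()

    def atomic_get_from_client(self, client_memory, address):
        # Read the client-side location and absorb its key-value pairs
        # into the server's local store as one atomic step.
        with self.lock:
            for key, value in client_memory.get(address, []):
                self.store[key] = value

    def atomic_put_to_client(self, client_memory, address, keys):
        # Fetch pairs from the local store and write them into the
        # client-side location identified by the address, atomically.
        with self.lock:
            client_memory[address] = [(k, self.store[k])
                                      for k in keys if k in self.store]

def partition_for(key, servers):
    # Hash partitioning across servers (an assumption: the record does not
    # specify the partitioning function).
    return servers[hash(key) % len(servers)]
```

    A client would write pairs into its own memory, hand the address to the responsible partition server, and receive results the same way, trading per-key round trips for bulk atomic memory operations.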

  13. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.

  14. Processing parameter optimization for the laser dressing of bronze-bonded diamond wheels

    NASA Astrophysics Data System (ADS)

    Deng, H.; Chen, G. Y.; Zhou, C.; Li, S. C.; Zhang, M. J.

    2014-01-01

    In this paper, a pulsed fiber-laser dressing method for bronze-bonded diamond wheels was studied systematically and comprehensively. The mechanisms for the laser dressing of bronze-bonded diamond wheels were theoretically analyzed, and the key processing parameters that determine the results of laser dressing, including the laser power density, pulse overlap ratio, ablation track line overlap ratio, and number of scanning cycles, were proposed for the first time. Further, the effects of these four key parameters on the oxidation-damaged layer of the material surface, the material removal efficiency, the material surface roughness, and the average protrusion height of the diamond grains were explored and summarized through pulsed laser ablation experiments. Under the current experimental conditions, the ideal values of the laser power density, pulse overlap ratio, ablation track line overlap ratio, and number of scanning cycles were determined to be 4.2 × 107 W/cm2, 30%, 30%, and 16, respectively. Pulsed laser dressing experiments were conducted on bronze-bonded diamond wheels using the optimized processing parameters; next, both the normal and tangential grinding forces produced by the dressed grinding wheel were measured while grinding alumina ceramic materials. The results revealed that the normal and tangential grinding forces produced by the laser-dressed grinding wheel during grinding were smaller than those of grinding wheels dressed using the conventional mechanical method, indicating that the pulsed laser dressing technology provides irreplaceable advantages relative to the conventional mechanical dressing method.

  15. Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2015-12-01

    The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect ascent rate, but the single most important parameter is the volatile content (assumed in this case to be only water). Modelling this parameter across a range of reported values causes changes in the calculated ascent velocities of up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.

  16. Gender influencers on work values of black adolescents.

    PubMed

    Thomas, V G; Shields, L C

    1987-01-01

    Work values and key influencers of a sample of black male and female adolescents were examined. Results indicated that boys and girls valued both the intrinsic and extrinsic rewards of work; however, girls reported slightly stronger extrinsic values than did boys. In addition, the sexes reported differences in the importance of specific work values such as "making lots of money" and "doing important things." When naming a key influencer, respondents tended to cite an individual of the same sex and race. Sex of one's key influencer was related to certain work values, with subjects reporting a male key influencer valuing "trying out one's own ideas" and "having a secure future" more than those reporting a female key influencer. The interaction of sex of subject and sex of key influencer was significant for one of the work value outcomes. Implications of these findings are considered.

  17. Economics of movable interior blankets for greenhouses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, G.B.; Fohner, G.R.; Albright, L.D.

    1981-01-01

    A model for evaluating the economic impact of investment in a movable interior blanket was formulated. The method of analysis was net present value (NPV), in which the discounted, after-tax cash flow of costs and benefits was computed for the useful life of the system. An added feature was a random number component which permitted any or all of the input parameters to be varied within a specified range. Results from 100 computer runs indicated that all of the NPV estimates generated were positive, showing that the investment was profitable. However, there was a wide range of NPV estimates, from $16.00/m² to $86.40/m², with a median value of $49.34/m². Key variables allowed to range in the analysis were: (1) the cost of fuel before the blanket is installed; (2) the percent fuel savings resulting from use of the blanket; (3) the annual real increase in the cost of fuel; and (4) the change in the annual value of the crop. The wide range in NPV estimates indicates the difficulty in making general recommendations regarding the economic feasibility of the investment when uncertainty exists as to the correct values for key variables in commercial settings. The results also point out needed research into the effect of the blanket on the crop, and on performance characteristics of the blanket.
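    The random-parameter NPV procedure described above can be sketched as follows; the parameter ranges, discount rate, lifetime and installed cost are hypothetical placeholders, not the study's actual inputs:

```python
import random

def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows, year 0 first."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def simulate_blanket_npv(n_runs=100, life=10, discount_rate=0.04, seed=1):
    """Monte Carlo NPV: key inputs drawn uniformly from illustrative ranges."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        fuel_cost = rng.uniform(8.0, 14.0)        # $/m^2/yr before blanket (assumed)
        savings_pct = rng.uniform(0.3, 0.5)       # fraction of fuel saved (assumed)
        fuel_escalation = rng.uniform(0.0, 0.05)  # annual real fuel price increase
        crop_change = rng.uniform(-1.0, 0.5)      # $/m^2/yr change in crop value
        invest = -20.0                            # installed cost, $/m^2 (assumed)
        flows = [invest] + [
            fuel_cost * savings_pct * (1 + fuel_escalation) ** t + crop_change
            for t in range(1, life + 1)
        ]
        results.append(npv(flows, discount_rate))
    return results
```

    Sorting the 100 run results and reading off the middle value then gives the median NPV reported in such an analysis.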

  18. Discrete Event Simulation Modeling and Analysis of Key Leader Engagements

    DTIC Science & Technology

    2012-06-01

    to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK , which give probabilities for being corrupt, having key leader...HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component...whether the local Green player has resource critical knowledge by using the parameter pRK . It schedules an EndResourceKnowledgeRequest event, passing

  19. Securing Digital Audio using Complex Quadratic Map

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy to do, and therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio. We therefore need a data-securing method that is both robust and fast. One method that matches all of these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). There are certain parameter values for which the key stream generated by the CQM function passes all 15 NIST tests, which means the key stream generated by this CQM is proven to be random. In addition, samples of encrypted digital sound, when tested using the goodness-of-fit test, are proven to be uniform, so securing digital audio using this method is not vulnerable to frequency-analysis attack. The key space is very large, about 8.1×10³¹ possible keys, and the key sensitivity is very small, about 10⁻¹⁰, so this method is also not vulnerable to brute-force attack. Finally, the processing speed for both encryption and decryption is on average about 450 times faster than the digital audio's duration.
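    A sketch of key-stream generation from the complex quadratic map and its use as a stream cipher; the values of c and z0 are illustrative (the paper's chaotic parameter values are not reproduced here), and the fold-back safeguard is an assumption of this sketch, not part of the published scheme:

```python
def cqm_keystream(c, z0, nbytes, burn_in=100):
    """Key-stream bytes from the complex quadratic map z <- z^2 + c.
    The fractional part of each iterate's real component is quantized
    to one byte.  Iterates leaving |z| <= 2 are folded back into a
    bounded box so the sketch cannot diverge (safeguard, assumed)."""
    z = z0
    out = bytearray()
    for i in range(burn_in + nbytes):
        z = z * z + c
        if abs(z) > 2.0:                              # fold back into bounds
            z = complex(z.real % 2.0 - 1.0, z.imag % 2.0 - 1.0)
        if i >= burn_in:                              # skip the transient
            out.append(int((abs(z.real) % 1.0) * 256) & 0xFF)
    return bytes(out)

def xor_cipher(data, key):
    """Encrypt (or decrypt: XOR is self-inverse) audio bytes with the stream."""
    return bytes(d ^ k for d, k in zip(data, key))
```

    The same (c, z0) pair regenerates the identical key stream, which is what makes the tiny key sensitivity quoted above meaningful: a change of ~10⁻¹⁰ in either value yields a completely different stream.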

  20. Video encryption using chaotic masks in joint transform correlator

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2015-03-01

    A real-time optical video encryption technique using a chaotic map has been reported. In the proposed technique, each frame of video is encrypted using two different chaotic random phase masks in the joint transform correlator architecture. The different chaotic random phase masks can be obtained either by using different iteration levels or by using different seed values of the chaotic map. The use of different chaotic random phase masks makes the decryption process very complex for an unauthorized person. Optical, as well as digital, methods can be used for video encryption but the decryption is possible only digitally. To further enhance the security of the system, the key parameters of the chaotic map are encoded using RSA (Rivest-Shamir-Adleman) public key encryption. Numerical simulations are carried out to validate the proposed technique.

  1. Predicting dredging-associated effects to coral reefs in Apra Harbor, Guam - Part 1: Sediment exposure modeling.

    PubMed

    Gailani, Joseph Z; Lackey, Tahirih C; King, David B; Bryant, Duncan; Kim, Sung-Chan; Shafer, Deborah J

    2016-03-01

    Model studies were conducted to investigate the potential coral reef sediment exposure from dredging associated with proposed development of a deepwater wharf in Apra Harbor, Guam. The Particle Tracking Model (PTM) was applied to quantify the exposure of coral reefs to material suspended by the dredging operations at two alternative sites. Key PTM features include the flexible capability of continuous multiple releases of sediment parcels, control of parcel/substrate interaction, and the ability to efficiently track vast numbers of parcels. This flexibility has facilitated simulating the combined effects of sediment released from clamshell dredging and chiseling within Apra Harbor. Because the rate of material released into the water column by some of the processes is not well understood or known a priori, the modeling approach was to bracket parameters within reasonable ranges to produce a suite of potential results from multiple model runs. Sensitivity analysis to model parameters is used to select the appropriate parameter values for bracketing. Data analysis results include mapping the time series and the maximum values of sedimentation, suspended sediment concentration, and deposition rate. Data were used to quantify various exposure processes that affect coral species in Apra Harbor. The goal of this research is to develop a robust methodology for quantifying and bracketing exposure mechanisms to coral (or other receptors) from dredging operations. These exposure values were utilized in an ecological assessment to predict effects (coral reef impacts) from various dredging scenarios. Copyright © 2015. Published by Elsevier Ltd.

  2. The seasonal behaviour of carbon fluxes in the Amazon: fusion of FLUXNET data and the ORCHIDEE model

    NASA Astrophysics Data System (ADS)

    Verbeeck, H.; Peylin, P.; Bacour, C.; Ciais, P.

    2009-04-01

    Eddy covariance measurements at the Santarém (km 67) site revealed an unexpected seasonal pattern in carbon fluxes which could not be simulated by existing state-of-the-art global ecosystem models (Saleska et al., Science 2003). An unexpectedly high carbon uptake was measured during the dry season. In contrast, carbon release was observed in the wet season. There are several possible (combined) underlying mechanisms for this phenomenon: (1) increased soil respiration due to soil moisture in the wet season, (2) increased photosynthesis during the dry season due to deep rooting, hydraulic lift, increased radiation and/or a leaf flush. The objective of this study is to optimise the ORCHIDEE model using eddy covariance data in order to mimic the seasonal response of carbon fluxes to dry/wet conditions in tropical forest ecosystems. In doing so, we try to identify the underlying mechanisms of this seasonal response. The ORCHIDEE model is a state-of-the-art mechanistic global vegetation model that can be run at local or global scale. It calculates the carbon and water cycles in the different soil and vegetation pools and resolves the diurnal cycle of fluxes. ORCHIDEE is built on the concept of plant functional types (PFTs) to describe vegetation. To bring the different carbon pool sizes to realistic values, spin-up runs are used. ORCHIDEE uses climate variables as drivers together with a number of ecosystem parameters that have been assessed from laboratory and in situ experiments. These parameters are still associated with a large uncertainty and may vary between and within PFTs in a way that is currently not informed or captured by the model. Recently developed assimilation techniques allow the objective use of eddy covariance data to improve our knowledge of these parameters in a statistically coherent approach. We use a Bayesian optimisation approach.
This approach is based on the minimization of a cost function containing the mismatch between simulated model output and observations, as well as the mismatch between a priori and optimized parameters. The parameters can be optimized on different time scales (annually, monthly, daily). For this study the model is optimised at local scale for 5 eddy flux sites: 4 sites in Brazil and one in French Guyana. The seasonal behaviour of C fluxes in response to wet and dry conditions differs among these sites. Key processes that are optimised include: the effect of soil water on heterotrophic soil respiration, the effect of soil water availability on stomatal conductance and photosynthesis, and phenology. By optimising several key parameters we could significantly improve the simulation of the seasonal pattern of NEE. Nevertheless, posterior parameters should be interpreted with care, because the resulting parameter values might compensate for uncertainties in the model structure or other parameters. Moreover, several critical issues appeared during this study, e.g. how to assimilate latent and sensible heat data when the energy balance is not closed in the data? Optimisation of the Q10 parameter showed that at some sites respiration was not sensitive at all to temperature, which shows only small variations in this region. Considering this, one could question the reliability of the partitioned fluxes (GPP/Reco) at these sites. This study also tests whether there is coherence between optimised parameter values of different sites within the tropical forest PFT and whether the forward model response to climate variations is similar between sites.
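    The cost function minimized in this kind of Bayesian optimisation has the standard quadratic form; a minimal sketch, where the toy model and identity covariances are illustrative:

```python
import numpy as np

def bayesian_cost(p, p_prior, y_obs, model, R_inv, B_inv):
    """J(p) = (y - M(p))^T R^-1 (y - M(p)) + (p - p_b)^T B^-1 (p - p_b):
    observation mismatch plus prior-parameter mismatch, each weighted by
    the inverse of its error covariance (standard Bayesian/3D-Var form)."""
    r = y_obs - model(p)       # model-data residual
    d = p - p_prior            # departure from the a priori parameters
    return float(r @ R_inv @ r + d @ B_inv @ d)
```

    In practice J(p) is minimized with a gradient-based optimiser over the chosen parameter set (here, e.g., Q10 and the soil-moisture response parameters), and the posterior parameters are those at the minimum.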

  3. Anticipating abrupt shifts in temporal evolution of probability of eruption

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Loschetter, A.

    2016-04-01

    Estimating the probability of eruption by jointly accounting for different sources of monitoring parameters over time is a key component for volcano risk management. In the present study, we are interested in the transition from a state of low-to-moderate probability value to a state of high probability value. By using the data of MESIMEX exercise at the Vesuvius volcano, we investigated the potential for time-varying indicators related to the correlation structure or to the variability of the probability time series for detecting in advance this critical transition. We found that changes in the power spectra and in the standard deviation estimated over a rolling time window both present an abrupt increase, which marks the approaching shift. Our numerical experiments revealed that the transition from an eruption probability of 10-15% to > 70% could be identified up to 1-3 h in advance. This additional lead time could be useful to place different key services (e.g., emergency services for vulnerable groups, commandeering additional transportation means, etc.) on a higher level of alert before the actual call for evacuation.
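    The rolling-window standard deviation indicator described above can be sketched as follows; the window length and threshold factor are illustrative, not the values used in the study:

```python
import math

def rolling_std(series, window):
    """Standard deviation over a rolling window; one value per full window."""
    out = []
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        m = sum(w) / window
        out.append(math.sqrt(sum((x - m) ** 2 for x in w) / window))
    return out

def detect_shift(series, window=20, factor=3.0):
    """First index (in the original series) where the rolling std exceeds
    `factor` times its initial baseline level, or None.  The abrupt rise
    in variability is the early-warning signal of the approaching shift."""
    stds = rolling_std(series, window)
    baseline = stds[0] if stds and stds[0] > 0 else 1e-9
    for i, s in enumerate(stds):
        if s > factor * baseline:
            return i + window - 1
    return None
```

    Applied to a probability-of-eruption time series, the detection index minus the actual transition time gives the lead time discussed above.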

  4. On the Astrid asteroid family

    NASA Astrophysics Data System (ADS)

    Carruba, V.

    2016-09-01

    Among asteroid families, the Astrid family is peculiar because of its unusual inclination distribution. Objects at a ≃ 2.764 au are quite dispersed in this orbital element, giving the family a `crab-like' appearance. Recent works showed that this feature is caused by the interaction of the family with the s - sC nodal secular resonance with Ceres, which spreads the inclination of asteroids near its separatrix. As a consequence, the currently observed distribution of the v_W component of terminal ejection velocities obtained from inverting the Gauss equations is quite leptokurtic, since this parameter mostly depends on the asteroids' inclination. The peculiar orbital configuration of the Astrid family can be used to set constraints on key parameters describing the strength of the Yarkovsky force, such as the bulk and surface density and the thermal conductivity of surface material. By simulating various fictitious families with different values of these parameters, and by demanding that the current value of the kurtosis of the distribution in v_W be reached over the estimated lifetime of the family, we obtained that the thermal conductivity of Astrid family members should be ≃0.001 W m⁻¹ K⁻¹, and that the surface and bulk density should be higher than 1000 kg m⁻³. Monte Carlo methods simulating Yarkovsky and stochastic Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) evolution of the Astrid family show its age to be T = 140 ± 30 Myr, in good agreement with estimates from other groups. Its terminal ejection velocity parameter is in the range V_EJ = 5 (+17/−5) m s⁻¹. Values of V_EJ larger than 25 m s⁻¹ are excluded by constraints from the current inclination distribution.

  5. Nutritional status and nutritional habits of men with benign prostatic hyperplasia or prostate cancer - preliminary investigation.

    PubMed

    Goluch-Koniuszy, Zuzanna; Rygielska, Magda; Nowacka, Ilona

    2013-01-01

    With ageing in men, the most frequent pathologic lesions affecting the prostatic gland are benign prostatic hyperplasia (BPH) and prostate cancer (PC), the course of which may be influenced by the improper nutritional status of patients and their nutritional habits. The aim of this study was, therefore, to evaluate the nutritional status and eating habits of men diagnosed with and treated for one of the above diseases. MATERIAL AND METHODS: The nutritional status of 30 male patients with clinically confirmed and treated disease of the prostatic gland, including 15 men (aged 51-75 years) with BPH and 15 men (aged 51-73 years) with PC, was evaluated based on their BMI, WC, WHR, and WHtR parameters. In turn, the energy and nutritive value of 90 daily food rations (DFRs) was evaluated. Finally, calculations were made for the Key's index of diet atherogenicity, resultant Glycemic Index (GI) and Glycemic Load (GL). Higher values of the BMI, WC, WHR and WHtR parameters were noted in the men with PC; they were also characterized by a higher incidence of peripheral subcutaneous obesity and visceral obesity. The DFRs of the men were characterized by a low energy value and by a low intake of available carbohydrates, dietary fiber, K, Ca, Mg, vitamins D and C, and fluids, at a simultaneously high intake of total and animal protein, cholesterol, Na, P, Fe, Cu as well as vitamins B2 and PP. The contribution of energy derived from the basic nutrients diverged from the recommended values. In addition, the DFRs were characterized by high values of Key's index and 24-h GL. Differences in meeting the RDA for selected nutrients between the analysed groups of men were statistically significant. The improper nutritional status of the men may result from their incorrect nutritional habits, which fail to improve their health status and even predispose them to the development of some diet-dependent diseases.
In view of this, both correction of the surveyed men's diets and their health-promoting nutritional education regarding prostate diseases seem necessary.

  6. Continuous depth profile of mechanical properties in the Nankai accretionary prism based on drilling performance parameters

    NASA Astrophysics Data System (ADS)

    Hamada, Y.; Kitamura, M.; Yamada, Y.; Sanada, Y.; Moe, K.; Hirose, T.

    2016-12-01

    In-situ rock properties in and around the seismogenic zone of an accretionary prism are key parameters for understanding the development mechanisms of the prism, the spatio-temporal variation of the stress state, and so on. For the purpose of acquiring a continuous depth profile of in-situ formation strength in an accretionary prism, we propose a new method to evaluate the in-situ rock strength using drilling performance parameters. Drilling parameters are inevitably obtained by any drilling operation, even in non-coring intervals or in challenging environments where core recovery may be poor. A relationship between rock properties and drilling parameters has been proposed in previous research [e.g. Teale, 1964]. We adopted the relationship theory of Teale [1964] and developed a converting method to estimate in-situ rock strength without depending on uncertain parameters such as weight on bit (WOB). Specifically, we first calculated the equivalent specific toughness (EST), which represents the gradient of the relationship between torque energy and volume of penetration over an arbitrary interval (in this study, five meters). The EST values were then converted into strength using the drilling parameter-rock strength correlation obtained by Karasawa et al. [2002]. This method was applied to eight drilling holes at Site C0002 of IODP NanTroSEIZE in order to evaluate in-situ rock strength from the shallow to the deep accretionary prism. In the shallower part (0 - 300 mbsf), the calculated strength shows a sharp increase up to 20 MPa. The strength then remains approximately constant down to 1500 mbsf, without significant change even at the unconformity around 1000 mbsf (the boundary between the forearc basin and the accretionary prism). Below that depth, the strength gradually increases with depth, up to 60 MPa at 3000 mbsf, with variation between 10 and 80 MPa. Because the calculated strength spans approximately the same lithology, the increasing trend can be attributed to the rock strength.
This strength-depth curve corresponds reasonably well with the strength data from core and cutting samples collected from holes C0002N and C0002P [Kitamura et al., 2016 AGU]. These results demonstrate the validity of the method for evaluating in-situ strength from drilling parameters.
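    The EST calculation described above, i.e. the slope of torque energy versus penetrated volume over a depth interval, can be sketched as a least-squares fit; the linear strength calibration is a placeholder, since the actual Karasawa et al. [2002] coefficients are not given in the text:

```python
def equivalent_specific_toughness(torque_energy, penetration_volume):
    """Least-squares slope of cumulative torque energy (J) versus
    penetrated rock volume (m^3) over one depth interval -- the EST."""
    n = len(torque_energy)
    mx = sum(penetration_volume) / n
    my = sum(torque_energy) / n
    num = sum((x - mx) * (y - my)
              for x, y in zip(penetration_volume, torque_energy))
    den = sum((x - mx) ** 2 for x in penetration_volume)
    return num / den

def est_to_strength(est, a=1.0, b=0.0):
    """Convert EST to rock strength via a linear calibration; a and b are
    hypothetical placeholders for the empirical correlation coefficients."""
    return a * est + b
```

    Computing the EST per five-meter interval and applying the calibration then yields the strength-depth profile discussed above, with no dependence on WOB.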

  7. Robustness Analysis and Reliable Flight Regime Estimation of an Integrated Resilient Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2008-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope, within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of the transport aircraft longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and real parameter uncertainties (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.

  8. Bayes Analysis and Reliability Implications of Stress-Rupture Testing a Kevlar/Epoxy COPV using Temperature and Pressure Acceleration

    NASA Technical Reports Server (NTRS)

    Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.

    2009-01-01

    Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Sometimes lifetime testing is performed on an actual COPV in service in an effort to validate the reliability model that is the basis for certifying the continued flight worthiness of its sisters. Currently, testing of such a Kevlar 49®/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely it is that the more optimistic stress ratio is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine", that is, reducing the probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty in the Weibull shape parameter for lifetime, since testing several would be necessary.
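    The Bayesian reasoning described above, i.e. survival evidence shifting the posterior toward the optimistic stress-ratio hypothesis, can be sketched with two competing Weibull lifetime models; all numerical values here are illustrative, not the COPV program's actual parameters:

```python
import math

def weibull_survival(t, shape, scale):
    """Probability that a Weibull(shape, scale) lifetime exceeds t."""
    return math.exp(-((t / scale) ** shape))

def posterior_optimistic(t_survived, prior=0.5, shape=1.2,
                         scale_optimistic=50.0, scale_pessimistic=5.0):
    """Posterior probability of the optimistic stress-ratio hypothesis
    after the test article survives t_survived (years), by Bayes' rule
    over two hypotheses.  All numbers are illustrative placeholders."""
    l_opt = weibull_survival(t_survived, shape, scale_optimistic)
    l_pes = weibull_survival(t_survived, shape, scale_pessimistic)
    num = prior * l_opt
    return num / (num + (1 - prior) * l_pes)
```

    Because the pessimistic model assigns the observed survival a much smaller likelihood, the posterior grows monotonically with survival time, which is the "longer it survives, the more likely the optimistic ratio" result above.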

  9. Viscoinertial regime of immersed granular flows

    NASA Astrophysics Data System (ADS)

    Amarsid, L.; Delenne, J.-Y.; Mutabaruka, P.; Monerie, Y.; Perales, F.; Radjai, F.

    2017-07-01

    By means of extensive coupled molecular dynamics-lattice Boltzmann simulations, accounting for grain dynamics and subparticle resolution of the fluid phase, we analyze steady inertial granular flows sheared by a viscous fluid. We show that, for a broad range of system parameters (shear rate, confining stress, fluid viscosity, and relative fluid-grain density), the frictional strength and packing fraction can be described by a modified inertial number incorporating the fluid effect. In a dual viscous description, the effective viscosity diverges as the inverse square of the difference between the packing fraction and its jamming value, as observed in experiments. We also find that the fabric and force anisotropies extracted from the contact network are well described by the modified inertial number, thus providing clear evidence for the role of these key structural parameters in dense suspensions.

  10. Atmospheric gas-to-particle conversion: why NPF events are observed in megacities?

    PubMed

    Kulmala, M; Kerminen, V-M; Petäjä, T; Ding, A J; Wang, L

    2017-08-24

    In terms of the global aerosol particle number load, atmospheric new particle formation (NPF) dominates over primary emissions. The key to quantifying the importance of atmospheric NPF is to understand how gas-to-particle conversion (GTP) takes place at sizes below a few nanometers in particle diameter in different environments, and how this nano-GTP affects the survival of small clusters to larger sizes. The survival probability of growing clusters is tied closely to the competition between their growth and their scavenging by pre-existing aerosol particles, and the key parameter in this respect is the ratio between the condensation sink (CS) and the cluster growth rate (GR). Here we define their ratio as a dimensionless survival parameter, P = (CS/10⁻⁴ s⁻¹)/(GR/nm h⁻¹). Theoretical arguments and observations in clean and moderately polluted conditions indicate that P needs to be smaller than about 50 for notable NPF to take place. However, the existing literature shows that in China, NPF occurs frequently in megacities such as Beijing, Nanjing and Shanghai, and our analysis shows that the calculated values of P are even larger than 200 in these cases. By combining direct observations and conceptual modelling, we explore the variability of the survival parameter P in different environments and probe the reasons for NPF occurrence under highly polluted conditions.
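    The survival parameter defined above is a one-line calculation; a minimal sketch, with the CS and GR values in the test chosen as illustrative clean-site and megacity magnitudes rather than measured ones:

```python
def survival_parameter(cs_per_s, gr_nm_per_h):
    """Dimensionless survival parameter P = (CS / 1e-4 s^-1) / (GR / 1 nm h^-1),
    with CS given in s^-1 and GR in nm/h."""
    return (cs_per_s / 1e-4) / gr_nm_per_h

def npf_expected(cs_per_s, gr_nm_per_h, threshold=50.0):
    """Notable NPF is expected when P is below ~50 (threshold from the text)."""
    return survival_parameter(cs_per_s, gr_nm_per_h) < threshold
```

    For instance, an illustrative CS of 0.04 s⁻¹ with GR of 2 nm h⁻¹ gives P = 200, well above the ≈50 threshold, matching the megacity puzzle described above.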

  11. Encryption for Remote Control via Internet or Intranet

    NASA Technical Reports Server (NTRS)

    Lineberger, Lewis

    2005-01-01

    A data-communication protocol has been devised to enable secure, reliable remote control of processes and equipment via a collision-based network, while using minimal bandwidth and computation. The network could be the Internet or an intranet. Control is made secure by use of both a password and a dynamic key, which is sent transparently to a remote user by the controlled computer (that is, the computer, located at the site of the equipment or process to be controlled, that exerts direct control over the process). The protocol functions in the presence of network latency, overcomes errors caused by missed dynamic keys, and defeats attempts by unauthorized remote users to gain control. The protocol is not suitable for real-time control, but is well suited for applications in which control latencies up to about 0.5 second are acceptable. The encryption scheme involves the use of both a dynamic and a private key, without any additional overhead that would degrade performance. The dynamic key is embedded in the equipment- or process-monitor data packets sent out by the controlled computer: in other words, the dynamic key is a subset of the data in each such data packet. The controlled computer maintains a history of the last 3 to 5 data packets for use in decrypting incoming control commands. In addition, the controlled computer records a private key (password) that is given to the remote computer. The encrypted incoming command is permuted by both the dynamic and private key. A person who records the command data in a given packet for hostile purposes cannot use that packet after the public key expires (typically within 3 seconds). Even a person in possession of an unauthorized copy of the command/remote-display software cannot use that software in the absence of the password. The use of a dynamic key embedded in the outgoing data makes the central-processing unit overhead very small. 
The use of a National Instruments DataSocket™ (or equivalent) protocol or the User Datagram Protocol makes it possible to obtain reasonably short response times: typical response times in event-driven control, using packets sized ~300 bytes, are <0.2 second for commands issued from locations anywhere on Earth. The protocol requires that control commands represent absolute values of controlled parameters (e.g., a specified temperature), as distinguished from changes in values of controlled parameters (e.g., a specified increment of temperature). Each command is issued three or more times to ensure delivery in crowded networks. The use of absolute-value commands prevents additional (redundant) commands from causing trouble. Because a remote controlling computer receives "talkback" in the form of data packets from the controlled computer, typically within a time interval ≤1 s, the controlling computer can re-issue a command if network failure has occurred. The controlled computer, the process or equipment that it controls, and any human operator(s) at the site of the controlled equipment or process should be equipped with safety measures to prevent damage to equipment or injury to humans. These features could be a combination of software, external hardware, and intervention by the human operator(s). The protocol is not fail-safe, but by adopting these safety measures as part of the protocol, one makes the protocol a robust means of controlling remote processes and equipment by use of typical office computers via intranets and/or the Internet.
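    The dynamic-plus-private-key scheme described above can be sketched as follows. The SHA-256 counter-mode key stream and the 4-byte integrity tag are assumptions of this sketch (the article does not specify the actual permutation); the packet-history decryption mirrors the 3-to-5-packet history that tolerates missed dynamic keys:

```python
import hashlib

def _pad(dynamic_key: bytes, private_key: bytes, length: int) -> bytes:
    """Key stream derived from the dynamic + private key (SHA-256 in
    counter mode; an assumption, the article's permutation is unspecified)."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(dynamic_key + private_key
                              + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt_command(cmd: bytes, dynamic_key: bytes, private_key: bytes) -> bytes:
    """Remote side: append a short integrity tag, then XOR with the pad."""
    tag = hashlib.sha256(private_key + cmd).digest()[:4]
    blob = cmd + tag
    return bytes(b ^ p for b, p in zip(blob, _pad(dynamic_key, private_key, len(blob))))

def try_decrypt(blob: bytes, recent_dynamic_keys, private_key: bytes):
    """Controlled side: try the last few dynamic keys (the packet history
    that tolerates missed keys) and accept only on a valid tag."""
    for dk in recent_dynamic_keys:
        pt = bytes(b ^ p for b, p in zip(blob, _pad(dk, private_key, len(blob))))
        cmd, tag = pt[:-4], pt[-4:]
        if hashlib.sha256(private_key + cmd).digest()[:4] == tag:
            return cmd
    return None
```

    An attacker replaying a recorded packet fails once its dynamic key drops out of the history, and an attacker without the private key can never produce a valid tag, which corresponds to the two protections described above.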

  12. Ordinary differential equations and Boolean networks in application to modelling of 6-mercaptopurine metabolism.

    PubMed

    Lavrova, Anastasia I; Postnikov, Eugene B; Zyubin, Andrey Yu; Babak, Svetlana V

    2017-04-01

    We consider two approaches to modelling the cell metabolism of 6-mercaptopurine, one of the important chemotherapy drugs used for treating acute lymphocytic leukaemia: kinetic ordinary differential equations, and Boolean networks supplied with one controlling node that takes continuous values. We analyse their interplay with respect to taking into account ATP concentration as a key parameter for switching between different pathways. It is shown that the Boolean networks, which allow avoiding the complexity of general kinetic modelling, preserve the possibility of reproducing the principal switching mechanism.
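    A Boolean network gated by one continuous controlling node can be sketched as below; the node names and update rules are purely illustrative, not the paper's actual 6-mercaptopurine pathway model:

```python
def boolean_step(state, atp):
    """One synchronous update of a toy Boolean network whose rules are
    gated by a continuous controlling node `atp` in [0, 1].  Node names
    and rules are hypothetical, standing in for metabolic branches."""
    atp_high = atp > 0.5          # continuous node thresholded inside rules
    return {
        "drug":      state["drug"],                   # input node, held fixed
        "pathway_a": state["drug"] and atp_high,      # branch favoured at high ATP
        "pathway_b": state["drug"] and not atp_high,  # branch favoured at low ATP
    }

s0 = {"drug": True, "pathway_a": False, "pathway_b": False}
```

    The continuous ATP value thus switches which Boolean branch is active, which is the principal switching mechanism the hybrid formulation preserves.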

  13. Comparison of Several Methods for Determining the Internal Resistance of Lithium Ion Cells

    PubMed Central

    Schweiger, Hans-Georg; Obeidi, Ossama; Komesker, Oliver; Raschke, André; Schiemann, Michael; Zehner, Christian; Gehnen, Markus; Keller, Michael; Birke, Peter

    2010-01-01

    The internal resistance is the key parameter for determining power, energy efficiency and lost heat of a lithium ion cell. Precise knowledge of this value is vital for designing battery systems for automotive applications. Internal resistance of a cell was determined by current step methods, AC (alternating current) methods, electrochemical impedance spectroscopy and thermal loss methods. The outcomes of these measurements have been compared with each other. If charge or discharge of the cell is limited, current step methods provide the same results as energy loss methods. PMID:22219678
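    Two of the compared methods reduce to one-line formulas; a minimal sketch with illustrative numbers in the test (not the paper's measurements):

```python
def r_current_step(v_before, v_after, i_before, i_after):
    """DC internal resistance from a current step: R = |dV| / |dI|."""
    return abs(v_after - v_before) / abs(i_after - i_before)

def r_energy_loss(heat_w, current_a):
    """Resistance from steady-state heat dissipation: P_loss = I^2 * R,
    so R = P_loss / I^2 (the thermal/energy-loss method)."""
    return heat_w / current_a ** 2
```

    When charge or discharge is limited, the two estimates agree, which is the paper's observation that current-step and energy-loss methods then give the same result.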

  14. Nonmarket economic user values of the Florida Keys/Key West

    Treesearch

    Vernon R. Leeworthy; J. Michael Bowker

    1997-01-01

    This report provides estimates of the nonmarket economic user values for recreating visitors to the Florida Keys/Key West who participated in natural resource-based activities. Results from estimated travel cost models are presented, including visitors' responses to prices and estimated per person-trip user values. Annual user values are also calculated and presented...

  15. Bio-mathematical analysis for the peristaltic flow of single wall carbon nanotubes under the impact of variable viscosity and wall properties.

    PubMed

    Shahzadi, Iqra; Sadaf, Hina; Nadeem, Sohail; Saleem, Anber

    2017-02-01

    This paper studies the peristaltic flow of single-wall carbon nanotubes under the impact of variable viscosity and wall properties. The right and left walls of the curved channel possess a sinusoidal wave travelling along the outer boundary. The features of the peristaltic motion are determined using the long-wavelength and low-Reynolds-number approximations. Exact solutions are determined for the axial velocity and the temperature profile. Graphical results are presented for the velocity profile, temperature and stream function for various physical parameters of interest. Symmetry of the curved channel is disturbed for smaller values of the curvature parameter. The magnitude of the velocity profile increases for larger values of the variable viscosity parameter in both cases (pure blood as well as single-wall carbon nanotubes). The velocity profile also increases with increasing values of the rigidity parameter: an increase in rigidity decreases tension in the walls of the blood vessels, which speeds up the blood flow for pure blood as well as single-wall carbon nanotubes. An increase in Grashof number decreases the fluid velocity, because viscous forces then play the prominent role. Temperature drops for increasing values of the nanoparticle volume fraction; the higher thermal conductivity of the nanoparticles enables quick heat dissipation, which justifies the use of single-wall carbon nanotubes as a coolant in different situations. Symmetry of the curved channel is destroyed due to the curvedness for velocity, temperature and contour plots. Trapping phenomena show that the size of the trapped bolus is smaller for the pure blood case than for single-wall carbon nanotubes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. The Belgian repository of fundamental atomic data and stellar spectra (BRASS). I. Cross-matching atomic databases of astrophysical interest

    NASA Astrophysics Data System (ADS)

    Laverick, M.; Lobel, A.; Merle, T.; Royer, P.; Martayan, C.; David, M.; Hensberge, H.; Thienpont, E.

    2018-04-01

    Context. Fundamental atomic parameters, such as oscillator strengths, play a key role in modelling and understanding the chemical composition of stars in the Universe. Despite the significant work underway to produce these parameters for many astrophysically important ions, uncertainties in these parameters remain large and can propagate throughout the entire field of astronomy. Aims: The Belgian repository of fundamental atomic data and stellar spectra (BRASS) aims to provide the largest systematic and homogeneous quality assessment of atomic data to date in terms of wavelength, atomic and stellar parameter coverage. To prepare for it, we first compiled multiple literature occurrences of many individual atomic transitions, from several atomic databases of astrophysical interest, and assessed their agreement. In a second step synthetic spectra will be compared against extremely high-quality observed spectra, for a large number of BAFGK spectral type stars, in order to critically evaluate the atomic data of a large number of important stellar lines. Methods: Several atomic repositories were searched and their data retrieved and formatted in a consistent manner. Data entries from all repositories were cross-matched against our initial BRASS atomic line list to find multiple occurrences of the same transition. Where possible we used a new non-parametric cross-match depending only on electronic configurations and total angular momentum values. We also checked for duplicate entries of the same physical transition, within each retrieved repository, using the non-parametric cross-match. Results: We report on the number of cross-matched transitions for each repository and compare their fundamental atomic parameters. We find differences in log(gf) values of up to 2 dex or more. We also find and report that 2% of our line list and Vienna atomic line database retrievals are composed of duplicate transitions. 
Finally we provide a number of examples of atomic spectral lines with different retrieved literature log(gf) values, and discuss the impact of these uncertain log(gf) values on quantitative spectroscopy. All cross-matched atomic data and duplicate transition pairs are available to download at http://brass.sdf.org

  17. Black Hole Mergers as Probes of Structure Formation

    NASA Technical Reports Server (NTRS)

    Alicea-Munoz, E.; Miller, M. Coleman

    2008-01-01

    Intense structure formation and reionization occur at high redshift, yet there is currently little observational information about this very important epoch. Observations of gravitational waves from massive black hole (MBH) mergers can provide us with important clues about the formation of structures in the early universe. Past efforts have been limited to calculating merger rates using different models in which many assumptions are made about the specific values of physical parameters of the mergers, resulting in merger rate estimates that span a very wide range (0.1-10^4 mergers/year). Here we develop a semi-analytical, phenomenological model of MBH mergers that includes plausible combinations of several physical parameters, which we then turn around to determine how well observations with the Laser Interferometer Space Antenna (LISA) will be able to enhance our understanding of the universe during the critical z ≈ 5-30 structure formation era. We do this by generating synthetic LISA observable data (total BH mass, BH mass ratio, redshift, merger rates), which are then analyzed using a Markov Chain Monte Carlo method. This allows us to constrain the physical parameters of the mergers. We find that our methodology works well at estimating merger parameters, consistently giving results within 1-σ of the input parameter values. We also discover that the number of merger events is a key discriminant among models. This helps our method be robust against observational uncertainties. Our approach, which at this stage constitutes a proof of principle, can be readily extended to physical models and to more general problems in cosmology and gravitational wave astrophysics.

  18. Semiconductive 3-D haloplumbate framework hybrids with high color rendering index white-light emission

    PubMed Central

    Wang, Guan-E; Wang, Ming-Sheng; Cai, Li-Zhen; Li, Wen-Hua

    2015-01-01

    Single-component white light materials may create great opportunities for novel conventional lighting applications and display systems; however, their reported color rendering index (CRI) values, one of the key parameters for lighting, are less than 90, which does not satisfy the demand of color-critical upmarket applications, such as photography, cinematography, and art galleries. In this work, two semiconductive chloroplumbate (chloride anion of lead(ii)) hybrids, obtained using a new inorganic–organic hybrid strategy, show unprecedented 3-D inorganic framework structures and white-light-emitting properties with high CRI values around 90, one of which shows the highest value to date. PMID:28757985

  19. Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables

    DTIC Science & Technology

    2013-06-01

    ...command in crisis management. C2 Agility Model: Agility can be conceptualized at a number of different levels; for instance at the team...

  20. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters, and a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first and second inputs and determines a motor management strategy for the electric motor based thereon.

  1. Study on Correlation Between Shear Wave Velocity and Ground Properties for Ground Liquefaction Investigation of Silts

    NASA Astrophysics Data System (ADS)

    Che, Ailan; Luo, Xianqi; Qi, Jinghua; Wang, Deyong

    Shear wave velocity (Vs) of soil is one of the key parameters used in assessing the liquefaction potential of saturated soils under level ground and in determining the shear modulus of soils used in seismic response analyses. This parameter can be obtained experimentally from laboratory soil tests and field measurements. A statistical relation between shear wave velocity and soil properties, based on surface wave survey investigations and resonant column triaxial tests taken from more than 14 sites within a depth of 10 m below the ground surface, is obtained for the Tianjin (China) area. The relationship between shear wave velocity and the standard penetration test N value (SPT-N value) of silt and clay in the Quaternary formation is summarized. Evaluating the effect of shear wave velocity on the liquefaction resistance of saturated silts (sandy loams) is an important problem in assessing liquefaction resistance. According to the results of cyclic triaxial tests, a correlation between liquefaction resistance and shear wave velocity is presented. The results are useful for ground liquefaction investigation and the evaluation of liquefaction resistance.

  2. Optimal Cytoplasmic Transport in Viral Infections

    PubMed Central

    D'Orsogna, Maria R.; Chou, Tom

    2009-01-01

    For many viruses, the ability to infect eukaryotic cells depends on their transport through the cytoplasm and across the nuclear membrane of the host cell. During this journey, viral contents are biochemically processed into complexes capable of both nuclear penetration and genomic integration. We develop a stochastic model of viral entry that incorporates all relevant aspects of transport, including convection along microtubules, biochemical conversion, degradation, and nuclear entry. Analysis of the nuclear infection probabilities in terms of the transport velocity, degradation, and biochemical conversion rates shows how certain values of key parameters can maximize the nuclear entry probability of the viral material. The existence of such “optimal” infection scenarios depends on the details of the biochemical conversion process and implies potentially counterintuitive effects in viral infection, suggesting new avenues for antiviral treatment. Such optimal parameter values provide a plausible transport-based explanation of the action of restriction factors and of experimentally observed optimal capsid stability. Finally, we propose a new interpretation of how genetic mutations unrelated to the mechanism of drug action may nonetheless confer novel types of overall drug resistance. PMID:20046829

  3. An Investigation of Candidate Sensor-Observable Wake Vortex Strength Parameters for the NASA Aircraft Vortex Spacing System (AVOSS)

    NASA Technical Reports Server (NTRS)

    Tatnall, Christopher R.

    1998-01-01

    The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse or decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor, to ensure safety by making wake observations to verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.

  4. Spectral distortion of dual-comb spectrometry due to repetition rate fluctuation

    NASA Astrophysics Data System (ADS)

    Hong-Lei, Yang; Hao-Yun, Wei; Yan, Li

    2016-04-01

    Dual-comb spectrometry suffers from fluctuations of the comb parameters. We demonstrate that the repetition rate is more important than any other parameter, since a fluctuation of the repetition rate changes the difference in repetition rate between the two combs, consequently causing conversion factor variation and spectral frequency misalignment. The measured frequency noise power spectral density of the repetition rate exhibits an integrated residual frequency modulation of 1.4 Hz from 1 Hz to 100 kHz in our system. This value corresponds to an absorption peak fluctuation with a root mean square value of 0.19 cm^-1, verified by both simulation and experiment. Further, we can also simulate spectrum degradation as the fluctuation varies. After modifying the misaligned spectra and averaging, the measured result agrees well with the simulated spectrum based on the GEISA database. Project supported by the State Key Laboratory of Precision Measurement Technology & Instruments of Tsinghua University and the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205147).

  5. On the zeroth-order hamiltonian for CASPT2 calculations of spin crossover compounds.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Ariño, Jordi; Robert, Vincent

    2016-04-15

    Complete active space self-consistent field (CASSCF) calculations and subsequent second-order perturbation theory treatment (CASPT2) are discussed in the evaluation of the spin-state energy difference (ΔH(elec)) of a series of seven spin crossover (SCO) compounds. The reference values have been extracted from a combination of experimental measurements and DFT + U calculations, as discussed in a recent article (Vela et al., Phys Chem Chem Phys 2015, 17, 16306). It is definitively shown that the critical IPEA parameter used in CASPT2 calculations of ΔH(elec), a key parameter in the design of SCO compounds, should be modified with respect to its default value of 0.25 a.u. and increased up to 0.50 a.u. The satisfactory agreement observed previously in the literature might result from error cancellation between the default IPEA, which overestimates the stability of the HS state, and the erroneous atomic orbital basis set contraction of carbon atoms, which stabilizes the LS states. © 2015 Wiley Periodicals, Inc.

  6. Reentrant behaviors in the phase diagram of spin-1 planar ferromagnet with single-ion anisotropy

    NASA Astrophysics Data System (ADS)

    Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.; Mercaldo, M. T.

    2018-05-01

    We used the two-time Green function framework to investigate the role played by the easy-axis single-ion anisotropy in the phase diagram of (d > 2)-dimensional spin-1 planar ferromagnets, which exhibit a magnetic-field-induced quantum phase transition. We tackled the problem using two different kinds of approximations: the Anderson-Callen decoupling scheme and the Devlin approach. In the latter scheme, the exchange anisotropy terms in the equations of motion are treated at the Tyablikov decoupling level while the crystal field anisotropy contribution is handled exactly. The key emerging result is a reentrant structure of the phase diagram close to the quantum critical point for certain values of the single-ion anisotropy parameter. We compare the results obtained within the two approximation schemes and, in particular, recover the same qualitative behavior. We show the phase diagram close to the field-induced quantum critical point and the behavior of the susceptibility for different values of the single-ion anisotropy parameter, highlighting the differences between the two scenarios (i.e. with and without reentrant behavior).

  7. Radionuclide transfer to fruit in the IAEA TRS No. 472

    NASA Astrophysics Data System (ADS)

    Carini, F.; Pellizzoni, M.; Giosuè, S.

    2012-04-01

    This paper describes the approach taken to present the information on fruits in the IAEA report TRS No. 472, supported by IAEA-TECDOC-1616, which describes the key transfer processes, concepts and conceptual models regarded as important for dose assessment, as well as relevant parameters for modelling radionuclide transfer in fruits. The information relates to fruit plants grown in agricultural ecosystems of temperate regions. The relative significance of each pathway after release of radionuclides depends upon the radionuclide, the kind of crop, the stage of plant development and the season at the time of deposition. Fruit intended as a component of the human diet is borne by plants that are heterogeneous in habit and in morphological and physiological traits. Information on radionuclides in fruit systems has therefore been rationalised by characterising plants in three groups: woody trees, shrubs, and herbaceous plants. Parameter values have been collected from open literature, conference proceedings, institutional reports, books and international databases. Data on root uptake are reported as transfer factor values related to fresh weight, since consumption data for fruits are usually given in fresh weight.

  8. Can the Hypothesis 'Photon Interferes only with Itself' be Reconciled with Superposition of Light from Multiple Beams or Sources?

    NASA Technical Reports Server (NTRS)

    Roychoudhuri, Chandrasekhar; Prasad, Narasimha S.; Peng, Qing

    2007-01-01

    Any superposition effect as measured (SEM) by us is the summation of simultaneous stimulations experienced by a detector due to the presence of multiple copies of a detectee, each carrying different values of the same parameter. We discuss cases with light beams carrying the same frequency, for both diffraction and the multiple-beam Fabry-Perot interferometer, and also a case where the two superposed light beams carry different frequencies. Our key argument is that if light really consists of an indivisible elementary particle, the photon, then it cannot by itself create a superposition effect, since the state vector of an elementary particle cannot carry more than one value of any parameter at the same time. Fortunately, the semiclassical model explains all light-induced interactions using quantized atoms and classical EM wave packets. Classical physics, with its deeper commitment to Reality Ontology, was better prepared to nurture the emergence of Quantum Mechanics and can still provide guidance for exploring nature more deeply if we pay careful attention to successful classical formulations like the Huygens-Fresnel diffraction integral.

  9. Thermodynamic and structure-property study of liquid-vapor equilibrium for aroma compounds.

    PubMed

    Tromelin, Anne; Andriot, Isabelle; Kopjar, Mirela; Guichard, Elisabeth

    2010-04-14

    Thermodynamic parameters (T, ΔH°, ΔS°, K) were collected from the literature and/or calculated for five esters, four ketones, two aldehydes, and three alcohols, as pure compounds and as compounds in aqueous solution. Examination of correlations between these parameters and the range of ΔH° and ΔS° values puts forward the key roles of enthalpy in the vaporization of pure compounds and of entropy in the liquid-vapor equilibrium of compounds in aqueous solution. A structure-property relationship (SPR) study was performed using molecular descriptors on aroma compounds to better understand their vaporization behavior. In addition to the role of polarity in the vapor-liquid equilibrium of compounds in aqueous solution, the structure-property study points out the role of chain length and branching, illustrated by the correlation between the connectivity index CHI-V-1 and the difference between T and log K for vaporization of pure compounds and compounds in aqueous solution. Moreover, examination of the esters' enthalpy values allowed a probable conformation adopted by ethyl octanoate in aqueous solution to be proposed.

  10. INDIVIDUALIZED FETAL GROWTH ASSESSMENT: CRITICAL EVALUATION OF KEY CONCEPTS IN THE SPECIFICATION OF THIRD TRIMESTER GROWTH TRAJECTORIES

    PubMed Central

    Deter, Russell L.; Lee, Wesley; Yeo, Lami; Romero, Roberto

    2012-01-01

    Objectives To characterize 2nd and 3rd trimester fetal growth using Individualized Growth Assessment in a large cohort of fetuses with normal growth outcomes. Methods A prospective longitudinal study of 119 pregnancies was carried out from 18 weeks, MA, to delivery. Measurements of eleven fetal growth parameters were obtained from 3D scans at 3-4 week intervals. Regression analyses were used to determine Start Points [SP] and Rossavik model [P = c·t^(k + st)] coefficients c, k and s for each parameter in each fetus. Second trimester growth model specification functions were re-established. These functions were used to generate individual growth models and determine predicted s and s-residual [s = pred s + s-resid] values. Actual measurements were compared to predicted growth trajectories obtained from the growth models and Percent Deviations [% Dev = {(actual − predicted)/predicted} × 100] calculated. Age-specific reference standards for this statistic were defined using 2-level statistical modeling for the nine directly measured parameters and estimated weight. Results Rossavik models fit the data for all parameters very well [R2: 99%], with SPs and k values similar to those found in a much smaller cohort. The c values were strongly related to the 2nd trimester slope [R2: 97%], as was predicted s to estimated c [R2: 95%]. The latter was negative for skeletal parameters and positive for soft tissue parameters. The s-residuals were unrelated to estimated c's [R2: 0%] and had mean values of zero. Rossavik models predicted 3rd trimester growth with systematic errors close to 0% and random errors [95% range] of 5.7-10.9% and 20.0-24.3% for one- and three-dimensional parameters, respectively. Moderate changes in age-specific variability were seen in the 3rd trimester.
Conclusions IGA procedures for evaluating 2nd and 3rd trimester growth are now established based on a large cohort [4–6 fold larger than those used previously], thus permitting more reliable growth assessment with each fetus acting as its own control. New, more rigorously defined, age-specific standards for the evaluation of 3rd trimester growth deviations are now available for 10 anatomical parameters. Our results are also consistent with the predicted s and s-residual being representatives of growth controllers operating through the insulin-like growth factor [IGF] axis. PMID:23962305
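The Rossavik model and the Percent Deviation statistic quoted in this record can be written out directly. This is a sketch of just those two formulas, not of the full IGA procedure; the coefficient values in the usage note are arbitrary illustrations.

```python
def rossavik(t, c, k, s):
    """Rossavik growth model: P(t) = c * t**(k + s*t)."""
    return c * t ** (k + s * t)

def percent_deviation(actual, predicted):
    """Percent Deviation: ((actual - predicted) / predicted) * 100."""
    return (actual - predicted) / predicted * 100.0
```

At t = 1 the model reduces to P = c regardless of k and s, and an actual measurement 10% above its predicted trajectory yields a Percent Deviation of 10.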

  11. Calibration of micro-capacitance measurement system for thermal barrier coating testing

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Chen, Dixiang; Wan, Chengbiao; Tian, Wugang; Pan, Mengchun

    2018-06-01

    In order to comprehensively evaluate the thermal barrier coating system of an engine blade, an integrated planar sensor combining electromagnetic coils with planar capacitors is designed, in which the capacitance measurement accuracy of the planar capacitor is a key factor. The micro-capacitance measurement system is built around an impedance analyzer. Because of the influence of non-ideal factors on the measuring system, there is an obvious difference between the measured value and the actual value, and it is necessary to calibrate the measured results to eliminate this difference. In this paper, the measurement model of a planar capacitive sensor is established, and the relationship between the measured and actual values of capacitance is deduced. The model parameters are estimated with the least squares method, and the calibration accuracy is evaluated with experiments under different dielectric conditions. The capacitance measurement error is reduced from 29%-46.5% to around 1% after calibration, which verifies the feasibility of the calibration method.
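A hedged sketch of the calibration step. The record does not state the form of the deduced measurement model, so a simple linear model y = a·x + b, fitted by closed-form least squares and then inverted, stands in for the paper's actual relationship between measured and actual capacitance.

```python
def fit_linear(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def calibrate(measured, a, b):
    """Invert the fitted model to recover the actual value from a reading."""
    return (measured - b) / a
```

Fitting on reference capacitors of known value yields (a, b); subsequent readings are then corrected through `calibrate`, which is the pattern the paper's calibration follows whatever the exact model form.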

  12. Selected physical properties of various diesel blends

    NASA Astrophysics Data System (ADS)

    Hlaváčová, Zuzana; Božiková, Monika; Hlaváč, Peter; Regrut, Tomáš; Ardonová, Veronika

    2018-01-01

    The quality determination of biofuels requires identifying their chemical and physical parameters. The key physical parameters are rheological, thermal and electrical properties. In our study, we investigated samples of diesel blends with rapeseed methyl ester content in the range from 3 to 100%. For these, we measured basic thermophysical properties, including thermal conductivity and thermal diffusivity, using two different transient methods: the hot-wire method and the dynamic plane source method. Every thermophysical parameter was measured 100 times using both methods for all samples. Dynamic viscosity was measured during the heating process over the temperature range 20-80°C. A digital rotational viscometer (Brookfield DV 2T) was used for dynamic viscosity detection. Electrical conductivity was measured using a digital conductivity meter (Model 1152) over a temperature range from -5 to 30°C. The highest values of the thermal parameters were reached in the diesel sample with the highest biofuel content. The dynamic viscosity of the samples increased with higher concentrations of the bio-component rapeseed methyl esters. The electrical conductivity of the blends also increased with rapeseed methyl ester content.

  13. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

    Stormwater management wet ponds are generally very shallow and hence can significantly increase (about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural networks (ANN) modeling techniques to advance our knowledge of the key factors governing thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data from three summers, 2009 to 2011, in four stormwater management ponds located in the cities of Guelph and Kitchener, Ontario, Canada, to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio and pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.

  14. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to R‧G‧B‧ color space by rotating the color cube with a random angle matrix. The RPMPFRFT is then employed to change the pixel values of the color image: the three components of the scrambled RGB color space are converted by the RPMPFRFT with three different transform pairs, respectively. In contrast to a complex-valued output transform, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposition of sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation and RPMPFRFT operations serve as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming them into the three RGB color components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys and robust to noise attack and data loss.
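As an illustration of the chaos-permutation idea only (the coupled logistic maps, Color Blend, and RPMPFRFT of the actual scheme are not reproduced here), a single logistic map can order image sections into a key-dependent permutation; the initial condition x0 and parameter r play the role of the key.

```python
def logistic_sequence(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) for n steps."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaos_permutation(n, x0=0.3, r=3.99):
    """Rank positions by the chaotic sequence to obtain a permutation."""
    xs = logistic_sequence(x0, r, n)
    return sorted(range(n), key=lambda i: xs[i])

def permute(blocks, perm):
    """Rearrange image sections according to the permutation."""
    return [blocks[i] for i in perm]

def invert(perm):
    """Inverse permutation, needed for decryption."""
    inv = [0] * len(perm)
    for pos, i in enumerate(perm):
        inv[i] = pos
    return inv
```

Because the map is chaotic, a tiny change in x0 or r yields a completely different permutation, which is what makes the scheme key-sensitive; applying the inverse permutation with the correct key restores the original ordering.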

  15. Effect of varying two key parameters in simulating evacuation for a dormitory in China

    NASA Astrophysics Data System (ADS)

    Lei, Wenjun; Li, Angui; Gao, Ran

    2013-01-01

    Student dormitories serve as both living and resting areas for students in their spare time. Dormitories contain many small rooms in which students are densely distributed; high occupant density is their main characteristic. In the event of an accident such as a fire or earthquake, the losses can be severe. Computer evacuation models developed overseas are commonly applied in working out safety management schemes. The average minimum widths of the corridor and the exit are the two key parameters affecting evacuation from a dormitory. This paper studies the effect of varying these two parameters, taking a dormitory at our university as an example. Evacuation performance is predicted with the software FDS + Evac; the default values in the software are used and adjusted through a field survey. The effect of varying either of the two parameters is discussed. The simulated results agree well with the experimental results. Our study suggests that evacuation time is not proportional to evacuation distance, a phenomenon we term "the closer is not the faster." For the building studied in this article, a corridor width of 3 m is the most appropriate, and a suitable exit width for evacuation is about 2.5 to 3 m. The number of people has a great influence on walking speed. The purpose of this study is to optimize the building in favor of personnel evacuation, so that damage can be minimized.
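    A first-order check on how exit width drives evacuation time can be sketched with a hydraulic-style estimate; the flow and walking-speed coefficients below are illustrative assumptions, not FDS + Evac outputs.

```python
def evacuation_time_estimate(n_people, exit_width_m, travel_dist_m,
                             walking_speed=1.2, specific_flow=1.3):
    """Crude hydraulic-style estimate (all coefficients hypothetical):
    queueing time through the exit plus free-walking travel time.
    specific_flow is persons per metre of exit width per second."""
    queue_time = n_people / (specific_flow * exit_width_m)
    travel_time = travel_dist_m / walking_speed
    return queue_time + travel_time
```

    Even this crude form shows why evacuation time is not proportional to distance: once the exit queue dominates, widening the exit helps while shortening the walk barely matters.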

  16. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies are proposed for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters. These formulations yield RPMs of various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, is bounded rigorously.
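    The core optimization idea, i.e. the tightest band containing all observations around a polynomial mean, can be sketched as follows; this is a simplified stand-in for the paper's formulations, using an ordinary least-squares fit for the mean.

```python
import numpy as np

def fit_rpm_band(x, y, degree=2, k=2.0):
    """Simplified RPM-style band: fit a polynomial mean, then pick the
    smallest sigma such that every observation lies within k standard
    deviations of the mean prediction (the tightest admissible band)."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma = np.max(np.abs(resid)) / k
    return coeffs, sigma
```

    The paper optimizes mean and band jointly under convexity guarantees; this sketch only illustrates the "smallest sigma covering all data" criterion.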

  17. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies are proposed for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters. These formulations yield RPMs of various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, can be bounded tightly and rigorously.

  18. The Neurobiology of Reference-Dependent Value Computation

    PubMed Central

    De Martino, Benedetto; Kumaran, Dharshan; Holt, Beatrice; Dolan, Raymond J.

    2009-01-01

    A key focus of current research in neuroeconomics concerns how the human brain computes value. Although value has generally been viewed as an absolute measure (e.g., expected value, reward magnitude), much evidence suggests that value is more often computed with respect to a changing reference point rather than in isolation. Here, we present the results of a study aimed at dissociating brain regions involved in reference-independent (i.e., "absolute") value computations from those involved in value computations relative to a reference point. During functional magnetic resonance imaging, subjects acted as buyers and sellers during a market exchange of lottery tickets. At a behavioral level, we demonstrate that subjects systematically accorded a higher value to objects they owned relative to those they did not, an effect that results from a shift in reference point (i.e., status quo bias or endowment effect). Our results show that activity in orbitofrontal cortex and dorsal striatum tracks parameters such as the expected value of lottery tickets, indicating the computation of reference-independent value. In contrast, activity in ventral striatum indexed the degree to which stated prices, at a within-subjects and between-subjects level, were distorted with respect to a reference point. The findings speak to the neurobiological underpinnings of reference dependency during real market value computations. PMID:19321780

  19. Estimation of the viscosities of liquid binary alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Su, Xiang-Yu

    2018-01-01

    As one of the most important physical and chemical properties, viscosity plays a critical role in physics and materials science as a key parameter for quantitatively understanding fluid transport processes and reaction kinetics in metallurgical process design. Experimental and theoretical studies of liquid metals remain challenging. Many empirical and semi-empirical models are available today with which to evaluate the viscosity of liquid metals and alloys. However, the mixing-energy parameter in these models is not easily determined, and most predictive models have seen limited application. In the present study, a new thermodynamic parameter ΔG is proposed to predict liquid alloy viscosity. The prediction equation depends on basic physical and thermodynamic parameters, namely density, melting temperature, absolute atomic mass, electronegativity, electron density, molar volume, Pauling radius, and mixing enthalpy. Our results show that the liquid alloy viscosity predicted using the proposed model is closely in line with the experimental values. In addition, if the component radius difference is greater than 0.03 nm at a given temperature, the atomic size factor has a significant effect on the interaction of the binary liquid metal atoms. The proposed thermodynamic parameter ΔG also facilitates the study of other physical properties of liquid metals.
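    Models of this family generally reduce to an Arrhenius form, η = A·exp(E/RT); a minimal sketch of recovering the two constants from two (T, η) measurements follows, with the data points in the usage purely hypothetical.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(t1, eta1, t2, eta2):
    """Fit eta = A * exp(E / (R * T)) through two (temperature, viscosity)
    points: ln(eta1/eta2) = (E/R) * (1/t1 - 1/t2)."""
    e = R * math.log(eta1 / eta2) / (1.0 / t1 - 1.0 / t2)
    a = eta1 / math.exp(e / (R * t1))
    return a, e
```

    With A and E in hand, the viscosity at any other temperature in the liquid range follows directly; the thermodynamic models discussed above essentially predict these constants from composition instead of fitting them.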

  20. Parameter Estimation for Thurstone Choice Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vojnovic, Milan; Yun, Seyoung

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, i.e., when in expectation each comparison set of that cardinality occurs the same number of times, for a broad class of Thurstone choice models the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
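    For the Bradley-Terry special case (pair comparisons), the maximum likelihood strengths can be computed with the classical minorization-maximization iteration; a sketch with hypothetical win counts:

```python
import numpy as np

def bradley_terry_mm(wins, iters=200):
    """MM estimation of Bradley-Terry strengths from a pairwise win-count
    matrix, wins[i][j] = number of times item i beat item j.
    Update: p_i <- w_i / sum_j n_ij / (p_i + p_j), then renormalize."""
    n = wins.shape[0]
    w = wins.sum(axis=1)            # total wins per item
    p = np.ones(n)
    for _ in range(iters):
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    nij = wins[i, j] + wins[j, i]
                    denom[i] += nij / (p[i] + p[j])
        p = w / denom
        p /= p.sum()                # strengths identifiable only up to scale
    return p
```

    The estimation-accuracy results above concern how the mean squared error of exactly this kind of estimator scales with the size of the comparison sets.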

  1. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques.

    PubMed

    Faisal, Kamil; Shaker, Ahmed

    2017-03-07

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ reflects the satisfaction of its population, assessed through different environmental, urban and socio-economic parameters. This paper elucidates the use of Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS and census data were first obtained to derive various environmental, urban and socio-economic parameters. The aforementioned techniques were used to integrate all of these environmental, urban and socio-economic parameters. Three key indicators, including family income, higher level of education and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that GWR with the spatial lag model improves precision and accuracy by up to 20% with respect to results derived using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help authorities and decision makers understand the empirical relationships among environmental factors, urban morphology and real estate, and make decisions that promote environmental justice.
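    The PCA step of such an integration can be sketched as a composite index built from the first principal component of standardized indicators; the input data here are hypothetical.

```python
import numpy as np

def pca_composite_index(X):
    """Composite index from the first principal component of standardized
    indicators (rows = spatial units, columns = indicators)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc1 = vecs[:, -1]                  # loadings of the largest component
    if pc1.sum() < 0:                  # fix sign: higher index = higher indicators
        pc1 = -pc1
    return Z @ pc1
```

    GWR then goes further than this global weighting by letting the indicator-to-UEQ relationship vary spatially, which is where the reported accuracy gain comes from.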

  2. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques

    PubMed Central

    Faisal, Kamil; Shaker, Ahmed

    2017-01-01

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ reflects the satisfaction of its population, assessed through different environmental, urban and socio-economic parameters. This paper elucidates the use of Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS and census data were first obtained to derive various environmental, urban and socio-economic parameters. The aforementioned techniques were used to integrate all of these environmental, urban and socio-economic parameters. Three key indicators, including family income, higher level of education and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that GWR with the spatial lag model improves precision and accuracy by up to 20% with respect to results derived using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help authorities and decision makers understand the empirical relationships among environmental factors, urban morphology and real estate, and make decisions that promote environmental justice. PMID:28272334

  3. Experimental investigation of analog and digital dimming techniques on photometric performance of an indoor Visible Light Communication (VLC) system

    NASA Astrophysics Data System (ADS)

    Zafar, Fahad; Kalavally, Vineetha; Bakaul, Masuduzzaman; Parthiban, R.

    2015-09-01

    To make commercial implementation of light-emitting diode (LED) based visible light communication (VLC) systems feasible, they must incorporate dimming schemes that provide energy savings, mood settings, and increased aesthetic value in the places using this technology. Two general methods are used to dim LEDs, commonly categorized as analog and digital dimming. Incorporating fast data transmission with these techniques is a key challenge in VLC. In this paper, digital and analog dimming for a 10 Mb/s non-return-to-zero on-off keying (NRZ-OOK) based VLC system is experimentally investigated, considering both photometric and communication parameters. A spectrophotometer was used for photometric analysis, and a line-of-sight (LOS) configuration in the presence of ambient light was used for analyzing communication parameters. Based on the experimental results, the digital dimming scheme is preferable for indoor VLC systems requiring high dimming precision and data transmission at lower brightness levels. On the other hand, the analog dimming scheme is a cost-effective solution for high-speed systems where dimming precision is insignificant.
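    Digital dimming of an NRZ-OOK stream can be sketched as confining data to the ON portion of each PWM period, so mean brightness tracks the duty cycle independently of the data; the slot counts below are hypothetical.

```python
def pwm_ook_frame(bits, duty, slots_per_period=10):
    """Digital (PWM) dimming sketch: NRZ-OOK data is carried only in the ON
    portion of each PWM period; OFF slots hold the LED dark."""
    on_slots = round(duty * slots_per_period)
    if on_slots == 0:                      # fully dimmed: no data can be sent
        return [0] * slots_per_period
    out, i = [], 0
    while i < len(bits):
        for s in range(slots_per_period):
            if s < on_slots and i < len(bits):
                out.append(bits[i])        # data slot
                i += 1
            else:
                out.append(0)              # forced-dark dimming slot
    return out
```

    The trade-off the paper measures is visible here: lower duty cycles preserve brightness control but stretch the same data over more slots, reducing throughput, whereas analog dimming keeps every slot available by lowering drive current instead.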

  4. Oxygen consumption by bovine granulosa cells with prediction of oxygen transport in preantral follicles.

    PubMed

    Li, Dongxing; Redding, Gabe P; Bronlund, John E

    2013-01-01

    The rate of oxygen consumption by granulosa cells is a key parameter in mathematical models that describe oxygen transport across ovarian follicles. This work measured the oxygen consumption rate of bovine granulosa cells in vitro to be in the range 2.1-3.3×10⁻¹⁶ mol cell⁻¹ s⁻¹ (0.16-0.25 mol m⁻³ s⁻¹). The implications of the rates for oxygen transport in large bovine preantral follicles were examined using a mathematical model. The results indicate that oocyte oxygenation becomes increasingly constrained as preantral follicles grow, reaching hypoxic levels near the point of antrum formation. Beyond a preantral follicle radius of 134 µm, oxygen cannot reach the oocyte surface at typical values of model parameters. Since reported sizes of large bovine preantral follicles range from 58 to 145 µm in radius, this suggests that oocyte oxygenation is possible in all but the largest preantral follicles, which are on the verge of antrum formation. In preantral bovine follicles, the oxygen consumption rate of granulosa cells and fluid voidage will be the key determinants of oxygen levels across the follicle.
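    The size constraint described follows from steady-state diffusion with uniform consumption in a sphere; a sketch of the resulting critical radius, where the diffusivity and surface concentration in the usage are hypothetical and only the consumption rate is taken from the measured range:

```python
import math

def critical_radius(c_surface, q, d):
    """Largest sphere radius (m) at which O2 still reaches the centre, for
    steady-state diffusion with uniform consumption q (mol m^-3 s^-1) and
    diffusivity d (m^2 s^-1):
    C(r) = C_s - q*(R**2 - r**2)/(6*d), so C(0) >= 0 iff R <= sqrt(6*d*C_s/q)."""
    return math.sqrt(6.0 * d * c_surface / q)
```

    Plugging in a consumption rate near the measured 0.2 mol m⁻³ s⁻¹ with plausible (but here assumed) values of D and surface concentration yields a critical radius on the order of 100 µm, the same order as the 134 µm limit reported above.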

  5. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. R.; Carmichael, J. R.; Gebhart, T. E.

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length).
Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.
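    A much-reduced sketch of the gas-dynamics idea, ignoring the valve dynamics and friction that the full simulator models, is an adiabatic-expansion integrator stepped down the barrel; all numbers in the usage are hypothetical, not ITER design values.

```python
import math

def gas_gun_speed(p0, v0, area, barrel_len, pellet_mass, gamma=1.4, n=20000):
    """Stepwise sketch of a single-stage gas gun: reservoir gas expands
    adiabatically (p * V**gamma = const) and accelerates the pellet down
    the barrel. p0 (Pa), v0 reservoir volume (m^3), area bore area (m^2)."""
    x, v = 0.0, 0.0
    dx = barrel_len / n
    for _ in range(n):
        p = p0 * (v0 / (v0 + area * x)) ** gamma   # adiabatic expansion
        a = p * area / pellet_mass                 # pellet acceleration
        v = math.sqrt(v * v + 2.0 * a * dx)        # work-energy update
        x += dx
    return v
```

    Even this stripped-down form reproduces the qualitative dependencies the simulator is used to explore: muzzle speed rises with valve pressure and reservoir volume and falls with pellet mass.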

  6. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combs, S. K.; Reed, J. R.; Lyttle, M. S.

    2016-01-01

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length).
Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.

  7. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    DOE PAGES

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.; ...

    2017-09-28

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C:12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture.
The calibration of the Ball–Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by the same plant functional type (needleleaf evergreen temperate forest), despite significant differences in stand composition, age, and climatology, suggesting that CLM could benefit from a revised mbb value of 6, rather than the default value of 9, for this plant functional type. Conversely, Wind River required a unique calibration of the hydrology submodel to simulate soil moisture, suggesting that the default hydrology has a more limited applicability. This study demonstrates that carbon isotope data can be used to constrain stomatal conductance and intrinsic water use efficiency in CLM, as an alternative to eddy covariance flux measurements. It also demonstrates that carbon isotopes can expose structural weaknesses in the model and provide a key constraint that may guide future model development.
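    The Ball-Berry model referred to here is the standard linear relation gs = g0 + m·A·hs/Cs; a sketch showing the effect of lowering the slope from the CLM default, with illustrative units rather than CLM's internal implementation:

```python
def ball_berry_gs(a_net, rh_surf, co2_surf, m=9.0, g0=0.01):
    """Ball-Berry stomatal conductance: gs = g0 + m * A * h_s / C_s, where
    a_net is net assimilation, rh_surf leaf-surface relative humidity
    (fraction), co2_surf leaf-surface CO2. m=9 is the CLM default slope;
    the study suggests m=6 for needleleaf evergreen temperate forest."""
    return g0 + m * a_net * rh_surf / co2_surf
```

    Lowering m from 9 to 6 reduces conductance (and hence transpiration) for the same assimilation, which is consistent with the default model's overestimated evapotranspiration noted above.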

  8. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C:12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture.
The calibration of the Ball–Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by the same plant functional type (needleleaf evergreen temperate forest), despite significant differences in stand composition, age, and climatology, suggesting that CLM could benefit from a revised mbb value of 6, rather than the default value of 9, for this plant functional type. Conversely, Wind River required a unique calibration of the hydrology submodel to simulate soil moisture, suggesting that the default hydrology has a more limited applicability. This study demonstrates that carbon isotope data can be used to constrain stomatal conductance and intrinsic water use efficiency in CLM, as an alternative to eddy covariance flux measurements. It also demonstrates that carbon isotopes can expose structural weaknesses in the model and provide a key constraint that may guide future model development.

  9. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    NASA Astrophysics Data System (ADS)

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.; Lin, John C.; Koven, Charles D.; Thornton, Peter E.; Bowling, David R.; Lai, Chun-Ta; Bible, Kenneth J.; Ehleringer, James R.

    2017-09-01

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C : 12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture. 
The calibration of the Ball-Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by the same plant functional type (needleleaf evergreen temperate forest), despite significant differences in stand composition and age and climatology, suggesting that CLM could benefit from a revised mbb value of 6, rather than the default value of 9, for this plant functional type. Conversely, Wind River required a unique calibration of the hydrology submodel to simulate soil moisture, suggesting that the default hydrology has a more limited applicability. This study demonstrates that carbon isotope data can be used to constrain stomatal conductance and intrinsic water use efficiency in CLM, as an alternative to eddy covariance flux measurements. It also demonstrates that carbon isotopes can expose structural weaknesses in the model and provide a key constraint that may guide future model development.

  10. Sensitivity Analysis of the Bone Fracture Risk Model

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach in which distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates, or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability.
There are many factors associated with these calculations, including environmental factors, factors associated with the fall event, mass and anthropometric values of the astronaut, BMD characteristics, characteristics of the relationship between BMD and bone strength, and bone fracture characteristics. The uncertainty in these factors is captured through the use of parameter distributions, and the fracture predictions are probability distributions with a mean value and an associated uncertainty. To determine parameter sensitivity, a correlation coefficient is found between the sample set of each model parameter and the calculated fracture probabilities. Each parameter's contribution to the variance is found by squaring the correlation coefficients, dividing by the sum of the squared correlation coefficients, and multiplying by 100. Results: Sensitivity analyses of BFxRM simulations of preflight, 0 days post-flight and 365 days post-flight falls onto the hip revealed a subset of the twelve factors within the model that cause the most variation in the fracture predictions. These factors include the spring constant used in the hip biomechanical model, the midpoint FRI parameter within the equation used to convert FRI to fracture probability, and preflight BMD values. Future work: Plans are underway to update the BFxRM by incorporating bone strength information from finite element models (FEM) into the bone strength portion of the BFxRM. Also, FEM bone strength information, along with fracture outcome data, will be incorporated into the FRI-to-fracture-probability relationship.
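The parameter-sensitivity procedure described above (correlate each parameter's samples with the fracture probabilities, square the correlations, and normalize to percent) can be sketched in a few lines. This is a minimal illustration with made-up inputs, not the published BFxRM parameters:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def variance_contributions(param_samples, outputs):
    """Percent contribution of each parameter to the output variance:
    square each parameter-output correlation, divide by the sum of the
    squared correlations, and multiply by 100."""
    r2 = {name: pearson(vals, outputs) ** 2
          for name, vals in param_samples.items()}
    total = sum(r2.values())
    return {name: 100.0 * v / total for name, v in r2.items()}

# Toy demonstration with made-up parameters (not the published BFxRM inputs)
rng = random.Random(1)
a = [rng.gauss(0, 1) for _ in range(1000)]
b = [rng.gauss(0, 1) for _ in range(1000)]
prob = [3 * x + y for x, y in zip(a, b)]  # output driven mostly by 'a'
contrib = variance_contributions({"a": a, "b": b}, prob)
```

By construction the contributions sum to 100%, and the parameter that dominates the output (here `a`) receives the largest share.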

  11. The character of scaling earthquake source spectra for Kamchatka in the 3.5-6.5 magnitude range

    NASA Astrophysics Data System (ADS)

    Gusev, A. A.; Guseva, E. M.

    2017-02-01

The properties of the source spectra of local shallow-focus earthquakes on Kamchatka in the range of magnitudes Mw = 3.5-6.5 are studied using 460 records of S-waves obtained at the PET station. The family of average source spectra is constructed; the spectra are used to study the relationship between Mw and the key quasi-dimensionless source parameters: the stress drop Δσ and the apparent stress σa. It is found that the parameter Δσ is almost stable, while σa grows steadily as the magnitude Mw increases, indicating that similarity is violated. It is known that at sufficiently large Mw the similarity hypothesis is approximately valid: neither Δσ nor σa shows any noticeable magnitude dependence. It has been established that Mw ≈ 5.7 is the threshold magnitude at which the described change of regimes occurs under the conditions on Kamchatka.

  12. Hydrologic Modeling in the Kenai River Watershed using Event Based Calibration

    NASA Astrophysics Data System (ADS)

    Wells, B.; Toniolo, H. A.; Stuefer, S. L.

    2015-12-01

Understanding hydrologic changes is key to preparing for possible future scenarios. On the Kenai Peninsula in Alaska, the yearly salmon runs provide a valuable stimulus to the economy: they are the focus of a large commercial fishing fleet, but also a prime tourist attraction. Modeling of anadromous waters provides a tool that assists in the prediction of future salmon run size. Beaver Creek, in Kenai, Alaska, is a lowland stream that has been modeled using the Army Corps of Engineers' event-based modeling package HEC-HMS. With the use of historic precipitation and discharge data, the model was calibrated to observed discharge values. The hydrologic parameters were measured in the field or calculated, while soil parameters were estimated and adjusted during the calibration. With the calibrated parameters for HEC-HMS, discharge estimates can be used by other researchers studying the area and help guide communities and officials to make better-educated decisions regarding the changing hydrology in the area and the economic drivers tied to it.

  13. Determination of key diffusion and partition parameters and their use in migration modelling of benzophenone from low-density polyethylene (LDPE) into different foodstuffs.

    PubMed

    Maia, Joaquim; Rodríguez-Bernaldo de Quirós, Ana; Sendón, Raquel; Cruz, José Manuel; Seiler, Annika; Franz, Roland; Simoneau, Catherine; Castle, Laurence; Driffield, Malcolm; Mercea, Peter; Oldring, Peter; Tosa, Valer; Paseiro, Perfecto

    2016-01-01

    The mass transport process (migration) of a model substance, benzophenone (BZP), from LDPE into selected foodstuffs at three temperatures was studied. A mathematical model based on Fick's Second Law of Diffusion was used to simulate the migration process and a good correlation between experimental and predicted values was found. The acquired results contribute to a better understanding of this phenomenon and the parameters so-derived were incorporated into the migration module of the recently launched FACET tool (Flavourings, Additives and Food Contact Materials Exposure Tool). The migration tests were carried out at different time-temperature conditions, and BZP was extracted from LDPE and analysed by HPLC-DAD. With all data, the parameters for migration modelling (diffusion and partition coefficients) were calculated. Results showed that the diffusion coefficients (within both the polymer and the foodstuff) are greatly affected by the temperature and food's physical state, whereas the partition coefficient was affected significantly only by food characteristics, particularly fat content.
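The migration process modelled above can be illustrated with a minimal explicit finite-difference solution of Fick's second law in the polymer film, assuming a well-mixed food that acts as a perfect sink (i.e. a partition coefficient strongly favouring the food). The diffusion coefficient, film thickness, and contact time below are illustrative stand-ins, not the paper's fitted values:

```python
def migrated_fraction(D, L, t_end, nx=20):
    """Fraction of the initial migrant that has left a film of thickness
    L (m) by Fickian diffusion (D in m^2/s) after t_end seconds, with a
    perfect-sink boundary on the food-contact side and an impermeable
    boundary on the other side. Explicit FTCS finite differences."""
    dx = L / nx
    dt = 0.4 * dx * dx / D            # satisfies stability: dt <= dx^2/(2D)
    c = [1.0] * (nx + 1)              # uniform initial concentration
    for _ in range(int(t_end / dt)):
        c_new = c[:]
        for i in range(1, nx):
            c_new[i] = c[i] + D * dt / dx ** 2 * (c[i + 1] - 2 * c[i] + c[i - 1])
        c_new[0] = c_new[1]           # no flux at the outer surface
        c_new[nx] = 0.0               # perfect sink at the food contact
        c = c_new
    return 1.0 - sum(c) / (nx + 1)    # 1 - fraction remaining in the film

# Illustrative values: D = 1e-13 m^2/s, 50 um film, ~42 minutes of contact
f = migrated_fraction(1e-13, 50e-6, 2500.0)
```

For short times this should approach the thin-film approximation m_t/m_inf ≈ (2/L)·sqrt(D·t/π); raising the temperature in the experiments above effectively raises D and thus speeds migration.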

  14. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Zhao, Yiming

    2017-01-01

Code-based cryptography is one of the few alternatives expected to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security. PMID:28809940

  15. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

Code-based cryptography is one of the few alternatives expected to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security.

  16. Technology advances needed for photovoltaics to achieve widespread grid price parity: Widespread grid price parity for photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones-Albertus, Rebecca; Feldman, David; Fu, Ran

    2016-04-20

To quantify the potential value of technological advances to the photovoltaics (PV) sector, this paper examines the impact of changes to key PV module and system parameters on the levelized cost of energy (LCOE). The parameters selected include module manufacturing cost, efficiency, degradation rate, and service lifetime. NREL's System Advisor Model (SAM) is used to calculate the lifecycle cost per kilowatt-hour (kWh) for residential, commercial, and utility scale PV systems within the contiguous United States, with a focus on utility scale. Different technological pathways are illustrated that may achieve the Department of Energy's SunShot goal of PV electricity that is at grid price parity with conventional electricity sources. In addition, the impacts on the 2015 baseline LCOE due to changes to each parameter are shown. These results may be used to identify research directions with the greatest potential to impact the cost of PV electricity.
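At its core, LCOE is discounted lifetime cost divided by discounted lifetime energy. A minimal sketch (the numbers are hypothetical, not NREL SAM inputs or outputs) shows how the degradation rate and service lifetime named above feed into it:

```python
def lcoe(capex, om_per_year, energy_year1_kwh, degradation, discount, lifetime):
    """Levelized cost of energy ($/kWh): discounted lifetime costs divided
    by discounted lifetime energy, with output degrading each year."""
    costs = capex + sum(om_per_year / (1 + discount) ** t
                        for t in range(1, lifetime + 1))
    energy = sum(energy_year1_kwh * (1 - degradation) ** (t - 1)
                 / (1 + discount) ** t
                 for t in range(1, lifetime + 1))
    return costs / energy

# Hypothetical 1 MW-class system (illustrative figures, not SAM results)
base = lcoe(capex=1_000_000, om_per_year=15_000, energy_year1_kwh=1_600_000,
            degradation=0.005, discount=0.06, lifetime=25)
longer_life = lcoe(1_000_000, 15_000, 1_600_000, 0.005, 0.06, 30)
```

Extending the lifetime or lowering the degradation rate lowers the LCOE, which is exactly the kind of parameter sweep the paper performs.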

  17. A fuzzy discrete harmony search algorithm applied to annual cost reduction in radial distribution systems

    NASA Astrophysics Data System (ADS)

    Ameli, Kazem; Alfi, Alireza; Aghaebrahimi, Mohammadreza

    2016-09-01

Like other optimization algorithms, harmony search (HS) is quite sensitive to its tuning parameters. Several variants of the HS algorithm have been developed to reduce the parameter dependency of HS. This article proposes a novel version of the discrete harmony search (DHS) algorithm, namely fuzzy discrete harmony search (FDHS), for optimizing capacitor placement in distribution systems. In the FDHS, a fuzzy system is employed to dynamically adjust two parameter values, i.e. the harmony memory considering rate and the pitch adjusting rate, with respect to the normalized mean fitness of the harmony memory. The key advantage of FDHS is that it needs substantially fewer iterations to reach convergence than classical discrete harmony search (CDHS). To the authors' knowledge, this is the first application of DHS to specifying appropriate capacitor locations and sizes in distribution systems. Simulations are provided for 10-, 34-, 85- and 141-bus distribution systems using CDHS and FDHS. The results show the effectiveness of FDHS over previous related studies.
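A minimal harmony-search sketch illustrates the idea of adapting the two parameters (harmony memory considering rate, HMCR, and pitch adjusting rate, PAR) from the normalized mean fitness of the harmony memory. The linear adaptation rules below are a crude stand-in for the paper's fuzzy system, and the test function is a toy, not a capacitor-placement problem:

```python
import random

def harmony_search(f, bounds, hms=10, iters=2000, seed=0):
    """Minimal harmony search minimizing f over box bounds. HMCR and PAR
    are re-tuned every iteration from the normalized mean fitness of the
    harmony memory (a linear stand-in for the paper's fuzzy rules)."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [f(h) for h in hm]
    for _ in range(iters):
        best, worst = min(fit), max(fit)
        nmf = (sum(fit) / hms - best) / (worst - best + 1e-12)  # in [0, 1]
        hmcr = 0.7 + 0.25 * (1.0 - nmf)  # exploit memory more as it converges
        par = 0.1 + 0.4 * nmf            # pitch-adjust more while memory is diverse
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = hm[rng.randrange(hms)][j]                   # memory consideration
                if rng.random() < par:
                    x += rng.uniform(-1, 1) * 0.05 * (hi - lo)  # pitch adjustment
            else:
                x = rng.uniform(lo, hi)                         # random selection
            new.append(min(hi, max(lo, x)))
        fn = f(new)
        w = fit.index(max(fit))
        if fn < fit[w]:                  # replace the worst harmony
            hm[w], fit[w] = new, fn
    i = fit.index(min(fit))
    return hm[i], fit[i]

sol, val = harmony_search(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

On this toy sphere function the adaptive rates start exploratory and become exploitative as the memory converges, which is the behaviour the fuzzy controller is designed to produce.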

  18. Natural abundance (25)Mg solid-state NMR of mg oxyanion systems: a combined experimental and computational study.

    PubMed

    Cahill, Lindsay S; Hanna, John V; Wong, Alan; Freitas, Jair C C; Yates, Jonathan R; Harris, Robin K; Smith, Mark E

    2009-09-28

    Solid-state (25)Mg magic angle spinning nuclear magnetic resonance (MAS NMR) data are reported from a range of organic and inorganic magnesium-oxyanion compounds at natural abundance. To constrain the determination of the NMR interaction parameters (delta(iso), chi(Q), eta(Q)) data have been collected at three external magnetic fields (11.7, 14.1 and 18.8 T). Corresponding NMR parameters have also been calculated by using density functional theory (DFT) methods using the GIPAW approach, with good correlations being established between experimental and calculated values of both chi(Q) and delta(iso). These correlations demonstrate that the (25)Mg NMR parameters are very sensitive to the structure, with small changes in the local Mg(2+) environment and the overall hydration state profoundly affecting the observed spectra. The observations suggest that (25)Mg NMR spectroscopy is a potentially potent probe for addressing some key problems in inorganic materials and of metal centres in biologically relevant molecules.

  19. Combined control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.

    1989-01-01

    An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to the original parameter lambda.

  20. Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.

    PubMed

    Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L

    2018-01-01

Biochemical methane potential (BMP) testing is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, this technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be put on hold for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of the BMP first-order model parameters, i.e. the methane yield (B0) and the kinetic rate constant (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d⁻¹) have minimum testing times of ≥15 days, (ii) moderately biodegradable substrates (0.1
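The first-order model referred to above is B(t) = B0·(1 − exp(−k·t)). A minimal sketch of estimating (B0, k) from early-time data follows; the grid-search least-squares fit and the synthetic data are illustrative stand-ins for the paper's statistical estimation procedure:

```python
import math

def bmp_model(t, B0, k):
    """First-order BMP model: cumulative methane yield at time t (days)."""
    return B0 * (1.0 - math.exp(-k * t))

def fit_first_order(times, yields):
    """Least-squares grid search for (B0, k); a simple stand-in for the
    paper's statistical estimation procedure."""
    mx = max(yields)
    B_grid = [mx * (0.9 + i / 50.0) for i in range(56)]  # up to 2x observed max
    k_grid = [i / 500.0 for i in range(5, 251)]          # 0.01-0.5 d^-1
    best = None
    for B0 in B_grid:
        for k in k_grid:
            sse = sum((bmp_model(t, B0, k) - y) ** 2
                      for t, y in zip(times, yields))
            if best is None or sse < best[0]:
                best = (sse, B0, k)
    return best[1], best[2]

# Synthetic 15-day data from a slowly biodegradable substrate
# (true B0 = 300 mL CH4/g VS, k = 0.08 d^-1; values are illustrative)
times = list(range(16))
data = [bmp_model(t, 300.0, 0.08) for t in times]
B0_est, k_est = fit_first_order(times, data)
```

Note how B0 must be extrapolated well above the largest observed yield for a slow substrate, which is why early estimation of slow substrates still needs two weeks or more of data.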

  1. Assessing value of innovative molecular diagnostic tests in the concept of predictive, preventive, and personalized medicine.

    PubMed

    Akhmetov, Ildar; Bubnov, Rostyslav V

    2015-01-01

Molecular diagnostic tests drive the scientific and technological uplift in the field of predictive, preventive, and personalized medicine, offering invaluable clinical and socioeconomic benefits to the key stakeholders. Although the results of diagnostic tests are immensely influential, molecular diagnostic tests (MDx) are still grudgingly reimbursed by payers and account for less than 5% of overall healthcare costs. This paper aims at defining the value of a molecular diagnostic test and outlining the most important components of "value" from miscellaneous assessment frameworks, which go beyond accuracy and feasibility and impact clinical adoption, informing healthcare resource allocation decisions. The authors suggest that the industry should facilitate discussions with various stakeholders throughout the entire assessment process in order to arrive at a consensus about the depth of evidence required for positive marketing authorization or reimbursement decisions. In light of the evolving "value-based healthcare" delivery practices, it is also recommended to account for social and ethical parameters of value, since these are anticipated to become as critical for reimbursement decisions and test acceptance as economic and clinical criteria.

  2. Identification of linkages between potential Environmental and Social Impacts of Surface Mining and Ecosystem Services in Thar Coal field, Pakistan

    NASA Astrophysics Data System (ADS)

    Hina, A.

    2017-12-01

Although Thar coal is recognized as one of the most abundant fossil fuel resources that could help combat the energy crisis of Pakistan, it remains a challenge to tackle the associated environmental and socio-ecological changes and their linkage to the provision of ecosystem services in the region. The study highlights the importance of undertaking ecosystem service assessment in all strategic Environmental and Social Assessments of Thar coal field projects. A three-step approach has been formulated to link project impacts to the provision of important ecosystem services: 1) identification of impact indicators and parameters by analyzing the environmental and social impacts of surface mining in the Thar coal field through field investigation, literature review and stakeholder consultations; 2) ranking of parameters and criteria alternatives using a Multi-Criteria Decision Analysis (MCDA) tool, the AHP method; 3) using the ranked parameters as a proxy to prioritize important ecosystem services of the region. The ecosystem services prioritized because of both high significance of project impact and high project dependence are the following. Water is a key ecosystem service to be addressed and valued, given the area's high dependence on it for livestock, human wellbeing, agriculture and other purposes. Crop production related to agricultural services needs to be valued, in association with supporting services such as soil quality, fertility, nutrient recycling and water retention. Cultural services affected by land-use change and by resettlement and rehabilitation are recommended to be addressed. The results of the analysis outline a framework for identifying these linkages as key constraints to foster the emergence of green growth and development in Pakistan.
The practicality of implementing these assessments requires policy instruments and strategies that support human well-being and social inclusion while minimizing environmental degradation and loss of ecosystem services. Keywords: ecosystem service assessment; Environmental and Social Impact Assessment; coal mining; Thar coal field; sustainable development
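Step 2 of the approach ranks parameters with AHP. As a minimal illustration, priority weights can be approximated from a pairwise-comparison matrix by the geometric-mean method; the 3×3 matrix below is hypothetical, not the study's elicited judgments:

```python
import math

def ahp_priorities(matrix):
    """Priority weights from a pairwise-comparison matrix via the
    geometric-mean (approximate principal eigenvector) method."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    s = sum(gm)
    return [g / s for g in gm]

# Hypothetical Saaty-scale comparison of three criteria
# (water vs. crop production vs. cultural services); matrix[i][j] says
# how much more important criterion i is than criterion j.
M = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]
weights = ahp_priorities(M)   # water receives the largest weight
```

The weights sum to one and preserve the elicited ordering, mirroring the study's finding that water dominates the prioritized services.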

  3. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie

    2014-12-01

To address the multicollinearity issue and the unequal contribution of vascular parameters to the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground-truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junctions, connectivity density, segment number and vascular length, which indicates that these are the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both ABTs applied directly to the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to ABTs applied directly to the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.

  4. Core Problem: Does the CV Parent Body Magnetization require differentiation?

    NASA Astrophysics Data System (ADS)

    O'Brien, T.; Tarduno, J. A.; Smirnov, A. V.

    2016-12-01

    Evidence for the presence of past dynamos from magnetic studies of meteorites can provide key information on the nature and evolution of parent bodies. However, the suggestion of a past core dynamo for the CV parent body based on the study of the Allende meteorite has led to a paradox: a core dynamo requires differentiation, evidence for which is missing in the meteorite record. The key parameter used to distinguish core dynamo versus external field mechanisms is absolute field paleointensity, with high values (>>1 μT) favoring the former. Here we explore the fundamental requirements for absolute field intensity measurement in the Allende meteorite: single domain grains that are non-interacting. Magnetic hysteresis and directional data define strong magnetic interactions, negating a standard interpretation of paleointensity measurements in terms of absolute paleofield values. The Allende low field magnetic susceptibility is dominated by magnetite and FeNi grains, whereas the magnetic remanence is carried by an iron sulfide whose remanence-carrying capacity increases with laboratory cycling at constant field values, indicating reordering. The iron sulfide and FeNi grains are in close proximity, providing mineralogical context for interactions. We interpret the magnetization of Allende to record the intense early solar wind with metal-sulfide interactions amplifying the field, giving the false impression of a higher field value in some prior studies. An undifferentiated CV parent body is thus compatible with Allende's magnetization. Early solar wind magnetization should be the null hypothesis for evaluating the source of magnetization for chondrites and other meteorites.

  5. Surface micromachined MEMS deformable mirror based on hexagonal parallel-plate electrostatic actuator

    NASA Astrophysics Data System (ADS)

    Ma, Wenying; Ma, Changwei; Wang, Weimin

    2018-03-01

Deformable mirrors (DM) based on microelectromechanical system (MEMS) technology are increasingly being applied in adaptive optics (AO) systems for astronomical telescopes and for the human eye. In this paper a MEMS DM with hexagonal actuators is proposed and designed. The relationship between the structural design and the performance parameters, mainly actuator coupling, is carefully analyzed and calculated, and the optimum value of actuator coupling is obtained. A 7-element DM prototype is fabricated using a commercially available standard three-layer polysilicon surface micromachining process (PolyMUMPs). Key performance characteristics, including surface figure and the voltage-displacement curve, are measured with a 3D white-light profiler. The measured performance is very consistent with the theoretical values. The proposed DM will benefit the miniaturization of AO systems and lower their cost.

  6. Monte Carlo simulation: Its status and future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murtha, J.A.

    1997-04-01

Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
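A minimal Monte Carlo NPV sketch of the kind described: sample uncertain inputs, compute NPV for each trial, and read probability-vs-value relationships off the sorted samples. All distributions and figures below are illustrative, not field data:

```python
import random

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year-0 flow first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_npv(n=20000, seed=42):
    """Monte Carlo over uncertain inputs; returns sorted NPV samples from
    which probability-vs-value curves (percentiles) are read off.
    All distributions below are illustrative, not field data."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        capex = rng.triangular(80.0, 140.0, 100.0)   # $MM, mode at 100
        reserves = rng.lognormvariate(2.3, 0.4)      # MMbbl
        price = rng.gauss(60.0, 8.0)                 # $/bbl
        yearly_revenue = reserves * price / 10.0     # produced over 10 years
        samples.append(npv([-capex] + [yearly_revenue] * 10, 0.10))
    samples.sort()
    return samples

s = simulate_npv()
p10, p50, p90 = s[len(s) // 10], s[len(s) // 2], s[9 * len(s) // 10]
```

The P10/P50/P90 values extracted from the sorted samples are the standard way the oil-and-gas industry reports the probability-vs-value relationships mentioned above.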

  7. Stream temperature and stage monitoring using fisherman looking for fish.

    NASA Astrophysics Data System (ADS)

    Hut, Rolf; Tyler, Scott

    2015-04-01

Fly fishing is a popular pastime in large parts of the world. Two key quantities that fly fishermen need to know to find the ideal fishing spot are water depth and water temperature. These are also two parameters of interest to hydrologists, especially those studying the hyporheic zone. We present a device that serves both fishermen and hydrologists: sensor-waders. A classic pair of waders is equipped with temperature and water-height sensors. Measurement values are communicated to an app on the fisherman's smartphone. This app provides the fisherman with real-time information on local conditions. Using the geolocation of the smartphone, the measurement values are also sent to a remote server for use in hydrological research. We will present a first proof of concept of the sensor-waders.

  8. Reproducibility of geochemical and climatic signals in the Atlantic coral Montastraea faveolata

    USGS Publications Warehouse

    Smith, Joseph M.; Quinn, T.M.; Helmle, K.P.; Halley, R.B.

    2006-01-01

Monthly resolved, 41-year-long stable isotopic and elemental ratio time series were generated from two separate heads of Montastraea faveolata from Looe Key, Florida, to assess the fidelity of using geochemical variations in Montastraea, the dominant reef-building coral of the Atlantic, to reconstruct sea surface environmental conditions at this site. The stable isotope time series of the two corals replicate well; mean values of δ18O and δ13C are indistinguishable between cores (compare 0.70‰ versus 0.68‰ for δ13C and -3.90‰ versus -3.94‰ for δ18O). Mean values from the Sr/Ca time series differ by 0.037 mmol/mol, which is outside of analytical error and indicates that nonenvironmental factors are influencing the coral Sr/Ca records at Looe Key. We have generated significant δ18O-sea surface temperature (SST) (R = -0.84) and Sr/Ca-SST (R = -0.86) calibration equations at Looe Key; however, these equations are different from previously published equations for Montastraea. Variations in growth parameters or kinetic effects are not sufficient to explain either the observed differences in the mean offset between Sr/Ca time series or the disagreement between previous calibrations and our calculated δ18O-SST and Sr/Ca-SST relationships. Calibration differences are most likely due to variations in seawater chemistry in the continentally influenced waters at Looe Key. Additional geochemical replication studies of Montastraea are needed and should include multiple coral heads from open ocean localities complemented whenever possible by seawater chemistry determinations. Copyright 2006 by the American Geophysical Union.

  9. Theoretical Advances in Sequential Data Assimilation for the Atmosphere and Oceans

    NASA Astrophysics Data System (ADS)

    Ghil, M.

    2007-05-01

We concentrate here on two aspects of advanced Kalman-filter-related methods: (i) the stability of the forecast-assimilation cycle, and (ii) parameter estimation for the coupled ocean-atmosphere system. The nonlinear stability of a prediction-assimilation system guarantees the uniqueness of the sequentially estimated solutions in the presence of partial and inaccurate observations, distributed in space and time; this stability is shown to be a necessary condition for the convergence of the state estimates to the true evolution of the turbulent flow. The stability properties of the governing nonlinear equations and of several data assimilation systems are studied by computing the spectrum of the associated Lyapunov exponents. These ideas are applied to a simple and an intermediate model of atmospheric variability and we show that the degree of stabilization depends on the type and distribution of the observations, as well as on the data assimilation method. These results represent joint work with A. Carrassi, A. Trevisan and F. Uboldi. Much is known by now about the main physical mechanisms that give rise to and modulate the El Niño/Southern Oscillation (ENSO), but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. Model behavior is very sensitive to two key parameters: (a) "mu", the ocean-atmosphere coupling coefficient between the sea-surface temperature (SST) and wind stress anomalies; and (b) "delta-s", the surface-layer coefficient. Previous work has shown that "delta-s" determines the period of the model's self-sustained oscillation, while "mu" measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward-propagating mode.
Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed. These results arise from joint work with D. Kondrashov and C.-j. Sun.
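Joint state-and-parameter estimation of the kind described is commonly done by augmenting the state vector with the unknown parameters and running the EKF on the augmented system. The toy scalar model below (estimating a single coefficient "mu" of a noisy AR(1) process) is a sketch of the technique, not the coupled ENSO model:

```python
import random

def ekf_parameter_estimation(ys, q_x=0.01, q_mu=1e-4, r=0.04):
    """Joint state-parameter EKF for the scalar model x[k+1] = mu*x[k] + w,
    observing y[k] = x[k] + v. The state is augmented to (x, mu); a toy
    stand-in for estimating a coupling coefficient like "mu" in the text."""
    x, mu = 0.0, 0.5                     # initial guesses
    P = [[1.0, 0.0], [0.0, 1.0]]         # initial covariance
    for y in ys:
        # --- predict: f(x, mu) = (mu*x, mu), Jacobian F = [[mu, x], [0, 1]]
        x_p, mu_p = mu * x, mu
        F = [[mu, x], [0.0, 1.0]]
        FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)]
              for i in range(2)]
        P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] for j in range(2)]
             for i in range(2)]          # P = F P F^T
        P[0][0] += q_x                   # process noise on the state
        P[1][1] += q_mu                  # small random walk keeps mu adaptive
        # --- update with observation matrix H = [1, 0]
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]   # Kalman gain
        innov = y - x_p
        x, mu = x_p + K[0] * innov, mu_p + K[1] * innov
        P = [[(1 - K[0]) * P[0][j] for j in range(2)],
             [P[1][j] - K[1] * P[0][j] for j in range(2)]]  # (I - K H) P
    return x, mu

# Synthetic observations from a system with true mu = 0.9
rng = random.Random(0)
x_true, ys = 1.0, []
for _ in range(500):
    x_true = 0.9 * x_true + rng.gauss(0, 0.1)
    ys.append(x_true + rng.gauss(0, 0.2))
mu_hat = ekf_parameter_estimation(ys)[1]
```

The small process noise on the parameter (q_mu) is what lets the estimate keep moving, mirroring the text's observation that the parameters can vary on short time scales and adjust rapidly during strong events.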

  10. Complex Conjugated certificateless-based signcryption with differential integrated factor for secured message communication in mobile network

    PubMed Central

    Rajagopalan, S. P.

    2017-01-01

Certificateless-based signcryption overcomes inherent shortcomings of traditional Public Key Infrastructure (PKI) and the key escrow problem. It imparts efficient methods to design PKIs with public verifiability and ciphertext authenticity with minimum dependency. As a classic primitive in public key cryptography, signcryption checks the validity of ciphertext without decryption by combining authentication, confidentiality, public verifiability and ciphertext authenticity much more efficiently than the traditional approach. In this paper, we first define a security model for certificateless-based signcryption called the Complex Conjugate Differential Integrated Factor (CC-DIF) scheme, which introduces complex conjugates through the security parameter and improves the secured message distribution rate. However, both the partial private key and the secret value change with respect to time. To overcome this weakness, a new certificateless-based signcryption scheme is proposed that sets the private key through a Differential (Diff) Equation using an Integration Factor (DiffEIF), minimizing computational cost and communication overhead. The scheme is thereby proven secure (i.e., it improves the secured message distribution rate) against attacks on certificateless access control and signcryption. In addition, compared with three other existing schemes, the CC-DIF scheme has the least computational cost and communication overhead for secured message communication in mobile networks. PMID:29040290

  11. Complex Conjugated certificateless-based signcryption with differential integrated factor for secured message communication in mobile network.

    PubMed

    Alagarsamy, Sumithra; Rajagopalan, S P

    2017-01-01

Certificateless-based signcryption overcomes inherent shortcomings of traditional Public Key Infrastructure (PKI) and the key escrow problem. It imparts efficient methods to design PKIs with public verifiability and ciphertext authenticity with minimum dependency. As a classic primitive in public key cryptography, signcryption checks the validity of ciphertext without decryption by combining authentication, confidentiality, public verifiability and ciphertext authenticity much more efficiently than the traditional approach. In this paper, we first define a security model for certificateless-based signcryption called the Complex Conjugate Differential Integrated Factor (CC-DIF) scheme, which introduces complex conjugates through the security parameter and improves the secured message distribution rate. However, both the partial private key and the secret value change with respect to time. To overcome this weakness, a new certificateless-based signcryption scheme is proposed that sets the private key through a Differential (Diff) Equation using an Integration Factor (DiffEIF), minimizing computational cost and communication overhead. The scheme is thereby proven secure (i.e., it improves the secured message distribution rate) against attacks on certificateless access control and signcryption. In addition, compared with three other existing schemes, the CC-DIF scheme has the least computational cost and communication overhead for secured message communication in mobile networks.

  12. Review on the Celestial Sphere Positioning of FITS Format Image Based on WCS and Research on General Visualization

    NASA Astrophysics Data System (ADS)

    Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.

    2017-11-01

    Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system, so establishing a general procedure for calculating these parameters is of great significance. By combining the CCD-related parameters of the astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), a star pattern recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they are used to build a reference catalogue for the celestial region covered by the image; star pattern recognition matches the astronomical image against the reference catalogue, yielding a table that pairs the CCD plane coordinates of a number of stars with their celestial coordinates; for a given projection of the sphere onto the plane, WCS then builds the transfer functions between these two coordinate systems, so the astronomical position of any image pixel can be determined from the table obtained before. Although FITS is the mainstream format for transmitting and analyzing scientific data, FITS images can only be viewed, edited, and analyzed in professional astronomy software, which limits their use in popular astronomy education; the realization of a general image visualization method is therefore significant. The FITS file is first converted to a PNG or JPEG image; the coordinate parameters in the FITS header are converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata is then added to the PNG or JPEG header. This method meets amateur astronomers' general need to view and analyze astronomical images outside professional astronomy platforms. The overall design flow is implemented in Java and was tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
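    The pixel-to-sky step described above can be sketched for the most common WCS case, the gnomonic (TAN) projection. This is a minimal illustration, not the paper's Java implementation; the reference-pixel (CRPIX), reference-value (CRVAL), and CD-matrix numbers below are hypothetical.

```python
import math

def pixel_to_sky_tan(x, y, crpix, crval, cd):
    """Convert a pixel position to (RA, Dec) in degrees using a
    gnomonic (TAN) projection, the most common FITS WCS case."""
    # Intermediate world coordinates via the CD matrix (degrees -> radians).
    dx, dy = x - crpix[0], y - crpix[1]
    xi = math.radians(cd[0][0] * dx + cd[0][1] * dy)
    eta = math.radians(cd[1][0] * dx + cd[1][1] * dy)
    ra0, dec0 = math.radians(crval[0]), math.radians(crval[1])
    # De-project from the tangent plane back onto the celestial sphere.
    d = math.cos(dec0) - eta * math.sin(dec0)
    ra = ra0 + math.atan2(xi, d)
    dec = math.atan2(math.sin(dec0) + eta * math.cos(dec0),
                     math.hypot(xi, d))
    return math.degrees(ra) % 360.0, math.degrees(dec)

CRPIX, CRVAL = (512.0, 512.0), (180.0, 30.0)      # hypothetical header values
CD = [[-0.0003, 0.0], [0.0, 0.0003]]              # ~1.08 arcsec/pixel scale

ra, dec = pixel_to_sky_tan(512.0, 512.0, CRPIX, CRVAL, CD)   # reference pixel
ra2, dec2 = pixel_to_sky_tan(612.0, 512.0, CRPIX, CRVAL, CD)  # 100 px offset
```

    At the reference pixel the result reduces exactly to CRVAL1/CRVAL2, which is a quick sanity check for any WCS implementation.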

  13. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This 4-year, 5.7-million-dollar research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant programme of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values, including the statistical correlation between parameters typically employed in standard Michaelis-Menten-type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. 
Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination, relevant to a range of system conditions (e.g., bioaugmented, high TCE concentrations, etc.). The significance of the obtained parameter variability is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
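    The Michaelis-Menten-type rate descriptions with competitive inhibition mentioned above can be sketched as a sequential dechlorination chain, TCE -> cis-DCE -> VC -> ethene. This is a generic illustration of the rate law, not the SABRE biogeochemical model; all rate constants and inhibition coefficients are hypothetical.

```python
def mm_rate(C, vmax, Ks, inhibitors=()):
    """Michaelis-Menten rate with competitive inhibition:
    r = vmax*C / (Ks*(1 + sum(Ci/Ki)) + C)."""
    inhib = sum(Ci / Ki for Ci, Ki in inhibitors)
    return vmax * C / (Ks * (1.0 + inhib) + C)

def simulate(C0, p, dt=0.01, steps=10000):
    """Explicit-Euler integration of the TCE -> cDCE -> VC -> ethene chain,
    with TCE and cDCE competitively inhibiting the later steps."""
    tce, dce, vc, eth = C0
    for _ in range(steps):
        r1 = mm_rate(tce, *p["TCE"])
        r2 = mm_rate(dce, *p["DCE"], inhibitors=[(tce, p["KI_TCE"])])
        r3 = mm_rate(vc, *p["VC"],
                     inhibitors=[(tce, p["KI_TCE"]), (dce, p["KI_DCE"])])
        tce += -dt * r1
        dce += dt * (r1 - r2)
        vc += dt * (r2 - r3)
        eth += dt * r3
    return (tce, dce, vc, eth)

# Hypothetical (vmax, Ks) pairs and inhibition constants Ki.
params = {"TCE": (2.0, 0.5), "DCE": (1.0, 0.5), "VC": (0.8, 0.5),
          "KI_TCE": 0.5, "KI_DCE": 0.7}
final = simulate((1.0, 0.0, 0.0, 0.0), params)  # start with TCE only
```

    Total chlorinated-ethene mass is conserved by construction, and by the end of the run essentially all TCE has been converted to ethene.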

  14. New Quality Standards of Testing Idlers for Highly Effective Belt Conveyors

    NASA Astrophysics Data System (ADS)

    Król, Robert; Gladysiewicz, Lech; Kaszuba, Damian; Kisielewski, Waldemar

    2017-12-01

    The paper presents the results of research and analyses of belt conveyor idlers' rotational resistance, which is one of the key factors indicating idler quality and an important contributor to the total resistance to motion of a belt conveyor. The technical condition of belt conveyor idlers is evaluated in accordance with current national and international standards, which define the measurement methodology and the acceptable values of measured idler parameters. Despite the development of knowledge about idlers and the quality of presently manufactured idlers, the requirements defined by these standards, which determine the suitability of idlers for a specific application, have kept the same parameter values over long periods of time. Nowadays, the need to implement new, efficient and economically justified solutions for belt conveyor transportation systems characterized by long routes and energy efficiency is often discussed as one of the goals for belt conveyors' future. One of the basic conditions for achieving this goal is to use only carefully selected idlers with low rotational resistance under the full range of operational loads and with high durability. It is therefore necessary to develop new guidelines for evaluating the technical condition of belt conveyor idlers, to perfect existing idler testing methods, and to develop new ones. In particular, the values of the parameters used to evaluate the technical condition of belt conveyor idlers should be updated to reflect belt conveyors' operational challenges and the growing demands on their energy efficiency.

  15. Improving the representation of Arctic photosynthesis in Earth System Models

    NASA Astrophysics Data System (ADS)

    Rogers, A.; Serbin, S.; Sloan, V. L.; Norby, R. J.; Wullschleger, S. D.

    2014-12-01

    The primary goal of Earth System Models (ESMs) is to improve understanding and projection of future global change. To do this, models must accurately represent the terrestrial carbon cycle. Although Arctic carbon fluxes are small relative to global carbon fluxes, their uncertainty is large. Photosynthetic CO2 uptake is well described by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis, and most ESMs use a derivation of the FvCB model to calculate gross primary productivity. Two key parameters required by the FvCB model are the maximum rate of carboxylation by the enzyme Rubisco (Vc,max) and the maximum rate of electron transport (Jmax). In ESMs the parameter Vc,max is typically fixed for a given plant functional type (PFT). Only four ESMs currently have an explicit Arctic PFT, and the derivation of Vc,max in these models relies on small data sets and unjustified assumptions. We examined the derivation of Vc,max and Jmax in current Arctic PFTs and estimated Vc,max and Jmax for a range of Arctic PFTs growing on the Barrow Environmental Observatory, Barrow, AK. We found that the values of Vc,max currently used to represent Arctic plants in ESMs are 70% lower than the values we measured, and contemporary temperature response functions for Vc,max also appear to underestimate Vc,max at low temperature. ESMs typically use a single multiplier (the JV ratio) to convert Vc,max to Jmax; however, we found that the JV ratio of Arctic plants is higher than current estimates, suggesting that Arctic PFTs will be more responsive to rising carbon dioxide than currently projected. In addition, we are exploring remotely sensed methods to scale up key biochemical (e.g. leaf N, leaf mass per area) and physiological (e.g. Vc,max and Jmax) properties that drive model representation of photosynthesis in the Arctic. 
Our data suggest that the Arctic tundra has a much greater capacity for CO2 uptake, particularly at low temperature, and will be more CO2 responsive than is currently represented in ESMs. As we build robust relationships between physiology and spectral signatures, we hope to provide spatially and temporally resolved trait maps of key model parameters that can be ingested by new model frameworks, or used to validate emergent model properties.
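    A minimal sketch of the FvCB-style calculation discussed above, showing how Vc,max, a single JV ratio, and an Arrhenius-type temperature response combine into gross assimilation. The kinetic constants and activation energy are illustrative textbook-scale values (in umol/mol at 25 °C), not the authors' parameterization, and only Vc,max is temperature-scaled here.

```python
import math

def fvcb_assimilation(Ci, Tleaf, vcmax25, jv_ratio=1.97):
    """Gross assimilation from a minimal FvCB model: A = min(Wc, Wj),
    the lesser of the Rubisco-limited and electron-transport-limited rates."""
    R = 8.314
    Tk, Tref = Tleaf + 273.15, 298.15
    # Arrhenius scaling of Vc,max (illustrative activation energy, J/mol).
    vcmax = vcmax25 * math.exp(65330.0 * (Tk - Tref) / (R * Tk * Tref))
    jmax = jv_ratio * vcmax                # single JV ratio, as in many ESMs
    gamma_star, Kc, Ko, O = 42.75, 404.9, 278400.0, 210000.0
    Wc = vcmax * (Ci - gamma_star) / (Ci + Kc * (1.0 + O / Ko))
    Wj = (jmax / 4.0) * (Ci - gamma_star) / (Ci + 2.0 * gamma_star)
    return min(Wc, Wj)

# A 70% higher Vc,max25 raises modeled assimilation at low temperature.
a_low = fvcb_assimilation(Ci=270.0, Tleaf=5.0, vcmax25=30.0)
a_high = fvcb_assimilation(Ci=270.0, Tleaf=5.0, vcmax25=51.0)
```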

  16. Synthesis of NH4-Substituted Muscovite at 6.3 GPa and 1000°C: Implications for Nitrogen Transport to the Earth's Mantle

    NASA Astrophysics Data System (ADS)

    Sokol, A. G.; Sokol, E. V.; Kupriyanov, I. N.; Sobolev, N. V.

    2018-03-01

    The synthesis of NH4-bearing muscovite at P = 6.3 GPa and T = 1000°C in equilibrium with an NH3-H2O fluid was performed. The newly formed muscovite is enriched in the celadonite minal and contains 370 ppm of NH4. The obtained data make it possible to conclude that ammonium-bearing micas have sufficient thermal stability and can transport crustal nitrogen to the mantle in the presence of a reduced water-ammonia fluid at fO2 below IW + 2 log units, even in the regime of "hot" subduction. The key parameter that determines the efficiency of this mechanism for the deep nitrogen cycle is the redox stability of NH4-bearing muscovite at mantle P-T parameters.

  17. Finding Top-k Unexplained Activities in Video

    DTIC Science & Technology

    2012-03-09

    We analysed how the parameters that define a UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default value (runtime of Top-k TUA). Table 1 reports the values considered for each parameter along with the corresponding default value:

        Parameter   Values                   Default value
        k           1, 2, 5, All             All
        τ           0.4, 0.6, 0.8            0.6
        L           160, 200, 240, 280       200
        # worlds    7E+04, 4E+05, 2E+07      2E+07

    TABLE 1: Parameter values used in

  18. Replacing Fortran Namelists with JSON

    NASA Astrophysics Data System (ADS)

    Robinson, T. E., Jr.

    2017-12-01

    Maintaining a log of input parameters for a climate model is very important for understanding potential causes of answer changes during development. Additionally, since modern Fortran is now interoperable with C, a more modern software infrastructure that includes code written in C is possible. Merging these two facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that works with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C to the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial for simple and compact code that avoids the need for many subroutines contained in an interface. Errors can be handled in more detail by providing information about the location of syntax errors or typos. The output JSON provides a ground truth for the values the model actually uses, containing not only the values loaded through the input JSON but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
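    The merge-and-log behavior described above can be sketched as follows. This is a hypothetical illustration in Python (the GFDL parser described here is written in C), and the parameter names and defaults are invented.

```python
import json

DEFAULTS = {"dt_atmos": 1800, "do_radiation": True, "n_tracers": 4}

def load_model_input(json_text, defaults=DEFAULTS):
    """Merge user-supplied JSON key/value input with model defaults and
    return the merged dict plus a 'ground truth' JSON record of every
    value the model will actually use (defaults included)."""
    try:
        user = json.loads(json_text)
    except json.JSONDecodeError as err:
        # Unlike a typical namelist parser, the model itself can report
        # the exact location of a syntax error.
        raise ValueError(f"input error at line {err.lineno}, "
                         f"column {err.colno}: {err.msg}") from err
    unknown = set(user) - set(defaults)
    if unknown:
        raise ValueError(f"unknown parameters (possible typos): {sorted(unknown)}")
    merged = {**defaults, **user}
    return merged, json.dumps(merged, indent=2, sort_keys=True)

cfg, log = load_model_input('{"dt_atmos": 900}')
```

    The returned `log` string is the single parsable output file: it records the overridden `dt_atmos` alongside the defaults that were silently in effect.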

  19. Determination of representative dimension parameter values of Korean knee joints for knee joint implant design.

    PubMed

    Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu

    2012-05-01

    Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in the knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured values of each dimension parameter were grouped into suitable intervals called "size groups," and the average value of each size group was calculated. The knee joint subjects were then grouped into "patient groups" based on the size group numbers of each parameter. Through iterative calculations that decrease the errors between the average dimension parameter values of each patient group and the dimension parameter values of the subjects, the average dimension parameter values giving less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.
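    The iterative size-group averaging can be sketched as a simple one-dimensional clustering loop. This is an illustrative reconstruction of the general idea, not the authors' exact procedure; the error criterion here is taken to be the worst subject-to-representative distance, and the measurements are invented.

```python
def representative_values(measurements, n_groups, max_iter=100):
    """Iteratively choose representative values for one dimension
    parameter: split the sorted measurements into n_groups 'size
    groups', average each group, reassign each subject to the nearest
    average, and repeat until the averages stop changing (a 1-D
    k-means-style sketch)."""
    data = sorted(measurements)
    size = max(1, len(data) // n_groups)
    # Initial grouping: equal-count intervals of the sorted data.
    reps = [sum(g) / len(g) for g in
            (data[i:i + size] for i in range(0, len(data), size))][:n_groups]
    for _ in range(max_iter):
        groups = [[] for _ in reps]
        for x in data:
            j = min(range(len(reps)), key=lambda k: abs(x - reps[k]))
            groups[j].append(x)
        new = [sum(g) / len(g) if g else reps[j]
               for j, g in enumerate(groups)]
        if new == reps:
            break
        reps = new
    worst = max(min(abs(x - r) for r in reps) for x in data)
    return reps, worst

# Invented measurements (mm) falling into three obvious size clusters.
reps, worst = representative_values([60, 61, 62, 70, 71, 72, 80, 81, 82], 3)
```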

  20. Mapping Surface Cover Parameters Using Aggregation Rules and Remotely Sensed Cover Classes. Version 1.9

    NASA Technical Reports Server (NTRS)

    Arain, Altaf M.; Shuttleworth, W. James; Yang, Z-Liang; Michaud, Jene; Dolman, Johannes

    1997-01-01

    A coupled model, which combines the Biosphere-Atmosphere Transfer Scheme (BATS) with an advanced atmospheric boundary-layer model, was used to validate hypothetical aggregation rules for BATS-specific surface cover parameters. The model was initialized and tested with observations from the Anglo-Brazilian Amazonian Climate Observational Study and used to simulate surface fluxes for rain forest and pasture mixes at a site near Manaus in Brazil. The aggregation rules are shown to estimate parameters which give area-average surface fluxes similar to those calculated with explicit representation of forest and pasture patches for a range of meteorological and surface conditions relevant to this site, but the agreement deteriorates somewhat when there are large patch-to-patch differences in soil moisture. The aggregation rules, validated as above, were then applied to a remotely sensed 1-km land cover data set to obtain grid-average values of BATS vegetation parameters for 2.8 deg x 2.8 deg and 1 deg x 1 deg grids within the conterminous United States. There are significant differences in key vegetation parameters (aerodynamic roughness length, albedo, leaf area index, and stomatal resistance) when aggregate parameters are compared to parameters for the single, dominant cover within the grid. However, the surface energy fluxes calculated by stand-alone BATS with 2-year forcing data from the International Satellite Land Surface Climatology Project (ISLSCP) CD-ROM were reasonably similar using aggregate-vegetation parameters and dominant-cover parameters, though there were some significant differences, particularly in the western USA.
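    Aggregation rules of this kind are commonly implemented as area-weighted averages, with a logarithmic average for roughness length and a parallel-resistance average for stomatal resistance. The sketch below uses these common rules with hypothetical forest/pasture values; the paper's exact BATS rules may differ.

```python
import math

def aggregate_parameters(patches):
    """Area-weighted aggregation of surface-cover parameters over a grid
    cell. patches: list of (area_fraction, params) where params holds
    albedo, lai, z0 (roughness length, m) and rs (stomatal resistance, s/m)."""
    assert abs(sum(f for f, _ in patches) - 1.0) < 1e-9
    albedo = sum(f * p["albedo"] for f, p in patches)               # linear
    lai = sum(f * p["lai"] for f, p in patches)                     # linear
    z0 = math.exp(sum(f * math.log(p["z0"]) for f, p in patches))   # log-average
    rs = 1.0 / sum(f / p["rs"] for f, p in patches)                 # parallel
    return {"albedo": albedo, "lai": lai, "z0": z0, "rs": rs}

# Hypothetical patch parameters for a 70% forest / 30% pasture cell.
forest = {"albedo": 0.13, "lai": 5.0, "z0": 2.0, "rs": 100.0}
pasture = {"albedo": 0.20, "lai": 2.0, "z0": 0.05, "rs": 60.0}
mixed = aggregate_parameters([(0.7, forest), (0.3, pasture)])
```

    Note that the aggregate roughness length and stomatal resistance both fall between the patch values but not at the linear average, which is exactly why aggregate and dominant-cover parameters can differ significantly.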

  1. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually, documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determining statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of the steady-state drag area (CDS) and parachute inflation parameter distributions, such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)). Built-in MATLAB distributions were fitted to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log-normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. 
This paper discusses the uniform methodology that was previously used, the process and results of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
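    The fit-and-compare step (done with built-in MATLAB distributions in the paper) can be illustrated with a pure-Python maximum-likelihood fit of normal versus log-normal candidates; the synthetic "fill distance" sample below is invented.

```python
import math, random

def fit_normal(xs):
    """MLE fit of a normal distribution; returns (mu, sigma, log-likelihood)."""
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    ll = sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
             - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)
    return mu, sigma, ll

def fit_lognormal(xs):
    """MLE fit of a log-normal is a normal fit in log space; the
    log-density picks up a -log(x) Jacobian term."""
    mu, sigma, _ = fit_normal([math.log(x) for x in xs])
    ll = sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
             - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
             - math.log(x) for x in xs)
    return mu, sigma, ll

random.seed(1)
# Synthetic positively-skewed "canopy fill distance" sample (invented).
fills = [math.exp(random.gauss(1.5, 0.4)) for _ in range(500)]
_, _, ll_n = fit_normal(fills)
_, _, ll_ln = fit_lognormal(fills)
best = "lognormal" if ll_ln > ll_n else "normal"
```

    Comparing log-likelihoods (or, with engineering judgment, the fitted curves against the histogram) selects the candidate family, here correctly picking the log-normal for skewed data.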

  2. Section modulus is the optimum geometric predictor for stress fractures and medial tibial stress syndrome in both male and female athletes.

    PubMed

    Franklyn, Melanie; Oakes, Barry; Field, Bruce; Wells, Peter; Morgan, David

    2008-06-01

    Various tibial dimensions and geometric parameters have been linked to stress fractures in athletes and military recruits, but many mechanical parameters have still not been investigated. Sedentary people, athletes with medial tibial stress syndrome, and athletes with stress fractures have smaller tibial geometric dimensions and parameters than do uninjured athletes. Study design: cohort study; level of evidence, 3. Using a total of 88 subjects, male and female patients with either a tibial stress fracture or medial tibial stress syndrome were compared with both uninjured aerobically active controls and uninjured sedentary controls. Tibial scout radiographs and cross-sectional computed tomography images of all subjects were scanned at the junction of the midthird and distal third of the tibia. Tibial dimensions were measured directly from the films; other parameters were calculated numerically. Uninjured exercising men have a greater tibial cortical cross-sectional area than do their sedentary and injured counterparts, resulting in greater values of some other cross-sectional geometric parameters, particularly the section modulus. For women, however, the cross-sectional areas are either not different or only marginally different, and there are few tibial dimensions or geometric parameters that distinguish the uninjured exercisers from the sedentary and injured subjects. In women, the main difference between the groups was the distribution of cortical bone about the centroid, reflected in the different values of section modulus. Lastly, medial tibial stress syndrome subjects had smaller tibial cross-sectional dimensions than did their uninjured exercising counterparts, suggesting that medial tibial stress syndrome is not just a soft-tissue injury but also a bony injury. The results show that in men, the cross-sectional area and the section modulus are the key parameters in the tibia for distinguishing exercise and injury status, whereas for women, it is the section modulus only.
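    The section modulus Z = I/c mentioned above can be illustrated for an idealized hollow-circular bone cross-section (real tibiae are irregular, and the study computed its parameters numerically from CT images; the radii below are hypothetical):

```python
import math

def annulus_section_properties(r_outer, r_inner):
    """Cross-sectional area, second moment of area I, and section
    modulus Z = I / c for a hollow circular section, with c taken as
    the outermost-fibre distance r_outer."""
    area = math.pi * (r_outer ** 2 - r_inner ** 2)
    I = math.pi / 4.0 * (r_outer ** 4 - r_inner ** 4)
    Z = I / r_outer
    return area, I, Z

# Thicker cortex (smaller medullary canal) at the same outer radius
# raises both the cortical area and the section modulus (radii in mm).
a1, i1, z1 = annulus_section_properties(12.0, 8.0)
a2, i2, z2 = annulus_section_properties(12.0, 6.0)
```

    This is why cortical area and section modulus tend to move together in men, while a redistribution of the same area away from the centroid can change Z with little change in area, as observed in the women.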

  3. Effective on-site Coulomb interaction and electron configurations in transition-metal complexes from constraint density functional theory

    NASA Astrophysics Data System (ADS)

    Nawa, Kenji; Nakamura, Kohji; Akiyama, Toru; Ito, Tomonori; Weinert, Michael

    Effective on-site Coulomb interactions (Ueff) and electron configurations in the localized d and f orbitals of metal complexes in transition-metal oxides and organometallic molecules play a key role in the first-principles search for the true ground state. However, wide ranges of values of the Ueff parameter of a material, even in the same ionic state, are often reported. Here, we revisit this issue from constraint density functional theory (DFT) by using the full-potential linearized augmented plane wave method. The Ueff parameters for prototypical transition-metal oxides, TMO (TM = Mn, Fe, Co, Ni), were calculated as the second derivative of the total energy functional with respect to the d occupation numbers inside the muffin-tin (MT) spheres, as a function of the sphere radius. We find that the calculated Ueff values depend significantly on the MT radius, with a variation of more than 3 eV when the MT radius changes from 2.0 to 2.7 a.u.; importantly, however, an identical valence band structure can be produced in all cases, with an approximate scaling of Ueff. This indicates that a simple transferability of the Ueff value among different calculation methods is not allowed. We further extend the constraint DFT to treat various electron configurations of the localized d orbitals in organometallic molecules, TMCp2 (TM = Cr, Mn, Fe, Co, Ni), and find that the calculated Ueff values can reproduce the experimentally determined ground-state electron configurations.
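    The second-derivative definition of Ueff can be illustrated with a finite-difference sketch on a toy total-energy curve E(n); the curve and its curvature below are invented, and a real constraint-DFT calculation evaluates E(n) self-consistently at each constrained occupation.

```python
def second_derivative(f, x, h=1e-3):
    """Central finite-difference estimate of d2f/dx2, the quantity used
    to extract Ueff from total energy vs. d-orbital occupation n."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

# Toy total-energy curve (eV) with built-in curvature Ueff = 4.2 eV
# around an occupation of n = 5 (all values hypothetical).
def total_energy(n):
    return -52.0 + 1.3 * n + 0.5 * 4.2 * (n - 5.0) ** 2

u_eff = second_derivative(total_energy, 5.0)
```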

  4. Raising the standards of the calf-raise test: a systematic review.

    PubMed

    Hébert-Losier, Kim; Newsham-West, Richard J; Schneiders, Anthony G; Sullivan, S John

    2009-11-01

    The calf-raise test is used by clinicians and researchers in sports medicine to assess properties of the calf muscle-tendon unit. The test generally involves repetitive concentric-eccentric muscle action of the plantar-flexors in unipedal stance and is quantified by the number of raises performed. Although the calf-raise test appears to have acceptable reliability and face validity, and is commonly used for medical assessment and rehabilitation of injuries, no universally accepted test parameters have been published to date. A systematic review of the existing literature was conducted to investigate the consistency and universal acceptance of the evaluation purposes, test parameters, outcome measurements and psychometric properties of the calf-raise test. Nine electronic databases were searched between May 30 and September 21, 2008. Forty-nine articles met the inclusion criteria and were quality assessed. Information on study characteristics and calf-raise test parameters, as well as quantitative data, was extracted, tabulated, and statistically analysed. The average quality score of the reviewed articles was 70.4 ± 12.2% (range 44-90%). Articles provided various test parameters; however, a consensus was not ascertained. Key testing parameters varied and were often unstated, and few studies reported reliability or validity values, including sensitivity and specificity. No definitive normative values could be established, and the utility of the test in subjects with pathologies remained unclear. Although adapted for use in several disciplines and traditionally recommended for clinical assessment, there is no uniform description of the calf-raise test in the literature. Further investigation is recommended to ensure consistent use and interpretation of the test by researchers and clinicians.

  5. Robust determination of surface relaxivity from nuclear magnetic resonance DT2 measurements

    NASA Astrophysics Data System (ADS)

    Luo, Zhi-Xiang; Paulsen, Jeffrey; Song, Yi-Qiao

    2015-10-01

    Nuclear magnetic resonance (NMR) is a powerful tool for probing geological materials such as hydrocarbon reservoir rocks and groundwater aquifers. It is unique in its ability to obtain in situ the fluid type and the pore size distribution (PSD). The T1 and T2 relaxation times are closely related to the pore geometry through a parameter called the surface relaxivity. This parameter is critical for converting the relaxation time distribution into the PSD and so is key to accurately predicting permeability. The conventional way to determine the surface relaxivity ρ2 has required independent laboratory measurements of the pore size. Recently, Zielinski et al. proposed a restricted diffusion model to extract the surface relaxivity from the NMR diffusion-T2 relaxation (DT2) measurement. Although this method significantly improved the ability to extract surface relaxivity directly from a pure NMR measurement, there are inconsistencies in their model and it relies on a number of preset parameters. Here we propose an improved signal model that incorporates a scalable LT and extends their method to extract the surface relaxivity by analyzing multiple DT2 maps with varied diffusion observation time. With multiple diffusion observation times, the apparent diffusion coefficient correctly describes the restricted diffusion behavior in samples with wide PSDs, and the new method does not require predetermined parameters such as the bulk diffusion coefficient and tortuosity. Laboratory experiments on glass bead packs with bead diameters ranging from 50 μm to 500 μm are used to validate the new method. The extracted diffusion parameters are consistent with their known values, and the determined surface relaxivity ρ2 agrees with the expected value within ±7%. The method is further applied successfully to a Berea sandstone core and yields a surface relaxivity ρ2 consistent with the literature.
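    The conversion from relaxation time to pore size via surface relaxivity follows the standard relation 1/T2 = 1/T2,bulk + ρ2·(S/V). A minimal sketch, assuming spherical pores (S/V = 3/r) and illustrative parameter values rather than anything measured in the paper:

```python
def pore_radius_from_t2(t2, rho2, t2_bulk=3.0):
    """Invert 1/T2 = 1/T2_bulk + rho2 * (S/V), with S/V = 3/r for a
    spherical pore, to convert a T2 value (s) into a pore radius (m)."""
    surface_rate = 1.0 / t2 - 1.0 / t2_bulk
    assert surface_rate > 0, "T2 must be shorter than the bulk T2"
    return 3.0 * rho2 / surface_rate

# rho2 = 30 um/s (3e-5 m/s) is an illustrative sandstone-scale value;
# shorter T2 maps to smaller pores.
radii = [pore_radius_from_t2(t2, rho2=3e-5) for t2 in (0.05, 0.2, 1.0)]
```

    Applying this pointwise to a measured T2 distribution is what turns it into a PSD, which is why an accurate ρ2 matters for permeability prediction.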

  6. Extreme multistability analysis of memristor-based chaotic system and its application in image decryption

    NASA Astrophysics Data System (ADS)

    Li, Chuang; Min, Fuhong; Jin, Qiusen; Ma, Hanyuan

    2017-12-01

    An active charge-controlled memristive Chua's circuit is implemented, and its basic properties are analyzed. Firstly, with the system trajectory starting from an equilibrium point, the dynamic behavior of multiple coexisting attractors depending on the memristor initial value and the system parameter is studied, which shows the coexistence of point, periodic, chaotic, and quasi-periodic behaviors. Secondly, with the system motion starting from a non-equilibrium point, the dynamics of extreme multistability in a wide initial value domain are easily confirmed by new analytical methods. Furthermore, the simulation results indicate that some strange chaotic attractors of multi-wing and multi-scroll type are observed when the observed signals are extended from voltage and current to power and energy, respectively. Specially, when different initial conditions are taken, coexisting strange chaotic attractors between the power and energy signals are exhibited. Finally, the chaotic sequences of the new system are used to encrypt a color image to protect image information security. The encryption performance is analyzed in terms of statistical histograms, correlation, key space and key sensitivity. Simulation results show that the new memristive chaotic system achieves high security in color image encryption.
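    As a generic illustration of chaotic-sequence image encryption (a logistic-map stand-in, not the memristive Chua system itself), the sketch below XORs pixel bytes with a chaotic keystream; the initial condition and parameter act as the key, and sensitivity to the initial value is what gives such schemes their large effective key space.

```python
def chaotic_keystream(x0, r, n, burn_in=100):
    """Byte keystream from logistic-map iterates x -> r*x*(1-x), used
    here as a stand-in for the memristive system's chaotic sequences."""
    x, out = x0, []
    for i in range(burn_in + n):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            out.append(int(x * 256) & 0xFF)
    return out

def xor_image(pixel_bytes, key=(0.123456, 3.99)):
    """Encrypt (or, applied twice, decrypt) pixel bytes by XOR with the
    keystream generated from the (x0, r) key."""
    ks = chaotic_keystream(key[0], key[1], len(pixel_bytes))
    return [p ^ k for p, k in zip(pixel_bytes, ks)]

pixels = [10, 200, 37, 37, 37, 254]   # a few invented pixel bytes
cipher = xor_image(pixels)
plain = xor_image(cipher)             # same key recovers the original
```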

  7. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE PAGES

    Garitte, B.; Shao, H.; Wang, X. R.; ...

    2017-01-09

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, derived from the modelling of another in-situ experiment (HE-D), and on the modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.

  8. On-line vs off-line electrical conductivity characterization. Polycarbonate composites developed with multiwalled carbon nanotubes by compounding technology

    NASA Astrophysics Data System (ADS)

    Llorens-Chiralt, R.; Weiss, P.; Mikonsaari, I.

    2014-05-01

    Material characterization is one of the key steps when conductive polymers are developed. The dispersion of carbon nanotubes (CNTs) in a polymeric matrix by melt mixing influences the final composite properties. Compounding becomes a trial-and-error process that consumes large amounts of material, time and money to obtain competitive composites. Traditional methods of electrical conductivity characterization include compression and injection molding; both methods need extra equipment and moulds to obtain standard bars. This study aims to investigate the accuracy of the absolute resistance data recorded during melt compounding, using an on-line setup developed by our group, and to correlate these values with off-line characterization and processing parameters (screw/barrel configuration, throughput, screw speed, temperature profile and CNT percentage). Compounds developed with different percentages of multiwalled carbon nanotubes (MWCNTs) and polycarbonate have been characterized during and after extrusion. The measurements, on-line resistance and off-line resistivity, showed parallel response and reproducibility, confirming the validity of the method. The significance of the results stems from the fact that we are able to measure on-line resistance and to change compounding parameters during production to achieve reference values, reducing production/testing cost and ensuring material quality. This method also removes errors that can arise in test bar preparation, showing better correlation with compounding parameters.

  9. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garitte, B.; Shao, H.; Wang, X. R.

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, derived from the modelling of another in-situ experiment (HE-D), and on the modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.

  10. C(m)-History Method, a Novel Approach to Simultaneously Measure Source and Sink Parameters Important for Estimating Indoor Exposures to Phthalates.

    PubMed

    Cao, Jianping; Weschler, Charles J; Luo, Jiajun; Zhang, Yinping

    2016-01-19

    The concentration of a gas-phase semivolatile organic compound (SVOC) in equilibrium with its mass fraction in the source material, y0, and the coefficient for partitioning of an SVOC between clothing and air, K, are key parameters for estimating emission and subsequent dermal exposure to SVOCs. Most of the available methods for their determination depend on achieving steady state in ventilated chambers, which can be time-consuming and of variable accuracy. Additionally, no existing method determines y0 and K simultaneously in a single experiment. In this paper, we present a sealed-chamber method, using early-stage concentration measurements, to simultaneously determine y0 and K. The measurement error of the method is analyzed, and the optimization of experimental parameters is explored. Using this method, y0 values for phthalates (DiBP, DnBP, and DEHP) emitted by two types of PVC flooring, together with K values for these phthalates partitioning between a cotton T-shirt and air, were measured at 25 and 32 °C (room and skin temperatures, respectively). The measured y0 values agree well with results obtained by alternative methods. The changes of y0 and K with temperature were used to approximate the changes in enthalpy, ΔH, associated with the relevant phase changes. We conclude with suggestions for further related research.
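    The temperature step at the end of the abstract can be sketched numerically. Assuming van't Hoff behavior for the temperature dependence of y0 (or K), ΔH follows from measurements at the two temperatures; the numbers below are illustrative, not the paper's data:

    ```python
    import math

    R = 8.314  # gas constant, J/(mol K)

    def delta_h_from_two_temps(v1, T1, v2, T2):
        """Approximate a phase-change enthalpy (J/mol) from a parameter
        value (e.g. y0 or K) measured at two absolute temperatures,
        assuming van't Hoff behavior: ln v = -dH/(R*T) + const."""
        return -R * (math.log(v2) - math.log(v1)) / (1.0 / T2 - 1.0 / T1)

    # Illustrative numbers only: a parameter doubling between
    # 298.15 K (25 degC) and 305.15 K (32 degC) gives dH ~ 75 kJ/mol
    dH = delta_h_from_two_temps(1.0, 298.15, 2.0, 305.15)
    ```

    A two-temperature estimate like this is crude; with more temperatures, a linear fit of ln v against 1/T would give ΔH from the slope.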

  11. Two Key Parameters Controlling Particle Clumping Caused by Streaming Instability in the Dead-zone Dust Layer of a Protoplanetary Disk

    NASA Astrophysics Data System (ADS)

    Sekiya, Minoru; Onishi, Isamu K.

    2018-06-01

    The streaming instability and Kelvin–Helmholtz instability are considered the two major sources of dust-particle clumping and turbulence in the dust layer of a protoplanetary disk, as long as we consider the dead zone where the magnetorotational instability does not grow. Extensive numerical simulations have been carried out in order to elucidate the condition for the development of particle clumping caused by the streaming instability. In this paper, a set of two parameters suitable for classifying the numerical results is proposed. One is the Stokes number that has been employed in previous works; the other is the dust particle column density nondimensionalized using the gas density in the midplane, the Keplerian angular velocity, and the difference between the Keplerian and gaseous orbital velocities. The magnitude of dust clumping is a measure of the behavior of the dust layer. Using three-dimensional numerical simulations of dust particles and gas based on Athena code v. 4.2, it is confirmed that the magnitude of dust clumping for two disk models is similar if the corresponding sets of values of the two parameters are identical, even if the values of the metallicity (i.e., the ratio of the column density of the dust particles to that of the gas) are different.
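    Under one plausible reading of the normalization described above (the paper's exact combination may differ), the two classification parameters can be computed as:

    ```python
    def stokes_number(t_stop, omega):
        # Stokes number: particle stopping time times Keplerian angular velocity
        return t_stop * omega

    def nondim_dust_column_density(sigma_d, rho_g0, omega, delta_v):
        """Dust column density nondimensionalized with the midplane gas
        density, the Keplerian angular velocity and the difference between
        Keplerian and gaseous orbital velocities. The combination
        sigma_d * omega / (rho_g0 * delta_v) is dimensionless; it is an
        assumed form, not quoted from the paper."""
        return sigma_d * omega / (rho_g0 * delta_v)
    ```

    For example, in cgs units, sigma_d = 1 g/cm^2, rho_g0 = 1e-9 g/cm^3, omega = 1e-7 s^-1 and delta_v = 5000 cm/s give a value of 0.02.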

  12. Nonlinear solar cycle forecasting: theory and perspectives

    NASA Astrophysics Data System (ADS)

    Baranovski, A. L.; Clette, F.; Nollau, V.

    2008-02-01

    In this paper we develop a modern approach to solar cycle forecasting, based on the mathematical theory of nonlinear dynamics. We start from the design of a static curve-fitting model for the experimental yearly sunspot number series, over a time scale of 306 years starting from the year 1700, and we establish a least-squares optimal pulse shape of a solar cycle. The cycle-to-cycle evolution of the parameters of the cycle shape displays different patterns, such as a Gleissberg cycle and a strong anomaly in the cycle evolution during the Dalton minimum. In a second step, we extract a chaotic mapping for the successive values of one of the key model parameters - the rate of the exponential growth-decrease of the solar activity during the n-th cycle. We examine piecewise-linear techniques for the approximation of the derived mapping and we provide its probabilistic analysis: calculation of the invariant distribution and autocorrelation function. We find analytical relationships for the sunspot maxima and minima, as well as their occurrence times, as functions of chaotic values of the above parameter. Based on a Lyapunov spectrum analysis of the embedded mapping, we finally establish a horizon of predictability for the method, which allows us to give the most probable forecast of the upcoming solar cycle 24, with an expected peak height of 93±21 occurring in 2011/2012.

  13. Temperature effects on gallium arsenide 63Ni betavoltaic cell.

    PubMed

    Butera, S; Lioliou, G; Barnett, A M

    2017-07-01

    A GaAs 63Ni radioisotope betavoltaic cell is reported over the temperature range 70 °C to -20 °C. The temperature effects on the key cell parameters were investigated. The saturation current decreased with decreasing temperature, whilst the open circuit voltage, the short circuit current, the maximum power and the internal conversion efficiency values decreased with increasing temperature. A maximum output power of 1.8 pW (corresponding to 0.3 μW/Ci) and an internal conversion efficiency of 7% were observed at -20 °C. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. A perturbation method to the tent map based on Lyapunov exponent and its application

    NASA Astrophysics Data System (ADS)

    Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu

    2015-10-01

    Perturbation imposed on a chaotic system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaotic map, the Chebyshev map, for post-processing. If the output value of the Chebyshev map falls into a certain range, it is sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves good cryptographic properties for a pseudo-random sequence. In this way it weakens the strong correlation caused by finite precision and effectively compensates for the dynamics degradation of digital chaotic systems. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
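    The perturbation loop described above can be sketched as follows; the trigger range, the Chebyshev degree, and the rule that maps the Chebyshev output back onto the tent-map parameter are illustrative assumptions, not the paper's exact values:

    ```python
    import math

    def perturbed_tent_sequence(x0, mu, n, low=0.4, high=0.6):
        """Iterate the tent map while dynamically perturbing its parameter.

        Each iterate is post-processed by a degree-4 Chebyshev map; when the
        rescaled Chebyshev output lands in [low, high) -- a hypothetical
        trigger range -- it is fed back to replace the tent-map parameter mu.
        """
        seq = []
        x = x0
        for _ in range(n):
            # Tent map with parameter mu in (1, 2]
            x = mu * x if x < 0.5 else mu * (1.0 - x)
            # Chebyshev map T4 on [-1, 1]: y = cos(4 * arccos(z))
            z = max(-1.0, min(1.0, 2.0 * x - 1.0))
            y = math.cos(4.0 * math.acos(z))
            u = (y + 1.0) / 2.0          # rescale to [0, 1]
            if low <= u < high:
                mu = 1.7 + 0.1 * u       # keep the replacement in (1.7, 1.8)
            seq.append(x)
        return seq

    seq = perturbed_tent_sequence(0.37, 1.99, 500)
    ```

    Keeping the replacement parameter strictly above 1 preserves a positive Lyapunov exponent, which is the point of the Lyapunov-exponent-based design.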

  15. Active inference and epistemic value.

    PubMed

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.

  16. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimating ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for the reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. The optimal estimates range from about 140 times greater than to about 0.01 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first and then, using the optimized values for these parameters, estimate the entire data set.
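    The regression step, minimizing differences between measured and simulated water levels, can be sketched with a toy two-parameter forward model (hypothetical, not the Floridan aquifer model) and a plain Gauss-Newton update:

    ```python
    import numpy as np

    # Hypothetical forward model: simulated heads h(x) = a * exp(-b * x);
    # the actual study uses a regional ground-water flow model instead.
    def heads(params, x):
        a, b = params
        return a * np.exp(-b * x)

    def gauss_newton(obs, x, p0, n_iter=30):
        """Minimize the sum of squared differences between measured and
        simulated heads with plain Gauss-Newton updates."""
        p = np.array(p0, dtype=float)
        for _ in range(n_iter):
            r = obs - heads(p, x)               # measured minus simulated
            J = np.empty((x.size, p.size))      # finite-difference Jacobian
            for j in range(p.size):
                dp = np.zeros_like(p)
                dp[j] = 1e-6 * max(1.0, abs(p[j]))
                J[:, j] = (heads(p + dp, x) - heads(p, x)) / dp[j]
            p += np.linalg.lstsq(J, r, rcond=None)[0]
        return p

    x = np.linspace(0.0, 2.0, 25)
    true = np.array([10.0, 1.3])                # synthetic "true" parameters
    obs = heads(true, x)                        # noise-free synthetic observations
    est = gauss_newton(obs, x, p0=[8.0, 1.0])
    ```

    The parameter insensitivity and correlation problems described in the abstract show up here as a near-singular Jacobian, which is why the authors recommend simplifying the parameter structure rather than estimating everything at once.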

  17. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  18. Indoor air quality analysis based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tuo, Wang; Yunhua, Sun; Song, Tian; Liang, Yu; Weihong, Cui

    2014-03-01

    The air of an office environment is the object of this research. Data on temperature, humidity, and concentrations of carbon dioxide, carbon monoxide and ammonia are collected every one to eight seconds by the sensor monitoring system, and all the data are stored in the HBase database of a Hadoop platform. Using HBase's column-oriented, versioned storage (which automatically adds the time column), time-series data sets are built on the primary key (Row-key) and timestamp. The parallel computing programming model MapReduce is used to process the millions of data points collected by the sensors. By analysing the trend of parameter values at different times of the same day and at the same time on different dates, the impact of human and other factors on the room microenvironment is assessed in relation to the movement of office staff. Moreover, an effective way to improve indoor air quality is proposed at the end of this paper.
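    The MapReduce aggregation described above can be illustrated with a minimal in-memory sketch (plain Python rather than the Hadoop API; the record schema and sensor names are assumptions):

    ```python
    from collections import defaultdict

    # Each record: (sensor_id, unix_timestamp, value) -- hypothetical schema
    def map_phase(records):
        for sensor, ts, value in records:
            hour = (ts // 3600) % 24          # hour-of-day bucket
            yield (sensor, hour), value

    def reduce_phase(pairs):
        sums = defaultdict(lambda: [0.0, 0])
        for key, value in pairs:
            acc = sums[key]
            acc[0] += value                   # running sum per (sensor, hour)
            acc[1] += 1                       # running count per (sensor, hour)
        return {key: s / n for key, (s, n) in sums.items()}

    records = [("co2", 3600, 450.0), ("co2", 3660, 470.0), ("co2", 7200, 500.0)]
    hourly_mean = reduce_phase(map_phase(records))
    # hourly_mean[("co2", 1)] == 460.0
    ```

    In the Hadoop version, the map and reduce phases run in parallel across HBase regions, with (Row-key, timestamp) playing the role of the record key.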

  19. Gaseous Sulfate Solubility in Glass: Experimental Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bliss, Mary

    2013-11-30

    Sulfate solubility in glass is a key parameter in many commercial glasses and nuclear waste glasses. This report summarizes key publications on sulfate solubility experimental methods and the underlying physical chemistry calculations. The published methods and experimental data are used to verify the calculations in this report and are extended to a range of current technical interest. The calculations and experimental methods described here will guide several experiments on sulfate solubility and saturation for the Hanford Waste Treatment Plant Enhanced Waste Glass Models effort. Several tables of sulfate gas equilibrium values at high temperature are provided to guide experimental gas mixing and to achieve desired SO3 levels. This report also describes the necessary equipment and best practices for performing sulfate saturation experiments on molten glasses. Results and findings will be published when the experimental work is finished and this report is validated against the data obtained.

  20. Identification and determination of trapping parameters as key site parameters for CO2 storage for the active CO2 storage site in Ketzin (Germany) - Comparison of different experimental approaches and analysis of field data

    NASA Astrophysics Data System (ADS)

    Zemke, Kornelia; Liebscher, Axel

    2015-04-01

    Petrophysical properties like porosity and permeability are key parameters for safe long-term storage of CO2 and for the injection operation itself. Accurate quantification of residual trapping is difficult but very important for both storage containment security and storage capacity; it is also an important parameter for dynamic simulation. The German CO2 pilot storage site in Ketzin is a Triassic saline aquifer with initial conditions in the target sandstone horizon of 33.5 °C/6.1 MPa at 630 m. One injection and two observation wells were drilled in 2007 and nearly 200 m of core material was recovered for site characterization. From June 2008 to September 2013, slightly more than 67 kt of food-grade CO2 was injected and continuously monitored. A fourth observation well was drilled in summer 2012, after 61 kt of injected CO2, at only 25 m distance from the injection well, and new core material was recovered that allows studying CO2-induced changes in petrophysical properties. The only minor differences observed between pre-injection and post-injection petrophysical parameters of the heterogeneous formation have no severe consequences for reservoir and cap rock integrity or for the injection behavior. Residual brine saturation of the Ketzin reservoir core material was estimated by different methods. Brine-CO2 flooding experiments on two reservoir samples resulted in 36% and 55% residual brine saturation (Kiessling, 2011). Centrifuge capillary pressure measurements (pc = 0.22 MPa) yielded the smallest residual brine saturation values, with ~20% for the lower part of the reservoir sandstone and ~28% for the upper part (Fleury, 2010).
The method of Cerepi (2002), which takes the residual mercury saturation after pressure release on the imbibition path as trapped porosity and the retracted mercury volume as free porosity, yielded unrealistically low free porosity values of only a few percent, because over 80% of the penetrated mercury remained in the samples after pressure release to atmospheric pressure. The results of the centrifuge capillary pressure measurements were then used to calibrate the cutoff time of NMR T2 relaxation (average value 8 ms) for differentiating between the mobile and immobile water fractions (the standard for clean sandstone is 33 ms). Following Norden (2010), a cutoff time of 10 ms was applied to estimate the residual saturation as Bound Fluid Volume for the Ketzin core material and to estimate NMR permeability after Timur-Coates. This adapted cutoff value is also consistent with results from RST logging after injection. The maximum measured CO2 saturation corresponds to the effective porosity of the uppermost CO2-filled sandstone horizon. The directly measured values and the residual brine saturations estimated from NMR measurements with the adapted cutoff time of 10 ms are within the expected range compared with the literature data, with a mean residual brine saturation of 53%. References: A. Cerepi et al., 2002, Journal of Petroleum Science and Engineering 35; M. Fleury et al., 2011, SCA2010-06; D. Kiessling et al., 2010, International Journal of Greenhouse Gas Control 4; B. Norden et al., 2010, SPE Reservoir Evaluation & Engineering 13.
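The cutoff-based estimate of the immobile (bound) water fraction can be sketched as follows; the T2 distribution below is illustrative, not the Ketzin data:

```python
import numpy as np

def bound_fluid_fraction(t2_ms, amplitudes, cutoff_ms=10.0):
    """Fraction of the NMR T2 distribution below the cutoff time, read
    as the immobile (bound) water fraction. 10 ms is the adapted Ketzin
    cutoff; 33 ms is the clean-sandstone standard."""
    t = np.asarray(t2_ms, dtype=float)
    a = np.asarray(amplitudes, dtype=float)
    return a[t < cutoff_ms].sum() / a.sum()

# Illustrative four-bin distribution (not measured data)
t2 = np.array([1.0, 5.0, 20.0, 100.0])      # T2 bin centres, ms
amp = np.array([0.2, 0.3, 0.3, 0.2])        # normalized amplitudes
bfv = bound_fluid_fraction(t2, amp)         # 0.5 with these numbers
```

The same bound/free split (BFV vs. FFI) feeds the Timur-Coates permeability estimate mentioned in the record.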

  1. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  2. Sensitivity of the Eocene climate to CO2 and orbital variability

    NASA Astrophysics Data System (ADS)

    Keery, John S.; Holden, Philip B.; Edwards, Neil R.

    2018-02-01

    The early Eocene, from about 56 Ma, with high atmospheric CO2 levels, offers an analogue for the response of the Earth's climate system to anthropogenic fossil fuel burning. In this study, we present an ensemble of 50 Earth system model runs with an early Eocene palaeogeography and variation in the forcing values of atmospheric CO2 and the Earth's orbital parameters. Relationships between simple summary metrics of model outputs and the forcing parameters are identified by linear modelling, providing estimates of the relative magnitudes of the effects of atmospheric CO2 and each of the orbital parameters on important climatic features, including tropical-polar temperature difference, ocean-land temperature contrast, Asian, African and South (S.) American monsoon rains, and climate sensitivity. Our results indicate that although CO2 exerts a dominant control on most of the climatic features examined in this study, the orbital parameters also strongly influence important components of the ocean-atmosphere system in a greenhouse Earth. In our ensemble, atmospheric CO2 spans the range 280-3000 ppm, and this variation accounts for over 90 % of the effects on mean air temperature, southern winter high-latitude ocean-land temperature contrast and northern winter tropical-polar temperature difference. However, the variation of precession accounts for over 80 % of the influence of the forcing parameters on the Asian and African monsoon rainfall, and obliquity variation accounts for over 65 % of the effects on winter ocean-land temperature contrast in high northern latitudes and northern summer tropical-polar temperature difference. Our results indicate a bimodal climate sensitivity, with values of 4.36 and 2.54 °C, dependent on low or high states of atmospheric CO2 concentration, respectively, with a threshold at approximately 1000 ppm in this model, attributable to a saturated vegetation-albedo feedback. 
Our method gives a quantitative ranking of the influence of each of the forcing parameters on key climatic model outputs, with additional spatial information from singular value decomposition providing insights into likely physical mechanisms. The results demonstrate the importance of orbital variation as an agent of change in climates of the past, and we demonstrate that emulators derived from our modelling output can be used as rapid and efficient surrogates of the full complexity model to provide estimates of climate conditions from any set of forcing parameters.
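    The linear-modelling attribution step can be sketched as follows; this is a simplified stand-in that assumes roughly uncorrelated forcing parameters, not the authors' actual emulator, and the forcing names are illustrative:

    ```python
    import numpy as np

    def variance_explained(X, y):
        """Fraction of variance in a climate metric y attributed to each
        standardized forcing column of X via an ordinary linear model
        (valid as a decomposition only for near-uncorrelated forcings)."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        coef, *_ = np.linalg.lstsq(Xs, y - y.mean(), rcond=None)
        contrib = coef ** 2 * Xs.var(axis=0)   # per-forcing variance contribution
        return contrib / contrib.sum()

    # Synthetic 50-member ensemble: metric dominated by the first forcing
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(50, 3))              # e.g. CO2, obliquity, precession
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2]
    frac = variance_explained(X, y)            # first fraction is the largest
    ```

    The paper's singular value decomposition step adds spatial patterns on top of this kind of scalar attribution.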

  3. Assessment of the Fluorescence Spectra Characteristics of Dissolved Organic Matter Derived from Organic Waste Composting Based on Projection Pursuit Classification (PPC).

    PubMed

    Wei, Zi-min; Wang, Xing-lei; Pan, Hong-wei; Zhao, Yue; Xie, Xin-yu; Zhao, Yi; Zhang, Lin-xue; Zhao, Tao-zhi

    2015-10-01

    The characteristics of the fluorescence spectra of dissolved organic matter (DOM) derived from composting are one of the key ways to assess compost maturity. However, the existing methods mainly focus on a qualitative description of the humification degree of compost. In this paper, projection pursuit classification (PPC) was conducted to quantitatively assess the grades of compost maturity based on the characteristics of the fluorescence spectra of DOM. Composting of eight organic wastes (chicken manure, swine manure, kitchen waste, lawn waste, fruit and vegetable waste, straw, green waste, and municipal solid waste) was conducted, and the germination percentage (GI) and fluorescence spectra of DOM were measured during composting. Statistical analysis of all fluorescence parameters of DOM indicated that I436/I383 (a ratio between the fluorescence intensities at 436 and 383 nm in excitation spectra), FLR (an area ratio between the fulvic-like region from 308 to 363 nm and the total region in emission spectra), P(HA/Pro) (a regional integration ratio between the humic acid-like region and the protein-like region in excitation emission matrix (EEM) spectra), A4/A1 (an area ratio of the last quarter to the first quarter in emission spectra) and r(A,C) (a ratio between the fluorescence intensities of peak A and peak C in EEM spectra) were correlated with each other (p < 0.01), suggesting that these fluorescence parameters could be considered as a comprehensive evaluation index system for PPC. Subsequently, four grades of compost maturity were defined according to the GI value during composting: best maturity (I, GI > 80%), better maturity (II, 60% < GI < 80%), maturity (III, 50% < GI < 60%), and immaturity (IV, GI < 50%). The corresponding fluorescence parameter values were calculated for each grade of compost maturity. Then the projection values were calculated by PPC from the above fluorescence parameter values. 
The projection value was 2.01-2.22 for grade I, 1.21-2.0 for grade II, 0.57-1.2 for grade III, and 0.10-0.56 for grade IV. Model validation was then carried out with compost samples; the results indicated that the simulated values agreed with the observed values, and the accuracy of PPC was 75% for the four grades of maturity and 100% for distinguishing maturity from immaturity, suggesting that PPC could meet the need for assessment of compost maturity.
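The reported projection-value ranges translate directly into a grade lookup; how values falling outside all reported ranges should be treated is an assumption (here they return None):

```python
def maturity_grade(projection_value):
    """Map a PPC projection value to a compost-maturity grade using the
    reported ranges; boundary handling outside the ranges is assumed."""
    if 2.01 <= projection_value <= 2.22:
        return "I"    # best maturity, GI > 80%
    if 1.21 <= projection_value <= 2.0:
        return "II"   # 60% < GI < 80%
    if 0.57 <= projection_value <= 1.2:
        return "III"  # 50% < GI < 60%
    if 0.10 <= projection_value <= 0.56:
        return "IV"   # immature, GI < 50%
    return None

grade = maturity_grade(1.85)  # "II" with the reported ranges
```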

  4. Inverse gas chromatographic determination of solubility parameters of excipients.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2005-11-04

    The principal aim of this work was the application of inverse gas chromatography (IGC) to the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (chi1,2infinity) and then the solubility parameter (delta2), the corrected solubility parameter (deltaT) and its components (deltad, deltap, deltah) by different procedures. The influence of different values of the test solutes' solubility parameter (delta1) on the calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, following the procedure of Guillet and co-workers, are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter value assumed for the test solutes influences, though not significantly, the calculated solubility parameter values of the excipients.

  5. Forecasting surface-layer atmospheric parameters at the Large Binocular Telescope site

    NASA Astrophysics Data System (ADS)

    Turchi, Alessio; Masciadri, Elena; Fini, Luca

    2017-04-01

    In this paper, we quantify the performance of an automated weather forecast system implemented at the Large Binocular Telescope (LBT) site at Mt Graham (Arizona) in forecasting the main atmospheric parameters close to the ground. The system employs a mesoscale non-hydrostatic numerical model (Meso-Nh). To validate the model, we compare the forecasts of wind speed, wind direction, temperature and relative humidity close to the ground with the respective values measured by instrumentation installed on the telescope dome. The study is performed over a large sample of nights uniformly distributed over 2 yr. The quantitative analysis is done using classical statistical operators [bias, root-mean-square error (RMSE) and σ] and contingency tables, which allow us to extract complementary key information, such as the percentage of correct detections (PC) and the probability of obtaining a correct detection within a defined interval of values (POD). The results of our study indicate that the model performance in forecasting these atmospheric parameters is very good, in some cases excellent: the RMSE for temperature is below 1 °C, for relative humidity it is 14 per cent, and for wind speed it is around 2.5 m s-1. The relative error of the RMSE for wind direction varies from 9 to 17 per cent depending on the wind speed conditions. This work is performed in the context of the ALTA (Advanced LBT Turbulence and Atmosphere) Center project, whose final goal is to provide forecasts of all the atmospheric parameters and the optical turbulence to support LBT observations, adaptive optics facilities and interferometric facilities.
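    The statistical operators used in the validation can be written out directly; the tolerance-based score below is a simplified stand-in for the paper's contingency tables, and the sample values are invented:

    ```python
    import numpy as np

    def verify(forecast, observed):
        """Classical verification operators: bias, RMSE and the
        bias-corrected spread sigma (RMSE^2 = bias^2 + sigma^2)."""
        err = np.asarray(forecast, float) - np.asarray(observed, float)
        bias = err.mean()
        rmse = np.sqrt((err ** 2).mean())
        sigma = np.sqrt(rmse ** 2 - bias ** 2)
        return bias, rmse, sigma

    def percent_correct(forecast, observed, tolerance):
        # Contingency-style score: fraction of forecasts within a
        # tolerance of the observation (a simplified PC analogue)
        f = np.asarray(forecast, float)
        o = np.asarray(observed, float)
        return np.mean(np.abs(f - o) <= tolerance)

    bias, rmse, sigma = verify([10.2, 11.0, 9.5], [10.0, 11.4, 9.7])
    ```

    The decomposition RMSE² = bias² + σ² is why the three operators are reported together: bias captures systematic offset, σ the remaining scatter.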

  6. Multi-factor authentication

    DOEpatents

    Hamlet, Jason R; Pierson, Lyndon G

    2014-10-21

    Detection and deterrence of spoofing of user authentication may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a user of the hardware device. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a PUF value. Combining logic is coupled to receive the PUF value and combines it with one or more other authentication factors to generate a multi-factor authentication value. A key generator is coupled to generate a private key and a public key based on the multi-factor authentication value, while a decryptor is coupled to receive an authentication challenge posed to the hardware device and encrypted with the public key, and to output a response to the authentication challenge decrypted with the private key.
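    The combining step can be illustrated in software, though only as a loose analogue of the patent's hardware: a chained hash stands in for the combining logic, an HMAC stands in for the asymmetric challenge-response, and all input values are hypothetical:

    ```python
    import hashlib
    import hmac

    def multi_factor_value(puf_value: bytes, *factors: bytes) -> bytes:
        """Combine a (hypothetical) PUF response with other authentication
        factors into one digest; the patent's combining logic is a hardware
        circuit, and this chained SHA-256 is only an illustrative analogue."""
        h = hashlib.sha256(puf_value)
        for factor in factors:
            h.update(hashlib.sha256(factor).digest())
        return h.digest()

    def challenge_response(key: bytes, challenge: bytes) -> bytes:
        # Symmetric HMAC stand-in for the patent's public/private-key
        # encrypt/decrypt challenge-response exchange.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    mfa = multi_factor_value(b"puf-response", b"password", b"token")
    resp = challenge_response(mfa, b"nonce-123")
    ```

    The design point carried over from the patent: the derived value is deterministic for the same device and factors, but changes if any single factor changes, so a cloned device without the PUF cannot reproduce it.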

  7. Increased Uptake of Chelated Copper Ions by Lolium perenne Attributed to Amplified Membrane and Endodermal Damage

    PubMed Central

    Johnson, Anthea; Singhal, Naresh

    2015-01-01

    The contributions of mechanisms by which chelators influence metal translocation to plant shoot tissues are analyzed using a combination of numerical modelling and physical experiments. The model distinguishes between apoplastic and symplastic pathways of water and solute movement. It also includes the barrier effects of the endodermis and plasma membrane. Simulations are used to assess transport pathways for free and chelated metals, identifying mechanisms involved in chelate-enhanced phytoextraction. Hypothesized transport mechanisms and parameters specific to amendment treatments are estimated, with simulated results compared to experimental data. Parameter values for each amendment treatment are estimated based on literature and experimental values, and used for model calibration and simulation of amendment influences on solute transport pathways and mechanisms. Modeling indicates that chelation alters the pathways for Cu transport. For free ions, Cu transport to leaf tissue can be described using purely apoplastic or transcellular pathways. For strong chelators (ethylenediaminetetraacetic acid (EDTA) and diethylenetriaminepentaacetic acid (DTPA)), transport by the purely apoplastic pathway is insufficient to represent measured Cu transport to leaf tissue. Consistent with experimental observations, increased membrane permeability is required for simulating translocation in EDTA and DTPA treatments. Increasing the membrane permeability is key to enhancing phytoextraction efficiency. PMID:26512647

  8. Mechanistic Parameters of Electrocatalytic Water Oxidation on LiMn2 O4 in Comparison to Natural Photosynthesis.

    PubMed

    Köhler, Lennart; Ebrahimizadeh Abrishami, Majid; Roddatis, Vladimir; Geppert, Janis; Risch, Marcel

    2017-11-23

    Targeted improvement of the low efficiency of water oxidation during the oxygen evolution reaction (OER) is severely hindered by insufficient knowledge of the electrocatalytic mechanism on heterogeneous surfaces. We chose LiMn2O4 as a model system for mechanistic investigations, as it shares the cubane structure with the active site of photosystem II and the Mn valence of 3.5+ with the dark-stable S1 state in the mechanism of natural photosynthesis. The investigated LiMn2O4 nanoparticles are electrochemically stable in NaOH electrolytes and show respectable activity in all of the main metrics. At low overpotential, the key mechanistic parameters of Tafel slope, Nernst slope, and reaction order have constant values on the RHE scale of 62(1) mV dec-1, 1(1) mV pH-1, and -0.04(2), respectively. These values are interpreted in the context of the well-studied mechanism of natural photosynthesis. The uncovered difference in the reaction sequence is important for the design of efficient bio-inspired electrocatalysts. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
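    The Tafel slope, the first of the mechanistic parameters listed, is extracted as a linear fit of overpotential against the logarithm of current; the data below are synthetic with a built-in 62 mV dec-1 slope, not the paper's measurements:

    ```python
    import numpy as np

    def tafel_slope_mv_per_dec(overpotential_v, current_a):
        """Tafel slope in mV per decade from a linear fit of overpotential
        (V) against log10(current) in the kinetically limited region."""
        eta = np.asarray(overpotential_v, dtype=float)
        log_j = np.log10(np.asarray(current_a, dtype=float))
        slope_v_per_dec = np.polyfit(log_j, eta, 1)[0]
        return slope_v_per_dec * 1e3

    # Synthetic Tafel region with a built-in 62 mV/dec slope
    log_j = np.linspace(-5.0, -3.0, 10)
    eta = 0.062 * (log_j + 5.0) + 0.30
    slope = tafel_slope_mv_per_dec(eta, 10.0 ** log_j)   # ~62 mV/dec
    ```

    The Nernst slope is obtained analogously by fitting potential against pH at fixed current, and the reaction order from log(current) against pH at fixed potential.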

  9. Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Deschamps, Pierre-Yves

    1997-01-01

    Firstly, we analyzed atmospheric transmittance and sky radiance data collected at the Scripps Institution of Oceanography pier, La Jolla, during the winters of 1993 and 1994. Aerosol optical thickness at 870 nm was generally low in La Jolla, with most values below 0.1 after correction for stratospheric aerosols. For such low optical thickness, variability in aerosol scattering properties cannot be determined, and a mean background model, specified regionally under a stable stratospheric component, may be sufficient for ocean color remote sensing from space. For optical thicknesses above 0.1, two modes of variability, characterized by Angstrom exponents of 1.2 and 0.5 and corresponding to Tropospheric and Maritime models, respectively, were identified in the measurements. The aerosol models selected for ocean color remote sensing allowed one to fit, within measurement inaccuracies, the derived values of Angstrom exponent and 'pseudo' phase function (the product of single scattering albedo and phase function), key atmospheric correction parameters. Importantly, the 'pseudo' phase function can be derived from measurements of the Angstrom exponent. Shipborne sun photometer measurements at the time of satellite overpass are usually sufficient to verify atmospheric correction for ocean color.
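The Angstrom exponent that separates the two aerosol modes follows from Ångström's power law, tau(lam) = beta * lam**(-alpha), given the optical thickness at two wavelengths. A sketch with illustrative numbers (not the study's measurements):

```python
import numpy as np

# Angstrom's law: tau(lam) = beta * lam**(-alpha), so measurements at two
# wavelengths determine alpha directly.
def angstrom_exponent(tau1, lam1, tau2, lam2):
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

# Illustrative values (not the study's data): optical thickness 0.10 at
# 440 nm falling to 0.05 at 870 nm.
alpha = angstrom_exponent(0.10, 440.0, 0.05, 870.0)
print(f"Angstrom exponent: {alpha:.2f}")
```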

  10. Model based estimation of sediment erosion in groyne fields along the River Elbe

    NASA Astrophysics Data System (ADS)

    Prohaska, Sandra; Jancke, Thomas; Westrich, Bernhard

    2008-11-01

    River water quality is still a vital environmental issue, even though ongoing emissions of contaminants are being reduced in several European rivers. The mobility of historically contaminated deposits is a key issue in sediment management strategy and remediation planning. Resuspension of contaminated sediments impacts the water quality, and thus it is important for river engineering and ecological rehabilitation. The erodibility of the sediments and associated contaminants is difficult to predict due to complex time-dependent physical, chemical, and biological processes, as well as due to a lack of information. Therefore, in engineering practice the values for erosion parameters are usually assumed to be constant despite their high spatial and temporal variability, which leads to a large uncertainty in the erosion parameters. The goal of the presented study is to compare the deterministic approach, which assumes a constant critical erosion shear stress, with an innovative approach which takes the critical erosion shear stress as a random variable. Furthermore, the effective value of the critical erosion shear stress, its applicability in numerical models, and the erosion probability are quantified. The results presented here are based on field measurements and numerical modelling of the River Elbe groyne fields.
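Treating the critical erosion shear stress as a random variable, as the innovative approach does, leads naturally to an erosion probability P(tau_bed > tau_c). A minimal Monte Carlo sketch, with an assumed lognormal distribution and illustrative parameter values (not field-calibrated ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Critical erosion shear stress tau_c as a random variable: a lognormal with
# median 2.0 Pa and shape 0.4 (illustrative assumptions, not field values).
tau_c = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=100_000)

def erosion_probability(tau_bed):
    """Estimate P(erosion) = P(tau_bed > tau_c) from the sampled tau_c."""
    return float(np.mean(tau_bed > tau_c))

for tau_bed in (1.0, 2.0, 4.0):
    print(f"tau_bed = {tau_bed} Pa -> P(erosion) = {erosion_probability(tau_bed):.3f}")
```

A deterministic model with tau_c fixed at 2.0 Pa would predict no erosion at all for tau_bed = 1.0 Pa; the probabilistic treatment instead returns a small but nonzero erosion probability.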

  11. Programmable Gain Amplifiers with DC Suppression and Low Output Offset for Bioelectric Sensors

    PubMed Central

    Carrera, Albano; de la Rosa, Ramón; Alonso, Alonso

    2013-01-01

    DC-offset and DC-suppression are key parameters in bioelectric amplifiers. However, specific DC analyses are not often explained. Several factors influence the DC budget: the programmable gain, the programmable cut-off frequencies for high-pass filtering, the low cut-off values, and the capacitor blocking issues involved. A new intermediate stage is proposed to address the DC problem entirely. Two implementations were tested. The stage is composed of a programmable gain amplifier (PGA) with DC rejection and low output offset. Cut-off frequencies are selectable, and values from 0.016 to 31.83 Hz were tested; the capacitor deblocking is embedded in the design. Hence, this PGA delivers most of the required gain with a constant low output offset, notwithstanding the gain or cut-off frequency selected. PMID:24084109
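For a first-order high-pass stage, the cut-off frequency is f_c = 1/(2*pi*R*C). The component values below are hypothetical, chosen only to show how end points like those of the tested 0.016-31.83 Hz range could arise:

```python
import math

# First-order high-pass cut-off frequency: f_c = 1 / (2 * pi * R * C).
def cutoff_hz(r_ohm, c_farad):
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# Hypothetical R/C pairs (not taken from the paper) bracketing the range:
f_low = cutoff_hz(10e6, 1e-6)     # 10 Mohm with 1 uF   -> about 0.016 Hz
f_high = cutoff_hz(50e3, 0.1e-6)  # 50 kohm with 0.1 uF -> about 31.8 Hz
print(f"{f_low:.3f} Hz to {f_high:.1f} Hz")
```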

  12. Cost-effective conservation of amphibian ecology and evolution

    PubMed Central

    Campos, Felipe S.; Lourenço-de-Moraes, Ricardo; Llorente, Gustavo A.; Solé, Mirco

    2017-01-01

    Habitat loss is the most important threat to species survival, and the efficient selection of priority areas is fundamental for good systematic conservation planning. Using amphibians as a conservation target, we designed an innovative assessment strategy, showing that prioritization models focused on functional, phylogenetic, and taxonomic diversity can include cost-effectiveness–based assessments of land values. We report new key conservation sites within the Brazilian Atlantic Forest hot spot, revealing a congruence of ecological and evolutionary patterns. We suggest payment for ecosystem services through environmental set-asides on private land, establishing potential trade-offs for ecological and evolutionary processes. Our findings introduce additional effective area-based conservation parameters that set new priorities for biodiversity assessment in the Atlantic Forest, validating the usefulness of a novel approach to cost-effectiveness–based assessments of conservation value for other species-rich regions. PMID:28691084

  13. A protocol to correct for intra- and interspecific variation in tail hair growth to align isotope signatures of segmentally cut tail hair to a common time line.

    PubMed

    Burnik Šturm, Martina; Pukazhenthi, Budhan; Reed, Dolores; Ganbaatar, Oyunsaikhan; Sušnik, Stane; Haymerle, Agnes; Voigt, Christian C; Kaczensky, Petra

    2015-06-15

    In recent years, segmental stable isotope analysis of hair has been a focus of research in animal dietary ecology and migration. To correctly assign tail hair segments to seasons or even Julian dates, information on tail hair growth rates is a key parameter, but is lacking for most species. We (a) reviewed the literature on tail hair growth rates in mammals; (b) made our own measurements on three captive equid species; (c) measured δ(2)H, δ(13)C and δ(15)N values in sequentially cut tail hairs of three sympatric, free-ranging equids from the Mongolian Gobi, using isotope ratio mass spectrometry (IRMS); and (d) collected environmental background data on seasonal variation by measuring δ(2)H values in precipitation by IRMS and by compiling pasture productivity measured by remote sensing via the normalized difference vegetation index (NDVI). Tail hair growth rates showed significant inter- and intra-specific variation, making temporal alignment problematic. In the Mongolian Gobi, high seasonal variation of δ(2)H values in precipitation results in winter lows and summer highs of δ(2)H values of available water sources. In water-dependent equids, this seasonality is reflected in the isotope signatures of sequentially cut tail hairs. In regions which are subject to strong seasonal patterns, we suggest identifying key isotopes which show strong seasonal variation in the environment and can be expected to be reflected in the animal tissue. The known interval between the maxima and minima of these isotope values can then be used to correctly temporally align the segmental stable isotope signature for each individual animal. © 2015 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons Ltd.
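The suggested alignment can be sketched as a linear mapping of segment indices onto dates, anchored at the isotope maximum and minimum for each individual. Segment numbers and anchor dates below are invented for illustration:

```python
# Two anchors per animal: the segment at the d2H maximum is assigned to
# mid-summer and the segment at the minimum to the preceding mid-winter.
# Segment indices count from the hair root (most recent) outward; all
# numbers here are invented for illustration.
seg_max, day_max = 4, 196    # d2H maximum -> day of year 196 (mid-July)
seg_min, day_min = 13, 15    # d2H minimum -> day of year 15 (mid-January)

# The known max-to-min interval fixes this individual's growth rate.
days_per_segment = (day_max - day_min) / (seg_min - seg_max)

def segment_to_day(seg):
    """Linearly map a segment index onto a day of year via the two anchors."""
    return day_max - (seg - seg_max) * days_per_segment

print(f"{days_per_segment:.1f} days of growth per segment")
print(f"segment 8 -> day {segment_to_day(8):.0f}")
```

Because the growth rate is derived per individual, the mapping absorbs the inter- and intra-specific variation that makes a single assumed growth rate problematic.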

  14. A protocol to correct for intra- and interspecific variation in tail hair growth to align isotope signatures of segmentally cut tail hair to a common time line

    PubMed Central

    Burnik Šturm, Martina; Pukazhenthi, Budhan; Reed, Dolores; Ganbaatar, Oyunsaikhan; Sušnik, Stane; Haymerle, Agnes; Voigt, Christian C; Kaczensky, Petra

    2015-01-01

    Rationale In recent years, segmental stable isotope analysis of hair has been a focus of research in animal dietary ecology and migration. To correctly assign tail hair segments to seasons or even Julian dates, information on tail hair growth rates is a key parameter, but is lacking for most species. Methods We (a) reviewed the literature on tail hair growth rates in mammals; (b) made our own measurements on three captive equid species; (c) measured δ2H, δ13C and δ15N values in sequentially cut tail hairs of three sympatric, free-ranging equids from the Mongolian Gobi, using isotope ratio mass spectrometry (IRMS); and (d) collected environmental background data on seasonal variation by measuring δ2H values in precipitation by IRMS and by compiling pasture productivity measured by remote sensing via the normalized difference vegetation index (NDVI). Results Tail hair growth rates showed significant inter- and intra-specific variation, making temporal alignment problematic. In the Mongolian Gobi, high seasonal variation of δ2H values in precipitation results in winter lows and summer highs of δ2H values of available water sources. In water-dependent equids, this seasonality is reflected in the isotope signatures of sequentially cut tail hairs. Conclusions In regions which are subject to strong seasonal patterns, we suggest identifying key isotopes which show strong seasonal variation in the environment and can be expected to be reflected in the animal tissue. The known interval between the maxima and minima of these isotope values can then be used to correctly temporally align the segmental stable isotope signature for each individual animal. © 2015 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons Ltd. PMID:26044272

  15. Uncertainty quantification and propagation of errors of the Lennard-Jones 12-6 parameters for n-alkanes

    PubMed Central

    Knotts, Thomas A.

    2017-01-01

    Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
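The propagation step, from uncertain LJ 12-6 parameters to an uncertain derived property, can be illustrated with the classical second virial coefficient in place of the paper's far more expensive Gibbs Ensemble simulations. The parameter means and the 1% spreads below are assumptions for illustration, not the paper's quantified distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

def b2_lj(eps_over_k, sigma, T):
    """Classical second virial coefficient of an LJ 12-6 fluid, A^3/molecule:
    B2(T) = -2*pi * integral of (exp(-u/kT) - 1) r^2 dr."""
    r = np.linspace(0.5, 30.0, 6000)                  # separation grid, A
    u_over_kT = 4.0 * eps_over_k / T * ((sigma / r) ** 12 - (sigma / r) ** 6)
    integrand = (np.exp(-u_over_kT) - 1.0) * r ** 2
    # trapezoidal rule
    return -2.0 * np.pi * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

# Methane-like united-atom parameters with assumed 1% uncertainties.
eps_samples = rng.normal(148.0, 1.48, 500)    # eps/kB, K
sig_samples = rng.normal(3.73, 0.037, 500)    # sigma, A
b2_samples = np.array([b2_lj(e, s, 300.0) for e, s in zip(eps_samples, sig_samples)])
print(f"B2(300 K) = {b2_samples.mean():.1f} +/- {b2_samples.std():.1f} A^3/molecule")
```

The spread of the sampled B2 values is the propagated uncertainty; the paper performs the analogous exercise for saturated liquid densities and critical constants.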

  16. Displacement back analysis for a high slope of the Dagangshan Hydroelectric Power Station based on BP neural network and particle swarm optimization.

    PubMed

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. How to obtain the mechanical parameters and then evaluate the safety of the slope are the key problems. This paper presented a displacement back analysis for the slope using an artificial neural network model (ANN) and particle swarm optimization model (PSO). A numerical model was established to simulate the displacement increment results, acquiring training data for the artificial neural network model. The backpropagation ANN model was used to establish a mapping function between the mechanical parameters and the monitoring displacements. The PSO model was applied to initialize the weights and thresholds of the backpropagation (BP) network model and determine suitable values of the mechanical parameters. Then the elastic moduli of the rock masses were obtained according to the monitoring displacement data at different excavation stages, and the BP neural network model was proved to be valid by comparing the measured displacements, the displacements predicted by the BP neural network model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for rock mechanical parameters determination and instability investigation of rock slopes.
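The back-analysis loop, a forward model predicting displacements and a particle swarm searching parameter space for the best match to monitoring data, can be sketched in miniature. Here a toy one-parameter forward model stands in for the FEM and its BP surrogate; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model standing in for the FEM/BP surrogate: displacements at
# three monitoring points scale inversely with the elastic modulus E (GPa).
LOADS = np.array([1.0, 2.0, 3.0])

def forward(E):
    return LOADS / E

E_true = 12.0
measured = forward(E_true)            # synthetic "monitoring" displacements

def misfit(E):
    return float(np.sum((forward(E) - measured) ** 2))

# Minimal particle swarm over the 1-D search interval [1, 50] GPa.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = rng.uniform(1.0, 50.0, n)                    # candidate moduli
v = np.zeros(n)
pbest = x.copy()
pbest_f = np.array([misfit(xi) for xi in x])
gbest = pbest[pbest_f.argmin()]
for _ in range(200):
    r1, r2 = rng.random(n), rng.random(n)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 1.0, 50.0)
    f = np.array([misfit(xi) for xi in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]
print(f"back-analysed E = {gbest:.2f} GPa")
```

In the paper the swarm additionally seeds the BP network's weights and thresholds; the search principle, minimizing the misfit between predicted and measured displacements, is the same.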

  17. Displacement Back Analysis for a High Slope of the Dagangshan Hydroelectric Power Station Based on BP Neural Network and Particle Swarm Optimization

    PubMed Central

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. How to obtain the mechanical parameters and then evaluate the safety of the slope are the key problems. This paper presented a displacement back analysis for the slope using an artificial neural network model (ANN) and particle swarm optimization model (PSO). A numerical model was established to simulate the displacement increment results, acquiring training data for the artificial neural network model. The backpropagation ANN model was used to establish a mapping function between the mechanical parameters and the monitoring displacements. The PSO model was applied to initialize the weights and thresholds of the backpropagation (BP) network model and determine suitable values of the mechanical parameters. Then the elastic moduli of the rock masses were obtained according to the monitoring displacement data at different excavation stages, and the BP neural network model was proved to be valid by comparing the measured displacements, the displacements predicted by the BP neural network model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for rock mechanical parameters determination and instability investigation of rock slopes. PMID:25140345

  18. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).

  19. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties

    PubMed Central

    Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field due to tungsten's intrinsic properties. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% theoretical density) were produced using SLM with the optimized processing parameters. A high density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers and parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing. PMID:29707073

  20. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties.

    PubMed

    Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field due to tungsten's intrinsic properties. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% theoretical density) were produced using SLM with the optimized processing parameters. A high density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers and parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing.

  1. Determination of kinetic parameters of 1,3-propanediol fermentation by Clostridium diolis using statistically optimized medium.

    PubMed

    Kaur, Guneet; Srivastava, Ashok K; Chand, Subhash

    2012-09-01

    1,3-Propanediol (1,3-PD) is a chemical compound of immense importance, primarily used as a raw material in the fiber and textile industry. It can be produced by the fermentation of glycerol, available abundantly as a by-product of biodiesel plants. The present study was aimed at determining the key kinetic parameters of 1,3-PD fermentation by Clostridium diolis. Initial experiments on microbial growth inhibition were followed by statistical optimization of the nutrient medium recipe. Batch kinetic data from bioreactor studies using the optimum concentrations of variables obtained from the statistical medium design were used to estimate the kinetic parameters of 1,3-PD production. Direct use of raw glycerol from a biodiesel plant without any pre-treatment for 1,3-PD production, investigated for the first time with this strain in this work, gave results comparable to commercial glycerol. The parameter values obtained in this study will be used to develop a mathematical model for 1,3-PD production, to serve as a guide for designing various reactor operating strategies to further improve 1,3-PD production. An outline of the protocol for model development is discussed in the present work.
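A common kinetic frame for such batch fermentations couples logistic growth with Luedeking-Piret product formation. The sketch below uses that frame with placeholder parameter values, not the estimates obtained in the study:

```python
# Logistic growth coupled to Luedeking-Piret product formation, integrated
# with a simple Euler scheme. All parameter values are placeholders.
mu_max, X_max = 0.35, 4.0      # max specific growth rate (1/h), max biomass (g/L)
alpha, beta = 3.0, 0.05        # growth- / non-growth-associated coefficients

dt, t_end = 0.01, 30.0
X, P = 0.1, 0.0                # initial biomass and 1,3-PD, g/L
for _ in range(int(t_end / dt)):
    dXdt = mu_max * X * (1.0 - X / X_max)   # logistic growth
    dPdt = alpha * dXdt + beta * X          # Luedeking-Piret
    X += dXdt * dt
    P += dPdt * dt
print(f"after {t_end:.0f} h: biomass {X:.2f} g/L, 1,3-PD {P:.1f} g/L")
```

Fitting such a model to the batch kinetic data is what turns the measured time courses into the kinetic parameter estimates the abstract describes.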

  2. Automated inference procedure for the determination of cell growth parameters

    NASA Astrophysics Data System (ADS)

    Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.

    2016-01-01

    The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
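As a point of comparison for the Bayesian framework, a plain least-squares fit of the logistic model to synthetic count data looks like this (the paper's caution applies: such a fit ignores how the counts were actually collected):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def logistic(t, r, K, n0):
    """Logistic growth: n(t) = K / (1 + (K/n0 - 1) * exp(-r t))."""
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

# Synthetic counts with multiplicative noise; an illustrative stand-in for
# real count data.
t = np.linspace(0.0, 48.0, 25)                              # hours
counts = logistic(t, 0.25, 1e6, 5e3) * rng.lognormal(0.0, 0.05, t.size)

popt, pcov = curve_fit(logistic, t, counts, p0=(0.1, 5e5, 1e4))
r_fit, K_fit, n0_fit = popt
print(f"r = {r_fit:.3f}/h, K = {K_fit:.3g}, n0 = {n0_fit:.3g}")
```

The covariance returned by the fit captures correlations between r and K only crudely; the paper's Bayesian treatment of every stage of the counting process is what yields trustworthy parameter correlations and the avoidable-error metric.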

  3. Sediment residence times constrained by uranium-series isotopes: A critical appraisal of the comminution approach

    NASA Astrophysics Data System (ADS)

    Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim

    2013-02-01

    Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages, and hence residence times, of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil lost 234Th and the initial (234U/238U) ratio of the source material. In order to directly compare calculated comminution ages produced by different research groups, standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
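A Monte Carlo propagation of the kind described can be sketched from the standard comminution-age equation; the input means and uncertainties below are illustrative, not the values measured or recommended in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
LAM_234 = np.log(2.0) / 245_250.0     # 234U decay constant, 1/yr

def comminution_age(A_meas, A_src, f_alpha):
    """Invert the standard comminution-age equation,
    A(t) = (1 - f) + (A_src - (1 - f)) * exp(-lam * t), for t in years."""
    return -np.log((A_meas - (1.0 - f_alpha)) / (A_src - (1.0 - f_alpha))) / LAM_234

# Synthetic sample: propagate assumed input uncertainties through 100,000
# draws of the measured ratio, the source-rock ratio, and the recoil loss
# factor (all distributions are illustrative assumptions).
A_meas = rng.normal(0.940, 0.003, 100_000)   # measured (234U/238U)
A_src = rng.normal(1.000, 0.005, 100_000)    # source-rock (234U/238U)
f_a = rng.normal(0.100, 0.005, 100_000)      # recoil loss factor
ages_ka = comminution_age(A_meas, A_src, f_a) / 1000.0
print(f"comminution age = {ages_ka.mean():.0f} +/- {ages_ka.std():.0f} ka")
```

Even these modest input spreads yield an age uncertainty of tens of ka, illustrating the paper's point that small changes in assumed inputs translate into large effects on absolute comminution age.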

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana Kelly; Kurt Vedros; Robert Youngblood

    This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be a false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and “false” would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = √10 × 10^-6/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10^-6/yr. If the parameters were at their baseline values and ΔCDF > 10^-6/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10^-6/yr but that time history's ΔCDF < 10^-6/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values, and three sensitivity cases in which the number of FOTP demands was reduced, along with the Birnbaum importance of the FOTP.
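The counting step can be sketched with a toy Beta-Binomial stand-in for the MSPI calculation; the demand count, baseline probability, threshold, and choice of a Jeffreys prior here are illustrative, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the false-indication calculation: simulate many time
# histories of component demands at the baseline failure probability, apply
# a Beta-Binomial Bayesian correction, and count how often the corrected
# estimate crosses the indication threshold. All numbers are illustrative.
p_base, demands, n_hist = 2e-3, 500, 100_000
threshold = 6e-3                      # stand-in for the delta-CDF criterion

failures = rng.binomial(demands, p_base, n_hist)
post_mean = (failures + 0.5) / (demands + 1.0)    # Jeffreys Beta(0.5, 0.5)
false_positive = float(np.mean(post_mean > threshold))
print(f"estimated false positive probability: {false_positive:.4f}")
```

Swapping the Jeffreys update for a different prior changes `post_mean` and hence the estimated false indication probability, which is exactly the prior-sensitivity question the paper explores.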

  5. The power and robustness of maximum LOD score statistics.

    PubMed

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
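For a phase-known, fully informative setting, the LOD score at recombination fraction theta with R recombinant and NR non-recombinant meioses is LOD(theta) = log10(L(theta)/L(1/2)); maximizing over theta gives the statistic discussed above. The counts below are invented for illustration:

```python
import numpy as np

def lod(theta, R, NR):
    """LOD(theta) = log10(L(theta) / L(0.5)) for R recombinant and NR
    non-recombinant phase-known meioses."""
    return (R * np.log10(theta) + NR * np.log10(1.0 - theta)
            - (R + NR) * np.log10(0.5))

R, NR = 2, 18                          # invented counts
thetas = np.linspace(0.01, 0.49, 500)  # grid over the parameter range
scores = lod(thetas, R, NR)
theta_hat = thetas[scores.argmax()]
print(f"max LOD = {scores.max():.2f} at theta = {theta_hat:.2f}")
```

Maximizing over the whole theta grid, rather than evaluating at a few fixed values, is the distinction whose power cost and critical-value inflation the paper quantifies.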

  6. Evaluation of gastric processing and duodenal digestion of starch in six cereal meals on the associated glycaemic response using an adult fasted dynamic gastric model.

    PubMed

    Ballance, Simon; Sahlstrøm, Stefan; Lea, Per; Nagy, Nina E; Andersen, Petter V; Dessev, Tzvetelin; Hull, Sarah; Vardakou, Maria; Faulks, Richard

    2013-03-01

    The aim was to identify the key parameters involved in cereal starch digestion and the associated glycaemic response by using a dynamic gastro-duodenal digestion model. Potential plasma glucose loading curves for each meal were calculated and fitted to an exponential function. The area under the curve (AUC) from 0 to 120 min and the total digestible starch were used to calculate an in vitro glycaemic index (GI) value normalised against white bread. Microscopy was additionally used to examine cereal samples collected in vitro at different stages of gastric and duodenal digestion. Where in vivo GI data were available (4 out of 6 cereal meals), no significant difference was observed between these values and the corresponding calculated in vitro GI values. It is possible to simulate an in vivo glycaemic response for cereals when the gastric emptying rate (duodenal loading) and the kinetics of digestible starch hydrolysis in the duodenum are known.
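The GI computation reduces to trapezoidal areas under the 0-120 min glucose loading curves, normalised to white bread. The curves below are invented for illustration, not the study's data:

```python
import numpy as np

t = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 90.0, 120.0])   # minutes
# Glucose loading curves (mmol/L rise above baseline); invented values.
bread = np.array([0.0, 1.8, 2.9, 3.2, 3.0, 2.2, 1.5])      # white bread reference
meal = np.array([0.0, 1.0, 1.9, 2.4, 2.4, 2.0, 1.4])       # test cereal meal

def auc(y, x):
    """Trapezoidal area under the curve."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

gi = 100.0 * auc(meal, t) / auc(bread, t)
print(f"in vitro GI = {gi:.0f}")
```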

  7. The use and misuse of V(c,max) in Earth System Models.

    PubMed

    Rogers, Alistair

    2014-02-01

    Earth System Models (ESMs) aim to project global change. Central to this aim is the need to accurately model global carbon fluxes. Photosynthetic carbon dioxide assimilation by the terrestrial biosphere is the largest of these fluxes, and in many ESMs is represented by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. The maximum rate of carboxylation by the enzyme Rubisco, commonly termed Vc,max, is a key parameter in the FvCB model. This study investigated the derivation of the values of Vc,max used to represent different plant functional types (PFTs) in ESMs. Four methods for estimating Vc,max were identified: (1) an empirical or (2) a mechanistic relationship was used to relate Vc,max to leaf N content, (3) Vc,max was estimated using an approach based on the optimization of photosynthesis and respiration, or (4) a user-defined Vc,max was calibrated to obtain a target model output. Despite representing the same PFTs, the land model components of ESMs were parameterized with a wide range of values for Vc,max (-46 to +77% of the PFT mean). In many cases, parameterization was based on limited data sets and poorly defined coefficients that were used to adjust model parameters and set PFT-specific values for Vc,max. Examination of the models that linked leaf N mechanistically to Vc,max identified potential changes to fixed parameters that collectively would decrease Vc,max by 31% in C3 plants and 11% in C4 plants. Plant trait databases are now available that offer an excellent opportunity for models to update PFT-specific parameters used to estimate Vc,max. However, data for parameterizing some PFTs, particularly those in the Tropics and the Arctic, are either highly variable or largely absent.
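In the FvCB model, Vc,max enters through the Rubisco-limited assimilation rate, Ac = Vc,max (Ci - Gamma*) / (Ci + Kc (1 + O/Ko)) - Rd. A sketch with typical 25 C kinetic constants (assumed common literature values, not taken from this study) shows how spread in Vc,max maps onto modelled assimilation:

```python
# Rubisco-limited net assimilation in the FvCB model:
#   Ac = Vcmax * (Ci - Gamma*) / (Ci + Kc * (1 + O / Ko)) - Rd
# Kinetic constants are typical 25 C literature values (assumed here);
# Ci, Kc, Gamma* in ubar, O and Ko in mbar, rates in umol m-2 s-1.
def rubisco_limited_a(vcmax, ci, gamma_star=42.75, kc=404.9, ko=278.4,
                      o=210.0, rd=1.0):
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko)) - rd

# A wide spread in Vcmax (cf. the -46% to +77% range across ESM PFTs)
# maps almost linearly onto modelled assimilation at fixed Ci:
for vcmax in (30.0, 60.0, 90.0):
    print(f"Vcmax = {vcmax:4.0f} -> Ac = {rubisco_limited_a(vcmax, ci=270.0):.1f}")
```

Because Ac scales almost linearly with Vc,max at fixed Ci, the parameterization spread documented above translates nearly one-for-one into spread in the modelled carbon flux.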

  8. Optical pulse characteristics of sonoluminescence at low acoustic drive levels.

    PubMed

    Arakeri, V H; Giri, A

    2001-06-01

    From a nonaqueous alkali-metal salt solution, it is possible to observe sonoluminescence (SL) at low acoustic drive levels with the ratio of the acoustic pressure amplitude to the ambient pressure being about 1. In this case, the emission has a narrowband spectral content and consists of a few flashes of light from a levitated gas bubble going through an unstable motion. A systematic statistical study of the optical pulse characteristics of this form of SL is reported here. The results support our earlier findings [Phys. Rev. E 58, R2713 (1998)], but in addition we have clearly established a variation in the optical pulse duration with certain physical parameters such as the gas thermal conductivity. Quantitatively, the SL optical pulse width is observed to vary from 10 ns to 165 ns with the most probable value being 82 ns, for experiments with krypton-saturated sodium salt ethylene glycol solution. With argon, the variation is similar to that of krypton but the most probable value is reduced to 62 ns. The range is significantly smaller with helium, being from 22 ns to 65 ns with the most probable value also being reduced to 42 ns. The observed large variation, for example with krypton, under otherwise fixed controllable experimental parameters indicates that it is an inherent property of the observed SL process, which is transient in nature. It is this feature that necessitated our statistical study. Numerical simulations of the SL process using the bubble dynamics approach of Kamath, Prosperetti, and Egolfopoulos [J. Acoust. Soc. Am. 94, 248 (1993)] suggest that a key uncontrolled parameter, namely the initial bubble radius, may be responsible for the observations. In spite of the fact that certain parameters in the numerical computations have to be fixed from a best fit to one set of experimental data, the observed overall experimental trends of optical pulse characteristics are predicted reasonably well.

  9. Optical pulse characteristics of sonoluminescence at low acoustic drive levels

    NASA Astrophysics Data System (ADS)

    Arakeri, Vijay H.; Giri, Asis

    2001-06-01

    From a nonaqueous alkali-metal salt solution, it is possible to observe sonoluminescence (SL) at low acoustic drive levels with the ratio of the acoustic pressure amplitude to the ambient pressure being about 1. In this case, the emission has a narrowband spectral content and consists of a few flashes of light from a levitated gas bubble going through an unstable motion. A systematic statistical study of the optical pulse characteristics of this form of SL is reported here. The results support our earlier findings [Phys. Rev. E 58, R2713 (1998)], but in addition we have clearly established a variation in the optical pulse duration with certain physical parameters such as the gas thermal conductivity. Quantitatively, the SL optical pulse width is observed to vary from 10 ns to 165 ns with the most probable value being 82 ns, for experiments with krypton-saturated sodium salt ethylene glycol solution. With argon, the variation is similar to that of krypton but the most probable value is reduced to 62 ns. The range is significantly smaller with helium, being from 22 ns to 65 ns with the most probable value also being reduced to 42 ns. The observed large variation, for example with krypton, under otherwise fixed controllable experimental parameters indicates that it is an inherent property of the observed SL process, which is transient in nature. It is this feature that necessitated our statistical study. Numerical simulations of the SL process using the bubble dynamics approach of Kamath, Prosperetti, and Egolfopoulos [J. Acoust. Soc. Am. 94, 248 (1993)] suggest that a key uncontrolled parameter, namely the initial bubble radius, may be responsible for the observations. In spite of the fact that certain parameters in the numerical computations have to be fixed from a best fit to one set of experimental data, the observed overall experimental trends of optical pulse characteristics are predicted reasonably well.
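    Rayleigh's classical collapse-time formula gives a quick feel for why the initial bubble radius is such a sensitive uncontrolled parameter: the collapse timescale is directly proportional to R0. The sketch below illustrates only this scaling; it is not the Kamath-Prosperetti-Egolfopoulos bubble-dynamics model used in the paper, and the radii and fluid properties are hypothetical.

```python
import math

def rayleigh_collapse_time(r0, rho=1000.0, p_inf=101325.0):
    """Rayleigh collapse time of an empty cavity of initial radius r0 (m)
    in a liquid of density rho (kg/m^3) at ambient pressure p_inf (Pa)."""
    return 0.915 * r0 * math.sqrt(rho / p_inf)

# Doubling the initial radius doubles the collapse timescale.
t_small = rayleigh_collapse_time(2e-6)
t_large = rayleigh_collapse_time(4e-6)
```

    An uncontrolled spread of initial radii therefore maps directly onto a spread of bubble timescales, consistent with the wide pulse-width distributions reported above.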

  10. KEY COMPARISON: Final report of APMP.T-K6 (original name APMP-IC-1-97): Comparison of humidity measurements using a dew point meter as a transfer standard

    NASA Astrophysics Data System (ADS)

    Li, Wang; Takahashi, C.; Hussain, F.; Hong, Yi; Nham, H. S.; Chan, K. H.; Lee, L. T.; Chahine, K.

    2007-01-01

    This APMP key comparison of humidity measurements using a dew point meter as a transfer standard was carried out among eight national metrology institutes from February 1999 to January 2001. The NMC/SPRING, Singapore was the pilot laboratory and a chilled mirror dew point meter offered by NMIJ was used as a transfer standard. The transfer standard was calibrated by each participating institute against local humidity standards in terms of frost and dew point temperature. Each institute selected its frost/dew point temperature calibration points within the range from -70 °C to 20 °C frost/dew point in 5 °C steps. The majority of participating institutes measured from -60 °C to 20 °C frost/dew point and a simple mean evaluation was performed in this range. The differences between the institute values and the simple means for all participating institutes are within two standard deviations from the mean values. Bilateral equivalence was analysed in terms of pair difference and single-parameter Quantified Demonstrated Equivalence. The results are presented in the report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
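    The report's equivalence criterion, that each institute's value lie within two standard deviations of the simple mean, can be sketched as follows; the readings and the `equivalence_check` helper are hypothetical, not data from the comparison.

```python
import statistics

def equivalence_check(values):
    """Flag whether each value lies within two standard deviations
    of the simple mean of all values."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [abs(v - mean) <= 2 * sd for v in values]

# Hypothetical frost-point results from five institutes, in °C.
readings = [-59.98, -60.02, -60.05, -59.95, -60.00]
flags = equivalence_check(readings)
```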

  11. Mathematical analysis of a lymphatic filariasis model with quarantine and treatment.

    PubMed

    Mwamtobe, Peter M; Simelane, Simphiwe M; Abelman, Shirley; Tchuenche, Jean M

    2017-03-16

    Lymphatic filariasis is a globally neglected tropical parasitic disease which affects individuals of all ages and leads to an altered lymphatic system and abnormal enlargement of body parts. A mathematical model of lymphatic filariasis with intervention strategies is developed and analyzed. Control of infections is analyzed within the model through medical treatment of infected-acute individuals and quarantine of infected-chronic individuals. We derive the effective reproduction number, [Formula: see text], and its interpretation suggests that treatment contributes to a reduction in lymphatic filariasis cases faster than quarantine. However, this reduction is greater when the two intervention approaches are applied concurrently. Numerical simulations are carried out to monitor the dynamics of the filariasis model sub-populations for various parameter values of the associated reproduction threshold. Lastly, sensitivity analysis on key parameters that drive the disease dynamics is performed in order to identify their relative importance to disease transmission.
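    As a toy illustration of how the two interventions compound, one can scale a baseline reproduction number by treatment and quarantine factors; the expression and all parameter values below are hypothetical stand-ins, not the paper's derived effective reproduction number.

```python
def r_eff(r0, tau, q):
    """Toy effective reproduction number: a baseline R0 scaled down by
    treatment coverage tau (acute cases) and quarantine fraction q
    (chronic cases). Purely illustrative, not the paper's expression."""
    return r0 * (1.0 - tau) * (1.0 - q)

baseline = r_eff(2.5, 0.0, 0.0)  # no intervention
treated = r_eff(2.5, 0.4, 0.0)   # treatment only
both = r_eff(2.5, 0.4, 0.3)      # treatment and quarantine combined
```

    Either intervention lowers the threshold, and applying both concurrently lowers it further, mirroring the qualitative conclusion of the abstract.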

  12. Genuine binding energy of the hydrated electron

    PubMed Central

    Luckhaus, David; Yamamoto, Yo-ichi; Suzuki, Toshinori; Signorell, Ruth

    2017-01-01

    The unknown influence of inelastic and elastic scattering of slow electrons in water has made it difficult to clarify the role of the solvated electron in radiation chemistry and biology. We combine accurate scattering simulations with experimental photoemission spectroscopy of the hydrated electron in a liquid water microjet, with the aim of resolving ambiguities regarding the influence of electron scattering on binding energy spectra, photoelectron angular distributions, and probing depths. The scattering parameters used in the simulations are retrieved from independent photoemission experiments of water droplets. For the ground-state hydrated electron, we report genuine values devoid of scattering contributions for the vertical binding energy and the anisotropy parameter of 3.7 ± 0.1 eV and 0.6 ± 0.2, respectively. Our probing depths suggest that even vacuum ultraviolet probing is not particularly surface-selective. Our work demonstrates the importance of quantitative scattering simulations for a detailed analysis of key properties of the hydrated electron. PMID:28508051

  13. The frequency response of dynamic friction: Enhanced rate-and-state models

    NASA Astrophysics Data System (ADS)

    Cabboi, A.; Putelat, T.; Woodhouse, J.

    2016-07-01

    The prediction and control of friction-induced vibration requires a sufficiently accurate constitutive law for dynamic friction at the sliding interface: for linearised stability analysis, this requirement takes the form of a frictional frequency response function. Systematic measurements of this frictional frequency response function are presented for small samples of nylon and polycarbonate sliding against a glass disc. Previous efforts to explain such measurements from a theoretical model have failed, but an enhanced rate-and-state model is presented which is shown to match the measurements remarkably well. The tested parameter space covers a range of normal forces (10-50 N), of sliding speeds (1-10 mm/s) and frequencies (100-2000 Hz). The key new ingredient in the model is the inclusion of contact stiffness to take into account elastic deformations near the interface. A systematic methodology is presented to discriminate among possible variants of the model, and then to identify the model parameter values.
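    The classical rate-and-state framework that the enhanced model builds on predicts a steady-sliding friction coefficient mu_ss = mu0 + (a - b) ln(v/v0), so friction weakens with speed whenever b > a. A minimal sketch with illustrative parameter values follows; the paper's key enhancement, the interfacial contact stiffness, is not modeled here.

```python
import math

def mu_ss(v, mu0=0.6, a=0.010, b=0.015, v0=1e-3):
    """Steady-state rate-and-state friction coefficient,
    mu_ss = mu0 + (a - b) * ln(v / v0); parameter values illustrative."""
    return mu0 + (a - b) * math.log(v / v0)

mu_slow = mu_ss(1e-3)  # at the reference speed v0
mu_fast = mu_ss(1e-2)  # ten times faster: velocity weakening since b > a
```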

  14. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  15. Advanced control of dissolved oxygen concentration in fed batch cultures during recombinant protein production.

    PubMed

    Kuprijanov, A; Gnoth, S; Simutis, R; Lübbert, A

    2009-02-01

    Design and experimental validation of advanced pO2 controllers for fermentation processes operated in the fed-batch mode are described. In most situations, the presented controllers are able to keep the pO2 in fermentations for recombinant protein production exactly at the desired value. The controllers are based on the gain-scheduling approach to parameter-adaptive proportional-integral controllers. In order to cope with the most common disturbances, the basic gain-scheduling feedback controller was complemented with a feedforward control component. This feedforward/feedback controller significantly improved pO2 control. By means of numerical simulations, the controller behavior was tested and its parameters were determined. Validation runs were performed with three Escherichia coli strains producing different recombinant proteins. It is finally shown that the new controller leads to significant improvements in the signal-to-noise ratio of other key process variables and, thus, to a higher process quality.
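    A gain-scheduled PI controller with a feedforward term can be sketched as below; the scheduling variable, gain schedule, and disturbance model are all hypothetical, not the published controller.

```python
def pi_gains(feed_rate):
    """Schedule PI gains on the substrate feed rate (hypothetical schedule)."""
    kp = 0.8 + 0.5 * feed_rate
    ki = 0.05 + 0.02 * feed_rate
    return kp, ki

def pi_step(error, integral, feed_rate, dt=1.0):
    """One control update: gain-scheduled PI action plus a feedforward
    term compensating the known feed disturbance (hypothetical model)."""
    kp, ki = pi_gains(feed_rate)
    integral += error * dt
    feedforward = 0.1 * feed_rate
    return kp * error + ki * integral + feedforward, integral

u, integ = pi_step(error=5.0, integral=0.0, feed_rate=2.0)
```

    Scheduling the gains on the feed rate lets the loop stay tuned as the culture's oxygen demand grows, which is the practical point of the gain-scheduling approach.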

  16. Morphological leaf variability in natural populations of Pistacia atlantica Desf. subsp. atlantica along climatic gradient: new features to update Pistacia atlantica subsp. atlantica key.

    PubMed

    El Zerey-Belaskri, Asma; Benhassaini, Hachemi

    2016-04-01

    The effect of bioclimate range on the variation in Pistacia atlantica Desf. subsp. atlantica leaf morphology was studied at 16 sites in Northwest Algeria. A total of 3520 mature compound leaves were examined biometrically. Fifteen characters (10 quantitative and 5 qualitative) were assessed on each leaf. For each quantitative character, nested analysis of variance (ANOVA) was used to examine the relative magnitude of variation at each level of the nested hierarchy. The correlation between the climatic parameters and the leaf morphology was examined. The statistical analysis applied to the quantitative leaf characters showed highly significant variation both within and between sites. The correlation coefficient (r) also showed a strong correlation between climatic parameters and leaf morphology. The results of this study exhibited several values reported for the first time for the species, such as the length and the width of the leaf (reaching up to 24.5 cm/21.9 cm), the number of leaflets (up to 18 leaflets/leaf), and the petiole length of the terminal leaflet (reaching up to 3.4 cm). The original findings of this study are used to update the P. atlantica subsp. atlantica identification key.
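    The correlation analysis reduces to computing Pearson's r between each climatic parameter and a leaf character; a self-contained sketch with hypothetical site means:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rainfall = [310, 380, 420, 500, 560]       # hypothetical site means, mm
leaf_len = [14.2, 15.0, 16.1, 18.3, 19.0]  # hypothetical mean leaf length, cm
r = pearson_r(rainfall, leaf_len)
```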

  17. Key parameters and practices controlling pesticide degradation efficiency of biobed substrates.

    PubMed

    Karanasios, Evangelos; Karpouzas, Dimitrios G; Tsiropoulos, Nikolaos G

    2012-01-01

    We studied the contribution of each of the components of a compost-based biomixture (BX), commonly used in Europe, on pesticide degradation. The impact of other key parameters including pesticide dose, temperature and repeated applications on the degradation of eight pesticides, applied as a mixture, in a BX and a peat-based biomixture (OBX) was compared and contrasted to their degradation in soil. Incubation studies showed that straw was essential in maintaining a high pesticide degradation capacity of the biomixture, whereas compost, when mixed with soil, retarded pesticide degradation. The highest rates of degradation were shown in the biomixture composed of soil/compost/straw suggesting that all three components are essential for maximum biobed performance. Increasing doses prolonged the persistence of most pesticides with biomixtures showing a higher tolerance to high pesticide dose levels compared to soil. Increasing the incubation temperature from 15 °C to 25 °C resulted in lower t1/2 values, with biomixtures performing better than soil at the lower temperature. Repeated applications led to a decrease in the degradation rates of most pesticides in all the substrates, with the exception of iprodione and metalaxyl. Overall, our results stress the ability of biomixtures to perform better than soil under unfavorable conditions and extreme pesticide dose levels. Copyright © Taylor & Francis Group, LLC
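    Pesticide dissipation in such studies is commonly summarized by first-order half-lives, t1/2 = ln 2 / k, so the lower t1/2 at 25 °C corresponds to a larger rate constant. A sketch with hypothetical rate constants:

```python
import math

def half_life(k):
    """First-order degradation half-life from rate constant k (per day)."""
    return math.log(2) / k

t12_15C = half_life(0.023)  # hypothetical rate constant at 15 °C
t12_25C = half_life(0.058)  # hypothetical rate constant at 25 °C
```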

  18. Revised age estimates of the Euphrosyne family

    NASA Astrophysics Data System (ADS)

    Carruba, Valerio; Masiero, Joseph R.; Cibulková, Helena; Aljbaae, Safwan; Espinoza Huaman, Mariela

    2015-08-01

    The Euphrosyne family, a high inclination asteroid family in the outer main belt, is considered one of the most peculiar groups of asteroids. It is characterized by the steepest size frequency distribution (SFD) among families in the main belt, and it is the only family crossed near its center by the ν6 secular resonance. Previous studies have shown that the steep size frequency distribution may be the result of the dynamical evolution of the family. In this work we further explore the unique dynamical configuration of the Euphrosyne family by refining the previous age values, considering the effects of changes in shapes of the asteroids during the YORP cycle ("stochastic YORP"), the long-term effect of close encounters of family members with (31) Euphrosyne itself, and the effect that changing key parameters of the Yarkovsky force (such as density and thermal conductivity) has on the estimate of the family age obtained using Monte Carlo methods. Numerical simulations accounting for the interaction with the local web of secular and mean-motion resonances allow us to refine previous estimates of the family age. The cratering event that formed the Euphrosyne family most likely occurred between 560 and 1160 Myr ago, and no earlier than 1400 Myr ago when we allow for larger uncertainties in the key parameters of the Yarkovsky force.

  19. Clinical trial allocation in multinational pharmaceutical companies - a qualitative study on influential factors.

    PubMed

    Dombernowsky, Tilde; Haedersdal, Merete; Lassen, Ulrik; Thomsen, Simon F

    2017-06-01

    Clinical trial allocation in multinational pharmaceutical companies includes country selection and site selection. With emphasis on site selection, the overall aim of this study was to examine which factors pharmaceutical companies value most when allocating clinical trials. The specific aims were (1) to identify key decision makers during country and site selection, respectively, (2) to evaluate by which parameters subsidiaries are primarily assessed by headquarters with regard to conducting clinical trials, and (3) to evaluate which site-related qualities companies value most when selecting trial sites. Eleven semistructured interviews were conducted among employees engaged in trial allocation at 11 pharmaceutical companies. The interviews were analyzed by deductive content analysis, which included coding of data to a categorization matrix containing categories of site-related qualities. The results suggest that headquarters and regional departments are key decision makers during country selection, whereas subsidiaries decide on site selection. Study participants argued that headquarters primarily value timely patient recruitment and quality of data when assessing subsidiaries. The site-related qualities most commonly emphasized during interviews were study population availability, timely patient recruitment, resources at the site, and site personnel's interest and commitment. Costs of running the trials were described as less important. Site personnel experience in conducting trials was described as valuable but not imperative. In conclusion, multinational pharmaceutical companies consider recruitment-related factors as crucial when allocating clinical trials. Quality of data and site personnel's interest and commitment are also essential, whereas costs seem less important. While valued, site personnel experience in conducting clinical trials is not imperative.

  20. The Value Added National Project. Technical Report: Primary 4. Value-Added Key Stage 1 to Key Stage 2.

    ERIC Educational Resources Information Center

    Tymms, Peter

    This is the fourth in a series of technical reports that have dealt with issues surrounding the possibility of national value-added systems for primary schools in England. The main focus has been on the relative progress made by students between the ends of Key Stage 1 (KS1) and Key Stage 2 (KS2). The analysis has indicated that the strength of…

  1. MHD Flow and Heat Transfer of a Generalized Burgers’ Fluid due to a Periodic Oscillating and Periodic Heating Plate

    NASA Astrophysics Data System (ADS)

    Bai, Yu; Jiang, Yue-Hua; Zhang, Yan; Zhao, Hao-Jie

    2017-10-01

    This paper investigates the MHD flow and heat transfer of an incompressible generalized Burgers’ fluid due to a periodically oscillating plate, including the effects of second-order slip and a periodically heated plate. The momentum equation is formulated with multi-term fractional derivatives, and, by means of viscous dissipation, the fractional derivative is considered in the energy equation. A finite difference scheme is established based on the G1-algorithm, whose convergence is confirmed by comparison with the analytical solution in an example. Meanwhile the numerical solutions of velocity, temperature and shear stress are obtained. The effects of the involved parameters on the velocity and temperature fields are presented graphically and analyzed in detail. Increasing the fractional derivative parameter α decreases the velocity and temperature, whereas the fractional derivative parameter β has the opposite influence. Increasing the absolute value of either the first-order or the second-order slip parameter causes a decrease in velocity. Furthermore, with the decreasing of the magnetic parameter, the shear stress decreases. Supported by the National Natural Science Foundation of China under Grant Nos. 21576023, 51406008, the National Key Research Program of China under Grant Nos. 2016YFC0700601, 2016YFC0700603 and the BUCEA Post Graduate Innovation Project (PG2017032)
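    G1-type finite difference schemes for fractional derivatives are commonly built on Grünwald-Letnikov weights, generated by a simple recursion; the sketch below shows only the weight generation, under the assumption that the paper's scheme uses this standard construction.

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights g_k for a fractional derivative of
    order alpha, via the standard recursion
    g_0 = 1,  g_k = (1 - (alpha + 1) / k) * g_{k-1}."""
    g = [1.0]
    for k in range(1, n):
        g.append((1.0 - (alpha + 1.0) / k) * g[-1])
    return g

w = gl_weights(0.5, 6)  # first six weights for order alpha = 0.5
```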

  2. Monte Carlo sensitivity analysis of unknown parameters in hazardous materials transportation risk assessment.

    PubMed

    Pet-Armacost, J J; Sepulveda, J; Sakude, M

    1999-12-01

    The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
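    The structure of the analysis, sampling outcomes from an event tree under different settings of the unknown probabilities, can be sketched as follows; the tree, probabilities, and outcome labels are hypothetical and far simpler than the study's fault/event trees.

```python
import random

def one_trial(p_release, p_ignite):
    """One pass through a toy two-branch event tree: release, then ignition."""
    if random.random() < p_release:
        return "fire" if random.random() < p_ignite else "toxic_exposure"
    return "no_release"

def fire_risk(p_release, p_ignite, trials=20000, seed=1):
    """Estimate the fire frequency for one setting of the unknown parameters."""
    random.seed(seed)
    fires = sum(one_trial(p_release, p_ignite) == "fire" for _ in range(trials))
    return fires / trials

low = fire_risk(0.01, 0.1)   # hypothetical: relief device limits ignition
high = fire_risk(0.01, 0.5)  # hypothetical: no relief device
```

    Sweeping the unknown probabilities over plausible ranges and comparing outcome frequencies is the essence of the Monte Carlo sensitivity approach described above.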

  3. Bank Erosion Vulnerability Zonation (BEVZ) -A Proposed Method of Preparing Bank Erosion Zonation and Its Application on the River Haora, Tripura, India

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Shreya; de, Sunil Kumar

    2014-05-01

    In the present paper an attempt has been made to propose an RS-GIS based method for erosion vulnerability zonation of an entire river, based on simple techniques that require very little field investigation. The method consists of 8 parameters: rainfall erosivity, lithological factor, bank slope, meander index, river gradient, soil erosivity, vegetation cover and anthropogenic impact. Meteorological data, GSI maps, LISS III (30 m resolution), SRTM DEM (56 m resolution) and Google Images have been used to determine rainfall erosivity, lithological factor, bank slope, meander index, river gradient, vegetation cover and anthropogenic impact; the soil map of the NBSSLP, India, has been used for assessing the soil erosivity index. By integrating the individual values of six of these parameters (the first two parameters remain constant for this particular study area), a bank erosion vulnerability zonation map of the River Haora, Tripura, India (23°37' - 23°53'N and 91°15'-91°37'E) has been prepared. The values have been compared with the existing BEHI-NBS method at 60 spots and also with field data from 30 cross sections (covering the 60 spots) taken along a 51 km stretch of the river in Indian Territory, and the estimated values were found to match the existing method as well as the field data. The whole stretch has been divided into 5 hazard zones, i.e. Very High, High, Moderate, Low and Very Low Hazard Zones, covering 5.66 km, 16.81 km, 40.82 km, 29.67 km and 9.04 km respectively. KEY WORDS: Bank erosion, Bank Erosion Hazard Index (BEHI), Near Bank Stress (NBS), Erosivity, Bank Erosion Vulnerability Zonation.
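    The zonation step, integrating per-parameter scores and binning the total into five hazard classes, can be sketched as below; the 1-5 rating scale and the bin boundaries are hypothetical, not the paper's actual scoring scheme.

```python
def hazard_zone(scores):
    """Sum per-parameter vulnerability scores (each rated 1-5 here,
    hypothetical rating scheme) and bin the total into five zones."""
    total = sum(scores)
    bins = [(8, "Very Low"), (14, "Low"), (20, "Moderate"), (26, "High")]
    for upper, zone in bins:
        if total <= upper:
            return zone
    return "Very High"

# Hypothetical ratings for the eight parameters at one bank segment.
zone = hazard_zone([2, 3, 1, 2, 2, 1, 2, 1])
```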

  4. Reservoir Identification: Parameter Characterization or Feature Classification

    NASA Astrophysics Data System (ADS)

    Cao, J.

    2017-12-01

    The ultimate goal of oil and gas exploration is to find oil or gas reservoirs with industrial mining value. Therefore, the core task of modern oil and gas exploration is to identify oil or gas reservoirs on seismic profiles. Traditionally, the reservoir is identified by seismic inversion of a series of physical parameters such as porosity, saturation, permeability, formation pressure, and so on. Due to the heterogeneity of the geological medium, the approximation of the inversion model, and the incompleteness and noisiness of the data, the inversion results are highly uncertain and must be calibrated or corrected with well data. In areas where there are few wells or no wells, reservoir identification based on seismic inversion is high-risk. Reservoir identification is essentially a classification issue. In the identification process, the underground rocks are divided into reservoirs with industrial mining value and host rocks with non-industrial mining value. In addition to the traditional physical parameter classification, the classification may be achieved using one or a few comprehensive features. By introducing the concept of the seismic-print, we have developed a new reservoir identification method based on seismic-print analysis. Furthermore, we explore the possibility of using deep learning to discover the seismic-print characteristics of oil and gas reservoirs. Preliminary experiments have shown that deep learning of seismic data can distinguish gas reservoirs from host rocks. The combination of seismic-print analysis and seismic deep learning is expected to be a more robust reservoir identification method. The work was supported by NSFC under grant No. 41430323 and No. U1562219, and the National Key Research and Development Program under Grant No. 2016YFC0601

  5. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms.

    PubMed

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, yet accurate, metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10⁻⁵ for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general-purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, and sensitivity analyses can be performed (the influence of each parameter on the output waveform).

  6. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms

    PubMed Central

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, yet accurate, metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10⁻⁵ for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general-purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, and sensitivity analyses can be performed (the influence of each parameter on the output waveform). PMID:26745370
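    The first two metamodel ingredients, a wavelet decomposition followed by selection of the most relevant coefficients, can be sketched with a one-level Haar transform; this is an illustrative stand-in, since the paper's genetic algorithm chooses the wavelet and coefficients automatically.

```python
def haar_decompose(signal):
    """One level of a Haar wavelet transform of an even-length signal:
    pairwise averages (approximation) and half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def top_coefficients(coeffs, k):
    """Keep the k largest-magnitude coefficients, zeroing the rest."""
    keep = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)[:k]
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

a, d = haar_decompose([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 2.0])
sparse = top_coefficients(d, 1)
```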

  7. Bayes Analysis and Reliability Implications of Stress-Rupture Testing a Kevlar/Epoxy COPV Using Temperature and Pressure Acceleration

    NASA Technical Reports Server (NTRS)

    Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.

    2009-01-01

    Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification is dependent on the reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed and is impractical given the cost and limited flight hardware for certification testing purposes. One approach to confirm the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar 49 (DuPont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure, and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty on the Weibull shape parameter for lifetime since testing several vessels would be necessary.
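    The core Bayesian argument, that continued survival shifts posterior weight toward the optimistic stress-ratio model, can be sketched with Weibull survival likelihoods; the shape, scale, and prior values below are hypothetical, not the parameters of the COPV analysis.

```python
import math

def weibull_survival(t, beta, eta):
    """Weibull survival probability at time t (shape beta, scale eta)."""
    return math.exp(-((t / eta) ** beta))

def posterior_optimistic(t, prior=0.5):
    """Posterior probability of the optimistic stress-ratio model after
    observing survival to time t; all parameter values are hypothetical."""
    like_opt = weibull_survival(t, beta=1.2, eta=50.0)  # optimistic: long lives
    like_pes = weibull_survival(t, beta=1.2, eta=10.0)  # pessimistic: short lives
    num = prior * like_opt
    return num / (num + (1 - prior) * like_pes)

early = posterior_optimistic(1.0)
late = posterior_optimistic(20.0)
```

    The longer the vessel survives without failure, the larger the survival likelihood ratio in favor of the long-lifetime model, so the posterior probability of the optimistic model grows, exactly the qualitative effect described above.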

  8. Description of the National Hydrologic Model for use with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.

    2018-01-08

    This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or computed as initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally-consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially-distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.

  9. Comparison of soil moisture retrieval algorithms based on the synergy between SMAP and SMOS-IC

    NASA Astrophysics Data System (ADS)

    Ebrahimi-Khusfi, Mohsen; Alavipanah, Seyed Kazem; Hamzeh, Saeid; Amiraslani, Farshad; Neysani Samany, Najmeh; Wigneron, Jean-Pierre

    2018-05-01

This study was carried out to evaluate possible improvements of the soil moisture (SM) retrievals from the SMAP observations, based on the synergy between SMAP and SMOS. We assessed the impacts of the vegetation and soil roughness parameters on SM retrievals from SMAP observations. To do so, the effects of three key input parameters, the vegetation optical depth (VOD), effective scattering albedo (ω) and soil roughness (HR), were assessed with emphasis on the synergy with the VOD product derived from SMOS-IC, a new and simpler version of the SMOS algorithm, over two years of data (April 2015 to April 2017). First, a comprehensive comparison of seven SM retrieval algorithms was made to find the best one for SM retrievals from the SMAP observations. All results were evaluated against in situ measurements over 548 stations from the International Soil Moisture Network (ISMN) in terms of four statistical metrics: correlation coefficient (R), root mean square error (RMSE), bias and unbiased RMSE (UbRMSE). The comparison of seven SM retrieval algorithms showed that the dual channel algorithm based on the additional use of the SMOS-IC VOD product (the selected algorithm) led to the best SM retrievals over 378, 399, 330 and 271 stations (out of a total of 548) in terms of R, RMSE, UbRMSE and both R & UbRMSE, respectively. Moreover, comparing the measured and retrieved SM values showed that this synergy approach led to an increase in the median R value from 0.6 to 0.65 and a decrease in the median UbRMSE from 0.09 m3/m3 to 0.06 m3/m3. Second, using the algorithm selected in the first step, the ω and HR parameters were calibrated over 218 rather homogeneous ISMN stations. 72 combinations of values of ω and HR were used for the calibration over different land cover classes. In this calibration process, the optimal values of ω and HR were found for the different land cover classes.
The obtained results indicated that the impact of the VOD parameter on SM retrievals is more considerable than the effects of HR and ω. Overall, the inclusion of the VOD parameter in the SMAP SM retrieval algorithm was found to be a very interesting approach and showed the large potential benefit of the synergy between SMAP and SMOS.
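The four evaluation metrics named above (R, RMSE, bias, UbRMSE) can be computed as follows. A minimal sketch; the unbiased RMSE here is the standard formulation obtained by removing the mean offset from the error:

```python
import numpy as np

def sm_metrics(obs, est):
    """R, RMSE, bias, and unbiased RMSE between in situ (obs) and retrieved (est) SM."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    diff = est - obs
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    # Remove the systematic offset; guard against tiny negative rounding error.
    ubrmse = np.sqrt(max(rmse ** 2 - bias ** 2, 0.0))
    r = np.corrcoef(obs, est)[0, 1]
    return {"R": r, "RMSE": rmse, "bias": bias, "UbRMSE": ubrmse}

stats = sm_metrics([0.10, 0.20, 0.30, 0.40], [0.15, 0.25, 0.35, 0.45])
```

A retrieval with a pure constant offset, as in the toy data here, has nonzero RMSE but near-zero UbRMSE, which is why the study reports both.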

  10. A Multi-Parameter Approach for Calculating Crack Instability

    NASA Technical Reports Server (NTRS)

    Zanganeh, M.; Forman, R. G.

    2014-01-01

An accurate fracture control analysis of spacecraft pressure systems, boosters, rocket hardware and other critical low-cycle fatigue cases, where the fracture toughness strongly affects cycles to failure, requires accurate knowledge of the material fracture toughness. However, the applicability of fracture toughness values measured on standard specimens, and their transferability to crack instability analysis of realistically complex structures, is questionable. The commonly used single-parameter Linear Elastic Fracture Mechanics (LEFM) approach, which relies on the key assumption that the fracture toughness is a material property, can result in inaccurate crack instability predictions. In past years, extensive studies have been conducted to improve the single-parameter (K-controlled) LEFM approach by introducing parameters accounting for geometry or in-plane constraint effects. Despite the importance of thickness (out-of-plane constraint) effects in fracture control problems, the literature is mainly limited to empirical equations for scaling the fracture toughness data, and only a few theoretically based developments can be found. In aerospace hardware, where the structure might have only one life cycle and weight reduction is crucial, reducing the design margin of safety by decreasing the uncertainty involved in fracture toughness evaluations would result in lighter hardware. In such conditions LEFM does not suffice and an elastic-plastic analysis is vital. Multi-parameter elastic-plastic characterizations of the crack tip field, combined with statistical methods, have been shown to have the potential to be used as a powerful tool for tackling such problems. However, these approaches have not been comprehensively scrutinized using experimental tests.
Therefore, in this paper a multi-parameter elastic-plastic approach has been used to study the crack instability problem and the transferability issue by considering the effects of geometrical constraints as well as the thickness. The feasibility of the approach has been examined using a wide range of specimen geometries and thicknesses manufactured from 7075-T7351 aluminum alloy.
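The single-parameter (K-controlled) criterion that the paper critiques can be stated in a few lines. This is a textbook LEFM sketch, not the authors' multi-parameter method; the stress, crack length, and toughness values below are hypothetical:

```python
import math

def stress_intensity(stress_mpa, crack_len_m, geometry_factor=1.0):
    """Mode-I stress intensity K_I = Y * sigma * sqrt(pi * a), in MPa*sqrt(m)."""
    return geometry_factor * stress_mpa * math.sqrt(math.pi * crack_len_m)

def is_unstable(stress_mpa, crack_len_m, toughness, geometry_factor=1.0):
    """Single-parameter LEFM criterion: instability when K_I >= K_c.
    Treating K_c as a pure material property is exactly the assumption
    the multi-parameter approach relaxes."""
    return stress_intensity(stress_mpa, crack_len_m, geometry_factor) >= toughness

k = stress_intensity(200.0, 0.005)   # 200 MPa stress, 5 mm crack
```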

  11. GaAs, AlAs, and AlxGa1-xAs: Material parameters for use in research and device applications

    NASA Astrophysics Data System (ADS)

    Adachi, Sadao

    1985-08-01

The AlxGa1-xAs/GaAs heterostructure system is a potentially useful material for high-speed digital, high-frequency microwave, and electro-optic device applications. Even though the basic AlxGa1-xAs/GaAs heterostructure concepts are understood at this time, practical device design in this system has been hampered by a lack of definite knowledge of many material parameters. Recently, Blakemore has presented numerical and graphical information about many of the physical and electronic properties of GaAs [J. S. Blakemore, J. Appl. Phys. 53, R123 (1982)]. The purpose of this review is (i) to obtain and clarify the various material parameters of the AlxGa1-xAs alloy from a systematic point of view, and (ii) to present key properties of the material parameters for a variety of research and device applications. A complete set of material parameters is considered in this review for GaAs, AlAs, and AlxGa1-xAs alloys. The model used is based on an interpolation scheme and, therefore, necessitates known values of the parameters for the related binaries (GaAs and AlAs). The material parameters and properties considered in the present review can be classified into sixteen groups: (1) lattice constant and crystal density, (2) melting point, (3) thermal expansion coefficient, (4) lattice dynamic properties, (5) lattice thermal properties, (6) electronic-band structure, (7) external perturbation effects on the band-gap energy, (8) effective mass, (9) deformation potential, (10) static and high-frequency dielectric constants, (11) magnetic susceptibility, (12) piezoelectric constant, (13) Fröhlich coupling parameter, (14) electron transport properties, (15) optical properties, and (16) photoelastic properties. Of particular interest is the deviation of material parameters from linearity with respect to the AlAs mole fraction x.
Some material parameters, such as lattice constant, crystal density, thermal expansion coefficient, dielectric constant, and elastic constant, obey Vegard's rule well. Other parameters, e.g., electronic-band energy, lattice vibration (phonon) energy, Debye temperature, and impurity ionization energy, exhibit quadratic dependence upon the AlAs mole fraction. However, some kinds of the material parameters, e.g., lattice thermal conductivity, exhibit very strong nonlinearity with respect to x, which arises from the effects of alloy disorder. It is found that the present model provides generally acceptable parameters in good agreement with the existing experimental data. A detailed discussion is also given of the acceptability of such interpolated parameters from an aspect of solid-state physics. Key properties of the material parameters for use in research work and a variety of AlxGa1-xAs/GaAs device applications are also discussed in detail.
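The interpolation scheme underlying the review, linear (Vegard-like) behavior plus an optional quadratic bowing term, can be sketched as follows. The endpoint lattice constants are standard room-temperature values for GaAs and AlAs; the bowing form is the generic quadratic correction, not any specific fit from the review:

```python
def interpolate_parameter(p_gaas, p_alas, x, bowing=0.0):
    """Ternary AlxGa1-xAs parameter from the binary endpoints:
    P(x) = (1-x)*P_GaAs + x*P_AlAs - bowing*x*(1-x).
    bowing = 0 reproduces Vegard-like linear behavior; a nonzero bowing
    gives the quadratic composition dependence seen for e.g. band energies."""
    return (1.0 - x) * p_gaas + x * p_alas - bowing * x * (1.0 - x)

# Lattice constant (Angstrom): nearly linear in x, so bowing ~ 0.
a_x = interpolate_parameter(5.6533, 5.6611, x=0.3)
```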

  12. Linking ecophysiological modelling with quantitative genetics to support marker-assisted crop design for improved yields of rice (Oryza sativa) under drought stress

    PubMed Central

    Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C.

    2014-01-01

    Background and Aims Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Methods Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and in order to design virtual ideotypes for those conditions. Key Results To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait ‘total crop nitrogen uptake’ (Nmax) among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10–36 % more yield than those based on markers for yield per se. Conclusions This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. 
The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. PMID:24984712
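The marker-based parameter estimation described in the Methods, a trait value built from additive allele effects at detected loci, can be sketched as below. The +1/-1 allele coding, the effect sizes, and the trait scale are all hypothetical illustrations, not values from the study:

```python
def marker_based_value(pop_mean, allele_states, additive_effects):
    """Marker-based estimate of a model-input trait for one introgression line:
    population mean plus the sum of additive allele effects at detected loci.
    allele_states uses +1/-1 coding for the two parental alleles."""
    assert len(allele_states) == len(additive_effects)
    return pop_mean + sum(s * e for s, e in zip(allele_states, additive_effects))

# Hypothetical example: a line carrying the donor allele at loci 1 and 3.
nmax = marker_based_value(1.2, [+1, -1, +1], [0.10, 0.05, 0.02])
```

Values computed this way for each of the seven input parameters are what the study feeds into the GECROS model to simulate line-by-line yields.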

  13. Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications

    NASA Technical Reports Server (NTRS)

    Anderson, David N.

    2003-01-01

    This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
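The tolerance guidelines summarized above translate directly into a simple acceptance check. A sketch only; the dictionary keys are illustrative names for the similarity parameters, not notation from the paper:

```python
def within_tolerance(scale_value, reference_value, tol=0.10):
    """True if the scale parameter lies within +/-tol of the reference value."""
    return abs(scale_value - reference_value) <= tol * abs(reference_value)

def check_scaling(scale, reference):
    """Apply the guideline tolerances reviewed in the paper: 10 percent for
    stagnation collection efficiency, accumulation parameter and freezing
    fraction; 60-160 percent of reference for Reynolds and Weber numbers."""
    report = {}
    for key in ("beta0", "accumulation", "freezing_fraction"):
        report[key] = within_tolerance(scale[key], reference[key], 0.10)
    for key in ("reynolds", "weber"):
        ratio = scale[key] / reference[key]
        report[key] = 0.60 <= ratio <= 1.60
    return report

scale = {"beta0": 0.95, "accumulation": 1.05, "freezing_fraction": 0.92,
         "reynolds": 0.7e5, "weber": 1.5e3}
reference = {"beta0": 1.0, "accumulation": 1.0, "freezing_fraction": 1.0,
             "reynolds": 1.0e5, "weber": 1.0e3}
report = check_scaling(scale, reference)
```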

  14. Atomic layer deposition for fabrication of HfO2/Al2O3 thin films with high laser-induced damage thresholds.

    PubMed

    Wei, Yaowei; Pan, Feng; Zhang, Qinghua; Ma, Ping

    2015-01-01

Previous research on the laser damage resistance of thin films deposited by atomic layer deposition (ALD) is rare. In this work, the ALD process for thin film growth was investigated using different process parameters, such as precursor type and pulse duration. The laser-induced damage threshold (LIDT) was measured as a key property for thin films used as laser system components. Causes of film damage were also investigated. The LIDTs of thin films deposited with the improved process parameters reached a higher level than previously measured. Specifically, the LIDT of the Al2O3 thin film reached 40 J/cm(2). The LIDT of the HfO2/Al2O3 anti-reflection film reached 18 J/cm(2), the highest value reported for ALD single-layer and anti-reflection films. In addition, it was shown that the LIDT could be improved further by altering the process parameters. All results show that ALD is an effective deposition technique for fabrication of thin film components for high-power laser systems.


  15. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying parameters having the most influence facilitates establishing the best values for parameters of models, providing useful implications in species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters that were found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index has a major change through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. Results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values, compared to the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.
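The one-at-a-time perturbation scheme described above (alter one parameter value at a time relative to the best-fit set and compare outputs) can be sketched generically. The toy model and the fixed 10 percent perturbation are illustrative assumptions, not CLIMEX specifics:

```python
def oat_sensitivity(model, best_fit, perturbation=0.1):
    """One-at-a-time sensitivity: perturb each parameter up and down by a fixed
    fraction and record the largest change in model output vs. the best-fit run."""
    baseline = model(best_fit)
    effects = {}
    for name, value in best_fit.items():
        deltas = []
        for factor in (1.0 - perturbation, 1.0 + perturbation):
            params = dict(best_fit)
            params[name] = value * factor
            deltas.append(abs(model(params) - baseline))
        effects[name] = max(deltas)
    # Rank parameters from most to least sensitive.
    return sorted(effects.items(), key=lambda kv: kv[1], reverse=True)

# Toy model: parameter "a" dominates the output, so it ranks as most sensitive.
ranked = oat_sensitivity(lambda p: 10.0 * p["a"] + p["b"], {"a": 1.0, "b": 1.0})
```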

  16. Photosynthesis, Earth System Models and the Arctic

    NASA Astrophysics Data System (ADS)

    Rogers, A.; Sloan, V. L.; Xu, C.; Wullschleger, S. D.

    2013-12-01

The primary goal of Earth System Models (ESMs) is to improve understanding and projection of future global change. In order to do this they must accurately represent the huge carbon fluxes associated with the terrestrial carbon cycle. Photosynthetic CO2 uptake is the largest of these fluxes, and is well described by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. Most ESMs use a derivation of the FvCB model to calculate gross primary productivity (GPP). One of the key parameters required by the FvCB model is an estimate of the maximum rate of carboxylation by the enzyme Rubisco (Vc,max). In ESMs the parameter Vc,max is usually fixed for a given plant functional type (PFT). Although Arctic GPP is a small flux relative to global GPP, its uncertainty is large. Only four ESMs currently have an explicit Arctic PFT, and the data used to derive Vc,max for the Arctic PFT in these models rely on small data sets and unjustified assumptions. As part of a multidisciplinary project to improve the representation of the Arctic in ESMs (Next Generation Ecosystem Experiments - Arctic) we examined the derivation of Vc,max in current Arctic PFTs and estimated Vc,max for 12 species representing both dominant vegetation and key PFTs growing on the Barrow Environmental Observatory, Barrow, AK. The values of Vc,max currently used to represent Arctic PFTs in ESMs are 70% lower than the values we measured in these species. Separate measurements of CO2 assimilation (A) made at ambient conditions were compared with A modeled using the Vc,max values we measured in Barrow and those used by the ESMs. The A modeled with the Vc,max values used by the ESMs was 80% lower than the observed A. When our measured Vc,max values were used, modeled A was within 5% of observed A. Examination of the derivation of Vc,max in ESMs identified that the cause of the relatively low Vc,max value was underestimation of both the leaf N content and the investment of that N in Rubisco.
Here we have identified possible improvements to the derivation of Vc,max in ESMs and provided new physiological characterization of Arctic species that is mechanistically consistent with observed leaf level CO2 uptake. These data suggest that the Arctic tundra has a much greater capacity for CO2 uptake than is currently represented in ESMs. Our parameterization can be used in future model projections to improve representation of the Arctic landscape in ESMs.
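The role Vc,max plays in the FvCB model can be made concrete with the Rubisco-limited assimilation rate. The kinetic constants below are commonly used 25 C values, and the dark respiration rate is a hypothetical placeholder; this is a one-limitation sketch, not a full FvCB implementation:

```python
def rubisco_limited_assimilation(vcmax, ci, rd=1.0,
                                 kc=404.9, ko=278.4, o=210.0, gamma_star=42.75):
    """Rubisco-limited net CO2 assimilation from the FvCB model:
    Ac = Vc,max * (Ci - Gamma*) / (Ci + Kc*(1 + O/Ko)) - Rd
    Kc and Gamma* in umol/mol, Ko and O in mmol/mol (typical 25 C values);
    Rd is a hypothetical dark respiration rate."""
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko)) - rd

# Doubling Vc,max doubles the Rubisco-limited gross rate, which is why
# an underestimated Vc,max directly depresses modeled A.
a_low = rubisco_limited_assimilation(vcmax=30.0, ci=250.0)
a_high = rubisco_limited_assimilation(vcmax=60.0, ci=250.0)
```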

  17. The longevity of lava dome eruptions

    NASA Astrophysics Data System (ADS)

    Wolpert, Robert L.; Ogburn, Sarah E.; Calder, Eliza S.

    2016-02-01

Understanding the duration of past, ongoing, and future volcanic eruptions is an important scientific goal and a key societal need. We present a new methodology for forecasting the duration of ongoing and future lava dome eruptions based on a database (DomeHaz) recently compiled by the authors. The database includes duration and composition for 177 such eruptions, with "eruption" defined as the period encompassing individual episodes of dome growth along with associated quiescent periods during which extrusion pauses but unrest continues. In a key finding, we show that probability distributions for dome eruption durations are both heavy tailed and composition dependent. We construct objective Bayesian statistical models featuring heavy-tailed Generalized Pareto distributions with composition-specific parameters to make forecasts about the durations of new and ongoing eruptions that depend on both eruption duration to date and composition. Our Bayesian predictive distributions reflect both uncertainty about model parameter values (epistemic uncertainty) and the natural variability of the geologic processes (aleatoric uncertainty). The results are illustrated by presenting likely trajectories for 14 dome-building eruptions ongoing in 2015. Full representation of the uncertainty is presented for two key eruptions, Soufrière Hills Volcano in Montserrat (10-139 years, median 35 years) and Sinabung, Indonesia (1-17 years, median 4 years). Uncertainties are high but, importantly, quantifiable. This work provides for the first time a quantitative and transferable method and rationale on which to base long-term planning decisions for lava dome-forming volcanoes, with wide potential use and transferability to forecasts of other types of eruptions and other adverse events across the geohazard spectrum.
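The heavy-tail behavior that makes forecasts depend on duration-to-date follows from a standard property of the Generalized Pareto distribution: conditioning on survival past time t yields another GPD with an inflated scale. The parameter values below are hypothetical, not the paper's composition-specific fits:

```python
def gpd_quantile(p, sigma, xi):
    """Quantile of the Generalized Pareto distribution (xi != 0):
    F^-1(p) = sigma/xi * ((1-p)^(-xi) - 1)."""
    return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

def remaining_duration_quantile(p, sigma, xi, elapsed):
    """Threshold-stability property: given survival past `elapsed`, the
    remaining duration is again GPD with scale sigma + xi*elapsed."""
    return gpd_quantile(p, sigma + xi * elapsed, xi)

# Hypothetical parameters: the longer an eruption has already lasted, the
# longer its predicted remaining duration (the hallmark of a heavy tail).
m0 = remaining_duration_quantile(0.5, sigma=5.0, xi=0.8, elapsed=0.0)
m10 = remaining_duration_quantile(0.5, sigma=5.0, xi=0.8, elapsed=10.0)
```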

  18. An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves

    PubMed Central

    Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing

    2014-01-01

    Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets’ parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and diffuse operation. In this method, it needs only a few parameters for the key generation, which greatly reduces the storage space. Moreover, because of the Julia sets’ properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has large key space, good statistical property, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack. PMID:24404181
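The key-generation and diffusion ideas can be sketched in miniature: iterate the Julia map z -> z^2 + c from key parameters (c, z0), derive a byte stream, and encrypt by modulo arithmetic. This is a simplified illustration only; the Hilbert-curve scrambling step is omitted, and c and z0 are hypothetical stand-ins for the paper's key parameters:

```python
def julia_keystream(c, z0, n, skip=100):
    """Byte keystream from iterating z -> z^2 + c (the Julia map);
    bytes are taken from the digits of |z|."""
    z = z0
    out = []
    for i in range(skip + n):
        z = z * z + c
        if abs(z) > 2.0:          # re-map escaping orbits back into the disk
            z = z / abs(z) ** 2
        if i >= skip:
            out.append(int((abs(z) * 1e6) % 256))
    return out

def encrypt(pixels, key):
    """Modulo-arithmetic diffusion step: cipher = (plain + key) mod 256."""
    return [(p + k) % 256 for p, k in zip(pixels, key)]

def decrypt(cipher, key):
    return [(c_ - k) % 256 for c_, k in zip(cipher, key)]

c, z0 = complex(-0.8, 0.156), complex(0.1, 0.2)
key = julia_keystream(c, z0, 16)
pixels = list(range(0, 256, 16))          # a toy 16-pixel "image"
restored = decrypt(encrypt(pixels, key), key)
```

Because the keystream is generated from a few complex parameters rather than stored whole, the key material stays compact, which is the storage advantage the abstract notes.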

  19. A no-key-exchange secure image sharing scheme based on Shamir's three-pass cryptography protocol and the multiple-parameter fractional Fourier transform.

    PubMed

    Lang, Jun

    2012-01-30

    In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
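The three-pass idea, exchanging a secret with no prior key distribution by applying and removing commutative transforms, can be sketched with modular exponentiation over a prime field (a Massey-Omura-style commutative cipher). This substitutes a different commutative transform for the paper's MPFRFT, and the prime and exponents are toy values:

```python
# Shamir's three-pass protocol with commutative modular exponentiation.
# A real implementation needs a large prime and secure randomness.
p = 2**61 - 1   # Mersenne prime (toy size for illustration)

def make_keypair(e):
    """Encryption exponent e must be coprime with p-1; d inverts it mod p-1."""
    d = pow(e, -1, p - 1)
    return e, d

def transform(m, exponent):
    return pow(m, exponent, p)

eA, dA = make_keypair(65537)
eB, dB = make_keypair(257)

m = 123456789
step1 = transform(m, eA)        # pass 1, Alice -> Bob:   m^eA
step2 = transform(step1, eB)    # pass 2, Bob -> Alice:   m^(eA*eB)
step3 = transform(step2, dA)    # pass 3, Alice -> Bob:   m^eB
recovered = transform(step3, dB)
```

Because exponentiation commutes, Alice can strip her key from the doubly-locked message, leaving only Bob's, and Bob recovers m; no key ever travels in the clear.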

  20. Effects of expected-value information and display format on recognition of aircraft subsystem abnormalities

    NASA Technical Reports Server (NTRS)

    Palmer, Michael T.; Abbott, Kathy H.

    1994-01-01

This study identifies improved methods of presenting system parameter information for detecting abnormal conditions and identifying system status. Two workstation experiments were conducted. The first experiment determined whether including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined whether a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information in traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors than traditional formats with expected-value ranges included. In addition, error rates for the column deviation parameter display format remained stable as scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.

  1. A Firefly-Inspired Method for Protein Structure Prediction in Lattice Models

    PubMed Central

    Maher, Brian; Albrecht, Andreas A.; Loomes, Martin; Yang, Xin-She; Steinhöfel, Kathleen

    2014-01-01

We introduce a Firefly-inspired algorithmic approach for protein structure prediction over two different lattice models in three-dimensional space. In particular, we consider three-dimensional cubic and three-dimensional face-centred-cubic (FCC) lattices. The underlying energy models are the Hydrophobic-Polar (H-P) model, the Miyazawa–Jernigan (M-J) model and a related matrix model. The implementation of our approach is tested on ten H-P benchmark problems of length 48 and ten M-J benchmark problems of lengths ranging from 48 to 61. The key complexity parameter we investigate is the total number of objective function evaluations required to achieve the optimum energy values for the H-P model, or competitive results in comparison to published values for the M-J model. For H-P instances and cubic lattices, where data for comparison are available, we obtain an average speed-up over eight instances of 2.1, leaving out two extreme values (otherwise, 8.8). For six M-J instances, data for comparison are available for cubic lattices and runs with a population size of 100, where, a priori, the minimum free energy is a termination criterion. The average speed-up over four instances is 1.2 (leaving out two extreme values; otherwise 1.1), which is achieved with a population size of only eight. The present study is a test case with initial results for ad hoc parameter settings, with the aim of justifying future research on larger instances within lattice model settings, eventually leading to the ultimate goal of implementations for off-lattice models. PMID:24970205

  2. A firefly-inspired method for protein structure prediction in lattice models.

    PubMed

    Maher, Brian; Albrecht, Andreas A; Loomes, Martin; Yang, Xin-She; Steinhöfel, Kathleen

    2014-01-07

We introduce a Firefly-inspired algorithmic approach for protein structure prediction over two different lattice models in three-dimensional space. In particular, we consider three-dimensional cubic and three-dimensional face-centred-cubic (FCC) lattices. The underlying energy models are the Hydrophobic-Polar (H-P) model, the Miyazawa-Jernigan (M-J) model and a related matrix model. The implementation of our approach is tested on ten H-P benchmark problems of length 48 and ten M-J benchmark problems of lengths ranging from 48 to 61. The key complexity parameter we investigate is the total number of objective function evaluations required to achieve the optimum energy values for the H-P model, or competitive results in comparison to published values for the M-J model. For H-P instances and cubic lattices, where data for comparison are available, we obtain an average speed-up over eight instances of 2.1, leaving out two extreme values (otherwise, 8.8). For six M-J instances, data for comparison are available for cubic lattices and runs with a population size of 100, where, a priori, the minimum free energy is a termination criterion. The average speed-up over four instances is 1.2 (leaving out two extreme values; otherwise 1.1), which is achieved with a population size of only eight. The present study is a test case with initial results for ad hoc parameter settings, with the aim of justifying future research on larger instances within lattice model settings, eventually leading to the ultimate goal of implementations for off-lattice models.
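In its original continuous form, the Firefly Algorithm these records adapt to lattice models can be sketched as follows. This is a minimal illustration on a toy quadratic objective, not the authors' lattice implementation, and all parameter values are ad hoc:

```python
import math, random

def firefly_minimize(f, dim, n=15, iters=200, beta0=1.0, gamma=1.0,
                     alpha=0.1, seed=1):
    """Minimal continuous Firefly Algorithm (after Yang): each firefly moves
    toward every brighter (lower-f) one with attractiveness
    beta0*exp(-gamma*r^2), plus a small random walk."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:   # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
            vals[i] = f(xs[i])
    best = min(xs, key=f)
    return best, f(best)

best, val = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

In the lattice setting, positions become self-avoiding walks and "distance" becomes a conformation-space measure, but the brightness-driven attraction is the same.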

  3. Inadvertent Intruder Calculations for F Tank Farm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koffman, L

    2005-09-12

Savannah River National Laboratory (SRNL) has been providing radiological performance assessment analysis for Savannah River Site (SRS) solid waste disposal facilities (McDowell-Boyer 2000). The performance assessment considers numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. An Automated Intruder Analysis application was developed by SRNL (Koffman 2004) that simplifies the inadvertent intruder analysis into a routine, automated calculation. Based on SRNL's experience, personnel from Planning Integration & Technology of the Closure Business Unit asked SRNL to assist with inadvertent intruder calculations for F Tank Farm to support the development of the Tank Closure Waste Determination Document. Meetings were held to discuss the scenarios to be calculated and the assumptions to be used in the calculations. As a result of the meetings, SRNL was asked to perform four scenario calculations. Two of the scenarios are the same as those calculated by the Automated Intruder Analysis application, and these can be calculated directly by providing appropriate inputs. The other two scenarios involve use of groundwater by the intruder, and the Automated Intruder Analysis application was adapted to perform these calculations. The four calculations to be performed are: (1) A post-drilling scenario in which the drilling penetrates a transfer line. (2) A calculation of internal exposure due to drinking water from a well located near a waste tank. (3) A post-drilling calculation in which waste is introduced by irrigation of the garden with water from a well located near a waste tank. (4) A resident scenario where a house is built above transfer lines.
Note that calculations 1 and 4 use sources from the waste inventory in the transfer line (given in Table 1), whereas calculations 2 and 3 use sources from groundwater beneath the waste tank (given in Appendix B). It is important to recognize that there are two different sources in the calculations. In these calculations, assumptions are made for parameter values. Three key parameters are the size of the garden, the amount of vegetables eaten, and the distance of the well from the waste tank. For these three parameters, different values are considered in the calculations to determine the impact of the change in these parameters. Another key parameter is the length of time of institutional control, which determines when an inadvertent intruder could first be exposed. The standard length of time for institutional control is 100 years from the time of closure. In this analysis, waste inventory values are used from year 2005 but tanks will not be closed until year 2020. Thus, the effective length of time of institutional control used in the calculations is 115 years from year 2005, which is taken to be time zero for radiological decay calculations. All calculations are carried out for a period of 10,000 years.
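The 115-year effective institutional-control period and the decay-to-first-exposure arithmetic can be sketched as below. The Cs-137 half-life is a standard value; its use here is purely illustrative and is not taken from the SRNL analysis:

```python
import math

def decay_factor(elapsed_years, half_life_years):
    """Fraction of initial activity remaining after elapsed_years:
    A(t)/A0 = exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2.0) * elapsed_years / half_life_years)

# Effective institutional control in the analysis: 100 years after the
# assumed 2020 closure, counted from the 2005 inventory year = 115 years.
elapsed = (2020 - 2005) + 100
cs137_remaining = decay_factor(elapsed, 30.17)   # Cs-137 half-life ~30.17 y
```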

  4. Coda Q Attenuation and Source Parameters Analysis in North East India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

Mohapatra, A. K.; Mohanty, W. K.

    2010-12-01

    Alok Kumar Mohapatra1* and William Kumar Mohanty1 *Corresponding author: alokgpiitkgp@gmail.com 1Department of Geology and Geophysics, Indian Institute of Technology, Kharagpur, West Bengal, India. Pin-721302 ABSTRACT In the present study, the quality factor of coda waves (Qc) and the source parameters has been estimated for the Northeastern India, using the digital data of ten local earthquakes from April 2001 to November 2002. Earthquakes with magnitude range from 3.8 to 4.9 have been taken into account. The time domain coda decay method of a single back scattering model is used to calculate frequency dependent values of Coda Q (Qc) where as, the source parameters like seismic moment(Mo), stress drop, source radius(r), radiant energy(Wo),and strain drop are estimated using displacement amplitude spectrum of body wave using Brune's model. The earthquakes with magnitude range 3.8 to 4.9 have been used for estimation Qc at six central frequencies 1.5 Hz, 3.0 Hz, 6.0 Hz, 9.0 Hz, 12.0 Hz, and 18.0 Hz. In the present work, the Qc value of local earthquakes are estimated to understand the attenuation characteristic, source parameters and tectonic activity of the region. Based on a criteria of homogeneity in the geological characteristics and the constrains imposed by the distribution of available events the study region has been classified into three zones such as the Tibetan Plateau Zone (TPZ), Bengal Alluvium and Arakan-Yuma Zone (BAZ), Shillong Plateau Zone (SPZ). It follows the power law Qc= Qo (f/fo)n where, Qo is the quality factor at the reference frequency (1Hz) fo and n is the frequency parameter which varies from region to region. The mean values of Qc reveals a dependence on frequency, varying from 292.9 at 1.5 Hz to 4880.1 at 18 Hz. 
The average frequency-dependent relationship obtained for Northeastern India is Qc = 198 f^1.035, while the relationship varies from region to region: Tibetan Plateau Zone (TPZ), Qc = 226 f^1.11; Bengal Alluvium and Arakan-Yuma Zone (BAZ), Qc = 301 f^0.87; Shillong Plateau Zone (SPZ), Qc = 126 f^0.85. This indicates that Northeastern India is seismically active; comparing the zones, the Shillong Plateau Zone (Qc = 126 f^0.85) is the most active, the Bengal Alluvium and Arakan-Yuma Zone (BAZ) is the least active, and the Tibetan Plateau Zone (TPZ) is intermediate. This study may be useful for seismic hazard assessment. The estimated seismic moments (Mo) range from 5.98×10^20 to 3.88×10^23 dyne-cm, source radii (r) are confined between 152 and 1750 m, stress drops range from 0.0003×10^3 bar to 1.04×10^3 bar, the average radiant energy is 82.57×10^18 ergs, and strain drops range from 0.00602×10^-9 to 2.48×10^-9. The estimated stress-drop values for NE India are scattered at larger seismic moments, whereas they show a more systematic trend at smaller seismic moments. The estimated source parameters are in agreement with previous work in this type of tectonic setting. Key words: coda wave, seismic source parameters, lapse time, single back-scattering model, Brune's model, stress drop, Northeast India.
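The power law Qc = Qo (f/fo)^n can be fitted by linear regression in log-log space. In the sketch below, only the endpoint Qc values (292.9 at 1.5 Hz and 4880.1 at 18 Hz) come from the study; the intermediate values are assumed for illustration:

```python
import numpy as np

# Central frequencies (Hz) used in the study.
freqs = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])
# Mean Qc values: endpoints are the reported ones; the four intermediate
# values are illustrative assumptions on a smooth power-law trend.
qc = np.array([292.9, 620.0, 1300.0, 2100.0, 2900.0, 4880.1])

# Qc = Qo * f**n  =>  log Qc = log Qo + n * log f  (linear in log-log space)
n, log_q0 = np.polyfit(np.log(freqs), np.log(qc), 1)
q0 = np.exp(log_q0)
```

With the reported endpoints alone, n comes out near 1.1 and Qo near 200, broadly consistent with the regional relation Qc = 198 f^1.035 quoted above.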

  5. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    PubMed

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

This study presents a generic methodology for producing simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on global sensitivity analysis to identify the key parameters explaining the impact variability of systems over their life cycle; simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performance has been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated without undertaking an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust but simple support tool for assessing the environmental performance of energy systems.
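Once the key parameters are fixed, a simplified model of this kind reduces to a short formula. The sketch below is a hedged illustration; the functional form, embodied-emissions figure, and turbine size are assumptions, not the paper's fitted curves:

```python
def wind_ghg_intensity(embodied_kgco2e, capacity_kw, load_factor, lifetime_yr):
    """Life-cycle GHG intensity (gCO2e/kWh): embodied emissions divided by
    lifetime electricity output. Load factor and lifetime are the two key
    parameters identified by the sensitivity analysis."""
    lifetime_kwh = capacity_kw * 8760.0 * load_factor * lifetime_yr
    return embodied_kgco2e * 1000.0 / lifetime_kwh

# Hypothetical 2 MW turbine with 1,500 t CO2e embodied emissions.
g = wind_ghg_intensity(1_500_000, 2000, load_factor=0.24, lifetime_yr=20)
```

The formula makes the sensitivity obvious: doubling either the load factor or the lifetime halves the GHG intensity, which is why those two parameters dominate the variability.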

  6. Nonlinear analysis of shock absorbers with amplitude-dependent damping

    NASA Astrophysics Data System (ADS)

    Łuczko, Jan; Ferdek, Urszula; Łatas, Waldemar

    2018-01-01

This paper analyses a quarter-car model of a vehicle equipped with a hydraulic damper whose characteristics depend on the piston stroke. Compared to a classical mono-tube damper, the damper has additional internal chambers; oil flow in those chambers is controlled by relative piston displacement. The proposed nonlinear model of the system is used to test the effect of key design parameters of the damper on quality indices representing ride comfort and driving safety. Numerical methods were used to determine the characteristic curves of the damper and the responses of the system to harmonic excitations whose amplitude decreases as frequency increases.

  7. A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Jin, Cong

    2017-03-01

In this paper, a novel image encryption algorithm based on quantum chaos is proposed. The keystreams are generated by the two-dimensional logistic map from given initial conditions and parameters. A general Arnold scrambling algorithm with keys is then exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. To obtain high randomness and complexity, the two-dimensional logistic map and the quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
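The keystream-plus-diffusion idea can be illustrated with a minimal one-dimensional logistic map and XOR diffusion. This is a toy sketch, not the paper's coupled two-dimensional quantum chaotic scheme; the seed values are arbitrary:

```python
def logistic_keystream(x0, r, n, skip=100):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x).
    The first `skip` iterates are discarded to decorrelate from the seed."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(pixels, stream):
    """Simple diffusion: XOR each pixel with the keystream (involutive)."""
    return [p ^ k for p, k in zip(pixels, stream)]

key = (0.3456, 3.99)             # (x0, r) act as the secret key
ks = logistic_keystream(*key, n=8)
cipher = xor_diffuse([10, 200, 33, 90, 7, 255, 0, 128], ks)
plain = xor_diffuse(cipher, ks)  # decryption reverses the XOR
```

Because XOR with the same keystream is its own inverse, decryption simply repeats the diffusion step; key sensitivity comes from the chaotic map's dependence on (x0, r).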

  8. Development and application of a soil organic matter-based soil quality index in mineralized terrane of the Western US

    USGS Publications Warehouse

    Blecker, S.W.; Stillings, Lisa L.; Amacher, M.C.; Ippolito, J.A.; DeCrappeo, N.M.

    2013-01-01

Soil quality indices provide a means of distilling large amounts of data into a single metric that evaluates the soil's ability to carry out key ecosystem functions. Such indices were primarily developed in agroecosystems and later in forested ecosystems; here, an index using the relation between soil organic matter and other key soil properties was developed for the more semi-arid systems of the Western US impacted by different geologic mineralization. Three sites in two mineralization types, acid sulfate and Cu/Mo porphyry, in California and Nevada were studied. Soil samples were collected from undisturbed soils in both mineralized and nearby unmineralized terrane as well as from waste rock and tailings. Eight microbial parameters (carbon substrate utilization, microbial biomass-C, mineralized-C, mineralized-N, and enzyme activities of acid phosphatase, alkaline phosphatase, arylsulfatase, and fluorescein diacetate) along with a number of physicochemical parameters were measured. Multiple linear regression models between these parameters and both total organic carbon and total nitrogen were developed, using the ratio of predicted to measured values as the soil quality index. In most instances, pooling unmineralized and mineralized soil data within a given study site resulted in lower model correlations. Enzyme activity was a consistent explanatory variable in the models across the study sites. Though similar indicators were significant in models across different mineralization types, pooling data across sites inhibited model differentiation of undisturbed and disturbed sites. This procedure could be used to monitor recovery of disturbed systems in mineralized terrane and help link scientific and management disciplines.
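The index construction (fit a regression from microbial indicators to organic carbon, then take predicted over measured) can be sketched as follows; all sample values below are hypothetical:

```python
import numpy as np

# Hypothetical data: rows are soil samples, columns are microbial indicators
# (e.g., enzyme activity, microbial biomass-C); y is measured total organic
# carbon. All numbers are illustrative, not the study's measurements.
X = np.array([[1.2, 30.0], [0.8, 22.0], [1.5, 35.0], [0.4, 15.0], [1.0, 28.0]])
y = np.array([2.4, 1.7, 2.9, 1.0, 2.2])

# Fit y ~ b0 + b1*x1 + b2*x2 by ordinary least squares.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef

# Soil quality index = predicted / measured; values near 1 suggest the soil
# behaves like the (reference) soils the model was trained on.
sqi = predicted / y
```

A disturbed sample would then show up as an index departing from 1, its predicted organic carbon (from the indicator relationships) disagreeing with what is actually measured.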

  9. A simple physiologically based pharmacokinetic model evaluating the effect of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saylor, Kyle, E-mail: saylor@vt.edu; Zhang, Chenmi

Physiologically based pharmacokinetic (PBPK) modeling was applied to investigate the effects of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans. Successful construction of both rat and human models was achieved by fitting model outputs to published nicotine concentration time course data in the blood and in the brain. Key parameters presumed to have the most effect on the ability of these antibodies to prevent nicotine from entering the brain were selected for investigation using the human model. These parameters, which included antibody affinity for nicotine, antibody cross-reactivity with cotinine, and antibody concentration, were broken down into different, clinically-derived in silico treatment levels and fed into the human PBPK model. Model predictions suggested that all three parameters, in addition to smoking status, have a sizable impact on anti-nicotine antibodies' ability to prevent nicotine from entering the brain, and that the antibodies elicited by current human vaccines do not have sufficient binding characteristics to reduce brain nicotine concentrations. If the antibody binding characteristics achieved in animal studies can similarly be achieved in human studies, however, nicotine vaccine efficacy in terms of brain nicotine concentration reduction is predicted to meet threshold values for alleviating nicotine dependence. - Highlights: • Modelling of nicotine disposition in the presence of anti-nicotine antibodies • Key vaccine efficacy factors are evaluated in silico in rats and in humans. • Model predicts insufficient antibody binding in past human nicotine vaccines. • Improving immunogenicity and antibody specificity may lead to vaccine success.
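The core effect (antibody binding in blood reducing the free nicotine available to cross into the brain) can be sketched with a 1:1 equilibrium-binding calculation. This is a toy stand-in for the full PBPK model; all concentrations and the dissociation constant Kd are hypothetical:

```python
def free_fraction(total_nic_uM, antibody_uM, kd_uM):
    """Fraction of nicotine not antibody-bound, from 1:1 equilibrium binding.
    Solves the mass-balance quadratic
        N^2 + N*(Kd + Ab_total - N_total) - Kd*N_total = 0
    for the free nicotine concentration N."""
    b = antibody_uM + kd_uM - total_nic_uM
    c = -kd_uM * total_nic_uM
    nic_free = (-b + (b * b - 4 * c) ** 0.5) / 2
    return nic_free / total_nic_uM

f_no_ab = free_fraction(0.5, 0.0, 0.1)    # no antibody: all nicotine free
f_hi_ab = free_fraction(0.5, 10.0, 0.1)   # high-affinity, high-titer case
```

In the high-titer case almost all nicotine is sequestered, which is the mechanism by which higher antibody concentration and affinity (two of the three key parameters above) reduce brain exposure.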

  10. Global Nitrous Oxide Emissions from Agricultural Soils: Magnitude and Uncertainties Associated with Input Data and Model Parameters

    NASA Astrophysics Data System (ADS)

    Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.

    2016-12-01

Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in about a 21% increase of atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities, including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made over the last several decades to quantify global and regional N2O emissions from agricultural soils using a wide variety of approaches, such as ground-based observation, atmospheric inversion, and process-based modeling. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate the magnitude and the spatial and temporal patterns of N2O emissions from global croplands over the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate) and for different sets of input data, including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils and identifies key gaps and limitations in the existing model and data that need to be investigated in the future.
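The factorial simulation-experiment design (running the model over low/high values of each key parameter and input to bound the estimate) can be sketched as follows; the flux function and all parameter values are toy stand-ins, not DLEM itself:

```python
import itertools

def n2o_flux(nitrif_rate, denitrif_rate, n_input):
    """Toy stand-in for a model's N2O flux response (NOT DLEM): flux grows
    with N input and with the nitrification/denitrification rate parameters."""
    return n_input * (0.6 * nitrif_rate + 0.4 * denitrif_rate)

# Low/high values for each key parameter span the assumed uncertainty range.
nitrif = [0.008, 0.012]
denitrif = [0.015, 0.025]
n_inputs = [80.0, 120.0]   # kg N/ha/yr, illustrative

# Full factorial: one model run per combination of parameter values.
fluxes = [n2o_flux(a, b, c)
          for a, b, c in itertools.product(nitrif, denitrif, n_inputs)]
lo, hi = min(fluxes), max(fluxes)
```

The spread between `lo` and `hi` is the parameter-driven uncertainty band; with more parameters, sampling designs replace the full factorial, but the logic is the same.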

  11. Determination of protein secondary structure and solvent accessibility using site-directed fluorescence labeling. Studies of T4 lysozyme using the fluorescent probe monobromobimane.

    PubMed

    Mansoor, S E; McHaourab, H S; Farrens, D L

    1999-12-07

We report an investigation of how much protein structural information can be obtained using a site-directed fluorescence labeling (SDFL) strategy. In our experiments, we used 21 consecutive single-cysteine substitution mutants in T4 lysozyme (residues T115-K135), located in a helix-turn-helix motif. The mutants were labeled with the fluorescent probe monobromobimane and subjected to an array of fluorescence measurements. Thermal stability measurements show that introduction of the label is substantially perturbing only when it is located at buried residue sites. At buried sites (solvent surface accessibility of <40 Å²), the destabilizations are between 3 and 5.5 kcal/mol, whereas at more exposed sites, ΔΔG values of ≤1.5 kcal/mol are obtained. Of all the fluorescence parameters that were explored (excitation λmax, emission λmax, fluorescence lifetime, quantum yield, and steady-state anisotropy), the emission λmax and the steady-state anisotropy values most accurately reflect the solvent surface accessibility at each site as calculated from the crystal structure of cysteine-less T4 lysozyme. The parameters we identify allow the classification of each site as buried, partially buried, or exposed. We find that the variations in these parameters as a function of residue number reflect the sequence-specific secondary structure, the determination of which is a key step for modeling a protein of unknown structure.

  12. Energy Bounds for a Compressed Elastic Film on a Substrate

    NASA Astrophysics Data System (ADS)

    Bourne, David P.; Conti, Sergio; Müller, Stefan

    2017-04-01

We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.

  13. Spatiotemporal and plantar pressure patterns of 1000 healthy individuals aged 3-101 years.

    PubMed

    McKay, Marnee J; Baldwin, Jennifer N; Ferreira, Paulo; Simic, Milena; Vanicek, Natalie; Wojciechowski, Elizabeth; Mudge, Anita; Burns, Joshua

    2017-10-01

The purpose of this study was to establish normative reference values for spatiotemporal and plantar pressure parameters, and to investigate the influence of demographic, anthropometric and physical characteristics. In 1000 healthy males and females aged 3-101 years, spatiotemporal and plantar pressure data were collected barefoot with the Zeno™ walkway and Emed® platform. Correlograms were developed to visualise the relationships of widely reported spatiotemporal and pressure variables with demographic (age, gender), anthropometric (height, mass, waist circumference) and physical characteristics (ankle strength, ankle range of motion, vibration perception) in children aged 3-9 years, adolescents aged 10-19 years, adults aged 20-59 years and older adults aged over 60 years. A comprehensive catalogue of 31 spatiotemporal and pressure variables was generated from 1000 healthy individuals. The key findings were that gait velocity was stable during adolescence and adulthood, while children and older adults walked at a comparably slower speed. Peak pressures increased from childhood to older adulthood. Children demonstrated the highest peak pressures beneath the rearfoot, whilst adolescents, adults and older adults demonstrated the highest pressures at the forefoot. The main factors influencing spatiotemporal and pressure parameters were increased age, height, body mass and waist circumference, as well as ankle dorsiflexion and plantarflexion strength. This study has established whole-of-life normative reference values of widely used spatiotemporal and plantar pressure parameters, and revealed the changes to be expected across the lifespan. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Key parameters design of an aerial target detection system on a space-based platform

    NASA Astrophysics Data System (ADS)

    Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng

    2018-02-01

To ensure flight safety of aerial aircraft and avoid recurrence of aircraft collisions, a multi-information fusion method is proposed to design the key parameters of an aircraft target detection system on a space-based platform. The key parameters, detection waveband and spatial resolution, were determined using the target-background absolute contrast, the target-background relative contrast, and the signal-to-clutter ratio. This study also presents the signal-to-interference ratio for analyzing system performance. Key parameters are obtained through simulation of a specific aircraft. The simulation results show that the boundary ground sampling distance is 30 m and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for most aircraft detection, and the most reasonable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band, and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on detection efficiency, especially in the MWIR bands.
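One common definition of the signal-to-clutter ratio (the target-background contrast normalized by the local background variability; the paper's exact formulation may differ) can be computed as:

```python
import statistics

def signal_to_clutter(target_radiance, background_samples):
    """SCR: target/background contrast divided by the standard deviation of
    the local background. Higher SCR means the target is easier to detect."""
    mu = statistics.mean(background_samples)
    sigma = statistics.stdev(background_samples)
    return abs(target_radiance - mu) / sigma

# Illustrative radiance values (arbitrary units), not from the paper.
scr = signal_to_clutter(12.0, [9.0, 10.0, 11.0, 10.0, 9.5, 10.5])
```

Coarsening the ground sampling distance mixes more background into each pixel, which lowers the contrast term and hence the SCR; this is why a boundary GSD exists for reliable detection.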

  15. Biophysical and physicochemical methods differentiate highly ligand-efficient human D-amino acid oxidase inhibitors.

    PubMed

    Lange, Jos H M; Venhorst, Jennifer; van Dongen, Maria J P; Frankena, Jurjen; Bassissi, Firas; de Bruin, Natasja M W J; den Besten, Cathaline; de Beer, Stephanie B A; Oostenbrink, Chris; Markova, Natalia; Kruse, Chris G

    2011-10-01

    Many early drug research efforts are too reductionist thereby not delivering key parameters such as kinetics and thermodynamics of target-ligand binding. A set of human D-Amino Acid Oxidase (DAAO) inhibitors 1-6 was applied to demonstrate the impact of key biophysical techniques and physicochemical methods in the differentiation of chemical entities that cannot be adequately distinguished on the basis of their normalized potency (ligand efficiency) values. The resulting biophysical and physicochemical data were related to relevant pharmacodynamic and pharmacokinetic properties. Surface Plasmon Resonance data indicated prolonged target-ligand residence times for 5 and 6 as compared to 1-4, based on the observed k(off) values. The Isothermal Titration Calorimetry-derived thermodynamic binding profiles of 1-6 to the DAAO enzyme revealed favorable contributions of both ΔH and ΔS to their ΔG values. Surprisingly, the thermodynamic binding profile of 3 elicited a substantially higher favorable contribution of ΔH to ΔG in comparison with the structurally closely related fused bicyclic acid 4. Molecular dynamics simulations and free energy calculations of 1, 3, and 4 led to novel insights into the thermodynamic properties of the binding process at an atomic level and in the different thermodynamic signatures of 3 and 4. The presented holistic approach is anticipated to facilitate the identification of compounds with best-in-class properties at an early research stage. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  16. Parallel Artificial Intelligence Search Techniques for Real Time Applications.

    DTIC Science & Technology

    1987-12-01

[Garbled OCR fragment of Lisp source code from the report. The recoverable portion is an association-list lookup helper, (defun match-value (key a-list) (cadr (assoc key a-list))), described in the source as "find the value of a key into an association list" and used by an infix-to-prefix pattern matcher (inf-to-pre, match, oneplus).]

  17. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  18. Torrefaction of Durian peel and bagasse for bio-briquette as an alternative solid fuel

    NASA Astrophysics Data System (ADS)

    Haryati, S.; Rahmatullah; Putri, R. W.

    2018-03-01

Biomass waste of durian (Durio zibethinus) peel and bagasse can be used as solid fuel via a torrefaction process. Durian peel and bagasse were washed, crushed into small pieces, and then dried to remove water. The treated biomass was burned at temperatures of 200 to 350 °C with a residence time of 30 min to produce torrefied charcoal as an intermediate product. The torrefied charcoal was ground into a powder, blended with tapioca glue, and cast into a cylinder to form a bio-briquette. The bio-briquette was characterized by determining its calorific value via bomb calorimeter analysis. The key parameters of the bio-briquette are calorific value and combustion rate. The results show that as the burning temperature increased, the calorific value of the bio-briquettes also increased. The maximum calorific value was achieved at 350 °C, with the durian peel value (6,157 cal/g) higher than that of bagasse (6,109 cal/g). The minimum combustion rate, 0.0398 g/s, was attained for durian peel torrefied at 350 °C. The results show that bio-briquettes of durian peel and bagasse have calorific values equivalent to subbituminous coal, in the range 4,900 to 6,800 cal/g.
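The two key parameters can be checked with simple arithmetic; the mass/time pair below is hypothetical, chosen only to reproduce the reported 0.0398 g/s rate:

```python
def combustion_rate(mass_burned_g, duration_s):
    """Combustion rate in g/s: mass consumed divided by burn time."""
    return mass_burned_g / duration_s

# Hypothetical 23.88 g burned over 600 s reproduces the reported minimum rate.
rate = combustion_rate(23.88, 600.0)

def is_subbituminous_grade(calorific_cal_per_g):
    """Check a fuel against the quoted subbituminous range (4,900-6,800 cal/g)."""
    return 4900 <= calorific_cal_per_g <= 6800

ok = is_subbituminous_grade(6157)   # durian-peel value from the abstract
```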

  19. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    PubMed Central

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing the ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters as a DE model does. Therefore, this study innovatively developed a complex system development mechanism that can simulate the complicated immune system in detail like an ABM and validate the reliability and efficiency of the model, like a DE model, by fitting the experimental data. PMID:26535589
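The IABMR idea (probe the ABM, smooth its input-output pairs with a Loess-style local regression, then search for the parameter that matches experiment) can be sketched with a toy model; the "ABM", the smoother, and all numbers below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_abm(rate):
    """Toy stand-in for an ABM run: a noisy output (e.g., cell count)."""
    return 100.0 * rate + rng.normal(0.0, 2.0)

# 1) Probe the ABM across the parameter range (input/output pairs).
rates = np.linspace(0.1, 1.0, 30)
outputs = np.array([toy_abm(r) for r in rates])

def loess(x0, x, y, frac=0.4):
    """Minimal local linear regression with tricube weights (Loess-style)."""
    k = max(2, int(frac * len(x)))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                      # k nearest neighbours
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube weights
    slope, intercept = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
    return intercept + slope * x0

# 2) Grid search: find the rate whose smoothed ABM output best matches an
#    "experimental" observation (here 55.0, illustrative).
target = 55.0
grid = np.linspace(0.1, 1.0, 91)
best = min(grid, key=lambda r: abs(loess(r, rates, outputs) - target))
```

The smoother turns the stochastic ABM responses into a deterministic surrogate, so the parameter search behaves like calibrating a DE model while the underlying dynamics remain agent-based.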

  20. Fractal scaling laws of black carbon aerosol and their influence on spectral radiative properties

    NASA Astrophysics Data System (ADS)

    Tiwari, S.; Chakrabarty, R. K.; Heinson, W.

    2016-12-01

Current estimates of the direct radiative forcing for black carbon (BC) aerosol span a poorly constrained range between 0.2 and 1 W m^-2. To narrow this large uncertainty, tighter constraints need to be placed on BC's key wavelength-dependent optical properties, namely the absorption (MAC) and scattering (MSC) cross sections per unit mass and the hemispherical upscatter fraction (β, a dimensionless scattering directionality parameter). These parameters are very sensitive to changes in particle morphology and complex refractive index n. Their interplay determines the magnitude of net positive or negative radiative forcing efficiencies. The current approach among climate modelers for estimating MAC and MSC values of BC is to calculate optical cross sections assuming spherical particle morphology with a homogeneous, constant-valued refractive index in the visible solar spectrum; β values are typically assumed constant across this spectrum. This approach, while computationally inexpensive and convenient, ignores the inherent fractal morphology of BC, its scaling behaviors, and the resulting optical properties. In this talk, I will present recent results from my laboratory on determination of the fractal scaling laws of BC aggregate packing density and its complex refractive index for sizes spanning three orders of magnitude, and their effects on spectral (visible-infrared wavelength) scaling of MAC, MSC, and β values. Our experiments synergistically combined novel BC generation techniques, aggregation models, contact-free multi-wavelength optical measurements, and electron microscopy analysis. The scale dependence of n on aggregate size followed power-law exponents of -1.4 and -0.5 for sub- and super-micron size aggregates, respectively. The spherical Rayleigh-optics approximation limits, used by climate models for spectral extrapolation of BC optical cross sections and deconvolution of multi-species mixing ratios, are redefined using the concept of the phase shift parameter. I will highlight the importance of size-dependent β values and their role in offsetting the strong light-absorbing nature of BC. Finally, the errors introduced in forcing-efficiency calculations of BC by assuming spherical homogeneous morphology will be evaluated.

1. On the detectability of keV-MeV solar protons through their nonthermal Lyman-alpha emission

    NASA Technical Reports Server (NTRS)

    Canfield, R. C.; Chang, C. R.

    1985-01-01

    The intensity and timescale of nonthermal Doppler-shifted hydrogen L alpha photon emission as diagnostics of 10 keV to 10 MeV protons bombarding the solar chromosphere during flares are investigated. The steady-state excitation and ionization balance of the proton beam are determined, taking into account all important atomic interactions with the ambient chromosphere. For a proton energy flux comparable to the electron energy flux commonly inferred for large flares, L alpha wing intensities orders of magnitude larger than observed nonflaring values were found. Investigation of timescales for ionization and charge exchange leads researchers to conclude that over a wide range of values of mean proton energy and beam parameters, Doppler-shifted nonthermal L alpha emission is a useful observational diagnostic of the presence of 10 keV to 10 MeV superthermal proton beams in the solar flare chromosphere.

  2. Integration of altitude and airspeed information into a primary flight display via moving-tape formats: Evaluation during random tracking task

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Nataupsky, Mark; Steinmetz, George G.

    1987-01-01

    A ground-based aircraft simulation study was conducted to determine the effects on pilot preference and performance of integrating airspeed and altitude information into an advanced electronic primary flight display via moving-tape (linear moving scale) formats. Several key issues relating to the implementation of moving-tape formats were examined in this study: tape centering, tape orientation, and trend information. The factor of centering refers to whether the tape was centered about the actual airspeed or altitude or about some other defined reference value. Tape orientation refers to whether the represented values are arranged in descending or ascending order. Two pilots participated in this study, with each performing 32 runs along seemingly random, previously unknown flight profiles. The data taken, analyzed, and presented consisted of path performance parameters, pilot-control inputs, and electrical brain response measurements.

  3. Analysis of Hydrogen Generation through Thermochemical Gasification of Coconut Shell Using Thermodynamic Equilibrium Model Considering Char and Tar

    PubMed Central

    Rupesh, Shanmughom; Muraleedharan, Chandrasekharan; Arun, Palatel

    2014-01-01

    This work investigates the potential of coconut shell for air-steam gasification using thermodynamic equilibrium model. A thermodynamic equilibrium model considering tar and realistic char conversion was developed using MATLAB software to predict the product gas composition. After comparing it with experimental results the prediction capability of the model is enhanced by multiplying equilibrium constants with suitable coefficients. The modified model is used to study the effect of key process parameters like temperature, steam to biomass ratio, and equivalence ratio on product gas yield, composition, and heating value of syngas along with gasification efficiency. For a steam to biomass ratio of unity, the maximum mole fraction of hydrogen in the product gas is found to be 36.14% with a lower heating value of 7.49 MJ/Nm3 at a gasification temperature of 1500 K and equivalence ratio of 0.15. PMID:27433487
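The heating value of the predicted gas follows directly from its composition: the mixture LHV is the mole-fraction-weighted sum of the component LHVs. In the sketch below, the component LHVs are approximate literature values and the composition is illustrative; only the 36.14% H2 figure comes from the abstract:

```python
# Approximate lower heating values of combustible syngas components, MJ/Nm3.
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}

def gas_lhv(mole_fractions):
    """LHV of a gas mixture: sum of component LHVs weighted by mole fraction.
    Inerts (N2, CO2, H2O) contribute zero."""
    return sum(LHV.get(sp, 0.0) * x for sp, x in mole_fractions.items())

# Illustrative composition; H2 fraction is the reported 36.14%, the rest is assumed.
mix = {"H2": 0.3614, "CO": 0.20, "CH4": 0.02, "N2": 0.25, "CO2": 0.1686}
lhv = gas_lhv(mix)
```

With an assumed composition of this shape, the mixture LHV lands in the 6-8 MJ/Nm3 band, the same order as the 7.49 MJ/Nm3 reported at the optimum operating point.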

  5. Mining-related metals in terrestrial food webs of the upper Clark Fork River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastorok, R.A.; LaTier, A.J.; Butcher, M.K.

    1994-12-31

Fluvial deposits of tailings and other mining-related waste in selected riparian habitats of the Upper Clark Fork River basin (Montana) have resulted in metals-enriched soils. The significance of metals exposure to selected wildlife species was evaluated by measuring tissue residues of metals (arsenic, cadmium, copper, lead, zinc) in key dietary species, including dominant grasses (tufted hair grass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Average metals concentrations in grasses, invertebrates, and deer mice collected from tailings-affected sites were elevated relative to reference levels. Soil-tissue bioconcentration factors for grasses and invertebrates were generally lower than expected based on the range of values in the literature, indicating reduced bioavailability of metals from mining waste. In general, metals concentrations in willows, alfalfa, and barley were not elevated above reference levels. Using these data and plausible assumptions for other exposure parameters for white-tailed deer, red fox, and American kestrel, metals intake was estimated for soil and diet ingestion pathways. Comparisons of exposure estimates with toxicity reference values indicated that the elevated concentrations of metals in key food web species do not pose a significant risk to wildlife.
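The exposure-versus-toxicity comparison reduces to a hazard-quotient calculation; a minimal sketch with entirely hypothetical concentrations, intake rates, and toxicity reference value (TRV):

```python
def daily_dose(conc_mg_per_kg, intake_kg_per_day, body_wt_kg):
    """Ingested dose (mg per kg body weight per day) for one exposure pathway."""
    return conc_mg_per_kg * intake_kg_per_day / body_wt_kg

def hazard_quotient(dose, toxicity_reference_value):
    """HQ = dose / TRV; HQ < 1 suggests no significant risk."""
    return dose / toxicity_reference_value

# Hypothetical red-fox scenario, diet (deer mice) plus incidental soil
# ingestion; every number here is illustrative, not from the study.
diet = daily_dose(conc_mg_per_kg=12.0, intake_kg_per_day=0.45, body_wt_kg=4.5)
soil = daily_dose(conc_mg_per_kg=150.0, intake_kg_per_day=0.01, body_wt_kg=4.5)
hq = hazard_quotient(diet + soil, toxicity_reference_value=3.0)
```

Summing pathway doses before dividing by the TRV is what makes the soil-ingestion route matter even when its intake mass is small, since waste-affected soil concentrations can be an order of magnitude above tissue levels.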

  6. Norms and values in sociohydrological models

    NASA Astrophysics Data System (ADS)

    Roobavannan, Mahendran; van Emmerik, Tim H. M.; Elshafei, Yasmina; Kandasamy, Jaya; Sanderson, Matthew R.; Vigneswaran, Saravanamuthu; Pande, Saket; Sivapalan, Murugesu

    2018-02-01

    Sustainable water resources management relies on understanding how societies and water systems coevolve. Many place-based sociohydrology (SH) modeling studies use proxies, such as environmental degradation, to capture key elements of the social component of system dynamics. Parameters of assumed relationships between environmental degradation and the human response to it are usually obtained through calibration. Since these relationships are not yet underpinned by social-science theories, confidence in the predictive power of such place-based sociohydrologic models remains low. The generalizability of SH models therefore requires major advances in incorporating more realistic relationships, underpinned by appropriate hydrological and social-science data and theories. The latter is a critical input, since human culture - especially values and norms arising from it - influences behavior and the consequences of behaviors. This paper reviews a key social-science theory that links cultural factors to environmental decision-making, assesses how to better incorporate social-science insights to enhance SH models, and raises important questions to be addressed in moving forward. This is done in the context of recent progress in sociohydrological studies and the gaps that remain to be filled. The paper concludes with a discussion of challenges and opportunities in terms of generalization of SH models and the use of available data to allow future prediction and model transfer to ungauged basins.

  7. Towards a library of synthetic galaxy spectra and preliminary results of classification and parametrization of unresolved galaxies for Gaia. II

    NASA Astrophysics Data System (ADS)

    Tsalmantza, P.; Kontizas, M.; Rocca-Volmerange, B.; Bailer-Jones, C. A. L.; Kontizas, E.; Bellas-Velidis, I.; Livanou, E.; Korakitis, R.; Dapergolas, A.; Vallenari, A.; Fioc, M.

    2009-09-01

    Aims: This paper is the second in a series, implementing a classification system for Gaia observations of unresolved galaxies. Our goals are to determine spectral classes and estimate intrinsic astrophysical parameters via synthetic templates. Here we describe (1) a new extended library of synthetic galaxy spectra; (2) its comparison with various observations; and (3) first results of classification and parametrization experiments using simulated Gaia spectrophotometry of this library. Methods: Using the PÉGASE.2 code, based on galaxy evolution models that take account of metallicity evolution, extinction correction, and emission lines (with stellar spectra based on the BaSeL library), we improved our first library and extended it to cover the domain of most of the SDSS catalogue. Our classification and regression models were support vector machines (SVMs). Results: We produce an extended library of 28 885 synthetic galaxy spectra at zero redshift covering four general Hubble types of galaxies, over the wavelength range between 250 and 1050 nm at a sampling of 1 nm or less. The library is also produced for 4 random values of redshift in the range of 0-0.2. It is computed on a random grid of four key astrophysical parameters (infall timescale and 3 parameters defining the SFR) and, depending on the galaxy type, on two values of the age of the galaxy. The synthetic library was compared and found to be in good agreement with various observations. The first results from the SVM classifiers and parametrizers are promising, indicating that Hubble types can be reliably predicted and several parameters estimated with low bias and variance.

  8. Optimum design of bridges with superelastic-friction base isolators against near-field earthquakes

    NASA Astrophysics Data System (ADS)

    Ozbulut, Osman E.; Hurlebaus, Stefan

    2010-04-01

    The seismic response of a multi-span continuous bridge isolated with a novel superelastic-friction base isolator (S-FBI) is investigated under near-field earthquakes. The isolation system consists of a flat steel-Teflon sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearings limit the maximum seismic forces transmitted to the superstructure to a value that is a function of the friction coefficient of the sliding interface. The superelastic SMA device provides restoring capability to the isolation system together with additional damping. The key design parameters of an S-FBI system are the natural period of the isolated bridge, the yielding displacement of the SMA device, and the friction coefficient of the sliding bearings. The goal of this study is to obtain optimal values for each design parameter by performing sensitivity analyses of the isolated bridge. First, a three-span continuous bridge is modeled as a two-degree-of-freedom system with the S-FBI system. A neuro-fuzzy model is used to capture the rate-dependent nonlinear behavior of the SMA device. Ground motions used in the simulations are generated with a time-dependent method that employs wavelets to adjust accelerograms to match a target response spectrum with minimal changes to the other characteristics of the ground motions. Then, a set of nonlinear time history analyses of the isolated bridge is performed. The variation of the peak response quantities of the isolated bridge is shown as a function of the design parameters. The influence of temperature variations on the effectiveness of the S-FBI system is also evaluated. The results show that an optimum design of the isolated bridge with the S-FBI system can be achieved by judicious specification of the design parameters.

  9. MeProRisk - Acquisition and Prediction of thermal and hydraulic properties

    NASA Astrophysics Data System (ADS)

    Arnold, J.; Mottaghy, D.; Pechnig, R.

    2009-04-01

    MeProRisk is a joint project of five university institutes at RWTH Aachen University, Free University Berlin, and Kiel University. Two partners, namely Geophysica Beratungsgesellschaft mbH (Aachen) and RWE Dea AG (Hamburg), represent the industrial side. It is funded by the German Ministry of Education and Science (BMBF). The MeProRisk project aims to improve strategies for reducing the risk in planning geothermal power plants. Within our subproject we estimate geothermally relevant parameters at the laboratory and borehole scales. These basic data will be integrated with hydraulic and seismic experiments to provide a 3D reservoir model. So far we have focused on two different type locations in Germany: (1) the crystalline basement in South Germany and (2) the Rotliegend formation and volcanic rocks in the Northern German Sedimentary Basin. For the crystalline basement, an extensive dataset could be compiled from the 9 km deep KTB borehole, including logging, core, and cutting data. The complete dataset was interpreted with respect to lithology, structure, and alteration of the formation, which mainly consists of alternating sequences of gneiss and metabasite. For the different rock types the data were analyzed statistically to provide specific values for geothermal key parameters, for example: p-wave velocity, density, thermal conductivity, permeability, and porosity. For the second type location we used logging data recovered from one borehole (> 5 km deep) drilled in the so-called Voelkersen gas field. The data were supplied by the RWE DEA company. The formation comprises volcanic rocks and sandstones. On corresponding cores we measured p-wave velocity, thermal conductivity, density, and porosity in the laboratory. In the same way as for type location (1), the complete dataset was analyzed statistically to derive specific values relevant for the geothermal reservoir model. 
Finally, this study will culminate in a multi-scale implementation of the borehole and its immediate surroundings into a 3D reservoir model. For this purpose we provide the basic data required for the model calculations.

  10. Natural parameter values for generalized gene adjacency.

    PubMed

    Yang, Zhenyu; Sankoff, David

    2010-09-01

    Given the gene orders in two modern genomes, it may be difficult to decide if some genes are close enough in both genomes to infer some ancestral proximity or some functional relationship. Current methods all depend on arbitrary parameters. We explore a class of gene proximity criteria and find two kinds of natural values for their parameters. One kind has to do with the parameter value where the expected information contained in two genomes about each other is maximized. The other kind of natural value has to do with parameter values beyond which all genes are clustered. We analyze these using combinatorial and probabilistic arguments as well as simulations.
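    One of the two natural values mentioned above is the parameter beyond which all genes merge into a single cluster. The following simulation illustrates that idea with a simple window-based proximity criterion (an illustrative choice, not necessarily the authors' exact criterion): genes are adjacent if their positions are within a threshold in both genomes, and clusters are the connected components of that relation.

```python
import random

# Sketch of a window-based gene proximity criterion: two genes are "adjacent"
# if their positions differ by at most theta in BOTH genomes. Clusters are
# the connected components of this adjacency relation (computed here with
# union-find); as theta grows, all genes eventually merge into one cluster.
def n_clusters(perm, theta):
    n = len(perm)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if abs(i - j) <= theta and abs(perm[i] - perm[j]) <= theta:
                parent[find(i)] = find(j)  # union the two clusters
    return len({find(i) for i in range(n)})

random.seed(1)
genome2 = list(range(30))
random.shuffle(genome2)  # genome 1 is taken as the identity order 0..29
for theta in (1, 3, 30):
    print(theta, n_clusters(genome2, theta))
```

    Smaller thresholds give more, finer clusters; at theta equal to the genome length every gene pair is adjacent and a single cluster remains.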

  11. Modeling ferroelectric film properties and size effects from tetragonal interlayer in Hf1-xZrxO2 grains

    NASA Astrophysics Data System (ADS)

    Künneth, Christopher; Materlik, Robin; Kersch, Alfred

    2017-05-01

    Size effects from surface or interface energy play a pivotal role in stabilizing the ferroelectric phase in recently discovered thin-film zirconia-hafnia. However, sufficient quantitative understanding has been lacking due to interference with the stabilizing effect of dopants. For the important class of undoped Hf1-xZrxO2, a phase stability model based on free energies from density functional theory (DFT) and on surface energy values adapted to the sparse experimental and theoretical data has been successful in describing key properties of the available thin-film data. Since surfaces and interfaces are prone to interference, the predictive capability of the model is surprising and points to a hitherto undetected underlying mechanism. New experimental data hint at the existence of an interlayer on the grain surface fixed in the tetragonal phase, possibly shielding the grain from external influence. To explore the consequences of such a mechanism, we develop an interface free energy model that includes the fixed interlayer, generalize the grain model to include a grain radius distribution, calculate average polarization and permittivity, and compare the model with available experimental data. Since values for interface energies are sparse or uncertain, we obtain them by minimizing the least-squares difference between predicted key parameters and experimental data in a global optimization. Since the detailed DFT energies depend on the chosen method, we repeat the search for different computed data sets and arrive at quantitatively different but qualitatively consistent values for the interface energies. The resulting values are physically very reasonable, and the model is able to give qualitative predictions. On the other hand, the optimization reveals that the model is not able to fully capture the experimental data. We discuss possible physical effects and directions of research to close this gap.

  12. Chaos control of Hastings-Powell model by combining chaotic motions.

    PubMed

    Danca, Marius-F; Chattopadhyay, Joydev

    2016-04-01

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the average of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switched values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
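    The switching scheme can be sketched directly: integrate the HP food-chain equations while alternating the control parameter, and compare with a run that uses the averaged parameter. The parameter values below (a1 = 5, a2 = 0.1, b2 = 2, d1 = 0.4, d2 = 0.01, with b1 switched) are common literature choices for the HP model, not necessarily those used in the paper.

```python
import numpy as np

# Minimal sketch of the Parameter Switching (PS) idea on the Hastings-Powell
# tritrophic food chain: the control parameter b1 is switched periodically
# between two values while integrating with fixed-step RK4, and the result is
# compared with a run using the averaged b1.
A1, A2, B2, D1, D2 = 5.0, 0.1, 2.0, 0.4, 0.01

def hp_rhs(u, b1):
    x, y, z = u
    f1 = A1 * x / (1.0 + b1 * x)   # Holling type II responses
    f2 = A2 * y / (1.0 + B2 * y)
    return np.array([x * (1 - x) - f1 * y,
                     f1 * y - f2 * z - D1 * y,
                     f2 * z - D2 * z])

def integrate(b1_schedule, u0=(0.8, 0.2, 9.0), h=0.01, steps=20000):
    u = np.array(u0, dtype=float)
    for k in range(steps):
        b1 = b1_schedule(k)        # the parameter may change every step
        k1 = hp_rhs(u, b1); k2 = hp_rhs(u + 0.5*h*k1, b1)
        k3 = hp_rhs(u + 0.5*h*k2, b1); k4 = hp_rhs(u + h*k3, b1)
        u = u + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return u

switched = integrate(lambda k: 2.2 if (k // 50) % 2 == 0 else 2.4)
averaged = integrate(lambda k: 2.3)   # average of the two switched values
print(switched, averaged)
```

    Per the abstract, the long-run attractor of the switched run should match the attractor of the averaged-parameter run; comparing full trajectories (rather than endpoints) would make that visible.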

  13. Chaos control of Hastings-Powell model by combining chaotic motions

    NASA Astrophysics Data System (ADS)

    Danca, Marius-F.; Chattopadhyay, Joydev

    2016-04-01

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the average of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switched values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.

  14. Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties

    USDA-ARS?s Scientific Manuscript database

    Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as alter the...

  15. Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties

    USDA-ARS?s Scientific Manuscript database

    Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as skew the...

  16. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are taken for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many previous studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed using a simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were largely consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. 
In addition, the temporal heterogeneity of the optimal values of the sensitive parameters showed a significant linear correlation with the spatial heterogeneity under all three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified in order to adopt different parameter strategies in practical applications. These conclusions could help deepen understanding of the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.
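    The abstract does not give the exact definitions of the heterogeneity judgment indices. One plausible sketch, for a single sensitive parameter, uses the coefficient of variation (CV) of its monthly optimal values: across months for the temporal index and across sites for the spatial index. The matrix below is synthetic.

```python
import numpy as np

# Illustrative heterogeneity indices for one sensitive parameter, given its
# monthly optimal values at several sites (rows = sites, columns = months).
# CV across months ~ temporal heterogeneity; CV across sites ~ spatial
# heterogeneity. This is one plausible choice, not the paper's definition.
def temporal_heterogeneity(opt):
    return np.mean(np.std(opt, axis=1) / np.mean(opt, axis=1))

def spatial_heterogeneity(opt):
    return np.mean(np.std(opt, axis=0) / np.mean(opt, axis=0))

rng = np.random.default_rng(0)
opt = 1.0 + 0.1 * rng.standard_normal((2, 12))  # 2 sites x 12 months, synthetic
print(temporal_heterogeneity(opt), spatial_heterogeneity(opt))
```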

  17. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrient, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. 
PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend using PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
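    The identical-twin logic above (generate "observations" from known parameters, then check the optimizer recovers them) can be sketched with a much simpler model. Logistic growth and `scipy.optimize.least_squares` stand in here purely for illustration; the study uses NEMURO and PEST.

```python
import numpy as np
from scipy.optimize import least_squares

# Identical-twin sketch: synthesize noise-free "data" from known parameters
# of a toy growth model, then verify the optimizer recovers them.
def simulate(r, K, p0=0.1, dt=0.1, steps=100):
    p = np.empty(steps); p[0] = p0
    for t in range(1, steps):                     # forward Euler integration
        p[t] = p[t-1] + dt * r * p[t-1] * (1 - p[t-1] / K)
    return p

true_r, true_K = 0.8, 2.0
obs = simulate(true_r, true_K)                    # the "monthly snapshots"
fit = least_squares(lambda q: simulate(q[0], q[1]) - obs,
                    x0=[0.3, 1.0],                # deliberately wrong start
                    bounds=([0.01, 0.1], [5.0, 10.0]))
print(fit.x)
```

    With noise-free data and a well-posed problem the true parameters are recovered; the abstract's point is that averaging or aggregating the data before fitting can break this recovery.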

  18. Optical image encryption method based on incoherent imaging and polarized light encoding

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key, encoding the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarized state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording of a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
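    The intensity-only encoding step can be sketched numerically: convolve the plaintext intensity with a secret PSF via the FFT, and decrypt by dividing by the PSF's transfer function. The PSF below is a delta plus a small random part so its spectrum has no zeros; the paper's actual PSF comes from a physical imaging system, and the polarization encoding layer is not modeled here.

```python
import numpy as np

# Sketch of PSF-based intensity encoding: encryption is an FFT convolution
# with a secret incoherent PSF; decryption divides by its transfer function.
rng = np.random.default_rng(42)
img = rng.random((64, 64))                # stand-in intensity image

psf = rng.random((64, 64))
psf *= 0.1 / psf.sum()                    # random part with total weight 0.1
psf[0, 0] += 0.9                          # delta part keeps the OTF invertible

otf = np.fft.fft2(psf)
cipher = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))       # encryption
decoded = np.real(np.fft.ifft2(np.fft.fft2(cipher) / otf))   # decryption

print(np.max(np.abs(decoded - img)))
```

    Because |OTF| is bounded away from zero by construction, the inverse filter recovers the image to machine precision; a purely random PSF would need regularized (e.g. Wiener) deconvolution instead.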

  19. Valuation of Mortality Risk Attributable to Climate Change: Investigating the Effect of Survey Administration Modes on a VSL

    PubMed Central

    Ščasný, Milan; Alberini, Anna

    2012-01-01

    The health impact attributable to climate change has been identified as one of the priority areas for impact assessment. The main goal of this paper is to estimate the monetary value of one key health effect, namely premature mortality. Specifically, our goal is to derive the value of a statistical life (VSL) from people's willingness to pay for avoiding the risk of dying in one post-transition country in Europe, the Czech Republic. We carried out a series of conjoint choice experiments in order to value mortality risk reductions. We found the responses to the conjoint choice questions to be reasonable and consistent with the economic paradigm. The VSL is about EUR 2.4 million, and our estimate is comparable with the value of preventing a fatality used in one of the integrated assessment models. To investigate whether carrying out the survey through the internet may bias the welfare estimate, we administered our questionnaire to two independent samples of respondents using two different modes of survey administration. The results show that the VSLs for the two groups of respondents are €2.25 and €2.55 million, and these figures are statistically indistinguishable. However, the key parameters of indirect utility between the two modes of survey administration are statistically different when specific subgroups of the population, such as older respondents, are considered. Based on this evidence, we conclude that properly designed and administered online surveys are a reliable method for administering questionnaires, even when the latter are cognitively challenging. However, attention should be paid to sampling and to the choice of survey administration mode when the preferences of specific segments of the population are elicited. PMID:23249861

  20. Bifurcation and Stability Analysis of the Equilibrium States in Thermodynamic Systems in a Small Vicinity of the Equilibrium Values of Parameters

    NASA Astrophysics Data System (ADS)

    Barsuk, Alexandr A.; Paladi, Florentin

    2018-04-01

    The dynamic behavior of a thermodynamic system, described by one order parameter and one control parameter, in a small neighborhood of ordinary and bifurcation equilibrium values of the system parameters is studied. Using the general methods of investigating the branching (bifurcations) of solutions for nonlinear equations, we performed an exhaustive analysis of the order parameter dependences on the control parameter in a small vicinity of the equilibrium values of the parameters, including the stability analysis of the equilibrium states and the asymptotic behavior of the order parameter dependences on the control parameter (bifurcation diagrams). The peculiarities of the transition to an unstable state of the system are discussed, and estimates of the transition time to the unstable state in the neighborhood of ordinary and bifurcation equilibrium values of the parameters are given. The influence of an external field on the dynamic behavior of the thermodynamic system is analyzed, and the peculiarities of the system's dynamic behavior near the ordinary and bifurcation equilibrium values of the parameters in the presence of an external field are discussed. The dynamic process of magnetization of a ferromagnet is discussed using the general methods of bifurcation and stability analysis presented in the paper.

  1. Uncertainty quantification for accident management using ACE surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.

    The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates, which are then used to determine the distribution of the peak clad temperature (PCT) during a loss-of-feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and plant parameters such as coolant pressure and temperature is represented by surrogates constructed from 45 RELAP5 cases. The time-dependent PCT for different values of the F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)
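    The two-layer propagation step can be sketched generically: fit one surrogate from the independent variables to an intermediate plant parameter, fit a second from that parameter to the PCT, then push Monte Carlo samples through both. Toy polynomial fits and synthetic "code runs" stand in here for the ACE regressions and the 45 RELAP5 cases.

```python
import numpy as np

# Sketch of Monte Carlo propagation through two surrogate layers.
# layer 1: independent variable -> intermediate plant parameter
# layer 2: intermediate parameter -> peak clad temperature (PCT)
rng = np.random.default_rng(7)

runs_x = np.linspace(0.0, 1.0, 45)             # stand-in for 45 code cases
runs_mid = 2.0 * runs_x + 0.1 * runs_x**2      # synthetic intermediate response
runs_pct = 900.0 + 150.0 * runs_mid            # synthetic PCT response (K)

layer1 = np.polynomial.polynomial.polyfit(runs_x, runs_mid, 2)
layer2 = np.polynomial.polynomial.polyfit(runs_mid, runs_pct, 1)

samples = rng.uniform(0.0, 1.0, 10000)         # sampled independent variable
mid = np.polynomial.polynomial.polyval(samples, layer1)
pct = np.polynomial.polynomial.polyval(mid, layer2)
print(np.percentile(pct, 95))                  # a quantile of the PCT distribution
```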

  2. Sensitivity of corneal biomechanical and optical behavior to material parameters using design of experiments method.

    PubMed

    Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung

    2018-02-01

    The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.
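    The screening method used above can be illustrated with a two-level full factorial design: vary each material parameter between a low and high level, run all combinations, and compute each factor's main effect. The response function below is an invented stand-in for the finite element model; only the factor names mirror the study.

```python
import numpy as np
from itertools import product

# Sketch of a 2-level full-factorial main-effect calculation (design of
# experiments screening). The response is a toy linear stand-in in which the
# "matrix stiffness" factor deliberately dominates.
def response(matrix, fiber, nonlin):
    return 3.0 * matrix + 1.0 * fiber + 0.5 * nonlin

levels = [-1, 1]                               # coded low/high levels
runs = list(product(levels, repeat=3))         # 2^3 = 8 runs
y = np.array([response(*run) for run in runs])

def main_effect(factor_index):
    signs = np.array([run[factor_index] for run in runs])
    return y[signs == 1].mean() - y[signs == -1].mean()

effects = [main_effect(i) for i in range(3)]
print(effects)   # the first (matrix) factor has the largest main effect
```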

  3. Prediction of Geomagnetic Activity and Key Parameters in High-latitude Ionosphere

    NASA Technical Reports Server (NTRS)

    Khazanov, George V.; Lyatsky, Wladislaw; Tan, Arjun; Ridley, Aaron

    2007-01-01

    Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the US Space Weather Program. Prediction reliability depends on the prediction method and on the elements included in the prediction scheme. Two of the main elements of such a scheme are an appropriate geomagnetic activity index and an appropriate coupling function (the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity). We have developed a new index of geomagnetic activity, the Polar Magnetic (PM) index, and an improved version of the solar wind coupling function. The PM index is similar to the existing polar cap PC index, but it shows much better correlation with upstream solar wind/IMF data and other events in the magnetosphere and ionosphere. We investigate the correlation of the PM index with upstream solar wind/IMF data for 10 years (1995-2004), covering both low and high solar activity. We have also introduced a new prediction function for predicting the cross-polar-cap voltage and Joule heating, based on both the PM index and upstream solar wind/IMF data. As we show, this prediction function significantly increases the reliability of predicting these important parameters. The correlation coefficients between the actual and predicted values of these parameters are approximately 0.9 and higher.
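    Evaluating a coupling function amounts to scoring its correlation against an activity index. The sketch below does this on synthetic data with one classical candidate form, v²·Bs (southward IMF only); the paper's improved coupling function and the PM index itself are not reproduced here.

```python
import numpy as np

# Sketch: score a candidate solar-wind coupling function by its linear
# correlation with a geomagnetic index. All data here are synthetic; v^2 * Bs
# is one classical coupling-function form, not the paper's improved version.
rng = np.random.default_rng(3)
n = 1000
v = rng.uniform(300.0, 700.0, n)            # solar wind speed, km/s
bz = rng.normal(0.0, 5.0, n)                # IMF Bz, nT
bs = np.where(bz < 0, -bz, 0.0)             # southward component only
coupling = v**2 * bs

# Synthetic index: linear in the coupling function plus observation noise.
index = 1e-5 * coupling + rng.normal(0.0, 2.0, n)

r = np.corrcoef(coupling, index)[0, 1]
print(round(r, 3))
```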

  4. Decoding of the light changes in eclipsing Wolf-Rayet binaries. I. A non-classical approach to the solution of light curves

    NASA Astrophysics Data System (ADS)

    Perrier, C.; Breysacher, J.; Rauw, G.

    2009-09-01

    Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.

  5. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope (the density difference between adjacent cells) and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids, as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
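    The tree-order construction described in this abstract, a cumulant generating function obtained as a Legendre transform of a rate function built from the initial moments, takes the standard large-deviation form sketched below; the notation (Ψ for the rate function, φ for the cumulant generating function) is generic shorthand, not necessarily the paper's:

```latex
\varphi(\lambda) \;=\; \sup_{\rho}\bigl[\lambda\rho \;-\; \Psi(\rho)\bigr],
\qquad
\mathcal{P}(\rho) \;=\; \int_{-i\infty}^{+i\infty}
\frac{\mathrm{d}\lambda}{2\pi i}\,
\exp\bigl(-\lambda\rho + \varphi(\lambda)\bigr)
```

    The one-cell PDF is then recovered from φ by the inverse Laplace transform above, evaluated in practice by saddle-point or numerical contour integration.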

  6. Evaluation of laboratory-scale in-vessel co-composting of tobacco and apple waste.

    PubMed

    Kopčić, Nina; Vuković Domanovac, Marija; Kučić, Dajana; Briški, Felicita

    2014-02-01

    An efficient composting process requires a set of adequate parameters, among which the physical-chemical properties of the composting substrate play the key role. By combining different types of biodegradable solid waste, it is possible to obtain a substrate suitable for the microorganisms driving the composting process. In this work the composting of an apple and tobacco solid waste mixture (1:7, dry weight) was explored. The aim of the work was to investigate the efficiency of biodegradation of the given mixture and to characterize the resulting raw compost. Composting was conducted in a 24 L thermally insulated column reactor at an airflow rate of 1.1 L min(-1). During 22 days several parameters were closely monitored: temperature and mass of the substrate, volatile solids content, C/N ratio and pH-value of the mixture, and oxygen consumption. The composting of the apple and tobacco waste resulted in high degradation of the volatile solids (53.1%). During the experiment 1.76 kg of oxygen was consumed and the C/N ratio of the product was 11.6. The obtained temperature curve was almost a "mirror image" of the oxygen concentration curve, while the peak values of the temperature occurred 9.5 h after the peak oxygen consumption. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Methane oxidation in a landfill cover soil reactor: Changing of kinetic parameters and microorganism community structure.

    PubMed

    Xing, Zhi L; Zhao, Tian T; Gao, Yan H; Yang, Xu; Liu, Shuai; Peng, Xu Y

    2017-02-23

    Changes in CH4 oxidation potential and biological characteristics with CH4 concentration were studied in a landfill cover soil reactor (LCSR). The maximum rate of CH4 oxidation reached 32.40 mol d(-1) m(-2) when sufficient O2 was provided in the LCSR. The kinetic parameters of methane oxidation in landfill cover soil were obtained by fitting a substrate diffusion and consumption model to the concentration profiles of CH4 and O2. The values of [Formula: see text] (0.93-2.29%) and [Formula: see text] (140-524 nmol (kg soil-DW)(-1) s(-1)) increased with CH4 concentration (9.25-20.30%), while the values of [Formula: see text] (312.9-2.6%) and [Formula: see text] (1.3 × 10(-5) to 9.0 × 10(-3) nmol mL(-1) h(-1)) showed the opposite trend. MiSeq pyrosequencing data revealed that Methylobacter (whose relative abundance decreased with the height of the LCSR) and Methylococcales_unclassified (whose relative abundance increased except in H80) became the key players after incubation with increasing CH4 concentration. These findings provide information for assessing CH4 oxidation potential and changes in biological characteristics in landfill cover soil.

  8. Estimating the spin diffusion length and the spin Hall angle from spin pumping induced inverse spin Hall voltages

    NASA Astrophysics Data System (ADS)

    Roy, Kuntal

    2017-11-01

    There exists considerable confusion in estimating the spin diffusion length of materials with high spin-orbit coupling from spin pumping experiments. For designing functional devices, it is important to determine the spin diffusion length with sufficient accuracy from experimental results. An inaccurate estimation of spin diffusion length also affects the estimation of other parameters (e.g., spin mixing conductance, spin Hall angle) concomitantly. The spin diffusion length for platinum (Pt) has been reported in the literature over a wide range, 0.5-14 nm, and in particular as a constant value independent of the Pt thickness. Here, the key reasons behind such a wide range of reported values of spin diffusion length have been identified comprehensively. In particular, it is shown here that a thickness-dependent conductivity and spin diffusion length is necessary to simultaneously match the experimental results of effective spin mixing conductance and inverse spin Hall voltage due to spin pumping. Such a thickness-dependent spin diffusion length is tantamount to the Elliott-Yafet spin relaxation mechanism, which bodes well for transition metals. This conclusion is not altered even when there is significant interfacial spin memory loss. Furthermore, the variations in the estimated parameters are also studied, which is important for technological applications.

  9. Improving the efficiency of an Er:YAG laser on enamel and dentin.

    PubMed

    Rizcalla, Nicolas; Bader, Carl; Bortolotto, Tissiana; Krejci, Ivo

    2012-02-01

    To evaluate the influence of air pressure, water flow rate, and pulse frequency on the removal speed of enamel and dentin as well as on their surface morphology. Twenty-four bovine incisors were horizontally cut in slices. Each sample was mounted on an experimental assembly, allowing precise orientation. Eighteen cavities were prepared, nine in enamel and nine in dentin. Specific parameters for frequency, water flow rate, and air pressure were applied for each experimental group. Three groups were randomly formed according to the air pressure settings. Cavity depth was measured using a digital micrometer gauge, and surface morphology was checked by means of scanning electron microscopy. Data were analyzed with ANOVA and the Duncan post hoc test. Irradiation at 25 Hz for enamel and 30 Hz for dentin provided the best ablation rates within this study, but efficiency decreased if the frequency was raised further. Greater tissue ablation was found with the water flow rate set to low, and ablation dropped with higher values. Air pressure was found to interact with the other settings, since ablation rates varied with different air pressure values. Fine-tuning of all parameters to get a good ablation rate with minimum surface damage seems to be key in achieving optimal efficiency for cavity preparation with an Er:YAG laser.

  10. Chemical-specific screening criteria for interpretation of biomonitoring data for volatile organic compounds (VOCs)--application of steady-state PBPK model solutions.

    PubMed

    Aylward, Lesa L; Kirman, Chris R; Blount, Ben C; Hays, Sean M

    2010-10-01

    The National Health and Nutrition Examination Survey (NHANES) generates population-representative biomonitoring data for many chemicals including volatile organic compounds (VOCs) in blood. However, no health or risk-based screening values are available to evaluate these data from a health safety perspective or to use in prioritizing among chemicals for possible risk management actions. We gathered existing risk assessment-based chronic exposure reference values such as reference doses (RfDs), reference concentrations (RfCs), tolerable daily intakes (TDIs), cancer slope factors, etc. and key pharmacokinetic model parameters for 47 VOCs. Using steady-state solutions to a generic physiologically-based pharmacokinetic (PBPK) model structure, we estimated chemical-specific steady-state venous blood concentrations across chemicals associated with unit oral and inhalation exposure rates and with chronic exposure at the identified exposure reference values. The geometric means of the slopes relating modeled steady-state blood concentrations to steady-state exposure to a unit oral dose or unit inhalation concentration among 38 compounds with available pharmacokinetic parameters were 12.0 microg/L per mg/kg-d (geometric standard deviation [GSD] of 3.2) and 3.2 microg/L per mg/m(3) (GSD=1.7), respectively. Chemical-specific blood concentration screening values based on non-cancer reference values for both oral and inhalation exposure range from 0.0005 to 100 microg/L; blood concentrations associated with cancer risk-specific doses at the 1E-05 risk level ranged from 5E-06 to 6E-02 microg/L. The distribution of modeled steady-state blood concentrations associated with unit exposure levels across VOCs may provide a basis for estimating blood concentration screening values for VOCs that lack chemical-specific pharmacokinetic data. 
The screening blood concentrations presented here provide a tool for risk assessment-based evaluation of population biomonitoring data for VOCs and are most appropriately applied to central tendency estimates for such datasets. Copyright (c) 2010 Elsevier Inc. All rights reserved.
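    The screening-value arithmetic described in this abstract reduces, at steady state, to multiplying a chronic exposure reference value by a blood-concentration slope. The sketch below assumes simple linear steady-state kinetics (C_ss = slope × chronic exposure rate); the slopes are the geometric means reported in the abstract, while the reference values in the example are hypothetical placeholders, not any agency's numbers.

```python
# Hedged sketch: linear steady-state screening-value arithmetic.
# Slopes are the geometric means quoted in the abstract; the RfD/RfC
# inputs in the example are invented placeholders for illustration.

ORAL_SLOPE = 12.0        # ug/L blood per mg/kg-day (geometric mean, oral)
INHALATION_SLOPE = 3.2   # ug/L blood per mg/m^3 (geometric mean, inhalation)

def oral_screening_blood_conc(rfd_mg_per_kg_day):
    """Steady-state blood concentration (ug/L) at chronic oral exposure = RfD."""
    return ORAL_SLOPE * rfd_mg_per_kg_day

def inhalation_screening_blood_conc(rfc_mg_per_m3):
    """Steady-state blood concentration (ug/L) at chronic inhalation = RfC."""
    return INHALATION_SLOPE * rfc_mg_per_m3

# Hypothetical reference values for illustration only.
oral_screen = oral_screening_blood_conc(0.01)        # 0.12 ug/L
inhal_screen = inhalation_screening_blood_conc(0.5)  # 1.6 ug/L
```

    Chemical-specific pharmacokinetics would replace these pooled slopes where data exist; the pooled values serve only as defaults for VOCs lacking parameters, as the abstract suggests.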

  11. Stony Endocarp Dimension and Shape Variation in Prunus Section Prunus

    PubMed Central

    Depypere, Leander; Chaerle, Peter; Mijnsbrugge, Kristine Vander; Goetghebeur, Paul

    2007-01-01

    Background and Aims Identification of Prunus groups at subspecies or variety level is complicated by the wide range of variation and morphological transitional states. Knowledge of the degree of variability within and between species is a sine qua non for taxonomists. Here, a detailed study of endocarp dimension and shape variation for taxa of Prunus section Prunus is presented. Method The sample size necessary to obtain an estimation of the population mean with a precision of 5 % was determined by iteration. Two cases were considered: (1) the population represents an individual; and (2) the population represents a species. The intra-individual and intraspecific variation of Prunus endocarps was studied by analysing the coefficients of variance for dimension and shape parameters. Morphological variation among taxa was assessed using univariate statistics. The influence of the time of sampling and the level of hydration on endocarp dimensions and shape was examined by means of pairwise t-tests. In total, 14 endocarp characters were examined for five Eurasian plum taxa. Key Results All linear measurements and index values showed a low or normal variability on the individual and species level. In contrast, the parameter ‘Vertical Asymmetry’ had high coefficients of variance for one or more of the taxa studied. Of all dimension and shape parameters studied, only ‘Triangle’ differed significantly between mature endocarps of P. insititia sampled with a time difference of 1 month. The level of hydration affected endocarp dimensions and shape significantly. Conclusions Index values and the parameters ‘Perimeter’, ‘Area’, ‘Triangle’, ‘Ellipse’, ‘Circular’ and ‘Rectangular’, based on sample sizes and coefficients of variance, were found to be most appropriate for further taxonomic analysis. However, use of one, single endocarp parameter is not satisfactory for discrimination between Eurasian plum taxa, mainly because of overlapping ranges. 
Before analysing dried endocarps, full hydration is recommended, as this restores the original dimensions and shape. PMID:17965026

  12. Predictive value of seven preoperative prognostic scoring systems for spinal metastases.

    PubMed

    Leithner, Andreas; Radl, Roman; Gruber, Gerald; Hochegger, Markus; Leithner, Katharina; Welkerling, Heike; Rehak, Peter; Windhager, Reinhard

    2008-11-01

    Predicting prognosis is the key factor in selecting the proper treatment modality for patients with spinal metastases. Therefore, various assessment systems have been designed in order to provide a basis for deciding the course of treatment. Such systems have been proposed by Tokuhashi, Sioutos, Tomita, Van der Linden, and Bauer. The scores differ greatly in the kind of parameters assessed. The aim of this study was to evaluate the prognostic value of each score. Eight parameters were assessed for 69 patients (37 male, 32 female): location, general condition, number of extraspinal bone metastases, number of spinal metastases, visceral metastases, primary tumour, severity of spinal cord palsy, and pathological fracture. Scores according to Tokuhashi (original and revised), Sioutos, Tomita, Van der Linden, and Bauer were assessed as well as a modified Bauer score without scoring for pathologic fracture. Nineteen patients were still alive as of September 2006 with a minimum follow-up of 12 months. All other patients died after a mean period of 17 months after operation. The mean overall survival period was only 3 months for lung cancer, followed by prostate (7 months), kidney (23 months), breast (35 months), and multiple myeloma (51 months). At univariate survival analysis, primary tumour and visceral metastases were significant parameters, while Karnofsky score was only significant in the group including myeloma patients. In multivariate analysis of all seven parameters assessed, primary tumour and visceral metastases were the only significant parameters. Of all seven scoring systems, the original Bauer score and a Bauer score without scoring for pathologic fracture had the best association with survival (P < 0.001). The data of the present study emphasize that the original Bauer score and a modified Bauer score without scoring for pathologic fracture seem to be practicable and highly predictive preoperative scoring systems for patients with spinal metastases. 
However, the decision for or against surgery should never be based on a prognostic score alone but should also take symptoms such as pain or neurological compromise into account.

  13. Normative calcaneal quantitative ultrasound data for the indigenous Shuar and non-Shuar Colonos of the Ecuadorian Amazon.

    PubMed

    Madimenos, Felicia C; Snodgrass, J Josh; Blackwell, Aaron D; Liebert, Melissa A; Cepon, Tara J; Sugiyama, Lawrence S

    2011-01-01

    Minimal data on bone mineral density changes are available from populations in developing countries. Using calcaneal quantitative ultrasound (QUS) techniques, the current study contributes to remedying this gap in the literature by establishing a normative data set on the indigenous Shuar and non-Shuar Colonos of the Ecuadorian Amazon. The paucity of bone mineral density (BMD) data from populations in developing countries partially reflects the lack of diagnostic resources in these areas. Portable QUS techniques now enable researchers to collect bone health data in remote field-based settings and to contribute normative data from developing regions. The main objective of this study is to establish normative QUS data for two Ecuadorian Amazonian populations-the indigenous Shuar and non-Shuar Colonos. The effects of ethnic group, sex, age, and body size on QUS parameters are also considered. A study cohort consisting of 227 Shuar and 261 Colonos (15-91 years old) were recruited from several small rural Ecuadorian communities in the Upano River Valley. Calcaneal QUS parameters were collected on the right heel of each participant using a Sahara bone sonometer. Three ultrasound generated parameters were employed: broadband ultrasound attenuation (BUA), speed of sound (SOS), and calculated heel BMD (hBMD). In both populations and sexes, all QUS values were progressively lower with advancing age. Shuar have significantly higher QUS values than Colonos, with most pronounced differences found between pre-menopausal Shuar and Colono females. Multiple regression analyses show that age is a key predictor of QUS while weight alone is a less consistent determinant. Both Shuar males and females display comparatively greater QUS parameters than other reference populations. 
These normative data for three calcaneal QUS parameters will be useful for predicting fracture risk and determining diagnostic QUS criteria of osteoporosis in non-industrialized populations in South America and elsewhere.

  14. Use of Multiple Linear Regression Models for Setting Water Quality Criteria for Copper: A Complementary Approach to the Biotic Ligand Model.

    PubMed

    Brix, Kevin V; DeForest, David K; Tear, Lucinda; Grosell, Martin; Adams, William J

    2017-05-02

    Biotic Ligand Models (BLMs) for metals are widely applied in ecological risk assessments and in the development of regulatory water quality guidelines in Europe, and in 2007 the United States Environmental Protection Agency (USEPA) recommended BLM-based water quality criteria (WQC) for Cu in freshwater. However, to date, few states have adopted BLM-based Cu criteria into their water quality standards on a state-wide basis, which appears to be due to the perception that the BLM is too complicated or requires too many input variables. Using the mechanistic BLM framework to first identify key water chemistry parameters that influence Cu bioavailability, namely dissolved organic carbon (DOC), pH, and hardness, we developed Cu criteria using the same basic methodology used by the USEPA to derive hardness-based criteria but with the addition of DOC and pH. As an initial proof of concept, we developed stepwise multiple linear regression (MLR) models for species that have been tested over wide ranges of DOC, pH, and hardness conditions. These models predicted acute Cu toxicity values that were within a factor of ±2 in 77% to 97% of tests (5 species had adequate data) and chronic Cu toxicity values that were within a factor of ±2 in 92% of tests (1 species had adequate data). This level of accuracy is comparable to the BLM. Following USEPA guidelines for WQC development, the species data were then combined to develop a linear model with pooled slopes for each independent parameter (i.e., DOC, pH, and hardness) and species-specific intercepts using Analysis of Covariance. The pooled MLR and BLM models predicted species-specific toxicity with similar precision; adjusted R² and R² values ranged from 0.56 to 0.86 and from 0.66 to 0.85, respectively. 
Graphical exploration of relationships between predicted and observed toxicity, residuals and observed toxicity, and residuals and concentrations of key input parameters revealed many similarities and a few key distinctions between the performances of the two models. The pooled MLR model was then applied to the species sensitivity distribution to derive acute and chronic criteria equations similar in form to the USEPA's current hardness-based criteria equations but with DOC, pH, and hardness as the independent variables. Overall, the MLR is less responsive to DOC than the BLM across a range of hardness and pH conditions but more responsive to hardness than the BLM. Additionally, at low and intermediate hardness, the MLR model is less responsive than the BLM to pH, but the two models respond comparably at high hardness. The net effect of these different response profiles is that under many typical water quality conditions, MLR- and BLM-based criteria are quite comparable. Indeed, conditions where the two models differ most (high pH/low hardness and low pH/high hardness) are relatively rare in natural aquatic systems. We suggest that this MLR-based approach, which includes the mechanistic foundation of the BLM but is also consistent with widely accepted hardness-dependent WQC in terms of development and form, may facilitate adoption of updated state-wide Cu criteria that more accurately account for the parameters influencing Cu bioavailability than current hardness-based criteria.
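    The MLR form described above, a toxicity endpoint regressed on DOC, pH, and hardness with pooled slopes, can be sketched as an ordinary least squares fit. Everything below is synthetic: the coefficients and the data are invented for illustration and are not the pooled slopes derived in the study.

```python
# Hedged sketch of an MLR of ln(EC50) on ln(DOC), pH, and ln(hardness),
# fit by ordinary least squares (normal equations + Gaussian elimination).
# All coefficients and data are synthetic, not the study's values.
import math
import random

def fit_mlr(X, y):
    """OLS via normal equations (X^T X) b = X^T y; X rows start with 1."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

# Synthetic data: ln(EC50) = a0 + a1*ln(DOC) + a2*pH + a3*ln(hardness) + noise.
random.seed(0)
true = [1.0, 0.8, 0.3, 0.4]                    # invented "pooled" coefficients
X, y = [], []
for _ in range(200):
    doc = random.uniform(0.5, 20.0)            # mg/L DOC
    ph = random.uniform(6.0, 9.0)
    hard = random.uniform(10.0, 400.0)         # mg/L as CaCO3
    row = [1.0, math.log(doc), ph, math.log(hard)]
    X.append(row)
    y.append(sum(t * v for t, v in zip(true, row)) + random.gauss(0.0, 0.05))

coef = fit_mlr(X, y)                           # recovers 'true' approximately
```

    The species-specific intercepts of the pooled model would enter here as extra indicator columns; the slope structure stays the same.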

  15. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
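    Moisture maximization, the deterministic baseline this study builds on, scales each large observed storm by the ratio of climatological-maximum to observed precipitable water and takes the largest scaled depth as the PMP. A minimal sketch under that standard textbook definition follows; the storm data are invented for illustration.

```python
# Hedged sketch of deterministic PMP by moisture maximization.
# Storm tuples are (observed depth, storm PW, max climatological PW), in mm;
# the numbers are invented illustrative values.

def moisture_maximized_pmp(storms):
    """PMP estimate: largest storm depth after scaling each storm by the
    ratio of maximum to observed precipitable water (PW)."""
    return max(p * (pw_max / pw) for p, pw, pw_max in storms)

storms = [(120.0, 30.0, 45.0), (150.0, 40.0, 45.0), (90.0, 25.0, 45.0)]
pmp = moisture_maximized_pmp(storms)  # 180.0 mm, from the first storm
```

    The single number this produces is exactly the limitation the abstract targets: a probabilistic treatment would replace the point estimate with a distribution over PMP values.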

  16. Contrast-enhanced 3T MR Perfusion of Musculoskeletal Tumours: T1 Value Heterogeneity Assessment and Evaluation of the Influence of T1 Estimation Methods on Quantitative Parameters.

    PubMed

    Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain

    2017-12-01

    To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based and gadolinium concentration based on fixed and variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1% respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different and more heterogeneous than normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for comparison of inter-patient perfusion parameters. • Technical variation is higher in permeability than semiquantitative perfusion parameters.
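    The gadolinium-concentration-based analysis mentioned above rests on the standard linear relaxivity relation 1/T1_post = 1/T1_pre + r1·C. The sketch below shows why a fixed versus patient-specific pre-contrast T1 changes the computed concentration; the relaxivity and T1 numbers are illustrative assumptions, not the study's values.

```python
# Hedged sketch: contrast-agent concentration from pre/post-contrast T1
# via the linear relaxivity model. r1 and all T1 values are illustrative.

def gadolinium_concentration(t1_pre_ms, t1_post_ms, r1_per_mM_s=4.5):
    """Concentration C (mM) from 1/T1_post = 1/T1_pre + r1 * C,
    with T1 in milliseconds and r1 in s^-1 mM^-1."""
    return (1000.0 / t1_post_ms - 1000.0 / t1_pre_ms) / r1_per_mM_s

# Fixed literature pre-contrast T1 versus a measured patient-specific T1
# yields noticeably different concentrations for the same post-contrast T1:
c_fixed = gadolinium_concentration(1400.0, 500.0)     # assumed fixed pre T1
c_measured = gadolinium_concentration(1100.0, 500.0)  # patient-specific pre T1
```

    This difference propagates directly into quantitative perfusion parameters, which is consistent with the higher coefficients of variation the study reports for fixed-T1 analysis.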

  17. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses the experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G(exp 2)) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step above standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  18. Chaos control of Hastings–Powell model by combining chaotic motions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danca, Marius-F., E-mail: danca@rist.ro; Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in

    2016-04-15

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that losing strategies can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
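    The core of the PS scheme, switching the parameter among several values during numerical integration and comparing with a run at their average, can be illustrated on a toy linear ODE rather than the full three-species HP system. The ODE, step size, and parameter values below are arbitrary choices for the sketch.

```python
# Hedged sketch of parameter switching on the toy ODE dx/dt = p - x.
# For this linear system the switched trajectory provably tracks the
# trajectory run at the averaged parameter, mirroring the PS result.

def integrate_switched(p_values, x0=0.0, dt=0.001, steps=20000):
    """Euler-integrate dx/dt = p - x, cycling p through p_values each step
    (a periodic PS rule applied to the toy system)."""
    x = x0
    for i in range(steps):
        p = p_values[i % len(p_values)]
        x += dt * (p - x)
    return x

def integrate_fixed(p, x0=0.0, dt=0.001, steps=20000):
    """Same Euler integration with the parameter held fixed at p."""
    x = x0
    for _ in range(steps):
        x += dt * (p - x)
    return x

# Switching between 1.0 and 3.0 tracks the run at the average value 2.0;
# both settle near the equilibrium x* = 2.0.
x_switched = integrate_switched([1.0, 3.0])
x_averaged = integrate_fixed(2.0)
```

    In the HP system the same comparison is made between chaotic attractors and the attractor at the averaged parameter; the toy version only shows the averaging mechanism, not the chaos control itself.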

  19. Genomic data assimilation for estimating hybrid functional Petri net from time-course gene expression data.

    PubMed

    Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki

    2006-01-01

    We propose an automatic construction method of the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how to choose the values of parameters and how to set the network structure. Usually, these unknown factors are tuned empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach is limited by the size of the network of interest. To extend the capability of the simulation model, we propose the use of a data assimilation approach that was originally established in the field of geophysical simulation science. We provide a genomic data assimilation framework that establishes a link between our simulation model and observed data, such as microarray gene expression data, by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in the simulation model are treated as parameters of the state space model, and the estimates are obtained as maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both the model construction and the parameter tuning within a framework of Bayesian statistical inference. In particular, the Bayesian approach provides us with a way of controlling overfitting during parameter estimation, which is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well and the network structure is suitably selected.

  20. Novel application of parameters in waveform contour analysis for assessing arterial stiffness in aged and atherosclerotic subjects.

    PubMed

    Wu, Hsien-Tsai; Liu, Cyuan-Cin; Lin, Po-Hsun; Chung, Hui-Ming; Liu, Ming-Chien; Yip, Hon-Kan; Liu, An-Bang; Sun, Cheuk-Kwan

    2010-11-01

    Although contour analysis of pulse waves has been proposed as a non-invasive means of assessing arterial stiffness in atherosclerosis, accurate determination of the conventional parameters is usually precluded by distorted waveforms in aged and atherosclerotic subjects. We aimed to test reliable indices in these patient populations. The digital volume pulse (DVP) curve was obtained from 428 subjects recruited from a health screening program at a single medical center from January 2007 to July 2008. Demographic data, blood pressure, and conventional parameters for contour analysis including pulse wave velocity (PWV), crest time (CT), stiffness index (SI), and reflection index (RI) were recorded. Two further indices, normalized crest time (NCT) and crest time ratio (CTR), were also analysed and compared with the known parameters. Though ambiguity of the dicrotic notch precluded accurate determination of the two key conventional parameters for assessing arterial stiffness (i.e. SI and RI), NCT and CTR were unaffected because the sum of CT and T(DVP) (i.e. the duration between the systolic and diastolic peaks) tended to remain constant. NCT and CTR also correlated significantly with age, systolic and diastolic blood pressure, PWV, SI and RI (all P<0.01). NCT and CTR not only showed significant positive correlations with the conventional parameters for assessment of atherosclerosis (i.e. SI, RI, and PWV), but are also of particular value in assessing the degree of arterial stiffness in subjects with an indiscernible diastolic peak that precludes the use of conventional parameters in waveform contour analysis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Numerical Simulation Of Cratering Effects In Adobe

    DTIC Science & Technology

    2013-07-01

    Front matter (table-of-contents fragments): Development of Material Parameters; Problem Setup; Parameter Adjustments; Glossary. Abstract excerpt: …dependent yield surface with the Geological Yield Surface (GEO) modeled in CTH using well-characterized adobe. By identifying key parameters that…

  2. A general methodology for population analysis

    NASA Astrophysics Data System (ADS)

    Lazov, Petar; Lazov, Igor

    2014-12-01

    For a given population with N current and M maximum number of entities, modeled by a Birth-Death Process (BDP) with M+1 states, we introduce the utilization parameter ρ, the ratio of the primary birth and death rates in that BDP, which physically determines the (equilibrium) macrostates of the population, and the information parameter ν, which has an interpretation as population information stiffness. The BDP modeling the population is in state n, n=0,1,…,M, if N=n. Given these two key metrics, applying the continuity law, the equilibrium balance equations for the probability distribution pn=Prob{N=n}, n=0,1,…,M, of the quantity N in equilibrium, and the conservation law, and relying on the fundamental concepts of population information and population entropy (by definition, population entropy is the uncertainty related to the population), we develop a general methodology for population analysis. The essential contribution of this approach is that the population information consists of three basic parts: an elastic (Hooke's) or absorption/emission part, a synchronization or inelastic part, and a null part; the first two parts, which uniquely determine the null part (the null part connects them), are the two basic components of the information spectrum of the population. Population entropy, as the mean value of population information, follows this division of the information. A given population can function in an information-elastic, antielastic, or inelastic regime. In an information-linear population, the synchronization part of the information and entropy is absent. The population size, M+1, is the third key metric in this methodology: assuming a population of infinite size, most of the key quantities and results obtained in this methodology for populations of finite size vanish.
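
    The abstract leaves the balance equations implicit; a minimal sketch of the equilibrium distribution of a finite birth-death process, from the standard detailed-balance recursion (with constant rates this reduces to a truncated geometric law in the utilization ρ = λ/μ), is:

```python
import math

def bdp_equilibrium(birth, death, M):
    """Equilibrium distribution p_n (n = 0..M) of a finite birth-death
    process, from detailed balance: p_{n-1} * birth(n-1) = p_n * death(n)."""
    w = [1.0]
    for n in range(1, M + 1):
        w.append(w[-1] * birth(n - 1) / death(n))
    Z = sum(w)                       # normalization constant
    return [x / Z for x in w]

def population_entropy(p):
    """Shannon entropy of the equilibrium distribution (nats)."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

# constant rates: p_n is a truncated geometric law in rho = lam / mu
lam, mu, M = 2.0, 4.0, 10
p = bdp_equilibrium(lambda n: lam, lambda n: mu, M)
```

    With state-dependent rate functions, the same recursion yields the equilibrium macrostates for any population model of this class.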

  3. Business model design for a wearable biofeedback system.

    PubMed

    Hidefjäll, Patrik; Titkova, Dina

    2015-01-01

    Wearable sensor technologies used to track daily activities have become successful in the consumer market. In order for wearable sensor technology to offer added value in the more challenging areas of stress-rehab care and occupational health, stress-related biofeedback parameters need to be monitored and more elaborate business models are needed. To identify probable success factors for a wearable biofeedback system (Affective Health) in the two mentioned market segments in a Swedish setting, we conducted literature studies and interviews with relevant representatives. Data were collected and used first to describe the two market segments and then to define likely feasible business model designs according to the Business Model Canvas framework. Needs of stakeholders were identified as inputs to business model design. Value propositions, a key building block of a business model, were defined for each segment. The value proposition for occupational health was defined as "A tool that can both identify employees at risk of stress-related disorders and reinforce healthy sustainable behavior" and for healthcare as "Providing therapists with objective data about the patient's emotional state and motivating patients to better engage in the treatment process".

  4. Are conventional statistical techniques exhaustive for defining metal background concentrations in harbour sediments? A case study: The Coastal Area of Bari (Southeast Italy).

    PubMed

    Mali, Matilda; Dell'Anna, Maria Michela; Mastrorilli, Piero; Damiani, Leonardo; Ungaro, Nicola; Belviso, Claudia; Fiore, Saverio

    2015-11-01

    Sediment contamination by metals poses significant risks to coastal ecosystems and is considered to be problematic for dredging operations. The determination of the background values of metal and metalloid distribution based on site-specific variability is fundamental in assessing pollution levels in harbour sediments. The novelty of the present work consists of addressing the scope and limitations of analysing port sediments through the use of conventional statistical techniques (such as linear regression analysis, construction of cumulative frequency curves and the iterative 2σ technique) that are commonly employed for assessing Regional Geochemical Background (RGB) values in coastal sediments. This study ascertained that although the tout court use of such techniques in determining the RGB values in harbour sediments seems appropriate (the chemical-physical parameters of port sediments fit the statistical equations well), it should nevertheless be avoided because it may be misleading and can mask key aspects of the study area that can only be revealed by further investigations, such as mineralogical and multivariate statistical analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. An Assessment on Temperature Profile of Jet-A/Biodiesel Mixture in a Simple Combustion Chamber with Plain Orifice Atomiser

    NASA Astrophysics Data System (ADS)

    Ng, W. X.; Mazlan, N. M.; Ismail, M. A.; Rajendran, P.

    2018-05-01

    A preliminary study evaluating the influence of biodiesel/kerosene mixtures on the combustion temperature profile is presented. A simple cylindrical combustion chamber configuration with a plain orifice atomiser is used for the evaluation, which is performed at the stoichiometric air-to-fuel ratio. Six fuel samples are used: 100BD (pure biodiesel), 100KE (pure Jet-A), 20KE80BD (20% Jet-A/80% biodiesel), 40KE60BD (40% Jet-A/60% biodiesel), 60KE40BD (60% Jet-A/40% biodiesel), and 80KE20BD (80% Jet-A/20% biodiesel). Results showed that oxygen content, viscosity, and lower heating value are the key parameters affecting the temperature profile inside the chamber. Biodiesel has higher oxygen content, higher viscosity, and a lower heating value than kerosene. Mixing biodiesel with kerosene improves the viscosity and calorific value of the fuel but reduces its oxygen content. The high oxygen content of biodiesel resulted in the highest flame temperature; accordingly, the flame temperature decreases as the percentage of biodiesel in the fuel mixture decreases.

  6. Return period adjustment for runoff coefficients based on analysis in undeveloped Texas watersheds

    USGS Publications Warehouse

    Dhakal, Nirajan; Fang, Xing; Asquith, William H.; Cleveland, Theodore G.; Thompson, David B.

    2013-01-01

    The rational method for peak discharge (Qp) estimation was introduced in the 1880s. The runoff coefficient (C) is a key parameter for the rational method, with an implicit meaning of rate proportionality, and C has been shown by various researchers to be a function of the annual return period. Rate-based runoff coefficients as a function of the return period, C(T), were determined for 36 undeveloped watersheds in Texas using peak discharge frequency from previously published regional regression equations and rainfall intensity frequency for return periods T of 2, 5, 10, 25, 50, and 100 years. The C(T) values and return period adjustments C(T)/C(T=10 years) determined in this study are most applicable to undeveloped watersheds. The return period adjustments determined for the Texas watersheds in this study and those extracted from prior studies of non-Texas data exceed values from well-known literature such as design manuals and textbooks. Most importantly, the return period adjustments exceed values currently recognized in Texas Department of Transportation design guidance when T>10 years.
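
    The mechanics of a return-period-adjusted rational method are simple enough to state in a few lines. The sketch below uses the SI form Qp = 0.278·C·i·A; the coefficient C10, the design intensities, and the C(T)/C(T=10 yr) adjustment factors are illustrative placeholders, not the published Texas results:

```python
# Rational method with a return-period-adjusted runoff coefficient:
#   Qp = 0.278 * C(T) * i(T) * A   (Qp in m^3/s, i in mm/h, A in km^2)
# C10, the intensities and the adjustments below are placeholders.
C10 = 0.35                                   # rate-based C at T = 10 years
A_km2 = 2.5                                  # drainage area
i_mmhr = {2: 60.0, 10: 90.0, 100: 140.0}     # design rainfall intensities
adj = {2: 0.85, 10: 1.00, 100: 1.35}         # C(T) / C(T = 10 yr)

def peak_discharge(T):
    """Peak discharge (m^3/s) for return period T using C(T) = adj[T]*C10."""
    return 0.278 * adj[T] * C10 * i_mmhr[T] * A_km2

for T in (2, 10, 100):
    print(T, "yr:", round(peak_discharge(T), 2), "m^3/s")
```

    The study's finding is precisely about the adj values: for T > 10 years they exceed the factors in common design guidance, so Qp estimates made with an unadjusted C are biased low.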

  7. Inter- and intra-annual variations of clumping index derived from the MODIS BRDF product

    NASA Astrophysics Data System (ADS)

    He, Liming; Liu, Jane; Chen, Jing M.; Croft, Holly; Wang, Rong; Sprintsin, Michael; Zheng, Ting; Ryu, Youngryel; Pisek, Jan; Gonsamo, Alemu; Deng, Feng; Zhang, Yongqin

    2016-02-01

    Clumping index quantifies the level of foliage aggregation relative to a random distribution; it is a key structural parameter of plant canopies and is widely used in ecological and meteorological models. In this study, the inter- and intra-annual variations in clumping index values derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product are investigated at six forest sites, including conifer forests, a mixed deciduous forest and an oak-savanna system. We find that the clumping index displays large seasonal variation, particularly for the deciduous sites, with the magnitude of clumping index values at each site comparable on an intra-annual basis and the seasonality of the clumping index well captured after noise removal. For broadleaved and mixed forest sites, minimum clumping index values are usually found during the season when leaf area index is at its maximum. The magnitude of the MODIS clumping index is validated against ground data collected from 17 sites. Validation shows that the MODIS clumping index can explain 75% of the variance in measured values (bias = 0.03 and rmse = 0.08), although with a narrower amplitude of variation. This study suggests that the MODIS BRDF product has the potential to produce good seasonal trajectories of clumping index values, provided that the estimation of background reflectance is improved.
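
    The validation statistics quoted (bias, rmse, explained variance) are standard and can be computed as below; the sample arrays are made-up illustrative values, not the paper's 17-site data:

```python
import math

def validation_stats(modis, ground):
    """Bias, RMSE and squared Pearson correlation between two series."""
    n = len(modis)
    bias = sum(m - g for m, g in zip(modis, ground)) / n
    rmse = math.sqrt(sum((m - g) ** 2 for m, g in zip(modis, ground)) / n)
    mm = sum(modis) / n
    mg = sum(ground) / n
    cov = sum((m - mm) * (g - mg) for m, g in zip(modis, ground))
    var_m = sum((m - mm) ** 2 for m in modis)
    var_g = sum((g - mg) ** 2 for g in ground)
    r2 = cov * cov / (var_m * var_g)   # explained variance
    return bias, rmse, r2

# made-up site means for illustration (MODIS vs. ground clumping index)
modis  = [0.55, 0.62, 0.70, 0.78, 0.85]
ground = [0.50, 0.60, 0.69, 0.74, 0.80]
bias, rmse_v, r2 = validation_stats(modis, ground)
```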

  8. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  9. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output of this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) has been selected as the study part. Optimisation of the process parameters is carried out in Design Expert software with the aim of minimising the obtained warpage value. Response Surface Methodology (RSM) is applied together with Analysis of Variance (ANOVA) in order to investigate the interactions between the parameters that are significant to the warpage value. The optimised warpage value can thus be obtained from the model designed using RSM, owing to its minimal error value, and the study shows that the warpage value is improved by using RSM.
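
    A minimal sketch of the RSM workflow described here: fit a second-order response surface in two coded factors (melt temperature and packing pressure are used as examples) and locate the warpage-minimizing settings. The data are synthetic stand-ins for the Moldflow simulation runs:

```python
import numpy as np

# Simulated designed experiment: warpage as a noisy quadratic in two
# coded factors (true minimum at x1 = 0.2, x2 = -0.4 by construction).
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)          # coded melt temperature
x2 = rng.uniform(-1, 1, 30)          # coded packing pressure
w = 1.0 + 0.3 * (x1 - 0.2) ** 2 + 0.5 * (x2 + 0.4) ** 2
w = w + rng.normal(0.0, 0.01, 30)    # measurement noise

# second-order RSM model: w = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, w, rcond=None)

# evaluate the fitted surface on a grid and take the minimizer
g = np.linspace(-1, 1, 201)
G1, G2 = np.meshgrid(g, g)
W = (beta[0] + beta[1] * G1 + beta[2] * G2
     + beta[3] * G1**2 + beta[4] * G2**2 + beta[5] * G1 * G2)
k = int(np.argmin(W))
x1_opt, x2_opt = G1.flat[k], G2.flat[k]
```

    In the actual study the coefficients would come from the AMI runs at the designed parameter combinations, and ANOVA on the fitted terms identifies which factors and interactions are significant.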

  10. Energy and momentum analysis of the deployment dynamics of nets in space

    NASA Astrophysics Data System (ADS)

    Botta, Eleonora M.; Sharf, Inna; Misra, Arun K.

    2017-11-01

    In this paper, the deployment dynamics of nets in space is investigated through a combination of analysis and numerical simulations. The considered net is deployed by ejecting several corner masses, thanks to momentum and energy transfer from these masses to the innermost threads of the net. In this study, the net is modeled with a lumped-parameter approach, and assumed to be symmetrical, subject to symmetrical initial conditions, and initially slack. The work-energy and momentum conservation principles are employed to carry out a centroidal analysis of the net, by conceptually partitioning the net into a system of corner masses and the net proper and applying the aforementioned principles to the corresponding centers of mass. The analysis provides bounds on the values that the velocity of the center of mass of the corner masses and the velocity of the center of mass of the net proper can individually attain, as well as relationships between these and the different energy contributions. The analytical results allow us to identify key parameters characterizing the deployment dynamics of nets in space, which include the ratio between the mass of the corner masses and the total mass, the initial linear momentum, and the direction of the initial velocity vectors. Numerical tools are employed to validate and further interpret the analytical observations. Comparison of deployment results with and without initial velocity of the net proper suggests that more complete and lasting deployment can be achieved if the corner masses alone are ejected. A sensitivity study is performed for the key parameters identified from the energy/momentum analysis, and the outcome establishes that more lasting deployment and safer capture (i.e., characterized by higher traveled distance) can be achieved by employing reasonably lightweight corner masses, moderate shooting angles, and low shooting velocities. A comparison with current literature on tether-nets for space debris capture confirms overall agreement on the importance and effect of the relevant inertial and ejection parameters on the deployment dynamics.

  11. Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences

    PubMed Central

    Sabatelli, Lorenzo

    2016-01-01

    Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand. PMID:26999511

  12. Brillouin gain enhancement in nano-scale photonic waveguide

    NASA Astrophysics Data System (ADS)

    Nouri Jouybari, Soodabeh

    2018-05-01

    The enhancement of stimulated Brillouin scattering in nano-scale waveguides contributes greatly to the improvement of photonic device technology. The key factors in Brillouin gain are the electrostriction force and the radiation pressure generated by optical waves in the waveguide. In this article, we propose a new nano-scale waveguide scheme in which the Brillouin gain is considerably improved compared to previously reported schemes. The role of radiation pressure in the Brillouin gain is much greater than that of the electrostriction force. The Brillouin gain depends strongly on the structural parameters of the waveguide, and a maximum value of 12127 W^-1 m^-1 is obtained.

  13. Cryptanalysis of Chatterjee-Sarkar Hierarchical Identity-Based Encryption Scheme at PKC 06

    NASA Astrophysics Data System (ADS)

    Park, Jong Hwan; Lee, Dong Hoon

    In 2006, Chatterjee and Sarkar proposed a hierarchical identity-based encryption (HIBE) scheme which can support an unbounded number of identity levels. This property is particularly useful in providing forward secrecy by embedding time components within hierarchical identities. In this paper we show that their scheme does not provide the claimed property. Our analysis shows that if the number of identity levels becomes larger than the value of a fixed public parameter, an unintended receiver can reconstruct a new valid ciphertext and decrypt the ciphertext using his or her own private key. The analysis is similarly applied to a multi-receiver identity-based encryption scheme presented as an application of Chatterjee and Sarkar's HIBE scheme.

  14. Pyrolysis of corn stalk biomass briquettes in a scaled-up microwave technology.

    PubMed

    Salema, Arshad Adam; Afzal, Muhammad T; Bennamoun, Lyes

    2017-06-01

    Pyrolysis of corn stalk biomass briquettes was carried out in a purpose-built microwave (MW) reactor operating at 2.45 GHz with a 3 kW power generator. MW power and biomass loading were the key parameters investigated in this study. The highest bio-oil, biochar, and gas yields of 19.6%, 41.1%, and 54.0%, respectively, were achieved at different process conditions. In terms of quality, biochar exhibited a better heating value (32 MJ/kg) than bio-oil (2.47 MJ/kg). Bio-oil was also characterised chemically using FTIR and GC-MS methods. This work may open a new dimension in the development of large-scale MW pyrolysis technology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences.

    PubMed

    Sabatelli, Lorenzo

    2016-01-01

    Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand.

  16. [Research advances in water quality monitoring technology based on UV-Vis spectrum analysis].

    PubMed

    Wei, Kang-Lin; Wen, Zhi-yu; Wu, Xin; Zhang, Zhong-Wei; Zeng, Tian-Ling

    2011-04-01

    The application of spectral analysis to water quality monitoring is an important developing trend in the field of modern environmental monitoring technology. The principle and characteristics of water quality monitoring technology based on UV-Vis spectrum analysis are briefly reviewed, and the research status and advances are introduced from two aspects: on-line monitoring and in-situ monitoring. Moreover, the existing key technical problems are put forward. Finally, trends towards multi-parameter water quality monitoring microsystems and microsystem networks based on microspectrometers are discussed, which has certain reference value for the research and development of environmental monitoring technology and modern scientific instruments in the authors' country.

  17. A Comparison of the Forecast Skills among Three Numerical Models

    NASA Astrophysics Data System (ADS)

    Lu, D.; Reddy, S. R.; White, L. J.

    2003-12-01

    Three numerical weather forecast models, MM5, COAMPS and WRF, operated in a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, have been chosen to study their forecast skills against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length and spatial resolution. The AVN global dataset has been ingested as initial conditions. A grid resolution of 27 km is chosen, representative of current mesoscale models. Forecasts of 36 h length are performed, with output at 12 h intervals. The key parameters used to evaluate forecast skill include 12 h accumulated precipitation, sea level pressure, wind, surface temperature and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12 h rainfall observations, whereas other statistical measures such as Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
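
    The skill scores named here have standard definitions: with hits H, misses M and false alarms F from a threshold-based contingency table, TS = H/(H+M+F) and BS = (H+F)/(H+M). A minimal sketch with toy rainfall data:

```python
def contingency(obs, fcst, threshold):
    """Hits, misses, false alarms for the event 'value >= threshold'."""
    hits = misses = false_alarms = 0
    for o, f in zip(obs, fcst):
        if o >= threshold and f >= threshold:
            hits += 1
        elif o >= threshold:
            misses += 1
        elif f >= threshold:
            false_alarms += 1
    return hits, misses, false_alarms

def threat_score(h, m, fa):
    return h / (h + m + fa)          # 1 = perfect, 0 = no skill

def bias_score(h, m, fa):
    return (h + fa) / (h + m)        # >1 over-forecast, <1 under-forecast

def rmse(obs, fcst):
    return (sum((f - o) ** 2 for o, f in zip(obs, fcst)) / len(obs)) ** 0.5

# toy 12 h rainfall amounts (mm) at six stations
obs  = [0.0, 5.0, 12.0, 20.0, 3.0, 15.0]
fcst = [1.0, 11.0, 8.0, 25.0, 2.0, 18.0]
h, m, fa = contingency(obs, fcst, threshold=10.0)
```

    ME, MAE and RMSE apply directly to the continuous fields (pressure, wind, temperature, dew point), while TS and BS are computed per rainfall threshold.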

  18. Foundations for Measuring Volume Rendering Quality

    NASA Technical Reports Server (NTRS)

    Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
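
    A minimal sketch of the kind of difference metrics such a suite might contain (RMS pixel difference, maximum absolute difference, fraction of changed pixels); these particular metrics are illustrative choices, not the paper's full classification methodology:

```python
import math

def image_rmse(a, b):
    """RMS pixel difference between two equal-size grayscale images."""
    n, s = 0, 0.0
    for ra, rb in zip(a, b):
        for pa, pb in zip(ra, rb):
            s += (pa - pb) ** 2
            n += 1
    return math.sqrt(s / n)

def max_abs_diff(a, b):
    """Worst-case single-pixel discrepancy."""
    return max(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def frac_changed(a, b, tol=0):
    """Fraction of pixels differing by more than tol."""
    flat = [(pa, pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return sum(1 for pa, pb in flat if abs(pa - pb) > tol) / len(flat)

# benchmark image vs. image from the system under test (toy 2x2 example)
ref = [[0, 10], [20, 30]]
img = [[0, 10], [20, 34]]
```

    Comparing such numbers against a high-accuracy benchmark rendering is how the paper proposes to detect implementation errors or quantify the effect of a parameter change.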

  19. Shocks in the relativistic transonic accretion with low angular momentum

    NASA Astrophysics Data System (ADS)

    Suková, P.; Charzyński, S.; Janiuk, A.

    2017-12-01

    We perform 1D/2D/3D relativistic hydrodynamical simulations of accretion flows with low angular momentum, filling the gap between spherically symmetric Bondi accretion and disc-like accretion flows. Scenarios with different directional distributions of the angular momentum of the falling matter and varying values of key parameters, such as the spin of the central black hole and the energy and angular momentum of the matter, are considered. In some of the scenarios a shock front is formed. We identify ranges of parameters for which the shock, after formation, moves towards or away from the central black hole, or for which a long-lasting oscillating shock is observed. The frequencies of oscillation of the shock position, which can cause flaring in the mass accretion rate, are extracted. The results are scalable with the mass of the central black hole and can be compared to the quasi-periodic oscillations of selected microquasars (such as GRS 1915+105, XTE J1550-564 or IGR J17091-3624), as well as to supermassive black holes in the centres of weakly active galaxies, such as Sgr A*.

  20. Economics of Future Growth in Photovoltaics Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basore, Paul A.; Chung, Donald; Buonassisi, Tonio

    2015-06-14

    The past decade's record of growth in the photovoltaics manufacturing industry indicates that global investment in manufacturing capacity for photovoltaic modules tends to increase in proportion to the size of the industry. The slope of this proportionality determines how fast the industry will grow in the future. Two key parameters determine this slope. One is the annual global investment in manufacturing capacity normalized to the manufacturing capacity for the previous year (capacity-normalized capital investment rate, CapIR, units $/W). The other is how much capital investment is required for each watt of annual manufacturing capacity, normalized to the service life of the assets (capacity-normalized capital demand rate, CapDR, units $/W). If these two parameters remain unchanged from the values they have held for the past few years, global manufacturing capacity will peak in the next few years and then decline. However, it only takes a small improvement in CapIR to ensure future growth in photovoltaics. Any accompanying improvement in CapDR will accelerate that growth.
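
    One hedged reading of the abstract's growth argument can be sketched as a simple recurrence: annual investment CapIR·C buys CapIR·C/CapEx watts of new capacity (with CapEx = CapDR × service life), while C/life watts retire each year, so capacity grows exactly when CapIR > CapDR. This model and its numbers are assumptions for illustration, not the paper's analysis:

```python
def simulate_capacity(C0, CapIR, CapDR, life_years, years):
    """Yearly manufacturing capacity (W) under the assumed model:
    invest CapIR dollars per watt of last year's capacity, pay
    CapEx = CapDR * life_years dollars per watt built, retire C/life."""
    capex = CapDR * life_years
    C, path = C0, [C0]
    for _ in range(years):
        C = C + CapIR * C / capex - C / life_years
        path.append(C)
    return path

# CapIR slightly above CapDR -> growth; slightly below -> decline
growing   = simulate_capacity(100.0, CapIR=0.12, CapDR=0.10, life_years=25, years=10)
declining = simulate_capacity(100.0, CapIR=0.08, CapDR=0.10, life_years=25, years=10)
```

    The qualitative behaviour matches the abstract: a small increase in CapIR flips the trajectory from decline to growth, and lowering CapDR raises the growth rate for any given CapIR.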

  1. New insights into the complex regulation of the glycolytic pathway in Lactococcus lactis. I. Construction and diagnosis of a comprehensive dynamic model.

    PubMed

    Dolatshahi, Sepideh; Fonseca, Luis L; Voit, Eberhard O

    2016-01-01

    This article and the companion paper use computational systems modeling to decipher the complex coordination of regulatory signals controlling the glycolytic pathway in the dairy bacterium Lactococcus lactis. In this first article, the development of a comprehensive kinetic dynamic model is described. The model is based on in vivo NMR data that consist of concentration trends in key glycolytic metabolites and cofactors. The model structure and parameter values are identified with a customized optimization strategy that uses as its core the method of dynamic flux estimation. For the first time, a dynamic model with a single parameter set fits all available glycolytic time course data under anaerobic operation. The model captures observations that had not been addressed so far and suggests the existence of regulatory effects that had been observed in other species, but not in L. lactis. The companion paper uses this model to analyze details of the dynamic control of glycolysis under aerobic and anaerobic conditions.

  2. Analysis of flow field characteristics in IC equipment chamber based on orthogonal design

    NASA Astrophysics Data System (ADS)

    Liu, W. F.; Yang, Y. Y.; Wang, C. N.

    2017-01-01

    This paper aims to study the influence of the configuration of the processing chamber, as a part of IC equipment, on flow field characteristics. Four parameters, including chamber height, chamber diameter, inlet mass flow rate and outlet area, are arranged using the orthogonal design method to study their influence on the flow distribution in the processing chamber with the commercial software Fluent. The velocity, pressure and temperature distributions above the holder were analysed respectively. The velocity difference value of the gas flow above the holder is defined as the evaluation criterion for the uniformity of the gas flow. The quantitative relationship between the key parameters and the uniformity of the gas flow was found through analysis of the experimental results. According to our study, the chamber height is the most significant factor, followed by the outlet area, chamber diameter and inlet mass flow rate. This research can provide insights into the study and design of the configuration of etchers, plasma enhanced chemical vapor deposition (PECVD) equipment, and other systems with similar configurations and processing conditions.
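
    The factor ranking described here can be reproduced with a Taguchi-style range analysis on an L9(3^4) orthogonal array; the response values below are fabricated placeholders chosen so that chamber height dominates, mirroring the reported order, and are not the paper's CFD results:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
# Assumed factor order: chamber height, outlet area, chamber diameter,
# inlet mass flow rate.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
response = [5.0, 6.4, 7.8, 7.5, 8.6, 9.1, 9.7, 10.2, 11.3]  # placeholders

def range_analysis(array, y, n_factors=4, n_levels=3):
    """Range (max level mean - min level mean) of the response per factor."""
    ranges = []
    for f in range(n_factors):
        means = []
        for lvl in range(n_levels):
            vals = [y[i] for i, run in enumerate(array) if run[f] == lvl]
            means.append(sum(vals) / len(vals))
        ranges.append(max(means) - min(means))
    return ranges

R = range_analysis(L9, response)
ranking = sorted(range(4), key=lambda f: -R[f])   # most to least significant
```

    Because every pair of columns in an L9 array contains each level combination exactly once, the level means isolate each factor's main effect from only 9 runs instead of the full 81.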

  3. On the dynamics of a generalized predator-prey system with Z-type control.

    PubMed

    Lacitignola, Deborah; Diele, Fasma; Marangi, Carmela; Provenzale, Antonello

    2016-10-01

    We apply the Z-control approach to a generalized predator-prey system and consider the specific case of indirect control of the prey population. We derive the associated Z-controlled model and investigate its properties from the point of view of dynamical systems theory. The key role of the design parameter λ for the successful application of the method is stressed and related to specific dynamical properties of the Z-controlled model. Critical values of the design parameter are also found, delimiting the λ-range for the effectiveness of the Z-method. Analytical results are then numerically validated by means of two ecological models: the classical Lotka-Volterra model and a model related to a case study of the wolf-wild boar dynamics in the Alta Murgia National Park. Investigations on these models also highlight how the Z-control method acts with respect to different dynamical regimes of the uncontrolled model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Online plasma calculator

    NASA Astrophysics Data System (ADS)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters, including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by JAVA-based plugins; it also speeds up calculations compared with PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphic user interface. All values with units are expressed in SI units except temperature, which is in electron-volts; SI units were chosen over cgs units because of the gradual shift to SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. The system is intended to be used by undergraduates taking plasma courses as well as by graduate students and researchers who need a quick reference calculation.
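
    The quantities such a calculator computes follow from textbook SI formulas with temperature in electron-volts, e.g. omega_pe = (n_e e^2 / eps0 m_e)^(1/2) and lambda_D = (eps0 T_e / n_e e)^(1/2). A sketch of the electron-species parameters (not APOLLO's actual code, which runs server-side behind FastCGI/Wt):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE   = 1.602176634e-19    # elementary charge, C
ME   = 9.1093837015e-31   # electron mass, kg

def plasma_frequency(ne):
    """Electron plasma frequency omega_pe (rad/s); ne in m^-3."""
    return math.sqrt(ne * QE**2 / (EPS0 * ME))

def debye_length(ne, Te_eV):
    """Electron Debye length (m); k_B*T_e = Te_eV * QE joules."""
    return math.sqrt(EPS0 * Te_eV * QE / (ne * QE**2))

def thermal_speed(Te_eV):
    """Electron thermal speed sqrt(k_B*T_e/m_e) (m/s)."""
    return math.sqrt(Te_eV * QE / ME)

def cyclotron_frequency(B):
    """Electron cyclotron frequency omega_ce = e*B/m_e (rad/s)."""
    return QE * B / ME

# typical laboratory plasma: n_e = 1e18 m^-3, T_e = 10 eV
ne, Te = 1e18, 10.0
print(f"f_pe      = {plasma_frequency(ne) / (2 * math.pi):.3e} Hz")
print(f"lambda_D  = {debye_length(ne, Te):.3e} m")
```

    Dimensionless numbers (e.g. the plasma parameter, the number of electrons in a Debye sphere) then follow by combining these basic lengths and frequencies.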

  5. Species-Specific Standard Redox Potential of Thiol-Disulfide Systems: A Key Parameter to Develop Agents against Oxidative Stress

    NASA Astrophysics Data System (ADS)

    Mirzahosseini, Arash; Noszál, Béla

    2016-11-01

    Microscopic standard redox potential, a new physico-chemical parameter, was introduced and determined to quantify thiol-disulfide equilibria of biological significance. The highly composite, codependent acid-base and redox equilibria of thiols could so far be converted only into pH-dependent, apparent redox potentials (E'°). Since the formation of stable metal-thiolate complexes precludes direct thiol-disulfide redox potential measurements by the usual electrochemical techniques, an indirect method had to be elaborated. In this work, the species-specific, pH-independent standard redox potentials of glutathione were determined primarily by comparing it to 1-methylnicotinamide, the simplest NAD+ analogue. Secondarily, the species-specific standard redox potentials of the two-electron redox transitions of cysteamine, cysteine, homocysteine, penicillamine, and ovothiol were determined using their microscopic redox equilibrium constants with glutathione. The 30 different microscopic standard redox potential values show close correlation with the respective thiolate basicities and provide sound means for the development of potent agents against oxidative stress.

  6. Formability of a wrought Mg alloy evaluated by impression testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohamed, Walid; Gollapudi, Srikant; Charit, Indrajit

    This study is focused on furthering our understanding of the different factors that influence the formability of magnesium alloys. Towards this end, formability studies were undertaken on a wrought Mg-2Zn-1Mn (ZM21) alloy. In contrast to conventional formability studies, the impression testing method was adopted here to evaluate the formability parameter, B, at temperatures ranging from 298 to 473 K. The variation of B of ZM21 with temperature, and its rather limited values, are discussed in the light of different deformation mechanisms such as activation of twinning, slip, grain boundary sliding (GBS) and dynamic recrystallization (DRX). It was found that material characteristics such as grain size and texture, and testing conditions such as temperature and strain rate, were key determinants of the mechanism of plastic deformation. A by-product of this analysis was the observation of an interesting correlation between the Zener-Hollomon parameter, Z, and the ability of Mg alloys to undergo DRX.

  7. Mass production of bacterial communities adapted to the degradation of volatile organic compounds (TEX).

    PubMed

    Lapertot, Miléna; Seignez, Chantal; Ebrahimi, Sirous; Delorme, Sandrine; Peringer, Paul

    2007-06-01

    This study focuses on the mass cultivation of bacteria adapted to the degradation of a mixture composed of toluene, ethylbenzene, and o-, m- and p-xylenes (TEX). For the cultivation process, the Substrate Pulse Batch (SPB) technique was adapted under well-automated conditions. The key parameters to be monitored were handled by LabVIEW software, including temperature, pH, dissolved oxygen and turbidity. Other parameters, such as biomass, ammonium or residual substrate concentrations, required offline measurement. The SPB technique was successfully tested experimentally on TEX. The overall behavior of the mixed bacterial population was observed and discussed throughout the cultivation process. Carbon and nitrogen limitations were shown to affect the integrity of the bacterial cells as well as their production of exopolymeric substances (EPS). Average productivity and yield values successfully reached the industrial specifications of 0.45 kg(DW) m⁻³ d⁻¹ and 0.59 g(DW) g(C)⁻¹, respectively. The accuracy and reproducibility of the obtained results establish the controlled SPB process as a feasible technique.

  8. Adaptive Neural Control for a Class of Pure-Feedback Nonlinear Systems via Dynamic Surface Technique.

    PubMed

    Liu, Zongcheng; Dong, Xinmin; Xue, Jianping; Li, Hongbo; Chen, Yong

    2016-09-01

    This brief addresses the adaptive control problem for a class of pure-feedback systems with nonaffine functions that are possibly nondifferentiable. Without using the mean value theorem, the difficulty of control design for pure-feedback systems is overcome by modeling the nonaffine functions appropriately. With the help of neural network approximators, an adaptive neural controller is developed by combining the dynamic surface control (DSC) and minimal learning parameter (MLP) techniques. The key features of our approach are as follows: first, the restrictive assumptions on the partial derivatives of the nonaffine functions are removed; second, the DSC technique is used to avoid "the explosion of complexity" in the backstepping design, and the number of adaptive parameters is reduced significantly using the MLP technique; third, smooth robust compensators are employed to circumvent the influence of approximation errors and disturbances. Furthermore, it is proved that all the signals in the closed-loop system are semiglobally uniformly ultimately bounded. Finally, simulation results are provided to demonstrate the effectiveness of the designed method.

  9. An algorithm to count the number of repeated patient data entries with B tree.

    PubMed

    Okada, M; Okada, M

    1985-04-01

    An algorithm to obtain the number of different values that appear a specified number of times in a given data field of a given data file is presented. Basically, the well-known B-tree structure is employed in this study, with some modifications to the basic B-tree algorithm. The first modification is to allow a data item whose values are not necessarily distinct from one record to another to be used as a primary key. When a key value is inserted, the number of its previous appearances is counted. At the end of all the insertions, the number of key values which are unique in the tree, the number of key values which appear twice, three times, and so forth are obtained. This algorithm is especially powerful for a large file in disk storage.
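    The counting idea can be sketched with an ordinary mapping standing in for the modified B-tree (the tree matters for large disk-resident files; the bookkeeping is the same). The field values below are hypothetical.

```python
from collections import Counter

def appearance_histogram(values):
    """For each multiplicity k, count how many distinct key values appear
    exactly k times -- the quantity the B-tree algorithm reports."""
    per_value = Counter(values)          # key value -> number of appearances
    return Counter(per_value.values())   # k -> number of values seen k times

# Hypothetical patient-record field:
field = ["A", "B", "A", "C", "B", "A"]
hist = appearance_histogram(field)
print(hist[1], hist[2], hist[3])  # -> 1 1 1 (one unique, one twice, one thrice)
```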

  10. Brownian motion model with stochastic parameters for asset prices

    NASA Astrophysics Data System (ADS)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

    The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Here we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt, the present parameter value x(t), and the m-1 parameter values before time t, via a conditional distribution. Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters against that of the model with stochastic parameters.
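    A minimal simulation sketch of such a model follows. The parameter update used here is a placeholder random walk on (μ, σ), not the paper's conditional distribution, and all numbers are illustrative.

```python
import math
import random

def simulate(s0=100.0, mu=0.05, sigma=0.2, dt=1 / 252, steps=252, seed=1):
    """Geometric Brownian motion whose drift and volatility themselves
    evolve each step -- a stand-in for a stochastic-parameter model."""
    rng = random.Random(seed)
    s = s0
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        # Placeholder parameter dynamics: small random walk, sigma kept positive.
        mu += rng.gauss(0.0, 0.01) * math.sqrt(dt)
        sigma = max(0.01, sigma + rng.gauss(0.0, 0.02) * math.sqrt(dt))
    return s

print(round(simulate(), 2))
```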

  11. Mechatronic design of a novel linear compliant positioning stage with large travel range and high out-of-plane payload capacity

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Xie, Xin; Tan, Ruoyu; Zhang, Lianchao; Fan, Dapeng

    2017-06-01

    Most of the XY positioning stages proposed in previous studies are designed by considering only a single performance indicator of the stage. As a result, the other performance indicators are relatively weak. In this study, a 2-degree-of-freedom linear compliant positioning stage (LCPS) is developed by mechatronic design to balance the interacting performance indicators and realize the desired positioning stage. The key parameters and the coupling of the structure and actuators are fully considered in the design. The LCPS consists of four voice coil motors (VCMs), which are conformally designed for compactness, and six spatial leaf-spring parallelograms. These parallelograms are serially connected for a large travel range and a high out-of-plane payload capacity. The mechatronic model is established by matrix structural analysis for structural modeling and by Kirchhoff's law for the VCMs. The sensitivities of the key parameters are analyzed, and the design parameters are subsequently determined. The analytical model of the stage is confirmed by experiments. The stage has a travel range of 4.4 mm × 7.0 mm and a 0.16% area ratio of workspace to the outer dimension of the stage. These performance indicators exceed those of any existing stage reported in the literature. The closed-loop bandwidth is 9.5 Hz in both working directions. The stage can track a circular trajectory with a radius of 1.5 mm, with 40 μm error and a resolution lower than 3 μm. The results of payload tests indicate that the stage has at least 20 kg out-of-plane payload capacity.

  12. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. 
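    The Morris screening step named above can be sketched in a few lines. The code below uses a simple radial one-at-a-time design and a toy model function, illustrative stand-ins rather than the full trajectory design or the stroke model itself.

```python
import random

def morris_mu_star(f, dim, n_traj=50, delta=0.1, seed=0):
    """Mean absolute elementary effect per parameter (Morris screening):
    a larger mu* marks a parameter as more influential on f."""
    rng = random.Random(seed)
    sums = [0.0] * dim
    for _ in range(n_traj):
        x = [rng.random() for _ in range(dim)]
        base = f(x)
        for i in range(dim):
            x_step = list(x)
            x_step[i] += delta
            sums[i] += abs(f(x_step) - base) / delta
    return [s / n_traj for s in sums]

# Toy model: x0 strongly influential, x1 weakly, x2 not at all.
mu_star = morris_mu_star(lambda x: 10 * x[0] + 0.5 * x[1], dim=3)
print([round(m, 2) for m in mu_star])  # -> [10.0, 0.5, 0.0]
```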

  13. Calibration of a convective parameterization scheme in the WRF model and its impact on the simulation of East Asian summer monsoon precipitation

    DOE PAGES

    Yang, Ben; Zhang, Yaocun; Qian, Yun; ...

    2014-03-26

    Reasonably modeling the magnitude, south-north gradient and seasonal propagation of precipitation associated with the East Asian Summer Monsoon (EASM) is a challenging task in the climate community. In this study we calibrate five key parameters in the Kain-Fritsch convection scheme in the WRF model using an efficient importance-sampling algorithm to improve the EASM simulation. We also examine the impacts of the improved EASM precipitation on other physical processes. Our results suggest similar model sensitivity and values of the optimized parameters across years with different EASM intensities. By applying the optimal parameters, the simulated precipitation and surface energy features are generally improved. The parameters related to the downdraft and entrainment coefficients and the CAPE consumption time (CCT) most sensitively affect the precipitation and atmospheric features. A larger downdraft coefficient or CCT decreases the heavy-rainfall frequency, while a larger entrainment coefficient delays convection development but builds up more potential for heavy rainfall events, causing a possible northward shift of the rainfall distribution. The CCT is the most sensitive parameter over the wet region, and the downdraft parameter plays a more important role over the drier northern region. Long-term simulations confirm that with the optimized parameters the precipitation distributions are better simulated in both weak and strong EASM years. Due to more reasonably simulated precipitation condensational heating, the monsoon circulations are also improved. Lastly, with the optimized parameters the biases in the retreat (onset) of Mei-yu (northern China rainfall) simulated by the standard WRF model are evidently reduced, and the seasonal and sub-seasonal variations of the monsoon precipitation are remarkably improved.

  14. Extension of the operational regime of the LHD towards a deuterium experiment

    NASA Astrophysics Data System (ADS)

    Takeiri, Y.; Morisaki, T.; Osakabe, M.; Yokoyama, M.; Sakakibara, S.; Takahashi, H.; Nakamura, Y.; Oishi, T.; Motojima, G.; Murakami, S.; Ito, K.; Ejiri, A.; Imagawa, S.; Inagaki, S.; Isobe, M.; Kubo, S.; Masamune, S.; Mito, T.; Murakami, I.; Nagaoka, K.; Nagasaki, K.; Nishimura, K.; Sakamoto, M.; Sakamoto, R.; Shimozuma, T.; Shinohara, K.; Sugama, H.; Watanabe, K. Y.; Ahn, J. W.; Akata, N.; Akiyama, T.; Ashikawa, N.; Baldzuhn, J.; Bando, T.; Bernard, E.; Castejón, F.; Chikaraishi, H.; Emoto, M.; Evans, T.; Ezumi, N.; Fujii, K.; Funaba, H.; Goto, M.; Goto, T.; Gradic, D.; Gunsu, Y.; Hamaguchi, S.; Hasegawa, H.; Hayashi, Y.; Hidalgo, C.; Higashiguchi, T.; Hirooka, Y.; Hishinuma, Y.; Horiuchi, R.; Ichiguchi, K.; Ida, K.; Ido, T.; Igami, H.; Ikeda, K.; Ishiguro, S.; Ishizaki, R.; Ishizawa, A.; Ito, A.; Ito, Y.; Iwamoto, A.; Kamio, S.; Kamiya, K.; Kaneko, O.; Kanno, R.; Kasahara, H.; Kato, D.; Kato, T.; Kawahata, K.; Kawamura, G.; Kisaki, M.; Kitajima, S.; Ko, W. H.; Kobayashi, M.; Kobayashi, S.; Kobayashi, T.; Koga, K.; Kohyama, A.; Kumazawa, R.; Lee, J. H.; López-Bruna, D.; Makino, R.; Masuzaki, S.; Matsumoto, Y.; Matsuura, H.; Mitarai, O.; Miura, H.; Miyazawa, J.; Mizuguchi, N.; Moon, C.; Morita, S.; Moritaka, T.; Mukai, K.; Muroga, T.; Muto, S.; Mutoh, T.; Nagasaka, T.; Nagayama, Y.; Nakajima, N.; Nakamura, Y.; Nakanishi, H.; Nakano, H.; Nakata, M.; Narushima, Y.; Nishijima, D.; Nishimura, A.; Nishimura, S.; Nishitani, T.; Nishiura, M.; Nobuta, Y.; Noto, H.; Nunami, M.; Obana, T.; Ogawa, K.; Ohdachi, S.; Ohno, M.; Ohno, N.; Ohtani, H.; Okamoto, M.; Oya, Y.; Ozaki, T.; Peterson, B. 
J.; Preynas, M.; Sagara, S.; Saito, K.; Sakaue, H.; Sanpei, A.; Satake, S.; Sato, M.; Saze, T.; Schmitz, O.; Seki, R.; Seki, T.; Sharov, I.; Shimizu, A.; Shiratani, M.; Shoji, M.; Skinner, C.; Soga, R.; Stange, T.; Suzuki, C.; Suzuki, Y.; Takada, S.; Takahata, K.; Takayama, A.; Takayama, S.; Takemura, Y.; Takeuchi, Y.; Tamura, H.; Tamura, N.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Tanaka, T.; Tanaka, Y.; Toda, S.; Todo, Y.; Toi, K.; Toida, M.; Tokitani, M.; Tokuzawa, T.; Tsuchiya, H.; Tsujimura, T.; Tsumori, K.; Usami, S.; Velasco, J. L.; Wang, H.; Watanabe, T.-H.; Watanabe, T.; Yagi, J.; Yajima, M.; Yamada, H.; Yamada, I.; Yamagishi, O.; Yamaguchi, N.; Yamamoto, Y.; Yanagi, N.; Yasuhara, R.; Yatsuka, E.; Yoshida, N.; Yoshinuma, M.; Yoshimura, S.; Yoshimura, Y.

    2017-10-01

    As the finalization of the hydrogen experiment campaign towards the deuterium phase, the exploration of the best performance of hydrogen plasma was intensively performed in the Large Helical Device. High ion and electron temperatures, Ti and Te, of more than 6 keV were simultaneously achieved by superimposing high-power electron cyclotron resonance heating on neutral beam injection (NBI) heated plasma. Although flattening of the ion temperature profile in the core region was observed during the discharges, degradation could be avoided by increasing the electron density. Another key parameter representing plasma performance is the averaged beta value ⟨β⟩. The high-⟨β⟩ regime of around 4% was extended to a collisionality an order of magnitude lower than in earlier experiments. Impurity behaviour in hydrogen discharges with NBI heating was also classified over a wide range of edge plasma parameters. The existence of a no-impurity-accumulation regime, where the high performance plasma is maintained with high heating power (>10 MW), was identified. Wide parameter scan experiments suggest that toroidal rotation and turbulence are the candidates for expelling impurities from the core region.

  15. Stochastic Analysis of Orbital Lifetimes of Spacecraft

    NASA Technical Reports Server (NTRS)

    Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David

    2008-01-01

    A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC), a Fortran computer program consisting of a previously developed long-term orbit propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for a user-specified number of cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and the number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are the primary sources of variation in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
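    The Monte Carlo wrapper described can be sketched as follows. The decay model and input distributions below are toy placeholders, not OLMC's long-term propagator or its actual parameter statistics.

```python
import random

def toy_lifetime(alt_km, flux):
    """Placeholder decay model (NOT OLMC's propagator): lifetime grows with
    insertion altitude and shrinks with solar flux (denser upper atmosphere)."""
    return max(0.0, 0.05 * (alt_km - 300.0) * 150.0 / flux)

def prob_meets_requirement(n=5000, requirement_years=5.0, seed=42):
    """Monte Carlo wrapper: sample uncertain inputs, propagate each case,
    and report the fraction of cases meeting the lifetime requirement."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        alt = rng.gauss(500.0, 10.0)          # insertion altitude [km]
        flux = rng.lognormvariate(5.0, 0.2)   # solar-flux proxy
        if toy_lifetime(alt, flux) >= requirement_years:
            hits += 1
    return hits / n

print(prob_meets_requirement())
```

In practice the full set of sampled lifetimes would be kept and histogrammed, as the document describes, rather than reduced to a single probability.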

  16. Statistical fusion of continuous labels: identification of cardiac landmarks

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.

    2011-03-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle with the left ventricle in CINE cardiac data.
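    The fusion step implied by this formulation can be sketched under the simplifying assumption that each rater's bias and variance are already known (STAPLE estimates them jointly via EM; here they are given, so this is only the final precision-weighted combination). All values are hypothetical.

```python
def fuse(observations, biases, variances):
    """Precision-weighted fusion of continuous rater labels: debias each
    rater, then weight by inverse variance (more reliable raters count more)."""
    num = sum((y - b) / v for y, b, v in zip(observations, biases, variances))
    den = sum(1.0 / v for v in variances)
    return num / den

# Three hypothetical raters labeling one landmark coordinate:
fused = fuse([10.2, 9.5, 10.9], biases=[0.2, -0.5, 0.9], variances=[0.1, 0.4, 1.0])
print(round(fused, 6))  # -> 10.0
```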

  17. Comparison of the surface dielectric barrier discharge characteristics under different electrode gaps

    NASA Astrophysics Data System (ADS)

    Gao, Guoqiang; Dong, Lei; Peng, Kaisheng; Wei, Wenfu; Li, Chunmao; Wu, Guangning

    2017-01-01

    Currently, great interest is paid to the surface dielectric barrier discharge due to its diverse and interesting applications. In this paper, the influences of the electrode gap on the discharge characteristics have been studied. Aspects of the electrical parameters, the optical emission, and the discharge-induced gas flow were considered. The electrode gap was varied from 0 mm to 21 mm, while the applied AC voltage was studied in the range of 17 kV-27 kV. Results indicate that with the increase of the electrode gap, the discharge voltage exhibits an increasing trend, while the other parameters (i.e., the current, power, and induced flow velocity) first increase and then decrease once the gap exceeds a critical value. Mechanisms by which the electrode gap influences these key parameters are discussed from the viewpoint of an equivalent circuit. The experimental results reveal that an optimal discharge gap exists, which is closely related to the applied voltage. Visualization of the induced flow with different electrode gaps was realized by the Schlieren diagnostic technique. Finally, the velocities of the induced gas flow determined by the pitot tube were compared with the results of the intensity-integral method, and good agreement was found.

  18. Study on warning radius of diffuse reflection laser warning based on fish-eye lens

    NASA Astrophysics Data System (ADS)

    Chen, Bolin; Zhang, Weian

    2013-09-01

    The diffuse reflection type of omni-directional laser warning based on a fish-eye lens is becoming more and more important. As one of the key parameters of a warning system, the warning radius merits particular investigation. This paper first theoretically analyzes the energy detected by a single pixel of the FPA detector in the system under a complicated environment. Then the least energy detectable by each single pixel of the system is computed in terms of detector sensitivity, system noise, and minimum SNR. Subsequently, by comparison between the energy detected by a single pixel and the least detectable energy, the warning radius is deduced from the Torrance-Sparrow five-parameter semiempirical statistical model. Finally, a field experiment was conducted to validate the computational results. It has been found that the warning radius is closely related to the BRDF parameters of the irradiated target, propagation distance, angle of incidence, detector sensitivity, and other factors. Furthermore, the experimental values of the warning radius are consistently less than the theoretical ones, due to such factors as the optical aberration of the fish-eye lens, the transmissivity of the narrowband filter, and the packing ratio of the detector.

  19. Statistical Fusion of Continuous Labels: Identification of Cardiac Landmarks.

    PubMed

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L; Landman, Bennett A

    2011-01-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter-one of the key performance indices-is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification the intersection points of the right ventricle to the left ventricle in CINE cardiac data.

  20. Scaling relationships among drivers of aquatic respiration from the smallest to the largest freshwater ecosystems

    USGS Publications Warehouse

    Hall, Ed K; Schoolmaster, Donald; Amado, A.M; Stets, Edward G.; Lennon, J.T.; Domaine, L.; Cotner, J.B.

    2016-01-01

    To address how various environmental parameters control or constrain planktonic respiration (PR), we used geometric scaling relationships and established biological scaling laws to derive quantitative predictions for the relationships among key drivers of PR. We then used empirical measurements of PR and environmental (soluble reactive phosphate [SRP], carbon [DOC], chlorophyll a [Chl-a], and temperature) and landscape parameters (lake area [LA] and watershed area [WA]) from a set of 44 lakes that varied in size and trophic status to test our hypotheses. We found that landscape-level processes affected PR through direct effects on DOC and temperature and indirectly via SRP. In accordance with predictions made from known relationships and scaling laws, scale coefficients (the parameter that describes the shape of a relationship between two variables) were found to be negative, with absolute values differing among relationships (some >1, others <1). We also found evidence of a significant relationship between temperature and SRP. Because our dataset included measurements of respiration from small pond catchments to the largest body of freshwater on the planet, Lake Superior, these findings should be applicable to controls of PR for the great majority of temperate aquatic ecosystems.
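    A scale coefficient of the kind described (the exponent of a power-law relationship between two variables) is commonly estimated as the slope of a log-log regression. A minimal sketch on synthetic data, not the study's measurements:

```python
import math

def scale_coefficient(x, y):
    """Least-squares slope of log y on log x: the exponent b in y ~ a * x^b."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sxx = sum((a - mx) ** 2 for a in lx)
    return sxy / sxx

# Synthetic data drawn from y = 2 * x^0.75 recovers the exponent 0.75:
x = [1.0, 2.0, 4.0, 8.0, 16.0]
y = [2.0 * v ** 0.75 for v in x]
print(round(scale_coefficient(x, y), 3))  # -> 0.75
```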

  1. Analysis of hepatitis C viral dynamics using Latin hypercube sampling

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2012-12-01

    We consider a mathematical model comprising four coupled ordinary differential equations (ODEs) to study hepatitis C viral dynamics. The model includes the efficacies of a combination therapy of interferon and ribavirin. There are two main objectives of this paper. The first is to approximate the percentage of cases in which there is viral clearance in the absence of treatment, as well as the percentage of response to treatment at various efficacy levels. The other is to better understand and identify the parameters that play a key role in the decline of viral load and can be estimated in a clinical setting. A condition for the stability of the uninfected and the infected steady states is presented. A large number of sample points for the model parameters (which are physiologically feasible) are generated using Latin hypercube sampling. An analysis of the simulated values identifies that approximately 29.85% of cases result in clearance of the virus during the early phase of the infection. Results from the χ² and Spearman's tests on the samples indicate a distinctly different distribution for certain parameters in the cases exhibiting viral clearance under the combination therapy.
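    Latin hypercube sampling itself is compact enough to sketch: each dimension is split into n equal strata, each stratum is sampled exactly once, and the strata are paired across dimensions by random permutation. A minimal stdlib version (libraries such as SciPy provide tuned implementations):

```python
import random

def latin_hypercube(n, dim, seed=0):
    """n samples in [0,1)^dim with exactly one sample in each of the n
    equal strata along every dimension (the LHS space-filling guarantee)."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        strata = list(range(n))
        rng.shuffle(strata)                    # random pairing across dimensions
        cols.append([(k + rng.random()) / n for k in strata])
    return list(zip(*cols))                    # n points, each of length dim

pts = latin_hypercube(10, 3)
occupied = sorted(int(p[0] * 10) for p in pts)  # strata hit along dimension 0
print(occupied)  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Sampled coordinates in [0,1) would then be rescaled to each parameter's physiologically feasible range.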

  2. Automatic approach to deriving fuzzy slope positions

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi

    2018-03-01

    Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership values (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined entirely by users, as in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were sped up by parallel computing. Two study cases demonstrate that this approach can properly, conveniently and quickly derive fuzzy slope positions.

  3. Authentication and Encryption Using Modified Elliptic Curve Cryptography with Particle Swarm Optimization and Cuckoo Search Algorithm

    NASA Astrophysics Data System (ADS)

    Kota, Sujatha; Padmanabhuni, Venkata Nageswara Rao; Budda, Kishor; K, Sruthi

    2018-05-01

    Elliptic Curve Cryptography (ECC) uses two keys, a private key and a public key, and is considered a public-key cryptographic algorithm that is used both for authentication of a person and for confidentiality of data. One of the keys is used in encryption and the other in decryption, depending on usage. The private key is used in encryption by the user, and the public key is used to identify the user in the case of authentication. Similarly, the sender encrypts with the private key, and the public key is used to decrypt the message in the case of confidentiality. Choosing the private key is always an issue in all public-key cryptographic algorithms such as RSA and ECC. If tiny values are chosen at random, the security of the complete algorithm becomes an issue. Since the public key is computed from the private key, keys that are not chosen optimally can generate points at infinity. The proposed Modified Elliptic Curve Cryptography selects these values by one of two methods: the first option uses Particle Swarm Optimization and the second uses the Cuckoo Search Algorithm. The proposed algorithms were developed and tested on a sample database, and both were found to be secure and reliable. The test results show that the private key chosen is optimal, neither repetitive nor tiny, and that the public-key computations do not reach infinity.

  4. Field spectrometer (S191H) preprocessor tape quality test program design document

    NASA Technical Reports Server (NTRS)

    Campbell, H. M.

    1976-01-01

    Program QA191H performs quality assurance tests on field spectrometer data recorded on 9-track magnetic tape. The quality testing involves the comparison of key housekeeping and data parameters with historic and predetermined tolerance limits. Samples of key parameters are processed during the calibration period and wavelength cal period, and the results are printed out and recorded on an historical file tape.

  5. Option B+ for the prevention of mother-to-child transmission of HIV infection in developing countries: a review of published cost-effectiveness analyses.

    PubMed

    Karnon, Jonathan; Orji, Nneka

    2016-10-01

    To review the published literature on the cost-effectiveness of Option B+ (lifelong antiretroviral therapy) for preventing mother-to-child transmission (PMTCT) of HIV during pregnancy and breastfeeding, to inform decision making in low- and middle-income countries. PubMed, Scopus, Google Scholar and Medline were searched to identify studies of the cost-effectiveness of the World Health Organization (WHO) treatment guidelines for PMTCT. Study quality was appraised using the consolidated health economic evaluation reporting standards checklist. Eligible studies were reviewed in detail to assess the relevance and impact of alternative evaluation frameworks, assumptions and input parameter values. Five published cost-effectiveness analyses of Option B+ for the PMTCT of HIV were identified. The reported cost-effectiveness of Option B+ varies substantially, with the results of different studies implying that Option B+ is dominant (lower costs, greater benefits), cost-effective (additional benefits at acceptable additional costs) or not cost-effective (additional benefits at unacceptable additional costs). This variation is due to significant differences in model structures and input parameter values. Structural differences were observed around the estimation of programme effects on infants, HIV-infected mothers and their HIV-negative partners over multiple pregnancies, as well as assumptions regarding routine access to antiretroviral therapies. Significant differences in key input parameters were observed in transmission rates, intervention costs and effects, and downstream cost savings. Across the five model-based cost-effectiveness analyses of strategies for the PMTCT of HIV, the most comprehensive analysis reported that Option B+ is highly likely to be cost-effective. This evaluation may have been overly favourable towards Option B+ with respect to some input parameter values, but potentially important additional benefits were omitted.
Decision makers might be best advised to review this analysis, with a view to requesting additional analyses of the model to inform local funding decisions around alternative strategies for the PMTCT of HIV.
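    The dominant / cost-effective / not cost-effective distinction drawn above reduces to the incremental cost-effectiveness ratio (ICER). A minimal sketch with hypothetical costs and QALYs, not figures from the reviewed studies:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio (cost per QALY gained);
    'dominant' means the new option is both cheaper and more effective."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly > 0:
        return "dominant"
    if d_qaly <= 0:
        return "dominated or no gain"
    return d_cost / d_qaly

# Hypothetical programme costs and outcomes per mother-infant pair:
print(icer(cost_new=1200.0, qaly_new=10.5, cost_old=900.0, qaly_old=10.0))  # -> 600.0
```

Whether 600.0 per QALY counts as "acceptable" then depends on the local willingness-to-pay threshold, which is exactly where the reviewed studies diverge.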

  6. An effective medium approach to predict the apparent contact angle of drops on super-hydrophobic randomly rough surfaces.

    PubMed

    Bottiglione, F; Carbone, G

    2015-01-14

    The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, making it possible to treat the problem through a mean-field-like approach. The apparent contact angle at equilibrium is calculated in all wetting regimes from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > -1/cos θ_Y, where θ_Y is the Young's contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on a single scale (a sinusoidal surface), and it is found that, for the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for a randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor in enhancing superhydrophobicity. Moreover, it is shown that for millimetre-sized drops, the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead appears to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed which relies only upon the micro-scale properties of the rough surface.
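    The threshold r_W > -1/cos θ_Y quoted above comes from the classical Wenzel relation cos θ_app = r_W cos θ_Y: beyond that roughness the right-hand side falls below -1 and the apparent angle saturates at 180°. A minimal sketch with illustrative values (the paper's numerical treatment of random self-affine roughness is far more involved):

```python
import math

def wenzel_angle_deg(r_w, theta_y_deg):
    """Wenzel apparent contact angle: cos(theta_app) = r_W * cos(theta_Y).
    For a hydrophobic surface with r_W > -1/cos(theta_Y), the cosine would
    fall below -1, so the angle is clamped at 180 deg (perfect non-wetting)."""
    c = r_w * math.cos(math.radians(theta_y_deg))
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

print(round(wenzel_angle_deg(1.0, 110.0), 1))  # -> 110.0 (flat surface: Young's angle)
print(round(wenzel_angle_deg(4.0, 110.0), 1))  # -> 180.0 (r_W beyond -1/cos(110 deg))
```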

  7. Pharmacokinetic and pharmacodynamic study of dexmedetomidine in elderly patients during spinal anesthesia.

    PubMed

    Kuang, Yun; Zhang, Ran-Ran; Pei, Qi; Tan, Hong-Yi; Guo, Cheng-Xian; Huang, Jie; Xiang, Yu-Xia; Ouyang, Wen; Duan, Kai-Ming; Wang, Sai-Ying; Yang, Guo-Ping

    2015-12-01

    The application of dexmedetomidine in patient sedation is generally accepted, though its clinical application is limited by the lack of information detailing its specific properties among diverse patient populations. The aim of this study was to compare the pharmacokinetic and pharmacodynamic characteristics of dexmedetomidine between elderly and young patients during spinal anesthesia. 34 subjects (elderly group: n = 15; young group: n = 19) undergoing spinal anesthesia were enrolled in the present study following the inclusion/exclusion criteria detailed below. All subjects received intravenous infusion of dexmedetomidine with a loading dose of 0.5 µg x kg⁻¹ for 10 minutes and a maintenance dose of 0.5 µg x kg⁻¹ x h⁻¹ for 50 minutes. Plasma concentrations of dexmedetomidine were determined by the HPLC-MS/MS method and pharmacokinetic parameters were calculated using WinNonlin software. There was no significant difference between the elderly and young subjects in the major pharmacokinetic parameters. Among elderly subjects there was a marked gender difference in Cmax (peak plasma concentration) and tmax (time to reach Cmax), though the other pharmacokinetic parameters in this cohort did not differ significantly; among young subjects no pharmacokinetic parameter differed significantly between genders. There was no significant difference between the two groups in BIS-AUC(0-t) (the area under the bispectral index-time curve from time 0 to t hours), BISmin (the minimum bispectral index value after drug delivery), or tmin-BIS (the time at which the bispectral index reached its minimum). 
    SBP (systolic blood pressure), DBP (diastolic blood pressure), HR (heart rate), and SpO₂ (pulse oxygen saturation) changed substantially over time, but there were no statistically significant time-by-group differences in these four indicators at the three time points examined (1, 2, and 3 hours after drug administration); SBP was significantly different between the groups, but this difference declined over time, and there were no significant differences in the D-value. The observed values and D-values of DBP and HR were similar between the groups, but the observed value and D-value of SpO₂ did differ. There were 14 drug-related adverse events in the young group and 26 in the elderly group. The percentage of patients requiring intervention during surgery was 68.75% (11/16) in the elderly group and 36.84% (7/19) in the young group, with no significant difference between the two groups (p = 0.06). None of the pharmacodynamic indices, however, correlated with the key pharmacokinetic parameters (Cmax, AUC(0→t), AUC(0→∞)) of dexmedetomidine. The clearance of dexmedetomidine in elderly patients showed a declining trend compared with young patients. Interventions were more frequent in the elderly group, which also experienced more adverse events. These results suggest that elderly patients may benefit from an adjusted dexmedetomidine dose; however, further research with a larger population is required to confirm these findings.

  8. First trimester prediction of maternal glycemic status.

    PubMed

    Gabbay-Benziv, Rinat; Doyle, Lauren E; Blitzer, Miriam; Baschat, Ahmet A

    2015-05-01

    To predict gestational diabetes mellitus (GDM) or normoglycemic status using first trimester maternal characteristics. We used data from a prospective cohort study. First trimester maternal characteristics were compared between women with and without GDM. Association of these variables with glucose values at the glucose challenge test (GCT) and with subsequent GDM was tested to identify key parameters. A predictive algorithm for GDM was developed and receiver operating characteristic (ROC) statistics were used to derive the optimal risk score. We defined the normoglycemic state as one in which the GCT and all four glucose values at the oral glucose tolerance test, whenever obtained, were normal. Using the same statistical approach, we developed an algorithm to predict the normoglycemic state. Maternal age, race, prior GDM, first trimester BMI, and systolic blood pressure (SBP) were all significantly associated with GDM. Age, BMI, and SBP were also associated with GCT values. The equation constructed by logistic regression analysis and the calculated risk score yielded a sensitivity, specificity, positive predictive value, and negative predictive value of 85%, 62%, 13.8%, and 98.3%, respectively, for a cut-off value of 0.042 (ROC-AUC (area under the curve) 0.819, CI (confidence interval) 0.769-0.868). The model constructed for normoglycemia prediction demonstrated lower performance (ROC-AUC 0.707, CI 0.668-0.746). GDM prediction can be achieved at the first trimester encounter by integrating maternal characteristics and basic measurements, while prediction of normoglycemic status is less effective.
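    How a risk-score cut-off produces the four reported screening metrics can be sketched as below. The scores and labels are invented toy data, not the study's cohort; only the cut-off value 0.042 is taken from the abstract.

```python
def screen_metrics(scores, labels, cutoff):
    """Sensitivity, specificity, PPV and NPV when score >= cutoff predicts GDM."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn)

scores = [0.01, 0.03, 0.05, 0.08, 0.02, 0.30, 0.04, 0.50]  # predicted risks (toy)
labels = [0, 0, 0, 1, 0, 1, 0, 1]                          # 1 = GDM (toy)
sens, spec, ppv, npv = screen_metrics(scores, labels, cutoff=0.042)
```

    The abstract's pattern of high sensitivity and NPV with modest PPV is typical of a screening cut-off chosen to miss few true cases at the cost of false positives.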

  9. Physically-based modelling of high magnitude torrent events with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth

    2017-04-01

    High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope, where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events with physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observations of affected structures at three sites that exemplify the impact of highly destructive mudflows and floods on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour and parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on a literature review. Truncated a priori ranges of parameter values are then defined by expert elicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. 
    Finally, automated parameter estimation is performed to search comprehensively for optimal parameter combinations and associated values, which are evaluated against the observed data collected in the first stage of the investigation. 
    O'Brien, J. S., Julien, P. Y., Fullerton, W. T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261.
    Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu, C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks. Journal of Hydrology 523: 739-757.

  10. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2016-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
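    The break-even idea in this record can be sketched with the jet-form Breguet range equation, R = (V/TSFC)(L/D) ln(W_initial/W_final). The sketch below is illustrative only: every number is an assumption, as are the simplifications that transmission losses scale the effective TSFC, that drive mass adds to both weights, and that the turboelectric benefit appears as an L/D gain; it is not NASA's actual analysis.

```python
import math

def breguet_range_km(v_kmh: float, tsfc_per_h: float, l_over_d: float,
                     w_initial: float, w_final: float) -> float:
    """Jet-form Breguet range: R = (V / TSFC) * (L/D) * ln(W_initial / W_final)."""
    return (v_kmh / tsfc_per_h) * l_over_d * math.log(w_initial / w_final)

# Base aircraft: all numbers are illustrative assumptions.
BASE_RANGE_KM = breguet_range_km(v_kmh=900.0, tsfc_per_h=0.55, l_over_d=18.0,
                                 w_initial=70000.0, w_final=50000.0)

def turboelectric_range_km(specific_power_kw_per_kg: float,
                           drive_power_kw: float = 30000.0,
                           eta: float = 0.9,
                           ld_gain: float = 1.15) -> float:
    """Toy turboelectric variant: the electric drive adds mass to both weights,
    transmission losses (eta) raise the effective TSFC, and an assumed
    distributed-propulsion benefit (ld_gain) raises L/D."""
    drive_mass_kg = drive_power_kw / specific_power_kw_per_kg
    return breguet_range_km(900.0, 0.55 / eta, 18.0 * ld_gain,
                            70000.0 + drive_mass_kg, 50000.0 + drive_mass_kg)

def break_even_specific_power() -> float:
    """Smallest drive specific power (kW/kg) that preserves the base range,
    found by a coarse 1% geometric scan (range is monotone in specific power)."""
    sp = 0.1
    while turboelectric_range_km(sp) < BASE_RANGE_KM and sp < 1e6:
        sp *= 1.01
    return sp
```

    The qualitative point survives the toy numbers: below the break-even specific power, the drive mass penalty outweighs the assumed efficiency and L/D benefits and the turboelectric variant loses range.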

  11. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this presentation is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  12. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  13. What Value "Value Added"?

    ERIC Educational Resources Information Center

    Richards, Andrew

    2015-01-01

    Two quantitative measures of school performance are currently used, the average points score (APS) at Key Stage 2 and value-added (VA), which measures the rate of academic improvement between Key Stage 1 and 2. These figures are used by parents and the Office for Standards in Education to make judgements and comparisons. However, simple…

  14. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected PRMS parameter values are adjusted based on calibration targets such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  15. Management of physical health in patients with schizophrenia: practical recommendations.

    PubMed

    Heald, A; Montejo, A L; Millar, H; De Hert, M; McCrae, J; Correll, C U

    2010-06-01

    Improved physical health care is a pressing need for patients with schizophrenia. It can be achieved by means of a multidisciplinary team led by the psychiatrist. Key priorities should include: selection of antipsychotic therapy with a low risk of weight gain and metabolic adverse effects; routine assessment, recording and longitudinal tracking of key physical health parameters, ideally by electronic spreadsheets; and intervention to control CVD risk following the same principles as for the general population. A few simple tools to assess and record key physical parameters, combined with lifestyle intervention and pharmacological treatment as indicated, could significantly improve physical outcomes. Effective implementation of strategies to optimise physical health parameters in patients with severe enduring mental illness requires engagement and communication between psychiatrists and primary care in most health settings. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.

  16. Channel-parameter estimation for satellite-to-submarine continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Xie, Cailang; Huang, Peng; Li, Jiawei; Zhang, Ling; Huang, Duan; Zeng, Guihua

    2018-05-01

    This paper deals with channel-parameter estimation for continuous-variable quantum key distribution (CV-QKD) over a satellite-to-submarine link. In particular, we focus on the channel transmittances and the excess noise, which are affected by atmospheric turbulence, surface roughness, zenith angle of the satellite, wind speed, submarine depth, etc. The estimation method is based on the proposed algorithms and is applied to low-Earth orbits using a Monte Carlo approach. For light at 550 nm with a repetition frequency of 1 MHz, the effects of the estimated parameters on the performance of the CV-QKD system are assessed in simulation by comparing the secret key bit rate in the daytime and at night. Our results show the feasibility of satellite-to-submarine CV-QKD, providing an unconditionally secure approach to global networks for underwater communications.

  17. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance (which results in filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses various sources of uncertainty stemming from input, output, and parameters. The SEnKF is tested by assimilating observed carbon dioxide fluxes and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. 
    Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
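    The joint state-parameter idea (characteristics 1 and 2 of this record) can be illustrated with a minimal numerical sketch: Liu-West-style kernel shrinkage keeps the parameter ensemble from collapsing or jumping, and a stochastic EnKF analysis updates the concatenated state-parameter vector. This is a generic illustration of the approach, with an invented toy model, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def senkf_step(states, params, obs, obs_std, h, f, a=0.98):
    """One smoothed-EnKF step on a joint [state; parameter] ensemble.

    states: (N, ds), params: (N, dp). 'a' is the kernel-smoothing factor:
    parameters are shrunk toward the ensemble mean and re-perturbed with
    variance (1 - a^2) * cov, which damps sudden parameter jumps while
    keeping the ensemble spread from collapsing.
    """
    n, dp = params.shape
    # Kernel smoothing (Liu-West shrinkage) of the parameter ensemble.
    mean = params.mean(axis=0)
    cov = np.cov(params.T).reshape(dp, dp)
    smoothed = a * params + (1.0 - a) * mean
    smoothed += rng.multivariate_normal(np.zeros(dp), (1.0 - a**2) * cov, size=n)
    # Forecast states with the smoothed parameters, then form the joint vector.
    forecast = np.array([f(x, th) for x, th in zip(states, smoothed)])
    joint = np.hstack([forecast, smoothed])          # (N, ds + dp)
    # Stochastic EnKF analysis update of the joint vector.
    hx = np.array([h(x) for x in forecast])          # (N, do)
    ja, ha = joint - joint.mean(0), hx - hx.mean(0)
    cross = (ja.T @ ha) / (n - 1)
    innov_cov = (ha.T @ ha) / (n - 1) + obs_std**2 * np.eye(hx.shape[1])
    gain = cross @ np.linalg.inv(innov_cov)
    perturbed_obs = obs + rng.normal(0.0, obs_std, size=hx.shape)
    joint = joint + (perturbed_obs - hx) @ gain.T
    return joint[:, :states.shape[1]], joint[:, states.shape[1]:]
```

    Running this on a toy scalar model x_{t+1} = θ x_t with noisy observations of x pulls the parameter ensemble toward the true θ while the shrinkage factor a controls how fast its variance tightens.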

  18. The Pressure Dependence of Thermal Expansion of Core-Forming Alloys: A Key Parameter in Determining the Convective Style of Planetary Cores

    NASA Astrophysics Data System (ADS)

    Williams, Q. C.; Manghnani, M. H.

    2017-12-01

    The convective style of planetary cores is critically dependent on the thermal properties of iron alloys. In particular, the relation between the adiabatic gradient and the melting curve governs whether planetary cores solidify from their top down (when the adiabat is steeper than the melting curve) or the bottom up (the converse). Molten iron alloys, in general, have large, ambient pressure thermal expansions: values in excess of 1.2 x 10^-4/K are dictated by data derived from levitated and sessile drop techniques. These high values of the thermal expansion imply that the adiabatic gradients within early planetesimals and present day moons that have comparatively low-pressure, iron-rich cores are steep (typically greater than 35 K/GPa at low pressures): values, at low pressures, that are greater than the slope of the melting curve, and hence show that the cores of small solar system objects probably crystallize from the top-down. Here, we deploy a different manifestation of these large values of thermal expansion to determine the pressure dependence of thermal expansion in iron-rich liquids: a difficult parameter to experimentally measure, and critical for determining the size range of cores in which top-down core solidification predominates. In particular, the difference between the adiabatic and isothermal bulk moduli of iron liquids is in the 20-30% range at the melting temperature, and scales as the product of the thermal expansion, the Grüneisen parameter, and the temperature. Hence, ultrasonic (and adiabatic) moduli of iron alloy liquids, when coupled with isothermal sink-float measurements, can yield quantitative constraints on the pressure dependence of thermal expansion. For liquid iron alloys containing 17 wt% Si, we find that the thermal expansion is reduced by 50% over the first 8 GPa of compression. 
This "squeezing out" of the anomalously high low-pressure thermal expansion of iron-rich alloys at relatively modest conditions likely limits the size range over which top-down crystallizing cores are anticipated within planetary bodies.
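    The modulus-ratio relation described in this record, K_S/K_T = 1 + αγT, can be inverted to see how an ultrasonic (adiabatic) versus isothermal modulus difference constrains the thermal expansion. The numbers below are assumed for illustration and are not taken from the study.

```python
def alpha_from_moduli(ks_over_kt: float, gamma: float, t_kelvin: float) -> float:
    """Thermal expansion alpha (1/K) implied by K_S / K_T = 1 + alpha * gamma * T."""
    return (ks_over_kt - 1.0) / (gamma * t_kelvin)

# Illustrative (assumed) numbers: a 25% adiabatic/isothermal modulus
# difference at T = 1900 K with a Grueneisen parameter of 1.6 implies
# alpha on the order of 8e-5 per K.
print(alpha_from_moduli(1.25, 1.6, 1900.0))
```

    This is why the 20-30% modulus difference quoted in the abstract is such a useful lever: the ratio is measurable acoustically, and α falls straight out once γ and T are constrained.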

  19. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses at high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes, using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time of this fast simulation method is 10⁴ times shorter than that of the full GEANT4 simulation.

  20. Parameter as a Switch Between Dynamical States of a Network in Population Decoding.

    PubMed

    Yu, Jiali; Mao, Hua; Yi, Zhang

    2017-04-01

    Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes given the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role in convergence. To address this problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.
